diff --git a/PBDSecPaper.pdf b/PBDSecPaper.pdf index b60529f..08cdecd 100644 Binary files a/PBDSecPaper.pdf and b/PBDSecPaper.pdf differ diff --git a/PBDSecPaper.tex b/PBDSecPaper.tex index d4f2f82..dded758 100644 --- a/PBDSecPaper.tex +++ b/PBDSecPaper.tex @@ -58,7 +58,7 @@ \begin{document} -\title{Mapping Security to Platform-Based Design} +\title{SoK: Mapping Security to Platform-Based Design} % document status: submitted to foo, published in bar, etc. %\docstatus{Submitted to Cool Stuff Conference 2002} @@ -194,7 +194,7 @@ lower cost and first-pass success. \begin{figure} \includegraphics[width=0.5\textwidth]{./images/recursivePBD.png} -\caption{Recursive PBD~\cite{Vincentelli2007}} +\caption{Visualization of the Recursive PBD Model~\cite{Vincentelli2007}} \label{fig:RecursivePBD} \end{figure} @@ -237,7 +237,7 @@ The issue of platform-based design is not so much an over-engineering of design/ \label{Related Work} As systems move towards more complex designs and implementations (as enabled by Moore's Law and other advances in technology), making even simple changes to these designs becomes exponentially more difficult. For this reason, levels of abstraction are desired to simplify the design/evaluation phases of systems development. An example of this abstraction is the use of system-on-chip (SoC) designs to replace multi-chip solutions. This SoC abstraction is then used for a variety of tasks, ranging from arithmetic to system behavior, and it is an industry standard to use SoCs to handle encryption/security in a secure and isolated manner~\cite{Wu2006}. Middleware describes software that resides between an application and the inner workings of the system hosting the application. The purpose of the middleware is to abstract the complexities of the underlying technology from the application layer~\cite{Lang2003}; that is, to act as translation software communicating from a lower level to a higher level. Platform-Based Design (PBD) has been proposed as a methodology to allow for this lower-to-higher translation ``abstraction bridge'' that enables a ``meet-in-the-middle'' approach~\cite{Vincentelli2007}. -Work in the security realm is much more chaotic, although undertakings have been made to define the scope of security and its intricacies in a well documented manner~\cite{Benzel2005}. Other work in the security realm includes security-aware mapping for automotive systems, explorations of dependable and secure computing, how to model secure systems, and defining the general theorems of security properties~\cite{Avizienis2004, Lin2013, Zakinthinos1997, Jorgen2008, Zhou2007}. Security has many facets to it: failure, processes, security mechanisms, security principles, security policies, trust, etc. A key developing aspect of security is its standardization of encryption algorithms, methods of authentication, and communication protocol standards. +Work in the security realm is much more chaotic, although undertakings have been made to define the scope of security and its intricacies in a well-documented manner~\cite{Benzel2005}. Other work in the security realm includes security-aware mapping for automotive systems, explorations of dependable and secure computing, how to model secure systems, and defining the general theorems of security properties~\cite{Avizienis2004, Lin2013, Zakinthinos1997, Jorgen2008, Zhou2007, David2006}. Security has many facets: failure, processes, security mechanisms, security principles, security policies, trust, etc.
A key developing aspect of security is its standardization of encryption algorithms, authentication methods, and communication protocols. Component-Based Software Engineering (CBSE) is widely adopted in the software industry as the mainstream approach to software engineering~\cite{Mohammed2009, Mohammad2013}. CBSE is a reuse-based approach to defining, implementing, and composing loosely coupled independent components into systems, and it emphasizes the separation of concerns with respect to the wide-ranging functionality available throughout a given software system. Thus it can be seen that the ideals outlined in this paper are already in use; all that is needed is their re-application to a new design methodology. In a way one can see CBSE as a restricted-platform version of platform-based design. While an effective tool, it still falls short of the hardware side of systems. This reinforces this paper's point: the required elements for combining PBD and security are already in use for different purposes and simply need to be `re-purposed' for this new security-centric platform-based design. @@ -262,7 +262,7 @@ Requirement specifications are the different high-level prerequisites that a use \begin{figure*} \includegraphics[width=\textwidth,height=10cm]{./images/SecurityDesignMap.png} -\caption{Security Design Map~\cite{Benzel2005}} +\caption{Design Map of Security Design Principles~\cite{Benzel2005}} \label{fig:SecDesignMap} \end{figure*} @@ -277,11 +277,12 @@ From the development and design exploration of the function space a user should \subsection{Component Definitions} \label{Component Definitions} -Component definitions relates to the security properties that different components have based on the definitions of their behavior and functionality. Security properties, at the component definition level, include more physical properties of the varying components; reliability, confidentiality, uniqueness. If the component in question were a processor then security properties could manifest as the following: cryptographic functions/functionality, is the component a TPM, secure key storage/protection, PUF availability, uniform power consumption as a countermeasure to side-channel attacks, unexposed pins using a ball grid array (BGA), specific power drain/usage restrictions for operational functionality, anti-tampering or anti-reverse engineering properties of a component, etc. A developer/designer will need to determine just how low-level the architectural space will go (e.g. what is the lowest level being incorporated into the mapping process), along with the properties that must be noted/maintained at such a level. Furthermore, the exploration of this design space will lead to creating higher-level implementations and functionality that is derived from the lower-level components. As with any development and design procedure there will need to be tradeoff optimization that will examine any conflicting properties (e.g. size and heat dissipation requirements) when mapping the platform/architectural space toward the function space. +Component definitions relate to the security properties that different components have based on the definitions of their behavior and functionality. Security properties, at the component definition level, include the more physical properties of the varying components: reliability, confidentiality, uniqueness.
If the component in question were a processor then security properties could manifest as the following: cryptographic functions/functionality, whether the component is a TPM, secure key storage/protection, PUF availability, uniform power consumption as a countermeasure to side-channel attacks~\cite{Danger2009}, unexposed pins using a ball grid array (BGA), specific power drain/usage restrictions for operational functionality, anti-tampering or anti-reverse engineering properties of a component, etc. A developer/designer will need to determine just how low-level the architectural space will go (e.g. what is the lowest level being incorporated into the mapping process), along with the properties that must be noted/maintained at such a level. Furthermore, the exploration of this design space will lead to creating higher-level implementations and functionality that is derived from the lower-level components. As with any development and design procedure there will need to be tradeoff optimization that will examine any conflicting properties (e.g. size and heat dissipation requirements) when mapping the platform/architectural space toward the function space. As we mentioned with security requirements, converting these properties to metrics is a challenge. What does cryptographic functionality mean? Is it AES, RSA, DES, etc.? How do you quantify cryptographic strength in a way that is meaningful to an optimization mapping function? Similar questions arise with other security properties. This needs to be investigated in conjunction with further work on the mapping function; a hypothetical encoding of such metrics is sketched below. We next examine three components in further detail: TPMs, PUFs, and reference monitors. \paragraph{Trusted Platform Modules} Different groups have tackled aspects of @@ -332,7 +333,7 @@ The concern is how to evaluate the security of the system. Two methods are to e \begin{figure*} \includegraphics[width=\textwidth,height=8cm]{./images/pbdsec_mapping.png} -\caption{PBD Security Map Space} +\caption{PBD Security Map Space and Function} \label{fig:MapSpace} \end{figure*} @@ -361,7 +362,7 @@ Having touched upon the network communication considerations for modeling a secu \begin{figure} \includegraphics[width=0.5\textwidth]{./images/pbdsec_scopes.png} -\caption{Visualization of Scopes of Trustworthiness} +\caption{Visualization of Considered Scopes of Trustworthiness} \label{fig:PBDSecScopes} \end{figure} @@ -386,7 +387,7 @@ Focusing efforts on rigorous design documentation allows security concerns to be Just as there is a requirement for adequate documentation there is also the need for `procedural rigor'. The rigor of the system's life cycle process should be commensurate with its intended trustworthiness. Procedural rigor defines the depth and detail of a system's life cycle procedures. This rigor contributes to the assurance that a system is correct and free of unintended functionality in two ways. First, imposing a set of checks and balances on the life cycle process thwarts the introduction of unspecified functionality. Second, applying rigorous procedures to specifications and other design documents contributes to the ability of users to understand a system as it has been built, rather than being misled by an inaccurate system representation, thus helping to ensure that the security and functional objectives of the system have been met.
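+To make the metric question raised above concrete, the following purely illustrative Python sketch shows one hypothetical way to encode component-level security properties as numeric scores and to rank candidate components with a toy objective that a mapping function might optimize. The property names, weights, and the 0--1 scoring scale are assumptions for illustration, not an established standard.
+\begin{verbatim}
+# Hypothetical sketch: encoding component security properties as
+# metrics for a PBD mapping function. All names, weights, and scales
+# are illustrative assumptions, not an established scoring standard.
+from dataclasses import dataclass, field
+
+@dataclass
+class Component:
+    name: str
+    # Each property scored on an assumed 0.0-1.0 scale.
+    properties: dict = field(default_factory=dict)
+
+    def security_score(self, weights: dict) -> float:
+        """Weighted sum over whichever properties the component defines."""
+        return sum(w * self.properties.get(p, 0.0)
+                   for p, w in weights.items())
+
+# Assumed weighting reflecting one set of requirement priorities.
+WEIGHTS = {"crypto_strength": 0.4, "key_storage": 0.3,
+           "side_channel_resistance": 0.2, "tamper_resistance": 0.1}
+
+tpm = Component("tpm", {"crypto_strength": 0.9, "key_storage": 0.9,
+                        "side_channel_resistance": 0.6,
+                        "tamper_resistance": 0.8})
+sw_crypto = Component("sw_crypto_lib", {"crypto_strength": 0.8,
+                                        "key_storage": 0.3})
+
+# A mapping function could then prefer the higher-scoring candidate.
+best = max([tpm, sw_crypto], key=lambda c: c.security_score(WEIGHTS))
+print(best.name)  # -> "tpm" under these assumed weights
+\end{verbatim}
+Any real mapping function would of course need defensible scores (e.g. tying cryptographic strength to algorithm and key length) rather than these hand-assigned values; the sketch only shows where such metrics would plug into an optimization.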
Highly rigorous development procedures supporting high trustworthiness are costly to follow. However, the lowered cost of ownership resulting from fewer flaws and security breaches during the product maintenance phase can help to mitigate the higher initial development costs associated with a rigorous life cycle process. The reasoning for having procedural rigor and sufficient user documentation is to allow for `repeatable, documented procedures'. Techniques used to construct a component should permit the same component to be completely and correctly reconstructed at a later time. Repeatable and documented procedures support the creation of components that are identical to a component created earlier that may be in widespread use. -Virtualization will help offset the time and monetary costs of using and implementing these new methodologies/ideologies. Essentially the issue boils down to how to abstract the lower level requirements of a system (assembly/C) into a simpler high level set of tools (API/block). A new set of tools needs to be developed that can be used to build bigger and greater things out of a series of smaller more manage/customizable blocks. Flexibility of low level elements will help minimize conflict when attempting to design higher level blocks. As with any new system, there is a need for `tunable designs' that can be centered around specific aspects (e.g. power/energy efficient systems to minimize ``power cost buildup'', or security/trust centric needs). Functions, in this tool set, should be kept simple (e.g. decide output, \textbf{but} not how the output manifests). The reason behind this is that the design can remain simplistic in its [design and] operation. Virtualization tools lend to both the ideas of abstraction (choosing the simple output) and standardization/documentation (know what the outputs are, but not needing to know exactly how they manifest; just that they will)~\cite{Alagar2007}. Another option for the automated tool could trade out specific security components as an easier way to increase security without requiring re-design/re-construction of the underlying element (e.g. modularity). There is always the requirement that the overall trustworthiness of a new system must meet the standards of the security policies that `rule' the system. For these reasons a user would desire rigorous documentation that would lay out the requirements of each component, so that in the case of trying to replace faulty or damaged components there would be no loss to the overall trustworthiness of the system; while also not introducing any vulnerabilities due to the inclusion of new system components. +Virtualization will help offset the time and monetary costs of using and implementing these new methodologies/ideologies. Essentially, the issue boils down to how to abstract the lower-level requirements of a system (assembly/C) into a simpler high-level set of tools (API/block). A new set of tools needs to be developed that can be used to build bigger and greater things out of a series of smaller, more manageable/customizable blocks. Flexibility of low-level elements will help minimize conflict when attempting to design higher-level blocks. As with any new system, there is a need for `tunable designs' that can be centered around specific aspects (e.g. power/energy-efficient systems to minimize ``power cost buildup'', or security/trust-centric needs). Functions, in this tool set, should be kept simple (e.g. decide the output, \textit{but} not how the output manifests); a minimal interface sketch of this idea follows.
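+The following purely illustrative Python sketch shows one hypothetical shape such a tool-set interface could take: the abstract block fixes \textit{what} is output (here, an authentication decision), while each concrete block decides \textit{how}. The class and function names are assumptions for illustration, not an established API.
+\begin{verbatim}
+# Hypothetical sketch (illustrative names, not an established API):
+# the abstract interface fixes WHAT a block outputs; each concrete
+# block decides HOW that output manifests.
+from abc import ABC, abstractmethod
+
+class AuthBlock(ABC):
+    """Callers see only the declared output (a yes/no decision)."""
+    @abstractmethod
+    def authenticate(self, identity: str, evidence: bytes) -> bool:
+        ...
+
+class PasswordBlock(AuthBlock):
+    def __init__(self, table: dict):
+        self._table = table  # identity -> stored secret (toy example)
+    def authenticate(self, identity: str, evidence: bytes) -> bool:
+        return self._table.get(identity) == evidence
+
+class TokenBlock(AuthBlock):
+    def __init__(self, valid_tokens: set):
+        self._tokens = valid_tokens
+    def authenticate(self, identity: str, evidence: bytes) -> bool:
+        return evidence in self._tokens
+
+def admit(block: AuthBlock, who: str, proof: bytes) -> str:
+    # Higher-level logic depends only on the abstract output, so a
+    # block can be traded out (modularity) without redesigning admit().
+    return "granted" if block.authenticate(who, proof) else "denied"
+
+print(admit(PasswordBlock({"alice": b"s3cret"}), "alice", b"s3cret"))
+print(admit(TokenBlock({b"tok-123"}), "alice", b"tok-123"))
+\end{verbatim}
+Trading \texttt{PasswordBlock} for \texttt{TokenBlock} models the component-replacement (modularity) option discussed next, with no change to the higher-level block.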
The reason behind this is that the design can remain simple in both its design and operation. Virtualization tools lend themselves to both the idea of abstraction (choosing the simple output) and that of standardization/documentation (knowing what the outputs are without needing to know exactly how they manifest; just that they will)~\cite{Alagar2007}. Another option would be for the automated tool to trade out specific security components as an easier way to increase security without requiring re-design/re-construction of the underlying element (e.g. modularity). There is always the requirement that the overall trustworthiness of a new system must meet the standards of the security policies that `rule' the system. For these reasons a user would desire rigorous documentation that lays out the requirements of each component, so that when replacing faulty or damaged components there would be no loss to the overall trustworthiness of the system, while also not introducing any vulnerabilities due to the inclusion of new system components. As with any new system there should be `sufficient user documentation'. Users should be provided with adequate documentation and other information such that they contribute to, rather than detract from, a system's security. The availability of documentation and training can help to ensure a knowledgeable cadre of users, developers, and administrators. If any level of user does not know how to use a component properly, does not know the standard security procedures, or does not know the proper behavior to prevent social engineering attacks, then said user can easily introduce new system vulnerabilities. @@ -458,7 +459,7 @@ slow and arduous process, but the resulting `pay dirt' will be a new set of virtualization tools at abstraction levels with design spaces yet not truly explored at regular levels. -The manufacturer's standpoint boils down to: the design should minimize mask-making costs but be flexible enough to warrant its use for a set of applications so that production volume will be high over an extended chip lifetime~\cite{Vincentelli2007}. Companies try to drive adoptability by means of creating something that users want to interact with, but not be complicated to learn (e.g. abstraction of technology for ease of use). Accounting for ease of use can lead to vulnerabilities in security or the development of new tools. Automation is desirable from a `business' standpoint since customers/users enjoy the `set it and forget it' mentality for technology (especially new technologies). Companies/Manufacturers need positive customer/user experiences, otherwise there is no desire to extend any supplied functionality to any other devices/needs on the part of the consumer. Adoptability tends to come from user `word of mouth' praising the functionality and ease of use of new technology/methods/devices and how the developing party reacts to system failures or user-need (branching from complaints and support requests). This is exactly why industry would love for platform-based design to become a new standard; gain high adoptability. The monetary costs saved would be enough to warrant adoption of the technology, \textbf{but} the monetary costs of developing such a system (e.g. design, evaluation, validation) does not carry the same attraction (simply because companies are selfish and want to \textbf{make} money).
+The manufacturer's standpoint boils down to this: the design should minimize mask-making costs but be flexible enough to warrant its use for a set of applications so that production volume will be high over an extended chip lifetime~\cite{Vincentelli2007}. Companies try to drive adoptability by creating something that users want to interact with but that is not complicated to learn (e.g. abstraction of technology for ease of use). Accounting for ease of use, however, can lead to vulnerabilities in security or in the development of new tools. Automation is desirable from a `business' standpoint since customers/users enjoy the `set it and forget it' mentality for technology (especially new technologies). Companies/manufacturers need positive customer/user experiences; otherwise, there is no desire on the part of the consumer to extend any supplied functionality to other devices/needs. Adoptability tends to come from user `word of mouth' praising the functionality and ease of use of new technology/methods/devices, and from how the developing party reacts to system failures or user needs (branching from complaints and support requests). This is exactly why industry would love for platform-based design to become a new standard: it would gain high adoptability. The monetary costs saved would be enough to warrant adoption of the technology, \textit{but} the monetary costs of developing such a system (e.g. design, evaluation, validation) do not carry the same attraction (simply because companies are selfish and want to \textit{make} money). \begin{figure} \includegraphics[width=0.5\textwidth]{./images/pbdsec_refmon.png} @@ -492,7 +493,7 @@ properties of the design that have already been established (e.g. the useful virtualization tools for thorough exploration of design spaces for both hardware and software elements. -With these concepts in-mind, it should be obvious that security design \textbf{must} occur from the start! Unless security design is incorporated a priori, a developer can only hope to spend the rest of the development processes, and beyond, attempting to secure a system that took security as optional. Simply put, data \textbf{must} be kept safe. In addition, performing security planning from the start allows for disaster planning and any other possible `unforeseen' complications. +With these concepts in mind, it should be obvious that security design \textit{must} occur from the start! Unless security design is incorporated a priori, a developer can only hope to spend the rest of the development process, and beyond, attempting to secure a system that treated security as optional. Simply put, data \textit{must} be kept safe. In addition, performing security planning from the start allows for disaster planning and for handling any other possible `unforeseen' complications.
%references section %\bibliographystyle{plain} @@ -657,6 +658,29 @@ DRAM based Intrinsic Physical Unclonable Functions for System Level Security}, P \bibitem{Tehranipoor2010} Xiaoxiao Wang and Mohammad Tehranipoor, \emph{ Novel Physical Unclonable Function with Process and Environmental Variations}, Proceedings of the Conference on Design, Automation and Test in Europe (March 2010) +\bibitem{Yan2015} Wei Yan, Fatemeh Tehranipoor, and John A.~Chandy, \emph{ +A Novel Way to Authenticate Untrusted Integrated Circuits}, Proceedings of the IEEE/ACM International Conference on Computer-Aided Design (November 2015) + +\bibitem{TCG2009} \emph{ +How to Use the TPM: A Guide to Hardware-Based Endpoint Security}, White Paper, Trusted Computing Group (2009) + +\bibitem{David2006} David D.~Hwang, Patrick Schaumont, Kris Tiri, and Ingrid Verbauwhede, \emph{ +Securing Embedded Systems}, IEEE Security \& Privacy, Volume 4, Issue 2 (March/April 2006) + +\bibitem{Berger2006} Stefan Berger, Ram{\'o}n C{\'a}ceres, Kenneth A.~Goldman, Ronald Perez, Reiner Sailer, and Leendert van Doorn, \emph{ +vTPM: Virtualizing the Trusted Platform Module}, Proceedings of the 15th Conference on USENIX Security Symposium (2006) + +\bibitem{Aaraj2008} Najwa Aaraj, Anand Raghunathan, and Niraj K.~Jha, \emph{ +Analysis and Design of a Hardware/Software Trusted Platform Module for Embedded Systems}, ACM Transactions on Embedded Computing Systems (TECS) Volume 8, Issue 1 (December 2008) + +\bibitem{Gokhale2008} Aniruddha Gokhale, Krishnakumar Balasubramanian, Arvind S.~Krishna, Jaiganesh Balasubramanian, George Edwards, Gan Deng, Emre Turkay, Jeffrey Parsons, and Douglas C.~Schmidt, \emph{ +Model-Driven Middleware: A New Paradigm for Developing Distributed Real-Time and Embedded Systems}, Science of Computer Programming, Volume 73, Issue 1, pages 39--58 (2008) + +\bibitem{Ravi2004} Srivaths Ravi, Anand Raghunathan, Paul Kocher, and Sunil Hattangady, \emph{ +Security in Embedded Systems: Design Challenges}, ACM Transactions on Embedded Computing Systems (TECS) Volume 3, Issue 3 (August 2004) + +\bibitem{Danger2009} Jean-Luc Danger, Sylvain Guilley, Shivam Bhasin, and Maxime Nassar, \emph{ +Overview of Dual Rail with Precharge Logic Styles to Thwart Implementation-Level Attacks on Hardware Cryptoprocessors}, 3rd International Conference on Signals, Circuits and Systems (November 2009) + \end{thebibliography} \end{document}