diff --git a/PBDSecPaper.tex b/PBDSecPaper.tex
index f2d5ffd..d36e753 100644
--- a/PBDSecPaper.tex
+++ b/PBDSecPaper.tex
@@ -107,7 +107,7 @@ \section{Motivation}
 \item Hardware/Software Codesign
 \begin{itemize}
 \item There are different coding languages to handle different aspects (i.e. SW/HW) of virtualization. When dealing with the virtualization of software, the aspects of timing and concurrency semantics still fall short. These problems come from a lack of resolution and control at the lowest levels of virtualization interaction. The overwhelming hardware issue is that hardware semantics are very specific and tough to simulate. There has been development of hardware simulation languages, such as SystemC~\cite{Kreku2008}, but there has not been the development of tools to bridge the space between hardware and software simulation/virtualization. Codesign of software simulations of hardware allows high-level software abstractions to interact with low-level hardware abstractions (a minimal sketch of this idea follows this list).
- \item Future considerations (end of Jurgen Teich paper~\cite{Teich2012})
+ \item The reasoning is that the constant growth in complexity calls for simulation/virtualization of the design process.
 \begin{itemize}
 \item System-on-chip (SoC) technology will already be dominated by 100-1000 core multiprocessing on a chip by 2020~\cite{Teich2012}. Changes will affect the way companies design embedded software, and new languages and tool chains will need to emerge in order to cope with the enormous complexity. Low-cost embedded systems (daily-life devices) will undoubtedly see development of concurrent software and exploitation of parallelism. In order to cope with the desire to include the environment in the design of future cyber-physical systems, heterogeneity will most definitely continue to grow, in SoCs as well as in distributed systems of systems.
 \item A huge part of design time is already spent on verification, either in a simulative manner or using formal techniques~\cite{Teich2012}. Coverification will require an increasing proportion of the design time as systems become more complex. Progress at the electronic system level might diminish due to verification techniques that cannot cope with the modeling of errors and ways to retrieve and correct them, or, even better, prove that certain properties formulated as constraints during synthesis will hold in the implementation by construction. The verification process on one level of abstraction needs to prove that an implementation (the structural view) indeed satisfies the specification (behavioral view)~\cite{Teich2012}.
@@ -116,10 +116,14 @@ \section{Motivation}
 \item As with any design problem, if the functional aspects are indistinguishable from the implementation aspects, then it is very difficult to evolve the design over multiple hardware generations~\cite{Vincentelli2007}. It should be noted that tools already exist for low-, or high-, level system simulation. New territory is the combination of these tools to form a `hardware-to-software' virtualization tool that is both efficient and effective.
 \end{itemize}
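+The following is a minimal, purely illustrative sketch of this codesign idea: high-level software logic written against one abstract interface, with a cycle-counting hardware model plugged in underneath. The names (HwModel, BusModel, sw_task) and the fixed 4-cycle latency are hypothetical assumptions, not SystemC or any existing tool's API.
+\begin{verbatim}
+// Sketch: one abstract interface bridges a high-level software model
+// and a low-level hardware timing model (illustrative, not SystemC).
+#include <cstdint>
+#include <iostream>
+
+// Low-level abstraction: a hardware block that charges simulated cycles.
+struct HwModel {
+    virtual uint32_t read(uint32_t addr, uint64_t &cycles) = 0;
+    virtual ~HwModel() = default;
+};
+
+// A concrete model with a fixed, assumed bus latency per access.
+struct BusModel : HwModel {
+    uint32_t read(uint32_t addr, uint64_t &cycles) override {
+        cycles += 4;               // hypothetical bus latency in cycles
+        return addr ^ 0xDEADBEEF;  // placeholder data
+    }
+};
+
+// High-level abstraction: software logic that never sees which model
+// (or real device) sits underneath the interface.
+void sw_task(HwModel &hw) {
+    uint64_t cycles = 0;
+    uint32_t v = hw.read(0x1000, cycles);
+    std::cout << "read 0x" << std::hex << v << " in "
+              << std::dec << cycles << " cycles\n";
+}
+
+int main() {
+    BusModel bus;
+    sw_task(bus);  // the same sw_task could drive a richer model later
+}
+\end{verbatim}
+The point of the sketch is the single interface: refining BusModel toward more accurate hardware semantics requires no change to the software side.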
 \item Metropolis is one tool that is based in part on the concept of platform-based design. Metropolis can analyze, statically and dynamically, functional designs with models that have no notion of physical quantities, as well as mapped designs where the association of functionality to architectural services allows for evaluation of the characteristics (e.g.~latency, throughput, power, and energy) of an implementation of a particular functionality with a particular platform instance~\cite{Vincentelli2007, Metropolis}. Metropolis is but one manifestation of platform-based design as a tool. PBD has been used for the platform-exploration of synthetic biological systems, as seen in the work done by Densmore et~al.\ to create a strong and flexible tool~\cite{Densmore2009}. Other applications include design of a JPEG encoder and use for distributed automotive design~\cite{}\textbf{ADD ALL CITING OF OTHER PBD BASED WORK HERE!!!}.
+\end{itemize}
+
+\begin{itemize}
 \item The manufacturer's standpoint boils down to: ``minimize mask-making costs but flexible enough to warrant its use for a set of applications so that production volume will be high over an extended chip lifetime.''~\cite{Vincentelli2007}
 \begin{itemize}
 \item This is exactly why industry would love for platform-based design to pick up. The monetary costs saved would be enough to warrant adoption of the technology, \textbf{but} the monetary costs of developing such a system (e.g. design, evaluation, validation) do not carry the same attraction (simply because companies are selfish and want to \textbf{make} money).
 \end{itemize}
+\item Security concerns center around how to define trust/trustworthiness, how to determine the functions and behaviors of security components, and the principles, policies, and mechanisms that must be rigorously documented to standardize behavior.
 \end{itemize}

\section{Platform-based design}
@@ -130,7 +134,7 @@ \section{Platform-based design}
 \item The size of the application space that can be supported by the architectures belonging to the architecture platform represents the flexibility of the platform. The size of the architecture space that satisfies the constraints embodied in the design architecture represents the freedom that providers/manufacturers have in designing their hardware instances. Even at this level of abstraction these two aspects of hardware design can compete with each other. Ex: a manufacturer has pre-determined sizes that constrain their development space a priori. Furthermore, aspects such as design space constraints and heat distribution are a well-known source of trouble when designing embedded systems.
 \end{itemize}
 \item As with any new technology/design methodology/concept there are expensive initial costs for development, evaluation, and validation until more rigorous documentation can be made. As with any early development, it pays to think everything through first before diving into prototyping (one wants roughly a 90/10 split between planning and action, as with software development). This can be aided through the development of virtualization tools, which unfortunately come with their own development, evaluation, and validation costs. Harping on this, two main concerns for effective platform-based design development and adoption are software development and a set of tools that insulate the details of the architecture from application software (e.g. virtualization).
The more abstract the programmer's model, the richer the set of platform instances, but the more difficult it is to choose the ``optimal'' architecture platform instance and to map into it automatically~\cite{Vincentelli2002}.
-\item In PBD, the partitioning of the design into hardware and software is not the essence of system design as many think, rather it is a consequence of decisions taken at a higher level of abstraction~\cite{Vincentelli2007}. For example, a lower level consideration may be heat dissipation concerns which manifests itself as size constraints at higher levels. The types of decisions that are made at these higher levels lend to the design traits that are requirements at lower levels. These sort of complexities in cost of design are exactly why abstraction/virtualization are required/desired. Additional levels of abstraction (along with virtualization tools for design space exploration) aid in simplifying the design process. ``Critical decisions are about the architecture of the system, e.g., processors, buses, hardware accelerators, and memories, that will carry on the computation and communication tasks associated with the overall specification of the design.''~\cite{Vincentelli2007} The library of functional and communication components is the design space that we are allowed to explore at the appropriate level of abstraction~\cite{Vincentelli2007}. There are elements of recursive behavior that need to be tackled from a virtualized tool standpoint. In a PBD refinement-based design process, platforms shoould be defined to eliminate large loop iterations for affordable designs~\cite{Vincentelli2007}. This refinement should restrict the design space via new forms of regularity and structure that surrender some design potential for lower cost and first-pass sucess.
+\item In PBD, the partitioning of the design into hardware and software is not the essence of system design as many think; rather, it is a consequence of decisions taken at a higher level of abstraction~\cite{Vincentelli2007}. For example, a lower-level consideration may be heat dissipation, which manifests itself as size constraints at higher levels. The types of decisions that are made at these higher levels lend to the design traits that are requirements at lower levels. These sorts of complexities in the cost of design are exactly why abstraction/virtualization is required/desired. Additional levels of abstraction (along with virtualization tools for design space exploration) aid in simplifying the design process. The abstraction levels deal with critical decisions about the architecture of the system, e.g., the processors, buses, hardware accelerators, and memories that will carry out the computation and communication tasks associated with the overall specification of the design~\cite{Vincentelli2007, Pellizzoni2009}. In certain scenarios these critical decisions can be centered around `safety-critical' elements, as can be seen in embedded systems deployed in the medical market (e.g. pacemakers). The library of functional and communication components is the design space that we are allowed to explore at the appropriate level of abstraction~\cite{Vincentelli2007}. There are elements of recursive behavior that need to be tackled from a virtualized tool standpoint. In a PBD refinement-based design process, platforms should be defined to eliminate large loop iterations for affordable designs~\cite{Vincentelli2007}. This refinement should restrict the design space via new forms of regularity and structure that surrender some design potential for lower cost and first-pass success.
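+To make the mapping idea concrete, here is a small, purely illustrative sketch of performance-index-guided design-space exploration. The names (PlatformInstance, estimate) and all numbers are hypothetical assumptions, not drawn from Metropolis or any cited tool.
+\begin{verbatim}
+// Sketch: each platform instance exposes opaque performance indexes,
+// so a higher layer can estimate mapping cost without looking below.
+#include <iostream>
+#include <string>
+#include <vector>
+
+struct PlatformInstance {
+    std::string name;
+    double latency_per_op;  // performance indexes abstracting the
+    double power_mw;        // lower layers of this platform
+};
+
+// Cost estimate computed from the indexes alone (assumed weighting).
+double estimate(const PlatformInstance &p, double ops) {
+    return p.latency_per_op * ops + 0.01 * p.power_mw;
+}
+
+int main() {
+    // A toy library of architectural components to explore.
+    std::vector<PlatformInstance> space = {
+        {"soft-core",  8.0,  50.0},   // flexible but slow
+        {"dsp",        2.0, 120.0},
+        {"asic-accel", 0.5, 300.0},   // fast but costly
+    };
+    const double ops = 1000.0, latency_budget = 4000.0;
+    for (const auto &p : space) {
+        bool ok = p.latency_per_op * ops <= latency_budget;
+        std::cout << p.name << ": cost=" << estimate(p, ops)
+                  << (ok ? " (meets budget)\n" : " (rejected)\n");
+    }
+}
+\end{verbatim}
+Restricting exploration to such indexes is exactly the trade being surrendered for lower cost: the estimates are cheap, but only as trustworthy as the abstraction behind them.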
\end{itemize}

\begin{figure*}
@@ -140,12 +144,8 @@ \section{Platform-based design}
 \end{figure*}

 \begin{itemize}
-\item Due to the recursive nature of platform-based design, establishing the number, location, and components of intermediate ``platforms'' is the essence of PBD~\cite{Vincentelli2007}. ``In fact, designs with different requirements and specification may use different intermediate platforms, hence different layers of regularity and design-space constraints. The tradeoffs involved in the selection of the number and characteristics of platforms relate to the size of the design space to be explored and the accuracy of the estimation of the chracteristics of the solution adopted. Naturally, the larger the step across platforms, the more difficult is predicting performance, optimizing at the higher levels of abstraction, and providing a tight lower bound. In fact, the design space for this approach may actually be smaller than the one obtained with smaller steps because it becomes harder to explore meaningful design alternatives and the restriction on search impedes complete design-space exploration. Ultimately, predictions/abstractions may be so inaccurate that design optimizations are misguided and the lower bounds are incorrect.''~\cite{Vincentelli2007}
-\item ``The identification of precisely defined layers where the mapping processes take place is an important design decision and should be agreed upon at the top design management level.''~\cite{Vincentelli2007}
- \begin{itemize}
- \item ``Each layer supports a design stage where the performance indexes that characterize the architectural components provide an opaque abstraction of lower layers that allows accurate performance estimations used to guide the mapping process.''~\cite{Vincentelli2007}
- \end{itemize}
-\item ``This approach results in better reuse, because it decouples independent aspects, that would otherwise be tired, e.g., a given functional specification to low-level implementation details, or to a specific communication paradigm, or to a scheduling algorithm.''~\cite{Vincentelli2007} Coupled with well developed software virtualization tools, this would allow for better exploration of initial design spaces but does come at a cost of a rigorous, documented, standardization. ``It is very important to define only as many aspects as needed at every level of abstraction, in the interest of flexibility and rapid design-space exploration.''~\cite{Vincentelli2007} The issue of platform-based design is not so much an over-engineering of design/development, but rather a need to strike a balance between all aspects of design; much like what already happens in hardware and software design.
+\item Due to the recursive nature of platform-based design, establishing the number, location, and components of intermediate ``platforms'' is the essence of PBD~\cite{Vincentelli2007}. In fact, designs with different requirements and specifications may use different intermediate platforms, hence different layers of regularity and design-space constraints per scenario. The tradeoffs involved in the selection of the number and characteristics of platforms relate to the size of the design space to be explored and the accuracy of the estimation of the characteristics of the solution adopted.
Naturally, the larger the step across platforms, the more difficult it is to predict performance, optimize at the higher levels of abstraction, and provide a tight lower bound. In fact, the design space for this approach may actually be smaller than the one obtained with smaller steps, because it becomes harder to explore meaningful design alternatives and the restriction on the search space impedes complete design-space exploration. Ultimately, predictions/abstractions may be so inaccurate that design optimizations are misguided and the lower bounds are incorrect~\cite{Vincentelli2007}. On the other hand, taking overly small steps across the design space leads to needless complexity and unnecessary refinement of abstraction levels. The identification of precisely defined layers where the mapping processes take place is an important design decision and should be agreed upon at the top design management level or during the initial early design phase of the system~\cite{Vincentelli2007}. As can be seen in Figure~\ref{fig:RecursivePBD}, each layer supports a design stage where the performance indexes that characterize the architectural components provide an opaque abstraction of lower layers that allows accurate performance estimations used to guide the mapping process~\cite{Vincentelli2007}. Standardization, or at least some documentation of abstraction rules, is what will greatly help to minimize the costs of using platform-based design. It is here that one clearly sees the recursive nature of this design methodology and, hopefully, the advantages of wide adoption of platform-based design.
+\item This approach results in better reuse, because it decouples independent aspects that would otherwise be tied, e.g., a given functional specification to low-level implementation details, to a specific communication paradigm, or to a scheduling algorithm~\cite{Vincentelli2007}. Coupled with well-developed software virtualization tools, this would allow for better exploration of initial design spaces, but it does come at the cost of rigorous, documented standardization. ``It is very important to define only as many aspects as needed at every level of abstraction, in the interest of flexibility and rapid design-space exploration.''~\cite{Vincentelli2007} The issue of platform-based design is not so much an over-engineering of design/development, but rather a need to strike a balance between all aspects of design, much like what already happens in hardware and software design.
 \end{itemize}

\section{Security}
@@ -153,6 +153,7 @@ \section{Security}
 \begin{itemize}
 \item Security evolves with time and understanding, as does knowledge of encryption and other security encapsulation techniques. There are considerations that are accounted for from a software standpoint: the capabilities of the software, the speed of the algorithms/actions that take place, and how unique (the level of uniqueness) a given solution is. Similarly, there are hardware considerations as well: the tolerance of the chip elements in use, their capabilities, power distribution over the entire hardware design, signal lag between different elements, and cross-talk caused by communication channels. Different groups have tackled aspects of these considerations.
 \item The Trusted Computing Group (TCG) created the Trusted Platform Module (TPM), which is able to validate its own functionality and detect whether it has been tampered with. This is, in essence, a method of `self-analysis'; thus the groundwork for a `self-analyzing' security component is already in place. This self-checking can be used as a method for allowing the larger system of security components to locate damaged/error-prone components so that they can be replaced/fixed, thus raising the overall trustworthiness of the system of components. Therefore TPMs are a good stepping stone on the path toward better security, but TPM/TCG has not found ``the answer'' yet~\cite{Sadeghi2008}.
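+The following toy sketch illustrates only the `self-analysis' idea: a component re-measures itself and compares the result against a known-good value. Real TPMs use platform configuration registers and cryptographic hash functions; the FNV-1a hash and all names here are illustrative stand-ins.
+\begin{verbatim}
+// Sketch: a component measures its own code/data and flags tampering.
+#include <cstdint>
+#include <iostream>
+#include <vector>
+
+// FNV-1a, a non-cryptographic hash standing in for a real digest.
+uint64_t fnv1a(const std::vector<uint8_t> &data) {
+    uint64_t h = 1469598103934665603ULL;
+    for (uint8_t b : data) { h ^= b; h *= 1099511628211ULL; }
+    return h;
+}
+
+int main() {
+    std::vector<uint8_t> firmware = {0x13, 0x37, 0xBE, 0xEF};
+    const uint64_t expected = fnv1a(firmware); // recorded at provisioning
+
+    // Later, e.g. at boot: re-measure and compare to the golden value.
+    if (fnv1a(firmware) == expected)
+        std::cout << "measurement ok: component reports trustworthy\n";
+    else
+        std::cout << "tampering detected: flag component for repair\n";
+}
+\end{verbatim}
+A surrounding system of components could poll such self-reports to locate and replace error-prone elements, as described above.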
+\item Monitors can be used for distributed system security, together with different methods for determining the trustworthiness of actions (e.g. the Byzantine Generals Problem). These methods are used for determining the most sane/trustworthy machine out of a distributed network, so that users know they are interacting with the `freshest' data at hand. While the realm of distributed systems has found good methods/principles to govern its actions/updates of systems, these methods are by no means `final' or perfect; rather, they will develop with time.
 \item The definition of ``trustworthiness'' that is chosen to help color this perspective on security is as defined in the Benzel et~al.\ paper.
 \begin{quotation}
 ``\textit{Trustworthy} (noun): the degree to which the security behavior of the component is demonstrably compliant with its stated functionality (\textit{e.g., trustworthy component}). \\ \textit{Trust}: (verb) the degree to which the user or a component depends on the trustworthiness of another component. For example, component A \textit{trusts} component B, or component B is \textit{trusted} by component A. Trust and trustworthiness are assumed to be measured on the same scale.''~\cite{Benzel2005}
@@ -176,11 +177,32 @@ \section{Security}
 \begin{itemize}
 \item Local (self)
 \begin{itemize}
- \item The local scope encompasses a security elements own abilities, trustworthiness, and the dependencies of that element (e.g. least common mechanisms, partially ordered dependencies, reduced complexity, minimized sharing (and the conflict between this as least common mechanisms).
+ \item The local scope encompasses a security element's own abilities, trustworthiness, and the dependencies of that element (e.g. least common mechanisms, reduced complexity, minimized sharing, and the conflict between minimized sharing and least common mechanisms).
+ \item Failure - a condition in which, given a specifically documented input that conforms to specification, a component or system exhibits behavior that deviates from its specified behavior.
+ \item Module/Database - a unit of computation that encapsulates a database and provides an interface for the initialization, modification, and retrieval of information from the database. The database may be either implicit, e.g. an algorithm, or explicit.
+ \item Process(es) - a program (or programs) in execution.
+ \item Least Common Mechanisms - If multiple components in the system require the same function or mechanism, then there should be a common mechanism that can be used by all of them; thus the various components do not have separate implementations of the same function, but rather the function is created once (e.g. device drivers, libraries, OS resource managers). The benefit of this is to minimize the complexity of the system by avoiding unnecessary duplicate mechanisms. Furthermore, modifications to a common function can be performed (only) once, and the impact of the proposed modifications can be more easily understood in advance, as the sketch below illustrates.
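+A minimal sketch of the least-common-mechanism idea (the names and the sanitizer logic are hypothetical, chosen only to show the structure):
+\begin{verbatim}
+// Sketch: two components funnel through one shared, audited mechanism
+// instead of each carrying a private reimplementation of it.
+#include <cctype>
+#include <iostream>
+#include <string>
+
+namespace common {
+// The single shared mechanism; a fix made here reaches every
+// dependent component at once, and there is one body of code to verify.
+std::string sanitize(const std::string &in) {
+    std::string out;
+    for (char c : in)
+        if (std::isalnum(static_cast<unsigned char>(c))) out += c;
+    return out;
+}
+}  // namespace common
+
+struct Logger {
+    void log(const std::string &s) {
+        std::cout << "log: " << common::sanitize(s) << '\n';
+    }
+};
+
+struct NetSender {
+    void send(const std::string &s) {
+        std::cout << "send: " << common::sanitize(s) << '\n';
+    }
+};
+
+int main() {
+    Logger l; NetSender n;
+    // Both call paths share one mechanism; note that this is exactly
+    // the tension with minimized sharing discussed below.
+    l.log("user<script>");
+    n.send("user<script>");
+}
+\end{verbatim}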
+ \item Reduced Complexity - The simpler a system is, the fewer vulnerabilities it will have. From a security perspective, the benefit of this simplicity is that it is easier to understand whether an intended security policy has been captured in the system design. At a security model level, it can be easier to determine whether the initial system state is secure and whether subsequent state changes preserve system security. Given the current state of the art of systems, the conservative assumption is that every complex system will contain vulnerabilities and that it will be impossible to eliminate all of them, even in the most highly trustworthy of systems.
+ \item Minimized Sharing - No computer resource should be shared between components or subjects (e.g. processes, functions, etc.) unless it is necessary to do so. To protect user-domain information from active entities, no information should be shared unless that sharing has been explicitly requested and granted. Encapsulation is a design discipline or compiler feature for ensuring there are no extraneous execution paths for accessing private subsets of memory/data. Minimized sharing influenced by common mechanisms can lead to designs being made reentrant/virtualized, so that each depending component has its own virtual private data space; that is, resources are partitioned into discrete, private subsets for each dependent component. A further consideration of minimized sharing is to avoid covert timing channels in which the processor is one of the shared components. In other words, the scheduling algorithms must ensure that each depending component is allocated a fixed amount of time to access/interact with a given shared space/memory. Developing a technique for controlled sharing requires that the execution durations of shared mechanisms (or the mechanisms and data structures that determine those durations) be explicitly stated in the design specification documentation, so that the effects of sharing can be verified and evaluated.
+ \item As the reader has undoubtedly seen, there are conflicts within just these simple concepts and principles for the local scope of trustworthiness of a security component. The principles of least common mechanism and minimized sharing directly oppose each other, while still being aspects of the same system. Minimizing system complexity through the use of common mechanisms does lower the hardware space requirements for an architectural platform, but at the same time it also maximizes the amount of sharing occurring within the security component. A designer can use one of the development techniques already mentioned to help balance these two principles, but in the end the true design question is: what information is safe to share and what information must be protected under lock and key?
 \end{itemize}
 \item Network (communication)
 \begin{itemize}
- \item How to elements communicate? Rigorous definition of input/output nets for componenets and the methods for communication (buses, encryption, multiplexing). Self-Reliant Trustworthiness, Trusted Communication Channels, Secure System Evolution. Dealing with Failure, Secure Failure.
+ \item How do elements communicate? This requires rigorous definition of the input/output nets of components and of the methods for communication (buses, encryption, multiplexing).
+ \item Service - processing or protection provided by a component to users or other components. E.g., communication service (TCP/IP), security service (encryption, firewall). A small sketch of a layered service follows.
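+The sketch below shows a communication service layered over an untrusted channel, with its security assumption made explicit in the interface. The XOR `cipher' and every name here are illustrative placeholders under assumed conditions, not a real protocol:
+\begin{verbatim}
+// Sketch: components depend on the service (and its stated security
+// property), never on the raw wire underneath it.
+#include <cstdint>
+#include <iostream>
+#include <string>
+
+struct Channel {  // the untrusted physical channel
+    virtual void transmit(const std::string &bytes) = 0;
+    virtual ~Channel() = default;
+};
+
+struct PlainWire : Channel {
+    void transmit(const std::string &bytes) override {
+        std::cout << "on the wire: " << bytes << '\n';
+    }
+};
+
+// The service supplies the property the wire lacks; XOR stands in for
+// real end-to-end encryption with a key negotiated out of band.
+struct SecureChannel {
+    Channel &wire;
+    uint8_t key;
+    void send(const std::string &msg) {
+        std::string ct = msg;
+        for (char &c : ct) c = static_cast<char>(c ^ key);
+        wire.transmit(ct);  // ciphertext, not plaintext, leaves here
+    }
+};
+
+int main() {
+    PlainWire wire;
+    SecureChannel ch{wire, 0x5A};
+    ch.send("policy update");
+}
+\end{verbatim}
+Documenting which property the service guarantees, and which it merely assumes of the wire, is exactly the rigorous channel specification called for in the next item.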
+ \item Principle of Secure Communication Channels - When composing a system where there is a threat to the communication between components, each communication channel must be trustworthy to a level commensurate with the security dependencies it supports; in other words, with how much it is trusted by other components to perform its security functions.
+ \begin{itemize}
+ \item Several techniques can be used to mitigate threats to the communication channels in use. Use of a channel may be restricted by protecting access to it with a suitable access control mechanism (e.g. a reference monitor located beneath or within each component). \textit{\textbf{NOTE: MAKE SURE TO MENTION REFERENCE MONITOR BY THIS POINT}} End-to-end communication technologies (e.g. encryption) may be used to eliminate security threats in the communication channel's physical environment. Once again, the intrinsic characteristics assumed for and provided by the channel must be specified with such documentation that it is possible for system designers to understand the nature of the channel as initially constructed and to assess the impact of any subsequent changes to the system. Without this rigorous documentation and standardization, the trustworthiness of the communications between security elements cannot be assured.
+ \end{itemize}
+ \item Self-Reliant Trustworthiness - Systems should minimize their reliance on external components for system trustworthiness. A corollary to this relates to the ability of a component to operate in isolation and then resynchronize with other components when it rejoins them. In other words, if a system were required to maintain a connection with another external entity in order to maintain its trustworthiness, then that very system would be vulnerable to drops in its connection/communication channels. Thus, from a network standpoint, a system should be trustworthy by default, with the external connection being used as a supplement to the component's function. \textit{\textbf{Add information about Partially Ordered Dependencies}}
+ \item Secure System Evolution - A system should be built to facilitate the maintenance of its security properties in the face of changes to its interface, functionality, structure, or configuration. Changes may include upgrades to the system and maintenance activities.
+ \begin{itemize}
+ \item Benefits include: reduced lifecycle costs for the vendor, reduced costs of ownership for the user, and improved system security.
+ \item \textbf{Note:} Most systems can anticipate maintenance, upgrades, and changes to configuration. If a component is constructed using the precepts of modularity and information hiding, it is easier to replace without disrupting the rest of the system.
+ \item The system designer needs to take into account the impact that dynamic reconfiguration will have on the secure state of the system. Just as it is easier to build trustworthiness into a system from the outset (and for highly trustworthy systems, impossible to achieve without doing so), it is easier to plan for change than to be surprised by it~\cite{Benzel2005}.
+ \end{itemize}
+ \item Secure Failure - A failure in a system function or mechanism should not lead to a violation of security policy. Ideally, the system should be capable of detecting failure at any stage of operation (initialization, normal operation, shutdown, maintenance, error detection and recovery) and taking appropriate steps to ensure that security policies are not violated. Touching on the earlier idea of secure system evolution, the reconfiguration function of the system should be designed to ensure continuous enforcement of security policy during the various phases of reconfiguration. Once a failed security function is detected, the system may reconfigure itself to circumvent the failed component, while maintaining security, and still provide all or part of the functionality of the original system, or it may completely shut itself down to prevent any (further) violation of security policies. Another method for achieving this is to roll back to a secure state (which may be the initial state) and then either shut down or replace the service/component that failed with an orthogonal or replicated mechanism. A minimal fail-closed sketch follows.
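+A minimal sketch of failing securely, assuming a hypothetical policy mechanism (check_policy) that can itself fail: when no decision can be obtained, access is denied rather than granted.
+\begin{verbatim}
+// Sketch: fail closed. A missing policy decision is treated as a
+// denial, and recovery (rollback/replacement) happens out of band.
+#include <iostream>
+#include <optional>
+
+// Returns the policy decision, or nothing if the mechanism failed.
+std::optional<bool> check_policy(int subject, int object) {
+    if (object < 0) return std::nullopt;  // simulated component failure
+    return subject == 42;                 // toy policy rule
+}
+
+bool access_allowed(int subject, int object) {
+    std::optional<bool> decision = check_policy(subject, object);
+    if (!decision) {
+        std::cerr << "policy mechanism failed: denying by default\n";
+        return false;  // deny rather than grant on failure
+    }
+    return *decision;
+}
+
+int main() {
+    std::cout << access_allowed(42, 7)  << '\n';  // normal grant: 1
+    std::cout << access_allowed(42, -1) << '\n';  // failure -> deny: 0
+}
+\end{verbatim}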
+ \item Dealing with Failure - The failure of a component may or may not be detectable to the components using it. For this reason, components should fail in a state that denies rather than grants access. A designer could employ multiple protection mechanisms (whose features can be significantly different) to reduce the possibility of attack repetition, but it should be noted that redundancy techniques may increase resource usage and adversely affect system performance. Instead, the atomicity properties of a service/component should be well documented and characterized so that a component availing itself of the service can detect and handle interruption events appropriately, similar to the `self-analysis' function that TPMs have.
 \end{itemize}
 \item Within a distributed system
 \begin{itemize}