diff --git a/PBDSecPaper.tex b/PBDSecPaper.tex index b4f68d7..d4f2f82 100644 --- a/PBDSecPaper.tex +++ b/PBDSecPaper.tex @@ -82,7 +82,7 @@ \begin{abstract} There is a distinct lack of incorporating security, as a preliminary, when performing any systems development and design. There are tools and methodologies that tackle aspects of this issue, but nothing that brings a `complete' solution. Platform-Based Design (PBD) centers around minimizing costs of -developing and manufacturing new components along with changing the +developing and manufacturing systems comprised of multiple components along with changing the landscape of this ecosystem, through the use of a ``meet-in-the-middle'' methodology. Successive refinements of specifications meet with abstractions of potential implementations; @@ -114,14 +114,15 @@ Security design, development, verification, and evaluation is still a relatively new and unexplored space. Because of this there is a constant growth and evolution of security protocols and standards, which requires a thorough exploration of the security design space. -It is the belief of this paper that the best model for focusing effort -and development towards is a platform-based design for security +It is the belief of this paper that using platform-based design methodologies can help tremendously in security development. The levels of abstraction aid in virtualization design, the overarching concept of mapping platforms to instances (and vice-versa) aids in early development stages, and the need for rigorous documentation and meticulous following of standards are concepts that not only stem from platform-based design but greatly -lend to the needs of security design and development. The intent of this paper is to show the reader +lend to the needs of security design and development. 
+ +The intent of this paper is to show how the needs and benefits of platform-based design and security development are closely aligned along the same concepts of rigorous design, virtualization/automation of tools, and the needs for @@ -136,7 +137,7 @@ systems). Standardization is the process of developing and implementing technical standards. Standardization can help to maximize compatibility, interoperability, safety, repeatability, or quality; it can also facilitate commoditization of formerly custom processes. Standards appear in all sorts of domains. For the IC domain standardization manifests as a flexible integrated circuit where customization for a particular application is achieved by programming one or more components on the chip (e.g. virtualization). PC makers and application software designers develop their products quickly and efficiently around a standard `platform' that emerged over time. As a quick overview of these standards: the x86 ISA makes it possible to reuse the OS \& SW applications, a fully specified set of buses (ISA, USB, PCI) allows for use of the same expansion boards and ICs for different products, and a full specification of a set of IO devices (e.g. keyboard, mouse, audio and video devices) does the same for peripherals. The advantage of the standardization of the PC domain is that software can also be developed independently of new hardware availability, thus offering a real hardware-software codesign approach. If the instruction set architecture (ISA) is kept constant (e.g. standardized) then software porting, along with future development, is far easier~\cite{Vincentelli2002}. In a `System Domain' lens, standardization is the aspect of the capabilities a platform offers to quickly develop new applications (adaptability). In other words, this requires a distillation of the principles so that a rigorous methodology can be developed and profitably used across different design domains. So why is standardization useful?
Standardization allows for manufacturers, system developers, and software designers to all work around a single, accepted platform. It is understood what the capabilities of the hardware are, what the limitations of system IO will be, along with what the methods/protocols of communication will be. Even these preset aspects of the `standard' have their own `contractual obligations' of how they will function, what their respective `net-lists' are, and where the limitations of such standards lie. While the act of creating a standard takes time, not only due to the time of development but also due to the speed of wide adoption, it is a necessary step. Without standardization, it becomes far more difficult to create universally used complex systems, let alone validate their correctness and trustworthiness. This is how one is able to change the current paradigm to a new standard model. -What is gained, and lost, when shifting the method of design to a more platform-based design/security centric model? After all development of these tools implies that there is a need to change the focus and methods of design/development~\cite{Vincentelli2007}. The advantage to this method is ease of changes in development and searching of design spaces during early design and development of those systems. For PBD this means that a company/manufacturer is able to lower the cost and time spent in the `early development phase'; the time spent when first drawing up system designs prior to initial prototyping. While the advantage of overcoming multiple prototyping re-designs with a series of virtualization tools will cut down on development time, it can not be forgotten that rigorous standards and documentation need to be developed for this sort of advantage. Once this hurdle is passed then the advantages of such a system can be reaped. For security this means that components can be standardized from a local, network, and distributed standpoint.
These different scopes of security will be tackled in Section~\ref{Mapping}. The main point here is that with standardization comes a form of `contract' that all parties can expect will be followed, thus allowing for different manufacturers/developers to create different aspects of a system, while knowing that these different elements will come together in the final product; much like the advantages of platform-based design. The disadvantage of using a new method is the loss in time for developing the standards, rigors, and documentation that would be used as new guide lines for the industry. These sorts of costs have been felt while developing any of the current standards of security; most definitely with the development of new algorithms for encryption. Further more, security vulnerabilities are found mainly due to incorrect implementations or due to incorrect assumptions about functionality; both of which are more easily avoidable with the use of rigorous planning and design. The other disadvantage is in the adoptability of these new methods. Ideally all manufacturers would adopt a new commodity; rather than components, `design combinations' would be the new commodity that manufacturers would peddle (e.g. instead of saying ``my components are the best'' the dialog moves towards ``My ability to combine these components is the best''). +What is gained, and lost, when shifting the method of design to a more platform-based design/security centric model? After all, development of these tools implies that there is a need to change the focus and methods of design/development~\cite{Vincentelli2007}. The advantage to this method is ease of changes in development and searching of design spaces during early design and development of those systems. For PBD this means that a company/manufacturer is able to lower the cost and time spent in the `early development phase'; the time spent when first drawing up system designs prior to initial prototyping. 
While the advantage of overcoming multiple prototyping re-designs with a series of virtualization tools will cut down on development time, it cannot be forgotten that rigorous standards and documentation need to be developed for this sort of advantage. Once this hurdle is passed then the advantages of such a system can be reaped. For security this means that components can be standardized from a local, network, and distributed standpoint. These different scopes of security will be tackled in Section~\ref{Mapping}. The main point here is that with standardization comes a form of `contract' that all parties can expect will be followed, thus allowing for different manufacturers/developers to create different aspects of a system, while knowing that these different elements will come together in the final product; much like the advantages of platform-based design. The disadvantage of using a new method is the loss in time for developing the standards, rigors, and documentation that would be used as new guidelines for the industry. These sorts of costs have been felt while developing any of the current standards of security; most definitely with the development of new algorithms for encryption. Furthermore, security vulnerabilities are found mainly due to incorrect implementations or due to incorrect assumptions about functionality; both of which are more easily avoidable with the use of rigorous planning and design. The other disadvantage is in the adoption of these new methods. Ideally all manufacturers would adopt a new commodity; rather than components, `design combinations' would be the new commodity that manufacturers would peddle (e.g. instead of saying ``my components are the best'' the dialog moves towards ``my ability to combine these components is the best''). 
But as with the development of any tool, and more so when expecting said tools to be more publicly used, there is a deep need @@ -147,14 +148,14 @@ and implementation, without proper outlining of behavior and function there is greater possibility for erroneous use thus leading to greater vulnerability of the overall system. -It is the aim of this paper to first review platform-based design (Section~\ref{Platform-based design}) and its nuances, then move towards examining previous and related work to PBD and security design/development (Section~\ref{Related Work}). Next this paper will discuss the proposed approach (Section~\ref{Proposed Approach}) along with the different spaces and mapping that would occur between them, and lastly illustrate to the reader considerations of the proposed model (Section~\ref{Model Review}) and why platform-based design should be the basis for security design and development (Section~\ref{Conclusion}). +It is the aim of this paper to first introduce platform-based design (Section~\ref{Platform-based design}) and its nuances, then move towards examining previous and related work to PBD and security design/development (Section~\ref{Related Work}). Next this paper will discuss the proposed approach (Section~\ref{Proposed Approach}) along with the different spaces and mapping that would occur between them, and lastly illustrate considerations of the proposed model (Section~\ref{Model Review}) and why platform-based design should be the basis for security design and development (Section~\ref{Conclusion}). \section{Platform-based design} \label{Platform-based design} Platform-based design is a methodology where function spaces are mapped to platform spaces (e.g. architectural space) or vice-versa. Generally the mapping of a function instance towards a platform instance leads to a larger subset of platform instances branching from a single function instance; as can be seen in Figure~\ref{fig:RecursivePBD}. 
Of course this relationship works in the reverse manner as well; a single platform instance maps to a subset of function instances. Although one can think of the mapping process as happening in one direction, or the other, platform-based design is a ``meet-in-the-middle'' approach. A designer/developer works towards merging two platforms (e.g. software functionality and hardware architectures) where the resulting `middle-ground', or mapping, can be thought of as its own `mapping instance'; merging an abstract higher level function with an abstracted lower level platform. More on the recursive nature of PBD will be explored later in this section; for now we will examine the benefits of using such a methodology. -The monetary considerations of platform-based design include system re-use, flexibility of elements, and re-programmability. As system complexity grows the costs of producing chips becomes more expensive, thus pushing for the systems that are produced to be ``multi-capable'' (e.g. re-use value)~\cite{Keutzer2000}. The greatest challenge in the hardware design space is placement and arrangement of components based on an array of differing variables. Design of architecture platforms is a result of trade-off in complex space, and worse yet some aspects directly compete/contradict each other~\cite{Vincentelli2002, Gruttner2013}. The size of the application space that can be supported by the architectures belonging to the architecture platform represents the flexibility of the platform. The size of the architecture space that satisfies the constraints embodied in the design architecture is what providers/manufacturers have in designing their hardware instances. Even at this level of abstraction these two aspects of hardware design can compete with each other. Ex: A manufacturer has pre-determined sizes that constrain their development space a priori. 
Further more, aspects such as design space constraints and heat distribution are a well known source of trouble when designing embedded systems. +The monetary considerations of platform-based design include system re-use, flexibility of elements, and re-programmability. As system complexity grows, the cost of producing chips increases, thus pushing for the systems that are produced to be ``multi-capable'' (e.g. re-use value)~\cite{Keutzer2000}. The greatest challenge in the hardware design space is placement and arrangement of components based on an array of differing variables. Design of architecture platforms is a result of trade-offs in a complex space, and worse yet some aspects directly compete/contradict each other~\cite{Vincentelli2002, Gruttner2013}. The size of the application space that can be supported by the architectures belonging to the architecture platform represents the flexibility of the platform. The size of the architecture space that satisfies the constraints embodied in the design architecture is what providers/manufacturers have in designing their hardware instances. Even at this level of abstraction these two aspects of hardware design can compete with each other. E.g., a manufacturer has pre-determined sizes that constrain their development space a priori. Furthermore, aspects such as design space constraints and heat distribution are a well-known source of trouble when designing embedded systems. In PBD, the partitioning of the design into hardware and software is not the essence of system design as many think, rather it is a @@ -227,9 +228,9 @@ components provide an opaque abstraction of lower layers that allows accurate performance estimations to be used to guide the mapping process~\cite{Vincentelli2007}. 
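The cost-guided mapping described above can be sketched in miniature. This is an illustrative toy only: the component names, estimates, and weights are invented, and a real PBD flow would use the opaque performance abstractions of the platform layer rather than hand-written numbers.

```python
# Toy sketch of mapping a function instance onto one of several candidate
# platform instances, guided only by abstract performance estimates.
# All names and numbers below are hypothetical.

candidates = {
    "soft_core": {"latency_ms": 9.0, "power_mw": 120},
    "hard_ip":   {"latency_ms": 1.5, "power_mw": 300},
    "dsp_slice": {"latency_ms": 3.0, "power_mw": 180},
}

def cost(est, w_latency=1.0, w_power=0.01):
    # Weighted trade-off between competing aspects (latency vs. power),
    # mirroring the competing design-space aspects discussed above.
    return w_latency * est["latency_ms"] + w_power * est["power_mw"]

def map_function(candidates, max_power_mw):
    # Keep only instances that satisfy the constraint, then minimize cost.
    feasible = {k: v for k, v in candidates.items()
                if v["power_mw"] <= max_power_mw}
    return min(feasible, key=lambda k: cost(feasible[k]))

print(map_function(candidates, max_power_mw=200))   # dsp_slice under this cap
```

Tightening or loosening the power constraint changes which platform instance the same function instance maps to, which is the design-space exploration the methodology is meant to make cheap.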
-\begin{quotation} -``It is very important to define only as many aspects as needed at every level of abstraction, in the interest of flexibility and rapid design-space exploration.''~\cite{Vincentelli2007} -\end{quotation} +%\begin{quotation} +%``It is very important to define only as many aspects as needed at every level of abstraction, in the interest of flexibility and rapid design-space exploration.''~\cite{Vincentelli2007} +%\end{quotation} The issue of platform-based design is not so much an over-engineering of design/development, but rather a need to strike a balance between all aspects of design; much like what already happens in hardware and software design. \section{Related Work} @@ -240,7 +241,7 @@ Work in the security realm is much more chaotic, although undertakings have been Component-Based Software Engineering (CBSE) is widely adopted in the software industry as the mainstream approach to software engineering~\cite{Mohammed2009, Mohammad2013}. CBSE is a reuse-based approach of defining, implementing and composing loosely coupled independent components into systems and emphasizes the separation of concerns in respect of the wide-ranging functionality available throughout a given software system. Thus it can be seen that the ideals being outlined in this paper are already in use, all that is needed is their re-application to a new design methodology. In a way one can see CBSE as a restricted-platform version of platform-based design. While an effective tool, it still falls short of the hardware side of systems. This furthers this paper's point, that the required elements for combining PBD and security are already in use for different purposes and simply need to be `re-purposed' for this new security centric platform-based design. -Hardware/Software codesign is crucial for bridging together the software and hardware aspects of a new system in an efficient and effective manner. There are different coding languages to handle different aspects (i.e. 
SW/HW) of virtualization. When dealing with the virtualization of software the aspects of timing and concurrency semantics still fall short. These problems come from a lack of resolution and control at the lowest levels of virtualization interaction. The overwhelming hardware issue is that hardware semantics are very specific and tough to simulate. There has been the development of hardware simulation languages, such as SystemC~\cite{Kreku2008}, but there has not been the development of tools to bridge the space between hardware and software simulation/virtualization. Codesign of software simulations of hardware allows for development of high level software abstraction to interact with low level hardware abstraction. The reasoning being the constant growth in complexity calls for simulation/virtualization of the design process. System-on-chip (SoC) technology will be already dominated by 100-1000 core multiprocessing on a chip by 2020~\cite{Teich2012}. Changes will affect the way companies design embedded software and new languages, and tool chains will need to emerge in order to cope with the enormous complexity. Low-cost embedded systems (daily-life devices) will undoubtably see development of concurrent software and exploitation of parallelism. In order to cope with the desire to include environment in the design of future cyber-physical systems, a system's heterogeneity will most definitely continue to grow as well in SoCs as in distributed systems of systems. A huge part of design time is already spent on the verification, either in a simulative manner or using formal techniques~\cite{Teich2012}. ``Indeed, market data indicate[s] that more than 80\% of system development efforts are now in software versus hardware. 
This implies that an effective platform has to offer a powerful design environment for software to cope with development cost.''~\cite{Vincentelli2002} Coverification will require an increasing proportion of the design time as systems become more complex. Progress at the electronic system level might diminish due to verification techniques that cannot cope with the modeling of errors and ways to retrieve and correct them, or, even better, prove that certain properties formulated as constraints during synthesis will hold in the implementation by construction. The verification process on one level of abstraction needs to prove that an implementation (the structural view) indeed satisfies the specification (behavioral view)~\cite{Teich2012}. The uncertainty of environment and communication partners of complex interacting cyber-physical systems, runtime adaptivity will be a must for guaranteeing the efficiency of a system. Due to the availability of reconfigurable hardware and multicore processing, which will also take a more important role in the tool chain for system simulation and evaluation, online codesign techniques will work towards a standard as time moves forward. As with any design problem, if the functional aspects are indistinguishable from the implementation aspects, then it is very difficult to evolve the design over multiple hardware generations~\cite{Vincentelli2007}. It should be noted that there are tools that already exists for low, or high, system simulation. New territory is the combination of these tools to form a `hardware-to-software' virtualization tool that is both efficient and effective. +%Hardware/Software codesign is crucial for bridging together the software and hardware aspects of a new system in an efficient and effective manner. There are different coding languages to handle different aspects (i.e. SW/HW) of virtualization. 
When dealing with the virtualization of software, the aspects of timing and concurrency semantics still fall short. These problems come from a lack of resolution and control at the lowest levels of virtualization interaction. The overwhelming hardware issue is that hardware semantics are very specific and tough to simulate. There has been the development of hardware simulation languages, such as SystemC~\cite{Kreku2008}, but there has not been the development of tools to bridge the space between hardware and software simulation/virtualization. Codesign of software simulations of hardware allows for development of high level software abstraction to interact with low level hardware abstraction. The reasoning is that the constant growth in complexity calls for simulation/virtualization of the design process. System-on-chip (SoC) technology will already be dominated by 100-1000 core multiprocessing on a chip by 2020~\cite{Teich2012}. Changes will affect the way companies design embedded software, and new languages and tool chains will need to emerge in order to cope with the enormous complexity. Low-cost embedded systems (daily-life devices) will undoubtedly see development of concurrent software and exploitation of parallelism. In order to cope with the desire to include the environment in the design of future cyber-physical systems, a system's heterogeneity will most definitely continue to grow in SoCs as well as in distributed systems of systems. A huge part of design time is already spent on verification, either in a simulative manner or using formal techniques~\cite{Teich2012}. ``Indeed, market data indicate[s] that more than 80\% of system development efforts are now in software versus hardware. This implies that an effective platform has to offer a powerful design environment for software to cope with development cost.''~\cite{Vincentelli2002} Coverification will require an increasing proportion of the design time as systems become more complex. 
Progress at the electronic system level might diminish due to verification techniques that cannot cope with the modeling of errors and ways to retrieve and correct them, or, even better, prove that certain properties formulated as constraints during synthesis will hold in the implementation by construction. The verification process on one level of abstraction needs to prove that an implementation (the structural view) indeed satisfies the specification (behavioral view)~\cite{Teich2012}. Given the uncertainty of the environment and communication partners of complex interacting cyber-physical systems, runtime adaptivity will be a must for guaranteeing the efficiency of a system. Due to the availability of reconfigurable hardware and multicore processing, which will also take a more important role in the tool chain for system simulation and evaluation, online codesign techniques will work towards a standard as time moves forward. As with any design problem, if the functional aspects are indistinguishable from the implementation aspects, then it is very difficult to evolve the design over multiple hardware generations~\cite{Vincentelli2007}. It should be noted that there are tools that already exist for low- or high-level system simulation. New territory is the combination of these tools to form a `hardware-to-software' virtualization tool that is both efficient and effective. Metropolis is one tool that is based in part on the concept of platform-based design. Metropolis can statically and dynamically analyze functional designs with models that have no notion of physical quantities and mapped designs where the association of functionality to architectural services allows for evaluation of characteristics (e.g.~latency, throughput, power, and energy) of an implementation of a particular functionality with a particular platform instance~\cite{Vincentelli2007, Metropolis}. Metropolis is but one manifestation of platform-based design as a tool. 
While an effective tool, it still does not handle security-centric design in its implementation. @@ -271,11 +272,16 @@ Another approach is to specify requirements that establish bounds on the amount Information protection, required by a security policy (e.g., access control to user-domain objects) or for system self-protection (e.g., maintaining integrity of kernel code and data), must be protected to a level of continuity consistent with the security policy and the security architecture assumptions; thus providing `continuous protection on information'. Simply stated, no guarantees about information integrity, confidentiality or privacy can be made if data is left unprotected while under control of the system (i.e., during creation, storage, processing or communication of information and during system initialization, execution, failure, interruption, and shutdown); one cannot claim to have a secure system without remaining secure for all aspects of said system. -From the development and design exploration of the function space a user should have two aspects of the high-level development; the behavioral description (e.g. how the system should behave given an input) and levels of abstraction that yield a `plan' for how the functional requirements will be implemented by lower levels of software and hardware. The reader should recognize the precept of platform mapping from platform-based design although only part of the full process has been explored; mapping of the function space. +From the development and design exploration of the function space a user should have two aspects of the high-level development: the behavioral description (e.g. how the system should behave given an input) and levels of abstraction that yield a `plan' for how the functional and security requirements will be implemented by lower levels of software and hardware. This mapping between functional and security requirements and implementation is a key optimization problem. 
+%The reader should recognize the precept of platform mapping from platform-based design although only part of the full process has been explored; mapping of the function space. \subsection{Component Definitions} \label{Component Definitions} -Component definitions relates to the security properties that different components have based on the definitions of their behavior and functionality. Security properties, at the component definition level, include more physical properties of the varying components; reliability, confidentiality, uniqueness. If the component in question were a processor then security properties could manifest as the following: cryptographic functions/functionality, is the component a TPM, secure key storage/protection, use of PUFs greater variability, uniform power consumption as a countermeasure to side-channel attacks, unexposed pins using a ball grid array (BGA), specific power drain/usage restrictions for operational functionality, anti-tampering or anti-reverse engineering properties of a component, etc. A developer/designer will need to determine just how low-level the architectural space will go (e.g. what is the lowest level being incorporated into the mapping process), along with the properties that must be noted/maintained at such a level. Further more the exploration of this design space will lead to creating higher-level implementations and functionality that is derived from the lower-level components. As with any development and design procedure there will need to be tradeoff optimization that will examine any conflicting properties (e.g. size and heat dissipation requirements) when mapping the platform/architectural space toward the function space. +Component definitions relate to the security properties that different components have based on the definitions of their behavior and functionality. 
Security properties, at the component definition level, include more physical properties of the varying components: reliability, confidentiality, uniqueness. If the component in question were a processor then security properties could manifest as the following: cryptographic functions/functionality, whether the component is a TPM, secure key storage/protection, PUF availability, uniform power consumption as a countermeasure to side-channel attacks, unexposed pins using a ball grid array (BGA), specific power drain/usage restrictions for operational functionality, anti-tampering or anti-reverse engineering properties of a component, etc. A developer/designer will need to determine just how low-level the architectural space will go (e.g. what is the lowest level being incorporated into the mapping process), along with the properties that must be noted/maintained at such a level. Furthermore, the exploration of this design space will lead to creating higher-level implementations and functionality that is derived from the lower-level components. As with any development and design procedure there will need to be tradeoff optimization that will examine any conflicting properties (e.g. size and heat dissipation requirements) when mapping the platform/architectural space toward the function space. + +As we mentioned with security requirements, converting these properties to metrics is a challenge. What does cryptographic functionality mean? Is it AES, RSA, DES, etc.? How does one quantify cryptographic strength in a way that is meaningful to an optimization mapping function? Similar questions arise with other security properties. This needs to be investigated in conjunction with further work on the mapping function. + +Below, we examine three components in further detail: TPMs, PUFs, and reference monitors. 
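The quantification challenge raised above can be made concrete with a small, purely hypothetical sketch: component property lists reduced to a single comparable score so that a mapping optimizer can rank candidates. The component names, the property-to-score table, and the weights are all invented for illustration; choosing them defensibly is exactly the open problem.

```python
# Hypothetical sketch: reducing qualitative security properties to a numeric
# score a mapping optimizer could rank on. All names/weights are invented.

components = {
    "mcu_a": {"key_storage": True,  "crypto": "AES-128", "bga": True},
    "mcu_b": {"key_storage": False, "crypto": "AES-256", "bga": False},
}

# One (debatable) way to make "cryptographic strength" numeric.
CRYPTO_SCORE = {"none": 0, "DES": 1, "AES-128": 3, "AES-256": 4}

def security_score(props):
    score = CRYPTO_SCORE.get(props["crypto"], 0)
    score += 2 if props["key_storage"] else 0   # secure key storage/protection
    score += 1 if props["bga"] else 0           # unexposed pins (BGA)
    return score

ranked = sorted(components, key=lambda c: security_score(components[c]),
                reverse=True)
print(ranked)
```

Note that the ranking flips depending on how heavily key storage is weighted against cipher strength, which is why the metric design must be investigated together with the mapping function rather than in isolation.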
\paragraph{Trusted Platform Modules} Different groups have tackled aspects of @@ -299,7 +305,7 @@ There are three main properties of PUFs: reliability, uniqueness, and randomness %For example, let us examine the use of Physical Unclonable Functions (PUFs) for low-level security design. As mentioned in Section~\ref{Related Work} PUFs have a desirable resilience against tampering, are enticing due to their basis in process variation (i.e. hardware variations: jitter, noise, temperature, voltage), and have `Challenge-Response Pairs' (CRPs) as their input/output nets~\cite{Tehranipoor2015}. Hence, PUFs have the properties that different components may need, which is why PUFs can be used for elements ranging from key generation to authentication functionality. These aspects are dependent on the properties of the underlying hardware and while they may be similar in function and purpose these varying aspects contribute not only to the developer's use of these components but to the behavioral description and restrictions that the implemented function/module carry going forward. Further investigation into the concerns and considerations of the mapping process will occur in Section~\ref{Mapping}. -\paragraph{Resource Monitors} +\paragraph{Reference Monitors} Another example of security/trustworthiness implementation is the use of monitors for distributed system security. Different methods are used for determining trust of actions @@ -347,7 +353,7 @@ Dependability is a measure of a system's availability, reliability, and its main Different scopes of security/trustworthiness can be roughly placed into three categories (Figure~\ref{fig:PBDSecScopes}): local, network, and distributed. Each of these scopes will be examined in terms of the considerations/challenges, principles, and policies that should be used to determine action/behavior of these different security elements and the system as a whole. 
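The challenge-response behavior attributed to PUFs above can be sketched as a toy model. The model is an assumption-laden caricature: a seeded random mapping stands in for process variation, responses are 16 bits, and a Hamming-distance threshold models the imperfect reliability property; none of this reflects any real PUF construction.

```python
# Toy model of PUF challenge-response authentication (hypothetical parameters:
# 16-bit responses, Hamming-distance acceptance threshold of 3).

import random

def make_puf(seed):
    # Stand-in for process variation: a fixed secret mapping per device.
    rng = random.Random(seed)
    return {c: rng.getrandbits(16) for c in range(256)}

def noisy_eval(puf, challenge):
    # Reliability is imperfect: model one response bit flipping per read.
    return puf[challenge] ^ (1 << random.randrange(16))

def authenticate(stored_crps, challenge, response, threshold=3):
    # Accept if the response is within `threshold` bit-flips of enrollment.
    return bin(stored_crps[challenge] ^ response).count("1") <= threshold

device = make_puf(seed=42)
crps = dict(device)          # enrollment: record CRPs in a verifier database
ch = 17
print(authenticate(crps, ch, noisy_eval(device, ch)))   # genuine device
print(authenticate(crps, ch, make_puf(seed=7)[ch]))     # clone attempt;
                                                        # rejected w.h.p.
```

The gap between the noise tolerated from a genuine device and the distance expected from an unrelated device is what the reliability and uniqueness properties must jointly guarantee for such a scheme to work.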
 The local scope encompasses a security element's own abilities, trustworthiness, and the dependencies of that element (e.g. least common mechanisms, reduced complexity, minimized sharing, and the conflict between this as least common mechanisms). The purpose of this section is to present the considerations, principles, and policies that govern the behavior and function of security elements/components at the local scope/level. First, this paper will reiterate the definitions stated in the Benzel et.~al.~paper. Failure is a condition in which, given a specifically documented input that conforms to specification, a component or system exhibits behavior that deviates from its specified behavior. A module/database is seen as a unit of computation that encapsulates a database and provides an interface for the initialization, modification, and retrieval of information from the database. The database may be either implicit, e.g. an algorithm, or explicit. Lastly, a process(es) is a program(s) in execution. To further define the actions/behavior of a given component in the local scope, this paper moves to outlining the principles that define component behavior at the local device level. \\
-The first principle is that of `Least Common Mechanisms'. If multiple components in the system require the same function of a mechanism, then there should be a common mechanism that can be used by all of them; thus various components do not have separate implementations of the same function but rather the function is created once (e.g. device drivers, libraries, OS resource managers). The benefit of this being to minimize complexity of the system by avoiding unnecessary duplicate mechanisms. Furthermore, modifications to a common function can be performed (only) once and impact of the proposed modifications allows for these alterations to be more easily understood in advance. The simpler a system is, the fewer vulnerabilities it will have; the principle of `Reduced Complexity'.
-From a security perspective, the benefit to this simplicity is that is is easier to understand whether an intended security policy has been captured in system design. At a security model level, it can be easier to determine whether the initial system state is secure and whether subsequent state changes preserve the system security. Given current state of the art of systems, the conservative assumption is that every complex system will contain vulnerabilities and it will be impossible to eliminate all of them, even in the most highly trustworthy of systems. At the lowest level of secure data, no computer resource should be shared between components or subjects (e.g. processes, functions, etc.) unless it is necessary to do so. This is the concept behind `Minimized Sharing'. To protect user-domain information from active entities, no information should be shared unless that sharing has been explicitly requested and granted. Encapsulation is a design discipline or compiler feature for ensuring there are no extraneous execution paths for accessing private subsets of memory/data. Minimized sharing influenced by common mechanisms can lead to designs being reentrant/virtualized so that each component depending mechanism will have its own virtual private data space; partition resources into discrete, private subsets for each dependent component. This in turn can be seen as a virtualization of hardware, a concept already illustrated earlier with the discussion of platform-based design. A further consideration of minimized sharing is to avoid covert timing channels in which the processor is one of the shared components. In other words, the scheduling algorithms must ensure that each depending component is allocated a fixed amount of time to access/interact with a given shared space/memory.
-Development of a technique for controlled sharing requires execution durations of shared mechanisms (or mechanisms and data structures that determine duration) to be explicitly stated in design specification documentation so that effects of sharing can be verified and evaluated. Once again illustrating the need for rigorous standards to be clearly documented. As the reader has undoubtably seen, there are conflicts within just these simple concepts and principles for the local scope of trustworthiness of a security component. The principles of least common mechanism and minimal sharing directly oppose each other, while still being desirable aspects of the same system. Minimizing system complexity through the use of common mechanisms does lower hardware space requirements for an architectural platform, but at the same time does also maximize the amount of sharing occurring within the security component. A designer can use one of the development techniques already mentioned to help balance these two principles, but in the end the true design question is: what information is safe to share and what information must be protected under lock and key and how to then communicate that information with high security and fidelity.
+The first principle is that of `Least Common Mechanisms'. If multiple components in the system require the same function of a mechanism, then there should be a common mechanism that can be used by all of them; thus the various components do not have separate implementations of the same function, but rather the function is created once (e.g. device drivers, libraries, OS resource managers). The benefit of this is that it minimizes the complexity of the system by avoiding unnecessary duplicate mechanisms. Furthermore, modifications to a common function can be performed (only) once, and the impact of proposed modifications can be more easily understood in advance.
+The simpler a system is, the fewer vulnerabilities it will have; this is the principle of `Reduced Complexity'. From a security perspective, the benefit of this simplicity is that it is easier to understand whether an intended security policy has been captured in the system design. At a security model level, it can be easier to determine whether the initial system state is secure and whether subsequent state changes preserve system security. Given the current state of the art, the conservative assumption is that every complex system will contain vulnerabilities and that it will be impossible to eliminate all of them, even in the most highly trustworthy of systems. At the lowest level of secure data, no computer resource should be shared between components or subjects (e.g. processes, functions, etc.) unless it is necessary to do so. This is the concept behind `Minimized Sharing'. To protect user-domain information from active entities, no information should be shared unless that sharing has been explicitly requested and granted. Encapsulation is a design discipline or compiler feature for ensuring there are no extraneous execution paths for accessing private subsets of memory/data. Minimized sharing influenced by common mechanisms can lead to designs being reentrant/virtualized, so that each depending component has its own virtual private data space; resources are partitioned into discrete, private subsets for each dependent component. This in turn can be seen as a virtualization of hardware, a concept already illustrated earlier in the discussion of platform-based design. A further consideration of minimized sharing is the avoidance of covert timing channels in which the processor is one of the shared components. In other words, the scheduling algorithms must ensure that each depending component is allocated a fixed amount of time to access/interact with a given shared space/memory.
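The fixed-allocation scheduling idea can be sketched as follows (the component names and the quantum are hypothetical): every dependent component receives the same fixed slice of access to the shared resource on every round, whether or not it has work to do, so the timing observable by any one component reveals nothing about the activity of the others.

```python
# Minimal sketch of a covert-timing-channel countermeasure: a constant
# round-robin schedule with a fixed quantum per component. The quantum
# is never shortened when a component is idle, so slot boundaries are
# data-independent. Names and the quantum value are illustrative.

QUANTUM_TICKS = 4  # fixed slice granted to each component per round

def schedule(components, rounds):
    """Yield (tick, component) pairs for a constant round-robin schedule."""
    tick = 0
    for _ in range(rounds):
        for c in components:
            for _ in range(QUANTUM_TICKS):
                yield tick, c
                tick += 1

timeline = list(schedule(["A", "B", "C"], rounds=2))
# B's slots fall at fixed ticks regardless of what A or C actually did:
print([t for t, c in timeline if c == "B"][:4])  # [4, 5, 6, 7]
```

The cost of closing the channel this way is wasted capacity whenever a component's quantum goes unused — precisely the kind of tradeoff a mapping function would have to weigh against throughput requirements.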
+Development of a technique for controlled sharing requires the execution durations of shared mechanisms (or the mechanisms and data structures that determine duration) to be explicitly stated in the design specification documentation so that the effects of sharing can be verified and evaluated; this once again illustrates the need for rigorous standards to be clearly documented. As can be seen, there are conflicts within even these simple concepts and principles for the local scope of trustworthiness of a security component. The principles of least common mechanism and minimized sharing directly oppose each other, while still being desirable aspects of the same system. Minimizing system complexity through the use of common mechanisms does lower the hardware space requirements for an architectural platform, but at the same time it also maximizes the amount of sharing occurring within the security component. A designer can use one of the development techniques already mentioned to help balance these two principles, but in the end the true design questions are: what information is safe to share, what information must be protected under lock and key, and how can that information then be communicated with high security and fidelity? How does a designer decide how elements communicate? How does one define a security component in terms of a network; how is communication done, and what concerns/challenges are there? Rigorous definitions of the input/output nets for components and of the methods for communication (buses, encryption, multiplexing) must be documented and distributed as the standard methods for communicating. Service, defined in the scope of a network, refers to processing or protection provided by a component to users or other components, e.g. a communication service (TCP/IP) or a security service (encryption, a firewall). These services represent the communication that occurs between the different elements of a secure system.
+Each service should be documented so as to rigorously define the input and output nets for each component and the method used for communication (buses, encryption, multiplexing). Since these services, and their actions, are central to the operation/behavior of a secure system, there are a series of considerations, principles, and policies that a system architect/designer must acknowledge when deciding on the layout of security components within a network. The principle of `Secure Communication Channels' states that when composing a system where there is a threat to communication between components, each communications channel must be trustworthy to a level commensurate with the security dependencies it supports; in other words, with how much the component is trusted to perform its security functions by other components. Several techniques can be used to mitigate threats to the communication channels in use. Use of a channel may be restricted by protecting access to it with a suitable access control mechanism (e.g. a reference monitor located beneath or within each component). End-to-end communications technologies (e.g. encryption) may be used to eliminate security threats in the communication channel's physical environment.
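The two channel-protection techniques just named can be sketched together: a reference-monitor-style access check gating every use of the channel, and an end-to-end integrity tag so tampering in the channel's physical environment is detectable. The sketch below uses Python's standard-library `hmac` for the tag; the component names and the access-control table are invented for illustration, and real confidentiality protection (payload encryption) would come from a vetted cryptographic library, omitted here.

```python
# Hedged sketch: a reference monitor gating channel access, plus an
# end-to-end HMAC-SHA256 tag for message integrity. The ACL entries
# and component names are hypothetical; payload encryption is omitted.

import hashlib
import hmac
import secrets

CHANNEL_ACL = {"sensor": {"send"}, "logger": {"recv"}}  # hypothetical policy
KEY = secrets.token_bytes(32)                           # shared end-to-end key

def authorize(component, action):
    """Reference-monitor check: every channel use passes through here."""
    if action not in CHANNEL_ACL.get(component, set()):
        raise PermissionError(f"{component} may not {action}")

def send(component, payload: bytes) -> bytes:
    authorize(component, "send")
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()
    return payload + tag                                 # message || MAC

def recv(component, message: bytes) -> bytes:
    authorize(component, "recv")
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return payload

wire = send("sensor", b"reading=42")
assert recv("logger", wire) == b"reading=42"
```

Note how the two mechanisms address different threats: the reference monitor constrains which components may use the channel at all, while the MAC makes the channel trustworthy against an adversary who can modify traffic in transit — the "commensurate trustworthiness" the principle demands.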