diff --git a/AlterationOutline.txt b/AlterationOutline.txt new file mode 100644 index 0000000..020ad48 --- /dev/null +++ b/AlterationOutline.txt @@ -0,0 +1,63 @@ +Intro + -> Why do this? + -> Current design ideals incorrect + -> Security needs to be a 1st round element in system design + -> Protect the function of the system + -> Protect the design of a secure system + -> Anti: tampering, cloning, reverse engineering + -> PBD (overview) + -> Security not in the thought process of PBD +Related Work + -> Low level concepts + -> Flaws + -> Hardware implementations + -> Flaws + -> Security primitives + -> Flaws + -> PBD implementations + -> Flaws +Proposed Approach + -> Requirement Specifications + -> High-level requirements: data1 must remain private to some cost value (time, money), system functionality must remain private to some cost value (time, money) + -> How to go about protecting data to keep it private? This gets us from high-level behavioral requirements down to implementation (level of abstraction down) + -> Work downwards to lower level design and development + -> Component Definitions + -> Security Properties + -> If a processor would need: cryptographic function, TPM, secure keys, PUF, uniform/obfuscated implementation (to prevent side channel attacks; branch of confidentiality), unexposed pins (BGA), power drain/usage, integrity of data/information, anti-tampering, anti-reverse engineering + -> PUF (Physical Unclonable Function): reason to use PUF is that it is based on process variations (hardware variations; temp, age, leakage, component properties, voltage, jitter, noise, etc.) which makes it very difficult if not impossible to predict or model, mainly used for key generation and ID extraction purposes (authentication). Three main properties of PUF are: reliability, uniqueness, and randomness (of the output data). It should be noted that the properties of reliability and randomness conflict and it is required that a balance be struck during the design and development stage. PUF is a prevention method; in that PUF can be used to ensure that the system is operating as expected. + -> From Sara's paper: better resilience against tampering compared to other solutions, PUF input and output nets are maps of challenges and responses which are referred to as `Challenge-Response Pairs' (CRPs), two broad categories for PUFs (strong PUF and weak PUF); where strong PUFs are used for authentication and weak PUFs are used for key generation (the reason being that strong PUFs can handle a larger number of CRPs), three types of PUF; cover-based (based on physical optic/electric properties of the system; e.g. light scattering, dielectric coefficient), memory-based (based on properties of memory (start-up behavior); DRAM/SRAM/Flash PUF), delay-based (based on delay of operations by components; Arbiter PUF) + -> Mention/Combine first about reason to use PUF and what are PUF's input/output, then characteristics of PUFs, what are the types of PUFs, why would I ever want to use PUFs (they are good) + -> If memory would need: maintaining protection, strength of encryption (of disk/space), `hidden' pins (BGA), memory based PUF + -> What happens when unable to trust components + -> Secure system built from insecure components + -> Examine Root-of-Trust (e.g.
TPM) + -> System is ONLY as secure as its least secure component + -> How to evaluate security + -> Overhead brought into initial system state to validate all of this on the fly (perhaps pre-validate) + -> Other method is to know that all components are trustworthy and thus do not require validation + -> Mapping + -> Mapping is difficult + -> Need a method for comparing arbitrary values/rankings + -> Would require comparison of properties such as trustworthiness, reliability, fidelity, etc. + -> Documentation and standardization are required for a mapping function/framework to be developed + -> Virtualization aids in exploration of the development/design space + -> Time is the most expensive and omni-present cost of development + + -> Development considerations + -> Disaster planning + -> Upgrade/Modification paths + + -> Point out security considerations that a mapping function may need to account for + -> Principles and functionality that may need to be maintained + -> Different scope considerations for security (local, network, distributed) + -> Documentation + -> A necessary evil + -> Documentation is good + -> Documentation leads to better virtualization and automation of tools + -> Leads to need for user documentation + -> Leads to better adoptability +Restate point being made + -> Security needs to be implemented 1st in design + -> Use PBD as a design methodology + -> Proposed application is an attempt to tackle the open ended problem of mapping high-level requirements to component hardware/function +Conclusion \ No newline at end of file diff --git a/PBDSecPaper.pdf b/PBDSecPaper.pdf index 996d2e1..86824fa 100644 Binary files a/PBDSecPaper.pdf and b/PBDSecPaper.pdf differ diff --git a/PBDSecPaper.tex b/PBDSecPaper.tex index 069565b..12c7c83 100644 --- a/PBDSecPaper.tex +++ b/PBDSecPaper.tex @@ -106,13 +106,9 @@ uniqueness) a given solution is. Similarly there are hardware considerations as well: tolerance of the chip elements in use, their capabilities, power distribution over the entire hardware design, signal lag between different elements, and cross-talk caused by -communication channels. +communication channels. The primary cost of developing security, and running a secure system, is time. There are the monetary and hardware costs of security development and implementation, but even those aspects all have a time cost coupled with them. Time, while being the most expensive part of security design, is also the aspect that can be tackled and minimized with rigorous planning and documentation. Taking into account that even the development of documentation and standards also has its own time cost associated with it, this early phase development can also diminish the time-cost impact for later steps in the system's development/implementation life-cycle. Security must be paramount in design from the very beginning of any design and development cycle. -The primary cost of developing security, and running a secure system, is time. There are the monetary and hardware costs of security development and implementation, but even those aspects all have a time cost coupled with them. Time, while being the most expensive part of security design is also the aspect that can be tackled and minimized with rigorous planning and documentation. Taking into account that even the development of documentation and standards also has its own time cost associated with it, this early phase development can also diminish the time-cost impact for later steps in the system's development/implementation life-cycle.
Security must be paramount in design from the very beginning of any design and development cycle. - -Security concerns center around how to define trust/trustworthiness, determining the functions and behaviors of security components, and the principles, policies, and mechanisms that are rigorously documented to standardize behavior. Also designed by industry to clearer standards, giving better security and ease of set-up and implementation. - -The Common Criteria standard~\cite{CommonCriteria} provides a framework for the derivation of system requirements that is comprised of the following steps: definition of system goals and its concept of operation; identification of threats to the defined system; identification of assumptions regarding the system and its environment; identification of organizational policies external to the system; identification of security objectives for the system and its environment based on previous steps; and specification of requirements that will meet the objectives. +Security concerns center around how to define trust/trustworthiness, how to determine the functions and behaviors of security components, and how to rigorously document the principles, policies, and mechanisms that standardize behavior. Industry also assists in designing clearer standards, giving better security and ease of set-up and implementation. Even so, a security design framework is still lacking. The Common Criteria standard~\cite{CommonCriteria} provides a framework for the derivation of system requirements that comprises the following steps: definition of system goals and its concept of operation; identification of threats to the defined system; identification of assumptions regarding the system and its environment; identification of organizational policies external to the system; identification of security objectives for the system and its environment based on previous steps; and specification of requirements that will meet the objectives. Security design, development, verification, and evaluation is still a relatively new and unexplored space. Because of this there is a @@ -125,9 +121,7 @@ the overarching concept of mapping platforms to instances (and vice-versa) aids in early development stages, and the need for rigorous documentation and meticulous following of standards are concepts that not only stem from platform-based design but greatly -lend to the needs of security design and development. - -The intent of this paper is to show the reader +lend to the needs of security design and development. The intent of this paper is to show the reader how the needs and benefits of platform-based design and security development are closely aligned along the same concepts of rigorous design, virtualization/automation of tools, and the needs for @@ -137,11 +131,23 @@ which security development can be mapped over. PBD can be used for development of hardware elements, security centric SoCs, or even as a set of abstract blocks that can be used to design higher level applications (e.g. virtualization development of larger security -systems). +systems). -What is gained, and lost, when shifting the method of design to a more platform-based design/security centric model? After all development of these tools implies that there is a need to change the focus and methods of design/development~\cite{Vincentelli2007}. The advantage to this method is ease of changes in development and searching of design spaces during early design and development of those systems.
For PBD this means that a company/manufacturer is able to lower the cost and time spent in the `early development phase'; the time spent when first drawing up system designs prior to initial prototyping. While the advantage of overcoming multiple prototyping re-designs with a series of virtualization tools will cut down on development time, it can not be forgotten that rigorous standards and documentation need to be developed for this sort of advantage. Once this hurdle is passed then the advantages of such a system can be reaped. For security this means that components can be standardized from a local, network, and distributed standpoint. These different scopes of security will be tackled in Section~\ref{Security}. The main point here is that with standardization comes a form of `contract' that all parties can expect will be followed, thus allowing for different manufacturers/developers to create different aspects of a system, while knowing that these different elements will come together in the final product; much like the advantages of platform-based design. The disadvantage of using a new method is the loss in time for developing the standards, rigors, and documentation that would be used as new guide lines for the industry. These sorts of costs have been felt while developing any of the current standards of security; most definitely with the development of new algorithms for encryption. Further more, security vulnerabilities are found mainly due to incorrect implementations or due to incorrect assumptions about functionality; both of which are more easily avoidable with the use of rigorous planning and design. The other disadvantage is in the adoptability of these new methods. Ideally all manufacturers would adopt a new commodity; rather than components, `design combinations' would be the new commodity that manufacturers would peddle (e.g. instead of saying ``my components are the best'' the dialog moves towards ``My ability to combine these components is the best''). +Standardization is the process of developing and implementing technical standards. Standardization can help to maximize compatibility, interoperability, safety, repeatability, or quality; it can also facilitate commoditization of formerly custom processes. Standards appear in all sorts of domains. For the IC domain, standardization manifests as a flexible integrated circuit where customization for a particular application is achieved by programming one or more components on the chip (e.g. virtualization). PC makers and application software designers develop their products quickly and efficiently around a standard `platform' that emerged over time. As a quick overview of these standards: the x86 ISA, which makes it possible to reuse the OS \& SW applications, a fully specified set of buses (ISA, USB, PCI) which allow for use of the same expansion boards of ICs for different products, and a full specification of a set of IO devices (e.g. keyboard, mouse, audio and video devices). The advantage of the standardization of the PC domain is that software can also be developed independently of the new hardware availability, thus offering a real hardware-software codesign approach. If the instruction set architecture (ISA) is kept constant (e.g. standardized) then software porting, along with future development, is far easier~\cite{Vincentelli2002}. In a `System Domain' lens, standardization is the aspect of the capabilities a platform offers to quickly develop new applications (adoptability).
In other words, this requires a distillation of the principles so that a rigorous methodology can be developed and profitably used across different design domains. +So why is standardization useful? Standardization allows for manufacturers, system developers, and software designers to all work around a single, accepted platform. It is understood what the capabilities of the hardware are, what the limitations of system IO will be, along with what the methods/protocols of communication will be. Even these preset aspects of the `standard' have their own `contractual obligations' of how they will function, what their respective `net-lists' are, and where the limitations of such standards lie. While the act of creating a standard takes time, not only due to the time of development but also due to the speed of wide adoption, it is a necessary step. Without standardization, it becomes far more difficult to create universally used complex systems, let alone validate their correctness and trustworthiness. This is how one is able to change the current paradigm to a new standard model. + +What is gained, and lost, when shifting the method of design to a more platform-based design/security centric model? After all, development of these tools implies that there is a need to change the focus and methods of design/development~\cite{Vincentelli2007}. The advantage to this method is ease of changes in development and searching of design spaces during early design and development of those systems. For PBD this means that a company/manufacturer is able to lower the cost and time spent in the `early development phase'; the time spent when first drawing up system designs prior to initial prototyping. While the advantage of overcoming multiple prototyping re-designs with a series of virtualization tools will cut down on development time, it cannot be forgotten that rigorous standards and documentation need to be developed for this sort of advantage. Once this hurdle is passed then the advantages of such a system can be reaped. For security this means that components can be standardized from a local, network, and distributed standpoint. These different scopes of security will be tackled in Section~\ref{Mapping}. The main point here is that with standardization comes a form of `contract' that all parties can expect will be followed, thus allowing for different manufacturers/developers to create different aspects of a system, while knowing that these different elements will come together in the final product; much like the advantages of platform-based design. The disadvantage of using a new method is the loss in time for developing the standards, rigors, and documentation that would be used as new guidelines for the industry. These sorts of costs have been felt while developing any of the current standards of security; most definitely with the development of new algorithms for encryption. Furthermore, security vulnerabilities are found mainly due to incorrect implementations or due to incorrect assumptions about functionality; both of which are more easily avoidable with the use of rigorous planning and design. The other disadvantage is in the adoptability of these new methods. Ideally all manufacturers would adopt a new commodity; rather than components, `design combinations' would be the new commodity that manufacturers would peddle (e.g. instead of saying ``my components are the best'' the dialog moves towards ``My ability to combine these components is the best'').
+ +But as with the development of any tool, and more so when +expecting said tools to be more publicly used, there is a deep need +for meticulous documentation and rigorous use/distribution of +standards. Without this, there is no guarantee that anyone will +benefit from use of this new model. Much like with security innovation +and implementation, without proper outlining of behavior and function +there is greater possibility for erroneous use thus leading to greater +vulnerability of the overall system. -It is the aim of this paper to first outline platform-based design (Section~\ref{Platform-based design}) and its advantages and disadvantages, then move towards examining a model for designing security (Section~\ref{Security}) and lastly illustrate to the reader why platform-based design should be the basis for security design and development. +It is the aim of this paper to first review platform-based design (Section~\ref{Platform-based design}) and its nuances, then move towards examining previous and related work to PBD and security design/development (Section~\ref{Related Work}). Next, this paper will discuss the proposed approach (Section~\ref{Proposed Approach}) along with the different spaces and mapping that would occur between them, and lastly illustrate to the reader considerations of the proposed model (Section~\ref{Model Review}) and why platform-based design should be the basis for security design and development (Section~\ref{Conclusion}). \section{Platform-based design} \label{Platform-based design} @@ -185,11 +191,11 @@ refinement should restrict the design space via new forms of regularity and structure that surrender some design potential for lower cost and first-pass success. -\begin{figure*} - \includegraphics[width=\textwidth,height=8cm]{./images/recursivePBD.png} +\begin{figure} + \includegraphics[width=0.5\textwidth]{./images/recursivePBD.png} \caption{Recursive PBD~\cite{Vincentelli2007}} \label{fig:RecursivePBD} -\end{figure*} +\end{figure} Due to the recursive nature of platform-based design, establishing the number, location, and components of intermediate ``platforms'' is the @@ -228,18 +234,10 @@ The issue of platform-based design is not so much an over-engineering of design/ \section{Related Work} \label{Related Work} +As systems move towards more complex designs and implementations (as allowed by Moore's Law and other growth in technology), the ability to make even simple changes to these designs becomes exponentially more difficult. For this reason, levels of abstraction are desired when simplifying the design/evaluation phases of systems development. An example of this abstraction simplification is the use of system-on-chip (SoC) to replace multi-chip solutions. This SoC abstraction solution is then used for a variety of tasks, ranging from arithmetic to system behavior. It is an industry standard to use SoC to handle encryption/security in a secure and removed manner~\cite{Wu2006}. Middleware describes software that resides between an application and the inner workings of the system hosting the application. The purpose of the middleware is to abstract the complexities of the underlying technology from the application layer~\cite{Lang2003}; to act as translation software for communicating from a lower level to a higher level. Platform-Based Design (PBD) has been proposed as a methodology to allow for this lower-to-higher translation ``abstraction bridge'' that enables a ``meet-in-the-middle'' approach~\cite{Vincentelli2007}.
-As systems move towards more complex designs and implementations (as allowed by Moore's Law and other growths in technology) the ability to make even simple changes to these designs becomes exponentially more difficult. For this reason, levels of abstraction are desired when simplifying the design/evaluation phases of systems development. An example of this abstraction simplification is the use of system-on-chip (SoC) to replace multi-chip solutions. This SoC abstraction solution is then used for a variety of tasks, ranging from arithmetic to system behavior. It is an industry standard to use SoC to handle encryption/security in a secure and removed manner~\cite{Wu2006}. Middleware describes software that resides between an application and the inner workings of the system hosting the application. The purpose of the middleware is to abstract the complexities of the underlying technology from the application layer~\cite{Lang2003}; to act as translation software for communicating from lower level to a higher level. Platform-Based Design (PBD) has been proposed as a methodology to allow for this lower-to-higher translation ``abstraction bridge'' that enables a ``meet-in-the-middle'' approach~\cite{Vincentelli2007}. Platform-based design attempts to map applications to a platform chosen from a set of well-defined components. These components are defined by various characteristics including performance, weight, cost, etc. The mapping process to select architectures, platforms, and components sets out to maximize some objective function in the context of constraints and system requirements. - -While security in these systems is clearly important, it is rarely part of the early design process and thus, does not enter into the platform-based design process. In this paper, we advocate for a new security aware platform-based design approach that takes into account component security attributes during platform definition. In addition, the mapping process must incorporate security requirements as part of the objective function and constraints. -%To date, the PBD approach to design has been along with the sort of software construct that benefits the virtualization of security component mapping. -%\begin{quotation} -% ``However, even though current silicon technology is closely following the growing demands; the effort needed in modeling, simulating, and validating such designs is adversely affected. This is because current modeling tools and frameworks, hardware and software co-design environments, and validation and verification frameworks do not scale with the rising demands.''~\cite{Patel2007} -%\end{quotation} Work in the security realm is much more chaotic, although undertakings have been made to define the scope of security and its intricacies in a well documented manner~\cite{Benzel2005}. Other work in the security realm includes security-aware mapping for automotive systems, explorations of dependable and secure computing, how to model secure systems, and defining the general theorems of security properties~\cite{Avizienis2004, Lin2013, Zakinthinos1997, Jorgen2008, Zhou2007}. Security has many facets to it: failure, processes, security mechanisms, security principles, security policies, trust, etc. A key developing aspect of security is its standardization of encryption algorithms, methods of authentication, and communication protocol standards. -Metropolis is one tool that is based in part on the concept of platform-based design. 
Metropolis can analyze statically and dynamically functional designs with models that have no notion of physical quantities and mapped designs where the association of functionality to architectural services allows for evaluation of characteristics (e.g.~latency, throughput, power, and energy) of an implementation of a particular functionality with a particular platform instance~\cite{Vincentelli2007, Metropolis}. Metropolis is but one manifestation of platform-based design as a tool. PBD has been used for the platform-exploration of synthetic biological systems as seen in the work done by Densmore et.~al.~to create a strong and flexible tool~\cite{Densmore2009}. Other applications, of platform-based design, include design on a JPEG encoder, imaging, and use for distributed automotive design~\cite{Vincentelli2007, Teich2012, Keutzer2000, Lin2013, Gerstlauer2009, Gronbaek2008, Pimentel2006, Schaumont2005, Sedcole2006, Benveniste2012, Pinto2006, Bonivento2006, Pellizzoni2009, Kreku2008, Gamatie2011, Gruttner2013, Densmore2009} - Different groups have tackled aspects of security design and development. The Trusted Computing Group (TCG) created Trusted Platform Modules (TPM) which are able to validate their own functionality, are utilized by examining the prescribed behavior of security components to locate damaged/error-prone components so that they can be replaced/fixed thus raising the overall trustworthiness of the system of components. Therefore TPMs are a good stepping stone on the path for better security, but TPM/TCG has not found ``the answer'' -yet~\cite{Sadeghi2008}. Another example of security/trustworthiness +yet~\cite{Sadeghi2008}. + +Physical Unclonable Functions are yet another hardware attempt to tackle larger issues of security design and development. PUFs are based on process variations (e.g. hardware variations; temperature, voltage, jitter, noise, leakage, age) which make them very difficult, if not impossible, to predict or model~\cite{Tehranipoor2015}, but they do produce input/output nets that are maps of challenges and responses (hereafter referred to as `Challenge-Response Pairs'; CRPs). These PUFs have subcategories based on their functionality (e.g. strong vs weak~\cite{Tehranipoor2015}) and the component variations examined (e.g. cover-based, memory-based, and delay-based~\cite{Tehranipoor2010}). The important takeaway is that strong PUFs are used for ID extraction purposes (authentication) while weak PUFs are used for key generation; the reason being that strong PUFs can handle a larger number of CRPs. PUF is a prevention method; in that PUF can be used to ensure that a system is operating as expected (similar to the TPM concept). +There are three main properties of PUFs: reliability, uniqueness, and randomness (of the output data). It should be noted that the properties of reliability and randomness conflict, thus requiring that a balance be struck during the design and development stages. Functionality, much like security, cannot be a second thought when designing and developing systems. Although PUFs are also designed with security in mind, they do not serve the paper's purpose: to incorporate security development into platform-based design. + +Another example of security/trustworthiness implementation is the use of monitors for distributed system security. Different methods are used for determining trust of actions (e.g. Byzantine General Problem). These methods are used for @@ -265,21 +268,67 @@ series of different HW and SW platforms for tackling different aspects of security.
The unifying factor in all of them is determining what is or isn't trustworthy. -Component-Based Software Engineering (CBSE) is widely adopted in the software industry as the mainstream approach to software engineering~\cite{Mohammed2009, Mohammad2013}. CBSE is a reuse-based approach of defining, implementing and composing loosely coupled independent components into systems and emphasizes the separation of concerns in respect of the wide-ranging functionality available throughout a given software system. Thus it can be seen that the ideals being outlined in this paper are already in use, all that is needed is their re-application to a new design methodology. In a way one can see CBSE as a restricted-platform version of platform-based design. This furthers this paper's point, that the required elements for combining PBD and security are already in use for different purposes and simply need to be `re-purposed' for this new security centric platform-based design. +Component-Based Software Engineering (CBSE) is widely adopted in the software industry as the mainstream approach to software engineering~\cite{Mohammed2009, Mohammad2013}. CBSE is a reuse-based approach of defining, implementing and composing loosely coupled independent components into systems and emphasizes the separation of concerns in respect of the wide-ranging functionality available throughout a given software system. Thus it can be seen that the ideals being outlined in this paper are already in use; all that is needed is their re-application to a new design methodology. In a way one can see CBSE as a restricted-platform version of platform-based design. While an effective tool, it still falls short on the hardware side of systems. This furthers this paper's point, that the required elements for combining PBD and security are already in use for different purposes and simply need to be `re-purposed' for this new security centric platform-based design. + +Hardware/Software codesign is crucial for bridging together the software and hardware aspects of a new system in an efficient and effective manner. There are different coding languages to handle different aspects (i.e. SW/HW) of virtualization. When dealing with the virtualization of software, the aspects of timing and concurrency semantics still fall short. These problems come from a lack of resolution and control at the lowest levels of virtualization interaction. The overwhelming hardware issue is that hardware semantics are very specific and tough to simulate. There has been the development of hardware simulation languages, such as SystemC~\cite{Kreku2008}, but there has not been the development of tools to bridge the space between hardware and software simulation/virtualization. Codesign of software simulations of hardware allows for development of high level software abstraction to interact with low level hardware abstraction. The reasoning is that the constant growth in complexity calls for simulation/virtualization of the design process. System-on-chip (SoC) technology will already be dominated by 100-1000 core multiprocessing on a chip by 2020~\cite{Teich2012}. Changes will affect the way companies design embedded software, and new languages and tool chains will need to emerge in order to cope with the enormous complexity. Low-cost embedded systems (daily-life devices) will undoubtedly see development of concurrent software and exploitation of parallelism.
In order to cope with the desire to include environment in the design of future cyber-physical systems, a system's heterogeneity will most definitely continue to grow, in SoCs as well as in distributed systems of systems. A huge part of design time is already spent on verification, either in a simulative manner or using formal techniques~\cite{Teich2012}. ``Indeed, market data indicate[s] that more than 80\% of system development efforts are now in software versus hardware. This implies that an effective platform has to offer a powerful design environment for software to cope with development cost.''~\cite{Vincentelli2002} Coverification will require an increasing proportion of the design time as systems become more complex. Progress at the electronic system level might diminish due to verification techniques that cannot cope with the modeling of errors and ways to retrieve and correct them, or, even better, prove that certain properties formulated as constraints during synthesis will hold in the implementation by construction. The verification process on one level of abstraction needs to prove that an implementation (the structural view) indeed satisfies the specification (behavioral view)~\cite{Teich2012}. Given the uncertainty of the environment and communication partners of complex interacting cyber-physical systems, runtime adaptivity will be a must for guaranteeing the efficiency of a system. Due to the availability of reconfigurable hardware and multicore processing, which will also take a more important role in the tool chain for system simulation and evaluation, online codesign techniques will work towards a standard as time moves forward. As with any design problem, if the functional aspects are indistinguishable from the implementation aspects, then it is very difficult to evolve the design over multiple hardware generations~\cite{Vincentelli2007}. It should be noted that there are tools that already exist for low, or high, system simulation. New territory is the combination of these tools to form a `hardware-to-software' virtualization tool that is both efficient and effective. -Others are incorporating platform-based design in everything from image processing to implementing security across different domains~\cite{Lin2015}. People are seeing the use and effectiveness of this sort of design methodology and it is the belief of this paper that more focus should be placed on this topic. Platform-based design should be the basis for security design and development. +Metropolis is one tool that is based in part on the concept of platform-based design. Metropolis can analyze statically and dynamically functional designs with models that have no notion of physical quantities and mapped designs where the association of functionality to architectural services allows for evaluation of characteristics (e.g.~latency, throughput, power, and energy) of an implementation of a particular functionality with a particular platform instance~\cite{Vincentelli2007, Metropolis}. Metropolis is but one manifestation of platform-based design as a tool. While an effective tool, it still does not handle security centric design in its implementation. + +PBD has been used for the platform-exploration of synthetic biological systems as seen in the work done by Densmore et.~al.~to create a strong and flexible tool~\cite{Densmore2009}.
Other applications of platform-based design include the design of a JPEG encoder, imaging, and use for distributed automotive design~\cite{Vincentelli2007, Teich2012, Keutzer2000, Lin2013, Gerstlauer2009, Gronbaek2008, Pimentel2006, Schaumont2005, Sedcole2006, Benveniste2012, Pinto2006, Bonivento2006, Pellizzoni2009, Kreku2008, Gamatie2011, Gruttner2013, Densmore2009}. Others are incorporating platform-based design in everything from image processing to implementing security across different, though sometimes limited, domains~\cite{Lin2015}. People are seeing the use and effectiveness of this sort of design methodology and it is the belief of this paper that more focus should be placed on this topic. Platform-based design should be the basis for security design and development. + +Platform-based design attempts to map applications to a platform chosen from a set of well-defined components. These components are defined by various characteristics including performance, weight, cost, etc. The mapping process to select architectures, platforms, and components sets out to maximize some objective function in the context of constraints and system requirements. While security in these systems is clearly important, it is rarely part of the early design process and thus does not enter into the platform-based design process. In this paper, we advocate for a new security-aware platform-based design approach that takes into account component security attributes during platform definition. In addition, the mapping process must incorporate security requirements as part of the objective function and constraints. \section{Proposed Approach} \label{Proposed Approach} +Since there is a lack of implementation of security in platform-based design, the following section outlines the requirements and properties that the function and architectural spaces contain, which are then combined via a mapping process to develop a final design; as can be seen in Figure~\ref{fig:MapSpace}. More on the concept, properties, and actions of the mapping process will be discussed in Section~\ref{Mapping}. + +\begin{figure*} + \includegraphics[width=\textwidth,height=10cm]{./images/SecurityDesignMap.png} + \caption{Security Design Map~\cite{Benzel2005}} + \label{fig:SecDesignMap} +\end{figure*} + +It should be noted that the design principles for security in this paper will follow the same structure shown in Figure~\ref{fig:SecDesignMap}. + +The first step in formulating a framework is to define the inputs; these being the requirement specifications that rule the behavior of the system and the component definitions/properties that underpin all functional implementation. Next will be a conversation about the mapping process that will need to combine these two spaces into a single design (or series of designs; based on requirement or property needs), along with other considerations and concerns that need to be taken into account during the mapping process. \subsection{Requirement Specifications} \label{Requirement Specifications} +Requirement specifications are the different high-level prerequisites that a user, or manufacturer, needs a system to implement. Specifications can include the need to ensure that data is private or the function/design of a system is kept private (e.g. kernel and code) to some evaluative function. This evaluation is commonly along the lines of requiring a specific amount of time for which data can remain safe (e.g.
data privacy is ensured for `X' number of years) or requiring that an attacker must spend a specific amount of monetary resources to access private data (e.g. data privacy is ensured for `X' amount of money). In both of these instances, the developer/designer is attempting to strike a balance between the requirement specifications provided and varying implementation schema for producing the desired functionality. As one can see, the overarching concern at this level of thinking is how to protect data, or the system, in order to keep it private; which is at the core of all security design and hence why it needs to be a preliminary part of the design process. + +Information, whose protection is required by a security policy (e.g., access control to user-domain objects) or for system self-protection (e.g., maintaining integrity of kernel code and data), must be protected to a level of continuity consistent with the security policy and the security architecture assumptions; thus providing `continuous protection on information'. Simply stated, no guarantees about information integrity, confidentiality or privacy can be made if data is left unprotected while under control of the system (i.e., during creation, storage, processing or communication of information and during system initialization, execution, failure, interruption, and shutdown); one cannot claim to have a secure system without remaining secure for all aspects of said system. + +From the development and design exploration of the function space, a user should have two aspects of the high-level development; the behavioral description (e.g. how the system should behave given an input) and levels of abstraction that yield a `plan' for how the functional requirements will be implemented by lower levels of software and hardware. The reader should recognize the precept of platform mapping from platform-based design, although only part of the full process has been explored: mapping of the function space. \subsection{Component Definitions} \label{Component Definitions} +Component definitions relate to the security properties that different components have based on the definitions of their behavior and functionality. Security properties, at the component definition level, include more physical properties of the varying components: reliability, confidentiality, uniqueness. If the component in question were a processor then security properties could manifest as the following: cryptographic functions/functionality, whether the component is a TPM, secure key storage/protection, use of PUFs for greater variability, uniform power consumption as a countermeasure to side-channel attacks, unexposed pins using a ball grid array (BGA), specific power drain/usage restrictions for operational functionality, anti-tampering or anti-reverse engineering properties of a component, etc. A developer/designer will need to determine just how low-level the architectural space will go (e.g. what is the lowest level being incorporated into the mapping process), along with the properties that must be noted/maintained at such a level. Furthermore, the exploration of this design space will lead to creating higher-level implementations and functionality that is derived from the lower-level components. As with any development and design procedure there will need to be tradeoff optimization that will examine any conflicting properties (e.g. size and heat dissipation requirements) when mapping the platform/architectural space toward the function space.
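+To ground the component definitions just described, the following minimal sketch (written in Python purely for illustration; every class, field, and threshold is hypothetical rather than taken from any existing framework or standard) shows the shape that a component definition with attached security properties could take, and how it might be checked against the cost-based privacy requirements of Section~\ref{Requirement Specifications}:
+\begin{verbatim}
from dataclasses import dataclass

@dataclass(frozen=True)
class SecurityProperties:
    """Hypothetical security attributes of a component definition."""
    has_crypto: bool = False       # on-component cryptographic functions
    is_tpm: bool = False           # component can act as a TPM/root-of-trust
    has_puf: bool = False          # PUF available for keys/authentication
    uniform_power: bool = False    # side-channel countermeasure
    bga_hidden_pins: bool = False  # unexposed pins via ball grid array
    tamper_resistant: bool = False
    years_to_break: float = 0.0    # estimated cost-to-break, in years
    dollars_to_break: float = 0.0  # estimated cost-to-break, in dollars

@dataclass(frozen=True)
class Component:
    """A component in the architectural space."""
    name: str
    monetary_cost: float      # unit cost, in dollars
    integration_time: float   # integration effort, arbitrary units
    security: SecurityProperties

def meets_privacy_requirement(c, min_years, min_dollars):
    """Requirement: data must remain private to some cost value."""
    return (c.security.years_to_break >= min_years and
            c.security.dollars_to_break >= min_dollars)

cpu = Component("secure_cpu", monetary_cost=25.0, integration_time=3.0,
                security=SecurityProperties(has_crypto=True, has_puf=True,
                                            uniform_power=True,
                                            years_to_break=10.0,
                                            dollars_to_break=1e6))
print(meets_privacy_requirement(cpu, min_years=5.0, min_dollars=1e5))  # True
+\end{verbatim}
+In a real framework the property fields and the evaluation function would be dictated by the rigorous documentation and standards argued for throughout this paper; the sketch only illustrates how component definitions could be made machine-checkable against requirement specifications.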
+ +For example, let us examine the use of Physical Unclonable Functions (PUFs) for low-level security design. As mentioned in Section~\ref{Related Work} PUFs have a desirable resilience against tampering, are enticing due to their basis in process variation (i.e. hardware variations: jitter, noise, temperature, voltage), and have `Challenge-Response Pairs' (CRPs) as their input/output nets~\cite{Tehranipoor2015}. Hence, PUFs have the properties that different components may need, which is why PUFs can be used for elements ranging from key generation to authentication functionality. These aspects are dependent on the properties of the underlying hardware and while they may be similar in function and purpose these varying aspects contribute not only to the developer's use of these components but to the behavioral description and restrictions that the implemented function/module carries going forward. Further investigation into the concerns and considerations of the mapping process will occur in Section~\ref{Mapping}. + +As the development goes `upwards' in levels of abstraction, one will be implementing functions, and modules, as the result of combining/connecting different components. Each of these abstracted function/implementation levels will need to be viewed as its own new `component' with its own security properties, functionality, and behavior. This is the same `circular' behavior we see in mapping down from the function space. Another consideration at this level of design and development is: what happens when one is unable to trust the components being used to develop a system design (i.e. can a secure system be constructed from insecure components?). + +\begin{quotation} +``In order to develop a component-based trustworthy system, the development process must be reuse-oriented, component-oriented, and must integrate formal languages and rigorous methods in all phases of system life-cycle.''~\cite{Mohammed2009} +\end{quotation} + +With this in mind, and following the precepts presented by Benzel et.~al., the assumption is that the system is only as secure as its least secure component. + +The concern is how to evaluate the security of the system. Two methods are to either evaluate the entire system after its completion (which has been repeatedly shown to be a terrible idea) or to evaluate components during the development and design stage. Evaluation of components can be seen as either needing to perform an `initial conditions validation' for evaluating security of the system or one can `pre-evaluate' security such that the level of trustworthiness is known, and trusted, ahead of time. Performing the initial system state validation incurs a definite overhead to ensure security of the system prior to operation, though after the initial validation one can assume there is no change in trustworthiness until an upgrade/modification occurs. This is yet another one of the open-ended questions of this new design methodology: how to evaluate (and ensure) the trustworthiness and security of the system. \subsection{Mapping} \label{Mapping} +Mapping is the process of evaluating, and maximizing, the requirement specifications and component definitions for the purpose of developing a design that incorporates all the behavioral characteristics necessary along with the predetermined essential hardware/functional properties using a predetermined objective function.
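+As a toy illustration of this definition (a Python sketch; the component names, scores, weights, and thresholds below are invented for demonstration and do not come from any published mapping framework), the mapping process can be viewed as a search over the architectural space for the feasible combination of components that maximizes an objective function under the requirement constraints:
+\begin{verbatim}
from itertools import combinations

# Hypothetical components: name -> (security score, dollar cost, time cost)
components = {
    "puf_key_store": (0.9, 12.0, 2.0),
    "tpm":           (0.8, 18.0, 3.0),
    "crypto_engine": (0.7, 10.0, 1.5),
    "plain_cpu":     (0.2,  5.0, 0.5),
}

BUDGET = 35.0        # requirement: total monetary cost ceiling
MIN_SECURITY = 1.5   # requirement: aggregate security floor

def security(p):  return sum(components[c][0] for c in p)
def cost(p):      return sum(components[c][1] for c in p)
def time_cost(p): return sum(components[c][2] for c in p)

def feasible(p):
    return cost(p) <= BUDGET and security(p) >= MIN_SECURITY

def objective(p):
    # Toy objective: reward security, lightly penalize integration time.
    return security(p) - 0.1 * time_cost(p)

# Exhaustively search the (tiny) design space for the best feasible mapping.
candidates = [p for r in range(1, len(components) + 1)
                for p in combinations(components, r) if feasible(p)]
best = max(candidates, key=objective, default=None)
print(best)  # ('puf_key_store', 'crypto_engine', 'plain_cpu') for these numbers
+\end{verbatim}
+Real design spaces are far too large for such exhaustive enumeration, which is precisely why the evaluation, comparison, and virtualization machinery discussed below is the open-ended part of the problem.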
Developing mapping functionality is no easy task and this is where the open-ended question for incorporating security into platform-based design lies. The map space should take into account not only the current requirements of the system, but also account for any future upgrades/modifications along with disaster planning and recovery. To the best of this paper's knowledge, this is an area lacking hard standardization/optimization techniques and one that has had no formal definition of a framework. + +The first, and most difficult, step in developing a framework is determining how to evaluate, and compare, both the requirement specifications and component definitions. Along this same mindset, mapping also needs to account for layers of abstraction which will also necessitate their own evaluation/validation schema, as this will allow for comparison of different implementation schemes at the various recursive explorations of intermediate `platform levels'. At each of these varying levels of development there will be different criteria that will dictate the validity of a given design; e.g. trustworthiness, functional reliability, data fidelity. Each of these properties, while paramount to the design development at the arbitrary levels, should be notably different, which then complicates how to compare levels of abstraction. Standards and documentation come to the rescue at this point. With thorough and rigorous documentation one can begin formulating a structured framework that others can follow and develop on. Even this documentation and standardization will need to account for a large range of scenarios and concerns that may arise during development of varying designs. Virtualization is a solution to this aspect of the process, and will need to be at the core of the mapping functionality. As with any hardware/software development and design there is the need for virtualization tools that will hasten the exploration of the platform and function spaces. It should be noted that while documentation, virtualization, and standardization do play key roles in the mapping process, they are also the most expensive aspects of incorporating a new methodology to security design through PBD. Time is the true cost of all this development, but is also the return once virtualization and standards are completed. + +Future development considerations should also be taken into account, depending on the `time-to-live' that is expected of the system being designed. These future considerations include disaster/catastrophe planning and any upgrade/modification paths. These are important considerations when developing not only single modules/components within the system, but how the system will act when these moments occur. This is similar to planning for a `secure failure' event, but these topics will be expanded upon later. + +This paper will now examine some of the attention that has to be given to development of a secure system. The definition of ``trustworthiness'' that is chosen to help color this perspective on security is as defined in the Benzel et.~al.~paper. +\begin{quotation} +``\textit{Trustworthy} (noun): the degree to which the security behavior of the component is demonstrably compliant with its stated functionality (\textit{e.g., trustworthy component}). \\ \textit{Trust}: (verb) the degree to which the user or a component depends on the trustworthiness of another component. For example, component A \textit{trusts} component B, or component B is \textit{trusted} by component A.
Trust and trustworthiness are assumed to be measured on the same scale.''~\cite{Benzel2005} +\end{quotation} +Dependability is a measure of a system's availability, reliability, and its maintainability. Furthermore, Avizienis et.~al.~extend this to also encompass mechanism designs to increase and maintain the dependability of a system~\cite{Avizienis2004}. For the purpose of this paper, the interpretation is that trustworthiness requires inherent dependability; i.e. a user should be able to trust that trustworthy components will dependably perform the desired actions and alert said user should an error/failure occur. Unfortunately, trust(worthiness) is measured using the same abstract scale~\cite{Benzel2005}. Thus, in turn, there is a void for the design and development of an understandable and standardized scale of trust(worthiness). \begin{figure*} \includegraphics[width=\textwidth,height=8cm]{./images/pbdsec_mapping.png} @@ -287,21 +336,44 @@ Others are incorporating platform-based design in everything from image processi \label{fig:MapSpace} \end{figure*} -As promised earlier, the paper will now examine the different scopes of security/trustworthiness: local, network, and distributed. Each of these scopes will be examined in terms of the considerations/challenges, principles, and policies that should be used to determine action/behavior of these different security elements and the system as a whole. +Different scopes of security/trustworthiness can be roughly placed into three categories: local, network, and distributed. Each of these scopes will be examined in terms of the considerations/challenges, principles, and policies that should be used to determine action/behavior of these different security elements and the system as a whole. The local scope encompasses a security element's own abilities, trustworthiness, and the dependencies of that element (e.g. least common mechanisms, reduced complexity, minimized sharing, and the conflict between this and least common mechanisms). The purpose of this section is to present the considerations, principles, and policies that govern the behavior and function of security elements/components at the local scope/level. First, this paper will reiterate the definitions stated in the Benzel et.~al.~paper. Failure is a condition in which, given a specifically documented input that conforms to specification, a component or system exhibits behavior that deviates from its specified behavior. A module is seen as a unit of computation that encapsulates a database and provides an interface for the initialization, modification, and retrieval of information from the database. The database may be either implicit, e.g. an algorithm, or explicit. Lastly, a process(es) is a program(s) in execution. To further define the actions/behavior of a given component in the local scope, this paper moves to outlining the principles that define component behavior at the local device level. \\ The first principle is that of `Least Common Mechanisms'. If multiple components in the system require the same function of a mechanism, then there should be a common mechanism that can be used by all of them; thus various components do not have separate implementations of the same function but rather the function is created once (e.g. device drivers, libraries, OS resource managers). The benefit of this is to minimize complexity of the system by avoiding unnecessary duplicate mechanisms.
Furthermore, modifications to a common function can be performed (only) once and the impact of the proposed modifications can be more easily understood in advance. The simpler a system is, the fewer vulnerabilities it will have; this is the principle of `Reduced Complexity'. From a security perspective, the benefit to this simplicity is that it is easier to understand whether an intended security policy has been captured in system design. At a security model level, it can be easier to determine whether the initial system state is secure and whether subsequent state changes preserve the system security. Given the current state of the art of systems, the conservative assumption is that every complex system will contain vulnerabilities and it will be impossible to eliminate all of them, even in the most highly trustworthy of systems. At the lowest level of secure data, no computer resource should be shared between components or subjects (e.g. processes, functions, etc.) unless it is necessary to do so. This is the concept behind `Minimized Sharing'. To protect user-domain information from active entities, no information should be shared unless that sharing has been explicitly requested and granted. Encapsulation is a design discipline or compiler feature for ensuring there are no extraneous execution paths for accessing private subsets of memory/data. Minimized sharing influenced by common mechanisms can lead to designs being reentrant/virtualized so that each depending component will have its own virtual private data space; partition resources into discrete, private subsets for each dependent component. This in turn can be seen as a virtualization of hardware, a concept already illustrated earlier with the discussion of platform-based design. A further consideration of minimized sharing is to avoid covert timing channels in which the processor is one of the shared components. In other words, the scheduling algorithms must ensure that each depending component is allocated a fixed amount of time to access/interact with a given shared space/memory. Development of a technique for controlled sharing requires execution durations of shared mechanisms (or mechanisms and data structures that determine duration) to be explicitly stated in design specification documentation so that effects of sharing can be verified and evaluated. This once again illustrates the need for rigorous standards to be clearly documented. As the reader has undoubtedly seen, there are conflicts within just these simple concepts and principles for the local scope of trustworthiness of a security component. The principles of least common mechanism and minimal sharing directly oppose each other, while still being desirable aspects of the same system. Minimizing system complexity through the use of common mechanisms does lower hardware space requirements for an architectural platform, but at the same time does also maximize the amount of sharing occurring within the security component. A designer can use one of the development techniques already mentioned to help balance these two principles, but in the end the true design question is: what information is safe to share, what information must be protected under lock and key, and how to then communicate that information with high security and fidelity. How does a designer decide upon how elements communicate? How does one define a security component in terms of a network; how is communication done, what concerns/challenges are there?
Rigorous definitions of input/output nets for components and the methods for communication (buses, encryption, multiplexing) must be documented and distributed as the standard methods for communicating. Service, defined in the scope of a network, refers to processing or protection provided by a component to users or other components. E.g., communication service (TCP/IP), security service (encryption, firewall). These services represent the communication that occurs between different elements in a secure system. Each service should be documented to rigorously define the input, and output, nets for each component and the method used for communication (buses, encryption, multiplexing). Since these services, and their actions, are central to the operation/behavior of a secure system, there are a series of considerations, principles, and policies that a system architect/designer must acknowledge when deciding on the layout of security components within a network. The principle of `Secure Communication Channels' states that when composing a system where there is a threat to communication between components, each communications channel must be trustworthy to a level commensurate with the security dependencies it supports. In other words, this reflects how much the component is trusted by other components to perform its security functions. Several techniques can be used to mitigate threats to the communication channels in use. Use of a channel may be restricted by protecting access to it with a suitable access control mechanism (e.g. reference monitor located beneath or within each component). End-to-end communications technologies (e.g. encryption) may be used to eliminate security threats in the communication channel's physical environment. -`Self-Reliant Trustworthiness' means that systems should minimize their reliance on external components for system trustworthiness. A corollary to this relates to the ability of a component to operate in isolation and then resynchronize with other components when it is rejoined with them. In other words, if a system were required to maintain a connection with another external entity in order to maintain its trustworthiness, then that very system would be vulnerable to drop in connection/communication channels. Thus from a network standpoint a system should be trustworthy by default with the external connection being used as a supplement to the component's function. The principle of `Partially Ordered Dependencies' states that calling, synchronization and other dependencies in the system should be partially ordered; e.g. for certain pairs of elements in a set, one of the elements precedes the other. A fundamental tool in system design is `layering'. A system can be organized into functionally related modules of components, where layers are linearly ordered with respect to inter-later dependencies. Inherent problems of circularity can be more easily managed if circular dependencies are constrained to occur within layers. In other words, if a shared mechanism also makes calls to or otherwise depends on services of calling mechanisms, creating a circular dependency, performance and liveness problems can result. Partially ordered dependencies and system layering contribute significantly to the simplicity and coherency of system design. A system should be built to facilitate maintenance of its security properties in face of changes to its interface, functionality structure or configuration; in other words allow for `secure system evolution'.
Changes may include upgrades to system, maintenance activities. The benefits of designing a system with secure system evolution are reduced lifecycles costs for the vendor, reduced costs of ownership for the user, and improved system security. Most systems can anticipate maintenance, upgrades, and changes to configuration. If a component is constructed using the precepts of modularity and information hiding then it becomes easier to replace components without disrupting the rest of a system. A system designer needs to take into account the impact of dynamic reconfiguration on the secure state of the system. Just as it is easier to build trustworthiness into a system from the outset (and for highly trustworthy systems, impossible to achieve without doing so), it is easier to plan for change than to be surprised by it~\cite{Benzel2005}. +Having touched upon the network communication considerations for modeling a secure and trustworthy component, this paper now moves toward examining these security element behaviors and functions with respect to their existence within a larger distributed system. Trust, in the scope of a distributed system, shifts to define the degree to which the user or a component depends on the trustworthiness of another component. + +`Self-Reliant Trustworthiness' means that systems should minimize their reliance on external components for system trustworthiness. A corollary to this relates to the ability of a component to operate in isolation and then resynchronize with other components when it is rejoined with them. In other words, if a system were required to maintain a connection with another external entity in order to maintain its trustworthiness, then that very system would be vulnerable to drops in its connection/communication channels. Thus, from a network standpoint, a system should be trustworthy by default, with the external connection being used as a supplement to the component's function. + +The next concept that will be tackled is that of `Hierarchical Trust for Components'. Security dependencies in a system will form a partial ordering if they preserve the principle of trusted components. This is essential to eliminate circular dependencies with regard to trustworthiness. Trust chains have various manifestations, but this should not prohibit the use of overly trustworthy components. Taking a deeper dive into `hierarchical protections', a component need not be protected from more trustworthy components. In the most degenerate case of the most trusted component, the component must protect itself from all other components. One should note that a trusted computer system need not protect itself from an equally trustworthy user. This same challenge occurs at all levels and requires rigorous documentation to alleviate the constraint. Hierarchical protection is the precept that regulates the following concept of `secure distributed composition'. Composition of distributed components that enforce the same security policy should result in a system that enforces that policy at least as well as the individual components do. If components are composed into a distributed system that supports the same policy, and information contained in objects is transmitted between components, then the transmitted information must be at least as well protected in the receiving component as it was in the sending component.
This is similar to how SSL/TLS is used in current-day implementations; data may be secure in transit, but if the endpoints are not secure then the data is not secure either. To ensure a correct system-wide level of confidence in correct policy enforcement, the security architecture of the distributed composite system must be thoroughly analyzed. + +The principle of `Partially Ordered Dependencies' states that calling, synchronization and other dependencies in the system should be partially ordered; i.e. for certain pairs of elements in a set, one of the elements precedes the other. A fundamental tool in system design is `layering'. A system can be organized into functionally related modules of components, where layers are linearly ordered with respect to inter-layer dependencies. Inherent problems of circularity can be more easily managed if circular dependencies are constrained to occur within layers. In other words, if a shared mechanism also makes calls to or otherwise depends on services of calling mechanisms, creating a circular dependency, performance and liveness problems can result. Partially ordered dependencies and system layering contribute significantly to the simplicity and coherency of system design. A system should be built to facilitate maintenance of its security properties in the face of changes to its interface, functional structure, or configuration; in other words, to allow for `secure system evolution'. Changes may include system upgrades and maintenance activities. The benefits of designing a system for secure system evolution are reduced life-cycle costs for the vendor, reduced costs of ownership for the user, and improved system security. Most systems can anticipate maintenance, upgrades, and changes to configuration. If a component is constructed using the precepts of modularity and information hiding, then it becomes easier to replace components without disrupting the rest of the system. A system designer needs to take into account the impact of dynamic reconfiguration on the secure state of the system. Just as it is easier to build trustworthiness into a system from the outset (and for highly trustworthy systems, impossible to achieve without doing so), it is easier to plan for change than to be surprised by it~\cite{Benzel2005}. + +To maintain a trustworthy system, and a network of distributed trustworthy components, a designer should prepare not only for expected inputs but also for possible invalid requests or malicious mutations that could occur in the future. Invalid requests should not result in a system state in which the system cannot properly enforce the security policy. The earlier-mentioned concept of secure failure applies in that a rollback mechanism can return the system to a secure state, or at least fail the component in a safe and secure manner that maintains the required level of trustworthiness (does not lower the overall trustworthiness of the entire distributed system). A system should be able to handle such a `secure failure': failure of a system function or mechanism should not lead to a violation of any security policy. Ideally the system should be capable of detecting failure at any stage of operation (initialization, normal operation, shutdown, maintenance, error detection and recovery) and take appropriate steps to ensure security policies are not violated, as most machines already do today.
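The partial-ordering principle above also lends itself to a mechanical check. As a minimal sketch (in Python; the component names and the dependency table are hypothetical stand-ins for declarations recorded in the design documentation), a topological sort of the declared dependencies either yields a valid layering or exposes a circular dependency:
\begin{verbatim}
from collections import deque

# Hypothetical dependency declarations, as recorded in the design
# documentation: each component lists the components it depends on.
DEPENDS_ON = {
    "application":    ["crypto_service", "audit_log"],
    "crypto_service": ["key_store"],
    "audit_log":      ["key_store"],
    "key_store":      [],
}

def check_partial_order(depends_on):
    """Kahn's algorithm: return a layering (lowest layer first), or
    raise if the declared dependencies contain a cycle."""
    indegree = {c: len(deps) for c, deps in depends_on.items()}
    dependents = {c: [] for c in depends_on}
    for component, deps in depends_on.items():
        for dep in deps:
            dependents[dep].append(component)
    ready = deque(c for c, n in indegree.items() if n == 0)
    order = []
    while ready:
        component = ready.popleft()
        order.append(component)
        for user in dependents[component]:
            indegree[user] -= 1
            if indegree[user] == 0:
                ready.append(user)
    if len(order) != len(depends_on):
        raise ValueError("circular dependency: partial ordering violated")
    return order

print(check_partial_order(DEPENDS_ON))
# ['key_store', 'crypto_service', 'audit_log', 'application']
\end{verbatim}
Run at design time against the documented dependency declarations, such a check turns the layering requirement into an enforceable property rather than a convention.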
Touching on the earlier idea of secure system evolution, the reconfiguration function of the system should be designed to ensure continuous enforcement of security policies during the various phases of reconfiguration. Actions that are security-relevant must be traceable to the entity on whose behalf the action is being taken; there must be `accountability and traceability'. This requires the designer to put into place a trustworthy infrastructure that can record details about actions that affect system security (e.g., an audit subsystem). This system must not only be able to uniquely identify the entity on whose behalf the action is being carried out, but also record the relevant sequence of actions that are carried out. An accountability policy ought to require that the audit trail itself be protected from unauthorized access and modification. Associating actions with system entities, and ultimately with users, and making the audit trail secure against unauthorized access and modification provide nonrepudiation: once an action is recorded, it is not possible to change the audit trail. Any designer should note that if a violation occurs, analysis of the audit log may provide additional information that is helpful in determining the path or component that allowed the violation of the security policy. Just as this audit trail would be invaluable to a debugging developer, an attacker could also use this information to illuminate the actions/behavior of the system; therefore this data absolutely must remain protected. +\subsection{Documentation, Rigor, and Tools} +\label{Documentation, Rigor, and Tools} +As the development and design, along with the implementation and realization, of a circuit/component/system continues along its life-cycle, it is paramount that thorough and rigorous documentation is maintained throughout the entire process. This documentation is not just for chronicling the steps taken to create a component/system, but also for noting standards that are developed (or dropped) during the design stages, and for aiding in the virtualization/automation of tools for exploring the design/development space. + +Focusing efforts on rigorous design documentation allows security concerns to be recognized early in the development process, so that these aspects can be given sufficient attention in subsequent stages of the device's life cycle. ``By controlling system security during architectural refinement, we can decrease software production costs and speed up the time to market~\cite{ZhouFoss2006}.'' This approach also enhances the role of software architects by requiring that their decisions serve not only functional decomposition, but also the fulfillment of non-functional requirements. Hence, virtualization/automation of security development is not only effective but also enticing. + +Just as there is a requirement for adequate documentation, there is also a need for `procedural rigor'. The rigor of the system's life-cycle process should be commensurate with its intended trustworthiness. Procedural rigor defines the depth and detail of a system's life-cycle procedures. These rigors contribute to the assurance that a system is correct and free of unintended functionality in two ways. First, they impose a set of checks and balances on the life-cycle process such that the introduction of unspecified functionality is thwarted.
Second, applying rigorous procedures to specifications and other design documents contributes to the ability of users to understand a system as it has been built, rather than being misled by an inaccurate system representation, thus helping to ensure that the security and functional objectives of the system have been met. Highly rigorous development procedures supporting high trustworthiness are costly to follow. However, the lowered cost of ownership resulting from fewer flaws and security breaches during the product maintenance phase can help to mitigate the higher initial development costs associated with a rigorous life-cycle process. +The reasoning for having procedural rigor and sufficient user documentation is to allow for `repeatable, documented procedures'. Techniques used to construct a component should permit the same component to be completely and correctly reconstructed at a later time. Repeatable and documented procedures support the creation of components that are identical to a component created earlier that may be in widespread use. + +Virtualization will help offset the time and monetary costs of using and implementing these new methodologies/ideologies. Essentially the issue boils down to how to abstract the lower-level requirements of a system (assembly/C) into a simpler, higher-level set of tools (API/block). A new set of tools needs to be developed that can be used to build bigger and greater things out of a series of smaller, more manageable/customizable blocks. Flexibility of the low-level elements will help minimize conflict when attempting to design higher-level blocks. As with any new system, there is a need for `tunable designs' that can be centered around specific aspects (e.g. power/energy-efficient systems to minimize ``power cost buildup'', or security/trust-centric needs). Functions, in this tool set, should be kept simple (e.g. decide the output, \textbf{but} not how the output manifests). The reason behind this is that the design can remain simple in its operation. Virtualization tools lend themselves to both the idea of abstraction (choosing the simple output) and that of standardization/documentation (knowing what the outputs are without needing to know exactly how they manifest; just that they will)~\cite{Alagar2007}. Another option is for the automated tool to trade out specific security components as an easier way to increase security without requiring re-design/re-construction of the underlying element (e.g. modularity). There is always the requirement that the overall trustworthiness of a new system must meet the standards of the security policies that `rule' the system. For these reasons a user would desire rigorous documentation that lays out the requirements of each component, so that when trying to replace faulty or damaged components there would be no loss to the overall trustworthiness of the system, while also not introducing any vulnerabilities due to the inclusion of new system components. + +As with any new system there should be `sufficient user documentation'. Users should be provided with adequate documentation and other information such that they contribute to, rather than detract from, a system's security. The availability of documentation and training can help to ensure a knowledgeable cadre of users, developers, and administrators.
If any level of user does not know how to use a component properly, does not know the standard security procedures, or does not know the proper behavior to prevent social engineering attacks, then said user can easily introduce new system vulnerabilities. + + \section{Model Review} -\label{Model review} +\label{Model Review} +This section discusses some of the more nuanced aspects of the security-incorporating platform-based design model/framework presented in Section~\ref{Proposed Approach}. More attention is given to the challenges, considerations, and advantages of a security-centric PBD framework. As with any new technology/design methodology/concept, there are expensive initial costs for development, evaluation, and validation until more rigorous documentation can be produced. As with any early development, it pays to think everything through before diving into prototyping (a roughly 90/10 split between planning and action is desirable, as with software development). This can be aided through the development of virtualization tools, which unfortunately come with their own development, evaluation, and validation costs. To that end, two main concerns for effective platform-based design development and adoption are software development and a set of tools that insulate the details of the architecture from the application software (e.g. virtualization). The more abstract the programmer's model, the richer the set of platform instances, but the more difficult it is to choose the ``optimal'' architecture platform instance and map automatically into it~\cite{Vincentelli2002}. @@ -323,13 +395,11 @@ documented, standardization. ``Most prevalent trust models only focus on assessing trustworthiness of systems at runtime, and do not provide necessary predictions of the trust of systems before they are built. This makes improving trustworthiness of the composed system to be costly and time-consuming. Therefore it is important to consider trust of the composed software system at the development time itself.''~\cite{Gamage} \end{quotation} -Once again, the intrinsic characteristics assumed for and provided by the channel must be specified with such documentation that it is possible for system designers to understand the nature of the channel as initially constructed and to assess the impact of any subsequent changes to the system. Without this rigorous documentation and standardization, the trustworthiness of the communications between security elements can not be assured. +Once again, the intrinsic characteristics assumed for and provided by the channel must be specified with such documentation that it is possible for system designers to understand the nature of the channel as initially constructed and to assess the impact of any subsequent changes to the system. Without this rigorous documentation and standardization, the trustworthiness of the communications between security elements cannot be assured. The main challenge here is that there needs to be a clear and documented way by which one can determine trustworthiness and protection for a system, along with outlining the hierarchy of trust that is inherent to the system. Once a failed security function is detected, the system may reconfigure itself to circumvent the failed component, while maintaining security, and still provide all or part of the functionality of the original system, or completely shut itself down to prevent any (further) violation of security policies.
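As a minimal sketch of this reconfigure-or-shutdown behavior (in Python; the component names, the self-test interface, and the notion of an `essential' set are assumptions of this illustration, not prescriptions), a supervisor can revoke a failed component outright and continue only if no essential security function has been lost:
\begin{verbatim}
class SecuritySupervisor:
    """Sketch of detect-and-circumvent: failed components are revoked
    (fail closed); loss of an essential function forces shutdown."""

    def __init__(self, components, essential):
        self.components = dict(components)  # name -> (self_test, service)
        self.essential = set(essential)     # functions we cannot run without

    def health_check(self):
        for name, (self_test, _service) in list(self.components.items()):
            if not self_test():
                self._circumvent(name)

    def _circumvent(self, name):
        # Revoke the failed component: all further requests are denied.
        del self.components[name]
        if name in self.essential:
            raise SystemExit("essential security function failed; halting")
        # Otherwise continue in a degraded mode with policy still enforced.

    def invoke(self, name, *args):
        if name not in self.components:
            raise PermissionError("component revoked or unknown: denied")
        _self_test, service = self.components[name]
        return service(*args)
\end{verbatim}
The default-deny `invoke' path reflects the point made immediately below: components should fail in a state that denies rather than grants access.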
Another method for achieving this is to roll back to a secure state (which may be the initial state) and then either shut down, or replace the failed service/component with orthogonal or replicated mechanisms. Failure of a component may or may not be detectable to the components using it; thus one must design a method for `dealing with failure'. For this reason components should fail in a state that denies rather than grants access. A designer could employ multiple protection mechanisms (whose features can be significantly different) to reduce the possibility of attack repetition, but it should be noted that redundancy techniques may increase resource usage and adversely affect system performance. Instead, the atomicity properties of a service/component should be well documented and characterized so that a component availing itself of the service can detect and handle an interruption event appropriately; similar to the `self-analysis' function that TPMs have. A well-designed reference monitor could fill most of these roles, though it would require that the reference monitor be able to `self-analyze' its own trustworthiness. While even this would not be a perfect solution, it does help limit total failure to the reference monitor instead of some arbitrary security component. -The main challenge here is that there needs to be a clear and documented way by which one can determine trustworthiness and protection for a system, along with outlining the hierarchy of trust that is inherent to the system. - -Having tackled the network communication considerations for modeling a secure and trustworthy component, this paper moves toward examining these security element behaviors and functions with respect to their existence within a larger distributed system. Trust, in the scope of a distributed system, shifts to define the degree to which the user or a component depends on the trustworthiness of another component. +Any designer must ensure protection of the system by choosing interface parameters so that security-critical values are provided by the more trustworthy components. To eliminate time-of-check-to-time-of-use vulnerabilities, the system's security-relevant operations should appear atomic. It could also be desirable to allow system security policies to be ``modifiable'' at runtime, e.g. when needing to adjust to catastrophic external events. This raises the complexity of the system, but does allow for flexibility in the face of failure. Any changes to security policies must be not only traceable but also verifiable; it must be possible to verify that changes do not violate security policies. This could be handled by a central reference monitor. Following this thread of thinking, a system architect/designer should understand the consequences of allowing modifiable policies within the system. Depending on the type of access control and the actions that are allowed and controlled by policies, certain configuration changes may lead to inconsistent states of discontinuous protection, due to the complex and undecidable nature of the problem of allowing runtime changes to the security policies of the system. In other words, even modifications/updates need to be planned and documented rigorously for the purpose of maintaining a secure and trustworthy system. System modification procedures must maintain system security with respect to the goals, objectives, and requirements of its owners, allowing for `secure system modification'.
Without proper planning and documentation, upgrades and modifications can transform a secure system into an insecure one. These concepts are similar to `secure system evolution', applied at the network scope of these security components. Furthermore, a designer can use the precepts of a reference monitor to provide continuous enforcement of a security policy, noting that every request must be validated and that the reference monitor must protect itself. Ideally the reference monitor component would be ``perfect'' in the sense of being absolutely trustworthy and not requiring an upgrade/modification path (thus limiting this element's chance of becoming compromised). @@ -343,8 +413,6 @@ Complexity must be minimized. This point was touched upon earlier, but is just While this paper proposes one model for security design and development, it is by no means the only model for implementing security in a system. ``Defense in depth''~\cite{DoD2002} is a model in which security is derived from the application of multiple mechanisms, creating a series of barriers against attack by an adversary. Unfortunately, without a sound security architecture and supporting theory, the non-constructive basis of this approach reduces the model to a temporary patch; putting barriers in place does not equate to levels of trustworthiness. The ``Balanced assurance''~\cite{Lunt1988} model centers around a hierarchy of security policies, where different policies are allocated to different components of a system. The concept is that the trustworthiness of a given component must be consistent with the importance of that component's policy; the greater the importance, the greater the required trustworthiness. The fault here is that a system can only be considered as secure as its least secure component. While the model is interesting and shows promise in specific scenarios, it is not an overarching model that will function in all cases. There are multiple models for performing/implementing security, but a significant part of the cost of building a secure system is that of evaluating, and subsequently proving, trustworthiness through a third party's efforts. A method for minimizing the costs of this evaluation is to make use of components that have already had their trustworthiness evaluated and verified, thereby minimizing the need to evaluate the system itself, as it is made of already-trustworthy components. This model would allow for ``evaluation by pieces'', whereby one acknowledges previously evaluated components and does not require their examination in the greater evaluation of the composite system. Unfortunately, this model has only been made available to ``low assurance'' systems, as it lacks a well-formed theory of correctness~\cite{Benzel2005}. -This paper can not be concluded without proposing a framework that is believed to be the jumping point for a security model to begin the adaptation of platform-based design into security development. The reference monitor seems a favorable choice as this sort of model is already used in distributed systems, but there is an extremely important need to maintain the security/trust/trustworthiness of this reference monitor (abstraction for necessary and sufficient features of component that enforces access control in secure systems). While it should be immediately seen that there are faults with this design (e.g. single point of failure), do not allow these shortcomings to run your doubt.
It is the belief of this paper that an initial starting point for PBD-Security design development is to use this existing reference monitor concept, along with other developed tools (e.g. CBSE, TPM), and piece together the initial framework and early phase development for this new methodology, so that future efforts can be spent developing and perfecting this technique. - There is a need for development of platform-based design for security elements of all types, across all platforms. The mechanisms and procedures required are already in use, but this does not mean @@ -372,6 +440,8 @@ yet not truly explored at regular levels. The manufacturer's standpoint boils down to: the design should minimize mask-making costs but be flexible enough to warrant its use for a set of applications so that production volume will be high over an extended chip lifetime~\cite{Vincentelli2007}. Companies try to drive adoptability by means of creating something that users want to interact with, but not be complicated to learn (e.g. abstraction of technology for ease of use). Accounting for ease of use can lead to vulnerabilities in security or the development of new tools. Automation is desirable from a `business' standpoint since customers/users enjoy the `set it and forget it' mentality for technology (especially new technologies). Companies/Manufacturers need positive customer/user experiences, otherwise there is no desire to extend any supplied functionality to any other devices/needs on the part of the consumer. Adoptability tends to come from user `word of mouth' praising the functionality and ease of use of new technology/methods/devices and how the developing party reacts to system failures or user-need (branching from complaints and support requests). This is exactly why industry would love for platform-based design to become a new standard; gain high adoptability. The monetary costs saved would be enough to warrant adoption of the technology, \textbf{but} the monetary costs of developing such a system (e.g. design, evaluation, validation) does not carry the same attraction (simply because companies are selfish and want to \textbf{make} money). +This paper can not be concluded without proposing a framework that is believed to be the jumping point for a security model to begin the adaptation of platform-based design into security development. The reference monitor seems a favorable choice as this sort of model is already used in distributed systems, but there is an extremely important need to maintain the security/trust/trustworthiness of this reference monitor (abstraction for necessary and sufficient features of component that enforces access control in secure systems). While it should be immediately seen that there are faults with this design (e.g. single point of failure), this is not enough to abandon the model. It is the belief of this paper that an initial starting point for PBD-Security design development is to use this existing reference monitor concept, along with other developed tools (e.g. CBSE, TPM), and piece together the initial framework and early phase development for this new methodology, so that future efforts can be spent developing and perfecting this technique. + \section{Conclusion} \label{Conclusion} @@ -381,7 +451,11 @@ design and development (PBD) but also for proper incorporation and planning of system security in an intelligent, rigorous and documented/standardized way. 
As with the design of any tool there are concerns during the development, evaluations and validation -processes~\cite{Pinto2006}. Common pitfalls of development are +processes~\cite{Pinto2006}. + +Virtualization should be used for exploring the design space; not only is the cost of prototyping incredibly expensive, but redesign is equally costly. Virtualization aids by removing the need for physical prototyping (less monetary costs) and allows for more rapid exploration of the full design space. While the design time for such powerful tools will be expensive (both in monetary and temporal costs), the rewards of developing, validating, and evaluating this virtualization tool will offset the early design phase costs of automated security component design. + +Common pitfalls of development are mishandling corner cases and inadvertently misinterpreting changes in the communication semantics. Problems arise because of poor understanding and the lack of precise, rigorous, definitions of the @@ -392,90 +466,7 @@ properties of the design that have already been established (e.g. the useful virtualization tools for thorough exploration of design spaces for both hardware and software elements. -With these concepts in-mind, it should be obvious that security design \textbf{must} occur from the start! Unless security design is incorporated a priori, a developer can only hope to spend the rest of the development processes, and beyond, attempting to secure a system that took security as optional. Simply put, data \textbf{must} be kept safe. In addition, performing security planning from the start allows for disaster planning and any other possible 'unforeseen' complications. - -% -% -% -% -%---------------------------------------- -% Previous Writing of Paper -%---------------------------------------- -% -% -% -% - -\section{Related Work} -\label{Related Work} - -Standardization is the process of developing and implementing technical standards. Standardization can help to maximize compatibility, interoperability, safety, repeatability, or quality; it can also facilitate commoditization of formerly custom processes. Standards appear in all sorts of domains. For the IC domain standardization manifests as a flexible integrated circuit where customization for a particular application is achieved by programming one or more components on the chip (e.g. virtualization). PC makers and application software designers develop their products quickly and efficiently around a standard `platform' that emerged over time. As a quick over view of these standards: x86 ISA which makes is possible to reuse the OS \& SW applications, a full specified set of buses (ISA, USB, PCI) which allow for use of the same expansion boards of IC's for different products, and a full specification of a set of IO devices (e.g. keyboard, mouse, audio and video devices). The advantage of the standardization of the PC domain is that software can also be developed independently of the new hardware availability, thus offering a real hardware-software codesign approach. If the instruction set architecture (IAS) is kept constant (e.g. standardized) then software porting, along with future development, is far easier~\cite{Vincentelli2002}. In a `System Domain' lens, standardization is the aspect of the capabilities a platform offers to develop quickly new applications (adoptability). In other words, this requires a distillation of the principles so that a rigorous methodology can be developed and profitably used across different design domains. 
-So why is standardization useful? Standardization allows for manufacturers, system developers, and software designers to all work around a single, accepted, platform. It is understood what the capabilities of the hardware are, what the limitations of system IO will be, along with what the methods/protocols of communication will be. Even these preset aspects of the `standard' have their own `contractual obligations' of how they will function, what their respective `net-lists' are, and where the limitations of such standards lie. While the act of creating a standard takes time, not only due to the time of development but also due to the speed of wide adoption, it is a necessary step. Without standardization, it becomes far more difficult to create universally used complex systems, let alone validate their correctness and trustworthiness. This is how one is able to change the current paradigm to a new standard model. - - - -Virtualization will help offset the time and monetary costs of using and implementing these new methodologies/ideologies. Essentially the issue boils down to how to abstract the lower level requirements of a system (assembly/C) into a simpler high level set of tools (API/block). A new set of tools needs to be developed that can be used to build bigger and greater things out of a series of smaller, more manageable/customizable blocks. Flexibility of low level elements will help minimize conflict when attempting to design higher level blocks. As with any new system, there is a need for `tunable designs' that can be centered around specific aspects (e.g. power/energy efficient systems to minimize ``power cost buildup'', or security/trust centric needs). Functions, in this tool set, should be kept simple (e.g. decide output, \textbf{but} not how the output manifests). The reason behind this is that the design can remain simplistic in its [design and] operation. Virtualization tools lend to both the ideas of abstraction (choosing the simple output) and standardization/documentation (know what the outputs are, but not needing to know exactly how they manifest; just that they will)~\cite{Alagar2007}. They are a much-needed tool for exploring design spaces and bringing codesign of software and hardware elements. - -Hardware/Software codesign is crucial for bridging together the software and hardware aspects of a new system in an efficient and effective manner. There are different coding languages to handle different aspects (i.e. SW/HW) of virtualization. When dealing with the virtualization of software the aspects of timing and concurrency semantics still fall short. These problems come from a lack of resolution and control at the lowest levels of virtualization interaction. The overwhelming hardware issue is that hardware semantics are very specific and tough to simulate. There has been the development of hardware simulation languages, such as SystemC~\cite{Kreku2008}, but there has not been the development of tools to bridge the space between hardware and software simulation/virtualization. Codesign of software simulations of hardware allows for development of high level software abstraction to interact with low level hardware abstraction. The reasoning is that the constant growth in complexity calls for simulation/virtualization of the design process. System-on-chip (SoC) technology will already be dominated by 100-1000 core multiprocessing on a chip by 2020~\cite{Teich2012}.
Changes will affect the way companies design embedded software, and new languages and tool chains will need to emerge in order to cope with the enormous complexity. Low-cost embedded systems (daily-life devices) will undoubtedly see development of concurrent software and exploitation of parallelism. In order to cope with the desire to include the environment in the design of future cyber-physical systems, a system's heterogeneity will most definitely continue to grow, in SoCs as well as in distributed systems of systems. A huge part of design time is already spent on verification, either in a simulative manner or using formal techniques~\cite{Teich2012}. ``Indeed, market data indicate[s] that more than 80\% of system development efforts are now in software versus hardware. This implies that an effective platform has to offer a powerful design environment for software to cope with development cost.''~\cite{Vincentelli2002} Coverification will require an increasing proportion of the design time as systems become more complex. Progress at the electronic system level might diminish due to verification techniques that cannot cope with the modeling of errors and ways to retrieve and correct them, or, even better, prove that certain properties formulated as constraints during synthesis will hold in the implementation by construction. The verification process on one level of abstraction needs to prove that an implementation (the structural view) indeed satisfies the specification (behavioral view)~\cite{Teich2012}. Given the uncertainty of the environment and communication partners of complex interacting cyber-physical systems, runtime adaptivity will be a must for guaranteeing the efficiency of a system. Due to the availability of reconfigurable hardware and multicore processing, which will also take a more important role in the tool chain for system simulation and evaluation, online codesign techniques will work towards a standard as time moves forward. As with any design problem, if the functional aspects are indistinguishable from the implementation aspects, then it is very difficult to evolve the design over multiple hardware generations~\cite{Vincentelli2007}. It should be noted that there are tools that already exist for low- or high-level system simulation. New territory is the combination of these tools to form a `hardware-to-software' virtualization tool that is both efficient and effective. - -\section{Platform-based design} -\label{Platform-based design} - - -\section{Security} -\label{Security} - The definition of ``trustworthiness'' that is chosen to help color this perspective on security is as defined in the Benzel et~al.~paper. -\begin{quotation} -``\textit{Trustworthy} (noun): the degree to which the security behavior of the component is demonstrably compliant with its stated functionality (\textit{e.g., trustworthy component}). \\ \textit{Trust}: (verb) the degree to which the user or a component depends on the trustworthiness of another component. For example, component A \textit{trusts} component B, or component B is \textit{trusted} by component A. Trust and trustworthiness are assumed to be measured on the same scale.''~\cite{Benzel2005} -\end{quotation} -Dependability is a measure of a system's availability, reliability, and its maintainability. Furthermore, Avizienis et~al.~extend this to also encompass mechanisms designed to increase and maintain the dependability of a system~\cite{Avizienis2004}.
For the purpose of this paper, the interpretation is that trustworthiness requires inherent dependability; i.e. a user should be able to trust that trustworthy components will dependably perform the desired actions and alert said user should an error/failure occur. Unfortunately, trust(worthiness) is measured using the same abstract scale~\cite{Benzel2005}. Thus, in turn, there is a void for the design and development of an understandable and standardized scale of trust(worthiness), which could then lead to a change of paradigm in the methods by which security policies, mechanisms, and principles are implemented and evolve. -\begin{quotation} -``In order to develop a component-based trustworthy system, the development process must be reuse-oriented, component-oriented, and must integrate formal languages and rigorous methods in all phases of system life-cycle.''~\cite{Mohammed2009} -\end{quotation} - -\begin{figure*} - \includegraphics[width=\textwidth,height=10cm]{./images/SecurityDesignMap.png} - \caption{Security Design Map} - \label{fig:SecDesignMap} -\end{figure*} - - - -The first concept that will be tackled is that of `Hierarchical Trust for Components'. Security dependencies in a system will form a partial ordering if they preserve the principle of trusted components. This is essential to eliminate circular dependencies with regard to trustworthiness. Trust chains have various manifestations but this should not prohibit the use of overly trustworthy components. Taking a deeper dive into `hierarchical protections', a component need not be protected from more trustworthy components. In the most degenerate case of most trusted component, the component must protect itself from all other components. One should note that a trusted computer system need not protect itself from an equally trustworthy user. This is the same challenge that occurs at all levels and requires rigorous documentation to alleviate the constraint. Hierarchical protections is the precept that regulates the following concept of `secure distributed composition'. Composition of distributed components that enforce the same security policy should result in a system that enforces that policy at least as well as individual components do. If components are composed into a distributed system that supports the same policy, and information contained in objects is transmitted between components, then the transmitted information must be at least as well protected in the receiving component as it was in the sending component. This is similar behavior to how SSL/TLS are used in current day implementations; data may be secure in transit, but if the end points are not secure then the data is not secure either. To ensure correct system-wide level of confidence of correct policy enforcement, the security architecture of the distributed composite system must be thoroughly analyzed. - -Information protection, required by a security policy (e.g., access control to user-domain objects) or for system self-protection (e.g., maintaining integrity of kernel code and data), must be protected to a level of continuity consistent with the security policy and the security architecture assumptions; thus providing `continuous protection of information'.
Simply stated, no guarantees about information integrity, confidentiality or privacy can be made if data is left unprotected while under control of the system (i.e., during creation, storage, processing or communication of information and during system initialization, execution, failure, interruption, and shutdown); one cannot claim to have a secure system without remaining secure for all aspects of said system. For maintaining a trustworthy system, and network of distributed trustworthy components, a designer should not only prepare for expected inputs but also for possible invalid requests or malicious mutations that could occur in the future. Invalid requests should not result in a system state in which the system cannot properly enforce the security policy. The earlier mentioned concept of secure failure applies in that a roll back mechanism can return the system to a secure state or at least fail the component in a safe and secure manner that maintains the required level of trustworthiness (does not lower the overall trustworthiness of the entire distributed system). - -Any designer must ensure protection of the system by choosing interface parameters so that security critical values are provided by more trustworthy components. To eliminate time-of-check-to-time-of-use vulnerabilities the system's security-relevant operations should appear atomic. It could also be desirable to allow system security policies to be ``modifiable'' at runtime; in the case of needing to adjust to catastrophic external events. This raises the complexity of the system, but does allow for flexibility in the face of failure. Any changes to security policies must not only be traceable but also verifiable; it must be possible to verify that changes do not violate security policies. This could be handled by a central reference monitor. Following this thread of thinking, a system architect/designer should understand the consequences of allowing modifiable policies within the system. Depending on the type of access control and actions that are allowed and controlled by policies, certain configuration changes may lead to inconsistent states of discontinuous protection due to the complex and undecidable nature of the problem of allowing runtime changes to the security policies of the system. In other words, even modifications/updates need to be planned and documented rigorously for the purpose of maintaining a secure and trustworthy system. System modification procedures must maintain system security with respect to goals, objectives, and requirements of owners; allowing for `secure system modification'. Without proper planning and documentation, upgrades and modifications to systems can transform a secure system into an insecure one. These are similar concepts to `secure system evolution' at the network scope of these security components. - - In the same manner that these various security aspects (e.g. mechanisms, principles, policies) must be considered during development automation, the software and hardware aspects must also come under consideration based on the desired behavior/functionality of the system under design. One could have security elements that attempt to optimize themselves to the system they are in based on a few pivot points (power, time, efficiency, level of randomness). Another option for the automated tool could trade out specific security components as an easier way to increase security without requiring re-design/re-construction of the underlying element (e.g. modularity).
There is always the requirement that the overall trustworthiness of a new system must meet the standards of the security policies that `rule' the system. For these reasons a user would desire rigorous documentation that would lay out the requirements of each component, so that in the case of trying to replace faulty or damaged components there would be no loss to the overall trustworthiness of the system; while also not introducing any vulnerabilities due to the inclusion of new system components. - -Virtualization should be used for exploring the design space; it is hoped that it is obvious as to why. Not only is the cost of prototyping incredibly expensive, but redesign is equally costly. Virtualization aids by removing the need for physical prototyping (less monetary costs) and allows for more rapid exploration of the full design space. While the design time for such powerful tools will be expensive (both in monetary and temporal costs), the rewards of developing, validating, and evaluating this virtualization tool will offset the early design phase costs of automated security component design. - -But as with the development of any tool, and more so when -expecting said tools to be more publicly used, there is a deep need -for meticulous documentation and rigorous use/distribution of -standards. Without this, there is no guarantee that anyone will -benefit from use of this new model. Much like with security innovation -and implementation, without proper outlining of behavior and function -there is greater possibility for erroneous use thus leading to greater -vulnerability of the overall system. - - -Focusing efforts on rigorous design documentation allows security concerns to be recognized early in the development process and these aspects can be given sufficient attention in subsequent stages of the device's life cycle. ``By controlling system security during architectural refinement, we can decrease software production costs and speed up the time to market~\cite{ZhouFoss2006}.'' This approach also enhances the role of the software architects by requiring that their decisions be not only for functional decomposition, but also for non-functional requirements fulfillment. Hence, virtualization/automation of security development is not only effective but also enticing. -As with any new system there should be `sufficient user documentation'. Users should be provided with adequate documentation and other information such that they contribute to, rather than detract from, a system's security. The availability of documentation and training can help to ensure a knowledgeable cadre of users, developers, and administrators. If any level of user does not know how to use a component properly, does not know the standard security procedures, or does not know the proper behavior to prevent social engineering attacks, then said user can easily introduce new system vulnerabilities. -Just as there is a requirement for adequate documentation there is also the need for `procedural rigor'. The rigor of the system's life cycle process should be commensurate with its intended trustworthiness. Procedural rigor defines the depth and detail of a system's lifecycle procedures. These rigors contribute to the assurance that a system is correct and free of unintended functionality in two ways. Firstly, imposing a set of checks and balances on the life cycle process such that the introduction of unspecified functionality is thwarted. 
Secondly, applying rigorous procedures to specifications and other design documents contribute to the ability of users to understand a system as it has been built, rather than being misled by inaccurate system representation, thus helping ensure that security and functional objectives of the system have been met. Highly rigorous development procedures supporting high trustworthiness are costly to follow. However, the lowered cost of ownership resulting from fewer flaws and security breaches during the product maintenance phase can help to mitigate the higher initial development costs associated with a rigorous life cycle process. -The reasoning for having procedural rigor and sufficient user documentation is to allow for `repeatable, documented procedures'. Techniques used to construct a component should permit the same component to be completely and correctly reconstructed at a later time. Repeatable and documented procedures support the creation of components that are identical to a component created earlier that may be in widespread use. - -The last, and by no means least, important topic that must be tackled in this section is the question of what exactly are the research challenges. There has been a lot of information, ideas, and principles presented over the course of this writing along with parallels to existing research and methodologies that can be almost directly applied to the concept of mapping security development to platform-based design. - -\section{Conclusion} -\label{Conclusion} - - +With these concepts in mind, it should be obvious that security design \textbf{must} occur from the start! Unless security design is incorporated a priori, a developer can only hope to spend the rest of the development processes, and beyond, attempting to secure a system that took security as optional. Simply put, data \textbf{must} be kept safe. In addition, performing security planning from the start allows for disaster planning and any other possible `unforeseen' complications. %references section %\bibliographystyle{plain} @@ -632,6 +623,12 @@ Element-level classification with AI assurance}, Computer and Security, 7:73-82, \bibitem{Lin2015} Chung-Wei Lin, \emph{ Security Mechanisms and Security-Aware Mapping for Real-Time Distributed Embedded Systems}, Doctoral Thesis, University of California, Berkeley (Summer 2015) +\bibitem{Tehranipoor2015} Fatemeh Tehranipoor, Nima Karimian, Kan Xiao, and John Chandy, \emph{ +DRAM based Intrinsic Physical Unclonable Functions for System Level Security}, Proceedings of the 25th edition on Great Lakes Symposium on VLSI (May 2015) + +\bibitem{Tehranipoor2010} Xiaoxiao Wang and Mohammad Tehranipoor, \emph{ +Novel Physical Unclonable Function with Process and Environmental Variations}, Proceedings of the Conference on Design, Automation and Test in Europe (March 2010) + \end{thebibliography} \balance diff --git a/images/Paul.vsd b/images/Paul.vsd new file mode 100644 index 0000000..7409790 Binary files /dev/null and b/images/Paul.vsd differ diff --git a/images/pbdsec_mapping.png b/images/pbdsec_mapping.png index 13a806c..d211088 100644 Binary files a/images/pbdsec_mapping.png and b/images/pbdsec_mapping.png differ diff --git a/images/recursivePBD.pdf b/images/recursivePBD.pdf new file mode 100644 index 0000000..739e7ba Binary files /dev/null and b/images/recursivePBD.pdf differ diff --git a/images/recursivePBD_redo.png b/images/recursivePBD_redo.png new file mode 100644 index 0000000..eabb9f5 Binary files /dev/null and b/images/recursivePBD_redo.png differ