From 9dfb212bf8b7bf0d89ce6029a78334d189700e2b Mon Sep 17 00:00:00 2001 From: "John A. Chandy" Date: Fri, 13 Nov 2015 00:03:41 -0500 Subject: [PATCH] rewrite of proposed approach section --- PBDSecPaper.tex | 99 +++++++++++++++++++++++++++---------------------- 1 file changed, 54 insertions(+), 45 deletions(-) diff --git a/PBDSecPaper.tex b/PBDSecPaper.tex index 12c7c83..f40c264 100644 --- a/PBDSecPaper.tex +++ b/PBDSecPaper.tex @@ -81,7 +81,7 @@ \maketitle \begin{abstract} -There is a dsitinct lack of incorporating security, as a preliminary, when performing any development and design. There are tools and methodologies that tackle aspects of this issue, but nothing that brings a `complete' solution. Platform-Based Design (PBD) centers around minimizing costs of +There is a distinct lack of incorporating security, as a preliminary, when performing any systems development and design. There are tools and methodologies that tackle aspects of this issue, but nothing that brings a `complete' solution. Platform-Based Design (PBD) centers around minimizing costs of developing and manufacturing new components along with changing the landscape of this ecosystem, through the use of a ``meet-in-the-middle'' methodology. Successive refinements of @@ -133,7 +133,7 @@ set of abstract blocks that can be used to design higher level applications (e.g. virtualization development of larger security systems). -Standardization is the process of developing and implementing technical standards. Standardization can help to maximize compatibility, interoperability, safety, repeatability, or quality; it can also facilitate commoditization of formerly custom processes. Standards appear in all sorts of domains. For the IC domain standardization manifests as a flexible integrated circuit where customization for a particular application is achieved by programming one or more components on the chip (e.g. virtualization). PC makers and application software designers develop their products quickly and efficiently around a standard `platform' that emerged over time. As a quick over view of these standards: x86 ISA which makes is possible to reuse the OS \& SW applications, a full specified set of buses (ISA, USB, PCI) which allow for use of the same expansion boards of IC's for different products, and a full specification of a set of IO devices (e.g. keyboard, mouse, audio and video devices). The advantage of the standardization of the PC domain is that software can also be developed independently of the new hardware availability, thus offering a real hardware-software codesign approach. If the instruction set architecture (IAS) is kept constant (e.g. standardized) then software porting, along with future development, is far easier~\cite{Vincentelli2002}. In a `System Domain' lens, standardization is the aspect of the capabilities a platform offers to develop quickly new applications (adoptability). In other words, this requires a distillation of the principles so that a rigorous methodology can be developed and profitably used across different design domains. +Standardization is the process of developing and implementing technical standards. Standardization can help to maximize compatibility, interoperability, safety, repeatability, or quality; it can also facilitate commoditization of formerly custom processes. Standards appear in all sorts of domains. 
For the IC domain, standardization manifests as a flexible integrated circuit where customization for a particular application is achieved by programming one or more components on the chip (e.g. virtualization). PC makers and application software designers develop their products quickly and efficiently around a standard `platform' that emerged over time. As a quick overview of these standards: the x86 ISA, which makes it possible to reuse the OS \& SW applications; a fully specified set of buses (ISA, USB, PCI), which allows for use of the same expansion boards of ICs for different products; and a full specification of a set of IO devices (e.g. keyboard, mouse, audio and video devices). The advantage of the standardization of the PC domain is that software can also be developed independently of new hardware availability, thus offering a real hardware-software codesign approach. If the instruction set architecture (ISA) is kept constant (e.g. standardized) then software porting, along with future development, is far easier~\cite{Vincentelli2002}. In a `System Domain' lens, standardization is the aspect of the capabilities a platform offers to quickly develop new applications (adaptability). In other words, this requires a distillation of the principles so that a rigorous methodology can be developed and profitably used across different design domains. So why is standardization useful? Standardization allows manufacturers, system developers, and software designers to all work around a single, accepted platform. It is understood what the capabilities of the hardware are, what the limitations of system IO will be, along with what the methods/protocols of communication will be. Even these preset aspects of the `standard' have their own `contractual obligations' of how they will function, what their respective `net-lists' are, and where the limitations of such standards lie. While the act of creating a standard takes time, not only due to the time of development but also due to the speed of adoption into wide use, it is a necessary step. Without standardization, it becomes far more difficult to create universally used complex systems, let alone validate their correctness and trustworthiness. This is how one is able to change the current paradigm to a new standard model. What is gained, and lost, when shifting the method of design to a more platform-based design/security centric model? After all, development of these tools implies that there is a need to change the focus and methods of design/development~\cite{Vincentelli2007}. The advantage to this method is ease of changes in development and searching of design spaces during early design and development of those systems. For PBD this means that a company/manufacturer is able to lower the cost and time spent in the `early development phase'; the time spent when first drawing up system designs prior to initial prototyping. While overcoming multiple prototyping re-designs with a series of virtualization tools will cut down on development time, it cannot be forgotten that rigorous standards and documentation need to be developed to enable this sort of advantage. Once this hurdle is passed, the advantages of such a system can be reaped. For security this means that components can be standardized from a local, network, and distributed standpoint. These different scopes of security will be tackled in Section~\ref{Mapping}.
The main point here is that with standardization comes a form of `contract' that all parties can expect will be followed, thus allowing different manufacturers/developers to create different aspects of a system, while knowing that these different elements will come together in the final product; much like the advantages of platform-based design. The disadvantage of using a new method is the time lost in developing the standards, rigors, and documentation that would be used as new guidelines for the industry. These sorts of costs have been felt while developing any of the current standards of security; most notably with the development of new algorithms for encryption. Furthermore, security vulnerabilities are found mainly due to incorrect implementations or due to incorrect assumptions about functionality, both of which are more easily avoidable with the use of rigorous planning and design. The other disadvantage lies in the adoption of these new methods. Ideally all manufacturers would adopt a new commodity; rather than components, `design combinations' would be the new commodity that manufacturers would peddle (e.g. instead of saying ``my components are the best'' the dialog moves towards ``my ability to combine these components is the best''). @@ -238,36 +238,6 @@ As systems move towards more complex designs and implementations (as allowed by Work in the security realm is much more chaotic, although undertakings have been made to define the scope of security and its intricacies in a well documented manner~\cite{Benzel2005}. Other work in the security realm includes security-aware mapping for automotive systems, explorations of dependable and secure computing, how to model secure systems, and defining the general theorems of security properties~\cite{Avizienis2004, Lin2013, Zakinthinos1997, Jorgen2008, Zhou2007}. Security has many facets to it: failure, processes, security mechanisms, security principles, security policies, trust, etc. A key developing aspect of security is its standardization of encryption algorithms, methods of authentication, and communication protocol standards. -Different groups have tackled aspects of -security design and development. The Trusted Computing Group (TCG) created -Trusted Platform Modules (TPM) which are able to validate their own -functionality and if the TPMs have been tampered with. This is, in -essence, a method of `self-analysis'; thus the ground work for a -`self-analyzing' security component is already in place. This self -checking can be used as a method for allowing the larger system of -security components to locate damaged/error-prone components so that -they can be replaced/fixed thus raising the overall trustworthiness of -the system of components. Therefore TPMs are a good stepping stone on -the path for better security, but TPM/TCG has not found ``the answer'' -yet~\cite{Sadeghi2008}. - -Physical Unclonable Functions are yet an other hardware attempt to tackle larger issues of security design and development. PUFs are based on process variations (e.g. hardware variations; temperature, voltage, jitter, noise, leakge, age) which make them very difficult, if not impossible, to predict or model~\cite{Tehranipoor2015}, but that do produce input/output nets that are maps of challenges and responses (hereto defined as `Challenge-Response Pairs'; CRPs). These PUFs have subcategories based on their functionality (e.g. strong vs weak~\cite{Tehranipoor2015}) and the component variations examined (e.g.
cover-based, memory-based, and delay-based~\cite{Tehranipoor2010}). The important take away is that strong PUFs are used for ID extraction purposes (authentication) while weak PUFs are used for key generation; reason being that strong PUFs can handle a larger number of CRPs. PUF is a prevention method; in that PUF can be used to ensure that a system is operating as expected (similar to the TPM concept). -There are three main properties of PUFs: reliability, uniqueness, and randomness (of the output data). It should be noted that the properties of reliability and randomess conflict, thus requiring that a balance be struck between the design and development stages. Functionality, much like security, can not be a second thought when designing and developing systems. Although PUFs are also designed with security in mind, they do not serve the paper's purpose: to incorporate security development into platform-based design. - -Another example of security/trustworthiness -implementation is the use of monitors for distributed system security. -Different methods are used for determining trust of actions -(e.g. Byzantine General Problem). These methods are used for -determining the most sane/trustworthy machine out of a distributed -network so that users know they are interacting with the `freshest' -data at hand. While the realm of distributed systems has found good -methods/principles to govern their actions/updates of systems, these -methods are by no means `final' or perfect but will rather develop -with time. As one can see, there are solutions implemented across a -series of different HW and SW platforms for tackling different aspects -of security. The unifying factor in all of them is determining what -is or isn't trustworthy. - Component-Based Software Engineering (CBSE) is widely adopted in the software industry as the mainstream approach to software engineering~\cite{Mohammed2009, Mohammad2013}. CBSE is a reuse-based approach of defining, implementing and composing loosely coupled independent components into systems and emphasizes the separation of concerns in respect of the wide-ranging functionality available throughout a given software system. Thus it can be seen that the ideals being outlined in this paper are already in use; all that is needed is their re-application to a new design methodology. In a way one can see CBSE as a restricted-platform version of platform-based design. While an effective tool, it still falls short of the hardware side of systems. This furthers this paper's point, that the required elements for combining PBD and security are already in use for different purposes and simply need to be `re-purposed' for this new security centric platform-based design. Hardware/Software codesign is crucial for bridging the software and hardware aspects of a new system in an efficient and effective manner. There are different coding languages to handle different aspects (i.e. SW/HW) of virtualization. When dealing with the virtualization of software, the treatment of timing and concurrency semantics still falls short. These problems come from a lack of resolution and control at the lowest levels of virtualization interaction. The overwhelming hardware issue is that hardware semantics are very specific and tough to simulate. Hardware simulation languages such as SystemC~\cite{Kreku2008} have been developed, but tools to bridge the space between hardware and software simulation/virtualization have not.
Codesign of software simulations of hardware allows for development of high-level software abstraction that interacts with low-level hardware abstraction. The reasoning is that the constant growth in complexity calls for simulation/virtualization of the design process. System-on-chip (SoC) technology will already be dominated by 100-1000 core multiprocessing on a chip by 2020~\cite{Teich2012}. Changes will affect the way companies design embedded software, and new languages and tool chains will need to emerge in order to cope with the enormous complexity. Low-cost embedded systems (daily-life devices) will undoubtedly see development of concurrent software and exploitation of parallelism. In order to cope with the desire to include the environment in the design of future cyber-physical systems, a system's heterogeneity will most definitely continue to grow, in SoCs as well as in distributed systems of systems. A huge part of design time is already spent on verification, either in a simulative manner or using formal techniques~\cite{Teich2012}. ``Indeed, market data indicate[s] that more than 80\% of system development efforts are now in software versus hardware. This implies that an effective platform has to offer a powerful design environment for software to cope with development cost.''~\cite{Vincentelli2002} Coverification will require an increasing proportion of the design time as systems become more complex. Progress at the electronic system level might diminish due to verification techniques that cannot cope with the modeling of errors and ways to retrieve and correct them, or, even better, prove that certain properties formulated as constraints during synthesis will hold in the implementation by construction. The verification process on one level of abstraction needs to prove that an implementation (the structural view) indeed satisfies the specification (behavioral view)~\cite{Teich2012}. Given the uncertainty of the environment and communication partners of complex interacting cyber-physical systems, runtime adaptivity will be a must for guaranteeing the efficiency of a system. Due to the availability of reconfigurable hardware and multicore processing, which will also take a more important role in the tool chain for system simulation and evaluation, online codesign techniques will work towards a standard as time moves forward. As with any design problem, if the functional aspects are indistinguishable from the implementation aspects, then it is very difficult to evolve the design over multiple hardware generations~\cite{Vincentelli2007}. It should be noted that there are tools that already exist for low- or high-level system simulation. New territory is the combination of these tools to form a `hardware-to-software' virtualization tool that is both efficient and effective. @@ -282,19 +252,22 @@ Platform-based design attempts to map applications to a platform chosen from a s \label{Proposed Approach} Since there is a lack of implementation of security in platform-based design, the following section is purposed with outlining the requirements and properties that the function and architectural spaces contain, which will then be combined via a mapping process to develop a final design; as can be seen in Figure~\ref{fig:MapSpace}. More on the concept, properties, and actions of the mapping process will be discussed in Section~\ref{Mapping}. A deliberately simplified, purely illustrative sketch of such a mapping step is given below.
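+In the sketch below, every detail (the component names, the numeric security scores and costs, the budget, and the exhaustive search itself) is an assumption made only for illustration; it is not drawn from any existing tool, and a real mapping step would operate over far richer behavioral and architectural models and over recursive platform levels rather than a flat table:
\begin{verbatim}
# Illustrative sketch only: a toy architecture space of component
# definitions, each annotated with invented security scores and costs.
from itertools import product

architecture_space = {
    "key_storage": [
        {"impl": "software_keystore",   "security": 0.4, "cost": 1.0},
        {"impl": "tpm_backed_keystore", "security": 0.9, "cost": 4.0},
    ],
    "authentication": [
        {"impl": "password_check",         "security": 0.3, "cost": 0.5},
        {"impl": "puf_challenge_response", "security": 0.8, "cost": 3.0},
    ],
}

# Toy requirement specifications: a minimum security score per function,
# plus an overall cost budget.
requirements = {"key_storage": 0.7, "authentication": 0.6}
budget = 8.0

def map_design(space, reqs, budget):
    """Search the (tiny) design space for the mapping that satisfies
    every requirement and maximizes total security within the budget."""
    best, best_score = None, -1.0
    for combo in product(*space.values()):
        design = dict(zip(space.keys(), combo))
        cost = sum(c["cost"] for c in combo)
        if cost > budget:
            continue
        if any(design[f]["security"] < s for f, s in reqs.items()):
            continue
        score = sum(c["security"] for c in combo)
        if score > best_score:
            best, best_score = design, score
    return best

print(map_design(architecture_space, requirements, budget))
\end{verbatim}
+The shape of the problem, a constrained search over the architectural space guided by an objective function and the requirement specifications, is what matters here, not the toy numbers.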
+The first step in formulating a framework is to define the inputs; these being the requirement specifications that govern the behavior of the system and the component definitions/properties that underpin all functional implementation. Next will be a conversation about the mapping process that will need to combine these two spaces into a single design (or series of designs, based on requirement or property needs), along with other considerations and concerns that need to be taken into account during the mapping process. + +\subsection{Requirement Specifications} +\label{Requirement Specifications} +Requirement specifications are the different high-level prerequisites that a user, or manufacturer, needs a system to implement. Benzel et al. defined a set of security design principles (Figure~\ref{fig:SecDesignMap}) that could be used as a starting point for defining security requirements. At its core, security is about protecting access to data or information. Thus any requirements should define what data is important and needs to be kept private, and should place a value or metric on that data against which the requirement can be measured or evaluated. + + \begin{figure*} \includegraphics[width=\textwidth,height=10cm]{./images/SecurityDesignMap.png} \caption{Security Design Map~\cite{Benzel2005}} \label{fig:SecDesignMap} \end{figure*} -It should be noted that the design prinicples for security in this paper will follow the same structure shown in Figure~\ref{fig:SecDesignMap}. - -The first step in forumlating a framework is to define the inputs; these being the requirement specifications that rule the behavior of the system and the component definitions/properties that underpin all functional implementation. Next will be a conversation about the mapping process that will need to combine these two spaces into a single design (or series of designs; based on requirement or property needs), along with other considerations and concerns that need to be taken into account during the mapping process. +One could look to reliability methodology as a way to develop measurable security metrics. For example, we usually specify mean-time-to-failure for reliability, or the probability that the system is functioning for availability. These metrics are inherently probabilistic definitions. Similarly, for security, we could state requirements that the system must provide security mechanisms such that the data cannot be stolen for at least $X$ years on average or has a $Y\%$ chance of being kept secure. Unfortunately, non-deterministic guarantees for security are not very satisfying. Moreover, it is not clear that there is enough statistical data to build probabilistic models that could even provide these estimates. Especially since most security mechanisms depend on computational complexity, any time estimate depends on an understanding of future computational capability. One need only look over the history of encryption standards to see that Moore's law can often outstrip the lifetime of encryption algorithms. -\subsection{Requirement Specifications} -\label{Requirement Specifications} -Requirement specifications are the different high-level prerequisites that a user, or manufacturer, needs a system to implement. Specifications can include the need to ensure that data is private or the function/design of a system is kept private (e.g. kernel and code) to some evaluative function. This evaluation is commonly along the lines of requiring a specific amount of time for which data can remain safe (e.g.
data privacy in ensured for `X' number of years) or requiring that an attacker must spend a specific amount of monetary resources to access private data (e.g. data privacy in ensured for `X' amount of money). In both of these instances the developer/designer is attempting to strike a balance between the requirement specifications provided and varying implementation schema for producing the desired functionality. As one can see the overarching concern at this level of thinking is how to protect data, or the system, in order to keep it private; which is at the core of all security design and hence why it needs to be a preliminary part of the design process. +Another approach is to specify requirements that establish bounds on the amount of resources an attacker must expend to access private data; e.g. data privacy is ensured for $X$ resources (money, CPU cycles, time, etc.). In both of these instances, the developer/designer is attempting to strike a balance between the requirement specifications provided and varying implementation schema for producing the desired functionality. As one can see, the overarching concern at this level of thinking is how to protect data, or the system, in order to keep it private; this is at the core of all security design, and hence why security needs to be a preliminary part of the design process. Information, whether its protection is required by a security policy (e.g., access control to user-domain objects) or by system self-protection (e.g., maintaining integrity of kernel code and data), must be protected to a level of continuity consistent with the security policy and the security architecture assumptions; thus providing `continuous protection on information'. Simply stated, no guarantees about information integrity, confidentiality, or privacy can be made if data is left unprotected while under control of the system (i.e., during creation, storage, processing, or communication of information and during system initialization, execution, failure, interruption, and shutdown); one cannot claim to have a secure system without remaining secure for all aspects of said system. @@ -304,9 +277,44 @@ From the development and design exploration of the function space a user should \label{Component Definitions} Component definitions relate to the security properties that different components have based on the definitions of their behavior and functionality. Security properties, at the component definition level, include the more physical properties of the varying components: reliability, confidentiality, uniqueness. If the component in question were a processor, then security properties could manifest as the following: cryptographic functions/functionality, whether the component is a TPM, secure key storage/protection, use of PUFs for greater variability, uniform power consumption as a countermeasure to side-channel attacks, unexposed pins using a ball grid array (BGA), specific power drain/usage restrictions for operational functionality, anti-tampering or anti-reverse engineering properties of a component, etc. A developer/designer will need to determine just how low-level the architectural space will go (e.g. what is the lowest level being incorporated into the mapping process), along with the properties that must be noted/maintained at such a level. Furthermore, the exploration of this design space will lead to creating higher-level implementations and functionality that is derived from the lower-level components. A hypothetical sketch of such a component definition is given below.
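+In the sketch below, every field name and value is invented purely for illustration (it is not drawn from any datasheet or existing schema); it shows how the security-relevant properties of a processor component might be captured in machine-readable form so that later design steps can check them:
\begin{verbatim}
# Hypothetical, illustration-only record of a processor component's
# security-relevant properties (field names and values are invented).
processor_component = {
    "name": "example_soc_processor",
    "properties": {
        "crypto_acceleration": True,
        "tpm": False,
        "secure_key_storage": True,
        "puf_available": True,
        "uniform_power_consumption": False,  # side-channel countermeasure
        "bga_package": True,                 # unexposed pins
        "anti_tamper_mesh": False,
        "max_power_draw_mw": 450,
    },
}

def missing_properties(component, required):
    """Return the required security properties the component lacks."""
    have = component["properties"]
    return [p for p, needed in required.items()
            if needed and not have.get(p, False)]

# A higher-level module derived from this component inherits (and must
# document) only the security properties it can actually rely upon.
required_for_key_vault = {"secure_key_storage": True,
                          "uniform_power_consumption": True}
print(missing_properties(processor_component, required_for_key_vault))
# -> ['uniform_power_consumption']
\end{verbatim}
+A simple check of this kind is also the raw material for the tradeoff optimization discussed next, e.g. when a required property is missing or conflicts with another (such as size versus heat dissipation).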
As with any development and design procedure, there will need to be tradeoff optimization that will examine any conflicting properties (e.g. size and heat dissipation requirements) when mapping the platform/architectural space toward the function space. -For example, let us examine the use of Physical Unclonable Functions (PUFs) for low-level security design. As mentioned in Section~\ref{Related Work} PUFs have a desireable resiliance against tampering, are enticing due to their basis in process variation (i.e. hardware variations: jitter, noise, temperature, voltage), and have `Challenge-Response Pairs' (CRPs) as their input/ouput nets~\cite{Tehranipoor2015}. Hence, PUFs have the properties that different components may need which is why PUFs can be used for elements ranging from key generation to authentication functionality. These aspects are dependent on the properties of the underlying hardware and while they may be similiar in function and purpose these varying aspects contribute not only to the developer's use of these components but to the behavioral description and restrictions that the implemented function/module carry going foward. Further investigation into the concerns and considerations of the mapping process will occur in Section~\ref{Mapping}. +\paragraph{Trusted Platform Modules} +Different groups have tackled aspects of +security design and development. The Trusted Computing Group (TCG) created +Trusted Platform Modules (TPM) which are able to validate their own +functionality and detect whether they have been tampered with. This is, in +essence, a method of `self-analysis'; thus the groundwork for a +`self-analyzing' security component is already in place. This self +checking can be used as a method for allowing the larger system of +security components to locate damaged/error-prone components so that +they can be replaced/fixed, thus raising the overall trustworthiness of +the system of components. Therefore TPMs are a good stepping stone on +the path toward better security, but TPM/TCG has not found ``the answer'' +yet~\cite{Sadeghi2008}. + +\paragraph{Physical Unclonable Functions} + +Physical Unclonable Functions are yet another hardware attempt to tackle larger issues of security design and development. PUFs are based on process variations (e.g. hardware variations: temperature, voltage, jitter, noise, leakage, age) which make them very difficult, if not impossible, to predict or model~\cite{Tehranipoor2015}, but which do produce input/output nets that are maps of challenges and responses (hereafter defined as `Challenge-Response Pairs'; CRPs). These PUFs have subcategories based on their functionality (e.g. strong vs. weak~\cite{Tehranipoor2015}) and the component variations examined (e.g. cover-based, memory-based, and delay-based~\cite{Tehranipoor2010}). The important takeaway is that strong PUFs are used for ID extraction purposes (authentication) while weak PUFs are used for key generation; the reason being that strong PUFs can handle a larger number of CRPs. PUFs are a prevention method, in that they can be used to ensure that a system is operating as expected (similar to the TPM concept). +There are three main properties of PUFs: reliability, uniqueness, and randomness (of the output data). It should be noted that the properties of reliability and randomness conflict, thus requiring that a balance be struck between the design and development stages. Functionality, much like security, cannot be a second thought when designing and developing systems.
Although PUFs are also designed with security in mind, they do not by themselves serve the paper's purpose: to incorporate security development into platform-based design. + +%For example, let us examine the use of Physical Unclonable Functions (PUFs) for low-level security design. As mentioned in Section~\ref{Related Work} +PUFs have a desirable resilience against tampering, are enticing due to their basis in process variation (i.e. hardware variations: jitter, noise, temperature, voltage), and have `Challenge-Response Pairs' (CRPs) as their input/output nets~\cite{Tehranipoor2015}. Hence, PUFs have the properties that different components may need, which is why PUFs can be used for elements ranging from key generation to authentication functionality. These aspects are dependent on the properties of the underlying hardware, and while they may be similar in function and purpose, these varying aspects contribute not only to the developer's use of these components but also to the behavioral description and restrictions that the implemented function/module carries going forward. Further investigation into the concerns and considerations of the mapping process will occur in Section~\ref{Mapping}. -As the development goes `upwards' in levels of abstraction, one will be implementing functions, and modules, as the result of combining/connecting different components. These abstracted function/implementation levels will need to be viewed as its own new `component' with its own security properties, functionality, and behavior. This is the same `circular' behavior we see in mapping down from the function space. An other consideration at this level of design and development is: what happens when one is unable to trust the componenets being used to develop a system design (e.g. can a secure system be constructed from unsecure components). +\paragraph{Resource Monitors} +Another example of security/trustworthiness +implementation is the use of monitors for distributed system security. +Different methods are used for determining trust of actions +(e.g. the Byzantine Generals Problem). These methods are used for +determining the most sane/trustworthy machine out of a distributed +network so that users know they are interacting with the `freshest' +data at hand. While the realm of distributed systems has found good +methods/principles to govern their actions/updates of systems, these +methods are by no means `final' or perfect but will rather develop +with time. As one can see, there are solutions implemented across a +series of different HW and SW platforms for tackling different aspects +of security. The unifying factor in all of them is determining what +is or isn't trustworthy. + +As the development goes `upwards' in levels of abstraction, one will be implementing functions and modules as the result of combining/connecting different components. Each of these abstracted function/implementation levels will need to be viewed as its own new `component' with its own security properties, functionality, and behavior. This is the same `circular' behavior we see in mapping down from the function space. Another consideration at this level of design and development is: what happens when one is unable to trust the components being used to develop a system design (e.g. can a secure system be constructed from insecure components)? A hypothetical sketch of how a low-level component property might be wrapped into such a higher-level `component' follows.
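+In the sketch below, everything (the simulated noisy PUF, the enrollment database, the Hamming-distance threshold, and all numeric values) is an assumption made only for illustration and not a description of any particular device; it shows how a strong PUF's raw challenge-response behavior might be exposed to higher abstraction levels as a self-contained authentication `component':
\begin{verbatim}
import random

HAMMING_THRESHOLD = 4   # assumed tolerance for noisy PUF responses (bits)

def hamming(a, b):
    """Number of differing bits between two responses."""
    return bin(a ^ b).count("1")

class SimulatedPUF:
    """Stand-in for a strong PUF: a fixed secret mapping plus a little
    read-to-read noise that mimics environmental variation."""
    def __init__(self, seed):
        self._rng = random.Random(seed)
        self._table = {}   # challenge -> stable 32-bit response
    def respond(self, challenge):
        if challenge not in self._table:
            self._table[challenge] = self._rng.getrandbits(32)
        noise = (1 << random.randrange(32)) | (1 << random.randrange(32))
        return self._table[challenge] ^ noise   # flip one or two bits

class PUFAuthenticator:
    """Higher-level `component' built on the raw CRP behavior."""
    def __init__(self, puf, num_crps=16):
        self._puf = puf
        # Enrollment: record challenge-response pairs in a trusted setting.
        self._crps = {}
        for _ in range(num_crps):
            challenge = random.getrandbits(32)
            self._crps[challenge] = puf.respond(challenge)
    def authenticate(self):
        challenge, expected = random.choice(list(self._crps.items()))
        observed = self._puf.respond(challenge)
        return hamming(expected, observed) <= HAMMING_THRESHOLD

device = SimulatedPUF(seed=42)
authenticator = PUFAuthenticator(device)
print("authenticated:", authenticator.authenticate())
\end{verbatim}
+From the perspective of the levels above it, PUFAuthenticator is simply a new component whose security properties (e.g. how many CRPs it holds, what error tolerance it accepts) must be documented and carried forward, exactly as argued above.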
\begin{quotation} ``In order to develop a component-based trustworthy system, the development process must be reuse-oriented, component-oriented, and must integrate formal languages and rigorous methods in all phases of system life-cycle.''~\cite{Mohammed2009} @@ -320,9 +328,9 @@ The concern is how to evaluate the security of the system. Two methods are to e \label{Mapping} Mapping is the process of evaluating, and maximizing, the requirement specifications and component definitions for the purpose of developing a design that incorporates all the necessary behavioral characteristics along with the predetermined essential hardware/functional properties, using a predetermined objective function. Developing mapping functionality is no easy task, and this is where the open-ended question of incorporating security into platform-based design lies. The map space should take into account not only the current requirements of the system, but also any future upgrades/modifications, along with disaster planning and recovery. To the knowledge of the authors, this is an area lacking hard standardization/optimization techniques, and one that has had no formal definition of a framework. -The first, and most difficult step in developing a framework is determining how to evalute, and compare, both the requirement specifications and component definitions. Along this same mindset mapping also needs to account for layers of abstraction which will also necessitate their own evaluation/validation schema, as this will allow for comparision of different implementation schemes at the various recursive explorations of intermediate `platform levels'. At each of these varying levels of development there will be different criteria that will dicatate the validity of a given design; e.g. trustworthiness, functional reliability, data fidelity. Each of these properties, while paramount to the design development at the arbitrary levels, should be notably different which then complicates how to compare levels of abstraction. Standards and documentation come to the recue at this point. With thorough and rigorous documentation one can begin formulating a structured framework that others can follow and develop on. Even this documentation and standardization will need to account for a large range of scenarios and concerns that may arise during development of varying designs. Virtualization is a solution to this aspect of the process, and will need to be at the core of the mapping functionality. As with any hardware/software development and design there is the need for virtualization tools that will hasten the exploration of the platform and function spaces. It should be noted that while documentation, virtualization, and standarization do play key roles in the mapping process they are also the most expensive aspects of incorporating a new methodology to security design through PBD. Time is the true cost of all this development, but is also the return once virtualization and standards are completed. +The first, and most difficult, step in developing a framework is determining how to evaluate, and compare, both the requirement specifications and the component definitions. In the same vein, mapping also needs to account for layers of abstraction, which will necessitate their own evaluation/validation schema, as this will allow for comparison of different implementation schemes at the various recursive explorations of intermediate `platform levels'.
At each of these varying levels of development, there will be different criteria that will dictate the validity of a given design; e.g. trustworthiness, functional reliability, data fidelity. Each of these properties, while paramount to the design development at the arbitrary levels, should be notably different, which then complicates how to compare levels of abstraction. Standards and documentation come to the rescue at this point. With thorough and rigorous documentation, one can begin formulating a structured framework that others can follow and develop on. Even this documentation and standardization will need to account for a large range of scenarios and concerns that may arise during development of varying designs. Virtualization is a solution to this aspect of the process, and will need to be at the core of the mapping functionality. As with any hardware/software development and design, there is the need for virtualization tools that will hasten the exploration of the platform and function spaces. It should be noted that while documentation, virtualization, and standardization do play key roles in the mapping process, they are also the most expensive aspects of incorporating a new methodology for security design through PBD. Time is the true cost of all this development, but it is also the return once virtualization and standards are completed. -Future development considerations should also be taken into account, depending on the `time-to-live' that is expected of the system being designed. These future considerations include disaster/catastrophe planning and any upgrade/modification paths. These are important considerations when developing not only sinlge modules/components within the system, but how the system will act when these moments occur. This is similar to planning for a `secure failure' event, but these topics will be expanded upon later. +Future development considerations should also be taken into account, depending on the `time-to-live' that is expected of the system being designed. These future considerations include disaster/catastrophe planning and any upgrade/modification paths. These are important considerations when developing not only the single modules/components within the system, but also how the system will act when these moments occur. This is similar to planning for a `secure failure' event, but these topics will be expanded upon later. This paper will now examine some of the attention that has to be given to the development of a secure system. The definition of ``trustworthiness'' that is chosen to help color this perspective on security is the one given in the Benzel et~al.~paper. \begin{quotation} @@ -375,13 +383,13 @@ As with any new system there should be `sufficient user documentation'. Users s \label{Model Review} This section is purposed with a discussion about some of the more nuanced aspects of the security-incorporated platform-based design model/framework that was discussed in Section~\ref{Proposed Approach}. More attention will be given to the challenges, considerations, and advantages of a security-centric PBD framework. -As with any new technology/design methodology/concept there are expensive initial costs for development, evaluation, and validation until more rigorous documentation can be made. As with any early development, it pays to think everything through first before diving into prototyping (want roughly 90/10 split between planning and action; same as with software development).
This can be aided through the development of virtualization tools; which unfortunately come with their own development, evaluation, and validation costs. Harping on this, two main concerns for effective platform-based design development and adoptation are software development and a set of tools that insulate the details of architecture from application software (e.g. virtualization). The more abstract the programmer's model, the richer is the set of platform instances, but the more difficult is to choose the ``optimal'' architecture platform instance and map automatically into it~\cite{Vincentelli2002}. +As with any new technology/design methodology/concept, there are expensive initial costs for development, evaluation, and validation until more rigorous documentation can be made. As with any early development, it pays to think everything through first before diving into prototyping (one wants roughly a 90/10 split between planning and action, the same as with software development). This can be aided through the development of virtualization tools, which unfortunately come with their own development, evaluation, and validation costs. Building on this, two main concerns for effective platform-based design development and adoption are software development and a set of tools that insulate the details of architecture from application software (e.g. virtualization). The more abstract the programmer's model, the richer the set of platform instances, but the more difficult it is to choose the ``optimal'' architecture platform instance and map automatically into it~\cite{Vincentelli2002}. Standardization, or at least some abstraction rule documentation, is what will greatly help to minimize the costs of using platform-based design. It is here that one clearly sees the recursive nature of this design methodology, and -hopefully, also is able to see the advantages of the wide adoptation +hopefully, also is able to see the advantages of the wide adoption of platform-based design. This approach results in better reuse, because it decouples independent aspects that would otherwise be tied together, e.g., a given functional specification to low-level implementation @@ -472,6 +480,8 @@ With these concepts in-mind, it should be obvious that security design \textbf{m %\bibliographystyle{plain} %\bibliography{body} +\balance + \begin{thebibliography}{99} \bibitem{Vincentelli2007} Alberto Sangiovanni-Vincentelli, @@ -630,7 +640,6 @@ DRAM based Intrinsic Physical Unclonable Functions for System Level Security}, P Novel Physical Unclonable Function with Process and Environmental Variations}, Proceedings of the Conference on Design, Automation and Test in Europe (March 2010) \end{thebibliography} -\balance \end{document}