From 7a36d0c244ff79e918adcaa89235f63f0bf0a80d Mon Sep 17 00:00:00 2001
From: Paul Wortman
Date: Thu, 27 Aug 2015 17:51:35 -0400
Subject: [PATCH] Spelling and sentence corrections
---
 PBDSecPaper.tex | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/PBDSecPaper.tex b/PBDSecPaper.tex
index 83440df..7776ec2 100644
--- a/PBDSecPaper.tex
+++ b/PBDSecPaper.tex
@@ -78,7 +78,7 @@
 \maketitle
 \begin{abstract}
-PBD centers around minimizing costs of developing and manufacturering new components along with changing the landscape of this ecosystem, through the use of a ``meet-in-the-middle'' methodology where successive refinements of specificiations meet with abstractions of pototential implemenetations; the goal being to obtain the same level of abstraction as is writtien into good coding functions~\cite{Vincentelli2002}. Security centers around being able to gauge the trustworthiness of components as well as the larger system made of distributed components. There is a distinct lack of design methodology for doing platform-based design of security elements, although there is conceptual use in mobile embedded systems~\cite{Schaumont2005}. Ground work for implementing security via PBD exists; this paper is centered around connecting the dots and laying the foundation for framework that will be built upon for creating security in a documented, rigorous, and standardized way.
+PBD centers around minimizing the costs of developing and manufacturing new components, along with changing the landscape of this ecosystem, through the use of a ``meet-in-the-middle'' methodology where successive refinements of specifications meet with abstractions of potential implementations; the goal being to obtain the same level of abstraction as is written into good coding functions~\cite{Vincentelli2002}. Security centers around being able to gauge the trustworthiness of components as well as the larger system made of distributed components. There is a distinct lack of design methodology for doing platform-based design of security elements, although there is conceptual use in mobile embedded systems~\cite{Schaumont2005}. Groundwork for implementing security via PBD exists; this paper is centered around connecting the dots and laying the foundation for a framework that will be built upon for creating security in a documented, rigorous, and standardized way.
 \end{abstract}
 \section{Motivation}
@@ -87,9 +87,9 @@ As systems move towards more complex designs and implementations (as allowed by
 \begin{quotation}
 ``However, even though current silicon technology is closely following the growing demands; the effort needed in modeling, simulating, and validating such designs is adversely affected. This is because current modeling tools and frameworks, hardware and software co-design environments, and validation and verification frameworks do not scale with the rising demands.''~\cite{Patel2007}
 \end{quotation}
-Work in the security realm is much more chaotic, although understakings have been made to define the scope of security and its intricacies in a well documented manner~\cite{Benzel2005}. Other work in the security realm includes security-aware mapping for automotive systems, explorations of dependable and secure computing, how to model secure systems, and defining the general theorems of security properties~\cite{Lin2013, Avizienis2004, Jorgen2008, Zakinthinos1997, Zhou2007}. Security has many facets to it: failure, processes, security mechanisms, secuirty principles, security policies, trust, etc.
A key developing aspect of security is its standardization of encryption algorithms, methods of authentication, and communication protocol standards. \\
+Work in the security realm is much more chaotic, although undertakings have been made to define the scope of security and its intricacies in a well documented manner~\cite{Benzel2005}. Other work in the security realm includes security-aware mapping for automotive systems, explorations of dependable and secure computing, how to model secure systems, and defining the general theorems of security properties~\cite{Lin2013, Avizienis2004, Jorgen2008, Zakinthinos1997, Zhou2007}. Security has many facets to it: failure, processes, security mechanisms, security principles, security policies, trust, etc. A key developing aspect of security is its standardization of encryption algorithms, methods of authentication, and communication protocol standards. \\
-Standardization is the process of developing and implementing technical standards. Standardization can help to maximinze compatability, interoperability, safety, repeatability, or quality; it can also faciliate commoditization of formerly custom processes. Standards appear in all sorts of domains. For the IC domain standardization manifests as a flexibile integrated circuit where customization for a particular application is achieved by programming one or more components on the chip (e.g. virtualization). PC makers and application software designers develop their products quickly and efficiently around a standard `platform' that emerged over time. As a quick over view of these standards: x86 ISA which makes is possible to reuse the OS \& SW applications, a full specified set of buses (ISA, USB, PCI) which allow for use of the same expansion boards of IC's for different products, and a full specification of a set of IO devices (e.g. keyboard, mouse, audio and video devices). The advantage of the standardization of the PC domain is that software can also be developed independently of the new hardware availability, thus offering a real hardware-software codesign approach. If the instruction set architecture (IAS) is kept constant (e.g. standardized) then software porting, along with future development, is far easier~\cite{Vincentelli2002}. In a `System Domain' lens, standardization is the aspect of the capabilities a platform offeres to develop quickly new applications (adoptability). In other words, this requires a distillation of the principles so that a rigorous methodology can developed and profitably used across different design domains.
+Standardization is the process of developing and implementing technical standards. Standardization can help to maximize compatibility, interoperability, safety, repeatability, or quality; it can also facilitate commoditization of formerly custom processes. Standards appear in all sorts of domains. For the IC domain, standardization manifests as a flexible integrated circuit where customization for a particular application is achieved by programming one or more components on the chip (e.g. virtualization). PC makers and application software designers develop their products quickly and efficiently around a standard `platform' that emerged over time. As a quick overview of these standards: the x86 ISA, which makes it possible to reuse the OS \& SW applications; a fully specified set of buses (ISA, USB, PCI), which allow for use of the same expansion boards of ICs for different products; and a full specification of a set of IO devices (e.g.
keyboard, mouse, audio and video devices). The advantage of the standardization of the PC domain is that software can also be developed independently of the new hardware availability, thus offering a real hardware-software codesign approach. If the instruction set architecture (ISA) is kept constant (e.g. standardized) then software porting, along with future development, is far easier~\cite{Vincentelli2002}. In a `System Domain' lens, standardization is the aspect of a platform's capabilities that allows new applications to be developed quickly (adoptability). In other words, this requires a distillation of the principles so that a rigorous methodology can be developed and profitably used across different design domains.
So why is standardization useful? Standardization allows for manufacturers, system developers, and software designers to all work around a single, accepted platform. It is understood what the capabilities of the hardware are, what the limitations of system IO will be, along with what the methods/protocols of communication will be. Even these preset aspects of the `standard' have their own `contractual obligations' of how they will function, what their respective `net-lists' are, and where the limitations of such standards lie. While the act of creating a standard takes time, not only due to the time of development but also due to the speed of adoption into wide use, it is a necessary step. Without standardization, it becomes far more difficult to create universally used complex systems, let alone validate their correctness and trustworthiness. This is how one is able to change the current paradigm to a new standard model. What is gained, and lost, when shifting the method of design to a more platform-based design/security centric model? After all, development of these tools implies that there is a need to change the focus and methods of design/development~\cite{Vincentelli2007}. The advantage of this method is ease of changes in development and searching of design spaces during early design and development of those systems. For PBD this means that a company/manufacturer is able to lower the cost and time spent in the `early development phase'; the time spent when first drawing up system designs prior to initial prototyping. While the advantage of overcoming multiple prototyping re-designs with a series of virtualization tools will cut down on development time, it cannot be forgotten that rigorous standards and documentation need to be developed to gain this sort of advantage. Once this hurdle is passed then the advantages of such a system can be reaped. For security this means that components can be standardized from a local, network, and distributed standpoint. These different scopes of security will be tackled in Section~\ref{Security}. The main point here is that with standardization comes a form of `contract' that all parties can expect will be followed, thus allowing for different manufacturers/developers to create different aspects of a system, while knowing that these different elements will come together in the final product; much like the advantages of platform-based design. The disadvantage of using a new method is the loss of time in developing the standards, rigor, and documentation that would be used as new guidelines for the industry. These sorts of costs have been felt while developing any of the current standards of security, most definitely with the development of new algorithms for encryption.
Furthermore, security vulnerabilities are found mainly due to incorrect implementations or due to incorrect assumptions about functionality; both of which are more easily avoidable with the use of rigorous planning and design. The other disadvantage is in the adoptability of these new methods. Ideally all manufacturers would adopt a new commodity; rather than components, `design combinations' would be the new commodity that manufacturers would peddle (e.g. instead of saying ``my components are the best'' the dialog moves towards ``my ability to combine these components is the best'').
@@ -103,7 +103,7 @@ The manufacturer's standpoint boils down to: the design should minimize mask-mak
 \section{Platform-based design}
 \label{Platform-based design}
-Platform-based design is a methodology where function spaces are mapped to platform spaces (e.g. architectural space) or vica-versa. Generally the mapping of a function instance towards a platform instance leads to a larger subset of platform instances branching from a single function instance; as can be seen in Figure~\ref{fig:RecursivePBD}. Of course this relationship works in the reverse manner as well; a single platform instance maps to a subset of function instances. Although one can think of the mapping process as happening in one direction, or the other, platform-based design is a ``meet-in-the-middle'' approach. A designer/developer works towards merging two platforms (e.g. software functionality and hardware architectures) where the resulting `middle-ground', or mapping, can be thought of as its own `mapping instance'; merging an abstract higher level function with an abstraction lower level platform. More on the recusive nature of PBD will be explored later in this section, for now we will examine the benefits of using such a methodology.
+Platform-based design is a methodology where function spaces are mapped to platform spaces (e.g. architectural space) or vice versa. Generally the mapping of a function instance towards a platform instance leads to a larger subset of platform instances branching from a single function instance; as can be seen in Figure~\ref{fig:RecursivePBD}. Of course this relationship works in the reverse manner as well; a single platform instance maps to a subset of function instances. Although one can think of the mapping process as happening in one direction, or the other, platform-based design is a ``meet-in-the-middle'' approach. A designer/developer works towards merging two platforms (e.g. software functionality and hardware architectures) where the resulting `middle-ground', or mapping, can be thought of as its own `mapping instance'; merging an abstract higher level function with an abstracted lower level platform (a small sketch of this mapping step is given below). More on the recursive nature of PBD will be explored later in this section; for now we will examine the benefits of using such a methodology.
The monetary considerations of platform-based design include system re-use, flexibility of elements, and re-programmability. As system complexity grows, the cost of producing chips increases, thus pushing for the systems that are produced to be ``multi-capable'' (e.g. re-use value)~\cite{Keutzer2000}. The greatest challenge in the hardware design space is placement and arrangement of components based on an array of differing variables. Design of architecture platforms is the result of trade-offs in a complex space, and worse yet, some aspects directly compete with/contradict each other~\cite{Vincentelli2002, Gruttner2013}.
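To make the mapping step concrete, the following minimal sketch (in Python) enumerates the candidate `mapping instances' where a platform instance can support a function instance; this is the ``meet-in-the-middle'' set that a designer would then refine. The \texttt{FunctionInstance} and \texttt{PlatformInstance} types, their fields, and the capacity/cost metrics are illustrative assumptions of ours, not constructs prescribed by the PBD literature.
\begin{verbatim}
# Sketch of PBD's mapping step: pair each function instance with every
# platform instance that can host it; each feasible pair is a candidate
# "mapping instance" in the middle.  Names and metrics are illustrative.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class FunctionInstance:
    name: str
    required_ops: float      # abstract performance demand

@dataclass(frozen=True)
class PlatformInstance:
    name: str
    capacity_ops: float      # abstract performance capacity
    unit_cost: float

def meet_in_the_middle(functions, platforms):
    """Return all (function, platform) pairs the platform can support."""
    return [(f, p) for f, p in product(functions, platforms)
            if p.capacity_ops >= f.required_ops]

funcs = [FunctionInstance("encrypt-stream", 2e6)]
plats = [PlatformInstance("mcu", 1e6, 1.50),
         PlatformInstance("soc", 5e6, 4.00)]
for f, p in meet_in_the_middle(funcs, plats):
    print(f"{f.name} -> {p.name} (unit cost {p.unit_cost})")
\end{verbatim}
Note how a single function instance maps to a subset of platform instances (and vice versa), mirroring Figure~\ref{fig:RecursivePBD}; a realistic flow would rank these candidates against the competing cost and flexibility metrics discussed above.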
The size of the application space that can be supported by the architectures belonging to the architecture platform represents the flexibility of the platform. The size of the architecture space that satisfies the constraints embodied in the design architecture is the freedom that providers/manufacturers have in designing their hardware instances. Even at this level of abstraction these two aspects of hardware design can compete with each other. For example, a manufacturer may have pre-determined sizes that constrain their development space a priori. Furthermore, aspects such as design space constraints and heat distribution are a well known source of trouble when designing embedded systems.
@@ -154,11 +154,11 @@ The first principle is that of `Least Common Mechanisms'. If multiple component
How does a designer decide upon how elements communicate? How does one define a security component in terms of a network; how is communication done, what concerns/challenges are there? Rigorous definitions of input/output nets for components and the methods for communication (buses, encryption, multiplexing) must be documented and distributed as the standard methods for communicating. Service, defined in the scope of a network, refers to processing or protection provided by a component to users or other components, e.g., a communication service (TCP/IP) or a security service (encryption, firewall). These services represent the communication that occurs between different elements in a secure system. Each service should be documented to rigorously define the input, and output, nets for each component and the method used for communication (buses, encryption, multiplexing). Since these services, and their actions, are central to the operation/behavior of a secure system, there are a series of considerations, principles, and policies that a system architect/designer must acknowledge when deciding on the layout of security components within a network. The principle of `Secure Communication Channels' states that when composing a system where there is a threat to communication between components, each communications channel must be trustworthy to a level commensurate with the security dependencies it supports. In other words, it captures how much the component is trusted by other components to perform its security functions. Several techniques can be used to mitigate threats to the communication channels in use. Use of a channel may be restricted by protecting access to it with a suitable access control mechanism (e.g. a reference monitor located beneath or within each component). End-to-end communications technologies (e.g. encryption) may be used to eliminate security threats in the communication channel's physical environment. Once again, the intrinsic characteristics assumed for and provided by the channel must be specified with such documentation that it is possible for system designers to understand the nature of the channel as initially constructed and to assess the impact of any subsequent changes to the system. Without this rigorous documentation and standardization, the trustworthiness of the communications between security elements cannot be assured.
-`Self-Reliant Trustworthiness' means that systems should minimize their reliance on external components for system trustworthiness. A corollary to this relates to the ability of a component to operate in isolation and then resynchronize with other components when it is rejoined with them.
In other words, if a system were required to maintain a connection with another external entity in order to maintain its trustworthiness, then that very system would be vulnerable to drop in connection/communication channels. Thus from a network standpoint a system should be trustworthy by default with the external connection being used as a supplement to the component's function. The principle of `Partially Ordered Dependencies' states that calling, synchronization and other dependencies in the system should be partially ordered; e.g. for certain pairs of elements in a set, one of the elements precedes the other. A fundamental tool in system design is `layering'. A system can be organized into functionally related modules of components, where layers are linearly ordered with respect to inter-later dependencies. Inherent problems of circularity can be more easily managed if circular dependencies are constrained to occur within layers. In other words, if a shared mechanism also makes calls to or otherwise depends on services of calling mechanisms, creating a circular dependency, perforamcen and liveness problems can result. Partially ordered dependencies and system layering contribute significantly to the simplicity and coherency of system design. A system should be built to facilitate maintenance of its security properties in face of changes to its interface, functionality sturcture or configuration; in other words allow for `secure system evolution'. Changes may include upgrades to system, maintenance activites. The benefits of designing a system with secure system evolution are reduced lifecycles costs for the vendor, reduced costs of ownership for the user, and improved system security. Most systems can aniticipate maintenance, upgrades, and chages to configuration. If a component is constructed using the precepts of modularity and information hiding then it becomes easier to replace components without disrupting the rest of a system. A system designer needs to take into account the impact of dynamic reconfiguration on the secure state of the system. Just as it is easier to build trustworthiness into a system from the outset (and for highly trustworthy systems, impossible to ahcieve without doing so), it is easier to plan for change than to be surprised by it~\cite{Benzel2005}.
+`Self-Reliant Trustworthiness' means that systems should minimize their reliance on external components for system trustworthiness. A corollary to this relates to the ability of a component to operate in isolation and then resynchronize with other components when it is rejoined with them. In other words, if a system were required to maintain a connection with another external entity in order to maintain its trustworthiness, then that very system would be vulnerable to drops in connection/communication channels. Thus, from a network standpoint, a system should be trustworthy by default, with the external connection being used as a supplement to the component's function. The principle of `Partially Ordered Dependencies' states that calling, synchronization and other dependencies in the system should be partially ordered; i.e. for certain pairs of elements in a set, one of the elements precedes the other. A fundamental tool in system design is `layering'. A system can be organized into functionally related modules of components, where layers are linearly ordered with respect to inter-layer dependencies.
Inherent problems of circularity can be more easily managed if circular dependencies are constrained to occur within layers. In other words, if a shared mechanism also makes calls to or otherwise depends on services of calling mechanisms, creating a circular dependency, performance and liveness problems can result. Partially ordered dependencies and system layering contribute significantly to the simplicity and coherency of system design. A system should be built to facilitate maintenance of its security properties in the face of changes to its interface, functionality, structure, or configuration; in other words, allow for `secure system evolution'. Changes may include upgrades to the system and maintenance activities. The benefits of designing a system with secure system evolution are reduced lifecycle costs for the vendor, reduced costs of ownership for the user, and improved system security. Most systems can anticipate maintenance, upgrades, and changes to configuration. If a component is constructed using the precepts of modularity and information hiding then it becomes easier to replace components without disrupting the rest of a system. A system designer needs to take into account the impact of dynamic reconfiguration on the secure state of the system. Just as it is easier to build trustworthiness into a system from the outset (and for highly trustworthy systems, impossible to achieve without doing so), it is easier to plan for change than to be surprised by it~\cite{Benzel2005}.
A system should be able to fail securely; this is `secure failure'. Failure in system function or mechanism should not lead to violation of any security policy. Ideally the system should be capable of detecting failure at any stage of operation (initialization, normal operation, shutdown, maintenance, error detection and recovery) and take appropriate steps to ensure security policies are not violated; as is done with most machines nowadays anyway. Touching on the earlier idea of secure system evolution, the reconfiguration function of the system should be designed to ensure continuous enforcement of security policies during various phases of reconfiguration. Once a failed security function is detected, the system may reconfigure itself to circumvent the failed component, while maintaining security, and still provide all or part of the functionality of the original system, or completely shut itself down to prevent any (further) violation of security policies. Another method for achieving this is to roll back to a secure state (which may be the initial state) and then either shut down or replace the service/component that failed with an orthogonal or replicated mechanism. Failure of a component may or may not be detectable to components using it; thus one must design a method for `dealing with failure'. For this reason components should fail in a state that denies rather than grants access. A designer could employ multiple protection mechanisms (whose features can be significantly different) to reduce the possibility of attack repetition, but it should be noted that redundancy techniques may increase resource usage and adversely affect the system performance. Instead, the atomicity properties of a service/component should be well documented and characterized so that a component availing itself of the service can detect and handle interruption events appropriately; similar to the `self-analysis' function that TPMs have. A minimal sketch of this fail-secure behavior follows.
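As a minimal illustration of the fail-secure precept just described, the sketch below (in Python) wraps a service so that any internal failure results in denial of access and a rollback to the last known-secure state. The \texttt{ServiceError} exception, the snapshot scheme, and the deny-by-default return convention are hypothetical assumptions of ours, not an API drawn from~\cite{Benzel2005}.
\begin{verbatim}
# Sketch of a fail-secure wrapper: on any failure the component denies
# access (rather than granting it) and rolls back to a secure state.
import copy

class ServiceError(Exception):
    """Raised by a wrapped service on internal failure (illustrative)."""

class FailSecureComponent:
    def __init__(self, service, initial_state):
        self.service = service
        self.state = initial_state
        # Last state known to satisfy the security policy.
        self._secure_snapshot = copy.deepcopy(initial_state)

    def request(self, *args):
        try:
            result = self.service(self.state, *args)
            # Commit the new state as the known-secure snapshot.
            self._secure_snapshot = copy.deepcopy(self.state)
            return result
        except ServiceError:
            # Roll back and fail closed: deny rather than grant access.
            self.state = copy.deepcopy(self._secure_snapshot)
            return None  # callers must treat None as "access denied"
\end{verbatim}
The essential points are that failure yields denial rather than granting of access, and that the component returns to a secure state, echoing the rollback mechanism described above.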
A well designed reference monitor could fill most of these roles, though it would require that the reference monitor be able to `self-analyze' for trustworthiness. While even this would not be a perfect solution, it does help limit total failure to the reference monitor instead of some arbitrary security component.
-Having tackled the network communication considerations for modeling a secure and trustworthy component, this paper moves toward examining these security element behvaiors and functions with respect to their existence within a larger distributed system. Trust, in the scope of a distributed system, shifts to define the degree to which the user or a componenet depends on the trustworthiness of another component. The first concept that will be tackled is that of `Hierarchical Trust for Components'. Security dependencies in a system will form a partial ordering if they preserve the principle of trusted components. This is essential to eliminate circular dependencies with regard to trustworthiness. Trust chains have various manifestations but this should not prohibit the use of overly trustworthy components. Taking a deeper dive into `hierarchical protections', a component need not be protected from more trustworthy components. In the most degenerate case of most trusted component, the component must protect itself from all other components. One should note that a trusted computer system need not protect itself from an equally trustworthy user. The main challenge here is that there needs to be a clear and documented way by which one can determine trustworthiness and protection for a system, along with outlining the hierarchy of trust that is inherent to the system. This is the same challenge that occurs at all levels and requires rigorous documentation to alleviate the constraint. Hierarchical protections is the precept that regulates the following concept of `secure distributed composition'. Composition of distributed components that enforce the same security policy should result in a system that enforces that policy at least as well as individualy components do. If components are composed into a distributed system that supports the same policy, and information contained in objects is transmitted between components, then the transmitted information must be at least as well protected in the receiving component as it was in the sending component. This is similar behavior to how SSL/TLS are used in current day implementations; data may be secure in transit, but if the end points are not secure then the data is not secure either. To ensure correct system-wide level of confidence of correct policy enforcement, the security architecture of the distributed composite system must be thoroughly analyzed. Actions that are security-relevant must be traceable to the entity on whose behalf the action is being taken; there must be `accountability and traceability'. This requires the designer to put into place a trustworthy infrastructure that can record details about actions that affect system security (e.g., audit subsystem). This system must not only be able to uniquely identify the entity on whose behalf the action is being carried out, but also record the relevant sequence of actions that are carried out. An accountability policy ought to require the audit trail itself to be protected from unauthorized access and modifications.
Associating actions with system entities, and ultimately with users, and making the audit trail secure against unauthorized access and modifications provide nonrepudiation, as once some action is recorded, it is not possible to change the audit trail. Any designer should note that if a violation occurs, analysis of the audit log may provide additional infomraiton that may be helpful in determinging the path or component that allowed the violation of the security policy. Just as this audit trail would be invaluable to a debugging developer, an attacker could also use this information to illuminate the actions/behavior of the system; therefore this data absolutely must remain protected.
+Having tackled the network communication considerations for modeling a secure and trustworthy component, this paper moves toward examining these security element behaviors and functions with respect to their existence within a larger distributed system. Trust, in the scope of a distributed system, shifts to define the degree to which the user or a component depends on the trustworthiness of another component. The first concept that will be tackled is that of `Hierarchical Trust for Components'. Security dependencies in a system will form a partial ordering if they preserve the principle of trusted components. This is essential to eliminate circular dependencies with regard to trustworthiness. Trust chains have various manifestations but this should not prohibit the use of overly trustworthy components. Taking a deeper dive into `hierarchical protections', a component need not be protected from more trustworthy components. In the most degenerate case of the most trusted component, the component must protect itself from all other components. One should note that a trusted computer system need not protect itself from an equally trustworthy user. The main challenge here is that there needs to be a clear and documented way by which one can determine trustworthiness and protection for a system, along with outlining the hierarchy of trust that is inherent to the system. This is the same challenge that occurs at all levels and requires rigorous documentation to alleviate the constraint. Hierarchical protections is the precept that regulates the following concept of `secure distributed composition'. Composition of distributed components that enforce the same security policy should result in a system that enforces that policy at least as well as the individual components do. If components are composed into a distributed system that supports the same policy, and information contained in objects is transmitted between components, then the transmitted information must be at least as well protected in the receiving component as it was in the sending component. This is similar behavior to how SSL/TLS is used in current-day implementations; data may be secure in transit, but if the endpoints are not secure then the data is not secure either. To ensure a correct system-wide level of confidence in correct policy enforcement, the security architecture of the distributed composite system must be thoroughly analyzed. Actions that are security-relevant must be traceable to the entity on whose behalf the action is being taken; there must be `accountability and traceability'. This requires the designer to put into place a trustworthy infrastructure that can record details about actions that affect system security (e.g., an audit subsystem).
This system must not only be able to uniquely identify the entity on whose behalf the action is being carried out, but also record the relevant sequence of actions that are carried out. An accountability policy ought to require the audit trail itself to be protected from unauthorized access and modifications. Associating actions with system entities, and ultimately with users, and making the audit trail secure against unauthorized access and modifications provide nonrepudiation, as once some action is recorded, it is not possible to change the audit trail. Any designer should note that if a violation occurs, analysis of the audit log may provide additional information that may be helpful in determining the path or component that allowed the violation of the security policy. Just as this audit trail would be invaluable to a debugging developer, an attacker could also use this information to illuminate the actions/behavior of the system; therefore this data absolutely must remain protected.
Information protection, whether required by a security policy (e.g., access control to user-domain objects) or for system self-protection (e.g., maintaining the integrity of kernel code and data), must be maintained to a level of continuity consistent with the security policy and the security architecture assumptions; thus providing `continuous protection of information'. Simply stated, no guarantees about information integrity, confidentiality or privacy can be made if data is left unprotected while under control of the system (i.e., during creation, storage, processing or communication of information and during system initialization, execution, failure, interruption, and shutdown); one cannot claim to have a secure system without remaining secure for all aspects of said system. For maintaining a trustworthy system, and a network of distributed trustworthy components, a designer should not only prepare for expected inputs but also for possible invalid requests or malicious mutations that could occur in the future. Invalid requests should not result in a system state in which the system cannot properly enforce the security policy. The earlier mentioned concept of secure failure applies in that a rollback mechanism can return the system to a secure state or at least fail the component in a safe and secure manner that maintains the required level of trustworthiness (does not lower the overall trustworthiness of the entire distributed system). Furthermore, a designer can use the precepts of a reference monitor to provide continuous enforcement of a security policy, noting that every request must be validated, and the reference monitor must protect itself. Ideally the reference monitor component would be ``perfect'' in the sense of being absolutely trustworthy and not requiring an upgrade/modification path (thus limiting this element's chance of becoming compromised). Any designer must ensure protection of the system by choosing interface parameters so that security-critical values are provided by more trustworthy components. To eliminate time-of-check-to-time-of-use vulnerabilities, the system's security-relevant operations should appear atomic. It could also be desirable to allow system security policies to be ``modifiable'' at runtime, in the case of needing to adjust to catastrophic external events. This raises the complexity of the system, but does allow for flexibility in the face of failure; one way to keep such runtime policy changes traceable and tamper-evident is sketched below.
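One way to keep runtime policy changes traceable, and the corresponding audit trail tamper-evident, is to chain each change record to its predecessor with a cryptographic hash, as sketched below in Python. The record fields and method names are illustrative assumptions of ours; this is not a mechanism prescribed by the cited literature.
\begin{verbatim}
# Sketch of a tamper-evident audit trail for runtime policy changes:
# each record is chained to the previous one by a SHA-256 hash, so any
# after-the-fact modification of the log is detectable on verification.
import hashlib
import json

class AuditTrail:
    def __init__(self):
        self.records = []
        self._last_hash = "0" * 64  # genesis value

    def log_policy_change(self, entity, old_policy, new_policy):
        record = {"entity": entity, "old": old_policy,
                  "new": new_policy, "prev_hash": self._last_hash}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        record["hash"] = digest
        self.records.append(record)
        self._last_hash = digest

    def verify(self):
        """Recompute the chain; return False if any record was altered."""
        prev = "0" * 64
        for rec in self.records:
            body = {k: v for k, v in rec.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev_hash"] != prev or recomputed != rec["hash"]:
                return False
            prev = rec["hash"]
        return True
\end{verbatim}
Any after-the-fact edit to a logged change breaks the chain and is caught by \texttt{verify()}, supporting the nonrepudiation property discussed earlier; protecting the log from unauthorized access in the first place still requires the access control measures described above.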
Any changes to security policies must not only be traceable but also verifiable; it must be possible to verify that changes do not violate security policies. This could be handled by a central reference monitor. Following this thread of thinking, a system architect/designer should understand the consequences of allowing modifiable policies within the system. Depending on the type of access control and actions that are allowed and controlled by policies, certain configuration changes may lead to inconsistent states of discontinuous protection, due to the complex and undecidable nature of the problem of allowing runtime changes to the security policies of the system. In other words, even modifications/updates need to be planned and documented rigorously for the purpose of maintaining a secure and trustworthy system. System modification procedures must maintain system security with respect to the goals, objectives, and requirements of owners, allowing for `secure system modification'. Without proper planning and documentation, upgrades and modifications can transform a secure system into an insecure one. These are similar concepts to `secure system evolution' at the network scope of these security components.
@@ -174,7 +174,7 @@ At this point, it is the hope of the author that the reader can see how the need
 \end{quotation}
Focusing efforts on rigorous design documentation allows security concerns to be recognized early in the development process, and these aspects can be given sufficient attention in subsequent stages of the device's life cycle. ``By controlling system security during architectural refinement, we can decrease software production costs and speed up the time to market~\cite{ZhouFoss2006}.'' This approach also enhances the role of the software architects by requiring that their decisions be not only for functional decomposition, but also for non-functional requirements fulfillment. Hence, virtualization/automation of security development is not only effective but also enticing. As with any new system there should be `sufficient user documentation'. Users should be provided with adequate documentation and other information such that they contribute to, rather than detract from, a system's security. The availability of documentation and training can help to ensure a knowledgeable cadre of users, developers, and administrators. If any level of user does not know how to use a component properly, does not know the standard security procedures, or does not know the proper behavior to prevent social engineering attacks, then said user can easily introduce new system vulnerabilities. Complexity must be minimized. This point was touched upon earlier, but is just as valid from a documentation standpoint. User documentation should also be kept as simple as possible to ensure adoptability by a larger group of new users. Having complex user documentation is problematic because if users are unable to understand how to implement/use a system they will refuse to use it, thus leading to the death of said system. If on-line documentation is inadequate, then it should be obvious that written documentation and appropriate training are needed.
-Just as there is a reuirement for adequate documentation there is also the need for `procedural rigor'. The rigor of the system's life cycle process should be commensurate with its intended trustworthiness. Procedural rigor defines the depth and detail of a system's lifecycle procedures.
These rigors contribute to the assurance that a system is correct and free of unintended functionality in two ways. Firstly, imposing a set of checks and balances on the life cycle process such that the introduction of unspecified functionality is thwarted. Secondly, applying rigorous procedures to specifications and other design documents contribute to the ability of users to understand a system as it has been built, rather than being misled by inaccurate system representation, thus helping ensure that security and functional objectives of the system have been met. Highly rigorous development procedures supporting high trustworthiness are costly to follow. However, the lowered cost of ownership resulting from fewer flaws and security breaches during the product maintenance phase can help to mitigate the higher initial development costs associated with a rigorous life cycle process.
+Just as there is a requirement for adequate documentation, there is also the need for `procedural rigor'. The rigor of the system's life cycle process should be commensurate with its intended trustworthiness. Procedural rigor defines the depth and detail of a system's lifecycle procedures. These rigors contribute to the assurance that a system is correct and free of unintended functionality in two ways. First, they impose a set of checks and balances on the life cycle process such that the introduction of unspecified functionality is thwarted. Second, applying rigorous procedures to specifications and other design documents contributes to the ability of users to understand a system as it has been built, rather than being misled by inaccurate system representation, thus helping ensure that the security and functional objectives of the system have been met. Highly rigorous development procedures supporting high trustworthiness are costly to follow. However, the lowered cost of ownership resulting from fewer flaws and security breaches during the product maintenance phase can help to mitigate the higher initial development costs associated with a rigorous life cycle process.
The reasoning for having procedural rigor and sufficient user documentation is to allow for `repeatable, documented procedures'. Techniques used to construct a component should permit the same component to be completely and correctly reconstructed at a later time. Repeatable and documented procedures support the creation of components that are identical to a component created earlier that may be in widespread use. The Common Criteria standard~\cite{CommonCriteria} provides a framework for the derivation of system requirements that comprises the following steps: definition of system goals and its concept of operation; identification of threats to the defined system; identification of assumptions regarding the system and its environment; identification of organizational policies external to the system; identification of security objectives for the system and its environment based on the previous steps; and specification of requirements that will meet the objectives (a small sketch of this derivation chain is given below). The last, and by no means least, important topic that must be tackled in this section is the question of what exactly the research challenges are. There has been a lot of information, ideas, and principles presented over the course of this writing, along with parallels to existing research and methodologies that can be almost directly applied to the concept of mapping security development to platform-based design.
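As a small illustration of the Common Criteria-style derivation chain listed above, the sketch below (in Python) records each step explicitly so that every requirement stays traceable to the goals, threats, and policies that produced it. The \texttt{SecurityTarget} structure and its field names are illustrative assumptions of ours, not Common Criteria syntax.
\begin{verbatim}
# Sketch of the requirement-derivation chain: goals -> threats ->
# assumptions/policies -> objectives -> requirements, kept as explicit
# data so each requirement remains traceable to its motivation.
from dataclasses import dataclass, field

@dataclass
class SecurityTarget:
    goals: list = field(default_factory=list)        # goals / concept of operation
    threats: list = field(default_factory=list)      # threats to the system
    assumptions: list = field(default_factory=list)  # system and environment
    policies: list = field(default_factory=list)     # external organizational policies
    objectives: list = field(default_factory=list)   # derived from the steps above
    requirements: list = field(default_factory=list) # specifications meeting objectives

    def derive_objective(self, text, counters):
        """Record an objective together with the threats it counters."""
        self.objectives.append({"objective": text, "counters": counters})

target = SecurityTarget(goals=["protect audit data at rest"])
target.threats.append("unauthorized modification of the audit trail")
target.derive_objective("audit records must be tamper-evident",
                        counters=["unauthorized modification of the audit trail"])
target.requirements.append("hash-chain every audit record (see the earlier sketch)")
\end{verbatim}
Keeping the chain explicit in this way is also what allows the `repeatable, documented procedures' above to reconstruct not just a component but the reasoning behind its requirements.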
The primary cost of developing security, and running a secure system, is time. There are the monetary and hardware costs of security development and implementation, but even those aspects all have a time cost coupled with them. Time, while being the most expensive part of security design, is also the aspect that can be tackled and minimized with rigorous planning and documentation. Taking into account that even the development of documentation and standards has its own time cost associated with it, this early-phase development can also diminish the time-cost impact of later steps in the system's development/implementation life-cycle. Security must be paramount from the very beginning of any design and development cycle.