From 59ae11ce756ea4f5cf64fe6a30defb5ca8986d40 Mon Sep 17 00:00:00 2001
From: Paul Wortman
Date: Tue, 21 Jul 2015 15:49:15 -0400
Subject: [PATCH] Edits to spelling and additions to paper

Signed-off-by: Paul Wortman
---
 PBDSecPaper.tex | 57 ++++++++++++++++++++++++++-----------------------
 1 file changed, 30 insertions(+), 27 deletions(-)

diff --git a/PBDSecPaper.tex b/PBDSecPaper.tex
index aeeaabb..3486acd 100644
--- a/PBDSecPaper.tex
+++ b/PBDSecPaper.tex
@@ -173,48 +173,51 @@ \section{Security}
 \end{figure*}
 \begin{itemize}
-\item Scopes of security/trustworthiness
+\item As promised earlier, the paper will now examine the different scopes of security/trustworthiness: local, network, and distributed.
 \begin{itemize}
 \item Local (self)
 \begin{itemize}
- \item The local scope encompasses a security elements own abilities, trustworthiness, and the dependencies of that element (e.g. least common mechanisms, reduced complexity, minimized sharing, and the conflict between this as least common mechanisms).
+ \item The local scope encompasses a security element's own abilities, trustworthiness, and the dependencies of that element (e.g. least common mechanisms, reduced complexity, minimized sharing, and the conflict between minimized sharing and least common mechanisms).
 \item Failure - a condition in which, given a specifically documented input that conforms to specification, a component or system exhibits behavior that deviates from its specified behavior.
 \item Module/Database - a unit of computation that encapsulates a database and provides an interface for the initialization, modification, and retrieval of information from the database. The database may be either implicit, e.g. an algorithm, or explicit.
 \item Process(es) - a program(s) in execution.
- \item Least Common Mechanisms - If multiple components in the system require the same function of mechanism, then there should be a common mechanism that can be used by all of them; thus various components do not have separate implementations of the same function but rather the function is create once (e.g. device drivers, libraries, OS resource managers). The benefit of this being to minimize complexity of the system by avoiding unnecessary duplicate mechanims. Furthermore, modifications to a common function can be performed (only) once and impact of the proposed modifications allows for these alterations to be more easily understood in advance.
+ \item Least Common Mechanisms - If multiple components in the system require the same function or mechanism, then there should be a common mechanism that can be used by all of them; thus various components do not have separate implementations of the same function but rather the function is created once (e.g. device drivers, libraries, OS resource managers). The benefit of this is to minimize the complexity of the system by avoiding unnecessary duplicate mechanisms. Furthermore, modifications to a common function can be performed (only) once, and the impact of the proposed modifications can be more easily understood in advance.
 \item Reduced Complexity - The simpler a system is, the fewer vulnerabilities it will have. From a security perspective, the benefit to this simplicity is that it is easier to understand whether an intended security policy has been captured in system design. At a security model level, it can be easier to determine whether the initial system state is secure and whether subsequent state changes preserve the system security.
Given the current state of the art of systems, the conservative assumption is that every complex system will contain vulnerabilities and it will be impossible to eliminate all of them, even in the most highly trustworthy of systems.
- \item Minimized Sharing - No computer resource should be shared between components or subjects (e.g. processes, functions, etc.) unless it is necessary to do so. To protect user-domain information from active entities, no information should be shared unless that sharing has been explicitly requested and granted. Encapsulation is a design discipline or compiler feature for ensuring there are no extraneous execution paths for accessing private subsets of memory/data. Minimized sharing influenced by common mechanisms can lead to designs being reentrant/virtualized so that each component depending mechanism will have its own virtual private data space; parition resources into discrete, private subsets for each dependent component. A further consideration of minimized sharing is to avoid covert timing channels in which the processor is one of the shared components. In other words, the scheduling algorithms must ensure that each depending component is allocated a fixed amount of time to access/interact with a given shared space/memory. Development of a technique for controlled sharing is requires execution durations of shared mechanisms (or mechanisms and data structures that determine duration) to be explicitly stated in design specification documentation so that effects of sharing can be verified and evaluated.
- \item As the reader has undoutably seen, there are conflict within just these simple concepts and principles for the local scope of trustworthiness of a security component. The principles of least common mechanism and minimal sharing directly appose each other, while still being aspects of the same system. Minimizing system complexity through the use of common mechanisms does lower hardware space requirements for an architectural platform, but at the same time does also maximize the amount of sharing occuring within the security component. A designer can use one of the development techniques already mention to help balance these two principles, but in the end the true design question is: what information is safe to share and what information must be protected under lock and key?
+ \item Minimized Sharing - At the lowest level of secure data, no computer resource should be shared between components or subjects (e.g. processes, functions, etc.) unless it is necessary to do so. To protect user-domain information from active entities, no information should be shared unless that sharing has been explicitly requested and granted. Encapsulation is a design discipline or compiler feature for ensuring there are no extraneous execution paths for accessing private subsets of memory/data. Minimized sharing influenced by common mechanisms can lead to designs being reentrant/virtualized, so that each depending component has its own virtual private data space; resources are partitioned into discrete, private subsets for each dependent component. This in turn can be seen as a virtualization of hardware, a concept already illustrated earlier with the discussion of platform-based design. A further consideration of minimized sharing is to avoid covert timing channels in which the processor is one of the shared components.
In other words, the scheduling algorithms must ensure that each depending component is allocated a fixed amount of time to access/interact with a given shared space/memory. Development of a technique for controlled sharing requires the execution durations of shared mechanisms (or the mechanisms and data structures that determine duration) to be explicitly stated in the design specification documentation so that the effects of sharing can be verified and evaluated. This once again illustrates the need for rigorous standards to be clearly documented.
+ \item As the reader has undoubtedly seen, there are conflicts within just these simple concepts and principles for the local scope of trustworthiness of a security component. The principles of least common mechanism and minimal sharing directly oppose each other, while still being desirable aspects of the same system. Minimizing system complexity through the use of common mechanisms lowers the hardware space requirements for an architectural platform, but at the same time maximizes the amount of sharing occurring within the security component. A designer can use one of the development techniques already mentioned to help balance these two principles, but in the end the true design questions are: what information is safe to share, what information must be protected under lock and key, and how can that information then be communicated with high security and fidelity?
 \end{itemize}
 \item Network (communication)
 \begin{itemize}
- \item How to elements communicate? Rigorous definition of input/output nets for componenets and the methods for communication (buses, encryption, multiplexing).
+ \item How does a designer decide how elements communicate? Rigorous definitions of the input/output nets for components and the methods for communication (buses, encryption, multiplexing) must be documented and distributed as the standard methods for communicating.
 \item Service - processing or protection provided by a component to users or other components. E.g., communication service (TCP/IP), security service (encryption, firewall). These services represent the communication that occurs between different elements in a secure system. Each service should be documented to rigorously define the input and output nets for each component and the method used for communication (buses, encryption, multiplexing). Since these services, and their actions, are central to the operation/behavior of a secure system, there is a series of considerations, principles, and policies that a system architect/designer must acknowledge when deciding on the layout of security components within a network.
- \item Principle of Secure Communication Channels - when composing a system where there is a threat to communication between components, each communications channel must be trustworthy to a level commensurate with security dependencies it supports. In other words, how much it is trusted to perform its security functions by other components.
+ \item Principle of Secure Communication Channels - when composing a system where there is a threat to communication between components, each communications channel must be trustworthy to a level commensurate with the security dependencies it supports. In other words, how much the component is trusted by other components to perform its security functions.
 \begin{itemize}
 \item Several techniques can be used to mitigate threats to the communication channels in use.
Use of a channel may be restricted by protecting access to it with a suitable access control mechanism (e.g. a reference monitor located beneath or within each component). \textit{\textbf{NOTE: MAKE SURE TO MENTION REFERENCE MONITOR BY THIS POINT}} End-to-end communications technologies (e.g. encryption) may be used to eliminate security threats in the communication channel's physical environment. Once again, the intrinsic characteristics assumed for and provided by the channel must be specified with such documentation that it is possible for system designers to understand the nature of the channel as initially constructed and to assess the impact of any subsequent changes to the system. Without this rigorous documentation and standardization, the trustworthiness of the communications between security elements cannot be assured.
 \end{itemize}
- \item Self-Reliant Trustworthiness - Systems should minimize their reliance on external components for system trustworthiness. A corollary to this relates to the ability of a component to operate in isolation and then resynchronize with other components when it is rejoin with them. In other words, if a system were required to maintain a connection with another external entity in order to maintain its trustworthiness, then that very system would be vulnerable to drop in connection/communication channels. Thus from a network standpoint a system should be trustworthy by default with the external connection being used as a supplement to the component's function. \textit{\textbf{Add information about Partially Ordered Dependencies}}
+ \item Self-Reliant Trustworthiness - Systems should minimize their reliance on external components for system trustworthiness. A corollary to this relates to the ability of a component to operate in isolation and then resynchronize with other components when it is rejoined with them. In other words, if a system were required to maintain a connection with another external entity in order to maintain its trustworthiness, then that very system would be vulnerable to drops in its connection/communication channels. Thus, from a network standpoint, a system should be trustworthy by default, with the external connection being used as a supplement to the component's function.
+ \begin{itemize}
+ \item \textit{\textbf{Add information about Partially Ordered Dependencies}}
+ \end{itemize}
 \item Secure System Evolution - The system should be built to facilitate the maintenance of its security properties in the face of changes to its interface, functionality, structure, or configuration. Changes may include upgrades to the system and maintenance activities.
 \begin{itemize}
 \item Benefits include: reduced lifecycle costs for the vendor, reduced costs of ownership for the user, and improved system security.
- \item \textbf{Note:} Most systems can aniticipate maintenance, upgrades, and chages to configuration. If component constructed using precepts of modularity and information hiding; easier tp replace without disrupting rest of the system.
- \item System designer needs to take into account the impact of dynamic reconfiguration will have on the secure state of the system. Just as it is easier to build trustworthiness into a system from the outset (and for highly trustworthy systems, impossible to ahcieve without doing so), it is easier to plan for change than to be surprised by it~\cite{Benzel2005}.
+ \item \textbf{Note:} Most systems can anticipate maintenance, upgrades, and changes to configuration.
If a component is constructed using the precepts of modularity and information hiding, it is easier to replace without disrupting the rest of the system.
+ \item The system designer needs to take into account the impact of dynamic reconfiguration on the secure state of the system. Just as it is easier to build trustworthiness into a system from the outset (and for highly trustworthy systems, impossible to achieve without doing so), it is easier to plan for change than to be surprised by it~\cite{Benzel2005}.
 \end{itemize}
- \item Secure Failure - Failure in system function or mechanism should not lead to violation of security policy. Ideally the system should be capable of detecting failure at any stage of operation (initialization, normal operation, shutdown, maintenace, error detection and recovery) and take appropriate steps to ensure security policies are not violated. Touching on the earlier idea of secure system evolution, reconfiguration function of the system should be designed to ensure continuous enforcement of security policy during various phases of reconfiguration. Once a failed security function is detected, the system may reconfigure itself to cirumvent the failed component, while maintaining security, and stilll provide all or part of the functionality of the original system, or completely shut itself down to prevent any (further) violation in security policies. Another method for achieving this is to rollback to a secure state (which may be the initial state) and then either shutdown or replace the service/component that failed with an orthogonal or repliacted mechanisms.
+ \item Secure Failure - Failure in a system function or mechanism should not lead to violation of any security policy. Ideally the system should be capable of detecting failure at any stage of operation (initialization, normal operation, shutdown, maintenance, error detection and recovery) and take appropriate steps to ensure security policies are not violated, as most modern machines already do. Touching on the earlier idea of secure system evolution, the reconfiguration function of the system should be designed to ensure continuous enforcement of security policies during various phases of reconfiguration. Once a failed security function is detected, the system may reconfigure itself to circumvent the failed component, while maintaining security, and still provide all or part of the functionality of the original system, or completely shut itself down to prevent any (further) violation of security policies. Another method for achieving this is to roll back to a secure state (which may be the initial state) and then either shut down or replace the service/component that failed with an orthogonal or replicated mechanism.
 \item Dealing with Failure - Failure of a component may or may not be detectable to the components using it. For this reason components should fail in a state that denies rather than grants access. A designer could employ multiple protection mechanisms (whose features can be significantly different) to reduce the possibility of attack repetition, but it should be noted that redundancy techniques may increase resource usage and adversely affect the system performance. Instead, the atomicity properties of a service/component should be well documented and characterized so that a component availing itself of the service can detect and handle an interruption event appropriately, similar to the `self-analysis' function that TPMs have.
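+ \item To make the fail-closed behavior described above concrete, a minimal sketch follows. It is illustrative only: the names (\texttt{SecurePolicy}, \texttt{SecureComponent}) and the checkpoint/rollback scheme are assumptions of this sketch, not an existing implementation or a prescribed design.
+\begin{verbatim}
+# Illustrative sketch (assumed names): a component that denies by
+# default and rolls back to its last known-secure state on failure.
+import copy
+
+class SecurePolicy:
+    """Toy policy: a fixed set of permitted operations."""
+    def __init__(self, permitted):
+        self.permitted = set(permitted)
+
+    def permits(self, request):
+        return request.get("op") in self.permitted
+
+class SecureComponent:
+    def __init__(self, policy):
+        self.policy = policy
+        self.state = {"granted": []}                 # initial (secure) state
+        self.checkpoint = copy.deepcopy(self.state)  # last known-secure state
+
+    def handle(self, request):
+        if not self.policy.permits(request):
+            return "DENY"                            # fail closed, never open
+        try:
+            self.state["granted"].append(request["op"])
+            if request.get("fault"):                 # simulated internal failure
+                raise RuntimeError("component failure")
+            self.checkpoint = copy.deepcopy(self.state)  # commit on success
+            return "ALLOW"
+        except RuntimeError:
+            self.state = copy.deepcopy(self.checkpoint)  # roll back securely
+            return "DENY"                            # failure never grants access
+
+component = SecureComponent(SecurePolicy({"read"}))
+print(component.handle({"op": "read"}))                 # ALLOW
+print(component.handle({"op": "write"}))                # DENY (not permitted)
+print(component.handle({"op": "read", "fault": True}))  # DENY (rolled back)
+\end{verbatim}
+ The important property is that every failure path ends in denial and in a state from which the documented security policy still holds.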
 \end{itemize}
 \item Within distributed system
 \begin{itemize}
- \item Trust (degree to which the user or a componenet depends on the trustworthiness of another component). Secure System Modification.
+ \item Trust (degree to which the user or a component depends on the trustworthiness of another component).
 \item Hierarchical Trust for Components - Security dependencies in a system will form a partial ordering if they preserve the principle of trusted components. This is essential to eliminate circular dependencies with regard to trustworthiness. Trust chains have various manifestations, but this should not prohibit the use of overly trustworthy components.
- \item Hierarchical Protections - Component need not be protected from more trustworthy components. In the most degenerate case of most trusted component, the component must protect itself from all other components. One should note that a trusted computer system need not protect itself from an equally trustworthy user.
- \item Secure Distributed Composition - Composition of distributed components that enforce the same security policy should result in a system that enforces that policy at least as well as individualy components do. If components are composed into a distributed system that supports the same policy, and information contained in objects is transmitted between components, then the transmitted information must be at least as well protected in the receiving component as it was in the sending component. To ensure correct system-wide level of confidence of correct policy enforcement, the security architecture of dsitributed composite system must be thoroughly analyzed.
- \item Accountability and Traceability - Actions that are security-relevant must be traceable to the entity on whose behalf the action is being taken. This requires the designer to put into place a trustworthy infrastructure that can record details about actions that affect system security (e.g., audit subsystem). This system must not only be able to uniquely identify entity on whose behalf the action is being carried out, but also record the relevant sequence of actions that are carried out. An accountability policy ought to require the audit trail itself to be protected from unauthorized access and modifications. Associating actions with system entities, and ultimately with users, and making the audit trail secure against unauthorized access and modifications provide nonrepudiation, as once some action is recorded, it is not possible to change the audit trail. Any designer should note that if a violation occurs, analysis of the audit log may provide additional infomraiton that may be helpful in determinging the path or component that allowed the violation of the security policy.
- \item Continuous Protection on Information - Information protection required by security policy (e.g., access control to user-domain objects) or for system self-protection (e.g., maintining integrity of kernel code and data) must be protected to a level of continueity consistent with the security policy and the security architecture assumptions. Simpley stated, no guarentees about information integrity, configentiality or privacy can be made if data is left unprotected while under control of the system (i.e., during creation, storages, processing or communication of information and during system initialization, execution, failure, interruption, and shutdown); one cannot claim to have a secure system without remaining secure for all aspects of said system.
For maintaining a trustworthy system, and network of distributed truthworhty components, a designer should not only prepare for expected inputs but also for possible invalid requests or malicious mutations that could occur in the future. Invalid requests should not result in a system state in which the system cannot properly enforce the security policy. The earlier mentioned concept of secure failure applies in that a roll back mechanism can return the system to a secure state or at least fail the component in a safe and secure manner that maintains the required level of trustworhiness (does not lower the overall trustworthiness of the entire distributed system). Furthermore, a designer can use the precepts of a reference monitor to provide continuous enforcement of security policy, noting that every request must be validated, and the reference monitor must protect iteself. Ideally the reference monitor component would ``perfect'' in the sense of being absolutely trustworthy and not requiring an upgrade/modification path (thus limiting this element's chance of becoming compromised).
+ \item Hierarchical Protections - A component need not be protected from more trustworthy components. In the most degenerate case of the most trusted component, the component must protect itself from all other components. One should note that a trusted computer system need not protect itself from an equally trustworthy user.
+ \item Secure Distributed Composition - Composition of distributed components that enforce the same security policy should result in a system that enforces that policy at least as well as the individual components do. If components are composed into a distributed system that supports the same policy, and information contained in objects is transmitted between components, then the transmitted information must be at least as well protected in the receiving component as it was in the sending component. To ensure a correct system-wide level of confidence of correct policy enforcement, the security architecture of the distributed composite system must be thoroughly analyzed.
+ \item Accountability and Traceability - Actions that are security-relevant must be traceable to the entity on whose behalf the action is being taken. This requires the designer to put into place a trustworthy infrastructure that can record details about actions that affect system security (e.g., an audit subsystem). This system must not only be able to uniquely identify the entity on whose behalf the action is being carried out, but also record the relevant sequence of actions that are carried out. An accountability policy ought to require the audit trail itself to be protected from unauthorized access and modifications. Associating actions with system entities, and ultimately with users, and making the audit trail secure against unauthorized access and modifications provides nonrepudiation, as once some action is recorded, it is not possible to change the audit trail. Any designer should note that if a violation occurs, analysis of the audit log may provide additional information that may be helpful in determining the path or component that allowed the violation of the security policy.
+ \item Continuous Protection on Information - Information protection required by a security policy (e.g., access control to user-domain objects) or for system self-protection (e.g., maintaining the integrity of kernel code and data) must be protected to a level of continuity consistent with the security policy and the security architecture assumptions.
Simply stated, no guarantees about information integrity, confidentiality, or privacy can be made if data is left unprotected while under control of the system (i.e., during the creation, storage, processing, or communication of information and during system initialization, execution, failure, interruption, and shutdown); one cannot claim to have a secure system without remaining secure for all aspects of said system. To maintain a trustworthy system, and a network of distributed trustworthy components, a designer should not only prepare for expected inputs but also for possible invalid requests or malicious mutations that could occur in the future. Invalid requests should not result in a system state in which the system cannot properly enforce the security policy. The earlier-mentioned concept of secure failure applies in that a rollback mechanism can return the system to a secure state or at least fail the component in a safe and secure manner that maintains the required level of trustworthiness (does not lower the overall trustworthiness of the entire distributed system). Furthermore, a designer can use the precepts of a reference monitor to provide continuous enforcement of a security policy, noting that every request must be validated and that the reference monitor must protect itself. Ideally the reference monitor component would be ``perfect'' in the sense of being absolutely trustworthy and not requiring an upgrade/modification path (thus limiting this element's chance of becoming compromised).
 \begin{itemize}
 \item Any designer must ensure protection of the system by choosing interface parameters so that security-critical values are provided by more trustworthy components. To eliminate time-of-check-to-time-of-use vulnerabilities, the system's security-relevant operations should appear atomic.
- \item It could also be desirable to allow system security policies to be ``modifiable'' at runtime; in the case of needing to adjust to catastrophic external events. Any changes to security policies must not only be traceable but also verifiable; it must be possible to verify that changes do not violate security policies. Following this thread of thinking, a system architect/designer should understand the consequences of allowing modifiable policies within the system. Depending on the type of access control and actions that are allowed and controlled by policies, certain configuration changes may lead to inconsistent states of discontinuous protection due to the complex and undecidable nature of the problem of allowing runtime changes to the security policies of the system.
+ \item It could also be desirable to allow system security policies to be ``modifiable'' at runtime, for instance when needing to adjust to catastrophic external events. This raises the complexity of the system, but does allow for flexibility in the face of failure. Any changes to security policies must not only be traceable but also verifiable; it must be possible to verify that changes do not violate security policies. This could be handled by a central reference monitor. Following this thread of thinking, a system architect/designer should understand the consequences of allowing modifiable policies within the system. Depending on the type of access control and actions that are allowed and controlled by policies, certain configuration changes may lead to inconsistent states of discontinuous protection due to the complex and undecidable nature of the problem of allowing runtime changes to the security policies of the system.
In other words, even modifications/updates need to be planned and documented rigorously for the purpose of maintaining a secure and trustworthy system.
 \end{itemize}
 \item Secure System Modification - System modification procedures must maintain system security with respect to the goals, objectives, and requirements of its owners. Without proper planning and documentation, upgrades and modifications to systems can transform a secure system into an insecure one.
 \end{itemize}
@@ -223,19 +226,19 @@ \section{Security}
 \begin{itemize}
 \item When automating the development of security systems, there are three key elements of the system that need to be examined/accounted for in the virtualization stage: security mechanisms, security principles, and security policies.
 \begin{itemize}
- \item For the purpose of reiteration, security mechanisms are the system artifacts that are used to enforece system security policies. Security principles are the guidelines or rules that when followed during system design will aid in making the system secure. Organizational security policies are ``the set of laws, rules, and practices that regulate how an organization manages, protects, and distributes sensitive information.''~\cite{Benzel2005} System Security Policies are rules that the information system enforces relative to the resources under its control to reflect the organizational security policy.
- \item Each of these aspects plays its part in determining the behvaior and function of the overall security system. \textbf{Illustrate the importance of these different security elements (mechanisms, principles and policies) and talk a little about the difference between organizational security policies and system security policies}. The security prinicples set the groundwork for how the system should behave and interact based on the expected user interactions. The security policies (both organizational and system) govern the rules and practices that regulate how the system, and its resources, is managed, how the information is protected, and how the system controls and distributes sensitive information. The security mechanisms are the implementations on these previous two aspects by being the system artifacts that are used to enforce the system security policies.
+ \item For the purpose of reiteration, security mechanisms are the system artifacts that are used to enforce system security policies. Security principles are the guidelines or rules that, when followed during system design, will aid in making the system secure. Organizational security policies are ``the set of laws, rules, and practices that regulate how an organization manages, protects, and distributes sensitive information.''~\cite{Benzel2005} System Security Policies are rules that the information system enforces relative to the resources under its control to reflect the organizational security policy.
+ \item Each of these aspects plays its part in determining the behavior and function of the overall security system. \textbf{Illustrate the importance of these different security elements (mechanisms, principles and policies) and talk a little about the difference between organizational security policies and system security policies}. The security principles set the groundwork for how the system should behave and interact based on the expected user interactions.
The security policies (both organizational and system) govern the rules and practices that regulate how the system, and its resources, is managed, how the information is protected, and how the system controls and distributes sensitive information. The security mechanisms are the implementations of these previous two aspects: the system artifacts that are used to enforce the system security policies.
 \end{itemize}
- \item In the same manner that these various security aspects (e.g. mechanisms, principles, policies) must be considered during developemtn automation, the software and hardware aspects must also come under consideration based on the desired behavior/functionality of the system under design. Could have security elements that attempt to optimize themselves to the system they are in based on a few pivot points (power, time, efficiency, level of randomness). Another option for the automated tool could trade out specific security components as an easier way to increase security without requiring re-design/re-construction of the underlying element. There is always the requirement that the overall trustworthiness of a new system must meet the standards of the security policies that `rule' the system. For these reasons a user would desire rigorous documentation that would lay out the requirements of each component, so that in the case of trying to replace faulty or damaged components there would be no loss to the overall trustworthiness of the system while also not introducing any vulnerabilities due to the inclusion of new system components.
- \item Virtualization should be used for exploring the design space, as it is hoped that it is obvious as to why. Not only is the cost of prototyping incredably expensive, but redesign is equally costly. Virtualization aids by removing the need for a physical prototyping (less monitary costs) and allows for more rapid exploration of the full design space. While the design time for such powerful tools will be expensive (both in monitary and temporal costs), the rewards of developing, validating, and evaluating this virtualization tool will offset the early design phase costs of an automation of security component design.
+ \item In the same manner that these various security aspects (e.g. mechanisms, principles, policies) must be considered during development automation, the software and hardware aspects must also come under consideration based on the desired behavior/functionality of the system under design. One could have security elements that attempt to optimize themselves to the system they are in based on a few pivot points (power, time, efficiency, level of randomness). Another option would be for the automated tool to trade out specific security components as an easier way to increase security without requiring re-design/re-construction of the underlying element (e.g. modularity). There is always the requirement that the overall trustworthiness of a new system must meet the standards of the security policies that `rule' the system. For these reasons a user would desire rigorous documentation that lays out the requirements of each component, so that in the case of trying to replace faulty or damaged components there would be no loss to the overall trustworthiness of the system, while also not introducing any vulnerabilities due to the inclusion of new system components.
+ \item Virtualization should be used for exploring the design space; the reasons are hopefully obvious.
Not only is prototyping incredibly expensive, but redesign is equally costly. Virtualization aids by removing the need for physical prototyping (lower monetary costs) and allows for more rapid exploration of the full design space. While the design time for such powerful tools will be expensive (in both monetary and temporal costs), the rewards of developing, validating, and evaluating this virtualization tool will offset the early design-phase costs of automated security component design.
 \end{itemize}
 \item Mapping of Security onto PBD structure
 \begin{itemize}
- \item ``Despite occasional cryptology-related attacks, most seucirty vulnerabilities result from poor software design and implementation, such as the ever-lasting buffer overrun bugs. Thus approaches to designing secure software, not just from a traditional cryptology viewpoint, but with a software engineering perspective, are needed to counter the current unsatisfactory situation.''~\cite{Ren2006}
- \item ``...allows security concerns to be recognized early in the development process and can be given sufficient attention in subsequent stages. By controlling system security during architectural refinement, we can decrease software production costs and speed up the time to market. This approach aslo enchances the role of the software architects by requiring that their decisions be not only for functional decomposition, but also for [non-functional requirements] fulfillment.''~\cite{ZhouFoss2006}
+ \item ``Despite occasional cryptology-related attacks, most security vulnerabilities result from poor software design and implementation, such as the ever-lasting buffer overrun bugs. Thus approaches to designing secure software, not just from a traditional cryptology viewpoint, but with a software engineering perspective, are needed to counter the current unsatisfactory situation.''~\cite{Ren2006}
+ \item ``...allows security concerns to be recognized early in the development process and can be given sufficient attention in subsequent stages. By controlling system security during architectural refinement, we can decrease software production costs and speed up the time to market. This approach also enhances the role of the software architects by requiring that their decisions be not only for functional decomposition, but also for [non-functional requirements] fulfillment.''~\cite{ZhouFoss2006}
 \item Apply here ideas of Sufficient User Documentation, Procedural Rigor, Repeatable, Documented Procedures.
 \end{itemize}
-\item The last, and by no means least, important topic that must be tackled in this section is the question of what exactly are the research challenges. There has been a lot of information, ideas, and principles presented over the course of this writing along with parallels to existing research and methodologies that can be almost directly applied to the concept of mapping security development to platform-based design. The primary cost of developing security, and running a secure system, is time. There are the monitary and hardware costs of security developement and implementation, but even those aspects all have a time cost coupled with them. Time, while being the most expensive part of security design is also the aspect that can be tackled and minimized with rigorous planning and documentation.
Taking into account that even the development of documentation and standards also has its own time cost associated with it, this early phase development can also diminsh the time-cpst impact for later steps in the system's development/implementation life-cycle.
+\item The last, and by no means least, important topic that must be tackled in this section is the question of what exactly the research challenges are. There has been a lot of information, ideas, and principles presented over the course of this writing, along with parallels to existing research and methodologies that can be almost directly applied to the concept of mapping security development to platform-based design. The primary cost of developing security, and running a secure system, is time. There are the monetary and hardware costs of security development and implementation, but even those aspects all have a time cost coupled with them. Time, while being the most expensive part of security design, is also the aspect that can be tackled and minimized with rigorous planning and documentation. Taking into account that even the development of documentation and standards has its own time cost associated with it, this early-phase development can also diminish the time-cost impact for later steps in the system's development/implementation life-cycle.
 \begin{itemize}
 \item How to put a cost on security? How to make models? What is high vs. low? (Are there models that exist?)
 \end{itemize}
@@ -248,12 +251,12 @@ \section{Conclusion}
 \begin{itemize}
 \item Need for standardization, or `contracts', for all elements that compose a larger distributed system. Reason being: without these `contracts' to clearly state the expectations of various inputs and outputs, there is no guarantee that any designs/developments will be able to harmonize with the rest of a complex system. Additionally, this is important for determining the trustworthiness of not only different elements, but of the entire distributed system as a whole.
 \end{itemize}
-\item Advantages: swap out old security modules with newer ones (re-use of base system), degree of system customization to meet system hardware/software needs
+\item Advantages of security mapped to PBD: swapping out old security modules for newer ones (re-use of the base system), a degree of system customization to meet system hardware/software needs, and ease of development (with lower costs).
 \item As with any new shift in design methodology, the largest cost in this new system would be the need for rigorous documentation and standardization of the process, components, and communication elements of said components.
 \item This is why the development of the groundwork for PBD-Security designs will be slow and arduous work, but the resulting `paydirt' will be a new set of virtualization tools at abstraction levels with design spaces not yet truly explored at regular levels. The hope of this paper is to begin designing a framework that pushes for not only better system design and development (PBD) but also for proper incorporation and planning of system security in an intelligent, rigorous, and documented/standardized way.
 \item As with the design of any tool, there are concerns during the development, evaluation, and validation processes~\cite{Pinto2006}. Common pitfalls of development are mishandling corner cases and inadvertently misinterpreting changes in the communication semantics.
Problems arise because of poor understanding and the lack of precise, rigorous definitions of the abstraction and refinement maps used in the design flow. Abstraction and refinement should be designed to preserve, whenever possible, the properties of the design that have already been established (e.g. the `contract' of the design).
-\item With these concepts in-mind, it should be obvious that security design \textbf{must} occur from the start! Unless security design is incorporated apriori, a developer can only hope to spend the rest of the development processes, and beyond, attempting to secure a system.
-\item Reference monitor seems a favorable choice as this sort of model is already used in distributed systems, but there is an extremely important need to maintain the security/trust/trustworthiness of this reference monitor (abstraction for necessary and sufficient features of component that enforces access controll in secure systems). It is the belief of this paper that an initial starting point for PBD-Security design development is to use this existing reference monitor concept, along with other developed tools (e.g. CBSE, TPM), and piece together the initial framework and early phase development for this new methodology.
+\item With these concepts in mind, it should be obvious that security design \textbf{must} occur from the start! Unless security design is incorporated a priori, a developer can only hope to spend the rest of the development process, and beyond, attempting to secure a system that treated security as optional.
+\item The reference monitor seems a favorable choice, as this sort of model is already used in distributed systems, but there is an extremely important need to maintain the security/trust/trustworthiness of this reference monitor (an abstraction of the necessary and sufficient features of the component that enforces access control in secure systems). It is the belief of this paper that an initial starting point for PBD-Security design development is to use this existing reference monitor concept, along with other developed tools (e.g. CBSE, TPM), and piece together the initial framework and early-phase development for this new methodology, so that future efforts can be spent developing and perfecting this technique (a minimal illustrative sketch of such a monitor is given below).
 \end{itemize}
 %references section
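+As a closing illustration of the proposed starting point, the following minimal sketch captures the essence of the reference monitor concept discussed above: every access request is mediated against an explicit policy, nothing is granted by default, and every decision is recorded for later audit. The sketch is illustrative only; the names (\texttt{ReferenceMonitor}, the access matrix) are assumptions of this paper's discussion, not an existing implementation.
+\begin{verbatim}
+# Illustrative sketch (assumed names): a toy reference monitor that
+# mediates every request and keeps an append-only audit trail.
+class ReferenceMonitor:
+    def __init__(self, access_matrix):
+        self._matrix = dict(access_matrix)   # (subject, object) -> allowed ops
+        self._audit = []                     # append-only audit trail
+
+    def check(self, subject, obj, op):
+        allowed = op in self._matrix.get((subject, obj), set())
+        self._audit.append((subject, obj, op, allowed))  # traceability
+        return allowed                       # deny unless explicitly granted
+
+    def audit_trail(self):
+        return tuple(self._audit)            # read-only view for review
+
+monitor = ReferenceMonitor({("alice", "fileA"): {"read"}})
+print(monitor.check("alice", "fileA", "read"))    # True
+print(monitor.check("alice", "fileA", "write"))   # False (default deny)
+print(monitor.check("bob", "fileA", "read"))      # False (no entry)
+print(monitor.audit_trail())
+\end{verbatim}
+In a platform-based design flow, such a monitor could serve as one of the pre-characterized platform components whose interface and audit behavior are fixed by its documented `contract'.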