diff --git a/PBDSecPaper.tex b/PBDSecPaper.tex
index 797f875..1c5b21f 100644
--- a/PBDSecPaper.tex
+++ b/PBDSecPaper.tex
@@ -152,17 +152,11 @@ As promised earlier, the paper will now examine the different scopes of security
 The local scope encompasses a security element's own abilities, trustworthiness, and the dependencies of that element (e.g. least common mechanisms, reduced complexity, minimized sharing, and the conflict between minimized sharing and least common mechanisms). The purpose of this section is to present the considerations, principles, and policies that govern the behavior and function of security elements/components at the local scope/level. First, this paper will reiterate the definitions stated in the Benzel et~al.~paper. Failure is a condition in which, given a specifically documented input that conforms to specification, a component or system exhibits behavior that deviates from its specified behavior. A module is a unit of computation that encapsulates a database and provides an interface for the initialization, modification, and retrieval of information from the database. The database may be either implicit, e.g. an algorithm, or explicit. Lastly, a process is a program in execution. To further define the actions/behavior of a given component in the local scope, this paper moves to outlining the principles that define component behavior at the local device level. \\
 The first principle is that of `Least Common Mechanisms'. If multiple components in the system require the same function of a mechanism, then there should be a common mechanism that can be used by all of them; thus various components do not have separate implementations of the same function, but rather the function is created once (e.g. device drivers, libraries, OS resource managers). The benefit of this is to minimize the complexity of the system by avoiding unnecessary duplicate mechanisms. Furthermore, modifications to a common function can be performed (only) once, and the impact of the proposed modifications can be more easily understood in advance. The simpler a system is, the fewer vulnerabilities it will have; this is the principle of `Reduced Complexity'. From a security perspective, the benefit of this simplicity is that it is easier to understand whether an intended security policy has been captured in the system design. At a security model level, it can be easier to determine whether the initial system state is secure and whether subsequent state changes preserve system security. Given the current state of the art of systems, the conservative assumption is that every complex system will contain vulnerabilities and it will be impossible to eliminate all of them, even in the most highly trustworthy of systems. At the lowest level of secure data, no computer resource should be shared between components or subjects (e.g. processes, functions, etc.) unless it is necessary to do so. This is the concept behind `Minimized Sharing'. To protect user-domain information from active entities, no information should be shared unless that sharing has been explicitly requested and granted. Encapsulation is a design discipline or compiler feature for ensuring there are no extraneous execution paths for accessing private subsets of memory/data.
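 
 To make the encapsulation precept concrete, consider the following minimal C sketch (illustrative only; the \texttt{keystore} module and its names are hypothetical rather than drawn from Benzel et~al.). The module's database is an opaque structure whose layout is visible only inside its own translation unit, so the initialization/modification/retrieval interface is the sole execution path to the private data:
 \begin{verbatim}
 /* module.h -- public interface; callers never see the database layout */
 typedef struct keystore keystore_t;                /* opaque handle  */
 keystore_t *keystore_init(void);                   /* initialization */
 int keystore_put(keystore_t *ks, int id, int val); /* modification   */
 int keystore_get(const keystore_t *ks, int id, int *out); /* retrieval */
 
 /* module.c -- private implementation; the only paths to this data
    are the three functions above                                     */
 #include <stdlib.h>
 struct keystore {
     int ids[16];
     int vals[16];
     int used;
 };
 keystore_t *keystore_init(void) {
     return calloc(1, sizeof(keystore_t));
 }
 int keystore_put(keystore_t *ks, int id, int val) {
     if (ks == NULL || ks->used == 16)
         return -1;                 /* fail closed when full          */
     ks->ids[ks->used] = id;
     ks->vals[ks->used] = val;
     ks->used++;
     return 0;
 }
 int keystore_get(const keystore_t *ks, int id, int *out) {
     for (int i = 0; ks != NULL && i < ks->used; i++)
         if (ks->ids[i] == id) { *out = ks->vals[i]; return 0; }
     return -1;                     /* deny by default: no match, no data */
 }
 \end{verbatim}
 Because callers hold only a \texttt{keystore\_t} pointer, reimplementing or replacing the private database cannot create extraneous access paths elsewhere in the system.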
 Minimized sharing influenced by common mechanisms can lead to designs being reentrant/virtualized so that each depending component has its own virtual private data space; that is, resources are partitioned into discrete, private subsets for each dependent component. This in turn can be seen as a virtualization of hardware, a concept already illustrated earlier in the discussion of platform-based design. A further consideration of minimized sharing is to avoid covert timing channels in which the processor is one of the shared components. In other words, the scheduling algorithms must ensure that each depending component is allocated a fixed amount of time to access/interact with a given shared space/memory. Development of a technique for controlled sharing requires the execution durations of shared mechanisms (or the mechanisms and data structures that determine those durations) to be explicitly stated in the design specification documentation so that the effects of sharing can be verified and evaluated, once again illustrating the need for rigorous standards to be clearly documented. As the reader has undoubtedly seen, there are conflicts within just these simple concepts and principles for the local scope of trustworthiness of a security component. The principles of least common mechanism and minimized sharing directly oppose each other, while still being desirable aspects of the same system. Minimizing system complexity through the use of common mechanisms does lower the hardware space requirements for an architectural platform, but at the same time maximizes the amount of sharing occurring within the security component. A designer can use one of the development techniques already mentioned to help balance these two principles, but in the end the true design question is: what information is safe to share, what information must be protected under lock and key, and how can that information then be communicated with high security and fidelity?
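 
 As a minimal sketch of the reentrancy and fixed-time-slot ideas above (assuming a hypothetical shared mechanism \texttt{shared\_copy} and C's standard \texttt{clock()} as a stand-in tick source), each depending component supplies its own private context, making the shared mechanism reentrant, and the scheduler charges every component one fixed-length slot regardless of how quickly it finishes:
 \begin{verbatim}
 #include <stdint.h>
 #include <string.h>
 #include <time.h>
 
 #define SLOT_TICKS ((clock_t)1000)  /* fixed, documented slot length */
 
 /* Each depending component owns a private partition of state, so the
    shared mechanism is reentrant and shares nothing across callers. */
 typedef struct { uint8_t buf[64]; size_t len; } comp_ctx_t;
 
 static void shared_copy(comp_ctx_t *ctx, const uint8_t *in, size_t n) {
     if (n > sizeof ctx->buf)
         n = sizeof ctx->buf;
     memcpy(ctx->buf, in, n);        /* touches only the caller's space */
     ctx->len = n;
 }
 
 /* Every component is charged the same fixed slot whether or not it
    finishes early, blunting the timing variation that a co-resident
    component could otherwise observe.                                */
 void run_fixed_slots(comp_ctx_t ctx[], const uint8_t *in[],
                      const size_t len[], int ncomp) {
     for (int c = 0; c < ncomp; c++) {
         clock_t t0 = clock();
         shared_copy(&ctx[c], in[c], len[c]);
         while (clock() - t0 < SLOT_TICKS)
             ;                       /* busy-wait padding to slot end */
     }
 }
 \end{verbatim}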
-\begin{itemize}
-\item Network (communication)
-  \begin{itemize}
-  \item How does a designer decide upon how elements communicate? How does one define a security component in terms of a network; how is communication done, and what concerns/challenges are there? Rigorous definitions of input/output nets for components and the methods for communication (buses, encryption, multiplexing) must be documented and distributed as the standard methods for communicating.
-  \item Service - processing or protection provided by a component to users or other components. E.g., communication service (TCP/IP), security service (encryption, firewall). These services represent the communication that occurs between different elements in a secure system. Each service should be documented to rigorously define the input and output nets for each component and the method used for communication (buses, encryption, multiplexing). Since these services, and their actions, are central to the operation/behavior of a secure system, there is a series of considerations, principles, and policies that a system architect/designer must acknowledge when deciding on the layout of security components within a network.
-  \item Principle of Secure Communication Channels - when composing a system where there is a threat to communication between components, each communications channel must be trustworthy to a level commensurate with the security dependencies it supports. In other words, commensurate with how much the component is trusted to perform its security functions by other components. Several techniques can be used to mitigate threats to the communication channels in use. Use of a channel may be restricted by protecting access to it with a suitable access control mechanism (e.g. a reference monitor located beneath or within each component). End-to-end communications technologies (e.g. encryption) may be used to eliminate security threats in the communication channel's physical environment. Once again, the intrinsic characteristics assumed for and provided by the channel must be specified with such documentation that it is possible for system designers to understand the nature of the channel as initially constructed and to assess the impact of any subsequent changes to the system. Without this rigorous documentation and standardization, the trustworthiness of the communications between security elements cannot be assured.
-  \item `Self-Reliant Trustworthiness' means that systems should minimize their reliance on external components for system trustworthiness. A corollary to this relates to the ability of a component to operate in isolation and then resynchronize with other components when it is rejoined with them. In other words, if a system were required to maintain a connection with another external entity in order to maintain its trustworthiness, then that very system would be vulnerable to a drop in connection/communication channels. Thus, from a network standpoint, a system should be trustworthy by default, with the external connection being used as a supplement to the component's function. The principle of `Partially Ordered Dependencies' states that calling, synchronization, and other dependencies in the system should be partially ordered; e.g. for certain pairs of elements in a set, one of the elements precedes the other. A fundamental tool in system design is `layering'. A system can be organized into functionally related modules of components, where layers are linearly ordered with respect to inter-layer dependencies. Inherent problems of circularity can be more easily managed if circular dependencies are constrained to occur within layers. In other words, if a shared mechanism also makes calls to or otherwise depends on services of calling mechanisms, creating a circular dependency, performance and liveness problems can result. Partially ordered dependencies and system layering contribute significantly to the simplicity and coherency of system design.
-  \item A system should be built to facilitate the maintenance of its security properties in the face of changes to its interface, functionality, structure, or configuration; in other words, to allow for `secure system evolution'. Changes may include upgrades to the system and maintenance activities. The benefits of designing a system for secure system evolution are reduced lifecycle costs for the vendor, reduced costs of ownership for the user, and improved system security. Most systems can anticipate maintenance, upgrades, and changes to configuration. If a component is constructed using the precepts of modularity and information hiding, then it becomes easier to replace components without disrupting the rest of a system. A system designer needs to take into account the impact of dynamic reconfiguration on the secure state of the system. Just as it is easier to build trustworthiness into a system from the outset (and for highly trustworthy systems, impossible to achieve without doing so), it is easier to plan for change than to be surprised by it~\cite{Benzel2005}.
-  \item A system should be able to handle `secure failure'.
-  Failure in a system function or mechanism should not lead to a violation of any security policy. Ideally the system should be capable of detecting failure at any stage of operation (initialization, normal operation, shutdown, maintenance, error detection and recovery) and take appropriate steps to ensure security policies are not violated, as is done with most machines nowadays anyway. Touching on the earlier idea of secure system evolution, the reconfiguration function of the system should be designed to ensure continuous enforcement of security policies during the various phases of reconfiguration. Once a failed security function is detected, the system may reconfigure itself to circumvent the failed component, while maintaining security, and still provide all or part of the functionality of the original system, or completely shut itself down to prevent any (further) violation of security policies. Another method for achieving this is to roll back to a secure state (which may be the initial state) and then either shut down or replace the failed service/component with an orthogonal or replicated mechanism. Failure of a component may or may not be detectable to the components using it; thus one must design a method for `dealing with failure'. For this reason components should fail in a state that denies rather than grants access. A designer could employ multiple protection mechanisms (whose features can be significantly different) to reduce the possibility of attack repetition, but it should be noted that redundancy techniques may increase resource usage and adversely affect system performance. Instead, the atomicity properties of a service/component should be well documented and characterized so that a component availing itself of the service can detect and handle interruption events appropriately, similar to the `self-analysis' function that TPMs have. A well designed reference monitor could fill most of these roles, though it would require that the reference monitor can `self-analyze' itself for trustworthiness. While even this would not be a perfect solution, it does help limit total failure to the reference monitor instead of some arbitrary security component.
-  \end{itemize}
-\end{itemize}
+How does a designer decide upon how elements communicate? How does one define a security component in terms of a network; how is communication done, and what concerns/challenges are there? Rigorous definitions of input/output nets for components and the methods for communication (buses, encryption, multiplexing) must be documented and distributed as the standard methods for communicating. Service, defined in the scope of a network, refers to processing or protection provided by a component to users or other components, e.g. a communication service (TCP/IP) or a security service (encryption, firewall). These services represent the communication that occurs between different elements in a secure system. Each service should be documented to rigorously define the input and output nets for each component and the method used for communication (buses, encryption, multiplexing). Since these services, and their actions, are central to the operation/behavior of a secure system, there is a series of considerations, principles, and policies that a system architect/designer must acknowledge when deciding on the layout of security components within a network.
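+
+As a sketch of what such distributed, standardized documentation could look like in machine-checkable form (the \texttt{service\_spec\_t} type, net names, and services below are hypothetical), each service's input/output nets and communication method can be recorded in one shared table that tools and reviewers consult:
+\begin{verbatim}
+/* Hypothetical machine-readable form of the service documentation:
+   every service names its input and output nets and the method that
+   carries them, so the standard can be distributed and checked.    */
+typedef enum { NET_BUS, NET_ENCRYPTED, NET_MULTIPLEXED } net_method_t;
+
+typedef struct {
+    const char  *service;    /* e.g. a communication or security service */
+    const char  *input_net;  /* rigorously named input net               */
+    const char  *output_net; /* rigorously named output net              */
+    net_method_t method;     /* bus, encryption, or multiplexing         */
+} service_spec_t;
+
+static const service_spec_t registry[] = {
+    { "comm:tcp-ip",  "net.in.eth0", "net.out.eth0",    NET_MULTIPLEXED },
+    { "sec:encrypt",  "bus.plain",   "bus.cipher",      NET_ENCRYPTED   },
+    { "sec:firewall", "net.in.eth0", "net.in.filtered", NET_BUS         },
+};
+\end{verbatim}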
+The Principle of Secure Communication Channels states that, when composing a system where there is a threat to communication between components, each communications channel must be trustworthy to a level commensurate with the security dependencies it supports; in other words, commensurate with how much the component is trusted to perform its security functions by other components. Several techniques can be used to mitigate threats to the communication channels in use. Use of a channel may be restricted by protecting access to it with a suitable access control mechanism (e.g. a reference monitor located beneath or within each component). End-to-end communications technologies (e.g. encryption) may be used to eliminate security threats in the communication channel's physical environment. Once again, the intrinsic characteristics assumed for and provided by the channel must be specified with such documentation that it is possible for system designers to understand the nature of the channel as initially constructed and to assess the impact of any subsequent changes to the system. Without this rigorous documentation and standardization, the trustworthiness of the communications between security elements cannot be assured.
+
+`Self-Reliant Trustworthiness' means that systems should minimize their reliance on external components for system trustworthiness. A corollary to this relates to the ability of a component to operate in isolation and then resynchronize with other components when it is rejoined with them. In other words, if a system were required to maintain a connection with another external entity in order to maintain its trustworthiness, then that very system would be vulnerable to a drop in connection/communication channels. Thus, from a network standpoint, a system should be trustworthy by default, with the external connection being used as a supplement to the component's function. The principle of `Partially Ordered Dependencies' states that calling, synchronization, and other dependencies in the system should be partially ordered; e.g. for certain pairs of elements in a set, one of the elements precedes the other. A fundamental tool in system design is `layering'. A system can be organized into functionally related modules of components, where layers are linearly ordered with respect to inter-layer dependencies. Inherent problems of circularity can be more easily managed if circular dependencies are constrained to occur within layers. In other words, if a shared mechanism also makes calls to or otherwise depends on services of calling mechanisms, creating a circular dependency, performance and liveness problems can result. Partially ordered dependencies and system layering contribute significantly to the simplicity and coherency of system design, as the sketch below illustrates. A system should be built to facilitate the maintenance of its security properties in the face of changes to its interface, functionality, structure, or configuration; in other words, to allow for `secure system evolution'. Changes may include upgrades to the system and maintenance activities. The benefits of designing a system for secure system evolution are reduced lifecycle costs for the vendor, reduced costs of ownership for the user, and improved system security. Most systems can anticipate maintenance, upgrades, and changes to configuration. If a component is constructed using the precepts of modularity and information hiding, then it becomes easier to replace components without disrupting the rest of a system. A system designer needs to take into account the impact of dynamic reconfiguration on the secure state of the system. Just as it is easier to build trustworthiness into a system from the outset (and for highly trustworthy systems, impossible to achieve without doing so), it is easier to plan for change than to be surprised by it~\cite{Benzel2005}.
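+
+The sketch referenced above (the five-layer ordering and \texttt{call\_allowed} check are hypothetical, not a prescribed interface) encodes the partial order directly, so upward dependencies are rejected and any remaining circularity is confined within a single layer:
+\begin{verbatim}
+#include <assert.h>
+
+/* Hypothetical layer ordering for a platform; lower numbers sit
+   beneath higher ones.                                            */
+enum layer { HW = 0, DRIVER = 1, OS = 2, SERVICE = 3, APP = 4 };
+
+/* A module may depend on its own layer or on lower layers only, so
+   inter-layer dependencies are partially ordered.                 */
+static int call_allowed(enum layer src, enum layer dst) {
+    return dst <= src;
+}
+
+int main(void) {
+    assert( call_allowed(APP, OS));     /* downward: allowed        */
+    assert( call_allowed(OS,  OS));     /* within a layer: allowed  */
+    assert(!call_allowed(DRIVER, APP)); /* upward: rejected         */
+    return 0;
+}
+\end{verbatim}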
+
+A system should be designed for `secure failure': failure in a system function or mechanism should not lead to a violation of any security policy. Ideally the system should be capable of detecting failure at any stage of operation (initialization, normal operation, shutdown, maintenance, error detection and recovery) and take appropriate steps to ensure security policies are not violated, as is done with most machines nowadays anyway. Touching on the earlier idea of secure system evolution, the reconfiguration function of the system should be designed to ensure continuous enforcement of security policies during the various phases of reconfiguration. Once a failed security function is detected, the system may reconfigure itself to circumvent the failed component, while maintaining security, and still provide all or part of the functionality of the original system, or completely shut itself down to prevent any (further) violation of security policies. Another method for achieving this is to roll back to a secure state (which may be the initial state) and then either shut down or replace the failed service/component with an orthogonal or replicated mechanism. Failure of a component may or may not be detectable to the components using it; thus one must design a method for `dealing with failure'. For this reason components should fail in a state that denies rather than grants access. A designer could employ multiple protection mechanisms (whose features can be significantly different) to reduce the possibility of attack repetition, but it should be noted that redundancy techniques may increase resource usage and adversely affect system performance. Instead, the atomicity properties of a service/component should be well documented and characterized so that a component availing itself of the service can detect and handle interruption events appropriately, similar to the `self-analysis' function that TPMs have. A well designed reference monitor could fill most of these roles, though it would require that the reference monitor can `self-analyze' itself for trustworthiness. While even this would not be a perfect solution, it does help limit total failure to the reference monitor instead of some arbitrary security component.
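+
+A minimal sketch of this deny-by-default posture follows (the \texttt{monitor\_t} state and \texttt{check\_fn} policy hook are hypothetical): every failure path in the mediation routine denies access, and a detected fault rolls the monitor back to its known-secure initial configuration:
+\begin{verbatim}
+#include <stdbool.h>
+
+typedef enum { DECIDE_DENY = 0, DECIDE_ALLOW = 1 } decision_t;
+
+typedef struct {
+    bool healthy;   /* cleared when a failure is detected           */
+    int  config;    /* current configuration identifier             */
+} monitor_t;
+
+static const int INITIAL_CONFIG = 0;  /* known-secure initial state */
+
+/* Mediation: every failure path falls through to DENY, so a broken
+   monitor or policy check can never grant access by accident.      */
+decision_t mediate(const monitor_t *m,
+                   bool (*check_fn)(int subject, int object),
+                   int subject, int object) {
+    if (m == NULL || !m->healthy || check_fn == NULL)
+        return DECIDE_DENY;
+    return check_fn(subject, object) ? DECIDE_ALLOW : DECIDE_DENY;
+}
+
+/* On detected failure: stop granting access and roll back to the
+   secure initial configuration before repair or replacement.       */
+void on_failure_detected(monitor_t *m) {
+    m->healthy = false;
+    m->config  = INITIAL_CONFIG;
+}
+\end{verbatim}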
 Having tackled the network communication considerations for modeling a secure and trustworthy component, this paper moves toward examining these security element behaviors and functions with respect to their existence within a larger distributed system. Trust, in the scope of a distributed system, shifts to mean the degree to which the user or a component depends on the trustworthiness of another component. The first concept that will be tackled is that of `Hierarchical Trust for Components'. Security dependencies in a system will form a partial ordering if they preserve the principle of trusted components. This is essential to eliminate circular dependencies with regard to trustworthiness. Trust chains have various manifestations, but this should not prohibit the use of overly trustworthy components. Taking a deeper dive into `hierarchical protections', a component need not be protected from more trustworthy components.
 In the most degenerate case of the most trusted component, the component must protect itself from all other components. One should note that a trusted computer system need not protect itself from an equally trustworthy user. The main challenge here is that there needs to be a clear and documented way by which one can determine the trustworthiness of, and protection for, a system, along with outlining the hierarchy of trust that is inherent to the system. This is the same challenge that occurs at all levels and requires rigorous documentation to alleviate the constraint. Hierarchical protection is the precept that regulates the following concept of `secure distributed composition'. Composition of distributed components that enforce the same security policy should result in a system that enforces that policy at least as well as the individual components do. If components are composed into a distributed system that supports the same policy, and information contained in objects is transmitted between components, then the transmitted information must be at least as well protected in the receiving component as it was in the sending component. This is similar to how SSL/TLS is used in current day implementations; data may be secure in transit, but if the end points are not secure then the data is not secure either. To ensure a correct system-wide level of confidence in correct policy enforcement, the security architecture of the distributed composite system must be thoroughly analyzed.
 Actions that are security-relevant must be traceable to the entity on whose behalf the action is being taken; there must be `accountability and traceability'. This requires the designer to put into place a trustworthy infrastructure that can record details about actions that affect system security (e.g., an audit subsystem). This system must not only be able to uniquely identify the entity on whose behalf the action is being carried out, but also record the relevant sequence of actions that are carried out. An accountability policy ought to require the audit trail itself to be protected from unauthorized access and modification. Associating actions with system entities, and ultimately with users, and making the audit trail secure against unauthorized access and modification provide nonrepudiation, because once an action is recorded, it is not possible to change the audit trail. Any designer should note that if a violation occurs, analysis of the audit log may provide additional information that may be helpful in determining the path or component that allowed the violation of the security policy. Just as this audit trail would be invaluable to a debugging developer, an attacker could also use this information to illuminate the actions/behavior of the system; therefore this data absolutely must remain protected.
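 
 To close, a sketch of a tamper-evident audit trail (illustrative only; the FNV-1a function merely keeps the example self-contained, where a deployed audit subsystem would use a keyed MAC or cryptographic hash): each record binds the acting entity and action to the hash of the preceding record, so any later modification of the log breaks the chain:
 \begin{verbatim}
 #include <stdint.h>
 #include <stdio.h>
 
 /* FNV-1a stands in for a cryptographic hash purely to keep the
    sketch self-contained.                                           */
 static uint32_t fnv1a(uint32_t h, const char *s) {
     while (*s) { h ^= (uint8_t)*s++; h *= 16777619u; }
     return h;
 }
 
 typedef struct {
     uint32_t    prev;     /* hash of the preceding record           */
     const char *entity;   /* on whose behalf the action was taken   */
     const char *action;   /* the security-relevant action itself    */
     uint32_t    hash;     /* binds this record to the chain         */
 } audit_rec_t;
 
 /* Appending binds entity and action to the previous record, so any
    later edit of an earlier record invalidates every later hash.    */
 static uint32_t audit_append(audit_rec_t *r, uint32_t prev,
                              const char *entity, const char *action) {
     r->prev = prev; r->entity = entity; r->action = action;
     r->hash = fnv1a(fnv1a(prev ^ 2166136261u, entity), action);
     return r->hash;
 }
 
 int main(void) {
     audit_rec_t log[2];
     uint32_t h = audit_append(&log[0], 0, "user:alice", "open:policy-db");
     h = audit_append(&log[1], h, "comp:firewall", "rule:update");
     printf("chain head: %08x\n", (unsigned)h);
     return 0;
 }
 \end{verbatim}
 Protecting the chain head (e.g., storing it off-system or in a TPM register) then makes undetected modification of earlier records infeasible, which is precisely the nonrepudiation property described above.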