Continuing changes
Signed-off-by: Paul Wortman <paul.mauddib28@gmail.com>
paw10003 committed Jul 28, 2015
1 parent dc0ac1b commit c7207cf
Showing 1 changed file with 3 additions and 12 deletions.
15 changes: 3 additions & 12 deletions PBDSecPaper.tex
@@ -174,23 +174,15 @@ The first principle is that of `Least Common Mechanisms'. If multiple component
\item Hierarchical Protections - A component need not be protected from more trustworthy components. In the most degenerate case, the most trusted component must protect itself from all other components. One should note that a trusted computer system need not protect itself from an equally trustworthy user. The main challenge here is that there needs to be a clear and documented way to determine trustworthiness and protection for a system, along with an outline of the hierarchy of trust that is inherent to the system. This same challenge occurs at all levels and requires rigorous documentation to alleviate the constraint. Hierarchical protection is the precept that regulates the following concept of `secure distributed composition'.
\item Secure Distributed Composition - Composition of distributed components that enforce the same security policy should result in a system that enforces that policy at least as well as the individual components do. If components are composed into a distributed system that supports the same policy, and information contained in objects is transmitted between components, then the transmitted information must be at least as well protected in the receiving component as it was in the sending component. This is similar to how SSL/TLS is used in present-day implementations: data may be secure in transit, but if the endpoints are not secure then neither is the data. To ensure a correct system-wide level of confidence in correct policy enforcement, the security architecture of the distributed composite system must be thoroughly analyzed.
\item Accountability and Traceability - Actions that are security-relevant must be traceable to the entity on whose behalf the action is being taken. This requires the designer to put in place a trustworthy infrastructure that can record details about actions that affect system security (e.g., an audit subsystem). This subsystem must not only uniquely identify the entity on whose behalf the action is carried out, but also record the relevant sequence of actions that are carried out. An accountability policy ought to require that the audit trail itself be protected from unauthorized access and modification. Associating actions with system entities, and ultimately with users, and making the audit trail secure against unauthorized access and modification provide nonrepudiation: once an action is recorded, it is not possible to change the audit trail. Any designer should note that if a violation occurs, analysis of the audit log may provide additional information that helps determine the path or component that allowed the violation of the security policy. Just as this audit trail would be invaluable to a debugging developer, an attacker could also use this information to illuminate the actions and behavior of the system; therefore this data absolutely must remain protected. A sketch of one tamper-evident audit-trail construction follows this list.
\item Continuous Protection of Information - Information protection required by a security policy (e.g., access control to user-domain objects) or for system self-protection (e.g., maintaining the integrity of kernel code and data) must be protected to a level of continuity consistent with the security policy and the security architecture assumptions. Simply stated, no guarantees about information integrity, confidentiality, or privacy can be made if data is left unprotected while under control of the system (i.e., during creation, storage, processing, or communication of information, and during system initialization, execution, failure, interruption, and shutdown); one cannot claim to have a secure system without remaining secure in all aspects of said system. To maintain a trustworthy system, and a network of distributed trustworthy components, a designer should prepare not only for expected inputs but also for possible invalid requests or malicious mutations that could occur in the future. Invalid requests should not result in a system state in which the system cannot properly enforce the security policy. The earlier mentioned concept of secure failure applies in that a rollback mechanism can return the system to a secure state, or at least fail the component in a safe and secure manner that maintains the required level of trustworthiness (does not lower the overall trustworthiness of the entire distributed system). Furthermore, a designer can use the precepts of a reference monitor to provide continuous enforcement of a security policy, noting that every request must be validated and that the reference monitor must protect itself; a reference-monitor sketch follows this list. Ideally the reference monitor component would be ``perfect'' in the sense of being absolutely trustworthy and not requiring an upgrade/modification path (thus limiting this element's chance of becoming compromised).
\begin{itemize}
\item Any designer must ensure protection of the system by choosing interface parameters so that security-critical values are provided by more trustworthy components. To eliminate time-of-check-to-time-of-use vulnerabilities, the system's security-relevant operations should appear atomic.
\item It could also be desirable to allow system security policies to be ``modifiable'' at runtime, for instance when the system must adjust to catastrophic external events. This raises the complexity of the system, but does allow for flexibility in the face of failure. Any changes to security policies must not only be traceable but also verifiable; it must be possible to verify that changes do not violate security policies. This could be handled by a central reference monitor. Following this thread of thinking, a system architect/designer should understand the consequences of allowing modifiable policies within the system. Depending on the type of access control and the actions that are allowed and controlled by policies, certain configuration changes may lead to inconsistent states or discontinuous protection, because the problem of allowing runtime changes to the security policies of the system is complex and, in general, undecidable. In other words, even modifications/updates need to be planned and documented rigorously for the purpose of maintaining a secure and trustworthy system.
\end{itemize}
\item Secure System Modification - System modification procedures must maintain system security with respect to the goals, objectives, and requirements of the system's owners. Without proper planning and documentation, upgrades and modifications can transform a secure system into an insecure one. This is similar to the concept of `secure system evolution', applied at the network scope of these security components.
\end{itemize}
\end{itemize}
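
To make the accountability and traceability requirement concrete, the following is a minimal sketch (our own illustration, not drawn from the cited work) of a tamper-evident, append-only audit trail; the Python class and field names are hypothetical. Each record is chained to its predecessor by a hash, so any later modification of a stored entry is detectable when the chain is re-verified.
\begin{verbatim}
import hashlib, json, time

class AuditLog:
    """Append-only audit trail; each record is hash-chained to its
    predecessor, so tampering with any stored entry breaks the chain."""
    def __init__(self):
        self._records = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, entity, action, target):
        entry = {"time": time.time(), "entity": entity, "action": action,
                 "target": target, "prev": self._last_hash}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._records.append((entry, digest))
        self._last_hash = digest
        return digest

    def verify(self):
        """Recompute the chain; returns False if any record was altered."""
        prev = "0" * 64
        for entry, digest in self._records:
            recomputed = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev or recomputed != digest:
                return False
            prev = digest
        return True
\end{verbatim}
Hash chaining alone does not protect against truncation of the log; in practice the most recent digest would also be anchored in a more trustworthy component, in keeping with the hierarchical protection precept above.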
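
Building on the \texttt{AuditLog} sketch above, the fragment below illustrates the reference monitor precept from the continuous protection item: every request is validated against the policy, every decision is recorded, and the check and the mediated use happen under a single lock so that the decision cannot go stale (the time-of-check-to-time-of-use gap noted earlier). The policy representation and all names are again hypothetical.
\begin{verbatim}
import threading

class ReferenceMonitor:
    """Mediates every access: validates the request, records it in the
    audit trail, and performs the operation while holding one lock,
    keeping check and use atomic with respect to other requests."""
    def __init__(self, policy, audit_log):
        self._policy = policy        # {(subject, action, obj): True}
        self._audit = audit_log
        self._lock = threading.Lock()

    def perform(self, subject, action, obj, operation):
        with self._lock:
            allowed = self._policy.get((subject, action, obj), False)
            self._audit.record(subject, action, obj)
            if not allowed:
                raise PermissionError(f"{subject} may not {action} {obj}")
            return operation()       # mediated access to the resource

# Example use (with the AuditLog defined above):
log = AuditLog()
rm = ReferenceMonitor({("alice", "read", "cfg"): True}, log)
rm.perform("alice", "read", "cfg", lambda: "contents of cfg")
assert log.verify()
\end{verbatim}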

\begin{itemize}
\item Automation of security development
\begin{itemize}
\item When automating the development of security systems there are three key elements of the system that need to be examined/accounted for in the virtualization stage: security mechanisms, security principles, and security policies.
\begin{itemize}
\item For the purpose of reiteration, security mechanisms are the system artifacts that are used to enforce system security policies. Security principles are the guidelines or rules that, when followed during system design, will aid in making the system secure. Organizational security policies are ``the set of laws, rules, and practices that regulate how an organization manages, protects, and distributes sensitive information.''~\cite{Benzel2005} System security policies are the rules that the information system enforces, relative to the resources under its control, to reflect the organizational security policy.
\item Each of these aspects plays its part in determining the behavior and function of the overall security system. The security principles set the groundwork for how the system should behave and interact based on the expected user interactions. The security policies (both organizational and system) govern the rules and practices that regulate how the system and its resources are managed, how the information is protected, and how the system controls and distributes sensitive information. The security mechanisms are the implementations of these previous two aspects, being the system artifacts that are used to enforce the system security policies. Together these different facets shape and mold the desired higher-level abstracted behavior and function that the system has been designed and developed for. Security principles may account for the majority of restrictions and considerations for a given system, but are by no means the most influential or important aspect. The security policies developed out of the principles constrain the behavior, functions, and methods of communication between security elements. The mechanisms developed for implementing these rules and regulations must be designed in such a manner as to ensure the system's fidelity towards trustworthy actions while also being responsible for how the system will react to unexpected input and failure.
\end{itemize}
\item In the same manner that these various security aspects (e.g., mechanisms, principles, policies) must be considered during development automation, the software and hardware aspects must also come under consideration based on the desired behavior/functionality of the system under design. One could have security elements that attempt to optimize themselves to the system they are in based on a few pivot points (power, time, efficiency, level of randomness). Another option would be for the automated tool to swap out specific security components as an easier way to increase security without requiring re-design/re-construction of the underlying element (i.e., modularity); a toy sketch of such constraint-driven component selection follows this list. There is always the requirement that the overall trustworthiness of a new system must meet the standards of the security policies that `rule' the system. For these reasons a user would desire rigorous documentation that lays out the requirements of each component, so that replacing a faulty or damaged component causes no loss to the overall trustworthiness of the system and introduces no vulnerabilities through the inclusion of new system components.
\item Virtualization should be used for exploring the design space; the reasons should be obvious. Not only is physical prototyping incredibly expensive, but redesign is equally costly. Virtualization aids by removing the need for physical prototyping (lower monetary costs) and allows for more rapid exploration of the full design space. While the design time for such powerful tools will be expensive (in both monetary and temporal costs), the rewards of developing, validating, and evaluating this virtualization tool will offset the early design-phase costs of automated security component design.
\end{itemize}
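
As a toy illustration of the component-swapping idea above, the sketch below selects a security component from a purely hypothetical catalogue: candidates whose trustworthiness falls below the floor demanded by the governing policy are rejected outright, and the remaining pivot points (power, latency) are used to rank the feasible options. A real tool would draw these figures from component documentation rather than hard-coded constants.
\begin{verbatim}
# Hypothetical catalogue: component name -> trust level and resource costs.
CATALOGUE = {
    "aes_sw":      {"trust": 3, "power_mw": 5.0,  "latency_us": 120.0},
    "aes_hw":      {"trust": 4, "power_mw": 12.0, "latency_us": 8.0},
    "aes_hw_cert": {"trust": 5, "power_mw": 15.0, "latency_us": 9.0},
}

def select_component(min_trust, power_budget_mw, latency_budget_us):
    """Return the lowest-power candidate that meets the policy's trust
    floor and both resource budgets, or None if no candidate qualifies."""
    feasible = [
        (spec["power_mw"], name)
        for name, spec in CATALOGUE.items()
        if spec["trust"] >= min_trust
        and spec["power_mw"] <= power_budget_mw
        and spec["latency_us"] <= latency_budget_us
    ]
    return min(feasible)[1] if feasible else None

# Policy demands trust level 4 under a tight latency budget:
print(select_component(min_trust=4, power_budget_mw=20.0,
                       latency_budget_us=50.0))   # -> aes_hw
\end{verbatim}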
@@ -206,8 +198,7 @@ The first principle is that of `Least Common Mechanisms'. If multiple component
\item Procedural Rigor - The rigor of the system's life cycle process should be commensurate with its intended trustworthiness. Procedural rigor defines the depth and detail of a system's lifecycle procedures. These rigors contribute to the assurance that a system is correct and free of unintended functionality in two ways:
\begin{itemize}
\item Imposing a set of checks and balances on the life cycle process such that the introduction of unspecified functionality is thwarted.
\item Applying rigorous procedures to specifications and other design documents contributes to the ability of users to understand a system as it has been built, rather than being misled by an inaccurate system representation, thus helping ensure that the security and functional objectives of the system have been met.
\item \textbf{Note:} Highly rigorous development procedures supporting high trustworthiness are costly to follow. However, the lowered cost of ownership resulting from fewer flaws and security breaches during the product maintenance phase can help to mitigate the higher initial development costs associated with a rigorous life cycle process.
\end{itemize}
\item Repeatable, Documented Procedures - Techniques used to construct a component should permit the same component to be completely and correctly reconstructed at a later time. Repeatable and documented procedures support the creation of components that are identical to a component created earlier that may be in widespread use.
\begin{itemize}
