diff --git a/PBDSecPaper.tex b/PBDSecPaper.tex index d36e753..856393c 100644 --- a/PBDSecPaper.tex +++ b/PBDSecPaper.tex @@ -206,7 +206,12 @@ \section{Security} \end{itemize} \item Within distributed system \begin{itemize} - \item Trust (degree to which the user or a componenet depends on the trustworthiness of another component). Hiearchical Trust for components, hierarchical protections, Secure Distributed Composition. Accountability and Traceability, Continuous Protection on Information, Secure System Modification. + \item Trust (degree to which the user or a component depends on the trustworthiness of another component). Secure System Modification. + \item Hierarchical Trust for Components - Security dependencies in a system will form a partial ordering if they preserve the principle of trusted components. This is essential to eliminate circular dependencies with regard to trustworthiness. Trust chains have various manifestations, but this should not prohibit the use of overly trustworthy components. + \item Hierarchical Protections - A component need not be protected from more trustworthy components. In the degenerate case of the most trusted component, that component must protect itself from all other components. One should note that a trusted computer system need not protect itself from an equally trustworthy user. + \item Secure Distributed Composition - Composition of distributed components that enforce the same security policy should result in a system that enforces that policy at least as well as the individual components do. If components are composed into a distributed system that supports the same policy, and information contained in objects is transmitted between components, then the transmitted information must be at least as well protected in the receiving component as it was in the sending component. 
To ensure a correct system-wide level of confidence in correct policy enforcement, the security architecture of a distributed composite system must be thoroughly analyzed. + \item Accountability and Traceability - Actions that are security-relevant must be traceable to the entity on whose behalf the action is being taken. This requires the designer to put into place a trustworthy infrastructure that can record details about actions that affect system security (e.g., an audit subsystem). This system must not only be able to uniquely identify the entity on whose behalf the action is being carried out, but also record the relevant sequence of actions that are carried out. An accountability policy ought to require the audit trail itself to be protected from unauthorized access and modification. Associating actions with system entities, and ultimately with users, and making the audit trail secure against unauthorized access and modification provide nonrepudiation, since once an action is recorded it is not possible to change the audit trail. Any designer should note that if a violation occurs, analysis of the audit log may provide additional information that may be helpful in determining the path or component that allowed the violation of the security policy. + \item Continuous Protection of Information - Information protection required by the security policy (e.g., access control to user-domain objects) or for system self-protection (e.g., maintaining the integrity of kernel code and data) must be protected to a level of continuity consistent with the security policy and the security architecture assumptions. 
Simply stated, no guarantees about information integrity, confidentiality, or privacy can be made if data is left unprotected while under control of the system (i.e., during creation, storage, processing, or communication of information, and during system initialization, execution, failure, interruption, and shutdown); one cannot claim to have a secure system without remaining secure for all aspects of said system. For maintaining a trustworthy system, and a network of distributed trustworthy components, a designer should prepare not only for expected inputs but also for possible invalid requests or malicious mutations that could occur in the future. Invalid requests should not result in a system state in which the system cannot properly enforce the security policy. The earlier mentioned concept of secure failure applies in that a rollback mechanism can return the system to a secure state, or at least fail the component in a safe and secure manner that maintains the required level of trustworthiness (does not lower the overall trustworthiness of the entire distributed system). Furthermore, a designer can use the precepts of a reference monitor to provide continuous enforcement of the security policy, noting that every request must be validated and that the reference monitor must protect itself. Ideally, the reference monitor component would be ``perfect'' in the sense of being absolutely trustworthy and not requiring an upgrade/modification path (thus limiting this element's chance of becoming compromised). \end{itemize} \end{itemize} \item Automation of security development