Pushing small edits to paper
	- addition of '[the]' to a quote to clairify
	- addition of images to the paper
		- Trustworthiness Scopes
		- Reference Monitor Visualization
Push of Reference Monitor image

Signed-off-by: Paul Wortman <paul.mauddib28@gmail.com>
paw10003 committed Nov 13, 2015
1 parent 0de04d2 commit fb71787
Showing 3 changed files with 20 additions and 8 deletions.
Binary file modified PBDSecPaper.pdf
28 changes: 20 additions & 8 deletions PBDSecPaper.tex
@@ -317,13 +317,19 @@ is or isn't trustworthy.
As the development goes `upwards' in levels of abstraction, one will be implementing functions, and modules, as the result of combining/connecting different components. Each of these abstracted function/implementation levels will need to be viewed as its own new `component' with its own security properties, functionality, and behavior. This is the same `circular' behavior seen in mapping down from the function space. Another consideration at this level of design and development is: what happens when one is unable to trust the components being used to develop a system design (e.g., can a secure system be constructed from insecure components?).

\begin{quotation}
-``In order to develop a component-based trustworthy system, the development process must be reuse-oriented, component-oriented, and must integrate formal languages and rigorous methods in all phases of system life-cycle.''~\cite{Mohammed2009}
+``In order to develop a component-based trustworthy system, the development process must be reuse-oriented, component-oriented, and must integrate formal languages and rigorous methods in all phases of [the] system life-cycle.''~\cite{Mohammed2009}
\end{quotation}

With this in mind, and following the precepts presented by Benzel et~al., the assumption is that the system is only as secure as its least secure component.
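As an illustrative formalization (ours, not drawn from the cited sources), this `weakest link' assumption can be written as a bound over the trustworthiness values $T(c_i)$ of the $n$ components making up a system $S$:
\begin{equation}
T(S) = \min_{1 \le i \le n} T(c_i)
\end{equation}
Under this reading, any mapping or evaluation procedure that raises the minimum raises the security of the whole system.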

The concern is how to evaluate the security of the system. Two methods are either to evaluate the entire system after its completion (which has repeatedly been shown to be a terrible idea) or to evaluate components during the design and development stage. Component evaluation can take one of two forms: performing an `initial conditions validation' of the security of the assembled system, or `pre-evaluating' each component so that its level of trustworthiness is known, and trusted, ahead of time. Performing the initial system state validation incurs a definite overhead to ensure security of the system prior to operation, though after the initial validation one can assume there is no change in trustworthiness until an upgrade/modification occurs. This is yet another of the open-ended questions of this new design methodology: how to evaluate (and ensure) the trustworthiness and security of the system.

+\begin{figure*}
+\includegraphics[width=\textwidth,height=8cm]{./images/pbdsec_mapping.png}
+\caption{PBD Security Map Space}
+\label{fig:MapSpace}
+\end{figure*}

\subsection{Mapping}
\label{Mapping}
Mapping is the process of evaluating, and maximizing, the requirement specifications and component definitions for the purpose of developing a design that incorporates all the necessary behavioral characteristics along with the predetermined essential hardware/functional properties, using a predetermined objective function. Developing mapping functionality is no easy task, and this is where the open-ended question of incorporating security into platform-based design lies. The map space should take into account not only the current requirements of the system, but also any future upgrades/modifications along with disaster planning and recovery. To the authors' knowledge this is an area lacking hard standardization/optimization techniques, and no formal definition of a framework has been given.
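To make the role of the objective function concrete, the following is a minimal sketch (in Python; every component name, weight, and trust value is hypothetical and illustrative only, not taken from the cited literature) of exhaustively scoring candidate mappings of required functions onto available components:
\begin{verbatim}
# Illustrative sketch of objective-driven mapping; all names and
# numbers below are invented for the example.
from itertools import product

# Candidate components: cost, trustworthiness in [0, 1], supported functions.
COMPONENTS = {
    "aes_hw":  {"cost": 5, "trust": 0.9, "supports": {"encrypt"}},
    "aes_sw":  {"cost": 1, "trust": 0.6, "supports": {"encrypt"}},
    "tpm_log": {"cost": 3, "trust": 0.8, "supports": {"audit"}},
    "ram_log": {"cost": 1, "trust": 0.4, "supports": {"audit"}},
}

REQUIRED_FUNCTIONS = ["encrypt", "audit"]

def objective(mapping, w_trust=10.0, w_cost=1.0):
    """Score a mapping; the weakest component bounds system trust."""
    parts = [COMPONENTS[c] for c in mapping.values()]
    system_trust = min(p["trust"] for p in parts)  # weakest-link assumption
    total_cost = sum(p["cost"] for p in parts)
    return w_trust * system_trust - w_cost * total_cost

def best_mapping():
    """Exhaustively search the (tiny) map space for the best assignment."""
    candidates = [
        [c for c, p in COMPONENTS.items() if f in p["supports"]]
        for f in REQUIRED_FUNCTIONS
    ]
    return max(
        (dict(zip(REQUIRED_FUNCTIONS, combo)) for combo in product(*candidates)),
        key=objective,
    )

# Prints the highest-scoring function-to-component assignment.
print(best_mapping())
\end{verbatim}
A real map space would be far too large for exhaustive search; the sketch only shows where the predetermined objective function enters the process.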
@@ -338,13 +344,7 @@ This paper will now examine some of the attention that has to be given to develo
\end{quotation}
Dependability is a measure of a system's availability, reliability, and maintainability. Furthermore, Avizienis et~al.~extend this to also encompass mechanisms designed to increase and maintain the dependability of a system~\cite{Avizienis2004}. For the purpose of this paper, the interpretation is that trustworthiness requires inherent dependability; i.e.~a user should be able to trust that trustworthy components will dependably perform the desired actions and alert said user should an error/failure occur. Unfortunately, trust(worthiness) is still measured on the same abstract scale~\cite{Benzel2005}. Thus, in turn, there is a void in the design and development of an understandable and standardized scale of trust(worthiness).
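As one illustration of how a dependability attribute can be quantified (a standard reliability-engineering definition, not specific to the cited works), steady-state availability can be expressed through mean time to failure (MTTF) and mean time to repair (MTTR):
\begin{equation}
A = \frac{\mathit{MTTF}}{\mathit{MTTF} + \mathit{MTTR}}
\end{equation}
A comparably concrete, agreed-upon formula for trust(worthiness) is exactly what the text argues is missing.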

-\begin{figure*}
-\includegraphics[width=\textwidth,height=8cm]{./images/pbdsec_mapping.png}
-\caption{PBD Security Map Space}
-\label{fig:MapSpace}
-\end{figure*}

-Different scopes of security/trustworthiness can be roughly placed into three categories: local, network, and distributed. Each of these scopes will be examined in terms of the considerations/challenges, principles, and policies that should be used to determine action/behavior of these different security elements and the system as a whole.
+Different scopes of security/trustworthiness can be roughly placed into three categories (Figure~\ref{fig:PBDSecScopes}): local, network, and distributed. Each of these scopes will be examined in terms of the considerations/challenges, principles, and policies that should be used to determine action/behavior of these different security elements and the system as a whole.

The local scope encompasses a security element's own abilities, trustworthiness, and the dependencies of that element (e.g.~least common mechanisms, reduced complexity, minimized sharing, and the conflict between minimized sharing and least common mechanisms). The purpose of this section is to present the considerations, principles, and policies that govern the behavior and function of security elements/components at the local scope/level. First, this paper will reiterate the definitions stated in the Benzel et~al.~paper. Failure is a condition in which, given a specifically documented input that conforms to specification, a component or system exhibits behavior that deviates from its specified behavior. A module/database is a unit of computation that encapsulates a database and provides an interface for the initialization, modification, and retrieval of information from that database. The database may be either implicit, e.g.~an algorithm, or explicit. Lastly, a process is a program in execution. To further define the actions/behavior of a given component in the local scope, this paper moves to outlining the principles that define component behavior at the local device level. \\
The first principle is that of `Least Common Mechanisms'. If multiple components in the system require the same function of a mechanism, then there should be a common mechanism that can be used by all of them; thus various components do not have separate implementations of the same function, but rather the function is created once (e.g.~device drivers, libraries, OS resource managers). The benefit of this is to minimize the complexity of the system by avoiding unnecessary duplicate mechanisms. Furthermore, modifications to a common function can be performed (only) once, and the impact of the proposed modifications can be more easily understood in advance. The simpler a system is, the fewer vulnerabilities it will have; this is the principle of `Reduced Complexity'. From a security perspective, the benefit of this simplicity is that it is easier to understand whether an intended security policy has been captured in the system design. At a security model level, it can be easier to determine whether the initial system state is secure and whether subsequent state changes preserve system security. Given the current state of the art of systems, the conservative assumption is that every complex system will contain vulnerabilities and that it will be impossible to eliminate all of them, even in the most highly trustworthy of systems.

At the lowest level of secure data, no computer resource should be shared between components or subjects (e.g.~processes, functions, etc.) unless it is necessary to do so. This is the concept behind `Minimized Sharing'. To protect user-domain information from active entities, no information should be shared unless that sharing has been explicitly requested and granted. Encapsulation is a design discipline or compiler feature for ensuring there are no extraneous execution paths for accessing private subsets of memory/data. Minimized sharing influenced by common mechanisms can lead to designs being reentrant/virtualized, so that each component depending on the mechanism has its own virtual private data space; resources are partitioned into discrete, private subsets for each dependent component. This in turn can be seen as a virtualization of hardware, a concept already illustrated earlier in the discussion of platform-based design. A further consideration of minimized sharing is to avoid covert timing channels in which the processor is one of the shared components. In other words, the scheduling algorithms must ensure that each depending component is allocated a fixed amount of time to access/interact with a given shared space/memory (see the sketch following this passage). Development of a technique for controlled sharing requires the execution durations of shared mechanisms (or the mechanisms and data structures that determine duration) to be explicitly stated in design specification documentation, so that the effects of sharing can be verified and evaluated; once again illustrating the need for rigorous standards to be clearly documented.

As the reader has undoubtedly seen, there are conflicts within just these simple concepts and principles for the local scope of trustworthiness of a security component. The principles of least common mechanism and minimized sharing directly oppose each other, while still being desirable aspects of the same system. Minimizing system complexity through the use of common mechanisms does lower the hardware space requirements for an architectural platform, but at the same time maximizes the amount of sharing occurring within the security component. A designer can use one of the development techniques already mentioned to help balance these two principles, but in the end the true design question is: what information is safe to share, what information must be protected under lock and key, and how can that information then be communicated with high security and fidelity?
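To illustrate the fixed-time-allocation idea behind avoiding covert timing channels, here is a minimal sketch (a toy with invented component names and an arbitrary quantum, not a mechanism proposed by the cited works) of a round-robin schedule in which every dependent component occupies a shared mechanism for the same fixed quantum, whether or not it has work to do:
\begin{verbatim}
import time

QUANTUM_S = 0.01  # fixed quantum; identical for every component by design

class SharedMechanism:
    """Toy shared resource; a real design would guard memory, not a dict."""
    def __init__(self):
        self.store = {}

def fixed_quantum_schedule(components, mechanism, rounds=3):
    """Give every component the same wall-clock slice each round.

    Because each slice has the same duration regardless of how much work
    a component performs, one component cannot signal another by modulating
    how long it occupies the shared mechanism (the covert timing channel
    discussed above).
    """
    for _ in range(rounds):
        for name, step in components.items():
            start = time.monotonic()
            step(mechanism)                      # component does its work
            elapsed = time.monotonic() - start
            if elapsed > QUANTUM_S:
                raise RuntimeError(f"{name} overran its quantum")
            time.sleep(QUANTUM_S - elapsed)      # pad slice to fixed length

# Hypothetical components: each gets the mechanism for exactly one quantum.
components = {
    "logger": lambda m: m.store.update(log=m.store.get("log", 0) + 1),
    "sensor": lambda m: m.store.update(reading=42),
}

fixed_quantum_schedule(components, SharedMechanism())
\end{verbatim}
The padding step is the point: the observable occupancy time is constant, so timing carries no information between the dependent components.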
@@ -353,6 +353,12 @@ How does a designer decide upon how elements communicate? How does one define a

Having touched upon the network communication considerations for modeling a secure and trustworthy component, this paper moves toward examining these security element behaviors and functions with respect to their existence within a larger distributed system. Trust, in the scope of a distributed system, shifts to define the degree to which the user or a component depends on the trustworthiness of another component.

+\begin{figure}
+\includegraphics[width=0.5\textwidth]{./images/pbdsec_scopes.png}
+\caption{Visualization of Scopes of Trustworthiness}
+\label{fig:PBDSecScopes}
+\end{figure}

`Self-Reliant Trustworthiness' means that systems should minimize their reliance on external components for system trustworthiness. A corollary to this relates to the ability of a component to operate in isolation and then resynchronize with other components when it is rejoined with them. In other words, if a system were required to maintain a connection with another external entity in order to maintain its trustworthiness, then that very system would be vulnerable to drops in connection/communication channels. Thus, from a network standpoint, a system should be trustworthy by default, with the external connection being used as a supplement to the component's function.

The next concept to be tackled is that of `Hierarchical Trust for Components'. Security dependencies in a system will form a partial ordering if they preserve the principle of trusted components. This is essential for eliminating circular dependencies with regard to trustworthiness. Trust chains have various manifestations, but this should not prohibit the use of overly trustworthy components. Taking a deeper dive into `hierarchical protections', a component need not be protected from more trustworthy components. In the most degenerate case, the most trusted component must protect itself from all other components. One should note that a trusted computer system need not protect itself from an equally trustworthy user. This is the same challenge that occurs at all levels, and it requires rigorous documentation to alleviate the constraint. Hierarchical protections is the precept that regulates the following concept of `secure distributed composition'. Composition of distributed components that enforce the same security policy should result in a system that enforces that policy at least as well as the individual components do. If components are composed into a distributed system that supports the same policy, and information contained in objects is transmitted between components, then the transmitted information must be at least as well protected in the receiving component as it was in the sending component. This is similar to how SSL/TLS is used in current-day implementations; data may be secure in transit, but if the end points are not secure then the data is not secure either. To ensure a correct system-wide level of confidence of correct policy enforcement, the security architecture of the distributed composite system must be thoroughly analyzed.
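The partial-ordering requirement can be checked mechanically. The sketch below (illustrative only; the component names and graph encoding are invented) treats `A trusts B' edges as a directed graph and uses Kahn's topological-sort algorithm to confirm there are no circular trust dependencies:
\begin{verbatim}
from collections import deque

def is_partial_order(trust_edges):
    """Return True iff the trust graph is acyclic (Kahn's algorithm).

    trust_edges maps a component to the components it depends on for
    trust; a cycle would mean circular trustworthiness, which the
    hierarchy forbids.
    """
    nodes = set(trust_edges) | {d for deps in trust_edges.values()
                                for d in deps}
    indegree = {n: 0 for n in nodes}
    for deps in trust_edges.values():
        for d in deps:
            indegree[d] += 1
    queue = deque(n for n in nodes if indegree[n] == 0)
    visited = 0
    while queue:
        n = queue.popleft()
        visited += 1
        for d in trust_edges.get(n, ()):
            indegree[d] -= 1
            if indegree[d] == 0:
                queue.append(d)
    return visited == len(nodes)

# Hypothetical system: the app trusts the OS, which trusts the TPM. Acyclic.
assert is_partial_order({"app": ["os"], "os": ["tpm"]})
# Circular trust: two components vouching for each other. Rejected.
assert not is_partial_order({"a": ["b"], "b": ["a"]})
\end{verbatim}
Such a check is cheap enough to run at design time on every proposed composition, which fits the documentation-heavy discipline argued for throughout this section.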
@@ -448,6 +454,12 @@ yet not truly explored at regular levels.

The manufacturer's standpoint boils down to this: the design should minimize mask-making costs but be flexible enough to warrant its use for a set of applications, so that production volume will be high over an extended chip lifetime~\cite{Vincentelli2007}. Companies try to drive adoptability by creating something that users want to interact with but that is not complicated to learn (e.g.~abstraction of technology for ease of use). Accounting for ease of use can lead to vulnerabilities in security or to the development of new tools. Automation is desirable from a `business' standpoint, since customers/users enjoy the `set it and forget it' mentality for technology (especially new technologies). Companies/manufacturers need positive customer/user experiences; otherwise there is no desire on the part of the consumer to extend any supplied functionality to other devices/needs. Adoptability tends to come from user `word of mouth' praising the functionality and ease of use of new technology/methods/devices, and from how the developing party reacts to system failures or user need (branching from complaints and support requests). This is exactly why industry would love for platform-based design to become a new standard: high adoptability. The monetary costs saved would be enough to warrant adoption of the technology, \textbf{but} the monetary costs of developing such a system (e.g.~design, evaluation, validation) do not carry the same attraction (simply because companies are selfish and want to \textbf{make} money).

+\begin{figure}
+\includegraphics[width=0.5\textwidth]{./images/pbdsec_refmon.png}
+\caption{Visualization of Reference Monitor Model}
+\label{fig:RefMon}
+\end{figure}

This paper cannot be concluded without proposing a framework that is believed to be the jumping-off point for a security model to begin the adaptation of platform-based design into security development. The reference monitor seems a favorable choice, as this sort of model is already used in distributed systems, but there is an extremely important need to maintain the security/trust/trustworthiness of this reference monitor (an abstraction for the necessary and sufficient features of the component that enforces access control in secure systems). While it should be immediately seen that there are faults with this design (e.g.~a single point of failure), this is not enough to abandon the model. It is the belief of this paper that an initial starting point for PBD-Security design development is to use this existing reference monitor concept, along with other developed tools (e.g.~CBSE, TPM), and piece together the initial framework and early-phase development for this new methodology, so that future efforts can be spent developing and perfecting the technique.
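For concreteness, here is a minimal sketch of the reference monitor idea (a toy access-control gate with invented subjects, objects, and rights; it illustrates the classic model only and is not the framework this paper proposes):
\begin{verbatim}
class ReferenceMonitor:
    """Toy reference monitor: every access is mediated, none bypass it.

    The classic requirements -- always invoked, tamper-proof, and small
    enough to verify -- are what the surrounding text says must be
    preserved when adapting the model to platform-based design.
    """
    def __init__(self, policy):
        # policy: (subject, object) -> set of permitted rights
        self._policy = dict(policy)

    def check(self, subject, obj, right):
        """Return True iff the policy grants `right`; deny by default."""
        return right in self._policy.get((subject, obj), set())

    def access(self, subject, obj, right, operation):
        if not self.check(subject, obj, right):
            raise PermissionError(f"{subject} may not {right} {obj}")
        return operation()

# Hypothetical policy: the audit process may read the log, nothing else.
rm = ReferenceMonitor({("audit_proc", "syslog"): {"read"}})
rm.access("audit_proc", "syslog", "read", lambda: "log contents")  # allowed
# rm.access("user_proc", "syslog", "read", ...) would raise PermissionError
\end{verbatim}
The deny-by-default check is the `necessary and sufficient' core; everything a PBD-Security framework adds (component trust levels, mapping, composition) would layer around this mediation point.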

\section{Conclusion}
Binary file added images/pbdsec_refmon.png
