
Example of estimation metric calculation added along with table of potential values. Next steps: finish writing out the extension of consideration sections. Fix writing within the paper to cut down pages, fix topics for framework example, ensure flow is still there
Duncan committed Jul 1, 2016
1 parent 02e0906 commit 5977f0da11dc7339a2cb101b19816b5786af8ace
Showing with 23 additions and 3 deletions.
  1. +23 −3 AADLSecPaper.tex
@@ -182,8 +182,8 @@ \section{Defining Risk}
The purpose of this paper is to propose a method for combining security and risk in a measurable and meaningful manner. Taking the previously defined risk equation (i.e., Equation~\ref{equ:riskDefinition}), we incorporate a `security metric' and a `cost weight' into the probability that either a direct or an indirect attack occurs against a given system.

\begin{multline} \label{equ:securityRisk}
-`Security Risk' = `Security Metric' * direct attack probability * `cost' weight \\
-+ ... + `Security Risk' * indirect attack probability * `cost' weight
+`Security Risk' = \frac{direct attack probability * `cost' weight}{`Security Metric'} \\
++ \frac{indirect attack probability * `cost' weight}{`Security Metric'}
\end{multline}

This can be seen in Equation~\ref{equ:securityRisk}, where the probability aspect of risk is split according to how an attack may occur. A direct attack is defined as an event where an attacker directly attempts to brute force a given security mechanism or standard. An indirect attack is one where a malicious user attempts to circumvent existing security through some aspect that is not directly related to the mentioned security implementation. Once risk has been defined within the chosen scope/lens of examination, one can develop `Estimation Metrics' that can be compared and contrasted with one another to determine the `value'/`worth' of any given design. However, before these metrics can be developed, one must first determine a framework into which these calculations will be incorporated, to allow for a relevant and meaningful interpretation of verification and selection metrics.
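The revised `Security Risk' equation can be sketched as a small function. This is an illustrative sketch, not code from the paper; the function name and the sample parameter values (a 25\% direct attack probability, no indirect attacks, a cost weight of 2.5) are assumptions chosen for demonstration.

```python
def security_risk(security_metric, direct_attack_prob,
                  indirect_attack_prob, cost_weight):
    """SR = (DAP * CW) / SM + (IAP * CW) / SM, per the revised equation."""
    direct_term = (direct_attack_prob * cost_weight) / security_metric
    indirect_term = (indirect_attack_prob * cost_weight) / security_metric
    return direct_term + indirect_term


# Example: 25% direct attack chance, no indirect attacks,
# cost weight of 2.5 (e.g. $20 of loss per 8 man-hours), SM = 1.0
print(security_risk(1.0, 0.25, 0.0, 2.5))  # 0.625
```

Note that a stronger security metric (larger SM) drives the risk down, while higher attack probabilities or a higher cost weight drive it up.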
@@ -240,7 +240,7 @@ \section{Introducing the Framework}

\begin{multline} \label{equ:estimationMetric}
`Estimation Metric' = `User Risk Type' * `Security Risk' + `implementation cost' + \\
-`maintenance cost' + `solution metric' * \frac{1}{`requirements weight'}
+`maintenance cost' + \frac{`solution metric'}{`requirements weight'}
\end{multline}

Some of the values in this equation, the implementation and maintenance costs, are expected to be flat costs that are pre-calculated by a company or business, since these values will be specific to the given organization. An important distinction is that the `maintenance cost' and `implementation cost' values do not incorporate the operational costs of the design; they only account for the cost of initially implementing a system design and the cost of performing upkeep on said design.
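The `Estimation Metric' equation can likewise be sketched as a function. This is an illustrative sketch only; the sample argument values below (a user-risk-type weight of 0.2, and the flat costs expressed per man-hour) are assumptions for demonstration, not values taken from the paper at this point.

```python
def estimation_metric(user_risk_type, security_risk, implementation_cost,
                      maintenance_cost, solution_metric, requirements_weight):
    """EM = URT * SR + IC + MC + (solution metric / requirements weight)."""
    return (user_risk_type * security_risk
            + implementation_cost
            + maintenance_cost
            + solution_metric / requirements_weight)


# Hypothetical inputs: URT = 0.2, SR = 0.625, IC = 1.25, MC = 12.5,
# solution metric (operational cost) = 0.25, requirements weight = 1.0
print(estimation_metric(0.2, 0.625, 1.25, 12.5, 0.25, 1.0))  # 14.125
```

Dividing the solution metric by the requirements weight means that a design failing to meet its requirements (a small weight) is penalized with a larger metric, which is the intended behavior since lower estimation metrics indicate better designs here.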
@@ -324,6 +324,26 @@ \section{Exploring a Simple Implementation}
The next step is to consider differences between the four implementations of the wireless transmitter. Differences in values can come from alternative design choices and/or algorithm and policy implementations of security, or from other standard constraints being imposed on the design problem. For this simple example, the main differences are the number of IO lines and the implementation of encryption. These different aspects can be compared by assigning weights to express relative importance, thus representing (in some arbitrary manner) the user-defined requirements imposed on the design being generated. This human-introduced favoritism causes the generated metric to shift anywhere from a small amount to a notable change, depending on the chosen importance of different features. Other variations originate from the development of the `user risk type' and `solution metric' values, where some arbitrary decisions are made in ranking users or generated solutions.

Before calculating the estimation metric for our wireless transmitter example, allow this paper to make some simplifications and assumptions to smooth the process. First, we assume there will be three separate implementations of encryption for the wireless transmitter: AES256 (MIPS), AES128 (MIPS), and no encryption. Drawing from the work by Ferrante et~al., the corresponding security level (SL) values are {1.00, 0.60, 0.10}, respectively. Since we have simplified the example to a single requirement (i.e., encryption of data), the weight value used for calculating the Security Metric (SM) is 1.00. Now that we have values for SL and the weight, we can move to calculating the SM value for our different encryption scenarios. However, we must first make some assumptions about the cost weight (CW), direct attack probability (DAP), and indirect attack probability (IAP). For the CW value, we assume that a company may find the data collected by this wireless transmitter to be worth \$20 if lost, and that repairing a failure would take one person about 8 man-hours; thus the CW for loss becomes \$20 per 8 man-hours. As for the attack probabilities, this paper assumes that attacking any other wireless transmitter (an indirect attack) would be the same as attacking the chosen wireless transmitter, and that attacking a central aggregation computer is out of scope for this example; therefore the indirect attack probability can be taken as 0\%. Assuming that an employee performs a less than enthusiastic installation of the transmitter, the DAP value is taken to be a 25\% chance of an adversary brute forcing the encryption to see the transmitted data. Using Equation~\ref{equ:securityRisk}, one finds that the SR values for the {AES256, AES128, None} encryption implementations are {0.625, 1.04, 6.25}, respectively.
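The SR values quoted above follow directly from the stated assumptions. The short sketch below reproduces them; variable names are illustrative, and the only inputs are the values given in the text (CW of \$20 per 8 man-hours, DAP of 25\%, IAP of 0\%, and a requirements weight of 1.00, so SM equals the SL value for each scenario).

```python
# CW = $20 of loss per 8 man-hours => 2.5 USD per man-hour
cost_weight = 20 / 8
dap, iap = 0.25, 0.0

# With a single requirement weighted 1.00, SM == SL per scenario
security_metrics = {"AES256": 1.00, "AES128": 0.60, "None": 0.10}

srs = {name: (dap * cost_weight) / sm + (iap * cost_weight) / sm
       for name, sm in security_metrics.items()}
for name, sr in srs.items():
    print(f"{name}: {sr:.3f}")  # 0.625, 1.042 (quoted as 1.04), 6.250
```

The no-encryption case carries roughly ten times the security risk of AES256, entirely because of its low security metric in the denominator.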

From this point, we make further assumptions about the implementation cost (IC), the maintenance cost (MC), and the `solution metric'/operational cost (OC), since these values would come from metrics internal to a company or organization. IC is taken to be \$50 in parts and design per 40 man-hours, MC is taken to be \$50 in drive-out cost per 4 man-hours to check the system, and OC is assumed to be \$3 in power costs per 12 hours of operation. The requirements weight (RW) value is assumed to be 1.00 if the system is encrypted and 0.10 if not, so that the effect of not meeting requirements can be viewed more clearly. Taking the calculation of the estimation metric (EM) from Equation~\ref{equ:estimationMetric}, we produce the contents of Table~\ref{tbl:estimationMetrics}, which presents the estimation metric for each encryption scenario {AES256 (MIPS), AES128 (MIPS), None} and shows how different User Risk Types (URT) further influence the metric.

% Please add the following required packages to your document preamble:
% \usepackage[normalem]{ulem}
% \useunder{\uline}{\ul}{}
\begin{table}
\centering
\begin{tabular}{|l|c|c|c|c|}
 & \multicolumn{3}{c|}{User Risk Type} & \\ \cline{2-5}
 & Risk Averse & Risk Neutral & Risk Seeking & \\ \hline
AES256 (MIPS) & 14.07 & 14.13 & 14.31 & \\ \cline{1-4}
AES128 (MIPS) & 14.12 & 14.21 & 14.52 & \\ \cline{1-4}
No Encryption & 16.94 & 17.50 & 19.38 & \\ \hline
\end{tabular}
\caption{Calculated Estimation Metrics for Wireless Transmitter (USD/man-hour)}
\label{tbl:estimationMetrics}
\end{table}
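The table's risk-neutral column can be roughly reproduced from the stated cost assumptions. One caveat: the numeric URT weights are not given in the text, so the risk-neutral weight of 0.2 used below is a hypothetical value chosen for illustration (it happens to land close to the quoted figures, with small differences attributable to rounding).

```python
# EM = URT * SR + IC + MC + OC / RW, using the assumed costs from the text.
ic = 50 / 40          # implementation cost: $50 per 40 man-hours
mc = 50 / 4           # maintenance cost: $50 per 4 man-hours
oc = 3 / 12           # operational cost: $3 per 12 hours of operation
urt_neutral = 0.2     # HYPOTHETICAL risk-neutral weight; not stated in the paper

# (security risk SR, requirements weight RW) per encryption scenario
scenarios = {"AES256 (MIPS)": (0.625, 1.00),
             "AES128 (MIPS)": (1.04, 1.00),
             "No Encryption": (6.25, 0.10)}

ems = {name: urt_neutral * sr + ic + mc + oc / rw
       for name, (sr, rw) in scenarios.items()}
for name, em in ems.items():
    print(f"{name}: {em:.2f}")
```

Under these assumptions the unencrypted design scores worst on both counts: its higher security risk and its requirements-weight penalty (OC divided by 0.10 instead of 1.00) both inflate the metric.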

Now that a simple example has been shown, allow this paper to expand upon the considerations made for this simplistic example. The following section examines further expansion of the `estimation metric' considerations, showing how the calculation of comparable metrics can become more involved and complicated.
