New edits into a more paragraph-shaped manner
paw10003 committed Jul 10, 2015
1 parent 611c441 commit 175e5c5
Showing 1 changed file: PBDSecPaper.tex (16 additions, 41 deletions)
@@ -103,28 +103,19 @@ \section{Motivation}
\item Where do we gain/lose in shifting the method of design? Development of these tools implies that there is a need to change the focus and methods of design/development\textbf{[Add Quo Vadis paper citation here]}.
\begin{itemize}
\item The advantage of this method is the ease of making changes during development and of searching design spaces during early design and development of those systems. For PBD this means that a company/manufacturer is able to lower the cost and time spent in the `early development phase'; the time spent when first drawing up system designs prior to initial prototyping. While the advantage of overcoming multiple prototyping re-designs with a series of virtualization tools will cut down on development time, it cannot be forgotten that rigorous standards and documentation need to be developed before this sort of advantage can be reaped. Once this hurdle is passed, the advantages of such a system can be realized. For security this means that components can be standardized from a local, network, and distributed standpoint. These different scopes of security will be tackled in Section~\ref{Security}. The main point here is that with standardization comes a form of `contract' that all parties can expect will be followed, thus allowing different manufacturers/developers to create different aspects while knowing that these elements will come together in the final product; much like the advantages of platform-based design.
\item The disadvantage of using a new method is the time lost in developing the standards, rigor, and documentation that would serve as the new standards for the industry. The other disadvantage lies in the adoption of these new methods and designs. Ideally all manufacturers would adopt a new commodity; rather than components, `design combinations' would be the new commodity that manufacturers peddle (e.g. instead of saying ``my components are the best'' the dialog moves towards ``my ability to combine these components is the best'').
\item Virtualization will help offset the time and monetary costs in the long run. Essentially the issue boils down to how to abstract the lower-level requirements of a system (assembly/C) into a simpler high-level set of tools (API/block). A new set of tools needs to be developed that can be used to build bigger and greater things out of a series of smaller, more manageable/customizable blocks. Flexibility of low-level elements will help minimize conflict when attempting to design higher-level blocks. As with any new system, there is a need for `tunable designs' that can be centered around specific aspects (e.g. power/energy-efficient systems to minimize ``power cost buildup'', or security/trust-centric needs). Functions, in this tool set, should be kept simple (e.g. decide the output, \textbf{but} not how the output manifests), so that each component remains simple in both design and operation. Virtualization tools lend themselves to both abstraction (choosing the simple output) and standardization/documentation (knowing what the outputs are without needing to know exactly how they manifest; just that they will). They are a much-needed tool for exploring design spaces and enabling codesign of software and hardware elements.
\end{itemize}
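The `decide the output, but not how it manifests' contract can be sketched in a few lines of C++; this is a hypothetical illustration (the `Block`/`LutBlock`/`run` names are invented here, not taken from any cited work):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch: the abstract block fixes the contract -- what the
// output is -- while each platform instance decides how that output
// actually manifests.
struct Block {
    virtual ~Block() = default;
    virtual uint32_t output(uint32_t input) = 0;  // decide output, not how
};

// One instance: a plain software realization of the contract.
struct LutBlock : Block {
    uint32_t output(uint32_t input) override { return input ^ 0xdeadbeefu; }
};

// Higher-level composition sees only the contract, never the realization.
uint32_t run(Block& b, uint32_t x) { return b.output(x); }
```

Because the higher-level design programs only against `Block`, a vendor could swap the realization (software table, accelerator, reconfigurable fabric) without touching the composition above it, which is the standardization `contract' described above.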
\item Hardware/Software Codesign
\begin{itemize}
\item There are different coding languages to handle different aspects (i.e. SW/HW) of virtualization. When dealing with the virtualization of software, the timing and concurrency semantics still fall short. These problems come from a lack of resolution and control at the lowest levels of virtualization interaction. The overwhelming hardware issue is that hardware semantics are very specific and tough to simulate. Hardware simulation languages such as SystemC have been developed, but tools to bridge the space between hardware and software simulation/virtualization have not. Codesign of software simulations of hardware allows high-level software abstractions to interact with low-level hardware abstractions.
\item Future considerations (end of Jurgen Teich paper~\cite{Teich2012})
\begin{itemize}
\item System-on-chip (SoC) technology will already be dominated by 100--1000 core multiprocessing on a chip by 2020~\cite{Teich2012}. Changes will affect the way companies design embedded software, and new languages and tool chains will need to emerge in order to cope with the enormous complexity. Low-cost embedded systems (daily-life devices) will undoubtedly see development of concurrent software and exploitation of parallelism. In order to cope with the desire to include the environment in the design of future cyber-physical systems, heterogeneity will most definitely continue to grow, in SoCs as well as in distributed systems of systems.
\item A huge part of design time is already spent on verification, either in a simulative manner or using formal techniques~\cite{Teich2012}. Coverification will require an increasing proportion of the design time as systems become more complex. Progress at the electronic system level might diminish due to verification techniques that cannot cope with the modeling of errors and ways to retrieve and correct them, or, even better, prove that certain properties formulated as constraints during synthesis will hold in the implementation by construction. The verification process on one level of abstraction needs to prove that an implementation (the structural view) indeed satisfies the specification (behavioral view)~\cite{Teich2012}.
\item Due to the uncertainty of the environment and communication partners of complex interacting cyber-physical systems, runtime adaptivity will be a must for guaranteeing the efficiency of a system. With the availability of reconfigurable hardware and multicore processing, which will also take a more important role in the tool chain for system simulation and evaluation, online codesign techniques will become standard as time moves forward.
\end{itemize}
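The structural-satisfies-behavioral verification obligation can be made concrete with a toy example (an illustration only, not a technique from the cited papers): a behavioral specification of a 4-bit adder, checked exhaustively against a structural ripple-carry view built from full-adder logic.

```cpp
#include <cassert>
#include <cstdint>

// Behavioral view: what the block computes (4-bit modular addition).
uint8_t adder_spec(uint8_t a, uint8_t b) { return (a + b) & 0xF; }

// Structural view: a ripple-carry composition of full adders.
uint8_t adder_impl(uint8_t a, uint8_t b) {
    uint8_t sum = 0, carry = 0;
    for (int i = 0; i < 4; ++i) {
        uint8_t x = (a >> i) & 1, y = (b >> i) & 1;
        uint8_t s = x ^ y ^ carry;                 // full-adder sum bit
        carry = (x & y) | (carry & (x ^ y));       // full-adder carry out
        sum |= s << i;
    }
    return sum;
}

// A 4-bit space is small enough to check exhaustively; real systems need
// simulation coverage or formal techniques, as noted above.
bool verify() {
    for (uint8_t a = 0; a < 16; ++a)
        for (uint8_t b = 0; b < 16; ++b)
            if (adder_impl(a, b) != adder_spec(a, b)) return false;
    return true;
}
```

The point of the sketch is the shape of the obligation: one function is the specification, the other the implementation, and verification is showing the two agree over the design's input space.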
\item As with any design problem, if the functional aspects are indistinguishable from the implementation aspects, then it is very difficult to evolve the design over multiple hardware generations~\cite{Vincentelli2007}. It should be noted that tools already exist for low-, or high-, level system simulation. The new territory is the combination of these tools to form a `hardware-to-software' virtualization tool that is both efficient and effective.
\end{itemize}
\item Manufacturer's standpoint boils down to: ``minimize mask-making costs but flexible enough to warrant its use for a set of applications so that production volume will be high over an extended chip lifetime.''~\cite{Vincentelli2007}
\begin{itemize}
@@ -137,28 +128,12 @@ \section{Platform-based design}
\begin{itemize}
\item PBD (Platform-based design)
\begin{itemize}
\item The monetary considerations of platform-based design include system re-use, flexibility of elements, and re-programmability. As system complexity grows, producing chips becomes more expensive, thus pushing for the systems that are produced to be ``multi-capable'' (e.g. re-use value)~\cite{Keutzer2000}. The greatest challenge in the hardware design space is the placement and arrangement of components based on an array of differing variables. The design of architecture platforms is the result of trade-offs in a complex space, and worse yet, some aspects directly compete with/contradict each other~\cite{Vincentelli2002}.
\begin{itemize}
\item The size of the application space that can be supported by the architectures belonging to the architecture platform represents the flexibility of the platform. The size of the architecture space that satisfies the constraints embodied in the design architecture is the freedom that providers/manufacturers have in designing their hardware instances. Even at this level of abstraction these two aspects of hardware design can compete with each other; e.g. a manufacturer has pre-determined sizes that constrain their development space a priori. Furthermore, aspects such as design space constraints and heat distribution are a well-known source of trouble when designing embedded systems.
\end{itemize}
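These two competing sizes can be illustrated with a hypothetical design-space sweep (all names and numbers below are invented for illustration): candidate platform instances are first filtered by the constraints fixed a priori, and the most flexible survivor is chosen.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical platform instance: flexibility is how much of the application
// space it covers; area/power stand in for the pre-determined constraints.
struct PlatformInstance {
    std::string name;
    int appsSupported;  // size of the application space covered
    int area;           // mm^2 (a priori size constraint)
    int power;          // mW budget
};

// Keep only instances inside the constrained architecture space, then pick
// the most flexible survivor -- the two axes that can compete.
PlatformInstance pickPlatform(const std::vector<PlatformInstance>& space,
                              int maxArea, int maxPower) {
    PlatformInstance best{"none", -1, 0, 0};
    for (const auto& p : space)
        if (p.area <= maxArea && p.power <= maxPower &&
            p.appsSupported > best.appsSupported)
            best = p;
    return best;
}
```

Tightening the a priori constraints shrinks the feasible architecture space and therefore the flexibility that can be bought, which is exactly the competition described above.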
\item As with any new technology/design methodology/concept, there are expensive initial costs for development, evaluation, and validation until more rigorous documentation can be made. As with any early development, it pays to think everything through first before diving into prototyping (one wants roughly a 90/10 split between planning and action, as with software development). This can be aided through the development of virtualization tools, which unfortunately come with their own development, evaluation, and validation costs. Following from this, the two main concerns for effective platform-based design development and adoption are software development and a set of tools that insulate the details of the architecture from the application software (e.g. virtualization). The more abstract the programmer's model, the richer the set of platform instances, but the more difficult it is to choose the ``optimal'' architecture platform instance and map automatically into it~\cite{Vincentelli2002}.
\item In PBD, the partitioning of the design into hardware and software is not the essence of system design as many think; rather, it is a consequence of decisions taken at a higher level of abstraction~\cite{Vincentelli2007}. For example, a lower-level consideration may be heat dissipation, which manifests itself as size constraints at higher levels. The types of decisions that are made at these higher levels lend themselves to the design traits that are requirements at lower levels. These sorts of complexities in the cost of design are exactly why abstraction/virtualization is required/desired. Additional levels of abstraction (along with virtualization tools for design space exploration) aid in simplifying the design process. ``Critical decisions are about the architecture of the system, e.g., processors, buses, hardware accelerators, and memories, that will carry on the computation and communication tasks associated with the overall specification of the design.''~\cite{Vincentelli2007} The library of functional and communication components is the design space that we are allowed to explore at the appropriate level of abstraction~\cite{Vincentelli2007}. There are elements of recursive behavior that need to be tackled from a virtualized tool standpoint. In a PBD refinement-based design process, platforms should be defined to eliminate large loop iterations for affordable designs~\cite{Vincentelli2007}. This refinement should restrict the design space via new forms of regularity and structure that surrender some design potential for lower cost and first-pass success.
\item Due to the recursive nature of platform-based design, establishing the number, location, and components of intermediate ``platforms'' is the essence of PBD~\cite{Vincentelli2007}. ``In fact, designs with different requirements and specification may use different intermediate platforms, hence different layers of regularity and design-space constraints. The tradeoffs involved in the selection of the number and characteristics of platforms relate to the size of the design space to be explored and the accuracy of the estimation of the characteristics of the solution adopted. Naturally, the larger the step across platforms, the more difficult is predicting performance, optimizing at the higher levels of abstraction, and providing a tight lower bound. In fact, the design space for this approach may actually be smaller than the one obtained with smaller steps because it becomes harder to explore meaningful design alternatives and the restriction on search impedes complete design-space exploration. Ultimately, predictions/abstractions may be so inaccurate that design optimizations are misguided and the lower bounds are incorrect.''~\cite{Vincentelli2007}
\item ``The identification of precisely defined layers where the mapping processes take place is an important design decision and should be agreed upon at the top design management level.''~\cite{Vincentelli2007}
\begin{itemize}
@@ -200,10 +175,10 @@ \section{Security}
\end{itemize}
\item Automation of security development
\begin{itemize}
\item Security Mechanisms - system artifacts that are used to enforce system security policies.
\item Security Principles - guidelines or rules that when followed during system design will aid in making the system secure.
\item Security Policies - organizational security policies are ``the set of laws, rules, and practices that regulate how an organization manages, protects, and distributes sensitive information.'' System Security Policies are rules that the information system enforces relative to the resources under its control to reflect the organizational security policy.
\item Could have security elements that attempt to optimize themselves to the system they are in based on a few pivot points (power, time, efficiency, level of randomness). Another option is for the automated tool to trade out specific security components as an easier way to increase security without requiring re-design/re-construction of the underlying element. There is always the requirement that the overall trustworthiness of a new system must meet the standards of the security policies that `rule' the system.
\end{itemize}
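The pivot-point idea might be sketched as follows (a hypothetical scoring scheme, not taken from the paper): each interchangeable security component advertises its costs along the pivot points, and re-weighting the pivots re-tunes which component is selected, without re-designing any element.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical security component: strength plus its cost along two of the
// pivot points named in the text (power, time).
struct SecurityComponent {
    std::string name;
    int level;  // relative security strength
    int power;  // mW
    int time;   // us per operation
};

struct Pivots { int wLevel, wPower, wTime; };  // tunable emphasis

// Pick the component with the best weighted score; changing the pivot
// weights (e.g. power-centric vs. security-centric) changes the selection,
// so hardening or lightening the system is a swap, not a re-design.
// Assumes a non-empty component library.
SecurityComponent tune(const std::vector<SecurityComponent>& lib, Pivots w) {
    const SecurityComponent* best = &lib.front();
    auto score = [&](const SecurityComponent& c) {
        return w.wLevel * c.level - w.wPower * c.power - w.wTime * c.time;
    };
    for (const auto& c : lib)
        if (score(c) > score(*best)) best = &c;
    return *best;
}
```

Whatever component the tool selects would still have to be checked against the governing security policies, per the trustworthiness requirement above.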
\item Mapping of Security onto PBD structure
\begin{itemize}