From 50cc070e31a9e839590bcccd9a785307d46e0f7e Mon Sep 17 00:00:00 2001 From: Paul Wortman Date: Mon, 20 Jul 2015 20:37:18 -0400 Subject: [PATCH] Spelling and additional edits to sections prior to Security Signed-off-by: Paul Wortman --- PBDSecPaper.tex | 46 +++++++++++++++++++++++----------------------- 1 file changed, 23 insertions(+), 23 deletions(-) diff --git a/PBDSecPaper.tex b/PBDSecPaper.tex index 8d68334..aeeaabb 100644 --- a/PBDSecPaper.tex +++ b/PBDSecPaper.tex @@ -80,61 +80,61 @@ \begin{abstract} \begin{itemize} \item PBD centers around minimizing costs of developing and manufacturing new components along with changing the landscape of this ecosystem, through the use of a ``meet-in-the-middle'' methodology where successive refinements of specifications meet with abstractions of potential implementations; the goal being to obtain the same level of abstraction as is written into good coding functions~\cite{Vincentelli2002}. Security centers around being able to gauge the trustworthiness of components as well as the larger system made of distributed components. There is a distinct lack of design methodology for doing platform-based design of security elements, although there is conceptual use in mobile embedded systems~\cite{Schaumont2005}. % and imaging~\cite{Sedcole2006}. -\item Ground work for implementing security via PBD exists; this paper is centered around connecting the dots and laying the foundation for framework that will be built upon for creating security in a documented, rigorous, standardized way. +\item Ground work for implementing security via PBD exists; this paper is centered around connecting the dots and laying the foundation for a framework that will be built upon for creating security in a documented, rigorous, and standardized way. \end{itemize} \end{abstract} \section{Motivation} \label{Motivation} -As systems move towards more complex designs and implementations (as allowed by growths in technology; Moore's Law) the ability to make simplistic changes to these designs becomes exponentially more difficult. For this reason, levels of abstraction are desired when simplifying the design/evaluation phases of systems development. An example of this abstraction simplification is the use of system-on-chip (SoC) to replace multi-chip solutions. This abstraction solution is then used for a variety of tasks, ranging from arithmatic to system behavior. It is an industry standard to use SoC to handle encryption/security in a secure and removed manner~\cite{Wu2006}. Middleware describes software that resides between an application and the inner workings of the system hosting the application. The purpose of the middleware is to abstract the complexities of the underlying technology from the application layer~\cite{Lang2003}; to act as translation software for communicating from lower level to a higher level. This is the same sort of ``abstraction bridge'' that is required by the ``meet-in-the-middle'' methodology of PBD, along with the sort of software construct that benefits the virtualization of security component mapping. ``However, even though current silicon technology is closely following the growing demands; the effort needed in modeling, simulating, and validating such designs is adversely affected.
This is because current modeling tools and frameworks, hardware and software co-design environments, and validation and verification frameworks do not scale with the rising demands.''`\cite{Patel2007} Work in the security realm is much more chaotic, although understakings have been made to define the scope of security and its intricacies in a well documented manner~\cite{Benzel2005}. Other work in the security realm includes security-aware mapping for automotive systems, explorations of dependable and secure computing, how to model secure systems, and defining the general theorems of security properties~\cite{Lin2013, Avizienis2004, Jorgen2008, Zakinthinos1997, Zhou2007}. Security has many facets to it: failure, processes, security mechanisms, secuirty principles, security policies, trust, etc. A key developing aspect of security is its standardization of encryption algorithms, methods of authentication, and communication protocols. +As systems move towards more complex designs and implementations (as allowed by growth in technology; Moore's Law) the ability to make simple changes to these designs becomes exponentially more difficult. For this reason, levels of abstraction are desired when simplifying the design/evaluation phases of systems development. An example of this abstraction simplification is the use of system-on-chip (SoC) to replace multi-chip solutions. This SoC abstraction is then used for a variety of tasks, ranging from arithmetic to system behavior. It is an industry standard to use SoC to handle encryption/security in a secure and removed manner~\cite{Wu2006}. Middleware describes software that resides between an application and the inner workings of the system hosting the application. The purpose of the middleware is to abstract the complexities of the underlying technology from the application layer~\cite{Lang2003}; to act as translation software for communicating from a lower level to a higher level. The lower-to-higher translation is the same sort of ``abstraction bridge'' that is required by the ``meet-in-the-middle'' methodology of PBD, along with the sort of software construct that benefits the virtualization of security component mapping. ``However, even though current silicon technology is closely following the growing demands; the effort needed in modeling, simulating, and validating such designs is adversely affected. This is because current modeling tools and frameworks, hardware and software co-design environments, and validation and verification frameworks do not scale with the rising demands.''~\cite{Patel2007} Work in the security realm is much more chaotic, although undertakings have been made to define the scope of security and its intricacies in a well-documented manner~\cite{Benzel2005}. Other work in the security realm includes security-aware mapping for automotive systems, explorations of dependable and secure computing, how to model secure systems, and defining the general theorems of security properties~\cite{Lin2013, Avizienis2004, Jorgen2008, Zakinthinos1997, Zhou2007}. Security has many facets to it: failure, processes, security mechanisms, security principles, security policies, trust, etc. A key developing aspect of security is the standardization of encryption algorithms, methods of authentication, and communication protocols. \begin{itemize} \item Standardization is the process of developing and implementing technical standards.
Standardization can help to maximize compatibility, interoperability, safety, repeatability, or quality; it can also facilitate commoditization of formerly custom processes. Standards appear in all sorts of domains. \begin{itemize} - \item For the IC domain standardization manifests as a flexibile integrated circuit where customization for a particular application is achieved by programming one or more components on the chip. PC makers and application software designers develop their products quickly and efficiently around a standard `platform' that emerged over time. As a quick over view of these standards: x86 ISA which makes is possible to reuse the OS \& SW applications, a full specified set of buses (ISA, USB, PCI) which allow for use of the same expansion boards of IC's for different products, and a full specification of a set of IO devices (e.g. keyboard, moust, audio and video devices). The advantage of the standardization of the PC domain is that software can also be developed independently of the new hardware availability, thus offering a real hardware-software codesign approach. If instruction set architecture (IAS) is kept constant (e.g. standardized) then software porting, along with future development, is far easier~\cite{Vincentelli2002}. In a `System Domain' lens, standardization is the aspect of the capabilities a platform offeres to develop quickly new applications (adoptability). In other words, this requires a distillation of the principles so that a rigorous methodology can developed and profitably used across different design domains. - \item So why is standardization useful? Standardization allows for manufacturers, system developers, and software designers to all work around a single, accepted, platform. It is understood what the capabilities of the hardware are, what the limitations of system IO will be, along with what the methods/protocols of communication will be. Even these preset aspects of the `standard' have their own `contractual obligations' of how they will function, what their respective `net-lists' are, and where the limtations of such standards lie. While the act of creating a standard takes time, not only due to time of development but due to speed of adoption of wide use, it is a necessary step. Without it, it becomes far more difficult to create universally used complex systems, let alone validate their correctness and trustworthiness. + \item For the IC domain standardization manifests as a flexible integrated circuit where customization for a particular application is achieved by programming one or more components on the chip (e.g. virtualization). PC makers and application software designers develop their products quickly and efficiently around a standard `platform' that emerged over time. As a quick overview of these standards: the x86 ISA, which makes it possible to reuse the OS \& SW applications; a fully specified set of buses (ISA, USB, PCI), which allow for use of the same expansion boards of ICs for different products; and a full specification of a set of IO devices (e.g. keyboard, mouse, audio and video devices). The advantage of the standardization of the PC domain is that software can also be developed independently of the new hardware availability, thus offering a real hardware-software codesign approach. If the instruction set architecture (ISA) is kept constant (i.e. standardized) then software porting, along with future development, is far easier~\cite{Vincentelli2002}.
In a `System Domain' lens, standardization is the aspect of the capabilities a platform offers to quickly develop new applications (adoptability). In other words, this requires a distillation of the principles so that a rigorous methodology can be developed and profitably used across different design domains. + \item So why is standardization useful? Standardization allows for manufacturers, system developers, and software designers to all work around a single, accepted platform. It is understood what the capabilities of the hardware are, what the limitations of system IO will be, along with what the methods/protocols of communication will be. Even these preset aspects of the `standard' have their own `contractual obligations' of how they will function, what their respective `net-lists' are, and where the limitations of such standards lie. While the act of creating a standard takes time, not only due to the time of development but also due to the speed of adoption into wide use, it is a necessary step. Without standardization, it becomes far more difficult to create universally used complex systems, let alone validate their correctness and trustworthiness. \item ``Indeed, market data indicate that more than 80\% of system development efforts are now in software versus hardware. This implies that an effective platform has to offer a powerful design environment for software to cope with development cost.''~\cite{Vincentelli2002} \end{itemize} \end{itemize} \begin{itemize} -\item Where do we gain/lose on shifting the method of design. Development of these tools implies that there is a need to change the focus and methods of design/development~\cite{Vincentelli2007}. +\item Where do we gain/lose on shifting the method of design to a more platform-based design/security-centric model? After all, development of these tools implies that there is a need to change the focus and methods of design/development~\cite{Vincentelli2007}. \begin{itemize} - \item The advatange to this method is ease of changes in development and searching of design spaces during early design and development of those systems. For PBD this means that a company/manufacturer is able to lower the cost and time spent in the `early development phase'; the time spent when first drawing up system designs prior to initial prototyping. While the advantage of overcoming multiple prototpying re-designs with a series of virtualization tools will cut down on development time, it can not be forgotten that the rigorous standards and documentation need to be developed for this sort of advantage. Once this hurdle is passed then the advantages of such a system can be reaped. For security this means that components can be standardized from a local, network, and distributed standpoint. These different scopes of security will be tackled in Section~\ref{Security}. The main point here is that with standardization comes a form of `contract' that all parties can expect will be followed, thus allowing for a different manufacturers/developers to create different aspect while knowing that these different elements will come together in the final product; much like the advantages of platform-based design. - \item The disadvantage of using a new method is the loss in time for developing the standards, rigors, and documentation that would be used as new standards for the industry. These sorts of costs have been felt while developing any of the current standards of security; most definitely with the development of new algorithms for encryption.
Further more, security vulnerabilities are found mainly due to incorrect implementations or due to incorrect assumptions about functionality; both of which are more easily avoidable with the use of rigorous planning and design. The other disadvantage is in the adoptability of these new methods and designs. Ideally all manufacturers would adopt a new commodity; rather than components, `design combinations' would be the new commodity that manufacturers would peddle (e.g. instead of saying ``my components are the best'' the dialog moves towards ``My ability to combine these components is the best''). - \item Virtualization will help offset the time and monetary costs in the long run. Essentially the issue boils down to how to abstract the lower level requirements of a system (assembly/C) into a simpler high level set of tools (API/block). A new set of tools needs to be developed that can be used to build bigger and greater things out of a series of smaller more manage/customizable blocks. Flexbilitiy of low level elements will help minimize conflict when attempting to design higher level blocks. As with any new system, there is a need for `tunable desings' that can be centered around specific aspects (e.g. power/energy efficient systems to minimize ``power cost buildup'', or security/trust centric needs). Functions, in this tool set, should be kept simple (e.g. decide output, \textbf{but} not how the output manifests). The reason behind this is that the design can remain simplistic in its design and operation. Virtualization tools lend to both the ideas of abstraction (choosing the simple output) and standardization/documentation (know what the outputs are, but not needing to know exactly how they manifest; just that they will)~\cite{Alagar2007}. They are a much needed tool for exploring design spaces and bringing codesign of software and hardware elements. + \item The advantage of this method is ease of changes in development and searching of design spaces during early design and development of those systems. For PBD this means that a company/manufacturer is able to lower the cost and time spent in the `early development phase'; the time spent when first drawing up system designs prior to initial prototyping. While the advantage of overcoming multiple prototyping re-designs with a series of virtualization tools will cut down on development time, it cannot be forgotten that rigorous standards and documentation need to be developed for this sort of advantage. Once this hurdle is passed then the advantages of such a system can be reaped. For security this means that components can be standardized from a local, network, and distributed standpoint. These different scopes of security will be tackled in Section~\ref{Security}. The main point here is that with standardization comes a form of `contract' that all parties can expect will be followed, thus allowing for different manufacturers/developers to create different aspects of a system, while knowing that these different elements will come together in the final product; much like the advantages of platform-based design. + \item The disadvantage of using a new method is the loss in time for developing the standards, rigors, and documentation that would be used as new guidelines for the industry. These sorts of costs have been felt while developing any of the current standards of security; most definitely with the development of new algorithms for encryption.
Furthermore, security vulnerabilities are found mainly due to incorrect implementations or due to incorrect assumptions about functionality; both of which are more easily avoidable with the use of rigorous planning and design. The other disadvantage is in the adoptability of these new methods. Ideally all manufacturers would adopt a new commodity; rather than components, `design combinations' would be the new commodity that manufacturers would peddle (e.g. instead of saying ``my components are the best'' the dialog moves towards ``My ability to combine these components is the best''). + \item Virtualization will help offset the time and monetary costs of using and implementing these new methodologies/ideologies. Essentially the issue boils down to how to abstract the lower level requirements of a system (assembly/C) into a simpler high level set of tools (API/block). A new set of tools needs to be developed that can be used to build bigger and greater things out of a series of smaller, more manageable/customizable blocks. Flexibility of low level elements will help minimize conflict when attempting to design higher level blocks. As with any new system, there is a need for `tunable designs' that can be centered around specific aspects (e.g. power/energy efficient systems to minimize ``power cost buildup'', or security/trust centric needs). Functions, in this tool set, should be kept simple (e.g. decide output, \textbf{but} not how the output manifests). The reason behind this is that the design can remain simple in its operation. Virtualization tools lend to both the ideas of abstraction (choosing the simple output) and standardization/documentation (know what the outputs are, but not needing to know exactly how they manifest; just that they will)~\cite{Alagar2007}. They are a much needed tool for exploring design spaces and bringing codesign of software and hardware elements. \end{itemize} \item Hardware/Software Codesign \begin{itemize} \item There are different coding languages to handle different aspects (i.e. SW/HW) of virtualization. When dealing with the virtualization of software, the aspects of timing and concurrency semantics still fall short. These problems come from a lack of resolution and control at the lowest levels of virtualization interaction. The overwhelming hardware issue is that hardware semantics are very specific and tough to simulate. There has been the development of hardware simulation languages, such as SystemC~\cite{Kreku2008}, but there has not been the development of tools to bridge the space between hardware and software simulation/virtualization. Codesign of software simulations of hardware allows for development of high level software abstraction to interact with low level hardware abstraction. \item The reasoning is that the constant growth in complexity calls for simulation/virtualization of the design process. \begin{itemize} - \item System-on-chip (SoC) technology will be alreadu dominated by 100-1000 core multiprocessing on a chip by 2020~\cite{Teich2012}. Changes will affect the way companies design embedded software and new languages, and tool chains will need to emerge in order to cope with the enormous complexity. Low-cost embedded systems (daily-life devices) will undoubtably see development of concurrent software and exploitation of parallelism.
In order to cope with the desire to include environment in the design of future cyber-physical systems, the heterogeneity will most definitely continue to grow as well in SoCs as in distributed systems of systems. - \item A huge part of design time is already spent on the verification, either in a simulative manner or using formal techniques~\cite{Teich2012}. Coverification will require an increasing proportion of the design time as systems become more complex. Progress at electronic system level might diminish due to verification techniques that cannot cope with the modeling of errors and ways to retrieve and correct them, or, even better, prove that certian properties formulated as contraints during syntehsis will hold in the implementation by construction. The verification process on one level of abstraction needs to prove that an implementation (the sutrctureal view) indeed satisfies the specification (behavioral view)~\cite{Teich2012}. + \item System-on-chip (SoC) technology will already be dominated by 100-1000 core multiprocessing on a chip by 2020~\cite{Teich2012}. Changes will affect the way companies design embedded software, and new languages and tool chains will need to emerge in order to cope with the enormous complexity. Low-cost embedded systems (daily-life devices) will undoubtedly see development of concurrent software and exploitation of parallelism. In order to cope with the desire to include the environment in the design of future cyber-physical systems, heterogeneity will most definitely continue to grow, in SoCs as well as in distributed systems of systems. + \item A huge part of design time is already spent on verification, either in a simulative manner or using formal techniques~\cite{Teich2012}. Coverification will require an increasing proportion of the design time as systems become more complex. Progress at the electronic system level might diminish due to verification techniques that cannot cope with the modeling of errors and ways to retrieve and correct them, or, even better, prove that certain properties formulated as constraints during synthesis will hold in the implementation by construction. The verification process on one level of abstraction needs to prove that an implementation (the structural view) indeed satisfies the specification (behavioral view)~\cite{Teich2012}; a minimal sketch of such a check follows this list. \item Given the uncertainty of the environment and communication partners of complex interacting cyber-physical systems, runtime adaptivity will be a must for guaranteeing the efficiency of a system. Due to the availability of reconfigurable hardware and multicore processing, which will also take a more important role in the tool chain for system simulation and evaluation, online codesign techniques will work towards a standard as time moves forward. \end{itemize} \item As with any design problem, if the functional aspects are indistinguishable from the implementation aspects, then it is very difficult to evolve the design over multiple hardware generations~\cite{Vincentelli2007}. It should be noted that there are tools that already exist for low- or high-level system simulation. New territory is the combination of these tools to form a `hardware-to-software' virtualization tool that is both efficient and effective. \end{itemize}
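As a minimal, illustrative sketch of the behavioral-versus-structural check described above (the module, signal, and function names here are hypothetical and are not taken from any of the cited tools), the following SystemC testbench drives a trivial structural implementation and compares its output against a purely behavioral specification function:
\begin{verbatim}
// Minimal SystemC sketch (hypothetical names): a behavioral "specification"
// function is checked against a structural "implementation" module inside
// one simulation, in the spirit of proving that the structural view
// satisfies the behavioral view.
#include <systemc.h>
#include <iostream>

// Behavioral view: the specification is just a function of the inputs.
static int spec_sum(int a, int b) { return a + b; }

// Structural view: a (trivial) hardware module implementing the same contract.
SC_MODULE(Adder) {
  sc_in<int>  a, b;
  sc_out<int> sum;
  void compute() { sum.write(a.read() + b.read()); }
  SC_CTOR(Adder) {
    SC_METHOD(compute);
    sensitive << a << b;
  }
};

int sc_main(int, char*[]) {
  sc_signal<int> a, b, sum;
  Adder dut("dut");
  dut.a(a); dut.b(b); dut.sum(sum);

  // Tiny co-verification loop: drive stimuli, let the model settle,
  // then compare the implementation's output against the specification.
  for (int i = 0; i < 4; ++i) {
    a.write(i); b.write(2 * i);
    sc_start(1, SC_NS);
    sc_assert(sum.read() == spec_sum(i, 2 * i));
  }
  std::cout << "structural view matches behavioral view" << std::endl;
  return 0;
}
\end{verbatim}
In a full co-verification flow, the specification function would be replaced by an executable behavioral model and the module by the synthesized structural design; the shape of the check stays the same.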
-\item Metropolis is one tool that is based in part on the concept of platform-based design. Metropolis can analyze statically and dynamically functional designs with models that have no notion of physical quantities and mapped designs where the association of functionality to architectural services allows for evaluation of characteristics (e.g.~latency, throughput, power, and energy) of an implementation of a particular functionality with a particular platform instance~\cite{Vincentelli2007, Metropolis}. Metropolis is but one manifestation of platform-based design as a tool. PBD has been used for the platform-exploration of synthetic biological systems as seen in the work done by Densmore et.~al.~to create a strong and flexable tool~\cite{Densmore2009}. Other applications include design on a JPEG encoder and use for distributed automotive design~\cite{}\textbf{ADD ALL CITING OF OTHER PBD BASED WORK HERE!!!}. +\item Metropolis is one tool that is based in part on the concept of platform-based design. Metropolis can analyze statically and dynamically functional designs with models that have no notion of physical quantities and mapped designs where the association of functionality to architectural services allows for evaluation of characteristics (e.g.~latency, throughput, power, and energy) of an implementation of a particular functionality with a particular platform instance~\cite{Vincentelli2007, Metropolis}; a toy mapping-evaluation sketch follows this list. Metropolis is but one manifestation of platform-based design as a tool. PBD has been used for the platform-exploration of synthetic biological systems as seen in the work done by Densmore et~al.~to create a strong and flexible tool~\cite{Densmore2009}. Other applications of platform-based design include design of a JPEG encoder and use for distributed automotive design~\cite{}\textbf{ADD ALL CITING OF OTHER PBD BASED WORK HERE!!!}. \end{itemize}
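To make the idea of evaluating a functionality-to-architecture mapping concrete, the following toy sketch (the service names and cost figures are hypothetical assumptions, and this is not the Metropolis API) scores candidate architectural services for a single task by estimated latency and power and keeps the best mapping that satisfies a power budget:
\begin{verbatim}
// Toy design-space exploration sketch (hypothetical names and numbers):
// each candidate mapping of a task onto an architectural service is scored
// by estimated latency and power; the lowest-latency mapping that fits the
// power budget is reported.
#include <iostream>
#include <string>
#include <vector>

struct Service { std::string name; double latency_ms; double power_mw; };

int main() {
  // Candidate architectural services a single task could be mapped onto.
  std::vector<Service> services = {
      {"cpu_core", 4.0, 120.0},
      {"dsp", 2.5, 150.0},
      {"hw_accelerator", 0.8, 300.0}};
  const double power_budget_mw = 200.0;

  const Service* best = nullptr;
  for (const Service& s : services) {
    if (s.power_mw > power_budget_mw) continue;    // violates the constraint
    if (!best || s.latency_ms < best->latency_ms)  // otherwise minimize latency
      best = &s;
  }
  if (best)
    std::cout << "map task onto " << best->name
              << " (latency " << best->latency_ms << " ms)\n";
  else
    std::cout << "no feasible mapping under the power budget\n";
  return 0;
}
\end{verbatim}
A real platform-based design flow would explore such mappings over many tasks and platform instances at once, with the estimates supplied by the platform's performance indexes rather than hard-coded figures.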
\begin{itemize} -\item Manufacturer's standpoint boils down to: ``minimize mask-making costs but flexible enough to warrant its use for a set of applications so that production volume will be high over an extended chip lifetime.''~\cite{Vincentelli2007} +\item The manufacturer's standpoint boils down to: the design should minimize mask-making costs but be flexible enough to warrant its use for a set of applications so that production volume will be high over an extended chip lifetime~\cite{Vincentelli2007}. \begin{itemize} - \item This is exactly why industry would love for platform-based design to pick up. The monetary costs saved would be enough to warrent adoption of the technology, \textbf{but} the monetary costs of developing such a system (e.g. design, evalutation, validation) does not carry the same attraction (simply because companies are selfish and want to \textbf{make} money). + \item This is exactly why industry would love for platform-based design to become a new standard. The monetary costs saved would be enough to warrant adoption of the technology, \textbf{but} the monetary costs of developing such a system (e.g. design, evaluation, validation) do not carry the same attraction (simply because companies are selfish and want to \textbf{make} money). \end{itemize} -\item Security concerns center around how to define trust/trustworthiness, determining the functions and behvaiors of security components, and the prinicples, policies, and mechanisms that are rigorously documented to standardize behavior. +\item Security concerns center around how to define trust/trustworthiness, determining the functions and behaviors of security components, and the principles, policies, and mechanisms that are rigorously documented to standardize behavior. Security components would also be designed by industry to clearer standards, giving better security and greater ease of set-up and implementation. \end{itemize} \section{Platform-based design} \label{Platform-based design} \begin{itemize} -\item The monetary considerations of platform-based design include system re-use, flexibility of elements, and re-programmability. As system complexity grows the costs of producing chips becomes more expensive, thus pushing for the systems that are produced to be ``multi-capable'' (e.g. re-use value)~\cite{Keutzer2000}. The greatest challenge in the hardware design space is placement and arragement of components based on an array of differing variables. Design of architecture platforms is result of trade-off in complex space, and worse yet some aspects directly compete/contradict each other~\cite{Vincentelli2002, Gruttner2013}. +\item The monetary considerations of platform-based design include system re-use, flexibility of elements, and re-programmability. As system complexity grows, the costs of producing chips increase, thus pushing for the systems that are produced to be ``multi-capable'' (e.g. re-use value)~\cite{Keutzer2000}. The greatest challenge in the hardware design space is placement and arrangement of components based on an array of differing variables. Design of architecture platforms is a result of trade-offs in a complex space, and worse yet some aspects directly compete/contradict each other~\cite{Vincentelli2002, Gruttner2013}. \begin{itemize} \item The size of the application space that can be supported by the architectures belonging to the architecture platform represents the flexibility of the platform. The size of the architecture space that satisfies the constraints embodied in the design architecture is the space that providers/manufacturers have available when designing their hardware instances. Even at this level of abstraction these two aspects of hardware design can compete with each other. For example, a manufacturer has pre-determined sizes that constrain their development space a priori. Furthermore, aspects such as design space constraints and heat distribution are a well-known source of trouble when designing embedded systems. \end{itemize} -\item As with any new technology/design methodoloy/concept there are expensive initial costs for development, evaluation, and validation until more rigorous documentation can be made. As with any early development, it pays to think everything through first before diving into prototyping (want roughly 90/10 split between planning and action; same as with software development). This can be aided through the development of virtualization tools; which unfortunately come with their own development, evaluation, and validation costs. Harping on this, two main concerns for effective platform-based design developement and adoptation are software developement and a set of tools that insulate the details of rachitecture from application software (e.g. virtualization).
The more abstract the programmer's model, the richer is the set of platform instances, but the more difficult is to choose the ``optimal'' architecture platform instance and map automatically into it.~\cite{Vincentelli2002} -\item In PBD, the partitioning of the design into hardware and software is not the essence of system design as many think, rather it is a consequence of decisions taken at a higher level of abstraction~\cite{Vincentelli2007}. For example, a lower level consideration may be heat dissipation concerns which manifests itself as size constraints at higher levels. The types of decisions that are made at these higher levels lend to the design traits that are requirements at lower levels. These sort of complexities in cost of design are exactly why abstraction/virtualization are required/desired. Additional levels of abstraction (along with virtualization tools for design space exploration) aid in simplifying the design process. The abstraction levels deal with critical decisions that are about the architecture of the system, e.g., processors, buses, hardware accelerators, and memories, that will carry on the computation and communication tasks associated with the overall specification of the design~\cite{Vincentelli2007, Pellizzoni2009}. In certain scenarios these critical decisions can be centered around `safety-critical' elements, as can be seen in embedded systems deployed in the medical market (e.g. pacemakers). The library of functional and communication components is the design space that we are allowed to explore at the appropriate level of abstraction~\cite{Vincentelli2007}. There are elements of recursive behavior that need to be tackled from a virtualized tool standpoint. In a PBD refinement-based design process, platforms shoould be defined to eliminate large loop iterations for affordable designs~\cite{Vincentelli2007}. This refinement should restrict the design space via new forms of regularity and structure that surrender some design potential for lower cost and first-pass sucess. +\item As with any new technology/design methodology/concept there are expensive initial costs for development, evaluation, and validation until more rigorous documentation can be made. As with any early development, it pays to think everything through first before diving into prototyping (want roughly a 90/10 split between planning and action; same as with software development). This can be aided through the development of virtualization tools, which unfortunately come with their own development, evaluation, and validation costs. To reiterate, two main concerns for effective platform-based design development and adoption are software development and a set of tools that insulate the details of architecture from application software (e.g. virtualization). The more abstract the programmer's model, the richer the set of platform instances, but the more difficult it is to choose the ``optimal'' architecture platform instance and map automatically into it~\cite{Vincentelli2002}. +\item In PBD, the partitioning of the design into hardware and software is not the essence of system design as many think; rather, it is a consequence of decisions taken at a higher level of abstraction~\cite{Vincentelli2007}. For example, a lower level consideration may be heat dissipation, which manifests itself as size constraints at higher levels. The types of decisions that are made at these higher levels lend to the design traits that are requirements at lower levels.
These sorts of complexities in the cost of design are exactly why abstraction/virtualization are required/desired. Additional levels of abstraction (along with virtualization tools for design space exploration) aid in simplifying the design process. The abstraction levels deal with critical decisions that are about the architecture of the system, e.g., processors, buses, hardware accelerators, and memories, that will carry on the computation and communication tasks associated with the overall specification of the design~\cite{Vincentelli2007, Pellizzoni2009}. In certain scenarios these critical decisions can be centered around `safety-critical' elements, as can be seen in embedded systems deployed in the medical market (e.g. pacemakers). The library of functional and communication components is the design space that we are allowed to explore at the appropriate level of abstraction~\cite{Vincentelli2007}. There are elements of recursive behavior that need to be tackled from a virtualized tool standpoint. In a PBD refinement-based design process, platforms should be defined to eliminate large loop iterations for affordable designs~\cite{Vincentelli2007}. This refinement should restrict the design space via new forms of regularity and structure that surrender some design potential for lower cost and first-pass success. \end{itemize} \begin{figure*} @@ -144,7 +144,7 @@ \section{Platford-based design} \end{figure*} \begin{itemize} -\item Due to the recursive nature of platform-based design, establishing the number, location, and components of intermediate ``platforms'' is the essence of PBD~\cite{Vincentelli2007}. In fact, designs with different requirements and specification may use different intermediate platforms, hence different layers of regularity and design-space constraints per scenarios. The tradeoffs involved in the selection of the number and characteristics of platforms relates to the size of the design space to be explored and the accuracy of the estimation of the chracteristics of the solution adopted. Naturally, the larger the step across platforms, the more difficult is predicting performance, optimizing at the higher levels of abstraction, and providing a tight lower bound. In fact, the design space for this approach may actually be smaller than the one obtained with smaller steps because it becomes harder to explore meaningful design alternatives and the restriction on the search space impedes complete design-space exploration. Ultimately, predictions/abstractions may be so inaccurate that design optimizations are misguided and the lower bounds are incorrect~\cite{Vincentelli2007}. On the other hand, having minimally small steps across the design space leads to needless complexity and unnecessary refinement of abstraction levels. The identification of precisely defined layers where the mapping processes takes place is an important design decision and should be agreed upon at the top design management level or during the initial early design phase of the system~\cite{Vincentelli2007}. As can be seen in Figure~\ref{fig:RecursivePBD}, each layer supports a design stage where the performance indexes that characterize the architectural components provide an opaque abstraction of lower layers that allows accurate performance estimations used to guide the mapping process~\cite{Vincentelli2007}. Standardization, or atleast some abstraction rule documentation, is what will greatly lend to minimizing the costs of using platform-based design.
It is here that one clearly sees the recursive nature of this design methodology, and hopefully, also is able to see the advantages of the wide adoptation of platform-based design. +\item Due to the recursive nature of platform-based design, establishing the number, location, and components of intermediate ``platforms'' is the essence of PBD~\cite{Vincentelli2007}. In fact, designs with different requirements and specifications may use different intermediate platforms, hence different layers of regularity and design-space constraints per scenario. The tradeoffs involved in the selection of the number and characteristics of platforms relate to the size of the design space to be explored and the accuracy of the estimation of the characteristics of the solution adopted. Naturally, the larger the step across platforms, the more difficult it becomes to predict performance, optimize at the higher levels of abstraction, and provide a tight lower bound. In fact, the design space for this approach may actually be smaller than the one obtained with smaller steps because it becomes harder to explore meaningful design alternatives and the restriction on the search space impedes complete design-space exploration. Ultimately, predictions/abstractions may be so inaccurate that design optimizations are misguided and the lower bounds are incorrect~\cite{Vincentelli2007}. On the other hand, having minimally small steps across the design space leads to needless complexity and unnecessary refinement of abstraction levels. The identification of precisely defined layers where the mapping process takes place is an important design decision and should be agreed upon at the top design management level or during the initial early design phase of the system~\cite{Vincentelli2007}. As can be seen in Figure~\ref{fig:RecursivePBD}, each layer supports a design stage where the performance indexes that characterize the architectural components provide an opaque abstraction of lower layers that allows accurate performance estimations to be used to guide the mapping process~\cite{Vincentelli2007}; a toy sketch of such layered, opaque performance abstraction appears at the end of this section. Standardization, or at least some abstraction rule documentation, is what will greatly lend to minimizing the costs of using platform-based design. It is here that one clearly sees the recursive nature of this design methodology and, hopefully, the advantages of the wide adoption of platform-based design. \item This approach results in better reuse, because it decouples independent aspects that would otherwise be tied, e.g., a given functional specification to low-level implementation details, or to a specific communication paradigm, or to a scheduling algorithm~\cite{Vincentelli2007}. Coupled with well-developed software virtualization tools, this would allow for better exploration of initial design spaces but does come at the cost of a rigorous, documented standardization. ``It is very important to define only as many aspects as needed at every level of abstraction, in the interest of flexibility and rapid design-space exploration.''~\cite{Vincentelli2007} The issue of platform-based design is not so much an over-engineering of design/development, but rather a need to strike a balance between all aspects of design; much like what already happens in hardware and software design. \end{itemize}
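As a toy illustration of the layered, opaque abstraction described above (the class names and index values are hypothetical assumptions and are not drawn from the cited tools), the following sketch lets a platform expose a single aggregate performance index to the layer above while hiding the lower-layer components that produce it, and lets platforms nest recursively:
\begin{verbatim}
// Toy sketch (hypothetical names/numbers): a "platform layer" exposes one
// aggregate performance index upward while hiding which lower-layer
// components produce it, mirroring the opaque, recursive layering of PBD.
#include <iostream>
#include <memory>
#include <vector>

// Anything that can report a performance index (e.g. estimated latency in ms).
struct Layer {
  virtual ~Layer() = default;
  virtual double performanceIndex() const = 0;
};

// A leaf component with a directly estimated index.
struct Component : Layer {
  double index;
  explicit Component(double i) : index(i) {}
  double performanceIndex() const override { return index; }
};

// A platform built from lower layers; to the layer above it is just another
// Layer, so platforms can nest recursively.
struct Platform : Layer {
  std::vector<std::shared_ptr<Layer>> parts;
  double performanceIndex() const override {
    double sum = 0.0;
    for (const auto& p : parts) sum += p->performanceIndex();  // additive model
    return sum;
  }
};

int main() {
  auto bus = std::make_shared<Component>(0.5);
  auto cpu = std::make_shared<Component>(2.0);
  auto soc = std::make_shared<Platform>();
  soc->parts = {bus, cpu};

  auto full_system = std::make_shared<Platform>();  // a platform of platforms
  full_system->parts = {soc, std::make_shared<Component>(1.0)};
  std::cout << "opaque system index: " << full_system->performanceIndex() << "\n";
  return 0;
}
\end{verbatim}
Any other aggregation rule (worst-case latency, maximum power, and so on) could stand in for the simple additive model used here without changing the layering.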
@@ -152,17 +152,17 @@ \section{Security} \label{Security} \begin{itemize} \item Security is evolving with time and understanding, as is knowledge of encryption and other security encapsulation techniques. There are considerations that are accounted for from a software standpoint: capabilities of the software, speed of the algorithms/actions that take place, and how unique (level of uniqueness) a given solution is. Similarly there are hardware considerations as well: tolerance of the chip elements in use, their capabilities, power distribution over the entire hardware design, signal lag between different elements, and cross-talk caused by communication channels. Different groups have tackled aspects of these considerations. -\item The Trusted Computing Group (TCG) created Trusted Platform Modules (TPM) which are able to validate their own functionality and if they have been tampered with. This is, in essence, a method of `self-analysis'; thus the ground work for a `self-analyzing' security component is already in place. This self checking can be used as a method for allowing the larger system of security components to locate damaged/error-prone componenets so that they can be replaced/fixed thus raising the overall trustworthiness of the system of components. Therefore TPMs are a good stepping stone on the path for better security, but TPM/TCG has no found ``the answer'' yet~\cite{Sadeghi2008}. -\item Use of monitors for distributed system security. Different methods for determining trust of actions (e.g. Byzantine General Problem). These methods are used for determining the most sane/trustworhy machine out of a distributed network so that users know they are interacting with the `freshest' data at hand. While the realm of distributed systems has found good methods/principles to govern their actions/updates of systems. These methods are by no means `final' or perfect but will rather develop with time. +\item The Trusted Computing Group (TCG) created Trusted Platform Modules (TPM) which are able to validate their own functionality and detect whether the TPMs have been tampered with. This is, in essence, a method of `self-analysis'; thus the groundwork for a `self-analyzing' security component is already in place. This self-checking can be used as a method for allowing the larger system of security components to locate damaged/error-prone components so that they can be replaced/fixed, thus raising the overall trustworthiness of the system of components. Therefore TPMs are a good stepping stone on the path to better security, but TPM/TCG has not found ``the answer'' yet~\cite{Sadeghi2008}. +\item Another example of a security/trustworthiness implementation is the use of monitors for distributed system security. Different methods are used for determining the trust of actions (e.g. the Byzantine Generals Problem). These methods are used for determining the most sane/trustworthy machine out of a distributed network so that users know they are interacting with the `freshest' data at hand. While the realm of distributed systems has found good methods/principles to govern the actions/updates of systems, these methods are by no means `final' or perfect but will rather develop with time. As one can see, there are solutions implemented across a series of different HW and SW platforms for tackling different aspects of security. The unifying factor in all of them is determining what is or isn't trustworthy. \item The definition of ``trustworthiness'' that is chosen to help color this perspective on security is as defined in the Benzel et~al.~paper.
\begin{quotation} ``\textit{Trustworthy} (noun): the degree to which the security behavior of the component is demonstrably compliant with its stated functionality (\textit{e.g., trustworthy component}). \\ \textit{Trust}: (verb) the degree to which the user or a component depends on the trustworthiness of another component. For example, component A \textit{trusts} component B, or component B is \textit{trusted} by component A. Trust and trustworthiness are assumed to be measured on the same scale.''~\cite{Benzel2005} \end{quotation} \begin{itemize} \item Dependability is a measure of a system's availability, reliability, and maintainability. Furthermore, Avizienis et~al.~extend this to also encompass mechanisms designed to increase and maintain the dependability of a system~\cite{Avizienis2004}. For the purpose of this paper, the interpretation is that trustworthiness requires inherent dependability; i.e. a user should be able to trust that trustworthy components will dependably perform the desired actions and alert said user should an error/failure occur. - \item Unfortunately the measurement of trust(worthiness) is measured by the same, abstract scale~\cite{Benzel2005}. Thus, in turn, there is a void for the design and development of an understandable and standardized scale of trust(worthiness). Which could then lead to a change of paradigm in the methods by which security policies, mechanisms, and principles are implemented and evolve. - \item ``In order to develop a component-based trustworthy system, the development process must be reuse-oriented, component-oriented, and must integrate formal languages and rigorous methods in all phases of system life-cycle.''~\cite{Mohammed2009} Component-Based Software Engineering (CBSE) is widely adopted in the software industry as the mainstream approach to software engineering~\cite{Mohammed2009, Mohammad2013}. CBSE is a reuse-based approach of defining, implementing and composing loosely coupled independent components into systems and emphasizes the separation of concerns in respect of the wide-ranging functionality available throughout a given software system. Thus it can be seen that the ideals being outlined in this paper are already in use, all that is needed is their re-application to a new design methodology. - \item ``Most prevalent trust models only focus on assessing trustworthiness of systems at runtime, and do not provide necessary predictions of the trust of systems before they are built. This makes imporving trustworthiness of the composed system to be costly and time-consuming. Therefore it is important to consider trust of the composed software system at the development time itself.''~\cite{Gamage} + \item Unfortunately, trust(worthiness) is measured using the same, abstract scale~\cite{Benzel2005}. Thus, in turn, there is a void for the design and development of an understandable and standardized scale of trust(worthiness), which could then lead to a change of paradigm in the methods by which security policies, mechanisms, and principles are implemented and evolve. + \item ``In order to develop a component-based trustworthy system, the development process must be reuse-oriented, component-oriented, and must integrate formal languages and rigorous methods in all phases of system life-cycle.''~\cite{Mohammed2009} Component-Based Software Engineering (CBSE) is widely adopted in the software industry as the mainstream approach to software engineering~\cite{Mohammed2009, Mohammad2013}.
CBSE is a reuse-based approach of defining, implementing, and composing loosely coupled independent components into systems, and it emphasizes the separation of concerns with respect to the wide-ranging functionality available throughout a given software system. Thus it can be seen that the ideals being outlined in this paper are already in use; all that is needed is their re-application to a new design methodology. In a way one can see CBSE as a restricted-platform version of platform-based design. This furthers this paper's point that the required elements for combining PBD and security are already in use for different purposes and simply need to be `re-purposed' for this new security-centric platform-based design. + \item ``Most prevalent trust models only focus on assessing trustworthiness of systems at runtime, and do not provide necessary predictions of the trust of systems before they are built. This makes improving trustworthiness of the composed system to be costly and time-consuming. Therefore it is important to consider trust of the composed software system at the development time itself.''~\cite{Gamage} A toy sketch of estimating the trust of a composed system from component-level trust values at design time follows this list. \end{itemize} \end{itemize}
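To make the idea of design-time trust composition concrete, the following toy sketch (the component names, numeric values, and the common 0.0--1.0 scale are hypothetical assumptions, not taken from the cited trust models) assigns each component a trustworthiness score on a shared scale and estimates the composed system's trust with a simple weakest-link rule, so that low-trust components become visible before anything is built:
\begin{verbatim}
// Toy sketch (hypothetical names and scale): components carry a
// trustworthiness score on a common 0.0-1.0 scale, and a composed system's
// trust is estimated at design time with a weakest-link rule.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Component {
  std::string name;
  double trustworthiness;  // assumed common scale: 0.0 (none) to 1.0 (full)
};

// Weakest-link composition: the system is only as trustworthy as its least
// trustworthy component. Other composition rules could be substituted here.
double composedTrust(const std::vector<Component>& parts) {
  double t = 1.0;
  for (const Component& c : parts) t = std::min(t, c.trustworthiness);
  return t;
}

int main() {
  std::vector<Component> system_parts = {
      {"tpm", 0.95}, {"crypto_accel", 0.90}, {"third_party_driver", 0.60}};
  std::cout << "estimated system trust: " << composedTrust(system_parts) << "\n";
  // Flag components that drag the composed trust below a design-time threshold.
  for (const Component& c : system_parts)
    if (c.trustworthiness < 0.8)
      std::cout << "review: " << c.name << "\n";
  return 0;
}
\end{verbatim}
Other composition rules (for example, weighting components by their exposure to untrusted inputs) could be substituted without changing the overall approach of scoring trust on a standardized scale before the system is built.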