Creating System of Systems Software: A Comparative Study of HLA and SOA

By Robert F. Flanagan

A MASTER OF ENGINEERING REPORT

Submitted to the College of Engineering at Texas Tech University in Partial Fulfillment of the Requirements for the Degree of MASTER OF ENGINEERING

Approved: Dr. Atila Ertas, Dr. T. T. Maxwell, Dr. M. M. Tanik, Dr. Chris Letchford

October 21, 2006

ACKNOWLEDGEMENTS

I would like to express my sincere thanks for the support given to me by the Modeling and Simulation group. Without their support, this paper would not have been possible. I would also like to thank Raytheon and Texas Tech for putting together this master's program. The professors brought in to teach in this program are among the best in their fields. I would like to thank Joanne Wood. She provided her support by recommending me for the program as well as by allowing me to make school a priority with regard to my work schedule. I would like to extend a very special thanks to my wife Erin for her patience with me as I spent my weekends working on homework and this paper. Without her encouragement and support, this paper, and my success in this program, would not have been possible.

TABLE OF CONTENTS

ACKNOWLEDGEMENTS
DISCLAIMER
ABSTRACT
LIST OF FIGURES
LIST OF TABLES
CHAPTER I INTRODUCTION
CHAPTER II BACKGROUND
  Technology Evolution
  System of Systems
CHAPTER III DEFINITION OF SERVICE ORIENTED ARCHITECTURE
  SOA Defined
  SOA Data Model
CHAPTER IV DEFINITION OF HIGH LEVEL ARCHITECTURE
  HLA Defined
  HLA History
  HLA Specification
  Run Time Infrastructure
  Object Model Template
  Interface Specification
  Standard Practice
CHAPTER V COMPARING HLA AND SOA
  Architecture Comparison
  Implementation Differences
  Data Models
    Using XML as a Data Format
    Using the OMT as a Data Model
    Comparing the Use of XML and the OMT
  Infrastructure
    SOA Infrastructure
    HLA Infrastructure
    Compare HLA and SOA Infrastructure
  Reuse and Extensibility
    SOA Reuse and Extensibility
    HLA Reuse and Extensibility
    Compare HLA and SOA Reuse
  Scalability
    SOA Scalability
    HLA Scalability
    Compare HLA and SOA Scalability
CHAPTER VI SUMMARY AND CONCLUSIONS
REFERENCES
APPENDIX A ACRONYM LIST
APPENDIX B DATA MODEL COST
APPENDIX C PRESENTATION OF THE PAPER

DISCLAIMER

The opinions expressed in this report are strictly those of the author and are not necessarily those of Raytheon, Texas Tech University, or any U.S. Government agency.

ABSTRACT

As technology evolves, it is becoming more apparent that engineers can no longer work in a vacuum. Solving large, complex problems requires integrating yesterday's technology so that it can be enhanced and reorganized to solve today's problems. Problems being attacked with software are getting so large that the solutions are evolving into a system of systems. The bulk of the challenge in building a system of systems solution is to have all of the pieces come together as if they were designed that way. The art of this task is known as integration. Unanticipated problems always creep their way into the solution, and the constant struggle to work through them makes integration the most costly and unpredictable part of running a program. Anything that can be done on a program to assist integration and reduce risk has the potential to generate significant savings in the budget. Integrating several products together is a complex task, especially with software. As software aspires to solve larger and more complex problems, the need to expand on existing software is ever present.
Repurposing current technology, rather than reinventing it, allows developers to attack new problems rather than revisiting old ones. Reusing components that were never intended to work together is a challenge that architecture addresses. The purpose of this paper is to compare two architectures: Service Oriented Architecture and High Level Architecture. Both have unique characteristics that make each better equipped to solve different types of problems. This paper will explore some of the strengths and weaknesses of each in an attempt to show that study and thought are required to choose a proper architecture for a specific challenge.

LIST OF FIGURES

Figure 1 - System of Systems Architecture [7]
Figure 2 - COA Layering
Figure 3 - Web Browser/Server Interaction
Figure 4 - HLA Timeline [7]
Figure 5 - Run Time Infrastructure [7]
Figure 6 - Crash
Figure 7 - XSD Address Example
Figure 8 - XML Address Example
Figure 9 - Cost of a Data Model
Figure 10 - HLA Time Management [12]
Figure 11 - DDM Illustration [15]

LIST OF TABLES

Table 1 - RPR FOM Object Class Structure [10]
Table 2 - RPR FOM Base Entity Attributes [10]
Table 3 - RPR FOM Physical Entity Attributes [10]

CHAPTER I INTRODUCTION

Software is facing challenges that are much larger and more complex than ever before. The days of building single-purpose software that solves a specific problem are over. Software must be multipurpose; otherwise, it will be too expensive to build. Software is very expensive to write and maintain and therefore needs to be written to add capabilities as opposed to solving a specific problem. A software module that adds a capability is written generically, enabling it to be used for more than one application. Developing and maintaining generic software that is used for several applications is easier than developing single-purpose software because there are more resources to draw funding from.

Leveraging current technology is required to meet 21st century challenges within available budgets and schedules. The effort taken to obtain and reuse legacy, commercial off the shelf (COTS), or existing software to meet a new challenge is justified by the potential cost savings. Reuse enables a program to skip the coding and unit testing steps for the reused components, allowing integration of those components to start immediately. Integration is the rearranging and combining of legacy and newly developed systems, and/or the modification of components thereof, to perform a new task. Integrating systems in an effort to solve a larger problem is the process of developing a system of systems software solution.

Integration is the most difficult and unpredictable step of executing a project. It is difficult to predict the time and budget needed for this phase of a program because it encompasses many subjective parameters. Typical objective parameters that can be used for an estimate are operating systems, legacy programming languages, current interfaces, and minimum hardware specifications. Subjective parameters include the working relationships of the integrating parties, team communication, the effectiveness of the interface control document, and the overall understanding of the operating environment.

Parameters such as operating systems, number of interfaces, and computational intensity are well understood. Finding out what operating systems are supported for a particular application is relatively straightforward.
If the applications do not map to the same operating system, then the cost and schedule should reflect that. The number of interfaces can be counted, and a complexity factor can be added to the estimate. Computational intensity will give an indication of how many applications can be allocated to each machine. The estimate will then indicate the amount of hardware needed, plus a labor estimate for integrating the applications.

An estimate calculated from the objective parameters alone is only accurate until the sensor group that is integrating the radar model informs the integration team that the model needs to send out a message, once every sixty milliseconds, letting the rest of the simulation know where the radar beam is pointing. There are twenty radar models running, so the message traffic turns out to be at a rate of one message every three milliseconds, or over three hundred messages per second. Then the group integrating the tracker mentions that their software has never been run for more than two hundred seconds, and that the three-hour scenario will be a new experiment for them. The initial integration estimate has just been invalidated.

Some subjective parameters were not taken into account when formulating the estimate. First of all, the applications being integrated are often not well understood because not everything is done in-house. This cannot be helped, because solving large, complex problems requires looking for partial and full solutions outside of the immediate, well-known, well-controlled environment. When another team has a product that suits a requirement, these subjective parameters start to take effect on the time required to do integration. It is implausible to have enough in-depth knowledge of every product that can fulfill a requirement to make an accurate estimate of integration time. There is, however, a strategy that can be employed to make this problem manageable.

Architecture provides a framework so that applications can be built on an infrastructure that promotes integration and reuse. Architecture defines interfaces and design rules, and provides a common framework within which all applications are designed. This infrastructure allows applications to be built as puzzle pieces that are designed to fit together. If an Enterprise Service Bus (ESB) is chosen, a developer with ESB experience will be familiar with the expectations placed on his application. How the application is deployed, used, and accessed is implied just by specifying the architecture. Architectures like these reduce complexity by abstracting away unnecessary coupling, which allows the interface design document to concentrate on the necessary coupling. An ESB handles things like security, message transport, and operating system dependencies so that the developer can focus on the problem.

Architecture promotes reuse. Each component developed to adhere to a software architecture will be reusable by default. Components themselves need not have knowledge of the application they are deployed within; they just need to fulfill their individual purpose. This creates reuse because components can be scavenged from previous projects as well as purchased from commercial vendors. Components written to a well-known architecture can be used in ways that the original development team may never have envisioned. Architecture enables more complex problems to be solved more easily, faster, and cheaper than ever before.
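To make the message-rate arithmetic from the radar example above explicit, here is a minimal back-of-the-envelope sketch (the class and variable names are illustrative, not from the report):

public class RadarMessageRate {
    public static void main(String[] args) {
        int radarModels = 20;           // twenty radar models in the simulation
        double beamUpdatePeriodMs = 60; // each model reports its beam position every 60 ms

        double messagesPerSecond = radarModels * (1000.0 / beamUpdatePeriodMs);
        double msBetweenMessages = beamUpdatePeriodMs / radarModels;

        // Prints: 333.3 messages per second, one message every 3.0 ms
        System.out.printf("%.1f messages per second, one message every %.1f ms%n",
                messagesPerSecond, msBetweenMessages);
    }
}

A sustained rate in this range is exactly the kind of load that per-application objective parameters fail to predict, which is why it invalidates the estimate.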
One would be hard pressed to find an argument against having an architecture. The question now is not whether to use one, but which one to use. The choice is not an easy one. Picking the right architecture takes experience, research, and luck: the chosen architecture must work for the identified problem while being flexible enough for all future scenarios. The term service-oriented architecture (SOA) has become a buzzword in the software industry. Very few people seem to have a handle on what it really is; they just know that they need it. The author hopes to provide background information on what SOA really is and compare it to another, less well-known architecture, high-level architecture (HLA). This paper will explore various strengths and weaknesses of two architecture paradigms, HLA and SOA. The information provided will enable the reader to understand that choosing an architecture is a decision deserving of some study.

CHAPTER II BACKGROUND

2.1 Technology Evolution

When computer science started, programming languages were developed to do one thing, and one thing only: control the behavior of computers [1]. These early languages were machine specific, so if there were a need to run a program on another machine, it would have to be rewritten. The problems that these machines could tackle were very small because reuse was nonexistent. The wheel had to be reinvented for every machine; therefore, the problems being handled by software were trivial. Reuse, or the lack thereof, and the development of various types of machines prompted the development of high-level languages, which were not machine specific [1].

The first and most revolutionary high-level language to emerge was FORTRAN. Developed in the 1950s, FORTRAN offered a way to write code that could be run on many different types of machines. It was the first successful language that took high-level syntax and transformed it into machine code. The development of FORTRAN advanced computer science in two large steps. First, it allowed programmers to code in a human-readable format instead of machine code; second, it made code portable from machine to machine. Software began to evolve because it did not have to be rewritten for each new piece of hardware. This was the beginning of reuse.

FORTRAN marked the start of what is called procedural programming. Procedural programming is a type of programming where everything is executed in order: the first statement in the program is executed, followed by the next, until the end is reached. Early high-level programming languages like FORTRAN I offered a style of programming that is considered unstructured. Unstructured programming occurs when all of the code sits in one contiguous block. This methodology works for very small problems that only require a handful of variables and control constructs. Keeping track of more than just a handful of variables in one control block is very difficult. Trying to read and understand this unstructured code is where the term spaghetti code came from: when the code is printed and lines are drawn in an effort to understand the complex and entangled control structure, the lines begin to look like spaghetti. Writing software in one continuous block makes it difficult to understand and debug, rendering the technique useless for solving larger and more complex problems. In an effort to untangle the spaghetti, the concept of structured programming was developed.
Structured programming attempts to bring scaling to computer programming. Solving a life-size problem requires more than a few hundred lines of code; in fact, it usually requires thousands [13]. The idea is to decompose the problem into smaller pieces that can be solved in subroutines. This relieves the programmer from having to keep the entire problem in his head. In this paradigm, the main routine solves the problem at a higher level of abstraction by invoking the subroutines that do the detailed work for it. This structure allows the code to be understood and debugged on a modular level, opening up the possibility of solving much larger problems.

Structured programming also enabled more than one developer to work on a problem. Subroutines could be given to experts so that they could handle the more detailed work. The subroutine allowed experts to have their own workspace with their own variables and control structures. This work was then glued together by the main function. The developers of FORTRAN I realized the limitations of unstructured coding practices and, a year later in 1958, released FORTRAN II. FORTRAN II was a significant improvement because it added support for subroutines as well as the capability for separate compilation of program modules [18]. FORTRAN was embraced by the scientific community and has grown through the years. Surprisingly, it is still used today; at the time of this writing, the latest FORTRAN standard is Fortran 2003.

Unstructured and structured programming are both subsets of procedural programming. These types of languages focus on a line of execution similar to a thought process; everything is called in order, subroutine by subroutine. As the problems being engaged by software grew, so did the amount of data being handled. To handle the larger amounts of data, the idea of focusing on the data itself, instead of the procedure, was developed. The programming paradigm that grew from this idea is referred to as object-oriented (OO).

OO design and programming is focused on the data that is being handled by the program. Almost every piece of data becomes an object, and every object has a state that is maintained. Information about the state of the object is accessed through the use of methods. For example, a car's state information would be its fuel load, miles driven, and current speed. Objects can be combined to form larger, more complex objects. A model of a road could consist of objects such as stop signs, stoplights, bridges, and cars. To find the average speed on the road, each instance of a car would be asked for its speed; using that data, the average speed can be calculated. OO design is a methodology that enables developers to handle large amounts of data in an organized format, enabling them to better solve a problem.

The ideas of inheritance, polymorphism, and data encapsulation are all part of OO design. The data objects modeled are all instances of a class. The programmer can use these classes and their methods without having to know how the class is implemented. Inheritance allows the programmer to reuse a class by adding functionality to an existing class. This functionality can be added without modifying, or even seeing, the existing code. One could easily inherit the sphere class, add the method bounce, and have a model of a basketball. Polymorphism allows a method with the same name to be defined with different parameters or different behavior in different classes. A brief sketch of these ideas appears below.
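As a minimal sketch of these OO ideas, using illustrative class names suggested by the examples above rather than anything taken from the report:

// State (fuel load, miles driven, current speed) is kept inside the object
// and reached only through methods.
class Car {
    private double fuelLoad;
    private double milesDriven;
    private double currentSpeed;

    Car(double currentSpeed) { this.currentSpeed = currentSpeed; }

    double getCurrentSpeed() { return currentSpeed; }
}

class Sphere {
    protected double radius;
    Sphere(double radius) { this.radius = radius; }
}

// Inheriting Sphere and adding bounce() yields a basketball model without
// modifying, or even seeing, the existing Sphere code.
class Basketball extends Sphere {
    Basketball() { super(0.12); }
    void bounce() { System.out.println("bounce"); }
}

class Road {
    // To find the average speed on the road, ask each car instance for its speed.
    static double averageSpeed(java.util.List<Car> cars) {
        return cars.stream().mapToDouble(Car::getCurrentSpeed).average().orElse(0.0);
    }
}

Here, for example, Road.averageSpeed(List.of(new Car(55), new Car(65))) would return 60.0 without Road knowing anything about how Car is implemented.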
Data encapsulation gives a class the ability to hide information from the user of the object and prevents the user from interfering with the inner workings of the class. All of these traits allow larger problems to be solved, because data objects are isolated in such a way that a development team can work on the problem together and previous work can be reused.

A natural follow-on to OO programming is Component Oriented Architecture (COA). COA provides a framework on which components are deployed. These components are not stand-alone programs; they have to be deployed on an application server before they can be used. The application server specifies how a component is deployed as well as its interface. Most components are written with some sort of initiate, receive-message, and destroy method. These methods are the application server's interface into the component. The application server also hosts what is called business logic: the logic that routes information from one component to another in order to achieve a desired result.

COA extends OO design because of the level of abstraction that classes in OO programming provide. The user of the classes who is writing the main function does not care how the classes are implemented; the user just cares that they provide some given functionality. COA does the same with its components. The business logic that uses the components does not care how the components are implemented; it only cares that they provide some capability. The difference between COA and OO design is that COA requires a standard way for the components to interact with the application server. Components running on one application server can easily be deployed on another. In turn, reuse becomes easier.

With COA and reuse, solutions are built by integration. The first engineering task is to look for partial solutions that already exist. The engineer then purchases these partial solutions, develops the missing pieces, and integrates. The solution comes together very quickly if components are available. Barring exorbitant prices for the components, a fairly large and complex problem can be solved very quickly and inexpensively.

An extension to COA is SOA. Services in SOA are similar to components in COA. One difference is that services are responsible for maintaining their own state. Also, a service does not have to follow any strict interface requirements; it just has to have an interface that is well defined. COA consists of a group of homogeneous components that reside on a single application server.
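To ground the component contract described above, here is a hedged sketch of what an application server's view of a component might look like. The interface and names are hypothetical, following the initiate / receive-message / destroy pattern the text describes:

// The application server drives every component through this contract;
// the component itself knows nothing about the application it serves.
interface Component {
    void initiate();                   // called by the server when the component is deployed
    String receiveMessage(String msg); // the server routes messages in through this method
    void destroy();                    // called when the component is undeployed
}

// "Business logic" lives on the server and routes information from one
// component to the next without caring how any of them is implemented.
class BusinessLogic {
    static String route(String input, Component... pipeline) {
        String result = input;
        for (Component c : pipeline) {
            result = c.receiveMessage(result);
        }
        return result;
    }
}

Because every component honors the same contract, a component written for one such server can, in principle, be redeployed on another, which is the reuse argument made above.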