A generic load balancing framework for cooperative ITS applications

Stefan Craß, Eva Kühn
Institute of Computer Languages, Vienna University of Technology, Vienna, Austria
Email: {sc, ek}

Sandford Bessler, Thomas Paulin
FTW Telecommunications Research Center, Vienna, Austria
Email: {bessler, paulin}

Abstract—The deployment of cooperative ITS applications is due to start as soon as 2015. Large investments in the roadside unit (RSU) infrastructure will be necessary to create a dense network and accommodate an increasing number of services, leading to a discussion about the trade-off between distributed processing and storage solutions on the RSU nodes and the centralized alternative. A strictly central solution might not be scalable, whereas the decentralized approach faces the problem that load in the form of CPU and memory usage may be unequally distributed among the nodes, causing performance bottlenecks on some of the RSUs. This work presents a solution for this problem, in the form of a generic framework that balances the load between the nodes and in this way reduces the RSU costs. The interactions are based on a flexible coordination pattern for load balancing that is realized using customizable containers provided by a distributed systems middleware. This mechanism is applied to a probe data collection scenario in which individual messages are aggregated by RSU nodes, causing both CPU and memory load. Simulation results illustrate the operation in dynamic load situations.

I. INTRODUCTION

In their Roadmap [1] the Amsterdam Group (representing OEMs and road operators) announces the deployment of Day One cooperative ITS (C-ITS) services already in 2015. The road operators and authorities as well as infrastructure providers and telecom operators in Europe will have to make considerable investments in the roadside infrastructure in order to build the roadside networks.
The ITS G5 (5.9 GHz) communication with the vehicles will enable the realization of a plethora of safety, traffic information and navigation services, mainly based on broadcast V2I and I2V messages. For instance, in the initial deployment of C-ITS in the corridor NL-DE-AT [1], the following V2I/I2V services are considered: Probe Vehicle Data (PVD), signal phase and time of traffic lights, road works warning, and in-vehicle signage.

All these use cases require high reliability as well as high reactivity to enable quick adaptation to changing traffic situations. Therefore, a stable and scalable system architecture must be found that can cope with peak load scenarios. The problem that arises in the strategic design of the roadside network is to estimate the long term requirements of C-ITS services in terms of data traffic, processing and storage capabilities, because only when these key parameters can be estimated can the node resources be correctly dimensioned. The solution approach presented in this paper relaxes the problem by providing a mechanism for load balancing between overloaded and underloaded road side units (RSUs), opening the possibility of efficient and incremental deployment of RSUs with various sizes and capabilities. Stored data and processing tasks can be divided among RSUs in a defined geographic area that depends on the examined service. For designing a suitable load balancing framework for ITS, several requirements have been identified:

• To support cost-efficient RSU hardware, the mechanism must be lightweight. Load monitoring and decision making must not cause significant load themselves.
• Due to the generation of additional network traffic when redistributing data, load balancing should only be triggered when the benefits exceed the costs.
• A middleware-based design is desirable, as it hides the complexity of interactions within the distributed system from service developers.
• For maximal flexibility, a peer-to-peer (P2P) approach with autonomous load redistribution among heterogeneous nodes should be favored.
• In order to meet different requirements on reactivity and availability in various scenarios, different algorithms are needed. A framework approach allows the evaluation and usage of different algorithms that are optimized for specific services and traffic situations.

We have designed a generic load balancing framework that fulfills these requirements. It operates on a network of RSUs that communicate via the XVSM middleware ("eXtensible Virtual Shared Memory") [2], which supports the notion of distributed space containers. The framework architecture is based on the generic SILBA load balancing pattern ("Self-Initiative Load Balancing Agents") [3] that is adapted to the RSU network architecture and the requirements of C-ITS applications. We follow a modular approach with exchangeable policies that define the actual load balancing algorithms, and software agents that are decoupled via space containers.

The load balancing framework is applied in this work to the probe vehicle data collection use case, where vehicles perform certain measurement tasks and forward their results to nearby RSUs. The individual vehicle traces are aggregated and used to evaluate the current traffic and environment situation. Due to varying traffic conditions and the different measurement locations, the data and computational load at the RSUs is not equally distributed. The proposed mechanisms allow the redistribution of load among the involved nodes. The generic architecture of the framework makes it easily adaptable to other C-ITS use cases and multi-service scenarios.

The rest of the paper is organized as follows: Section II gives an overview of the PVD collection use case and shows how different network topologies affect the load balancing requirements. In Section III we describe the XVSM middleware and its integration with the roadside ITS station.
In Section IV the components of the SILBA load balancing framework and their interactions are introduced. The evaluation of the simulated PVD scenarios is presented in Section V. Section VI summarizes the results and benefits of the approach.

II. PVD COLLECTION USE CASE

In the PVD collection use case, each vehicle sends its unique measurement data to roadside units. At the RSU, the PVD messages are aggregated or relayed further to a processing server. The aggregated data is finally analyzed at a traffic control center (TCC) to get detailed information about current conditions on specific road segments. This service can potentially replace expensive fixed traffic and environment sensors. Load balancing has to target processing load caused by data aggregation within the RSU network, and high memory usage caused by the stored measurements.

The PVD service is easy to explain: triggered by a special command message from an RSU, the vehicle starts to collect data internally from some of its sensors, such as speed, location, and road adherence information, each measurement being attached to a time stamp and internally stored. The measurements occur within a specified area of relevance, which may cover the road segment until the next RSU or a more limited area. The data is finally uploaded at an RSU via V2I communication. However, since vehicles may take different paths after the measurements, they may upload their data at different RSUs.

We define a measurement job as a task given to vehicles that specifies an area of relevance and corresponding measurement parameters like sensor types and sampling intervals. Each measurement job is initially issued by the TCC and may be defined for only a limited time span or as a continuous task. A vehicle's data is sent via its on-board unit (OBU) to the next available RSU in a PVD message, which contains several PVD samples that were measured within the area of relevance.

For each job, the TCC receives aggregated data representing statistics (e.g.
average vehicle speed) of all measurements that were performed in a specific, configurable time interval (e.g. every 2 minutes). To provide accurate aggregations, data of each job is aggregated by a single node, which therefore requires access to all available samples for this job, as originally proposed in [4].

Each measurement job is coupled with an associated aggregation job that determines the aggregation algorithm and is assigned to a specific RSU where the aggregation of all corresponding PVD samples should take place. To enable proper routing of PVD messages to the responsible RSU, the measurement job also has to specify the address of this default target node. The RSU then forwards the probe data to the address included in the PVD message.

Fig. 1. PVD collection with fixed aggregation job assignment.

Figure 1 outlines the data flow for this approach. This approach scales better than a strictly centralized solution, where raw data is aggregated directly at the TCC. Active aggregation jobs may be split evenly among available RSUs to initially provide a fair load distribution, but it is not possible to dynamically react to changing conditions as the job assignments are fixed.

Fig. 2. P2P architecture for PVD collection.

Figure 2 shows a modified interaction model based on P2P communication, where aggregation jobs may be redistributed by the RSUs to balance the load among all nodes. Whenever a node is overloaded, it may forward any of its assigned aggregation jobs to another node, together with the currently stored PVD samples for this job, which are necessary for upcoming aggregations. PVD messages that subsequently arrive at the initially assigned node are automatically forwarded to the newly responsible node. This strategy allows a flexible distribution of load whenever an RSU becomes overloaded.

Fig. 3. Hierarchical architecture for PVD collection.
For RSUs with strictly limited hardware resources, it may be more feasible to outsource the aggregation jobs to more powerful nodes termed Super-RSUs (S-RSUs), as depicted in Fig. 3. These nodes conceptually share the architecture of regular RSUs but have higher computing power and need not necessarily interact directly with OBUs. In this architecture, regular RSUs forward PVD messages to the assigned S-RSU. Load balancing occurs in the same way as in the P2P scenario, but only among S-RSU nodes. As a special case of this architecture, all super nodes may be located in a single cluster at the TCC site. This enables flexible load balancing even for centralized architectures with a network that relies on a star topology. By interconnecting the cluster servers via high-speed LAN, the network overhead of load balancing also diminishes.

Depending on the available RSU infrastructure and load characteristics, each of the described approaches may be a feasible solution. Therefore, the framework described in Section IV supports all of these interaction models.

III. SPACE-BASED COORDINATION

The development of C-ITS applications requires effective and efficient coordination among distributed services in an ITS infrastructure. Services may collaborate via middleware that provides a common view of relevant traffic and environmental conditions, e.g. based on a Local Dynamic Map (LDM) [5], which supports complex queries and event notifications. Thus, service developers are provided with a much higher abstraction level and need not cope with low-level interaction protocols between OBUs and RSUs.
Additionally, a middleware-based architecture enables hardware-independent application development for all platforms that are supported by the middleware. However, although the LDM may be used to synchronize local services on a single RSU, the distribution of data among multiple nodes still needs to be done within applications. A suitable middleware for truly cooperative services in a distributed network of roadside units should, however, also provide generic coordination features like remote synchronization, complex routing and replication, so that they need not be re-implemented for each application. Therefore, the space-based middleware XVSM [2] has been incorporated into the software architecture of a roadside infrastructure.

The space-based computing paradigm, which originates from the Linda tuple space model [6], offers high decoupling between participating processes by enabling communication via shared space containers. Autonomous processes may join and leave dynamically as all relevant data is kept in the shared space, which enables easy migration of services and significantly limits the reconfiguration effort when adding, removing, or moving RSUs. Additionally, new applications can be easily integrated with existing ones, as they can simply communicate with each other via the shared space. Space-based middleware provides a suitable platform for traffic-related use cases, as demonstrated by [7] and [8]. Thus, by applying and extending this coordination concept, XVSM allows for a flexible integration and coordination of highly dynamic distributed data in the ITS domain.

In XVSM, data entries are stored in structured sub-spaces called containers, which are addressable via URIs. Thus, applications access containers on their own node and on remote hosts in the same way. Figure 4 shows the basic functionality of these containers. Applications can write entries into the container and retrieve them in a consuming or non-consuming way via take or read queries, respectively.
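A minimal single-process sketch of these container operations may help to illustrate the semantics. This is our own illustration, not the actual XVSM API: write adds an entry, read is non-consuming, take is consuming, and both queries block until an entry arrives or a timeout expires.

```java
import java.util.LinkedList;
import java.util.Queue;

// Illustration of space-container semantics (not the XVSM API):
// write adds an entry, read returns one without removing it (non-consuming),
// take removes it (consuming); read/take block until an entry is available
// or the timeout expires, in which case they return null.
public class ContainerSketch<E> {
    private final Queue<E> entries = new LinkedList<>();

    public synchronized void write(E entry) {
        entries.add(entry);
        notifyAll(); // wake up any blocked read/take calls
    }

    public synchronized E read(long timeoutMs) {
        awaitEntry(timeoutMs);
        return entries.peek(); // entry stays in the container
    }

    public synchronized E take(long timeoutMs) {
        awaitEntry(timeoutMs);
        return entries.poll(); // entry is removed from the container
    }

    private void awaitEntry(long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (entries.isEmpty()) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) return; // timeout: caller sees null
            try {
                wait(remaining);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }
}
```

The real middleware additionally supports configurable coordinators, transactions and remote access via URIs, which this sketch omits.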
These queries may block until a matching entry arrives or a given timeout expires, which allows for a natural way of synchronization among different processes. For each container, specific coordination mechanisms can be set via predefined or custom coordinators, which define how entries are stored and queried. Example coordination mechanisms are access by type, FIFO queues, key-based access or SQL-like queries. Additional space features include asynchronous notifications, transactions, extensibility via aspect-oriented programming, persistent container storage and access control policies for secure container access [9].

Fig. 4. Basic container functionality.

We assume that each RSU hosts XVSM containers that are accessible from other nodes via configurable communication channels, e.g. TCP/IP. The interaction between OBUs and the space is performed via the exchange of ITS G5 messages between the Facility Layer of the ITS Station stack [10] and dedicated input or output containers of the space. Local application services may directly interact with the OBUs via the ITS Station stack or use the space API for decoupled communication with the whole RSU infrastructure (i.e. all local or remote services that are interested in the data).

On top of the standard functionality defined by the platform-independent XVSM protocol, coordination patterns can be built that extend the middleware with additional, domain-specific behavior. In previous work, such patterns were defined for geo-based routing via an overlay network based on a distributed hash table [11] and replication [12]. In this paper, we define a coordination pattern for load balancing that redistributes data and tasks among RSUs in the network based on the generic SILBA pattern. In contrast to other space-based middleware systems with basic load balancing mechanisms (e.g. [7], [13]), the SILBA approach enables the usage of different load balancing strategies.

IV.
SILBA FRAMEWORK FOR C-ITS

The SILBA pattern provides a generic framework for load balancing algorithms that can be adapted to specific scenarios via configurable policies. The framework core, which runs on each SILBA node, processes incoming load entries via exchangeable worker implementations. Workers may have different roles and access various kinds of load entries, which enables the coexistence of different applications that use the SILBA framework on a single node. The framework monitors the current load at regular intervals to check if load balancing is required. A Transfer Policy decides if the node is overloaded, normally loaded or underloaded. If a node is overloaded, a Location Policy determines if there is a suitable (i.e. underloaded) node that can take over the load. If such a node is found, the SILBA framework transfers the excessive load entries to the target node. For communication between different SILBA nodes and for the internal communication of the framework components, XVSM containers are used, which provide convenient features like complex query capabilities, remote communication and transactions.

For usage within C-ITS scenarios, this pattern, originally described in [3], had to be adapted. As the PVD use case requires that load can only be redistributed at the granularity of aggregation jobs (with different load characteristics), an extended SILBA pattern has been designed. An additional Selection Policy enables the framework to select the part of the load that should be redistributed to other SILBA nodes. Furthermore, a forwarding mechanism ensures that PVD messages are always sent to the correct nodes for aggregation, even if the responsible node for a specific job has changed.

The architecture of the extended SILBA framework consists of several independent threads per node that interact via XVSM containers.

Fig. 5. Interaction of SILBA components.
Figure 5 shows the coordination among these components for the PVD use case.

The Client Container is used as an input stage, where remote peers can put messages (as entries) that are destined for the local SILBA node. In the context of PVD collection, these entries are PVD messages or aggregation job assignments (AggJob). The PVD messages may come directly via the wireless ITS G5 interface or from another SILBA node that forwards the data. Aggregation jobs are initially created by the TCC but are also issued by other SILBA nodes when redistributing load.

The Load Space contains all locally stored PVD entries and corresponding meta data. It is structured into Job Containers for each aggregation job the node is currently assigned to, and a Meta Container that contains meta data for each aggregation job that is known to the SILBA node. Each Job Container hosts all PVD samples of a job that are not yet aggregated. The Meta Container contains the target location for each job, i.e. whether it is locally processed (using a specific Job Container) or should be forwarded to another node. If the local node is responsible, further meta data is stored in the container: timing information (e.g. the next aggregation time), information about the current load (e.g. the number of entries in the corresponding Job Container) and a locking entry that ensures exclusive access to all entries associated with the job.

The PVD Client fetches entries from the Client Container and processes them according to their content. When the client encounters an AggJob entry, it initializes the aggregation job by creating a Job Container and adding the corresponding meta data to the Meta Container. For PVD entries, it first checks the Meta Container to determine if the local node is responsible for the specified job. If yes, it writes all PVD samples from the PVD message into the corresponding Job Container. Otherwise, if a forwarding entry is found in the Meta Container, the entries are forwarded to the specified target node.
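The PVD Client's dispatch decision can be sketched as follows. This is a simplified, in-memory illustration with hypothetical names; the framework itself keeps this state in the Meta Container rather than a plain map.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified dispatch logic of the PVD Client: a job is either handled
// locally (samples go into its Job Container), has a forwarding entry
// pointing to the responsible node, or is unknown, in which case the
// message is relayed to the default target node carried in the PVD message.
public class PvdDispatchSketch {
    enum Action { STORE_LOCALLY, FORWARD, FORWARD_TO_DEFAULT }

    // Meta Container stand-in: jobId -> target ("local" means this RSU)
    private final Map<String, String> metaContainer = new HashMap<>();

    void assignJob(String jobId) {
        metaContainer.put(jobId, "local"); // AggJob entry received: process here
    }

    void addForwarding(String jobId, String targetNode) {
        metaContainer.put(jobId, targetNode); // job was moved to another node
    }

    Action dispatch(String jobId) {
        String target = metaContainer.get(jobId);
        if (target == null) return Action.FORWARD_TO_DEFAULT; // job unknown here
        if (target.equals("local")) return Action.STORE_LOCALLY;
        return Action.FORWARD; // forwarding entry found in the Meta Container
    }
}
```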
If the PVD message was sent by another SILBA node (and not directly received from an OBU), this node may also be informed about the forwarding information, which it can include in its own Meta Container to avoid unnecessary forwarding chains for subsequent PVD messages. If the job is not known to the node, it is forwarded to the default target node that is included in the PVD message.

The Worker Threads regularly check the timing information in the Meta Container to determine when they have to start their next aggregation task. A specific job is ready for aggregation whenever a time interval has passed plus some additional time to deal with the delay between the measurements and the arrival of the data at the aggregating RSU. Then, a worker retrieves the location of the Job Container, takes all available PVD samples whose timestamps match the targeted interval and aggregates them to provide a representative result of the measurements in the respective period. The result is written to an Answer Container, which may be located directly at the TCC or at another site, from where it can be queried on demand. Finally, the worker has to update the entry count and the next aggregation time in the Meta Container.

The load balancing process is triggered by the Arbiter component, which retrieves the load size from the Meta Container at regular intervals. According to the configured Transfer Policy, it determines if the node is overloaded (OL), normally loaded (OK), or underloaded (UL). A basic policy may monitor the total number of currently stored PVD samples and decide the node status based on predefined thresholds, while more sophisticated policies could consider the node's actual occupancy rate regarding CPU and memory load as well as predictions on future load development.

If the node is overloaded, the Arbiter issues a routing request via the Allocation Container.
The Routing Agent then processes this request and tries to find suitable partners for load balancing as specified by its Location Policy. It may query the load status of remote SILBA nodes by writing corresponding requests into remote Routing Containers. The remote Routing Agents then respond (via the local Routing Container) if they are available for load balancing. The Routing Agent waits for responses and determines the most suitable neighbor. Alternative policies may store information about the best neighbors locally and only occasionally update this information via requests to remote nodes. While this behavior enables faster load balancing decisions, it is not suitable for highly dynamic scenarios where the load status of nodes changes often.

The routing information is included in the result for the Arbiter, which is written back to the Allocation Container. Then, the Arbiter selects which job(s) should be moved to the determined node based on its Selection Policy. This policy must ensure that the load associated with the selected jobs is significant enough to relieve the local node, but not too high in order to prevent overloading the target node. It should also include mechanisms against oscillations of job assignments among nodes, e.g. via timers that prevent load balancing of a job that has been redistributed recently. It may be possible that no suitable job can be found, e.g. because a single job is responsible for the whole load. In this case, load balancing is not possible and no further action is taken by the Arbiter.

The actual load redistribution is executed by a separate OutArbiter thread that is asynchronously triggered by the Arbiter. This thread retrieves all stored PVD entries for the selected job, initializes the aggregation job at the new target via an AggJob entry and also sends the PVD entries to the new location. Finally, the OutArbiter deletes the local Job Container and replaces the meta data with a forwarding entry for the new destination.
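The OutArbiter's handover step can be sketched as follows, again as a simplified in-memory model with names of our own choosing; in the framework these structures are Job Containers and Meta Container entries accessed through XVSM.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the OutArbiter handover: collect a job's stored samples for
// transmission to the target node, delete the local Job Container and
// replace the local state with a forwarding entry so that later PVD
// messages for this job are relayed to the new destination.
public class OutArbiterSketch {
    final Map<String, List<String>> jobContainers = new HashMap<>(); // jobId -> samples
    final Map<String, String> forwardingEntries = new HashMap<>();   // jobId -> target node

    // Returns the payload that would be sent to the target node
    // (together with an AggJob assignment for the job).
    List<String> handOver(String jobId, String targetNode) {
        List<String> payload = new ArrayList<>(jobContainers.remove(jobId)); // local Job Container deleted
        forwardingEntries.put(jobId, targetNode); // meta data now points to the new destination
        return payload;
    }
}
```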
It may also be possible for underloaded nodes to actively pull load from other nodes using a similar protocol. However, a push-based approach seems more feasible for the PVD use case, as it only triggers load balancing if it is absolutely necessary, i.e. when a node is overloaded and cannot perform its tasks in a satisfactory way, which reduces the network overhead of load redistribution.

V. EVALUATION

For a practical evaluation of the developed SILBA framework, we have implemented a Java-based prototype on top of an open source version of XVSM. Using a simple load balancing strategy and simulated PVD message generation, we show how the load balancing framework and the proposed strategy react in different, dynamic PVD scenarios. The scenarios are defined by various parameters like number of jobs, vehicle density and aggregation intervals. To prove the feasibility of the SILBA approach, we compare the system behavior with and without load balancing and analyze the practical consequences for different system architectures.

A. Basic Load Balancing Strategy

To evaluate the feasibility of the framework, a simple strategy for the load balancing of PVD aggregation jobs has been implemented. Load is measured by counting the total number of entries (i.e. PVD samples) in all Job Containers. The Transfer Policy is initialized with thresholds for the minimum and maximum number of entries, defining its underloaded and overloaded status, respectively. Heterogeneous nodes may be initialized with different thresholds to reflect their capabilities for storing and processing load. To avoid oscillations when the load approaches the maximal value, a tolerance percentage can be specified. While the load status is OK, the tolerance percentage is added to the upper threshold. Only when the load surpasses this increased value does the load status become overloaded (OL). Then, the load has to drop below the unmodified threshold again before the load status is set back to OK. Aggregation jobs are redistributed one by one.
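The hysteresis behavior of this Transfer Policy can be sketched as follows (a minimal illustration under our own naming; the thresholds and the tolerance percentage are per-node configuration values):

```java
// Sketch of the threshold-based Transfer Policy with hysteresis: while the
// status is OK, the upper threshold is raised by a tolerance percentage;
// once overloaded, the load must fall below the unmodified upper threshold
// before the status returns to OK.
public class TransferPolicySketch {
    enum Status { UL, OK, OL }

    private final int minEntries;
    private final int maxEntries;
    private final double tolerancePct;
    private Status status = Status.OK;

    TransferPolicySketch(int minEntries, int maxEntries, double tolerancePct) {
        this.minEntries = minEntries;
        this.maxEntries = maxEntries;
        this.tolerancePct = tolerancePct;
    }

    Status update(int entryCount) {
        if (entryCount < minEntries) {
            status = Status.UL;
        } else if (status == Status.OL) {
            // stay overloaded until load drops below the unmodified threshold
            if (entryCount < maxEntries) status = Status.OK;
        } else {
            // tolerance is only added while the node is not yet overloaded
            double effectiveMax = maxEntries * (1.0 + tolerancePct / 100.0);
            status = (entryCount > effectiveMax) ? Status.OL : Status.OK;
        }
        return status;
    }
}
```

For example, with thresholds 10/100 and a 10% tolerance, a node only becomes OL above 110 entries, and then stays OL until the count falls back below 100.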
If the redistribution of a single job is not sufficient, the process is simply repeated until the node is no longer overloaded. For the Selection Policy, the general strategy is to choose the job that currently contributes the biggest share of load on the local node. However, if this job is responsible for more than 50% of the total load, the second biggest job is selected instead. This prevents distribution of jobs that are too big and thus would immediately overload the nodes to which they are moved. The Location Policy is based on a configured list of known neighbors for each node. Whenever load balancing is triggered, the Routing Agent requests the load status of these nodes, waits until all nodes have responded or a configurable timeout expires, and then chooses an arbitrary node with load status underloaded (UL). By configuring feasible load balancing partners via the neighbor list, the load balancing can be adapted to the network topology. This prevents situations where nodes with suboptimal communication links have to exchange data.

Although this lightweight strategy does not guarantee that the load is distributed in an optimal way among the SILBA nodes, it ensures that overloaded nodes are eventually able to offload aggregation jobs, provided that underloaded nodes exist. More complex mechanisms can be easily integrated in the framework by adapting the policies.

B. Test Setup

Before each test run, four SILBA nodes are started on a single machine. The test program simulates the concurrent exe-