A Variability-Based Testing Approach for Synthesizing Video Sequences

José A. Galindo, Inria, Rennes, France, jagalindo@inria.fr
Mauricio Alférez, Inria, Rennes, France, mauricio.alferez@inria.fr
Mathieu Acher, University of Rennes 1 and Inria, France, mathieu.acher@inria.fr
Benoit Baudry, Inria, Rennes, France, benoit.baudry@inria.fr
David Benavides, University of Seville, Spain, benavides@us.es

ABSTRACT

A key problem when developing video processing software is the difficulty of testing different input combinations. In this paper, we present VANE, a variability-based testing approach to derive video sequence variants. The ideas of VANE are i) to encode in a variability model what can vary within a video sequence; ii) to exploit the variability model to generate testable configurations; iii) to synthesize variants of video sequences corresponding to configurations. VANE computes T-wise covering sets while optimizing a function over attributes. We also present a preliminary validation of the scalability and practicality of VANE in the context of an industrial project involving the testing of video processing algorithms.

Categories and Subject Descriptors

D.2.5 [Software Engineering]: Testing and Debugging: Testing tools; D.2.m [Software Engineering]: Reusable Software

General Terms

Theory, Algorithms, Performance

Keywords

Variability, Combinatorial testing, Video analysis

1. INTRODUCTION

Video analysis systems are ubiquitous and crucial in modern society [18, 22]. Their applications range from video protection and crisis monitoring to crowd analysis. Video sequences are acquired, processed and analyzed to produce numerical or symbolic information. The resulting information typically raises alerts to human observers in case of interesting situations or events. For instance, a classical scenario in natural disasters is to recognize survivors using airborne cameras, with the intention of acting rapidly on the gleaned information to achieve strategic or tactical rescue goals.

Depending on the goal of the video sequence recognition, signal processing algorithms are assembled in different ways. Also, each algorithm is a complex piece of software, specialized in a specific task (e.g., segmentation and object recognition). Even for a specific job, it is difficult to find a one-size-fits-all algorithm capable of being efficient and accurate in all settings. The engineering of video sequence analysis systems thus requires choosing and configuring the right combination of algorithms [22].

In practice, engineering such systems is an iterative process in which algorithms are combined and tested on diverse inputs (video sequences). Practitioners can eventually determine what algorithms are likely to fail or excel in certain conditions before the actual deployment in realistic settings. Admittedly, practitioners rely on empirical and statistical methods, based on numerous metrics. However, the major barrier to improving analysis algorithms is to find a suitable and comprehensive input set of video sequences for testing them. The current testing practice is rather manual, very costly in time and resources, and offers no qualitative assurance (e.g., test coverage) of the inputs [16, 25].

In a project involving three industrial partners, we observed that collecting videos for testing such algorithms is difficult.
The targeted scenarios should challenge algorithms to process video sequences exhibiting high variability, for example, different kinds of luminosity, altitudes, instability, meteorological conditions, etc. By combining the different variation points of the video sequences, we identified 153,000 possible video sequences (three minutes each), corresponding to 320 days of videos to process and sixty-four years of filming outdoors (two hours per day). These numbers were calculated at the beginning of the project, and the situation is now worse, with billions of possible video sequences. These values show that the current practice, based on a manual elaboration of video sequences, is too expensive in terms of time and resources. Moreover, a related problem is that practitioners do not know which kinds of situations are or are not covered by the set of video sequences.

In this paper, we present VANE, a variability-based testing approach for synthesizing video sequence variants. The key ideas of VANE are to promote the use of a variability model describing what can vary within a video sequence and to exploit the variability model to generate configurations. These configurations are exploited afterwards to synthesize variants of a video sequence. In this research we rely on feature models [4, 5, 21], which are the most popular notation for modeling and reasoning about variability. We use advanced constructs such as attributes for handling numerical parameters and preferences. We apply combinatorial testing [12-14, 17, 19, 20] over feature models with attributes to reduce the number of configurations (combinations of features and attributes). VANE is a hybrid approach mixing constraint satisfaction problem (CSP) solving techniques and evolutionary algorithms. The CSP is used to obtain T-wise covering sets, while the genetic algorithm is used to tackle the multi-objective nature of the problem. A unique property of VANE is that it can obtain the minimal T-wise coverage while optimizing a function over attributes, for example, to minimize a custom attribute such as the video luminance.

Previous research proposed different metrics to optimize test suites for concrete user needs in variability-intensive systems [11, 13, 19, 20]. These approaches allow assigning more importance to some inputs than others when testing. However, they only focused on functional testing of the main system features without considering different testing objectives, including quality attributes. Other evolutionary-based approaches do not consider testing aspects [23].

This paper provides the following contributions:

• An original application of variability and testing techniques to the domain of video sequence analysis, in the context of an industrial project.

• A CSP encoding and automated techniques to guarantee T-wise coverage for feature models with attributes. We also develop multi-objective solving techniques to obtain T-wise configuration sets that satisfy multiple testing criteria at the same time.

• An evaluation of VANE's performance in practical settings. We show the time required, the number of configurations generated, and the benefits of the approach in comparison to current practice.

The remainder of this paper is organized as follows. Section 2 describes further the industrial case study and the problem of testing video sequence analysis systems.
Section 3 presents a variability-based testing approach and describes the VANE solution to obtain T-wise covering sets while optimizing user functions over quality attributes and functional information. Section 4 describes the empirical results we gathered when evaluating VANE's performance on the large-scale, realistic feature model designed by video experts. Section 4.2 discusses the variability-based testing approach in the context of the industrial project as well as threats to validity. Section 5 discusses related work while Section 6 presents concluding remarks.

2. INDUSTRIAL CASE STUDY

The acquisition, processing, and analysis of video sequences find applications in our daily life. Surveillance applications, computer-aided medical imaging systems, traffic control, and mobile robotics are some examples. Such software-intensive systems rely on signals captured by video cameras, which are then processed through a chain of algorithms. Basic signal processing algorithms are assembled in different ways, depending on the goal of image recognition (scene interpretation, following a specific object, etc.). The algorithms produce numerical or symbolic information helpful for humans or subject to subsequent analysis. For example, rectangles covering the zone of a specific object help track people or vehicles in motion.

The MOTIV project aims at evaluating computer vision algorithms such as those used for surveillance or rescue operations. A targeted scenario is usually as follows. First, airborne or land-based cameras capture on-the-fly videos. Then, the processing and analysis of video sequences are performed to detect and track, for example, survivors in a natural disaster. Eventually, and based on this information, the operations manager can achieve strategic or tactical goals in a rapid manner. Two organizations are part of the MOTIV project as well as the DGA (the French governmental organization for defense procurement). The two companies develop and provide numerous algorithms for video analysis. Clearly, there is no one-size-fits-all solution capable of handling the diversity of scenarios and signal qualities. This poses a difficult problem for all the partners of MOTIV: which algorithms are best suited to a particular application? From the consumer side (the DGA), how to choose, select and combine the algorithms? From the provider side (the two companies), how to guarantee that the algorithms meet a large variety of conditions? How to propose innovative solutions able to handle new situations?

The fundamental and common challenge is the testing of algorithms. All the partners, whether providers or consumers, aim to determine what algorithms are likely to fail or excel in certain conditions. Empirical and statistical methods (based on numerous metrics for assessing non-functional dimensions such as performance or reliability) have been developed and are intensively used for this purpose. Nevertheless, practitioners face severe difficulties in obtaining an input test suite (i.e., a set of video sequences) large and diverse enough to test the algorithms. The current practice is indeed to find some existing videos or to film video sequences outdoors. The effort of manually collecting videos is highly consuming in time and resources. First, high costs and complex logistics are required to film video sequences in real locations. Second, the ground truth should be elaborated for every video sequence, which is again time-consuming and also error-prone.
Due to these practical difficulties, the number of collected video sequences is too low and the videos are not different enough to test the algorithms. In addition, practitioners have limited control over the scenarios covered by the set of video sequences. As a result, the major challenge for testing the algorithms remains: How to obtain a suitable and comprehensive input set of video sequences?

3. VARIABILITY-BASED TESTING APPROACH

To overcome the previous limitations, we introduce a generative approach based on variability modeling and testing principles. The goal of the approach is to automatically synthesize a variant of a video sequence given a configuration (i.e., a selection of desired features). Compared to the current practice, the approach aims to provide more automation, more diversification and more control when collecting input video sequences (see also Section 4.2 for a discussion about the benefits in the MOTIV project).

For example, we synthesized four different variants of video sequences (see Figure 1). The first variant (see Figure 1a) is a video sequence with a very intense luminosity and a tank moving on. The second variant (see Figure 1b) differs from the first variant: some birds and other kinds of vehicles are included while the contrast is more intense. Variant #3 (see Figure 1c) introduces shadows to mimic passing clouds. Variant #4 (see Figure 1d) is overexposed; thus, some colors are hardly distinguishable.

Figure 1: Four variants of video sequences: (a) Variant #1, (b) Variant #2, (c) Variant #3, (d) Variant #4.

We only describe static parts of the video sequence variants, but the dynamic parts are impacted as well (e.g., motion of vegetation due to the wind, appearance of occultants, vibrations of the camera, or shadows). Eventually, many more variants than the four depicted in Figure 1 can be synthesized to test computer vision algorithms (e.g., an algorithm in charge of tracking vehicles) in diverse and challenging settings.

As part of the approach, variability modeling is used to formally characterize what can vary in a video sequence and delimit the relevant testable configurations. Because of the huge number of testable configurations, combinatorial testing techniques are applied to obtain the minimal T-wise coverage while optimizing attributes. An overview of the approach (in the context of the MOTIV project) is given in Figure 2. At the starting point (see the top of the figure), a variability model is elaborated and characterizes a set of configurations (see next section for more details about the so-called feature model with attributes). Testing techniques (see ➀) operate over the model and are in charge of producing a relevant subset of video sequences. A transformation (see ➁) has been developed to obtain configuration files that consist of variables and values. Lua code, developed by one of the MOTIV partners, processes the configuration files and executes some algorithms to alter, add, remove or substitute elements in a base video sequence (1). We obtain at the end variants of video sequences (see ➂).

(1) Lua is a widely used programming language (http://www.lua.org/). Details about the computer vision algorithms in charge of synthesizing video sequence variants are out of the scope of the paper.
The following configuration file (an excerpt of the VANE output shown in Figure 2) illustrates the variables and values processed by the Lua code:

    -- targets : vehicle1
    vehicle1.identifier = 2       -- Integer number: 0=disable, 1=AMX30, ...
    vehicle1.motion = 2           -- Floating point number from 0 (static target) to 1 (extremely irregular motion)
    vehicle1.shadowed = 0.2       -- Floating point number from 0 (not shadowed) to 1 (extremely shadowed)
    ...
    vehicle1.distance = 100       -- Distance approximately reconstructed in meters
    -- Distractors
    distractors.far_moving_vegetation = 0.5    -- Floating point number from 0 (low level) to 1 (high level)
    distractors.close_moving_vegetation = 0.5  -- Floating point number from 0 (low level) to 1 (high level)
    distractors.light_reflection = 0           -- Floating point number from 0 (low level) to 1 (high level)
    ...
    distractors.blinking_light = 0             -- Floating point number from 0 (low level) to 1 (high level)
    -- Occulting objects
    occultants.solid_level = 0.2               -- Floating point number from 0 (low level) to 1 (high level)
    occultants.semi_transparent_level = 0.4    -- Floating point number from 0 (low level) to 1 (high level)
    -- Image capture conditions
    camera.vibration = 0          -- Floating point number from 0 (steady camera) to 1 (high vibrations)
    camera.focal_change = 0       -- Floating point number from 0 (steady focal) to 1 (high focal change)
    camera.pan_motion = 1         -- Floating point number from 0 (steady camera) to 1 (irregular high speed pan)
    camera.tilt_motion = 0        -- Floating point number from 0 (steady camera) to 1 (irregular high speed tilt)
    camera.altitude = 1.2         -- In reconstructed meters
    -- Signal quality
    signal_quality.picture_width = 1920
    signal_quality.picture_height = 1080
    signal_quality.luminance_mean = 72.55      -- default = 72.55

Figure 2: Realization of the variability-based testing approach in the MOTIV project. The feature model is exploited to derive configuration sets (optimization, generation); each resulting configuration file (VANE output) is then used to synthesize a video sequence variant.
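To make step ➁ concrete, the sketch below shows how a configuration (selected features and attribute values) could be serialized into such a Lua configuration file. This is a minimal illustration under assumed names, not the MOTIV transformation code; the dictionary layout, helper function and output file name are hypothetical.

    # Minimal sketch of step (2): turning a configuration into a Lua config file.
    # The configuration layout and helper names below are hypothetical, not the
    # actual MOTIV/VANE implementation.

    configuration = {
        "vehicle1": {"identifier": 2, "motion": 2, "shadowed": 0.2, "distance": 100},
        "distractors": {"far_moving_vegetation": 0.5, "close_moving_vegetation": 0.5,
                        "light_reflection": 0, "blinking_light": 0},
        "occultants": {"solid_level": 0.2, "semi_transparent_level": 0.4},
        "camera": {"vibration": 0, "focal_change": 0, "pan_motion": 1,
                   "tilt_motion": 0, "altitude": 1.2},
        "signal_quality": {"picture_width": 1920, "picture_height": 1080,
                           "luminance_mean": 72.55},
    }

    def to_lua(config):
        """Emit one 'section.variable = value' line per attribute, as in Figure 2."""
        lines = []
        for section, settings in config.items():
            lines.append("-- " + section)
            for name, value in settings.items():
                lines.append(f"{section}.{name} = {value}")
        return "\n".join(lines)

    with open("variant_001.lua", "w") as f:   # hypothetical output file name
        f.write(to_lua(configuration))

The Lua code in charge of the actual video synthesis then reads such a file and applies the corresponding alterations to the base video sequence.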
3.1 Variability Modeling

Feature models are the most popular notation for describing the variability of a system. This formalism is commonly used for modeling software product lines [3-5]. See Figure 4 for a sample of a feature model with attributes. Feature models, though, are not limited to product lines and can also be used for modeling configurable systems (in a broad sense). The main advantages of using feature models are their relative simplicity (domain experts, such as video analysis experts, can easily understand the formalism), their formal semantics, and a series of efficient automated analysis techniques [4].

We propose to use feature models to model the variability of a video sequence. Variability in our context means any characteristic of a video sequence that is subject to change (2): the global luminosity of the video sequence, the local luminosity of some parts of the video sequence, the presence of some noise, distractors, or vibrations, etc.

The variability of a video sequence is described in terms of mandatory, optional and exclusive features as well as propositional constraints over the features (see Figure 5). The features are hierarchically organized starting from the high-level concept (video sequence) to more refined and detailed concepts. The essence of a feature model is to characterize a set of valid configurations, where a configuration is defined as a selection of features and attribute values. Propositional constraints and variability information restrict the valid combinations of features authorized in a video sequence. For instance, only one kind of background is possible.

The domain of video analysis has some specificities. First, attributes are used intensively. Attributes are needed to add non-functional information to features. They may be used to express some preferences or to determine the impact of testing a concrete feature on the total testing time. Attributes also help to model numerical parameters of the video sequence. Second, some features can appear several times in the same video sequence. An example of this variability element is a Vehicle, whose number and dust attribute can be configured independently, i.e., there can be 10 vehicles in the video sequence, each having a specific dust value. This increases the number of variables in the problem.
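As a rough illustration of why multi-instance features inflate the problem, the following sketch counts the extra variables introduced when each of up to 10 vehicles gets its own presence variable and attributes. The instance count and attribute names are illustrative assumptions, not the model used in MOTIV.

    # Rough illustration (hypothetical numbers and attribute names): each allowed
    # Vehicle instance contributes one Boolean presence variable plus one variable
    # per attribute, so the encoding grows linearly with the number of instances.
    MAX_VEHICLES = 10
    VEHICLE_ATTRIBUTES = ["dust", "speed", "distance"]   # illustrative attribute names

    variables = []
    for i in range(1, MAX_VEHICLES + 1):
        variables.append(f"vehicle{i}")                                   # is instance i present?
        variables.extend(f"vehicle{i}.{attr}" for attr in VEHICLE_ATTRIBUTES)

    print(len(variables))   # 10 * (1 + 3) = 40 variables just for the vehicles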
With the presence of attributes, a feature model can be naturally further refined using cross-tree constraints over features and attributes. For example, we specify that the selection of a Countryside background implies that fewer than 10 People appear in the scene (see Figure 5).

Using the formalism of feature models with attributes gives video experts more suitable expressiveness than purely Boolean constructs. It also opens new kinds of reasoning over non-functional attributes. In practice, the challenge is now to obtain the configurations that optimize user-defined criteria over attributes. For example, in Figure 5, we encode non-functional properties such as the amount of dust generated in a sequence. In some testing scenarios, the goal could be to minimize (or maximize) the amount of dust: practitioners can define objective functions depending on their needs. Meanwhile, a series of complex constraints involving attributes should be handled. For example, we specify two different constraints: i) to ensure that if there is a high dust value generated by the Vehicle, the background of the scene should be a Countryside one; ii) to ensure that in an urban scenario there will be a crowd of People greater than 40%.

(2) A variation point (change) can be fixed once and for all, or subject to dynamic adaptation at runtime. For instance, we can fix a global luminosity value for the whole video, or we can consider that it varies during the video execution. In our experience, the two kinds of changes are indeed possible. How a variation point is realized and actually implemented is out of the scope of the paper.

3.2 VANE Solution

Despite the constraints over attributes and features, the number of possible configurations is enormous. Exhaustive testing in such a large space of configurations is clearly unfeasible. Past literature proved that most errors can be detected when using pair-wise combinations of inputs [24]. Moreover, Cohen et al. [6] proved that those results apply to feature models. Our approach is to test configurations of video sequences that cover all possible T-feature interactions (T-wise). In theory, T-wise dramatically reduces the number of testable video sequences while ensuring reasonable coverage.

VariAbility testiNg for fEature models (VANE) is a solution to obtain T-wise covering sets for feature models with attributes. VANE follows a set of steps to obtain T-wise covering sets of configurations while meeting different user criteria (e.g., minimizing the cost of a set of configurations). Figure 3 shows the VANE process. First, developers encode the variability-intensive system's variability using an attributed feature model. Second, VANE obtains the valid permutations of features to be covered. Third, VANE encodes the input model as a CSP. Later, VANE adds different constraints to the CSP depending on user requirements. In the case that the user wants to obtain a multi-objective solution, VANE implements an "a priori" solution by using a genetic algorithm. This solution uses the previously generated CSP to find the weights that return Pareto-optimal solutions.

Figure 3: VANE process to obtain optimal T-wise covering sets. The feature model is mapped to a CSP and valid pairs are selected (Section 3.2.1); when more than one user function must be optimized, an evolutionary algorithm computes an a priori solution internally using the CSP solver (Section 3.2.2).
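To illustrate what the "valid permutations of features to be covered" look like for 2-wise (pair-wise) testing, the sketch below enumerates the feature pairs that a covering set would have to exercise jointly, filtering out pairs that can never be selected together. It is a didactic simplification with a hand-written validity check over a few leaf features of the example model discussed next (Figure 4); it is not VANE's CSP-based pair selection.

    from itertools import combinations

    # A few leaf features of the example model (cf. Figure 4).
    features = ["Vehicles", "Humans", "Countryside", "Urban"]

    def compatible(pair):
        """Reject pairs that can never appear together in a valid configuration.
        Here the only such case is the alternative (xor) Background group."""
        return set(pair) != {"Countryside", "Urban"}

    pairs_to_cover = [p for p in combinations(features, 2) if compatible(p)]
    print(pairs_to_cover)
    # [('Vehicles', 'Humans'), ('Vehicles', 'Countryside'), ('Vehicles', 'Urban'),
    #  ('Humans', 'Countryside'), ('Humans', 'Urban')]

Each remaining pair must be jointly selected in at least one configuration of the covering set, which is far fewer test cases than enumerating every valid configuration.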
Figure 4: An exemplified feature model with attributes, comprising features such as Scene, Objects, Vehicles, Humans, and a Background that is either Countryside or Urban (alternative group), with the attributes dustAmount[0..10] and numberOf[0..10] and the cross-tree constraints CountrySide == 1 IMPLIES Vehicle.dust > 5 and Urban IMPLIES Humans.numberOf > 40.

3.2.1 T-wise CSP For Attributed Feature Models

This section describes how VANE uses CSP to derive solutions for T-wise covering arrays. Prior work in the field of automated analysis of feature models has extracted information from feature models using computer-aided mechanisms. Those works yielded a set of different operations and translations into CSP problems [4]. In this paper, we consider the derivation of T-wise covering sets as an automated analysis operation that takes as input attributed feature models and user preferences. Once a CSP formulation for obtaining T-wise configuration sets is defined, VANE can derive all the different valid combinations of configurations that fully cover a set of feature pairs.

CSP encoding of the problem. A CSP is a mathematical problem composed of a set of variables whose values must satisfy a set of constraints. For example, to model the problem of buying a certain number of objects that cost $5 each with a fixed budget of $20, the CSP can look like (A < 20) ∧ (A = B * 5), where A is an integer variable describing the total cost of the purchase, B is a variable representing the number of products to buy, and the $20 budget is a constraint. A CSP solver is a software artifact that retrieves all possible valid labelings of a CSP. A valid labeling is an assignment of values to variables that satisfies all constraints involved in the problem. Moreover, CSP solvers enable the optimization of different functions, such as minimizing the value of an integer variable.

A feature model can be encoded as a CSP as done in previous research [4]. In this mapping, each feature F_i in the feature model feature set F is represented by a Boolean variable f_i in the CSP, and each kind of constraint in the feature model is represented by its logical counterpart. For example, a mandatory relationship between the feature A and the feature B is mapped as A ⇔ B, meaning that if the feature A is selected, then the CSP has to select the feature B.

An additional set of constraints should be added when dealing with attributed feature models. An attribute is defined by a domain, a default value and a null value. Every attribute A_ij of the feature F_i is represented by a variable a_ij, and two different constraints are added: i) A_ij ∈ domain, to guarantee that the attribute value stays within the limits of its own domain; and ii) if (f_i = 1) then (A_ij = defaultValue) else (A_ij = nullValue), to guarantee that the attribute takes its default value if the feature F_i is selected and the null value otherwise.

After translating all features, attributes and constraints to the CSP, more constraints have to be introduced to guarantee the coverage of a concrete set of features. Therefore, if we want to guarantee that F_i is covered by the CSP solutions, we add the constraint F_i = 1 to the solver so that the feature i is selected in all solutions. For example, if we want to cover the pair of features composed of People and Countryside, we add the constraints People = 1 and Countryside = 1.
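For illustration, the brute-force sketch below applies this encoding to a tiny fragment of the Figure 4 model (root Scene, an optional Vehicles feature, an alternative Background group, and a dust attribute). The default and null values are assumptions made for the example, and the exhaustive enumeration stands in for the CSP solver that VANE actually relies on.

    from itertools import product

    # Brute-force sketch of the CSP encoding described above, on a tiny fragment
    # of the Figure 4 model. Default/null values are illustrative assumptions;
    # VANE hands such constraints to a CSP solver instead of enumerating.

    DUST_DOMAIN = list(range(0, 11))     # dustAmount[0..10]
    DUST_DEFAULT, DUST_NULL = 7, 0       # assumed default / null values

    def is_valid(scene, vehicles, countryside, urban, dust):
        return (
            scene == 1                                              # root is always selected
            and vehicles <= scene                                   # optional child needs its parent
            and countryside + urban == scene                        # xor Background group
            and dust in DUST_DOMAIN                                 # (i) attribute within its domain
            and dust == (DUST_DEFAULT if vehicles else DUST_NULL)   # (ii) default/null rule
        )

    # Pair-coverage constraint: force the pair (Vehicles, Countryside) to be selected.
    solutions = [
        (s, v, c, u, d)
        for s, v, c, u in product([0, 1], repeat=4)
        for d in DUST_DOMAIN
        if is_valid(s, v, c, u, d) and v == 1 and c == 1
    ]
    print(solutions)   # [(1, 1, 1, 0, 7)] -- the only labeling covering this pair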
Deriving T-wise covering sets for attributed feature models. VANE can reason over all possible configuration sets that cover a concrete set of T-wise combinations. For example, in the case of the model presented in Figure 4, VANE retrieves the configuration set covering all feature combinations such as Urban and Humans. To cover a set of feature combinations, VANE uses a custom mapping between feature models and CSP. The mapping is defined by the tuple <P, F, FC, A, AC, PC> where:

P is the set of feature combinations to be covered. P_ij represents the feature j of the feature combination i that needs to be covered by a configuration in the test suite.

F is a set of variables representing the features in the feature model. If the variable f_i is equal to 1, then the feature F_i is present in the configuration.

FC is the set of constraints representing the different relationships between the model elements, that is, between different features and between features and attributes (e.g., if the feature i is a mandatory child of the feature j, the constraint f_j ⇔ f_i is in this set).

A is the set of variables representing the different attributes existing in the model.

AC is the set of constraints between different attributes. For example, if the cost should be greater than 40, a constraint representing that condition is added.

PC is the set of constraints granting the coverage of each pair. That is, for each pair P_ij, the constraint F_j = 1 ∧ F_i = 1 is introduced in the CSP.

This mapping differs from previous approaches because it is intended to derive combinations of valid configurations (covering sets) instead of single configurations. Table 1 shows the main differences between the previous mapping for single configuration derivation proposed by Benavides et al. [4] and the one used by VANE. Note that in the table, F_P represents the parent feature of the relation, and F_C1 to F_CX represent the child features of a relation, where X is the nth child of the relation.

T-wise covering sets optimization. There is more than one solution for the problem of finding T-wise covering