
Tutorial

Antti Kervinen edited this page Apr 3, 2014 · 11 revisions

Tutorial: testing audio policy

A driver is listening to music in her car when the incoming call alarm rings. Before she reacts to the alarm, the navigator reminds her to take the next turn to the right.

Audio policy takes care of routing all sounds to the correct audio outputs. For instance, the driver should hear the alarm and the navigator message both with and without a bluetooth headset, but it may be undesirable for these sounds to interfere with sounds from games or videos being played on the back seat. Music, phone, games, videos and the navigator can all be functions of the same IVI (in-vehicle infotainment) system in a car.

As an example of this kind of system, Tizen IVI runs the Murphy audio policy daemon.

From a software testing perspective, the number of different audio routing scenarios is large, and there are many different paths that lead to the same scenario. For instance, the scenario "music is playing when the call alarm rings" can be tested by starting to play music and then calling in. But it can be tested further by letting the navigator add a third sound for a while and, when it has finished, checking that the music and the call alarm are again routed correctly.

It would take a great effort to design a test suite with sufficient coverage of all the scenarios and interesting paths. In this tutorial we will take an alternative approach: let fMBT automatically generate and run the tests.

Modeling

Instead of trying to design test cases, we only define test steps and when they can be tested. This combination of test steps and their preconditions is called a model. Model-based testing tools, like fMBT, generate tests automatically based on models.
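The core idea can be sketched in plain Python. This is a hypothetical illustration of model-based test generation, not fMBT's actual engine: each step has a guard that tells when it can be tested and a body that updates the model state, and the generator repeatedly picks one of the currently enabled steps.

```python
import random

# Hypothetical sketch: a "model" is a set of steps, each with a
# guard (precondition) and a body (state update).
state = {"music": False}

steps = {
    "play music": {
        "guard": lambda: not state["music"],
        "body":  lambda: state.update(music=True),
    },
    "stop music": {
        "guard": lambda: state["music"],
        "body":  lambda: state.update(music=False),
    },
}

def generate_test(length):
    """Random walk: in every state, test one of the enabled steps."""
    test = []
    for _ in range(length):
        enabled = [name for name, step in steps.items() if step["guard"]()]
        chosen = random.choice(enabled)
        steps[chosen]["body"]()
        test.append(chosen)
    return test

print(generate_test(6))
```

With only one sound in the model the walk is fully determined by the guards, so the generated sequence alternates between playing and stopping; richer models give the generator real choices.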

fmbt-editor

A handy tool for modeling is fmbt-editor. It can be used for

  • editing models
  • visualising models as state spaces
  • editing test generation parameters
  • generating different tests and visualising what they cover.

You can also use your favourite editor to edit the model and test configuration files, and use fmbt-editor --preview-only to visualise them.

Let's start with a minimal model:

aal "audiopolicy" {
    language "python" {}
    initial_state {}
}

This only states that we use AAL/Python as the modeling language. This model has only one state. You can visualise it by launching fmbt-editor audiopolicy.aal, replacing the editor contents with the code above, and pressing Ctrl+S to save the file. Saving triggers an update of the visualisation.

The visualised model gets more interesting when we add the first test steps.

First steps

Let's add the first test steps: starting and stopping music playback in the system. The design is that test step "play music" will start music playback, and the music will play until test step "stop music" is executed.

We'll add a playing variable to track which sounds are being played, and use it in the guards (preconditions) of "play music" and "stop music". For instance, the guard of "play music" ensures that "play music" can be tested only if music is not already playing.

With the variable and the two test steps on board, AAL/Python code looks like this:

aal "audiopolicy" {
    language "python" {}
    variables { playing }
    initial_state {
        playing = set()
    }
    input "play music" {
        guard   { return "music" not in playing }
        body    { playing.add("music") }
    }
    input "stop music" {
        guard   { return "music" in playing }
        body    { playing.remove("music") }
    }
}

The body blocks of test steps "play music" and "stop music" change the conditions after the step has been tested successfully. This code does not yet tell how "play music" and "stop music" actually interact with the system under test. That code will be written later in the adapter blocks of these steps.

The code inside initial_state, guard, body and adapter blocks is pure Python. Variables defined in the variables block are declared global in each of these blocks.

Next steps

If we added new sound types using the same pattern, testing the alarm would look like this:

aal "audiopolicy" {
    language "python" {}
    variables { playing }
    initial_state {
        playing = set()
    }
    input "play music" {
        guard { return "music" not in playing }
        body  { playing.add("music") }
    }
    input "stop music" {
        guard { return "music" in playing }
        body  { playing.remove("music") }
    }
    input "play alarm" {
        guard { return "alarm" not in playing }
        body  { playing.add("alarm") }
    }
    input "stop alarm" {
        guard { return "alarm" in playing }
        body  { playing.remove("alarm") }
    }
}

However, adding "phone", "navigator", "game" and "video" in this way would result in many almost identical test steps. We can avoid repeating code by defining several test steps with common guard and body blocks. Inside the blocks, the input_name variable contains the name of the input.

aal "audiopolicy" {
    language "python" {}
    variables { playing }
    initial_state {
        playing = set()
    }

    input "play music", "play alarm", "play phone" {
        guard {
            play_what = input_name.split()[1]
            return play_what not in playing
        }
        body  {
            play_what = input_name.split()[1]
            playing.add(play_what)
        }
    }

    input "stop music", "stop alarm", "stop phone" {
        guard {
            stop_what = input_name.split()[1]
            return stop_what in playing
        }
        body  {
            stop_what = input_name.split()[1]
            playing.remove(stop_what)
        }
    }
}

In the code above, play_what and stop_what are local variables that contain the value "music", "alarm", or "phone", depending on which test step the block is executed for.

Show variable values

The visualised model with "music", "alarm" and "phone" has eight states, which already makes it hard to follow. And the number of states grows quickly: for instance, adding test steps for connecting and disconnecting a bluetooth headset (see below) doubles the number of states.
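The growth is easy to check: three sounds that are each either playing or not give 2 x 2 x 2 = 8 states, and a two-valued headset variable doubles that to 16. A quick enumeration mirroring the model's variables:

```python
from itertools import product

sounds = ["music", "alarm", "phone"]

# Every subset of sounds may be playing; headset is connected or not.
states = [
    (frozenset(s for s, on in zip(sounds, bits) if on), headset)
    for bits in product([False, True], repeat=len(sounds))
    for headset in ("disconnected", "connected")
]

print(len(states))  # 16: the headset variable doubled the 8 sound states
```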

Printing variable values on states helps in understanding the state space. The fmbt-editor visualisation can be controlled with directives, like # preview-show-vars: .... For instance, the directive below prints the values of both playing and headset on every state.

# preview-show-vars: playing, headset
aal "audiopolicy" {
    language "python" {}
    variables { playing, headset }
    initial_state {
        playing = set()
        headset = "disconnected"
    }

    input "play music", "play alarm", "play phone" {
        guard {
            play_what = input_name.split()[1]
            return play_what not in playing
        }
        body  {
            play_what = input_name.split()[1]
            playing.add(play_what)
        }
    }

    input "stop music", "stop alarm", "stop phone" {
        guard {
            stop_what = input_name.split()[1]
            return stop_what in playing
        }
        body  {
            stop_what = input_name.split()[1]
            playing.remove(stop_what)
        }
    }

    input "connect headset" {
        guard { return headset == "disconnected" }
        body  { headset = "connected" }
    }

    input "disconnect headset" {
        guard { return headset == "connected" }
        body  { headset = "disconnected" }
    }
}

Looking at the model from different perspectives

In addition to printing the values of variables on states, fmbt-editor visualisation directives let you look at the model from different perspectives.

For instance, try replacing the first line of the previous example with the directive

# preview-show-vars: headset

The visualisation will merge all states where the value of the headset variable is the same, resulting in a two-state visualisation of the 16-state state space. Alternatively, you can view the state space from the perspective of which sounds are played simultaneously with the directive

# preview-show-vars: playing

Other interesting views include showing the part of the state space that has been covered by an automatically generated test:

# preview-show-vars: playing, headset
# preview-hide-states: unvisited

Or inspect in detail where "i:play music" could have been tested and where it actually has been tested (the transition is colored green) by the generated test:

# preview-show-vars: playing, headset
# preview-show-trans: i:play music

Finally, sometimes the model has too many states for visualisation. Then the preview-depth directive may save you from a huge state space and the slowness of laying it out and painting it. As an example, let's limit the visualisation to the depth of one test step from the initial state:

# preview-show-vars: playing, headset
# preview-depth: 1

Even though the visualisation may not present all states of the state space, tests are always generated from the full state space.

Tags

Tags are handy for

  • labeling states: to label a set of states, define a guard that returns True in those states.
  • grouping test steps: inputs defined inside a tag block can be tested only in states where both the guard of the tag and the guard of the input return True. This avoids repeating the same conditions in the guards of many inputs.
  • verifying that the system under test is in the expected state: these checks are written in the adapter block of a tag. The block is executed whenever the test generator enters a state labelled with the tag.

As an example of labeling and grouping, we will make the model configurable with an environment variable. AUDIOPOLICY_MAXSOUNDS sets the upper limit for how many simultaneously playing sounds can be tested. If the environment variable has not been defined, the default value "1" is used.

In the code below, states where fewer than the maximum number of sounds are playing are tagged "can play more". As we group all "play ..." test steps inside the tag, starting to play new sounds is allowed only in those states.

# preview-show-vars: playing
aal "audiopolicy" {
    language "python" { import os }
    variables { playing, headset, max_sounds }
    initial_state {
        playing = set()
        headset = "disconnected"
        max_sounds = int(os.getenv("AUDIOPOLICY_MAXSOUNDS", "1"))
    }

    tag "can play more" {
        guard { return len(playing) < max_sounds }

        input "play music", "play alarm", "play phone" {
            guard {
                play_what = input_name.split()[1]
                return play_what not in playing
            }
            body  {
                play_what = input_name.split()[1]
                playing.add(play_what)
            }
        }
    }

    input "stop music", "stop alarm", "stop phone" {
        guard {
            stop_what = input_name.split()[1]
            return stop_what in playing
        }
        body  {
            stop_what = input_name.split()[1]
            playing.remove(stop_what)
        }
    }

    input "connect headset" {
        guard { return headset == "disconnected" }
        body  { headset = "connected" }
    }

    input "disconnect headset" {
        guard { return headset == "connected" }
        body  { headset = "disconnected" }
    }
}
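The tag's effect can be tried out in plain Python. This is a hypothetical sketch (not run by fMBT itself): AUDIOPOLICY_MAXSOUNDS caps the size of the playing set, and the "can play more" guard gates every "play ..." step in addition to the step's own guard.

```python
import os

# Force a value for this demo; normally the user would export it.
os.environ["AUDIOPOLICY_MAXSOUNDS"] = "2"
max_sounds = int(os.getenv("AUDIOPOLICY_MAXSOUNDS", "1"))
playing = set()

def can_play_more():
    # corresponds to the guard of tag "can play more"
    return len(playing) < max_sounds

def can_play(sound):
    # a "play ..." step is enabled only if the tag's guard holds too
    return can_play_more() and sound not in playing

playing.add("music")
print(can_play("alarm"))  # True: only one of two allowed sounds is playing
playing.add("alarm")
print(can_play("phone"))  # False: the limit of two sounds is reached
```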