Tutorial: testing audio policy

A driver is listening to music in her car when an incoming call alarm rings. Before she reacts to the alarm, the navigator reminds her to take the next turn to the right.

Audio policy takes care of routing all sounds to the correct audio outputs. For instance, the driver should hear the alarm and the navigator message both with and without a bluetooth headset, but it may be undesirable for these sounds to interfere with sounds from games or videos being played on the back seat. Music, phone, games, videos and navigator can all be functions of the same IVI (in-vehicle infotainment) system in a car.

As an example of this kind of system, Tizen IVI runs the Murphy audio policy daemon.

From a software testing perspective, the number of different audio routing scenarios is large, and there are many different paths that lead to the same scenario. For instance, the scenario "music is playing when the call alarm rings" can be tested by starting to play music and then calling in. But it can be tested further by letting the navigator add a third sound for a while and, when it has finished, checking that the music and the call alarm are again routed correctly.

It would take a great deal of effort to design a test suite with sufficient coverage of all the scenarios and interesting paths. In this tutorial we will take an alternative approach: let fMBT automatically generate and run the test suite.

Before starting

There are many code snippets but no screenshots in this tutorial. The idea is that you can copy-paste the code into your fmbt-editor, save, and see the visualisations and automatically generated tests for yourself.

This tutorial has three sections. In the first section, Modeling, we will show how to define test steps with fmbt-editor. You can see the first automatically generated tests already after the first two code snippets of this section.

In the second section, Implementing test steps, we will show how to make automatically generated tests executable.

In the third section, Generating different tests, we will show how to generate tests for different purposes from the same set of test steps.

Modeling

Instead of trying to design test cases, we only define test steps and when they can be tested. This combination, that is, test steps and their preconditions, is called a model. Model-based testing tools, like fMBT, generate tests automatically based on models.

fmbt-editor

A handy tool for modeling is fmbt-editor. It can be used for

  • editing models
  • visualising models as state spaces
  • editing test generation parameters
  • generating different tests and visualising what they cover.

You can also use your favourite editor to edit the model and test configuration files, and use fmbt-editor --preview-only to visualise them. An AAL mode is provided for Emacs users.
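For instance, assuming the audiopolicy.aal model file that will be created in the next subsection:

fmbt-editor --preview-only audiopolicy.aal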

Let's start with a minimal model:

aal "audiopolicy" {
    language "python" {}
    initial_state {}
}

This only states that we will use AAL/Python as the modeling language. This model has only one state. You can visualise the model by launching fmbt-editor audiopolicy.aal, replacing the editor contents with the code above, and pressing Ctrl+S to save the file. Saving triggers an update of the visualisation.

The visualised model gets more interesting when we add the first test steps.

First steps

Let's add the first test steps for starting and stopping music playback in the system. The design is that test step "play music" will start music playback, and the music will be played until test step "stop music" is executed.

We add a playing variable to track which sounds are being played. We read the variable in the guard blocks (preconditions) of "play music" and "stop music". For instance, the guard of "play music" makes sure that "play music" can be tested only if music is not already playing.

With the variable and the two test steps on board, the AAL/Python code looks like this:

aal "audiopolicy" {
    language "python" {}
    variables { playing }
    initial_state {
        playing = set()
    }
    input "play music" {
        guard   { return not "music" in playing }
        body    { playing.add("music") }
    }
    input "stop music" {
        guard   { return "music" in playing }
        body    { playing.remove("music") }
    }
}

The body blocks of test steps "play music" and "stop music" update the value of playing. A body block is executed after the step has been tested successfully. This code does not specify how "play music" and "stop music" actually interact with the system under test when tested. That code will be written in adapter blocks later.

The code inside initial_state, guard, body and adapter blocks is pure Python. Variables defined in the variables block are declared global in each of these blocks.

Next steps

If we added new sound types using the same pattern, testing the alarm would look like this:

aal "audiopolicy" {
    language "python" {}
    variables { playing }
    initial_state {
        playing = set()
    }
    input "play music" {
        guard { return not "music" in playing }
        body  { playing.add("music") }
    }
    input "stop music" {
        guard { return "music" in playing }
        body  { playing.remove("music") }
    }
    input "play alarm" {
        guard { return not "alarm" in playing }
        body  { playing.add("alarm") }
    }
    input "stop alarm" {
        guard { return "alarm" in playing }
        body  { playing.remove("alarm") }
    }
}

However, adding "phone", "navigator", "game" and "video" in this way would result in many almost identical test steps. We can avoid repeating code by defining several test steps with common guard and body blocks. Inside the blocks, the input_name variable contains the name of the test step.

aal "audiopolicy" {
    language "python" {}
    variables { playing }
    initial_state {
        playing = set()
    }

    input "play music", "play alarm", "play phone" {
        guard {
            play_what = input_name.split()[1]
            return not play_what in playing
        }
        body  {
            play_what = input_name.split()[1]
            playing.add(play_what)
        }
    }

    input "stop music", "stop alarm", "stop phone" {
        guard {
            stop_what = input_name.split()[1]
            return stop_what in playing
        }
        body  {
            stop_what = input_name.split()[1]
            playing.remove(stop_what)
        }
    }
}

In the code above, play_what and stop_what are local variables, as they are not listed in the variables block. These variables contain the value "music", "alarm" or "phone", depending on which test step the block is executed for.

That's it: fMBT already generates tests, as you can see in fmbt-editor's Test tab (F6). In the remaining part of this section we will:

  • Extend the model by adding bluetooth headset support.
  • Go deeper into different visualisation options that help in understanding and using the Model tab (F5).
  • Make the model configurable for different tests and environments.

On the other hand, if you are more eager to move on, this is a good place to jump to Implementing test steps.

Visualisation

The visualised model with the "music", "alarm" and "phone" sounds has eight states, which already makes it somewhat hard to follow (see the code above). The number of states grows quickly. For instance, adding test steps for connecting and disconnecting a bluetooth headset (see the code below) doubles the number of states to 16.

Printing variable values on the states helps in understanding the state space. The fmbt-editor visualisation can be controlled with directives, such as # preview-show-vars: .... For instance, the directive below prints the values of both the playing and headset variables on every state.

# preview-show-vars: playing, headset
aal "audiopolicy" {
    language "python" {}
    variables { playing, headset }
    initial_state {
        playing = set()
        headset = "disconnected"
    }

    input "play music", "play alarm", "play phone" {
        guard {
            play_what = input_name.split()[1]
            return not play_what in playing
        }
        body  {
            play_what = input_name.split()[1]
            playing.add(play_what)
        }
    }

    input "stop music", "stop alarm", "stop phone" {
        guard {
            stop_what = input_name.split()[1]
            return stop_what in playing
        }
        body  {
            stop_what = input_name.split()[1]
            playing.remove(stop_what)
        }
    }

    input "connect headset" {
        guard { return headset == "disconnected" }
        body  { headset = "connected" }
    }

    input "disconnect headset" {
        guard { return headset == "connected" }
        body  { headset = "disconnected" }
    }
}

fmbt-editor lets you zoom the visualised model in and out with the mouse (Ctrl+wheel) and with the keyboard (F5, then Ctrl++ and Ctrl--).

Visualising the AAL/Python code above shows that:

  • States are unique combinations of values of variables.
  • Test steps are transitions between states. If the source and the destination states are the same, the name of the test step is written inside the state in the visualisation.
  • Test execution starts from the state tagged [initial state] on the top.
  • States and transitions that the generated test visits are colored green.

Viewing the model from different perspectives

In addition to printing the values of variables on states, fmbt-editor visualisation directives help in viewing the model from different perspectives.

For instance, try replacing the first line of the previous example with the directive

# preview-show-vars: headset

The visualisation will merge all states where the value of the headset variable is the same, resulting in a two-state visualisation of the 16-state state space. Alternatively, you can view the state space from the perspective of which sounds are played simultaneously with the directive

# preview-show-vars: playing

Other interesting views include showing only the part of the state space that has been covered by the automatically generated test:

# preview-show-vars: playing, headset
# preview-hide-states: unvisited

Or you can inspect in detail where "i:play music" could have been tested and where it actually has been tested (the transition is colored green) by the generated test:

# preview-show-vars: playing, headset
# preview-show-trans: i:play music

Finally, sometimes the model may have too many states for almost any visualisation. Then the preview-depth directive may save you from a huge state space and the slowness of laying it out and painting it. As an example, let's limit the visualisation to a depth of one test step from the initial state:

# preview-show-vars: playing, headset
# preview-depth: 1

Even though the visualisation does not present all states of the state space, tests are always generated from the full state space.

Tags

Tags are named conditions that are either True or False on every state. They are handy for many purposes.

  • Tags label states. To label a set of states, define a guard that returns True in those states.
  • Tags group test steps. Test steps defined inside a tag block can be tested only in states where the guard of the tag and the guard of the input block return True. This avoids repeating the same conditions in guards of many inputs.
  • Tags help in verifying that the state of the system under test corresponds to the tagged states. These checks are written in the adapter blocks of tags. The block is executed whenever test execution enters a state with the tag.

An example of using tags for verification will be given in Implementing test steps. As an example of labeling and grouping, we will make the above model configurable with an environment variable. AUDIOPOLICY_MAXSOUNDS sets the upper limit for how many sounds are played simultaneously in generated tests. If the environment variable is not defined, the default is "3".

In the code below, states where fewer than the maximum number of sounds are being played are tagged "can play more". Because we group all "play music/alarm/phone" steps inside the tag, starting to play new sounds is allowed only in "can play more" states.

# preview-show-vars: playing, headset
aal "audiopolicy" {
    language "python" { import os }
    variables { playing, headset, max_sounds }
    initial_state {
        playing = set()
        headset = "disconnected"
        max_sounds = int(os.getenv("AUDIOPOLICY_MAXSOUNDS", "3"))
    }

    tag "can play more" {
        guard { return len(playing) < max_sounds }

        input "play music", "play alarm", "play phone" {
            guard {
                play_what = input_name.split()[1]
                return not play_what in playing
            }
            body  {
                play_what = input_name.split()[1]
                playing.add(play_what)
            }
        }
    }

    input "stop music", "stop alarm", "stop phone" {
        guard {
            stop_what = input_name.split()[1]
            return stop_what in playing
        }
        body  {
            stop_what = input_name.split()[1]
            playing.remove(stop_what)
        }
    }

    input "connect headset" {
        guard { return headset == "disconnected" }
        body  { headset = "connected" }
    }

    input "disconnect headset" {
        guard { return headset == "connected" }
        body  { headset = "disconnected" }
    }
}

You can see the effect by quitting fmbt-editor (Ctrl+Q), setting the environment variable, and launching the editor again:

export AUDIOPOLICY_MAXSOUNDS=1
fmbt-editor audiopolicy.aal

Configuring and splitting test models

We already demonstrated using an environment variable (AUDIOPOLICY_MAXSOUNDS) to configure test models. Here we will introduce two more ways: preprocessor directives, which also allow splitting test steps into separate files, and user-defined Python code. Both are controlled with remote_pyaal parameters in test configuration files.

Before test generation, the AAL/Python runner remote_pyaal compiles AAL/Python into pure Python. The compiler underneath, fmbt-aalc, uses a preprocessor that handles the directives

^ifdef "name"
^include "filename"
^endif

The preprocessor replaces ^include "filename" with the contents of the file. This enables splitting test steps into several files.
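For instance, a sketch of how the headset test steps could be moved into their own file (the file name headset-steps.aal is just an example):

^include "headset-steps.aal"

Here headset-steps.aal would contain the "connect headset" and "disconnect headset" input blocks exactly as they appear in the model above, and the ^include line replaces them in audiopolicy.aal.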

The preprocessor removes lines from ^ifdef "name" to ^endif if name has not been defined. This can be used for enabling and disabling testing of certain test steps.

For example, let's change the above model so that tests can be generated and run in test environments with and without bluetooth headsets. We do this by adding ^ifdef "bt" and ^endif around the last two test steps in the model:

^ifdef "bt"
    input "connect headset" {
        guard { return headset == "disconnected" }
        body  { headset = "connected" }
    }

    input "disconnect headset" {
        guard { return headset == "connected" }
        body  { headset = "disconnected" }
    }
^endif

Now we can toggle testing the headset in test configuration files. You can try it as follows:

  • Open the test configuration tab in fmbt-editor (press F2).
  • Add -D bt right after remote_pyaal and save the configuration (Ctrl+S). This triggers an update of the visualisation and the generated test (see the example below).
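After the change, the model line of the configuration would look roughly like this (aal.log and audiopolicy.aal match the configurations shown later in this tutorial):

model = aal_remote(remote_pyaal -D bt -l aal.log audiopolicy.aal)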

In addition to preprocessor directives, remote_pyaal's parameter -c <code> defines Python code to be executed before loading the model. This can be used, for instance, to define new values for AAL/Python variables. For example, let's allow the test configuration file to override the initial status of the bluetooth headset:

initial_state {
    playing = set()
    if not "headset" in globals():
        headset = "disconnected"
    max_sounds = int(os.getenv("AUDIOPOLICY_MAXSOUNDS", "3"))
}

Now, if we want to force test generation to start from the state where the bluetooth headset is already connected, we only need to define in the test configuration file:

model = aal_remote(remote_pyaal -c 'headset="connected"' ...)

remote_pyaal accepts multiple -c <code> parameters. They are executed in the order of appearance.

Implementing test steps

The implementation of a test step is the code that actually interacts with the system under test: it sends inputs and checks that the system responds as expected. Implementations are called adapters in fMBT: they form the adaptation layer between the abstract test steps seen by the test generator and the interfaces of the system under test and the test environment.

There are many alternatives for implementing test steps. For instance:

  • Convert the output of fmbt -o adapter=dummy test.conf | fmbt-log into a series of function calls in a C (or some other programming language) module. This way you can compile fMBT-generated tests as part of device drivers, firmware on embedded devices, and other components whose interfaces are hard to reach from the fmbt userspace process. The downside is that the generated tests are static (this is offline model-based testing), and you will need to implement your own way to log test runs and interpret results.
  • Implement a remote adapter that communicates with the fMBT process. When launching test execution, fMBT spawns this process and requests it to execute one test step at a time. You are free to choose the programming language for the test steps. Responses from the adapter can affect test generation (online model-based testing). The downside is having to implement the remote adapter protocol, which is, however, fairly simple.
  • Write test step implementations in Python modules that you import and call from AAL/Python, or write them directly in AAL/Python. This forces you to implement test steps so that they can easily be called from Python. On the other hand, it enables using AAL/Python variables in the adapter code, and you get logging and exception traceback recording for free. This is the easiest way to implement an adapter, provided that Python works for you.

In this section, we will show how to implement test steps in AAL/Python.

Sending inputs, verifying correctness

Adapter-related code can show up in five blocks in AAL/Python: imported libraries, initialisation, test step implementations, tag checks, and finally clean-up at the end of test runs. Let's go through these with examples.

First, imports are most naturally done in the global code that is always executed when the AAL/Python model is loaded:

language "python" {
    # import Python libraries needed for communicating with the system
    # under test and the test environment.
}

Second, adapter_init is executed only if fMBT is launched for both test generation and execution, in contrast to the language block code, which is executed for plain test generation, too. Therefore, neither adapter_init nor any other adapter blocks are executed when generating the tests shown on fmbt-editor's Test tab.

adapter_init {
    # This is a good place to
    # * establish connections to the system under test and test environment
    # * initialize them for test run
}

Third, adapter blocks of test steps. An adapter block is executed when the test generator decides to execute the test step.

input "connect headset" {
    guard   { return headset == "disconnected" }
    body    { headset = "connected" }
    adapter {
        # Send input to the system under test.
        # Verify correct response, if possible.
        # Some verifications can be done in tags, too.

        # Failing Python's assert means that test step failed.

        # Example:
        # testenv.headset.power_on()
        # testenv.headset.connect(audio_system_id)
        # assert sut.check_headset_connected()
    }
}

Fourth, adapter blocks of tags. An adapter block is executed whenever the test generator arrives at a state labelled with the tag.

tag "check sound" {
    guard   { return len(playing) > 0 }
    adapter {
        # Do not change the state of the system under test
        # or the model state (that is, do not write to variables you
        # initialised in the initial_state block).
        # Observe that SUT works as it should always in this kind of
        # state.

        # Failing Python's assert means that state check failed.

        # Example:
        # if headset == "connected":
        #     testenv.headset.record_sound("heard.wav", seconds=1.0)
        # else:
        #     testenv.mic.record_sound("heard.wav", seconds=1.0)
        # assert helpers.check_correct_sound(playing, "heard.wav")
    }
}

Finally, when the fMBT test run is finished, tear-down of the test setup is handled by

adapter_exit {
    # Clean-up test setup, fetch relevant logs, crash dumps,
    # etc.

    # "verdict" and "reason" variables contain more detailed
    # information why test run ended. For instance, verdict
    # can have value "pass" and reason "coverage 0.95 reached".
}
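For example, a minimal sketch of an adapter_exit block that reacts to the verdict; testenv.fetch_logs() is a hypothetical helper of the test environment, not part of fMBT:

adapter_exit {
    # Fetch logs from the test environment only when the test did not pass.
    # testenv.fetch_logs() is a hypothetical helper.
    if verdict != "pass":
        testenv.fetch_logs("audiopolicy-" + verdict)
}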

After implementing the test steps and the relevant checks in tags, tests can be both generated and run with the command

fmbt -l test.log test.conf

where test.conf is the test configuration shown on the F2 tab in fmbt-editor. The contents of the log file (test.log) are XML. fmbt-log is handy for picking relevant information from the log for various purposes.
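For instance, assuming the $as format field that is used later in this tutorial, the executed test steps can be listed along these lines:

fmbt-log -f '$as' < test.log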

Observations that affect test generation

Most often, tests can be implemented simply by sending inputs and verifying responses, as described in the previous subsection. Whenever this is possible, it is a good idea to stick with that: it is simple, reproducible, deterministic, and easy to debug.

This subsection goes deeper into more difficult cases. In these cases we need to observe the status of the system under test or the test environment, and adapt test generation accordingly. For instance, the adapter code may observe an unexpected response that is not really an error, but that may delay or prevent testing the test steps planned by the test generator. As a result, the test generator needs to regenerate the currently running test.

As a simple example of run-time observations that affect test generation, our adapter_init could detect whether a headset device is available in the test setup. If it is not, the test would be run without testing headset actions.

adapter_init {
    if not testenv.headset_found():
        headset = "not available"
}

Because the headset variable gets the value "not available", the guards of the "connect headset" and "disconnect headset" test steps always return False, so these steps will never be tested.

Note the fundamental difference between this solution and the test configuration option -D bt in Configuring and splitting test models. In this approach, not being able to find a headset in the test environment is not an error. The test log and test reports reveal that the headset has not been tested. On the other hand, without this kind of run-time configurability, as in the -D bt case, the "connect headset" test step would fail if a headset could not be found.

Outputs are test steps whose execution is triggered by observations made on the system under test, in contrast to inputs, whose execution is triggered by the test generator. Observations are made in the adapter blocks of output test steps. If a block returns True, it tells the test generator that the output has been observed and the test step has been executed. The test generator only validates whether it was legal to execute the test step. If the guard is not defined or returns True, taking the test step was accepted by the model, and the test generator executes the body of the test step. If the guard returns False, the observation was illegal, and the test run fails.

"Playback stopped" below is an output test step. Its adapter checks if any of the music files that currently being played (music.mp3, alarm.mp3 or phone.mp3) will stop soon. If so, it will wait until all such playbacks really stop and update the playing variable accordingly.

output "playback stopped" {
    guard   { return len(playing) > 0 }
    adapter {
        # If there is less than, say, 2 seconds until playback of a sound
        # file of any role stops, then
        # 1. sleep until all such playbacks have stopped,
        # 2. remove corresponding items from the playing variable
        # 3. return True
    }
}
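As a rough sketch of what such an adapter could look like, assuming hypothetical test environment helpers testenv.remaining_playback_time() and testenv.wait_until_stopped() (they are not part of fMBT):

output "playback stopped" {
    guard   { return len(playing) > 0 }
    adapter {
        # Sounds whose playback will end within two seconds
        # (remaining_playback_time() is a hypothetical helper).
        stopping = [s for s in playing
                    if testenv.remaining_playback_time(s) < 2.0]
        if not stopping:
            return False
        for sound in stopping:
            testenv.wait_until_stopped(sound)  # hypothetical blocking wait
            playing.remove(sound)
        return True
    }
}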

The idea of this example output test step is to prevent generated tests from failing because a sound file ends without the test framework noticing it. That is, imagine a generated test of the form

  • play music
  • play alarm
  • stop alarm
  • play alarm
  • stop alarm
  • play alarm
  • stop alarm
  • ...

This never executes "stop music"; nevertheless, the music playback will eventually stop.

The test generator runs the adapter blocks of all output test steps before choosing an input test step to be executed. It is sometimes convenient to implement an adapter block of a single output that can report many alternative observations. Therefore, in addition to returning True or False, the adapter block can return any output or even input action:

output "music stopped" {
    guard { return "music" in playing }
    body  { playing.remove("music") }
}

output "poll stopped" {
    guard   { return False }
    adapter {
        ...
        if stopped_mp3 == "music.mp3":
            return output("music stopped")
        ...
    }
}

Generating different tests

Test generation is controlled with parameters given in test configuration files (F2 in fmbt-editor). The parameters are:

  • heuristic: defines which algorithm selects input actions to be tested.
  • coverage: defines how coverage is measured.
  • end conditions: define when test generation and execution stop, and what the test verdict is for each condition.

A complete list of options can be found in the test configuration documentation. In this tutorial we will concentrate on selected examples.

Reference model for test generation

We will use the following AAL/Python model in the test generation examples in this section.

aal "audiopolicy" {
    language "python" {}
    variables { playing, headset }
    initial_state {
        playing = set()
        headset = "disconnected"
    }

    tag "music", "alarm", "phone" {
        guard { return tag_name in playing }
    }

    tag "headset connected", "headset disconnected" {
        guard { return tag_name.split()[1] == headset }
    }

    input "play music", "play alarm", "play phone" {
        guard {
            play_what = input_name.split()[1]
            return not play_what in playing
        }
        body  {
            play_what = input_name.split()[1]
            playing.add(play_what)
        }
    }

    input "stop music", "stop alarm", "stop phone" {
        guard {
            stop_what = input_name.split()[1]
            return stop_what in playing
        }
        body  {
            stop_what = input_name.split()[1]
            playing.remove(stop_what)
        }
    }

    input "connect headset" {
        guard { return headset == "disconnected" }
        body  { headset = "connected" }
    }

    input "disconnect headset" {
        guard { return headset == "connected" }
        body  { headset = "disconnected" }
    }
}

This model differs from the test model in the Visualisation section in only one respect: we have added tags to demonstrate certain test generation features. The tags alarm, music and phone indicate which sounds are being played in each state. Furthermore, every state has either the headset connected or the headset disconnected tag.

Random test generation

The simplest test generation algorithm is random. Evenly distributed random choice is the default test generation heuristic. For instance, the following test configuration file (random.conf) results in a ten-step random test:

model     = aal_remote(remote_pyaal -l aal.log audiopolicy.aal)
pass      = steps(10)

You can see the generated test on the Test tab of fmbt-editor, or on the command line with fmbt random.conf | fmbt-log. In this command, fmbt generates the test. fmbt-log reads the log from standard input and extracts test steps and the test verdict from the log (this is the default fmbt-log output format).

In the second example, we will use weighted random test generation. It allows defining weights for input test steps in any state, and more specifically in states with certain tags.

For example, usage-without-headset.w:

"i:play .*"              = 1
"i:stop alarm"           = 9
"i:stop phone"           = 4
"i:stop music"           = 1
["phone"] "i:stop music" = 100

The first line of the weight file defines an equal weight (1) for all i:play... test steps. The next three lines define weights 9, 4 and 1 for stopping the alarm, the phone and the music. That is, if music and alarm are being played and either one is stopped next, nine times out of ten (on average) it will be the alarm. The last line sets a very high weight for stopping music when the phone is being played. That is, with these random tests we want to test the system like a user who almost always stops the background music when speaking on the phone.

Finally, the usage-without-headset.w file does not give any weight to the connect headset and disconnect headset test steps. Those steps get the default weight, which is zero. If the test generator is in a state where test steps with non-zero weights are available, it never chooses a step with zero weight. However, if every test step available in a state has weight zero, then all of them have an equal probability of being chosen. A new default weight can be assigned to every test step, for instance with the line ".*" = 1 in the weight file.

You can generate a 100-step long random test that uses the weight file with the following test configuration usage-without-headset.conf:

model     = aal_remote(remote_pyaal -l aal.log audiopolicy.aal)
heuristic = weight(usage-without-headset.w)
pass      = steps(100)

Now, let's see the effect of the last line in the weight file, that is, weight 100 for stopping music when on the phone. When you generate the test in fmbt-editor, you can inspect the generated test in the Test tab (F6). First, make tags visible by right-clicking the Test tab and choosing Tags. Then press Ctrl-F to open the Find dialog, and type "music, phone". Finally, keep pressing Ctrl-F to go to the next matching string. You can observe that the next test step after each match is "i:stop music" -- at least very often -- due to the weight 100.

The same thing can be checked from the command line by formatting the tags before a test step ($tb) and the selected action ($as) on the same line with fmbt-log:

fmbt usage-without-headset.conf | fmbt-log -f '$tb;  $as' | grep 'music;' | grep 'phone;'

Test step combinations

(TODO: explanations.)

  1. Test every test step at least once.

model     = aal_remote(remote_pyaal -l aal.log audiopolicy.aal)
heuristic = lookahead(5)
coverage  = perm(1)

pass      = coverage(1)
inconc    = steps(100)

on_pass   = exit(0)
on_inconc = exit(0)

  2. Test permutations of every n test steps.

With parameter n = 2, fMBT will test combinations

  • "i:play music", "i:play alarm"
  • "i:play music", "i:play phone"
  • "i:play music", "i:connect headset"
  • ...

and the same for every other test step.

model     = aal_remote(remote_pyaal -l aal.log audiopolicy.aal)
heuristic = lookahead(5)
coverage  = perm(2)

pass      = coverage(1)
pass      = lookahead_noprogress(5)

on_pass   = exit(0)
on_inconc = exit(0)

  3. Test only certain permutations.

Test every test step combination where the first step either connects or disconnects a headset, and the second step starts playing alarm, music or phone.

model     = aal_remote(remote_pyaal -l aal.log audiopolicy.aal)
heuristic = lookahead(5)
coverage  = perm(2, 'i:.*headset', 'i:play.*')

pass      = coverage(1)
pass      = lookahead_noprogress(5)

on_pass   = exit(0)
on_inconc = exit(0)

Test steps in certain order and under certain circumstances

  1. usecase(step1 then/and/or step2)

First test stopping alarm, then stopping phone, and finally test stopping music ten times.

model     = aal_remote(remote_pyaal -l aal.log audiopolicy.aal)
heuristic = lookahead(5)
coverage  = usecase("i:stop alarm" then "i:stop phone" then 10 * "i:stop music")

pass      = coverage(1)
inconc    = steps(100)

on_pass   = exit(0)
on_inconc = exit(0)

  2. usecase([at tags] step)

Test stopping phone when music is playing with headset connected.

model     = aal_remote(remote_pyaal -l aal.log audiopolicy.aal)
heuristic = lookahead(5)
coverage  = usecase(["music" and "headset connected"] "i:stop phone")

pass      = coverage(1)
inconc    = steps(100)

on_pass   = exit(0)
on_inconc = exit(0)

  3. usecase([exactly every n] steps)

Test connecting headset in every case where exactly two of the tags "alarm", "music" and "phone" are on. Try with values 0, 1 and 3, too.

model     = aal_remote(remote_pyaal -l aal.log audiopolicy.aal)
heuristic = lookahead(5)
coverage  = usecase(([exactly every 2 "alarm|music|phone"] "i:connect headset"))

pass      = coverage(1)
inconc    = steps(100)

on_pass   = exit(0)
on_inconc = exit(0)