Tutorial
A driver is listening to music in her car when the incoming call alarm rings. Before she reacts to the alarm, the navigator reminds her to take the next turn to the right.
Audio policy takes care of routing all sounds to the correct audio outputs. For instance, the driver should hear the alarm and the navigator message both with and without a bluetooth headset, but these sounds should perhaps not interfere with sounds from games or videos being played on the backseat. Music, phone, games, videos and the navigator can all be functions of the same IVI (in-vehicle infotainment) system in a car.
As an example of this kind of system, Tizen IVI runs the Murphy audio policy daemon.
From a software testing perspective, the number of different audio routing scenarios is large, and there are many different paths that lead to the same scenario. For instance, the scenario "music is playing when the call alarm rings" can be tested by starting to play music and then calling in. But it can be tested further by letting the navigator add a third sound for a while and, when it has finished, checking that music and the call alarm are again routed correctly.
It would take a great effort to design a test suite with sufficient coverage of all the scenarios and interesting paths. In this tutorial we will take an alternative approach: let fMBT automatically generate and run the test suite.
There are many code snippets but no screenshots in this tutorial. The idea is that you can copy-paste the code to your fmbt-editor, save, and see visualisations and automatically generated tests by yourself.
This tutorial has three sections. In the first section, Modeling, we will show how to define test steps with fmbt-editor. You can see the first automatically generated tests already after the first two code snippets in this section.
In the second section, Implementing test steps, we will show how to make automatically generated tests executable.
In the third section, Generating different tests, we will show how to generate tests for different purposes from the same set of test steps.
Instead of trying to design test cases, we only define test steps and when they can be tested. This combination, that is test steps and their preconditions, is called a model. Model-based testing tools, like fMBT, generate tests automatically based on models.
A handy tool for modeling is fmbt-editor. It can be used for
- editing models
- visualising models as state spaces
- editing test generation parameters
- generating different tests and visualising what they cover.
You can also use your favourite editor to edit model and test configuration files, and use fmbt-editor --preview-only to visualise them. AAL-mode is provided for Emacs users.
Let's start with a minimal model:
aal "audiopolicy" {
language "python" {}
initial_state {}
}
This only tells that we will use AAL/Python as the modeling language.
This model has only one state. You can visualise the model by launching fmbt-editor audiopolicy.aal, replacing the editor contents with the code above, and pressing Ctrl+S to save the file. Saving triggers an update of the visualisation.
The visualised model gets more interesting when we add the first test steps. Let's add test steps to start and stop playing music in the system. The design is that the test step "play music" will start music playback, and the music will be played until the test step "stop music" is executed.
We add a playing variable to track which sounds are being played. We read the variable in the guard blocks (preconditions) of "play music" and "stop music". For instance, the guard of "play music" makes sure that "play music" can be tested only if music is not already playing.
With the variable and the two test steps on board, the AAL/Python code looks like this:
aal "audiopolicy" {
language "python" {}
variables { playing }
initial_state {
playing = set()
}
input "play music" {
guard { return not "music" in playing }
body { playing.add("music") }
}
input "stop music" {
guard { return "music" in playing }
body { playing.remove("music") }
}
}
Body blocks of test steps "play music" and "stop music" update the value of playing. A body block is executed after successfully testing the step. This code does not tell how "play music" and "stop music" actually interact with the system under test when tested. That code will be written in adapter blocks later.
The code inside initial_state, guard, body and adapter blocks is pure Python. Variables defined in the variables block are declared global in each of these blocks.
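Because these blocks are pure Python, you can also define helper functions in the language block and call them from guards and bodies. A minimal sketch (the is_playing helper below is our own illustration, not part of the tutorial's model):
is_playing
aal "audiopolicy" {
    language "python" {
        def is_playing(sound):
            # "playing" is a global in the compiled Python module,
            # so it is visible here at call time, too
            return sound in playing
    }
    variables { playing }
    initial_state { playing = set() }
    input "play music" {
        guard { return not is_playing("music") }
        body { playing.add("music") }
    }
    input "stop music" {
        guard { return is_playing("music") }
        body { playing.remove("music") }
    }
}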
If we added new sound types using the same play/stop pattern as above, testing an alarm would look like this:
aal "audiopolicy" {
language "python" {}
variables { playing }
initial_state {
playing = set()
}
input "play music" {
guard { return not "music" in playing }
body { playing.add("music") }
}
input "stop music" {
guard { return "music" in playing }
body { playing.remove("music") }
}
input "play alarm" {
guard { return not "alarm" in playing }
body { playing.add("alarm") }
}
input "stop alarm" {
guard { return "alarm" in playing }
body { playing.remove("alarm") }
}
}
However, adding "phone", "navigator", "game" and "video" in this way
would result in many almost identical test steps. We can avoid
repeating code by defining several test steps with common guard
and body blocks. Inside the blocks, input_name
variable
contains the name of the test step.
aal "audiopolicy" {
language "python" {}
variables { playing }
initial_state {
playing = set()
}
input "play music", "play alarm", "play phone" {
guard {
play_what = input_name.split()[1]
return not play_what in playing
}
body {
play_what = input_name.split()[1]
playing.add(play_what)
}
}
input "stop music", "stop alarm", "stop phone" {
guard {
stop_what = input_name.split()[1]
return stop_what in playing
}
body {
stop_what = input_name.split()[1]
playing.remove(stop_what)
}
}
}
In the code above, play_what and stop_what are local variables, as they are not listed in the variables block. These variables contain the value "music", "alarm" or "phone", depending on the test step for which the block is executed.
That's it: fMBT already generates tests, as you can see in fmbt-editor's Test tab (F6). In the remaining part of this section we will:
- Extend the model by adding bluetooth headset support.
- Go deeper into different visualisation options that will help understanding and using the Model tab (F5).
- Make the model configurable for different tests and environments.
On the other hand, if you are more eager to move on, this is a good place to jump to
- Implementing test steps to see how generated tests are made executable
- Generating different tests to see how the fMBT test generator can be configured.
The visualised model with "music", "alarm" and "phone" sounds has eight states (each of the three sounds is either playing or not, giving 2 x 2 x 2 = 8 combinations), which already makes it somewhat hard to follow (see the code above). The number of states grows quickly: for instance, adding test steps for connecting and disconnecting a bluetooth headset (see the code below) doubles the number of states to 16.
Printing variable values on states helps understanding the state space. The fmbt-editor visualisation can be controlled with directives, such as # preview-show-vars: .... For instance, the directive below prints the values of both the playing and headset variables on every state.
# preview-show-vars: playing, headset
aal "audiopolicy" {
language "python" {}
variables { playing, headset }
initial_state {
playing = set()
headset = "disconnected"
}
input "play music", "play alarm", "play phone" {
guard {
play_what = input_name.split()[1]
return not play_what in playing
}
body {
play_what = input_name.split()[1]
playing.add(play_what)
}
}
input "stop music", "stop alarm", "stop phone" {
guard {
stop_what = input_name.split()[1]
return stop_what in playing
}
body {
stop_what = input_name.split()[1]
playing.remove(stop_what)
}
}
input "connect headset" {
guard { return headset == "disconnected" }
body { headset = "connected" }
}
input "disconnect headset" {
guard { return headset == "connected" }
body { headset = "disconnected" }
}
}
fmbt-editor lets you zoom the visualised model in and out with the mouse (Ctrl+wheel) and with the keyboard (F5, then Ctrl++ and Ctrl--).
Visualising the above AAL/Python shows that
- States are unique combinations of the values of variables.
- Test steps are transitions between states. If the source and destination states are the same, the name of the test step is written inside the state in the visualisation.
- Test execution starts from the state tagged [initial state] at the top.
- States and transitions visited by generated tests are colored green.
In addition to printing the values of variables on states, fmbt-editor visualisation directives help viewing the model from different perspectives.
For instance, try replacing the first line of the previous example with the directive
# preview-show-vars: headset
The visualisation will merge all states where the value of the headset variable is the same, resulting in a two-state visualisation of the 16-state state space. Alternatively, you can view the state space from the perspective of which sounds are played simultaneously with the directive
# preview-show-vars: playing
Other interesting views include showing only the part of the state space that has been covered by the automatically generated test:
# preview-show-vars: playing, headset
# preview-hide-states: unvisited
Or inspect in detail where "i:play music" could have been tested and where it has been tested (the transition is colored green) by the generated test:
# preview-show-vars: playing, headset
# preview-show-trans: i:play music
Finally, sometimes the model may have too many states for almost any visualisation. Then the preview-depth directive may save you from a huge state space and the slowness of laying it out and painting it. As an example, let's limit the visualisation to the depth of one test step from the initial state:
# preview-show-vars: playing, headset
# preview-depth: 1
Even though the visualisation then does not present all the real states of the state space, tests are always generated from the full state space.
Tags are handy for
- labeling states. To label a set of states, define a guard that returns True in those states.
- grouping test steps. Test steps defined inside a tag block can be tested only in states where both the guard of the tag and the guard of the input block return True. This avoids repeating the same conditions in the guards of many inputs.
- verifying that the state of the system under test corresponds to the tagged state. These checks are written in the adapter blocks of tags. The block is executed before the test generator chooses the next step to be tested.
As an example of labeling and grouping, we will make the above model configurable with an environment variable. AUDIOPOLICY_MAXSOUNDS sets the upper limit for how many sounds are played simultaneously in generated tests. If the environment variable has not been defined, the default is 3.
In the code below, states where fewer than the maximum number of sounds are being played are tagged "can play more". As we group all "play ..." test steps inside the tag, starting to play new sounds is allowed only in "can play more" states.
# preview-show-vars: playing, headset
aal "audiopolicy" {
language "python" { import os }
variables { playing, headset, max_sounds }
initial_state {
playing = set()
headset = "disconnected"
max_sounds = int(os.getenv("AUDIOPOLICY_MAXSOUNDS", "3"))
}
tag "can play more" {
guard { return len(playing) < max_sounds }
input "play music", "play alarm", "play phone" {
guard {
play_what = input_name.split()[1]
return not play_what in playing
}
body {
play_what = input_name.split()[1]
playing.add(play_what)
}
}
}
input "stop music", "stop alarm", "stop phone" {
guard {
stop_what = input_name.split()[1]
return stop_what in playing
}
body {
stop_what = input_name.split()[1]
playing.remove(stop_what)
}
}
input "connect headset" {
guard { return headset == "disconnected" }
body { headset = "connected" }
}
input "disconnect headset" {
guard { return headset == "connected" }
body { headset = "disconnected" }
}
}
You can see the effect by quitting fmbt-editor (Ctrl+Q), setting the environment variable, and launching the editor again:
export AUDIOPOLICY_MAXSOUNDS=1
fmbt-editor audiopolicy.aal
We already demonstrated using an environment variable (AUDIOPOLICY_MAXSOUNDS) for configuring test models. Here we will introduce two more ways: preprocessor directives, which also allow splitting test steps into separate files, and user-defined Python code. These are controlled with remote_pyaal parameters in test configuration files.
Before test generation, the AAL/Python runner remote_pyaal compiles AAL/Python into pure Python. The compiler underneath, fmbt-aalc, uses a preprocessor that handles the directives
^ifdef "name"
^include "filename"
^endif
The preprocessor replaces ^include "filename" with the contents of the file. This enables splitting test steps into several files.
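For instance, the sound and headset test steps could be moved into files of their own that the main model pulls in. A sketch, assuming we have split the steps into files named sound-steps.aal and headset-steps.aal (both file names are hypothetical):
aal "audiopolicy" {
    language "python" { import os }
    variables { playing, headset, max_sounds }
    initial_state {
        playing = set()
        headset = "disconnected"
        max_sounds = int(os.getenv("AUDIOPOLICY_MAXSOUNDS", "3"))
    }
    # the preprocessor pastes the file contents here
    ^include "sound-steps.aal"
    ^include "headset-steps.aal"
}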
The preprocessor removes the lines from ^ifdef "name" to ^endif if name has not been defined. This can be used for enabling and disabling the testing of certain test steps.
For example, let's change the above model so that tests can be generated and run in test environments with and without bluetooth headsets. We do this by adding ^ifdef "bt" and ^endif around the last two test steps in the model:
^ifdef "bt"
input "connect headset" {
guard { return headset == "disconnected" }
body { headset = "connected" }
}
input "disconnect headset" {
guard { return headset == "connected" }
body { headset = "disconnected" }
}
^endif
Now we can toggle headset testing in test configuration files. You can try it as follows:
- Open the test configuration tab in fmbt-editor (press F2).
- Add -D bt right after remote_pyaal and save the configuration (Ctrl+S). This triggers updating the visualisation and the generated test.
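With this change, the model line of the configuration could look roughly like this (a sketch: the exact remaining parameters depend on your configuration):
model = aal_remote(remote_pyaal -D bt audiopolicy.aal)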
In addition to preprocessor directives, remote_pyaal's parameter -c <code> defines Python code to be executed before loading the model. This can be used, for instance, for defining new values for AAL/Python variables. For example, let's allow the test configuration file to override the initial status of the bluetooth headset:
initial_state {
    playing = set()
    if not "headset" in globals():
        headset = "disconnected"
    max_sounds = int(os.getenv("AUDIOPOLICY_MAXSOUNDS", "3"))
}
Now, if we want to force test generation to start from the state where the bluetooth headset is already connected, we only need to define in the test configuration file:
model = aal_remote(remote_pyaal -c 'headset="connected"' ...)
remote_pyaal accepts many -c <code> parameters. They will be executed in their order of appearance.
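Because the snippets run in order, a later -c snippet can use names that an earlier one defined. A hypothetical example, where the HEADSET environment variable is our own invention:
model = aal_remote(remote_pyaal -c 'import os' -c 'headset=os.getenv("HEADSET", "connected")' audiopolicy.aal)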
The implementation of a test step is the code that actually interacts with the system under test: it sends inputs and checks that the system responds as expected. Implementations are called adapters in fMBT: they are the adaptation layer between the abstract test steps seen by the test generator and the interfaces of the system under test and the test environment.
There are many alternatives for implementing test steps. For instance:
- Convert the output of fmbt -o adapter=dummy test.conf | fmbt-log into a series of function calls in a C (or some other programming language) module. This way you can compile fMBT-generated tests as part of device drivers, firmware on embedded devices, and other components whose interfaces are hard to reach from the fMBT userspace process. The downside is that generated tests are static (this is offline model-based testing), and you will need to implement your own way to log test runs and interpret results.
- Implement a remote adapter that communicates with the fMBT process. When launching test execution, fMBT spawns this process and requests it to execute one test step at a time. You have the freedom to choose the programming language for test steps. Responses from the adapter can affect test generation (online model-based testing). The downside is having to implement the remote adapter protocol, which is, however, fairly simple.
- Write test step implementations in Python modules that you import and call from AAL/Python, or write them directly in AAL/Python. This forces you to implement test steps so that they are easily called from Python. On the other hand, it enables using AAL/Python variables in the adapter code, and you get logging and exception traceback recording for free. This is the easiest way to implement an adapter, given that Python works for you.
In this section, we will show how to implement test steps in AAL/Python.
Adapter-related code can show up in five blocks in AAL/Python: imported libraries, initialisation, test step implementations, tag checks, and finally clean-up at the end of test runs. Let's go through these with examples.
First, imports are most natural to do in the global code that is always executed when the AAL/Python model is loaded:
language "python" {
# import Python libraries needed for communicating with the system
# under test and the test environment.
}
Second, adapter_init is executed only if fMBT is launched for both test generation and execution, in contrast to the language block code, which is executed for plain test generation, too. Therefore, neither adapter_init nor any other adapter blocks are executed when generating the tests shown on fmbt-editor's Test tab.
adapter_init {
    # This is a good place to
    # * establish connections to the system under test and test environment
    # * initialize them for test run
}
Third, the adapter blocks of test steps. An adapter block is executed when the test generator decides to execute the test step.
input "connect headset" {
guard { return headset == "disconnected" }
body { headset = "connected" }
adapter {
# Send input to the system under test.
# Verify correct response, if possible.
# Some verifications can be done in tags, too.
# Failing Python's assert means that test step failed.
# Example:
# testenv.headset.power_on()
# testenv.headset.connect(audio_system_id)
# assert sut.check_headset_connected()
}
}
Fourth, the adapter blocks of tags. An adapter block is executed whenever the test generator arrives at a state labelled with the tag.
tag "check sound" {
guard { return len(playing) > 0 }
adapter {
# Do not change the state of the system under test
# or the model state (that is, do not write to variables you
# initialised in the initial_state block).
# Observe that SUT works as it should always in this kind of
# state.
# Failing Python's assert means that state check failed.
# Example:
# if headset == "connected":
# testenv.headset.record_sound("heard.wav", seconds=1.0)
# else:
# testenv.mic.record_sound("heard.wav", seconds=1.0)
# assert helpers.check_correct_sound(playing, "heard.wav")
}
}
Finally, when the fMBT test run is finished, tearing down the test setup is handled by
adapter_exit {
    # Clean up the test setup, fetch relevant logs, crash dumps,
    # etc.
    # "verdict" and "reason" variables contain more detailed
    # information why the test run ended. For instance, verdict
    # can have value "pass" and reason "coverage 0.95 reached".
}
After implementing the test steps and relevant checks in tags, tests can be both generated and run with the command
fmbt -l test.log test.conf
where test.conf is the test configuration shown in the F2 tab of fmbt-editor.
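If you have not written a configuration yet, a minimal one might look roughly like this (a sketch: the heuristic, coverage and end condition values below are illustrative assumptions, discussed in Generating different tests):
model     = aal_remote(remote_pyaal audiopolicy.aal)
adapter   = aal
heuristic = lookahead(2)
coverage  = perm(1)
pass      = coverage(1.0)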
The contents of the log file (test.log) are XML. fmbt-log is handy for picking relevant information from the log for various purposes.
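For instance, after a test run you could extract information from the log like this (a sketch: we assume fmbt-log's default output lists the executed test steps and the verdict, and that $as is the format field for action names; check fmbt-log --help in your installation):
fmbt -l test.log test.conf    # generate, run and log the test
fmbt-log test.log             # executed test steps and the final verdict
fmbt-log -f '$as' test.log    # only the names of the executed steps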
Most often, tests can be implemented simply by sending inputs and verifying responses, as described in the previous subsection. Whenever this is possible, it is a good idea to stick with it: it is simple, reproducible, deterministic, and easy to debug.
This subsection goes deeper into more difficult cases. In these cases we need to observe the status of the system under test or the test environment, and adapt test generation accordingly. For instance, the adapter code may observe an unexpected response that is not really an error, but that may delay or prevent testing the test steps planned by the test generator. As a result, the test generator needs to regenerate the currently running test.
As a simple example of run-time observations that affect test generation, our adapter_init could detect whether a headset device is available in the test setup. If it is not, the test would run without testing headset actions.
adapter_init {
    if not testenv.headset_found():
        headset = "not available"
}
Because the headset variable gets the value "not available", the "connect headset" and "disconnect headset" test steps will never be tested: their guards will always return False, as "not available" matches neither "disconnected" nor "connected".
Note the fundamental difference between this solution and the test configuration option -D bt in Configuring and splitting test models. In this approach, not being able to find a headset in the test environment is not an error; the test log and test reports simply reveal that the headset has not been tested. On the other hand, without this kind of run-time configurability, as in the -D bt case, the "connect headset" test step would fail if the headset could not be found.
Output test steps are test steps whose execution is triggered by observations made on the system under test. The observations are made in the adapter blocks of output test steps. If such a block returns True, it tells the test generator that the output has been observed and should be executed. If the guard is not given or returns True, taking the test step is accepted by the model, and the body of the test step is executed.
output "wav stopped" {
...
}
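As a rough sketch of how such an output step could be filled in, following the semantics described above (the "wav" sound and the testenv.wav_playback_stopped helper are hypothetical):
output "wav stopped" {
    guard { return "wav" in playing }
    adapter {
        # Return True when we observe on the system under test
        # that wav playback has ended (hypothetical helper).
        return testenv.wav_playback_stopped()
    }
    body { playing.remove("wav") }
}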
To be continued...