
LNST Invocation

Ondrej Lichtner edited this page Oct 31, 2016 · 11 revisions

1. Starting LNST slaves on test machines

You have to start the lnst-slave processes yourself on all machines that are defined in the recipe file. Let's assume there are 2 machines participating in the test.

On both of these machines you have to run the following command:

$ ./lnst-slave [-d]

Or, if you've installed LNST from the RPM package, you can also run lnst-slave as a service:

$ systemctl start lnst-slave.service

If anything goes wrong, it is good practice to pass the debug option, -d, so that you get information about RPC connections established from the controller, about getting the network test interfaces ready, and about possible problems during the setup.

After a successful startup you should see output like the following:

$ ./lnst-slave
Loading config file '/etc/lnst-slave.conf'
2016-10-31 11:04:59       (localhost)        -    INFO: Started
2016-10-31 11:04:59       (localhost)        -    INFO: Using RPC port 9999.
2016-10-31 11:04:59       (localhost)        -    INFO: Waiting for connection.

2. Starting LNST controller

After that you need to start lnst-ctl on the controller to run the tests in a recipe file, which is as simple as running the following command:

$ ./lnst-ctl run my_recipe.xml

3. Using LNST controller to configure interfaces only

You can also use LNST to do just the network configuration of the test machines. None of the tests inside your recipe file will be executed.

To do this, pass config_only instead of the run argument to the lnst-ctl script. When LNST is run this way, the current configuration of the slave machines (e.g. from a previous config_only run) will be discarded and replaced by the new configuration. Unlike run, the machines will not be deconfigured after lnst-ctl finishes. Use this when you want to configure the machines automatically and then work with them manually.

$ ./lnst-ctl config_only my_recipe.xml

3.1 Deconfiguration of previous config_only run

As mentioned previously, if you run the LNST controller again with the config_only or run action, the configuration from the previous config_only run will be discarded. However, you might want to deconfigure the slave machines without configuring them again or running an entire test. This can be done with the deconfigure action, which doesn't need any recipe specified.

$ ./lnst-ctl deconfigure

4. Checking the match for current recipe

Another way to invoke the controller is to check which machines from the user's slave pool the controller would match based on the provided recipe. The action is called match_setup.

$ ./lnst-ctl match_setup my_recipe.xml

4.1 Turning off connectivity check to test machines

By default, the controller omits all machines that are not reachable over the network while matching the recipe to the user's slave pool. This behaviour can be turned off with the --disable-pool-checks option (short version -o), which skips the connectivity check and considers all machines available for the matching phase.

$ ./lnst-ctl --disable-pool-checks match_setup my_recipe.xml

or

$ ./lnst-ctl -o match_setup my_recipe.xml

5. Other command-line options of lnst-ctl

The previous sections described the different actions supported by the lnst-ctl tool. This section explains the available command-line options and when to use them.

You can always read the manual page, or use the -h command line argument to list available options:

$ man lnst-ctl
$ ./lnst-ctl -h
Usage: ./lnst-ctl [OPTIONS...] ACTION [RECIPES...]

ACTION = [ run | config_only | deconfigure | match_setup | list_pools]

OPTIONS
  -A, --override-alias name=value define top-level alias that will override any other definitions in the recipe
  -a, --define-alias name=value   define top-level alias
  -c, --config=FILE               load additional config file
  -C, --config-override=FILE      reset config defaults and load the following config file
  -d, --debug                     emit debugging messages
      --dump-config               dumps the join of all loaded configuration files on stdout and exits
  -v, --verbose                   verbose version of list_pools command
  -h, --help                      print this message
  -m, --no-colours                disable coloured terminal output
  -o, --disable-pool-checks       don't check the availability of machines in the pool
  -p, --packet-capture            capture and log all ongoing network communication during the test
      --pools=NAME[,...]          restricts which pools to use for matching, value can be a comma separated list of values or
      --pools=PATH                a single path to a pool directory
  -r, --reduce-sync               reduces resource synchronization for python tasks, see documentation
  -s, --xslt-url=URL              URL to a XSLT document that will be used when transforming the xml result file, only useful when -t is used as well
  -t, --html=FILE                 generate a formatted result html
  -u, --multi-match               run each recipe with every pool match possible
  -x, --result=FILE               file to write xml_result

Option -o was explained earlier, and the options -h, -m and --dump-config are self-explanatory.

  • -a defines a new top-level alias the same way a <define> tag under the top element of the recipe would. This alias can be overridden by defining the same name again.
  • -A defines a new top-level alias the same way a <define> tag under the top element of the recipe would. This alias can't be overridden by defining the same name again.
  • -c accepts a filename as an argument. The provided file will be loaded as an additional configuration file after the standard configuration files have been read. The file MUST exist. This option is useful when automating lnst-ctl runs that sometimes need different configurations. Read more about lnst configuration here
  • -C works the same as -c, but before it loads the specified configuration file it resets the internal configuration state to default. This is useful if you want to ignore all the automatically loaded configuration files. Read more about lnst configuration here
  • -d, as mentioned before, enables debugging messages that give you greater insight into what LNST is doing at the moment, which makes it very useful when things go wrong.
  • -v enables verbose output for the list_pools action, to also list detailed information about the loaded test machines.
  • -p will enable packet capture on all machines via tcpdump. The captured packets are stored in a file and transmitted to the controller machine, where you can find them in the log directory for the specific test. This is useful when you want to know exactly what happened on the network.
  • --pools controls which pools should be used by the matching algorithm. You can either restrict the pool selection by name or specify a path to a pool directory.
  • -r will disable synchronization of module and tool resources to slave machines, when python tasks are used in the recipe. Instead the user can manually synchronize resources from the python task itself. This option has no effect if the recipe only contains xml tasks.
  • -s can be used to set which XSLT document should be used for the transformation. The default is http://www.lnst-project.org/files/result_xslt/xml_to_html.xsl
  • -t will create a formatted html file of the result data created during this controller run. To do this, the xml file from the -x option is created as an internal structure and transformed using an XSLT document.
  • -u enables multi-matching of recipes. This means that each specified recipe will be run with every possible match in the current pool. This is useful when writing generic recipes that are very portable.
  • -x will make LNST create an xml file containing structured result data created from the recipe execution. The option accepts a target filename as an argument. This can be useful for storing the data or using it in other applications.
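For illustration, several of these options can be combined in a single invocation. The alias name, value and output filenames below are made up for the example; only the flags themselves come from the listing above.

```
# Hypothetical combined run: override a top-level alias, capture packets,
# and write both an xml result file and a formatted html report.
$ ./lnst-ctl -A mtu=1500 -p -x results.xml -t results.html run my_recipe.xml
```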

6. Test results

LNST keeps all the logs that are created during test execution. You can find the main log directory at the location specified in your configuration file; by default this is ~/.lnst/logs. In this directory the logs are organized in the following manner:

[<log_dir>/]                                      the main log directory

[<log_dir>/<timestamp>/]                          directory containing logs from one controller run
    debug
    info                                          summary logs from one controller run, contains PASS/FAIL results for all recipes

[<log_dir>/<timestamp>/<recipe>_match_N/]         logs related to N-th match of a specific recipe
    debug
    info                                          these files contain a union of all the slave and controller logs of this recipe

[<log_dir>/<timestamp>/<recipe>_match_N/<host_id>/]
    debug
    info                                          these files contain logs only from individual slave machines

We also store logs on the slave machines so that, in case something goes wrong there, you can find at least some information on what happened. The default path for slave logs is /var/log/lnst. The structure is slightly different since the slave runs as a daemon. It looks like this:

[<log_dir>/]                                      the main log directory

[<log_dir>/<timestamp>/]                          directory containing logs from one slave run
    debug
    info                                          information about the start of the slave application

[<log_dir>/<timestamp>_<recipe>/]                 logs related to one specific recipe, a new directory is created every time a new recipe is being run
    debug
    info                                          these files contain the logs of this recipe

The debug files contain more in-depth information about what is happening, which is useful when testing out new features or figuring out what is going wrong. The info files contain a "filtered" output containing only the most important information about the recipe execution.

For example, suppose that you ran a recipe called recipe1.xml that runs a test on two machines with ids slave1 and slave2. This is how the results from this recipe will be structured on the controller machine:

$ ls -R ~/.lnst/logs/
~/.lnst/logs/:
2013-01-10_10:40:23

~/.lnst/logs/2013-01-10_10:40:23:
debug  info  recipe1_match_1

~/.lnst/logs/2013-01-10_10:40:23/recipe1_match_1:
debug  info  slave1  slave2

~/.lnst/logs/2013-01-10_10:40:23/recipe1_match_1/slave1:
debug  info

~/.lnst/logs/2013-01-10_10:40:23/recipe1_match_1/slave2:
debug  info
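Given a layout like the one above, a quick way to see the overall PASS/FAIL summary of the most recent controller run is to grep the top-level info log. This is only a convenience sketch, not part of LNST itself; it assumes the default log location (the LOG_DIR variable is an illustration you can override).

```shell
# Summarize PASS/FAIL lines from the newest controller run.
# LOG_DIR is an assumption; the default controller log path is ~/.lnst/logs.
LOG_DIR="${LOG_DIR:-$HOME/.lnst/logs}"
# Timestamped directory names (YYYY-MM-DD_HH:MM:SS) sort chronologically.
latest=$(ls -1 "$LOG_DIR" | sort | tail -n 1)
grep -E 'PASS|FAIL' "$LOG_DIR/$latest/info"
```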

On the slave machines, the hierarchy will look something like this:

$ ls -R /var/log/lnst/
/var/log/lnst/:
2013-01-10_10:40:09  2013-01-10_10:40:22_recipe1

/var/log/lnst/2013-01-10_10:40:09:
debug  info

/var/log/lnst/2013-01-10_10:40:22_recipe1:
debug  info
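Similarly, on a slave machine you can jump straight to the log of the most recently executed recipe. The snippet below is only a sketch under two assumptions: the default /var/log/lnst path (SLAVE_LOG_DIR is an illustrative variable), and the naming convention shown above, where recipe directories are named <timestamp>_<recipe> and therefore contain one more underscore than plain <timestamp> directories.

```shell
# Show the info log of the most recent recipe run on a slave.
# SLAVE_LOG_DIR is an assumption; the default slave log path is /var/log/lnst.
SLAVE_LOG_DIR="${SLAVE_LOG_DIR:-/var/log/lnst}"
# Recipe directories (<timestamp>_<recipe>) have two underscores, while
# plain <timestamp> directories have only one, so *_*_* matches recipes only.
latest_recipe=$(ls -1d "$SLAVE_LOG_DIR"/*_*_* 2>/dev/null | sort | tail -n 1)
cat "$latest_recipe/info"
```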