LNST Task API
Allows you to drive the test execution from a Python script with access to all of the LNST infrastructure - machine properties, command execution, template functions, background execution. Besides the LNST infrastructure you can benefit from Python language facilities such as loops, conditional execution and whatever else the Python standard library provides.
Note that the machine requirements description still needs to be provided as XML code in the recipe.
Let's assume that the Python program you're using for the task execution is called task_check_ping.py and that your current directory contains a recipe my_recipe.xml:
$ ls .
my_recipe.xml
task_check_ping.py
To include the Python script in your recipe, use the following code:
<lnstrecipe>
    <network>
        <host id="1">
            <interfaces>
                <eth label="ttnet" id="testiface">
                    <addresses>
                        <address value="192.168.100.240/24"/>
                    </addresses>
                </eth>
            </interfaces>
        </host>
        <host id="2">
            <interfaces>
                <eth label="ttnet" id="testiface">
                    <addresses>
                        <address value="192.168.100.215/24"/>
                    </addresses>
                </eth>
            </interfaces>
        </host>
    </network>

    <!-- This is it! -->
    <task python="task_check_ping.py"/>
</lnstrecipe>
NOTE: Python task pathnames are relative to the path of the recipe that references them.
# Mandatory import, the ctl handle contains the API
from lnst.Controller.Task import ctl
# Get handles for the machines
m1 = ctl.get_host("1")
m2 = ctl.get_host("2")
# Set a config option (persistent=True)
m1.config("/proc/sys/net/ipv4/conf/all/forwarding", "0", True)
# run a shell command
m2_if = m2.get_interface("testiface")
devname = m2_if.get_devname()
m1.run("echo %s" % devname, timeout=30)
# prepare a module for execution
ping_mod = ctl.get_module("IcmpPing", options={
"addr": m2_if.get_ip(0),
"count": 40,
"interval": 0.2,
"limit_rate": 95})
# run the module twice on machine one
ping_test = m1.run(ping_mod, timeout=30)
ping_test = m1.run(ping_mod, bg=True, timeout=30)
# make the controller wait
ctl.wait(5)
# interrupt the process
ping_test.intr()
As you can see, all you need to do to access the LNST API is to import the Controller handle:
from lnst.Controller.Task import ctl
Let's have a closer look at the API itself.
The controller handle provides the following methods:
| ControllerAPI | |
| --- | --- |
| get_host(self, host_id) | get a HostAPI handle for the machine from the recipe spec with a specific id |
| get_hosts(self) | returns a dictionary that maps host_ids to HostAPI objects |
| get_module(self, name, options) | get a test module API handle |
| wait(self, seconds) | make the controller wait for a specified number of seconds |
| get_alias(self, alias) | gets the value of a previously defined alias |
| get_aliases(self) | returns a dictionary of all the defined aliases |
| connect_PerfRepo(self, mapping_file, url, username, password) | connects to a PerfRepo instance, returns a PerfRepoAPI object |
| get_configuration(self) | returns the configuration parsed from the recipe file |
| get_mapping(self) | returns the mapping of the recipe configuration to pool machines |
The get_host() method returns a handle that is needed when you want to access information about the interfaces or run a command on the machine. See the Host API for details on how to do that, or look at the examples below.
The get_hosts() method returns a dictionary that maps the host_ids of all configured hosts to HostAPI objects.
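For illustration, a minimal sketch (assuming the two-host recipe above) that iterates over all configured hosts and logs the device names of their interfaces:
import logging
from lnst.Controller.Task import ctl

# iterate over all hosts defined in the recipe and log their interfaces
for host_id, host in ctl.get_hosts().items():
    for if_id, iface in host.get_interfaces().items():
        logging.info("host %s: interface %s is %s" % (host_id, if_id, iface.get_devname()))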
The get_module() method simply returns a Test Module handle that can be run on a test machine. It takes an optional options dictionary to initialize the Test Module.
The wait() method is the equivalent of the <ctl_wait> tag in the recipe XML. It takes one parameter, seconds, whose value tells the controller how long it should wait before it continues with the task execution.
The get_alias() method can be used to access aliases defined in the highest namespace of the recipe -- <lnstrecipe>.
The connect_PerfRepo() method is used to get an API object for communicating with a PerfRepo instance. The first call creates the connection; consecutive calls only return the already existing connection. If you want to connect to multiple PerfRepo instances you can create a new PerfRepoAPI object manually. The mapping_file parameter specifies a file path to read PerfRepo mapping data from; it can be a relative or absolute path to a locally stored file or an http URL to a remotely stored file. The url, username and password parameters are optional and, when not set, the method will read them from your configuration file.
The get_configuration() and get_mapping() methods are mostly helper functions used when preparing data to send to PerfRepo, but since they could be useful on their own they were made public.
The following example shows how the handles returned by get_host() and get_interface() can be combined to run commands that use interface information:
m1 = ctl.get_host("1")
m2 = ctl.get_host("2")
m2_if = m2.get_interface("testiface")
ifc_ip = m2_if.get_ip()
ifc_hwaddr = m2_if.get_hwaddr()
m1.run("ping -c 5 " + ifc_ip)
m1.run("arp -n | grep -i " + ifc_hwaddr)
The following example shows the initialization of the IcmpPing module and its execution on the test machine.
m1 = ctl.get_host("1")
m2 = ctl.get_host("2")
m2_if = m2.get_interface("testiface")
ping_module = ctl.get_module("IcmpPing", options={
"addr": m2_if.get_ip(0),
"count": 40,
"interval": 0.2,
"limit_rate": 95})
m1.run(ping_module)
logging.info("I'll wait for 10 seconds")
ctl.wait(10)
logging.info("Task execution continues ...")
my_value = ctl.get_alias("my_alias")
This will return the value of my_alias, defined in the recipe like this:
<lnstrecipe>
    <define>
        <alias name="my_alias" value="my_value"/>
    </define>
    ...
</lnstrecipe>
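Similarly, get_aliases() returns all defined aliases at once; a minimal sketch, assuming the alias definition above:
# get all aliases as a dictionary and read one of them
aliases = ctl.get_aliases()
my_value = aliases["my_alias"]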
The Host API provides the following methods:
| HostAPI | |
| --- | --- |
| config(self, option, value, persistent=False, netns=None) | configure values under the /proc or /sys directories |
| run(self, what, **kwargs) | run commands or test modules on the test machine |
| get_interface(self, interface_id) | returns an InterfaceAPI object that can be used to access the specified interface |
| get_interfaces(self) | returns a dictionary mapping interface_ids to InterfaceAPI objects of all configured interfaces on the host |
| sync_resources(self, modules=[], tools=[]) | manually synchronize resources to the test machine |
| create_bond(self, if_id=None, netns=None, ip=None, options=None, slaves=None) <br> create_bridge(self, if_id=None, netns=None, ip=None, options=None, slaves=None) <br> create_team(self, config=None, if_id=None, netns=None, ip=None, slaves=None) <br> create_vlan(self, realdev_iface, vlan_tci, if_id=None, netns=None, ip=None) <br> create_vxlan(self, vxlan_id, realdev_iface=None, group_ip=None, remote_ip=None, if_id=None, netns=None, ip=None, options={}) | allow you to create new soft interfaces during test execution. The parameters are the equivalent of how you'd define a bond or bridge interface in the XML part of the recipe (see the sketch below this table). |
| enable_service(self, service) | starts the specified system service. Uses either the systemctl {start, stop} {service} or the service {service} {start, stop} call. |
| disable_service(self, service) | stops the specified system service. |
| get_num_cpus(self) | returns the number of CPUs on the host. |
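As a sketch of the create_* methods, the following creates a VLAN on top of a physical interface. The vlan_tci value, the if_id and the ip address used here are illustrative assumptions (as is the CIDR notation of the address), and it is assumed that the returned object is an InterfaceAPI handle:
m1 = ctl.get_host("1")
m1_if = m1.get_interface("testiface")

# create a VLAN interface (tag 10) on top of the physical interface;
# the tag, if_id and ip values are made up for this example
vlan_if = m1.create_vlan(m1_if, 10, if_id="vlan10", ip="192.168.10.1/24")
vlan_if.set_link_up()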
The config() method is the equivalent of the <config> command used in the recipe XML. The option parameter defines the path to the file under the /proc or /sys directory, value contains the value to set the option to, and the optional persistent flag makes the value persistent between individual tasks. The optional netns parameter controls which network namespace should be configured.
The run() method is the equivalent of the <run> command used in the recipe XML. The what parameter is used to pass either a Test Module handle obtained through the Controller API's get_module() method or a string containing a command to be run on the command line. The method takes the following keyword arguments that modify its behaviour. For their usage see the examples section below.
| run() keyword argument | type | description |
| --- | --- | --- |
| bg | boolean | if set to True the command will be run in the background |
| expect | "pass" or "fail" | if set to "fail" the command is expected to fail - in other words, if it succeeds the test case is considered failed |
| timeout | integer | time limit in seconds |
| tool | string | run the command from a tool (the same as the from attribute in the recipe xml) |
| netns | string | run the command in the specified network namespace |
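For instance, the expect keyword can be used for negative testing; a minimal sketch, assuming 192.168.100.1 is an unused address on the recipe's subnet:
m1 = ctl.get_host("1")

# this ping is expected to fail, so the task passes only if it does fail
m1.run("ping -c 3 -w 5 192.168.100.1", expect="fail", timeout=30)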
The last method, sync_resources(), can be used when the controller is run with the -r (--reduce-sync) option enabled. In this case the controller skips resource synchronization for the python task and the user is expected to manually synchronize the required resources by calling this method.
The following example shows the config() method making a /proc setting persistent:
m1 = ctl.get_host("1")
m1.config("/proc/sys/net/ipv4/conf/all/forwarding", "0", True)
The first example shows how to run the netcat tool on a test machine's command line.
m1 = ctl.get_host("1")
m1_if = m1.get_interface("testiface")
ifc_ipaddr = m1_if.get_ip()
m1.run("nc -l %s" % ifc_ipaddr)
As you may have noticed, the example above is not very useful since it would block the task execution, so the following example modifies it a bit and runs the program in the background.
m1 = ctl.get_host("1")
m1_if = m1.get_interface("testiface")
ifc_ipaddr = m1_if.get_ip()
nc_cmd = m1.run("nc -l %s" % ifc_ipaddr, bg=True)
ctl.wait(30)
nc_cmd.kill()
Still not very useful, right? So, the next example further extends the previous one and shows how to run a Test Module from the LNST Test Module library, in this case the NetCat module as the client counterpart to the netcat tool.
m1 = ctl.get_host("1")
m2 = ctl.get_host("2")
listen_port = 1234
m1_if = m1.get_interface("testiface")
ifc_ipaddr = m1_if.get_ip()
nc_cmd = m1.run("nc -l %s %s" % (ifc_ipaddr, listen_port), bg=True)
nc_module = ctl.get_module("NetCat", options={
"addr": ifc_ipaddr,
"port": listen_port,
"duration": 30})
m2.run(nc_module)
nc_cmd.wait()
The last example shows how to run 3rd party tools. We'll be using the tcp_conn tool packaged with LNST.
m1 = ctl.get_host("1")
m2 = ctl.get_host("2")
port_range="10000-10050"
m2_if = m2.get_interface("testiface")
ifc_addr = m2_if.get_ip()
# running the 3rd party tool
server = m2.run("./tcp_listen -p %s -a %s -c" % (port_range, ifc_addr), tool="tcp_conn", bg=True)
client = m1.run("./tcp_connect -p %s -a %s -c" % (port_range, ifc_addr), tool="tcp_conn")
# wait one minute and interrupt the tcp_conn tools
ctl.wait(60)
client.intr()
server.intr()
This example demonstrates the sync_resources() method used to synchronize the IcmpPing and multicast resources to the slave machine with id 1.
m1 = ctl.get_host("1")
m1.sync_resources(modules=["IcmpPing"], tools=["multicast"])
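The enable_service() and disable_service() methods can be used to control system services on a slave during a test; a minimal sketch (the nfs-server service name is an illustrative assumption):
m1 = ctl.get_host("1")

# start a service for the duration of the test, then stop it again
m1.enable_service("nfs-server")
ctl.wait(10)
m1.disable_service("nfs-server")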
The Module API is an API class representing a module; other than that it doesn't do much, it's just a convenient way to store the module name and module options. An instance of this class is passed as the parameter to the HostAPI run() method, it can be used more than once and provides the following methods:
| ModuleAPI | |
| --- | --- |
| get_options(self) | returns a key-value list of the module options |
| set_options(self, options) | appends the options to the module options, this will not overwrite already set options |
| update_options(self, options) | updates the options of the module, this will overwrite matching options |
| unset_option(self, option_name) | removes the option_name from the module options |
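Since a module handle can be reused, a common pattern is to run it several times and only change the options that differ between runs; a minimal sketch based on the IcmpPing module used earlier:
m1 = ctl.get_host("1")
m2 = ctl.get_host("2")
m2_if = m2.get_interface("testiface")
ping_mod = ctl.get_module("IcmpPing", options={
                              "addr": m2_if.get_ip(0),
                              "count": 10,
                              "interval": 0.2})
# first run with the original options
m1.run(ping_mod)
# second run with a higher ping count, the other options stay the same
ping_mod.update_options({"count": 100})
m1.run(ping_mod)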
The ProcessAPI provides the following methods for handling commands executed on the test machines:
| ProcessAPI | |
| --- | --- |
| passed(self) | returns a boolean result of the process |
| get_result(self) | returns the whole command result |
| out(self) | returns the stdout of the command |
| wait(self) | blocking wait until the command returns |
| intr(self) | interrupts the command by sending the SIGINT signal |
| kill(self) | kills the command by sending the SIGKILL signal |
The wait(), intr() and kill() methods are used when a user runs a command or test module in the background.
Keep in mind that after issuing the kill() method the command results are disposed of, since it's considered an intentional termination of the process. That also means that LNST sets the command's result as passed.
The following two examples (the same NetCat and tcp_conn examples from the Host API section above) demonstrate wait() and intr() on background commands.
m1 = ctl.get_host("1")
m2 = ctl.get_host("2")
listen_port = 1234
m1_if = m1.get_interface("testiface")
ifc_ipaddr = m1_if.get_ip()
nc_cmd = m1.run("nc -l %s %s" % (ifc_ipaddr, listen_port), bg=True)
nc_module = ctl.get_module("NetCat", options={
"addr": ifc_ipaddr,
"port": listen_port,
"duration": 30})
m2.run(nc_module)
nc_cmd.wait()
m1 = ctl.get_host("1")
m2 = ctl.get_host("2")
port_range="10000-10050"
m2_if = m2.get_interface("testiface")
ifc_addr = m2_if.get_ip()
# running the 3rd party tool
server = m2.run("./tcp_listen -p %s -a %s -c" % (port_range, ifc_addr), tool="tcp_conn", bg=True)
client = m1.run("./tcp_connect -p %s -a %s -c" % (port_range, ifc_addr), tool="tcp_conn")
# wait one minute and interrupt the tcp_conn tools
ctl.wait(60)
client.intr()
server.intr()
The following example shows the usage of the kill() method. It starts the stress program in the background, keeps it running for one minute and finally kills it.
m1 = ctl.get_host("1")
stress_bg = m1.run("stress -m 20 -c 4", bg=True)
ctl.wait(60)
stress_bg.kill()
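The remaining ProcessAPI methods let you inspect a command after it finishes; a minimal sketch:
import logging

m1 = ctl.get_host("1")

# run a command in the background, wait for it and inspect the result
cmd = m1.run("uname -r", bg=True)
cmd.wait()
if cmd.passed():
    logging.info("kernel version: %s" % cmd.out())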
InterfaceAPI instances provide access to interface attributes such as the hardware address or IP addresses. The class supports the following methods:
| InterfaceAPI | |
| --- | --- |
| get_id(self) | returns the id of the interface that was used in the recipe |
| get_type(self) | returns the type of the interface |
| get_network(self) | returns the network label that was used in the recipe |
| get_driver(self) | returns the driver of the interface as returned by ethtool on the slave |
| get_devname(self) | returns the device name of the interface |
| get_hwaddr(self) | returns the hardware address of the interface |
| get_ip(self, ip_index) | returns an ip address specified by its index, the value is just the address, without the network mask |
| get_ips(self) | returns a list of all ip addresses of the interface, values are tuples of the address and the network mask |
| get_prefix(self, ip_index) | returns the network mask length of the ip address specified by the index |
| get_mtu(self) | returns the MTU (Maximum Transmission Unit) value of the network device |
| set_mtu(self, mtu) | configures the MTU (Maximum Transmission Unit) value of the network device |
| link_stats(self) | returns the link statistics gathered on the slave by calling ip -s link show |
| set_link_up(self) | sets the link of the interface up |
| set_link_down(self) | sets the link of the interface down |
| get_host(self) | returns a HostAPI reference to the Host of this interface |
| get_netns(self) | returns the name of the network namespace as specified in the recipe |
| reset(self, ip=None, netns=None) | reconfigures the interface with new ip addresses and optionally moves it to a different network namespace |
| set_addresses(self, ips) | reconfigures the ip addresses of the interface |
| add_route(self, dest, ipv6=False) | adds a route for this interface |
| add_nhs_route(self, dest, nhs, ipv6=False) | adds a route for this interface with a list of next hops |
| del_route(self, dest, ipv6=False) | deletes a route |
| del_nhs_route(self, dest, nhs, ipv6=False) | deletes an nhs route |
| enable_multicast(self) | adds a route for 224.0.0.0/4 to enable multicast on the interface |
| disable_multicast(self) | removes the route for 224.0.0.0/4 to disable multicast on the interface |
| destroy(self) | removes this software interface |
| add_br_vlan(_self, vlan_tci, pvid=False, untagged=False, self=False, master=False) | calls a BridgeTool command to add a vlan |
| del_br_vlan(_self, vlan_tci, pvid=False, untagged=False, self=False, master=False) | calls a BridgeTool command to remove a vlan |
| get_br_vlans(self) | calls a BridgeTool command to list vlans |
| add_br_fdb(_self, hwaddr, self=False, master=False, vlan_tci=None) | calls a BridgeTool command to add an FDB entry |
| del_br_fdb(_self, hwaddr, self=False, master=False, vlan_tci=None) | calls a BridgeTool command to delete an FDB entry |
| get_br_fdbs(self) | calls a BridgeTool command to list current FDB entries |
| set_br_learning(_self, on=True, self=False, master=False) | calls a BridgeTool command to enable learning |
| set_br_learning_sync(_self, on=True, self=False, master=False) | calls a BridgeTool command to enable learning sync |
| set_br_flooding(_self, on=True, self=False, master=False) | calls a BridgeTool command to enable flooding |
| set_br_state(_self, state, self=False, master=False) | calls a BridgeTool command to set its state |
| set_speed(self, speed) | sets the interface speed using ethtool |
| set_autoneg(self) | enables autonegotiation for the interface using ethtool |
| slave_add(self, slave_id) | enslaves the specified interface |
| slave_del(self, slave_id) | removes the specified slave |
| get_devlink_name(self) | returns the devlink name of the interface |
| get_devlink_port_name(self) | returns the devlink port name of the interface |
| get_ethtool_stats(self) | returns interface statistics as reported by ethtool |
| enable_lldp(self) | enables lldp on the interface using lldptool |
| set_pause_on(self) | enables RX and TX pause for the interface, disables autonegotiation |
| set_pause_off(self) | disables RX and TX pause for the interface, disables autonegotiation |
The following example shows how to use the get_devname() method together with config() to set the forwarding state of the device.
m1 = ctl.get_host("1")
m1_if = m1.get_interface("testiface")
devname = m1_if.get_devname()
logging.info("enabling forwarding on interface %s" % devname)
m1.config(option="/proc/sys/net/ipv4/conf/%s/forwarding" % devname, 1)
In the following example the get_hwaddr() method is used to check that the hardware address of testiface gets into the ARP cache.
m1 = ctl.get_host("1")
m2 = ctl.get_host("2")
m2_if = m2.get_interface("testiface")
ifc_ip = m2_if.get_ip(0)
ifc_hwaddr = m2_if.get_hwaddr()
m1.run("ping -c 5 " + ifc_ip)
m1.run("arp -n | grep -i " + ifc_hwaddr)
This example demonstrates the get_ip() method used to get the IP address of the testiface interface and pass it as a parameter to the netcat program.
m1 = ctl.get_host("1")
m1_if = m1.get_interface("testiface")
ifc_ipaddr = m1_if.get_ip()
m1.run("nc -l %s" % ifc_ipaddr)
This example demonstrates the get_prefix() method used to get the netmask part of an address of the testiface interface.
m1 = ctl.get_host("1")
m1_if = m1.get_interface("testiface")
ifc_addr = m1_if.get_ip(0) # you can omit the ip_index parameter in these methods, 0 is the default
ifc_addr_netmask = m1_if.get_prefix(0)
m1.run("ip route add %s/%s dev gre0" % (ifc_addr, ifc_addr_netmask))
Recently we added support for talking to PerfRepo instances, which can be used for storing the results of performance tests. We use PerfRepo to store results reported by the Netperf test module. This section describes how we do this and how it works.
To write a working task that talks to PerfRepo you first need to do some preparation on the PerfRepo side. All results (called "TestExecutions" in PerfRepo) are bound to a "Test", therefore if you want to save a TestExecution you need to reference an existing Test object. You can easily create one through the web UI, so that shouldn't be a problem. You also need to define "Metrics" for the Test. Here we define a simple convention:
- for each value we want to store we create 4 metrics named: value_name, value_name_min, value_name_max, value_name_deviation
- each TestExecution you save needs to define all the Metrics of the Test it is associated with
PerfRepo is capable of more complex workflows, but for now we want to keep it simple while we figure out the details.
Next, we also have basic support for comparing results against baselines. Baselines in PerfRepo are bound to "Reports", so if you want to use this functionality you will need to create a "Metric history report" and define a new baseline. LNST will use the latest baseline defined for the comparison. LNST automatically tags all the results it creates with a hash value computed from all the tags, parameters and parameter values that are part of the TestExecution object. This can be used to identify results from the same setup, which is required for a proper comparison of performance results.
Now that we have created the necessary objects in PerfRepo we need to somehow pass this information to LNST. To do this we use mapping files. These are simple key = value mapping files that map a key used in the LNST task to an id of a Test or Report object in PerfRepo. You write both into the same file; for Test mappings you can choose any key you like, for Reports you need to use the hash value of a TestExecution object.
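For illustration, a mapping file could look like this; the key name, the TestUid and the Report id below are made up for this example, and the 40-character key on the second line stands for the hash LNST generates for a TestExecution:
throughput_tcp_ipv4 = my_netperf_test_uid
9cf2bb21bfb4a55e2b41a0d6d2e94e5db2be4f89 = 12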
PerfRepoAPI and PerfRepoResult are the two classes you will be using. PerfRepoAPI is the API that simplifies talking to PerfRepo and PerfRepoResult simplifies the creation of TestExecution objects.
| PerfRepoAPI | |
| --- | --- |
| load_mapping(self, file_path) | tries to load mapping information from the specified file |
| get_mapping(self) | returns the currently loaded mapping |
| connected(self) | returns True if a connection to PerfRepo was established |
| connect(self, url, username, password) | connects to a PerfRepo instance |
| new_result(self, mapping_key, name, hash_ignore=[]) | creates a new PerfRepoResult object; mapping_key should map to a TestId or TestUid that identifies a Test object stored in PerfRepo, name is the name of the newly created TestExecution object and hash_ignore is a list of regular expressions defining which tags and parameters should be skipped during hash generation. The result is a PerfRepoResult object |
| save_result(self, result) | creates a new TestExecution object in PerfRepo, result is a PerfRepoResult object |
| get_baseline(self, report_id) | returns information about the newest baseline of the specified Report, returns a PerfRepoBaseline object |
| get_baseline_of_result(self, result) | returns information about the newest baseline relevant for the given PerfRepoResult object. This automatically generates the hash of the result object and uses it as a key in the mapping file to search for a report_id, after which it calls the get_baseline method |
| compare_to_baseline(self, result, report_id, metric_name) | gets the newest baseline from the specified report and compares it to the given result; only the specified metric (with '_min' and '_max') is compared, based on the metric type (less/higher is better). Returns True if the result is better than the baseline. |
| compare_testExecutions(self, first, second, metric_name) | compares two TestExecution objects, this is a helper function used by compare_to_baseline |
| PerfRepoResult | |
| --- | --- |
| add_value(self, val_name, value) | adds a new value to the TestExecution being created, val_name specifies the Metric and value is a float number |
| set_configuration(self, configuration=None) | when configuration is None, calls the ControllerAPI function get_configuration. The configuration dictionary is transformed into a list of (key, value) pairs where key is a dot separated string that describes the structure of the original dictionary. This method is called automatically during PerfRepoResult object initialization. This configuration is sent to PerfRepo as parameters of the TestExecution object and describes the measured result. |
| set_mapping(self, mapping=None) | same as the previous method, but with recipe to pool match information. Also called automatically on object initialization. |
| set_tags, add_tags, set_tag, add_tag, set_parameter, set_parameters | adds one or more tags or parameters |
| set_hash_ignore(self, hash_ignore) | changes what tags and parameters should be ignored during hash generation |
| generate_hash(self, ignore=[]) | generates an sha1 hash of all the tags and parameters, excluding the ones that match at least one pattern from the specified hash ignore list |
| get_testExecution(self) | returns the created TestExecution object, used by PerfRepoAPI when sending data to PerfRepo |
| PerfRepoBaseline | |
| --- | --- |
| get_value(self, val_name) | returns the value named val_name from the associated TestExecution object |
| get_texec(self) | returns the associated TestExecution object |
Example of use
#run some tests and get a result
#first you connect to a PerfRepo instance
perf_api = ctl.connect_PerfRepo(mapping_file, url, username, password)
#you then create an object representing your new result, you need to
#specify the TestName (TestUid) that the result will be associated with
#and a name for this particular TestExecution
result = perf_api.new_result(test_mapping_key, test_run_name)
#you can now add the measured values
result.add_value(metric_name, performance_result)
#and set some tags
result.set_tags(list_of_string_tags)
#when you're finished, save_result() sends the object to PerfRepo which
#stores it in the database
perf_api.save_result(result)
#finally you can compare your result against a baseline, baselines are
#associated with reports so you need to know its id (user ids are not
#supported yet but I expect we will have that in the future)
perf_api.compare_to_baseline(result, report_id_with_a_baseline, metric_name)