automated: kselftest: Rewrite parser to support nesting
The previous kselftest TAP parser has a few shortcomings that prevent
it from working well with the mm kselftests output. mm kselftests are
nested up to 3 levels deep, with the most important information, from
the user's perspective, usually contained in the middle level. But the
parser isn't nesting-aware; instead it flattens test results to
include only the first and last levels of nesting and ignores
everything in the middle. This leads to nondescriptive and confusing
test names in the kernelci UI.
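
To illustrate (a hand-written approximation, not verbatim mm output),
kselftest emits nested TAP as comment lines, so a 3-level run looks
roughly like this, with the informative suite name ("cow") in the
middle level that the old parser dropped:

```
TAP version 13
1..1
# selftests: mm: cow
# # TAP version 13
# # 1..2
# # ok 1 Basic COW after fork() for anon memory
# # not ok 2 vmsplice() before fork(), unmap in parent
ok 1 selftests: mm: cow
```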

Additionally, conflicting test names extracted by the parser are not
made unique, which leads to multiple distinct tests being collapsed
into a single test within the kernelci UI, with its status set to the
last occurrence of the test in the list. So you might have multiple
instances that fail, but if the last one passes, it is shown as a
single passing test. This problem is compounded by the parser's
inability to handle nesting properly: because the important
middle-level information is lost, many more test names look identical,
so even more get collapsed into one.

Solve all of this by rewriting the parser to properly support
recursive parsing. The tree of tests is then flattened into a test
list in depth-first order, where each test name is built from the
names of every level. Further, if duplicate test names exist, append a
"_dup<N>" suffix to the second instance onwards, where N is a unique
number. This guarantees that every test point output by TAP appears in
the kernelci UI.
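
As a standalone sketch of the scheme described above (not the
committed parse-output.py code; the tree contents here are made up),
flattening a nested result tree depth-first and suffixing duplicate
names behaves like this:

```python
# Sketch of depth-first flattening with "_dup<N>" suffixing for
# duplicate names; children are emitted before their parent, and each
# name is prefixed with the names of all its ancestor levels.

def flatten(prefix, results):
    flat = []
    for r in results:
        name = f"{prefix}{r['name']}"
        # Recurse first so children appear before their parent.
        flat += flatten(f"{name}_", r.get("children", []))
        flat.append({"name": name, "result": r["result"]})
    return flat

def dedup(flat):
    # From the second occurrence onwards, append "_dup<N>".
    counts = {}
    for r in flat:
        counts[r["name"]] = counts.get(r["name"], 0) + 1
        if counts[r["name"]] > 1:
            r["name"] += f"_dup{counts[r['name']]}"
    return flat

# Hypothetical 3-level tree, with a duplicate leaf name.
tree = [
    {"name": "mm", "result": "pass", "children": [
        {"name": "cow", "result": "pass", "children": [
            {"name": "anon", "result": "pass"},
            {"name": "anon", "result": "fail"},
        ]},
    ]},
]

for r in dedup(flatten("", tree)):
    print(r["name"], r["result"])
```

The duplicate leaf comes out as `mm_cow_anon` and `mm_cow_anon_dup2`,
so both results survive instead of collapsing into one entry.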

I've tested this against the output for the arm64, ftrace, kvm, and
sigstack kselftests (which don't have much nesting, so they work fine
with the old parser): the outputs from both parsers are identical,
except in a couple of instances where there are duplicate test names
and the new parser appends the "_dup<N>" suffix to make them unique.

I've also tested this against the output from the mm kselftests: the
output from the new parser is as expected, and much more useful than
the old parser's.

The downside is that this implementation depends on the tap.py module
(https://tappy.readthedocs.io). It is packaged for Debian and Ubuntu,
so I've added that package as a dependency. But I couldn't find
anything for CentOS or Fedora, so on those distros the module (and its
dependencies) will likely need to be installed from PyPI:

  $ pip3 install tap.py more-itertools pyyaml

Signed-off-by: Ryan Roberts <[email protected]>
Ryan Roberts committed Feb 10, 2024
1 parent 0a0c385 commit ec7e80a
Showing 2 changed files with 63 additions and 23 deletions.
2 changes: 1 addition & 1 deletion automated/linux/kselftest/kselftest.sh
@@ -135,7 +135,7 @@ install() {
     dist_name
     # shellcheck disable=SC2154
     case "${dist}" in
-        debian|ubuntu) install_deps "sed perl wget xz-utils iproute2" "${SKIP_INSTALL}" ;;
+        debian|ubuntu) install_deps "sed perl wget xz-utils iproute2 python3-tap" "${SKIP_INSTALL}" ;;
         centos|fedora) install_deps "sed perl wget xz iproute" "${SKIP_INSTALL}" ;;
         unknown) warn_msg "Unsupported distro: package install skipped" ;;
     esac
84 changes: 62 additions & 22 deletions automated/linux/kselftest/parse-output.py
@@ -1,7 +1,7 @@
 #!/usr/bin/env python3
 import sys
 import re
 
+from tap import parser
 
 def slugify(line):
non_ascii_pattern = r"[^A-Za-z0-9_-]+"
@@ -10,26 +10,66 @@ def slugify(line):
         r"_-", "_", re.sub(r"(^_|_$)", "", re.sub(non_ascii_pattern, "_", line))
     )
 
+def parse_nested_tap(string):
+    results = []
+
+    def uncomment(line):
+        # All of the input lines should be comments and begin with #, but let's
+        # be cautious; don't do anything if the line doesn't begin with #.
+        if len(line) > 0 and line[0] == '#':
+            return line[1:].strip()
+        return line
+
-tests = ""
-for line in sys.stdin:
-    if "# selftests: " in line:
-        tests = slugify(line.replace("\n", "").split("selftests:")[1])
-    elif re.search(r"^.*?not ok \d{1,5} ", line):
-        match = re.match(r"^.*?not ok [0-9]+ (.*?)$", line)
-        ascii_test_line = slugify(re.sub("# .*$", "", match.group(1)))
-        output = f"{tests}_{ascii_test_line} fail"
-        if f"selftests_{tests}" in output:
-            output = re.sub(r"^.*_selftests_", "", output)
-        print(f"{output}")
-    elif re.search(r"^.*?ok \d{1,5} ", line):
-        match = re.match(r"^.*?ok [0-9]+ (.*?)$", line)
-        if "# skip" in match.group(1).lower():
-            ascii_test_line = slugify(re.sub("# skip", "", match.group(1).lower()))
-            output = f"{tests}_{ascii_test_line} skip"
+    def make_name(name, directive, ok, skip):
+        # Some of this is to maintain compatibility with the old parser.
+        if name.startswith('selftests:'):
+            name = name[10:]
+        if ok and skip and directive.lower().startswith('skip'):
+            directive = directive[4:]
-        else:
-            ascii_test_line = slugify(match.group(1))
-            output = f"{tests}_{ascii_test_line} pass"
-        if f"selftests_{tests}" in output:
-            output = re.sub(r"^.*_selftests_", "", output)
-        print(f"{output}")
+        else:
+            directive = ''
+        name = f"{name} {directive}".strip()
+        if name == '':
+            name = '<unknown>'
+        return slugify(name)
+
+    def make_result(ok, skip):
+        return ('skip' if skip else 'pass') if ok else 'fail'
+
+    output = ''
+    ps = parser.Parser()
+    for l in ps.parse_text(string):
+        if l.category == 'test':
+            results.append({
+                'name': make_name(l.description, l.directive.text, l.ok, l.skip),
+                'result': make_result(l.ok, l.skip),
+                'children': parse_nested_tap(output),
+            })
+            output = ''
+        elif l.category == 'diagnostic':
+            output += f'{uncomment(l.text)}\n'
+
+    return results
+
+def flatten_results(prefix, results):
+    ret = []
+    for r in results:
+        test = f"{prefix}{r['name']}"
+        children = flatten_results(f"{test}_", r['children'])
+        ret += children + [{'name': test, 'result': r['result']}]
+    return ret
+
+def make_names_unique(results):
+    namecounts = {}
+    for r in results:
+        name = r['name']
+        namecounts[name] = namecounts.get(name, 0) + 1
+        if namecounts[name] > 1:
+            r['name'] += f'_dup{namecounts[name]}'
+
+if __name__ == "__main__":
+    results = parse_nested_tap(sys.stdin.read())
+    results = flatten_results('', results)
+    make_names_unique(results)
+    for r in results:
+        print(f"{r['name']} {r['result']}")
