04 Getting Started

Kit Siu edited this page Mar 16, 2022 · 22 revisions

This page walks you through getting started with RiB running as an instance in a Docker container on localhost. We will use an isolated Python virtual environment. To get started, install the command-line interface (RACK CLI) following the instructions here. The steps below demonstrate how to use both the CLI and the graphical interface SPARQLgraph, using the Turnstile example.

Initialize RiB for the Turnstile Example

Perform the following steps to initialize RiB for the Turnstile example. Here the RiB instance is running in a Docker container on localhost.

$ cd RACK/cli
$ . venv/bin/activate
(venv) $ ./setup-owl.sh
(venv) $ ./setup-turnstile.sh

setup-owl.sh copies files out of the RiB and saves them onto your local hard drive. setup-turnstile.sh loads the data model files and nodegroups for the Turnstile example into RiB.

Load the Turnstile data into RiB

Once initialized, data can be loaded into RiB. We created a sample ingestion package for the Turnstile Example.

(venv) $ cd RACK/Turnstile-Example/Turnstile-IngestionPackage
(venv) $ ./Load-TurnstileData.sh

RiB's Graphical Interface

Up to now we have interacted with RiB using the CLI. RiB also comes with a graphical user interface called SPARQLgraph. Visit your RiB's welcome page by entering one of the following in your web browser's address bar and hitting Enter.

  • http://localhost if running Docker
  • http://<IP address> if using VirtualBox

This brings up the Welcome page, which identifies the RACK release version and release notes, lists RACK services, and provides links to the RACK User Guide and Documentation.

SPARQLgraph-welcomepage.png

From the Welcome page, click on the SPARQLgraph link to bring up the view below. The left-hand side is the ontology model pane, and the larger pane to the right is the main canvas for building nodegroups.

SPARQLgraph-main.png

Here's a quick summary of terms you'll want to understand about SPARQLgraph:

  • it works on top of a semantic triplestore. In this installation we use Apache Fuseki Server.
  • it manages "connections" to the triplestore, which include at least one graph that holds the ontology model and one that holds data
  • it manages queries (SELECT, INSERT, and others) through a construct called "nodegroups", which are displayed graphically on the main canvas
  • nodegroups can be stored in the nodegroup store or as json files
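Under the hood, every nodegroup drawn on the canvas compiles down to a plain SPARQL query against the triplestore. As a rough illustration only (the query shape here is a simplification, not the exact query SPARQLgraph generates), a SELECT scoped to a named graph looks like this:

```python
# Build a simplified SPARQL SELECT of the kind a nodegroup compiles to.
# The graph URI matches the Turnstile data graph used later in this
# walkthrough; the triple pattern is illustrative, not the real ontology.

def build_select(graph_uri: str, limit: int = 10) -> str:
    """Return a SPARQL SELECT string scoped to a single named graph."""
    return (
        "SELECT DISTINCT ?s ?p ?o\n"
        f"FROM <{graph_uri}>\n"
        "WHERE { ?s ?p ?o }\n"
        f"LIMIT {limit}"
    )

query = build_select("http://rack001/turnstiledata")
print(query)
# A query like this could be POSTed to the Fuseki SPARQL endpoint;
# SPARQLgraph manages that round trip for you.
```

SPARQLgraph hides this compilation step, which is why you can build and run queries without writing SPARQL by hand.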

Let's load a nodegroup from the store.

  • choose Nodegroup->Open the store... from the menu.
  • click on any entry that starts with "query". This page walks through "query Hazard Structure".
  • hit the "Load" button.
  • you may be asked to save the connection locally. Hit "Save" to store this connection in your browser for use later.
  • if you are prompted with "Nodegroup is from a different SPARQL connection," choose "From Nodegroup".

SPARQLgraph-hazards.png

Here are a few things to notice. The left-hand ontology model pane is now populated with classes from the model; clicking a class name expands it to show its properties and subclasses. (Note: the ontology is subject to change since this is an active research program under DARPA ARCOS, so the ontology in the picture may differ from the current release.) To the right, the main canvas displays the nodegroup graphically. The + and - buttons at the bottom-right of the main canvas allow you to zoom in and out, and a refresh button recenters the nodegroup on the canvas; the directional arrows at the bottom-left allow you to pan around the canvas. In the top-middle, Conn identifies the connection currently being used by this nodegroup. Clicking the "Connection" menu item brings up a dialog showing the names of the ontology model graph and the data graph saved under the connection called "RACK local fuseki". We'll walk through this in the next section.

Connection

Choose Connection->Load from the menu, and you will see a dialog that looks like this:

SPARQLgraph-conn-load-model.png

This dialog shows the named profiles in the left panel, with fields at the right for configuring and building new connections. The most important features of a connection are the graph names and SPARQL endpoint(s) that specify where the ontology model and data are loaded. Here we see the model graph is http://rack001/model.

To view the data graph, click the "1" next to "data". Change the "Graph:" entry to http://rack001/turnstiledata. Hit "Submit".

SPARQLgraph-conn-load-data.png

You have now set up the connection to run queries on the Turnstile data in the next step.
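Conceptually, the connection you just configured pairs a model graph with a data graph on the same SPARQL endpoint. The sketch below captures that shape as a plain dictionary; the field names are hypothetical (not SPARQLgraph's actual saved-connection schema) and the endpoint URL is an assumption for a localhost Docker install. Only the graph URIs come from this walkthrough.

```python
# Illustrative shape of a triplestore connection: one endpoint, a graph
# holding the ontology model, and a graph holding the ingested data.
# Field names are hypothetical; the endpoint URL is an assumption.

connection = {
    "name": "RACK local fuseki",
    "endpoint": "http://localhost:3030/RACK",  # assumed Fuseki-style URL
    "model_graphs": ["http://rack001/model"],
    "data_graphs": ["http://rack001/turnstiledata"],
}

print(connection["data_graphs"][0])
```

Keeping the model and data in separate named graphs is what lets the same ontology serve multiple data sets, each loaded under its own data graph.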

Run the Query

Below the main canvas is the type of query to be run. The default is "select distinct". This is what we will walk through in the next section. Hit "Run". The "query Hazard Structure" contains a runtime constraint which will bring up a dialog that looks like this:

SPARQLgraph-rtconstraints.png

This is allowing you to narrow the values for "hazard". You may ignore this dialog or try it out by doing the following:

  • keep the default "=" to search only for information about hazards equal to a particular id
  • hit the ">>" button to ask SPARQLgraph to generate a query to find all existing values. Choose one.
  • hit "Run" to continue
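The constraint dialog is doing simple value filtering before the results reach you. A minimal sketch of the "=" case follows; the hazard ids mirror the example results elsewhere on this page, the third row is hypothetical, and the filtering logic is an assumption about behavior, not SPARQLgraph code.

```python
# Sketch of an "=" runtime constraint: keep only rows whose hazard
# matches the chosen value. This is not SPARQLgraph's implementation.

rows = [
    {"testcase": "TC-1-1", "req": "HLR-1", "hazard": "H-1.2", "source": "H-1"},
    {"testcase": "TC-1-2", "req": "HLR-1", "hazard": "H-1.2", "source": "H-1"},
    {"testcase": "TC-2-1", "req": "HLR-2", "hazard": "H-2.1", "source": "H-2"},  # hypothetical row
]

chosen = "H-1.2"  # the value you might pick from the ">>" list
filtered = [r for r in rows if r["hazard"] == chosen]
print(len(filtered))  # → 2
```

The ">>" button saves you from typing a value blind: it queries the triplestore for the values that actually exist, so the constraint always matches something.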

Running a "select distinct" query returns a table of results containing a row for every unique permutation of values. Results may be saved using the link provided at the top-left of the table, or the drop-down at the top-right of the results.

SPARQLgraph-hazard-results.png

Feel free to try out other queries from the nodegroup store and run them. Descriptions of the preloaded queries are found in the RACK Predefined Queries page.

Queries can also be run using the RACK CLI. The command below runs the hazard structure query: it exports data from the data graph http://rack001/turnstiledata (the same graph used by the SPARQLgraph connection "RACK local fuseki") via the nodegroup whose json file is "query Hazard structure.json". By default the CLI targets a RiB instance running in a Docker container on localhost. See the RACK CLI -> Export data section for more options, such as specifying the base URL, setting runtime constraints, and saving results to a CSV file.

(venv) $ rack data export --data-graph http://rack001/turnstiledata "query Hazard structure"
testcase  req    hazard  source
--------  -----  ------  ------
TC-1-1    HLR-1  H-1.2   H-1 
TC-1-2    HLR-1  H-1.2   H-1 
TC-1-3    HLR-1  H-1.2   H-1 
TC-1-4    HLR-1  H-1.2   H-1 
...
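The CLI's tabular output is easy to post-process. For example, the rows shown above can be tallied per hazard with a few lines of Python; the parsing assumes the whitespace-separated format with a header and dashed separator line, as in the example output.

```python
from collections import Counter

# Tally test cases per hazard from the `rack data export` table above.
# Assumes whitespace-separated columns, a header row, and a dashed
# separator line, as shown in the example output.

table = """\
testcase  req    hazard  source
--------  -----  ------  ------
TC-1-1    HLR-1  H-1.2   H-1
TC-1-2    HLR-1  H-1.2   H-1
TC-1-3    HLR-1  H-1.2   H-1
TC-1-4    HLR-1  H-1.2   H-1
"""

lines = table.splitlines()
header = lines[0].split()
rows = [dict(zip(header, line.split())) for line in lines[2:]]

counts = Counter(row["hazard"] for row in rows)
print(counts["H-1.2"])  # → 4
```

In practice you would save the export to a CSV file and read it with the csv module instead of pasting table text, but the shape of the data is the same.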

Deeper Dives