# Home

*ekatchko edited this page Nov 6, 2020*
This repository contains the docker-compose scripts needed to set up a project-usage
monitoring system for development, staging, and production environments.
It can set up the site stack, the portal stack, or the full stack.
This repository is part of the master thesis "Accounting and Reporting of
OpenStack Cloud Instances via Prometheus".

This page gives an overview of how all components interact with each other. It also provides a step-by-step guide to the queries described in the master thesis, so you can understand how the described concepts have been implemented and executed.

This page gives a short overview of how to use the Grafana dashboard.
The script `bin/project_usage-compose.py` is a wrapper around `docker-compose` and provides two profiles:
```
# bin/project_usage-compose.py --help
usage: project_usage-compose.py [-h] [-v] [-k] [-p | -n] {dev,prod} ...

Helper script to collect the different *.env files and prepare the docker-
compose call in production or dev mode, site and/or production part. Non
`*.default.env` files will be prioritized and all unknown arguments will be
appended to the docker-compose call, i.e. `logs`, `start` ...

optional arguments:
  -h, --help       show this help message and exit
  -v, --verbose    Increase verbosity (default: False)
  -k, --keep-env   Do not parse any *.env files, only use existing environment
                   variables. (default: False)
  -p, --print-env  Print the parsed environment variables separated by
                   newline, can be used to export it into the current shell by
                   `export $(bin/project_usage-compose.py print-env | xargs)`
                   (default: False)
  -n, --dry-run    Do not exec docker-compose, only print call. (default:
                   False)

Subcommands:
  Different launch modi
  {dev,prod}

MIT @ gilbus
```
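The help text states that non-`*.default.env` files take priority over their `*.default.env` counterparts. A minimal sketch of that prioritization in Python follows; the function name, file layout, and parsing rules are illustrative assumptions, not the script's actual code.

```python
from pathlib import Path

def collect_env(env_dir=Path(".")):
    """Merge *.env files, letting non-default files override defaults.

    Illustrative sketch of the prioritization described in the help text;
    the real script's parsing rules may differ.
    """
    defaults = sorted(env_dir.glob("*.default.env"))
    overrides = [p for p in sorted(env_dir.glob("*.env")) if p not in defaults]
    merged = {}
    for path in defaults + overrides:  # overrides are parsed last, so they win
        for line in path.read_text().splitlines():
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            merged[key.strip()] = value.strip()
    return merged
```

Parsing the default files first and the overrides last means a key defined in both, e.g. a port, ends up with the non-default value, which matches the documented behavior.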
```
# bin/project_usage-compose.py dev --help
usage: project_usage-compose.py dev [-h]

Run the whole `project_usage` in development mode. Usage data from multiple
sites will be emulated by the exporter services and collected by the `site_*`
services. The `portal_prometheus` will scrape their data and store them inside
the InfluxDB, which makes them accessible to the `credits` service. All
Prometheus and Grafana instances are launched with port mappings.

optional arguments:
  -h, --help  show this help message and exit
```
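The main help text also notes that all unknown arguments are appended to the `docker-compose` call (e.g. `logs`, `start`). A rough sketch of that forwarding pattern with `argparse.parse_known_args` is shown below; the actual script's parser and subcommand handling will differ.

```python
import argparse

def build_compose_call(argv):
    """Parse known wrapper flags; forward everything else to docker-compose.

    Sketch only -- option names are taken from the help text above, but the
    real script's parser is more involved.
    """
    parser = argparse.ArgumentParser()
    parser.add_argument("-n", "--dry-run", action="store_true")
    parser.add_argument("mode", choices=["dev", "prod"])
    args, unknown = parser.parse_known_args(argv)
    # Unknown arguments (e.g. `logs`, `start`) are appended verbatim.
    return ["docker-compose"] + unknown, args
```

With this pattern, `build_compose_call(["dev", "logs", "-f"])` yields the command `["docker-compose", "logs", "-f"]`, so any `docker-compose` subcommand can be passed straight through the wrapper.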
```
# bin/project_usage-compose.py prod --help
usage: project_usage-compose.py prod [-h] [--staging | --staging-dev]
                                     {portal,site}

Run either the `site` or the `portal` stack in production mode.

positional arguments:
  {portal,site}

optional arguments:
  -h, --help     show this help message and exit
  --staging      Start services in staging mode. See .staging.default.env and
                 staging/ for additional settings and config files. Used in a
                 staging area where the site and portal stacks run on the same
                 machine, but the exporter is connected to a real OpenStack
                 instance. The external network `portal_default` is required,
                 where a separate HAProxy ought to provide access to
                 `portal_prometheus` and `portal_grafana`.
  --staging-dev  Same as --staging but for deploying to the local machine. An
                 additional network called `fake_internet` is needed to
                 connect the site_prometheus_proxy and the portal_prometheus
                 instance to emulate the public network. Create it via
                 `docker network create fake_internet`.
```