This repository contains the application for a controlled natural field experiment. The application provides 4 main modules:
- custom user interface for the participants of the experiment, developed for 2 controlled groups and 3 phases (container `next`)
- customized content management system (container `strapi`)
- micro-service for enviroCar data import and processing (container `updater`)
- experiment live-data monitoring tool Grafana (container `grafana`)
The application was used by 11 users over a period of one and a half months in early 2021 in order to collect research data.
Before you proceed any further, make sure that your machine satisfies all the requirements below and that Docker is installed with Docker Compose included.
|        | Version |
| ------ | ------- |
| Docker | v3.x    |
| Python | v3.x    |
| Node   | v14.x   |
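A quick way to verify the installed versions from a terminal (a minimal sketch; the exact commands may differ depending on your OS and how Docker Compose was installed):

```sh
docker --version            # Docker engine
docker-compose --version    # Docker Compose (bundled with Docker Desktop or standalone)
python3 --version
node --version
```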
There are 3 main folders in the root of the project:

- `/docker`: contains configuration files for Docker Compose for the development and production environments
- `/data`: contains scripts that were used for data anonymization before submission, as well as a Jupyter Notebook file for dataset mining
- `/src`: contains all the source code, split into the following sub-folders:
  - `/src/grafana`: Grafana configuration file that the Docker image uses so that Grafana starts on port 3001 instead of the default 3000, where the frontend of this project already runs
  - `/src/next`: frontend part of the application, written in React.js using Next.js with NextAuth, TailwindCSS and Apollo GraphQL
  - `/src/strapi`: customized Strapi CMS for content management
  - `/src/updater`: FastAPI micro-service that pulls data from enviroCar, calculates the eco score and saves the processed data; also contains the Jupyter Notebooks that were created before the actual implementation of some updater functionalities
Go through the following steps before starting the project in either development or production environment:
- Set up appropriate environment variables for all project modules (for the files with environment variables, an example is always available in the same folder as the original file, with the filename ending in `.env.example`); a hedged sketch of such values is shown after this checklist
  - `/src/grafana/.db.env`: grafanadb credentials
  - `/src/next/.env`: Google Sign-In credentials, NextAuth.js URL and strapi container URL
  - `/src/strapi/.env`: strapi and updater container URLs, strapidb and grafanadb credentials, `ON_VPS` toggle
  - `/src/strapi/.db.env`: strapidb credentials
  - `/src/updater/app/lib/utils/constants.py`: strapi URL and access token (a tutorial on how to create it is below), grafanadb credentials
- Create a project for Google Sign-In inside the Google Developers Console and save the client ID and client secret for the authentication configuration in the next step
- Appropriately configure strapi
  - Run the container in either the development or production environment
  - Create a token
  - Create a user with the username `updater`
  - Create a phase
  - Create recommendations (before the 2nd phase of the experiment starts)
  - Create products (before the 2nd phase of the experiment starts)
  - Set up permissions on the endpoints for both the Authenticated and Public roles as follows:
    - role Authenticated:
      - section Application:
        - envirocar: usercredentialsvalid
        - phase: find
        - purchases: update
        - recommendations: findone
        - synchronizations: create, findone
        - tracks: count, create, find, top10, top10position, top10stats
      - section Users-permissions:
        - auth: connect
        - user: findone, me, update
    - role Public:
      - section Application:
        - phase: find
        - tracks: top10, top10stats
      - section Users-permissions:
        - auth: callback, connect, emailconfirmation, forgotpassword, register, resetpassword
        - user: me
  - Turn off registration with e-mail
  - Enable the Google provider
    - redirect URL: http://strapi-host:strapi-port/api/auth/callback/google
- Appropriately configure updater by opening the constants file (`/src/updater/app/lib/utils/constants.py`) and setting the token to the one you have just created in strapi in the previous step
- Open Grafana and configure a data source for InfluxDB
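As a purely illustrative example of the values collected in the first step of this checklist, the sketch below shows what two of the environment files might look like. Every variable name, port and value here is a placeholder or an assumption (Strapi's default port 1337, Next.js's default port 3000); the authoritative names are in the corresponding `.env.example` files.

```sh
# /src/next/.env - illustrative placeholders only, see the real .env.example
GOOGLE_CLIENT_ID=<client id from the Google Developers Console>
GOOGLE_CLIENT_SECRET=<client secret from the Google Developers Console>
NEXTAUTH_URL=http://localhost:3000    # assumption: frontend on its default port
STRAPI_URL=http://localhost:1337      # assumption: strapi container on its default port

# /src/grafana/.db.env - illustrative placeholders only, see the real .env.example
GRAFANADB_USER=<grafanadb user>
GRAFANADB_PASSWORD=<grafanadb password>
```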
Once you have managed to configure all the application modules, you can start developing and deploying with the following workflow:
- Develop new features in development environment
- Test new features in development environment
- Test new features in production environment
- Release new version to production environment
I recommend starting up the application modules for development in the order described below.
```sh
cd docker/dev
docker-compose build
docker-compose up
```
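Optionally, if you prefer to keep the development containers running in the background, the standard Docker Compose flags can be used (a small sketch; `next` stands for one of the service names listed in the modules overview above):

```sh
docker-compose up -d          # start the containers in the background
docker-compose logs -f next   # follow the logs of a single service, e.g. the frontend
docker-compose down           # stop and remove the containers
```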
- Optionally set up a Python virtual environment in which you want to install the dependencies
```sh
pip3 install fastapi uvicorn requests pydantic geopandas pandas numpy matplotlib pydeck ipython folium seaborn scipy shapely branca scikit-learn geopy statistics plotly datetime influxdb python-dateutil osmnx
cd src/updater/app
uvicorn main:app --reload --port 1338
```
```sh
cd src/strapi
yarn install
yarn dev
```
```sh
cd src/next
yarn install
yarn dev
```
The only requirement for starting up the application in the production environment is to start all Docker containers with the following sequence of commands:
```sh
cd docker/prod
docker-compose build
docker-compose up
```
With the containers running, you can also:

- hook into a Docker container's CLI:

  ```sh
  docker exec -it <HASH> /bin/sh; exit
  ```
- export data from the databases that the solution uses, in order to analyze them locally in seconds, thanks to the tutorial that I prepared (a hedged sketch follows below)
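There is no single universal export command, since it depends on the database engines behind the `strapidb` and `grafanadb` containers. A minimal sketch, assuming `strapidb` runs PostgreSQL and `grafanadb` runs InfluxDB 1.x; the container names and credentials below are placeholders, so check the Compose files and `.env` files for the real ones:

```sh
# assuming strapidb is PostgreSQL
docker exec -t <strapidb-container> pg_dump -U <db user> <db name> > strapidb_dump.sql

# assuming grafanadb is InfluxDB 1.x
docker exec -t <grafanadb-container> influxd backup -portable /tmp/influx_backup
docker cp <grafanadb-container>:/tmp/influx_backup ./influx_backup
```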
This project is under the MIT license, which is great! Read more inside LICENSE.