- Getting started
- Development and installation
- Deployment for production
- Authentication and magic tokens
- Local development URLs:
  - Frontend, built with Docker, with routes handled based on the path: http://localhost
  - Backend, JSON based web API based on OpenAPI: http://localhost/api/
  - Automatic interactive documentation with Swagger UI (from the OpenAPI backend): http://localhost/docs
  - Alternative automatic documentation with ReDoc (from the OpenAPI backend): http://localhost/redoc
  - Flower, administration of Celery tasks: http://localhost:5555
  - Traefik UI, to see how the routes are being handled by the proxy: http://localhost:8090
Note: The first time you start your stack, it might take a minute for it to be ready, while the backend waits for the database to be ready and configures everything. You can check the logs to monitor it.
To check the logs, run:

```bash
docker-compose logs
```

To check the logs of a specific service, add the name of the service, e.g.:

```bash
docker-compose logs backend
```
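To follow the logs live, you can add the `-f` / `--follow` flag, e.g.:

```bash
docker-compose logs -f backend
```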
If your Docker is not running on `localhost` (so the URLs above wouldn't work), check the sections below on Development with Docker Toolbox and Development with a custom IP.
The backend and celery containers will fail to start if a proper Mongo URI is not configured. Please ensure that either:

- `MONGO_DATABASE` is properly set in the initial Cookiecutter setup phase, or
- `MONGO_DATABASE_URI` has been set in `{{ cookiecutter.project_slug }}/.env`; leaving the initial value as `mongodb` will establish a connection to the Docker instance of MongoDB created by the stack.

To learn more about how to generate a MongoDB URI, please look at the docs on Connecting to your MongoDB Atlas Cluster.
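For example, the relevant line in `{{ cookiecutter.project_slug }}/.env` might look like one of the following (the Atlas host and credentials below are placeholders, not real values):

```
# Use the MongoDB container provided by this stack
MONGO_DATABASE_URI=mongodb

# Or point at a MongoDB Atlas cluster instead (placeholder values)
# MONGO_DATABASE_URI=mongodb+srv://<username>:<password>@<cluster>.mongodb.net
```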
The `docker-compose` file has a simple setup for a MongoDB server to run in a Docker container. It'll be exposed on port `27017` and reachable by setting `MONGO_DATABASE_URI` to `mongodb`.

Since the intention of this generator is to work with scalable production environments very quickly, we provide a container of MongoDB, but we strongly advise you to connect to an Atlas cluster.

To see how to use MongoDB with Docker, read through this article on set-up steps.
By default, the dependencies are managed with Hatch; go there and install it.
From `./backend/app/` you can install all the dependencies with:

```bash
$ hatch env prune
$ hatch env create production
```
Because Hatch doesn't have a version lock file (as Poetry does), it is helpful to prune when you rebuild to avoid any sort of dependency hell. Then you can start a shell session with the new environment with:
```bash
$ hatch shell
```
Next, open your editor at `./backend/app/` (instead of the project root: `./`), so that you see an `./app/` directory with your code inside. That way, your editor will be able to find all the imports, etc. Make sure your editor uses the environment you just created with Hatch.

For Visual Studio Code, from the shell, launch an appropriate development environment with:

```bash
$ code .
```
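If your editor doesn't detect the environment automatically, `hatch env find` prints the directory of the current environment, which you can point your editor's Python interpreter setting at:

```bash
$ hatch env find
```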
Modify or add ODMantic models in `./backend/app/app/models/`, Pydantic schemas in `./backend/app/app/schemas/`, API endpoints in `./backend/app/app/api/`, and CRUD (Create, Read, Update, Delete) utils in `./backend/app/app/crud/`. The easiest might be to copy the ones for Items (models, endpoints, and CRUD utils) and update them to your needs, as sketched below.
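For example, to scaffold a new resource by copying the Item files (a sketch only; the exact file names in your generated project may differ):

```bash
$ cd ./backend/app/app
$ cp models/item.py models/thing.py
$ cp schemas/item.py schemas/thing.py
$ cp crud/crud_item.py crud/crud_thing.py
```

Then rename the classes inside each copied file and wire up the new endpoints in `./backend/app/app/api/`.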
Add and modify tasks for the Celery worker in `./backend/app/app/worker.py`.

If you need to install any additional package for the worker, add it to the file `./backend/app/celeryworker.dockerfile`.
During development, you can change Docker Compose settings that will only affect the local development environment in the file `docker-compose.override.yml`.

The changes to that file only affect the local development environment, not the production environment. So, you can add "temporary" changes that help the development workflow.
For example, the directory with the backend code is mounted as a Docker "host volume", mapping the code you change live to the directory inside the container. That allows you to test your changes right away, without having to build the Docker image again. It should only be done during development; for production, you should build the Docker image with a recent version of the backend code. But during development, it allows you to iterate very fast. Keep in mind that if you save a Python file with a syntax error, the process will break and exit, and the container will stop. After that, you can restart the container by fixing the error and running again:
```bash
$ docker-compose up -d
```
There is also a commented out `command` override; you can uncomment it and comment the default one. It makes the backend container run a process that does "nothing", but keeps the container alive. That allows you to get inside your running container and execute commands inside, for example a Python interpreter to test installed dependencies, start the development server that reloads when it detects changes, or start a Jupyter Notebook session.
To get inside the container with a `bash` session you can start the stack with:

```bash
$ docker-compose up -d
```

and then `exec` inside the running container:

```bash
$ docker-compose exec backend bash
```
You should see an output like:

```
root@7f2607af31c3:/app#
```

That means that you are in a `bash` session inside your container, as a `root` user, under the `/app` directory.
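From there you can, for example, open a Python interpreter to test installed dependencies, or start the development server with live reload (a sketch, assuming the FastAPI application is exposed as `app.main:app` and served on port 80 inside the container):

```bash
$ uvicorn app.main:app --reload --host 0.0.0.0 --port 80
```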
NOTE: Tests have not been updated in the current version, so they are likely to fail.
To test the backend run:

```bash
$ DOMAIN=backend sh ./scripts/test.sh
```
The file `./scripts/test.sh` has the commands to generate a testing `docker-stack.yml` file, start the stack, and test it.

The tests run with Pytest; modify and add tests in `./backend/app/app/tests/`.
If you use GitHub Actions, the tests will run automatically.
Start the stack with this command:

```bash
DOMAIN=backend sh ./scripts/test-local.sh
```
The `./backend/app` directory is mounted as a "host volume" inside the docker container (set in the file `docker-compose.dev.volumes.yml`).
You can rerun the tests on live code:

```bash
docker-compose exec backend /app/tests-start.sh
```

If your stack is already up and you just want to run the tests, you can use:

```bash
docker-compose exec backend /app/tests-start.sh
```

That `/app/tests-start.sh` script just calls `pytest` after making sure that the rest of the stack is running. If you need to pass extra arguments to `pytest`, you can pass them to that command and they will be forwarded.

For example, to stop on first error:

```bash
docker-compose exec backend bash /app/tests-start.sh -x
```
Because the test scripts forward arguments to `pytest`, you can enable test coverage HTML report generation by passing `--cov-report=html`.
To run the local tests with coverage HTML reports:

```bash
DOMAIN=backend sh ./scripts/test-local.sh --cov-report=html
```

To run the tests in a running stack with coverage HTML reports:

```bash
docker-compose exec backend bash /app/tests-start.sh --cov-report=html
```
If you know about Python Jupyter Notebooks, you can take advantage of them during local development.
The `docker-compose.override.yml` file sends a variable `env` with a value `dev` to the build process of the Docker image (during local development) and the `Dockerfile` has steps to then install and configure Jupyter inside your Docker container.
So, you can enter the running Docker container:

```bash
docker-compose exec backend bash
```
And use the environment variable `$JUPYTER` to run a Jupyter Notebook with everything configured to listen on the public port (so that you can use it from your browser).
It will output something like:

```
root@73e0ec1f1ae6:/app# $JUPYTER
[I 12:02:09.975 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
[I 12:02:10.317 NotebookApp] Serving notebooks from local directory: /app
[I 12:02:10.317 NotebookApp] The Jupyter Notebook is running at:
[I 12:02:10.317 NotebookApp] http://(73e0ec1f1ae6 or 127.0.0.1):8888/?token=f20939a41524d021fbfc62b31be8ea4dd9232913476f4397
[I 12:02:10.317 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[W 12:02:10.317 NotebookApp] No web browser found: could not locate runnable browser.
[C 12:02:10.317 NotebookApp]
    Copy/paste this URL into your browser when you connect for the first time,
    to login with a token:
        http://(73e0ec1f1ae6 or 127.0.0.1):8888/?token=f20939a41524d021fbfc62b31be8ea4dd9232913476f4397
```
You can copy that URL and modify the "host" to be `localhost` or the domain you are using for development (e.g. `local.dockertoolbox.tiangolo.com`); in the case above, it would be, e.g.:

```
http://localhost:8888/?token=f20939a41524d021fbfc62b31be8ea4dd9232913476f4397
```

and then open it in your browser.
You will have a full Jupyter Notebook running inside your container, with direct access to your database by the container name (`db`), etc. So, you can just run sections of your backend code directly, for example with VS Code Python Jupyter Interactive Window or Hydrogen.
If you are using Docker Toolbox in Windows or macOS instead of Docker for Windows or Docker for Mac, Docker will be running in a VirtualBox Virtual Machine, and it will have a local IP different than `127.0.0.1`, which is the IP address for `localhost` in your machine.

The address of your Docker Toolbox virtual machine would probably be `192.168.99.100` (that is the default).
As this is a common case, the domain `local.dockertoolbox.tiangolo.com` points to that (private) IP, just to help with development (actually `dockertoolbox.tiangolo.com` and all its subdomains point to that IP). That way, you can start the stack in Docker Toolbox and use that domain for development. You will be able to open that URL in Chrome and it will communicate with your local Docker Toolbox directly, as if it was a cloud server, including CORS (Cross Origin Resource Sharing).
If you used the default CORS enabled domains while generating the project, `local.dockertoolbox.tiangolo.com` was configured to be allowed. If you didn't, you will need to add it to the list in the variable `BACKEND_CORS_ORIGINS` in the `.env` file.
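For reference, that variable holds a list of allowed origins, so the line in `.env` might look something like this (a sketch; keep any origins already present):

```
BACKEND_CORS_ORIGINS=["http://localhost", "http://localhost.tiangolo.com", "http://local.dockertoolbox.tiangolo.com"]
```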
To configure it in your stack, follow the section Change the development "domain" below, using the domain `local.dockertoolbox.tiangolo.com`.
After performing those steps you should be able to open http://local.dockertoolbox.tiangolo.com and it will be served by your stack in your Docker Toolbox virtual machine.
Check all the corresponding available URLs in the section at the end.
You might want to use something different than `localhost` as the domain, for example if you are having problems with cookies that need a subdomain and Chrome is not allowing you to use `localhost`.
In that case, you have two options: you could modify your system `hosts` file with the instructions below in Development with a custom IP, or you can just use `localhost.tiangolo.com`; it is set up to point to `localhost` (to the IP `127.0.0.1`) and all its subdomains too. And as it is an actual domain, the browsers will store the cookies you set during development, etc.
If you used the default CORS enabled domains while generating the project, `localhost.tiangolo.com` was configured to be allowed. If you didn't, you will need to add it to the list in the variable `BACKEND_CORS_ORIGINS` in the `.env` file.
To configure it in your stack, follow the section Change the development "domain" below, using the domain `localhost.tiangolo.com`.
After performing those steps you should be able to open http://localhost.tiangolo.com and it will be served by your stack in `localhost`.
Check all the corresponding available URLs in the section at the end.
If you are running Docker in an IP address different than `127.0.0.1` (`localhost`) and `192.168.99.100` (the default of Docker Toolbox), you will need to perform some additional steps. That will be the case if you are running a custom Virtual Machine, a secondary Docker Toolbox, or your Docker is located in a different machine in your network.
In that case, you will need to use a fake local domain (`dev.{{cookiecutter.domain_main}}`) and make your computer think that the domain is served by the custom IP (e.g. `192.168.99.150`).
If you used the default CORS enabled domains, `dev.{{cookiecutter.domain_main}}` was configured to be allowed. If you want a custom one, you need to add it to the list in the variable `BACKEND_CORS_ORIGINS` in the `.env` file.
- Open your `hosts` file with administrative privileges using a text editor:
  - Note for Windows: If you are in Windows, open the main Windows menu, search for "notepad", right click on it, and select the option "open as Administrator" or similar. Then click the "File" menu, "Open file", go to the directory `c:\Windows\System32\Drivers\etc\`, select the option to show "All files" instead of only "Text (.txt) files", and open the `hosts` file.
  - Note for Mac and Linux: Your `hosts` file is probably located at `/etc/hosts`; you can edit it in a terminal running `sudo nano /etc/hosts`.
- In addition to the contents it might have, add a new line with the custom IP (e.g. `192.168.99.150`), a space character, and your fake local domain: `dev.{{cookiecutter.domain_main}}`. The new line might look like:

```
192.168.99.150 dev.{{cookiecutter.domain_main}}
```

- Save the file.
  - Note for Windows: Make sure you save the file as "All files", without a `.txt` extension. By default, Windows tries to add the extension. Make sure the file is saved as is, without extension.
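On Mac and Linux, you can also append that line from a terminal instead of opening an editor, using the example IP and domain from above:

```bash
echo "192.168.99.150 dev.{{cookiecutter.domain_main}}" | sudo tee -a /etc/hosts
```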
That will make your computer think that the fake local domain is served by that custom IP, so when you open that URL in your browser, it will talk directly to your locally running server when it is asked to go to `dev.{{cookiecutter.domain_main}}`, thinking it is a remote server while it is actually running in your computer.
To configure it in your stack, follow the section Change the development "domain" below, using the domain `dev.{{cookiecutter.domain_main}}`.
After performing those steps you should be able to open http://dev.{{cookiecutter.domain_main}} and it will be served by your stack in `localhost`.
Check all the corresponding available URLs in the section at the end.
If you need to use your local stack with a different domain than `localhost`, you need to make sure the domain you use points to the IP where your stack is set up. See the different ways to achieve that in the sections above (i.e. using Docker Toolbox with `local.dockertoolbox.tiangolo.com`, using `localhost.tiangolo.com`, or using `dev.{{cookiecutter.domain_main}}`).
To simplify your Docker Compose setup, for example so that the API docs (Swagger UI) knows where your API is, you should let it know you are using that domain for development. You will need to edit one line in each of two files.
- Open the file located at `./.env`. It would have a line like:

```
DOMAIN=localhost
```

- Change it to the domain you are going to use, e.g.:

```
DOMAIN=localhost.tiangolo.com
```
That variable will be used by the Docker Compose files.
- Now open the file located at `./frontend/.env`. It would have a line like:

```
VUE_APP_DOMAIN_DEV=localhost
```

- Change that line to the domain you are going to use, e.g.:

```
VUE_APP_DOMAIN_DEV=localhost.tiangolo.com
```
That variable will make your frontend communicate with that domain when interacting with your backend API, when the other variable `VUE_APP_ENV` is set to `development`.
After changing the two lines, you can restart your stack with:

```bash
docker-compose up -d
```
and check all the corresponding available URLs in the section at the end.
See the frontend README for instructions.
If you are developing an API-only app and want to remove the frontend, you can do it easily:
- Remove the `./frontend` directory.
- In the `docker-compose.yml` file, remove the whole service / section `frontend`.
- In the `docker-compose.override.yml` file, remove the whole service / section `frontend`.
Done, you have a frontend-less (api-only) app. 🔥 🚀
If you want, you can also remove the `FRONTEND` environment variables from:

- `.env`
- `.github/workflows/actions.yml`
- `./scripts/*.sh`

But it would be only to clean them up; leaving them won't really have any effect either way.
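If you do want to clean them up, something like `grep` can locate the leftover references first:

```bash
grep -rn "FRONTEND" .env .github/workflows/actions.yml ./scripts/
```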
You can deploy the stack to a Docker Swarm mode cluster with a main Traefik proxy, set up using the ideas from DockerSwarm.rocks, to get automatic HTTPS certificates, etc.
And you can use CI (continuous integration) systems to do it automatically.
But you have to configure a couple of things first.
This stack expects the public Traefik network to be named `traefik-public`, just as in the tutorials in DockerSwarm.rocks.

If you need to use a different Traefik public network name, update it in the `docker-compose.yml` files, in the section:
```yaml
networks:
  traefik-public:
    external: true
```
Change `traefik-public` to the name of the used Traefik network. And then update it in the file `.env`:

```
TRAEFIK_PUBLIC_NETWORK=traefik-public
```
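If that network doesn't exist in your cluster yet, you can create it as described in the DockerSwarm.rocks tutorials:

```bash
docker network create --driver=overlay traefik-public
```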
You need to make sure that each service (Docker container) that uses a volume is always deployed to the same Docker "node" in the cluster, that way it will preserve the data. Otherwise, it could be deployed to a different node each time, and each time the volume would be created in that new node before starting the service. As a result, it would look like your service was starting from scratch every time, losing all the previous data.
That's especially important for a service running a database. But the same problem would apply if you were saving files in your main backend service (for example, if those files were uploaded by your users, or if they were created by your system).
To solve that, you can put constraints in the services that use one or more data volumes (like databases) to make them be deployed to a Docker node with a specific label. And of course, you need to have that label assigned to one (only one) of your nodes.
For each service that uses a volume (databases, services with uploaded files, etc.) you should have a label constraint in your `docker-compose.yml` file.
To make sure that your labels are unique per volume per stack (for example, that they are not the same for `prod` and `stag`) you should prefix them with the name of your stack and then use the same name of the volume.
Then you need to have those constraints in your `docker-compose.yml` file for the services that need to be fixed with each volume.
To be able to use different environments, like `prod` and `stag`, you should pass the name of the stack as an environment variable, like:

```bash
STACK_NAME={{cookiecutter.docker_swarm_stack_name_staging}} sh ./scripts/deploy.sh
```
To use and expand that environment variable inside the `docker-compose.yml` files you can add the constraints to the services like:
```yaml
version: '3'
services:
  db:
    volumes:
      - 'app-db-data:/var/lib/postgresql/data/pgdata'
    deploy:
      placement:
        constraints:
          - node.labels.${STACK_NAME?Variable not set}.app-db-data == true
```
Note the `${STACK_NAME?Variable not set}`. In the script `./scripts/deploy.sh`, the `docker-compose.yml` would be converted and saved to a file `docker-stack.yml` containing:
```yaml
version: '3'
services:
  db:
    volumes:
      - 'app-db-data:/var/lib/postgresql/data/pgdata'
    deploy:
      placement:
        constraints:
          - node.labels.{{cookiecutter.docker_swarm_stack_name_main}}.app-db-data == true
```
Note: The `${STACK_NAME?Variable not set}` means "use the environment variable `STACK_NAME`, but if it is not set, show an error `Variable not set`".
If you add more volumes to your stack, you need to make sure you add the corresponding constraints to the services that use that named volume.
Then you have to create those labels in some nodes in your Docker Swarm mode cluster. You can use `docker-auto-labels` to do it automatically.
You can use `docker-auto-labels` to automatically read the placement constraint labels in your Docker stack (Docker Compose file) and assign them to a random Docker node in your Swarm mode cluster if those labels don't exist yet.
To do that, you can install `docker-auto-labels`:

```bash
pip install docker-auto-labels
```
And then run it passing your `docker-stack.yml` file as a parameter:

```bash
docker-auto-labels docker-stack.yml
```
You can run that command every time you deploy, right before deploying, as it doesn't modify anything if the required labels already exist.
If you don't want to use `docker-auto-labels`, or for any reason you want to manually assign the constraint labels to specific nodes in your Docker Swarm mode cluster, you can do the following:
- First, connect via SSH to your Docker Swarm mode cluster.

- Then check the available nodes with:

```bash
$ docker node ls
```

You would see an output like:

```
ID                          HOSTNAME           STATUS  AVAILABILITY  MANAGER STATUS
nfa3d4df2df34as2fd34230rm * dog.example.com    Ready   Active        Reachable
2c2sd2342asdfasd42342304e   cat.example.com    Ready   Active        Leader
c4sdf2342asdfasd4234234ii   snake.example.com  Ready   Active        Reachable
```

- Then choose a node from the list. For example, `dog.example.com`.
- Add the label to that node. Use as label the name of the stack you are deploying followed by a dot (`.`) followed by the named volume, and as value, just `true`, e.g.:

```bash
docker node update --label-add {{cookiecutter.docker_swarm_stack_name_main}}.app-db-data=true dog.example.com
```
- Then you need to do the same for each stack version you have. For example, for staging you could do:

```bash
docker node update --label-add {{cookiecutter.docker_swarm_stack_name_staging}}.app-db-data=true cat.example.com
```
There are 3 steps:
- Build your app images
- Optionally, push your custom images to a Docker Registry
- Deploy your stack
Here are the steps in detail:
- Build your app images
- Set these environment variables, right before the next command:

```bash
TAG=prod
FRONTEND_ENV=production
```

- Use the provided `scripts/build.sh` file with those environment variables:

```bash
TAG=prod FRONTEND_ENV=production bash ./scripts/build.sh
```
- Optionally, push your images to a Docker Registry
Note: if the deployment Docker Swarm mode "cluster" has more than one server, you will have to push the images to a registry or build the images on each server, so that when each of the servers in your cluster tries to start the containers it can get the Docker images for them, either by pulling them from a Docker Registry or because it already has them built locally.
If you are using a registry and pushing your images, you can omit running the previous script and instead use this one, in a single shot.
- Set these environment variables:

```bash
TAG=prod
FRONTEND_ENV=production
```

- Use the provided `scripts/build-push.sh` file with those environment variables:

```bash
TAG=prod FRONTEND_ENV=production bash ./scripts/build-push.sh
```
- Deploy your stack
- Set these environment variables:

```bash
DOMAIN={{cookiecutter.domain_main}}
TRAEFIK_TAG={{cookiecutter.traefik_constraint_tag}}
STACK_NAME={{cookiecutter.docker_swarm_stack_name_main}}
TAG=prod
```

- Use the provided `scripts/deploy.sh` file with those environment variables:

```bash
DOMAIN={{cookiecutter.domain_main}} \
TRAEFIK_TAG={{cookiecutter.traefik_constraint_tag}} \
STACK_NAME={{cookiecutter.docker_swarm_stack_name_main}} \
TAG=prod \
bash ./scripts/deploy.sh
```
If you change your mind and, for example, want to deploy everything to a different domain, you only have to change the `DOMAIN` environment variable in the previous commands. If you wanted to add a different version / environment of your stack, like "`preproduction`", you would only have to set `TAG=preproduction` in your command and update these other environment variables accordingly. And it would all work; that way you could have different environments and deployments of the same app in the same cluster.
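For example, a hypothetical `preproduction` deployment could look like this (the domain and tag values here are purely illustrative):

```bash
DOMAIN=preproduction.{{cookiecutter.domain_main}} \
TRAEFIK_TAG={{cookiecutter.traefik_constraint_tag}}-preproduction \
STACK_NAME={{cookiecutter.docker_swarm_stack_name_main}}-preproduction \
TAG=preproduction \
bash ./scripts/deploy.sh
```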
Building and pushing is done with the `docker-compose.yml` file, using the `docker-compose` command. The file `docker-compose.yml` uses the file `.env` with default environment variables. And the scripts set some additional environment variables as well.
The deployment requires using `docker stack` instead of `docker-swarm`, and it can't read environment variables or `.env` files. Because of that, the `deploy.sh` script generates a file `docker-stack.yml` with the configurations from `docker-compose.yml`, injecting the environment variables in it, and then uses it to deploy the stack.
You can do the process by hand based on those same scripts if you want. The general structure is like this:
```bash
# Use the environment variables passed to this script, as TAG and FRONTEND_ENV
# And re-create those variables as environment variables for the next command
TAG=${TAG?Variable not set} \
# Set the environment variable FRONTEND_ENV to the same value passed to this script,
# with a default value of "production" if nothing else was passed
FRONTEND_ENV=${FRONTEND_ENV-production} \
# The actual command that does the work: docker-compose
docker-compose \
# Pass the file that should be used; setting explicitly docker-compose.yml avoids the
# default of also using docker-compose.override.yml
-f docker-compose.yml \
# Use the docker-compose sub command named "config", it just uses the docker-compose.yml
# file passed to it and prints their combined contents
# Put those contents in a file "docker-stack.yml", with ">"
config > docker-stack.yml

# The previous only generated a docker-stack.yml file,
# but didn't do anything with it yet

# docker-auto-labels makes sure the labels used for constraints exist in the cluster
docker-auto-labels docker-stack.yml

# Now this command uses that same file to deploy it
docker stack deploy -c docker-stack.yml --with-registry-auth "${STACK_NAME?Variable not set}"
```
In order to run properly in GitHub, you need to provide `secrets.DOCKERHUB_USERNAME` and `secrets.DOCKERHUB_PASSWORD` in your GitHub repository secrets. Read more here on how that is done.
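If you use the GitHub CLI, one way to add those secrets is the following (you can also add them through the repository's web settings under Settings → Secrets):

```bash
gh secret set DOCKERHUB_USERNAME
gh secret set DOCKERHUB_PASSWORD
```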
If you use GitHub Actions, the included `actions.yml` can automatically deploy it. You may need to update it according to your GitHub configurations. Please check the `actions.yml` file for details on deployment steps.
If you use any other CI / CD provider, you can base your deployment on that `actions.yml` file, as all the actual script steps are performed in `bash` scripts that you can easily re-use.
GitHub Actions is configured assuming 2 environments following GitHub flow:

- `prod` (production) from the `production` branch.
- `stag` (staging) from the `master` branch.
If you need to add more environments (for example, you could imagine using a client-approved `preprod` branch), you can just copy the configurations in `actions.yml` for `stag` and rename the corresponding variables. The Docker Compose files and environment variables are configured to support as many environments as you need, so that you only need to modify `actions.yml` (or whichever CI system configuration you are using).
Support for deployment to the desired host domain has been commented out, as the functionality has not been tested by the MongoDB team. Feel free to uncomment it and follow the deployment instructions at your own discretion.
There is a main `docker-compose.yml` file with all the configurations that apply to the whole stack; it is used automatically by `docker-compose`.
And there's also a `docker-compose.override.yml` with overrides for development, for example to mount the source code as a volume. It is used automatically by `docker-compose` to apply overrides on top of `docker-compose.yml`.
These Docker Compose files use the `.env` file containing configurations to be injected as environment variables in the containers.
They also use some additional configurations taken from environment variables set in the scripts before calling the `docker-compose` command.
It is all designed to support several "stages", like development, building, testing, and deployment. It also allows deploying to different environments like staging and production (and you can add more environments very easily).
They are designed to have the minimum repetition of code and configurations, so that if you need to change something, you have to change it in the minimum number of places. That's why the files use environment variables that get auto-expanded. That way, if for example you want to use a different domain, you can call the `docker-compose` command with a different `DOMAIN` environment variable instead of having to change the domain in several places inside the Docker Compose files.
Also, if you want to have another deployment environment, say `preprod`, you just have to change environment variables, but you can keep using the same Docker Compose files.
The `.env` file is the one that contains all your configurations, generated keys and passwords, etc.
Depending on your workflow, you might want to exclude it from Git, for example if your project is public. In that case, you would have to make sure to set up a way for your CI tools to obtain it while building or deploying your project.
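If you choose to exclude it, one minimal way is:

```bash
echo ".env" >> .gitignore
```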
One way to do it could be to add each environment variable to your CI/CD system, and update the `docker-compose.yml` file to read that specific env var instead of reading the `.env` file.
These are the URLs that will be used and generated by the project.
Production URLs, from the branch `production`:

- Frontend: https://{{cookiecutter.domain_main}}
- Backend: https://{{cookiecutter.domain_main}}/api/
- Automatic Interactive Docs (Swagger UI): https://{{cookiecutter.domain_main}}/docs
- Automatic Alternative Docs (ReDoc): https://{{cookiecutter.domain_main}}/redoc
- PGAdmin: https://pgadmin.{{cookiecutter.domain_main}}
- Flower: https://flower.{{cookiecutter.domain_main}}
Staging URLs, from the branch `master`:

- Frontend: https://{{cookiecutter.domain_staging}}
- Backend: https://{{cookiecutter.domain_staging}}/api/
- Automatic Interactive Docs (Swagger UI): https://{{cookiecutter.domain_staging}}/docs
- Automatic Alternative Docs (ReDoc): https://{{cookiecutter.domain_staging}}/redoc
- PGAdmin: https://pgadmin.{{cookiecutter.domain_staging}}
- Flower: https://flower.{{cookiecutter.domain_staging}}
Development URLs, for local development:

- Frontend: http://localhost:3000
- Backend: http://localhost/api/
- Automatic Interactive Docs (Swagger UI): http://localhost/docs
- Automatic Alternative Docs (ReDoc): http://localhost/redoc
- PGAdmin: http://localhost:5050
- Flower: http://localhost:5555
- Traefik UI: http://localhost:8090
Development URLs, for local development with Docker Toolbox:

- Frontend: http://local.dockertoolbox.tiangolo.com
- Backend: http://local.dockertoolbox.tiangolo.com/api/
- Automatic Interactive Docs (Swagger UI): http://local.dockertoolbox.tiangolo.com/docs
- Automatic Alternative Docs (ReDoc): http://local.dockertoolbox.tiangolo.com/redoc
- PGAdmin: http://local.dockertoolbox.tiangolo.com:5050
- Flower: http://local.dockertoolbox.tiangolo.com:5555
- Traefik UI: http://local.dockertoolbox.tiangolo.com:8090
Development URLs, for local development with a custom IP:

- Frontend: http://dev.{{cookiecutter.domain_main}}
- Backend: http://dev.{{cookiecutter.domain_main}}/api/
- Automatic Interactive Docs (Swagger UI): http://dev.{{cookiecutter.domain_main}}/docs
- Automatic Alternative Docs (ReDoc): http://dev.{{cookiecutter.domain_main}}/redoc
- PGAdmin: http://dev.{{cookiecutter.domain_main}}:5050
- Flower: http://dev.{{cookiecutter.domain_main}}:5555
- Traefik UI: http://dev.{{cookiecutter.domain_main}}:8090
Development URLs, for local development in `localhost` with a custom domain:

- Frontend: http://localhost.tiangolo.com
- Backend: http://localhost.tiangolo.com/api/
- Automatic Interactive Docs (Swagger UI): http://localhost.tiangolo.com/docs
- Automatic Alternative Docs (ReDoc): http://localhost.tiangolo.com/redoc
- PGAdmin: http://localhost.tiangolo.com:5050
- Flower: http://localhost.tiangolo.com:5555
- Traefik UI: http://localhost.tiangolo.com:8090
This project was generated using https://github.com/mongodb-labs/full-stack-fastapi-mongodb with:

```bash
pip install cookiecutter
cookiecutter https://github.com/mongodb-labs/full-stack-fastapi-mongodb.git
```
You can check the variables used during generation in the file `cookiecutter-config-file.yml`.
You can generate the project again with the same configurations used the first time.
That would be useful if, for example, the project generator (`mongodb-labs/full-stack-fastapi-mongodb`) was updated and you wanted to integrate or review the changes.
You could generate a new project with the same configurations as this one in a parallel directory. And compare the differences between the two, without having to overwrite your current code but being able to use the same variables used for your current project.
To achieve that, the generated project includes the file `cookiecutter-config-file.yml` with the current variables used. You can use that file while generating a new project to reuse all those variables.
For example, run:

```bash
$ cookiecutter --config-file ./cookiecutter-config-file.yml --output-dir ../project-copy https://github.com/mongodb-labs/full-stack-fastapi-mongodb
```
That will use the file `cookiecutter-config-file.yml` in the current directory (in this project) to generate a new project inside a sibling directory `project-copy`.