
Add docker-compose #195 (Draft)
wants to merge 5 commits into base: develop
40 changes: 40 additions & 0 deletions .devcontainer/devcontainer.json
@@ -0,0 +1,40 @@
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/docker-existing-docker-compose
{
  "name": "Teknologr devcontainer",

  // Update the 'dockerComposeFile' list if you have more compose files or use different names.
  // The .devcontainer/docker-compose.yml file contains any overrides you need/want to make.
  "dockerComposeFile": [
    "../compose.yaml"
  ],

  // The 'service' property is the name of the service for the container that VS Code should
  // use. Update this value and .devcontainer/docker-compose.yml to the real service name.
  "service": "server",

  // The optional 'workspaceFolder' property is the path VS Code should open by default when
  // connected. This is typically a file mount in .devcontainer/docker-compose.yml.
  "workspaceFolder": "/workspaces/${localWorkspaceFolderBasename}"

  // Features to add to the dev container. More info: https://containers.dev/features.
  // "features": {},

  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],

  // Uncomment the next line if you want to start specific services in your Docker Compose config.
  // "runServices": [],

  // Uncomment the next line if you want to keep your containers running after VS Code shuts down.
  // "shutdownAction": "none",

  // Uncomment the next line to run commands after the container is created.
  // "postCreateCommand": "cat /etc/os-release",

  // Configure tool-specific properties.
  // "customizations": {},

  // Uncomment to connect as an existing user other than the container default. More info: https://aka.ms/dev-containers-non-root.
  // "remoteUser": "devcontainer"
}
24 changes: 24 additions & 0 deletions compose.yaml
@@ -0,0 +1,24 @@
networks:
  backend:

services:
  server:
    build: .
    ports:
      - "8888:8888"
    depends_on:
      - db
    security_opt:
      - seccomp:unconfined
    networks:
      - backend
    volumes:
      - ./:/app
  db:
    image: "postgres"
    networks:
      - backend
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
    privileged: true
37 changes: 37 additions & 0 deletions dockerfile
@@ -0,0 +1,37 @@
# FROM python:3.10.12-ubuntu22.04.3
FROM ubuntu:22.04
Contributor: Curious as to why the python image isn't used?

Contributor Author: There were so many odd bugs that I opted to use the same image the TF servers are running. I may try to roll back to the python image.

USER root

# Required for older version of docker to solve problem with `RUN apt update`.
# Try removing first and if it works don't worry.
RUN sed -i -e 's/^APT/# APT/' -e 's/^DPkg/# DPkg/' \
/etc/apt/apt.conf.d/docker-clean

RUN apt update && \
    apt install -y python3.10 && \
    apt install -y python3-pip && \
    pip3 install --upgrade pip && \
    apt install -y git && \
Contributor: Probably makes no sense to run git inside the container.

Contributor Author: There's a package in the requirements file that's loaded from git, which seems to require git to be installed.

Contributor: OK, so for a dev container you probably wouldn't run pip inside the container anyway, since the files would just be mounted into it from your host machine. You should only need to install what is needed at runtime.

If you want to build an image that can be deployed to a server, that's a different story.

Contributor Author: Hmm, interesting. I'm still learning about dev containers, so feedback is welcome. Isn't the idea to have the same environment when developing as in production (where possible), so that the container would install the packages? In most cases they'd be cached, or pip would just check that no updates were needed.

My assumption, and how I understood dev containers, is that you don't need to do anything before starting to develop on a project: everything is installed automatically when you open it, and both the dev container and the shippable "production" container contain the same base instructions for setting up the container. I'm happy to be wrong; this is just the picture I got.
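(For reference, the install-on-open behavior described here is roughly what the devcontainer `postCreateCommand` hook provides. A hypothetical addition to the devcontainer.json in this PR, not part of the diff:)

```jsonc
// Hypothetical: install requirements once when the container is created,
// rather than baking them into the image at build time.
"postCreateCommand": "pip install -r requirements.txt"
```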

Contributor (@tlangens, Feb 29, 2024): You have the right idea, but in practice it's a bit more complicated. The docker image provides a consistent runtime environment, ensuring that the OS and its installed packages in production are the same as what you're working with during development.

Then you have the build environment, which is where your code is compiled, packaged, minified, or whatever. Some people use an entirely separate docker image for the build environment, typically through multi-stage builds. That way, things used only during the build stage (e.g. git or pip) don't end up in the runtime image, where they aren't needed anymore.

It's true that always doing your development work in the build image yields more consistent results, avoiding the need to consider whether your workstation has the same version of git or pip installed, but in practice doing that rigorously is a pain in the ass. Most tools (like pip) produce fairly consistent results from a given requirements file even if the version is a little off. As long as you're using a consistent runtime environment, things should be fine.

In a professional setting you would have CI set up that builds your image and runs your tests in a consistent environment, ensuring there are no issues, and then produces your production runtime image from that environment.
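(A minimal sketch of the multi-stage pattern described above, applied to this repo's layout. The stage names and the slim base image are illustrative, not part of the PR; the final CMD mirrors the dockerfile in the diff:)

```dockerfile
# Build stage: has git (needed for the git-sourced requirement) and pip.
FROM python:3.10-slim AS build
RUN apt-get update && apt-get install -y --no-install-recommends git
COPY requirements.txt .
# Install into a prefix that can be copied wholesale into the runtime stage.
RUN pip install --prefix=/install -r requirements.txt

# Runtime stage: only the installed packages and the source, no build tools.
FROM python:3.10-slim AS runtime
COPY --from=build /install /usr/local
COPY . /app
WORKDIR /app
CMD ["python3", "teknologr/manage.py", "runserver", "8888"]
```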

Contributor: Adding a bit more: what you want to avoid is having to constantly rebuild the image during development. As you write new code, install libraries, etc., you want to be able to run the code without needing to run docker build. That's why you mount the files rather than copying them in at the build stage.

It's possible to run tools like git or pip inside a docker image too, thereby avoiding the need for devs to install those tools on their own machines, but as I mentioned, that's usually a pain in the ass. You'll have trouble integrating it with your IDE, Git UI, or other tools you might be using.

I'll also point out that it's often necessary to have a different dev runtime image anyway, because you might need tools or configurations for debugging or profiling that you don't want in the production image.
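(One common way to get a distinct dev runtime image is a compose override file pointing at a dedicated Dockerfile stage. A hypothetical sketch, assuming a `compose.dev.yaml` layered via `docker compose -f compose.yaml -f compose.dev.yaml up` and a `dev` stage defined in the Dockerfile; neither exists in this PR:)

```yaml
services:
  server:
    build:
      context: .
      target: dev     # a Dockerfile stage carrying debuggers/profilers
    volumes:
      - ./:/app       # mount sources so code changes need no rebuild
```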

    apt install -y locales locales-all

RUN apt install -y \
    libsasl2-dev \
    python3-dev \
    libldap2-dev \
    libssl-dev \
    libpq-dev

COPY requirements.txt .

RUN pip install --progress-bar off -r requirements.txt

COPY . .
Contributor: If you're using this as a dev container you shouldn't copy any files into the image; just mount them as a volume instead. That way you don't need to rebuild the image when there are changes.

Contributor Author: Yeah, true. The main task was not to create the dev container, but I will look at that.


EXPOSE 8080

ENTRYPOINT [ "python3" ]

RUN ["python3", "teknologr/manage.py", "migrate"]

RUN ["python3", "teknologr/manage.py", "makemigrations", "--check", "--dry-run"]

CMD ["teknologr/manage.py", "runserver", "8888" ]
3 changes: 3 additions & 0 deletions requirements.txt
@@ -1,3 +1,6 @@
setuptools~=69.1.1
wheel~=0.42.0

dj-database-url~=0.5.0
Django~=3.2.20
django-ajax-selects~=2.0.0