
Merge pull request #192 from Machine-Learning-for-Medical-Language/mikix/gpu-docker-tweaks

docker: tweak docker builds, mostly to get gpus working
mikix committed Sep 26, 2023
2 parents 7645b19 + acb30bd commit 5a9921a
Showing 2 changed files with 8 additions and 4 deletions.
6 changes: 4 additions & 2 deletions docker/Dockerfile.gpu
@@ -1,11 +1,13 @@
-FROM nvidia/cuda:12.2.0-runtime-ubi8 as base
+FROM nvidia/cuda:11.7.1-runtime-ubi8 as base
 
 ARG cnlpt_version
 
 RUN yum -y install python39 python39-pip
-RUN pip3.9 install cython torch
+RUN pip3.9 install cython
 RUN pip3.9 install cnlp-transformers==$cnlpt_version
 
+# pytorch can't find the cudnn library with our setup, so just point at it directly
+ENV LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/lib/python3.9/site-packages/nvidia/cudnn/lib/"
 
 WORKDIR /opt/cnlp/
 # this copy is to support the preload of train models in the downstream images
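Not part of the commit itself, but a quick way to confirm the cuDNN workaround above takes effect is to run torch inside the built GPU image and ask whether CUDA and cuDNN are visible. This is only a sketch: it assumes the `smartonfhir/cnlp-transformers:termexists-latest-gpu` tag from the maintainer notes below, an NVIDIA driver plus nvidia-container-toolkit on the host, and that overriding the image entrypoint is acceptable for a one-off check.

```shell
# Sanity check (not from this commit): does torch inside the GPU image see a
# CUDA device and the cuDNN library that LD_LIBRARY_PATH now points at?
docker run --rm --gpus all --entrypoint python3.9 \
  smartonfhir/cnlp-transformers:termexists-latest-gpu \
  -c "import torch; print(torch.cuda.is_available(), torch.backends.cudnn.version())"
```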
6 changes: 4 additions & 2 deletions docker/MAINTAINER.md
@@ -14,9 +14,11 @@ Pass `--help` to see all your options.
 
 ### Local Testing
 Use the `./build.py` script to build the image you care about,
-and then run something like the following, depending on your model:
-```shell
+and then run something like one of the following, depending on your model and processor:
+
+```
 docker run --rm -p 8000:8000 smartonfhir/cnlp-transformers:termexists-latest-cpu
+docker run --rm -p 8000:8000 --gpus all smartonfhir/cnlp-transformers:termexists-latest-gpu
 ```
 
 With that specific example of the `termexists` model, you could smoke test it like so:
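The smoke-test steps referenced in that last line are collapsed out of this hunk, so they are not reproduced here. As a placeholder check only, assuming the container publishes its REST API on port 8000 as the `-p 8000:8000` mapping suggests, you could at least confirm the server answers:

```shell
# Placeholder check, not the documented smoke test: print whatever HTTP status
# the published port returns; any response at all means the server is up.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000/
```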
