Localstack AI is a containerized environment for experimenting with and building AI applications. It supports running open source foundation models in Ollama and Llamafile. Also included are a minimal Jupyter notebook, PostgreSQL with pgvector, and Python packages for OpenAI and AWS Bedrock.
There isn't an official image for pgvector, so you can build it from source. Clone the repository and change into it:
$ git clone https://github.com/pgvector/pgvector.git
$ cd ./pgvector
Localstack AI supports both Ollama and Llamafile and creates a separate stack for each. You will need to download an LLM for each stack.
Download models for Ollama.
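For example, a model can be pulled into the running Ollama container; the container name ollama and the model llama3 below are illustrative and depend on the compose file:
$ docker exec -it ollama ollama pull llama3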
Download models in GGUF format for Llamafile from Hugging Face.
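For example, to fetch a quantized model with wget (this particular repository and file are only an illustration; any GGUF model should work):
$ wget https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/resolve/main/llama-2-7b-chat.Q4_K_M.gguf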
To start a stack with built images:
$ docker compose -f ollama_stack.yml up
or
$ docker compose -f llamafile_stack.yml up
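Either stack can also be run in the background with Docker Compose's standard -d flag:
$ docker compose -f ollama_stack.yml up -d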
To start the Ollama stack and build PostgreSQL with pgvector:
$ docker compose -f buildstack-pgvector-build.yml up
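Once the stack is up, you can check that the extension built and loads; a minimal sketch, assuming the PostgreSQL service is named postgres with the default postgres user (both names depend on the compose file):
$ docker compose -f buildstack-pgvector-build.yml exec postgres psql -U postgres -c 'CREATE EXTENSION IF NOT EXISTS vector;'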
To stop a running stack, pass the same compose file to down:
$ docker compose -f ollama_stack.yml down
or
$ docker compose -f llamafile_stack.yml down
To stop the stack that builds PostgreSQL with pgvector:
$ docker compose -f buildstack-pgvector-build.yml down
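To also remove named volumes, such as a PostgreSQL data volume if the compose file defines one, add Docker Compose's standard -v flag:
$ docker compose -f ollama_stack.yml down -v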
Docker Compose streams each service's output to stdout, including the link and token for the Jupyter notebook. Copy the appropriate link and paste it into a browser. For example:
jupyter-1 | To access the server, open this file in a browser:
jupyter-1 | file:///home/jovyan/.local/share/jupyter/runtime/jpserver-6-open.html
jupyter-1 | Or copy and paste one of these URLs:
jupyter-1 | http://9faab81c039c:8888/lab?token=4d0ce25e42fd2211f4aa0b68536ff5b95b15145053d81b80
jupyter-1 | http://127.0.0.1:8888/lab?token=4d0ce25e42fd2211f4aa0b68536ff5b95b15145053d81b80
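If the token scrolls past, the same output can be retrieved later from the service logs; assuming the Jupyter service is named jupyter, as the jupyter-1 prefix above suggests:
$ docker compose -f ollama_stack.yml logs jupyter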