Commit

docs: rewrite existing documentation into VitePress docs site

dairiley committed Jan 19, 2024
1 parent dda6e4e commit e0f0772
Showing 29 changed files with 477 additions and 406 deletions.
347 changes: 2 additions & 345 deletions README.md

Large diffs are not rendered by default.

Binary file removed assets/architecture.png
Binary file not shown.
35 changes: 29 additions & 6 deletions docs/.vitepress/config.mts
Original file line number Diff line number Diff line change
@@ -7,25 +7,48 @@ export default defineConfig({
   base: "/aws-genai-llm-chatbot/",
   themeConfig: {
     // https://vitepress.dev/reference/default-theme-config
+    search: {
+      provider: 'local'
+    },
+    socialLinks: [
+      { icon: 'github', link: 'https://github.com/aws-samples/aws-genai-llm-chatbot' }
+    ],
     nav: [
       { text: 'Home', link: '/' },
-      { text: 'Guide', link: '/guide/getting-started' }
+      { text: 'About', link: '/about/welcome' },
+      {
+        text: 'Guide',
+        items: [
+          { text: 'Deploy', link: '/guide/deploy' },
+          { text: 'Developer Guide', link: '/guide/developers' },
+        ]
+      },
+      { text: 'Documentation', link: '/documentation/model-requirements' }
     ],
     sidebar: [
+      { text: 'About', items: [
+        { text: 'The Project', link: '/about/welcome' },
+        { text: 'Features', link: '/about/features' },
+        { text: 'Architecture', link: '/about/architecture' },
+        { text: 'Authors & Credits', link: '/about/authors' },
+        { text: 'License Information', link: '/about/license' },
+        ]
+      },
       {
-        text: 'Getting Started',
+        text: 'Guide',
         items: [
-          { text: 'Welcome', link: '/guide/getting-started' }
+          { text: 'Deploy', link: '/guide/deploy' },
+          { text: 'Developer Guide', link: '/guide/developers' }
         ]
       },
       {
-        text: 'Example',
+        text: 'Documentation',
         items: [
-          { text: 'Lipsum', link: '/guide/lipsum' },
-          { text: 'Lipsum 2', link: '/guide/lipsum2' }
+          { text: 'Model Requirements', link: '/documentation/model-requirements' },
+          { text: 'Inference Script', link: '/documentation/inference-script' },
+          { text: 'Document Retrieval', link: '/documentation/retriever' },
+          { text: 'AppSync', link: '/documentation/appsync' },
+          { text: 'Precautions', link: '/documentation/precautions' }
         ]
       }
     ],
7 changes: 7 additions & 0 deletions docs/about/architecture.md
@@ -0,0 +1,7 @@
# Architecture

This repository comes with several reusable CDK constructs, giving you the freedom to decide what to deploy and what not to.

Here's an overview:

![sample](./assets/architecture.png "Architecture Diagram")
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
12 changes: 12 additions & 0 deletions docs/about/authors.md
@@ -0,0 +1,12 @@
# Authors

- [Bigad Soleiman](https://www.linkedin.com/in/bigadsoleiman/)
- [Sergey Pugachev](https://www.linkedin.com/in/spugachev/)

# Credits

This sample was made possible thanks to the following libraries:

- [langchain](https://python.langchain.com/docs/get_started/introduction.html) from [LangChain AI](https://github.com/langchain-ai)
- [unstructured](https://github.com/Unstructured-IO/unstructured) from [Unstructured-IO](https://github.com/Unstructured-IO/unstructured)
- [pgvector](https://github.com/pgvector/pgvector) from [Andrew Kane](https://github.com/ankane)
65 changes: 65 additions & 0 deletions docs/about/features.md
@@ -0,0 +1,65 @@
# Features

## Modular, comprehensive and ready to use

This solution provides ready-to-use code so you can start **experimenting with a variety of Large Language Models and Multimodal Language Models, settings and prompts** in your own AWS account.

Supported model providers:

- [Amazon Bedrock](https://aws.amazon.com/bedrock/)
- [Amazon SageMaker](https://aws.amazon.com/sagemaker/) self-hosted models from SageMaker Foundation Models, JumpStart, and HuggingFace.
- Third-party providers via API such as Anthropic, Cohere, AI21 Labs, OpenAI, etc. [See available langchain integrations](https://python.langchain.com/docs/integrations/llms/) for a comprehensive list.

## Experiment with multimodal models

Deploy [IDEFICS](https://huggingface.co/blog/idefics) models on [Amazon SageMaker](https://aws.amazon.com/sagemaker/) and see how the chatbot can answer questions about images, describe visual content, and generate text grounded in multiple images.

![sample](./assets/multimodal-sample.gif "AWS GenAI Chatbot")

Currently, the following multimodal models are supported:

- [IDEFICS 9b Instruct](https://huggingface.co/HuggingFaceM4/idefics-9b)
- Requires `ml.g5.12xlarge` instance.
- [IDEFICS 80b Instruct](https://huggingface.co/HuggingFaceM4/idefics-80b-instruct)
- Requires `ml.g5.48xlarge` instance.

To learn which instance types are required and how to request them, read [Amazon SageMaker requirements](../documentation/model-requirements#amazon-sagemaker-requirements-for-self-hosted-models-only).

> NOTE: Make sure to review the [IDEFICS models license sections](https://huggingface.co/HuggingFaceM4/idefics-80b-instruct#license).

To deploy a multimodal model, follow the [deploy instructions](../guide/deploy) and select one of the supported models (press Space to select/deselect) in the magic-create CLI step, then deploy as instructed in the above section.

> ⚠️ NOTE ⚠️ Amazon SageMaker instances are billed by the hour. Avoid leaving the model running unused to prevent unnecessary costs.

## Multi-Session Chat: evaluate multiple models at once

Send the same query to 2 to 4 separate models at once and compare how each one responds based on its own conversation history and context. All models have access to the same powerful document retriever, so every request can pull from the same up-to-date knowledge.

![sample](./assets/multichat-sample.gif "AWS GenAI Chatbot")
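In pseudocode terms, the fan-out behind multi-session chat can be sketched like this (a hypothetical illustration: the session class and the echo reply stand in for the solution's actual model calls):

```python
# Hypothetical sketch of multi-session fan-out: the same prompt goes to
# several model backends, each keeping its own conversation history.
# Model names and the reply format are illustrative, not the solution's API.

class ChatSession:
    def __init__(self, model_name):
        self.model_name = model_name
        self.history = []  # per-model conversation history

    def send(self, prompt):
        # The real solution would invoke the model provider here;
        # an echo keeps the sketch self-contained.
        reply = f"[{self.model_name}] response to: {prompt}"
        self.history.append({"user": prompt, "assistant": reply})
        return reply

def multi_chat(prompt, sessions):
    # Fan the same query out to every selected model (2 to 4 in the UI).
    return {s.model_name: s.send(prompt) for s in sessions}

sessions = [ChatSession("model-a"), ChatSession("model-b")]
replies = multi_chat("What is RAG?", sessions)
```

Because each session keeps its own history, the same prompt can yield a different answer per model while all of them draw on the shared retriever.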

## Experiment with multiple RAG options with Workspaces

A workspace is a logical namespace where you can upload files for indexing and storage in one of the vector databases. You can select the embeddings model and text-splitting configuration of your choice.

![sample](./assets/create-workspace-sample.gif "AWS GenAI Chatbot")
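The text-splitting configuration a workspace exposes can be sketched as fixed-size chunking with overlap (a simplified illustration; the parameter values are examples, not the solution's defaults):

```python
# Minimal sketch of chunking with overlap, the kind of text-splitting a
# workspace configures before embedding and indexing documents.
# chunk_size and overlap values here are illustrative only.

def split_text(text, chunk_size=500, overlap=50):
    """Split text into overlapping chunks for embedding and indexing."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping some overlap
    return chunks

chunks = split_text("a" * 1200, chunk_size=500, overlap=50)
```

The overlap keeps context that straddles a chunk boundary retrievable from both neighbouring chunks.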

## Unlock RAG potentials with Workspaces Debugging Tools

The solution comes with several debugging tools to help you debug RAG scenarios:

- Run RAG queries without the chatbot and analyse results, scores, etc.
- Test different embeddings models directly in the UI
- Test cross encoders and analyse distances between sentences computed with different functions.

![sample](./assets/workspace-debug-sample.gif "AWS GenAI Chatbot")
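As a rough illustration of what the distance tooling computes, here is the same pair of toy embedding vectors scored with two different distance functions (the vectors and functions are examples, not the solution's implementation):

```python
# Illustrative comparison of two distance functions over toy embeddings.
# Real embeddings come from a model; these short vectors are stand-ins.
import math

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def euclidean_distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

emb1 = [0.1, 0.3, 0.5]
emb2 = [0.1, 0.3, 0.4]
cos_d = cosine_distance(emb1, emb2)  # small: vectors point the same way
euc_d = euclidean_distance(emb1, emb2)  # absolute difference in magnitude
```

Different functions rank sentence pairs differently, which is exactly what the debugging UI lets you inspect side by side.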

## Full-fledged User Interface

The repository includes a CDK construct to deploy a **full-fledged UI** built with [React](https://react.dev/) to interact with the deployed LLMs/MLMs as chatbots. The UI is hosted on [Amazon S3](https://aws.amazon.com/s3/) and distributed with [Amazon CloudFront](https://aws.amazon.com/cloudfront/).

It is protected with [Amazon Cognito](https://aws.amazon.com/cognito/) authentication and lets you interact and experiment with multiple LLMs/MLMs and RAG engines, with support for conversational history and document upload with progress tracking.

The interface layer between the UI and the backend is built with [AppSync](https://docs.aws.amazon.com/appsync/latest/devguide/what-is-appsync.html) for management requests and for real-time interaction with the chatbot (messages and responses) using GraphQL subscriptions.

Design system provided by [AWS Cloudscape Design System](https://cloudscape.design/).
8 changes: 8 additions & 0 deletions docs/about/license.md
@@ -0,0 +1,8 @@
# License

This library is licensed under the MIT-0 License. See the LICENSE file.

- [Changelog](https://github.com/aws-samples/aws-genai-llm-chatbot/blob/main/CHANGELOG.md) of the project.
- [License](https://github.com/aws-samples/aws-genai-llm-chatbot/blob/main/LICENSE) of the project.
- [Code of Conduct](https://github.com/aws-samples/aws-genai-llm-chatbot/blob/main/CODE_OF_CONDUCT.md) of the project.
- See [CONTRIBUTING](https://github.com/aws-samples/aws-genai-llm-chatbot/blob/main/CONTRIBUTING.md#security-issue-notifications) for more information.
21 changes: 21 additions & 0 deletions docs/about/welcome.md
@@ -0,0 +1,21 @@
---
layout: doc
---

# Deploying a Multi-Model and Multi-RAG Powered Chatbot Using AWS CDK on AWS

[![Release Notes](https://img.shields.io/github/v/release/aws-samples/aws-genai-llm-chatbot)](https://github.com/aws-samples/aws-genai-llm-chatbot/releases)

[![GitHub star chart](https://img.shields.io/github/stars/aws-samples/aws-genai-llm-chatbot?style=social)](https://star-history.com/#aws-samples/aws-genai-llm-chatbot)

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

[![Deploy with GitHub Codespaces](https://github.com/codespaces/badge.svg)](#deploy-with-github-codespaces)

The AWS GenAI LLM Chatbot provides ready-to-use code so you can start experimenting with a variety of Large Language Models and Multimodal Language Models, settings and prompts in your own AWS account.

![sample](./assets/chabot-sample.gif "AWS GenAI Chatbot")

Want to find out more? Continue to [Features](./features).

Want to get started? Head to the [Deployment Guide](../guide/deploy).
Binary file removed docs/assets/icon-dark.png
Binary file not shown.
Binary file removed docs/assets/icon-light.png
Binary file not shown.
File renamed without changes.
File renamed without changes
File renamed without changes.
48 changes: 48 additions & 0 deletions docs/documentation/model-requirements.md
@@ -0,0 +1,48 @@
# Model Requirements

## Amazon SageMaker requirements (for self-hosted models only)

**Instance type quota increase**

If you are looking to self-host models on Amazon SageMaker, you'll likely need to request an increase in the service quota for specific SageMaker instance types, such as the `ml.g5` instance type. This gives you access to the latest generation of GPU/multi-GPU instance types. [You can request the increase from the AWS console](https://console.aws.amazon.com/servicequotas/home/services/sagemaker/quotas).

## Amazon Bedrock requirements

**Base Models Access**

If you are looking to interact with models from Amazon Bedrock, you need to [request access to the base models in one of the regions where Amazon Bedrock is available](https://console.aws.amazon.com/bedrock/home?#/modelaccess). Make sure to read and accept the models' end-user license agreements (EULAs).

Note:

- You can deploy the solution to a different region from where you requested Base Model access.
- **While the Base Model access approval is instant, it might take several minutes to get access and see the list of models in the UI.**

![sample](./assets/enable-models.gif "AWS GenAI Chatbot")

## Third-party models requirements

You can also interact with external providers via their API, such as AI21 Labs, Cohere, OpenAI, etc.

The provider must be supported in the [Model Interface](https://github.com/aws-samples/aws-genai-llm-chatbot/blob/main/lib/model-interfaces/langchain/functions/request-handler/index.py), [see available langchain integrations](https://python.langchain.com/docs/integrations/llms/) for a comprehensive list of providers.

Usually, an `API_KEY` is required to integrate with third-party models. To support this, the [Model Interface](https://github.com/aws-samples/aws-genai-llm-chatbot/blob/main/lib/model-interfaces/langchain/index.ts) deploys a secret in [AWS Secrets Manager](https://aws.amazon.com/secrets-manager/), initially containing an empty JSON object `{}`, to which you can add your API keys for one or more providers.

These keys will be injected at runtime into the Lambda function Environment Variables; they won't be visible in the AWS Lambda Console.
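Conceptually, the injected variables are read like any other environment variable at runtime (a minimal illustration; `OPENAI_API_KEY` is just one example of a provider key name, and the value below is a stand-in for the injected secret):

```python
# Minimal sketch of how an injected provider key would be read inside the
# Lambda function. The variable name must match what the framework in use
# (e.g. LangChain) expects for that provider.
import os

def get_provider_key(name: str, default: str = "") -> str:
    return os.environ.get(name, default)

os.environ["OPENAI_API_KEY"] = "sk-demo"  # stand-in for the injected value
key = get_provider_key("OPENAI_API_KEY")
```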

For example, if you wish to interact with AI21 Labs, OpenAI, and Cohere endpoints:

- Open the [Model Interface Keys Secret](https://github.com/aws-samples/aws-genai-llm-chatbot/blob/main/lib/model-interfaces/langchain/index.ts#L38) in Secrets Manager. You can find the secret name in the stack output, too.
- Update the secret value by adding a key for each provider to the JSON:

```json
{
"AI21_API_KEY": "xxxxx",
"OPENAI_API_KEY": "sk-xxxxxxxxxxxxxxx",
"COHERE_API_KEY": "xxxxx"
}
```

N.B.: If no keys are needed, the secret value must remain an empty JSON object `{}`, not an empty string `''`.

Make sure the environment variable names match what the framework in use expects, such as LangChain ([see available langchain integrations](https://python.langchain.com/docs/integrations/llms/)).
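The update step can be sketched as merging new provider keys into the existing JSON value without clobbering keys already stored (a hedged illustration: the secret name is a placeholder, and the Secrets Manager call is shown commented rather than executed):

```python
# Sketch of preparing the secret payload: merge new provider keys into the
# existing JSON value. The actual update happens in the Secrets Manager
# console or via an API call (shown commented below; not run here).
import json

def merge_api_keys(current_secret: str, new_keys: dict) -> str:
    """Merge new provider API keys into an existing secret JSON string."""
    data = json.loads(current_secret or "{}")  # empty secret -> {}
    data.update(new_keys)
    return json.dumps(data)

payload = merge_api_keys('{"AI21_API_KEY": "xxxxx"}',
                         {"COHERE_API_KEY": "yyyyy"})
# import boto3
# boto3.client("secretsmanager").put_secret_value(
#     SecretId="<your-model-interface-secret>", SecretString=payload)
```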
9 changes: 9 additions & 0 deletions docs/documentation/precautions.md
@@ -0,0 +1,9 @@
# ⚠️ Precautions ⚠️

Before you begin using the solution, there are certain precautions you must take into account:

- **Cost Management with self-hosted models on SageMaker**: Be mindful of the costs associated with AWS resources, especially with SageMaker models billed by the hour. While the sample is designed to be cost-effective, leaving serverful resources running for extended periods or deploying numerous LLMs/MLMs can quickly lead to increased costs.

- **Licensing obligations**: If you choose to use any datasets or models alongside the provided samples, ensure you check the models' licenses and comply with all licensing obligations attached to them.

- **This is a sample**: the code provided in this repository shouldn't be used for production workloads without further reviews and adaptation.
File renamed without changes.
File renamed without changes