+Support the original creator [here](https://github.com/ztjhz/BetterChatGPT?tab=readme-ov-file#-support)
-# ❤️ Contributors
+## ❤️ Contributors
-Thanks to all the contributors!
-
-
-
+Thanks to all the [contributors](https://github.com/animalnots/BetterChatGPT-PLUS/graphs/contributors)!
+
+
-# 🙏 Support
-
-At Better ChatGPT, we are committed to bringing you useful and amazing features. As with any project, your support and encouragement play a vital role in keeping us moving forward!
-
-If you have enjoyed using our app, we kindly ask you to give this project a ⭐️. Your recognition means a lot to us and encourages us to work harder towards delivering the best possible experience.
-
-If you would like to support the team, consider sponsoring us through one of the methods below. Every contribution, no matter how small, helps us maintain and improve our service.
-
-| Payment Method | Link |
-| -------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Alipay (Ayaka) | |
-| WeChat (Ayaka) | |
-| GitHub | [![GitHub Sponsor](https://img.shields.io/static/v1?label=Sponsor&message=%E2%9D%A4&logo=GitHub&color=%23fe8e86)](https://github.com/sponsors/ztjhz) |
-| KoFi | [![support](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/betterchatgpt) |
-
-Thank you for being a part of our community, and we look forward to serving you better in the future.
+## 🚀 Update & Expand
+
+### Adding New Settings
+
+To add a new setting, update these files:
+
+```plaintext
+public/locales/en/main.json
+public/locales/en/model.json
+src/assets/icons/AttachmentIcon.tsx
+src/components/Chat/ChatContent/ChatTitle.tsx
+src/components/Chat/ChatContent/Message/EditView.tsx
+src/components/ChatConfigMenu/ChatConfigMenu.tsx
+src/components/ConfigMenu/ConfigMenu.tsx
+src/constants/chat.ts
+src/store/config-slice.ts
+src/store/migrate.ts
+src/store/store.ts
+src/types/chat.ts
+src/utils/import.ts
+```
+
+### Updating Models
+
+1. Download `models.json` from [OpenRouter](https://openrouter.ai/api/v1/models).
+2. Save it as `models.json` in the root directory.
+3. Run `node sortModelsJsonKeys.js` to organize the keys.
\ No newline at end of file
diff --git a/README.md b/README.md
index 459947de6..4199afdd8 100644
--- a/README.md
+++ b/README.md
@@ -1,106 +1,63 @@
-## 🗳️ Feature Prioritization with Canny.io
-
-We are now using [Canny.io](https://betterchatgpt.canny.io/feature-requests) for prioritizing feature development. You can and should vote there if you want development to be prioritized. Additionally, there's a possibility to push a feature to the front of the queue with a bounty of $100.
-
-## 🚀 Maintained Fork of Better ChatGPT
-
-This is a maintained fork of the original Better ChatGPT project. The main differences in this fork include:
-
-- **Vision Support**: Added support for image uploads for compatible models.
-- **UI Enhancements**: Improved user interface with new features.
-- **Azure API Extended Support**: Ability to specify API version in the configuration.
-
-The original project has not been updated frequently, and we aim to provide more dynamic and continuous improvements. We welcome contributions! Feel free to submit PRs to help improve this project further.
-
-
Better ChatGPT
-
-
- English Version |
-
- Simplified Chinese Version
-
+# Better ChatGPT PLUS
+
+Help us decide what to build next by voting for features on [Canny.io](https://betterchatgpt.canny.io/feature-requests). Want a feature urgently? Push it to the front with a $100 bounty!
-
-
-Are you ready to unlock the full potential of ChatGPT with Better ChatGPT?
+### Key Features
-Better ChatGPT is the ultimate destination for anyone who wants to experience the limitless power of conversational AI. With no limits and completely free to use for all, our app harnesses the full potential of OpenAI's ChatGPT API to offer you an unparalleled chatbot experience.
+- **Regional Proxy**: Bypass ChatGPT restrictions.
+- **Prompt Library**
+- **Chat Organization**: Folders & filters.
+- **Token & Pricing Info**
+- **ShareGPT Integration**
+- **Custom Model Parameters**
+- **Versatile Messaging**: Chat as user/assistant/system.
+- **Edit & Reorder Messages**
+- **Auto-Save & Download Chats**
+- **Google Drive Sync**
+- **Multilingual Support (i18n)**
-Whether you're looking to chat with a virtual assistant, improve your language skills, or simply enjoy a fun and engaging conversation, our app has got you covered. So why wait? Join us today and explore the exciting world of Better ChatGPT!
+### PLUS Fork Enhancements
-# 🔥 Features
+We're continuously improving Better ChatGPT PLUS. Here are the key differences and recent updates:
-Better ChatGPT comes with a bundle of amazing features! Here are some of them:
+- **Small UI Enhancements**: Sleeker, more intuitive interface, including an updated attachment icon now moved to the bottom.
+- **Clipboard Support**: Paste images directly from the clipboard.
+- **Image Interface**: Support for image input on compatible models.
+- **Title Model Selection**: Allow specifying a model for chat title generation.
+- **Improved Import**: Fixed issues when importing JSON and BetterGPT data.
+- **Models Parsing**: Added support for parsing models based on OpenRouter API.
+- **Token Count for Images**: Implemented token count and cost calculation for images.
+- **Zoom Functionality**: Added zoom functionality for images.
+- **Large File Handling**: Improved handling of large files to prevent storage overflow.
+- **OpenAI Import Fix**: Fixed import issues with OpenAI chat branches, ensuring the deepest branch with the most messages is imported.
-- Proxy to bypass ChatGPT regional restrictions
-- Prompt library
-- Organize chats into folders (with colours)
-- Filter chats and folders
-- Token count and pricing
-- ShareGPT integration
-- Custom model parameters (e.g. presence_penalty)
-- Chat as user / assistant / system
-- Edit, reorder and insert any messages, anywhere
-- Chat title generator
-- Save chat automatically to local storage
-- Import / Export chat
-- Download chat (markdown / image / json)
-- Sync to Google Drive
-- Azure OpenAI endpoint support
-- Multiple language support (i18n)
+Contributions are welcome! Feel free to submit [pull requests](https://github.com/animalnots/BetterChatGPT-PLUS/pulls).
-# 🛠️ Usage
+## 🚀 Getting Started
-To get started, simply visit our website at . There are 3 ways for you to start using Better ChatGPT.
+1. **Visit**: [Our Website](https://animalnots.github.io/BetterChatGPT-PLUS/)
+2. **API Key**: Enter your OpenAI API Key from [here](https://platform.openai.com/account/api-keys)
+3. **Proxy**: Use [ChatGPTAPIFree](https://github.com/ayaka14732/ChatGPTAPIFree) or host your own.
-1. Enter into the API menu your OpenAI API Key obtained from [OpenAI API Keys](https://platform.openai.com/account/api-keys).
-2. Utilise the api endpoint proxy provided by [ayaka14732/ChatGPTAPIFree](https://github.com/ayaka14732/ChatGPTAPIFree) (if you are in a region with no access to ChatGPT)
-3. Host your own API endpoint by following the instructions provided here: . Subsequently, enter the API endpoint into the API menu.
+## 🖥️ Desktop App
-## Desktop App
-
-Download the desktop app [here](https://github.com/animalnots/BetterChatGPT-PLUS/releases)
+Download from [Releases](https://github.com/animalnots/BetterChatGPT-PLUS/releases)
| OS | Download |
| ------- | --------- |
@@ -108,152 +65,83 @@ Download the desktop app [here](https://github.com/animalnots/BetterChatGPT-PLUS
| MacOS | .dmg |
| Linux | .AppImage |
-### Features:
+### Desktop Features:
- Unlimited local storage
-- Runs locally (access Better ChatGPT even if the website is not accessible)
-
-# 🛫 Host your own Instance
+- Runs locally
-If you'd like to run your own instance of Better ChatGPT, you can easily do so by following these steps:
+## 🛠️ Host Your Own Instance
-## Vercel
+### Vercel
-One click deploy with Vercel
+[Deploy with Vercel](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Fanimalnots%2FBetterChatGPT-PLUS)
-[![Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Fztjhz%2FBetterChatGPT)
+### GitHub Pages
-## GitHub Pages
+1. **Star & Fork**: [This Repo](https://github.com/animalnots/BetterChatGPT-PLUS)
+2. **Settings**: Navigate to `Settings` > `Pages`, select `GitHub Actions`
+3. **Actions**: Click `Actions`, `Deploy to GitHub Pages`, then `Run workflow`
-### Steps
+### Local Setup
-1. Create a GitHub account (if you don't have one already)
-1. Star this [repository](https://github.com/animalnots/BetterChatGPT-PLUS) ⭐️
-1. Fork this [repository](https://github.com/animalnots/BetterChatGPT-PLUS)
-1. In your forked repository, navigate to the `Settings` tab
- ![image](https://user-images.githubusercontent.com/59118459/223753577-9b6f8266-26e8-471b-8f45-a1a02fbab232.png)
-1. In the left sidebar, click on `Pages` and in the right section, select `GitHub Actions` for `source`.
- ![image](https://user-images.githubusercontent.com/59118459/227568881-d8fb7baa-f890-4dee-8fc2-b6b429ba2098.png)
-1. Now, click on `Actions`
- ![image](https://user-images.githubusercontent.com/59118459/223751928-cf2b91b9-4663-4a36-97de-5eb751b32c7e.png)
-1. In the left sidebar, click on `Deploy to GitHub Pages`
- ![image](https://user-images.githubusercontent.com/59118459/223752459-183ec23f-72f5-436e-a088-e3386492b8cb.png)
-1. Above the list of workflow runs, select `Run workflow`.
- ![image](https://user-images.githubusercontent.com/59118459/223753340-1270e038-d213-4d6f-938c-66a30dad7c88.png)
-1. Navigate back to the `Settings` tab
- ![image](https://user-images.githubusercontent.com/59118459/223753577-9b6f8266-26e8-471b-8f45-a1a02fbab232.png)
-1. In the left sidebar, click on `Pages` and in the right section. Then at the top section, you can see that "Your site is live at `XXX`".
- ![image](https://user-images.githubusercontent.com/59118459/227568881-d8fb7baa-f890-4dee-8fc2-b6b429ba2098.png)
+1. Install [Node.js](https://nodejs.org/en/) and [yarn/npm](https://www.npmjs.com/)
+2. **Clone repo**: `git clone https://github.com/animalnots/BetterChatGPT-PLUS.git`
+3. Navigate: `cd BetterChatGPT-PLUS`
+4. **Install**: `yarn` or `npm install`
+5. **Launch**: `yarn dev` or `npm run dev`
-### Running it locally
+### Docker Compose
-1. Ensure that you have the following installed:
+1. Install [docker](https://www.docker.com/)
+2. **Build**: `docker compose build`
+3. **Start**: `docker compose up -d`
+4. **Stop**: `docker compose down`
- - [node.js](https://nodejs.org/en/) (v14.18.0 or above)
- - [yarn](https://yarnpkg.com/) or [npm](https://www.npmjs.com/) (6.14.15 or above)
+### Build Desktop App
-2. Clone this [repository](https://github.com/animalnots/BetterChatGPT-PLUS) by running `git clone https://github.com/animalnots/BetterChatGPT-PLUS.git`
-3. Navigate into the directory by running `cd BetterChatGPT`
-4. Run `yarn` or `npm install`, depending on whether you have yarn or npm installed.
-5. Launch the app by running `yarn dev` or `npm run dev`
+1. Install [yarn/npm](https://www.npmjs.com/)
+2. **Build (Windows)**: `yarn make --win`
-### Running it locally using docker compose
+## ⭐️ Star & Support
-1. Ensure that you have the following installed:
+[Star the repo](https://github.com/animalnots/BetterChatGPT-PLUS) to encourage development.
+[![Star History Chart](https://api.star-history.com/svg?repos=animalnots/BetterChatGPT-PLUS&type=Date)](https://github.com/animalnots/BetterChatGPT-PLUS/stargazers)
- - [docker](https://www.docker.com/) (v24.0.7 or above)
- ```bash
- curl https://get.docker.com | sh \
- && sudo usermod -aG docker $USER
- ```
+### Support Methods:
-2. Build the docker image
+Support the original creator [here](https://github.com/ztjhz/BetterChatGPT?tab=readme-ov-file#-support)
- ```
- docker compose build
- ```
+## ❤️ Contributors
-3. Build and start the container using docker compose
-
- ```
- docker compose build
- docker compose up -d
- ```
-
-4. Stop the container
- ```
- docker compose down
- ```
-
-### Running it locally via desktop app
-
-1. Ensure that you have the following installed:
-
- - [yarn](https://yarnpkg.com/) or [npm](https://www.npmjs.com/) (6.14.15 or above)
-
-2. Build the executable (Windows)
-
- ```
- yarn make --win
- ```
-
-3. Build for other OS
- ```
- yarn make _ADD_BUILD_ARGS_HERE
- ```
- To find out available building arguments, go to [electron-builder reference](https://www.electron.build/cli.html)
-
-# ⭐️ Star History
-
-[![Star History Chart](https://api.star-history.com/svg?repos=animalnots/BetterChatGPT-PLUS&type=Date)](https://github.com/animalnots/BetterChatGPT-PLUS/stargazers)
-
-
-A ⭐️ to Better ChatGPT is to make it shine brighter and benefit more people.
-
-
-# ❤️ Contributors
-
-Thanks to all the contributors!
-
-
-
+Thanks to all the [contributors](https://github.com/animalnots/BetterChatGPT-PLUS/graphs/contributors)!
+
+
-# 🙏 Support
+## 🚀 Update & Expand
-At Better ChatGPT, we strive to provide you with useful and amazing features around the clock. And just like any project, your support and motivation will be instrumental in helping us keep moving forward!
+### Adding New Settings
-If you have enjoyed using our app, we kindly ask you to give this project a ⭐️. Your endorsement means a lot to us and encourages us to work harder towards delivering the best possible experience.
+To add new settings, update these files:
-If you would like to support the team, consider sponsoring us through one of the methods below. Every contribution, no matter how small, helps us to maintain and improve our service.
-
-| Payment Method | Link |
-| -------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- |
-| GitHub | [![GitHub Sponsor](https://img.shields.io/static/v1?label=Sponsor&message=%E2%9D%A4&logo=GitHub&color=%23fe8e86)](https://github.com/sponsors/ztjhz) |
-| KoFi | [![support](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/betterchatgpt) |
-| Alipay (Ayaka) | |
-| Wechat (Ayaka) | |
-
-Thank you for being a part of our community, and we look forward to serving you better in the future.
-
-# Adding new settings
-Example of files that had to be changed in order for a new settings to be added (e.g. ImageDetail for each chat + default config)
```plaintext
-- public/locales/en/main.json
-- public/locales/en/model.json
-- src/assets/icons/AttachmentIcon.tsx
-- src/components/Chat/ChatContent/ChatTitle.tsx
-- src/components/Chat/ChatContent/Message/EditView.tsx
-- src/components/ChatConfigMenu/ChatConfigMenu.tsx
-- src/components/ConfigMenu/ConfigMenu.tsx
-- src/constants/chat.ts
-- src/store/config-slice.ts
-- src/store/migrate.ts
-- src/store/store.ts
-- src/types/chat.ts
-- src/utils/import.ts
+public/locales/en/main.json
+public/locales/en/model.json
+src/assets/icons/AttachmentIcon.tsx
+src/components/Chat/ChatContent/ChatTitle.tsx
+src/components/Chat/ChatContent/Message/EditView.tsx
+src/components/ChatConfigMenu/ChatConfigMenu.tsx
+src/components/ConfigMenu/ConfigMenu.tsx
+src/constants/chat.ts
+src/store/config-slice.ts
+src/store/migrate.ts
+src/store/store.ts
+src/types/chat.ts
+src/utils/import.ts
```
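The file list above is easier to act on with a concrete example. The sketch below shows the three core pieces — the config type (`src/types/chat.ts`), its default (`src/constants/chat.ts`), and the migration backfill (`src/store/migrate.ts`) — using an invented `imageDetail` setting. Names and shapes are illustrative assumptions, not the project's actual code.

```typescript
// Hypothetical sketch only: field names and shapes are illustrative.

// src/types/chat.ts — extend the config interface with the new setting.
interface ConfigInterface {
  model: string;
  temperature: number;
  imageDetail: 'low' | 'high' | 'auto'; // the new setting
}

// src/constants/chat.ts — provide a default so new chats pick it up.
const _defaultChatConfig: ConfigInterface = {
  model: 'gpt-4o',
  temperature: 1,
  imageDetail: 'auto',
};

// src/store/migrate.ts — backfill the field on previously persisted state,
// so configs saved before the setting existed remain valid after an update.
const migrateConfig = (
  persisted: Partial<ConfigInterface>
): ConfigInterface => ({
  ..._defaultChatConfig,
  ...persisted,
});
```

The locale JSON and component files from the list then only need to surface the new field in the UI.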
-# Models update
-update models.json at
-https://openrouter.ai/api/v1/models
\ No newline at end of file
+### Updating Models
+
+1. Download `models.json` from [OpenRouter](https://openrouter.ai/api/v1/models).
+2. Save it as `models.json` in the root directory.
+3. Run `node sortModelsJsonKeys.js` to organize the keys.
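Conceptually, the key-sorting pass in step 3 can be sketched as below: recursively rewriting every object with alphabetically ordered keys, so that repeated downloads of `models.json` produce minimal diffs. This is an assumption about what `sortModelsJsonKeys.js` does, not its actual source.

```typescript
// Sketch of a recursive key-sorting pass (assumed behaviour of
// sortModelsJsonKeys.js, not its actual source).
type Json = string | number | boolean | null | Json[] | { [key: string]: Json };

const sortKeysDeep = (value: Json): Json => {
  if (Array.isArray(value)) return value.map(sortKeysDeep);
  if (value !== null && typeof value === 'object') {
    const sorted: { [key: string]: Json } = {};
    for (const key of Object.keys(value).sort()) {
      sorted[key] = sortKeysDeep(value[key]);
    }
    return sorted;
  }
  return value;
};
```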
diff --git a/package.json b/package.json
index 503bf171f..ce81ed929 100644
--- a/package.json
+++ b/package.json
@@ -1,7 +1,7 @@
{
"name": "better-chatgpt",
"private": true,
- "version": "1.8.1",
+ "version": "1.8.2",
"type": "module",
"homepage": "./",
"main": "electron/index.cjs",
diff --git a/public/locales/en/import.json b/public/locales/en/import.json
index 1c4ed40f9..feb021fcf 100644
--- a/public/locales/en/import.json
+++ b/public/locales/en/import.json
@@ -10,6 +10,7 @@
"unrecognisedDataFormat": "Unrecognised data format. Supported formats are: BetterGPT export, OpenAI export, OpenAI Playground (JSON)",
"chatsImported": "{{imported}} chats were imported out of {{total}}."
},
- "reduceMessagesSuccess": "{{count}} messages were reduced."
+ "reduceMessagesSuccess": "{{count}} messages were reduced.",
+ "partialImportMessages": "expected {{total}} messages but found {{count}}"
}
\ No newline at end of file
diff --git a/public/locales/en/main.json b/public/locales/en/main.json
index 1f9e25f90..1c9ad0a42 100644
--- a/public/locales/en/main.json
+++ b/public/locales/en/main.json
@@ -50,6 +50,7 @@
"submitPlaceholder": "Type a message or click [/] for prompts...",
"reduceMessagesWarning": "Reducing messages may result in data loss. It is recommended to download the chat in JSON format if you care about the data. Do you want to proceed?",
"reduceMessagesFailedImportWarning": "Full import failed as the data hit the maximum storage limit. Import as much as possible?",
+ "partialImportWarning": "Full import failed as not all of the expected messages were imported: {{message}}. Would you like to import anyway?",
"reduceMessagesButton": "Reduce Messages",
"reduceMessagesSuccess": "Successfully reduced messages. {{count}} messages were removed.",
"hiddenMessagesWarning": "Some messages were hidden with the total length of {{hiddenTokens}} tokens to {{reduceMessagesToTotalToken}} tokens to avoid laggy UI."
diff --git a/public/models.json b/public/models.json
index b46c5419b..3e2592700 100644
--- a/public/models.json
+++ b/public/models.json
@@ -1,5 +1,149 @@
{
"data": [
+ {
+ "architecture": {
+ "instruct_type": null,
+ "modality": "text->text",
+ "tokenizer": "Cohere"
+ },
+ "context_length": 128000,
+ "created": 1725062400,
+ "description": "Command-R is a 35B parameter model that performs conversational language tasks at a higher quality, more reliably, and with a longer context than previous models. It can be used for complex workflows like code generation, retrieval augmented generation (RAG), tool use, and agents.\n\nRead the launch post [here](https://txt.cohere.com/command-r/).\n\nUse of this model is subject to Cohere's [Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).",
+ "id": "cohere/command-r-03-2024",
+ "name": "Cohere: Command R (03-2024)",
+ "per_request_limits": null,
+ "pricing": {
+ "completion": "0.0000015",
+ "image": "0",
+ "prompt": "0.0000005",
+ "request": "0"
+ },
+ "top_provider": {
+ "context_length": 128000,
+ "is_moderated": false,
+ "max_completion_tokens": 4000
+ }
+ },
+ {
+ "architecture": {
+ "instruct_type": null,
+ "modality": "text->text",
+ "tokenizer": "Cohere"
+ },
+ "context_length": 128000,
+ "created": 1725062400,
+ "description": "Command R+ is a new, 104B-parameter LLM from Cohere. It's useful for roleplay, general consumer usecases, and Retrieval Augmented Generation (RAG).\n\nIt offers multilingual support for ten key languages to facilitate global business operations. See benchmarks and the launch post [here](https://txt.cohere.com/command-r-plus-microsoft-azure/).\n\nUse of this model is subject to Cohere's [Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).",
+ "id": "cohere/command-r-plus-04-2024",
+ "name": "Cohere: Command R+ (04-2024)",
+ "per_request_limits": null,
+ "pricing": {
+ "completion": "0.000015",
+ "image": "0",
+ "prompt": "0.000003",
+ "request": "0"
+ },
+ "top_provider": {
+ "context_length": 128000,
+ "is_moderated": false,
+ "max_completion_tokens": 4000
+ }
+ },
+ {
+ "architecture": {
+ "instruct_type": null,
+ "modality": "text->text",
+ "tokenizer": "Cohere"
+ },
+ "context_length": 128000,
+ "created": 1724976000,
+ "description": "command-r-plus-08-2024 is an update of the [Command R+](/models/cohere/command-r-plus) with roughly 50% higher throughput and 25% lower latencies as compared to the previous Command R+ version, while keeping the hardware footprint the same.\n\nRead the launch post [here](https://docs.cohere.com/changelog/command-gets-refreshed).\n\nUse of this model is subject to Cohere's [Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).",
+ "id": "cohere/command-r-plus-08-2024",
+ "name": "Cohere: Command R+ (08-2024)",
+ "per_request_limits": null,
+ "pricing": {
+ "completion": "0.00001",
+ "image": "0",
+ "prompt": "0.0000025",
+ "request": "0"
+ },
+ "top_provider": {
+ "context_length": 128000,
+ "is_moderated": false,
+ "max_completion_tokens": 4000
+ }
+ },
+ {
+ "architecture": {
+ "instruct_type": null,
+ "modality": "text->text",
+ "tokenizer": "Cohere"
+ },
+ "context_length": 128000,
+ "created": 1724976000,
+ "description": "command-r-08-2024 is an update of the [Command R](/models/cohere/command-r) with improved performance for multilingual retrieval-augmented generation (RAG) and tool use. More broadly, it is better at math, code and reasoning and is competitive with the previous version of the larger Command R+ model.\n\nRead the launch post [here](https://docs.cohere.com/changelog/command-gets-refreshed).\n\nUse of this model is subject to Cohere's [Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).",
+ "id": "cohere/command-r-08-2024",
+ "name": "Cohere: Command R (08-2024)",
+ "per_request_limits": null,
+ "pricing": {
+ "completion": "0.0000006",
+ "image": "0",
+ "prompt": "0.00000015",
+ "request": "0"
+ },
+ "top_provider": {
+ "context_length": 128000,
+ "is_moderated": false,
+ "max_completion_tokens": 4000
+ }
+ },
+ {
+ "architecture": {
+ "instruct_type": null,
+ "modality": "text+image->text",
+ "tokenizer": "Gemini"
+ },
+ "context_length": 4000000,
+ "created": 1724803200,
+ "description": "Gemini 1.5 Flash 8B Experimental is an experimental, 8B parameter version of the [Gemini 1.5 Flash](/models/google/gemini-flash-1.5) model.\n\nUsage of Gemini is subject to Google's [Gemini Terms of Use](https://ai.google.dev/terms).\n\n#multimodal\n\nNote: This model is experimental and not suited for production use-cases. It may be removed or redirected to another model in the future.",
+ "id": "google/gemini-flash-8b-1.5-exp",
+ "name": "Google: Gemini Flash 8B 1.5 Experimental",
+ "per_request_limits": null,
+ "pricing": {
+ "completion": "0",
+ "image": "0",
+ "prompt": "0",
+ "request": "0"
+ },
+ "top_provider": {
+ "context_length": 4000000,
+ "is_moderated": false,
+ "max_completion_tokens": 32768
+ }
+ },
+ {
+ "architecture": {
+ "instruct_type": null,
+ "modality": "text+image->text",
+ "tokenizer": "Gemini"
+ },
+ "context_length": 4000000,
+ "created": 1724803200,
+ "description": "Gemini 1.5 Flash Experimental is an experimental version of the [Gemini 1.5 Flash](/models/google/gemini-flash-1.5) model.\n\nUsage of Gemini is subject to Google's [Gemini Terms of Use](https://ai.google.dev/terms).\n\n#multimodal\n\nNote: This model is experimental and not suited for production use-cases. It may be removed or redirected to another model in the future.",
+ "id": "google/gemini-flash-1.5-exp",
+ "name": "Google: Gemini Flash 1.5 Experimental",
+ "per_request_limits": null,
+ "pricing": {
+ "completion": "0",
+ "image": "0",
+ "prompt": "0",
+ "request": "0"
+ },
+ "top_provider": {
+ "context_length": 4000000,
+ "is_moderated": false,
+ "max_completion_tokens": 32768
+ }
+ },
{
"architecture": {
"instruct_type": "llama3",
@@ -333,7 +477,7 @@
"top_provider": {
"context_length": 32768,
"is_moderated": false,
- "max_completion_tokens": 32768
+ "max_completion_tokens": null
}
},
{
@@ -442,12 +586,12 @@
"created": 1722470400,
"description": "Gemini 1.5 Pro (0801) is an experimental version of the [Gemini 1.5 Pro](/models/google/gemini-pro-1.5) model.\n\nUsage of Gemini is subject to Google's [Gemini Terms of Use](https://ai.google.dev/terms).\n\n#multimodal\n\nNote: This model is experimental and not suited for production use-cases. It may be removed or redirected to another model in the future.",
"id": "google/gemini-pro-1.5-exp",
- "name": "Google: Gemini Pro 1.5 (0801)",
+ "name": "Google: Gemini Pro 1.5 Experimental",
"per_request_limits": null,
"pricing": {
- "completion": "0.0000075",
- "image": "0.00263",
- "prompt": "0.0000025",
+ "completion": "0",
+ "image": "0",
+ "prompt": "0",
"request": "0"
},
"top_provider": {
@@ -936,6 +1080,30 @@
"max_completion_tokens": null
}
},
+ {
+ "architecture": {
+ "instruct_type": "llama3",
+ "modality": "text->text",
+ "tokenizer": "Router"
+ },
+ "context_length": 32000,
+ "created": 1719446400,
+ "description": "This is a router model that rotates its underlying model weekly. It aims to be a simple way to explore the capabilities of new models while using the same model ID.\n\nThe current underlying model is [Llama 3 Stheno 8B v3.3 32K](/models/sao10k/l3-stheno-8b).\n\nNOTE: Pricing depends on the underlying model as well as the provider routed to. To see which model and provider were used, visit [Activity](/activity).",
+ "id": "openrouter/flavor-of-the-week",
+ "name": "Flavor of The Week",
+ "per_request_limits": null,
+ "pricing": {
+ "completion": "-1",
+ "image": "-1",
+ "prompt": "-1",
+ "request": "-1"
+ },
+ "top_provider": {
+ "context_length": null,
+ "is_moderated": false,
+ "max_completion_tokens": null
+ }
+ },
{
"architecture": {
"instruct_type": "llama3",
@@ -1608,54 +1776,6 @@
"max_completion_tokens": null
}
},
- {
- "architecture": {
- "instruct_type": "chatml",
- "modality": "text->text",
- "tokenizer": "Llama3"
- },
- "context_length": 8192,
- "created": 1715558400,
- "description": "Meta's latest class of model (Llama 3) launched with a variety of sizes & flavors. This is the base 70B pre-trained version.\n\nIt has demonstrated strong performance compared to leading closed-source models in human evaluations.\n\nTo read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).",
- "id": "meta-llama/llama-3-70b",
- "name": "Meta: Llama 3 70B (Base)",
- "per_request_limits": null,
- "pricing": {
- "completion": "0.00000081",
- "image": "0",
- "prompt": "0.00000081",
- "request": "0"
- },
- "top_provider": {
- "context_length": 8192,
- "is_moderated": false,
- "max_completion_tokens": null
- }
- },
- {
- "architecture": {
- "instruct_type": "chatml",
- "modality": "text->text",
- "tokenizer": "Llama3"
- },
- "context_length": 8192,
- "created": 1715558400,
- "description": "Meta's latest class of model (Llama 3) launched with a variety of sizes & flavors. This is the base 8B pre-trained version.\n\nIt has demonstrated strong performance compared to leading closed-source models in human evaluations.\n\nTo read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).",
- "id": "meta-llama/llama-3-8b",
- "name": "Meta: Llama 3 8B (Base)",
- "per_request_limits": null,
- "pricing": {
- "completion": "0.00000018",
- "image": "0",
- "prompt": "0.00000018",
- "request": "0"
- },
- "top_provider": {
- "context_length": 8192,
- "is_moderated": false,
- "max_completion_tokens": null
- }
- },
{
"architecture": {
"instruct_type": null,
@@ -1728,30 +1848,6 @@
"max_completion_tokens": 64000
}
},
- {
- "architecture": {
- "instruct_type": "zephyr",
- "modality": "text->text",
- "tokenizer": "Other"
- },
- "context_length": 2048,
- "created": 1715299200,
- "description": "OLMo 7B Instruct by the Allen Institute for AI is a model finetuned for question answering. It demonstrates **notable performance** across multiple benchmarks including TruthfulQA and ToxiGen.\n\n**Open Source**: The model, its code, checkpoints, logs are released under the [Apache 2.0 license](https://choosealicense.com/licenses/apache-2.0).\n\n- [Core repo (training, inference, fine-tuning etc.)](https://github.com/allenai/OLMo)\n- [Evaluation code](https://github.com/allenai/OLMo-Eval)\n- [Further fine-tuning code](https://github.com/allenai/open-instruct)\n- [Paper](https://arxiv.org/abs/2402.00838)\n- [Technical blog post](https://blog.allenai.org/olmo-open-language-model-87ccfc95f580)\n- [W&B Logs](https://wandb.ai/ai2-llm/OLMo-7B/reports/OLMo-7B--Vmlldzo2NzQyMzk5)",
- "id": "allenai/olmo-7b-instruct",
- "name": "OLMo 7B Instruct",
- "per_request_limits": null,
- "pricing": {
- "completion": "0.00000018",
- "image": "0",
- "prompt": "0.00000018",
- "request": "0"
- },
- "top_provider": {
- "context_length": 2048,
- "is_moderated": false,
- "max_completion_tokens": null
- }
- },
{
"architecture": {
"instruct_type": "chatml",
@@ -1760,14 +1856,14 @@
},
"context_length": 32768,
"created": 1715212800,
- "description": "Qwen1.5 4B is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include:\n\n- Significant performance improvement in human preference for chat models\n- Multilingual support of both base and chat models\n- Stable support of 32K context length for models of all sizes\n\nFor more details, see this [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).\n\nUsage of this model is subject to [Tongyi Qianwen LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE).",
- "id": "qwen/qwen-4b-chat",
- "name": "Qwen 1.5 4B Chat",
+ "description": "Qwen1.5 72B is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include:\n\n- Significant performance improvement in human preference for chat models\n- Multilingual support of both base and chat models\n- Stable support of 32K context length for models of all sizes\n\nFor more details, see this [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).\n\nUsage of this model is subject to [Tongyi Qianwen LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE).",
+ "id": "qwen/qwen-72b-chat",
+ "name": "Qwen 1.5 72B Chat",
"per_request_limits": null,
"pricing": {
- "completion": "0.00000009",
+ "completion": "0.00000081",
"image": "0",
- "prompt": "0.00000009",
+ "prompt": "0.00000081",
"request": "0"
},
"top_provider": {
@@ -1784,14 +1880,14 @@
},
"context_length": 32768,
"created": 1715212800,
- "description": "Qwen1.5 7B is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include:\n\n- Significant performance improvement in human preference for chat models\n- Multilingual support of both base and chat models\n- Stable support of 32K context length for models of all sizes\n\nFor more details, see this [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).\n\nUsage of this model is subject to [Tongyi Qianwen LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE).",
- "id": "qwen/qwen-7b-chat",
- "name": "Qwen 1.5 7B Chat",
+ "description": "Qwen1.5 110B is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include:\n\n- Significant performance improvement in human preference for chat models\n- Multilingual support of both base and chat models\n- Stable support of 32K context length for models of all sizes\n\nFor more details, see this [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).\n\nUsage of this model is subject to [Tongyi Qianwen LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE).",
+ "id": "qwen/qwen-110b-chat",
+ "name": "Qwen 1.5 110B Chat",
"per_request_limits": null,
"pricing": {
- "completion": "0.00000018",
+ "completion": "0.00000162",
"image": "0",
- "prompt": "0.00000018",
+ "prompt": "0.00000162",
"request": "0"
},
"top_provider": {
@@ -1802,170 +1898,50 @@
},
{
"architecture": {
- "instruct_type": "chatml",
+ "instruct_type": "llama3",
"modality": "text->text",
- "tokenizer": "Qwen"
+ "tokenizer": "Llama3"
},
- "context_length": 32768,
- "created": 1715212800,
- "description": "Qwen1.5 14B is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include:\n\n- Significant performance improvement in human preference for chat models\n- Multilingual support of both base and chat models\n- Stable support of 32K context length for models of all sizes\n\nFor more details, see this [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).\n\nUsage of this model is subject to [Tongyi Qianwen LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE).",
- "id": "qwen/qwen-14b-chat",
- "name": "Qwen 1.5 14B Chat",
+ "context_length": 24576,
+ "created": 1714780800,
+ "description": "The NeverSleep team is back, with a Llama 3 8B finetune trained on their curated roleplay data. Striking a balance between eRP and RP, Lumimaid was designed to be serious, yet uncensored when necessary.\n\nTo enhance it's overall intelligence and chat capability, roughly 40% of the training data was not roleplay. This provides a breadth of knowledge to access, while still keeping roleplay as the primary strength.\n\nUsage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).",
+ "id": "neversleep/llama-3-lumimaid-8b",
+ "name": "Llama 3 Lumimaid 8B",
"per_request_limits": null,
"pricing": {
- "completion": "0.00000027",
+ "completion": "0.000001125",
"image": "0",
- "prompt": "0.00000027",
+ "prompt": "0.0000001875",
"request": "0"
},
"top_provider": {
- "context_length": 32768,
+ "context_length": 8192,
"is_moderated": false,
"max_completion_tokens": null
}
},
{
"architecture": {
- "instruct_type": "chatml",
+ "instruct_type": "llama3",
"modality": "text->text",
- "tokenizer": "Qwen"
+ "tokenizer": "Llama3"
},
- "context_length": 32768,
- "created": 1715212800,
- "description": "Qwen1.5 32B is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include:\n\n- Significant performance improvement in human preference for chat models\n- Multilingual support of both base and chat models\n- Stable support of 32K context length for models of all sizes\n\nFor more details, see this [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).\n\nUsage of this model is subject to [Tongyi Qianwen LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE).",
- "id": "qwen/qwen-32b-chat",
- "name": "Qwen 1.5 32B Chat",
+ "context_length": 24576,
+ "created": 1714780800,
+ "description": "The NeverSleep team is back, with a Llama 3 8B finetune trained on their curated roleplay data. Striking a balance between eRP and RP, Lumimaid was designed to be serious, yet uncensored when necessary.\n\nTo enhance it's overall intelligence and chat capability, roughly 40% of the training data was not roleplay. This provides a breadth of knowledge to access, while still keeping roleplay as the primary strength.\n\nUsage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).\n\n_These are extended-context endpoints for [Llama 3 Lumimaid 8B](/models/neversleep/llama-3-lumimaid-8b). They may have higher prices._",
+ "id": "neversleep/llama-3-lumimaid-8b:extended",
+ "name": "Llama 3 Lumimaid 8B (extended)",
"per_request_limits": null,
"pricing": {
- "completion": "0.00000072",
+ "completion": "0.000001125",
"image": "0",
- "prompt": "0.00000072",
+ "prompt": "0.0000001875",
"request": "0"
},
"top_provider": {
- "context_length": 32768,
+ "context_length": 24576,
"is_moderated": false,
- "max_completion_tokens": null
- }
- },
- {
- "architecture": {
- "instruct_type": "chatml",
- "modality": "text->text",
- "tokenizer": "Qwen"
- },
- "context_length": 32768,
- "created": 1715212800,
- "description": "Qwen1.5 72B is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include:\n\n- Significant performance improvement in human preference for chat models\n- Multilingual support of both base and chat models\n- Stable support of 32K context length for models of all sizes\n\nFor more details, see this [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).\n\nUsage of this model is subject to [Tongyi Qianwen LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE).",
- "id": "qwen/qwen-72b-chat",
- "name": "Qwen 1.5 72B Chat",
- "per_request_limits": null,
- "pricing": {
- "completion": "0.00000081",
- "image": "0",
- "prompt": "0.00000081",
- "request": "0"
- },
- "top_provider": {
- "context_length": 32768,
- "is_moderated": false,
- "max_completion_tokens": null
- }
- },
- {
- "architecture": {
- "instruct_type": "chatml",
- "modality": "text->text",
- "tokenizer": "Qwen"
- },
- "context_length": 32768,
- "created": 1715212800,
- "description": "Qwen1.5 110B is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include:\n\n- Significant performance improvement in human preference for chat models\n- Multilingual support of both base and chat models\n- Stable support of 32K context length for models of all sizes\n\nFor more details, see this [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).\n\nUsage of this model is subject to [Tongyi Qianwen LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE).",
- "id": "qwen/qwen-110b-chat",
- "name": "Qwen 1.5 110B Chat",
- "per_request_limits": null,
- "pricing": {
- "completion": "0.00000162",
- "image": "0",
- "prompt": "0.00000162",
- "request": "0"
- },
- "top_provider": {
- "context_length": 32768,
- "is_moderated": false,
- "max_completion_tokens": null
- }
- },
- {
- "architecture": {
- "instruct_type": "llama3",
- "modality": "text->text",
- "tokenizer": "Llama3"
- },
- "context_length": 24576,
- "created": 1714780800,
- "description": "The NeverSleep team is back, with a Llama 3 8B finetune trained on their curated roleplay data. Striking a balance between eRP and RP, Lumimaid was designed to be serious, yet uncensored when necessary.\n\nTo enhance it's overall intelligence and chat capability, roughly 40% of the training data was not roleplay. This provides a breadth of knowledge to access, while still keeping roleplay as the primary strength.\n\nUsage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).",
- "id": "neversleep/llama-3-lumimaid-8b",
- "name": "Llama 3 Lumimaid 8B",
- "per_request_limits": null,
- "pricing": {
- "completion": "0.000001125",
- "image": "0",
- "prompt": "0.0000001875",
- "request": "0"
- },
- "top_provider": {
- "context_length": 8192,
- "is_moderated": false,
- "max_completion_tokens": null
- }
- },
- {
- "architecture": {
- "instruct_type": "llama3",
- "modality": "text->text",
- "tokenizer": "Llama3"
- },
- "context_length": 24576,
- "created": 1714780800,
- "description": "The NeverSleep team is back, with a Llama 3 8B finetune trained on their curated roleplay data. Striking a balance between eRP and RP, Lumimaid was designed to be serious, yet uncensored when necessary.\n\nTo enhance it's overall intelligence and chat capability, roughly 40% of the training data was not roleplay. This provides a breadth of knowledge to access, while still keeping roleplay as the primary strength.\n\nUsage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).\n\n_These are extended-context endpoints for [Llama 3 Lumimaid 8B](/models/neversleep/llama-3-lumimaid-8b). They may have higher prices._",
- "id": "neversleep/llama-3-lumimaid-8b:extended",
- "name": "Llama 3 Lumimaid 8B (extended)",
- "per_request_limits": null,
- "pricing": {
- "completion": "0.000001125",
- "image": "0",
- "prompt": "0.0000001875",
- "request": "0"
- },
- "top_provider": {
- "context_length": 24576,
- "is_moderated": false,
- "max_completion_tokens": 2048
- }
- },
- {
- "architecture": {
- "instruct_type": "chatml",
- "modality": "text->text",
- "tokenizer": "Llama2"
- },
- "context_length": 4096,
- "created": 1714435200,
- "description": "Arctic is a dense-MoE Hybrid transformer architecture pre-trained from scratch by the Snowflake AI Research Team. Arctic combines a 10B dense transformer model with a residual 128x3.66B MoE MLP resulting in 480B total and 17B active parameters chosen using a top-2 gating.\n\nTo read more about this model's release, [click here](https://www.snowflake.com/blog/arctic-open-efficient-foundation-language-models-snowflake/).",
- "id": "snowflake/snowflake-arctic-instruct",
- "name": "Snowflake: Arctic Instruct",
- "per_request_limits": null,
- "pricing": {
- "completion": "0.00000216",
- "image": "0",
- "prompt": "0.00000216",
- "request": "0"
- },
- "top_provider": {
- "context_length": 4096,
- "is_moderated": false,
- "max_completion_tokens": null
+ "max_completion_tokens": 2048
}
},
{
@@ -2208,30 +2184,6 @@
"max_completion_tokens": null
}
},
- {
- "architecture": {
- "instruct_type": "chatml",
- "modality": "text->text",
- "tokenizer": "Mistral"
- },
- "context_length": 65536,
- "created": 1712707200,
- "description": "Mixtral 8x22B is a large-scale language model from Mistral AI. It consists of 8 experts, each 22 billion parameters, with each token using 2 experts at a time.\n\nIt was released via [X](https://twitter.com/MistralAI/status/1777869263778291896).\n\n#moe",
- "id": "mistralai/mixtral-8x22b",
- "name": "Mistral: Mixtral 8x22B (base)",
- "per_request_limits": null,
- "pricing": {
- "completion": "0.00000108",
- "image": "0",
- "prompt": "0.00000108",
- "request": "0"
- },
- "top_provider": {
- "context_length": 65536,
- "is_moderated": false,
- "max_completion_tokens": null
- }
- },
{
"architecture": {
"instruct_type": null,
@@ -2568,126 +2520,6 @@
"max_completion_tokens": null
}
},
- {
- "architecture": {
- "instruct_type": "gemma",
- "modality": "text->text",
- "tokenizer": "Gemini"
- },
- "context_length": 8192,
- "created": 1708560000,
- "description": "Gemma by Google is an advanced, open-source language model family, leveraging the latest in decoder-only, text-to-text technology. It offers English language capabilities across text generation tasks like question answering, summarization, and reasoning. The Gemma 7B variant is comparable in performance to leading open source models.\n\nUsage of Gemma is subject to Google's [Gemma Terms of Use](https://ai.google.dev/gemma/terms).\n\n_These are free, rate-limited endpoints for [Gemma 7B](/models/google/gemma-7b-it). Outputs may be cached. Read about rate limits [here](/docs/limits)._",
- "id": "google/gemma-7b-it:free",
- "name": "Google: Gemma 7B (free)",
- "per_request_limits": null,
- "pricing": {
- "completion": "0",
- "image": "0",
- "prompt": "0",
- "request": "0"
- },
- "top_provider": {
- "context_length": 8192,
- "is_moderated": false,
- "max_completion_tokens": 4096
- }
- },
- {
- "architecture": {
- "instruct_type": "gemma",
- "modality": "text->text",
- "tokenizer": "Gemini"
- },
- "context_length": 8192,
- "created": 1708560000,
- "description": "Gemma by Google is an advanced, open-source language model family, leveraging the latest in decoder-only, text-to-text technology. It offers English language capabilities across text generation tasks like question answering, summarization, and reasoning. The Gemma 7B variant is comparable in performance to leading open source models.\n\nUsage of Gemma is subject to Google's [Gemma Terms of Use](https://ai.google.dev/gemma/terms).",
- "id": "google/gemma-7b-it",
- "name": "Google: Gemma 7B",
- "per_request_limits": null,
- "pricing": {
- "completion": "0.00000007",
- "image": "0",
- "prompt": "0.00000007",
- "request": "0"
- },
- "top_provider": {
- "context_length": 8192,
- "is_moderated": false,
- "max_completion_tokens": null
- }
- },
- {
- "architecture": {
- "instruct_type": "gemma",
- "modality": "text->text",
- "tokenizer": "Gemini"
- },
- "context_length": 8192,
- "created": 1708560000,
- "description": "Gemma by Google is an advanced, open-source language model family, leveraging the latest in decoder-only, text-to-text technology. It offers English language capabilities across text generation tasks like question answering, summarization, and reasoning. The Gemma 7B variant is comparable in performance to leading open source models.\n\nUsage of Gemma is subject to Google's [Gemma Terms of Use](https://ai.google.dev/gemma/terms).\n\n_These are higher-throughput endpoints for [Gemma 7B](/models/google/gemma-7b-it). They may have higher prices._",
- "id": "google/gemma-7b-it:nitro",
- "name": "Google: Gemma 7B (nitro)",
- "per_request_limits": null,
- "pricing": {
- "completion": "0.00000007",
- "image": "0",
- "prompt": "0.00000007",
- "request": "0"
- },
- "top_provider": {
- "context_length": 8192,
- "is_moderated": false,
- "max_completion_tokens": null
- }
- },
- {
- "architecture": {
- "instruct_type": "chatml",
- "modality": "text->text",
- "tokenizer": "Mistral"
- },
- "context_length": 8192,
- "created": 1708473600,
- "description": "This is the flagship 7B Hermes model, a Direct Preference Optimization (DPO) of [Teknium/OpenHermes-2.5-Mistral-7B](/models/teknium/openhermes-2.5-mistral-7b). It shows improvement across the board on all benchmarks tested - AGIEval, BigBench Reasoning, GPT4All, and TruthfulQA.\n\nThe model prior to DPO was trained on 1,000,000 instructions/chats of GPT-4 quality or better, primarily synthetic data as well as other high quality datasets.",
- "id": "nousresearch/nous-hermes-2-mistral-7b-dpo",
- "name": "Nous: Hermes 2 Mistral 7B DPO",
- "per_request_limits": null,
- "pricing": {
- "completion": "0.00000018",
- "image": "0",
- "prompt": "0.00000018",
- "request": "0"
- },
- "top_provider": {
- "context_length": 32768,
- "is_moderated": false,
- "max_completion_tokens": null
- }
- },
- {
- "architecture": {
- "instruct_type": "code-llama",
- "modality": "text->text",
- "tokenizer": "Llama2"
- },
- "context_length": 2048,
- "created": 1706572800,
- "description": "Code Llama is a family of large language models for code. This one is based on [Llama 2 70B](/models/meta-llama/llama-2-70b-chat) and provides zero-shot instruction-following ability for programming tasks.",
- "id": "meta-llama/codellama-70b-instruct",
- "name": "Meta: CodeLlama 70B Instruct",
- "per_request_limits": null,
- "pricing": {
- "completion": "0.00000081",
- "image": "0",
- "prompt": "0.00000081",
- "request": "0"
- },
- "top_provider": {
- "context_length": 4096,
- "is_moderated": false,
- "max_completion_tokens": null
- }
- },
{
"architecture": {
"instruct_type": "rwkv",
@@ -2760,30 +2592,6 @@
"max_completion_tokens": 4096
}
},
- {
- "architecture": {
- "instruct_type": "chatml",
- "modality": "text->text",
- "tokenizer": "Mistral"
- },
- "context_length": 32768,
- "created": 1705363200,
- "description": "Nous Hermes 2 Mixtral 8x7B SFT is the supervised finetune only version of [the Nous Research model](/models/nousresearch/nous-hermes-2-mixtral-8x7b-dpo) trained over the [Mixtral 8x7B MoE LLM](/models/mistralai/mixtral-8x7b).\n\nThe model was trained on over 1,000,000 entries of primarily GPT-4 generated data, as well as other high quality data from open datasets across the AI landscape, achieving state of the art performance on a variety of tasks.\n\n#moe",
- "id": "nousresearch/nous-hermes-2-mixtral-8x7b-sft",
- "name": "Nous: Hermes 2 Mixtral 8x7B SFT",
- "per_request_limits": null,
- "pricing": {
- "completion": "0.00000054",
- "image": "0",
- "prompt": "0.00000054",
- "request": "0"
- },
- "top_provider": {
- "context_length": 32768,
- "is_moderated": false,
- "max_completion_tokens": null
- }
- },
{
"architecture": {
"instruct_type": "chatml",
@@ -3144,30 +2952,6 @@
"max_completion_tokens": null
}
},
- {
- "architecture": {
- "instruct_type": "none",
- "modality": "text->text",
- "tokenizer": "Mistral"
- },
- "context_length": 32768,
- "created": 1702080000,
- "description": "This is the base model variant of the [StripedHyena series](/models?q=stripedhyena), developed by Together.\n\nStripedHyena uses a new architecture that competes with traditional Transformers, particularly in long-context data processing. It combines attention mechanisms with gated convolutions for improved speed, efficiency, and scaling. This model marks an advancement in AI architecture for sequence modeling tasks.",
- "id": "togethercomputer/stripedhyena-hessian-7b",
- "name": "StripedHyena Hessian 7B (base)",
- "per_request_limits": null,
- "pricing": {
- "completion": "0.00000018",
- "image": "0",
- "prompt": "0.00000018",
- "request": "0"
- },
- "top_provider": {
- "context_length": 32768,
- "is_moderated": false,
- "max_completion_tokens": null
- }
- },
{
"architecture": {
"instruct_type": "alpaca",
@@ -3240,54 +3024,6 @@
"max_completion_tokens": 2048
}
},
- {
- "architecture": {
- "instruct_type": "none",
- "modality": "text->text",
- "tokenizer": "Yi"
- },
- "context_length": 4096,
- "created": 1701907200,
- "description": "The Yi series models are large language models trained from scratch by developers at [01.AI](https://01.ai/). This is the base 6B parameter model.",
- "id": "01-ai/yi-6b",
- "name": "Yi 6B (base)",
- "per_request_limits": null,
- "pricing": {
- "completion": "0.00000018",
- "image": "0",
- "prompt": "0.00000018",
- "request": "0"
- },
- "top_provider": {
- "context_length": 4096,
- "is_moderated": false,
- "max_completion_tokens": null
- }
- },
- {
- "architecture": {
- "instruct_type": "none",
- "modality": "text->text",
- "tokenizer": "Yi"
- },
- "context_length": 4096,
- "created": 1701907200,
- "description": "The Yi series models are large language models trained from scratch by developers at [01.AI](https://01.ai/). This is the base 34B parameter model.",
- "id": "01-ai/yi-34b",
- "name": "Yi 34B (base)",
- "per_request_limits": null,
- "pricing": {
- "completion": "0.00000072",
- "image": "0",
- "prompt": "0.00000072",
- "request": "0"
- },
- "top_provider": {
- "context_length": 4096,
- "is_moderated": false,
- "max_completion_tokens": null
- }
- },
{
"architecture": {
"instruct_type": "chatml",
@@ -3336,30 +3072,6 @@
"max_completion_tokens": 4096
}
},
- {
- "architecture": {
- "instruct_type": "airoboros",
- "modality": "text->text",
- "tokenizer": "Mistral"
- },
- "context_length": 8192,
- "created": 1701734400,
- "description": "The Capybara series is a collection of datasets and models made by fine-tuning on data created by Nous, mostly in-house.\n\nV1.9 uses unalignment techniques for more consistent and dynamic control. It also leverages a significantly better foundation model, [Mistral 7B](/models/mistralai/mistral-7b-instruct-v0.1).",
- "id": "nousresearch/nous-capybara-7b",
- "name": "Nous: Capybara 7B",
- "per_request_limits": null,
- "pricing": {
- "completion": "0.00000018",
- "image": "0",
- "prompt": "0.00000018",
- "request": "0"
- },
- "top_provider": {
- "context_length": 8192,
- "is_moderated": false,
- "max_completion_tokens": null
- }
- },
{
"architecture": {
"instruct_type": "openchat",
@@ -3720,6 +3432,30 @@
"max_completion_tokens": null
}
},
+ {
+ "architecture": {
+ "instruct_type": null,
+ "modality": "text->text",
+ "tokenizer": "Router"
+ },
+ "context_length": 200000,
+ "created": 1699401600,
+ "description": "Depending on their size, subject, and complexity, your prompts will be sent to [Llama 3 70B Instruct](/models/meta-llama/llama-3-70b-instruct), [Claude 3.5 Sonnet (self-moderated)](/models/anthropic/claude-3.5-sonnet:beta) or [GPT-4o](/models/openai/gpt-4o). To see which model was used, visit [Activity](/activity).\n\nA major redesign of this router is coming soon. Stay tuned on [Discord](https://discord.gg/fVyRaUDgxW) for updates.",
+ "id": "openrouter/auto",
+ "name": "Auto (best for prompt)",
+ "per_request_limits": null,
+ "pricing": {
+ "completion": "-1",
+ "image": "-1",
+ "prompt": "-1",
+ "request": "-1"
+ },
+ "top_provider": {
+ "context_length": null,
+ "is_moderated": false,
+ "max_completion_tokens": null
+ }
+ },
{
"architecture": {
"instruct_type": null,
@@ -3816,54 +3552,6 @@
"max_completion_tokens": 32768
}
},
- {
- "architecture": {
- "instruct_type": "chatml",
- "modality": "text->text",
- "tokenizer": "Mistral"
- },
- "context_length": 8192,
- "created": 1698796800,
- "description": "Trained on 900k instructions, surpasses all previous versions of Hermes 13B and below, and matches 70B on some benchmarks. Hermes 2 has strong multiturn chat skills and system prompt capabilities.",
- "id": "teknium/openhermes-2-mistral-7b",
- "name": "OpenHermes 2 Mistral 7B",
- "per_request_limits": null,
- "pricing": {
- "completion": "0.00000018",
- "image": "0",
- "prompt": "0.00000018",
- "request": "0"
- },
- "top_provider": {
- "context_length": 8192,
- "is_moderated": false,
- "max_completion_tokens": null
- }
- },
- {
- "architecture": {
- "instruct_type": "chatml",
- "modality": "text->text",
- "tokenizer": "Mistral"
- },
- "context_length": 8192,
- "created": 1698624000,
- "description": "A fine-tune of Mistral using the OpenOrca dataset. First 7B model to beat all other models <30B.",
- "id": "open-orca/mistral-7b-openorca",
- "name": "Mistral OpenOrca 7B",
- "per_request_limits": null,
- "pricing": {
- "completion": "0.00000018",
- "image": "0",
- "prompt": "0.00000018",
- "request": "0"
- },
- "top_provider": {
- "context_length": 8192,
- "is_moderated": false,
- "max_completion_tokens": null
- }
- },
{
"architecture": {
"instruct_type": "airoboros",
@@ -4080,54 +3768,6 @@
"max_completion_tokens": null
}
},
- {
- "architecture": {
- "instruct_type": "alpaca",
- "modality": "text->text",
- "tokenizer": "Llama2"
- },
- "context_length": 4096,
- "created": 1692489600,
- "description": "A fine-tune of CodeLlama-34B on an internal dataset that helps it exceed GPT-4 on some benchmarks, including HumanEval.",
- "id": "phind/phind-codellama-34b",
- "name": "Phind: CodeLlama 34B v2",
- "per_request_limits": null,
- "pricing": {
- "completion": "0.00000072",
- "image": "0",
- "prompt": "0.00000072",
- "request": "0"
- },
- "top_provider": {
- "context_length": 16384,
- "is_moderated": false,
- "max_completion_tokens": null
- }
- },
- {
- "architecture": {
- "instruct_type": "llama2",
- "modality": "text->text",
- "tokenizer": "Llama2"
- },
- "context_length": 8192,
- "created": 1692489600,
- "description": "Code Llama is built upon Llama 2 and excels at filling in code, handling extensive input contexts, and following programming instructions without prior training for various programming tasks.",
- "id": "meta-llama/codellama-34b-instruct",
- "name": "Meta: CodeLlama 34B Instruct",
- "per_request_limits": null,
- "pricing": {
- "completion": "0.00000072",
- "image": "0",
- "prompt": "0.00000072",
- "request": "0"
- },
- "top_provider": {
- "context_length": 8192,
- "is_moderated": false,
- "max_completion_tokens": null
- }
- },
{
"architecture": {
"instruct_type": "zephyr",
@@ -4357,15 +3997,15 @@
"name": "ReMM SLERP 13B",
"per_request_limits": null,
"pricing": {
- "completion": "0.00000027",
+ "completion": "0.000001125",
"image": "0",
- "prompt": "0.00000027",
+ "prompt": "0.000001125",
"request": "0"
},
"top_provider": {
"context_length": 4096,
"is_moderated": false,
- "max_completion_tokens": null
+ "max_completion_tokens": 400
}
},
{
@@ -4512,30 +4152,6 @@
"max_completion_tokens": 400
}
},
- {
- "architecture": {
- "instruct_type": "llama2",
- "modality": "text->text",
- "tokenizer": "Llama2"
- },
- "context_length": 4096,
- "created": 1687219200,
- "description": "The flagship, 70 billion parameter language model from Meta, fine tuned for chat completions. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.",
- "id": "meta-llama/llama-2-70b-chat",
- "name": "Meta: Llama v2 70B Chat",
- "per_request_limits": null,
- "pricing": {
- "completion": "0.00000081",
- "image": "0",
- "prompt": "0.00000081",
- "request": "0"
- },
- "top_provider": {
- "context_length": 4096,
- "is_moderated": false,
- "max_completion_tokens": null
- }
- },
{
"architecture": {
"instruct_type": "llama2",
diff --git a/public/public.jpg b/public/public.jpg
new file mode 100644
index 000000000..fb78f556e
Binary files /dev/null and b/public/public.jpg differ
diff --git a/src/components/ImportExportChat/ImportChat.tsx b/src/components/ImportExportChat/ImportChat.tsx
index 53cf05da8..f017f201e 100644
--- a/src/components/ImportExportChat/ImportChat.tsx
+++ b/src/components/ImportExportChat/ImportChat.tsx
@@ -8,6 +8,7 @@ import {
importOpenAIChatExport,
isLegacyImport,
isOpenAIContent,
+ PartialImportError,
validateAndFixChats,
validateExportV1,
} from '@utils/import';
@@ -34,6 +35,7 @@ const ImportChat = () => {
const handleFileUpload = () => {
if (!inputRef || !inputRef.current) return;
const file = inputRef.current.files?.[0];
+ var shouldAllowPartialImport = false;
if (file) {
const reader = new FileReader();
@@ -56,7 +58,10 @@ const ImportChat = () => {
while (true) {
try {
if (type === 'OpenAIContent' || isOpenAIContent(chatsToImport)) {
- const chats = importOpenAIChatExport(chatsToImport);
+ const chats = importOpenAIChatExport(
+ chatsToImport,
+ shouldAllowPartialImport
+ );
const prevChats: ChatInterface[] = JSON.parse(
JSON.stringify(useStore.getState().chats)
);
@@ -315,6 +320,25 @@ const ImportChat = () => {
};
}
}
+ } else if (error instanceof PartialImportError) {
+ // Handle PartialImportError
+ const confirmMessage = t('partialImportWarning', {
+ message: error.message,
+ });
+
+ if (window.confirm(confirmMessage)) {
+ shouldAllowPartialImport = true;
+ // User chose to continue with the partial import
+ return await importData(parsedData, true, type);
+ } else {
+ // User chose not to proceed with the partial import
+ return {
+ success: false,
+ message: t('notifications.nothingImported', {
+ ns: 'import',
+ }),
+ };
+ }
} else {
return { success: false, message: (error as Error).message };
}
diff --git a/src/components/ImportExportChat/ImportChatOpenAI.tsx b/src/components/ImportExportChat/ImportChatOpenAI.tsx
deleted file mode 100644
index 31e3581bb..000000000
--- a/src/components/ImportExportChat/ImportChatOpenAI.tsx
+++ /dev/null
@@ -1,80 +0,0 @@
-// TODO: NOT USED, TO BE REMOVED R: KISS, DRY
-import React, { useRef } from 'react';
-import { useTranslation } from 'react-i18next';
-
-import useStore from '@store/store';
-
-import { importOpenAIChatExport } from '@utils/import';
-
-import { ChatInterface } from '@type/chat';
-
-const ImportChatOpenAI = ({
- setIsModalOpen,
-}: {
-  setIsModalOpen: React.Dispatch<React.SetStateAction<boolean>>;
-}) => {
- const { t } = useTranslation();
-
-  const inputRef = useRef<HTMLInputElement>(null);
-
- const setToastStatus = useStore((state) => state.setToastStatus);
- const setToastMessage = useStore((state) => state.setToastMessage);
- const setToastShow = useStore((state) => state.setToastShow);
- const setChats = useStore.getState().setChats;
-
- const handleFileUpload = () => {
- if (!inputRef || !inputRef.current) return;
- const file = inputRef.current.files?.[0];
- if (!file) return;
-
- const reader = new FileReader();
-
- reader.onload = (event) => {
- const data = event.target?.result as string;
-
- try {
- const parsedData = JSON.parse(data);
- const chats = importOpenAIChatExport(parsedData);
- const prevChats: ChatInterface[] = JSON.parse(
- JSON.stringify(useStore.getState().chats)
- );
- setChats(chats.concat(prevChats));
-
- setToastStatus('success');
- setToastMessage('Imported successfully!');
- setIsModalOpen(false);
- } catch (error: unknown) {
- setToastStatus('error');
- setToastMessage(`Invalid format! ${(error as Error).message}`);
- }
- setToastShow(true);
- };
-
- reader.readAsText(file);
- };
-
- return (
- <>
-
- {t('import')} OpenAI ChatGPT {t('export')}
-
-
-
-
- >
- );
-};
-
-export default ImportChatOpenAI;
diff --git a/src/components/ImportExportChat/ImportExportChat.tsx b/src/components/ImportExportChat/ImportExportChat.tsx
index b51c27daf..3ff0974de 100644
--- a/src/components/ImportExportChat/ImportExportChat.tsx
+++ b/src/components/ImportExportChat/ImportExportChat.tsx
@@ -6,7 +6,6 @@ import PopupModal from '@components/PopupModal';
import ImportChat from './ImportChat';
import ExportChat from './ExportChat';
-import ImportChatOpenAI from './ImportChatOpenAI';
const ImportExportChat = () => {
const { t } = useTranslation();
diff --git a/src/utils/import.ts b/src/utils/import.ts
index 2b2ca381c..e8b25ea99 100644
--- a/src/utils/import.ts
+++ b/src/utils/import.ts
@@ -18,6 +18,7 @@ import {
} from '@constants/chat';
import { ExportV1, OpenAIChat, OpenAIPlaygroundJSON } from '@type/export';
import { modelOptions } from '@constants/modelLoader';
+import i18next from 'i18next';
export const validateAndFixChats = (chats: any): chats is ChatInterface[] => {
if (!Array.isArray(chats)) return false;
@@ -107,14 +108,20 @@ const isContentInterface = (content: any): content is ContentInterface => {
};
export const isOpenAIContent = (content: any) => {
- return isOpenAIChat(content) || isOpenAIPlaygroundJSON(content) || isOpenAIDataExport(content);
+ return (
+ isOpenAIChat(content) ||
+ isOpenAIPlaygroundJSON(content) ||
+ isOpenAIDataExport(content)
+ );
};
const isOpenAIChat = (content: any): content is OpenAIChat => {
return typeof content === 'object' && 'mapping' in content;
};
const isOpenAIDataExport = (content: any): content is OpenAIChat => {
- return (Array.isArray(content)) && content.length > 0 && (isOpenAIChat(content[0]));
+ return (
+ Array.isArray(content) && content.length > 0 && isOpenAIChat(content[0])
+ );
};
const isOpenAIPlaygroundJSON = (
content: any
@@ -122,44 +129,177 @@ const isOpenAIPlaygroundJSON = (
return typeof content === 'object' && 'messages' in content;
};
+// Define the custom error class
+export class PartialImportError extends Error {
+ constructor(message: string, public result: ChatInterface) {
+ super(message);
+ this.name = 'PartialImportError';
+ }
+}
+
+export const convertOpenAIToBetterChatGPTFormatPartialOK = (
+ openAIChatExport: any
+): ChatInterface => {
+ return convertOpenAIToBetterChatGPTFormat(openAIChatExport, true);
+};
+
+export const convertOpenAIToBetterChatGPTFormatPartialNTY = (
+ openAIChatExport: any
+): ChatInterface => {
+ return convertOpenAIToBetterChatGPTFormat(openAIChatExport, false);
+};
// Convert OpenAI chat format to BetterChatGPT format
export const convertOpenAIToBetterChatGPTFormat = (
- openAIChatExport: any
+ openAIChatExport: any,
+ shouldAllowPartialImport: boolean
): ChatInterface => {
const messages: MessageInterface[] = [];
+ let maxDepth = -1;
+ const deepestPathIds: string[] = []; // To record IDs traveled for the deepest part
+ const upwardPathIds: string[] = []; // To record IDs traveled upwards
+ const messageIds: string[] = []; // To record IDs that go into messages
+ const emptyOrNullMessageIds: string[] = []; // To record IDs with empty or null messages
+ let emptyOrNullMessagesCount = 0; // Counter for empty or null messages
if (isOpenAIChat(openAIChatExport)) {
- // Traverse the chat tree and collect messages for the mapping structure
- const traverseTree = (id: string) => {
+ let deepestNode: any = null;
+
+ // Traverse the chat tree and find the deepest node
+ const traverseTree = (id: string, currentDepth: number) => {
const node = openAIChatExport.mapping[id];
+ console.log(`Traversing node with id ${id} at depth ${currentDepth}`);
+
+ // If the current depth is greater than maxDepth, update deepestNode and maxDepth
+ if (currentDepth > maxDepth) {
+ deepestNode = node;
+ if (!node.parent) {
+ console.log('no parent for node with id ' + node.id);
+ }
+ maxDepth = currentDepth;
+ }
+
+ // Traverse all child nodes
+ for (const childId of node.children) {
+ traverseTree(childId, currentDepth + 1);
+ }
+ };
+
+ // Start traversing the tree from the root node
+ const rootNode =
+ openAIChatExport.mapping[Object.keys(openAIChatExport.mapping)[0]];
+ traverseTree(rootNode.id, 0);
+
+ // Now backtrack from the deepest node to the root and collect messages
+ let currentDepth = 0;
+ while (deepestNode) {
+ deepestPathIds.push(deepestNode.id); // Record the ID of the deepest part
+ console.log(`Backtracking node with id ${deepestNode.id} at depth ${currentDepth}`);
+
+ if (deepestNode.message) {
+ const { role } = deepestNode.message.author;
+ const content = deepestNode.message.content;
- // Extract message if it exists
- if (node.message) {
- const { role } = node.message.author;
- const content = node.message.content;
if (Array.isArray(content.parts)) {
const textContent = content.parts.join('') || '';
if (textContent.length > 0) {
- messages.push({
+ // Insert each message at the beginning of the array to maintain order from root to deepest node
+ messages.unshift({
role,
content: [{ type: 'text', text: textContent }],
});
+ messageIds.push(deepestNode.id);
+ console.log(`Node with id ${deepestNode.id} added to messages.`);
+ } else {
+ console.log(`Node with id ${deepestNode.id} has empty text content.`);
+ emptyOrNullMessagesCount++;
+ emptyOrNullMessageIds.push(deepestNode.id);
}
} else if (isContentInterface(content)) {
- messages.push({ role, content: [content] });
+ // Insert each message at the beginning of the array
+ messages.unshift({ role, content: [content] });
+ messageIds.push(deepestNode.id);
+ console.log(`Node with id ${deepestNode.id} added to messages.`);
+ } else {
+ console.log(`Node with id ${deepestNode.id} has invalid content.`);
+ emptyOrNullMessagesCount++;
+ emptyOrNullMessageIds.push(deepestNode.id);
}
+ } else {
+ console.log(`Node with id ${deepestNode.id} has no message.`);
+ emptyOrNullMessagesCount++;
+ emptyOrNullMessageIds.push(deepestNode.id);
}
- // Traverse the last child node if any children exist
- if (node.children.length > 0) {
- traverseTree(node.children[node.children.length - 1]);
- }
- };
+ // Move up to the parent node
+      const parentNodeId = deepestNode.parent || null;
+ console.log(`Moving from node ${deepestNode.id} to parent node ${parentNodeId}`);
+ deepestNode = parentNodeId ? openAIChatExport.mapping[parentNodeId] : null;
+ currentDepth++;
+ }
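The two passes above (depth-first search for the deepest node, then backtracking parent links while `unshift`ing) leave `messages` ordered from root to deepest leaf. A self-contained sketch of that scheme over a toy mapping (the node shape is simplified from the OpenAI export format):

```typescript
interface Node {
  id: string;
  parent: string | null;
  children: string[];
  text: string | null; // stand-in for the node's message content
}

type Mapping = Record<string, Node>;

// Pass 1: depth-first search for the deepest node.
// Pass 2: walk parent links back to the root, unshifting so texts
// come out ordered from root to deepest leaf; empty nodes are skipped.
function collectDeepestBranch(mapping: Mapping, rootId: string): string[] {
  let deepest: Node = mapping[rootId];
  let maxDepth = -1;

  const dfs = (id: string, depth: number): void => {
    const node = mapping[id];
    if (depth > maxDepth) {
      maxDepth = depth;
      deepest = node;
    }
    for (const childId of node.children) dfs(childId, depth + 1);
  };
  dfs(rootId, 0);

  const texts: string[] = [];
  let cur: Node | null = deepest;
  while (cur) {
    if (cur.text !== null) texts.unshift(cur.text);
    cur = cur.parent ? mapping[cur.parent] : null;
  }
  return texts;
}

// Toy tree: root -> a -> b (depth 2) and root -> c (depth 1);
// the deepest branch root -> a -> b wins.
const mapping: Mapping = {
  root: { id: 'root', parent: null, children: ['a', 'c'], text: null },
  a: { id: 'a', parent: 'root', children: ['b'], text: 'hi' },
  b: { id: 'b', parent: 'a', children: [], text: 'there' },
  c: { id: 'c', parent: 'root', children: [], text: 'other' },
};
```

Unlike the earlier code (which followed each node's last child), this picks the globally deepest branch, so sibling branches like `c` are dropped rather than silently mixed in.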
- // Start traversing the tree from the root node
- const rootNode =
- openAIChatExport.mapping[Object.keys(openAIChatExport.mapping)[0]];
- traverseTree(rootNode.id);
+    // deepestPathIds was recorded deepest-to-root; reverse it so upwardPathIds runs from root to deepest node
+ for (let i = deepestPathIds.length - 1; i >= 0; i--) {
+ upwardPathIds.push(deepestPathIds[i]);
+ }
+
+ console.log('Deepest Path IDs:', deepestPathIds);
+ console.log('Upward Path IDs:', upwardPathIds);
+ console.log('Message IDs:', messageIds);
+ console.log('Empty or Null Message IDs:', emptyOrNullMessageIds);
+ console.log('Empty or Null Messages Count:', emptyOrNullMessagesCount);
+ console.log('messages.length:', messages.length);
+
+ // Show differences
+ const diffDeepestToMessages = deepestPathIds.filter(id => !messageIds.includes(id));
+ console.log('Difference between Deepest Path IDs and Message IDs:', diffDeepestToMessages);
+
+ // Check if the difference between diffDeepestToMessages and emptyOrNullMessageIds is empty
+ const diffDeepestToMessagesAndEmpty = diffDeepestToMessages.filter(id => !emptyOrNullMessageIds.includes(id));
+ console.log('Difference between diffDeepestToMessages and Empty or Null Message IDs:', diffDeepestToMessagesAndEmpty);
+
+ if (!shouldAllowPartialImport) {
+ // If the difference between diffDeepestToMessages and emptyOrNullMessageIds is not empty, throw PartialImportError
+ if (diffDeepestToMessagesAndEmpty.length > 0) {
+ const config: ConfigInterface = {
+ ..._defaultChatConfig,
+ ...((openAIChatExport as any).temperature !== undefined && {
+ temperature: (openAIChatExport as any).temperature,
+ }),
+ ...((openAIChatExport as any).max_tokens !== undefined && {
+ max_tokens: (openAIChatExport as any).max_tokens,
+ }),
+ ...((openAIChatExport as any).top_p !== undefined && {
+ top_p: (openAIChatExport as any).top_p,
+ }),
+ ...((openAIChatExport as any).frequency_penalty !== undefined && {
+ frequency_penalty: (openAIChatExport as any).frequency_penalty,
+ }),
+ ...((openAIChatExport as any).presence_penalty !== undefined && {
+ presence_penalty: (openAIChatExport as any).presence_penalty,
+ }),
+ ...((openAIChatExport as any).model !== undefined && {
+ model: (openAIChatExport as any).model,
+ }),
+ };
+
+ const result: ChatInterface = {
+ id: uuidv4(),
+ title: openAIChatExport.title || 'Untitled Chat',
+ messages,
+ config,
+ titleSet: true,
+ imageDetail: _defaultImageDetail,
+ };
+ throw new PartialImportError(
+ i18next.t('partialImportMessages', {
+ ns: 'import',
+ total: deepestPathIds.length,
+ count: messageIds.length,
+ }),
+ result
+ );
+ }
+ }
} else if (isOpenAIPlaygroundJSON(openAIChatExport)) {
// Handle the playground export format
openAIChatExport.messages.forEach((message) => {
@@ -227,11 +367,23 @@ export const convertOpenAIToBetterChatGPTFormat = (
};
// Import OpenAI chat data and convert it to BetterChatGPT format
-export const importOpenAIChatExport = (openAIChatExport: any) => {
+export const importOpenAIChatExport = (
+ openAIChatExport: any,
+ shouldAllowPartialImport: boolean
+) => {
if (Array.isArray(openAIChatExport)) {
- return openAIChatExport.map(convertOpenAIToBetterChatGPTFormat);
+ if (shouldAllowPartialImport) {
+ return openAIChatExport.map(convertOpenAIToBetterChatGPTFormatPartialOK);
+ } else {
+ return openAIChatExport.map(convertOpenAIToBetterChatGPTFormatPartialNTY);
+ }
} else if (typeof openAIChatExport === 'object') {
- return [convertOpenAIToBetterChatGPTFormat(openAIChatExport)];
+ return [
+ convertOpenAIToBetterChatGPTFormat(
+ openAIChatExport,
+ shouldAllowPartialImport
+ ),
+ ];
}
return [];
};
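The strict path throws `PartialImportError` with the partially converted chat attached, so the caller can decide whether to keep it. A minimal self-contained sketch of that error-with-payload pattern (`MiniChat` is a hypothetical stand-in for the project's `ChatInterface`):

```typescript
// Hypothetical stand-in for the project's ChatInterface
interface MiniChat {
  id: string;
  title: string;
  messageCount: number;
}

// Error-with-payload, mirroring PartialImportError
class PartialImportError extends Error {
  constructor(message: string, public result: MiniChat) {
    super(message);
    this.name = 'PartialImportError';
    // Restore the prototype chain so instanceof works even when
    // TypeScript targets ES5
    Object.setPrototypeOf(this, PartialImportError.prototype);
  }
}

// Strict conversion: throws when messages were skipped, but still
// attaches the partially converted chat to the error
function convertStrict(total: number, imported: number): MiniChat {
  const chat: MiniChat = {
    id: 'chat-1',
    title: 'Untitled Chat',
    messageCount: imported,
  };
  if (imported < total) {
    throw new PartialImportError(
      `Imported ${imported} of ${total} messages`,
      chat
    );
  }
  return chat;
}

// A caller catches the error and keeps the partial result,
// flagging it so the UI can warn the user
function tryImport(
  total: number,
  imported: number
): { chat: MiniChat; partial: boolean } {
  try {
    return { chat: convertStrict(total, imported), partial: false };
  } catch (e) {
    if (e instanceof PartialImportError) {
      return { chat: e.result, partial: true };
    }
    throw e;
  }
}
```

This keeps the strict converter's signature clean: success returns the chat, and the "succeeded with losses" case travels on the exception instead of widening the return type.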