
Palico AI - LLM Tech Stack for Rapid Iteration


Building an LLM application involves continuously trying out different ideas (models, prompts, architectures). Palico provides an integrated tech stack that helps you iterate on your LLM application quickly.

With Palico, you can:

  • ✅  Build any application in code with complete flexibility (docs)
  • ✅  Integrate with any external libraries like LangChain, LlamaIndex, Portkey, and more (docs)
  • ✅  Preview changes instantly with hot-reload and Playground UI (docs)
  • ✅  Systematically improve performance with Experiments (docs)
  • ✅  Debug issues with comprehensive logs and tracing (docs)
  • ✅  Deploy your application behind a REST API (docs)
  • ✅  Manage your application with a UI control panel (docs)

Tip

⭐️ Star this repo to get release notifications for new features.


⚡ Get started in seconds ⚡

```bash
npx palico init <project-name>
```

Check out our quickstart guide.

Overview of your Palico App

*(Video: Palico.Init.Overview.mp4)*

🛠️ Building your Application

Build your application with complete flexibility

With Palico, you have complete control over the implementation details of your LLM application. Building an LLM application with Palico just involves implementing the Agent interface. Here's an example:

```typescript
import {
  Agent,
  AgentResponse,
  ConversationContext,
  ConversationRequestContent,
} from "@palico-ai/app";

class MyLLMApp implements Agent {
  async chat(
    content: ConversationRequestContent,
    context: ConversationContext
  ): Promise<AgentResponse> {
    // Your LLM application logic
    // 1. Pre-processing
    // 2. Build your prompt
    // 3. Call your LLM model
    // 4. Post-processing
    return {
      // 5. Return a response to caller
    };
  }
}
```

Learn more about building your application with Palico (docs).
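
For a more concrete picture, here's a minimal sketch of an agent that calls the OpenAI SDK directly. The `userMessage` and `message` field names on the Palico request/response types are assumptions for this example — check the docs for the exact shapes:

```typescript
import OpenAI from "openai";
import {
  Agent,
  AgentResponse,
  ConversationContext,
  ConversationRequestContent,
} from "@palico-ai/app";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

class MyOpenAIApp implements Agent {
  async chat(
    content: ConversationRequestContent,
    context: ConversationContext
  ): Promise<AgentResponse> {
    // 1–2. Pre-process the request and build a prompt
    // (assumes the incoming user message lives on content.userMessage)
    const completion = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: String(content.userMessage ?? "") },
      ],
    });
    // 3–5. Call the model, post-process, and return the reply
    // (assumes AgentResponse carries a `message` string)
    return {
      message: completion.choices[0].message.content ?? "",
    };
  }
}
```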

Integrates with your favorite tools and libraries

Since you own the implementation details, you can use Palico with most other external tools and libraries.

| Tools or Libraries | Supported |
| ------------------ | --------- |
| LangChain          | ✅        |
| LlamaIndex         | ✅        |
| Portkey            | ✅        |
| OpenAI             | ✅        |
| Anthropic          | ✅        |
| Cohere             | ✅        |
| Azure              | ✅        |
| AWS Bedrock        | ✅        |
| GCP Vertex         | ✅        |
| Pinecone           | ✅        |
| PG Vector          | ✅        |
| Chroma             | ✅        |

Learn more from docs.
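
As one example, here's a sketch of wrapping a LangChain call in a helper you could invoke from your agent's `chat()` method:

```typescript
import { ChatOpenAI } from "@langchain/openai";

// A sketch: Palico doesn't prescribe this pattern — it simply doesn't get
// in the way of using LangChain (or any other library) inside your agent.
async function summarizeWithLangChain(text: string): Promise<string> {
  const model = new ChatOpenAI({ model: "gpt-4o-mini" });
  const reply = await model.invoke(`Summarize in one sentence:\n${text}`);
  return String(reply.content);
}
```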

Instantly preview your changes

Make a code change and instantly preview it locally in our Playground UI.

*(Video: Preview.Application.mp4)*

Easily swap models, prompts, anything and everything

Working on an LLM application involves testing different variations of models, prompts, and application logic. Palico helps you build an interchangeable application layer using a feature-flag-like construct called AppConfig. With AppConfig, you can easily swap models, prompts, or any other logic in your application layer.

Learn more about AppConfig.
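
For illustration, here's a rough sketch of how application code might branch on such a config. The config shape below is made up for this example; see the AppConfig docs for how configs are actually defined and delivered:

```typescript
// Hypothetical config shape — defined by you, varied per experiment/request
interface MyAppConfig {
  model: string; // e.g. "gpt-4o-mini" vs. "claude-3-5-sonnet"
  promptVariant: "concise" | "detailed";
}

// Swap prompts without touching the rest of your application logic
function systemPrompt(config: MyAppConfig): string {
  return config.promptVariant === "concise"
    ? "Answer in one short paragraph."
    : "Walk through your reasoning step by step.";
}
```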

🔄 Improving Performance with Experiments

Palico helps you create an iterative loop to systematically improve the performance of your LLM application using experiments.


With experiments, you can:

  1. Set up a list of test cases that model the expected behavior of your application
  2. Make a change to your application
  3. Run an evaluation to measure how well your application performs against your test cases
  4. Iterate

Learn more about experiments.
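
As a rough illustration of this loop (not Palico's actual experiments API), a hand-rolled version might look like this:

```typescript
// Illustrative only — Palico's experiments feature provides the real tooling.
interface TestCase {
  input: string;         // what the user would say
  mustContain: string[]; // substrings a good response should include
}

const testCases: TestCase[] = [
  { input: "What is your refund policy?", mustContain: ["30 days"] },
  { input: "How do I reset my password?", mustContain: ["reset link"] },
];

// Run every test case through the app and score the responses
async function evaluate(run: (input: string) => Promise<string>) {
  for (const tc of testCases) {
    const output = await run(tc.input);
    const passed = tc.mustContain.every((s) => output.includes(s));
    console.log(`${passed ? "PASS" : "FAIL"}: ${tc.input}`);
  }
}
```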

🚀 Going to Production

You can deploy your Palico app to any cloud provider using Docker or use our managed hosting (coming soon). You can then use our ClientSDK or REST API to communicate with your LLM application.
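
For instance, here's a sketch of calling a deployed app over plain HTTP. The endpoint path, payload shape, and auth header below are hypothetical — the ClientSDK and REST API docs define the real contract:

```typescript
// Hypothetical request — consult the REST API docs for the actual endpoint
const response = await fetch("https://your-app.example.com/agent/chat", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.PALICO_API_KEY}`, // hypothetical auth
  },
  body: JSON.stringify({ userMessage: "Hello!" }),
});
console.log(await response.json());
```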

Learn more from docs.

🤝 Contributing

The easiest way to contribute is to pick an issue with the good first issue tag 💪. Read the contribution guidelines here.

Bug Report? File here | Feature Request? File here