Releases: plandex-ai/plandex
Release server/v1.1.1
- Improvements to stream handling that greatly reduce flickering in the terminal when streaming a plan, especially when many files are being built simultaneously. CPU usage is also reduced on both the client and server side.
- The Claude 3.5 Sonnet model and model pack (via OpenRouter.ai) are now built-in.
Release cli/v1.1.1
Fix for terminal flickering when streaming plans 📺
Improvements to stream handling that greatly reduce flickering in the terminal when streaming a plan, especially when many files are being built simultaneously. CPU usage is also reduced on both the client and server side.
Claude 3.5 Sonnet model pack is now built-in 🧠
You can now easily use Claude 3.5 Sonnet with Plandex through OpenRouter.ai.
- Create an account at OpenRouter.ai if you don't already have one.
- Generate an OpenRouter API key.
- Run `export OPENROUTER_API_KEY=...` in your terminal.
- Run `plandex set-model`, select 'choose a model pack to change all roles at once', and then choose either `anthropic-claude-3.5-sonnet` (which uses Claude 3.5 Sonnet for all heavy lifting and Claude 3 Haiku for lighter tasks) or `anthropic-claude-3.5-sonnet-gpt-4o` (which uses Claude 3.5 Sonnet for planning and summarization, gpt-4o for builds, and gpt-3.5-turbo for lighter tasks).

Remember, you can run `plandex model-packs` for details on all built-in model packs.
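Put together, the setup steps above amount to something like this in the terminal (the key value is a placeholder for your own OpenRouter API key):

```bash
# Make the OpenRouter API key available to Plandex in this shell session
export OPENROUTER_API_KEY=your-openrouter-key

# Then switch model packs interactively
plandex set-model
```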
Release server/v1.1.0
- Give notes added to context with `plandex load -n 'some note'` automatically generated names in the `context ls` list.
- Fixes for summarization and auto-continue issues that could cause Plandex to lose track of where it is in the plan and repeat tasks or do tasks out of order, especially when using `tell` and `continue` after the initial `tell`.
- Improvements to the verification and auto-fix step. Plandex is now more likely to catch and fix placeholder references like "// ... existing code ..." as well as incorrect removal or overwriting of code.
- After a context file is updated, Plandex is less likely to use an old version of the code from earlier in the conversation--it now uses the latest version much more reliably.
- Increased wait times when receiving rate limit errors from the OpenAI API (common with new OpenAI accounts that haven't spent $50).
Release cli/v1.1.0
Support for loading images into context with gpt-4o 🖼️
- You can now load images into context with `plandex load path/to/image.png`. Supported image formats are png, jpeg, non-animated gif, and webp. So far, this feature is only available with the default OpenAI GPT-4o model.
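As a quick sketch of how this might look in practice (the file path and prompt below are hypothetical examples, assuming you pass the prompt inline to `plandex tell`):

```bash
# Add a mockup image to the plan's context
# (png, jpeg, non-animated gif, and webp are supported)
plandex load designs/mockup.png

# Then reference it in a prompt
plandex tell "implement the layout shown in the mockup"
```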
No more hard OpenAI requirement for builder, verifier, and auto-fix roles 🧠
- Non-OpenAI models can now be used for all roles, including the builder, verifier, and auto-fix roles, since streaming function calls are no longer required for these roles.
- Note that reliable function calling is still required for these roles. In testing, it was difficult to find models that worked reliably enough for these roles, despite claimed support for function calling. For this reason, using non-OpenAI models for these roles should be considered experimental. Still, this provides a path forward for using open source, local, and other non-OpenAI models for these roles in the future as they improve.
Reject pending changes with `plandex reject` 🚫
- You can now reject pending changes to one or more files with the `plandex reject` command. Running it with no arguments will reject all pending changes after confirmation. You can also reject changes to specific files by passing one or more file paths as arguments.
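For example (the file paths below are hypothetical):

```bash
# Reject all pending changes (you'll be asked to confirm)
plandex reject

# Reject pending changes only for specific files
plandex reject src/api/users.go src/api/users_test.go
```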
Summarization and auto-continue fixes 🛤️
- Fixes for summarization and auto-continue issues that could cause Plandex to lose track of where it is in the plan and repeat tasks or do tasks out of order, especially when using `tell` and `continue` after the initial `tell`.
Verification and auto-fix improvements 🛠️
- Improvements to the verification and auto-fix step. Plandex is now more likely to catch and fix placeholder references like "// ... existing code ..." as well as incorrect removal or overwriting of code.
Stale context fixes 🔄
- After a context file is updated, Plandex is less likely to use an old version of the code from earlier in the conversation--it now uses the latest version much more reliably.
`plandex convo` command improvements 🗣️
- Added a `--plain / -p` flag to `plandex convo` and `plandex summary` that outputs the conversation/summary in plain text with no ANSI codes.
- `plandex convo` now accepts a message number or range of messages to display (e.g. `plandex convo 1`, `plandex convo 1-5`, `plandex convo 2-`). Use `plandex convo 1` to show the initial prompt.
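A few examples of the new options:

```bash
# Print the full conversation in plain text, with no ANSI codes
plandex convo --plain

# Show just the initial prompt
plandex convo 1

# Show messages 2 through the end of the conversation
plandex convo 2-

# Plain-text summary using the short flag
plandex summary -p
```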
Context management improvements 📄
- Give notes added to context with `plandex load -n 'some note'` an auto-generated name in the `context ls` list.
- `plandex rm` can now accept a range of indices to remove (e.g. `plandex rm 1-5`).
- Better help text if `plandex load` is run with incorrect arguments.
- Fix for `plandex load` issue loading paths that begin with `./`.
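For example (the note text is just an illustration):

```bash
# Notes loaded like this now get an auto-generated name in `context ls`
plandex load -n 'keep the public API backwards compatible'

# Remove context items 1 through 5 by index
plandex rm 1-5
```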
Better rate limit tolerance 🕰️
- Increased wait times when receiving rate limit errors from the OpenAI API (common with new OpenAI accounts that haven't spent $50)
Built-in model updates 🧠
- Removed 'gpt-4-turbo-preview' from list of built-in models and model packs
Other fixes 🐞
- Fixes for some occasional rendering issues when streaming plans and build counts
- Fix for `plandex set-model` model selection showing built-in model options that aren't compatible with the selected role--now only compatible models are shown
Help updates 📚
- `plandex help` now shows a brief overview on getting started with Plandex rather than the full command list.
- `plandex help --all` or `plandex help -a` shows the full command list.
Release server/v1.0.1
- Fix for occasional 'Error getting verify state for file' error
- Fix for occasional 'Fatal: unable to write new_index file' error
- Fix for occasional 'nothing to commit, working tree clean' error
- When hitting OpenAI rate limits, Plandex will now parse error messages that include a recommended wait time and automatically wait that long before retrying, up to 30 seconds (#123)
- Some prompt updates to encourage creation of multiple smaller files rather than one mega-file when generating files for a new feature or project. Multiple smaller files are faster to generate, use less tokens, and have a lower error rate compared to a continually updated large file.
Release server/v1.0.0
☄️ gpt-4o is the real deal for coding
- gpt-4o, OpenAI's latest model, is the new default model for Plandex. 4o is much better than gpt-4-turbo (the previous default model) in early testing for coding tasks and agent workflows.
- If you have not used `plandex set-model` or `plandex set-model default` previously to set a custom model, you will now use gpt-4o by default. If you have used one of those commands, use `plandex set-model` or `plandex set-model default` and select the new `gpt-4o-latest` model pack to upgrade.
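If you do need to upgrade manually, it's a single interactive command:

```bash
# Only needed if you previously set a custom model or model pack;
# select the new gpt-4o-latest model pack from the interactive menu
plandex set-model default
```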
🛰️ Reliability improvements: 90% reduction in syntax errors in early testing
- Automatic syntax and logic validation with an auto-correction step for file updates.
- Significantly improves reliability and reduces syntax errors, mistaken duplication or removal of code, placeholders that reference other code and other similar issues.
- With a set of ~30 internal evals spanning 5 common languages, syntax errors were reduced by over 90% on average with gpt-4o.
- Logical errors are also reduced (I'm still working on evals for those to get more precise numbers).
- Plandex is now much better at handling large files and plans that make many updates to the same file. Both could be problematic in previous versions.
- Plandex is much more resilient to incorrectly labelled file blocks, i.e. when the model uses the file label format to explain something rather than to output a file--for example, "Run this script" followed by a bash script block. Previously Plandex would mistakenly create a file called "Run this script". It now ignores blocks like these.
🧠 Improvements to core planning engine: better memory and less laziness allow you to accomplish larger and more complex tasks without errors or stopping early
- Plandex is now much better at working through long plans without skipping tasks, repeating tasks it's already done, or otherwise losing track of what it's doing.
- Plandex is much less likely to leave TODO placeholders in comments instead of fully completing a task, or to otherwise leave a task incomplete.
- Plandex is much less likely to end a plan before all tasks are completed.
🏎️ Performance improvements: 2x faster planning and execution
- gpt-4o is twice as fast as gpt-4-turbo for planning, summarization, builds, and more.
- If you find it's streaming too fast and you aren't able to review the output, try using the `--stop / -s` flag with `plandex tell` or `plandex continue`. It will stop the plan after a single response so you can review it before proceeding. Use `plandex continue` to proceed with the plan once you're ready.
- Speaking of which, if you're in exploratory mode and want to use fewer tokens, you can also use the `--no-build / -n` flag with `plandex tell` and `plandex continue`. This prevents Plandex from building files until you run `plandex build` manually.
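For instance, when reviewing a plan step by step or holding off on builds:

```bash
# Stop after a single response so you can review it before proceeding
plandex continue --stop

# Keep planning without building files yet
plandex continue --no-build

# Build the pending changes when you're ready
plandex build
```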
🪙 2x cost reduction: gpt-4o is half the per-token price of gpt-4-turbo
- For the same quantity of tokens, with improved quality and 2x speed, you'll pay half-price.
👩‍💻 New `plandex-dev` and `pdxd` alias in development mode
- In order to avoid conflicts/overwrites with the `plandex` CLI and `pdx` alias, a new `plandex-dev` command and `pdxd` alias have been added in development mode.
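In practice that means running the dev build through its own names, for example (assuming the dev command mirrors the regular CLI):

```bash
# Use the development build without touching the installed CLI or pdx alias
plandex-dev version
# or the shorter alias
pdxd version
```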
🛠️ Bug fixes
- Fix for a potential panic during account creation (#76)
- Fixes for some account creation flow issues (#106)
- Fix for occasional "Stream buffer tokens too high" error (#34).
- Fix for potential panic when updating model settings. This might be the cause of, or somehow related to, #121, but it's hard to be sure (maybe AWS was just being flaky).
- Attempted fix for a rare git repo race condition caught by @jesseswell_1 that gives an error ending with:
  Exit status 128, output
  * Fatal: unable to write new_index file
📚 Readme updates
- The readme has been revamped to be more informative and easier to navigate.
🏡 Easy self-contained startup script for local mode and self-hosting
```bash
git clone https://github.com/plandex-ai/plandex.git
cd plandex/app
./start_local.sh
```
- Sincere thanks to @ZanzyTHEbar aka @daofficialwizard on Discord who wrote the script! 🙏🙏
🚀 Upgrading
- As always, cloud has already been updated with the latest version. To upgrade the CLI, run any `plandex` command (like `plandex version` or `plandex help`, or whatever command you were about to run anyway 🙂).
📆 Join me for office hours every Friday 12:30-1:30pm PST in Discord, starting May 17th
- I'll be available by voice and text chat to answer questions, talk about the new version, and hear about your use cases. Come on over and hang out!
- Join the discord to get a reminder when office hours are starting: https://discord.gg/plandex-ai
Release cli/v1.0.0
- CLI updates for the 1.0.0 release
- See the server/v1.0.0 release notes for full details
Release server/v0.9.1
- Improvements to the auto-continue check. Plandex now does a better job determining whether a plan is finished or should automatically continue by incorporating either the latest plan summary or the previous conversation message (if the summary isn't ready yet) into the auto-continue check. Previously the check used only the latest conversation message.
- Fix for 'exit status 128' errors in a couple of edge case scenarios.
- Data that is piped into `plandex load` is now automatically given a name in `context ls` via a call to the `namer` role model (previously it had no name, making multiple pipes hard to disambiguate).
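For example, piping content in from another command (the sources here are just illustrations):

```bash
# Piped data now gets an auto-generated name in `context ls`
git diff | plandex load
cat server.log | plandex load
```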
Release cli/v0.9.1
- Fix for occasional stream TUI panic during builds with long file paths (#105)
- If auto-upgrade fails due to a permissions issue, suggest re-running the command with `sudo` (#97 - thanks @kalil0321!)
- Include 'openrouter' in list of model providers when adding a custom model (#107)
- Make terminal prompts that shouldn't be optional (like the Base URL for a custom model) required across the board (#108)
- Data that is piped into `plandex load` is now automatically given a name in `plandex ls` via a call to the `namer` role model (previously it had no name, making multiple pipes hard to disambiguate).
- Still show the '(r)eject file' hotkey in the `plandex changes` TUI when the current file isn't scrollable.
Release server/v0.9.0
- Support for custom models, model packs, and default models (see CLI 0.9.0 release notes for details).
- Better accuracy for updates to existing files.
- Plandex is less likely to screw up braces, parentheses, and other code structures.
- Plandex is less likely to mistakenly remove code that it shouldn't.
- Plandex is now much better at working through very long plans without skipping tasks, repeating tasks it's already done, or otherwise losing track of what it's doing.
- Server-side support for the `plandex diff` command to show pending plan changes in `git diff` format.
- Server-side support for archiving and unarchiving plans.
- Server-side support for the `plandex summary` command.
- Server-side support for the `plandex rename` command.
- Descriptive top-line for `plandex apply` commit messages instead of just "applied pending changes".
- Better message in `plandex log` when a single piece of context is loaded or updated.
- Fixes for some rare potential deadlocks and conflicts when building a file or stopping a stream.