Cody for VS Code v1.14.0: Now with bigger context windows and a refreshed chat UI

Alex Isken, Justin Dorfman

Cody for VS Code v1.14.0 is now available! This update includes context window expansions, a new chat interface, easier ways to add context to chat, new models, and more.

Bigger and better context windows to improve chat

We’ve historically capped Cody’s context window at 7,000 tokens, meaning that Cody won’t pass more than 7,000 tokens (small, roughly word-sized chunks of text) to the underlying LLM in a single request. This cap exists because passing too much context to an LLM can hurt its context recall and, in turn, its answer quality.

Because of Claude 3’s improved context recall, we can now expand that context window without degrading chat quality. We’ve significantly expanded Cody’s maximum context window for the Claude 3 Sonnet and Opus models:

  • 30,000 tokens of user-defined context (user @-mentioned files)
  • 15,000 tokens of continuous context (user messages and context that's sent back to the LLM to help it recall earlier parts of a continuous conversation)

Note: One line of code converts to roughly 7.5 tokens, so these limits are approximately 4,000 and 2,000 lines of code, respectively.
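
To make those numbers concrete, here’s a quick back-of-the-envelope sketch in TypeScript using the ~7.5 tokens-per-line heuristic above. It’s an illustration only: the names and file sizes are made up, and Cody counts real tokens internally rather than estimating from line counts.

```typescript
// Rough context budgeting with the ~7.5 tokens-per-line heuristic.
// Illustrative only: Cody uses a real tokenizer, not this estimate.
const TOKENS_PER_LINE = 7.5
const USER_CONTEXT_BUDGET = 30_000 // user @-mentioned files

const estimateTokens = (linesOfCode: number): number =>
  Math.ceil(linesOfCode * TOKENS_PER_LINE)

// Would three @-mentioned files of 1,200, 800, and 1,500 lines fit?
const totalLines = 1_200 + 800 + 1_500 // 3,500 lines
console.log(estimateTokens(totalLines)) // 26250
console.log(estimateTokens(totalLines) <= USER_CONTEXT_BUDGET) // true: ~26,250 of 30,000 tokens
```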

This update means two things:

  • You can now push way more context into Cody, including multiple large files, so you can ask questions about larger amounts of code
  • You can have much longer back-and-forth chats with Cody before it starts to forget context from earlier in the conversation

We’ve also increased the output token limit for all messages generated by Cody’s underlying LLMs. Outputs were previously limited to 1,000 tokens; we’ve quadrupled that limit to 4,000 tokens, so you shouldn’t see Cody’s responses getting cut off mid-answer anymore. This output limit update applies to all models.

For now, these changes to context windows apply to Free and Pro-tier users, with changes for Enterprise users coming in the future.

Revamped chat interface

The chat interface is getting a new look! Cody’s chat log is now structured as a series of cells, each showing a message alongside the sender's avatar. When Sourcegraph fetches context to use in a request, Cody shows the context in a cell alongside the Sourcegraph avatar.

Cody's new chat interface design

Unit test command improvements for JS, TS, Go, and Python

We’ve improved Cody’s unit test command for JavaScript, TypeScript, Go, and Python code in two ways:

  1. Cody will now determine the correct range of code to apply the command to based on the syntax of your code
  2. When you trigger a Quick Action (Ctrl+. / Cmd+.) on a function symbol, Cody will now offer the option to generate a unit test
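
As an example of what this looks like in practice, here’s a small TypeScript function and the kind of test the command can produce for it. The function, test cases, and test framework (Vitest here) are all illustrative; Cody’s actual output depends on your code and the testing setup it detects.

```typescript
import { describe, expect, it } from 'vitest'

// Hypothetical function under test. In practice this would live in its own
// file, and you'd run the unit test command (or the Quick Action) on it.
function clamp(value: number, min: number, max: number): number {
  return Math.min(Math.max(value, min), max)
}

// The kind of test Cody can generate (actual output will vary):
describe('clamp', () => {
  it('returns the value when it is within bounds', () => {
    expect(clamp(5, 0, 10)).toBe(5)
  })

  it('clamps values below the lower bound', () => {
    expect(clamp(-3, 0, 10)).toBe(0)
  })

  it('clamps values above the upper bound', () => {
    expect(clamp(42, 0, 10)).toBe(10)
  })
})
```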

Add context to chat using right-click

You can now grab code from the open file and use it as context in chat without having to manually @-mention it.

Highlight the code you want to use, then right-click and select “Cody Chat: Add as context.” The highlighted file range will be added as an @-mention in the chat sidebar.

New model support: GPT-4 Turbo and Mixtral 8x22B

Mistral just released Mixtral 8x22B, their latest open-source model, and it’s now available as a chat option for Cody Pro users. The model scores highly on math and coding benchmarks, particularly among open models.

We’ve also upgraded Cody’s GPT-4 Turbo model from the preview version to the newer, non-preview version.

Experimental support for CodeGemma + Ollama

You can now use CodeGemma models via Ollama with Cody. The CodeGemma family supports fill-in-the-middle code completion and can be used to power Cody’s autocomplete functionality.
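
A rough sketch of the setup: pull a CodeGemma model with Ollama (e.g. `ollama pull codegemma`), then point Cody’s experimental autocomplete provider at your local Ollama server in your VS Code settings.json. The setting keys and model tag below are assumptions based on Cody’s experimental Ollama support and may change between releases, so check the Cody docs for your version.

```jsonc
// settings.json — assumes Ollama is running locally and you've already
// pulled a CodeGemma model.
{
  "cody.autocomplete.advanced.provider": "experimental-ollama",
  "cody.autocomplete.experimental.ollamaOptions": {
    // Default local Ollama endpoint; adjust if yours runs elsewhere.
    "url": "http://localhost:11434",
    // Example model tag; see the Ollama library for available CodeGemma tags.
    "model": "codegemma"
  }
}
```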

Changelog

See the changelog and GitHub releases for a complete list of changes.

Thank you

Cody wouldn’t be what it is without our amazing contributors 💖 A big thank you to everyone who contributed, filed issues, and sent us feedback.

As always, we welcome your feedback in our support forum, on Discord, and on GitHub. Happy Codying!


To get started with Cody, install it from the VS Code Marketplace.
