# Overview
**Codeium Autocomplete** is powered by our best-in-class proprietary model, trained from scratch to optimize for speed and accuracy.
Our autocomplete makes in-line and multi-line suggestions based on the context of your code.
Suggestions appear in grey text as you type. You can press `esc` to cancel a suggestion.
Suggestions will also disappear if you continue typing or navigating without accepting them.
## Keyboard Shortcuts
Here are the shortcuts for macOS.
Replace `⌘` with `Ctrl` and `⌥` with `Alt` to get the corresponding shortcuts on Windows/Linux.
* **Accept suggestion**: `⇥`
* **Cancel suggestion**: `esc`
* **Accept suggestion word-by-word**: `⌘+→` (VS Code), `⌥+⇧+\` (JetBrains)
* **Next/previous suggestion**: `⌥+]`/`⌥+[`
* **Trigger suggestion**: `⌥+\`
## Autocomplete Speeds
You can set the speed of the Autocomplete in your settings.
Fast Autocomplete is currently only available to our Pro, Teams, and Enterprise Users.
# Tips
## Inline Comments
You can instruct autocomplete with the use of comments in your code.
Codeium will read these comments and suggest the code to bring the comment to life.
This method can get you good mileage, but if you're finding value in writing natural-language instructions and having the AI execute them,
consider using [Codeium Command](/command/overview).
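For example, a comment describing the desired behavior is often enough for Codeium to suggest the full implementation. In the sketch below, only the comment and the `def` line were typed by hand; the body is the kind of completion Codeium might propose (the function name and behavior are illustrative, not from these docs):

```python
# Compute the factorial of n iteratively,
# raising ValueError for negative input.
def factorial(n):
    if n < 0:
        raise ValueError("n must be non-negative")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
```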
## Fill In The Middle (FIM)
Codeium's Autocomplete can Fill In The Middle (FIM).
Read more about in-line FIM on our blog [here](https://codeium.com/blog/inline-fim-code-suggestions).
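As a hypothetical sketch of what FIM does: with the cursor inside an existing function, the model conditions on both the code before the cursor (the prefix) and the code after it (the suffix) to fill in the middle:

```python
def clamp(value, low, high):     # prefix: already written
    if value < low:              # <- a FIM completion would
        return low               #    fill in these middle lines,
    if value > high:             #    guided by the suffix below
        return high
    return value                 # suffix: already written
```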
## Snooze
Click the Codeium widget in the status bar towards the bottom right of your editor to see the option to switch Autocomplete off,
either temporarily or until you reenable it.
# Prompt Engineering
If you're reading this, you're probably someone who already understands some of the use cases and limitations of LLMs. The better the prompt and context you provide to the model, the better the outcome will be.
Similarly with Codeium, there are best practices for crafting more effective prompts to get the most out of the tool, and get the best quality code possible to help you accelerate your workflows.
For more complex tasks that may require you to [@-Mention](/chat/overview/#mentions) specific code blocks, use [Chat](/chat/overview) instead of [Command](/command/overview).
## Components of a high quality prompt
* ***Clear objective or outcome***
* What are you asking the model to produce?
* Are you asking the model for a plan? For new code? Is it a refactor?
* ***All relevant context to perform the task(s)***
* Have you properly used @-Mentions to ensure that the proper context is included?
* Is there any customer-specific context that may be unclear to Codeium?
* ***Necessary constraints***
* Are there any specific frameworks, libraries, or languages that must be utilized?
* Are there any space or time complexity constraints?
* Are there any security considerations?
## Examples
***Example #1:***
* **Bad**: Write unit tests for all test cases for an Order Book object.
* **Good**: Using `@class:unit-testing-module` write unit tests for `@func:src-order-book-add` testing for exceptions thrown when above or below stop loss
***Example #2***:
* **Bad**: Refactor rawDataTransform.
* **Good**: Refactor `@func:rawDataTransform` by turning the while loop into a for loop and using the same data structure output as `@func:otherDataTransformer`
***Example #3***:
* **Bad**: Create a new Button for the Contact Form.
* **Good**: Create a new Button component for the `@class:ContactForm` using the style guide in `@repo:frontend-components` that says “Continue”
# Common Use Cases
Codeium, especially now with Windsurf, can serve a wide variety of use cases. However, certain use cases are more common than others, especially among our enterprise customers working within their production codebases.
## Code generation
**Guidance:** Codeium should work well for this use case. Codeium offers single-line suggestions, multi-line suggestions, and fill-in-the-middle (FIM) completions.
**Best Practices:** Using Next Completion (`⌥+]`), Context Pinning, @-Mentions, and Custom Context will provide the best results.
## Unit Test generation
**Guidance:** Basic usage of Codeium should reliably generate 60-70% of unit tests. Edge-case coverage will only be as good as your prompting.
**Best Practices:** Use @-Mentions and prompt-engineering best practices. Examples include:
* Write a unit test for `@function-name` that tests all edge cases for X and for Y (e.g., email domain).
* Use `@testing-utility-class` to write a unit test for `@function-name`.
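Following prompts like those above, a generated test might look like this sketch (the `OrderBook` class and its stop-loss bounds are hypothetical stand-ins, not a real API):

```python
import unittest

class OrderBook:
    """Hypothetical class under test, used only for illustration."""
    def __init__(self, stop_loss_low, stop_loss_high):
        self.low, self.high = stop_loss_low, stop_loss_high
        self.orders = []

    def add(self, price):
        # Reject orders above or below the stop-loss bounds.
        if not (self.low <= price <= self.high):
            raise ValueError("price outside stop-loss bounds")
        self.orders.append(price)

class TestOrderBookAdd(unittest.TestCase):
    def test_rejects_above_stop_loss(self):
        with self.assertRaises(ValueError):
            OrderBook(90, 110).add(120)

    def test_rejects_below_stop_loss(self):
        with self.assertRaises(ValueError):
            OrderBook(90, 110).add(80)

    def test_accepts_price_within_bounds(self):
        book = OrderBook(90, 110)
        book.add(100)
        self.assertEqual(book.orders, [100])
```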
## Sample Data Generation
**Guidance:** Good for low-hanging-fruit use cases. For very specific API specs or in-house libraries, Codeium will not know the intricacies well enough to ensure the quality of generated sample data.
**Best Practices:** Be very specific about the interface you expect. Think about the complexity of the task (and if a single-shot LLM call will be sufficient to address).
## Internal Code Commentary
**Guidance:** Codeium should work well for this use case. Use Codeium Command or Codeium Chat to generate in-line comments and code descriptions.
**Best Practices:** Use @ Mentions and use Code Lenses as much as possible to ensure the scope of the LLM call is correct.
## Code Improvement and Explanation
**Guidance:** Generally, the Refactor button or Codeium Command is the best way to prompt for improvements, and Codeium Chat is the best place to ask for explanations or clarifications. Codeium should be good at both.
**Best Practices**: Use the dropdown prompts (i.e., Codeium's Refactor button); these custom prompts are engineered to deliver the answer you'd most likely expect.
## Header File Generation
**Guidance**: The best way to do this is to create the header file, open Chat, @-mention a function in the `.cpp` file, and ask for the corresponding header declaration. Repeat iteratively for each function in the `.cpp` file. This is the best way to avoid hallucinations along the way.
**Best Practices**: Generally avoid trying to write a whole header file with one LLM call. Breaking down the granularity of the work makes the quality of the generated code significantly higher.
## API Documentation and Integration
**Guidance**: This is similar to test coverage where parts of the API spec that are common across many libraries Codeium would be able to accurately decorate. However, things that are built special for your in-house use case Codeium might struggle to do at the quality that you expect.
**Best Practices**: Similar to test coverage, as much as possible, walk Codeium’s model through the best way to think about what the API is doing and it will be able to decorate better.
## Repository-Wide Search
**Guidance**: Codeium's context length for a single LLM call is 16,000 tokens. Thus, depending on the scope of your search, Codeium's repo-wide search capability may not be sufficient. Repo-wide, multi-step, multi-edit tasks will be supported in upcoming Codeium products.
This is fundamentally a multi-step problem that single-shot LLM calls (i.e. current functionality of all AI code assistants) are not well equipped to address. Additionally, accuracy of result must be much higher than other use cases as integrations are especially fragile.
**Best Practices**: Codeium is not well-equipped to solve this problem today. If you’d like to test the extent of Codeium’s existing functionality, build out a step-by-step plan and prompt Codeium individually with each step and high level of details to guide the AI.
## Code Refactoring
**Guidance**: Ensure proper scoping using Codeium Code Lenses or @ Mentions to make sure all of the necessary context is passed to the LLM.
Context lengths for a single LLM call are finite. Thus, depending on the scope of your refactor, this finite context length may be an issue (and for that matter, any single-shot LLM paradigm). Repo-wide, multi-step, multi-edit tasks are now supported in Windsurf's [Cascade](/windsurf/cascade).
**Best Practices**: Try to break down the prompt as much as possible. The simpler and shorter the command for refactoring the better.
# Models
While we provide and train our own dedicated models for Chat, we also give you the flexibility to choose your favorites.
It's worth noting that the Codeium models are tightly integrated with our reasoning stack, leading to better quality suggestions than external models for coding-specific tasks.
Due to our industry-leading infrastructure, we are able to offer them for free (or at very low cost) to our users.
Model selection can be found directly below the chat input.
## Base Model ⚡
**Access:** All users
Available for unlimited use to all users is a fast, high-quality Codeium Chat model based on Meta's [Llama 3.1 70B](https://ai.meta.com/blog/meta-llama-3-1/).
This model is optimized for speed, and is the **fastest** model available in Codeium Chat. This is all while still being extremely accurate.
## Codeium Premier 🚀
**Access:** Any paying users (Pro, Teams, Enterprise, etc.)
Available in our paid tier is unlimited usage of our premier Codeium Chat model based on Meta's [Llama 3.1 405B](https://ai.meta.com/blog/meta-llama-3-1/).
This is the **highest-performing model** available for use in Codeium, due to its size and integration with Codeium's reasoning engine and native workflows.
## Other Models (GPT-4o, Claude 3.5 Sonnet)
**Access:** Any paying users (Pro, Teams, Enterprise, etc.)
Codeium provides access to OpenAI's and Anthropic's flagship models, available for use in any of our paid tiers.
# Overview
Converse with a codebase-aware AI
Chat and its related features are only supported in: VS Code, JetBrains IDEs, Eclipse, Xcode, and Visual Studio.
Chat in Windsurf is integrated within [Cascade](/windsurf/cascade). Set to "Chat" mode to replicate the original experience.
**Codeium Chat** enables you to talk to your codebase from within your editor.
Chat is powered by our industry-leading [context awareness](/context-awareness/overview) engine.
It combines thoughtfully-designed, built-in context retrieval with optional user guidance to provide accurate and grounded answers.
In VS Code, Codeium Chat can be found by default on the left sidebar.
If you wish to move it elsewhere, you can click and drag the Codeium icon and relocate it as desired.
You can use `⌘+⇧+A` on Mac or `Ctrl+⇧+A` on Windows/Linux to open the chat panel and toggle focus between it and the editor.
You can also pop the chat window out of the IDE entirely by clicking the page icon at the top of the chat panel.
In JetBrains IDEs, Codeium Chat can be found by default on the right sidebar.
If you wish to move it elsewhere, you can click and drag the Codeium icon and relocate it as desired.
You can use `⌘+⇧+L` on Mac or `Ctrl+⇧+L` on Windows/Linux to open the chat panel while you are typing in the editor.
You can also open the chat in a popped-out browser window by clicking `Tools > Codeium > Open Codeium Chat in Browser` in the top menu bar.
## @-Mentions
An @-mention is a deterministic way of bringing in context, and is guaranteed to be part of the context used to respond to a chat.
In any given chat message you send, you can explicitly refer to context items from within the chat input by prefixing a word with `@`.
Context items available to be @-mentioned:
* Functions & classes
  * Only functions and classes in locally indexed repositories are available.
  * Also only available for languages we have built AST parsers for (Python, TypeScript, JavaScript, Go, Java, C, C++, PHP, Ruby, C#, Perl, Kotlin, Dart, Bash, COBOL, and more)
* Directories and files in your codebase
* Remote repositories
* The contents of your in-IDE terminal (VS Code only).
You can also try `@diff`, which lets you chat about your repository's current `git diff` state.
The `@diff` feature is currently in beta.
If you want to pull a section of code into the chat and you don't have @-Mentions available, you can:
1. Highlight the code
2. Right-click
3. Select `Codeium: Explain Selected Code Block`
## Persistent Context
You can instruct the chat model to use certain context throughout a conversation and across different conversations
by configuring the `Context` tab in the chat panel.
In this tab, you can see:
* **Custom Chat Instructions**: a short prompt guideline like "Respond in Kotlin and assume I have little familiarity with it" to orient the model towards a certain type of response.
* **Pinned Contexts**: items from your codebase like files, directories, and code snippets that you would like explicitly for the model to take into account.
See also [Context Pinning](/context-awareness/context-pinning).
* **Active Document**: a marker for your currently active file, which receives special focus.
* **Local Indexes**: a list of local repositories that the Codeium context engine has indexed.
## Slash Commands
You can prefix a message with `/explain` to ask the model to explain something of your choice.
Currently, `/explain` is the only supported slash command.
[Let us know](https://codeium.canny.io/feature-requests/) if there are other common workflows you want wrapped in a slash command.
## Copy and Insert
Sometimes, Chat responses will contain code blocks. You can copy a code block to your clipboard or insert it directly into the editor
at your cursor position by clicking the appropriate button atop the code block.
If you would like the AI to enact a change directly in your editor based on an instruction,
consider using [Codeium Command](/command/overview).
## Inline Citations
Chat is aware of code context items, and its responses often contain linked references to snippets of code in your files.
## Regenerate with Context
By default, Codeium makes a judgment call whether any given question is general or if it requires codebase context.
You can force the model to use codebase context by submitting your question with `⌘⏎`.
For a question that has already received a response, you can rerun it with context by clicking the sparkle icon.
## Stats for Nerds
Lots of things happen under the hood for every chat message. You can click the stats icon to see these statistics for yourself.
## Chat History
To revisit past conversations, click the history icon at the top of the chat panel.
You can click the `+` to create a new conversation, and
you can click the `⋮` button to export your conversation.
## Settings
Click on the `Settings` tab to update your theme preferences (light or dark) and font size.
The settings panel also gives you an option to download diagnostics, which are debug logs that can be helpful
for the Codeium team to debug an issue should you encounter one.
## Telemetry
You may encounter issues with Chat if Telemetry is not enabled.
To enable telemetry, open your VS Code settings and navigate to User > Application > Telemetry. In the following dropdown, select "all".
To enable telemetry in JetBrains IDEs, open your Settings and navigate to Appearance & Behavior > System Settings > Data Sharing.
# Overview
AI-powered in-line edits
Command is currently only available in VS Code and JetBrains IDEs.
**Codeium Command** allows you to generate new code or edit existing code via natural language inputs, directly in the editor window.
To invoke Command, press `⌘+I` on Mac or `Ctrl+I` on Windows/Linux.
From there, you can enter a prompt in natural language and hit the Submit button (or `⌘+⏎`/`Ctrl+⏎`) to forward the instruction to the AI.
Codeium will then provide a multiline suggestion that you can accept or reject.
If you highlight a section of code before invoking Command, then the AI will edit the selection spanned by the highlighted lines.
Otherwise, it will generate code at your cursor's location.
You can accept, reject, or follow up on a generation by clicking the corresponding code lens above the generated diff,
or by using the appropriate shortcuts (`⌘+⏎`/`Ctrl+⏎` to accept, `⌘+⌫`/`Ctrl+⌫` to reject).
In Windsurf, you can select your desired model to use for Command from the dropdown.
Codeium Fast is the fastest, most accurate model available.
### Terminal Command
You can also open Command in the terminal in case you don't remember the exact syntax of what you want to run.
As in the editor, enter a prompt in natural language and submit it; Codeium will then provide a suggestion that you can accept, reject, or follow up on via the corresponding code lenses
or the appropriate shortcuts (`⌥+A`/`Alt+A`, `⌥+R`/`Alt+R`, and `⌥+F`/`Alt+F`, respectively).
Some users have reported keyboard conflicts with the `⌘+I` shortcut, so `⌘+⇧+I` and `⌘+\` on Mac (`Ctrl+⇧+I` and `Ctrl+\` on Windows/Linux)
will also work.
The Command invocation will open an interactive popup at the appropriate location in the code.
You can enter a prompt in natural language and Codeium will provide a multiline suggestion that you can accept or reject.
If you highlight a section of code before invoking Command, then the AI will edit the selection spanned by the highlighted lines.
Otherwise, it will generate code at your cursor's location.
The Command popup will persist in the editor if you scroll around or focus your cursor elsewhere in the editor.
It will act on your most recently highlighted selection of code or your most recent cursor position.
While it is active, the Command popup gives you the following options:
* **Cancel** (`Esc`): this will close the popup and undo any code changes that may have occurred while the popup was open.
* **Accept generation** (`⌘+⏎`): this option appears after submitting an instruction and receiving a generation.
It will write the suggestion into the code editor and close the popup.
* **Undo generation** (`⌘+⌫`): this option appears after submitting an instruction and receiving a generation.
It will restore the code to its pre-Command state without closing the popup, while reinserting your most recent instruction
into the input box.
* **Follow-up**: this option appears after submitting an instruction and receiving a generation.
You can enter a second (and third, fourth, etc.) instruction and submit it,
which will undo the currently shown generation and rerun Command using your comma-concatenated instruction history.
# Best Practices
Command is great for file-scoped, in-line changes that you can describe as an instruction in natural language.
Here are some pointers to keep in mind:
* The model that powers Command is larger than the one powering autocomplete.
It is slower but more capable, and it is trained to be especially good at instruction-following.
* If you highlight a block of code before invoking Command, it will edit the selection. Otherwise, it will do a pure generation.
* Using Command effectively can be an art. Simple prompts like "Fix this" or "Refactor" will likely work
thanks to Codeium's context awareness.
A specific prompt like "Write a function that takes two inputs of type `Diffable` and implements the Myers diff algorithm"
that contains a clear objective and references to relevant context may help the model even more.
# Refactors, Docstrings, and More
Features powered by Command
Command enables streamlined experiences for a few common operations.
## Function Refactors and Docstring Generation
Above functions and classes, Codeium renders *code lenses*,
which are small, clickable text labels that invoke Codeium's AI capabilities on the labeled item.
You can disable code lenses by clicking the `✕` to the right of the code lens text.
The `Refactor` and `Docstring` code lenses in particular will invoke Command.
* If you click `Refactor`, Codeium will prompt you with a dropdown of selectable, pre-populated
instructions that you can choose from. You can also write your own. This is equivalent to highlighting the function and invoking Command.
* If you click `Docstring`, Codeium will generate a docstring for you above the function header.
(In Python, the docstring will be correctly generated *underneath* the function header.)
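For instance, in Python the generated docstring lands under the `def` line, in the style of the following sketch (the function itself is illustrative, not a fixed output):

```python
def normalize(values):
    """Scale a list of numbers to the range [0, 1].

    Args:
        values: A non-empty list of numeric values.

    Returns:
        A list of floats where the minimum of `values` maps to 0.0
        and the maximum maps to 1.0 (all zeros if the values are equal).
    """
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]
```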
## Smart Paste
This feature allows you to copy code and paste it into a file in your IDE written in a different programming language.
Use `⌘+⌥+V` (Mac) or `Ctrl+Alt+V` (Windows/Linux) to invoke Smart Paste.
Behind the scenes, Codeium will detect the language of the destination file and use Command to translate the code in your clipboard.
Codeium's context awareness will try to write it to fit in your code, for example by referencing proper variable names.
Some possible use cases:
* **Migrating code**: you're rewriting JavaScript into TypeScript, or Java into Kotlin.
* **Pasting from Stack Overflow**: you found a utility function online written in Go, but you're using Rust.
* **Learning a new language**: you're curious about Haskell and want to see what your code would look like if written in it.
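To make the idea concrete, here is a hypothetical sketch of what Smart Paste might produce when a small JavaScript utility is pasted into a Python file (the translation shown is illustrative, not guaranteed output):

```python
# Clipboard contents (JavaScript):
#   const unique = (arr) => [...new Set(arr)];
#
# A translation Smart Paste might produce in a Python file:
def unique(arr):
    """Return the distinct elements of arr, preserving first-seen order."""
    return list(dict.fromkeys(arr))
```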
# Overview
On codebase context and related features
Codeium's proprietary context engine builds a deep understanding of your codebase.
Historically, code-generation approaches focused on fine-tuning large language models (LLMs) on a codebase,
which is difficult to scale to the needs of every individual user.
A more recent and popular approach leverages retrieval-augmented generation (RAG),
which focuses on techniques to construct highly relevant, context-rich prompts
to elicit accurate answers from an LLM.
The Codeium team has taken an extremely thoughtful and optimized RAG approach to codebase context,
and has seen great success in producing high-quality suggestions and few hallucinations.
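As a toy sketch of the RAG idea (a naive keyword scorer, nothing like Codeium's actual retrieval engine): candidate snippets are ranked against the query, and the top results are placed into the prompt.

```python
def retrieve(query, snippets, k=2):
    """Rank candidate snippets by naive keyword overlap with the query."""
    query_words = set(query.lower().split())
    def score(snippet):
        return len(query_words & set(snippet.lower().split()))
    return sorted(snippets, key=score, reverse=True)[:k]

def build_prompt(query, snippets):
    """Assemble a context-rich prompt from the top-ranked snippets."""
    context = "\n\n".join(retrieve(query, snippets))
    return f"Context:\n{context}\n\nQuestion: {query}"
```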
This applies across the board to Autocomplete, Chat, and Command.
Codeium offers full fine-tuning for enterprises, and the best solution
combines fine-tuning with RAG. Nonetheless, Codeium's RAG approach is highly
effective at codebase context personalization on its own.
## Default Context
Out of the box, Codeium takes multiple relevant sources of context into consideration.
* The current file and other open files in your IDE, which are often very relevant to the code you are currently writing.
* The entire local codebase is then indexed (including files that are not open),
and relevant code snippets are sourced by Codeium's retrieval engine as you write code, ask questions, or invoke commands.
* For Pro users, we offer expanded context lengths, increased indexing limits, and higher limits on custom context and pinned context items.
* For Teams and Enterprise users, Codeium can also index remote repositories.
This is useful for companies whose development organization works across multiple repositories.
## Context Pinning
Developers have the option to offer additional guidance by pinning custom context.
You can find this option under the [context tab of the chat panel](/chat/overview#persistent-context).
You can choose to pin directories, files, repositories, or code context items (functions, classes, etc.) as persistent context.
Models reference these items for every suggestion, across Autocomplete, Chat, and Command.
### Best Practices
Context Pinning is great when your task in your current file depends on information from other files.
Try to pin only what you need. Pinning too much may slow down or negatively impact model performance.
Here are some ideas for effective context pinning:
* Module Definitions: pinning class/struct definition files that are inside your repo but in a module separate from your currently active file.
* Internal Frameworks/Libraries: pinning directories with code examples for using frameworks/libraries.
* Specific Tasks: pinning a file or folder defining a particular interface (e.g., `.proto` files, abstract class files, config templates).
* Current Focus Area: pinning the "lowest common denominator" directory containing the majority of files needed for your current coding session.
* Testing: pinning a particular file with the class you are writing unit tests for.
## Chat-Specific Context Features
When conversing with Codeium Chat, you have various ways of leveraging codebase context,
like [@-mentions](/chat/overview/#mentions) or custom guidelines.
See the [Chat page](/chat/overview) for more information.
## Frequently Asked Questions (FAQs)
### Does Codeium index my codebase?
Yes, Codeium does index your codebase. It also uses LLMs to perform retrieval-augmented generation (RAG) on your codebase using techniques we invented like [M-Query](https://youtu.be/DuZXbinJ4Uc?feature=shared\&t=606).
Indexing performance and features vary based on your workflow and your Codeium plan. For more information, please visit our [context awareness page](https://codeium.com/context).
# Analytics & Profile
## User Analytics
User analytics are available for viewing and sharing on your own [profile](https://codeium.com/profile) page.
See your completion stats, [refer](https://codeium.com/referral) your friends, look into your language breakdown,
and unlock achievement badges by using Codeium in your daily workflow.
## Team Analytics
You will need team admin privileges in order to view the following team links.
Codeium makes managing your team easy from one [dashboard](https://codeium.com/team/).
Team leads and managers can also see an aggregate of their team members' analytics.
Click [here](https://codeium.com/team/analytics) to view.
We extrapolate from the data to provide you a quick view of the value that Codeium is providing to your team.
This includes metrics like "percent of code written by Codeium" as well as time and money saved.
Team admins can access these stats by clicking `Time Saved (All Time)` from the team page.
## Profile Settings
There are a number of settings available for your profile page. This includes: name, email (for accounts with email authentication), profile picture, username (for your public profile), public profile visibility settings, telemetry settings, email subscription settings, and more.
You can also change and update your password if you used email-and-password authentication.
# Compatibility
Visit our [download page](https://codeium.com/download) for a list of supported IDEs and installation instructions.
If you are a Codeium Enterprise user, visit your enterprise portal URL for download and installation instructions.
Contact your internal Codeium administrator if you have questions.
# Supported IDEs and Versions
**VS Code**: All versions
**JetBrains IDEs**: Version 2022.3+
Note: JetBrains IDEs with remote SSH support require versions 2023.3+.
**Visual Studio**: 17.5.5+
**NeoVim**: Version 0.6+
**Vim**: 9.0.0185+
**Emacs**: All versions compiled with libxml2
**Xcode**: All versions
**Sublime Text**: Version 3+
**Eclipse**: Version 4.25+ (2022-09+)
# Getting Started
Welcome to Codeium
**Codeium** is an AI toolkit that empowers hundreds of thousands of developers with best-in-class autocomplete for code, chat capabilities, in-line command-based edits, and more.
Our models and features have been optimized to have a deep understanding of your codebase and to provide the best AI-powered code generation on the market.
## Extension Set Up 🧩
Our extension for Visual Studio Code and our plugin for JetBrains are our most popular services.
The installation steps for these two are given below.
For other IDEs and editors like Eclipse, Visual Studio, Neovim, Google Colab, and more, visit [our download page](https://codeium.com/download) to get started.
These steps do not apply for enterprises on a self-hosted plan.
If you are an enterprise user, please refer to the instructions in your enterprise portal.
Find the Codeium extension in the VS Code Marketplace and install it.
After installation, VS Code will prompt you with a notification in the bottom right corner to log in to Codeium.
Equivalently, you can log in to Codeium via the profile icon at the bottom of the left sidebar.
If you get an error message indicating that the browser cannot open a link from Visual Studio Code, you may need to update your browser and restart the authorization flow.
If you do not have an account or otherwise are not already logged in online, you will be prompted to create an account or log in.
Once you sign in, you will be redirected back to Visual Studio Code via pop-up.
If you are using a browser-based VS Code IDE like GitPod or Codespaces, you will be routed to instructions on how to complete authentication by providing an access token.
Once you are signed in, Codeium will start downloading a language server.
This is the program that communicates with our APIs to let you use Codeium's AI features.
The download usually takes ten to twenty seconds, but the download speed may depend on your internet connection.
In the meantime, you are free to use VS Code as usual.
You should see a notification on the bottom right to indicate the progress of the download.
You can now enjoy Codeium's rich AI featureset: Autocomplete, Chat, Command, and more.
Open the `Plugins` menu in your JetBrains IDE. The shortcut for this is `⌘+,` on Mac and `Ctrl+,` on Linux/Windows. It is also accessible from the settings menu.
Search for Codeium, and install the plugin. The plugin loader will prompt you to restart the IDE.
Open a project. Codeium should prompt you to log in with a notification popup at the bottom right linking you to an online login page.
Equivalently, click the widget at the right of the bottom status bar and select the login option there.
If you do not have an account or otherwise are not already logged in online, you will be prompted to log in.
Once you have logged in online, the webpage will indicate that you can return to your IDE.
Upon successful login, Codeium will begin downloading a language server.
This is the program that communicates with our APIs to let you use Codeium's AI features.
The download usually takes ten to twenty seconds, but the download speed may depend on your internet connection.
In the meantime, you are free to use your IDE as usual.
You should see a notification on the bottom right to indicate the progress of the download.
You can now enjoy Codeium's rich AI featureset: Autocomplete, Chat, Command, and more.
At any point, you can check your status by clicking the status bar widget at the bottom right.
If logged in, you will have access to your Codeium settings and other controls.
# Welcome to Codeium
Codeium creates highly contextual, intuitive, and trustworthy AI-powered tools to help developers dream bigger.
If you are an Enterprise user, please refer to the instructions in your
enterprise portal.
## Get started
If you're new to Codeium, start here to embark on your AI-powered coding journey.
Tomorrow's editor, today. The world's first truly agentic IDE.
Available in 40+ different IDEs with support for 70+ programming languages
## Breakthrough functionalities
Codeium's proprietary context engine builds a deep understanding of your
codebase.
Your agentic chatbot that can collaborate with you like never before.
## Meet the modalities
Contextually aware in-line and multi-line suggestions while you type.
Talk to your codebase directly within the editor.
Generate new or edit existing code directly in the editor.
Multi-line suggestions based on intent, regardless of cursor position.
## Support
Join our community to ask questions and get support.
Let us know what you'd like to see!
Stay up-to-date with the latest news and updates from Codeium.
# Overview
Just as our Autocomplete passively predicts text, **Codeium Supercomplete** passively predicts your next intent.
You can see Supercomplete in action below, which shows the suggested diff within a box next to your code, directly in the editor. Just press `tab` to accept the suggestion!
Supercomplete makes suggestions based on the context of your code, *both before and after* your current cursor position.
Suggestions appear in a box on the side as you type. You can press `esc` to cancel a suggestion.
## Use cases to trigger Supercomplete
Supercomplete and Autocomplete work in harmony to provide you with the smoothest experience to optimize your flow state.
That being said, Supercomplete will only trigger in certain scenarios where it is most beneficial to the developer. Below, you can find a few examples of helpful Supercomplete use cases.
# General Issues
### I subscribed to Pro but I'm stuck on the free tier
First, give it a few minutes to update. If that doesn't work, try logging out of Codeium on the website, restarting your IDE, and logging back into Codeium. Additionally, please make sure you have the latest version of Codeium installed.
### How do I cancel my Pro/Teams subscription?
You can cancel your paid plan by going to your Profile (top right of the [Codeium website](https://codeium.com/profile)) -> Billing -> Cancel Plan
### How do I disable code snippet telemetry?
As mentioned in our [security page](https://codeium.com/security), you can opt out of code snippet telemetry in your [account settings](https://codeium.com/settings). For more information, please visit our [Terms of Service](https://codeium.com/terms-of-service-individual).
### How do I request a feature?
You can vote, comment, and request features on our [feature request forum](https://codeium.canny.io/feature-requests).
You can also reach out to us on Twitter/X! [@codeiumdev](https://x.com/codeiumdev) or [@windsurf\_ai](https://x.com/windsurf_ai)
# Windsurf/Extension Specific Issues
### I'm experiencing rate limiting issues
Because Windsurf launched only recently, we're subject to rate limits and are unfortunately hitting capacity for the premium models we work with. We are actively working to increase these limits and to distribute the capacity we have fairly!
This should not be an issue forever. If you get this error, please wait a few moments and try again.
### I received an error message saying "Windsurf failed to start"
Please delete the following folder:
Windows: `C:\Users\\.codeium\windsurf`
Linux/Mac: `~/.codeium/windsurf`
and try restarting the IDE.
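On Linux/Mac, this can be done from a terminal:

```
# Remove Windsurf's local state, then restart the IDE
rm -rf ~/.codeium/windsurf
```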
### I received an error message about updates on Windows
An example:
> Updates are disabled because you are running the user-scope installation of Windsurf as Administrator.
It is not recommended you install Windsurf as Administrator on Windows.
### My Cascade panel goes blank
Please reach out to us if this happens! A screen recording would be much appreciated. This can often be solved by clearing your chat history (`~/.codeium/windsurf/cascade`).
### My Codeium Chat panel goes blank
Please reach out to us if this happens! A screen recording would be much appreciated. This can often be solved by clearing your chat history.
# Advanced
## SSH Support
The usual SSH support in VSCode is licensed by Microsoft, so we have implemented our own just for Windsurf. It does require you to have [OpenSSH](https://www.openssh.com/) installed, but otherwise has minimal dependencies, and should "just work" like you're used to.
This extension has worked great for our internal development, but there are some known caveats and bugs:
* We currently only support SSHing into Linux-based remote hosts, with x64 architectures.
* The usual Microsoft "Remote - SSH" extension (and the [open-remote-ssh](https://github.com/jeanp413/open-remote-ssh) extension) will not work—please do not install them, as they conflict with our support.
* We don't have all the features of the Microsoft SSH extension right now. We mostly just support the important thing: connecting to a host. If you have feature requests, let us know!
* Connecting to a remote host via SSH and then accessing a devcontainer on that remote host won't work like it does in VSCode. (We're working on it!) For now, if you want to do this, we recommend manually setting up an SSH daemon inside your devcontainer. Here is the setup we've found to work, but please make sure it's right for your use case.
1. Inside the devcontainer, run this once (running multiple times may mess up your `sshd_config`):
```
sudo -s -- <<HERE
echo 'Port 2222' >> /etc/ssh/sshd_config
ssh-keygen -A
HERE
```
2. Inside the devcontainer, run this in a terminal you keep alive (e.g. via tmux):
```
sudo /usr/sbin/sshd -D
```
3. Then just connect to your remote host via SSH in Windsurf, but using port 2222.
* SSH agent-forwarding is on by default, and will use Windsurf's latest connection to that host. If you're having trouble with it, try reloading the window to refresh the connection.
* On Windows, you'll see some `cmd.exe` windows when it asks for your password. This is expected—we'll get rid of them soon.
* If you have issues, please first make sure that you can ssh into your remote host using regular `ssh` in a terminal. If the problem persists, include the output from the `Output > Remote SSH (Windsurf)` tab in any bug reports!
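If you use the port-2222 devcontainer setup above often, a host entry in `~/.ssh/config` saves retyping the port. The host name and user below are placeholders; substitute your own:

```
Host my-devcontainer
    HostName remote.example.com
    Port 2222
    User dev
```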
## Dev Containers
Windsurf also supports dev containers! If you would like to run a development container locally on a Linux machine, you can use the following three commands:
1. `Open Folder in Container`
* Open a new workspace with a specified devcontainer.json file
2. `Reopen in Container`
* Reopen the current workspace in a new container, specifying a devcontainer.json file to configure the container.
3. `Attach to Running Container`
* If you already have a development container running, you can attach a remote server to the container and connect your current workspace to it.
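For reference, the first two commands expect a `devcontainer.json` file. A minimal example might look like the following (the image shown is one common base image, not a requirement):

```
{
  "name": "my-project",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu"
}
```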
### Notes
* SSH + Dev Containers is not currently supported in Windsurf, but we plan to support it in the future.
* Only Linux-based x86 architectures are supported at the moment.
## WSL
Coming soon!
## Extension Marketplace
You can change the marketplace you use to download extensions from. To do this, go to **View** -> **Extensions**, click the "Change in Settings" link, and modify the settings accordingly.
Learn more about available options [here](https://github.com/VSCodium/vscodium/blob/master/docs/index.md#extensions-marketplace).
# Cascade
Cascade allows us to expose a new paradigm in the world of coding assistants: AI Flows.
A next-gen evolution of the traditional Chat panel, Cascade is your agentic chatbot that can collaborate with you like never before, carrying out tasks with real-time awareness of your prior actions.
To open Cascade, click the Cascade icon in the top right corner of the Windsurf window.
You can also open Cascade with the keyboard shortcut `Cmd+L`.
Selected text in the editor or terminal will automatically be included. If you are unable to see certain features, please make sure you are on the latest version!
# Model selection
Select your desired model from the selection menu below the chat input.
# Write/Chat Modes
Cascade comes in two modes: **Write** and **Chat**.
Write mode allows Cascade to create and make modifications to your codebase, while Chat mode is optimized for questions around your codebase or general coding principles.
# Image Upload
Add images to your prompt to be referenced in Cascade's suggestions.
Currently available only with the GPT-4o and Claude 3.5 Sonnet models; only images up to 1 MB in size are supported.
# Real-time collaboration
A unique capability of Windsurf and Cascade is awareness of your real-time actions, allowing for an unprecedented level of collaboration.
No longer do you have to prompt the AI with context on your prior actions, as Cascade and Windsurf are already aware.
In the following video, Cascade flawlessly detects a recent variable name change and, when prompted simply with `continue`, renames the remaining instances.
# Direct access to tools and terminal
Cascade can detect which packages and tools you're using, which ones need to be installed, and can even install them for you. Just ask Cascade how to run your project and press Accept.
# Revert to previous steps
You can revert changes that Cascade has made. Hover your mouse over the prompt and click the arrow on the right. This will revert all code changes back to the state of your codebase at the desired step. Note: reverts are currently irreversible, so be careful!
# Getting Started
Tomorrow's editor, today
Windsurf is Codeium's next-generation AI IDE built to keep you in the flow. On this page, you'll find instructions on how to install Windsurf on your computer, navigate the onboarding flow, and get started with your first AI-powered project.
See what's new with Windsurf in our [changelog](https://codeium.com/changelog)!
## Set Up 🏄
To get started, please ensure that your device meets the requirements, click the download link, and follow the instructions to install and run Windsurf.
Minimum OS Version: OS X Yosemite
Minimum OS Version: Windows 10
Minimum OS Version: Ubuntu >= 20.04 (or glibc >= 2.31, glibcxx >= 3.4.26)
Minimum OS Version: glibc >= 2.28, glibcxx >= 3.4.25
## Onboarding
Once you have Windsurf running, you will see the page below. Let's get started! Note that you can always restart this onboarding flow with the "Reset Onboarding" command.
### 1. Select setup flow
If you're coming from VS Code or Cursor, you can easily import your configurations. Otherwise, select "Start fresh". You can also optionally add `windsurf` to your PATH so that you can run `windsurf` from your command line.
Choose your keybindings here, either default VS Code bindings or Vim bindings.
You can migrate your settings, extensions, or both here.
### 2. Choose editor theme
Choose your favorite color theme from these defaults! Don't worry, you can always change this later. Note that if you imported from VS Code, your imported theme will override this.
### 3. Sign up / Log in
To use Windsurf, you need to use your Codeium account or create one if you don't have one. Signing up is completely free!
Once you've authenticated correctly, you should see this page. Hit "Open Windsurf" and you're good to go!
#### Having Trouble?
If you're having trouble with this authentication flow, you can also log in and manually provide Windsurf with an authentication code.
Click the "Copy link" button to copy an authentication link to your clipboard and enter this link into your browser.
Copy the authentication code displayed in the link and enter it into Windsurf.
### 4. Let's Surf! 🏄
## Things to Try
Now that you've successfully opened Windsurf, let's try out some of the features! These are all conveniently accessible from the starting page. :)
On the right side of the IDE, you'll notice a new panel called "Cascade". This is your AI-powered code assistant! You can chat, write code, and run code with Cascade! Learn more about how it works [here](/windsurf/cascade).
You can create brand new projects with Cascade! Click the "New Project" button to get started.
You can open a folder or connect to a remote server via SSH or a local dev container. Learn more [here](/windsurf/advanced).
Configure some of Windsurf's AI settings here. Want to slow down autocomplete speed or disable some features? You can do that here.
You can open the command palette with the `⌘+⇧+P` (on Mac) or `Ctrl+Shift+P` (on Windows/Linux) shortcut. Explore the available commands!
## Forgot to Import VS Code Configurations?
You can easily import your VS Code/Cursor configuration into Windsurf if you decide to do so after the onboarding process.
Open the command palette (Mac: `⌘+⇧+P`, Windows/Linux: `Ctrl+Shift+P`) and type in the following:
## Incompatible Extensions
A few extensions are incompatible with Windsurf, including other AI code-completion extensions and proprietary extensions that cannot be installed through any marketplace on Windsurf.