JetBrains AI Assistant
Deeply integrated AI powered by the IntelliJ platform's semantic code understanding.

The leading open-source AI code assistant that integrates any LLM into VS Code and JetBrains IDEs.
Continue is a modular, open-source AI code assistant designed to eliminate vendor lock-in by bridging any Large Language Model (LLM) and the developer's IDE. Technically, Continue functions as an orchestration layer that manages context through a Retrieval-Augmented Generation (RAG) pipeline, indexing local codebases into a LanceDB vector store (with SQLite handling index metadata). It allows developers to switch seamlessly between cloud providers like OpenAI and Anthropic, or local inference engines like Ollama and LM Studio. By 2026, Continue has solidified its position as the enterprise standard for 'Bring Your Own Model' (BYOM) architectures, offering a 'Control Plane' through which teams manage prompts, context policies, and model routing across entire organizations. Its architecture is extensible through .continuerc.json configuration files, which let teams define custom 'Slash Commands' and 'Context Providers' that pull data from Jira, GitHub Issues, or internal documentation, making Continue a highly customizable operating system for AI-assisted software development.
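The BYOM switching described above is driven entirely by configuration. A minimal sketch of a Continue config.json models list (the field names follow Continue's documented schema; the model names and placeholder keys are illustrative):

```json
{
  "models": [
    {
      "title": "Claude 3.5 Sonnet",
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-latest",
      "apiKey": "ANTHROPIC_API_KEY_HERE"
    },
    {
      "title": "GPT-4o",
      "provider": "openai",
      "model": "gpt-4o",
      "apiKey": "OPENAI_API_KEY_HERE"
    },
    {
      "title": "Local Llama 3 (Ollama)",
      "provider": "ollama",
      "model": "llama3"
    }
  ]
}
```

Each entry appears in the IDE's model dropdown, so moving between a cloud provider and a local Ollama instance is a selection change, not a reinstall.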
Uses LanceDB for local vector storage to index code snippets, enabling RAG across thousands of files.
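The embedding model that feeds this local index is itself swappable, so the RAG pipeline can stay fully on-disk. A sketch assuming Continue's embeddingsProvider config key (the specific model shown is an illustrative choice):

```json
{
  "embeddingsProvider": {
    "provider": "ollama",
    "model": "nomic-embed-text"
  }
}
```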
Build secure, high-performance applications with an AI coding companion integrated directly into the AWS ecosystem.
The open-source autopilot for software development that puts you in control of your models and context.
State-of-the-art bimodal code intelligence for high-fidelity code generation and refinement.
A plugin system to pull external data (Jira, Slack, SQL schemas) directly into the LLM prompt context.
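Context Providers are registered in the same JSON config. A sketch, assuming the provider names shown are available in your build of Continue (the exact parameter keys vary by provider and should be checked against the docs; the domain and token values are placeholders):

```json
{
  "contextProviders": [
    { "name": "codebase" },
    { "name": "diff" },
    {
      "name": "jira",
      "params": {
        "domain": "yourcompany.atlassian.net",
        "token": "JIRA_API_TOKEN_HERE"
      }
    }
  ]
}
```

Once registered, typing an @-mention such as @codebase or @jira in chat pulls that source into the prompt context.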
A dedicated engine for multi-line code completions that works with any fine-tuned 1B-7B parameter model.
User-defined shortcuts that trigger specific prompts or scripts (e.g., /edit, /test, /share).
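Shortcuts of this kind are declared in the customCommands block of the config. A sketch, assuming Continue's documented {{{ input }}} templating; the /test-cov name and prompt text are invented for illustration:

```json
{
  "customCommands": [
    {
      "name": "test-cov",
      "description": "Generate unit tests for the selected code",
      "prompt": "Write thorough unit tests for the following code, covering edge cases and failure modes:\n\n{{{ input }}}"
    }
  ]
}
```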
Configuration is handled via JSON, allowing for version-controlled IDE settings.
Native support for Ollama, allowing for 100% offline code assistance.
Automatically route chat queries to large models (Claude 3.5) and autocomplete to fast models (DeepSeek).
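This split routing falls out of having separate config slots for chat and autocomplete models. A sketch, assuming the tabAutocompleteModel key from Continue's schema (model names and the placeholder key are illustrative):

```json
{
  "models": [
    {
      "title": "Claude 3.5 Sonnet (chat)",
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-latest",
      "apiKey": "ANTHROPIC_API_KEY_HERE"
    }
  ],
  "tabAutocompleteModel": {
    "title": "DeepSeek Coder (autocomplete)",
    "provider": "ollama",
    "model": "deepseek-coder:1.3b"
  }
}
```

Chat requests go to the large hosted model while per-keystroke completion requests hit the small local one, keeping latency and cost down.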
Manually refactoring Python 2 to Python 3 across a massive repository.
Registry Updated: 2/7/2026
Developers in high-security sectors (FinTech/Gov) cannot use cloud AI.
Increasing test coverage for complex business logic classes.