JetBrains AI Assistant for IntelliJ IDEA - Detailed Guide
Approval status: Under review – this tool is not currently approved for use. We are reviewing it for potential approval, but cannot commit to whether or when this might happen.
(Generated by ChatGPT Deep Research on 23 June 2025)
What JetBrains AI Assistant does
JetBrains AI Assistant is an AI-powered feature set available across JetBrains IDEs (including IntelliJ IDEA). It works with large language models (LLMs) in your development workflow to help with tasks such as code generation, intelligent coding assistance, explaining code, writing documentation, and more. The assistant is built into the IDE’s interface and deeply integrated with JetBrains’ code understanding. It acts as a pair programmer that can answer questions, suggest improvements, generate tests or commit messages, explain errors, and even refactor code on request. You interact with it via chat or by invoking AI actions in the editor (for example, asking to explain a selected code snippet or generate a unit test).
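For illustration, here is the kind of exchange involved in test generation. Both the selected function and the generated test below are invented for this guide, not actual assistant output:

```kotlin
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test

// Function selected in the editor before asking the assistant to
// "generate a unit test" (illustrative example for this guide):
fun slugify(title: String): String =
    title.trim().lowercase().replace(Regex("[^a-z0-9]+"), "-").trim('-')

// The kind of JUnit 5 test the assistant might propose (illustrative only):
class SlugifyTest {
    @Test
    fun `collapses punctuation and whitespace into hyphens`() {
        assertEquals("hello-world", slugify("  Hello, World! "))
    }
}
```

As the terms of use stress (see below), any such suggestion should be reviewed like code from an unfamiliar contributor before it is accepted.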
Official website: The JetBrains AI Assistant is part of the JetBrains AI service, described on JetBrains’ site and the JetBrains AI product page. You can find the plugin on the JetBrains Marketplace (ID 22282), but you need a JetBrains Account and must accept the JetBrains AI terms to use it.
Free vs paid versions: JetBrains AI Assistant is offered through subscription tiers on the JetBrains AI service. All users with a valid JetBrains IDE licence can access an AI Free plan that includes the core AI Assistant features (code assistant, chat, Junie coding agent, and Grazie writing assistant) with unlimited local/offline code completion and a one-time limited quota of cloud-based AI queries. This free quota does not renew monthly – once you use it up, you need a paid plan for further cloud AI usage. Paid plans (AI Pro and AI Ultimate) provide higher monthly allowances of cloud AI queries and are included with certain JetBrains subscriptions (for example, All Products Pack includes AI Pro). An AI Enterprise plan is also available for organisations, offering the same features with even higher usage limits plus enterprise-specific capabilities (like user access management and the option to use custom or on-premises AI models). Regardless of tier, JetBrains emphasises that “users keep their data” and that the AI features are built with a focus on privacy, security and transparency.
Privacy controls
JetBrains IDEs provide detailed privacy controls for AI features. Data sharing is opt-in: by default, detailed data collection is disabled in released versions of the IDE (it may be enabled by default in EAP/pre-release builds). You must explicitly enable any detailed logging of AI interactions. There are two levels of data that can be collected by the AI Assistant, both fully controlled by you:
- Behavioural data – high-level usage metrics such as which AI features are used and how often suggestions are accepted. This does not include any source code or personal content. It is used to improve product features and is governed by the IDE’s general data sharing setting (found under Help → Data Sharing… in JetBrains IDEs). You can choose to disable this anonymous usage reporting, in line with GOV.UK’s preference for minimal data collection.
- Detailed data – the actual full text of prompts and AI responses, including code snippets or other content you send to the AI. This is not collected or stored by JetBrains unless you explicitly opt in via an IDE prompt or setting. By default (if you do not opt in), any code or query you send to the AI goes directly to the LLM provider and is not saved on JetBrains’ servers. If you do opt in to detailed data collection (for debugging or improving the AI integration), JetBrains will temporarily store those interactions for analysis, but only for a short period (see Data retention below).
In summary, you can use the AI Assistant in “zero-data-retention” mode by leaving the default settings as they are. This means no conversation or code content is persistently logged by JetBrains. You can also review and revoke these sharing choices at any time in the IDE settings. This level of control aligns with government data-handling policies by ensuring that sensitive code can remain private.
Terms of use and privacy policy
Before using AI Assistant, you must agree to the JetBrains AI Terms of Service. Key points in these terms relevant to government use include:
- Data processing and responsibility: The service works by sending your prompts (and context files if needed) to third-party LLM providers to generate answers. JetBrains acts as an intermediary and imposes contractual obligations on these providers to protect your data. You are responsible for which files or code you choose to share with the AI. The terms explicitly remind users not to input sensitive or protected data unless you are comfortable with it being processed by external AI providers.
- No ownership change: You retain ownership of your code and data. The outputs generated by the AI are considered your data as well, with no rights claimed by JetBrains over code or text suggestions the AI produces.
- No training on your data: Both JetBrains and its AI subcontractors agree not to use your code or prompts to train AI models without permission. All third-party LLMs integrated are either configured not to learn from API inputs or are covered by agreements that ensure no training or secondary use of submitted data. (JetBrains also states it does not partner with providers who would use customer data for training.)
- Confidentiality: JetBrains commits to keeping your inputs and outputs confidential and only using them to provide the service. Any third party engaged (OpenAI, Google, etc.) is bound by similar confidentiality obligations via JetBrains’ agreements. The terms note that JetBrains may monitor content in transit for abuse (for example, to prevent violating use policies) and may temporarily store data for that purpose, but such data is handled under strict access control and limited retention.
- Acceptable use and liability: The terms include an Acceptable Use Policy (for example, you must not use the AI to generate illegal content, and must not try to circumvent fees or security). They also clarify that AI suggestions may be incorrect or inappropriate, and it’s your responsibility to review and validate AI outputs before using them. JetBrains disclaims liability for any code problems or breaches that arise from uncritical use of AI output (standard for these services). For government developers, this means you should treat AI Assistant as a helper, not an authoritative source.
- Privacy policy compliance: The JetBrains Privacy Policy applies as well, which confirms JetBrains acts in accordance with GDPR and other applicable laws. JetBrains s.r.o. (based in the Czech Republic) is the data controller for personal data, and the policy outlines how user account data and usage data are handled. It also points to a Data Processing Agreement (DPA) available for customers who need one – likely relevant if a UK government entity requires a formal DPA for compliance.
All relevant legal documents (the JetBrains AI Terms of Service and the JetBrains Privacy Policy) are available on JetBrains’ website and should be reviewed by your department’s legal team. Notably, the Terms of Service (Section 5) and JetBrains’ online documentation provide assurances that the tool’s design emphasises data protection and that no data is stored long-term by JetBrains unless explicitly allowed.
Where your data goes
Server location and data residency
JetBrains is an EU-based company (headquartered in Prague, Czech Republic), and by default your data is routed through servers in the EU. The JetBrains AI service backend (which the IDE connects to) is hosted in Europe – specifically, JetBrains lists AWS data centres in Ireland as hosting its “IDE Services” cloud infrastructure. This means when you use the AI Assistant in the UK, your requests first go to JetBrains’ EU servers. However, the requests then reach third-party AI providers, which may be located outside Europe depending on the provider used:
- OpenAI (for example, GPT models) – Data will be processed in the United States. OpenAI is a US company, and JetBrains uses OpenAI’s API with a zero-data-retention mode (OpenAI does not store or use the data for training).
- Anthropic (Claude models) – Processed in the United States. Anthropic also offers a no-retention policy upon request, which JetBrains uses.
- Google (PaLM/Gemini models) – Google’s AI cloud operates multi-region. JetBrains indicates Google’s LLMs might process data in EU, US, and Asia data centres, depending on availability. (Google’s Vertex AI service has EU regional options; JetBrains documentation suggests Google’s model will use EU servers when possible.)
- Amazon Bedrock (AWS LLMs) – Applicable mainly to enterprise users who configure this; it would typically allow regional selection. JetBrains documentation lists Amazon as a supported provider with data location governed by AWS settings. In an enterprise scenario, a UK government department could choose an AWS UK or EU region for Bedrock models if using their own keys.
In summary, for the default (cloud) AI providers, data will leave the UK/EU (for example, to the US) in many cases. This has implications under UK GDPR and departmental data policies: appropriate safeguards are in place via JetBrains’ agreements (standard contractual clauses for data transfer are in use), and all providers JetBrains uses claim to support “zero data retention” (no storing or training on the inputs). Nonetheless, the fact that code may transit to the US means government users should be cautious with extremely sensitive code.
Enterprise option: The JetBrains AI Enterprise plan can reduce residency concerns. It allows an organisation to self-host parts of the AI service and even use custom LLM endpoints. For example, a government IT department could integrate an LLM hosted in a UK-based cloud or on-premises, via JetBrains’ IDE Services, instead of sending data to OpenAI/Anthropic. JetBrains notes that AI Enterprise “supports platform-specific LLMs… ensuring you maintain complete control over data and AI operations within your infrastructure”. This plan effectively keeps the data within chosen jurisdictions at the cost of additional setup and likely higher fees. For the standard cloud service (Free/Pro tiers), however, UK users should assume data might travel to the US unless they restrict models to those hosted in EU regions (if such an option is exposed).
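To make the custom-endpoint idea concrete, the sketch below shows what a self-hosted, OpenAI-compatible chat endpoint looks like from a client’s point of view. The hostname and model name are hypothetical, and in a real deployment the wiring is configured through JetBrains IDE Services rather than hand-written HTTP calls:

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Hypothetical self-hosted endpoint: the hostname and model name are
// invented for this sketch. Real configuration happens in JetBrains
// IDE Services, not in hand-written client code.
fun main() {
    val body = """{"model": "internal-model", "messages": [{"role": "user", "content": "Say hello"}]}"""
    val request = HttpRequest.newBuilder()
        .uri(URI.create("https://llm.internal.example.gov.uk/v1/chat/completions"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()
    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
    println(response.body()) // The request never leaves your own infrastructure.
}
```

The point is simply that every prompt terminates inside infrastructure the organisation controls, rather than transiting to a third-party provider.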
Data protection in transit
All data in transit is encrypted. JetBrains confirms that communications between the IDE, JetBrains servers, and the AI providers use modern TLS encryption. This is standard practice: the IDE connects to JetBrains AI endpoints over HTTPS, and JetBrains in turn calls the LLM provider APIs over HTTPS. According to JetBrains’ security policy, they use up-to-date TLS protocols to protect data during transmission and regularly rotate encryption keys. This means that source code or queries sent to the AI Assistant are not transmitted in plaintext over the internet at any point – they are encrypted on the wire, protecting against eavesdropping.
For UK government contexts, this level of in-transit encryption (TLS 1.2/1.3) meets typical PSN/GovConnect requirements for confidentiality in transit. The connection is established from your machine to JetBrains (over port 443), so agencies will need to allow IDE traffic to *.jetbrains.com and the relevant cloud endpoints. JetBrains publishes information for network configurations if needed (and the Trust Centre provides details on their network security measures).
In summary, data in transit is secure – communication is end-to-end TLS encrypted from the IDE to JetBrains and onward to the AI model provider.
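If your security team wants to verify the negotiated protocol for themselves, a quick check from a developer machine is straightforward. The following is a minimal Kotlin/JVM sketch; the hostname is a placeholder, so substitute whichever JetBrains AI endpoint your proxy logs actually show:

```kotlin
import javax.net.ssl.SSLSocket
import javax.net.ssl.SSLSocketFactory

// Open a TLS connection and report what was negotiated.
// "www.jetbrains.com" is a placeholder; substitute the actual
// AI service endpoint observed in your network logs.
fun main() {
    val socket = SSLSocketFactory.getDefault()
        .createSocket("www.jetbrains.com", 443) as SSLSocket
    socket.use {
        it.startHandshake()
        println("Protocol: ${it.session.protocol}")   // e.g. TLSv1.3
        println("Cipher:   ${it.session.cipherSuite}")
    }
}
```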
Data protection at rest
By default, JetBrains AI is designed to avoid storing your code or prompts at rest. The JetBrains AI service does not persistently store any of the code or query data on its servers unless you have opted in to the detailed data collection for product improvement. In zero-retention mode (the default), data at rest on JetBrains’ side is limited to temporary memory and short-term caches necessary to process requests, which are not written to long-term storage.
If you do opt in to detailed data collection, JetBrains will temporarily store those prompts/responses on their servers (hosted in the EU) for analysis by their development team. Even then, they apply strong encryption at rest (AES-256 or similar) for any stored data, and access is restricted. JetBrains’ Privacy Policy states that all data at rest is encrypted where technically feasible and that they have measures to limit access to encryption keys. Any stored data is also subject to removal as soon as it’s no longer needed (see data retention below).
On the LLM provider side, data at rest policies vary by provider, but JetBrains only partners with those that commit to not store or use the data beyond the immediate request. For instance, OpenAI’s enterprise API promises not to retain request data for longer than 30 days (and not use it for training), and Google’s Vertex AI and Amazon’s Bedrock have options for no persistent storage of prompts. Anthropic by default doesn’t use data for training and offers data deletion on request. The “Zero-Data Retention” table from JetBrains confirms that all integrated providers either support not storing user content at rest, or will do so when configured under JetBrains’ contract. In practice, this means your code is not being saved into some database on the provider side either – it’s held in memory to generate the response and then discarded (except for short caching necessary for continuity in a single chat session).
To summarise: under normal usage, no project code or prompts are stored at rest by JetBrains, and the third-party AI services invoked are instructed not to store them either. All persistent storage of user data (like account details, subscription information, etc.) remains in JetBrains’ EU servers with encryption and compliance to GDPR standards.
Data retention
Data retention is minimal by design. JetBrains has adopted a “zero data retention” stance for the AI Assistant service unless you explicitly allow data to be collected:
- JetBrains servers: If you do not opt in to detailed logging, JetBrains retains none of the AI interaction data on disk. The question “Does JetBrains retain any of my data?” is answered with “No” in their docs for non-opted-in users. All prompts and responses are processed in memory and streamed to you, but not logged to storage. (JetBrains may keep aggregate counts or usage timestamps for billing/fair-use purposes, but not the contents of queries.)
- If you opt in to detailed data collection (for example, to help JetBrains improve the AI feature by sharing transcripts), then JetBrains will store the content of interactions for a limited time. Their policy is that such detailed data is kept for no more than 30 days, after which it is deleted from their servers. Access to this stored data is restricted to JetBrains teams working on AI, and it is not used to train AI models – it is only analysed for service improvement and debugging.
- Third-party LLM providers: As noted, JetBrains ensures that all integrated AI providers either do not retain data or have policies in place that align with no-retention. According to JetBrains: OpenAI, Google, and Amazon Bedrock all satisfy “Zero Data Retention” requirements for the JetBrains AI service. Anthropic is slightly different in that by default it may retain data for a time, but JetBrains’ contract or configuration ensures no retention for JetBrains AI usage (Anthropic has a programme for zero retention on request, which JetBrains uses). A quick summary from JetBrains’ Data Retention FAQ: “Do these third parties retain any of my data? – No. See the table below.”, which then shows all providers as “Supported” in terms of zero-retention.
It’s worth noting that “zero retention” typically means the providers won’t store or reuse the prompts outside the scope of answering the query. However, some providers might keep data briefly (a few hours or days) for abuse monitoring or temporary caching. OpenAI, for example, has stated it may retain API data for 30 days for abuse detection, but not use it for training. Google’s Vertex AI allows opting out of data logging entirely (which JetBrains likely does for their service). For practical purposes, from your perspective no data is lingering on these services beyond what’s necessary to produce the immediate result.
For government adoption, this retention approach is very favourable: the tool does not accumulate a collection of your code or conversation history. Each query is temporary. If an organisation wants an extra layer of assurance, they could refrain from opting into any data collection (thus nothing is stored beyond temporary memory), and even consider using the Enterprise plan to keep as much processing internal as possible. JetBrains also offers to sign DPAs and has commitments in place to delete customer data on request (you can request removal of any personal data you believe might have been stored, via your JetBrains Account or support, per privacy policy).
Audit logs
Audit logging capabilities in the AI Assistant are limited. The out-of-the-box JetBrains AI service does not provide end-users or administrators with a detailed audit log of AI queries and responses, primarily because it does not persist that data (as described above). In other words, since the service isn’t saving the content of prompts by default, it also isn’t presenting a log of them to admins. Individual developers can see their current session history in the IDE (for example, the AI chat window keeps the conversation context as long as the session is open), but once closed, that history is not saved to a file unless you manually copy it.
For organisations, this means there isn’t a native feature to review “who queried what.” This is a double-edged sword: it’s good for privacy (no unintended logging of possibly sensitive prompts), but it means compliance officers cannot later audit the specific content sent to external LLMs, except by requiring developers to follow internal logging procedures if needed.
However, usage metrics are collected for billing and monitoring, which JetBrains could provide if requested. For instance, an admin can likely see which users have activated the AI service under their JetBrains licences, and perhaps how many requests or tokens they have used in a period (to enforce fair use). This information is not publicly documented in detail, but given the quota system, JetBrains must track usage counts per account. Those counts might be accessible through the JetBrains Account portal or on request, although not in the form of content logs, just numerical usage.
In the AI Enterprise scenario, since the organisation might be hosting parts of the service on-premises or routing through their own infrastructure, there is more potential for custom logging. Enterprise customers could configure the proxy or their chosen LLM provider to log requests internally if that’s a requirement (for example, if using an on-prem LLM, they could log prompts to a secure audit system). JetBrains’ IDE Services might also produce audit logs on the organisation’s side when integrated, but such features would need to be configured by your IT team.
Finally, it’s noted in the terms that JetBrains reserves the right to review content if necessary for legal compliance (for example, if compelled by law or if they detect misuse). In theory, JetBrains could retrieve certain interactions from their short-term logs (if within the 30-day window and if detailed logging was enabled or a legal order compels it). But this is for their internal moderation, not a customer-facing audit tool.
Implication: From a governance perspective, if audit logging of AI usage is required, the current free/pro service may not meet that need out-of-the-box. An organisation might reduce this risk by policy (instructing developers to document any AI-assisted code changes) or by opting for the Enterprise setup where they can control logging. The absence of a built-in audit trail is a conscious trade-off by JetBrains to avoid storing sensitive data.
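One lightweight way to implement such an internal policy is a commit-message convention. The sketch below assumes a hypothetical `AI-Assisted: yes` trailer that developers add to relevant commits; both the convention and the script are suggestions for this guide, not JetBrains features:

```kotlin
// Minimal sketch: list commits carrying the hypothetical
// "AI-Assisted: yes" trailer, giving a reviewable audit trail
// built by team convention rather than by the tool itself.
fun main() {
    val process = ProcessBuilder(
        "git", "log", "--format=%h %ad %s", "--date=short",
        "--grep=AI-Assisted: yes"
    ).redirectErrorStream(true).start()
    process.inputStream.bufferedReader().readLines().forEach(::println)
    process.waitFor()
}
```

Developers would record the trailer at commit time, for example `git commit -m "Refactor parser" -m "AI-Assisted: yes"`.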
Access controls
User access and privilege management: Access to the JetBrains AI Assistant is tied to JetBrains Accounts and licence entitlements. Only users who have an active IDE licence (commercial or an educational/complementary licence) can use the AI service. Notably, IntelliJ IDEA Community Edition users (who don’t have a JetBrains account licence) are not eligible for the AI service under current rules. This acts as a form of access control: an organisation can decide which developers are assigned a JetBrains AI licence (for paid plans) or allowed to enable the free trial, by controlling JetBrains Account access and licence allocation.
JetBrains provides an organisation admin portal (JetBrains Account for Organisations) where licence seats for AI Pro/Ultimate can be assigned or revoked to specific users. For instance, a team lead could choose not to assign the AI service to certain developers if desired. The AI Enterprise plan extends this with user access management features – allowing integration with the company’s single sign-on or directory and more detailed control over who in the organisation can use AI and how. Enterprise also supports setting organisation-wide policies, potentially restricting which model providers can be used (for example, disabling one provider if not approved by policy).
Privileges within the tool: The AI Assistant operates within the IDE and respects your existing permissions. It does not introduce new write privileges to repositories or file systems beyond what you already have. The assistant can suggest code changes or even apply code edits in the editor (for example, if you prompt “refactor this function,” it can modify the code in your IDE editor). But any changes occur in your local environment – the AI cannot on its own push commits to a remote Git repository or access files that you haven’t opened. It runs as part of the IDE process, under the same local permissions as you. Therefore, it cannot circumvent project access controls: if you don’t have access to a certain repository, the AI assistant can’t access it either.
JetBrains’ agent (“Junie”) and the assistant will not take actions like executing arbitrary system commands or publishing code outside the IDE. They are constrained to the IDE’s APIs. In practice, all code generation or edits require your initiation and confirmation. Even the more “autonomous” Junie agent is described as working “within the IDE as a collaborator” – it may automate some steps (for example, generating a set of changes across files), but you review those changes before they are saved or committed. There is no feature that automatically commits to version control without your action. This is important for security: the tool cannot, for example, steal code or secrets unless you explicitly ask it to do something that would have that effect (and even then, network communication is only to the JetBrains AI service endpoints).
From a policy standpoint, an admin concerned about AI usage could disable the plugin or block the network calls (for all or specific users). JetBrains doesn’t currently offer a central “kill switch” for AI at the licence server level aside from not assigning an AI licence. However, environment controls (firewall/proxy) could be used to disallow the JetBrains AI domain if necessary.
Write restrictions: The AI Assistant does not directly integrate with git push or similar. Generated code or suggestions remain local until you manually stage and push them. This ensures that any code modifications go through the normal code review and commit process. It’s also worth noting that JetBrains AI is intended to follow any coding guidelines or restrictions set in the IDE (for example, it can be guided by project instructions, and enterprise users can configure it to follow certain rules). But it does not have an independent ability to access resources beyond the IDE context.
In summary, access control is primarily exercised via licence management and network policy. Government organisations can choose who gets to use the AI Assistant and can require their developers to follow internal guidelines (for example, not to use it on classified code, or only in specific secured environments). The Enterprise version would further allow integration with corporate identity management and use of custom (possibly on-prem) AI models for tighter control.
Compliance and regulation
JetBrains appears to have designed the AI Assistant with major compliance considerations in mind, especially concerning data protection (GDPR) and information security best practices. Points to note for UK government adoption:
- GDPR / UK-DPA: As JetBrains is EU-based, it falls fully under GDPR. All personal data handling (even minimal in the case of this service) is done according to GDPR principles. JetBrains’ Privacy Notice confirms that the default storage location of customer data is the EU and that if data is transferred outside the EU (for example, to the US for LLM processing), it is done under the European Commission’s adequacy mechanisms or Standard Contractual Clauses. UK organisations can likely rely on these same clauses (post-Brexit, the UK recognises EU SCCs, or a UK-specific International Data Transfer Agreement can be arranged if needed). JetBrains also offers a Data Processing Agreement for customers, which UK government entities could sign to satisfy UK GDPR requirements.
- Data security standards: JetBrains has a comprehensive security programme. While they do not explicitly state which certifications they hold, their Trust Centre and privacy documents indicate adherence to industry standards. For example, they mention using ISO 27001-compliant data centres (AWS, GCP) and performing supplier risk assessments. They also implement least-privilege access, encryption in transit and at rest, and regular security reviews. Government security architects would appreciate that JetBrains maintains a high level of security hygiene. If needed, JetBrains can likely provide a security whitepaper or answer questionnaires (their Trust Centre site may have more details or SOC 2 reports for their cloud services, though this isn’t explicitly referenced in public docs).
- Compliance with government standards: There is no specific mention of compliance with standards like Cyber Essentials Plus, ISO/IEC 27001 (for JetBrains as an organisation), or FedRAMP (a US standard) in the context of the AI service. However, given that JetBrains Space (a different product) and other cloud services target enterprises, it’s reasonable to assume they align with common frameworks. JetBrains’ focus on not storing data and on obtaining user consent for any data use is in line with the UK government’s cloud security principles (minimise data at rest, user control, transparency).
- Privacy and ethics of AI: JetBrains explicitly commits that it does not use customer-provided code to train AI models, addressing a key intellectual property concern. This means using the AI Assistant will not inadvertently waive IP rights or expose proprietary code to downstream uses beyond the immediate service. From an ethical standpoint, the AI Assistant’s suggestions come from models (OpenAI, etc.) that may have been trained on open-source code, so there is a theoretical risk of generated code resembling licensed code. JetBrains has not published a statement on this specifically, but generally advises users to review outputs. (Government developers should still apply due diligence to ensure no sensitive code is suggested for public release inadvertently.)
- Regulatory restrictions: The JetBrains AI terms mention export control laws – essentially you must not use the service in sanctioned regions or to process sanctioned data. This is standard; it shouldn’t affect typical UK usage, but if any part of government deals with ITAR or other restricted data, they should treat AI outputs carefully. Also, JetBrains notes the AI service is not available in certain countries due to provider restrictions. Supported locations are essentially those not on US/EU sanctions lists. The UK is supported; only a few regions (possibly including mainland China and Russia) cannot use OpenAI by default.
- Audit and accountability: As discussed, the lack of built-in logging means compliance relies on organisational policy rather than technological enforcement. For now, using the AI Assistant would require trusting the contractual and technical controls in place rather than having an independent audit trail. If a higher level of accountability is needed, the Enterprise deployment (with potential logging and on-prem model options) might be necessary.
In conclusion, JetBrains AI Assistant can be used in a manner consistent with UK public sector security and privacy requirements, provided that certain precautions are observed (don’t input highly classified data unless using an isolated/on-prem solution, ensure developers understand the terms, possibly sign a DPA with JetBrains, etc.). The service emphasises data privacy (no storage, no training use of data), secure handling (TLS/AES encryption), and user control, which align well with compliance best practices. JetBrains being under EU law and the service architecture being transparent are positive aspects. Government stakeholders should review JetBrains’ legal documents (Terms, Privacy Notice, DPA) and possibly get written assurances for any specific concern (for example, confirmation of zero-retention modes) – but the documentation we’ve cited provides those assurances in writing.
What to do next
- Review JetBrains’ AI Terms of Service and Privacy Policy with your legal team
- Decide if EU data processing with some US transit is acceptable for your use case
- Check if zero-retention mode meets your data protection requirements
- Consider whether the limited audit logging meets your compliance needs
- Assess if the Enterprise plan is needed for enhanced control and on-premises options
References
- JetBrains, JetBrains AI Terms of Service (v2.0, effective 22 May 2024) – Legal terms for use of the AI Assistant service, including data handling commitments and user responsibilities.
- JetBrains, Privacy Policy (v3.0, July 2024) – Outlines JetBrains’ data protection measures (encryption, GDPR compliance) and references to third-party data processing and transfer safeguards.
- JetBrains, Third-Party Services for JetBrains AI (April 2025) – Documentation listing the AI subcontractors (OpenAI, Google, Anthropic, etc.), their data centre locations and zero-data-retention support.
- JetBrains, JetBrains AI Documentation – AI Service Licensing (June 2025) – Details the Free, Pro, Ultimate, and Enterprise AI plans and their feature differences and usage quotas.
- JetBrains, JetBrains AI Documentation – About AI Assistant (April 2025) – Provides an overview of AI Assistant capabilities and the integration of local vs cloud models, including enterprise on-premises options.
- JetBrains, Data Collection and Use Policy (April 2025) – Explains how the AI service collects (or does not collect) user data, distinguishing between behavioural telemetry and detailed content, and the retention period (a maximum of 30 days if opted in).
- JetBrains, Data Retention FAQ (April 2025) – Confirms that no customer data is kept by JetBrains by default and summarises the zero-data-retention status of each AI provider integrated with the service.
- InfoWorld (via Major Digest), “JetBrains IDEs now include AI tools by subscription” (18 April 2025) – News article summarising JetBrains’ announcement of AI Assistant availability, including quotes from JetBrains’ CEO on user data ownership and the introduction of free and paid tiers. (Provides external context on JetBrains’ approach to privacy and subscription model.)