Amazon Bedrock - Detailed Guide
Approval status: Approved - you can use this tool within Defra when you follow the tool guidance.
(Generated by AI, ChatGPT Deep Research, on April 1st 2025)
What Amazon Bedrock does
Amazon Bedrock is a cloud service that gives you access to AI models from leading companies. It runs on AWS infrastructure, so you can use the security and compliance features you already have. Bedrock is a key part of AWS’s AI strategy, focused on enterprise-grade AI deployment within AWS’s cloud.
Key feature | What it does | Benefits |
---|---|---|
Access to various AI models | Gives you models from AI21 Labs, Anthropic (Claude), Cohere, Meta (Llama 2), Mistral AI, and Amazon’s own Titan models | Choose the right model for specific tasks |
Customisation options | Fine-tune models and use retrieval-augmented generation (RAG) to adapt models to your needs | Improve model outputs for specific tasks with minimal data and computing requirements |
Integrated tools | Works with other AWS services including knowledge bases, model evaluation, agents, and guardrails | Creates a complete system for building AI applications |
Serverless deployment | Fully managed infrastructure with pay-as-you-go pricing | No infrastructure management needed, scales with usage |
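As a quick illustration, the AWS SDK for Python (boto3) can list the foundation models available to your account. This is a minimal sketch: the London region (eu-west-2) is assumed, and the models returned depend on which providers you have enabled access to in the console.

```python
import boto3

# Control-plane client for Bedrock management operations.
bedrock = boto3.client("bedrock", region_name="eu-west-2")

# List the foundation models available in this region.
response = bedrock.list_foundation_models()
for model in response["modelSummaries"]:
    print(model["modelId"], "-", model["providerName"])
```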
You can deploy Bedrock in different ways for different organisational needs:
Deployment option | What it does | Best for |
---|---|---|
Standard cloud deployment | Models run in AWS public cloud regions | General business use cases with standard security requirements |
AWS PrivateLink | Private connection to Bedrock from your VPC without internet exposure | Enhanced security for sensitive work |
Dedicated capacity | Reserved computing resources for consistent performance | High-volume or performance-critical applications |
On-premises (via EKS Anywhere) | Deploy some models in your own data centres | Highly regulated environments with strict data residency requirements |
Bedrock is available in several AWS regions, with plans for expansion. The UK London region supports both model training and inference, which is important for UK government workloads with data residency requirements. Bedrock works with existing AWS services, so you can use your current AWS infrastructure and security when deploying AI solutions.
For UK government use, Bedrock’s appeal includes its strong security posture, data sovereignty options, and alignment with AWS’s existing authority to operate in government environments. The service follows a “shared responsibility model” - AWS secures the underlying infrastructure and the models, while you secure your data, applications, and model inputs/outputs.
Privacy controls
Privacy control | What it does | Default setting | Your control |
---|---|---|---|
Opt-out of model training | Ensures your data is not used to train or improve Bedrock’s AI models | Enabled by default | Organisation-wide control via AWS console |
Private endpoints | Access Bedrock via AWS PrivateLink without exposing traffic to the public internet | Optional | Configuration required |
HIPAA/PHI setting | Enables use of protected health information with supported models (for eligible accounts) | Disabled by default | Manual opt-in required |
Knowledge base privacy | Controls whether data in knowledge bases can be used for service improvement | Training opt-out applies | Follows global opt-out setting |
Amazon Bedrock is designed with privacy-by-default principles. A key control is that, by default, your data is not used to train or improve the foundation models in Bedrock, and AWS makes this commitment explicit in its documentation and service terms. This is a significant privacy advantage over AI services that require you to opt out explicitly. AWS considers your inputs (prompts) and outputs (model responses) to be your content and treats them accordingly under the AWS shared responsibility model.
If your organisation wants further control beyond this default privacy stance, several additional options exist. For network-level privacy, you can use AWS PrivateLink to access Bedrock via private endpoints, ensuring your traffic to and from the service never traverses the public internet. For highly sensitive use cases, some models can be deployed on-premises through EKS Anywhere, though with potential trade-offs in which models are available and how frequently they are updated.
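As an illustration of the PrivateLink option, the sketch below creates an interface VPC endpoint for the Bedrock runtime API so that model invocations stay on the AWS network. The VPC, subnet, and security group IDs are hypothetical placeholders, and eu-west-2 (London) is assumed as the region.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

# Interface endpoint for the Bedrock runtime API - traffic to and
# from Bedrock stays on the AWS network. IDs below are placeholders.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.eu-west-2.bedrock-runtime",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```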
For organisations handling regulated data such as protected health information (PHI), AWS allows eligible AWS accounts to process PHI through Bedrock if they have an executed Business Associate Agreement (BAA) and enable the HIPAA option in their account settings. However, this requires explicit opt-in and is available only for specific models (Claude 2 and some Titan models as of early 2024).
When using Bedrock’s knowledge base feature, which allows connection of foundation models to your organisation’s data sources, the same data protections apply – your data is not used for model training unless you explicitly opt in. Knowledge bases also offer features to exclude certain data by pattern matching or document filtering, providing control over what information the models can access.
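For context, querying a knowledge base from code looks like the sketch below. The knowledge base ID and query text are hypothetical, and the call assumes a knowledge base has already been created in the eu-west-2 region.

```python
import boto3

# Runtime client for knowledge base queries.
agent_runtime = boto3.client("bedrock-agent-runtime", region_name="eu-west-2")

# Retrieve passages from a knowledge base; the ID is a placeholder.
response = agent_runtime.retrieve(
    knowledgeBaseId="KB12345678",
    retrievalQuery={"text": "What are our data retention policies?"},
)
for result in response["retrievalResults"]:
    print(result["content"]["text"])
```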
For UK government users, these privacy controls align well with obligations for handling citizen data and internal government information, particularly when combined with appropriate data classification practices and the UK region deployment option.
Terms of Use and Privacy Policy
Policy Document | Key Provisions | Last Updated |
---|---|---|
AWS Service Terms | Governs Bedrock usage as part of overall AWS services | Regularly updated |
Model-specific EULAs | Additional terms specific to each foundation model provider | Varies by provider |
AWS Privacy Notice | Details how AWS processes customer data across services | November 2023 |
AWS AI Services Data Privacy Addendum | Specific terms for AI services data handling | Available on request |
The use of Amazon Bedrock is governed by several key documents that outline terms of use and privacy implications. The primary agreement is the AWS Service Terms, which includes specific sections for AI services including Bedrock. These terms establish that content processed through Bedrock (inputs and outputs) is owned by the customer, not AWS, and clarify AWS’s limited rights to use this content to provide and maintain the service.
Given that Bedrock provides access to models from multiple providers (Anthropic, AI21 Labs, Cohere, Meta, etc.), each foundation model may have its own End User License Agreement (EULA) with additional terms. Customers are required to abide by these model-specific terms, which typically include restrictions on generating harmful content or using the models for high-risk applications without proper safeguards. These EULAs are accessible through the AWS console and should be reviewed before using specific models, as they may contain important variations in permitted use cases.
The AWS Privacy Notice explains how AWS handles customer data across all services, including Bedrock. This document was last updated in November 2023 and addresses key points like data collection practices, international transfers, and customer controls over data. For enterprises, the AWS Business Agreement typically includes data processing terms that satisfy GDPR and UK data protection requirements. An AI Services Data Privacy Addendum may provide additional specificity around how generative AI content is handled.
For UK government use, these terms have important implications. The standard AWS terms establish AWS as a processor (not controller) of government data, which supports compliance with data protection regulations. The terms also confirm that intellectual property created through Bedrock remains the property of the customer – important for government-developed applications or content generated through the service.
The terms explicitly prohibit use of the service to generate illegal or harmful content, consistent with public sector ethical AI use requirements. AWS enforces usage limits on specific operations for quality of service reasons, but these limits can typically be increased upon request for enterprise users with valid use cases.
For audit and compliance needs, the terms clarify that AWS maintains logs of Bedrock requests for security and service improvement, but the content of requests (your data) is protected according to standard AWS data handling practices and any specific controls implemented (like private endpoints or encryption).
AWS’s comprehensive data protection certifications (including ISO 27001, ISO 27017, ISO 27018, and UK Cyber Essentials Plus) provide a strong foundation for secure use of Bedrock in government environments, provided the appropriate AWS compliance programs cover the specific regions where Bedrock is being used.
Links:
- AWS Service Terms
- AWS Privacy Notice
- AWS AI Services Data Privacy Addendum
- Model-specific Terms (accessible in AWS Console)
Data Management
Server Location and Data Residency
Region | Availability Status | Model Support | Residency Implications |
---|---|---|---|
Europe (London) | Available | Major models supported | Meets UK data residency requirements |
Europe (Ireland) | Available | Comprehensive model support | EU data residency |
US Regions | Available | Most comprehensive | US jurisdiction |
Other Global Regions | Partial availability | Varies by region | Check AWS documentation |
Amazon Bedrock’s server locations align with AWS’s global regional infrastructure. As of February 2024, Bedrock is available in multiple AWS regions worldwide, including Europe (London) which is particularly relevant for UK government users concerned with data residency. When you use Bedrock, your data is processed in the AWS region you select for deployment - this includes prompts, custom model data, and model outputs.
This regional deployment model is key for data sovereignty considerations. By choosing the London region, UK government services can ensure that data remains within the UK’s borders during processing. AWS maintains clear boundaries between regions, and data is not automatically transferred between regions without explicit customer action. This makes it possible to implement strong data residency controls through proper configuration.
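In practice, keeping data in the UK is largely a matter of pinning every Bedrock client to the London region, since the SDK never routes requests to another region on its own. A minimal sketch:

```python
import boto3

# Prompts, responses, and custom model artefacts are processed in
# the region the client is pinned to - here, Europe (London).
bedrock_runtime = boto3.client("bedrock-runtime", region_name="eu-west-2")
```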
The regional deployment model extends to model fine-tuning and custom model storage as well. When you fine-tune a model in Bedrock, the resulting custom model is stored in the region where you initiated the process. This comprehensive regional isolation helps satisfy requirements from frameworks like the UK Government Security Classification Policy for handling OFFICIAL and OFFICIAL-SENSITIVE information.
From a legal jurisdiction perspective, data processed in the London region is subject to UK law, though AWS as a US-based company is also subject to US legal requirements. AWS addresses this potential conflict through contractual provisions, transparency reports, and its established processes for handling government requests for data. The AWS UK Government Digital Marketplace offerings include specific terms designed to address UK public sector requirements.
For cross-border data transfers (which may occur if you choose to deploy in a region outside the UK), AWS provides mechanisms compliant with UK GDPR requirements, including Standard Contractual Clauses. These provisions help maintain legal compliance even in multi-region deployments. However, for maximum control, keeping deployments within the London region is recommended for UK public sector workloads.
AWS’s approach to regional isolation provides technical and legal separation that supports data residency requirements, making Bedrock suitable for many UK government use cases when properly configured.
Data in Transit
Protection Mechanism | Description | Default Status | Compliance Alignment |
---|---|---|---|
TLS Encryption | All API calls and data transfers use TLS 1.2+ | Always enabled | Meets UK NCSC guidance |
VPC Endpoints | Private connectivity without internet exposure | Optional configuration | Enhanced security for sensitive workloads |
API Request Signing | SigV4 authentication on all requests | Required | Prevents request tampering |
Mutual TLS | Optional additional encryption layer | Available configuration | Higher assurance option |
Amazon Bedrock protects data in transit through comprehensive encryption and secure communication channels. All communication with Bedrock APIs is encrypted using Transport Layer Security (TLS) - specifically TLS 1.2 or higher, aligning with NCSC’s guidance for protecting data in transit. This encryption applies to all aspects of the service, including model inference requests, fine-tuning operations, and management API calls.
For enhanced security, Bedrock can be accessed through AWS PrivateLink using VPC endpoints. This configuration ensures that all traffic between your applications and Bedrock stays within the AWS network and never traverses the public internet, providing an additional layer of protection. VPC endpoints can be particularly valuable for government workloads processing sensitive data, as they reduce the attack surface and remove exposure to eavesdropping on the public internet.
Bedrock also employs AWS’s standard API request signing using Signature Version 4 (SigV4), which authenticates every request and prevents request tampering or replay attacks. All requests must be properly signed with valid AWS credentials, ensuring that only authorised systems or users can interact with the service.
When using AWS SDKs to interact with Bedrock, these security features are automatically implemented, making secure integration straightforward. For custom implementations, AWS provides detailed documentation on implementing secure API calls.
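For example, a model invocation through boto3 is signed with SigV4 and sent over TLS without any extra security code. The sketch below assumes access to the Anthropic Claude v2 model has been enabled in your account; the prompt format shown is specific to that model family.

```python
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="eu-west-2")

# The SDK signs this request with SigV4 and sends it over TLS
# automatically. The model ID is illustrative; use one enabled
# in your own account.
response = bedrock_runtime.invoke_model(
    modelId="anthropic.claude-v2",
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "prompt": "\n\nHuman: Summarise the benefits of TLS 1.2.\n\nAssistant:",
        "max_tokens_to_sample": 300,
    }),
)
print(json.loads(response["body"].read())["completion"])
```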
For extremely sensitive government workloads, AWS supports additional transit security controls such as mutual TLS (mTLS) for specific configurations, providing even stronger assurance of endpoint identity and communication security.
The data in transit protections extend to all components of the Bedrock service. When using Bedrock knowledge bases, for example, documents being indexed and queries against those knowledge bases are similarly protected by TLS encryption and can leverage private VPC connectivity.
For UK government users, these transit controls satisfy requirements for protecting OFFICIAL information in transit when properly implemented, especially when combined with appropriate network security controls and monitoring.
Data at Rest
Storage Context | Encryption Mechanism | Key Management Options | Duration |
---|---|---|---|
Model Inference | Not stored by default | N/A - transient processing | Temporary processing only |
Fine-tuning Datasets | AWS-managed encryption (AES-256) | AWS KMS with customer keys option | Until explicitly deleted |
Custom Models | AWS-managed encryption (AES-256) | AWS KMS with customer keys option | Until explicitly deleted |
Knowledge Bases | AWS-managed encryption (AES-256) | AWS KMS with customer keys option | Until explicitly deleted |
Request Logs | AWS-managed encryption | AWS system managed | Based on log retention policy |
Amazon Bedrock’s approach to data-at-rest security focuses on minimising persistent storage and applying strong encryption to any data that is stored. For standard model inference (sending prompts and receiving responses), Bedrock does not persistently store the content of your requests or the model’s responses - data is processed in memory and then discarded unless you explicitly configure logging.
For features that do require storage, such as model fine-tuning, custom models, or knowledge bases, AWS applies encryption at rest using AES-256 as the default. This encryption is implemented through AWS’s standard storage layer protections, ensuring that stored data cannot be accessed without proper authorisation.
Customers have additional control through integration with AWS Key Management Service (KMS). You can specify customer-managed KMS keys for encrypting fine-tuning datasets, custom models, and knowledge bases. Using customer-managed keys provides you with direct control over the encryption keys, including the ability to rotate keys, establish key usage policies, or revoke access by disabling the key if needed.
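A sketch of specifying a customer-managed key when starting a fine-tuning job follows. The role ARN, KMS key ARN, S3 URIs, and hyperparameter values are all hypothetical placeholders, and the base model ID is illustrative.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="eu-west-2")

# Start a fine-tuning job whose resulting custom model is encrypted
# with a customer-managed KMS key. All ARNs, bucket names, and
# hyperparameter values below are placeholders.
bedrock.create_model_customization_job(
    jobName="defra-summariser-ft-001",
    customModelName="defra-summariser",
    roleArn="arn:aws:iam::111122223333:role/BedrockFineTuneRole",
    baseModelIdentifier="amazon.titan-text-express-v1",
    customModelKmsKeyId="arn:aws:kms:eu-west-2:111122223333:key/EXAMPLE-KEY-ID",
    trainingDataConfig={"s3Uri": "s3://example-bucket/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://example-bucket/output/"},
    hyperParameters={"epochCount": "1", "batchSize": "1", "learningRate": "0.00001"},
)
```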
AWS’s standard key hierarchy and envelope encryption techniques are used to implement this protection, with the actual data encrypted using data keys that are themselves encrypted with your KMS master keys. This approach aligns with industry best practices for encryption at rest.
When data deletion is required, AWS provides mechanisms to delete custom models, fine-tuning datasets, and knowledge bases. According to AWS’s standard data deletion practices, once deleted, the data becomes inaccessible and is securely wiped during normal storage media lifecycle processes.
For service logs that might contain metadata about your Bedrock usage (but not the actual content of requests unless configured for logging), AWS applies its standard log handling practices, including encryption at rest and defined retention periods.
For UK government users, these data-at-rest controls align with requirements for protecting OFFICIAL information when properly implemented with customer-managed keys. The option to use customer-managed KMS keys provides the control necessary for more sensitive workloads, as it allows government departments to maintain sovereignty over their encryption keys.
Audit Logging
Logging Capability | Information Captured | Retention Period | Access Control |
---|---|---|---|
CloudTrail | API calls, management events | User-configurable (default 90 days) | IAM permissions-based |
CloudWatch Logs | Model invocations, performance metrics | User-configurable | IAM permissions-based |
Model Invocation Logs | Optional request/response content logging | User-configurable | IAM permissions-based |
Amazon Bedrock Guardrails | Content filtering, policy violations | Integrates with CloudWatch | IAM permissions-based |
Evaluation Results | Model performance metrics | Stored until deleted | IAM permissions-based |
Amazon Bedrock integrates with AWS’s comprehensive logging and monitoring ecosystem to provide audit trails and operational visibility. The primary audit logging mechanism is AWS CloudTrail, which automatically records all API calls made to Bedrock. These logs capture details such as the identity of the API caller, the time of the call, the source IP address, the request parameters, and the response elements. CloudTrail logs support integrity validation and can be stored in Amazon S3 with encryption and access controls, making them suitable for compliance and security auditing.
For more detailed operational monitoring, Bedrock integrates with Amazon CloudWatch to provide metrics and logs of model inference operations. These logs can include performance metrics (latency, token usage, etc.) and, if configured, the content of requests and responses. Enabling content logging is optional and controlled by the customer - this allows you to balance audit needs against data minimisation principles.
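Enabling invocation content logging is an explicit, account-level configuration. The sketch below sends text inputs and outputs to a CloudWatch Logs group; the log group name and role ARN are hypothetical, and an S3 destination could be configured instead or in addition.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="eu-west-2")

# Opt in to model invocation logging for this account and region.
# Names and the role ARN are placeholders.
bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "cloudWatchConfig": {
            "logGroupName": "/bedrock/invocation-logs",
            "roleArn": "arn:aws:iam::111122223333:role/BedrockLoggingRole",
        },
        "textDataDeliveryEnabled": True,
        "imageDataDeliveryEnabled": False,
        "embeddingDataDeliveryEnabled": False,
    }
)
```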
When using Bedrock Guardrails to implement content filtering and safety controls, audit logs of policy violations or blocked content are generated. These logs help track attempted misuse or policy breaches and can be essential for security governance. Similarly, when using Bedrock Agents, the agent trace feature provides detailed logs of the agent’s reasoning process and API calls, valuable for both debugging and audit purposes.
For model evaluation and testing, Bedrock stores evaluation results that can serve as audit records of model performance and behaviour over time. These logs help track model drift and compliance with performance requirements.
Log access is controlled through AWS Identity and Access Management (IAM), allowing you to define precise permissions for who can view or manage logs. For example, you can grant security teams read-only access to CloudTrail logs while restricting access to operational teams. Logs can be exported to security information and event management (SIEM) systems for broader security monitoring and correlation.
The retention period for logs is configurable, allowing organisations to implement retention policies that align with their compliance requirements. Logs can be archived in Amazon S3 Glacier for long-term, cost-effective retention when needed for extended compliance purposes.
For UK government users, these logging capabilities support requirements for audit trails of system usage, especially important when AI systems are used for decisions affecting citizens or policy. The granular access controls and tamper-evident nature of logs like CloudTrail provide the necessary assurance for demonstrating compliance with usage policies and security requirements.
Access Controls
Control Type | Mechanism | Granularity | Integration |
---|---|---|---|
IAM Policies | Role-based access control | Action and resource-level | AWS IAM |
Resource Policies | Attached directly to Bedrock resources | Resource-specific permission | Per model/custom model |
Service Control Policies | Organisation-wide permission boundaries | Account-level restrictions | AWS Organizations |
Condition Keys | Fine-grained context-based access | Request attribute conditions | IAM policies |
Network-based Controls | VPC endpoints with policies | Network-level restrictions | AWS VPC |
Amazon Bedrock implements a comprehensive, layered approach to access control based on AWS’s established Identity and Access Management (IAM) framework. This approach allows organisations to implement the principle of least privilege and segregation of duties for all Bedrock operations.
The primary access control mechanism is through IAM policies attached to users, groups, or roles. These policies define which Bedrock actions (APIs) principals can perform and on which resources. For example, you can create roles that allow data scientists to invoke models but restrict model customization to specific authorized users. IAM policies support fine-grained permissions, such as limiting access to specific models or allowing only certain types of inference operations.
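As a sketch of this least-privilege pattern, the following creates a customer-managed IAM policy that permits invoking a single approved model in the London region and nothing else. The policy name and model choice are illustrative.

```python
import json
import boto3

iam = boto3.client("iam")

# Least-privilege policy: allow invoking one approved foundation
# model in eu-west-2 only; all other Bedrock actions are implicitly
# denied. The policy name and model are illustrative.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "Resource": "arn:aws:bedrock:eu-west-2::foundation-model/anthropic.claude-v2",
        }
    ],
}
iam.create_policy(
    PolicyName="BedrockInvokeClaudeOnly",
    PolicyDocument=json.dumps(policy),
)
```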
For more specific control at the resource level, Bedrock supports resource-based policies for custom models and model evaluation jobs. These policies are attached directly to the resource and specify who can access it regardless of the user’s IAM permissions. This approach is valuable when you need to share specific resources across teams or accounts without granting broad permissions.
At an organisational level, AWS Organizations Service Control Policies (SCPs) can establish permission guardrails for Bedrock usage across multiple AWS accounts. SCPs can enforce security standards such as requiring encryption, restricting region usage to ensure data residency, or preventing the use of specific models that haven’t been approved for organisational use.
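A hypothetical SCP enforcing UK data residency might deny all Bedrock actions outside eu-west-2, as sketched below; the policy would then be attached to the relevant organisational units.

```python
import json
import boto3

# Must be run from the organisation's management account.
org = boto3.client("organizations")

# Deny Bedrock API calls made outside the London region, enforcing
# data residency across every account the SCP is attached to.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "bedrock:*",
            "Resource": "*",
            "Condition": {"StringNotEquals": {"aws:RequestedRegion": "eu-west-2"}},
        }
    ],
}
org.create_policy(
    Name="BedrockUKResidencyOnly",
    Description="Deny Bedrock usage outside eu-west-2",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
```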
For context-aware access control, IAM policies support condition keys that can restrict access based on tags, IP address ranges, time of day, or whether the request is using TLS. This allows for highly specialized access rules, such as permitting certain sensitive model operations only from specific secure networks.
Network-based access control adds another layer of protection through VPC endpoint policies, which can restrict which principals or source VPCs can access Bedrock through private endpoints. This network-level control is particularly valuable for government environments with strict network segregation requirements.
When using Bedrock knowledge bases, access controls extend to the data sources and the knowledge bases themselves. This ensures that models only access authorised information sources and that query access is appropriately restricted.
For authentication and identity management, Bedrock integrates with AWS IAM Identity Center (formerly AWS Single Sign-On) and supports federation with external identity providers through SAML 2.0. This allows government departments to integrate Bedrock access control with existing identity management systems and enforce requirements like multi-factor authentication.
For UK government users, these comprehensive access controls satisfy requirements for restricting system access based on role and need-to-know principles when properly configured. The ability to enforce network boundaries and integrate with existing identity providers supports alignment with government security frameworks.
Compliance and Regulatory Requirements
Compliance Area | Relevant Certifications/Programs | Scope | Verification |
---|---|---|---|
General Security | ISO 27001, SOC 2, CSA STAR | AWS Global Infrastructure | Third-party audited |
UK Government | Cyber Essentials Plus, NCSC Cloud Security Principles | UK operations | Independently assessed |
Data Protection | ISO 27018, UK GDPR validation | Personal data handling | Third-party validated |
Industry-specific | HIPAA eligibility, HITRUST | Healthcare data (where applicable) | Assessment available |
Model Safety | Red teaming, safety benchmarks | Foundation model behaviour | Ongoing evaluation |
Amazon Bedrock benefits from AWS’s extensive compliance programs and certifications, which provide assurance about the security and compliance posture of the underlying infrastructure. AWS maintains a broad set of certifications relevant to UK government use, including ISO 27001, ISO 27017 (cloud security), ISO 27018 (personal data), SOC 1/2/3, and Cyber Essentials Plus. These certifications cover the infrastructure on which Bedrock runs, providing a strong foundation for secure deployment.
For UK government specifically, AWS has documented alignment with the NCSC Cloud Security Principles, which helps government entities assess Bedrock against required security standards. AWS is also listed on the UK Government’s G-Cloud Digital Marketplace, indicating that its services have been vetted for government procurement.
Regarding data protection regulations, AWS provides detailed documentation on UK GDPR compliance and offers Data Processing Addendums that address UK-specific legal requirements. AWS’s approach to data sovereignty, with the London region availability, supports compliance with data residency requirements that may apply to certain government data.
For specific industry regulations, AWS offers a compliance program for HIPAA (health data protection), which includes Bedrock for certain models. This may be relevant for government healthcare agencies. AWS also provides detailed guidance on using their services in regulated environments through their Compliance Center.
Beyond infrastructure compliance, Bedrock addresses AI-specific governance concerns through several mechanisms. The foundation models available through Bedrock undergo safety evaluations and red team testing to identify and mitigate potential risks. AWS provides transparency on model limitations and behaviours through model cards and documentation, helping customers assess suitability for specific use cases.
Bedrock Guardrails provides a customizable content filtering system that helps enforce usage policies and prevent misuse, supporting compliance with ethical AI principles and departmental usage policies. These guardrails can be configured to align with specific government requirements for responsible AI use.
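A minimal sketch of creating such a guardrail with boto3 follows; the name, blocked-content messages, and filter strengths are illustrative and would be tuned to departmental policy.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="eu-west-2")

# A baseline guardrail with strong content filters. Name, messages,
# and filter strengths are illustrative placeholders.
bedrock.create_guardrail(
    name="defra-baseline-guardrail",
    blockedInputMessaging="This request is not permitted under departmental policy.",
    blockedOutputsMessaging="The response was blocked by departmental policy.",
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
)
```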
Model evaluation capabilities within Bedrock allow government teams to conduct ongoing assessment of models against relevant criteria, including fairness, accuracy, and alignment with department-specific requirements. This supports documentation of compliance with government AI principles and risk management frameworks.
For accountability and governance, AWS provides shared responsibility guidance for Bedrock that clarifies which compliance aspects are managed by AWS and which require customer configuration. This helps government teams implement appropriate controls and document their compliance approach.
For UK government users considering Bedrock, reviewing the AWS UK Public Sector Addendum and working with AWS’s UK public sector team can provide additional assurances and guidance specific to government requirements. The combination of AWS’s established compliance programs and Bedrock’s AI governance features provides a strong foundation for compliant deployment when properly configured according to departmental requirements.