Technological developments in recent years have thrust artificial intelligence (AI) and its subfields — generative AI, deep learning, and machine learning (ML) — to the forefront of innovation across industries. These technologies have transformed how we understand and interact with computers, enabling systems to learn from data and perform tasks that once required human judgment.

AI = Artificial Intelligence

Creating intelligent machines capable of performing tasks that typically require human intelligence.

ML = Machine Learning

Branch of AI that teaches computers to learn from data and improve over time without being explicitly programmed.

DL = Deep Learning

Subset of ML that uses complex algorithms and neural networks to handle complicated problems.

Generative AI = Generative + AI

Generative = Generates new content, such as text, images, code, video, and audio.


Traditional ML models vs. foundation models


What does Amazon Bedrock do?

Amazon Bedrock is a fully managed service that offers leading foundation models (FMs) and a suite of tools for rapidly developing and deploying generative AI applications. The service also helps you maintain security and privacy.

You can choose from a variety of FMs to find the model that best fits your use case. The serverless Amazon Bedrock experience lets you get started and experiment with FMs quickly. You can also privately customize FMs with your own data, then use Amazon Web Services (AWS) tools and capabilities to integrate and deploy them into your applications.

What are the three key benefits of Amazon Bedrock?

Efficiently build with FMs

You can choose FMs from Amazon, AI21 Labs, Anthropic, Cohere, Meta, and Stability AI to find the right FM for your use case. This includes Amazon Titan, Jurassic-2, Claude, Command, Llama 2, and Stable Diffusion families of FMs that support different modalities, including text, embeddings, and multimodal.

You can use a single API to securely access customized FMs and those provided by Amazon and other AI companies. Using the same API, you can privately and more efficiently pass prompts and responses between the user and the FM.

With the Amazon Bedrock serverless experience, you don't need to manage the infrastructure. You can fine-tune and deploy FMs without creating instances, implementing pipelines, or setting up storage.
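As a sketch of what model selection might look like through the SDK: the `list_foundation_models` call below queries Bedrock's control plane, and the filter helper narrows the results by modality and provider. The sample model summaries are illustrative only; the models visible to you vary by Region and account access.

```python
def pick_text_models(model_summaries, provider=None):
    """Filter Bedrock model summaries down to text-output models,
    optionally restricted to one provider (pure helper, no AWS call)."""
    picked = []
    for m in model_summaries:
        if "TEXT" not in m.get("outputModalities", []):
            continue
        if provider and m.get("providerName") != provider:
            continue
        picked.append(m["modelId"])
    return picked


def list_available_models(region="us-east-1"):
    """Query Bedrock for the FMs your account can see.
    boto3 is imported lazily; calling this requires AWS credentials."""
    import boto3
    bedrock = boto3.client("bedrock", region_name=region)
    return bedrock.list_foundation_models()["modelSummaries"]


# Canned summaries for illustration (not a live listing):
sample = [
    {"modelId": "anthropic.claude-v2", "providerName": "Anthropic",
     "outputModalities": ["TEXT"]},
    {"modelId": "stability.stable-diffusion-xl-v1", "providerName": "Stability AI",
     "outputModalities": ["IMAGE"]},
]
```

In practice you would call `list_available_models()` once and feed its result to `pick_text_models` to shortlist candidates for your use case.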

Securely build generative AI applications

With Amazon Bedrock, your data — including prompts, information used to supplement a prompt, FM responses, and customized FMs — remains in the Region where the API call is processed. Your data is encrypted in transit with TLS 1.2 and at rest with service-managed AWS Key Management Service (AWS KMS) keys. You can use AWS PrivateLink with Amazon Bedrock to establish private connectivity between your FMs and on-premises networks, without exposing your traffic to the internet. In addition, you can customize FMs privately, retaining control over how your data is used and encrypted. Amazon Bedrock makes a separate copy of the base FM and trains this private copy of the model.

To secure your custom FMs, you can use AWS security services as part of a defense-in-depth security strategy. Your customized FMs are encrypted at rest with AWS KMS keys. With AWS Identity and Access Management (IAM), you can allow or deny access to specific FMs, control which services can receive inferences, and control who can log in to the Amazon Bedrock console.
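To make the IAM point concrete, here is a sketch of an identity-based policy that allows invoking only one specific foundation model. It is shown as a Python dict for readability; the model ARN is a placeholder, and you would adapt the Region and model ID to your setup.

```python
# Hypothetical identity-based IAM policy: the attached principal may invoke
# one Claude model and nothing else. The ARN below is illustrative.
allow_one_model_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowClaudeOnly",
            "Effect": "Allow",
            "Action": ["bedrock:InvokeModel"],
            "Resource": [
                "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2"
            ],
        }
    ],
}
```

Because IAM denies by default, omitting other model ARNs from the `Resource` list is enough to block them for this principal.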

Amazon Bedrock offers comprehensive monitoring and logging capabilities, including tools that can be used to help address your governance and audit requirements. You can use Amazon CloudWatch to track usage metrics and build customized dashboards with metrics that are required for your audit purposes. You can also use AWS CloudTrail to monitor API activity and troubleshoot issues as you integrate other systems into your generative AI applications.
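As an example of the monitoring side, the sketch below builds the parameters for a CloudWatch `GetMetricStatistics` call that totals input-token usage for one model. The `AWS/Bedrock` namespace and `InputTokenCount` metric name are assumptions worth verifying against the metrics your account actually emits.

```python
import datetime


def build_token_usage_query(model_id, hours=24):
    """Build parameters for a CloudWatch GetMetricStatistics call that
    sums input tokens for one model over the last `hours` hours.
    Namespace and metric name are assumed -- check your own metrics."""
    now = datetime.datetime.now(datetime.timezone.utc)
    return {
        "Namespace": "AWS/Bedrock",
        "MetricName": "InputTokenCount",
        "Dimensions": [{"Name": "ModelId", "Value": model_id}],
        "StartTime": now - datetime.timedelta(hours=hours),
        "EndTime": now,
        "Period": 3600,          # one datapoint per hour
        "Statistics": ["Sum"],
    }


def fetch_token_usage(model_id):
    """Run the query (requires AWS credentials; boto3 imported lazily)."""
    import boto3
    cw = boto3.client("cloudwatch")
    return cw.get_metric_statistics(**build_token_usage_query(model_id))
```

The same per-model token metrics are a natural input for the custom audit dashboards mentioned above.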

Deliver customized experiences using your organization's data

Agents for Amazon Bedrock makes it easy to create and deploy fully managed agents that perform complex business tasks by dynamically invoking APIs. Agents use automatic prompt creation to generate prompts from developer instructions, API schemas, and company knowledge bases. An agent breaks down user requests into subtasks and determines the optimal sequence to complete them. Agents can connect to company data sources, convert data to numeric representations, and augment user requests with relevant information. This allows the agent to generate more accurate and relevant responses by looking up details from knowledge bases.

You can fine-tune a foundation model using Amazon Bedrock by providing your own labeled training dataset in order to improve the model's performance on specific tasks. Through this process, you create a new model that improves upon the performance and efficiency of the original model for a given task. To fine-tune a model, you upload a training and a validation dataset to Amazon S3, and provide the S3 bucket path to the Amazon Bedrock fine-tuning job. The fine-tuning can be done using the Amazon Bedrock console or API.
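The fine-tuning flow described above can be sketched against the `CreateModelCustomizationJob` API. Every name, ARN, S3 URI, and hyperparameter value below is a placeholder; the helper only assembles the request, and the actual submission (which needs AWS credentials) is kept in a separate function.

```python
def build_finetune_job(job_name, base_model_id, role_arn, train_s3, out_s3):
    """Assemble the arguments for Bedrock's CreateModelCustomizationJob API.
    All names, ARNs, and hyperparameter values are placeholders."""
    return {
        "jobName": job_name,
        "customModelName": f"{job_name}-model",
        "roleArn": role_arn,                      # IAM role Bedrock assumes to read S3
        "baseModelIdentifier": base_model_id,
        "trainingDataConfig": {"s3Uri": train_s3},
        "outputDataConfig": {"s3Uri": out_s3},
        "hyperParameters": {                      # illustrative values only
            "epochCount": "2",
            "learningRate": "0.00001",
        },
    }


def start_finetune(**kwargs):
    """Submit the job (requires AWS credentials; boto3 imported lazily)."""
    import boto3
    return boto3.client("bedrock").create_model_customization_job(
        **build_finetune_job(**kwargs)
    )
```

Once the job finishes, the resulting custom model is what you would later serve through Provisioned Throughput.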

What are typical use cases for Amazon Bedrock?

Text generation

Create new pieces of original content, such as short stories, essays, social media posts, and web page copy.

Chatbots

Build conversational interfaces, such as chatbots and virtual assistants, to enhance the user experience for your customers.

Search

Search, find, and synthesize information to answer questions from a large corpus of data.

Text summarization

Get a summary of textual content, such as articles, blog posts, books, and documents, to gain understanding without having to read the full content.

Image generation

Create realistic and artistic images of various subjects, environments, and scenes from language prompts.

Personalization

Help customers find what they're looking for with more relevant and contextual product recommendations than word matching.

What else should you keep in mind about Amazon Bedrock?

Model Access

Shared responsibility model

Quotas

Integrating AWS and corporate networks

Region availability

Use of customer data — handling PII data

Use of customer data for service enhancement

Use of customer data for model customization

How much does Amazon Bedrock cost?

On-Demand: With the On-Demand mode, you only pay for what you use, with no time-based term commitments. For text generation models, you are charged for every input token processed and every output token generated. For embedding models, you are charged for every input token processed. A token comprises a few characters and refers to the basic unit that a model uses to understand user input and prompts to generate results. For image generation models, you are charged for every image generated.
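A small worked example of the On-Demand billing arithmetic for a text model, using hypothetical per-1,000-token prices (real per-model rates are on the Bedrock pricing page):

```python
def on_demand_text_cost(input_tokens, output_tokens,
                        price_in_per_1k, price_out_per_1k):
    """On-Demand text-generation cost: input and output tokens are each
    billed per 1,000 tokens. Prices passed in are hypothetical -- check
    the Bedrock pricing page for real per-model rates."""
    return (input_tokens / 1000) * price_in_per_1k \
         + (output_tokens / 1000) * price_out_per_1k


# Hypothetical example: 50,000 input tokens at $0.003/1K and
# 20,000 output tokens at $0.004/1K.
cost = on_demand_text_cost(50_000, 20_000, 0.003, 0.004)
```

With those illustrative rates, the workload would cost $0.15 for input plus $0.08 for output, about $0.23 total.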

Provisioned Throughput: In this mode, you can purchase model units for a specific base or custom model. It is primarily designed for large, consistent inference workloads that need guaranteed throughput. Custom models can only be accessed using Provisioned Throughput. A model unit provides a certain throughput, measured by the maximum number of input or output tokens processed per minute. With this Provisioned Throughput pricing, charged by the hour, you can choose between 1-month or 6-month commitment terms.

Amazon Bedrock pricing details

https://aws.amazon.com/bedrock/pricing/

Amazon Bedrock details

How do you use Amazon Bedrock?

AWS Management Console:

You can use the Amazon Bedrock playgrounds to interact with FMs to generate text or an image or to have a chat conversation. Amazon Bedrock supports the selection of an FM from a set of model providers.

Using the playgrounds in Amazon Bedrock, you can submit a natural language command (a prompt) to the FM and get a response or an answer. You can influence the response by adjusting model parameters, such as the temperature, so that the answer varies from more factual to more creative. You can provide prompts to generate text or images, summarize text, receive answers to questions, or have a chat conversation.

Amazon Bedrock API:

You can use a single Amazon Bedrock API to access FMs securely. Using the same API, you can privately and more easily pass prompts and responses between the user and the FM. The Amazon Bedrock API can be used through the AWS SDK to build a generative AI application and integrate it with other AWS services.
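A minimal sketch of that single-API pattern using the `InvokeModel` operation on the `bedrock-runtime` client. The request-body fields follow Anthropic Claude's text-completion format; other model families expect different body shapes, and the model ID shown is illustrative.

```python
import json


def build_claude_body(prompt, temperature=0.5, max_tokens=300):
    """Build the JSON request body for a Claude text completion on Bedrock.
    Field names follow Claude's text-completion format; other model
    families use different body shapes."""
    return json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "temperature": temperature,
        "max_tokens_to_sample": max_tokens,
    })


def invoke(prompt, model_id="anthropic.claude-v2"):
    """Send the prompt through the single InvokeModel API
    (requires AWS credentials; boto3 imported lazily)."""
    import boto3
    runtime = boto3.client("bedrock-runtime")
    resp = runtime.invoke_model(
        modelId=model_id,
        contentType="application/json",
        accept="application/json",
        body=build_claude_body(prompt),
    )
    return json.loads(resp["body"].read())["completion"]
```

Swapping models is then mostly a matter of changing `model_id` and the body-building helper, which is the "little to no code modifications" point the single API is meant to deliver.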

How do you interact with the Amazon Bedrock playground?

You can access Amazon Bedrock with the AWS Management Console to use the text playground, chat, or image playground. In the text or image playground, you can choose an FM, enter a prompt in the text field, and choose Run to generate a response.

In the chat playground, you can interact with the FM of your choice through a conversational interface. In the image playground, you can use the Stable Diffusion FM for text-to-image prompts and responses. The following architecture diagram is used for the demonstration in this training.


You can optionally set inference parameters to influence the response generated by the model. FMs support the following types of inference parameters.

Randomness and diversity

Temperature: Large language models (LLMs) use probability to construct the words in a sequence. For any given sequence, there is a probability distribution of options for the next word in the sequence. When you set the temperature closer to zero, the model tends to select the higher-probability words. When you set the temperature farther away from zero, the model might select a lower-probability word.

Top P: Top P defines a cutoff based on the sum of probabilities of the potential choices. If you set Top P below 1.0, the model considers the most probable options and ignores the less probable ones.

Length

Response length: The response length configures the maximum number of tokens to use in the generated response.

Stop sequences: A stop sequence is a sequence of characters. If the model encounters a stop sequence, it stops generating further tokens. Different models support different types of characters in a stop sequence, different maximum sequence lengths, and may support the definition of multiple stop sequences.
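The inference parameters above can be bundled into a request configuration. The sketch below uses the field names Amazon Titan text models expect (`textGenerationConfig` shape); parameter names and valid ranges differ per model family, so treat these as one model's example.

```python
def titan_text_config(temperature=0.7, top_p=0.9,
                      max_tokens=512, stop=("User:",)):
    """Bundle the inference parameters described above in the shape
    Amazon Titan text models expect. Other model families use
    different parameter names and ranges."""
    assert 0.0 <= temperature <= 1.0, "Titan temperature ranges 0-1"
    return {
        "temperature": temperature,   # randomness: near 0 = more deterministic
        "topP": top_p,                # nucleus-sampling probability cutoff
        "maxTokenCount": max_tokens,  # response length cap
        "stopSequences": list(stop),  # generation halts on these strings
    }
```

For a more factual answer you would pass a low temperature (for example, `titan_text_config(temperature=0.1)`); for a more creative one, a value closer to 1.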


Playground

Chat:


How to build GenAI applications with Amazon Bedrock

Step 1 — Provide knowledge base details

Step 2 — Set up data source

Step 3 — Select the embeddings model and configure the vector store

Step 4 — Review and create
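The four steps above map roughly onto a `CreateKnowledgeBase` call in the Bedrock agent API. The sketch below assembles a vector knowledge base backed by an OpenSearch Serverless index; every name, ARN, and index field below is a placeholder, and the configuration shapes are best-effort and worth verifying against the current API reference.

```python
def build_kb_config(name, role_arn, embedding_model_arn,
                    collection_arn, index_name):
    """Assemble (approximately) the arguments for CreateKnowledgeBase:
    a vector knowledge base on an OpenSearch Serverless index.
    All ARNs and names are placeholders."""
    return {
        "name": name,                                       # step 1: details
        "roleArn": role_arn,
        "knowledgeBaseConfiguration": {
            "type": "VECTOR",
            "vectorKnowledgeBaseConfiguration": {
                "embeddingModelArn": embedding_model_arn,   # step 3: embeddings model
            },
        },
        "storageConfiguration": {                           # step 3: vector store
            "type": "OPENSEARCH_SERVERLESS",
            "opensearchServerlessConfiguration": {
                "collectionArn": collection_arn,
                "vectorIndexName": index_name,
                "fieldMapping": {
                    "vectorField": "embedding",
                    "textField": "text",
                    "metadataField": "metadata",
                },
            },
        },
    }


def create_kb(**kwargs):
    """Submit the request (requires AWS credentials; boto3 imported lazily).
    Data sources (step 2) are attached afterwards with separate calls."""
    import boto3
    return boto3.client("bedrock-agent").create_knowledge_base(
        **build_kb_config(**kwargs)
    )
```

After creation, you would attach an S3 data source and sync it so the service can convert your documents into embeddings for retrieval.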


Summary:

Amazon Bedrock provides easy access to a selection of high-performing FMs (Foundation Models) from top AI firms, including AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon. This enables you to adapt and benefit from the latest generative AI breakthroughs quickly. Regardless of the models you select, Amazon Bedrock's single API access allows you to use various FMs and update to the newest model versions with little to no code modifications.

To explore all AI use cases, visit