Llama 4 family of models from Meta are now available in SageMaker JumpStart

Today, we’re excited to announce the availability of Llama 4 Scout and Maverick models in Amazon SageMaker JumpStart and coming soon in Amazon Bedrock. Llama 4 represents Meta’s most advanced multimodal models to date, featuring a mixture of experts (MoE) architecture and context window support up to 10 million tokens. With native multimodality and early fusion technology, Meta states that these new models demonstrate unprecedented performance across text and vision tasks while maintaining efficient compute requirements. With a dramatic increase in supported context length from 128K in Llama 3, Llama 4 is now suitable for multi-document summarization, parsing extensive user activity for personalized tasks, and reasoning over extensive codebases. You can now deploy the Llama-4-Scout-17B-16E-Instruct, Llama-4-Maverick-17B-128E-Instruct, and Llama-4-Maverick-17B-128E-Instruct-FP8 models using SageMaker JumpStart in the US East (N. Virginia) AWS Region.

In this blog post, we walk you through how to deploy and prompt a Llama-4-Scout-17B-16E-Instruct model using SageMaker JumpStart.

Llama 4 overview

Meta announced Llama 4 today, introducing three distinct model variants: Scout, which offers advanced multimodal capabilities and a 10M token context window; Maverick, a cost-effective solution with a 128K context window; and Behemoth, in preview. These models are optimized for multimodal reasoning, multilingual tasks, coding, tool-calling, and powering agentic systems.

Llama 4 Maverick is a powerful general-purpose model with 17 billion active parameters, 128 experts, and 400 billion total parameters, optimized for high-quality general assistant and chat use cases. Additionally, Llama 4 Maverick is available in base and instruct variants, with a quantized (FP8) version of the Instruct model for efficient deployment and a non-quantized (BF16) version for maximum accuracy.

Llama 4 Scout, the smaller and more compact model, has 17 billion active parameters, 16 experts, and 109 billion total parameters, and features an industry-leading 10M token context window. These models are designed for industry-leading performance in image and text understanding with support for 12 languages, enabling the creation of AI applications that bridge language barriers.

See Meta’s community license agreement for usage terms and more details.

SageMaker JumpStart overview

SageMaker JumpStart offers access to a broad selection of publicly available foundation models (FMs). These pre-trained models serve as powerful starting points that can be deeply customized to address specific use cases. You can use state-of-the-art model architectures—such as language models, computer vision models, and more—without having to build them from scratch.

With SageMaker JumpStart, you can deploy models in a secure environment. The models can be provisioned on dedicated SageMaker inference instances and isolated within your virtual private cloud (VPC). After deploying an FM, you can further customize and fine-tune it using the extensive capabilities of Amazon SageMaker AI, including SageMaker inference for deploying models and container logs for improved observability. With SageMaker AI, you can streamline the entire model deployment process.
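For example, the following sketch passes a VPC configuration when creating a JumpStart model so the inference containers run inside your VPC; the subnet and security group IDs are placeholders you would replace with your own:

from sagemaker.jumpstart.model import JumpStartModel

# Placeholder IDs; replace with subnets and security groups from your own VPC
vpc_config = {
    "Subnets": ["subnet-0123456789abcdef0"],
    "SecurityGroupIds": ["sg-0123456789abcdef0"],
}

model = JumpStartModel(
    model_id="meta-vlm-llama-4-scout-17b-16e-instruct",
    vpc_config=vpc_config,
)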

Prerequisites

To try the Llama 4 models in SageMaker JumpStart, you need the following prerequisites:

Discover Llama 4 models in SageMaker JumpStart

SageMaker JumpStart provides FMs through two primary interfaces: SageMaker Studio and the Amazon SageMaker Python SDK. These interfaces give you multiple options to discover and use hundreds of models for your specific use case.

SageMaker Studio is a comprehensive integrated development environment (IDE) that offers a unified, web-based interface for performing all aspects of the AI development lifecycle. From preparing data to building, training, and deploying models, SageMaker Studio provides purpose-built tools to streamline the entire process.

In SageMaker Studio, you can access SageMaker JumpStart to discover and explore the extensive catalog of FMs available for deployment to SageMaker inference endpoints. You can access SageMaker JumpStart by choosing JumpStart in the navigation pane or by choosing JumpStart from the Home page in SageMaker Studio, as shown in the following figure.

Alternatively, you can use the SageMaker Python SDK to programmatically access and use SageMaker JumpStart models. This approach allows for greater flexibility and integration with existing AI and machine learning (AI/ML) workflows and pipelines.
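For example, you can list the JumpStart catalog from the SDK and filter for the Llama 4 model IDs (a minimal sketch; the exact IDs returned depend on your SDK version and Region):

from sagemaker.jumpstart.notebook_utils import list_jumpstart_models

# Retrieve all JumpStart model IDs and keep the Llama 4 entries
llama4_models = [m for m in list_jumpstart_models() if "llama-4" in m]
print(llama4_models)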

By providing multiple access points, SageMaker JumpStart helps you seamlessly incorporate pre-trained models into your AI/ML development efforts, regardless of your preferred interface or workflow.

Deploy Llama 4 models for inference through the SageMaker JumpStart UI

On the SageMaker JumpStart landing page, you can find all the public pre-trained models offered by SageMaker AI. You can then choose the Meta model provider tab to discover all the available Meta models.

If you’re using SageMaker Studio Classic and don’t see the Llama 4 models, update your SageMaker Studio version by shutting down and restarting. For more information about version updates, see Shut down and Update Studio Classic Apps.

  1. Search for Meta to view the Meta model card. Each model card shows key information, including:
    • Model name
    • Provider name
    • Task category (for example, Text Generation)
  2. Select the model card to view the model details page.

The model details page includes the following information:

  • The model name and provider information
  • Deploy button to deploy the model
  • About and Notebooks tabs with detailed information

The About tab includes important details, such as:

  • Model description
  • License information
  • Technical specifications
  • Usage guidelines

Before you deploy the model, we recommend that you review the model details and license terms to confirm compatibility with your use case.

  1. Choose Deploy to proceed with deployment.
  2. For Endpoint name, use the automatically generated name or enter a custom one.
  3. For Instance type, use the default: ml.p5.48xlarge.
  4. For Initial instance count, enter the number of instances (default: 1).
    Selecting appropriate instance types and counts is crucial for cost and performance optimization. Monitor your deployment to adjust these settings as needed.
  5. Under Inference type, Real-time inference is selected by default. This is optimized for sustained traffic and low latency.
  6. Review all configurations for accuracy. For this model, we strongly recommend adhering to SageMaker JumpStart default settings and making sure that network isolation remains in place.
  7. Choose Deploy. The deployment process can take several minutes to complete.

When deployment is complete, your endpoint status will change to InService. At this point, the model is ready to accept inference requests through the endpoint. You can monitor the deployment progress on the SageMaker console Endpoints page, which will display relevant metrics and status information. When the deployment is complete, you can invoke the model using a SageMaker runtime client and integrate it with your applications.
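For example, after the endpoint is InService, you can invoke it with the SageMaker runtime client in boto3; the endpoint name below is a placeholder for the name generated or entered during deployment:

import json
import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {
    "messages": [
        {"role": "user", "content": "Summarize the benefits of mixture of experts models."}
    ],
    "max_tokens": 256
}

response = runtime.invoke_endpoint(
    EndpointName="<your-llama-4-endpoint-name>",  # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(json.loads(response["Body"].read())["choices"][0]["message"]["content"])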

Deploy Llama 4 models for inference using the SageMaker Python SDK

When you choose Deploy and accept the terms, model deployment will start. Alternatively, you can deploy through the example notebook by choosing Open Notebook. The notebook provides end-to-end guidance on how to deploy the model for inference and clean up resources.

To deploy using a notebook, start by selecting an appropriate model, specified by the model_id. You can deploy any of the selected models on SageMaker AI.

You can deploy the Llama 4 Scout model using SageMaker JumpStart with the following SageMaker Python SDK code:

from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="meta-vlm-llama-4-scout-17b-16e-instruct")

# Set accept_eula=True after reviewing Meta's license terms to allow the deployment to proceed
predictor = model.deploy(accept_eula=False)

This deploys the model on SageMaker AI with default configurations, including default instance type and default VPC configurations. You can change these configurations by specifying non-default values in JumpStartModel. To successfully deploy the model, you must manually set accept_eula=True as a deploy method argument. After it’s deployed, you can run inference against the deployed endpoint through the SageMaker predictor:

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "How are you doing today"},
        {"role": "assistant", "content": "Good, what can I help you with today?"},
        {"role": "user", "content": "Give me 5 steps to become better at tennis?"}
    ],
    "temperature": 0.6,
    "top_p": 0.9,
    "max_tokens": 512,
    "logprobs": False
}
response = predictor.predict(payload)
response_message = response['choices'][0]['message']['content']

Recommended instances and benchmark

The following table lists all the Llama 4 models available in SageMaker JumpStart along with the model_id, default instance type, and other supported instance types for each model. For increased context length, you can modify the default instance type in the SageMaker JumpStart UI.

| Model name | Model ID | Default instance type | Supported instance types |
| --- | --- | --- | --- |
| Llama-4-Scout-17B-16E-Instruct | meta-vlm-llama-4-scout-17b-16e-instruct | ml.p5.48xlarge | ml.g6e.48xlarge, ml.p5.48xlarge, ml.p5en.48xlarge |
| Llama-4-Maverick-17B-128E-Instruct | meta-vlm-llama-4-maverick-17b-128e-instruct | ml.p5.48xlarge | ml.p5.48xlarge, ml.p5en.48xlarge |
| Llama-4-Maverick-17B-128E-Instruct-FP8 | meta-vlm-llama-4-maverick-17b-128-instruct-fp8 | ml.p5.48xlarge | ml.p5.48xlarge, ml.p5en.48xlarge |
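For example, to deploy Llama 4 Scout on one of the other supported instance types instead of the default, you can pass instance_type when deploying with the SageMaker Python SDK (a minimal sketch; confirm availability and quotas for the chosen instance type in your Region):

from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="meta-vlm-llama-4-scout-17b-16e-instruct")

# Override the default ml.p5.48xlarge with another supported instance type
predictor = model.deploy(
    accept_eula=True,  # set only after reviewing Meta's license terms
    instance_type="ml.p5en.48xlarge",
)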

Inference and example prompts for Llama 4 Scout 17B 16 Experts model

You can use the Llama 4 Scout model for text and image or vision reasoning use cases. With that model, you can perform a variety of tasks, such as image captioning, image text retrieval, visual question answering and reasoning, document visual question answering, and more.

In the following sections, we show example payloads, invocations, and responses for Llama 4 Scout that you can use against your Llama 4 model deployments using SageMaker JumpStart.

Text-only input

Input:

payload = {
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful AI assistant."
        },
        {
            "role": "user",
            "content": "What are three key benefits of large language models for businesses?"
        }
    ],
    "max_tokens": 2048,
    "temperature": 0.7,
    "top_p": 0.9,
    "stream": False
}
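To generate the response shown next, you can send this payload to the endpoint with the predictor created earlier (a brief sketch; variable names match the deployment example above):

response = predictor.predict(payload)
print(response["choices"][0]["message"]["content"])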

Response:

Large language models (LLMs) offer numerous benefits to businesses, transforming how they operate, 
interact with customers, and generate insights. Here are three key benefits:

1. **Enhanced Customer Experience through Automation and Personalization**:
   - **Automated Customer Support**: LLMs can power chatbots and virtual assistants 
	that provide 24/7 customer support. These models can understand and respond to a wide range of customer queries, help with troubleshooting, and even process transactions, significantly reducing the workload on human customer support agents.
   - **Personalization**: By analyzing customer data, LLMs can help businesses personalize their 
	communications and offerings. For instance, they can generate personalized emails, product recommendations, and content, enhancing customer engagement and satisfaction.

2. **Efficiency and Productivity Gains**:
   - **Content Generation**: LLMs can automate the creation of various types of content, such as 
   blog posts, reports, product descriptions, and social media updates. This not only speeds up content production but also allows human writers to focus on more creative and strategic tasks.
   - **Data Analysis and Summarization**: These models can quickly analyze large volumes of data, extract relevant information, and summarize findings in a readable format. This capability can significantly reduce the time and effort required for market research, competitive analysis, and internal reporting.

3. **Improved Decision Making with Data-Driven Insights**:
   - **Market and Trend Analysis**: LLMs can process and analyze vast amounts of data from various sources, including news articles, social media, and market reports. This helps businesses stay informed about market trends, consumer sentiment, and competitor activity, enabling more informed strategic decisions.
   - **Risk Management and Compliance**: By analyzing regulatory documents and monitoring communications, LLMs can help businesses identify and mitigate compliance risks. They can also assist in the creation of compliance reports and documentation, reducing the risk of human error.

In summary, large language models offer businesses the potential to automate and enhance customer interactions, 
improve operational efficiency, and gain deeper insights from their data. These capabilities can lead to cost savings, 
increased revenue, and a competitive edge in the market.

Single-image input

In this section, let’s test Llama 4’s multimodal capabilities. By merging text and vision tokens into a unified processing backbone, Llama 4 can seamlessly understand and respond to queries about an image. The following is an example of how you can prompt Llama 4 to answer questions about an image such as the one in the example:

Image:

Input:

import requests
import base64
import boto3
import json

s3 = boto3.client("s3")
region = boto3.Session().region_name  # AWS Region hosting the public JumpStart assets bucket

s3_bucket = f"jumpstart-cache-prod-{region}"
key_prefix = "inference-notebook-assets"

def download_from_s3(key_filenames):
    for key_filename in key_filenames:
        s3.download_file(s3_bucket, f"{key_prefix}/{key_filename}", key_filename)

# Define image names
heat_map = "heatmap_semantic_similarity_search.png"

# Download and display the heatmap image
download_from_s3(key_filenames=[heat_map])

def img_to_base64(image_path):
    with open(image_path, "rb") as f:
        img = f.read()
    enc_img = base64.b64encode(img).decode('utf-8')
    return enc_img

b64_img = img_to_base64(heat_map)

data = {
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant",
        },
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What is in this image?",
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": b64_img
                    }
                },
            ],
        },
    ],
    "temperature": 0.6,
    "top_p": 0.9,
    "max_tokens": 512
}

# Invoke the endpoint with the request; the JumpStart predictor serializes the dict to JSON
response = predictor.predict(data)

Response:

The image presents a heatmap illustrating the semantic textual similarity between 
sentences. The heatmap is divided into two axes, with the x-axis listing six sentences 
and the y-axis listing the same six sentences in a different order.\n\n**Sentences:**\n\n* 
Your dog is so cute.\n* How cute your dog is!\n* You have such a cute dog!\n* New York City 
is the place where I work.\n* I work in New York City.\n* What color do you like the most?\n* 
What is your favourite color?\n\n**Heatmap:**\n\nThe heatmap displays the semantic textual 
similarity between each pair of sentences, with darker shades of red indicating higher similarity 
and lighter shades of orange indicating lower similarity. The color bar on the right side of the 
heatmap provides a scale for interpreting the similarity values, ranging from -0.4 to 1.0.\n\n
**Observations:**\n\n* The three sentences related to dogs ("Your dog is so cute.", "How cute your dog is!", 
and "You have such a cute dog!") show high similarity with each other, as indicated by the dark red squares 
in the corresponding section of the heatmap.\n* The two sentences related to New York City ("New York City is 
the place where I work." and "I work in New York City.") also show high similarity with each other.\n* The two 
sentences related to favorite colors ("What color do you like the most?" and "What is your favorite color?") 
show high similarity with each other.\n* The sentences from different categories (dogs, New York City, and 
favorite colors) show lower similarity with each other, as indicated by the lighter shades of orange in the heatmap.\n\n
Overall, the heatmap effectively visualizes the semantic textual similarity between the given sentences, allowing 
for easy identification of similar and dissimilar sentence pairs.

The Llama 4 model on JumpStart can take in images supplied through the image_url field (here as base64-encoded data), underlining its powerful potential for real-time multimodal applications.

Multi-image input

Building on its advanced multimodal functionality, Llama 4 can effortlessly process multiple images at the same time. In this demonstration, the model is prompted with two images and tasked with describing each image and explaining their relationship, showcasing its capacity to synthesize information across several visual inputs. Let’s test this by passing the heatmap from the previous example along with a second image, downloaded from a URL and base64-encoded, in the payload.

Image 1:

Image 2:

Input: 

def url_to_base64(image_url):
    # Download the image
    response = requests.get(image_url)
    if response.status_code != 200:
        return None

    # Encode the image content to base64
    image_base64 = base64.b64encode(response.content).decode('utf-8')
    return image_base64

image_url_2 = "https://raw.githubusercontent.com/mathvision-cuhk/MATH-V/refs/heads/main/images/13.jpg"
b64_img_url_2 = url_to_base64(image_url_2)

data = {
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant",
        },
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "There are two images. Describe each one and why they are related",
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": b64_img,
                    }
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": b64_img_url_2,
                    }
                },
            ],
        },
    ],
    "temperature": 0.6,
    "top_p": 0.9,
    "max_tokens": 512
}

# Invoke the endpoint with both images in a single request
response = predictor.predict(data)

Response:

The first image is a heatmap illustrating the semantic textual similarity between 
sentences. The heatmap is titled "Semantic Textual Similarity Between Sentences" 
and features a color scale ranging from -0.4 to 1.0, with darker shades indicating 
higher similarity. The sentences being compared are:\n\n* Your dog is so cute.\n* 
How cute your dog is!\n* You have such a cute dog!\n* New York City is the place 
where I work.\n* I work in New York City.\n* What color do you like the most?\n* 
What is your favourite color?\n\nThe heatmap reveals that the sentences about dogs 
have high similarity scores, with values above 0.9, indicating that they convey 
similar meanings. Similarly, the sentences about work locations and favorite colors 
also show high similarity scores. However, there is a low similarity score between 
the sentences about dogs and those about work locations or favorite colors.\n\n
The second image presents five stick figures labeled A, B, C, D, and E, each representing
a person. The figures are drawn using simple lines and geometric shapes, such as circles
for heads, squares for bodies, and triangles for skirts. The main difference between the
figures lies in their arm and leg positions:\n\n* Figure A has its arms at its sides and 
straight legs.\n* Figure B has its arms at its sides and straight legs, but with a slightly 
different posture.\n* Figure C has its arms at its sides and straight legs, similar to Figure A.\n*
Figure D has one arm raised and straight legs.\n* Figure E has two arms raised and bent legs.\n\n
Overall, the two images are distinct in terms of their content and purpose. The heatmap provides a 
visual representation of semantic similarities between sentences, while the stick figures appear to 
be simple illustrations of people with varying poses.

As you can see, Llama 4 excels in handling multiple images simultaneously, providing detailed and contextually relevant insights that emphasize its robust multimodal processing abilities.

Codebase analysis with Llama 4

Using Llama 4 Scout’s industry-leading context window, this section showcases its ability to deeply analyze expansive codebases. The example extracts and contextualizes the buildspec-1-10-2.yml file from the AWS Deep Learning Containers GitHub repository, illustrating how the model synthesizes information across an entire repository. We used a tool to ingest the whole repository into plain text, which we provided to the model as context:
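The payload below references a text variable holding the flattened repository. As an assumption for illustration (the original file name isn’t given), the following sketch reads a repository that has already been packed into a single plain-text file, for example with a tool such as Repomix:

# Hypothetical file name for the repository packed into plain text
with open("repo_packed.txt", "r", encoding="utf-8") as f:
    text = f.read()

print(f"Context length in characters: {len(text)}")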

Input:

payload = {
    "messages": [
        {
            "role": "system",
            "content": """You are a helpful AI assistant that summarizes codebases to me to help me understand
            how to analyze code by synthesizing through the entire codebase before responding.
            
            Be thorough in your search as the file may be nested within a markdown code block or within a directory listing.
            """
        },
        {
            "role": "user",
            "content": f"Can you explain to me the buildspec-1-10-2.yml file and how it relates to the rest of the huggingface directory? Here is the AWS Deep Learning Container Repository for reference:\n\n{text}"
        }
    ],
    "max_tokens": 2048,
    "temperature": 0.7,
    "top_p": 0.9,
    "stream": False
}

Output: 

## Overview of the buildspec-1-10-2.yml File

The provided information appears to be a part of a large, merged representation of 
the Hugging Face repository, generated by Repomix. The `buildspec-1-10-2.yml` file 
is likely a build specification file used in a Continuous Integration/Continuous 
Deployment (CI/CD) pipeline.

## Purpose and Format

The file contains a packed representation of the entire repository's contents, designed
 to be easily consumable by AI systems for analysis, code review, or other automated processes. 
The content is organized into sections:

1.  **Summary**: Provides an overview of the file's purpose, format, and usage guidelines.
2.  **Repository Information**: Not shown in the provided excerpt, but likely contains metadata about the repository.
3.  **Directory Structure**: Lists the directory structure of the repository, including file paths.
4.  **File Entries**: Each file entry consists of a header with the file path and the full contents 
of the file in a code block.

## Relationship to the Hugging Face Directory

The `buildspec-1-10-2.yml` file is part of the Hugging Face repository, which appears to be a 
large project with multiple subdirectories and files. The directory structure section lists various 
subdirectories, including:

*   `.github/`: Contains GitHub-related configuration files, such as issue templates, workflows, 
and CODEOWNERS.
*   `.release_templates/`: Includes release image templates for various frameworks like MXNet, PyTorch, 
and TensorFlow.
*   `autogluon/`: A subdirectory with build artifacts, inference configurations, and other related files.

## buildspec-1-10-2.yml File Contents

Unfortunately, the provided excerpt does not contain the actual contents of the `buildspec-1-10-2.yml` file. 
However, based on the file name and its presence in a CI/CD context, it is likely that this file specifies 
the build process, dependencies, and environment configurations for the Hugging Face project.

## Usage Guidelines

When working with this file, keep in mind:

*   Treat this file as read-only and make changes to the original repository files instead.
*   Use the file path to distinguish between different files in the repository.
*   Handle this file with the same level of security as the original repository, as it may 
contain sensitive information.

## Additional Notes

*   Some files might be excluded based on `.gitignore` rules and Repomix's configuration.
*   Binary files are not included in this packed representation.
*   Files are sorted by Git change count, with files having more changes appearing at the bottom.

## Example Use Case

To analyze the `buildspec-1-10-2.yml` file, you would:

1.  Locate the file in the repository directory structure.
2.  Review the file contents to understand the build process, dependencies, and environment configurations.
3.  Use this information to optimize the build process, troubleshoot issues, or make changes to the repository.

Keep in mind that the actual contents of the `buildspec-1-10-2.yml` file are not provided, so this 
is a general explanation based on the file name and context. 

To get the actual content you would look for 

## File: buildspec-1-10-2.yml 

 block and read its content 

Let me know if I can assist with anything else.

Multi-document processing

Harnessing the same extensive token context window, Llama 4 Scout excels in multi-document processing. In this example, the model extracts key financial metrics from Amazon 10-K reports (2017-2024), demonstrating its capability to integrate and analyze data spanning multiple years—all without the need for additional processing tools.
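The payload below references a pdf_text variable holding the report text. One possible way to produce it (not necessarily the approach used in the original walkthrough) is to extract text from a locally downloaded 10-K PDF with the pypdf library; the file name below is a placeholder:

from pypdf import PdfReader

def extract_pdf_text(path):
    # Extract plain text from a PDF, page by page
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

# Placeholder file name for a downloaded Amazon 10-K filing
pdf_text = extract_pdf_text("amazon-10k-2024.pdf")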

Input: 

payload = {
    "messages": [
        {
            "role": "system",
            "content": "You are a financial analyst. Extract key financial data from this 10-K report."
        },
        {
            "role": "user",
            "content": f"""Extract the following information from this Amazon 10-K report:
            1. What year is this report for?
            2. Total revenue for the year
            3. Net income
            4. Key business segments and their performance
            5. Important trends mentioned
            
            Here's the 10-K text:
            {pdf_text[:25000]}
            
            Format your response as a JSON object with these fields: year, revenue, net_income, segments, trends.
            """
        }
    ],
    "max_tokens": 2048,
    "temperature": 0.7,
    "top_p": 0.9,
    "stream": False
}

Output:

To analyze Amazon's growth from 2017 to 2024, we'll examine the revenue and net income 
data available from the 10-K reports.

### Revenue Growth

| Year | Revenue (in billion USD) |
| --- | --- |
| 2014 | 89 |
| 2015 | 107 |
| 2016 | 135.99 |
| 2017 | 177.87 |
| 2018 | 232.88 |
| 2019 | 280.52 |
| 2020 | 386.06 |
| 2021 | 478 |
| 2022 | 513 |
| 2023 | 513.98 |
| 2024 | 574.78 (projected) |

From 2017 to 2024, Amazon's revenue has grown from $177.87 billion to a projected $574.78 billion, 
representing a significant increase of approximately 223% over the 7-year period.

### Net Income Growth

| Year | Net Income (in billion USD) |
| --- | --- |
| 2014 | 0.236 |
| 2015 | 0.596 |
| 2016 | 2.37 |
| 2017 | 3.03 |
| 2018 | 10.07 |
| 2019 | 11.59 |
| 2020 | 18.68 |
| 2021 | 33.4 |
| 2022 | 18.7 (lower due to economic conditions) |
| 2023 | 33.36 |
| 2024 | 37.68 (projected) |

Amazon's net income has grown from $3.03 billion in 2017 to a projected $37.68 billion in 2024, 
representing a substantial increase of approximately 1143% over the 7-year period.

### Segment and Trend Analysis

Throughout the years, Amazon has maintained its three main business segments: North America, 
International, and Amazon Web Services (AWS). The company has continued to expand into new products, 
services, technologies, and geographic regions, driving growth and increasing competition.

The trends mentioned across the 10-K reports highlight the company's:

1. **Seasonality**: Higher sales volume during the fourth quarter.
2. **Intense competition**: Across geographies and industries, including e-commerce, cloud computing, 
and digital content.
3. **Expansion**: Into new products, services, technologies, and geographic regions, which subjects 
the company to additional business, legal, financial, and competitive risks.
4. **International operations**: Subject to risks such as local economic and political conditions, 
government regulation, and restrictions on sales or distribution.

### Conclusion

Amazon has demonstrated remarkable growth from 2017 to 2024, with revenue increasing by 
approximately 223% and net income growing by approximately 1143%. The company's continued 
expansion into new areas, its strong presence in cloud computing through AWS, and its ability 
to adapt to changing market conditions have contributed to its success. However, the company 
also faces intense competition, seasonality, and risks associated with international operations.

--------------------------------------------------
Ask a question about the Amazon 10-K reports across years.

Clean up

To avoid incurring unnecessary costs, when you’re done, delete the SageMaker endpoints using the following code snippets:

predictor.delete_model()
predictor.delete_endpoint()

Alternatively, using the SageMaker console, complete the following steps:

  1. On the SageMaker console, under Inference in the navigation pane, choose Endpoints.
  2. Search for the Llama 4 text generation endpoint that you deployed.
  3. On the endpoint details page, choose Delete.
  4. Choose Delete again to confirm.

Conclusion

In this post, we explored how SageMaker JumpStart empowers data scientists and ML engineers to discover, access, and deploy a wide range of pre-trained FMs for inference, including Meta’s most advanced and capable models to date. Get started with SageMaker JumpStart and Llama 4 models today.

For more information about SageMaker JumpStart, see Train, deploy, and evaluate pretrained models with SageMaker JumpStart and Getting started with Amazon SageMaker JumpStart.


About the authors

Marco Punio is a Sr. Specialist Solutions Architect focused on generative AI strategy, applied AI solutions, and conducting research to help customers hyper-scale on AWS. As a member of the Third-party Model Provider Applied Sciences Solutions Architecture team at AWS, he is a global lead for the Meta–AWS Partnership and technical strategy. Based in Seattle, Washington, Marco enjoys writing, reading, exercising, and building applications in his free time.

Chakravarthy Nagarajan is a Principal Solutions Architect specializing in machine learning, big data, and high performance computing. In his current role, he helps customers solve real-world, complex business problems using machine learning and generative AI solutions.

Banu Nagasundaram leads product, engineering, and strategic partnerships for Amazon SageMaker JumpStart, the SageMaker machine learning and generative AI hub. She is passionate about building solutions that help customers accelerate their AI journey and unlock business value.

Malav Shastri is a Software Development Engineer at AWS, where he works on the Amazon SageMaker JumpStart and Amazon Bedrock teams. His role focuses on enabling customers to take advantage of state-of-the-art open source and proprietary foundation models and traditional machine learning algorithms. Malav holds a Master’s degree in Computer Science.

Niithiyn Vijeaswaran is a Generative AI Specialist Solutions Architect with the Third-party Model Science team at AWS. His area of focus is AWS AI accelerators (AWS Neuron). He holds a Bachelor’s degree in Computer Science and Bioinformatics.

Baladithya Balamurugan is a Solutions Architect at AWS focused on ML deployments for inference and using AWS Neuron to accelerate training and inference. He works with customers to enable and accelerate their ML deployments on services such as Amazon SageMaker and Amazon EC2. Based in San Francisco, Baladithya enjoys tinkering, developing applications, and his home lab in his free time.

John Liu has 14 years of experience as a product executive and 10 years of experience as a portfolio manager. At AWS, John is a Principal Product Manager for Amazon Bedrock. Previously, he was the Head of Product for AWS Web3 and Blockchain. Prior to AWS, John held various product leadership roles at public blockchain protocols and fintech companies, and also spent 9 years as a portfolio manager at various hedge funds.