AgentCore

AI agents are transforming enterprise applications across industries, from customer service to complex decision workflows. As organizations scale these deployments, they face a fundamental question: how can they build trust in an AI application? The challenge is transparency. AI agents can make decisions on behalf of users, invoke tools dynamically…

Basic AI chat isn’t enough for most business applications. Institutions need AI that can pull from their databases, integrate with their existing tools, handle multi-step processes, and make decisions independently. This post demonstrates how to quickly build sophisticated AI agents using Strands Agents, scale them reliably with Amazon Bedrock AgentCore…
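As a rough illustration of the building-block approach that post describes, here is a minimal sketch of a Strands agent with one custom tool. It assumes the Strands Agents Python SDK (its `Agent` class and `@tool` decorator) and a stubbed `lookup_order` helper invented for the example; the post's own agents and tools will differ.

```python
# Minimal sketch, assuming the Strands Agents Python SDK and default Amazon Bedrock model access.
from strands import Agent, tool

@tool
def lookup_order(order_id: str) -> str:
    """Return the status of an order from an internal system (stubbed for illustration)."""
    # In a real deployment this would query your database or an internal API.
    return f"Order {order_id}: shipped"

# The agent gets a system prompt and a list of tools it may call while reasoning.
agent = Agent(
    tools=[lookup_order],
    system_prompt="You are a support assistant for order inquiries.",
)

# The model decides when to invoke the tool as part of answering the question.
response = agent("Where is order 12345?")
print(response)
```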

When deploying AI agents to Amazon Bedrock AgentCore Runtime (currently in preview), customers often want to use custom domain names to create a professional and seamless experience. By default, AgentCore Runtime agents use endpoints like https://bedrock-agentcore.{region}.amazonaws.com/runtimes/{EncodedAgentARN}/invocations. In this post, we discuss how to transform these endpoints into user-friendly custom domains…
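To make the default endpoint format concrete, the sketch below assembles the invocation URL from a Region and a URL-encoded agent runtime ARN, following the pattern quoted above. The Region and ARN values are placeholders; a custom domain would sit in front of an endpoint like this rather than replace it.

```python
# Sketch of constructing the default AgentCore Runtime invocation URL shown above.
from urllib.parse import quote

region = "us-east-1"  # assumption: your deployment Region
agent_runtime_arn = (  # hypothetical ARN for illustration only
    "arn:aws:bedrock-agentcore:us-east-1:123456789012:runtime/my-agent"
)

# The ARN must be URL-encoded before it is placed in the request path.
encoded_arn = quote(agent_runtime_arn, safe="")
default_endpoint = (
    f"https://bedrock-agentcore.{region}.amazonaws.com/runtimes/{encoded_arn}/invocations"
)
print(default_endpoint)
```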

To fulfill their tasks, AI agents need access to various capabilities including tools, data stores, prompt templates, and other agents. As organizations scale their AI initiatives, they face a rapidly growing challenge of connecting each agent to multiple tools, creating an M×N integration problem that significantly slows development and increases…
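A quick back-of-the-envelope calculation shows why the M×N problem bites. The counts below are assumptions chosen only for illustration: point-to-point wiring scales multiplicatively, while routing both sides through a shared integration layer scales additively.

```python
# Illustrative arithmetic with assumed counts of agents (M) and tools (N).
agents, tools = 10, 20

direct_integrations = agents * tools    # M x N = 200 point-to-point connections to build and maintain
shared_integrations = agents + tools    # M + N = 30 connections when both sides target one shared layer

print(direct_integrations, shared_integrations)
```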

We’re excited to introduce Amazon Bedrock AgentCore Identity, a comprehensive identity and access management service purpose-built for AI agents. With AgentCore Identity, AI agent developers and administrators can securely access AWS resources and third-party tools such as GitHub, Salesforce, or Slack. AgentCore Identity provides robust identity and access management at…

Organizations are increasingly excited about the potential of AI agents, but many find themselves stuck in what we call “proof of concept purgatory,” where promising agent prototypes struggle to make the leap to production deployment. In our conversations with customers, we’ve heard consistent challenges that block the path from experimentation to…

AI assistants that forget what you told them 5 minutes ago aren’t very helpful. While large language models (LLMs) excel at generating human-like responses, they are fundamentally stateless: they don’t retain information between interactions. This forces developers to build custom memory systems to track conversation history, remember user preferences, and maintain…
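As a generic illustration of the custom memory burden described above (not the AgentCore Memory API), the sketch below shows the application carrying conversation history forward itself, since the stateless model will not. The session identifier and helper names are invented for the example.

```python
# Generic, hand-rolled conversation memory: the application must resend prior turns every call.
from collections import defaultdict

conversation_history: dict[str, list[dict[str, str]]] = defaultdict(list)

def build_messages(session_id: str, user_message: str) -> list[dict[str, str]]:
    """Append the new user turn and return the full message list a stateless model needs."""
    conversation_history[session_id].append({"role": "user", "content": user_message})
    return list(conversation_history[session_id])

def record_reply(session_id: str, assistant_message: str) -> None:
    """Persist the model's reply so the next turn still remembers it."""
    conversation_history[session_id].append({"role": "assistant", "content": assistant_message})

messages = build_messages("session-1", "My name is Priya.")
# ... send `messages` to the model, then store its reply for the next turn:
record_reply("session-1", "Nice to meet you, Priya!")
```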

AI agents have reached a critical inflection point where their ability to generate sophisticated code exceeds the capacity to execute it safely in production environments. Organizations deploying agentic AI face a fundamental dilemma: although large language models (LLMs) can produce complex code scripts, mathematical analyses, and data visualizations, executing this…
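To make the execution-safety dilemma concrete, here is a generic sketch (not the AgentCore Code Interpreter API) that runs model-generated code in a separate, time-limited Python subprocess. Even this leaves gaps in network, filesystem, and resource isolation, which is exactly what a managed sandbox is meant to close; the generated code string is a placeholder.

```python
# Generic illustration: never exec() model-generated code in-process; at minimum isolate and time-limit it.
import subprocess
import sys
import tempfile

generated_code = "print(sum(range(10)))"  # hypothetical LLM output

# Write the generated code to a throwaway script file.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(generated_code)
    script_path = f.name

# Run it in a separate interpreter with a timeout; -I (isolated mode) ignores env vars and user site-packages.
result = subprocess.run(
    [sys.executable, "-I", script_path],
    capture_output=True,
    text=True,
    timeout=5,
)
print(result.stdout)
```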