
Organizations often face challenges when implementing single-shot fine-tuning approaches for their generative AI models. The single-shot fine-tuning method involves selecting training data, configuring hyperparameters, and hoping the results meet expectations without the ability to make incremental adjustments. This approach frequently leads to suboptimal results and requires restarting the entire process…

This post is cowritten by Salesforce’s AI Platform team members Srikanta Prasad, Utkarsh Arora, Raghav Tanaji, Nitin Surya, Gokulakrishnan Gopalakrishnan, and Akhilesh Deepak Gotmare. Salesforce’s Artificial Intelligence (AI) platform team runs customized large language models (LLMs)—fine-tuned versions of Llama, Qwen, and Mistral—for agentic AI applications like Agentforce. Deploying these models…

You can use Amazon Bedrock Custom Model Import to seamlessly integrate your customized models—such as Llama, Mistral, and Qwen—that you have fine-tuned elsewhere into Amazon Bedrock. The experience is completely serverless, minimizing infrastructure management while providing your imported models with the same unified API access as native Amazon Bedrock models.
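
Once a model is imported, it is invoked through the standard Bedrock Runtime API using the imported model's ARN. The following is a minimal sketch with boto3, assuming a Llama-style request body and a placeholder model ARN; adjust the body fields to match the prompt format your fine-tuned model expects.

```python
import json
import boto3

# Bedrock Runtime client; imported models are addressed by their ARN.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder ARN for illustration only.
IMPORTED_MODEL_ARN = "arn:aws:bedrock:us-east-1:111122223333:imported-model/abc123example"

# Llama-style request body; your model's expected fields may differ.
body = {
    "prompt": "Summarize the key benefits of serverless model hosting.",
    "max_gen_len": 256,
    "temperature": 0.2,
}

response = bedrock_runtime.invoke_model(
    modelId=IMPORTED_MODEL_ARN,
    body=json.dumps(body),
)

print(json.loads(response["body"].read()))
```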

As organizations scale their AI infrastructure to support trillion-parameter models, they face a difficult trade-off: slower failure recovery at lower cost, or faster recovery at higher cost. When they checkpoint frequently to speed up recovery and minimize lost training time, they incur substantially higher storage costs…
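
For intuition on that trade-off, the classic Young/Daly approximation gives the checkpoint interval that minimizes expected wasted time (checkpoint overhead plus recomputation after a failure). The sketch below is illustrative only; the checkpoint write time and cluster MTBF figures are hypothetical.

```python
import math

def optimal_checkpoint_interval(checkpoint_write_secs: float, mtbf_secs: float) -> float:
    """Young/Daly first-order approximation: the interval between checkpoints
    that minimizes expected wasted time (overhead + rework after a failure)."""
    return math.sqrt(2 * checkpoint_write_secs * mtbf_secs)

# Hypothetical numbers: a 5-minute checkpoint write and a 24-hour
# mean time between failures across the training cluster.
interval = optimal_checkpoint_interval(checkpoint_write_secs=300, mtbf_secs=24 * 3600)
print(f"Checkpoint roughly every {interval / 3600:.1f} hours")  # ~2.0 hours
```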

Most organizations evaluating foundation models limit their analysis to three primary dimensions: accuracy, latency, and cost. While these metrics provide a useful starting point, they represent an oversimplification of the complex interplay of factors that determine real-world model performance. Foundation models have revolutionized how enterprises develop generative AI applications, offering…

Machine learning (ML) has evolved from an experimental phase to becoming an integral part of business operations. Organizations now actively deploy ML models for precise sales forecasting, customer segmentation, and churn prediction. While traditional ML continues to transform business processes, generative AI has emerged as a revolutionary force, introducing powerful…

Upgrading legacy systems has become increasingly important to stay competitive in today’s market, as outdated infrastructure can cost organizations time, money, and market position. However, modernization efforts face challenges like time-consuming architecture reviews, complex migrations, and fragmented systems. These delays not only affect engineering teams but also have broader impacts, including…

Organizations serving multiple tenants through AI applications face a common challenge: how to track, analyze, and optimize model usage across different customer segments. Although Amazon Bedrock provides powerful foundation models (FMs) through its Converse API, the true business value emerges when you can connect model interactions to specific tenants, users, …
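
One way to attach tenant context to each call is the Converse API's requestMetadata parameter. The following is a minimal sketch, assuming model invocation logging is enabled so the metadata keys appear in the logs for downstream analysis; the tenant and user identifiers below are hypothetical.

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[
        {"role": "user", "content": [{"text": "Draft a renewal reminder email."}]}
    ],
    inferenceConfig={"maxTokens": 256, "temperature": 0.3},
    # requestMetadata is surfaced in Amazon Bedrock model invocation logs,
    # so downstream analytics can group usage and cost by tenant.
    requestMetadata={"tenant_id": "tenant-042", "user_id": "user-9876"},
)

usage = response["usage"]
print(response["output"]["message"]["content"][0]["text"])
print(f"input tokens: {usage['inputTokens']}, output tokens: {usage['outputTokens']}")
```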

Today’s enterprises increasingly rely on AI-driven applications to enhance decision-making, streamline workflows, and deliver improved customer experiences. Achieving these outcomes demands secure, timely, and accurate access to authoritative data—especially when such data resides across diverse repositories and applications within strict enterprise security boundaries. Interoperable technologies powered by open standards like…