Amazon

Training state-of-the-art large language models (LLMs) demands massive, distributed compute infrastructure. Meta’s Llama 3, for instance, ran on 16,000 NVIDIA H100 GPUs for over 30.84 million GPU hours. Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that simplifies the deployment, management, and scaling of Kubernetes clusters that can…
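As a rough illustration of what provisioning GPU capacity on EKS can look like, here is a minimal sketch that uses boto3 to attach a GPU-backed managed node group to an existing cluster. The cluster name, IAM role ARN, subnet IDs, instance type, and sizing below are placeholder assumptions, not values from the post.

```python
import boto3

# Sketch: add a GPU-backed managed node group to an existing EKS cluster.
# "llm-training", the role ARN, and the subnet IDs are placeholders.
eks = boto3.client("eks", region_name="us-east-1")

response = eks.create_nodegroup(
    clusterName="llm-training",                              # assumed existing cluster
    nodegroupName="gpu-workers",
    nodeRole="arn:aws:iam::123456789012:role/EksNodeRole",   # placeholder IAM role
    subnets=["subnet-aaaa1111", "subnet-bbbb2222"],          # placeholder subnets
    instanceTypes=["p5.48xlarge"],                           # H100-class instances (assumption)
    amiType="AL2_x86_64_GPU",                                # GPU-enabled Amazon Linux AMI
    scalingConfig={"minSize": 2, "maxSize": 16, "desiredSize": 8},
    labels={"workload": "llm-training"},
)
print(response["nodegroup"]["status"])
```

Training workloads would then be scheduled onto these nodes as Kubernetes pods that request `nvidia.com/gpu` resources.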

This post is cowritten by Salesforce’s AI Platform team members Srikanta Prasad, Utkarsh Arora, Raghav Tanaji, Nitin Surya, Gokulakrishnan Gopalakrishnan, and Akhilesh Deepak Gotmare. Salesforce’s Artificial Intelligence (AI) platform team runs customized large language models (LLMs)—fine-tuned versions of Llama, Qwen, and Mistral—for agentic AI applications like Agentforce. Deploying these models…

The convergence of artificial intelligence with physical systems marks a pivotal moment in technological evolution. Physical AI, where algorithms transcend digital boundaries to perceive, understand, and manipulate the tangible world, will fundamentally transform how enterprises operate across industries. These intelligent systems bridge the gap between digital intelligence and physical reality…

In healthcare, the ability to quickly analyze and interpret medical reports is crucial for both healthcare providers and patients. While medical reports contain valuable information, they often remain underutilized due to their complex nature and the time-intensive process of analysis. This complexity manifests in several ways: the interpretation of multiple…

This post was written with Dominic Catalano from Anyscale. Organizations building and deploying large-scale AI models often face critical infrastructure challenges that can directly impact their bottom line: unstable training clusters that fail mid-job, inefficient resource utilization driving up costs, and complex distributed computing frameworks requiring specialized expertise. These factors…
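As a hedged sketch of the kind of distributed training setup referenced here, the snippet below uses Ray Train's TorchTrainer with a FailureConfig so a job can retry after worker failures. The worker count, GPU usage, and the training loop are illustrative assumptions, not details from the post.

```python
import ray
import torch
from ray.train import ScalingConfig, RunConfig, FailureConfig
from ray.train.torch import TorchTrainer


def train_loop(config):
    # Placeholder per-worker loop; a real job would build the model,
    # wrap it with ray.train.torch.prepare_model, and iterate over real data.
    model = torch.nn.Linear(8, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=config["lr"])
    for _ in range(config["epochs"]):
        loss = model(torch.randn(32, 8)).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()


ray.init()  # start or connect to a Ray runtime

trainer = TorchTrainer(
    train_loop,
    train_loop_config={"lr": 1e-3, "epochs": 3},
    scaling_config=ScalingConfig(num_workers=4, use_gpu=True),           # assumed sizing
    run_config=RunConfig(failure_config=FailureConfig(max_failures=3)),  # retry failed runs
)
result = trainer.fit()
print(result.metrics)
```

The FailureConfig setting is what addresses the "fail mid-job" concern: the run is restarted up to the configured number of times instead of being lost outright.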