- Amazon (Seattle, WA)
- …integrates with popular ML frameworks like PyTorch and JAX, enabling unparalleled ML inference and training performance. The Inference Enablement and ... at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and... - Fundamentals of machine learning and LLMs, their architecture, training and inference lifecycles, along with work…
- NVIDIA (Santa Clara, CA)
- NVIDIA seeks a Senior Software Engineer specializing in Deep Learning Inference for our growing team. As a key contributor, you will help design, build, and ... inference libraries, vLLM and SGLang, FlashInfer and LLM software solutions. + Work with cross-collaborative teams across frameworks, ... Python experience is a plus. + Prior experience with training, deploying, or optimizing the inference of…
- Amazon (Seattle, WA)
- …cloud-scale machine learning accelerators. This role is for a senior software engineer in the Machine Learning Inference Applications team. This role is ... Description AWS Neuron is the complete software stack for the AWS Inferentia and Trainium... language - Fundamentals of machine learning models, their architecture, training and inference lifecycles, along with work…
- Red Hat (Boston, MA)
- …closely with our product and research teams to scale SOTA deep learning products and software. As an MLOps engineer, you will work closely with our technical ... open-source LLMs and vLLM to every enterprise. The Red Hat Inference team accelerates AI for the enterprise and brings... and research teams to manage training and deployment pipelines, create DevOps and CI/CD infrastructure,…
- MongoDB (Palo Alto, CA)
- We're looking for a Lead Engineer, Inference Platform to join our team building the inference platform for embedding models that power semantic search, ... Atlas and optimized for developer experience. As a Lead Engineer, Inference Platform, you'll be hands-on with... of experience serving as TL for a large-scale ML inference or training platform SW project. Nice…
- Red Hat (Boston, MA)
- …bring the power of open-source LLMs and vLLM to every enterprise. The Red Hat Inference team accelerates AI for the enterprise and brings operational simplicity to GenAI ... build, optimize, and scale LLM deployments. As a Machine Learning Engineer focused on distributed vLLM (https://github.com/vllm-project/) infrastructure in the LLM-D…
- NVIDIA (Santa Clara, CA)
- …open-sourced inference frameworks. Seeking a Senior Deep Learning Algorithms Engineer to improve innovative generative AI models like LLMs, VLMs, multimodal and ... the crowd: + Contributions to PyTorch, JAX, vLLM, SGLang, or other machine learning training and inference frameworks. + Hands-on experience training or…
- quadric.io, Inc (Burlingame, CA)
- …executes both NN graph code and conventional C++ DSP and control code. Role: The AI Inference Engineer at Quadric is the key bridge between the world of AI/LLM ... general purpose neural processing unit (GPNPU) architecture. Quadric's co-optimized software and hardware are targeted to run neural network... models and Quadric's unique platforms. The AI Inference Engineer at Quadric will [1] port…
- Red Hat (Sacramento, CA)
- …is looking for a customer-obsessed developer to join our team as a **Forward Deployed Engineer**. In this role, you will not just build software; you will be ... the bridge between our cutting-edge inference platform (LLM-D (https://llm-d.ai/) and vLLM (https://github.com/vllm-project/vllm)) and our customers' most…
- Capital One (San Francisco, CA)
- …or Golang + Experience developing and applying state-of-the-art techniques for optimizing training and inference software to improve hardware utilization, ... Lead AI Engineer (FM Hosting, LLM Inference) **Overview**... support AI software components including foundation model training, large language model inference, similarity search,…
- NVIDIA (Santa Clara, CA)
- …intelligence. Our data center platforms integrate CPUs, GPUs, DPUs, networking, and a full-stack software ecosystem to power AI at scale. We are looking for a Senior ... Technical Marketing Engineer to join our growing accelerated computing product team. ... high-impact go-to-market strategy. This role will focus on AI inference at scale, ensuring that customers and partners understand…
- Red Hat (Raleigh, NC)
- …the power of open-source LLMs and vLLM to every enterprise. The Red Hat Inference team accelerates AI for the enterprise and brings operational simplicity to GenAI ... optimize, and scale LLM deployments. As a Machine Learning Engineer focused on vLLM, you will be at the... you. Join us in shaping the future of AI Inference! **What You Will Do** + Write robust Python…
- Bank of America (Addison, TX)
- Senior Engineer - AI Inference. Addison, Texas; Plano, Texas; Newark, Delaware; Charlotte, North Carolina; Kennesaw, Georgia. **Job Description:** At Bank of America,…
- Red Hat (Boston, MA)
- …bring the power of open-source LLMs and vLLM to every enterprise. The Red Hat Inference team accelerates AI for the enterprise and brings operational simplicity to GenAI ... (https://github.blog/news-insights/octoverse/octoverse-a-new-developer-joins-github-every-second-as-ai-leads-typescript-to-1/#the-top-open-source-projects-by-contributors) on GitHub. As a Machine Learning Engineer focused on vLLM, you will be…
- Amazon (Seattle, WA)
- …stack powering AWS's next-generation AI accelerators, Inferentia and Trainium. As a Senior Software Engineer in our Machine Learning Applications team, you'll be ... custom silicon, our team drives innovation from silicon architecture to production software deployment. We pioneer distributed inference solutions for PyTorch…
- Google (Sunnyvale, CA)
- …+ 3 years of experience in software development for machine learning model inference or machine learning model training, and 1 year of experience with ML ... Senior Software Engineer, Machine Learning, Kernel ... model performance for large-scale training and inference through tuning and optimization at both software…
- NVIDIA (Santa Clara, CA)
- We are now looking for a Senior Deep Learning Software Engineer, FlashInfer. NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing ... AI systems engineers to develop groundbreaking technologies in the inference systems software stack! We build innovative... in domain-specific compiler and library solutions for LLM inference and training (e.g., FlashInfer, Flash Attention)…
- Snap Inc. (Seattle, WA)
- …ranking and recommendation systems more efficient and impactful. We're looking for a Software Engineer, ML Infrastructure to join Snap Inc! What you'll do: ... You'll play a critical role in scaling our ML Infrastructure, optimizing AI training and inference systems, and driving innovations that make Snapchat's…