- quadric.io, Inc (Burlingame, CA)
- …GPNPU executes both NN graph code and conventional C++ DSP and control code. Role: The AI Inference Engineer at Quadric is the key bridge between the world ... of AI/LLM models and Quadric's unique platforms. The AI Inference Engineer at Quadric will [1] port AI models to the Quadric platform; [2] optimize the…
- Bank of America (Plano, TX)
- Senior Engineer - AI Inference. Addison, Texas; Plano, Texas; Newark, Delaware; Charlotte, North Carolina; Kennesaw, Georgia. **Job Description:** At Bank…
- NVIDIA (CA)
- …distributed model management systems, including Rust-based runtime components, for large-scale AI inference workloads. + Implement inference scheduling ... people. Today, we're tapping into the unlimited potential of AI to define the next era of computing. ...We are now looking for a Senior System Software Engineer to work on user-facing tools for Dynamo…
- NVIDIA (Santa Clara, CA)
- …seeking highly skilled and motivated software engineers to join us and build AI inference systems that serve large-scale models with extreme efficiency. You'll ... architect and implement high-performance inference stacks, optimize GPU kernels and compilers, drive industry... teams to push the frontier of accelerated computing for AI. What you'll be doing: + Contribute features to…
- NVIDIA (Santa Clara, CA)
- …and ensure a consistent, high-impact go-to-market strategy. This role will focus on AI inference at scale, ensuring that customers and partners understand how ... scale. We are looking for a Senior Technical Marketing Engineer to join our growing accelerated computing product team. ... + Develop Technical Positioning & Messaging - Translate NVIDIA's AI inference and accelerated computing technologies into…
- Red Hat (Raleigh, NC)
- …this role, your primary responsibility will be to build and release the Red Hat AI Inference runtimes, continuously improve the processes and tooling used by the ... open-source LLMs and vLLM to every enterprise. The Red Hat Inference team accelerates AI for the enterprise... on GitHub. We are seeking an experienced MLOps engineer to work closely with our product and research…
- Red Hat (Boston, MA)
- …bring the power of open-source LLMs and vLLM to every enterprise. The Red Hat Inference team accelerates AI for the enterprise and brings operational simplicity to ... ai-leads-typescript-to-1/#the-top-open-source-projects-by-contributors)** on GitHub. As a Principal Machine Learning Engineer focused on vLLM, you will be at the…
- Capital One (San Francisco, CA)
- Lead AI Engineer (FM Hosting, LLM Inference) **Overview** At Capital One, we are creating responsible and reliable AI systems, changing banking for good. ... AI and ML algorithms or technologies (e.g., LLM Inference, Similarity Search and VectorDBs, Guardrails, Memory) using Python, ... regularly worked. Cambridge, MA: $193,400 - $220,700 for Lead AI Engineer; McLean, VA: $193,400 - $220,700…
- Amazon (Cupertino, CA)
- …and Trainium machine learning accelerators, designed to deliver high-performance, low-cost inference at scale. The Neuron Serving team develops infrastructure to ... and efficiently on AWS silicon. We are seeking a Software Development Engineer to lead and architect our next-generation model serving infrastructure, with a…
- Amazon (New York, NY)
- Description Are you interested in advancing Amazon's Generative AI capabilities? Come work with a talented team of engineers and scientists in a highly collaborative ... and friendly team. We are building state-of-the-art Generative AI technology that will benefit all Amazon businesses and customers. Key job responsibilities As a…
- Amazon (Cupertino, CA)
- …of applied scientists, system engineers, and product managers to deliver state-of-the-art inference capabilities for Generative AI applications. Your work will ... ML frameworks like PyTorch and JAX, enabling unparalleled ML inference and training performance. The Inference Enablement... expertise to push the boundaries of what's possible in AI acceleration. As part of the broader Neuron organization,…
- Amazon (Seattle, WA)
- …of applied scientists, system engineers, and product managers to deliver state-of-the-art inference capabilities for Generative AI applications. Your work will ... ML frameworks like PyTorch and JAX, enabling unparalleled ML inference and training performance. The Inference Enablement... expertise to push the boundaries of what's possible in AI acceleration. As part of the broader Neuron organization,…
- Amazon (Cupertino, CA)
- …learning accelerators and servers that use them. This role is for a software engineer on the Machine Learning Inference Model Enablement team for AWS Neuron ... beyond, as well as Stable Diffusion, vision transformers, and many more. The Inference Model Enablement team works side by side with compiler engineers and runtime…
- Amazon (Seattle, WA)
- …Trainium cloud-scale machine learning accelerators. This role is for a senior software engineer on the Machine Learning Inference Applications team, responsible ... for development and performance optimization of core building blocks of LLM Inference - Attention, MLP, Quantization, Speculative Decoding, Mixture of Experts, etc.…
- MongoDB (Palo Alto, CA)
- We're looking for a Lead Engineer, Inference Platform to join our team building the inference platform for embedding models that power semantic search, ... retrieval, and AI-native features across MongoDB Atlas. This role is part... Atlas and optimized for developer experience. As a Lead Engineer, Inference Platform, you'll be hands-on with…
- MongoDB (Palo Alto, CA)
- **About the Role** We're looking for a Senior Engineer to help build the next-generation inference platform that supports embedding models used for semantic ... MongoDB Atlas, supporting semantic search and hybrid retrieval + Collaborate with AI engineers and researchers to productionize inference for embedding models…
- General Motors (Sunnyvale, CA)
- …**This job is eligible for relocation assistance.** **About the Team:** The ML Inference Platform is part of the AI Compute Platforms organization within ... Our team owns the cloud-agnostic, reliable, and cost-efficient platform that powers GM's AI efforts. We're proud to serve as the AI infrastructure platform…
- Red Hat (Boston, MA)
- …bring the power of open-source LLMs and vLLM to every enterprise. The Red Hat Inference team accelerates AI for the enterprise and brings operational simplicity to ... **Summary** At Red Hat we believe the future of AI is open and we are on a mission... and scale LLM deployments. As a Principal Machine Learning Engineer focused on distributed vLLM (http://github.com/vllm-project/) infrastructure in the…
- NVIDIA (Santa Clara, CA)
- …with open-source inference frameworks. We are seeking a Senior Deep Learning Algorithms Engineer to improve innovative generative AI models like LLMs and VLMs, ... to strengthen NVIDIA platform integration and expand market adoption across the AI inference ecosystem. What we need to see: + Master's, PhD, or equivalent…
- NVIDIA (Santa Clara, CA)
- …Lead the design and development of a scalable, robust, and reliable platform for serving AI models for inference as a service. + Architect and implement systems ... people. Today, we're tapping into the unlimited potential of AI to define the next era of computing. ...on the world. We are seeking a Principal Software Engineer to join our Software Infrastructure Team in Santa…