- Red Hat (Raleigh, NC)
- …models, and deliver innovative apps. The OpenShift AI team seeks a Software Engineer with Kubernetes and Model Inference Runtimes experience to join our ... packaging, such as PyPI libraries + Solid understanding of the fundamentals of model inference architectures + Experience with Jenkins, Git, shell scripting, and…
- Amazon (Seattle, WA)
- …The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon's ... with popular ML frameworks like PyTorch and JAX, enabling unparalleled ML inference and training performance. The Inference Enablement and Acceleration team…
- Amazon (Cupertino, CA)
- … lifecycles along with work experience on optimizations for improving model execution. - Software development experience in C++, Python (experience in ... at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and...ML frameworks like PyTorch and JAX, enabling unparalleled ML inference and training performance. The Inference Enablement…
- NVIDIA (Santa Clara, CA)
- …open-sourced inference frameworks. Seeking a Senior Deep Learning Algorithms Engineer to improve innovative generative AI models like LLMs, VLMs, multimodal and ... In this role, you will design, implement, and productionize model optimization algorithms for inference and deployment...you'll be doing: + Design and build modular, scalable model optimization software platforms that deliver exceptional…
- NVIDIA (CA)
- …can make a lasting impact on the world. We are now looking for a Senior System Software Engineer to work on user-facing tools for Dynamo Inference Server! ... NVIDIA is hiring software engineers for its GPU-accelerated deep learning software team, and we are a remote-friendly work environment. Academic and commercial…
- MongoDB (Palo Alto, CA)
- … Engineer, you'll focus on building core systems and services that power model inference at scale. You'll own key components of the infrastructure, work ... **About the Role** We're looking for a Senior Engineer to help build the next-generation inference...multi-tenant service design + Familiar with concepts in ML model serving and inference runtimes, even if…
- NVIDIA (Santa Clara, CA)
- NVIDIA seeks a Senior Software Engineer specializing in Deep Learning Inference for our growing team. As a key contributor, you will help design, build, and ... high-performance open-source frameworks, which are at the forefront of efficient large-scale model serving and inference. You will play a central role…
- NVIDIA (Santa Clara, CA)
- NVIDIA seeks a Senior Software Engineer specializing in Deep Learning Inference for our growing team. As a key contributor, you will help design, build, and ... vLLM, which are at the forefront of efficient large-scale model serving and inference. You will play...inference libraries, vLLM, SGLang, FlashInfer, and LLM software solutions. + Work with cross-collaborative teams across frameworks,…
- Amazon (Cupertino, CA)
- …and efficiently on AWS silicon. We are seeking a Software Development Engineer to lead and architect our next-generation model serving infrastructure, with a ... Description AWS Neuron is the software stack powering AWS Inferentia and Trainium machine...resilient AI infrastructure at AWS. We focus on developing model-agnostic inference innovations, including disaggregated serving, distributed…
- Argonne National Laboratory (Lemont, IL)
- …supercomputing resources and computational science expertise. The ALCF has an opening for a Software Engineer working in the space of enabling AI for science, ... and AI. In this position, the candidate can expect to explore and engineer solutions for AI inference integrated within scientific workflows, via programmatic…
- Amazon (Seattle, WA)
- …cloud-scale machine learning accelerators. This role is for a senior software engineer in the Machine Learning Inference Applications team. This role is ... Description AWS Neuron is the complete software stack for the AWS Inferentia and Trainium...and performance optimization of core building blocks of LLM Inference - Attention, MLP, Quantization, Speculative Decoding, Mixture of…
- Red Hat (Boston, MA)
- …closely with our product and research teams to scale SOTA deep learning products and software. As an ML Ops engineer, you will work closely with our technical ... open-source LLMs and vLLM to every enterprise. The Red Hat Inference team accelerates AI for the enterprise and brings...the vLLM project, and inventors of state-of-the-art techniques for model compression, our team provides a stable platform for…
- MongoDB (Palo Alto, CA)
- We're looking for a Lead Engineer, Inference Platform to join our team building the inference platform for embedding models that power semantic search, ... Atlas and optimized for developer experience. As a Lead Engineer, Inference Platform, you'll be hands-on with...project **Nice to Have** + Prior experience working with model teams on inference-optimized architectures + Background…
- General Motors (Sunnyvale, CA)
- …this role, you'll work closely with ML engineers and researchers to ensure efficient model serving and inference in production for workflows such as data ... inference services. + Proactively research and integrate state-of-the-art model serving frameworks, hardware accelerators, and distributed computing techniques. +…
- quadric.io, Inc (Burlingame, CA)
- …executes both NN graph code and conventional C++ DSP and control code. Role: The AI Inference Engineer at Quadric is the key bridge between the world of AI/LLM ... models and Quadric's unique platforms. The AI Inference Engineer at Quadric will [1] port...Electrical Engineering. + 5+ years of experience in AI/LLM model inference and deployment frameworks/tools + Experience…
- Red Hat (Boston, MA)
- …bring the power of open-source LLMs and vLLM to every enterprise. The Red Hat Inference team accelerates AI for the enterprise and brings operational simplicity to GenAI ... the vLLM and LLM-D projects, and inventors of state-of-the-art techniques for model quantization and sparsification, our team provides a stable platform for…
- Red Hat (Sacramento, CA)
- …engineering teams at our customer to deploy, optimize, and scale distributed Large Language Model (LLM) inference systems. You will solve "**last mile**" ... developer to join our team as a **Forward Deployed Engineer**. In this role, you will not just...KServe, vLLM, Kubernetes). + Knowledge of **Envoy Proxy** **or** **Inference Gateway** (IGW). + Familiarity with model…
- NVIDIA (Santa Clara, CA)
- …leads the AI revolution. What you will be doing: + Implement language and multimodal model inference as part of NVIDIA Inference Microservices (NIMs). + ... We are now looking for a Senior DL Algorithms Engineer! NVIDIA is seeking senior engineers who are mindful...are unafraid to work across all layers of the hardware/software stack from GPU architecture to Deep Learning Framework…
- Capital One (San Francisco, CA)
- …develop, test, deploy, and support AI software components including foundation model training, large language model inference, similarity search, ... Lead AI Engineer (FM Hosting, LLM Inference) **Overview**...developing and applying state-of-the-art techniques for optimizing training and inference software to improve hardware utilization, latency,…
- Amazon (Seattle, WA)
- …and the Trn2 and future Trn3 servers that use them. This role is for a software engineer in the Machine Learning Applications (ML Apps) team for AWS Neuron. This ... Description AWS Neuron is the complete software stack for the AWS Inferentia and Trainium... Technology team works side by side with the Inference Model Enablement and compiler runtime engineers to…