- NVIDIA (Santa Clara, CA)
- NVIDIA seeks a Senior Software Engineer specializing in Deep Learning Inference for our growing team. As a key contributor, you will help design, build, and ... optimize the GPU-accelerated software that powers today's most sophisticated AI applications. Our ... inference libraries, vLLM and SGLang, FlashInfer and LLM software solutions. + Work with cross-collaborative teams across frameworks, NVIDIA libraries…
- MongoDB (Palo Alto, CA)
- **About the Role** We're looking for a Senior Engineer to help build the next-generation inference platform that supports embedding models used for semantic ... Together, we're building infrastructure for real-time, low-latency, and high-scale inference - fully integrated with Atlas and designed for developer-first…
- NVIDIA (CA)
- We are now looking for a Senior System Software Engineer to work on Dynamo & Triton Inference Server! NVIDIA is hiring software engineers for its ... What you'll be doing: In this role, you will develop open source software to serve inference of trained AI models running on GPUs. You will balance a variety…
- NVIDIA (CA)
- …can make a lasting impact on the world. We are now looking for a Senior System Software Engineer to work on user-facing tools for Dynamo Inference Server! ... NVIDIA is hiring software engineers for its GPU-accelerated deep learning software team, and we are a remote-friendly work environment. Academic and commercial…
- NVIDIA (Santa Clara, CA)
- …and motivated software engineers to join us and build AI inference systems that serve large-scale models with extreme efficiency. You'll architect and implement ... high-performance inference stacks, optimize GPU kernels and compilers, drive industry...way to integrate research ideas and prototypes into NVIDIA's software products. What we need to see: + Bachelor's…
- Amazon (Seattle, WA)
- …cloud-scale machine learning accelerators. This role is for a senior software engineer in the Machine Learning Inference Applications team. This role is ... Description AWS Neuron is the complete software stack for the AWS Inferentia and Trainium...and performance optimization of core building blocks of LLM Inference - Attention, MLP, Quantization, Speculative Decoding, Mixture of…
- Red Hat (Boston, MA)
- …closely with our product and research teams to scale SOTA deep learning products and software. As an ML Ops engineer, you will work closely with our technical ... bring the power of open-source LLMs and vLLM to every enterprise. The Red Hat Inference team accelerates AI for the enterprise and brings operational simplicity to GenAI…
- NVIDIA (Santa Clara, CA)
- …to work on cutting-edge AI technology? Join NVIDIA's TensorRT team as a Senior Software Engineer, and be at the forefront of technology, enabling support in ... with teams and stakeholders across the whole hardware and software stack to understand and leverage new features to...models (such as Large Language Models) & frameworks for inference. + Background with C++17. NVIDIA is widely considered…
- Amazon (Cupertino, CA)
- …and multimodal workloads-reliably and efficiently on AWS silicon. We are seeking a Software Development Engineer to lead and architect our next-generation model ... Description AWS Neuron is the software stack powering AWS Inferentia and Trainium machine...Trainium machine learning accelerators, designed to deliver high-performance, low-cost inference at scale. The Neuron Serving team develops infrastructure…
- Red Hat (Raleigh, NC)
- …and deliver innovative apps. The OpenShift AI team seeks a Software Engineer with Kubernetes and Model Inference Runtimes experience to join our rapidly ... You Will Do** + Develop and maintain a high-quality, high-performing ML inference runtime platform for multi-modal and distributed model serving. + Contribute…
- Amazon (Seattle, WA)
- …The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon's ... with popular ML frameworks like PyTorch and JAX enabling unparalleled ML inference and training performance. The Inference Enablement and Acceleration team…
- MongoDB (Palo Alto, CA)
- We're looking for a Lead Engineer, Inference Platform to join our team building the inference platform for embedding models that power semantic search, ... deeply integrated into Atlas and optimized for developer experience. As a Lead Engineer, Inference Platform, you'll be hands-on with design and implementation…
- Red Hat (Boston, MA)
- …bring the power of open-source LLMs and vLLM to every enterprise. The Red Hat Inference team accelerates AI for the enterprise and brings operational simplicity to GenAI ... build, optimize, and scale LLM deployments. As a Machine Learning Engineer focused on distributed vLLM (https://github.com/vllm-project/) infrastructure in the LLM-D…
- General Motors (Sunnyvale, CA)
- …job is eligible for relocation assistance.** **About the Team:** The ML Inference Platform is part of the AI Compute Platforms organization within Infrastructure ... of state-of-the-art (SOTA) machine learning models for experimental and bulk inference, with a focus on performance, availability, concurrency, and scalability…
- NVIDIA (Santa Clara, CA)
- …open-sourced inference frameworks. Seeking a Senior Deep Learning Algorithms Engineer to improve innovative generative AI models like LLMs, VLMs, multimodal and ... as large language models (LLM) and diffusion models for maximal inference efficiency using techniques ranging from quantization, speculative decoding, sparsity…
- quadric.io, Inc (Burlingame, CA)
- …executes both NN graph code and conventional C++ DSP and control code. Role: The AI Inference Engineer at Quadric is the key bridge between the world of AI/LLM ... general purpose neural processing unit (GPNPU) architecture. Quadric's co-optimized software and hardware is targeted to run neural network...models and Quadric's unique platforms. The AI Inference Engineer at Quadric will [1] port…
- Red Hat (Sacramento, CA)
- …is looking for a customer-obsessed developer to join our team as a **Forward Deployed Engineer**. In this role, you will not just build software; you will be ... the bridge between our cutting-edge inference platform (LLM-D (https://llm-d.ai/) and vLLM (https://github.com/vllm-project/vllm)) and our customers' most…
- NVIDIA (Santa Clara, CA)
- We are now looking for a Senior DL Algorithms Engineer! NVIDIA is seeking senior engineers who are mindful of performance analysis and optimization to help us ... are unafraid to work across all layers of the hardware/software stack from GPU architecture to Deep Learning Framework...will be doing: + Implement language and multimodal model inference as part of NVIDIA Inference Microservices…