- NVIDIA (CA)
- …can make a lasting impact on the world. We are now looking for a Senior System Software Engineer to work on user-facing tools for Dynamo Inference Server! ... software engineers for its GPU-accelerated deep learning software team, and we are a remote-friendly work...model management systems, including Rust-based runtime components, for large-scale AI inference workloads. + Implement inference…
- NVIDIA (Santa Clara, CA)
- …highly skilled and motivated software engineers to join us and build AI inference systems that serve large-scale models with extreme efficiency. You'll ... architect and implement high-performance inference stacks, optimize GPU kernels and compilers, drive industry...teams to push the frontier of accelerated computing for AI. What you'll be doing: + Contribute features to…
- Argonne National Laboratory (Lemont, IL)
- …and computational science expertise. The ALCF has an opening for a Software Engineer working in the space of enabling AI for science, specifically targeting ... this position, the candidate can expect to explore and engineer solutions for AI inference...C/C++. + Ability to create, maintain, and support high-quality software is essential. + Work with and contribute to…
- Amazon (Cupertino, CA)
- …and multimodal workloads, reliably and efficiently, on AWS silicon. We are seeking a Software Development Engineer to lead and architect our next-generation model ... Description AWS Neuron is the software stack powering AWS Inferentia and Trainium machine...Trainium machine learning accelerators, designed to deliver high-performance, low-cost inference at scale. The Neuron Serving team develops infrastructure…
- Amazon (Seattle, WA)
- …cloud-scale machine learning accelerators. This role is for a senior software engineer in the Machine Learning Inference Applications team. This role is ... Description AWS Neuron is the complete software stack for the AWS Inferentia and Trainium...and performance optimization of core building blocks of LLM Inference: Attention, MLP, Quantization, Speculative Decoding, Mixture of…
- Amazon (Seattle, WA)
- …of applied scientists, system engineers, and product managers to deliver state-of-the-art inference capabilities for Generative AI applications. Your work will ... at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and...ML frameworks like PyTorch and JAX, enabling unparalleled ML inference and training performance. The Inference Enablement…
- quadric.io, Inc (Burlingame, CA)
- …GPNPU executes both NN graph code and conventional C++ DSP and control code. Role: The AI Inference Engineer at Quadric is the key bridge between the world ... general purpose neural processing unit (GPNPU) architecture. Quadric's co-optimized software and hardware are targeted to run neural network...AI/LLM models and Quadric's unique platforms. The AI Inference Engineer at Quadric…
- NVIDIA (Santa Clara, CA)
- …and ensure a consistent, high-impact go-to-market strategy. This role will focus on AI inference at scale, ensuring that customers and partners understand how ... platforms integrate CPUs, GPUs, DPUs, networking, and a full-stack software ecosystem to power AI at scale. ... + Develop Technical Positioning & Messaging - Translate NVIDIA's AI inference and accelerated computing technologies into…
- Capital One (San Francisco, CA)
- Lead AI Engineer (FM Hosting, LLM Inference) **Overview** At Capital One, we are creating responsible and reliable AI systems, changing banking for good. ... interact with Capital One. + Design, develop, test, deploy, and support AI software components including foundation model training, large language model…
- Bank of America (Addison, TX)
- Senior Engineer - AI Inference. Addison, Texas; Plano, Texas; Newark, Delaware; Charlotte, North Carolina; Kennesaw, Georgia. **To proceed with your application, ... must be at least 18 years of age.** Acknowledge (https://ghr.wd1.myworkdayjobs.com/Lateral-US/job/Addison/Senior-Engineer-AI-Inference_25029879) **Job Description:** At Bank…
- MongoDB (Palo Alto, CA)
- **About the Role** We're looking for a Senior Engineer to help build the next-generation inference platform that supports embedding models used for semantic ... MongoDB Atlas, supporting semantic search and hybrid retrieval. + Collaborate with AI engineers and researchers to productionize inference for embedding models…
- Red Hat (Raleigh, NC)
- …this role, your primary responsibility will be to build and release the Red Hat AI Inference runtimes, continuously improve the processes and tooling used by the ... open-source LLMs and vLLM to every enterprise. The Red Hat Inference team accelerates AI for the enterprise...research teams to scale SOTA deep learning products and software. As an ML Ops engineer, you…
- NVIDIA (Santa Clara, CA)
- NVIDIA seeks a Senior Software Engineer specializing in Deep Learning Inference for our growing team. As a key contributor, you will help design, build, and ... optimize the GPU-accelerated software that powers today's most sophisticated AI...inference libraries, vLLM and SGLang, FlashInfer and LLM software solutions. + Work with cross-collaborative teams across frameworks,…
- Red Hat (Boston, MA)
- …bring the power of open-source LLMs and vLLM to every enterprise. The Red Hat Inference team accelerates AI for the enterprise and brings operational simplicity to ... ai-leads-typescript-to-1/#the-top-open-source-projects-by-contributors)** on GitHub. As a Principal Machine Learning Engineer focused on vLLM, you will be at the…
- NVIDIA (CA)
- We are now looking for a Senior System Software Engineer to work on Dynamo & Triton Inference Server! NVIDIA is hiring software engineers for its ... In this role, you will develop open-source software to serve inference of trained AI models running on GPUs. You will balance a variety of objectives: build…
- Google (Sunnyvale, CA)
- Software Engineer III, Infrastructure, Inference Control Plane. Google, Sunnyvale, CA, USA. Mid-level. Experience driving progress, ... goes on and is growing every day. As a software engineer, you will work on a...continue to push technology forward. The mission of the Vertex AI Online Inference Infrastructure team is to…
- MongoDB (Palo Alto, CA)
- We're looking for a Lead Engineer, Inference Platform to join our team building the inference platform for embedding models that power semantic search, ... retrieval, and AI-native features across MongoDB Atlas. This role is part...Atlas and optimized for developer experience. As a Lead Engineer, Inference Platform, you'll be hands-on with…
- NVIDIA (Santa Clara, CA)
- …with open-source inference frameworks. Seeking a Senior Deep Learning Algorithms Engineer to improve innovative generative AI models like LLMs, VLMs, ... co-design. Your work will span multiple layers of the AI software stack, ranging from algorithm design to...NVIDIA platform integration and expand market adoption across the AI inference ecosystem. What we need to…