- NVIDIA (Santa Clara, CA)
- …how you can make a lasting impact on the world. We are seeking a highly skilled Senior On-Device Model Inference Optimization Engineer to join our team ... you'll be doing: + Develop and implement strategies to optimize AI model inference for on-device deployment. + Employ techniques like pruning, quantization,…
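The listing names pruning and quantization as core on-device optimization techniques. As a minimal illustrative sketch (not NVIDIA's actual pipeline), symmetric 8-bit post-training quantization maps float weights to int8 with a single scale factor, trading a small, bounded reconstruction error for a 4x size reduction versus float32:

```python
def quantize_int8(weights):
    """Map float weights to int8 using one symmetric scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    # Clamp to the signed 8-bit range after rounding.
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Per-weight error is bounded by scale / 2 (here scale = 0.01).
```

Production on-device stacks typically go further (per-channel scales, calibration data, quantization-aware training), but the storage/accuracy trade-off is the same one shown here.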
- LinkedIn (Mountain View, CA)
- …large models together. The team is responsible for scaling LinkedIn's AI model training, feature engineering, and serving with hundreds of billions of parameters ... and serving performance optimizations across billions of user queries. Model Training Infrastructure: As an engineer on the AI...infra on top of native cloud, enable GPU-based inference for a large variety of use cases, CUDA…
- Qualcomm (Santa Clara, CA)
- …strategy for both compute and data lake architecture requirements. + Defining scalable AI model optimization strategy for mission mode inference. + Managing the AI ... of how Wi-Fi platforms are architected, provisioned, and used. As a Senior AI Platform Product Manager for Qualcomm's Wireless Infrastructure Networking (WIN)…