Qualcomm

System level AI Solutions & Software Engineer (Staff level) - Riyadh, KSA


Riyadh, S01, SA · Full-time · Information Technology · April 13, 2026

Job Details

**Company:** Qualcomm Middle East Information Technology Company LLC

**Job Area:** Engineering Group, Engineering Group > Systems Engineering

**General Summary:**

**About Us**

Qualcomm is growing its presence in Riyadh and is hiring Data Centre Engineers to support our expanding infrastructure across the region. As Saudi Arabia accelerates its digital transformation under Vision 2030, Qualcomm is investing in world-class computing and data centre capabilities to power AI, cloud, and advanced connectivity at scale. This is a unique opportunity to work in a fast-growing technology hub, supporting critical environments and helping shape the future of data centre operations in the Kingdom and beyond.

**About the Role**

As a Qualcomm Datacenter AI Solutions Engineer, you will research, develop, optimize, and validate software, hardware, architecture, algorithms, and machine learning solutions that enable the deployment of cutting-edge AI datacenter technology. Qualcomm Solutions Engineers collaborate across functional teams to meet and exceed system-level requirements and standards. This is a great opportunity to innovate and develop leading-edge products and solutions built around best-in-class Qualcomm AI inference accelerators for data center and hybrid AI applications.
**We are looking for Engineers with 8+ years of experience.**

**Principal Duties and Responsibilities:**

* Lead the development of end-to-end AI/ML solutions that integrate Qualcomm AI hardware, system software, and ecosystem components to deliver best-in-class AI inference performance, power efficiency, and scalability
* Drive the design, development, deployment, and optimization of Generative AI and LLM-based applications, with a focus on production readiness and inference efficiency
* Contribute to and guide the implementation of model fine-tuning, distillation, and optimization strategies tailored for deployment on target hardware
* Apply deep systems-level expertise to research, design, develop, simulate, validate, and optimize AI systems spanning hardware, system software, AI frameworks, and models, while ensuring system-level requirements are met
* Perform AI model benchmarking, workload characterization, and performance analysis to influence system requirements, hardware/software co-design, and product direction
* Serve as a technical lead for customer engagements, supporting AI model onboarding, inference optimization, deployment, and performance tuning
* Own and drive system-level architecture and design, including requirements definition, interface specifications, performance targets, and implementation of new systems or enhancements to existing platforms
* Collaborate across cross-functional teams (hardware, software, tools, frameworks, and product) to deliver features, validate AI system correctness, and ensure high-quality execution
* Stay current with advancements in AI/ML models, inference techniques, and hardware/software innovations, and proactively translate them into impactful solutions
* Propose and drive new, innovative ideas that meaningfully improve products, platforms, or the developer experience
* Lead system-level debugging and triage, identify root causes across the stack, and clearly communicate findings, trade-offs, and recommendations to team members and stakeholders

**Preferred Qualifications & Skills:**

* Master's or PhD in Engineering, Computer Science, Information Systems, Physics, or a related discipline
* Strong proficiency in Python and experience with ML frameworks, APIs, REST services, and microservice-based architecture
* Hands-on experience designing, deploying, and operating AI/ML systems in production
* Solid understanding of Generative AI architectures, including transformers, diffusion models, and hybrid systems (LLMs, LVMs, embeddings)
* Experience with large-scale AI systems architecture, including microservices, distributed systems, event-driven designs, and fault-tolerant/resilient architectures
* Practical experience with AI inference serving, performance optimization, and scalability across heterogeneous hardware
* Experience with MLOps practices for AI application development, deployment, monitoring, and lifecycle management
* Familiarity with automation and DevOps tooling, including GitOps workflows, containerization (Docker), orchestration platforms (Kubernetes), and ML lifecycle tools
* Strong problem-solving skills with a customer- and solution-focused mindset
* Experience with cluster schedulers and resource managers (e.g., Slurm, PBS) and workload orchestration is a plus
* Experience with observability, monitoring, and debugging tools for ML pipelines and inference services
* Proven ability to operate effectively in a large, matrixed organization, influencing across teams
* Experience with fine-tuning and optimization of GenAI models, including reinforcement learning techniques, is a plus
* Well versed in open-source development practices and collaboration