KubeCraftJobs

DevOps & Cloud Job Board

Intern 2026: AI Hybrid Cloud Platform Research Scientist

IBM

San Jose, CA

On-site
Junior
Internship
Posted January 06, 2026

Tech Stack

kubernetes docker python cplusplus golang java rust hugging-face pytorch deepspeed avature


Job Description

**Introduction**

IBM Research takes responsibility for technology and its role in society. Working at IBM Research means joining a team that invents what's next in computing, always choosing the big, urgent, and mind-bending work that endures and shapes generations. Our passion for discovery and excitement for defining the future of tech build a strong culture around solving problems for clients and seeing the real-world impact you can make. IBM's product and technology landscape includes Research, Software, and Infrastructure. Entering this domain positions you at the heart of IBM, where growth and innovation thrive.

**Your Role And Responsibilities**

We are seeking a PhD-level summer intern to help design and advance the next generation of hybrid cloud AI platforms. The role focuses on working with full-time IBM researchers on highly scalable systems for machine learning (training and inference) that are both novel and impactful. Our research builds on open-source platforms such as llm-d, Kubernetes, and KServe, and explores optimizations across the entire stack: GPU networking, model scheduling and serving, and AI platform optimization, including inference optimization, performance modeling, and compound AI / agentic systems. You will work with an agile team of researchers and engineers developing practical innovations in scalable GenAI systems that can impact thousands of developers and applications worldwide.
**Preferred Education**

Master's Degree

**Required Technical And Professional Expertise**

- PhD student in Computer Science, Computer Engineering, or a related discipline
- Research background in systems for generative AI (training or inference)
- Experience with distributed systems or microservices for data or ML workloads
- Familiarity with cloud-native platforms (Kubernetes, Docker, or hybrid cloud environments)
- Proficiency in at least one of the following languages: Python, C++, Go, Java, or Rust

**Preferred Technical And Professional Experience**

- Knowledge of open-source large language model frameworks (e.g., Hugging Face, PyTorch, DeepSpeed)
- Familiarity with open-source serving platforms such as vLLM, llm-d, and KServe
- Research or development experience in GPU networking and large-scale acceleration
- Research or development experience in inference performance modeling and optimization