KubeCraftJobs

DevOps & Cloud Job Board

AIML - ML Infrastructure Engineer, ML Platform & Technology - ML Compute

Apple

Seattle, WA 98105

On-site
Mid Level
Posted January 06, 2026

Tech Stack

amazon-web-services tensorflow pytorch python golang kubernetes pyspark


Job Description

Apple is where individual imaginations gather together, committing to the values that lead to great work. Every new product we build, service we create, or Apple Store experience we deliver is the result of us making each other’s ideas stronger. That happens because every one of us shares a belief that we can make something wonderful and share it with the world, changing lives for the better. It’s the diversity of our people and their thinking that inspires the innovation that runs through everything we do. When we bring everybody in, we can do the best work of our lives. Here, you’ll do more than join something - you’ll add something!

**Description**

As a Senior/Staff Engineer on the Foundation Model Compute Infra team, you will design and scale the scheduling and orchestration systems that power Apple’s large-scale foundation model training and inference workloads across TPU clusters. You will drive innovations in resource management, cluster efficiency, and platform reliability, enabling Apple’s next-generation AI models to train and serve at scale.

**Responsibilities**

- Lead the design and evolution of the scheduling platform that manages large-scale TPU workloads across multi-region clusters, supporting both training and inference.
- Develop topology-aware and quota-aware schedulers to improve cluster utilization, job latency, and fairness (the quota-aware admission idea is sketched below).
- Collaborate with the Apple Foundation Model team to integrate advanced distributed computing frameworks (Pathways, Ray, Beam, JetStream) into the platform or expose them as reliable, scalable services.
- Automate complex operational workflows for quota updates, job lifecycle management, and resource provisioning to reduce on-call and dev-ops overhead.
- Mentor engineers and partner across teams to influence the direction of Apple’s large-scale distributed compute strategy.

**Preferred Qualifications**

- Advanced degree in Computer Science, engineering, or a related field
- Proficiency in working with and debugging accelerators such as GPUs, TPUs, and AWS Trainium
- Proficiency in ML training and deployment frameworks such as JAX, TensorFlow, PyTorch, TensorRT, and vLLM

**Minimum Qualifications**

- Bachelor’s degree in Computer Science, engineering, or a related field
- Experience with foundation model training and inference workloads across TPU clusters
- 5+ years of hands-on experience building scalable backend systems for training and evaluation of machine learning models
- Proficiency in relevant programming languages such as Python or Go
- Strong expertise in distributed systems, reliability and scalability, containerization, and cloud platforms
- Proficiency in cloud computing infrastructure and tools: Kubernetes, Ray, PySpark
- Ability to clearly and concisely communicate technical and architectural problems while working with partners to iteratively find solutions

Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.
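
The responsibilities above mention quota-aware scheduling of accelerator workloads. The sketch below is a minimal, hypothetical illustration of that idea in Python: a job is admitted only if its chip request fits within the remaining quota for its team and region. It is not Apple's platform or any specific scheduler's API; all class, job, and quota names are assumptions made for the example.

```python
from dataclasses import dataclass, field

# Minimal sketch of quota-aware admission. A job is admitted only if its
# accelerator request fits within the remaining (team, region) quota.
# All names are hypothetical and purely illustrative.

@dataclass
class Job:
    name: str
    team: str
    region: str
    chips: int  # accelerator chips requested


@dataclass
class QuotaTracker:
    # quota[(team, region)] -> total chips granted to that team in that region
    quota: dict
    usage: dict = field(default_factory=dict)

    def try_admit(self, job: Job) -> bool:
        key = (job.team, job.region)
        used = self.usage.get(key, 0)
        if used + job.chips > self.quota.get(key, 0):
            return False  # over quota: leave the job queued
        self.usage[key] = used + job.chips
        return True

    def release(self, job: Job) -> None:
        key = (job.team, job.region)
        self.usage[key] = max(0, self.usage.get(key, 0) - job.chips)


if __name__ == "__main__":
    tracker = QuotaTracker(quota={("foundation-models", "us-west"): 512})
    pretrain = Job("pretrain-run-01", "foundation-models", "us-west", chips=384)
    eval_job = Job("eval-sweep-02", "foundation-models", "us-west", chips=256)
    print(tracker.try_admit(pretrain))  # True: 384 <= 512
    print(tracker.try_admit(eval_job))  # False: 384 + 256 > 512
    tracker.release(pretrain)
    print(tracker.try_admit(eval_job))  # True once the first job's chips are released
```

A production scheduler of the kind described in the posting would layer topology awareness (placing a job's workers on well-connected accelerator slices), fairness, and preemption on top of a simple admission check like this.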