KubeCraftJobs

DevOps & Cloud Job Board

WXA4Z - DevOps Engineer

IBM

Bengaluru East, Karnataka

Hybrid
Mid Level
Full Time
Posted December 27, 2025

Tech Stack

python golang terraform helm kvm docker kubernetes grpc langchain jenkins github github-actions gitlab gitlab-ci ansible prometheus grafana elk amazon-web-services google-cloud-platform microsoft-azure avature


Job Description

**Introduction**

IBM Infrastructure is a catalyst that makes the world work better because our clients demand it. Heterogeneous environments, the explosion of data, digital automation, and cybersecurity threats require hybrid cloud infrastructure that only IBM can provide. Your ability to be creative, think ahead, and focus on innovation that matters is supported by our growth-minded culture as we continue to drive career development across our teams. Collaboration is key to IBM Infrastructure's success, as we bring together different business units and teams that balance their priorities in a way that best serves our clients' needs. IBM's product and technology landscape includes Research, Software, and Infrastructure. Entering this domain positions you at the heart of IBM, where growth and innovation thrive.

**Your Role And Responsibilities**

We are looking for a passionate and skilled DevOps Engineer to join our AI & Microservices team. You will work closely with backend engineers and ML practitioners to design, automate, and optimize infrastructure for high-performance APIs and AI workloads. This role demands a strong foundation in cloud-native DevOps practices, container orchestration, and modern deployment strategies.

**Required Technical And Professional Expertise**

- 5+ years of experience designing and maintaining CI/CD pipelines for Python/Golang-based microservices.
- Automate infrastructure provisioning using tools such as Terraform or Helm.
- Manage infrastructure using OpenShift and KVM.
- Manage containerized workloads using Docker and Kubernetes.
- Support APIs using HTTP, gRPC, and WebSocket protocols.
- Collaborate with developers working on Retrieval-Augmented Generation (RAG), LangChain, and other AI frameworks.
- Monitor and optimize the performance of LLM-based services.
- Implement observability tooling for logging, metrics, and tracing.
- Ensure high availability, scalability, and security of production systems.

**Preferred Technical And Professional Experience**

- Containers & Orchestration: proficiency in OpenShift, Docker, and Kubernetes.
- Programming & API Knowledge: familiarity with Python/Golang APIs and microservices architecture.
- AI Frameworks: understanding of LangChain, RAG, and LLMs (Large Language Models).
- Protocols: experience with HTTP, gRPC, and WebSocket.
- CI/CD & Automation: hands-on with Jenkins, GitHub Actions, GitLab CI, or similar.
- Infrastructure as Code: Terraform, Helm, or Ansible.
- Monitoring & Logging: Prometheus, Grafana, ELK Stack, or similar tools.
- Cloud Platforms: AWS, GCP, or Azure.