On-site
Mid Level
Full Time
Posted January 14, 2026
Tech Stack
json
kubernetes
kafka
spark
python
Job Description
**Job Title: Big Data Operations Engineer**
**Duration: 1 Year+**
**Location: Singapore**
**GENERAL DESCRIPTION**
We are seeking a passionate Big Data Operations Engineer to join the Global Data Platform team, which serves as the enterprise‑wide single source of truth for reporting and analytics. The platform also provides a common data model and a self‑service data science environment.
**KEY FEATURES OF THE POSITION**
**Functional / Technical Responsibilities**
- Provide daily support and troubleshooting of data pipelines from ingestion to consumption
- Ensure defined thresholds are met for data availability
- Actively support release cycles and advocate for changes that prevent production disruption
- Maintain and model JSON‑based schemas and metadata for reuse across the organization (see the sketch after this list)
- Perform Data Engineer responsibilities to implement corrective measures (historizing tables, managing dependencies, resolving quality issues, etc.)
- Take operational responsibility for Common Data Model tables within a dedicated access zone
- Participate in the agile setup and support development teams
- Manage the operating model together with Level 1 and Level 2 support, ensuring agreed support levels and a clear task distribution for operational activities (e.g., monitoring service availability and performance)
- Provide Level 3 support (incident & problem management) for the IT service and related data pipelines
- Continuously enhance service availability, performance, capacity, and knowledge management
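To give a flavor of the JSON‑based schema and metadata modeling mentioned above, here is a minimal sketch in Python (one of the listed stack languages). The table name, field layout, and `validate_record` helper are purely illustrative assumptions, not part of this role's actual tooling; the sketch uses only the standard library.

```python
import json

# Hypothetical JSON-based table schema with reusable metadata
# (table name, zone, owner, and fields are illustrative only).
TRADE_SCHEMA = {
    "table": "common.trades",
    "zone": "common_data_model",
    "owner": "global-data-platform",
    "fields": {
        "trade_id":   {"type": str,   "required": True},
        "trade_date": {"type": str,   "required": True},  # ISO-8601 date
        "notional":   {"type": float, "required": True},
        "comment":    {"type": str,   "required": False},
    },
}

def validate_record(record: dict, schema: dict) -> list[str]:
    """Return a list of schema violations for one ingested record."""
    errors = []
    for name, spec in schema["fields"].items():
        if name not in record:
            if spec["required"]:
                errors.append(f"missing required field: {name}")
            continue
        if not isinstance(record[name], spec["type"]):
            errors.append(f"wrong type for {name}: expected {spec['type'].__name__}")
    return errors

if __name__ == "__main__":
    raw = '{"trade_id": "T-1", "trade_date": "2026-01-14", "notional": "oops"}'
    print(validate_record(json.loads(raw), TRADE_SCHEMA))
    # -> ['wrong type for notional: expected float']
```

In practice such checks would run inside the ingestion pipeline so that quality issues are caught before data reaches the Common Data Model access zone.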
**SKILLS REQUIREMENTS**
**Professional**
- Higher education in Computer Science (university degree or equivalent diploma)
- Strong communication, planning, and coordination skills
- Strong team player with a proactive, collaborative, and customer‑focused approach
- Excellent understanding of data technology and operational management
- Passion for driving a data‑driven culture and empowering users
**Technical**
- Minimum 5 years of experience in cluster environment operations (Kubernetes, Kafka, Spark, distributed storage such as S3/HDFS)
- Strong Python proficiency
- Expertise in DataOps processes
- Experience with monitoring tools for platform health, ingestion pipelines, and consumer applications
- Experience with Linux-based infrastructure including advanced scripting
- Expert SQL skills (both traditional DWH and distributed environments)
- Experience maintaining databases, load processes, and CI/CD pipelines
- Several years of experience in a similar role within a complex data environment
Interested candidates, please send your resume to [email protected]