Senior Big Data Engineer - Hybrid in Phoenix & NYC

Jobs via Dice

Queens, NY

Hybrid
Senior
Full Time
Posted January 06, 2026

Tech Stack

hadoop snowflake kafka spark pig airflow python scala java docker kubernetes tableau power-bi amazon-web-services microsoft-azure google-cloud-platform


Job Description

Dice is the leading career destination for tech experts at every stage of their careers. Our client, Bright Sol, is seeking the following. Apply via Dice today!

Role: Senior Big Data Engineer
Location: New York, NY & Phoenix, AZ - Hybrid
Type: Contract
Experience: 11+ years minimum

**Overview**

We are seeking a highly skilled Data Engineer with deep expertise in Big Data technologies, data lakes, and modern analytics platforms. The ideal candidate will design, build, and optimize scalable data pipelines that support advanced analytics and business intelligence. This role requires strong hands-on experience with the Hadoop ecosystem, Snowflake, Kafka, Spark, and other distributed data platforms.

**Responsibilities**

- Design, develop, and maintain data pipelines for ingesting, transforming, and delivering large-scale datasets.
- Manage and optimize data lake architectures to ensure scalability, reliability, and performance.
- Implement and support Hadoop-based solutions for distributed data processing.
- Integrate and manage Snowflake for cloud-based data warehousing and analytics.
- Build and maintain real-time streaming solutions using Kafka.
- Develop and optimize Spark applications for batch and streaming workloads.
- Collaborate with data analysts, data scientists, and business stakeholders to deliver actionable insights.
- Ensure data quality, governance, and security across all platforms.
- Monitor and troubleshoot data pipelines to maintain high availability and performance.

**Skills & Qualifications**

Core Skills

- Big Data ecosystem: Hadoop (HDFS, Hive, Pig, MapReduce), Spark, Kafka.
- Cloud data warehousing: Snowflake (preferred), Redshift, BigQuery.
- Data lake management: experience with large-scale data storage and retrieval.
- Data pipelines: ETL/ELT design, orchestration tools (Airflow, NiFi, etc.).
- Programming & scripting: Python, Scala, Java, SQL.
- Data analysis: strong ability to query, analyze, and interpret large datasets.
- Distributed systems: understanding of scalability, fault tolerance, and performance optimization.
- DevOps & automation: CI/CD pipelines, containerization (Docker, Kubernetes).
- Visualization & BI tools: familiarity with Tableau, Power BI, or similar.

**Preferred Qualifications**

- 12+ years of experience in data engineering or big data roles.
- Experience with cloud platforms (AWS, Azure, Google Cloud Platform).
- Strong problem-solving and analytical mindset.
- Excellent communication and collaboration skills.