
AI Data Engineer

Mogi I/O : OTT/Podcast/Short Video Apps for you

Chennai · Full-Time · 4–8 years
Posted 3 days ago · Apply by June 11, 2026

Job Description

Location: Bengaluru, India (Bagmane Tech Park)

Work Mode: Hybrid (3 Days WFO | 2 Days WFH)

Notice Period: Immediate to 30 Days

Compensation: INR 24,00,000 – 28,00,000

Experience: 6–8 years

Work Timings

Day shift with extended overlap with the US team.

Expected working window: 10:30/11:00 AM – 10:00/11:00 PM IST, with adequate breaks.

About The Client

We are hiring on behalf of our client, a leading solutions provider focused on technology, finance and accounting, and professional staffing. They support organizations with expert talent and customized solutions to drive digital transformation.

Role Overview

As a Lead Data Engineer, you will act as both a hands-on technical leader and a strategic data architect, owning next-generation unified analytics foundations across Digital, Stores, and Marketplace domains.

This role is responsible for defining the target-state data architecture, executing the complete Snowflake divestiture, and delivering a scalable, governed Databricks Lakehouse ecosystem with ≥95% enterprise KPI alignment.

Key Responsibilities

  • Define target-state enterprise data architecture using Databricks, Apache Spark, and AWS-native services.
  • Own and deliver the Snowflake divestiture strategy, ensuring zero residual dependency and uninterrupted reporting.
  • Design scalable, secure, and cost-optimized batch and streaming data pipelines.
  • Establish architectural standards for data modeling, storage formats, and performance optimization.
  • Design and build ETL/ELT pipelines using Python, Spark, and SQL for large-scale analytics.
  • Develop production-grade pipelines leveraging AWS S3, Lambda, EMR, and Databricks.
  • Enable real-time and near-real-time data processing using Kafka, Kinesis, and Spark Streaming.
  • Drive containerized deployments using Docker and Kubernetes.
  • Lead orchestration standards using Apache Airflow for complex workflows.
  • Define SLAs, SLOs, and operational playbooks for mission-critical analytics.
  • Mentor senior and mid-level engineers, raising overall engineering standards.

Must-Have Qualifications

  • 6–8+ years of experience in data engineering, distributed systems, and platform architecture with clear ownership.
  • Strong hands-on experience with Databricks and Apache Spark in large-scale production environments.
  • Deep AWS expertise (S3, Lambda, EMR).
  • Advanced Python for data processing, automation, and optimization.
  • Advanced SQL for complex queries, data modeling, and performance tuning.
  • Proven experience modernizing legacy platforms and migrating to Databricks/Spark Lakehouse architectures.
  • Strong exposure to data governance, lineage, cataloging, and enterprise metrics.

Certifications

  • Databricks Certified Data Engineer – Professional (Strongly Preferred)
  • AWS Solutions Architect – Associate or Professional (Preferred)

Note: Certification is preferred; however, exceptionally strong candidates without certification will also be considered.
