
Senior Databricks Engineer

Bridgenext

India · Full-Time
Posted 6 days ago · Apply by June 29, 2026

Job Description

Job ID: Sen-Eng-Pun-1277

Location: Pune / Remote

Position: Senior Databricks Engineer

Experience: 5–7 Years

Employment Type: Full-time

Role Overview

We are seeking a highly skilled Senior Databricks Engineer with 3–6 years of experience in modern data engineering, distributed data processing, and cloud-based analytics. The ideal candidate will have strong hands-on expertise with Databricks, PySpark, and Delta Lake, along with experience with at least one major cloud platform (Azure, AWS, or GCP). Knowledge of the Azure data ecosystem is a strong plus.

This role involves designing scalable pipelines, optimizing Databricks workloads, and collaborating closely with cross-functional teams to deliver enterprise-grade data solutions.

Key Responsibilities

Databricks Engineering

  • Develop, optimize, and maintain ETL/ELT pipelines using Databricks (PySpark, Spark SQL, Delta Lake).
  • Design and deploy distributed data processing workflows for batch and streaming use cases.
  • Implement best practices for performance tuning, cost optimization, and cluster configuration.
  • Work with Delta Lake for data versioning, incremental pipelines, and reliability.

Cloud Data Platform Integration

  • Build solutions on Azure, AWS, or GCP using cloud-native services integrated with Databricks.
  • Ingest, transform, and process large datasets using cloud storage and compute services.
  • Work with APIs, connectors, and cloud-native data orchestration tools.

Azure Data Services (Good to Have)

  • Exposure to Azure Data Lake Storage (ADLS), Azure Data Factory, Azure Synapse Analytics, Azure SQL, Azure Event Hubs, Azure Functions, etc.
  • Support end-to-end pipelines covering ingestion, transformation, storage, governance, and monitoring.

Data Engineering & Development

  • Write high-quality, production-grade Python, PySpark, and SQL code.
  • Develop reusable data frameworks, utilities, and automation scripts.
  • Participate in code reviews and enforce engineering best practices.

Collaboration & Delivery

  • Work closely with Data Architects, Analysts, and Scientists to implement scalable data solutions.
  • Contribute to solution design documents, data models, and architecture diagrams.
  • Ensure solutions adhere to security, governance, and compliance standards (e.g., Unity Catalog).

Required Skills & Qualifications

  • 3–6 years of experience in Data Engineering or Big Data platforms.
  • Strong hands-on experience with Databricks, PySpark, Spark SQL, and Delta Lake.
  • Experience with at least one cloud provider (Azure, AWS, or GCP).
  • Strong SQL programming and data modeling concepts.
  • Understanding of distributed computing, performance tuning, and cost-efficient design.
  • Experience with Git, CI/CD, and basic DevOps practices.
  • Familiarity with workflow orchestration (Databricks Workflows, Airflow, ADF, etc.).

Preferred / Good-to-Have

  • Experience with Azure data services (ADLS, ADF, Synapse, Key Vault, Event Hubs).
  • Understanding of Unity Catalog, RBAC, and data governance practices.
  • Experience with MLflow, serverless compute, or Delta Live Tables.
  • Knowledge of containerization and serverless technologies (Docker, Kubernetes, Functions/Lambda).
  • Relevant certifications: Databricks Certified Data Engineer Associate or Professional.
