Databricks Engineer
Actively reviewing applications
McLaren Strategic Solutions (MSS)
Karnataka, India
Full-Time
On-site
INR 1–4 LPA
Posted 4 days ago • Apply by June 15, 2026
Job Description
About Us
Next Generation of Technology Consulting
Our approach is built on delivering value by combining our powerful ecosystem of platforms with capital-efficient execution.
We bring together deep domain expertise and our strength in technology to help the world’s leading businesses build their digital core, optimize operations, accelerate revenue growth, and deliver tangible outcomes at speed and scale.
Key Responsibilities
- Design and develop scalable data ingestion pipelines using Databricks and Apache Spark.
- Build and maintain ETL/ELT workflows to ingest structured and semi-structured data from various source systems.
- Implement data transformation logic using PySpark, SQL, and Delta Lake.
- Integrate data from APIs, databases, file systems, and streaming platforms.
- Optimize data processing performance and manage large-scale data workloads.
- Collaborate with Data Architects, Business Analysts, and QA teams to ensure accurate data delivery.
- Implement monitoring, logging, and error handling mechanisms for ingestion pipelines.
- Support CI/CD deployment processes for data engineering solutions.
- Ensure adherence to data governance, security, and data quality standards.
Required Qualifications
- 4+ years of experience in Data Engineering or Big Data development.
- Strong experience with Databricks and Apache Spark.
- Proficiency in Python (PySpark) and SQL.
- Experience building data pipelines in cloud environments such as AWS, Azure, or GCP.
- Knowledge of data ingestion frameworks and ETL tools.
- Experience working with structured and semi-structured data formats (JSON, Parquet, CSV, Avro).
- Familiarity with version control tools such as Git.
- Experience with Delta Lake architecture.
- Knowledge of workflow orchestration tools such as Airflow or Azure Data Factory.
- Experience with streaming platforms such as Kafka or Event Hub.
- Exposure to CI/CD pipelines and DevOps practices for data platforms.
Required Skills
Databricks, Apache Spark, PySpark, Python, SQL, Delta Lake, ETL/ELT, data engineering, big data, data ingestion, data transformation, data pipelines, workflow orchestration, Airflow, Azure Data Factory, Kafka, Event Hub, AWS, Azure, cloud environments, CI/CD, DevOps, Git, monitoring, logging, error handling, data governance, data quality, data formats (JSON, Parquet, CSV, Avro)