
Data Engineer

Actively Reviewing Applications

Weekday AI (YC W21)

Kochi, Kerala, India | Full-Time | On-site | INR 10–22 LPA
Posted 1 day ago Apply by May 14, 2026

Job Description

This role is for one of Weekday's clients.

Salary: INR 13,00,000 (i.e., INR 13 LPA)

Minimum Experience: 6 years

Location: Kochi, Chennai, Coimbatore

Job Type: Full-time

We are seeking a highly skilled Azure Databricks Engineer with strong hands-on experience building and managing large-scale data processing solutions. In this role, you will design, develop, and optimize data pipelines using Azure Databricks and PySpark to support advanced analytics, reporting, and business intelligence initiatives. You will work closely with data engineers, analysts, and business stakeholders to transform raw data into reliable, scalable, and high-performance data solutions. The ideal candidate has a strong foundation in distributed data processing, cloud-based data platforms, and performance optimization within Azure environments. This position requires someone who is comfortable working with big data architectures, understands modern data engineering best practices, and can independently manage end-to-end data workflows.

Requirements

Key Responsibilities

  • Design, develop, and maintain scalable data pipelines using Azure Databricks
  • Implement large-scale data processing solutions using PySpark
  • Optimize Spark jobs for performance, scalability, and cost efficiency
  • Integrate data from multiple sources including structured and unstructured datasets
  • Collaborate with cross-functional teams to understand data requirements and deliver robust solutions
  • Perform data cleansing, transformation, and aggregation to support analytics use cases
  • Monitor and troubleshoot data workflows to ensure reliability and availability
  • Implement best practices for data governance, security, and compliance
  • Maintain documentation for data architecture, pipelines, and processes
  • Continuously improve data engineering processes through automation and optimization

What Makes You a Great Fit

  • Strong hands-on experience with Azure Databricks
  • Expertise in PySpark for distributed and large-scale data processing
  • Solid understanding of Spark architecture and performance tuning
  • Experience working with Azure cloud services and data storage solutions
  • Strong knowledge of ETL/ELT processes and data transformation techniques
  • Experience handling large datasets in distributed environments
  • Good understanding of data modeling and data warehousing concepts
  • Ability to troubleshoot complex data processing issues
  • Strong analytical and problem-solving skills
  • Effective communication and collaboration abilities

Key Skills

  • Azure Databricks, PySpark
