Lead II - Data Engineering-AWS
Actively Reviewing Applications
UST
Bengaluru
Full-Time
4–8 years
Posted 3 days ago
•
Apply by June 11, 2026
Job Description
UST is looking for a skilled AWS Data Lead with 8 to 10 years of experience to join its team and contribute to the development and maintenance of robust data pipelines and systems on the AWS platform, with a focus on leveraging PySpark for efficient data processing. In this role, you will play a crucial part in ensuring the efficient flow, storage, and processing of data for our organization.
Responsibilities
Data Pipeline Development: Design, implement, and maintain scalable and efficient data pipelines on the AWS platform using tools such as AWS Glue, Apache Spark, and PySpark.
AWS Services Utilization: Leverage AWS services like S3, Glue, Athena, and others to build end-to-end data solutions.
ETL Processing with PySpark: Develop and optimize ETL processes using PySpark to facilitate seamless data extraction, transformation, and loading.
Requirements
Experience: Proven experience as a Data Lead with a focus on AWS data solutions, including hands-on experience with PySpark.
Programming Skills: Proficient in Python, with advanced experience in PySpark for efficient data processing.
ETL Tools: Hands-on experience with AWS Glue or other ETL tools for data processing.
Skills
Data Engineering, AWS, AWS Glue, PySpark, ETL