Data Engineer (Python, PySpark, AWS)
Hyderabad, Telangana, India
Job Description
Role Description
We are seeking a skilled Data Engineer with strong expertise in Python, PySpark, and the AWS data engineering stack to design, develop, and optimize large-scale ETL pipelines. The ideal candidate will have a solid foundation in data engineering best practices, automation, and modern cloud-based architectures, along with a passion for leveraging Generative AI tools to boost productivity and code quality.

Core Must-Have Skills
- Strong proficiency in Python and PySpark for developing and optimizing ETL pipelines.
- In-depth understanding of data engineering best practices, including data validation, transformation logic, and performance optimization.
- Experience working with large-scale data processing and distributed computing environments.

Good-to-Have / Preferred Skills
- Working knowledge of Scala programming, particularly for Spark-based use cases.
- Familiarity with AI-assisted development tools such as GitHub Copilot to enhance productivity and code quality.

AWS Data Engineering Stack Expertise
- Hands-on experience with AWS Glue, Lambda functions, EventBridge, SQS, SNS, DynamoDB, and Streams for building serverless and event-driven data pipelines.
- Proficiency in CloudWatch for creating dashboards, setting up alarms, and monitoring pipeline health.
- Basic working knowledge of AWS CDK and CloudFormation for infrastructure automation and deployment.

Skills
Python, PySpark, Data Validation, Transformation Logic
Additional Information
- Company Name
- UST
- Industry
- N/A
- Department
- N/A
- Role Category
- Python Developer
- Job Role
- Entry level
- Education
- No Restriction
- Job Types
- On-site
- Gender
- No Restriction
- Notice Period
- Less Than 30 Days
- Year of Experience
- 1 - Any Yrs
- Job Posted On
- 4 days ago
- Application Ends
- 3 weeks left to apply