AWS Data Engineer - PySpark & SQL
India, Kerala, Kochi
2 weeks ago
Data Modeling
Cloud Security
Data Warehousing
Data Engineering
Data Science
Cloud
Analytics
Athena
Data pipelines
PySpark
Redshift
Glue
Salary Not Disclosed
2 months left to apply
Job Description
Job Overview
We are looking for an experienced AWS Data Engineer with strong expertise in PySpark and SQL to build and maintain scalable data pipelines on AWS.
The role involves working with large datasets and supporting analytics, reporting, and data-driven applications.
Key Responsibilities
- Design, develop, and optimize data pipelines on AWS
- Build ETL/ELT workflows using PySpark
- Write efficient and complex SQL queries for data transformation
- Work with AWS services to ingest, process, and store large datasets
- Ensure data quality, performance, and reliability
- Collaborate with analytics, BI, and data science teams
- Troubleshoot and resolve production data issues
- Follow best practices for data engineering and cloud security
Required Skills
- Strong experience with AWS
- PySpark for data processing
- Advanced SQL
- Experience handling large-scale data systems
- AWS services such as S3, Glue, EMR, Redshift, Athena
- Knowledge of data warehousing and data modeling
- Exposure to CI/CD for data pipelines
Keywords: AWS Data Engineer, PySpark, SQL, Big Data, Cloud Data Engineer, ETL, Kochi, Trivandrum
Additional Information
- Company Name: Precision Staffers
- Industry: Data Infrastructure and Analytics; Technology, Information and Internet; Software Development
- Department: N/A
- Role Category: Information Technology
- Job Role: Mid-Senior level
- Education: No Restriction
- Job Type: On-site
- Employment Type: Full-Time
- Gender: No Restriction
- Notice Period: Immediate Joiner
- Years of Experience: 1+ years
- Job Posted On: 2 weeks ago
- Application Ends: 2 months left to apply