Big Data Hadoop ETL || 6-10 Years || Noida
Noida, Uttar Pradesh, India
3 weeks ago
6 days left to apply
Job Description
We are looking for an experienced Big Data Engineer with strong hands-on experience in Big Data ecosystem tools, database engineering, and ETL pipeline development. The ideal candidate has strong analytical and problem-solving skills, along with expertise in performance tuning and scheduling tools.

Key Responsibilities
- Design, develop, and optimize scalable Big Data pipelines.
- Work closely with cross-functional teams on data acquisition, transformation, and processing.
- Develop ETL workflows for data ingestion and processing using the Hadoop ecosystem.
- Build and maintain data solutions, ensuring performance, scalability, and reliability.
- Monitor, troubleshoot, and tune data pipelines to ensure optimal performance.

Mandatory Skills
- Big Data / Hadoop technologies: Hive, HQL, HDFS
- Programming: Python, PySpark, SQL (strong query writing)
- Schedulers: Control-M or an equivalent scheduler
- Database & ETL: strong experience with SQL Server, Oracle, or similar; ETL pipeline development and performance tuning

Preferred (Good to Have)
- GCP services: BigQuery, Composer, Dataproc, GCP cloud architecture
- Experience with Agile delivery methodology
- Terraform coding and IaC (Infrastructure as Code) knowledge
Additional Information
- Company Name: Yellow Octo LLP
- Industry: N/A
- Department: N/A
- Role Category: N/A
- Job Role: Mid-Senior level
- Education: No Restriction
- Job Type: On-site
- Gender: No Restriction
- Notice Period: Less Than 30 Days
- Years of Experience: 1 - Any Yrs
- Job Posted On: 3 weeks ago
- Application Ends: 6 days left to apply