
Big Data Engineer - Python/SQL/ETL

Hyderabad, Telangana, India

1 month ago

Applicants: 0

Salary Not Disclosed


Job Description

Key Responsibilities:
- Design, develop, and support robust ETL pipelines to extract, transform, and load data into analytical products that drive strategic organizational goals.
- Develop and maintain data workflows on platforms such as Databricks and Apache Spark using Python and Scala.
- Create and support data visualizations using tools such as MicroStrategy, Power BI, or Tableau, with a preference for MicroStrategy.
- Implement streaming data solutions using frameworks such as Kafka for real-time data processing.
- Collaborate with cross-functional teams to gather requirements, design solutions, and ensure smooth data operations.
- Manage data storage and processing in cloud environments, with strong experience in AWS cloud services.
- Apply knowledge of data warehousing, data modeling, and SQL to optimize data flow and accessibility.
- Develop scripts and automation tools using Linux shell scripting and other languages as needed.
- Ensure continuous integration and continuous delivery (CI/CD) practices are followed for data pipeline deployments, using containerization and orchestration technologies.
- Troubleshoot production issues, optimize system performance, and ensure data accuracy and integrity.
- Work effectively within Agile development teams and contribute to sprint planning and reviews.

Skills & Experience:
- 7+ years of experience in technology with a focus on application development and production support.
- At least 5 years of experience developing ETL pipelines and data engineering workflows.
- Minimum 3 years of hands-on experience in ETL development and support using Python/Scala on Databricks/Spark platforms.
- Strong experience with data visualization tools, preferably MicroStrategy, Power BI, or Tableau.
- Proficient in Python, Apache Spark, Hive, and SQL.
- Solid understanding of data warehousing concepts, data modeling techniques, and analytics tools.
- Experience working with streaming data frameworks such as Kafka.
- Working knowledge of Core Java, Linux, SQL, and at least one scripting language.
- Experience with relational databases, preferably Oracle.
- Hands-on experience with AWS cloud platform services related to data engineering.
- Familiarity with CI/CD pipelines, containerization, and orchestration tools (e.g., Docker, Kubernetes).
- Exposure to Agile development methodologies.
- Strong interpersonal, communication, and collaboration skills.
- Ability and eagerness to quickly learn and adapt to new technologies.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Experience working in large-scale, enterprise data environments.
- Prior experience with cloud-native big data solutions and data governance best practices.

(ref:hirist.tech)
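For candidates unfamiliar with the extract-transform-load pattern central to this role, the sketch below shows its basic shape. The role's actual stack is Spark/Databricks; this minimal stand-in uses only Python's standard-library sqlite3, and the table and column names (orders, region_totals) are illustrative assumptions, not part of the job description.

```python
import sqlite3

# Minimal ETL sketch: extract raw rows, transform (aggregate) them,
# and load the result into a target table for analytics. In production
# the same three stages would run on Spark/Databricks at scale.

def extract(conn):
    # Extract: pull raw order rows from the source table.
    return conn.execute("SELECT order_id, amount, region FROM orders").fetchall()

def transform(rows):
    # Transform: aggregate order amounts per region.
    totals = {}
    for _, amount, region in rows:
        totals[region] = totals.get(region, 0) + amount
    return sorted(totals.items())

def load(conn, totals):
    # Load: write the aggregate into an analytical target table.
    conn.execute("CREATE TABLE IF NOT EXISTS region_totals (region TEXT, total REAL)")
    conn.executemany("INSERT INTO region_totals VALUES (?, ?)", totals)
    conn.commit()

def run_pipeline(conn):
    load(conn, transform(extract(conn)))

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL, region TEXT)")
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?, ?)",
        [(1, 100.0, "south"), (2, 50.0, "north"), (3, 25.0, "south")],
    )
    run_pipeline(conn)
    print(conn.execute("SELECT region, total FROM region_totals ORDER BY region").fetchall())
    # → [('north', 50.0), ('south', 125.0)]
```

The same separation of extract, transform, and load stages is what keeps a pipeline testable and supportable when it is ported to Spark DataFrames.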

Additional Information

Company Name
CURATAL
Industry
N/A
Department
N/A
Role Category
Data Analyst
Job Role
Mid-Senior level
Education
No Restriction
Job Types
On Site
Gender
No Restriction
Notice Period
Less Than 30 Days
Year of Experience
1 - Any Yrs
Job Posted On
1 month ago
Application Ends
N/A

Similar Jobs

IBM

3 weeks ago

Application Developer-RDBMS

IBM

ViaPlus

1 month ago

QA Engineer - Automation

ViaPlus

Oracle

4 weeks ago

Senior AI Applications Engineer

Oracle

Virtusa

1 month ago

Senior QA lead

Virtusa

Arm

1 month ago

Verification Engineer - Media IP

Arm

Discoveries Quintessential

3 weeks ago

Full Stack Engineer

Discoveries Quintessential

ModMed India

3 weeks ago

Senior Software Engineer 1

ModMed India

DISCO

3 weeks ago

Senior Software Engineer - India

DISCO

Labcorp

1 month ago

Asst Data Programmer

Labcorp

Infosys

3 weeks ago

Python Developer

Infosys