Software Engineer
Bengaluru, Karnataka, India
3 weeks ago
2 months left to apply
Job Description
About Business Unit:
The Architecture Team plays a pivotal role in the end-to-end design, governance, and strategic direction of product development within Epsilon People Cloud (EPC). As a centre of technical excellence, the team ensures that every product feature is engineered to meet the highest standards of scalability, security, performance, and maintainability. Its responsibilities span architectural ownership of critical product features, driving techno-product leadership, enforcing architectural governance, and ensuring systems are built with scalability, security, and compliance in mind. The team designs multi-cloud and hybrid-cloud solutions that support seamless integration across diverse environments and contributes significantly to interoperability between EPC products and the broader enterprise ecosystem. It fosters innovation and technical leadership while actively collaborating with key partners to align technology decisions with business goals. Through this, the Architecture Team ensures the delivery of future-ready, enterprise-grade, performant, secure, and resilient platforms that form the backbone of Epsilon People Cloud.
The candidate will be a member of the Data Engineering team and will be responsible for developing, unit testing, and implementing applications for the data engineering group, predominantly in the Hadoop ecosystem and on Databricks.
Why we are looking for you:
- You have good knowledge of Databricks implementations and are ready to apply this expertise to migrating and modernizing Hadoop ecosystem workloads onto Databricks.
- You are hands-on with big data technologies like Spark, PySpark, Hive, and Hadoop, and enjoy working with massive datasets.
- You have experience working with AWS.
- You have strong experience in building and optimizing large-scale data engineering pipelines.
- You are self-driven, thrive in solving complex problems, and can mentor and guide junior engineers.
- You have experience with data design, data modeling, and performance tuning in distributed systems.
- You take pride in writing efficient, maintainable code and automating processes to improve efficiency.
- You enjoy new challenges and are solution oriented.
What you will enjoy in this role:
- Opportunity to design, build, and optimize large-scale data solutions that power Epsilon’s core products.
- Exposure to diverse data engineering challenges, from ingestion and transformation to performance optimization and data governance.
- Hands-on experience with modern data platforms, cloud ecosystems, and automation frameworks.
- A collaborative and agile work environment that values innovation, learning, and continuous improvement.
- Being part of a global Data team that directly impacts data-driven decision-making for top-tier clients.
Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL frameworks, using Spark, PySpark, Hive, and SQL on the Hadoop ecosystem and PySpark, SQL, and Delta Lake on Databricks, to support large-scale data integration and analytics.
- Develop efficient, reusable, and reliable code for data processing and transformation.
- Optimize and tune Spark jobs for performance and scalability.
- Work with Technical Leads, Architects and data platform teams to understand and implement robust data solutions.
- Contribute to data modeling, quality, and governance initiatives to ensure trusted data delivery.
- Perform detailed analysis, troubleshooting, and RCA for production issues and optimize system reliability.
- Participate in code reviews and enforce coding and design best practices.
- Collaborate with cross-functional teams to deliver high-quality software solutions.
- Address deployment challenges and help deliver reliable solutions.
- Interact with technical leads and architects to discover solutions that help solve challenges faced by Data Engineering teams.
- Contribute to an environment focused on continuous improvement of the development and delivery process, with the goal of delivering outstanding software.
Qualifications:
- BE / B.Tech / MCA – No correspondence course.
- 3-5 years of experience
- Must have hands-on experience in building and optimizing data solutions on the Hadoop ecosystem leveraging PySpark.
- Must have good knowledge of and experience with Databricks.
- Must have experience working with AWS.
- Experience with performance tuning for large data sets.
- Experience with JIRA for user-story/bug tracking.
- Experience with Git/Bitbucket.
Additional Information
- Company Name: Epsilon
- Industry: Advertising Services
- Department: N/A
- Role Category: Engineering
- Job Role: Mid-Senior level
- Education: No Restriction
- Job Types: On-site
- Employment Types: Full-Time
- Gender: No Restriction
- Notice Period: Immediate Joiner
- Offered Salary: INR 1 - 4 LPA
- Year of Experience: 1 - Any Yrs
- Job Posted On: 3 weeks ago
- Application Ends: 2 months left to apply