PA2026Q1JB031 - Senior Data Engineer
SS&C Technologies
India, Maharashtra
Full-Time
Posted 1 day ago • Apply by June 18, 2026
Job Description
As a leading financial services and healthcare technology company based on revenue, SS&C is headquartered in Windsor, Connecticut, and has 27,000+ employees in 35 countries. Some 20,000 financial services and healthcare organizations, from the world's largest companies to small and mid-market firms, rely on SS&C for expertise, scale, and technology.
Job Summary
We are seeking an experienced Senior Data Engineer to build, optimize, and maintain scalable data pipelines and infrastructure in a modern lakehouse environment. You will work closely with Data Architects to implement well-defined data products, schemas, and patterns, ensuring reliable data ingestion, transformation, quality, and distribution. This role requires strong hands-on expertise with both batch and streaming systems, as well as a deep focus on performance, reliability, and operational excellence.
Key Responsibilities
- Implement and maintain end-to-end data pipelines for data acquisition from diverse sources, including databases, APIs, files, and messaging systems such as Kafka.
- Build robust data validation, enrichment, and transformation workflows using Python and PySpark.
- Develop and optimize data storage and querying layers using technologies such as Apache Iceberg, Trino, StarRocks, and Snowflake.
- Implement and maintain dimensional data models, including Star and Snowflake schemas, as defined by data architecture standards.
- Integrate and manage streaming data flows using Kafka for both ingestion and real-time data distribution.
- Design and implement data quality checks, monitoring, and alerting to ensure high data reliability.
- Contribute to metadata management, data governance, and security practices, including access controls and data masking.
- Enable data distribution and consumption through files, APIs, Kafka, Snowflake data sharing, and analytics tools.
- Optimize pipeline performance, cost, and scalability while troubleshooting and resolving production issues.
- Collaborate closely with data architects, analysts, data scientists, and stakeholders to deliver high-quality data products.
- Mentor junior engineers and promote best practices in code quality, testing, and CI/CD for data pipelines.
Required Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- 5+ years of hands-on experience in data engineering roles, including at least 2 years working with big data or lakehouse platforms.
- Strong proficiency in Python and PySpark for building scalable data processing pipelines.
- Hands-on experience with analytical and query platforms such as Trino, StarRocks, and Snowflake.
- Experience working with open table formats, particularly Apache Iceberg.
- Proven experience with streaming technologies, especially Apache Kafka.
- Solid understanding of dimensional modeling and data warehousing concepts.
- Familiarity with data quality frameworks, metadata management, governance tools, and security best practices.
- Experience with cloud platforms such as AWS, Azure, or GCP, and infrastructure-as-code tools.
- Strong problem-solving skills with experience debugging and tuning complex data pipelines.
- Excellent communication and collaboration skills.
Preferred Qualifications
- Experience building and operating large-scale real-time and batch data platforms.
- Familiarity with orchestration tools such as Airflow or Dagster.
- Experience with CI/CD practices for data engineering workflows.
- Familiarity with BI tools and analytic dashboard integrations.
- Relevant certifications (e.g., Databricks, Snowflake, Confluent) or contributions to open-source projects.
SS&C Technologies is an Equal Employment Opportunity employer and does not discriminate against any applicant for employment or employee on the basis of race, color, religious creed, gender, age, marital status, sexual orientation, national origin, disability, veteran status or any other classification protected by applicable discrimination laws.
Required Skills
Python
Cloud Platforms
Data Modeling
AWS
Snowflake
Microsoft Azure
Google Cloud Platform
Data Governance
Apache Kafka
Trino
Databricks
CI/CD
Business Intelligence
Apache
Analytics
Debugging
Data architecture
Apache Airflow
Data products
Apache Iceberg
Data ingestion
Data platforms
Data pipelines
PySpark
Code quality
Metadata management
Lakehouse
Computer Science