Engineer I, Enterprise Data Lakehouse
LPL Financial Global Capability Center
India
Full-Time
On-site
Posted 4 days ago • Apply by June 2, 2026
Job Description
What if you could build a career where ambition meets innovation?
At LPL’s Global Capability Center, you'll find a collaborative culture where your voice matters, integrity guides every decision, and technology fuels progress. Your skills, talents, and ideas will redefine what's possible. LPL's success reflects its exceptional employees, who together pursue one noble purpose: empowering financial advisors to deliver personalized advice for all who need it. We’re proud to be expanding and reaching new heights in Hyderabad.
Join us as we create something extraordinary together.
Job Overview
We are seeking a hands-on Enterprise Data Lake/Lakehouse Engineer to design, build, and operate robust data lake and lakehouse solutions that enable analytics, reporting, and AI-driven products. This role will be pivotal in bridging the gap between traditional data warehouses and modern data lakes, ensuring seamless data integration, governance, and accessibility for business intelligence and advanced analytics.
Responsibilities
- Implement and maintain scalable data lake and lakehouse architectures using cloud-native services (AWS S3, Glue, Lake Formation, Delta Lake, Snowflake, etc.)
- Develop and optimize end-to-end data pipelines (batch and streaming) for ingesting, transforming, and storing structured and unstructured data at scale
- Integrate diverse data sources and ensure efficient, secure, and reliable data ingestion and processing
- Implement and enforce data governance, cataloging, lineage, and access controls (e.g., AWS DataZone / Glue Data Catalog or Unity Catalog, Collibra, Atlan)
- Collaborate with cross-functional teams (data scientists, BI engineers, product managers) to translate business needs into reliable, observable, and governed data products
- Drive adoption of modern data engineering frameworks (dbt, Airflow, Delta Live Tables, etc.) and DevOps practices (IaC, CI/CD, automated testing, monitoring)
- Champion data quality, security, and compliance (encryption, PII handling, GDPR, HIPAA, etc.) across all data lake/lakehouse operations
- Mentor and guide team members, contribute to platform roadmaps, and promote best practices in data engineering and lakehouse design
- Stay current with emerging trends in data lakehouse technologies, open-source tools, and cloud platforms
We’re looking for strong collaborators who deliver exceptional client experiences and thrive in fast-paced, team-oriented environments. Our ideal candidates pursue greatness, act with integrity, and are driven to help our clients succeed. We value those who embrace creativity and continuous improvement, and who contribute to a culture where we win together and share joy in our work.
Requirements
- 3+ years of experience in data engineering, software engineering, or cloud engineering, with at least 4 years focused on data lake or lakehouse environments in AWS
- Bachelor’s degree in Data Science, Computer Science, or a related field; Master’s degree preferred
- Experience establishing and developing high-performing engineering teams
- Demonstrable hands-on experience with:
- Cloud data lake architectures: AWS S3, Glue, Lake Formation, Snowflake, or similar
- Data lake design patterns: raw, curated, consumption zones; medallion architecture
- Data versioning and schema evolution: e.g., Delta Lake, Apache Iceberg
- Data governance and cataloging: Unity Catalog, Collibra, Atlan, or AWS Glue Data Catalog (experience with multiple tools preferred)
- Programming: Python and/or SQL (production code, reusable libraries, tests)
- Pipeline orchestration: Airflow, Step Functions, dbt, or similar
- DevOps for data: Terraform/CloudFormation, CI/CD, monitoring, and runbook creation
- Strong understanding of data modeling, data quality, and secure data onboarding/governance
- Experience with both batch and real-time data processing
- Experience with Spark, Snowflake, or other big data frameworks
- AWS and/or Snowflake architect or developer certifications
- Demonstrated use of AI/ML tools to augment engineering productivity (prompting for code generation, LLMs for docs/tests, query optimization)
Core technology stack:
- AWS (S3, Glue, Lake Formation, IAM), Snowflake
- SQL, Python
- dbt, Airflow, Step Functions
- Terraform/CloudFormation, CI/CD (GitHub Actions, Jenkins)
- Observability: Dynatrace (preferred), Datadog, or Prometheus
- LLM/AI augmentation tooling (preferred)