Senior Data Engineer Blue Ash, OH
Actively Reviewing Applications
Contract (C2C)
4–8 years
Posted 3 days ago • Apply by June 11, 2026
Job Description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, V-CENTRIX-US LLC, is seeking the following. Apply via Dice today!
We are seeking a Senior Data Engineer with deep expertise in Azure Databricks, Spark, Python, SQL, and distributed data pipeline optimization. This is a fully onsite C2C contract role based in Blue Ash, OH.
Role: Senior Data Engineer
Type: Contract (C2C)
Location & Onsite: Blue Ash, OH (5 days onsite)
Visa: Any visa acceptable
Interview Process: In-person onsite
Team Details: 10-member team; work independently, with pair programming sessions throughout the day
Top Skills: Azure Databricks, Python, Spark
Soft Skills: Problem-solving, attention to detail, ability to work independently and collaboratively in an agile team
VERY IMPORTANT DETAILS:
- Work location must be local
- Interviews will be in person, onsite
- Candidates must be willing to come onsite for their interview and work fully onsite with the team
- Prescreening includes 3 video questions; candidates must answer using their own knowledge and experience, no AI-generated responses
- Include a link to the candidate's LinkedIn profile with the submittal
Requirements:
- Senior experience as a Data Engineer
- Strong experience with Azure Databricks, Spark, Python
- Strong SQL skills and database experience
- Experience monitoring and optimizing Databricks clusters or workflows
- Experience working with Azure data services and integrating them with Databricks and enterprise data platforms
- Experience building and optimizing distributed data processing systems (partitions, joins, shuffles, cluster performance)
- Experience with data pipeline development using tools such as Delta Live Tables (DLT) or Databricks SQL
- Experience with orchestration, messaging services, or serverless components (e.g., Azure Functions)
- Experience with version control and CI/CD tools such as GitHub and GitHub Actions
- Experience using Terraform for cloud infrastructure provisioning
- Familiarity with SDLC and modern data engineering best practices
- Strong organizational skills with the ability to manage multiple priorities and work independently
- Experience with data governance, lineage, or cataloging tools (Purview, Unity Catalog)
Responsibilities:
- Analyze, design, and develop enterprise data solutions using Azure Databricks, Spark, Python, and SQL
- Develop, optimize, and maintain Spark/PySpark data pipelines, addressing performance issues such as data skew, partitioning, caching, and shuffle optimization
- Build and support Delta Lake tables and data models for analytical and operational use cases
- Apply reusable design patterns, data standards, and architectural guidelines, including collaboration with 84.51° when needed
- Use Terraform to provision and manage cloud and Databricks resources (Infrastructure as Code)
- Implement and maintain CI/CD workflows using GitHub and GitHub Actions
- Manage Git-based workflows for Databricks notebooks, jobs, and data engineering artifacts
- Troubleshoot failures and improve reliability across Databricks jobs, clusters, and data pipelines
- Apply cloud computing skills to deploy fixes, upgrades, and enhancements in Azure environments
- Collaborate with engineering teams to enhance tools, systems, development processes, and data security
- Participate in the development and communication of data strategy, standards, and roadmaps
- Create architectural diagrams, interface specifications, and design documentation
- Promote reuse of data assets and contribute to enterprise data catalog practices
- Provide timely support and communication to stakeholders and end users
- Mentor team members on data engineering best practices and emerging technologies
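The skew-handling responsibility above (data skew, partitioning, shuffle optimization) is commonly addressed with key salting. The sketch below is a framework-free illustration in plain Python, not Spark code: `partition_of` stands in for a shuffle's hash partitioner, and the `key#salt` format is an assumed convention, not any real API.

```python
import random
from collections import Counter

NUM_PARTITIONS = 8   # stand-in for a shuffle partition count
SALT_FACTOR = 8      # how many sub-keys one hot key is split into

def partition_of(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Hash-partition a key, the way a shuffle assigns rows to reducers."""
    return hash(key) % num_partitions

def salt(key: str, factor: int = SALT_FACTOR) -> str:
    """Spread one hot key across `factor` sub-keys by appending a random salt."""
    return f"{key}#{random.randrange(factor)}"

# A skewed dataset: one hot key carries 90% of the rows.
records = ["hot"] * 9000 + [f"k{i}" for i in range(1000)]

# Without salting, every "hot" row lands on the same partition.
before = Counter(partition_of(k) for k in records)

# With salting, "hot" is rewritten to hot#0 .. hot#7, so its rows spread
# over up to SALT_FACTOR partitions. (In a real join, the other side would
# also need to be exploded across the same salt range.)
after = Counter(partition_of(salt(k) if k == "hot" else k) for k in records)

print("max partition load before salting:", max(before.values()))
print("max partition load after salting: ", max(after.values()))
```

In Spark itself the same idea is expressed by adding a salt column before the wide transformation and dropping it afterward; newer runtimes can also mitigate skewed joins automatically via adaptive query execution.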