
Data Engineer


Glocomms

On-site
Posted 9 hours ago · Apply by June 8, 2026

Job Description

Location: South Market, San Francisco

Pay: 150K–200K

*Unable to provide sponsorship at this time*

About the Role

A fast‑growing consumer technology startup is hiring its first dedicated Data Engineer to build and own the full data infrastructure that powers real‑time decisioning, product features, analytics, and revenue-critical systems. This is a high‑impact role where you will architect the entire data stack end‑to‑end, partnering closely with both engineering and quantitative teams in a fast‑moving environment.

You will design the warehouse architecture, own transformation layers, build reliable pipelines, and develop the real‑time systems that enable the company to scale. This is a rare opportunity to define standards, tooling, quality frameworks, and system design from the ground up.

What You'll Do

  • Architect and manage the data warehouse: Design and optimize the company's warehouse environment for performance, reliability, and cost efficiency as data volumes grow.
  • Own the transformation layer: Build and maintain the core transformation framework (e.g., dbt), including models, documentation, testing, and CI/CD.
  • Build and operate data pipelines: Develop robust, well‑monitored data pipelines with modern orchestration tools, ensuring failures are detected and handled reliably.
  • Develop real‑time data systems: Design streaming infrastructure for use cases where low latency matters, such as live user behavior, in‑session signals, and dynamic business logic.
  • Integrate data into production systems: Implement reverse ETL and other mechanisms to ensure model outputs and derived data reach production systems where they drive real‑time decisions.
  • Establish data quality frameworks: Build testing, monitoring, and validation systems to ensure accuracy, trust, and reliability in a rapidly scaling environment.
  • Collaborate cross‑functionally: Work closely with both data science/quantitative teams and software engineering teams, enabling fast iteration and data‑driven decision making across the organization.

Qualifications

  • Strong software engineering fundamentals with clean, maintainable, well‑tested code.
  • Deep experience with SQL and Python in production environments.
  • Hands‑on experience with major data warehouse technologies (e.g., BigQuery, Snowflake, or Redshift).
  • Experience with transformation tooling such as dbt.
  • Experience building and operating pipelines using modern orchestration tools (e.g., Airflow, Dagster, Prefect).
  • Understanding of data modeling approaches (dimensional modeling, SCDs, incremental models).
  • Ability to work autonomously and make strong architectural decisions in a high‑ownership environment.

Preferred Experience

  • Exposure to streaming and real‑time systems (Kafka, Pub/Sub, Flink, etc.).
  • Familiarity with modern data stack tooling (e.g., Fivetran, analytics engineering best practices).
  • Experience working in high‑accuracy, high‑throughput domains such as financial, quantitative, or real‑time decisioning environments.
  • Background as an early or founding data engineering hire.
