Fetch

Data Platform Engineer


Meet Fetch AI & Data

AI & Data at Fetch sits at the center of how we understand our business, make decisions, and build intelligent products. The organization operates as an integrated AI & data ecosystem spanning multiple disciplines, including data engineering, analytics engineering, machine learning, experimentation, and data platforms, all working together to turn data into durable business and customer impact.

Teams operate in complex problem spaces where requirements evolve, tradeoffs are constant, and the right answer is rarely obvious. Success depends on strong technical judgment, comfort with ambiguity, and the ability to gather context and make informed decisions while balancing quality, performance, scalability, and responsible use.

Practitioners across this org contribute hands-on to production systems, analytical foundations, and intelligent features. You will collaborate closely with product, platform, and engineering partners, help shape standards and best practices, and ensure our AI and data capabilities scale reliably as Fetch grows.

About the Role:

Fetch is building a modern, cloud-native data platform that powers analytics, experimentation, and machine learning across the company. As a Data Platform Engineer, you’ll focus on building and operating reliable data ingestion pipelines and core platform services that enable teams to work with data at scale.

This role is ideal for engineers who enjoy hands-on execution, learning distributed systems, and growing their platform engineering skill set while working closely with senior engineers on complex systems.

You’ll partner with product, analytics, and engineering teams to ensure data is ingested, processed, and made available reliably, while maintaining strong operational excellence across the platform.

What You’ll Do:

Build & Operate Data Pipelines

  • Design, implement, and maintain data ingestion pipelines using AWS-native data tools and distributed processing frameworks.
  • Support batch and streaming ingestion patterns with a focus on reliability, scalability, and observability.

Platform Operations & Reliability

  • Operate and improve core data platform services, addressing incidents, performance issues, and operational toil.
  • Implement monitoring, alerting, and runbooks to improve platform stability and on-call readiness.

Distributed Systems Support

  • Work with distributed data processing systems (e.g., Spark-based workloads) and orchestration frameworks.
  • Debug production issues across compute, storage, and networking layers.

Infrastructure & Automation

  • Contribute to Infrastructure as Code (Terraform, CloudFormation, or CDK) and CI/CD workflows.
  • Help improve automation around deployments, scaling, and platform maintenance.

Cross-Team Collaboration

  • Partner with data producers and consumers to onboard pipelines, troubleshoot issues, and improve platform usability.
  • Learn and apply platform standards and best practices defined by senior engineers.

AI-Assisted Engineering

  • Use AI-assisted tools to accelerate development, troubleshoot issues, and validate infrastructure or pipeline code, while ensuring correctness, security, and performance through testing and review.

Minimum Qualifications

  • 3+ years of experience in data platform, data engineering, or platform engineering roles.
  • Experience working with AWS and cloud-based data tooling.
  • Familiarity with distributed data processing concepts (e.g., Spark, batch and/or streaming systems).
  • Proficiency in at least one programming language (Python, Java, Go, or Scala preferred).
  • Experience with CI/CD, Infrastructure as Code, or operating production systems.
  • Ability to learn quickly, debug complex systems, and collaborate effectively across teams.
  • Experience using AI-assisted development tools responsibly to improve development speed and quality.
  • Bachelor’s degree in Computer Science, Engineering, or a related technical field.

Preferred Qualifications

  • Hands-on experience with AWS data services (e.g., S3, Glue, EMR, Kinesis, MSK).
  • Exposure to data orchestration frameworks or workflow engines (e.g., Airflow, Step Functions).
  • Familiarity with data observability, monitoring, or operational metrics.
  • Interest in growing ownership across platform or distributed systems domains.

This is a full-time role that can be held from one of our US offices or remotely in the United States.

Compensation: At Fetch, we offer competitive compensation packages including base, equity, and benefits to the exceptional folks we hire. The base salary range for this position is $119,000 - $140,000. Discover our benefits and how our employees live rewarded at https://fetch.com/careers.