Bedrock Robotics

Perception Sensor Software Engineer


Join the team bringing advanced autonomy to the built world

At Bedrock, we’re moving AI out of the lab and into the real world. Our team is composed of industry veterans who helped launch Waymo, scaled Segment to a $3.2B acquisition, and grew Uber Freight to $5B in revenue. Today, we’re deploying autonomous systems on heavy construction machinery across the country, accelerating schedules on billion-dollar infrastructure projects and improving safety on job sites. Backed by $350M in funding, we’re working quickly to close the gap between America's surging demand for housing, data centers, and manufacturing hubs and the construction industry's growing labor shortage.

This is where algorithms meet steel-toed boots. You’ll collaborate with construction veterans and world-class engineers to solve physical-world problems that simulations can’t touch. If you're ready to apply cutting-edge technology to solve meaningful problems alongside a talented team, we'd love to have you join us.

Bedrock is bringing autonomy to the construction industry! We’re a group of veterans from the autonomous vehicle industry who are passionate about bringing the benefits of automation to areas of construction currently underserved by the market.

We’re building out our first fleet of retrofitted autonomous construction machines, and we’re looking for a Software Engineer to work on the Hardware Abstraction Layer for sensors. Your day-to-day work will involve developing and optimizing our sensor pipelines (lidar, cameras, IMUs, GPS, etc.).

What You’ll Do:

  • Own the sensor validation framework for our safety-critical system

  • Develop ML models and novel approaches for sensor data validation and anomaly detection across camera, lidar, IMU, and all other sensors on our robots

  • Build the data pipelines, ground-truth infrastructure, and evaluation frameworks that quantify sensor data quality and expose corner cases

  • Partner with the perception and behavior teams so they can deeply understand and trust the data feeding their models

What We're Looking For:

  • MSc or other advanced degree in Computer Science, Robotics, or a related field

  • 4+ years of professional experience shipping deep learning models to production (ideally on robotic or other embedded platforms), with strong hands-on experience in PyTorch or another deep learning framework

  • Proficient in Python and comfortable reading and writing at least one systems language (e.g. C++, Rust)

  • Hands-on experience incorporating raw sensor data (camera, lidar, IMU) into deep learning pipelines

  • Solid grounding in 3D geometry, sensor calibration (intrinsics/extrinsics), coordinate transforms, and camera/image reprojection algorithms

  • Strong data analysis skills across statistical characterization of sensor data, corner-case and anomaly discovery, and evaluation design

Ways to Stand Out from the Crowd:

One or more of the following:

  • Deep knowledge of sensor physics: noise models, failure modes, how environmental conditions affect lidar and cameras

  • Prior experience with safety-critical systems

  • Experience with the Rust programming language

  • Published work in top-tier venues such as ICRA, IROS, CVPR, ECCV, ICCV, CoRL, or RSS

Other Special Aspects of the Role:

  • Based in the Bay Area with the ability to be onsite at our testing facility and SF office 3-4 days a week

Our roles are often flexible. If you don't fit all the criteria, or you're in another location (especially one where we have an office, like SF or NY), please apply anyway! We'd love to consider you.