Engineering Manager, Data (Remote, US)

Renew Home
United States
Remote
Full-time
Posted about 1 month ago

Job Description

Who We Are

Renew Home is on a mission to change how we power the world by making it easier for customers to save energy and money at home as part of the largest residential virtual power plant in North America.

We partner with industry-leading brands to better manage residential energy for users by prioritizing efficiency, savings, and comfort — and cleaner energy for everyone.

We are an Equal Opportunity employer striving to create a diverse, equitable, and inclusive work environment where everyone feels that they have a voice that is heard.

We strongly encourage candidates to visit our website at www.renewhome.com to learn more about the world-changing work we are doing.

Role Summary

Renew Home is seeking an experienced Data Engineering Manager to lead a team of 4-5 data engineers in building and maintaining secure, scalable data infrastructure and pipelines. You will oversee the development of batch and real-time data pipelines, data lake architectures, data archival and compliance, and database optimizations to support our growing business needs. This role requires a blend of technical expertise, strategic vision, and strong leadership to guide the team in delivering reliable data solutions while collaborating with cross-functional stakeholders. As a manager at this level, you will be expected to demonstrate strength in your technical domain, people management, scope and impact, leadership, communication, and decision-making, consistent with expectations for engineering managers who can handle moderately complex issues and lead small teams effectively.

What You Will Do

  • Manage and mentor a team of data engineers, supporting career growth through regular feedback, one-on-ones, and performance reviews. Ensure the team follows company policies and foster a respectful, inclusive work environment.
  • Guide the team through agile development practices — sprint planning, stand-ups, retrospectives, and effective prioritization.
  • Design and oversee scalable, reliable data pipelines — both batch and real-time.
  • Work closely with software engineers, data scientists, and cross-functional product teams to deliver clean, consistent, and high-quality data across the company.
  • Uphold data quality, integrity, and compliance with governance standards.
  • Maintain system performance, data integrity, and uptime. Manage and participate in on-call rotations and ensure strong operational standards.
  • Champion modern data practices, cloud technologies, and automation to improve efficiency, scalability, and cost-effectiveness.
  • Keep the team aligned with evolving data technologies and best practices while fostering a culture of learning and innovation.
  • Work with tools and platforms such as Python, Postgres, AWS/GCP, Prefect (or Airflow), Redis, Git, and Terraform.

Requirements

  • 8+ years of experience in data engineering or related fields, including 2+ years in a leadership role.
  • Comfort balancing leadership and technical execution. This role includes regular hands-on engineering work — designing and reviewing architecture, writing and reviewing code, and contributing directly to implementation — while also managing, mentoring, and enabling the team.
  • Proven experience designing and delivering large-scale data pipelines (batch and streaming).
  • Hands-on experience with cloud data platforms (AWS or GCP) and modern data lake/warehouse technologies (e.g., Apache Iceberg, Snowflake).
  • Proficiency in Python and SQL, plus solid software engineering fundamentals.
  • Strong understanding of workflow orchestration tools (e.g., Prefect, Airflow, or Dagster).
  • Familiarity with data streaming systems (Kafka, Kinesis, or Pub/Sub) and infrastructure as code (Terraform, CDK).
  • Excellent communication and leadership skills — able to motivate, mentor, and guide technical teams.
  • A passion for building reliable systems, continuous improvement, and collaborative teamwork.
  • Bonus points:
    • Experience with data warehousing (Redshift, BigQuery) or machine learning pipelines.
    • Exposure to Apache Spark.
    • Contributions to open-source data projects or cloud certifications (AWS or GCP).
