Job Description
This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Data Engineer II, Fandango (AWS/Redshift/PySpark) in the United States.
This role offers the opportunity to build and maintain robust data pipelines and systems that power critical business operations for a leading entertainment and digital media platform. You will work closely with other data engineers, software teams, and product managers to ensure data flows efficiently from internal and external sources and arrives structured, accurate, and accessible for analytics and decision-making. The position involves hands-on development with cloud-based tools, real-time and batch data processing, and optimization of large-scale datasets. You will contribute to the evolution of the company’s data architecture, troubleshoot production issues, and actively improve workflows, all while operating in a fully remote, collaborative, and high-impact environment.
Accountabilities
- Design, build, test, scale, and maintain data pipelines from diverse source systems, including APIs, structured and semi-structured files, and real-time feeds.
- Participate actively in all aspects of the team’s Agile processes, including stand-ups, sprint planning, and retrospectives.
- Collaborate with engineers and product managers to understand business and technical data requirements.
- Troubleshoot and resolve production data issues in a timely manner.
- Continuously improve code quality, data workflows, and documentation to ensure reliability and maintainability.
- Contribute to the development of near real-time and batch data pipelines using AWS, Redshift, PySpark, and related tools.
Requirements
- Bachelor’s degree in Computer Science, Computer Engineering, or related technical field, or equivalent practical experience.
- 5+ years of applied experience in data engineering, including data pipeline development, orchestration, data modeling, and data lake solutions.
- Proficiency in Python and SQL, with experience writing reusable, efficient code for automation and analysis.
- Experience with data warehousing and dimensional modeling.
- Hands-on experience with large datasets, ETL frameworks, and workflow orchestration tools such as Apache Airflow / Amazon MWAA.
- Expertise with AWS data management services (S3, Redshift, DynamoDB, Athena, EMR, Glue, Lambda).
- Familiarity with near real-time data processing and batch pipeline development.
- Experience working in agile/scrum environments.
- Strong collaboration and problem-solving skills, and the ability to learn new tools and methods quickly.
- Preferred: familiarity with Talend, Informatica, Terraform, or Test-Driven Development practices.