This role is open to candidates based in LATAM, Africa, and Eastern Europe. Please note that as this role supports U.S.-based clients, candidates must be available to work during U.S. business hours aligned with the client's time zone.

Our client is an AI-driven technology company building forecasting and attribution intelligence products powered by high-quality, analytics-ready data. Their teams work cross-functionally across data engineering, analytics, data science, and product to deliver reliable insights that support customer onboarding, reporting workflows, and advanced AI use cases in a fast-moving, execution-focused environment.

Location
Fully remote | 9 AM - 5 PM EST

Role Overview
The Data Engineer will help build and maintain reliable, scalable data pipelines that support analytics, forecasting, and AI-driven products. This is a hands-on, execution-focused contract role centered on data quality, pipeline reliability, and collaboration across analytics, data science, and product teams.

The role operates within a modern analytics engineering stack using Python, dbt, and Dagster, with a strong emphasis on supporting customer onboarding and reporting workflows.

Key Responsibilities

Data Pipeline Development & Orchestration
- Build and maintain scalable, fault-tolerant ELT pipelines using Python
- Orchestrate and monitor data workflows using Dagster
- Troubleshoot pipeline failures, performance issues, and data inconsistencies
- Monitor pipeline health using observability tools and metrics

Analytics Engineering & Data Modeling
- Develop, optimize, and document dbt models following analytics engineering best practices
- Model clean, analytics-ready datasets for BI, forecasting, and machine learning feature consumption
- Contribute to refactoring and improving existing data workflows as product needs evolve

Data Quality & Reliability
- Implement and maintain data quality checks and testing strategies
- Follow established team standards for SLAs, code quality, and deployments

Cross-Functional Collaboration
- Collaborate closely with data scientists to support forecasting and AI-driven use cases
- Work cross-functionally with analytics and product teams to ensure data meets business and product requirements

Qualifications

Experience
- 3+ years of professional experience in data engineering or analytics engineering
- Hands-on experience with dbt (Core or Cloud)
- Experience using Dagster or similar orchestration tools
- Experience with cloud data warehouses such as Snowflake, BigQuery, or Redshift
- Experience collaborating with Product, Analytics, or Data Science teams
- Ability to work independently and deliver results in a contract environment

Skills
- Strong proficiency in Python, including libraries such as pandas, SQLAlchemy, or psycopg2
- Advanced SQL skills, including CTEs, window functions, and query optimization
- Familiarity with modern ELT tools such as Airbyte, Fivetran, Meltano, or dltHub
- Strong troubleshooting skills for data pipelines, performance, and data quality issues
- Ability to follow established standards for reliability, testing, and deployment

Opportunity
This contract role offers the opportunity to contribute directly to AI-driven products by building high-impact data infrastructure used for forecasting, attribution, and reporting. You'll work within a modern analytics engineering stack, collaborate closely with technical teams, and gain hands-on exposure to real-world AI and analytics use cases in a fast-paced, product-driven environment.

Application Process
To be considered for this role, complete the following steps:
1. Fill in the application form
2. Record a video showcasing your skill set