We’re hiring an MLE | MLOps Engineer for a fast-growing AI infrastructure startup building safety and reliability systems for large-scale AI deployments.

The Company Behind the Role:
- AI-native infrastructure product
- Focused on AI safety, reliability, and model optimization
- Backed by international investors
- Operating at significant production scale
- Small, highly technical team bridging research and product

The company builds systems that help organizations evaluate, optimize, and control AI model behavior in real-world production environments.

Your Impact:
- Own inference infrastructure end-to-end (latency, throughput, cost)
- Optimize model serving performance
- Build and scale vector search pipelines
- Define service health metrics and reliability KPIs
- Turn experimental research models into production-ready systems
- Manage deployment and performance across cloud-native environments

This role sits at the intersection of deep ML engineering and production infrastructure.

Tech Environment (High-Level):
- High-performance model serving frameworks
- Large-scale inference optimization
- Vector search & embedding pipelines
- Kubernetes-based infrastructure
- Observability & monitoring systems
- Performance benchmarking & tuning

(Full technical details shared during interviews)

Your Superpower:
- 3+ years shipping ML systems to production
- Strong hands-on experience optimizing inference performance
- Comfortable debugging real-world latency & throughput bottlenecks
- Experience across the stack (model layer → infra → monitoring)
- Strong engineering fundamentals

Bonus Points If:
- Experience with low-level performance optimization
- Experience with GPU kernels or custom serving optimizations
- Background in systems programming (e.g., Rust)
- Experience benchmarking large model workloads

Why Join:
- Competitive compensation + equity
- Hybrid setup in Europe + relocation support
- Comprehensive health coverage
- Top-tier hardware & tools
- Team off-sites
- Budget for learning & AI tooling