
Lead AI Engineer, Application Modernization

MongoDB
Full-time
On-site
Olympia
MongoDB’s mission is to empower innovators to create, transform, and disrupt industries by unleashing the power of software and data. We enable organizations of all sizes to easily build, scale, and run modern applications by helping them modernize legacy workloads, embrace innovation, and unleash AI. Our industry-leading developer data platform, MongoDB Atlas, is the only globally distributed, multi-cloud database and is available in more than 115 regions across AWS, Google Cloud, and Microsoft Azure. Atlas allows customers to build and run applications anywhere—on premises or across cloud providers. With offices worldwide and over 175,000 new developers signing up to use MongoDB every month, it’s no wonder that leading organizations, like Samsung and Toyota, trust MongoDB to build next-generation, AI-powered applications.

We're looking for a Lead AI Engineer with a background in building AI/ML platforms. We are a diverse and talented group of contributors building tools to migrate legacy applications to MongoDB Atlas, our database-as-a-service (DBaaS) offering. This is an exciting opportunity to lead and grow an established team that is undergoing rapid expansion. In this leader-of-leaders position, you will shape the future of the Application Modernization team.

Organizations struggle with legacy applications that lack scalability, resilience, and cloud compatibility. To modernize these systems, many migrate from relational databases to MongoDB—a leading developer data platform for transactional systems. MongoDB's Application Modernization team helps customers through this complex transition with tools like Relational Migrator and is now expanding to include AI-powered code modernization solutions to accelerate the migration process.

We are looking to speak to candidates who are based in the US or Canada for our hybrid working model.

Our ideal candidate will have

4+ years of technical leadership experience managing engineering teams and building platform infrastructure in fast-paced, early-stage environments

2+ years of hands-on experience building and deploying AI agent solutions using frameworks such as AutoGen, CrewAI, LangGraph, TaskWeaver, LangChain, Semantic Kernel, or evaluation suites like OpenAI Evals

Deep understanding of multi-agent coordination, task decomposition, and agent-to-agent communication patterns

Proven experience in distributed systems, platform engineering, or developer tooling

Expertise with container orchestration (Kubernetes, Docker) and cloud infrastructure (AWS/GCP/Azure)

Proficiency in Python and/or JavaScript/TypeScript for building scalable backend services and APIs

Experience with workflow orchestration tools and event-driven architectures

Strong background in building automated testing frameworks, benchmarking systems, and CI/CD pipelines

Experience with performance monitoring, metrics collection, and statistical analysis for system evaluation

Position Expectations

Lead the design and development of a scalable agent orchestrator platform that can deploy, manage, and coordinate multiple AI agents across different use cases

Architect and build a comprehensive benchmarking and evaluation suite for testing agent performance, reliability, and safety across multiple dimensions: accuracy, latency, cost-effectiveness, and robustness

Build robust APIs and SDKs that enable seamless integration of various agent frameworks and models

Establish engineering best practices for code quality, testing, documentation, and deployment

Collaborate closely with ML researchers, product managers, and security teams to define evaluation criteria and platform requirements

Drive technical decision-making and architecture reviews while balancing innovation with system stability

Stay current with the rapidly evolving agent ecosystem and evaluate new frameworks and approaches

Lead and coach a high-performing team of software engineers through the complexities of 0-1 platform development

Partner effectively with engineering and product leaders to align on direction and execution

Create a balanced environment that encourages both iterative improvement and rigorous evaluation

Success Measures

Within the first three months, you will have:

Familiarised yourself with the MongoDB database and aggregation language

Familiarised yourself with the problem space and the domain

Set up software development infrastructure (tech stack, build tools, etc.) to enable development with the relevant tech stacks

Started collaborating with your peers and contributed to code reviews

Established a deep technical understanding of agent orchestration frameworks, evaluation methodologies, and our existing platform architecture

Within six months, you will have:

Provided design and architectural guidance in extending current software and developing new software

Been involved in our recruitment of new team members

Identified what mentorship each individual needs to enable them to meet their goals

Delivered core components of the agent orchestrator platform with robust API interfaces and SDK foundations

Within 12 months, you will have:

Contributed to the vision and growth of our team

Continued to be involved in our recruitment of new team members

Delivered a comprehensive evaluation suite that provides reliable benchmarking across performance, cost, and safety dimensions

Been trusted to execute complex projects

Delivered at least one release of our products

Life at MongoDB