to design, build, and scale enterprise-grade AI platforms leveraging frontier Large Language Models (LLMs). This role sits at the intersection of AI engineering, platform architecture, and applied GenAI, with a strong emphasis on productionization in regulated environments (financial services, wealth, capital markets). You will play a key role in operationalizing AI at scale, building reusable capabilities, and enabling secure, governed adoption of LLM-powered solutions across the enterprise.
Key Responsibilities
AI Platform Engineering
· Design and build scalable AI platforms supporting LLMs, RAG pipelines, and multi-model orchestration
· Develop reusable frameworks for prompt management, model routing, evaluation, and monitoring
· Implement LLMOps/MLOps pipelines for continuous integration, deployment, and lifecycle management
· Architect API-first AI services for enterprise-wide consumption
Frontier LLM Integration
· Integrate and optimize models from providers such as OpenAI, Anthropic, and Google DeepMind, as well as open-source ecosystems
· Build multi-model strategies (closed and open source) balancing performance, cost, and governance
· Implement advanced techniques, including:
  · Retrieval-Augmented Generation (RAG)
  · Tool use / agents
  · Fine-tuning and embeddings
  · Context optimization and memory systems
Enterprise AI & Governance
· Design systems aligned with security, compliance, and data privacy requirements
· Implement guardrails, auditability, and explainability in AI workflows
· Enable safe AI deployment in distributed environments (e.g., advisor desktops, hybrid cloud)
Applied AI Solutions
· Build AI-driven use cases such as:
  · Intelligent document processing (e.g., wealth plans, research documents)
  · Advisor copilots and decision support systems
  · Knowledge assistants and enterprise search
· Partner with business teams to translate use cases into scalable AI solutions
Performance & Evaluation
· Develop evaluation frameworks for accuracy, hallucination detection, and model performance
· Optimize latency, throughput, and cost for production deployments
· Establish benchmarking and observability standards
Required Qualifications
· 7–12+ years in software engineering, including 3+ years in AI/ML engineering or GenAI
· Deep understanding of transformer models, LLM architecture, prompt engineering, and context handling
· Experience building production-grade AI systems (not just proofs of concept)
Preferred Qualifications
· Experience in financial services, wealth management, or capital markets
· Familiarity with regulated AI deployments (compliance, DLP, governance)
· Exposure to agentic AI systems and autonomous workflows
· Experience with fine-tuning, LoRA, and model optimization
· Knowledge of data engineering pipelines and real-time architectures