
AI Engineer (Generative AI / MLOps / AI Agents)

Yochana
7 hours ago
Full-time
On-site
Key Responsibilities

Generative AI & LLM Engineering
• Design, fine-tune, and deploy Large Language Models (LLMs) for insurance-specific use cases including document intelligence, claims summarization, policy interpretation, and underwriting Q&A.
• Build Retrieval-Augmented Generation (RAG) pipelines using vector databases (e.g., Azure AI Search, Pinecone, ChromaDB) to ground LLM outputs in enterprise knowledge bases.
• Develop prompt engineering frameworks and systematic evaluation pipelines to ensure LLM output quality, consistency, and safety in regulated insurance contexts.
• Integrate LLM capabilities with internal data platforms via LangChain, LlamaIndex, or Semantic Kernel.
• Evaluate and benchmark foundation models (OpenAI GPT-4o, Azure OpenAI, Claude, Mistral, Llama) against insurance-specific tasks to guide platform selection.
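The retrieve-then-ground pattern behind the RAG responsibility above can be sketched in a few lines. This is a toy illustration only: a bag-of-words cosine similarity stands in for a real embedding model and vector store (Azure AI Search, Pinecone, ChromaDB), and the knowledge-base entries are invented examples.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": word counts. A real pipeline would call an
    # embedding model and store vectors in a vector database.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Ground the LLM in retrieved enterprise context before asking,
    # so answers cite the knowledge base rather than model memory.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

knowledge_base = [
    "Policy P-100 covers water damage from burst pipes.",
    "Claims over $10,000 require adjuster review.",
    "Underwriting guidelines exclude flood zones A and V.",
]

prompt = build_prompt("Does the policy cover burst pipes?", knowledge_base)
```

The final prompt would then be sent to the chosen foundation model; the grounding context is what keeps outputs auditable in a regulated setting.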

AI Agents & Automation
• Architect and implement autonomous AI agents capable of multi-step reasoning, tool use, and decision-making for workflows such as FNOL triage, claims routing, policy lookup, and compliance checks.
• Build agentic frameworks using patterns such as ReAct, Chain-of-Thought, and Tool-Augmented Agents to handle complex, multi-turn insurance workflows.
• Design human-in-the-loop (HITL) checkpoints and escalation logic to ensure AI agents operate within defined risk and compliance boundaries.
• Integrate agents with internal APIs, data platforms, and enterprise systems using orchestration tools such as Azure Logic Apps, Apache Airflow, or Databricks Workflows.
• Develop guardrails, monitoring, and audit logging for all deployed agents to meet regulatory and governance standards.
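The combination of tool use, HITL checkpoints, and audit logging described above can be sketched as a minimal agent loop. Everything here is hypothetical for illustration: the tool names (`lookup_policy`, `route_claim`), the dollar thresholds, and the fixed plan all stand in for an LLM-driven reasoning step in a real agentic framework.

```python
from typing import Callable

# Tool registry: a real agent would expose these to the LLM for
# tool-augmented reasoning (e.g., a ReAct loop).
TOOLS: dict[str, Callable[[dict], str]] = {}

def tool(fn):
    TOOLS[fn.__name__] = fn
    return fn

@tool
def lookup_policy(claim):
    return f"policy {claim['policy_id']} is active"

@tool
def route_claim(claim):
    # Hypothetical routing rule: large claims go to a human adjuster.
    dest = "adjuster" if claim["amount"] > 5000 else "auto-approval"
    return f"claim routed to {dest}"

def run_agent(claim: dict, plan: list[str]) -> list[str]:
    """Execute a plan of tool calls, keeping an audit trace and
    escalating to a human (HITL checkpoint) past a risk boundary."""
    trace = []  # audit log for governance and regulatory review
    for step in plan:
        if claim["amount"] > 50000:  # assumed compliance boundary
            trace.append("ESCALATED: human review required")
            break
        trace.append(TOOLS[step](claim))
    return trace

trace = run_agent({"policy_id": "P-100", "amount": 12000},
                  ["lookup_policy", "route_claim"])
```

In production, the plan would come from the model's own reasoning rather than a fixed list, and the trace would be shipped to a monitoring and audit store.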

MLOps & Model Deployment
• Build and maintain end-to-end MLOps pipelines covering model training, versioning, validation, deployment, and monitoring using MLflow, Azure ML, and Databricks.
• Implement CI/CD pipelines for ML models using Azure DevOps or GitHub Actions, enabling reliable, repeatable model releases.
• Deploy models as REST APIs or batch inference services on Azure Kubernetes Service (AKS) or Azure Container Apps, ensuring scalability and low-latency response.
• Establish model monitoring frameworks to detect data drift, model degradation, and prediction anomalies in production.
• Manage the model registry and lineage tracking to maintain governance and auditability of all AI assets.
• Collaborate with data engineering teams to ensure feature pipelines are production-grade, versioned, and integrated with the Feature Store on Databricks or Azure ML.
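The data-drift monitoring responsibility above amounts to comparing live feature distributions against the training baseline. A crude sketch, assuming a standardized mean-shift check (a stand-in for the PSI or Kolmogorov-Smirnov tests a production monitor would typically run; the threshold of 3.0 is an arbitrary illustration):

```python
import statistics

def drift_score(baseline: list[float], live: list[float]) -> float:
    """Standardized shift of the live mean relative to the training
    baseline: how many baseline standard deviations the live data
    has moved. Zero means no shift."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(live) - mu) / sigma

# Hypothetical feature values: training baseline vs. recent traffic.
baseline = [100, 102, 98, 101, 99, 103, 97, 100]
live = [140, 150, 138, 145]

alert = drift_score(baseline, live) > 3.0  # alert threshold is an assumption
```

When `alert` fires, a real pipeline would log the event, page the on-call engineer, and potentially trigger retraining through the CI/CD pipeline.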

Collaboration & Delivery
• Work closely with business analysts, actuaries, underwriters, and claims professionals to translate domain requirements into AI solution designs.
• Participate in Agile/Scrum ceremonies including sprint planning, standups, and retrospectives as an active delivery contributor.
• Produce clear, well-structured technical documentation including solution designs, API specs, model cards, and deployment runbooks.
• Mentor junior engineers and contribute to internal AI engineering best practices and standards.

Experience
• 3-5 years of professional experience in AI/ML engineering, with demonstrated delivery of production-grade AI systems.
• Hands-on experience building and deploying LLM-powered applications using frameworks such as LangChain, LlamaIndex, or Semantic Kernel.
• Proven experience implementing MLOps pipelines in cloud environments (Azure preferred).
• Experience developing AI agents or automation workflows using agentic frameworks.
• Prior experience in financial services, insurance, or regulated industries is strongly preferred.