The company seeks an AI Engineer with an MLOps and LLMOps background to own projects’ pipelines and CI/CD infrastructure, supporting the translation of business requirements into production-ready AI/GenAI features. You’ll build scalable MLOps frameworks, convert AI research into seamless integrations, and optimize models through prompt engineering and embeddings. Strong skills in Python or TypeScript, machine learning (including deep learning and NLP/LLMs), and the ability to explain complex concepts to all stakeholders are essential.
Responsibilities:
● Design, implement, and maintain scalable ML and LLM pipelines for training, validation, deployment, and monitoring.
● Apply your knowledge of software engineering and AI systems to develop AI solutions that directly address business problems.
● Take ownership of implementing and optimizing applied AI components, ensuring they meet project needs at high complexity and scale.
● Partner with other data and AI professionals to consolidate the team’s work into comprehensive pipelines for continuous integration, deployment, and delivery.
● Own end-to-end infrastructure for model CI/CD, including versioning, automated testing, and evaluations.
● Develop and incorporate AI and GenAI solutions and pipelines while adhering to industry best practices, including moderation, security, and compliance standards.
● Lead the design, measurement, and evaluation of AI model outputs, developing standard and custom metrics to ensure alignment with business objectives.
● Translate AI research into production-ready features, delivering robust and scalable AI components that integrate seamlessly with larger systems.
● Drive the selection and application of appropriate evaluation metrics, ensuring that AI solutions are robust, unbiased, and meet all necessary performance standards.
Qualifications:
● Hands-on experience building and operating MLOps platforms (Docker, Kubernetes, Terraform) and LLMOps frameworks.
● Strong experience with CI/CD in the GCP environment, leveraging services and frameworks such as Vertex AI, Kubeflow, DataFlow, Cloud Build, Cloud Deploy, Artifact Registry, and Container Registry.
● Experience configuring and deploying LLM guardrails while ensuring security and compliance across AI-powered solutions.
● Expertise in tracking application execution with observability tools such as LangFuse and Phoenix, alongside Cloud Monitoring dashboards and alerts.
● Strong understanding of the trade-offs between Generative AI models, balancing cost and performance for each use case.
● Demonstrable experience in applied AI, with a foundation in machine learning, deep learning, NLP, LLMs, and statistical analysis.
● Experience with data embeddings and vector databases, understanding the trade-offs between available options and leveraging them to optimize data ingestion.
● Experience with software development in Python and/or TypeScript for API development and orchestration.
● Skilled in creating and adjusting prompts for complex AI systems to meet diverse project requirements.
● Familiarity with testing and evaluating AI systems using state-of-the-art methods and best practices.
● Ability to communicate complex AI solutions and concepts effectively to technical and non-technical stakeholders.
Bonus Points:
● Hold a GCP certification in the AI/ML, Cloud, or Data segments.
● Experience deploying and maintaining large-scale, data-intensive solutions with high throughput, low latency, and data security.
● Experience with advanced Agentic AI architecture, performance optimization of machine learning models, and the integration of AI into larger software ecosystems.
● Contributions to AI thought leadership in the industry.
Benefits:
● Health insurance.
● Access to Udemy + Platzi.
● Mentorship program.
And more!