This role calls for deep expertise in AI Engineering, distributed systems, and Conversational AI. You will lead the design and development of an enterprise-grade conversational AI platform, building intelligent virtual assistants and AI-driven solutions for enterprise customers.
You’ll work at the intersection of large-scale distributed infrastructure and cutting-edge AI/ML, ensuring that our conversational products are scalable and reliable and deliver state-of-the-art experiences in natural language understanding and dialogue.
This position combines technical leadership with hands-on development, offering the opportunity to shape the future of conversational AI products used by thousands of enterprise users.
Responsibilities:
Design and build distributed software systems and microservices to support large-scale conversational AI solutions, ensuring high availability, fault tolerance, and performance under heavy load.
Lead the development of advanced Conversational AI capabilities (NLP and generative AI models), delivering intelligent dialogue systems and virtual assistants for enterprise users. This includes working on natural language understanding, intent recognition, and integration of large language models to enhance user interactions.
Ensure end-to-end quality of AI-driven applications, including rigorous testing, test automation, performance tuning, and maintaining CI/CD pipelines for continuous integration and deployment of ML models. You will drive best practices in MLOps to streamline model training, validation, and rollout to production.
Deploy and orchestrate AI services on cloud platforms (e.g., AWS, GCP, or Azure), using containerization and orchestration technologies such as Docker and Kubernetes to achieve scalable, secure, and reliable infrastructure for conversational AI products.
Architect solutions for real-time data processing and streaming (using technologies such as Kafka) to enable responsive, data-driven AI interactions, and ensure efficient data storage and retrieval by leveraging both relational and NoSQL database systems for conversation context and analytics.
Collaborate closely with research scientists, ML engineers, and product teams to integrate the latest AI breakthroughs into the platform. Drive innovation by contributing to open-source projects or research publications, and foster a culture of knowledge-sharing and continuous improvement in the Conversational AI domain.
Qualifications:
Bachelor’s degree in Computer Science or a related field (or equivalent practical experience); an advanced degree (Master’s/PhD) is preferred. 3-5+ years of software development experience, including building large-scale distributed systems or infrastructure.
Proven experience designing and implementing distributed systems and microservices architecture for complex applications. Deep understanding of concurrency, networking, and large-scale system design (experience building highly scalable infrastructure is required).
Strong expertise in AI/Machine Learning, with emphasis on Natural Language Processing and Conversational AI. Hands-on experience developing NLP models, dialogue management, or working with large language models (LLMs) to build conversational interfaces.
Track record of deploying and scaling AI-driven applications in production. Experience with model testing/validation, performance optimization, monitoring, and managing full ML lifecycle from experimentation to live service (including setting up CI/CD pipelines for ML).
Proficiency in modern programming languages such as Java, Go, and Python for backend development and data processing tasks. Strong coding abilities with attention to clean, maintainable code and use of relevant frameworks or libraries.
Solid knowledge of cloud platforms (AWS, Google Cloud, or Azure) and infrastructure-as-code. Experience with containerization and orchestration (Docker, Kubernetes) and familiarity with continuous integration/continuous deployment tools and workflows for automated build/test/release.
Familiarity with real-time data processing and streaming architectures (e.g., Kafka, Apache Flink) for handling live data feeds. Working knowledge of databases, including both SQL (relational) and NoSQL data stores (MongoDB, Cassandra, etc.), and how to optimize data pipelines for large-scale AI applications.
Excellent communication and teamwork skills. Experience working in cross-functional and matrixed teams, bridging research and product engineering. Demonstrated leadership in mentoring engineers and driving technical vision. Contributions to open-source projects or authorship of technical publications in AI/ML are a strong plus, reflecting a passion for community engagement and thought leadership.