
ML/AI Engineer – Princeton, NJ | 15+ Years

Vsecurelabs.Inc
Full-time
Hybrid
About the Role: ML/AI Engineer

We are seeking a highly experienced ML/AI Engineer to join our team in Princeton, NJ on a hybrid work model. With 15–18+ years of experience, you will play a critical role in designing, building, and deploying machine learning (ML) models and large language models (LLMs) for real-world enterprise applications.

This position combines hands-on technical expertise with system design and deployment leadership. You will collaborate with cross-functional teams, optimize ML pipelines, and ensure models are production-ready, scalable, and efficient using Azure cloud technologies.

Key Responsibilities for ML/AI Engineer

Model Development: Design, train, and implement ML and deep learning models across structured and unstructured data.
Data Processing: Preprocess and analyze large-scale datasets to improve feature engineering.
Optimization: Fine-tune models for performance, scalability, and efficiency.
Deployment: Integrate models into production systems via APIs and cloud-based platforms.
Monitoring & Feedback: Continuously monitor, test, and retrain models based on performance metrics.
Documentation: Maintain technical documentation, architecture diagrams, and versioning logs.
Infrastructure: Design scalable infrastructure for ML and LLM training and deployment.
Cloud & Containers: Manage Azure Kubernetes Service (AKS) clusters and containerized ML workloads.
Governance: Enforce model governance, reproducibility, and versioning with MLflow and Azure DevOps (see the tracking sketch after this list).
Collaboration: Work closely with data scientists, DevOps engineers, and business stakeholders to align models with project goals.
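
To make the Governance responsibility concrete, below is a minimal sketch of experiment tracking and model versioning with MLflow in Python. The experiment name, parameters, scikit-learn model, and registered model name are illustrative assumptions, not details from this posting, and registering a model version assumes a registry-backed MLflow tracking server (for example, Azure ML or a database-backed MLflow deployment).

```python
# Minimal MLflow tracking sketch (experiment and model names are assumptions).
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_experiment("enterprise-ml-demo")  # hypothetical experiment name

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    # Log parameters, metrics, and the model artifact so the run is reproducible.
    mlflow.log_params(params)
    mlflow.log_metric("test_accuracy", accuracy_score(y_test, model.predict(X_test)))

    # Registering the model creates a new version in the MLflow Model Registry;
    # this requires a registry-capable tracking server.
    mlflow.sklearn.log_model(model, "model", registered_model_name="demo-classifier")
```

Each run captures parameters, metrics, and the model artifact, which is the reproducibility and versioning trail the Governance responsibility refers to.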

Required Skills & Qualifications

Experience: 15–18+ years in ML/AI engineering with enterprise-level deployments.
Cloud Expertise: Hands-on experience with Azure Machine Learning, Azure OpenAI, Azure DevOps, and AKS.
Programming: Proficiency in Python for ML and automation.
Containerization: Strong knowledge of Docker and Kubernetes.
CI/CD: Expertise in building CI/CD pipelines for ML workflows.
LLMs: Experience in fine-tuning large language models, prompt engineering, and deployment.
Tools: Familiarity with MLflow for experiment tracking and Terraform for infrastructure automation.
Monitoring: Experience with Prometheus and Grafana for monitoring ML systems (see the serving sketch after this list).
Soft Skills: Strong problem-solving, collaboration, and communication skills.
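
As a sketch of the Deployment and Monitoring items above, the example below shows one way an inference API could expose Prometheus metrics that Grafana can then visualize. It assumes FastAPI and the prometheus_client library; the endpoint paths, metric names, and the dummy predict function are hypothetical placeholders for a real model service.

```python
# Minimal FastAPI + prometheus_client sketch (endpoint and metric names are assumptions).
import time

from fastapi import FastAPI
from fastapi.responses import Response
from prometheus_client import CONTENT_TYPE_LATEST, Counter, Histogram, generate_latest

app = FastAPI()

PREDICTIONS = Counter("predictions_total", "Number of prediction requests served")
LATENCY = Histogram("prediction_latency_seconds", "Prediction latency in seconds")

def predict(features: list[float]) -> float:
    # Placeholder for a real model call (e.g., a model loaded from MLflow).
    return sum(features) / max(len(features), 1)

@app.post("/predict")
def predict_endpoint(features: list[float]):
    start = time.perf_counter()
    result = predict(features)
    LATENCY.observe(time.perf_counter() - start)
    PREDICTIONS.inc()
    return {"prediction": result}

@app.get("/metrics")
def metrics():
    # Prometheus scrapes this endpoint; Grafana visualizes the stored series.
    return Response(generate_latest(), media_type=CONTENT_TYPE_LATEST)
```

Run with, for example, `uvicorn app:app`, then point a Prometheus scrape job at /metrics and build Grafana dashboards on the resulting time series.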

Preferred Skills

Deep knowledge of distributed computing for ML workloads.
Advanced knowledge of MLOps and DevSecOps practices.
Prior experience in enterprise-scale LLM deployments (see the Azure OpenAI sketch after this list).
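
For the LLM work referenced above, the following sketch shows a basic chat call against an Azure OpenAI deployment using the openai Python SDK (v1+). The endpoint, API version, deployment name, and prompts are placeholders, not details from this posting.

```python
# Minimal Azure OpenAI chat sketch (endpoint, api_version, and deployment name are placeholders).
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed API version
)

response = client.chat.completions.create(
    model="gpt-4o-deployment",  # the name of your Azure deployment, not the base model
    messages=[
        {"role": "system", "content": "You answer questions about internal ML documentation."},
        {"role": "user", "content": "Summarize the retraining policy in two sentences."},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```

In production, calls like this would typically sit behind the same API, monitoring, and governance layers described in the responsibilities above.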

Work Model

Location: Princeton, NJ
Work Type: Hybrid (a balance of onsite and remote work)
Experience Level: Senior-level (15–18+ years)

Why Join This Role?

This is an opportunity to lead cutting-edge AI and ML initiatives while working with modern Azure cloud services. You will design scalable systems for enterprise ML, LLM fine-tuning, and production deployments, making a direct impact on business outcomes.

Ready to Apply?

If you are passionate about building and deploying AI-driven solutions at scale, we encourage you to apply today.

FAQs – ML/AI Engineer Role

What is the location for this role?
Princeton, New Jersey (hybrid work model).

What is the required experience level?
15–18+ years of experience in ML/AI engineering.

What cloud platforms are expected?
Primarily Azure services, including Azure ML, Azure OpenAI, Azure DevOps, and AKS.

Do I need experience with large language models?
Yes, LLM fine-tuning, prompt engineering, and deployment experience are essential.

Which programming languages are required?
Proficiency in Python is required.

What container tools are used?
Docker and Kubernetes for containerized ML workloads.

What infrastructure tools are important?
Terraform for infrastructure automation and MLflow for governance.

Will I manage ML systems in production?
Yes, including deployment, monitoring, retraining, and performance optimization.

Which monitoring tools are used?
Prometheus and Grafana for system monitoring.

Do I need CI/CD expertise?
Yes, building secure and automated ML pipelines is critical.

What kind of datasets will I work with?
Both structured and unstructured large-scale datasets.

Is collaboration with other teams expected?
Yes, you will work closely with data scientists and business teams.

What level of documentation is expected?
Comprehensive documentation of models, infrastructure, and performance.

Is cloud-native deployment experience required?
Yes, integrating models into cloud-based production systems is essential.

How do I apply?
Submit your application via the internal portal and connect on LinkedIn for discussions.
