AI Engineer - Classifiers, Media Intelligence & Voice R&D
KORE1, a nationwide provider of staffing and recruiting solutions, has an immediate opening for an AI Engineer. This role focuses on building intelligent systems that power large-scale media understanding and organization. You will design, train, and deploy machine learning models that classify, tag, and structure complex datasets, and you will contribute to research and development in emerging areas such as voice, audio, and image intelligence. You will play a key role in transforming unstructured media into meaningful, searchable, and scalable data while helping shape the next generation of AI-driven product capabilities.
What You'll Do:
Design and deploy classification models to support content understanding, including style detection, quality scoring, filtering, moderation, and semantic categorization
Build automated tagging and organization systems that enable efficient media management, search, and discovery
Develop and maintain training data pipelines, including dataset curation, annotation workflows, and active learning loops
Research and prototype advanced image intelligence capabilities such as pose estimation, visual similarity, and feature extraction
Lead experimentation in AI voice and audio technologies, including text-to-speech, voice cloning, and audio synthesis, and help transition research into production-ready systems
Create evaluation frameworks to measure model performance, accuracy, and drift over time
Optimize model inference pipelines for performance and cost efficiency through batching, caching, and model-level optimizations
Integrate ML models into production systems, exposing them through APIs and ensuring reliability at scale
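The batching mentioned above amortizes per-call overhead by running the model on groups of inputs rather than one at a time. A minimal sketch of that pattern, where `classify_batch` is a hypothetical stand-in for any batch-oriented model call:

```python
from typing import Callable, List, Sequence


def batched(items: Sequence, batch_size: int):
    """Yield successive fixed-size batches from a sequence."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]


def run_inference(inputs: List[str],
                  classify_batch: Callable[[List[str]], List[str]],
                  batch_size: int = 32) -> List[str]:
    """Run a batch-oriented model over many inputs, one batch at a time."""
    labels: List[str] = []
    for batch in batched(inputs, batch_size):
        labels.extend(classify_batch(batch))
    return labels
```

In production this loop would typically sit behind an API, with the batch size tuned against GPU memory and latency targets.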
What We're Looking For:
3+ years of experience building and deploying machine learning models in production environments
Hands-on experience training models, including dataset preparation, architecture experimentation, hyperparameter tuning, and debugging
Strong background in computer vision or image classification (e.g., CNNs, vision transformers, CLIP)
Experience or strong interest in voice and audio AI, such as speech synthesis, voice cloning, or audio classification
Proficiency in Python and ML frameworks such as PyTorch or TensorFlow
Experience building or working with data labeling pipelines, annotation workflows, or active learning systems
Understanding of production model serving, including API integration, latency optimization, and monitoring for model drift
Familiarity with embedding-based systems, vector search, or semantic similarity techniques
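The embedding-based systems mentioned above typically rank items by vector similarity. A minimal brute-force sketch of cosine-similarity search (real systems would use an approximate-nearest-neighbor index; the item names and vectors here are illustrative only):

```python
import math
from typing import Dict, List, Tuple


def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def nearest(query: List[float],
            index: Dict[str, List[float]],
            k: int = 3) -> List[Tuple[str, float]]:
    """Return the k items most similar to the query, best first."""
    scored = [(item, cosine(query, emb)) for item, emb in index.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]
```

The same scoring underpins semantic search, deduplication, and visual-similarity features over media embeddings.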
Nice to Have:
Experience with generative models such as diffusion models, GANs, or generative audio systems
Background in optimizing models using tools like ONNX or TensorRT
Exposure to research environments or published work in machine learning or AI
Technology Stack:
Python, PyTorch
GPU-based compute environments
REST and webhook APIs
TypeScript (for integration with backend services)
PostgreSQL (metadata and labeling systems), Redis
Workflow orchestration tools (e.g., Temporal)