Pixalate is an online trust and safety platform that protects businesses, consumers, and children from deceptive, fraudulent, and non-compliant mobile apps, CTV apps, and websites. We're seeking a PhD-level AI Engineer to lead cutting-edge research in agentic AI systems, multimodal analysis, and advanced reasoning architectures that will directly impact millions of users worldwide.
Our software and data have been used to unearth multiple high-profile cases of criminal activity and illegal surveillance, including:
Gizmodo: An iCloud Feature Is Enabling a $65 Million Scam, New Research Says
Adweek: A 7-Figure Ad Fraud Scheme Running on Roku Underlines Murkiness of CTV
Washington Post: Your kids’ apps are spying on them
ProPublica: Porn, Piracy, Fraud: What Lurks Inside Google’s Black Box Ad Empire
ABC7 News: The State of Children's Privacy Online
NBC News: How many apps are tracking your children
Our team of lawyers, data scientists, engineers, economists, and researchers spans the globe, with a presence in California, New York, Washington, DC, London, and Singapore.
About the Role
As an AI Research Engineer at Pixalate, you'll bridge the gap between fundamental AI research and the production systems that protect the digital ecosystem. Working with our Research team (AFAC), which has uncovered more than $100M in ad fraud as well as national security threats, you'll have the autonomy to pursue groundbreaking research while seeing your innovations deployed at scale within months, not years.
You'll lead research in emerging AI paradigms, including autonomous agent systems, test-time compute optimization, and multimodal understanding, all applied to real-world challenges in digital safety and fraud detection.
Key Research Areas & Responsibilities
Agentic AI Systems Development
Design and implement multi-agent architectures for autonomous fraud detection and analysis (a minimal sketch follows this list)
Develop sophisticated agent coordination systems using frameworks such as LangChain or AutoGen, or custom architectures
Create tool-integrated AI agents capable of complex reasoning and decision-making
Research novel approaches to agent safety and alignment in production environments
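To illustrate the shape of this work, here is a minimal, runnable sketch of two cooperating agents triaging an app for fraud. All names in it (AnalystAgent, ReviewerAgent, score_traffic) are hypothetical placeholders for illustration only, not Pixalate tooling or any specific framework's API.

    # A minimal sketch of multi-agent coordination for fraud triage.
    # Every class and function name here is an illustrative placeholder.
    from dataclasses import dataclass, field

    @dataclass
    class Finding:
        app_id: str
        score: float                     # 0.0 (clean) .. 1.0 (almost certainly fraud)
        rationale: list = field(default_factory=list)

    class AnalystAgent:
        """Calls a (stubbed) scoring tool and drafts a finding."""
        def score_traffic(self, app_id: str) -> float:
            # Placeholder for a real tool call (traffic stats, device graph, ...).
            return 0.87 if app_id.endswith("-spoof") else 0.12

        def run(self, app_id: str) -> Finding:
            score = self.score_traffic(app_id)
            return Finding(app_id, score, [f"analyst: tool score={score:.2f}"])

    class ReviewerAgent:
        """Second agent that checks the analyst's draft before escalation."""
        def run(self, finding: Finding) -> Finding:
            if finding.score > 0.8:
                finding.rationale.append("reviewer: corroborated, escalate")
            else:
                finding.rationale.append("reviewer: insufficient evidence")
            return finding

    def coordinate(app_id: str) -> Finding:
        # Simple sequential hand-off between the two agents.
        return ReviewerAgent().run(AnalystAgent().run(app_id))

    if __name__ == "__main__":
        print(coordinate("com.example.tv-spoof"))

A production system would layer planning, memory, retries, and tool sandboxing on top of this hand-off pattern; the sketch only shows the coordination skeleton.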
Advanced Reasoning & Test-Time Compute
Implement state-of-the-art reasoning systems inspired by recent breakthroughs such as OpenAI o1 and DeepSeek-R1
Optimize inference-time compute allocation for complex analytical tasks
Develop chain-of-thought and verification mechanisms for high-stakes decision making (a toy best-of-n sketch follows this list)
Research novel approaches to scaling reasoning capabilities efficiently
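As a toy illustration of spending extra compute at inference time, the sketch below samples several candidate answers and reconciles a majority vote with a verifier score. generate_candidates and verify are hypothetical stand-ins for an LLM sampler and a learned reward model, not any real API.

    import random
    from collections import Counter

    def generate_candidates(question, n=8, seed=0):
        # Stand-in for sampling n chain-of-thought completions from an LLM.
        rng = random.Random(seed)
        return [rng.choice(["fraudulent", "fraudulent", "benign"]) for _ in range(n)]

    def verify(question, answer):
        # Stand-in for a learned verifier / reward model; returns a confidence.
        return 0.9 if answer == "fraudulent" else 0.4

    def answer_with_test_time_compute(question, n=8):
        candidates = generate_candidates(question, n)
        # Self-consistency: take the majority vote across samples ...
        majority, _ = Counter(candidates).most_common(1)[0]
        # ... and cross-check it against the verifier's top-scored answer.
        by_verifier = sorted(set(candidates), key=lambda a: verify(question, a), reverse=True)
        return majority if majority == by_verifier[0] else by_verifier[0]

    if __name__ == "__main__":
        print(answer_with_test_time_compute("Is this traffic pattern fraudulent?"))

Raising n trades latency and cost for reliability; deciding how to allocate that budget per query is the optimization problem the bullet above refers to.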
Multimodal AI & Knowledge Systems
Build advanced multimodal models for analyzing video, image, text, and behavioral data
Develop sophisticated RAG (Retrieval-Augmented Generation) architectures (a toy hybrid-retrieval sketch follows this list):
Design high-performance vector databases and hybrid search systems
Implement advanced chunking strategies and semantic understanding
Create context-aware retrieval mechanisms for complex documents
Research cross-modal learning for fraud pattern detection
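The following toy sketch shows the hybrid-retrieval idea behind such a RAG stack: a sparse keyword-overlap score blended with a dense similarity score (here a bag-of-words cosine). embed, hybrid_search, and the in-memory DOCS corpus are illustrative placeholders; a real deployment would use a vector database and a trained embedding model.

    import math
    from collections import Counter

    DOCS = {
        "doc1": "ctv app spoofing traffic patterns and ad fraud indicators",
        "doc2": "children's privacy policy requirements for mobile apps",
        "doc3": "retrieval augmented generation with hybrid keyword and vector search",
    }

    def embed(text):
        # Toy bag-of-words "embedding"; a real system would call an embedding model.
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def keyword_score(query, doc):
        q, d = set(query.lower().split()), set(doc.lower().split())
        return len(q & d) / len(q) if q else 0.0

    def hybrid_search(query, alpha=0.5, k=2):
        # Blend dense (cosine) and sparse (keyword-overlap) scores per document.
        qv = embed(query)
        def score(doc_id):
            text = DOCS[doc_id]
            return alpha * cosine(qv, embed(text)) + (1 - alpha) * keyword_score(query, text)
        return sorted(DOCS, key=score, reverse=True)[:k]

    if __name__ == "__main__":
        print(hybrid_search("hybrid vector search for fraud retrieval"))

Chunking strategy and context-aware reranking sit on top of this scoring step; the sketch only covers the retrieval blend itself.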
Required Qualifications
Education & Research Background
PhD in Computer Science, AI, Machine Learning, or a related field (or an exceptional research track record)
Published research in peer-reviewed venues demonstrating expertise in:
Large Language Models and transformer architectures
Agentic AI, autonomous systems, or multi-agent coordination
Multimodal learning or computer vision
Distributed systems and scalable ML
Technical Expertise
Expert proficiency in Python and deep learning frameworks such as PyTorch (preferred) and TensorFlow
Advanced experience with:
Modern AI frameworks: LangChain, Hugging Face Transformers, Ray
Agent development and orchestration
RAG systems and vector databases
Distributed training frameworks and GPU optimization
Strong understanding of:
Transformer architectures and attention mechanisms
Reinforcement learning and reward modeling
Neural architecture search and AutoML
MLOps and production ML systems
Research Skills
Track record of novel algorithm development and innovation
Experience with large-scale experimentation and ablation studies
Proficiency in research tools: Weights & Biases, MLflow, TensorBoard
Strong theoretical foundation in optimization, statistics, and linear algebra
Preferred Qualifications
Experience with fraud detection, cybersecurity, or trust & safety applications
Contributions to open-source AI projects
Industry research experience at leading AI labs (DeepMind, OpenAI, FAIR, etc.)
Experience translating research into production systems
Experience with:
Mixture of Experts (MoE) architectures
Constitutional AI and alignment techniques
Efficient inference optimization (quantization, distillation)
Real-time streaming ML systems
Benefits
We focus on doing things differently and challenge each other to be the best we can be.
Generous benefits package, including 25 days' holiday plus bank holidays
Defined-contribution pension scheme
Monthly internet reimbursement
Casual, remote work environment
Hybrid, flexible hours
Opportunity for advancement
Fun annual team events
Being part of a high-performing team that wants to win and have fun doing it
Extremely competitive compensation
We're particularly interested in candidates who can demonstrate both theoretical depth and practical implementation skills. Show us how your research can transform the landscape of online trust and safety.
Pixalate is an equal opportunity employer committed to building a diverse team. We particularly encourage applications from underrepresented groups in AI research.