AI Safety Researcher
About This Role
Requirements
- Demonstrate 3+ years of experience in AI safety, alignment research, or ML robustness
- Develop and implement safety evaluation frameworks for production ML systems
- Conduct adversarial testing, bias audits, and failure mode analysis
- Design monitoring systems to detect model drift, hallucinations, and unsafe outputs
- Collaborate with engineering teams to integrate safety guardrails into deployment pipelines
- Stay current with AI safety literature and contribute to internal research documentation
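The monitoring duty above (detecting model drift in production) can be sketched with a simple population stability index (PSI) check, one common drift statistic. This is only an illustration of the kind of work the role involves, not a prescribed tool; the function name and the rule-of-thumb thresholds in the comment are conventional but hypothetical here.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference (expected) and a live (actual) score sample.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 drift alarm."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a constant distribution

    def bin_fracs(xs):
        counts = [0] * bins
        for x in xs:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # floor each fraction to avoid log(0) for empty bins
        return [max(c / len(xs), 1e-6) for c in counts]

    e, a = bin_fracs(expected), bin_fracs(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

In practice the reference sample would come from training or validation data and the live sample from recent production traffic, with the PSI recomputed on a schedule and alerted on when it crosses the chosen threshold.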
Similar Positions
Candidates may also fit these roles
AI Red Team Engineer
Remote
We're seeking an AI Red Team Engineer to join our AI-first recruiting platform at a critical growth stage. In this role, you'll design and execute adversarial t…
AI Alignment Engineer
Remote
We're seeking an AI Alignment Engineer to join our AI-first recruiting platform and help us build responsible, interpretable AI systems at scale. In this role, …
Senior ML Engineer
148 matched · Austin, TX
About the Role We're building ML systems that serve millions of users with sub-100ms latency requirements. As a Senior ML Engineer, you'll own the full lifecyc…
Senior Applied AI Researcher
49 matched · Remote
About the Role You will bridge the gap between research and production systems that serve millions of users. In this role, you'll tackle fundamental ML questio…
