AI Safety Researcher (Remote)

Product | Full time | Mid level | $95k–$145k
AI Screened | Remote B2B | EU Talent Pool

About This Role

We're seeking an AI Safety Researcher to join our AI-first recruiting platform at a critical growth stage. You'll design and implement safety frameworks, alignment protocols, and risk mitigation strategies for our machine learning systems that process recruitment workflows across EU markets. This role bridges research and production, ensuring our AI models operate with transparency, fairness, and robustness while maintaining compliance with emerging AI regulations.

Requirements

  • 3+ years of experience in AI safety, alignment research, or ML robustness
  • Develop and implement safety evaluation frameworks for production ML systems
  • Conduct adversarial testing, bias audits, and failure mode analysis
  • Design monitoring systems to detect model drift, hallucinations, and unsafe outputs
  • Collaborate with engineering teams to integrate safety guardrails into deployment pipelines
  • Stay current with AI safety literature and contribute to internal research documentation

Required Skills

AI Safety, Model Alignment, Risk Assessment, PyTorch, Adversarial Testing, Fairness & Bias Mitigation, Interpretability, Python