Pivots Hiring

AI Safety Researcher

Product · Full time · Remote · Mid level · $95k–$145k

About This Role

We're seeking an AI Safety Researcher to join our AI-first recruiting platform at a critical growth stage. You'll design and implement safety frameworks, alignment protocols, and risk mitigation strategies for our machine learning systems that process recruitment workflows across EU markets. This role bridges research and production, ensuring our AI models operate with transparency, fairness, and robustness while maintaining compliance with emerging AI regulations.

Requirements

  • 3+ years of experience in AI safety, alignment research, or ML robustness
  • Develop and implement safety evaluation frameworks for production ML systems
  • Conduct adversarial testing, bias audits, and failure mode analysis
  • Design monitoring systems to detect model drift, hallucinations, and unsafe outputs
  • Collaborate with engineering teams to integrate safety guardrails into deployment pipelines
  • Stay current with AI safety literature and contribute to internal research documentation

Skills

AI Safety · Model Alignment · Risk Assessment · PyTorch · Adversarial Testing · Fairness & Bias Mitigation · Interpretability · Python
