Score: 82 (A)

Applied AI Researcher / Founding Engineer

6y relevant experience

Qualified

Executive Summary

The candidate is a strong fit for the Applied AI Researcher / Founding Engineer role, combining rare depth in LLM post-training and multimodal systems with a proven ability to ship production-grade AI at scale. Their experience ranges from frontier model fine-tuning (1T-parameter RL) to multi-agent orchestration and hybrid RAG pipelines, and their CVPR publication alongside top-venue reviewing reflects genuine research standing. They lack a PhD and have no visible open-source footprint — real gaps relative to the role's ideal profile, but ones mitigated by the quality and specificity of their applied work. Their co-founder background and team leadership history suggest they have the entrepreneurial appetite and people skills this founding role demands. A technical interview with a coding component is strongly recommended to validate engineering quality before proceeding to an offer.

Top Strengths

  • Deep, hands-on multimodal and LLM post-training expertise (SFT, RFT, RL) with frontier-scale model experience (1T parameters)
  • Proven ability to ship production AI systems with quantified business impact across multiple companies and domains
  • Research credibility via CVPR publication and top-venue reviewing (ECCV/ICCV, CVPR) — uncommon combination with applied engineering depth
  • Entrepreneurial DNA: co-founded a startup, led teams, and has experience at both early-stage ventures and research institutions
  • Breadth spanning agentic AI, RAG, MLOps, distributed training, and computer vision makes them genuinely full-stack for an AI founding role

Key Concerns

  • No PhD (M.Sc. only) — the role explicitly prefers a PhD, and for a research-heavy founding role this may affect credibility with future academic hires or research partners
  • No public code artifacts (no GitHub, no open-source) — for a Founding Engineer role where code ownership and quality standards matter, this is a meaningful blind spot that must be assessed in a technical interview

Culture Fit

80%

Growth Potential

High

Salary Estimate

$110,000 - $140,000

Assessment Reasoning

The candidate is assessed as FIT (score 82) based on the following reasoning: They meet approximately 85% of the required and preferred skills, with particularly strong alignment on the most critical dimensions — multimodal model expertise, LLM post-training (SFT + RL), production AI delivery, and early-stage startup experience. Their CVPR publication and reviewing roles satisfy the 'proven track record' requirement at a research level, and their measurable impact metrics (6x cost reduction, 30x throughput, 5x hallucination reduction) demonstrate engineering rigor. The primary gaps — no PhD, no public code repository, and limited social/thought-leadership presence — are real but not disqualifying for a candidate of this applied depth. Confidence is set at 78 rather than higher because code quality cannot be directly assessed without a GitHub profile or code sample, a non-trivial unknown for a Founding Engineer role. A technical screen is required to validate this decision with high confidence.

Interview Focus Areas

  • Live coding or take-home exercise to assess code quality, modularity, and engineering best practices directly
  • Deep-dive on architectural decisions made at Warmwind and Mona AI — probe tradeoffs, failures, and what they would do differently
  • Leadership philosophy and experience managing engineers under pressure in ambiguous early-stage environments
  • Vision for the technical roadmap at AlpacaRelay and how they would prioritize research vs. product engineering in a resource-constrained setting

Code Review

Fair · Senior Level

No direct code was submitted for review, so this score reflects an inference from resume evidence rather than direct evaluation. The technical stack described is modern and appropriate for the role, and the architectural decisions documented (distributed RL infra, hybrid RAG, agentic memory systems) suggest senior-level engineering judgment. However, the absence of a GitHub profile or any code artifact is a notable gap that should be addressed before an offer is extended.

Python · PyTorch · HuggingFace Transformers · LangChain · LangGraph · vLLM · Ray · Docker · GCP · Pinecone · Firestore · Postgres · Redis · Next.js · Pydantic · OpenPipe
  • + Resume demonstrates strong architectural thinking: distributed RL training infrastructure, multi-level memory compression, and hybrid RAG pipelines suggest modular, scalable design sensibility
  • + Breadth of tooling (Ray, vLLM, LangGraph, Pydantic, Docker) indicates pragmatic, production-aware engineering practices
  • − No code samples, GitHub profile, or open-source repositories were provided, making direct code quality assessment impossible — this is a meaningful gap for a Founding Engineer role

Experience Overview

9y total · 6y relevant

The candidate is a well-rounded Applied AI Researcher with approximately 6 years of directly relevant experience spanning multimodal models, LLM post-training, multi-agent systems, and production AI deployment. Their track record of delivering measurable, production-grade results at both startups and research institutions is compelling, and their CVPR publication combined with top-venue reviewing roles demonstrates genuine research credibility. The absence of a PhD and lack of visible open-source contributions are the primary gaps relative to the role's ideal profile.

Matching Skills

Python · PyTorch · HuggingFace · LLMs and multimodal models · Reinforcement Learning fine-tuning · VLM post-training (SFT + RFT) · Distributed training (Ray) · Model lifecycle management · Multi-agent orchestration · RAG pipelines · GCP cloud infrastructure · MLOps and deployment · Deep learning architectures · Computer vision · vLLM · LangChain / LangGraph · Docker · Team leadership experience

Skills to Verify

  • AWS or Azure cloud experience (only GCP listed)
  • TensorFlow (PyTorch-centric profile)
  • Formal PhD credential (has M.Sc.)
  • Explicit large-scale data workflow / MLOps pipeline tooling (e.g., MLflow, Kubeflow)
  • No GitHub or open-source contributions documented
Candidate information is anonymized. Personal details are hidden for fair evaluation.