Overall Score: 82 (Grade A)

Applied AI Researcher / Founding Engineer

4y relevant experience

Qualified

Executive Summary

The candidate is a technically strong applied ML engineer with a genuine entrepreneurial background, having co-founded two AI startups and served as CTO. Their hands-on expertise in LLM fine-tuning, multi-agent systems, and end-to-end ML deployment aligns well with the core technical requirements of this founding engineer role. The primary gaps are the absence of a PhD (preferred but not strictly required), no public code presence, and limited explicit experience with image generation systems specifically mentioned in the job description. Their startup track record, modern tooling fluency, and full ownership mentality make them a strong candidate for the role. The hiring decision should be supported by a technical interview and code review to close the verification gap left by the absence of a GitHub profile.

Top Strengths

  • Proven CTO and co-founder experience with demonstrated product traction (PentX reaching $10k MRR) — directly maps to the founding engineer mandate
  • Comprehensive modern LLM stack expertise: GRPO/DPO/ORPO fine-tuning, multi-agent LangGraph systems, vLLM serving, and RAG architectures
  • Full-stack ownership mentality — has delivered end-to-end systems from data pipelines to containerized cloud deployment and frontend integration
  • Experience at a US-based cybersecurity startup (Anvilogic, Palo Alto) demonstrates ability to work in fast-paced, high-expectation environments
  • Multilingual and internationally mobile (French, English, Romanian) with exposure to diverse technical and regulatory environments

Key Concerns

  • Absence of a PhD or formal advanced academic credential is a meaningful gap given the role's strong preference and the company's research-oriented positioning
  • No public code, GitHub, or open-source work provided — a critical verification gap for a founding engineer role, where technical credibility must be independently assessable

Culture Fit

80%

Growth Potential

High

Salary Estimate

$90,000 - $120,000

Assessment Reasoning

The candidate is assessed as FIT (score 82) based on the following reasoning: they meet approximately 75-80% of the required technical skills, with strong depth in the most critical areas — LLM fine-tuning, multi-agent architectures, model lifecycle management, and cloud deployment on AWS/GCP. Their twice-over CTO/co-founder background with real product traction directly satisfies the founding engineer and leadership requirements. The role's minimum requirements (3-7 years AI/ML experience, delivering working AI systems, solid software engineering, leadership potential) are all met or exceeded. The PhD preference is a gap, but it is explicitly listed as preferred rather than required. The absence of a GitHub profile and code sample is a meaningful concern that prevents a higher confidence score, but the resume's technical specificity and startup credibility are sufficient to recommend moving to the interview stage. The salary range of $90k-$144k aligns with their experience level. A technical screen focusing on system design and RL fine-tuning depth should be the next step.

Interview Focus Areas

  • Deep technical dive on RL fine-tuning methodology (GRPO reward design, trajectory sampling) to validate depth beyond resume bullet points
  • Architecture walkthrough of a multi-agent system they built end-to-end, assessing system design thinking and trade-off reasoning
  • Discussion of text and image generation experience — clarify whether they have worked with diffusion models or vision-language models
  • Leadership and team management experience — how they have mentored or grown engineers in their CTO roles
  • Request a GitHub profile or code sample review before the final decision

Code Review

Fair · Senior Level

No code example or GitHub profile was submitted, so direct code quality evaluation cannot be performed. Based on resume descriptions alone, the candidate demonstrates senior-level architectural thinking and production engineering awareness. A technical interview or take-home assignment should be used to validate actual code quality before proceeding.

Python · PyTorch · LangGraph · vLLM · FastAPI · Docker · AWS SageMaker · Snowflake · ONNX
  • + Resume descriptions demonstrate architectural thinking — sandboxed execution layers, async orchestration, checkpointing, and WebSocket streaming suggest clean systems design
  • + Awareness of production concerns: quantization, ONNX optimization, guardrails, and multi-target evaluation harnesses indicates engineering maturity
  • - No code sample was provided and no GitHub link is included, making direct code quality assessment impossible — a meaningful gap for a founding engineer role, where code ownership is central

Experience Overview

4y total · 4y relevant

The candidate presents a strong applied ML engineering profile with genuine founding experience, modern LLM stack expertise, and a demonstrated ability to ship production AI systems. They cover the majority of technical requirements well, particularly around LLMs, multi-agent systems, RL fine-tuning, and cloud deployment. The absence of a PhD and the lack of visible open-source or academic output are the primary gaps against the role's stated preferences.

Matching Skills

Python · PyTorch · Hugging Face / Transformers · LLM fine-tuning (DPO, ORPO, GRPO, RL) · LangGraph / LangChain multi-agent systems · vLLM model serving · RAG & Graph RAG architectures · AWS (SageMaker, Lambda, S3) · GCP (Cloud Functions, Cloud Run) · Docker / Kubernetes · Model lifecycle management (training, fine-tuning, deployment, monitoring) · Multimodal and NLP model experience · FastAPI backend engineering · Startup / founding engineer experience · MLOps pipelines · Text and image/document generation adjacent work

Skills to Verify

  • Formal PhD or advanced academic degree in AI/ML/CS
  • Explicit experience with multimodal vision-language models (e.g., image generation, diffusion models)
  • Published academic research or peer-reviewed papers
  • TensorFlow experience (PyTorch-only background apparent)
  • Demonstrated open-source community contributions (no GitHub provided)
Candidate information is anonymized. Personal details are hidden for fair evaluation.