A · 82

Applied AI Researcher / Founding Engineer

7y relevant experience

Qualified

Executive Summary

The candidate is a strong fit for the Applied AI Researcher / Founding Engineer role. They bring a PhD in AI, 7+ years of directly relevant experience, and a demonstrated ability to ship production AI systems across multiple domains. Their hands-on expertise with LLMs, RAG pipelines, agentic systems, and multimodal AI aligns closely with the company's technical direction. Their founding and team leadership experience suggests they are ready for the ownership and growth expectations of this role. The primary gaps are the absence of visible open-source work and limited explicit experience with image generation models specifically — both of which can be addressed in the interview process. They are a high-confidence FIT pending a technical interview.

Top Strengths

  • PhD in AI/CS with 4 peer-reviewed publications, satisfying the strong academic background requirement
  • Proven ability to build and ship end-to-end AI products with real users and measurable business outcomes (trading systems, recommendation engines, voice agents)
  • Hands-on expertise with LLMs, RAG, agentic AI, and multimodal systems — directly aligned with the company's text and image generation platform focus
  • Entrepreneurial founder mindset with B2B contracting experience, well-suited to an early-stage startup environment
  • Team leadership experience as Data Science Team Lead at scale (Digikala, Iran's largest e-commerce platform)

Key Concerns

  • !No GitHub or open-source contributions provided, making independent code quality verification impossible without a technical assessment
  • !Limited explicit image generation experience (Stable Diffusion, diffusion models, GANs) despite the job requiring text and image generation systems

Culture Fit

78%

Growth Potential

High

Salary Estimate

$100,000 - $130,000 (within posted range; UAE-based B2B contractor may have flexibility on structure)

Assessment Reasoning

The candidate meets or exceeds the majority of required qualifications: they hold a PhD in AI/CS, have 7+ years of relevant experience, have published 4 peer-reviewed papers, have led engineering teams, and have hands-on production experience with LLMs, multimodal agents, RAG systems, and cloud infrastructure (AWS/Azure). Their entrepreneurial background as a founder of GTC. LLC directly mirrors the founding engineer mindset required. They clear the 80% skills match threshold comfortably. The primary risks — no GitHub profile and limited explicit image generation experience — are manageable concerns that do not disqualify them, especially given the breadth and depth of their overall profile. A technical interview is recommended to validate code quality and image generation competence before extending an offer.

Interview Focus Areas

  • Technical deep-dive: Ask them to walk through the architecture of their most complex AI agent project, probing for scalability decisions, failure modes, and trade-offs made
  • Image generation experience: Probe depth of experience with generative image models (diffusion, GANs, multimodal) given the job's explicit focus on image generation systems
  • Leadership style and team management: Explore how they led the Digikala team, how they handled underperformance, and their vision for scaling an engineering team from scratch
  • Open-source and code quality: Request a GitHub repository or live coding session to assess engineering craft directly

Code Review

Good · Senior Level

No direct code samples were provided, which limits confidence in this assessment. However, the resume describes sophisticated, production-oriented system design patterns suggesting Senior-level engineering competence. The use of modular, swappable architectures and scalable pipeline design is a positive signal. A technical interview or code challenge is strongly recommended to verify actual code quality.
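To make the "token-aware conversation management" claim concrete for interviewers, a minimal sketch is shown below. All names are hypothetical, and the 4-characters-per-token heuristic is an illustrative assumption (a production system would use the model's actual tokenizer); none of this is taken from the candidate's code:

```python
from collections import deque


def approx_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token. A real system
    # would count tokens with the target model's tokenizer instead.
    return max(1, len(text) // 4)


def trim_history(messages: list[str], budget: int) -> list[str]:
    """Keep only the most recent messages whose combined token
    estimate fits within the context budget."""
    kept: deque[str] = deque()
    total = 0
    # Walk backwards from the newest message, stopping once the
    # budget would be exceeded.
    for msg in reversed(messages):
        cost = approx_tokens(msg)
        if total + cost > budget:
            break
        kept.appendleft(msg)
        total += cost
    return list(kept)
```

Asking the candidate to critique or extend a sketch like this (e.g., pinning the system prompt, summarizing dropped turns) is one way to verify the depth behind the resume claims.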

Python, Django, FastAPI, LangChain, LangGraph, FAISS, Redis, TensorFlow, Keras, PySpark, Transformers (HuggingFace), Groq API, ElevenLabs, Gemini STT, Solidity
  • +Resume demonstrates use of clean architectural patterns (abstract factory for LLM provider swapping, modular chat controller architecture, SSE streaming)
  • +Evidence of production-grade system design including token-aware conversation management, document chunking pipelines, and multi-user session isolation
  • -No GitHub profile or code samples provided, so code quality assessment is entirely inferred from resume descriptions rather than direct review
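The abstract-factory pattern the resume describes for LLM provider swapping can be sketched roughly as follows. Since no actual code was available for review, the class names, registry, and `complete` method here are hypothetical illustrations of the pattern, not the candidate's implementation:

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Common interface so the chat controller never depends on a vendor SDK."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class OpenAIProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        # A real implementation would call the vendor API here.
        return f"[openai] {prompt}"


class GroqProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        return f"[groq] {prompt}"


# Registry lets the active provider be chosen from config at runtime.
PROVIDERS: dict[str, type[LLMProvider]] = {
    "openai": OpenAIProvider,
    "groq": GroqProvider,
}


def make_provider(name: str) -> LLMProvider:
    """Factory: swap providers via configuration instead of code changes."""
    return PROVIDERS[name]()
```

A useful interview probe is to ask how their real version handles provider-specific concerns (streaming, tool calls, rate limits) that this idealized interface hides.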

Experience Overview

9y total · 7y relevant

The candidate is a highly experienced AI engineer and PhD holder with approximately 7 years of directly relevant experience spanning LLMs, agentic AI, deep learning, and production ML systems. They have delivered measurable business impact across multiple industries and hold a founding/leadership track record. Their primary gaps are the lack of visible open-source or GitHub contributions to validate code quality, and limited explicit experience with the image generation systems specifically called out in the job description.

Matching Skills

  • Python
  • LLMs (Large Language Models)
  • LangChain / LangGraph
  • RAG (Retrieval-Augmented Generation)
  • PyTorch / TensorFlow / Keras
  • Multimodal AI (voice, text, image, document understanding)
  • AWS and Azure cloud infrastructure
  • Model lifecycle management (training, fine-tuning, deployment)
  • Django / FastAPI / Flask
  • Deep learning architectures (LSTM, Transformers)
  • Vector databases (FAISS, Redis)
  • MLOps and data pipelines
  • Team leadership (Data Science Team Lead at Digikala)
  • Rapid prototyping in ambiguous environments
  • PhD in Computer Science / AI

Skills to Verify

  • Explicit PyTorch hands-on experience (TensorFlow/Keras dominant in resume)
  • GitHub / open-source contributions not visible
  • No explicit mention of GCP
  • No explicit image generation model experience (Stable Diffusion, DALL-E, etc.)
  • No explicit experience with model fine-tuning at scale (LoRA, PEFT, etc.)
Candidate information is anonymized. Personal details are hidden for fair evaluation.