Overall Score: 78

Data Engineer

7y relevant experience

Qualified

Executive Summary

This candidate is a senior Python/data engineer based in Tallinn, Estonia, with 10 years of experience spanning backend development, ETL pipelines, ML model deployment, and LLM integrations. Their current role at Kavida.ai demonstrates direct relevance: building Airflow pipelines, deploying to AWS, and working with AI-powered data products. They clear the threshold for the core required skills in Python, SQL, Airflow, cloud platforms, and ETL, making them a FIT candidate. However, notable gaps in Apache Spark, Kafka, and dbt (all required or heavily preferred), together with a near-absent public profile, lower confidence. This candidate would be a strong hire if the Spark/Kafka gaps prove addressable through onboarding, but this should be verified during technical screening before extending an offer.

Top Strengths

  • Strong Python and data engineering foundation with 10 years of progressive experience
  • Direct Airflow ETL pipeline experience at current AI/logistics company highly relevant to role
  • Cloud platform versatility across AWS, GCP, Snowflake, and BigQuery matches the job's technical environment
  • ML and LLM expertise (LangChain, RAG, OpenAI) is a genuine value-add for an AI-first recruiting platform
  • Experience in growth-stage and startup-adjacent environments (Kavida.ai, CodeLions) suggests adaptability to fast-paced culture

Key Concerns

  • Critical skills gaps in Apache Spark and Kafka — both listed as required — with no mention anywhere in the resume
  • Unverifiable digital presence (no GitHub, empty LinkedIn) makes it difficult to independently validate 10 years of claimed expertise

Culture Fit

70%

Growth Potential

High

Salary Estimate

$80k-$100k

Assessment Reasoning

The candidate meets or exceeds requirements for the majority of the core technical stack: Python, SQL, Airflow, ETL/ELT, AWS/GCP, Snowflake/BigQuery, and data pipeline design. Their 10 years of experience, current role at an AI startup, and ML/LLM depth align well with the AI-first platform context and the preferred qualifications around ML workflows. The missing skills (Apache Spark, Kafka, and dbt) are meaningful gaps, given that they appear in both the required skills list and the technical environment section, but they are not an insurmountable barrier for a candidate with this demonstrated adaptability and cloud-native data engineering background. The overall score of 78 clears the FIT threshold of 70; confidence in that score is moderated by the inability to independently verify claims via GitHub or LinkedIn and by the Spark/Kafka gaps. Recommended path: advance to a technical screen focused on those gap areas and include a short coding assessment.

Interview Focus Areas

  • Depth assessment on Apache Spark and Kafka — whether the candidate has any working knowledge not reflected in the resume
  • Data pipeline design: walk through an end-to-end ETL/ELT architecture decision they owned, including trade-offs and failure handling
  • dbt familiarity and modern data stack experience beyond Airflow
  • Autonomous remote work style and examples of ownership in distributed teams
  • Code assessment or take-home task to validate Python and SQL quality independently

Code Review

Fair · Senior Level

No GitHub or portfolio was provided, making direct code quality assessment impossible. The resume describes production deployments and testing practices consistent with a senior developer, but the absence of any public code is a notable gap for a data engineering role where technical depth matters. A code assessment or technical interview stage is strongly recommended before advancing.

Python · FastAPI · Django · Flask · Docker · Kubernetes · GitLab CI/CD · Jenkins · PyTest · TensorFlow · LangChain
  • + Demonstrates TDD and PyTest practices, suggesting disciplined coding habits
  • + CI/CD pipeline management (GitLab, Jenkins) indicates awareness of production-grade deployment
  • + Diverse technology usage across projects suggests adaptability and breadth
  • − No GitHub profile or public code samples provided — cannot independently verify code quality or engineering depth
  • − Without code samples, claimed breadth across 30+ technologies cannot be validated
  • − Resume bullet points describe outcomes but lack the architectural or design-decision detail that would confirm senior-level thinking

Experience Overview

10y total · 7y relevant

The candidate presents a solid 10-year Python engineering background with meaningful data engineering exposure at Kavida.ai and CodeLions, covering Airflow, ETL, cloud platforms, and Snowflake. However, the absence of Apache Spark, Kafka, and dbt — all core to this role — raises questions about their fit with the full technical stack. Their ML and LLM depth is a genuine bonus for an AI-first recruiting platform.

Matching Skills

Python · SQL · Airflow · ETL/ELT · Data Pipelines · AWS · GCP · Snowflake · BigQuery · Docker · PostgreSQL · Data Warehousing · LangChain · FastAPI · CI/CD

Skills to Verify

Apache Spark · Kafka · dbt · Infrastructure-as-Code (Terraform/Pulumi)
Candidate information is anonymized. Personal details are hidden for fair evaluation.