AI Score

91

ML Systems Engineer

4.5y relevant experience

Qualified

Executive Summary

Patrik Gellért Szepesi is a high-caliber ML Systems Engineer whose experience profile closely mirrors the role's requirements. With ~4.5 years of production ML work spanning healthcare AI, autonomous vehicle perception, and fintech, they bring end-to-end ownership of complex ML pipelines, deep AWS MLOps expertise, and a measurable track record of impact. Their LLM fine-tuning, NLP deployment, and cost-optimization experience is particularly well aligned with an AI-first recruiting platform at growth stage. The only meaningful technical gaps are GCP exposure and the absence of code samples — both addressable through interview and onboarding. This candidate is a strong FIT and should be prioritized for a first-round technical interview.

Top Strengths

  • Production ML pipeline expertise at scale — from 500M image pre-labeling to real-time LLM inference serving tens of thousands of patients
  • Deep AWS MLOps stack fluency (SageMaker, Kubeflow, Airflow, MLflow, EKS, Bedrock) matching nearly all required technical environment components
  • Proven cost optimization mindset: 70% AWS cost reduction, 60% labeling cost reduction, 97% annotation time reduction — directly valuable for a growth-stage startup
  • LLM fine-tuning and NLP model deployment experience (BERT, LoRA, RAG, LLM classification) — highly relevant for an AI-first recruiting platform
  • Public teaching and community contribution (25,000+ students, AWS conference speaker, academic researcher) demonstrating leadership, communication, and knowledge depth

Key Concerns

  • No GCP experience evident — the role lists AWS/GCP and the candidate is exclusively AWS-oriented, which may require onboarding time
  • No GitHub or code samples available to verify software engineering best practices, test coverage, or collaborative code quality

Culture Fit

88%

Growth Potential

High

Salary Estimate

$110k–$135k

Assessment Reasoning

Patrik meets or exceeds ~90% of the required skills and preferred qualifications for this ML Systems Engineer role. This candidate has direct, production-validated experience with Python, PyTorch/TensorFlow, ML pipeline orchestration (Kubeflow, Airflow, SageMaker Pipelines), containerized deployment (Docker, AWS EKS), and model monitoring/optimization — covering virtually the entire required technical stack. Their NLP model deployment experience (BERT classifiers, LLM fine-tuning, RAG pipelines) satisfies the preferred qualification around NLP at scale, and their 70% cost reduction on AWS demonstrates the infrastructure optimization mindset the role demands. The two identified gaps — GCP experience and absence of code samples — are minor relative to the strength of the overall profile and do not constitute disqualifying factors. Their cultural alignment with an ownership-driven, impact-focused startup environment is reinforced by their freelance teaching, public speaking, and end-to-end project delivery history. The FIT decision is made with high confidence.

Interview Focus Areas

  • GCP familiarity and cross-cloud adaptability — assess willingness and speed to ramp on GCP tooling
  • Software engineering practices: testing strategies, code review habits, documentation standards, and CI/CD integration in ML systems
  • System design depth: scalability decisions, trade-offs in inference vs. training infrastructure, and experience debugging ML systems in production
  • Collaboration style within a small ML team and with cross-functional data scientists and backend engineers
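To make the software-engineering focus concrete, the interviewer could anchor the discussion on a small testing exercise. The sketch below is purely illustrative (the `standardize` function and its edge cases are hypothetical, not drawn from the candidate's work): the candidate is shown a simple preprocessing step and asked which edge cases a production test suite should cover.

```python
# Hypothetical interview exercise: a preprocessing step with edge cases
# (empty input, zero-variance feature) worth probing in a testing discussion.

def standardize(values):
    """Scale values to zero mean and unit variance."""
    n = len(values)
    if n == 0:
        raise ValueError("empty input")
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    if var == 0:
        # Constant feature: define output as all zeros instead of dividing by 0.
        return [0.0] * n
    std = var ** 0.5
    return [(v - mean) / std for v in values]

# Edge cases a candidate should identify:
assert standardize([5, 5, 5]) == [0.0, 0.0, 0.0]  # zero variance
scaled = standardize([1, 2, 3])
assert abs(sum(scaled)) < 1e-9                     # zero mean
```

A strong candidate would also raise numerical-stability and property-based testing concerns, which maps directly onto the CI/CD-in-ML-systems focus area above.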

Code Review

Good · Senior Level

No direct code samples are available for review, which limits objective assessment. However, the breadth and production nature of the candidate's described systems — including custom training pipelines with LoRA/quantization, scalable inference backends, and self-supervised pipelines over 500M images — strongly imply solid engineering discipline. A technical interview or take-home assignment should be used to validate code quality and software engineering practices directly.
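For reviewers less familiar with LoRA, the core idea behind the fine-tuning pipelines described above is to freeze the pretrained weight matrix and train only a low-rank update, which slashes trainable parameter count. A minimal NumPy sketch (shapes, rank, and scaling chosen for illustration; this is not the candidate's actual code):

```python
import numpy as np

# LoRA: effective weight W_eff = W + (alpha / r) * B @ A,
# where W is frozen and only the low-rank factors A and B are trained.
d_out, d_in, r, alpha = 768, 768, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, init 0

W_eff = W + (alpha / r) * B @ A             # equals W before any training

full_params = W.size                        # parameters if W were trained
lora_params = A.size + B.size               # parameters LoRA actually trains
print(f"trainable fraction: {lora_params / full_params:.3%}")
```

At these illustrative shapes the adapter trains roughly 2% of the full matrix's parameters, which is why the technique pairs well with the quantization and cost-control work the profile describes.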

Python · TypeScript · PyTorch · TensorFlow · HuggingFace Transformers · Scikit-Learn · SQL · Docker · Kubeflow · MLflow · Apache Airflow · AWS SageMaker · AWS Bedrock · AWS EKS · Terraform
  • +Demonstrated systems-level thinking: load testing for scalability, automated retraining pipelines, managed spot training for cost control
  • +Breadth of production-grade tooling (Docker, Kubeflow, MLflow, Airflow, SageMaker Pipelines) implies familiarity with clean, modular pipeline code
  • +Academic publication and Udemy courses suggest ability to write well-structured, explainable code
  • -No GitHub profile or code samples provided — cannot directly assess code quality, test coverage, or engineering hygiene
  • -Inability to evaluate adherence to software engineering best practices (testing, documentation, code review culture) without code artifacts

Experience Overview

4.5y total · 4.5y relevant

This candidate is an exceptionally well-rounded ML Systems Engineer with ~4.5 years of hands-on production ML experience across healthcare AI, autonomous vehicles, and fintech. They demonstrate mastery of the full MLOps stack — from training pipelines and LLM fine-tuning to containerized deployment and cost optimization on AWS. Their experience is directly aligned with the role's core requirements, and their teaching background signals strong technical communication and depth.

Matching Skills

Python · PyTorch / TensorFlow · ML Systems Architecture · MLOps / ML Pipeline Orchestration · Cloud Infrastructure (AWS) · Data Engineering · Model Deployment & Monitoring · Kubernetes (AWS EKS) · Docker · Apache Airflow · Kubeflow · MLflow · SageMaker Pipelines · PostgreSQL · CI/CD pipelines

Skills to Verify

  • GCP experience (AWS-heavy, no GCP evidence)
  • Explicit observability/monitoring tooling mention (e.g., Prometheus, Grafana, Datadog)
Candidate information is anonymized. Personal details are hidden for fair evaluation.