Svetlin Penkov

Co-Founder & CEO at Efemarai

Machine Learning · MLOps · Motion Prediction · Reinforcement Learning · Explainable AI (XAI) · Robotics

About

I'm Svetlin Penkov, the Co-Founder and CEO of Efemarai. My career has been dedicated to the intersection of robotics and machine learning, from my PhD research at Edinburgh to leading motion prediction teams for autonomous vehicles. Currently, I am focused on making ML Quality Assurance a 'first-class citizen' by building tools that automate stress testing and model debugging. I'm passionate about ensuring the 'Cambrian explosion' of AI is backed by reliability, transparency, and accountability. I love connecting with fellow engineers and leaders who are looking to move beyond black-box models toward robust, explainable AI systems.

Networking

What I can offer

  • Tools to speed up ML development and debugging
  • Theoretical insights into model behavior
  • Expertise in ML quality assurance and reliability

Looking for

  • Expanding my professional network
  • Exploring mutual opportunities in the AI and MLOps industry

Best fit for

ML engineers · AI researchers · Tech leaders · AI ecosystem builders

Current Interests

ML Quality Assurance · AI Safety & Ethics · Explainability · MLOps evolution · Structured task-related abstractions

Background

Career

Transitioned from a PhD in Robotics to leading motion prediction at FiveAI before founding Sciro Research and Efemarai to focus on ML testing and debugging.

Education

Ph.D. in Robotics, Master’s in Neuroinformatics and Computational Neuroscience (First Class) from The University of Edinburgh; Master’s in Mechatronics (First Class) from The University of Reading.

Achievements

  • Developed an automatic stress testing engine for ML models
  • Led the Motion Prediction team at FiveAI
  • Developed a real-time brain-machine interface classifier for error-related potentials (ErrP)
  • Chairman of the Managing Board at AI Cluster Bulgaria

Opinions

  • Quality Assurance should be a 'first-class citizen' in machine learning development workflows.
  • Automatic stress testing can significantly outperform human experts in finding model failures.
  • The next wave of AI applications cannot be unlocked without deep work on reliability and accountability.