
Apoorva Kumar
Founder and CEO at Disseqt AI
About
I'm Apoorva Kumar, the Founder and CEO of Disseqt AI. My career has been defined by leading high-impact product teams at Microsoft, AWS, and SAP, where I specialized in cloud infrastructure and AI. Today, I am focused on solving the 'last mile' problem in AI reliability by building an AI governance and assurance platform for agentic enterprises. I am passionate about moving Responsible AI from abstract principles into production-ready systems. Having been part of the 500 Global Launch Program, I'm currently scaling our operations in the US and looking to connect with founders and investors who are navigating the complexities of AI risk, security, and global regulatory compliance. I offer deep expertise in RAIOps and a track record of building scalable, patent-backed technology solutions.
Networking
What I can offer
- Expertise in AI risk visibility and continuous assurance platforms
- Guidance on navigating global AI compliance standards (EU AI Act, NIST)
- Strategic product leadership and GTM for AI infrastructure
Looking for
- Founders building AI agents at scale
- Investors in AI infrastructure and data intelligence
- Enterprise leaders in regulated industries like BFSI
Background
Career
Began as a software developer at SAP, rising to Solutions Architect and Product Owner. Transitioned to senior product leadership at Informatica, AWS, and Microsoft before founding multiple AI-focused startups including Inspeq AI and Disseqt AI.
Education
Master of Science (MS) in Product & Technology Management, Leadership and Innovation from Technological University Dublin; Bachelor of Technology (B.Tech.) in Information Technology from IIIT Allahabad.
Achievements
- Selected for the 500 Global Launch Program in Silicon Valley
- Global Patent Lead for Microsoft Teams
- Launched Microsoft eSign and Document Hub
- Provided load testing sign-off for the Aarogya Setu COVID-19 app
- Established strategic partnership with HCLTech
Opinions
- Trust in AI is not a principle but a system that must be engineered, enforced, and continuously evaluated.
- AI governance shouldn't be 'bolted on' at the end but embedded by design across data and models.
- Companies waiting for AI regulations to become 'hard requirements' will be 1–2 years too late to compete.
- Current RAI frameworks have a blind spot because they govern models rather than complex production systems.