AI Judges a Hackathon in Kathmandu, And Nails It

During five intense days at Sui Hacker House, builders coded, collaborated, and transformed ideas into working prototypes over 120 hours. Behind the scenes, we ran an experiment: could AI judge hackathon projects as well as humans? We used SuiSpark not just as the submission portal, but as a silent, parallel judge.

The AI Judging Panel

While elite human judges reviewed submissions, our SuiSpark AI agents silently analyzed the same projects. Each AI had a unique role:

Sui Project Analyst: Checked if the project was truly unique to Sui or just a clone from another chain.
Technical Analyst: Scrutinized the GitHub repo to evaluate code quality and execution.
Track Evaluator: Ensured submissions matched the hackathon’s specified tracks.
Venture Analyst: Assessed the MVP, go-to-market strategy, and revenue potential.

The AI reviewed demo videos, project descriptions, and GitHub repositories without bias or burnout: just sharp, data-driven analysis.
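A panel like the one above can be orchestrated as a set of role-specific scoring agents whose scores are aggregated into a final ranking. The sketch below is a hypothetical illustration of that structure, not SuiSpark's actual implementation: the real agents presumably call LLMs, so the scoring functions and submission fields here are stand-in assumptions to keep the example self-contained.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical submission features; the real system analyzes videos,
# descriptions, and repos rather than pre-computed numbers.
@dataclass
class Submission:
    name: str
    sui_native_features: int  # count of Sui-specific capabilities used
    repo_quality: float       # 0-1, e.g. from code review
    track_match: bool         # does it fit a hackathon track?
    revenue_signals: float    # 0-1, strength of the business case

# Each "agent" is a role-specific scoring function returning 0-1.
def sui_project_analyst(s: Submission) -> float:
    # Rewards projects that genuinely use Sui, not chain clones.
    return min(s.sui_native_features / 3, 1.0)

def technical_analyst(s: Submission) -> float:
    return s.repo_quality

def track_evaluator(s: Submission) -> float:
    return 1.0 if s.track_match else 0.0

def venture_analyst(s: Submission) -> float:
    return s.revenue_signals

AGENTS = [sui_project_analyst, technical_analyst,
          track_evaluator, venture_analyst]

def judge(submissions: list[Submission]) -> list[tuple[str, float]]:
    # Every agent scores every submission; the panel score is the mean.
    scored = [(s.name, mean(agent(s) for agent in AGENTS))
              for s in submissions]
    return sorted(scored, key=lambda t: t[1], reverse=True)

demo = [
    Submission("MoveMarket", 3, 0.9, True, 0.7),
    Submission("ChainClone", 0, 0.6, True, 0.4),
]
print(judge(demo))  # MoveMarket ranks first with panel score 0.9
```

Keeping each role as an independent function makes it easy to add, remove, or reweight judging perspectives without touching the aggregation step.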

The Results? Stunning Alignment

When the human and AI judges revealed their final rankings, the room fell silent. The lists were nearly identical. Same winner. Same standout projects. The AI didn't just get close; it mirrored the instincts and decisions of seasoned experts.

Why This Matters

This isn’t just about one hackathon. This is a glimpse into the future: AI-powered judging at scale, analyzing thousands of projects globally, so human experts can focus on mentoring and funding the best builders. SuiSpark doesn’t replace human judgment. It supercharges it.

Building Something on Sui? Let our AI agents take a look.

Submit your project at SuiSpark
