Kaggle Gemma 4 Good Hackathon — verified prize tree + Bidet AI submission strategy
Live page (login required): https://www.kaggle.com/competitions/gemma-4-good-hackathon. Updated 2026-05-09 evening with data pulled directly from Mark's logged-in Kaggle session; supersedes the earlier draft at this URL.
TL;DR
- $200K total. Stackable: a single Bidet AI submission can win Main Track + Impact Track + a Special Technology Track simultaneously — up to ~$70K from one APK.
- The prize hidden in plain sight: Cactus Prize ($10K) — "best local-first mobile or wearable application that intelligently routes tasks between models." Bidet AI is exactly this (Whisper-tiny → Gemma 4 routing on-device).
- Strongest fits for Bidet AI: Future of Education (Impact, $10K) · Digital Equity (Impact, $10K) · Cactus (Special Tech, $10K) · LiteRT (Special Tech, $10K) · Unsloth (Special Tech, $10K — needs the fine-tune showcase).
- Deadline: May 18, 2026 at 11:59 PM UTC. Today is May 9. 9 days.
- One submission per team. Pick a Track and stick.
- 70 % of the score (Impact & Vision 40 + Video Pitch 30) is the pitch + vision narrative; only 30 points are technical depth. The video is the star.
1. Verified prize structure
Main Track — $100,000
"Awarded to the best overall projects that demonstrate exceptional vision, technical execution, and potential for real-world impact."
| Place | Prize |
|---|---|
| 1st | $50,000 |
| 2nd | $25,000 |
| 3rd | $15,000 |
| 4th | $10,000 |
Impact Track — $50,000 (5 × $10K, one per domain)
| Track | Prize | Description (verbatim) |
|---|---|---|
| Health & Sciences | $10,000 | "Bridge the gap between humans and data. Build tools that accelerate discovery or democratize knowledge." |
| Global Resilience | $10,000 | "Build the systems of tomorrow—from offline, edge-based disaster response to long-range climate mitigation—that anticipate, mitigate, and respond to the world's most pressing challenges." |
| Future of Education | $10,000 | "Reimagine the learning journey by building multi-tool agents that adapt to the individual and empower the educator through seamless integration." |
| Digital Equity & Inclusivity | $10,000 | "Break down barriers through linguistic diversity, intuitive interfaces, and tools that help close the AI skills gap." |
| Safety & Trust | $10,000 | "Pioneer frameworks for transparency and reliability, ensuring AI remains grounded and explainable." |
Special Technology Track — $50,000 (5 × $10K, one per tool/framework)
"These five prizes recognize outstanding technical achievement using specific tools and frameworks within the ecosystem. Projects are eligible to win both a Main Track Prize and a Special Technology Prize."
| Tool | Prize | Description (verbatim) | Bidet AI fit |
|---|---|---|---|
| Cactus | $10,000 | "For the best local-first mobile or wearable application that intelligently routes tasks between models." | HIGHEST — this describes Bidet exactly. |
| LiteRT | $10,000 | "For the most compelling and effective use case built using Google AI Edge's LiteRT implementation of Gemma 4." | HIGH — already on this stack via LiteRT-LM. |
| llama.cpp | $10,000 | "For the best innovative implementation of Gemma 4 on resource-constrained hardware." | LOW — wrong stack for Bidet. |
| Ollama | $10,000 | "For the best project that utilizes and showcases the capabilities of Gemma 4 running locally via Ollama." | LOW for the phone; MEDIUM for a desktop companion. |
| Unsloth | $10,000 | "For the best fine-tuned Gemma 4 model created using Unsloth, optimized for a specific, impactful task." | MEDIUM — strong if we ship the per-user / per-corpus fine-tune; story extension only without it. |
The earlier draft was wrong: I had only verified Unsloth + Ollama, and the Cactus prize was missing entirely from my prior notes — a $10K fit lost in the noise. Corrected.
2. Eligibility and rules (verbatim from the Rules tab)
- Submissions per team: one (1). This is a hackathon, not a leaderboard competition. One shot.
- Maximum team size: five (5). Mergers allowed up to the team merger deadline.
- Winner license: CC-BY 4.0 for the winning Submission and source code. Pretrained models and input data with incompatible licenses do NOT need to be relicensed — Gemma weights are fine as-is.
- Cannot enter from multiple Kaggle accounts.
- External Data permitted if publicly available and equally accessible to all participants at no cost.
- No Competition Data is provided by Kaggle for this hackathon — bring your own.
3. Submission requirements (verbatim from the Overview > Submission Requirements tab)
A valid submission must contain all of the following:
- Kaggle Writeup — your project report. Title + subtitle + detailed analysis. Max 1,500 words. Submissions over this limit may be subject to penalty. Must select a Track to submit.
- Video — attach to the Media Gallery. ≤ 3 minutes, published to YouTube. "This is the most important part of your submission. Create a dynamic, engaging, and high-quality video that demonstrates your project in action."
- Code repository — link to a public repo (GitHub, public Kaggle Notebook, etc.). "Must be well-documented and clearly show the implementation of Gemma 4. This is non-negotiable and will be used to validate the authenticity of your project." No login wall, no paywall.
- Cover image — required for the Writeup.
If a private Kaggle Resource is attached to your public Writeup, it auto-publishes after the deadline.
4. Evaluation rubric (verbatim)
"Your project will be judged primarily on your video demo. This is your chance to create something exciting, compelling, and with the potential to be seen by millions. Your video should tell a story, demonstrate the real-world impact of your product, and leave the judges inspired."
"While the video is the star of the show, all projects must be backed by real, functional technology. The accompanying writeup and code repository will be used by our judges to verify that your product is not just a concept but a working proof-of-concept built on Gemma 4."
| Criterion | Points | What they look for |
|---|---|---|
| Impact & Vision | 40 | "How clearly and compellingly does your project address a significant real-world problem? Is the vision inspiring and does the solution have a tangible potential for positive change?" |
| Video Pitch & Storytelling | 30 | "How exciting, engaging, and well-produced is the video? Does it tell a powerful story that captures the viewer's imagination?" |
| Technical Depth & Execution | 30 | "How innovative is the use of Gemma 4's unique features? Is the technology real, functional, well-engineered, and not just faked for the demo?" |
Implication: 70 % of the score is the video + vision narrative. The technical 30 points are where the Whisper-mark fine-tune + Cactus model-routing + LiteRT + Unsloth-trained adapter all stack to demonstrate "innovative use of Gemma 4's unique features." We have the technical depth; the lever to pull is the video.
5. Timeline (verbatim)
| Date | What | Notes |
|---|---|---|
| April 2, 2026 | Start Date | Already past |
| May 18, 2026 11:59 PM UTC | Final Submission Deadline | 9 days from today (2026-05-09) |
"All deadlines are at 11:59 PM UTC on the corresponding day unless otherwise noted. The competition organizers reserve the right to update the contest timeline if they deem it necessary."
6. Recommended track + tool stack for Bidet AI
Pick ONE Main Track entry that simultaneously qualifies for ONE Impact Track and ONE Special Technology prize.
The cleanest stack:
- Main Track: submit at the top of the funnel (eligible for $10K-$50K).
- Impact Track: Future of Education ($10K). The tagline reads "multi-tool agents that adapt to the individual and empower the educator." We're the multi-tool agent (Whisper + Gemma) that adapts to the speaker; Mark as a teacher in the bio reinforces the "empower the educator" half. Digital Equity is the alternative if we lean harder on accessibility.
- Special Technology Prize: Cactus ($10K). The wording — "local-first mobile that intelligently routes tasks between models" — is Bidet AI's literal architecture: Whisper-tiny on CPU + Gemma 4 E4B on GPU/NPU, with the Whisper output routed into Gemma.
Backup Special Technology pick: LiteRT ($10K). If judges prefer to give Cactus to a different submission, LiteRT is the next-cleanest fit because we ship litert-community/gemma-4-E4B-it-litert-lm from Google AI Edge as the official path.
Stretch Special Technology pick: Unsloth ($10K). Requires we actually ship a fine-tuned Gemma 4 model (the BYOK Personalize narrative). LiteRT-LM converter risk for runtime adapters remains (see §7); we can de-risk by shipping a single canonical Unsloth-trained model merged into the base weights, ignoring the per-user adapter dream for v1.
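The Cactus-prize framing above (routing between a small ASR model and a larger LLM) is plain control flow at heart. A minimal sketch follows; the lambda stubs stand in for Whisper-tiny and Gemma 4, which would actually run via LiteRT-LM on-device, and the mode names are illustrative:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Pipeline:
    """Routes each request between two on-device models."""
    transcribe: Callable[[bytes], str]   # small ASR model on CPU (Whisper-tiny)
    rewrite: Callable[[str, str], str]   # larger LLM on GPU/NPU (Gemma 4 E4B)

    def run(self, audio: bytes, mode: str) -> str:
        raw = self.transcribe(audio)
        if mode == "raw":                # RAW transcript tab: skip the LLM entirely
            return raw
        return self.rewrite(raw, mode)   # "clean_for_me" / "clean_for_others"

# Stub models so the sketch runs without any weights:
pipe = Pipeline(
    transcribe=lambda audio: "um so mitochondria is the powerhouse",
    rewrite=lambda text, mode: f"[{mode}] {text.removeprefix('um so ').capitalize()}",
)
print(pipe.run(b"", "raw"))            # um so mitochondria is the powerhouse
print(pipe.run(b"", "clean_for_me"))   # [clean_for_me] Mitochondria is the powerhouse
```

The routing decision (which model runs, and in what order) is exactly what the Cactus wording rewards, so the demo video should make this branch visible.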
Stacking rule (from the Rules tab): "Projects are eligible to win both a Main Track Prize and a Special Technology Prize." Note the singular: one Special Technology Prize per project. So the upside is Main + Impact + Special, three buckets, max one prize per bucket. Theoretical ceiling: 1st Main ($50K) + Future of Education ($10K) + Cactus ($10K) = $70K.
7. Honest blocker — runtime LoRA on Gemma 4 + LiteRT-LM
The earlier draft already covered this; restating with one update.
- Today (2026-05-09): runtime LoRA adapter loading on Gemma 4 + LiteRT-LM is not supported. PEFT layer types are not plumbed through. The MediaPipe `.litertlm` converter for `model_type=GEMMA_4_E4B` was last reported failing with `Unknown special model` on HF discussion #7. Status of a fix is unknown.
- Workaround that works: train LoRA via Unsloth → merge into base weights (Unsloth issue #4820 is fixed) → re-convert to `.litertlm`. The whole 3.6 GB model gets re-shipped per fine-tune cycle. Not the "tiny adapter on the phone" picture, but defensible product behavior for a quarterly retrain.
- Action item: test the merged-weights → MediaPipe converter end-to-end for `GEMMA_4_E4B` THIS WEEK. If it still rejects E4B, we ship the base `litert-community/gemma-4-E4B-it-litert-lm` with our prompt scaffolding and skip the fine-tune narrative for v1; the Cactus + LiteRT prizes don't require fine-tuning.
8. What to do next (in priority order)
- Lock in the track choice. Recommended: Main Track + Future of Education + Cactus. Mark confirms.
- Cut the 3-minute video. This is 70 % of the score. Should show: a real student-style brain-dump → RAW transcript → Clean for me → Clean for others → Show what changed toggle → emphasize the on-device, no-cloud, no-telemetry framing. End on the accessibility framing (college students with attention-related learning differences).
- Polish the README + writeup. The 1,500-word Kaggle Writeup mostly already exists in PR #21's README + the brain-dump research dossier. Repackage as a Kaggle Writeup with a track-specific framing.
- Test the converter path for Gemma 4 E4B — decide before mid-week whether Unsloth fine-tune narrative is in or out for v1.
- Cover image — the existing branding/bidet-ai_icon.png + a screenshot of the three-tab UI works.
Sources
- Kaggle competition — Rules tab (verified via Mark's logged-in session, 2026-05-09)
- Kaggle competition — Overview / Submission Requirements / Timeline / Evaluation tab (same)
- Cross-reference earlier external sources retained in the prior version of this report (Unsloth tweet, Ollama tweet, ETIH coverage, HF discussion #7, Sasha Denisov tutorial, Groundy article).