Take a brain dump. AI cleans up your mess.
Bidet AI — Kaggle Gemma 4 Good Hackathon submission
Bidet AI is a 100% on-device Android app. You hit Record, you talk — scattered, repeated, stuttering, however your brain comes out — and Gemma 4 E4B cleans it up right there on your phone. No cloud. No upload. The audio and the transcript never leave the device. The model runs on the Tensor G3 CPU in my Pixel 8 Pro. It's been in my pocket all day, every day, while I built this.
The thesis is one line: it lets me be me, and the AI lets everyone else understand me.
Why I built this
I'm a middle-school teacher. Twenty-five years. I have adult ADD. I'm an introvert. I stutter when I get frustrated. I repeat words. I say "um" a lot. My brain is everywhere. It's scattered. I'm unfocused. I can't write. I overanalyze. I overthink. I put myself in the reader's shoes and ask how they'd want to read it, and the whole thing just becomes so overwhelming.
The single hardest part of my job is communicating with parents. Report card comments. Progress reports. The kind of thing where you're supposed to capture a kid in three sentences and you owe it to them to actually do it. I would procrastinate until the night before. Stay up until two in the morning. Six hours of overthinking. And I'd wind up falling back on generic clichés — your child is a joy to teach, so-and-so is doing well, keep up the good work — because the alternative was just so overwhelming. I have twenty-five years of examples of it.
The night I figured out I could just talk to an AI was the night that pattern broke. I put my grade book up on the screen, I hit record on the transcription, and I went through each student. Talked about the grades, about what happened, about what I remembered. "I'm just throwing it all in. You just say it." It was scattered, it was chaotic, it made no sense to anybody — including me when I re-read it. And then I dropped it into the AI.
What came back was professional and organized and personal, and it was me. The most genuine comments I've ever written. Six hours of dread became two and a half hours of actual work — the proofreading and tweaking — and I enjoyed it, because for the first time the comments expressed what I'd always wanted to say but couldn't get typed out of my fingers.
That's when I knew this had to be a product. And it had to be on the phone, because the phone is in my pocket 95% of my waking day, and what I'm dumping into it is my actual thinking about my actual students. That doesn't go to a cloud.
How it works
You open Bidet AI. You hit Record. You talk. Anywhere up to 45 minutes per session — a beep warns you five minutes before the cap.
Audio is captured in 30-second chunks and transcribed on-device by sherpa-onnx Moonshine. As each chunk lands, the raw text appears live. Stop when you're done.
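The session loop above — 30-second chunks, a 45-minute cap, a beep five minutes before the cap — can be sketched as follows. This is an illustrative sketch, not the app's actual Android code; `session_events` only models the timing, and the real app would feed each chunk to the sherpa-onnx Moonshine recognizer as it lands.

```python
CHUNK_SECONDS = 30                        # audio is captured in 30-second chunks
SESSION_CAP_SECONDS = 45 * 60             # hard cap per session
WARN_SECONDS = SESSION_CAP_SECONDS - 5 * 60  # beep 5 minutes before the cap

def session_events(total_seconds: int):
    """Model the recording timeline: yield ('chunk', start_time) for each
    30 s chunk and a single ('beep', t) warning event five minutes before
    the 45-minute cap. Recording stops at the cap no matter what."""
    events = []
    t = 0
    beeped = False
    end = min(total_seconds, SESSION_CAP_SECONDS)
    while t < end:
        if not beeped and t >= WARN_SECONDS:
            events.append(("beep", t))
            beeped = True
        events.append(("chunk", t))
        t += CHUNK_SECONDS
    return events
```

In the real app each `("chunk", t)` event corresponds to handing a 30-second audio buffer to the on-device recognizer, which is why the raw text can appear live rather than after you stop.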
Then you choose your output:
• Clean for me — Gemma 4 E4B rewrites your dump into the format your brain reads best. For me that's tight bullets, grouped by topic, with the throughlines surfaced. It's my ADD output, designed for how I learn.
• Clean for others — same brain dump, different audience. Email. Report card. Class notes. Or context for the next AI agent in the chain. The same raw dump becomes whichever shape the situation needs.
Both outputs are produced 100% on-device. Nothing crosses the network. The model file is 2.4 GB, lives in the app's external storage, loads on first launch.
Beyond me — for the kids
The wider lens is the part of this I care about most.
I teach kids with LDs. Not severe ones — the kind where, with organization and repetition, you can compensate. That's what I've done my whole life without realizing it. I have routines that just work.
But here's the thing I keep hitting in class. I know this kid knows the material. I teach history. They can tell me the story. They can sit there and walk me through Nixon from Peace with Honor to Watergate. They just can't get it typed out of their fingers. Or written on the page. Or formatted into the five-paragraph structure the rubric asks for.
Imagine if I could give them this:
"Tell me the story of Nixon, from Peace with Honor to Watergate," and let him go.
Let him get scattered. Let him remember a detail and throw it in. Let him talk for twelve minutes the way I talk for twelve minutes. Then run it through Clean for others with a "Cornell notes" prompt, and I can read his actual understanding instead of guessing at it from a half-page of stilted sentences.
That kid knows the concepts. He's in the concepts. He may not be able to write it. But like me, maybe he can tell me the story. Bidet AI is the bridge between what a brain knows and what a page can show.
The fine-tune
For the contest deliverable I'm using Unsloth to LoRA-fine-tune Gemma 4 E4B on my own brain-dump corpus — a few months of raw dumps paired with the cleaned outputs I actually used. The cleaned outputs were auto-labeled by Gemini 2.5 Pro with a fidelity-first prompt (preserve proper nouns, drop fillers, ≤30% length reduction). About 80 (raw, cleaned) training pairs. One epoch on a free Kaggle T4×2.
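The fidelity-first constraints on the auto-labeled pairs can be checked mechanically. Here is a minimal sketch of such a filter — the function name and the crude capitalized-word heuristic for proper nouns are my assumptions, not the actual pipeline — that rejects any (raw, cleaned) pair where the cleaning shrank the text by more than 30% or dropped a proper noun.

```python
import re

def keep_pair(raw: str, cleaned: str) -> bool:
    """Return True if `cleaned` is a faithful reduction of `raw`:
    at most a 30% length reduction, and every capitalized word in the
    raw dump (a crude stand-in for proper-noun detection) survives."""
    if len(cleaned) < 0.7 * len(raw):        # >30% shorter: too lossy
        return False
    caps_raw = set(re.findall(r"\b[A-Z][a-z]+\b", raw))
    caps_clean = set(re.findall(r"\b[A-Z][a-z]+\b", cleaned))
    return caps_raw <= caps_clean            # no proper noun lost
```

A filter like this runs before training, so the LoRA only ever sees pairs where the "cleaning" really was cleaning rather than summarizing — which is exactly the behavior the fine-tune is supposed to learn.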
The fine-tune does one thing: it makes Bidet AI's cleaning sound like me cleaning my own dumps, not like a generic LLM cleaning anyone's dumps. The on-device model learns my voice. The output stops feeling like AI-text and starts feeling like the comments I would have written if I could.
The pattern generalizes. Any user with fifty raw captures and a fidelity-first cleaning prompt can have a small on-device model that cleans in their voice. That's the Unsloth submission inside the larger Gemma 4 submission.
Closer
Bidet AI started because I needed to write report card comments without losing six hours of my life. It's becoming something bigger because the same shape — talk freely, let the model find the structure — works for any brain that goes faster than its fingers. Teachers. Students. Anyone whose useful thoughts come out scattered.
The whole thing runs on the phone in your pocket. It doesn't need a server. It doesn't need a subscription. It doesn't need your data to leave your hand.
Take a brain dump. AI cleans up your mess.
That's it.
Built by Mark Barnett · Bidet AI · Kaggle Gemma 4 Good Hackathon, May 2026

Stitched from three brain-dump sessions recorded into Bidet AI on 2026-05-10 (31m48s of raw). Transcription artifacts repaired in this version: "Bidet AI" restored where Moonshine heard "the day AI"; "Yaya" where it heard "Yahya"; "Unsloth" where it heard "Unsolved"; silence-fill hallucinations cut.