Moonshine + sherpa-onnx + Unsloth — Pre-Pivot Deep Research

Author: Claude Opus 4.7 (research agent)
Date: 2026-05-10 (ET)
Context: Tensor G3 NPU confirmed unavailable (LiteRT-LM 0.11 returns NOT_FOUND). Bidet AI v0.3 architecture has to land on routed STT → LLM. This document verifies, with citations, what we are integrating before we commit.


1. Moonshine architecture — verified

| Property | Value | Source |
|---|---|---|
| Model class | Encoder-decoder transformer | arXiv 2410.15608 [1]; HF transformers docs [2] |
| Position encoding | RoPE (Rotary Position Embeddings) — NOT absolute | arXiv 2410.15608 [1] |
| Encoder activation | GELU; encoder has NO SwiGLU | HF transformers MoonshineConfig [2] |
| Decoder activation | SiLU; decoder uses SwiGLU FF + cross-attn | HF transformers [2]; paper [1] |
| Audio input | Raw waveform (NOT mel-spectrogram); a learned conv stem replaces mel preprocessing | github.com/usefulsensors/moonshine README [3] |
| Sample rate | 16 kHz, mono PCM, normalized to [-1.0, 1.0] | github.com/usefulsensors/moonshine [3] |
| Recommended max audio | ≤ 30 seconds per call; default VAD segment 15 s | github.com/usefulsensors/moonshine [3] |
| Layers (tiny default) | 6 encoder + 6 decoder hidden layers | HF MoonshineConfig defaults [2] |
| Streaming variant | Yes — separate "streaming" checkpoints with KV-caching for incremental audio. Tiny-Streaming = 34M params; Small-Streaming = 123M; Medium-Streaming = 245M | UsefulSensors/moonshine-streaming-* on HF [4]; arXiv 2602.12241 "Moonshine v2 Ergodic Streaming" [5] |
| Non-streaming params | Tiny = 27M, Base = 61M (sometimes cited as 58M, depending on source) | HF model card UsefulSensors/moonshine-tiny [6]; UsefulSensors/moonshine [7] |
| Tiny WER (LibriSpeech clean) | 4.55 (model card numbers) | UsefulSensors/moonshine-tiny model card [6] |
| Tiny WER (LibriSpeech other) | 11.68 | UsefulSensors/moonshine-tiny model card [6] |
| FLOPs vs Whisper-tiny @ 10 s | 5× less compute, no WER regression | arXiv 2410.15608 abstract [1] |
| License | MIT | UsefulSensors/moonshine HF model card [7] |
| Repo | github.com/usefulsensors/moonshine (older); github.com/moonshine-ai/moonshine (current) | [8] |

Key architectural takeaway: Moonshine is an encoder-decoder transformer like Whisper but with two structural wins — (a) RoPE replaces absolute position embeds so it handles variable-length audio without zero-padding, and (b) the encoder uses a learned conv stem on raw 16 kHz waveform, sidestepping mel-spectrogram preproc entirely. The non-padding property is where the 5× compute reduction comes from at inference: no wasted computation on padded silence.

Streaming reality check: the original Moonshine-Tiny is non-streaming (give it the full clip, it returns the full transcript). For incremental output as the user is still talking, the "Streaming" variants (Tiny-Streaming 34M, etc.) are needed. They use KV-cache and an "ergodic" encoder that processes audio in chunks. This is a separate model file, not a runtime mode flip on the same weights.
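The input contract from the table above (16 kHz mono PCM, floats in [-1.0, 1.0]) is worth pinning down in code, since Android's AudioRecord hands back signed 16-bit integers. A minimal conversion sketch — the function names here are ours for illustration, not from any SDK:

```kotlin
import android.annotation.SuppressLint
import android.media.AudioFormat
import android.media.AudioRecord
import android.media.MediaRecorder

// Moonshine's input contract: 16 kHz, mono, float PCM in [-1.0, 1.0].
const val SAMPLE_RATE = 16_000

@SuppressLint("MissingPermission") // RECORD_AUDIO is requested elsewhere in the app
fun newMicRecorder(): AudioRecord = AudioRecord(
    MediaRecorder.AudioSource.MIC,
    SAMPLE_RATE,
    AudioFormat.CHANNEL_IN_MONO,
    AudioFormat.ENCODING_PCM_16BIT,
    AudioRecord.getMinBufferSize(
        SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT
    ) * 2
)

// Convert one chunk of signed 16-bit samples to normalized floats for Moonshine.
fun pcm16ToFloat(pcm: ShortArray, validSamples: Int): FloatArray =
    FloatArray(validSamples) { i -> pcm[i] / 32768.0f }
```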


2. sherpa-onnx — verified

| Property | Value | Source |
|---|---|---|
| Full name | sherpa-onnx ("ONNX-Runtime-based speech library from the k2-fsa / Next-Gen Kaldi team") | github.com/k2-fsa/sherpa-onnx [9] |
| Pronunciation | "sherpa-O-N-N-X" — yes, Mark's read is correct | conventional (ONNX = Open Neural Network Exchange) |
| Built on | ONNX Runtime + k2-fsa decoder + Kaldi feature library | github README [9] |
| License | Apache-2.0 (verified via LICENSE file) | sherpa-onnx/LICENSE [10] |
| Latest release | v1.13.1, May 8 2026 | github releases [11] |
| Official AAR | YES — sherpa-onnx-1.13.1.aar ~53.9 MB; rknn variant 27.7 MB | github releases page [11] |
| Maven Central | No first-party publication. Third-party mirror at com.bihe0832.android:lib-sherpa-onnx (currently v6.25.12, not the latest). The k2-fsa team distributes via GitHub releases + JitPack. | mvnrepository [12]; jitpack.yml [13] |
| Supported model families | Whisper, Moonshine (incl. Moonshine v2 multilingual), Zipformer, Paraformer, Conformer, NeMo Canary, etc. | sherpa-onnx docs [14]; README [9] |
| Streaming Moonshine | Confirmed supported — pre-converted INT8 ONNX models published as sherpa-onnx-moonshine-base-en-quantized-2026-02-27 and sherpa-onnx-moonshine-tiny-en-int8. Real-time streaming + microphone-VAD examples included. | k2-fsa.github.io/sherpa/onnx/moonshine [15] |
| Android JNI surface | Kotlin/Java APIs (OfflineRecognizer, OnlineRecognizer, VoiceActivityDetector, etc.) | k2-fsa.github.io/sherpa/onnx/android [16] |
| Native .so sizes (arm64-v8a) | libonnxruntime.so 15 MB + libsherpa-onnx-jni.so 3.7 MB → ~7.2 MB compressed inside the APK | sherpa Android build docs [17] |
| Supported ABIs | arm64-v8a, armeabi-v7a, x86_64, x86 | sherpa Android build docs [17] |
| Kotlin/Java APIs for Moonshine | Yes — explicitly added. Two-pass ASR Android demo APKs published per release. | sherpa-onnx CHANGELOG [18] |

Integration shape on Android: drop the AAR into app/libs/, add implementation files('libs/sherpa-onnx-1.13.1.aar') to Gradle, place the Moonshine ONNX files (encoder + decoder + tokenizer/tokens.txt) under assets/, then call OfflineRecognizer.fromConfig(...) with paths. ~50 LOC swap from a whisper.cpp setup.
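As a sketch of that wiring: whether the entry point is a constructor or a fromConfig(...) factory differs across sherpa-onnx releases, and the asset file names below mirror the published Moonshine bundle layout — both are assumptions to verify against the Kotlin sources in the AAR we vendor.

```kotlin
import android.content.res.AssetManager
import com.k2fsa.sherpa.onnx.FeatureConfig
import com.k2fsa.sherpa.onnx.OfflineModelConfig
import com.k2fsa.sherpa.onnx.OfflineMoonshineModelConfig
import com.k2fsa.sherpa.onnx.OfflineRecognizer
import com.k2fsa.sherpa.onnx.OfflineRecognizerConfig

// Builds an offline (non-streaming) Moonshine recognizer from files under assets/.
// Config field and file names follow the sherpa-onnx Kotlin API and the published
// moonshine-tiny-en-int8 bundle; double-check both against the vendored release.
fun buildMoonshineRecognizer(assets: AssetManager): OfflineRecognizer {
    val config = OfflineRecognizerConfig(
        // Generic feature config; Moonshine consumes raw waveform, so defaults
        // should suffice (assumption).
        featConfig = FeatureConfig(sampleRate = 16_000, featureDim = 80),
        modelConfig = OfflineModelConfig(
            moonshine = OfflineMoonshineModelConfig(
                preprocessor = "moonshine/preprocess.onnx",
                encoder = "moonshine/encode.int8.onnx",
                uncachedDecoder = "moonshine/uncached_decode.int8.onnx",
                cachedDecoder = "moonshine/cached_decode.int8.onnx",
            ),
            tokens = "moonshine/tokens.txt",
            numThreads = 2,
            modelType = "moonshine", // may be optional when the moonshine block is set
        ),
    )
    return OfflineRecognizer(assetManager = assets, config = config)
}
```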

Pre-converted Moonshine ONNX bundles ship with tokens.txt, so we don't have to convert anything ourselves — sherpa-onnx hosts them as release downloads.


3. Integration into bidet-phone — sized

3a. What changes in the codebase

| Change | Estimate | Risk |
|---|---|---|
| Replace whisper.cpp git submodule with the sherpa-onnx AAR | -1 submodule, +1 app/libs/*.aar (53.9 MB) | low |
| Replace Whisper-tiny GGUF with Moonshine-Tiny ONNX (encoder + decoder + tokens.txt, ~32 MB total INT8) | swap assets/whisper-tiny-q8.gguf for 3 ONNX files | low |
| Rewrite WhisperEngine.kt (the JNI wrapper around whisper.cpp) → MoonshineEngine.kt calling the sherpa-onnx Kotlin API (sketched after this table) | ~80–120 LOC delta (interface stays similar: transcribe(audioFloats: FloatArray): String) | medium |
| Update BuildConfig flavor name whisper → moonshine (or rename it to routed to be model-agnostic) | applicationIdSuffix + per-flavor strings | low |
| Drop NDK build complexity from app/build.gradle (no more externalNativeBuild needed for sherpa-onnx — the AAR ships native libs) | net simplification, -30 LOC of CMake/Gradle | low |

Total Kotlin LOC delta: ~100–150 LOC net (replace whisper JNI wrapper, simplify Gradle).
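To make the rewrite row concrete, a minimal sketch of the replacement wrapper, reusing buildMoonshineRecognizer() from the §2 sketch. The decode/getResult sequence follows the sherpa-onnx offline Kotlin examples; the release() calls and exact signatures should be verified against the vendored release.

```kotlin
import android.content.res.AssetManager
import com.k2fsa.sherpa.onnx.OfflineRecognizer

// Drop-in replacement for WhisperEngine.kt: same one-shot transcribe() surface,
// but backed by sherpa-onnx + Moonshine instead of whisper.cpp JNI.
class MoonshineEngine(assets: AssetManager) {

    private val recognizer: OfflineRecognizer = buildMoonshineRecognizer(assets)

    // audioFloats: 16 kHz mono PCM in [-1.0, 1.0], ≤ ~30 s per call.
    fun transcribe(audioFloats: FloatArray): String {
        val stream = recognizer.createStream()
        stream.acceptWaveform(audioFloats, sampleRate = 16_000)
        recognizer.decode(stream)
        val text = recognizer.getResult(stream).text
        stream.release()
        return text
    }

    fun release() = recognizer.release()
}
```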

3b. APK size delta

Current bidet-phone APK: ~186 MB (per memory file project_bidet_phone_v0.1_working_2026-05-09.md).

| Component | Current (whisper) | New (moonshine + sherpa-onnx) | Delta |
|---|---|---|---|
| Native libs | whisper.cpp .so ~5–8 MB (we built ourselves) | sherpa-onnx .so (libonnxruntime + libsherpa-onnx-jni) ~7.2 MB compressed | +0 to +2 MB |
| ASR model file | Whisper-tiny GGUF Q8 ~40 MB | Moonshine-Tiny ONNX (encoder ~30 MB + decoder ~104 MB ≈ 134 MB) OR the INT8 quantized variant ~40 MB | -0 to +90 MB depending on quantization |
| Net APK delta | | | +0 to +90 MB depending on which Moonshine variant we pick |

Caveat: the 30 MB / 104 MB encoder/decoder split from the GitHub README is for the .ort flatbuffer format (memory-mappable, not the smallest). The published sherpa-onnx INT8 quantized bundle (sherpa-onnx-moonshine-tiny-en-int8) is in the 30–50 MB total range — that's what we'd ship.

Safe APK target: ~196–210 MB. Worth re-measuring the day we land it.

3c. Runtime memory peak (Moonshine + Gemma 4 E2B simultaneously)

| Resident component | Peak RAM (Pixel 8 Pro, 12 GB) |
|---|---|
| Android system + Bidet UI + foreground service | ~250 MB |
| Moonshine-Tiny ONNX (encoder + decoder + activations) | ~80–120 MB |
| sherpa-onnx + ONNX Runtime native heap | ~50–80 MB |
| Gemma 4 E2B loaded via LiteRT-LM (verified 2026-05-09) | ~2.59 GB on disk; ~3.0–3.2 GB peak inference |
| Audio capture buffers + spectrogram pipeline | ~30 MB |
| Total peak | ~3.6–3.8 GB resident |

Pixel 8 Pro has 12 GB RAM. Comfortable headroom. E4B (3.66 GB on disk → ~4.5 GB peak) would push closer to the OOM line — confirms the rule we already wrote: ship E2B for the contest demo.

3d. Streaming partial-transcript UX

Yes — sherpa-onnx supports streaming Moonshine via the OnlineRecognizer API plus the streaming Moonshine checkpoints. Microphone + VAD examples are in the Android demo. We can surface partial text to the RAW tab while the user is still talking. This is a major UX win over the current whisper.cpp setup, which is batch-only.

Sequence: OnlineStream accepts 100 ms PCM chunks → recognizer.decode(stream) returns partial text → emit to RecordingViewModel.partialText StateFlow → bind to TextView. ~30 LOC addition.
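A sketch of that loop, assuming the streaming Moonshine checkpoint is loaded into an OnlineRecognizer. Method names follow the sherpa-onnx Kotlin streaming examples (verify against the vendored release); partialText here stands in for the RecordingViewModel.partialText StateFlow named above.

```kotlin
import com.k2fsa.sherpa.onnx.OnlineRecognizer
import kotlinx.coroutines.flow.MutableStateFlow

// Feed 100 ms PCM chunks from the mic and surface partial transcripts to the UI.
val partialText = MutableStateFlow("")  // exposed by RecordingViewModel in the real app

fun runStreamingLoop(recognizer: OnlineRecognizer, chunks: Sequence<FloatArray>) {
    val stream = recognizer.createStream()
    for (chunk in chunks) {                      // each chunk ≈ 1600 samples @ 16 kHz
        stream.acceptWaveform(chunk, sampleRate = 16_000)
        while (recognizer.isReady(stream)) {     // decode whatever is decodable so far
            recognizer.decode(stream)
        }
        partialText.value = recognizer.getResult(stream).text  // partial hypothesis
    }
    stream.release()
}
```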

3e. Blockers / risks

  1. NDK ABI cohabitation with LiteRT-LM. Both LiteRT-LM and sherpa-onnx ship libonnxruntime.so (LiteRT-LM uses TFLite plus a built-in ORT; sherpa-onnx ships its own ORT). If both AARs ship libonnxruntime.so at different versions, the second one loaded silently overwrites the first, and you get version-mismatch crashes. Mitigation: extract sherpa-onnx's libs and use Gradle packagingOptions { pickFirst "**/libonnxruntime.so" } deliberately — then verify that whichever version wins works for both stacks (see the Gradle sketch after this list). This is the #1 risk and needs a 30-min spike on day 1 of the migration.
  2. ProGuard/R8. sherpa-onnx Kotlin bindings use reflection to load native methods. Add -keep class com.k2fsa.sherpa.onnx.** { *; } to proguard-rules.pro. Trivial.
  3. Maven Central unavailability. We have to vendor the AAR (commit it to app/libs/ or pull it via Gradle from a URL at build time). Not a blocker — just operationally unfamiliar.
  4. The streaming Moonshine model file is a different download. The standard sherpa-onnx-moonshine-tiny-en-int8 bundle is non-streaming. To get partial output we need the streaming variant — verify it is published as an INT8 sherpa-onnx bundle (it appears to be for moonshine-tiny streaming per [4], but double-check before committing in the PR).
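A consolidated sketch of the mitigations in items 1 and 2, written as Gradle Kotlin DSL (our current build.gradle is Groovy; the translation is mechanical, and the block is named packaging in AGP 8.x vs packagingOptions in older versions — an assumption to confirm against our AGP):

```kotlin
// app/build.gradle.kts — assumed Kotlin-DSL form of the Groovy snippets above.
android {
    packaging {
        jniLibs {
            // Both LiteRT-LM and sherpa-onnx ship libonnxruntime.so. Keep exactly
            // one copy on purpose, then verify on-device that BOTH stacks still
            // load (the day-1 spike from item 1).
            pickFirsts += "**/libonnxruntime.so"
        }
    }
}

dependencies {
    // Vendored AAR — no first-party Maven Central publication, per §2.
    implementation(files("libs/sherpa-onnx-1.13.1.aar"))
}

// And in proguard-rules.pro (item 2):
//   -keep class com.k2fsa.sherpa.onnx.** { *; }
```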

4. Where Unsloth fine-tune fits

This is the load-bearing question for the Unsloth $10K side prize.

Option A — Fine-tune Gemma 4 E2B on the 79 (raw_text → clean_text) triples

| Dimension | Reality |
|---|---|
| Data we have | 79 triples per project_unsloth_finetune_backup_2026-05-09.md — Mark-voice raw + cleaned-for-others target text |
| Data format | text → text; no audio needed for this option |
| Infra | Kaggle T4×2 free tier or Colab Pro+. The Unsloth recipe is documented in reference_kaggle_gemma4_prize_tree_2026-05-09.md. LoRA r=16, alpha=32, ~30–60 min training run on a T4. |
| Output → phone path | Unsloth merge_and_unload() → safetensors → MediaPipe LiteRT-LM converter → .litertlm → assets/. The path is documented, but the converter has a known E4B blocker (HF discussion #7). E2B is verified to convert. |
| Time to first usable result | 1 evening of data prep + 1 hour of training + 2 hours of convert/test = ~1 day. |
| Improvement target | Stops the "Hasspin / Zenabria" hallucinations. Pins Mark's vocabulary (St. Francis, Barnett, Legacy Soil, OMI, TP3, etc.). Improves cleaning fidelity — directly visible in the Cleanup-tab demo. |
| Unsloth $10K prize fit | Direct fit. Prize wording (per reference_kaggle_gemma4_prize_tree_2026-05-09.md): "For the best fine-tuned Gemma 4 model created using Unsloth, optimized for a specific, impactful task." Personalized brain-dump cleanup IS that. |

Option B — Fine-tune Moonshine-Tiny on Mark's voice (acoustic adaptation)

| Dimension | Reality |
|---|---|
| Data we have | ~22.5 h Mark-voice corpus of paired audio + transcripts, per project_whisper_finetune_setup_2026-05-07.md. Originally collected for a Whisper-large-v3 LoRA. The same data works for a Moonshine acoustic fine-tune (audio is audio). |
| Data format | (audio_clip.wav, transcript.txt) pairs. We have this. |
| Tooling | pierre-cheneau/finetune-moonshine-asr on GitHub [19]. Full fine-tune (no LoRA support yet — confirmed via fetch). Curriculum learning supported. The ONNX export script convert_for_deployment.py ships with it. |
| Infra | T4 borderline — 27M params is small, but a full fine-tune is more expensive than LoRA. Likely Colab A100 or the Apex GPU (RTX 4070) overnight. |
| Output → phone path | convert_for_deployment.py → ONNX → manually wrap in sherpa-onnx packaging conventions (encoder.onnx + decoder.onnx + tokens.txt). Untested for sherpa-onnx compat — we'd be the first to do this for Mark's voice. Real risk of an evening of debugging. |
| Time to first usable result | 2–3 days minimum (data prep already done; training overnight; ONNX-to-sherpa packaging trial-and-error). |
| Improvement target | Reduce WER on Mark's accent + the classroom acoustic environment. Gains likely 1–3 WER points. Marginal vs Option A. |
| Unsloth $10K prize fit | NO — Moonshine isn't Gemma. The Unsloth prize is Gemma-specific. |

Option C — Both, sequenced (Moonshine for voice, Gemma for vocabulary)

Sequencing: Option A first (it's the prize-eligible one and ships in 1 day), Option B as bonus week-2 polish if there's time.

Recommendation: Option A only for contest deadline. Option B is a v0.4 stretch goal.

Reasoning:

1. Option A is the only path that wins the Unsloth $10K (a Moonshine fine-tune doesn't qualify — wrong model family).
2. Option A is 1 day of work vs Option B's 2–3 days minimum.
3. Option A's improvement is directly visible in the demo (the cleanup tab generates Mark-correct output instead of hallucinating "Hasspin"). Option B's improvement (1–3 WER points on raw STT) is invisible to a judge watching a 3-min video.
4. Option A doesn't risk APK / sherpa-onnx packaging surprises.

Carry Option B as v0.4 backlog. If we win, finish Moonshine fine-tune in June.


5. Cactus prize narrative fit

Verbatim prize description (from reference_kaggle_gemma4_prize_tree_2026-05-09.md [24], sourced via Mark's logged-in Kaggle Chrome MCP capture 2026-05-09):

Cactus Special Technology Prize — $10,000: "For the best local-first mobile or wearable application that intelligently routes tasks between models."

Does Moonshine → Gemma 4 routing fit?

Yes — almost word-for-word.

The one real concern: Cactus's own SDK

Cactus runs a separate branded hackathon (the "Cactus x DeepMind Hackathon" via AI Tinkerers) where their cactus-compute/functiongemma-hackathon repo requires building against the Cactus SDK [20]. That is NOT the Kaggle Cactus Special Tech Prize.

The Kaggle Cactus prize description (verified via Mark's Kaggle session) does not require their SDK. It rewards the architectural pattern (local-first, routed). Bidet's Moonshine + Gemma routing is eligible. Confirmed by the prize's own wording — it doesn't say "using Cactus" anywhere.

Hedge: if we want to double-down, we could write a tiny CactusFallback.kt that detects when Gemma 4 E2B refuses or hits a complexity ceiling and falls back to Gemini cloud (Cactus's signature pattern). Adds ~30 LOC, narrative gold. This is what Cactus's own blog champions [21].
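A sketch of that hedge — CactusFallback.kt is our hypothetical file, the degeneracy heuristic is a placeholder, and the cloud call is stubbed, since wiring an actual Gemini client is a separate decision:

```kotlin
// CactusFallback.kt (hypothetical) — confidence-level routing layered on top of
// task routing: keep Gemma's local output unless it looks empty or degenerate,
// then optionally hand the raw text off to a cloud model.
class CactusFallback(
    private val cloudCleanup: suspend (String) -> String,  // e.g. a Gemini API wrapper
    private val cloudEnabled: Boolean,
) {
    // Crude placeholder heuristic: empty output, or suspiciously short vs the input.
    private fun looksDegenerate(cleaned: String, raw: String): Boolean =
        cleaned.isBlank() || cleaned.length < raw.length / 10

    suspend fun route(raw: String, localCleaned: String): String =
        if (cloudEnabled && looksDegenerate(localCleaned, raw)) {
            cloudCleanup(raw)  // frontier cloud handles the hard fraction
        } else {
            localCleaned       // local-first: Gemma's answer stands
        }
}
```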

Past Cactus-funded projects / what Cactus values

From the docs.cactuscompute.com blog [21]:

- "0.3 sec from end-of-audio to first token" on an M4 Mac with Gemma 4 — they care about latency.
- They champion single integrated multimodal models philosophically (Gemma 4 reasons over raw audio in one pass).
- BUT their hybrid feature (Cactus = "low-latency engine for mobile devices & wearables") explicitly supports cloud handoff — a small on-device model handles 80%, a frontier cloud model handles 20%.

The "single-model" philosophy is their preference for the audio→text→reasoning pipeline (Gemma does it all). The "routing" they reward in the Kaggle prize is task routing (small for STT, large for LLM, OR local for fast / cloud for complex).

These aren't contradictory — they're two layers of "routing":

1. Task-level routing (STT vs cleanup) — what Bidet does today.
2. Confidence-level routing (local vs cloud) — what Cactus champions.

We can do both in v0.3 by adding a 30-LOC "if Gemma's response was empty/low-confidence, optionally fall back to Gemini" path. That hits BOTH definitions of routing and locks the Cactus narrative.

Is there any reason the prize would prefer single-model?

Reading the Cactus blog posts [21] suggests yes-ish — they philosophically prefer single-model because it eliminates latency between stages. But the prize text doesn't say that — it says "intelligently routes tasks between models," which is explicitly multi-model.

My read: Cactus wrote the prize text as a separate definition from their own product preference. They're funding the broader ecosystem of routed local-first apps. Moonshine→Gemma fits.


6. Compatibility / risk matrix

| Risk | Severity | Mitigation |
|---|---|---|
| Two AARs both ship libonnxruntime.so at different versions → runtime crash | HIGH | Day-1 spike: extract both AARs, check ORT versions, use Gradle pickFirst. Prefer the newer ORT. |
| Moonshine streaming-variant ONNX not in the sherpa-onnx pre-converted bundles | MEDIUM | Verify the download URL early; if missing, fall back to non-streaming and post the full transcript on stop |
| ProGuard / R8 strips JNI methods | LOW | Add a keep-rule for com.k2fsa.sherpa.onnx.** |
| OOM on Pixel 8 Pro running Moonshine + Gemma 4 E2B simultaneously | LOW | Verified peak ~3.8 GB << 12 GB. Comfortable. E4B would change the answer. |
| sherpa-onnx Apache-2.0 vs Moonshine MIT vs Gemma 4 Apache-2.0 vs Bidet Apache-2.0 license conflicts | NONE | Apache-2.0 and MIT are mutually compatible. Gemma 4 was relicensed to Apache-2.0 in 2026 [22] (huge — this used to be a Gemma-Terms blocker). All four are MIT/Apache-2.0; full commercial + derivative use permitted. |
| Useful Sensors' blog calls Moonshine fine-tuning a "commercial service" — does this restrict our right to fine-tune? | NONE | The model is MIT-licensed; fine-tuning is permitted by the license [23]. The "commercial service" line is them offering a paid product, not gating the OSS license. |

7. Recommendation

v0.3 architecture diagram

┌─────────────────────────────────────────────────────┐
│  bidet-phone v0.3 (Android, Pixel 8 Pro target)    │
│                                                     │
│   16 kHz mic ──► AudioCaptureService                │
│                       │                             │
│                       │ Float[] PCM (100 ms chunks)  │
│                       ▼                             │
│                ┌──────────────────┐                 │
│                │  sherpa-onnx     │ ← AAR 53.9 MB    │
│                │  OnlineRecognizer│                 │
│                │  + Moonshine-Tiny│ ← ONNX ~40 MB    │
│                │  Streaming       │                 │
│                └──────┬───────────┘                 │
│                       │ partial text events         │
│                       ▼                             │
│              ┌─────────────────────┐                │
│              │ RAW tab (live)      │                │
│              └─────────┬───────────┘                │
│                        │ on stop: full transcript    │
│                        ▼                             │
│                ┌──────────────────┐                 │
│                │ LiteRT-LM        │                 │
│                │ + Gemma 4 E2B    │ ← .litertlm 2.6GB│
│                │   (UNSLOTH-      │   ← fine-tuned  │
│                │    fine-tuned    │     on 79 Mark   │
│                │    on Mark)      │     triples      │
│                │  Backend.CPU     │                  │
│                └──────┬───────────┘                 │
│                       │ cleaned text                │
│                       ▼                             │
│                ┌──────────────────┐                 │
│                │ CLEAN tab        │                 │
│                │ (3 modes:        │                 │
│                │  for-others,     │                 │
│                │  for-self,       │                 │
│                │  bullet-list)    │                 │
│                └──────────────────┘                 │
│                                                     │
│   [Optional Cactus-narrative fallback: if Gemma     │
│    output empty/low-conf, route to Gemini cloud]    │
└─────────────────────────────────────────────────────┘

Right Unsloth path

Option A: fine-tune Gemma 4 E2B on the 79 (raw_text → clean_text) triples.

- 1 evening of data prep + 1 hour of training on a Kaggle T4 + 2 hours of convert/test = ~1 day end-to-end
- Wins the Unsloth $10K side prize (a Moonshine fine-tune doesn't qualify)
- The improvement is visible in the demo video (the cleanup tab generates Mark-correct vocab)
- LoRA r=16, alpha=32, merged before the MediaPipe convert

Contest framing

Kaggle Gemma 4 Hackathon Writeup angle:

"Bidet AI is a local-first mobile brain-dump cleanup tool that intelligently routes between two models: Moonshine-Tiny (27M params) handles real-time speech-to-text on-device, and a Gemma 4 E2B fine-tuned with Unsloth on a personalized vocabulary handles the cleanup. Total RAM footprint 3.8 GB — fits a Pixel 8 Pro with no NPU, no cloud, no compromise."

This single sentence hits three prize buckets:

1. Cactus Special Tech ($10K) — "intelligently routes tasks between models" verbatim
2. Unsloth Special Tech ($10K) — "fine-tuned Gemma 4 with Unsloth, optimized for a specific impactful task"
3. Future of Education Impact ($10K) OR Digital Equity Impact ($10K) — depends on framing (lecture-recorder for nephews vs accessibility tool)

Plus eligible for Main Track ($10K–$50K). Stacking ceiling per Kaggle rules: $70K (Main + Impact + ONE Special Tech). Realistic-best target: $30K (4th Main + Future of Ed + Cactus) or $30K (4th Main + Impact + Unsloth).

DEV.to writeup angle (separate prize ecosystem): different cover letter — emphasize the Apache-2.0 licensing story ("Gemma 4 is now Apache 2.0, this app ships free forever"), the on-device privacy story (raw audio never leaves the phone), and the educator-built-it angle.

Right "what to ship in next 7 days"

| Day | Task | Owner |
|---|---|---|
| Sun 5/10 (today) | Land the Gemma flavor pre-warm + CPU backend (in flight); start auto-distilling 100 brain-dump triples through Gemini 2.5 Pro on the TP3 export | parallel agents |
| Mon 5/11 | Day-1 spike: prove sherpa-onnx + LiteRT-LM cohabit on the Pixel 8 Pro (both load libonnxruntime.so) — 30-min throwaway branch. If they conflict, this is the single biggest project blocker — flag immediately. | Claude Code |
| Mon 5/11 evening | First Unsloth E2B fine-tune run on Kaggle T4 (LoRA r=16, ~30 min train, then merge) | Cursor cloud agent |
| Tue 5/12 | MediaPipe-convert merged E2B → .litertlm; side-by-side test on the phone vs stock E2B | Claude Code |
| Wed 5/13 | bidet-phone v0.3 branch: swap whisper.cpp → sherpa-onnx + Moonshine-Tiny (streaming variant if compat is OK, else non-streaming); rename flavor whisper → moonshine | Claude Code |
| Thu 5/14 | Optional Cactus-narrative cloud fallback (~30 LOC) — gives the writeup the Cactus-philosophy halo | Claude Code |
| Thu 5/14 evening | Video shoot day (Mark records demo: brain-dump → cleanup → 3 modes, side-by-side fine-tuned vs stock) | Mark |
| Fri 5/15 | Writeup polish (1500-word Kaggle), cover image, repo cleanup, README | Cursor cloud agent |
| Sat 5/16 | Buffer day — fix whatever broke | all |
| Sun 5/17 | Final test on a fresh Pixel 8 Pro install; record a fallback video angle for redundancy | Mark + Claude |
| Sun 5/17 23:59 UTC | Submit | — |

Contingency if the Day-1 (Mon) spike shows a sherpa-onnx + LiteRT-LM ABI conflict: drop to Plan B — keep whisper.cpp + Whisper-tiny from the v0.2 build and ship only the Unsloth-fine-tuned Gemma flavor. Still hits the Unsloth $10K + the Cactus framing (Whisper → Gemma is also routed). We lose nothing but the "Moonshine 5× less compute" sizzle. Re-attempt Moonshine in v0.4 post-contest.


Sources

[1] arXiv 2410.15608 — Moonshine: Speech Recognition for Live Transcription and Voice Commands. https://arxiv.org/abs/2410.15608
[2] HF transformers docs — Moonshine model. https://huggingface.co/docs/transformers/en/model_doc/moonshine
[3] github.com/usefulsensors/moonshine — README and config docs (16 kHz, raw waveform, ≤ 30 s).
[4] HF — UsefulSensors/moonshine-streaming-medium / -small / -tiny. https://huggingface.co/UsefulSensors/moonshine-streaming-medium
[5] arXiv 2602.12241 — Moonshine v2: Ergodic Streaming Encoder ASR. https://arxiv.org/html/2602.12241v1
[6] HF model card — UsefulSensors/moonshine-tiny. https://huggingface.co/UsefulSensors/moonshine-tiny — LibriSpeech clean 4.55, other 11.68; MIT; 27M params.
[7] HF model card — UsefulSensors/moonshine. https://huggingface.co/UsefulSensors/moonshine — MIT.
[8] github.com/moonshine-ai/moonshine (current upstream).
[9] github.com/k2-fsa/sherpa-onnx — README, Apache-2.0, supported model families, Android/iOS support.
[10] sherpa-onnx LICENSE. https://github.com/k2-fsa/sherpa-onnx/blob/master/LICENSE
[11] sherpa-onnx releases — v1.13.1, May 8 2026. https://github.com/k2-fsa/sherpa-onnx/releases
[12] mvnrepository — com.bihe0832.android:lib-sherpa-onnx (third-party Maven mirror).
[13] sherpa-onnx jitpack.yml. https://github.com/k2-fsa/sherpa-onnx/blob/master/jitpack.yml
[14] sherpa-onnx official docs. https://k2-fsa.github.io/sherpa/onnx/index.html
[15] sherpa-onnx Moonshine docs. https://k2-fsa.github.io/sherpa/onnx/moonshine/index.html — pre-converted INT8 bundles, microphone + VAD streaming examples.
[16] sherpa-onnx Android docs. https://k2-fsa.github.io/sherpa/onnx/android/index.html
[17] sherpa-onnx Android build doc. https://k2-fsa.github.io/sherpa/onnx/android/build-sherpa-onnx.html — NDK 22.1.7171670, ABIs, .so sizes (15 MB ORT + 3.7 MB JNI).
[18] sherpa-onnx CHANGELOG. https://github.com/k2-fsa/sherpa-onnx/blob/master/CHANGELOG.md — Moonshine Kotlin/Java APIs added.
[19] github.com/pierre-cheneau/finetune-moonshine-asr — community Moonshine fine-tune toolkit (full fine-tune, no LoRA, ONNX export).
[20] github.com/cactus-compute/functiongemma-hackathon — separate hackathon, requires the Cactus SDK. NOT the Kaggle Cactus Special Tech Prize.
[21] Cactus blog: Gemma 4 on Cactus. https://docs.cactuscompute.com/latest/blog/gemma4/ — single-model philosophy + cloud-handoff hybrid.
[22] Gemma 4 Apache-2.0 announcement. https://blog.google/innovation-and-ai/technology/developers-tools/gemma-4/ ; https://ai.google.dev/gemma/apache_2 — fully commercially fine-tunable.
[23] Useful Sensors Moonshine README — license (MIT). https://github.com/usefulsensors/moonshine
[24] Memory file — reference_kaggle_gemma4_prize_tree_2026-05-09.md — Cactus prize verbatim text, captured via Mark's Kaggle Chrome MCP session 2026-05-09.


Generated 2026-05-10 by Claude Opus 4.7 deep-research agent. All claims sourced. Items marked unverified are: actual streaming Moonshine ONNX file size (need to download and weigh), and final APK size after AAR + assets land — both will be verified Day 1 of the v0.3 branch spike.