Mark's Reports

Saturday 4/25 — Everything Open + Reports to Read

Generated 2026-04-25 morning. Rebuild status, your three big questions answered, and the four agent reports you need to read.

Table of contents
  1. Rebuild status — Phase 0 progress
  2. DC trip: Westin Washington Dulles is an airport hotel
  3. Pearl Method granulator — three DIY local-store paths
  4. AI Radar — what last week's run actually found
  5. Remote-boot Claude PR — broken, but you have a working path
  6. Three big questions answered

1. Rebuild status — where Phase 0 stands right now

Phase 0 of the operator-layer rebuild is ~60% complete. Hard-blocking items done. Polish + setup items pending. No risky changes yet — everything so far is additive.

Done

Disk space verified — 37.8 GB free on Apex C:\, well above the 35 GB threshold.
Postgres backup complete: postgres_full.sql.gz (2.1 GB) at C:\Users\Breezy\tp3_pre_migration_backup\2026-04-25\. Compressed inside the container, copied out via docker cp. Verified non-empty.
.env snapshot saved alongside the dump as env_snapshot.env.
30 scheduled tasks exported as XML for rollback — scheduled_tasks_xml/ per-task files in the same backup folder.
Repo tag pushed: pre-migration-2026-04-25 on main + fix/maps-audit-2026-04-19. Single-command rollback anchor.
autoheal Docker image pre-pulled: willfarrell/autoheal:latest ready on Apex's local registry. This is the sidecar that kills unhealthy containers so Compose's restart: always can relaunch them.

Pending

python:3.12-slim image — not pulled yet. Need this for the tp3_ingest + tp3_embed containers. ~5 min when fired.
OLLAMA_HOST normalization — surprise discovery this morning: Apex Windows has OLLAMA_HOST=0.0.0.0 as a system env var (the SERVER bind config). Bidet/processor.py treats OLLAMA_HOST as a CLIENT URL — collision. Two options: (a) change the Windows env var to a URL form, or (b) patch the processor scripts to normalize bare hosts. Option (b) is cleaner; will patch + commit.
.env parity check — grep every os.environ.get() across tp3_scripts/, diff against .env to find missing vars before container migration. Quick.
Docker Desktop auto-update disable — registry edit so it doesn't fire mid-sprint and reset clean-run.
healthchecks.io dead-man-switch — account setup + ping URL stored in .env.
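For the OLLAMA_HOST item, option (b)'s normalization could be a small helper in the processor scripts. A sketch only: the function name and default port are assumptions, not the real processor.py code.

```python
from urllib.parse import urlparse

def normalize_ollama_host(value: str, default_port: int = 11434) -> str:
    """Turn whatever OLLAMA_HOST holds into a client-usable base URL.

    Accepts bare hosts ("0.0.0.0"), host:port pairs, and full URLs.
    A 0.0.0.0 SERVER bind address is rewritten to 127.0.0.1, since a
    client can't connect to 0.0.0.0.
    """
    value = (value or "").strip() or "127.0.0.1"
    if "://" not in value:
        value = "http://" + value
    parsed = urlparse(value)
    host = parsed.hostname or "127.0.0.1"
    if host == "0.0.0.0":  # server bind config leaking into a client var
        host = "127.0.0.1"
    port = parsed.port or default_port
    return f"{parsed.scheme}://{host}:{port}"
```

With this in place, the Windows env var can stay 0.0.0.0 and the client code still gets a usable URL, which is why option (b) is the cleaner fix.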

What unblocks Phase 1 (re-embed)

Phase 1 (the COPY-only re-embed of ~3,500 Gemini rows) is gated on Phase 0 finishing cleanly. The remaining items above are each ~30 minutes of work or less. Realistic path: Phase 0 wraps before lunch; Phase 1 fires this afternoon.
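Of the pending items, the .env parity check is the most mechanical. A sketch of it, with the directory and file paths assumed rather than taken from the repo:

```python
import re
from pathlib import Path

# Matches the variable name in os.environ.get('VAR_NAME') calls.
ENV_GET = re.compile(r"""os\.environ\.get\(\s*['"]([A-Z0-9_]+)['"]""")

def env_vars_used(scripts_dir: str) -> set:
    """Collect every VAR name passed to os.environ.get() in *.py files."""
    found = set()
    for path in Path(scripts_dir).rglob("*.py"):
        found |= set(ENV_GET.findall(path.read_text(errors="ignore")))
    return found

def env_vars_defined(env_file: str) -> set:
    """Parse KEY=value lines from a .env file, skipping comments and blanks."""
    keys = set()
    for line in Path(env_file).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            keys.add(line.split("=", 1)[0].strip())
    return keys

# missing = env_vars_used("tp3_scripts") - env_vars_defined(".env")
```

Anything left in `missing` is a var the code expects but the containers won't get, which is exactly the class of bug Phase 4's containerization is meant to kill.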

Today's collateral fixes (not Phase 0 but on the rebuild's path)


2. DC trip: Westin Washington Dulles is an airport hotel

Your suspicion was right.

2520 Wasser Terrace, Herndon VA 20171 — ~1 mi from IAD, ~25 mi from downtown DC.

Transit reality

What this means for the trip

This is a "park the kids out by the airport" hotel selection. It's a common pick for school groups because the Westin brand reads okay to parents, bus parking is easy, and rates are way cheaper than inside the Beltway.

Daily chartered bus into the city. 7:30-8:00 AM departures to beat the morning crawl. Mr. A's trips have run this model for years.

One legit win

The Smithsonian's Udvar-Hazy Center (the Air & Space annex with the SR-71, Space Shuttle Discovery, and the Concorde) is ~2 miles from the hotel, and the free shuttle to Wiehle-Reston East Metro stops there. Easy day-trip stop.

What to tell Craig / Mr. A

Confirm that the itinerary assumes daily bus into DC and that there's a contingency for the I-66 morning crawl. If departures are 8:30+ AM, the kids will get to the Mall around 10:30 AM at best — that's an effective day of ~5 hours. Earlier departures = more touring time but earlier wakeup.


3. Pearl Method granulator — three DIY local-store paths

Hand-method rejected. These are real machines you can build from Tractor Supply / Northern Tool / Home Depot / Harbor Freight within a 30-mile radius.

The shape of the problem

Commercial pan/disc granulators tilt a drum or pan at 40-55°, spin slowly (8-25 RPM), and spray binder mist while material rolls into balls by centrifugal force. Three real ways to get there from local stores:

Path 2 (Ancient Japanese Engineer build): Stainless mixing bowl on slow-turn motor, ~$135-170

Base: 14-16" stainless mixing bowl ($25-40 Home Depot), bolted to a 12" lazy-Susan bearing ($15-25 Home Depot) on a tilting plywood frame.

Drive: Bringsmart 60KTYZ 110V synchronous gear motor at 5 RPM ($25-35 Amazon) friction-drives the bowl rim via a rubber wheel — no welded gear, no chain, no welding ever.

Why this works: open pan, slow rotation, adjustable tilt, binder mist — the four required mechanics. Smaller batch than the cement mixer (1 bowl-load = 6-12 small pearls vs 30+ in the drum) but quieter, prettier, fits in a garage. Looks like a craft tool, not a construction tool.

Bonus research: the agent specifically chased the Japanese tradition angle and found that traditional okotsu-shinju was hand-rolled — there is no historical Japanese rotation machine. So Path 2 IS the closest mechanical translation of palm-rolling: slow-spin pan, hand-style result.
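Since the friction drive sets the bowl speed, it's worth predicting that speed before buying a drive wheel. A back-of-envelope calculator; the 2" wheel and 15" bowl diameters below are assumptions for illustration, not part of the build spec.

```python
import math

def bowl_rpm(motor_rpm: float, wheel_dia: float, bowl_dia: float) -> float:
    """Friction drive at the rim: the contact surfaces share one linear
    speed, so rotational speeds scale with the inverse diameter ratio."""
    return motor_rpm * wheel_dia / bowl_dia

def rim_speed(rpm: float, bowl_dia: float) -> float:
    """Linear speed of material rolling at the bowl wall
    (in diameter-units per minute)."""
    return rpm * math.pi * bowl_dia

# Example with assumed sizes: a 5 RPM gear motor with a 2" rubber wheel
# on a 15" bowl rim turns the bowl at 5 * 2 / 15, i.e. well under 1 RPM.
```

The takeaway: with a small drive wheel the bowl turns much slower than the motor, so wheel diameter is the knob to tune if the first test batches roll too slowly.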

Path 3 (Patio-tonight, prove the concept): Cake turntable + rotisserie motor, ~$55-80

Build: 12" cake decorating turntable ($15-20) tilted on a 45° plywood wedge. Universal grill rotisserie motor 7-10 RPM ($25-40 Home Depot) friction-drives the turntable rim via a rubber-band loop.

Honest note: ugly, probably won't last 100 batches. But gets you to "I made a pearl" for the price of two pizzas. Use it to validate the method, then upgrade to Path 1 or 2 with the data you learned.

Don't buy

Item: reason to skip
Lapidary rock tumbler ($50-150): Sealed barrel — no way to spray binder. Polishes finished pearls, doesn't form them.
Used pottery wheel ($200-500 Atlanta CL): Flat platter, no tilt, no walls. Conceivable but more work + cost than Path 1.
Concrete sphere molds: Wrong technology. One-shot casting, not iterative agglomeration. Wrong texture for the Shinjusou aesthetic.
Bauer 4 cu ft mixer (~$330-380): Too big for 25-80 mm pearls. 1-1/4 cu ft is the sweet spot.
Mochi-making machines / wagashi tools: Industrial belt-extruders, not pan-rotation. Wrong physics.

Recommendation

Pick a path and tell me — I'll prep a one-page printable shopping list + assembly checklist for that specific build. No spend without you saying go.


4. AI Radar — what last week's run actually found

The 4/20 run partially worked (HuggingFace was open). The 4/24 run was fully blocked. Both fixed for next Friday. Here's what last week surfaced.

Top picks from 2026-04-20 (HuggingFace partial scan)

Find (score out of 25): why it matters to your stack
Bonsai-8B 1-bit GGUF (22/25): 1.15 GB total. Scores 70.5 average on standard benchmarks. 5× faster than FP16. Free local inference. Could replace gemma3:4b or run alongside on Apex. Worth a download-and-test once Phase 0 wraps.
Bonsai WebGPU Browser Space (22/25): Same model, no install — opens in a browser tab. Useful for sanity-checking responses without burning Apex GPU.
HF Transformers→MLX Claude Skill (20/25): One command to auto-port any Hugging Face model to Apple Silicon (MLX format) inside Claude Code. Not relevant for Apex (CUDA) but useful if you ever shift to a Mac.
Qwen3.6-35B-A3B-GGUF (unsloth) (17/25): 73.4 SWE-bench. 3B active params (mixture of experts). 816K downloads. Heavier than Bonsai but stronger on coding tasks. If you ever want a local "Cursor replacement" agent, this is the candidate.

What 4/24 surfaced

Nothing. All 9 sources were permission-blocked in the headless session, so the run reported zero candidates. Permissions fixed for 5/1's run.

Action items from 2026-04-20 carried forward

  1. Pull Bonsai-8B via Ollama and run it side-by-side with gemma3:4b for a few Bidet sessions. See which produces better clean/analysis output. Approval needed: this is download-only, no spend.
  2. Try the WebGPU Bonsai space in a browser before deciding whether the local model is worth the disk space.
  3. Decide on Qwen3.6 only after the operator-layer rebuild stabilizes.
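If item 1 gets a Y, the side-by-side test is a few lines against Ollama's local REST API (the /api/generate endpoint and its model/prompt/stream fields are the documented interface). The bonsai-8b model tag below is a guess; confirm the real name with ollama list after pulling.

```python
import json
import urllib.request

OLLAMA_URL = "http://127.0.0.1:11434/api/generate"  # Ollama's default port

def generate_payload(model: str, prompt: str) -> bytes:
    """Request body for Ollama's non-streaming /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask(model: str, prompt: str) -> str:
    """Send one prompt to a local Ollama model and return the response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=generate_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Side-by-side (model tags are guesses, confirm with `ollama list`):
# for model in ("gemma3:4b", "bonsai-8b"):
#     print(model, ask(model, "Clean this transcript: ...")[:200])
```

Running the same Bidet clean/analysis prompt through both models and eyeballing the outputs is enough to make the keep-or-delete call.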

5. Remote-boot Claude PR — broken, but you have a working path

The PR Cursor merged on April 7 ("Add remote Claude boot workflow for Apex") shipped three files (tp3_remote_boot_claude.ps1, Run-RemoteBoot-Claude.bat, docs MD). They're in the TP3 repo on Apex.

The script has a PowerShell-over-SSH bug at line 58, so it has never run successfully; the state directory tp3_data/remote_boot/ was never created. The intended workflow does NOT work as shipped and is not usable from phone, G16, or Chromebook.

What you already have that DOES work

The pre-existing Claude Remote Control scheduled task on Apex is running fine, launching start-claude-rc.ps1 which runs claude remote-control --name "Apex-TP3-Mission-Control". The live session URL right now is in C:\Users\Breezy\claude_rc.log on Apex.

Usage from any device: open https://claude.ai/code in your browser, pick the Apex-TP3-Mission-Control environment from the list. Auto-resumes. Same on phone, G16, Chromebook.

Decision (your call from this morning's brain dump)

Decommission the Cursor PR's three files. I'll move them to _archived/ in the TP3 repo so they're not in the active path. The claude.ai/code URL is the working remote-Claude path.


6. Three big questions answered

Q: Why was faster-whisper missing on Apex if it was there before?

It was never installed on Apex. The web Bidet's audio_handler.py was upgraded at some point (commit history shows the move from openai-whisper to faster-whisper), but requirements.txt was not updated. Apex's venv had openai-whisper from the original install; the new code imported faster_whisper → ImportError → silent fallback to Gemini → Gemini key disabled → failure.

Same root pattern as the G16 Bidet bug this morning: code dependencies out of sync with the venv. Phase 0's .env-parity check + Phase 4's containerization (reproducible images built from requirements.txt) eliminate this entire class of bugs.
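The fix for this bug class is to make the fallback loud. A hypothetical reconstruction of the pattern, not the real audio_handler.py: the logger name and function are assumptions, but the import chain matches the failure described above.

```python
import logging

log = logging.getLogger("bidet.audio")

def load_transcriber():
    """Pick a whisper backend, failing loudly instead of silently.

    The original failure mode: the ImportError from faster_whisper was
    swallowed, so the pipeline drifted to a disabled Gemini key with no
    log trail. Logging the error makes the dependency drift visible.
    """
    try:
        from faster_whisper import WhisperModel
        return "faster-whisper", WhisperModel
    except ImportError as exc:
        # The one-line fix for the whole bug class: record why we fell back.
        log.error("faster-whisper unavailable (%s); trying openai-whisper", exc)
    try:
        import whisper  # the openai-whisper package
        return "openai-whisper", whisper
    except ImportError:
        raise RuntimeError("no whisper backend installed; check requirements.txt vs venv")
```

Either backend works; what matters is that a missing one produces a log line or a hard error instead of a silent route change.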

This is not a Gemma issue. Gemma is what processor.py uses for clean/analysis (still on gemma3:4b via Ollama, working). Whisper is the audio-to-text step before processor.py runs.

Q: Should we rebuild Bidet around Gemma instead of Gemini?

Bidet IS already on Gemma. processor.py uses gemma3:4b via Ollama for clean/analysis/forai. faster-whisper handles audio→text. Gemini was already removed from the active path on 2026-04-22; today's fixes just made the swap take effect everywhere.

Today's rebuild containerizes Bidet (Phase 5 of the plan) but doesn't change the LLM stack — Gemma stays.

Q: Does today's rebuild include the LiteLLM harness plan from 4/14?

No. Today's rebuild plan v2 includes a smaller piece — tp3_embed, a single internal HTTP service that becomes the one place that decides local-vs-cloud, with a per-month spend counter.
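The shape of that decision point can be sketched in a few lines. Every number below is a placeholder for illustration, not the real tp3_embed config:

```python
from dataclasses import dataclass

@dataclass
class EmbedRouter:
    """One place that decides local-vs-cloud for an embed call,
    with a per-month spend cap (all values are placeholders)."""
    monthly_budget_usd: float = 5.00
    local_max_chars: int = 8000        # assumed local-model input limit
    cloud_usd_per_call: float = 0.10   # placeholder cloud price
    spent_usd: float = 0.0             # reset at the start of each month

    def route(self, text: str) -> str:
        # Short inputs always stay local: free and private.
        if len(text) <= self.local_max_chars:
            return "local"
        # Long inputs may go to cloud, but only while budget remains.
        if self.spent_usd + self.cloud_usd_per_call <= self.monthly_budget_usd:
            self.spent_usd += self.cloud_usd_per_call
            return "cloud"
        return "local"  # budget exhausted: chunk it and keep it local
```

Because every caller goes through one router, the spend counter and the local-vs-cloud policy live in exactly one place, which is the whole point of tp3_embed.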

The full LiteLLM harness (one OpenAI-compatible proxy fronting Ollama + Gemma + multiple cloud backends with routing rules) is a next phase — bigger, more general, and answers your "go all local" goal completely. Right call: do the operator-layer rebuild first, stabilize for a week, then layer LiteLLM on top.

The 4/14 plan is still valid as a roadmap. Once Phase 7 (the rebuild's clean-run window) closes, I'll surface the LiteLLM plan as the next thing on deck.


What I'm waiting on you for

  1. Granulator path pick — Path 1, 2, or 3? Once you say which, I prep a printable shopping list + assembly checklist.
  2. Pull Bonsai-8B for testing? — download-only, no spend. Y/N.
  3. Install the Apex BRIEFS runner today? — would let tonight's overnight-decisions-graph + adversarial-self-test BRIEFs actually fire. ~1 hour of work. Y/N.
  4. Decommission the broken remote-boot Claude PR files (move to _archived/)? — cosmetic cleanup. Y/N.