Hand-written the night before. Real numbers, real wins, real schedule. Not the recycled boilerplate from last week.
Ollama moved into Docker: tp3_ollama container on tp3_internal_network with GPU passthrough (RTX 5060 Ti, 16 GB, all 35/35 layers offloaded). Five models live inside (gemma3:4b, gemma3:12b, qwen3:4b, nomic-embed-text, all-minilm). No re-download needed: C:\Users\Breezy\.ollama is bind-mounted in. Bidet flipped back to BIDET_LLM_BACKEND=local, Gemini key disabled.

Silent row-loss bug fixed in the memory API. Before: _embed() returned None, the INSERT violated the tp3_embedding NOT NULL constraint, the bg-task wrapper swallowed the exception, the endpoint returned ok:true, and the row was lost. After: a zero-vector placeholder plus a metadata.needs_embed=true flag, and a new POST /admin/backfill_embeddings endpoint to re-embed those rows when Ollama is healthy (minimal sketch after the stats table below). Patches are durable in the rebuilt tp3_memory_api:latest image and won't be lost on the next recreate.

Calendar and context questions now route through /omi/ask on Apex (Anthropic-backed when the key is set, Gemini Flash fallback). That bypasses OMI's gpt-5.1 chat tab; see the OMI quality finding below for why that matters.

Email cleanup across all four accounts:
Yahoo (barnett.markd@yahoo.com): 180 → 116 inbox, 4 unread.
mark@thebarnetts.info: 3,284 → 2,041, 10 unread.
breezybarnett16@gmail.com: 1,927 → 872, then an aggressive pass to 66 unread.
School (mbarnett@sfschools.net): 21 archived.

Ingest endpoints (/ingest, /sms, /phone) now take direct http://100.88.195.118:8945 traffic from the pendant + Tasker. Faster, fewer hops, no edge-routing failures (hedged example after the catch-up note below).

tp3_memories_local, live-queried on Apex:

| Metric | Value |
| --- | --- |
| Total rows | 668,281 |
| Last 24 hours | 794 |
| Last 1 hour | 66 |
| Newest row age | 118 seconds (~2 min) |
| Freshness | ok |
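A minimal sketch of the patched write path and backfill endpoint mentioned above, assuming a FastAPI-style app and an asyncpg-style pool; everything here except _embed, tp3_embedding, metadata.needs_embed, and /admin/backfill_embeddings is illustrative, not the real tp3_memory_api internals:

```python
# Hedged sketch; names beyond those in the notes above are assumptions.
from fastapi import FastAPI

app = FastAPI()
db = ...             # assumed asyncpg-style pool owned by the service
EMBED_DIM = 768      # assumed width of tp3_embedding (nomic-embed-text is 768-d)

async def _embed(text: str) -> list[float] | None:
    """Stand-in for the existing Ollama call; returns None when Ollama is down."""
    ...

async def safe_embed(text: str) -> tuple[list[float], bool]:
    """Never propagates None: zero-vector placeholder plus a backfill flag."""
    vec = await _embed(text)
    if vec is None:
        return [0.0] * EMBED_DIM, True   # placeholder satisfies NOT NULL
    return vec, False

async def save_memory(text: str, metadata: dict) -> None:
    vec, needs_embed = await safe_embed(text)
    metadata["needs_embed"] = needs_embed    # flag for later re-embedding
    await db.execute(
        "INSERT INTO tp3_memories_local (content, tp3_embedding, metadata) "
        "VALUES ($1, $2, $3)",
        text, vec, metadata,
    )

@app.post("/admin/backfill_embeddings")
async def backfill_embeddings():
    """Re-embed rows flagged metadata.needs_embed=true once Ollama is healthy."""
    rows = await db.fetch(
        "SELECT id, content FROM tp3_memories_local "
        "WHERE metadata->>'needs_embed' = 'true'"
    )
    done = 0
    for row in rows:
        vec = await _embed(row["content"])
        if vec is None:
            continue                     # Ollama still unhealthy; keep the flag set
        await db.execute(
            "UPDATE tp3_memories_local "
            "SET tp3_embedding = $1, "
            "    metadata = jsonb_set(metadata, '{needs_embed}', 'false') "
            "WHERE id = $2",
            vec, row["id"],
        )
        done += 1
    return {"ok": True, "backfilled": done, "remaining": len(rows) - done}
```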
Sources active in the last 24h:
The catch-up after yesterday's 12-hour Ollama outage is happening — 794 over 24h is solid (the full-week average is ~700/day) and the 66 in the last hour confirms the patched /ingest + Yahoo poller are pulling the backlog through cleanly.
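For reference, a minimal sketch of what a direct ingest POST looks like; the host, port, and /ingest path come from the notes above, but the payload field names are assumptions, not the actual contract:

```python
# Hedged example of a direct pendant/Tasker-style POST to the patched endpoint.
import requests

event = {
    "source": "pendant",                            # assumed field
    "text": "Transcript chunk from the walk home",  # assumed field
    "ts": "2026-05-04T22:58:00-04:00",              # assumed field
}
resp = requests.post("http://100.88.195.118:8945/ingest", json=event, timeout=10)
resp.raise_for_status()
print(resp.json())  # e.g. {'ok': True}; rows can no longer vanish silently
```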
Pulled live from Mark's primary calendar, Mom's fran@thebarnetts.info, and the school's official calendar.
Mark — school day:
(No Sutter coverage on Tuesday's calendar, no parking-duty entry, no mid-day exam-prep block returned by the workspace fetch. If anything got added since last sync, the live dashboard calendar card will show it.)
Mom (Fran):
School (St. Francis official):
Waiting on OMI app review: once approved, a question goes to /omi/ask on Apex and the answer comes back through the Ray-Bans. No ETA from OMI on review turnaround.

Make.com recovery: outage-window deliveries reached the endpoint (/ingest) but the rows died server-side before the patch landed. Make.com retains execution body data, so the recovery script can pull those bodies and re-POST them through the now-patched endpoint (sketch after the Yahoo note below). Not done yet; on the active to-do list.

Yahoo backlog: the poller's \Seen logic preserves UNSEEN messages on TP3-side failure, so no data loss; just slow catch-up while Ollama warms its cache. Backlog should clear over the next 12-24 hours.
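A sketch of what that recovery script could look like. Only the /ingest target and the retained-body idea come from the notes above; the Make.com endpoint paths, auth header, and field names here are assumptions that need checking against Make's API docs before wiring anything up:

```python
# Hedged sketch only: the Make.com execution-log API shape below is an
# assumption, not a verified endpoint.
import requests

MAKE_API = "https://eu1.make.com/api/v2"   # hypothetical base URL
MAKE_TOKEN = "..."                         # API token (auth style assumed)
SCENARIO_ID = "..."                        # the scenario that feeds /ingest
INGEST_URL = "http://100.88.195.118:8945/ingest"

headers = {"Authorization": f"Token {MAKE_TOKEN}"}

# 1. Pull execution logs from the outage window (endpoint shape assumed).
logs = requests.get(
    f"{MAKE_API}/scenarios/{SCENARIO_ID}/logs",
    headers=headers,
    params={"from": "2026-05-03T06:00:00Z", "to": "2026-05-03T18:00:00Z"},
    timeout=30,
).json()

# 2. Re-POST each retained body through the now-patched endpoint.
for entry in logs.get("scenarioLogs", []):   # field name assumed
    body = entry.get("requestBody")          # field name assumed
    if body:
        requests.post(INGEST_URL, json=body, timeout=10).raise_for_status()
```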
Ran a deep investigation Sunday into the "OMI replies got worse" complaint from earlier this week. Findings:

OMI's chat tab runs gpt-5.1 (with gpt-4.1 + gpt-4.1-mini for routing). Claude only powers OMI Desktop chat (Opus 4.6 / Sonnet 4.6 / Haiku 4.5).

The workaround is /omi/ask on Apex, which is Anthropic-backed when the key is set (Gemini Flash fallback) and bypasses OMI's gpt-5.1 entirely for personal-context questions (request example below). Once the app is approved, this is the answer.

Full investigation file at /tmp/omi_quality_investigation_2026-05-04.md: table of weekly stats, public OMI commit references, recommendation matrix, all there.
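For the eventual wiring, a hedged request example; the /omi/ask path is from the notes above, but the Apex host/port and the JSON shape are assumptions:

```python
# Hedged /omi/ask call; host/port and request/response shape are assumptions.
import requests

APEX = "http://100.88.195.118:8945"  # assuming /omi/ask is served from the same box/port
resp = requests.post(
    f"{APEX}/omi/ask",
    json={"question": "What's on the calendar Tuesday morning?"},  # assumed shape
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # Anthropic-backed answer when the key is set, else Gemini Flash
```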
14 containers up on Apex. All healthy or running:

| Container | Status |
| --- | --- |
| tp3_postgres_brain | Up 16h |
| tp3_memory_api | Up 1m (healthy, freshly restarted with patches) |
| tp3_ingest | Up 16h (healthy) |
| tp3_omi_mcp | Up 16h (healthy) |
| tp3_embed | Up 15h (healthy) |
| tp3_bidet | Up 15h (healthy) |
| tp3_ollama | Up 15h (in Docker, GPU active — new home) |
| yahoo_tp3_ingest | Up 15h (healthy) |
| yahoo-mcp | Up 16h |
| workspace-mcp | Up 2h |
| wa-web-mcp | Up 16h (WhatsApp Web linked, persistent ~14d) |
| tp3_pinger | Up 16h |
| tp3_minio_vault | Up 16h |
| tp3_autoheal | Up 3h (healthy) |
The Ollama-in-Docker move closed the last "fragile native Windows process" gap. Whole ingest path is now containerized end-to-end on Apex.
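For the disaster-recovery notes, roughly the container spec expressed as a docker SDK call; this is a sketch reconstructed from the description above, assuming the official ollama/ollama image and its default /root/.ollama data dir, not the actual create command:

```python
# Hedged reconstruction of the tp3_ollama container spec (image tag, internal
# mount point, and restart policy are assumptions; the name, network, GPU, and
# the C:\Users\Breezy\.ollama bind mount come from the notes above).
import docker

client = docker.from_env()
client.containers.run(
    "ollama/ollama",                     # assumed image
    name="tp3_ollama",
    network="tp3_internal_network",
    volumes={r"C:\Users\Breezy\.ollama": {"bind": "/root/.ollama", "mode": "rw"}},
    device_requests=[                    # GPU passthrough (RTX 5060 Ti)
        docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])
    ],
    restart_policy={"Name": "unless-stopped"},
    detach=True,
)
```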
Generated 2026-05-04 ~23:00 ET for Tuesday morning read. Real content this time — no recycled boilerplate. Numbers were live-queried from tp3_memories_local on Apex; calendar pulled from /omi/ask; container list from docker ps; OMI quality findings from /tmp/omi_quality_investigation_2026-05-04.md.