Three days, three reboots, a contract meeting, a full Apex migration, a legacy site audit, and one Bidet midnight pivot.
Shipped over the weekend: /omi/ask end-to-end with calendar context; workspace-mcp + yahoo-mcp + wa-web-mcp ALL migrated to Apex Docker; Yahoo→TP3 ingest poller live; G16 metrics heartbeat live; legacy.thebarnetts.info V3 audit across 15 pages; Mom's calendar visible via share; Mom's calendar shared to breezy@gmail; Apex hardware ceiling confirmed (64 GB).

The unresolved piece: Ollama on Apex binds [::]:11434 IPv6-only, and the Bidet container has no IPv6 routing. Four restarts, two portproxy attempts, one Python forwarder — all failed because Ollama itself kept dying within 1-2 minutes of each restart. Pivoted: re-enabled the Gemini cloud key, flipped BIDET_LLM_BACKEND to cloud, recreated the container, verified end-to-end. Registered scheduled task "TP3 Bidet Cloud Revert (one-shot)" for 7:00 AM ET 5/4 to auto-revert.

A Friday that mattered for the year ahead. The day's verbatim transcript is the canonical record; everything else is supporting material.
Brian: bdailey@sfschools.net. "Lyn/Lin Copang" was misheard — actually Lynne Koppang, lkoppang@sfschools.net, fired the same day Brian held this meeting with Mark.

pip list / requirements audit: not done.

The proposed move off local embeddings (nomic-embed-text) and toward a paid API: anti-fit with PD; cut.

Framing misses: senders.txt and the Gemma 4 / Qwen 3.6 pulls were framed as "decisions Mark needs to make" when he'd already said go, and the beta-header drop was separated out as a future task instead of run inline.

"Brian freaks me out. That's the second time I've had to actually talk to him about something real, and he's been very weird both times. Defensive weird with an aggressive stance." — Mark to Lynne, 1:55 PM, before walking into the meeting
The day phone+glasses+server became one thing instead of three. Atlanta United up 3-1. Mark, end of night: "This was an excellent communication and integration. This makes me happy prime Directive."
tp3_cursor_report on public ntfy.sh: curl -d "msg" -H "Title: ..." https://ntfy.sh/tp3_cursor_report from anywhere lands on the phone in seconds (Python equivalent sketched below). Validated: the Atlanta United 7 PM alarm fired at exactly 19:00:07 EDT via background bash, and Mark heard it.

Tasker flow on the phone: /ingest, then Say with Respect Audio Focus = TRUE, Stream = Media, Engine default. Saved verbatim in reference_tasker_phone_internals.md.

Disk reclaim: _tp3_work/02_extract Google Takeout (already in Drive cloud; deleted local + cloud trash), 20 GB Docker VHDX compaction (56 GB → 36 GB, all 9 TP3 containers back online in 5 sec), 9 GB TP2_Backup + OMI_audio archive in My Drive, 4 GB pip cache, ~7 GB old logs / pre-reset zip / temp / docker volume prune. SSD TRIM ran post-cleanup. This removed the silent constraint that was about to force Mark off-local by mid-summer.

Tasker gotchas:
- Notification event variables are %evtprm1/2/3, NOT %nottitle/%nottext/%notapp from the docs. The documented names appear in the variable picker but don't auto-populate; discover the real ones via the % button on a live notification field.
- <ConditionList> with <bool0>and</bool0> is silently dropped on import — Tasker imports actions unconditionally.
- tp3_*.tsk.xml files in /sdcard/Tasker/tasks/ are import sources, NOT live-state mirrors. Tasker UI edits do not write back; pulling them gives you the LAST IMPORTED version.

Not yet built:
- Server-side /ask → Claude reply → speak through the Ray-Bans.
- /brief endpoint — Apex cron at 6:55 AM generates fresh brief text from live calendar/health/TP3 status; Tasker GETs it at 7:00 AM and Says it. Replaces the static greeting.

"This was an excellent communication and integration. This makes me happy prime Directive." — Mark, end of Saturday night
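For completeness, a minimal Python equivalent of the curl push above — same public topic, same Title header; nothing here beyond what ntfy.sh's plain POST interface already does.

```python
# Minimal sketch: push a notification to the public tp3_cursor_report topic.
# ntfy.sh takes the message as the POST body and the title via the Title header.
import requests


def push_to_phone(message: str, title: str = "TP3") -> None:
    """Send a push that lands on the phone in seconds via ntfy.sh."""
    resp = requests.post(
        "https://ntfy.sh/tp3_cursor_report",
        data=message.encode("utf-8"),
        headers={"Title": title},
        timeout=10,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    push_to_phone("Atlanta United alarm armed for 19:00 EDT", title="Alarm check")
```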
It opened with Mark calling the existing reports "pathetic" and ended with a parallel four-stream agent push that closed eight tangents. By bedtime, three MCPs and the Yahoo poller were running on Apex Docker, /omi/ask was live, and every Legacy Soil page was disambiguated.
/dashboard/data IS fresh (generated 11:43 UTC, TP3 total 667,262) — but reports.thebarnetts.info was pulling stale row counts, because the morning-digest generator was reading a cached source instead of live TP3. ai_radar_2026-05-01.md existed locally on G16 but was never pushed to ntfy or surfaced to Mark. He asked: "Wait, did we get that? We should have had a report on 5.1. Do I have a 5.1 radar?" — and the answer was yes, on disk only.

tp3_scripts/tp3_memory_api.py had been untracked since it was first written. Every revision lived only on disk; previous "pretty" versions were overwritten in place with no recoverable diff, and git log --all -S "_DASHBOARD_HTML" returned zero matches across the entire repo history. Commit 294e128 fixes that — the file is now tracked, and future edits are diffable/revertable.

Dashboard hardening: escapeHtml() for all string interpolation (XSS), cache: 'no-store' on fetch, granular formatTime(), empty-state messages per card ("No events on the books," "No recent inbound mail"), and the live-dot turns red on fetch failure with a "live" → "reconnecting" label flip.

/ask endpoint LIVE on the Apex tp3_memory_api with calendar context. LLM stack priority: Anthropic (if a key is set) → Gemini 2.5 Flash with thinkingBudget=0 (the default, uses TP3_GEMINI_API_KEY) → Ollama (local, currently a zombie — known, not blocking). Calendar context: workspace-mcp pulls a 7-day window of Mark's primary + Mom's fran@thebarnetts.info, cached 5 min.

/omi/ask + /.well-known/omi-tools.json deployed via Apex Docker. Manifest at memory.thebarnetts.info/.well-known/omi-tools.json. The tool description is engineered to be greedy on personal/identity/schedule keywords.

workspace-mcp — Mark@thebarnetts.info. Calendar/Gmail/Drive read+write, 36 tools, 6 calendars visible (mark@, fran@ Mom shared, Wildwood, Atlanta United's two cals, US Holidays). NO admin/directory tools — that gap is tracked as tangent #11. Critical case gotcha: the server stored Mark's email with a capital M (Mark@thebarnetts.info); ALL user_google_email arguments must use the capital M or you get an "Authentication needed" error even though the token is cached. Detail: reference_workspace_mcp_2026-05-03.md.

yahoo-mcp — barnett.markd@yahoo.com. Yahoo deprecated OAuth for consumer Mail in 2014; IMAP+SMTP via app password is the only path. 8 tools (search/read/send/move/mark/delete/list folders/status). 163 INBOX / 38 unread confirmed. Detail: reference_yahoo_mcp_2026-05-03.md.

wa-web-mcp — /chats returned real chats (Mom, Kim, friends); /contacts/search?q=mom found "Mom" at +1 (407) 797-6490. Persistent for ~14 days via Docker volume.

All three run with --restart unless-stopped + persistent Docker volumes for credentials, survive Apex reboots, and are Tailscale-reachable from anywhere on the tailnet at http://100.88.195.118:{8766,8767,9876}. Migration gotcha: C:\Users\Breezy\.docker\config.json and ~/.docker/config.json in WSL both had credsStore: "desktop", which fails over SSH ("specified logon session does not exist"); the fix is to replace BOTH with {"auths":{}}. SSH+WSL+Bash quote nesting is fragile — the pattern that works is local script → scp → run via WSL. Recipe: reference_apex_docker_migration_2026-05-03.md.

yahoo_tp3_ingest is live on Apex: a 15-min poll of the Yahoo INBOX for UNSEEN, posting each message to /ingest as source=email_yahoo and verifying the row landed in tp3_memories_local before flagging \\Seen. The gotcha that forced that design: /ingest is fire-and-forget async (FastAPI _bg_executor.submit) and returns 200 OK before the row is actually inserted; the insert depends on Ollama producing an embedding, and the tp3_embedding vector(768) NOT NULL column makes an embedding failure equal row loss. The first version of the poller flagged \\Seen immediately on 200 OK. Result: 41 messages got the \\Seen flag set in Yahoo while ZERO landed in TP3 — silent data loss. Verify-then-flag with 90s Postgres polling is the fix (sketch below); Ollama-down now just means the message stays UNSEEN and retries next pass.
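A minimal sketch of that verify-then-flag loop, assuming imaplib + requests + psycopg2; the endpoint URL, the source_ref dedupe column, and the payload field names are illustrative assumptions — only tp3_memories_local, source=email_yahoo, and the 90-second verification window come from the notes above.

```python
# Hypothetical sketch of yahoo_tp3_ingest's verify-then-flag step: post to
# /ingest, then poll Postgres for up to 90 s and only set \Seen once the row
# is confirmed in tp3_memories_local.
import time
import imaplib

import requests
import psycopg2

INGEST_URL = "http://localhost:8765/ingest"   # assumed memory-API address
VERIFY_TIMEOUT_S = 90


def row_landed(conn, message_id: str) -> bool:
    """Check whether the ingested row is visible in tp3_memories_local."""
    with conn.cursor() as cur:
        cur.execute(
            "SELECT 1 FROM tp3_memories_local "
            "WHERE source = 'email_yahoo' AND source_ref = %s LIMIT 1",
            (message_id,),
        )
        return cur.fetchone() is not None


def ingest_and_flag(imap: imaplib.IMAP4_SSL, conn, uid: bytes, message_id: str, body: str) -> None:
    # /ingest is fire-and-forget: a 200 only means "accepted", not "stored".
    resp = requests.post(
        INGEST_URL,
        json={"source": "email_yahoo", "text": body, "source_ref": message_id},
        timeout=15,
    )
    resp.raise_for_status()

    # Verify-then-flag: wait for the row before touching the IMAP flags.
    deadline = time.time() + VERIFY_TIMEOUT_S
    while time.time() < deadline:
        if row_landed(conn, message_id):
            imap.uid("STORE", uid, "+FLAGS", r"(\Seen)")
            return
        time.sleep(5)

    # Embedding/Ollama failure: leave the message UNSEEN so the next
    # 15-minute pass retries it instead of silently losing it.
    print(f"row for {message_id} never landed; leaving UNSEEN for retry")
```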
Dashboard V2: commit fc0cc8b; detail: reference_dashboard_v2_2026-05-03.md.

Calendar data no longer comes from the omi_api_poll daily snapshot: it now calls workspace-mcp on host.docker.internal:8766/mcp, parsed into structured events and cached 5 min, falling back to the old snapshot only if workspace-mcp errors. The mail view pulls source IN ('gmail', 'email_yahoo'); each row gets a source pill (GM amber / YH slate), and a Gmail staleness banner shows green < 6h, amber 6h - X, red if no Gmail data ever.

/system/metrics endpoints (POST + GET). No auth (LAN/Tailscale-only via Cloudflare tunnel). Stale=true if age_seconds > 90 (minimal sketch at the end of these notes). The Apex agent is C:\Users\Breezy\tp3_apex_metrics.ps1, scheduled task TP3 Apex Metrics, every 60s, running as SYSTEM: real RAM 12.5/13.8 GB, CPU %, GPU %, VRAM 1.1/15.9 GB on the RTX 5060 Ti. Detail of registration: reference_dashboard_v2_2026-05-03.md.

G16 heartbeat: /home/g16/tp3_g16_metrics.sh, with a self-respawning loop wrapper at /home/g16/tp3_g16_metrics_loop.sh, pidfile-guarded. Trigger: the Windows Task Scheduler entry TP3 G16 Metrics runs every 1 min and calls tp3_g16_metrics_starter.ps1, which wsl.exe-launches the loop. The pidfile guard makes re-firing self-healing — if the loop's already alive, the new launch exits cleanly; if it died, the new launch takes over. Verified live: g16.stale=false, RAM 4.3/15.5 GB, RTX 4070 Laptop GPU. Detail: reference_g16_heartbeat_2026-05-03.md. One leftover: docker-compose.yml uses Windows-backslash volume specs that the current compose CLI rejects.

Reports page: tp3_scripts/tp3_morning_digest_web.py's update_index_html() now regenerates the entire index from scratch each run via Python templates + auto-detection (_latest_dated, _latest_upcoming, _all_reports_dated). It runs daily at 6:30 AM ET via the Apex scheduled task TP3 Morning Digest Web. Hand edits are overwritten — to change layout, edit the generator. Detail: reference_reports_page_2026-05-03.md.

Legacy V3: v2 and v3 labels stripped from all V3 page bodies, footers, links, and link text. The homepage v3 stamp ("Updated May 3, 2026 — v3") stays per Mark's earlier directive. V2 gets one archive page at /v2-archive.html — Mark verbatim: "One small archive area for V2." Preview deploy: https://da24dd77.legacy-soil-handoff.pages.dev. Detail: reference_legacy_v3_audit_2026-05-03.md. Drive layout in breezybarnett16@gmail.com: Legacy Soil V3 / (new source of truth) + Legacy Soil Archive / V1 / (23 V1 files moved) + Legacy Soil / (V2 stays in place; the deploy pipeline tar+SCPs from there). 33 soil-research files copied for V3 reference. Detail: reference_legacy_soil_v3_drive_2026-05-03.md.

The /gmail endpoint silently dropped writes (it returned ok:true but the row never landed, because the embedding-less INSERT violates the tp3_embedding NOT NULL schema and the bg-insert wrapper has except: pass). The same antipattern exists in /phone, /sms, and _insert_tp3. Fixed for now by starting the OllamaServe task and re-enabling the previously-disabled TP3 Ollama Watchdog. Detail: reference_gmail_apps_script_repair_2026-05-03.md.

Mom's calendar shared to breezy via acl.insert: ACL ID user:breezybarnett16@gmail.com, role=reader. Mark must accept the share invite in breezy's email/Calendar to make it visible.

Tasker fired a burst of action_error_notifications from 15:52:22 to 15:53:51 EDT. Root cause: the TTS engine errored on overlapping Say invocations when OMI fired 6 rapid notifications in 300ms. Server-side /ingest is healthy. Detail: research_tasker_error_2026-05-03.md. Literal \n in multi-line notification text was also tripping strict JSON parsing, so strict request.json() was replaced with json.loads(strict=False) + a form-urlencoded fallback + a last-resort raw-text capture. That patch lives only in the running container — it needs a source commit (tangent #20).
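With two hosts now posting into the same endpoint, here is a minimal FastAPI-style sketch of how the /system/metrics POST/GET pair and the 90-second staleness rule could fit together; payload field names are illustrative assumptions — only the route, the POST+GET pair, the 60-second agent cadence, and the 90 s threshold come from the notes above.

```python
# Hypothetical sketch of /system/metrics: agents POST a snapshot every 60 s,
# the dashboard GETs everything back with stale=true once a host's last
# report is older than 90 s. Field names are illustrative.
import time
from typing import Any, Dict, Optional

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

STALE_AFTER_SECONDS = 90                      # dashboard rule: stale if age > 90 s
_latest: Dict[str, Dict[str, Any]] = {}       # host -> last reported snapshot


class MetricsIn(BaseModel):
    host: str                                 # e.g. "apex" or "g16"
    ram_used_gb: float
    ram_total_gb: float
    cpu_pct: float
    gpu_pct: Optional[float] = None
    vram_used_gb: Optional[float] = None
    vram_total_gb: Optional[float] = None


@app.post("/system/metrics")
def post_metrics(m: MetricsIn) -> Dict[str, Any]:
    # No auth by design: only reachable on LAN/Tailscale.
    _latest[m.host] = {"metrics": m.model_dump(), "received_at": time.time()}
    return {"ok": True}


@app.get("/system/metrics")
def get_metrics() -> Dict[str, Any]:
    now = time.time()
    out: Dict[str, Any] = {}
    for host, entry in _latest.items():
        age = now - entry["received_at"]
        out[host] = {
            **entry["metrics"],
            "age_seconds": round(age, 1),
            "stale": age > STALE_AFTER_SECONDS,
        }
    return out
```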
Meet transcription setup: spaces.patch with scope meetings.space.settings. Transcripts auto-delete after 3 months unless moved/copied — the recommendation is an Apex cron that copies the transcript Doc text into TP3 with source=family_fun_call (sketch below). The plan check (admin console SKU) is not yet done.

Service-account keys are blocked by iam.disableServiceAccountKeyCreation, so the plan pivoted to User OAuth + admin scopes. Then it turned out Mom's calendar — the actual goal — was already accessible via her existing share through the current workspace-mcp OAuth, so the full admin rebuild was skipped.

"This isn't it. This is terrible. And mainly just because it's not updated. It's old." — Mark, Sunday morning brain dump, before the rebuild
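A hedged sketch of that recommended (and not yet built) transcript-copy cron, assuming the existing workspace OAuth token can be reused with a Docs read-only scope; the token path, document-ID plumbing, ingest URL, and /ingest field names are all illustrative.

```python
# Hypothetical sketch of the proposed Apex cron: pull the call's transcript
# Doc text via the Docs API and push it into TP3 before Meet's 3-month
# auto-delete. Only source=family_fun_call comes from the notes above.
import requests
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

INGEST_URL = "https://memory.thebarnetts.info/ingest"   # assumed ingest endpoint


def doc_text(docs_service, document_id: str) -> str:
    """Flatten a Google Doc body into plain text."""
    doc = docs_service.documents().get(documentId=document_id).execute()
    chunks = []
    for element in doc.get("body", {}).get("content", []):
        for run in element.get("paragraph", {}).get("elements", []):
            chunks.append(run.get("textRun", {}).get("content", ""))
    return "".join(chunks)


def copy_transcript_to_tp3(document_id: str) -> None:
    creds = Credentials.from_authorized_user_file(
        "workspace_token.json",                          # assumed cached OAuth token
        scopes=["https://www.googleapis.com/auth/documents.readonly"],
    )
    docs = build("docs", "v1", credentials=creds)
    text = doc_text(docs, document_id)
    requests.post(
        INGEST_URL,
        json={"source": "family_fun_call", "text": text},
        timeout=30,
    ).raise_for_status()
```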
Mark lost a brain dump because Bidet was broken. What followed was a 90-minute Ollama-Apex wrestle that went 0-for-4 on restart strategies, with one rule reinforced and one pivot landed.
Ollama on Apex reports Listening on [::]:11434 (version 0.17.7). Despite OLLAMA_HOST=0.0.0.0:11434 being set at Machine + User + Process scope, Ollama for Windows still binds [::] only. Inside the Bidet container, /etc/hosts resolves host.docker.internal to both fdc4:f303:9324::254 and 192.168.65.254; v6 returns "Network is unreachable" immediately, and v4 hits TCP refused or an empty reply when Ollama isn't bound to v4.

The pivot:
- C:\Users\Breezy\honest-answers\.env — uncommented the disabled GEMINI_API_KEY=.
- C:\Users\Breezy\tp3_neural_stack\docker-compose.yml line 83: BIDET_LLM_BACKEND: local → cloud.
- Recreated tp3_bidet via docker compose up -d --force-recreate.
- Verified: docker exec into bidet, ran processor._generate(...) against the cloud-Gemini path, got back valid JSON {"clean": "..."}. The health endpoint reports {"gemini":true}. Backups of both files saved with .bak.<unixtime> extensions.
- Auto-revert: the 7:00 AM task runs powershell.exe -ExecutionPolicy Bypass -File C:\Users\Breezy\bidet_revert.ps1, which re-disables the GEMINI_API_KEY line, flips compose back to local, and recreates the container. If Mark wants Gemini past 7 AM, he tells me before then or extends the task.

What failed:
- Start-ScheduledTask "OllamaServe" (×4) — the process started, sometimes bound, then kept dying within 1-2 minutes. No crash log.
- C:\Users\Breezy\ollama_v4_forwarder.py — listening on 0.0.0.0:11435, forwarding to [::1]:11434. Couldn't connect upstream; Ollama died again before the forwarder could run.
- OLLAMA_HOST=0.0.0.0:11434 at Machine + User + Process scope — Ollama 0.17.7 still binds [::] only. Probable Go runtime / Windows IPV6_V6ONLY default behavior.

The rule reinforced: Ollama for Windows binds [::]:11434 IPv6-only even with an explicit OLLAMA_HOST=0.0.0.0:11434, and Windows does not enable IPV6_V6ONLY=0 dual-stack automatically.

Options for fixing local Ollama properly: (1) revive the v4 forwarder at C:\Users\Breezy\ollama_v4_forwarder.py (sketch below); (2) move Ollama into a Docker container natively reachable on tp3_internal_network; (3) pin Ollama to a version that bound v4 properly (pre-0.17.7, maybe); (4) a Go env-var hack, GODEBUG=netdns=go+v4 or similar — needs research. First diagnostic step either way: OLLAMA_DEBUG=DEBUG and a longer-lived log capture.

The TP3 Ollama Watchdog scheduled task is currently disabled (Mark complained twice about ntfy spam earlier today, so auto-recovery is off). Side effect: when Ollama dies, nothing brings it back until I notice. A smarter watchdog (alert only after N consecutive failures) is the right fix, plus a /dashboard/data extension that surfaces the failures when Mark wants.

"why tomorrow? Why not just fix it and do it right?" — Mark, ~00:25 ET, catching the deferral
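For reference, a minimal asyncio sketch of option (1) above — the dumb v4→v6 relay that ollama_v4_forwarder.py attempted. The listen and target ports match the attempt described; everything else is illustrative, and it only helps if Ollama itself stays alive.

```python
# Hypothetical sketch: listen on IPv4 0.0.0.0:11435 and relay every connection
# to Ollama's IPv6-only [::1]:11434, so a v4-only container can reach it.
import asyncio

LISTEN_HOST, LISTEN_PORT = "0.0.0.0", 11435
TARGET_HOST, TARGET_PORT = "::1", 11434


async def _pump(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    """Copy bytes one way until EOF, then close the destination."""
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()


async def handle(client_r: asyncio.StreamReader, client_w: asyncio.StreamWriter) -> None:
    try:
        upstream_r, upstream_w = await asyncio.open_connection(TARGET_HOST, TARGET_PORT)
    except OSError:
        client_w.close()          # Ollama is down again; drop the connection
        return
    # Relay both directions concurrently until either side closes.
    await asyncio.gather(_pump(client_r, upstream_w), _pump(upstream_r, client_w))


async def main() -> None:
    server = await asyncio.start_server(handle, LISTEN_HOST, LISTEN_PORT)
    async with server:
        await server.serve_forever()


if __name__ == "__main__":
    asyncio.run(main())
```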
Every thread Mark mentioned during the weekend, with one-line status. Pulled from project_tangents_backlog.md.
- Embedding-failure row loss in /gmail, /phone, /sms, and _insert_tp3: either insert a zero-vector placeholder and re-embed later, or return ok:false so callers know to retry (sketch below).
- /brief endpoint (tangent #3) — Apex cron at 6:55 AM generates a fresh brief; Tasker GETs it at 7:00 AM. Replaces the static greeting.
- /ask — add the top-3 memory hits for the question, for deeper personalization.
- l) → check "TP3" → Apply. Within 15 min the next trigger run picks it up.
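A minimal sketch of the first backlog item above — degrade loudly instead of silently dropping rows. The content and needs_reembed columns, the function names, and the embed callable are illustrative assumptions; only tp3_memories_local and the tp3_embedding vector(768) NOT NULL constraint come from the notes.

```python
# Hypothetical sketch: never let an embedding failure turn into a silent
# ok:true. Either store a zero-vector placeholder flagged for re-embedding,
# or surface ok:false so the caller can retry.
from typing import Any, Dict, List, Optional

ZERO_VECTOR: List[float] = [0.0] * 768        # satisfies the NOT NULL vector(768) column


def insert_memory(conn, text: str, source: str, embed) -> Dict[str, Any]:
    """Insert a TP3 row; report embedding failures instead of swallowing them."""
    embedding: Optional[List[float]] = None
    try:
        embedding = embed(text)                # e.g. a local embedding call
    except Exception:
        embedding = None                       # Ollama down or zombie

    needs_reembed = embedding is None
    if needs_reembed:
        embedding = ZERO_VECTOR                # placeholder, re-embed later

    vector_literal = "[" + ",".join(str(x) for x in embedding) + "]"
    try:
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO tp3_memories_local (source, content, tp3_embedding, needs_reembed) "
                "VALUES (%s, %s, %s::vector, %s)",
                (source, text, vector_literal, needs_reembed),
            )
        conn.commit()
    except Exception as exc:
        conn.rollback()
        return {"ok": False, "error": str(exc)}   # callers know to retry

    return {"ok": True, "needs_reembed": needs_reembed}
```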