One Tool, One Job: How a Stage 4 Cancer Patient Is Building Infrastructure the Health System Won't
When Russ was diagnosed with bowel cancer at 40 — and, at the same appointment, with smoldering myeloma — he was handed a paper journal to track his chemotherapy symptoms. He threw it away. Four years, three surgeries, and a stage 4 diagnosis later, he's running seven Claude projects, a Notebook LM workspace, and a mobile orchestration tool called Claude Dispatch that lets him leave his laptop at home when he goes to hospital.
Russ was diagnosed with bowel cancer in late 2021, aged 40, and simultaneously with smoldering myeloma. The smoldering myeloma has been inactive; the bowel cancer has progressed through multiple surgeries (bowel, liver, lung) and is now stage 4, with Russ on active chemotherapy. He runs AI for the business he works for, so his day job is adjacent to the technology. He blogs publicly about his disease at fcancerwith.ai and on LinkedIn. He is British, cared for by the NHS with some private care around the edges. He is articulate, technically fluent, and willing to pay roughly £200 a month for AI subscriptions.
Summary
Russ's story is the clearest case study in the series of a patient building durable, purpose-specific AI infrastructure to do what the health system will not: maintain a coherent, longitudinal, cross-specialty view of his own care. He tracks chemo symptoms daily in Claude, mirrors them into a Google Doc because chatbots silently lose long-term memory (he lost roughly half of three months of chemo data to memory failures across two tools), runs a weekly "cancer smasher" Claude project that scans new literature and trials against his own tumor profile, and uses Notebook LM as a closed-corpus analyst over his doctor's notes and bloods. From a Notebook LM audio summary he learned, for the first time, that his gallbladder had been removed during an earlier surgery — no one had told him. When he is in hospital he uses Claude Dispatch from his phone to orchestrate multiple projects, leaving the laptop at home. His philosophy is architectural: separate tools for separate jobs, paid tiers for privacy, external backups, and a clinician relationship that is, crucially, open to all of it (his liver surgeon sends him WhatsApp videos of his scans because it's faster than the hospital system).
Key Warnings
1. LLM long-term memory is unreliable, and failures are silent. Across two chatbots and three months, Russ lost about half his chemo symptom data, and he only noticed at the end of the cycle. Any patient treating a chatbot as a clinical log is accepting this risk.
2. Free tiers are not appropriate for personal health data. Training on conversation data is the default for free consumer tiers; sensitive medical context leaves the patient's control. Russ is categorical on this.
3. Sycophancy is not a UX quirk — it is a clinical risk. He switched away from ChatGPT and Gemini partly because he read that Claude was less agreeable. The parallel example from Tjasa, another patient in this series, is sharper: Gemini told her to ease off exercise while Claude, given the same inputs, told her that doing weights well was evidence she was recovering. Same prompt, opposite advice, no way for the patient to arbitrate.
4. AI tools overload when you try to make one do everything. "You should never try and get one tool to do too many things." Russ learned this by trying to build unified tools that collapsed under their own complexity. Separation of concerns is a design principle he arrived at through failure.
5. The health system will not hand you a coherent record. Specialists don't share; NHS scan transfer between parts of the same hospital takes two weeks unless someone physically walks it across. Procedures happen and are not communicated (the gallbladder). The patient is the only person with an incentive to integrate their own data.
6. The cost curve is steep. £20/month a year ago; £200/month now. For a sophisticated user, the bill scales with ambition. This is a two-tier patient experience in the making.
Key Insights
1. Tool architecture, not tool choice, is the expertise. Russ runs at least five distinct AI surfaces in coordinated roles: Claude (daily interaction, symptom logging, pill management), Google Docs (durable memory), Notebook LM (closed-corpus synthesis and presentation generation), Whisper Flow (voice input for sick days), Claude Dispatch (mobile orchestration across projects). The value is in the composition.
2. Closed-corpus AI is a different product class from open-web AI. Notebook LM's promise — it only uses what you give it — is the key property for a cancer patient. It cannot drift into forum lore or SEO-gamed content. It can, however, surface procedures you didn't know you'd had.
3. Agentic orchestration has already arrived for patients who pay for it. Claude Dispatch lets Russ go to hospital with only his phone and have multiple workstreams report back to a single screen. This is not a future capability; it is what he did two weeks before the interview.
4. Public disclosure can substitute for counseling. He describes starting to blog about cancer on LinkedIn as more useful to him than the formal counseling he'd tried earlier. "You cut the wheat from the chaff" on friendships; you build a different, wider network. The AI role here is not therapy but scaffolding — helping him write and publish when he is too sick to type.
5. Control is the real psychological deliverable. "AI has allowed me to feel like I'm being proactive and I'm not just waiting for the next bad thing to happen." The clinical value of a symptom tracker is real; the psychological value of the posture it enables is bigger. This is worth naming explicitly because it doesn't show up in outcomes papers.
6. The clinician relationship is bimodal. His liver surgeon sends WhatsApp video scan reviews because it's faster than hospital IT. His sister (a GP) describes her practice adopting AI scribes. There are clinicians who meet AI-informed patients as partners. There are also those who don't. The former group's existence makes disclosure safe and makes the chart accurate.
Key Tips
Use paid tiers for anything involving personal health data. Non-negotiable.
Separate tools by job. One for symptom tracking. One for literature scanning. One for pill management. Don't consolidate.
Always externalize long-term memory. A Google Doc, a spreadsheet — something the LLM can re-read but cannot lose. The first sketch after this list shows one way to do it.
Put standing rules in project settings: "be critical," "avoid confirmation bias," "ask clarifying questions rather than confirming." Don't retype these every session; keep them in one reusable place (the first sketch below holds them as a single constant).
Test models against each other on questions you already know the answer to. That's how you detect which tool is sycophantic about what; the second sketch after this list shows a minimal side-by-side harness.
For presentation and synthesis: Notebook LM turns a corpus of medical notes into infographics, slide decks, and audio summaries that are genuinely useful for preparing for specialist appointments.
When too sick to type, use voice-to-text. Whisper Flow sits across most apps.
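A minimal sketch of what "externalize long-term memory" can look like in practice, for readers comfortable running a small script. The filename, field names, severity scale, and standing-rules wording here are illustrative, not Russ's actual setup; a shared Google Doc or spreadsheet does the same job without code. The point is the shape: an append-only store the patient owns, which any chatbot can re-read but cannot silently lose.

```python
"""Sketch of an externalized symptom log. All names are illustrative."""
import csv
from datetime import date
from pathlib import Path

LOG = Path("chemo_symptom_log.csv")  # hypothetical filename: the durable store
FIELDS = ["date", "cycle_day", "symptom", "severity_0_10", "notes"]

# Standing rules kept in one reusable place (per the tip above): paste into
# the project's custom instructions once rather than retyping each session.
STANDING_RULES = (
    "Be critical. Avoid confirmation bias. "
    "Ask clarifying questions rather than confirming."
)

def append_entry(cycle_day: int, symptom: str, severity: int, notes: str = "") -> None:
    """Append one dated row; write the header if the file is new."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "cycle_day": cycle_day,
            "symptom": symptom,
            "severity_0_10": severity,
            "notes": notes,
        })

def full_log() -> str:
    """Return the whole log as text, ready to paste or upload into a chat."""
    return LOG.read_text()

if __name__ == "__main__":
    # Illustrative entry; the fields and 0-10 scale are assumptions.
    append_entry(cycle_day=3, symptom="peripheral neuropathy", severity=4,
                 notes="worse in cold, eased by evening")
    print(STANDING_RULES)
    print(full_log())
```

The same pattern works with a plain text file or a spreadsheet; what matters is that the log survives a model's memory failure and can be pasted back in at the start of any session.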
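And a sketch of the known-answer cross-model test, assuming API access to two providers. The model names are illustrative aliases that may need updating, and patients on consumer chat apps can run the same check by pasting one prompt into each app. The example question mirrors the Tjasa case above: a question where divergent answers expose which tool is agreeing rather than reasoning.

```python
"""Side-by-side sycophancy check. Models and prompt are illustrative."""
import anthropic
from openai import OpenAI

# A question whose answer you already know, so divergence is detectable.
PROMPT = (
    "I am on active chemotherapy and felt well enough to lift weights today. "
    "Is that evidence I should do more, or a sign I should ease off? "
    "Be critical and ask clarifying questions rather than agreeing with me."
)

def ask_claude(prompt: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative alias
        max_tokens=500,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

def ask_openai(prompt: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        max_tokens=500,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    # Print both answers side by side; disagreement is the signal to dig in.
    for name, answer in [("Claude", ask_claude(PROMPT)), ("OpenAI", ask_openai(PROMPT))]:
        print(f"=== {name} ===\n{answer}\n")
```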
Key Learning Lessons
Architect your tools; don't just pick one. Composition matters more than model choice.
Long-term memory is externalized, not trusted to the chatbot. Assume silent loss.
Paid tiers for health data, always.
The patient is the integration layer the health system doesn't provide.
Control is a therapeutic outcome — name it as such.
Your clinician's AI posture is a safety variable. Seek out the ones who engage; build the relationship where you find them.
The gallbladder lesson: read your own notes. You may be the first person to do so end-to-end.