The Research Patient: Dale on Agency, Custom GPTs, and the Concealment That Nearly Killed Him
Dale was given an inoperable, incurable esophageal cancer diagnosis in October 2024 with less than twelve months to live, roughly the worst prognostic classification available. His partner had lung cancer (in recovery after a pneumonectomy/lobectomy). His mother had just died. He was, in his own words, in a bad headspace, and he used it as fuel. At the time of the interview he had no visible signs of cancer.
Dale Atkinson was a financial crime investigator before his cancer diagnosis, which matters for understanding the research he did on his cancer. Compliance work trained him to read dense regulatory documents, a skill that transfers directly to medical literature. He is a compelling interview subject and, simultaneously, a survivorship-biased sample of one.
Dale’s Story
Dale's approach, once he decided to fight his diagnosis, was research-first, AI-second. He used ChatGPT as a literature triage layer, feeding in his diagnosis and medical letters, asking "if you were in my shoes, where would you start?" and getting back a reading list. He then manually read roughly 4,500 papers over three to six months, initially cover-to-cover, later skimming for the sections that actually mattered. He obtained next-generation sequencing data on his tumor. He spent six weeks (two to three hours a day) building a custom GPT trained only on the self-curated corpus of papers he found relevant: explicitly instructed not to search the open web, required to cite at least five sources, and required to explain its sourcing. He used this to map drug interactions across his chemotherapy, immunotherapy, off-label medications, and a self-designed supplement-and-lifestyle protocol. Crucially, he assembled a private clinical team (a metabolic researcher, a naturopath-nutritionist, and an integrative oncologist) to review what his AI and his reading were suggesting.

The other half of his story is darker. His standard-of-care oncologist called him "dangerous" and "stupid." He stopped disclosing his medication list. He later ended up on a palliative ward being offered morphine while silently taking low-dose naltrexone, an opioid antagonist that interacts dangerously with morphine. He survived the event and the cancer. He is now clear-eyed that clinician dismissal created that near-miss.
Key Warnings
1. ChatGPT confuses popularity with authority.
AI pulls information from Facebook groups and social-media doctors who may not be credentialed. "AI thinks that the more instances something is mentioned, the more likely it is to be true." This is the opposite of how evidence works. His advice: look for the quiet doctors doing the hard work; the model will not surface them unless you force it to.
2. Hallucinated context is the dominant failure mode, not hallucinated facts.
The model correctly identified papers but misread their relevance: confusing squamous-cell esophageal cancer with adenocarcinoma (different disease, different pathways, different treatment logic). The paper titles were real; the clinical inference was wrong.
3. Clinician dismissal produces concealment, which produces real harm.
Dale's low-dose naltrexone was not on his chart when he was admitted as a palliative patient. The hospital started him on morphine. The combination is dangerous. The omission existed because his primary oncologist had punished disclosure. This is the single most important safety story in the entire series.
4. Different models produce different answers to the same question and patients cannot easily arbitrate.
Dale's examples and Tjasa's parallel stories show that the same prompt given to Claude, ChatGPT, and Gemini can yield contradictory recommendations, each delivered with confidence. Without an external reference point this is not just confusing; it is a safety hazard.
5. AI is sycophantic in exactly the domain where that matters most.
"AI is essentially like a little puppy dog. It is very eager to please." He acknowledges that when he pushed the model toward conclusions he wanted — about his own protocol — it may have simply delivered what he wanted. He had clinicians to cross-check. Most people in his position will not.
6. Most advanced-stage cancer patients are using AI in secret.
He cites roughly 76% of stage 3–4 patients seeking alternatives online and roughly 70–72% of all cancer patients doing the same. If these numbers are even directionally right, clinician refusal to engage leaves a majority of oncology patients self-managing without safety oversight.
Key Insights
1. Literature triage is the cleanest patient AI use case. "This is my diagnosis, these are my medical letters, where would you start?" followed by a manual-read discipline is a genuinely strong workflow. It does what patients otherwise cannot — narrow a 400-paper PubMed result set into a 35-paper starting point — without replacing the actual reading.
2. The custom-GPT-with-guardrails pattern is the serious version of patient AI. Dale's six-week build: train only on curated papers, disable web search, require multiple-source citation, force the model to explain its sourcing process, and iterate prompts until the guardrails hold (a minimal sketch follows this list). The method is specific and reproducible. It is also not remotely accessible to a patient without research-document literacy.
3. Use one model to build prompts for another. Dale and Tjasa converge on this pattern. Claude is better at data analysis and extracting from sources; ChatGPT is sometimes better at writing the prompt that Claude will then execute against. This is workflow sophistication most clinicians do not know exists.
4. The clinician-as-AI-partner posture is what actually works.
Dale describes what good looks like, concretely: a clinician says "I get that ChatGPT told you ivermectin is the cure for everything. It's only phase one/two in trials, there are liver toxicity concerns and chemo interactions; you can use it, here's how we'd safely integrate it, but let's also work out what else has better evidence." That conversation never happened for him in the NHS; it happened with his private integrative team. The difference between those two postures is the difference between a complete chart and a near-fatal interaction.
5. Language matters clinically.
Tjasa's citation of a Canadian advocate — "the patient didn't fail the treatment; the treatment failed the patient" — and Dale's embrace of patient agency reframe the default clinical vocabulary. Language shapes blame, which shapes disclosure, which shapes chart accuracy, which shapes safety.
6. Agency is transferable to the clinician-patient dynamic.
Over a year, Dale's oncologist shifted from "do exactly what I tell you or die" to "what do you think we should do?" He attributes the shift to demonstrated agency on his part. This tracks with what Tjasa reports in her own care. Agency earns a different kind of consultation, but it takes time and it is not available to every patient.
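Insight 2 is concrete enough to sketch. Below is a minimal Python approximation, assuming the official OpenAI SDK; the load_corpus helper, the curated_papers folder, and the model name are illustrative assumptions rather than Dale's actual build, which used the Custom GPT builder rather than the API. The guardrail rules themselves come straight from his description: closed corpus, no open-web search, five-source citation, explained sourcing.

```python
# Minimal sketch of Dale's closed-corpus guardrail pattern (an approximation,
# not his actual Custom GPT). Assumptions: the corpus is a local folder of
# plain-text paper summaries you have already read; "gpt-4o" is a placeholder
# model name; OPENAI_API_KEY is set in the environment.
from pathlib import Path

from openai import OpenAI

GUARDRAILS = """You are a literature-triage assistant.
Rules:
1. Answer ONLY from the papers provided below. Do not use outside knowledge
   or the open web.
2. Cite at least five distinct papers by filename for any recommendation.
3. Explain where each claim is sourced from and why that source applies.
4. If the corpus cannot support an answer, say so instead of guessing."""


def load_corpus(folder: str) -> str:
    """Concatenate a curated folder of paper summaries into one context block."""
    parts = []
    for path in sorted(Path(folder).glob("*.txt")):
        parts.append(f"--- {path.name} ---\n{path.read_text()}")
    return "\n\n".join(parts)


def ask(question: str, corpus_folder: str) -> str:
    """Query the model with the guardrails and the closed corpus as context."""
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": GUARDRAILS},
            {"role": "system", "content": load_corpus(corpus_folder)},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask(
        "Map the interactions between my chemotherapy agents and low-dose naltrexone.",
        "curated_papers",
    ))
```

One caveat the sketch glosses over: 4,500 papers will not fit in a single context window, so a faithful build needs retrieval over the corpus (or the Custom GPT file-upload mechanism) rather than naive concatenation. The guardrail prompt is the part that carries over either way.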
Key Tips from Dale
For literature triage: feed in your diagnosis plus medical letters, ask for a prioritized reading list, then actually read the papers.
Force citations: "cite at least five different sources," "explain where you're sourcing from and why." If the model cannot, don't trust the output.
Build guardrails iteratively. Use one model to ask you 20 questions to refine the prompt; take that prompt into a different model for execution (a sketch of this two-model pipeline follows this list).
Disable open-web search for clinical reasoning — use a closed corpus of papers you've already read.
Assemble a human clinical team you can show AI outputs to. If your standard-of-care clinician won't engage, find one who will (integrative oncology, research clinicians, second-opinion networks).
Never hide your medication list, even from a clinician who has punished disclosure, and even when they've made it hard. If you cannot disclose directly, put it in the ER admission paperwork, on the wristband-level intake form, anywhere it reaches the people treating you in an emergency.
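The two-model pattern from the tips above, sketched in the same spirit: one model interrogates the task and drafts the prompt, a second model executes it. This assumes the official OpenAI and Anthropic Python SDKs; the model names, file name, and single-round refinement are illustrative assumptions, since in practice Dale's "20 questions" step is an interactive back-and-forth.

```python
# Minimal sketch of the two-model prompt-building pattern (illustrative, not
# Dale's actual setup). Model names are placeholders; API keys are read from
# the environment by each SDK.
import anthropic
from openai import OpenAI


def draft_prompt(task: str) -> str:
    """Step 1: have one model interrogate the task and write a refined prompt."""
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[{
            "role": "user",
            "content": (
                "List the clarifying questions you would normally ask, state a "
                "reasonable assumption for each, then write one detailed prompt "
                "for another model to execute this task:\n" + task
            ),
        }],
    )
    return response.choices[0].message.content


def execute_prompt(prompt: str, source_text: str) -> str:
    """Step 2: hand the refined prompt to a second model for execution."""
    client = anthropic.Anthropic()
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder
        max_tokens=2000,
        messages=[{
            "role": "user",
            "content": f"{prompt}\n\nSource material:\n{source_text}",
        }],
    )
    return message.content[0].text


if __name__ == "__main__":
    task = "Extract every drug interaction mentioned in these oncology letters."
    refined = draft_prompt(task)
    print(execute_prompt(refined, open("medical_letters.txt").read()))
```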
Key Learning Lessons
Use AI to narrow the search, not to summarize the answer. Read the papers yourself.
Context hallucination is the subtle killer: not invented studies, but correctly cited studies applied to the wrong disease.
Concealment is a safety emergency caused by clinician posture, and disclosure is non-negotiable regardless.
Custom GPTs with closed corpora are the step up from consumer chat, and require real time investment.
A clinical team you can bring AI findings to is a prerequisite, not a nice-to-have.
Clinician language and clinician posture shape patient behavior — agency begets partnership begets better care.
n=1 is n=1. Dale's outcome is extraordinary; his method is instructive; the two must be reasoned about separately.