<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The Epidemiology of Algorithms]]></title><description><![CDATA[Over 700 AI systems are active in clinical medicine. Almost none are monitored after deployment. The Epidemiology of Algorithms is building the surveillance science to change that.]]></description><link>https://newsletter.epidemiologyofalgorithms.org</link><image><url>https://substackcdn.com/image/fetch/$s_!HUO9!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb015e91e-8154-4dba-9d62-f1566a14dca3_800x800.png</url><title>The Epidemiology of Algorithms</title><link>https://newsletter.epidemiologyofalgorithms.org</link></image><generator>Substack</generator><lastBuildDate>Fri, 01 May 2026 16:14:24 GMT</lastBuildDate><atom:link href="https://newsletter.epidemiologyofalgorithms.org/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Anne E. Burnley, MD, MHS, MS]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[epidemiologyofalgorithms@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[epidemiologyofalgorithms@substack.com]]></itunes:email><itunes:name><![CDATA[Anne E. Burnley, MD, MHS, MS]]></itunes:name></itunes:owner><itunes:author><![CDATA[Anne E. Burnley, MD, MHS, MS]]></itunes:author><googleplay:owner><![CDATA[epidemiologyofalgorithms@substack.com]]></googleplay:owner><googleplay:email><![CDATA[epidemiologyofalgorithms@substack.com]]></googleplay:email><googleplay:author><![CDATA[Anne E. 
Burnley, MD, MHS, MS]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[04 | The Infrastructure Gap: Why Surveillance, Detection, and Mitigation Require More Than Just a Dashboard]]></title><description><![CDATA[Series 1 | Issue 04 | The Epidemiology of Algorithms]]></description><link>https://newsletter.epidemiologyofalgorithms.org/p/issue-04-the-infrastructure-gap</link><guid isPermaLink="false">https://newsletter.epidemiologyofalgorithms.org/p/issue-04-the-infrastructure-gap</guid><dc:creator><![CDATA[Anne E. Burnley, MD, MHS, MS]]></dc:creator><pubDate>Thu, 30 Apr 2026 14:33:21 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!w3mh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5726b32a-5c23-4348-aa8c-b246d33fb50a_3870x1860.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Issue 03 identified the problem.</p><p>Three feedback loops &#8212; the Override/Adaptation Loop, the Clinician Learning Loop, and the Training Feedback Loop &#8212; can turn a deployed clinical AI system into a dynamic system. Once in use, the system begins to react to its environment, influencing clinician behavior and, in some cases, even creating the very drift it was meant to prevent.</p><p>These loops do more than create risk. They make that risk harder to see. Traditional incident review is not built to detect them. Individual clinicians cannot spot them from the bedside. 
They only become visible when you look across larger populations and over longer periods of time.</p><p>Issue 04 asks the next logical question.</p><p><em>If these loops cannot be avoided and routine monitoring cannot consistently detect them, what does an institution actually need to build?</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!w3mh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5726b32a-5c23-4348-aa8c-b246d33fb50a_3870x1860.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!w3mh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5726b32a-5c23-4348-aa8c-b246d33fb50a_3870x1860.png 424w, https://substackcdn.com/image/fetch/$s_!w3mh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5726b32a-5c23-4348-aa8c-b246d33fb50a_3870x1860.png 848w, https://substackcdn.com/image/fetch/$s_!w3mh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5726b32a-5c23-4348-aa8c-b246d33fb50a_3870x1860.png 1272w, https://substackcdn.com/image/fetch/$s_!w3mh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5726b32a-5c23-4348-aa8c-b246d33fb50a_3870x1860.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!w3mh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5726b32a-5c23-4348-aa8c-b246d33fb50a_3870x1860.png" width="3870" height="1860" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5726b32a-5c23-4348-aa8c-b246d33fb50a_3870x1860.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1860,&quot;width&quot;:3870,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6191999,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://newsletter.epidemiologyofalgorithms.org/i/193257095?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F830546d8-8d84-4662-842a-49f972afcb14_4000x2857.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!w3mh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5726b32a-5c23-4348-aa8c-b246d33fb50a_3870x1860.png 424w, https://substackcdn.com/image/fetch/$s_!w3mh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5726b32a-5c23-4348-aa8c-b246d33fb50a_3870x1860.png 848w, https://substackcdn.com/image/fetch/$s_!w3mh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5726b32a-5c23-4348-aa8c-b246d33fb50a_3870x1860.png 1272w, https://substackcdn.com/image/fetch/$s_!w3mh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5726b32a-5c23-4348-aa8c-b246d33fb50a_3870x1860.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><div><hr></div><p><strong>The Governance Reality</strong></p><p>This is not a pessimistic conclusion; it is a structural one.</p><p>Clinical AI systems are static tools placed into environments that are constantly changing. From the moment they are deployed, their outputs begin interacting with clinicians, workflows, incentives, and data systems that are also evolving. Some of that change comes from outside the model &#8212; shifting patient populations, updated protocols, or new disease patterns. Some of it comes from the model itself through the three feedback loops described in Issue 03.</p><p>Either way, the result is the same. The gap between the model&#8217;s training environment and the real clinical setting widens at deployment. That divergence is not a flaw.
It is the predictable outcome of placing a fixed artifact inside a dynamic environment.</p><p>That is why initial validation cannot be the endpoint. Responsible AI deployment requires continuous surveillance, not as an optional quality-improvement activity but as a core operational function.</p><div><hr></div><p><strong>What a Minimum Viable Surveillance Dataset Really Needs</strong></p><p>The literature is now clear enough to show what separates a truly useful surveillance dataset from the dashboards most health systems currently rely on. Three components are critical, and all remain largely missing from standard practice.</p><p>The first is data provenance markers: clear indicators of whether the training data have outcomes influenced by earlier model predictions. Without this information, every retraining cycle risks absorbing distortions created by the model&#8217;s own prior use. In one documented clinical example, a retrained model performed worse even after receiving six times as much training data because AI-influenced labels had contaminated the dataset.</p><p>The second is a baseline measure of clinician performance gathered in AI-off conditions before deployment. This is the only reliable way to understand what the Clinician Learning Loop is doing over time. Without a pre-deployment baseline, there is no meaningful reference point for identifying trust miscalibration, deskilling, or never-skilling.</p><p>The third is adherence flags: structured records showing whether clinicians followed the model&#8217;s recommendations on an encounter-by-encounter basis. 
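</p><p><em>As a concrete sketch, an encounter-level record combining the adherence flag with the provenance marker and clinician identifier described above might look like the following. Every field and function name here is hypothetical, illustrative rather than any published schema:</em></p>

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class EncounterRecord:
    """Hypothetical encounter-level surveillance record (illustrative fields only)."""
    encounter_id: str
    clinician_id: str                # clinician identifier for longitudinal tracking
    model_version: str
    recommendation: str              # what the model advised
    action_taken: str                # what the clinician actually did
    followed: bool                   # adherence flag for this encounter
    override_reason: Optional[str]   # structured reason; None if none was recorded
    outcome_ai_influenced: bool      # provenance marker: outcome shaped by prior model use?
    timestamp: datetime

def adherence_rate(records: list[EncounterRecord]) -> Optional[float]:
    """Share of encounters in which the recommendation was followed."""
    if not records:
        return None
    return sum(r.followed for r in records) / len(records)
```

<p>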
Without adherence tracking, retraining cannot separate outcomes that reflect the model&#8217;s true predictive performance from those that reflect clinicians&#8217; responses to its recommendations.</p><p>Two additional elements complete the minimum viable dataset:</p><ul><li><p>Clinician identifiers for longitudinal tracking, so behavioral drift can be seen at the individual level rather than only in aggregate.</p></li><li><p>Cumulative alert exposure per clinician, measured over a rolling 90-day window, so institutions can monitor the dose-response relationship between alert burden and behavioral degradation before the effects become permanent.</p></li></ul><p>Right now, no post-deployment monitoring system brings all of this together in real time. That is the real issue. The field does not lack ambition. It lacks infrastructure.</p><div><hr></div><p><strong>The Five Signal Types: What They Are, When They Appear, and Who Owns Them</strong></p><p>To detect the three feedback loops, institutions need to distinguish among five distinct signal types that current systems often blur together. Each one appears at a different stage after deployment, depends on different infrastructure, and belongs to different stakeholders. Treating them as interchangeable is exactly how health systems end up with dashboards that seem comprehensive while missing the most important problems.</p><p><em><strong>1. The Bedside Signal</strong></em></p><p>The bedside signal reflects patient physiology directly &#8212; things like vital signs, clinical deterioration markers, and early warning scores. It is independent of the AI system and available in real time, making it useful for understanding what is happening with the patient in the moment.</p><p>But that is also its limitation. It tells you what is happening, not why. A worsening bedside signal cannot, on its own, implicate the algorithm.
It only becomes meaningful during algorithmic surveillance when paired with the other four signal types.</p><p><em>Ownership</em>: the clinical team.</p><p><em><strong>2. The Clinician Behavior Signal</strong></em></p><p>The clinician behavior signal captures how individual clinicians respond to AI recommendations over time. This includes override rates by clinician and alert type, time-to-decision after alert presentation, cumulative alert exposure over a rolling 90-day period, and agreement rates with AI recommendations. To make this signal useful, institutions need clinician-level longitudinal tracking rather than broad aggregate measures.</p><p>This signal commonly emerges within days to weeks of deployment, making it the earliest detectable indicator among the five. For example, a clinician whose override rate is climbing, whose decision-making is becoming faster, and whose alert exposure falls in the highest quartile may already be showing signs of automation bias before any patient outcome data are available.</p><p><em>Ownership</em>: clinical informatics, with escalation to the department chief and the AI governance committee.</p><p><em><strong>3. The Workflow Signal</strong></em></p><p>The workflow signal reflects aggregate system performance. It includes unit-level override rates, unusual alert-firing patterns, irregularities in system response times, and the share of overrides submitted without a structured reason. It looks across clinicians and encounters to identify system-wide patterns rather than individual decisions.</p><p>Most health systems already collect the raw data needed for this signal through EHR audit logs. What they usually lack is the analytical capability to monitor those logs for meaningful anomalies.</p><p><em>Ownership</em>: IT and informatics, with escalation to quality and safety leadership as well as operations.</p><p><em><strong>4. 
The Subgroup Outcome Signal</strong></em></p><p>The subgroup outcome signal tracks how algorithm performance differs across patient populations &#8212; by race, ethnicity, age, sex, comorbidity burden, and combinations of those factors. It is the most important signal for health equity, but also the slowest to appear, because it depends on outcome data that may take weeks or months to accumulate.</p><p>This is also the signal most likely to reveal endogenous bias amplification. Aggregate performance measures will not capture that. Only subgroup-level stratification will.</p><p><em>Ownership</em>: quality and safety, with escalation to the AI governance committee and executive leadership.</p><p><em><strong>5. The Retraining and Data Feedback Signal</strong></em></p><p>The retraining and data feedback signal monitors how model inputs, outputs, and training data evolve over the course of deployment. It requires model-specific instrumentation, including feature distribution tracking with rolling windows, label contamination monitoring, adherence-weighted performance estimates, and AUROC tracking against pre-deployment benchmarks.</p><p>This is the signal that most directly detects the Training Feedback Loop. An AUROC drop of more than 0.05 from baseline should trigger investigation. 
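</p><p><em>A minimal sketch of that investigation trigger, assuming only the 0.05 figure stated above; the function and variable names are hypothetical:</em></p>

```python
AUROC_DROP_THRESHOLD = 0.05  # "more than 0.05 from baseline" should trigger investigation

def needs_investigation(baseline_auroc: float, current_auroc: float,
                        threshold: float = AUROC_DROP_THRESHOLD) -> bool:
    """Flag a deployed model whose AUROC has fallen more than `threshold`
    below its pre-deployment benchmark."""
    return (baseline_auroc - current_auroc) > threshold
```

<p>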
A 9% to 39% decline in specificity after retraining is a quantified warning sign of feedback loop contamination.</p><p><em>Ownership</em>: data science and ML operations, with escalation to both the governance committee and the vendor.</p><div><hr></div><p><strong>What This Means for Healthcare Institutions</strong></p><p>If your institution has already deployed clinical AI, the real question is not whether the feedback loops described in Issue 03 exist.</p><p>The real question is whether you can see them.</p><p>Health executives and governance leaders should be asking:</p><ol><li><p>Do we track override rates over time by clinician and alert type rather than only in aggregate?</p></li><li><p>Do we know whether our retraining data have already been contaminated by prior model use?</p></li><li><p>Do we have a way to measure clinician performance without AI assistance?</p></li><li><p>Do we continuously monitor subgroup performance rather than only checking it at deployment?</p></li><li><p>Do we know which signals appear first and who is responsible for escalation when thresholds are crossed?</p></li></ol><p>If the answer to any of these questions is no, then feedback loops may already be active in your institution without any meaningful surveillance signal in place.</p><p>That is not an individual failure. It is a field-wide blind spot that can no longer be excused as invisible.</p><div><hr></div><p><strong>What Comes Next</strong></p><p>Issue 03 introduced the mechanism &#8212; three feedback loops and the epistemological challenge they pose.</p><p>Issue 04 has now laid out the governance implications, the infrastructure requirements, and the five signal types that any real surveillance system must be able to distinguish.</p><p>Issue 05 will introduce the six-domain surveillance architecture, the operational framework that enables surveillance, detection, and mitigation at an institutional scale. These domains are not abstract ideas.
They are the practical answer to the central question this issue raises: if feedback loops are inevitable, what does a health system actually need to build in order to see them?</p><p>Drift cannot be prevented. But it can be detected. And detection is where the discipline begins.</p><p>Subscribe to get each new issue of <em>The Epidemiology of Algorithms</em> delivered to your email inbox. </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://newsletter.epidemiologyofalgorithms.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://newsletter.epidemiologyofalgorithms.org/subscribe?"><span>Subscribe now</span></a></p><blockquote><p><em>&#8220;Trust in algorithmic systems should be continuously earned through rigorous, population-level surveillance rather than historically inherited from initial validation or deployment approval.&#8221; </em>&#8212; Anne E. Burnley, MD, MHS, MS</p></blockquote>]]></content:encoded></item><item><title><![CDATA[Issue 03 - The System Talks Back: Feedback Loops and the Epistemology of Drift ]]></title><description><![CDATA[Anne E. 
Burnley, MD, MHS, MS | The Epidemiology of Algorithms | Issue 03]]></description><link>https://newsletter.epidemiologyofalgorithms.org/p/the-system-talks-back-feedback-loops</link><guid isPermaLink="false">https://newsletter.epidemiologyofalgorithms.org/p/the-system-talks-back-feedback-loops</guid><pubDate>Thu, 16 Apr 2026 10:01:08 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!dt2O!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89ef6319-e5ee-483e-8cf9-d65b594e087a_2400x1600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dt2O!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89ef6319-e5ee-483e-8cf9-d65b594e087a_2400x1600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dt2O!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89ef6319-e5ee-483e-8cf9-d65b594e087a_2400x1600.png 424w, https://substackcdn.com/image/fetch/$s_!dt2O!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89ef6319-e5ee-483e-8cf9-d65b594e087a_2400x1600.png 848w, https://substackcdn.com/image/fetch/$s_!dt2O!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89ef6319-e5ee-483e-8cf9-d65b594e087a_2400x1600.png 1272w, https://substackcdn.com/image/fetch/$s_!dt2O!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89ef6319-e5ee-483e-8cf9-d65b594e087a_2400x1600.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!dt2O!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89ef6319-e5ee-483e-8cf9-d65b594e087a_2400x1600.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/89ef6319-e5ee-483e-8cf9-d65b594e087a_2400x1600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4380275,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://newsletter.epidemiologyofalgorithms.org/i/192983711?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89ef6319-e5ee-483e-8cf9-d65b594e087a_2400x1600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!dt2O!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89ef6319-e5ee-483e-8cf9-d65b594e087a_2400x1600.png 424w, https://substackcdn.com/image/fetch/$s_!dt2O!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89ef6319-e5ee-483e-8cf9-d65b594e087a_2400x1600.png 848w, https://substackcdn.com/image/fetch/$s_!dt2O!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89ef6319-e5ee-483e-8cf9-d65b594e087a_2400x1600.png 1272w, 
https://substackcdn.com/image/fetch/$s_!dt2O!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89ef6319-e5ee-483e-8cf9-d65b594e087a_2400x1600.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>In Issue 02, I introduced the Algorithm Exposure Model: a causal chain for understanding how clinical AI produces effects not one patient at a time, but across entire populations simultaneously.</p><p>That model was deliberately linear.</p><p>It began with the algorithm as agent, moved through algorithm output, clinician interpretation, clinical
decision, patient, and patient outcome, and showed how the clinical environment modifies every step.</p><p>That structure was necessary.</p><p>It clarified the architecture of exposure.</p><p>But it was also incomplete.</p><p>A causal chain is useful because it imposes order. Inputs move toward outputs. Causes precede effects. The model acts, and the world responds.</p><p>Deployed clinical AI does not remain that kind of system for long.</p><p>Once it enters a real clinical environment, it does not simply generate outputs and wait passively for the world to react. Clinicians adapt to it. Institutions reorganize around it. And in some systems, decisions made in response to the model become part of the data used later to evaluate or retrain the next version.</p><p>The output begins to shape the input. The effect begins to modify the cause. That is what Figure 2 adds.</p><p>Figure 1 showed us the architecture of exposure. Figure 2 adds motion.</p><div><hr></div><h4><strong>From causal chain to living system</strong></h4><p>The six nodes of the Algorithm Exposure Model are unchanged. 
The clinical environment continues to function as an effect modifier across all nodes and transitions.</p><p>What is new are three recursive arcs: feedback loops that connect downstream effects back to upstream system behavior.</p><p>Those loops turn a linear causal model into a living system.</p><p>And that distinction matters, because the problem is not only that clinical AI has downstream effects.</p><p>It is those effects that can return to the system, reshape its future behavior, and make the resulting harms harder to detect.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Uu3y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4523d3c5-3f63-4cfe-aa10-2f54ce077e40_2368x940.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Uu3y!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4523d3c5-3f63-4cfe-aa10-2f54ce077e40_2368x940.png 424w, https://substackcdn.com/image/fetch/$s_!Uu3y!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4523d3c5-3f63-4cfe-aa10-2f54ce077e40_2368x940.png 848w, https://substackcdn.com/image/fetch/$s_!Uu3y!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4523d3c5-3f63-4cfe-aa10-2f54ce077e40_2368x940.png 1272w, https://substackcdn.com/image/fetch/$s_!Uu3y!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4523d3c5-3f63-4cfe-aa10-2f54ce077e40_2368x940.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!Uu3y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4523d3c5-3f63-4cfe-aa10-2f54ce077e40_2368x940.png" width="1456" height="578" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4523d3c5-3f63-4cfe-aa10-2f54ce077e40_2368x940.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:578,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!Uu3y!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4523d3c5-3f63-4cfe-aa10-2f54ce077e40_2368x940.png 424w, https://substackcdn.com/image/fetch/$s_!Uu3y!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4523d3c5-3f63-4cfe-aa10-2f54ce077e40_2368x940.png 848w, https://substackcdn.com/image/fetch/$s_!Uu3y!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4523d3c5-3f63-4cfe-aa10-2f54ce077e40_2368x940.png 1272w, https://substackcdn.com/image/fetch/$s_!Uu3y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4523d3c5-3f63-4cfe-aa10-2f54ce077e40_2368x940.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p><em><strong>Figure 2. The Algorithm Exposure Model with Feedback Loops.</strong> Three recursive arcs connect downstream effects back to upstream system behavior, turning the linear causal chain from Figure 1 into a living system. The clinical environment functions as an effect modifier across all nodes and transitions.</em></p><p><strong>The three loops are:</strong></p><p><strong>The Override/Adaptation Loop &#8212; from decision host back to algorithm output</strong></p><p><strong>The Clinician Learning Loop &#8212; from clinical decision back to decision host</strong></p><p><strong>The Training Feedback Loop &#8212; from patient outcome back to algorithm agent</strong></p><p>Each has a distinct mechanism. Each has a distinct failure mode.
And each operates in ways that are difficult to see in a single patient encounter.</p><p>That is the point.</p><div><hr></div><h4><strong>The Override/Adaptation Loop: where automation bias lives</strong></h4><p>The most recognizable loop is the shortest one.</p><p>The Override/Adaptation Loop sits between the <strong>clinician</strong> node and the <strong>algorithm output</strong> node. Every time a clinician receives an alert, recommendation, score, or classification, they must decide whether to accept, modify, or override it.</p><p>That decision is never purely technical.</p><p>It is shaped by trust, fatigue, workload, recent experience with the system, alert burden, and the cognitive cost of checking whether the recommendation is correct.</p><p>Over time, what matters is not any single decision. It is the pattern of those decisions across hundreds or thousands of encounters.</p><p>This is where automation bias lives.</p><p>The evidence base here is already substantial. Across specialties, AI recommendations can influence clinical decision-making powerfully &#8212; sometimes appropriately, sometimes not. Experience may moderate that effect, but it does not eliminate it. Even highly experienced clinicians remain vulnerable when an output appears plausible enough, and the burden of verification is high.</p><p>Alert volume compounds the problem.</p><p>As exposure increases, adherence patterns degrade. What begins as decision support becomes background noise. The system continues to generate outputs. The clinician continues to adapt. And the aggregate pattern that emerges becomes the institution&#8217;s operational reality, regardless of what the original validation study claimed.</p><p>This loop often generates one of the earliest detectable signals in a deployed system. 
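</p><p><em>One hedged illustration of watching that pattern is a per-clinician override rate computed over a trailing window. Everything here is a sketch under assumed names; the 90-day window is illustrative, not a validated threshold:</em></p>

```python
from datetime import datetime, timedelta

WINDOW = timedelta(days=90)  # illustrative trailing window for one clinician's alerts

def rolling_override_rate(events, now):
    """events: iterable of (timestamp, overridden) pairs for one clinician.
    Returns the fraction of alerts overridden inside the trailing window,
    or None when no alerts fall within it."""
    recent = [overridden for ts, overridden in events
              if timedelta(0) <= now - ts <= WINDOW]
    if not recent:
        return None
    return sum(recent) / len(recent)
```

<p>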
Changes in override rates, response rates, and alert fatigue can appear within days to weeks of deployment &#8212; long before a health system has enough outcome data to say anything definitive about patient harm.</p><p>That makes this loop the first place a surveillance architecture should look.</p><div><hr></div><h4><strong>The Clinician Learning Loop: trust miscalibration over time</strong></h4><p>The second loop is slower and less visible.</p><p>The Clinician Learning Loop runs from the <strong>clinical decision</strong> node back to the <strong>clinician</strong> node. It reflects the fact that clinicians do not merely use AI systems.</p><p>They learn from them.</p><p>They update their internal model of the system&#8217;s reliability, when to trust it, and when to ignore it.</p><p>Sometimes that learning is appropriate. Sometimes it is not.</p><p>The problem is not simply <em>overtrust</em> or <em>undertrust</em> in the abstract. Repeated exposure to algorithmic output can reshape clinical reasoning itself. Trust can become miscalibrated either gradually over months of use or rapidly after brief exposure to outputs that appear consistently correct.</p><p>That miscalibration has a name in the literature &#8212; actually, three names: deskilling, never-skilling, and mis-skilling.</p><p>Deskilling is the loss of previously acquired clinical ability through disuse.</p><p>Never-skilling is the failure to develop competency because the system scaffolds the task before the clinician can internalize it.</p><p>Mis-skilling is the reinforcement of incorrect behavior when algorithmic error becomes part of the clinician&#8217;s own reasoning pattern.</p><p>All three reduce the clinician&#8217;s ability to detect error independently, and that matters because, in the Algorithm Exposure Model, the clinician is not a passive recipient of output.
The clinician is the decision host.</p><p>If the decision host is being reshaped by the system, then the system&#8217;s future behavior is being altered indirectly through the very human-in-the-loop meant to regulate it.</p><p>The Clinician Learning Loop is also the least well-instrumented of the three. Most health systems do not measure trust calibration in real time. They do not track independent baseline performance in AI-off conditions. They also do not routinely test agreement rates against challenge cases designed to reveal overreliance.</p><p>Without those measures, trust miscalibration remains not only unmanaged but largely unseen.</p><div><hr></div><h4><strong>The Training Feedback Loop: the self-fulfilling system</strong></h4><p>The longest and most concerning loop runs from the <strong>patient outcome</strong> node back to the <strong>algorithm agent</strong> node itself. It is the loop in which clinical AI begins to learn from its own effects.</p><p>If algorithmic predictions influence clinical decisions, and those decisions influence outcomes, and those outcomes later enter the data used for evaluation or retraining, then the model is no longer learning from an independent world. It is learning from a world it has already helped shape.</p><p>That is a fundamentally different problem from simple model degradation over time. It is a self-fulfilling system.</p><p>Recent literature has begun to name this directly. In ICU prognostic models, researchers have described model-mediated intervention as a mechanism that changes the relationship between predictors and outcomes after deployment. In resuscitation science, self-fulfilling prophecies arise when treatment decisions are not adequately represented in training data, when human-machine interaction compounds those decisions, and when historically sensible clinical choices become structurally misleading as medicine advances.</p><p>Simulation work has made the stakes concrete.
Models retrained after deployment can lose substantial specificity precisely because the data no longer mean what they meant before deployment. A model that appears to be improving by incorporating more recent data may be introducing distortions from its prior use. In one documented clinical case, a retrained model performed worse despite having six times as much training data because AI-influenced labels had contaminated the training set.</p><p>This loop is not only hard to detect; it also changes the meaning of the evidence used to detect it.</p><p>More data may not correct the model; in some cases, it makes the model worse.</p><div><hr></div><h4><strong>The epistemology of drift</strong></h4><p>This is where the argument moves from mechanism to interpretation.</p><p>The standard drift literature treats performance degradation as external to the model. The patient population evolves. Disease prevalence varies. Clinical protocols are updated. The environment changes. The model must adapt accordingly.</p><p>That framing is not wrong, but it is incomplete.</p><p>The feedback loops in Figure 2 introduce a different category: endogenous drift.</p><p>This is not drift happening to the model from outside; it is drift the model helps create by its own deployment.</p><p>Endogenous drift emerges when algorithmic output alters clinician behavior, workflow patterns, data provenance, and the relationship between predictors and outcomes.</p><p>That distinction matters because it changes what can be known about error:</p><p>&#183; A model that improves outcomes may appear to degrade because it has changed the world it was trained to predict.</p><p>&#183; A model that appears to perform well may do so only because its outputs have shaped the outcomes now being counted as ground truth.</p><p>&#183; A system may seem stable in the aggregate while subgroup harm quietly intensifies beneath the surface.</p><p>The loops do not merely produce errors; they also obscure
them:</p><p>&#183; A self-fulfilling prophecy can look like a successful prediction.</p><p>&#183; Alert fatigue can look like clinician noncompliance.</p><p>&#183; Trust miscalibration can look like individual variation.</p><p>&#183; Retraining contamination can look like model adaptation.</p><p>That is the epistemic problem.</p><p>The system can alter the conditions under which it is observed, and in doing so, make its own failures harder to see.</p><div><hr></div><h4><strong>The loops are not independent</strong></h4><p>These feedback loops do not operate independently.</p><p>The Override/Adaptation Loop can alter the data-generating process for the Training Feedback Loop. The Clinician Learning Loop can reduce the clinician&#8217;s ability to detect errors, allowing flawed outputs to pass uncorrected into future data. Behavioral shifts become retraining distortions. Retraining distortions make outputs less reliable. Less reliable outputs further degrade trust calibration.</p><p>This is how harm compounds.</p><p>Not usually as a single dramatic failure, but as a gradually tightening cycle.</p><p>A system that generates automation bias produces override patterns that contaminate training data; contaminated training data, in turn, creates a model with more systematic error. More systematic error deepens trust miscalibration. More miscalibration reduces error detection. Reduced error detection allows the next round of labels and outcomes to carry the distortion forward.</p><p>The loops tighten around one another.</p><p>That is why this is not a series of isolated problems.
It is a system problem.</p><p>At the bedside, none of these patterns are obvious; however, as they aggregate at the population level over time, they become detectable.</p><div><hr></div><h4><strong>Why incident review cannot solve this problem</strong></h4><p>Individual incident reviews cannot solve this problem because they were never designed to do so.</p><p>Incident review works best when harm is discrete, visible, and attributable to a bounded event. Feedback-loop harms are different. They unfold gradually, distribute across populations, and can alter the evidentiary frame through which later events are interpreted.</p><p>In other words, the problem is not simply that we are missing isolated failures. We are using methods built for isolated failures to monitor recursive systems. That mismatch guarantees blind spots.</p><p>A surveillance architecture for deployed clinical AI must be able to detect early behavioral adaptation, monitor evolving trust calibration, and interrogate whether retraining data remain epistemically meaningful after deployment.</p><p>Without that infrastructure, health systems may continue to rely on approval, validation, or aggregate performance summaries long after those measures no longer reflect the true state of the system.</p><div><hr></div><h4><strong>The governance problem after deployment</strong></h4><p>Clinical AI should not be understood as a tool acting on a static environment.</p><p>Once deployed, it enters into relationships with clinicians, institutions, workflows, and data systems that change in response to it.</p><p>Those responses loop back.</p><p>They affect future decisions, future measurements, and future models.</p><p>That is why trust in algorithmic systems cannot be permanently inherited from initial validation or deployment approval.
It has to be continuously earned through rigorous, population-level surveillance capable of seeing the recursive dynamics that ordinary monitoring misses.</p><p>The key question is no longer just whether a model was valid when deployed. It is whether the system it evolves into after deployment remains understandable enough to govern safely.</p><p>&#8220;Trust in algorithmic systems should be continuously earned through rigorous, population-level surveillance rather than historically inherited from initial validation or deployment approval.&#8221;</p><div><hr></div><p>What comes next:</p><p>Issue 03 identifies the problem. In issue 04, we will examine what it would take to monitor these loops in practice.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://newsletter.epidemiologyofalgorithms.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The Epidemiology of Algorithms! Subscribe now to receive new posts in your email inbox.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Issue 02 - Clinical AI as a Population-Level Intervention: The Algorithm Exposure Model]]></title><description><![CDATA[Anne E. 
Burnley, MD, MHS, MS | The Epidemiology of Algorithms | Issue 02]]></description><link>https://newsletter.epidemiologyofalgorithms.org/p/clinical-ai-as-a-population-level-intervention-the-algorithm-exposure-model</link><guid isPermaLink="false">https://newsletter.epidemiologyofalgorithms.org/p/clinical-ai-as-a-population-level-intervention-the-algorithm-exposure-model</guid><dc:creator><![CDATA[Anne E. Burnley, MD, MHS, MS]]></dc:creator><pubDate>Sat, 28 Mar 2026 18:06:02 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!I49r!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa221159b-ede0-42aa-88b3-c95fd7f5dd15_4000x2857.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><p>&#8220;Six Blind Men and an Elephant&#8221; is an ancient Indian fable,  documented in the <em>Tittha Sutta</em> around 500 BC.</p><p>Six blind men encounter an elephant.</p><p style="text-align: justify;">Each touches a different part and declares a different truth: A tree, a fan, a wall, a snake, a rope, a spear.</p><p style="text-align: justify;">Each is absolutely certain. None is wholly wrong. 
But none can see the whole.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!I49r!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa221159b-ede0-42aa-88b3-c95fd7f5dd15_4000x2857.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!I49r!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa221159b-ede0-42aa-88b3-c95fd7f5dd15_4000x2857.png 424w, https://substackcdn.com/image/fetch/$s_!I49r!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa221159b-ede0-42aa-88b3-c95fd7f5dd15_4000x2857.png 848w, https://substackcdn.com/image/fetch/$s_!I49r!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa221159b-ede0-42aa-88b3-c95fd7f5dd15_4000x2857.png 1272w, https://substackcdn.com/image/fetch/$s_!I49r!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa221159b-ede0-42aa-88b3-c95fd7f5dd15_4000x2857.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!I49r!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa221159b-ede0-42aa-88b3-c95fd7f5dd15_4000x2857.png" width="1456" height="1040" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a221159b-ede0-42aa-88b3-c95fd7f5dd15_4000x2857.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1040,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:18743921,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://newsletter.epidemiologyofalgorithms.org/i/191723290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa221159b-ede0-42aa-88b3-c95fd7f5dd15_4000x2857.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!I49r!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa221159b-ede0-42aa-88b3-c95fd7f5dd15_4000x2857.png 424w, https://substackcdn.com/image/fetch/$s_!I49r!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa221159b-ede0-42aa-88b3-c95fd7f5dd15_4000x2857.png 848w, https://substackcdn.com/image/fetch/$s_!I49r!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa221159b-ede0-42aa-88b3-c95fd7f5dd15_4000x2857.png 1272w, https://substackcdn.com/image/fetch/$s_!I49r!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa221159b-ede0-42aa-88b3-c95fd7f5dd15_4000x2857.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg 
role="img"></svg></button></div></div></div></a></figure></div><p>I thought about that fable one afternoon in our occupational health clinic.</p><p>I was reviewing audiograms for a workforce I had been watching for years. Halfway through that stack, I noticed something that shouldn&#8217;t have been there.</p><p>One third of the group had significant threshold shifts. Not one worker. Not a cluster. A third &#8212; distributed quietly across a population I happened to be watching in its entirety. No one had complained. The shift had aggregated in silence.</p><p>A worksite visit confirmed what I suspected. We contained the harm because we caught it early. Without that intervention, those workers would have presented years later with permanent hearing loss, entirely preventable.</p><p>What it required: One provider. One population.
Complete visibility.</p><p>Remove any one of those three conditions, and the shift continues. Distribute that workforce across three clinics, assign them to five different providers, and the signal disappears into the noise of individual encounters. Each provider sees their fraction. Nobody sees the elephant.</p><blockquote><p><em><strong>&#8220;Clinical AI does not act on one patient. It acts on every patient to produce population-level effects.&#8221; &#8212; Anne E. Burnley, MD</strong></em></p></blockquote><p>Clinical AI is deployed across thousands of providers and hundreds of institutions. Every clinician sees their piece, their patient, their alert, their encounter. Nobody is standing back, watching the whole animal.</p><p>That is the problem this discipline exists to solve.</p><div><hr></div><p><strong>THE PROBLEM</strong></p><p>When the Epic Sepsis Model was deployed at hundreds of hospitals beginning in 2018, no systematic surveillance infrastructure existed to monitor its performance. The model was proprietary. External validation was not required before deployment.</p><p>It took a research team at Michigan Medicine, working retrospectively, on their own data, with limited vendor cooperation, to find out what the model was actually doing. What they found: the algorithm missed two-thirds of sepsis patients while firing alerts on nearly one in five hospitalizations. Epic had reported performance that would correctly distinguish sepsis patients from non-sepsis patients roughly 8 times out of 10. The real number was barely better than flipping a coin.</p><p>The model ran unwatched.</p><p>Not because clinicians were careless. But because no architecture existed for watching. No surveillance infrastructure. No signal detection system. No adverse event taxonomy. No unique identifier to track the model across sites or versions.</p><p>That is not a technology problem. 
That is an epidemiological problem, and it has a solution.</p><div><hr></div><p><strong>THE ALGORITHM EXPOSURE MODEL</strong></p><p>The framework begins with a reframe.</p><p>Clinical AI is not a software tool deployed in individual clinical encounters. It is a population-level intervention &#8212; operating simultaneously across thousands of providers and hundreds of institutions, producing effects that aggregate into outcomes no single clinician can see.</p><p>The Algorithm Exposure Model extends the classical epidemiological triad &#8212; agent, host, environment &#8212; to clinical AI.</p><p>The algorithm is the agent. Its outputs influence the clinician, the decision host, whose clinical decisions then affect the patient. Individual outcomes aggregate into population-level effects. The clinical environment modifies every step: staffing ratios, alert burden, workflow design, and institutional culture around AI trust.</p><p>The same algorithm, deployed in different environments, produces different outcomes. Not because the model changes. Because the environment changes its effect.</p><p>Algorithms are not inherently biased. But the data they are trained on may carry biases, and algorithms can amplify those biases across populations.</p><p>The Algorithm Exposure Model traces a causal pathway from algorithm to patient: six nodes, one chain, one environment that modifies everything. The algorithm generates an output. A clinician interprets it. A clinical decision follows. A patient is affected.
Individual outcomes aggregate into population-level effects that no single provider can see at the bedside.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7czS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fab1e1b-5759-4b36-8216-b1697ee8a503_2304x993.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7czS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fab1e1b-5759-4b36-8216-b1697ee8a503_2304x993.png 424w, https://substackcdn.com/image/fetch/$s_!7czS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fab1e1b-5759-4b36-8216-b1697ee8a503_2304x993.png 848w, https://substackcdn.com/image/fetch/$s_!7czS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fab1e1b-5759-4b36-8216-b1697ee8a503_2304x993.png 1272w, https://substackcdn.com/image/fetch/$s_!7czS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fab1e1b-5759-4b36-8216-b1697ee8a503_2304x993.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7czS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fab1e1b-5759-4b36-8216-b1697ee8a503_2304x993.png" width="2304" height="993" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4fab1e1b-5759-4b36-8216-b1697ee8a503_2304x993.png&quot;,&quot;srcNoWatermark&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/eb759079-0dbd-42e2-864e-15b7571991c3_2304x1163.png&quot;,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:993,&quot;width&quot;:2304,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:131469,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://newsletter.epidemiologyofalgorithms.org/i/191723290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F946252f7-3218-44dc-8d00-3d5605b5de08_2304x1728.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!7czS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fab1e1b-5759-4b36-8216-b1697ee8a503_2304x993.png 424w, https://substackcdn.com/image/fetch/$s_!7czS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fab1e1b-5759-4b36-8216-b1697ee8a503_2304x993.png 848w, https://substackcdn.com/image/fetch/$s_!7czS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fab1e1b-5759-4b36-8216-b1697ee8a503_2304x993.png 1272w, https://substackcdn.com/image/fetch/$s_!7czS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fab1e1b-5759-4b36-8216-b1697ee8a503_2304x993.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 
pc-reset"></div></div></div></a></figure></div><p><em><strong>Figure 1. The Algorithm Exposure Model:</strong> Linear Causal Chain. The algorithm acts as the agent, generating outputs that influence the decision host (clinician), whose clinical decisions affect the patient host. The clinical environment (staffing, alert burden, workflow, and institutional culture) acts as an effect modifier at every node. Individual patient outcomes aggregate across settings to produce population-level signals.</em></p><div><hr></div><p><strong>WHY CLINICIANS MUST LEAD</strong></p><p>The clinician who sees the patient is the only person who can recognize when an algorithm&#8217;s recommendation doesn&#8217;t fit the clinical picture.
The clinician who reviews a hundred alerts in a shift is the only person who can report when the signal-to-noise ratio has become unworkable. The clinician who knows their patient population is the only person who can notice when a subgroup is being systematically missed.</p><p>If clinicians do not take a prominent role in AI surveillance and governance, we will have ceded the system&#8217;s most important feedback loop.</p><p>Our patients will pay the price.</p><div><hr></div><p><strong>AN OPEN INVITATION</strong></p><p>This framework is my best current thinking. It is not finished.</p><p>A surveillance architecture for clinical AI, one that actually works at scale, cannot be designed by one physician working alone.</p><p>It needs the cardiologist who has watched a risk algorithm miss her patients. The informaticist who knows where the data lives. The patient safety officer who has been trying to report an algorithm-associated event but has nowhere to send it. The health equity researcher who knows which populations are being missed. The nurse who has watched alert fatigue erode her colleagues&#8217; responsiveness.</p><p>Post-deployment surveillance of clinical AI is not optional. It is a requirement.</p><p>But Figure 1 is not the whole picture. It shows how algorithm-associated events travel. It does not show how they compound.</p><p>In Issue 03 &#8212; <em>The System Talks Back: Feedback Loops and the Epistemology of Drift</em> &#8212; the model becomes dynamic with feedback loops, propagation, and aggregation. The linear model in Figure 1 becomes a living system, changing everything we think about drift.</p><p>The chain you saw here talks back. The next issue shows you how.</p><blockquote><p><em><strong>&#8220;Trust in algorithmic systems should be continuously earned through rigorous, population-level surveillance rather than historically inherited from initial validation or deployment approval.&#8221; &#8212; Anne E.
Burnley, MD</strong></em></p></blockquote><p>In healthcare, particularly, trust must be:</p><p>&#8226; Measured through subgroup performance<br>&#8226; Maintained through transparent evaluation<br>&#8226; Monitored as models drift<br>&#8226; Re-earned every time a system is updated, retrained, or deployed in a new context</p><p>This is what <em>The Epidemiology of Algorithms</em> exists to build.</p><div><hr></div><p><em>Next issue, April 16, 2026: The System Talks Back: Feedback Loops and the Epistemology of Drift</em></p><p>The Epidemiology of Algorithms <br><em>Anne E. Burnley, MD, MHS, MS </em><br>newsletter.epidemiologyofalgorithms.org</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://newsletter.epidemiologyofalgorithms.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p style="text-align: center;"><em>Thanks for reading The Epidemiology of Algorithms! <br>Subscribe to receive new posts and support my work.</em></p>]]></content:encoded></item><item><title><![CDATA[Issue 01 - SHIFT and DRIFT: The Case for Algorithmic Surveillance]]></title><description><![CDATA[Anne E. Burnley, MD, MHS, MS | The Epidemiology of Algorithms | Issue 01]]></description><link>https://newsletter.epidemiologyofalgorithms.org/p/shift-and-drift</link><guid isPermaLink="false">https://newsletter.epidemiologyofalgorithms.org/p/shift-and-drift</guid><dc:creator><![CDATA[Anne E. 
Burnley, MD, MHS, MS]]></dc:creator><pubDate>Thu, 19 Mar 2026 12:03:20 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!vvo_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62dc2400-49bd-4bd5-a5da-139b25ea589e_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://newsletter.epidemiologyofalgorithms.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://newsletter.epidemiologyofalgorithms.org/subscribe?"><span>Subscribe now</span></a></p><h3>WHAT DO AI SYSTEMS AND INFLUENZA VIRUSES HAVE IN COMMON?</h3><p>They both shift and drift, and we have built a global surveillance system for only one of them.</p><div><hr></div><h3>THE EXPOSURE</h3><p>I asked my new Occupational Health Resident this question as a &#8220;joke&#8221; in the clinic a few weeks ago. &#8220;What do AI systems and influenza viruses have in common?&#8221; I had just taken a course on &#8220;AI in Healthcare,&#8221; and he had spent a year studying epidemiology for an MPH. I could see the wheels turning in his head, possibly wondering what he was missing or if it was a trick question; after all, we had just met. After a minute or so, he said, &#8220;I have no clue. What do they have in common?&#8221; Excitedly, I said, &#8220;They both shift and drift.&#8221; We both went quiet, the specific kind of quiet that happens when something meant as a joke turns out to be true.</p><p>Every clinician reading this has encountered a clinical decision support tool. An early warning score. A sepsis alert. A diagnostic algorithm. You&#8217;ve learned, over time, how much trust to put in each. You&#8217;ve calibrated your response.
You&#8217;ve built a mental model of when it&#8217;s right and when it goes off randomly.</p><p>Here is what you were almost certainly never told: The mental model you so carefully curated over the years may no longer be valid.</p><p>The algorithm you use today may not be the algorithm you were trained on. It may have been silently updated. Its underlying model may have drifted as the population it was trained on diverged from the population you&#8217;re now treating. No alarm fired. No notification was sent. The interface looks identical. But something changed, and your calibration, built on the old version, is now obsolete.</p><div><hr></div><h3>THE SIGNAL</h3><p><strong>Antigenic shift and drift: A borrowed framework that just fits.</strong></p><p>In virology, antigenic drift is the slow, incremental change in the surface proteins of viruses, especially influenza viruses, over time. These mutation-driven changes are small, and the immune system, trained on a previous version of the virus, still recognizes it, but less effectively. Protection erodes gradually, invisibly, until it fails when it matters.</p><p>Antigenic shift is different. It is a sudden, discontinuous change &#8212; a reassortment of genetic material that produces a novel viral strain the immune system has never encountered. No prior immunity. No warning. The 1918 influenza pandemic and the 2009 H1N1 pandemic were both products of antigenic shift. 
In contrast, the COVID-19 pandemic was caused by SARS-CoV-2, a virus entirely new to human populations.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!vvo_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62dc2400-49bd-4bd5-a5da-139b25ea589e_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!vvo_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62dc2400-49bd-4bd5-a5da-139b25ea589e_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!vvo_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62dc2400-49bd-4bd5-a5da-139b25ea589e_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!vvo_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62dc2400-49bd-4bd5-a5da-139b25ea589e_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!vvo_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62dc2400-49bd-4bd5-a5da-139b25ea589e_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!vvo_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62dc2400-49bd-4bd5-a5da-139b25ea589e_1536x1024.png" width="1456" height="971" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/62dc2400-49bd-4bd5-a5da-139b25ea589e_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:328210,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://epidemiologyofalgorithms.substack.com/i/190786432?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62dc2400-49bd-4bd5-a5da-139b25ea589e_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!vvo_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62dc2400-49bd-4bd5-a5da-139b25ea589e_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!vvo_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62dc2400-49bd-4bd5-a5da-139b25ea589e_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!vvo_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62dc2400-49bd-4bd5-a5da-139b25ea589e_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!vvo_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62dc2400-49bd-4bd5-a5da-139b25ea589e_1536x1024.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Algorithmic (model) drift illustrated: As time passes and data patterns change, the model&#8217;s accuracy gradually declines because it was trained on older data that no longer reflects current conditions.</figcaption></figure></div><p></p><p><strong>The algorithmic parallels:</strong></p><p>Algorithmic drift, or model drift, is the gradual decline in an algorithm&#8217;s performance over time. It happens when the data used to train the model diverges from the data it encounters in the real world, such as changes in the patient population or clinical practices.</p><p>The model still runs normally, and no error messages appear. Because the system looks the same, clinicians continue to use and trust it at the same rate as before. 
</p><p>However, as the gap between the training data and current conditions grows, the model&#8217;s predictions may become less accurate. Performance slowly erodes without immediate notice.</p><p>An algorithmic shift is a sudden, major change in how an algorithm behaves. It can happen when a system receives a major version update, when a new vendor system is adopted, or when a retrained model is deployed overnight.</p><p>Unlike gradual changes, an algorithmic shift is discontinuous. The system&#8217;s behavior can change immediately and significantly, leaving clinicians and users without time to adapt their expectations or update their clinical intuition.</p><p>Because these changes can occur without clear notice, there may be little &#8220;memory&#8221; of how the previous system behaved and no routine surveillance to detect what changed. As a result, decisions may be influenced by a tool that now behaves differently from the one users learned to trust.</p><p>The analogy is not merely poetic. It is structurally precise. In both cases, the change is invisible at the individual level; the harm is distributed across a population. Early detection requires systematic surveillance, and no single clinician can see the patterns within their own practice.</p><p>Algorithmic drift in clinical settings can only be detected by watching a population, which is, by definition, epidemiology: the study of the distribution, causes, and control of disease in populations.</p><div><hr></div><h3>THE EVIDENCE</h3><p>We built a global surveillance system for influenza but have almost nothing for algorithms in healthcare settings.</p><p>The World Health Organization&#8217;s Global Influenza Surveillance and Response System coordinates sentinel sites across 114 countries. It sequences circulating strains continuously. It detects drift in real time. It produces annual vaccine composition recommendations based on that surveillance. 
The entire architecture exists because we recognized after 1918 that a pathogen that shifts and drifts requires ongoing population-level monitoring, not just point-of-care response.</p><p>Now consider what exists for clinical AI. Most health systems have no formal process for detecting performance degradation in deployed algorithms. Vendors are not required to report model updates to clinical users. There is no equivalent of strain sequencing, no systematic comparison of how an algorithm behaves now versus six months ago. When a sepsis model starts missing more cases than before, no alarm goes off. The signal is buried in individual clinical outcomes, invisible without aggregation.</p><p><strong>The gap in numbers:</strong></p><p>The FDA has cleared over 700 AI/ML-based medical devices. Post-market surveillance requirements for algorithmic performance degradation are minimal. There is no mandatory registry of algorithm adverse events. There is no equivalent of MedWatch for clinical AI. We are flying with instruments we do not calibrate.</p><p>This is not an argument against clinical AI, just as influenza surveillance is not an argument against the influenza vaccine. Surveillance is the science that makes the vaccine work, because without it, you cannot know what you are vaccinating against.</p><p><em>The Epidemiology of Algorithms</em> is not a critique of AI in medicine or healthcare in general. It is the science that makes AI in medicine safe.</p><div><hr></div><h3>OPEN QUESTION</h3><p><strong>What would algorithmic surveillance actually look like?</strong></p><p>If we take the influenza analogy seriously, the architecture almost writes itself. Sentinel sites: Hospitals that systematically monitor algorithm performance against outcomes. Strain sequencing: Version-control systems that track changes between model iterations. Signal detection: Statistical methods that distinguish random variation from true performance degradation. 
Governance: Who acts when a signal is detected, and how fast.</p><p>None of this exists at scale for algorithms. Building it is the work of this discipline. In the next issue, I will introduce the framework I have been developing, the five-component architecture for population-level algorithmic surveillance, and why the classical epidemiological triad of agent, host, and environment maps more precisely onto clinical AI than anything currently proposed in health informatics.</p><p>For now, I leave you with the question I want you to carry into your next clinical shift:</p><blockquote><p>&#8220;Do you know if the algorithm you used today is the same algorithm you used six months ago?&#8221;</p></blockquote><p>If you cannot answer that question with certainty, you are practicing in a surveillance gap. That gap now has a name, and closing it is why this newsletter exists.</p><div><hr></div><p>The Epidemiology of Algorithms<br>Anne E. Burnley, MD, MHS, MS<br>Founder &amp; Writer</p><p></p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://newsletter.epidemiologyofalgorithms.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The Epidemiology of Algorithms! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item></channel></rss>