Friday, February 02, 2018

butterfly effect

The so-called butterfly effect refers to the future’s extreme sensitivity to initial conditions. Tiny variations, which seem dismissible as trivial rounding errors in measurements, can accumulate into massively different future events.
Identical twins with the same observable demographic characteristics, lifestyle, medical care, and genetics will necessarily receive the same predictions from any model, yet can still end up with completely different real outcomes.
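A toy illustration of this sensitivity (not from the article) is the logistic map, a textbook chaotic system: two trajectories that start a hair's breadth apart eventually bear no resemblance to each other.

```python
# Toy demonstration of sensitive dependence on initial conditions
# using the logistic map x_{n+1} = r * x_n * (1 - x_n) with r = 4,
# a textbook chaotic system (illustrative only).
def logistic_trajectory(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)           # baseline measurement
b = logistic_trajectory(0.2 + 1e-10)   # the same measurement with a tiny "rounding error"

# The initial gap of 1e-10 roughly doubles each step and
# eventually swamps the signal entirely.
gaps = [abs(x - y) for x, y in zip(a, b)]
```

After a few dozen iterations the two trajectories are effectively unrelated, even though the starting points agreed to ten decimal places.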
Though no method can precisely predict the date you will die, for example, that level of precision is generally not necessary for predictions to be useful. By reframing complex phenomena in terms of limited multiple-choice questions (e.g., Will you have a heart attack within 10 years? Are you more or less likely than average to end up back in the hospital within 30 days?), predictive algorithms can operate as diagnostic screening tests to stratify patient populations by risk and inform discrete decision making.
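The reframing described above amounts to a simple thresholding step on a model's predicted probabilities. A minimal sketch (the 15% population-average readmission rate and the patient values are hypothetical, chosen only for illustration):

```python
# Illustrative sketch: turning a model's predicted probability into the
# discrete question "more or less likely than average to be readmitted
# within 30 days?" The 0.15 population average is a made-up number.
def stratify(prob_readmission, population_average=0.15):
    if prob_readmission > population_average:
        return "above average"
    return "at or below average"

# Hypothetical model outputs for two patients:
patients = {"A": 0.05, "B": 0.40}
strata = {pid: stratify(p) for pid, p in patients.items()}
```

The continuous prediction is collapsed into a binary answer that can drive a discrete decision, exactly as a screening test would.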

Even a perfectly calibrated prediction model may not translate into better clinical care. An accurate prediction of a patient outcome does not tell us what to do if we want to change that outcome; in fact, we cannot even assume that it is possible to change the predicted outcome.

It is true, for instance, that palliative care consults and norepinephrine infusions are highly predictive of patient death, but it would be irrational to conclude that stopping either will reduce mortality. 

Many such predictions are “highly accurate” mainly for cases whose likely outcome is already obvious to practicing clinicians.


With machine learning situated at the peak of inflated expectations, we can soften a subsequent crash into a “trough of disillusionment”2 by fostering a stronger appreciation of the technology’s capabilities and limitations. Before we hold computerized systems (or humans) to an idealized and unrealizable standard of perfection, let our benchmark be the real-world standards of care, whereby doctors grossly misestimate the positive predictive value of screening tests for rare diagnoses, routinely overestimate patient life expectancy by a factor of 3, and deliver care of widely varied intensity in the last 6 months of life.
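The misestimated positive predictive value follows directly from Bayes’ rule. With illustrative numbers (a 1-in-1,000 diagnosis and a test with 90% sensitivity and 95% specificity, all assumed here, not taken from the article), even a seemingly strong screening test yields a PPV under 2%:

```python
# Positive predictive value via Bayes' rule.
# The prevalence, sensitivity, and specificity below are illustrative
# assumptions, not figures from the article.
def positive_predictive_value(prevalence, sensitivity, specificity):
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# A rare diagnosis (1 in 1,000) with a seemingly strong test:
ppv = positive_predictive_value(0.001, 0.90, 0.95)  # ≈ 0.018, i.e. about 2%
```

Most clinicians, asked this question informally, guess a far higher number, which is exactly the misestimation the paragraph above describes.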

Whether such artificial-intelligence systems are “smarter” than human practitioners makes for a stimulating debate but is largely irrelevant. Combining machine-learning software with the best human clinician “hardware” will permit delivery of care that outperforms what either can do alone. Let’s move past the hype cycle and on to the “slope of enlightenment,”2 where we use every information and data resource to consistently improve our collective health.

mostly from  http://www.nejm.org.ezp-prod1.hul.harvard.edu/doi/10.1056/NEJMp1702071
