Work Next to AI, Not With It
I’m struck by the hubris of AI prognosticators, a category that perhaps includes me, though at least I’m admitting it…
Some forecasters have a pretty good track record – geopolitical gurus and bridge (the card game) experts are good, as are meteorologists (really). Technology forecasters, on the other hand, are pretty lousy, experts included. MIT’s Technology Review has done a historically bad job of reading the tea leaves and predicting the future of tech. Its authors missed smartphones entirely…
One sliver of the AI future replete with chest-thumping harrumphs is the notion that AI does/will work best in partnership with humans. Maybe, maybe not.
One instance where it currently doesn’t? Radiology, one of many fields where AI disruption is already here.
One analysis compared three conditions:
- Radiologist accuracy without AI
- AI alone
- Radiologist + AI
AI beat the radiologists 65% of the time, but on average the radiologist + AI combo did no better than either of the other two conditions. This despite the radiologists having useful, contextual, patient-specific information that the AI didn’t – e.g., doctor’s notes and patient history.
In theory this is the perfect pairing: a human with unique, special “small data” combined with the massive computing power of AI. But the combo failed. Why?
The failure of the combo on average hides a lot; there was heterogeneity in the data. When the AI was highly confident, the human + AI combo worked as intended. But when the AI offered anything less than a very-high-probability diagnosis, the humans intervened and produced sub-optimal outcomes. They also took longer per patient to reach a worse result.
Human bias intervened. When the machine said “I think it’s this…”, the humans weighted their own opinion too heavily. They also ignored the fact that most of the time their opinion and the machine’s were the same – i.e., highly correlated. That should have increased confidence and shortened decision times, but people seem to exhibit an anti-automation bias; the toy simulation below shows how overriding drags down the average.
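To make the mechanism concrete, here’s a minimal sketch. Every number in it is an invented, illustrative assumption, not a value from the radiology study: the combo matches AI alone as long as the human defers on confident cases, and each override on a low-confidence case trades a slightly better AI read for a slightly worse human one.

```python
# Toy simulation of the override mechanism described above.
# All accuracy and confidence numbers are illustrative assumptions,
# NOT values from the radiology study.
import random

random.seed(0)

P_CONFIDENT = 0.6        # assumed share of cases where the AI is highly confident
AI_ACC_CONFIDENT = 0.95  # assumed AI accuracy on those cases
AI_ACC_UNSURE = 0.75     # assumed AI accuracy on low-confidence cases
HUMAN_ACC = 0.70         # assumed human accuracy on low-confidence cases

def combo_accuracy(n_cases: int, override_rate: float) -> float:
    """Accuracy when the human defers to a confident AI but overrides
    the AI on low-confidence cases at the given rate."""
    correct = 0
    for _ in range(n_cases):
        if random.random() < P_CONFIDENT:
            p = AI_ACC_CONFIDENT   # human accepts the confident AI read
        elif random.random() < override_rate:
            p = HUMAN_ACC          # human substitutes their own read
        else:
            p = AI_ACC_UNSURE      # human lets the unsure AI stand
        correct += random.random() < p
    return correct / n_cases

n = 200_000
print(f"always defer to AI:     {combo_accuracy(n, 0.0):.3f}")  # ~0.87
print(f"override half the time: {combo_accuracy(n, 0.5):.3f}")  # ~0.86
print(f"always override unsure: {combo_accuracy(n, 1.0):.3f}")  # ~0.85
```

Under these assumptions the more the human intervenes, the further the combo slides below AI alone, even though the human genuinely knows things the AI doesn’t.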
The human experts under-respond to information other than their own. Where does this happen in our world?
Selection, for one. I can remember many an instance where good old-fashioned statistical modeling of RFM data was used to select X cases for a mailing, only to be second-guessed by fundraising “experts” who insisted on forcing every $50-100 donor with 0-12 month recency into the select, regardless of the model.
At least the radiologists have a medical degree and a residency focused on nothing but reading scans. The fundraising experts forcing their pet segments into the mailing tend to have the biggest title in the room, but that’s about it.
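For anyone who hasn’t lived this, here’s a hypothetical sketch of the pattern: a toy RFM score drives the select, then the “expert” override forces the pet segment back in. The field names, scoring weights, and cutoffs are all made up for illustration.

```python
# Hypothetical sketch: model-based mail selection, then an "expert"
# override that forces a pet segment in regardless of the model.
import pandas as pd

donors = pd.DataFrame({
    "donor_id":   [1, 2, 3, 4, 5],
    "recency_mo": [2, 30, 6, 11, 48],    # months since last gift
    "frequency":  [8, 1, 4, 3, 1],       # gifts in recent years
    "monetary":   [250, 75, 60, 90, 40], # largest gift, $
})

# Toy RFM score: recent, frequent, high-value donors rank highest.
donors["score"] = (
    donors["frequency"] * 2
    + donors["monetary"] / 50
    - donors["recency_mo"] / 6
)

X = 3  # mailing budget, in pieces
model_select = donors.nlargest(X, "score")

# The override: force in every $50-100 donor with 0-12 month recency,
# whether or not the model ranked them in the top X.
pet_segment = donors[
    donors["monetary"].between(50, 100) & (donors["recency_mo"] <= 12)
]
final_select = (
    pd.concat([model_select, pet_segment])
    .drop_duplicates("donor_id")
)
print(final_select[["donor_id", "score"]])
```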
Kevin
Hi, thanks Kevin – not happy about you throwing experts under the bus!
I still see models that work sometimes, but not all of the time, and sometimes adding a few extra segments into the mix to compare against works just fine or even better. I think the message should be that experts with years of experience, AI, and testing together make the best combination. Smart experts are always willing to learn.
You might like this one – a new study out in Science. College-educated writers who had access to ChatGPT cut writing time by 40% and increased writing quality by 18%: https://www.science.org/doi/10.1126/science.adh2586
Working together not only worked better, but those who used the system during the test were also likely to continue using it after.
Nick, yep, saw that study; thought I wrote about it, but who the hell knows if I did, or just drafted it, or imagined it. You can relate. We did write about a similar study, but with professional writers of various flavors rather than college kids. The results were mixed:
- The average overall liking was 2.5, slightly positive.
- The more creative the writer (e.g., fiction writers and poets), the less they liked it, though the differences were slight.
- Writers were neutral about the idea of embedding this tech into their word processor, though creatives were much more negative on the idea.
- The writers got less positive the more they used it.
- Sentiment analysis of open-ended comments indicated more negative sentiment than positive.
- The negative emotions were dominated by fear, along with sadness and, far less commonly, anger.
- The positive emotion was joy, along with an analytical tone.
We use it, we’re very focused on using it more and more, and we’re convinced it is making us more productive. We’re equally convinced that simply prompting GPT to write a letter for you is a total waste of time. It requires knowledge transfer and fine-tuning – e.g., “small data” to augment the big.
I haven’t read this study. Did they control for gender of the human radiologist?
Hi Kathryn, yes, among a host of other, more interesting possible causal factors.