This giant blockquote is from a lengthier story in today’s New York Times about medical imaging radiation overdoses.
These paragraphs are a rigged card for software design problem bingo:
> Normally, the more radiation a CT scan uses, the better the image. But amid concerns that patients are getting more radiation than necessary, the medical community has embraced the idea of using only enough to obtain an image sufficient for diagnosis.
>
> To do that, GE offers a feature on its CT scanner that can automatically adjust the dose according to a patient’s size and body part. It is, a GE manual says, “a technical innovation that significantly reduces radiation dose.”
>
> At Cedars-Sinai and Glendale Adventist, technicians used the automatic feature — rather than a fixed, predetermined radiation level — for their brain perfusion scans.
>
> But a surprise awaited them: when used with certain machine settings that govern image clarity, the automatic feature did not reduce the dose — it raised it.
>
> As a result, patients at Cedars-Sinai received up to eight times as much radiation as necessary, while the 10 overradiated at Glendale received four times as much, state records show.
>
> GE says the hospitals should have known how to safely use the automatic feature. Besides, GE said, the feature had “limited utility” for a perfusion scan because the test targets one specific area of the brain, rather than body parts of varying thickness. In addition, experts say high-clarity images are not needed to track blood flow in the brain.
>
> GE further faulted hospital technologists for failing to notice dosing levels on their treatment screens.
>
> But representatives of both hospitals said GE trainers never fully explained the automatic feature.
>
> In a statement, Cedars-Sinai said that during multiple training visits, GE never mentioned the “counterintuitive” nature of a feature that promises to lower radiation but ends up raising it. The hospital also said user manuals never pointed out that the automatic feature was of limited value for perfusion scans.
>
> A better-designed CT scanner, safety experts say, might have prevented the overdoses by alerting operators, or simply shutting down, when doses reached dangerous levels.
>
> To Mr. Heuser, it is unconscionable that equipment able to deliver such high radiation doses lacks stronger safety features.
>
> “When you are in a car and it backs up, it goes beep, beep, beep,” he said. “If you fill the washing machine up too much, it won’t work. There is no red light that says you are overradiating.”
For quite some time, the software community that works on this sort of thing has focused on making sure that the software itself doesn’t contain defects that kill people, with mixed results: “safe” languages like Ada, formal correctness proofs, and the like are our attempts to take deadly software defects out of the picture.
But what about deadly UX defects? It is generally assumed that software used by professionals, in professional settings, doesn’t need any fancy UX treatment. But what are the issues that caused this problem?
- Users’ mental model doesn’t match the system’s actual model (i.e., the automatic mode should lower the dose, not raise it)
- Manuals/documentation either inadequate or not adequately studied
- Users assume interlocks will prevent dangerous use, even though no such interlocks exist
- Poor feedback from the UI: the dosage level was displayed, but not recognized as dangerous by users
None of these is a traditional software defect. There’s no Ariane 5 16-bit integer overflow, or any of the engineering issues that caused the famous Therac-25 radiation overdose cases. These are all failures of usability.
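For contrast, the Ariane 5 failure really was a plain numeric defect: a 64-bit floating-point velocity value was crammed into a 16-bit signed integer without a range check. A minimal Python sketch of that failure mode (the original code was Ada; this just illustrates the wrap-around):

```python
import ctypes

def to_int16(x: float) -> int:
    """Truncate a float into a 16-bit signed integer with no range
    check, the conversion at the heart of the Ariane 5 failure."""
    return ctypes.c_int16(int(x)).value

# A value inside the 16-bit range converts faithfully...
assert to_int16(1234.5) == 1234

# ...but a value outside [-32768, 32767] -- like the Ariane 5's
# larger horizontal velocity -- silently wraps to nonsense.
print(to_int16(40000.0))  # -25536
```

That bug is mechanically detectable: type systems, range checks, and proofs can catch it. Nothing on the list above yields to those tools.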
As we have more and more complex software in charge of complex or dangerous things, we are going to have to recognize that the users, even though they are highly trained professionals, do not have infinite cognitive capacity, and the interface to software is going to have to do more work to make sure things are being done safely. “Easy to use” is going to have to move from being a strategy to sell iPad apps to the way nuclear power plant control systems are required by law to be designed.
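To make that concrete, here is a minimal, entirely hypothetical sketch of the kind of dose interlock the safety experts quoted above describe — warn the operator at a soft threshold, and refuse outright at a hard one. Every name and number here is invented for illustration; none comes from any real scanner’s software:

```python
# Hypothetical dose interlock -- thresholds and names are invented,
# not taken from any real CT scanner API.

WARN_MGY = 500.0    # above this, require explicit operator confirmation
ABORT_MGY = 1000.0  # hard limit: refuse to scan, full stop

class DoseAborted(Exception):
    """Raised when a scan would exceed a dose limit."""

def check_dose(planned_dose_mgy: float, confirm=lambda msg: False) -> None:
    """Gatekeeper run before every scan.

    Below WARN_MGY the scan proceeds silently; between the two limits
    the operator must explicitly confirm; at or above ABORT_MGY the
    machine refuses -- the 'red light' Mr. Heuser asked for.
    """
    if planned_dose_mgy >= ABORT_MGY:
        raise DoseAborted(f"{planned_dose_mgy} mGy exceeds hard limit")
    if planned_dose_mgy >= WARN_MGY and not confirm(
        f"Planned dose {planned_dose_mgy} mGy is unusually high. Proceed?"
    ):
        raise DoseAborted("operator declined high-dose scan")
```

The logic is trivial; the hard part is the institutional will to ship it, and the UX work of making the confirmation prompt something operators read rather than click through.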
Failure modes are partly an engineering issue, but also partly a usability issue. Reducing the defect rate in the system’s codebase is clearly critical, but designing the system’s interface such that it’s hard to really fuck shit up is now just as important. If I were an Adaptive Path-/Frog-/Ideo-type company, I think I might pass on a few Disney pitches and go after the GE Windfarm Control Software account.