In the early spring of 2009, a team of doctors at the Lucile Packard Children’s Hospital at Stanford University lifted a 2-year-old into an MRI scanner. The boy, whom I’ll call Bryce, looked tiny and forlorn inside the cavernous metal device. The stuffed monkey dangling from the entrance to the scanner did little to cheer up the scene. Bryce couldn’t see it, in any case; he was under general anesthesia, with a tube snaking from his throat to a ventilator beside the scanner. Ten months earlier, Bryce had received a portion of a donor’s liver to replace his own failing organ. For a while, he did well. But his latest lab tests were alarming. Something was going wrong — there was a chance that one or both of the liver’s bile ducts were blocked.
Shreyas Vasanawala, a pediatric radiologist at Packard, didn’t know for sure what was wrong and hoped the MRI would reveal the answer. Vasanawala needed a phenomenally high-resolution scan, but to get it, his young patient would have to remain perfectly still: if Bryce took a single breath, the image would be blurred. That meant deepening the anesthesia enough to stop respiration. A standard MRI would take a full two minutes to capture the image, but if the anesthesiologists shut down Bryce’s breathing for that long, his glitchy liver would be the least of his problems.
However, Vasanawala and one of his colleagues, an electrical engineer named Michael Lustig, were going to use a new and much faster scanning method. Their MRI machine used an experimental algorithm called compressed sensing — a technique that may be the hottest topic in applied math today. In the future, it could transform the way that we look for distant galaxies. For now, it means that Vasanawala and Lustig needed only 40 seconds to gather enough data to produce a crystal-clear image of Bryce’s liver.
Compressed sensing was discovered by chance. In February 2004, Emmanuel Candès was messing around on his computer, looking at an image called the Shepp-Logan Phantom. The image — a standard picture used by computer scientists and engineers to test imaging algorithms — resembles a Close Encounters alien doing a quizzical eyebrow lift. Candès, then a professor at Caltech, now at Stanford, was experimenting with a badly corrupted version of the phantom meant to simulate the noisy, fuzzy images you get when an MRI isn’t given enough time to complete a scan. Candès thought a mathematical technique called l1 minimization might help clean up the streaks a bit. He pressed a key and the algorithm went to work.
Candès expected the phantom on his screen to get slightly cleaner. Instead, it suddenly appeared sharply defined and perfect in every detail — rendered, as though by magic, from the incomplete data. Weird, he thought. Impossible, in fact. “It was as if you gave me the first three digits of a 10-digit bank account number — and then I was able to guess the next seven,” he says. He tried rerunning the experiment on different kinds of phantom images; they resolved perfectly every time.
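The effect Candès saw can be demonstrated on a toy problem. The sketch below is not his experiment — the Shepp-Logan Phantom involves 2-D images and Fourier-domain sampling — but it shows the same core idea: a sparse signal can be recovered exactly from far fewer measurements than its length by l1 minimization. All sizes and parameters here are illustrative choices, and the solver is ISTA (iterative soft-thresholding), one standard algorithm for the l1-regularized least-squares problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: a length-200 signal with only 8 nonzero entries,
# observed through just 80 random linear measurements.
n, m, k = 200, 80, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix

x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], size=k)  # sparse +/-1 spikes

y = A @ x_true  # the "incomplete data": 80 numbers for a 200-sample signal

# ISTA for: minimize 0.5 * ||A x - y||^2 + lam * ||x||_1
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz const. of gradient
x = np.zeros(n)
for _ in range(5000):
    z = x - step * (A.T @ (A @ x - y))                      # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)  # soft threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Despite throwing away 60 percent of the samples, the l1 solution lands on the true sparse signal (up to a small regularization bias), whereas an ordinary least-squares fit would smear the energy across all 200 entries.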
Candès, with the assistance of his postdoc Justin Romberg, came up with what he considered a sketchy and incomplete theory for what he saw on his computer. He then presented it on a blackboard to Terry Tao, a colleague at UCLA. Candès came away from the conversation thinking that Tao was skeptical — the improvement in image clarity was close to impossible, after all. But the next evening, Tao sent Candès a set of notes about the blackboard session. It was the basis of their first paper together. Over the next two years, they would write several more.
That was the beginning of compressed sensing, or CS, the paradigm-busting field in mathematics that’s reshaping the way people work with large data sets. Only six years old, CS has already inspired more than a thousand papers and pulled in millions of dollars in federal grants. In 2006, Candès’ work on the topic was recognized with the $500,000 Alan T. Waterman Award, the highest honor bestowed by the National Science Foundation. It’s not hard to see why. Imagine MRI machines that produce in seconds images that used to take up to an hour, military software that is vastly better at intercepting an adversary’s communications, and sensors that can analyze distant interstellar radio waves. Suddenly, data becomes easier to gather, manipulate, and interpret.
Source: http://www.wired.com/magazine/2010/02/ff_algorithm/ by Jordan Ellenberg