In the fall of 1999, a young chemical engineer named Todd Zion left his job at Eastman Kodak to enroll in the Ph.D. program at the Massachusetts Institute of Technology. While looking for a subject to research, Zion noticed a grant proposal, never funded, that another graduate student had written on the subject of drug delivery.
One possibility mentioned in the proposal was the development of a kind of insulin that would automatically respond to changes in blood-sugar levels, becoming active only when needed to maintain healthy levels around the clock. If it worked, the sugar-sensitive version could transform the lives of the six million people with diabetes in the United States who use insulin. No longer would they have to test their blood-sugar levels multiple times per day and try to calculate how much insulin to take. The self-regulating insulin would curtail high sugar levels, which raise the risk of long-term complications, and eradicate, or at least reduce, the most dangerous short-term complication: hypoglycemia, when sugar levels fall so low that they can cause confusion, unconsciousness, seizures and even death.
“I thought, Wow, that is a really neat area in which to do research,” Zion told me. Other researchers had been trying for 20 years to make insulin work only when needed. The first to publish a study were the scientists Michael Brownlee and Anthony Cerami, who in 1979 embedded a sugar-encrusted form of insulin in a mesh pouch with a sugar-loving protein called lectin. The idea was that when blood-sugar levels were elevated, the lectin would bind to the sugar in the blood and allow the insulin to leak out; when sugar levels were normal, the lectin would bind to the sugar-encrusted insulin, keeping it inside the pouch. But the system was impractical, because the pouch would need to be implanted under the skin and periodically refilled. And the lectin, derived from the jack-bean plant, was toxic.
In his first three years at M.I.T., Zion used nanoscale molecular engineering to create an insulin-lectin gel that could be injected without the need for an implanted pouch, while also rejiggering the lectin to make it less toxic. “It worked great visually,” he said. When the gel was placed in a sugar solution, “you could see the outer edge getting clear and dissolving . . . almost like an ice cube melting in water.”
The problem was that, no matter how hard he tried to manipulate the gel, the insulin either leaked out too easily even when sugar levels were low, or not easily enough when sugar levels were high.
Zion knew there was another approach: changing the insulin molecule itself, to make it sensitive to blood-sugar levels directly. But, he said, “almost everyone in the field of drug discovery believed that you should never, ever change the insulin molecule. . . . From a regulatory point of view, as soon as you manipulate the molecule, you’ve created a new drug. It doesn’t matter if you’ve attached a single atom to it.” Doing so, he knew, would require years of clinical trials in preparation for a new-drug application submission to the F.D.A. “That’s the only reason people were afraid.”
After three years of struggle and frustration at M.I.T., Zion figured he had no other choice. The big aha moment for him, he said, was when he realized he would have to take the step others feared. “I said the only way for this to be possible would be to chemically engineer the insulin itself. The result, yes, would be a new chemical entity, but it might actually work.”
Zion’s trick was to attach a chemical arm — a short chain of sugars — directly onto the insulin molecule. (Attaching sugar to insulin? “Yes, it is a bit ironic,” he concedes.) The beauty of this approach is that it permits the insulin to latch directly onto the sugar-loving lectin in the gel — but only as needed. When blood-sugar levels are high, the lectin hooks up with any sugar floating freely from the blood, allowing the insulin molecule to do its normal job of connecting with insulin receptors on the surface of cells. (That is how insulin lowers blood-sugar levels, by merging with a cell’s insulin receptor and unlocking the cell membrane to allow sugar in.) When the sugar gets swallowed up by cells, lowering levels in the blood, the only remaining sweet for the lectin to attach to is the altered insulin’s sugar arm. Handcuffed to the lectin, the insulin can’t bind to insulin receptors. And so the insulin stops working to lower blood-sugar levels precisely when it’s no longer needed, because its sugar chain is held hostage by the lectin.
Zion called his new chemical creation smart insulin. In 2003, after testing it in rats, he won M.I.T.’s annual Entrepreneurship Competition. With the $30,000 prize money, he started a company called SmartCells and tried to see if he could make it work well enough to test it in humans.
The two biggest remaining problems were that the lectin-insulin gel was much bulkier than plain insulin alone, and that the lectin continued to produce minor, unwanted side effects. In 2008 Zion solved both when he figured out a way to tweak the sugar arm of his altered insulin molecule so it would bind with the type of lectin receptors already present on every cell in the body. Suddenly, he realized, there was no need to add the plant-based lectin in order for the drug to work.
The final version of his smart insulin was so promising that in December 2010, the drug company Merck, one of the largest in the world, announced it was buying SmartCells and its patents for what could eventually reach more than half a billion dollars, all for a drug that has yet to be taken by a single human being.
Should smart insulin eventually gain F.D.A. approval, its success would be more than a breakthrough in the treatment of diabetes, and certainly much more than a business win for Merck. For the pharmaceutical industry — which during the past two decades has increasingly focused on an automated, high-tech approach to discovering drugs — it would mark a victory for old-fashioned trial-and-error chemistry, the kind of endless tinkering and mucking around in the dark that by now was supposed to be a thing of the past.
On Sept. 25, 1990, James D. Watson, the Nobel Prize-winning co-discoverer of the double-helix structure of DNA, and at the time the director of the National Center for Human Genome Research, wrote a letter to this paper making a prediction: “The ability to sequence DNA quickly and cheaply will also provide the technological basis for a new era in drug development.”
At that moment, the idea that the human genome would lead to a multitude of cures for diseases seemed inevitable and irresistible. DNA is, after all, nature’s instruction booklet for building living things; open that book and read its instructions, the thinking ran, and the botched instructions that result in diseases would be revealed. From there, a logical series of steps would arrive at a cure. Once a malfunctioning gene was isolated, scientists would find the protein coded by that gene. Then they’d use that protein as a target. Finally, they’d run tests of tens of thousands of unique chemical entities that drug companies have stockpiled over the years, to find one that fit the target like a key in a lock, to correct its function.
But this golden road to pharmaceutical riches, known as target-based drug discovery, has often proved to be more of a garden path. The first disappointment has been that most diseases affecting large numbers of people are not caused by a handful of mutations that can be unearthed as easily as digging potatoes in a field. Geneticists have called this the problem of “missing heritability,” because despite what they promised in the 1990s, they have found no single genetic variants that are necessary and sufficient to cause most forms of widespread diseases like diabetes, heart disease, Alzheimer’s or cancer.
The second disappointment is that even when a genetic variation can be plainly linked to a disease, the process for figuring out what to do about it rarely works as efficiently as advertised. Compounds that appear to hit a designated target right between the eyes still often fail to be safe and effective in animal and human studies. Biology is just way too complicated.
“If you read them now, the claims made for genomics in the 1990s sound a bit like predictions made in the 1950s for flying cars and anti-gravity devices,” Jack Scannell, an industry analyst, told me. But rather than speeding drug development, genomics may have slowed it down. So far it has produced fewer returns on greater investments. Scannell and Brian Warrington, who worked for 40 years inventing drugs for pharmaceutical companies, published a grim paper in 2012 that showed the plummeting efficiency of the pharmaceutical industry. They found that for every billion dollars spent on research and development since 1950, the number of new drugs approved has fallen by half roughly every nine years, meaning a total decline by a factor of 80. They called this Eroom’s Law, because it resembled an inversion of Moore’s Law (the observation, first made by the Intel co-founder Gordon E. Moore in 1965, that the number of transistors in an integrated circuit doubles about every two years).
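The arithmetic behind that 80-fold figure is easy to check: a quantity that halves every nine years falls by a factor of 2 raised to the number of nine-year periods elapsed. The sketch below is purely illustrative; the endpoint year and the helper name `productivity_decline` are assumptions, not figures from Scannell and Warrington's paper (their study ran roughly through 2010).

```python
# Eroom's Law as described in the text: new drugs approved per billion
# (inflation-adjusted) R&D dollars halves roughly every nine years.

def productivity_decline(start_year, end_year, halving_period=9.0):
    """Factor by which productivity has fallen between the two years."""
    halvings = (end_year - start_year) / halving_period
    return 2 ** halvings

# Counting from 1950, the decline compounds to a factor on the order of 80
# somewhere in the mid-to-late 2000s (log2(80) * 9 ≈ 57 years).
decline = productivity_decline(1950, 2007)
print(f"{decline:.0f}-fold decline")
```

Note how quickly the exponent dominates: one more nine-year halving period pushes the factor from about 80 toward 160, which is why small changes in the assumed halving time swing the headline number substantially.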
That’s not to say that target-based drug discovery, informed by genomics, hasn’t had its share of spectacular successes. Gleevec, used since 2001 to treat chronic myelogenous leukemia (C.M.L.) and a variety of other cancers, is often pointed to as one of the great gene-to-medicine success stories. Its design followed logically from the identification of an abnormal protein caused by a genetic glitch found in almost every cancer cell of patients with C.M.L.
Many of the drugs developed through target-based discovery, however, work for only single-mutation diseases affecting a tiny number of people. Seventy percent of new drugs approved by the F.D.A. last year were so-called specialty drugs used by no more than 1 percent of the population. The drug Kalydeco, for instance, was approved in 2012 for people with a particular genetic mutation that causes cystic fibrosis. But only about 1,200 people in the United States have the mutation it corrects. For them it can be a lifesaver, but for the tens of millions of people suffering from more widespread diseases, target-based drugs derived from genomics have offered little.
“We still have big public health needs,” said John Jenkins, director of the F.D.A.’s Office of New Drugs. “We’re hoping companies don’t lose track of the broader diseases, like diabetes, as they pursue genomic science and targeted therapies.”
So far, most drug companies have continued to devote a vast majority of their funding to target-based research, even as more traditional methods of drug discovery have proved more productive. A study published last year by David Swinney found that only 17 of 50 novel drugs approved by the F.D.A. between 1999 and 2008 came from target-based research, compared with 28 from what Swinney calls “phenotypic” discovery, made by studying living cells in Petri dishes, animals and humans. Many of the drugs in this latter category — Alamast for allergies, Amitiza for constipation, Abreva for herpes cold sores, Ranexa for angina, Veregen for genital warts and Keppra, Excegran and Inovelon for seizures — were discovered by chemists who didn’t set out knowing what the drugs’ targets were, or even how they worked. In one now well known case of nontargeted research, scientists developed a drug for angina and found that while it wasn’t effective for relieving chest pain, it did cause erections in the study’s male volunteers. The researchers changed course, and Viagra was born.
Swinney, chief executive of the Institute for Rare and Neglected Diseases Drug Discovery, spent the first 25 years of his career as a drug researcher, looking for the kind of targets that genomics was supposed to provide.
“I’ve done an about-face,” said Swinney, who estimates that more than 80 percent of research funding is still spent on target-based approaches. “The target-based research made possible by genomics is cool and fascinating,” he went on. But, he conceded, “you know what? We almost never use this information before we discover a drug. . . . This whole idea is too simplistic for the overall complexity of biology.”
Another veteran chemist, Derek Lowe, worked at Bayer when the genomics revolution was in full flourish. “The whole industry went crazy with it,” said Lowe, who writes In the Pipeline, a blog about the pharmaceutical industry. “Bayer committed half a billion dollars into human genome research, and they got nothing for it. Nothing at all.”
Of course, an overreliance on genomics is not the only factor slowing down the discovery of new drugs, as industry analysts are quick to point out. One challenge is that the industry is the victim of its own previous successes. In order to thrive, it must come up with drugs that work better than blockbusters of the past. After all, old drugs don’t fade away; they just go generic. Scannell and Warrington have dubbed this the “Better Than the Beatles” problem, as if every new song in the recording industry had to be bigger than “Hey Jude” or “I Want to Hold Your Hand.”
At the same time, the demand for proof of safety and efficacy, not only from the F.D.A. but also from trial lawyers and the public at large, is far higher than in years past. The days when drugs like the original insulin could be sold within a year of their discovery by chemists are long gone, and rightly so.
To check the industry’s slide, experts say that drug companies need to begin applying the new molecular engineering tools to old-fashioned methods of discovery. It may sound absurd to describe the newfangled nanoengineering that Zion used to develop smart insulin as “old-fashioned,” but his fundamental approach is one that researchers from the 1950s would recognize. It is inductive: it begins with close observation of chemicals and their behaviors in living systems, moves to tentative hypotheses and then works through a long tinkering process of trial and error to find something that works. The spirit that animates the trial-and-error chemists is an enthusiasm for constructing new drugs piece by molecular piece, like children playing with building blocks.
Brian Warrington, who retired in 2005 as vice president of discovery technology development at GlaxoSmithKline in Britain, says that creating new drugs through trial and error was a thing of joy. “You’re just watching something happening in a chemical reaction,” he told me, “and then realizing, Wait a minute, that went strangely. I wonder why.”
Warrington told me about his late colleague Sir James Black, a Nobel Prize-winning pharmacologist, who was one of the great practitioners of trial-and-error discovery. In the early 1960s, he developed propranolol, the first beta blocker for heart disease, which for a while was the world’s best-selling drug. Then he developed the first H2-receptor antagonist for treating peptic ulcers, cimetidine, which took the top-selling spot from propranolol in the 1970s. “He developed some of the greatest drugs of the 20th century by looking at phenomena and asking the right questions,” Warrington said. “He didn’t need the genome.”
The progress of smart insulin has not been as fast as some might have hoped since it was acquired by Merck. The company declined to let me talk to its scientists, but it announced in May that it planned to begin testing the drug, which it calls L-490, in humans by the end of this year. Assuming a typical six-year timetable from the beginning of the first Phase 1 trial to the completion of a large Phase 3 trial, plus another year for F.D.A. review, it would be 2021 at the earliest before people whose diabetes requires treatment with insulin would be able to get the smart kind — more than two decades after Zion began working on it.
Zion says he is grateful to know that his drug is one step closer to making it out into the world, and not just because it could help millions of people. He also sees it as a validation of the idea that the attempt-and-fail model can produce something marvelous. “It’s really engineering,” Zion said. “You have to become the master of something no one ever knew existed before.”
Dan Hurley is the author of the book “Smarter: The New Science of Building Brain Power.”
A version of this article appears in print on November 16, 2014, on page MM61 of the Sunday Magazine with the headline: Chemical Imbalance.
Source: The New York Times (Nov. 13, 2014).