Renaissance Europe, for all its splendour, did not offer many scholarly opportunities for women, unless they chose to join nunneries. A notable exception was Italy, which espoused a more enlightened view that allowed a few women to flourish in the arts, medicine, literature, and mathematics. Among the most notable of mathematically minded women of the era was Maria Gaetana Agnesi.
The eldest of 21 children – her father married three times – Agnesi was born in 1718. (May 16, 2018 marked the 300th anniversary of her birth.) She was very much a child prodigy, known in her family as “the Walking Polyglot” because she could speak French, Italian, Greek, Hebrew, Spanish, German, and Latin by the time she was 13. By her late teens, she had also mastered mathematics.
Agnesi had the advantage of a wealthy upbringing; the family fortune came from the silk trade. She also had a highly supportive father, who hired the very best tutors for his talented elder daughter. Unfortunately for the shy, retiring Agnesi, he also insisted she participate in regular intellectual “salons” he hosted for great thinkers hailing from all over Europe. She was expected to recite long speeches from memory in Latin or participate in discussions about philosophy or science with men who made it their life’s work. Her younger sister, a brilliant harpsichordist and composer, was also tapped to impress visitors with her extraordinary abilities.
In the summer of 1727, a particularly noble gathering was held in the garden of the Palazzo Agnesi. Maria Gaetana, aged nine, declaimed from memory a long Latin oration against the rooted prejudice that women should not be allowed to study and practice the fine arts and the sciences. Among those in attendance were senators and magistrates. The child’s remarkable performance caused much enthusiasm among the guests, who decided to publish in her honour a pamphlet that included the oration and a series of poetic compositions in various meters and languages. The latter are, in general, as nebulous and pompous as most of the Arcadian poetry of the time; in contrast, the oration stands out as a clear and effective defence of the right of women to the pursuit of any kind of knowledge.
During the 1730s, Maria Gaetana debated topics in natural philosophy and mathematics in a series of disputes with her father’s guests. Manuscript material held at the Biblioteca Ambrosiana tells us about Agnesi’s cursus studiorum in those years: lists of Latin terms and their Greek and Hebrew translations; a Latin pamphlet on mythology and its Greek translation; a Latin text on the life of Alexander the Great translated into Italian, French, German and Greek.
In 1738, at the age of twenty, Maria Agnesi completed her studies by publishing a list of philosophical theses, Propositiones Philosophicae, most of which she had defended in the disputes held at her father’s palazzo.
By the time Agnesi’s Propositiones Philosophicae went to press in 1738, Milanese salon culture had entered a period of stagnation that would last for nearly two decades. This culture relied essentially on a few families and on small and ephemeral private academies. From 1734, when Lombardy became involved in the War of the Polish Succession, and throughout the War of the Austrian Succession (1740–1748), many salons and academies interrupted their scientific activities.
Yet in 1739 the Palazzo Agnesi was still at the centre of Milanese social life, thanks to the brilliant performances of the filosofessa. Agnesi was requested by her “most loving father” to attend an increasing number of “salons”. One of these was particularly remarkable. The heir to the throne of Poland had been visiting Milan and was invited to attend events at the palaces of the great patrician families: the Borromeo, the Simonetta and the Pallavicini. On a December evening, the prince, “followed by a number of the most qualified and erudite nobles”, visited the Palazzo Agnesi. Pietro received them “with great joy”; the palace was adorned with plentiful decorations and lights. The structure of the gathering was familiar: Maria Gaetana debated the guests on topics in natural philosophy (including the explanation of the tides, for which she referred to Newton). A report of the evening appeared in the pages of the Gazzetta di Milano.
A few months earlier, in July 1739, Charles de Brosses had attended a similar meeting at Palazzo Agnesi. There he found some thirty people from across Europe in a circle around Agnesi, who sat on a sofa awaiting questions and challenges. In his rusty Latin, the Frenchman debated with Agnesi on subjects of his own choosing – the relation between soul and body, perception, the propagation of light and the nature of colours.
De Brosses greatly admired her intellectual prowess; he described Agnesi as “something more stupendous than the cathedral of Milan”, and expressed his horror upon learning that she wished to become a nun. Antonio Frisi effectively describes Pietro Agnesi’s reaction to this request: “It was as if he had been struck by lightning.” Frisi refers to long discussions and negotiations between father and daughter. Eventually, Agnesi declared herself convinced that “God had destined her to live in the world” and to assist and relieve “suffering humanity”. She agreed to maintain her lay status, but only on certain conditions that would make her life an unusually private one. Occasionally, to please her father, she would still participate in the “salons” at the palazzo. But her glittering public career was at an end.
During the same period, Agnesi increasingly turned toward mathematics, “the only province of the literary world where peace reigns.” As with so many other academic pursuits, she took to it immediately. She studied amid globes and mathematical instruments, ploughing through calculus before anyone else in Milan was studying it.
Largely self-educated, Agnesi had the good fortune to find a mathematical mentor in a monk named Ramiro Rampinelli, a frequent visitor to the Agnesi home, who directed her study of calculus. He also encouraged her to write a seminal mathematics textbook, Analytical Institutions, and through his influence, she was able to gain the input of Jacopo Riccati, one of the leading Italian mathematicians of the day, while writing the manuscript. She revised the draft text to incorporate Riccati’s comments.
Perhaps Agnesi began her book project as a way to pass her knowledge on to her younger siblings. Or maybe she realized how annoying it was to have mathematics instruction siloed into individual branches and one-off books so that getting an education required hunting down a whole collection of resources and hiring a tutor to fill in the gaps. Whatever the case, Agnesi saw a need for a unified textbook covering algebra, geometry, and calculus, so she wrote one.
Agnesi wrote the book in Tuscan, the dialect that would become modern Italian, instead of her own Milanese. Because she chose Italian over Latin — the language of scholars and one she knew well — it appears the text was aimed at a school-age population from the very beginning. Analytical Institutions would provide generations of Italian students with a solid and well-rounded mathematics education.
As was Agnesi’s style, when she decided to take on a project, she went big. In 1748, Agnesi published a two-volume, 1,020-page text called Instituzioni Analitiche (Analytical Institutions), believed to be the first mathematics book published by a woman. Thanks to her father’s wealth, Agnesi arranged for a private printing of the book, ensuring she could oversee the book’s typesetting and verify that her formulas were accurately represented. If a particularly unwieldy equation ran past the bottom of the page, it was printed on a long sheet of paper that was folded up and tucked into the regular-size pages.
Volume one covered arithmetic, geometry, trigonometry, analytic geometry and calculus. Volume two included discussions of infinite series and differential equations. In the preface, Agnesi paid tribute to her monkish mentor, declaring that without Rampinelli’s help, “I should have become altogether entangled in the great labyrinth of insuperable difficulty… to him I owe all advances that my small talent has sufficed to make.”
In England, John Colson, a professor at Cambridge, heard about the book and the impact it was making abroad, and felt that British students urgently needed access to the same information. Colson was getting on in age, so he scrambled to bone up on his Italian in order to translate Agnesi’s text. He hadn’t yet published the translated manuscript when he died in 1760. The work was finally released in 1801 in English, thanks to a vicar who edited and shepherded it through the publication process.
More than 250 years later, Agnesi’s name continues to appear in calculus textbooks: she lends it to a bell-shaped curve that rises and falls like a gentle hill. She wasn’t the first to study the curve, although it was assumed she was at the time; mathematics historians have since found earlier treatments of it. The “witch of Agnesi”, as the curve is called, is actually the product of a mistranslation. In Instituzioni Analitiche, Agnesi calls her cubic curve versiera, which meant “turning in every direction”. Colson mistook it for avversiera, or “witch”, and the name stuck.
Among other phenomena, this curve describes the response of a driven oscillator near resonance; the shape of spectral lines in optical and X-ray spectra; and the power dissipated in resonant circuits. Today, Agnesi’s curve is used primarily as a modelling and statistical tool. Some computer models for weather and atmospheric conditions, for example, use Agnesi’s curve to model topographic peaks of terrain. It can also be used as a distribution model, substituting for the standard bell curve in statistics: the bell curve can be difficult to integrate over a specified range, whereas the Witch of Agnesi’s algebraic expression has a simple closed-form antiderivative, making it far easier to integrate.
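That ease of integration is easy to check numerically. Here is a minimal Python sketch; the function names and the choice of a = 1 for the generating circle’s radius are illustrative, not from any particular textbook:

```python
import math

def witch(x, a=1.0):
    """Witch of Agnesi for a generating circle of radius a:
    y = 8a^3 / (x^2 + 4a^2)."""
    return 8 * a**3 / (x**2 + 4 * a**2)

def witch_integral(x0, x1, a=1.0):
    """Exact integral over [x0, x1]: the antiderivative is
    4a^2 * atan(x / (2a)), which is what makes the curve so much
    easier to integrate than the Gaussian bell curve."""
    F = lambda x: 4 * a**2 * math.atan(x / (2 * a))
    return F(x1) - F(x0)

def trapezoid(f, x0, x1, n=100_000):
    """Plain trapezoid rule, used only to check the closed form."""
    h = (x1 - x0) / n
    total = 0.5 * (f(x0) + f(x1))
    for i in range(1, n):
        total += f(x0 + i * h)
    return total * h

exact = witch_integral(-5, 5)
approx = trapezoid(witch, -5, 5)
print(exact, approx)  # the two values agree to many decimal places
```

Over the whole real line the integral tends to 4πa², so dividing the curve by that constant turns it into a probability density: the Cauchy (Lorentzian) distribution that underlies the spectral-line and resonance applications.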
Most biographies, while admiring, feel compelled to note that Agnesi’s seminal tome contained “no original mathematics”. Her accomplishment was noteworthy in part because Agnesi’s gift for languages enabled her to read mathematical papers from around the world and synthesize those works in a single text. Notably, Analytical Institutions was the first tome discussing calculus that included the very different methods developed by co-inventors Isaac Newton and Gottfried Wilhelm Leibniz.
Even before publishing her textbook, Agnesi had been invited to join a number of learned academies in Italy, including the Istituto delle Scienze in Bologna. After its publication in 1748, she became famous. Letters of congratulations were sent by numerous personalities, including Laura Bassi (also a member of the Academy of Sciences of Bologna), Jacopo Riccati, Giovanni Poleni, Etienne de Montigny (who read and commented very favourably on the book on behalf of the Académie Royale des Sciences) and the plenipotentiary minister Gian Luca Pallavicini, writing on behalf of the Empress Maria Theresa, who also sent her a diamond ring and a jewel-encrusted box.
“For if at any time there can be an excuse for the rashness of a Woman who ventures to aspire to the subtleties of a science, which knows no bounds, not even those of infinity itself, it certainly should be at this glorious period, in which a Woman reigns…” — Maria Gaetana Agnesi’s dedication of her book Analytical Institutions to Maria Theresa, Holy Roman Empress, in 1748.
Father François Jacquier – a protégé of Benedict XIV, a professor of physics at La Sapienza University, and co-author of the translation of the Principia that legitimated Newton’s natural philosophy in official Catholic culture – wrote from Rome. The pope himself sent Agnesi a personal letter of congratulations that showed some knowledge of the contents of her textbook. He also recommended that the University of Bologna appoint her lecturer in mathematics. Agnesi was then approached by the president of the Academy of Bologna and three other professors of the Academy and invited to accept the chair of mathematics at the University of Bologna. While her name remained on the rolls of the university for 45 years, she never went to Bologna.
In 1752, when Agnesi was thirty-four, her father died and she was finally able to claim her freedom. She gave up mathematics and her other scholarly pursuits in order to spend the rest of her life serving the poor, donating her entire inheritance to the cause. She died a pauper in 1799, in one of the houses for the poor and sick she managed, having given away everything she owned.
Story from Headstrong – 52 women who changed science and the world, by Rachel Swaby, American Physical Society and The World of Maria Gaetana Agnesi, Mathematician of God by historian Massimo Mazzotti.
Lucky I know lots of stories! 🙂 How about a story about Ada Lovelace?
Ada Lovelace (née Augusta Byron) was given a famous name before she made her own. Her father was Lord Byron, the bad boy of English Romantic poetry, whose epic mood swings could be topped only by his string of scandalous affairs — affairs with women, men, and his half-sister.
True to character, he was hardly an exemplary father. The first words he spoke to his newly born daughter were, “Oh! What an implement of torture have I acquired in you!”
According to the book Lady Byron and Her Daughters by Julia Markus, less than a month after the birth of their daughter, Lord Byron informed his wife of his intention to continue an affair with a stage actress and three days later wrote Lady Byron telling her to find a convenient day to leave their home. “The child will of course accompany you,” he added. Soon after, Lord Byron left England and never saw his daughter again. He died when Ada Lovelace was 8.
However brief their time in each other’s company, Lord Byron was ever-present in Lovelace’s upbringing — as a model of what not to be. Lady Byron, herself a mathematical whiz called “Princess of Parallelograms” by Lord Byron, believed a rigorous course of study rooted in logic and reason would enable her daughter to avoid the romantic ideals and moody nature of her father. From the age of 4, Ada Lovelace was tutored in mathematics and science, an unusual course of study for a woman in 19th-century England. When Ada became sick with the measles, she was bedridden, only permitted to rise to a sitting position for thirty minutes a day. Any impulsive behaviour was systematically ironed out.
It may have been a strict upbringing, but Lady Byron did provide her daughter with a solid education — one that would pay off when Lovelace was introduced to the mathematician Charles Babbage. The meeting occurred in the middle of her “season” in London, that time when noblewomen of a certain age were paraded around to attract potential suitors. Babbage was forty-one when he made Lovelace’s acquaintance in 1833. They hit it off. And then he extended the same offer to her that he had to so many: come by to see my Difference Engine.
Babbage’s Difference Engine was a two-ton, hand-cranked calculator with four thousand separate parts designed to expedite time-consuming mathematical tasks. Lovelace was immediately drawn to the machine and its creator. She would find a way to work with Babbage. She would.
Her first attempt was in the context of education. Lovelace wanted tutoring in math, and in 1839, she asked Babbage to take her on as his student. The two corresponded, but Babbage didn’t bite. He was too busy with his own projects. He was, after all, dreaming up machines capable of streamlining industry, automating manual processes, and freeing up workers tied to mindless tasks.
Lovelace’s mother may have tried to purge her of her father’s influence, but as she reached adulthood, her Byron side started to emerge. Lovelace experienced stretches of depression and then fits of elation. She would fly from frenzied hours of harp practice to the concentrated study of biquadratic equations. Over time, she shook off the behavioural constraints imposed by her mother, and gave herself over to whatever pleased her. All the while, she produced a steady stream of letters. A playfulness emerged. To Babbage, she signed her letters, “Your Fairy”.
Meanwhile, Babbage began spreading the word of his Analytical Engine, another project of his—a programmable beast of a machine, rigged with thousands of stacked and rotating cogwheels. The machine was still theoretical, but its planned capabilities far exceeded those of any existing calculator, including Babbage’s own Difference Engine. In a series of lectures delivered to an audience of prominent philosophers and scientists in Turin, Italy, Babbage unveiled his visionary idea. He convinced an Italian engineer in attendance to document the talks. In 1842, the resulting article came out in a Swiss journal, published in French.
A decade after their first meeting, Lovelace remained a believer in Babbage’s ideas. With this Swiss publication, she saw her opening to offer support. Babbage’s Analytical Engine deserved a massive audience, and Lovelace knew she could get it in front of more eyeballs by translating the article into English.
Lovelace’s next step was her most significant. She took the base text from the article — some eight thousand words — and annotated it, gracefully comparing the Analytical Engine to its antecedents and explaining its place in the future. If other machines could calculate, reflecting the intelligence of their owners, the Analytical Engine would amplify its owner’s knowledge, able to store data and programs that could process it. Lovelace pointed out that getting the most out of the Analytical Engine meant designing instructions tailored to the owner’s interests. Programming the thing would go a long way. She also saw the possibility for it to process more than numbers, suggesting “the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.”
Reining in easily excitable imaginations, Lovelace also explained the Engine’s limitations (“It can follow analysis; but it has no power of anticipating any analytical relations or truths”) and illustrated its strengths (“the Analytical Engine weaves algebraical patterns just as the Jacquard-loom weaves flowers and leaves”).
The most extraordinary of her annotations was Lovelace’s so-called Note G. In it, she explained how a punch-card-based algorithm could return a scrolling sequence of special rational numbers, called Bernoulli numbers. Lovelace’s explanation of how to tell the machine to return Bernoulli numbers is considered the world’s first computer program. What began as a simple translation, as one Babbage scholar points out, became “the most important paper in the history of digital computing before modern times.”
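Note G itself is a table of operations for the Engine’s mill and store, not modern code. Purely as an illustration of the numbers her program was designed to produce — using the standard modern recurrence rather than Lovelace’s own derivation — the Bernoulli numbers can be generated in a few lines of Python:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return the Bernoulli numbers B_0 .. B_n as exact fractions,
    via the recurrence sum_{k=0}^{m} C(m+1, k) * B_k = 0 for m >= 1,
    in the convention with B_1 = -1/2."""
    B = [Fraction(1)]                # B_0 = 1
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))      # solve the recurrence for B_m
    return B

print(bernoulli(8))
# B_0 = 1, B_1 = -1/2, B_2 = 1/6, B_3 = 0, B_4 = -1/30, ...
```

One caveat when comparing with Note G: indexing conventions for the Bernoulli numbers vary, and Lovelace’s own numbering differs from the modern one, so her labels do not line up one-for-one with the subscripts above.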
Babbage corresponded with Lovelace throughout the annotation process. Lovelace sent Babbage her commentary for feedback, and where she needed help and clarification, he offered it. Scholars differ on the degree of influence they believe Babbage had on Lovelace’s notes. Some believe that his mind was behind her words. Others, like journalist Suw Charman-Anderson, call her “[not] the first woman [computer programmer]. The first person.”
Lovelace guarded her work, and sometimes fiercely. To one of Babbage’s edits, she replied firmly, “I am much annoyed at your having altered my Note… I cannot endure another person to meddle with my sentences.” She also possessed a strong confidence in the range of her own abilities. In one letter, she confided, “That brain of mine is something more than merely mortal… Before ten years are out, the Devil’s in it if I haven’t sucked out some of the lifeblood from the mysteries of the universe, in a way that no purely mortal lips or brains could do.”
For what it’s worth, Babbage himself was effusive about her contributions. “All this was impossible for you to know by intuition and the more I read your notes the more surprised I am at them and regret not having earlier explored so rich a vein of the noblest metal.”
Lovelace’s ideas about computing were so far ahead of their time that it took nearly a century for technology to catch up. While Lovelace’s notes on Babbage’s Analytical Engine gained little attention at the time they were originally published in 1843, they found a much wider audience when republished in B.V. Bowden’s 1953 book Faster Than Thought: A Symposium on Digital Computing Machines. As the field of computer science dawned in the 1950s, Lovelace gained a new following in the digital age.
During the 1970s the US Department of Defense developed a high-order computer programming language to supersede the hundreds of different ones then in use by the military. When US Navy Commander Jack Cooper suggested naming the new language “Ada” in honour of Lovelace in 1979, the proposal was unanimously approved. Ada is still used around the world today in the operation of real-time systems in the aviation, health care, transportation, financial, infrastructure and space industries.
Ada Lovelace Day (second Tuesday of October) celebrates the extraordinary achievements of women in science, technology, engineering, and math. The “Ada Lovelace Edit-a-thon” is an annual event aimed at beefing up online entries for women in science whose accomplishments are unsung or misattributed. When her name is mentioned today, it’s more than a tip of the hat; it’s a call to arms.
Story from Headstrong – 52 women who changed science and the world, by Rachel Swaby.
On any list of history’s great mathematicians who were ignored or underappreciated simply because they were women, you’ll find the name of Emmy Noether. Despite the barricades erected by 19th century antediluvian attitudes, she managed to establish herself as one of Germany’s premier mathematicians. She made significant contributions to various math specialties, including advanced forms of algebra. And in 1918, she published a theorem that provided the foundation for 20th century physicists’ understanding of reality. She showed that symmetries in nature implied the conservation laws that physicists had discovered without really understanding.
Joule’s conservation of energy, it turns out, is a requirement of time symmetry — the fact that no point in time differs from any other. Similarly, conservation of momentum is required if space is symmetric, that is, moving to a different point in space changes nothing about anything else. And if all directions in space are similarly equivalent — rotational symmetry — then the law of conservation of angular momentum is assured and figure skating remains a legitimate Olympic sport. Decades after she died in 1935, physicists are still attempting to exploit Noether’s insight to gain a deeper understanding of the symmetries underlying the laws of the cosmos.
Yay, it’s story time!
Albert Einstein was in over his head. He had worked out his general theory of relativity, but he was struggling with the mathematics needed to express it. So Einstein pulled in a team of experts from the University of Göttingen to help him formulate the concepts. The team was led by David Hilbert and Felix Klein, who were held in extremely high regard for their contributions to mathematical invariants. But their legacy, in part, is the community of scholars they fostered at Göttingen, who helped the university grow into one of the world’s most respected mathematics institutions. They scouted talent. For the Einstein project, Emmy Noether was their draft pick.
Noether had been making a name for herself steadily. In the eight years prior, she worked at the University of Erlangen without a salary or a job title. By the time she left for Göttingen, she had published half a dozen or so papers, lectured abroad, taken on PhD students, and filled in as a lecturer for her father, Max Noether, who was an Erlangen mathematics professor suffering from deteriorating health.
At the time, Noether’s specialty was invariants, or the unchangeable elements that remain constant throughout transformations like rotation or reflection. For the general theory of relativity, her knowledge base was crucial. Those interlinked equations that Einstein needed? Noether helped create them. Her formulas were elegant, and her thought process and imagination enlightening. Einstein thought highly of her work, writing, “Frl. Noether is continually advising me in my projects and…it is really through her that I have become competent in the subject.”
It didn’t take long for Noether’s closest colleagues to realize that she was a mathematical force, someone of extraordinary value who should be kept around with a faculty position. However, Noether faced sharp opposition. Many of the people who supported the push to make her a lecturer also believed that she was a special case and that, in general, women shouldn’t be allowed to teach in universities. The Prussian ministry of religion and education, whose approval the university needed, shut down her appointment: “She won’t be allowed to become a lecturer at Göttingen, Frankfurt, or anywhere else.”
The shifting political landscape finally cracked open the stubborn set of regulations governing women in academia. When Germany was defeated in World War I, socialists took over and gave women the right to vote. There was still a movement internally to get Noether on staff, and Einstein offered to advocate for her. “On receiving the new work from Fräulein Noether, I again find it a great injustice that she cannot lecture officially,” he wrote. Though Noether had been teaching, on paper her classes were David Hilbert’s. Finally, Noether was allowed a real position at the university with a title that sounded like fiction. As the “unofficial, extraordinary professor,” Emmy Noether would receive no pay. (Her colleagues joked about the title, saying “an extraordinary professor knows nothing ordinary, and an ordinary professor knows nothing extraordinary.”) When she finally did receive a salary, she was Göttingen’s lowest-paid faculty member.
Pay or no pay, at Göttingen she thrived. Here’s how deeply one line of study, now called Noether’s theorem, influenced physics, according to a physicist quoted in the New York Times: “You can make a strong case that her theorem is the backbone on which all of modern physics is built.” And the dent she made in mathematics? She was a founder of abstract algebra. In one paper, published in 1921 and titled “Theory of Ideals in Rings,” Noether dusted her work free of numbers, formulas, and concrete examples. Instead she compared concepts, which, the science writer Sharon Bertsch McGrayne explains, “is as if she were describing and comparing the characteristics of buildings—tallness, solidity, usefulness, size—without ever mentioning buildings themselves.” By zooming way, way out, Noether noticed connections between concepts that scientists and mathematicians hadn’t previously realized were related, like time and conservation of energy.
Noether would get so excited discussing math that neither a dropped piece of food at lunch nor a tress of hair sprung from her bun would slow her down for a second. She spoke loudly and exuberantly, and like Einstein was interested in appearance only as it related to comfort. Einstein loved his gray cotton sweatshirts when wool ones were the fashion; Noether wore long, loose dresses, and cut her hair short before it was in style. For Einstein, we call these the traits of an absentminded genius. For Noether, there was a double standard—her weight and appearance became the subject of persistent teasing and chatter behind her back. Like the trivial annoyances of title, pay, and politics, the comments didn’t bother Noether. When students tried to replace hairpins that had come loose and to straighten her blouse during a break in a particularly passionate lecture, she shooed them away. Hairstyles and clothes would change, but for Noether, math was her invariant.
With a mind working as rapidly as hers, it was a challenge for even Noether to keep up with her own thoughts. As she worked out an idea in front of the class, the blackboard would be filled up and cleared and filled up and cleared in rapid succession. When she got stuck on a new idea, students recalled her hurling the chalk to the floor and stomping on it, particles rising around her like dust at a demolition. Effortlessly, she could redo the problem in a more traditional way.
Social and generous in sharing her ideas, Noether sparked many, many important papers that were published without her byline but with her blessing. In fact, whole chunks of the second edition of the textbook Modern Algebra can be traced back to her influence.
Politics in Germany affected her career again. Though Noether had established herself as one of the greatest mathematical minds of the twentieth century, the Nazis saw only her leftist political leanings and her Jewish ancestry. In May 1933, Noether was one of the first Jewish professors fired at Göttingen. Even in the face of blatant discrimination, perhaps naively, the math came first. When she could no longer teach at the university, Noether tutored students illegally from her modest apartment, including Nazis who showed up in full military gear. It wasn’t that she agreed with what was happening, but she brushed it aside for the dedicated student. “Her heart knew no malice,” remembered a friend and colleague. “She did not believe in evil — indeed it never entered her mind that it could play a role among men.”
For her generosity, Noether’s friends were wholly dedicated to her. Understanding that staying in Germany would put her in serious danger, in 1933 her friends arranged for Noether to take a position at Bryn Mawr College in the United States. It was meant to be a temporary post until she could land somewhere more prestigious. But just two years after she arrived, Noether died while recovering from a surgery on an ovarian cyst. She was fifty-three. Following her death, Einstein wrote a letter to the New York Times. “Fräulein Noether was the most significant creative mathematical genius thus far produced since the higher education of women began.” Today, some scientists believe her contributions, long hidden beneath the bylines and titles of others, outshine even the accomplishments of the letter’s writer.
Physicists tend to know Noether’s work primarily through her 1918 theorem. Because their work relies on symmetry and conservation laws, nearly every modern physicist uses Noether’s theorem. It’s a thread woven into the fabric of the science, part of the whole cloth. Every time scientists use a symmetry or a conservation law, from the quantum physics of atoms to the flow of matter on the scale of the cosmos, Noether’s theorem is present. Noetherian symmetries answer questions like these: If you perform an experiment at different times or in different places, what changes and what stays the same? Can you rotate your experimental setup? Which properties of particles can change, and which are inviolable?
Conservation of energy comes from time-shift symmetry: You can repeat an experiment at different times, and the result is the same. Conservation of momentum comes from space-shift symmetry: You can perform the same experiment in different places, and it comes out with the same results. Conservation of angular momentum, which when combined with the conservation of energy under the force of gravity explains the Earth’s motion around the sun, comes from symmetry under rotations. And the list goes on.
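In modern textbook form (not Noether’s original 1918 notation), the pattern behind all of these examples is a single statement about the Lagrangian:

```latex
% Noether's theorem, textbook form:
% if the Lagrangian is invariant under a continuous transformation,
\delta L = 0 \quad \text{under} \quad q^i \to q^i + \epsilon\,\delta q^i ,
% then the associated charge
Q \;=\; \sum_i \frac{\partial L}{\partial \dot q^i}\,\delta q^i
% is conserved along solutions of the equations of motion:
\frac{dQ}{dt} = 0 .
```

Time translations yield conservation of energy, spatial translations conservation of momentum, and rotations conservation of angular momentum: exactly the pairings listed in the stories.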
The greatest success of Noether’s theorem came with quantum physics, and especially the particle physics revolution that rose after Noether’s death. Many physicists, inspired by Noether’s theorem and the success of Einstein’s general theory of relativity, looked at geometrical descriptions and mathematical symmetries to describe the new types of particles they were discovering.
Emmy Noether’s theorem is so vital to physics that she deserves to be as well known as Einstein. – Brian Greene
Noether’s theorem to me is as important a theorem in our understanding of the world as the Pythagorean theorem. – Christopher Hill
Mathematicians are familiar with a variety of Noether theorems, Noetherian rings, Noether groups, Noether equations, Noether modules, and more. Over the course of her career, Noether developed much of modern abstract algebra: the grammar and the syntax of math, letting us say what we need to in math and science. She also contributed to the theory of groups, which is another way to treat symmetries; this work has influenced the mathematical side of quantum mechanics and superstring theory.
Story from Headstrong: 52 Women Who Changed Science and the World, by Rachel Swaby, and from Symmetry Magazine (Fermilab/SLAC National Accelerator Laboratory).
Three statisticians go out hunting together. After a while they spot a rabbit. The first statistician takes aim and overshoots. The second aims and undershoots. The third shouts: “We got him!”
After a talking sheepdog gets all the sheep in the pen, he reports back to the farmer: “All 40 accounted for.” “But I only count 36 sheep,” says the farmer. “I know,” says the sheepdog, “But you asked me to round them up.”
I hired an odd-job person to do 8 jobs for me, but when I got back, she’d only done jobs 1, 3, 5, and 7.
Last night I dreamed that I was weightless… I was like, 0mg!
Did you hear about the mathematician who was afraid of negative numbers? He’d stop at nothing to avoid them.
Did you hear about the Improper Fractions shop? It’s open 24/7.
What’s the difference between a pupil studying exponentials and a lumberjack? Nothing, they both involve moving logs around.
What’s the difference between an angle measurer and the President of the Agriculturists’ Union? Nothing, they’re both pro-tractors.
What’s the difference between 0.9 recurring and 1? Nothing.
Why did the Viking fail the graph question? He forgot to label his axes.
How many numbers are there between 1 and 10 inclusive? Five, because 1, 3, 5, 7 and 9 aren’t even numbers.
What do you say to a mathematical cat who’s stuck in a (geome-)tree? Compass.
Why did you divide sin by tan? Just cos.
Why is 6 afraid of 7? Because 7 8 9.
How does a ghost solve a quadratic equation? By completing the scare.
In the Nobel Museum in Stockholm, there is a bilateral carousel, similar to what dry cleaners use for garments, with the biographies of all Nobel Prize Laureates. In keeping with the garment carousel theme, originally the biographic sheets were going to be shaped like tuxedos for men and evening gowns for women. That idea was soon dropped as it would have been too much of a reminder of how few women have been awarded the Nobel Prize.
Maybe the committee needs to take a lesson from Ed Yong, who bravely addressed his own unconscious contribution to the gender imbalance in science reporting. Did you know that the committees responsible for choosing prize recipients do so under strict rules of secrecy, and originally the proceedings were meant to be kept private forever? Now, details of the process for each round of consideration are kept secret for ‘only’ 50 years, still long enough to ensure that committee members are not asked any embarrassing questions. Declassified documents have occasionally shown that the laureate chosen was merely the best of a bad lot.
In total, 48 women were awarded the Nobel Prize between 1901 and 2017. (NB: You don’t ‘win’ the Nobel Prize; you are awarded the Nobel Prize, and whether you accept it or not, you remain a Nobel Prize Laureate.) 2009 was a really good year: five women were awarded the prize in four different categories, and for the first time (and only time to date) two women were awarded the Nobel Prize in Physiology or Medicine.
17 women scientists have been awarded the Nobel Prize. Few people would know their names, apart perhaps from Marie Curie, who was awarded the Nobel Prize twice, for Physics in 1903 and for Chemistry in 1911; to this day, she remains the only person to have received Nobel Prizes in two different sciences.
Here are their amazing stories of success, often against incredible odds.
Maria Goeppert Mayer (Nobel Prize in Physics 1963)*
Maria Mayer was poised to become the seventh generation of professors in her family. When she was twenty-three, she left Germany. She thought her chances of snagging a research position at a university were better in the United States. But when Mayer landed in Baltimore, Maryland, in 1930, she was surprised to find that the professional reception was cold.
Mayer had been trained with a group in Göttingen, Germany, that Max Born called “probably the most brilliant gathering of young talent then to be found anywhere.” Among her peers were Enrico Fermi and Eugene Wigner. Although some were cowed by their company, she was not one of them. During one quantum mechanics seminar led by Born, Robert Oppenheimer (aka the future father of the atomic bomb) interjected so many questions and comments that Mayer sent around a petition to her fellow students, asking them to sign if they wanted Oppenheimer to keep his trap shut.
Her thesis was completed in a famous theoretical physicist’s guest room. One of the bedrooms had a wall signed by Albert Einstein (at the host’s invitation, of course). With a legion of physicist friends and supporters, Mayer assumed her transition into American employment wouldn’t be too thorny.
But her timing wasn’t ideal. Jobs were scarce during the Great Depression. Johns Hopkins University hired her husband as an assistant professor, but because of antinepotism rules, though she was allowed to work, she couldn’t be paid for it. Refused a vacant office, Mayer set up a research space for herself in a university attic.
She reported to her little corner of the school every day, not out of obligation, but because she loved physics and was confident that she was a meaningful contributor. Though she was never paid, Mayer was eventually given classes to teach and an office in the physics department. In her nine years at Johns Hopkins, she published ten papers and co-authored with her husband, Joe Mayer, a textbook used in schools for over four decades.
Joe Mayer lost his job in 1938 in a round of firings aimed at restructuring the budget. He landed on his feet—and with a better salary—at Columbia University. For Maria, relocation was more of a blow. Despite her impressive credentials, those who didn’t know her often dismissed her as just a professor’s wife. Outgoing and confident in Germany, Mayer was often shy in her new environment in the United States. When she taught classes, she relied on cigarettes to steel her nerves. (Sometimes she’d puff on two or more at a time, lighting up a second while finishing the first.)
Mayer inquired about a position in the Columbia physics department, but her request was spiked. Maria eventually found an office and a bit of support from the chemistry department. There she was given some classes to teach and a job title, although these crumbs were handed to her as if they were favours instead of the embarrassing miscalculations of her value that they were.
Mayer was finally paid for her research during World War II, although it was actually the government, not Columbia, footing the bill. Mayer oversaw a team of some fifteen chemists working on projects related to enriching uranium. After the war a flock of famous physicists moved to the University of Chicago, and the Mayers went with them. Again, the pay was bad (read: non-existent), but Mayer found the science to be cutting edge, competitive, done at a breakneck pace — in a word, delicious.
When Mayer arrived, a former student offered her an appointment in nuclear physics at Argonne National Laboratory, which would allow her to continue teaching at the University of Chicago. “But I don’t know anything about nuclear physics,” she pointed out. His reply: “Maria, you’ll learn.”
So, half the time, Mayer wielded a piece of chalk in one hand and a cigarette in the other, running a brutally difficult physics theory seminar and taking on other faculty-like responsibilities at the University of Chicago in exchange for a “voluntary professor” title and no pay. The rest of the time, she worked at Argonne and puzzled over isotopes.
Isotopes are atoms of the same element that are either carrying extra neutrons or missing a few. Mayer wanted to figure out why some isotopes are more stable than others. She collected evidence showing that the more stable isotopes were the ones with a so-called magic number of neutrons or protons—that is, with 2, 8, 20, 28, 50, 82, or 126. Because these magic-numbered isotopes tended to stay put instead of decaying further like their more unstable counterparts, the universe is loaded with them. Mayer could see that they were special, but she didn’t know why.
Mayer came to believe that inside the nucleus there were a series of shells, like the rings of an onion. The theory had first been floated by scientists in the 1930s but never proven. Over time, Mayer’s data offered strong support for the assumption that neutrons and protons rotated at different levels of orbit. However, she still lacked crucial information that would explain why they flew in that formation. Why these numbers, and why the onion-like shells?
As she discussed the issue with Enrico Fermi in her office in 1948, he tossed out one last question as he headed out the door: “Is there any evidence of spin-orbit coupling?” Within ten minutes, she had it. His question had brought all her other evidence into focus. The way the particles spun in these onion rings determined the isotope’s stability. She compared the model to sets of dance partners waltzing in a room. “Anyone who has danced a fast waltz knows that it is easier to spin in one direction than in the other direction,” wrote journalist Sharon Bertsch McGrayne. “Thus, the couples spinning in the easier direction will need slightly less energy.” The magic-numbered isotopes were more stable because the waltzing couples were all moving in the direction that required less energy. Mayer had figured out the nuclear shell model, explaining both what goes on in the nucleus and why some isotopes are more stable than others.
Mayer was offered a full-time university position in California in 1960. (The University of Chicago counteroffered, but after so many years of working there unpaid, she found the gesture amusing.) Three years after she moved to the West Coast, Mayer won the Nobel. She was in bad health then, weakened by a serious stroke and a lifetime of smoking and drinking. Mayer may have slowed down, but she didn’t stop. “If you love science,” she said, “all you really want is to keep on working.”
Irène Joliot-Curie (Nobel Prize in Chemistry 1935)*
When Irène Joliot-Curie was six years old, her parents, Marie and Pierre Curie, won the Nobel Prize in Physics. Then, when she was fourteen, her mother won a second one, this time in chemistry. The list of Curie Nobel Prize winners got longer when Joliot-Curie herself, at age thirty-seven, shared a Nobel in chemistry with her husband, Frédéric Joliot. “In our family, we are accustomed to glory,” Joliot-Curie stated matter-of-factly.
Here “accustomed” is a loaded word, as Joliot-Curie felt the harsh smack of fame from a very early age. The whole world was following along as her parents won their Nobel, but her family was again the focus of attention when Pierre, fumbling with an umbrella, slipped and his skull was crushed under a buggy. Several years later, when Marie Curie became close with a married colleague, it didn’t matter that the details of their involvement were unclear. She was labelled a husband-stealer, which put her status in the scientific community in jeopardy. Joliot-Curie watched her mother accept a second Nobel and then collapse into a full-blown breakdown, sidelining her career for a year and keeping her adoring children from their only living parent. Afraid of the public attention, Joliot-Curie was instructed to send letters to her mother addressed to a pseudonym.
Joliot-Curie both idealized her mother and was extremely protective of her. She shared her mother’s love of science, but practiced it with the disposition of her father. Joliot-Curie was calm and confident, where Marie Curie could be fragile and nervous. Once, after being caught daydreaming during a private math lesson, Curie chucked her notebook out the window. Unruffled, Joliot-Curie walked down to the courtyard, picked up the notebook, and was ready with the answer to the math question upon her return. When they were apart during her childhood, Joliot-Curie wrote home with frequent updates, including which topics she found “adorable” (inverse functions) and which she found the “ugliest” (Taylor’s formula). When she grew older, Joliot-Curie made her mother meals, arranged for her transportation, and assisted with whatever else needed to be done.
During World War I, when she was a teenager, Joliot-Curie took up the dangerous task of deploying X-ray technology in field hospitals, following through with a project her mother started. Without X-rays, to find shrapnel in pulverized flesh, physicians had to stick their hands into a wound and dig around. With X-rays and some knowledge of three-dimensional geometry, doctors could evaluate at precisely what angle to enter the wound to retrieve the metal. Joliot-Curie’s job was not only to deliver the technology but also to teach hospital staff how to use it. Joliot-Curie’s age, sex, and self-possession didn’t always endear her to those she tried to teach. In some places she was told not to bother unpacking her equipment, and in others medical workers threatened to destroy the machines as soon as she moved on.
Though she was alone, a teen, and just a few miles from the front, the biggest danger Joliot-Curie faced was actually the very tools she helped distribute. With only cotton gloves and a wooden screen to protect her, Joliot-Curie was repeatedly exposed to radiation.
When World War I concluded, Joliot-Curie began working as her mother’s assistant at the organization she directed, Paris’s Radium Institute. The radioactive glow of materials made her feel giddy. Never one to pick a research subject purely for its popularity, Joliot-Curie let her interests guide her.
Her fluency in physics and math could intimidate her colleagues at times. She didn’t care for pleasantries; her verbal and written style struck some as curt and others (like her sister) as simply straightforward and honest. That she was her mother’s favourite in the lab earned her the nickname “Crown Princess”.
When she presented her PhD dissertation in 1925, even the New York Times reported on it: “Nearly a thousand people packed the conference room while the daughter of two of the foremost geniuses of this age calmly read a masterly study.” Joliot-Curie, dressed in a loose black dress, explained her analysis of alpha particles cast off by polonium, the element her parents had discovered in 1898. She told one reporter who asked about family obligations, “I consider science to be the paramount interest of my life.”
Frédéric Joliot began working at the Radium Institute as Marie Curie’s assistant in 1925. He and Joliot-Curie were not at all alike. He was charming and talkative, very perceptive in social situations. Where she ducked the spotlight, he sought it out. But they shared a love for the outdoors and for sports, and a great appreciation of each other’s work. Joliot, too, had idolized the Curies. When he was younger, he cut their pictures from a magazine and hung them on his wall. He explained, “I discovered in this girl, whom other people regarded somewhat as a block of ice, an extraordinary person, sensitive and poetic, who in many things gave the impression of being a living replica of what her father had been…his good sense, his humility.” A year after he began work at the lab, they were married.
Together, Frédéric and Irène swung for a Nobel three times.
In the early 1930s, the Joliot-Curies (as they’ll henceforth be called) observed protons flying out of paraffin wax. This is what they knew: the German physicist Walther Bothe had shown how, by placing polonium (which is radioactive) next to beryllium (a brittle metal), the latter will start emitting powerful rays. But what were those rays? The Joliot-Curies guessed gamma rays.
They had misinterpreted their data. When other scientists tried what the Joliot-Curies had done, putting paraffin wax in front of those rays, a subatomic particle with no electric charge appeared: the neutron. (For his discovery of the neutron, James Chadwick won the Nobel Prize in Physics in 1935.)
The Joliot-Curies moved on to studying the neutron in a Wilson cloud chamber. Like following a jet’s contrail to track its path, the chamber allowed researchers to study a particle by observing where it had been. The neutron’s activity within the chamber could have been explained either by a negatively charged electron or by a positively charged electron, called a positron. They guessed it wasn’t a positron and were wrong again.
The Joliot-Curies finally hit on a right answer when they put polonium next to aluminium foil and neutrons and positrons leapt out. The activity was curious because they were expecting to see hydrogen nuclei instead. When they retried the experiment in 1934, they got the same results.
A Geiger counter, which measures ionizing radiation, finally clarified what the Joliot-Curies had done. As the machine clicked, they realized that the aluminium foil had become radioactive. They had discovered the first-ever artificially produced radioactive element. Just months before Marie Curie’s death, Joliot brought her a tiny test tube holding the couple’s discovery.
In 1935, the Joliot-Curies’ artificial radioactivity netted them the Nobel Prize in Chemistry. Following the award, Joliot was hired by the Collège de France and Joliot-Curie stayed on as the director of the Radium Institute. She was also tapped to be one of France’s first female cabinet members, though French women were not yet allowed to vote.
As the years went on, health and political problems encroached. During World War II Joliot-Curie and her two children were smuggled out of France. They were fortunate enough to hike the Jura Mountains into Switzerland on June 6, 1944, D-Day, when Germans guarding the Swiss-French border were otherwise occupied. Joliot-Curie made sure to carry a large physics book with her in a backpack.
In 1956, Joliot-Curie was diagnosed with leukaemia, likely caused by her exposure to X-rays as a teen. Concurrently, her husband was battling a severe case of hepatitis, also brought on by prolonged exposure to radiation. Joliot-Curie died within the year. Joliot followed two years later. Joliot-Curie was neither terribly surprised nor devastated by the leukaemia diagnosis, since the disease had also killed her mother. “I am not afraid of death,” she wrote a friend. “I have had such a thrilling life!”
Dorothy Hodgkin (Nobel Prize in Chemistry 1964)*
In the cave-like accommodations in the basement of the Oxford University Museum, electrical cables hung from the ceiling like a high-voltage canopy of Christmas lights. A single Gothic window graced the lab space, but it was mounted so high that taking advantage of its light required a staircase. In the twenty-four years that Dorothy Hodgkin ran her X-ray crystallography lab from the museum, at least one person was zapped with 60,000 volts—and thankfully, not fatally. The lab was underfunded and Hodgkin underappreciated, but she made do. Even in the paltry conditions, Hodgkin’s masterful abilities launched her to the top of her field.
X-ray crystallography became a discipline in 1912 when Max von Laue discovered that X-ray diffraction patterns can tell scientists quite a bit about a molecule’s atomic structure. The process starts with molecules all organized in a uniform, recurring pattern, called a crystal. When X-rays are pointed at crystals, the molecules cause the X-ray to diffract, and the resulting design is captured on photo plates. The pictures are chock-full of clues that can lead researchers to the 3-D structure of the molecule. To decode them before computers was an especially gnarly task—one that could take years of computational muscle and exceptional patience. Hodgkin was a pro.
In the early 1930s, during the first years of Hodgkin’s career, cracking even the simplest crystal’s code took tens of thousands of mathematical calculations carried out on a hand-adding machine. The equations were used to build what’s called an electron density map, which looks like a topographical map but shows instead where the crystal’s electrons are most concentrated. The whole process, from X-ray to structure, could take months or even years.
In 1936, ploughing through calculations got a bit easier when Hodgkin became the proud owner of two boxes packed with 8,400 thin pieces of paper. These so-called Beevers-Lipson strips were like a card catalogue for crystallographers. From top to bottom, they were filled with meticulously ordered trigonometric values, which cut down the time Hodgkin spent working out the math.
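To illustrate the arithmetic those strips tabulated, here is a toy sketch (the function name and the sample structure factors are hypothetical, chosen only for illustration) of the one-dimensional Fourier summation crystallographers performed by hand: each diffraction spot contributes a cosine wave, and summing the waves at a point gives the electron density there.

```python
import math

def electron_density_1d(structure_factors, x):
    """Toy 1-D Fourier summation: each reflection h contributes a cosine
    wave with amplitude |F_h| and phase phi_h; summing the waves at a
    fractional coordinate x gives the (unscaled) electron density there."""
    return sum(
        amplitude * math.cos(2 * math.pi * h * x - phase)
        for h, (amplitude, phase) in structure_factors.items()
    )

# Hypothetical data: {index h: (|F_h|, phase in radians)}.
factors = {0: (10.0, 0.0), 1: (4.0, 0.0), 2: (1.5, 0.0)}

# Evaluate the density on a coarse grid along the cell; by hand, every
# cosine value here meant a lookup on a Beevers-Lipson strip.
grid = [electron_density_1d(factors, x / 10) for x in range(10)]
peak = max(range(10), key=lambda i: grid[i])  # densest grid point
```

Each strip held precomputed values of one such cosine term, so a calculation like the one above collapsed into reading numbers off strips and adding columns, which is why two boxes of paper could meaningfully speed up a structure determination.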
When she began decoding the cholesterol molecule in the late 1930s, most of her peers said it couldn’t be done with crystallography. But Hodgkin, whom a friend affectionately called the “gentle genius,” beamed X-rays at the cholesterol crystal and started punching that adding machine. Where traditional chemists had failed, the crystallographer succeeded.
Word of her incredible electron density map decoding skills spread, and Hodgkin found herself a magnet for unsolved crystal structures. When someone needed a molecule’s structure worked out, they dropped off the crystal sample at Hodgkin’s. Over the years she was sent some doozies, penicillin among them.
Penicillin had already shown its ability to prevent bacterial infection in humans by 1941—an extraordinary boon during wartime. By understanding its structure, scientists hoped to help drug developers mass-produce it. However, the molecule evaded scientific understanding in a variety of ways. First, American and British crystallographers were unknowingly working on penicillin crystals of different shapes. Nobody knew penicillin crystals could have those variations. Furthermore, because of the way the molecules were layered, the photo plates didn’t present a very clear picture, either.
Finally, as if there weren’t already enough challenges, Hodgkin and her Oxford graduate student were endeavouring to work out the structure of the penicillin molecule without any knowledge of its chemical groups. Hodgkin joked that it looked like “just the right size for a beginner.”
Hodgkin’s decoding work revealed that penicillin’s parts were bound together in an extraordinarily unusual way. One chemist was so taken aback that he wagered his whole career against her findings, swearing he’d become a mushroom farmer should her assertions about its structure be true. (Despite her verified results, the naysaying scientist didn’t become a mushroom farmer.) When Hodgkin realized she had the final answer to the penicillin problem in 1946, she flitted around the room in childlike celebration. It had taken her four years. The discovery ushered in new, semisynthetic penicillins and their widespread deployment.
Her success notwithstanding, it would be eleven more years until Oxford would make her a full university professor. For an upgraded lab space, she’d have to wait twelve.
Her next massive molecular puzzle, vitamin B12, had six times more non-hydrogen atoms than penicillin. Overturning expectations was a Hodgkin hallmark. Although other scientists declared that B12 was unsolvable by X-ray crystallography, Hodgkin gave it a shot.
For six years, Hodgkin and her team took some 2,500 X-ray photos of B12 crystals. Processing the images was beyond anything the Beevers-Lipson strips could handle. Luckily, Hodgkin had a computer programmer on her side. The University of California, Los Angeles, had just gotten a new computer specifically programmed to tackle crystallographic calculations, and the student programmer, a chemist, just happened to be visiting Hodgkin’s lab at Oxford for the summer. When the student returned to UCLA, Hodgkin mailed him bundles of information on B12, and he would send back the computer-processed results. The work was long and difficult. When there were mistakes, like one that rendered an atom ten times larger than its actual size, Hodgkin told her Southern California programmer to cheer up. Through the entire process, he never once saw her lose her cool.
Eight years after she’d commenced work on B12, Hodgkin successfully nailed down its 3-D blueprint. According to a British chemist, if her work on penicillin “broke the sound barrier,” B12 was “nothing short of magnificent—absolutely thrilling!”
“For her determinations by X-ray techniques of the structures of important biochemical substances,” Hodgkin was awarded the Nobel Prize in Chemistry in 1964.
Hodgkin was always kind and gracious but also firm when she needed to be. Underestimating her never went well. Even in her old age, she continued to surprise people. With crippling rheumatoid arthritis and a broken pelvis, she continued to jet to Moscow and elsewhere to attend conferences on science and peace.
Ada E. Yonath (Nobel Prize in Chemistry 2009)
Ada E. Yonath is an Israeli crystallographer best known for her pioneering work on the structure of the ribosome. She is the current director of the Helen and Milton A. Kimmelman Center for Biomolecular Structure and Assembly at the Weizmann Institute of Science. In 2009, she received the Nobel Prize in Chemistry along with Venkatraman Ramakrishnan and Thomas A. Steitz for her studies on the structure and function of the ribosome, becoming the first woman among Israel’s ten Nobel laureates, the first woman from the Middle East to win a Nobel Prize in the sciences, and the first woman in 45 years to be awarded the Nobel Prize in Chemistry. However, she said herself that there was nothing special about a woman winning the Prize.
Gerty Theresa Cori (Nobel Prize in Physiology or Medicine 1947)*
“As a research worker, the unforgotten moments of my life are those rare ones, which come after years of plodding work, when the veil over nature’s secret seems suddenly to lift and when what was dark and chaotic appears in a clear and beautiful light and pattern.” These were Gerty Cori’s words, first recorded for the radio series This I Believe, and played again at her memorial service in 1957.
Over the course of her career, Cori and her research partner/husband Carl lifted that veil repeatedly to reveal a dazzling string of discoveries, including processes as essential as how food fuels our muscles. When we talk about glycogen and lactic acid and how they relate to exercise, we’re talking about a series of biochemical cycles that taken together are called the Cori cycle. The Coris were the first to bioengineer glycogen in a test tube, which was huge, considering that when they did it in 1939, no one had created a large biological molecule outside of living cells before. The Coris also ferreted out a whole slew of enzymes and then worked out how those enzymes control chemical reactions. Today, these discoveries are so fundamental to how we understand biochemistry that learning about them is standard in high school textbooks.
From 1931 until her death, Gerty Cori ran a lab at Washington University in St. Louis that was considered the epicentre of enzyme research. Scientists came from all over the world to work with her and her husband, and all in all the lab churned out eight Nobel Prize winners.
Of the couple, Gerty was the “lab genius” according to a colleague, keeping a sharp eye on the experimental data and commanding perfection. Her natural pace was breakneck, and she brought everyone else in the lab, including Carl, with her. She kept on top of current research, dispatching students to the library regularly to copy the most interesting articles. When she read something that jumped out at her, she’d race down the hallway to Carl’s office to discuss it with him. She smoked furiously, and evidence of her frenetic activity could be seen in a flying trail of ashes.
Though she could be tough on colleagues, Gerty’s command of the lab had a lot to do with keeping experimental conditions ideal—and just how flat-out thrilled she was to be doing important work in biochemistry. If she took someone to task for a mistake, her response came from disappointment; she would have to delay the fun stuff for a day.
At the core of so many of the lab’s discoveries was Gerty’s partnership with Carl, which was forged during medical school at the University of Prague in anatomy class, of all places. Even before they were married, they published together. Once they wed in 1920, they wanted to continue. However, there were powerful political, social, and geographic forces in Eastern Europe working against them. Gerty converted to Catholicism to marry Carl, but even so, anti-Semitism was so acute that his family worried about his job prospects suffering because of her Jewish ancestry. Furthermore, the borders of what was once Austria-Hungary were in flux. At one point, Carl and some friends dressed up in workmen’s clothes to break down a lab in Czechoslovakia in secret, only to reassemble it in Hungary, the home of the lab’s founder.
The Coris decided that the best opportunities for them were likely abroad. Carl was offered a position in Buffalo, New York, at a research centre tackling malignant diseases. Gerty stayed at a children’s hospital in Vienna, where she dug into congenital thyroid deficiency, until she got the thumbs-up from Carl that he had lined up a position for her at the research centre in Buffalo as an assistant pathologist. Six months after he arrived, she followed him to the United States.
In Buffalo, they fought with the administration for their autonomy. When a director added his name to their papers without reading the contents, the Coris removed it before sending their manuscripts for publication. When the director pushed his theory that cancer was the work of parasites, Gerty refused to play along, and was nearly fired for it. The director told her that the only way she could hold on to her position at the centre was to stick to her own lab space and quit collaborating with Carl. Naturally, they snuck around, peeking at each other’s slides and discussing results.
Soon enough, they were back in the swing of things, traveling deep into the study of glycogen together. In nine years, they published fifty papers and mapped out the general structure of the Cori cycle.
When it was time for them to move on, Cornell University, the University of Toronto, and the University of Rochester Medical School all called Carl. Gerty, however, was the deal breaker. She was scolded by a school courting Carl, told that by requiring a position for herself she was torpedoing his career. What the schools didn’t understand was that the Coris did their best work together. There was sexism at play, certainly, but nepotism rules also made hiring a spouse difficult. With so many Americans out of work during the Great Depression, two family members working at the same university was seen as an unfair advantage.
Washington University School of Medicine in St. Louis found a Gerty-sized loophole, in that the school was part of a private institution, not a public one. Carl was brought on as a research professor and Gerty as a research associate. Though their titles were tiered, the Coris always acted like they were equals.
During work hours the two talked to each other constantly about their research, but they were close outside the lab, too. At home, they skated and swam and hosted parties and tried to avoid discussing ongoing experiments in their limited off-hours. When they were at the university, an hour-long lunch break became a Cori-led story time, during which they regaled their colleagues with discussions of far-reaching passions. The talks covered everything from wine to research reports to whatever they’d been reading for pleasure.
When the hour was up, the team was back at top speed. In 1936, the Coris figured out how the body breaks down glycogen into sugar. The final years of the 1930s were dedicated to tracking down new enzymes and sussing out their purpose. When the Coris netted phosphorylase, it was the first time scientists had zoomed in to observe the molecular workings of carbohydrate metabolism.
Test tube glycogen gave the Coris—and the larger research community—another high. Carl made quite the spectacle by whipping the molecule up in front of a conference audience and then passing the test tube around for all to ogle.
Pressure from outside finally bumped up Gerty’s official status at the university. An offer from Harvard and the Rockefeller Institute to make them both professors was turned down only when Washington University countered by promoting Gerty.
According to the science journalist Sharon Bertsch McGrayne, “The lab made so many discoveries so quickly during the late 1940s and early 1950s that Carl worried a bit.” It wasn’t luck that brought them so many successes; it was hard work. Gerty was a fixture in the lab every single day.
Gerty found out that she and Carl had won a Nobel Prize “for their discovery of the course of the catalytic conversion of glycogen” in 1947, the same year she found out she had a rare form of anaemia, which would kill her a decade later. Wherever promising treatments were offered, the Coris went. They traveled all over the world in an effort to find something that would improve her health.
Gerty kept her illness to herself, but others could see traces of it in her routines. The Coris moved a cot into her lab so she could rest; blood transfusions sapped her energy, and she became more frequently frustrated. In one illness-related incident, Gerty fired the nurses Carl had hired to help her. Though she could no longer be the Gerty who would jet down the hall or leap in excitement at a breakthrough, she kept working with the same fervour that had always defined her. When she could no longer make the trek from one room in the lab to the other, Carl scooped her up and carried her; they worked together until the end.
Rosalyn Yalow (Nobel Prize in Physiology or Medicine 1977)*
If Rosalyn Yalow wanted to see Enrico Fermi speak, she would see Enrico Fermi speak—even if it meant hanging from the rafters. One of the world’s greatest physicists talking about one of the world’s greatest discoveries? Fermi on fission? She would be there even if she, a junior at Hunter College, had to compete for seat space with every physicist within traveling distance. Yalow did attend his colloquium at Columbia University. And she did see it, hanging from the rafters.
Such was the way with Rosalyn Yalow. Once an idea had settled, the obstacles didn’t stand a chance. How does a child get braces if her parents are poor? Yalow folded collars with her mother to bring in the necessary cash. How does a researcher secure lab space when she isn’t given any? Yalow fashioned one of the first labs in the United States dedicated to radioisotope research out of a janitor’s closet. How does one get past discrimination? “Personally,” explained Yalow, “I have not been terribly bothered by it… if I wasn’t going to do it one way, I’d manage to do it another way.” That principle is how she navigated so many issues — graduate school rejections, work limits on pregnant women, rejection from a major journal, and, yes, a packed house for Enrico Fermi. She simply found another way—and quickly. Whining was a waste of time. Minutes were things she didn’t like to lose.
Yalow was direct. She questioned colleagues at conferences and spoke up at meetings. At times, people found her manner abrasive, but Yalow sensed a double standard.
She and her long-time research partner, Solomon A. Berson, dealt in directness. Over the course of their twenty-two years working together, communication turned into what onlookers described as a sort of “eerie extrasensory perception.” Their rapid-fire conversations about work would spill out into academic events, over dinners, and into walks around campus. At parties, Berson would have to remind Yalow to curb the shop talk and chat with other people.
They began working together at Bronx Veterans Administration Hospital in 1950. Yalow had landed there three years earlier as a consultant while holding a full-time teaching position at Hunter College, and before that a position at the Federal Telecommunications Laboratory. Yalow wanted to do nuclear physics, and neither Hunter nor the Telecommunications Lab was getting her any closer to that goal. At the VA hospital she finally had her own lab, even if it was in the closet. Berson was a resident physician, and Yalow brought him on board.
The pair clicked immediately. For eighty hours each week, they worked furiously on iodine metabolism, on the role of radioisotopes in blood volume determination, and on insulin research. With test tubes flying and chemical assays to prepare, there was never an extra moment to waste.
One of their first challenges as a team was to figure out how long insulin injected into a diabetic’s body stays there. Yalow and Berson attached a radioactive tag to the insulin in order to monitor how long it stuck around. Through frequent blood sampling, they got their answer: too long. The result was surprising because it meant the insulin was being held up by antibodies when the popular assumption was that insulin molecules were so small that they were able to slip past the body’s alarm system.
But why did the body attack the injected insulin? Yalow and Berson traced the problem back to an incompatibility between the human body and the hormone injected, which in the 1950s came from pigs and cattle. Even though the difference between human and bovine insulin was slight, antibodies detected that the insulin was foreign and went after it. The pair’s discovery overturned a long-held belief and provided a crucial piece of information for doctors treating diabetics. (Today, insulin is synthetically made to exact human specifications to avoid this problem.)
The greatest takeaway from the experiment, however, was not about insulin at all but about the method they had used to study it. Over the course of conducting their insulin research, Yalow and Berson measured the antibodies generated as a result of the hormone. Flip that relationship around and what do you get? They’d inadvertently developed a way to measure hormones in a test tube by looking at their antibodies. This process didn’t require injecting radioactive material into the body and it was capable of surprising accuracy. They called their technique RIA, or radioimmunoassay.
Together, Yalow and Berson tore through hormone research, interpreting their discovery of RIA as a starting gun. What they learned allowed researchers to tell the difference between patients with type 1 and type 2 diabetes; which children would be able to benefit from human growth hormone treatments; whether ulcers should be operated on or handled with medication; which newborns needed a medical intervention for underactive thyroid…and the list goes on. Though others were slow to catch on, within a decade, the RIA technique energized scientists, transforming endocrinology into the “it” specialty in medical research. For eighteen years, Yalow and Berson knocked out hormone after hormone, furiously preparing solutions and loading 2,000 to 3,000 test tubes in twenty-four-hour stretches.
By the time Berson moved on to City University of New York in 1968, much of RIA-related research had already been worked out. Even so, Berson and Yalow reunited on Tuesdays and Thursdays to pull all-nighters at the lab.
In a terrible one-two punch, Berson was hit with a small stroke in March 1972 and then a heart attack during a scientific conference in Atlantic City one month later. The heart attack killed him.
Berson and Yalow were so close that their relationship was nearly familial, and his death hit Yalow extraordinarily hard. Besides losing her friend and research partner, she was concerned about losing her status. For their entire partnership, Berson had been its public face. Yalow was devastated by his death, but she also didn’t want the public interest in her work to be buried with Berson.
Yalow thought that going back to school for an MD might give her extra clout, but with so many years of important research already under her belt, she decided against it. Making a name for herself—by herself—would require upending more than twenty years of assumptions that Berson led the partnership. (Yalow and Berson had always considered each other equals.)
The only way to regain the scientific community’s trust, Yalow decided, would be to kick her already breakneck pace up a notch. She turned eighty-hour workweeks into hundred-hour ones. She renamed her lab the Solomon A. Berson Research Laboratory so that her articles — sixty produced in the following four years — would still appear with his name on them.
Yalow knew that her work with Berson deserved a Nobel, but science’s highest award is given only to the living, and her partner was already gone. Yalow, as always, didn’t give up hope. Every year she chilled champagne and dressed up the day the awards were announced, just in case the news was good.
In the fall of 1977, Yalow woke up in the middle of the night, no longer able to sleep. As was her tradition, if sleep wasn’t panning out, she’d go to the office. On this particular morning she was in by 6:45. When she got word that she’d won, Yalow ran home, changed her clothes, and was back in her lab by eight a.m. Her Nobel was granted as something of an exception; it was widely assumed that both members of a research partnership had to be living to be honoured.
The Nobel finally fulfilled a desire she’d had since the age of eight: to become a “big deal” scientist. This time her admittance was granted with flung-open doors, not a spot in the rafters.
Barbara McClintock (Nobel Prize in Physiology or Medicine 1983)*
At the University of Missouri, Barbara McClintock, an acclaimed geneticist working on how one generation of corn passes its genetic traits on to the next, was known as a troublemaker. The marks against her — wearing pants in the field instead of knickers, allowing students to stay in the lab past their curfew, managing with a firm, no-nonsense style — were practical choices, ones McClintock believed would improve her work and that of others. But to her superiors, her behaviour was obstinate. McClintock was excluded from faculty meetings, her requests for research support were denied, and her chances for advancement were made clear: If she ever decided to marry, she’d be fired. If her research partner left the university, she’d be fired. The dean was just waiting for an excuse.
There are times for perseverance and there are times to get out quick. In 1941, after five years at the University of Missouri, McClintock found the door, slamming it behind her.
Never one to be burdened with possessions (or weighed down by the limited vision of others), McClintock hopped in her Model A Ford and, like a dandelion seed surfing the breeze, set out not knowing where she and her masterful canon of genetics work would land. When she turned her back on the University of Missouri, it was possible she was also losing the career that she’d worked so hard to cultivate.
But freedom felt like home to McClintock. When she was a baby, her mother used to set her on a pillow and leave her to amuse herself. Simply mulling over the world and all of its amazing patterns and peculiarities was a happy pastime of McClintock’s earliest years. “I didn’t belong to that family, but I’m glad I was in it,” she said. “I was an odd member.”
Her outsider status was not so different in the scientific community. Though she absolutely belonged there and was fully absorbed in her work, McClintock never completely integrated. One part of the issue was societal. Getting a faculty position at a university was exponentially harder for women in the 1920s than it was during World War II, when men were called to war and positions opened up for women. Though up to 40 percent of graduate students in the 1920s in the United States were women, that didn’t translate into jobs—especially in science. Fewer than 5 percent of female scientists in America were able to land jobs at coed institutions. And even then, the home economics and physical education departments were the biggest hirers. Women rarely rose to posts as prestigious as professor. In the Venn diagram of female biologists hired as professors at major research institutions, the middle was a lonely place. McClintock never got there.
McClintock’s work also kept her out of the mainstream. She was either ahead of her time, with experimental methods so dense and complicated that they were difficult for her peers to understand, or she chose subjects that operated outside trends in biology.
During her first year of graduate school at Cornell University, for example, McClintock took it upon herself to identify discrete parts of corn’s chromosomes. Her short-term advisor, a cytologist, had been after the same tricky-to-find prize for a long time. McClintock sidled up to the microscope and — bam — “I had it done within two or three days — the whole thing done, clear, sharp, nice.” She revealed the answer so quickly that it bruised her advisor’s ego. McClintock was so thoroughly hopped up on the quest that she hadn’t even considered the possibility that she would upstage her superior. In other instances, her groundbreaking experiments required an interpreter. When she laid out her case for the location of genes on corn’s distinguishable ten chromosomes, her method remained a mystery to her colleagues until a scientist from another school visited and unpacked the study design for public consumption. “Hell,” said the interpreter. “It was so damn obvious. She was something special.”
McClintock adored biology at Cornell. She was no typical high achiever. Following the acknowledgment of her corn chromosome discovery as a master’s student, she attracted a pack of professors and PhDs who trailed her around campus, “lapping up the stimulation she provided,” said one, like puppies tumbling after castoff treats. Together the group, with McClintock as its intellectual leader, ushered in an especially bright period of genetics. McClintock proudly recounted how the “very powerful work with chromosomes… began to put cytogenetics, working with chromosomes, on the map… The older people couldn’t join; they just didn’t understand. The young people were the ones who really got the subject going.”
Post-PhD, McClintock spent a few more years at Cornell, publishing papers, teaching botany, and advising students. In 1929, she and a graduate student crossed one strain of corn that had waxy, purple kernels with another strain whose kernels were neither waxy nor eggplant-coloured. McClintock’s experiments showed that some kernels inherited one trait but not the other, for example, brightly coloured kernels without the waxy texture. When McClintock looked at the chromosomes through a microscope, she found that their appearance was noticeably different, and in the cases where kernels had one trait but not the other, parts of a chromosome had traded places.
The discovery was hailed as one of the greatest experiments of modern biology. At just twenty-nine years old, McClintock proved herself a powerful force in genetics research—but without a permanent faculty position. The head of the department was in favour of bringing her on to become a professor but the Cornell faculty forbade it. So McClintock left, picking up fellowships here and there, searching for a new place to put down roots.
The country’s greatest research institutions should have fought over McClintock, but instead she ended up searching for a space to plant her corn. She found one at Cold Spring Harbor on Long Island, New York. The facility was founded in 1890 as a place for high school and college teachers to learn about marine biology. When McClintock arrived, it was a genetics institute. The atmosphere was ideal; McClintock wouldn’t have to teach, and there were no restrictions on her research, which would be entirely self-directed. She could wear jeans and stay as late and as often as she wanted. The place suited her so well that when she socialized, she would invite friends to the lab instead of to her “home,” an unheated, converted garage down the street used for nothing more than sleep.
McClintock was extraordinarily organized. Clothes in her closet all faced the same direction, and each of her scientific specimens was assiduously labelled. Sometimes she’d get so engrossed in her work that peering into a microscope would feel to her like spelunking through the deep secrets of a cell. “You’re not conscious of anything else,” she remembered. “You’re so absorbed that even small things get big.”
At Cold Spring Harbor, McClintock spent six years on her greatest scientific accomplishment. When she finally unveiled her findings to a group of researchers, her hour-long talk was met with silence. One listener recalled that the talk landed “like a lead balloon.” McClintock had just laid out a meticulously researched case that genetics was much, much more fluid than scientists had previously realized, with genes able to switch on and off and change locations. The prevailing belief was that genes were like bolted-down pieces of furniture. In the 1950s, scientists from all different fields of study were getting into the genetics game; chemists and physicists applied their disciplines to understanding inherited traits. With so many new ways to look at our genetic makeup, corn had fallen out of favour. “I was startled when I found they didn’t understand it, didn’t take it seriously,” she said of the talk. “But it didn’t bother me. I knew I was right.”
That she was. The acceptance of her ideas didn’t come until nearly two decades later, when molecular biologists finally saw in bacteria what McClintock had seen in corn. At the news, McClintock was overjoyed. “All the surprises…revealed recently give so much fun,” she wrote to a friend. “I am thoroughly enjoying the stimulus they provide.” Public acknowledgment brought a string of awards—the MacArthur Foundation Fellowship, the Albert Lasker Basic Medical Research Award—but no Nobel. Then finally, in 1983, thirty-two years after her big-but-ignored discovery, she heard her name announced on the radio. She had finally won science’s most prestigious prize. Her “discovery of mobile genetic elements” was touted by the Nobel Committee as “one of the two great discoveries of our times in genetics.”
In the ensuing years, she was asked time and time again the same question, some delicately worded take on Were you bitter it took so long? Her answer: “No, no, no. You’re having a good time. You don’t need public recognition, and I mean this quite seriously, you don’t need it.” With characteristic confidence, she added, “When you know you’re right you don’t care. It’s such a pleasure to carry out an experiment when you think of something…. I’ve had such a good time, I can’t imagine having a better one…. I’ve had a very, very satisfying and interesting life.”
Rita Levi-Montalcini (Nobel Prize in Physiology or Medicine 1986)*
During the last two and a half decades of her 103 years, Italians liked to joke that everyone would recognize the pope, so long as he appeared with Rita Levi-Montalcini. Though she stood only five feet, three inches, the stories of her work and her life were as large and dramatic as her iconic sideswept hair.
There was the time she smuggled a pair of mice on a plane to Brazil by tucking them away in her purse or pocket—for the sake of her research, of course. Or the years she bicycled door-to-door during World War II, pleading with farmers for donated chicken eggs to feed her “babies.” The plea was a ruse: she needed fertilized eggs for her embryo research. Once, Levi-Montalcini talked her way into the copilot seat on a fully booked flight. On another flight, when the airline lost her suitcase and the clothes she had on were wrinkled, she opted to give a lecture in a pressed nightgown rather than appear dishevelled in front of an audience.
In life and in her work, Levi-Montalcini preferred grand gestures and big risks. As a child, she vowed never to marry, in order to devote herself completely to science—a promise she kept. Finishing school? No thanks. She was meant for medical school. When the Italian government barred her from medicine and research in 1938 because she was Jewish, she set up a secret lab in a bedroom so she could continue to examine the development of fibrous nerve cells, an interest she cultivated while working toward her medical degree.
During this time, Levi-Montalcini read an article written by the founder of developmental neurobiology, a German embryologist based in St. Louis, Missouri, named Viktor Hamburger. Hamburger used chick embryos to inquire about a possible link between the spinal cord and the development of the nervous system. The idea piqued her interest. Even while operating undercover, Levi-Montalcini figured she could talk her way into a regular supply of chicken eggs.
Levi-Montalcini sprang into action, conducting her own experiments to see if she could suss out a link. She recruited a similarly ousted professor as a research partner and called on her family to provide lab support. Her brother built an incubator for the eggs she gathered, and Levi-Montalcini made a scalpel from a filed-down knitting needle. She also acquired a slew of tiny instruments, like forceps made for a watchmaker and scissors made for an ophthalmologist. She used these miniature tools to extract the chick embryos and cut their spines into thin slices. After studying the neurons in the spinal cord at different stages of embryonic development, Levi-Montalcini discovered something entirely new. Nerve cells didn’t fail to multiply as previously thought; they both grew and died as a normal part of the development process.
Because she couldn’t publish in Italy, Levi-Montalcini sent her papers to Swiss and Belgian journals available in America, which is where Hamburger learned about her work. After World War II had concluded and Levi-Montalcini was allowed to conduct scientific experiments outside the bedroom, Hamburger invited the Italian researcher to come to Washington University in St. Louis to discuss their overlapping interests. She accepted, and a trip that should have taken a few months turned into a twenty-six-year tenure at the institution.
With Levi-Montalcini’s knowledge of the nervous system and Hamburger’s foundation in analytical embryology, the pair was ideally matched to tackle together the mystery of how nerve cells emerge and die off. Levi-Montalcini thrived in her new environment, working extraordinarily hard from morning until late into the evening. Yet for all that hard work, Levi-Montalcini believed her biggest accomplishments were guided by intuition. “I have no particular intelligence,” she said. But when the powerful weathervane inside her landed on a direction or a thought, “I know it’s true. It is a particular gift, in the subconscious. It’s not rational.” Hamburger leaned toward crediting talent. “She has a fantastic eye for those things in microscope sections…and she’s an extremely ingenious woman.”
Levi-Montalcini travelled to Brazil to learn how to grow tissues in a glass dish. On this trip, however, the experiments kept failing. As she tried the method she’d learned, Levi-Montalcini swung back and forth between enthusiasm and despair. (Even her mood swings were legendary.) By relying on mice to produce nerve cells, researchers were locked into a set timeline of growth. But if Levi-Montalcini could make those special cells in the lab, her experiments would accelerate. Still… the technique wasn’t working. On her final attempt, Levi-Montalcini plopped a scrap of embryonic chick cells on one side of a petri dish and a chunk of tumour on the other. When placed next to each other—but not touching—the nerve fibres astonishingly started to stretch, extending out from the cells in every direction like a fragile, otherworldly crown. It was an extraordinary show indeed—and one Levi-Montalcini took pleasure in playing over and over again throughout her career.
What was the factor shoving nerve growth into action? Upon her return to St. Louis, Levi-Montalcini figured it would take her a few months to find out.
A few months went by…and then a year, two, three, while she and her research partner Stanley Cohen worked furiously. (By that point, Hamburger had stepped back from research and become more of a mentor.) The team grew tumours, experimented with snake venom, and spent a lot of time thinking about mouse saliva. It took six years, until 1959, to identify the nerve growth factor in a mouse’s salivary gland and purify it into something that would trigger that ethereal crown.
At first, the discovery was seen as a small thing, impressive but also niche. But as more and more growth factors were discovered, the field bloomed. It was found that nerve growth factors influence everything from degenerative disease progression to the success of a skin graft to the protection of damaged spinal cords.
In 1986, she and Cohen were awarded the Nobel Prize in Physiology or Medicine for their work.
The prize launched Levi-Montalcini into Italian celebrity. (She had returned to Italy part-time in 1961.) In her later years, she took work calls on the car phone as a driver chauffeured her around in a Lotus. Levi-Montalcini was awarded the National Medal of Science, and in Italy she was appointed a senator-for-life. “The moment you stop working,” she said, “you’re dead.” Wearing a string of pearls, high heels, and a brooch under her lab coat well into old age, she made it to 103.
Gertrude B. Elion (Nobel Prize in Physiology or Medicine 1988)*
Gertrude Elion never forgot the ones she lost: her grandfather from stomach cancer when she was a teenager, her fiancé from a sudden infection in his heart, a patient from leukemia, her mother from cervical cancer. She held on to the very real pain of their passing, the losses serving as a constant reminder that every atom substituted and drug synthesized might make a difference. Her grandfather’s death “was the turning point,” she admitted. “It was as though the signal was there: ‘This is the disease you’re going to have to work against.’ I never really stopped to think about anything else. It was really that sudden.”
Though her purpose was singular, her path to pharmaceutical research was not. The first hang-up was funding a higher degree in chemistry. She applied to fifteen schools, and not a single graduate program would offer her any kind of financial assistance. During the Great Depression, what money the schools had went to men. It was the same in the job market. One employer worried that with no other women in the lab, Elion would be a “distracting influence.”
To get herself closer to the chemistry she loved, Elion adapted, pulling together a hodgepodge of employment. She signed up for secretarial school and taught nursing students biochemistry. When she ran into a chemist at a party, she offered to work for free. Eventually she scraped together enough money to fund one year of graduate school at New York University. Elion supported herself by picking up a job as a doctor’s office receptionist.
Elion’s first full-time position in a lab was quality control for a line of grocery stores. She tested the acidity in pickles and made sure spices were fresh. She took what she needed from the position and then called it quits, telling her manager matter-of-factly, “I’ve learned whatever you have to teach me, and there’s nothing more for me to do. I have to move on.”
Her father noticed “Burroughs Wellcome Company” on a pill bottle and suggested that she apply for a job, since the company was located nearby, in Westchester County, New York, just eight miles from her home. Burroughs Wellcome gave scientists the space, freedom, and financing to chase down drug-related solutions to any serious medical problem they desired. When Elion showed up for an interview, it was luck that she landed in the company of George Hitchings, who was working on just the types of problems that Elion wanted to tackle.
In 1944, Elion was hired by Hitchings, who was interested not only in drug development but also in how the medical research community was conducting it. Trial-and-error drug development was the norm, but Hitchings believed the method was a little like grabbing at a solution hidden somewhere inside a paper bag. Why couldn’t they learn about new drugs with a methodical, scientific approach that incorporated knowledge of applicable subjects like cell growth?
Hitchings sent Elion off to explore adenine and guanine, the so-called purine bases in nucleic acid. (They’re the A and the G in ACGT—the building blocks that make up DNA.) Cells need nucleic acids to reproduce, and tumours, bacteria, and protozoa need a lot of them to spread. So Hitchings figured that really getting to know these little-understood acids might allow the research team to develop a biochemical wrench to throw at diseases to stop them from spreading.
Thrilled to finally be doing work that satisfied her, Elion stayed late, went into the lab on weekends, and carried out her experiments merrily—even when the floors were a sizzling 140 degrees, a by-product of the baby food dehydration plant downstairs, which ran even during New York’s muggy summer. She enjoyed the job so much that when she opted to spend one whole weekend at home and put work on the backburner, her mother worried that there must be something wrong with her.
Elion was in her element. She ripped through studies of organic chemistry, biochemistry, pharmacology, immunology, and virology. However, she still yearned for a PhD. For a time, Elion took PhD classes in her off-hours, but she ended up getting forced out of the program. Work on your PhD full-time or get out, the dean demanded. She chose her job over higher education. “Oh, no, I’m not quitting that job,” she explained to the dean. “I know when I’ve got what I want.” (Elion never did complete her PhD, though George Washington University gave her an honorary one.)
Elion stayed put because she was just having too much fun and too much success waging a battle against the illnesses that kill us. Take her accomplishments in 1950. Elion synthesized two effective cancer treatments. Remember those purine bases? Well, she developed a compound called diaminopurine that used them to disrupt leukaemia cell formation. On animals, diaminopurine worked such wonders that Sloan-Kettering Memorial Hospital in New York City tried the medication on two severely ill leukaemia patients. One patient’s recovery so transformed her that, for a while, doctors thought she might not even have had leukaemia in the first place. The patient stopped taking the medication, got married, and had a child. Elion also developed another compound that improved the life expectancy of leukaemia patients.
Both medications were major breakthroughs in cancer treatments, but their effectiveness only took patients so far. When the leukaemia-patient-turned-mom relapsed and died two years after receiving the medication, Elion was completely torn apart. Even after decades had passed, the case would still move her to tears.
As the years went by, it was almost as if Elion would skip through areas of disease research and then invite the world to come along. In the early 1950s, she started studying drug metabolism before she’d heard of anyone else doing it. That pair of cancer drugs? Her work kicked off a whole new wave of leukaemia research. Then, in 1978, Elion and her research partners completely flipped the way scientists thought about viruses.
Antivirals were not thought to be very accurate attackers. Scientists believed antivirals would aim for a virus’s DNA but would smash into the DNA of healthy cells, too. Elion’s antiviral research, however, got a promising start. When Elion sent a sample of some early work to a lab to be tested, the response she got back was encouraging: “This is the best thing we’ve seen. It’s active against both the herpes simplex virus and the herpes zoster virus.” For four years Elion and her team fine-tuned the compound to bump up its effectiveness and nail down its metabolism. The secret? What Elion and her colleagues had developed was a near twin of one of DNA’s building blocks, something so similar that the virus itself activated the assassin that would take it out. The drug was unveiled at a scientific conference in 1978 and was immediately heralded as a major breakthrough, one that would change the way scientists approached antivirals.
Elion lived for these highs. But better than solving any chemical puzzle was how her work touched people. In 1963, she watched a medication she’d helped develop clear up a watchman’s painful gout, and in 1967, the first heart transplant took place thanks to an immunosuppressant she worked on.
“For their discoveries of important principles for drug treatment,” Elion and Hitchings won the Nobel Prize in Physiology or Medicine in 1988 (shared with Sir James Black). After she received the award, letters poured in from people expressing their gratitude. One recounted a son’s terminal reticulum cell sarcoma reversed; another wrote about a daughter with herpes encephalitis whose life was saved. Someone else’s eyesight was protected from a severe case of shingles, thanks to the medicine Elion had helped pioneer. She may not have been able to do anything about the deaths of her loved ones, but, explained a research vice president at her old institution, “In fifty years, Trudy Elion will have done more cumulatively for the human condition than Mother Teresa.”
Christiane Nüsslein-Volhard (Nobel Prize in Physiology or Medicine 1995)
If a list were made of the great biologists of the past 100 years, Christiane Nüsslein-Volhard would certainly be on it.
In the 1980s, she and Eric F. Wieschaus solved one of the central mysteries of life: how the genes in a fertilized egg direct the formation of an embryo. For their discovery, Dr. Nüsslein-Volhard, Dr. Wieschaus and Edward B. Lewis received the 1995 Nobel Prize in Physiology or Medicine. At the time, Dr. Nüsslein-Volhard was just the 10th woman to win a Nobel Prize in one of the sciences.
From 1985 to 2015 she was Director of the Max Planck Institute for Developmental Biology in Tübingen, Germany.
In 2004, with her own money and a $100,000 award from the L’Oréal-UNESCO For Women in Science programme, she set up the Christiane Nüsslein-Volhard Foundation for the promotion of science and research, to support talented young women with children. The Foundation aims to give them the freedom and mobility required to further their scientific careers, and so to help prevent science from losing excellent talent. It is aimed specifically at graduate students and postdoctoral fellows in the fields of experimental natural sciences and medicine.
Linda B. Buck (Nobel Prize in Physiology or Medicine 2004)
The reawakening of a long-forgotten memory by the scent of spring lilacs in bloom or an apple pie fresh from the oven can be one of our most powerful human responses — and one of the most mysterious.
How do we differentiate thousands of distinct odours and how do our brains perceive and remember them? Dr. Linda Buck set out to understand the sense of smell, a monumental scientific question that had long evaded explanation.
Through years of intensive research, Buck became the first to identify a family of genes that control the olfactory system, a complex network that governs our sense of smell. The genes are blueprints for a family of smell-receptor proteins in the nose that work in different combinations so that the brain can identify a nearly infinite array of odours — much like the letters of the alphabet are combined to form different words.
Buck determined that each odour-sensing cell in the nose possesses only one type of odorant receptor, and each receptor can detect a limited number of odorant substances. She then used this knowledge to determine how the identities of different odours are perceived by the brain to allow us to sense distinct odours. In later studies, she and her colleagues uncovered a sensory map in a part of the brain known as the olfactory bulb, a map that is virtually identical in all individuals.
Uncovering how this system works has been fundamental to understanding the machinery that controls the relay of sensory signals from the world around us to the central nervous system.
These landmark contributions to human biology open new doors to studying the brain and have numerous implications for health. They may even hold the key to understanding behaviours such as fear and aggression.
For her ground-breaking discoveries of odorant receptors and the organization of the olfactory system, Buck received the 2004 Nobel Prize in physiology or medicine.
Françoise Barré-Sinoussi (Nobel Prize in Physiology or Medicine 2008)
In the early, frantic confusion of the AIDS crisis, as doctors and patients scrabbled for answers, desperate to grasp what was smothering the immune systems of the young and fit, there was, in 1983, a turning point, a pivot on which the rest of the pandemic would hinge: the discovery of HIV.
When Françoise Barré-Sinoussi – along with her colleague Luc Montagnier – found the virus that causes the disease, it proved monumental, a boulder thrown into a lake whose ripples still fan out today. It proved critical too, transforming our understanding of AIDS as it bulldozed through country after country, bringing blindness and pneumonia and death like a grim huckster.
But for the woman who made the discovery there was another turning point, a moment that changed her for good. It was a meeting on a hospital ward 30 years ago this year, and it would cement for ever Barré-Sinoussi’s commitment to fighting the disease.
In early 1984, soon after her discovery, the virologist was invited to give a talk at San Francisco General Hospital, at the epicentre of the AIDS crisis. News soon circulated among patients on the wards that the now famous French scientist was in their midst.
“The doctor there asked me whether I would see an [AIDS] patient that was in the emergency room and was dying,” she says, as we sit in her bright, white office in the Pasteur Institute – the very building in Paris where her discovery was made.
“He told me that the patient would like to see me. And I said okay. I went there and he was really ill, in terrible shape.” What did he say?
She swallows and folds her arms, as if hugging herself.
“It will stay in my mind for ever,” she says. “He had difficulty speaking, but I could see on his lips that he said, ‘Thank you.’”
Her jaw moves back and forth, grinding her teeth to keep control.
“I was so totally… I didn’t know what to say, and the doctor saw me and said, ‘Ask him why.’ And I did, I asked him why he thanked me and he said, ‘Not for me, for the others.’”
Her eyes fill as she glances down. He died a few hours later.
“Since that day I still have that [image] in my mind… He took my hand and I still feel his hand in my hand today.”
She looks straight ahead, her empty hands cupped, palm-side up.
This, it transpires, as afternoon sinks into evening, is far from the only unknown story in a life and career that traces more than 30 years of the pandemic, from the smart building surrounding us to the devastated villages of Cambodia and Cameroon.
Fuelled by freshly brewed coffee and speaking with the precision one might expect of a great French scientist, Barré-Sinoussi begins to unravel her entwinement with the history of HIV/AIDS, laying out the truth about her discovery, the devastating effect of the legal battles that ensued, the likelihood of finding a cure, and the radical policies needed to halt the disease before it is too late.
Birth of a retrovirus
Her story begins with a phone call.
“The clinician [Willy Rozenbaum, an infectious disease specialist] in France called the Pasteur Institute at the end of 1982 to ask whether we’d like to look for a retrovirus in this disease [AIDS]. I’d never heard about it before.”
Barré-Sinoussi and Montagnier had been working on animal retroviruses in the early 1980s, during the period that the first two human retroviruses (named HTLV-I, which stands for human T-cell lymphotropic virus 1, and HTLV-II) were discovered by Dr Robert C Gallo at the National Cancer Institute in Maryland.
A retrovirus, unlike other viruses, contains an enzyme called reverse transcriptase. This enables it to convert its genetic material – which in the case of a retrovirus is RNA (ribonucleic acid) – into DNA after entering a host cell. So, when HIV infects T cells (the white blood cells that govern the immune system), a DNA copy of its genes is spliced into the cells’ own DNA. Each host cell then regards the viral DNA as part of its own make-up and churns out new copies of the virus. The T cells then die off, depleting the immune system so the body can’t fight off infections.
Barré-Sinoussi’s PhD, awarded by the University of Paris in 1975, focused on reverse transcriptase activity. And so, despite an ordinary Parisian upbringing (her father was a surveyor) and being thousands of miles from the USA, the known epicentre of the disease, she was ideally qualified and in the perfect place at the perfect time to conduct this research.
Rozenbaum gave Montagnier and his team a lymph node sample from a French AIDS patient, with the aim of isolating the agent responsible for the disease – the hypothesis being that a retrovirus might be the cause of AIDS. They sampled the culture every three to four days to screen for reverse transcriptase activity, and noticed an increase and then decrease in activity that correlated with the death of the T cells.
“At one point we thought we would lose the virus because it was disappearing at the same time that the [T] cells were dying,” Barré-Sinoussi recalls.
And so, on a January morning, as their discovery was slipping through their fingers, one member of the team dashed across the street to the blood bank. They used the lymphocytes from a fresh blood sample to save the virus.
Initially they thought the cell death was caused by toxicity related to the tissue culture conditions, but after adding the fresh lymphocytes to the culture they saw that the cell death correlated again with the detection of reverse transcriptase activity. Finally, they had the answer: a retrovirus was indeed responsible for AIDS, and they were the first to discover it. But how difficult was the process?
“There was no problem at all,” says Barré-Sinoussi with a shrug. What? One of the great medical discoveries of the 20th century was easy?
“It was very easy. We received the first sample at the beginning of January 1983 and, 15 days after, we had the first sign of the virus in the culture.” But Barré-Sinoussi refused to indulge in a eureka moment.
“It was not enough to say we had the virus causing AIDS for sure – there was much more work to be done,” she says, squinting through her frameless glasses. “But we knew one patient with AIDS had an antibody against the virus.”
On 20 May 1983 Barré-Sinoussi and Montagnier published their findings in the journal Science, naming their newly discovered virus LAV – lymphadenopathy-associated virus, referring to the swollen lymph nodes on the patient who provided the sample, a symptom sometimes present in the recently infected. Not until later that year though, as more and more AIDS patients were found to have the virus and the evidence began to accumulate, did she feel convinced of a causative link between LAV and AIDS. The young scientist – still only 36 – phoned a friend and former colleague in the USA.
“I was so excited – I had to share it with someone – and he said, ‘Do you realise what this means, to have a virus like that if it’s true?’ And I said, ‘I’m starting to realise.’ But what I didn’t realise, like many of us at the time, was the impact of the virus, the impact of the epidemic.”
At that time there were only about 50 known AIDS cases in France, so was there any clue what effect the discovery would have on her personally?
“That it would change the rest of my life? Certainly not. I remember I told my husband, who wasn’t very happy because I wasn’t very often at home, ‘Don’t worry, it’s just, you know, for one, two years maybe and then it will be over.’” She laughs darkly at the irony. As the 1980s rolled on, with the pandemic emerging and the professor dedicating more and more time to further research, her husband would remind her of her promise.
But she could not banish from her mind the San Franciscan, or any of the other patients she had met. It made her work “like crazy”. And as she worked, people with AIDS (PWAs, as they were called) would show up at the Pasteur Institute, among them the Hollywood icon Rock Hudson, before his death in 1985.
“They heard that the virus was discovered here and they wanted to speak with the scientists who understood the virus.” But there was nothing immediate Barré-Sinoussi could do. Instead, she abandoned the normal scientist–patient distance and regularly went out into the community to understand better the needs of those she was trying to help.
“Some of them I became friends [with] and some of them are not there anymore. The list is very, very long…” She looks away. As the desperate search for treatment continued, Barré-Sinoussi was unable to remain detached.
“It was really traumatic. I knew as a scientist that we will not have a treatment tomorrow because we know that science needs time to develop a better understanding [of a disease] and to develop drugs and then to test the drugs. As a human being, to see the patients dying and expecting so much from us, it was terrible… It has been so dramatic and painful.”
And it wasn’t long before an altogether different kind of drama threatened to distract her from her efforts.
Across the Atlantic, Dr Gallo and his team at the Laboratory of Tumor Cell Biology were also researching the cause of AIDS, and in May 1984 they published four papers in Science outlining their discovery: the third human retrovirus, HTLV-III.
The papers illustrated how the lab had found a way to create a cell culture in which the virus could grow and stay alive, thus allowing unlimited further research to be conducted on it. They showed the shape of the virus – thanks to electron microscope imaging – which, unlike the spherical core of the first two HTLVs, was cylindrical. And, crucially, they detailed the high proportion – 88 per cent – of AIDS patients presenting with antibodies reacting against the antigens in HTLV-III. Though others later amended the latter findings to 100 per cent of patients, it was a game-changer: they had demonstrated unequivocally that the virus was the cause of AIDS.
“He [Gallo] contributed to confirming the data that we had – and this is very important in science. You need to have confirmation by others,” says Barré-Sinoussi. Would she and Montagnier have been able to make their discovery without Gallo’s earlier work on retroviruses?
“I don’t know,” she says, enunciating carefully through a distinct Parisian accent – conscious perhaps of how far her words can reverberate. “The discovery could have been made exactly the same way because we did not use at all the HTLV experience to isolate the virus – we decided to start without any idea about which virus could be responsible.”
The trouble for Gallo, as it transpired in 1985, was that HTLV-III was merely a name he and his team had given to a virus identical to LAV, the one already identified in Paris. In 1990, after an investigation ordered by the United States Office of Research Integrity, following accusations of misappropriation of samples, it emerged that Gallo’s discovery was not from the same sample as the Pasteur Institute’s, but that in fact both had been contaminated by another sample. Regardless, what ensued, from the mid-1980s onwards, was a row so protracted and ugly as to cast an unsightly shadow over the findings of both laboratories.
The argument, splaying out in all directions, was not only over who deserved credit for finding the cause of AIDS but also over what the virus should be called. In 1986 the International Committee on Taxonomy of Viruses swept the nomenclature dispute aside when it announced in Science a new name given by neither lab: HIV (human immunodeficiency virus).
But this was by no means the end of the Franco–American war over HIV research. With depressing inevitability, battle commenced over who deserved the royalties from the development of the HIV diagnostic test, which began trials in 1984. Gallo was granted the patent by the US Patent and Trademark Office, prompting the Pasteur Institute’s Board of Directors to hire lawyers to fight back, accusing him of stealing ‘the French virus’ – a claim later discredited.
“I tried as much as I could to stay away from this,” says Barré-Sinoussi with a snort – suggesting it proved impossible. “Like all the people involved, I had to be heard by lawyers.”
It took a meeting of both governments in 1987 to settle the matter and split the proceeds of the royalties. But that did little to quell the personal attacks aimed at the scientists involved.
“It was another bad period for me,” she says, unable to disguise the weariness in her voice. “During that period I would go to these [HIV] conferences and at the end of the conference [you would have] people affected by the disease standing up and saying, ‘The only thing you’re interested in is to fight with your colleagues. You’re interested in making money, you don’t care about us.’ This was the worst for me to hear. I was really shocked.”
Barré-Sinoussi’s face contorts at the recollection. She is almost alone among virologists in dedicating her entire career to fighting HIV. But the predicament for patients in the late 1980s cannot be overlooked: HIV campaigners mounted a desperate, valiant effort to demand that governments pay attention, grasp any treatment that might help, and ensure medication was affordable and sped through the normal approval process in time to save lives. As politicians prevaricated, millions perished.
Footage of one particular protest by ACT UP, the New York-based pressure group, conveys precisely the anger and despair felt: activists scatter the ashes of their loved ones over the White House lawn – plumes of grey smoke, a ghostly reminder of a disease out of control, billow over the grass.
There was one development, however, in the savage horror of the virus, capable of bringing excitement, even joy: the introduction in 1996 of effective treatment – antiretroviral therapy (ART), also called combination therapy.
For doctors, impotent for 15 years as their patients wasted away in front of them, ART finally offered a tool, a torch in the darkness. You might expect then that for the scientist who identified the virus, before witnessing the storm turn into an unstoppable hurricane, this would herald happiness – at least for a while. But for Barré-Sinoussi, the very opposite happened.
“I rapidly developed depression,” she says flatly.
“Probably because so much stress accumulated on our [scientists’] shoulders for years – to have seen people dying and feeling that we were not going fast enough, and when suddenly the data appeared of the combination treatment, that people were living, they were safe…and the fact that we were relieved – we just fell down.”
For more than a year she stopped going to HIV events and conferences.
“When I reappeared among the community some of my colleagues said, ‘Oh Françoise, it’s a long time since we’ve seen you, what happened?’ And I explained and they said, ‘You too? Welcome to the club.’” Despite not being alone in her reaction to the breakthrough, the manifestation of mental illness was alarming and bewildering.
“I said to my husband at the time, ‘I feel that I’m not my own personality any more. I look like a virus. My face is like HIV. I cannot see the picture of the virus in the street or the announcement for prevention information.’” And so, after helping millions with her work, for the first time the professor herself sought help.
“I had to go to the doctors. I didn’t give them my name, in order for them not to make the link between the virus and me.” She lets out a meek, perhaps embarrassed laugh, folding her arms again around her tailored, dogtooth jacket. The doctor put her on antidepressants for a year.
Joy – albeit rather tainted – did come though. In 2008 Barré-Sinoussi and Montagnier were awarded the Nobel Prize in Physiology or Medicine for discovering the virus.
She was in the midst of a meeting in Cambodia when a journalist, weeping with happiness, phoned with the news.
“We had to stop the meeting unfortunately.” Calls flooded in from around the world. The French ambassador quickly organised a cocktail reception at the French embassy. “We were celebrating together until very late at night.”
One notable absentee was her husband – he had died just a few months earlier. Another was Gallo: the Nobel committee had excluded him from the prize, unleashing accusations by US scientists of “Euro bias”. Gallo admitted, with admirable restraint, that it was “a disappointment”. Montagnier said he was “surprised” by the exclusion. And Barré-Sinoussi?
“For me it’s a Nobel Prize for all the community of people that has been working [on it]. It’s not my prize, it’s our prize.”
To the surprise of many casual observers of the Barré-Sinoussi–Montagnier–Gallo saga, the personal dynamics between these behemoths have played out in rather unexpected ways.
“Bob was here in May,” she says. “We organised a symposium for the 30 years of HIV here in Pasteur. I never had a very bad relationship with Bob Gallo, you know.” She laughs again, the truth finally out. “The relationships are not what the media seems to show.”
Montagnier, however, is a different matter. “I don’t speak to him,” she says, pointing out that they stopped working together in the late 1980s – she opened her own lab at the Pasteur Institute in 1988 and is now Director of the Regulation of Retroviral Infections Division. “It’s a long time ago. Of course if we are in the same place we say hello, but I don’t have a relationship with him.”
Montagnier, meanwhile, remains in contact with Gallo. The French virologist has for many years been based at China’s Shanghai Jiao Tong University, and his work on electromagnetic signals from bacterial DNA has been cited by some homeopaths as evidence of homeopathy’s efficacy – a claim he rejects. Barré-Sinoussi’s diplomacy levels reach their peak when this is mentioned.
“I think it’s important for scientists to be free,” she offers vaguely. “He’s doing what he likes to do.”
In contrast, Barré-Sinoussi has built on their discovery with research into mother-to-child transmission of HIV, the adaptive immune response to viral infections, and elite controllers – those with a natural resistance to the virus. Since 2012 she has also been President of the International AIDS Society as a new threat emerges.
The other disease
Effective treatment might have been developed in 1996, but three things prevented Barré-Sinoussi from giving up her fight against the disease: the lack of availability of antiretrovirals – of today’s 35 million sufferers, only about a third receive treatment; the limitations of the treatment, which can cause a kaleidoscope of side effects; and the galloping rise in the number of infections. When combination therapy was introduced there were 22 million people living with the virus, a figure that has since increased markedly. Furthermore, in the West particularly, there is another disease: apathy.
Barré-Sinoussi is “concerned” about young people, who are “too relaxed” about HIV because “the education campaigns are not sufficient…there is not enough information”. A 2013 survey in Scotland found that one in ten pupils thinks HIV can be transmitted through kissing.
“They [young people] feel today it’s easy, there is a treatment. They have not heard about co-morbidities on long-term treatment.” Eight to ten per cent of HIV-positive people develop cancer or cardiovascular disease. “They look at me and they’re like, ‘Nobody told us that.’” She looks incredulous, thrusting her hands up in anger – not at them but at us: the adults.
She supports, in the first instance, legislation to ensure mandatory sex education in schools, but also “to make sure that the education is well done…if the teachers are not well-informed they will not give the best information to students.” Even once sex education is improved, manifold problems must be overcome.
“As a scientist I’m very frustrated – to work for decades to get the treatment we have and to see people not have access to those drugs is unacceptable. We know it’s a question of cost. [But] when you speak about life you should not think about cost. For me, priority number one is life.”
It is also, she acknowledges, a question of “political willingness”, something that falls short when it comes to oppressed groups.
“There are countries where today some populations still do not have access to care because they are what we call ‘key affected populations’ – marginalised because they are MSM [men who have sex with men] or drug addicts, or they are transgender or whatever. But they are a life! We have to respect their behaviour. They are living bodies and so it’s the responsibility of all, including the politicians, to take care of human beings.”
But political leaders’ behaviour and policies do not always correlate. Yoweri Museveni, the President of Uganda, took an HIV test publicly in 2013 to encourage his compatriots to follow suit, amid bafflement from his administration over the increase in infections from 6.4 per cent in 2005 to 7.3 per cent in 2012. But just months later he signed the infamous anti-homosexuality bill introducing life imprisonment for “aggravated homosexuality”, one cited example of which is an HIV-positive person having gay sex. The bill also includes a three-year jail sentence for anyone who does not report gay sex, a law that could hobble HIV outreach work. Barré-Sinoussi wants other presidents around the world to take the test publicly.
“Any action to encourage people to be tested is good,” she says. “So we should congratulate the President of Uganda. But one action at one moment is not sufficient. This should be associated [with] educating people to change their view about MSM or other populations.”
For Barré-Sinoussi the situation in Uganda mirrors that of Cameroon, where she has devoted much of her time and where homosexuality is also an imprisonable offence. Two years ago the virologist visited Chantal Biya, Cameroon’s first lady and the country’s ambassador for HIV/AIDS, to convey the urgency of a change in attitudes.
“She started to tell me there are cultural aspects to take into consideration. I said, ‘I know all that but the culture can change, so if you don’t start now to educate the young about the different populations that should be respected, whoever they are, it will never change.’” The effect of Barré-Sinoussi’s warning was only short-lived: “Homosexuals were taken out of jail after one of my visits, and a few months after they were back in jail.”
But it isn’t only the oppression of gay people fuelling infection rates.
“Criminalisation of drug addiction is playing a negative role in access to prevention, care and treatment,” she says. “This has been reported and published by several groups at an international level, so I cannot understand decision makers in some countries not taking the responsibility to implement a policy of decriminalisation of drug addiction.” Prostitution should also be decriminalised, as “banning is a repressive measure. Repressive measures are not working.” Equally, she also opposes attempts to ban unsafe (‘bareback’) sex in pornography.
In 2009 she took on a global leader, perhaps the most influential of all on sexual behaviour: the Pope. After Benedict XVI said on a visit to Cameroon that HIV/AIDS is a problem that “cannot be overcome by the distribution of prophylactics: on the contrary, they increase it”, Barré-Sinoussi wrote him an open letter, challenging his assertions. Why?
“Because I thought it was my responsibility. I’ve been to Cameroon for years, and several [other] countries in Africa, and I know what the impact of such a statement can be on the population.” Those working on the ground there, including nuns, told her after his speech that the use of condoms was “not going back to zero but [was] reduced. This is not acceptable.”
Pope Benedict didn’t reply to her letter. His pronouncements are far from solitary in the Catholic hierarchy, with Cardinal Elio Sgreccia, Archbishop Francisco Chimoio, Cardinal Alfonso López Trujillo and Bishop Demetrio Fernández claiming, respectively, that condoms: don’t “immunise against infection”, are being deliberately infected with HIV to spread the virus in Africa, have tiny holes through which HIV can pass, and are like “a cork”.
Is she more hopeful about Pope Francis? “Not 100 per cent, but at least some of [his] statements seem to be in the right direction.”
All of the above mean the chances of reaching the UN’s Millennium Development Goal of “halting and reversing the spread of HIV by 2015” are zero.
“2015 is almost tomorrow,” she says. “It will not be over.”
But there is one country that, according to Barré-Sinoussi, provides an exemplary template for HIV treatment efforts: Cambodia, where she has worked repeatedly over the last two decades. More than 90 per cent of HIV-positive people there are undergoing treatment. In some parts of the USA it is 28 per cent. Why have they been so successful?
“Political willingness,” she replies. “Certainly not money, because Cambodia is a very, very poor country, but with the success of being funded by the Global Fund [to Fight AIDS, Tuberculosis and Malaria] they have been able to have a continuum of care programmes, linking from the test to the treatment system, with a network of care services.” This is down to “a few personalities – very dynamic, energetic personalities that decided to take the programme, implement and coordinate as much as possible.”
Media coverage of this success story has been almost invisible, however, with journalists over the last few years focusing instead on something long unuttered in HIV circles: the C word.
Since 2008, when the first known case of an HIV patient being ‘cured’ was reported – dubbed the ‘Berlin patient’ – a growing wave of stories has appeared citing research from around the world on what could lead to a cure. But scientists largely prefer the term ‘functional cure’ (Barré-Sinoussi favours ‘remission’ or ‘sustained remission’), using it to describe a scenario in which a sufferer no longer needs ART but his or her body is not wholly free from HIV. So is a complete cure even possible?
“In France we say impossible is not French,” she says, with a little laugh. “But it is very, very difficult because that means we would be able to eliminate even latently infected cells from all compartments in the body – not only the blood but also the lymph nodes, the gut and even the brain. This is a tremendous challenge. The virus is hidden in the cells so it cannot be seen by the antiretrovirals, it cannot be seen by the immune system.”
Timothy Ray Brown, the Berlin patient, became functionally cured after being given a stem cell transplant from a donor resistant to HIV (due to a rare genetic mutation) to rid him of acute myeloid leukaemia. He has not needed ART since. Although it would not be feasible or desirable to repeat this costly, risky treatment on the wider population, its importance, for Barré-Sinoussi, is unarguable.
“The reason why there is optimism today is because we have what we call ‘proof of concept’,” she says, referring to the notion that an outcome is possible as it has been achieved in at least one person. As she points out, there are several other examples that show a functional cure is achievable. The ‘Mississippi baby’, who was given exceptionally high doses of antiretroviral drugs hours after being born with HIV before having this treatment stopped, is now three years old and has undetectable levels of the virus in her blood. And in France there is the ‘Visconti cohort’ – 14 patients who were given ART within 10 weeks of infection. They remained on the treatment for three years before stopping it, and show no signs of the virus returning.
“We need to have better predictors,” says Barré-Sinoussi. “Maybe we will have a treatment that will be effective not for all but for some.” So how likely is it that within, say, a decade, a functional cure will be available for all?
“I don’t know. I’m not going to tell you dates, because I personally think we should not – we don’t know and we should be honest as scientists. We still have work to do to understand better the latency and the establishment of what we call the reservoir.” Latent HIV reservoirs are created just after infection and, although reduced dramatically by ART, cannot be eradicated, so once reactivated will produce HIV again.
If not in ten years then, what chance is there that Barré-Sinoussi will achieve the dream she expressed two years ago of seeing an end to HIV during her lifetime?
“That depends when I’m going to die!” she says, instead stressing the importance of “an approach that will be scalable, affordable for everyone”. Bone marrow transplants, for example, would not be possible en masse.
Even without a cure, improvements in medication are now such that comparisons are routinely made between HIV and type 1 diabetes – both manageable, chronic conditions. Which would Barré-Sinoussi prefer to have?
“Difficult question. I guess type 1 diabetes. Maybe because of my ignorance of diabetes, but I have seen so many dramas with HIV that I would prefer to be affected by a disease I know less about – even with the treatment available now, because it’s also about the stigma and discrimination that are still affecting [HIV-positive] people.”
Perhaps the most important question for those with access to medication is when to start treatment. The debate rages between the need to halt the progression of the virus and prevent further transmissions (which ART does by 96 per cent) and the need to reduce the long-term side effects of the medication. If Barré-Sinoussi were diagnosed HIV-positive, when would she start?
“As early as possible,” she replies without hesitation. “No doubt. We have scientific evidence that if you are treated as early as possible, the mortality and morbidity is decreased more significantly than if you are treated later. It’s a question of risk and benefit. However, today that balance is [tipped in favour of] early treatment.” With this her hands mimic scales, the left rising above the right by almost a foot.
As we finish the interview and get up to leave I realise there’s one question, possibly the hardest, I haven’t asked. Barré-Sinoussi is standing behind her desk packing up her things, with her personal assistant hovering. Can she begin to conceive of what she’s achieved?
“I don’t think I can,” she says, stopping what she’s doing and looking bewildered, stumped. “I don’t think I can.”
She is not the only one who might struggle to grasp her achievement. In the mid-1970s, as the young scientist was nearing the completion of her PhD, she went to see one of the senior members of staff at the Pasteur Institute because she wanted to carry on working there. He told her: “A woman in science, they never do anything. They are only good at caring for the home and babies. Forget this dream.” Ignoring him helped save millions.
Barré-Sinoussi will officially retire in 2015, but won’t stop her fight against this most elusive of diseases. She pauses and thinks again about whether she can comprehend the impact of her discovery:
“Through the eyes of others, but not myself, because for me, when I go to the resource-limited countries and still see people in a bad condition, treated too late, and still people are dying, I say, ‘My god, it’s not possible. We still have so much to do.’”
She gestures out through the window at a world letting its young perish from a perfectly treatable virus, of doctors still remembering the touch of skeletal patients, slipping away. Her eyes narrow again, blinkered with determination and hope, and suddenly Françoise Barré-Sinoussi’s life and work become clear – none of it was a choice; it was all as the San Franciscan patient requested: for the others.
Carol W. Greider (Nobel Prize in Physiology or Medicine 2009)
Elizabeth H. Blackburn (Nobel Prize in Physiology or Medicine 2009)
Rarely has a Nobel award received such media buzz as that of Elizabeth Blackburn, Jack Szostak and Carol Greider. If cracking the DNA code revealed the ‘secret of life’, their discovery of telomerase was heralded as the ‘fountain of youth’.
The truth, while not quite so magical, is impressive enough. Telomerase is the enzyme that makes telomere DNA – protective ‘caps’ that allow DNA strands to split without damage during cell division as bodies grow.
Scientists had long suspected the existence of these caps but it was while studying the single-celled Tetrahymena (‘pond-scum’) that Blackburn discovered that the DNA sequence CCCCAA was repeated several times at the ends of the chromosomes.
When Blackburn presented her results in 1980, Szostak suggested they try an experiment by grafting the CCCCAA sequence to previously vulnerable minichromosomes in yeast. Sure enough, the telomere protected them from degradation. As the two organisms were not related, this showed a fundamental mechanism common to most plants and animals.
Szostak discovered some yeast cells had mutations that caused a shortening of the telomeres. These cells aged rapidly and soon failed.
Blackburn’s team set out to explore whether telomere DNA was created by an unknown enzyme. On Christmas Day, 1984, graduate student Carol Greider observed activity in a cell extract. Greider and Blackburn found that the enzyme, which they called telomerase, contained an RNA blueprint of the CCCCAA sequence and proteins, allowing telomerase to build longer telomeres. Blackburn then showed that telomere shortening in Tetrahymena could be caused by mutating telomerase itself.
In humans, telomerase is a double-edged sword. From more recent research, Blackburn now believes that telomere shortening in normal cells of the body can hasten some of the most common diseases of ageing. If short telomeres accelerate this aspect of the ageing process, long telomeres seem to slow it down. Future work may attempt to stimulate telomere elongation in diseased cells, such as in anaemia. However, despite media speculation, preventing telomere shortening will not make Methuselahs of us.
Conversely, telomerase is often overly active in malignant cancer cells. Therefore, work is under way to explore the effect of targeting telomerase in those cancer cells that have already become malignant.
May-Britt Moser (Nobel Prize in Physiology or Medicine 2014)
The fact that May-Britt and Edvard Moser have collaborated for 30 years — and been married for 28 — has done nothing to dull their passion for the brain. They talk about it at breakfast. They discuss its finer points at their morning lab meeting. And at a local restaurant on a recent summer evening, they are still deep into a back-and-forth about how their own brains know where they are and will guide them home. “Just to walk there, we have to understand where we are now, where we want to go, when to turn and when to stop,” says May-Britt. “It’s incredible that we are not permanently lost.”
If anyone knows how we navigate home, it is the Mosers. They shot to fame in 2005 with their discovery of grid cells deep in the brains of rats. These intriguing cells, which are also present in humans, work much like the Global Positioning System, allowing animals to understand their location. The Mosers have since carved out a niche studying how grid cells interact with other specialized neurons to form what may be a complete navigation system that tells animals where they are going and where they have been. Studies of grid cells could help to explain how memories are formed, and why recalling events so often involves re-envisioning a place, such as a room, street or landscape.
While pursuing their studies, the two scientists have become a phenomenon. Tall and good-looking, they operate like a single brain in two athletic bodies in their generously funded lab in Trondheim, Norway — a remote corner of northern Europe just 350 kilometres south of the Arctic Circle. They publish together and receive prizes as a single unit — most recently, the Nobel Prize in Physiology or Medicine, which they won this week with their former supervisor, neuroscientist John O’Keefe at University College London. In 2007, while still only in their mid-40s, they won a competition run by the Kavli Foundation of Oxnard, California, to build and direct one of only 17 Kavli Institutes around the world. The Mosers are now minor celebrities in their home country, and their institute has become a magnet for other big thinkers in neuroscience. “It is definitely intellectually stimulating to be around them,” says neurobiologist Nachum Ulanovsky from the Weizmann Institute of Science in Rehovot, Israel, who visited the Trondheim institute for the first time in September.
The Mosers’ work has also given them traction at one of the most challenging twenty-first-century research frontiers: how the brain computes. Just as computers use programming languages such as Java, the brain seems to have its own operating languages — a bewildering set of codes hidden in the rates and timing with which neurons fire as well as the rhythmic electrical activities that oscillate through brain circuits. These codes allow the brain to represent features of the external world — such as sound, light, smell and position in space — in a language that it can understand and compute. With their grid-cell work, the Mosers have been the first to crack one such code deep in the brain; now the challenge for the field is to find all the rest.
“May-Britt and Edvard’s research lies at the very heart of the cognitive-neuroscience enterprise,” says Stanislas Dehaene, who studies consciousness at the Collège de France in Paris. “They are trying to understand the neural codes for cognition — and so unite biology with computer science and even philosophy.”
The Mosers grew up on different Norwegian islands in the North Atlantic, where summer days seem eternal and the long winter nights are brightened only by the dancing Northern Lights. They were both from non-academic families and they went to the same school. But they didn’t get to know each other until 1983, when both were at the University of Oslo, both were wondering what to study and both were starting to realize that their true passion was for neuroscience and the brain.
Suddenly, everything sparked: romance between the two of them, intellectual curiosity and the beginnings of their mission in life — to find out how the brain generates behaviour. The Mosers visited one of the university’s more famous faculty members, electrophysiologist Per Andersen, and asked to do their undergraduate projects with him. Andersen was studying the activity of neurons in the hippocampus — a brain area associated with memory — and the two students wanted to try to link this precise activity of cells with the behaviour of animals. Andersen, like most neuroscientists at the time, was sceptical about making such a big leap across the black box of the brain. But the pair wouldn’t leave his office until he gave in and offered them an apparently simple project: how much of the hippocampus could you cut away before a rat could no longer remember new environments?
The two young scientists embraced the challenge, and soon discovered something profound. Until then, it had been assumed that the hippocampus was homogeneous. But the Mosers showed that one side of it was much more important for spatial memory than the other side. That brought home to them the importance of detailed brain anatomy for understanding brain function, a lesson that would prove invaluable later in their careers.
In 1984, while still undergraduates, the couple got engaged on top of the dormant volcano Mount Kilimanjaro in Tanzania. (The bitter temperature at the peak forced them to rush their exchange of rings, the quicker to get their gloves back on.) The pair had decided how their joint lives should be: children early, postdoc experience abroad and then their own lab together, somewhere in the world. These plans panned out — just a little faster than they had anticipated. Even before defending their PhDs, they accepted side-by-side postdocs in O’Keefe’s lab in London.
In the 1970s, O’Keefe had discovered neurons called place cells in the hippocampi of rats. These cells fire only when an animal is in a particular place — close to an exercise wheel, for example, or in front of a door. (Since then, other navigation-related neurons have been discovered, including those that fire when the head turns in a particular direction, or when a border, such as the long edge of a cage, is in view.) The research area was red hot, and the Mosers wanted to extend it.
But in 1996, just a few months into their postdocs, the Mosers received a surprise offer of two associate professorships at the Norwegian University of Science and Technology in Trondheim. They weren’t sure about accepting: it would mean striking out alone, in a small university in a country isolated from the world’s major centres of research. “But the offer of two posts in the same place and in the same research area was too good to turn down,” Edvard says. They flew back home, by this time trailing a toddler and baby.
It wasn’t easy to get established in Trondheim. They had to build a lab from scratch in a small basement, and establish an animal facility too. But only a few years in, they were winning big grants from the European Commission and the Research Council of Norway. And by then, the results were coming through.
On the grid
The pair’s first aim in Trondheim had been to better describe the origin of the place-cell signal. Although the cells themselves were in the hippocampus, it could be that cells elsewhere were instructing them when to fire. Remembering their lesson from the undergraduate lab, the Mosers knew that they needed to understand the brain’s anatomy to see how the signals flowed physically across it.
In the lab, they adapted the standard experimental technique for studying place cells: implanting electrodes directly into a rat’s hippocampus and recording from them as the animal runs freely in a large box (see ‘A sense of place’). The electrodes — which are sensitive enough to pick up activity from single neurons — feed into a computer and map the exact spot on the floor of the box where each neuron fires. This appears on screen as a black dot. To make sure that the rat covers the entire floor area, the researchers scatter chocolate treats across it. (May-Britt is a chocolate enthusiast both inside and outside the lab.)
The Mosers chemically inactivated different parts of the hippocampus and its surroundings in the rat brains, and then tested whether the place cells continued to fire normally. In this way, they discovered that information flowed to the place cells from the entorhinal cortex, a narrow strip of tissue running vertically up the lower back of the rat’s brain. No one had paid much attention to this structure before, in large part because it is extremely difficult to access. One side lies very close to a large blood chamber; puncturing that would be fatal. The Mosers consulted an expert in neuroanatomy and concluded that, fortunately, the ideal place for the electrodes would be away from the chamber and close to the brain’s surface. Then they started to repeat their experiments, recording from single neurons in the entorhinal cortex. That is when they found something unexpected.
The researchers saw that some of these entorhinal neurons fired when the rats moved onto or through a particular spot in the box, just like hippocampal place cells. But they went on to fire at several other spots too. While a rat scurried around mopping up chocolate treats, the researchers watched, perplexed, as the computer mapped the firings, and overlapping blobs appeared on the screen. The Mosers could see that the blobs were creating some sort of pattern, but they couldn’t work out what it was.
It took some months before it dawned on them that they needed the rats to run around bigger boxes, so that the pattern would be stretched out and easier to see. At that point, it came into view: a near-perfect hexagonal lattice, like a honeycomb. At first they refused to believe it. Such simplicity and regularity was the last thing they had expected — biology is usually a lot messier than this. But one by one, the pair ruled out all other explanations — that the pattern was an artefact from their electronic equipment, for example — and then they began to understand how this part of the brain was working. There were no physical hexagons traced on the floor; the shapes were abstractly created in the rat’s brain and imposed on its environment, such that a single neuron fired whenever it crossed one of the points of the hexagon. The discovery was exciting for more than its pleasing pattern. This representation of space in brain-language was one of the long-sought codes by which the brain represents the world around us. “It was a long-drawn-out eureka moment,” recalls Edvard. The team published the discovery in Nature in 2005.
Soon the Mosers were putting the grid cells to the test. They showed that the firing pattern of the cells remained constant even in the dark, and was independent of the animals’ speed or direction. Whereas place cells in a rat brain may change their firing rates if their environment is altered even a little — for example by changing the colour of the walls — those of grid cells remain robustly unchanged. The Mosers also found that the different cells in the entorhinal cortex generate grids of many different types, like overlapping honeycombs — big, small and in every orientation and position relative to the box’s border. And they ultimately came to see that the brain’s grid cells are arranged according to a precise mathematical rule.
The cells that generate smaller grids, with narrower spacing, are at the top of the entorhinal cortex, and those that generate bigger grids are at the bottom. But it is even more exact than that: cells that make grids of the same size and orientation seem to cluster into modules. The modules are arranged in steps down the length of the entorhinal cortex, and the size of the grid represented by each module expands by a constant factor of 1.4 with every step. At the same time, grid cells that represent different positions relative to the box’s border are dotted randomly through the structure. Assuming a similar arrangement exists in humans, the idea is that, together, these cells are unconsciously keeping track of where we are as we wander between rooms or stroll down a street.
All in the mind
These discoveries link the Mosers to a rich cast of scientists and philosophers who have pondered the connections between brain, memory and location since at least the time of Ancient Greece. Back then, a philosopher who needed to remember a long speech might memorize the layout of a building or a street, and mentally attach different parts of the speech to its different landmarks. He (they were almost always men) could then fluently deliver the entire rhetoric as he mentally walked around, allowing each landmark to activate the individual sections from memory. The fascination with memory and location continued into the twentieth century, when behavioural scientists first hypothesized that animals carry an abstract map of space inside their heads. The grid cells finally proved that this was true.
The discoveries also astonished and thrilled theoreticians, because the hexagonal pattern is the optimal arrangement for achieving the highest-possible spatial resolution with a minimum number of grid cells. This saves energy, showing how beautifully efficient the brain can sometimes be. “Whoever would have believed that such a beautiful hexagonal representation existed so deep in the brain?” says Andreas Herz, a computational neuroscientist at the University of Munich in Germany. “It was so unexpected that the brain would use the same simple geometric forms that we have been describing in mathematics for millennia.” The appealing simplicity gives hope, he says, that the entire brain uses computational principles that scientists may eventually understand.
That understanding could take a long time to reach. It seems unlikely that the neural codes that the brain uses to represent other aspects of the world will be so simple; individual neurons may code for several different properties of the world, making the languages difficult to disentangle. The grid code is also valuable because it exists high up in the brain’s hierarchy, with no direct input of sensory information. Unlike the visual cortex, say, whose coding will be influenced by light falling onto the retina, the entorhinal cortex creates the hexagonal pattern entirely internally, by integrating whatever information about the environment is received by other areas of the brain.
With the lab churning out one high-impact paper after another, the Mosers’ work has attracted people and funding. Neuroscientist David Rowland was doing his PhD at the University of Oregon in Eugene when he read the 2005 paper on grid cells and was inspired. “I thought it was so cool that I immediately wanted my first postdoc to be in their lab,” he says — and that’s how it worked out. He has joined the Mosers at the Trondheim Kavli Institute, which is now buzzing with six additional research groups, each working on different aspects of neural circuitry and coding.
Not every couple would find it easy to work together in such apparent harmony. The Mosers ascribe their ability to do so in large part to their patient temperaments and shared interests — in science and beyond. Both love outdoor activities: May-Britt runs every other day across the rugged hills around their coastal home, and Edvard hikes at weekends. They share an obsession with volcanoes — hence their engagement at the top of one — and have climbed many of the globe’s most spectacular peaks.
At work, they have evolved some division of labour. Edvard is more involved in computing and theory, and May-Britt manages the lab and staff and is more immersed in the experiments. “We have different strengths and we know that by combining them, the results become so much better,” says Edvard. They aim for only one of them to attend any particular meeting, so that the other is left in the lab. “So we are not really stepping on each other throughout the day, as many people might believe,” says Edvard.
The Mosers — and other labs around the world now studying grid cells — still have a lot to learn. Scientists do not yet know how the grid is generated by the neural networks in the entorhinal cortex, or how the overall map created by grid cells, place cells and other navigation cells is integrated to help animals to get from one place to the next. These challenges require more data, and the Mosers have a roster of experiments under way to collect them.
One virtual-reality experiment they are planning will record from electrodes in rats running on a stationary ball surrounded by screens showing changing environments. The rats’ heads will be held still so that it becomes possible to place electrodes directly inside individual cells for the first time, and to insert small lenses that allow the researchers to simultaneously examine those cells under a microscope. This will reveal precisely which of the many cell types are firing at any one time as the rats move around the virtual space.
The next step will be to map how the grid cells are hard-wired into networks, and to find out when in the rats’ lives this wiring happens. Early studies suggest that the grid system is fully established at around three or four weeks after birth, which implies that babies — humans as well as rats — are born with a very primitive sense of where they are in space, and that this sense develops as their brains adapt to the world. The Mosers are also planning to test how the hexagonal pattern would be modified in the brain of a rat that has been reared from birth in a perfect sphere instead of a flat-bottomed cage.
Outside the abstract world of neural coding, grid cells have another major relevance — in understanding memory and its loss. The entorhinal cortex is the first structure in the brain to be affected by Alzheimer’s disease, and getting or feeling lost is one of the disease’s first symptoms. The Mosers hypothesize that the cells in the entorhinal cortex may have special properties that allow the disease to develop there early — a puzzle that they hope scientists elsewhere can start solving.
Meanwhile, in Trondheim, it is 10 p.m. and Edvard and May-Britt are still discussing the brain as they find their way home. Later, long after they are out of sight, the two scientists continue to make their presence felt. Anyone flying out of Trondheim Airport will find a photograph of the couple in an exhibition of famous Norwegians. The other 13 portraits are all of individual athletes or artists. The Mosers’ portrait is the only one featuring two scientific brains.
Youyou Tu (Nobel Prize in Physiology or Medicine 2015)
It was 21 January 1969 when Mao Zedong gave a 39-year-old scientist from Zhejiang province the challenge of her life.
China was in the grip of the Cultural Revolution, with universities and schools across the country shutting their doors as the red guards ran riot.
Amid all the madness Tu Youyou, then a researcher at the Academy of Traditional Chinese Medicine in Beijing, was handed a daunting mission: to find a drug that would cure malaria.
“The work was the top priority so I was certainly willing to sacrifice my personal life,” the famously understated scientist later recalled.
In 2015, nearly half a century after her life-changing quest began, Tu was awarded the Nobel prize in medicine for her role in creating a drug that helped slash malaria mortality rates in Africa and Asia, saving millions of lives.
Yet for all her achievements, Tu, who is now 87, remains a little-known figure, even in her native China, where she had drifted into obscurity despite the magnitude of her discovery.
As news of Tu’s victory reached her native land in 2015, one fan wrote on Weibo, China’s Twitter: “Recognised at last!”
Tu was born in Ningbo, a port city about 225 kilometres south of Shanghai, in 1930. She was named after a verse in the Book of Songs, a collection of ancient Chinese poetry that is believed to have been compiled by Confucius.
Tu chose medicine, not philosophy, when she left Zhejiang and headed to China’s capital to further her studies in 1951.
She enrolled at the Peking University School of Medicine and graduated from its Department of Pharmacology four years later.
From university Tu moved to the Academy of Traditional Chinese Medicine. She married Li Tingzhao, a former school classmate and factory worker with whom she would have two daughters, and settled down in Beijing.
Then, in 1969, everything changed when Tu was recruited to a medical research project so secret it was known only as “523”.
The unit had been created two years earlier – on 23 May 1967 – on the orders of Chairman Mao, who hoped to find a way of halting the spread of malaria, a disease that was decimating North Vietnamese troops fighting in the jungles to China’s south-west.
Tu was tasked with searching in nature for a new malaria treatment and was sent to Hainan, a tropical island off China’s southern coast that had long been blighted by the disease.
There, in the sweltering rainforests of southern China, Tu witnessed up close the mosquito-borne disease’s devastating toll on the human body.
“I saw a lot of children who were in the latest stages of malaria,” she told New Scientist in 2011. “Those kids died very quickly.”
But it was in ancient Chinese manuscripts that Tu found the key to beating the disease. Back in Beijing, Tu and her team scoured books about traditional Chinese medicine for leads on substances that might help them defeat malaria.
In a centuries-old text, The Manual of Clinical Practice and Emergency Remedies by Ge Hong of the Eastern Jin Dynasty, they found mention of sweet wormwood (Artemisia annua) – or in Chinese qinghao – being used to treat malaria.
Tu’s team put it to the test. At first the results were mixed but after much persistence the researchers identified an active compound in the plant that attacked malaria-causing parasites in the blood and would later become known as artemisinin.
Not content with identifying the remedy, which thus far had only been tested on animals, Tu took it upon herself to test it. “As the head of this research group, I had the responsibility,” she said.
The treatment worked and was proved safe for humans. Along with insecticide-treated bed nets, artemisinin became a crucial tool in the fight against malaria in Africa and Asia. Experts credit the discovery with saving millions of lives.
Recognition came late in life to Tu, a famously modest woman who once remembered the moment of her discovery by saying: “Of course that was a really happy moment in my career as a researcher.”
Only in 2011, when Tu was awarded the prestigious Lasker-DeBakey Clinical Medical Research Award, did Communist party officials in her home town begin scrambling to locate and preserve the scientist’s childhood home.
Asked for her thoughts on that award, Tu simply replied: “I am too old to bear this.”
Speaking to China’s Global People magazine in 2007, Tu insisted she had not given her life to medicine in order to make headlines. “I do not want fame. In our day, no essay was published under the author’s byline,” she said.
Showing the magazine’s journalists around her modest home in east Beijing, the elderly scientist pointed to cupboards and drawers stuffed full with lab records and correspondence chronicling her hunt for a cure for malaria.
“I didn’t keep all this stuff deliberately,” Tu told her guests. “It’s a habit of scientific work.”
Elinor Ostrom (The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2009)
It seemed to Elinor Ostrom that the world contained a large body of common sense. People, left to themselves, would sort out rational ways of surviving and getting along. Although the world’s arable land, forests, fresh water and fisheries were all finite, it was possible to share them without depleting them and to care for them without fighting. While others wrote gloomily of the tragedy of the commons, seeing only overfishing and overfarming in a free-for-all of greed, Mrs Ostrom, with her loud laugh and louder tops, cut a cheery and contrarian figure.
Years of fieldwork, by herself and others, had shown her that humans were not trapped and helpless amid diminishing supplies. She had looked at forests in Nepal, irrigation systems in Spain, mountain villages in Switzerland and Japan, fisheries in Maine and Indonesia. She had even, as part of her PhD at the University of California, Los Angeles, studied the water wars and pumping races going on in the 1950s in her own dry backyard.
All these cases had taught her that, over time, human beings tended to draw up sensible rules for the use of common-pool resources. Neighbours set boundaries and assigned shares, with each individual taking it in turn to use water, or to graze cows on a certain meadow. Common tasks, such as clearing canals or cutting timber, were done together at a certain time. Monitors watched out for rule-breakers, fining or eventually excluding them. The schemes were mutual and reciprocal, and many had worked well for centuries.
Best of all, they were not imposed from above. Mrs Ostrom put no faith in governments, nor in large conservation schemes paid for with aid money and crawling with concrete-bearing engineers. “Polycentrism” was her ideal. Caring for the commons had to be a multiple task, organised from the ground up and shaped to cultural norms. It had to be discussed face to face, and based on trust. Mrs Ostrom, besides poring over satellite data and quizzing lobstermen herself, enjoyed employing game theory to try to predict the behaviour of people faced with limited resources. In her Workshop in Political Theory and Policy Analysis at Indiana University—set up with her husband Vincent, a political scientist, in 1973—her students were given shares in a notional commons. When they simply discussed what they should do before they did it, their rate of return from their “investments” more than doubled.
“Small is beautiful” sometimes seemed to be her creed. Her workshop looked somewhat like a large, cluttered cottage, reflecting her and Vincent’s idea that science was a form of artisanship. When the vogue in America was all for consolidation of public services, she ran against it. For some years she compared police forces in the town of Speedway and the city of Indianapolis, finding that forces of 25-50 officers performed better by almost every measure than 100-strong metropolitan teams. But smaller institutions, she cautioned, might not work better in every case. As she travelled the world, giving out good and sharp advice, “No panaceas!” was her cry.
Rather than littleness, collaboration was her watchword. Neighbours thrived if they worked together. The best-laid communal schemes would fall apart once people began to act only as individuals, or formed elites. Born poor herself, to a jobless film-set-maker in Los Angeles who soon left her mother alone, she despaired of people who wanted only a grand house or a fancy car. Her childhood world was coloured by digging a wartime “victory” vegetable garden, knitting scarves for the troops, buying her clothes in a charity store: mutual efforts to a mutual end.
The same approach was valuable in academia, too. Her own field, institutional economics (or “the study of social dilemmas”, as she thought of it), straddled political science, ecology, psychology and anthropology. She liked to learn from all of them, marching boldly across the demarcation lines to hammer out good policy, and she welcomed workshop-partners from any discipline, singing folk songs with them, too, if anyone had a guitar. They were family. Pure economists looked askance at this perky, untidy figure, especially when she became the first woman to win a shared Nobel prize for economics in 2009. She was not put out; it was the workshop’s prize, anyway, she said, and the money would go for scholarships.
Yet the incident shed a keen light on one particular sort of collaboration: that between men and women. Lin (as everyone called her) and Vincent, both much-honoured professors, were joint stars of their university in old age. But she had been dissuaded from studying economics at UCLA because, being a girl, she had been steered away from maths at high school; and she was dissuaded from doing political science because, being a girl, she could not hope for a good university post. As a graduate, she had been offered only secretarial jobs; and her first post at Indiana involved teaching a 7.30am class in government that no one else would take.
There was, she believed, a great common fund of sense and wisdom in the world. But it had been an uphill struggle to show that it reposed in both women and men; and that humanity would do best if it could exploit it to the full.
The Nobel Prize is a universal symbol of excellence and the subject of Dix-Sept Femmes Prix Nobel des Sciences (“Seventeen Women Who Won a Nobel Prize for Science”) by Hélène Merle-Béral, professor of haematology at Pierre and Marie Curie University in Paris. As the title indicates, only 17 women have been awarded a science Nobel Prize since its inception in 1901. That amounts to less than three percent of all Nobel laureates. Why should that be?
There are at least three explanations for the low number of women in science. First, oppression, along with overt and official discrimination, long relegated women to secondary roles and deterred them from science. About 1,600 years ago, the Egyptian mathematician and philosopher Hypatia was stoned in public (according to some accounts, by order of the Bishop of Alexandria) because she was a woman, a pagan, and, above all, much too smart. In human societies it seems as if men, from time immemorial, have done everything possible to deny women access to knowledge and power, which are often linked. This hold began to loosen only during the Renaissance, when girls were (very) gradually allowed, and then encouraged, to pursue the same studies as boys. But the road has been long, and there is still quite a way to go.
In Western Europe, this era is more or less over, but naturally the vestiges of it remain: although girls are reclaiming the world of science little by little, it will take several generations before they accede to positions of power beyond the administrative level.
The second explanation has to do with male stereotypes of women, which are nowhere close to disappearing. A 2015 survey showed that 67 percent of men believe that women lack the capacity to become first-rate scientists. Hence the unconscious temptation of parents and teachers to discourage girls from these careers.
Most worrisome, however, is that the same survey showed that 66 percent of women believe it, too! This is the third, more insidious hurdle: women’s own internalization of stereotypes about themselves leads most of them to self-limit and to voluntarily reject careers connected to science and power.
This phenomenon — the “stereotype threat” — is well known. U.S. researchers demonstrated it in 1995 with respect to African Americans. Given a complex intellectual task to solve, African American subjects performed as well as whites, except when a group composed of both black and white volunteers was reminded that they would be taking a complicated intelligence test. This seemingly innocuous information evoked the racist stereotype about blacks being generally less intellectually endowed than whites. Disconcerted by the racist clichés, a significant number of blacks performed less well. The same phenomenon was subsequently identified in girls with respect to math and technical skills, though the latter is obviously less of a social handicap.
As is often the case, these toxic stereotypes contain what appear to be “kernels of truth” but in fact are distorted and erroneous. Thus, according to one argument, inequalities are justified by taking the actual situation as proof (and not the consequence) of the stereotype. For example, “The fact that there are fewer women scientists proves that women are worse in science.”
A second type of argument may be based in reality but has nothing to do with the actual situation. For instance, even taking body size into account, women typically have smaller brains than men: 1,130 cubic centimetres for women compared with 1,260 cubic centimetres for men. But it is impossible to conclude anything from that fact, because we also know that large brains are not necessarily more efficient. Einstein had an ordinary-sized brain. And brains, be they large or small, are designed to thrive and find inspiration. In this sense, it is interesting to see how social evolution (giving girls a chance) has affected their scientific scores, such as in math.
In the U.S. in the 1970s, boys and girls performed at the same level in math in primary school. Then, beginning at age 12, boys typically did better. Thirty years later — following the women’s liberation movement and the fight for equality — a new study was conducted involving nearly seven million students: the difference between the sexes had evaporated. Today, talented girls no longer eschew advanced studies, whether scientific or otherwise, although more of them choose life sciences (medicine or biology) over more abstract disciplines (math or physics). Other studies show that as social equality between men and women increases, the gap in math achievement between the sexes narrows.
Clichés about the intellectual superiority of men are being rejected and fought with ever greater frequency: we are on the right track to improvement. But women need to be aware of their susceptibility to the “stereotype threat”. And they can take heart from the example of these amazing scientists and not from the women who continue to entrench the stereotypes.
*Stories from Headstrong – 52 women who changed science and the world, by Rachel Swaby
We saw Orion in the night sky at the Pinnacles. He was standing on his head!
Professor Neville H. Fletcher (1930-2017) once said: “In astronomy circles, it is often remarked that God, in creating the universe, perversely located all the most interesting regions of our galaxy in the Southern Hemisphere, but all the astronomers in the north.” As a result, it can be more difficult to pick out in the Southern Hemisphere the shapes for which the constellations were originally named.
Orion, the hunter, is not proudly standing on his feet, but rather doing a cart-wheel 🙂
Orion, one of the 48 Greek constellations listed by Ptolemy in the Almagest, is the most splendid of constellations, befitting a character who was in legend the tallest and most handsome of men. His right shoulder and left foot are marked by the brilliant stars Betelgeuse and Rigel, with a distinctive line of three stars forming his belt. “No other constellation more accurately represents the figure of a man”, said Germanicus Caesar.
Manilius called it ‘golden Orion’ and ‘the mightiest of constellations’, and exaggerated its brilliance by saying that, when Orion rises, ‘night feigns the brightness of day and folds its dusky wings’. Manilius described Orion as “stretching his arms over a vast expanse of sky and rising to the stars with no less huge a stride”. In fact, Orion is not an exceptionally large constellation, ranking only 26th in size, but the brilliance of its stars gives it the illusion of being much larger.
Orion is also one of the most ancient constellations, being among the few star groups known to the earliest Greek writers such as Homer and Hesiod. Even in the space age, Orion remains one of the few star patterns that non-astronomers can recognize.
In the sky, Orion is depicted facing the snorting charge of neighbouring Taurus, yet the myth of Orion makes no reference to such a combat. However, the constellation originated with the Sumerians, who saw in it their great hero Gilgamesh fighting the Bull of Heaven. The Sumerian name for Orion was URU AN-NA, meaning light of heaven. Taurus was GUD AN-NA, bull of heaven.
Gilgamesh was the Sumerian equivalent of Heracles, which brings us to another puzzle. Being the greatest hero of Greek mythology, Heracles deserves a magnificent constellation such as this one, but in fact is consigned to a much more obscure area of sky. Orion might be Heracles in another guise, for one of the labours of Heracles was to catch the Cretan bull, which would fit the Orion – Taurus conflict in the sky. Ptolemy described him with club and lion’s pelt, both familiar attributes of Heracles, and he is shown this way on old star maps. Despite these parallels, no mythologist hints at a connection between this constellation and Heracles.
According to myth, Orion was the son of Poseidon, the sea-god, and Euryale, daughter of King Minos of Crete. Poseidon gave Orion the power to walk on water. Homer in the Odyssey describes Orion as a giant hunter, armed with an unbreakable club of solid bronze. In the sky, the hunter’s dogs (the constellations Canis Major and Canis Minor) follow at his heels, in pursuit of the hare (the constellation Lepus).
On the island of Chios, Orion wooed Merope, daughter of King Oenopion, apparently without much success, for one night while fortified with wine he tried to ravish her. In punishment, Oenopion put out Orion’s eyes and banished him from the island. Orion headed north to the island of Lemnos where Hephaestus had his forge. Hephaestus took pity on the blind Orion and offered one of his assistants, Cedalion, to act as his eyes. Hoisting the youth on his shoulders, Orion headed east towards the sunrise, which an oracle had told him would restore his sight. As the Sun’s healing rays fell on his sightless eyes at dawn, Orion’s vision was miraculously restored.
Orion is linked in a stellar myth with the Pleiades star cluster in Taurus. The Pleiades were seven sisters, daughters of Atlas and Pleione. As the story is usually told, Orion fell in love with the Pleiades and pursued them with amorous intent. But according to Hyginus, it was actually their mother Pleione he was after. Zeus snatched the group up and placed them among the stars, where Orion still pursues them across the sky each night.
Stories of the death of Orion are numerous and conflicting. Astronomical mythographers such as Aratus, Eratosthenes and Hyginus were agreed that a scorpion was involved. In one version, told by Eratosthenes and Hyginus, Orion boasted that he was the greatest of hunters. He declared to Artemis, the goddess of hunting, and Leto, her mother, that he could kill any beast on Earth. The Earth shuddered indignantly and from a crack in the ground emerged a scorpion which stung the presumptuous giant to death.
Orion is one of several constellations in which the star labelled Alpha is not the brightest. The brightest star in Orion is actually Beta Orionis, called Rigel from the Arabic rijl meaning ‘foot’, from Ptolemy’s description of it as ‘the bright star in the left foot’. Ptolemy also said it was shared with the river Eridanus, and some old charts depict it in this dual role.
Rigel is a brilliant blue-white supergiant, one of the rarest breeds in our galaxy. With their enormous brilliance — up to 100,000 times as bright as the sun — blue-white supergiants remain conspicuous over great distances. Rigel is one of the most intrinsically luminous of all stars and one of the hottest, apparently just reaching the prime of its life and “burning the candle at both ends”. It has been computed that Rigel’s luminosity is something like 57,000 times that of the sun. The star is about 800 light-years away.
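Those two figures are enough for a back-of-envelope check. The sketch below converts the quoted luminosity (57,000 suns) and distance (800 light-years) into an apparent magnitude, using the standard value of 4.83 for the sun’s absolute visual magnitude. It ignores bolometric corrections and interstellar dust, so it is only a rough estimate, but it lands close to the magnitude of about 0.1 that observers actually measure for Rigel.

```python
import math

SUN_ABS_MAG = 4.83      # absolute visual magnitude of the sun (standard value)
LY_PER_PARSEC = 3.2616  # light-years per parsec

def apparent_magnitude(luminosity_suns, distance_ly):
    """Apparent magnitude from luminosity (in suns) and distance (light-years)."""
    abs_mag = SUN_ABS_MAG - 2.5 * math.log10(luminosity_suns)
    distance_pc = distance_ly / LY_PER_PARSEC
    return abs_mag + 5 * math.log10(distance_pc / 10)

# Figures quoted above: ~57,000 solar luminosities, ~800 light-years.
m = apparent_magnitude(57_000, 800)
print(round(m, 1))  # prints -0.1, close to Rigel's observed magnitude of about 0.1
```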
The star is only 10 million years old, compared with the Sun’s 4.5 billion, and, given its measured size and brightness, it is expected to end its life in a supernova one day. It also has two known companions, Rigel B and Rigel C.
In contrast, red supergiants like Betelgeuse (Alpha Orionis) are gigantic bloated globes of cooler gas. If such a star were to replace the sun in the solar system, it might extend beyond Mars’ orbit. It is located about 500 light-years away, but does not shine with a steady light. Bright red Betelgeuse is near the end of its career. When the core can no longer support the star’s vast weight, it will collapse, triggering a cataclysmic supernova explosion. Betelgeuse is in its final stage and could explode in only a few million years.
Stars produce their energy by fusing hydrogen into helium deep within their cores. When a star accumulates sufficient helium in its core, its energy output increases significantly, and it swells into a red giant or supergiant, like Betelgeuse. This is what Rigel will become in a few million years.
Betelgeuse is a ‘pulsating’ star, expanding and contracting spasmodically with a diameter that varies from 550 to 920 times that of the sun, but so irregular are these pulsations that no one can predict exactly when it will expand or contract. In trying to describe Betelgeuse many years ago, Henry Neely, a lecturer at New York’s Hayden Planetarium, once noted that it is “like an old man with his strength almost entirely spent, panting in the asthmatic decrepitude of old age”.
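The claim a few paragraphs back, that a star like Betelgeuse swapped in for the sun might reach past Mars’ orbit, follows directly from the diameter range just quoted. A quick sketch (standard values for the solar radius and the astronomical unit, with Mars’ mean distance of 1.52 AU) shows that even at its smallest the star would swallow Mars:

```python
SUN_RADIUS_KM = 6.957e5  # solar radius, km (standard value)
AU_KM = 1.496e8          # one astronomical unit, km
MARS_ORBIT_AU = 1.52     # Mars' mean distance from the sun, AU

# A star of 550 (or 920) solar *diameters* also has 550 (or 920) solar *radii*,
# so placed at the sun's position its surface would sit this many AU out.
radii_au = {d: d * SUN_RADIUS_KM / AU_KM for d in (550, 920)}
for solar_diameters, radius_au in radii_au.items():
    print(solar_diameters, round(radius_au, 2), radius_au > MARS_ORBIT_AU)
# 550 -> about 2.56 AU, 920 -> about 4.28 AU: past Mars' orbit either way
```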
Betelgeuse is one of the most famous yet misunderstood star names. It comes from the Arabic yad al-jauza, often wrongly translated as ‘armpit of the central one’. In fact, it means ‘hand of al-jauza’. But who (or what) was al-jauza? It was the name given by the Arabs to the constellation figure that they saw in this area, seemingly a female figure encompassing the stars of both Orion and Gemini. The word al-jauza apparently comes from the Arabic jwz meaning ‘middle’, so the best translation that modern commentators can offer is that al-jauza means something like ‘the female one of the middle’. The reference to the ‘middle’ may be to do with the fact that the constellation lies astride the celestial equator. Ptolemy described it in the Almagest as ‘the bright, reddish star on the right shoulder’.
The Greeks did not give a name to either Betelgeuse or Rigel, surprisingly for such prominent stars, which is why we know them by their Arabic titles.
The left shoulder of Orion is marked by Gamma Orionis, known as Bellatrix, a Latin name meaning ‘the female warrior’. The star at the hunter’s right knee, Kappa Orionis, is called Saiph. This name comes from the Arabic for ‘sword’, and is clearly misplaced. The three stars of the belt – Zeta, Epsilon, and Delta Orionis – are called Alnitak, Alnilam and Mintaka. The names Alnitak and Mintaka both come from the Arabic word meaning ‘the belt’ or ‘girdle’. Alnilam comes from the Arabic meaning ‘the string of pearls’, another reference to the belt of Orion.
Below the belt lies a hazy patch marking the giant’s sword or hunting knife. This is the location of the Orion Nebula, one of the most-photographed objects in the sky, a mass of gas from which a cluster of stars is being born. The gas of the Nebula shines by the light of the hottest stars that have already formed within; it is visible to the naked eye on clear nights.
In one of the most detailed astronomical images ever produced, NASA’s Hubble Space Telescope captured an unprecedented look at the Orion Nebula. This turbulent star formation region is one of astronomy’s most dramatic and photogenic celestial objects. More than 3,000 stars of various sizes appear in this image. Some of them have never been seen in visible light. These stars reside in a dramatic dust-and-gas landscape of plateaus, mountains, and valleys that are reminiscent of the Grand Canyon. The Orion Nebula is a picture book of star formation, from the massive, young stars that are shaping the nebula to the pillars of dense gas that may be the homes of budding stars.
Blue moon, super moon and blood moon have combined to create a moment not seen in western hemisphere skies in more than 150 years.
A rare celestial event graced the skies last night: a blue moon and a lunar eclipse combined with the moon being at its closest point to Earth, resulting in what is being called a “super blue blood moon”. On their own, a full moon, a total lunar eclipse, a blue moon and a supermoon are not that unusual. What is rare is that they all happened together on one day. The last time the elements coincided was in 1866.
A “super blue blood moon” is the result of a blue moon – the second full moon in a calendar month – occurring at the same time as a supermoon, when the moon is near perigee and appears about 14% larger than it does at apogee (a difference hard to spot with the naked eye), and a so-called blood moon – the moment during a lunar eclipse when the moon, in the Earth’s shadow, takes on a reddish tint.
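The inverse-square law shows where figures like these come from. The perigee and apogee distances below are typical round numbers (assumed here for illustration, not taken from the article): apparent diameter scales with 1/distance, so a perigee full moon looks about 14% wider than an apogee one, while brightness scales with 1/distance squared, giving a correspondingly larger gain of roughly 30%.

```python
PERIGEE_KM = 356_500  # typical lunar perigee distance (assumed round figure)
APOGEE_KM = 406_700   # typical lunar apogee distance (assumed round figure)

# Apparent diameter scales as 1/distance; brightness as 1/distance**2.
diameter_ratio = APOGEE_KM / PERIGEE_KM
brightness_ratio = diameter_ratio ** 2
print(round(diameter_ratio, 2), round(brightness_ratio, 2))  # prints 1.14 1.3
```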
Like the Earth, the moon is always half illuminated by the sun. As the moon orbits the Earth, we see different amounts of its lit-up side.
A full moon is when we see its entire lit-up side. This occurs every 29.5 days, when the moon is directly opposite the sun relative to the Earth.
The moon’s orbit is tilted by about 5 degrees relative to the Earth’s orbit. So, most of the time the moon ends up a little above or below the path Earth follows as it revolves around the sun, and the Earth’s shadow misses the moon, falling either above or below it. But twice a year, the moon crosses into our planet’s orbital plane.
If that crossing corresponds to a full moon, the moon will pass into the Earth’s shadow, resulting in a total lunar eclipse. Since the moon needs to be behind the Earth, relative to the sun, a lunar eclipse can only happen on a full moon.
When a lunar eclipse happens, the moon appears to darken as it moves into the darkest part of the Earth’s shadow, called the umbra. When the moon is all the way in shadow it doesn’t go completely dark; instead, it looks red because of a process called Rayleigh scattering. The gas molecules of Earth’s atmosphere scatter bluer wavelengths of sunlight, while redder wavelengths pass straight through.
This is why we have blue skies and red sunrises and sunsets. When the sun is high in the sky, red light passes straight through to the ground while blue light is scattered in every direction, making it more likely to hit your eye when you look around. During a sunset, the angle of the sun is lower in the sky and that red light instead passes directly into your eyes while the blue light is scattered away from your line of sight.
In the case of a lunar eclipse, the sunlight that makes it around Earth passes through our atmosphere and is refracted toward the moon. Blue light is filtered out, leaving the moon looking reddish during an eclipse.
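The strength of Rayleigh scattering rises steeply as wavelength shortens, roughly as 1 over the wavelength to the fourth power. A one-line check with representative wavelengths (450 nm for blue and 650 nm for red, both assumed for illustration) shows why the blue end of sunlight is stripped out so effectively:

```python
BLUE_NM = 450  # representative blue wavelength (assumed)
RED_NM = 650   # representative red wavelength (assumed)

# Rayleigh scattering strength scales roughly as 1 / wavelength**4.
ratio = (RED_NM / BLUE_NM) ** 4
print(round(ratio, 1))  # prints 4.4: blue is scattered over four times as strongly
```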
By one definition (there are several), a blue moon occurs any time a second full moon falls in a single calendar month. Since there are about 29.5 days between full moons, we usually end up with only one per month; but because most months are longer than 29.5 days, it occasionally works out that a month gets two. We already had a full moon on the 1st of January, and yesterday was the second full moon of the month, making it a blue moon (in name only, not in colour 🙂 ). By this definition our next blue moon is in March, leaving February with no full moon at all this year.
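The calendar arithmetic behind this is easy to sketch. Real full-moon times drift by hours either way, but stepping the mean synodic month of 29.53 days forward from the 1 January full moon reproduces the pattern: a second full moon lands on 31 January, and 28-day February is stepped over entirely.

```python
from datetime import date, timedelta

SYNODIC_MONTH = 29.53               # mean days between full moons
first_full_moon = date(2018, 1, 1)  # the 1 January full moon mentioned above

for n in range(3):
    fm = first_full_moon + timedelta(days=round(n * SYNODIC_MONTH))
    print(fm)  # 2018-01-01, then 2018-01-31, then 2018-03-01: February is skipped
```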
Finally, to add the cherry on top 🙂 this was also a supermoon. The moon’s orbit is not perfectly circular, meaning its distance from Earth varies as it goes through one cycle. The closest point in its orbit is called the perigee. A full moon that happens near perigee is called a supermoon by some. The astronomical term for a moon at perigee is “perigee full moon” or a “perigee syzygy”, which means three celestial bodies in a line.
A lunar eclipse is a great opportunity for scientific learning. The details of how the sunlight we see reflected from the moon during an eclipse has been altered, scattered and absorbed on its way through our atmosphere, and how this is affected by, for example, volcanic eruptions or even meteor showers, are still being studied.
In recent years, there has been a resurgence of interest in studying lunar eclipses from a surprising source: the discovery of planets orbiting other stars. If we see an “exoplanet” pass across the face of its parent star, a small fraction of the starlight we collect will have passed through the planet’s atmosphere. Comparing spectra – measurements of light broken down by wavelength – taken during such a transit with those taken out of transit can help determine the composition of the atmosphere. This could include biosignatures such as oxygen, ozone or methane, which might give away the presence of extraterrestrial life.
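The size of the signal involved makes the challenge clear. The fraction of starlight a transiting planet blocks is simply the square of the ratio of planet radius to star radius; for an Earth-sized planet crossing a sun-like star (standard radii assumed below, not values from the article) that is under a hundredth of a percent, and the light filtered through the planet’s atmosphere is a tiny slice of even that.

```python
EARTH_RADIUS_KM = 6_371  # mean Earth radius, km (standard value)
SUN_RADIUS_KM = 695_700  # solar radius, km (standard value)

# Transit depth: fraction of the star's disc covered by the planet.
depth = (EARTH_RADIUS_KM / SUN_RADIUS_KM) ** 2
print(f"{depth:.1e}")  # prints 8.4e-05, i.e. under 0.01% of the starlight
```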
A lunar eclipse is a perfect opportunity to study the details of the same effect close to home: sunlight reflected from the moon during an eclipse has passed through the Earth’s atmosphere and been imprinted with its characteristics. This means the Earth takes the place of a transiting exoplanet. Various lunar eclipse studies are being conducted ahead of observations with upcoming facilities – such as the James Webb Space Telescope and the European Extremely Large Telescope – which have the potential to study the atmospheres of distant Earth-like planets.