When an optometrist shines a bright light into your eyes, a vast, branching tree sprouts in your field of vision. This is the shadow of blood vessels. Though we normally can’t perceive them, these vessels always occlude a portion of what we see, and for an important reason. They power the retina, a thin layer of nerve tissue in the back of the eye that communicates light signals to the brain.
The retina is one of the body’s most energetically expensive tissues. Built from complex networks that can include more than 100 different types of neurons, retinal tissue consumes two to three times more energy than the same mass of typical brain tissue. That’s why most vertebrate retinas, including our own, are furrowed with these dense, branching networks of blood vessels, which deliver oxygen and other ingredients for producing energy.
But there’s a significant exception to this rule. Birds have retinas that mostly lack blood vessels. This may seem especially strange given birds’ exceptional vision. The bird retina is “one of the most metabolically active tissues in the animal kingdom, yet it worked with no apparent blood perfusion,” said the evolutionary physiologist Christian Damsgaard. “It was a complete paradox.” For centuries, this puzzled scientists, who figured that the bird retina must obtain oxygen through some unique, undiscovered process.
Damsgaard is the lead author of a study showing for the first time that bird retinas don’t have some unusual adaptation for acquiring oxygen — they survive entirely without it. Instead, they power the tissue with a process called anaerobic glycolysis, which is significantly less efficient than oxygen-powered metabolism but gets the job done.
By studying how tissues can survive without oxygen, researchers can potentially develop therapeutics to treat conditions of oxygen deprivation, such as strokes. More fundamentally, they want to understand the limits of evolution.
“What are the extremes of life?” Damsgaard said. “How far can we bend the conditions under which highly metabolically active tissues can actually survive?”
A bird, he learned, can bend them pretty far.
👀 Read the full story at the link in our bio.
One Tuesday in June 2025, a white Chevy Suburban set off down the northernmost highway in North America. The sun of Alaska’s polar summer hadn’t set in 40 days, and it wouldn’t set for another 35. But for Michael Van Nuland, the biologist in the driver’s seat, time was already running out.
The SUV, packed with four days of fieldwork essentials — rubber boots for mucking in marshes, GPS for centimeter-level precision, a steel tube for extracting soil cores from permafrost — growled along the Dalton Highway, which sews an asphalt-and-gravel seam through the tundra of Alaska’s northern coast. Through the window, the lack of visible trees suggested a barren landscape, but looks are deceiving. The miles of sedge and duvet-thick moss formed the basis of a feast for seasonal caribou, grizzlies, muskox, and roughly 200 bird species.
Van Nuland was more interested in what was happening underground, where sprawling systems of fungal threads — from microscopic ducts to arteries thick as yarn — extended dozens of feet horizontally in all directions. By connecting plant roots and circulating nutrients, this dense, networked scaffold sustained life above the surface.
“Some people just see dirt as dirt. But it’s a living, breathing system,” said Van Nuland, the lead data scientist of the nonprofit Society for the Protection of Underground Networks (SPUN). “The complexity you see in a forest — the layers of canopy, the different species of birds and insects … You’re walking over an equally or possibly even more complex system below ground.”
🔗 Read Max Levy’s full story at the link in our bio.
📸 Field coverage by Max Levy
Mathematicians spend most of their time thinking about what’s knowable. But the unknowable can be just as compelling.
Perhaps the most famous example comes from a theorem by the logician Kurt Gödel. Gödel’s celebrated result — one of two “incompleteness theorems” he published in 1931 — established that for any reasonable set of basic mathematical assumptions, called axioms, it’s impossible to use those axioms to prove that they won’t eventually lead to contradictions. Though mathematicians continued their research much as they had before, they would never again be certain that their rules were self-consistent.
More than 50 years after Gödel’s theorem, cryptographers devised a radical new proof method in which unknowability played a very different role. Proofs based on this technique, called zero-knowledge proofs, can convince even the most skeptical audience that a statement is true without revealing why it’s true.
These two flavors of unknowability, which originated decades apart and in different fields, were long considered completely unrelated. Now the computer scientist Rahul Ilango has established a striking connection between them. While still a graduate student, he devised a new type of zero-knowledge proof in which secrecy stems from the fundamental limits of math. Ilango’s approach gets around limitations of zero-knowledge proofs that researchers have long thought insurmountable, pushing the boundaries of what such a proof can be. The work has also spurred researchers to explore other intriguing links between mathematical logic and cryptography.
🔗 Read the full story at the link in our bio.
🎨 @kristina.armitage /Quanta Magazine
In the summer of 1991, Pinatubo, a volcano in the Philippines, self-destructed. The eruption started on June 12, and three days later it culminated in a tremendous explosion. By the time pyroclastic flows — incandescent avalanches of molten rock and gas — tumbled down its sterilized slopes, Pinatubo’s peak had been obliterated and replaced by a 2.5-kilometer-wide chasm.
The eruption killed more than 800 people, mainly because roofs, weighed down by rain-saturated ash, collapsed. But it could have been so much worse: About 250,000 people, across multiple cities and a sprawling U.S. Air Force base, lived in the volcano’s shadow. When Pinatubo started convulsing and belching steam in April of that year, scientists from the United States and the Philippines deployed an array of instruments that tracked the volcano’s inner tumult.
“We didn’t know much about that volcano, and so there was this really rapid geological assessment. And the assessment said, ‘Oh, crap, when this thing erupts, it only erupts big,’” said Mike Poland, the scientist-in-charge at the U.S. Geological Survey’s Yellowstone Volcano Observatory. “And that became the basis for a forecast.”
By early June, ash and lava were escaping Pinatubo’s flanks, and an evacuation was ordered, just a few days before the cataclysmic hammer fell. It was, in other words, a very close call.
Those scientists saved countless lives, but their forecast was more of an educated guess than it might have appeared. It was nothing like a weather forecast; they couldn’t say with anything resembling certainty that an explosive eruption would occur on June 12, nor could they predict how that eruption would evolve.
With very few exceptions, this imprecision holds for all well-monitored volcanoes. Still, volcanology as a field has made great leaps since Pinatubo blew its top: The instrumentation is more advanced, machine learning has made interpreting data far more efficient, and scientists have a much better understanding of the magmatic plumbing that drives volcanism. But can volcanoes ever be truly 𝘱𝘳𝘦𝘥𝘪𝘤𝘵𝘢𝘣𝘭𝘦?
🌋How close are we to forecasting volcano behavior the way we forecast the weather?
Thunderstorms have captivated humanity for millennia, and yet their inner workings remain deeply mysterious. Storm clouds are opaque. They’re dangerous to approach. And they’re too big to fit in a lab. Inquisitive researchers have been sending kites, balloons, and rockets up into them for nearly three centuries, and they’ve learned a lot. But every time lightning lovers get closer to the action, they discover major gaps in their understanding. For the past 50 years, researchers have focused on one particular gap: How does the jagged channel of white-hot air we call a lightning bolt get started?
Recently, the field has experienced a sort of renaissance as researchers have devised new ways to pierce the clouds. They’ve taken a slew of instruments built to study violent cosmic events and trained them on the brutality of terrestrial thunderstorms. They’ve seen lightning shooting out X-rays as it zigs and zags, spotted flickering glows of gamma rays coming from thunderclouds, and, very recently, detected hints of bolts traveling in unexpected directions.
No one has put all the pieces together, but a new understanding of lightning is taking shape. The fearsome flashes look less and less like the supersize electric sparks that physicists once imagined them to be. While electricity plays a central role, lightning bolts are formed and shaped by the whole physics canon — from cosmic blasts to particle physics. In particular, triggering a bolt seems to require extreme events more typically associated with supernovas, black holes, and particle colliders than with fluffy clouds.
“There is a growing consensus in the field that high-energy processes play a critical role in lightning initiation,” said Caitano da Silva, an atmospheric physicist at New Mexico Tech. “It’s an exciting time to be in this field.”
⚡ Read the full story at the link in our bio.
Living on light is a dangerous game. Not only do the sun’s rays carry ultraviolet waves that can snap DNA strands and degrade molecules, but they also vary wildly in intensity. Plants must endure and thrive through soft morning light and blazing summer afternoons, through shade one moment and full sun the next. Their solar calories come in a trickle — or a deluge.
“Think of a cloud obscuring the sun, and suddenly the cloud passes and the sun ray hits a leaf,” said Nico Schramma, a biophysicist at Amsterdam University Medical Center. “Something has to change because the intensity might change a hundredfold.”
Plants aren’t passive; they respond. They can reorient themselves by rotating their leaves and stems to seek sunbeams or shade, but this mechanism works on a scale of minutes or hours. For finer responses, their cells must mobilize as well. Within every plant cell are chloroplasts, disc-shaped organelles that turn sunlight into sugars. And while plants have to remain mostly stationary, chloroplasts do not.
“Chloroplasts move,” Schramma said. He likened their behavior to that of a flock of sheep seeking shade on a bright day: Intense light similarly shepherds chloroplasts into shaded patches along the cell wall.
“Light is the best friend and worst enemy of chloroplasts,” said Mazi Jalaal, a biophysicist at the University of Amsterdam who supervised Schramma’s doctoral work. “They need it for photosynthesis. But the moment the light intensity goes too high, they have to run away from it.”
🌿 How does each organelle balance the plant’s appetite for light with its distaste for too much? Read the full story at the link in our bio.
Roughly 540 million years ago, toward the start of the Cambrian Period, the planet was mostly ocean, and life was both alien and vaguely familiar. Small, phallic-looking worms rummaged through ocean-floor sediments while blind swimming beasts flung out whiplike tentacles to ensnare prey. Meanwhile, early versions of mollusks and sponges populated the seafloor as jellyfish floated above.
Shallow ocean waters and an increase in oxygen levels in Earth’s atmosphere had triggered what we call the Cambrian explosion: the first major blossoming of modern biodiversity. Life forms of increasing complexity filled the seas, providing the evolutionary foundations for nearly every phylum alive today.
Then, around 513.5 million years ago, came the Sinsk event, the first known mass extinction of the Phanerozoic, our current geologic eon. As Earth’s tectonic plates shifted, huge volumes of volcanic gas and carbon dioxide transformed the atmosphere, sucking oxygen out of the oceans and devastating shallow-water environments.
In 2026, paleontologists in southern China published a trove of some of the best-preserved Cambrian fossils to date — a massive collection of 8,681 fossils spanning 153 species.
The fossils date to approximately 1.5 million years after the Sinsk event, and, to paleontologists’ delight, more than half of the species uncovered at the Huayuan site are new to science.
🔗 Explore the image gallery at the link in our bio.
📸 Han Zeng/Nanjing Institute of Geology and Palaeontology, Chinese Academy of Sciences
Aristotle saw infinity as something that you could move toward but never reach. “The fact that the process of dividing never comes to an end ensures that this activity exists potentially,” he wrote. “But not that the infinite exists separately.” For millennia, this “potential” version of infinity reigned supreme.
But in the late 1800s, Georg Cantor and other mathematicians showed that the infinite really can exist. Cantor’s approach was to treat a collection of numbers, such as the integers, as a single completed infinite set. This approach would become essential in the creation of the foundational theory of mathematics that’s still used today. Infinity, he showed, is an 𝘢𝘤𝘵𝘶𝘢𝘭 object. Moreover, it can come in many different sizes; by manipulating and comparing these different infinities, mathematicians can prove surprising truths that on their face seem to have nothing to do with infinity at all. While few mathematicians spend much time in the realm of the higher infinite, infinity is assumed by default.
But this foundation of modern math has inspired fierce arguments since it was first proposed. One reason is that accepting a core assumption about infinity allows you to construct strange paradoxes: It becomes possible, for instance, to carve a ball into five pieces and reassemble them into two new balls, each with a volume equal to that of the original.
Another objection is more philosophical. In the decades after Cantor’s revelations, some mathematicians argued that you cannot simply assert the existence of a mathematical structure — you must prove that it exists through a process of mental construction. In this “intuitionist” philosophy, for example, pi is less a number with an infinite non-repeating decimal expansion, and more a symbol that represents an algorithmic process for generating digits.
But intuitionism only requires that a given mental construction be possible in theory: It prohibits 𝘢𝘤𝘵𝘶𝘢𝘭 infinity but permits 𝘱𝘰𝘵𝘦𝘯𝘵𝘪𝘢𝘭 infinity. Some mathematicians still weren’t satisfied with this. They were troubled by numbers so large they could never be written down. And so they sought to take intuitionist ideas to an extreme.
🔗 Link in our bio.
How do mathematicians decide that something is true? They write a proof.
Often they start with proofs that already exist, building on or drawing connections between proven claims. Each of these proofs, in turn, has relied on other proofs to make its point, and so on. Proofs upon proofs. Truths upon truths. But eventually this process must reach an end. At some point, things are true simply because they are.
These truths are the axioms, the ground rules. And it is tempting to stop there — to declare, as Penelope Maddy, a philosopher of mathematics at the University of California, Irvine, put it, “that axioms are obvious or intuitive or conceptual truths.”
After all, most mathematicians simply accept that their work relies on an axiomatic system — namely, “Zermelo-Fraenkel set theory with the axiom of choice,” or ZFC — if they bother to acknowledge the axioms at all. ZFC is a list of ten basic principles that together form the foundation on which nearly all of modern mathematics is built.
But a closer inspection reveals a more unsettled, human process of establishing truth. “Any honest, clear-eyed examination of how the axioms of ZFC came to be adopted would have to acknowledge that a wide range of mathematical considerations went into these decisions,” Maddy said.
That process, which began over a century ago, is still very much in progress.
🔗 Read the full story at the link in our bio.
🎨 @kristina.armitage /Quanta Magazine
Ice comes in more forms than what you’ll find in a freezer or a glacier. Since 1900, scientists have observed more than 20 phases of ice, many of them shaped under extreme conditions. The growing list includes hot ice and even ice that conducts electricity.
Ice is the name for any phase of water that is solid and crystalline, meaning that it has a repeating molecular structure. Over the past decade, computer simulations have predicted tens of thousands of possible forms of ice. Though uncommon on our planet, exotic ice may exist in off-Earth environments, from cold and amorphous comet tails to the hot and crushing cores of icy planets.
As physicists put water to the test with improved experimental techniques, they keep finding surprises. “You take water, and just the way you compress it — a little bit faster, a bit slower, up and down, at the right timescale — and then you can find this completely unexpected behavior,” said Marius Millot, a research scientist at Lawrence Livermore National Laboratory (LLNL) in California.
Abandoning old assumptions and applying new techniques, scientists have discovered three new kinds of ice in the past year, including two of the most complex ice phases ever seen. “It seems a remarkable time at the moment,” said Chris Pickard, a physicist at the University of Cambridge. “They’re really finding a lot more of these structures.”
🍦 Read the full story at the link in our bio.
Every experience we have changes our brain, the way a ceramicist reshapes a slab of clay. Every corner we turn, every conversation we have, every shudder we feel causes cascading effects: Chemicals are released, electricity surges, the connections between brain cells tighten, and our mental models update.
The brain is “incredibly plastic, and it stays that way throughout the lifespan of a human,” said the neuroscientist Christine Grienberger. This plasticity, the quality of being easily reshaped, makes the brain really good at learning — an essential process that allows us to remember the plotline of a novel, navigate a new city, pick up a new language, and avoid touching a hot stove. But neuroscientists are still uncovering fundamental rules that describe how neuroplasticity reshapes brain connections.
Recently, neuroscientists described a new form of neuroplasticity that might help the brain learn across a timescale of several seconds — long enough to capture the behavioral process of learning from a single experience. In two recent reviews, they describe “behavioral timescale synaptic plasticity,” or BTSP. This type of learning in the hippocampus, the brain’s memory hub, is caused by an electrical change that affects multiple neurons at once and unfolds across several seconds. Researchers suspect that it may help the brain learn in a single attempt.
“It’s pretty clear that [BTSP is] a strong, powerful mechanism that can lead to immediate memory formation,” said the neuroscientist Daniel Dombeck, who was not involved with the theory’s development. “It’s something that has been missing in the field for a long time.”
By uncovering BTSP, neuroscientists have unraveled more of the story of how the brain changes with experience, bringing us closer to understanding how learning happens. “Neuroplasticity is … one of the last frontiers of the brain,” said the neuroscientist Attila Losonczy, who studies BTSP. “If we understand this, I think we take a major step towards understanding how the brain works.”
🧠 Read the full story at the link in our bio.
🎨 @artscistudios, Yasemin Saplakoglu/Quanta Magazine