If you look at businesses that scaled deep and wide, the pattern repeats: Amazon built AWS by turning internal infrastructure into a platform, and Google built its own TPUs to power its AI and ML workloads. There are many such examples of outward infra scaling done right.
@bigbasketcom is making the same play with global supply chain management.
#BigBasket #BBMatrix #SupplyChain #QuickCommerce #TechFondue
Imagine how manuscripts, stone carvings, Vedic scriptures passed down through generations, and the religious teachings of the past (read: hundreds and thousands of years ago) became the guiding light for entire civilizations that exist today.
So the question for our century: how will the people of the future remember us, hundreds and thousands of years from now? How will someone living in the year 2950 AD encounter the movies, music, writing, research, and geopolitics of the people of the past, that is, us?!
With everything moving to digital footprints, Project Silica is taking a stand worth noticing.
You see, hard drives fail within a decade, magnetic tape rots within three, and cold storage rarely makes it past a century.
Project Silica is Microsoft Research Cambridge's attempt to engineer storage that outlives the institution storing it.
Femtosecond laser pulses etch tiny structures called voxels into glass, with data sitting in the material itself rather than any electrical state that needs power to hold.
The 10,000-year claim is an extrapolation from accelerated aging at 500°C, not a measured lifespan.
What is verified is a medium that needs no power, cannot be overwritten, and survives floods, fires, and electromagnetic pulses.
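For the curious: that 10,000-year figure comes from Arrhenius-style extrapolation, where baking the glass at high temperature stands in for millennia at room temperature. A minimal sketch in Python — the activation energy here is an assumed, illustrative value, not a number published for Silica's glass:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(e_a_ev, t_use_k, t_test_k):
    """Arrhenius acceleration factor: how much faster degradation runs
    at the test temperature than at the use temperature, for an
    assumed activation energy e_a_ev (in eV)."""
    return math.exp(e_a_ev / K_B * (1.0 / t_use_k - 1.0 / t_test_k))

# Assumed 1.1 eV activation energy; 20 C use vs 500 C accelerated test.
af = acceleration_factor(1.1, t_use_k=293.0, t_test_k=773.0)
test_hours = 1.0  # one hour of baking at 500 C ...
print(f"... maps to roughly {test_hours * af / 8766:.2e} years at room temperature")
```

The acceleration factor is exquisitely sensitive to the assumed activation energy, which is exactly why the 10,000-year claim is an extrapolation rather than a measurement.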
A quiet rethinking of how a civilisation remembers itself.
Follow TechFondue on Instagram and Substack for the science that actually matters.
Zomato + Blinkit clocked 450 million Kafka messages per minute.
Zepto has 5 different databases for 5 access patterns.
ML models working overtime to predict what our neighbourhood will order before we know we need it.
India's quick commerce companies aren't grocery delivery apps with a backend; they're genuinely running autonomous-vehicle-grade infrastructure to deliver breakfast eggs in 10 minutes.
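To put 450 million Kafka messages per minute in perspective, here's a back-of-envelope partition count in Python. The per-partition throughput ceiling is an assumption for illustration, not a figure from any of these companies:

```python
# Back-of-envelope: how many Kafka partitions does 450M messages/minute need?
MSGS_PER_MINUTE = 450_000_000
msgs_per_second = MSGS_PER_MINUTE / 60  # 7.5 million messages every second

# Assumed sustained throughput for a single partition -- illustrative only.
per_partition_ceiling = 50_000  # messages/second

# Ceiling division: round up, since a fractional partition doesn't exist.
partitions_needed = -(-msgs_per_second // per_partition_ceiling)
print(f"{msgs_per_second:,.0f} msg/s -> at least {partitions_needed:.0f} partitions")
```

And that's before replication, consumer lag, or traffic spikes at dinner time.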
The tech stack behind @letsblinkit, @zeptonow, @instamart, @bigbasketcom, and @flipkartminutes is one of the most operationally demanding real-time AI systems running anywhere today.
Full research breakdown on my substack.
.
.
.
#QuickCommerce #QComm #SystemDesign #Blinkit #Zepto
It looks like the "age-of-research" camp is starting to take the lead.
#IlyaSutskever called the end of the scaling era and the re-clocking of the research era's cycle five months ago on Dwarkesh's podcast. The man who co-wrote the original scaling laws had just called time on the scaling era.
Last week, Anthropic crossed $30B in ARR and released Claude Mythos Preview. The lab spending roughly four times less than OpenAI on training is now compounding faster than the lab with the largest cluster in the world.
Sutskever further divided the last decade into two chapters.
- From 2012 to 2020 was the Age of Research, when ideas dominated progress. AlexNet trained on 2 GPUs, and the original Transformer on 8 to 64. Researchers tinkered, capital was small relative to insight, and breakthroughs came from architectural discovery.
- From 2020 to 2025 was the Age of Scaling, when pre-training became a reliable recipe. The formula was simple: add compute, add data, and watch the loss curve fall. Labs stopped asking which ideas to test and started asking how many GPUs they could buy.
His claim is that the second chapter is closing. Pre-training is hitting diminishing returns because data is finite, and the next decade belongs to research again.
The Mythos Preview benchmark deltas back him up. A 55-point USA Math Olympiad jump from Claude Opus 4.6 is not what happens when you add 10x the GPUs to the same recipe. It is what happens when you discover a better recipe.
And the cosigner list tells you what is happening.
Yann LeCun, #GeoffreyHinton, Andrew Ng, and Dario Amodei have all said some version of the same thing. The researchers who built the scaling era are the same researchers who now think it is ending.
And there are a lot of arXiv papers suggesting similar paths.
That is not a coincidence. It is the field correcting itself in real time.
I've put together a condensed carousel of the thesis, the receipts, and the paradox that is reshaping venture strategy right now.
#ageofscaling #ageofresearch #artificialintelligence
Yesterday (07/04), @claudeai did a gated launch (Project Glasswing) of the Mythos Preview and the benchmarks are signaling something.
Images:
1. Non-hallucination rates. Knowing the answer is half the job but knowing when you don't is the other half.
2. LAB-Bench FigQA. The first Claude that reads a figure better than the researcher who made it.
3. USAMO 2026. A 97.6% on a contest where one wrong line voids a problem.
4. Secret-keeping robustness. How long can a Claude keep a secret?
5. ScreenSpot-Pro. Can a model click the right button in Photoshop?
.
.
.
#ClaudeMythos #Mythos #Anthropic #GlasswingProject
For 53 years, no human has flown beyond low Earth orbit. And yesterday, we Earthlings got the most ecstatic lunar flyby.
How gorgeous were those images: of Earth, of the Moon, of Orion, of the crew inside it.
1st April, 2026 was historic. Four astronauts climbed into the most powerful rocket ever built and aimed it at the far side of the Moon.
Reid Wiseman. Victor Glover. Christina Koch. Jeremy Hansen.
Ten days. 685,000 miles. Absolutely zero margin for error, or the voyage drifts out into deep space.
Artemis II's mathematics is not about the landing; it is a pressure test of all the equations and estimates that decide our best path to a future Moon landing.
And for that, six things have to go right before NASA commits four humans to the lunar surface in 2028 [Artemis III].
If any one of them fails, Artemis III slips.
If all six pass, humans walk on the Moon for the first time since Apollo 17.
There's more.
- a heat shield redesigned after Artemis-I scorched its predecessor.
- a digital twin watching 22 billion sensor relationships in real time.
- a 50-minute radio blackout on the far side where no signal reaches Earth.
AND
- a free-return trajectory the crew cannot abort.
@techfondue pulled apart the mission, the hardware, the crew, and the gates.
Here's a carousel you'd enjoy reading 🎴
Open source wins on cost per token, data sovereignty, and customisation depth.
So why do 80% of AI tokens globally flow through closed APIs?
Because the technical ceiling is not the bottleneck; the setup-complexity floor is.
.
.
.
#aiinfrastructure #nvidia #opensource #artificialintelligence #openai
Did you know that the present-day architecture of TPUs was not Google's first idea, in fact not even an original idea?
It was the idea that survived against two other internal R&D projects.
When Google hit a code red on speed, bandwidth, and scale in 2013, it explored multiple ways to keep up with its own growing workloads. Software was one path.
Alternate hardware optimisation was another.
But the bet that won reached back to a much older idea, resurrected from a 1978 PhD paper: systolic arrays.
That forgotten architecture became the backbone of today’s TPUs.
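A systolic array streams operands through a grid of multiply-accumulate cells so each value is fetched from memory once and reused many times. A toy cycle-by-cycle simulation in Python — an illustration of the idea, not TPU internals:

```python
import numpy as np

def systolic_matmul(a, b):
    """Simulate an output-stationary systolic array computing a @ b.
    Cell (i, j) accumulates c[i, j]; row i of `a` streams in from the
    left and column j of `b` streams in from the top, skewed so that
    a[i, k] and b[k, j] meet at cell (i, j) on cycle i + j + k."""
    n, m = a.shape
    m2, p = b.shape
    assert m == m2, "inner dimensions must match"
    c = np.zeros((n, p))
    # Iterate clock cycles explicitly; a cell fires one multiply-accumulate
    # only on the cycle its two operands arrive.
    for cycle in range(n + p + m - 2):
        for i in range(n):
            for j in range(p):
                k = cycle - i - j
                if 0 <= k < m:
                    c[i, j] += a[i, k] * b[k, j]
    return c

a = np.arange(6).reshape(2, 3)
b = np.arange(6).reshape(3, 2)
print(systolic_matmul(a, b))
```

Each cell only ever talks to its neighbours, which is what lets a real TPU fire thousands of MACs per cycle without a memory bottleneck.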
In other news from robotics: advances in LLMs and the anticipated arrival of AGI are rapidly closing the gap between concept and capability.
And if you extrapolate the trend far enough, the prospect of humanoid robots functioning autonomously in workplaces and public spaces is moving from speculative to attainable very, very quickly.
There's no hiding that the global population is projected to decline, with countries like China, South Korea, Japan, and some EU nations showing major concern. The demand for humanoid robots designed to operate within human environments and offset growing labour shortages across industries is accelerating like never before.
So yes, expect to see more humanoids, exoskeletons, and cobots in action by the latter part of this decade.
Read more on my Substack. Link in bio 📃
Something worth (seriously) pondering while we build the base!!
The theoretical framework of the paper is both qualitative and quantitative.
Head over to the substack link to find the research papers hyperlinked.
#AIpolicy #Humandisempowerment #AIDiffusion #AIregulations
In 2013, Google Brain's researchers ran the numbers and what they found was alarming.
Voice search alone, used by a small fraction of Android users for just a few minutes a day, could double Google's entire server capacity requirement.
This wasn't a theoretical warning, this was an existential compute crisis hiding inside a product launch.
CPUs couldn't deliver the speed. GPUs couldn't deliver the economics.
And buying more NVIDIA hardware wasn't the answer, not at Google's planetary scale.
Episode 2/5 : The moment Google realised it had to build its own silicon. ⚡
.
.
.
#AIHardware #neuralnetworks #AIInfrastructure #MachineLearning #googlebrain