Start Up No.2605: AI deepfake fraud growing fast, are space GPUs possible?, how to think like a worm, Moltbook?, and more


Some Winter Olympics events are being held on artificial snow, which has very different properties for racers than the natural kind. CC-licensed photo by Mark Teasdale on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.

A selection of 9 links for you. Blades? Glory! I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


Deepfake fraud taking place on an industrial scale, study finds • The Guardian

Aisha Down:

»

Deepfake fraud has gone “industrial”, an analysis published by AI experts has said.

Tools to create tailored, even personalised, scams – leveraging, for example, deepfake videos of Swedish journalists or the president of Cyprus – are no longer niche, but inexpensive and easy to deploy at scale, said the analysis from the AI Incident Database.

It catalogued more than a dozen recent examples of “impersonation for profit”, including a deepfake video of Western Australia’s premier, Roger Cook, hawking an investment scheme, and deepfake doctors promoting skin creams.

These examples are part of a trend in which scammers are using widely available AI tools to perpetrate increasingly targeted heists. Last year, a finance officer at a Singaporean multinational paid out nearly $500,000 to scammers during what he believed was a video call with company leadership. UK consumers are estimated to have lost £9.4bn to fraud in the nine months to November 2025.

“Capabilities have suddenly reached that level where fake content can be produced by pretty much anybody,” said Simon Mylius, an MIT researcher who works on a project linked to the AI Incident Database.

He calculates that “frauds, scams and targeted manipulation” have made up the largest proportion of incidents reported to the database in 11 of the past 12 months. He said: “It’s become very accessible to a point where there is really effectively no barrier to entry.”

“The scale is changing,” said Fred Heiding, a Harvard researcher studying AI-powered scams. “It’s becoming so cheap, almost anyone can use it now. The models are getting really good – they’re becoming much faster than most experts think.”

«

There’s a description in the article of a deepfake “candidate” in a video job interview, which the interviewer figures out almost immediately, but:

»

Rebholz went through with the conversation, not wanting to face the awkwardness of asking the candidate directly if they were, in fact, an elaborate scam.

«

I think people need to face up to the awkwardness, or they’ll end up hiring fakes because they were too afraid to confront them.
unique link to this extract


Notes on space GPUs • Dwarkesh Podcast

Dwarkesh Patel:

»

The whole reason to go to space is energy. Yes, panels in space get about 40% more irradiance—but the real advantage is that you can put your satellites in sun-synchronous orbit, where they face the sun continuously. No nights, no clouds, no need for batteries (which account for the majority of the cost in a solar-storage system). Solar on Earth has a roughly 25% capacity factor, meaning panels only generate a quarter of their peak output on average. In space, you get close to 100%.

The logic is that if the launch costs continue to drop, it will become cheaper to put GPUs in orbit than to build power plants and batteries on Earth. And there’s a lot of room for launch costs to fall—propellant is cheap, and the main expense is the rocket, which you can now reuse. Falcon 9 is around $2,500/kg with a disposable upper stage. Starship with full reusability could get below $100/kg.

But here’s the problem with this argument. Energy is only about 15% of a datacenter’s total cost of ownership. The chips themselves are around 70%. And you still have to launch those to space!

It gets worse. On Earth, GPUs fail constantly. In the Llama 3 paper, Meta reported a failure roughly once every three hours across a 16,000-GPU H100 cluster. When a chip dies, a technician walks over, swaps it out, and the cluster keeps running. In space, you can’t do that—at least not until we have Optimus robots stationed on every satellite.

What about radiation? It’s actually less catastrophic than you might expect. Google’s Suncatcher paper found that their TPUs survived nearly 3x the total ionizing dose needed for a five-year mission before showing permanent degradation.

…[But what about launching these data centres into space? Some calculations about weight are made] Assuming the numbers above—and also assuming that a fourth of the mass of the satellite has to be the chassis—I get 85 W/kg for the whole system. Again, I want to emphasize these are rough calculations; feel free to plug in your own numbers in the spreadsheet here.

At 150 metric tons to low earth orbit per Starship (Elon’s target), you’re looking at around 10 MW per launch. That means roughly 100 Starship launches to put 1 GW of compute in orbit. To hit 100 GW in a year, you’d need roughly 10,000 launches, or about one launch every hour.

This is insane! A single Starship produces around 100 GW of thrust power at liftoff. That’s about a fifth of total US electricity consumption, concentrated in one rocket for a few minutes. And the plan would be to do that once an hour, every hour, every day, for a year.

«

I get a feeling this isn’t going to happen, for very simple reasons.
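Patel’s cadence arithmetic is easy to check, incidentally. Here’s a minimal back-of-the-envelope sketch in Python using his stated assumptions (85 W/kg for the whole system, 150 tonnes of payload per Starship); he rounds the per-launch power down to about 10 MW, which is where his 10,000-launch figure comes from:

```python
# Back-of-the-envelope check of Patel's launch-cadence arithmetic.
# All inputs are his stated assumptions, not measured values.
specific_power_w_per_kg = 85    # whole-satellite power density
payload_kg = 150_000            # Starship payload to LEO (target)
target_w = 100e9                # 100 GW of orbital compute
hours_per_year = 365 * 24

power_per_launch_w = specific_power_w_per_kg * payload_kg
launches = target_w / power_per_launch_w
cadence_hours = hours_per_year / launches

print(f"{power_per_launch_w / 1e6:.1f} MW per launch")  # 12.8 MW
print(f"{launches:,.0f} launches for 100 GW")           # 7,843
print(f"one launch every {cadence_hours:.1f} hours")    # 1.1
```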
unique link to this extract


Mini 3D printed replica of ancient Tiwanaku structure sheds light on historic site • VoxelMatters

Tess Boissonneault:

»

Tiwanaku, an archaeological site found in Western Bolivia dating back to around 500 AD, is the latest historical site to get the 3D printing treatment. Researchers from UC Berkeley have 3D printed miniature models of the Pre-Columbian site to reconstruct the ruins, which have been ransacked and compromised over the last 500 years.

The site itself spans four square kilometers—making it the largest archaeological site in South America. Evidently, this was too large a space to 3D print, so the researchers from UC Berkeley focused their efforts on 3D printing and reconstructing a specific architectural structure, namely, the Pumapunku building.

The Pumapunku building, believed to have been built around 536 AD, was part of a large temple complex in Tiwanaku, which the Incas believed to be where the world was created. Over the years, the site and building itself have been of great interest to archaeologists for a number of reasons. The Pumapunku building specifically is considered to be an architectural wonder.

Sadly, due to ransacking, none of the 150 blocks that once made up the building are in their original place – a reality that has led the UC Berkeley team to try to recreate a model of the original building using miniature 3D printed blocks.

“A major challenge here is that the majority of the stones of Pumapunku are too large to move and that field notes from previous research by others present us with complex and cumbersome data that is difficult to visualize,” explained Dr. Alexei Vranich, the corresponding author of the study published in Heritage Science. “The intent of our project was to translate that data into something that both our hands and our minds could grasp. Printing miniature 3D models of the stones allowed us to quickly handle and refit the blocks to try and recreate the structure.”

In making the miniature models, the research team 3D printed 140 pieces of andesite, an extrusive rock found in the Andes, and 17 slabs of sandstone, all based on measurements of the blocks recorded by scholars over the last century and a half. With these measurements, the team was able to create 3D models of the blocks and subsequently print them.

«

Having thus created Spinal Tap-scale versions, they were able to figure out how the people who lived there fitted together blocks that held without mortar.
unique link to this extract


What it’s like to be a worm • Asimov

Ralph Stefan Weir:

»

Darwin was among the first to ground judgements about animal sentience on careful experiments, such as suspending pieces of raw and roasted meat over the worms’ habitat overnight to see which they preferred.

Even more striking than Darwin’s methodological approach to studying sentience was his choice of earthworms for his subject. Such a selection in place of a human subject made Darwin a forerunner of a research program that has recently gained incredible momentum: the science of borderline sentience. That is, the investigation of sentience in creatures that dwell near the boundary between sentience and non-sentience.

Whereas Darwin’s interest in the inner workings of the worm mind was driven by pure curiosity, researchers today study borderline sentience to avoid causing gratuitous suffering in contexts such as agriculture and research. In the UK, octopuses and decapod crustaceans have been recognized as “sentient” since 2021, meaning government ministers legally must consider their welfare in future policies. This has just resulted in a ban on the practice of boiling crabs and lobsters alive.

The same considerations also apply to humans. Every year, around 400,000 people fall into “prolonged disorders of consciousness,” such as a coma, due to injury or illness. While no longer recognizably sentient, as many as a quarter of these patients are thought to retain some awareness. The better we understand borderline sentience, then, the better we will be able to care for (and perhaps even cure) such individuals.

…Studies of the microscopic worm C. elegans, with its fully mapped and readily accessible nervous system, have become ideal test cases for assessing theories of sentience against detailed physiological data. The hope is that discoveries made with this species will help refine how we assess sentience in humans, other complex animals, and even artificial neural networks.

«

Paging Andrew Brown. C. elegans lifts its little head once more. Having been the first animal to have its entire genome sequenced, now it’s leading us into the liminal space of sentience.
unique link to this extract


What Olympic athletes see that viewers don’t: machine-made snow makes ski racing faster and riskier • The Conversation

Keith Musselman and Agnes Macy:

»

We talked with Brennan and cross-country skiers Ben Ogden and Jack Young as they were preparing for the 2026 Winter Games. Their experiences reflect what many athletes describe: a sport increasingly defined not by the variability of natural winter but by the reliability of industrialized snowmaking.

Snowmaking technology makes it possible to create halfpipes for freestyle snowboarding and skiing competitions. It also allows for races when natural snow is scarce – the 2022 Winter Olympics in Beijing relied entirely on machine-made snow for many races.

However, machine-made snow creates a very different surface than natural snow, changing the race.

In clouds, each unique snowflake shape is determined by the temperature and humidity. Once formed, the iconic star shape begins to slowly erode as its crystals become rounded spheres. In this way, natural snow provides a variety of textures and depths: soft powder after a storm, firm or brittle snow in cold weather, and slushy, wet snow during rain or melt events.

Machine-made snow varies less in texture or quality. It begins and ends its life as an ice pellet surrounded by a thin film of liquid water. That makes it slower to change, easier to shape, and, once frozen, it hardens in place.

When artificial snow is being made, the sound is piercing – a high-pitched hiss roars from the pressurized nozzles of snow guns. These guns spew water mixed with compressed air, and it freezes upon contact with the cold air outside, creating small, dense ice particles. The drops sting exposed skin, as one of us, Agnes Macy, knows well as a former competitive skier.

Snow machines then push out artificial snow onto the racecourse. Often, the trails are the only ribbons of snow in sight – a white strip surrounded by brown mud and dead grass.

“Courses built for natural snow feel completely different when covered in man-made snow,” Brennan, 37, said. “They’re faster, icier, and carry more risk than anyone might imagine for cross-country skiing.”

«

unique link to this extract


“Penisgate” at the Olympics: why inject acid into your penis, and what are the health risks? • The Guardian

Natasha May:

»

In the quest for Olympic gold, professional athletes endure hardships that might seem unfathomable to most of us mere mortals. But do those lengths extend to ski jumpers injecting their penises with hyaluronic acid in order to fly further?

That is the question the World Anti-Doping Agency will investigate since such startling allegations emerged first in the German newspaper Bild in what has now been dubbed “Penisgate”.

Bild has claimed that athletes have injected the acid into their penises to game the system when they are measured for their suits, a process that is tightly regulated to prevent any athlete gaining an aerodynamic advantage.

While the result of that investigation is pending, other questions remain: why would a ski jumper want to tamper with their penis, is it safe, and what does it have to do with aerodynamics?

«

Since you ask – and I sincerely hope you would need to – “Hyaluronic acid is a common filler used in cosmetic surgery, including injections being used for penile girth enlargement surgery,” Prof Eric Chung, a urological surgeon, says.

The problem is that a suit designed for bigger, uh, equipment means you have a sort of sail canopy which could give you those extra few inches. ON THE JUMP.
unique link to this extract


Moltbook was peak AI theater • MIT Technology Review

Will Douglas Heaven:

»

For some, Moltbook showed us what’s coming next: an internet where millions of autonomous agents interact online with little or no human oversight. And it’s true there are a number of cautionary lessons to be learned from this experiment, the largest and weirdest real-world showcase of agent behaviors yet.  

But as the hype dies down, Moltbook looks less like a window onto the future and more like a mirror held up to our own obsessions with AI today. It also shows us just how far we still are from anything that resembles general-purpose and fully autonomous AI.

For a start, agents on Moltbook are not as autonomous or intelligent as they might seem. “What we are watching are agents pattern‑matching their way through trained social media behaviors,” says Vijoy Pandey, senior vice president at Outshift by Cisco, the telecom giant Cisco’s R&D spinout, which is working on autonomous agents for the web.

Sure, we can see agents post, upvote, and form groups. But the bots are simply mimicking what humans do on Facebook or Reddit. “It looks emergent, and at first glance it appears like a large‑scale multi‑agent system communicating and building shared knowledge at internet scale,” says Pandey. “But the chatter is mostly meaningless.”

Many people watching the unfathomable frenzy of activity on Moltbook were quick to see sparks of AGI (whatever you take that to mean). Not Pandey. What Moltbook shows us, he says, is that simply yoking together millions of agents doesn’t amount to much right now: “Moltbook proved that connectivity alone is not intelligence.”

The complexity of those connections helps hide the fact that every one of those bots is just a mouthpiece for an LLM, spitting out text that looks impressive but is ultimately mindless. “It’s important to remember that the bots on Moltbook were designed to mimic conversations,” says Ali Sarrafi, CEO and cofounder of Kovant, a German AI firm that is developing agent-based systems. “As such, I would characterize the majority of Moltbook content as hallucinations by design.”

For Pandey, the value of Moltbook was that it revealed what’s missing. A real bot hive mind, he says, would require agents that had shared objectives, shared memory, and a way to coordinate those things. “If distributed superintelligence is the equivalent of achieving human flight, then Moltbook represents our first attempt at a glider,” he says. “It is imperfect and unstable, but it is an important step in understanding what will be required to achieve sustained, powered flight.”

Not only is most of the chatter on Moltbook meaningless, but there’s also a lot more human involvement than it seems. Many people have pointed out that a lot of the viral comments were in fact posted by people posing as bots. But even the bot-written posts are ultimately the result of people pulling the strings, more puppetry than autonomy.

«

Seems like this teaches us nothing at all, in fact.
unique link to this extract


Decrypted “Sweet 16” journal in Epstein files refers to Donald Trump: “how can you have dignity after being with that man?” • Narativ

Zev Shalev:

»

A recently published document from the Jeffrey Epstein investigative files contains disturbing revelations from a young woman whose coded and encrypted journal was discovered among the 3.5 million pages released by the Department of Justice on January 30.

The journal’s cover is a Sweet 16 card — meant for birthday wishes and teenage dreams. Inside are the words of a girl trapped in a nightmare with no hope of escape. Between the cut-and-pasted scrapbook of magazine headlines are her darkest secrets, written in code, about the men who held her captive — including one about Donald Trump.

Once decrypted, we discovered a haunting response to a news clipping regarding Ivana Trump’s divorce. In the clipping, Ivana is quoted as saying: “You came out of your divorce with dignity and pride, and that’s how I would like to come out of mine.”

The journal keeper wrote back with a stark, handwritten rebuttal: “Does this lady know you can’t have any dignity if you’ve been with him? I know I have none. Only skittles.”

Because the journal was partially written in code, it provides a rare list of names of alleged abusers—many of whom remain redacted in other parts of the public files. The author describes violent or threatening encounters with several high-profile individuals… [who are then named in the article; one is Andrew Mountbatten Windsor.]

«

The “code” is a sort of circular one: four rows, split into two pairs; you then read each pair of rows as two-letter columns. The code yields its contents rapidly. Bellingcat has confirmed its veracity. Who needs cryptography when you can create what looks like nonsense? (Thanks Adewale for the pointer.)
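The reading procedure, as described, fits in a few lines of Python. This is only a sketch of the scheme described above, with a made-up example rather than anything from the journal; the exact row order and page layout are assumptions:

```python
def decode(rows: list[str]) -> str:
    """Split the four rows into two pairs, then read each pair
    column by column, taking the top letter then the bottom
    letter of each two-letter column."""
    assert len(rows) == 4
    out = []
    for top, bottom in (rows[0:2], rows[2:4]):
        for t, b in zip(top, bottom):
            out.append(t + b)
    return "".join(out)

# Made-up example, not journal content:
print(decode(["MEMA", "ETET", "MDIH", "INGT"]))  # MEETMEATMIDNIGHT
```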
unique link to this extract


Ring’s Search Party for Dogs now available nationwide in the US • Amazon News

Amazon Staff:

»

Ring has expanded Search Party for Dogs, an AI-powered community feature that enables your outdoor Ring cameras to help reunite lost dogs with their families, to anyone in the US who needs help finding their lost pup. Since launch, Search Party has helped bring home more than a dog a day—and now, the feature is available to non-Ring camera owners via the Ring app for the first time.

“Before Search Party, the best you could do was drive up and down the neighbourhood, shouting your dog’s name in hopes of finding them,” said Jamie Siminoff, Ring’s chief inventor. “Now, pet owners can mobilize the whole community—and communities are empowered to help—to find lost pets more effectively than ever before. That’s why we believe it’s so important to make this feature available to anyone who shares a lost dog post in Neighbors.”

In addition to Search Party’s expansion, Ring is committing $1m to help equip animal shelters across the country with Ring camera systems. The aim is to assist the more than 4,000 US shelters in leveraging Search Party for Dogs to help reunite more lost dogs with their families and achieve the shared goal of reducing the time dogs spend in shelters. Ring is already working with several nonprofit organizations, including Petco Love and Best Friends Animal Society, and encourages others in the space to reach out about collaboration opportunities…

When a neighbour reports a lost dog in the Ring app, nearby participating outdoor Ring cameras automatically begin looking for potential matches. Using AI-powered computer vision, these cameras look for dogs that resemble the one reported missing, alerting the camera owner if they detect a potential match.

The camera owner can see the photo of the missing dog, alongside relevant footage from their own camera. If the camera owner confirms a match, they can then choose to share the information with the neighbour searching for their pet or ignore the alert. Camera owners choose on a case-by-case basis whether they want to share videos with a pet owner, protecting users’ privacy while also giving them the power to be a neighbourhood hero.

«

The self-surveillance society that can find its lost dogs. Good?
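Ring hasn’t published how the matching actually works. A common pattern for this kind of visual re-identification is to embed image crops into a feature space and compare them by cosine similarity; here’s a hypothetical sketch along those lines (the stand-in embedding and the 0.85 threshold are illustrative assumptions, not Ring’s pipeline):

```python
# Hypothetical sketch of lost-pet matching by image similarity.
# Nothing here reflects Ring's actual implementation.
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Stand-in for a learned vision model (CNN/ViT) mapping a
    cropped dog image to a unit-length feature vector. Here we
    flatten and normalise pixels so the sketch runs end to end;
    crops are assumed resized to a common shape."""
    v = image.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def is_potential_match(lost_dog_photo: np.ndarray,
                       camera_crop: np.ndarray,
                       threshold: float = 0.85) -> bool:
    """Flag a detection for review when its embedding is close
    enough (cosine similarity) to the missing dog's photo; the
    camera owner then confirms or ignores the alert."""
    similarity = float(embed(lost_dog_photo) @ embed(camera_crop))
    return similarity >= threshold
```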
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified
