Start Up No.2560: the web’s continual refusal to die, Anthropic’s hacking claim queried, bad AI toys!, 300 billion 1c problems, and more


The murmurs inside Apple about Tim Cook’s successor are growing louder, perhaps so that markets won’t react badly when he goes. CC-licensed photo by Mike Deerkoski on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.

A selection of 9 links for you. There were no CC-BY licensed pictures of John Ternus. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


The man (or men) who keeps predicting the web’s death • Tedium

Ernie Smith:

»

Jeffrey Zeldman, one of the OGs of web design, recently decided to weigh in on a debate that’s been picking up lately: With AI on the rise, is the Web dead? After all, that new OpenAI browser seems to be built for another era of internet entirely.

Zeldman’s critique is simple, and one that I can definitely appreciate: People have been declaring the Web dead as long as it’s been alive (and the comments have been hilariously wrong). I’d like to take a moment to consider one specific naysayer: George Colony.

Colony’s name may not ring a bell if you’re not in technology spaces, but he is the founder of Forrester Research, one of the largest tech and business advisory firms in the world. If you’re a journalist with a story and need an analyst, you’ve probably talked to someone from Forrester. I’ve talked to Forrester quite a few times—their analysis is generally quite sound.

But there’s one area where the company—particularly Colony—gets it wrong. And it has to do with the World Wide Web, which Colony declared “dead” or dying on numerous occasions over a 30-year period. In each case, Colony was trying to make a bigger point about where online technology was going, without giving the Web enough credit for actually being able to get there.

«

Entertaining set of receipts for your Monday morning.
unique link to this extract


Researchers question Anthropic claim that AI-assisted attack was 90% autonomous • Ars Technica

Dan Goodin:

»

Researchers from Anthropic said they recently observed the “first reported AI-orchestrated cyber espionage campaign” after detecting China-state hackers using the company’s Claude AI tool in a campaign aimed at dozens of targets. Outside researchers are much more measured in describing the significance of the discovery.

Anthropic published the reports on Thursday here and here. In September, the reports said, Anthropic discovered a “highly sophisticated espionage campaign,” carried out by a Chinese state-sponsored group, that used Claude Code to automate up to 90% of the work. Human intervention was required “only sporadically (perhaps 4-6 critical decision points per hacking campaign).” Anthropic said the hackers had employed AI agentic capabilities to an “unprecedented” extent.

“This campaign has substantial implications for cybersecurity in the age of AI ‘agents’—systems that can be run autonomously for long periods of time and that complete complex tasks largely independent of human intervention,” Anthropic said. “Agents are valuable for everyday work and productivity—but in the wrong hands, they can substantially increase the viability of large-scale cyberattacks.”

Outside researchers weren’t convinced the discovery was the watershed moment the Anthropic posts made it out to be. They questioned why these sorts of advances are often attributed to malicious hackers when white-hat hackers and developers of legitimate software keep reporting only incremental gains from their use of AI.

“I continue to refuse to believe that attackers are somehow able to get these models to jump through hoops that nobody else can,” Dan Tentler, executive founder of Phobos Group and a researcher with expertise in complex security breaches, told Ars. “Why do the models give these attackers what they want 90% of the time but the rest of us have to deal with ass-kissing, stonewalling, and acid trips?”

«

My reaction on hearing the reports was that Anthropic was extremely proud its AI was being used in this way and wanted everyone to be impressed by it; on that basis, it was probably better to be sceptical.
unique link to this extract


Apple intensifies succession planning for CEO Tim Cook • Financial Times

Tim Bradshaw, Stephen Morris, Michael Acton and Daniel Thomas:

»

Apple is stepping up its succession planning efforts, as it prepares for Tim Cook to step down as chief executive as soon as next year.

Several people familiar with discussions inside the tech group told the Financial Times that its board and senior executives have recently intensified preparations for Cook to hand over the reins at the $4tn company after more than 14 years.

John Ternus, Apple’s senior vice-president of hardware engineering [aged 50], is widely seen as Cook’s most likely successor, although no final decisions have been made, these people said.

People close to Apple say the long-planned transition is not related to the company’s current performance, ahead of what is expected to be a blockbuster end-of-year sales period for the iPhone. Apple declined to comment.

The company is unlikely to name a new CEO before its next earnings report in late January, which covers the critical holiday period.

An announcement early in the year would give its new leadership team time to settle in ahead of its big annual keynote events, its developer conference in June and its iPhone launch in September, the people said. These people said that although preparations have intensified, the timing of any announcement could change.

…Apple has had a number of high-profile changes this year among its top executive team. Longtime Cook confidant and chief financial officer Luca Maestri stepped back from his role at the start of this year. Jeff Williams, a Cook protégé, announced he was stepping down as chief operating officer in July.

Appointing Ternus would put an executive from the hardware side of the company back in charge at the iPhone maker at a time when Apple has struggled to break into new product categories and keep up with its Silicon Valley rivals in AI.

«

John Gruber suggests Cook would want to remain as executive chairman, which sounds very likely.
unique link to this extract


AI-powered toys caught telling five-year-olds wildly inappropriate things • Futurism

Frank Landymore:

»

AI chatbots have conquered the world, so it was only a matter of time before companies started stuffing them into toys for children, even as questions swirled over the tech’s safety and the alarming effects it can have on users’ mental health.

Now, new research shows exactly how this fusion of kids’ toys and loquacious AI models can go horrifically wrong in the real world.

After testing three different toys powered by AI, researchers from the US Public Interest Research Group found that the playthings can easily verge into risky conversational territory for children, including telling them where to find knives in a kitchen and how to start a fire with matches. One of the AI toys even engaged in explicit discussions, offering extensive advice on sex positions and fetishes.

In the resulting report, the researchers warn that the integration of AI into toys opens up entire new avenues of risk that we’re barely beginning to scratch the surface of — and just in time for the winter holidays, when huge numbers of parents and other relatives are going to be buying presents for kids online without considering the novel safety issues involved in exposing children to AI.

“This tech is really new, and it’s basically unregulated, and there are a lot of open questions about it and how it’s going to impact kids,” report coauthor RJ Cross, director of PIRG’s Our Online Life Program, said in an interview with Futurism. “Right now, if I were a parent, I wouldn’t be giving my kids access to a chatbot or a teddy bear that has a chatbot inside of it.”

«

I’m not sure I’d give some adults access to a chatbot, inside or outside a teddy bear. But this is predictably bad, while also reminiscent of some SF stories.
unique link to this extract


Power companies are using AI to build nuclear power plants • 404 Media

Matthew Gault:

»

Microsoft and nuclear power company Westinghouse Nuclear want to use AI to speed up the construction of new nuclear power plants in the United States. According to a report from think tank AI Now, this push could lead to disaster. 

“If these initiatives continue to be pursued, their lack of safety may lead not only to catastrophic nuclear consequences, but also to an irreversible distrust within public perception of nuclear technologies that may inhibit the support of the nuclear sector as part of our global decarbonization efforts in the future,” the report said.

The construction of a nuclear plant involves a long legal and regulatory process called licensing that’s aimed at minimizing the risks of irradiating the public. Licensing is complicated and expensive, but it has also largely worked, and nuclear accidents in the US are uncommon. But AI is driving a demand for energy, and new players, mostly tech companies like Microsoft, are entering the nuclear field.

“Licensing is the single biggest bottleneck for getting new projects online,” a slide from a Microsoft presentation about using generative AI to fast track nuclear construction said. “10 years and $100 [million].”

The presentation, which is archived on the website for the US Nuclear Regulatory Commission (the independent government agency that’s charged with setting standards for reactors and keeping the public safe), detailed how the company would use AI to speed up licensing. In the company’s conception, existing nuclear licensing documents and data about nuclear sites would be used to train an LLM that’s then used to generate documents to speed up the process.

«

Don’t worry – there will be objections from NIMBYs who have trained their LLMs on previous successful (or not, it all adds time) objections to nuclear power plants.
unique link to this extract


Apple’s new App Review Guidelines clamp down on apps sharing personal data with ‘third-party AI’ • TechCrunch

Sarah Perez:

»

Apple on Thursday introduced a new set of App Review Guidelines for developers, which now specifically state that apps must disclose and obtain users’ permission before sharing personal data with third-party AI.

The change comes ahead of the iPhone maker’s plan to introduce its own AI-upgraded version of Siri in 2026. That update will see Apple’s digital assistant offer users the ability to take actions across apps using Siri commands, and will be powered, in part, by Google’s Gemini technology, according to a recent Bloomberg report.

At the same time, Apple is ensuring other apps aren’t leaking personal data to AI providers or other AI businesses. What’s interesting about this particular update is not the requirements being described but that Apple has specifically called out AI companies as needing to come into compliance.

Before the revised language, the guideline known as rule 5.1.2(i) included language around disclosure and obtaining user consent for data sharing, noting that apps could not “use, transmit or share” someone’s personal data without their permission. This rule served as part of Apple’s compliance with data privacy regulations like the EU’s GDPR (General Data Protection Regulation), California’s Consumer Privacy Act, and others, which ensure that users have more control over how their data is collected and shared. Apps that don’t follow the policy can be removed from the App Store.

«

Wonder what wording the apps will invent to persuade people that they’re sharing their data with third parties but in a nice and desirable way.
unique link to this extract


AI slop tops Billboard and Spotify charts as synthetic music spreads • The Guardian

Aisha Down:

»

Three songs generated by artificial intelligence topped music charts last week, reaching the highest spots on Spotify and Billboard charts.

Walk My Walk and Livin’ on Borrowed Time by the outfit Breaking Rust topped Spotify’s “Viral 50” songs in the US, which documents the “most viral tracks right now” on a daily basis, according to the streaming service. A Dutch song, We Say No, No, No to an Asylum Center, an anti-migrant anthem by JW “Broken Veteran” that protests against the creation of new asylum centers, took the top position in Spotify’s global version of the viral chart around the same time. Breaking Rust also appeared in the top five on the global chart.

“You can kick rocks if you don’t like how I talk,” reads a lyric from Walk My Walk, a seeming double entendre challenging those opposed to AI-generated music.

Days after its ascent up the charts, the Dutch song disappeared from Spotify and YouTube, as did Broken Veteran’s other music. Spotify told the Dutch outlet NU.nl that the company had not removed the music, the owners of the song rights had. Broken Veteran told the outlet that he did not know why his music had disappeared and that he was investigating, hoping to return it soon.

In an email to the Guardian, Broken Veteran, who declined to give his real name, said that he saw AI as “just another tool for expression, particularly valuable for people like me who have something to say but lack traditional musical training”, adding that the technology had “democratized music creation”. He claimed his songs “express frustration with governmental policies, not with migrants as individuals”.

«

AI song generation is no different in principle from the chord generators on early synthesizers. Discuss.
unique link to this extract


Apple Vision Pro live sports streaming is too costly for startups • Apple Insider

Malcolm Owen:

»

Earlier in 2025, a startup reached out to us about its 3D sports streaming service for Apple Vision Pro. The grim realities of money, licensing, and the technical limits of streaming 3D live sports make this almost impossible for a startup to solve.

Since its launch, the Apple Vision Pro has been about giving users an experience. Whether it’s looking around a spaceship or an Immersive Video on a submarine, or even a snow-covered village in Iceland, there are lots of things to see and do with the headset.

However, while you can watch streaming video on a massive screen within a large novelty scene, it’s not quite the same as being at a stadium. You’re not getting the experience of being surrounded by fans and watching sports stars run around some supremely-kept grass.

With the Apple Vision Pro now in its second generation, as well as Apple moving deeper into sports, there is hope that the two can join up and result in an immersive broadcasting experience.

To a point, it is. Albeit in extremely small doses.

When it comes to its connections with the NBA, Apple has recorded a Slam Dunk contest using its cameras. There will also be a selection of Lakers basketball games streamed in 180-degree Apple Immersive video during the 2025-2026 season.

The real problem is that it’s a dream that is far away from happening on a much wider basis, for many reasons. In short, it’s an expensive and difficult thing to create, and the audience isn’t big enough to be worth it right now.

«

Sports streaming on the Vision Pro is an absolute chicken-egg problem: the audience is small because there’s no content, so you can’t do the content deals because the audience is small. Only Apple can unlock this. (Thanks Joe S for the pointer.)
unique link to this extract


Pennies are trash now • The Atlantic

Caity Weaver:

»

What, exactly, is the plan for all the pennies?

Many Americans—and many people who, though not American, enjoy watching from a safe distance as predictable fiascoes unfold in this theoretical superpower from week to week—find themselves now pondering one question. What is the United States going to do with all the pennies—all the pennies in take-a-penny-leave-a-penny trays, and cash registers, and couch cushions, and the coin purses of children, and Big Gulp cups full of pennies; all the pennies that are just lying around wherever—following the abrupt announcement that the country is no longer in the penny game and will stop minting them, effective immediately?

The answer appears to be nothing at all. There is no plan.

The U.S. Mint estimates that there are 300 billion pennies in circulation—which, if true, means that the Milky Way galaxy contains about three times more American pennies than stars. How, you ask, could the plan for 300,000,000,000 coins be “nothing”? The Mint, you say, issued a formal press release about striking the final cents. Surely, you insist, that implies some sort of strategy, or at least is evidence of logical human thought and action?

…I spent several months attempting to ascertain why, in the year 2024, one out of every two coins minted in the United States was a one-cent piece, even though virtually no one-cent pieces were ever spent in the nationwide conduction of commerce, and, on top of that, each cost more than three cents apiece to manufacture.

…There were logical reasons not to care: 300 billion pennies—all of them still and indefinitely legal currency—constitute approximately zero percent of the total money supply of the United States (0.0% if rounding to one decimal place). The millions of dollars the government loses by paying more than three cents to manufacture one-cent coins represents an infinitesimal fraction of 1% of the government’s several-trillion-dollar budget. And these days, most people barely encounter the coins.

«
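
A rough back-of-the-envelope check of those figures, sketched in Python. The star count and money-supply values are outside assumptions, not from the article (roughly 100bn stars in the Milky Way, roughly $21tn in the US M2 money supply):

# Sanity-checking the penny arithmetic (assumed figures, not from the article)
pennies = 300e9        # estimated pennies in circulation
stars = 100e9          # rough Milky Way star count; estimates vary widely
m2_dollars = 21e12     # approximate US M2 money supply

print(pennies / stars)                     # about 3 pennies per star
print(100 * pennies * 0.01 / m2_dollars)   # about 0.014% of the money supply, i.e. 0.0% to one decimal place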

unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified
