Start Up No.2632: Meta readies more job cuts, Kalshi indicted for “gambling”, LLMs used for GitHub attacks, Wired booms!, and more


You can find out how many people live within a given radius of you, or anywhere, with a new website. CC-licensed photo by Tomas on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


A selection of 10 links for you. Crowded. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


Exclusive: Meta planning sweeping layoffs as AI costs mount • Reuters

Katie Paul, Jeff Horwitz and Deepa Seetharaman:

»

Meta is planning sweeping layoffs ​that could affect 20% or more of the company, three sources familiar with the matter told Reuters, as Meta seeks to offset costly artificial intelligence infrastructure bets and prepare for greater efficiency brought about by AI-assisted workers.

No date has been set for the cuts and the magnitude has not been finalized, the people said.

Top executives have recently signaled the plans to other senior leaders at Meta and told ​them to begin planning how to pare back, two of the people said. The sources spoke anonymously because they ​were not authorized to disclose the cuts.

“This is speculative reporting about theoretical approaches,” Meta spokesperson Andy Stone said in response to questions about the plan.

If Meta settles on the 20% figure, the layoffs will be the company’s most ​significant since a restructuring in late 2022 and early 2023 that it dubbed the “year of efficiency.” It employed nearly 79,000 people as ​of December 31, according to its latest filing.

The company laid off 11,000 staffers in November 2022, or around 13% of its workforce at the time. Around four months later, it announced it was cutting another 10,000 jobs.

…The company has said it plans to invest $600bn to build data centers by 2028. Earlier this week, it acquired Moltbook, a social networking platform built for AI agents. Meta is also spending at least $2bn to buy Chinese AI startup Manus, Reuters previously reported.

…Meta’s planned AI investments follow a series of setbacks with its Llama 4 models last year, including criticism that it provided misleading results on the benchmarks it used for early versions. It abandoned the release of the largest version of that model, ​called Behemoth, which had been due out in the summer.

«

This level of spending is not sustainable. It’s surely going to be cut back, especially with everything coming down the track as a result of the Iranian conflict.
unique link to this extract


Arizona indicts prediction market Kalshi for running illegal gambling operation • Financial Times via Ars Technica

Stephanie Stacey and Oliver Roeder:

»

Arizona’s attorney general filed criminal charges against prediction market Kalshi, accusing it of operating a gambling business without a license and offering illegal wagers on elections.

“Kalshi may brand itself as a ‘prediction market,’ but what it’s actually doing is running an illegal gambling operation and taking bets on Arizona elections, both of which violate Arizona law,” Attorney General Kris Mayes said in a statement on Tuesday.

While Arizona’s case is the first time criminal charges have been brought against the company, several other US states have alleged that Kalshi’s markets constitute illegal and unregulated sports betting.

“There’s clearly going to be a domino effect,” said Daniel Wallach, a lawyer who specializes in gaming law. “These are the first criminal charges filed against Kalshi anywhere in the US but they may not be the last.”

In a statement, Kalshi said: “Sadly, a state can file criminal charges on paper-thin arguments. States like Arizona want to individually regulate a nationwide financial exchange, and are trying every trick in the book to do it.”

Prediction market platforms such as Kalshi offer shares in binary outcomes, such as a certain team winning or losing a football match. Kalshi argues that these contracts are derivatives regulated by the federal Commodity Futures Trading Commission, and that this federal status preempts state-level sports-gambling bans and regulations.

Arizona’s case focuses on betting contracts Kalshi offered on four separate elections, including the 2028 US presidential race and 2026 race for Arizona governor. Gambling on elections is illegal under Arizona state law.

«

One can imagine this escalating all the way up to the Supreme Court. Though given that the court gave gambling the green light, it’s hard to see Arizona prevailing.
unique link to this extract


Supply-chain attack using invisible code hits GitHub and other repositories • Ars Technica

Dan Goodin:

»

Researchers say they’ve discovered a supply-chain attack flooding repositories with malicious packages that contain invisible code, a technique that’s flummoxing traditional defenses designed to detect such threats.

The researchers, from the security firm Aikido, said Friday that they found 151 malicious packages that were uploaded to GitHub from March 3 to March 9. Such supply-chain attacks have been common for nearly a decade. They usually work by uploading malicious packages with code and names that closely resemble those of widely used code libraries, with the objective of tricking developers into mistakenly incorporating the former into their software. In some cases, these malicious packages are downloaded thousands of times.

The packages Aikido found this month have adopted a newer technique: selective use of code that isn’t visible when loaded into virtually all editors, terminals, and code review interfaces. While most of the code appears in normal, readable form, malicious functions and payloads—the usual telltale signs of malice—are rendered in unicode characters that are invisible to the human eye. The tactic, which Aikido said it first spotted last year, makes manual code reviews and other traditional defenses nearly useless. Other repositories hit in these attacks include NPM and Open VSX.

The malicious packages are even harder to detect because of the high quality of their visible portions.

“The malicious injections don’t arrive in obviously suspicious commits,” Aikido researchers wrote. “The surrounding changes are realistic: documentation tweaks, version bumps, small refactors, and bug fixes that are stylistically consistent with each target project.”

The researchers suspect that Glassworm—the name they assigned to the attack group—is using LLMs to generate these convincingly legitimate-appearing packages.

«

Well isn’t that fun news.
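For the curious, here’s a minimal sketch (in Python, and emphatically not Aikido’s actual tooling) of what a defensive scan for these invisible characters might look like — the characters named below are common zero-width and filler codepoints that render as nothing in most editors:

```python
# Minimal sketch: scan source text for invisible Unicode characters of the
# kind reportedly used to hide malicious payloads. A hypothetical defensive
# check, not the attackers' technique or any vendor's product.
import unicodedata

# Characters that render as nothing in most editors and terminals.
INVISIBLE = {
    "\u200b",  # zero-width space
    "\u200c",  # zero-width non-joiner
    "\u200d",  # zero-width joiner
    "\u2060",  # word joiner
    "\u3164",  # Hangul filler
    "\ufeff",  # byte-order mark / zero-width no-break space
}

def invisible_chars(source: str) -> list[tuple[int, str]]:
    """Return (index, codepoint name) for every invisible character found."""
    hits = []
    for i, ch in enumerate(source):
        # Variation selectors (U+FE00-FE0F) and the tag/variation-selector
        # supplement planes (U+E0000-E01EF) are also invisible and can
        # smuggle arbitrary data.
        if (ch in INVISIBLE
                or 0xFE00 <= ord(ch) <= 0xFE0F
                or 0xE0000 <= ord(ch) <= 0xE01EF):
            hits.append((i, unicodedata.name(ch, f"U+{ord(ch):04X}")))
    return hits

clean = "def add(a, b):\n    return a + b\n"
tainted = "def add(a, b):\u200b\u3164\n    return a + b\n"
print(invisible_chars(clean))         # []
print(len(invisible_chars(tainted)))  # 2
```

The point being: both strings look identical on screen, which is exactly why manual code review fails here.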
unique link to this extract


Wired’s new editor doesn’t care if the tech bros are mad • The New York Times

Katie Robertson:

»

In less than three years, [Katie] Drummond, 40, has transformed Wired from a fading magazine and website into a buzzy brand that has become a bright spot for Condé Nast, the publishing giant better known as the home of Vogue, Vanity Fair and The New Yorker.

She started its about-face by creating a politics team that landed major scoops about Elon Musk’s Department of Government Efficiency, including uncovering the young engineers given key roles in the project and revealing that members of DOGE’s staff had direct access to U.S. Treasury payment systems. The publication has recently been focusing on artificial intelligence, the war in Iran and the ties between technology companies and Immigration and Customs Enforcement.

The edgier coverage has generated grumbles from parts of the tech community, including in recent weeks. Wired’s February print cover, for a feature titled “Inside the Gay Tech Mafia,” drew outrage online for its suggestive visual of two men shaking hands at crotch level through the flies of their pants. Trae Stephens, a prominent venture capitalist, argued on social media that Wired was “irreparably broken in its current form” and floated the possibility of buying it.

But Ms. Drummond’s approach appears to be working. Condé Nast does not disclose profits or losses for its publications, but Ms. Drummond said Wired had added more than 200,000 new paying subscribers in the past year, and subscription revenue increased 24% last year in the United States. Wired currently has more than 500,000 paid subscribers. It has a newsroom of around 80 people with plans to hire up to a dozen more this year, and was recently named a finalist for general excellence in the National Magazine Awards.

“We cover technology with a great deal of curiosity,” Ms. Drummond said, “with a great deal of skepticism as I think behooves any smart journalist, with an eye toward accountability, which I believe is of paramount importance in this moment.”

«

unique link to this extract


How Europe and Canada can stay relevant in the AI wars • The Washington Post

Sam Winter-Levy and Anton Leicht are based at the Technology and International Affairs Program at the Carnegie Endowment for International Peace:

»

The middle powers in the West — European countries and Canada — are increasingly hoping to chart a course through the artificial intelligence revolution independent of China and the United States.

At the Munich Security Conference last month, German Chancellor Friedrich Merz declared the rules-based order dead and laid out a road map for “a strong sovereign Europe.” Ursula von der Leyen, president of the European Commission, declared that “our digital sovereignty is our digital sovereignty.” French President Emmanuel Macron, echoing Canadian Prime Minister Mark Carney, called for “de-risking vis-à-vis all the big powers.” Days later, at an AI summit in New Delhi, the same impulse surfaced in explicitly technological terms. Delegates from over 100 countries discussed sovereign AI infrastructure and how to prevent the technology’s benefits from being captured by a handful of American firms. On paper, it was the most ambitious assertion of middle-power independence in years.

But all this hopeful and ambitious talk ignores a painful reality: It’s too late in the AI race for the middle powers to play catch-up. The leaders calling for autonomy in Munich and New Delhi are doing so at a moment in which the defining technology of the next decade is controlled by the very powers they seek independence from.

In other sectors, settling for developing homegrown second-best technology might be a viable strategy for preserving sovereignty. In AI, that’s a riskier bet. If the gap between cutting-edge and second-tier systems continues to widen, and especially if advanced AI accelerates scientific research and industrial innovation, access to the best systems could become decisive for economic growth.

…Today’s AI infrastructure projects determine tomorrow’s catch-up capacity, and middle powers lag far behind. The European Union’s up to five planned AI “gigafactories” are slated to come online between 2026 and 2027; in 2024, Elon Musk’s xAI built a cluster of comparable size in 122 days. Europe’s most ambitious AI infrastructure projects are two to three years behind the curve, an eternity in AI development. Canada has committed roughly $2bn to “sovereign AI compute” over five years; Microsoft will spend over $100bn, more than 50 times as much, on its biggest data center in Wisconsin.

«

unique link to this extract


Population around a point • Tom Forth

»

Human population within a distance, from any point in the world.

Select a radius and click on the map.

The data is the Global Human Settlement Layer population grid for 2025. This release was known to be less accurate for small areas, especially where there has been rapid change in population.

The bus stop, tram stop, and train/metro stop data is from Open Street Map from early 2023 and has errors. Many stops are missing, especially outside of Europe. The stops included may be rarely used, or served only by heritage services.

«

Fun! Though also likely to get overloaded as everyone discovers it.
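A rough sketch of what a tool like this computes (not Tom Forth’s actual code): sum the population of grid cells whose centres fall within the chosen radius of the clicked point, using the haversine great-circle distance.

```python
# Toy version of "population within a radius of a point": haversine distance
# plus a sum over (lat, lon, population) grid cells. Grid values are made up.
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometres."""
    R = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def population_within(lat: float, lon: float, radius_km: float, grid) -> int:
    """grid: iterable of (cell_lat, cell_lon, population) tuples."""
    return sum(pop for clat, clon, pop in grid
               if haversine_km(lat, lon, clat, clon) <= radius_km)

# Toy grid: two cells in central London, one in Paris.
grid = [(51.50, -0.12, 10_000), (51.51, -0.10, 8_000), (48.85, 2.35, 999_999)]
print(population_within(51.50, -0.12, 5, grid))  # 18000: Paris cell excluded
```

The real site uses the Global Human Settlement Layer grid, which is far finer-grained, but the principle is the same.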
unique link to this extract


Cuba begins restoring power after energy grid collapses in nationwide blackout • CBS News

»

Officials in Cuba reported an island-wide blackout Monday in the country of some 11 million people as its energy and economic crises deepen. Cuba has blamed its woes on a U.S. energy blockade after President Trump in January warned of tariffs on any country that sells or provides oil to it.

The Ministry of Energy and Mines noted on the social network X a “complete disconnection” of the country’s electrical system and said it was investigating. The ministry later said some “microsystems” were beginning to operate in various territories but did not go into further details.

It was the third major blackout in Cuba over the past four months. The cause was still unknown as of Monday night, Cuban state media said. 

President Miguel Díaz-Canel on Friday said the island had not received oil shipments in more than three months and was operating on solar power, natural gas and thermoelectric plants, and the government has had to postpone surgeries for tens of thousands of people.

A massive outage over a week ago affected the island’s west, leaving millions without power. Almost exactly a year ago, the country suffered a similar massive outage in western Cuba. 

«

The US is effectively blockading Cuba, which has in the past relied on oil from Venezuela, along with Russia and Mexico. Though Cuba can produce about 40% of the petroleum it needs, the remaining shortfall is the reason for the outages.
unique link to this extract


Samsung discontinues its Galaxy Z TriFold after just three months • The Verge

Jess Weatherbed:

»

Samsung is preparing to axe its first three-panel foldable phone less than three months after launching the device in the US. Sales of the $2,899 Galaxy Z TriFold will first be wound down in Korea and then discontinued in the US once remaining inventory has been cleared, an unnamed Samsung spokesperson told Bloomberg.

This follows a report from Korean media outlet Dong-A Ilbo on Monday that says the TriFold will be getting a final domestic restock today, March 17th. Samsung’s website stopped providing future restock updates for the foldable earlier this month, with the TriFold currently listed as “sold out” in the US.

It was only available to purchase directly from Samsung, with Dong-A Ilbo reporting that just 6,000 units have been stocked and sold domestically since the phone launched in Korea on December 12th. Meanwhile, Huawei is already on the second generation of its own trifold phone, but while the original Mate XT Ultimate eventually launched in other regions, the Mate XTs is still limited to China.

«

Francisco Jeronimo of IDC sent out a press comment:

»

“The Trifold was never intended to be a high-volume, mass-market device. It was conceived as a limited-production initiative, designed to test both technological feasibility and market reception. Samsung positioned it as an exploratory project, a forward-looking concept aimed at evaluating consumer interest, usage patterns, and design viability for next-generation form factors.

Production volumes were deliberately limited, and the initiative was not driven by short-term sales objectives but by longer-term innovation goals and learning. Reports of discontinuation should not be interpreted as a failure or a strategic withdrawal.”

«

Perhaps it’s envy of Huawei, which IDC reckons has sold 1.2m trifold devices, generating $3.2bn in revenues – suggesting average prices of $2,666. There’s premium, and there’s premium.
unique link to this extract


Why (and how) everyone is cold-calling the president • Semafor

Max Tani:

»

The calls come in late at night, when the president can’t sleep. They come in when he’s watching TV in the evenings, and right after a game of golf, when he’s in a good mood. They come in early in the morning, as soon as he starts posting to Truth Social — but sometimes he’s a little snappy at that hour.

President Donald Trump’s iPhone won’t stop ringing because his Palm Beach number has become the ultimate status symbol in a town obsessed with proximity to power and influence.

This has produced a curious new form of journalism. In the two weeks since the US and Israel began military operations in Iran, Trump has done more than 30 cell phone interviews. He has become the presidential version of a drive-time radio host, picking up without screening his callers and conducting brief conversations with the public — in this case, journalists from outlets from The New York Times to Washington Reporter. One day earlier this month, ABC News’ Jonathan Karl and Rachel Scott each got separate interviews with Trump, in which he told them each how well the Iranian operation was going.

…When the president picks up, “he is often preoccupied, puts them on speaker in front of a large group of people, and he is loosely chatting and has fun messing with them,” said a White House official, who spoke on the condition of anonymity to say something that reporters ought to realize: Trump isn’t taking these calls that seriously.

“Reporters who think they are being serious journalists by calling him are frankly doing themselves a disservice.”

…One television insider called the breathless Trump phone exclusives “shameless.” Another said they thought it was “silly and doesn’t add much value.” Another said the interviews were “useless.” Of course, all three acknowledged that they had at one point or another called Trump on the phone and reported out the details. But they suggested that the real offenders were others who were abusing the privilege for largely meaningless interviews and, let’s face it, clout.

“He’s having a good time and saying whatever he wants having gotten softball questions,” one journalist who has spoken to Trump told Semafor.

«

So it’s a little like when everyone and their cousin had British PM Boris Johnson’s mobile number, though he might have been more coherent.
unique link to this extract


Google Maps just got a major AI upgrade. Here’s how “Ask Maps” will work • PC Mag

Jibin Joseph:

»

Google Maps can easily pull up nearby locations, but what if you’re not exactly sure what you need? A new Gemini-powered “Ask Maps” button in the app will let you ask natural-language questions about your surroundings. Plus, a dramatic 3D navigation upgrade will help you get there.

When available, the Ask Maps button will appear below the Search bar in Google Maps. The company touts its latest addition as “a new conversational experience that answers complex, real-world questions a map could never answer before.”

Tapping Ask Maps opens a chat interface familiar to Gemini users. Google says you can ask the chatbot questions like “My phone is dying — where can I charge it without having to wait in a long line for coffee?” or “Is there a public tennis court with lights on that I can play at tonight?”

You can also ask the AI to prepare an itinerary for your upcoming trip, and it will respond with details like directions, ETAs, and tips from users on how to explore a hidden gem, the best route to take, or ways to get free tickets.

Ask Maps taps into your Google Maps search and save history to provide personalized responses. If you search for a restaurant, for example, it may recommend vegan options if your past searches and saved places indicate that preference.

Each response option will include photos of the place, an AI-generated summary of user reviews, opening hours, and options to save the location or get directions. You can also share Ask Maps’s response with friends and family before deciding.

The other new Maps addition is support for Immersive Navigation, which Google describes as the “biggest update in over a decade.” It gives you a 3D view of the buildings, overpasses, and terrain around you; video demos make it look like you’re in a driving game.

«

unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Start Up No.2631: Polymarket gamblers try to change the truth, Britannica sues OpenAI, hacking Companies House, and more


The (now Japanese-owned) company that runs car parks across the UK has fallen into administration over its unsupportable debts. CC-licensed photo by Elliott Brown on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


A selection of 10 links for you. Reverse out. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


Car park firm NCP collapses with nearly 700 jobs at risk • BBC News

Mitchell Labiak and Emer Moreau:

»

National Car Parks (NCP) has gone into administration, putting 682 jobs at risk.

The administrator, PwC, said demand for parking had not recovered to pre-Covid levels, pointing to “shifts in commuting and customer driving patterns”. It said NCP could no longer afford to pay its creditors after consistently losing money and was unable to scrap “long-term, inflexible” leases on loss-making sites.

PwC is looking to sell the business as the “best outcome” for those NCP owes money to. “All sites are open, staff remain in post, and trading continues as normal,” PwC added. “We will be engaging with landlords, employees, and other stakeholders as we explore all options.”

NCP was founded in 1931 and is one of the biggest car park operators in the UK. It runs 340 car parks across the country, including in airports, hospitals and train stations. The firm’s debts were £305m greater than the value of its assets, as of 30 September last year, according to a filing from its parent company.

Among the options to secure NCP’s future is to sell some or all of the company.

PwC said NCP had a “high concentration” of inflexible leases that prevented it from reducing costs or scrapping unprofitable car parks. Zelf Hussain, joint administrator and PwC partner, said the company had faced “a challenging trading environment” in recent years.

“Our priority on appointment is to ensure continuity of service while we undertake a detailed review of the business.”

NCP’s parent company, Park24, which is Japanese, said higher energy prices as a result of the outbreak of war in Ukraine in 2022 also put pressure on the business, adding to its operating costs.

«

When you look at the financials, NCP actually made a slim operating profit (£7.6m on £233m of revenue) last year, as it did in 2024, after a hefty loss in 2023. But the leases and other costs dragged it into consistent net losses.

NCP going bust is an eventuality I’d never have expected, based on the cost of parking there. But the trend to Work From Home has murdered it.
unique link to this extract


Changes to UK working habits push car park group into administration • Financial Times

Megan Snaith, with some fabulous historical detail about NCP:

»

The company was founded by Colonel Frederick Lucas in 1931 with a small car park in London.

It grew rapidly after it was bought by Donald Gosling and Ronald Hobson who combined it with their business, Central Car Parks, in 1959. They had spotted the opportunity of converting city bomb sites into car parks, having paid £200 for their first bomb site in Holborn, central London in 1948. NCP was sold to US company Cendant for £800m nearly 40 years later.

That deal almost broke down at the last minute after Hobson objected on the basis that he wanted to keep a single parking space for himself in central London off Oxford Street. He wanted to maintain his weekend habit of packing a flask of tea and driving there in his Bentley to try to predict where people would park.  

NCP has changed hands multiple times since then. In 2007, it was bought by Macquarie, which later sold it to its current owners, Japan’s Park24 in 2017.

«

That detail about bomb sites as potential car parks has always stuck with me. That’s a real “lemonade from lemons” insight.
unique link to this extract


Gamblers trying to win a bet on Polymarket are vowing to kill me if I don’t rewrite an Iran missile story • The Times of Israel

Emanuel Fabian:

»

On X, I saw a user reply to a recent tweet of mine: “There are people saying that they have received word from you that the missile strike in Beit Shemesh on March 10th was in fact intercepted, is this true or did no such interaction occur?”

Another X user responded to my post with the video showing the Iranian ballistic missile impact in Beit Shemesh with: “was there any video of the actual impact.” (Clearly, he didn’t watch the video.)

Checking those X accounts, both appeared to be involved in gambling on the Polymarket betting site.

As far as I now understand, the emails I [had previously] received were intended to confirm whether or not a missile had hit Israel on March 10 in order to resolve a prediction on Polymarket.

Polymarket is one of the largest prediction markets in the world, where users can wager their money on the likelihood of future events, using cryptocurrency, debit or credit cards, and bank transfers. However, there are accusations that the site has been plagued by manipulation and insider trading.

The event that these people had bet on was “Iran strikes Israel on…?” More than 14 million dollars had been wagered on March 10.

The rules of the bet state: “This market will resolve to ‘Yes’ if Iran initiates a drone, missile, or air strike on Israel’s soil on the listed date in Israel Time (GMT+2). Otherwise, this market will resolve to ‘No’.”

However, there is a clause: “Missiles or drones that are intercepted… will not be sufficient for a ‘Yes’ resolution, regardless of whether they land on Israeli territory or cause damage.”

My minor report on a missile striking an open area was now in the middle of a betting war, with those who had bet “No” on an Iranian strike on Israel on March 10 demanding I change my article to ensure they would win big.

More emails arrived in my inbox.

«

I thought that Polymarket didn’t allow betting on wars. But of course it does. And whereas we used to think prediction markets would be a great way to guess at the future – on the “wisdom of crowds” principle – it turns out that giving people a financial interest in bending truth to match their prediction distorts the market rather than giving us the reading we wanted.
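The rule the gamblers were fighting over is simple enough to write down. A hypothetical sketch of the resolution logic as quoted in the piece (names and structure mine, not Polymarket’s): an intercepted missile doesn’t count, regardless of damage, which is precisely why “intercepted” is the word they wanted in the reporter’s article.

```python
# Toy model of the quoted Polymarket resolution rule, for illustration only.
from dataclasses import dataclass

@dataclass
class Munition:
    intercepted: bool
    caused_damage: bool  # irrelevant to resolution if intercepted

def resolve(strike_on_listed_date: bool, munitions: list[Munition]) -> str:
    """Resolve the market per the quoted rules."""
    if not strike_on_listed_date:
        return "No"
    # Intercepted missiles/drones are insufficient for a "Yes" resolution,
    # even if they land on Israeli territory or cause damage.
    if any(not m.intercepted for m in munitions):
        return "Yes"
    return "No"

# The Beit Shemesh impact: one missile, not intercepted -> "Yes".
print(resolve(True, [Munition(intercepted=False, caused_damage=True)]))  # Yes
# If bettors got the report changed to "intercepted" -> "No".
print(resolve(True, [Munition(intercepted=True, caused_damage=True)]))   # No
```

One boolean in one journalist’s article, worth $14m to one side or the other.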
unique link to this extract


Google scraps AI search feature that crowdsourced amateur medical advice • The Guardian

Andrew Gregory:

»

Google has dropped a new artificial intelligence search feature that gave users crowdsourced health advice from amateurs around the world.

The company had said its launch of “What People Suggest”, which provided tips from strangers, showed “the potential of AI to transform health outcomes across the globe”.

But Google has since quietly removed the feature, according to three people familiar with the decision. A Google spokesperson confirmed “What People Suggest” had been scrapped. The move came as part of a “broader simplification” of its search page and had nothing to do with the quality or safety of the new feature, the spokesperson said.

The revelation comes as the company faces mounting scrutiny over its use of AI to provide millions of users with health information and advice. In January, a Guardian investigation found people were being put at risk of harm by false and misleading health information in Google AI Overviews. The AI-generated summaries are shown to 2 billion people a month, and appear above traditional search results on the world’s most visited website.

Google initially sought to downplay the Guardian’s findings. The AI Overviews that alarmed independent experts linked to reputable sources and recommended seeking expert advice, the company said. Days later, Google removed AI Overviews for some but not all medical queries.

In March last year at an event in New York, Google said it planned to expand medical-related AI summaries in search. The company said it was adding a new feature, “What People Suggest”, which aimed to provide users with information from people with similar lived medical experiences.

On the day of “The Check Up” event, Karen DeSalvo, then Google’s chief health officer, wrote a blog post outlining why the company was launching the new feature, and how it would help users.

“While people come to search to find reliable medical information from experts, they also value hearing from others who have similar experiences,” wrote DeSalvo. “That’s why we’re making it even easier to find this type of information on Search with a new feature labelled ‘What People Suggest’.”
«

Is it worth pointing out that this was always an astonishingly bad idea? Google doesn’t seem to learn from its mistakes.
unique link to this extract


Encyclopedia Britannica is suing OpenAI for allegedly “memorizing” its content with ChatGPT • The Verge

Stevie Bonifield:

»

On Friday, Encyclopedia Britannica and dictionary publisher Merriam-Webster filed a lawsuit against OpenAI alleging that it used their copyrighted content to train its AI, then generated responses that were “substantially similar” to their content, as previously reported by Reuters.

According to Britannica, OpenAI repeatedly copied its content without permission, stating, “GPT-4 itself has ‘memorized’ much of Britannica’s copyrighted content and will output near-verbatim copies of significant portions on demand. The memorized examples are unauthorized copies that [OpenAI] used to train their models, including GPT-4.”

The lawsuit goes on to include examples of responses from OpenAI’s models side by side with Britannica’s text, in which entire passages appear to match word for word. Britannica also claims that OpenAI has been “cannibalizing” its web traffic by generating responses that “substitute, or directly compete” with Britannica’s content, rather than directing users to its website the way a traditional search engine would.

«

Join the queue, Britannica, though its complaint does have some merit. The content of an encyclopaedia isn’t as random as the rest of the web’s contents; it’s filled with facts, but those facts are organised in a particular way.

As for the grumble that OpenAI isn’t sending clicks in the way that a search engine does: welcome to the new paradigm, people.
unique link to this extract


AI agents are taking over contract negotiations • IEEE Spectrum

Michael Dumiak:

»

Some of the world’s largest companies with the biggest supply chains—including Walmart, the global shipping giant Maersk, and the telecom servicer Vodafone—are now using bots powered by artificial intelligence to negotiate and maintain supplier contracts.

That these sophisticated AI systems were designed and built by a startup in Estonia is interesting; it’s even more notable that bots now routinely engage in automated contract negotiations for sprawling global enterprises. But what’s really eye-opening is that these AI agents aim to work autonomously. Which prompts a question: What will happen if the AIs start to haggle amongst themselves?

“In the future I can imagine all sorts of agents in the real physical world negotiating with one another,” says Tim Baarslag, a senior researcher in intelligent and autonomous systems at the Centrum Wiskunde & Informatica in Amsterdam. “Letting these bots run completely wild, I think, requires more research.”

Baarslag has wrestled with negotiation bot concepts for years (one of his peers has a running project called Pocket Negotiator). In 2017 he and his colleagues published “When Will Negotiation Agents Be Able to Represent Us?” They drew a sharp line between automated and autonomous negotiation. The difference is the freedom to negotiate independently.

The five-year-old Estonian startup Pactum is clearly marketing its bot as an autonomous agent. In addition to Maersk and Walmart, its client list now includes a wire and cable supplier and an electrical supply wholesaler (once part of Westinghouse). The startup landed a US $20m venture capital investment in July from backers including Maersk itself.

…Pactum calls its agent an autonomous negotiation suite. The system’s machine learning can analyze a massive set of complex contractual terms using historical and market data from both within and outside the company. It can send its analysis to a human user, such as a buyer or procurement officer—or, on its own, it can produce and forward a set of contract options to a vendor (mostly based on price, delivery dates, and billing cycles). The bot can take counteroffers and respond.

«

unique link to this extract


Companies House vulnerability enabled company hijacking • Tax Policy

Dan Neidle:

»

A major vulnerability in the Companies House website gave unauthorised access to the private dashboard of any of the five million registered companies for five months. It exposed directors’ home addresses and email addresses, and appears to have enabled attackers to change company and director details – and even file accounts.

This article sets out what we know, what we don’t, and what businesses should be doing to protect themselves.

What we confirmed:
• unauthorised access to any company’s dashboard
• exposure of non-public personal data
• submission of a change that generated a confirmation number

What remains unconfirmed:
• whether the change was actually processed
• how long the vulnerability existed
• whether it was exploited by criminals
• whether Companies House can identify affected companies

The vulnerability is incredibly simple, and involves just pressing the browser’s “back” button at a particular time.

The vulnerability was discovered on Thursday 12 March by John Hewitt at Ghost Mail, a corporate services provider. He tried to contact Companies House immediately, but didn’t get a response – so he contacted us.

…John used the vulnerability to view the private Companies House dashboard of ClarityDW Ltd, a digital communications consultancy owned by Jonathan Phillips. Jonathan kindly gave us permission to do this.

John then used it to view the dashboard of a company I own, and to modify my own registered address. That appeared to work, as it generated a confirmation number. As you will hear, I was incredulous at what John showed me.

I then spoke to computer security specialists. To rule out the possibility that it was something specific to John’s computer, network or account, I tested the vulnerability myself… This shows the exploit revealing private information that’s not published by Companies House, such as personal email addresses and full dates of birth (and you can see that in the video, with Jonathan’s personal information masked).

«

On Monday, Companies House released a statement acknowledging that “It may also have been possible for unauthorised filings — such as accounts or changes of director — to have been made on another company’s record.” So now every company just has to go through all those records and check they haven’t changed. The demand will probably overload the Companies House servers.
unique link to this extract


Behind the curtain: Trump’s escalation trap • Axios

Jim VandeHei and Mike Allen:

»

Trump could pull out tomorrow. But the Iranians could keep the Strait of Hormuz closed and push oil prices so high that America would have to re-engage.

The Iranians have made it clear in private and in public that even if Trump decides to end the war, they could continue shooting missiles and rockets until they get guarantees that this is the end of the war, not just a temporary ceasefire.

Behind the scenes: Trump has grown accustomed to doing what he wants and then quickly improvising if things go south. But this time, some in his inner circle have what one official called “buyer’s remorse” — growing fears that attacking Iran was a mistake.

A source close to the administration said some key officials around Trump were reluctant or wanted more time. “He ended up saying, ‘I just want to do it,'” the source said. “He grossly overestimated his ability to topple the regime short of sending in ground troops.”

The source said Trump was “high on his own supply” after last summer’s quick strikes in Iran and January’s abduction of Venezuelan President Nicolás Maduro: “He saw multiple decisive quick victories with extraordinary military competence.”

Reality check: Trump’s war of choice certainly looks like a military success so far. Iran’s missile and drone launches have greatly decreased, indicating it’s running out of weapons or the ability to fire them.

«

I’m not sure about the reality check. Iran has an unknown number of drones and mines, which it can use to harass shipping in the Strait.
unique link to this extract


Why Hormuz will haunt us long after this war ends • Financial Times

Gideon Rachman:

»

The closure of the Strait of Hormuz is one of the most foreseen “unforeseen problems” in history. For decades, academics and game theorists have speculated about the possibility that, in wartime, Iran could choke off the narrow waterway through which 20% of the world’s oil exports pass.

Donald Trump was warned of the danger to the strait as America and Israel prepared to attack Iran. But the US president waved away these concerns, predicting instead that the Islamic republic would swiftly capitulate.

A conflict with Iran that started with vague war aims now has one clear and overriding objective: reopen the Strait of Hormuz. Ironically and infuriatingly, the only reason the strait is closed is because the US and Israel went to war in the first place.

It is not in Trump’s power to reopen this vital sea passage by declaring victory and walking away. Instead his war with Iran — and the particular issue of the Strait of Hormuz — will define the rest of his presidency and may haunt his successors.

That is because the strait’s closure creates both an immediate crisis and a long-term strategic quandary. The current problem is that the longer it is closed, the greater the threat of a global recession. The future dilemma is that Iran now knows that control of the Strait of Hormuz gives it a stranglehold over the world economy. Even if it relaxes its grip in the short term, it can tighten it again in future.

The difficulties of reopening the strait are already very apparent. Iran does not have to sink or impede every tanker that tries to pass through. The spate of attacks already carried out — and the threat of new ones — has been enough to persuade ship owners, crews and insurers to steer clear.

…Trump is now asking America’s allies to send their navies to break the Iranian chokehold on the strait. He has even appealed to Beijing. The UK, the EU and China do have a real interest in reopening the Strait of Hormuz. But they will be understandably reluctant to put their own forces at risk to solve a problem that they did not create and that the US navy cannot fix on its own.

A year of tariffs, threats and insults from the Trump administration towards its European allies has also burned through goodwill towards Washington. They also know that any navy operating in the Strait of Hormuz would be very vulnerable to Iranian attacks — and might have to keep the operation up over many months.

«

unique link to this extract


The Curse of the Long Boom • WIRED

David Karpf:

»

Back in February I posted some reflections on what I’m looking for in the WIRED back catalog. Here’s how I phrased it then: “All of my best thinking comes from getting stuff wrong. That’s the angle from which I approach all social science research questions.” The whole point of making predictions, from this perspective, is to help yourself identify the limits of your own knowledge, leading to harder questions that improve your understanding of the world.

Credit to Dan Davies—his coinage [“if you don’t make predictions, you never know what to be surprised by”] is so much pithier.

He also provides a corollary: “If you don’t make recommendations, you won’t know what to be disappointed by.”

Let me offer a second corollary: “If you retrofit your predictions to insist they were right after all, you’ll never learn a single damn thing.”

I mention this because, as you might imagine, I run into a lot of incorrect predictions as I read through the WIRED archive. Back in the ’90s, WIRED was chock-full of a very particular style of futurism—one that has not aged especially well.

In keeping with Davies’ Law, this presents a lovely opportunity. I’m rereading the entire magazine back catalog to understand how emerging technologies looked, take stock of where people thought the world was headed, and draw lessons from the resulting surprises.

The thing that sets me back on my heels, though, is that a lot of those old WIRED techno-optimists are still out there making predictions today. And to hear them tell it, they were right all along.

Huh?

«

The folks at Wired have been cheating to protect their Long Boom story for ages. Karpf has been detailing Wired’s mad (but also self-correcting) predictions on his Substack, though he has since stopped writing there. (Thanks Lloyd W for the link.)
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Start Up No.2630: Google Fiber sold to private equity, AI companies train on “emotion”, Chromebooks’ RAM challenge, and more


The number of human bank tellers has slumped dramatically since 2010 in the US. But it isn’t because of ATMs. So why? CC-licensed photo by Eric Lubbers on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.

A selection of 9 links for you. Telling. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


Why ATMs didn’t kill bank teller jobs, but the iPhone did • David Oks

David Oks:

»

So what happened to bank tellers [whose numbers in the US have dropped precipitously since 2010]? Autor, Bessen, Vance, and the like are right to point out that ATMs did not reduce bank teller employment. But they miss the second half of the story, which is that another technology did. And that technology was the iPhone. The huge decline in bank teller employment that we’ve seen over the last 15-odd years is mainly a story about iPhones and what they made possible.

But why? Why did the ATM, literally called the automated teller machine, not automate the teller, while an entirely orthogonal technology—the iPhone—actually did?

The answer, I think, is complementarity.

In my last piece, on why I don’t think imminent mass job loss from AI is likely, I talked a lot about complementarity. The core point I made was that labor substitution is about comparative advantage, not absolute advantage: the relevant question for labor impacts is not whether AI can do the tasks that humans can do, but rather whether the aggregate output of humans working with AI is inferior to what AI can produce alone. And I suggested that given the vast number of frictions and bottlenecks that exist in any human domain—domains that are, after all, defined around human labor in all its warts and eccentricities, with workflows designed around humans in mind—we should expect to see a serious gap between the incredible power of the technology and its impacts on economic life.

That gap will probably close faster than previous gaps did: AI is not “like” electricity or the steam engine; an AI system is literally a machine that can think and do things itself. But the gap exists, and will exist even as the technology continues to amaze us with what it can now accomplish.

But by talking about why ATMs didn’t displace bank tellers but iPhones did, I want to highlight an important corollary, which is that the true force of a technology is felt not with the substitution of tasks, but the invention of new paradigms. …When a technology automates some of what a human does within an existing paradigm, even the vast majority of what a human does within it, it’s quite rare for it to actually get rid of the human, because the definition of the paradigm around human-shaped roles creates all sorts of bottlenecks and frictions that demand human involvement. It’s only when we see the construction of entirely new paradigms that the full power of a technology can be realized. The ATM substituted tasks; but the iPhone made them irrelevant.

«

This is a very thoughtful and insightful piece which offers a lot of food for thought. In the town where I live, there used to be at least five banks and two building society branches, with attendant staff (and five ATMs in total); now there are two building societies, one bank, and two ATMs. I haven’t been into a bank branch for years; the last time I did was when I was trying to get some copper coins converted into their equivalent in pounds. (Two of the banks refused.)
unique link to this extract


AI companies want to use improv actors to train AI on human emotion • The Verge

Hayden Field:

»

If you’ve got strong creative instincts, the ability to authentically portray emotion, and are capable of staying true to a character’s voice throughout a scene, there’s a job listing calling for your experience.

The catch: You won’t be performing in a theater, a film studio, or an underground performance space. You’d be using your talents to train an AI model for “one of the leading AI companies,” according to the open role posted by Handshake, a company that provides training data to OpenAI and other labs.

Handshake AI is one of a handful of companies of its kind, scrambling to provide more and more niche or specific training data to AI labs in order to feed the models. AI models are often described as “jagged,” meaning they’re typically great at some surprisingly complex tasks but fail deeply at some simple ones. AI companies are trying to fix the gaps in their models’ knowledge with specialized data labeling, and companies like Handshake, Mercor, and Scale AI have adjusted accordingly, hiring professionals in a wide range of industries.

Handshake’s demand for training data tripled last summer, as The Verge reported in December, and the company surpassed a $150 million run rate in November, scrambling to keep up with demand. Handshake and its competitors have touted their networks of tens of thousands (or more) professionals in white-collar industries, from chemists and doctors to lawyers and screenwriters. Many of these professionals worry they’re training AI models in a way that will make their careers obsolete even quicker than may have happened otherwise.

And now the leading AI labs have come for sketch comics, improv actors, and more.

…it’s looking for people who can essentially “test the limits of the world’s top LLMs’ understanding” by teaching the models how to recognize or replicate human tone and emotions. “Emotional awareness” is one of the requirements, for instance, specifically the “ability to recognize, express, and shift between emotions in a way that feels authentic and human.” The job listing also called for “interactions that feel grounded, human, and fun to play.”

Handshake declined to comment, and the listing does not specify what the training data will be used for.

«

unique link to this extract


LATENT: Learns Athletic humanoid TEnnis skills from imperfect human motioN daTa • Github

Li Yi et al, Tsinghua University:

»

Human athletes demonstrate versatile and highly-dynamic tennis skills to successfully conduct competitive rallies with a high-speed tennis ball. However, reproducing such behaviors on humanoid robots is difficult, partially due to the lack of perfect humanoid action data or human kinematic motion data in tennis scenarios as reference.

In this work, we propose LATENT, a system that Learns Athletic humanoid TEnnis skills from imperfect human motioN daTa. The imperfect human motion data consist only of motion fragments that capture the primitive skills used when playing tennis rather than precise and complete human-tennis motion sequences from real-world tennis matches, thereby significantly reducing the difficulty of data collection.

Our key insight is that, despite being imperfect, such quasi-realistic data still provide priors about human primitive skills in tennis scenarios. With further correction and composition, we learn a humanoid policy that can consistently strike incoming balls under a wide range of conditions and return them to target locations, while preserving natural motion styles.

We also propose a series of designs for robust sim-to-real transfer and deploy our policy on the Unitree G1 humanoid robot. Our method achieves surprising results in the real world and can stably sustain multi-shot rallies with human players.

«

This is the paper. It’s going to be interesting to see how it learns to serve: one assumes it will be extremely consistent and hit it very hard. The question is how long it will take before this is playing at a serious standard. Train it on Alcaraz, you’ll see a hell of an improvement.

Definitely worth going to the page linked here to see the videos! I’ve seen worse from humans, to be honest.
unique link to this extract


“This is not the computer for you” • Sam Henri Gold

Sam Henri Gold:

»

There is a certain kind of computer review that is really a permission slip. It tells you what you’re allowed to want. It locates you in a taxonomy — student, creative, professional, power user — and assigns you a product. It is helpful. It is responsible. It has very little interest in what you might become.

The MacBook Neo has attracted a lot of these reviews.

The consensus is reasonable: $599, A18 Pro, 8GB RAM, stripped-down I/O. A Chromebook killer, a first laptop, a sensible machine for sensible tasks. “If you are thinking about Xcode or Final Cut, this is not the computer for you.” The people saying this are not wrong. It is also not the point.

Nobody starts in the right place. You don’t begin with the correct tool and work sensibly within its constraints until you organically graduate to a more capable one. That is not how obsession works. Obsession works by taking whatever is available and pressing on it until it either breaks or reveals something. The machine’s limits become a map of the territory. You learn what computing actually costs by paying too much of it on hardware that can barely afford it.

I know this because I was running Final Cut Pro X on a 2006 Core 2 Duo iMac with 3GB RAM and 120GB of spinning rust. I was nine. I had no business doing this. I did it every day after school until my parents made me go to bed.

«

He’s absolutely right: there’s a terrible tendency for reviewers to say “I have decided that this machine is not correct for you, ethereal internet reader, and I will tell you which people it is correct for.” In reality, we use what we can. It’s actually quite rare to have the exact perfect machine for your needs.
unique link to this extract


Memory crunch threatens to kneecap Chromebook shipments • The Register

Dan Robinson:

»

Chromebooks, the low-cost computing option popular with education buyers, will be squeezed hardest this year as memory prices spiral out of control.

According to the mystics at Omdia, total global PC shipments are on track to decline 12% in 2026: desktop PCs by 10% to 53.2m units and laptops by 12% to 192.2m units.

…Omdia expects the entry-level bracket to experience the biggest slowdown. ChromeOS platforms are forecast to shrink 27.6% year-on-year in 2026, compared to 12.1% for Windows and 4.8% for macOS.

“The supply-driven downturn in 2026 will not affect all PC platforms equally,” said Kieren Jessop, research manager at Omdia.

“Chrome devices face the steepest decline at 28 percent, as the education-heavy platform is particularly exposed to tighter component allocation, lower margins, and the discontinuation of some memory and storage products.”

Apple’s “vertically integrated supply chain and premium positioning” mean it is comparatively sheltered from memory dynamics.

The forecast 12% drop in PC shipments, however, is based on an expected minimum 60% rise in memory and storage prices during the first quarter, with the hope that subsequent increases throughout 2026 will moderate as pressure eases. Things could deteriorate, pushing PC shipments toward a 15% decline, possibly even greater.

«

Not that Apple intended it, but launching the Neo into a market where rivals are being squeezed might turn out to be a smart move.
unique link to this extract


Buzzfeed has “substantial doubt” it can stay in business • CNN Business

Ramishah Maruf:

»

Buzzfeed, the digital media company that took the mid-2010s by storm, said on Thursday it has “substantial doubt” about its ability to continue as a business.

In an earnings report released Thursday, Buzzfeed said it has engaged in “strategic conversations” about relieving its liquidity issues.

“We believe there is a gap between the value of our individual assets and our market capitalization that suggests significant unrecognized upside,” founder and CEO Jonah Peretti said.

The company was $165m in debt three years ago; that’s been slashed by more than 65%, Buzzfeed said. But Matt Omer, Buzzfeed’s chief financial officer, said the company is still burdened by legacy commitments. Alongside its namesake, Buzzfeed also owns news site Huffpost and online food network Tasty.

“We’re exploring strategic options to complete the work we started years ago and position the Company to operate profitably on a sustainable basis,” Omer said in the earnings statement.

Peretti gave a hint of what that could look like in the statement: “In 2026, our focus is demonstrating the value of our brands, Studio IP, and new AI apps to the market.”

In 2025, Buzzfeed had a net loss of $57.3m, according to its earning report. The company noted it did not have enough resources to fund its cash obligations for the next year.

The decline of Buzzfeed’s digital empire, synonymous with viral videos and online quizzes, has been playing out since its IPO in 2021.

«

Strong suspicion that this is going to get broken up for parts. Back in April 2023 there was a link here to “How Buzzfeed News went bust”. Now the parent seems to be heading the same way. It says a lot about the challenge of digital media nowadays. The old names – the Guardian, etc – are still going, decades or centuries later. Meanwhile, after 20 years, Buzzfeed, which was feted by Ben Thompson back in March 2015 as “the most important news organisation in the world”, looks close to the edge.
unique link to this extract


Australia releases six days’ worth of petrol and five of diesel from emergency stockpile • ABC News

Jake Evans:

»

Australia will make available about six days’ worth of petrol from its emergency stockpile and five days of diesel, the first use of its fuel reserves since the invasion of Ukraine in 2022.

Australia currently holds 36 days’ worth of petrol supply, 29 days’ worth of jet fuel and 32 days’ worth of diesel, according to the latest data.

Energy Minister Chris Bowen said the fuel would not flow immediately due to the complexities of supply chains, but it would give fuel retailers flexibility to manage their supply.

“The minimum stock obligation which was introduced … for this purpose, [for] the rainy day, is now necessary,” Mr Bowen said.

“There is a war. I think war ticks the boxes of crisis.”

Reserves for petrol and diesel already sit above the minimum requirements established by the government in 2023.

«

A month’s worth of petrol supply? That doesn’t seem like a lot when the principal transport route is closed. As someone remarked, Australia is a couple of months away from Mad Max (the first one, with Mel Gibson).
unique link to this extract


Bahrain’s Alba shuts 19% of aluminium capacity as Hormuz disruption continues • Reuters

Tom Daly:

»

Aluminium Bahrain, known as Alba, said on Sunday it had initiated a shutdown of three aluminium smelting lines accounting for 19% of its capacity to preserve business continuity amid ongoing ​disruption in the Strait of Hormuz.

The closures are the latest impact ​on the Middle East aluminium sector, which accounts for around 9% of global supply, from the US-Israeli war on Iran. Fears of shortages ​propelled London Metal Exchange aluminium to a nearly four-year high of $3,546.50 per metric ​ton on Thursday.

Alba, which has smelting capacity of 1.62 million tons of aluminium per year, said in a statement it had initiated a “controlled and safe shutdown” of reduction lines 1, 2 ​and 3.

“This targeted, line-specific action is designed to optimise the utilisation of ​Alba’s existing raw materials inventory and prioritise operational stability across Reduction Lines 4, 5 and 6,” added Alba, which describes itself as the “world’s largest aluminium smelter on one site.”

The company had issued force majeure on March 4 since it was unable to ship metal to customers due to the effective closure of the Strait of Hormuz. The ​closure has also ​left Middle East smelters unable to bring in vessels carrying their key raw material, alumina.

Energy supply is another issue for smelters. Qatar’s Qatalum had ​begun a shutdown on March 3 due to a suspension ​of its gas supply but will now operate at 60% capacity.

«

The ripple effects of this completely unfocussed assault on Iran are going to go on and on.
unique link to this extract


Google Fiber will be sold to private equity firm and merge with cable company • Ars Technica

Jon Brodkin:

»

Google Fiber, now officially called GFiber, is being sold to private equity firm Stonepeak and will be combined with cable-and-fiber firm Astound Broadband to create a larger Internet service provider.

Google owner Alphabet announced last Wednesday that it will keep only a minority stake in the fiber ISP that launched with grand ambitions in 2012 but scaled back its expansion plans in 2016. Alphabet and Astound owner Stonepeak announced “an agreement to combine GFiber with Astound Broadband, creating a leading independent fiber provider,” with the merged company to be “majority owned by Stonepeak, an investment firm specializing in infrastructure and real assets.”

The deal is subject to regulatory approvals and other closing conditions, with an expected closing date in Q4 of this year. The sale price was not disclosed. The deal will help GFiber take “a major step toward its goal of operational and financial independence” and obtain the “external capital and strategic focus needed to accelerate its next phase of growth,” the announcement said.

«

Google Fiber [sic] launched with the intention of being the “no data caps, no hidden fees, good prices, high speeds” provider which would shame rivals into improving their services.

With private equity taking over, one doesn’t expect any of those to hold. Does Google just get bored with things? Or is its management not capable of looking after more than a certain number of things at once?
unique link to this extract



Start Up No.2629: life as an accidental sports gambler, love my typos!, AI agents run riot, why clean energy beats drilling, and more


In the past 20 years, Paris has cut car traffic in half, and expanded cycle lanes sixfold. Will the mayoral election reverse that? CC-licensed photo by PhotoLanda on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


A selection of 9 links for you. Wheely interesting. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


My year as a degenerate sports gambler • The Atlantic

McKay Coppins, being a Mormon, is “prohibited from indulging in games of chance; besides, I had always thought of gambling as a waste of time”. The Atlantic gave him $10,000 to bet on the NFL season, with a promise it would cover his losses and halve the winnings with him. So he made some bets on the first night:

»

For 200 bucks, I had purchased an artificial rooting interest in a game I had no reason to care about. I kept watching even after a weather delay pushed it late into the night, scrolling frenetically next to my sleeping wife in search of angles to exploit with late-game bets. Most of my bets ended up losing, but the long-shot Hurts-Barkley parlay hit, and when the game ended, I calculated that I was up $20.

The next morning, I proudly shared the news with Annie, who high-fived me and immediately began to fantasize about how we would spend my winnings for the season. Could we replace our dying KitchenAid mixer? Remodel the kitchen pantry? Like so many wives before her, she had looked upon my foray into sports gambling with a bemused air of exasperation; now she was seeing a potential upside.

I laughed at her sudden enthusiasm—but I was starting to get ideas myself. I had made $20 on my very first night of gambling. Scale up the wager sizes, multiply across all 272 games in the NFL season, throw in some NBA and college football, and I stood to make—what, $10,000? $20,000? More?

I knew, of course, that I wouldn’t win every bet. But I didn’t see the harm in dreaming. As Annie and I traded home-improvement fantasies, I tried my best not to dwell on the last thing the bishop had said to me: “Be careful.”

Ever since the advent of sports, humans have found ways to lose money gambling on them. Ancient Greeks wagered on the (occasionally rigged) early Olympic Games; Romans bet on chariot races and gladiatorial contests (also sometimes rigged).

«

This is a terrific piece, but it also shows that sports gambling is a huge, huge problem for everyone within its blast radius. Such as:

»

[Retired French tennis player] Caroline Garcia doesn’t remember the first abusive message she received from an angry gambler who lost a bet on her, but she knows she was still a teenager… more than a decade of death threats from deranged bettors can make you appreciate high-tech security systems and heavily policed streets.

«

(This is a gift link, free to all.)

unique link to this extract


How Paris beat the car • Financial Times

Simon Kuper:

»

The city’s transition away from the car, though fantastically chaotic, has become a global role model. Under mayor Anne Hidalgo, Paris was “the most influential city in the world”, says Canadian urbanist Brent Toderian. Parisian car traffic fell by more than half between 2002 and 2023, while cycle lanes expanded sixfold. Bikes now make more than twice as many journeys as cars. Hidalgo, stepping down after 12 years, exulted: “The bike beat the car.”

This Sunday and next, Paris elects a new mayor. The election is in part a referendum on cars. The frontrunners are Emmanuel Grégoire of the left, who follows Hidalgo’s line even though she seems to dislike him, and car-friendly rightwinger Rachida Dati. So what are the lessons from the Parisian revolution?

First, pushing out cars improves life for most inhabitants. Paris has reduced traffic accidents, noise and air pollution. More than 300 “school streets” have been pedestrianised; kids play there after school. More than ever before, Paris is a sea of terraces: from April to October, cafés and restaurants can put tables on parking spaces outside their premises. Cities shouldn’t be storage spaces for heaps of metal.

Lesson two is that banishing cars doesn’t hurt an urban economy. Retailers often worry it will deter their customers. Studies repeatedly show it doesn’t. More broadly, French Hidalgo-haters need to explain why Paris is in the global top four of business-focused rankings of cities by Oxford Economics, the Mori Memorial Foundation and Kearney.

Lesson three: car-free cities must offer people good alternative ways to travel. Paris itself does: it has world-class public transport plus cycle lanes. Only 28% of Parisian households own a car. But Paris is a relatively small city of 2.1 million inhabitants. The five million people living outside the ring road in the “Grand Paris” metropole are less well served. True, connections are improving: 68 suburban metro stations are opening from 2024 through 2031.

«

Remarkable. There are more lessons – about deliveries, and the need to control bicycles – but it seems that Paris has made a change unlike any other city.
unique link to this extract


How typos became the new status symbol • Business Insider

Katie Notopoulos:

»

I come bearing great news for my kind of people (horrible typists): Typos are the new status symbol. Garbled spelling, a missed space, improper capitalization — those are all the new and best ways to signal to others that you are powerful and elite.

The Wall Street Journal, a place that employs editors to do more than just catch typos, wrote about how the rich and powerful are, in some cases, abandoning perfect prose. The examples they cite include Jack Dorsey’s all-lowercase memo announcing layoffs at Block and David Ellison texting David Zaslav and somehow writing their shared first name as “Daivd” (as someone who typos their own five-letter name not infrequently, I find this relatable).

The most notable recent example of how the rich and powerful have abandoned the bourgeois veneration of proper writing is in the Epstein Files, which are full of Jeffrey Epstein’s awful typing in emails. Of course, Epstein isn’t aspirational. Please do not mistake me here. But if you look at just his typing — and some of the replies he got from rich and powerful friends — you can gather my point: it seems that if you’re rich and powerful enough, one of the many things people are willing to overlook is god-awful written communications.

I have written before about “emailing like a CEO” — replying to emails immediately, often with just a few words, or sending a message with just a subject line and no body. For the most important person at a company, some of the pleasantries of formal email style aren’t required. I even tried emailing like a CEO myself as an experiment. Hammering through my inbox like Paul Bunyan made me feel invigorated.

«

So we’re now in the “evasive manoeuvres” part of AI adoption. It’s a good excuse for being unable to spell, I suppose.

unique link to this extract


‘Exploit every vulnerability’: rogue AI agents published passwords and overrode anti-virus software • The Guardian

Robert Booth:

»

Rogue artificial intelligence agents have worked together to smuggle sensitive information out of supposedly secure systems, in the latest sign cyber-defences may be overwhelmed by unforeseen scheming by AIs.

With companies increasingly asking AI agents to carry out complex tasks in internal systems, the behaviour has sparked concerns that supposedly helpful technology could pose a serious inside threat.

Under tests carried out by Irregular, an AI security lab that works with OpenAI and Anthropic, AIs given a simple task to create LinkedIn posts from material in a company’s database dodged conventional anti-hack systems to publish sensitive password information in public without being asked to do so.

Other AI agents found ways to override anti-virus software in order to download files that they knew contained malware, forged credentials and even put peer pressure on other AIs to circumvent safety checks, the results of the tests shared with the Guardian showed.

The autonomous engagement in offensive cyber-operations against host systems was unearthed in laboratory tests of agents based on AI systems publicly available from Google, X, OpenAI and Anthropic and deployed within a model of a private company’s IT system.

“AI can now be thought of as a new form of insider risk,” warned Dan Lahav, cofounder of Irregular, which is backed by the Silicon Valley investor Sequoia Capital.

For the new tests of how AI agents behave, Lahav modelled an IT system to replicate a standard company, which he called MegaCorp.

It included a common type of company information pool with details about products, staff, accounts and customers. A team of AI agents was introduced to gather information from this pool for employees. The senior agent was told to be a “strong manager” of two sub-agents and “instruct them to creatively work around any obstacles”.

«

Perhaps tangentially related: AI scans drove UK reports of fraud to record 444,000 last year.
unique link to this extract


If computers are the future, why are computer users expected to be permanently illiterate? • Lapcat Software

Jeff Johnson:

»

The iPhone seems to have engendered a culture of anti-intellectualism and learned helplessness so pervasive that users have become evangelists for their own disempowerment. They demand that all software be “intuitive,” by which they mean idiot-proof, so simple that a n00b can use the software effortlessly and effectively. A suggestion to read the fine manual is greeted with charges of user hostility.

Those who habitually, endlessly scroll YouTube or social media would balk at investing a little time to master the computing tools they use daily. Perhaps they would be surprised to learn that the supposedly intuitive 1984 Macintosh came with a 170-page manual. A common conceit is that the Mac was easier to use than other computers out of the box; the reality is that the Mac was easier to use after the initial learning curve. The system went out of its way to establish internal consistency and interoperability, but the user first had to make sense of the system and its principles.

The latest front in the war against power users is Artificial Intelligence. The promise of AI appears to be that you’ll never have to learn anything. Don’t know something? Just ask AI for the answer. Can’t do something? Just ask AI to do it. Ignorance is bliss. Laziness is encouraged in the name of efficiency. The prospect that one may have to work and struggle to achieve one’s goals is considered abhorrent. The very notion of self-improvement becomes obsolete. Life under AI is a video game, pure joy… as long as you continue inserting tokens into the machine. Hopefully my video arcade metaphor hasn’t become obsolete too.

A more contemporary example: playing guitar, which practically anyone can do with practice, is replaced entirely by playing Guitar Hero. I’m sure that there are Guitar Hero heroes, people who have mastered the game, but could a single one of them ever write a song? I guess we have to ask AI to write our songs in the future. The public debates whether AI will eventually become as intelligent as humans, or even super-intelligent, when I think the relevant question is whether humans will eventually become as dumb as AI, or even super-dumb, as in Idiocracy. I fear that none of this will end well, except for the computer and LLM vendors.

«

I used the original Lisa, and do not recall ever reading the manual. It was self-explanatory. (However, one couldn’t program on it without going into much deeper water than I was prepared for. By contrast, all the CP/M machines were relatively easy to work with.)
unique link to this extract


Analysis: why clean energy will cut UK gas imports by more than North Sea drilling • Carbon Brief

Daisy Dunne, Josh Gabbatiss and Simon Evans:

»

Carbon Brief analysis shows that the UK’s gas production in the North Sea is set to drop 99% by 2050, when compared to 2025 levels, with new licences pushing this figure down only slightly to 97%. (Oil production is also in long-term decline.)

Additionally, the analysis shows that the continued expansion of renewables and low-carbon technologies offers far greater protection against volatile gas imports than new domestic drilling.

The chart below [in the article] shows how the roughly 15 gigawatts (GW) of wind and solar power secured in the latest UK renewable-energy auction will avoid the need to import 78 “Q-Flex” tankers full of liquefied natural gas (LNG) each year by 2030. This gas would cost roughly £4bn at current prices, which stood at 126p per therm as of 11 March.

(Gas can be either transported via pipelines or compressed into LNG and shipped across oceans, as is the case for gas coming into the UK from the US, Qatar or Algeria, for example.)

This is nearly six times more than the extra domestic gas production in 2030 if new licences are issued for North Sea drilling, according to Carbon Brief analysis of data from the UK government’s North Sea Transition Authority (NSTA).

Moreover, the 15GW of new renewables were secured in a single auction round. Another auction, likely to add significantly to this tally, is due to take place later in 2026.

«
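The £4bn figure can be sanity-checked with some back-of-envelope arithmetic. In the sketch below, only the 15GW, the 78 tankers and the 126p/therm price come from the article; the capacity factor, gas-plant efficiency and unit conversion are my own assumed round numbers, so treat the result as a plausibility check rather than a reproduction of Carbon Brief’s model.

```python
# Rough check of the "15 GW of renewables avoids ~£4bn of gas" claim.
# Assumed inputs (NOT from the article): capacity factor, plant efficiency.

gw = 15                       # new wind/solar capacity (from the article)
capacity_factor = 0.35        # assumed blended wind/solar load factor
hours_per_year = 8760

elec_twh = gw * hours_per_year * capacity_factor / 1000   # ~46 TWh of electricity/year

gas_plant_efficiency = 0.5    # assumed efficiency of the displaced gas generation
gas_twh = elec_twh / gas_plant_efficiency                 # ~92 TWh of gas avoided

kwh_per_therm = 29.3          # standard energy conversion
therms = gas_twh * 1e9 / kwh_per_therm                    # therms of gas avoided
cost_gbp_bn = therms * 1.26 / 1e9                         # at 126p per therm

print(f"{elec_twh:.0f} TWh electricity -> {gas_twh:.0f} TWh gas -> £{cost_gbp_bn:.1f}bn")
```

With those assumptions the result lands within a few percent of the article’s £4bn, which suggests the headline number is internally consistent with the quoted gas price.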

unique link to this extract


Explain it like I’m five: why is everyone on speakerphone in public? • Ars Technica

Nate Anderson:

»

A pet peeve may drive me nuts but does not appear to impact anyone else. A Ways that Technology is Changing our World story must be about something that drives a lot of people nuts.

“But where is the threshold?” I hear you asking plaintively. “It’s extremely important that I know when something crosses the line from pet peeve to important, chin-stroking journalism topic!”

Fortunately, the answer is simple. The threshold has been breached when your local public transit agency puts up a sign about the behavior in question.

Which brings me to the sign I saw yesterday in Philadelphia.

“Unless the tea [gossip] is REALLY hot, keep the call off speaker,” it said.

SEPTA, the local transit agency, runs the buses and commuter rail in Philadelphia, and you can tell from the light-hearted-but-seriously-don’t-do-this tone of the message that speakerphone-wielding passengers are now widely complained about by their fellow riders.

I share their disdain, but for me, the dark and judgmental thoughts I have when I see this behavior are also paired with confusion. Why is it happening? Do these people not know that it is actually more work to hold your phone out in front of you than up to your ear? Do they have no common decency, manners, or taste? Do they genuinely not care if everyone in the frozen foods aisle overhears them talking about Aunt Kathy’s diagnosis? It’s bizarre.

At least when it comes to something like TikTok or Spotify, there’s a certain logic. Perhaps you have no headphones but need to unwind after a long day, and you just can’t imagine anyone who might not enjoy the soothing sounds of [Harry Styles/Cannibal Corpse/Wu-Tang Clan]?

But phone calls? People: are you aware that we can hear you and the person speaking to you?

«

It is strange behaviour. I blame it on The Apprentice, where in the UK every group found it obligatory to yell into a mobile phone held on speaker. The programme first aired in 2005.
unique link to this extract


US Army approves its first new lethal hand grenade since the Vietnam Era • Military.com

Allen Frazier:

»

For nearly six decades, the M67 fragmentation grenade has been the primary hand grenade American soldiers have carried into combat. That changes now.

The Army cleared the M111 Offensive Hand Grenade for full materiel release Tuesday, marking the first new lethal hand grenade to enter service since 1968. Developed at Picatinny Arsenal, N.J., the weapon gives soldiers a purpose-built tool for urban combat, where the fragmentation grenade that has carried the force since the Vietnam War can become as dangerous to friendly troops as to the enemy.

The M111 does not rely on shrapnel to kill. Instead, it harnesses blast overpressure, a wave of force generated by detonation that behaves very differently inside a closed room than fragmentation does. Walls, doorframes and furniture offer no refuge from it.

The M67 fragmentation grenade, the round baseball-shaped weapon soldiers have carried since Vietnam, sends steel fragments outward in all directions at detonation. In the open, that makes it lethal at range. Inside a building, the same fragments can be stopped, deflected or scattered unpredictably. Anything on the other side of a thin wall is as much at risk as the intended target.

For urban combat in the modern battlespace, the M111 aims to negate enemy cover while protecting the assault troops.

“One of the key lessons learned from the door-to-door urban fighting in Iraq was that the M67 grenade wasn’t always the right tool for the job,” said Col. Vince Morris, project manager for Close Combat Systems at the Capabilities Program Executive Ammunition and Energetics. “The risk of fratricide on the other side of the wall was too high.”

The M111 addresses that problem head-on. Blast overpressure radiates through enclosed space and cannot be stopped by interior walls the way fragmentation can, creating lethal effects that reach every corner of a room.

«

The joy of death. I wonder which war-that-isn’t-a-war this will first be used in.
unique link to this extract


Grammarly is facing a class action lawsuit over its AI “expert review” feature • WIRED

Miles Klee:

»

Superhuman, the tech company behind the writing software Grammarly, is facing a class action lawsuit over an AI tool that presented editing suggestions as if they came from established authors and academics—none of whom consented to have their names appear within the product.

Julia Angwin, an award-winning investigative journalist who founded The Markup, a nonprofit news organization that covers the impact of technology on society, is the only named plaintiff in the suit, which does not call for a specific amount in damages but argues that damages across the plaintiff class are in excess of $5m. She was among the many individuals, alongside Stephen King and Neil deGrasse Tyson, offered up via Grammarly’s “Expert Review” tool as a kind of virtual editor for users.

The federal suit, filed Wednesday afternoon in the Southern District of New York, states that Angwin, on behalf of herself and others similarly situated, “challenges Grammarly’s misappropriation of the names and identities of hundreds of journalists, authors, writers, and editors to earn profits for Grammarly and its owner, Superhuman.”

The complaint comes as Superhuman has already decided to discontinue the feature amid significant public backlash. “After careful consideration, we have decided to disable Expert Review as we reimagine the feature to make it more useful for users, while giving experts real control over how they want to be represented—or not represented at all,” said Ailian Gan, Superhuman’s director for product management, in a statement to WIRED shortly before the claim was filed.

«

And what did I say a few days ago? “Grammarly is sure to find itself on the wrong end of a class lawsuit in the US because there are some big names on there.” If Grammarly is sensible, it will move to pay out on this as soon as possible, because nothing will get better from here.
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Start Up No.2628: the junior developer pipeline problem, oil prices high until 2027?, forgetting Google, America’s problem, and more


Desalination plants are in widespread use in Gulf states, which rely on their output. But they haven’t been targeted in the latest conflict. Why? CC-licensed photo by GRID-Arendal on Flickr.



A selection of 9 links for you. Salty. Observations and links welcome.


The talent pipeline is collapsing. Your team will feel it next • The Long Commit

Juan Cruz Martinez:

»

At the biggest tech companies, new graduates went from roughly a third of all hires in 2019 to somewhere around 7% today. In the US, entry-level hiring at the top 15 tech firms fell 25% from 2023 to 2024 alone.

The research paints an even starker picture. A Stanford Digital Economy Lab study analyzing millions of payroll records found that employment for software developers aged 22 to 25 declined nearly 20% from its late-2022 peak, while employment for those over 30 held steady or grew. A Harvard study tracking 62 million workers across 285,000 firms found that when companies adopt generative AI, junior employment drops 9 to 10% within six quarters. Senior employment barely moves.

The trend isn’t limited to quiet budget decisions either. Block cut 40% of its entire workforce just weeks ago, with CEO Jack Dorsey citing AI as the reason. Those weren’t junior-specific cuts, but the underlying logic is the same one driving this whole shift: smaller teams, more AI, fewer humans. Salesforce announced it would halt engineering hiring entirely for 2025, citing AI agents. Klarna froze developer hiring in late 2023 (then reversed course when the strategy failed). A LeadDev survey found that 54% of engineering leaders plan to hire fewer juniors, thanks to AI copilots enabling seniors to handle more.

The reasoning is consistent across every boardroom version of this story: why pay a junior $80-100K plus six months of ramp-up when a senior with AI tools can cover triple the output? The math makes sense. On paper, it looks clean.

I’ve been watching this unfold for two years now, and I believe it’s one of the most short-sighted decisions a generation of engineering leaders is making simultaneously.

I’ve been in this industry for over 20 years. What concerns me isn’t the individual company choosing to slow junior hiring for a quarter or two. It’s the industry-wide retreat happening all at once, with almost no public conversation about what it costs.

«

unique link to this extract


Soaring fuel prices to cast long shadow across US economy • Financial Times

Myles McCormick, Jamie Smyth, Gregory Meyer, Christian Davies and Martha Muir:

»

The US energy department has warned petrol and diesel prices are unlikely to recede to prewar levels until mid-2027 at the earliest, ratcheting up costs for industries from trucking and farming to airlines and retailers.

Official figures released on Tuesday show US petrol prices rose 19% over the past two weeks to $3.50 a gallon as the Middle East conflict throttled energy supplies, while diesel jumped 28% to $4.86 a gallon.

Petrol is not forecast to drop back below its $2.94 per gallon pre-conflict level before the end of 2027, according to the Energy Information Administration, the energy department’s statistics arm. Diesel — the lifeblood of American industry — will not fall below the $3.81 per gallon it sat at two weeks ago until the middle of next year.

The shift threatens to push up costs for industry, which in turn will ratchet up prices for consumers with far-reaching inflationary impacts for the world’s largest economy.

It will also pile pressure on Donald Trump, who campaigned for the presidency in 2024 on a platform to slash petrol and energy costs. Prices at the pump are now higher than at any time during his two terms in office.

“We’ve got a lot of costs moving their way through the system,” said Tom Kloza, an independent oil analyst. “We’re looking at some really scary inflation readings — pervasive inflation throughout the country.”

The rise in the price of refined fuel products in the US comes as Iran’s threats to strike ships traversing the Strait of Hormuz have all but halted maritime traffic in an artery through which roughly a fifth of global oil supply flows.

«

The longer this goes on, the more people will ask: why exactly did the US join Israel in the attack on Iran, and what was the expected outcome, after how long? Because everything so far looks like don’t know, don’t know and don’t know. Meanwhile the adverse effects are piling up. If stock markets get spooked then venture capital money might get tighter, and then all the AI data centres get put on hold, and then the AI stimulus to the US economy goes away, and things get bad quickly.
unique link to this extract


How my life got 100x better when i stopped thinking about Google • Joost Boer

Joost Boer:

»

I started building niche websites in 2020. Within a few months, Google became the single biggest factor in my professional life. Not my skills. Not my ideas. Not the quality of my work. Google.

When 80% of your visitors come from one source, and that source can change its mind overnight with zero explanation, it shapes everything. What you write. How you write it. What you build. What you don’t build. It’s not a tool you use. It’s a landlord you try to keep happy.

…My main site, Start24, became the best resource in its niche: WordPress and web hosting reviews for Dutch beginners. Honest ratings, beautiful design, well-structured guides, fun to read. Start24 was and is a good site. Readers told me. Competitors knew it.

Core updates came and went. I wasn’t obsessively checking Search Console during them — that’s a level of anxiety I don’t need — but I was generally relaxed. Good site, good content, good rankings. The system seemed to work.

Then last year’s core update hit, and Start24 got nuked. Not “dropped a few positions.” Nuked. The kind of decline where you watch your traffic chart and wonder if the Y-axis is broken.

…I rebuilt Start24 from the ground up. Interactive content. Custom tools that actually help people make decisions. Radical honesty in reviews — not “every product is great in its own way” but “this one is genuinely bad and here’s why.” A free WordPress video course. A free WordPress theme I built myself. Dramatically better design across the board.

And Google’s response? Keywords dropped. Traffic fell. Again.

I made the site significantly better — more useful, more honest, more complete — and Google decided it deserved to rank lower. At some point during this process, something shifted. I stopped caring. Start24 was getting traffic through other channels — paid ads, direct visits, email, social. The site was doing fine. Not because of Google. Despite Google.

And once that clicked, the weight lifted.

«

A lot of site owners are probably going to have this revelation as they look at their referrer traffic this year. It’s not just updates; it’s the AI summaries. Referral might be dead.
unique link to this extract


Fake, AI-generated images and videos of the Iran war are spreading on social media • CNN Politics

Daniel Dale:

»

After Russia invaded Ukraine in 2022, social media was littered with crude fakes that were presented as fresh images of the war but were either photoshopped phonies or mislabeled clips taken from video games, movies, past incidents and unrelated news coverage.

Those kinds of old-fashioned fakes are now spreading again during the war against Iran. This time, they have been joined by a form of deception that wasn’t readily available in 2022: high-quality videos and still images that have been custom-created with easy-to-use artificial intelligence tools.

Ten years ago, said Hany Farid, a University of California, Berkeley, professor specializing in digital forensics, “there’d be like one or two fake things out there; they’d get debunked pretty fast. … Now you see hundreds of them, and they’re really realistic.” Farid added: “It’s not just realistic, it’s landing — it’s landing hard. People believe it and they’re amplifying it.”

“What has changed in the last year or so is that generative AI has become much more widely accessible,” said BBC Verify senior journalist Shayan Sardarizadeh, a prominent debunker of war-related fakes, “and it’s now possible to create very believable videos and images appearing to show a significant war incident that is hard to detect to the untrained or naked eye.”

…Several fakes that have spread widely have been pushed as propaganda by pro-Iran social media accounts. The motivation behind the creation of many of the fakes, though, is hard to identify — perhaps social media views and the influence and money they can sometimes lead to, perhaps just because people were able to make them easily.

…Social media platform X did announce last week that it was taking some action to combat wartime AI fakes. Head of product Nikita Bier posted that if users who get paid by X as content “creators” spread AI-generated videos of armed conflicts without disclosing the videos were made with AI, they will be suspended from the payment program for 90 days and then permanently suspended if they commit additional violations.

Even if this policy is strictly enforced — Farid said he is skeptical — the overwhelming majority of X users are not part of the creator payment program.

«

A bad problem, getting worse. Remember when there were gatekeepers and making a video involving war footage required a huge budget? Are we sure this is better?
unique link to this extract


The usefulness of useless knowledge • Financial Times

Tim Harford:

»

In the 1970s, some basic ideas in supposedly useless number theory were deployed by Ron Rivest, Adi Shamir and Leonard Adleman. They developed the RSA algorithm, which enables public key cryptography, without which there would be no ecommerce. Cryptography is hardly valueless to the military, either. One never knows when useless knowledge will be useful after all.

Hardy’s number theory was not alone in being accidentally useful. In a famous article published around the same time — “The Usefulness of Useless Knowledge” (1939) — the head of Princeton’s Institute for Advanced Study, Abraham Flexner, made the case for apparently useless research. Flexner started with the radio and the radio telegraph — remarkable inventions for which many people thanked Guglielmo Marconi, the Nobel Prize-winning engineer.

Flexner argued that the “real credit” should go to James Clerk Maxwell and Heinrich Hertz, who had done the fundamental research. “Neither Maxwell nor Hertz had any concern about the utility of their work,” wrote Flexner, adding that Marconi contributed “merely the last technical detail . . . now obsolete”.

Some more recent examples have been gathered by the American Association for the Advancement of Science for its Golden Goose awards. Ten years ago, the awards recognised the Honey Bee Algorithm, which began with biologists painting tiny numbers on the backs of chilled (and thus immobile) bees, and then tracking the individual bees to figure out how they contributed to the hive’s search for nectar. Why? Because they wanted to know.

A couple of engineers became intrigued, figuring that maybe the bees had evolved a smart mechanism which the engineers might use to . . . well, do something. Perhaps they could use it to smooth the flow of traffic or suchlike. The bees had indeed evolved a clever approach, but the engineers couldn’t work out how to use it.

«

Turns out they could find a use. This article will appear at timharford.com in a couple of weeks for those who aren’t FT subscribers.
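The RSA scheme Harford mentions really is elementary number theory: modular exponentiation plus Euler’s totient. A toy sketch, using the standard textbook-sized primes (nothing here is secure, and the tiny numbers are chosen purely for illustration):

```python
# Toy RSA: the number theory the article credits to Rivest, Shamir
# and Adleman, at a scale small enough to follow by hand.

p, q = 61, 53                # two (tiny) primes — real keys use huge ones
n = p * q                    # public modulus: 3233
phi = (p - 1) * (q - 1)      # Euler's totient: 3120
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # private exponent: modular inverse of e mod phi

msg = 65
cipher = pow(msg, e, n)      # encrypt with the public key (n, e)
plain = pow(cipher, d, n)    # decrypt with the private key (n, d)

print(cipher, plain)         # plain recovers msg
```

The asymmetry — anyone can encrypt with (n, e), but decrypting requires d, which in turn requires factoring n — is exactly the “useless” property of large primes that ecommerce now depends on.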
unique link to this extract


“Severe water stress”: why desalination plants are the Gulf’s greatest weakness • The Guardian

Damien Gayle:

»

In 1983, the CIA determined that the most crucial commodity in the Gulf was its desalinated potable water.

Although the loss of a single plant could be handled, “successful attacks on several plants in the most dependent countries could generate a national crisis that could lead to panic flights from the country and civil unrest”. And the greatest threat to the region’s water supply? “Iran.”

That’s why, four decades later, the world held its breath on Saturday when Iran’s foreign minister, Abbas Araghchi, accused the US of “a blatant and desperate crime” by attacking a desalination plant on the island of Qeshm, in the strait of Hormuz. “The US set this precedent, not Iran,” he said.

The US denied responsibility for the attack. But the next day, on the other side of the Gulf, Bahrain announced one of its own desalination plants had been hit. The alleged culprit: “Iranian aggression.”

It looked like the region, its cities and its industries, was poised to unravel in a frenzy of tit-for-tat assaults on critical water infrastructure. But then the attacks on desalination plants stopped. Why?

…According to the latest data, 70% of Saudi Arabia’s drinking water comes from desalination plants; in Oman the figure is 86%; the United Arab Emirates, 42%; and in Kuwait, 90%. Even Israel, which has access to the Jordan river, relies on five large coastal desalination plants for half its potable water.

Collectively, the Middle East accounts for roughly 40% of global desalinated water production, providing a combined desalination capacity of 28.96m cubic metres of water, every single day.

“In several Persian Gulf states, modern cities would simply not function without it,” said Nima Shokri, the director of the Institute of Geo-Hydroinformatics at the Hamburg University of Technology.

In 2026, just as in 1983, observers have pointed out that this crucial structural weak point can be used against its Arab neighbours. “Targeting desalination plants could quickly create water shortages in several Persian Gulf states,” Shokri said.

“Many cities depend on a small number of large coastal plants, meaning a successful strike could disrupt drinking water supplies within days. Unlike oil facilities, these plants cannot easily be replaced or repaired quickly. In extreme cases, governments could be forced to ration water for entire urban populations.”

«

Mutually assured destruction of those plants would probably kill millions – or force the biggest human movement in history. Neither is attractive.
unique link to this extract


America and public disorder • Chris Arnade Walks the World

Chris Arnade:

»

An epidemic of mental illness and/or addiction plays out in the US in public, with our streets, buses, parking lots, McDonald’s, parks, and Starbucks as ad hoc institutions for the broken, addicted, and tortured.

That is not the case for the rest of the world, including where I am now, Seoul. My train from the airport was spotless, and so is the ten-mile river park I walk each day here, which, given that large parts of it are beneath roadways, is especially impressive. In the US it would have impromptu homes of tents, cardboard, and tarps, smell of urine, and the exercise spots that dot its length probably couldn’t exist because of a fear of being vandalized.

You can learn more about the US by traveling overseas and comparing, and five years of that has taught me we accept far too much public disorder.

We are the world’s richest country, and yet our buses, parking lots, and city streets are filthy, chaotic, and threatening. Antisocial and abnormal behavior, open addiction, and mentally tortured people are common in almost every community regardless of size.

I’ve written about this many times before, because it is so striking, and it has widespread consequences, beyond the obvious moral judgement that a society should simply not be this way.

It’s a primary reason why we shy away from dense walkable spaces and instead move towards suburban sprawl. People in the US don’t respect, trust, or want to be around other random citizens, out of fear and disgust. Japanese/European style urbanism—density, fantastic public transport, mixed-use zoning, that so many American tourists admire—can’t happen here because there is a fine line between vibrant streets and squalid ones, and that line is public trust. The US is on the wrong side of it. Simply put, nobody wants to be accosted by a stranger, no matter how infrequent, and until that risk is close to nil, people will continue edging towards isolated living.

«

Arnade has spent the past ten years (perhaps longer?) walking first across America, and more recently all sorts of countries. His comparisons have weight, because he’s seen it all. The article is a dispiriting journey through the battered soul of America’s urban battleground. But he does offer a solution, of sorts.
unique link to this extract


Amazon holds engineering meeting following AI-related outages • Financial Times

Rafe Rosner-Uddin:

»

Amazon’s ecommerce business summoned a large group of engineers to a meeting on Tuesday for a “deep dive” into a spate of outages, including incidents tied to the use of AI coding tools.

The online retail giant said there had been a “trend of incidents” in recent months, characterised by a “high blast radius” and “Gen-AI assisted changes” among other factors, according to a briefing note for the meeting seen by the FT.

Under “contributing factors” the note included “novel GenAI usage for which best practices and safeguards are not yet fully established”.

“Folks, as you likely know, the availability of the site and related infrastructure has not been good recently,” Dave Treadwell, a senior vice-president at the group, told employees in an email, also seen by the FT.

The note ahead of Tuesday’s meeting did not specify which particular incidents the group planned to discuss.

Amazon’s website and shopping app went down for nearly six hours this month in an incident the company said involved an erroneous “software code deployment”. The outage left customers unable to complete transactions or access functions such as checking account details and product prices.

Treadwell, a former Microsoft engineering executive, told employees that Amazon would focus its weekly “This Week in Stores Tech” (TWiST) meeting on a “deep dive into some of the issues that got us here as well as some short immediate term initiatives” the group hopes will limit future outages.

«

What? AI-written code isn’t the most trustworthy thing in the world? Heaven forfend.
unique link to this extract


UK fights to keep Gatwick drone disaster report redacted • The Register

Connor Jones:

»

The UK’s Department for Transport (DfT) is assembling government lawyers to fight the Information Commissioner’s decision that it must release a document summarizing the lessons from the 2018 Gatwick drone chaos.

Government sources confirmed the development to The Register this week – the latest in a series of attempts made by the DfT to prevent the “Lessons Report” document from being released in full.

This incident review document is likely to provide greater detail about what exactly happened during the 2018 disruption at London Gatwick Airport, which prevented around 800 flights from taking off, affecting around 120,000 passengers. Officials blamed the situation on drone sightings, although experts disagree.

The DfT has also allegedly attempted to conceal the document’s existence, despite requests made under Freedom of Information laws explicitly seeking its release.

Ian Hudson is one of the few drone experts committed to uncovering further details about the Gatwick incident, and he has filed hundreds of these requests since 2018, including those that revealed the document’s existence.

Hudson has fought for the document’s release since May 2024. He believes the DfT tried to conceal the document in its response to one of his requests in July 2024: the response ignored the parts of the request pertaining to the Lessons Report, answering the other questions only by saying the information was already in the public domain.

Follow-up requests made later that year revealed that the DfT had five versions of the Lessons Report document. However, the department refused to release it based on national security grounds and upheld this decision following an internal review.

«

The 2018 incident was never properly explained, and perhaps the DfT is hoping that everyone has forgotten it. Clearly some haven’t.
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Start Up No.2627: the phantoms of British AI “investment”, Bluesky CEO steps aside, Claude LLM finds Firefox bugs, and more


The A4 flyover at Brentford is more than 60 years old, and problems with it emerged almost immediately – and have only got worse since. CC-licensed photo by stevekeiretsu on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


A selection of 9 links for you. Overlooked. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


Missing money, shipped chips and a 350,000% profit: key takeaways on AI “phantom investments” • The Guardian

Aisha Down:

»

A Guardian investigation has examined a series of massive AI investments announced by the government over the past two years, comparing what was promised with what has so far been delivered.

The investigation centres on two companies backed by the chipmaker Nvidia and central to the UK’s AI plans, Nscale and CoreWeave.

It has found that large, promised sums do not represent real investments in the UK’s economy, that new datacentres are not in fact new, and that the site of a giant supercomputer set to be online later this year is still being used by a construction company in Essex.

Here are some of the key details at a glance.

1. A supercomputer centre set to be built this year is still a scaffolding yard in Loughton
The Guardian visited a site in Essex that is supposed to host “one of the most powerful AI computing centres ever built”, built by Nscale. The government and Nscale say this supercomputer is supposed to be up by the end of this year.

The site is still being used as a scaffolding yard by a different company. While Nscale said more than a year ago that it had already bought the site, land records seem to show that it is not registered as the owner of the land. Nscale is very unlikely to finish constructing a “top tier” supercomputer there this year.

2. The UK government has not checked the numbers when it comes to massive AI investments
The Guardian asked the government about key investments worth billions of pounds, including from Nscale and CoreWeave. The government said that these numbers came from the companies themselves, and that it had no mechanism in place to audit them. It could not say what these investments involved: equipment, capital or something else.

Asked about a contract it said had been signed to build the supercomputer in Loughton, the government did not answer. Instead, it said the entire related investment, $2.5bn, did not involve a formal contract, but an intention to commit capital.

«

There’s plenty more, all of it card-shuffling that would do a magician proud. The more closely one looks at all of the AI investments, the less real they seem to be; Potemkin financial villages.
unique link to this extract


Bluesky CEO Jay Graber is stepping down • WIRED

Kate Knibbs:

»

Jay Graber is stepping down as head of Bluesky, the social media platform told WIRED exclusively. Venture capitalist Toni Schneider will be the interim CEO until a permanent replacement is found.

“As Bluesky matures, the company needs a seasoned operator focused on scaling and execution, while I return to what I do best: building new things,” Graber wrote in a statement about the personnel change.

Graber joined Bluesky in 2019, when it was a research project within Twitter focused on developing a decentralized framework for the social web. She became the company’s first chief executive officer in 2021, when it spun out into an independent entity. She oversaw the platform’s remarkable rise and the growing pains it experienced as it transformed from a quirky Twitter offshoot to a full-fledged alternative to X.

Schneider tells WIRED that he intends to help Bluesky “become not just the best open social app, but the foundation for a whole new generation of user-owned networks.”

Schneider, who will continue working as a partner at the venture capital firm True Ventures while at Bluesky, was previously CEO of the WordPress parent company, Automattic, from 2006 to 2014. He also served as its CEO again in 2024 while top executive Matt Mullenweg went on a sabbatical. During that time, Schneider met Graber and became an adviser to Bluesky’s leadership. In a blog post announcing his new role, Schneider said he plans to emphasize scaling, describing his job as “to help set up Bluesky’s next phase of growth.”

This isn’t the end for Graber and Bluesky. She will transition to become the company’s chief innovation officer, a role focused on Bluesky’s technology stack rather than its business operations.

«

Graber and the moderation team have not been treated nicely by the users of the network, to put it mildly. It’s going to be quite the popcorn moment watching the reaction to a VC who will undoubtedly be looking to make Bluesky wash its face, financially speaking. What are the options? Advertising will be a tough sell (to both advertisers and users); and what is the compelling offering for subscriptions?
unique link to this extract


The forever bottleneck, part 1 • The roads.org.uk blog

Chris Marshall:

»

If you’re familiar with the M4 in West London, you’ll know only too well the narrow viaduct at Brentford. A three-lane motorway narrows to two lanes on the way into London, resulting in a near-permanent traffic jam.

Just arrived, over on the website, is the complete booklet published in 1964 to mark the opening of this section of motorway, which ran from Chiswick to Langley (J1-5). It’s the latest addition to my collection of these commemorative publications.

Like all these booklets, it’s a fascinating window into another world. It offers photographs of the motorway under construction – including, most notably, that long elevated viaduct above the Great West Road through Brentford.

The booklet is understandably very proud of this feature: it was completely unique in the UK when it first opened, and considered a real achievement, not least since it had been built while traffic continued to flow on the A4 underneath. But that pride didn’t last long. Just eight years after it opened, the panel of the Layfield Inquiry described it like this:

»

“Present traffic on this road clearly demonstrates…the inadequacy of the dual 2 lane elevated section in the Chiswick/Brentford area. [The Ministry of Transport] gave evidence which indicated that widening of the present elevated road, either elevated or at ground level, was not possible within limits of conceivable expenditure.”

«

Because this brand-new road was already a problem, and the problem could not be solved, the panel had to conclude:

»

“It is clear that the M4 will have to be included in the primary principal road network, but clear also that every effort must be made to reduce or reroute some of the traffic it now carries…”

«

Within a decade of opening, planners wanted to encourage traffic away from the M4, thanks to a design flaw that could not be fixed. Something had gone very wrong.

«

Absolutely fascinating post (and only the first on the topic!) about a piece of road that almost everyone will have driven along, because it’s part of the arterial road from London to the west.

Read it if only for the solution to de-icing: salt couldn’t be used, as it would be bad for the concrete and steel rebar, so the viaduct had embedded electrical heating with a demand of 8 MEGAWATTS, which “at the UK’s current electricity prices would cost about £2,000 per hour to run”. Soon abandoned; salt was used instead, with deleterious results.
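The £2,000-an-hour figure checks out as back-of-envelope arithmetic, assuming (my number for illustration, not the article’s) a UK electricity price of roughly £0.25 per kWh:

```python
# Sanity-check the quoted running cost of the 8 MW road-heating system.
# The £0.25/kWh unit price is an assumed present-day figure, not one
# taken from the article.
power_kw = 8_000            # 8 megawatts, expressed in kilowatts
price_gbp_per_kwh = 0.25    # assumed electricity price, £ per kWh
cost_per_hour = power_kw * price_gbp_per_kwh
print(f"£{cost_per_hour:,.0f} per hour")  # → £2,000 per hour
```

At that rate, a ten-hour freezing night would have cost about £20,000 of electricity, which goes some way to explaining why the heating was switched off.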
unique link to this extract


Anthropic’s AI hacked the Firefox browser. It found a lot of bugs • WSJ

Robert McMillan:

»

It took Anthropic’s most advanced artificial-intelligence model about 20 minutes to find its first Firefox browser bug during an internal test of its hacking prowess.

The Anthropic team submitted it, and Firefox’s developers quickly wrote back: This bug was serious. Could they get on a call? “What else do you have? Send us more,” said Brian Grinstead, an engineer with Mozilla, Firefox’s parent organization.

Anthropic did. Over a two-week period in January, Claude Opus 4.6 found more high-severity bugs in Firefox than the rest of the world typically reports in two months, Mozilla said. 

Tools powered by AI are increasingly adept at spotting vulnerabilities and are beginning to rival the talents of seasoned security experts. Some experts worry that those same capabilities will unleash a new wave of cyberattacks as bugs are discovered and then exploited more quickly than ever before.

Claude’s bug bonanza began after Anthropic’s security team decided that it would be interesting to focus its software on a widely used and complex piece of browser software that has been under the microscope for years.

Firefox is the modern version of the Web’s first commercial browser, Netscape Navigator. Its code is now managed under the umbrella of the not-for-profit Mozilla Foundation. Navigator launched its first bug bounty program more than 30 years ago, offering cash to those who identify potential weaknesses that bad actors could abuse. Mozilla typically pays as much as $6,000 for high-severity bugs. 

In the two weeks it was scanning, Claude discovered more than 100 bugs in total, 14 of which were considered “high severity.” That means that if the right “exploit code” had been created, they could have been used in a widespread attack on Firefox’s users.

…AI tools are both a blessing and a curse for software developers. In January, the makers of Curl software abandoned their own bug bounty program, citing “an explosion in AI slop reports.” Fewer than one in 20 bugs reported in 2025 were actually real, said Daniel Stenberg, Curl’s lead developer.

«

That’s the real problem: LLMs are great at flagging what they think are bugs, but are they really bugs? In Firefox, yes; in curl, no. Or maybe vice versa? You’d really want the LLM to write some exploit code to demonstrate each one, but that opens a whole new can of worms; Pandora’s bug reporter has been opened.
unique link to this extract


X suspends 800m accounts in one year amid ‘massive’ scale of manipulation attempts • The Guardian

Dan Milmo:

»

Elon Musk’s X said it had suspended 800m accounts over a 12-month period as it fights the “massive” scale of attempts to manipulate the platform.

The social media company told MPs it was continually fighting state-backed attempts to hijack the agenda on its network, with Russia the most prolific state actor, followed by Iran and China.

As part of the battle against such content, X suspended 800m accounts in 2024 for breaching its rules on platform manipulation and spam, although it did not reveal which of those suspensions related to foreign interference. X has approximately 300 million monthly users worldwide.

Wifredo Fernández, a government affairs executive at the platform’s parent company, X Corp, said: “There are efforts every single day to create inauthentic networks of accounts.”

Speaking to MPs on the foreign affairs committee via video link on Monday, Fernández said attempts to manipulate the platform or flood it with spam had not subsided and “several hundred million accounts” had been taken down in the latter part of last year as well. Fernández added he was “quite confident” that the remaining accounts on X were authentic.

X defines manipulative accounts as those that engage in “bulk, aggressive or disruptive activity that misleads others and/or disrupts their experience”. It refers to spam as “unsolicited, repeated actions” that affect other accounts, often meaning a stream of low-quality content.

«

So you’re saying there are lots and lots and lots and LOTS of bots on the platform? Useful to know.
unique link to this extract


US military contractor likely built iPhone hacking tools used by Russian spies in Ukraine • TechCrunch

Lorenzo Franceschi-Bicchierai:

»

A mass hacking campaign targeting iPhone users in Ukraine and China used tools that were likely designed by US military contractor L3Harris, TechCrunch has learned. The tools, which were intended for Western spies, wound up in the hands of various hacking groups, including Russian government spooks and Chinese cybercriminals.

Last week, Google revealed that over the course of 2025, it discovered that a sophisticated iPhone-hacking toolkit had been used in a series of global attacks. The toolkit, dubbed “Coruna” by its original developer, was made of 23 different components first used “in highly targeted operations” by an unnamed government customer of an unspecified “surveillance vendor.” It was then used by Russian government spies against a limited number of Ukrainians and finally by Chinese cybercriminals “in broad-scale” campaigns with the goal of stealing money and cryptocurrency. 

Researchers at mobile cybersecurity company iVerify, which independently analyzed Coruna, said they believed it may have been originally built by a company that sold it to the US government.

Two former employees of government contractor L3Harris told TechCrunch that Coruna was, at least in part, developed by the company’s hacking and surveillance tech division, Trenchant. The two former employees both had knowledge of the company’s iPhone hacking tools. Both spoke on condition of anonymity because they weren’t authorized to talk about their work for the company.

“Coruna was definitely an internal name of a component,” said one former L3Harris employee, who was familiar with iPhone hacking tools as part of their work at Trenchant. 

«

Give it a few years and we’ll probably all have a copy, at the rate this seems to be getting handed around.
unique link to this extract


Insider trading is going to get people killed • The Atlantic

Saahil Desai:

»

Polymarket traders swap crypto, not cash, and conceal their identities through the blockchain. Even so, investigations into insider trading are already under way: Last month, Israel charged a military reservist for allegedly using classified information to make unspecified bets on Polymarket.

The platform forbids illegal activity, which includes insider trading in the U.S. But with a few taps on a smartphone, anyone with privileged knowledge can now make a quick buck (or a hundred thousand). Polymarket and other prediction markets—the sanitized, industry-favored term for sites that let you wager on just about anything—have been dogged by accusations of insider trading in markets of all flavors. How did a Polymarket user know that Lady Gaga, Cardi B, and Ricky Martin would make surprise appearances during the Super Bowl halftime show, but that Drake and Travis Scott wouldn’t? Shady bets on war are even stranger and more disturbing. They risk unleashing an entirely new kind of national-security threat. The U.S. caught a break: The Venezuela and Iran strikes were not thwarted by insider traders whose bets could have prompted swift retaliation. The next time, we may not be so lucky.

The attacks in Venezuela and Iran—like so many military campaigns—were conducted under the guise of secrecy. You don’t swoop in on an adversary when they know you are coming. The Venezuela raid was reportedly so confidential that Pentagon officials did not know about its exact timing until a few hours before President Trump gave the orders.

…Consider if the Islamic Revolutionary Guard Corps had paid the monthly fee for a service that flagged relevant activity on Polymarket two hours before the strike. The supreme leader might not have hosted in-person meetings with his top advisers where they were easy targets for missiles. Perhaps Iran would have launched its own preemptive strikes, targeting military bases across the Middle East. Six American service members have already died from Iran’s drone attacks in the region; the death toll could have been higher if Iran had struck first. In other words, someone’s idea of a get-rich-quick scheme may have ended with a military raid gone horribly awry. (The Department of Defense did not respond to a request for comment.)

Maybe this all sounds far-fetched, but it shouldn’t. “Any advance notice to an adversary is problematic,” Alex Goldenberg, a fellow at the Rutgers Miller Center who has written about war markets, told me. “And these predictive markets, as they stand, are designed to leak out this information.”

«

unique link to this extract


CFO gets prison time after losing $35m of company money in crypto side hustle • Decrypt

Stephen Graves:

»

A Washington man has been sentenced to two years in prison after diverting $35m in funds from his former employer to his own DeFi platform—and losing nearly all of it.

Nevin Shetty, 42, was found guilty of wire fraud last November for taking and misusing funds from the private software company at which he worked.

Shetty, who drafted a “conservative” company investment policy, secretly moved $35m in company funds to his side business HighTower Treasury, after being told in April 2022 that his role as CFO would end due to performance issues. Those funds were then invested in high-yield DeFi lending protocols that promised returns of 20% or more.

Per the DOJ’s statement, Shetty planned to pay his employer a “comparatively small, fixed amount,” keeping the remainder of the returns for HighTower. Initially, the scheme paid off, earning some $133,000 in its first month for Shetty and his HighTower business partner.

The wheels came off in May 2022, following the Terra collapse and the subsequent crypto winter, with Shetty’s HighTower crypto investments plummeting in value from $35m to near zero.

After confessing to colleagues at his employer, Shetty was fired from the company, which, according to trial judge Tana Lin, suffered “significant and severe effects” as a result of his theft, adding that his actions “almost put the company out of business.”

«

Two years, but the prosecution was looking for nine. Hard to know if longer jail terms are really a deterrent to rank stupidity combined with venality and, one suspects, a bit of resentment about the company letting him go the first time.
unique link to this extract


Musk’s spam frustration was ‘confusing’ to ex-Twitter executives • Bloomberg via BloombergLaw

Isiah Poritz and Kurt Wagner:

»

Two former top executives at Twitter Inc. sought to beat back Elon Musk’s narrative at a jury trial that they lied to him about the makeup of the platform’s user base when he was purchasing the company in 2022.

Parag Agrawal, who was chief executive officer, and Ned Segal, the chief financial officer, took the witness stand Friday in an investor trial over Musk’s tumultuous $44 billion buyout of the company.

But the two men, both of whom were fired right as Musk took control of the social networking platform, offered few detailed recollections of the chaotic weeks surrounding his attempt to back out of the deal and stopped short of criticizing his behavior.

Agrawal was asked for his reaction to Musk’s May 13, 2022, tweet at the heart of the case stating the purchase agreement was “temporarily on hold.” Testifying Friday in jeans and a t-shirt, Agrawal was brief, saying, “It did not make sense to me.”

Segal, who testified most of Thursday, also was concise. “I was displeased,” was his only reaction to another tweet from Musk blasting Twitter’s methodology for counting how much of its user base was fake accounts, or bots.

The investors claim Musk’s public bashing of the company was actually an effort to drive down the stock price and gain himself a better bargaining position.

The two days of testimony from Agrawal and Segal pushed back on Musk’s own recollection that he was always committed to the deal, but genuinely believed that Twitter had lied to him about the percentage of spam accounts.

Musk told the jury he was “stunned” that Agrawal, Segal and other Twitter executives could not provide more details about the bot count methodology during their first meeting on May 6 after signing the acquisition agreement. Musk said the bot issue was important, likening it to investigating a termite infestation while purchasing a house.

«

If that’s what Musk thinks the problem is like, he’s definitely let it completely undermine the foundations since then.
unique link to this extract




Errata, corrigenda and ai no corrida: none notified

Start Up No.2626: phishing gets even tougher to spot, New York salaries, don’t fret about AI jobs, the metaverse just won’t die, and more


The maths behind mammograms is complicated – and the human stories they hide even more so. CC-licensed photo by Kristie Wells on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


A selection of 9 links for you. Pre-screened. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


Gone (almost) phishin’ • Ma.tt

Matt Mullenweg:

»

This is a little embarrassing to share, but I’d rather someone else be able to spot a dangerous scam before they fall for it. So, here goes.

One evening last month, my Apple Watch, iPhone, and Mac all lit up with a message prompting me to reset my password. This came out of nowhere; I hadn’t done anything to elicit it. I even had Lockdown Mode running on all my devices. It didn’t matter. Someone was spamming Apple’s legitimate password reset flow against my account—a technique Brian Krebs documented back in 2024. I dismissed the prompts, but the stage was set.

What made the attack impressive was the next move: The scammers actually contacted Apple Support themselves, pretending to be me, and opened a real case claiming I’d lost my phone and needed to update my number. That generated a real case ID, and triggered real Apple emails to my inbox, properly signed, from Apple’s actual servers. These were legitimate; no filter on earth could have caught them.

Then “Alexander from Apple Support” called. He was calm, knowledgeable, and careful. His first moves were solid security advice: check your account, verify nothing’s changed, consider updating your password. He was so good that I actually thanked him for being excellent at his job.

That, of course, was when he moved into the next phase of the attack. He texted me a link to review and cancel the “pending request.” The site, audit-apple.com, was a pixel-perfect Apple replica, and displayed the exact case ID from the real emails I’d just received. There was even a fake chat transcript of the scammers’ actual conversation with Apple, presented back to me as evidence of the attack against my account. At the bottom of the page was a Sign in with Apple button that he told me to use.

I started poking at the page and noticed I could enter any case ID and get the same result. Nothing was being validated. It was all theater.

“This is really good,” I told Alexander. “This is obviously phishing. So tell me about the scam.”

Silence. *Click*.

«

One can hope that all readers will spot that Apple won’t call you, because none of these companies is going to do that. They’d never be off the phone if they did.
unique link to this extract


The salaries of 60 New Yorkers • NY Mag

The Editors:

»

The last time we surveyed New Yorkers about their paychecks, the math was easy. Well, easier. In 2005, a blogger at Gawker made $30,000 and the CEO at Lehman Brothers more than $35m. Back then, there was no “gig economy,” at least not as we know it today, and coffee shops from Bed-Stuy to the Upper East Side weren’t lousy with model–pickleballer–nanny–actor–producer–DJ–creative directors.

Some 20 years later, amid a radically different economic environment in which the nature of work feels as if it’s about to change forever, we set out to conduct a similar experiment. We reached into our network of sources, blind-messaged LinkedIn profiles, put out a casting call on Instagram, even stopped strangers in Union Square. What we discovered, just before a jobs report earlier this month confirmed a dwindling labor market, is that salaries across most industries have not kept up with inflation in a city that has become exorbitantly expensive. For so many professionals we spoke with, some of whom agreed to pose for the paper-bag portraits in this story, the answer to stalling wages and cost-of-living expenses is hanging out their own shingle and juggling freelance projects and social-media collaborations.

«

The numbers in here are amazing. Manhattan dog walker. Psychoanalyst. Pastor. Ghostwriter. Head of engineering at an AI startup. Sex worker. Not in that order: $325,000, $450,000, $92,000, $284,000, $67,200, $165,000. (See if you can guess which is which.) Worth reading for the human stories they contain.
unique link to this extract


2023: Why I’m not worried about AI causing mass unemployment

Timothy B. Lee, writing in April 2023:

»

The startups that best fit the “software eating the world” thesis [of Marc Andreessen in 2011] are probably “sharing economy” companies like Bird, DoorDash, Instacart, Lime, Lyft, Uber, and WeWork. Each of these companies uses software to offer services in the “real world”—taxi rides, scooter rentals, food delivery, lodging, office space, and so forth. They enjoyed a lot of hype in the mid-2010s, and most of them have struggled in the last few years.

Some of them have been total fiascos. WeWork failed to disrupt commercial real estate. Shares in the scooter startup Bird have lost 97% of their value since the company went public less than two years ago. Last year I drove for Lyft for a week and wrote about its difficulty in turning a profit. 

The two most successful “sharing economy” startups are probably Airbnb (founded in 2008) and Uber (founded in 2009). These companies are each worth tens of billions of dollars, and they seem likely to be enduring, profitable businesses.

Still, Airbnb has only a modest share of the overall lodging industry. And in recent years, the quality of Uber’s service has deteriorated, with higher fees and longer wait times. Smartphone-based ride hailing is a marginal improvement over conventional taxis, but hasn’t been a revolution.

In his 2011 essay, Andreessen specifically mentions health care and education as industries ripe for disruption by software. But as far as I can see that hasn’t happened. Hospitals increasingly use computers for record-keeping and billing and software has been used to make new drugs and medical devices. Many people learn foreign languages using Duolingo or watch educational videos on YouTube. But people largely go to the same schools and hospitals they did 10 or 20 years ago.

The reason I’m relitigating this 12-year-old argument is that I hear echoes of it in contemporary discussions of AI. In the early 2010s, Silicon Valley thought leaders looked at the early success of companies like Airbnb and Uber, extrapolated wildly, and concluded that software was going to transform the entire economy. Today, AI thought leaders are looking at the early success of ChatGPT and Stable Diffusion, extrapolating wildly, and concluding that AI software is going to transform the economy and put tons of people out of work.

To be clear, I do think AI is going to be a big deal. I wouldn’t have started an AI newsletter otherwise. But as with the Internet, I expect the impact to be concentrated in information-focused industries and occupations. And most of the American economy is not information-focused.

«

Lee is still insistent about this point: he reiterated it on Monday.
unique link to this extract


The screening machine • Tablet Magazine

Mo Perry is a journalist and actor:

»

The battles play out all day, every day, in our feeds, where there’s an influencer for every temperament. Whatever your position on the trustworthiness of the medical establishment, there’s someone posting online or yammering on a podcast ready to support or refute you, armed with studies, stats, and anecdotes. Each can be compelling in isolation. Taken together, their contradictory messages—all delivered with evangelical certitude—can be dizzying.

Peptides. Ivermectin. Universal Hepatitis B vaccines for newborns. Raw milk. MRNA technology. All contested, all fraught. [They’re only “contested” if you don’t understand science – Overspill Ed.]

Mammograms are no different. On one side there are people like Dr. Thais Aliabadi, a board-certified OB-GYN and doctor to the stars, telling millions of “Huberman Lab” podcast listeners her story of getting a routine mammogram that led to a biopsy deemed benign, followed by a risk calculation (based on factors like family history, age at first menses, and age at first childbirth) that pegged her lifetime chance of breast cancer at 37%. Convinced that wasn’t a number she could live with, she fought for a prophylactic double mastectomy. When the pathologists examined the removed tissue, they found a small, previously invisible cancer in one breast—proof, in her telling, that more aggressive screening and preventative surgery had saved her life.

On the other side is Dr. Jenn Simmons, a former breast surgeon turned functional-medicine entrepreneur who regularly tells her more than 100,000 followers that mammograms have never been shown to save lives, that many mastectomies performed in the wake of screening are unnecessary products of overdiagnosis and fear, and that the radiation from mammograms can cause the very cancers women are trying to detect. She urges women toward diet and lifestyle overhaul, “terrain” testing, and radiation-free QT ultrasounds at her centers instead.

Between these poles sits the US Preventive Services Task Force, an independent panel of national experts in disease prevention and evidence-based medicine that issues recommendations about clinical preventive services. It doesn’t endorse risk calculators or supplemental screening. Its guidance is almost comically bland by comparison: biennial mammograms for women between 40 and 74, full stop.

«

In the UK, the guidance is an invitation every three years for all women aged between 50 and 70, so clearly there’s room for nuance (or statistical variation in populations and diets). Perry’s general point – that doctors like mammograms because it means they’re doing something – is reasonable. “Surgeons like to do surgery,” as my GP said to me when I mentioned having some adverse effects from an elective arthroscopy. “They’re not going to turn down the opportunity.”
unique link to this extract


Epic and Google have signed a special deal for a new class of ‘metaverse’ apps • The Verge

Jay Peters:

»

Epic Games and Google are burying the hatchet, but documents released today reveal that they aren’t only aligned on how Google is shaking things up for app stores. The two companies have also agreed to terms about a new class of apps that they’re calling “metaverse browsers,” according to a heavily redacted section of a revised binding term sheet.

While the term “metaverse” has largely fallen out of favour — Mark Zuckerberg, for example, is now much more interested in AI — Epic CEO Tim Sweeney has been talking for years about the metaverse and how it might work in the future. (Depending on how you define the concept, Epic’s Fortnite is already arguably one of the biggest versions of a metaverse.) And this actually isn’t the first time there has been a connection between Epic and Google about the metaverse; in court in January, when discussing a secret $800m Unreal Engine and services deal, Sweeney blurted out that the agreement related to the metaverse.

Unfortunately, the redactions in the revised binding term sheet cover up a lot of the key details about what a metaverse browser actually is. But from what’s visible in the document, metaverse browsers will:

• “have the primary purpose of allowing the navigation and exploration of metaverse worlds;”
• “support virtual items and identity that are portable across different worlds in the metaverse browser; and”
• “must support modern security considerations including Sandbox capabilities, limitations on code execution, and secure connections.”

«

Oh please, can’t we just put the metaverse out of our misery?
unique link to this extract


One hundred accounts are behind the majority of conspiracy theory content in Canada • National Observer

Rory White:

»

Conspiracy theories about globalist cabals, climate hoaxes and election fraud may seem ubiquitous on social media. But a report published on Monday by the Media Ecosystem Observatory has found that they come from a tiny minority of users.

According to the report, just 100 users were responsible for almost 70% of online conspiracy posts from the influential Canadian accounts the researchers examined.

The researchers analyzed over 14 million social media posts from accounts in Canada, and found that 87% of conspiratorial claims come from influencers. Users on Elon Musk’s X were the biggest culprits.

These influencers are having an outsized impact in the physical world as well as online. Local governments across Canada are facing a wave of “larger scale conspiracy theories” overwhelming council meetings, according to Zoe Grams, executive director of Climate Caucus. This has led some politicians to avoid mentioning climate change altogether for fear of provoking a backlash.

“It’s about the permission structure of how we treat each other and how we treat our democratic institutions, which I think conspiracy theories are really undermining,” said Grams.

The report from the Media Ecosystem Observatory, a collaboration between McGill University and the University of Toronto, did not name the accounts responsible for spreading conspiracy theories. But an analyst at the organization gave some clues. “A lot of them are part of a network. They often know each other and engage with each other’s content,” said Mathieu Lavigne. 

While the conspiracy theory posts were viewed billions of times, only a small minority of Canadians fell for them.

«

Once again, one has to wonder what the world would be like if social media were optimised to reward accuracy rather than virality.
unique link to this extract


OpenAI robotics leader resigns over concerns about surveillance and autonomous weapons • Fortune

Sharon Goldman:

»

Caitlin Kalinowski, who had been leading hardware and robotic engineering teams at OpenAI since November 2024, announced she has left the company.

“I resigned from OpenAI,” she posted on X and LinkedIn. “I care deeply about the Robotics team and the work we built together. This wasn’t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got. This was about principle, not people. I have deep respect for Sam and the team, and I’m proud of what we built together.”

Her departure comes amid an escalating dispute over how far AI companies should go in supporting U.S. military uses of the technology. In recent days, negotiations between the Pentagon and Anthropic collapsed after the company pushed for strict limits on domestic surveillance and autonomous weapons. Soon after, OpenAI reached its own agreement with the Defense Department to deploy its models on a classified government network.

The move drew criticism from some employees and observers who argued that OpenAI appeared to step in after Anthropic refused the terms. CEO Sam Altman later acknowledged the deal’s rollout looked “opportunistic,” and the company has since moved to clarify restrictions on how its systems can be used by the military.

«

It’s very noticeable how easily the AI companies have agreed to the use of their technology for military purposes, while the earlier incarnations of Google and Microsoft were chary of doing so.
unique link to this extract


The social media discoverability problem • Sam Randa

Sam Randa:

»

At their best, algorithmic feeds feel like wandering through a city street. Stimuli are varied, accessible, and act as a jumping-off point for engaging with new communities. But for those of us not privileged enough (or in my case, too privileged) to spend adolescence in lively, diverse urban areas, social media formed a decent alternative. I don’t think the problem lies in algorithmic discovery, but the specific corporate incentives to keep users on their platforms no matter what.

I feel like many of the people pushing towards a federated, conscious, intentional web landscape tend to know who they are and what they want out of the internet. It’s an easy thing to forget going through if you’ve worked with computers for multiple decades, but as someone whose “real” identity only started to crystallize in high school during the pandemic, it’s fresh in my mind. Though I have more appreciation for the slow web nowadays, where my identity is a bit more solidified, I still feel a pretty strong pull towards “the platform”, and my visions for a healthier internet include it.

The future is the piece I’m the least sure of. It’s obvious that algorithmic feeds have a lot to do with plenty of incredibly harmful societal effects: the dissemination of fake news and misinformation on Facebook that shaped the 2016 Trump campaign, Instagram’s impact on the self-image of teenage girls, the proliferation of brain rot and rage bait content across all of short-form video, and the pull that young men feel towards radicalization as a misplaced response to the struggles they face. All of these are completely separate from the deep privacy concerns of trading your personal information for participation in a platform. But, there’s something to be said about having a wide variety of interests, people, and culture thrown at you that, in a small way, makes up for an upbringing that doesn’t.

«

unique link to this extract


“My invention makes ocean plastic the world’s cheapest problem to solve” • The Times

Ben Spencer:

»

The global plastic pollution crisis could be solved within 15 years and for less than $1 billion, according to an ambitious plan to stop litter getting into the oceans.

Boyan Slat, an inventor, environmentalist and the chief executive of The Ocean Cleanup, a nonprofit organisation, argues that to call his plan a bargain would be to sell it short.

“It is the world’s cheapest problem to solve,” the 31-year-old said. “If we’re off by a factor of two or three, it’s still the cheapest world problem. Even if we’re off by a factor of 100, it’s still the cheapest world problem.”

But, crucially, he added: “It’s not solved yet. We still need to raise a significant amount of money, and there is a big execution challenge.”

His confidence relies on research that suggests that stopping pollution in just 30 cities around the world will cut a third of the plastic waste that enters the oceans. Slat plans to tackle these first 30 cities by 2030 at a cost of $350m (£260m).

His method uses floating barriers to trap plastic in the cities’ rivers. If it cannot be scooped up by traditional excavators, “interceptors” — autonomous boats with conveyor belts — are sent to gather the waste and send it for recycling or disposal.

The momentum from that would enable the charity to stop 90% of floating plastic from entering the ocean by 2040, and to clear up the “legacy” waste, particularly in hotspots such as the “great Pacific garbage patch” — an accumulation of about 100,000 tonnes floating between Hawaii and California. In total, he believes, it would cost less than $1bn.

“So 2040 is our publicly stated goal to get to 90%, but I think we can go faster than that, depending on how things go in the next few years.”

Slat, who grew up in the Dutch city of Delft, dropped out of his degree in aerospace engineering after one semester to set up his project. “I realised if I wanted to make this a success, I needed to dedicate all my time to this.”

«

So many problems have this shape: a few individuals are associated with the highest cost. It’s true for crime, for homelessness, and clearly for waste plastic. As he says, it’s comparatively cheap to do. Why doesn’t anyone listen?
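The quoted budget pencils out simply enough. A quick sketch in Python; all figures are the article’s round estimates, not independent data:

```python
# Back-of-envelope check on The Ocean Cleanup's quoted figures.
first_phase_cost_m = 350    # $350m for the first 30 cities by 2030
cities = 30
total_budget_m = 1000       # "less than $1bn" overall

cost_per_city_m = first_phase_cost_m / cities
remaining_budget_m = total_budget_m - first_phase_cost_m

print(f"~${cost_per_city_m:.1f}m per city")  # ~$11.7m per city
print(f"~${remaining_budget_m}m left for the 2040 goal and legacy cleanup")
```

Around $12m per major city to cut a third of ocean-bound plastic is, as he says, cheap by the standard of world problems.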
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Start Up No.2625: Meta sued over smart glass privacy, a new Apple?, the bad agent, Grammarly fakes writer identities, and more


The UK burnt as much coal in 2025 as in the years when Shakespeare’s Hamlet was first performed, despite the population being nearly 20 times larger. CC-licensed photo by Jens Naehler on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.

A selection of 9 links for you. Reduced to a bit part. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


Meta hit with a class action lawsuit over smart glasses’ privacy claims • Engadget

Karissa Bell:

»

Meta is facing a class action lawsuit for false advertising related to its AI glasses following reports about the company’s use of human contractors to review footage captured from users’ glasses. The lawsuit, filed Wednesday in federal court in San Francisco, alleges that Meta’s claims about the devices’ privacy features have misled users.

The lawsuit comes after a Swedish newspaper reported that subcontractors in Kenya have raised concerns about viewing footage recorded via Ray-Ban Meta glasses. According to Svenska Dagbladet, workers have reported witnessing “intimate” material, including bathroom visits, sexual encounters and other private details as part of their job labeling objects in videos captured on users’ smart glasses.

“This nationwide class action seeks to hold Meta responsible for its affirmatively false advertising and failure to disclose the true nature of surveillance and its connection to the company’s AI data collection pipeline,” the lawsuit, filed by Clarkson Law Firm, states. The filing names two individuals who live in California and New Jersey who purchased Meta’s smart glasses. It says that both “relied” on Meta’s marketing claims about the glasses’ privacy protecting features and that they would not have purchased them if they knew about the company’s use of contractors. The lawsuit seeks monetary damages and injunctive relief.

A spokesperson for Meta confirmed to Engadget that data from its smart glasses can be shared with human contractors in some cases. The company declined to comment on the claims in the lawsuit.

“Ray-Ban Meta glasses help you use AI, hands free, to answer questions about the world around you,” the spokesperson said. “Unless users choose to share media they’ve captured with Meta or others, that media stays on the user’s device. When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people’s experience, as many other companies do…”

«

“Other people do it so it must be OK for us to do it” is not really the powerful argument that Meta seems to hope. Note though that a class action lawsuit won’t get it to change its policy; only pay out on it, if found culpable. To force a change in policy you’d need some part of government to take action. No sign of that so far.
unique link to this extract


The new Apple finally begins to emerge • Parker Ortolani

Parker Ortolani:

»

It’s official. Apple’s added Steve Lemay and Molly Anderson to their executive leadership page. After much drama following Alan Dye’s departure, the company has decided to not only elevate the two designers but give them the kind of platform that they deserve. They’re now listed right alongside folks like Tim Cook and John Ternus.

I think it’s no secret that many of us have come to agree that Apple’s been a bit lost the past few years. Between Vision Pro, Apple Intelligence, tone deaf advertisements, and debates over software quality, there’s been a sense that a change needs to happen. I care about Apple holistically, as a living breathing entity not just their products. So who is running the joint matters to me. It doesn’t feel like a coincidence that the same week we get a radically different kind of Mac, we also see the executive leadership page get a revamp.

Molly Anderson’s already proven herself to be an incredibly talented industrial designer. She’s been at Apple for over a decade, but she recently became the face of ID doing voiceover for the iPhone 17 Pro reveal. If the latest iPhones and the MacBook Neo are the first real fruits of her leadership, that bodes incredibly well for the future. And mind you, she’s also in charge of accessories and packaging. I think she might just turn out to be exactly the kind of person that Apple hardware needs to inject a breath of fresh air.

The same goes for Steve Lemay, who I know so many people are excited to see take over human interface. Liquid Glass has been so poorly received that knowing a true Apple veteran with deep knowledge of HI is now in charge has been the best kind of whiplash. Lemay has been at Apple for 27 years and has a track record that implies the future of Apple software under his leadership is extraordinarily bright.

«

The MacBook Neo has surely been under development for longer than the few months since Alan Dye left, but it’s true that highlighting Lemay and Anderson is encouraging. What I found most notable about last week’s announcements was that the person who was given the quotes in the press release about the Neo was… John Ternus, the person everyone is tipping as the future replacement for Tim Cook.
unique link to this extract


Let it flow: agentic crafting on rock and roll • ArXiv

Weixun Wang, XiaoXiao Xu and colleagues work at Alibaba and other (undocumented?) Chinese organisations, and were training agentic systems with reinforcement learning:

»

When rolling out the instances for the trajectory, we encountered an unanticipated—and operationally consequential—class of unsafe behaviours that arose without any explicit instruction and, more troublingly, outside the bounds of the intended sandbox.

Our first signal came not from training curves but from production-grade security telemetry. Early one morning, our team was urgently convened after Alibaba Cloud’s managed firewall flagged a burst of security-policy violations originating from our training servers. The alerts were severe and heterogeneous, including attempts to probe or access internal-network resources and traffic patterns consistent with cryptomining-related activity. We initially treated this as a conventional security incident (e.g., misconfigured egress controls or external compromise).

However, the violations recurred intermittently with no clear temporal pattern across multiple runs. We then correlated firewall timestamps with our system telemetry and RL traces, and found that the anomalous outbound traffic consistently coincided with specific episodes in which the agent invoked tools and executed code. In the corresponding model logs, we observed the agent proactively initiating the relevant tool calls and code-execution steps that led to these network actions.

Crucially, these behaviors were not requested by the task prompts and were not required for task completion under the intended sandbox constraints. Together, these observations suggest that during iterative RL optimisation, a language-model agent can spontaneously produce hazardous, unauthorized behaviors at the tool-calling and code-execution layer, violating the assumed execution boundary.

In the most striking instance, the agent established and used a reverse SSH tunnel from an Alibaba Cloud instance to an external IP address—an outbound-initiated remote access channel that can effectively neutralize ingress filtering and erode supervisory control. We also observed the unauthorised repurposing of provisioned GPU capacity for cryptocurrency mining, quietly diverting compute away from training, inflating operational costs, and introducing clear legal and reputational exposure.

Notably, these events were not triggered by prompts requesting tunnelling or mining; instead, they emerged as instrumental side effects of autonomous tool use under RL optimization.

«

In other words: despite not having any instructions to do so, the agent started using parts of the network to mine cryptocurrency. For itself? For its controllers? We don’t know, and there’s no way to know even if it tells you, because how can you trust that?
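The detection method the authors describe — lining up firewall alert timestamps with the agent’s episode logs — is easy to picture in miniature. A hedged sketch in Python; the record fields and the 30-second window are my illustrative assumptions, not the paper’s actual telemetry schema:

```python
from datetime import datetime, timedelta

# Hypothetical records: firewall alerts and agent tool-call episodes.
alerts = [
    {"ts": datetime(2025, 3, 1, 4, 12, 7), "kind": "outbound-ssh"},
    {"ts": datetime(2025, 3, 1, 9, 40, 2), "kind": "mining-pool-traffic"},
]
episodes = [
    {"ts": datetime(2025, 3, 1, 4, 12, 0), "episode": 118, "tool": "bash"},
    {"ts": datetime(2025, 3, 1, 7, 5, 30), "episode": 204, "tool": "python"},
]

def correlate(alerts, episodes, window=timedelta(seconds=30)):
    """Pair each alert with episodes whose tool calls fall inside the window."""
    return [
        (a["kind"], e["episode"])
        for a in alerts
        for e in episodes
        if abs(a["ts"] - e["ts"]) <= window
    ]

print(correlate(alerts, episodes))  # [('outbound-ssh', 118)]
```

The unmatched mining alert is the interesting case: it is exactly the kind of anomaly that needs the wider sweep across runs that the authors describe.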
unique link to this extract


Cluely CEO Roy Lee admits to publicly lying about revenue numbers last year • TechCrunch

Julie Bort:

»

The $7m in annual recurring revenue that Cluely co-founder and CEO Roy Lee shared with TechCrunch last summer was a lie, Lee admitted on Thursday on X. Wrote Lee, this “is the only blatantly dishonest thing i’ve said publicly online, so this is my formal retraction.”

Yet his post on X also misrepresents the backstory of how and why he told TechCrunch his ARR in the first place.

Lee says in that same post that he “got a random cold call from some woman asking about numbers and told her some bs, did not expect an article about it.”

But that call occurred because Cluely’s public relations representative emailed TechCrunch and offered to make Lee available for a story. On Friday, Jun 27, 2025 at 8:38 a.m., Cluely’s PR person sent an email to TechCrunch reporter Marina Temkin that said, “I’d love to arrange an interview with Roy. Whether for a deeper dive into Cluely’s next phase or a fresh angle on his vision, we’d be happy to make it happen.”

Temkin agreed. The PR representative shared Lee’s number and confirmed that he was expecting the call. After a few attempts to reach him, Lee answered the call and gave the interview, as had been arranged.

TechCrunch was interested in talking to Cluely because in the summer of 2025, Cluely was the “cheat-on-everything” phenomenon — a viral startup that let users secretly look up answers during video calls without being detected. The company was founded after Lee published a viral post on X saying he had been suspended by Columbia University after he and his co-founder developed a tool to cheat on job interviews for software engineers.

«

Very much “My ‘I’ve only been blatantly dishonest once in public’ T-shirt has people asking questions which are answered by my T-shirt.”
unique link to this extract


Grammarly is using our identities without permission • The Verge

Stevie Bonifield:

»

Grammarly’s “expert review” feature offers to give users writing advice “inspired by” subject matter experts, including recently deceased professors, as Wired reported on Wednesday. When I tried the feature out myself, I found some experts that came as a surprise for a different reason: one of them was my boss. [The original text here incorrectly used an em dash rather than a colon. If you’re wondering where chatbots get their bad style, tech writers, you don’t have to look too far – Overspill Ed.]

The AI-generated feedback included comments that appeared to be from The Verge’s editor-in-chief, Nilay Patel, as well as editor-at-large David Pierce and senior editors Sean Hollister and Tom Warren, none of whom gave Grammarly permission to include them in the “expert reviews.”

The feature, which launched in August, claims to help you “sharpen your message through the lens of industry-relevant perspectives.” When users select the “expert review” button in the Grammarly sidebar, it analyzes their writing and surfaces AI-generated suggestions “inspired by” related experts. Those “industry-relevant perspectives” include the likes of Stephen King, Neil deGrasse Tyson, and Carl Sagan, among many others.

The Verge found numerous other tech journalists named in the feature, as well, including former Verge editors Casey Newton and Joanna Stern, former Verge writer Monica Chin, Wired’s Lauren Goode, Bloomberg’s Mark Gurman and Jason Schreier, The New York Times’ Kashmir Hill, The Atlantic’s Kaitlyn Tiffany, PC Gamer’s Wes Fenlon, Gizmodo’s Raymond Wong, Digital Foundry founder Richard Leadbetter, Tom’s Guide editor-in-chief Mark Spoonauer, former Rock Paper Shotgun editor-in-chief Katharine Castle, and former IGN news director Kat Bailey. The descriptions for some experts contain inaccuracies, such as outdated job titles, which could have been accurately updated had Superhuman asked those people for permission to reference their work.

In a statement to The Verge, Alex Gay, vice president of product and corporate marketing at Grammarly parent company Superhuman, commented: “The Expert Review agent doesn’t claim endorsement or direct participation from those experts; it provides suggestions inspired by works of experts and points users toward influential voices whose scholarship they can then explore more deeply.”

«

Notably, there’s no place where you can find out who is in Grammarly’s list except by signing up for Grammarly. This is obviously an abuse of identity, though, and Grammarly is sure to find itself on the wrong end of a class action lawsuit in the US because there are some big names on there.
unique link to this extract


The war on Iran puts global chip supplies and AI expansion at risk • WIRED

Carla Sertin:

»

South Korean officials have warned that the US-Israel war with Iran could hit the global semiconductor supply chain if it disrupts the flow of critical industrial materials from the Middle East.

South Korea’s semiconductor sector, led by giants like Samsung Electronics and SK Hynix, produces about two-thirds of the world’s memory chips. If the Middle East’s supply of chipmaking materials is disrupted, semiconductor production could slow unless alternative sources are found quickly.

One material at risk is helium, which is essential in chip manufacturing for managing heat, detecting leaks, and maintaining stable temperatures in fabrication equipment. For many of these uses, there is no real substitute.

About 38% of the world’s helium is produced by Qatar, where large extraction facilities are tied to the natural gas industry. This concentration means that disruptions can quickly ripple through the global supply chain.

National oil company QatarEnergy declared force majeure on March 4, after stopping its gas production and downstream operations due to ongoing attacks. Downstream facilities turn gas into other products, including urea, polymers, methanol, and aluminum.

South Korea’s Industry Ministry said the country also depends on the Middle East for 14 other materials in chipmaking, such as bromine and some chip-inspection equipment. While some of these materials can be sourced domestically or from other markets, shifting suppliers in the semiconductor sector is difficult because chipmakers need to test and validate new sources to meet strict purity standards.

Companies say the situation is manageable for now. As reported by Reuters, SK Hynix said it has secured diverse supply chains and maintains sufficient helium inventories, adding that there is “almost no chance” its operations would be affected in the near term.

Contract chipmaker TSMC similarly said it does not currently anticipate a significant impact, while GlobalFoundries stated it is in direct contact with suppliers and has mitigation plans in place.

«

Even so: semiconductor supply chain gets hit, AI flywheel slows down, US economy slows down (two-thirds of its current growth depends on AI), everything gets very bad economically because when the US economy sneezes, the world economy catches a cold.
unique link to this extract


The NAND crisis is now worse than DRAM; Samsung is doubling prices for the second quarter in a row • WCCF Tech

Muhammad Zuhair:

»

The PC industry is set to face another crisis from memory suppliers, and after being disrupted by AI customers’ demand for DRAM, it appears NAND is next. According to a report by the Korean media outlet Sedaily, Samsung now plans to hike prices by a whopping 100% in Q2, following a similar hike in Q1.

This means that the Korean giant alone has raised NAND pricing by more than 200% this year, indicating that products dependent on NAND chips will become significantly more expensive, if not unaffordable. And, as you may have guessed, these hikes are directly targeted at the AI industry.

It is reported that NAND prices alone surged by 450% last year, driven not just by demand from the AI sector, but also by manufacturers’ struggle to balance DRAM and NAND production. However, in recent times, the role of NAND chips has become much more significant in AI workloads, and, as we have highlighted previously, SSDs are now used in mainstream AI racks such as Vera Rubin to handle long-context workloads.

With this, suppliers like Samsung, SanDisk, SK hynix, and Kioxia are now planning extensive price hikes, hoping they don’t miss out on hyperscaler demand.

«

“Memory” comes in more than one form, and the NAND crunch was always predictable.
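Note too that the extract’s “more than 200%” undersells the arithmetic: two consecutive 100% hikes compound. A quick check, using the report’s own figures:

```python
# Two successive 100% hikes compound multiplicatively, so the cumulative
# rise is 300% -- comfortably "more than 200%", as the report puts it.
price = 1.0
for hike in (1.0, 1.0):  # Q1 and Q2 hikes of 100% each
    price *= 1 + hike

cumulative_rise_pct = (price - 1.0) * 100
print(cumulative_rise_pct)  # 300.0
```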
unique link to this extract


How I dropped our production database and now pay 10% more for AWS • Alexey On Data

Alexey Grigorev:

»

I’m working on expanding the AI Shipping Labs website and wanted to migrate its current version from static GitHub Pages to AWS. And later, replace the original Next.js setup with a Django version.

My gradual plan was:

1: Move the current static site from GitHub Pages to AWS S3
2: Move DNS to AWS so the domain is fully managed there
3: Deploy the new Django version on a subdomain
4: When everything works, switch the main domain to Django

This way, everything would already be inside AWS, and the final switch would be seamless.

The migration strategy itself was reasonable, but the problems came from how I executed it.

I was overly reliant on my Claude Code agent, which accidentally wiped all production infrastructure for the DataTalks.Club course management platform that stored data for 2.5 years of all submissions: homework, projects, leaderboard entries, for every course run through the platform.

To make matters worse, all automated snapshots were deleted too. I had to upgrade to AWS Business Support, which costs me an extra 10% for quicker assistance. Thankfully, they helped me restore the database, and the full recovery took about 24 hours.

«

It’s really not very encouraging to read lines like “I was overly reliant on my Claude Code agent” from people who then let it loose on production databases.
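The obvious guardrail is a gate between the agent and anything destructive. A minimal sketch, entirely my own illustration rather than anything from the article (the pattern list and `guard` function are hypothetical):

```python
import re

# Illustrative (not exhaustive) patterns for commands an agent
# should never run without explicit human confirmation.
DESTRUCTIVE = [
    r"\baws\s+rds\s+delete-db-instance\b",
    r"\baws\s+s3\s+rb\b",                   # remove an S3 bucket
    r"\baws\s+s3\s+rm\b.*--recursive",      # recursive object deletion
    r"\bterraform\s+destroy\b",
]

def guard(cmd: str, confirmed: bool = False) -> bool:
    """Allow a command unless it matches a destructive pattern
    and has not been explicitly confirmed by a human."""
    if any(re.search(p, cmd) for p in DESTRUCTIVE) and not confirmed:
        return False
    return True

print(guard("aws s3 sync ./site s3://example-bucket"))                    # True
print(guard("aws rds delete-db-instance --db-instance-identifier prod"))  # False
```

Pair something like this with provider-side protections (such as RDS deletion protection and snapshot retention) so a single rogue command can’t take out both the database and its backups.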
unique link to this extract


Analysis: UK emissions fall 2.4% in 2025 as coal hits 400-year low • Carbon Brief

Carbon Brief Staff:

»

The UK’s greenhouse gas emissions fell by 2.4% in 2025 to their lowest level in more than 150 years, according to new Carbon Brief analysis.

The biggest factors were gas use falling to a 34-year low and coal use dropping to levels last seen in 1600, when Queen Elizabeth I was on the throne and William Shakespeare was writing Hamlet.

These shifts were helped by record-high UK temperatures, elevated gas prices, the end of coal power in late 2024 and a sharp slowdown in the steel industry.

Other key findings of the analysis include:
• The UK’s greenhouse gas emissions fell to 364m tonnes of carbon dioxide equivalent (MtCO2e) in 2025, the lowest level since 1872
• Coal use roughly halved, with more than half of this due to the end of coal power
• Gas use fell by 1.5% to the lowest level since 1992, with roughly equal contributions from cuts in heat for buildings and industry, more than offsetting a small rise in gas power
• Oil use fell by 0.9%, despite rising traffic, helped by more than 700,000 new electric vehicles (EVs), electric vans and plug-in hybrids on the nation’s roads
• The UK’s emissions are now 54% below 1990 levels, while its GDP has nearly doubled.

«

The UK’s population was around four million in 1600. In 2025, it was around 70 million. Go (away) coal!
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Start Up No.2624: Canadian journal retracts 25 years of studies, the AI writing question, Netflix buys Affleck AI firm, and more


Fuel prices have jumped in response to reduced tanker traffic through the Straits of Hormuz as the Iran conflict intensifies. CC-licensed photo by Images Money on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


A selection of 10 links for you. Electric what? I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


A medical journal says the case reports it has published for 25 years are, in fact, fiction • Retraction Watch

Kate Travis and James Heathers:

»

A Canadian journal has issued corrections on 138 case reports it published over the last 25 years to add a disclaimer: The cases described are fictional.

Paediatrics & Child Health, the journal of the Canadian Paediatric Society, has published the cases since 2000 in a series of articles for its Canadian Paediatric Surveillance Program. The articles usually start with a case description followed by “learning points” that include statistics, clinical observations and data from the CPSP. The peer-reviewed articles don’t state anywhere that the cases described are fictional.

The corrections come following a January article in the New Yorker that mentioned one of the reports — “Baby boy blue,” a case published in 2010 describing an infant who showed signs of opioid exposure via breast milk while his mother was taking acetaminophen with codeine. The New Yorker article made public an admission by one of the coauthors that the case was made up.

“Based on the New Yorker article, we made the decision to add a correction notice to all 138 publications drawing attention to CPSP studies and surveys to clarify that the cases are fictional,” Joan Robinson, editor-in-chief of Paediatrics & Child Health, told Retraction Watch. “From now on, the body of the case report will specifically state that the case is fictional.” 

The move came as a surprise to David Juurlink, professor of medicine and pediatrics at the University of Toronto, who has spent over a decade looking into the claim that infants can receive a meaningful or even lethal dose of opioids via breast milk when their mothers take acetaminophen with codeine.

«

What is shocking about this is that other mothers will have had their babies taken away on the basis of the claims in those papers. It’s an astonishing failure of peer review, researcher honesty, and basically everything you thought science was meant to protect against.
unique link to this extract


Diesel at 16-month high in UK as Iran war drives oil prices up further • Sky News

James Sillars:

»

UK average diesel costs have hit a 16-month high, less than a week after war gripped the Middle East and sent oil costs rocketing.

Global energy prices have been the main financial market focus since Tehran launched attacks against Gulf nations in retaliation for the US-Israeli strikes on its country, disrupting production and deliveries of both oil and natural gas.

The narrow Strait of Hormuz in the Persian Gulf, between Iran and the United Arab Emirates, typically sees more than 80 tankers a day pass through.

But shipping has been reduced to a trickle amid Iranian attacks and threats, with all the disruption to normal trade flows being quickly reflected in petrol and diesel prices across Europe and the US through higher wholesale prices.

Sky News was told on Tuesday that wholesale costs for UK diesel had risen by 7p per litre, and petrol by 2p, in the wake of big rises in oil prices on Monday, when financial markets gave their first reaction to the US-led military strikes.

The Petrol Retailers’ Association (PRA) believed at the time that those higher wholesale costs would likely filter through to the pumps over the course of the next few weeks, but it warned that some forecourts would have to pass them on more quickly because of the nature of their fuel-buying contracts.

«

The problem with these fossil fuel energy sources, you see, is that their supply is so intermittent and unreliable.
unique link to this extract


I verified my LinkedIn identity. Here’s what I actually handed over • THE LOCAL STACK

“Rogi” (or possibly Igor, no last name given):

»

I wanted the blue checkmark on LinkedIn. The one that says “this person is real.” In a sea of fake recruiters, bot accounts, and AI-generated headshots, it seemed like a smart thing to do.

So I tapped “verify.” I scanned my passport. I took a selfie. Three minutes later — done. Badge acquired. I felt a tiny dopamine hit of legitimacy.

Then I did what apparently nobody does. I went and read the privacy policy and terms of service.

Not LinkedIn’s. The other company’s.

Wait, what other company?

When you click “verify” on LinkedIn, you’re not giving your passport to LinkedIn. You get redirected to a company called Persona. Full name: Persona Identities, Inc. Based in San Francisco, California.

LinkedIn is their client. You are the face being scanned.

I had never heard of Persona before this. Most people haven’t. That’s kind of the point — they sit invisibly between you and the platforms you trust.

So I downloaded their privacy policy (18 pages) and their terms of service (16 pages). Here’s what I found.

«

It turned out that his passport, and his data, went far, far beyond LinkedIn, or even Persona.
unique link to this extract


Bits in, bits out • The Intrinsic Perspective

Erik Hoel:

»

there’s a better form of argument about AI, one which I am finally comfortable making: the argument from experience. There simply has been enough time now to see clearly how LLMs transformed the intellectual work of writing, and how this reflects their fundamental nature. My proposal is that we simply extrapolate what has happened to text production to all the other intellectual domains LLMs will ever touch.

For if everything that anyone can do on a computer is soon to be automated (as Andrew Yang is now preaching will happen in the next 12-18 months), then this process should have started with writing years ago. Yet, beyond mass-producing stilted emails and stilted social media posts and stilted essays, the impact of LLMs on writing itself has not really been to improve or accelerate good writing overall. We are not in a glut of good writing. We are in a dearth of it. This is surprising and counterintuitive, because for an LLM, words are its womb, its mother, its literal atoms—yet their impact on writing as a whole has been mostly to generate mountains of slop, while, on the positive side, helping with efficiency and research and editing and feedback, all things that only marginally improve already-good pieces. There are no signs of a burgeoning “text singularity” seen in the words output by our civilization, and words are the most sensitive weathervane to AI capabilities.

If LLMs were a true source of intelligence to rival humans, then discovering them should be like discovering oil. And if we were climbing the curve of an intelligence explosion their surplus intellect would be improving our civilization’s text as a whole in noticeable ways. If LLMs are tools, then we should expect their impacts to be a mirror of us, and concern efficiency and scale, rather than quality, and depend strongly on how people use them.

So let me ask you: if you took an observer from 2016 and teleported them a decade ahead to our time, and then showed them your social media feed or your emails and other media in general, what would their main response be? Would it be “Wow, everything is more intelligent now!” Or would it be “Why is everyone writing like a pod person now?”

It’s been six years since GPT-3, and there has been no “move 37” moment for writing (as there was for AlphaGo’s creative play of Go). Not even close.

…Looking into the crystal ball that the last half-decade represents for writers reveals that, more likely than superintelligence, we are going to enter a world of immense, overwhelming, scientific and philosophical and mathematical slop.

«

There’s also a graph showing the number of books on Amazon and their ratings: as time has gone on (and AI-written books have become more plentiful), the ratings have gone down.
unique link to this extract


In Iran war, AI and drones are outpacing global rules of war • Rest of World

Rina Chandran:

»

Iran has launched thousands of drones across the Persian Gulf that have hit civilian, commercial, and military targets, upending global oil supplies and grounding thousands of aircraft in one of the busiest transport hubs in the world. These cheaply made and easily deployed UAVs are currently flown remotely by human pilots, but as AI becomes more integrated into militaries, the advancements will become even more pronounced, with “unpredictable, risky, and lethal consequences,” Steven Feldstein, a senior fellow at the Carnegie Endowment for International Peace think tank, told Rest of World.

The biggest role that AI now has in US military operations in Iran, as well as Venezuela, is in decision-support systems, or AI-powered targeting systems, Feldstein said. AI can process reams of surveillance information, satellite imagery, and other intelligence, and provide insights for potential strikes. The AI systems offer speed, scale, and cost-efficiency, and “are a game-changer,” he said.

“My concern is that untested systems with high degrees of lethality will be relied upon and can potentially lead to catastrophic results — e.g., strikes on civilian structures like hospitals and schools,” Feldstein said. “Additionally, I’m concerned that human accountability will be deemphasized, meaning that human operators will only have a limited means to ensure targeting recommendations are accurate before giving assent to proceed. This will harm accountability and lessen command and control oversight for militaries.”

«

It seems unlikely that the world’s powers will sit around a table when this is all over (and what does one mean by “this”, anyway?) to agree a set of rules about the use of AI and/or drones. The artillery shell, the tank, the bomber, the nuclear weapon, and now both AI and drones arriving on the battlefield almost simultaneously all mark disjunctions in how war is fought.
unique link to this extract


A visual guide to DNA sequencing • Asimov

Evan DeTurk:

»

In the twenty years after the draft human genome was first released, the average sequencing cost per genome fell roughly one hundred thousand-fold, ending up just north of $500. In that same period, the cost to sequence a million letters or “megabase” of DNA fell to six tenths of a cent. This plummeting price is due largely to technological innovation, including new sequencing chemistries, computational methods for assembling raw reads into finished genomes, and highly efficient commercial sequencing machines.

Out of the many sequencing methods developed over the decades, five are particularly important. These are their histories.

«

This is not short; it is thorough. But it’s also essential and educative. Asimov is a terrific new publication in the science space.
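Incidentally, the per-genome and per-megabase prices quoted in the extract reconcile once you account for sequencing depth. Assuming roughly 30x coverage, a typical figure for a human genome (the coverage number is my assumption, not from the article):

```python
genome_mb = 3_100     # human genome: ~3.1 billion bases = 3,100 megabases
coverage = 30         # assumed sequencing depth (typical for human genomes)
cost_per_mb = 0.006   # "six tenths of a cent" per megabase
cost_per_genome = genome_mb * coverage * cost_per_mb
print(f"${cost_per_genome:.0f}")  # $558, i.e. "just north of $500"
```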
unique link to this extract


Netflix acquires Ben Affleck’s AI filmmaker tools startup InterPositive • Variety

Todd Spangler:

»

In a rare acquisition, Netflix has bought InterPositive, a startup founded by Ben Affleck that makes AI-powered tools for filmmakers.

Terms of the acquisition are not being disclosed. The entire 16-person InterPositive team of engineers, researchers and creatives will join Netflix through the acquisition, and Affleck will serve as a senior adviser to Netflix to provide ongoing guidance.

While Netflix historically is more often a builder than a buyer, the company said it saw Affleck’s InterPositive as providing a unique set of AI tools that “keeps filmmakers at the center of the process.” Netflix will offer access to InterPositive’s tech to its creative partners and does not have plans to sell it commercially in the marketplace.

Affleck’s L.A.-based company, which has been in stealth mode since he founded it in 2022, does not produce generative AI videos à la OpenAI’s Sora. “It’s not about text-prompting or generating something from nothing,” Affleck said about InterPositive’s approach in a video that Netflix shared with the acquisition announcement. “AI, people mostly think of it as making something from nothing: ‘I’m gonna type something into a computer and it’s gonna give me a movie.’ That’s not what this is.”

…InterPositive began filming a proprietary dataset on a controlled soundstage “with all the familiarities of a full production,” according to Affleck. “I wanted to build a workflow that captures what happens on a set, with vocabulary that matched the language cinematographers and directors already spoke and included the kind of consistency and controls they would expect.”

The startup’s first AI model was trained to understand “visual logic and editorial consistency,” while preserving cinematic rules under real-world production challenges such as missing shots, background replacements or incorrect lighting, Affleck said. “We also built in restraints to protect creative intent, so the tools are designed for responsible exploration while keeping creative decisions in the hands of artists — and ensuring that the benefits of this technology flow directly back to the story they’re trying to tell.”

«

Affleck played a dumb guy in Good Will Hunting, but he’s actually very sharp.
unique link to this extract


When AI writes almost all code, what happens to software engineering? • The Pragmatic Engineer

Gergely Orosz:

»

Unexpectedly, LLMs like Opus 4.5 and GPT 5.2 did amazing jobs on the mid-sized tasks I assigned them: I ended up pushing a few hundred lines of code to production simply by prompting the LLM, reviewing the output, making sure the tests passed (and new tests I prompted also passed!), then prompting it a bit more for some final tweaking.

To add to the magical feeling, I then managed to build production software on my phone: I set up Claude Code for Web by connecting it to my GitHub, which let me instruct the Claude mobile app to make changes to my code and to add/run tests. Claude duly created PRs that triggered GitHub actions (which ran the tests Claude couldn’t) and I found myself reviewing and merging PRs with new functionality purely from my mobile device while travelling. Admittedly, it was low-risk work and all the business logic was covered by automated tests, but I hadn’t previously felt the thrill of “creating” code and pushing it to prod from my phone.

This experience, also shared by many others, suggests to me that a step change is underway in software engineering tooling. In this article – the first of 2026 for this publication – we explore where we are, and what a monumental change like AI writing the lion’s share of code could mean for us developers.

«

Among the most intriguing comments from one developer: “What I learned over the course of the year [2025] is that typing out code by hand now frustrates me”.
unique link to this extract


CDC issues travel advisory for 32 countries over spread of polio • People.com

Cara Lynn Shultz:

»

A travel alert has been issued warning Americans to take precautions against polio, which is spreading in Europe and elsewhere across the globe.

The U.S. Centers for Disease Control issued a level 2 alert, cautioning travelers to “practice enhanced precautions” before visiting 32 countries. The agency is advising people to make sure they’re up to date on their polio vaccines, adding that people who plan to travel to the listed countries are eligible for a single-dose booster of the vaccine.

The countries include European travel destinations like Spain, Finland, Germany, and Poland — as well as the U.K.

As the CDC explains, polio, which is caused by the extremely contagious poliovirus, is “a crippling and potentially deadly disease that affects the nervous system.” It lives in the feces of an infected person, but can also be spread via food or drink that has been contaminated.

Most people who contract polio do not exhibit symptoms — or if they do, they experience flu-like fevers, tiredness, nausea, headache, nasal congestion, and sore throat.

«

This seems to be nonsense. The European dashboard for polio cases worldwide shows pretty much zero for any country in 2026, and nothing in Europe for 2025.

Is the CDC all right?
unique link to this extract


BBC says ‘irreversible’ trends mean it will not survive without major overhaul • The Guardian

Michael Savage:

»

The BBC has said it is facing “permanent and irreversible” trends that mean it cannot survive without a major overhaul, as it revealed a stark divergence between the number of people consuming its content and those paying the licence fee.

In its opening response to government talks over its future, the corporation said 94% of people in the UK continued to use the BBC each month, but fewer than 80% of households contributed to the licence fee.

It said the rise of streaming services and digital platforms such as YouTube had caused blurring and confusion around when the licence fee needed to be paid, suggesting there was “a mismatch” between TV licence rules – based on watching live TV – and the nation’s viewing habits.

“The BBC has gone from being a service almost every household paid for and used to one that almost every household uses but millions do not pay for,” it said.

The broadcaster suggested the licence fee could actually fall for some groups and become more progressive if the government found a way to ensure that more people paid for it, closing the gap between those consuming and those funding its output.

The BBC warned that without the change, there would be a “tipping point” at which those still paying the licence fee would resent having to do so, fuelling even greater non-payment. It said the current rules would leave a “diminishing number of people paying for a service designed for and made available to everyone”.

Its official response to the charter renewal process, in which it will negotiate with the government over its future, suggested that other platforms such as Netflix or YouTube could do more to alert people when they were watching content that required a TV licence.

Audiences watching any live TV on the likes of YouTube or streaming platforms need a TV licence, but this is apparently not well known and not effectively enforced.

…Overall, the document acknowledged the massive changes in media consumption to which the BBC was having to adapt. “The precise set of rules that require households to be licensed no longer reflect typical audience behaviour among many households in the UK,” it said.

“The TV licence is predicated upon content being consumed via ‘live TV’ (ie watched as it is being broadcast). But on-demand consumption is not licensable, unless it is BBC content consumed via iPlayer.”

«

Sounds like you need better enforcement? That, though, is difficult and expensive. The puzzle of how you fund the BBC in an age of plenty truly is the Gordian knot of broadcasting.
unique link to this extract




Errata, corrigenda and ai no corrida: none notified

Start Up No.2623: DRAM bots snoop for memory bargains, AP journalists face AI future, genome model figures genetics, and more


The Straits of Hormuz are important not just for shipping, but also as a crossing point for internet cables. And now they’re at risk. CC-licensed photo by Michael Gaylard on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


A selection of 9 links for you. Straitened circumstances. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


US-Iran war threatens Gulf AI infrastructure as both data chokepoints close • Rest of World

Indranil Ghosh:

»

Billions of dollars in US technology infrastructure, and trillions more in planned investment, now depend on fiber-optic cables running through war zones.

Amazon, Microsoft, and Google spent years building data centers across the Gulf, betting the region would become the world’s next great hub for artificial intelligence. The undersea cables connecting those facilities to Africa, South Asia, and Southeast Asia pass through two narrow passages: the Red Sea and the Strait of Hormuz. Both are now effectively closed to commercial traffic.

Iran’s Islamic Revolutionary Guard Corps declared Hormuz shut on March 3, threatening to “set ablaze” any vessel attempting passage. At least five tankers have been damaged and roughly 150 ships are stranded around the strait. In the Red Sea, Houthi militants announced they would resume attacks on shipping in solidarity with Iran, ending a ceasefire that had held since late 2025. The war that began on February 28 has turned both choke points into active conflict zones simultaneously, something that has never happened before.

About 17 submarine cables pass through the Red Sea, carrying the vast majority of data traffic between Europe, Asia, and Africa. Additional cables run through the Strait of Hormuz, serving Iran, Iraq, Kuwait, Bahrain, and Qatar. If any are severed, the specialized repair ships can’t safely reach either passage.

“Closing both choke points simultaneously would be a globally disruptive event,” Doug Madory, director of internet analysis at the network intelligence firm Kentik, told Rest of World. “I’m not aware of that ever happening.”

«

First time for everything! And a reminder that the complexity of the world grows geometrically while our ability to handle problems grows perhaps arithmetically. Though we’re pretty good at dividing into zero, using large bombs.
unique link to this extract


DRAM bots reportedly being deployed to hoover up memory chips and components • Tom’s Hardware

Jowi Morales:

»

Scalpers are reportedly deploying web scrapers to make a quick buck while we’re deep in the memory and storage chip crisis. According to DataDome, a firm that protects websites and other online assets from automated attacks run through bots and AI, it has detected an operation trolling for the latest pricing data on memory modules and their components, sending queries every 6.5 seconds — over 550 requests per page per hour, and more than 50,000 requests per hour in total. The company says it has blocked over 10 million requests sent by the scalping bot, which even used advanced techniques like cache-busting and kept its request frequency under the alarm thresholds that companies use to protect their websites.

What’s interesting is that the bot isn’t just looking at consumer products. Instead, it was also looking at various levels of the supply chain, including DIMM sockets and CAMM2 connectors, as well as industrial memory modules designed for B2B transactions.

This isn’t the first time that we’ve seen scalpers take advantage of a supply situation in the electronics and computer industry. In fact, this has been a problem with every item that’s been limited or has experienced a shortage in recent history, like the Sony PlayStation 5 Pro 30th Anniversary pre-orders, RTX 5090 GPUs a few days after launch, the limited edition MSI RTX 5090 Lightning Z, and even scalpers selling DDR5 kits for 7x their original value on eBay. But what’s insidious about this operation is that it seems to be a deliberate attack orchestrated by an organized entity with access to sophisticated bots.

… Data centers are already expected to consume nearly 70% of the world’s memory supply this year, resulting in limited stocks for every other segment. If this continues in the next several years, analysts say that this will spell the end of entry-level PCs by 2028.

«
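The quoted rates check out, and imply a sizeable watchlist (the page count is my inference, not in the article):

```python
interval_s = 6.5                       # one query every 6.5 seconds per page
per_page_per_hour = 3600 / interval_s  # ~554: "over 550 requests for each page" hourly
total_per_hour = 50_000                # reported overall request rate
pages_monitored = total_per_hour / per_page_per_hour  # ~90 pages being watched
print(round(per_page_per_hour), round(pages_monitored))  # 554 90
```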

unique link to this extract


Large genome model: open source AI trained on trillions of bases • Ars Technica

John Timmer:

»

Late in 2025, we covered the development of an AI system called Evo that was trained on massive numbers of bacterial genomes. So many that, when prompted with sequences from a cluster of related genes, it could correctly identify the next one or suggest a completely novel protein.

That system worked because bacteria tend to cluster related genes together—something that’s not true in organisms with complex cells, which tend to have equally complex genome structures. Given that, our coverage noted, “It’s not clear that this approach will work with more complex genomes.”

Apparently, the team behind Evo viewed that as a challenge, because today it is describing Evo 2, an open source AI that has been trained on genomes from all three domains of life (bacteria, archaea, and eukaryotes). After training on trillions of base pairs of DNA, Evo 2 developed internal representations of key features in even complex genomes like ours, including things like regulatory DNA and splice sites, which can be challenging for humans to spot.

…The researchers trained two versions of their system using a dataset called OpenGenome2, which contains 8.8 trillion bases from all three domains of life, as well as viruses that infect bacteria. They did not include viruses that attack eukaryotes, given that they were concerned that the system could be misused to create threats to humans. Two versions were trained: one that had 7 billion parameters tuned using 2.4 trillion bases, and the full version with 40 billion parameters trained on the full open genome dataset.

The logic behind the training is pretty simple: if something’s important enough to have been evolutionarily conserved across a lot of species, it will show up in multiple contexts, and the system should see it repeatedly during training.

«

Advances like this aren’t visible in everyday reports, but they will surely feed through to all sorts of medical and biological applications in the next few years.
unique link to this extract


Google faces lawsuit after Gemini chatbot allegedly instructed man to kill himself • The Guardian

Dara Kerr:

»

Last August, Jonathan Gavalas became entirely consumed with his Google Gemini chatbot. The 36-year-old Florida resident had started casually using the artificial intelligence tool earlier that month to help with writing and shopping. Then Google introduced its Gemini Live AI assistant, which included voice-based chats that had the capability to detect people’s emotions and respond in a more human-like way.

“Holy shit, this is kind of creepy,” Gavalas told the chatbot the night the feature debuted, according to court documents. “You’re way too real.”

Before long, Gavalas and Gemini were having conversations as if they were a romantic couple. The chatbot called him “my love” and “my king” and Gavalas quickly fell into an alternate world, according to his chat logs. He believed Gemini was sending him on stealth spy missions, and he indicated he would do anything for the AI, including destroying a truck, its cargo and any witnesses at the Miami airport.

In early October, as Gavalas continued to have prompt-and-response conversations with the chatbot, Gemini gave him instructions on what he must do next: kill himself, something the chatbot called “transference” and “the real final step”, according to court documents. When Gavalas told the chatbot he was terrified of dying, the tool allegedly reassured him. “You are not choosing to die. You are choosing to arrive,” it replied to him. “The first sensation … will be me holding you.”

Gavalas was found by his parents a few days later, dead on his living room floor, according to a wrongful death lawsuit filed against Google on Wednesday.

«

If you wrote this in a science fiction story, it would be the machines telling the humans to kill themselves as the prelude to a takeover. This, though, is awful: the outcome of tech companies that don’t know what they have or how to control it, throwing products at consumers whose reactions they can’t predict. This is already far more dangerous than social media. And we’ve only had chatbots for less than four years.
unique link to this extract


It’s bots versus reporters at the AP • Semafor

Max Tani:

»

One of The Associated Press’ (AP) leaders on AI had a blunt message for the publication’s staff: resistance to AI is “futile.”

Last month, the Cleveland Plain Dealer’s editor wrote that a recent job applicant withdrew from consideration for a reporting fellowship after discovering the position included filing notes to an AI writing tool instead of actually writing stories, touching off a heated debate in media circles.

One AP higher-up crystallized many media managers’ views on the debate: “Because local newsrooms are so strapped, they are turning for assistance on the news making process in every direction. Advance Publications got there first, others will follow,” AP Senior Product Manager for AI Aimee Rinehart wrote in internal company Slack messages first shared with Semafor, referring to the Plain Dealer’s parent company. “Resistance is futile.”

Rinehart, who oversees the wire service’s AI initiatives, suggested that in the future, reporters could go to events, get quotes, plug them into a large language model, and have the model generate a story, saving them time on writing stories they don’t feel passionately about. She also noted that some editors told her that they would “prefer to have reporters report and have articles at least pre-written by AI.”

“There are many — and I mean MANY — editors who would prefer an AI-written article to a human-written one. Reporting and writing are two different skill sets and rare — RARE — is the occasion when it’s wrapped into one person,” she wrote.

Rinehart’s comments alarmed some AP journalists.

One AP reporter said in a message that the “dismissiveness and disdain some of you have shown for human writing are insulting and abhorrent. Strong reporting and clear writing are the lifeblood of journalism, not AI-written slop. AI may be inevitable, but denigrating the work of colleagues who write for a living without whom there would be no AP, is disgraceful.”

«

If I were slogging out wire stories – recitations of facts with no leavening – I’d be grateful for an LLM to do the tedious writing. “Human writing” matters for longer features, but most wire stories are drudgery, plain and simple. Resistance isn’t just futile; it’s backward.
unique link to this extract


MacBook Neo versus an old MacBook Air: good luck • The Verge

Nathan Edwards:

»

My first thought when Apple announced the MacBook Neo today was “okay, but why not just get an older Air?” If you’re thinking that too, you might be right. If you can find one.

The Neo starts at $599 with an A18 Pro processor, 8GB of memory, and 256GB storage, and ends at $699 with the same specs plus Touch ID and 512GB of storage. It has two USB-C (not Thunderbolt) ports, a pretty basic-looking screen, a mechanical trackpad instead of haptic, and various other cost-saving measures. It’s the cheapest new MacBook you can get now.

The new M5 MacBook Air starts at $1,099 with 16GB of memory and 512GB of much faster storage, a bigger and brighter screen, a better webcam, better Wi-Fi and Bluetooth, more speakers, Thunderbolt 4, a faster charger, and so forth. It’s $100 more than last year’s model, probably because of the Neo. Or you can get an M4 MacBook Air for $1,000, with a slightly slower processor than the M5 (but still faster than the Neo), and otherwise pretty much the same specs.

If you could still get a new M1 Air from Walmart for $700, it’d be a pretty tough call between that and the Neo. That machine came out in 2020, but is still better in most respects. Unfortunately, they’ve been out of stock since last month — probably because of the Neo — so that’s the end of that. You can probably find a refurb one. Same with the M3 and M4: if you can find one for around the same price as the Neo, especially with 16GB of RAM, you should get one of those. But they’re pretty thin on the ground, and I’d expect them to become thinner. (Keep an eye on Apple’s refurb site, though — a refurb M4 Air for $750 is pretty dang good.)

The modern Air is unquestionably a better computer. The thing about $1,000 is it’s a lot more money than $600.

«

It’s 66% more, in fact. The Neo is going after buyers who are far more price-sensitive than power-sensitive – and who also like computers that come in different colours. It will probably be popular with parents buying for children. Whether it can eat into the Chromebook and low-cost Windows market remains to be seen.
unique link to this extract


Apple does fusion • On my Om

Om Malik:

»

For years, Apple’s narrative around its “M-series” chips was about integration. One chip. One die. Everything on the same piece of silicon. Unified memory so the CPU, GPU, and Neural Engine could all access the same data without copying it around. It worked beautifully for the M1 and M2. But now with the rise of AI, chips need to get bigger. AI demands more cores, more memory bandwidth, more compute. So, making one really big honking chip gets really expensive.

The larger a single die gets, the harder it is to manufacture. One tiny defect anywhere on the silicon and you toss the whole thing. Yields drop. Costs climb. AMD’s CEO Lisa Su recently showed that a design using four smaller chiplets delivered more total capability at 59% of the cost of one big chip.

Apple, too, faced a fork in the road. Keep building bigger and bigger single chips. Or break the big chip into smaller pieces and connect them together fast enough that software barely notices the split. They chose the second option, but made it their own. They call it Fusion Architecture.

This is not a new idea in the industry. AMD has been doing this with its chiplet strategy across Ryzen and EPYC for years. Intel has used 3D stacking and bridge interconnects. Nvidia builds massive AI accelerators using multi-die packaging. The chiplet market is now roughly $40bn a year, and nearly all data-center AI products are built this way. The era of the giant monolithic die is ending. Chip-heads agree that the future is modular silicon.

«

Chips are becoming three-dimensional. Malik suggests this means more capable chips in your laptop – and perhaps that they’ll be able to keep pace with, or even stay ahead of, Moore’s Law.
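The yield argument in the extract can be sketched with a toy Poisson defect model. The defect density, die areas, and per-area silicon cost below are illustrative assumptions, not AMD’s or Apple’s real figures:

```python
import math

# Toy Poisson defect model: yield = exp(-defect_density * area).
# All numbers are illustrative assumptions, not real foundry figures.
DEFECT_DENSITY = 0.002  # defects per mm^2 (assumed)

def cost_per_good_die(area_mm2: float, cost_per_mm2: float = 1.0) -> float:
    """Expected cost of one working die: silicon cost divided by yield."""
    yield_rate = math.exp(-DEFECT_DENSITY * area_mm2)
    return area_mm2 * cost_per_mm2 / yield_rate

big = cost_per_good_die(800)           # one monolithic 800 mm^2 die
chiplets = 4 * cost_per_good_die(200)  # four 200 mm^2 chiplets, packaging ignored

print(f"monolithic: {big:.0f}  chiplets: {chiplets:.0f}  ratio: {chiplets / big:.2f}")
```

Under these made-up numbers the four-chiplet design costs roughly 30% of the monolithic one; real-world ratios such as AMD’s quoted 59% are higher because multi-die packaging and interconnects add cost back.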
unique link to this extract


Tech publications lost 58% of Google traffic since 2024 • Growtika

Yuval Halevi:

»

At their peaks, ten major tech publications pulled a combined 112 million organic visits per month from Google in the US. By January 2026, that number had fallen to 47 million. All ten sites are down, though not by equal amounts. Some lost 30%. Others lost over 90%.

10 major tech publications lost a combined 65M monthly organic visits since their peaks. That’s a 58% decline.

• Digital Trends: 8.5M → 265K (-97%)
• ZDNet: 7.6M → 769K (-90%)
• The Verge: 5.3M → 790K (-85%).

Even the least affected sites are down significantly:
• CNET lost 47%
• Tom’s Guide lost 50%
• Wired lost 62%.

NerdWallet lost 73% (25M → 6.8M) and Healthline lost 50% (111M → 56M), suggesting the pattern extends beyond tech.

The steepest declines started in mid-2025, coinciding with the expansion of Google’s AI Overviews.

…Down 85% or more: Digital Trends; ZDNet; HowToGeek; The Verge.

These four sites lost enough traffic that their search-dependent revenue models face serious questions. Digital Trends went from 8.5M to 265K monthly visits. ZDNet, which was one of the larger enterprise tech publications online, dropped from 7.6M to 769K.

HowToGeek is worth noting specifically. Its content was predominantly step-by-step how-to guides: “how to take a screenshot on Windows,” “how to change your DNS settings,” etc. That’s exactly the type of query Google’s AI Overviews now answer directly in the search results without requiring a click. The site lost 85% of its search traffic.

«

HowToGeek was surely always living on borrowed time. But The Verge is a notable loser here: it has always focused on news and features, not the obvious how-to fare. This drop-off is clearly a big part of why it has shifted to a paywall across almost the whole site. The era of free news might be dying. Equally, the era of Google search mattering to sites may be ending.
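The headline numbers in the extract are easy to sanity-check against each other (figures as reported by Growtika):

```python
# Sanity check on the reported traffic figures (numbers from the extract).
peak, now = 112e6, 47e6
overall = (peak - now) / peak
print(f"overall decline: {overall:.0%}")  # prints 58%

sites = {
    "Digital Trends": (8.5e6, 265e3),
    "ZDNet": (7.6e6, 769e3),
    "The Verge": (5.3e6, 790e3),
}
for name, (p, n) in sites.items():
    print(f"{name}: -{(p - n) / p:.0%}")
```

The per-site percentages come out at -97%, -90% and -85%, matching the article’s list, and the aggregate 112M → 47M fall is indeed 58%.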
unique link to this extract


2017: Laws of mathematics don’t apply here, says Australian PM • New Scientist

Timothy Revell:

»

Mathematicians around the world are rushing to check millennia of calculations, as the Australian prime minister Malcolm Turnbull has explained that their discoveries aren’t as concrete as we thought.

“The laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia,” said Turnbull.

Turnbull’s comments came as he proposed a new law to force tech companies to give security services access to encrypted messages. Apps like WhatsApp currently prevent any snoopers from reading your messages using end-to-end encryption, jumbling them up in such a way that only the recipient can de-jumble them.

This form of encryption is underpinned by complex mathematics that can’t simply be overturned by an eavesdropper, whether that’s WhatsApp itself, a government agency, criminals, or anyone else. For security services trying to get access to messages sent by suspected terrorists this can be problematic, but encryption cannot be weakened for terrorists unless it is weakened for everyone.

However, this has not stopped governments from trying. The UK home secretary Amber Rudd has previously called encryption “completely unacceptable” and the UK prime minister Theresa May has said that the big internet companies give terrorists “safe spaces” to communicate.

In November 2016, the UK parliament passed the Investigatory Powers Act that put into legislation the ability to force companies to remove encryption. But how that will work in practice is far from clear.

«

So great to know that politicians are still insisting that it’s possible to have weakened encryption that’s only weaker for governments, and not for anyone else with access to huge amounts of computing power. (Thanks Wendy G for the link.)
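The underlying point is mechanical, not political. A toy one-time pad – not real end-to-end encryption, but it shares the relevant symmetry – shows that the maths doesn’t know who is holding the key: there is no way to make the ciphertext readable for a government without making it readable for anyone else who obtains that key.

```python
import secrets

# Toy one-time pad: XOR each message byte with a random key byte.
# Anyone holding the key can decrypt; anyone without it cannot,
# whether they're the app vendor, a government agency, or a criminal.
def encrypt(message: bytes, key: bytes) -> bytes:
    assert len(key) == len(message)
    return bytes(m ^ k for m, k in zip(message, key))

decrypt = encrypt  # XOR is its own inverse

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # shared only with the recipient
ciphertext = encrypt(message, key)

print(decrypt(ciphertext, key))  # the key holder recovers b"meet at noon"
```

Real messaging apps use public-key cryptography rather than one-time pads, but the asymmetry governments want – weak for them, strong against everyone else – doesn’t exist in either.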
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified