
The price of the resin used in printed circuit boards (PCBs) has risen dramatically due to shortages caused by, guess what, the Iran war. CC-licensed photo by David Lenker on Flickr.
The Overspill will be on a two-week break from next week. Next edition Monday May 18, if all goes to plan.
A selection of 11 links for you. Wired up. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.
AI outperforms doctors in Harvard trial of emergency triage diagnoses • The Guardian
Robert Booth:
»
From George Clooney in ER to Noah Wyle in The Pitt, emergency department doctors have long been popular heroes. But will it soon be time to hang up the scrubs?
A groundbreaking Harvard study has found that AI systems outperformed human doctors in high-pressure emergency medicine triage, diagnosing more accurately in the potentially life and death moments when people are first rushed to hospital.
The results were described by independent experts as showing “a genuine step forward” in the clinical reasoning of AIs and came as part of trials that tested the responses of hundreds of doctors against an AI.
The authors said the results, published in the journal Science, showed large language models (LLMs) “have eclipsed most benchmarks of clinical reasoning”.
One experiment focused on 76 patients who arrived at the emergency room of a Boston hospital. An AI and a pair of human doctors were each given the same standard electronic health record to read – typically including vital sign data, demographic information and a few sentences from a nurse about why the patient was there. The AI identified the exact or very close diagnosis in 67% of cases, beating the human doctors, who were right only 50%-55% of the time.
The study showed the AI’s advantage was particularly pronounced in triage circumstances requiring rapid decisions with minimal information. The diagnostic accuracy of the AI – OpenAI’s o1 reasoning model – rose to 82% when more detail was available, compared with the 70%-79% accuracy achieved by the expert humans, though this difference was not statistically significant.
«
Going to make for a very boring TV series, though. Still a role for paramedics, at least.
unique link to this extract
The Pulse: token spend breaks budgets – what next? • The Pragmatic Engineer
Gergely Orosz:
»
Last week, we covered the slightly perverse trend of “tokenmaxxing” across the industry, where devs run agents with the sole aim of boosting their personal “token stats” in an effort to rank higher on internal token leaderboards, and not be seen as a Luddite who doesn’t use AI tools enough compared to peers.
This week, I spoke with a software engineer at a large company and another at a seed-stage place. Both shared almost identical stories: at their latest all-hands, company leadership expressed concerns about the fast-rising costs of tokens. At both places, token spend has increased by ~10x in the last six months – with no signs of slowing down.
I wanted to find out about this trend, so I talked to devs at 15 businesses. Below is what I learned about what’s happening in workplaces of all sizes. Names are anonymized.
…Fintech company, US+Europe, late stage, ~5,000 people. Staff engineer: “Some developers are now spending $500 a day (!!) on Claude Code. Practically speaking, this means that employee costs have doubled. Productivity has increased, in my view, but now the bottleneck is code reviews. AI can spit out code quite quickly, but we still have human reviews in place. Leadership encourages using AI for code review, but my team will not blindly trust AI.
“The push from AI is coming from the top. This year’s performance review had a section on AI, rating devs by how well they used AI, so this is another reason everyone just uses it as much as they can.”
…Series A, US, ~50 people. Principal Engineer: “About 15 devs are heavy users of AI and costs are rising very fast. Almost everyone uses Claude and Claude Code. We are considering four potential options:
1. Increase AI budget, and start measuring more. Continue doing what we are, but allow devs to use more tokens instead of hiring limits. The precise ROI is hard to quantify, but we’ll start to measure and track both AI adoption and impact.
2. Optimize token consumption. Use cheaper models for simpler tasks, review token usage, and see where we can cut usage. Downside: this approach could become one with diminishing returns, fast.
3. Integrate more AI providers in the company. Find wrappers to abstract LLMs. The problem is: how do you replace Claude Code, for instance?
4. Pivot to local models, such as Kimi, Qwen, and so on. The problem is it’s a big investment in high-end hardware or cloud GPUs. Upside: it offers better long-term cost control, once done.”
«
The post is full of such stories. The demand is there, but let’s see what happens once the new set of prices comes in. And “the precise ROI is hard to quantify”? Not the most encouraging phrase.
unique link to this extract
Iran war disrupts the circuit board supply chain, raising costs for tech firms • Reuters
Che Pan, Liam Mo and Hyunjoo Jin:
»
The conflict in the Middle East has disrupted supplies of crucial raw materials and pushed up prices of the printed circuit boards (PCB) used in almost all electronic devices, from smartphones and computers to AI servers, industry sources and executives said.
The disruption is a fresh blow to electronics manufacturers which are already grappling with soaring memory chip costs and highlights the broadening impact of the Iran war that has wreaked havoc on supply chains, plastics, and oil supplies.
Iran struck Saudi Arabia’s Jubail petrochemical complex in early April, forcing a halt in production of high-purity polyphenylene ether (PPE) resin — a critical base material used to manufacture PCB laminates.
SABIC, which accounts for approximately 70% of the world’s high-purity PPE supply and operates in the Jubail complex on the Gulf coast, has been unable to resume output, severely tightening the availability of the material worldwide, according to one source. Shipping in and out of the Gulf has also been severely disrupted by the war.
PCB prices have been climbing since late last year, driven by a growing appetite for AI servers. Demand has been accelerating sharply since March as manufacturers scramble to secure raw material supplies and soften the impact of skyrocketing costs, three industry sources told Reuters.
In April alone, PCB prices surged as much as 40% from March, Goldman Sachs analysts said in a recent note. Cloud service providers are willing to accept further increases as they expect demand will outstrip supplies over the coming years, they added.
The global PCB industry is projected to increase by 12.5% to reach $95.8bn in 2026, according to a recent report from Prismark.
Daeduck Electronics, a South Korean PCB maker whose customers include Samsung Electronics, SK Hynix and AMD, has begun discussions with customers over price increases, a senior executive at the company told Reuters.
«
Inflation in the price of technology components is just going mad, and none of this helps at all. Prediction: the Strait of Hormuz doesn’t open before July. (That’s eight weeks away. So far it’s been closed eight weeks. Good explainer from June 2025 about its relevance.)
unique link to this extract
How AI killed student writing (and revived it) • The New York Times
Dana Goldstein:
»
In the era of artificial intelligence, take-home writing assignments have become so difficult to police for integrity that many educators have simply stopped assigning them.
Instead, in a rapid shift, teachers are requiring students to write inside the classroom, where they can be observed. Assignments have changed too, with some educators prompting students to reflect on their personal reactions to what they’ve learned and read — the type of writing that AI struggles to credibly produce.
This transformation is happening across the educational landscape, from suburban districts and urban charter schools to community colleges and the Ivy League.
The New York Times heard from nearly 400 college and high school educators who responded to a callout about how generative AI is changing writing instruction. Almost all described a deep rethinking of how to teach writing — and whether it still matters, since AI has become a better writer than most students (and adults), they said.
Teachers are responding to a widespread challenge. Over the past year, AI use has become ubiquitous among American students. Between May and December of 2025, the share of American middle school, high school and college students who reported regularly using AI for homework increased from 48% to 62%, according to polling from RAND — even as two-thirds of students said the technology harmed critical-thinking skills. A third of the students reported using AI to draft or revise writing.
…“The standard curriculum was a thesis-driven research essay that students completed on their own time outside of class,” said Marc Watkins, who directs the AI Institute for Teachers at the University of Mississippi. “That is, unfortunately, gone.”
…One April afternoon in her AP literature class, Ms. [Jessica] Binney read aloud “XIV,” a poem by the St. Lucian Nobel laureate Derek Walcott. It describes the poet and his brother as children, trekking into the Caribbean forest to listen at the feet of a traditional storyteller.
Walcott’s language is lush and challenging. Students marked up paper handouts of the text, underlining and scrawling in the margins. Then they took out notebooks and began to draft essays analyzing literary devices.
“I want you to write out a really rough, terrible draft in your writers’ notebooks,” Ms. Binney told them. “And then I want you to scratch it out and rewrite it.”
There was nary a laptop or tablet in sight. For these juniors and seniors, who have been taught on screens for much of their schooling, Ms. Binney’s class can be a welcome break.
“It’s a relief,” said Cassady Tondorf, 17. “There’s less distraction.”
«
unique link to this extract
Dispute over fate of Kenyan workers who saw Meta AI glasses films • BBC News
Chris Vallance:
»
Meta is under pressure to explain why it cancelled a major contract with a company it was using to train AI, shortly after some of its Kenya-based workers alleged they had to view graphic content captured by Meta smart glasses.
In February, workers at the company, Sama, told two Swedish newspapers they had witnessed glasses users going to the toilet, and having sex.
Less than two months later, Meta ended its contract with Sama, which Sama said would result in 1,108 workers being made redundant.
Meta says it’s because Sama did not meet its standards, a criticism Sama rejects. A Kenyan workers’ organisation alleges Meta’s decision was caused by the staff speaking out. Meta has not addressed that allegation but told BBC News in a statement it had “decided to end our work with Sama because they don’t meet our standards”.
Sama has defended its work.
“Sama has consistently met the operational, security and quality standards required across our client engagements, including with Meta,” it said in a statement. “At no point were we notified of any failure to meet those standards, and we stand firmly behind the quality and integrity of our work.”
In late February, Swedish newspapers Svenska Dagbladet (SvD) and Goteborgs-Posten (GP) published an investigation which included the accounts of unnamed workers who had been asked to review videos filmed by Meta’s glasses. “We see everything – from living rooms to naked bodies,” one worker reportedly said.
At the time of the publication, Meta admitted subcontracted workers might sometimes review content filmed on its smart glasses when people shared it with Meta AI. It said this was for the purpose of improving the customer experience, and was a common practice among other companies.
However, the revelations have prompted regulators to act.
«
One suspects that the standards Sama didn’t meet were those saying “don’t let your staff tell the press about our astonishing invasion of privacy that might get regulators involved”. The leopard doesn’t change its spots.
unique link to this extract
Senators introduce bipartisan bill to ban Chinese vehicles and auto parts • NBC News
Allan Smith:
»
A bipartisan Senate duo introduced a bill on Wednesday to ban the importation of Chinese-made vehicles and auto parts, weeks ahead of US President Donald Trump’s planned sit-down with Chinese President Xi Jinping.
Sens. Bernie Moreno, R-Ohio, and Elissa Slotkin, D-Mich., introduced the Connected Vehicle Security Act, which would ban automobiles, parts and vehicle software made in China or in partnership with China, as well as other adversarial nations, from the US market.
The Commerce Department last year issued a rule that restricted such vehicles and parts from the US market, but both Moreno and Slotkin spoke of the importance of codifying the effort into law. On Tuesday, more than 70 House Democrats signed a letter urging Trump to block Chinese automakers from the US market ahead of his meeting with the Chinese leader next month. In January, Trump suggested an openness to allowing Chinese automakers into the US market during a speech before the Detroit Economic Club.
In an interview, Slotkin said Trump’s upcoming meeting with Xi was the impetus for introducing the legislation now.
“We are watching very closely what deals come out of that summit,” she said.
Moreno, who touted Trump’s support for US automakers, said he did not expect this effort to be on the agenda for Trump’s meeting with Xi, which is slated to take place in mid-May.
«
You can understand not wanting to give up the entire vehicle manufacturing business to China, but without some subsidies it’s going to be a losing battle trying to sell abroad as EVs take over, which will mean a shrinking market inside the US. Blocking partnerships just seems perverse. The Chinese are now far ahead of the Americans with this generation of vehicles.
unique link to this extract
Knee surgery for cartilage damage does not benefit patients, study suggests • The Guardian
Hannah Devlin:
»
A common knee surgery for cartilage damage does not benefit patients and may lead to worse outcomes, a 10-year trial suggests.
The study tracked outcomes for patients treated for a meniscus tear, who were given a partial meniscectomy, one of the most common orthopaedic surgeries. Their trajectories were compared with patients who had randomly been assigned to receive “sham surgery”, in which no procedure was carried out.
Patients who had undergone the surgery, which involves trimming frayed meniscus tissue, did not appear to benefit and scored worse on a range of measures of knee function, pain and progression of symptoms.
Prof Teppo Järvinen, an orthopaedic surgeon and researcher at the University of Helsinki who led the study, said: “Our findings suggest that this may be an example of what is known as a medical reversal, where broadly used therapy proves ineffective or even harmful.”
The meniscus is a C-shaped, rubbery pad of cartilage in the knee joint that acts as a shock absorber between the thigh bone and shin bone. There are two in each knee.
A meniscus tear, in which the edges of the tissue become frayed, can occur due to a sudden twist of the knee while playing sport. Damage can also occur gradually over time and MRI scans often reveal meniscal tears in healthy people with no symptoms.
“We now know that these meniscal tears are very frequently found in patients with no symptoms,” said Järvinen. “Over the past 20 years, evidence has accumulated to suggest that most of these findings on MRI are purely incidental.”
«
There have been previous papers suggesting this, also with randomised placebo/treatment trials, but this 10-year follow-up suggests that a lot of surgery is just pointless. (Apart from enriching surgeons.) As my GP said to me when I was discussing various pains, “surgeons like doing surgery”. It’s something to bear in mind, and to research carefully before going under any knife.
unique link to this extract
Meta lost 20 million users last quarter • The Verge
Jess Weatherbed:
»
In an earnings call on Wednesday, Meta reported that figures for “Family daily active people” — the term Meta has coined for all collective users of Facebook, Instagram, WhatsApp, or Messenger — declined by 20 million this quarter compared to the previous three months.
Meta attributes this fall to “internet disruptions in Iran, as well as a restriction on access to WhatsApp in Russia.” It’s up to you whether you take Meta at its word, given that by bundling the user stats together across all its platforms, we can’t tell which ones are most impacted. If I wanted to obscure that a leading social platform was potentially haemorrhaging daily users, that’s certainly what I would do.
This drop comes as Meta says it’s increasing its projected capital expenditures for 2026 to a range of $125-145bn, $10bn more than previous estimates. This increased spending is driven by expectations for higher component pricing and, “to a lesser extent,” additional costs for future data centre capacity. This is a course correction, according to Meta’s chief financial officer Susan Li, who said in the investor call that Meta had “underestimated our compute demand in the past.”
«
Reality Labs – remember wearables and VR? – had an operating loss of $4bn over the period. It’s an absolute money pit.
But the bigger question is: what’s all this AI compute for? It can’t be for social networks. That’s an honestly trivial use, and there are plenty of users who are bringing their own to that. So it must be that Zuckerberg thinks there’s an overarching need for AI, and particularly for Meta to control that AI, which would have to be somewhere completely outside what we think of as Meta’s ambit. When will we find out what that is?
unique link to this extract
Elon Musk’s worst enemy in court is Elon Musk • The Verge
Elizabeth Lopatto:
»
[Elon] Musk spent a lot of [Wednesday] painting this heroic picture of himself, and this morning, near the end of his direct examination, said, “I don’t lose my temper,” and “I don’t yell at people.” He said he might have called someone a “jackass,” but only in the spirit of saying something like, “don’t be a jackass.”
Immediately afterward, [OpenAI’s defence lawyer William] Savitt baited him into being petty, irritating, and generally hard to deal with. At one point, we all watched Musk lose his temper. He spent hours quibbling over simple questions. Again and again, Savitt referred back to Musk’s deposition, where he’d answered questions slightly differently, calling Musk’s accounts into question. Even if the average juror didn’t think he was lying, he was certainly inconsistent.
Savitt’s cross-examination left the distinct impression that Musk quit his quarterly payments to OpenAI because he wasn’t going to get full control of the company, then tried to kneecap it and fold it into Tesla. Initially, Musk wanted four board seats and 51% of the shares. The other cofounders would get three seats, together, to be voted on by shareholders (including other employees). Though Musk said that the eventual plan was to expand to 12 seats, it was obvious that Musk had full control on the initial board of seven.
…He accused Savitt of asking questions that were “designed to trick me,” and we got multiple versions of this:
»
Musk: You mostly do unfair questions.
Savitt: I am trying to put the questions as fairly as I can. I am doing my best.
Musk: That’s not true.
«
Musk was trying to make this as painful as possible for Savitt, but he also made it as painful as possible for everyone else, including the jury. Watching him simply refuse to answer questions during cross he’d easily answered during direct was annoying. Watching him refuse to admit he understood the nature of linear time — and therefore the fact that he was still a director of OpenAI’s board before he resigned in 2018 — was infuriating. It made him look dishonest.
«
Juries are meant to decide on the facts, but they’re human. Annoy them enough, and you’ll lose them in spite of everything. It’s not as if Musk has the greatest case anyway.
unique link to this extract
Phony whistleblowers, fake journalists and cyber spies: ICIJ network targeted after China Targets probe • ICIJ
Scilla Alecci:
»
In May 2025, Kuochun Hung, the chief operating officer of the Taiwanese media outlet Watchout, received an email from someone purporting to be Yi-Shan Chen, a well-regarded local reporter.
“Chen” claimed to be working for the International Consortium of Investigative Journalists (ICIJ) and was requesting an interview with Hung on a range of topics: then pending impeachment proceedings against Taiwan’s president, the island’s divided government, and Watchout’s planned events with members of civil society groups.
Hung, whose media outlet monitors information manipulation, found the email unusual.
“The topic and questions in the invitation email [were] too entry-level for a senior journalist,” Hung told ICIJ. What’s more, Chen’s name was spelled in English, instead of the original Chinese, and the email address didn’t include ICIJ’s official domain.
Hung decided to find out more and started interacting with “Chen” on LINE, a popular messaging app in Taiwan.
The person, who used Chen’s name and photo in their handle, told Hung that an American journalist from ICIJ would meet him in Taipei for the interview and sent a link to what looked like an ICIJ webpage with the reporter’s photo. Hung noticed it wasn’t ICIJ’s real website. The fake Chen also sent Hung another link she said would direct him to a list of questions, adding: “For journalists, information security is truly very important,” a warning most journalists would find superfluous.
Hung didn’t click.
“I played stupid,” he said. “And then she gave up.”
…Now, an investigation by ICIJ, with the help of cybersecurity analysts at the University of Toronto’s Citizen Lab, has found that the incident was part of a sophisticated offensive strategy against ICIJ and its network following the 2025 publication of China Targets. The ICIJ-led exposé, in collaboration with 42 media outlets, revealed Beijing’s tactics to threaten, coerce and intimidate regime critics overseas.
«
Being suspicious always rewards you in situations like this. The routine use of spyware – which that link almost certainly led to – is, in espionage, on a par with the use of drones in hot wars.
unique link to this extract
Copy Fail: 732 bytes to root on every major Linux distribution • Xint
»
A single 732-byte Python script can edit a setuid binary and obtain root on essentially all Linux distributions shipped since 2017.
The kernel never marks the corrupted page dirty for writeback, so the file on disk remains unchanged and ordinary on-disk checksum comparisons miss the modification. However, the page cache is what actually gets read when accessing the file, so the corrupted in-memory version is immediately visible system-wide. A local unprivileged user can turn this into root by corrupting the page cache of a setuid binary. The same primitive also crosses container boundaries because the page cache is shared across the host.
This finding was AI-assisted, but began with an insight from Theori researcher Taeyang Lee, who was studying how the Linux crypto subsystem interacts with page-cache-backed data. He used Xint Code to scale his research across the entire crypto subsystem, and Copy Fail was the most critical finding in the report.
«
Xint is a security company (“AI-Powered Vulnerability Discovery for Source Code and Live Apps”) and this is a colossal discovery – probably on a par with Heartbleed and similar open source vulnerability discoveries. There is a fix, but people need to apply it. Now there will be a race between those looking to patch and those looking to exploit.
unique link to this extract
• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?
Read Social Warming, my latest book, and find answers – and more.
Errata, corrigenda and ai no corrida: none notified
You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.