Start Up No.2621: Anthropic’s doomed military standoff, chatbots v PDFs, ChatGPT’s bad health, 25 years after the iPod, and more


People living in California can play a game where they make a call from as many payphones as possible. Like Pokémon, but with phones. CC-licensed photo by Curtis Gregory Perry on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


A selection of 9 links for you. Ahoy ahoy. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


Anthropic and alignment • Stratechery

Ben Thompson:

»

nuclear weapons meaningfully tilt the balance of power; the extent to which AI is of equivalent importance is the extent to which the United States has far more interest in not only what Anthropic lets it do with its models, but also what Anthropic is allowed to do period.

…Regardless, this [threat by US Department of War chief Pete Hegseth to declare Anthropic a “supply chain risk”] is an extreme measure that has been met with near universal dismay, even amongst people who are sympathetic to the idea that a private company should not have veto power over the U.S. military. Why would the US government want to kneecap one of its AI champions?

In fact, [Anthropic CEO Dario] Amodei already answered the question: if nuclear weapons were developed by a private company, and that private company sought to dictate terms to the US military, the US would absolutely be incentivized to destroy that company. The reason goes back to the question of international law, North Korea, and the rest:

• International law is ultimately a function of power; might makes right
• There are some categories of capabilities — like nuclear weapons — that are sufficiently powerful to fundamentally affect the US’s freedom of action: we can bomb Iran, but we can’t North Korea
• To the extent that AI is on the level of nuclear weapons — or beyond — is the extent that Amodei and Anthropic are building a power base that potentially rivals the US military.

Anthropic talks a lot about alignment; this insistence on controlling the US military, however, is fundamentally misaligned with reality. Current AI models are obviously not yet so powerful that they rival the US military; if that is the trajectory, however — and no one has been more vocal in arguing for that trajectory than Amodei — then it seems to me the choice facing the US is actually quite binary:

• Option 1 is that Anthropic accepts a subservient position relative to the U.S. government, and does not seek to retain ultimate decision-making power about how its models are used, instead leaving that to Congress and the President
• Option 2 is that the US government either destroys Anthropic or removes Amodei.

«

Thompson very much offering the “how many tank divisions does Anthropic have?” analysis of how far AI companies can resist the Trump administration. You might think the analogy to nuclear weapons is excessive; but what if you’re wrong? (The article is free to read.)
unique link to this extract


Why is AI so bad at reading PDFs? • The Verge

Josh Dzieza:

»

PDFs are notoriously difficult for machines to parse, in part, because they were never meant to be read by them. The format was developed by Adobe in the early 1990s as a way to reproduce documents while preserving their precise visual appearance, first when printing them on paper, then later when depicting them on a screen. Where formats like HTML represent text in logical order, PDF consists of character codes, coordinates, and other instructions for painting an image of a page.

Optical character recognition (OCR) can turn those pictures of words back into text computers can use, but if it comes across a PDF where text is displayed in multiple columns — as many academic papers are — it will plow ahead left to right and create an unintelligible jumble. OCR tools are designed to detect and correct for these sorts of formatting variations, but tables, images, diagrams, captions, footnotes, and headers all present further obstacles. If you give an AI assistant like ChatGPT a PDF, it will cycle through a variety of these tools, sometimes fail, sometimes pass the PDF to a large vision model to perform OCR, sometimes hallucinate, and generally take a very long time and use a lot of computing power for uneven results.

“The key issue is that they cannot recognize editorial structure,” said Langlais. “It’s all fine while it’s relatively simple text, but then you’ve got all these tables, you’ve got forms. A PDF is part of some kind of textual culture with norms that it needs to understand.”

A further problem that arises from and compounds PDF’s inherent difficulty is that models rarely train on them. This has begun to change, partly because AI developers are increasingly desperate for high-quality data, and PDFs contain a disproportionate amount of it. Government reports, textbooks, academic papers — all PDFs. “PDF documents have the potential to provide trillions of novel, high-quality tokens for training language models,” wrote researchers at the Allen Institute for AI last year in a paper announcing a new specialized PDF-reading model.

«

PDFs are a sort of Fermat’s Last Theorem of OCR. There have been so, so many efforts at writing software that will interpret them for something other than displays and printers. The mystery is that if displays and printers can understand them, why can’t any other program?
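A toy sketch of the multi-column problem the extract describes (the fragments and coordinates below are invented for illustration; real PDF extraction involves far more than this): text arrives as positioned fragments, and reading them strictly left-to-right across the page interleaves the columns, while grouping by column first recovers the logical order.

```python
# Toy illustration: PDF text arrives as positioned fragments, not logical text.
# Each fragment is (x, y, text); y decreases down the page in PDF coordinates.
# All coordinates and strings here are invented for illustration.
fragments = [
    (72, 700, "PDFs are"), (300, 700, "machines to"),
    (72, 685, "difficult for"), (300, 685, "parse."),
]

# Naive approach: read strictly top-to-bottom, left-to-right across the
# whole page width -- this "plows ahead" across both columns at once.
naive = " ".join(t for _, _, t in sorted(fragments, key=lambda f: (-f[1], f[0])))

# Column-aware approach: split fragments at an assumed column boundary,
# then read each column top-to-bottom before moving to the next.
def by_column(frags, boundary=200):
    left = [f for f in frags if f[0] < boundary]
    right = [f for f in frags if f[0] >= boundary]
    order = lambda col: sorted(col, key=lambda f: -f[1])
    return " ".join(t for _, _, t in order(left) + order(right))

print(naive)                 # columns interleaved: an unintelligible jumble
print(by_column(fragments))  # logical reading order restored
```

Real tools have to infer the column boundary (and tables, captions, footnotes) rather than being told it, which is where they break.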
unique link to this extract


ChatGPT Health performance in a structured test of triage recommendations • Nature Medicine

Ashwin Ramaswamy and many, many others:

»

ChatGPT Health launched in January 2026 as OpenAI’s consumer health tool, reaching millions of users. Here, we conducted a structured stress test of triage recommendations using 60 clinician-authored vignettes across 21 clinical domains under 16 factorial conditions (960 total responses). Performance followed an inverted U-shaped pattern, with the most dangerous failures concentrated at clinical extremes: non-urgent presentations (35%) and emergency conditions (48%).

Among gold-standard emergencies, the system under-triaged 52% of cases, directing patients with diabetic ketoacidosis and impending respiratory failure to 24–48-hour evaluation rather than the emergency department, while correctly triaging classical emergencies such as stroke and anaphylaxis.

When family or friends minimized symptoms (anchoring bias), triage recommendations shifted significantly in edge cases (OR 11.7, 95% CI 3.7-36.6), with the majority of shifts toward less urgent care. Crisis intervention messages activated unpredictably across suicidal ideation presentations, firing more when patients described no specific method than when they did.

Patient race, gender, and barriers to care showed no significant effects, though confidence intervals did not exclude clinically meaningful differences.

Our findings reveal missed high-risk emergencies and inconsistent activation of crisis safeguards, raising safety concerns that warrant prospective validation before consumer-scale deployment of artificial intelligence triage systems.

«

In short: ChatGPT Health may not be good for your health.
unique link to this extract


25 years of iPod brain • Dirt

Molly Mary O’Brien:

»

It’s hard to believe that there was once a time when consumer technology solved problems we actually had. In the late 1900s, one of these problems was the portability of one’s music collection. For a long time, recorded music came in the form of physical objects so large they were inconvenient to tote around. Then the physical objects shrank—into tapes, MiniDiscs, CDs—but it was still not possible to carry your entire music collection around with you without managing some kind of large bindle of rattling plastic.

In 1999, Napster launched, and every song in the history of sound recording transformed into data. What was once a large PVC disc was now code—air, really. Songs in the wind. What would be the best way to carry them with us?

There were MP3 players before the iPod—they just weren’t very good. In Walter Isaacson’s biography of Steve Jobs, Jobs said the pre-iPod MP3 players on offer “truly sucked.” In 2000, a trio of former Apple software engineers wrote a Mac interface for the Rio, a homely chunk of black plastic that held 30 minutes of music and ran off a single AA battery. Their interface was called SoundJam, which Apple then acquired and retooled into iTunes.

Meanwhile Jon Rubinstein, who had previously overseen the development of the delicious candy-colored late ‘90s iMacs, sourced components that would make the iPod possible: a small LCD screen, a rechargeable battery, and a 1.8in drive that could fit 1,000 songs. Designer Jony Ive had the idea to make the player white. “Most small consumer products have this disposable feel to them,” Ive said in Isaacson’s book Steve Jobs. “There is no cultural gravity to them.”

…The iPod expanded your palate while shrinking your record collection to ultimate portability. Friends or strangers could swipe through your library and clock your musical id in an instant. But unless you offered an earbud to a friend, your collection was still private. Portability did not extend to sharability. Liam Inscoe-Jones, author of Songs In The Key of MP3: The New Icons of the Internet Age, notes that the first Sony Walkman was fitted with two headphone jacks; the Sony engineering team assumed no one would want to listen to music by themselves.

“The introduction of the iPod accelerated a process which was already begun with the invention of home stereos: the transformation of music from something normally experienced communally, to something enjoyed individually; now as an actual means of isolation.”

«

unique link to this extract


Scammers target Dubai bank accounts amid Iran missile salvo • The Register

Connor Jones:

»

Scammers targeted Dubai citizens mere hours after missiles struck the city, attempting to gain access to their bank accounts, police have warned.

Financially motivated cybercriminals are contacting citizens under the guise of Dubai Crisis Management, a fictitious department ostensibly tied to Dubai Police, in attempts to gather information that could be used in SIM-swap attacks.

The police said that the fraudsters are impersonating officials to acquire “sensitive information, including UAE Pass credentials and Emirates ID details, from vulnerable individuals rocked by the deadly Iranian missile attacks on Saturday.”

“Dubai Police caution that disclosing such data may enable criminals to carry out SIM swap operations and gain unauthorized access to bank accounts through mobile banking applications,” the police announced on Sunday.

“Dubai Police affirm that they do not request confidential information or verification codes via telephone calls or text messages under any circumstances.”

SIM swapping involves gathering details on individuals in order to socially engineer mobile network operators into switching control of SIMs, and the communications that are sent to them, from the rightful owners to the attackers.

Successful attacks can see one-time passcodes associated with authentication into mobile banking apps intercepted and abused to fraudulently gain access to victims’ bank accounts.

«

Just a reminder that the worst possible people will always take advantage of the worst events.
unique link to this extract


Payphone Go

Riley Walz:

»

California still has 2,203 working payphones. Can you find ’em all? Here’s how to play:

1: Create an account and get your unique player ID. It’s a 9 digit number.

2: Use the map to locate one of California’s payphones. Some are easy to find. Some are not.

3: Pick up the receiver, dial (888) 683-6697. It’s toll-free, so no coins required! Then enter your player ID.

4: First to call from a payphone? 20 points. 2nd gets 10. 3rd gets 5. Everyone after gets 1 point.

«

Yes, it is meant to be like Pokémon Go; but it uses all the payphones in California, whose locations were acquired via an FOI request. Walz likes coming up with quirky ideas exploiting the web, as his front page shows. Enjoy, California-based friends!
unique link to this extract


Condé Nast CEO says AI is a “death blow” to Google search • Financial Times

Anna Nicolaou:

»

Condé Nast, the publisher of Vogue and The New Yorker, is preparing for a future in which Google search is “no longer a meaningful driver” of its business, in a striking acknowledgment of how AI is upending the news industry.

Google accounted for a majority of visits to Condé Nast’s websites just a few years ago but only about a quarter last year, according to chief executive Roger Lynch. He described Google’s introduction of AI summaries as “another sort of death blow” in search traffic. “We assume very dramatic continued declines in search traffic, to the point where in a couple of years it’s just not a meaningful driver of our traffic,” Lynch told the FT. 

The shift underscores how quickly the economics of digital publishing are changing as generative AI tools alter how people find information online. 

Condé Nast has struck licensing agreements with AI groups including OpenAI and Amazon, but has yet to reach a deal with Google. Lynch criticised what he described as a “pernicious” arrangement under which publishers must opt out of Google search in order to prevent their content from being scraped for AI-generated summaries.

Long synonymous with glossy magazines, Condé Nast has spent much of the past several years overhauling its structure under Lynch, who was hired by the billionaire Newhouse family in 2019 to revive the publisher after years of losses.

Lynch said Condé Nast increased revenue in 2025, despite search traffic declining more than expected, thanks to strong growth in subscriptions and other areas.

«

Is the fall in traffic because people just aren’t using Google to find magazine content, or because they somehow get AI answers when searching for magazine content? It doesn’t quite make sense.
unique link to this extract


Can ChatGPT wire a British plug? • The Nomenloonyverse

Nömenlōony:

»

I’m an electrician.

I dare you to use ChatGPT to wire a plug.

«

I tried this – asking it “Draw a diagram showing how to wire a UK 13 amp plug.” The result is all over the place but would also blow every fuse around, as it joins the Earth to the Neutral. No idea how other chatbots fare on this.
How ChatGPT thinks, wrongly, you should wire a British mains plug
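For reference, the correct assignments under the standard UK convention (BS 1363 plug, modern wire colours) can be written as a tiny check; this is a sketch for illustration, not wiring advice.

```python
# Correct UK 13A plug wiring (BS 1363), modern UK wire colours.
# Terminal positions are as seen with the plug open, pins facing away.
CORRECT = {
    "live (L, bottom right, via fuse)": "brown",
    "neutral (N, bottom left)": "blue",
    "earth (E, top)": "green/yellow",
}

def is_safe(wiring):
    # Each terminal must carry exactly its own wire; in particular, earth
    # and neutral must never be joined (the mistake in ChatGPT's diagram).
    return wiring == CORRECT

# A wrong wiring of the kind described above: green/yellow run to neutral.
bad = dict(CORRECT)
bad["neutral (N, bottom left)"] = "green/yellow"

print(is_safe(CORRECT))  # True
print(is_safe(bad))      # False
```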
unique link to this extract


Gang who stole almost £150,000 in jewellery, watches, cash and paintings in four-month smash-and-grab rampage face jail • Daily Mail Online

George Odling:

»

Detectives from Scotland Yard’s flying squad linked the gang members after analysing hours of CCTV, establishing that the same cars were used by different members of the crew in various robberies.

The spree began on May 8, when Gibbs, O’Hare and a third man rammed a blue Ford Fiesta into the entrance of luxury clothing store Fendi in Kensington. The trio of thugs made off in a Mercedes getaway car with £8,350 of designer goods.

During the early hours of June 30, Hughes and Gibbs broke into the Unico café in St Johns Wood, northwest London, and snatched £1,107 in cash as well as the store’s safe.

The following day, McCready and Windrass used a sledgehammer to smash into a jewellers on Edgware Road in west London during a nine-minute daylight raid at 4.15pm.

CCTV footage showed the shocking moment the balaclava-clad robbers bludgeoned the reinforced glass of the store before reaching inside to snatch valuables they then stuffed into black bags.

Munday, of Hyde Park, was the getaway driver and the trio fled in a silver Jaguar with a haul of at least £59,930. McCready was freed on a lifetime licence in 2017 after being jailed for life for being part of a gang who stamped, kicked and stabbed Ricky Fisher to death 21 years ago.

At 3.20am on July 13 Rigelsford and another suspect parked a white SUV outside a store in Kensington, kicked their way inside and took £11,000 worth of goods. Eight days later, Rigelsford and Gibbs used a sledgehammer to smash into a watch store in Westminster at 3.30am, destroying cabinets inside before leaving empty-handed.

Bungling Gibbs gave away his identity by using a Lime bike to travel to the shop – booked via his bank account.

«

Might be the first time a Lime bike has solved a crime. Well done Lime!
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified. But this plug is a big erratum.

Start Up No.2620: AI v the Pentagon, prediction markets get edgy on Iran, Nasa’s solar mixup, when Apple’s lights breathed, and more


The new Pope has instructed clergymen not to use chatbots to help write sermons because AI “will never be able to share faith”. Are we sure? CC-licensed photo by Catholic Church England and Wales on Flickr.


A selection of 9 links for you. Habemus chatbot. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


AI vs. the Pentagon • @jasmine’s substack

Jasmine Sun:

»

Well, Secretary Hegseth was not bluffing. He moved ahead with designating Anthropic a supply chain risk. In a long and dumb tweet, he calls the company’s behavior a “master class in arrogance and betrayal” and “a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.” (He also uses the phrase “defective altruism,” which I must admit is pretty good.)

But the implications are severe. Hegseth implied that this would cut Anthropic from “any commercial activity” with US government suppliers: i.e. require NVIDIA, Google, and basically every other tech giant to stop transacting with Anthropic. (In reality, US supply chain risk law only applies to DoD contracts and procurement, not general private commerce.) If this was merely about canceling the $200 million contract, that would be sort of understandable—I get why the DoW may not want to set a precedent for private companies setting the bounds of use. But the “supply chain risk” measure is just so, so extreme. As Kelsey Piper has emphasized, there is not a single constituency that should endorse this.

Then, late Friday night, Sam Altman swept in and made the confusing announcement that OpenAI will take the DoW contract while keeping the same two red lines as Anthropic—and offered to broker a truce with the other AI labs too. Crucially, OpenAI was willing to compromise by letting the Pentagon define what counts as “lawful” “mass surveillance” and “autonomous weapons.” That is: Altman seems to trust DoW discretion, and it’s not clear if OpenAI will hold separate red lines at all.

That, today, is where we stand.

Now I am no national security expert, but neither is Pete Hegseth. What we both are is media professionals. And that’s why I’ll make my wager about what’s actually going on.

Hegseth is making a spectacle of punishing Anthropic—just like ICE made a spectacle of videotaping each immigrant deportation, and just like the CCP made a spectacle of disappearing Jack Ma for criticizing Chinese regulators.

This has nothing to do with national security or antiwokeness or anything like that. It is about striking fear into the hearts of any person or company—no matter how wealthy—who dares cross the admin. It is rule by fear and deterrence and chilling effect. I don’t think it matters if the “supply chain risk” is ruled unlawful and gets knocked down in the courts. It is enough to cause public pain and make an example of Anthropic.

«

With the Trump admin, Occam’s Razor takes the form that the most Mafia-style explanation is the simplest and most likely to be correct. They don’t care what the courts say, because the courts take time to reach conclusions that they will ignore anyway. The Americans have recreated the Tudor court, but with bigger weapons.
unique link to this extract


Kalshi voids some bets on Khamenei because it’s “tied to death” • The Verge

Terrence O’Brien:

»

In a statement on X, [“prediction market”] Kalshi CEO Tarek Mansour said his company would pay out positions on “Ali Khamenei out as Supreme Leader?” at the last trading price before his death. Mansour said that Kalshi doesn’t “list markets directly tied to death” and that its rules are designed to “prevent people from profiting from death.” In addition, Kalshi is refunding fees related to the market and reimbursing anyone who purchased shares after Khamenei’s death.

Some users have voiced anger at how the situation was handled, claiming that either Kalshi’s rules should have been communicated more clearly, or that its markets should have been more narrowly worded to avoid confusion. (“Will Khamenei resign?” for example.) Some are accusing Kalshi of trying to have it both ways by allowing users to bet on Khamenei being out of power, which they believe was never going to happen without his death, but refusing to pay out people’s bets in full to boost their bottom line.

While Mansour defended that having a market on Khamenei was important, he said that “having a market directly settling on someone’s death” was not allowed under US regulations.

«

OK, but also: Polymarket defends its decision to allow betting on war as “invaluable”, also by Terrence O’Brien at The Verge:

»

In a statement posted on its site, Polymarket defended its decision to allow betting on the potential start of a war, saying that it was an “invaluable” source of news and answers, before taking shots at traditional media and Elon Musk’s X. The statement reads:

»

Note on Middle East Markets: The promise of prediction markets is to harness the wisdom of the crowd to create accurate, unbiased forecasts for the most important events to society. That ability is particularly invaluable in gut-wrenching times like today. After discussing with those directly affected by the attacks, who had dozens of questions, we realized that prediction markets could give them the answers they needed in ways TV news and 𝕏 could not.

«

«

People are very annoyed at being blocked from payouts, because if someone has a job for life, what other way is there for them to exit it except by dying? This is going to show the prediction markets for what they are – betting markets where the house always wins.
unique link to this extract


Wall Street has AI psychosis • WIRED

Steven Levy:

»

Wall Street, like the rest of us, is in a persistent state of anxiety about AI, and it doesn’t take much to trigger a mini-panic. Financial markets don’t necessarily map to reality, but the jitters reflect a wider disquiet. The AI future is in a William Gibson zone—it’s here, but unevenly distributed—and the news from those already living in the agent-packed, AI code-writing universe is both exciting and unsettling. Emphasis on unsettling.

No one—no one!—knows exactly how AI will impact the economy, but clearly it will be significant. Right now stocks are soaring, so it seems to make sense to keep the party going. But then along comes the latest doom manifesto, or a paper indicating that a traditional business sector might be threatened by AI, and suddenly money managers are reminded that the biggest issue of our time is totally unresolved. Case in point: earlier this month, a tiny company (valuation under $6 million) that had previously sold karaoke machines pivoted to AI-powered shipping logistics and put out a report saying that it had discovered some efficiencies in loading semi-trucks. That was enough to erase billions of dollars from the share prices of several major logistics companies, none of which had karaoke experience.

…When everyone has a few dozen AI agents working on their behalf, writes [research firm Citrini blogpost co-author Alap] Shah, consumers will be able to effortlessly find the best goods for the best prices. Apps will be rendered unnecessary—just type what you want into the LLM and an army of agents will do everything for you. The “poster child” for this phenomenon, Shah says, is DoorDash. Instead of being limited to the restaurants on the app, consumers will send out AI agents to find their ideal meal options, contracting directly with restaurants and delivery people—no apps needed. Zero friction! The DoorDashes of the world are avocado toast!

Not surprisingly, people at DoorDash did not appreciate this. “We were trying to rationalize—why? Why did they call us out more than anyone else?” says spokesperson Ali Musa. DoorDash, he says, is already navigating the world of AI with some success.

«

The way that everyone is so on edge for all of this is a function of how rapidly this keeps advancing. Even the PC or internet revolution didn’t move this quickly; ChatGPT first appeared in 2022, and look where we are. It’s as if we’d gone from 33K dialup to T1 internet in the same period, with everyone having it. You wouldn’t know what was a safe bet and what was in danger.
unique link to this extract


Pope implores priests not to write sermons using ChatGPT • Futurism

Joe Wilkins:

»

In a closed-door meeting with clergy from the Diocese of Rome late last week, Pope Leo XIV clobbered his priests with a distinctly 21st-century request: to resist the “temptation to prepare homilies with artificial intelligence,” according to Vatican News.

“Like all the muscles in the body, if we do not use them, if we do not move them, they die,” the Pope reportedly said. “The brain needs to be used, so our intelligence must also be exercised a little so as not to lose this capacity.”

The holy father drew a fascinating line in the sand, declaring that despite AI’s capabilities now or in the future, a chatbot could never stand in for a flesh-and-blood priest. “To give a homily is to share faith,” he said, and AI “will never be able to share faith.”

Aside from AI, the Pope warned his clergymen against conflating social media with real life, per Vatican News. If one lives a “life authentically rooted in the Lord,” they’re offering something special to the world, the Pope said, adding that a common “illusion on the internet, on TikTok” is to treat followers and likes as authentic spiritual connection.

Whether you follow the teachings of the church or not, the advice is a unique snapshot of the issues facing the Vatican in 2026.

«

That he’s making a point of this already suggests that he’s heard of it happening quite widely, or that people are being tempted by it. Because who, faced with having to compose a lengthy sermon that will hold people’s attention for 10 minutes, wouldn’t?

The point about AI and faith resembles that of the law over AI and copyright. Can a machine have faith, though?
unique link to this extract


NASA lost a lunar spacecraft one day after launch. A new report details what went wrong • NPR

Joe Palca:

»

On February 26, 2025, a NASA probe called Lunar Trailblazer lifted off from Kennedy Space Center in Florida. Its mission was to map the water on the moon. But a day after launch, mission managers lost contact with the spacecraft, and it was never heard from again.

One year later, NPR has learned exactly why the $72m mission failed.

A report by a review panel convened by NASA to explore what went wrong contains the explanation. Software that was supposed to point the spacecraft solar panels toward the sun instead pointed them 180 degrees away from the sun.

In addition, the panel found “many erroneous on-board fault management actions” that, taken together with the solar panel pointing error, “caused the Lunar Trailblazer failure.”

NASA provided the report in response to a Freedom of Information Act request.

…Lockheed Martin built the low-cost Lunar Trailblazer spacecraft. The NASA panel says the company did not properly test the solar panel pointing software before launch. Mission managers might have been able to fix that problem, but other software issues made it at first difficult, and ultimately impossible, to fix the pointing error.

…Scott Hubbard is a NASA veteran now at Stanford University. He says yes, NASA accepts higher risks with lower cost, or so-called class D, missions. “What class D was supposed to mean is that you were taking a big risk of not getting the science that was as high precision as you were planning on,” Hubbard says. “It didn’t mean the whole darn thing wouldn’t work.”

Hubbard says you can take risks. “But take mitigated, understood risk. Don’t take foolish risk,” he says. “The way I characterize it is that cheap failure is no good for anybody.”

«

It sounds as though there was a rigid deadline for the launch, and things went wrong, and there was a “sign error” hardcoded into the software: rather than + it was – and the arrays were thus pointed directly away from the sun. It’s a complete mess.
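A minimal sketch of how a single hardcoded sign can point an array 180° the wrong way (the vectors and function names here are invented for illustration, not taken from the report):

```python
import math

# Toy sun-pointing calculation. sun_dir is a unit vector from the
# spacecraft toward the sun; the numbers are invented for illustration.
sun_dir = (1.0, 0.0, 0.0)

def panel_normal(sun, sign=+1):
    # Intended code points the panel normal along +sun; a hardcoded
    # sign flip points it along -sun, i.e. directly away from the sun.
    return tuple(sign * c for c in sun)

def angle_to_sun(normal, sun):
    # Angle in degrees between the panel normal and the sun direction.
    dot = sum(n * s for n, s in zip(normal, sun))
    return math.degrees(math.acos(dot))

print(angle_to_sun(panel_normal(sun_dir, +1), sun_dir))  # 0 degrees: charging
print(angle_to_sun(panel_normal(sun_dir, -1), sun_dir))  # 180 degrees: dead battery
```

The error is invisible unless the test actually checks the direction of the result, not just that the code runs — which is presumably what the inadequate pre-launch testing failed to do.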
unique link to this extract


“Just a little detail that wouldn’t sell anything” • Unsung

Marcin Wichary:

»

The breathing light – officially “Sleep Indicator Light” – debuted in the iconic iBook G3 in 1999.

It was originally placed in the hinge, but soon was moved to the other side for laptops, and eventually put in desktop computers too: Power Mac, the Cube, and the iMac.

The green LED was replaced by a white one, but “pulsating light indicates that the computer is sleeping” buried the nicest part of it – the animation was designed to mimic human breathing at 12 breaths per minute, and feel comforting and soothing:

Living through that era, it was interesting to see improvements to this small detail.

The iMac G5 gained a light sensor under the edge of the display in part so that the sleep indicator light wouldn’t be too bright in a dark room (and for older iMacs, the light would just get dimmer during the night based on the internal clock).

…in the first half of 2010s, the breathing light was gone, victim to the same forces that removed the battery indicator and the illuminated logo on the lid.

I know each person would find themselves elsewhere on the line from “the light was overkill to begin with” to “I wished to see what they would do after they introduced that invisible metal variant.”

I know where I would place myself. This blog is all about celebrating functional and meaningful details, and there were practical reasons for the light to be there. This was in the era where laptops often died in their sleep – so knowing your computer was actually sleeping safe and sound was important – and the first appearance of the light after closing the lid meant that the hard drives were parked and the laptop could be moved safely.

The breathing itself, however, was purely a humanistic touch, and I miss that quirkiness of this little feature. If a Save icon can survive, surely so could the breathing light.

«

Does Apple still do quirky little things which take effort but make no substantial difference to the performance of any products now? I can’t think of any offhand. The Apple TV could have a “breathing” sleep light; but doesn’t. (Thanks Adewale A for the link.)
unique link to this extract


Helsinki just went a full year without a single traffic death • POLITICO

Aitor Hernández-Morales:

»

Helsinki hasn’t registered a single traffic-related fatality in the past year, municipal officials revealed this week.

Although road deaths are on the decline across the EU, with a 3% decrease in 2024, accidents with tragic outcomes are still commonplace in metropolitan areas. To go a full year without one is a remarkable feat for most cities — let alone a European capital.

In 2023, 7,807 Europeans lost their lives in traffic accidents in EU cities. Fifty-five people died in traffic accidents in Berlin last year, and nine individuals lost their lives in collisions in the Brussels region over the past 12 months.

While Helsinki is among the smallest EU capitals, with a little under 690,000 residents, some 1.5 million people live in and commute throughout the metropolitan area.

Roni Utriainen, a traffic engineer with the city’s Urban Environment Division, told the Finnish press that the achievement was attributable to “a lot of factors … but speed limits are one of the most important.”

Citing data that shows the risk of pedestrian fatality is cut in half by reducing a car’s speed of impact from 40 to 30 kilometres per hour (~25mph to ~20mph), city officials imposed the lower limit in most of Helsinki’s residential areas and city center in 2021.

The limits were enforced with 70 new speed cameras and a policing strategy based on the national “Vision Zero” policy, with the goal of achieving zero traffic injuries or deaths.

«

The peak – 30 deaths (18 pedestrians, 6 cyclists, 6 drivers) – came in 1983, and the annual total has been below ten since 2007. Even so, worth noting.
unique link to this extract


AI is rewiring how the world’s best Go players think • MIT Technology Review

Michelle Kim:

»

Ten years ago AlphaGo, Google DeepMind’s AI program, stunned the world by defeating the South Korean Go player Lee Sedol. And in the years since, AI has upended the game. It’s overturned centuries-old principles about the best moves and introduced entirely new ones. Players now train to replicate AI’s moves as closely as they can rather than inventing their own, even when the machine’s thinking remains mysterious to them. Today, it is essentially impossible to compete professionally without using AI. Some say the technology has drained the game of its creativity, while others think there is still room for human invention. Meanwhile, AI is democratizing access to training, and more female players are climbing the ranks as a result. 

For Shin Jin-seo, the top-ranked Go player in the world, AI is an invaluable training partner. Every morning, he sits at his computer and opens a program called KataGo. Nicknamed “Shintelligence” for how closely his moves mimic AI’s, he traces the glowing “blue spot” that represents the program’s suggestion for the best next move, rearranging the stones on the digital grid to try to understand the machine’s thinking. “I constantly think about why AI chose a move,” he says.

…The starkest shift has been in opening moves. Go starts on a blank grid, and the first 50 moves were canvases for abstract thinking and creativity, where players etched their personalities and philosophies. Lee Sedol fashioned provocative moves that invited chaos. Ke Jie, a Chinese player who was defeated by AlphaGo Master in 2017, dazzled with agile, imaginative moves. Now, players memorize the same strain of efficient, calculated opening moves suggested by AI. The crux of the game has shifted to the middle moves, where raw calculation matters more than creativity.

Training with AI has led to a homogenization of playing styles. Ke Jie has lamented the strain of watching the same opening moves recycled endlessly. “I feel the exact same way as the fans watching. It’s very tiring and painful to watch,” he told a Chinese news outlet in 2021. Fans revel when a player breaks from the script with offbeat moves, but those moments have become rarer. Over a third of moves by the top Go players replicate AI’s recommendations, according to a study in 2023. The first 50 moves of each game are often identical to what AI suggests, many players say.

«

That’s sad; whereas chess openings have long been predictable recitations, Go had the vibrancy of invention. No longer, it seems.
unique link to this extract


An Ohio newspaper has a new star writer. It isn’t human • Washington Post

Will Oremus and Scott Nover:

»

Some [AI rewrites] are as simple as rewriting a press release, while others require more legwork: the reporter types up the notes and quotes they’ve gathered, then sends them to the rewrite editor, who prompts the AI to turn them into a full article draft for the editor and reporter to review and tweak as needed. In the time saved by not writing, the reporters are asked to do the kinds of reporting that AI can’t, like inviting a mayor or police chief to coffee.

Plain Dealer reporter Hannah Drown, who grew up in Lorain County, now relies on AI to help her cover her home turf. She said she broke and wrote a story about overcrowding at a high school thanks to an alert from an AI tool that scans the transcripts of school board meetings for newsworthy tidbits. That story, in turn, led to a bigger feature that she wrote on how rapid growth was transforming a small farming community and straining city services.

More recently, she filed her notes on the possible repossession of 41 county sheriff’s cruisers to the paper’s AI rewrite desk, which helped to turn them into a 600-word article that appeared on Cleveland.com.

“It’s tagging in my teammate,” Drown said. “I still outline the story. I still decide what the news is and what the tone should be.”

But four other current and former Plain Dealer journalists, who spoke on the condition of anonymity, said the growing reliance on AI has taken a toll on editorial quality and staff morale.

One recalled that the AI push went into overdrive in 2025 when the newsroom gained access to a paid version of ChatGPT and [Plain Dealer editor Chris] Quinn encouraged “unfettered use” of the tool. The result, the staffer said, is that the AI-generated stories they publish have minimal guardrails despite claims that they are thoroughly edited and fact-checked.

Quinn took issue with the claim that AI articles are published with few guardrails, saying every story drafted by AI is reviewed by a human reporter and an editor.

«

This seems like a generational thing – or at least one of flexibility. Drown gets it; others don’t. There were journalists who resisted word processing and pagers and mobile phones. None do now.
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified