
The new Pope has instructed clergymen not to use chatbots to help write sermons because AI “will never be able to share faith”. Are we sure? CC-licensed photo by Catholic Church England and Wales on Flickr.
You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.
A selection of 9 links for you. Habemus chatbot. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.
AI vs. the Pentagon • @jasmine’s substack
Jasmine Sun:
»
Well, Secretary Hegseth was not bluffing. He moved ahead with designating Anthropic a supply chain risk. In a long and dumb tweet, he calls the company’s behavior a “master class in arrogance and betrayal” and “a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.” (He also uses the phrase “defective altruism,” which I must admit is pretty good.)
But the implications are severe. Hegseth implied that this would cut Anthropic from “any commercial activity” with US government suppliers: i.e. require NVIDIA, Google, and basically every other tech giant to stop transacting with Anthropic. (In reality, US supply chain risk law only applies to DoD contracts and procurement, not general private commerce.) If this was merely about canceling the $200 million contract, that would be sort of understandable—I get why the DoW may not want to set a precedent for private companies setting the bounds of use. But the “supply chain risk” measure is just so, so extreme. As Kelsey Piper has emphasized, there is not a single constituency that should endorse this.
Then, late Friday night, Sam Altman swept in and made the confusing announcement that OpenAI will take the DoW contract while keeping the same two red lines as Anthropic—and offered to broker a truce with the other AI labs too. Crucially, OpenAI was willing to compromise by letting the Pentagon define what counts as “lawful” “mass surveillance” and “autonomous weapons.” That is: Altman seems to trust DoW discretion, and it’s not clear if OpenAI will hold separate red lines at all.
That, today, is where we stand.
Now I am no national security expert, but neither is Pete Hegseth. What we both are is media professionals. And that’s why I’ll make my wager about what’s actually going on.
Hegseth is making a spectacle of punishing Anthropic—just like ICE made a spectacle of videotaping each immigrant deportation, and just like the CCP made a spectacle of disappearing Jack Ma for criticizing Chinese regulators.
This has nothing to do with national security or antiwokeness or anything like that. It is about striking fear into the hearts of any person or company—no matter how wealthy—who dares cross the admin. It is rule by fear and deterrence and chilling effect. I don’t think it matters if the “supply chain risk” is ruled unlawful and gets knocked down in the courts. It is enough to cause public pain and make an example of Anthropic.
«
With the Trump admin, Occam’s Razor says the most Mafia-style explanation is the simplest and most likely to be correct. They don’t care what the courts say, because the courts take time to reach conclusions that the admin will ignore anyway. The Americans have recreated the Tudor court, but with bigger weapons.
unique link to this extract
Kalshi voids some bets on Khamenei because it’s “tied to death” • The Verge
Terrence O’Brien:
»
In a statement on X, [“prediction market”] Kalshi CEO Tarek Mansour said his company would pay out positions on “Ali Khamenei out as Supreme Leader?” at the last trading price before his death. Mansour said that Kalshi doesn’t “list markets directly tied to death” and that its rules are designed to “prevent people from profiting from death.” In addition, Kalshi is refunding fees related to the market and reimbursing anyone who purchased shares after Khamenei’s death.
Some users have voiced anger at how the situation was handled, claiming that either Kalshi’s rules should have been communicated more clearly, or that its markets should have been more narrowly worded to avoid confusion. (“Will Khamenei resign?” for example.) Some are accusing Kalshi of trying to have it both ways by allowing users to bet on Khamenei being out of power, which they believe was never going to happen without his death, but refusing to pay out people’s bets in full to boost their bottom line.
While Mansour maintained that having a market on Khamenei was important, he said that “having a market directly settling on someone’s death” was not allowed under US regulations.
«
OK, but also: Polymarket defends its decision to allow betting on war as “invaluable”, also by Terrence O’Brien at The Verge:
»
In a statement posted on its site, Polymarket defended its decision to allow betting on the potential start of a war, saying that it was an “invaluable” source of news and answers, before taking shots at traditional media and Elon Musk’s X. The statement reads:
»
Note on Middle East Markets: The promise of prediction markets is to harness the wisdom of the crowd to create accurate, unbiased forecasts for the most important events to society. That ability is particularly invaluable in gut-wrenching times like today. After discussing with those directly affected by the attacks, who had dozens of questions, we realized that prediction markets could give them the answers they needed in ways TV news and 𝕏 could not.
«
«
People are very annoyed at being blocked from payouts, because if someone has a job for life, what other way is there for them to exit it except by dying? This is going to show the prediction markets for what they are – betting markets where the house always wins.
unique link to this extract
Wall Street has AI psychosis • WIRED
Steven Levy:
»
Wall Street, like the rest of us, is in a persistent state of anxiety about AI, and it doesn’t take much to trigger a mini-panic. Financial markets don’t necessarily map to reality, but the jitters reflect a wider disquiet. The AI future is in a William Gibson zone—it’s here, but unevenly distributed—and the news from those already living in the agent-packed, AI code-writing universe is both exciting and unsettling. Emphasis on unsettling.
No one—no one!—knows exactly how AI will impact the economy, but clearly it will be significant. Right now stocks are soaring, so it seems to make sense to keep the party going. But then along comes the latest doom manifesto, or a paper indicating that a traditional business sector might be threatened by AI, and suddenly money managers are reminded that the biggest issue of our time is totally unresolved. Case in point: earlier this month, a tiny company (valuation under $6 million) that had previously sold karaoke machines pivoted to AI-powered shipping logistics and put out a report saying that it had discovered some efficiencies in loading semi-trucks. That was enough to erase billions of dollars from the share prices of several major logistics companies, none of which had karaoke experience.
…When everyone has a few dozen AI agents working on their behalf, writes [research firm Citrini blogpost co-author Alap] Shah, consumers will be able to effortlessly find the best goods for the best prices. Apps will be rendered unnecessary—just type what you want into the LLM and an army of agents will do everything for you. The “poster child” for this phenomenon, Shah says, is DoorDash. Instead of being limited to the restaurants on the app, consumers will send out AI agents to find their ideal meal options, contracting directly with restaurants and delivery people—no apps needed. Zero friction! The DoorDashes of the world are avocado toast!
Not surprisingly, people at DoorDash did not appreciate this. “We were trying to rationalize—why? Why did they call us out more than anyone else?” says spokesperson Ali Musa. DoorDash, he says, is already navigating the world of AI with some success.
«
That everyone is so on edge about all this is a function of how rapidly it keeps advancing. Even the PC or internet revolution didn’t move this quickly; ChatGPT first appeared in 2022, and look where we are. It’s as if we’d gone from 33K dialup to T1 internet in the same period, with everyone having it. You wouldn’t know what was a safe bet and what was in danger.
unique link to this extract
Pope implores priests not to write sermons using ChatGPT • Futurism
Joe Wilkins:
»
In a closed-door meeting with clergy from the Diocese of Rome late last week, Pope Leo XIV clobbered his priests with a distinctly 21st-century request: to resist the “temptation to prepare homilies with artificial intelligence,” according to Vatican News.
“Like all the muscles in the body, if we do not use them, if we do not move them, they die,” the Pope reportedly said. “The brain needs to be used, so our intelligence must also be exercised a little so as not to lose this capacity.”
The holy father drew a fascinating line in the sand, declaring that despite AI’s capabilities now or in the future, a chatbot could never stand in for a flesh-and-blood priest. “To give a homily is to share faith,” he said, and AI “will never be able to share faith.”
Aside from AI, the Pope warned his clergymen against conflating social media with real life, per Vatican News. If one lives a “life authentically rooted in the Lord,” they’re offering something special to the world, the Pope said, adding that a common “illusion on the internet, on TikTok” is to treat followers and likes as authentic spiritual connection.
Whether you follow the teachings of the church or not, the advice is a unique snapshot of the issues facing the Vatican in 2026.
«
That he’s making a point of this already suggests that he’s heard of it happening quite widely, or that people are being tempted by it. Because who, faced with having to compose a lengthy sermon that will hold people’s attention for 10 minutes, wouldn’t?
The point about AI and faith resembles the legal argument over AI and copyright. Can a machine have faith, though?
unique link to this extract
NASA lost a lunar spacecraft one day after launch. A new report details what went wrong • NPR
Joe Palca:
»
On February 26, 2025, a NASA probe called Lunar Trailblazer lifted off from Kennedy Space Center in Florida. Its mission was to map the water on the moon. But a day after launch, mission managers lost contact with the spacecraft, and it was never heard from again.
One year later, NPR has learned exactly why the $72m mission failed.
A report by a review panel convened by NASA to explore what went wrong contains the explanation. Software that was supposed to point the spacecraft’s solar panels toward the sun instead pointed them 180 degrees away from the sun.
In addition, the panel found “many erroneous on-board fault management actions” that, taken together with the solar panel pointing error, “caused the Lunar Trailblazer failure.”
NASA provided the report in response to a Freedom of Information Act request.
…Lockheed Martin built the low-cost Lunar Trailblazer spacecraft. The NASA panel says the company did not properly test the solar panel pointing software before launch. Mission managers might have been able to fix that problem, but other software issues made it at first difficult, and ultimately impossible, to fix the pointing error.
…Scott Hubbard is a NASA veteran now at Stanford University. He says yes, NASA accepts higher risks with lower cost, or so-called class D, missions. “What class D was supposed to mean is that you were taking a big risk of not getting the science that was as high precision as you were planning on,” Hubbard says. “It didn’t mean the whole darn thing wouldn’t work.”
Hubbard says you can take risks. “But take mitigated, understood risk. Don’t take foolish risk,” he says. “The way I characterize it is that cheap failure is no good for anybody.”
«
It sounds as though there was a rigid launch deadline, things went wrong, and a “sign error” was hardcoded into the software: rather than + it was –, so the arrays pointed directly away from the sun. It’s a complete mess.
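To illustrate how small that kind of mistake is (the actual Lunar Trailblazer flight software isn’t public, so this is purely a hypothetical sketch): a single negation in a pointing routine flips the commanded direction by exactly 180 degrees.

```python
# Hypothetical illustration only -- not the real flight code.
# Shows how one hardcoded minus sign aims a solar array directly
# away from the sun instead of toward it.

def point_arrays_at_sun(sun_vector):
    """Correct: command the arrays along the sun direction."""
    return [c for c in sun_vector]

def point_arrays_buggy(sun_vector):
    """The sign error: negating the vector points the arrays
    180 degrees away, so the batteries never charge."""
    return [-c for c in sun_vector]

sun = [1.0, 0.0, 0.0]  # sun lies along the +x axis
good = point_arrays_at_sun(sun)   # faces the sun
bad = point_arrays_buggy(sun)     # faces deep space
```

With no ground contact to patch the sign, the spacecraft’s batteries drained and the mission was lost — which is why the report dwells on the untested software rather than the hardware.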
unique link to this extract
“Just a little detail that wouldn’t sell anything” • Unsung
Marcin Wichary:
»
The breathing light – officially “Sleep Indicator Light” – debuted in the iconic iBook G3 in 1999.
It was originally placed in the hinge, but soon was moved to the other side for laptops, and eventually put in desktop computers too: Power Mac, the Cube, and the iMac.
The green LED was replaced by a white one, but “pulsating light indicates that the computer is sleeping” buried the nicest part of it – the animation was designed to mimic human breathing at 12 breaths per minute, and feel comforting and soothing:
Living through that era, it was interesting to see improvements to this small detail.
The iMac G5 gained a light sensor under the edge of the display in part so that the sleep indicator light wouldn’t be too bright in a dark room (and for older iMacs, the light would just get dimmer during the night based on the internal clock).
…in the first half of 2010s, the breathing light was gone, victim to the same forces that removed the battery indicator and the illuminated logo on the lid.
I know each person would find themselves elsewhere on the line from “the light was overkill to begin with” to “I wished to see what they would do after they introduced that invisible metal variant.”
I know where I would place myself. This blog is all about celebrating functional and meaningful details, and there were practical reasons for the light to be there. This was in the era where laptops often died in their sleep – so knowing your computer was actually sleeping safe and sound was important – and the first appearance of the light after closing the lid meant that the hard drives were parked and the laptop could be moved safely.
The breathing itself, however, was purely a humanistic touch, and I miss that quirkiness of this little feature. If a Save icon can survive, surely so could the breathing light.
«
Does Apple still do quirky little things which take effort but make no substantial difference to the performance of any products now? I can’t think of any offhand. The Apple TV could have a “breathing” sleep light, but doesn’t. (Thanks Adewale A for the link.)
unique link to this extract
Helsinki just went a full year without a single traffic death • POLITICO
Aitor Hernández-Morales:
»
Helsinki hasn’t registered a single traffic-related fatality in the past year, municipal officials revealed this week.
Although road deaths are on the decline across the EU, with a 3% decrease in 2024, accidents with tragic outcomes are still commonplace in metropolitan areas. To go a full year without one is a remarkable feat for most cities — let alone a European capital.
In 2023, 7,807 Europeans lost their lives in traffic accidents in EU cities. Fifty-five people died in traffic accidents in Berlin last year, and nine individuals lost their lives in collisions in the Brussels region over the past 12 months.
While Helsinki is among the smallest EU capitals, with a little under 690,000 residents, some 1.5 million people live in and commute throughout the metropolitan area.
Roni Utriainen, a traffic engineer with the city’s Urban Environment Division, told the Finnish press that the achievement was attributable to “a lot of factors … but speed limits are one of the most important.”
Citing data that shows the risk of pedestrian fatality is cut in half by reducing a car’s speed of impact from 40 to 30 kilometres per hour (~25mph to ~20mph), city officials imposed the lower limit in most of Helsinki’s residential areas and city center in 2021.
The limits were enforced with 70 new speed cameras and a policing strategy based on the national “Vision Zero” policy, with the goal of achieving zero traffic injuries or deaths.
«
The peak – 30 deaths (18 pedestrians, 6 cyclists, 6 drivers) – came in 1983, and it’s all been below ten since 2007. Even so, worth noting.
unique link to this extract
AI is rewiring how the world’s best Go players think • MIT Technology Review
Michelle Kim:
»
Ten years ago AlphaGo, Google DeepMind’s AI program, stunned the world by defeating the South Korean Go player Lee Sedol. And in the years since, AI has upended the game. It’s overturned centuries-old principles about the best moves and introduced entirely new ones. Players now train to replicate AI’s moves as closely as they can rather than inventing their own, even when the machine’s thinking remains mysterious to them. Today, it is essentially impossible to compete professionally without using AI. Some say the technology has drained the game of its creativity, while others think there is still room for human invention. Meanwhile, AI is democratizing access to training, and more female players are climbing the ranks as a result.
For Shin Jin-seo, the top-ranked Go player in the world, AI is an invaluable training partner. Every morning, he sits at his computer and opens a program called KataGo. Nicknamed “Shintelligence” for how closely his moves mimic AI’s, he traces the glowing “blue spot” that represents the program’s suggestion for the best next move, rearranging the stones on the digital grid to try to understand the machine’s thinking. “I constantly think about why AI chose a move,” he says.
…The starkest shift has been in opening moves. Go starts on a blank grid, and the first 50 moves were canvases for abstract thinking and creativity, where players etched their personalities and philosophies. Lee Sedol fashioned provocative moves that invited chaos. Ke Jie, a Chinese player who was defeated by AlphaGo Master in 2017, dazzled with agile, imaginative moves. Now, players memorize the same strain of efficient, calculated opening moves suggested by AI. The crux of the game has shifted to the middle moves, where raw calculation matters more than creativity.
Training with AI has led to a homogenization of playing styles. Ke Jie has lamented the strain of watching the same opening moves recycled endlessly. “I feel the exact same way as the fans watching. It’s very tiring and painful to watch,” he told a Chinese news outlet in 2021. Fans revel when a player breaks from the script with offbeat moves, but those moments have become rarer. Over a third of moves by the top Go players replicate AI’s recommendations, according to a study in 2023. The first 50 moves of each game are often identical to what AI suggests, many players say.
«
That’s sad; whereas chess openings have long been predictable recitations, Go had the vibrancy of invention. No longer, it seems.
unique link to this extract
An Ohio newspaper has a new star writer. It isn’t human • Washington Post
Will Oremus and Scott Nover:
»
Some [AI rewrites] are as simple as rewriting a press release, while others require more legwork: the reporter types up the notes and quotes they’ve gathered, then sends them to the rewrite editor, who prompts the AI to turn them into a full article draft for the editor and reporter to review and tweak as needed. In the time saved by not writing, the reporters are asked to do the kinds of reporting that AI can’t, like inviting a mayor or police chief to coffee.
Plain Dealer reporter Hannah Drown, who grew up in Lorain County, now relies on AI to help her cover her home turf. She said she broke and wrote a story about overcrowding at a high school thanks to an alert from an AI tool that scans the transcripts of school board meetings for newsworthy tidbits. That story, in turn, led to a bigger feature that she wrote on how rapid growth was transforming a small farming community and straining city services.
More recently, she filed her notes on the possible repossession of 41 county sheriff’s cruisers to the paper’s AI rewrite desk, which helped to turn them into a 600-word article that appeared on Cleveland.com.
“It’s tagging in my teammate,” Drown said. “I still outline the story. I still decide what the news is and what the tone should be.”
But four other current and former Plain Dealer journalists, who spoke on the condition of anonymity, said the growing reliance on AI has taken a toll on editorial quality and staff morale.
One recalled that the AI push went into overdrive in 2025 when the newsroom gained access to a paid version of ChatGPT and [Plain Dealer editor Chris] Quinn encouraged “unfettered use” of the tool. The result, the staffer said, is that the AI-generated stories they publish have minimal guardrails despite claims that they are thoroughly edited and fact-checked.
Quinn took issue with the claim that AI articles are published with few guardrails, saying every story drafted by AI is reviewed by a human reporter and an editor.
«
This seems like a generational, or at least flexibility, thing. Drown gets it; others don’t. There were journalists who resisted word processing and pagers and mobile phones. None do now.
unique link to this extract
• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?
Read Social Warming, my latest book, and find answers – and more.
Errata, corrigenda and ai no corrida: none notified








