
A new weather app from the people who built Dark Sky aims to show the uncertainty in forecasts. CC-licensed photo by Ladislav Beneš on Flickr.
You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.
A selection of 9 links for you. Changeable. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.
Does Anthropic think Claude is alive? Define ‘alive’ • The Verge
Hayden Field:
»
Over the past several weeks, as more and more Anthropic executives do interviews on a publicity blitz for Claude, one thing has gotten increasingly clear: Anthropic sure seems to think Claude is alive in some way, shape, or form.
“Alive” is obviously a loaded term; the more frequently used word is “conscious.” If you ask Anthropic if the company thinks Claude is alive, the company will flatly deny it, but stop short of saying the models aren’t conscious.
Kyle Fish, who leads model welfare research at Anthropic, told The Verge, “No, we don’t think Claude is ‘alive’ like humans or any other biological organisms. Asking whether they’re ‘alive’ is not a helpful framing for understanding them, as it typically refers to a fuzzy set of physiological, reproductive, and evolutionary characteristics.” Instead, he believes that “Claude, and other AI models, are a new kind of entity altogether.”
And is that new entity conscious? “Questions about potential internal experience, consciousness, moral status, and welfare are serious ones that we’re investigating as models become more sophisticated and capable, but we remain deeply uncertain about these topics,” he said.
“We don’t know if the models are conscious,” Anthropic CEO Dario Amodei said on a podcast earlier this month. He specified that the company has taken “a generally precautionary approach here” in that Anthropic is “not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious. But we’re open to the idea that it could be.”
«
Remember Blake Lemoine, the Google staffer who became convinced that one of its early LLMs was conscious? That was June 2022, and everyone thought Lemoine needed a looong holiday, including Google. Now, less than four years later, people are talking about this and considering it without people running up to them brandishing straitjackets.
unique link to this extract
The tax nerd who bet his life savings against DOGE • WSJ
Richard Rubin:
»
Alan Cole put his life savings, all $342,195.63, into a prediction-market wager. He insists he’s not really a betting man.
Cole is a 37-year-old tax economist with Ivy League degrees, a mortgage and a young child. Until Elon Musk’s Department of Government Efficiency (DOGE) came roaring into the nation’s capital last year, he was largely a plain-vanilla investor or, as he puts it, a “normal, conventional Wall Street Journal-reading adult.”
But Musk’s boasts and his eager fans brought an unusual opportunity into the burgeoning U.S. prediction markets: People willing to bet that the world’s richest man would transform and shrink the federal government.
Cole took the opposite position, one he didn’t see as a gamble at all. If federal spending in each quarter of 2025 exceeded federal spending in the fourth quarter of 2024, he would win big.
Cole isn’t an old Washington hand or even an expert on federal spending. At the right-leaning Tax Foundation, he lives in the complex world of international corporate taxation. If you want to know about QDMTTs (a real thing) or the DBCFT (don’t ask), he’s your guy. He posts on X about subpar city snow clearing and various internet memes.
Crucially, he’s been around long enough to see politicians’ promises collide with reality and to know basic federal-budget math. The US government has been described as an insurance company with an army. Now, with federal debt nearing 100% of gross domestic product, it’s an insurance company with an army and a giant mortgage. The forces driving spending ever upward—inflation, an aging population, healthcare costs and interest payments—can’t change quickly.
…Cole gradually amassed more than 3% of one particular $12 million federal-spending prediction market. He spread risk across several sub-bets, structured so he landed in the red only if spending declined by more than $50 billion. He wasn’t betting against Kalshi itself, just against people betting on Musk.
“The virtue of the matching market is that you can take the good side of a bad bet—someone else’s bad bet,” Cole said.
…The government published the final 2025 figures Feb. 20. It wasn’t even close. The lowest spending quarter in 2025 was $66bn above the bet’s target level. Cole collected $470,300, for a profit of more than $128,000, or 37%.
«
It’s true, though: Cole isn’t a betting man. This wasn’t a bet. It was like betting on where the sun will come up tomorrow.
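(For the curious, the mechanics of a matched binary market are simple enough to sketch in a few lines. The contract price and quantity below are hypothetical – the article doesn’t disclose Cole’s actual positions – but a price of around 73 cents on the dollar would produce roughly the 37% return he made.)

```python
# Minimal sketch of a binary prediction-market payoff. Hypothetical numbers:
# the WSJ piece doesn't give Cole's actual contract prices or quantities.
# In a matched market, each contract pays $1 if the event resolves YES,
# $0 otherwise; you buy it for the market price p (0 < p < 1), which is
# set by whoever is willing to take the other side of the bet.

def payoff(contracts: int, price: float, resolved_yes: bool) -> float:
    """Profit (or loss) on a binary contract position at settlement."""
    cost = contracts * price
    gross = contracts * 1.0 if resolved_yes else 0.0
    return gross - cost

# Suppose "spending will NOT fall" trades at 73 cents because Musk's fans
# are bidding up the other side. Staking ~$342,195 buys ~468,761 contracts.
stake = 342_195.63
price = 0.73
contracts = int(stake / price)

profit = payoff(contracts, price, resolved_yes=True)
print(f"contracts: {contracts}, profit: ${profit:,.0f}")
```

The point of the “good side of a bad bet” quote: the more over-optimistic the Musk side of the market, the lower the price of the NO-cuts contract, and the bigger the payout when the boring budget math wins.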
unique link to this extract
Embracing uncertainty in the weather in our new app • Acme Weather
Adam Grossman:
»
Our biggest pet peeve with most weather apps is how they deal (or rather, don’t deal) with forecast uncertainty. It is a simple fact that no weather forecast will ever be 100% reliable: the weather is moody, fickle, and chaotic. Forecasts are often wrong.
Understanding this uncertainty is crucial for planning your day. Most weather apps will give you their single best guess, leaving you to wonder how sure they actually are, and what else might happen instead. Will it actually start raining at 9am, or might it end up pushed off until noon? Will there be rain or snow? How sure are you? You can’t plan your day if you don’t know how much you can trust the forecast, or know what other possibilities might arise. Rather than pretending we will always be right, Acme Weather embraces the idea that our forecast will sometimes be wrong. We address this uncertainty in several ways…
Our homegrown forecasts are produced using many different data sources, including numerical weather prediction models, satellite data, ground station observations, and radar data. Most of the time, our forecast will be a reliable source of information (it’s better than the one we had at Dark Sky). But, crucially, we supplement the main forecast with a spread of alternate predictions. These are additional forecast lines that capture a range of alternate possible outcomes…
This accomplishes a couple things.
First, the spread of the lines offers a sort of intuition as to how reliable the forecast is. Take the two forecasts below [images in the blogpost]. In the first, the alternate predictions are tightly focused and the forecast can be considered robust and reliable. In the second, there is a significant spread, which is an indication that something is up and the forecast may be subject to change. It’s a call to action to check other conditions or maps, or come back to the app more frequently.
Over time, you build up an intuitive sense of just how much you can actually trust the forecast. After using this for the past six months, I never want to go back to a single forecast again!
«
$25 annually; not yet available in the UK. The developers originally made Dark Sky, which Apple bought; when they earned out they went straight back to weather apps. Be interesting to see how it copes with British weather, assuming it comes over here. (Thanks Gregory B for the pointer.)
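(The “spread of alternate predictions” idea is essentially ensemble forecasting. A minimal sketch – made-up temperature numbers, not anything from Acme Weather’s actual pipeline – shows how the spread across alternates can stand in for reliability.)

```python
# Illustrative only: how a spread of alternate forecasts signals reliability.
# These are invented ensemble members, not Acme Weather's data or method.
import statistics

# Ten alternate temperature forecasts (°C) for the same hour, two scenarios.
tight_ensemble = [14.8, 15.0, 15.1, 14.9, 15.2, 15.0, 14.9, 15.1, 15.0, 14.8]
wide_ensemble = [11.0, 16.5, 13.2, 18.0, 12.1, 17.4, 14.0, 10.5, 19.2, 15.1]

def summarise(members):
    """Central forecast plus a spread-based reliability hint."""
    mean = statistics.mean(members)
    spread = statistics.stdev(members)
    verdict = "robust" if spread < 1.0 else "uncertain; check back later"
    return mean, spread, verdict

for name, members in [("tight", tight_ensemble), ("wide", wide_ensemble)]:
    mean, spread, verdict = summarise(members)
    print(f"{name}: {mean:.1f}°C ± {spread:.1f} -> {verdict}")
```

The first scenario corresponds to the blogpost’s tightly focused lines (trust the forecast); the second to the big spread (something is up, come back later).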
unique link to this extract
What happened after Elon Musk took the Russian army offline • POLITICO
Ibrahim Naber:
»
“All we’ve got left now,” the Russian soldier said, “are radios, cables and pigeons.”
A decision earlier this month by SpaceX to shut down access to Starlink satellite-internet terminals caused immediate chaos among Russian forces who had become increasingly reliant upon the Elon Musk-owned company’s technology to sustain their occupation of Ukraine, according to radio transmissions intercepted by a Ukrainian reconnaissance unit and shared with the Axel Springer Global Reporters Network, to which POLITICO belongs.
The communications breakdown significantly constrained Russian military capabilities, creating new opportunities for Ukrainian forces. In the days following the shutdown, Ukraine recaptured roughly 77 square miles in the country’s southeast, according to calculations by the news agency Agence France-Presse based on data from the Washington-based Institute for the Study of War.
SpaceX began requiring verification of Starlink terminals on Feb. 4, blocking unverified Russian units from accessing its services. Almost immediately, Ukrainian eavesdroppers heard Russian soldiers complaining about the failure of “Kosmos” and “Sinka” — apparently code names for Starlink satellite internet and the messaging service Telegram.
“Damn it! Looks like they’ve switched off all the Starlinks,” one Russian soldier exclaimed. “The connection is gone, completely gone. The images aren’t being transmitted,” another shouted.
Dozens of the recordings were played for Axel Springer Global Reporter Network reporters in an underground listening post maintained by the Bureviy Brigade in northeastern Ukraine. Neither SpaceX nor the Russian Foreign Ministry responded to requests for comment.
«
On one hand, good; on the other, what sort of world is it where a single man – not a government – can do something that could alter the course of a war?
unique link to this extract
The phantom horizon • Royal Aeronautical Society
Emma Lewis:
»
It is a blunt truth: the human body is not designed for flight. Our vestibular system provides very finely tuned balance and motion cues on the ground but becomes wildly unreliable once an aircraft moves in three axes. Our eyes are also easily tricked by darkness, bright lights, sloping terrain, reflections, rain, haze or even the expanse of the sea at night.
The problem is that misperception rarely presents itself as confusion. It presents as absolute confidence in something that is essentially wrong. A pilot’s instruments, not their body, remain the only trustworthy reference. Yet under stress, fatigue or startle, the body can still overpower the mind.
Pilots generally learn about ‘the leans’ early in their flight training. When an aircraft slowly banks below the vestibular detection rate, the brain does not perceive movement. When the pilot later levels the wings, the inner ear signals a bank in the opposite direction. This triggers a powerful, almost irresistible urge to lean back into the imaginary turn.
Meanwhile, the somatogyral illusion is its more dramatic cousin. After sustained rotation, the semicircular canals that form part of the body’s vestibular system effectively reset. When rotation stops, the pilot feels as if they have begun turning in the opposite direction. It is incredibly convincing and often disorientating. These semicircular canals also only measure angular acceleration, not attitude. In prolonged or gentle turns they simply stop reporting. In the absence of visual cues, the brain inaccurately fills in the gaps.
Spatial disorientation continues to be a leading factor in aviation accidents. The Atlas Air Boeing 767 crash of February 2019 (flight 3591 from Miami to Houston) demonstrated how disorientation and startle can combine catastrophically after an unexpected pitch-up and go-around.
The investigation cited multiple contributing factors, including improper control inputs and breakdowns in crew monitoring, but the underlying thread throughout was the flight crew’s sensory misperception.
Modern cockpit technologies are increasingly designed to outsmart the sensory traps that have challenged pilots for decades. Synthetic vision systems now provide a reliable, clear horizon even in total darkness or heavy weather, while head-up displays (HUDs) keep critical attitude and flight-path information directly in the pilot’s forward view.
«
“The leans” is what killed one of the Kennedy clan: he was descending over water at night, in conditions he wasn’t qualified for.
unique link to this extract
What’s the point of school when AI can do your homework? • 404 Media
Matthew Gault:
»
There’s a new agentic AI called Einstein that will, according to its developers, live the life of a student for them. Einstein’s website claims that the AI will attend lectures for you, write your papers, and even log into EdTech platforms like Canvas to take tests and participate in discussions.
Educators told me that Einstein is just one of many AI tools that can do homework for students, but that it should be seen as a warning to schools, which students increasingly treat as a place to gain a diploma and status rather than an education valued for its own sake.
If an AI can go to school for you what’s the point of going to school? For Advait Paliwal, Brown dropout and co-creator of Einstein, there isn’t one. “I think about horses,” he said. “They used to pull carriages, but when cars came around, I’d argue horses became a lot more free,” he said. “They can do whatever they want now. It would be weird if horses revolted and said ‘no, I want to pull carriages, this is my purpose in life.’”
But humans aren’t horses. “This is much bigger than Einstein,” Matthew Kirschenbaum told 404 Media. “Einstein is symptomatic. I doubt we’ll be talking about Einstein, as such, in a year. But it’s symptomatic of what’s about to descend on higher ed and secondary ed as well.”
Kirschenbaum teaches English at the University of Maryland and has written at length about artificial intelligence. He’s also a member of the Modern Language Association (MLA), where he serves on its Task Force on AI Research and Teaching. Einstein isn’t the first agentic AI to do the work of a student for them; it’s just one that got attention online recently. Kirschenbaum and his fellow committee members flagged their concerns about these AIs in October 2025.
«
That quote about horses is insane. Has he really so little idea of history that he doesn’t know what happened to all those horses that became surplus? Does he think they were just set free to wander around cities? And that’s before we get to the question of “what is school for?” It is not just about opening books.
unique link to this extract
Writing crystallised thinking at Amazon. Is AI muddying it? • Big Technology
Kristi Coulter:
»
Before AI, human writing at Amazon was sacrosanct. The company began each big meeting with a six-page narrative describing the product or feature, typically written by the project lead and read in silence before anyone spoke. The writing’s purpose was to crystallize thinking and anticipate every scenario. PowerPoint, the enabler of logical leaps, be damned.
“The document should be written with such clarity that it’s like angels singing from on high,” Jeff Bezos once said. “I like a crisp document and a messy meeting.”
But now, Amazon is in its AI era, and its leadership is encouraging employees to let AI do the writing for them. The company’s internal marketing for Cedric, its ChatGPT-style tool, promises “six-page narratives in seconds.”
The implications for Amazon’s culture struck me as so profound that I reached out to over fifteen current employees, a mix of Bezos-era veterans and newcomers, to explore what the mandate says about their work lives, the company’s priorities, and what it means to be “Amazonian” today.
The worry within the company, I learned, is that Amazon is losing sight of writing’s centrality in its deliberative, thoughtful culture as it pursues powerful, new tools.
“Writing is thinking,” said one longtime company veteran. “That was the whole point of Amazon’s writing culture. I can’t tell you how many times I changed my mind when writing a narrative. And even when I didn’t, my arguments were more precise for having written them down. Now we have chatbots writing six-pagers to be summarized by other chatbots.”
«
Seems like we should watch this to see whether there are any visible effects on Amazon.
unique link to this extract
Canada tells OpenAI to boost safety measures or be forced to by government • Reuters
David Ljunggren:
»
Canadian ministers told OpenAI that if it did not quickly boost its safety protocols in the wake of a recent school shooting, Ottawa would effect the change through legislation, a top official said on Wednesday.
Ottawa summoned OpenAI’s safety team for talks on Tuesday after the ChatGPT maker said it had not contacted police about an account that it banned belonging to an alleged mass shooter.
Jesse Van Rootselaar, 18, is suspected of killing eight people on February 10 and then committing suicide in Tumbler Ridge, a small town in British Columbia.
OpenAI said it banned Van Rootselaar’s account on ChatGPT last year for policy violations, which it said did not meet internal criteria for reporting to law enforcement.
“The message that we delivered, in no uncertain terms, was that we have an expectation that there are going to be changes implemented, and if they’re not forthcoming very quickly, the government is going to be making changes,” Justice Minister Sean Fraser told reporters. OpenAI was not immediately available for comment.
In 2024, Canada’s Liberal government introduced draft legislation to crack down on online hate, but the effort stalled amid criticism it was too broad in scope. Ministers say they will try again this year with more focused measures.
“Anything that anyone could have done to prevent that tragedy or future tragedies must be done. We will fully explore it to the full lengths of the law,” Prime Minister Mark Carney told reporters.
…OpenAI says it banned Van Rootselaar’s account in 2025 after it was flagged by systems that identify “misuses of our models in furtherance of violent activities.”
The company considered contacting police, but determined the account did not meet the threshold of posing an imminent and credible risk of serious physical harm to others.
«
What would the legislation look like? I’m not sure the ministers have thought this through.
unique link to this extract
The race to dominate AI is brutally competitive. That’s good for everyone • The New York Times
Jason Furman was chair of the White House Council of Economic Advisers from 2013 to 2017:
»
In an era of anxiety about unchecked corporate power, artificial intelligence can seem like the most terrifying example of all. Already valued in the trillions of dollars, the industry has unparalleled influence over our collective futures, and the government’s doing nothing to rein it in. If that’s not akin to monopoly power, what is?
Yet the real story of the most consequential technology of our time is strikingly different from what it seems. Instead of consolidating, as so many other industries have done, the leading edge of AI has become fiercely competitive. The result has been a staggering pace of innovation, significant reductions in costs and an expanding array of choices for consumers and businesses alike.
Five years ago, worries about sparse digital competition were well founded. A handful of giants — Amazon, Apple, Google, Meta and Microsoft — dominated the tech economy. Most major product categories had only two or three serious competitors, such as search (Google and Microsoft’s Bing) and mobile operating systems (Apple’s iOS and Google’s Android). When new markets like cloud computing emerged, incumbents quickly took control.
These leads were large. Google handled roughly 90% of search queries. And they were stable. Facebook users could not take their social graphs to a rival platform, and I am not sure how I would pry my digital life out of Apple’s ecosystem if something better came along. Certain platforms became near-mandatory gateways. Many businesses attempt e-commerce at their peril unless they go through Amazon.
I thought if anything would lock in those advantages, it would be AI. I could not have been more wrong.
Consider the widely followed Arena leaderboard, where chatbots compete in blind, head-to-head tests. The top-ranked lab is Anthropic, a company founded just five years ago. OpenAI, which is third, is only about a decade old. A year ago, a dark-horse entrant from China pushed into contention with Google with vastly fewer resources. Some observers concluded that large companies could never move fast enough to keep up. Google, which published research in 2017 that almost everyone since has built on, responded by behaving like a startup again.
«
I guess Furman doesn’t know much about the history of PCs, or of search engines. This period reminds me exactly of the early PC age, or the early search engine age (when Google had just arrived): tons of competitors all leapfrogging each other, thinning out to two or three (or fewer) rivals as the market resolves. Each time, plenty of people were left having spent money on suboptimal products. Subscriptions for AI services mean the same will happen here.
unique link to this extract
• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?
Read Social Warming, my latest book, and find answers – and more.
Errata, corrigenda and ai no corrida: none notified