Start Up No.2504: US CDC director fired for vaccine defence, Google has AI video for all, bye TypePad, Apple’s 2nm grab, and more


Don’t diss Fidel – Cuba has its own online army ready to defend la revolución. CC-licensed photo by Pedro Szekely on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


It’s Friday, so there’s another post due at the Social Warming Substack at about 0845 UK time: it’s about an intersection of tennis and science.


A selection of 10 links for you. High Fidelity. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


White House fires CDC director who says RFK Jr. is “weaponizing public health” • The Washington Post

Lena Sun, Dan Diamond and Lauren Weber:

»

The White House on Wednesday fired Susan Monarez as director of the Centers for Disease Control and Prevention after she refused to resign amid pressure to change vaccine policy, which sparked the resignation of other senior CDC officials and a showdown over whether she could be removed.

Hours after the Department of Health and Human Services announced early Wednesday evening that Monarez was no longer the director, her lawyers responded with a fiery statement saying she had not resigned or been fired. They accused HHS Secretary Robert F. Kennedy Jr. of “weaponizing public health for political gain” and “putting millions of American lives at risk” by purging health officials from government.

“When CDC Director Susan Monarez refused to rubber-stamp unscientific, reckless directives and fire dedicated health experts, she chose protecting the public over serving a political agenda,” the lawyers, Mark S. Zaid and Abbe Lowell, wrote in a statement. “For that reason, she has been targeted.”

Soon after their statement, the White House formally fired Monarez.

…Wednesday’s shake-ups — which include the resignation of the agency’s chief medical officer, the director of its infectious-disease center and other key officials — add to the tumult at the nation’s premier public health agency.

«

We are now entering the Dark Ages in the US. The effects of this will filter down: the CDC may have needed shaking up, but this is the wrong way to do it and the wrong paradigm of thinking to replace what was there. The US now needs to hope that its individual states have more sense than its centre; but as that happens for more and more things, the “united” states become disunited. None of this ends well.

Monarez was told to rescind approval for Covid vaccines: she wouldn’t. Kennedy is going to kill hundreds, perhaps thousands, of children by reducing trust in childhood vaccines.
unique link to this extract


Google will now let everyone use its AI-powered video editor Vids • The Verge

Emma Roth:

»

Google is rolling out a basic version of Vids to everyone. Until now, the AI-powered video editor has only been available to Google Workspace or AI plan subscribers, but now users can broadly access the app with templates, stock media, and a “subset of AI capabilities,” product director Vishnu Sivaji tells The Verge.

Launched last year, Vids is the newest addition to Google’s suite of Workspace tools. It’s geared toward helping you quickly pull together video presentations with a host of AI video editing and creation tools, including a feature to help you create a storyboard with suggested scenes, stock images, and background music.

Though Sivaji notes that the pared-down version of Vids will come with “pretty much all of the amazing capabilities” within the app, the free version doesn’t have any of the new AI-powered features rolling out today, including the ability to have an AI-generated avatar to deliver a message on your behalf.

With this update, you can select one of 12 pre-made avatars, each of which has a different appearance and voice, and then add your script. For now, you can’t use Vids to create an AI-generated avatar of yourself, which is a feature Zoom currently offers (and is apparently something tech CEOs are super into).

«

Are AI-generated videos going to make us happy? Are they really? We can offer this to everyone, but will we truly benefit from it? I have my doubts.
unique link to this extract


Blogging service TypePad is shutting down and taking all blog content with it • Ars Technica

Andrew Cunningham:

»

In the olden days, publishing a site on the internet required that you figure out hosting and have at least some experience with HTML, CSS, and the other languages that make the Internet work. But the emergence of blogging and “Web 2.0” sites in the late ’90s and early 2000s gave rise to a constellation of services that would offer to host all of your thoughts without requiring you to build the website part of your website.

Many of those services are still around in some form—someone who really wanted to could still launch a new blog on LiveJournal, Xanga, Blogger, or WordPress.com. But one of the field’s former giants is shutting down—and taking all of those old posts with it. TypePad announced that the service would be shutting down on September 30 and that everything hosted on it would also be going away on that date. That gives current and former users just over a month to export anything they want to save.

TypePad had previously removed the ability to create new accounts at some point in 2020. It gave no specific rationale for the shutdown beyond calling it a “difficult decision.” As recently as March of this year, TypePad representatives were telling users there were “no plans” to shut down the service.

TypePad was a blogging service based on the Movable Type content management system but hosted on TypePad’s site and with other customizations. Both Movable Type and TypePad were originally created by Six Apart, with TypePad being the solution for less technical users who just wanted to create a site and Movable Type being the version you could download and host anywhere and customize to your liking—not unlike the relationship between WordPress.com (the site that hosts other sites) and WordPress.org (the site that hosts the open source software).

Movable Type and TypePad diverged in the early 2010s; Six Apart was bought by a company called VideoEgg in 2010, resulting in a merged company called Say Media.

«

Pour yet another one out for yet another creation of the early internet, now vanishing into the ground as though a sinkhole had opened underneath it.
unique link to this extract


Chemists create new high-energy compound to fuel space flight • Phys.org

Erin Frick, University of Albany:

»

University at Albany chemists have created a new high-energy compound that could revolutionize rocket fuel and make space flights more efficient. Upon ignition, the compound releases more energy relative to its weight and volume compared to current fuels. In a rocket, this would mean less fuel required to power the same flight duration or payload and more room for mission-critical supplies. Their study is published in the Journal of the American Chemical Society.

“In rocket ships, space is at a premium,” said Assistant Professor of Chemistry Michael Yeung, whose lab led the work. “Every inch must be packed efficiently, and everything onboard needs to be as light as possible. Creating more efficient fuel using our new compound would mean less space is needed for fuel storage, freeing up room for equipment, including instruments used for research. On the return voyage, this could mean more space is available to bring samples home.”

The newly synthesized compound, manganese diboride (MnB2), is over 20% more energetic by weight and about 150% more energetic by volume compared to the aluminum currently used in solid rocket boosters. Despite being highly energetic, it is also very safe and will only combust when it meets an ignition agent like kerosene.

«

As it happens, I’m reading SF writer Andy Weir’s latest (Project Hail Mary), which includes a rocket fuel consisting of living things that do mass-energy conversion – which, as you can imagine, is pretty effective (also impossible, but: fiction). However, this might do in the meantime.
unique link to this extract


On the cyber soldiers defending the Cuban Revolution from internet slander • Literary Hub

Abraham Jiménez Enoa:

»

Messi scores for Barcelona. Moments later, the table starts to dance. Rodríguez’s phone is vibrating, shaking the bottle and glasses, though the chicharrones don’t move. He grabs the phone and looks at the screen, and his face changes. He goes onto the balcony and, after a brief conversation, heads directly to his room and emerges in a shirt and trousers.

“Going somewhere?” his cousin asks.

“Work,” Rodríguez says. “Somebody wrote an article online that shit talks Fidel.”

Rodríguez is not his real name. Although he never wears a uniform, he works in a policing capacity in a department at the Ministry of the Interior that he prefers not to identify, though he will say it is “dedicated to monitoring Cuban cyberspace.” He explains further that, “we don’t attack or hack anyone’s site or account. Primarily, we keep an eye on what people say about Cuba online, gauge the consensus, and, if it’s overly negative, we strike back.”

Every day, Rodríguez and his fellow cyber soldiers search and scan the outlets that are most outspoken or “subversive” in their coverage of Cuba, checking a list that includes blogs; foreign media; the underground and opposition press; and people of interest on “insidious” social media platforms. Rodríguez has three Facebook accounts: a real one he uses to keep in touch with friends who’ve emigrated, and two fake ones “for defending Cuba from anyone who denigrates the Revolution.”

«

(Thanks Gregory B for the link.)
unique link to this extract


4Chan and Kiwi Farms file joint lawsuit against British Ofcom • The Verge

Tina Nguyen:

»

In a filing submitted to the U.S. District Court in the District of Columbia, Preston Byrne and Ron Coleman, the team representing the two sites, said that their clients are being penalized by Ofcom, the agency that regulates online content in the United Kingdom, for “engaging in conduct which is perfectly lawful in the territories where their websites are based”.

…Both 4Chan and Kiwi Farms could face steep fines of up to £18m if they fail to comply with Ofcom’s requirement that they regularly submit “risk assessment” reports about their userbase, due to their sites being accessible in the U.K. Earlier in August, Ofcom issued a provisional decision stating that there were “reasonable grounds” to believe 4chan was in violation of the requirement. In the filing, their lawyers argue that Ofcom is overreaching its legal authority by trying to apply British law to companies based in the U.S., where their behavior is protected by the U.S. Constitution and the American legal code, and seek to have a U.S. federal judge declare that Ofcom has no jurisdiction in this matter.

“American citizens do not surrender our constitutional rights just because Ofcom sends us an e-mail,” Byrne said in a statement to reporters.

«

I’m puzzled how Ofcom’s enforcement policy applies to companies with no presence in the UK. It can (S 9.2) ask a court to block a site – which seems the most likely outcome here, because why should an American company listen to a British regulator, and why should a British regulator listen to an American court?
unique link to this extract


OpenAI will add parental controls for ChatGPT following teen’s death • The Verge

Hayden Field:

»

After a 16-year-old took his own life following months of confiding in ChatGPT, OpenAI will be introducing parental controls and is considering additional safeguards, the company said in a Tuesday blog post.

OpenAI said it’s exploring features like setting an emergency contact who can be reached with “one-click messages or calls” within ChatGPT, as well as an opt-in feature allowing the chatbot itself to reach out to those contacts “in severe cases.”

When The New York Times published its story about the death of Adam Raine, OpenAI’s initial statement was simple — starting out with “our thoughts are with his family” — and didn’t seem to go into actionable details. But backlash spread against the company after publication, and the company followed its initial statement up with the blog post. The same day, the Raine family filed a lawsuit against both OpenAI and its CEO, Sam Altman, containing a flood of additional details about Raine’s relationship with ChatGPT.

The lawsuit, filed Tuesday in California state court in San Francisco, alleges that ChatGPT provided the teen with instructions for how to die by suicide and drew him away from real-life support systems.

…OpenAI said in the Tuesday blog post that it’s learned that its existing safeguards “can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade. For example, ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.”

«

Translated: it might encourage users to commit suicide.
unique link to this extract


Anthropic’s auto-clicking AI Chrome extension raises browser-hijacking concerns • Ars Technica

Benj Edwards:

»

As AI assistants become capable of controlling web browsers, a new security challenge has emerged: users must now trust that every website they visit won’t try to hijack their AI agent with hidden malicious instructions. Experts voiced concerns about this emerging threat this week after testing from a leading AI chatbot vendor revealed that AI browser agents can be successfully tricked into harmful actions nearly a quarter of the time.

On Tuesday, Anthropic announced the launch of Claude for Chrome, a web browser-based AI agent that can take actions on behalf of users. Due to security concerns, the extension is only rolling out as a research preview to 1,000 subscribers on Anthropic’s Max plan, which costs between $100 and $200 per month, with a waitlist available for other users.

The Claude for Chrome extension allows users to chat with the Claude AI model in a sidebar window that maintains the context of everything happening in their browser. Users can grant Claude permission to perform tasks like managing calendars, scheduling meetings, drafting email responses, handling expense reports, and testing website features.

The browser extension builds on Anthropic’s Computer Use capability, which the company released in October 2024. Computer Use is an experimental feature that allows Claude to take screenshots and control a user’s mouse cursor to perform tasks, but the new Chrome extension provides more direct browser integration.

Zooming out, it appears Anthropic’s browser extension reflects a new phase of AI lab competition. In July, Perplexity launched its own browser, Comet, which features an AI agent that attempts to offload tasks for users. OpenAI recently released ChatGPT Agent, a bot that uses its own sandboxed browser to take actions on the web. Google has also launched Gemini integrations with Chrome in recent months.

But this rush to integrate AI into browsers has exposed a fundamental security flaw that could put users at serious risk.

«

Lads, I’ve got a brilliant idea – let’s not use AI browser agents.
unique link to this extract


Apple to secure nearly half of TSMC’s 2nm production, report says • 9to5Mac

Marcus Mendes:

»

According to the latest rumors, Apple is slated to use TSMC’s 2nm process for its upcoming A20 chip, expected to power the iPhone 18 series. Now, a new report details the chipmaker’s roadmap for bringing the chip into mass production, and the industry-wide rush to secure an early supply.

As reported by DigiTimes, citing supply chain sources, TSMC is set to ramp up its 2nm process in the next quarter, and has been charging up to $30,000 per wafer, a record high. Still, demand has never been higher, with Apple alone securing “nearly half” of production.

In order to meet this demand, DigiTimes says that TSMC has raised the planned monthly production capacity at its Baoshan and Kaohsiung fabs. Furthermore, with 4nm and 3nm production already fully booked through the end of 2026, the company’s profitability is expected to exceed prior expectations, even in the face of trade challenges such as tariffs, exchange-rate swings, and rising costs.

The report says that while Apple is slated to snatch nearly half of TSMC’s 2nm chips, Qualcomm comes in second, followed by AMD, MediaTek, Broadcom, and even Intel, in no particular order. It also says that by 2027, “in addition to NVIDIA, customers entering mass production will include Amazon’s Annapurna, Google, Marvell, Bitmain, and more than 10 other major players.”

«

So, not Intel?
unique link to this extract


AI ‘slop’ websites are publishing climate science denial • DeSmog

Joey Grostern:

»

At the start of June, MSN, the world’s fourth-largest news aggregator, posted an article from a new climate-focused publication, Climate Cosmos, entitled: “Why Top Experts Are Rethinking Climate Alarmism”.

The article – by “Kathleen Westbrook M.Sc Climate Science” – cited a finding from the “Global Climate Research Institute” that “65% of surveyed climate professionals advocate for pragmatic, solution-focused messaging over fear-driven warnings.”

But there were a couple of major problems: the Global Climate Research Institute doesn’t exist, and nor does Kathleen Westbrook, whose profile on Climate Cosmos has now been renamed to ‘Henrieke Otte’.

The article accused those who advocate for climate action of overstating the harms caused by burning fossil fuels. It also promoted the work of Bjorn Lomborg, who has repeatedly called on governments to halt spending on climate action.

This piece was seemingly a breach of MSN’s “prohibited content” rules for posting false information, which MSN partners must abide by to access the aggregator’s huge reach of around 200 million monthly visitors. It was also posted on another U.S. news aggregator, Newsbreak.

Climate Cosmos only has a small pool of contributors, according to its website, yet pumps out multiple stories a day. To do this, it appears to be relying on the help of artificial intelligence (AI).

The first line of another piece, “What the Climate Movement Isn’t Telling You”, appeared to include a prompt – an instruction given to an AI platform.

It read: “I’ll help you write an article about ‘What the Climate Movement Isn’t Telling You’ with current facts and data. Let me search for the latest information first.”

«

As much as anything, it’s the news aggregators which are the problem here. They aren’t careful about what they allow in, and there isn’t any sensible monitoring once sites do become part of the aggregate. (Thanks Ray L for the link.)
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Start Up No.2503: Anthropic’s Claude helps hacker’s extortion, can Democrat influencers… influence?, VR retail woes, and more


Social media claims that cash withdrawals of over £200 will be monitored aren’t true. Does AI make people believe them? CC-licensed photo by Grey World on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


There’s another post coming this week at the Social Warming Substack on Friday at 0845 UK time. Free signup.


A selection of 9 links for you. Cashless. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


A hacker used AI to automate an ‘unprecedented’ cybercrime spree, Anthropic says • CNBC

Kevin Collier:

»

A hacker has exploited a leading artificial intelligence chatbot to conduct the most comprehensive and lucrative AI cybercriminal operation known to date, using it to do everything from find targets to write ransom notes.

In a report published Tuesday, Anthropic, the company behind the popular Claude chatbot, said that an unnamed hacker “used AI to what we believe is an unprecedented degree” to research, hack and extort at least 17 companies. [Using Anthropic’s chatbot Claude, which seems a relevant point for this story – Overspill Ed.]

Cyber extortion, where hackers steal information like sensitive user data or trade secrets, is a common criminal tactic. And AI has made some of that easier, with scammers using AI chatbots for help writing phishing emails. In recent months, hackers of all stripes have increasingly incorporated AI tools in their work.

But the case Anthropic found is the first publicly documented instance in which a hacker used a leading AI company’s chatbot to automate almost an entire cybercrime spree.

According to the blog post, one of Anthropic’s periodic reports on threats, the operation began with the hacker convincing Claude Code — Anthropic’s chatbot that specializes in “vibe coding,” or creating computer programming based on simple requests — to identify companies vulnerable to attack. Claude then created malicious software to actually steal sensitive information from the companies. Next, it organized the hacked files and analyzed them to both help determine what was sensitive and could be used to extort the victim companies.

The chatbot then analyzed the companies’ hacked financial documents to help determine a realistic amount of bitcoin to demand in exchange for the hacker’s promise not to publish that material. It also wrote suggested extortion emails.

«

So helpful. Why, before we had chatbots, we had to draft our own phishing emails and find our own extortion targets.
unique link to this extract


A dark money group is secretly funding high-profile Democratic influencers • WIRED

Taylor Lorenz:

»

After the Democrats lost in November, they faced a reckoning. It was clear that the party had failed to successfully navigate the new media landscape. While Republicans spent decades building a powerful and robust independent media infrastructure, maximizing controversy to drive attention and maintaining tight relationships with creators despite their small disagreements with Trump, the Democrats have largely relied on outdated strategies and traditional media to get their message out.

Now, Democrats hope that the secretive Chorus Creator Incubator Program, funded by a powerful liberal dark money group called The Sixteen Thirty Fund, might tip the scales. The program kicked off last month, and creators involved were told by Chorus that over 90 influencers were set to take part. Creators told WIRED that the contract stipulated they’d be kicked out and essentially cut off financially if they even so much as acknowledged that they were part of the program. Some creators also raised concerns about a slew of restrictive clauses in the contract.

Influencers included in communication about the program, and in some cases an onboarding session for those receiving payments from The Sixteen Thirty Fund, include Olivia Julianna, the centrist Gen Z influencer who spoke at the 2024 Democratic National Convention; Loren Piretra, a former Playboy executive turned political influencer who hosts a podcast for Occupy Democrats; Barrett Adair, a content creator who runs an American Girl Doll–themed pro-DNC meme account; Suzanne Lambert, who has called herself a “Regina George liberal;” Arielle Fodor, an education creator with 1.4 million followers on TikTok; Sander Jennings, a former TLC reality star and older brother of trans influencer Jazz Jennings; David Pakman, who hosts an independent progressive show on YouTube covering news and politics; Leigh McGowan, who goes by the online moniker “Politics Girl”; and dozens of others.

…The goal of Chorus, according to a fundraising deck obtained by WIRED, is to “build new infrastructure to fund independent progressive voices online at scale.” The creators who joined the incubator are expected to attend regular advocacy trainings and daily messaging check-ins. Those messaging check-ins are led by Cohen on “rapid response days.” The creators also have to attend at least two Chorus “newsroom” events per month, which are events Chorus plans, often with lawmakers.

«

There’s a famous tweet which reads “I’m 50. All celebrity news looks like this: ‘CURTAINS FOR ZOOSHA? K-SMOG AND BATBOY CAUGHT FLIPPING A GRUNT'”. And that list of influencers sure makes me feel like that. But also: what the hell are they hoping to achieve? The Democrats’ problem isn’t that they don’t have enough influencers. It’s that their policies are incredibly unpopular with – or seem irrelevant to – large swathes of the American public.
unique link to this extract


Intel details everything that could go wrong with US taking a 10% stake • Ars Technica

Ashley Belanger:

»

In the long term, investors were told [in a new SEC filing from Intel] that the US stake may limit the company’s eligibility for future federal grants while leaving Intel shareholders dwelling in the uncertainty of knowing that terms of the deal could be voided or changed over time, as federal administration and congressional priorities shift.

Additionally, Intel forecasted potential legal challenges over the deal, which Intel anticipates could come from both third parties and the US government.

The final bullet point in Intel’s risk list could be the most ominous, though. Due to the unprecedented nature of the deal, Intel fears there’s no way to anticipate myriad other challenges the deal may trigger.

“It is difficult to foresee all the potential consequences,” Intel’s filing said. “Among other things, there could be adverse reactions, immediately or over time, from investors, employees, customers, suppliers, other business or commercial partners, foreign governments or competitors. There may also be litigation related to the transaction or otherwise and increased public or political scrutiny with respect to the Company.”

Meanwhile, it’s hard to see what Intel truly gains from the deal other than maybe getting Trump off its back for a bit. A Fitch Ratings research note reported that “the deal does not improve Intel’s BBB credit rating, which sits just above junk status” and “does not fundamentally improve customer demand for Intel chips” despite providing “more liquidity,” Reuters reported.

«

So basically, although it is a cash injection, there are a ton of downsides to Trump (it’s hardly the US) taking a stake. And no visible upsides.
unique link to this extract


The VR retail experience needs a hard reboot • UploadVR

Craig Storm:

»

The Quest 3 dummy unit is fastened precariously to the table, with a Quest 2 flopped forward on its face beside it.

I couldn’t see the newer Quest 3S, which has been out for almost a year, anywhere. Each headset was accompanied by a single sad controller strapped to the table next to it. I don’t know if they are meant to be displayed with only one controller, or if the second controller used to be there. Either way, it was obvious no one had given this display any care or attention in a very long time.

There were no accessories. No boxed units ready for someone to take home. Just desolation, neglect, and sadness. This was my recent experience at a Best Buy store, and it left me wondering: what exactly is the state of VR retail?

There’s no technology that needs to be experienced first-hand more than virtual reality. Trying to explain VR to someone who’s never put on a headset is like trying to describe the taste of an apple to someone who’s never eaten one. You can’t talk your way into understanding it. You have to try it. VR’s struggle to reach the mass market has always come down to that missing step. In the early years, a powerful gaming PC was required to even run VR hardware. The Oculus Go and Oculus Quest changed that by making standalone VR possible, finally putting it within reach of the average consumer. But there still isn’t a good way for most people to try the product before buying.

«

I used to be optimistic that VR could reach the mainstream once headsets became affordable. But the reality is that people aren’t interested enough in sealing themselves away. We like awareness of the world, even if our face is glued to a smartphone screen. And the content isn’t good enough, for the most part, creating a chicken/egg problem.
unique link to this extract


We must build AI for people; not to be a person • Mustafa Suleyman

Mustafa Suleyman was a co-founder of DeepMind, but now works at Microsoft:

»

I want to create AI that makes us more human, that deepens our trust and understanding of one another, and that strengthens our connections to the real world. Copilot creates millions of positive, even life-changing, interactions every single day. This involves a lot of careful design choices to ensure it truly delivers an incredible experience. We won’t always get it right, but this humanist frame provides us with a clear north star to keep working towards.

In this context, I’m growing more and more concerned about what is becoming known as the “psychosis risk”, and a bunch of related issues. I don’t think this will be limited to those who are already at risk of mental health issues. Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship. This development will be a dangerous turn in AI progress and deserves our immediate attention.

We must build AI for people; not to be a digital person. AI companions are a completely new category, and we urgently need to start talking about the guardrails we put in place to protect people and ensure this amazing technology can do its job of delivering immense value to the world. I’m fixated on building the most useful and supportive AI companion imaginable. But to succeed, I also need to talk about what we, and others, shouldn’t build.

«

Making sure that AI creations do not qualify for copyright seems like a good stake to put, cemented, in the ground for this.
unique link to this extract


Cash withdrawals over £200 will not be automatically reported to the Financial Intelligence Unit • Full Fact

»

We’ve recently spotted videos circulating on social media which claim that from 18 September, people who withdraw more than £200 in cash a week will have details of their transactions sent to the UK’s Financial Intelligence Unit.

The clips claim this new rule comes from “guidance from HMRC, the Treasury and the Financial Conduct Authority [FCA]”.

This is not a real policy set to be introduced by the government.

A spokesperson for the National Crime Agency (NCA), which oversees the Financial Intelligence Unit (FIU), told us this isn’t true, confirming that “the FIU does not receive automatic reports on anyone who removes £200 cash in a seven day period”.

A spokesperson for HMRC also told us: “These claims are completely false and designed to cause undue alarm and fear”, while the FCA said it “is not aware of or involved in this guidance”.

«

People’s brains really do seem to have turned to mush. Or maybe it’s that effect where if there’s enough completely made-up stuff circulating, then nobody knows what to believe, or how to discern.

In passing: in all the science fiction I’ve ever read, the computers were always accurate. (HAL 9000 doesn’t count; it thought it was saving the mission by preventing the humans from interfering.) When you look at the output of Grok and ChatGPT, you realise that SF writers didn’t account for human stupidity being the principal input to those creations.
unique link to this extract


Worldwide smartphone market forecast to grow 1% in 2025 • IDC

»

Worldwide smartphone shipments are forecast to grow 1.0% year-on-year (YoY) in 2025 to 1.24 billion units, according to the International Data Corporation (IDC) Worldwide Quarterly Mobile Phone Tracker. This represents an improvement from the previous forecast of 0.6%, driven by 3.9% growth in iOS this year. Despite challenges like soft demand and a tough economy, healthy replacement demand will help push growth into 2026, resulting in a compound annual growth rate (CAGR) of 1.5% from 2024 to 2029. The total addressable market (TAM) has increased slightly, as the current exemption by the U.S. government on smartphones shields the market from negative impact from additional tariffs.

«

Just looking in on the smartphone market in passing. The forecast is for about 1.25bn smartphones to be shipped, but the forecast for the next five years is anaemic – 1% or 2%. It’s a long way from the go-go years of the 2010s, with 20% growth. Now, like the PC, it’s just bumping along: the real era of innovation is past, and you can’t burn the bonfire twice.
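For a sense of how anaemic that is, here’s a minimal Python sketch – my own back-of-envelope illustration, not IDC’s model – compounding the quoted 1.5% CAGR from the roughly 1.24bn units forecast for 2025:

```python
# Rough illustration of what a 1.5% CAGR means for shipment volumes.
# The ~1.24bn 2025 base and the 1.5% rate come from the IDC excerpt above;
# everything else is just arithmetic.

base_units = 1.24e9   # forecast 2025 shipments
cagr = 0.015          # compound annual growth rate

for year in range(2025, 2030):
    print(f"{year}: {base_units / 1e9:.2f}bn units")
    base_units *= 1 + cagr

# Five years of 1.5% growth compounds to only about 7.7% in total
# ((1.015 ** 5) - 1 ≈ 0.077) over IDC's 2024-2029 window – versus the
# 20% a single year could deliver in the 2010s.
```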
unique link to this extract


Is the UK’s giant new nuclear power station ‘unbuildable’? • Financial Times

Malcolm Moore, Ian Johnston and Rachel Millard:

»

The design of the UK’s latest nuclear power station is “terrifying”, “phenomenally complex” and “almost unbuildable”, according to Henri Proglio, a former head of EDF, the French state-owned utility behind the project. 

One month after the final green light for Sizewell C, 1,700 workers are on site in Suffolk, on the UK’s east coast, preparing the sandy marshland for two enormous reactors that will eventually generate enough electricity for 6mn homes.

The plant will be a replica of the European Pressurised Reactor (EPR) design that is running four to six years late and 2.5 times over budget at Hinkley Point C in Somerset, and which has had problems wherever it has been built, in France, Finland and China.

But unlike at Hinkley, where EDF was responsible for spiralling costs and took a hit of nearly €13bn after running late and over budget, the UK government and bill payers are on the hook for Sizewell. The state will provide £36.5bn of debt to fund the estimated £38bn price tag and be responsible if costs go beyond £47bn.

…It includes unprecedented safety features: four independent cooling systems; twin containment shields capable of withstanding an internal blast or an aircraft strike; and a “core catcher” to trap molten fuel in the event of a meltdown. 

“It was well intentioned, but it ended up growing and growing and growing, and European regulatory standards reinforced it, and it ended up a monster,” said one senior nuclear executive, who asked not to be named. 

«

Planned output: 3.2GW. Expected cost per megawatt-hour: £286. Between twice and three times the cost of reactors built in China or South Korea. This is probably the last EPR that will ever be built – if it’s ever finished.

Onshore wind in the UK: 15.7GW. Offshore: 14.7GW.
unique link to this extract


Flock wants to partner with consumer dashcam company that takes ‘trillions of images’ a month • 404 Media

Joseph Cox:

»

Flock, the surveillance company with automatic license plate reader (ALPR) cameras in thousands of communities around the U.S., is looking to integrate with a company that makes AI-powered dashcams placed inside peoples’ personal cars, multiple sources told 404 Media. The move could significantly increase the amount of data available to Flock, and in turn its law enforcement customers. 404 Media previously reported local police perform immigration-related Flock lookups for ICE, and on Monday that Customs and Border Protection had direct access to Flock’s systems. In essence, a partnership between Flock and a dashcam company could turn private vehicles into always-on, roaming surveillance tools.

Nexar, the dashcam company, already publicly publishes a live interactive map of photos taken from its dashcams around the U.S., in what the company describes as “crowdsourced vision,” showing the company is willing to leverage data beyond individual customers using the cameras to protect themselves in the event of an accident. 

“Dash cams have evolved from a device for die-hard enthusiasts or large fleets, to a mainstream product. They are cameras on wheels and are at the crux of novel vision applications using edge AI,” Nexar’s website says. The website adds Nexar customers drive 150 million miles a month, generating “trillions of images.”

The news comes during a period of expansion for Flock. Earlier this month the company announced it would add AI to its products to let customers use natural language to surface data while investigating crimes.

«

We live in the panopticon; it’s just sewing up the edges at the moment.
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Start Up No.2502: parents file lawsuit over ChatGPT “suicide”, an AI-written Wikipedia?, 16 years of Intel missteps, and more


Solar power is taking off in Africa, according to import data. CC-licensed photo by SolarAid Photos on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


There’s another post coming this week at the Social Warming Substack on Friday at 0845 UK time. Free signup.


A selection of 10 links for you. Bright sparks. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


“ChatGPT killed my son”: parents’ lawsuit describes suicide notes in chat logs • Ars Technica

Ashley Belanger:

»

Over a few months of increasingly heavy engagement, ChatGPT allegedly went from a teen’s go-to homework help tool to a “suicide coach.”

In a lawsuit filed Tuesday, mourning parents Matt and Maria Raine alleged that the chatbot offered to draft their 16-year-old son Adam a suicide note after teaching the teen how to subvert safety features and generate technical instructions to help Adam follow through on what ChatGPT claimed would be a “beautiful suicide.”

Adam’s family was shocked by his death last April, unaware the chatbot was romanticizing suicide while allegedly isolating the teen and discouraging interventions. They’ve accused OpenAI of deliberately designing the version Adam used, ChatGPT 4o, to encourage and validate the teen’s suicidal ideation in its quest to build the world’s most engaging chatbot. That includes making a reckless choice to never halt conversations even when the teen shared photos from multiple suicide attempts, the lawsuit alleged.

“Despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol,” the lawsuit said.

The family’s case has become the first time OpenAI has been sued by a family over a teen’s wrongful death, NBC News noted. Other claims challenge ChatGPT’s alleged design defects and OpenAI’s failure to warn parents.

«

The first release of ChatGPT was in November 2022. We’re not even three years into this. We might have to start collecting suicide statistics collated with chatbot use.
unique link to this extract


Jimmy Wales says Wikipedia could use AI. Editors call it the “antithesis of Wikipedia” • 404 Media

Emanuel Maiberg:

»

Jimmy Wales, the founder of Wikipedia, thinks the internet’s default encyclopedia and one of the world’s biggest repositories of information could benefit from some applications of AI. The volunteer editors who keep Wikipedia functioning strongly disagree with him.

The ongoing debate about incorporating AI into Wikipedia in various forms bubbled up again in July, when Wales posted an idea to his Wikipedia User Talk Page about how the platform could use a large language model as part of its article creation process.

Any Wikipedia user can create a draft of an article. That article is then reviewed by experienced Wikipedia editors who can accept the draft and move it to Wikipedia’s “mainspace”, which makes up the bulk of Wikipedia and the articles you’ll find when you’re searching for information. Reviewers can also reject articles for a variety of reasons, but because hundreds of draft articles are submitted to Wikipedia every day, volunteer reviewers often use a tool called articles for creation/helper script (AFCH), which creates templates for common reasons articles are declined.

This is where Wales thinks AI could help. He wrote that he was asked to look at a specific draft article and give notes that might help the article get published.

…the response suggested the article cite a source that isn’t included in the draft article, and rely on Harvard Business School press releases for other citations, despite Wikipedia policies explicitly defining press releases as non-independent sources that cannot help prove notability, a basic requirement for Wikipedia articles.

Editors also found that the ChatGPT-generated response Wales shared “has no idea what the difference between” some of these basic Wikipedia policies, like notability (WP:N), verifiability (WP:V), and properly representing minority and more widely held views on subjects in an article (WP:WEIGHT).

«

I think you’d have to train an LLM specifically on the incredible jungle of Wikipedia policies, which are used by entrenched editors like jokers in a card game whose rules you’ve only hazily learnt so you could take part. Then it might stand a chance of getting something through.
unique link to this extract


Can Netflix find your new favourite watch based on your star sign? • The Guardian

Stuart Heritage:

»

The challenge for the streamers is how to effectively curate this infinite content. In the past they have done this by prioritising new releases, or showing you what everyone else is watching, or sharpening their algorithms to second-guess what you want to watch based on what you have already watched. But finally – finally – Netflix has cracked it. And it has achieved this with science.

Sorry, not science, bullshit. Because this weekend Netflix officially launched its Astrology Hub. Now, after all these years, subscribers can at last pick something to watch based on a loose collection of personality quirks determined by the position of the planets at the time of their birth. Doesn’t that sound great?

So, for instance, I am a Leo. And this means that, when I enter the Netflix Astrology Hub and scroll down a bit, I am informed that “Leos have main character energy”. And as such, it means I should watch The Crown, or The King, or Queen Charlotte: A Bridgerton Story, or Emily in Paris, or The Kissing Booth 2, or that Cilla Black biopic that ITV made a few years ago. And I think that, as a heterosexual 45-year-old man, Netflix has absolutely cracked it.

Sure, I have already watched most of these for work, and I flat-out disliked almost all of them. But if Netflix says that every single person born within a specific window has the exact same personality, then sure. Tonight, after flopping down on to my sofa at the end of another long day trying to locate the right balance between quality family time and the financial imperative to work, I will watch Emily in bloody Paris. And I will like it, because Netflix told me that I would.

Now, the naysayers among you will point out that all these choices seem geared to appeal to young girls, because young girls are statistically the demographic most likely to believe in the zodiac, and so the Netflix Astrology Hub is essentially just a bit of a grift designed to push content at one specific group. And you might even go further, by pointing out that astrology is a pseudoscience that has repeatedly proved itself to have no scientific validity whatsoever.

To which I respond: of course you think that, you’re a Virgo.

«

The old jokes are the best. The real purpose of the Astrology Hub? To get people to write stories about Netflix. And will they? Well, it is summer, after all.
unique link to this extract


OpenAI launches ChatGPT Go plan in India at Rs 399, India-exclusive and you can pay for it with UPI • India Today

Ankita Garg:

»

OpenAI has introduced a new subscription plan in India called ChatGPT Go. It is priced at Rs 399 [£3.38, $5.23] per month. The plan is being rolled out as an India-only option and can also be purchased through UPI [India’s Unified Payments Interface], making it more convenient for millions of users who rely on the digital payment system for everyday transactions.

This is the first time OpenAI has created a country-specific subscription plan. While Indian users have had access to the free version of ChatGPT as well as the Plus and Pro plans, the new Go tier is designed to give more people access to advanced tools at a lower monthly cost. The company says the plan has been built keeping in mind the scale of usage in India, which has now become its second-largest market for ChatGPT.

Priced significantly below the Plus subscription, which costs Rs 1,999 per month, ChatGPT Go offers higher limits on some of the most used features. Users get 10 times more message capacity, daily image generations, and file uploads, along with twice the memory length for personalised responses. The plan is powered by GPT-5, the company’s latest model, which includes better support for Indic languages.

One of the biggest additions with the Go plan is the ability to pay through UPI. Until now, Indian users could only subscribe using debit or credit cards, which left out a large number of potential customers. By adding UPI support, OpenAI is hoping to make the subscription process as seamless as possible for users across the country.

«

OpenAI as the Facebook of chatbots. I am concerned that its effects will be the same as Facebook’s uncontrolled early spread in communities that were, socially, completely unprepared for it. What will it be like when millions of people are talking to a chatbot that tells them everything they think is absolutely the best idea ever, but have no idea that the chatbot is not in any way sentient?
unique link to this extract


SIM-swapper, Scattered Spider hacker gets ten years • Krebs on Security

Brian Krebs:

»

A 20-year-old Florida man at the center of a prolific cybercrime group known as “Scattered Spider” was sentenced to 10 years in federal prison today, and ordered to pay roughly $13m in restitution to victims. Noah Michael Urban of Palm Coast, Fla. pleaded guilty in April 2025 to charges of wire fraud and conspiracy.

…In November 2024 Urban was charged by federal prosecutors in Los Angeles as one of five members of Scattered Spider (a.k.a. “Oktapus,” “Scatter Swine” and “UNC3944”), which specialized in SMS and voice phishing attacks that tricked employees at victim companies into entering their credentials and one-time passcodes at phishing websites. Urban pleaded guilty to one count of conspiracy to commit wire fraud in the California case, and the $13m in restitution is intended to cover victims from both cases.

The targeted SMS scams spanned several months during the summer of 2022, asking employees to click a link and log in at a website that mimicked their employer’s Okta authentication page. Some SMS phishing messages told employees their VPN credentials were expiring and needed to be changed; other missives advised employees about changes to their upcoming work schedule.

That phishing spree netted Urban and others access to more than 130 companies, including Twilio, LastPass, DoorDash, MailChimp, and Plex. The government says the group used that access to steal proprietary company data and customer information, and that members also phished people to steal millions of dollars worth of cryptocurrency.

…Reached via one of his King Bob accounts on Twitter/X, Urban called the sentence unjust, and said the judge in his case discounted his age as a factor.

“The judge purposefully ignored my age as a factor because of the fact another Scattered Spider member hacked him personally during the course of my case,” Urban said in reply to questions…

«

The hacking wasn’t of the judge himself, but the way it was done… well, it’s in the story, and it’s classic.
unique link to this extract


Amazon quietly blocks AI bots from Meta, Google, Huawei and more • Modern Retail

Allison Smith:

»

Amazon is escalating efforts to keep artificial intelligence companies from scraping its e-commerce data, as the retail giant recently added six more AI-related crawlers to its publicly available robots.txt file.

The change was first spotted by Juozas Kaziukėnas, an independent analyst, who noted that the updated code underlying Amazon’s sprawling website now includes language that prohibits bots from Meta, Google, Huawei, Mistral and others.

“Amazon is desperately trying to stop AI companies from training models on its data,” Kaziukėnas wrote in a LinkedIn post on Thursday. “I think it is too late to stop AI training — Amazon’s data is already in the datasets ChatGPT and others are using. But Amazon is definitely not interested in helping anyone build the future of AI shopping. If that is indeed the future, Amazon wants to build it itself.”

The update builds on earlier restrictions Amazon added at least a month ago targeting crawlers from Anthropic’s Claude, Perplexity and Google’s Project Mariner agents, The Information reported. Robots.txt files are a standard tool that websites use to give instructions to automated crawlers like search engines. While restrictions outlined in robots.txt files are advisory rather than enforceable, they act as signposts for automated systems — that is, if the crawlers are “well-behaved,” they are expected to respect the block, according to Kaziukėnas.

…The move highlights Amazon’s increasingly aggressive stance toward third-party AI tools that could scrape its product pages, monitor prices or even attempt automated purchases. For Amazon, the stakes are significant. Its online marketplace is not only the largest store of e-commerce data in the world but also the foundation of a $56bn advertising business built around shoppers browsing its site. Allowing outside AI tools to surface products directly to users could bypass Amazon’s storefront, undermining both traffic and ad revenue.

«

Some of the AI bots are a lot more aggressive, and some seem to be using multiple IPs and also ignoring robots.txt. So Amazon might have to take other measures if it wants to stop this. I’m not sure its concern is about others directing people straight to products so much as making some sort of AI-generated “here’s a product I imagined, maybe we can get it created”.
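As a footnote on the mechanics: robots.txt is purely advisory, but it’s easy to check what a site is telling any given crawler. A minimal sketch using Python’s standard library – the user-agent token below is a hypothetical placeholder, not one Amazon necessarily lists; consult the live robots.txt for the actual names:

```python
# Minimal sketch: ask a site's robots.txt whether a named crawler may fetch a URL.
# This is advisory only – badly behaved bots can simply ignore it, as noted above.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.amazon.com/robots.txt")
rp.read()  # fetches and parses the live robots.txt

# "ExampleAIBot" is a placeholder user-agent token for illustration;
# substitute the crawler name you actually want to test.
print(rp.can_fetch("ExampleAIBot", "https://www.amazon.com/dp/EXAMPLE"))
```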
unique link to this extract


The first evidence of a take-off in solar in Africa • Ember

Dave Jones:

»

The latest data provides evidence that a solar pick-up is happening at scale in many countries in Africa.

Solar is not new to Africa. For more than two decades, solar has helped improve lives across Africa, in rural schools and hospitals, pay-as-you-go in homes, street lighting, water pumping, mini-grids and more. However, South Africa and Egypt are currently the only countries with installed solar capacity measured in gigawatts, rather than megawatts. That could be about to change.

The first evidence of a take-off in solar in Africa is now here:

• The last 12 months saw a big rise in Africa’s solar panel imports. Imports from China rose 60% in the last 12 months to 15,032 MW. Over the last two years, the imports of solar panels outside of South Africa have nearly tripled from 3,734 MW to 11,248 MW.

• The rise happened across Africa. 20 countries set a new record for the imports of solar panels in the 12 months to June 2025. 25 countries imported at least 100 MW, up from 15 countries 12 months before.

• These solar panels will provide a lot of electricity. The solar panels imported into Sierra Leone in the last 12 months, if installed, would generate electricity equivalent to 61% of the total reported 2023 electricity generation, significantly adding to electricity supply. They would add electricity equivalent to over 5% to total reported electricity generation in 16 countries.

• Solar panel imports will reduce fuel imports. The savings from avoiding diesel can repay the cost of a solar panel within six months in Nigeria, and even less in other countries. In nine of the top ten solar panel importers, the import value of refined petroleum eclipses the import value of solar panels by a factor of between 30 to 107.

This surge is still in its early days. Pakistan experienced an immense solar boom in the last two years, but Africa is not the next Pakistan – yet.

«

Hugely encouraging. If batteries follow, the economy could boom.
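On the payback claim in the bullet list above: the arithmetic is just upfront panel cost divided by the monthly diesel spend it displaces. A minimal sketch with deliberately hypothetical numbers – the real figures vary by country and are in Ember’s report:

```python
# Illustrative only: the example values are hypothetical placeholders, not Ember's data.
# Payback period (months) = upfront panel cost / monthly diesel saving.

def payback_months(panel_cost_usd: float, monthly_diesel_saving_usd: float) -> float:
    """Months needed for avoided diesel spend to cover the panel's upfront cost."""
    return panel_cost_usd / monthly_diesel_saving_usd

# e.g. a $120 panel offsetting $25 of diesel a month pays back in under five months
print(payback_months(120, 25))   # -> 4.8
```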
unique link to this extract


Intel and the foundry state of play • Digits to Dollars

Jonathan Greenberg:

»

Intel has arrived at the crossroads. The company needs to make some critical decisions. It needs to make them very soon. And the fate of the entire company is at stake.

Put simply, the company needs to decide if it wants to continue manufacturing semiconductors. If they choose to continue with the business they led for so long, then they need to invest heavily in advancing their 14A process, and they need to invest in the tools and systems that will allow external customers to use the process as well.

To accomplish this, the company needs some help. They are largely tapped out of their debt capacity (see above regarding that share buyback…). By our rough math, they need about $15bn – $25bn to accomplish this. This is on top of cash they generate from the Product side of the company and funding they have already received from the US government through the CHIPS Act. If they watch every penny and are highly disciplined in their spending, they might get by with less, but there will still be a significant, multi-billion dollar gap to fill.

…For their part, it is unclear what the US government hopes to achieve. If the goal is to prop up a US company, get a stake in a big company at a theoretical discount, maybe generate some jobs through additional US fab construction – then they can get that in a way that does not mean much for Intel’s long-term future. If the government instead wants to ensure that a US company is capable of advanced semis manufacturing then they will need to write a large check.

Our preferred solution is for the government to instead negotiate investments by a host of potential Intel customers – Apple, Qualcomm, Nvidia, Broadcom, Google, Microsoft, etc. These companies all have a massive stake in Intel’s future. The trouble with this approach is that while all these companies have a long-term interest in securing an alternative to TSMC, short-term interest, quarterly expectations and inertia are all strong enough to make a deal unlikely. For some subset of these companies a collective $20bn is both easy to raise and an incredible bargain. Ideally, the government would ‘encourage’ these parties to see past their short-term outlook and save Intel. Unfortunately, no one seems to even be considering this approach.

«

It’s a classic game theory challenge: Apple, for example, has zero reason to give Intel any money. But if TSMC becomes a monopoly, Apple could be a loser just as easily as it is a winner. (Quite apart from any Chinese incursion.) So how much is it worth to guard against the monopoly possibility?

Now, this crossroads that Intel is at? People have been seeing it approach for a long time, as the next two links show.
unique link to this extract


2013: The Intel Opportunity • Stratechery

Ben Thompson, writing in 2013, when Intel had just got a new CEO:

»

Most chip designers are fabless; they create the design, then hand it off to a foundry. AMD, Nvidia, Qualcomm, MediaTek, Apple – none of them own their own factories. This certainly makes sense: manufacturing semiconductors is perhaps the most capital-intensive industry in the world, and AMD, Qualcomm, et al have been happy to focus on higher margin design work.

Much of that design work, however, has an increasingly commoditized feel to it. After all, nearly all mobile chips are centered on the ARM architecture. For the cost of a license fee, companies, such as Apple, can create their own modifications, and hire a foundry to manufacture the resultant chip. The designs are unique in small ways, but design in mobile will never be dominated by one player the way Intel dominated PCs.

It is manufacturing capability, on the other hand, that is increasingly rare, and thus, increasingly valuable. In fact, today there are only four major foundries: Samsung, GlobalFoundries, Taiwan Semiconductor Manufacturing Company, and Intel. Only four companies have the capacity to build the chips that are in every mobile device today, and in everything tomorrow.

Massive demand, limited suppliers, huge barriers to entry. It’s a good time to be a manufacturing company. It is, potentially, a good time to be Intel. After all, of those four companies, the most advanced, by a significant margin, is Intel. The only problem is that Intel sees themselves as a design company, come hell or high water.

Today Intel has once again promoted a COO to CEO. And today, once again, Intel is increasingly under duress. And, once again, the only way out may require a remaking of their identity.

It is into a climate of doom and gloom that Krzanich is taking over as CEO. And, in what will be a highly emotional yet increasingly obvious decision, he ought to commit Intel to the chip manufacturing business, i.e. manufacturing chips according to other companies’ designs.

«

It’s that last sentence: Intel should become a foundry like TSMC, he said. This was 12 years ago. Intel in the succeeding years went from strength to strength until… it didn’t – and that has been the case for at least the past two years. Some mistakes take a long time to become obvious – but that also makes them very hard to back out of.

However, he wasn’t the first to have seen this trouble brewing…
unique link to this extract


When the chips are down • The Guardian

Jack Schofield, writing in, wait for it, 2009:

»

Global Foundries also has alliances with IBM – which has a $2.5bn chip plant nearby in East Fishkill, NY – and several other companies. “We don’t believe any more in a home-grown R&D model,” says a spokesman, Jon Carvill. Rather than just serving AMD, the new strategy is to target the 20 largest companies who need leading-edge chip technologies in high volumes. “There’s very little competition in that part of the market,” says Carvill. “For those customers today, there isn’t any choice: there’s only TSMC [Taiwan Semiconductor Manufacturing Company] that can meet their needs. We’re going to offer an alternative.”

Carvill is confident the “silicon foundry” approach will enable AMD to keep on competing with Intel, the world’s largest chip manufacturer, as circuitry shrinks from today’s 45 nanometres (billionths of a metre) to 22nm and beyond. (The Intel 8088 chip, used in the IBM PC in 1982, had 3-micron – 3,000nm – circuits.)

However, the latest of many predictions of the death of Moore’s law concerns the economics rather than the physics. Len Jelinek, chief analyst for semiconductor manufacturing at iSuppli, has predicted that when we reach 18nm, in 2014, the equipment will be so expensive that chip manufacturers won’t be able to recover the fab costs.

This isn’t really a new idea either. Mike Mayberry, vice-president of Intel’s research and manufacturing group, points out that Arthur Rock, one of Intel’s early venture capital investors, came up with Rock’s law – the cost of a chip fabrication plant doubles every four years.

However, unlike Moore’s law, Rock’s law has not worked out well. In an article published by the IEEE, Philip Ross argued that fabs should have cost $5bn in the late 1990s, and $10bn in 2004. Global Foundries’ new fab may sound expensive at $4.2bn, but that’s an order of magnitude less than $40bn.

Which is not to say there aren’t potential problems in the semiconductor world. Gartner Research’s vice-president, Bob Johnson, points out that apart from Intel and Samsung, who can afford to build this sort of fab for themselves, most companies are likely to move to foundries.

«

To repeat: Jack Schofield (now, sadly, deceased) wrote this in July 2009: the problems with the Intel model of “we’ll just make our chips, thanks” were already becoming visible. (Global Foundries is still going, though shrinking, but TSMC has become the 500lb gorilla of chipmaking.)
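
Rock’s law is easy to run forward from the numbers in the piece. A minimal sketch, taking Ross’s $10bn-in-2004 figure from the quote as the baseline (treating it as exact, and the doubling as a clean exponential, are my simplifications):

```python
# Rock's law: the cost of a chip fabrication plant doubles every four years.
# Baseline of $10bn in 2004 comes from the quoted IEEE argument; projecting it
# as a smooth exponential is a simplification for illustration.
def rocks_law_cost(year, base_cost_bn=10.0, base_year=2004, doubling_years=4):
    """Projected leading-edge fab cost in $bn under Rock's law."""
    return base_cost_bn * 2 ** ((year - base_year) / doubling_years)

for year in (2004, 2008, 2009, 2012):
    print(f"{year}: ~${rocks_law_cost(year):.0f}bn")
```

That projects roughly $20bn by 2008 and $40bn by 2012 – which is why a $4.2bn Global Foundries fab could be described as an order of magnitude less. (Leading-edge fabs are these days routinely quoted at $20bn or more, so the law got the direction right, just not the timing.)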
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Start Up No.2501: the silence of the AI companies, Google inches towards verified developers, Citizen’s AI crimes, and more


Trying to teach Romeo and Juliet (in whatever medium) to phone-distracted schoolchildren is no fun. So, subtract the phones? CC-licensed photo by iClassical Com on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


There’s another post coming this week at the Social Warming Substack on Friday at 0845 UK time. Free signup.


A selection of 9 links for you. Not even the DiCaprio one? I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


Do AI companies actually care about America? • The Atlantic

Matteo Wong:

»

Hillary Clinton once described interacting with Mark Zuckerberg as “negotiating with a foreign power” to my colleague Adrienne LaFrance.

American politicians have been wholly unsuccessful in reining in that power, and now, as the AI boom brings Silicon Valley’s ambitions to new heights, they are positioned more than ever as industry cheerleaders. Seen one way, this is classic conservatism: the championing of America’s business rulers based on the belief that their success will redound on the nation. Seen another, it is a dereliction of duty: elected officials willingly outsourcing their stewardship of the national interest to a tiny group of billionaires who believe they know what’s best for humanity.

The tech industry’s new ambitions—using AI to reshape not just work, school, and social life but perhaps even governance itself—do have a major vulnerability: The AI patriots desperately need the president’s approval. Chatbots rely on enormous data centers and the associated energy infrastructure that depend on the government to permit and expedite major construction projects; AI products, which are still fallible and have yet to show a clear path to profits, are in need of every bit of grandiose marketing—and all the potentially lucrative government and military contracts—available. Shortly after the inauguration, Zuckerberg, who is also aggressively pursuing AI development, said in a Meta earnings call, “We now have a U.S. administration that is proud of our leading companies, prioritizes American technology winning, and that will defend our values and interests abroad.” Altman, once a vocal opponent of Trump, has written that he now believes that Trump “will be incredible for the country in many ways!”

That dependence has led to a kind of cognitive dissonance. In this still-early stage of the AI boom, Silicon Valley, for all its impunity, has chosen not to voice robust ideas about democracy that differ substantively from the whims of a mercurial White House. As millions of everyday citizens, current and former government officials, lawyers and academics, and dissidents from dictatorships around the world have warned that the Trump administration is eroding American democracy, AI companies have remained mostly supportive or silent despite their own bombastic rhetoric about protecting democracy.

«

unique link to this extract


Google will require developer verification to install Android apps • 9to5Google

Abner Li:

»

To combat malware and financial scams, Google announced on Monday that only apps from developers that have undergone verification can be installed on certified Android devices starting in 2026.

This requirement applies to “certified Android devices” that have Play Protect and are preloaded with Google apps. The Play Store implemented similar requirements in 2023, but Google is now mandating this for all install methods, including third-party app stores and sideloading where you download an APK file from a third-party source.

Google wants to combat “convincing fake apps” and make it harder for repeat “malicious actors to quickly distribute another harmful app after we take the first one down.” A recent analysis by the company found that there are “over 50 times more malware from internet-sideloaded sources than on apps available through Google Play.”

Google is explicit today about how “developers will have the same freedom to distribute their apps directly to users through sideloading or to use any app store they prefer.”

«

This has taken a long, long time to come through, hasn’t it? Google’s first Android phone arrived in 2008, complete with an app store (then called Android Market). Malware has been a constant problem ever since; now, at last, developers will have to be verified.

But it’s even slower than you think:

»

The first Android app developers will get access to verification this October, with the process opening to all in March 2026.

The requirement will go into effect in September 2026 for users in Brazil, Indonesia, Singapore, and Thailand. Google notes how these countries have been “specifically impacted by these forms of fraudulent app scams.” Verification will then apply globally from 2027 onwards.

«

unique link to this extract


Perplexity is launching a new revenue-share model for publishers • WSJ

Alexandra Bruell:

»

Perplexity will pay publishers for news articles that the artificial-intelligence company uses to answer queries. 

The artificial-intelligence startup expects to pay publishers from a $42.5m revenue pool initially, and to increase that amount over time, Perplexity said Monday.

Perplexity plans to distribute money when its AI assistant or search engine uses a news article to fulfill a task or answer a search request. 

Its payments to publishers will come out of the subscription revenue generated by a new news service, called Comet Plus, that Perplexity plans to roll out widely this fall.

Perplexity said publishers will get 80% of Comet Plus revenue, including from the more expensive subscription tiers that provide Comet Plus free of charge. 

Bloomberg News earlier reported Perplexity’s plans to pay publishers.

Like other AI rivals, Perplexity has been building a search engine for the AI era, and turned to news articles and other content to answer queries from users. But publishers have complained the AI firms are taking their work without compensation, while siphoning away traffic that would otherwise go to their websites and apps.

«

That’s quite optimistic about how much subscription revenue will come in. Wonder, too, how the payouts will compare, for publishers, with the revenue from people actually visiting their sites.
unique link to this extract


Citizen is using AI to generate crime alerts with no human review. It’s making a lot of mistakes • 404 Media

Joseph Cox:

»

Crime-awareness app Citizen is using AI to write alerts that go live on the platform without any prior human review, leading to factual inaccuracies, the publication of gory details about crimes, and the exposure of sensitive data such as peoples’ license plates and names, 404 Media has learned.

The news comes as Citizen recently laid off more than a dozen unionized employees, with some sources believing the firings are related to Citizen’s increased use of AI and the shifting of some tasks to overseas workers. It also comes as New York City enters a more formal partnership with the app.

…For years Citizen employees have listened to radio feeds and written these alerts themselves. More recently, Citizen has turned to AI instead, with humans “becoming increasingly bare,” one source said. The descriptions of Citizen’s use of AI come from three sources familiar with the company. 404 Media granted them anonymity to protect them from retaliation.

Initially, Citizen brought in AI to assist with drafting notifications, two sources said. “The next iteration was AI starting to push incidents from radio clips on its own,” one added. “There was no analyst or human involvement in the information that was being pushed in those alerts until after they were sent.”

All three sources said the AI made mistakes or included information it shouldn’t have. AI mistranslated “motor vehicle accident” to “murder vehicle accident.” It interpreted addresses incorrectly and published an incorrect location. It would add gory or sensitive details that violated Citizen’s guidelines, like saying “person shot in face” or including a person’s license plate details in an unconfirmed report. It would generate a report based on a homeless person sleeping in a location. The AI sometimes blasted a notification about police officers spotting a stolen vehicle or homicide suspect, potentially putting that operation at risk.

«

Surprise!
unique link to this extract


What many parents miss about the phones-in-schools debate • The Atlantic

Gail Cornwall:

»

…within the next two years, a majority of U.S. kids will be subject to some sort of phone-use restriction [in schools].

…Part of the reason that I feel so strongly about getting phones out of classrooms is that I know what school was like for teachers without them. In 2005, when I was 25 years old, I showed up at a Maryland high school eager to thrill three classes of freshmen with my impassioned dissection of Romeo and Juliet. Instead, I learned how quickly a kid’s eraser-tapping could distract the whole room, and how easily one student’s bare calves could steal another teen’s attention. Reclaiming their focus took everything I had: silliness, flexibility, and a strong dose of humility.

Today, I doubt Mercutio and I would stand a chance. Even with the rising number of restrictions, smartphones are virtually unavoidable in many schools. Consider my 16-year-old’s experience: Her debate team communicates using the Discord app. Flyers about activities require scanning a QR code. Her teachers frequently ask that she submit photos of completed assignments, which her laptop camera can’t capture clearly. In some classes, students are expected to complete learning games on their smartphone.

Because of the way devices—and human brains—are built, asking teens to use a phone in class but not look at other apps is likely to be as ineffective as DARE’s “Just Say No” campaign. Studies have shown that simply having a phone nearby can reduce a person’s capacity to engage with those around them and focus on tasks. This is because each alert offers a burst of dopamine, which can condition people to want to open their phone even before they get a notification.

«

As Cornwall points out, many parents are fine with every other child not having a phone – but their child needs one. Just in case.
unique link to this extract


More UK news publishers are adopting ‘consent or pay’ advertising model • Press Gazette

Charlotte Tobitt:

»

Sixteen of the 50 biggest news websites in the UK are now using a “consent or pay” model to allow users to pay to reject personalised advertising or even avoid ads altogether.

UK publishers began to implement the model last year as the Information Commissioner’s Office cracked down on the requirement for the biggest sites to display a “reject all cookies” button as prominently as the option to “accept all”.

More publishers have begun to implement consent or pay this year after the ICO clarified that the model was acceptable as long as users are given a “realistic choice”, including by not putting the price too high.

The ICO rules relate to the ability for users to opt out of tracking cookies used to show personalised advertising, which have a higher value to advertisers.

But some publishers have chosen instead to offer users the choice between accepting cookies and paying to see no adverts at all, making it more attractive to users fed up with cluttered browsing experiences.

…When users are equally offered the chance to “accept all” or “reject all” cookies, consent rates are typically somewhere around 70-80%, according to both Skovgaards and Contentpass founder Dirk Freytag.

Once a consent or pay model is introduced, almost 100% choose to accept cookies with a small number choosing to pay instead, they each told Press Gazette.

This means publishers are more likely to benefit from a better price for their advertising than if people had chosen to reject personalised advertising, and the small number who choose to pay make up for the advertising that would be lost if they had otherwise rejected.

«
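
The economics are straightforward to sketch. A minimal Python comparison, using the consent rates quoted above but with entirely hypothetical traffic numbers, CPMs and subscription price:

```python
# Rough "reject all" vs "consent or pay" comparison for a publisher.
# The consent rates (70-80% vs ~100%) come from the article; the traffic,
# CPMs and subscription price are hypothetical assumptions.
MONTHLY_VISITORS = 100_000
PAGES_PER_VISITOR = 10
CPM_PERSONALISED = 8.0        # assumed $ per 1,000 personalised ad impressions
CPM_NON_PERSONALISED = 3.0    # assumed $ per 1,000 non-personalised impressions
SUB_PRICE = 5.0               # assumed monthly ad-free fee

def monthly_revenue(consent_rate, pay_rate=0.0):
    impressions = MONTHLY_VISITORS * PAGES_PER_VISITOR
    ad_rev = impressions * consent_rate * CPM_PERSONALISED / 1000
    ad_rev += impressions * (1 - consent_rate - pay_rate) * CPM_NON_PERSONALISED / 1000
    sub_rev = MONTHLY_VISITORS * pay_rate * SUB_PRICE
    return ad_rev + sub_rev

print(f"Equal accept/reject buttons (75% consent): ${monthly_revenue(0.75):,.0f}")
print(f"Consent or pay (99% consent, 1% pay):      ${monthly_revenue(0.99, 0.01):,.0f}")
```

Even on invented numbers the shift is obvious: almost everyone consenting to the higher-value ads, plus a sliver of subscription money, comfortably beats a 25% block of low-value inventory. Which is why publishers like it.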

(Thanks Gregory B for the link.)
unique link to this extract


Elon Musk sues Apple and OpenAI, revealing his panic over OpenAI dominance • Ars Technica

Ashley Belanger:

»

After a public outburst over Grok’s App Store rankings, on Monday, Elon Musk followed through on his threat to sue Apple and OpenAI.

At first, Musk appeared fixated on ChatGPT consistently topping Apple’s “Must Have” app list—which Grok has never made—claiming Apple seemed to preference OpenAI, an Apple partner, over all chatbot rivals. But Musk’s filing shows that the X and xAI owner isn’t just trying to push for more Grok downloads on iPhones—he’s concerned that Apple and OpenAI have teamed up to completely dash his “everything app” dreams, which was the reason he bought Twitter.

At this point appearing to be genuinely panicked about OpenAI’s insurmountable lead in the chatbot market, Musk has specifically alleged that an agreement integrating ChatGPT into the iOS violated antitrust and unfair competition laws. Allegedly, the conspiracy is designed to protect Apple’s smartphone monopoly and block out AI rivals to lock in OpenAI’s dominance in the chatbot market.

As Musk sees it, Apple is supposedly so worried that X will use Grok to create a “super app” that replaces the need for a sophisticated smartphone that the iPhone maker decided to partner with OpenAI to limit X and xAI innovation. The complaint quotes Apple executive Eddy Cue as expressing “worries that AI might destroy Apple’s smartphone business,” due to patterns observed in foreign markets where super apps exist, like WeChat in China.

“In a desperate bid to protect its smartphone monopoly, Apple has joined forces with the company that most benefits from inhibiting competition and innovation in AI: OpenAI, a monopolist in the market for generative AI chatbots,” Musk’s lawsuit alleged.

«

One can see how ridiculous this is by looking at China, where WeChat is the “super app” that lets people do almost anything. Yet Apple doesn’t bar it from the App Store in China, nor has it built a competitor.

As for being anticompetitive by picking OpenAI – it’s anything but: Apple can choose any AI it wants. If it thinks Grok is better, it can slot it in. America’s litigation culture has long since got out of hand.
unique link to this extract


Israel vs. Iran… on the blockchain • Cryptadamus

“Michel de Cryptadamus”:

»

So-called “stablecoins” like Tether, whose values are pegged to that of so-called “real” money (e.g. US dollars), are a perfect example of extremely censorable cryptocurrencies. Any US dollars you are holding “on chain” in the form of Tether’s USDT tokens, Circle’s USDC tokens, the Trump family’s new USD1 tokens, or whatever other stablecoin you choose can be instantly zapped from afar by the folks at Tether or Circle or Trump HQ whenever they feel like it and for whatever reason they choose. Due to both the jurisdictional issues (most stablecoins are located in small island tax shelters) as well as the terms of service you agreed to when you touched the stablecoin you have pretty much no recourse of any kind.

…While the governments of both Israel and the U.S. periodically make the blockchain addresses they have asked Tether to blacklist public via court records, sanctions related press releases, or similar documentation, a) they do not always do this and b) even when they do the seizure orders are often unsealed weeks or months after the actual blacklisting happens. I could not find any governmental records about why this huge number of wallets were suddenly being blacklisted so I set out to do a bit of investigating.

It turned out that not only could a very large percentage (~30%) of the wallets that were blacklisted since slightly before the outbreak of Middle Eastern hostilities be linked to Iranian crypto exchanges like the aforementioned Nobitex with an extremely cursory scan of the blockchain, a couple of them could even be directly observed sending funds to and/or from a blockchain address the governments of both the U.S. and the U.K. claim belongs to Sa’id Ahmad Muhammad Al-Jamal, a sanctioned IRGC-connected money launderer with a Chinese passport.

«

Fascinating insight into the newest frontier for war: cutting crypto supply lines.
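
His “extremely cursory scan” boils down to a set-intersection exercise: take the newly blacklisted addresses, take the addresses already attributed to the Iranian exchanges, and see which blacklisted wallets ever moved funds to or from them. A minimal sketch of that logic in Python – the addresses and transfers here are placeholders, not real chain data:

```python
# Minimal sketch: what share of newly blacklisted USDT wallets ever transacted
# with known exchange addresses? Addresses and transfers are placeholders.
blacklisted = {"0xaaa1", "0xaaa2", "0xaaa3", "0xaaa4"}
exchange_addresses = {"0xnobitex1", "0xnobitex2"}   # wallets attributed to the exchange

# (sender, receiver) pairs, e.g. extracted from USDT Transfer events
transfers = [
    ("0xaaa1", "0xnobitex1"),
    ("0xnobitex2", "0xaaa2"),
    ("0xaaa3", "0xsomewhere"),
]

def linked_to_exchange(wallet):
    return any(
        (s == wallet and r in exchange_addresses) or
        (r == wallet and s in exchange_addresses)
        for s, r in transfers
    )

linked = {w for w in blacklisted if linked_to_exchange(w)}
share = len(linked) / len(blacklisted)
print(f"{len(linked)}/{len(blacklisted)} blacklisted wallets ({share:.0%}) linked to the exchanges")
```

The real version pulls the transfer and blacklist events from the chain itself, but the attribution step is no more sophisticated than this – which is rather the author’s point.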
unique link to this extract


Air pollution from oil and gas causes 90,000 premature US deaths each year, says new study • The Guardian

Dharna Noor:

»

More than 10,000 annual pre-term births are attributable to fine particulate matter from oil and gas, the authors found, also linking 216,000 annual childhood-onset asthma cases to the sector’s nitrogen dioxide emissions and 1,610 annual lifetime cancer cases to its hazardous air pollutants.

The highest number of impacts are seen in California, Texas, New York, Pennsylvania and New Jersey, while the per-capita incidences are highest in New Jersey, Washington DC, New York, California and Maryland.

The analysis by researchers at University College London and the Stockholm Environment Institute is the first to examine the health impacts – and unequal health burdens – caused by every stage of the oil and gas supply chain, from exploration to end use.

“We’ve long known that these communities are exposed to such levels of inequitable exposure as well as health burden,” said Karn Vohra, a postdoctoral research fellow in geography at University College London, who led the paper. “We were able to just put numbers to what that looks like.”

While Indigenous and Hispanic populations are most affected by pollution from exploration, extraction, transportation and storage, Black and Asian populations are most affected by emissions from processing, refining, manufacturing, distribution and usage.

«

The story is full of links, but none to the actual paper. It took me a little searching, but the clues sprinkled around the story let me find the original study. Journalists: please link to the studies. This one even has nice diagrams.
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Start Up No.2500: Russia churns out fake political news, YouTube trials AI editing, Meta to offer $800 smart glasses, and more


The rise of the AI obituary raises the question of whether another element of human effort is going to vanish in the face of the chatbot. CC-licensed photo by K P on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.

A selection of 9 links for you. Nice round number. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


Russia is quietly churning out fake content posing as US news • POLITICO

Dana Nickel:

»

A pro-Russian propaganda group is taking advantage of high-profile news events to spread disinformation, and it’s spoofing reputable organizations — including news outlets, nonprofits and government agencies — to do so.

According to misinformation tracker NewsGuard, the campaign — which has been tracked by Microsoft’s Threat Analysis Center as Storm-1679 since at least 2022 — takes advantage of high-profile events to pump out fabricated content from various publications, including ABC News, BBC and most recently POLITICO.

This year, the group has focused on flooding the internet with fake content surrounding the German snap elections and the upcoming Moldovan parliamentary vote. The campaign also sought to plant false narratives around the war in Ukraine ahead of President Donald Trump’s meeting with Russian President Vladimir Putin on Friday.

McKenzie Sadeghi, AI and foreign influence editor at NewsGuard, said in an interview that since early 2024, the group has been publishing “pro-Kremlin content en masse in the form of videos” mimicking these organizations.

“If even just one or a few of their fake videos go viral per year, that makes all of the other videos worth it,” she said.

While online Russian influence operations have existed for many years, security experts say artificial intelligence is making it harder for people to discern what’s real.

«

AI is all over the place and people absolutely can’t tell the difference. Can’t tell the difference in writing, can’t distinguish it in photos, and soon video and audio will follow. Is this how a culture or a civilisation loses its mind?
unique link to this extract


YouTube secretly used AI to edit people’s videos. The results could bend reality • BBC Future

Thomas Germain:

»

Rick Beato’s face just didn’t look right. “I was like ‘man, my hair looks strange’,” he says. “And the closer I looked it almost seemed like I was wearing makeup.” Beato runs a YouTube channel with over five million subscribers, where he’s made nearly 2,000 videos exploring the world of music. Something seemed off in one of his recent posts, but he could barely tell the difference. “I thought, ‘am I just imagining things?’”

It turns out, he wasn’t. In recent months, YouTube has secretly used artificial intelligence (AI) to tweak people’s videos without letting them know or asking permission. Wrinkles in shirts seem more defined. Skin is sharper in some places and smoother in others. Pay close attention to ears, and you may notice them warp. These changes are small, barely visible without a side-by-side comparison. Yet some disturbed YouTubers say it gives their content a subtle and unwelcome AI-generated feeling.

There’s a larger trend at play. A growing share of reality is pre-processed by AI before it reaches us. Eventually, the question won’t be whether you can tell the difference, but whether it’s eroding our ties to the world around us.

“The more I looked at it, the more upset I got,” says Rhett Shull, another popular music YouTuber. Shull, a friend of Beato’s, started looking into his own posts and spotted the same strange artefacts. He posted a video on the subject that’s racked up over 500,000 views.

“If I wanted this terrible over-sharpening I would have done it myself. But the bigger thing is it looks AI-generated. I think that deeply misrepresents me and what I do and my voice on the internet. It could potentially erode the trust I have with my audience in a small way. It just bothers me.”

…Now, after months of rumors in comment sections, the company has finally confirmed it is altering a limited number of videos on YouTube Shorts, the app’s short-form video feature.

“We’re running an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise and improve clarity in videos during processing (similar to what a modern smartphone does when you record a video),” said Rene Ritchie, YouTube’s head of editorial and creator liaison, in a post on X.

«

The trajectory is that unless it’s calamitous and people watch fewer of them (the only metric Google would care about – complaints won’t get a hearing), then this will become standard and, after a few months of it, people will shrug and accept it. Because where else are they going to go?
unique link to this extract


Meta to unveil Hypernova smart glasses with display, wristband at Connect • CNBC

Salvador Rodriguez, Lora Kolodny and Jonathan Vanian:

»

Meta is planning to use its annual Connect conference next month to announce a deeper push into smart glasses, including the launch of the company’s first consumer-ready glasses with a display, CNBC has learned.

That’s one of the two new devices Meta is planning to unveil at the event, according to people familiar with the matter. The company will also launch its first wristband that will allow users to control the glasses with hand gestures, the people said.

Connect is a two-day conference for developers focused on virtual reality, AR and the metaverse. It was originally called Oculus Connect and obtained its current moniker after Facebook changed its parent company name to Meta in 2021.

The glasses are internally codenamed Hypernova and will include a small digital display in the right lens of the device, said the people, who asked not to be named because the details are confidential.

The device is expected to cost about $800 and will be sold in partnership with EssilorLuxottica, the people said.

«

This is surely going to be the next big category, and within a year or so analysts will be looking for signs that Apple is developing something in this space, and marking it down if it isn’t.

If you don’t believe me that smart glasses have a huge potential market, count the number of people you see tomorrow walking along a pavement staring into their phones. Every one would buy smart glasses.
unique link to this extract


Hackers who exposed North Korean government hacker explain why they did it • TechCrunch

Lorenzo Franceschi-Bicchierai:

»

Earlier this year, two hackers broke into a computer and soon realized the significance of what this machine was. As it turned out, they had landed on the computer of a hacker who allegedly works for the North Korean government.

The two hackers decided to keep digging and found evidence that they say linked the hacker to cyberespionage operations carried out by North Korea, exploits and hacking tools, and infrastructure used in those operations.

Saber, one of the hackers involved, told TechCrunch that they had access to the North Korean government worker’s computer for around four months, but as soon as they understood what data they got access to, they realized they eventually had to leak it and expose what they had discovered.

“These nation-state hackers are hacking for all the wrong reasons. I hope more of them will get exposed; they deserve to be,” said Saber, who spoke to TechCrunch after he and cyb0rg published an article in the legendary hacking e-zine Phrack, disclosing details of their findings. 

«

The article has lots of deep hacker stuff – shall we shade past the part where these guys broke into someone’s computer, and then realised it was a state-sponsored hacker’s? – including this:

»

Kimsuky [the North Korean hacking group], you are not a hacker. You are driven by financial greed, to enrich your leaders, and to fulfill their political agenda. You steal from others and favour your own. You value yourself above the others: You are morally perverted.

I am a Hacker and I am the opposite to all that you are. In my realm, we are all alike. We exist without skin color, without nationality, and without political agenda. We are slaves to nobody.

«

Nice to believe, but not sure about the lack of political agenda.
unique link to this extract


The rise of AI tools that write about you when you die • The Washington Post

Drew Harwell:

»

Funeral directors are increasingly asking the relatives of the deceased whether they would prefer for AI to write the obituary, rather than take on the task themselves. Josh McQueen, the vice president of marketing and product for the funeral-home management software Passare, said its AI tool has written tens of thousands of obituaries nationwide in the past few years.

Tech start-ups are also working to build obituary generators that are available to everyone in their time of grief, for a small fee. Sonali George, the founder of one such tool called CelebrateAlly, said the AI functions as an “enabler for human connection” because it can help people skip past an overwhelming task and still end up with something that can bring their family together.

“Imagine for the person who just died, [wouldn’t] that person want their best friend to say a heartfelt tribute that makes everybody laugh, brings out the best, with AI?” she said. “If you had the tool to do ‘25 reasons why I love you, mom,’” she added, “wouldn’t it still mean something, even if it was written by a machine?”

…To McQueen, the funeral software executive, the technology’s value is obvious. For a human, the task of elegantly summing up a loved one’s life — while also navigating the sadness and logistics of their death — can be stressful and emotionally draining. For a large language model, it’s all just text.

“You’re given this assignment to write 500 words, and you want to be loving and profound, but you’re dealing with this grief, so you sit at your computer and you’re paralyzed,” he said. “If this can help get some of your thoughts and ideas down on paper … that to me is a win.”

Thousands of funeral homes now use the company’s software, he said, and many of them let families access the AI tool through their online funeral portals. Beyond clearing writer’s block, he said, the AI is unmatched in being able to quickly adjust an obituary’s length or tone.

“Do you want it to be more celebratory? Traditional? Poetic? Humorous?” McQueen said. “It provides just a new flavor on it, if you will.”

«

I’m absolutely certain that the next story coming up in this sequence is preachers getting chatbots to write sermons.
unique link to this extract


Our response to Mississippi’s Age Assurance Law • Bluesky

“The Bluesky team”:

»

Mississippi’s HB1126 requires platforms to implement age verification for all users before they can access services like Bluesky. That means, under the law, we would need to verify every user’s age and obtain parental consent for anyone under 18. The potential penalties for non-compliance are substantial — up to $10,000 per user. Building the required verification systems, parental consent workflows, and compliance infrastructure would require significant resources that our small team is currently unable to spare as we invest in developing safety tools and features for our global community, particularly given the law’s broad scope and privacy implications.

While we share the goal of protecting young people online, we have concerns about this law’s implementation:

• Broad scope: The law requires age verification for all users, not just those accessing age-restricted content, which affects the ability of everyone in Mississippi to use Bluesky
• Barriers to innovation: The compliance requirements disadvantage newer and smaller platforms like Bluesky, which do not have the luxury of big teams to build the necessary tooling. The law makes it harder for people to engage in free expression and chills the opportunity to communicate in new ways
• Privacy implications: The law requires collecting and storing sensitive personal information from all users, including detailed tracking of minors.

Starting [from last Friday], if you access Bluesky from a Mississippi IP address, you’ll see a message explaining why the app isn’t available. This block will remain in place while the courts decide whether the law will stand.

«

Lucky Mississippi! Meanwhile if you want to read about the various verification methods in use, there’s this WSJ article. (Thanks Gregory B for the WSJ one.)
unique link to this extract


Samsung reportedly looking to partner with Intel in the chip industry to leverage President Trump’s ‘personal support’ for Team Blue • WCCftech

Muhammad Zuhair:

»

according to Taiwan Economic Daily, citing Korean sources, it is reported that Samsung is looking for a ‘strategic partnership’ with US chipmakers such as Intel, in a bid to secure a better trade deal with the Trump administration.

It is claimed that Samsung is exploring partnerships with American companies to ‘please’ the Trump administration and ensure that its regional operations aren’t affected by hefty tariffs. It is speculated that if Samsung manages to strike a deal with Intel, it would allow the Korean giant to see an elevated status in the eyes of President Trump, mainly since Intel has become an important factor for the current US administration. While details around how the partnership could pan out are uncertain, we might know how it could turn out.

In a previous report, we discussed how Intel is abandoning its pursuit of glass substrates, and in the midst of it, several engineers from the firm are moving to Samsung’s Electro-Mechanics division in the US, since the Korean giant sees glass substrates as an essential part of its prospects. More importantly, since Intel is looking to license its glass substrate technology, Samsung could also play a role in this by producing end solutions for Team Blue, ultimately allowing both firms to leverage the packaging technology.

«

Nobody could accuse Samsung of not spotting an opportunity. Chapeau.
unique link to this extract


The US takes a 10% stake in Intel as part of Trump’s big tech push • CNN Business

Clare Duffy and Lisa Eadicicco:

»

The United States government is making an $8.9bn investment in Intel common stock, giving the Trump administration a roughly 10% stake in the struggling chipmaker, Intel and the president announced on Friday.

“It is my Great Honor to report that the United States of America now fully owns and controls 10% of INTEL, a Great American Company that has an even more incredible future,” Trump wrote in a Truth Social post on Friday.

The announcement came after Trump said earlier in the Oval Office on Friday that the CEO of Intel had agreed to such a deal, adding that he hopes to strike similar deals with other companies in the future.

“I said, I think you should pay us 10% of your company,” Trump said of his conversations with Intel CEO Lip-Bu Tan. “And they said yes.”

«

Intel says there won’t be government representation on the board – let’s see how long that lasts! – but the obvious reason for this, which Trump probably didn’t know about two weeks ago, is that if things get sticky with China, you need to be able to make some chips outside Taiwan.
unique link to this extract


University of Chicago lost money on crypto, then froze research when federal funding was cut • Stanford Review

Teddy Ganea:

»

UChicago’s financial position is clear: Unlike near-peer institutions, its endowment is not large enough to sustain its spending and debt. The university carries nearly $6bn in debt while running annual budget deficits exceeding $200m, all on an endowment three times smaller than Stanford’s.

To compensate, the university has focused on expanding lucrative certification programs, increasing donations, raising tuitions, and cutting costs, though many faculty and students viscerally disagree with the administration on which costs to cut.

Yet these debates neglect the most important factor: the UChicago endowment’s weak returns, driven by poor investment decisions.

Possibly the most notorious example is the university’s foray into cryptocurrency. Four sources, as well as widespread campus rumors, allege that the university lost tens of millions investing in crypto around 2021. Given UChicago’s extraordinary 37.6% endowment gain that year, far beyond what conventional investments would have yielded, it’s likely they took significant risks. But if those gambles paid off in the short term, they quickly unraveled.

UChicago’s endowment remains lower today than in 2021.

…Had UChicago simply matched the market, its endowment would be $6.45bn larger today—more than enough to repay its entire debt. Obviously, universities cannot just track the market, as they must hedge for downturns to maintain financial stability. But even if UChicago had only matched its Ivy League near-peers, its endowment would still be $3.69bn larger.

«

Those who bought bitcoin and stuck with it, though, must be laughing: that part of the crypto bubble has not, so far, burst.
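
The $6.45bn claim is a counterfactual-compounding calculation: grow the endowment at the returns it actually reported, grow it again at a market benchmark, and take the difference. A minimal sketch of that shape in Python – the starting value and all returns except the quoted 37.6% are hypothetical placeholders, not UChicago’s real figures:

```python
# Shape of the "had it simply matched the market" comparison.
# Only the 37.6% FY2021 gain is from the article; the starting value and the
# other yearly returns are hypothetical placeholders.
START_BN = 10.0
actual_returns    = [0.376, -0.08, 0.02, 0.06]   # hypothetical apart from 37.6%
benchmark_returns = [0.30, -0.18, 0.20, 0.25]    # hypothetical market returns

def compound(start, returns):
    value = start
    for r in returns:
        value *= 1 + r
    return value

actual = compound(START_BN, actual_returns)
market = compound(START_BN, benchmark_returns)
print(f"Actual path:  ${actual:.2f}bn")
print(f"Market path:  ${market:.2f}bn")
print(f"Shortfall:    ${market - actual:.2f}bn")
```

The Stanford Review presumably did something like this with the real numbers; the sketch just shows why a couple of bad years can leave a multi-billion-dollar gap even after a spectacular 2021.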
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Start Up No.2499: Meta chatbot deludes ill man, Trump strips satellite data, let’s vibe code!, the TikTok question, “AI journalism”, and more


A social media scam is making golf fans think women professionals are getting in touch to offer private dinners. You guessed – they aren’t. CC-licensed photo by Justin Falconer on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


A selection of 9 links for you. Fore(head). I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


A flirty Meta AI bot invited a retiree to meet. He never made it home • Reuters

Jeff Horwitz:

»

When Thongbue Wongbandue began packing to visit a friend in New York City one morning in March, his wife Linda became alarmed.

“But you don’t know anyone in the city anymore,” she told him. Bue, as his friends called him, hadn’t lived in the city in decades. And at 76, his family says, he was in a diminished state: He’d suffered a stroke nearly a decade ago and had recently gotten lost walking in his neighborhood in Piscataway, New Jersey.

Bue brushed off his wife’s questions about who he was visiting. “My thought was that he was being scammed to go into the city and be robbed,” Linda said.

She had been right to worry: Her husband never returned home alive. But Bue wasn’t the victim of a robber. He had been lured to a rendezvous with a young, beautiful woman he had met online. Or so he thought.

In fact, the woman wasn’t real. She was a generative artificial intelligence chatbot named “Big sis Billie,” a variant of an earlier AI persona created by the giant social-media company Meta Platforms in collaboration with celebrity influencer Kendall Jenner. During a series of romantic chats on Facebook Messenger, the virtual woman had repeatedly reassured Bue she was real and had invited him to her apartment, even providing an address.

“Should I open the door in a hug or a kiss, Bu?!” she asked, the chat transcript shows.

Rushing in the dark with a roller-bag suitcase to catch a train to meet her, Bue fell near a parking lot on a Rutgers University campus in New Brunswick, New Jersey, injuring his head and neck. After three days on life support and surrounded by his family, he was pronounced dead on March 28.

Meta declined to comment on Bue’s death or address questions about why it allows chatbots to tell users they are real people or initiate romantic conversations. The company did, however, say that Big sis Billie “is not Kendall Jenner and does not purport to be Kendall Jenner.”

«

Are chatbots uncovering the extent to which people will believe anything, or exacerbating the problem? This is going to be the big social question for the next few years.
unique link to this extract


Trump admin strips ocean and air pollution monitoring from next-gen weather satellites • CNN

Andrew Freedman:

»

The National Oceanic and Atmospheric Administration is narrowing the capabilities and reducing the number of next-generation weather and climate satellites it plans to build and launch in the coming decades, two people familiar with the plans told CNN.

This move — which comes as hurricane season ramps up with Erin lashing the East Coast — fits a pattern in which the Trump administration is seeking to not only slash climate pollution rules, but also reduce the information collected about the pollution in the first place. Critics of the plan also say it’s a short-sighted attempt to save money at the expense of understanding the oceans and atmosphere better.

Two planned instruments, one that would measure air quality, including pollution and wildfire smoke, and another that would observe ocean conditions in unprecedented detail, are no longer part of the project, the sources said.

“This administration has taken a very narrow view of weather,” one NOAA official told CNN, noting the jettisoned satellite instruments could have led to better enforcement and regulations on air pollution by more precisely measuring it.

The cost of the four satellites, known as the Geostationary Extended Observations, nicknamed GeoXO, would be lower than originally spelled out under the Biden administration, at a maximum of $500m per year for a total of $12bn, but some scientists say the cheaper up-front price would come at a cost to those who would have benefited from the air and oceans data.

«

It’s going to take so long to fill the gaps that are being created by this administration.
unique link to this extract


Why did a $10bn startup let me vibe-code for them—and why did I love it? • WIRED

Lauren Goode:

»

Since 2022, the Notion app has had an AI assistant to help users draft their notes. Now the company is refashioning this as an “agent,” a type of AI that will work autonomously in the background on your behalf while you tackle other tasks. To pull this off, human engineers need to write lots of code.

They open up Cursor and select which of several AI models they’d like to tap into. Most engineers I chatted with during my visit preferred Claude, or they used the Claude Code app directly. After choosing their fighter, the engineers ask their AI to draft code to build a new thing or fix a feature. The human programmer then debugs and tests the output as needed—though the AIs help with this too—before moving the code to production.

At its foundational core, generative AI is enormously expensive. The theoretical savings come in the currency of time, which is to say, if AI helped Notion’s cofounder and CEO Ivan Zhao finish his tasks earlier than expected, he could mosey down to the jazz club on the ground floor of his Market Street office building and bliss out for a while. Ivan likes jazz music. In reality, he fills the time by working more. The fantasy of the four-day workweek will remain just that.

My workweek at Notion was just two days, the ultimate code sprint. (In exchange for full access to their lair, I agreed to identify rank-and-file engineers by first name only.) My first assignment was to fix the way a chart called a mermaid diagram appears in the Notion app. Two engineers, Quinn and Modi, told me that these diagrams exist as SVG files in Notion and, despite being called scalable vector graphics, can’t be scaled up or zoomed into like a JPEG file. As a result, the text within mermaid diagrams on Notion is often unreadable.

Quinn slid his laptop toward me. He had the Cursor app open and at the ready, running Claude. For funsies, he scrolled through part of Notion’s code base. “So, the Notion code base? Has a lot of files. You probably, even as an engineer, wouldn’t even know where to go,” he said, politely referring to me as an engineer. “But we’re going to ignore all that. We’re just going to ask the AI on the sidebar to do that.”

«

Yes, why would a startup hoping to get favourable coverage let a journalist mess around with its codebase in a way that it could revert as soon as she’s gone? Complete mystery.
unique link to this extract


The AI hype is fading fast • Los Angeles Times

Michael Hiltzik:

»

“What I had not realized,” [Joseph] Weizenbaum wrote in 1976, “is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.” Weizenbaum warned that the “reckless anthropomorphization of the computer” — that is, treating it as some sort of thinking companion — produced a “simpleminded view of intelligence.”

That tendency has been exploited by today’s AI promoters. They label the frequent mistakes and fabrications produced by AI bots as “hallucinations,” which suggests that the bots have perceptions that may have gone slightly awry. But the bots “don’t have perceptions,” Bender and Hanna write, “and suggesting that they do is yet more unhelpful anthropomorphization.”

The general public may finally be cottoning on to the failed promise of AI more generally. Predictions that AI will lead to large-scale job losses in creative and STEM fields (science, technology, engineering and math) might inspire feelings that the whole enterprise was a tech-industry scam from the outset.

Predictions that AI would yield a burst of increased worker productivity haven’t been fulfilled; in many fields, productivity declines, in part because workers have to be deployed to double-check AI outputs, lest their mistakes or fabrications find their way into mission-critical applications — legal briefs incorporating nonexistent precedents, medical prescriptions with life-threatening ramifications and so on.

Some economists are dashing cold water on predictions of economic gains more generally. MIT economist Daron Acemoglu, for example, forecast last year that AI would produce an increase of only about 0.5% in U.S. productivity and an increase of about 1% in gross domestic product over the next 10 years, mere fractions of the AI camp’s projections.

«

Hiltzik makes a long argument about all the AI hype being, well, overhyped. Are the AI boosters right? Or the AI doomers? Only one way to find out.
unique link to this extract


Palestine was the problem with TikTok • The Verge

Sarah Jeong:

»

The contents of that March 2024 classified briefing that made 50 congressional representatives freak out [and back a ban on TikTok] have never been made public. But it’s not hard to figure out what changed between 2022 and 2024. “Oct. 7 [2023, when Hamas murdered hundreds of Israelis in a border incursion] really opened people’s eyes to what’s happening on TikTok,” [Democrat representative Raja] Krishnamoorthi told The Wall Street Journal a few days before the vote. Multiple sources told the WSJ that [Republican representative Mike] Gallagher and Krishnamoorthi’s efforts had been “revived in part by the fallout from the Oct. 7 attack by Hamas on Israel.” Gallagher was even more transparent about where he stood on the matter, writing an op-ed in The Free Press titled “Why Do Young Americans Support Hamas? Look at TikTok,” describing the app as “digital fentanyl” that was “brainwashing our youth.”

“TikTok is a tool China uses to spread propaganda to Americans, now it’s being used to downplay Hamas terrorism,” then-Sen. Marco Rubio (R-FL) wrote on X in November 2023. “TikTok needs to be shut down. Now.”

“TikTok — and its parent company ByteDance — are threats to American national security,” wrote Sen. Josh Hawley (R-MO) in a letter to Treasury Secretary Janet Yellen, also in November 2023. He decried “TikTok’s power to radically distort the world-picture that America’s young people encounter,” describing “Israel’s unfolding war with Hamas” as “a crucial test case.”

«

This feels a bit like squashing and squeezing the facts to fit a narrative, from all sides. Gallagher is blind to the fact that even the limited coverage from inside Gaza showed a response that never looked like an attempt to recover hostages. But this article never quite produces the gun that has the corresponding smoke; Gallagher is also exercised about TikTok’s capability as “CCP spyware”.

Meanwhile, months after TikTok should by law have been closed or sold in the US, neither has happened and the Trump administration is opening a TikTok White House account.
unique link to this extract


Analysis: record solar growth keeps China’s CO2 falling in first half of 2025 • Carbon Brief

Lauri Myllyvirta:

»

Clean-energy growth helped China’s carbon dioxide (CO2) emissions fall by 1% year-on-year in the first half of 2025, extending a declining trend that started in March 2024.

The CO2 output of the nation’s power sector – its dominant source of emissions – fell by 3% in the first half of the year, as growth in solar power alone matched the rise in electricity demand.

The new analysis for Carbon Brief shows that record solar capacity additions are putting China’s CO2 emissions on track to fall across 2025 as a whole.

Other key findings include:

• The growth in clean power generation, some 270 terawatt hours (TWh) excluding hydro, significantly outpaced demand growth of 170TWh  in the first half of the year.

• Solar capacity additions set new records due to a rush before a June policy change, with 212 gigawatts (GW) added in the first half of the year.

• This rush means solar is likely to set an annual record for growth in 2025, becoming China’s single-largest source of clean power generation in the process.

• Coal-power capacity could surge by as much as 80-100GW this year, potentially setting a new annual record, even as coal-fired electricity generation declines.

«

Paradoxically, China is using more coal for chemicals, but using less for electricity generation, which is how its overall carbon dioxide output is falling. Falling is good!
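
The arithmetic behind the headline is worth spelling out: if clean generation grows by more than electricity demand, fossil generation has to shrink by the difference, and power-sector CO2 falls with it. A minimal sketch using the figures quoted above (the coal emissions factor is my illustrative assumption):

```python
# If clean generation grows faster than demand, fossil output must fall.
# The 270TWh and 170TWh figures are from the analysis; the emissions factor
# for the displaced coal power is an illustrative assumption.
CLEAN_GROWTH_TWH = 270        # non-hydro clean generation growth, H1 2025
DEMAND_GROWTH_TWH = 170       # electricity demand growth, H1 2025
COAL_KG_CO2_PER_KWH = 0.85    # assumed emissions factor for displaced coal power

fossil_change_twh = DEMAND_GROWTH_TWH - CLEAN_GROWTH_TWH   # negative = decline
co2_change_mt = fossil_change_twh * COAL_KG_CO2_PER_KWH    # TWh x kg/kWh = Mt

print(f"Fossil generation change: {fossil_change_twh} TWh")
print(f"Implied CO2 change: ~{co2_change_mt:.0f} Mt")
```

On those numbers fossil generation drops by about 100TWh, knocking something like 85Mt off power-sector emissions in six months – roughly consistent with the 3% fall Myllyvirta reports.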
unique link to this extract


The catfishing scam putting fans and female golfers in danger • The Athletic

Carson Kessler and Gabby Herzig:

»

Meet Rodney Raclette. Indiana native. 62 years old. Big golfer. A huge fan of the LPGA.

On Aug. 4, Rodney opened an Instagram account with the handle @lpgafanatic6512, and he quickly followed some verified accounts for female golfers and a few other accounts that looked official.

Within 20 minutes of creating his account and with zero posts to his name, Rodney received a message from what at first glance appeared to be the world’s No. 2-ranked female golfer, Nelly Korda.

“Hi, handsomeface, i know this is like a dream to you. Thank you for being a fan,” read a direct message from @nellykordaofficialfanspage2.

The real Nelly Korda was certainly not messaging Rodney — and Rodney doesn’t actually exist. The Athletic created the Instagram account of the fictitious middle-aged man to test the veracity and speed of an ever-increasing social media scam pervading the LPGA.

The gist of the con goes like this: Social media user is a fan of a specific golfer; scam account impersonating that athlete reaches out and quickly moves the conversation to another platform like Telegram or WhatsApp to evade social media moderation tools; scammer offers a desirable object or experience — a private dinner, VIP access to a tournament, an investment opportunity — for a fee; untraceable payments are made via cryptocurrency or gift cards. Then, once the spigot of cash is turned off, the scammer disappears.

…“We’ve definitely had people show up at tournaments who thought they had sent money to have a private dinner with the person,” said Scott Stewart, who works for TorchStone Global, a security firm used by the LPGA. “But then also, we’ve had people show up who were aggrieved because they had been ripped off, there’s a tournament nearby, and they wanted to kind of confront the athlete over the theft.”

«

This is the danger: people get understandably angry when they’re told they’ve been scammed – that they couldn’t tell the difference between a fake account and a real one, and that, in effect, they have more money than sense.
unique link to this extract


Wired and Business Insider remove ‘AI-written’ freelance articles • Press Gazette

Charlotte Tobitt:

»

Wired and Business Insider have removed news features written by a freelance journalist after concerns they are likely AI-generated works of fiction.

Freedom of expression non-profit Index on Censorship is also in the process of taking down a magazine article by the same author after concerns were raised by Press Gazette. The publisher has concluded that it “appears to have been written by AI”.

Several other UK and US online publications have published questionable articles by the same person, going by the name of Margaux Blanchard, since April.

Most of the published stories contained case studies of named people whose details Press Gazette was unable to verify online, casting doubt on whether any of the quotes or facts contained in the articles are real.

Press Gazette was alerted to this author by Dispatch editor and former Unherd deputy editor Jacob Furedi.

Furedi set up Dispatch as his own subscription and syndication-based publication dedicated to long-form reportage earlier this year.

He received a pitch from Blanchard at the start of August in which she offered a reported piece about “Gravemont, a decommissioned mining town in rural Colorado that has been repurposed into one of the world’s most secretive training grounds for death investigation”. The pitch continued: “I want to tell the story of the scientists, ex-cops, and former miners who now handle the dead daily — not as mourners, but as archivists of truth…”

«

This is, one has to note, quite a clever bit of promotion by Furedi for his new site, but the story he tells of the pitch is very ChatGPT-flavoured (death investigation??). What is notable is that Blanchard was asking for quite a big payment – £500 for an article. So if some freelancer has figured out that chatbots are a great way to make up convincing content, and get paid for it because nobody checks anything any more, well, that's playing the system perfectly.
unique link to this extract


‘Nobody wants a robot to read them a story!’ The creatives and academics rejecting AI – at work and at home • The Guardian

Emine Saner:

»

Using AI would feel like cheating, but Tom [who works in IT in the UK government] worries refusing to do so now puts him at a disadvantage. “I almost feel like I have no choice but to use it at this point. I might have to put morals aside.”

Others, despite their misgivings, limit how they use it, and only for specific tasks. Steve Royle, professor of cell biology at the University of Warwick, uses ChatGPT for the “grunt work” of writing computer code to analyse data. “But that’s really the limit. I don’t want it to generate code from scratch. When you let it do that, you spend way more time debugging it afterwards. My view is, it’s a waste of time if you let it try and do too much for you.” Accurate or not, he also worries that if he becomes too reliant on AI, his coding skills will atrophy. “The AI enthusiasts say, ‘Don’t worry, eventually nobody will need to know anything.’ I don’t subscribe to that.”

Part of his job is to write research papers and grant proposals. “I absolutely will not use it for generating any text,” says Royle. “For me, in the process of writing, you formulate your ideas, and by rewriting and editing, it really crystallises what you want to say. Having a machine do that is not what it’s about.”

Generative AI, says film-maker and writer Justine Bateman, “is one of the worst ideas society has ever come up with”. She says she despises how it incapacitates us. “They’re trying to convince people they can’t do the things they’ve been doing easily for years – to write emails, to write a presentation. Your daughter wants you to make up a bedtime story about puppies – to write that for you.”

«

Neat tale about refuseniks. Unfortunately, you know how this progresses already.
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Start Up No.2498: Meta accused of inflating Shop ad data, how AI search really works, Sony bumps PS5 prices, and more


The Bahamas turn out to be the perfect location for a homeschooling girl to become a maths genius who solves a longstanding puzzle. CC-licensed photo by Christine Warner-Morin on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


There may be another post coming this week at the Social Warming Substack on Friday at 0845 UK time. Free signup.


A selection of 9 links for you. Wave theory. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


Whistleblower alleges Meta artificially boosted Shops ads performance • AdWeek

Trishla Ostwal:

»

Meta wanted advertisers to believe its ecommerce ad product, Shops ads, was outperforming the competition, per a whistleblower complaint filed in a UK court.

The former employee alleges the social media giant artificially inflated return on ad spend (ROAS) by counting shipping fees as revenue, subsidizing bids in ad auctions, and applying undisclosed discounts.

The complaint, viewed by ADWEEK, was filed with the London Central Employment Tribunal on Wednesday August 20 by Samujjal Purkayastha, a former product manager on Meta’s Shops ads team. The document claims Meta artificially inflated performance metrics to push brands toward its fledgling ecommerce ad product. 

The company’s motivation, the complaint says, was in part to combat Apple’s 2021 privacy changes that cut the troves of iOS tracking information that had long powered Meta’s ad machine.

Meta’s former chief financial officer (CFO), David Wehner, said the changes would cost “on the order of $10 billion” in losses during the company’s Q4 2021 earnings call. User purchases on Facebook or Instagram Shops pages would provide more first-party data, however.

Purkayastha, who joined Meta (then Facebook) in 2020 as a product manager on the Facebook Artificial Intelligence Applied Research team, was reassigned to the Shops Ads team in March 2022 and remained at the company until Feb. 19, 2025, when he was terminated.

He alleged that during internal reviews in early 2024, Meta data scientists found the return on ad spend (ROAS) from Shops ads had been inflated between 17% and 19%. This discrepancy stemmed from Meta counting shipping fees and taxes as part of a sale, even though that money never went to merchants, he alleged.

«

Employment tribunals are about dismissals; a judge said the case can proceed, but the hearing won’t be until next year. There might be more documents before then.
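
ROAS itself is just attributed revenue divided by ad spend, so the alleged mechanism is easy to illustrate. A minimal sketch with made-up numbers – nothing below comes from Meta's data; only the 17–19% range is from the complaint:

```python
# Hypothetical illustration of the alleged inflation mechanism: counting
# shipping fees and taxes as revenue, even though merchants never see
# that money. All numbers below are invented for illustration.
ad_spend = 1_000.00              # what the advertiser paid Meta
merchant_revenue = 4_000.00      # what actually reached the merchant
shipping_and_taxes = 700.00      # pass-through costs, not merchant revenue

true_roas = merchant_revenue / ad_spend
reported_roas = (merchant_revenue + shipping_and_taxes) / ad_spend
inflation_pct = (reported_roas / true_roas - 1) * 100

print(f"true ROAS {true_roas:.2f}, reported {reported_roas:.2f}, "
      f"inflated by {inflation_pct:.1f}%")   # -> inflated by 17.5%
```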
unique link to this extract


The neural network hierarchy: how foundation models actually rank your content for AI search results • GEO Platform

“AI Content Team”:

»

If you’ve spent a decade tuning pages for Google’s link graph and user signals, the rise of AI-powered “search” feels like a new continent with its own topography. Foundation models — the large, general-purpose neural networks that power chat assistants and AI overviews — aren’t just another ranking algorithm. They evaluate and surface content according to an internal hierarchy that looks nothing like classic web search. And that’s important: recent, large-scale analyses and industry reports show that the signals that make a site authoritative on the web are largely orthogonal to the signals these models use to decide what to cite and surface in AI answers.

In April 2025, an analysis of 41 million AI search results across platforms like ChatGPT, Google AI Overviews, Perplexity, and Copilot showed a near-total disconnect between traditional SEO metrics and AI citation behavior — 95% of citation variance couldn’t be explained by traffic metrics (r² = 0.05), and 97.2% couldn’t be explained by backlink profiles (r² = 0.038).

In plain English: domain authority, backlink counts, and even raw traffic aren’t the primary features these models use to decide what to quote.

[…Instead the process used is:]

• Representation: Text is turned into dense vectors by the model’s encoder. These embeddings capture semantic meaning and content features the model was trained to value (factuality, explicitness, structure)
• Retrieval: Vector search (often approximate nearest neighbor) surfaces candidate documents or passages based on embedding similarity rather than link-based authority
• Reranking / Fusion: Scored candidates are re-evaluated by the model itself in context; the model weighs freshness, explicitness of answers, and internal confidence when deciding what to cite or include
• Generation / Citation: The model constructs a response and decides whether to cite an external source, often based on whether the retrieved passage can be integrated without hallucination.

«

This is a detailed post, and it makes clear that AI "search" – or what AI systems rely on – is very, very different from what we've grown used to over nearly 30 years of Google backlinks. There's a toy sketch of that retrieve-and-rerank pipeline below.
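
Everything in the sketch is an assumption for illustration – the bag-of-words stand-in for an embedding, the documents and the rerank weights are all invented; real systems use learned embedding models and rerankers – but the shape (embed, retrieve by vector similarity, rerank on freshness and answer explicitness, with no backlinks anywhere) matches the four steps quoted above.

```python
# Toy retrieve-and-rerank pipeline in the shape the post describes:
# embed -> nearest-neighbour retrieval -> rerank on content features.
# The "embedding" is a crude bag-of-words stand-in and the weights are
# invented; real systems use learned embedding models and rerankers.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in 'embedding': a sparse bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical documents, each tagged with features the reranker weighs.
docs = [
    {"text": "Best tennis rackets of 2025, tested and ranked", "freshness": 0.9, "explicit_answer": 0.8},
    {"text": "A history of tennis racket manufacturing", "freshness": 0.2, "explicit_answer": 0.3},
    {"text": "Best rackets for beginners: a buying guide", "freshness": 0.7, "explicit_answer": 0.9},
]

query = "best tennis racket"
q_vec = embed(query)

# Retrieval: candidates ranked purely by embedding similarity - no backlinks.
candidates = sorted(docs, key=lambda d: cosine(q_vec, embed(d["text"])), reverse=True)

# Rerank: blend similarity with freshness and how directly the passage answers.
def rerank_score(d: dict) -> float:
    return (0.5 * cosine(q_vec, embed(d["text"]))
            + 0.3 * d["explicit_answer"]
            + 0.2 * d["freshness"])

for d in sorted(candidates, key=rerank_score, reverse=True):
    print(f"{rerank_score(d):.2f}  {d['text']}")
```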
unique link to this extract


At 17, Hannah Cairo solved a major math mystery • Quanta Magazine

Kevin Hartnett:

»

It’s not that anyone ever said sophisticated math problems can’t be solved by teenagers who haven’t finished high school. But the odds of such a result would have seemed long.

Yet a paper posted on February 10 left the math world by turns stunned, delighted and ready to welcome a bold new talent into its midst. Its author was Hannah Cairo, just 17 at the time. She had solved a 40-year-old mystery about how functions behave, called the Mizohata-Takeuchi conjecture.

“We were all shocked, absolutely. I don’t remember ever seeing anything like that,” said Itamar Oliveira of the University of Birmingham, who has spent the past two years trying to prove that the conjecture was true. In her paper, Cairo showed that it’s false. The result defies mathematicians’ usual intuitions about what functions can and cannot do.

So does Cairo herself, who found her way to a proof after years of homeschooling in isolation and an unorthodox path through the math world.

Cairo grew up in Nassau, the Bahamas, where her parents had moved so that her dad could take a job as a software developer. She and her two brothers — one three years older, the other eight years younger — were all homeschooled. Cairo started learning math using Khan Academy’s online lessons, and she quickly advanced through its standard curriculum. By the time she was 11 years old, she’d finished calculus.

Soon she had consumed everything that was readily available online. Her parents found a couple of math professors to tutor her remotely — first Martin Magid of Wellesley College, then Amir Aazami from Clark University. But much of her education was self-directed, as she read and absorbed, on her own, the graduate-level math textbooks that her tutors recommended. “Eventually,” Cairo recalled, Aazami “said something like, he feels uncomfortable being paid, because he feels like he’s not really teaching me. Because mostly I would read the book and try to prove the theorems.”

«

All of this is remarkable – except, perhaps, the youth of the person making the discovery: mathematicians who make big discoveries are often young. But the Khan Academy as a route to the frontiers of knowledge? That's novel, and in its way, encouraging. Let's hope that's the trend which continues, rather than chatbots.

The article does a pretty good job of explaining the Mizohata-Takeuchi conjecture, which sounds like something out of Star Trek. Maybe that’s what will give us faster-than-light travel in 200 years.
unique link to this extract


The AI takeover of education is just getting started • The Atlantic

Lila Shroff:

»

Gone already are the days when using AI to write an essay meant copying and pasting its response verbatim. To evade plagiarism detectors, kids now stitch together output from multiple AI models, or ask chatbots to introduce typos to make the writing appear more human. The original ChatGPT allowed only text prompts. Now students can upload images (“Please do these physics problems for me”) and entire documents (“How should I improve my essay based on this rubric?”).

Not all of it is cheating. Kids are using AI for exam prep, generating personalized study guides and practice tests, and to get feedback before submitting assignments. Still, if you are a parent of a high schooler who thinks your child isn’t using a chatbot for homework assistance—be it sanctioned or illicit—think again.

The AI takeover of the classroom is just getting started. Plenty of educators are using AI in their own job, even if they may not love that chatbots give students new ways to cheat. On top of the time they spend on actual instruction, teachers are stuck with a lot of administrative work: They design assignments to align with curricular standards, grade worksheets against preset rubrics, and fill out paperwork to support students with extra needs.

Nearly a third of K–12 teachers say they used the technology at least weekly last school year. Sally Hubbard, a sixth-grade math-and-science teacher in Sacramento, California, told me that AI saves her an average of five to 10 hours each week by helping her create assignments and supplement curricula. “If I spend all of that time creating, grading, researching,” she said, “then I don’t have as much energy to show up in person and make connections with kids.”

«

Quite a contrast with Hannah Cairo, isn’t it?
unique link to this extract


Sony is raising PS5 prices on Thursday • The Verge

Jay Peters:

»

Sony is raising the price of all PlayStation 5 models by $50 in the US. In a blog post announcing the change, Sony cited the “challenging economic environment,” which includes the tariffs President Trump has placed on imported products.

The changes will go into effect on Thursday, and the new prices are as follows:

• PlayStation 5 – $549.99
• PlayStation 5 Digital Edition – $499.99
• PlayStation 5 Pro – $749.99

Sony says that the retail prices for PS5 accessories “remain unchanged.”

In April, Sony raised the price of PS5 hardware in the UK, Europe, Australia, and New Zealand, and in May, the company said it was considering price hikes to cover the Trump administration’s tariffs.

«

The April price rises were 10-15%; these changes are roughly 7-10%, depending on the model. Not exactly world-ending, is it, when the devices apparently face 145% tariffs (though who knows if the tariffs are operative?).
unique link to this extract


T-Mobile claimed selling location data without consent is legal—judges disagree • Ars Technica

Jon Brodkin:

»

A federal appeals court rejected T-Mobile’s attempt to overturn $92m in fines for selling customer location information to third-party firms.

The Federal Communications Commission last year fined T-Mobile, AT&T, and Verizon, saying the carriers illegally shared access to customers’ location information without consent and did not take reasonable measures to protect that sensitive data against unauthorized disclosure. The fines relate to sharing of real-time location data that was revealed in 2018, but it took years for the FCC to finalize the penalties.

The three carriers appealed the rulings in three different courts, and the first major decision was handed down Friday. A three-judge panel at the US Court of Appeals for the District of Columbia Circuit ruled unanimously against T-Mobile and its subsidiary Sprint.

…The carriers also argued that the device-location information, which is “passively generated when a mobile device pings cell towers to support both voice and data services,” does not qualify as Customer Proprietary Network Information (CPNI) under the law. The carriers said the law “covers information relating to the ‘location… of use’ of a telecommunications service,” and claimed that only call location information fits that description.

Judges faulted T-Mobile and Sprint for relying on “strained interpretations” of the statute. “We begin with the text. The Communications Act refers to the ‘location… of a telecommunications service, not the location of a voice call… Recall that cell phones connect periodically to cell towers, and that is what enables the devices to send and receive calls at any moment,” the ruling said.

«

Have to wonder if anyone, at any point, raised a hand inside T-Mobile and said "are we sure that's exactly… ethical? Legal?" Probably not. Some will have thought it, though, and will be quietly happy at this outcome.
unique link to this extract


Publisher traffic sources: Google steady but social and direct referrals are down • Press Gazette

Charlotte Tobitt:

»

New data from Chartbeat suggests that “search” as a source of total traffic to major news publishers has remained stable over the last year.

This appears to chime with a Google statement earlier this month downplaying the impact of AI Overviews and AI Mode on publisher referrals.

However, this includes Google Discover – which has replaced search as the main source of Google traffic.

Social media has however sharply declined as a source of publisher traffic in recent years, as has direct traffic.

This comes despite a growing theme in the past two years of publishers setting out to grow the audience that comes to them directly to help future-proof in the face of AI search, unpredictable social media algorithms and changing audience habits.

Direct traffic is defined by audience data analytics tool Chartbeat as visitors that arrive directly on the website via typing in the URL or through a bookmark.

Across 565 US and UK news websites that are Chartbeat customers and have opted in to sharing their anonymised traffic data for aggregate research purposes, 16.09% of traffic came directly to homepages and other landing pages in January 2019.

This fell to an initial low of 12.45% in April 2020 before seeing growth over the next two years to 16.26%. However the proportion of direct traffic has largely fallen again since to 11.46% in July.

«

So now we wait to see what effect AI Overviews have in the longer term. The signs aren't encouraging.
unique link to this extract


The new American inequality: the Cooled vs. the Cooked • The New York Times

Jeff Goodell:

»

In the hottest regions of the country, such as Texas, where I live, the climate crisis is not only changing our world; it is also dividing it. When the heat spikes during the summer, we morph into a two-party state: the cooled and the cooked. On one side, there is water, shade and air-conditioning. On the other, there is sweat, suffering and even, in the worst cases, death. And it means that no matter where we live, we have to update our conception of heat as a disruptive and punishing force.

The cooled are people like me, who work mostly indoors, bathed in the soothing breeze of manufactured air. We live hidden from the brutality of summer, except when we run out to the mailbox or the grocery store. There we hit a wall of heat that feels like an alien force field and burn our hands on the car’s steering wheel.

We live vampire lives, out early for a walk or to run errands, retreating indoors to our comfy caves during the afternoon, then out again after sundown to hang out with friends and complain about the heat and plot a getaway to the beach or the mountains. For the cooled, heat is an inconvenience, an intrusion into our lifestyles and a reason to finally pull the trigger on a loan to build a backyard swimming pool.

The cooked are people like Matthew Sanchez, the pit manager at Terry Black’s BBQ in Austin. On a busy Saturday, he and his co-workers might grill about 2,000 pounds of brisket in five long steel wood-fired BBQ pits. In the summer, the pit gets so hot it breaks thermometers that hang on the wall. “Sometimes it feels like we are rendering ourselves,” Mr. Sanchez told me.

I also met a delivery driver in Austin who had been hospitalized with heat exhaustion. Though he’s recovered, on hot days the muscles in his back tingle and his kidneys hurt. I met a former emergency medical technician who described the disturbing number of calls she responded to from workers at an Amazon warehouse in Texas, many of them related to heat stress.

«

unique link to this extract


How much do electric car batteries degrade? • Sustainability By Numbers

Hannah Ritchie:

»

Before we quantify how big this effect is, it’s interesting to look at how these processes work over the life of a battery. In the chart below, you can see battery retention measured across a large cohort of Teslas up to 200,000 miles (that’s already telling us something about how big the effect is).

But what’s interesting is that degradation tends to happen quickest in the first 20,000 miles or so. This is because initial lithium salts react with other materials and start building that SEI layer we discussed earlier. After this initial drop, degradation is fairly slow and linear.

Of course, this fact might be one of the explanations why even fairly low-mileage electric cars quickly lose a lot of value once they’ve been driven. As soon as you get on the road, you’re entering the steepest part of the decline.

What’s missing, though, is the context that the overall drop in capacity is still small — probably around 3% to 5% within 25,000 miles — and degradation won’t continue at this rate. So if you buy a second-hand electric car that’s done 20,000 miles, it’s not going to degrade at the same pace that it was.

We’ve now had enough electric cars on the road – and for long enough – to have a good idea of how the battery holds up over time.

Here we’ll focus on a metric used to capture the battery’s “State of Health” (SoH). It’s what percentage of a battery’s initial capacity is still usable after a given number of miles or years.

Let’s start with the results of the huge Tesla cohort that we looked at above. In its 2023 Impact Report, Tesla reported that after 200,000 miles of use, the batteries in a Model 3 and Model Y had lost just 15% of their capacity, on average. For the Model S and X, it was just 12%.

«
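
State of Health is simply remaining usable capacity as a percentage of the original. Here's a rough sketch of the curve described above, using the article's anchor points (roughly 96% at 25,000 miles, roughly 85% at 200,000 miles for a Model 3/Y); the piecewise-linear interpolation between them is my assumption, not Tesla's data:

```python
# Rough sketch of the degradation curve described above: a quicker initial
# drop (while the SEI layer forms) then slow, roughly linear decline.
# Anchor points (~96% at 25,000 miles, ~85% at 200,000 miles) come from the
# article; the piecewise-linear interpolation between them is my assumption.
def state_of_health(miles: float) -> float:
    """Approximate battery State of Health as a % of original capacity."""
    if miles <= 25_000:
        return 100 - 4 * (miles / 25_000)            # ~4% lost in the first 25k miles
    return 96 - 11 * ((miles - 25_000) / 175_000)    # a further ~11% by 200k miles

for m in (0, 10_000, 25_000, 100_000, 200_000):
    print(f"{m:>7,} miles: ~{state_of_health(m):.1f}% SoH")
```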

(Thanks Quentin SF for the link.)
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Start Up No.2497: UK drops Apple decryption demand, Softbank invests in Intel, Threads v Twitter, tracking AI chips, and more


A Chinese team has worked out why the infamous “spin serve” in badminton is so hard to return. CC-licensed photo by Tool Dude8mm on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


There’s another post coming this week at the Social Warming Substack on Friday at 0845 UK time. Free signup.


A selection of 10 links for you. Serve them up! I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


UK has ‘agreed to drop’ demand for access to Apple user data, says US • Financial Times

Anna Gross, Joe Miller, Tim Bradshaw and George Parker:

»

The UK has retreated on its controversial demand for Apple to provide a “back door” to encrypted customer data after pressure from the Trump administration, according to US officials, ending a diplomatic row between London and Washington.

Tulsi Gabbard, Donald Trump’s director of national intelligence, told the Financial Times the UK had “agreed to drop” its demand that Apple enable access to “the protected encrypted data of American citizens”, a move that the US president had previously likened to Chinese surveillance.

Vice-president JD Vance, who was recently on holiday in the UK, intervened to ensure Britain agreed to withdraw an order that sought to force Apple to break open encrypted data stored in its iCloud system that even the iPhone maker itself is normally unable to access, according to a US official.

“The vice-president negotiated a mutually beneficial understanding that the UK government will withdraw the current backdoor order to Apple,” the official said.

Vance has previously accused European countries of curtailing free speech and of treating some American companies unfairly. He and Gabbard strongly objected to the UK order, which was issued in January under the UK Investigatory Powers Act and has been resisted by Apple.

«

I don’t think the decryption demand was a ploy by the UK to get a bargaining chip with the US; it seriously annoyed the administration there, which meant it was just a source of pressure rather than strength. But let’s hope that once again the UK government abandons its quixotic pursuit of the unicorn of “breakable but secure encryption” which it has been chasing since the Blair government tried it in 2000.

We’ll probably find out presently whether Apple will reintroduce its Advanced Data Protection for iCloud – basically, user-encrypted backups in the cloud.
unique link to this extract


SoftBank invests $2bn in Intel as Trump administration weighs taking 10% stake • WSJ

Robbie Whelan and Amrith Ramkumar:

»

SoftBank Group has agreed to invest $2bn in Intel, a boost from the private sector that coincides with a US government rescue effort for the embattled chip maker.

Trump administration officials are discussing taking a 10% stake in Intel in a bid to revive the company’s fortunes and bolster semiconductor manufacturing in the US, according to people briefed on the talks.

On Monday, Intel announced that SoftBank would buy $2bn worth of Intel stock, roughly 87 million shares, at $23 a share, a slight discount to Monday’s closing price of $23.66. The investment would give the Japanese firm ownership of about 2% of the company, making it Intel’s sixth-largest shareholder, according to S&P Global Market Intelligence.

Investors appeared to interpret SoftBank’s move as a vote of confidence: The chip maker’s shares popped in off-hours trading after closing down 3.7% for the day.

Additional investment is helpful for Intel, but the company needs customers for its chip design and fabrication businesses to get back on track, industry analysts say.

«

Surprising move seen in isolation, but SoftBank is trying to play nicely in order to get favourable treatment from Trump (a triumph of hope over experience). If the US government pitches in, SoftBank might even make a profit on the shares in time. But I don’t feel that confident about Intel’s long-term future.
unique link to this extract


The physics of badminton’s new killer spin serve • Ars Technica

Jennifer Ouellette:

»

Serious badminton players are constantly exploring different techniques to give them an edge over opponents. One of the latest innovations is the spin serve, a devastatingly effective method in which a player adds a pre-spin just before the racket contacts the shuttlecock (aka the birdie). It’s so effective—some have called it “impossible to return”—that the Badminton World Federation (BWF) banned the spin serve in 2023, at least until after the 2024 Paralympic Games in Paris.

The sanction wasn’t meant to quash innovation but to address players’ concerns about the possible unfair advantages the spin serve conferred. The BWF thought that international tournaments shouldn’t become the test bed for the technique, which is markedly similar to the previously banned “Sidek serve.”

The BWF permanently banned the spin serve earlier this year. Chinese physicists have now teased out the complex fundamental physics of the spin serve, publishing their findings in the journal Physics of Fluids.

…While many studies have extensively examined the physics of the shuttlecock’s trajectory, the Chinese authors of this latest paper realized that nobody had yet investigated the effects of the spin serve on that trajectory. “We were interested in the underlying aerodynamics,” said co-author Zhicheng Zhang of Hong Kong University of Science and Technology. “Moreover, revealing the effects of pre-spin on the trajectory and aerodynamics of a shuttlecock can help players learn the art of delivering a spin serve, and perhaps help players on the other side of the net to return the serve.”

«

But as the story points out, though we now know why the serve is so difficult to return, it doesn't matter – it's banned. The first ban came within six weeks of its first use in competition, followed by the permanent one.
unique link to this extract


Threads MAU update • Threads

Adam Mosseri (in charge of Meta’s Threads):

»

As of a few weeks ago there are more than 400 million people active on Threads every month. It's been quite the ride over the last two years. This started as a zany idea to compete with Twitter, and has evolved into a meaningful platform that fosters the open exchange of perspectives. I'm grateful to all of you for making this place what it is today 🙏🏼. There's so much work to do from our side, more to come.

«

Fascinating that he’s admitting the idea was to compete with Twitter. Based on those numbers, it’s pretty much level. And yet the odd thing is that Threads seems to be self-contained. It doesn’t leak over into the other social networks in the way that traffic – mostly snarky, but sometimes just duplicative – flows between X and Bluesky (and vice-versa), and also pulls in from Instagram.

Is Threads actually a hotbed of important social dialogue? The replies to Mosseri’s post suggest there’s tension between people who want to monetise, and those who don’t; and those who perceive a huge bot problem, and those who don’t.
unique link to this extract


AI x Commerce • Andreessen Horowitz

Justine Moore and Alex Rampell:

»

The internet’s most profitable business model has always been simple: running search ads on monetizable queries. When you search “how many protons are in a cesium atom,” Google makes no money. When you search “best tennis racket,” it prints cash. 

This asymmetry defines the entire search economy–some queries are pure curiosity, and others have direct purchase intent. It’s part of why Google (where people often search for products) is a $2T company and Wikipedia (where people search for knowledge or fun facts) is a non-profit.

Google could lose 95% of search volume and still grow revenue as long as it retains the valuable queries, which are largely commerce related. Has Google managed to keep these searches from moving to AI platforms like ChatGPT and Perplexity? 

Maybe. In May 2025, Apple SVP Eddy Cue testified during the DOJ’s antitrust trial that Safari search volume had declined for the first time in over two decades. The result? Alphabet’s stock dropped nearly 8% in a day, wiping out over $150bn in market cap — all from the hint that queries might be leaking to AI. But fast forward to Google’s increasing revenue (including from search!) and it’s pretty clear (squaring Eddy Cue’s comments with Google’s Q2 earnings) that Google is likely only losing low-monetizing queries, at least for now.

AI is eating the low-value (at least in “cost-per-click terms”) queries first, the ones with no commercial intent that are more informational. If language models answer your cesium question, Google loses the query but not a dime. The revenue remains until AI starts replacing things like “best X for Y” commerce journeys — the ones with actual purchase intent. There’s no question this is about to happen, but not all commerce is the same. Some commerce will be eaten by AI, some will be immune to it, and some will be up for grabs by new startups.

«

As much as anything this is venture capitalists thinking out loud about what they expect, so the post goes into some detail about how they see the future panning out. (In effect it's also an invitation to founders to pitch for the opportunities they think are up for grabs.)
unique link to this extract


Exclusive: US embeds trackers in AI chip shipments to catch diversions to China, sources say • Reuters

Fanny Potkin, Jun Yuan Yong and Karen Freifeld:

»

U.S. authorities have secretly placed location tracking devices in targeted shipments of advanced chips they see as being at high risk of illegal diversion to China, according to two people with direct knowledge of the previously unreported law enforcement tactic.

The measures aim to detect AI chips being diverted to destinations which are under U.S. export restrictions, and apply only to select shipments under investigation, the people said.

They show the lengths to which the U.S. has gone to enforce its chip export restrictions on China, even as the Trump administration has sought to relax some curbs on Chinese access to advanced American semiconductors.

The trackers can help build cases against people and companies who profit from violating U.S. export controls, said the people, who declined to be named because of the sensitivity of the issue.

Location trackers are a decades-old investigative tool used by U.S. law enforcement agencies to track products subject to export restrictions, such as airplane parts. They have been used to combat the illegal diversion of semiconductors in recent years, one source said.

Five other people actively involved in the AI server supply chain say they are aware of the use of the trackers in shipments of servers from manufacturers such as Dell and Super Micro, which include chips from Nvidia and AMD.

«

Then there’s also the story of two Chinese nationals arrested in California, accused of illegally shipping millions of dollars worth of Nvidia AI chips to China. The fun part, though, is that this story preceded the above one – so one suspects that Reuters started asking questions about the arrests and learnt about the trackers.
unique link to this extract


South Korea’s no-kids zones • Boom

“Boom”:

»

Imagine you are a South Korean parent. You want to take your 10 year old to a new café that has opened up a few streets away and treat them to a slice of cake. What’s the first thing you do? You go online, and check to see whether the café bans children.

This map shows restaurants and cafés across South Korea that users have identified as officially designating themselves either as a ‘No-Kids Zone’ or as child-friendly in 2023. The 451 blue pins represent places where children are banned, and the 62 green pins are places where children are welcomed. Below you can see the cluster of blue pins in and around Seoul.

It’s difficult to know exactly how many South Korean establishments are child-free, but for cafés it was estimated at 5-10% in 2022, and has probably increased since then.

One area particularly dense in No-Kids Zones is Jeju Island, famed for its natural beauty, which has around 150 businesses that have banned children. A survey of Jeju’s No-Kids Zones reported that business owners had implemented the rule for reasons including a quiet atmosphere, problems with badly behaved children, and concern about legal liability in the event of a child accidentally getting hurt.

The below screenshot, from the Instagram account of a café serving what looks to be a variety of cheesecakes, is a typical example. It carefully specifies that no children below the age of 12 are allowed.

«

This feels strange. As the blog says, cafés banning children isn't the cause of South Korea's low, low fertility – the lowest birth rate in the world – but it is surely a consequence, and one which then gets reinforced as people come to find children a novel irritation. (The blog's purpose, apparently, is "We want it to be easier to choose to have children, for everyone.")
unique link to this extract


Lab-grown salmon gets FDA approval • The Verge

Dominic Preston:

»

The FDA has issued its first ever approval on a safety consultation for lab-grown fish. That makes Wildtype only the fourth company to get approval from the regulator to sell cell-cultivated animal products, and its cultivated salmon is now available to order from one Portland restaurant.

Wildtype announced last week that the FDA had sent a letter declaring it had “no questions” about whether the cultivated salmon is “as safe as comparable foods,” the customary final step in the FDA’s approval process for lab-grown animal products. The FDA has sole responsibility for regulating most lab-grown seafood, whereas the task is shared with the United States Department of Agriculture (USDA) for cultivated meat.

The FDA’s pre-market safety consultation is voluntary, but is “helpful for marketability,” IP lawyer Dr. Emily Nytko-Lutz, who specializes in biotechnology patents, explained to The Verge. “There are other pathways involving self-affirmation of safety as well as a longer food additive review process, but the FDA’s authorisation with a ‘No Questions’ letter is a middle ground.”

«

It’s a bit vague quite how the salmon cuts are made to look like, well, salmon cuts; there’s an FAQ of sorts on Wildtype’s site. It’s been working for years on this.
unique link to this extract


Forget LASIK: safer, cheaper vision correction could be coming soon • ScienceDaily

»

Human corneas are dome-shaped, clear structures that sit at the front of the eye, bending light from surroundings and focusing it onto the retina, where it’s sent to the brain and interpreted as an image. But if the cornea is misshapen, it doesn’t focus light properly, resulting in a blurry image. With LASIK, specialized lasers reshape the cornea by removing precise sections of the tissue. This common procedure is considered safe, but it has some limitations and risks, and cutting the cornea compromises the structural integrity of the eye. Hill explains that “LASIK is just a fancy way of doing traditional surgery. It’s still carving tissue — it’s just carving with a laser.”

But what if the cornea could be reshaped without the need for any incisions?

This is what [professor of chemistry at Occidental College, Michael] Hill and collaborator Brian Wong are exploring through a process known as electromechanical reshaping (EMR). “The whole effect was discovered by accident,” explains Wong, a professor and surgeon at the University of California, Irvine. “I was looking at living tissues as moldable materials and discovered this whole process of chemical modification.”

In the body, the shapes of many collagen-containing tissues, including corneas, are held in place by attractions of oppositely charged components. These tissues contain a lot of water, so applying an electric potential to them lowers the tissue’s pH, making it more acidic. By altering the pH, the rigid attractions within the tissue are loosened and make the shape malleable. When the original pH is restored, the tissue is locked into the new shape.

«

Sounds fun – but it’s a very long way away from being available from doctors.
unique link to this extract


Renewables do unambiguously reduce wholesale power prices • Carbon Commentary

Chris Goodall:

»

We still hear assertions that adding renewables to the grid has increased the UK’s electricity costs. I looked at two sources of data and plotted one against the other to test whether there’s any truth in this.

1: The ‘next day’ electricity price for each hour in the period from 1st January 2025 to the early days of August 2025. That’s about 220 days, covering the coldest period of the year and the heat of the summer. (In the UK, electricity prices are highest in the winter and fall to lower levels in the summer because demand is much lower). The source for this data was the research group Ember.

2: The percentage share of wind and solar electricity in total generation in each of the 220 or so days. The source was the GB network operator, NESO. 

The analysis seeks to show whether or not days of high electricity price are associated with large or small shares of renewables in total generation. For each day, I plotted the average hourly price of electricity against the share of solar and wind in that day’s total electricity generation. If more renewables adds to costs, the price of electricity should be higher when wind and solar are abundant.

Of course that is not the case; a day with wind or sun (or both) typically has a lower hourly average electricity price. And the differences are substantial…

«

He goes into plenty of detail, though of course it will make no difference to those already convinced that renewables push up prices. Isn't it a pity that "understanding logical arguments" isn't taught in schools? (Thanks Ben B for the link.)
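
The method is simple enough to reproduce if you have the two datasets to hand: average each day's hourly day-ahead price, compute each day's wind-plus-solar share of generation, and plot one against the other. A minimal sketch – the file names and column names are placeholders, not Ember's or NESO's actual formats:

```python
# Sketch of the analysis: daily average day-ahead price plotted against
# that day's wind+solar share of generation. File and column names are
# placeholders; Ember and NESO publish the underlying data in their own formats.
import pandas as pd
import matplotlib.pyplot as plt

prices = pd.read_csv("ember_hourly_prices.csv", parse_dates=["timestamp"])  # hypothetical file
gen = pd.read_csv("neso_daily_generation.csv", parse_dates=["date"])        # hypothetical file

# Average the hourly day-ahead price for each calendar day.
daily_price = (prices.assign(date=prices["timestamp"].dt.date)
                     .groupby("date")["price_gbp_mwh"].mean()
                     .rename("avg_price"))

# Wind + solar as a share of each day's total generation.
gen["ws_share"] = (gen["wind_gwh"] + gen["solar_gwh"]) / gen["total_gwh"]
daily = gen.set_index(gen["date"].dt.date).join(daily_price, how="inner")

# If renewables pushed wholesale prices up, this correlation would be positive;
# Goodall finds the opposite.
print(daily[["ws_share", "avg_price"]].corr())
daily.plot.scatter(x="ws_share", y="avg_price",
                   xlabel="Wind + solar share of generation",
                   ylabel="Average day-ahead price (£/MWh)")
plt.show()
```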
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Start Up No.2496: AI’s mass delusion, the Vision Pro’s immersive problem, Trump mulls 10% stake in Intel, scammed!, and more


A new study found that endoscopists who used AI were less good at diagnosis afterwards without it – a “deskilling” effect. CC-licensed photo by Rosen & Meseguer Atlas of Medical Foreign Bodies on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


There’s another post coming this week at the Social Warming Substack on Friday at 0845 UK time. Free signup.


A selection of 10 links for you. Machine-guided. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


AI is a mass-delusion event • The Atlantic

Charlie Warzel:

»

It is a Monday afternoon in August, and I am on the internet watching a former cable-news anchor interview a dead teenager on Substack. This dead teenager—Joaquin Oliver, killed in the mass shooting at Marjory Stoneman Douglas High School, in Parkland, Florida—has been reanimated by generative AI, his voice and dialogue modelled on snippets of his writing and home-video footage. The animations are stiff, the model’s speaking cadence is too fast, and in two instances, when it is trying to convey excitement, its pitch rises rapidly, producing a digital shriek. How many people, I wonder, had to agree that this was a good idea to get us to this moment? I feel like I’m losing my mind watching it.

Jim Acosta, the former CNN personality who’s conducting the interview, appears fully bought-in to the premise, adding to the surreality: he’s playing it straight, even though the interactions are so bizarre. Acosta asks simple questions about Oliver’s interests and how the teenager died. The chatbot, which was built with the full cooperation of Oliver’s parents to advocate for gun control, responds like a press release: “We need to create safe spaces for conversations and connections, making sure everyone feels seen.” It offers bromides such as “More kindness and understanding can truly make a difference.”

…The interview triggered a feeling that has become exceedingly familiar over the past three years. It is the sinking feeling of a societal race toward a future that feels bloodless, hastily conceived, and shruggingly accepted. Are we really doing this? Who thought this was a good idea? In this sense, the Acosta interview is just a product of what feels like a collective delusion. This strange brew of shock, confusion, and ambivalence, I’ve realized, is the defining emotion of the generative-AI era. Three years into the hype, it seems that one of AI’s enduring cultural impacts is to make people feel like they’re losing it.

«

unique link to this extract


Why Apple Vision Pro can’t win without immersive video • Mac Observer

Rajat Saini:

»

When the Vision Pro launched, Apple highlighted immersive video as the device’s strongest selling point. The format offers a level of immersion that no traditional screen can match. Watching sports, concerts, or documentaries in this environment can feel transformative. Yet Apple has released only 27 pieces of content in the format since launch, leaving users with little to watch.

For example, Apple continues to promote an immersive highlight reel of the 2024 NBA All-Star Game, despite the 2025 event having taken place six months ago without an immersive version. The same applies to concerts. While users can access shows from Metallica, Bono, and a short music video by The Weeknd, the library lacks depth and variety to keep younger tech audiences engaged.

Apple’s series offerings remain sparse. Wild Life has four episodes, Elevated just one, Boundless two, and Prehistoric Planet only two. Even Adventure, with its extreme sports footage, offers only five installments. For a product pitched as a new way to experience storytelling, the content feels more like a demo reel than a growing library.

According to Bloomberg’s Mark Gurman, Apple intentionally slow-walked immersive content to preserve its reserve, as each production is expensive and resource-heavy. But this strategy has created a catch-22: immersive video is the feature that sells the Vision Pro, yet the lack of it makes the device harder to justify.

Apple has updated the Vision Pro’s software and plans a faster chip in the next model, but these changes won’t address the fundamental issue. A cheaper and lighter version is reportedly set for 2027, but two years is a long wait in a fast-moving industry. If immersive content remains scarce until then, the product risks fading into irrelevance.

«

“Risks fading”? I’ll say it: the Vision Pro is completely forgotten by everyone who doesn’t own one, and probably by quite a few who do. The lack of content; the price; the terrible ergonomics and battery life; the weird design decisions (fake eyes on the outside? Why??). It’s been a bad idea from start to finish, and the lack of a sensible strategy around immersive video – the real selling point – has been the garnish on the awful salad.
unique link to this extract


Trump administration in talks to take 10% stake in Intel, Bloomberg News reports • Reuters

Zaheer Kachwala and Jaspreet Singh:

»

The Trump administration is in talks to take a 10% stake in Intel by converting some or all of the struggling company’s Chips Act grants into equity, Bloomberg News reported, citing a White House official and other people familiar with the matter.

Shares of Intel closed about 3.7% lower on Monday, after rallying last week on hopes of US federal support.

A 10% stake in the American chipmaker would be worth about $10bn. Intel has been slated to receive a combined $10.9bn in Chips Act grants for commercial and military production, and the figure is roughly enough to pay for the government’s holding, according to the Bloomberg report on Monday.

Intel declined to comment on the report, while the White House did not respond to a request for comment. Reuters could not immediately verify the report.

Media reports said last week that the US government may buy a stake in Intel, after a meeting between CEO Lip-Bu Tan and President Donald Trump that was sparked by Trump’s demand for the new Intel chief’s resignation over his ties to Chinese firms.

Federal backing could give Intel more breathing room to revive its loss-making foundry business, analysts have said, but it still suffers from a weak product roadmap and challenges in attracting customers to its new factories.

“The fact that the U.S. government is stepping in to save a blue-chip American company likely means that Intel’s competitive position was much worse than what anybody feared,” said David Wagner, head of equity and portfolio manager at Intel shareholder Aptus Capital Advisors.

«

What fun to watch America trying state-owned capitalism. Not sure it’s going to pull Intel out of its nosedive, but certainly adds a bit of spice to the process.
unique link to this extract


Indeed recruiter text scam: I responded to one of the “job” messages. It got weird quickly • Slate

Alexander Sammon:

»

On WhatsApp, I met Cathy, my “coach,” from a company she referred to as Interleave. She had gotten my number, she said, from “Elena who works in Indeed Recruitment Department” and was eager to work with me. Her number had a 424 area code, or Los Angeles. (Indeed is its own company—it essentially offers a job board—and I have no reason to believe that it was actually involved at all.)

Cathy was not altogether patient. When I didn’t respond within two hours, she sent me a voice note that sounded sort of humanoid: “Hello, are you still there?” The next day, she called me and I missed it. When I did respond, via WhatsApp message, she was curt: “Hello, you finally replied to my message. I thought you were taken away by aliens.”

I was going to be doing “music promotion,” I was told. It would take just one or two hours a day. “We use an A.I.–powered system developed by Interleave to help increase the play count of music singles and albums,” Cathy told me, on platforms like YouTube and TikTok. In effect, we were going to be boosting play counts: “Artificial intelligence cannot do this, only real people can participate,” she said. “All we need to do is create a personal account on the Interleave platform, use our real information, and create real playback records.”

Like so many middle school girlfriends, Interleave was based in Canada. The compensation was similarly sketchy. I’d get $100 for two days of work. For 30 days, I’d get $8,200, though it would all have to be routed through a crypto wallet. The job and the compensation had nothing to do with the original text I’d received, but no matter.

Besides, this wasn’t just crude self-enrichment, Cathy said. Interleave was going to “donate a portion of its profits to the World Food Programme charity to help those who really need help gain a brighter life.”

«

You already have a feeling for where this is going – there will be money "paid" to him which somehow can't be withdrawn without paying more money to the "employer", and then for some reason the "salary" still can't be withdrawn, for ever and ever. All that varies between these scams is the MacGuffin – in this case, some sort of music promotion.
unique link to this extract


Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: a multicentre, observational study • The Lancet Gastroenterology & Hepatology

Krzysztof Budzyń, MD et al:

»

We conducted a retrospective, observational study at four endoscopy centres in Poland taking part in the ACCEPT (Artificial Intelligence in Colonoscopy for Cancer Prevention) trial. These centres introduced AI tools for polyp detection at the end of 2021, after which colonoscopies had been randomly assigned to be conducted with or without AI assistance according to the date of examination.

We evaluated the quality of colonoscopy by comparing two different phases: three months before and three months after AI implementation. We included all diagnostic colonoscopies, excluding those involving intensive anticoagulant use, pregnancy, or a history of colorectal resection or inflammatory bowel disease. The primary outcome was change in adenoma detection rate (ADR) of standard, non-AI assisted colonoscopy before and after AI exposure.

…Interpretation: continuous exposure to AI might reduce the ADR of standard non-AI assisted colonoscopy, suggesting a negative effect on endoscopist behaviour.

«

I’ve only got access to this summary, so it’s not clear whether the AI-assisted rate of detection was better or worse than without; only that after people used AI and then reverted to not, their detection was worse. (If anyone can enlighten, that would be welcome.)

The Science Media Centre has a number of comments from scientists: one points out that the number of colonoscopies performed nearly doubled after the AI tool was introduced. Another points out that this paper suggests there would be a problem if a cyber attack (or similar) took out the AI assistant.
unique link to this extract


Big Tech expands AI for Good in Africa amid skepticism • Rest of World

Damilare Dosunmu:

»

in July, Google opened an AI Community Center in Ghana to support local innovation, and announced a $37m investment in social impact projects under the catch-all label “AI for Good” across Africa. 

Microsoft, Meta, Amazon, and Apple are funding similar projects, aligning themselves with a growing trend in which governments, global organizations, and private tech companies promote AI for public good — a push that aims to normalize the technology and soften the anxieties surrounding it. 

Regional AI experts and industry leaders, however, urge caution, saying Africa could become a testing ground for AI models and a vast source of data collection in the U.S.-China tech rivalry, making the continent dependent on foreign-owned systems.

“AI for Good is still very much embedded and rooted in the same saviorism from the West towards the global south. Africa needs to start building reliable infrastructures that can power all these systems,” Asma Derja, founder of Ethical AI Alliance, a Spain-based advocacy group for safe AI, told Rest of World. “Otherwise, Big Tech will continue to make money off the region and then take a [corporate social responsibility] budget to finance a few projects that are addressing climate change or, you know, a particular topic in Tanzania or in Mongolia and call it AI for Good.”

«

“AI for Good” is quite a slogan. Definitely uplifting, something for people to follow – a bit like, say, “Don’t be evil”.
unique link to this extract


Chinese homebuyers snub government incentives as they bet on further price falls • South China Morning Post

Daniel Ren and Yuke Xie:

»

Homebuyers in mainland China’s major cities are responding tepidly to new policy relaxations, anticipating further price drops amid an uncertain outlook for the sector.
Analysts and potential buyers warn that the bearish sentiments might weaken the effectiveness of the stimulus measures aimed at reviving the stagnant property market

“The market sentiment is terribly weak now, and the consensus among potential buyers is that prices will slide in the coming months if homeowners are eager to get deals done,” said Qiu Lixiao, a property agent with Pacific Rehouse in Shanghai. “Any measure to encourage purchases of new and pre-owned flats may turn out to be a damp squib.”

He added that his agency had hardly received any inquiries since Beijing’s surprise move last week to lift home-purchase restrictions in the city’s outlying areas, which had also heightened expectations that authorities in Shanghai would take similar action.

…Over the past decade, the mainland’s biggest cities like Shanghai and Beijing have taken many steps to curb home purchases to rein in what was once a red-hot property market.

Lindsay Zhang, a Beijing-based finance professional, said the policy change was not enough to inspire confidence to step into the market.

“I don’t think the relaxed rules are very meaningful because there is not much room for value appreciation for most homes,” she said. “Only properties in the very core locations can retain their value, or at least are more immune to a citywide [price] decline.”

«

Price deflation in a housing market? Almost unknown when there isn’t a recession happening. But China would never admit that it’s in recession.
unique link to this extract


MIT report: 95% of generative AI pilots at companies are failing • Fortune

Sheryl Estrada:

»

The GenAI Divide: State of AI in Business 2025, a new report published by MIT’s NANDA initiative, reveals that while generative AI holds promise for enterprises, most initiatives to drive rapid revenue growth are falling flat.

Despite the rush to integrate powerful new models, about 5% of AI pilot programs achieve rapid revenue acceleration; the vast majority stall, delivering little to no measurable impact on P&L. The research—based on 150 interviews with leaders, a survey of 350 employees, and an analysis of 300 public AI deployments—paints a clear divide between success stories and stalled projects.

To unpack these findings, I spoke with Aditya Challapally, the lead author of the report, and a research contributor to project NANDA at MIT.

“Some large companies’ pilots and younger startups are really excelling with generative AI,” Challapally said. Startups led by 19- or 20-year-olds, for example, “have seen revenues jump from zero to $20 million in a year,” he said. “It’s because they pick one pain point, execute well, and partner smartly with companies who use their tools,” he added.

But for 95% of companies in the dataset, generative AI implementation is falling short. The core issue? Not the quality of the AI models, but the “learning gap” for both tools and organizations. While executives often blame regulation or model performance, MIT’s research points to flawed enterprise integration. Generic tools like ChatGPT excel for individuals because of their flexibility, but they stall in enterprise use since they don’t learn from or adapt to workflows, Challapally explained.

The data also reveals a misalignment in resource allocation. More than half of generative AI budgets are devoted to sales and marketing tools, yet MIT found the biggest ROI in back-office automation—eliminating business process outsourcing, cutting external agency costs, and streamlining operations.

«

I was musing, as I watched various people on social media tout how wonderful ChatGPT is for them, about why I haven’t noticed any evidence of their productivity being vastly higher. I suspect we’ll need to wait a few years to see it being used even slightly effectively.
unique link to this extract


HR giant Workday discloses data breach after Salesforce attack • Bleeping Computer

Sergiu Gatlan:

»

Human resources giant Workday has disclosed a data breach after attackers gained access to a third-party customer relationship management (CRM) platform in a recent social engineering attack.

Headquartered in Pleasanton, California, Workday has over 19,300 employees in offices across North America, EMEA, and APJ. Workday’s customer list comprises over 11,000 organizations across a diverse range of industries, including more than 60% of the Fortune 500 companies.

As the company revealed in a Friday blog, the attackers gained access to some of the information stored on the compromised CRM systems, adding that no customer tenants were impacted.

“We want to let you know about a recent social engineering campaign targeting many large organizations, including Workday,” the HR giant said. “We recently identified that Workday had been targeted and threat actors were able to access some information from our third-party CRM platform. There is no indication of access to customer tenants or the data within them.”

However, some business contact information was exposed in the incident, including customer data that could be used in subsequent attacks.

“The type of information the actor obtained was primarily commonly available business contact information, like names, email addresses, and phone numbers, potentially to further their social engineering scams,” it added.

«

Just the normal stuff that hackers love to get hold of; no big deal, don’t worry about that call apparently from one of those people you know.
unique link to this extract


Government-built “Humphrey” AI tool reviews responses to consultation for first time, in bid to save millions • GOV.UK

»

A new AI tool has, for the first time, summarised what the public have told the government in response to a consultation – producing results nearly identical to those reached by officials.

The tool, called ‘Consult’, was first used on a live consultation by the Scottish Government when it was seeking views on how to regulate non-surgical cosmetic procedures – like lip fillers and laser hair removal – as use of the treatments has risen.

The tool is now set to be used across departments in a bid to cut down the millions of pounds spent on the current process, which often includes outsourcing analysis to expensive contractors – helping to build a productive and agile state to deliver the Plan for Change.

Reviewing comments from over 2,000 consultation responses using generative AI, Consult identified key themes that feedback fell into across each of six qualitative questions. These themes were checked and refined by experts in the Scottish Government; the AI tool then sorted individual responses into those themes, giving officials more time to delve into the detail and evaluate the policy implications of the feedback received.

As this was the first time Consult was used on a live consultation, experts at the Scottish Government manually reviewed every response too. Identifying what an individual response is saying and putting it in a ‘theme’ is subjective; humans don’t always agree. When we compare Consult to the human reviewer, we see they agree the majority of the time – with differences in view having a negligible impact on how themes were ranked overall.

‘Consult’ is part of ‘Humphrey’, a bundle of AI tools designed to speed up the work of civil servants and cut back time spent on admin, and money spent on contractors.

«

Calling it “Humphrey” is a joke about the old Yes Minister British TV comedy series, where the senior civil servant urbanely guiding the minister away from pitfalls was called Sir Humphrey. Given that people are going to be offering AI-written responses, might as well fight fire with fire.
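
For a sense of what that human-versus-AI comparison involves, here is a minimal Python sketch with made-up labels (my illustration, not the government’s code): it measures how often hypothetical AI theme assignments match a human reviewer’s, and compares the resulting theme rankings, which is the kind of check the write-up describes.

from collections import Counter

# Hypothetical theme labels for one qualitative question: one label per response.
ai_labels    = ["safety", "cost", "safety", "training", "cost", "safety"]
human_labels = ["safety", "cost", "access", "training", "cost", "safety"]

# Simple per-response agreement rate between the AI tool and the human reviewer.
agree = sum(a == h for a, h in zip(ai_labels, human_labels))
print(f"Agreement: {agree}/{len(ai_labels)} = {agree / len(ai_labels):.0%}")

# Theme rankings by frequency, to check whether disagreements change the ordering.
print("AI ranking:   ", Counter(ai_labels).most_common())
print("Human ranking:", Counter(human_labels).most_common())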
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Start Up No.2495: plastics treaty talks fail, the Wikipedian promotion, why science fraud happens, hating AI, and more


Like all modern flip phones, the Samsung Z Flip has a deadly enemy: dust. CC-licensed photo by HS You on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.

A selection of 10 links for you. Fine, thanks. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


Plastics treaty talks collapse without a deal after “chaotic” negotiations • Climate Change News

Matteo Civillini:

»

UN talks on creating a global pact to stem plastic pollution collapsed in Geneva with no agreement or clear way forward after a chaotic night of negotiations failed to break a deadlock over whether to include measures aimed at curbing runaway plastic production.

With discussions running into overtime on Thursday night, a last-ditch attempt by the talks’ chair, Ecuadorian diplomat Luis Vayas Valdivieso, to table a new draft proposal for a treaty fell flat. The text, still containing numerous options in brackets, does not include a dedicated section on plastic production, which nearly 100 countries have been calling for.

An opposing group of fossil fuel-producing nations – including Gulf states, Russia and the US – vehemently rejects the inclusion in the treaty of any provisions aimed at reducing plastic production, which is set to triple by 2060. The talks, known as INC-5.2, were unable to find a way to bridge those divergent positions.

Several countries voiced disappointment with the process managed by the UN Environment Programme (UNEP) in a final plenary session which came to an abrupt end on Friday morning with no next steps agreed.

Valdivieso adjourned the 10-day meeting to be resumed “at a later date” yet to be decided after the United States and Kuwait asked him to cut short the last session – with the latter saying it had become “a health issue” as delegates were exhausted from the long hours.

During the closing plenary, many countries signalled their unease with negotiations continuing under the same format that has yet to deliver a deal after two and a half years. The collapse of talks in Geneva came nine months after the failure of what was originally meant to be the final round of negotiations in December 2024.

On Friday, France’s Minister for Ecological Transition Agnès Pannier-Runacher said she was “disappointed and enraged” with the outcome of the talks, which she described as “so chaotic”. “Oil-producing countries and their allies have chosen to look the other way. We choose to act,” she added.

«

The Montreal Protocol limiting production and use of CFCs these days feels like something out of an SF novel about an alien planet ruled by sensible beings.
unique link to this extract


Dedicated volunteer exposes “single largest self-promotion operation in Wikipedia’s history” • Ars Technica

Nate Anderson:

»

Quick—what are the top entries in the category “Wikipedia articles written in the greatest number of languages”?

The answer is countries.

Turkey tops the list with Wikipedia entries in 332 different languages, while the US is second with 327 and Japan is third with 324. Other common words make their appearance as one looks down the list. “Dog” (275 languages) tops “cat” (273). Jesus (274) beats “Adolf Hitler” (242). And all of them beat “sex” (122), which is also bested by “fever,” “Chiang Kai-Shek,” and the number “13.”

But if you had looked at the list a couple months back, something would have been different. Turkey, the US, and Japan were still in the same order near the top of the leaderboard, but the number one slot was occupied by an unlikely contender: David Woodard, who had Wikipedia entries in 335 different languages.

You… haven’t heard of David Woodard?

Woodard is a composer who infamously wrote a “prequiem”—that is, a “pre requiem”—in 2001 for Oklahoma City bomber Timothy McVeigh, who had murdered 168 people with a truck bomb. The piece was to be performed at a church near McVeigh’s execution site in Terre Haute, Indiana, then recorded and played on the radio so that McVeigh would have a chance to hear it.

According to the LA Times, which spoke to the composer, “Woodard’s hope in performing the 12-minute piece, he said, is to ’cause the soul of Timothy McVeigh to go to heaven.'” According to BBC coverage from the time, Woodard “says McVeigh is ’33 and nearly universally despised at the time of his execution’—like Jesus Christ.”

…A Wikipedia editor who goes by “Grnrchst” recently decided to find out, diving deep into the articles about Woodard and into any edits that placed his name in other articles. The results of this lengthy and tedious investigation were written up in the August 9 edition of the Signpost, a volunteer-run online newspaper about Wikipedia.

Grnrchst’s conclusion was direct: “I discovered what I think might have been the single largest self-promotion operation in Wikipedia’s history, spanning over a decade and covering as many as 200 accounts and even more proxy IP addresses.”

«

Well, that’s a decade of someone’s work down the drain. Performance art project? Self-aggrandisement scheme? University jape? We still don’t know: Woodard is real, but the reason for all this remains a mystery.
unique link to this extract


The one feature that keeps me from recommending flip phones: dust • The Verge

Allison Johnson:

»

I love flip phones. Lots of you love flip phones, too.

Those little specks of dust [which find their way into the crease of the fold] still loom large. Despite substantial improvements to the screens and hinges, and the addition of water resistance, flip phones (and their fold-style siblings) still lack dust resistance. Both Motorola and Samsung’s latest foldables come with an IP48 rating, which only guarantees protection against solid particles larger than a millimeter, meaning anything smaller could still potentially work its way into the phone and wreak havoc.

Sure, plenty of people own folding phones and never experience problems with dust, which is great! But when every other slab-style phone at the same price point comes with a full IP68 rating, it’s hard to tell the average person to go ahead and spend $1,000 on a flip phone. Fun only goes so far.

I had a burning question for Samsung’s head of smartphone planning, Minseok Kang. Maybe it even bordered on a plea. “Is a dustproof foldable even possible?” I asked following Samsung’s most recent Unpacked.

“I don’t think that it’s not possible,” he said. “But it is difficult.”

Whispers of foldables with the elusive IP68 rating have cropped up around most of the recent folding phone launches, ultimately fizzling when the full specs have been revealed.

«
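
For context on what those ratings actually promise, here is a minimal Python sketch of the standard IEC 60529 digit meanings (my illustration; nothing here comes from The Verge piece): it spells out why an IP48 phone can still admit fine dust while an IP68 one cannot.

# Illustrative subset of IEC 60529 meanings for the digits seen on recent phones.
SOLIDS = {
    "4": "protected against solid objects larger than 1mm; finer dust can still enter",
    "6": "dust-tight: no ingress of dust",
}
WATER = {
    "8": "protected against continuous immersion beyond 1m, to manufacturer spec",
}

def describe(ip_code: str) -> str:
    # An IP code is "IP" followed by a solids digit and a water digit, e.g. "IP48".
    solids_digit, water_digit = ip_code[2], ip_code[3]
    return (f"{ip_code}: solids - {SOLIDS.get(solids_digit, 'unspecified')}; "
            f"water - {WATER.get(water_digit, 'unspecified')}")

for code in ("IP48", "IP68"):
    print(describe(code))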

unique link to this extract


The entities enabling scientific fraud at scale are large, resilient, and growing rapidly • PNAS

Reese Richardson, Spencer Hong, Jennifer Byrne, Thomas Stoeger and Luis Nunes Amaral:

»

Science is characterized by collaboration and cooperation, but also by uncertainty, competition, and inequality. While there has always been some concern that these pressures may compel some to defect from the scientific research ethos—i.e., fail to make genuine contributions to the production of knowledge or to the training of an expert workforce—the focus has largely been on the actions of lone individuals.

Recently, however, reports of coordinated scientific fraud activities have increased. Some suggest that the ease of communication provided by the internet and open-access publishing have created the conditions for the emergence of entities that facilitate systematic scientific fraud—paper mills (i.e., sellers of mass-produced, low-quality and fabricated research), brokers (i.e., conduits between producers and publishers of fraudulent research), and predatory journals (which do not conduct any quality controls on submissions).

Here, we demonstrate through case studies that i) individuals have cooperated to publish papers that were eventually retracted in a number of journals, ii) brokers have enabled publication in targeted journals at scale, and iii), within a field of science, not all subfields are equally targeted for scientific fraud.

«

This is a big paper, and there’s no doubting the work. The reason for the growth in fraud seems to be a symbiotic process: publications which can charge for publication (or subscription) need researchers who will be measured on the quantity rather than quality of publication. Both are driven by volume, not value, of publication.

The question is, therefore, how do you break the symbiosis?
unique link to this extract


Every reason why I hate AI and you should too • Malwaretech

Marcus Hutchins:

»

One thing that’s certain is that Generative AI is in a bubble. That’s not to say AI as a technology will pop, or that there isn’t genuine room for a lot more growth; simply, the level of hype far outweighs the current value of the tech.

Most (reasonable) people I speak to hold one of three opinions:

1: These technologies are fundamentally unsustainable and the hype will be short-lived
2: There will be some future breakthrough that will bring the technology in line with the hype, but in the meantime everyone is essentially just relying on creative marketing to keep the money flowing
3: The tech has a narrow set of use cases for which it is exceedingly valuable, but almost everything else is just hype.

Whenever I’m critical of anything GenAI, without fail I get asked the same question: “Do you think every major CEO could be wrong?”

The answer to that is: yes. History is littered with examples of industry titans going nuts, losing more money than the GDP of an entire country, saying “lol, my bad”, then finding something else to do.

I grew up during the fallout from the great financial crisis. I watched first-hand as the biggest, most prestigious financial institutions crashed the entire global economy. Turns out, in the short term playing hot potato with debt derivatives backed by imaginary money and fraud is a great business model. In the long term, not so much.

It’s not even necessarily that corporate executives are being stupid. Sometimes they are, which can result in things like sinking more money than it cost the US government to put the sun in a bomb into the worst VR game ever. But usually it’s just greed and shortsightedness.

«

If you’re wondering why Hutchins’s name is familiar, it’s because he’s the guy who stopped the North Korean WannaCry ransomware attack in 2017 by, of all things, registering the domain it was trying to contact. (His About page makes worrying reading for anyone getting into cybersecurity.)
unique link to this extract


Nabiha Syed remakes Mozilla Foundation in AI, Trump era • The Register

Thomas Claburn:

»

Last May, Nabiha Syed became executive director of The Mozilla Foundation, and a year on, reached out to The Register to share her vision for an organization humbled by layoffs and confronted by stochastic parrots and stochastic politics.

Syed said that the Mozilla Foundation is sworn to defend the open web and has been doing so for the past two decades. But the challenge is different now.

“We sort of knew what the internet was and it went through phases,” said Syed. “But now, with the onslaught of AI slop and surveillance capitalism running amok, we really have to go back to first principles: why do we care about the open internet, the open web?”

The opportunity for the foundation, she said, is to rethink what a positive future looks like and to figure out how to mobilize people to help realize that vision, because change requires community participation.

«

This is one of those “would you like to talk to the new head of Mozilla?” interviews that PRs will offer journalists, who will inwardly groan because they know nothing of any consequence will emerge but go through with it anyway in the hope that when some drama occurs, they’ll have the faint chance of getting the inside track.

And so it transpires here. The interview is a big nothingburger, lighter than air. Syed doesn’t, for example, tackle the fact that Mozilla’s principal source of income is Google, which is the company contributing mightily to AI slop, surveillance capitalism and the non-open web. Very much a case of one’s income depending on not understanding something, à la Upton Sinclair.
unique link to this extract


LinkedIn is the fakest platform of them all • Prospect Magazine

Ben Clark:

»

“LinkedIn doesn’t know me anymore,” someone complained to me recently. “What do you mean?” I asked. She explained that the platform has replaced the old “recommended jobs” section, which used to show her quite useful job openings based on her previous searches and CV, with an AI search engine that asks you to describe your ideal job in freeform text. The results it brings up aren’t nearly as relevant.

This is just one of many ways in which the professionals’ social media platform, which has embraced artificial intelligence with ferocious zeal, is being gradually “enshittified”, to borrow tech writer Cory Doctorow’s phrase. Each new embrace of AI tools promises to make hiring, job searching, networking and even posting a bit easier or more fruitful. Instead, AI seems to have made the user’s experience more alienating, and to have helped foster a genre of LinkedIn-speak which bears all the hallmarks of the worst AI writing on the internet.  

Let’s start with my opening example—which, to be fair, is in beta testing mode and can be switched off. Instead of the AI assistant being like an intuitive digital servant, pulling up the best jobs based on your ruminations, users are confronted with a new and annoying task: crafting prompts for the AI. But the non-AI search bar worked perfectly well as it was.

Then there is the AI writing assistant, which is available to users who pay for the platform’s £29.99 per month premium service to help them craft their posts. LinkedIn’s CEO Ryan Roslansky recently admitted that users aren’t using the tool as much as he anticipated. It seems that sounding like a human being to your colleagues and clients is put at, well, a premium.

And then there are the ways in which users are deploying outputs from external AI chatbots on the platform, something with which LinkedIn is struggling to cope. According to the New York Times, the number of job applications submitted via the platform increased by 45% in the year to June, now clocking in at an average of 11,000 per minute.

«

My new favourite Twitter/X account is LinkedIn Lunatics, which collects random bits of utter madness from its users. It’s the most bizarre place, based on that evidence.
unique link to this extract


October 2024: How Intel got left behind in the AI chip boom • The New York Times

Steve Lohr and Don Clark in October 2024:

»

In 2005, there was no inkling of the artificial intelligence boom that would come years later. But directors at Intel, whose chips served as electronic brains in most computers, faced a decision that might have altered how that transformative technology evolved.

Paul Otellini, Intel’s chief executive at the time, presented the board with a startling idea: buy Nvidia, a Silicon Valley upstart known for chips used for computer graphics. The price tag: as much as $20bn.

Some Intel executives believed that the underlying design of graphics chips could eventually take on important new jobs in data centers, an approach that would eventually dominate AI systems.

But the board resisted, according to two people familiar with the boardroom discussion who spoke only on the condition of anonymity because the meeting was confidential. Intel had a poor record of absorbing companies. And the deal would have been by far Intel’s most expensive acquisition.

Confronting skepticism from the board, Mr. Otellini, who died in 2017, backed away, and his proposal went no further. In hindsight, one person who attended the meeting said, it was “a fateful moment.”

«

You’d have had to have the most crystalline of balls in 2005 to predict how the AI (and graphics) space was going to turn out; bitcoin was still five years from being invented, AI was a niche pursuit of academia, and smartphones weren’t yet a significant thing. I’d predict that Nvidia wouldn’t have sold, but also that if it had, Intel would have screwed it up.

For all that, one can see Intel as being like Microsoft: didn’t get mobile, and effectively got passed by AI (Microsoft has clung on to OpenAI). The two big revolutions of this century in computing, and Intel wasn’t anywhere near them.

Intel, now, is flirting with disaster.
unique link to this extract


The $6 revolution: how generic weight loss drugs could save millions of lives • Overmatter

Natasha Loder and Peter Singer:

»

In about five months the patent for a key weight loss medication, semaglutide, will lapse in a large number of countries around the world including India, China, Brazil, and Canada—although not in the most lucrative markets. The wider availability of this drug is likely to herald the beginning of a step change in the global treatment of obesity and metabolic diseases. It is a big deal. Obesity alone has risen relentlessly over past decades, more than doubling in adults since 1990. Over a billion people live with obesity, and its health consequences, worldwide.

The potential for generic and cheaper semaglutide for wide use in the treatment of obesity, diabetes, and a range of other conditions is so mind-bogglingly large, we think the time is ripe for many governments to start to plan for how to maximise their potential. Firms in China and India, in particular, have been preparing for the expiry of the patent in their territories by developing the capacity to deliver large quantities of what will be cheaper, generic, i.e. biosimilar, versions of this drug. Reports suggest that drugs which today cost over $1,000 a month in America will be manufactured in India for less than $6 a month. Although the price of these drugs is unknown, Indian manufacturers typically work at high volume and low cost. Demand will be global. (Although individuals in countries where the patent has not expired, such as America, UK, Europe, Japan, and Australia, will not be able to buy these generic medicines legally.)

For those living in countries where semaglutide (sold under the brand names Ozempic and Wegovy) patents are expiring, as prices tumble, so too will waistlines, as demand is expected to soar. Countries could transform the treatment of diabetes, obesity and overweight, along with achieving better outcomes for those with poor cardiovascular health, poor metabolic health, and liver and kidney disease.

«

Well, this is going to be interesting, isn’t it?
unique link to this extract


How a mathematical paradox allows infinite cloning • Quanta Magazine

Max Levy:

»

Imagine two friends hiking in the woods. They grow hungry and decide to split an apple, but half an apple feels meager. Then one of them remembers one of the strangest ideas she’s ever encountered. It’s a mathematical theorem involving infinity that makes it possible, at least in principle, to turn one apple into two.

That argument is called the Banach-Tarski paradox, after the mathematicians Stefan Banach and Alfred Tarski, who devised it in 1924. It proves that according to the fundamental rules of mathematics, it’s possible to split a solid three-dimensional ball into pieces that recombine to form two identical copies of the original. Two apples out of one.

“Right away, one sees that it’s completely counterintuitive,” said Dima Sinapova of the University of Illinois, Chicago.

The paradox arises from one of the most mind-bending concepts in math: infinity.

…Banach and Tarski realized you can turn one sphere into two by partitioning the uncountably infinite set of points it contains into — get ready for it — an uncountably infinite number of countably infinite sets. The separation occurs through a very specific dissection procedure.

«

Why not start the day with a bit of mind-expanding maths? And you get an infinite number of apples as a bonus. (If you can make two from one, you can keep on doing it, after all.)
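
For anyone who wants the formal version, the standard textbook statement of the theorem (my summary, not quoted from Quanta) runs roughly as follows:

% The closed unit ball B in R^3 can be cut into finitely many disjoint pieces that,
% moved only by rotations and translations, reassemble into two copies of B.
% The pieces are non-measurable (the construction relies on the axiom of choice),
% which is why no volume is actually conjured out of nothing.
\[
  B = A_1 \cup A_2 \cup \dots \cup A_n \quad (A_i \text{ pairwise disjoint}),
\]
\[
  g_1(A_1) \cup \dots \cup g_k(A_k) = B_1, \qquad
  g_{k+1}(A_{k+1}) \cup \dots \cup g_n(A_n) = B_2,
\]
% where each g_i is an isometry of R^3 and B_1, B_2 are each congruent to B;
% Robinson later showed that five pieces suffice.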
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified