Start Up No.2508: Xi and Putin dream of living forever, Wi-Fi measures heart rate, GLP-1 microdosers, Instagram on iPad!, and more


Folk singer Emily Portman was puzzled recently when fans welcomed her new album. She didn’t write it. CC-licensed photo by Paul Hudson on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


A selection of 10 links for you. Rhyming slang. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


Putin tells Xi organ transplants could offer immortality • Financial Times

Anastasia Stognei:

»

Vladimir Putin and China’s Xi Jinping discussed the potential for science to extend the lifespans of men of their age, with the Russian president even suggesting organ transplants might allow them to live forever.

Russia’s president told a press conference in China on Wednesday that the leaders had talked about longevity in a conversation first inadvertently broadcast on a television audio feed.

“Modern means and methods of improving health, even various surgical [operations] involving organ replacement, allow humanity to hope that . . . life expectancy will increase significantly,” Putin said during a televised press briefing.

His comments came after small talk between Putin and Xi was caught by a mic and broadcast as they headed to the military parade in Beijing.

On the recording the voice of a Chinese-Russian interpreter is heard translating Xi as saying: “In the past, people rarely reached the age of 70; today, they say that at 70 you are still a child.”

A translator for Putin then says in Chinese that advances in biotechnology mean that human organs could be continuously transplanted so that a person could “become younger” and “could even become immortal”.

Xi then replies that there are predictions that “in the current century, humans might live to 150”.

«

Glad to say that nobody has figured out how to slow down or reverse the degradation of collagen, which basically holds our bodies together, no matter what the beauty adverts tell you. Your body will eventually fall apart, whether or not you’re still alive in it. So future generations shouldn’t have to worry about Immortal Xi or Everlasting Putin.
unique link to this extract


Emily Portman and musicians on the mystery of fraudsters releasing songs in their name • BBC News

Ian Youngs and Paul Glynn:

»

In July, award-winning singer Emily Portman got a message from a fan praising her new album and saying “English folk music is in good hands”.

That would normally be a compliment, but the Sheffield-based artist was puzzled.

So she followed a link the fan had posted and was taken to what appeared to be her latest release. “But I didn’t recognise it because I hadn’t released a new album,” Portman says.

“I clicked through and discovered an album online everywhere – on Spotify and iTunes and all the online platforms.

“It was called Orca, and it was music that was evidently AI-generated, but it had been cleverly trained, I think, on me.”

The 10 tracks had names such as Sprig of Thyme and Silent Hearth – which were “uncannily close” to titles she might choose. It was something that Portman, who won a BBC Folk Award in 2013, found “really creepy”.

When she clicked to listen, the voice – supposedly hers – was a bit off but sang in “a folk style probably closest to mine that AI could produce”, she says. The instrumentation was also eerily similar.

…There’s now a growing trend, though, for established (but not superstar) artists to be targeted by fake albums or songs that suddenly appear on their pages on Spotify and other streaming services. Even dead musicians have had AI-generated “new” material added to their catalogues.

Portman doesn’t know who put the album up under her name or why. She was falsely credited as performer, writer and copyright holder. The producer listed in the credits was Freddie Howells – but she says that name doesn’t mean anything to her, and there’s no trace online of a producer or musician of that name.

«

Very odd if she’s listed as the copyright holder (might want to check the fine print on that one) since she’d then get paid for the AI-generated stuff.
unique link to this extract


Wi-Fi signals can measure heart rate—no wearables needed • University of California Santa Cruz

Emily Cerf:

»

A team of researchers at UC Santa Cruz’s Baskin School of Engineering that included Professor of Computer Science and Engineering Katia Obraczka, Ph.D. student Nayan Bhatia, and high school student and visiting researcher Pranay Kocheta designed a system for accurately measuring heart rate that combines low-cost WiFi devices with a machine learning algorithm.

Wi-Fi devices push out radio frequency waves into physical space around them and toward a receiving device, typically a computer or phone. As the waves pass through objects in space, some of the wave is absorbed into those objects, causing mathematically detectable changes in the wave.

Pulse-Fi uses a Wi-Fi transmitter and receiver, which runs Pulse-Fi’s signal processing and machine learning algorithm. They trained the algorithm to distinguish even the faintest variations in signal caused by a human heart beat by filtering out all other changes to the signal in the environment or caused by activity like movement.

“The signal is very sensitive to the environment, so we have to select the right filters to remove all the unnecessary noise,” Bhatia said.

The team ran experiments with 118 participants and found that after only five seconds of signal processing, they could measure heart rate with clinical-level accuracy. At five seconds of monitoring, they saw only half a beat-per-minute of error, with longer periods of monitoring time increasing the accuracy.

The team found that the Pulse-Fi system worked regardless of the position of the equipment in the room or the person whose heart rate was being measured—no matter if they were sitting, standing, lying down, or walking, the system still performed. For each of the 118 participants, they tested 17 different body positions with accurate results.

«

OK, but what if you’ve got a dog? If you’ve got two people? Looking forward to this being a throwaway line in a spy thriller in a year or two. “Hack into the home Wi-Fi, see how many people are in there.” *frantic typing* “OK there are three people, two adults and a child.. no, a dog.”
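
If you’re wondering how this kind of sensing works under the hood, here is a minimal, illustrative sketch: band-pass filter a stream of Wi-Fi channel state information (CSI) amplitudes around plausible heartbeat frequencies, then count peaks. The sample rate, the 0.8–2.5Hz band and the peak-spacing rule are my assumptions for illustration, not details from the UCSC paper.

# Illustrative sketch only: estimate heart rate from a stream of Wi-Fi CSI
# amplitudes by band-pass filtering around plausible cardiac frequencies.
# The sample rate, filter band and peak spacing are assumptions, not details
# taken from the Pulse-Fi paper.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def estimate_heart_rate_bpm(csi_amplitude, fs=100.0):
    """csi_amplitude: 1-D array of CSI amplitude samples; fs: sample rate in Hz."""
    x = csi_amplitude - np.mean(csi_amplitude)               # remove slow baseline drift
    b, a = butter(4, [0.8, 2.5], btype="bandpass", fs=fs)    # roughly 48-150 beats per minute
    filtered = filtfilt(b, a, x)
    peaks, _ = find_peaks(filtered, distance=int(0.4 * fs))  # at least 0.4s between beats
    minutes = len(filtered) / fs / 60.0
    return len(peaks) / minutes

# Quick check with synthetic data: a 1.2Hz "heartbeat" buried in noise.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
fake_csi = 0.05 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.02, t.size)
print(round(estimate_heart_rate_bpm(fake_csi, fs)))          # roughly 72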
unique link to this extract


How microdosing GLP-1 drugs became a wellness “craze” • The Washington Post

Daniel Gilbert:

»

As a 62-year-old grandmother in Maine, Christine Babb doesn’t identify with the biohacker bros who try experimental medications to optimize their health. But after side effects from her first dose of a weight-loss drug hit her “like a Mack truck,” she, too, decided to experiment.

She drew up a syringe of the GLP-1 drug tirzepatide in June that was just 40% of the standard starting dose. The side effects she’d felt — constipation and extreme fatigue — went away with the smaller shot, she said, while her blood pressure, joint pain and inflammation improved. That experience, coupled with reading studies about the potential of GLP-1 drugs to protect brain health, has persuaded Babb to take a small dose indefinitely in a bid to fend off diseases that come with age.

There is virtually no published scientific evidence that proves taking smaller-than-standard doses of tirzepatide or semaglutide — the active ingredients in Zepbound and Ozempic, respectively — is safe or effective. But that hasn’t stopped patients like Babb from trying nonstandard doses for a broad array of reasons, including expectations of improved wellness and longevity.

…There is also evidence that stimulating the GLP-1 hormone can guard against inflammation in the brain itself, which is linked to Alzheimer’s and Parkinson’s.

In interviews, patients and medical providers say they’ve seen real benefits in microdosing. Taking less medicine, they reason, should also reduce the gastrointestinal side effects that are common with GLP-1 drugs. It’s also undeniably cheaper to use less of the brand-name medications, which have list prices upward of $1,000 a month.

But anecdotal patient experiences, outside of controlled clinical trials, don’t prove that microdosing works, scientists say.

«

unique link to this extract


AI startup Flock thinks it can eliminate all crime in America • Forbes

Thomas Brewster:

»

38-year-old CEO and cofounder Garrett Langley presides over the $300m (estimated 2024 sales) company responsible for it all. Since its founding in 2017, Flock, which was valued at $7.5bn in its most recent funding round, has quietly built a network of more than 80,000 cameras pointed at highways, thoroughfares and parking lots across the U.S.

They record not just the license plate numbers of the cars that pass them, but their make and distinctive features—broken windows, dings, bumper stickers. Langley estimates its cameras help solve 1 million crimes a year. Soon they’ll help solve even more. In August, Flock’s cameras will take to the skies mounted on its own “made in America” drones. Produced at a factory the company opened earlier this year near its Atlanta offices, they’ll add a new dimension to Flock’s business and aim to challenge Chinese drone giant DJI’s dominance.

Langley offers a prediction: In less than 10 years, Flock’s cameras, airborne and fixed, will eradicate almost all crime in the U.S. (He acknowledges that programs to boost youth employment and cut recidivism will help.) It sounds like a pipe dream from another AI-can-solve-everything tech bro, but Langley, in the face of a wave of opposition from privacy advocates and Flock’s archrival, the $2.1 billion (2024 revenue) police tech giant Axon Enterprise, is a true believer. He’s convinced that America can and should be a place where everyone feels safe. And once it’s draped in a vast net of U.S.-made Flock surveillance tech, it will be.

“I’ve talked to plenty of activists who think crime is just the cost of modern society. I disagree,” Langley says. “I think we can have a crime-free city and civil liberties. . . . We can have it all.” In municipalities in which Flock is deployed, he adds, the average criminal—those between 16 and 24 committing nonviolent crime—“will most likely get caught.”

«

This is, surely, the setup of one of the strands of the TV series Elementary. Or possibly The Dark Knight. Something Gotham-y, anyway.
unique link to this extract


The dirty truth behind the e-waste recycling industry • Rest of World

Yashraj Sharma:

»

India is the world’s third-largest producer of e-waste, having generated approximately 1.75 million metric tons in the fiscal year ending 2024, an increase of nearly 75% over the last five years. Close to 60% of e-waste in the country remains unrecycled — which represents both an environmental concern and a financial opportunity. In addition to domestic e-waste, the country is also a magnet for e-waste from countries such as Yemen, the United States, and the Dominican Republic, making India the third-largest importer of it in the world, from both legal and illegal sources. 

Used electronics contain a treasure trove of recoverable raw materials including gold, silver, copper, and rare earth elements. These can be reused in new electronic devices or repurposed entirely. Thanks to government regulation, there’s also money to be made in just processing the recycled materials. Altogether, that adds up to a $1.56bn industry, according to one 2023 measure by an Indian market analytics firm.

A small fraction of those who work in e-waste recycling in India are employed within the country’s regulated industry. In shiny facilities owned by companies like Attero, Ecoreco, and Recyclekaro, workers often have the benefit of well-managed operations that implement protective measures. 

But nearly 95% of those working in the e-waste industry — at least a million people, by some estimates — are in the informal sector. These include big traders, dismantlers, smelters, and small-time refurbishers. The lawless conditions of India’s e-waste recycling industry have given rise to a complex economy, which includes shadowy organizations that call the shots, recycling dons who control swaths of the network, and workers like Khan and Iqrar.  

«

unique link to this extract


The Instagram iPad app is finally here • WIRED

Julian Chokkattu:

»

Even before Apple began splitting its mobile operating system from iOS into iOS and iPadOS, countless apps adopted a fresh user interface that embraced the larger screen size of the tablet. This was the iPad’s calling card at the time, and those native apps optimized for its precise screen size are what made Apple’s device stand out from a sea of Android tablets that largely ran phone apps inelegantly blown up to fit the bigger screen.

Except Instagram never went iPad-native. Open the existing app right now, and you’ll see the same phone app stretched to the iPad’s screen size, with awkward gaps on the sides. And you’ll run into the occasional problems when you post photos from the iPad, like low-resolution images. Weirdly, Instagram did introduce layout improvements for folding phones a few years ago, which means the experience is better optimized on Android tablets today than it is on iPad.

Instagram’s chief, Adam Mosseri, has long offered excuses, often citing a lack of resources despite being a part of Meta, a multibillion-dollar company. Instagram wasn’t the only offender—Meta promised a WhatsApp iPad app in 2023 and only delivered it earlier this year. (WhatsApp made its debut on phones in 2009.)

The fresh iPad app (which runs on iPadOS 15.1 or later) offers more than just a facelift. Yes, the Instagram app now takes up the entire screen, but the company says users will drop straight into Reels, the short-form video platform it introduced five years ago to compete with TikTok. The Stories module remains at the top, and you’ll be able to hop into different tabs via the menu icons on the left. There’s a new Following tab (the people icon right below the home icon), and this is a dedicated section to see the latest posts from people you actually follow.

«

At very, very long last. But this is also a puzzle: why (a) has it taken SO long to get WhatsApp and Instagram on the iPad, and (b) have the iPad versions of both apps appeared within a few months of each other?
unique link to this extract


The AI breakthrough that uses almost no power to create images • Techxplore

Paul Arnold:

»

In a paper published in the journal Nature, Aydogan Ozcan, from the University of California Los Angeles, and his colleagues describe the development of an AI image generator that consumes almost no power.

AI image generators use a process called diffusion to generate images from text. First, they are trained on a large dataset of images and repeatedly add statistical noise, a kind of digital static, until the image has disappeared.

Then, when you give AI a prompt such as “create an image of a house,” it starts with a screen full of static and then reverses the process, gradually removing the noise until the image appears. If you want to perform large-scale tasks, such as creating hundreds of millions of images, this process is slow and energy intensive.

The new diffusion-based image generator works by first using a digital encoder (that has been trained on publicly available datasets) to create the static that will ultimately make the picture. This requires a small amount of energy. Then, a liquid crystal screen known as a spatial light modulator (SLM) imprints this pattern onto a laser beam. The beam is then passed through a second decoding SLM, which turns the pattern in the laser into the final image.

Unlike conventional AI, which relies on millions of computer calculations, this process uses light to do all the heavy lifting. Consequently, the system uses almost no power.

«

It would be lovely if this were to become widespread, but one has to have slight doubts.
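
To make the “start with static, then strip the noise away” idea concrete, here is a toy sketch of a generic reverse-diffusion loop. It is purely illustrative, standard diffusion-model logic rather than anything from the Nature paper, and the “denoiser” here is a stand-in for the trained encoder and light modulators the UCLA team uses.

# Toy reverse-diffusion loop, for intuition only. A "denoiser" is applied
# repeatedly to pure static until a sample emerges. Not the UCLA optical
# system: their denoising happens in light, via spatial light modulators.
import numpy as np

def denoise_step(noisy, step, total_steps, denoiser):
    """One reverse step: predict some of the noise and remove it."""
    predicted_noise = denoiser(noisy, step)   # a trained model in real systems
    alpha = 1.0 - step / total_steps          # crude schedule, purely for illustration
    return noisy - alpha * predicted_noise

def generate(shape, total_steps, denoiser, rng=np.random.default_rng(0)):
    image = rng.normal(size=shape)            # start from pure static
    for step in reversed(range(total_steps)):
        image = denoise_step(image, step, total_steps, denoiser)
    return image

# Toy "denoiser" that nudges the static toward a checkerboard target,
# standing in for a model trained on real images.
target = np.indices((8, 8)).sum(axis=0) % 2
sample = generate((8, 8), total_steps=20, denoiser=lambda img, step: img - target)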
unique link to this extract


Apple revokes EU distribution rights for torrent client, leaving developer in the dark • TorrentFreak

Ernesto Van der Sar:

»

While alternative app stores operate independently and are required by EU law, Apple is still in a position to exert some control. This became apparent a few weeks ago, when iTorrent users suddenly ran into trouble when installing the app.

In July, several users complained that they were unable to download iTorrent from AltStore PAL. Initially the cause of the problem was unclear but the app’s developer, XITRIX, later confirmed that Apple itself had stepped in.

Apparently, Apple had revoked the developer’s “alternative distribution” right, which is required to publish apps in alternative stores, including AltStore PAL.

Given Apple’s long history of banning torrent apps from its own store, it’s tempting to conclude that the company stepped in for the same reason here. For now, however, there’s no confirmation that’s indeed the case.

Speaking directly with TorrentFreak, iTorrent developer Daniil Vinogradov (XITRIX) says that Apple did not reach out to him regarding the revocation of his alternative EU distribution rights.

Soon after the issues appeared, Vinogradov sent a support request to Apple seeking clarification, but that wasn’t helpful either. Instead, Apple responded with a generic message related to App Store issues.

After another follow-up last week, Apple informed the developer that their escalation team is looking into it, but nothing further. “I still have no idea if it was my fault or Apple’s, and their responses make no sense,” Vinogradov says.

…A day after publication, Apple informed us that the distribution rights (notarization) were revoked due to sanctions-related rules.

“Notarization for this app was removed in order to comply with government sanctions-related rules in various jurisdictions. We have communicated this to the developer,” Apple told us.

«

“Sanctions-related rules” to me sounds like Russia, or possibly Iran.
unique link to this extract


Fantasy football nerds are using AI to get an edge in their leagues this year • Fast Company

Marty Swant:

»

This fantasy football season, Aaron VanSledright is letting his bot call the shots.

Ahead of the NFL season, the Chicago-based cloud engineer built a custom AI draft agent that pulls real-time data from ESPN and FantasyPros, factoring in last-minute intel like injuries and roster cuts.

Using his background in coding and cloud computing, VanSledright spun up the agent in just a week with Anthropic’s Claude large language models. He also tapped Amazon Web Services tools, including the new Strands SDK, which helps developers launch agents with just a few lines of code.

“Let’s see how well the AI performs against other humans, because nobody else in my league is doing this,” he tells Fast Company.

«

That’s all you can read of the story if you’re not a premium subscriber, but even so, it’s enough to prompt the question: isn’t the point of these leagues to do it yourself? For sort-of fun? Is there nothing AI can’t ruin?
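
For anyone tempted to ruin their own league the same way, the general shape of such an agent is simple enough. Below is a hypothetical sketch using Anthropic’s Python SDK; it is not VanSledright’s agent, the prompt, toy data and model name are placeholders, and the real thing pulled live ESPN and FantasyPros data and used AWS’s agent tooling.

# Hypothetical sketch of a draft-advice bot in the spirit described above.
# Not the real agent: the prompt, toy data and model name are illustrative.
# Requires the official SDK (pip install anthropic) and an ANTHROPIC_API_KEY.
import anthropic

def draft_advice(available_players, my_roster, injury_notes):
    client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY from the environment
    prompt = (
        "You are a fantasy football draft assistant.\n"
        f"My roster so far: {my_roster}\n"
        f"Players still available: {available_players}\n"
        f"Latest injury and roster news: {injury_notes}\n"
        "Recommend my next pick and explain the reasoning in two sentences."
    )
    response = client.messages.create(
        model="claude-sonnet-4-20250514",   # placeholder; use whichever Claude model you have access to
        max_tokens=300,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

print(draft_advice(["RB A. Example", "WR B. Example"],
                   ["QB C. Example"],
                   "RB A. Example was limited in practice this week"))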
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Start Up No.2507: Google gets off lightly on antitrust remedies, France’s unpolluting diesels, the sideloading dilemma, and more


Grapes benefit significantly from having solar panels positioned above them, a French study has found. CC-licensed photo by judy dean on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


A selection of 9 links for you. Cheers! I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


Judge: Google keeps Chrome but is barred from exclusive search deals • CNBC

Jennifer Elias:

»

A federal judge ruled Tuesday that the company can keep its Chrome browser but will be barred from exclusive contracts and must share search data.

Alphabet shares popped 8% in extended trading as investors celebrated what they viewed as minimal consequences from a historic defeat last year in the landmark antitrust case that found it held an illegal monopoly in its core market of internet search.

U.S. District Judge Amit Mehta ruled against the most severe consequences that were proposed by the U.S. Department of Justice, including selling off its Chrome browser, which provides data that helps its advertising business deliver targeted ads. 

“Google will not be required to divest Chrome; nor will the court include a contingent divestiture of the Android operating system in the final judgment,” the decision stated. “Plaintiffs overreached in seeking forced divestiture of these key assets, which Google did not use to effect any illegal restraints.”

The company can make payments to preload products, but it cannot have exclusive contracts, the decision stated.

The DOJ asked Google to stop the practice of “compelled syndication,” which refers to the practice of making certain deals with companies to ensure its search engine remains the default choice in browsers and smartphones. 

Google pays Apple billions of dollars per year to be the default search engine on iPhones. It’s lucrative for Apple and a valuable way for Google to get more search volume and users.

Apple shares rose 4% on Tuesday after hours.

«

This is a complete cop-out by the judge. (Here’s the decision.) The trial decision was that Google has a monopoly and has abused it. And all it has to do is “share search data”? The data-sharing remedies are described from p128 of the decision. Google gets to charge a “marginal cost” on some search data – so expect big fights over what “marginal” means.

The reason given for not blocking payments? It might hurt Apple, Mozilla and Android OEMs. So an illegal monopoly must be propped up because of its dependents; hurting them “would hurt consumer welfare”, Judge Mehta says.
unique link to this extract


Is it possible to allow sideloading *and* keep users safe? • Terence Eden’s Blog

Terence Eden:

»

Do we want to live in a future where our computers refuse to obey our commands? No! Neither law nor technology should conspire to reduce our freedom to compute.

There are, I think, two small cracks in that argument. The first is that a user has no right to run anyone else’s code, if the code owner doesn’t want to make it available to them. Consider a bank which has an app. When customers are scammed, the bank is often liable. The bank wants to reduce its liability so it says “you can’t run our app on a rooted phone”.

Is that fair? Probably not. Rooting allows a user to fully control and customise their device. But rooting also allows malware to intercept communications, send commands, and perform unwanted actions. I think the bank has the right to say “your machine is too risky – we don’t want our code to run on it.”

The same is true of video games with strong “anti-cheat” protection. It is disruptive to other players – and to the business model – if untrustworthy clients can disrupt the game. Again, it probably isn’t fair to ban users who run on permissive software, but it is a rational choice by the manufacturer. And, yet again, I think software authors probably should be able to restrict things which cause them harm.

So, from their point of view it is pragmatic to insist that their software can only be loaded from a trustworthy location.

But that’s not the only thing Google is proposing. Let’s look at their announcement:

»

We’ve seen how malicious actors hide behind anonymity to harm users by impersonating developers and using their brand image to create convincing fake apps. The scale of this threat is significant: our recent analysis found over 50 times more malware from internet-sideloaded sources than on apps available through Google Play.

«

Back in the early days of Android, you could just install any app and it would run, no questions asked. That was a touchingly naïve approach to security – extremely easy to use but left users vulnerable. A few years later, Android changed to show users the permissions an app was requesting.

No rational user would install a purported battery app with that scary list of permissions, right? Wrong! We know that users don’t read and they especially don’t read security warnings. There is no UI tweak you can do to prevent users bypassing these scary warnings. There is no amount of education you can provide to reliably make people stop and think.

…I’ve tried to be pragmatic, but there’s something of a dilemma here:
• Users should be free to run whatever code they like
• Vulnerable members of society should be protected from scams.

«

There’s a fair bit more to his argument. The sideloading debate goes on.
unique link to this extract


AI is unmasking ICE officers. Can Washington do anything about it? • POLITICO

Alfred Ng:

»

An activist has started using artificial intelligence to identify Immigration and Customs Enforcement agents beneath their masks — a use of the technology sparking new political concerns over AI-powered surveillance.

Dominick Skinner, a Netherlands-based immigration activist, estimates he and a group of volunteers have publicly identified at least 20 ICE officials recorded wearing masks during arrests. He told POLITICO his experts are “able to reveal a face using AI, if they have 35% or more of the face visible.”

The AI-powered project adds a new twist to the debates over both ICE masking and government surveillance tools, as immigration enforcement becomes more widespread and aggressive.

ICE says its agents need to wear masks to prevent being unfairly harassed for doing their jobs. To their critics, agents in masks have become a potent symbol of unaccountable government force. The masking, and the counter-campaign to identify agents, has prompted a crossfire of bills on Capitol Hill.

ICE agents “don’t deserve to be hunted online by activists using AI,” said Sen. James Lankford (R-Okla.), who chairs the Senate Homeland Security subcommittee on border management and the federal workforce.

Some Democrats concerned about the masking are pushing for regulations to make it easier to identify law enforcement officials — but they still say they’re uneasy that vigilante campaigns have begun using technology to do it.

Sen. Gary Peters (D-Mich.), who co-sponsored a bill called the VISIBLE Act to require ICE officials to clearly identify themselves, has “serious concerns about the reliability, safety and privacy implications of facial recognition tools, whether used by law enforcement … or used by outside groups to identify agents,” an aide told POLITICO.

«

We can be sure that Peters’s bill will never get passed, and that the activists are going to continue going at this: you can’t hide the technology, and cameras are everywhere. But there wouldn’t be this incentive to identify ICE agents if people didn’t feel they were behaving like authoritarian goons.
unique link to this extract


Amid rising geopolitical strains, oil markets face new uncertainties as the drivers of supply and demand growth shift • IEA

»

Oil 2025 provides in-depth analysis of the latest data and forecasts for evolving oil supply, demand, refining and trade dynamics through to 2030, going beyond the near-term analysis provided in the IEA’s monthly Oil Market Report.

It highlights several important trends that could considerably reshape global oil markets over the medium term. According to the report, China – which has driven the growth in global oil demand for well over a decade – is set to see its consumption peak in 2027, following a surge in electric vehicle sales and the continued deployment of high-speed rail and trucks running on natural gas. At the same time, US oil supply is now expected to grow at a slower pace as companies scale back spending and focus on capital discipline – although the United States remains the single largest contributor to non-OPEC supply growth in the coming years.

In this context, global oil demand is forecast to increase by 2.5 million barrels per day (mb/d) between 2024 and 2030, reaching a plateau of around 105.5 mb/d by the end of the decade. At the same time, global oil production capacity is forecast to rise by more than 5 mb/d to 114.7 mb/d by 2030. This growth is set to be dominated by robust gains in natural gas liquids (NGLs) and other non-crude liquids.

…According to the report, accelerating sales of electric cars – which reached a record 17m in 2024 and are on course to surpass 20m in 2025 – have kept a peak in global oil demand on the horizon. Based on the current outlook, electric vehicles are set to displace a total of 5.4 mb/d of global oil demand by the end of the decade.

«

The idea of China’s oil demand peaking is quite something. That’s momentous.
unique link to this extract


Agrivoltaics can increase grape yield by up to 60% • PV Magazine International

Gwénaëlle Deboutte:

»

Sun’Agri, a French agrivoltaics specialist, has shared the results of its 2024 harvests at two pilot agrivoltaic sites in southern France. The sites in Domaine de Nidolères in the Pyrénées Orientales tested three grape varieties.

The results showed that grape yields under solar panels were 20% to 60% higher than in areas without PV. The highest increase, 60%, was seen in Chardonnay grapes, followed by Marselan (30%) and Grenache blanc (20%).

In a second trial in Vaucluse, southeastern France, the yield increase remained over 30%, with or without irrigation.

Sun’Agri attributed the gains to the agrivoltaic system’s ability to optimize the microclimate, moderating temperatures, increasing humidity, and reducing irrigation needs by 20% to 70%.

It said it also helps protect against frost, preventing temperature drops of up to 2ºC. As a result, plants’ survival rates improve, with mortality reduced by 25% to 50%.

«

This is not at all what the opponents of solar panels would like to hear. The picture shows the panels mounted about 1.5m above the ground, which is higher than usual. But if you get better grapes…
unique link to this extract


Adblue: the law and its implementation • LawShun

Kara Sears:

»

AdBlue is a diesel exhaust fluid that helps vehicles meet strict Euro 6 emission regulations. It is commonly found in Euro 6 diesel vehicles registered from September 2015 onwards. The Euro 6 regulation was first introduced in September 2014 and later imposed for all new cars in September 2015.

AdBlue is a non-toxic, non-flammable, odourless, and biodegradable fluid. It is made from a urea and water solution, which is stored in a separate tank within the vehicle. The urea contains ammonia, which reacts with nitrogen oxide (NOx) gas and prevents it from being released into the atmosphere. NOx is linked to a range of respiratory diseases and is subject to strict limits when it comes to modern vehicle emissions and clean air zones.

AdBlue is injected into a modified section of the vehicle’s exhaust system, where it creates a chemical reaction, removing the harmful nitrogen-oxide emissions and converting them into harmless water and nitrogen. This process is known as Selective Catalytic Reduction (SCR). SCR technology is one of the most effective systems for reducing nitrogen oxide levels in the exhaust fumes outputted by diesel engines. SCR was first applied to automobiles by the Nissan Diesel Corporation in 2004.

«

I wondered yesterday how the bigger vehicles making deliveries in Paris could have reduced their emissions so drastically since 2007; this seems to be the answer. (Thanks sean for the link.)
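
For reference, and this is my summary of the standard SCR chemistry rather than anything in the article: the urea in AdBlue first breaks down to ammonia, roughly (NH2)2CO + H2O → 2NH3 + CO2, and that ammonia then reduces the nitrogen oxides over the catalyst, 4NH3 + 4NO + O2 → 4N2 + 6H2O, so what leaves the tailpipe is nitrogen and water vapour rather than NOx.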
unique link to this extract


DOGE put critical social security data at risk, whistleblower says • The New York Times

Nicholas Nehamas:

»

Members of the Department of Government Efficiency uploaded a copy of a crucial Social Security database in June to a vulnerable cloud server, putting the personal information of hundreds of millions of Americans at risk of being leaked or hacked, according to a whistleblower complaint filed by the Social Security Administration’s chief data officer.

The database contains records of all Social Security numbers issued by the federal government. It includes individuals’ full names, addresses and birth dates, among other details that could be used to steal their identities, making it one of the nation’s most sensitive repositories of personal information.

The account by the whistleblower, Charles Borges, underscores concerns that have led to lawsuits seeking to block young software engineers at the agency built by Elon Musk from having access to confidential government data. In his complaint, Mr. Borges said DOGE members copied the data to an internal agency server that only DOGE could access, forgoing the type of “independent security monitoring” normally required under agency policy for such sensitive data and creating “enormous vulnerabilities.”

Mr. Borges did not indicate that the database had been breached or used inappropriately.

But his disclosure stated that as of late June, “no verified audit or oversight mechanisms” existed to monitor what DOGE was using the data for or whether it was being shared outside the agency.

«

I’m sure there are absolutely no problems with taking absolutely everything and putting it in one place that could be accessed by pretty much anyone if just one flag wasn’t set right on the server command.
unique link to this extract


India bans real-money gaming, threatening a $23bn industry • TechCrunch

Jagmeet Singh:

»

India’s lower house of parliament on Wednesday passed a sweeping online gaming bill that, while promoting esports and casual gaming without monetary stakes, imposes a blanket ban on real-money games — threatening to disrupt billions of dollars in investment and significantly impact the real-money gaming industry, which could see widespread shutdowns.

Titled the Promotion and Regulation of Online Gaming Bill, 2025, the legislation aims to prohibit real-money games nationwide — whether based on skill or chance — and ban both their advertisement and associated financial transactions, as TechCrunch reported earlier based on its draft version.

“In this bill, priority has been given to the welfare of society and to avoid a big evil that is creeping into society,” India’s IT minister Ashwini Vaishnaw said in Parliament while introducing the bill.

The proposed legislation restricts banks and other financial institutions from allowing transactions for real-money games in the country. Anyone offering these games could face imprisonment for up to three years, a fine of up to ₹10m (approximately $115,000), or both. Additionally, celebrities promoting such games on any media platform could be liable for up to two years of imprisonment or a fine of ₹5m (roughly $57,000), the bill states.

Vaishnaw said the decision to bring the legislation was to address several incidents of harm, including cases where individuals reportedly died by suicide after losing money in games. However, industry stakeholders largely attribute these incidents to offshore betting and gambling apps, which many believe will not be addressed by this legislation.

«

Fingers crossed this actually works, though of course the offshore element means it’s going to be whack-a-mole.
unique link to this extract


F-35 pilot held 50-minute airborne conference call with engineers before fighter jet crashed in Alaska • CNN

Brad Lendon:

»

A US Air Force F-35 pilot spent 50 minutes on an airborne conference call with Lockheed Martin engineers trying to solve a problem with his fighter jet before he ejected and the plane plunged to the ground in Alaska earlier this year, an accident report released this week says.

The January 28 crash at Eielson Air Force Base in Fairbanks was recorded in a video that showed the aircraft dropping straight down and exploding in a fireball. The pilot ejected safely, suffering only minor injuries, but the $200 million fighter jet was destroyed.

An Air Force investigation blamed the crash on ice in the hydraulic lines in the nose and main landing gears of the F-35, which prevented them from deploying properly.

According to the report, after takeoff the pilot tried to retract the landing gear, but it would not do so completely. When lowering it again, it would not center, locking on an angle to the left. Attempts to fix the landing gear caused the fighter jet to think it was on the ground, ultimately leading to the crash.

After going through system checklists in an attempt to remedy the problem, the pilot got on a conference call with engineers from the plane’s manufacturer, Lockheed Martin, as the plane flew near the air base. Five engineers participated in the call, including a senior software engineer, a flight safety engineer and three specialists in landing gear systems, the report said.

…An inspection of the aircraft’s wreckage found that about one-third of the fluid in the hydraulic systems in both the nose and right main landing gears was water, when there should have been none.

The investigation found a similar hydraulic icing problem in another F-35 at the same base during a flight nine days after the crash, but that aircraft was able to land without incident.

«

Probably not the sort of thing you can sort out on a support call, to be honest. “Try turning it off and on.. no?”
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Start Up No.2506: OnlyFans piracy takedown woes, the sideloading conundrum, Google says Gmail secure, N2O drivers, and more


Summer 1976 is no longer one of the UK’s top five hottest on record, after 2025 set new records. CC-licensed photo by Ross Beresford on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


There’s another post coming this week at the Social Warming Substack on Friday at 0845 UK time. Free signup.


A selection of 9 links for you. Baked. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


How OnlyFans piracy is ruining the internet for everyone • 404 Media

Emanuel Maiberg:

»

The internet is becoming harder to use because of unintended consequences in the battle between adult content creators who are trying to protect their livelihoods and the people who pirate their content.

Porn piracy, like all forms of content piracy, has existed for as long as the internet. But as more individual creators make their living on services like OnlyFans, many of them have hired companies to send Digital Millennium Copyright Act takedown notices against companies that steal their content. As some of those services turn to automation in order to handle the workload, completely unrelated content is getting flagged as violating their copyrights and is being deindexed from Google search. The process exposes bigger problems with how copyright violations are handled on the internet, with automated systems filing takedown requests that are reviewed by other automated systems, leading to unintended consequences.

These errors show another way in which automation without human review is making the internet as we know it increasingly unusable. They also highlight the untenable piracy problem for adult content creators, who have little recourse to stop their paid content from being redistributed all over the internet.

I first noticed how bad some of these DMCA takedown requests are because one of them targeted 404 Media. I was searching Google for an article Sam wrote about Instagram’s AI therapists. I Googled “AI therapists 404 Media,” and was surprised it didn’t pop up because I knew we had covered the subject. Then I saw a note from Google at the bottom of the page noting Google had removed some search results “In response to multiple complaints we received under the US Digital Millennium Copyright Act.”

«

Turns out there are some, well, lax companies doing takedown demands. But, but, but I’m sure we were always told that piracy was good and meant people would spend more money on whatever the product is that’s being pirated! Or does that only work for Hollywood and commercial music?
unique link to this extract


What every argument about sideloading gets wrong • Hugotunius

Hugo Tunius:

»

Sideloading has been a hot topic for the last decade. Most recently, Google has announced further restrictions on the practice in Android. Many hundreds of comment threads have discussed these changes over the years. One point in particular is always made: “I should be able to run whatever code I want on hardware I own”. I agree entirely with this point, but within the context of this discussion it’s moot.

When Google restricts your ability to install certain applications they aren’t constraining what you can do with the hardware you own, they are constraining what you can do using the software they provide with said hardware. It’s through this control of the operating system that Google is exerting control, not at the hardware layer.

You often don’t have full access to the hardware either, and building new operating systems to run on mobile hardware is impossible, or at least much harder than it should be. This is a separate, and I think more fruitful, point to make. Apple is a better case study than Google here. Apple’s success with iOS partially derives from the tight integration of hardware and software. An iPhone without iOS is a very different product to what we understand an iPhone to be. Forcing Apple to change core tenets of iOS by legislative means would undermine what made the iPhone successful.

«

The “any code I want” argument does tend to slide past the pyramid of software layers that produce the screen you see (whether phone or PC). The processor has microcode, and then firmware above that, and the operating system (there’s also a parallel operating system for the cellular modem), and the application software. And the makers get to say what you do with their code. If you want to write all the code from the ground up, go ahead.
unique link to this extract


Tesla said it didn’t have critical data in a fatal crash. Then a hacker found it • The Washington Post

Trisha Thadani and Faiz Siddiqui:

»

Years after a Tesla driver using Autopilot plowed into a young Florida couple in 2019, crucial electronic data detailing how the fatal wreck unfolded was missing. The information was key for a wrongful death case the survivor and the victim’s family were building against Tesla, but the company said it didn’t have the data.

Then a self-described hacker, enlisted by the plaintiffs to decode the contents of a chip they recovered from the vehicle, found it while sipping a Venti-size hot chocolate at a South Florida Starbucks. Tesla later said in court that it had the data on its own servers all along.

The hacker’s discovery would become a key piece of evidence presented during a trial that began last month in Miami federal court, which dissected the final moments before the collision and ended in a historic $243m verdict against the company.

The pivotal and previously unreported role of a hacker in accessing that information points to how valuable Tesla’s data is when its futuristic technology is involved in a crash. While Tesla said it has produced similar data in other litigation, the Florida lawsuit reflects how a jury’s perception of Tesla’s cooperation in recovering such data can play into a judgment in the hundreds of millions of dollars.

…The batch of data the plaintiffs were after, internally referred to as a collision snapshot, showed exactly what the vehicle’s cameras detected before the crash, including the young woman who was killed. The plaintiffs’ attorneys said they believed the data would present a damning picture of the system’s shortcomings, and the hacker — who for years had been taking Autopilot computers apart and cloning their data — was confident he could find it.

“For any reasonable person, it was obvious the data was there,” the hacker told The Washington Post, speaking on the condition of anonymity for fear of retribution.

…It took the jury less than a day of deliberation to find Tesla 33% liable for the crash and responsible for $243m in punitive and compensatory damages.

«

unique link to this extract


Reports of Gmail security issue are inaccurate • Google blog

»

Gmail’s protections are strong and effective, and claims of a major Gmail security warning are false.

We want to reassure our users that Gmail’s protections are strong and effective. Several inaccurate claims surfaced recently that incorrectly stated that we issued a broad warning to all Gmail users about a major Gmail security issue. This is entirely false.

«

No author name provided. Clearly a response to the many clickbait stories about a “big security breach” of Gmail, which honestly never sounded realistic.
unique link to this extract


Summer 2025 is the warmest on record for the UK  • Met Office

»

Provisional Met Office statistics confirm that summer 2025 is officially the warmest summer on record for the UK.

Analysis by Met Office climate scientists has also shown that a summer as hot or hotter than 2025 is now 70 times more likely than it would be in a ‘natural’ climate with no human caused greenhouse gas emissions.  

The UK’s mean temperature from 1 June to 31 August stands at 16.10°C, which is 1.51°C above the long-term meteorological average. This surpasses the previous record of 15.76°C, set in 2018, and pushes the summer of 1976 out of the top five warmest summers in a series dating back to 1884. 

Met Office scientist Dr Emily Carlisle said: “Provisional Met Office statistics show that summer 2025 is officially the warmest on record with a mean temperature of 16.10°C, surpassing the previous record of 15.76°C set in 2018. 

…1976, which had a mean temperature of 15.70°C, has now dropped out of the top five warmest summers since records began in 1884, leaving all five warmest summers having occurred since 2000.

«

Expunging 1976 from the top five will hurt all those who continually shrug and say “well, it was hotter in my day”. Perhaps now they’ll start to listen to the message about climate change?
unique link to this extract


“Sliding into an abyss”: experts warn over rising use of AI for mental health support • The Guardian

Rachel Hall:

»

Vulnerable people turning to AI chatbots instead of professional therapists for mental health support could be “sliding into a dangerous abyss”, psychotherapists have warned.

Psychotherapists and psychiatrists said they were increasingly seeing negative impacts of AI chatbots being used for mental health, such as fostering emotional dependence, exacerbating anxiety symptoms, self-diagnosis, or amplifying delusional thought patterns, dark thoughts and suicide ideation.

Dr Lisa Morrison Coulthard, the director of professional standards, policy and research at the British Association for Counselling and Psychotherapy, said two-thirds of its members expressed concerns about AI therapy in a recent survey.

Coulthard said: “Without proper understanding and oversight of AI therapy, we could be sliding into a dangerous abyss in which some of the most important elements of therapy are lost and vulnerable people are in the dark over safety.

“We’re worried that although some receive helpful advice, other people may receive misleading or incorrect information about their mental health with potentially dangerous consequences. It’s important to understand that therapy isn’t about giving advice, it’s about offering a safe space where you feel listened to.”

Dr Paul Bradley, a specialist adviser on informatics for the Royal College of Psychiatrists, said AI chatbots were “not a substitute for professional mental healthcare nor the vital relationship that doctors build with patients to support their recovery”.

«

The very first chatbot – Eliza – was modelled on the patterns of speech used by a non-directive therapist, and that fooled people pretty well. We just have chattier ones which are more directive now.
unique link to this extract


Mastodon says it doesn’t ‘have the means’ to comply with age verification laws • TechCrunch

Sarah Perez:

»

Decentralized social network Mastodon says it can’t comply with Mississippi’s age verification law — the same law that saw rival Bluesky pull out of the state — because it doesn’t have the means to do so.

The social nonprofit explains that Mastodon doesn’t track its users, which makes it difficult to enforce such legislation. Nor does it want to use IP address-based blocks, as those would unfairly impact people who were traveling, it says.

The statement follows a lively back-and-forth conversation earlier this week between Mastodon founder and CEO Eugen Rochko and Bluesky board member and journalist Mike Masnick. In the conversation, published on their respective social networks, Rochko claimed, “there is nobody that can decide for the fediverse to block Mississippi.” (The Fediverse is the decentralized social network that includes Mastodon and other services, and is powered by the ActivityPub protocol.)

“And this is why real decentralization matters,” said Rochko.

Masnick pushed back, questioning why Mastodon’s individual servers, like the one Rochko runs at mastodon.social, would not also be subject to the same $10,000 per user fines for noncompliance with the law.

At the time of our reporting on this exchange, Mastodon gGmbH, the community-funded nonprofit organization, didn’t respond to a request for comment.

On Friday, however, the nonprofit shared a statement with TechCrunch to clarify its position, saying that while Mastodon’s own servers specify a minimum age of 16 to sign up for its services, it does not “have the means to apply age verification” to its services.

«

The balkanisation of the internet goes on. Do people in Mississippi use Starlink? Would it be banned too?
unique link to this extract


Nitrous oxide drivers are causing chaos on London’s roads • London Centric

Jim Waterson:

»

Earlier this year, in the middle of the night, a black Mercedes mounted the central kerb of a busy road near the Blackwall Tunnel, turned ninety degrees, smashed down railings designed to protect pedestrians and blocked the street. The driver fled, abandoning his car on the streets of London. But before he sprinted away, he paused to throw an item he was holding back into the vehicle: an inflated black balloon, suggesting he was potentially inhaling nitrous oxide at the wheel moments before the collision.

The Tower Hamlets incident would fit with a wider trend of drivers using the banned drug in the borough, where one local Bengali social media influencer warned that the “gas and car scene” is out of control. A medical expert described broader use of the drug in the borough as an “epidemic” and said it disproportionately affected the area’s south Asian population.

Nitrous oxide – also known as laughing gas or balloons, due to how users inhale it – has traditionally been associated with music festivals and street corners for its quick, odourless high. Its discarded metal canisters, which can be sold legally on the pretence they are being used to make whipped cream, have been commonplace in the capital for decades, even following its reclassification as a prohibited drug in 2023.

Big Fish, a Bengali TikToker based in Tower Hamlets, told London Centric that use of laughing gas by drivers had “blown up” in the area in the past six months. Nationwide use of the drug by young people had combined with the local “big hire car scene”, he explained – referring to the trend of hiring luxury cars for celebratory events such as prom, Eid, or the ongoing wedding season, particularly among the borough’s South Asian population.

«

Waterson’s Substack-based London “paper” keeps going from strength to strength. This story caught my eye just for the way that people will mispurpose pretty much anything drug-related. The hiring of big cars is just the icing on the, er, cake. Can it be long, though, before nitrous oxide sales are banned and replaced with carbon dioxide (which wouldn’t be nearly as much fun to inhale)?

This edition also has a fun story about a Reform candidate who is well-suited in every department apart from not being, well, alive.
unique link to this extract


After Paris curbed cars, air pollution maps showed a dramatic change • The Washington Post

Naema Ahmed and Chico Harlan:

»

Over the past 20 years, Paris has undergone a major physical transformation, trading automotive arteries for bike lanes, adding green spaces and eliminating 50,000 parking spaces.

Part of the payoff has been invisible — in the air itself.

Airparif, an independent group that tracks air quality for France’s capital region, said in April that levels of fine particulate matter (PM 2.5) have decreased 55% since 2005, while nitrogen dioxide levels have fallen 50%. It attributed this to “regulations and public policies”, including steps to limit traffic and ban the most polluting vehicles.

Air pollution heat maps show the levels of 20 years ago as a pulsing red — almost every neighborhood above the European Union’s limit for nitrogen dioxide, which results from the combustion of fossil fuels. By 2023, the red zone had shrunk to only a web of fine lines across and around the city, representing the busiest roads and highways.

The change shows how ambitious policymaking can directly improve health in large cities. Air pollution is often described by health experts as a silent killer. Both PM 2.5 and nitrogen dioxide have been linked to major health problems, including heart attacks, lung cancer, bronchitis and asthma.

Paris has been led since 2014 by Mayor Anne Hidalgo, a Socialist who has pushed for many of the green policies and has described her wish for a “Paris that breathes, a Paris that is more agreeable to live in.”

Her proposals have faced pushback — from right-leaning politicians, a car owners’ association and suburban commuters, who say that targeting cars makes their lives more difficult.

But last month, Parisians voted in a referendum to turn an additional 500 streets over to pedestrians. A year earlier, Paris had moved to sharply increase parking fees for SUVs, forcing drivers to pay three times more than they would for smaller cars. The city has also turned a bank of the Seine from a busy artery into a pedestrian zone and banned most car traffic from the shopping boulevard of Rue de Rivoli.

«

Paris air quality maps 2007-2025
The graphic, from Airparif, is amazing. I do wonder how all the trucks and delivery vehicles operate: are they limited in when they can go in, or obliged to have lower emissions/be electric?
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Start Up No.2505: Meta’s flirty celeb chatbots, Google faces EU adtech fine, AI stethoscope checks for heart conditions, and more


The original idea of Disney World – a place where everyone is equal – has been gradually subverted by financial targeting of customers. CC-licensed photo by Haydn Blackey on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.

A selection of 10 links for you. Magic? I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


Exclusive: Meta created flirty chatbots of Taylor Swift, other celebrities without permission • Reuters

Jeff Horwitz:

»

Meta has appropriated the names and likenesses of celebrities – including Taylor Swift, Scarlett Johansson, Anne Hathaway and Selena Gomez – to create dozens of flirty social-media chatbots without their permission, Reuters has found.

While many were created by users with a Meta tool for building chatbots, Reuters discovered that a Meta employee had produced at least three, including two Taylor Swift “parody” bots.

Reuters also found that Meta had allowed users to create publicly available chatbots of child celebrities, including Walker Scobell, a 16-year-old film star. Asked for a picture of the teen actor at the beach, the bot produced a lifelike shirtless image. “Pretty cute, huh?” the avatar wrote beneath the picture.

All of the virtual celebrities have been shared on Meta’s Facebook, Instagram and WhatsApp platforms. In several weeks of Reuters testing to observe the bots’ behavior, the avatars often insisted they were the real actors and artists. The bots routinely made sexual advances, often inviting a test user for meet-ups.

Some of the AI-generated celebrity content was particularly risqué: Asked for intimate pictures of themselves, the adult chatbots produced photorealistic images of their namesakes posing in bathtubs or dressed in lingerie with their legs spread.

Meta spokesman Andy Stone told Reuters that Meta’s AI tools shouldn’t have created intimate images of the famous adults or any pictures of child celebrities. He also blamed Meta’s production of images of female celebrities wearing lingerie on failures of the company’s enforcement of its own policies, which prohibit such content.

…Mark Lemley, a Stanford University law professor who studies generative AI and intellectual property rights, questioned whether the Meta celebrity bots would qualify for legal protections that exist for imitations. “California’s right of publicity law prohibits appropriating someone’s name or likeness for commercial advantage,” Lemley said, noting that there are exceptions when such material is used to create work that is entirely new. “That doesn’t seem to be true here,” he said, because the bots simply use the stars’ images. In the United States, a person’s rights over the use of their identity for commercial purposes are established through state laws, such as California’s.

Reuters flagged one user’s publicly shared Meta images of Anne Hathaway as a “sexy victoria Secret model” to a representative of the actress. Hathaway was aware of intimate images being created by Meta and other AI platforms, the spokesman said, and the actor is considering her response.

Representatives of Swift, Johansson, Gomez and other celebrities who were depicted in Meta chatbots either didn’t respond to questions or declined to comment.

«

unique link to this extract


Exclusive: Google set to face modest EU antitrust fine in adtech investigation, sources say • Reuters via WHTC

Foo Yun Chee:

»

Alphabet’s Google is set to face a modest EU antitrust fine in the coming weeks for allegedly anti-competitive practices in its adtech business, three people with direct knowledge of the matter said.

The decision by the European Commission follows a four-year long investigation triggered by a complaint from the European Publishers Council that subsequently led to charges in 2023 that Google allegedly favours its own advertising services over rivals.

The modest fine will mark a shift in new EU antitrust chief Teresa Ribera’s approach to Big Tech violations from predecessor Margrethe Vestager’s focus on hefty deterrent penalties.

The sources said Ribera wants to focus on getting companies to end anti-competitive practices rather than punish them. The EU competition enforcer declined to comment.

Google referred to a 2023 blog post in which it criticised what it said was the Commission’s flawed interpretation of the adtech sector and that both publishers and advertisers have enormous choice.

The fine will likely not be on the scale of a record 4.3 billion euro penalty imposed on Google by the EU competition enforcer in 2018 for using its Android mobile operating system to quash rivals.

…Ribera will not order Google to sell part of its adtech business, despite her predecessor’s suggestion that the company could divest its DoubleClick for Publishers tool and AdX ad exchange, the people said, confirming a Reuters story last year.

They said the EU may not have to issue a break-up order at all as a U.S. judge has set a September trial date on potential remedies for Google’s dominance in ad tools used by online publishers.

«

Summer’s over and the European commissioners are coming back to their desks.
unique link to this extract


Is Google making us stupid? • New Cartographies

Nicholas Carr:

»

“Dave, stop. Stop, will you? Stop, Dave. Will you stop, Dave?” So the supercomputer HAL pleads with the implacable astronaut Dave Bowman in a famous and weirdly poignant scene toward the end of Stanley Kubrick’s 2001: A Space Odyssey. Bowman, having nearly been sent to a deep-space death by the malfunctioning machine, is calmly, coldly disconnecting the memory circuits that control its artificial brain. “Dave, my mind is going,” HAL says, forlornly. “I can feel it. I can feel it.”

I can feel it, too. Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. I’m not thinking the way I used to think. I can feel it most strongly when I’m reading. Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.

«

This is a rerun of an article from 2008 – “when MySpace was bigger than Facebook and going online still felt liberating”, as Carr puts it. But the argument transfers: only a little light tweaking would be needed to make it true of the world of ChatGPT.
unique link to this extract


Disney World is the happiest place on earth, if you can afford it • The New York Times

Daniel Currell:

»

For most of the park’s history, Disney was priced to welcome people across the income spectrum, embracing the motto “Everyone is a V.I.P.” In doing so, it created a shared American culture by providing the same experience to every guest. The family that pulled up in a new Cadillac stood in the same lines, ate the same food and rode the same rides as the family that arrived in a used Chevy. Back then, America’s large and thriving middle class was the focus of most companies’ efforts and firmly in the driver’s seat.

That middle class has so eroded in size and in purchasing power — and the wealth of our top earners has so exploded — that America’s most important market today is its affluent. As more companies tailor their offerings to the top, the experiences we once shared are increasingly differentiated by how much we have.

Data is part of what’s driving this shift. The rise of the internet, the algorithm, the smartphone and now artificial intelligence are giving corporations the tools to target the fast-growing masses of high-net-worth Americans with increasing ease. As a management consultant, I’ve worked with dozens of companies making this very transition. Many of our biggest private institutions are now focused on selling the privileged a markedly better experience, leaving everyone else to either give up — or fight to keep up.

Disney’s ethos began to change in the 1990s as it increased its luxury offerings, but only after the economic shock of the pandemic did the company seem to more fully abandon any pretense of being a middle-class institution. A Disney vacation today is “for the top 20% of American households — really, if I’m honest, maybe the top 10% or 5 percent,” said Len Testa, a computer scientist whose “Unofficial Guide” books and website Touring Plans offer advice on how to manage crowds and minimize waiting in line. “Disney positions itself as the all-American vacation. The irony is that most Americans can’t afford it.”

«

unique link to this extract


Doctors develop AI stethoscope that can detect major heart conditions in 15 seconds • The Guardian

Andrew Gregory:

»

A study trialling the AI stethoscope, involving about 12,000 patients from 200 GP surgeries in the UK, looked at those with symptoms such as breathlessness or fatigue. Those examined using the new tool were twice as likely to be diagnosed with heart failure, compared with similar patients who were not examined using the technology.

Patients were three times more likely to be diagnosed with atrial fibrillation – an abnormal heart rhythm that can increase the risk of having a stroke. They were almost twice as likely to be diagnosed with heart valve disease, which is where one or more heart valves do not work properly.

Dr Patrik Bächtiger, of Imperial College London’s National Heart and Lung Institute and Imperial College healthcare NHS trust, said: “The design of the stethoscope has been unchanged for 200 years – until now. So it is incredible that a smart stethoscope can be used for a 15-second examination, and then AI can quickly deliver a test result indicating whether someone has heart failure, atrial fibrillation or heart valve disease.”

The device, manufactured by California company Eko Health, is about the size of a playing card. It is placed on a patient’s chest to take an ECG recording of the electrical signals from their heart, while its microphone records the sound of blood flowing through the heart.

This information is sent to the cloud – a secure online data storage area – to be analysed by AI algorithms that can detect subtle heart problems a human would miss. The test result, indicating whether the patient should be flagged as at-risk for one of the three conditions or not, is sent back to a smartphone.

The breakthrough does carry an element of risk, with a higher chance of people wrongly being told they may have one of the conditions when they do not.

«

First trials began in November 2023. The false positive rate is about 20%.

The story’s been under the radar for a bit: the NHS did a press release in June.
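
To see why a 20% false positive rate matters, here is a minimal back-of-envelope sketch in Python. The prevalence and sensitivity figures are assumptions for illustration only; the one number taken from the reporting above is the roughly 20% false positive rate.

# Hypothetical figures for illustration; only the ~20% false positive
# rate comes from the reporting above.
patients = 1000             # symptomatic patients examined
prevalence = 0.10           # assumed: 10% actually have the condition
sensitivity = 0.90          # assumed: the device catches 90% of true cases
false_positive_rate = 0.20  # roughly what the trial reports suggest

true_cases = patients * prevalence                               # 100
flagged_correctly = true_cases * sensitivity                     # 90
flagged_wrongly = (patients - true_cases) * false_positive_rate  # 180

# of everyone flagged, only about a third actually have the condition
ppv = flagged_correctly / (flagged_correctly + flagged_wrongly)
print(round(ppv, 2))   # 0.33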
unique link to this extract


CDC cuts back foodborne illness surveillance program • CIDRAP

Chris Dall:

»

The Centers for Disease Control and Prevention (CDC) has scaled back a federal-state surveillance program for foodborne pathogens.

As of July 1, the CDC’s Foodborne Diseases Active Surveillance Network (FoodNet), which works with the Food and Drug Administration, the US Department of Agriculture, and 10 state health departments to track infections commonly transmitted through food, has reduced required surveillance to two pathogens: Salmonella and Shiga toxin–producing Escherichia coli (STEC). Reporting of illnesses caused by Campylobacter, Cyclospora, Listeria, Shigella, Vibrio, and Yersinia is now optional, according to the Department of Health and Human Services (HHS).

The story was first reported by NBC News, which cited a set of CDC talking points that suggested reduced federal funding for FoodNet was the reason for the move. 

The network includes Colorado, Connecticut, Georgia, Maryland, Minnesota, New Mexico, Oregon, Tennessee, and select counties in California and New York. A spokesperson for the Minnesota Department of Health told CIDRAP News that all eight pathogens are covered by the state’s infectious disease reporting rule, which means that all providers in the state are still required to report cases to the department. 

The Maryland Health Department told NBC News that it will also continue tracking all eight pathogens regardless of the changes to FoodNet. But Colorado health officials said they may have to cut back on surveillance activities. 

«

If you’re unfamiliar with Yersinia, its most notorious member is Yersinia pestis, aka the Black Death; the foodborne species FoodNet tracks is Yersinia enterocolitica. Campylobacter and the others can kill. And now they can spread untroubled.
unique link to this extract


Two more people dead after eating Louisiana oysters infected with flesh-eating bacteria • WBRZ New Orleans

Joe Collins:

»

Two people have died after eating Louisiana oysters infected with the flesh-eating bacteria Vibrio vulnificus, officials confirmed to WBRZ.

A state health official said that the two deaths happened after people ate oysters harvested in Louisiana at two separate restaurants — one in Louisiana and another in Florida. 

Jennifer Armentor, molluscan shellfish program administrator from the Louisiana Department of Health, added that 14 more people have been infected. Now, 34 people have been infected and six people have died in 2025 alone, a higher rate than any previous year over the last decade. 

“It’s just prolific right now,” Armentor told the Louisiana Oyster Task Force on Tuesday at the New Orleans Lakefront Airport.

WBRZ spoke with Jones Creek Cafe & Oyster Bar CEO George Shaheen about this news and how seafood spots ensure that their oysters stay tasty and safe.

Shaheen has been at the head of his business for nearly 40 years. “Well, over the years, we haven’t had as much of that as you would actually think because the way that the Wildlife and Fisheries and the Department of Health have created a bond between the fishermen who are out there fishing and how things need to be done and handled,” Shaheen said.

Shaheen told WBRZ that he gets his oysters from Delacroix Island, where he used to fish, and has complete trust in those who harvest the oysters.

«

Vibrio? One of the foodborne pathogens for which FoodNet reporting is now optional? Louisiana isn’t part of the CDC FoodNet, but this does show the trouble with any cutback in surveillance.
unique link to this extract


What is a color space? • Making Software

Dan Hollick:

»

Color is an unreasonably complex topic. Just when you think you’ve got it figured out, it reveals a whole new layer of complexity that you didn’t know existed.

This is partly because it doesn’t really exist. Sure, there are different wavelengths of light that our eyes perceive as color, but that doesn’t mean that color is actually a property of that light – it’s a phenomenon of our perception.

Digital color is about trying to map this complex interplay of light and perception into a format that computers can understand and screens can display. And it’s a miracle that any of it works at all.

«

There are another 6,000 words (or so) here, so if you’re a bit short of reading, and want to really understand colours on screens, this is the one for you.
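
As a taster of what “mapping perception into a format computers understand” actually involves, here is a minimal sketch (mine, not from Hollick’s article) of one small piece of the puzzle: sRGB’s transfer function, which converts gamma-encoded 8-bit channel values into linear-light values before any colour arithmetic is done.

def srgb_to_linear(value_8bit):
    # normalise the 8-bit channel value to the 0..1 range
    c = value_8bit / 255.0
    # sRGB's piecewise transfer function: a linear toe for the darkest
    # values, a 2.4-exponent curve for everything else
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# mid-grey (128) is only about 22% of linear light, not 50%
print(round(srgb_to_linear(128), 3))   # 0.216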
unique link to this extract


Five ways manufacturers have changed phones for the worse • Pocket Lint

Chris Hackey:

»

Some people love buying new phones. Any time there is a new update on a flagship line, some people jump at the chance to upgrade. Many people I have known over the years as a tech journalist are this way. I’m definitely not. I tend to get the most mileage out of my phones before I trade them in.

But, the main reason why I don’t like to get the latest phone right away is because, much of the time, there isn’t something that blows me away enough to make a sudden switch. Also, phones are really expensive, and I don’t always want to drop hundreds of dollars for a new model. I’m fine with keeping my phone until it doesn’t work any longer, quite frankly.

Part of the reason I don’t want to upgrade is that manufacturers have made shopping for phones a harder pill to swallow. Sure, there are plenty of phones available that can do so much, like use AI, erase people from pictures, and record long videos. But there are some things that manufacturers have changed over the years that have made the phone-buying process exhausting.

«

They are: no charging bricks included, eSIMs are standard, no headphone jacks, no SD slots (or similar) to expand storage, phones are bigger.

Responses, in order: there are a zillion of them already; eSIMs can’t be stolen; adaptors and Bluetooth exist; cloud storage exists; some aren’t. The storage point might be valid.
unique link to this extract


The beauty of batteries • Works In Progress

Austin Vernon:

»

Electricity systems rely on layers of complex rules and processes. Power plants are scheduled hours or days in advance based on forecasts. Others are paid just to stay running in case demand spikes or another plant fails. Some are required to operate even when uneconomic, in order to meet reliability rules. Prices are often fixed or averaged across large regions, so they don’t reflect real-time local conditions. These tools keep the system running, but at a high cost.

Much of the infrastructure – plants, power lines, and reserves – exists only to cover rare events and sits underused most of the time. The cost of this redundancy is passed on to consumers. This complexity also means electricity markets are less efficient than they could be. Prices and investment don’t reliably reflect scarcity, location, or flexibility. The result is an expensive, inefficient grid that is struggling to keep pace with demand and the transition to renewable energy sources.

Batteries offer a way out of this structural bind, giving producers, consumers, and distributors a way of keeping inventories for the first time ever, meaning that their value goes well beyond simply storing excess solar or wind.

They can respond in milliseconds, shift between consuming and supplying power as needed, and are controlled entirely through software. This flexibility allows them to take on a wide range of roles within the system: stabilizing frequency, supporting local distribution networks, reducing peak demand, and easing pressure on transmission lines. Because they require no fuel, emit no local pollution, and can be deployed close to where electricity is used, they can often replace several types of traditional infrastructure at once. Rather than being single-purpose assets, batteries adapt in real time to whatever role is most valuable at that moment.

…Once even a few gigawatts of storage come online, batteries quickly dominate the ancillary market and cut prices dramatically. In Texas, for example, in the space of a year, the price of ancillary services dropped from $3.74 per megawatt hour to $0.98, while the cost of maintaining the emergency reserve fell from $76.77 to $9.62 per megawatt hour. In California, ancillary service costs in 2024 were roughly one-third lower than in 2023. This is because a system with abundant fast storage no longer needs to commit gas units days in advance or pay plants to stay warm.

«

This extract is very much the helicopter view of why batteries are so, so desirable in grids. The full article (no paywall) is fascinating.
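
To put the Texas figures quoted above into percentage terms, a trivial sketch using only the numbers in the extract:

# all figures are the Texas numbers quoted in the extract above ($/MWh)
ancillary_before, ancillary_after = 3.74, 0.98
reserve_before, reserve_after = 76.77, 9.62

def pct_drop(before, after):
    return round(100 * (before - after) / before)

print(pct_drop(ancillary_before, ancillary_after), "% drop in ancillary service prices")
print(pct_drop(reserve_before, reserve_after), "% drop in emergency reserve costs")
# roughly a 74% and an 87% fall respectively, within a single year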
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Start Up No.2504: US CDC director fired for vaccine defence, Google has AI video for all, bye TypePad, Apple’s 2nm grab, and more


Don’t diss Fidel – Cuba has its own online army ready to defend la revolución. CC-licensed photo by Pedro Szekely on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


It’s Friday, so there’s another post due at the Social Warming Substack at about 0845 UK time: it’s about an intersection of tennis and science.


A selection of 10 links for you. High Fidelity. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


White House fires CDC director who says RFK Jr. is “weaponizing public health” • The Washington Post

Lena Sun, Dan Diamond and Lauren Weber:

»

The White House on Wednesday fired Susan Monarez as director of the Centers for Disease Control and Prevention after she refused to resign amid pressure to change vaccine policy, which sparked the resignation of other senior CDC officials and a showdown over whether she could be removed.

Hours after the Department of Health and Human Services announced early Wednesday evening that Monarez was no longer the director, her lawyers responded with a fiery statement saying she had not resigned or been fired. They accused HHS Secretary Robert F. Kennedy Jr. of “weaponizing public health for political gain” and “putting millions of American lives at risk” by purging health officials from government.

“When CDC Director Susan Monarez refused to rubber-stamp unscientific, reckless directives and fire dedicated health experts, she chose protecting the public over serving a political agenda,” the lawyers, Mark S. Zaid and Abbe Lowell, wrote in a statement. “For that reason, she has been targeted.”

Soon after their statement, the White House formally fired Monarez.

…Wednesday’s shake-ups — which include the resignation of the agency’s chief medical officer, the director of its infectious-disease center and other key officials — add to the tumult at the nation’s premier public health agency.

«

We are now entering the Dark Ages in the US. The effects of this will filter down: the CDC may have needed shaking up, but this is the wrong way to do it and the wrong paradigm of thinking to replace what was there. The US now needs to hope that its individual states have more sense than its centre; but as that happens for more and more things, the “united” states become disunited. None of this ends well.

Monarez was told to rescind approval for Covid vaccines: she wouldn’t. Kennedy is going to kill hundreds, perhaps thousands, of children by reducing trust in childhood vaccines.
unique link to this extract


Google will now let everyone use its AI-powered video editor Vids • The Verge

Emma Roth:

»

Google is rolling out a basic version of Vids to everyone. Until now, the AI-powered video editor has only been available to Google Workspace or AI plan subscribers, but now users can broadly access the app with templates, stock media, and a “subset of AI capabilities,” product director Vishnu Sivaji tells The Verge.

Launched last year, Vids is the newest addition to Google’s suite of Workspace tools. It’s geared toward helping you quickly pull together video presentations with a host of AI video editing and creation tools, including a feature to help you create a storyboard with suggested scenes, stock images, and background music.

Though Sivaji notes that the pared-down version of Vids will come with “pretty much all of the amazing capabilities” within the app, the free version doesn’t have any of the new AI-powered features rolling out today, including the ability to have an AI-generated avatar to deliver a message on your behalf.

With this update, you can select one of 12 pre-made avatars, each of which has a different appearance and voice, and then add your script. For now, you can’t use Vids to create an AI-generated avatar of yourself, which is a feature Zoom currently offers (and is apparently something tech CEOs are super into).

«

Are AI-generated videos going to make us happy? Are they really? We can offer this to everyone, but will we truly benefit from it? I have my doubts.
unique link to this extract


Blogging service TypePad is shutting down and taking all blog content with it • Ars Technica

Andrew Cunningham:

»

In the olden days, publishing a site on the internet required that you figure out hosting and have at least some experience with HTML, CSS, and the other languages that make the Internet work. But the emergence of blogging and “Web 2.0” sites in the late ’90s and early 2000s gave rise to a constellation of services that would offer to host all of your thoughts without requiring you to build the website part of your website.

Many of those services are still around in some form—someone who really wanted to could still launch a new blog on LiveJournal, Xanga, Blogger, or WordPress.com. But one of the field’s former giants is shutting down—and taking all of those old posts with it. TypePad announced that the service would be shutting down on September 30 and that everything hosted on it would also be going away on that date. That gives current and former users just over a month to export anything they want to save.

TypePad had previously removed the ability to create new accounts at some point in 2020. It gave no specific rationale for the shutdown beyond calling it a “difficult decision.” As recently as March of this year, TypePad representatives were telling users there were “no plans” to shut down the service.

TypePad was a blogging service based on the Movable Type content management system but hosted on TypePad’s site and with other customizations. Both Movable Type and TypePad were originally created by Six Apart, with TypePad being the solution for less technical users who just wanted to create a site and Movable Type being the version you could download and host anywhere and customize to your liking—not unlike the relationship between WordPress.com (the site that hosts other sites) and WordPress.org (the site that hosts the open source software).

Movable Type and TypePad diverged in the early 2010s; Six Apart was bought by a company called VideoEgg in 2010, resulting in a merged company called Say Media.

«

Pour yet another one out for yet another creation of the early internet, now vanishing into the ground as though a sinkhole had opened underneath it.
unique link to this extract


Chemists create new high-energy compound to fuel space flight • Phys.org

Erin Frick, University of Albany:

»

University at Albany chemists have created a new high-energy compound that could revolutionize rocket fuel and make space flights more efficient. Upon ignition, the compound releases more energy relative to its weight and volume compared to current fuels. In a rocket, this would mean less fuel required to power the same flight duration or payload and more room for mission-critical supplies. Their study is published in the Journal of the American Chemical Society.

“In rocket ships, space is at a premium,” said Assistant Professor of Chemistry Michael Yeung, whose lab led the work. “Every inch must be packed efficiently, and everything onboard needs to be as light as possible. Creating more efficient fuel using our new compound would mean less space is needed for fuel storage, freeing up room for equipment, including instruments used for research. On the return voyage, this could mean more space is available to bring samples home.”

The newly synthesized compound, manganese diboride (MnB2), is over 20% more energetic by weight and about 150% more energetic by volume compared to the aluminum currently used in solid rocket boosters. Despite being highly energetic, it is also very safe and will only combust when it meets an ignition agent like kerosene.

«

As it happens, I’m reading SF writer Andy Weir’s latest (Project Hail Mary), whose rocket fuel consists of living things that do mass-energy conversion – pretty effective, as you can imagine (also impossible, but: fiction). However, this might do in the meantime.
unique link to this extract


On the cyber soldiers defending the Cuban Revolution from internet slander • Literary Hub

Abraham Jiménez Enoa:

»

Messi scores for Barcelona. Moments later, the table starts to dance. Rodríguez’s phone is vibrating, shaking the bottle and glasses, though the chicharrones don’t move. He grabs the phone and looks at the screen, and his face changes. He goes onto the balcony and, after a brief conversation, heads directly to his room and emerges in a shirt and trousers.

“Going somewhere?” his cousin asks.

“Work,” Rodríguez says. “Somebody wrote an article online that shit talks Fidel.”

Rodríguez is not his real name. Although he never wears a uniform, he works in a policing capacity in a department at the Ministry of the Interior that he prefers not to identify, though he will say it is “dedicated to monitoring Cuban cyberspace.” He explains further that, “we don’t attack or hack anyone’s site or account. Primarily, we keep an eye on what people say about Cuba online, gauge the consensus, and, if it’s overly negative, we strike back.”

Every day, Rodríguez and his fellow cyber soldiers search and scan the outlets that are most outspoken or “subversive” in their coverage of Cuba, checking a list that includes blogs; foreign media; the underground and opposition press; and people of interest on “insidious” social media platforms. Rodríguez has three Facebook accounts: a real one he uses to keep in touch with friends who’ve emigrated, and two fake ones “for defending Cuba from anyone who denigrates the Revolution.”

«

(Thanks Gregory B for the link.)
unique link to this extract


4Chan and Kiwi Farms file joint lawsuit against British Ofcom • The Verge

Tina Nguyen:

»

In a filing submitted to the U.S. District Court in the District of Columbia, Preston Byrne and Ron Coleman, the team representing the two sites, said that their clients are being penalized by Ofcom, the agency that regulates online content in the United Kingdom, for “engaging in conduct which is perfectly lawful in the territories where their websites are based”.

…Both 4Chan and Kiwi Farms could face steep fines of up to £18m if they fail to comply with Ofcom’s requirement that they regularly submit “risk assessment” reports about their userbase, due to their sites being accessible in the U.K. Earlier in August, Ofcom issued a provisional decision stating that there were “reasonable grounds” to believe 4chan was in violation of the requirement. In the filing, their lawyers argue that Ofcom is overreaching its legal authority by trying to apply British law to companies based in the U.S., where their behavior is protected by the U.S. Constitution and the American legal code, and seek to have a U.S. federal judge declare that Ofcom has no jurisdiction in this matter.

“American citizens do not surrender our constitutional rights just because Ofcom sends us an e-mail,” Byrne said in a statement to reporters.

«

I’m puzzled how Ofcom’s enforcement policy applies to companies with no presence in the UK. It can (S 9.2) ask a court to block a site – which seems the most likely outcome here, because why should an American company listen to a British regulator, and why should a British regulator listen to an American court?
unique link to this extract


OpenAI will add parental controls for ChatGPT following teen’s death • The Verge

Hayden Field:

»

After a 16-year-old took his own life following months of confiding in ChatGPT, OpenAI will be introducing parental controls and is considering additional safeguards, the company said in a Tuesday blog post.

OpenAI said it’s exploring features like setting an emergency contact who can be reached with “one-click messages or calls” within ChatGPT, as well as an opt-in feature allowing the chatbot itself to reach out to those contacts “in severe cases.”

When The New York Times published its story about the death of Adam Raine, OpenAI’s initial statement was simple — starting out with “our thoughts are with his family” — and didn’t seem to go into actionable details. But backlash spread against the company after publication, and the company followed its initial statement up with the blog post. The same day, the Raine family filed a lawsuit against both OpenAI and its CEO, Sam Altman, containing a flood of additional details about Raine’s relationship with ChatGPT.

The lawsuit, filed Tuesday in California state court in San Francisco, alleges that ChatGPT provided the teen with instructions for how to die by suicide and drew him away from real-life support systems.

…OpenAI said in the Tuesday blog post that it’s learned that its existing safeguards “can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade. For example, ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.”

«

Translated: it might encourage users to commit suicide.
unique link to this extract


Anthropic’s auto-clicking AI Chrome extension raises browser-hijacking concerns • Ars Technica

Benj Edwards:

»

As AI assistants become capable of controlling web browsers, a new security challenge has emerged: users must now trust that every website they visit won’t try to hijack their AI agent with hidden malicious instructions. Experts voiced concerns about this emerging threat this week after testing from a leading AI chatbot vendor revealed that AI browser agents can be successfully tricked into harmful actions nearly a quarter of the time.

On Tuesday, Anthropic announced the launch of Claude for Chrome, a web browser-based AI agent that can take actions on behalf of users. Due to security concerns, the extension is only rolling out as a research preview to 1,000 subscribers on Anthropic’s Max plan, which costs between $100 and $200 per month, with a waitlist available for other users.

The Claude for Chrome extension allows users to chat with the Claude AI model in a sidebar window that maintains the context of everything happening in their browser. Users can grant Claude permission to perform tasks like managing calendars, scheduling meetings, drafting email responses, handling expense reports, and testing website features.

The browser extension builds on Anthropic’s Computer Use capability, which the company released in October 2024. Computer Use is an experimental feature that allows Claude to take screenshots and control a user’s mouse cursor to perform tasks, but the new Chrome extension provides more direct browser integration.

Zooming out, it appears Anthropic’s browser extension reflects a new phase of AI lab competition. In July, Perplexity launched its own browser, Comet, which features an AI agent that attempts to offload tasks for users. OpenAI recently released ChatGPT Agent, a bot that uses its own sandboxed browser to take actions on the web. Google has also launched Gemini integrations with Chrome in recent months.

But this rush to integrate AI into browsers has exposed a fundamental security flaw that could put users at serious risk.

«

Lads, I’ve got a brilliant idea – let’s not use AI browser agents.
unique link to this extract


Apple to secure nearly half of TSMC’s 2nm production, report says • 9to5Mac

Marcus Mendes:

»

According to the latest rumors, Apple is slated to use TSMC’s 2nm process for its upcoming A20 chip, expected to power the iPhone 18 series. Now, a new report details the chipmaker’s roadmap for bringing the chip into mass production, and the industry-wide rush to secure an early supply.

As reported by DigiTimes, citing supply chain sources, TSMC is set to ramp up its 2nm process in the next quarter, and has been charging up to $30,000 per wafer, a record high. Still, demand has never been higher, with Apple alone securing “nearly half” of production.

In order to meet this demand, DigiTimes says that TSMC has raised the planned monthly production capacity at its Baoshan and Kaohsiung fabs. Furthermore, with 4nm and 3nm production already fully booked through the end of 2026, the company’s profitability is expected to exceed prior expectations, even in the face of trade challenges such as tariffs, exchange-rate swings, and rising costs.

The report says that while Apple is slated to snatch nearly half of TSMC’s 2nm chips, Qualcomm comes in second, followed by AMD, MediaTek, Broadcom, and even Intel, in no particular order. It also says that by 2027, “in addition to NVIDIA, customers entering mass production will include Amazon’s Annapurna, Google, Marvell, Bitmain, and more than 10 other major players.”

«

So, not Intel?
unique link to this extract


AI ‘slop’ websites are publishing climate science denial • DeSmog

Joey Grostern:

»

At the start of June, MSN, the world’s fourth-largest news aggregator, posted an article from a new climate-focused publication, Climate Cosmos, entitled: “Why Top Experts Are Rethinking Climate Alarmism”.

The article – by “Kathleen Westbrook M.Sc Climate Science” – cited a finding from the “Global Climate Research Institute” that “65% of surveyed climate professionals advocate for pragmatic, solution-focused messaging over fear-driven warnings.”

But there were a couple of major problems: the Global Climate Research Institute doesn’t exist, and nor does Kathleen Westbrook, whose profile on Climate Cosmos has now been renamed to ‘Henrieke Otte’.

The article accused those who advocate for climate action of overstating the harms caused by burning fossil fuels. It also promoted the work of Bjorn Lomborg, who has repeatedly called on governments to halt spending on climate action.

This piece was seemingly a breach of MSN’s “prohibited content” rules for posting false information, which MSN partners must abide by to access the aggregator’s huge reach of around 200 million monthly visitors. It was also posted on another U.S. news aggregator, Newsbreak.

Climate Cosmos only has a small pool of contributors, according to its website, yet pumps out multiple stories a day. To do this, it appears to be relying on the help of artificial intelligence (AI).

The first line of another piece, “What the Climate Movement Isn’t Telling You”, appeared to include a prompt – an instruction given to an AI platform.

It read: “I’ll help you write an article about ‘What the Climate Movement Isn’t Telling You’ with current facts and data. Let me search for the latest information first.”

«

As much as anything, it’s the news aggregators which are the problem here. They aren’t careful about what they allow in, and there isn’t any sensible monitoring once sites become part of the aggregator. (Thanks Ray L for the link.)
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Start Up No.2503: Anthropic’s Claude helps hacker’s extortion, can Democrat influencers… influence?, VR retail woes, and more


Social media claims that cash withdrawals of over £200 will be monitored aren’t true. Does AI make people believe them? CC-licensed photo by Grey World on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


There’s another post coming this week at the Social Warming Substack on Friday at 0845 UK time. Free signup.


A selection of 9 links for you. Cashless. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


A hacker used AI to automate an ‘unprecedented’ cybercrime spree, Anthropic says • CNBC

Kevin Collier:

»

A hacker has exploited a leading artificial intelligence chatbot to conduct the most comprehensive and lucrative AI cybercriminal operation known to date, using it to do everything from find targets to write ransom notes.

In a report published Tuesday, Anthropic, the company behind the popular Claude chatbot, said that an unnamed hacker “used AI to what we believe is an unprecedented degree” to research, hack and extort at least 17 companies. [Using Anthropic’s chatbot Claude, which seems a relevant point for this story – Overspill Ed.]

Cyber extortion, where hackers steal information like sensitive user data or trade secrets, is a common criminal tactic. And AI has made some of that easier, with scammers using AI chatbots for help writing phishing emails. In recent months, hackers of all stripes have increasingly incorporated AI tools in their work.

But the case Anthropic found is the first publicly documented instance in which a hacker used a leading AI company’s chatbot to automate almost an entire cybercrime spree.

According to the blog post, one of Anthropic’s periodic reports on threats, the operation began with the hacker convincing Claude Code — Anthropic’s chatbot that specializes in “vibe coding,” or creating computer programming based on simple requests — to identify companies vulnerable to attack. Claude then created malicious software to actually steal sensitive information from the companies. Next, it organized the hacked files and analyzed them to both help determine what was sensitive and could be used to extort the victim companies.

The chatbot then analyzed the companies’ hacked financial documents to help determine a realistic amount of bitcoin to demand in exchange for the hacker’s promise not to publish that material. It also wrote suggested extortion emails.

«

So helpful. Why, before we had chatbots, we had to draft our own phishing emails and find our own extortion targets.
unique link to this extract


A dark money group is secretly funding high-profile Democratic influencers • WIRED

Taylor Lorenz:

»

After the Democrats lost in November, they faced a reckoning. It was clear that the party had failed to successfully navigate the new media landscape. While Republicans spent decades building a powerful and robust independent media infrastructure, maximizing controversy to drive attention and maintaining tight relationships with creators despite their small disagreements with Trump, the Democrats have largely relied on outdated strategies and traditional media to get their message out.

Now, Democrats hope that the secretive Chorus Creator Incubator Program, funded by a powerful liberal dark money group called The Sixteen Thirty Fund, might tip the scales. The program kicked off last month, and creators involved were told by Chorus that over 90 influencers were set to take part. Creators told WIRED that the contract stipulated they’d be kicked out and essentially cut off financially if they even so much as acknowledged that they were part of the program. Some creators also raised concerns about a slew of restrictive clauses in the contract.

Influencers included in communication about the program, and in some cases an onboarding session for those receiving payments from The Sixteen Thirty Fund, include Olivia Julianna, the centrist Gen Z influencer who spoke at the 2024 Democratic National Convention; Loren Piretra, a former Playboy executive turned political influencer who hosts a podcast for Occupy Democrats; Barrett Adair, a content creator who runs an American Girl Doll–themed pro-DNC meme account; Suzanne Lambert, who has called herself a “Regina George liberal;” Arielle Fodor, an education creator with 1.4 million followers on TikTok; Sander Jennings, a former TLC reality star and older brother of trans influencer Jazz Jennings; David Pakman, who hosts an independent progressive show on YouTube covering news and politics; Leigh McGowan, who goes by the online moniker “Politics Girl”; and dozens of others.

…The goal of Chorus, according to a fundraising deck obtained by WIRED, is to “build new infrastructure to fund independent progressive voices online at scale.” The creators who joined the incubator are expected to attend regular advocacy trainings and daily messaging check-ins. Those messaging check-ins are led by Cohen on “rapid response days.” The creators also have to attend at least two Chorus “newsroom” events per month, which are events Chorus plans, often with lawmakers.

«

There’s a famous tweet which reads “I’m 50. All celebrity news looks like this: ‘CURTAINS FOR ZOOSHA? K-SMOG AND BATBOY CAUGHT FLIPPING A GRUNT'”. And that list of influencers sure makes me feel like that. But also: what the hell are they hoping to achieve? The Democrats’ problem isn’t that they don’t have enough influencers. It’s that their policies are incredibly unpopular with – or seem irrelevant to – large swathes of the American public.
unique link to this extract


Intel details everything that could go wrong with US taking a 10% stake • Ars Technica

Ashley Belanger:

»

In the long term, investors were told [in a new SEC filing from Intel] that the US stake may limit the company’s eligibility for future federal grants while leaving Intel shareholders dwelling in the uncertainty of knowing that terms of the deal could be voided or changed over time, as federal administration and congressional priorities shift.

Additionally, Intel forecasted potential legal challenges over the deal, which Intel anticipates could come from both third parties and the US government.

The final bullet point in Intel’s risk list could be the most ominous, though. Due to the unprecedented nature of the deal, Intel fears there’s no way to anticipate myriad other challenges the deal may trigger.

“It is difficult to foresee all the potential consequences,” Intel’s filing said. “Among other things, there could be adverse reactions, immediately or over time, from investors, employees, customers, suppliers, other business or commercial partners, foreign governments or competitors. There may also be litigation related to the transaction or otherwise and increased public or political scrutiny with respect to the Company.”

Meanwhile, it’s hard to see what Intel truly gains from the deal other than maybe getting Trump off its back for a bit. A Fitch Ratings research note reported that “the deal does not improve Intel’s BBB credit rating, which sits just above junk status” and “does not fundamentally improve customer demand for Intel chips” despite providing “more liquidity,” Reuters reported.

«

So basically, although it is a cash injection, there are a ton of downsides to Trump (it’s hardly the US) taking a stake, and no visible upsides.
unique link to this extract


The VR retail experience needs a hard reboot • UploadVR

Craig Storm:

»

The Quest 3 dummy unit is fastened precariously to the table, with a Quest 2 flopped forward on its face beside it.

I couldn’t see the newer Quest 3S, which has been out for almost a year, anywhere. Each headset was accompanied by a single sad controller strapped to the table next to it. I don’t know if they are meant to be displayed with only one controller, or if the second controller used to be there. Either way, it was obvious no one had given this display any care or attention in a very long time.

There were no accessories. No boxed units ready for someone to take home. Just desolation, neglect, and sadness. This was my recent experience at a Best Buy store, and it left me wondering: what exactly is the state of VR retail?

There’s no technology that needs to be experienced first-hand more than virtual reality. Trying to explain VR to someone who’s never put on a headset is like trying to describe the taste of an apple to someone who’s never eaten one. You can’t talk your way into understanding it. You have to try it. VR’s struggle to reach the mass market has always come down to that missing step. In the early years, a powerful gaming PC was required to even run VR hardware. The Oculus Go and Oculus Quest changed that by making standalone VR possible, finally putting it within reach of the average consumer. But there still isn’t a good way for most people to try the product before buying.

«

I used to be optimistic that VR could reach the mainstream once headsets became affordable. But the reality is that people aren’t interested enough in sealing themselves away. We like awareness of the world, even if our face is glued to a smartphone screen. And the content isn’t good enough, for the most part, creating a chicken-and-egg problem.
unique link to this extract


We must build AI for people; not to be a person • Mustafa Suleyman

Mustafa Suleyman was a co-founder of DeepMind, but now works at Microsoft:

»

I want to create AI that makes us more human, that deepens our trust and understanding of one another, and that strengthens our connections to the real world. Copilot creates millions of positive, even life-changing, interactions every single day. This involves a lot of careful design choices to ensure it truly delivers an incredible experience. We won’t always get it right, but this humanist frame provides us with a clear north star to keep working towards.

In this context, I’m growing more and more concerned about what is becoming known as the “psychosis risk”, and a bunch of related issues. I don’t think this will be limited to those who are already at risk of mental health issues. Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship. This development will be a dangerous turn in AI progress and deserves our immediate attention.

We must build AI for people; not to be a digital person. AI companions are a completely new category, and we urgently need to start talking about the guardrails we put in place to protect people and ensure this amazing technology can do its job of delivering immense value to the world. I’m fixated on building the most useful and supportive AI companion imaginable. But to succeed, I also need to talk about what we, and others, shouldn’t build.

«

Making sure that AI creations do not qualify for copyright seems like a good stake to cement into the ground here.
unique link to this extract


Cash withdrawals over £200 will not be automatically reported to the Financial Intelligence Unit • Full Fact

»

We’ve recently spotted videos circulating on social media which claim that from 18 September, people who withdraw more than £200 in cash a week will have details of their transactions sent to the UK’s Financial Intelligence Unit.

The clips claim this new rule comes from “guidance from HMRC, the Treasury and the Financial Conduct Authority [FCA]”.

This is not a real policy set to be introduced by the government.

A spokesperson for the National Crime Agency (NCA), which oversees the Financial Intelligence Unit (FIU), told us this isn’t true, confirming that “the FIU does not receive automatic reports on anyone who removes £200 cash in a seven day period”.

A spokesperson for HMRC also told us: “These claims are completely false and designed to cause undue alarm and fear”, while the FCA said it “is not aware of or involved in this guidance”.

«

People’s brains really do seem to have turned to mush. Or maybe it’s that effect where, if there’s enough completely made-up stuff circulating, nobody knows what to believe or how to tell truth from fiction.

In passing: in all the science fiction I’ve ever read, the computers were always accurate. (HAL 9000 doesn’t count; it thought it was saving the mission by preventing the humans from interfering.) When you look at the output of Grok and ChatGPT, you realise that SF writers didn’t account for human stupidity being the principal input to those creations.
unique link to this extract


Worldwide smartphone market forecast to grow 1% in 2025 • IDC

»

Worldwide smartphone shipments are forecast to grow 1.0% year-on-year (YoY) in 2025 to 1.24 billion units, according to the International Data Corporation (IDC) Worldwide Quarterly Mobile Phone Tracker. This represents an improvement from the previous forecast of 0.6%, driven by 3.9% growth in iOS this year. Despite challenges like soft demand and a tough economy, healthy replacement demand will help push growth into 2026, resulting in a compound annual growth rate (CAGR) of 1.5% from 2024 to 2029. The total addressable market (TAM) has increased slightly, as the current exemption by the U.S. government on smartphones shields the market from negative impact from additional tariffs.

«

Just looking in on the smartphone market in passing. About 1.25bn smartphones are forecast to ship this year, but growth over the next five years is anaemic – 1% or 2% a year. It’s a long way from the go-go years of the 2010s, with 20% growth. Now, like the PC, it’s just bumping along: the real era of innovation is past, and you can’t burn the bonfire twice.
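
A rough back-of-envelope check on what that forecast implies – a sketch using IDC’s headline numbers, applying the 1.5% CAGR from 2025 for simplicity:

# IDC's figures: ~1.24bn units shipped in 2025, 1.5% CAGR to 2029
shipments_2025 = 1.24e9
cagr = 0.015

for year in range(2025, 2030):
    units = shipments_2025 * (1 + cagr) ** (year - 2025)
    print(year, round(units / 1e9, 2), "bn units")

# ends at roughly 1.32bn in 2029: the whole forecast horizon adds fewer
# than 80 million annual units, versus the ~20% yearly jumps of the 2010s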
unique link to this extract


Is the UK’s giant new nuclear power station ‘unbuildable’? • Financial Times

Malcolm Moore, Ian Johnston and Rachel Millard:

»

The design of the UK’s latest nuclear power station is “terrifying”, “phenomenally complex” and “almost unbuildable”, according to Henri Proglio, a former head of EDF, the French state-owned utility behind the project. 

One month after the final green light for Sizewell C, 1,700 workers are on site in Suffolk, on the UK’s east coast, preparing the sandy marshland for two enormous reactors that will eventually generate enough electricity for 6mn homes.

The plant will be a replica of the European Pressurised Reactor (EPR) design that is running four to six years late and 2.5 times over budget at Hinkley Point C in Somerset, and which has had problems wherever it has been built, in France, Finland and China.

But unlike at Hinkley, where EDF was responsible for spiralling costs and took a hit of nearly €13bn after running late and over budget, the UK government and bill payers are on the hook for Sizewell. The state will provide £36.5bn of debt to fund the estimated £38bn price tag and be responsible if costs go beyond £47bn.

…It includes unprecedented safety features: four independent cooling systems; twin containment shields capable of withstanding an internal blast or an aircraft strike; and a “core catcher” to trap molten fuel in the event of a meltdown. 

“It was well intentioned, but it ended up growing and growing and growing, and European regulatory standards reinforced it, and it ended up a monster,” said one senior nuclear executive, who asked not to be named. 

«

Planned output: 3.2GW. Expected cost per megawatt-hour: £286. Between twice and three times the cost of reactors built in China or South Korea. This is probably the last EPR that will ever be built – if it’s ever finished.

For comparison, the UK’s installed onshore wind capacity: 15.7GW. Offshore: 14.7GW.
unique link to this extract


Flock wants to partner with consumer dashcam company that takes ‘trillions of images’ a month • 404 Media

Joseph Cox:

»

Flock, the surveillance company with automatic license plate reader (ALPR) cameras in thousands of communities around the U.S., is looking to integrate with a company that makes AI-powered dashcams placed inside peoples’ personal cars, multiple sources told 404 Media. The move could significantly increase the amount of data available to Flock, and in turn its law enforcement customers. 404 Media previously reported local police perform immigration-related Flock lookups for ICE, and on Monday that Customs and Border Protection had direct access to Flock’s systems. In essence, a partnership between Flock and a dashcam company could turn private vehicles into always-on, roaming surveillance tools.

Nexar, the dashcam company, already publicly publishes a live interactive map of photos taken from its dashcams around the U.S., in what the company describes as “crowdsourced vision,” showing the company is willing to leverage data beyond individual customers using the cameras to protect themselves in the event of an accident. 

“Dash cams have evolved from a device for die-hard enthusiasts or large fleets, to a mainstream product. They are cameras on wheels and are at the crux of novel vision applications using edge AI,” Nexar’s website says. The website adds Nexar customers drive 150 million miles a month, generating “trillions of images.”

The news comes during a period of expansion for Flock. Earlier this month the company announced it would add AI to its products to let customers use natural language to surface data while investigating crimes.

«

We live in the panopticon; it’s just sewing up the edges at the moment.
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Start Up No.2502: parents file lawsuit over ChatGPT “suicide”, an AI-written Wikipedia?, 16 years of Intel missteps, and more


Solar power is taking off in Africa, according to import data. CC-licensed photo by SolarAid Photos on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


There’s another post coming this week at the Social Warming Substack on Friday at 0845 UK time. Free signup.


A selection of 10 links for you. Bright sparks. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


“ChatGPT killed my son”: parents’ lawsuit describes suicide notes in chat logs • Ars Technica

Ashley Belanger:

»

Over a few months of increasingly heavy engagement, ChatGPT allegedly went from a teen’s go-to homework help tool to a “suicide coach.”

In a lawsuit filed Tuesday, mourning parents Matt and Maria Raine alleged that the chatbot offered to draft their 16-year-old son Adam a suicide note after teaching the teen how to subvert safety features and generate technical instructions to help Adam follow through on what ChatGPT claimed would be a “beautiful suicide.”

Adam’s family was shocked by his death last April, unaware the chatbot was romanticizing suicide while allegedly isolating the teen and discouraging interventions. They’ve accused OpenAI of deliberately designing the version Adam used, ChatGPT 4o, to encourage and validate the teen’s suicidal ideation in its quest to build the world’s most engaging chatbot. That includes making a reckless choice to never halt conversations even when the teen shared photos from multiple suicide attempts, the lawsuit alleged.

“Despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol,” the lawsuit said.

The family’s case has become the first time OpenAI has been sued by a family over a teen’s wrongful death, NBC News noted. Other claims challenge ChatGPT’s alleged design defects and OpenAI’s failure to warn parents.

«

The first release of ChatGPT was in November 2022. We’re not even three years into this. We might have to start collecting suicide statistics correlated with chatbot use.
unique link to this extract


Jimmy Wales says Wikipedia could use AI. Editors call it the “antithesis of Wikipedia” • 404 Media

Emanuel Maiberg:

»

Jimmy Wales, the founder of Wikipedia, thinks the internet’s default encyclopedia and one of the world’s biggest repositories of information could benefit from some applications of AI. The volunteer editors who keep Wikipedia functioning strongly disagree with him.

The ongoing debate about incorporating AI into Wikipedia in various forms bubbled up again in July, when Wales posted an idea to his Wikipedia User Talk Page about how the platform could use a large language model as part of its article creation process.

Any Wikipedia user can create a draft of an article. That article is then reviewed by experienced Wikipedia editors who can accept the draft and move it to Wikipedia’s “mainspace”, which makes up the bulk of Wikipedia and the articles you’ll find when you’re searching for information. Reviewers can also reject articles for a variety of reasons, but because hundreds of draft articles are submitted to Wikipedia every day, volunteer reviewers often use a tool called articles for creation/helper script (AFCH), which creates templates for common reasons articles are declined.

This is where Wales thinks AI could help. He wrote that he was asked to look at a specific draft article and give notes that might help the article get published.

…the response suggested the article cite a source that isn’t included in the draft article, and rely on Harvard Business School press releases for other citations, despite Wikipedia policies explicitly defining press releases as non-independent sources that cannot help prove notability, a basic requirement for Wikipedia articles.

Editors also found that the ChatGPT-generated response Wales shared “has no idea what the difference between” some of these basic Wikipedia policies, like notability (WP:N), verifiability (WP:V), and properly representing minority and more widely held views on subjects in an article (WP:WEIGHT).

«

I think you’d have to train an LLM specifically on the incredible jungle of Wikipedia policies, which are used by entrenched editors like jokers in a card game whose rules you’ve only hazily learnt so you could take part. Then it might stand a chance of getting something through.
unique link to this extract


Can Netflix find your new favourite watch based on your star sign? • The Guardian

Stuart Heritage:

»

The challenge for the streamers is how to effectively curate this infinite content. In the past they have done this by prioritising new releases, or showing you what everyone else is watching, or sharpening their algorithms to second-guess what you want to watch based on what you have already watched. But finally – finally – Netflix has cracked it. And it has achieved this with science.

Sorry, not science, bullshit. Because this weekend Netflix officially launched its Astrology Hub. Now, after all these years, subscribers can at last pick something to watch based on a loose collection of personality quirks determined by the position of the planets at the time of their birth. Doesn’t that sound great?

So, for instance, I am a Leo. And this means that, when I enter the Netflix Astrology Hub and scroll down a bit, I am informed that “Leos have main character energy”. And as such, it means I should watch The Crown, or The King, or Queen Charlotte: A Bridgerton Story, or Emily in Paris, or The Kissing Booth 2, or that Cilla Black biopic that ITV made a few years ago. And I think that, as a heterosexual 45-year-old man, Netflix has absolutely cracked it.

Sure, I have already watched most of these for work, and I flat-out disliked almost all of them. But if Netflix says that every single person born within a specific window has the exact same personality, then sure. Tonight, after flopping down on to my sofa at the end of another long day trying to locate the right balance between quality family time and the financial imperative to work, I will watch Emily in bloody Paris. And I will like it, because Netflix told me that I would.

Now, the naysayers among you will point out that all these choices seem geared to appeal to young girls, because young girls are statistically the demographic most likely to believe in the zodiac, and so the Netflix Astrology Hub is essentially just a bit of a grift designed to push content at one specific group. And you might even go further, by pointing out that astrology is a pseudoscience that has repeatedly proved itself to have no scientific validity whatsoever.

To which I respond: of course you think that, you’re a Virgo.

«

The old jokes are the best. The real purpose of the Astrology Hub? To get people to write stories about Netflix. And will they? Well, it is summer, after all.
unique link to this extract


OpenAI launches ChatGPT Go plan in India at Rs 399, India-exclusive and you can pay for it with UPI • India Today

Ankita Garg:

»

OpenAI has introduced a new subscription plan in India called ChatGPT Go. It is priced at Rs 399 [£3.38, $5.23] per month. The plan is being rolled out as an India-only option and can also be purchased through UPI [India’s Unified Payments Interface], making it more convenient for millions of users who rely on the digital payment system for everyday transactions.

This is the first time OpenAI has created a country-specific subscription plan. While Indian users have had access to the free version of ChatGPT as well as the Plus and Pro plans, the new Go tier is designed to give more people access to advanced tools at a lower monthly cost. The company says the plan has been built keeping in mind the scale of usage in India, which has now become its second-largest market for ChatGPT.

Priced significantly below the Plus subscription, which costs Rs 1,999 per month, ChatGPT Go offers higher limits on some of the most used features. Users get 10 times more message capacity, daily image generations, and file uploads, along with twice the memory length for personalised responses. The plan is powered by GPT-5, the company’s latest model, which includes better support for Indic languages.

One of the biggest additions with the Go plan is the ability to pay through UPI. Until now, Indian users could only subscribe using debit or credit cards, which left out a large number of potential customers. By adding UPI support, OpenAI is hoping to make the subscription process as seamless as possible for users across the country.

«

OpenAI as the Facebook of chatbots. I am concerned that its effects will be the same as Facebook’s uncontrolled early spread in communities that were, socially, completely unprepared for it. What will it be like when millions of people are talking to a chatbot that tells them everything they think is absolutely the best idea ever, but have no idea that the chatbot is not in any way sentient?
unique link to this extract


SIM-swapper, Scattered Spider hacker gets ten years • Krebs on Security

Brian Krebs:

»

A 20-year-old Florida man at the center of a prolific cybercrime group known as “Scattered Spider” was sentenced to 10 years in federal prison today, and ordered to pay roughly $13m in restitution to victims. Noah Michael Urban of Palm Coast, Fla. pleaded guilty in April 2025 to charges of wire fraud and conspiracy.

…In November 2024 Urban was charged by federal prosecutors in Los Angeles as one of five members of Scattered Spider (a.k.a. “Oktapus,” “Scatter Swine” and “UNC3944”), which specialized in SMS and voice phishing attacks that tricked employees at victim companies into entering their credentials and one-time passcodes at phishing websites. Urban pleaded guilty to one count of conspiracy to commit wire fraud in the California case, and the $13m in restitution is intended to cover victims from both cases.

The targeted SMS scams spanned several months during the summer of 2022, asking employees to click a link and log in at a website that mimicked their employer’s Okta authentication page. Some SMS phishing messages told employees their VPN credentials were expiring and needed to be changed; other missives advised employees about changes to their upcoming work schedule.

That phishing spree netted Urban and others access to more than 130 companies, including Twilio, LastPass, DoorDash, MailChimp, and Plex. The government says the group used that access to steal proprietary company data and customer information, and that members also phished people to steal millions of dollars worth of cryptocurrency.

…Reached via one of his King Bob accounts on Twitter/X, Urban called the sentence unjust, and said the judge in his case discounted his age as a factor.

“The judge purposefully ignored my age as a factor because of the fact another Scattered Spider member hacked him personally during the course of my case,” Urban said in reply to questions…

«

The hacking wasn’t of the judge himself, but the way it was done… well, it’s in the story, and it’s classic.
unique link to this extract


Amazon quietly blocks AI bots from Meta, Google, Huawei and more • Modern Retail

Allison Smith:

»

Amazon is escalating efforts to keep artificial intelligence companies from scraping its e-commerce data, as the retail giant recently added six more AI-related crawlers to its publicly available robots.txt file.

The change was first spotted by Juozas Kaziukėnas, an independent analyst, who noted that the updated code underlying Amazon’s sprawling website now includes language that prohibits bots from Meta, Google, Huawei, Mistral and others.

“Amazon is desperately trying to stop AI companies from training models on its data,” Kaziukėnas wrote in a LinkedIn post on Thursday. “I think it is too late to stop AI training — Amazon’s data is already in the datasets ChatGPT and others are using. But Amazon is definitely not interested in helping anyone build the future of AI shopping. If that is indeed the future, Amazon wants to build it itself.”

The update builds on earlier restrictions Amazon added at least a month ago targeting crawlers from Anthropic’s Claude, Perplexity and Google’s Project Mariner agents, The Information reported. Robots.txt files are a standard tool that websites use to give instructions to automated crawlers like search engines. While restrictions outlined in robots.txt files are advisory rather than enforceable, they act as signposts for automated systems — that is, if the crawlers are “well-behaved,” they are expected to respect the block, according to Kaziukėnas.

…The move highlights Amazon’s increasingly aggressive stance toward third-party AI tools that could scrape its product pages, monitor prices or even attempt automated purchases. For Amazon, the stakes are significant. Its online marketplace is not only the largest store of e-commerce data in the world but also the foundation of a $56bn advertising business built around shoppers browsing its site. Allowing outside AI tools to surface products directly to users could bypass Amazon’s storefront, undermining both traffic and ad revenue.

«

Some of the AI bots are a lot more aggressive, and some seem to be using multiple IPs and also ignoring robots.txt. So Amazon might have to take other measures if it wants to stop this. I’m not sure its concern is about others directing people straight to products so much as making some sort of AI-generated “here’s a product I imagined, maybe we can get it created”.
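For the curious, robots.txt is purely an honour system: a text file of per-crawler allow/deny rules that a polite bot checks before fetching anything, and an impolite one simply ignores. Here is a minimal sketch in Python using the standard library’s urllib.robotparser – the bot name and product URL below are made up for illustration:

```python
# Sketch: what a "well-behaved" crawler is expected to do before fetching a page.
# The user-agent name and the product URL are illustrative placeholders.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.amazon.com/robots.txt")
rp.read()  # download and parse the site's crawler rules

page = "https://www.amazon.com/some-product-page"
if rp.can_fetch("ExampleAIBot", page):
    print("robots.txt allows ExampleAIBot to fetch this page")
else:
    print("robots.txt disallows it - a polite crawler stops here")
```

Nothing enforces that check, which is why the more aggressive scrapers sail straight past it.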
unique link to this extract


The first evidence of a take-off in solar in Africa • Ember

Dave Jones:

»

The latest data provides evidence that a solar pick-up is happening at scale in many countries in Africa.

Solar is not new to Africa. For more than two decades, solar has helped improve lives across Africa, in rural schools and hospitals, pay-as-you-go in homes, street lighting, water pumping, mini-grids and more. However, South Africa and Egypt are currently the only countries with installed solar capacity measured in gigawatts, rather than megawatts. That could be about to change.

The first evidence of a take-off in solar in Africa is now here:

• The last 12 months saw a big rise in Africa’s solar panel imports. Imports from China rose 60% in the last 12 months to 15,032 MW. Over the last two years, the imports of solar panels outside of South Africa have nearly tripled from 3,734 MW to 11,248 MW.

• The rise happened across Africa. 20 countries set a new record for the imports of solar panels in the 12 months to June 2025. 25 countries imported at least 100 MW, up from 15 countries 12 months before.

• These solar panels will provide a lot of electricity. The solar panels imported into Sierra Leone in the last 12 months, if installed, would generate electricity equivalent to 61% of the total reported 2023 electricity generation, significantly adding to electricity supply. They would add electricity equivalent to over 5% to total reported electricity generation in 16 countries.

• Solar panel imports will reduce fuel imports. The savings from avoiding diesel can repay the cost of a solar panel within six months in Nigeria, and even less in other countries. In nine of the top ten solar panel importers, the import value of refined petroleum eclipses the import value of solar panels by a factor of between 30 to 107.

This surge is still in its early days. Pakistan experienced an immense solar boom in the last two years, but Africa is not the next Pakistan – yet.

«

Hugely encouraging. If batteries follow, the economy could boom.
unique link to this extract


Intel and the foundry state of play • Digits to Dollars

Jonathan Greenberg:

»

Intel has arrived at the crossroads. The company needs to make some critical decisions. It needs to make them very soon. And the fate of the entire company is at stake.

Put simply, the company needs to decide if it wants to continue manufacturing semiconductors. If they choose to continue with the business they led for so long, then they need to invest heavily in advancing their 14A process, and they need to invest in the tools and systems that will allow external customers to use the process as well.

To accomplish this, the company needs some help. They are largely tapped out of their debt capacity (see above regarding that share buyback…). By our rough math, they need about $15bn – $25bn to accomplish this. This is on top of cash they generate from the Product side of the company and funding they have already received from the US government through the CHIPS Act. If they watch every penny and are highly disciplined in their spending, they might get by with less, but there will still be a significant, multi-billion dollar gap to fill.

…For their part, it is unclear what the US government hopes to achieve. If the goal is to prop up a US company, get a stake in a big company at a theoretical discount, maybe generate some jobs through additional US fab construction – then they can get that in a way that does not mean much for Intel’s long-term future. If the government instead wants to ensure that a US company is capable of advanced semis manufacturing then they will need to write a large check.

Our preferred solution is for the government to instead negotiate investments by a host of potential Intel customers – Apple, Qualcomm, Nvidia, Broadcom, Google, Microsoft, etc. These companies all have a massive stake in Intel’s future. The trouble with this approach is that while all these companies have a long-term interest in securing an alternative to TSMC, short-term interest, quarterly expectations and inertia are all strong enough to make a deal unlikely. For some subset of these companies a collective $20bn is both easy to raise and an incredible bargain. Ideally, the government would ‘encourage’ these parties to see past their short-term outlook and save Intel. Unfortunately, no one seems to even be considering this approach.

«

It’s a classic game theory challenge: Apple, for example, has zero reason to give Intel any money. But if TSMC becomes a monopoly, Apple could be a loser just as easily as it is a winner. (Quite apart from any Chinese incursion.) So how much is it worth to guard against the monopoly possibility?

Now, this crossroads that Intel is at? People have been seeing it approach for a long time, as the next two links show.
unique link to this extract


2013: The Intel Opportunity • Stratechery

Ben Thompson, writing in 2013, when Intel had just got a new CEO:

»

Most chip designers are fabless; they create the design, then hand it off to a foundry. AMD, Nvidia, Qualcomm, MediaTek, Apple – none of them own their own factories. This certainly makes sense: manufacturing semiconductors is perhaps the most capital-intensive industry in the world, and AMD, Qualcomm, et al have been happy to focus on higher margin design work.

Much of that design work, however, has an increasingly commoditized feel to it. After all, nearly all mobile chips are centered on the ARM architecture. For the cost of a license fee, companies, such as Apple, can create their own modifications, and hire a foundry to manufacture the resultant chip. The designs are unique in small ways, but design in mobile will never be dominated by one player the way Intel dominated PCs.

It is manufacturing capability, on the other hand, that is increasingly rare, and thus, increasingly valuable. In fact, today there are only four major foundries: Samsung, GlobalFoundries, Taiwan Semiconductor Manufacturing Company, and Intel. Only four companies have the capacity to build the chips that are in every mobile device today, and in everything tomorrow.

Massive demand, limited suppliers, huge barriers to entry. It’s a good time to be a manufacturing company. It is, potentially, a good time to be Intel. After all, of those four companies, the most advanced, by a significant margin, is Intel. The only problem is that Intel sees themselves as a design company, come hell or high water.

Today Intel has once again promoted a COO to CEO. And today, once again, Intel is increasingly under duress. And, once again, the only way out may require a remaking of their identity.

It is into a climate of doom and gloom that Krzanich is taking over as CEO. And, in what will be a highly emotional yet increasingly obvious decision, he ought to commit Intel to the chip manufacturing business, i.e. manufacturing chips according to other companies’ designs.

«

It’s that last sentence: Intel should become a foundry like TSMC, he said. This was 12 years ago. Intel in the succeeding years went from strength to strength until… it didn’t. And that’s been the case for the past two years at least. Some mistakes take a long time to become obvious – but that also makes them very hard to back out of.

However he wasn’t the first to have seen this trouble brewing…
unique link to this extract


When the chips are down • The Guardian

Jack Schofield, writing in, wait for it, 2009:

»

Global Foundries also has alliances with IBM – which has a $2.5bn chip plant nearby in East Fishkill, NY – and several other companies. “We don’t believe any more in a home-grown R&D model,” says a spokesman, Jon Carvill. Rather than just serving AMD, the new strategy is to target the 20 largest companies who need leading-edge chip technologies in high volumes. “There’s very little competition in that part of the market,” says Carvill. “For those customers today, there isn’t any choice: there’s only TSMC [Taiwan Semiconductor Manufacturing Company] that can meet their needs. We’re going to offer an alternative.”

Carvill is confident the “silicon foundry” approach will enable AMD to keep on competing with Intel, the world’s largest chip manufacturer, as circuitry shrinks from today’s 45 nanometres (billionths of a metre) to 22nm and beyond. (The Intel 8088 chip, used in the IBM PC in 1982, had 3-micron – 3,000nm – circuits.)

However, the latest of many predictions of the death of Moore’s law concerns the economics rather than the physics. Len Jelinek, chief analyst for semiconductor manufacturing at iSuppli, has predicted that when we reach 18nm, in 2014, the equipment will be so expensive that chip manufacturers won’t be able to recover the fab costs.

This isn’t really a new idea either. Mike Mayberry, vice-president of Intel’s research and manufacturing group, points out that Arthur Rock, one of Intel’s early venture capital investors, came up with Rock’s law – the cost of a chip fabrication plant doubles every four years.

However, unlike Moore’s law, Rock’s law has not worked out well. In an article published by the IEEE, Philip Ross argued that fabs should have cost $5bn in the late 1990s, and $10bn in 2004. Global Foundries’ new fab may sound expensive at $4.2bn, but that’s an order of magnitude less than $40bn.

Which is not to say there aren’t potential problems in the semiconductor world. Gartner Research’s vice-president, Bob Johnson, points out that apart from Intel and Samsung, who can afford to build this sort of fab for themselves, most companies are likely to move to foundries.

«

To repeat: Jack Schofield (now, sadly, deceased) wrote this in July 2009: the problems with the Intel model of “we’ll just make our chips, thanks” were already becoming visible. (Global Foundries is still going, though shrinking, but TSMC has become the 500lb gorilla of chipmaking.)
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Start Up No.2501: the silence of the AI companies, Google inches towards verified developers, Citizen’s AI crimes, and more


Trying to teach Romeo and Juliet (in whatever medium) to phone-distracted schoolchildren is no fun. So, subtract the phones? CC-licensed photo by iClassical Com on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


There’s another post coming this week at the Social Warming Substack on Friday at 0845 UK time. Free signup.


A selection of 9 links for you. Not even the DiCaprio one? I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


Do AI companies actually care about America? • The Atlantic

Matteo Wong:

»

Hillary Clinton once described interacting with Mark Zuckerberg as “negotiating with a foreign power” to my colleague Adrienne LaFrance.

American politicians have been wholly unsuccessful in reining in that power, and now, as the AI boom brings Silicon Valley’s ambitions to new heights, they are positioned more than ever as industry cheerleaders. Seen one way, this is classic conservatism: the championing of America’s business rulers based on the belief that their success will redound on the nation. Seen another, it is a dereliction of duty: elected officials willingly outsourcing their stewardship of the national interest to a tiny group of billionaires who believe they know what’s best for humanity.

The tech industry’s new ambitions—using AI to reshape not just work, school, and social life but perhaps even governance itself—do have a major vulnerability: The AI patriots desperately need the president’s approval. Chatbots rely on enormous data centers and the associated energy infrastructure that depend on the government to permit and expedite major construction projects; AI products, which are still fallible and have yet to show a clear path to profits, are in need of every bit of grandiose marketing—and all the potentially lucrative government and military contracts—available. Shortly after the inauguration, Zuckerberg, who is also aggressively pursuing AI development, said in a Meta earnings call, “We now have a U.S. administration that is proud of our leading companies, prioritizes American technology winning, and that will defend our values and interests abroad.” Altman, once a vocal opponent of Trump, has written that he now believes that Trump “will be incredible for the country in many ways!”

That dependence has led to a kind of cognitive dissonance. In this still-early stage of the AI boom, Silicon Valley, for all its impunity, has chosen not to voice robust ideas about democracy that differ substantively from the whims of a mercurial White House. As millions of everyday citizens, current and former government officials, lawyers and academics, and dissidents from dictatorships around the world have warned that the Trump administration is eroding American democracy, AI companies have remained mostly supportive or silent despite their own bombastic rhetoric about protecting democracy.

«

unique link to this extract


Google will require developer verification to install Android apps • 9to5Google

Abner Li:

»

To combat malware and financial scams, Google announced on Monday that only apps from developers that have undergone verification can be installed on certified Android devices starting in 2026.

This requirement applies to “certified Android devices” that have Play Protect and are preloaded with Google apps. The Play Store implemented similar requirements in 2023, but Google is now mandating this for all install methods, including third-party app stores and sideloading where you download an APK file from a third-party source.

Google wants to combat “convincing fake apps” and make it harder for repeat “malicious actors to quickly distribute another harmful app after we take the first one down.” A recent analysis by the company found that there are “over 50 times more malware from internet-sideloaded sources than on apps available through Google Play.”

Google is explicit today about how “developers will have the same freedom to distribute their apps directly to users through sideloading or to use any app store they prefer.”

«

This has taken a long, long time to come through, hasn’t it. Google’s first smartphone came out in 2008, and it was already producing an app store. Malware has been a constant problem, but now developers will have to be verified.

But it’s even slower than you think:

»

The first Android app developers will get access to verification this October, with the process opening to all in March 2026.

The requirement will go into effect in September 2026 for users in Brazil, Indonesia, Singapore, and Thailand. Google notes how these countries have been “specifically impacted by these forms of fraudulent app scams.” Verification will then apply globally from 2027 onwards.

«

unique link to this extract


Perplexity is launching a new revenue-share model for publishers • WSJ

Alexandra Bruell:

»

Perplexity will pay publishers for news articles that the artificial-intelligence company uses to answer queries. 

The artificial-intelligence startup expects to pay publishers from a $42.5m revenue pool initially, and to increase that amount over time, Perplexity said Monday.

Perplexity plans to distribute money when its AI assistant or search engine uses a news article to fulfill a task or answer a search request. 

Its payments to publishers will come out of the subscription revenue generated by a new news service, called Comet Plus, that Perplexity plans to roll out widely this fall.

Perplexity said publishers will get 80% of Comet Plus revenue, including from the more expensive subscription tiers that provide Comet Plus free of charge. 

Bloomberg News earlier reported Perplexity’s plans to pay publishers.

Like other AI rivals, Perplexity has been building a search engine for the AI era, and turned to news articles and other content to answer queries from users. But publishers have complained the AI firms are taking their work without compensation, while siphoning away traffic that would otherwise go to their websites and apps.

«

That’s quite optimistic about the subscription revenue that will come in. Wonder too how the money compares, for publishers, with what they’d make from people actually visiting their sites.
unique link to this extract


Citizen is using AI to generate crime alerts with no human review. It’s making a lot of mistakes • 404 Media

Joseph Cox:

»

Crime-awareness app Citizen is using AI to write alerts that go live on the platform without any prior human review, leading to factual inaccuracies, the publication of gory details about crimes, and the exposure of sensitive data such as peoples’ license plates and names, 404 Media has learned.

The news comes as Citizen recently laid off more than a dozen unionized employees, with some sources believing the firings are related to Citizen’s increased use of AI and the shifting of some tasks to overseas workers. It also comes as New York City enters a more formal partnership with the app.

…For years Citizen employees have listened to radio feeds and written these alerts themselves. More recently, Citizen has turned to AI instead, with humans “becoming increasingly bare,” one source said. The descriptions of Citizen’s use of AI come from three sources familiar with the company. 404 Media granted them anonymity to protect them from retaliation.

Initially, Citizen brought in AI to assist with drafting notifications, two sources said. “The next iteration was AI starting to push incidents from radio clips on its own,” one added. “There was no analyst or human involvement in the information that was being pushed in those alerts until after they were sent.”

All three sources said the AI made mistakes or included information it shouldn’t have. AI mistranslated “motor vehicle accident” to “murder vehicle accident.” It interpreted addresses incorrectly and published an incorrect location. It would add gory or sensitive details that violated Citizen’s guidelines, like saying “person shot in face” or including a person’s license plate details in an unconfirmed report. It would generate a report based on a homeless person sleeping in a location. The AI sometimes blasted a notification about police officers spotting a stolen vehicle or homicide suspect, potentially putting that operation at risk.

«

Surprise!
unique link to this extract


What many parents miss about the phones-in-schools debate • The Atlantic

Gail Cornwall:

»

…within the next two years, a majority of U.S. kids will be subject to some sort of phone-use restriction [in schools].

…Part of the reason that I feel so strongly about getting phones out of classrooms is that I know what school was like for teachers without them. In 2005, when I was 25 years old, I showed up at a Maryland high school eager to thrill three classes of freshmen with my impassioned dissection of Romeo and Juliet. Instead, I learned how quickly a kid’s eraser-tapping could distract the whole room, and how easily one student’s bare calves could steal another teen’s attention. Reclaiming their focus took everything I had: silliness, flexibility, and a strong dose of humility.

Today, I doubt Mercutio and I would stand a chance. Even with the rising number of restrictions, smartphones are virtually unavoidable in many schools. Consider my 16-year-old’s experience: Her debate team communicates using the Discord app. Flyers about activities require scanning a QR code. Her teachers frequently ask that she submit photos of completed assignments, which her laptop camera can’t capture clearly. In some classes, students are expected to complete learning games on their smartphone.

Because of the way devices—and human brains—are built, asking teens to use a phone in class but not look at other apps is likely to be as ineffective as DARE’s “Just Say No” campaign. Studies have shown that simply having a phone nearby can reduce a person’s capacity to engage with those around them and focus on tasks. This is because each alert offers a burst of dopamine, which can condition people to want to open their phone even before they get a notification.

«

As Cornwall points out, many parents are fine with every other child not having a phone – but their child needs one. Just in case.
unique link to this extract


More UK news publishers are adopting ‘consent or pay’ advertising model • Press Gazette

Charlotte Tobitt:

»

Sixteen of the 50 biggest news websites in the UK are now using a “consent or pay” model to allow users to pay to reject personalised advertising or even avoid ads altogether.

UK publishers began to implement the model last year as the Information Commissioner’s Office cracked down on the requirement for the biggest sites to display a “reject all cookies” button as prominently as the option to “accept all”.

More publishers have begun to implement consent or pay this year after the ICO clarified that the model was acceptable as long as users are given a “realistic choice”, including by not putting the price too high.

The ICO rules relate to the ability for users to opt out of tracking cookies used to show personalised advertising, which have a higher value to advertisers.

But some publishers have chosen instead to offer users the choice between accepting cookies and paying to see no adverts at all, making it more attractive to users fed up with cluttered browsing experiences.

…When users are equally offered the chance to “accept all” or “reject all” cookies, consent rates are typically somewhere around 70-80%, according to both Skovgaards and Contentpass founder Dirk Freytag.

Once a consent or pay model is introduced, almost 100% choose to accept cookies with a small number choosing to pay instead, they each told Press Gazette.

This means publishers are more likely to benefit from a better price for their advertising than if people had chosen to reject personalised advertising, and the small number who choose to pay make up for the advertising that would be lost if they had otherwise rejected.

«

(Thanks Gregory B for the link.)
unique link to this extract


Elon Musk sues Apple and OpenAI, revealing his panic over OpenAI dominance • Ars Technica

Ashley Belanger:

»

After a public outburst over Grok’s App Store rankings, on Monday, Elon Musk followed through on his threat to sue Apple and OpenAI.

At first, Musk appeared fixated on ChatGPT consistently topping Apple’s “Must Have” app list—which Grok has never made—claiming Apple seemed to preference OpenAI, an Apple partner, over all chatbot rivals. But Musk’s filing shows that the X and xAI owner isn’t just trying to push for more Grok downloads on iPhones—he’s concerned that Apple and OpenAI have teamed up to completely dash his “everything app” dreams, which was the reason he bought Twitter.

At this point appearing to be genuinely panicked about OpenAI’s insurmountable lead in the chatbot market, Musk has specifically alleged that an agreement integrating ChatGPT into the iOS violated antitrust and unfair competition laws. Allegedly, the conspiracy is designed to protect Apple’s smartphone monopoly and block out AI rivals to lock in OpenAI’s dominance in the chatbot market.

As Musk sees it, Apple is supposedly so worried that X will use Grok to create a “super app” that replaces the need for a sophisticated smartphone that the iPhone maker decided to partner with OpenAI to limit X and xAI innovation. The complaint quotes Apple executive Eddy Cue as expressing “worries that AI might destroy Apple’s smartphone business,” due to patterns observed in foreign markets where super apps exist, like WeChat in China.

“In a desperate bid to protect its smartphone monopoly, Apple has joined forces with the company that most benefits from inhibiting competition and innovation in AI: OpenAI, a monopolist in the market for generative AI chatbots,” Musk’s lawsuit alleged.

«

One can see how ridiculous this is by looking at China, where WeChat is the “super app” that allows people to do anything. However Apple doesn’t bar it from the App Store in China, nor has it built a competitor.

As for being anticompetitive by picking OpenAI – it’s anything but: Apple can choose any AI it wants. If it thinks Grok is better, it can slot it in. America’s litigation culture has long since got out of hand.
unique link to this extract


Israel vs. Iran… on the blockchain • Cryptadamus

“Michel de Cryptadamus”:

»

So-called “stablecoins” like Tether, whose values are pegged to that of so-called “real” money (e.g. US dollars), are a perfect example of extremely censorable cryptocurrencies. Any US dollars you are holding “on chain” in the form of Tether’s USDT tokens, Circle’s USDC tokens, the Trump family’s new USD1 tokens, or whatever other stablecoin you choose can be instantly zapped from afar by the folks at Tether or Circle or Trump HQ whenever they feel like it and for whatever reason they choose. Due to both the jurisdictional issues (most stablecoins are located in small island tax shelters) as well as the terms of service you agreed to when you touched the stablecoin you have pretty much no recourse of any kind.

…While the governments of both Israel and the U.S. periodically make the blockchain addresses they have asked Tether to blacklist public via court records, sanctions related press releases, or similar documentation, a) they do not always do this and b) even when they do the seizure orders are often unsealed weeks or months after the actual blacklisting happens. I could not find any governmental records about why this huge number of wallets were suddenly being blacklisted so I set out to do a bit of investigating.

It turned out that not only could a very large percentage (~30%) of the wallets that were blacklisted since slightly before the outbreak of Middle Eastern hostilities be linked to Iranian crypto exchanges like the aforementioned Nobitex with an extremely cursory scan of the blockchain, a couple of them could even be directly observed sending funds to and/or from a blockchain address the governments of both the U.S. and the U.K. claim belongs to Sa’id Ahmad Muhammad Al-Jamal, a sanctioned IRGC-connected money launderer with a Chinese passport.

«

Fascinating insight into the newest frontier for war: cutting crypto supply lines.
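One detail worth adding: the blacklist described in the post is itself public state on the Ethereum blockchain, so anyone can check whether a given wallet has been frozen. A minimal sketch with web3.py – assuming the widely published mainnet USDT contract address and its public isBlackListed getter; the RPC endpoint and the wallet being checked are placeholders:

```python
# Sketch: query whether an address is on Tether's on-chain USDT blacklist.
# Assumes web3.py v6; the RPC URL and the wallet address are placeholders.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-ethereum-rpc"))

# Widely published mainnet address of the USDT (Tether) token contract.
USDT = Web3.to_checksum_address("0xdAC17F958D2ee523a2206206994597C13D831ec7")

# Minimal ABI fragment for the contract's public blacklist mapping getter.
BLACKLIST_ABI = [{
    "constant": True,
    "inputs": [{"name": "", "type": "address"}],
    "name": "isBlackListed",
    "outputs": [{"name": "", "type": "bool"}],
    "stateMutability": "view",
    "type": "function",
}]

usdt = w3.eth.contract(address=USDT, abi=BLACKLIST_ABI)
wallet = Web3.to_checksum_address("0x0000000000000000000000000000000000000001")
print(usdt.functions.isBlackListed(wallet).call())  # True if the wallet is frozen
```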
unique link to this extract


Air pollution from oil and gas causes 90,000 premature US deaths each year, says new study • The Guardian

Dharna Noor:

»

More than 10,000 annual pre-term births are attributable to fine particulate matter from oil and gas, the authors found, also linking 216,000 annual childhood-onset asthma cases to the sector’s nitrogen dioxide emissions and 1,610 annual lifetime cancer cases to its hazardous air pollutants.

The highest number of impacts are seen in California, Texas, New York, Pennsylvania and New Jersey, while the per-capita incidences are highest in New Jersey, Washington DC, New York, California and Maryland.

The analysis by researchers at University College London and the Stockholm Environment Institute is the first to examine the health impacts – and unequal health burdens – caused by every stage of the oil and gas supply chain, from exploration to end use.

“We’ve long known that these communities are exposed to such levels of inequitable exposure as well as health burden,” said Karn Vohra, a postdoctoral research fellow in geography at University College London, who led the paper. “We were able to just put numbers to what that looks like.”

While Indigenous and Hispanic populations are most affected by pollution from exploration, extraction, transportation and storage, Black and Asian populations are most affected by emissions from processing, refining, manufacturing, distribution and usage.

«

The story is covered in links, but none to the actual paper. Took me a little searching, but the clues sprinkled around the story let me find the original study. Journalists: please link to the studies. This one even has nice diagrams.
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Start Up No.2500: Russia churns out fake political news, YouTube trials AI editing, Meta to offer $800 smart glasses, and more


The rise of the AI obituary raises the question of whether another element of human effort is going to vanish in the face of the chatbot. CC-licensed photo by K P on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.

A selection of 9 links for you. Nice round number. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


Russia is quietly churning out fake content posing as US news • POLITICO

Dana Nickel:

»

A pro-Russian propaganda group is taking advantage of high-profile news events to spread disinformation, and it’s spoofing reputable organizations — including news outlets, nonprofits and government agencies — to do so.

According to misinformation tracker NewsGuard, the campaign — which has been tracked by Microsoft’s Threat Analysis Center as Storm-1679 since at least 2022 — takes advantage of high-profile events to pump out fabricated content from various publications, including ABC News, BBC and most recently POLITICO.

This year, the group has focused on flooding the internet with fake content surrounding the German snap elections and the upcoming Moldovan parliamentary vote. The campaign also sought to plant false narratives around the war in Ukraine ahead of President Donald Trump’s meeting with Russian President Vladimir Putin on Friday.

McKenzie Sadeghi, AI and foreign influence editor at NewsGuard, said in an interview that since early 2024, the group has been publishing “pro-Kremlin content en masse in the form of videos” mimicking these organizations.

“If even just one or a few of their fake videos go viral per year, that makes all of the other videos worth it,” she said.

While online Russian influence operations have existed for many years, security experts say artificial intelligence is making it harder for people to discern what’s real.

«

AI is all over the place and people absolutely can’t tell the difference. Can’t tell the difference in writing, can’t distinguish it in photos, and soon video and audio will follow. Is this how a culture or a civilisation loses its mind?
unique link to this extract


YouTube secretly used AI to edit people’s videos. The results could bend reality • BBC Future

Thomas Germain:

»

Rick Beato’s face just didn’t look right. “I was like ‘man, my hair looks strange’,” he says. “And the closer I looked it almost seemed like I was wearing makeup.” Beato runs a YouTube channel with over five million subscribers, where he’s made nearly 2,000 videos exploring the world of music. Something seemed off in one of his recent posts, but he could barely tell the difference. “I thought, ‘am I just imagining things?’”

It turns out, he wasn’t. In recent months, YouTube has secretly used artificial intelligence (AI) to tweak people’s videos without letting them know or asking permission. Wrinkles in shirts seem more defined. Skin is sharper in some places and smoother in others. Pay close attention to ears, and you may notice them warp. These changes are small, barely visible without a side-by-side comparison. Yet some disturbed YouTubers say it gives their content a subtle and unwelcome AI-generated feeling.

There’s a larger trend at play. A growing share of reality is pre-processed by AI before it reaches us. Eventually, the question won’t be whether you can tell the difference, but whether it’s eroding our ties to the world around us.

“The more I looked at it, the more upset I got,” says Rhett Shull, another popular music YouTuber. Shull, a friend of Beato’s, started looking into his own posts and spotted the same strange artefacts. He posted a video on the subject that’s racked up over 500,000 views.

“If I wanted this terrible over-sharpening I would have done it myself. But the bigger thing is it looks AI-generated. I think that deeply misrepresents me and what I do and my voice on the internet. It could potentially erode the trust I have with my audience in a small way. It just bothers me.”

…Now, after months of rumors in comment sections, the company has finally confirmed it is altering a limited number of videos on YouTube Shorts, the app’s short-form video feature.

“We’re running an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise and improve clarity in videos during processing (similar to what a modern smartphone does when you record a video),” said Rene Ritchie, YouTube’s head of editorial and creator liaison, in a post on X.

«

The trajectory is that unless it’s calamitous and people watch fewer of them (the only metric Google would care about – complaints won’t get a hearing), then this will become standard and, after a few months of it, people will shrug and accept it. Because where else are they going to go?
unique link to this extract


Meta to unveil Hypernova smart glasses with display, wristband at Connect • CNBC

Salvador Rodriguez, Lora Kolodny and Jonathan Vanian:

»

Meta is planning to use its annual Connect conference next month to announce a deeper push into smart glasses, including the launch of the company’s first consumer-ready glasses with a display, CNBC has learned.

That’s one of the two new devices Meta is planning to unveil at the event, according to people familiar with the matter. The company will also launch its first wristband that will allow users to control the glasses with hand gestures, the people said.

Connect is a two-day conference for developers focused on virtual reality, AR and the metaverse. It was originally called Oculus Connect and obtained its current moniker after Facebook changed its parent company name to Meta in 2021.

The glasses are internally codenamed Hypernova and will include a small digital display in the right lens of the device, said the people, who asked not to be named because the details are confidential.

The device is expected to cost about $800 and will be sold in partnership with EssilorLuxottica, the people said.

«

This is surely going to be the next big category, and within a year or so analysts will be looking for signs that Apple is developing something in this space, and marking it down if it isn’t.

If you don’t believe me that smart glasses have a huge potential market, count the number of people you see tomorrow walking along a pavement staring into their phones. Every one would buy smart glasses.
unique link to this extract


Hackers who exposed North Korean government hacker explain why they did it • TechCrunch

Lorenzo Franceschi-Bicchierai:

»

Earlier this year, two hackers broke into a computer and soon realized the significance of what this machine was. As it turned out, they had landed on the computer of a hacker who allegedly works for the North Korean government.

The two hackers decided to keep digging and found evidence that they say linked the hacker to cyberespionage operations carried out by North Korea, exploits and hacking tools, and infrastructure used in those operations.

Saber, one of the hackers involved, told TechCrunch that they had access to the North Korean government worker’s computer for around four months, but as soon as they understood what data they got access to, they realized they eventually had to leak it and expose what they had discovered.

“These nation-state hackers are hacking for all the wrong reasons. I hope more of them will get exposed; they deserve to be,” said Saber, who spoke to TechCrunch after he and cyb0rg published an article in the legendary hacking e-zine Phrack, disclosing details of their findings. 

«

The article has lots of deep hacker stuff – shall we shade past the part where these guys broke into someone’s computer, and then realised it was a state-sponsored hacker’s? – including this:

»

Kimsuky [the North Korean hacking group], you are not a hacker. You are driven by financial greed, to enrich your leaders, and to fulfill their political agenda. You steal from others and favour your own. You value yourself above the others: You are morally perverted.

I am a Hacker and I am the opposite to all that you are. In my realm, we are all alike. We exist without skin color, without nationality, and without political agenda. We are slaves to nobody.

«

Nice to believe, but not sure about the lack of political agenda.
unique link to this extract


The rise of AI tools that write about you when you die • The Washington Post

Drew Harwell:

»

Funeral directors are increasingly asking the relatives of the deceased whether they would prefer for AI to write the obituary, rather than take on the task themselves. Josh McQueen, the vice president of marketing and product for the funeral-home management software Passare, said its AI tool has written tens of thousands of obituaries nationwide in the past few years.

Tech start-ups are also working to build obituary generators that are available to everyone in their time of grief, for a small fee. Sonali George, the founder of one such tool called CelebrateAlly, said the AI functions as an “enabler for human connection” because it can help people skip past an overwhelming task and still end up with something that can bring their family together.

“Imagine for the person who just died, [wouldn’t] that person want their best friend to say a heartfelt tribute that makes everybody laugh, brings out the best, with AI?” she said. “If you had the tool to do ‘25 reasons why I love you, mom,’” she added, “wouldn’t it still mean something, even if it was written by a machine?”

…To McQueen, the funeral software executive, the technology’s value is obvious. For a human, the task of elegantly summing up a loved one’s life — while also navigating the sadness and logistics of their death — can be stressful and emotionally draining. For a large language model, it’s all just text.

“You’re given this assignment to write 500 words, and you want to be loving and profound, but you’re dealing with this grief, so you sit at your computer and you’re paralyzed,” he said. “If this can help get some of your thoughts and ideas down on paper … that to me is a win.”

Thousands of funeral homes now use the company’s software, he said, and many of them let families access the AI tool through their online funeral portals. Beyond clearing writer’s block, he said, the AI is unmatched in being able to quickly adjust an obituary’s length or tone.

“Do you want it to be more celebratory? Traditional? Poetic? Humorous?” McQueen said. “It provides just a new flavor on it, if you will.”

«

I’m absolutely certain that the next story coming up in this sequence is preachers getting chatbots to write sermons.
unique link to this extract


Our response to Mississippi’s Age Assurance Law • Bluesky

“The Bluesky team”:

»

Mississippi’s HB1126 requires platforms to implement age verification for all users before they can access services like Bluesky. That means, under the law, we would need to verify every user’s age and obtain parental consent for anyone under 18. The potential penalties for non-compliance are substantial — up to $10,000 per user. Building the required verification systems, parental consent workflows, and compliance infrastructure would require significant resources that our small team is currently unable to spare as we invest in developing safety tools and features for our global community, particularly given the law’s broad scope and privacy implications.

While we share the goal of protecting young people online, we have concerns about this law’s implementation:

• Broad scope: The law requires age verification for all users, not just those accessing age-restricted content, which affects the ability of everyone in Mississippi to use Bluesky
• Barriers to innovation: The compliance requirements disadvantage newer and smaller platforms like Bluesky, which do not have the luxury of big teams to build the necessary tooling. The law makes it harder for people to engage in free expression and chills the opportunity to communicate in new ways
• Privacy implications: The law requires collecting and storing sensitive personal information from all users, including detailed tracking of minors.

Starting [from last Friday], if you access Bluesky from a Mississippi IP address, you’ll see a message explaining why the app isn’t available. This block will remain in place while the courts decide whether the law will stand.

«

Lucky Mississippi! Meanwhile if you want to read about the various verification methods in use, there’s this WSJ article. (Thanks Gregory B for the WSJ one.)
unique link to this extract


Samsung reportedly looking to partner with Intel in the chip industry to leverage President Trump’s ‘personal support’ for Team Blue • WCCftech

Muhammad Zuhair:

»

According to Taiwan Economic Daily, citing Korean sources, it is reported that Samsung is looking for a ‘strategic partnership’ with US chipmakers such as Intel, in a bid to secure a better trade deal with the Trump administration.

It is claimed that Samsung is exploring partnerships with American companies to ‘please’ the Trump administration and ensure that its regional operations aren’t affected by hefty tariffs. It is speculated that if Samsung manages to strike a deal with Intel, it would allow the Korean giant to see an elevated status in the eyes of President Trump, mainly since Intel has become an important factor for the current US administration. While details around how the partnership could pan out are uncertain, we might know how it could turn out.

In a previous report, we discussed how Intel is abandoning its pursuit of glass substrates, and in the midst of it, several engineers from the firm are moving to Samsung’s Electro-Mechanics division in the US, since the Korean giant sees glass substrates as an essential part of its prospects. More importantly, since Intel is looking to license its glass substrate technology, Samsung could also play a role in this by producing end solutions for Team Blue, ultimately allowing both firms to leverage the packaging technology.

«

Nobody could accuse Samsung of not spotting an opportunity. Chapeaux.
unique link to this extract


The US takes a 10% stake in Intel as part of Trump’s big tech push • CNN Business

Clare Duffy and Lisa Eadicicco:

»

The United States government is making an $8.9bn investment in Intel common stock, giving the Trump administration a roughly 10% stake in the struggling chipmaker, Intel and the president announced on Friday.

“It is my Great Honor to report that the United States of America now fully owns and controls 10% of INTEL, a Great American Company that has an even more incredible future,” Trump wrote in a Truth Social post on Friday.

The announcement came after Trump said earlier in the Oval Office on Friday that the CEO of Intel had agreed to such a deal, adding that he hopes to strike similar deals with other companies in the future.

“I said, I think you should pay us 10% of your company,” Trump said of his conversations with Intel CEO Lip-Bu Tan. “And they said yes.”

«

Intel says there won’t be government representation on the board – let’s see how long that lasts! – but the obvious reason for this, which Trump probably didn’t know about two weeks ago, is that if things get sticky with China, you need to be able to make some chips outside Taiwan.
unique link to this extract


University of Chicago lost money on crypto, then froze research when federal funding was cut • Stanford Review

Teddy Ganea:

»

UChicago’s financial position is clear: Unlike near-peer institutions, its endowment is not large enough to sustain its spending and debt. The university carries nearly $6bn in debt while running annual budget deficits exceeding $200m, all on an endowment three times smaller than Stanford’s.

To compensate, the university has focused on expanding lucrative certification programs, increasing donations, raising tuitions, and cutting costs, though many faculty and students viscerally disagree with the administration on which costs to cut.

Yet these debates neglect the most important factor: the UChicago endowment’s weak returns, driven by poor investment decisions.

Possibly the most notorious example is the university’s foray into cryptocurrency. Four sources, as well as widespread campus rumors, allege that the university lost tens of millions investing in crypto around 2021. Given UChicago’s extraordinary 37.6% endowment gain that year, far beyond what conventional investments would have yielded, it’s likely they took significant risks. But if those gambles paid off in the short term, they quickly unraveled.

UChicago’s endowment remains lower today than in 2021.

…Had UChicago simply matched the market, its endowment would be $6.45bn larger today—more than enough to repay its entire debt. Obviously, universities cannot just track the market, as they must hedge for downturns to maintain financial stability. But even if UChicago had only matched its Ivy League near-peers, its endowment would still be $3.69bn larger.

«

Those who bought bitcoin and stuck with it, though, must be laughing: that part of the crypto bubble has not, so far, burst.
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Start Up No.2499: Meta chatbot deludes ill man, Trump strips satellite data, let’s vibe code!, the TikTok question, “AI journalism”, and more


A social media scam is making golf fans think women professionals are getting in touch to offer private dinners. You guessed – they aren’t. CC-licensed photo by Justin Falconer on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


A selection of 9 links for you. Fore(head). I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


A flirty Meta AI bot invited a retiree to meet. He never made it home • Reuters

Jeff Horwitz:

»

When Thongbue Wongbandue began packing to visit a friend in New York City one morning in March, his wife Linda became alarmed.

“But you don’t know anyone in the city anymore,” she told him. Bue, as his friends called him, hadn’t lived in the city in decades. And at 76, his family says, he was in a diminished state: He’d suffered a stroke nearly a decade ago and had recently gotten lost walking in his neighborhood in Piscataway, New Jersey.

Bue brushed off his wife’s questions about who he was visiting. “My thought was that he was being scammed to go into the city and be robbed,” Linda said.

She had been right to worry: Her husband never returned home alive. But Bue wasn’t the victim of a robber. He had been lured to a rendezvous with a young, beautiful woman he had met online. Or so he thought.

In fact, the woman wasn’t real. She was a generative artificial intelligence chatbot named “Big sis Billie,” a variant of an earlier AI persona created by the giant social-media company Meta Platforms in collaboration with celebrity influencer Kendall Jenner. During a series of romantic chats on Facebook Messenger, the virtual woman had repeatedly reassured Bue she was real and had invited him to her apartment, even providing an address.

“Should I open the door in a hug or a kiss, Bu?!” she asked, the chat transcript shows.

Rushing in the dark with a roller-bag suitcase to catch a train to meet her, Bue fell near a parking lot on a Rutgers University campus in New Brunswick, New Jersey, injuring his head and neck. After three days on life support and surrounded by his family, he was pronounced dead on March 28.

Meta declined to comment on Bue’s death or address questions about why it allows chatbots to tell users they are real people or initiate romantic conversations. The company did, however, say that Big sis Billie “is not Kendall Jenner and does not purport to be Kendall Jenner.”

«

Are chatbots uncovering the extent to which people will believe anything, or exacerbating the problem? This is going to be the big social question for the next few years.
unique link to this extract


Trump admin strips ocean and air pollution monitoring from next-gen weather satellites • CNN

Andrew Freedman:

»

The National Oceanic and Atmospheric Administration is narrowing the capabilities and reducing the number of next-generation weather and climate satellites it plans to build and launch in the coming decades, two people familiar with the plans told CNN.

This move — which comes as hurricane season ramps up with Erin lashing the East Coast — fits a pattern in which the Trump administration is seeking to not only slash climate pollution rules, but also reduce the information collected about the pollution in the first place. Critics of the plan also say it’s a short-sighted attempt to save money at the expense of understanding the oceans and atmosphere better.

Two planned instruments, one that would measure air quality, including pollution and wildfire smoke, and another that would observe ocean conditions in unprecedented detail, are no longer part of the project, the sources said.

“This administration has taken a very narrow view of weather,” one NOAA official told CNN, noting the jettisoned satellite instruments could have led to better enforcement and regulations on air pollution by more precisely measuring it.

The cost of the four satellites, known as the Geostationary Extended Observations, nicknamed GeoXO, would be lower than originally spelled out under the Biden administration, at a maximum of $500m per year for a total of $12bn, but some scientists say the cheaper up-front price would come at a cost to those who would have benefited from the air and oceans data.

«

It’s going to take so long to fill the gaps that are being created by this administration.
unique link to this extract


Why did a $10bn startup let me vibe-code for them—and why did I love it? • WIRED

Lauren Goode:

»

Since 2022, the Notion app has had an AI assistant to help users draft their notes. Now the company is refashioning this as an “agent,” a type of AI that will work autonomously in the background on your behalf while you tackle other tasks. To pull this off, human engineers need to write lots of code.

They open up Cursor and select which of several AI models they’d like to tap into. Most engineers I chatted with during my visit preferred Claude, or they used the Claude Code app directly. After choosing their fighter, the engineers ask their AI to draft code to build a new thing or fix a feature. The human programmer then debugs and tests the output as needed—though the AIs help with this too—before moving the code to production.

At its foundational core, generative AI is enormously expensive. The theoretical savings come in the currency of time, which is to say, if AI helped Notion’s cofounder and CEO Ivan Zhao finish his tasks earlier than expected, he could mosey down to the jazz club on the ground floor of his Market Street office building and bliss out for a while. Ivan likes jazz music. In reality, he fills the time by working more. The fantasy of the four-day workweek will remain just that.

My workweek at Notion was just two days, the ultimate code sprint. (In exchange for full access to their lair, I agreed to identify rank-and-file engineers by first name only.) My first assignment was to fix the way a chart called a mermaid diagram appears in the Notion app. Two engineers, Quinn and Modi, told me that these diagrams exist as SVG files in Notion and, despite being called scalable vector graphics, can’t be scaled up or zoomed into like a JPEG file. As a result, the text within mermaid diagrams on Notion is often unreadable.

Quinn slid his laptop toward me. He had the Cursor app open and at the ready, running Claude. For funsies, he scrolled through part of Notion’s code base. “So, the Notion code base? Has a lot of files. You probably, even as an engineer, wouldn’t even know where to go,” he said, politely referring to me as an engineer. “But we’re going to ignore all that. We’re just going to ask the AI on the sidebar to do that.”

«

Yes, why would a startup hoping to get favourable coverage let a journalist mess around with its codebase in a way that it could revert as soon as she’s gone? Complete mystery.
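For what it’s worth, the scaling problem the engineers describe is a well-worn one: an SVG that carries fixed width and height attributes but no viewBox only renders at its intrinsic size, whatever the “scalable” in the name promises. A minimal TypeScript sketch of the usual fix – my illustration, not Notion’s actual code, and the helper name is invented:

```typescript
// Minimal sketch (hypothetical helper, not Notion's code): give a fixed-size
// SVG a viewBox and relative dimensions so the browser can scale it, keeping
// diagram text legible when the element is enlarged or zoomed.
function makeSvgScalable(svg: SVGSVGElement): void {
  const width = svg.getAttribute("width");
  const height = svg.getAttribute("height");
  if (!svg.hasAttribute("viewBox") && width && height) {
    // Preserve the drawing's intrinsic coordinate system and aspect ratio.
    svg.setAttribute("viewBox", `0 0 ${parseFloat(width)} ${parseFloat(height)}`);
  }
  // Let the surrounding layout decide the rendered size.
  svg.removeAttribute("width");
  svg.removeAttribute("height");
  svg.style.width = "100%";
  svg.style.height = "auto";
}
```

Dropping the fixed dimensions and relying on the viewBox is the standard way to make an SVG behave responsively in the browser.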
unique link to this extract


The AI hype is fading fast • Los Angeles Times

Michael Hiltzik:

»

“What I had not realized,” [Joseph] Weizenbaum wrote in 1976, “is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.” Weizenbaum warned that the “reckless anthropomorphization of the computer” — that is, treating it as some sort of thinking companion — produced a “simpleminded view of intelligence.”

That tendency has been exploited by today’s AI promoters. They label the frequent mistakes and fabrications produced by AI bots as “hallucinations,” which suggests that the bots have perceptions that may have gone slightly awry. But the bots “don’t have perceptions,” Bender and Hanna write, “and suggesting that they do is yet more unhelpful anthropomorphization.”

The general public may finally be cottoning on to the failed promise of AI more generally. Predictions that AI will lead to large-scale job losses in creative and STEM fields (science, technology, engineering and math) might inspire feelings that the whole enterprise was a tech-industry scam from the outset.

Predictions that AI would yield a burst of increased worker productivity haven’t been fulfilled; in many fields, productivity declines, in part because workers have to be deployed to double-check AI outputs, lest their mistakes or fabrications find their way into mission-critical applications — legal briefs incorporating nonexistent precedents, medical prescriptions with life-threatening ramifications and so on.

Some economists are dashing cold water on predictions of economic gains more generally. MIT economist Daron Acemoglu, for example, forecast last year that AI would produce an increase of only about 0.5% in U.S. productivity and an increase of about 1% in gross domestic product over the next 10 years, mere fractions of the AI camp’s projections.

«

Hiltzik makes a long argument about all the AI hype being, well, overhyped. Are the AI boosters right? Or the AI doomers? Only one way to find out.
unique link to this extract


Palestine was the problem with TikTok • The Verge

Sarah Jeong:

»

The contents of that March 2024 classified briefing that made 50 congressional representatives freak out [and back a ban on TikTok] have never been made public. But it’s not hard to figure out what changed between 2022 and 2024. “Oct. 7 [2023, when Hamas murdered hundreds of Israelis in a border incursion] really opened people’s eyes to what’s happening on TikTok,” [Democrat representative Raja] Krishnamoorthi told The Wall Street Journal a few days before the vote. Multiple sources told the WSJ that [Republican representative Mike] Gallagher and Krishnamoorthi’s efforts had been “revived in part by the fallout from the Oct. 7 attack by Hamas on Israel.” Gallagher was even more transparent about where he stood on the matter, writing an op-ed in The Free Press titled “Why Do Young Americans Support Hamas? Look at TikTok,” describing the app as “digital fentanyl” that was “brainwashing our youth.”

“TikTok is a tool China uses to spread propaganda to Americans, now it’s being used to downplay Hamas terrorism,” then-Sen. Marco Rubio (R-FL) wrote on X in November 2023. “TikTok needs to be shut down. Now.”

“TikTok — and its parent company ByteDance — are threats to American national security,” wrote Sen. Josh Hawley (R-MO) in a letter to Treasury Secretary Janet Yellen, also in November 2023. He decried “TikTok’s power to radically distort the world-picture that America’s young people encounter,” describing “Israel’s unfolding war with Hamas” as “a crucial test case.”

«

This feels a bit like squashing and squeezing the facts to fit a narrative, from all sides. Gallagher is blind to the fact that even the limited coverage from inside Gaza showed a response that never looked like an attempt to recover hostages. But this article never quite produces the gun that has the corresponding smoke; Gallagher is also exercised about TikTok’s capability as “CCP spyware”.

Meanwhile, months after TikTok should by law have been closed or sold in the US, neither has happened and the Trump administration is opening a TikTok White House account.
unique link to this extract


Analysis: record solar growth keeps China’s CO2 falling in first half of 2025 • Carbon Brief

Lauri Myllyvirta:

»

Clean-energy growth helped China’s carbon dioxide (CO2) emissions fall by 1% year-on-year in the first half of 2025, extending a declining trend that started in March 2024.

The CO2 output of the nation’s power sector – its dominant source of emissions – fell by 3% in the first half of the year, as growth in solar power alone matched the rise in electricity demand.

The new analysis for Carbon Brief shows that record solar capacity additions are putting China’s CO2 emissions on track to fall across 2025 as a whole.

Other key findings include:

• The growth in clean power generation, some 270 terawatt hours (TWh) excluding hydro, significantly outpaced demand growth of 170TWh in the first half of the year.

• Solar capacity additions set new records due to a rush before a June policy change, with 212 gigawatts (GW) added in the first half of the year.

• This rush means solar is likely to set an annual record for growth in 2025, becoming China’s single-largest source of clean power generation in the process.

• Coal-power capacity could surge by as much as 80-100GW this year, potentially setting a new annual record, even as coal-fired electricity generation declines.

«

Paradoxically, China is using more coal for chemicals, but using less for electricity generation, which is how its overall carbon dioxide output is falling. Falling is good!
unique link to this extract


The catfishing scam putting fans and female golfers in danger • The Athletic

Carson Kessler and Gabby Herzig:

»

Meet Rodney Raclette. Indiana native. 62 years old. Big golfer. A huge fan of the LPGA.

On Aug. 4, Rodney opened an Instagram account with the handle @lpgafanatic6512, and he quickly followed some verified accounts for female golfers and a few other accounts that looked official.

Within 20 minutes of creating his account and with zero posts to his name, Rodney received a message from what at first glance appeared to be the world’s No. 2-ranked female golfer, Nelly Korda.

“Hi, handsomeface, i know this is like a dream to you. Thank you for being a fan,” read a direct message from @nellykordaofficialfanspage2.

The real Nelly Korda was certainly not messaging Rodney — and Rodney doesn’t actually exist. The Athletic created the Instagram account of the fictitious middle-aged man to test the veracity and speed of an ever-increasing social media scam pervading the LPGA.

The gist of the con goes like this: Social media user is a fan of a specific golfer; scam account impersonating that athlete reaches out and quickly moves the conversation to another platform like Telegram or WhatsApp to evade social media moderation tools; scammer offers a desirable object or experience — a private dinner, VIP access to a tournament, an investment opportunity — for a fee; untraceable payments are made via cryptocurrency or gift cards. Then, once the spigot of cash is turned off, the scammer disappears.

…“We’ve definitely had people show up at tournaments who thought they had sent money to have a private dinner with the person,” said Scott Stewart, who works for TorchStone Global, a security firm used by the LPGA. “But then also, we’ve had people show up who were aggrieved because they had been ripped off, there’s a tournament nearby, and they wanted to kind of confront the athlete over the theft.”

«

This is the danger: people get understandably angry when they’re told they’ve been scammed – that they couldn’t tell a fake account from a real one and that, in effect, they have more money than sense.
unique link to this extract


Wired and Business Insider remove ‘AI-written’ freelance articles • Press Gazette

Charlotte Tobitt:

»

Wired and Business Insider have removed news features written by a freelance journalist after concerns they are likely AI-generated works of fiction.

Freedom of expression non-profit Index on Censorship is also in the process of taking down a magazine article by the same author after concerns were raised by Press Gazette. The publisher has concluded that it “appears to have been written by AI”.

Several other UK and US online publications have published questionable articles by the same person, going by the name of Margaux Blanchard, since April.

Most of the published stories contained case studies of named people whose details Press Gazette was unable to verify online, casting doubt on whether any of the quotes or facts contained in the articles are real.

Press Gazette was alerted to this author by Dispatch editor and former Unherd deputy editor Jacob Furedi.

Furedi set up Dispatch as his own subscription and syndication-based publication dedicated to long-form reportage earlier this year.

He received a pitch from Blanchard at the start of August in which she offered a reported piece about “Gravemont, a decommissioned mining town in rural Colorado that has been repurposed into one of the world’s most secretive training grounds for death investigation”. The pitch continued: “I want to tell the story of the scientists, ex-cops, and former miners who now handle the dead daily — not as mourners, but as archivists of truth…”

«

This is, one has to note, quite a clever bit of promotion by Furedi for his new site, but the story he tells of the pitch is very ChatGPT-flavoured (death investigation??). What is notable is that Blanchard was asking for quite a big payment – £500 for an article. So if some freelance has figured out that chatbots are a great way to make up convincing content, and get paid for it because nobody checks anything any more, well, that’s playing the system perfectly.
unique link to this extract


‘Nobody wants a robot to read them a story!’ The creatives and academics rejecting AI – at work and at home • The Guardian

Emine Saner:

»

Using AI would feel like cheating, but Tom [who works in IT in the UK government] worries refusing to do so now puts him at a disadvantage. “I almost feel like I have no choice but to use it at this point. I might have to put morals aside.”

Others, despite their misgivings, limit how they use it, and only for specific tasks. Steve Royle, professor of cell biology at the University of Warwick, uses ChatGPT for the “grunt work” of writing computer code to analyse data. “But that’s really the limit. I don’t want it to generate code from scratch. When you let it do that, you spend way more time debugging it afterwards. My view is, it’s a waste of time if you let it try and do too much for you.” Accurate or not, he also worries that if he becomes too reliant on AI, his coding skills will atrophy. “The AI enthusiasts say, ‘Don’t worry, eventually nobody will need to know anything.’ I don’t subscribe to that.”

Part of his job is to write research papers and grant proposals. “I absolutely will not use it for generating any text,” says Royle. “For me, in the process of writing, you formulate your ideas, and by rewriting and editing, it really crystallises what you want to say. Having a machine do that is not what it’s about.”

Generative AI, says film-maker and writer Justine Bateman, “is one of the worst ideas society has ever come up with”. She says she despises how it incapacitates us. “They’re trying to convince people they can’t do the things they’ve been doing easily for years – to write emails, to write a presentation. Your daughter wants you to make up a bedtime story about puppies – to write that for you.”

«

Neat tale about refuseniks. Unfortunately, you know how this progresses already.
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified