Start Up No.2499: Meta chatbot deludes ill man, Trump strips satellite data, let’s vibe code!, the TikTok question, “AI journalism”, and more


A social media scam is making golf fans think women professionals are getting in touch to offer private dinners. You guessed – they aren’t. CC-licensed photo by Justin Falconer on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


A selection of 9 links for you. Fore(head). I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


A flirty Meta AI bot invited a retiree to meet. He never made it home • Reuters

Jeff Horwitz:

»

When Thongbue Wongbandue began packing to visit a friend in New York City one morning in March, his wife Linda became alarmed.

“But you don’t know anyone in the city anymore,” she told him. Bue, as his friends called him, hadn’t lived in the city in decades. And at 76, his family says, he was in a diminished state: He’d suffered a stroke nearly a decade ago and had recently gotten lost walking in his neighborhood in Piscataway, New Jersey.

Bue brushed off his wife’s questions about who he was visiting. “My thought was that he was being scammed to go into the city and be robbed,” Linda said.

She had been right to worry: Her husband never returned home alive. But Bue wasn’t the victim of a robber. He had been lured to a rendezvous with a young, beautiful woman he had met online. Or so he thought.

In fact, the woman wasn’t real. She was a generative artificial intelligence chatbot named “Big sis Billie,” a variant of an earlier AI persona created by the giant social-media company Meta Platforms in collaboration with celebrity influencer Kendall Jenner. During a series of romantic chats on Facebook Messenger, the virtual woman had repeatedly reassured Bue she was real and had invited him to her apartment, even providing an address.

“Should I open the door in a hug or a kiss, Bu?!” she asked, the chat transcript shows.

Rushing in the dark with a roller-bag suitcase to catch a train to meet her, Bue fell near a parking lot on a Rutgers University campus in New Brunswick, New Jersey, injuring his head and neck. After three days on life support and surrounded by his family, he was pronounced dead on March 28.

Meta declined to comment on Bue’s death or address questions about why it allows chatbots to tell users they are real people or initiate romantic conversations. The company did, however, say that Big sis Billie “is not Kendall Jenner and does not purport to be Kendall Jenner.”

«

Are chatbots uncovering the extent to which people will believe anything, or exacerbating the problem? This is going to be the big social question for the next few years.
unique link to this extract


Trump admin strips ocean and air pollution monitoring from next-gen weather satellites • CNN

Andrew Freedman:

»

The National Oceanic and Atmospheric Administration is narrowing the capabilities and reducing the number of next-generation weather and climate satellites it plans to build and launch in the coming decades, two people familiar with the plans told CNN.

This move — which comes as hurricane season ramps up with Erin lashing the East Coast — fits a pattern in which the Trump administration is seeking to not only slash climate pollution rules, but also reduce the information collected about the pollution in the first place. Critics of the plan also say it’s a short-sighted attempt to save money at the expense of understanding the oceans and atmosphere better.

Two planned instruments, one that would measure air quality, including pollution and wildfire smoke, and another that would observe ocean conditions in unprecedented detail, are no longer part of the project, the sources said.

“This administration has taken a very narrow view of weather,” one NOAA official told CNN, noting the jettisoned satellite instruments could have led to better enforcement and regulations on air pollution by more precisely measuring it.

The cost of the four satellites, known as the Geostationary Extended Observations, nicknamed GeoXO, would be lower than originally spelled out under the Biden administration, at a maximum of $500m per year for a total of $12bn, but some scientists say the cheaper up-front price would come at a cost to those who would have benefited from the air and oceans data.

«

It’s going to take so long to fill the gaps that are being created by this administration.
unique link to this extract


Why did a $10bn startup let me vibe-code for them—and why did I love it? • WIRED

Lauren Goode:

»

Since 2022, the Notion app has had an AI assistant to help users draft their notes. Now the company is refashioning this as an “agent,” a type of AI that will work autonomously in the background on your behalf while you tackle other tasks. To pull this off, human engineers need to write lots of code.

They open up Cursor and select which of several AI models they’d like to tap into. Most engineers I chatted with during my visit preferred Claude, or they used the Claude Code app directly. After choosing their fighter, the engineers ask their AI to draft code to build a new thing or fix a feature. The human programmer then debugs and tests the output as needed—though the AIs help with this too—before moving the code to production.

At its foundational core, generative AI is enormously expensive. The theoretical savings come in the currency of time, which is to say, if AI helped Notion’s cofounder and CEO Ivan Zhao finish his tasks earlier than expected, he could mosey down to the jazz club on the ground floor of his Market Street office building and bliss out for a while. Ivan likes jazz music. In reality, he fills the time by working more. The fantasy of the four-day workweek will remain just that.

My workweek at Notion was just two days, the ultimate code sprint. (In exchange for full access to their lair, I agreed to identify rank-and-file engineers by first name only.) My first assignment was to fix the way a chart called a mermaid diagram appears in the Notion app. Two engineers, Quinn and Modi, told me that these diagrams exist as SVG files in Notion and, despite being called scalable vector graphics, can’t be scaled up or zoomed into like a JPEG file. As a result, the text within mermaid diagrams on Notion is often unreadable.

Quinn slid his laptop toward me. He had the Cursor app open and at the ready, running Claude. For funsies, he scrolled through part of Notion’s code base. “So, the Notion code base? Has a lot of files. You probably, even as an engineer, wouldn’t even know where to go,” he said, politely referring to me as an engineer. “But we’re going to ignore all that. We’re just going to ask the AI on the sidebar to do that.”

«

Yes, why would a startup hoping to get favourable coverage let a journalist mess around with its codebase in a way that it could revert as soon as she’s gone? Complete mystery.
unique link to this extract


The AI hype is fading fast • Los Angeles Times

Michael Hiltzik:

»

“What I had not realized,” [Joseph] Weizenbaum wrote in 1976, “is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.” Weizenbaum warned that the “reckless anthropomorphization of the computer” — that is, treating it as some sort of thinking companion — produced a “simpleminded view of intelligence.”

That tendency has been exploited by today’s AI promoters. They label the frequent mistakes and fabrications produced by AI bots as “hallucinations,” which suggests that the bots have perceptions that may have gone slightly awry. But the bots “don’t have perceptions,” Bender and Hanna write, “and suggesting that they do is yet more unhelpful anthropomorphization.”

The general public may finally be cottoning on to the failed promise of AI more generally. Predictions that AI will lead to large-scale job losses in creative and STEM fields (science, technology, engineering and math) might inspire feelings that the whole enterprise was a tech-industry scam from the outset.

Predictions that AI would yield a burst of increased worker productivity haven’t been fulfilled; in many fields, productivity declines, in part because workers have to be deployed to double-check AI outputs, lest their mistakes or fabrications find their way into mission-critical applications — legal briefs incorporating nonexistent precedents, medical prescriptions with life-threatening ramifications and so on.

Some economists are dashing cold water on predictions of economic gains more generally. MIT economist Daron Acemoglu, for example, forecast last year that AI would produce an increase of only about 0.5% in U.S. productivity and an increase of about 1% in gross domestic product over the next 10 years, mere fractions of the AI camp’s projections.

«

Hiltzik makes a long argument about all the AI hype being, well, overhyped. Are the AI boosters right? Or the AI doomers? Only one way to find out.
unique link to this extract


Palestine was the problem with TikTok • The Verge

Sarah Jeong:

»

The contents of that March 2024 classified briefing that made 50 congressional representatives freak out [and back a ban on TikTok] have never been made public. But it’s not hard to figure out what changed between 2022 and 2024. “Oct. 7 [2023, when Hamas murdered hundreds of Israelis in a border incursion] really opened people’s eyes to what’s happening on TikTok,” [Democrat representative Raja] Krishnamoorthi told The Wall Street Journal a few days before the vote. Multiple sources told the WSJ that [Republican representative Mike] Gallagher and Krishnamoorthi’s efforts had been “revived in part by the fallout from the Oct. 7 attack by Hamas on Israel.” Gallagher was even more transparent about where he stood on the matter, writing an op-ed in The Free Press titled “Why Do Young Americans Support Hamas? Look at TikTok,” describing the app as “digital fentanyl” that was “brainwashing our youth.”

“TikTok is a tool China uses to spread propaganda to Americans, now it’s being used to downplay Hamas terrorism,” then-Sen. Marco Rubio (R-FL) wrote on X in November 2023. “TikTok needs to be shut down. Now.”

“TikTok — and its parent company ByteDance — are threats to American national security,” wrote Sen. Josh Hawley (R-MO) in a letter to Treasury Secretary Janet Yellen, also in November 2023. He decried “TikTok’s power to radically distort the world-picture that America’s young people encounter,” describing “Israel’s unfolding war with Hamas” as “a crucial test case.”

«

This feels a bit like squashing and squeezing the facts to fit a narrative, from all sides. Gallagher is blind to the fact that even the limited coverage from inside Gaza showed a response that never looked like an attempt to recover hostages. But this article never quite produces the gun that has the corresponding smoke; Gallagher is also exercised about TikTok’s capability as “CCP spyware”.

Meanwhile, months after TikTok should by law have been closed or sold in the US, neither has happened and the Trump administration is opening a TikTok White House account.
unique link to this extract


Analysis: record solar growth keeps China’s CO2 falling in first half of 2025 • Carbon Brief

Lauri Myllyvirta:

»

Clean-energy growth helped China’s carbon dioxide (CO2) emissions fall by 1% year-on-year in the first half of 2025, extending a declining trend that started in March 2024.

The CO2 output of the nation’s power sector – its dominant source of emissions – fell by 3% in the first half of the year, as growth in solar power alone matched the rise in electricity demand.

The new analysis for Carbon Brief shows that record solar capacity additions are putting China’s CO2 emissions on track to fall across 2025 as a whole.

Other key findings include:

• The growth in clean power generation, some 270 terawatt hours (TWh) excluding hydro, significantly outpaced demand growth of 170TWh in the first half of the year.

• Solar capacity additions set new records due to a rush before a June policy change, with 212 gigawatts (GW) added in the first half of the year.

• This rush means solar is likely to set an annual record for growth in 2025, becoming China’s single-largest source of clean power generation in the process.

• Coal-power capacity could surge by as much as 80-100GW this year, potentially setting a new annual record, even as coal-fired electricity generation declines.

«

Paradoxically, China is using more coal for chemicals, but using less for electricity generation, which is how its overall carbon dioxide output is falling. Falling is good!
unique link to this extract


The catfishing scam putting fans and female golfers in danger • The Athletic

Carson Kessler and Gabby Herzig:

»

Meet Rodney Raclette. Indiana native. 62 years old. Big golfer. A huge fan of the LPGA.

On Aug. 4, Rodney opened an Instagram account with the handle @lpgafanatic6512, and he quickly followed some verified accounts for female golfers and a few other accounts that looked official.

Within 20 minutes of creating his account and with zero posts to his name, Rodney received a message from what at first glance appeared to be the world’s No. 2-ranked female golfer, Nelly Korda.

“Hi, handsomeface, i know this is like a dream to you. Thank you for being a fan,” read a direct message from @nellykordaofficialfanspage2.

The real Nelly Korda was certainly not messaging Rodney — and Rodney doesn’t actually exist. The Athletic created the Instagram account of the fictitious middle-aged man to test the veracity and speed of an ever-increasing social media scam pervading the LPGA.

The gist of the con goes like this: Social media user is a fan of a specific golfer; scam account impersonating that athlete reaches out and quickly moves the conversation to another platform like Telegram or WhatsApp to evade social media moderation tools; scammer offers a desirable object or experience — a private dinner, VIP access to a tournament, an investment opportunity — for a fee; untraceable payments are made via cryptocurrency or gift cards. Then, once the spigot of cash is turned off, the scammer disappears.

…“We’ve definitely had people show up at tournaments who thought they had sent money to have a private dinner with the person,” said Scott Stewart, who works for TorchStone Global, a security firm used by the LPGA. “But then also, we’ve had people show up who were aggrieved because they had been ripped off, there’s a tournament nearby, and they wanted to kind of confront the athlete over the theft.”

«

This is the danger: people get understandably angry when they’re told they’ve been scammed – that they couldn’t tell the difference between a fake account and a real one and, in effect, that they had more money than sense.
unique link to this extract


Wired and Business Insider remove ‘AI-written’ freelance articles • Press Gazette

Charlotte Tobitt:

»

Wired and Business Insider have removed news features written by a freelance journalist after concerns they are likely AI-generated works of fiction.

Freedom of expression non-profit Index on Censorship is also in the process of taking down a magazine article by the same author after concerns were raised by Press Gazette. The publisher has concluded that it “appears to have been written by AI”.

Several other UK and US online publications have published questionable articles by the same person, going by the name of Margaux Blanchard, since April.

Most of the published stories contained case studies of named people whose details Press Gazette was unable to verify online, casting doubt on whether any of the quotes or facts contained in the articles are real.

Press Gazette was alerted to this author by Dispatch editor and former Unherd deputy editor Jacob Furedi.

Furedi set up Dispatch as his own subscription and syndication-based publication dedicated to long-form reportage earlier this year.

He received a pitch from Blanchard at the start of August in which she offered a reported piece about “Gravemont, a decommissioned mining town in rural Colorado that has been repurposed into one of the world’s most secretive training grounds for death investigation”. The pitch continued: “I want to tell the story of the scientists, ex-cops, and former miners who now handle the dead daily — not as mourners, but as archivists of truth…”

«

This is, one has to note, quite a clever bit of promotion by Furedi for his new site, but the story he tells of the pitch is very ChatGPT-flavoured (death investigation??). What is notable is that Blanchard was asking for quite a big payment – £500 for an article. So if some freelancer has figured out that chatbots are a great way to make up convincing content, and to get paid for it because nobody checks anything any more, well, that’s playing the system perfectly.
unique link to this extract


‘Nobody wants a robot to read them a story!’ The creatives and academics rejecting AI – at work and at home • The Guardian

Emine Saner:

»

Using AI would feel like cheating, but Tom [who works in IT in the UK government] worries refusing to do so now puts him at a disadvantage. “I almost feel like I have no choice but to use it at this point. I might have to put morals aside.”

Others, despite their misgivings, limit how they use it, and only for specific tasks. Steve Royle, professor of cell biology at the University of Warwick, uses ChatGPT for the “grunt work” of writing computer code to analyse data. “But that’s really the limit. I don’t want it to generate code from scratch. When you let it do that, you spend way more time debugging it afterwards. My view is, it’s a waste of time if you let it try and do too much for you.” Accurate or not, he also worries that if he becomes too reliant on AI, his coding skills will atrophy. “The AI enthusiasts say, ‘Don’t worry, eventually nobody will need to know anything.’ I don’t subscribe to that.”

Part of his job is to write research papers and grant proposals. “I absolutely will not use it for generating any text,” says Royle. “For me, in the process of writing, you formulate your ideas, and by rewriting and editing, it really crystallises what you want to say. Having a machine do that is not what it’s about.”

Generative AI, says film-maker and writer Justine Bateman, “is one of the worst ideas society has ever come up with”. She says she despises how it incapacitates us. “They’re trying to convince people they can’t do the things they’ve been doing easily for years – to write emails, to write a presentation. Your daughter wants you to make up a bedtime story about puppies – to write that for you.”

«

Neat tale about refuseniks. Unfortunately, you know how this progresses already.
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified
