Start Up No.2503: Anthropic’s Claude helps a hacker’s extortion, can Democratic influencers… influence?, VR retail woes, and more


Social media claims that cash withdrawals of over £200 will be monitored aren’t true. Does AI make people believe them? CC-licensed photo by Grey World on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


There’s another post coming this week at the Social Warming Substack on Friday at 0845 UK time. Free signup.


A selection of 9 links for you. Cashless. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


A hacker used AI to automate an ‘unprecedented’ cybercrime spree, Anthropic says • CNBC

Kevin Collier:

»

A hacker has exploited a leading artificial intelligence chatbot to conduct the most comprehensive and lucrative AI cybercriminal operation known to date, using it to do everything from find targets to write ransom notes.

In a report published Tuesday, Anthropic, the company behind the popular Claude chatbot, said that an unnamed hacker “used AI to what we believe is an unprecedented degree” to research, hack and extort at least 17 companies. [Using Anthropic’s chatbot Claude, which seems a relevant point for this story – Overspill Ed.]

Cyber extortion, where hackers steal information like sensitive user data or trade secrets, is a common criminal tactic. And AI has made some of that easier, with scammers using AI chatbots for help writing phishing emails. In recent months, hackers of all stripes have increasingly incorporated AI tools in their work.

But the case Anthropic found is the first publicly documented instance in which a hacker used a leading AI company’s chatbot to automate almost an entire cybercrime spree.

According to the blog post, one of Anthropic’s periodic reports on threats, the operation began with the hacker convincing Claude Code — Anthropic’s chatbot that specializes in “vibe coding,” or creating computer programs based on simple requests — to identify companies vulnerable to attack. Claude then created malicious software to actually steal sensitive information from the companies. Next, it organized the hacked files and analyzed them to help determine both what was sensitive and what could be used to extort the victim companies.

The chatbot then analyzed the companies’ hacked financial documents to help determine a realistic amount of bitcoin to demand in exchange for the hacker’s promise not to publish that material. It also wrote suggested extortion emails.

«

So helpful. Why, before we had chatbots, we had to draft our own phishing emails and find our own extortion targets.
unique link to this extract


A dark money group is secretly funding high-profile Democratic influencers • WIRED

Taylor Lorenz:

»

After the Democrats lost in November, they faced a reckoning. It was clear that the party had failed to successfully navigate the new media landscape. While Republicans spent decades building a powerful and robust independent media infrastructure, maximizing controversy to drive attention and maintaining tight relationships with creators despite their small disagreements with Trump, the Democrats have largely relied on outdated strategies and traditional media to get their message out.

Now, Democrats hope that the secretive Chorus Creator Incubator Program, funded by a powerful liberal dark money group called The Sixteen Thirty Fund, might tip the scales. The program kicked off last month, and creators involved were told by Chorus that over 90 influencers were set to take part. Creators told WIRED that the contract stipulated they’d be kicked out and essentially cut off financially if they even so much as acknowledged that they were part of the program. Some creators also raised concerns about a slew of restrictive clauses in the contract.

Influencers included in communication about the program, and in some cases an onboarding session for those receiving payments from The Sixteen Thirty Fund, include Olivia Julianna, the centrist Gen Z influencer who spoke at the 2024 Democratic National Convention; Loren Piretra, a former Playboy executive turned political influencer who hosts a podcast for Occupy Democrats; Barrett Adair, a content creator who runs an American Girl Doll–themed pro-DNC meme account; Suzanne Lambert, who has called herself a “Regina George liberal;” Arielle Fodor, an education creator with 1.4 million followers on TikTok; Sander Jennings, a former TLC reality star and older brother of trans influencer Jazz Jennings; David Pakman, who hosts an independent progressive show on YouTube covering news and politics; Leigh McGowan, who goes by the online moniker “Politics Girl”; and dozens of others.

…The goal of Chorus, according to a fundraising deck obtained by WIRED, is to “build new infrastructure to fund independent progressive voices online at scale.” The creators who joined the incubator are expected to attend regular advocacy trainings and daily messaging check-ins. Those messaging check-ins are led by Cohen on “rapid response days.” The creators also have to attend at least two Chorus “newsroom” events per month, which are events Chorus plans, often with lawmakers.

«

There’s a famous tweet which reads “I’m 50. All celebrity news looks like this: ‘CURTAINS FOR ZOOSHA? K-SMOG AND BATBOY CAUGHT FLIPPING A GRUNT'”. And that list of influencers sure makes me feel like that. But also: what the hell are they hoping to achieve? The Democrats’ problem isn’t that they don’t have enough influencers. It’s that their policies are incredibly unpopular with – or seem irrelevant to – large swathes of the American public.
unique link to this extract


Intel details everything that could go wrong with US taking a 10% stake • Ars Technica

Ashley Belanger:

»

In the long term, investors were told [in a new SEC filing from Intel] that the US stake may limit the company’s eligibility for future federal grants while leaving Intel shareholders dwelling in the uncertainty of knowing that terms of the deal could be voided or changed over time, as federal administration and congressional priorities shift.

Additionally, Intel forecasted potential legal challenges over the deal, which Intel anticipates could come from both third parties and the US government.

The final bullet point in Intel’s risk list could be the most ominous, though. Due to the unprecedented nature of the deal, Intel fears there’s no way to anticipate myriad other challenges the deal may trigger.

“It is difficult to foresee all the potential consequences,” Intel’s filing said. “Among other things, there could be adverse reactions, immediately or over time, from investors, employees, customers, suppliers, other business or commercial partners, foreign governments or competitors. There may also be litigation related to the transaction or otherwise and increased public or political scrutiny with respect to the Company.”

Meanwhile, it’s hard to see what Intel truly gains from the deal other than maybe getting Trump off its back for a bit. A Fitch Ratings research note reported that “the deal does not improve Intel’s BBB credit rating, which sits just above junk status” and “does not fundamentally improve customer demand for Intel chips” despite providing “more liquidity,” Reuters reported.

«

So basically, although it is a cash injection, there are a ton of downsides to Trump (it’s hardly the US) taking a stake. And no visible upsides.
unique link to this extract


The VR retail experience needs a hard reboot • UploadVR

Craig Storm:

»

The Quest 3 dummy unit is fastened precariously to the table, with a Quest 2 flopped forward on its face beside it.

I couldn’t see the newer Quest 3S, which has been out for almost a year, anywhere. Each headset was accompanied by a single sad controller strapped to the table next to it. I don’t know if they are meant to be displayed with only one controller, or if the second controller used to be there. Either way, it was obvious no one had given this display any care or attention in a very long time.

There were no accessories. No boxed units ready for someone to take home. Just desolation, neglect, and sadness. This was my recent experience at a Best Buy store, and it left me wondering: what exactly is the state of VR retail?

There’s no technology that needs to be experienced first-hand more than virtual reality. Trying to explain VR to someone who’s never put on a headset is like trying to describe the taste of an apple to someone who’s never eaten one. You can’t talk your way into understanding it. You have to try it. VR’s struggle to reach the mass market has always come down to that missing step. In the early years, a powerful gaming PC was required to even run VR hardware. The Oculus Go and Oculus Quest changed that by making standalone VR possible, finally putting it within reach of the average consumer. But there still isn’t a good way for most people to try the product before buying.

«

I used to be optimistic that VR could reach the mainstream once headsets became affordable. But the reality is that people aren’t interested enough in sealing themselves away. We like awareness of the world, even if our face is glued to a smartphone screen. And the content isn’t good enough, for the most part, creating a chicken/egg problem.
unique link to this extract


We must build AI for people; not to be a person • Mustafa Suleyman

Mustafa Suleyman was a co-founder of DeepMind, but now works at Microsoft:

»

I want to create AI that makes us more human, that deepens our trust and understanding of one another, and that strengthens our connections to the real world. Copilot creates millions of positive, even life-changing, interactions every single day. This involves a lot of careful design choices to ensure it truly delivers an incredible experience. We won’t always get it right, but this humanist frame provides us with a clear north star to keep working towards.

In this context, I’m growing more and more concerned about what is becoming known as the “psychosis risk”, and a bunch of related issues. I don’t think this will be limited to those who are already at risk of mental health issues. Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship. This development will be a dangerous turn in AI progress and deserves our immediate attention.

We must build AI for people; not to be a digital person. AI companions are a completely new category, and we urgently need to start talking about the guardrails we put in place to protect people and ensure this amazing technology can do its job of delivering immense value to the world. I’m fixated on building the most useful and supportive AI companion imaginable. But to succeed, I also need to talk about what we, and others, shouldn’t build.

«

Making sure that AI creations do not qualify for copyright seems like a good stake to put, cemented, in the ground for this.
unique link to this extract


Cash withdrawals over £200 will not be automatically reported to the Financial Intelligence Unit • Full Fact

»

We’ve recently spotted videos circulating on social media which claim that from 18 September, people who withdraw more than £200 in cash a week will have details of their transactions sent to the UK’s Financial Intelligence Unit.

The clips claim this new rule comes from “guidance from HMRC, the Treasury and the Financial Conduct Authority [FCA]”.

This is not a real policy set to be introduced by the government.

A spokesperson for the National Crime Agency (NCA), which oversees the Financial Intelligence Unit (FIU), told us this isn’t true, confirming that “the FIU does not receive automatic reports on anyone who removes £200 cash in a seven day period”.

A spokesperson for HMRC also told us: “These claims are completely false and designed to cause undue alarm and fear”, while the FCA said it “is not aware of or involved in this guidance”.

«

People’s brains really do seem to have turned to mush. Or maybe it’s that effect where if there’s enough completely made-up stuff circulating, then nobody knows what to believe, or how to discern.

In passing: in all the science fiction I’ve ever read, the computers were always accurate. (HAL 9000 doesn’t count; it thought it was saving the mission by preventing the humans from interfering.) When you look at the output of Grok and ChatGPT, you realise that SF writers didn’t account for human stupidity being the principal input to those creations.
unique link to this extract


Worldwide smartphone market forecast to grow 1% in 2025 • IDC

»

Worldwide smartphone shipments are forecast to grow 1.0% year-on-year (YoY) in 2025 to 1.24 billion units, according to the International Data Corporation (IDC) Worldwide Quarterly Mobile Phone Tracker. This represents an improvement from the previous forecast of 0.6%, driven by 3.9% growth in iOS this year. Despite challenges like soft demand and a tough economy, healthy replacement demand will help push growth into 2026, resulting in a compound annual growth rate (CAGR) of 1.5% from 2024 to 2029. The total addressable market (TAM) has increased slightly, as the current exemption by the U.S. government on smartphones shields the market from negative impact from additional tariffs.

«

Just looking in on the smartphone market in passing. The forecast is for about 1.24bn smartphones to be shipped this year, but growth over the next five years is anaemic – 1% or 2% a year. It’s a long way from the go-go years of the 2010s, with 20% growth. Now, like the PC, it’s just bumping along: the real era of innovation is past, and you can’t burn the bonfire twice.
unique link to this extract


Is the UK’s giant new nuclear power station ‘unbuildable’? • Financial Times

Malcolm Moore, Ian Johnston and Rachel Millard:

»

The design of the UK’s latest nuclear power station is “terrifying”, “phenomenally complex” and “almost unbuildable”, according to Henri Proglio, a former head of EDF, the French state-owned utility behind the project. 

One month after the final green light for Sizewell C, 1,700 workers are on site in Suffolk, on the UK’s east coast, preparing the sandy marshland for two enormous reactors that will eventually generate enough electricity for 6mn homes.

The plant will be a replica of the European Pressurised Reactor (EPR) design that is running four to six years late and 2.5 times over budget at Hinkley Point C in Somerset, and which has had problems wherever it has been built, in France, Finland and China.

But unlike at Hinkley, where EDF was responsible for spiralling costs and took a hit of nearly €13bn after running late and over budget, the UK government and bill payers are on the hook for Sizewell. The state will provide £36.5bn of debt to fund the estimated £38bn price tag and be responsible if costs go beyond £47bn.

…It includes unprecedented safety features: four independent cooling systems; twin containment shields capable of withstanding an internal blast or an aircraft strike; and a “core catcher” to trap molten fuel in the event of a meltdown. 

“It was well intentioned, but it ended up growing and growing and growing, and European regulatory standards reinforced it, and it ended up a monster,” said one senior nuclear executive, who asked not to be named. 

«

Planned output: 3.2GW. Expected cost per megawatt-hour: £286. Between twice and three times the cost of reactors built in China or South Korea. This is probably the last EPR that will ever be built – if it’s ever finished.

Onshore wind in the UK: 15.7GW. Offshore: 14.7GW.
unique link to this extract


Flock wants to partner with consumer dashcam company that takes ‘trillions of images’ a month • 404 Media

Joseph Cox:

»

Flock, the surveillance company with automatic license plate reader (ALPR) cameras in thousands of communities around the U.S., is looking to integrate with a company that makes AI-powered dashcams placed inside people’s personal cars, multiple sources told 404 Media. The move could significantly increase the amount of data available to Flock, and in turn its law enforcement customers. 404 Media previously reported local police perform immigration-related Flock lookups for ICE, and on Monday that Customs and Border Protection had direct access to Flock’s systems. In essence, a partnership between Flock and a dashcam company could turn private vehicles into always-on, roaming surveillance tools.

Nexar, the dashcam company, already publicly publishes a live interactive map of photos taken from its dashcams around the U.S., in what the company describes as “crowdsourced vision,” showing the company is willing to leverage data beyond individual customers using the cameras to protect themselves in the event of an accident. 

“Dash cams have evolved from a device for die-hard enthusiasts or large fleets, to a mainstream product. They are cameras on wheels and are at the crux of novel vision applications using edge AI,” Nexar’s website says. The website adds Nexar customers drive 150 million miles a month, generating “trillions of images.”

The news comes during a period of expansion for Flock. Earlier this month the company announced it would add AI to its products to let customers use natural language to surface data while investigating crimes.

«

We live in the panopticon; it’s just sewing up the edges at the moment.
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified
