Start Up No.2356: UK Online Safety Act begins to bite, Amazon accused of warehouse safety risks, insistent chatbots, and more


Close-up imagery of failed microchips reveals a world of strange shapes and details. CC-licensed photo by ZEISS Microscopy on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


There’s another post coming this week at the Social Warming Substack on Friday at 0845 UK time. Free signup.


A selection of 10 links for you. Fractal. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. Observations and links welcome.


Social media given ‘last chance’ to tackle illegal posts • BBC News

Liv McMahon:

»

Online platforms must begin assessing whether their services expose users to illegal material by 16 March 2025 or face financial punishments as the Online Safety Act (OSA) begins taking effect.

Ofcom, the regulator enforcing the UK’s internet safety law, published its final codes of practice for how firms should deal with illegal online content on Monday.

Platforms have three months to carry out risk assessments identifying potential harms on their services or they could be fined up to 10% of their global turnover.

Ofcom head Dame Melanie Dawes told BBC News this was the “last chance” for industry to make changes.

“If they don’t start to seriously change the way they operate their services, then I think those demands for things like bans for children on social media are going to get more and more vigorous,” she said. “I’m asking the industry now to get moving, and if they don’t they will be hearing from us with enforcement action from March.”

Under Ofcom’s codes, platforms will need to identify if, where and how illegal content might appear on their services and the ways they will stop it reaching users.

According to the OSA, this includes content relating to child sexual abuse material (CSAM), controlling or coercive behaviour, extreme sexual violence, promoting or facilitating suicide and self-harm.

But critics say the Act fails to tackle a wide range of harms for children. The Molly Rose Foundation – set up in memory of teenager Molly Russell, who took her own life in 2017 after being exposed to self-harm images on social media – said the OSA has “deep structural issues”.

Andy Burrows, its chief executive, said the organisation was “astonished and disappointed” by a lack of specific, targeted measures for platforms on dealing with suicide and self-harm material in Ofcom’s guidance.

“Robust regulation remains the best way to tackle illegal content, but it simply isn’t acceptable for the regulator to take a gradualist approach to immediate threats to life,” he said.

«

unique link to this extract


LFGSS and Microcosm shutting down 16th March 2025 (the day before the Online Safety Act is enforced) • LFGSS

“Velocio” runs LFGSS: “London Fixed Gear and Single-Speed is a community of predominantly fixed gear and single-speed cyclists in and around London, UK”:

»

Reading the new Ofcom safety regulations and we’re done… we fall firmly into scope, and I have no way to dodge it. The act is too broad, and it doesn’t matter that there’s never been an instance of any of the proclaimed things that this act protects adults, children and vulnerable people from… the very broad language and the fact that I’m based in the UK means we’re covered.

The act simply does not care that this site and platform is run by an individual, and that I do so philanthropically without any profit motive (typically losing money), nor that the site exists to reduce social loneliness, reduce suicide rates, help build meaningful communities that enrich life.

The act only cares that it is “linked to the UK” (by me being involved as a UK native and resident, by you being a UK based user), and that users can talk to other users… that’s it, that’s the scope.

I can’t afford what is likely tens of thousands [of pounds] to go through all the legal hoops here over a prolonged period of time, the site itself barely gets a few hundred in donations each month and costs a little more to run… this is not a venture that can afford compliance costs… and if we did, what remains is a disproportionately high personal liability for me, and one that could easily be weaponised by disgruntled people who are banned for their egregious behaviour (in the years running fora I’ve been signed up to porn sites, stalked IRL and online, subject to death threats, had fake copyright takedown notices, an attempt to delete the domain name with ICANN… all from those whom I’ve moderated to protect community members)… I do not see an alternative to shuttering it.

«

Seems odd that a standard forum site should think itself at risk from the new rules. For the most part it’s a question of moderation: if there aren’t enough volunteers there could be a problem, but on a site like this that seems unlikely.
unique link to this extract


Amazon facing strike threats as Senate report details hidden widespread injuries • Ars Technica

Ashley Belanger:

»

Just as Amazon warehouse workers are threatening to launch the “first large-scale” unfair labor practices strike at Amazon in US history, Sen. Bernie Sanders (I-Vt.) released a report accusing Amazon of operating “uniquely dangerous warehouses” that allegedly put profits over worker safety.

As chair of the Senate Committee on Health, Education, Labor, and Pensions, Sanders started investigating Amazon in June 2023. His goal was “to uncover why Amazon’s injury rates far exceed those of its competitors and to understand what happens to Amazon workers when they are injured on the job.”

According to Sanders, Amazon “sometimes ignored” the committee’s requests and ultimately supplied only 285 of the documents requested. The e-commerce giant was mostly only willing to hand over “training materials given to on-site first aid staff,” Sanders noted, rather than “information on how it tracks workers, the quotas it imposes on workers, and the disciplinary actions it takes when workers cannot meet those quotas, internal studies on the connection between speed and injury rates, and the company’s treatment of injured workers.”

To fill in the gaps, Sanders’ team “conducted an exhaustive inquiry,” interviewing nearly 500 workers who provided “more than 1,400 documents, photographs, and videos to support their stories.” And while Amazon’s responses were “extremely limited,” Sanders said that the Committee was also able to uncover internal studies that repeatedly show that “Amazon chose not to act” to address safety risks, allegedly “accepting injuries to its workers as the cost of doing business.”

«

The report is worth browsing – even if just the executive summary – for its list of legislation that Sanders says needs to be passed, including the “No Robot Bosses Act” and the “Stop Spying Bosses Act”.
unique link to this extract


Sorry human, you’re wrong • Engineering Prompts

Marcel Salathé paid for access to GPT-4o:

»

Yesterday, I decided to try a quick experiment for fun. As an amateur piano player and a big Chopin fan, I took a picture of a page from the score open on my piano – Chopin’s Nocturne Op. 27 No. 2 – and asked GPT-4o to identify it. To my surprise, it couldn’t. Intrigued, I tested other models. While none of them succeeded, most at least recognized it as Chopin.

Here’s where it gets interesting: the larger models (GPT-4o, GPT o1, GPT o1 Pro, Claude Opus, and Gemini 1.5 Pro) were all quite confident in their wrong answers. Only Mistral and Claude Sonnet admitted their uncertainty. They suggested it might be Chopin but acknowledged they weren’t sure without more information. Kudos to them.

I was especially disappointed with GPT o1 Pro. After “thinking” for 2 minutes and 40 seconds (so much for being faster), it didn’t just fail – it was the only advanced model to misidentify the composer entirely. It confidently claimed the piece was Liszt’s “Un Sospiro” and gave elaborate reasons to back up its claim. Normally, I abandon experiments like this and move on, but in this case, I had a strange urge to tell o1 Pro it was wrong.

(By the way, I wish I could share the conversation directly. Unfortunately, sharing conversations involving images isn’t possible – a strange limitation given the price.)

I told it flat out that it was wrong and provided the correct answer. What happened next left me stunned. Normally, when you correct a model, it apologizes and acknowledges the mistake – unless your claim is completely outlandish. Even then, most models only push back in the gentlest way. But this time, after another minute and 18 seconds of “thinking”, o1 Pro doubled down: “I’m fairly certain that this page is not from Chopin’s Nocturne in D-flat major, Op. 27 No. 2.” (emphasis added).

It provided an elaborate explanation of why I was wrong and insisted that the piece was, in fact, Liszt’s “Un Sospiro”. The message was clear: sorry human, but you’re wrong.

«

It gets weirder, trust me.
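
If you want to poke at this yourself, reproducing the setup takes only a few lines: photograph the score, base64-encode it, and ask a vision-capable model what it is. A minimal sketch using OpenAI’s Python client – the model name, filename and prompt are my illustrative choices, not Salathé’s exact setup:

```python
# Minimal sketch: ask a vision-capable model to identify a photographed score.
# Assumes the openai package (v1+) and OPENAI_API_KEY set in the environment.
import base64
from openai import OpenAI

client = OpenAI()

with open("score_page.jpg", "rb") as f:  # hypothetical filename
    b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Which piece is this page of sheet music from? "
                     "If you are unsure, say so."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```

Swapping the model string (or the client) would let you run the same comparison across models that Salathé did.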
unique link to this extract


Chinese hacker singlehandedly responsible for exploiting 81,000 Sophos firewalls, DOJ says • Cybernews

Stefanie Schappert:

»

A Chinese hacker indicted on Tuesday and the PRC-based cybersecurity company he worked for have both been sanctioned by the US government for compromising “tens of thousands of firewalls” – some protecting US critical infrastructure, putting human lives at risk.

In a series of coordinated actions, the US Treasury Department’s Office of Foreign Assets Control (OFAC), the Department of Justice (DoJ), and the FBI said the massive cyber espionage campaign, which compromised at least 36 firewalls protecting US critical infrastructure, posed significant risks to national security.

A federal court in Indiana on Tuesday unsealed an indictment charging 30-year-old Guan Tianfeng (Guan) with conspiracy to commit computer and wire fraud by hacking into firewall devices worldwide, including one “used by an agency of the United States.”

Guan, employed by the Chinese cybersecurity firm Sichuan Silence – a known contractor for Beijing intelligence – was alleged to have discovered a zero-day vulnerability in firewall products manufactured by UK cybersecurity firm Sophos.

DoJ officials said between April 22nd and April 25th, 2020, Guan and his co-conspirators infected approximately 81,000 vulnerable devices, including 36 firewalls protecting US critical infrastructure.

The malware deployed by the attackers was designed to steal sensitive user information, but once devices were compromised, Guan escalated the attacks.

Using the Ragnarok ransomware variant, the hackers would further disable their victims’ anti-virus software, encrypt their systems, and demand payment if victims attempted to remediate the breach.

«

Sneaky. And determined.
unique link to this extract


Yearlong supply-chain attack targeting security pros steals 390,000 credentials • Ars Technica

Dan Goodin:

»

A sophisticated and ongoing supply-chain attack operating for the past year has been stealing sensitive login credentials from both malicious and benevolent security personnel by infecting them with Trojanized versions of open source software from GitHub and NPM, researchers said.

The campaign, first reported three weeks ago by security firm Checkmarx and again on Friday by Datadog Security Labs, uses multiple avenues to infect the devices of researchers in security and other technical fields. One is through packages that have been available on open source repositories for over a year. They install a professionally developed backdoor that takes pains to conceal its presence. The unknown threat actors behind the campaign have also employed spear phishing that targets thousands of researchers who publish papers on the arXiv platform.

The objectives of the threat actors are also multifaceted. One is the collection of SSH private keys, Amazon Web Services access keys, command histories, and other sensitive information from infected devices every 12 hours. When this post went live, dozens of machines remained infected, and an online account on Dropbox contained some 390,000 credentials for WordPress websites taken by the attackers, most likely by stealing them from fellow malicious threat actors. The malware used in the campaign also installs cryptomining software that was present on at least 68 machines as of last month.

It’s unclear who the threat actors are or what their motives may be. Datadog researchers have designated the group MUT-1244, with MUT short for “mysterious unattributed threat.”

«

I’m going to go out on a limb and guess at China? There have been some very determined long-term attacks, such as (I think) the one on PyPI discovered earlier this year.
unique link to this extract


AI thriller spec script snapped up in $3.25m sale to Fifth Season, Makeready • Hollywood Reporter

Borys Kit:

»

An unknown writer, a fast-rising feeding frenzy, and a true multimillion-dollar deal. It’s enough to make executives or aspiring screenplay authors dream of the heady spec script deals of the 1990s.

In a deal that shakes up a sleepy Hollywood before the holidays, Fifth Season and Brad Weston’s Makeready banner have preemptively picked up Alignment, a spec script by Natan Dotan, a man who until a week ago had no representation.

The deal could become one of the biggest spec deals of the year — Nyad writer Julia Cox sold spec Love of Your Life, with Ryan Gosling producing, to Amazon in October for low seven figures — but this one involves the breaking of a writer with few Hollywood connections. It also involves a topic that is generating intense interest — and hand-wringing — in Hollywood, namely artificial intelligence.

…Alignment is described as having the urgency of thrillers such as Margin Call and Contagion and takes place in a 36-hour period. It tells of a board member at a booming AI company who wrestles with corporate politics and warped incentives as he tries to prevent his colleagues’ willful ignorance from causing a global catastrophe.

«

It’s the OpenAI board row from November 2023, but with “global catastrophe” as the ticking clock, rather than just the passage of the weekend and emails from tech publications. Looking forward to the CEO consulting a (consults script notes from executives) wall of blinking lights and an LED display that blinks red and green.
unique link to this extract


What does a human life cost – and is it ethical to price it? Jenny Kleeman asked a hitman, philanthropists and a life insurer • The Conversation

Hugh Breakey:

»

What is your life worth, in dollar terms? The answers may surprise you. The asking price for murder, for example, is disconcertingly low. The average price of hiring a hitman is A$30,000 [UK £15,000], estimates British journalist Jenny Kleeman in her intriguing and thought-provoking book, The Price of Life. But the cost to the public purse is very high.

Here are some more striking figures (all converted into Australian dollars). The average price of a ransom: $560,000. The payout to families if one of their loved ones dies in an act of terrorism (in Australia) $75,000. The average price of saving a life through strategic philanthropy: $6,000. And the price of buying a cadaver: $7,600.

Kleeman’s book investigates the many ways decision-makers find themselves putting a price on the priceless.

In her quest to discover how our modern world fixes a price to human life in a wide variety of contexts, she also investigates the costs and consequences of life insurance, the sale of body parts, and the inside details of government policy-making, compensation for murder and more.

…Kleeman’s book is not just concerned with how and why a human life is priced, but with who decides that price. Kleeman’s question encourages her to look in strange places and talk to interesting people, opening readers’ eyes to decisions and calculations often hidden – sometimes deliberately so – from public view.

«

The NHS uses a measure called QALYs, or quality-adjusted life years, in determining whether to go ahead with treatments for disease or disability. It’s very technical, but it’s another form of the measure discussed here.
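
The arithmetic is simple, even if the judgments feeding into it aren’t: extra years of life are multiplied by a quality-of-life weight between 0 and 1, and a treatment is assessed on its cost per QALY gained against a threshold (NICE has historically worked to roughly £20,000–£30,000 per QALY). A toy sketch with invented numbers:

```python
# Toy QALY comparison - illustrative figures only, not NHS data.
def qalys(years: float, quality_weight: float) -> float:
    """Quality-adjusted life years: years of life x quality weight (0-1)."""
    return years * quality_weight

# Hypothetical: a treatment gives 4 years at 0.7 quality of life,
# versus 2 years at 0.5 without it.
gained = qalys(4, 0.7) - qalys(2, 0.5)    # 2.8 - 1.0 = 1.8 QALYs gained
cost = 45_000                             # hypothetical treatment cost, £
print(f"£{cost / gained:,.0f} per QALY")  # £25,000 - inside NICE's usual range
```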
unique link to this extract


Google goes solar as grid can’t power its future datacenters • The Register

Brandon Vigliarolo:

»

Google believes the US electricity grid can’t deliver the energy needed to power datacenters that deliver AI services, so has formed an alliance to build industrial parks powered by clean energy, at which it will build “gigawatts of datacenter capacity” across the nation.

The search megalith announced its plan last Wednesday. Google president Ruth Porat wrote that the US is poised to enjoy strong economic growth thanks to AI, increased manufacturing activity, and the electrification of transport and other industries. But Porat thinks those opportunities could be missed due to the wonky electricity grid, which she wrote has “not kept pace with the country’s economic growth opportunity” and is sometimes “unable to accommodate load increases.”

Google’s response is a deal with solar energy firm Intersect Power, and financier TPG Rise Climate, to build industrial parks next to renewable energy generation facilities that Porat wrote will be “purpose-built and right-sized for the datacenter.” Google will build datacenters at those parks – meaning they have a long-term customer from day one – and believes it can build bit barns faster under this arrangement.

Intersect Power agrees with that analysis, describing the deal as a “‘power-first’ approach to datacenter development.”

The generation plants Intersect Power builds will also be connected to the grid, and provide power to other tenants of the industrial parks.

Intersect Power’s portfolio consists of 2.2GW of solar PV and accompanying battery storage, in operation or under construction.

«

Nuclear, solar, just throwing energy at the problem. But what exactly is the problem?
unique link to this extract


The art of failure analysis 2024 • IEEE Spectrum

Kohava Mendelsohn:

»

When your car breaks down, you take it to the mechanic. When a computer chip fails, engineers go to the failure-analysis team. It’s their job to diagnose what went wrong and work to make sure it doesn’t in the future.

The International Symposium on the Physical and Failure Analysis of Integrated Circuits (IPFA) is a yearly conference in Asia attended by failure-analysis engineers. The gathering is mostly technical, but there’s also a fun part: The Art of Failure Analysis contest.

“It’s all about creativity and strong imagination,” says Willie Yeoh, chair of the Art of Failure Analysis contest this year. Anyone in the failure-analysis community can submit an image taken during their everyday work that includes something surprising or unexpected, like a melted bit of silicon that looks like a dinosaur. Ten photos are chosen by the conference committee as the most interesting, and then conference attendees vote on their favorite among those.

We’ve gathered a collection of photos from the 2022 and 2024 Art of Failure Analysis contests (it did not run in 2023). Which one would you vote for?

«

These photos are very striking – though it would be nice to know the scale, too.
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified
