Start Up No.2319: planet barrels towards 2.9ºC of warming, how fraud caught Wiley out, Yugoslavia’s home computers, and more


Watermarking for AI content is the great promise of Google’s latest open source technology. CC-licensed photo by Early Novels Database on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


It’s Friday, so there’s another post due at the Social Warming Substack at about 0845 UK time.


A selection of 9 links for you. Not an AI. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. Observations and links welcome.


Google offers its AI watermarking tech as free open source toolkit • Ars Technica

Kyle Orland:

»

Back in May, Google augmented its Gemini AI model with SynthID, a toolkit that embeds AI-generated content with watermarks it says are “imperceptible to humans” but can be easily and reliably detected via an algorithm. Today, Google took that SynthID system open source, offering the same basic watermarking toolkit for free to developers and businesses.

The move gives the entire AI industry an easy, seemingly robust way to silently mark content as artificially generated, which could be useful for detecting deepfakes and other damaging AI content before it goes out in the wild. But there are still some important limitations that may prevent AI watermarking from becoming a de facto standard across the AI industry any time soon.

Google uses a version of SynthID to watermark audio, video, and images generated by its multimodal AI systems, with differing techniques that are explained briefly in this video. But in a new paper published in Nature, Google researchers go into detail on how the SynthID process embeds an unseen watermark in the text-based output of its Gemini model.

The core of the text watermarking process is a sampling algorithm inserted into an LLM’s usual token-generation loop (the loop picks the next word in a sequence based on the model’s complex set of weighted links to the words that came before it). Using a random seed generated from a key provided by Google, that sampling algorithm increases the correlational likelihood that certain tokens will be chosen in the generative process. A scoring function can then measure that average correlation across any text to determine the likelihood that the text was generated by the watermarked LLM (a threshold value can be used to give a binary yes/no answer).

«
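In other words, the watermark is just a key-seeded statistical bias in how tokens are sampled, which a detector later recovers by averaging a score over the text. Here is a toy sketch of that idea in Python; the key, the hash-based bias and all the names are my own illustration, not Google’s actual tournament-sampling implementation:

```python
# Toy illustration of key-seeded text watermarking, not Google's SynthID code.
import hashlib
import random

KEY = "secret-watermark-key"  # hypothetical key; the real one stays with the provider

def g_value(prev_token: str, candidate: str) -> float:
    """Deterministic pseudorandom score in [0, 1], derived from the key and local context."""
    digest = hashlib.sha256(f"{KEY}|{prev_token}|{candidate}".encode()).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF

def watermarked_pick(prev_token: str, candidates: dict[str, float]) -> str:
    """Choose the next token from {token: model probability}, nudged towards
    tokens whose key-dependent score is high (a stand-in for the real sampler)."""
    biased = {tok: p * (1.0 + g_value(prev_token, tok)) for tok, p in candidates.items()}
    tokens, weights = zip(*biased.items())
    return random.choices(tokens, weights=weights, k=1)[0]

def watermark_score(tokens: list[str]) -> float:
    """Average score across a text: watermarked output drifts above the ~0.5
    expected of ordinary text, so a threshold gives a binary yes/no answer."""
    scores = [g_value(a, b) for a, b in zip(tokens, tokens[1:])]
    return sum(scores) / len(scores) if scores else 0.5
```

Note that detection needs only the key and the text itself, not access to the model.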

unique link to this extract


UNEP: New climate pledges need ‘quantum leap’ in ambition to deliver Paris goals • Carbon Brief

Zeke Hausfather:

»

There is a “massive gap between rhetoric and reality” that must be closed by new climate pledges being drafted under the Paris Agreement, the UN Environment Programme (UNEP) says.

In the 15th edition of its annual “emissions gap” report, the UNEP calls for “no more hot air” as countries approach the February 2025 deadline to submit their next nationally determined contributions (NDCs) setting mitigation targets for 2035.

These NDCs “must deliver a quantum leap in ambition in tandem with accelerated mitigation action in this decade”, the report says. 

The report charts the “gap” between where emissions are headed under current policies and commitments over the coming decade, compared to what is needed to meet the Paris goal of limiting global warming to “well below” 2ºC and pursuing efforts to stay under 1.5ºC.

It highlights that greenhouse gas emissions reached record levels in 2023, up 1.3% from 2022, and rising notably faster than the average over the past decade. 

The report warns that both progress and ambition have “plateaued” in recent years, with relatively little of substance occurring since the pledges made at COP26 in 2021. And many countries are not even on track to meet their existing NDCs, with current policy projections from G20 nations exceeding NDC commitments by a collective 1bn tonnes of greenhouse gas emissions (in carbon dioxide equivalent, CO2e) in 2030.

Current policies put the world on track for 2.9ºC of warming by 2100, the report finds – though this could be reduced to 2.4-2.6ºC, if all existing NDCs are met.

«

We’ve been here so many times, and missed the target so many times. Unlike the ozone hole, climate change seems intractable because it’s in the hands of too many people who have a short-term interest in not taking notice of long-term effects.
unique link to this extract


Q+A: Can ‘carbon border adjustment mechanisms’ help tackle climate change? • Carbon Brief

Carbon Brief Staff:

»

The EU’s carbon border adjustment mechanism (CBAM) has been touted as a key policy for cutting emissions from heavy industries, such as steel and cement production.

By taxing carbon-intensive imports, the EU says it will help its domestic companies take ambitious climate action while still remaining competitive with firms in nations where environmental laws are less strict.

There is evidence that the CBAM is also driving other governments to launch tougher carbon-pricing policies of their own, to avoid paying border taxes to the EU.

It has also helped to shift climate and trade up the international climate agenda, potentially contributing to a broader increase in ambition.

However, at a time of growing protectionism and economic rivalry between major powers, the new levy has proved controversial.

Many developing countries have branded CBAMs as “unfair” policies that will leave them worse off financially, saying they will make it harder for them to decarbonise their economies.

Analysis also suggests that the EU’s CBAM, in isolation, will have a limited impact on global emissions. 

«

In isolation, perhaps. But as part of something concerted?
unique link to this extract


The Hindawi Files. Part 3: Wiley • James Claims

James Heathers:

»

Unlike many academic issues, where publishers will ignore manifest tomfoolery for months or years at a time — allowing whole journals to go tits-up, allowing peer review to get compromised, allowing mass fakery to infest their products, etc. — the 10-K form is a different ball of wax. The SEC is significantly more serious and powerful than an angry assistant professor sending impotent emails. Thus, financial disclosures are treated more seriously.

As a consequence, while companies can still play little games with various pieces of information on the 10-K form, the whole exercise is infused with a different level of heat and complexity. They also require auditing! Someone not too spiritually dissimilar to me has to analyze and approve them.

So: it was very interesting to me to read the Wiley 10-K forms for the entire period of this sorry saga, because at no point do they mention paper mills deliberately trying to defraud them and ruin their business model. Before, during, and after.

There is a section specifically for this: Part 1, Section 1A.

Wiley lists a lot of regular milquetoast shit…

…But at no point do they mention paper mills — an entire class of business setting out to catastrophically destroy trust in their brand. There are a whole slew of consequences which are all very real and material:

• loss of academic reputation, hence lower submissions
• delisting of journals, hence lower reputation
• cost of clean-up if fully breached
• etc.

It’s hard to determine if other publishers typically do, because a lot of them aren’t American companies. However, recently Springer Nature went through their long-awaited IPO (that is, they are a private company and decided to become a public company). Disclosure requirements during the IPO process are similar to the 10-K requirements — you have to list threats.

«

Heathers has written two previous pieces about the Hindawi fraud: in January 2021 the venerable science publisher John Wiley bought Hindawi, an open access publisher, for $298m, getting 200 journals. Which turned out to be utterly rotten. Science publishing has a problem, because Hindawi surely wasn’t alone.
unique link to this extract


How one engineer beat the ban on home computers in socialist Yugoslavia • The Guardian

Lewis Packwood:

»

Very few Yugoslavians had access to computers in the early 1980s: they were mostly the preserve of large institutions or companies. Importing home computers like the Commodore 64 was not only expensive, but also legally impossible, thanks to a law that restricted regular citizens from importing individual goods that were worth more than 50 Deutsche Marks (the Commodore 64 cost over 1,000 Deutsche Marks at launch). Even if someone in Yugoslavia could afford the latest home computers, they would have to resort to smuggling.

In 1983, engineer Vojislav “Voja” Antonić was becoming more and more frustrated with the senseless Yugoslavian import laws. “We had a public debate with politicians,” he says. “We tried to convince them that they should allow [more expensive items], because it’s progress.” The efforts of Antonić and others were fruitless, however, and the 50 Deutsche Mark limit remained. But perhaps there was a way around it.

Antonić was pondering this while on holiday with his wife in Risan in Montenegro in 1983. “I was thinking how would it be possible to make the simplest and cheapest possible computer,” says Antonić. “As a way to amuse myself in my free time. That’s it. Everyone thinks it is an interesting story, but really I was just bored!” He wondered whether it would be possible to make a computer without a graphics chip – or a “video controller” as they were commonly known at the time.

Typically, computers and consoles have a CPU – which forms the “brain” of the machine and performs all of the calculations – in addition to a video controller/graphics chip that generates the images you see on the screen. In the Atari 2600 console, for example, the CPU is the MOS Technology 6507 chip, while the video controller is the TIA (Television Interface Adaptor) chip.

Instead of having a separate graphics chip, Antonić thought he could use part of the CPU to generate a video signal, and then replicate some of the other video functions using software. It would mean sacrificing processing power, but in principle it was possible, and it would make the computer much cheaper.

“I was impatient to test it,” says Antonić. As soon as he returned from his holiday, he put together a prototype – and lo and behold, it really worked. Thinking outside the box had paid off.

«

Fabulous story, and a great read.
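For anyone wondering what “doing the video in software” means in practice: the CPU itself, on a strict timing budget, has to put out the sync pulse and then the pixel levels that make up every scanline of the TV picture. A tiny sketch of the idea, with made-up names and timings that bear no resemblance to Antonić’s real Z80 routine:

```python
# Purely illustrative: the CPU builds the waveform a TV expects for one
# scanline (a sync pulse, then one output level per pixel), all in software.

SYNC, BLACK, WHITE = 0, 1, 3          # notional levels on the video output pin
SYNC_LEN, PIXELS_PER_LINE = 6, 58     # invented timings, for illustration only

def scanline(row_bits: list[int]) -> list[int]:
    """One TV line: horizontal sync first, then a level for every pixel."""
    line = [SYNC] * SYNC_LEN
    line += [WHITE if bit else BLACK for bit in row_bits]
    return line

# alternating 4-pixel stripes, standing in for a row of real character data
row = [(i // 4) % 2 for i in range(PIXELS_PER_LINE)]
print("".join(" _.#"[level] for level in scanline(row)))
```

The cost is exactly the one described above: while the processor is busy painting the screen, it isn’t available for anything else.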
unique link to this extract


Data (Use and Access) Bill factsheet: making lives easier • GOV.UK

»

Will the government provide mandatory digital identity cards?

• No, there are no plans to introduce national digital ID cards
• Using a digital identity will be voluntary. People will be in control of their data and who it is shared with
• People will still be able to prove their identity using physical documents if they choose
• If people choose to use digital identity products or services, we’re making sure they know which ones meet the government’s high standards.

Who will use digital identities?

Digital identities will not be mandatory. We are making it clear which digital identity products and services are secure and reliable, so you can make more informed decisions about which ones to trust with your personal data.

«

To which Big Brother Watch says:

»

Commenting on the publication of the Government’s new Data (Use and Access) Bill, Susannah Copson, Legal and Policy Officer at Big Brother Watch said:

“The Government’s new Data Bill threatens to set the UK years behind our international partners when it comes to safeguarding against the threats of new and emerging technologies such as AI. Our data protection laws are amongst the few legal protections we have against these threats, yet this Bill waters them down by simultaneously eroding privacy protections and restricting people’s control over their own data. Meanwhile, advancing with a digital ID framework with serious implications for privacy that lacks a legal right to opt-out poses a serious threat to individual autonomy and consent.”

«

Which leaves me rather unsure that BBW has got the right end of the stick. Both documents have the same publication date.
unique link to this extract


Bluesky announces Series A to grow network of 13m+ users • Bluesky

“The Bluesky Team”:

»

Bluesky now exceeds 13 million users, the AT Protocol developer ecosystem continues to grow, and we’ve shipped highly requested features like direct messages and video. We’re excited to announce that we’ve raised a $15m Series A financing led by Blockchain Capital with participation from Alumni Ventures, True Ventures, SevenX, Amir Shevat of Darkmode, co-creator of Kubernetes Joe Beda, and others.

Our lead, Blockchain Capital, shares our philosophy that technology should serve the user, not the reverse — the technology being used should never come at the expense of the user experience.

…In addition, we will begin developing a subscription model for features like higher quality video uploads or profile customizations like colors and avatar frames. Bluesky will always be free to use — we believe that information and conversation should be easily accessible, not locked down. We won’t uprank accounts simply because they’re subscribing to a paid tier.

Additionally, we’re proud of our vibrant community of creators, including artists, writers, developers, and more, and we want to establish a voluntary monetization path for them as well. Part of our plan includes building payment services for people to support their favorite creators and projects. We’ll share more information as this develops.

«

“Series A” is usually ground floor funding. There are also third-party apps being built around it. If Bluesky can get enough momentum – a big if – then maybe it will become a serious alternative while what was Twitter turns into smoking ashes. (If they’re really serious, I’d suggest verified users as the obvious way to attract the group who will turn it into a “news happens here” app. Depends how much they think that matters.)
unique link to this extract


Google, Microsoft, and Perplexity are promoting scientific racism in search results • WIRED

David Gilbert:

»

AI-infused search engines from Google, Microsoft, and Perplexity have been surfacing deeply racist and widely debunked research promoting race science and the idea that white people are genetically superior to nonwhite people.

Patrik Hermansson, a researcher with UK-based anti-racism group Hope Not Hate, was in the middle of a months-long investigation into the resurgent race science movement when he needed to find out more information about a debunked dataset that claims IQ scores can be used to prove the superiority of the white race.

He was investigating the Human Diversity Foundation, a race science company funded by Andrew Conru, the US tech billionaire who founded Adult Friend Finder. The group, founded in 2022, was the successor to the Pioneer Fund, a group founded by US Nazi sympathizers in 1937 with the aim of promoting “race betterment” and “race realism.”

Hermansson logged in to Google and began looking up results for the IQs of different nations. When he typed in “Pakistan IQ,” rather than getting a typical list of links, Hermansson was presented with Google’s AI-powered Overviews tool, which, confusingly to him, was on by default. It gave him a definitive answer of 80.

When he typed in “Sierra Leone IQ,” Google’s AI tool was even more specific: 45.07. The result for “Kenya IQ” was equally exact: 75.2.

Hermansson immediately recognized the numbers being fed back to him. They were being taken directly from the very study he was trying to debunk, published by one of the leaders of the movement that he was working to expose.

«

Search has been a boon to the web, but its effect on the information ecosystem hasn’t been so great.
unique link to this extract


US power grid added battery equivalent of 20 nuclear reactors in past four years • The Guardian

Oliver Milman:

»

Faced with worsening climate-driven disasters and an electricity grid increasingly supplied by intermittent renewables, the US is rapidly installing huge batteries that are already starting to help prevent power blackouts.

From barely anything just a few years ago, the US is now adding utility-scale batteries at a dizzying pace, having installed more than 20 gigawatts of battery capacity to the electric grid, with 5GW of this occurring just in the first seven months of this year, according to the federal Energy Information Administration (EIA).

This means that battery storage equivalent to the output of 20 nuclear reactors has been bolted on to America’s electric grids in barely four years, with the EIA predicting this capacity could double again to 40GW by 2025 if further planned expansions occur.

California and Texas, which both saw all-time highs in battery-discharged grid power this month, are leading the way in this growth, with hulking batteries helping manage the large amount of clean yet intermittent solar and wind energy these states have added in recent years.

The explosion in battery deployment even helped keep the lights on in California this summer, when in previous years the state has seen electricity rationing or blackouts during intense heatwaves that see air conditioning use soar and power lines topple due to wildfires. “We can leverage that stored energy and dispatch it when we need it,” Patti Poppe, chief executive of PG&E, California’s largest utility, said last month.

«

Micro- and macro-generation (or -storage) really is the way to go. (The figure above assumes 1GW nuclear reactors, by the way; the Chernobyl No.4 reactor was a 3GW (thermal) system.)
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified
