Start Up No.2078: Wordle’s army of cheats, new Neuralink claims, Amazon offers home chatbot, Sunak negative on net zero, and more


A lawsuit has been filed by the author of Game Of Thrones – and others – against OpenAI, claiming copyright infringement. CC-licensed photo by vagueonthehow on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


There’s another post coming this week at the Social Warming Substack on Friday at about 0845 UK time. Free signup.


A selection of 9 links for you. Didn’t humans write the final season? I’m @charlesarthur on Twitter. On Mastodon: https://newsie.social/@charlesarthur. Observations and links welcome.


Data analysis reveals surprisingly high number of Wordle cheaters • Discover Magazine

»

The Times has since introduced some interesting analytics to help users understand the game, explore tactics and to see how they fare compared to other players and against the newspaper’s in-house Wordle computer, called Wordlebot.

Now James Dilger, from Stony Brook University in New York, says that this analytics page reveals far more data than is actually displayed. His analysis of this data over several months reveals a range of insights into the game, including the inescapable conclusion that up to 10,000 players cheat outrageously. “It happens consistently every day!” says Dilger, in his light-hearted paper.

His conclusions come via a fortuitous discovery. Every day, Wordlebot displays the dozen or so most popular words that players use for their first guess, plus some selected other words, such as the ranking of an individual’s first guess.

Dilger imagined that analyzing this data over time might reveal some interesting insights, so he copied and pasted it into an Excel spreadsheet. To his surprise, he ended up with the data for the top fifty most popular word guesses, most of which are never displayed on the webpage.

He collected this data between 3 May and 31 August 2023 and then analysed the trends that emerged. The results clearly show that many players cheat. The game has an internal vocabulary of 2315 words (five years’ worth) from which the correct answer is chosen. The chances that one of these is a first guess are 1/2315 or 0.043% at best. The actual probability is smaller because most users will not know the precise contents of this list.

And yet Dilger’s data shows that the percentage of players who guess correctly on their first try never drops below 0.2%, equivalent to 4,000 players. “Some days it’s as high as 0.5% (10,000 players),” he complains. Dilger is strident in his conclusion. “What shall we call these people?” he asks. “Hmmm, ‘cheaters’ comes to mind, so that’s what I call ’em!”

«

I don’t doubt that he’s right, though like him I don’t see what first-word cheaters get out of it. Is it the ones who play in family groups and want bragging rights?
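
As a rough sanity check on the extract’s arithmetic, here is a minimal Python sketch (not Dilger’s code): it compares the best-case chance of a lucky first guess with the observed rates he reports. The daily player count is my assumption, chosen only so the percentages line up with his 4,000 and 10,000 figures.

```python
# Back-of-the-envelope version of the cheating argument: even in the best case,
# a lucky first guess should land roughly 1 time in 2,315, far below what is observed.
# DAILY_PLAYERS is an illustrative assumption, not a figure from the paper.

ANSWER_LIST_SIZE = 2_315        # words in Wordle's internal answer list
DAILY_PLAYERS = 2_000_000       # assumed for illustration only

best_case_lucky_rate = 1 / ANSWER_LIST_SIZE   # ≈ 0.043%

for observed_rate in (0.002, 0.005):          # the 0.2%–0.5% range Dilger reports
    implied_players = observed_rate * DAILY_PLAYERS
    excess = observed_rate / best_case_lucky_rate
    print(f"observed {observed_rate:.1%}: ~{implied_players:,.0f} players, "
          f"~{excess:.0f}x the best-case lucky rate")
```
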
unique link to this extract


John Grisham, George R.R. Martin and more authors sue OpenAI for copyright infringement • AP News

Hillel Italie:

»

John Grisham, Jodi Picoult and George R.R. Martin are among 17 authors suing OpenAI for “systematic theft on a mass scale,” the latest in a wave of legal action by writers concerned that artificial intelligence programs are using their copyrighted works without permission.

In papers filed Tuesday in federal court in New York, the authors alleged “flagrant and harmful infringements of plaintiffs’ registered copyrights” and called the ChatGPT program a “massive commercial enterprise” that is reliant upon “systematic theft on a mass scale.”

The suit was organized by the Authors Guild and also includes David Baldacci, Sylvia Day, Jonathan Franzen and Elin Hilderbrand among others.

“It is imperative that we stop this theft in its tracks or we will destroy our incredible literary culture, which feeds many other creative industries in the U.S.,” Authors Guild CEO Mary Rasenberger said in a statement. “Great books are generally written by those who spend their careers and, indeed, their lives, learning and perfecting their crafts. To preserve our literature, authors must have the ability to control if and how their works are used by generative AI.”

«

Hasn’t GRRM got a few books to finish? Also, where does he think OpenAI has been reading his content? Or is the complaint that it read the scripts of the first few series of Game Of Thrones and that’s his stuff?

Anyway, read on…
unique link to this extract


Why George R.R. Martin’s lawsuit against generative AI will cost authors even if they win • Arkavian

Arkavian is a company that developed open source software for making deepfake pictures:

»

what happens if Martin does win the lawsuit? Well, it’s not going to turn out the way he hoped. Sure, he will get awarded some money for damages. And probably, LLMs won’t be able to legally train on datasets with his book — in the US, anyway. But other countries — Japan, for instance — don’t see training data as a violation of copyright. So LLMs can legally train on datasets in Japan or another country, even if they contain Martin’s book – completely negating the effect of the lawsuit since they could just train in other countries with more relaxed laws. 

If this lawsuit makes US copyright law stricter against AI, all it would do is make companies hesitant to develop and innovate in their products. Limits on datasets could put a halt to innovations in the technology and put the US on the back foot for AI development and use. Restrictions on AI could translate into restrictions on how websites like Amazon use AI to recommend books to its customers — and authors don’t want that.

The best-case scenario for all authors would be for this lawsuit to get thrown out. There’s just no good way to restrict AI development via copyright that won’t harm authors, publishers, and their ability to sell through new channels in the US. And any restrictions on AI will harm not only AI companies, but authors everywhere. 

«

unique link to this extract


The gruesome story of how Neuralink’s monkeys actually died • WIRED

Dhruv Mehrotra and Dell Cameron:

»

Fresh allegations of potential securities fraud have been leveled at Elon Musk over statements he recently made regarding the deaths of primates used for research at Neuralink, his biotech startup. Letters sent this afternoon to top officials at the US Securities and Exchange Commission (SEC) by a medical ethics group call on the agency to investigate Musk’s claims that monkeys who died during trials at the company were terminally ill and did not die as a result of Neuralink implants. They claim, based on veterinary records, that complications with the implant procedures led to their deaths.

Musk first acknowledged the deaths of the macaques on September 10 in a reply to a user on his social networking app X (formerly Twitter). He denied that any of the deaths were “a result of a Neuralink implant,” and said Neuralink’s researchers had taken care to select subjects who were already “close to death.” Relatedly, in a presentation last fall, Musk claimed that Neuralink’s animal testing was never “exploratory,” but conducted instead to confirm fully formed scientific hypotheses. “We are extremely careful,” he said.

Public records reviewed by WIRED, and interviews conducted with a former Neuralink employee and a current researcher at the University of California, Davis primate center, paint a wholly different picture of Neuralink’s animal research.

«

This is indeed gruesome reading. And of course Musk doesn’t care about the collateral damage on the path to his perhaps-impossible dream.
unique link to this extract


Amazon’s all-new Alexa voice assistant is coming soon, powered by a new Alexa LLM • The Verge

Jennifer Pattison Tuohy:

»

Amazon’s Alexa is about to come out of its shell, and what emerges could be very interesting. At its fall hardware event Wednesday, the company revealed an all-new Alexa voice assistant powered by its new Alexa large language model. According to Dave Limp, Amazon’s current SVP of devices and services, this new Alexa can understand conversational phrases and respond appropriately, interpret context more effectively, and complete multiple requests from one command. 

Voice assistants need a shake-up. A general lack of innovation and barely perceptible improvements around comprehension have turned them into basic tools instead of the exciting technological advancements we hoped for when they broke onto the scene over a decade ago.

Generative AI has looked like their best shot at survival for a while. But while these digital assistants have always had an element of AI, they’ve lacked the complex processing abilities and more human-like interactions generative AI is capable of. This is a big moment for the smart home, as it could take home automation to the next level, moving it from a remote control experience to a home that’s, well, actually smart. 

«

Chatty rooms! What an idea. Though this was inevitable; the only question was whether it would be Google or Amazon (or maybe Microsoft, but its announcement is Thursday) that would do this first.
unique link to this extract


UK net zero policies: what has Sunak scrapped and what do changes mean? • The Guardian

Helena Horton:

»

Rishi Sunak has announced a watering-down of the UK’s net zero policies, though claims he still wishes to meet the legally binding 2050 target. The prime minister said this was to save money for families, declaring: “If we continue down this path, we risk losing the consent of the British people and the resulting backlash will not just be against specific policies, but against the wider mission itself.”

But what has he scrapped? Will it actually save the people of the UK any money? And what will it mean for the climate crisis? The Guardian has looked into each policy and what the change means.

«

Nothing good. What’s astonishing is that there’s so much obvious evidence of why these moves are bad. And yet the Tories have the audacity to trumpet that they’re “ending the ban on onshore wind” – a ban which they put in place in 2015.
unique link to this extract


Gaia Vince says: we need to prepare for mass climate migration • Prospect

Philippa Nuttall:

»

“Playground politics,” sighs Gaia Vince. The science journalist and author of Nomad Century is outraged—but unsurprised—by the UK government’s “stop the boats” campaign. When we speak, the Bibby Stockholm barge is being filled with asylum seekers, despite questions about its suitability, and Number 10 is apparently considering flying those it doesn’t want in the country to Ascension Island, if its Rwanda policy fails. Meanwhile, Lee Anderson, deputy chair of the Conservative party, has suggested asylum seekers refusing to be housed on the barge “fuck off back to France”. Vince, whose book advocates a pragmatic, organised and compassionate response to climate migration, is not short of adjectives to describe these methods. They are “unbelievable”, “depressing” and “unsustainable”, she says in only the first minute of our conversation.   

Published last summer and about to appear in paperback, Nomad Century argues that climate change will make large swathes of the planet uninhabitable, and that the only proper response is “a planned and deliberate migration of a kind humanity has never before undertaken.” The alternative, Vince writes, is “calamitous chaos” with “enormous loss of life, or terrible wars and misery, as the wealthy erect barriers against the poorest.”

…In any case, she says, migration is happening “whether we like it or not. We can either deal with it in a sensible way or close our eyes and do nothing.” Ignorance, she suggests, is the UK government’s current choice—instead of proper policies, it is “responding in a way that drives division and maybe gets applause from a small pool of worshippers.” Since ideas such as flying people to Ascension Island are “ridiculous”, Vince wonders whether the government’s long-term plan might be similarly unthinkable. “Is that all 17-year-olds are conscripted into armies to fight these people on the borders?”

«

Ironically – or perhaps not – the latter scenario is the setup for The Wall, a book by John Lanchester.
unique link to this extract


Google DeepMind AI tool assesses DNA mutations for harm potential • The Guardian

Ian Sample:

»

Scientists at Google DeepMind have built an artificial intelligence program that can predict whether millions of genetic mutations are either harmless or likely to cause disease, in an effort to speed up research and the diagnosis of rare disorders.

The program makes predictions about so-called missense mutations, where a single letter is misspelt in the DNA code. Such mutations are often harmless but they can disrupt how proteins work and cause diseases from cystic fibrosis and sickle-cell anaemia to cancer and problems with brain development.

The researchers used AlphaMissense to assess all 71m single-letter mutations that could affect human proteins. When they set the program’s precision to 90%, it predicted that 57% of missense mutations were probably harmless and 32% were probably harmful. It was uncertain about the impact of the rest.

Based on the findings, the scientists have released a free online catalogue of the predictions to help geneticists and clinicians who are either studying how mutations drive diseases or diagnosing patients who have rare disorders.

A typical person has about 9,000 missense mutations throughout their genome. Of more than 4m seen in humans, only 2% have been classified as either benign or pathogenic. Doctors already have computer programs to predict which mutations may drive disease but because the predictions are inaccurate, they can only provide supporting evidence for making a diagnosis.

«

There’s a DeepMind blogpost. I find this very encouraging: a really sensible use of AI to evaporate the difficult part of the problem and leave the relevant bits behind.
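
The classification described in the extract amounts to bucketing a per-variant score into three ranges. Here is a minimal Python sketch of that idea; the two cutoff values are placeholders for illustration, not AlphaMissense’s published thresholds.

```python
# Three-way call on a per-variant pathogenicity score (0 = benign-looking,
# 1 = pathogenic-looking). Cutoffs below are illustrative placeholders only.

BENIGN_CUTOFF = 0.35       # placeholder, not the published cutoff
PATHOGENIC_CUTOFF = 0.60   # placeholder, not the published cutoff

def classify(score: float) -> str:
    if score < BENIGN_CUTOFF:
        return "likely benign"
    if score > PATHOGENIC_CUTOFF:
        return "likely pathogenic"
    return "uncertain"

print(classify(0.12))  # likely benign
print(classify(0.48))  # uncertain
print(classify(0.91))  # likely pathogenic
```
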
unique link to this extract


How to navigate Apple’s shift from Lightning to USB-C • The New York Times

Brian X Chen:

»

The problem with USB-C cables is that while they usually look the same, the cheaper, low-quality cords offer no such protection for your device. They may have the correct oval connector, but inside, they lack chips to protect your phone.

So if you need a USB-C cable, don’t grab any cheap wire, like the $5 ones you’ll see at a gas station kiosk. Invest in a durable cable from a reputable company. Brands like Anker, Belkin and Amazon Basics are well known for their high-quality power cables that cost roughly $9 to $30, according to John Bumstead, the owner of RDKL Inc., a repair shop that refurbishes MacBooks. Buy the cables from trusted retailers or directly from the brands themselves — and avoid purchasing used wires on sites like eBay.

Be careful what you plug into.

Many USB-C cables lack chips to restrict the current powering your phone. So if you plug it into a source that charges at a higher voltage than your phone accepts, you could electrocute your phone, Ms. Jones said.

The lesson here is to be careful about what you plug your cord into. Those USB ports embedded into airplane back seats, hotel room walls or car consoles are a big no-no because it’s unclear what their charging rates are. It’s safest to plug your USB-C cable only into a high-quality charging brick that protects your phone. Wirecutter, our sister publication, recommends USB-C power bricks from Anker, RAVPower and Spigen that do a good job replenishing your phone quickly without damaging it.

There’s always wireless.

For iPhone owners who aren’t planning on upgrading right away but need new chargers, the most cost-effective alternative to buying another Lightning cable is to go wireless. The E.U. mandate applies only to wires that plug directly into devices — not wireless charging devices that replenish your phone via magnetic induction, such as Apple’s puck-shaped MagSafe

«

But those USB ports embedded into airplane back seats are all USB-A, surely? Those aren’t going to fry your phone. For the rest, though, yup: USB-C needs colour coding, doesn’t it.
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified
