
This year’s Merriam-Webster word of the year is “slop”, unsurprisingly. CC-licensed photo by Greg Myers on Flickr.
You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.
A selection of 10 links for you. Human-generated. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.
Slop and ragebait: what 2025 ‘words of the year’ say about us • Deseret News
Eastin Hartzell:
»
Merriam-Webster’s Word of the Year for 2025 is a short, blunt one — and it’s aimed straight at your feed: “slop.”
Merriam-Webster defines “slop” as “digital content of low quality that is produced usually in quantity by means of artificial intelligence.” In other words, the obviously fake stuff that spreads fast.
Merriam-Webster’s editors said the word captured a growing public frustration, but also a deeper longing. Speaking to The Associated Press, Merriam-Webster president Greg Barlow framed “slop” as a kind of warning label — one that shows that people “want things that are real, they want things that are genuine. It’s almost a defiant word when it comes to AI. When it comes to replacing human creativity, sometimes AI actually doesn’t seem so intelligent.”
Across other major dictionaries and cultural institutions, 2025’s “words of the year” landed on a consistent theme: the modern internet can be exhausting.
Merriam-Webster started announcing a “word of the year” in 2003. Here are its words of the year since 2015:
• 2024: polarization
• 2023: authentic
• 2022: gaslighting
• 2021: vaccine
• 2020: pandemic
• 2019: they
• 2018: justice
• 2017: feminism
• 2016: surreal
• 2015: -ism
The company has long emphasized search behavior — what readers look up and why — alongside cultural relevance.
This year, “slop” surged in the broader context of AI-generated everything: deepfakes, auto-written books and bizarre synthetic videos flooding platforms.
«
Other dictionaries: Oxford: ragebait. Cambridge: parasocial. Collins: vibe coding. Dictionary.com: 6-7. Macquarie: AI slop. Seems as though Simon Willison was an early amplifier of the term in May 2024, but it began earlier than that – he references a tweet whose author says they’re “watching in real time as ‘slop’ becomes a term of art…”.
Willison’s piece is worth reading in retrospect – though his suggestion for naming AI-generated spam, “slom”, hasn’t caught on at all.
unique link to this extract
Stop Citing AI
And while we’re on the topic… Leo Herzog has a page to which you can send people who try to offer LLM output as “facts” (especially to settle arguments):
»
You’ve been sent here because you cited AI as a source to try to prove something.
Responses from Large Language Models like ChatGPT, Claude, or Gemini are not facts. They’re predicting what words are most likely to come next in a sequence. They can produce convincing-sounding information, but that information may not be accurate or reliable.
Imagine someone who has read thousands of books, but doesn’t remember where they read what…Sure, you might get an answer that’s right or advice that’s good… but what “books” is it “remembering” when it gives that answer? That answer or advice is a common combination of words, not a fact.
«
He does offer questions that they might be good at answering, but they’re not fact-based ones.
unique link to this extract
Axios CEO: US is in ‘post-news’ era • Semafor
Max Tani:
»
The co-founder and CEO of Axios is warning journalists that they’ve entered “a post-news era where what matters, and has value, is information, not ‘the news.’”
In order to survive, he wrote in an internal memo shared with Semafor, newsrooms will need to rethink the role they will play in an information landscape dominated by artificial intelligence and algorithmic, personalized video feeds.
“Your reality — how you see the world — is no longer defined by ‘the news,’” Jim VandeHei wrote. “Instead, it’s shaped by the videos you watch, podcasts you hear, the people you follow on social media and know in person, and the reporting you consume. We’ve entered a period where everyone has their own individual reality, usually based on age, profession, passions, politics and platform preferences.”
VandeHei laid out several solutions for Axios to cut through the thicket: every piece of content must be useful to a smart professional, original reporting is crucial, and coverage should focus on one of the three major tectonic changes in tech, governing, and the media itself.
“What traditional news media companies can do is be useful, trusted, illuminating sources of vital information that’s vetted by experts held to high standards of accuracy and truthfulness. That calling is more important than ever,” he said.
Axios believes its largest area for growth is in local coverage, much of which has been left behind by national media. The digital media company has hired Liz Alesse, currently ABC’s vice president of audio, to be the company’s first general manager of Axios Local, which is expanding into new suburban areas in Colorado and Ohio, testing whether the company’s local news format can work in smaller communities.
«
“Local news” in the current age is the modern version of invading Afghanistan or attacking Moscow in winter. Sure, everybody else failed at it, but maybe we can make it work?
Also, the journalists at Axios will surely already know all the things VandeHei wrote. They’ll have known them years ago.
unique link to this extract
Publisher under fire after “fake” citations found in AI ethics guide • The Times
Tilly Harris and Rhys Blakely:
»
One of the world’s largest academic publishers is selling a book on the ethics of artificial intelligence research that appears to be riddled with fake citations, including references to journals that do not exist.
Academic publishing has recently been subject to criticism for accepting fraudulent papers produced using AI, which have made it through a peer-review process designed to guarantee high standards. The Times found that a book recently published by the German-British publishing giant Springer Nature includes dozens of citations that appear to have been invented — a sign, often, of AI-generated material.
The book — Social, Ethical and Legal Aspects of Generative AI — is advertised as an authoritative review of the ethical dilemmas posed by the technology and is on sale for £125. At least two chapters include footnotes that cite scientific publications that appear to have been invented.
In one chapter, 8 of the 11 citations could not be verified, suggesting more than 70% may have been fabricated.
There is growing concern within academia about citations and even entire research papers being generated by AI tools that try to mimic genuine scholarly work.
In April, Springer Nature withdrew another technology title — Mastering Machine Learning: From Basics to Advanced — after it was found to contain numerous fictitious references. In the more recent book analysed by The Times, one citation claims to refer to a paper published in “Harvard AI Journal”. Harvard Business Review has said that no such journal exists.
Guillaume Cabanac, an associate professor of computer science at the University of Toulouse and an expert in detecting fake academic papers, analysed two chapters using BibCheck, a tool designed to identify fabricated references.
He found that at least 11 of 21 citations in the first chapter could not be matched to known academic papers. The analysis also suggested that eight of the 11 citations in Chapter Four were untraceable. “This is research misconduct: falsification and fabrication of references,” Cabanac said. He tracks such cases and says he has seen a steady rise in AI “hallucinated” citations across academic literature.
«
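Cabanac’s method of matching each citation against indexes of known papers can be approximated with Crossref’s public search API. A minimal sketch only, not BibCheck itself (its internals aren’t public): the `crossref_lookup` helper, the similarity threshold and the verified/unverified bookkeeping are all illustrative assumptions.

```python
import json
import urllib.parse
import urllib.request
from difflib import SequenceMatcher

def crossref_lookup(title, rows=3):
    """Fetch the closest-matching indexed titles for a cited title
    from Crossref's public works search. (Illustrative helper.)"""
    url = ("https://api.crossref.org/works?rows=%d&query.bibliographic=%s"
           % (rows, urllib.parse.quote(title)))
    with urllib.request.urlopen(url, timeout=30) as resp:
        items = json.load(resp)["message"]["items"]
    return [item.get("title", [""])[0] for item in items]

def looks_real(cited_title, candidate_titles, threshold=0.85):
    """Treat a citation as verified if any indexed title closely matches.
    The 0.85 similarity threshold is an assumption, not BibCheck's."""
    return any(
        SequenceMatcher(None, cited_title.lower(), t.lower()).ratio() >= threshold
        for t in candidate_titles
    )

def unverified_share(flags):
    """Fraction of citations that failed verification (True = verified)."""
    return flags.count(False) / len(flags)
```

With 8 of 11 chapter-four citations unmatched, `unverified_share([False] * 8 + [True] * 3)` comes out at roughly 0.73, the “more than 70%” in the article.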
The 26 most important ideas for 2026 • Derek Thompson
Derek Thompson:
»
here are 26 ideas for 2026, organized under the themes that I think will drive economics, politics, and technology in the near future.
«
Such as: we’re seeing the end of reading; the triumph of streaming video; the death of cinemas; TikTok is an unknown; the US economy is presently a big bet that AI will work; and plenty more. Lots of data to go with it.
unique link to this extract
Washington Post’s AI-generated podcasts rife with errors, fictional quotes • Semafor
Max Tani:
»
The Washington Post’s top standards editor Thursday decried “frustrating” errors in its new AI-generated personalized podcasts, whose launch has been met with distress by its journalists.
The Post announced that it was rolling out personalized AI-generated podcasts for users of the paper’s mobile app. In a release, the paper said users will be able to choose preferred topics and AI hosts, and could “shape their own briefing, select their topics, set their lengths, pick their hosts and soon even ask questions using our Ask The Post AI technology.”
But less than 48 hours after the product was released, people within the Post have flagged what four sources described as multiple mistakes in personalized podcasts. The errors have ranged from relatively minor pronunciation gaffes to significant changes to story content, like misattributing or inventing quotes and inserting commentary, such as interpreting a source’s quotes as the paper’s position on an issue.
According to four people familiar with the situation, the errors have alarmed senior newsroom leaders who have acknowledged in an internal Slack channel that the product’s output is not living up to the paper’s standards. In a message to other WaPo staff shared with Semafor, head of standards Karen Pensiero wrote that the errors have been “frustrating for all of us.”
…“It is truly astonishing that this was allowed to go forward at all,” one Post editor wrote on Slack. “Never would I have imagined that the Washington Post would deliberately warp its own journalism and then push these errors out to our audience at scale. And just days after the White House put up a site dedicated to attacking journalists, most notably our own, including for stories with corrections or editors notes attached. If we were serious we would pull this tool immediately.”
«
Half of graduates “would earn more as a higher-level apprentice” • The Times
Louise Eccles and Joey D’Urso:
»
Half of graduates would be earning a better salary if they had skipped university and taken a higher-level apprenticeship instead, according to a think tank.
A report published this weekend says the country is “obsessed” with university degrees, which comes at the expense of the economy and is to the detriment of many students.
The Centre for Social Justice (CSJ) found that, five years after qualifying, a higher-level apprentice earns £5,000 a year more than an average graduate.
A higher (level-4) apprenticeship is the equivalent of completing the first year of a bachelor’s degree, and offers training as a brewer, countryside ranger, fraud investigator, data analyst, network engineer, stained-glass craftsperson or insurance professional, among many other occupations.
While a level-4 apprentice earns an average of £37,300 after five years, the median university graduate earns £32,100, according to analysis of government data.
The average student had debts of £53,000 after graduating last year. By comparison, level-4 apprenticeships are funded by employers and the government, and apprentices also earn a salary while working.
«
Also worth mentioning: those apprenticeships are probably less likely to be replaced in a few years by AI than other graduate jobs, because they’ll be in manual work (plumbing, engineering) or manufacturing.
unique link to this extract
We mapped the world’s hottest data centres • Rest of World
Hazel Gandhi and Rina Chandran:
»
Across the world, countries with hot climates are investing millions of dollars in building data centres to meet the growing demand for generative artificial intelligence while also storing data within their own borders. That’s why data centres are peppered around the world, rather than being concentrated only in cooler countries like Norway or Sweden.
Rest of World set out to document how many data centres globally are located in regions that are too hot for optimal operations. The industry standard for that range is 18ºC to 27ºC, recommended by the American Society of Heating, Refrigerating and Air-Conditioning Engineers, or Ashrae. Cooler temperatures improve server operation efficiency; in hotter temperatures, data centres face significant challenges in cooling their facilities.
We plotted temperature data from the Copernicus Climate Data Store, a project organized as part of the European Union’s efforts to open-source climate data, against locations from Data Center Map, a widely referenced resource and marketplace for data centre-related services. We found that of the 8,808 operational data centres worldwide as of October 2025, nearly 7,000 are located in areas outside the optimal range. The vast majority are in regions with average temperatures colder than the range. Only 600, or fewer than 10% of all operational data centres, are located in areas where average annual temperatures are above 27ºC.
However, our analysis, conducted with the help of nonprofit Climate Central, showed that in 21 countries — including Singapore, Thailand, Nigeria, and the United Arab Emirates — all data centres are located in areas with average annual temperatures above 27ºC. Nearly all data centres in Saudi Arabia and Malaysia are in regions that are too hot. Nearly half of Indonesia’s 170 data centres are in hot places, while in India — a key market for big tech and social media companies — about 30% are located in overly hot regions.
«
Going to guess that where they’re in “too hot” places, a very significant amount of money is going to be spent on aircon. Is it going to be renewable first-install energy, or diverted from the grid?
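Rest of World’s test reduces to classifying each site’s average annual temperature against Ashrae’s recommended 18–27ºC band and tallying the results. A minimal sketch of that classification step; the sample cities and temperatures are made up for illustration, not drawn from the article’s Copernicus or Data Center Map data.

```python
# Ashrae's recommended operating band for data centres (ºC).
ASHRAE_LOW, ASHRAE_HIGH = 18.0, 27.0

def classify(mean_temp_c):
    """Place a site's average annual temperature relative to the band."""
    if mean_temp_c < ASHRAE_LOW:
        return "colder"
    if mean_temp_c > ASHRAE_HIGH:
        return "hotter"
    return "optimal"

def tally(sites):
    """Count sites per class; sites is a list of (name, mean_temp_c) pairs."""
    counts = {"colder": 0, "optimal": 0, "hotter": 0}
    for _name, temp in sites:
        counts[classify(temp)] += 1
    return counts

# Illustrative rows only -- invented, not the article's dataset.
sample = [("Oslo", 6.3), ("Frankfurt", 10.5), ("Singapore", 27.9),
          ("Dubai", 28.2), ("Dublin", 9.8), ("Johannesburg", 16.1),
          ("Sao Paulo", 19.5)]
```

On the sample, `tally(sample)` gives `{"colder": 4, "optimal": 1, "hotter": 2}`, echoing the article’s shape: most sites below the band, a small minority above it.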
How we found the man behind two deepfake porn sites • Bellingcat
Kolina Koltai:
»
Depending on which of his social media profiles you were looking at, Mark Resan was either a marketing lead at Google or working for a dental implant company, a human resources company and a business software firm – all at the same time.
But a Bellingcat investigation has found that the Hungarian national is the key figure behind, and the likely owner of, at least two deepfake porn websites – RefacePorn and DeepfakePorn – that until recently were selling paid subscriptions.
There is no question about the nature of these websites. RefacePorn’s landing page shows an explicit video of a woman performing a sexual act. As the video plays, her face is replaced with a variety of other women’s faces. The text above declares: “Face swap deepfake porn. Upload your face!”
Deepfake porn sites such as these, which use artificial intelligence to create sexually explicit images and videos – usually without the consent of those whose faces or bodies are featured – have proliferated at an alarming rate in recent years. The impact on victims has been described as “life-shattering”, with the mental health effects similar to those reported by victims of sexual assault.
While the technology to make these synthetic images is not new, the rise of mainstream AI image generator tools and “Nudify” apps has made it more widely available to people without deep technical expertise.
«
“Follow the money” turns out to be the most reliable method of doing this sort of detection.
unique link to this extract
Roomba maker iRobot swept into bankruptcy • Financial Times via Ars Technica
Rafe Rosner-Uddin:
»
Roomba maker iRobot has filed for bankruptcy and will be taken over by its Chinese supplier after the company that popularized the robot vacuum cleaner fell under the weight of competition from cheaper rivals.
The US-listed group on Sunday said it had filed for Chapter 11 bankruptcy in Delaware as part of a restructuring agreement with Shenzhen-based Picea Robotics, its lender and primary supplier, which will acquire all of iRobot’s shares. The deal comes nearly two years after a proposed $1.5bn acquisition by Amazon fell through over competition concerns from EU regulators.
Shares in iRobot traded at about $4 a share on Friday, well below the $52 a share offered by Amazon.
“Today’s announcement marks a pivotal milestone in securing iRobot’s long-term future,” said Gary Cohen, iRobot’s chief executive. “The transaction will strengthen our financial position and will help deliver continuity for our consumers, customers and partners.”
Founded in 1990 by engineers from the Massachusetts Institute of Technology, iRobot helped introduce robotics into the home, ultimately selling more than 40 million devices, including its Roomba vacuum cleaner, according to the company. In recent years, it has faced competition from cheaper Chinese rivals, including Picea, putting pressure on sales and forcing iRobot to reduce headcount. A management shake-up in early 2024 saw the departure of its co-founder as chief executive.
Amazon proposed buying the company in 2023, seeing synergy with its Alexa-powered smart speakers and Ring doorbells. EU regulators, however, pushed back on the deal, raising concerns it would lead to reduced visibility for rival vacuum cleaner brands on Amazon’s website.
…Although iRobot received $94m in compensation for the termination of its deal with Amazon, a significant portion was used to pay advisory fees and repay part of a $200m loan from private equity group Carlyle.
Picea’s Hong Kong subsidiary acquired the remaining $191m of debt from Carlyle last month. At the time, iRobot already owed Picea $161.5m for manufacturing services, nearly $91m of which was overdue.
«
As I said when this loomed six weeks ago: maybe the market for robot vacuum cleaners isn’t that big. Bigger question: will all the Roombas out there keep working?
unique link to this extract
• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?
Read Social Warming, my latest book, and find answers – and more.
Errata, corrigenda and ai no corrida: none notified