
The largest toilet maker in Japan might also be crucial for the future of AI. Why? Memory chips. CC-licensed photo by starfive on Flickr.
You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.
A selection of 9 links for you. Resurrected, amazingly. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.
Pinterest is drowning in a sea of AI slop and auto-moderation • 404 Media
Matthew Gault:
»
Pinterest has gone all in on artificial intelligence and users say it’s destroying the site. Since 2009, the image sharing social media site has been a place for people to share their art, recipes, home renovation inspiration, corny motivational quotes, and more, but in the last year users, especially artists, say the site has gotten worse. AI-powered mods are pulling down posts and banning accounts, AI-generated art is filling feeds, and hand drawn art is labeled as AI modified.
“I feel like, increasingly, it’s impossible to talk to a single human [at Pinterest],” artist and Pinterest user Tiana Oreglia told 404 Media. “Along with being filled with AI images that have been completely ruining the platform, Pinterest has implemented terrible AI moderation that the community is up in arms about. It’s banning people randomly and I keep getting takedown notices for pins.”
…r/Pinterest is awash in users complaining about AI-related issues on the site. “Pinterest keeps automatically adding the ‘AI modified’ tag to my Pins…every time I appeal, Pinterest reviews it and removes the AI label. But then… the same thing happens again on new Pins and new artwork. So I’m stuck in this endless loop of appealing → label removed → new Pin gets tagged again,” read a post on r/Pinterest.
The redditor told 404 Media that this has happened three times so far and it takes between 24 and 48 hours to sort out.
«
Facebook has already lost this fight; Instagram might not care; X is overrun with chatbot-spewing accounts; the question starts to look like “which of these can turn back the tide of AI slop? And will that help them survive?”
unique link to this extract
Single vaccine could protect against all coughs, colds and flus, researchers say • BBC News
James Gallagher:
»
A single nasal spray vaccine could protect against all coughs, colds and flus, as well as bacterial lung infections, and may even ease allergies, say US researchers.
The team at Stanford University have tested their “universal vaccine” in animals and still need to do human clinical trials. Their approach marks a “radical departure” from the way vaccines have been designed for more than 200 years, they say.
Experts in the field said the study was “really exciting” despite being at an early stage and could be a “major step forward”.
Current vaccines train the body to fight one single infection. A measles vaccine protects against only measles and a chickenpox vaccine protects against only chickenpox. This is how immunisation has worked since Edward Jenner pioneered vaccines in the late 18th Century.
The approach, described in the journal Science, does not train the immune system. Instead it mimics the way immune cells communicate with each other.
It is given as a nasal spray and leaves white blood cells in our lungs – called macrophages – on “amber alert” and ready to jump into action no matter what infection tries to get in. The effect lasted for around three months in animal experiments.
The researchers showed this heightened state of readiness led to a 100-to-1,000-fold reduction in viruses getting through the lungs and into the body.
And for those that did sneak through, the rest of the immune system was “poised, ready to fend off these in warp speed time” said Prof Bali Pulendran, a professor of microbiology and immunology at Stanford.
«
Very sure that RFK will really hurry to get this passed through.
unique link to this extract
How will OpenAI compete? • Benedict Evans
Evans looks in detail, but also with the helicopter view, at Sam Altman’s S/W/O/T:
»
So: you don’t know how you can make your core technology better than anyone else’s. You have a big user base but one that has limited engagement and seems really fragile. The key incumbents have more or less matched your technology and are leveraging their product and distribution advantages to come after the market. And, it looks like a lot of the value and leverage will come from new experiences that haven’t been invented yet, and you can’t invent all of those yourself. What do you do?
For a lot of last year, it felt like OpenAI’s answer was “everything, all at once, yesterday”. An app platform! No, another app platform! A browser! A social video app! Jony Ive! Medical research! Advertising! More stuff I’ve forgotten! And, of course, trillions of dollars of capex announcements, or at least capex aspirations.
Some of this looked like ‘flooding the zone’, or at least just the result of hiring a lot of aggressive, ambitious people really quickly. There was also sometimes the sense of people copying the forms of previously successful platforms without quite understanding their purpose or dynamics: “platforms have app stores, so we need an app store!”
But late last year, Sam Altman tried to put it all together, showing this diagram, and using the famous quote from Bill Gates, that the definition of a platform is that it creates more value for its partners than for itself.
…That is indeed how Windows or iOS worked. The trouble is, I really don’t think that’s the right analogy. I don’t think OpenAI has any of this. It doesn’t have the kind of platform and ecosystem dynamics that Microsoft or Apple had, and that flywheel diagram [above in the blogpost text] doesn’t actually show a flywheel.
…When I was at university, a long time ago now, my medieval history professor, Roger Lovatt, told me that power is the ability to make people do something that they don’t want to do, and that’s really the question here. Does OpenAI have the ability to get consumers, developers and enterprises to use its systems more than anybody else, regardless of what the system itself actually does?
«
RFK Jr.’s FDA no longer warns against ineffective, potentially dangerous autism treatments • ProPublica
Megan O’Matz:
»
The warning on the government website was stark. Some products and remedies claiming to treat or cure autism are being marketed deceptively and can be harmful. Among them: chelating agents, hyperbaric oxygen therapies, chlorine dioxide and raw camel milk.
Now that advisory is gone.
The Food and Drug Administration pulled the page down late last year. The federal Department of Health and Human Services told ProPublica in a statement that it retired the webpage “during a routine clean up of dated content at the end of 2025,” noting the page had not been updated since 2019. (An archived version of the page is still available online.)
Some advocates for people with autism don’t understand that decision. “It may be an older page, but those warnings are still necessary,” said Zoe Gross, a director at the Autistic Self Advocacy Network, a nonprofit policy organization run by and for autistic people. “People are still being preyed on by these alternative treatments like chelation and chlorine dioxide. Those can both kill people.”
Chlorine dioxide is a chemical compound that has been used as an industrial disinfectant, a bleaching agent and an ingredient in mouthwash, though with the warning it shouldn’t be swallowed. A ProPublica story examined Sen. Ron Johnson’s endorsement of a new book by Dr. Pierre Kory, which describes the chemical as a “remarkable molecule” that, when diluted and ingested, “works to treat everything from cancer and malaria to autism and COVID.”
«
It’s as though insane monks from the 15th century have taken over.
unique link to this extract
Japan’s largest toilet maker is undervalued AI play, says activist investor • FT
David Keohane:
»
Japan’s largest toilet maker is an “undervalued and overlooked” AI play, according to a UK-based activist investor.
Palliser Capital sent a letter to the board of Toto last week exhorting it to make more of its advanced ceramics segment, saying it holds a crucial position in the semiconductor supply chain. The segment generates 40% of Toto’s operating profit.
Ubiquitous in Japan and now famous across the world, Toto is best known for its heated toilet seats and “Washlet” bidet features. But the manufacturer “has quietly evolved from a traditional domestic sanitary ware champion into a rising powerhouse in advanced ceramics for semiconductor manufacturing”, Palliser said.
It described Toto as “the most undervalued and overlooked AI memory beneficiary” because the company also makes so-called electrostatic chucks, which are used to manufacture Nand memory chips. Prices for memory chips have soared over the past few months because of massive demand from AI-focused companies.
Toto’s chuck technology uses ceramics designed to remain stable at very low temperatures, helping hold silicon wafers firmly during chip production. That makes it relevant to cryogenic etching, which is expected to grow as memory chips become more layered and complex.
«
“Nice memory chip manufacturing process you’ve got there. Be a shame if you were to run out of ceramic to make it on. By the way, our prices may need adjustment.”
unique link to this extract
I hacked ChatGPT and Google’s AI – and it only took 20 minutes • BBC Future
Thomas Germain:
»
Perhaps you’ve heard that AI chatbots make things up sometimes. That’s a problem. But there’s a new issue few people know about, one that could have serious consequences for your ability to find accurate information and even your safety. A growing number of people have figured out a trick to make AI tools tell you almost whatever they want. It’s so easy a child could do it.
As you read this, this ploy is manipulating what the world’s leading AIs say about topics as serious as health and personal finances. The biased information could mean people make bad decisions on just about anything – voting, which plumber you should hire, medical questions, you name it.
To demonstrate it, I pulled the dumbest stunt of my career to prove (I hope) a much more serious point: I made ChatGPT, Google’s AI search tools and Gemini tell users I’m really, really good at eating hot dogs. Below, I’ll explain how I did it, and with any luck, the tech giants will address this problem before someone gets hurt.
It turns out changing the answers AI tools give other people can be as easy as writing a single, well-crafted blog post almost anywhere online. The trick exploits weaknesses in the systems built into chatbots, and it’s harder to pull off in some cases, depending on the subject matter. But with a little effort, you can make the hack even more effective. I reviewed dozens of examples where AI tools are being coerced into promoting businesses and spreading misinformation. Data suggests it’s happening on a massive scale.
“It’s easy to trick AI chatbots, much easier than it was to trick Google two or three years ago,” says Lily Ray, vice president of search engine optimisation (SEO) strategy and research at Amsive, a marketing agency. “AI companies are moving faster than their ability to regulate the accuracy of the answers. I think it’s dangerous.”
A Google spokesperson says the AI built into the top of Google Search uses ranking systems that “keep results 99% spam-free”. Google says it is aware that people are trying to game its systems and it’s actively trying to address it. OpenAI also says it takes steps to disrupt and expose efforts to covertly influence its tools. Both companies also say they let users know that their tools “can make mistakes”.
But for now, the problem isn’t close to being solved. “They’re going full steam ahead to figure out how to wring a profit out of this stuff,” says Cooper Quintin, a senior staff technologist at the Electronic Frontier Foundation, a digital rights advocacy group. “There are countless ways to abuse this, scamming people, destroying somebody’s reputation, you could even trick people into physical harm.”
«
There’s an invader turning huge swathes of Britain into deserts – and they’re spreading • The Guardian
George Monbiot:
»
there are many kinds of desert, and not all of them are dry. In fact, those spreading across Britain are clustered in the wettest places. Yet they harbour fewer species than some dry deserts do, and are just as hostile to humans. Another useful term is terrestrial dead zones.
What I’m talking about are the places now dominated by a single plant species, called Molinia caerulea or purple moor-grass. Over the past 50 years, it has swarmed across vast upland areas: in much of Wales, on Dartmoor, Exmoor, in the Pennines, Peak District, North York Moors, Yorkshire Dales and many parts of Scotland. Molinia wastes are dismal places, grey-brown for much of the year, in which only the wind moves. As I know from bitter experience, you can explore them all day and see scarcely a bird or even an insect.
Not that you would wish to walk there. The grass forms high tussocks through which it is almost impossible to push. As it happens, most of the places that have succumbed to Molinia monoculture are “access land”. Much of the pittance of England and Wales in which we are allowed to walk freely has become inaccessible.
…Molinia challenges the definition of an invasive species. The term is supposed to refer only to non-native organisms. But while it has always been part of our upland flora, it appears to have spread further and faster than any introduced plant in the UK, and with greater ecological consequences. It is uncontrolled by herbivores, disease or natural successional processes (transitions to other plant communities). In fact, it stops these processes in their tracks.
Given the scale of the problem, it is remarkably little studied and discussed. I cannot find even a reliable estimate of the area affected: the most recent in England is nearly 10 years old, and I can discover none for Wales or Scotland. But in the southern Cambrian Mountains alone, judging by a combination of my walks and satellite imagery, there appears to be a dead zone covering roughly 300 sq km, in which little but this one species grows. Most of central Dartmoor is now Molinia desert, and just as disheartening and hard to traverse.
«
It turns out there are multiple bad incentives around farming and other human activity that encourage Molinia. Which means getting rid of it – or replacing it – requires changing those incentives.
unique link to this extract
The first signs of burnout are coming from the people who embrace AI the most • TechCrunch
Connie Loizos:
»
The most seductive narrative in American work culture right now isn’t that AI will take your job. It’s that AI will save you from…
That’s the version the industry has spent the last three years selling to millions of nervous people who are eager to buy it. Yes, some white-collar jobs will disappear. But for most other roles, the argument goes, AI is a force multiplier. You become a more capable, more indispensable lawyer, consultant, writer, coder, financial analyst — and so on. The tools work for you, you work less hard, everybody wins.
But a new study published in Harvard Business Review follows that premise to its actual conclusion, and what it finds there isn’t a productivity revolution. It finds companies are at risk of becoming burnout machines.
As part of what they describe as “in-progress research,” UC Berkeley researchers spent eight months inside a 200-person tech company watching what happened when workers genuinely embraced AI. What they found across more than 40 “in-depth” interviews was that nobody was pressured at this company. Nobody was told to hit new targets. People just started doing more because the tools made more feel doable. But because they could do these things, work began bleeding into lunch breaks and late evenings. The employees’ to-do lists expanded to fill every hour that AI freed up, and then kept going.
As one engineer told them, “You had thought that maybe, oh, because you could be more productive with AI, then you save some time, you can work less. But then really, you don’t work less. You just work the same amount or even more.”
«
I think journalists could tell them a bit about this: tools that make it easier to do your job don’t mean you do the job faster, they mean you do more of the job. The rise of the internet did not mean shorter days for journalists. (Nor did it improve pay, but nobody’s looking at that for coders yet.)
unique link to this extract
What size am I? • Darkgreener
Anna Powell-Smith:
»
Finding clothes that fit shouldn’t be so hard. Add your measurements here to see which high-street sizes are best for you
«
THIS is the project I was thinking of: Anna Powell-Smith’s 2012 (after 2010 as I said!) project at darkgreener (contains green in the name!) about dress sizes. As she notes at her site, it’s her only work that’s been featured in both the Wall Street Journal and the Daily Mail. She’s now the director of the Centre for Public Data, an advocacy organisation. Thanks Struan D for the link; no thanks to search engines or chatbots. Humans win again!
unique link to this extract
• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?
Read Social Warming, my latest book, and find answers – and more.
Errata, corrigenda and ai no corrida: none notified