Fishing for walleyes in Lake Erie can be fun, though some over-competitive anglers take it too far. CC-licensed photo by Tom Hart on Flickr.
You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.
There’s another post at the Social Warming Substack at about 0845 UK time: it’s about something social networks keep ignoring.
A selection of 9 links for you. How big? I’m @charlesarthur on Twitter. On Mastodon: https://newsie.social/@charlesarthur. Observations and links welcome.
Twitter cuts off Substack embeds and starts suspending bots • The Verge
Writers trying to embed tweets in their Substack stories are in for a rude surprise: after pasting a link to the site, a message pops up saying that “Twitter has unexpectedly restricted access to embedding tweets in Substack posts” and explaining that the company is working on a fix. The unfortunate situation comes on the heels of Substack announcing Notes, a Twitter competitor.
The issue could cause problems for writers who want to talk about what’s going on with Twitter in their newsletters or about things that are happening on the platform. While screenshots of tweets could work in some cases, they’re less trustworthy because they don’t provide a direct link to the source. Screenshots also won’t help you if you’re trying to, say, embed a video that someone posted on Twitter. (And Twitter seems to be at least somewhat interested in becoming a video platform given that several Blue perks relate to making the video uploading experience better.)
…Substack spokesperson Helen Tobin didn’t comment on whether the issues were caused by changes to Twitter’s API when I asked, instead sharing the same statement tweeted by the company. If they are, though, Substack would be far from the only platform affected by Twitter’s new API policies, which were announced a week ago.
Since then, various companies have been notifying users that they have to cut out or paywall certain features that interacted with Twitter, and many people who have run bots on the platform have been posting about how they can no longer post like they used to.
Slowly but surely, Twitter is cutting itself off from the web. Not surprising. We seem to be moving into a new era of the internet, where information doesn’t want to be free at all.
unique link to this extract
The Bitcoin white paper is hidden in every modern copy of macOS • Waxy.org
While trying to fix my printer today, I discovered that a PDF copy of Satoshi Nakamoto’s Bitcoin whitepaper apparently shipped with every copy of macOS since Mojave in 2018.
I’ve asked over a dozen Mac-using friends to confirm, and it was there for every one of them. The file is found in every version of macOS from Mojave (10.14.0) to the current version, Ventura (13.3), but isn’t in High Sierra (10.13) or earlier.
See for yourself: if you’re on a Mac, open a Terminal and type the following command:
open /System/Library/Image\ Capture/Devices/VirtualScanner.app/Contents/Resources/simpledoc.pdf
[Be sure to include the \ in “Image\ Capture” so the shell treats the space as part of the path.]
If you’re on macOS 10.14 or later, the Bitcoin PDF should immediately open in Preview.
(If you’re not comfortable with Terminal, open Finder and click on Macintosh HD, then open the System→Library→Image Capture→Devices folder. Control-click on VirtualScanner.app and Show Package Contents, open the Contents→Resources folder inside, then open simpledoc.pdf.)
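If you want to go a step further, here’s a minimal shell sketch (assuming macOS Mojave or later; `shasum` ships with macOS) that checks whether the file exists and prints its SHA-256, which you can compare against a checksum computed over a copy of the whitepaper downloaded from bitcoin.org:

```shell
# Path to the PDF bundled inside the Image Capture virtual scanner driver
PDF="/System/Library/Image Capture/Devices/VirtualScanner.app/Contents/Resources/simpledoc.pdf"

if [ -f "$PDF" ]; then
  # Print the checksum; compare it with one computed over a known
  # copy of the Bitcoin whitepaper to confirm the files are identical
  shasum -a 256 "$PDF"
else
  echo "simpledoc.pdf not found - this macOS version may predate Mojave (10.14)"
fi
```

Quoting `$PDF` makes the shell treat the space in “Image Capture” as part of the path, so no backslash escaping is needed here.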
Confirmed: it’s an oddity, living in a folder with a couple of setup sheets that you might use on a scanner. No doubt it will soon vanish in an update, having been discovered, and the reasons for its existence will become one of those Apple fairytales, known only to the chosen few. The, er, chatbot Eliza used to lurk somewhere in the Mac depths, but has long since been purged.
unique link to this extract
Online ads are serving us lousy, overpriced goods • The New York Times
it turns out that targeted ads aren’t helping consumers, either. Last year, researchers at Carnegie Mellon and Virginia Tech presented a study of the consumer welfare implications of targeted ads. The results were so surprising that they repeated it to make sure their findings were correct.
The new study, published online this week, confirmed the results: The targeted ads shown to another set of nearly 500 participants were pitching more expensive products from lower-quality vendors than identical products that showed up in a simple Web search.
The products shown in targeted ads were, on average, roughly 10% more expensive than what users could find by searching online. And the products were more than twice as likely to be sold by lower-quality vendors, as measured by their Better Business Bureau ratings.
“Both studies consistently highlighted a pervasive problem of low-quality vendors in targeted ads,” write the authors, Eduardo Abraham Schnadower Mustri, a Carnegie Mellon University Ph.D. student, Idris Adjerid, a professor at Virginia Tech, and Alessandro Acquisti, a professor at Carnegie Mellon. The authors posit that targeted ads may be a way for smaller vendors to reach consumers — and “a sizable portion of these vendors may in fact be undesirable to consumers because they are of lower quality.”
Quality seems to be an issue with Jeremy’s Razors, which spent the most on Facebook advertising during the 30-day period ending March 26, spending more than $800,000. When I checked Jeremy’s Facebook reviews, many customers said they liked the product’s political message [that it’s the “woke-free razor”] more than the razor itself. “If you like razors that feel like someone is pulling your facial hair out with a tweezer one at a time, then Jeremy’s Razors are your razors,” one wrote. The razor has a 2.7 star rating (out of 5) based on more than 280 reviews.
Would be quite the turn-up if all the money spent on trying to target people has just been transferred to the price of the things we’re sold.
unique link to this extract
The Talented Doctor Ripley (GPT) • Bastiat’s Window
Medical students Faisal Elali and Leena Rachid explore the possibility of fraudulent research papers produced via ChatGPT:
“The feasibility of producing fabricated work, coupled with the difficult-to-detect nature of published works and the lack of AI-detection technologies, creates an opportunistic atmosphere for fraudulent research. Risks of AI-generated research include the utilization of said work to alter and implement new healthcare policies, standards of care, and interventional therapeutics.”
Elali and Rachid say such deceptions could be motivated by:
“financial gain, potential fame, promotion in academia, and curriculum vitae building, especially for medical students who are in increasingly competitive waters.”
A rabbinic parable warns that gossip spreads like feathers from a torn pillow in a windstorm—floating every which way and utterly irretrievable. So it may be with medical misinformation.
In a 2016 PBS article, I described my friend Rich Schieken’s retirement after 40 years as a pediatric cardiologist and medical school professor. I asked why he retired from work that he loved, and he responded:
“[M]y world has changed. When I began, parents brought their sick and dying children to me. I said, ‘This is what we’ll do,’ and they said, ‘Yes, doctor.’ Nowadays, they bring 300 pages of internet printouts. When I offer a prognosis and suggest treatment, they point to the papers and ask, ‘Why not do this or this or that?’ Don’t get me wrong. This new world is better than the old one. It’s just quite a bit to get used to.”
But when Rich said the above words, those parents’ printouts were written by someone, and the requisite human effort somewhat limited the volume of misinformation. With ChatGPT and similar bots, that constraint vanishes.
Google CEO Sundar Pichai says search to feature Chat AI • WSJ
[In February] Microsoft infused the technology behind ChatGPT into its search engine Bing, long a distant laggard to Google search. The move allowed users to engage in extended conversations with the product. Microsoft said it expected to generate $2bn in revenue for every percentage point it gained in the search market, of which Google has a more than 90% share.
Mr. Pichai’s latest comments indicate that Google plans to allow users to interact directly with the company’s large language models through its search engine. That move could upend the traditional link-based experience that has been the norm for more than two decades.
Google is testing several new search products, such as versions that allow users to ask follow-up questions to their original queries, Mr. Pichai said. The company said last month that it would begin “thoughtfully integrating LLMs into search in a deeper way,” but until now hadn’t detailed plans to offer conversational features.
Google has begun testing new AI features within Gmail and other work-related products, while Microsoft has moved to offer AI beyond Bing for use in some of its business software tools.
The stakes in the AI race in search are particularly high for Mr. Pichai. Search ads remain the biggest moneymaker for Google, bringing in $162bn of revenue last year.
Google at times had been cautious about moving too fast with the technology, wary of radically altering the way users interact with its search engine.
…AI technology requires enormous computing power to process the calculations used to produce humanlike conversation. Mr. Pichai said Google needs to adapt its use of resources to continue its work in AI while also managing costs. For example, he said Google Brain and DeepMind—the company’s two main AI units, which have long operated separately—would work together more closely on efforts to build large algorithms.
It’s the latter point that really matters. If Microsoft can gain any share of search, while making it more expensive for Google to run search (as it inevitably will, adding AI to it) then that degrades Google’s core business profitability. That, in turn, limits its ability to compete in less profitable fields where Microsoft sees opportunity. (The link to the article should jump the paywall.)
unique link to this extract
ChatGPT is making up fake Guardian articles. Here’s how we’re responding • The Guardian
Chris Moran is the Guardian’s head of editorial innovation:
A recent study of 1,000 students in the US found that 89% have used ChatGPT to help with a homework assignment. The technology, with all its faults, has been normalised at incredible speed, and is now at the heart of systems that act as the key point of discovery and creativity for a significant portion of the world.
Two days ago our archives team was contacted by a student asking about [a] missing article from a named journalist. There was no trace of the article in our systems. The source? ChatGPT [which told the student that the article, by the journalist, had appeared in The Guardian].
It’s easy to get sucked into the detail on generative AI, because it is inherently opaque. The ideas and implications, already explored by academics across multiple disciplines, are hugely complex, the technology is developing rapidly, and companies with huge existing market shares are integrating it as fast as they can to gain competitive advantages, disrupt each other and above all satisfy shareholders.
But the question for responsible news organisations is simple, and urgent: what can this technology do right now, and how can it benefit responsible reporting at a time when the wider information ecosystem is already under pressure from misinformation, polarisation and bad actors?
This is the question we are currently grappling with at the Guardian. And it’s why we haven’t yet announced a new format or product built on generative AI. Instead, we’ve created a working group and small engineering team to focus on learning about the technology, considering the public policy and IP questions around it, listening to academics and practitioners, talking to other organisations, consulting and training our staff, and exploring safely and responsibly how the technology performs when applied to journalistic use.
In doing this we have found that, along with asking how we can use generative AI, we are reflecting more and more on what journalism is for, and what makes it valuable.
This is only the beginning of the problem: what happens when people start asking ChatGPT to “write an article in the style of The Guardian and give it a headline, and byline it with the name of a journalist who works there”? Presently it’s doing the headline/byline thing. Worse is to come.
unique link to this extract
Anglers plead guilty after claims they used fish fillets to win top contest • AP via The Guardian
Two men accused of stuffing fish with lead weights and fish fillets in an attempt to win thousands of dollars in an Ohio tournament last year pleaded guilty this week to charges including cheating.
The cheating allegations surfaced in September when Lake Erie Walleye Trail tournament director Jason Fischer became suspicious when the fish turned in by two anglers, Jacob Runyan and Chase Cominsky, were significantly heavier than typical walleye.
A crowd of people at Gordon Park in Cleveland watched as Fischer cut the walleye open and found weights and walleye fillets stuffed inside.
As part of this week’s deal, Runyan and Cominsky pleaded guilty to cheating and unlawful ownership of wild animals and agreed to three-year suspensions of their fishing licenses. Cominsky also agreed to give up his bass boat worth $100,000. Prosecutors agreed to drop attempted grand theft and possessing criminal tools charges.
Both men are scheduled to be sentenced in May. Prosecutors plan to recommend a sentence of six months’ probation.
“This plea is the first step in teaching these crooks two basic life lessons,” Cuyahoga county prosecutor Michael O’Malley said on Monday in a statement. “Thou shall not steal, and crime does not pay.”
They were in line for $28,000 in prizes: the growing money in this… sport? pastime? has prompted cheats to think of new ways to get into the money class.
unique link to this extract
Algorithm rank validator • Cory Etzkorn
See how your tweet performs against the open source Twitter algorithm.
There’s a box, into which you type. I tried: “Elon Musk? Isn’t he bad for Twitter?”
👎 Said bad things about Elon Musk. (-100)
👎 Too many questions. (-50)
It’s a joke, as you’ll realise if you look at the source code on Etzkorn’s GitHub. But given that the real algorithm uses “author_is_elon”, the joke isn’t that far from the truth.
Why do websites have so many pop-ups? • The Verge
surely, I thought, there must be a use case for pop-ups, an evidence-based explanation. I spoke with Alex Khmelevsky, head of UX at Clay, a San Francisco-based design and branding firm with clients such as Google, UPS, and Coca-Cola. Of pop-ups, he said they’re “not a good practice overall.” And yet, clients often demand them. Designers may try to suggest small changes to make them “context-based, information-based,” and less intrusive, but the client gets the final say.
I called an old colleague at the Center for American Progress to ask why opening their website doesn’t trigger the usual nonprofit tidal wave of subscribe, donate, and take action pop-ups. As vice president of digital strategy, Jamie Perez was closely involved in every step of the site’s recent redesign and ongoing development. “I trust users are doing what they want to do,” he said, noting friction frustrates people trying to grab data or read an article. He wants those users to return — and tell their friends. He views UX as “growing a relationship,” providing something of value rather than squeezing the most out of a single session.
Still, I’m starting to feel trapped in a web of frustration and unclosable interstitials: knowing that evidence against pop-ups is substantial, why keep using them? “The people who develop [pop-ups] have no idea about design and user experience,” commented Khmelevsky, and Buhle echoed the sentiment. “Oftentimes, decision-makers look at what’s right in front,” he said, turning to what others are using for guidance rather than stopping to reconsider. After talking to over a dozen designers and marketers, the best answer I could get was: pop-ups keep happening because other sites keep using them.
unique link to this extract
• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?
Read Social Warming, my latest book, and find answers – and more.
Errata, corrigenda and ai no corrida: none notified