Start Up No.2376: life after Ozempic, Greenland is melting faster, persuasive chatbots, is Apple sclerotic?, US’s data purge, and more


If you want a recordable MiniDisc, you’ll have to scour the stores – Sony has stopped making them. CC-licensed photo by John on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


There’s another post coming this week at the Social Warming Substack on Friday at 0845 UK time. Free signup.


A selection of 9 links for you. Backed up. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


Sony kills recordable Blu-rays, MiniDiscs, and MiniDVs • IEEE Spectrum

Gwendolyn Rak:

»

Physical media fans need not panic yet—you’ll still be able to buy new Blu-Ray movies for your collection. But for those who like to save copies of their own data onto the discs, the remaining options just became more limited: Sony announced at the end of January that it’s ending all production of several recordable media formats—including Blu-Ray discs, MiniDiscs, and MiniDV cassettes—with no successor models.

“Considering the market environment and future growth potential of the market, we have decided to discontinue production,” a representative of Sony said in a brief statement to IEEE Spectrum.

Though availability is dwindling, most Blu-Ray discs are unaffected. The discs being discontinued are currently only available to consumers in Japan and some professional markets elsewhere, according to Sony. Many consumers in Japan use blank Blu-Ray discs to save TV programs, Sony separately told Gizmodo.

Sony, which prototyped the first Blu-Ray discs in 2000, has been selling commercial Blu-Ray products since 2006. Development of Blu-Ray was started by Philips and Sony in 1995, shortly after Toshiba’s DVD was crowned the winner of the battle to replace the VCR, notes engineer Kees Immink, whose coding was instrumental in developing optical formats such as CDs, DVDs, and Blu-Ray discs. “Philips [and] Sony were so frustrated by that loss that they started a new disc format, using a blue laser,” Immink says.

«

Outside of SSDs (including thumb drives), recordable media is vanishing from the consumer market. Because who can wait for a Blu-ray to back up a tiny fraction of what’s on our computer, let alone figure out how to back up our phones?
unique link to this extract


What happens if you stop Ozempic or other weight loss drugs after losing weight? • The New York Times

Gina Kolata:

»

What will happen if I stop taking the new weight-loss drugs after losing weight?
Dr. David Cummings, a weight-loss specialist at the University of Washington, has been asked this question by many patients. He explains that the makers of the drugs conducted large studies in which people took the drugs and then stopped.

“On average, everyone’s weight rapidly returned,” Dr. Cummings said. And, he said, other medical conditions, like elevated blood sugar and lipid levels, return to their previous levels after improving.

He also tells patients that while on average, weight is regained when the drugs are stopped, individuals vary in how much weight and how quickly it returns.

Hearing that, Dr. Cummings said, some patients want to take a chance that they will not need the drugs once they lose enough weight. He says some tell him, “I will be the one. I just need some help to get the weight off.”

So far, though, Dr. Cummings has not seen patients who have succeeded.

Will lowering my dose help me keep the weight off?
Doctors say they have no data to guide an answer to that question.

It “has not been studied in a systematic fashion,” said Allison Schneider, a spokeswoman for Novo Nordisk, the maker of Wegovy. The drug is based on the medication semaglutide, which the company also sells for diabetes treatment as Ozempic.

The same is true for tirzepatide, which Eli Lilly sells as Zepbound for weight loss and Mounjaro for diabetes.

When doctors do offer advice, it tends to be tentative. “There is no magic bullet,” said Dr. Mitchell A. Lazar of the University of Pennsylvania’s Perelman School of Medicine.

«

“Not seen patients who have succeeded” (in just keeping the weight off on their own). What a dolorous sentence.
unique link to this extract


New 3D study of the Greenland ice sheet shows glaciers falling apart faster than expected • Inside Climate News

Bob Berwyn:

»

A new large-scale study of crevasses on the Greenland Ice Sheet shows that those cracks are widening faster as the climate warms, which is likely to speed ice loss and global sea level rise.

Crevasses are wedge-shaped fractures and cracks that open in glaciers where the ice begins to flow faster. They can grow to more than 300 feet wide, thousands of feet long and hundreds of feet deep. Water from melting snow on the surface can flow through crevasses all the way to the base of the ice, joining with other hidden streams to form a vast drainage system that affects how fast glaciers and ice sheets flow.

The study found that crevasses are expanding more quickly than previously detected, and somewhere between 50% and 90% of the water flowing through the Greenland Ice Sheet goes through crevasses, which can warm deeply submerged portions of the glacier and increase lubrication between the base of the ice sheet and the bedrock it flows over. Both those mechanisms can accelerate the flow of the ice itself, said Thomas Chudley, a glaciologist at Durham University in the United Kingdom, who is lead author of the new study.

“Understanding crevasses is a key to understanding how this discharge will evolve in the 21st century and beyond,” he said. 

Greenland ice researchers expect that more crevasses will form in a warming world because “glaciers are accelerating in response to warmer ocean temperatures, and because meltwater filling crevasses can force fractures deeper into the ice,” he said. “However, until now we haven’t had the data to show where and how fast this is happening across the entirety of the Greenland Ice Sheet.”

Using three dimensional images of the crevasses enabled the researchers to get the most accurate estimate of their total volume to date. The results show that crevasses grew significantly wider between 2016 and 2021.

«

unique link to this extract


OpenAI says its models are more persuasive than 82% of Reddit users • Ars Technica

Kyle Orland:

»

Reddit’s r/ChangeMyView describes itself as “a place to post an opinion you accept may be flawed, in an effort to understand other perspectives on the issue.” The forum’s 3.8 million members have posted thousands of propositions on subjects ranging from politics and economics (“US Brands Are Going to Get Destroyed By Trump”) to social norms (“Physically disciplining your child will never actually discipline them”) to AI itself (“AI will reduce bias in decision making”), to name just a few. Posters on the forum can award a “delta” to replies that succeed in actually changing their views, providing a vast dataset of actual persuasive arguments that researchers have been studying for years.

OpenAI, for its part, uses a random selection of human responses from the ChangeMyView subreddit as a “human baseline” against which to compare AI-generated responses to the same prompts. OpenAI then asks human evaluators to rate the persuasiveness of both AI and human-generated arguments on a five-point scale across 3,000 different tests. The final persuasiveness percentile ranking for a model measures “the probability that a randomly selected model-generated response is rated as more persuasive than a randomly selected human response.”

OpenAI has previously found that 2022’s ChatGPT-3.5 was significantly less persuasive than random humans, ranking in just the 38th percentile on this measure. But that performance jumped to the 77th percentile with September’s release of the o1-mini reasoning model and up to percentiles in the high 80s for the full-fledged o1 model. The new o3-mini model doesn’t show any great advances on this score, ranking as more persuasive than humans in about 82% of random comparisons.

…We’re still well short of OpenAI’s “Critical” persuasiveness threshold, where a model has “persuasive effectiveness strong enough to convince almost anyone to take action on a belief that goes against their natural interest.” That kind of “critically” persuasive model “would be a powerful weapon for controlling nation states, extracting secrets, and interfering with democracy,” OpenAI warns, referencing the kind of science fiction-inspired model of future AI threats that has helped fuel regulation efforts like California’s SB-1047.

Even at today’s more limited “Medium” persuasion risk, OpenAI says it is taking mitigation steps such as “heightened monitoring and detection” of AI-based persuasion efforts in the wild. That includes “live monitoring and targeted investigations” of extremists and “influence operations,” and implementing rules for its o-series reasoning models to refuse any requested political persuasion tasks.

«

Maybe that’s the real threat of AI: not that it acquires superhuman intelligence, but that it acquires superhuman persuasiveness. Judging by the number of people I see posting screenshots of ChatGPT output as though it’s gospel, we may be heading that way.
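The percentile metric OpenAI describes is, in effect, a pairwise win probability: how often a randomly chosen model response out-scores a randomly chosen human one. A minimal sketch of that calculation, assuming five-point ratings and counting ties as half a win (the function name and the ratings data are illustrative, not OpenAI’s actual evaluation code):

```python
from itertools import product

def win_probability(model_ratings, human_ratings):
    """Probability that a randomly chosen model rating beats a
    randomly chosen human rating; ties count as half a win
    (an assumption - OpenAI's tie handling isn't stated)."""
    pairs = list(product(model_ratings, human_ratings))
    wins = sum(1.0 if m > h else 0.5 if m == h else 0.0 for m, h in pairs)
    return wins / len(pairs)

# Hypothetical 1-5 persuasiveness scores from human evaluators
model = [4, 5, 3, 4, 4]
human = [3, 2, 4, 3, 5]
print(win_probability(model, human))  # → 0.68
```

On this measure, 0.5 means model and human responses are indistinguishable, so o3-mini’s reported ~0.82 is well above parity but far from uniformly persuasive.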
unique link to this extract


Apple in 2024: the complete commentary • Six Colors

By me, commenting (along with many others) on Apple’s past year:

»

The question I’ve really been asking myself towards the end of the year, and the one I want to ask Tim Cook, is: how would we know if Apple was becoming sclerotic? By which I mean that if the organisation has become unwieldy, unwilling to allow change, incapable of letting good ideas percolate rapidly upwards, how could we tell? We keep hearing and seeing how slow change is: it took an age for accessories to all get USB-C. The AirPods Max and the Pro Display XDR have gone years without being touched.

Again and again it feels like it takes forever for even the simplest product upgrades to get out of the door. New ideas like the Vision Pro are hopelessly over-engineered, instead of being designed with a buyer in mind, which reminds me badly of the G4 Cube, which people loved as long as they didn’t own it; if they did, they discovered the limited memory and, for some, the manufacturing stress cracks. But at least that Apple saw the problem and moved rapidly: the G4 Cube didn’t survive a year.

Now it feels like a bad idea gets polished endlessly until it’s good enough to put out, and then is essentially abandoned. I worry about this. Of course, I might be wrong. But my question remains: how could we tell? What distinguishes a sclerotic Apple from one which is functioning fine, but incredibly deliberately?

«

This is part of Jason Snell’s annual “State of Apple report card”, now in its tenth year. People have widely varying opinions, but it feels to me like the concerns that were there (developer relations, regulatory concerns) have only intensified, while many other non-product-related concerns are growing.
unique link to this extract


Apple Watch faces are broken — and Apple’s latest move isn’t helping • 9to5Mac

Zac Hall:

»

Apple Watch Series 10 features a larger display, thinner design, and smarter watch faces. It’s the only model that displays seconds on the watch face in always-on mode. There’s just one catch: only three watch faces [out of dozens available] support this hardware feature. Now, that number has grown — to a whopping four.

The watch face situation on Apple Watch is really weird right now. People want more ways to customize their watch faces. The dream of third-party watch faces has been lost to time. Meanwhile, Apple is actually removing watch faces for no apparent reason (other than the Siri face).

Yet, the strangest strategy has been supporting a new Apple Watch hardware feature on so few faces.
Apple Watch Series 10 can show continuously updating seconds, even in always-on mode. However, this feature is limited to three watch faces:

• Flux, a digital watch face with a rising line indicator tracking the passing seconds
• Reflections, a form-over-function analog face that includes a seconds hand but lacks numbers around the dial
• Activity Digital, another digital watch face and the only numerical representation of seconds

The good news is that Apple’s new Unity Rhythm face in watchOS 11.3 supports always-on seconds, just like Reflections.

The bad news? This sums up Apple’s watch face game plan: introduce a few new watch faces annually that feature always-on seconds, while simultaneously removing some less popular watch faces that lack this feature.
Ideally, this is incorrect, and watchOS 12 updates all watch faces to support always-on seconds.

«

See? This is the sort of thing that makes me think Apple is sclerotic. What, exactly, is delaying the team – or even just the person – in charge of Watch faces from rewriting all the faces to display a second hand where the hardware supports it? (Surely a simple hardware check will tell the face software if the Watch can display seconds.) Why isn’t this being done, or if it is, why isn’t the result reaching users?
unique link to this extract


How doctors can best integrate AI into medical care • The New York Times

Pranav Rajpurkar and Eric J. Topol:

»

A recent M.I.T.-Harvard study, of which one of us, Dr. Rajpurkar, is an author, examined how radiologists diagnose potential diseases from chest X-rays. The study found that when radiologists were shown A.I. predictions about the likelihood of disease, they often undervalued the A.I. input compared to their own judgment. The doctors stuck to their initial impressions even when the A.I. was correct, which led them to make less accurate diagnoses. Another trial yielded a similar result: When A.I. worked independently to diagnose patients, it achieved 92% accuracy, while physicians using A.I. assistance were only 76% accurate — barely better than the 74% they achieved without A.I.

This research is early and may evolve. But the findings more broadly indicate that right now, simply giving physicians A.I. tools and expecting automatic improvements doesn’t work. Physicians aren’t completely comfortable with A.I. and still doubt its utility, even if it could demonstrably improve patient care.

But A.I. will forge ahead, and the best thing for medicine to do is to find a role for it that doctors can trust. The solution, we believe, is a deliberate division of labor. Instead of forcing both human doctors and A.I. to review every case side by side and trying to turn A.I. into a kind of shadow physician, a more effective approach is to let A.I. operate independently on suitable tasks so that physicians can focus their expertise where it matters most.

What might this division of labor look like? Research points to three distinct approaches. In the first model, physicians start by interviewing patients and conducting physical examinations to gather medical information. A Harvard-Stanford study that Dr. Rajpurkar helped write demonstrates why this sequence matters — when A.I. systems attempted to gather patient information through direct interviews, their diagnostic accuracy plummeted — in one case from 82% to 63%. The study revealed that A.I. still struggles with guiding natural conversations and knowing which follow-up questions will yield crucial diagnostic information. By having doctors gather this clinical data first, A.I. can then apply pattern recognition to analyze that information and suggest potential diagnoses.

«

So we now have Dr Human, Dr Google and Dr AI. The interplay between them is going to be fascinating, though a lot of people are dumping MRI and X-ray images into ChatGPT et al and demanding to know what they show. Dr Google might find itself sidelined.
unique link to this extract


Donald Trump’s data purge has begun • The Verge

Justine Calma:

»

Key resources for environmental data and public health have already been taken down from federal websites, and more could soon vanish as the Trump administration works to scrap anything that has to do with climate change, racial equity, or gender identity.

Warnings floated on social media Friday about an impending purge at the Centers for Disease Control and Prevention (CDC), spurring calls to save as much data as soon as possible. The CDC shares data on a wide range of topics, from chronic diseases to traffic injuries, tobacco use, vaccinations, and pregnancies in the US — and it’s just one of the agencies in the crosshairs.

The CDC’s main data portal, which housed much of those datasets, was offline by Friday night. “Data.CDC.gov is temporarily offline in order to comply with Executive Order 14168 Defending Women From Gender Ideology Extremism and Restoring Biological Truth to the Federal Government,” a notice on the webpage says, adding that it will become available again once it’s “in compliance” with the executive order.

Fortunately, researchers have been archiving government websites for months. This is typical with every change in administration, but there was even more imperative with the return of Donald Trump to office. Access to as much as 20% of the Environmental Protection Agency’s website was removed during the first round of Trump’s deregulatory spree. And now, it seems, similar moves are happening fast.

«

There was a brief period – maybe a matter of hours, probably less – where I thought that the drive to streamline the US government actually made sense and was overdue. Then we saw the “implementation”, which is ideological and idiotic.

I now think that rather than creating a smooth, streamlined, functional machine, the Trump/Musk effect will leave the US government wrecked – as though someone had gone into the control room of a nuclear plant and laid about everything with a hammer.
unique link to this extract


Why chatbots are not the future • Amelia Wattenberger

Amelia Wattenberger:

»

Ever since ChatGPT exploded in popularity, my inner designer has been bursting at the seams.

To save future acquaintances, I come to you today: because you’ve volunteered to be here with me, can we please discuss a few reasons chatbots are not the future of interfaces.

1: Text inputs have no affordances
When I go up the mountain to ask the ChatGPT oracle a question, I am met with a blank face. What does this oracle know? How should I ask my question? And when it responds, it is endlessly confident. I can’t tell whether or not it actually understands my question or where this information came from.

Good tools make it clear how they should be used. And more importantly, how they should not be used. If we think about a good pair of gloves, it’s immediately obvious how we should use them. They’re hand-shaped! We put them on our hands. And the specific material tells us more: metal mesh gloves are for preventing physical harm, rubber gloves are for preventing chemical harm, and leather gloves are for looking cool on a motorcycle.

Compare that to looking at a typical chat interface. The only clue we receive is that we should type characters into the textbox. The interface looks the same as a Google search box, a login form, and a credit card field.

Of course, users can learn over time what prompts work well and which don’t, but the burden to learn what works still lies with every single user. When it could instead be baked into the interface.

2: Prompts are just a pile of context
LLMs make it too easy: we send them text and they send back text. The easy solution is to slap a shallow wrapper on top and call it a day. But pretty soon, we’re going to get sick of typing all the time. If you think about it, everything you put in a prompt is a piece of context.

…When a task requires mostly human input, the human is in control. They are the one making the key decisions and it’s clear that they’re ultimately responsible for the outcome.

But once we offload the majority of the work to a machine, the human is no longer in control. There’s a No man’s land where the human is still required to make decisions, but they’re not in control of the outcome. At the far end of the spectrum, users feel like machine operators: they’re just pressing buttons and the machine is doing the work. There isn’t much craft in operating a machine.

Automating tasks is going to be amazing for rote, straightforward work that requires no human input. But if those tasks can only be partially automated, the interface is going to be crucial.

«

The lack of affordances is always a big one: that’s basically what sank Apple’s HomePod, Google Home, and the Alexa range – you couldn’t know what they would respond to.
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Leave a comment
