
Can you smile like Mark Zuckerberg? (What do you mean, he doesn’t smile?) A new webcam game will test you. CC-licensed photo by Enrique Dans on Flickr.
You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.
A selection of 9 links for you. We are not amused. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.
DOGE plans to rebuild SSA code base in months, risking benefits and system collapse • WIRED
Makena Kelly:
»
The so-called Department of Government Efficiency (DOGE) is starting to put together a team to migrate the Social Security Administration’s (SSA) computer systems entirely off one of its oldest programming languages in a matter of months, potentially putting the integrity of the system—and the benefits on which tens of millions of Americans rely—at risk.
The project is being organized by Elon Musk lieutenant Steve Davis, multiple sources who were not given permission to talk to the media tell WIRED, and aims to migrate all SSA systems off COBOL, one of the first common business-oriented programming languages, and onto a more modern replacement like Java within a scheduled tight timeframe of a few months.
Under any circumstances, a migration of this size and scale would be a massive undertaking, experts tell WIRED, but the expedited deadline runs the risk of obstructing payments to the more than 65 million people in the US currently receiving Social Security benefits.
“Of course, one of the big risks is not underpayment or overpayment per se; [it’s also] not paying someone at all and not knowing about it. The invisible errors and omissions,” an SSA technologist tells WIRED.
The Social Security Administration did not immediately reply to WIRED’s request for comment.
SSA has been under increasing scrutiny from president Donald Trump’s administration. In February, Musk took aim at SSA, falsely claiming that the agency was rife with fraud. Specifically, Musk pointed to data he allegedly pulled from the system that showed 150-year-olds in the US were receiving benefits, something that isn’t actually happening. Over the last few weeks, following significant cuts to the agency by DOGE, SSA has suffered frequent website crashes and long wait times over the phone, The Washington Post reported this week.
…Sources within SSA expect the project to begin in earnest once DOGE identifies and marks remaining beneficiaries as deceased and connects disparate agency databases. In a Thursday morning court filing, an affidavit from SSA acting administrator Leland Dudek said that at least two DOGE operatives are currently working on a project formally called the “Are You Alive Project,” targeting what these operatives believe to be improper payments and fraud within the agency’s system by calling individual beneficiaries.
«
This is going to go wrong. One can predict that with certainty. The only question is how wrong. Badly wrong? Quickly wrong? Enormously wrong? All of those wrong? And the other question is: why? What’s to be gained? Nobody seems able or willing to answer that crucial question.
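To make the SSA technologist’s “invisible errors” warning concrete: COBOL systems of this vintage do their money arithmetic in fixed-point decimal, and a hurried port to a language whose default habits are binary floating point can change results by a cent without anyone noticing. Here’s a minimal sketch of that failure mode in Python (standing in for both languages); the benefit formula and the 2.5% cost-of-living rate are invented for illustration, and SSA’s real calculations are vastly more involved.

```python
from decimal import Decimal, ROUND_HALF_UP

# COBOL money fields (e.g. PIC 9(7)V99) are fixed-point decimal:
# amounts are exact cents, conventionally rounded half-up.
def cobol_style(base: Decimal, cola: str) -> Decimal:
    return (base * (1 + Decimal(cola))).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# A careless port to binary floating point rounds some of the same inputs differently.
def naive_float(base: Decimal, cola: str) -> float:
    return round(float(base) * (1 + float(cola)), 2)

mismatches = 0
for cents in range(100_000, 200_000):          # monthly amounts from $1,000.00 to $1,999.99
    base = Decimal(cents) / 100
    if cobol_style(base, "0.025") != Decimal(str(naive_float(base, "0.025"))):
        mismatches += 1

print(f"{mismatches:,} of 100,000 amounts come out a cent different")
```

Run it and you should see a small but non-zero number of disagreements. Multiply that effect across decades of payment history and thousands of interacting rules, and “invisible errors and omissions” is exactly what a few-months rewrite invites.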
unique link to this extract
iOS 19.4 rumoured to revamp Health app with new coaching feature • MacRumors
Joe Rossignol:
»
In his Power On newsletter today, Gurman said Apple plans to offer a new AI-powered health coaching feature that offers personalized health recommendations.
The information provided by the coaching feature would be accompanied by videos from health experts that inform users about various health conditions and ways to make lifestyle improvements. For example, Gurman said if the Apple Watch tracks poor heart-rate trends, a video could explain the risks of heart disease.
It is possible that the feature could eventually be part of an Apple Health+ service.
Food tracking will be another big part of the revamped Health app, which could compete with the MyFitnessPal app, according to Gurman.
Apple is also aiming for the AI-powered coaching feature to provide users with fitness-related tips, such as how to improve their technique during workouts. This feature could eventually be built into the Apple Fitness+ service.
«
I can’t remember who observed that Apple increasingly seems to be thinking of new features which are aimed at semi-geriatric users, but they’re right. I’ll reserve judgement on the AI-powered coaching, but I don’t have high hopes. If Apple hasn’t got its LLM-based Siri/AI out of the door this year, is it really going to blow the doors off with AI-powered advice on how you’re lifting wrong?
unique link to this extract
Nuclear startup Terrestrial Energy goes public via SPAC, netting $280m in merger • TechCrunch
Tim De Chant:
»
Terrestrial Energy, a small nuclear startup, merged with an acquisition company on Wednesday.
The North Carolina-based company is developing small modular reactors and expects to net $280m from the deal. Before the SPAC merger, Terrestrial Energy had raised $94m, according to PitchBook. The combined entity expects to list on Nasdaq under the symbol IMSR.
The ticker is a reference to Terrestrial Energy’s flavor of small modular reactor (SMR), which it calls an integral molten salt reactor. In such a device, uranium fuel is mixed with various salts, such as lithium fluoride or sodium fluoride, that serve to suspend the nuclear fuel and act as the reactor’s main coolant.
Terrestrial Energy’s reactor core is designed to be entirely replaced every seven years, in part to head off some of the problems earlier molten salt reactors experienced, like corrosion. The reactor core includes not only the fuel and graphite moderators that regulate the speed of the fission reactions, but also the heat exchangers and pumps that keep the salt cool and flowing.
The startup is targeting a range of markets, including electric power, data centers, and industrial applications that require heat.
There are many extant proposals to build commercial-scale molten salt reactors, but to date, none have been built. The basic technology was invented in the 1950s, but two experiments from that era were plagued with problems.
«
The link there in the final sentence is extremely gloomy about the potential for molten salt reactors, so we’ll have to see whether Terrestrial can live up to its promises.
unique link to this extract
The Myanmar quake is the first major disaster to suffer the brunt of Donald Trump’s devastating cuts • Sky News
Dominic Waghorn:
»
Trump’s decision to shut down the US Agency for International Development was already reported to have decimated US aid operations in Myanmar. Its global impact is hard to overstate. American aid had provided 40% of developmental aid worldwide.
Yesterday, Trump promised Myanmar aid for the earthquake. In reality, his administration has fired most of the people most experienced at organising that help and shut down the means to provide it.
Ironically, the last of its staff were let go only yesterday, even as the president was making lofty promises to help.
The US State Department says it has maintained a team of experts in the country. But former USAID officials say the system is now ‘in shambles’ without the wherewithal to conduct search and rescue or transfer aid.
As they count the cost of this massive earthquake, the people of Myanmar will be hoping, though, for a silver lining, that the disaster may hasten the fall of their despised dictator.
The catastrophe comes at a very bad time for General Min Aung Hlaing, who seized power in a coup four years ago. The Myanmar junta is losing a civil war against an array of opposition forces, ceding territory and now largely kettled into the country’s big cities. Some of the earthquake’s worst damage has been done in the junta’s urban strongholds.
«
More than 1,600 people have died in the earthquake: a terrible human price to pay for the failures of both Trump (and Musk) and the junta.
unique link to this extract
How not to build an AI Institute • Chalmermagne
Alex Chalmers looks in detail at how the Alan Turing Institute (ATI) got to its current position – ie effectively useless:
»
The story of the ATI is, in many ways, the story of the UK’s approach to technology.
Firstly, drift. The UK has chopped and changed its approach to technology repeatedly, choosing seven different sets of priority technologies between 2012 and 2023. Government has variously championed the tech industry as a source of jobs, a vehicle for exports, a means of fixing public services, and a way of expanding the UK’s soft power. These are all legitimate goals, but half-heartedly attempting all of them over a decade is a surefire means of accomplishing relatively little.
Connected to this is our second challenge: everythingism. My friend Joe Hill described this aptly as “the belief that every government policy can be about every other government policy, and that there are no real costs to doing that”. This results in policymakers loading costs onto existing projects, at the expense of efficiency and prioritisation.
As the ATI’s original goals were so vague, it was a prime target. Even before the ATI was up and running, the government announced that it would also be a body responsible for allocating funding for fintech projects. It then had a data ethics group bolted onto it as a result of a 2016 select committee report. As one ex-insider put it, “there was never a superordinate goal”.
Finally, the perils of the UK government’s dependence on the country’s universities for research. The UK has historically channelled 80% of its non-business R&D through universities, versus 40-60% for many peer nations.
«
Back in December 2024 there was this story about “Staff at Britain’s AI institute in open revolt”. Everyone, including the staff, seems to think it hasn’t kept pace with the changes in AI in the past three years.
unique link to this extract
AI models miss disease in Black and female patients • Science
Rodrigo Pérez Ortega:
»
From programs designed to detect irregular heartbeats in electrocardiograms to software that tracks eye movements to diagnose autism in children, artificial intelligence (AI) is helping physicians fine-tune the care they provide patients. But for all the technology’s potential for automating tasks, a growing body of evidence also shows that AI can be prone to bias that disadvantages already vulnerable patients. A new study, published in Science Advances, adds to this work by testing one of the most cited AI models used to scan chest x-rays for diseases—and finding the model doesn’t accurately detect potentially life-threatening diseases in marginalized groups, including women and Black people.
These results “are interesting and timely,” says Kimberly Badal, a computational biologist at the University of California (UC), San Francisco, who was not involved in the new study. “We are at the point in history where we’re moving to deploy a lot of AI models into clinical care,” she says, but “we don’t really know” how they affect different groups of people.
The model used in the new study, called CheXzero, was developed in 2022 by a team at Stanford University using a data set of almost 400,000 chest x-rays of people from Boston with conditions such as pulmonary edema, an accumulation of fluids in the lungs. Researchers fed their model the x-ray images without any of the associated radiologist reports, which contained information about diagnoses. And yet, CheXzero was just as good as the radiologists in reading the diseases associated with each x-ray.
Given AI models’ tendencies for bias, Yuzhe Yang, a computer scientist at UC Los Angeles, wanted to assess the Stanford team’s model for such biases. His team selected a subset of 666 x-ray images from the same data set that was used to train the model: the data set’s only images that also came with radiologists’ diagnoses and information about each patient’s age, sex, and race. The team then fed these images to CheXzero and compared the results against the radiologists’ diagnoses.
Compared with the patients’ doctors, the AI model more often failed to detect the presence of disease in Black patients or women, as well as in those 40 years or younger. When the researchers looked at race and sex combined, Black women fell to the bottom, with the AI not detecting disease in half of them for conditions such as cardiomegaly, or enlargement of the heart.
«
Are we surprised? No we are not. The problem of insufficiently representative training data is a persistent one in AI.
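The audit the researchers describe is, at its core, straightforward to reproduce on any model you already have predictions from: compare false-negative (underdiagnosis) rates across demographic subgroups. Here is a minimal sketch in Python with pandas; the file and column names are assumptions for illustration, not the paper’s actual code or data.

```python
import pandas as pd

# Hypothetical evaluation file: one row per patient, with the radiologist's
# label (1 = disease present), the model's prediction (1 = disease flagged),
# and demographic attributes.
df = pd.read_csv("chest_xray_eval.csv")

# Underdiagnosis is the false-negative rate: among patients the radiologists
# say are diseased, how often does the model call them healthy?
diseased = df[df["label"] == 1]
fnr = diseased.assign(missed=diseased["pred"].eq(0))

print(fnr.groupby("sex")["missed"].mean())            # by sex
print(fnr.groupby("race")["missed"].mean())           # by race
print(fnr.groupby(["sex", "race"])["missed"].mean())  # intersections, where the study
                                                      # found Black women fared worst
```

If those subgroup rates diverge sharply, you have the same problem the Science Advances paper found, however good the headline accuracy looks.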
unique link to this extract
smile like zuck
»
The secret to a perfect Zuck smile is his mouth and eyes. Focus on those.
Remember to keep the right amount of life in your eyes!
This site requires webcam access, but your face never leaves your computer.
«
There’s more explanation at this site. You can also see the full range from “no smile” to “total Zuck smile” at this site, so you know what your target is. Remember, a Zuck smile isn’t like a human smile. It’s not meant to express happiness, empathy or any of that nonsense.
unique link to this extract
Agencies say Google reps are pressuring clients on AI ad tools • Digiday
Marty Swant:
»
Ad agencies have long fielded relentless pitches from Google Ads reps pushing new products in search of more money. But now, agencies of all sizes say the pressure is intensifying, with reps pushing harder to drive adoption of automated tools like Performance Max and generative AI features.
Google Ads sales reps are increasingly contacting agencies’ clients with advice that at times contradicts agency strategies — and in some cases mismanages campaigns — according to a range of media agencies in the U.S. and U.K. Sources also say the tactics feel more aggressive and more inappropriate than in the past.
Many agencies say the efforts seem designed to sow confusion, discredit agencies and ultimately cut them out of the picture. For example, agencies claim that when they reject Google reps’ misaligned advice, the reps go around them — directly to clients — discrediting the agency by implying they don’t understand how Google Ads work. PPC professionals also shared similar concerns in a recent Reddit thread, where one user likened the issue to “being told your house needs paint by the guy who sells the paint and does the job.”
Agencies have been left scrambling, said Ian Harris, founder of Agency Hackers, a U.K.-based community of indie agencies. To address the problem, Agency Hackers has created a new “Don’t Be Evil” campaign for agencies to share their experiences and frustrations about Google’s ad-sales tactics.
«
Reflecting on TikTok’s role in society as new ban deadline approaches • The New York Times
Brian Chen:
»
TikTok’s effectiveness at keeping people scrolling has been a topic of widespread concern among parents and academic researchers who wonder whether people could be considered addicted to the app, similar to video game addiction.
Studies on the topic are continuing and remain inconclusive. One, published last year and led by Christian Montag, a professor of cognitive and brain sciences at the University of Macau in China, examined TikTok overuse. Very few people in the study, which involved 378 participants of various ages, reported feeling addicted to TikTok.
Yet broadly speaking, the consensus from multiple studies on TikTok and other social media apps is that younger people are more likely to report feeling addicted, Dr. Montag said in an interview. “I think children should not at all be on these platforms,” he said about TikTok and similar apps. People’s brains can take at least 20 years to mature and self-regulate, he added.
A TikTok spokeswoman said the app included tools for people to manage their screen time, including a new setting for parents to block TikTok from working on their children’s phones during certain hours of the day.
…TikTok was banned in the first place because American government officials worry that ByteDance could share the data it has collected on its American users with the Chinese government for espionage purposes.
Those concerns culminated in a Supreme Court hearing in January, where the Biden administration made its case for banning the app, citing concerns that TikTok could create a new pathway for Chinese intelligence services to infiltrate American infrastructure. But officials did not present evidence that TikTok was connected to such threats.
TikTok has, however, been linked to smaller data scandals in the United States. TikTok confirmed in 2022 that four of its employees had been fired for using the app to snoop on several journalists in an effort to track down their sources.
«
At this stage it would be astonishing if Trump banned TikTok. Even the proposal of a sale of its US assets or control to an American company looks remote.
unique link to this extract
• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?
Read Social Warming, my latest book, and find answers – and more.
Errata, corrigenda and ai no corrida: none notified








