
Might the beautiful people in New York go to the restaurants with the best reviews? Perhaps! CC-licensed photo by Michele Ursino on Flickr.
You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.
There’s another post coming this week at the Social Warming Substack on Friday at 0845 UK time. Free signup.
A selection of 9 links for you. Just desert, thanks. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.
Things you should never do, part I • Joel on Software
Joel Spolsky:
»
We’re programmers. Programmers are, in their hearts, architects, and the first thing they want to do when they get to a site is to bulldoze the place flat and build something grand. We’re not excited by incremental renovation: tinkering, improving, planting flower beds.
There’s a subtle reason that programmers always want to throw away the code and start over. The reason is that they think the old code is a mess. And here is the interesting observation: they are probably wrong. The reason that they think the old code is a mess is because of a cardinal, fundamental law of programming: it’s harder to read code than to write it.
This is why code reuse is so hard. This is why everybody on your team has a different function they like to use for splitting strings into arrays of strings. They write their own function because it’s easier and more fun than figuring out how the old function works.
As a corollary of this axiom, you can ask almost any programmer today about the code they are working on. “It’s a big hairy mess,” they will tell you. “I’d like nothing better than to throw it out and start over.”
Why is it a mess?
“Well,” they say, “look at this function. It is two pages long! None of this stuff belongs in there! I don’t know what half of these API calls are for.”
Before Borland’s new spreadsheet for Windows shipped, Philippe Kahn, the colourful founder of Borland, was quoted a lot in the press bragging about how Quattro Pro would be much better than Microsoft Excel, because it was written from scratch. All new source code! As if source code rusted.
The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they’ve been fixed. There’s nothing wrong with it. It doesn’t acquire bugs just by sitting around on your hard drive. Au contraire, baby! Is software supposed to be like an old Dodge Dart, that rusts just sitting in the garage? Is software like a teddy bear that’s kind of gross if it’s not made out of all new material?
«
This is in the context of the Musk project to rewrite the code of the Social Security Administration (SSA) from COBOL to Java. And sure, you can use AI to help! (It won’t help. You’ll need to check everything. And that’s before we get to the subsystems that rely on it.)
One commenter yesterday suggested that this is essentially a ruse: make the SSA collapse because the rewritten code doesn’t work, and push people towards private alternatives instead. Of course the rewrite won’t work, for the reasons Spolsky explains above. The US is about to enter a very dark time.
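Spolsky’s string-splitting example can be made concrete. This sketch is my illustration, not from the article: a hand-rolled split of the kind a programmer writes because it’s “easier and more fun than figuring out how the old function works” — duplicating what Python’s battle-tested `str.split` already does.

```python
def my_split(text, sep):
    """Split `text` on a single-character `sep` -- reinventing str.split."""
    parts, current = [], []
    for ch in text:
        if ch == sep:
            parts.append("".join(current))
            current = []
        else:
            current.append(ch)
    parts.append("".join(current))
    return parts

# Both produce the same result; the stdlib version has had its edge
# cases found and fixed over decades of use -- Spolsky's "old code".
assert my_split("a,b,c", ",") == "a,b,c".split(",")
```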
unique link to this extract
Twitter (X) hit by 2.8 billion profile data leak in alleged insider job • Hackread
“Waqas”:
»
ThinkingOne, a well-known figure on Breach Forums for their skill in analyzing data leaks, decided to combine the 2025 leak with the 2023 one, producing a single 34GB CSV file (9GB compressed) containing 201 million merged entries. To be clear, the merged data only includes users that appeared in both incidents, creating a confusion of public and semi-public data.
This messy combination led many to believe that the 2025 leak also contained email addresses, but that’s not the case. The emails shown in the merged file are from the 2023 breach. The presence of emails in the merged dataset has given the wrong impression that the contents of the 2025 leak also include email addresses.
As of Jan 2025, X (formerly Twitter) had around 335.7 million users, so how is it possible that data from 2.8 billion users has been leaked? One possible explanation is that the dataset includes aggregated or historical data, such as bot accounts that were created and later banned, inactive or deleted accounts that still lingered in historical records, or old data that was merged with newer data, increasing the total number of records.
Additionally, some entries might not even represent real users but could include non-user entities like API accounts, developer bots, deleted or banned profiles that remained logged somewhere, or organization and brand accounts that aren’t tied to individual users.
Another possibility is that the leaked data wasn’t exclusively obtained from Twitter itself but rather scraped from multiple public sources and merged together, including archived data from older leaks or information from third-party services linked to Twitter accounts.
«
Or how about this for a third possibility: it’s mostly junk. It’s just about possible that there have, historically, been more than a billion entries in the Twitter (now X) database, but this all feels fanciful. And also: unimportant, for the most part.
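The merge ThinkingOne describes — keeping only users who appear in both leaks — is an inner join on user ID, which is exactly why every email in the combined file traces back to the 2023 breach. A minimal sketch of that join, with invented IDs and field names (the real inputs are multi-gigabyte CSVs):

```python
# Hypothetical slices of the two leaks, keyed by user ID.
leak_2023 = {"101": {"email": "a@example.com"},
             "102": {"email": "b@example.com"}}
leak_2025 = {"102": {"screen_name": "bob"},
             "103": {"screen_name": "carol"}}

merged = {
    uid: {**leak_2023[uid], **leak_2025[uid]}       # combine fields from both
    for uid in leak_2023.keys() & leak_2025.keys()  # inner join: IDs in both
}
# Only user 102 appears in both leaks, so only 102 survives -- and its
# email field can only have come from the 2023 data, as the article notes.
```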
unique link to this extract
LooksMapping
Riley Walz:
»
I scraped millions of Google Maps restaurant reviews, and gave each reviewer’s profile picture to an AI model that rates how hot they are out of 10. This map shows how attractive each restaurant’s clientele is. Red means hot, blue means not.
The model is certainly biased. It’s certainly flawed. But we judge places by the people who go there. We always have. And are we not also flawed? This website just puts reductive numbers on the superficial calculations we make every day. A mirror held up to our collective vanity.
«
There’s a paper explaining the methodology of the website, which has a wonderfully retro 2005 “look, we discovered the Google Maps API!” appearance. Walz has form for doing interesting little projects like this.
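Reduced to its essentials, Walz’s pipeline is: score each reviewer’s profile picture, then average the scores per restaurant. A sketch with invented restaurants and scores (the real site runs a vision model over millions of reviews):

```python
from statistics import mean

# Hypothetical per-review records: (restaurant, AI score out of 10 for
# the reviewer's profile picture). All values invented for illustration.
reviews = [
    ("Luigi's", 8.0), ("Luigi's", 7.0),
    ("Bob's Diner", 4.0), ("Bob's Diner", 5.0), ("Bob's Diner", 3.0),
]

# Group reviewer scores by restaurant, then average -- the number
# the map would colour by (red means hot, blue means not).
by_restaurant = {}
for name, score in reviews:
    by_restaurant.setdefault(name, []).append(score)

ratings = {name: mean(scores) for name, scores in by_restaurant.items()}
# Luigi's averages 7.5; Bob's Diner averages 4.0
```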
unique link to this extract
Can an interstellar generation ship maintain a population on a 250-year trip to a habitable exoplanet? • Centauri Dreams
Paul Gilster:
»
One issue is not mentioned, despite the journey duration. Over the quarter millennium voyage, there will be evolution as the organisms adapt to the ship’s environment. Data from the ISS has shown that bacteria may mutate into more virulent pathogens. A population living in close quarters will encourage pandemics. Ionizing radiation from the sun and secondaries from the hull of a structure damage cells including their DNA. 250 years of exposure to residual GCR and secondaries will damage DNA of all life on the starship.
However, even without this direct effect on DNA, the conditions will result in organisms evolving as they adapt to the conditions on the starship, especially the small populations, increasing genetic drift. This evolution, even of complex life, can be quite fast, as the continuing monitoring of the Galápagos island finches observed by Darwin attests. Of particular concern is the creation of pathogens that will impact both humans and the food supply.
In the 1970s, the concept of a microbiome in humans, animals, and some plants was unknown, although bacteria were part of nutrient cycling. Now we know much more about the need for humans to maintain a microbiome, as well as some food crops. This could become a source of pathogens. While a space habitat can just flush out the agricultural infrastructure and replace it, no such possibility exists for the starship. Crops would need to be kept in isolated compartments to prevent a disease outbreak from destroying all the crops in the ECLSS [Environmental Control and Life Support System].
If all this wasn’t difficult enough, the competition asks that the target generation population find a ready-made terrestrial habitat/terraformed environment to slip into on arrival. This presumably was prebuilt by a robotic system that arrived ahead of the crewed starship to build the infrastructure and create the environment ready for the human crew. It is the Mars agricultural problem writ large, with no supervision from humans to correct mistakes. If robots could do this on an exoplanet, couldn’t they make terrestrial habitats throughout the solar system?
«
It’s a long post, but this part stood out to me.
unique link to this extract
The tech fantasy that powers AI is running on fumes • The New York Times
Tressie McMillan Cottom:
»
Behold the decade of mid tech!
That is what I want to say every time someone asks me, “What about A.I.?” with the breathless anticipation of a boy who thinks this is the summer he finally gets to touch a boob.
…Most of us aren’t using A.I. to save lives faster and better. We are using A.I. to make mediocre improvements, such as emailing more. Even the most enthusiastic papers about A.I.’s power to augment white-collar work have struggled to come up with something more exciting than “A brief that once took two days to write will now take two hours!”
Mid tech’s best innovation is a threat.
A.I. is one of many technologies that promise transformation through iteration rather than disruption. Consumer automation once promised seamless checkout experiences that empowered customers to bag our own groceries. It turns out that checkout automation is pretty mid — cashiers are still better at managing points of sale. A.I.-based facial recognition similarly promised a smoother, faster way to verify who you are at places like the airport. But the T.S.A.’s adoption of the technology (complete with unresolved privacy concerns) hasn’t particularly revolutionized the airport experience or made security screening lines shorter. I’ll just say, it all feels pretty mid to me.
The economists Daron Acemoglu and Pascual Restrepo call these kinds of technological fizzles “so-so” technologies. They change some jobs. They’re kind of nifty for a while. Eventually they become background noise or are flat-out annoying, say, when you’re bagging two weeks’ worth of your own groceries.
Artificial intelligence is supposedly more radical than automation. Tech billionaires promise us that workers who can’t or won’t use A.I. will be left behind. Politicians promise to make policy that unleashes the power of A.I. to do … something, though many of them aren’t exactly sure what. Consumers who fancy themselves early adopters get a lot of mileage out of A.I.’s predictive power, but they accept a lot of bugginess and poor performance to live in the future before everyone else.
The rest of us are using this technology for far more mundane purposes. A.I. spits out meal plans with the right amount of macros, tells us when our calendars are overscheduled and helps write emails that no one wants. That’s a mid revolution of mid tasks.
«
These days, it’s quite easy to be an AI sceptic. But also: it’s hard to identify the things it does that most of us find really useful.
unique link to this extract
Careless People book review: Sarah Wynn-Williams’ Facebook memoir reveals Meta’s global problems • Rest of World
Sabhanaz Rashid Diya:
»
In recounting events, the author glosses over her own indifference to repeated warnings from policymakers, civil society, and internal teams outside the U.S. that ultimately led to serious harm to communities. She briefly mentions how Facebook’s local staff was held at gunpoint to give access to data or remove content in various countries — something that had been happening since as early as 2012. Yet she failed to grasp the gravity of these risks until the possibility of her facing jail time arose in South Korea — or even more starkly in March 2016, when Facebook’s vice president for Latin America, Diego Dzodan, was arrested in Brazil.
Her delayed reckoning underscores how Facebook’s leadership remains largely detached from real-world consequences of their decisions until they become impossible to ignore. Perhaps because everyone wants to be a hero of their own story, Wynn-Williams frames her opposition to leadership decisions as isolated; in reality, powerful resistance had long existed within what Wynn-Williams describes as Facebook’s “lower-level employees.”
…Throughout her recollections, Wynn-Williams describes extravagant off-sites, high-profile meetings, and grandiose visions to “sell” Facebook to world leaders. But the truth is, policy outside the U.S. took unglamorous and thankless grunt work, deep contextual and political expertise, and years of trust-building with communities — all faced with the routine risk of arrests and illegal detention. By trying to be the Everyman, she undermines experts, civil society, and local teams who informed her work. These glaring omissions speak to both Facebook’s indifference and moral superiority toward the rest of the world — even from its most well-meaning leaders.
Despite telling an incomplete story, Careless People is a book that took enormous courage to write. This is Wynn-Williams’ story to tell, and it is an important one. It goes to show that we need many stories — especially from those who still can’t be heard — if we are to meaningfully piece together the complex puzzle of one of the world’s most powerful technology companies.
«
unique link to this extract
Enzyme engineering: new method selectively destroys disease-causing proteins • Phys.org
Scripps Research Institute:
»
Scientists have long struggled to target proteins that lack defined structure and are involved in cancer, neurodegenerative disorders like Parkinson’s disease, and other serious illnesses. Now, a new study from Scripps Research demonstrates a proof of concept for a new strategy: engineering proteases—enzymes that cut proteins at specific sites—to selectively degrade these elusive targets with high precision in the proteome of human cells.
Published on March 24, 2025, in the Proceedings of the National Academy of Sciences, the study shows how to reprogram a protease from botulinum toxin to target α-Synuclein—a protein with unstructured regions used here as a model. The study marks one proof point in a broader approach that could be applied to a wide range of targets across the proteome.
“This work highlights how we can use the power of laboratory evolution to engineer proteases that offer a new way to treat diseases caused by hard-to-target proteins,” says senior author Pete Schultz, the President and CEO of Scripps Research, where he also holds the L.S. “Sam” Skaggs Presidential Chair. “It’s an exciting step toward developing new therapeutic strategies for diseases that lack effective treatments.”
The research builds on botulinum toxin, a bacterial protein best known for its use in Botox, a medication utilized for cosmetic purposes and certain medical conditions. This toxin naturally contains a protease. In its original form, the protease only targets SNAP-25—a protein essential for transmitting signals between nerve cells. By degrading SNAP-25, botulinum toxin disrupts nerve signaling, leading to the temporary paralysis effect seen after Botox treatments.
To reprogram this precision for α-Synuclein, the research team modified the enzyme using directed evolution, a laboratory process that involves introducing mutations and selecting variants with improved function over multiple cycles. The result: Protease 5.
…When tested in human cells, Protease 5 nearly eliminated all α-Synuclein proteins, suggesting it could help prevent the harmful buildup seen in Parkinson’s disease. And because the enzyme was designed to precisely target α-Synuclein, it didn’t cause toxicity or disrupt essential cellular functions.
«
Only a proof of concept, but a really interesting one. You can imagine that they have their eyes on Alzheimer’s disease, which also involves malformed protein deposits.
unique link to this extract
The Gen X career meltdown • The New York Times
Steven Kurutz:
»
Gen X-ers [born between the mid-1960s and late 1970s] grew up as the younger siblings of the baby boomers, but the media landscape of their early adult years closely resembled that of the 1950s: a tactile analog environment of landline telephones, tube TV sets, vinyl records, glossy magazines and newspapers that left ink on your hands.
When digital technology began seeping into their lives, with its AOL email accounts, Myspace pages and Napster downloads, it didn’t seem like a threat. But by the time they entered the primes of their careers, much of their expertise had become all but obsolete.
More than a dozen members of Generation X interviewed for this article said they now find themselves shut out, economically and culturally, from their chosen fields.
“My peers, friends and I continue to navigate the unforeseen obsolescence of the career paths we chose in our early 20s,” Mr. Wilcha said. “The skills you cultivated, the craft you honed — it’s just gone. It’s startling.”
Every generation has its burdens. The particular plight of Gen X is to have grown up in one world only to hit middle age in a strange new land. It’s as if they were making candlesticks when electricity came in. The market value of their skills plummeted.
Karen McKinley, 55, an advertising executive in Minneapolis, has seen talented colleagues “thrown away,” she said, as agencies have merged, trimmed staff and focused on fast, cheap social media content over elaborate photo shoots.
“Twenty years ago, you would actually have a shoot,” Ms. McKinley said. “Now, you may use influencers who have no advertising background.”
In the wake of the influencers comes another threat, artificial intelligence, which seems likely to replace many of the remaining Gen X copywriters, photographers and designers. By 2030, ad agencies in the United States will lose 32,000 jobs, or 7.5% of the industry’s work force, to the technology, according to the research firm Forrester.
«
unique link to this extract
Apple might buy $1bn worth of Nvidia servers • Quartz
Ece Yildrim:
»
The tech giant is reportedly placing roughly $1bn worth of orders of Nvidia’s GB300 NVL72 server platform, including the company’s next-generation Blackwell Ultra chips, built by Super Micro and Dell. With each server costing around $3.7m to $4m, Baruah estimates that Apple is buying approximately 250 servers. Apple didn’t immediately respond to Quartz’s request for comment.
Baruah expects Apple to use these servers to run or train generative AI large language models. The move could have stemmed from the intense backlash Apple received in response to its decision to delay a much-anticipated generative AI upgrade of its voice assistant Siri.
Apple began working on integrating advanced AI technology into its products as part of its Apple Intelligence initiative, which the company introduced last June at its annual developer conference, WWDC.
The tech giant first teased a so-called “LLM Siri” based on advanced large language models last year, in an effort to scale its generative AI capabilities and catch up to industry rivals like OpenAI and Amazon.
Although an arrival date was never publicly set, LLM Siri was widely anticipated to come in an iOS 18.4 upgrade expected next month. Now, the AI-infused Siri will likely be unveiled next year. Apple pulled its previous ads featuring the capability.
«
If this is Apple’s response, then… it’s a little late? Unless they really think they can train these models up that quickly and make a difference, in which case the new Siri leadership has lit a fire under the staff.
unique link to this extract
• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?
Read Social Warming, my latest book, and find answers – and more.
Errata, corrigenda and ai no corrida: none notified