
The maths behind mammograms is complicated – and the human stories they hide even more so. CC-licensed photo by Kristie Wells on Flickr.
You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.
A selection of 9 links for you. Pre-screened. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.
Gone (almost) phishin’ • Ma.tt
Matt Mullenweg:
»
This is a little embarrassing to share, but I’d rather someone else be able to spot a dangerous scam before they fall for it. So, here goes.
One evening last month, my Apple Watch, iPhone, and Mac all lit up with a message prompting me to reset my password. This came out of nowhere; I hadn’t done anything to elicit it. I even had Lockdown Mode running on all my devices. It didn’t matter. Someone was spamming Apple’s legitimate password reset flow against my account—a technique Brian Krebs documented back in 2024. I dismissed the prompts, but the stage was set.
What made the attack impressive was the next move: The scammers actually contacted Apple Support themselves, pretending to be me, and opened a real case claiming I’d lost my phone and needed to update my number. That generated a real case ID, and triggered real Apple emails to my inbox, properly signed, from Apple’s actual servers. These were legitimate; no filter on earth could have caught them.
Then “Alexander from Apple Support” called. He was calm, knowledgeable, and careful. His first moves were solid security advice: check your account, verify nothing’s changed, consider updating your password. He was so good that I actually thanked him for being excellent at his job.
That, of course, was when he moved into the next phase of the attack. He texted me a link to review and cancel the “pending request.” The site, audit-apple.com, was a pixel-perfect Apple replica, and displayed the exact case ID from the real emails I’d just received. There was even a fake chat transcript of the scammers’ actual conversation with Apple, presented back to me as evidence of the attack against my account. At the bottom of the page was a Sign in with Apple button that he told me to use.
I started poking at the page and noticed I could enter any case ID and get the same result. Nothing was being validated. It was all theater.
“This is really good,” I told Alexander. “This is obviously phishing. So tell me about the scam.”
Silence. *Click*.
«
One can hope that all readers will spot that Apple won’t call you, because none of these companies is going to do that. They’d never be off the phone if they did.
unique link to this extract
The salaries of 60 New Yorkers • NY Mag
The Editors:
»
The last time we surveyed New Yorkers about their paychecks, the math was easy. Well, easier. In 2005, a blogger at Gawker made $30,000 and the CEO at Lehman Brothers more than $35m. Back then, there was no “gig economy,” at least not as we know it today, and coffee shops from Bed-Stuy to the Upper East Side weren’t lousy with model–pickleballer–nanny–actor–producer–DJ–creative directors.
Some 20 years later, amid a radically different economic environment in which the nature of work feels as if it’s about to change forever, we set out to conduct a similar experiment. We reached into our network of sources, blind-messaged LinkedIn profiles, put out a casting call on Instagram, even stopped strangers in Union Square. What we discovered, just before a jobs report earlier this month confirmed a dwindling labor market, is that salaries across most industries have not kept up with inflation in a city that has become exorbitantly expensive. For so many professionals we spoke with, some of whom agreed to pose for the paper-bag portraits in this story, the answer to stalling wages and cost-of-living expenses is hanging out their own shingle and juggling freelance projects and social-media collaborations.
«
The numbers in here are amazing. Manhattan dog walker. Psychoanalyst. Pastor. Ghostwriter. Head of engineering at an AI startup. Sex worker. Not in that order: $325,000, $450,000, $92,000, $284,000, $67,200, $165,000. (See if you can guess which is which.) Worth reading for the human stories they contain.
unique link to this extract
2023: Why I’m not worried about AI causing mass unemployment
Timothy B. Lee, writing in April 2023:
»
The startups that best fit the “software eating the world” thesis [of Marc Andreessen in 2011] are probably “sharing economy” companies like Bird, DoorDash, Instacart, Lime, Lyft, Uber, and WeWork. Each of these companies uses software to offer services in the “real world”—taxi rides, scooter rentals, food delivery, lodging, office space, and so forth. They enjoyed a lot of hype in the mid-2010s, and most of them have struggled in the last few years.
Some of them have been total fiascos. WeWork failed to disrupt commercial real estate. Shares in the scooter startup Bird have lost 97% of their value since the company went public less than two years ago. Last year I drove for Lyft for a week and wrote about its difficulty in turning a profit.
The two most successful “sharing economy” startups are probably Airbnb (founded in 2008) and Uber (founded in 2009). These companies are each worth tens of billions of dollars, and they seem likely to be enduring, profitable businesses.
Still, Airbnb has only a modest share of the overall lodging industry. And in recent years, the quality of Uber’s service has deteriorated, with higher fees and longer wait times. Smartphone-based ride hailing is a marginal improvement over conventional taxis, but hasn’t been a revolution.
In his 2011 essay, Andreessen specifically mentions health care and education as industries ripe for disruption by software. But as far as I can see that hasn’t happened. Hospitals increasingly use computers for record-keeping and billing and software has been used to make new drugs and medical devices. Many people learn foreign languages using Duolingo or watch educational videos on YouTube. But people largely go to the same schools and hospitals they did 10 or 20 years ago.
The reason I’m relitigating this 12-year-old argument is that I hear echoes of it in contemporary discussions of AI. In the early 2010s, Silicon Valley thought leaders looked at the early success of companies like Airbnb and Uber, extrapolated wildly, and concluded that software was going to transform the entire economy. Today, AI thought leaders are looking at the early success of ChatGPT and Stable Diffusion, extrapolating wildly, and concluding that AI software is going to transform the economy and put tons of people out of work.
To be clear, I do think AI is going to be a big deal. I wouldn’t have started an AI newsletter otherwise. But as with the Internet, I expect the impact to be concentrated in information-focused industries and occupations. And most of the American economy is not information-focused.
«
Lee is still insistent about this point: he reiterated it on Monday.
unique link to this extract
The screening machine • Tablet Magazine
Mo Perry is a journalist and actor:
»
The battles play out all day, every day, in our feeds, where there’s an influencer for every temperament. Whatever your position on the trustworthiness of the medical establishment, there’s someone posting online or yammering on a podcast ready to support or refute you, armed with studies, stats, and anecdotes. Each can be compelling in isolation. Taken together, their contradictory messages—all delivered with evangelical certitude—can be dizzying.
Peptides. Ivermectin. Universal Hepatitis B vaccines for newborns. Raw milk. mRNA technology. All contested, all fraught. [They’re only “contested” if you don’t understand science – Overspill Ed.]
Mammograms are no different. On one side there are people like Dr. Thais Aliabadi, a board-certified OB-GYN and doctor to the stars, telling millions of “Huberman Lab” podcast listeners her story of getting a routine mammogram that led to a biopsy deemed benign, followed by a risk calculation (based on factors like family history, age at first menses, and age at first childbirth) that pegged her lifetime chance of breast cancer at 37%. Convinced that wasn’t a number she could live with, she fought for a prophylactic double mastectomy. When the pathologists examined the removed tissue, they found a small, previously invisible cancer in one breast—proof, in her telling, that more aggressive screening and preventative surgery had saved her life.
On the other side is Dr. Jenn Simmons, a former breast surgeon turned functional-medicine entrepreneur who regularly tells her more than 100,000 followers that mammograms have never been shown to save lives, that many mastectomies performed in the wake of screening are unnecessary products of overdiagnosis and fear, and that the radiation from mammograms can cause the very cancers women are trying to detect. She urges women toward diet and lifestyle overhaul, “terrain” testing, and radiation-free QT ultrasounds at her centers instead.
Between these poles sits the US Preventive Services Task Force, an independent panel of national experts in disease prevention and evidence-based medicine that issues recommendations about clinical preventive services. It doesn’t endorse risk calculators or supplemental screening. Its guidance is almost comically bland by comparison: biennial mammograms for women between 40 and 74, full stop.
«
In the UK, the guidance is an invitation every three years for all women aged between 50 and 70, so clearly there’s room for nuance (or statistical variation in populations and diets). Perry’s general point – that doctors like mammograms because it means they’re doing something – is reasonable. “Surgeons like to do surgery,” as my GP said to me when I mentioned having some adverse effects from an elective arthroscopy. “They’re not going to turn down the opportunity.”
unique link to this extract
Epic and Google have signed a special deal for a new class of ‘metaverse’ apps • The Verge
Jay Peters:
»
Epic Games and Google are burying the hatchet, but documents released today reveal that they aren’t only aligned on how Google is shaking things up for app stores. The two companies have also agreed to terms about a new class of apps that they’re calling “metaverse browsers,” according to a heavily redacted section of a revised binding term sheet.
While the term “metaverse” has largely fallen out of favour — Mark Zuckerberg, for example, is now much more interested in AI — Epic CEO Tim Sweeney has been talking for years about the metaverse and how it might work in the future. (Depending on how you define the concept, Epic’s Fortnite is already arguably one of the biggest versions of a metaverse.) And this actually isn’t the first connection between Epic and Google over the metaverse; in court in January, when discussing a secret $800m Unreal Engine and services deal, Sweeney blurted out that the agreement related to the metaverse.
Unfortunately, the redactions in the revised binding term sheet cover up a lot of the key details about what a metaverse browser actually is. But from what’s visible in the document, metaverse browsers will:
• “have the primary purpose of allowing the navigation and exploration of metaverse worlds;”
• “support virtual items and identity that are portable across different worlds in the metaverse browser; and”
• “must support modern security considerations including Sandbox capabilities, limitations on code execution, and secure connections.”
«
Oh please, can’t we just put the metaverse out of our misery?
unique link to this extract
One hundred accounts are behind the majority of conspiracy theory content in Canada • National Observer
Rory White:
»
Conspiracy theories about globalist cabals, climate hoaxes and election fraud may seem ubiquitous on social media. But a report published on Monday by the Media Ecosystem Observatory has found that they come from a tiny minority of users.
According to the report, just 100 users were responsible for almost 70% of online conspiracy posts from the influential Canadian accounts it examined.
The researchers analyzed over 14 million social media posts from accounts in Canada, and found that 87% of conspiratorial claims come from influencers. Users on Elon Musk’s X were the biggest culprits.
These influencers are having an outsized impact in the physical world as well as online. Local governments across Canada are facing a wave of “larger scale conspiracy theories” overwhelming council meetings, according to Zoe Grams, executive director of Climate Caucus. This has led some politicians to avoid mentioning climate change altogether for fear of provoking a backlash.
“It’s about the permission structure of how we treat each other and how we treat our democratic institutions, which I think conspiracy theories are really undermining,” said Grams.
The report from the Media Ecosystem Observatory, a collaboration between McGill University and the University of Toronto, did not name the accounts responsible for spreading conspiracy theories. But an analyst at the organization gave some clues. “A lot of them are part of a network. They often know each other and engage with each other’s content,” said Mathieu Lavigne.
While the conspiracy theory posts were viewed billions of times, only a small minority of Canadians fell for them.
«
Once again, one has to wonder what the world would be like if social media were optimised to reward accuracy rather than virality.
unique link to this extract
OpenAI robotics leader resigns over concerns about surveillance and autonomous weapons • Fortune
Sharon Goldman:
»
Caitlin Kalinowski, who had been leading hardware and robotic engineering teams at OpenAI since November 2024, announced she has left the company.
“I resigned from OpenAI,” she posted on X and LinkedIn. “I care deeply about the Robotics team and the work we built together. This wasn’t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got. This was about principle, not people. I have deep respect for Sam and the team, and I’m proud of what we built together.”
Her departure comes amid an escalating dispute over how far AI companies should go in supporting U.S. military uses of the technology. In recent days, negotiations between the Pentagon and Anthropic collapsed after the company pushed for strict limits on domestic surveillance and autonomous weapons. Soon after, OpenAI reached its own agreement with the Defense Department to deploy its models on a classified government network.
The move drew criticism from some employees and observers who argued that OpenAI appeared to step in after Anthropic refused the terms. CEO Sam Altman later acknowledged the deal’s rollout looked “opportunistic,” and the company has since moved to clarify restrictions on how its systems can be used by the military.
«
It’s very noticeable how easily the AI companies have agreed to the use of their technology for military purposes, while the earlier incarnations of Google and Microsoft were chary of doing so.
unique link to this extract
The social media discoverability problem • Sam Randa
Sam Randa:
»
At their best, algorithmic feeds feel like wandering through a city street. Stimuli are varied, accessible, and act as a jumping-off point for engaging with new communities. But for those of us not privileged enough (or in my case, too privileged) to spend adolescence in lively, diverse urban areas, social media formed a decent alternative. I don’t think the problem lies in algorithmic discovery, but the specific corporate incentives to keep users on their platforms no matter what.
I feel like many of the people pushing towards a federated, conscious, intentional web landscape tend to know who they are and what they want out of the internet. It’s an easy thing to forget going through if you’ve worked with computers for multiple decades, but as someone whose “real” identity only started to crystallize in high school during the pandemic, it’s fresh in my mind. Though I have more appreciation for the slow web nowadays, where my identity is a bit more solidified, I still feel a pretty strong pull towards “the platform”, and my visions for a healthier internet include it.
The future is the piece I’m the least sure of. It’s obvious that algorithmic feeds have a lot to do with plenty of incredibly harmful societal effects: the dissemination of fake news and misinformation on Facebook that shaped the 2016 Trump campaign, Instagram’s impact on the self-image of teenage girls, the proliferation of brain rot and rage bait content across all of short-form video, and the pull that young men feel towards radicalization as a misplaced response to the struggles they face. All of these are completely separate from the deep privacy concerns of trading your personal information for participation in a platform. But, there’s something to be said about having a wide variety of interests, people, and culture thrown at you that, in a small way, makes up for an upbringing that doesn’t.
«
unique link to this extract
“My invention makes ocean plastic the world’s cheapest problem to solve” • The Times
Ben Spencer:
»
The global plastic pollution crisis could be solved within 15 years and for less than $1 billion, according to an ambitious plan to stop litter getting into the oceans.
Boyan Slat, an inventor, environmentalist and the chief executive of The Ocean Cleanup, a nonprofit organisation, argues that to call his plan a bargain would be to sell it short.
“It is the world’s cheapest problem to solve,” the 31-year-old said. “If we’re off by a factor of two or three, it’s still the cheapest world problem. Even if we’re off by a factor of 100, it’s still the cheapest world problem.”
But, crucially, he added: “It’s not solved yet. We still need to raise a significant amount of money, and there is a big execution challenge.”
His confidence relies on research that suggests that stopping pollution in just 30 cities around the world will cut a third of the plastic waste that enters the oceans. Slat plans to tackle these first 30 cities by 2030 at a cost of $350m (£260m).
His method uses floating barriers to trap plastic in the cities’ rivers. If it cannot be scooped up by traditional excavators, “interceptors” — autonomous boats with conveyor belts — are sent to gather the waste and send it for recycling or disposal.
The momentum from that would enable the charity to stop 90% of floating plastic from entering the ocean by 2040, and to clear up the “legacy” waste, particularly in hotspots such as the “great Pacific garbage patch” — an accumulation of about 100,000 tonnes floating between Hawaii and California. In total, he believes, it would cost less than $1bn.
“So 2040 is our publicly stated goal to get to 90%, but I think we can go faster than that, depending on how things go in the next few years.”
Slat, who grew up in the Dutch city of Delft, dropped out of his degree in aerospace engineering after one semester to set up his project. “I realised if I wanted to make this a success, I needed to dedicate all my time to this.”
«
So many problems have this shape: a small number of sources account for most of the cost. It’s true for crime, for homelessness, and clearly for waste plastic. As he says, it’s comparatively cheap to do. Why doesn’t anyone listen?
unique link to this extract
• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?
Read Social Warming, my latest book, and find answers – and more.
Errata, corrigenda and ai no corrida: none notified