Start Up No.2514: EU users won’t get AirPods translation, the online reaction to Charlie Kirk, Taiwan protects its sea cables, and more


Playing Tetris has been shown to be an effective way to take your mind off disturbing content you might have seen online. CC-licensed photo on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


I was going to write a post at the Social Warming Substack – but it wasn’t about Charlie Kirk, and the noise from that has been overwhelming. So, next week.


A selection of 10 links for you. Blocked! I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


AirPods Live Translation blocked for EU users with EU Apple accounts • MacRumors

Tim Hardwick:

»

Apple’s new Live Translation feature for AirPods will be off-limits to millions of European users when it arrives next week, with strict EU regulations likely holding back its rollout.

Apple says on its feature availability webpage that “Apple Intelligence: Live Translation with AirPods” won’t be available if both the user is physically in the EU and their Apple Account region is in the EU. Apple doesn’t give a reason for the restriction, but legal and regulatory pressures seem the most plausible culprits.

In particular, the EU’s Artificial Intelligence Act and the General Data Protection Regulation (GDPR) both impose strict requirements for how speech and translation services are offered. Regulators may want to study how Live Translation works, and how that impacts privacy, consent, data-flows, and user rights. Apple will also want to ensure its system fully complies with these rules before enabling the feature across EU accounts.

Apple’s Live Translation feature, unveiled during its AirPods Pro 3 announcement, is also coming to older models including AirPods 4 with Active Noise Cancellation and AirPods Pro 2.

«

Slight inconvenience for all those French users going to Germany, or Italy, and all manner of vice-versa. But as long as it works for Americans coming to Europe, that’s probably all that’s needed. (And Britons going to Europe – who, mirabile dictu, will also get working translation because they’re outside the EU. You mean Brexit brought a benefit?)
unique link to this extract


The logical endpoint of 21st-century America • Garbage Day

Ryan Broderick:

»

Regardless of the motive, the shooting was clearly staged to maximize impact on social media. Even though footage of mass death is an inescapable feature of the internet now, there was something especially haunting about the videos of Kirk being struck down. The uniquely parasocial terror of seeing a person who seemed so untouchable from behind their armor of internet fame be reduced to just another fragile human being.

If 9/11 was the pinnacle of political violence for the TV age, Kirk’s death should be seen as an inverted mirror image, a perfect spectacle for the social media era. A darkly fitting end for the premier digital propagandist of the Trump administration. The same algorithms he relied on to create narratives for the MAGA movement now turning his death into a dizzying torrent of content. Shitposts, memes, conspiracy theories, and delirious right-wing lust for civil war have spun together online over the last 24 hours more intensely than we’ve ever seen before. The logical endpoint of 21st-century America: An influencer shot to death at a school in front of a crowd of smartphones.

The mania online that Kirk’s death has generated is best exemplified by a now-deleted video filmed by TikTok user @eldertiktok11, seconds after the shooting, which ends with the user signing off with, “make sure you subscribe to elder TikTok.” As X user @taste_of_tbone wrote, “A disturbing synthesis: the subject, the medium, the message, the messenger — the call to subscribe between striking a pose and shouting out Jesus… His later apology and promise to ‘be a better content creator.’”

The dark new America that Kirk dedicated his life to manifesting has finally arrived. A complete collapse of the online and the offline, where political violence is simply just another opportunity to grow your personal brand if you can turn your phone on fast enough and make it out alive.

«

There’s plenty more to the article, though this is the part that strikes hardest. The peculiar resonance of it happening just the day before the 24th anniversary of the terror attacks on the US – about which it was so hard for anyone to get updated news at the time – is impossible to ignore.
unique link to this extract


Charlie Kirk was shot and killed in a post-content-moderation world • WIRED

Lauren Goode:

»

Minutes after conservative political activist Charlie Kirk was shot yesterday at a speaking engagement at Utah Valley University, jarring videos of the incident began circulating on apps like TikTok, Instagram, and X. In the immediate aftermath, the majority of the videos viewed by WIRED did not contain content warnings. Many began autoplaying before viewers had the option to consent. And on X, an AI-generated recap of the incident falsely indicated that Kirk had survived the shooting.

Researchers tracking the spread of the shooting videos on social media say that major social platforms are falling short in enforcing their own content moderation rules, at a moment when political tensions and violence are flaring. And the video of Kirk being fatally shot is somehow falling into a policy loophole, threading the needle between allowable “graphic content” and the category of “glorified violence” that violates platform rules.

“It’s unbelievable how some of these videos are still up. And with the way this stuff spreads, it is absolutely impossible to take down or add warnings to all of these horrific videos if you don’t have a robust trust and safety program,” says Alex Mahadevan, the director of MediaWise at the Poynter Institute.

Over the past two years, social platforms like X, TikTok, Facebook, and Instagram have scaled back their content moderation efforts—in some cases eliminating the work of human moderators who previously acted as a crucial line of defence to protect users from viewing harmful content.

…“I don’t think it is possible to prevent the initial distribution, but I think platforms can do better in preventing the massive distribution through algorithmic feeds, especially to people that did not specifically search for it,” says Martin Degeling, a researcher who audits algorithmic systems and works with organizations like the non-profit AI Forensics.

«

I think it probably depends strongly on your location. In the UK, I didn’t see the video (still haven’t 🤞) but have no doubt this is absolutely right: the social networks now either don’t care or can’t figure out how to care about a video showing a killing. It used to be an all-hands-on-deck process to try to stop the uploading of videos of shootings. Now it’s “fill yer boots.”
unique link to this extract


Trauma, treatment and Tetris • Journal of Psychiatry and Neuroscience

Oisin Butler et al:

»

Methods: We recruited patients with combat-related PTSD before psychotherapy and randomly assigned them to an experimental Tetris and therapy group (n = 20) or to a therapy-only control group (n = 20). In the control group, participants completed therapy as usual: eye movement desensitization and reprocessing (EMDR) psychotherapy. In the Tetris group, in addition to EMDR, participants also played 60 minutes of Tetris every day from onset to completion of therapy, approximately 6 weeks later. Participants completed structural MRI and psychological questionnaires before and after therapy, and we collected psychological questionnaire data at follow-up, approximately 6 months later. We hypothesized that the Tetris group would show increases in hippocampal volume and reductions in symptoms, both directly after completion of therapy and at follow-up.

Results: Following therapy, hippocampal volume increased in the Tetris group, but not the control group. As well, hippocampal increases were correlated with reductions in symptoms of PTSD, depression and anxiety between completion of therapy and follow-up in the Tetris group, but not the control group.

«

If you’ve been shown disturbing content on social media, playing Tetris really might help you get over any effects. Just a thought.
unique link to this extract


Taiwan increases defensive patrols around 24 undersea cables — closely monitoring “96 blacklisted China-linked boats” with 24-hour operations • Tom’s Hardware

Mark Tyson:

»

Taiwan has intensified patrols around the 24 undersea cables that connect it with the global internet. For the most efficient use of Taiwan’s limited coast guard resources, it is paying particular attention to “96 blacklisted China-linked boats,” reports Reuters. The updated strategy is intended to address what is seen as an increasingly popular gray-zone warfare tactic: sabotaging undersea connections.

For its exclusive report on the state of the subsea communications cable threat around the island, Reuters took a trip on the 100-ton Taiwan Coast Guard Ship PP-10079. We mentioned this exact vessel in our report on the severing of Chunghwa Telecom’s TP3 by the Togo-registered (but Chinese-crewed) Hong Tai back in February.

PP-10079 monitored and eventually apprehended the Hong Tai after it became apparent its suspicious movements could be associated with the TP3 cable damage. The Chinese captain of Hong Tai was found guilty of deliberately severing TP3 in legal proceedings this summer.

On board the PP-10079, the Reuters reporter was told by the captain that suspected China-backed incursions “have severely undermined the peace and stability of Taiwanese society.” Of course, internet connectivity is also a key resource, vital for government, businesses and personal users alike.

«

Just while the US is distracted.
unique link to this extract


Exclusive: US warns hidden radios may be embedded in solar-powered highway infrastructure • Reuters

Jana Winter and Raphael Satter:

»

US officials say solar-powered highway infrastructure including chargers, roadside weather stations, and traffic cameras should be scanned for the presence of rogue devices – such as hidden radios – secreted inside batteries and inverters.

The advisory, disseminated late last month by the US Department of Transportation’s Federal Highway Administration, comes amid escalating government action over the presence of Chinese technology in America’s transportation infrastructure.

The four-page security note, a copy of which was reviewed by Reuters, said that undocumented cellular radios had been discovered “in certain foreign-manufactured power inverters and BMS,” referring to battery management systems.

The note, which has not previously been reported, did not specify where the products containing undocumented equipment had been imported from, but many inverters are made in China.

There is increasing concern from US officials that the devices, along with the electronic systems that manage rechargeable batteries, could be seeded with rogue communications components that would allow them to be remotely tampered with on Beijing’s orders.

«

Would China really want to mess around with traffic cameras in the US? Make roadside weather stations report the wrong temperatures? It seems a bit overdone. But they did find the radios.
unique link to this extract


Google is shutting down Tables, its Airtable rival • TechCrunch

Sarah Perez:

»

Google Tables, a work-tracking tool and competitor to the popular spreadsheet-database hybrid Airtable, is shutting down.

In an email sent to Tables users this week, Google said the app will not be supported after December 16, 2025, and advised that users export or migrate their data to either Google Sheets or AppSheet instead, depending on their needs.

Launched in 2020, Tables focused on making project tracking more efficient with automation. It was one of the many projects to emerge from Google’s in-house app incubator, Area 120, which at the time was devoted to cranking out a number of experimental projects. Some of these projects later graduated to become a part of Google’s core offerings across Cloud, Search, Shopping and more.

Tables was one of those early successes: Google said in 2021 that the service was moving from a beta test to become an official Google Cloud product. At the time, the company said it saw Tables as a potential solution for a variety of use cases, including project management, IT operations, customer service tracking, CRM, recruiting, product development and more.

…Area 120, meanwhile, was the victim of a Google re-org in 2022, when the company cancelled half its projects and informed staff that a reduction in force would cut the in-house R&D division to half its size. The division that remained would focus on AI projects, Google said.

«

The Google Cemetery, a third-party observer of how these things pass away, puts the average lifespan of any Google product at four years, so this was pretty much dead on. (Aha.) Though the figure might be different – the Google Cemetery stopped being updated in 2019. Do you think it’s dead?

Bit inconvenient if you have a project going past December.
unique link to this extract


US regulator launches inquiry into AI “companions” used by teens • Financial Times

Cristina Criddle, Hannah Murphy and Stefania Palma:

»

The US Federal Trade Commission has ordered leading artificial intelligence companies to hand over information about chatbots that provide “companionship”, which are under intensifying scrutiny after cases involving suicides and serious harm to young users.

OpenAI, Meta, Google and Elon Musk’s xAI are among the tech groups hit with demands for disclosure about how they operate popular chatbots and mitigate harm to consumers. Character.ai and Snap, which aim their services at younger audiences, are also part of the inquiry.

The regulator’s move follows high-profile incidents alleging harm to teenage users of chatbots. Last month, OpenAI was sued by the family of 16-year-old Adam Raine who died by suicide after discussing methods with ChatGPT.

Character.ai is also being sued by a mother who claims the platform, which offers different AI personas to interact with, had a role in the suicide of her son.

The FTC on Thursday said: “AI chatbots can effectively mimic human characteristics, emotions and intentions, and generally are designed to communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots.”

The FTC’s action comes as US lawmakers and state attorneys-general have also launched inquiries and voiced concern over chatbots’ impact on young people — especially around mental health and sexual content — heaping pressure on tech companies.

«

unique link to this extract


How Silicon Valley enabled China’s digital police state • AP News

Dake Kang and Yael Grauer:

»

Over the past quarter century, American tech companies to a large degree designed and built China’s surveillance state, playing a far greater role in enabling human rights abuses than previously known, an Associated Press investigation found. They sold billions of dollars of technology to the Chinese police, government and surveillance companies, despite repeated warnings from the U.S. Congress and in the media that such tools were being used to quash dissent, persecute religious sects and target minorities.

Critically, American surveillance technologies allowed a brutal mass detention campaign in the far west region of Xinjiang — targeting, tracking and grading virtually the entire native Uyghur population to forcibly assimilate and subdue them.

U.S. companies did this by bringing “predictive policing” to China — technology that sucks in and analyzes data to prevent crime, protests, or terror attacks before they happen. Such systems mine a vast array of information — texts, calls, payments, flights, video, DNA swabs, mail deliveries, the internet, even water and power use — to unearth individuals deemed suspicious and predict their behavior. But they also allow Chinese police to threaten friends and family and preemptively detain people for crimes they have not even committed.

For example, the AP found a Chinese defense contractor, Huadi, worked with IBM to design the main policing system known as the “Golden Shield” for Beijing to censor the internet and crack down on alleged terrorists, the Falun Gong religious sect, and even villagers deemed troublesome, according to thousands of pages of classified government blueprints taken out of China by a whistleblower, verified by AP and revealed here for the first time. IBM and other companies that responded said they fully complied with all laws, sanctions and U.S. export controls governing business in China, past and present.

«

It’s the surveillance aspect and the predictive policing element that makes these sales questionable. If they were just cameras, just computers, no blame could be attached. It’s the analysis that makes the difference. (Thanks Gregory B for the link.)
unique link to this extract


The remorseless rule of my fitness tracker • Financial Times

Tim Harford:

»

Like any good performance metric, my watch provides me with structure and helps me optimise my running. I can feed in a goal — a distance, a time — and it will generate a training program. Once-difficult tasks, such as running at a consistent pace, become straightforward.

Yet like many performance metrics, the watch can also nudge me into counter-productive activity such as overtraining to the point of injury. The sleep-tracking function tempts many people into thinking too much about sleep, which is the sort of thing that can make it hard to drift off. There’s a term of art, “orthosomnia”. It means that you’re losing sleep because you’re worried that your sleep tracker is judging you.

There is another subtle effect at work, something called “quantification fixation”. A study published last year by behavioural scientists Linda Chang, Erika Kirgios, Sendhil Mullainathan and Katherine Milkman invited participants to make a series of choices between two options, such as holiday destinations or job applicants. Chang and her colleagues found that people consistently took numbers more seriously than words or symbols. Whether deciding between a cheap, shabby hotel or an expensive swanky one, or between an intern with strong management skills or one with strong calculus skills, experimental subjects systematically favoured whatever feature had a number on it, rather than a description such as “excellent” or “likely”. Numbers can fixate us.

“A key implication of our findings,” write the researchers, “is that when making decisions, people are systematically biased to favour options that dominate on quantified dimensions. And trade-offs that pit quantitative against qualitative information are everywhere.”

«

I have not yet been contradicted in my assertion that sleep tracking is pointless. (If you don’t have an FT account, these articles typically appear on timharford.com in a couple of weeks.)
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified
