
Trying to teach Romeo and Juliet (in whatever medium) to phone-distracted schoolchildren is no fun. So, subtract the phones? CC-licensed photo by iClassical Com on Flickr.
You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.
There’s another post coming this week at the Social Warming Substack on Friday at 0845 UK time. Free signup.
A selection of 9 links for you. Not even the DiCaprio one? I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.
Do AI companies actually care about America? • The Atlantic
Matteo Wong:
»
Hillary Clinton once described interacting with Mark Zuckerberg as “negotiating with a foreign power” to my colleague Adrienne LaFrance.
American politicians have been wholly unsuccessful in reining in that power, and now, as the AI boom brings Silicon Valley’s ambitions to new heights, they are positioned more than ever as industry cheerleaders. Seen one way, this is classic conservatism: the championing of America’s business rulers based on the belief that their success will redound on the nation. Seen another, it is a dereliction of duty: elected officials willingly outsourcing their stewardship of the national interest to a tiny group of billionaires who believe they know what’s best for humanity.
The tech industry’s new ambitions—using AI to reshape not just work, school, and social life but perhaps even governance itself—do have a major vulnerability: The AI patriots desperately need the president’s approval. Chatbots rely on enormous data centers and the associated energy infrastructure that depend on the government to permit and expedite major construction projects; AI products, which are still fallible and have yet to show a clear path to profits, are in need of every bit of grandiose marketing—and all the potentially lucrative government and military contracts—available. Shortly after the inauguration, Zuckerberg, who is also aggressively pursuing AI development, said in a Meta earnings call, “We now have a U.S. administration that is proud of our leading companies, prioritizes American technology winning, and that will defend our values and interests abroad.” Altman, once a vocal opponent of Trump, has written that he now believes that Trump “will be incredible for the country in many ways!”
That dependence has led to a kind of cognitive dissonance. In this still-early stage of the AI boom, Silicon Valley, for all its impunity, has chosen not to voice robust ideas about democracy that differ substantively from the whims of a mercurial White House. As millions of everyday citizens, current and former government officials, lawyers and academics, and dissidents from dictatorships around the world have warned that the Trump administration is eroding American democracy, AI companies have remained mostly supportive or silent despite their own bombastic rhetoric about protecting democracy.
«
Google will require developer verification to install Android apps • 9to5Google
Abner Li:
»
To combat malware and financial scams, Google announced on Monday that only apps from developers that have undergone verification can be installed on certified Android devices starting in 2026.
This requirement applies to “certified Android devices” that have Play Protect and are preloaded with Google apps. The Play Store implemented similar requirements in 2023, but Google is now mandating this for all install methods, including third-party app stores and sideloading where you download an APK file from a third-party source.
Google wants to combat “convincing fake apps” and make it harder for repeat “malicious actors to quickly distribute another harmful app after we take the first one down.” A recent analysis by the company found that there are “over 50 times more malware from internet-sideloaded sources than on apps available through Google Play.”
Google is explicit today about how “developers will have the same freedom to distribute their apps directly to users through sideloading or to use any app store they prefer.”
«
This has taken a long, long time to come through, hasn't it. The first Android phone came out in 2008, complete with an app store, and malware has been a constant problem ever since. Only now will developers have to be verified.
But it’s even slower than you think:
»
The first Android app developers will get access to verification this October, with the process opening to all in March 2026.
The requirement will go into effect in September 2026 for users in Brazil, Indonesia, Singapore, and Thailand. Google notes how these countries have been “specifically impacted by these forms of fraudulent app scams.” Verification will then apply globally from 2027 onwards.
«
Perplexity is launching a new revenue-share model for publishers • WSJ
Alexandra Bruell:
»
Perplexity will pay publishers for news articles that the artificial-intelligence company uses to answer queries.
The artificial-intelligence startup expects to pay publishers from a $42.5m revenue pool initially, and to increase that amount over time, Perplexity said Monday.
Perplexity plans to distribute money when its AI assistant or search engine uses a news article to fulfill a task or answer a search request.
Its payments to publishers will come out of the subscription revenue generated by a new news service, called Comet Plus, that Perplexity plans to roll out widely this fall.
Perplexity said publishers will get 80% of Comet Plus revenue, including from the more expensive subscription tiers that provide Comet Plus free of charge.
Bloomberg News earlier reported Perplexity’s plans to pay publishers.
Like other AI rivals, Perplexity has been building a search engine for the AI era, and turned to news articles and other content to answer queries from users. But publishers have complained the AI firms are taking their work without compensation, while siphoning away traffic that would otherwise go to their websites and apps.
«
That's quite optimistic about the subscription revenue that will come in. I wonder, too, how the money compares with what publishers would earn from people actually visiting their sites.
unique link to this extract
Citizen is using AI to generate crime alerts with no human review. It’s making a lot of mistakes • 404 Media
Joseph Cox:
»
Crime-awareness app Citizen is using AI to write alerts that go live on the platform without any prior human review, leading to factual inaccuracies, the publication of gory details about crimes, and the exposure of sensitive data such as people's license plates and names, 404 Media has learned.
The news comes as Citizen recently laid off more than a dozen unionized employees, with some sources believing the firings are related to Citizen’s increased use of AI and the shifting of some tasks to overseas workers. It also comes as New York City enters a more formal partnership with the app.
…For years Citizen employees have listened to radio feeds and written these alerts themselves. More recently, Citizen has turned to AI instead, with humans “becoming increasingly bare,” one source said. The descriptions of Citizen’s use of AI come from three sources familiar with the company. 404 Media granted them anonymity to protect them from retaliation.
Initially, Citizen brought in AI to assist with drafting notifications, two sources said. “The next iteration was AI starting to push incidents from radio clips on its own,” one added. “There was no analyst or human involvement in the information that was being pushed in those alerts until after they were sent.”
All three sources said the AI made mistakes or included information it shouldn’t have. AI mistranslated “motor vehicle accident” to “murder vehicle accident.” It interpreted addresses incorrectly and published an incorrect location. It would add gory or sensitive details that violated Citizen’s guidelines, like saying “person shot in face” or including a person’s license plate details in an unconfirmed report. It would generate a report based on a homeless person sleeping in a location. The AI sometimes blasted a notification about police officers spotting a stolen vehicle or homicide suspect, potentially putting that operation at risk.
«
Surprise!
unique link to this extract
What many parents miss about the phones-in-schools debate • The Atlantic
Gail Cornwall:
»
…within the next two years, a majority of U.S. kids will be subject to some sort of phone-use restriction [in schools].
…Part of the reason that I feel so strongly about getting phones out of classrooms is that I know what school was like for teachers without them. In 2005, when I was 25 years old, I showed up at a Maryland high school eager to thrill three classes of freshmen with my impassioned dissection of Romeo and Juliet. Instead, I learned how quickly a kid’s eraser-tapping could distract the whole room, and how easily one student’s bare calves could steal another teen’s attention. Reclaiming their focus took everything I had: silliness, flexibility, and a strong dose of humility.
Today, I doubt Mercutio and I would stand a chance. Even with the rising number of restrictions, smartphones are virtually unavoidable in many schools. Consider my 16-year-old’s experience: Her debate team communicates using the Discord app. Flyers about activities require scanning a QR code. Her teachers frequently ask that she submit photos of completed assignments, which her laptop camera can’t capture clearly. In some classes, students are expected to complete learning games on their smartphone.
Because of the way devices—and human brains—are built, asking teens to use a phone in class but not look at other apps is likely to be as ineffective as DARE’s “Just Say No” campaign. Studies have shown that simply having a phone nearby can reduce a person’s capacity to engage with those around them and focus on tasks. This is because each alert offers a burst of dopamine, which can condition people to want to open their phone even before they get a notification.
«
As Cornwall points out, many parents are fine with every other child not having a phone – but their child needs one. Just in case.
unique link to this extract
More UK news publishers are adopting ‘consent or pay’ advertising model • Press Gazette
Charlotte Tobitt:
»
Sixteen of the 50 biggest news websites in the UK are now using a “consent or pay” model to allow users to pay to reject personalised advertising or even avoid ads altogether.
UK publishers began to implement the model last year as the Information Commissioner’s Office cracked down on the requirement for the biggest sites to display a “reject all cookies” button as prominently as the option to “accept all”.
More publishers have begun to implement consent or pay this year after the ICO clarified that the model was acceptable as long as users are given a “realistic choice”, including by not putting the price too high.
The ICO rules relate to the ability for users to opt out of tracking cookies used to show personalised advertising, which have a higher value to advertisers.
But some publishers have chosen instead to offer users the choice between accepting cookies and paying to see no adverts at all, making it more attractive to users fed up with cluttered browsing experiences.
…When users are equally offered the chance to “accept all” or “reject all” cookies, consent rates are typically somewhere around 70-80%, according to both Skovgaards and Contentpass founder Dirk Freytag.
Once a consent or pay model is introduced, almost 100% choose to accept cookies with a small number choosing to pay instead, they each told Press Gazette.
This means publishers are more likely to benefit from a better price for their advertising than if people had chosen to reject personalised advertising, and the small number who choose to pay make up for the advertising that would be lost if they had otherwise rejected.
«
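The arithmetic behind those consent rates can be made concrete with a back-of-envelope sketch. All the figures below are illustrative assumptions (the CPMs and traffic numbers are invented; only the consent rates come from the article): personalised ads fetch a higher CPM than non-personalised ones, so pushing consent from ~75% to ~98% lifts ad revenue even before the paying minority is counted.

```python
def ad_revenue(pageviews, consent_rate, cpm_personalised, cpm_generic):
    """Monthly ad revenue: personalised ads for consenting visitors,
    generic (non-tracked) ads for the rest. CPM = price per 1,000 views."""
    consenting = pageviews * consent_rate
    generic = pageviews - consenting
    return (consenting * cpm_personalised + generic * cpm_generic) / 1000

VIEWS = 1_000_000  # hypothetical monthly pageviews

# "Reject all" offered as prominently as "accept all": ~75% consent (per the article)
reject_all_model = ad_revenue(VIEWS, 0.75, cpm_personalised=4.0, cpm_generic=1.5)

# Consent-or-pay: almost everyone consents (the ~2% who refuse pay a fee instead,
# which replaces the ad revenue they would have generated)
consent_or_pay_model = ad_revenue(VIEWS, 0.98, cpm_personalised=4.0, cpm_generic=1.5)

print(f"reject-all prominent: £{reject_all_model:,.0f}")
print(f"consent-or-pay:       £{consent_or_pay_model:,.0f}")
```

On these assumed numbers the consent-or-pay model comes out well ahead on advertising alone, which is presumably why 16 of the top 50 sites have adopted it.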
(Thanks Gregory B for the link.)
unique link to this extract
Elon Musk sues Apple and OpenAI, revealing his panic over OpenAI dominance • Ars Technica
Ashley Belanger:
»
After a public outburst over Grok’s App Store rankings, on Monday, Elon Musk followed through on his threat to sue Apple and OpenAI.
At first, Musk appeared fixated on ChatGPT consistently topping Apple’s “Must Have” app list—which Grok has never made—claiming Apple seemed to preference OpenAI, an Apple partner, over all chatbot rivals. But Musk’s filing shows that the X and xAI owner isn’t just trying to push for more Grok downloads on iPhones—he’s concerned that Apple and OpenAI have teamed up to completely dash his “everything app” dreams, which was the reason he bought Twitter.
At this point appearing to be genuinely panicked about OpenAI’s insurmountable lead in the chatbot market, Musk has specifically alleged that an agreement integrating ChatGPT into the iOS violated antitrust and unfair competition laws. Allegedly, the conspiracy is designed to protect Apple’s smartphone monopoly and block out AI rivals to lock in OpenAI’s dominance in the chatbot market.
As Musk sees it, Apple is supposedly so worried that X will use Grok to create a “super app” that replaces the need for a sophisticated smartphone that the iPhone maker decided to partner with OpenAI to limit X and xAI innovation. The complaint quotes Apple executive Eddy Cue as expressing “worries that AI might destroy Apple’s smartphone business,” due to patterns observed in foreign markets where super apps exist, like WeChat in China.
“In a desperate bid to protect its smartphone monopoly, Apple has joined forces with the company that most benefits from inhibiting competition and innovation in AI: OpenAI, a monopolist in the market for generative AI chatbots,” Musk’s lawsuit alleged.
«
One can point out that this is ridiculous by pointing to China, where WeChat is exactly the "super app" that lets people do almost anything. Yet Apple doesn't bar it from the App Store there, nor has it built a competitor.
As for being anticompetitive by picking OpenAI – it’s anything but: Apple can choose any AI it wants. If it thinks Grok is better, it can slot it in. America’s litigation culture has long since got out of hand.
unique link to this extract
Israel vs. Iran… on the blockchain • Cryptadamus
“Michel de Cryptadamus”:
»
So-called “stablecoins” like Tether, whose values are pegged to that of so-called “real” money (e.g. US dollars), are a perfect example of extremely censorable cryptocurrencies. Any US dollars you are holding “on chain” in the form of Tether’s USDT tokens, Circle’s USDC tokens, the Trump family’s new USD1 tokens, or whatever other stablecoin you choose can be instantly zapped from afar by the folks at Tether or Circle or Trump HQ whenever they feel like it and for whatever reason they choose. Due to both the jurisdictional issues (most stablecoins are located in small island tax shelters) as well as the terms of service you agreed to when you touched the stablecoin you have pretty much no recourse of any kind.
…While the governments of both Israel and the U.S. periodically make the blockchain addresses they have asked Tether to blacklist public via court records, sanctions related press releases, or similar documentation, a) they do not always do this and b) even when they do the seizure orders are often unsealed weeks or months after the actual blacklisting happens. I could not find any governmental records about why this huge number of wallets were suddenly being blacklisted so I set out to do a bit of investigating.
It turned out that not only could a very large percentage (~30%) of the wallets that were blacklisted since slightly before the outbreak of Middle Eastern hostilities be linked to Iranian crypto exchanges like the aforementioned Nobitex with an extremely cursory scan of the blockchain, a couple of them could even be directly observed sending funds to and/or from a blockchain address the governments of both the U.S. and the U.K. claim belongs to Sa'id Ahmad Muhammad Al-Jamal, a sanctioned IRGC-connected money launderer with a Chinese passport.
«
Fascinating insight into the newest frontier for war: cutting crypto supply lines.
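To make concrete what "instantly zapped from afar" means, here is a minimal sketch of a centrally censorable token ledger. It is loosely modelled on how stablecoin contracts implement issuer blacklists; the class and method names are illustrative inventions, not the real USDT contract interface.

```python
class CensorableStablecoin:
    """Toy ledger showing the two censorship levers an issuer holds:
    blacklisting an address, and destroying a blacklisted balance."""

    def __init__(self, issuer):
        self.issuer = issuer
        self.balances = {}
        self.blacklist = set()

    def mint(self, to, amount):
        self.balances[to] = self.balances.get(to, 0) + amount

    def transfer(self, sender, to, amount):
        # Lever 1: blacklisted wallets can neither send nor receive
        if sender in self.blacklist or to in self.blacklist:
            raise PermissionError("address is blacklisted")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient funds")
        self.balances[sender] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount

    def add_to_blacklist(self, caller, address):
        # Only the issuer can do this; no on-chain court order is needed
        if caller != self.issuer:
            raise PermissionError("issuer only")
        self.blacklist.add(address)

    def destroy_blacklisted_funds(self, caller, address):
        # Lever 2: the issuer can zero out a frozen balance entirely
        if caller != self.issuer:
            raise PermissionError("issuer only")
        if address not in self.blacklist:
            raise ValueError("address not blacklisted")
        return self.balances.pop(address, 0)

usdx = CensorableStablecoin(issuer="IssuerHQ")
usdx.mint("wallet_A", 1_000)
usdx.add_to_blacklist("IssuerHQ", "wallet_A")
seized = usdx.destroy_blacklisted_funds("IssuerHQ", "wallet_A")
print(f"seized {seized} tokens from wallet_A")
```

The point of the sketch: once an address is blacklisted, its funds are dead on arrival, and nothing in the protocol requires the issuer to explain why, which is exactly the opacity the article's investigation had to work around.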
unique link to this extract
Air pollution from oil and gas causes 90,000 premature US deaths each year, says new study • The Guardian
Dharna Noor:
»
More than 10,000 annual pre-term births are attributable to fine particulate matter from oil and gas, the authors found, also linking 216,000 annual childhood-onset asthma cases to the sector’s nitrogen dioxide emissions and 1,610 annual lifetime cancer cases to its hazardous air pollutants.
The highest number of impacts are seen in California, Texas, New York, Pennsylvania and New Jersey, while the per-capita incidences are highest in New Jersey, Washington DC, New York, California and Maryland.
The analysis by researchers at University College London and the Stockholm Environment Institute is the first to examine the health impacts – and unequal health burdens – caused by every stage of the oil and gas supply chain, from exploration to end use.
“We’ve long known that these communities are exposed to such levels of inequitable exposure as well as health burden,” said Karn Vohra, a postdoctoral research fellow in geography at University College London, who led the paper. “We were able to just put numbers to what that looks like.”
While Indigenous and Hispanic populations are most affected by pollution from exploration, extraction, transportation and storage, Black and Asian populations are most affected by emissions from processing, refining, manufacturing, distribution and usage.
«
The story is scattered with links, but none to the actual paper. It took a little searching, but the clues sprinkled around the story let me find the original study. Journalists: please link to the studies. This one even has nice diagrams.
unique link to this extract
• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.
Errata, corrigenda and ai no corrida: none notified