Cory Doctorow’s “Reflectacles” bamboozle CCTV, helping to evade surveillance. CC-licensed photo by Cory Doctorow on Flickr.
You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.
A selection of 10 links for you. None Oscar-nominated. I’m @charlesarthur on Twitter. Observations and links welcome.
The recent history of social media isn’t a story of context collapse. It’s a story of its opposite: context restoration. Young people led the way, moving much of their online conversation from the public platform of Facebook, where parents and teachers lurked, to the more intimate platform of Snapchat, where they could restrict their audience and where messages disappeared quickly. Private accounts became popular on other social networks as well. Group chats and group texts proliferated. On Instagram, people established pseudonymous accounts — fake Instagrams, or finstas — limited to their closest friends. Responding to the trend, Facebook itself introduced tools that allow members to restrict who can see a post and to specify how long the post stays visible. (Apparently, Zuckerberg has decided he’s comfortable undermining the integrity of the public.)
Context collapse remains an important conceptual lens, but what’s becoming clear now is that a very different kind of collapse — content collapse — will be the more consequential legacy of social media. Content collapse, as I define it, is the tendency of social media to blur traditional distinctions among once distinct types of information — distinctions of form, register, sense, and importance. As social media becomes the main conduit for information of all sorts — personal correspondence, news and opinion, entertainment, art, instruction, and on and on — it homogenizes that information as well as our responses to it.
Content began collapsing the moment it began to be delivered through computers.
‘Shattered’: inside the secret battle to save America’s undercover spies in the digital age • Yahoo News
Jenna McLaughlin and Zach Dorfman:
The familiar trope of Jason Bourne movies and John le Carré novels where spies open secret safes filled with false passports and interchangeable identities is already a relic, say former officials — swept away by technological changes so profound that they’re forcing the CIA to reconsider everything from how and where it recruits officers to where it trains potential agency personnel. Instead, the spread of new tools like facial recognition at border crossings and airports and widespread internet-connected surveillance cameras in major cities is wiping away in a matter of years carefully honed tradecraft that took intelligence experts decades to perfect.
Though U.S. technical capabilities can collect reams of data, human intelligence remains critical. In 2016, for example, a high-level Russian asset recruited by the CIA confirmed that Russian President Vladimir Putin had personally ordered plans to interfere in the 2016 U.S. presidential election. After fleeing to the United States, that same covert source was forced to relocate because of his digital trail. Without the ability to send undercover intelligence officers overseas to recruit or meet sources face to face, this type of intelligence might all but disappear, creating a blind spot for U.S. policymakers.
During a summit of Western intelligence agencies in early 2019, officials wrestled with the challenges of protecting their employees’ identities in the digital age, concluding that there was no silver bullet. “We still haven’t figured out this problem,” says a Western intelligence chief who attended the meeting. Such conversations have left intelligence leaders weighing an uncomfortable question: is spying as we know it over?
Another example of that facial recognition and identification issue: Bellingcat’s identification of the Russian agents behind the Skripal poisoning. Identifying spies cuts in every direction.
unique link to this extract
in the days after the Ukraine International Airlines plane crashed outside Tehran, Bellingcat and The New York Times have blown a hole in the supposition that the crash was caused by engine failure. The pressure – and the weight of public evidence – compelled Iranian officials to admit overnight on January 10 that the country had shot down the plane “in error”.
So how do they do it? “You can think of OSINT [open source intelligence] as a puzzle. To get the complete picture, you need to find the missing pieces and put everything together,” says Loránd Bodó, an OSINT analyst at Tech Against Terrorism, a campaign group. The team at Bellingcat and other open-source investigators pore over publicly available material. Thanks to our propensity to reach for our cameraphones at the sight of any newsworthy incident, video and photos are often available, posted to social media in the immediate aftermath of events. (The person who shot and uploaded the second video in this incident, of the missile appearing to hit the Boeing plane, was a perfect example: they grabbed their phone after they heard “some sort of shot fired”.) “Open source investigations essentially involve the collection, preservation, verification, and analysis of evidence that is available in the public domain to build a picture of what happened,” says Yvonne McDermott Rees, a lecturer at Swansea University.
Some of the clips in this incident surfaced on Telegram, the encrypted messaging app popular in the Middle East, while others were sent directly to Bellingcat. “Because Bellingcat is known for our open source work on MH17, people immediately thought of us. People started sending us links they’d found,” says Eliot Higgins of Bellingcat. “It was involuntary crowdsourcing.”
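One small, concrete piece of that “puzzle” is chronolocation: normalising the timestamps on uploaded clips to local time so they can be ordered into a single timeline against known events, such as the flight’s departure. A minimal sketch in Python (the clip names and UTC upload times below are invented for illustration):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# Hypothetical UTC upload timestamps for two clips of the same incident.
clips = {
    "clip_a": datetime(2020, 1, 8, 2, 44, tzinfo=timezone.utc),
    "clip_b": datetime(2020, 1, 8, 2, 46, tzinfo=timezone.utc),
}

# Convert to Tehran local time (UTC+3:30) so the clips can be lined up
# against locally reported event times.
tehran = ZoneInfo("Asia/Tehran")
timeline = {
    name: ts.astimezone(tehran).strftime("%Y-%m-%d %H:%M %Z")
    for name, ts in clips.items()
}

for name, local in sorted(timeline.items()):
    print(name, local)
```

Real investigations layer many more such steps on top – geolocating landmarks in frame, matching shadows to sun position – but timestamp normalisation is usually the first pass.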
Special sunglasses, license-plate dresses, Juggalo face paint: how to be anonymous in the age of surveillance • The Seattle Times
Daniel Castro, the vice president of nonprofit think tank Information Technology and Innovation Foundation, believes the error rates could be reduced by comparing images to a wider range of databases that are more diverse.
Facial recognition systems have proved effective in pursuing criminal investigation leads, he said, and are more accurate than humans at verifying people’s identities at border crossings. The development of policies and practices around the retention and usage of data could avoid government misuse, he said.
“The general use of this technology in the United States is very reasonable,” said Castro. “They’re being undertaken by police agencies that are trying to balance communities’ public safety interests with individual privacy.”
Still, in Doctorow’s eyes, the glasses serve as a conversation starter about the perils of granting governments and companies unbridled access to our personal data.
The motivation to seek out antidotes to an over-powerful force has political and symbolic significance for Doctorow, an L.A.-based science-fiction author and privacy advocate. His father’s family fled the Soviet Union, which used surveillance to control the masses.
“We are entirely too sanguine about the idea that surveillance technologies will be built by people we agree with for goals we are happy to support,” he said. “For this technology to be developed and for there to be no countermeasures is a road map to tyranny.”
Justice Department officials said that they need access to Mr. Alshamrani’s phones to see messages from encrypted apps like Signal or WhatsApp to determine whether he had discussed his plans with others at the base and whether he was acting alone or with help.
“The evidence shows that the shooter was motivated by jihadist ideology,” Mr. Barr said, citing a message that Mr. Alshamrani posted on last year’s anniversary of the Sept. 11 attacks warning that “the countdown has begun.” He also visited the 9/11 memorial in New York over the Thanksgiving holiday.
Mr. Alshamrani also posted anti-American, anti-Israeli and jihadist messages on social media, including just two hours before he attacked the base, Mr. Barr said.
Mr. Barr turned up the pressure on Apple a week after the F.B.I.’s top lawyer, Dana Boente, asked the company for help searching Mr. Alshamrani’s iPhones. Apple said that it would turn over only the data it had, implying that it would not work to unlock the phones and hand over the private data on them.
Apple’s stance set the company on a collision course with a Justice Department that has grown increasingly critical of encryption that makes it impossible for law enforcement to search devices or wiretap phone calls.
As I said before: here we go again. The question is, could Apple break into these phones if it wanted to? It’s still unclear. No doubt Trump will be prepared to rage on Twitter about it in a way the Obama administration didn’t.
unique link to this extract
Right now opt-in rates to share [location] data with apps when they’re not in use are often below 50%, said Benoit Grouchko, who runs the ad tech business Teemo that creates software for apps to collect location data. Three years ago those opt-in rates were closer to 100%, he said. Higher opt-in rates prevailed when people weren’t aware that they even had a choice. Once installed on a phone, many apps would automatically start sharing a person’s location data.
Apple’s latest privacy protection move, however, is making people more aware that they do have a choice about which data is shared. Seven in 10 of the iPhone users tracked by location-verification business Location Sciences downloaded iOS 13 in the six weeks after it first became available, and 80% of those users stopped all background tracking across their devices.
“People have decided to stop their phones’ sharing location data at a universal level,” said Jason Smith, chief business officer at Location Sciences.
All the background location data that previously had been made available for targeted advertising is lost to marketers when people decide they don’t want their apps to share it with other companies.
“This also impacts the ability to tie users that research online and purchase in store or driving, and measuring footfall for clients becomes far more opaque,” said Paul Kasamias, managing partner at Publicis Media agency Starcom. “The drop in spend is also likely to come via small- to medium-sized advertisers, where cost efficiency is paramount and there is a physical footprint, as targeting the right user at the right time will become more difficult.”
Other media buyers say they are starting to feel the ripple effects of Apple’s move when they work with certain ad tech vendors.
“We have seen a drop in sales pitches from providers on location-data solutions, and there is a rise in ensuring that the data-exchange piece is addressed transparently up front as part of bigger deals,” said Sargi Mann, evp of digital strategy at Havas Media.
“Once installed, many apps would automatically start sharing.” Essentially we had cars without seatbelts, and the hospitals recommended not using them.
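A quick back-of-envelope on those Location Sciences figures shows how fast the compounding bites (assuming, for illustration, that the two percentages apply to the same user base and are independent):

```python
# 70% of the tracked iPhone users downloaded iOS 13 within six weeks...
ios13_adoption = 0.70
# ...and 80% of those then stopped all background tracking.
stopped_background = 0.80

# Share of the whole tracked base that vanished from background
# location feeds: 0.7 * 0.8 = 0.56
lost_to_marketers = ios13_adoption * stopped_background
print(f"{lost_to_marketers:.0%} of users no longer share background location")
```

In other words, well over half the previously trackable base disappears from background location feeds once people are shown a real choice.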
unique link to this extract
In an open letter published on Wednesday, more than 50 organizations have asked Google to take action against Android smartphone vendors who ship devices with unremovable pre-installed apps, also known as bloatware.
The letter, signed by 53 organizations, was addressed to Google CEO Sundar Pichai.
Signees say Android bloatware has a detrimental effect on user privacy. They say many bloatware apps cannot be deleted and leave users exposed to having their data collected by unscrupulous phone vendors and app makers without their knowledge or consent.
“These pre-installed apps can have privileged custom permissions that let them operate outside the Android security model,” the open letter reads.
“This means permissions can be defined by the app – including access to the microphone, camera and location – without triggering the standard Android security prompts. Users are therefore completely in the dark about these serious intrusions.”
And 91% of those bloatware apps aren’t on Google Play, so they don’t get scanned. This came before the revelation about phones supplied under a US government scheme shipping with, yes, Chinese malware.
What could Google do? It could perhaps tweak its device agreement to ban certain apps, or classes of apps. Might be troublesome in Europe if it was seen as anticompetitive. But there’s a page where you can add your name to the lobbying.
unique link to this extract
although she said she had been able to stop companies breaking European competition law, and punish past misconduct, [Margrethe Vestager] acknowledged that “recovery of the markets” was a “work in progress”.
This included two major cases against Google, the firm which has drawn the toughest actions from Ms Vestager (Google has appealed both judgments, as well as the separate Android verdict).
In 2016, Ms Vestager warned Google to stop restricting third-party rivals on its AdSense search advertising platform, subsequently fining the firm €1.49bn (£1.28bn) in March 2019 for illegal actions “over 10 years”.
Yet although she said Google had stopped restricting rivals, Ms Vestager acknowledged that commercially “nothing has changed”, with Google still dominating the market in search advertising.
“That is a really sad example,” she said, saying it showed that even if a firm allowed competition it “doesn’t necessarily change anything in the marketplace because [it has] already won the market”.
Also in 2016, Ms Vestager fined Google a record-breaking €2.42bn (£2.1bn) for promoting its product advertising system Google Shopping ahead of rivals and downgrading their websites in search results.
Three years on, she said the changes she had required Google to make had “given more rivals visibility and more clicks to merchants that work with rivals, but very little traffic to the rivals themselves”.
She added: “We will keep monitoring this to see what should happen next.”
Admitting that fines on their own don’t work is an important step; the next step is realising that you need to shape a market before it gets filled (or won). But you can’t know which markets will be the next to thrive, so you need to act quickly. Possibly as tricky a time as when antitrust first emerged as a legal theory towards the end of the 19th century.
unique link to this extract
Sabine Hossenfelder is a research fellow at the Frankfurt Institute for Advanced Studies and author of the blog Backreaction:
what we have here in the foundation of physics is a plain failure of the scientific method. All these wrong predictions should have taught physicists that just because they can write down equations for something does not mean this math is a scientifically promising hypothesis. String theory, supersymmetry, multiverses. There’s math for it, alright. Pretty math, even. But that doesn’t mean this math describes reality.
Physicists need new methods. Better methods. Methods that are appropriate to the present century.
And please spare me the complaints that I supposedly do not have anything better to suggest, because that is a false accusation. I have said many times that looking at the history of physics teaches us that resolving inconsistencies has been a reliable path to breakthroughs, so that’s what we should focus on. I may be on the wrong track with this, of course. But for all I can tell at this moment in history I am the only physicist who has at least come up with an idea for what to do.
Why don’t physicists have a hard look at their history and learn from their failure? Because the existing scientific system does not encourage learning. Physicists today can happily make a career by writing papers about things no one has ever observed, and never will observe. This continues to go on because there is nothing and no one that can stop it.
You may want to put this down as a minor worry because – $40bn collider aside – who really cares about the foundations of physics? Maybe all these string theorists have been wasting tax money for decades, alright, but in the large scheme of things it’s not all that much money. I grant you that much. Theorists are not expensive.
But even if you don’t care what’s up with strings and multiverses, you should worry about what is happening here. The foundations of physics are the canary in the coal mine. It’s an old discipline and the first to run into this problem.
ICANN extracts $20m signing fee for $1bn dot-com price increases – and guess who’s going to pay for it? • The Register
Operator of the dot-com registry, Verisign, has decided to pay DNS overseer ICANN $4m a year for the next five years in order to “educate the wider ICANN community about security threats.”
Even though the generous $20m donation has nothing to do with ICANN signing off on an extension of the dot-com contract until 2024, the “binding letter of intent” [PDF] stating the exact amount of funding will be appended to the registry agreement that Verisign has with ICANN to run the dot-com registry.
That extension lifts a price freeze put in place several years ago and will allow Verisign to increase prices by 7% a year.
It’s an increase that we calculated was worth $993m and which the stock market appeared to agree with when it raised the company’s share price by 16% when the agreement was first flagged in November 2018.
No doubt ICANN’s lawyers are concerned that extracting $20m to sign a piece of paper worth $1bn to its recipient could be viewed negatively, perhaps by the cynical as a sign that it is a corrupt organization that is using its control of a critical market to feather its own nest. But that’s clearly not the case because, as ICANN makes plain, it would have approved the agreement anyway.
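The compounding behind that $993m figure is worth spelling out. A hedged sketch (assuming the widely reported $7.85 wholesale dot-com price as the base, and a 7% rise in each of four years, per the amended agreement):

```python
# Assumed starting point: the long-frozen dot-com wholesale price.
base_price = 7.85  # US dollars per domain-year

price = base_price
for year in range(4):  # a 7% increase applied in each of four years
    price *= 1.07

print(f"After four 7% rises: ${price:.2f} per domain-year")
```

Multiply a rise of a couple of dollars per domain-year across roughly 140 million registered dot-coms, over several years, and The Register’s near-billion-dollar estimate stops looking surprising.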
ICANN is the worst and continues to be the worst. The fact that such a terrible organisation is in charge of a key element of infrastructure is a depressing comment on our ability to organise anything.
unique link to this extract
Errata, corrigenda and ai no corrida: none notified