If Marx used an Android phone, you’d be able to break into it. CC-licensed photo by Stuart Chalmers on Flickr.
It’s charity time: ahead of Christmas, I’m encouraging readers to make a donation to charity; a different one each day.
Book Aid International: every £2 you give could send another book to a child living with war.
Readers in the US can donate too. Please give as generously as you feel you can.
A selection of 9 links for you. Use them wisely. I’m @charlesarthur on Twitter. Observations and links welcome.
The newest and most glaring example of just how reckless corporations in the autonomous vehicle space can be involves the now-infamous fatal crash in Tempe, Arizona, where one of Uber’s cars struck and killed a 49-year-old pedestrian. The Information obtained an email reportedly sent by Robbie Miller, a former manager in the testing-operations group, to seven Uber executives, including the head of the company’s autonomous vehicle unit, warning that the software powering the taxis was faulty and that the backup drivers weren’t adequately trained.
“The cars are routinely in accidents resulting in damage,” Miller wrote. “This is usually the result of poor behavior of the operator or the AV technology. A car was damaged nearly every other day in February. We shouldn’t be hitting things every 15,000 miles. Repeated infractions for poor driving rarely results in termination. Several of the drivers appear to not have been properly vetted or trained.”
That’s nuts. Hundreds of self-driving cars were on the road at the time, in San Francisco, Pittsburgh, Santa Fe, and elsewhere. The AV technology was demonstrably faulty, the backup drivers weren’t staying alert, and despite repeated incidents—some clearly dangerous—nothing was being addressed. Five days after the date of Miller’s email, a Volvo using Uber’s self-driving software struck Elaine Herzberg while she was slowly crossing the street with her bicycle and killed her. The driver was apparently streaming The Voice on Hulu at the time of the accident.
This tragedy was not a freak malfunction of some cutting-edge technology—it is the entirely predictable byproduct of corporate malfeasance.
There isn’t a great deal that’s new here (apart from his efforts to get Tesla to explain its thinking on autonomous driving), but gathering it in one place is quite startling.
link to this extract
Google has been forced to shut down a data analysis system it was using to develop a censored search engine for China after members of the company’s privacy team raised internal complaints that it had been kept secret from them, The Intercept has learned.
The internal rift over the system has had massive ramifications, effectively ending work on the censored search engine, known as Dragonfly, according to two sources familiar with the plans. The incident represents a major blow to top Google executives, including CEO Sundar Pichai, who have over the last two years made the China project one of their main priorities.
The dispute began in mid-August, when The Intercept revealed that Google employees working on Dragonfly had been using a Beijing-based website to help develop blacklists for the censored search engine, which was designed to block out broad categories of information related to democracy, human rights, and peaceful protest, in accordance with strict rules on censorship in China that are enforced by the country’s authoritarian Communist Party government.
There’s some doubt, even among those who pushed against this, whether Google really has shut it down. Wait and see.
link to this extract
Jennifer Valentino-DeVries on how they got the data for that “your phones are tracking you, and the data is being sold” story from last week:
I wrote an article in May about a company that bought access to data from the major US cellphone carriers. My reporting showed that the company, Securus Technologies, allowed law enforcement to get this data, and officers were using the information to track people’s locations without a warrant. After that article ran, I started getting tips that the use of location data from cellphones was more widespread than I had initially reported. One person highlighted a thread on Hacker News, an online forum popular with technologists. On the site, people were anonymously discussing their work for companies that used people’s precise location data.
I called sources who knew about mapping and location data. Many had worked in that field for more than a decade. I also partnered with other Times reporters, Natasha Singer and Adam Satariano, who were looking into something similar. These conversations were the start of an investigation into how smartphone apps were tracking people’s locations, and the revelation that the tipsters were right — selling location data was common and lucrative.
On a big investigation like this one, hours and even days of work can go into a single paragraph or even a sentence. This is especially true in technology investigations because the subject matter is so detailed; combing through data and conducting technical tests is time consuming.
Remove Image Background FREE, 100% automatically – in 5 seconds – without a single click.
Remove.bg is a free service to remove the background of any photo. It works 100% automatically: You don’t have to manually select the background/foreground layers to separate them – just select your image and instantly download the result image with the background removed!
Uses “sophisticated AI technology”. Only works on people and faces. They say they delete the results from their servers after an hour.
Pretty good, if you have a need for cutouts.
link to this extract
For our tests, we used my own real-life head to register for facial recognition across five phones. An iPhone X and four Android devices: an LG G7 ThinQ, a Samsung S9, a Samsung Note 8 and a OnePlus 6. I then held up my fake head to the devices to see if the device would unlock. For all four Android phones, the spoof face was able to open the phone, though with differing degrees of ease. The iPhone X was the only one to never be fooled.
There were some disparities between the Android devices’ security against the hack. For instance, when first turning on a brand new G7, LG actually warns the user against turning facial recognition on at all. “Face recognition is a secondary unlock method that results in your phone being less secure,” it says, noting that a similar face can unlock your phone. No surprise then that, on initial testing, the 3D-printed head opened it straightaway.
Yet during filming, it appeared the LG had been updated with improved facial recognition, making it considerably more difficult to open. As an LG spokesperson told Forbes, “The facial recognition function can be improved on the device through a second recognition step and advanced recognition which LG advises through setup. LG constantly seeks to make improvements to its handsets on a regular basis through updates for device stability and security.” They added that facial recognition was seen as “a secondary unlock feature” to others like a PIN or fingerprint.
There’s a similar warning on the Samsung S9 on sign up. “Your phone could be unlocked by someone or something that looks like you,” it notes. “If you use facial recognition only, this will be less secure than using a pattern, PIN or password.” Oddly, though, on setting up the device the first presented option for unlocking was facial and iris recognition.
Windows Hello didn’t let him in either. An absurd spinoff of this story (not by Brewster) suggests police might now use 3D printed heads to break into suspects’ phones. Duh. You just show the phone to them. (Assuming you’ve got them before the unlock timeout.)
link to this extract
Conversations around the [Russian] Internet Research Agency [IRA] operations traditionally have focused on Facebook and Twitter, but like any hip millennial, the IRA was actually most obsessive about Instagram. “Instagram was perhaps the most effective platform for the Internet Research Agency,” the New Knowledge researchers write. All in, the troll accounts received 187 million engagements on Instagram, and about 40% of the accounts they created had at least 10,000 followers.
That isn’t to say, however, that the trolls neglected Twitter. There, the IRA deployed 3,841 accounts, including several personas that “regularly played hashtag games.” That approach paid off; 1.4 million people engaged with the tweets, leading to nearly 73 million engagements. Most of this work was focused on news, while on Facebook and Instagram, the Russians prioritized “deeper relationships,” according to the researchers. On Facebook, the IRA notched a total of 3.3 million page followers, who engaged with their politically divisive content 76.5 million times. Russia’s most popular pages targeted the right wing and the black community. The trolls also knew their audiences; they deployed Pepe memes at pages intended for right-leaning millennials, but kept them away from posts directed at older conservative Facebook users. Not every attempt was a hit; while 33 of the 81 IRA Facebook pages had over 1,000 followers, dozens had none at all.
That the IRA trolls aimed to pit Americans against each other with divisive memes is now well known. But this latest report reveals just how bizarre some of the IRA’s outreach got. To collect personally identifying information about targets, and perhaps use it to create custom and Lookalike audiences on Facebook, the IRA’s Instagram pages sold all kinds of merchandise. That includes LGBT sex toys and “many variants of triptych and 5-panel artwork featuring traditionally conservative, patriotic themes.”
For May’s government, populist news sites are an increasing threat. Under previous prime ministers, like Tony Blair, Gordon Brown — or even the early years of David Cameron — a handful of newspapers and television stations served as news gatekeepers, picking out what they considered important and beaming it to a mass audience.
Some publications were hostile, of course, but they were known quantities, their editors contactable, their reporters easy to berate. Today’s news media has broken completely free of these bounds.
News, fake news, information and disinformation now reaches voters through a collection of social media pages, messaging apps, video platforms and anonymous websites spreading content beyond the control of anyone in Whitehall — or the Élysée in France, as Emmanuel Macron is discovering.
“Who do you ring?” asked one exasperated No. 10 official when asked about these sites. “You don’t know who these people are.”
At 12:50 p.m. on April 25, 2018, a new British political news website was registered in Scottsdale, Arizona. Within weeks, PoliticalUK.co.uk was producing some of the most viral news stories in the U.K. and had been included on briefing notes circulated in No. 10.
The website — specializing in hyper-partisan coverage of Brexit, Islam and Tommy Robinson — has no named editor and one reporter using a pen name. Its owner is anonymous, having registered the site with the U.S. firm “Domains By Proxy” whose catch line, beaming out from its homepage, reads: “Your privacy is nobody’s business but ours.”
The website itself does not provide any contact details. It has no mission statement. It has a small but growing following on Twitter but no branded Facebook page or YouTube channel.
And yet, since PoliticalUK.co.uk started publishing stories at the end of April, the site has amassed more than 3 million interactions on social media, with an average of 5,000 “engagements” for every story it has published — far more than most national newspapers.
NY Times columnist Nick Kristof led the charge to get Facebook to censor content, now whining that Facebook censors his content • Techdirt
When pushing for FOSTA, Kristof wrote the following:
Even if Google were right that ending the immunity for Backpage might lead to an occasional frivolous lawsuit, life requires some balancing.
For example, websites must try to remove copyrighted material if it’s posted on their sites. That’s a constraint on internet freedom that makes sense, and it hasn’t proved a slippery slope. If we’re willing to protect copyrights, shouldn’t we do as much to protect children sold for sex?
As we noted at the time, this was an astoundingly ignorant thing to say, but of course now that Kristof helped get the law passed and put many more lives at risk, the “meh, no big deal if there are some more lawsuits or more censorship” attitude seems to be coming back to bite him.
You see, last week, Kristof weighed in on US policy in Yemen. The core of his argument was to discuss the horrific situation of Abrar Ibrahim, a 12-year-old girl who is starving in Yemen, and weighs just 28 pounds. There’s a giant photo of the emaciated Ibrahim atop the article, wearing just a diaper. It packs an emotional punch, just as intended.
But, it turns out that Facebook is blocking that photo of Ibrahim, claiming it is “nudity and sexual content.” And, boy, is Kristof mad about it. [He tweeted his outrage that Facebook “repeatedly blocked the photo”.]
Hey, Nick, you were the one who insisted that Facebook and others in Silicon Valley needed to ban “sexual content” or face criminal liability. You were the one who insisted that any collateral damage would be minor. You were the one who said there was no slippery slope.
Yet, here is a perfect example of why clueless saviors like Kristof always make things worse, freaking out about something they don’t understand, prescribing the exact wrong solution. Moderating billions of pieces of content leads to lots of mistakes.
In its way, almost exactly the same mistake as with the famous “napalm girl” in 2016. That one involved the Norwegian prime minister. Facebook’s systems haven’t improved since then.
link to this extract
This trial marks the seventh such trial in London since 2016. In addition to the December 17-18 tests, authorities have said there will be three more tests which have yet to be scheduled.
According to the police, these trials, which “will be used overtly with a clear uniformed presence and information leaflets will be disseminated to the public,” are set to take place specifically in the vicinity of Soho, Piccadilly Circus, and Leicester Square.
The Met noted in a statement that anyone who declines to be scanned “will not be viewed as suspicious by police officers.”
Law enforcement in South Wales, among other locales around the United Kingdom, has also previously tested this technology. Numerous tests in the United States have shown that this technology can be flawed, particularly when in use against non-white suspects.
Here in the US, the technology has already become quietly pervasive.
You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.
Errata, corrigenda and ai no corrida: none notified