Normally there would be a Flickr photo here, but Flickr is undergoing “improvements”, and you know what that means.
You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.
A selection of 11 links for you. “Flickr is unavailable”. I’m @charlesarthur on Twitter. Observations and links welcome.
Instead of locking YouTube up, Mohan and his team are trying to tame it as best they can, with computers, humans, and a set of constantly updated guidelines for those computers and humans to follow.
During my conversation with Mohan, he mentioned, over and over, those guidelines and the work the company has done to update them over the last few years.
That emphasis surprised me: I would have thought that the problem was the sheer volume of horrible things people are uploading, which is why YouTube took down a staggering 8.3 million videos in the first three months of this year. The company uses a combination of software and humans — at least 10,000 people have been hired to help flag offensive content — to find and remove those videos.
But if I understood Mohan correctly, he’s arguing that computers and humans can’t do anything without rules to follow. And that YouTube thinks refining and changing those rules is core to the work it’s doing to clean up the site. He’s also arguing that those rules will have to allow some videos that you might not like to remain on the site.
“In some cases, some of those videos … might be something that lots of users might find objectionable but are not violating our policies as they stand today,” he said.
That makes sense (though Bloomberg has reported, convincingly, that YouTube turned a blind eye to some of its worst content because it was more concerned about increasing engagement). But it doesn’t explain a recurring story for YouTube, where users or journalists find offensive (or worse) videos and point them out to YouTube, which then takes them down.
David Carroll, an associate professor of media design at Parsons School of Design, said in the past he’s seen relatively high-quality promoted tweets. But “it’s pretty shocking to see what garbage is circulating” recently on the platform, he told BuzzFeed News.
The onslaught of junky ads and associated user complaints is the latest challenge for Twitter’s promoted tweets product. While popular with advertisers, it has in the past been exploited by Bitcoin scammers, as well as those that masqueraded as Twitter itself and falsely claimed to offer account verification services.
Other screenshots of promoted tweets sent to BuzzFeed News evoke the kind of articles promoted in the content ad units provided by companies such as Taboola and RevContent. Carroll said these kinds of ads sometimes include false or misleading claims and therefore “pose a challenge for Twitter’s stance on how far it will go to police truth-in-advertising.”
In other cases, people sent BuzzFeed News images of alleged promoted tweets that made little, if any, sense.
Can’t understand why people don’t use third-party apps such as Tweetbot. No ads there (if you get the paid version).
unique link to this extract
Joe Fitzgerald Rodriguez:
SFMTA [San Francisco Municipal Transportation Agency] also recommends state regulators instate a local “advisory body” to keep a watchful eye on Uber and Lyft’s disability services.
That’s especially key, as, without any prompting from state or local lawmakers, Uber and Lyft have for years left wheelchair users at the curb.
The report highlights a steep drop-off of ramp-enabled taxi services for people who use wheelchairs during the rise of Uber and Lyft. While wheelchair users can ride Muni buses, and have access to pre-planned trips using San Francisco’s robust paratransit services, impromptu trips are needed by us all, the report notes.
From a scheduling change at the doctor’s office to a sudden (and perhaps welcome) romantic date, life happens. But whereas years ago San Francisco’s estimated 5,000 people who use wheelchairs could catch a cab, that’s less possible now, especially because Uber and Lyft do not widely provide wheelchair accessible vehicles in San Francisco.
While SFMTA cannot track all wheelchair taxi trips, it can measure the riding habits of wheelchair users who partake in city subsidies.
In 2013 there were roughly 1,400 monthly subsidized wheelchair-ramp taxi rides, but by 2018 that number dropped to roughly 500 monthly requests.
That’s not because there were fewer wheelchair users, or because those wheelchair users requested fewer rides, according to SFMTA. There simply weren’t enough taxi drivers available anymore after the rise of Uber and Lyft, leaving people stranded.
Of particular concern to journalism advocates is the fact that Assange faces charges not only for working with Manning to obtain classified information, but also for publishing it.
“Assange is no journalist,” John Demers, the head of the Justice Department’s National Security Division, told reporters Thursday. The Justice Department maintains that Assange was complicit with and conspired with Manning in WikiLeaks’ publication of classified materials.
Manning, whose sentence was commuted by former President Barack Obama in the final days of his presidency, recently spent several weeks in jail after being held in contempt for refusing to testify before the grand jury. She was sent back to jail last week, and remained in jail as of Thursday.
Bad move. Assange faces charges for his role in helping Manning get the data. But publishing it? On that basis you’d have to charge people at the newspapers which published emails stolen by Russians from the DNC. You’ll note that’s not happening, because it’s not enforceable under the US’s 1st Amendment. I wonder if (some of) these charges will fail on that basis too.
unique link to this extract
The researchers behind the New Media + Society paper set out to understand this odd quirk of Google’s algorithm, and to find out why the company seemed to be serving some markets better than others. They developed a list of 28 keywords and phrases related to suicide, Scherr says, and worked with nine researchers from different countries who accurately translated those terms into their own languages. For 21 days, they conducted millions of automated searches for these phrases, and kept track of whether hotline information showed up or not.
They thought these results might simply, logically, show up in countries with higher suicide rates, but the opposite was true. Users in South Korea, which has one of the world’s highest suicide rates, were only served the advice box about 20% of the time. They tested different browser histories (some completely clean, some full of suicide-related topics), with computers old and new, and ran searches in 11 different countries.
It didn’t seem to matter: the advice box was simply much more likely to be shown to people using Google in the English language, particularly in English-speaking countries (though not in Canada, which Scherr speculates was probably down to geographical rollout). “If you’re in an English-speaking country, you have over a 90% chance of seeing these results — but Google operates differently depending on which language you use,” he said. Scherr suggests that using keywords may simply have been the easiest way to implement the project, but adds that it wouldn’t take much to offer it more effectively in other countries, too.
A Google spokesperson, who asked not to be quoted directly, said that the company is refining these algorithms.
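The study’s core loop — run a search, record whether the advice box appears, then aggregate show rates per country — can be sketched roughly as below. This is not the researchers’ actual code; the marker strings and data shapes are invented for illustration.

```python
# Minimal sketch of the study's bookkeeping: for each (query, country)
# search, record whether a suicide-prevention advice box appeared in the
# results page, then compute a per-country show rate. The marker strings
# below are hypothetical stand-ins for however the box is detected.
from dataclasses import dataclass

ADVICE_MARKERS = ("suicide prevention", "helpline", "hotline")  # hypothetical

@dataclass
class Observation:
    query: str
    country: str
    advice_box_shown: bool

def check_result_page(query: str, country: str, html: str) -> Observation:
    """Record whether any advice-box marker appears in the results page."""
    text = html.lower()
    shown = any(marker in text for marker in ADVICE_MARKERS)
    return Observation(query, country, shown)

def show_rate(observations) -> dict:
    """Fraction of searches per country in which the advice box appeared."""
    totals, hits = {}, {}
    for o in observations:
        totals[o.country] = totals.get(o.country, 0) + 1
        hits[o.country] = hits.get(o.country, 0) + int(o.advice_box_shown)
    return {c: hits[c] / totals[c] for c in totals}
```

Run over 21 days of automated searches, a table like this is what would surface the ~20% (South Korea) versus >90% (English-speaking) gap the paper reports.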
We can already make a face in one video reflect the face in another in terms of what the person is saying or where they’re looking. But most of these models require a considerable amount of data, for instance a minute or two of video to analyze.
The new paper by Samsung’s Moscow-based researchers, however, shows that using only a single image of a person’s face, a video can be generated of that face turning, speaking and making ordinary expressions — with convincing, though far from flawless, fidelity.
It does this by frontloading the facial landmark identification process with a huge amount of data, making the model highly efficient at finding the parts of the target face that correspond to the source. The more data it has, the better, but it can do it with one image — called single-shot learning — and get away with it. That’s what makes it possible to take a picture of Einstein or Marilyn Monroe, or even the Mona Lisa, and make it move and speak like a real person.
Film makers of all stripes will love this. But it’s also going to make the fake news of 2016 look like kiddies’ play.
Amazon.com Inc. is developing a voice-activated wearable device that can recognize human emotions.
The wrist-worn gadget is described as a health and wellness product in internal documents reviewed by Bloomberg. It’s a collaboration between Lab126, the hardware development group behind Amazon’s Fire phone and Echo smart speaker, and the Alexa voice software team.
Designed to work with a smartphone app, the device has microphones paired with software that can discern the wearer’s emotional state from the sound of his or her voice, according to the documents and a person familiar with the program. Eventually the technology could be able to advise the wearer how to interact more effectively with others, the documents show…
…A US patent filed in 2017 describes a system in which voice software uses analysis of vocal patterns to determine how a user is feeling, discerning among “joy, anger, sorrow, sadness, fear, disgust, boredom, stress, or other emotional states.” The patent, made public last year, suggests Amazon could use knowledge of a user’s emotions to recommend products or otherwise tailor responses.
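The classification step the patent describes — mapping vocal patterns to one of a fixed set of emotional states — could be sketched as below. Amazon’s actual method is not public; the features, thresholds, and rules here are invented purely for illustration.

```python
# Toy sketch of the kind of classifier the patent describes: map simple
# acoustic features of a voice sample to one of the emotional states
# listed in the filing. Every feature and threshold here is hypothetical.

EMOTIONS = ["joy", "anger", "sorrow", "sadness", "fear",
            "disgust", "boredom", "stress"]

def classify_emotion(mean_pitch_hz: float, energy: float,
                     speech_rate: float) -> str:
    """Rule-based mapping from vocal features to an emotion label.

    mean_pitch_hz: average fundamental frequency of the sample
    energy: normalised loudness in [0, 1]
    speech_rate: syllables per second
    """
    if energy > 0.8 and mean_pitch_hz > 220:
        # loud, high-pitched speech: excited states
        return "anger" if speech_rate > 4.5 else "joy"
    if energy < 0.3 and speech_rate < 2.5:
        # quiet, slow speech: low-arousal states
        return "boredom" if mean_pitch_hz > 150 else "sadness"
    return "stress" if speech_rate > 5.0 else "fear"
```

A real system would replace these rules with a model trained on labelled audio, but the input/output shape — features in, one label from a fixed vocabulary out — is what the patent's recommendation step would consume.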
So it’ll be more adept than its early testers?
unique link to this extract
Ana Swanson and Edward Wong:
The Trump administration is considering limits to a Chinese video surveillance giant’s ability to buy American technology, people familiar with the matter said, the latest attempt to counter Beijing’s global economic ambitions.
The move would effectively place the company, Hikvision, on a United States blacklist. It also would mark the first time the Trump administration punished a Chinese company for its role in the surveillance and mass detention of Uighurs, a mostly Muslim ethnic minority.
The move is also likely to inflame the tensions that have escalated in President Trump’s renewed trade war with Chinese leaders. The president, in the span of two weeks, has raised tariffs on $200 billion worth of Chinese goods, threatened to tax all imports and taken steps to cripple the Chinese telecom equipment giant Huawei. China has promised to retaliate against American industries.
Hikvision is one of the world’s largest manufacturers of video surveillance products and is central to China’s ambitions to be the top global exporter of surveillance systems. The Commerce Department may require that American companies obtain government approval to supply components to Hikvision, limiting the company’s access to technology that helps power its equipment.
Hmm. I could get behind this, as a proportionate (and feasible) punishment for enabling the forced detention of a million people simply on religious grounds.
unique link to this extract
Over the past year I have interviewed 20 people, the majority of whom used only their first name or a cover name to protect their identity. At all times, I was escorted by members of the agency’s press and security staff.
The picture that emerged is of an organisation still heavily bound up in its traditional work of secretive code-cracking and surveillance, but also braced for another wave of technological change that is thrusting it and its staff of 6,000 people into the spotlight.
As the nature of intelligence work becomes increasingly digital, GCHQ is no longer a passive collector and distributor of intelligence, but is transforming into a key player in offensive combat operations.
“In the past, you could characterise what we did as producing pieces of paper which we handed to government who could take action,” explains Tony Comer, GCHQ’s historian and one of just seven people allowed to speak publicly on its behalf. “Now we are the ones actually taking the action.”
Nearly three decades after the birth of the world wide web forced GCHQ to rapidly shift from cold war-era listening posts to a digital surveillance and security service, the arrival of artificial intelligence and machine learning, the internet of things and the sheer scale and complexity of modern online communications is upending the agency again, forcing it to rethink how it delivers its expanding mission…
…In the coming months, Britain will launch a new offensive cyber force, made up of more than 2,000 people, which will build significantly on existing powers to initiate online operations that can degrade or destroy computer networks and have real-world effects, such as turning off energy grids or water supplies. While no decision has yet been made public, the force is expected to be led by GCHQ.
If Britain has one, then it’s a good bet that the US and China do.
unique link to this extract
Brian X. Chen and Cade Metz:
“It sounded very real,” Mr. Tran said in an interview after hanging up the call with Google. “It was perfectly human.”
Google later confirmed, to our disappointment, that the caller had been telling the truth: He was a person working in a call center. The company said that about 25% of calls placed through Duplex started with a human, and that about 15% of those that began with an automated system had a human intervene at some point.
We tested Duplex for several days, calling more than a dozen restaurants, and our tests showed a heavy reliance on humans. Among our four successful bookings with Duplex, three were done by people. But when calls were actually placed by Google’s artificially intelligent assistant, the bot sounded very much like a real person and was even able to respond to nuanced questions.
In other words, Duplex, which Google first showed off last year as a technological marvel using AI, is still largely operated by humans. While AI services like Google’s are meant to help us, their part-machine, part-human approach could contribute to a mounting problem: the struggle to decipher the real from the fake, from bogus reviews and online disinformation to bots posing as people.
Forgivable; these are still very early days for this technology. Did you expect you’d be able to say “a machine will be able to make a booking with a restaurant, and it will seem like a human” a couple of years ago?
unique link to this extract
TfL’s use of Wi-Fi data is particularly interesting, however, because of its sheer scale. The 2016 trial collected 509 million pieces of data from 5.6m mobile devices on 42m journeys. Until now, all TfL has known about your journey is where you tapped in and out, if you were using an Oyster Card or contactless payments. Wi-Fi can fill in the gaps. Transport planners will be able to see exactly which route between two stations was taken by customers, and how they move around each station.
The trial data contained some intriguing insights, including the convoluted paths that some customers take. While the majority of those travelling between Liverpool Street and Victoria changed at Oxford Circus, 2% of travellers inexplicably took the Central line to Holborn, then the Piccadilly line to Green Park, then the Victoria line to Victoria. It also revealed that passengers have 18 different ways to get between King’s Cross and Waterloo, and that it takes 86 seconds to get from the ticket hall to the platforms at Victoria.
TfL plans to use the data to model passenger behaviour, and squeeze more capacity out of the existing tube network. It can, for example, show how passengers react to problems on the network. When the Waterloo and City line was suspended in December 2016, TfL was able to use Wi-Fi data to see exactly what alternatives people took.
Apps that use TfL data – such as Google Maps and CityMapper – will also be able to use the data, to incorporate information about delays and congestion. If Wi-Fi beacons detect queues forming in a ticket hall, apps could suggest alternative routes for subsequent travellers.
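The route reconstruction described above — turning timestamped Wi-Fi sightings of devices at station access points into per-route counts — can be sketched as follows. This is an assumption-laden illustration, not TfL’s system; device IDs would in practice be hashed, and the station names and data shapes are illustrative.

```python
# Minimal sketch: group timestamped Wi-Fi sightings by device, order them
# in time, collapse repeated sightings at the same station, and count how
# many devices took each distinct station-to-station route.
from collections import Counter
from itertools import groupby

def journeys_from_sightings(sightings):
    """sightings: iterable of (device_id, timestamp, station) tuples.
    Returns {device_id: ordered tuple of stations visited}."""
    paths = {}
    for device, rows in groupby(sorted(sightings), key=lambda r: r[0]):
        stations = [station for _, _, station in rows]
        # collapse consecutive duplicate sightings at the same station
        paths[device] = tuple(s for s, _ in groupby(stations))
    return paths

def route_counts(paths):
    """Count how many devices took each distinct route."""
    return Counter(paths.values())
```

Counts like these are what would reveal, for example, that most Liverpool Street–Victoria passengers change at Oxford Circus while a small fraction take a far more roundabout route.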
There’s also a clear commercial incentive – which may be particularly important to TfL given the dual blows of the Crossrail delay and the loss of its central government grant.
Lots of privacy concerns; but TfL isn’t really interested in invading privacy or tracking individuals. I guess the problem comes if there’s an administration which wants to invade privacy and track people.
unique link to this extract
Errata, corrigenda and ai no corrida: none notified