The BBC’s got a plan which would automatically tailor iPlayer content to users – and much more. CC-licensed photo by Barnaby_S on Flickr.
A selection of 9 links for you. Finally up to speed. I’m @charlesarthur on Twitter. Observations and links welcome.
I used Google ads for social engineering. It worked • The New York Times
Patrick Berlinquette:
»
You don’t have to be a marketer with years of experience to do this. You just need to follow the instructions and put up a credit card (a few hundred bucks will suffice).
Recently, I followed the [Google] blueprint [used against people searching for Isis propaganda] and created a redirect campaign of my own.
The first step was to identify the problem I wanted to address. I thought about Kevin Hines and how his fate might have changed if cellphones with Google had existed back in 2000 when he tried to take his own life.
Could Kevin [Hines, who tried to commit suicide by jumping off a bridge] have been redirected? Could he have been persuaded — by a few lines of ad copy and a persuasive landing page — not to jump? I wondered if I could redirect the next Kevin Hines. The goal of my first redirect campaign was to sway the ideology of suicidal people.
The problem my campaign addressed: Suicidal people are underserved on Google. In 2010, Google started making the National Suicide Prevention Lifeline the top result of certain searches relating to suicide. It also forced autocomplete not to finish such searches.
The weakness of Google’s initiative is that not enough variations of searches trigger the hotline. A search for “I am suicidal” will result in the hotline. But a search for “I’m going to end it” won’t always. “I intend to die” won’t ever. A lot of “higher-funnel” searches don’t trigger the hotline.
I hoped my redirect campaign would fill the gap in Google’s suicide algorithm. I would measure my campaign’s success by how many suicidal searchers clicked my ad and then called the number on my website, which forwarded to the National Suicide Prevention Lifeline.
«
Object-Based Media • BBC R&D
»
Object-based media allows the content of programmes to change according to the requirements of each individual audience member.
The ‘objects’ refer to the different assets that are used to make a piece of content. These could be large objects: the audio and video used for a scene in a drama – or small objects, like an individual frame of video, a caption, or a signer.
By breaking down a piece of media into separate objects, attaching meaning to them, and describing how they can be rearranged, a programme can change to reflect the context of an individual viewer.
We think this approach has potential to transform the way content is created and consumed: bringing efficiencies and creative flexibility to production teams, enabling them to deliver a personalised BBC to every member of our audience…
My Forecast
When I watch the weather forecast on iPlayer, I can choose to replace the speaking presenter with a signing one. Because it knows me, iPlayer gives me a signer as default. It syncs with my calendar, knows where I’m planning to go in the next week, and gives me hyper-local forecasts. Ideal for planning my festival wardrobe for Radio 1’s Big Weekend!

EastEnders Catch-up

I love EastEnders but with four episodes a week there’s a lot to catch up on after a fortnight in the sun. iPlayer knows what I’ve missed and it creates a catch-up episode of Enders just for me. All the juicy bits are there and I’m up to speed in 30 minutes instead of two hours.
«
Those are just two – the article points to plenty more things they can do. This is hugely ambitious, and they’re envisaging doing them within three years. Amazing if they can.
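To make the idea concrete, here is a toy sketch of object-based assembly: a programme is a pool of tagged media "objects", and a per-viewer profile decides which ones get stitched into the final edit. Everything here (class names, tags, the `assemble` function) is hypothetical illustration, not the BBC's actual pipeline.

```python
# Toy model of object-based media: tagged assets selected per viewer.
from dataclasses import dataclass, field

@dataclass
class MediaObject:
    asset: str                      # reference to a video/audio file
    role: str                       # "presenter", "scene", ...
    tags: set = field(default_factory=set)

def assemble(objects, profile):
    """Pick the objects that match this viewer's preferences."""
    edit = []
    for obj in objects:
        if obj.role == "presenter":
            # Substitute the signing presenter if the viewer prefers one
            want = "signer" if profile.get("prefers_signer") else "speaker"
            if want in obj.tags:
                edit.append(obj)
        elif profile.get("region") in obj.tags or "all" in obj.tags:
            edit.append(obj)
    return [o.asset for o in edit]

objects = [
    MediaObject("forecast_speaker.mp4", "presenter", {"speaker"}),
    MediaObject("forecast_signer.mp4", "presenter", {"signer"}),
    MediaObject("outlook_national.mp4", "scene", {"all"}),
    MediaObject("outlook_cardiff.mp4", "scene", {"cardiff"}),
]

profile = {"prefers_signer": True, "region": "cardiff"}
print(assemble(profile and objects, profile))
# ['forecast_signer.mp4', 'outlook_national.mp4', 'outlook_cardiff.mp4']
```

The hard part, of course, isn't the selection logic; it's producing, tagging and editorially sequencing all those objects at broadcast quality.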
unique link to this extract
Kids’ apps are filled with manipulative ads, according to a new study • Vox
Chavie Lieber:
»
suddenly, the game is interrupted. A bubble pops up with a new mini game idea, and when a child clicks on the bubble, they are invited to purchase it for $1.99, or unlock all new games for $3.99. There’s a red X button to cancel the pop-up, but if the child clicks on it, the character on the screen shakes its head, looks sad, and even begins to cry.
The game, developed by the Slovenian software company Bubadu and intended for kids as young as 6, is marketed as “educational” because it teaches kids about different types of medical treatments.
But it’s structured so that the decision to not buy anything from the game is wrong; the child is shamed into thinking they’ve done something wrong. Pulling such a move on a young gamer raises troubling ethical questions, especially as children’s gaming apps — and advertising within them — have become increasingly popular.
On Tuesday, a group of 22 consumer and public health advocacy groups sent a letter to the Federal Trade Commission calling on the organization to look into the questionable practices of the children’s app market. The letter asks the FTC to investigate apps that “routinely lure young children to make purchases and watch ads” and hold the developers of these games accountable.
«
Mozilla: No plans to enable DNS-over-HTTPS by default in the UK • ZDNet
Catalin Cimpanu:
»
After the UK’s leading industry group of internet service providers named Mozilla an “Internet Villain” because of its intentions to support a new DNS security protocol named DNS-over-HTTPS (DoH) inside Firefox, the browser maker told ZDNet that such plans don’t currently exist.
“We have no current plans to enable DoH by default in the UK,” a spokesperson told ZDNet last night.
The browser maker’s decision comes after both ISPs and the UK government, through MPs and GCHQ, have criticized Mozilla and fellow browser maker Google during the last two months for their plans to support DNS-over-HTTPS.
The technology, if enabled, would thwart the ability of some internet service providers to sniff customer traffic in order to block users from accessing bad sites, such as those hosting copyright-infringing materials, child abuse images, and extremist material.
UK ISPs block websites at the government’s request; they also block other sites voluntarily at the request of various child protection groups, and they block adult sites as part of parental controls options they provide to their customers.
Not all UK ISPs will be impacted by Mozilla and Google supporting DNS-over-HTTPS, as some use different technologies to filter customers’ traffic…
«
This is the story that the Sunday Times garbled horrendously about three months ago, talking of “plans to encrypt Chrome”, which puzzled everyone who understands what those words actually mean.
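For anyone wondering why DoH blinds ISP filtering: the DNS query travels inside an ordinary HTTPS request, here encoded per RFC 8484's GET form as an opaque base64url blob. A rough sketch (Cloudflare's public resolver URL used as the example endpoint; no request is actually sent):

```python
# Build a minimal DNS wire-format query and the RFC 8484 DoH GET URL for it.
import base64
import struct

def build_query(name: str, qtype: int = 1) -> bytes:
    """Minimal DNS query in wire format (qtype 1 = A record)."""
    # id=0 (as RFC 8484 suggests for cacheability), RD flag set, 1 question
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # qtype, qclass IN
    return header + question

def doh_url(name: str, resolver: str = "https://cloudflare-dns.com/dns-query") -> str:
    # RFC 8484: base64url encoding with padding stripped
    dns = base64.urlsafe_b64encode(build_query(name)).rstrip(b"=").decode()
    return f"{resolver}?dns={dns}"

print(doh_url("example.com"))
```

An ISP watching this traffic sees only a TLS connection to the resolver; the hostname being looked up never appears on the wire in the clear, which is exactly what upsets the filtering schemes described above.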
unique link to this extract
The fight for the future of YouTube • The New Yorker
Neima Jahromi:
»
Francesca Tripodi, a media scholar at James Madison University, has studied how right-wing conspiracy theorists perpetuate false ideas online. Essentially, they find unfilled rabbit holes and then create content to fill them. “When there is limited or no metadata matching a particular topic,” she told a Senate committee in April, “it is easy to coördinate around keywords to guarantee the kind of information Google will return.” Political provocateurs can take advantage of data vacuums to increase the likelihood that legitimate news clips will be followed by their videos. And, because controversial or outlandish videos tend to be riveting, even for those who dislike them, they can register as “engaging” to a recommendation system, which would surface them more often. The many automated systems within a social platform can be co-opted and made to work at cross purposes.
Technological solutions are appealing, in part, because they are relatively unobtrusive. Programmers like the idea of solving thorny problems elegantly, behind the scenes. For users, meanwhile, the value of social-media platforms lies partly in their appearance of democratic openness. It’s nice to imagine that the content is made by the people, for the people, and that popularity flows from the grass roots.
In fact, the apparent democratic neutrality of social-media platforms has always been shaped by algorithms and managers. In its early days, YouTube staffers often cultivated popularity by hand, choosing trending videos to highlight on its home page; if the site gave a leg up to a promising YouTuber, that YouTuber’s audience grew. By spotlighting its most appealing users, the platform attracted new ones. It also shaped its identity: by featuring some kinds of content more than others, the company showed YouTubers what kind of videos it was willing to boost. “They had to be super family friendly, not copyright-infringing, and, at the same time, compelling,” Schaffer recalled, of the highlighted videos.
«
Long, and absorbing; with the telling phrase that one ex-YouTube staffer “told me that hate speech had been a problem on YouTube since its earliest days.”
unique link to this extract
BA hit by biggest GDPR fine to date • Financial Times
Chris Nuttall:
»
The UK Information Commissioner’s Office says it intends to fine BA £183m (€204m, $229m) — 1.5% of BA’s worldwide turnover in 2017 — after it admitted that more than half a million customers’ data had been stolen by hackers last August from its website and mobile app.
Under pre-GDPR powers, the maximum penalty was £500,000 but this has now risen to up to 4% of turnover. In the first nine months of GDPR, national data protection agencies in 11 countries had levied a total of €56m in fines, made up mostly of a €50m fine that France’s CNIL imposed on Google in January.
The ICO said poor security arrangements at BA had given hackers access to personal data, including customer logins, payment card details, travel bookings and name and address information. BA will be able to make representations to the ICO over the finding and fine.
«
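The quoted figures can be sanity-checked with a bit of arithmetic: if £183m is 1.5% of BA's 2017 worldwide turnover, the implied turnover is about £12.2bn, and the GDPR ceiling of 4% would have been roughly £488m.

```python
# Back-of-envelope check of the ICO's numbers.
fine = 183e6          # £183m
rate = 0.015          # 1.5% of worldwide turnover

turnover = fine / rate
max_fine = turnover * 0.04

print(f"implied turnover: £{turnover/1e9:.1f}bn")   # £12.2bn
print(f"4% ceiling:       £{max_fine/1e6:.0f}m")    # £488m
```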
This, you’ll recall, was the remarkably clever Magecart scam, which replaced an innocent script from the BA baggage handling site to steal people’s credit card and other details when they paid for flights. Then BA found a second hacking script on the site, announced in October.
unique link to this extract
Over 1,300 Android apps scrape personal data regardless of permissions • TechRadar
David Lumb:
»
Researchers at the International Computer Science Institute (ICSI) created a controlled environment to test 88,000 apps downloaded from the US Google Play Store. They peeked at what data the apps were sending back, compared it to what users were permitting and – surprise – 1,325 apps were forking over specific user data they shouldn’t have.
Among the test pool were “popular apps from all categories,” according to ICSI’s report.
The researchers disclosed their findings to both the US Federal Trade Commission and Google (receiving a bug bounty for their efforts), though the latter stated a fix would only be coming in the full release of Android Q, according to CNET.
Before you get annoyed at yet another unforeseen loophole, those 1,325 apps didn’t exploit a lone security vulnerability – they used a variety of angles to circumvent permissions and get access to user data, including geolocation, emails, phone numbers, and device-identifying IMEI numbers.
One way apps determined user locations was to get the MAC addresses of connected WiFi base stations from the ARP cache, while another used picture metadata to discover specific location info even if a user didn’t grant the app location permissions. The latter is what the ICSI researchers described as a “side channel” – using a circuitous method to get data.
They also noticed apps using “covert channels” to snag info: third-party code libraries developed by a pair of Chinese companies secretly used the SD card as a storage point for the user’s IMEI number. If a user allowed a single app using either of those libraries access to the IMEI, it was automatically shared with other apps.
«
Android Q isn’t going to be universally adopted by any means. Data leaks are going to go on.
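The ARP-cache trick described above is strikingly low-tech: on affected Android builds an app could read `/proc/net/arp` with no permission at all and recover the MAC address of the Wi-Fi access point, which maps to a physical location via public BSSID databases. A sketch of the parsing, run on a sample of that file's format rather than a live device:

```python
# Parse an ARP table of the /proc/net/arp form and extract MAC addresses.
SAMPLE_ARP = """\
IP address       HW type     Flags       HW address            Mask     Device
192.168.1.1      0x1         0x2         a4:91:b1:0c:22:7e     *        wlan0
192.168.1.50     0x1         0x2         08:00:27:5d:10:9f     *        wlan0
"""

def gateway_macs(arp_table: str, iface: str = "wlan0"):
    """Return the MAC addresses listed for an interface in an ARP table."""
    macs = []
    for line in arp_table.splitlines()[1:]:      # skip the header row
        fields = line.split()
        if len(fields) >= 6 and fields[5] == iface:
            macs.append(fields[3])               # the "HW address" column
    return macs

print(gateway_macs(SAMPLE_ARP))
# ['a4:91:b1:0c:22:7e', '08:00:27:5d:10:9f']
```

No exotic exploit, just a world-readable file that happened to leak location-equivalent data, which is why the permissions dialog never entered into it.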
unique link to this extract
No flights, a four-day week and living off-grid: what climate scientists do at home to save the planet • The Guardian
Alison Green is one of many academics interviewed for this piece:
»
In July 2018, I came across Prof Jem Bendell’s Deep Adaptation paper, which was going viral online. Here was someone with credibility and a good track record who, having studied the science, was saying that we’re no longer looking at mitigation, we’re looking at adaptation; that societal collapse is inevitable.
People are starting to talk about the kind of spiritual awakening you get in these situations: an “ecophany”. I concluded that banging on about climate change on social media was not enough, and became involved with grassroots activism. Being a vice-chancellor no longer meant anything to me. I gave up my career, and I’m so much happier as a result. Now I talk at conferences and events about the need for urgent action and I have taken part in direct actions with Extinction Rebellion, including the closing of five London bridges last November and speaking in Parliament Square during the April rebellion.
The science shows that societal collapse could be triggered by any one of a number of things, and once triggered, it could happen quite quickly. I suppose I’m being protective towards my four children, aged between 16 and 24, but in the event, I feel I need to be somewhere where I’m growing my own food, living in an eco-house, trying to live off-grid. It would give me some security; I don’t feel secure where I live in Cambridge at the moment – I’m concerned by thoughts like, “What would happen if I turned the tap on and there was no water?” On our current trajectory, cities will not necessarily be safe places in the future – possibly within my own lifetime, certainly within my children’s.
«
Societal collapse. Just a phrase to roll around your head.
unique link to this extract
Europe built a system to fight Russian meddling. It’s struggling • The New York Times
Matt Apuzzo:
»
Efforts to identify and counter disinformation have proven not only deeply complicated, but also politically charged.
The new Rapid Alert System — a highly touted network to notify governments about Russian efforts before they metastasized as they did during the 2016 American elections — is just the latest example.
Working out of a sixth-floor office suite in downtown Brussels this spring, for example, European analysts spotted suspicious Twitter accounts pushing disinformation about an Austrian political scandal. Just days before the European elections, the tweets showed the unmistakable signs of Russian political meddling.
So European officials prepared to blast a warning on the alert system. But they never did, as they debated whether it was serious enough to justify sounding an alarm. In fact, even though they now speak of spotting “continued and sustained disinformation activity from Russian sources,” they never issued any alerts at all.
«
“Struggling”, in the headline, is generous.
unique link to this extract
Errata, corrigenda and ai no corrida: none notified
You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.