A selection of 11 links for you. Use them wisely. I’m @charlesarthur on Twitter. Observations and links welcome.
When Facebook invited journalists for a phone briefing on Tuesday evening to talk about its progress in tackling hate speech in Myanmar, it seemed like a proactive, well-intentioned move from a company that is typically fighting PR fires on several fronts.
But the publication of a bombshell Reuters investigation on Wednesday morning suggested otherwise: the press briefing was an ass-covering exercise.
This is the latest in a series of strategic mishaps as the social network blunders its way through the world like a giant, uncoordinated toddler that repeatedly soils its diaper and then wonders where the stench is coming from. It enters markets with wide-eyed innocence and a mission to “build [and monetise] communities”, but ends up tripping over democracies and landing in a pile of ethnic cleansing. Oopsie!
Since 2013, human rights groups and researchers have been warning Facebook that its platform was being used to spread misinformation and promote hatred of Muslims, particularly the Rohingya. As its user base exploded to 18 million, so too did hate speech; the company was slow to react, and earlier this year a UN investigator accused its platform of fuelling anti-Muslim violence.
The Australian journalist and researcher Aela Callan warned Facebook about the spread of anti-Rohingya posts on the platform in November 2013. She met with the company’s most senior communications and policy executive, Elliott Schrage. He referred her to staff at Internet.org, the company’s effort to connect the developing world, and a couple of Facebook employees who dealt with civil society groups. “He didn’t connect me to anyone inside Facebook who could deal with the actual problem,” she told Reuters.
But was there anyone there who could deal with the actual problem? The most effective way would have been to turn it off.
link to this extract
While ARM already believes that its recently unveiled Cortex-A76 is competitive with Intel’s 2.6GHz Core i5-7300U, it expects its 2019 “Deimos” and 2020 “Hercules” designs to clearly outperform that CPU. You would get “laptop-class” speed from a more efficient mobile chip, according to the company.
Of course, it’s worth taking ARM’s braggadocio with a grain of salt. The figures don’t include Intel’s comparable 8th-generation Core chips that pack twice as many cores and could easily shrink the performance gap. This is also based on one synthetic, integer-oriented benchmark (SPEC CINT2006), not a broader suite of tests that would measure floating point math and other performance traits. ARM is putting its best foot forward rather than offering definitive proof.
Even so, it’s telling that ARM might be in the ballpark.
The argument is strong apart from the suggestion that PC OEMs would switch from Intel to ARM. I just don't think it would happen. Fine, Windows could manage it. Could third-party apps? Nope. Only Apple might be able to strongarm enough developers into doing that – or ship an emulator that could.
link to this extract
A few years ago, when Snap was a fast-growing, hot commodity venerated by venture capitalists, some would occasionally ask whether it would end up like Twitter—a niche platform with dedicated users but without the broad scale of, say, Facebook.
It’s now becoming clear the answer is yes.
As the chart above shows, Snap’s user growth has slowed sharply since the third quarter of last year to around 9 to 10% year on year from as high as 65% just two years ago. The slower growth rate puts Snap right in line with Twitter’s recent trends. Twitter, of course, is another once-hot company that started four years before Snap and whose user growth slowed sharply starting around 2014.
Of course, all companies’ growth rates slow eventually, once they reach a certain size. But Snap didn’t even reach 200 million daily active users—it finished the June quarter with 188 million. Twitter doesn’t reveal how many daily active users it has (although it does disclose its quarterly DAU growth rate). A person with knowledge of Twitter’s finances says it has far fewer DAUs than Snap. In comparison, Facebook has 1.4 billion DAUs and is still growing that number at more than 10% year over year.
The question raised by Snap’s user growth slowdown is to what extent Snap’s ad revenue growth will also echo Twitter’s. The older company’s advertising revenue surged for a couple of years after user growth weakened, before slowing sharply in 2016—and falling in 2017. (It picked up in the first half of this year, however, rising 22%.)
Does everything have to be as big as Facebook, though? Can’t they just be quietly successful with hundreds of millions of users?
link to this extract
Hundreds of Google employees, upset at the company’s decision to secretly build a censored version of its search engine for China, have signed a letter demanding more transparency to understand the ethical consequences of their work.
In the letter, which was obtained by The New York Times, employees wrote that the project and Google’s apparent willingness to abide by China’s censorship requirements “raise urgent moral and ethical issues.” They added, “Currently we do not have the information required to make ethically-informed decisions about our work, our projects, and our employment.”
The letter is circulating on Google’s internal communication systems and is signed by about 1,000 employees, according to two people familiar with the document, who were not authorized to speak publicly.
The protest presents another obstacle for Google’s potential return to China eight years after the company publicly withdrew from the country in protest of censorship and government hacking. China has the world’s largest internet audience but has frustrated American tech giants with content restrictions or outright blockages of services including Facebook and Instagram.
Gallagher, who writes at The Intercept, with some “cutting-room details” from his other stories on Google’s China project; it turns out that Sergey Brin has a 240ft, $80m yacht named “Dragonfly” – the same as the China project:
After Google pulled its search engine out of China in 2010, Brin said of the Chinese government: “In some aspects of their policy, particularly with respect to censorship, with respect to surveillance of dissidents, I see the same earmarks of totalitarianism, and I find that personally quite troubling.”
It’s clear Brin was at the time genuinely uncomfortable with the censorship – he didn’t just say what he did for public relations reasons. I have heard this from several people inside the company who spent years working with him. He took a principled stand and had arguments with colleagues over the issue.
In recent years, Brin has taken a more hands-off role at Google. Since 2015, CEO Sundar Pichai has taken the helm, and he has steered the company’s policy on China. But Brin still serves on Google’s board of directors, and would surely have been briefed on the search engine plans, given their importance for Google both politically and strategically. So did Brin change his mind about the censorship? Was he simply outvoted by his colleagues on the issue?
More to the point at hand, why was the Chinese censorship project given the same name as Brin’s yacht? Is it possible somebody inside Google is trying to troll Brin, knowing that he has in the past spoken out against the Chinese government censorship? Or was Brin himself involved in giving the project this name, indicating that he has changed his views?
I’d suspect it’s a form of trolling; that it’s people trying to annoy Brin, for whatever reason.
link to this extract
The researchers’ system uses channel state information (CSI) from run-of-the-mill Wi-Fi. It can first identify whether there are dangerous objects in baggage without having to physically rifle through it. It then determines what the material is and what the risk level is. The researchers tested the detection system using 15 different objects across three categories—metal, liquid, and non-dangerous—as well as with six bags and boxes across three categories—backpack or handbag, cardboard box, and a thick plastic bag.
The findings were pretty impressive. According to the researchers, their system is 99% accurate when it comes to identifying dangerous and non-dangerous objects. It is 97% accurate when determining whether the dangerous object is metal or liquid, the study says. When it comes to detecting suspicious objects in various bags, the system was over 95% accurate.
The researchers state in the paper that their detection system only needs a wifi device with two to three antennas, and can run on existing networks. “In large public areas, it’s hard to set up expensive screening infrastructure like what’s in airports,” Chen said. “Manpower is always needed to check bags, and we wanted to develop a complementary method to try to reduce manpower.”
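Loosely, the two-stage decision the paper describes – first dangerous vs. non-dangerous, then metal vs. liquid – can be sketched like this. Note the feature names and thresholds below are invented for illustration; the actual system derives its features from Wi-Fi channel state information and learns the boundaries from data:

```python
# Illustrative two-stage classifier over toy CSI-derived features.
# Feature names and threshold values are hypothetical, not from the paper.

from dataclasses import dataclass

@dataclass
class CsiFeatures:
    amplitude_var: float   # variability of CSI amplitude across subcarriers
    phase_shift: float     # aggregate phase distortion caused by the object

def classify(f: CsiFeatures) -> str:
    # Stage 1: dangerous vs. non-dangerous. Metal and liquid both
    # perturb the channel far more than cloth, paper, etc.
    if f.amplitude_var < 0.2 and f.phase_shift < 0.1:
        return "non-dangerous"
    # Stage 2: material type. Metal mainly reflects the signal
    # (large phase distortion); liquid mainly absorbs it.
    return "metal" if f.phase_shift >= 0.5 else "liquid"

print(classify(CsiFeatures(amplitude_var=0.05, phase_shift=0.02)))  # non-dangerous
print(classify(CsiFeatures(amplitude_var=0.8, phase_shift=0.7)))    # metal
print(classify(CsiFeatures(amplitude_var=0.6, phase_shift=0.2)))    # liquid
```

The point of the two-stage structure is that the cheap first test filters out the overwhelming majority of benign bags before the finer material classification runs.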
Reading the paper, it seems a bit optimistic: fine for static objects, but once you get to an airport where everyone’s moving around, how will you be sure what you’re monitoring? It would probably need people to walk through a specific space to be assessed. Rather like a doorway…?
link to this extract
Filip Liu, a 31-year-old software developer from Beijing, was traveling in the far western Chinese region of Xinjiang when he was pulled to one side by police as he got off a bus.
The officers took Liu’s iPhone, hooked it up to a handheld device that looked like a laptop and told him they were “checking his phone for illegal information”.
Liu’s experience in Urumqi, the Xinjiang capital, is not uncommon in a region that has been wracked by separatist violence and a crackdown by security forces.
But such surveillance technologies, tested out in the laboratory of Xinjiang, are now quietly spreading across China.
Government procurement documents collected by Reuters and rare insights from officials show the technology Liu encountered in Xinjiang is encroaching into cities like Shanghai and Beijing.
Police stations in almost every province have sought to buy the data-extraction devices for smartphones since the beginning of 2016, coinciding with a sharp rise in spending on internal security and a crackdown on dissent, the data show.
The documents provide a rare glimpse into the numbers behind China’s push to arm security forces with high-tech monitoring tools as the government clamps down on dissent…
…These sorts of scanners are used in countries like the United States but they remain contentious and security forces need to go through a lengthy legal process to be able to forcibly break into a suspect’s phone.
In China, while a number of firms say they have the ability to crack many phones, police are generally able to get users to hand over their passwords, experts say.
It’s very intrusive, but of course there’s no way for people to protest effectively. It’s claimed that it can break into iPhones – which of course you can if you get the passcode.
link to this extract
every time I talk about the uncertainties inherent in climate projections, I feel attacked from all sides of the climate mitigation debate. I admit that in the current landscape, any expression of uncertainty is immediately weaponized by those who want to delay climate action.
Still, I’m a scientist, and I love to think about things I don’t understand. Being honest means acknowledging we don’t know everything. It also means being open about the problems of science itself, from a broken incentive system to the pervasive racial and sexual harassment that drives out brilliant minds. I struggle with how to talk about these things in a world where merchants of doubt will find a way to convert my science into their product.
I suspect this piece will be shared by some of those bad-faith actors. Unfortunately, there seems to be no way to construct an un-twistable argument. SCIENTIST SUPPRESSES INCONVENIENT RESULTS, they’ll say. CENSORSHIP! GROUPTHINK! This is, of course, the opposite of what I want to do. All I can ask is that if people insist on spreading false rumors about me, they also note that I have an evil twin, used to be an astronaut, and once killed a man in a bar fight.
All I know is this: science communication is hard. There are no institutional rewards for doing it. Almost no one gets promoted for talking to the public. But we rely on scientists to choose to talk about their work, and to deal with the sometimes-overwhelming consequences of speaking in public. No other industry does this. McDonald’s does not force its cooks to engage in Hamburger Communication; it hires highly paid PR professionals instead.
So I want to approach this with something the stereotypical scientist is not known for: humility. Please don’t just tell us to be honest; help us understand how to be transparent in an opaque world.
The company’s email also says it hopes to eventually learn “why people hire 3rd party clients over our own apps.”
Its own apps?
Oh, you mean like TweetDeck, the app Twitter acquired then shut down on Android, iPhone and Windows? The one it generally acted like it forgot it owned? Or maybe you mean Twitter for Mac (previously Tweetie, before its acquisition), the app it shut down this year, telling Mac users to just use the web instead? Or maybe you mean the nearly full slate of TV apps that Twitter decided no longer needed to exist?
And Twitter wonders why users don’t want to use its own clients?
Perhaps users want a consistent experience – one that doesn’t involve a million inconsequential product changes like turning stars to hearts or changing the character counter to a circle. Maybe they appreciate the fact that the third parties seem to understand what Twitter is better than Twitter itself does: Twitter has always been about a real-time stream of information. It’s not meant to be another Facebook-style algorithmic News Feed. The third-party clients respect that. Twitter does not.
Yesterday, the makers of Twitterrific spoke to the API changes, noting that their app would no longer be able to stream tweets, send native push notifications, or update its Today view, and that new tweets and DMs will be delayed.
It recommended users download Twitter’s official mobile app for notifications going forward.
This is stupid. Twitter wants to know why people don’t use its app? Because, as everyone keeps saying, they prefer a native app experience, and a web browser is not an app experience. It can never ever be. Web apps aren’t apps. Twitter needs to open up its API and figure it out. Show us ads, whatever. The phrase “user-hostile” is appropriate here. Convenient for Twitter; bad for us.
link to this extract
The problem with social-optimized content is that its overt, eerie familiarity drapes a kind of lowest-common-denominator cynicism across the internet. Social media tends to favor positive sentiment over negative, and exaggeration over subtlety. When a writer claims to be “scrEAMING” at the newest Marvel trailer in the headline, are they saying that because they really are, or because they want the reader to think that they are so that the reader will share on Facebook? It’s not a crime to write an enthusiastic headline, but when every headline you see is yelling at you in one way or another — and making outsized claims about the emotional state of its author or readers — it becomes difficult to trust the claimed sentiments of writers. At the very least, it’s extremely annoying.
SEO content, on the other hand, dispenses with the emotional in favor of the mechanical. It can be stilted and awkward — but it’s more honest and transparent. When a writer pads their article for the trailer of the newest Marvel movie with search keywords — data like the cast and crew and opening date — they’re optimizing for the Google robots. But they’re also providing genuinely useful information. Social content was about manipulating people into clicking, sharing, and posting. SEO is about manipulating robots into treating your content as the best example of sought-after information.
SEO is far from a perfect assignment editor for the web. Scammers and charlatans have been trying to abuse it for years, and it can create spectacles as ghoulish and cynical as social-optimized posts when news happens. A particularly gross instance happened in the hours after news of Anthony Bourdain’s suicide broke, when Newsweek pumped out individual Google-optimized posts about each of his family members and former partners. Tasteless? Absolutely. But it is also fulfilling a direct reader request with dispassionate information instead of hyperbole. The mechanics of SEO are clear, far more than the mechanics of human emotions.
Robots are coming to take our jobs — and trick our gullible children.
The conventional wisdom has long been afraid of the first half of that sentence, and now a study published Wednesday in the journal Science Robotics speaks to the second half. The study paired a group of kids with cute, humanoid robots who constantly gave an incorrect answer to a simple test, which led the kids, quite often, to follow the robots’ incorrect lead.
According to an abstract of the study, “People are known to change their behavior and decisions to conform to others, even for obviously incorrect facts. Because of recent developments in artificial intelligence and robotics, robots are increasingly found in human environments, and there, they form a novel social presence. It is as yet unclear whether and to what extent these social robots are able to exert pressure similar to human peers.”
The testers, however, used a group of children ages 7 to 9 and found that they generally conform to the robots in the study. “This raises opportunities as well as concerns for the use of social robots with young and vulnerable cross-sections of society; although conforming can be beneficial, the potential for misuse and the potential impact of erroneous performance cannot be ignored.”
To be sure, you can argue about how much value to ascribe to this, since kids of a certain age will to a degree go along with anyone who’s older than them. Maybe that same thought holds true when it comes to kids and our robot overlords.
People do the same when computers suggest things. They’ll go along with a calculator’s answer after entering bad data, in a way they might not on paper.
link to this extract
Errata, corrigenda and ai no corrida: The game ‘Fortnite’ was wrongly spelt ‘Fortnight’ yesterday.
You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.