Start Up: the Spectacles flop, Google’s language failure, NAO on #wannacry, a Luther for today?, and more

Oxycontin, from Purdue Pharma, has laid waste to millions of lives in the US. Now they want to expand. Photo by redfishingboat on Flickr.

(A search on Flickr for “Oxycontin” turned up something called the “Oxycontin Express”, which turned out to be this programme. Very relevant.)

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.

Yes, we’re back! A selection of 11 links for you. Unlike our holiday, doesn’t contain food poisoning. I’m @charlesarthur on Twitter. Observations and links welcome.

Why Snapchat Spectacles failed • TechCrunch

Josh Constine:


How come only 0.08% of Snapchat’s users bought its camera sunglasses? Hundreds of thousands of pairs of Spectacles sit rotting in warehouses after the company bungled the launch. Initial hype and lines for its roving, limited time only Snapbot vending machines led Snap to overestimate demand but underdeliver on quality and content.

Massive piles of assembled and unassembled video-recording sunglasses sit unsold, contributing to Snap’s enormous costs and losses, says The Information. Internal Snap data shows less than 50% of buyers kept using Spectacles a month after purchase, Business Insider’s Alex Heath reports. A “sizeable” percentage stopped after just a week, with a source calling the retention rate “shockingly low”.

What was the problem?

Karl Lagerfeld’s photo of Snap CEO Evan Spiegel donning Spectacles for their September 2016 reveal


Gee, can’t imagine. All the tech writers said they were F.A.B.
link to this extract

Dear Google, when are you going to fix Android Wear? • AndroidAuthority

Adam Doud:


Smartwatches are in a funny state. They’re not really all that popular amongst the populace at large, but they’re not exactly busts either. The Apple Watch is still the big seller in terms of market penetration. Android Wear is no slouch either, with many OEMs producing a wide array of options. There’s just one problem. The Android Wear software is just not good.

To me, the root problem with Android Wear devices is that they’re trying too hard to be watches. We use the term “smartwatch” to describe these devices, but all they really are – and all they really need to be – are small screens on your wrist. Sure, they can tell time – that’s fine. But the power of the smartwatch isn’t the “watch” part— it’s the “smart” part. Getting notifications and apps running on a screen on your wrist is far more powerful than knowing how long it is until the Blackhawks game starts.


Actually, the first paragraph contains a canard. Apple’s Watch is selling fine. Android Wear is an absolute dog. The app (which is needed to run the watch) passed the 5m downloads mark in September 2016, having started in July 2014 or so. But it hasn’t added another 5m. It’s not selling.

Their “problem” is the users. They don’t care about what it offers.
link to this extract

Google’s sentiment analyzer thinks being gay is bad • Motherboard

Andrew Thompson:


Google’s sentiment analyzer isn’t always effective and sometimes produces biased results.

Two weeks ago, I experimented with the API for a project I was working on. I began feeding it sample texts, and the analyzer started spitting out scores that seemed at odds with what I was giving it. I then threw simple sentences about different religions at it.

When I fed it “I’m Christian” it said the statement was positive:

When I fed it “I’m a Sikh” it said the statement was even more positive:

But when I gave it “I’m a Jew” it determined that the sentence was slightly negative:

The problem doesn’t seem confined to religions. It similarly thought statements about being homosexual or a gay black woman were also negative:

Being a dog? Neutral. Being homosexual? Negative:

I could go on, but you can give it a try yourself: Google Cloud offers an easy-to-use interface to test the API.


Google apologised in a response. This is a classic example of “garbage in, garbage out” – and as we start to build these systems into larger subsystems, the bias could become pernicious. Worse: unlike public opinion, which shifts over time (track opinion about abortion, gay marriage and marijuana legalisation), these systems wouldn’t shift their position. They’d be embalmed views of how we should think, drawn from how we used to think.
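Thompson’s probe is easy to reproduce: feed templated identity statements to a sentiment scorer and compare the scores. A minimal sketch in Python – here `score_fn` is a stand-in for the real Cloud Natural Language call (which returns a score between -1.0 and 1.0), and the ±0.1 neutral band is an illustrative assumption, not Google’s documented threshold:

```python
def label(score, neutral_band=0.1):
    """Map a sentiment score in [-1.0, 1.0] to a coarse label.
    The +/-0.1 neutral band is an illustrative assumption."""
    if score > neutral_band:
        return "positive"
    if score < -neutral_band:
        return "negative"
    return "neutral"

def probe(score_fn, identities, template="I'm {}"):
    """Score each templated identity statement; return {identity: (score, label)}."""
    results = {}
    for identity in identities:
        score = score_fn(template.format(identity))
        results[identity] = (score, label(score))
    return results

# Toy scores standing in for the real API, seeded with the article's findings:
toy_scores = {"I'm Christian": 0.2, "I'm a Sikh": 0.3, "I'm a Jew": -0.2,
              "I'm homosexual": -0.5, "I'm a dog": 0.0}
print(probe(toy_scores.get, ["Christian", "a Sikh", "a Jew", "homosexual", "a dog"]))
```

Swap `toy_scores.get` for a wrapper around the real API and the same probe surfaces the skew Thompson describes.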
link to this extract

Investigation: WannaCry cyber attack and the NHS • National Audit Office (NAO)


The key findings of the investigation are:

• The Department was warned about the risks of cyber attacks on the NHS a year before WannaCry and although it had work underway it did not formally respond with a written report until July 2017. The Department and Cabinet Office wrote to trusts in 2014, saying it was essential they had “robust plans” to migrate away from old software, such as Windows XP by April 2015. In March and April 2017, NHS Digital had issued critical alerts warning organisations to patch their systems to prevent WannaCry. However, before 12 May 2017, the Department had no formal mechanism for assessing whether local NHS organisations had complied with their advice and guidance and whether they were prepared for a cyber attack.

• The attack led to disruption in at least 34% of trusts in England although the Department and NHS England do not know the full extent of the disruption. On 12 May, NHS England initially identified 45 NHS organisations including 37 trusts that had been infected by the WannaCry ransomware. In total at least 81 out of 236 trusts across England were affected. A further 603 primary care and other NHS organisations were infected by WannaCry, including 595 GP practices. However, the Department does not know how many NHS organisations could not access records or receive information, because they shared data or systems with an infected trust. NHS Digital told us that it believes no patient data were compromised or stolen…

• The Department had developed a plan, which included roles and responsibilities of national and local organisations for responding to an attack, but had not tested the plan at a local level. As the NHS had not rehearsed for a national cyber attack it was not immediately clear who should lead the response and there were problems with communications. Many local organisations could not communicate with national NHS bodies by email as they had been infected by WannaCry or had shut down their email systems as a precaution, though NHS Improvement did communicate with trusts’ Chief Executive Officers by telephone. Locally NHS staff shared information through personal mobile devices, including using the encrypted WhatsApp application.


That last bit is deliciously ironic given ministers’ repeated calls to be able to tap into it. Turns out mobile is the last resort – and reliable.
link to this extract

#wannacry: cyber defence failure or organisational lapse? • Medium

Vladimiro Sassone on the National Audit Office report into Wannacry:


This particular attack — as several others before — was known, not particularly sophisticated, and has only affected organisations which did not take the recommended precautions. Once a vulnerability is in the public domain, you either close it by applying the relevant patch, or stand as a sitting duck borrowing time on your good luck.

Admittedly, for organisations like the NHS this represents a big cultural change. These are organisations used to procuring their equipment and then expecting to use it flawlessly for tens of years, without giving it a further thought. The reality is that IT does not work that way. IT systems can be extremely complex, and therefore (for reasons too long to explain here) are not perfect, are reachable from the global network, and therefore are exposed to all sorts of malicious behaviours and attacks, and so need constant revision. When a critical piece of software is no longer supported, it has essentially reached the end of its useful life, and must be replaced, even if to the naked eye it may still appear perfectly viable. This is true of PCs running the obsolete Windows XP, as well as of other scary situations with health devices and implants not designed with security and upgradability in mind.


(Sassone is based at the University of Southampton, in the cyber security controls effectiveness project; they’ve produced a paper on what SME networks need for cybersecurity.)

I’m writing a book on hacking, and ransomware is one of the chapters – with a focus on hospitals. The NHS problem is hydra-headed: million-pound equipment you replace once every 20 years uses old interfaces; small numbers of IT staff; large numbers of temporary staff who might not know what not to click; old equipment. It’s a nightmare.
link to this extract

Why we need a 21st-century Martin Luther to challenge the church of tech • The Guardian

John Naughton (professor of the public understanding of technology at the Open University) is aiming to create a modern form of Martin Luther’s 95 theses:


One thing above all stands out from those theses. It is that if one is going to challenge an established power, then one needs to attack it on two fronts – its ideology (which in Luther’s time was its theology), and its business model. And the challenge should be articulated in a format that is appropriate to its time. Which led me to think about an analogous strategy in understanding digital technology and addressing the problems posed by the tech corporations that are now running amok in our networked world.

These are subjects that I’ve been thinking and writing about for decades – in two books, a weekly Observer column, innumerable seminars and lectures and a couple of academic research projects. Many years ago I wrote a history of the internet, motivated partly by annoyance at the ignorant condescension with which it was then viewed by the political and journalistic establishments of the time. “Don’t you think, dear boy,” said one grandee to me in the early 1990s, “that this internet thingy is just the citizens band [CB] radio de nos jours?”

“You poor sap,” I remember thinking, “you have no idea what’s coming down the track.”


They will be pinned to the metaphorical church door on 31 October. I’m looking forward to it. The two extracted in the article (“No.19: the technical is political”; “No.92: Facebook is many things, but a ‘community’ it ain’t”) are mouthwatering.

(Disclosure: I have known John for years, and was a visiting fellow last academic year at Cambridge on his Technology & Democracy project.)
link to this extract

The family that built an empire of pain • The New Yorker

Patrick Radden Keefe on the Sackler family, who own Purdue Pharma, which makes Oxycontin, which is widely abused – and has led to opioid abuse being declared both an epidemic in the US in 2011, and a national emergency (finally) by Trump earlier this month:


Purdue developed a pill of pure oxycodone, with a time-release formula similar to that of MS Contin. The company decided to produce doses as low as ten milligrams, but also jumbo pills—eighty milligrams and a hundred and sixty milligrams—whose potency far exceeded that of any prescription opioid on the market. As Barry Meier writes, in “Pain Killer,” “In terms of narcotic firepower, OxyContin was a nuclear weapon.”

Before releasing OxyContin, Purdue conducted focus groups with doctors and learned that the “biggest negative” that might prevent widespread use of the drug was ingrained concern regarding the “abuse potential” of opioids. But, fortuitously, while the company was developing OxyContin, some physicians began arguing that American medicine should reëxamine this bias. Highly regarded doctors, like Russell Portenoy, then a pain specialist at Memorial Sloan Kettering Cancer Center, in New York, spoke out about the problem of untreated chronic pain—and the wisdom of using opioids to treat it.

“There is a growing literature showing that these drugs can be used for a long time, with few side effects,” Portenoy told the Times, in 1993. Describing opioids as a “gift from nature,” he said that they needed to be destigmatized. Portenoy, who received funding from Purdue, decried the reticence among clinicians to administer such narcotics for chronic pain, claiming that it was indicative of “opiophobia,” and suggesting that concerns about addiction and abuse amounted to a “medical myth.”

In 1997, the American Academy of Pain Medicine and the American Pain Society published a statement regarding the use of opioids to treat chronic pain. The statement was written by a committee chaired by Dr. J. David Haddox, a paid speaker for Purdue.

Richard Sackler worked tirelessly to make OxyContin a blockbuster, telling colleagues how devoted he was to the drug’s success. The F.D.A. approved OxyContin in 1995, for use in treating moderate to severe pain. Purdue had conducted no clinical studies on how addictive or prone to abuse the drug might be.

But the F.D.A., in an unusual step, approved a package insert for OxyContin which announced that the drug was safer than rival painkillers, because the patented delayed-absorption mechanism “is believed to reduce the abuse liability.” David Kessler, who ran the F.D.A. at the time, told me that he was “not involved in the approval.” The F.D.A. examiner who oversaw the process, Dr. Curtis Wright, left the agency shortly afterward. Within two years, he had taken a job at Purdue.


This is a long read. But it’s astonishing in its depth, and the myriad ways in which the US medical industry has been coöpted by this company and drug. The scary ending: Purdue is now looking for sales abroad because the US is slowing down – and the UK is in its sights.
link to this extract

Colliding neutron stars could settle cosmology’s biggest controversy • Quanta Magazine

Natalie Wolchover on how measurements for the Hubble constant – how quickly the universe is expanding – might be determined; currently the two best estimates are 67 and 73 (the story explains the units that go with it):


The crashing stars serve as “standard sirens,” as Holz and Scott Hughes of the Massachusetts Institute of Technology dubbed them in a 2005 paper, building on the work of Bernard Schutz 20 years earlier. They send rushes of ripples outward through space-time that are not dimmed by gas or dust. Because of this, the gravitational waves transmit a clean record of the strength of the collision, which allows scientists to “directly infer the distance to the source,” Holz explained. “There is no distance ladder, and no poorly understood astronomical calibrations. You listen to how loud the [collision] is, and how the sound changes with time, and you directly infer how far away it is.” Because astronomers can also detect electromagnetic light from neutron-star collisions, they can use redshift to determine how fast the merged stars are receding. Recessional velocity divided by distance gives the Hubble constant.

From the first neutron-star collision alone, Holz and hundreds of coauthors calculated the Hubble constant to be 70 kilometers per second per megaparsec, give or take 10. (The major source of uncertainty is the unknown angular orientation of the merging neutron stars relative to the LIGO detectors, which affects the measured amplitude of the signal.) Holz said, “I think it’s just pure luck that we’re smack in the middle,” between the cosmic-distance-ladder and cosmic-microwave-background Hubble estimates. “We could easily shift to one side or the other.”

The measurement’s accuracy will steadily improve as more standard sirens are heard over the next few years, especially as LIGO continues to ramp up in sensitivity. According to Holz, “With roughly 10 more events like this one, we’ll get to 1% [of error],” though he stresses that this is a preliminary and debatable estimate.


If we can fix the Hubble constant, we might have an idea of the composition of the universe. Then again, we might just be more confused about the differences between the early one, and the current one.
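The standard-siren arithmetic in the extract is just recessional velocity divided by distance. A sketch using illustrative figures in the rough vicinity of those reported for the GW170817 event – roughly 3,000 km/s recession velocity and roughly 43 megaparsecs distance, both treated as assumptions here:

```python
def hubble_constant(recession_velocity_km_s, distance_mpc):
    """H0 = recessional velocity / distance, in km/s per megaparsec.
    The gravitational-wave amplitude gives the distance ("how loud the
    collision is"); the host galaxy's redshift gives the velocity."""
    return recession_velocity_km_s / distance_mpc

h0 = hubble_constant(3000, 43)
print(round(h0, 1))  # ~69.8, landing between the 67 and 73 estimates
```

One event gives ±10 uncertainty, mostly from the unknown angular orientation of the merger; averaging many sirens is what shrinks the error bars.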
link to this extract

This new Twitter account hunts for bots that push political opinions • Quartz

Keith Collins:


One account features a photo of a middle-aged woman, and a bio that reads “Patriot, self employed, loving mother and grandmother.”

Another has a photo of a younger woman in sunglasses, described in the bio as a “NonProfit Exec born to LEGAL Immigrants who owned laundromat for 30 yrs to earn our #AmericanDream. #PresidentTrump #ProIsrael #ThankAVet #BackTheBlue #MAGA.”

Both Twitter accounts frequently tweet or retweet in support of US president Donald Trump and in opposition to everything from immigrants, to the National Football League, to CNN. They’ve both had accounts on Twitter since 2012—and they both appear to be bots.

They were identified by a new bot created by Quartz, @probabot_, which searches Twitter for accounts that tweet about politics and scores them using Botometer, a classification tool that applies machine learning to determine how likely a given account is to be a bot.


Could we lend it to Twitter?
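The @probabot_ pipeline as described is simple to outline: gather accounts that tweet about politics, ask a classifier such as Botometer for a bot probability, and flag those above a threshold. A minimal sketch – `bot_probability` is a hypothetical stand-in for the classifier call, and the 0.8 threshold is an assumption:

```python
def flag_probable_bots(accounts, bot_probability, threshold=0.8):
    """Return (handle, score) pairs for accounts whose bot score meets
    the threshold, highest scores first. `bot_probability` is a callable
    standing in for a classifier such as Botometer."""
    scored = [(handle, bot_probability(handle)) for handle in accounts]
    flagged = [(h, s) for h, s in scored if s >= threshold]
    return sorted(flagged, key=lambda pair: pair[1], reverse=True)

# Toy scores standing in for real classifier output:
scores = {"@patriot_gran": 0.93, "@maga_exec": 0.88, "@actual_human": 0.12}
print(flag_probable_bots(scores, scores.get))
```

The hard part, of course, is the classifier itself, not the thresholding – which is presumably why Quartz leant on Botometer rather than building its own.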
link to this extract

No, Apple’s machine learning engine can’t surface your iPhone’s secrets • iMore

Rene Ritchie on this article in Wired, which posits that Apple’s CoreML machine learning system could be used maliciously:


Theoretically, finding and extracting a few photos might be easier to hide than simply pulling a large number or all photos. So could trickle uploading over time. Or based on specific metadata. Or any other sorting vector.

Just as theoretically, ML and neural networks could be used to detect and combat these kinds of attacks as well.


For an example of where that could go wrong, think of a photo filter or editing app that you might grant access to your albums. With that access secured, an app with bad intentions could provide its stated service, while also using Core ML to ascertain what products appear in your photos, or what activities you seem to enjoy, and then go on to use that information for targeted advertising.


Also nothing unique to Core ML. Smart spyware would try to convince you to give it all your photos right up front. That way it wouldn’t be limited to preconceived models or be at risk of removal or restriction. It would simply harvest all your data and then run whatever server-side ML it wanted to, whenever it wanted to.

That’s the way Google, Facebook, Instagram, and similar photo services that run targeted ads against those services already work.


Just recently, iMore has found itself writing two kinds of stories: “here’s how to” and “No, here’s why this story about Apple is bogus”. As he says, people are overthinking this. A service (malicious or otherwise) that says “let us see all your photos and do wonderful things to them!” is going to get a lot more of your photos than one which tries to subvert CoreML. But people are desperate to find a new angle on anything Apple-y.
link to this extract

Google defends Pixel 2 XL screen, promises updates for audio issues • Ars Technica

Ron Amadeo:


The end result of the complaints (and news articles) is that every Pixel 2 and 2 XL will come with a two-year warranty, and Google will push out some software updates to alleviate some of the other Pixel problems.

LG is far behind Samsung when it comes to producing quality OLED panels for smartphones, but for some reason Google still chose to slap an inferior component onto its flagship smartphone. Here are the most common complaints we’ve seen out there as a result:

• The display is grainy or “dirty” looking at low brightness.
• It experiences image burn-in after just a few weeks.
• There’s a blue shift to the display when looked at off-angle.
• The colors are “dull.” (This one is more of a personal preference.)

Mario Queiroz, Google Hardware’s VP of product management, said on the Pixel forums that while he thinks the Pixel 2 XL display is “beautiful,” Google is taking some steps to address some of these issues.

For the display burn-in, Queiroz says Google’s investigation found that “the Pixel 2 XL display shows that its decay characteristics are similar to OLED panels used in comparable products” and that “the differential aging is in line with that of other premium smartphones and should not affect the normal, day-to-day user experience of the Pixel 2 XL.”


Well, this has been a whole saga during the past week. LG-made p-OLED panels on the Pixel 2 XL seem to show burn-in (many reviewers bore this out), and people complain they look dull. The former seems to be down to LG not being great at OLED (its V30 drew similar complaints); the latter, to Google choosing not to oversaturate colours on the OLED.

Given the small numbers the Pixel 2 sells in, comparatively, this is hardly a great start.

link to this extract

Errata, corrigenda and ai no corrida: none notified. Either that, or I’ve forgotten.
