Start Up: the telltale tracker, Uber’s culture trouble, the Twitter resistance, wood trouble, and more


Could machine learning solve the troll problem? Google hopes so. Others are doubtful. Photo by tsparks on Flickr.

You can now sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.

A selection of 10 links for you. Not for sale in Boston. I’m @charlesarthur on Twitter. Observations and links welcome.

Marathon runner’s tracked data exposes phony time, cover-up attempt • Ars Technica

Sam Machkovech:

»

Hot tip: If you’re going to cheat while running a marathon, don’t wear a fitness tracking band.

A New York food writer found this out the hard way on Tuesday after she was busted for an elaborate run-faking scheme, in which she attempted to use doctored data to back up an illegitimate finish time. In an apologetic Instagram post that was eventually deleted, 24-year-old runner Jane Seo admitted to cutting the course at the Fort Lauderdale A1A Half Marathon.

An independent marathon-running investigator (yes, that’s a thing) named Derek Murphy posted his elaborate analysis of Seo’s scheme, and the findings revolved almost entirely around data derived from Seo’s Garmin 235 fitness tracker. Suspicions over her second-place finish in the half marathon began after very limited data about her podium-placing run was posted to the Strava fitness-tracking service. The data only listed a distance and completion time, as opposed to more granular statistics. (This followed the release of Seo’s official completion times, which showed her running remarkably faster in the half marathon’s later stages.)

Things got weirder when Seo eventually posted a “complete,” GPS-tracked run of the half-marathon course. Its time-stamp looked suspiciously off, Murphy noted in his own report, so he dug up older run-data posts from her same account and noticed starkly different heart rate and cadence stats in her newer report. “The cadence data [of the half marathon] is more consistent with what you would expect on a bike ride, not a run,” Murphy wrote.

«

There are people tracking what you do all. The. Time. How long before this sort of thing is mandatory?
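The cadence check Murphy leaned on can be sketched in a few lines — typical running cadence sits around 160–180 steps per minute, while cycling cadence is far lower. The threshold and sample numbers below are illustrative assumptions, not his actual figures:

```python
# Toy version of the sanity check: running cadence (steps/min) clusters
# well above typical cycling cadence (pedal rpm). Threshold is invented.

def likely_activity(cadence_samples, threshold=120):
    """Guess 'run' or 'ride' from average cadence; above threshold suggests running."""
    avg = sum(cadence_samples) / len(cadence_samples)
    return "run" if avg > threshold else "ride"

print(likely_activity([168, 172, 165, 170]))  # typical running cadence -> run
print(likely_activity([85, 92, 88, 90]))      # typical cycling cadence -> ride
```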
link to this extract


The EU’s renewable energy policy is making global warming worse • New Scientist

Michael Le Page:

»

Countries in the EU, including the UK, are throwing away money by subsidising the burning of wood for energy, according to an independent report.

While burning some forms of wood waste can indeed reduce greenhouse gas emissions, in practice the growing use of wood energy in the EU is increasing rather than reducing emissions, the new report concludes.

Overall, burning wood for energy is much worse in climate terms than burning gas or even coal, but loopholes in the way emissions are counted are concealing the damage being done.

“It is not a great use of public money,” says Duncan Brack of the policy research institute Chatham House in London, who drew up the report. “It is providing unjustifiable incentives that have a negative impact on the climate.”

The money would be better spent on wind and solar power instead, he says.

It is widely assumed that burning wood does not cause global warming, that it is “carbon neutral”. But the report, which is freely available, details why this is not true.

«

From the report:

»

Although most renewable energy policy frameworks treat biomass as though it is carbon-neutral at the point of combustion, in reality this cannot be assumed, as biomass emits more carbon per unit of energy than most fossil fuels. Only residues that would otherwise have been burnt as waste or would have been left in the forest and decayed rapidly can be considered to be carbon-neutral over the short to medium term.

«

link to this extract


Reporters love chatrooms but worry security is slacking • Fast Company

Cale Guthrie Weissman:

»

Slack’s ease of use is great for a busy newsroom. Reporters and staff can post links they’ve found online, leads they’ve uncovered, public records they want to request, or edits, in real-time and in one place. (Most of Fast Company’s staff relies heavily on Slack.) New chatrooms or “channels”—either public or private ones—can be created on the fly. The app’s ease of use also means the virtual newsroom is a sort of digital watercooler, where reporters share the sort of gossip they would never want associated with their bylines. A release of this data—either by a court’s subpoena or a hacker’s intrusion—wouldn’t only require the public explanation of private jokes. It could risk compromising an already delicate trust between journalists and their audiences, and lead to the inadvertent disclosure of the identities of anonymous sources.

This last part is of the utmost importance to reporters. The relationship between a source and an investigative journalist hangs on trust: Sources provide sensitive information under the assumption that writers will protect their identities. Despite the best of intentions, reporters using Slack and other digital platforms may be inadvertently breaking this pact. Sources like John Kiriakou, the first CIA officer to speak openly on waterboarding—and whose disclosure of classified information to investigative journalists helped send him to prison—serve as an example of how high the stakes can be.

«

link to this extract


Inside Uber’s aggressive, unrestrained workplace culture • The New York Times

Mike Isaac:

»

Interviews with more than 30 current and former Uber employees, as well as reviews of internal emails, chat logs and tape-recorded meetings, paint a picture of an often unrestrained workplace culture. Among the most egregious accusations from employees, who either witnessed or were subject to incidents and who asked to remain anonymous because of confidentiality agreements and fear of retaliation: One Uber manager groped female co-workers’ breasts at a company retreat in Las Vegas. A director shouted a homophobic slur at a subordinate during a heated confrontation in a meeting. Another manager threatened to beat an underperforming employee’s head in with a baseball bat.

Until this week, this culture was only whispered about in Silicon Valley.

«

Great reporting as ever by Isaac.
link to this extract


An open letter to the Uber board and investors • Medium

Mitch and Freada Kapor:

»

As early investors in Uber, starting in 2010, we have tried for years to work behind the scenes to exert a constructive influence on company culture. When Uber has come under public criticism, we have been available to make suggestions, and have been publicly supportive, in the hope that the leadership would take the necessary steps to make the changes needed to bring about real change.

Freada gave a talk on hidden bias to the company in early 2015, and we have both been contacted by senior leaders at Uber (though notably not by Travis, the CEO) for advice on a variety of issues, mostly pertaining to diversity and inclusion, up to and including this past weekend.

We are speaking up now because we are disappointed and frustrated; we feel we have hit a dead end in trying to influence the company quietly from the inside.

If we believed it was too late for Uber to change, we would not be writing this, but as investors, it is now up to us to call out the inherent conflicts of interest in their current path.

We are disappointed to see that Uber has selected a team of insiders to investigate its destructive culture and make recommendations for change. To us, this decision is yet another example of Uber’s continued unwillingness to be open, transparent, and direct.

«

If you’re trying to put your finger on where you’ve heard the Kapor name before, Mitch was behind Lotus 1-2-3 – the biggest smash-hit office software of the era before Microsoft Office. It’s useful to read the Wikipedia entry: “Lotus was a company with few rules and fewer internal bureaucratic barriers”. (Quoting a book.)

Uber, meanwhile, is a company with big cultural problems. Changing its culture could kill the company. Not changing the culture could hurt its public face.
link to this extract


Echo Labs debuts a wearable medical lab on your wrist • ReadWrite

Amanda Razani:

»

Echo Labs provides health care organizations with analytics to improve patient care, decrease hospital admissions, and reduce spending. Its first-generation wearable offers health information through continuous vital-sign tracking.

The company is now working on its newest device. The company states that the new tracker will be able to determine what’s going on inside the bloodstream, which is a first for wrist-based wearables.  The tracker utilizes optical sensors and spectrometry to measure and analyze blood composition and flow. It also monitors heart rate, blood pressure, respiratory rate, and full blood gas panels.

The company explains that the band measures blood content with a light and a proprietary algorithm. Basically, it sends electromagnetic waves through human tissue, and then measures the reflection of varying light frequencies in order to find the concentration of molecules in the blood.

“The wearable and sensor are the gateway to understanding the state of the body at any point in time. We can identify deterioration 3 to 5 days before it happens,” the company states.

«

Might want to have a little scepticism around this (*cough*Theranos*cough*) but it does sound interesting.
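For what the optical approach amounts to: reflectance spectrometry in this family leans on the Beer–Lambert law, where absorbance scales with the concentration of the absorbing molecule. A toy sketch with invented coefficients (Echo Labs’ actual algorithm is proprietary, so this is only the textbook principle):

```python
import math

# Beer-Lambert: absorbance A = epsilon * concentration * path_length,
# so concentration can be inferred from emitted vs measured light intensity.
# epsilon (molar absorptivity) and the path length here are made-up numbers.

def concentration(i_emitted, i_measured, epsilon=0.5, path_cm=1.0):
    """Infer molar concentration from light attenuation through tissue."""
    absorbance = math.log10(i_emitted / i_measured)
    return absorbance / (epsilon * path_cm)

print(concentration(100.0, 10.0))  # absorbance 1.0 -> concentration 2.0
```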
link to this extract


Google’s Perspective API opens up its troll-fighting AI • WIRED

Andy Greenberg:

»

Last September, a Google offshoot called Jigsaw declared war on trolls, launching a project to defeat online harassment using machine learning. Now, the team is opening up that troll-fighting system to the world.

On Thursday, Jigsaw and its partners on Google’s Counter Abuse Technology Team released a new piece of code called Perspective, an API that gives any developer access to the anti-harassment tools that Jigsaw has worked on for over a year. Part of the team’s broader Conversation AI initiative, Perspective uses machine learning to automatically detect insults, harassment, and abusive speech online. Enter a sentence into its interface, and Jigsaw says its AI can immediately spit out an assessment of the phrase’s “toxicity” more accurately than any keyword blacklist, and faster than any human moderator.

The Perspective release brings Conversation AI a step closer to its goal of helping to foster troll-free discussion online, and filtering out the abusive comments that silence vulnerable voices—or, as the project’s critics have less generously put it, to sanitize public discussions based on algorithmic decisions.

«

And there’s a demonstration website. Maybe they should try Microsoft’s Tay (which was driven haywire within a few hours) on it? “Nasty woman” gets 92%, “Bad hombre” 78%. Wait, now I’m thinking it should be used on Trump’s tweets.

Testers include the NY Times, the Guardian and the Economist.
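For the curious, a request to the Perspective API is a small JSON POST; a sketch of the shape per Google’s public documentation (the key is a placeholder, and nothing is actually sent here):

```python
import json

# Shape of a Perspective API request asking for a TOXICITY score.
# The endpoint is per Google's public docs; YOUR_KEY is hypothetical.
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(text):
    """Build the JSON body asking Perspective to score a comment's toxicity."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

payload = json.dumps(build_request("nasty woman"))
# To send for real: requests.post(f"{API_URL}?key=YOUR_KEY", data=payload)
# The response carries the score under attributeScores.TOXICITY.summaryScore.
```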
link to this extract


If only AI could save us from ourselves • MIT Technology Review

David Auerbach goes into more detail about Google’s Perspective project:

»

The linguistic problem in abuse detection is context. Conversation AI’s comment analysis doesn’t model the entire flow of a discussion; it matches individual comments against learned models of what constitute good or bad comments. For example, comments on the New York Times site might be deemed acceptable if they tend to include common words, phrases, and other features. But Greene says Google’s system frequently flagged comments on articles about Donald Trump as abusive because they quoted him using words that would get a comment rejected if they came from a reader. For these sorts of articles, the Times will simply turn off automatic moderation.

It’s impossible, then, to see Conversation AI faring well on a wide-open site like Twitter. How would it detect the Holocaust allusions in abusive tweets sent to the Jewish journalist Marc Daalder: “This is you if Trump wins,” with a picture of a lamp shade, and “You belong here,” with a picture of a toaster oven? Detecting the abusiveness relies on historical knowledge and cultural context that a machine-learning algorithm could detect only if it had been trained on very similar examples. Even then, how would it be able to differentiate between abuse and the same picture with “This is what I’m buying if Trump wins”? The level of semantic and practical knowledge required is beyond what machine learning currently even aims at.

«

link to this extract


How Twitter became an outlet of resistance, information for federal employees • FederalNewsRadio.com

David Thornton tried to verify whether the 80+ accounts claiming to be “Alt” federal accounts were really people working inside the US government:

»

Federal News Radio attempted to contact more than 50 of these accounts via Twitter, although the vast majority won’t accept direct messages from people they don’t follow. Those who do claim to be federal employees frequently point to their access to inside information to prove their case.

“It is actually quite fraught for federal employees [to use Twitter], as it is for private employees as well,” Brooke Van Dam, associate professor and faculty director of the Masters in Professional Studies in Journalism at Georgetown University, said in an email. “I can see why they would want to directly talk to the public but most institutions and organizations want to keep a single line or statement and having a multitude of actors sharing ‘what’s really going on’ or ‘the truth’ is problematic. In that, it gives an easy out for those higher up to fire or get rid of those that don’t toe the line as we just saw with Shermichael Singleton at HUD.”

Singleton was an aide to Housing and Urban Development secretary nominee Ben Carson, before he was fired after a background investigation turned up writings from the campaign season in which Singleton criticized then-presidential candidate Donald Trump.

«

link to this extract


Can I own my identity on the internet? • Terence Eden

The aforesaid Eden:

»

The ultra-secure messaging app, Signal, requires a mobile phone number in order to sign up to it. This, as my friend Tom Morris points out, is madness.

People don’t own mobile phone numbers. They are rented from mobile operators. Yes, you may be able to move “your” number between a limited set of providers – but it ultimately doesn’t belong to you. An operator can unilaterally take your number away from you.

If you move to a different country, you will almost certainly have to change your number – thus invalidating any account which relies on a mobile number being your primary identifier.

That’s before we get on to how hideously insecure phone numbers are. Transmitting an SMS with a sensitive one-time code over a cleartext channel which can be easily intercepted is not a sensible approach to security. Modern phone networks are designed to accommodate Lawful Intercept – and suffer from a range of security weaknesses.

Fine. Whatever. Let’s use emails as our primary ID. Bzzzt! Wrong! Email addresses are just as ephemeral as mobile numbers.

«

Could we not all have an IPv6 address, though, assigned at birth or something?
link to this extract


Errata, corrigenda and ai no corrida: none notified
