You can now sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.
A selection of 10 links for you. That’s news. To someone. I’m @charlesarthur on Twitter. Observations and links welcome.
Every year, traffic engineers review the speed limit on thousands of stretches of road and highway. Most are reviewed by a member of the state’s Department of Transportation, often along with a member of the state police, as is the case in Michigan. In each case, the “survey team” has a clear approach: they want to set the speed limit so that 15% of drivers exceed it and 85% of drivers drive at or below the speed limit.
This “nationally recognized method” of setting the speed limit as the 85th percentile speed is essentially traffic engineering 101. It’s also a bit perplexing to those unfamiliar with the concept. Shouldn’t everyone drive at or below the speed limit? And if a driver’s speed is dictated by the speed limit, how can you decide whether or not to change that limit based on the speed of traffic?
The answer lies in realizing that the speed limit really is just a number on a sign, and it has very little influence on how fast people drive. “Over the years, I’ve done many follow-up studies after we raise or lower a speed limit,” Megge tells us. “Almost every time, the 85th percentile speed doesn’t change, or if it does, it’s by about 2 or 3 mph.”
As most honest drivers would probably concede, this means that if the speed limit on a highway decreases from 65 mph to 55 mph, most drivers will not drive 10 mph slower. But for the majority of drivers, the opposite is also true. If a survey team increases the speed limit by 10 mph, the speed of traffic will not shoot up 10 mph. It will stay around the same. Years of observing traffic have shown engineers that as long as a cop car is not in sight, most people simply drive at whatever speed they like.
There’s a lot more to it.
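The 85th-percentile rule described above amounts to a simple percentile computation over a sample of observed free-flow speeds. A minimal sketch, with invented speed readings (real surveys use radar samples, and engineers then round to a posted limit):

```python
# Hypothetical radar sample of free-flowing traffic speeds, in mph.
# These numbers are invented for illustration only.
observed_speeds = [52, 55, 57, 58, 58, 60, 61, 61, 62, 63,
                   63, 64, 64, 65, 66, 67, 68, 70, 72, 78]

def percentile_speed(speeds, pct=85):
    """Return the speed at or below which `pct`% of drivers travel
    (nearest-rank method)."""
    ordered = sorted(speeds)
    # ceil(pct * n / 100) - 1 gives the zero-based index of the
    # pct-th percentile observation.
    rank = max(0, -(-pct * len(ordered) // 100) - 1)
    return ordered[rank]

limit = percentile_speed(observed_speeds)  # 15% of drivers exceed this speed
```

In practice the survey team would round the result to the nearest 5 mph before posting it; the point is that the limit is derived from how people already drive, not the other way round.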
link to this extract
US retailers are dropping like flies. And it’s worth wondering whether the retail implosion could be a preview of potential pain for the technology industry.
This year has brought a surge of retailers that are closing stores, slashing jobs and filing for bankruptcy protection in record numbers. The boom of online shopping and a glut of stores are common factors for the retail carnage.
The tipping point, however, was the private equity buyouts in recent years that left many retailers with debt that they couldn’t repay. Of the 19 companies on a Moody’s list of distressed retailers in February, 15 are owned or part-owned by private equity firms. Private equity didn’t kill these retailers, but it helped make the hangman’s noose.
Some of the same ingredients that created the retail carnage are now present in technology, which became a surprise darling of private equity buyouts. Dell, BMC Software, Rackspace, Informatica and Marketo were among the tech companies purchased in recent years with private equity money and debt.
In the first quarter, about one in five private equity buyouts in the U.S. involved tech companies, according to data from PitchBook. That is far above the industry’s typical 10% to 15% share of U.S. private equity deals.
Silver Lake and other private equity firms have raised billions of dollars for even more technology buyouts. And the tech industry’s share of loans related to acquisitions and leveraged buyouts has risen by a factor of six since 2007, according to Barclays research published last fall. That’s not to say some of the buyouts in technology will blow up as they have in retail, but the industries have echoes.
Squeaky bum time for Dell, one might have thought.
link to this extract
The video has all sorts of nice ideas about how this could be used, but anyone with any scepticism will be seeing this as a tool for all sorts of misinformation and “watch the video if you don’t believe what this evil politician said”. And mis-misinformation, of course.
Last night, Trump officially gave up. During a White House reception with representatives from conservative media outlets, the President said that he could wait for the next spending battle, in September, to try to win financing for a real wall, rather than risk shutting down the government the week of his hundredth day in office. Late on Tuesday, Republicans in Congress followed Trump’s cue: according to a congressional aide with knowledge of the negotiations, the latest offer from Republicans to Democrats does not include money for the wall.
From a policy perspective, Trump’s reversal is welcome. There is no credible evidence that a twenty-two-hundred-mile physical wall is the best use of federal funds to deter unauthorized border crossings—never mind the message that a giant wall would send to the rest of the world. The members of Congress who know the issue the best think it’s a bad idea. The Wall Street Journal recently reported, “Not a single member of Congress who represents the territory on the southwest border said they support President Donald Trump’s request for $1.4 billion to begin construction of his promised wall.” And if Trump’s retreat from insisting on wall money helps keep the government open, he should be applauded for being flexible.
But, from a political perspective, Trump has given members of Congress another reason not to trust his word.
Sure, he can put it off to September. The question is, will anything have changed by then that will enable him to get anything through? Or will he be aiming much, much lower, and the talk of the wall will have subsided?
link to this extract
‘Phony numbers and front groups’: Trump’s inaugural donor list contains massive evidence of fraud • Raw Story
In one case, Marion Forcht of Corbin, KY donated $50,000 to the president’s inaugural committee. His wife Terry also donated $50,000. Forcht owns Forcht Insurance Agency, with an estimated annual revenue of $134,500. In another, “GLM Development” donated $100,000 to the president—with an estimated revenue of $146,000. In yet another, Isabel T. John of Englewood, NJ donated $400,000. The problem? There’s no record she exists.
As the Intercept reports, Trump’s inaugural committee claimed it received $25,000 from NASA mathematician and physicist Katherine Johnson—a claim her power of attorney denies.
The Intercept also notes the filings reveal a $1m donation from American Action Network, which NBC describes as a “dark money power player”—an organization used to funnel money to campaigns while masking the identity of the donor. It also lists a $300,000 donation from Bennett LeBow, a businessman who partnered with Trump on a failed Moscow real estate venture.
Wilkie, with the help of a Twitter call to action, transferred the inaugural committee’s FEC filing to an open-source spreadsheet. Volunteers are working to verify the identities of Trump’s donors, and leaving comments on the document.
Another example of crowdsourcing political action to shine light on what’s going on.
link to this extract
Wireless charging is widely expected to be on one if not all new iPhone models later this year, and the CEO of a prominent wireless charging technology company appears to consider it a done deal. Powermat CEO Elad Dubzinski called wireless charging ‘a standard feature in the next iPhone’ ahead of any official iPhone 7s or iPhone 8 announcement.
Powermat’s CEO made the comment in an unrelated news release about the company gaining a new board chairman:
“With the recent announcement by Apple that wireless charging will become a standard feature in the next iPhone, we are finally at the threshold of mainstream adoption,” said Mr. Dubzinski.
If Steve Jobs were still alive, Dubzinski would now be in the foundations of the new Apple building.
link to this extract
Any large system picks a metric to goal itself on. Entire books and way-too-long Medium posts have been written on the importance of said metric – it influences everything from people’s incentives to how quickly you can optimize your business. In an organizational equivalent of Schrödinger’s cat, picking the metric itself can cause weird cultural distortion (see Goodhart’s Law).
Since it is near impossible to perfectly measure human behavior, most large teams/products pick a proxy metric to measure underlying behavior. For example – ‘clicks’ are a proxy for “did I read this?” and “will I buy this product sometime in the future?”, ‘time spent’ is a proxy for “did I enjoy this content?” and NPS is often a substitute for “do I love this company?”. You convert a nebulous human emotion/behavior to a quantifiable metric you can align execution on and stick on a graph and measure teams on. Engineers and data scientists can’t do anything with “this makes people feel warm and fuzzy”. They can do a lot with “this feature improves metric X by 5% week-over-week”. Figuring out the connection between the two is often the art and science of product management.
This is where opportunities arise for startups and insurgents.
These metrics never really capture the underlying human emotion or behavior they are trying to measure. To make things more interesting, they almost always create secondary behavior which makes the metric go up but in a way the system designers didn’t anticipate or want.
For example, in terms of what designers wanted, what they built/measured and what they unintentionally caused:
• Quality journalism → measure clicks → creation of click-bait content
• Whether an ad resonates with a human being → measure how long someone saw an ad → varied tactics to game people into seeing an ad.
If you take this outside tech, you could vaguely apply this framework to broader issues.
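The proxy-metric gap the piece describes can be made concrete with a toy example. Here is a minimal sketch, with invented numbers, of how optimizing a proxy (“clicks”) can promote exactly the content the underlying behaviour (“did I read this?”) would reject:

```python
# Hypothetical illustration of Goodhart's Law with a proxy metric.
# All figures are invented; "clicks" stands in for "did I read this?".
articles = {
    "in-depth report":    {"clicks": 1_000, "avg_read_seconds": 240},
    "clickbait listicle": {"clicks": 5_000, "avg_read_seconds": 15},
}

# Rank by the proxy the team can measure...
best_by_proxy = max(articles, key=lambda a: articles[a]["clicks"])

# ...versus the behaviour they actually wanted to encourage.
best_by_behaviour = max(articles, key=lambda a: articles[a]["avg_read_seconds"])

# The two rankings disagree: optimizing the proxy rewards the listicle.
```

Once a team is measured on `best_by_proxy`, it will rationally produce more listicles, which is the “secondary behavior” the system designers never wanted.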
AMP has gradually been taking over the Guardian’s mobile traffic; today, 60% of its Google mobile traffic is AMP, well above the 10% to 15% that publishers have been getting from AMP, according to a recent estimate by SEO consulting company Define Media.
Note that the first clause in the sentence isn’t backed up by the second. We don’t know how much of the Guardian’s mobile traffic is Google mobile traffic.
…implementing AMP hasn’t been without hiccups, which [Guardian developer Natalia] Baltazar also detailed. Developer glitches can lead AMP pages to be invalid, which costs them the speed benefit of AMP pages. In one example, the Guardian added a Facebook Messenger share button to pages before that feature had been AMP-approved.
“You can’t imagine how easy it is to invalidate an AMP page,” she said.
Pages can also become invalid when journalists embed elements that aren’t AMP-ready in articles. This happened during the U.S. presidential election, when, on the Guardian’s biggest traffic day of the year, its top story was invalid. “We have limited control over what a journalist might embed,” Baltazar said.
The AMP development team now keeps track of whether AMP traffic drops suddenly, which might indicate pages are invalid, and it can react quickly.
All this adds expense, though. There are setup, development and maintenance costs associated with AMP, mostly in the form of time. After implementing AMP, the Guardian realized the project needed dedicated staff, so it created an 11-person team that works on AMP and other aspects of the site, drawing mostly from existing staff.
That seems like a big effort for something finicky. But the Guardian has pulled out of Facebook Instant Articles and Apple News; it’s all in on Google AMP.
link to this extract
In most news sites, the community tends to hang at the bottom of articles in comments that serve little purpose. We believe the community can play a more important role in news. Wikitribune puts community at the top, literally.
Articles are authored, fact-checked, and verified by professional journalists and community members working side by side as equals, and supported not primarily by advertisers, but by readers who care about good journalism enough to become monthly supporters.
Wikitribune is transparent about the way it operates and will publish its financials regularly. With Wikitribune your support will have more impact as most of the funds are used for paying journalists rather than expensive offices.
On that note, if we don’t reach our goal of hiring 10 journalists, we will refund all our supporters (minus transaction fees).
As has been pointed out elsewhere, Jimmy Wales (who is behind this) is also on the board of The Guardian’s parent group – which is also calling for voluntary contributions to help pay its journalists. The writing at The Guardian isn’t crowdfunded, but it’s pretty well known.
The puzzle is why Wikipedia can’t be the news source, since it gets updates to events very fast, and does that without journalists. Also, news is short-lived; its value vanishes within moments, whereas Wikipedia’s value lies in its longevity.
link to this extract
Things began to go bad late last year when [Brian] Wansink posted some advice for grad students on his blog. The post, which has subsequently been removed (although a cached copy is available), described a grad student who, on Wansink’s instruction, had delved into a data set to look for interesting results. The data came from a study that had sold people coupons for an all-you-can-eat buffet. One group had paid $4 for the coupon, and the other group had paid $8.
The hypothesis had been that people would eat more if they had paid more, but the study had not found that result. That’s not necessarily a bad thing. In fact, publishing null results like these is important—failure to do so leads to publication bias, which can lead to a skewed public record that shows (for example) three successful tests of a hypothesis but not the 18 failed ones. But instead of publishing the null result, Wansink wanted to get something more out of the data.
“When [the grad student] arrived,” Wansink wrote, “I gave her a data set of a self-funded, failed study which had null results… I said, ‘This cost us a lot of time and our own money to collect. There’s got to be something here we can salvage because it’s a cool (rich & unique) data set.’ I had three ideas for potential Plan B, C, & D directions (since Plan A had failed).”
The responses to Wansink’s blog post from other researchers were incredulous, because this kind of data analysis is considered an incredibly bad idea. As this very famous xkcd strip explains, trawling through data, running lots of statistical tests, and looking only for significant results is bound to turn up some false positives. This practice of “p-hacking”—hunting for significant p-values in statistical analyses—is one of the many questionable research practices responsible for the replication crisis in the social sciences.
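The problem the xkcd strip illustrates is easy to demonstrate: run enough tests on pure noise and some will come out “significant” by chance alone. A minimal sketch (a permutation test on invented null data; with a 0.05 threshold you’d expect roughly one false positive per twenty tests):

```python
import random

random.seed(42)  # make the illustration reproducible

def fake_experiment(n=100):
    """Two groups drawn from the SAME distribution: the null is true."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    return a, b

def p_value(a, b, permutations=500):
    """Permutation test on the difference of means (no stats library needed)."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(permutations):
        random.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            hits += 1
    return hits / permutations

# Run 20 "subgroup analyses" on pure noise and count "significant" results.
false_positives = sum(p_value(*fake_experiment()) < 0.05 for _ in range(20))
```

Report only the experiments where `p < 0.05` and you have p-hacking: publishable-looking “findings” mined from a data set where nothing is actually going on.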
O’Grady’s article is not short. But it is very, very informative.
link to this extract
Errata, corrigenda and ai no corrida: none notified