You can now sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.
A selection of 11 links for you. It is, after all, Friday. I’m @charlesarthur on Twitter. Observations and links welcome.
The United Kingdom is about to play host to one of the most ambitious autonomous car tests ever. Its goal? To find out what happens when you let a fleet of self-driving cars loose into the real world.
The DRIVEN consortium is a government-funded group of companies involved in several aspects of autonomous car development, starting a 30-month test project that will culminate in six to 12 self-driving cars driving between London and Oxford in the second half of 2019. The project aims to go beyond the question of whether we can make a car drive itself, exploring bigger issues like how a computer can judge risk and what happens when an autonomous car loses cellular service.
The open-road testing will put to use the technology developed by Oxford-based artificial intelligence firm Oxbotica. The cars will operate with SAE Level 4 autonomy.
“This is the first exercise where there’s a connected fleet talking to each other about risk and routes and all those sorts of things,” Dr. Graeme Smith, CEO of Oxbotica, tells Inverse.
“Typically, vehicles today work as single vehicles, so this is the first trial where we’re looking at doing some joined-up thinking between the different vehicles.”
You shouldn’t install the iOS 11 beta for many reasons, most notably the fact that tons of things are just plain broken. Here’s a selection of things that are broken or annoying in the current beta state…
All Birchler’s points are fair. I’ve been trying iOS 11 out on an iPad Pro, and it’s good fun – the new Control Center (once you figure out how to configure it and add the relevant controls) is great. The new Dock and multitasking UI take a little getting used to.
One thing I notice? The lock screen is really black. As if it were preparing for OLED blacks.
link to this extract
Sorry to burst your bubble, but Microsoft’s ‘Ms Pac-Man beating AI’ is more Automatic Idiot • The Register
So what’s the problem?
It’s all a bit of clever trickery – a bit of a hack. The crucial thing is that the reward weights are hardcoded into the software. Ghosts are set to -1,000; pills and fruits are assigned weights based on their in-game points. This is programmed in by the researchers. It means the AI hasn’t learned very much at all: it hasn’t learned that ghosts are bad and to be avoided because they cost Ms Pac-Man her lives and ultimately the whole game, that pills need to be collected, that fruits are good rather than stationary ghosts, and so on.
Other reinforcement learning systems found out through hours of trial and error that, for example in Space Invaders, they could press the fire button and sometimes earn points; that firing away made things disappear, also earning points; that moving and firing made more things disappear, earning more points; that moving to avoid being hit by enemy bullets let the player live longer, thus allowing it to gain more points; and so on. These systems learned from scratch the value of their decisions. Hit the ball, shoot the thing, get a reward, figure it out, get better.
Maluuba’s HRA is, in all honesty, a proof of concept. It didn’t have to learn the hard way. It was born knowing everything it ever needed to know. Until it can learn for itself from scratch, building up intelligence on its own from its environment, it’s a preprogrammed maze-searching algorithm. Romain Laroche, one of the paper’s coauthors, admitted the weights are defined “manually for the moment,” adding they’ll become dynamic at some point, hopefully. The fixed design is documented in the paper.
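The distinction the Register is drawing can be made concrete with a minimal sketch – my own illustration, not Maluuba's code; the weight values and function names are made up for the example. One agent is handed the reward weights on day one; the other has to estimate an event's value from raw reward signals, one collision at a time:

```python
# Minimal sketch (hypothetical, not Maluuba's implementation) contrasting
# hardcoded reward weights with values learned from experience.

# HRA-style: the designer bakes in what matters before any play happens.
HARDCODED_WEIGHTS = {"ghost": -1000, "pill": 10, "fruit": 100}

def hra_score(objects_hit):
    """Score a step using the preprogrammed weights."""
    return sum(HARDCODED_WEIGHTS[o] for o in objects_hit)

# Learning-style: the agent starts knowing nothing and discovers an
# event's value only through repeated trial and error.
def learn_value(observed_rewards, alpha=0.5):
    """Running estimate of an event's value from raw reward samples."""
    v = 0.0
    for r in observed_rewards:
        v += alpha * (r - v)   # incremental update toward each sample
    return v

# The hardcoded agent "knows" ghosts are bad on step one...
print(hra_score(["ghost"]))                    # -1000
# ...the learner only converges there after many ghost collisions.
print(round(learn_value([-1000] * 20), 1))     # -1000.0
```

The Space Invaders systems described above sit on the second side of this divide: their equivalent of `HARDCODED_WEIGHTS` is empty at the start, which is exactly what the HRA system skips.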
Bali’s famous rice terraces, when seen from above, look like colorful mosaics because some farmers plant synchronously, while others plant at different times. The resulting fractal patterns are rare for man-made systems and lead to optimal harvests without global planning.
To understand how Balinese rice farmers make their decisions for planting, a team of scientists led by Stephen Lansing (Nanyang Technological University) and Stefan Thurner (Medical University of Vienna, Complexity Science Hub Vienna, IIASA, SFI), both external faculty at the Santa Fe Institute, modeled two variables: water availability and pest damage. Farmers who live upstream have the advantage of always having water, while those downstream have to adapt their planting to the schedules of the upstream farmers.
Here, pests enter the scene.
Yes, really: fractal planting, without central control, produces pretty much optimal outcomes.
link to this extract
Proponents of the tax cuts argued that they would unleash economic growth and job creation. Yet as numerous subsequent analyses demonstrate, the promised economic growth did not materialize. Tax revenues fell sharply. Job growth and output growth disappointed. Population growth, whether as a cause or consequence of the economic growth, failed to materialize. Finally, last week, state legislators recognized the experiment’s failure and reversed course.
Understanding the reasons that the Kansas tax cut experiment failed to create jobs is particularly important given that the outline for tax reform rolled out by the Trump administration in April shares many features with the Kansas model. U.S. Treasury Secretary Steven Mnuchin says the administration’s plan “is all about jobs, jobs, jobs,” much as Gov. Brownback did in Kansas five years ago. In fact, subsequent reporting suggests that the Trump administration’s tax plan was rolled out in an incomplete state because the president read an op-ed in The New York Times co-authored by some of the same advocates who provided advice to Brownback on his tax plan.
The failure of the Kansas tax cut experiment to create jobs has little to do with Kansas, however, and everything to do with the fact that the underlying economics of tax reform—as envisioned by Gov. Brownback and President Donald Trump—isn’t a good path to jobs. To understand this point, it’s worth considering in turn the two primary types of taxes that were cut under the Kansas plan and in the Trump administration’s outline: taxes on labor income and taxes on business profits.
Claims of supply-side growth from labor income tax cuts rely on the idea that people will be more willing to work when their after-tax wages are higher. This theory posits that labor income tax cuts produce growth because people who could increase their earnings choose not to while tax rates are high. But it does not take much to see why cutting tax rates for middle- and higher-income families does not create jobs through this mechanism: middle- and higher-income families already have jobs, even if they are not necessarily the jobs they want.
If I’m reading this correctly, it suggests that the Laffer curve is nice in theory, bunk in practice. Otherwise revenues from the tax cuts would have spiked and things would have been great.
Or – alternative hypothesis – the tax ratio was already on the wrong side of the Laffer curve, and cutting just made it worse.
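Either way, the logic of the curve is easy to demonstrate with a toy revenue function – entirely made up, not from the article: revenue is rate times base, and the taxable base shrinks as the rate rises. Cutting rates only raises revenue if you start on the far side of the peak.

```python
# Toy Laffer curve (illustrative only; parameters are invented).
# Revenue = rate x base, where the base shrinks as the rate rises.
def revenue(rate, elasticity=1.0):
    base = 100 * (1 - rate) ** elasticity   # taxable activity falls with rate
    return rate * base

# For this toy base, revenue peaks at a 50% rate...
peak = max(range(0, 101), key=lambda r: revenue(r / 100)) / 100
print(peak)   # 0.5

# ...so a cut from 30% to 25% - the near side of the peak - loses revenue,
# which is the Kansas pattern described above.
print(revenue(0.30) > revenue(0.25))   # True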
link to this extract
Spotify’s revenue grew more than 50% to $3.3bn last year. And in order to grow more, the music streaming company will pay music labels billions of dollars over the next two years.
In financial filings released this morning, Spotify says it has agreed to pay more than $2bn in minimum payments to record labels over the next two years.
Spotify doesn’t spell out who that money is going to. But people familiar with the company confirm it is talking about two deals it has recently signed with Universal Music Group, the world’s biggest music label, which has about a third of the market, and Merlin, which represents a large group of independent labels.
That means Spotify will ultimately be on the hook for even more guaranteed payments once it re-signs Sony and Warner Music Group, the two other major music labels.
Total users grew to 140m, but no word on how many are paying (the last figure was 50m, in March).
link to this extract
Today, a new analysis from the Pivotal Research Group showed that Google and Facebook accounted for approximately 71% of all digital advertising sales in the United States during the first quarter of 2017 and 82% of all growth in digital advertising. That’s a steady year-over-year increase from 2016 and 2015, when the two technology giants had a combined share of 69% and 64% of digital advertising, respectively, according to the analysis.
And as media analyst Ken Doctor notes, that growth isn’t exactly loose change.
“Even a 2% share movement, which may seem like a small number, it’s still a big number,” said Doctor, author of “Newsonomics.”
What’s left for media organizations? Not much, according to Alan Mutter, a newspaper industry analyst and professor at the University of California at Berkeley.
“The vast preponderance of digital advertising dollars go to Google and Facebook, and very little is left over for other people,” Mutter said. “There’s just more content running around in search of advertising than there is advertising dollars that can support that content.”
And so hundreds of people go out of work.
link to this extract
• 615 million: the number of devices worldwide with ad-blocking software on them. That’s up 30% year over year, according to PageFair.
• 90%: the overwhelming majority of the mobile devices equipped with an ad blocker – all 380 million of them – are located in Asia, where limited, expensive bandwidth plays just as big a role in the adblocking wars as user experience.
• 1%: for a time, publishers could take solace in the fact that very few mobile devices in the U.S. had adblocking apps installed, according to eMarketer research. With Safari and Chrome both poised to begin blocking ads on mobile, this number is going to change a lot in the coming year.
• 17%, 22%, 27%: adblocking might be surging in Asia, but in many advanced digital media markets it has either stabilized or is declining. These three numbers represent the adblocking rates in Canada, the UK and Germany.
With Google Chrome and Apple’s Safari both about to add adblocking, things are hotting up on this front. Adtech companies may have only a limited time to get their act together.
link to this extract
Children are already taught Data Language as part of the Maths curriculum. They are taught how to collect data, record it, create basic statistics, make charts and graphs from it, even in primary school. But what about Data Literature?
What if children were taught about Florence Nightingale’s use of data? They could unpick the method of collection, the birth of new forms of visualisation and the use of data for argument and persuasion and change. They could examine the context of Nightingale’s work at the time and the repercussions through to the present day. They could create new works from her data, put together new visualisations and invent modern-day newspaper stories.
They could examine the works of great modern day data visualisers and compare and contrast their works around particular key events, such as the Iraq war or the 2016 presidential election, or on thematic topics such as climate change. They could examine commonalities in form – citation of sources, provision of values – as well as differences in style and expression. They could produce their own visualisations in the style of one of the greats, or simply copy a work to see how it’s done.
They could look at the use of data in reports, from official statistical releases, through academic papers, to sports commentary. They could look at how these have evolved over time, and the varying ways in which numbers and statistics can be used to inform and substantiate a story that is being told. They could look at the choices made about what numbers get quoted in such stories, and have exercises where they select different numbers or use different rhetorical devices (eg “almost 20%” vs “less than 20%”) to reach a different conclusion…
…I am sure there must be people thinking of and doing this already. I know of the Calling Bullshit course, for example. What else is there? Does this idea have legs? How could we advance it? Let me know at email@example.com.
The 2017 Pro retains the same selection of ports as the Pro 4. There’s a full-size USB 3.1 generation 1 (5Gbps) port, a mini DisplayPort, a headset jack, a microSDXC card reader, and Microsoft’s proprietary Surface Connect magnetic port (used for charging and the Surface Dock). That’s it.
The number of ports has always felt a little stingy; the technology being used feels even worse. There’s no 10Gbps USB 3.1 generation 2 port; there’s no Thunderbolt 3; there’s no USB Type-C. The port selection is as backwards-looking as they come.
Microsoft has argued that this is because USB Type-C is in its infancy and remains complicated to deploy, given some marketplace confusion about which ports can be used for what (features such as charging, video output, and Thunderbolt all can use Type-C, but there’s no guarantee that a Type-C port offers any of those capabilities). In addition, many companies produce out-of-spec cables and chargers, adding further complexity. As such, it’s better to stick with what’s safe and well-known.
This is a disappointing attitude. If the goal of the Surface brand is, at least in part, to drive forward PC technology, what better place to do it than with this tricky piece of tech? After all, when the Surface line first came to market, one could easily argue that PC tablets and pen computers were complex, niche products that weren’t a good fit for most users. Microsoft didn’t give up on that idea, however; it refined it and has successfully demonstrated that, when done well, these machines can have wide appeal.
Type-C could surely have presented a similar opportunity to show the industry a best-in-class Type-C implementation. Give the machine, say, four ports and ensure that every port supports charging, supports displays, and supports Thunderbolt 3. Make sure external GPUs work reliably. Ensure that the system firmware is configured correctly to protect against malicious Thunderbolt 3 devices. Make Windows clearer about when an underpowered charger is being used.
The true dimensions of the failure were first reported Wednesday by Politico Magazine. The affected Center for Election Systems referred all questions to its host, Kennesaw State University, which declined comment. In March, the university had mischaracterized the flaw’s discovery as a security breach.
Logan Lamb, a 29-year-old Atlanta-based private security researcher formerly with Oak Ridge National Laboratory, made the discovery last August. He told The Associated Press he decided to go public after the publication last week of a classified National Security Agency report describing a sophisticated scheme, allegedly by Russian military intelligence, to infiltrate local U.S. elections systems using phishing emails.
The NSA report offered the most detailed account yet of an attempt by foreign agents to probe the rickety and poorly funded U.S. elections system. The Department of Homeland Security had previously reported attempts last year to gain unauthorized access to voter registration databases in 20 states — one of which, in Illinois, succeeded, though the state says no harm resulted.
It also emboldened Lamb to come forward with his findings. Lamb discovered the security hole — a misconfigured server — one day as he did a search of the Kennesaw State election-systems website. There, he found a directory open to the internet that contained not just the state voter database, but PDF files with instructions and passwords used by poll workers to sign into a central server used on Election Day, said Lamb.
“It was an open invitation to anybody pretending to even know a little bit about computers to get into the system,” said Marilyn Marks, an election-transparency activist whose Colorado-based foundation participated in a failed lawsuit that sought to bar the use of paperless voting machines in next week’s election.
Linked to this rather than Politico because of Lamb’s action: the NSA story that the Intercept ran (leaked, remember, by someone who heard an Intercept podcast wondering about the extent of Russian hacking) prompted Lamb to come forward. Dominoes fall.
More to the point, the US’s election system is beginning to look unfit for purpose in the modern world. Sure, I take the point (American readers) that US elections can involve multiple topics on big ballot papers. That doesn’t mean the answer is insecure, unauditable systems for convenience, though.
link to this extract
Errata, corrigenda and ai no corrida: none notified