Start Up No.1,022: Huawei’s asymmetric PR push, watching the porn blockers, Vizio targets your smart TV, the facial files, and more


See the big engines? That’s what makes the 737 Max a problem to fly. CC-licensed photo by Caribb on Flickr.

A selection of 10 links for you. Want to vote on it? I’m @charlesarthur on Twitter. Observations and links welcome.

Ethiopian Air crash: where did Boeing go wrong with the 737 Max? • Slate

Jeff Wise:

»

To maintain its lead, Boeing had to counter Airbus’ move [of rolling out the A320neo in 2014]. It had two options: either clear off the drafting tables and start working on a clean-sheet design, or keep the legacy 737 and polish it. The former would cost a vast amount—its last brand-new design, the 787, cost $32bn to develop—and it would require airlines to retrain flight crews and maintenance personnel.

Instead, it took the second and more economical route and upgraded the previous iteration. Boeing swapped out the engines for new models, which, together with airframe tweaks, promised a 20% increase in fuel efficiency. In order to accommodate the engine’s larger diameter, Boeing engineers had to move the point where the engine attaches to the wing.

This, in turn, affected the way the plane handled. Most alarmingly, it left the plane with a tendency to pitch up, which could result in a dangerous aerodynamic stall. To prevent this, Boeing added a new autopilot system that would pitch the nose down if it looked like it was getting too high. According to a preliminary report, it was this system that apparently led to the Lion Air crash.

If Boeing had designed a new plane from scratch, it wouldn’t have had to resort to this kind of kludge. It could have designed the airframe for the engines so that the pitch-up tendency did not exist. As it was, its engineers used automation to paper over the aircraft’s flaws. Automated systems can go a long way toward preventing the sorts of accidents that arise from human fecklessness or inattention, but they inherently add to a system’s complexity. When they go wrong, they can act in ways that are surprising to an unprepared pilot. That can be dangerous, especially in high-stress, novel situations. Air France 447 was lost in 2009 after pilots overreacted to minor malfunctions and became confused about what to expect from the autopilot.

«

This seems to have been the cause of the Ethiopian Air crash. The UK has grounded all 737 Max aircraft. And the NYT was writing about the Lion Air crash – and the associated changes – at the start of February. The Ethiopian Air crash seems to have been avoidable, if the lessons had been learned quickly enough.
link to this extract


Huawei shows where the real US-China imbalance lies • Bloomberg

Tim Culpan:

»

the Shenzhen-based telecom equipment maker has sought to recruit foreign journalists for its public-relations team, taken out advertisements in overseas media to press its case, and intensified its activity on Twitter to include criticism of the US legal system and “a call for truth and justice for the good of global citizens.”

The irony is that no foreign organization could dream of attempting the same in China. This imbalance has worked in Huawei’s favor.

A months-long PR and lobbying campaign overseas has softened the stance of foreign governments and regulators, helping combat the perception that the company is a conduit for espionage by Beijing. That’s moved it toward the company’s likely end goal: winning more business with telecom operators.

The dichotomy isn’t unique to Huawei. China’s government has also leveraged the openness of developed-nation democracies to push its message, while refusing the same opportunities at home.

China blocks its citizens from accessing Twitter, yet the country’s state-controlled media and government agencies have dozens of accounts with the US social media service that they use to spread Beijing’s agenda. One editor-in-chief even regularly criticizes foreign governments on his personal timeline, a practice that would probably land him in detention if it was directed at his own government.

Huawei and Meng may have credible arguments to make against US and Canadian authorities, but the real victory for them is in being able to make them at all.

«

link to this extract


Triton is the world’s most murderous malware, and it’s spreading • MIT Technology Review

Martin Giles:

»

In a worst-case scenario, the rogue code could have led to the release of toxic hydrogen sulfide gas or caused explosions, putting lives at risk both at the facility and in the surrounding area.

[Julian] Gutmanis recalls that dealing with the malware at the petrochemical plant, which had been restarted after the second incident, was a nerve-racking experience. “We knew that we couldn’t rely on the integrity of the safety systems,” he says. “It was about as bad as it could get.”

In attacking the plant, the hackers crossed a terrifying Rubicon. This was the first time the cybersecurity world had seen code deliberately designed to put lives at risk. Safety instrumented systems aren’t just found in petrochemical plants; they’re also the last line of defence in everything from transportation systems to water treatment facilities to nuclear power stations.

Triton’s discovery raises questions about how the hackers were able to get into these critical systems. It also comes at a time when industrial facilities are embedding connectivity in all kinds of equipment—a phenomenon known as the industrial internet of things. This connectivity lets workers remotely monitor equipment and rapidly gather data so they can make operations more efficient, but it also gives hackers more potential targets.

«

First spotted late in 2017; origin still unknown.
link to this extract


Peak California • Medium

Byrne Hobart:

»

When Airbnb was just starting out, the founders spent years being nearly broke. It’s hard to imagine someone living in the Bay Area spending a long time “nearly broke” today; they’d spend too much on rent and have to move back home or get a BigCo job. Y Combinator has implicitly acknowledged this. When the program started in 2005, they’d offer founders a maximum of $20,000 to spend the summer running a startup. Now it’s $120,000. That’s a 14% compounded growth rate in the minimum amount of cash on hand needed to start a company. YC has also grown, but it’s hard to count on one organization to hold back the tide here. As long as higher rents raise the cost of starting a pre-revenue company, fewer people will join them, so more people will join established companies, where they’ll earn market salaries and continue to push up rents.

And one of the things they’ll do there is optimize ad loads, which places another tax on startups. More dangerously, this is an incremental tax on growth rather than a fixed tax on headcount, so it puts pressure on out-year valuations, not just upfront cash flow.

According to Social Capital’s 2018 letter, almost 40% of VC money goes to advertising on the largest search, social, and e-commerce channels. Those channels have adapted to a world where they’re the best place to scale because they have the biggest audience, which means there’s more money for them in optimizing their revenue capture. Thus, ads get better-targeted, ad loads rise over time, more content moves into the walled garden, and it becomes progressively harder not to pay an economically efficient (read: very high) ad price.

«

Hobart reckons that California (particularly San Francisco) has reached the point where you just can’t start up there any more. But haven’t people felt that way for years?
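(A quick back-of-the-envelope check of that 14% figure – a sketch assuming the $20,000 deal dates from YC’s 2005 start and the $120,000 figure from 2019, i.e. 14 years:)

    # Rough check of the "14% compounded growth rate" claim, assuming the
    # $20,000 figure is from 2005 and the $120,000 figure from 2019 (14 years).
    start, end, years = 20_000, 120_000, 14
    cagr = (end / start) ** (1 / years) - 1
    print(f"{cagr:.1%}")  # ~13.7%, i.e. roughly 14% a year
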
link to this extract


Porn block: how will the new UK laws work? • HuffPost UK

Sophie Gallagher:

»

As stipulated in the 2017 Digital Economy Act, from the beginning of April all porn websites are required to have verification of a user’s age before they can permit them to view the website.

Websites such as PornHub and RedTube will only be unlocked after individual users have been through a process of verification to prove they are over 18… 

…The NSPCC [National Society for the Prevention of Cruelty to Children] claims two thirds of 15 to 16-year-olds have seen pornography, while Childline claims to have delivered more than 2,000 counselling sessions in the past three years about online porn.

The government has left it in the hands of the porn companies to ensure they comply with the compulsory checks, so the type of age-verification software will depend on which sites you visit.

One example, being developed by MindGeek – which owns Pornhub, YouPorn, RedTube and Brazzers – is called AgeID. This will work by redirecting you to a non-pornographic page when you attempt to visit a porn site. On that separate page, you will have to put in your phone number, email address and credit card details. MindGeek say this will be a one-time verification, and they expect 20 to 25 million UK users will sign up to AgeID.

«

No way at all that this could possibly go wrong. No way at all. Not as if it’s asking for three pieces of information that are quite widely available to hackers, and which will have risen in value overnight.
link to this extract


Vizio wants next-generation smart TVs to target ads to households • Reuters

Sheila Dang:

»

Smart TV manufacturer Vizio has formed a partnership with nine media and advertising companies to develop an industry standard that will allow smart TVs to target advertisements to specific households, the companies said Tuesday.

The consortium includes major TV networks like Comcast Corp’s NBCUniversal and CBS Corp, as well as advertising technology companies like AT&T Inc’s Xandr.

Addressable advertising, or targeting viewers on the household level based on their interests, has long been the goal of TV marketers. But TVs lack cookies that internet browsers use to allow ads to follow people around the web. And TV manufacturers have so far used different technology and standards to enable addressable advertising, hindering the industry’s growth, said Jodie McAfee, senior vice president of sales and marketing at Inscape, a subsidiary of Vizio.

“It creates a level of complication for (TV networks), and scale is critical,” he said in an interview.

Privacy advocates have voiced concerns that targeted advertising may invade privacy and the information gathered could be misused or hacked.

«

So the idea is that they could charge more for the ads? In return for knowing everything about what you’re watching and, perhaps, listening to you? If Vizio wants to destroy the smart TV concept, it’s going about it in just the right way.

Also proving there’s no activity that Americans can’t see as needing more advertising.
link to this extract


Free will in an algorithmic world • Medium

Kartik Hosanagar:

»

Consider these facts: 80% of viewing hours streamed on Netflix originate from automated recommendations. By some estimates, nearly 35% of sales at Amazon originate from automated recommendations. And the vast majority of matches on dating apps such as Tinder and OkCupid are initiated by algorithms. Given these numbers, many of us clearly do not have quite the freedom of choice we believe we do.

One reason is that products are often designed in ways that make us act impulsively and against our better judgment. For example, suppose you have a big meeting at work tomorrow. Ideally, you want to spend some time preparing for it in the evening and then get a good night’s rest. But before you can do either, a notification pops up on your phone indicating that a friend tagged you on Facebook. “This will take a minute,” you tell yourself as you click on it. But after logging in, you discover a long feed of posts by friends. A few clicks later, you find yourself watching a YouTube video that one of them shared. As soon as the video ends, YouTube suggests other related and interesting videos. Before you know it, it’s 1:00 a.m., and it’s clear that you will need an all-nighter to get ready for the following morning’s meeting. This has happened to most of us.

The reason this behavior is so common, as some product designers have noted, is that popular design approaches—such as the use of notifications and gamification to increase user engagement—exploit and amplify human vulnerabilities, such as our need for social approval or our inability to resist immediate gratification even when we recognize that it comes with long-term costs. While we might feel as if we are making our own choices, we’re often nudged or even tricked into making them.

«

Worth looking around in your daily life and noticing how many of your “choices” are actually made by machines.
link to this extract


Where Warren’s wrong • Stratechery

Ben Thompson has a huge writeup on presidential hopeful Senator Elizabeth Warren’s proposal to regulate tech firms:

»

I have called Facebook’s acquisition of Instagram The Greatest Regulatory Failure of the Past Decade, and called for an end to social networks being allowed to buy other social networks. I do have qualms about the idea of retroactively undoing deals, but I do think Senator Warren is directionally correct in this case.

More broadly, as I explained in The Value Chain Constraint, the price of being an Aggregator is tuning your company to the value chain within which you compete; it follows that all of these companies will face significant challenges moving into new spaces with new value chains. To that end, what makes the most sense from a management perspective is leveraging the tremendous amounts of cash thrown off by their core businesses to acquire and invest in companies competing in different value chains.

On the flipside, to the extent regulators wish to constrain Aggregators, the single most effective lever is limiting acquisitions. There are significant problems with this, to be sure, particularly when it comes to the incentives for new company creation (most successful exits are acquisitions, not IPOs), but at least this is a remedy that is somewhat approaching the problem.

«

Worth settling in if you want to think about this topic.
link to this extract


Facial recognition’s ‘dirty little secret’: millions of online photos scraped without consent • NBC News

Olivia Solon:

»

“This is the dirty little secret of AI training sets. Researchers often just grab whatever images are available in the wild,” said NYU School of Law professor Jason Schultz.

The latest company to enter this territory was IBM, which in January released a collection of nearly a million photos that were taken from the photo hosting site Flickr and coded to describe the subjects’ appearance. IBM promoted the collection to researchers as a progressive step toward reducing bias in facial recognition.

But some of the photographers whose images were included in IBM’s dataset were surprised and disconcerted when NBC News told them that their photographs had been annotated with details including facial geometry and skin tone and may be used to develop facial recognition algorithms. (NBC News obtained IBM’s dataset from a source after the company declined to share it, saying it could be used only by academic or corporate research groups.)

“None of the people I photographed had any idea their images were being used in this way,” said Greg Peverill-Conti, a Boston-based public relations executive who has more than 700 photos in IBM’s collection, known as a “training dataset.”

“It seems a little sketchy that IBM can use these pictures without saying anything to anybody,” he said.

John Smith, who oversees AI research at IBM, said that the company was committed to “protecting the privacy of individuals” and “will work with anyone who requests a URL to be removed from the dataset.”

Despite IBM’s assurances that Flickr users can opt out of the database, NBC News discovered that it’s almost impossible to get photos removed. IBM requires photographers to email links to photos they want removed, but the company has not publicly shared the list of Flickr users and photos included in the dataset, so there is no easy way of finding out whose photos are included. IBM did not respond to questions about this process.

«

Solon is one of the best technology journalists out there, with a consistent run of great stories.
link to this extract


Why I put my dog’s photo on social media, but not my son’s • WSJ

Joanna Stern:

»

“We often see people overexposing their children—nude photos, bath-time photos, beach photos—and hashtagging them, which allows this to be searchable content and allows predators to find children,” says Carly Asher Yoost, chief executive of Child Rescue Coalition, an organization that works with law enforcement to locate people who download or distribute child pornography.

Even on Instagram, I came across a number of comment threads where people appeared to be trading links to child pornography. Instagram has since shut down those accounts.

“Keeping children and young people safe on Instagram is hugely important to us,” an Instagram spokeswoman said. “We do not allow content that endangers children, and we will not hesitate to take action when we find accounts that break these rules.” Instagram and Facebook provide written guides for parents. The company says it has automated systems to detect nudity in uploaded photos and is improving its capabilities all the time.

What to do: Many parents opt to keep photos of their children off social media until they are old enough to be part of the conversation; some who do share conceal their children’s faces—for instance with emojis. But other parents I spoke to didn’t realize how visible their public Instagram accounts were, especially when photos are hashtagged with things like #pottytraining or #bathtime.

If you are sharing photos of your children, make your posts and your account private. Unfriend or block any followers you don’t feel comfortable with. Remember that your Facebook cover photo—where parents often show off their children—is always public.

«

Trading links to child pornography. (I think we call it child sexual abuse, but anyway.) Et tu, Instagram.
link to this extract


Errata, corrigenda and ai no corrida: none notified

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.

