You can now sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.
A selection of 9 links for you. Also: Friday (local variations apply). I’m @charlesarthur on Twitter. Observations and links welcome.
Napster cofounder Sean Parker appears to have some regrets about the role he played in bringing social media to the world. Before speaking at an Axios event yesterday, he told reporters that he was now “something of a conscientious objector” on social media, according to Axios, and he shared a few thoughts on how he and others designed sites like Facebook to suck people in.
“When Facebook was getting going, I had these people who would come up to me and they would say, ‘I’m not on social media.’ And I would say, ‘OK. You know, you will be.’ And then they would say, ‘No, no, no. I value my real-life interactions. I value the moment. I value presence. I value intimacy.’ And I would say, … ‘We’ll get you eventually,’” Parker said. And he added that the initial goals for companies like Facebook, of which Parker was the first president, were to make sure users spent as much time on their sites as possible. Interactions such as likes and comments served to bring people deeper into the site, about which Parker said, “It’s a social-validation feedback loop … exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.”
For one year, Google researchers investigated the different ways hackers steal personal information and take over Google (GOOG) accounts. Google published its research, conducted between March 2016 and March 2017, on Thursday.
Focusing exclusively on Google accounts and in partnership with the University of California, Berkeley, researchers created an automated system to scan public websites and criminal forums for stolen credentials. The group also investigated over 25,000 criminal hacking tools, which it received from undisclosed sources.
Google said this is the first study to take a long-term, comprehensive look at how criminals steal your data and which tools are most popular.
“One of the interesting things [we found] was the sheer scale of information on individuals that’s out there and accessible to hijackers,” Kurt Thomas, a security researcher at Google, told CNN Tech.
Even someone with no malicious hacking experience could find all the tools they need on criminal hacker forums.
Man, that’s a lot of stolen logins.
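The researchers’ actual scanner isn’t described in detail, but the core idea, sweeping scraped pages and forum posts for email:password pairs, can be sketched in a few lines. The regex and helper below are illustrative assumptions, not Google’s method:

```python
import re

# Hypothetical sketch of the kind of scanner described above: scan
# scraped page text for "email:password"-style credential dumps.
# The pattern is an assumption, tuned to common dump formats.
CRED_RE = re.compile(r"([\w.+-]+@[\w-]+\.[\w.]+)\s*[:;|]\s*(\S+)")

def extract_credentials(page_text):
    """Return (email, password) pairs found in a scraped page."""
    return CRED_RE.findall(page_text)

dump = "leaked today:\nalice@example.com:hunter2\nbob@example.com : s3cret"
print(extract_credentials(dump))
```

A real system would then hash and compare the harvested pairs against live accounts to decide which are genuinely at risk.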
link to this extract
Twitter pauses account verifications after critics slam it for verifying Charlottesville rally organizer • TechCrunch
Twitter today announced it’s pausing all account verifications – the process that gives public figures on Twitter a blue checkmark next to their names – while it tries to resolve “confusion” around what it means to be verified, the company says. The move comes shortly after a wave of criticism directed against the social network for verifying the account belonging to Jason Kessler, the organizer of the white supremacist rally in Charlottesville, Virginia in August that left one person dead.
The Daily Beast discovered that Kessler’s Twitter account had been given the preferred status indicated by the blue badge on Tuesday. When reached for comment, Twitter pointed reporters to its policies around account verification which explain the badge is awarded if an account is “of public interest.”
But the coveted blue checkmark is still hard to achieve for many noteworthy figures, and critics claimed that verifying a known white supremacist isn’t something that’s in the public interest.
Even Twitter doesn’t seem to understand its own rules on the matter, as it has withheld the checkmark before for controversial but influential accounts, including Julian Assange. It also has punished Twitter users by stripping them of verification, as it did with right-winger Milo Yiannopoulos last year, ahead of permanently banning him.
“Public interest” isn’t the test of verification. Or it didn’t use to be. It’s whether you’re someone who might be in the public eye, and whether you are who you say you are.
Isn’t it? Else what’s the point? Don’t biometrics verify the person? Twitter is descending into a mess, organisationally. It no longer seems to know quite what it stands for, or why it exists.
link to this extract
Making matters more difficult is the explosive amount of risky debt owed by retail coming due over the next five years. Several companies are like teen-jewelry chain Claire’s Stores Inc., a 2007 leveraged buyout owned by private-equity firm Apollo Global Management LLC, which has $2 billion in borrowings starting to mature in 2019 and still has 1,600 stores in North America.
Just $100m of high-yield retail borrowings were set to mature this year, but that will increase to $1.9bn in 2018, according to Fitch Ratings Inc. And from 2019 to 2025, it will balloon to an annual average of almost $5bn. The amount of retail debt considered risky is also rising. Over the past year, high-yield bonds outstanding gained 20%, to $35bn, and the industry’s leveraged loans are up 15%, to $152bn, according to Bloomberg data.
(Chart key: colour indicates the percentage of retail real-estate loans delinquent, by metro area: yellow 0–5%; orange 5–10%; red 10–25%; brown 25–53%.)
Even worse, this will hit as a record $1 trillion in high-yield debt for all industries comes due over the next five years, according to Moody’s. The surge in demand for refinancing is also likely to come just as credit markets tighten and become much less accommodating to distressed borrowers.
Retailers have pushed off a reckoning because interest rates have been historically low from all the money the Federal Reserve has pumped into the economy since the financial crisis. That’s made investing in riskier debt—and the higher return it brings—more attractive. But with the Fed now raising rates, that demand will soften. That may leave many chains struggling to refinance, especially with the bearishness on retail only increasing.
Higher interest rates, even a little, will create big problems as this debt rolls over: stores will have to generate more money to pay the interest, at a time when the advantages for internet retailers will be growing.
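As a quick sanity check on the Bloomberg growth figures quoted above, the implied year-ago debt levels follow from simple division (a back-of-the-envelope sketch using only the numbers in the excerpt):

```python
# Figures quoted above: high-yield retail bonds grew 20% to $35bn,
# leveraged loans grew 15% to $152bn, over the past year.
bonds_now, bonds_growth = 35.0, 0.20
loans_now, loans_growth = 152.0, 0.15

# Implied levels a year ago, in $bn.
bonds_then = bonds_now / (1 + bonds_growth)   # about $29.2bn
loans_then = loans_now / (1 + loans_growth)   # about $132.2bn
print(f"bonds a year ago: ${bonds_then:.1f}bn, loans: ${loans_then:.1f}bn")
```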
link to this extract
Multiple online user reports claim that the MantisTek GK2 mechanical keyboard’s configuration software is sending data to an Alibaba server. One of the reports even includes an analysis of the software’s traffic, which seems to include how many times keys have been pressed.
The MantisTek GK2 is a cheap RGB mechanical keyboard from China that costs half as much (or less) as the mechanical keyboards from better known companies. Multiple gadgets that come from China seem to have either poor security or privacy issues caused by collecting user data without consumers’ explicit permission. The MantisTek GK2 seems to be one of those products.
The main issue seems to be caused by the keyboard’s “Cloud Driver,” which sends information to IP addresses tied to Alibaba servers. Alibaba sells cloud services, so the data isn’t necessarily being sent to Alibaba, the company, but to someone else using an Alibaba server.
The data being sent – in plaintext, no less – has been identified as a count of how many times keys have been pressed.
The first way to stop the keyboard from sending your key presses to the Alibaba server is to ensure the MantisTek Cloud Driver software isn’t running in the background.
The second method to stop the data collection is to block the CMS.exe executable in your firewall. You could do this by adding a new firewall rule for the MantisTek Cloud Driver in the “Windows Defender Firewall With Advanced Security.”
“Yeah, just updating my firewall rules to stop it telling China what I type.” The update does point out that it’s only sending *how many* times the key was pressed – maybe to see key lifetimes or durability. But even so. Shouldn’t do, especially not without very explicit permission.
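The second workaround described above amounts to one `netsh advfirewall` rule. A small sketch that assembles the command; the install path is a guess, so adjust it to wherever the Cloud Driver software actually lives, then run the printed command from an elevated (administrator) prompt on Windows:

```python
# Hypothetical install path for the MantisTek Cloud Driver executable.
CMS_PATH = r"C:\Program Files (x86)\MantisTek\CMS.exe"

def build_block_rule(exe_path=CMS_PATH):
    """Build the netsh advfirewall command for an outbound block rule."""
    return [
        "netsh", "advfirewall", "firewall", "add", "rule",
        'name="Block MantisTek Cloud Driver"',
        "dir=out", "action=block", f'program="{exe_path}"',
    ]

# Print the command to paste into an elevated prompt.
print(" ".join(build_block_rule()))
```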
link to this extract
Spectacles company Warby Parker recently updated its mobile app to include a novel implementation of Apple’s face recognition technology exclusive to the iPhone X.
The glasses app uses the smartphone’s front-facing TrueDepth camera to map the user’s face and create an ideal fit for a new set of frames.
Apple’s Face ID authentication works by projecting 30,000 dots on the surface of a person’s face, accurately mapping its curvature and unique features.
The camera’s sensors also capture the data in three dimensions, and it’s this technology in particular that the glasses app uses to recommend to the user a series of frames that it thinks will fit their facial structure.
The only failing of the app is that it doesn’t (yet) place the spectacles on the user’s face, Snapchat-style, to let the customer see what they look like wearing them.
Third-party app making a clever use of a new affordance. Makeup apps next, surely.
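Warby Parker hasn’t published its fitting code, but the basic idea, measuring the face from 3D points and matching against a frame catalogue, can be sketched simply. The temple coordinates and frame widths below are invented for illustration:

```python
import math

def dist(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Hypothetical frame catalogue: overall frame widths in millimetres.
FRAMES = {"narrow": 126.0, "medium": 134.0, "wide": 142.0}

def recommend_frame(left_temple, right_temple):
    """Pick the frame whose width best matches the measured face width."""
    face_width = dist(left_temple, right_temple)
    return min(FRAMES, key=lambda name: abs(FRAMES[name] - face_width))

# Two made-up temple points from a depth map, in millimetres.
print(recommend_frame((-68.0, 0.0, 10.0), (68.0, 0.0, 10.0)))
```

The real app presumably uses many more landmarks (nose bridge, pupillary distance) than the two points shown here.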
link to this extract
Both the creators of disturbing kids’ videos and fake news writers game the platforms. The tag-filled names of the videos are designed to exploit YouTube’s search algorithms, and that clearly works since the channels that run the content keep proliferating. The catchy headlines of the fake stories continue fooling Facebook’s supposedly sophisticated clickbait detection algorithms. During the recent congressional hearing on Russian meddling in the 2016 election, the platforms’ representatives were asked about fake accounts but couldn’t come up with any convincing answers about their efforts to purge them.
At least the tech platforms are beginning to recognize that, in order not to be gamed as easily and as often as today, they need more human eyes and human hands. But the hype they spurred by boasting about their intelligent algorithms has acquired a life of its own. I wouldn’t be surprised if a company testing autonomous vehicles took seriously a recent paper by a group of Massachusetts Institute of Technology and Carnegie Mellon University scientists describing something called the Moral Machine. The idea is to automate the ethical decisions that a human driver makes on the fly, even the toughest ones such as whether to hit a wall and kill the car’s passengers, including a young girl, or run over an athlete and his dog crossing the street on a red light. The researchers used a website to ask people about moral choices. The next step is to aggregate the data and have an AI-based algorithm figure out a decision corresponding to the crowdsourced wisdom.
“The implementation of our algorithm on the Moral Machine dataset has yielded a system which, arguably, can make credible decisions on ethical dilemmas in the autonomous vehicle domain (when all other options have failed),” the researchers wrote. “But this paper is clearly not the end-all solution.”
Guess which parts of this sentence a tech company would throw away if it decided to implement the algorithm. My bet is on “arguably” and “clearly not the end-all solution.”
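The aggregation step the researchers describe, collecting crowdsourced votes per dilemma and deriving a decision, can be sketched as a simple majority vote. The dilemmas and votes below are invented examples; the actual MIT/CMU system is considerably more sophisticated than this:

```python
from collections import Counter

def aggregate(votes):
    """Map each dilemma to the most common crowdsourced choice."""
    decisions = {}
    for dilemma, choices in votes.items():
        decisions[dilemma] = Counter(choices).most_common(1)[0][0]
    return decisions

# Made-up survey responses, keyed by dilemma.
votes = {
    "wall_vs_pedestrian": ["swerve", "brake", "swerve", "swerve"],
    "passenger_vs_crowd": ["protect_crowd", "protect_crowd", "protect_passenger"],
}
print(aggregate(votes))
```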
Even a novice hacker could breach the network hosting Kris Kobach’s bogus voter fraud program • Gizmodo
To suggest that state officials involved with the program have been grossly negligent is simply too kind.
Kobach was appointed vice-chairman of President Donald Trump’s election integrity commission this year after Trump repeatedly and falsely suggested that between 3 and 5 million people voted illegally in the 2016 general election, ultimately costing him the popular vote. Since taking office, the Trump administration has been pushing to take Kobach’s flawed program nationwide. (As of this week, the commission is being sued by one of its own commissioners.)
Gizmodo has learned, however, that the records passing through the Crosscheck system have been stored on a server in Arkansas operating on a network rife with security flaws. What’s more, multiple sets of login credentials, which could be used by virtually anyone to directly access the Crosscheck system—as well as the encrypted voter data it contains—have been compromised.
Our investigation into the program builds on the work of ProPublica, which last month published a report describing multiple security flaws plaguing Crosscheck’s operations. Documents obtained under state transparency laws by the anti-Trump group Indivisible Chicago revealed that Crosscheck had emailed Illinois election officials both the username and password to the program’s FTP server—credentials that Illinois neglected to redact before releasing the emails publicly.
The emails further revealed that participating states had submitted millions of voter files to the Arkansas server using an unencrypted file transfer protocol. Gizmodo has learned that while some of the data sets were encrypted prior to being transferred, the passwords to decrypt three years’ worth of voter files, belonging to every state participating in Crosscheck, have likewise been exposed.
China-based vendors Oppo and Xiaomi Technology will adopt 3D sensing solutions for smartphones to be launched in 2018, with such solutions to be developed by Himax Technologies via cooperation with Qualcomm and the sensor modules to be produced by Truly Opto-Electronics, according to industry sources.
The cooperation efforts by Qualcomm, Himax and Truly Opto-Electronics will help significantly upgrade the hardware specifications of high-end models rolled out by China-based smartphone vendors in the coming year, further enhancing their competitiveness, said the sources.
The facial recognition solutions co-developed by Qualcomm, Himax and Truly Opto-Electronics are expected to enter volume production in March-April 2018 at the earliest, indicated the sources.
Meanwhile, China’s top smartphone vendor Huawei is reportedly cooperating with China-based Sunny Optical Technology to develop related 3D sensor solutions for its premier models, indicated the sources.
Oh, I thought the race was to develop fingerprint readers under the front screen? Guess that’s been abandoned now that Apple says it dropped that idea more than a year ago. The Android vendors do adequate catch-up, but given that Samsung has failed at face recognition, and that Apple had to buy a specialist company to do FaceID, I wouldn’t put a lot of store in this being a great – as in secure and fast – experience.
In 2015 Huawei, of course, was a couple of weeks ahead of Apple in showing off “Force Touch” in its phones, illustrated by weighing an orange. But it only worked with Huawei’s own apps, so it was essentially pointless. By contrast, lots of third-party iOS apps have adopted it, and all but one of the phones Apple now sells (the SE) incorporate it.
Facial recognition will be patchy across Android devices; the implementations will be uneven and security variable. But they’ll be cheaper.
link to this extract
Errata, corrigenda and ai no corrida: none notified