You can now sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.
A selection of 8 links for you. Octolink. I’m @charlesarthur on Twitter. Observations and links welcome.
The QR code rules supreme in China. You can pay for almost anything with it: street food, toilet paper, a lobster dinner, a foot massage. You can even use it to socialize. At networking sessions, it’s not uncommon to scan someone’s WeChat QR code instead of giving them your business card.
But after an incident last week involving fraudulent QR codes and US$13 million of stolen money, the security of China’s most popular offline-to-online tool is coming under fresh scrutiny.
“Some criminals paste their own QR codes over the original ones to illicitly obtain money, as ordinary consumers simply cannot tell the difference,” wrote China Daily, a state-owned English media site, in an op-ed.
“That is why we are powerless to prevent QR codes from being used for fraudulent activities, and that is precisely why the enterprises using QR codes should assume their share of the responsibility for protection.”
This isn’t the first time that QR codes have been used for malicious purposes in China. Essentially links, QR codes can be used to infect smartphones with viruses, which then let fraudsters steal money from a victim’s mobile wallet, such as Alipay. Sometimes the methods are even more direct: unsuspecting victims, expecting a payment to go to a shopkeeper or a service provider, are tricked into transferring money via QR code.
More recently, a spate of scams has been linked to the country’s bike-sharing craze. Users normally scan a code to unlock rental bikes; by attaching their own QR codes to the bikes, fraudsters can fool riders into transferring US$43 – the same amount as Mobike’s required deposit – to their own accounts.
Surprised this hasn’t happened more widely. Seems like an obvious scam.
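The scam works because a payment QR code is just an encoded string, usually a URL, and the victim’s eye can’t distinguish a pasted-over sticker from the real thing. A minimal sketch of the kind of client-side check a wallet app could make before paying — the host allowlist here is purely illustrative, not how Alipay or WeChat Pay actually verify merchants:

```python
from urllib.parse import urlparse

# Illustrative allowlist; real payment apps bind merchants to
# signed payment requests server-side, not to hostnames.
TRUSTED_PAYMENT_HOSTS = {"qr.alipay.com", "wx.tenpay.com"}

def scanned_code_is_trusted(decoded: str) -> bool:
    """Return True only if the decoded QR payload points at a known payment host."""
    host = urlparse(decoded).hostname or ""
    return host.lower() in TRUSTED_PAYMENT_HOSTS
```

A hostname check like this would only block the crudest substitutions, which is presumably why the China Daily op-ed puts the burden on the enterprises deploying the codes rather than on consumers.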
link to this extract
Speed is a competitive advantage, and time is considered the enemy in most interfaces. That’s reflected in our industry’s fascination with download and rendering speeds, though those metrics are merely offshoots of the underlying user imperative: “help me get this job done quickly.” “Performance isn’t the speed of the page,” says Gerry McGovern. “It’s the speed of the answer.”
But it has to be the right answer. While this approach works a treat for simple facts like weather, dates, or addresses, it starts to get hairy in more ambitious topics—particularly when those topics are contentious.
The reasonable desire for speed has to be tempered by higher-order concerns of fact and accuracy. Every data-driven service has a threshold where confidence in the data gives way to a damaging risk of being wrong. That’s the threshold where the service can no longer offer “one true answer.” Designers have to be vigilant and honest about where that tipping point lies.
It’s more complex than that. Outside certain topics which are clearly bounded (weather; maths; biographical details), it’s really risky to try to give answers: the potential damage to reputation is serious.
link to this extract
“Buttonwood” on Owen Jones’s decision to quit social media after receiving endless, irrational hate over his change of stance on Corbyn; a key element is (as he says) people’s unwillingness to deal in good faith:
as Tim Harford wrote in the Financial Times this weekend, a big problem is that facts are no longer accepted as evidence. This makes economic debate all the harder, as Sean Spicer, Mr Trump’s press secretary, showed on March 10th, saying that jobs data were phony under Obama but true under the new president. In other words, he implied the people who produced the official statistics were doctoring the numbers. The right of the Congressional Budget Office to assess the new health-care plan has also been challenged. If society continues down that route, rational debate becomes impossible.
But there is an even bigger problem. If we think the motives of others are suspect, then we can have no trust. And trust is the glue that ties international relations, and the global economy, together. It is what makes international supply chains, money transfers, trade treaties, and lots of other things work. Economists have shown conclusively that societies where trust is low perform poorly (read Daron Acemoglu and James Robinson’s book, for example).
A world where nationalists take power is a world where disputes flare easily, and governments are reluctant to back down because this makes them look weak. Indeed, they may relish confrontation as burnishing their populist credentials.
This is an excellent distillation of what feels like a growing problem.
link to this extract
Recently, I heard from a security professional whose close friend received a targeted attempt to phish his Apple iCloud credentials. The phishing attack came several months after the friend’s child lost his phone at a public park in Virginia. The phish arrived via text message and claimed to have been sent from Apple. It said the device tied to his son’s phone number had been found, and that its precise location could be seen for the next 24 hours by clicking a link embedded in the text message.
That security professional source — referred to as “John” for simplicity’s sake — declined to be named or credited in this story because some of the actions he took to gain the knowledge presented here may run afoul of U.S. computer fraud and abuse laws.
John said his friend clicked on the link in the text message he received about his son’s missing phone and was presented with a fake iCloud login page: appleid-applemx[dot]us. A lookup on that domain indicates it is hosted on a server in Russia that is or was shared by at least 140 other domains — mostly other apparent iCloud phishing sites — such as accounticloud[dot]site; apple-appleid[dot]store; apple-devicefound[dot]org; and so on.
While the phishing server may be hosted in Russia, its core users appear to be in a completely different part of the world.
Basically, John went gently a-hackin’, and he wound up finding a crim so dim he’d hacked his own phone, stored selfies on his iCloud account, and left “Find my iPhone” on.
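The pattern across those 140-odd sibling domains — the brand name spliced into an unrelated TLD — is crude enough to flag heuristically. A rough sketch (the legitimate-suffix list is an assumption; real detection would consult the public suffix list and registrar data):

```python
# Legitimate Apple suffixes assumed for illustration.
LEGIT_SUFFIXES = (".apple.com", ".icloud.com")

def looks_like_apple_phish(domain: str) -> bool:
    """Flag hostnames that name-drop Apple's brands outside Apple's own domains."""
    d = domain.lower().rstrip(".")
    # The real hosts, or subdomains of them, pass.
    if d in ("apple.com", "icloud.com") or d.endswith(LEGIT_SUFFIXES):
        return False
    # Anything else containing the brand strings is suspect.
    return "apple" in d or "icloud" in d
```

Every example in the story — appleid-applemx[dot]us, accounticloud[dot]site, apple-devicefound[dot]org — would trip a check this simple, which says something about how little effort the phishers needed to invest.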
link to this extract
Chamois is an Android PHA [malware – “potentially harmful application”] family capable of:
• Generating invalid traffic through ad pop-ups that display deceptive graphics
• Performing artificial app promotion by automatically installing apps in the background
• Performing telephony fraud by sending premium text messages
• Downloading and executing additional plugins
• Interfering with the ads ecosystem
We detected Chamois during a routine ad traffic quality evaluation. We analyzed malicious apps based on Chamois, and found that they employed several methods to avoid detection and tried to trick users into clicking ads by displaying deceptive graphics. This sometimes resulted in downloading of other apps that commit SMS fraud. So we blocked the Chamois app family using Verify Apps and also kicked out bad actors who were trying to game our ad systems.
Our previous experience with ad fraud apps like this one enabled our teams to swiftly take action to protect both our advertisers and Android users. Because the malicious app didn’t appear in the device’s app list, most users wouldn’t have seen or known to uninstall the unwanted app. This is why Google’s Verify Apps is so valuable, as it helps users discover PHAs and delete them.
Chamois was one of the largest PHA families seen on Android to date and distributed through multiple channels. To the best of our knowledge Google is the first to publicly identify and track Chamois.
Notable what Google isn’t saying: how many apps had this; how many developers were involved; how many downloads there had been (of apps which contained this malware); how long it had been going on; how many people have been affected.
One other note:
“Our security teams sifted through more than 100K lines of sophisticated code written by seemingly professional developers. Due to the sheer size of the APK, it took some time to understand Chamois in detail.”
“Seemingly professional”? Anyone who writes that amount of code isn’t doing it for laughs, and if they evaded Google for as long as they clearly did, they’re at least “professional”.
link to this extract
During heated exchanges at the Commons home affairs committee one Labour MP went as far as accusing internet company executives of “commercial prostitution” and demanding to know whether they had any shame.
Yvette Cooper, the chair of the committee, told social media executives that they had “a terrible reputation” among their users for failing to act on reports of hate speech and other offensive material online.
She prepared for the evidence session on Tuesday by sending Google links to three YouTube videos posted by neo-Nazis including the US white supremacist, David Duke, and National Action, a banned organisation in Britain.
Other MPs on the committee questioned why they could find hate speech material online “within seconds” on social media sites and how Islamic State supporters and neo-Nazi groups could earn advertising revenue through the videos they posted on YouTube.
The social media companies defended their current monitoring arrangements but said they had to rely on their users on a “notify and take down” basis to tackle the problem of online hate. The tech companies’ sheer scale meant it was impossible for them to conduct proactive searches for such material although they were trying to develop technology, including artificial intelligence, that could improve their response to the problem.
But Cooper told the companies their responses were unconvincing and they were not enforcing their own published community standards despite having millions of users in Britain and making billions of pounds from them…
…Peter Barron, Google Europe’s vice-president for communications and public affairs, said two of the three YouTube videos reported by the committee had been removed. But the third, a David Duke video entitled “Jews admit organising white genocide”, had not been removed despite being described by Cooper as antisemitic and shocking.
Barron said while many Duke videos had been removed this particular one “did not cross the line into hate speech even though it was shocking and offensive in its nature”.
The problem is: how do you take action against these companies, especially when they blithely tell you things like this? There’s clearly no incentive for Google and others to take down this sort of content, because it isn’t reducing engagement. (It’s possible they see data that suggests it increases engagement. Please leak that data to me if you’ve seen it…)
link to this extract
In the last few weeks Alphabet filed a lawsuit against Uber. Alphabet and Waymo (Alphabet’s self-driving car company) allege that Anthony Levandowski, an ex-Waymo manager, stole confidential and proprietary information from Waymo, then used it in his own self-driving truck startup, Otto. Uber acquired Otto in August 2016, so the suit was filed against Uber, not Otto.
This alone is a fairly explosive claim, but the subtext of Alphabet’s filing is an even bigger bombshell. Reading between the lines, (in my opinion) Alphabet is implying that Mr Levandowski arranged with Uber to:
• Steal LiDAR and other self-driving component designs from Waymo
• Start Otto as a plausible corporate vehicle for developing the self-driving technology
• Acquire Otto for $680 million
Below, I’ll present the timeline of events, my interpretation, and some speculation on a possible (bad) outcome for Uber.
It’s quite an interpretation. (Also, legal things tend not to go with bombshells. They’re more like super-slow burners.) One suspects it isn’t going to be that bad, but Uber could find itself a few years behind rivals if things go badly. Still, it has a ton of money which it can use to get through the hard times.
link to this extract
“The ongoing expansion of domain name choices has added another instrument to the spammer’s toolbox: enticing recipients to click through to malicious sites, ultimately allowing attackers to infiltrate their networks,” wrote Ralf Iffert, Manager, X-Force Content Security in a blog about the spam findings. “More than 35% of the URLs found in spam sent in 2016 used traditional, generic top-level domains (gTLD) .com and .info. Surprisingly, over 20% of the URLs used the .ru country code top-level domain (ccTLD), helped mainly by the large number of spam emails containing the .ru ccTLD.”
Iffert continued: “Even the lesser-known domains are already well established in spammers’ business model. Of the top 20 TLDs used in spam emails, X-Force observed seven new gTLDs in the top 10 ranks of the overall list: .click, .top, .xyz, .link, .club, .space and .site.”
The new generic top-level domains let spammers vary their domain URLs and thus bypass spam filters, and some new gTLDs can cost as little as $1 to register, making them attractive to spammers who can automate the registration of hundreds of domains a day, Iffert wrote.
So at least that will gladden the hearts of the registrars of gTLDs. Though one could imagine that companies might start setting up filters to block out non-standard gTLDs.
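Such a filter is trivial to sketch — here in Python, seeding the blocklist with the seven new gTLDs X-Force named; any real deployment would maintain and tune its own list:

```python
from urllib.parse import urlparse

# The seven new gTLDs the X-Force report found in the top 10 for 2016 spam.
SUSPECT_TLDS = {"click", "top", "xyz", "link", "club", "space", "site"}

def url_is_suspect(url: str) -> bool:
    """Flag URLs whose hostname ends in one of the listed TLDs."""
    host = urlparse(url).hostname or ""
    tld = host.rsplit(".", 1)[-1].lower()
    return tld in SUSPECT_TLDS
```

Blocking whole TLDs is blunt — legitimate .site and .xyz registrations exist — which is exactly why registrars would rather nobody resorted to it.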
link to this extract
Errata, corrigenda and ai no corrida: none notified