“Our male co-founder? Er.. he’s just over here”. Photo by Horia Varlan on Flickr.
A selection of 12 links for you. Consistency, that’s the word. I’m @charlesarthur on Twitter. Observations and links welcome.
The day an army of bots turned on bot researchers • Daily Beast
»
On Aug. 18, DFR Lab published an analysis on how U.S. alt-right platforms mimicked the sentiment of pro-Russian outlets concerning Charlottesville. The following week, ProPublica picked up the story, but something strange happened: Apparent bots quickly retweeted the article thousands of times.
A day later, an account with just 74 followers described investigative journalism news operation ProPublica as an “alt-left #HateGroup and #FakeNews site funded by Soros.” That tweet racked up some 23,000 retweets, seemingly from a group of bots. A similar tweet managed to grab more than 12,500 retweets. Ben Nimmo, a senior fellow at DFR Lab, then wrote his own analysis of the tweets against ProPublica, and a guide on how to spot a bot. Those retweet bots don’t really help propagate a tweet: Most probably don’t have any followers who are real users. Instead, their goal is likely to saturate a target’s notifications.
“They are not amplifying the accounts, but what they are doing is intimidating the users,” Nimmo told The Daily Beast. “They’re standing in an empty room, shouting really, really, loudly.”
But things got weirder.
“The Atlantic Council’s tweets, which are normally retweeted a couple dozen times, got retweeted almost 108,000 times and some of us got loads of fake new followers,” Donara Barojan, also from the DFR Lab, told The Daily Beast. She gained more than 1,000 new Twitter followers, most of which appeared to be automated accounts.
Barojan said most of the bots that followed her don’t tweet. But the automated accounts have been on Twitter for years.
«
It’s that latter point – that the accounts have been there for years – which always intrigues me. Were they planted there years ago? Bought from spammers who seeded them a long time ago? Hacked more recently (my guess)? Remember that Adrian Chen’s canonical article about paid Russian trolls dates from June 2015, and describes events from mid-2014 onwards. And re-read that article, which contains this:
»
The boom in pro-Kremlin trolling can be traced to the antigovernment protests of 2011, when tens of thousands of people took to the streets after evidence of fraud in the recent parliamentary election emerged. The protests were organized largely over Facebook and Twitter and spearheaded by leaders, like the anticorruption crusader Alexei Navalny, who used LiveJournal blogs to mobilize support. The following year, when Vyacheslav Volodin, the new deputy head of Putin’s administration and architect of his domestic policy, came into office, one of his main tasks was to rein in the Internet.
«
Perhaps Russia really has been playing a long, long game.
link to this extract
Tech firms team up to take down ‘WireX’ Android DDoS botnet • Krebs on Security
»
Experts tracking the attacks soon zeroed in on the malware that powers WireX: Approximately 300 different mobile apps scattered across Google’s Play store that were mimicking seemingly innocuous programs, including video players, ringtones or simple tools such as file managers.
“We identified approximately 300 apps associated with the issue, blocked them from the Play Store, and we’re in the process of removing them from all affected devices,” Google said in a written statement. “The researchers’ findings, combined with our own analysis, have enabled us to better protect Android users, everywhere.”
Perhaps to avoid raising suspicion, the tainted Play store applications all performed their basic stated functions. But those apps also bundled a small program that would launch quietly in the background and cause the infected mobile device to surreptitiously connect to an Internet server used by the malware’s creators to control the entire network of hacked devices. From there, the infected mobile device would await commands from the control server regarding which websites to attack and how.
Experts involved in the takedown say it’s not clear exactly how many Android devices may have been infected with WireX, in part because only a fraction of the overall infected systems were able to attack a target at any given time. Devices that were powered off would not attack, but those that were turned on with the device’s screen locked could still carry on attacks in the background, they found.
“I know in the cases where we pulled data out of our platform for the people being targeted we saw 130,000 to 160,000 (unique Internet addresses) involved in the attack,” said Chad Seaman, a senior engineer at Akamai, a company that specializes in helping firms weather large DDoS attacks (Akamai protected KrebsOnSecurity from hundreds of attacks prior to the large Mirai assault last year).
The identical press release that Akamai and other firms involved in the WireX takedown agreed to publish says the botnet infected a minimum of 70,000 Android systems, but Seaman says that figure is conservative.
“Seventy thousand was a safe bet because this botnet makes it so that if you’re driving down the highway and your phone is busy attacking some website, there’s a chance your device could show up in the attack logs with three or four or even five different Internet addresses,” Seaman said in an interview with KrebsOnSecurity. “We saw attacks coming from infected devices in over 100 countries. It was coming from everywhere.”
«
(This is not the same as the Android ad fraud botnet linked in yesterday’s Overspill.)
link to this extract
Post a boarding pass on Facebook, get your account stolen • Michal Špaček
»
When searching for boarding passes on Facebook, I found a picture of an Aztec code taken by a man who wished to remain anonymous. He’s well known in certain circles, has about 120,000 followers on Twitter, and founded something in Europe and in the United States too. The code in the picture contained his United Airlines frequent flyer number. The airline treats such numbers as super-secret access codes: if they print a frequent flyer number on official correspondence, they print only the last three digits and the rest is masked, like a password. There was a full number in the Aztec code, of course, so I was thinking of using it to try and hijack that person’s account. Because why not, right? It shouldn’t be that easy.
So I went to the United Airlines website, selected Forgot password, and entered the name and the number from the scanned Aztec code. What followed were two security questions that I answered within a few seconds: “the first major city that you visited” was the city where this person was born, and “your favorite cold-weather activity” in the Alpine country was not golf. The system accepted that I was, in fact, him, and I could then set up a new password for his account. Update August 25: this happened in June 2016; United has since added an additional step requiring the customer to click a link emailed to them before their password can be changed. It seems that nowadays I’d only be able to trigger such an email.
I did not set a new password; I wasn’t there to cause anyone any trouble. I sent a message to that person, just like I sent one to Petr Mára. He had deleted the picture with the Aztec code from Facebook (it’s still on Twitter, though), but he didn’t believe I could hijack the account. He thought the website would send a new password to him.
After a brief explanation, he understood. Oh shit, you’re right. You could have just changed the password. This is crazy. Yeah, it is. Just because he’d uploaded his boarding pass, I could have stolen his account. There might be a stored payment card for future purchases, or I could make him get stuck somewhere.
«
Do not take a picture of your boarding pass and put it on social media. (Perhaps I should have linked this before the summer holidays, eh.)
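The underlying point: the Aztec code isn’t encrypted or obfuscated. It carries the standard IATA bar-coded boarding pass (BCBP) string – fixed-width mandatory fields followed by an airline-defined conditional block, which is where items such as the frequent flyer number usually live. Here’s a minimal sketch of pulling out those mandatory fields, assuming the barcode has already been decoded to text with any off-the-shelf reader; the sample payload and field names are illustrative, not taken from the article.

```python
# A minimal sketch of reading the mandatory fields of an IATA BCBP
# (bar-coded boarding pass) payload, assuming the Aztec/QR code has already
# been decoded to its text form with any barcode reader. The sample payload
# below is invented; the field widths follow the published BCBP layout.

MANDATORY_FIELDS = [
    ("format_code", 1),           # 'M' for the multi-leg capable format
    ("legs", 1),                  # number of legs encoded
    ("passenger_name", 20),
    ("eticket_indicator", 1),
    ("pnr", 7),                   # booking reference
    ("from_airport", 3),
    ("to_airport", 3),
    ("carrier", 3),
    ("flight_number", 5),
    ("date_julian", 3),           # day of year
    ("compartment", 1),
    ("seat", 4),
    ("checkin_sequence", 5),
    ("passenger_status", 1),
    ("conditional_size_hex", 2),  # size of the variable block that follows
]

def parse_bcbp(payload: str) -> dict:
    """Slice the fixed-width mandatory items out of a BCBP string."""
    fields, pos = {}, 0
    for name, width in MANDATORY_FIELDS:
        fields[name] = payload[pos:pos + width].strip()
        pos += width
    # Everything after the mandatory items is the conditional block:
    # airline-defined data such as the frequent flyer number and bag tags.
    fields["conditional_block"] = payload[pos:]
    return fields

if __name__ == "__main__":
    sample = (
        "M1"                      # format code + number of legs
        + "DOE/JOHN".ljust(20)    # passenger name
        + "E"                     # electronic ticket indicator
        + "ABC123 "               # PNR (7 characters)
        + "PRG" + "EWR" + "UA "   # from, to, operating carrier
        + "0051 "                 # flight number (5)
        + "175"                   # Julian date of flight
        + "Y"                     # compartment
        + "012A"                  # seat
        + "0001 "                 # check-in sequence (5)
        + "1"                     # passenger status
        + "00"                    # conditional block size (hex)
    )
    for key, value in parse_bcbp(sample).items():
        print(f"{key:22s} {value}")
```

In other words: anyone who can photograph the barcode can read everything in it, plain text, no secrets.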
link to this extract
Why we’re disabling comments on aljazeera.com • Al Jazeera English
»
Today, we disabled the ability to comment on stories on aljazeera.com. It’s a decision that we’ve given much thought to, and one that we feel ultimately best serves our audience.
The mission of Al Jazeera is to give a voice to the voiceless, and healthy discussion is an active part of this. When we first opened up comments on our website, we hoped that it would serve as a forum for thoughtful and intelligent debate that would allow our global audience to engage with each other.
However, the comments section was hijacked by users hiding behind pseudonyms spewing vitriol, bigotry, racism and sectarianism. The possibility of having any form of debate was virtually non-existent.
«
And another one down. I should have been keeping a list.
link to this extract
These women entrepreneurs created a fake male cofounder to dodge startup sexism • Fast Company
»
Witchsy, the alternative, curated marketplace for bizarre, culturally aware, and dark-humored art, celebrated its one-year anniversary this summer. The site, born out of frustration with the excessive clutter and limitations of bigger creative marketplaces like Etsy, peddles enamel pins, shirts, zines, art prints, handmade crafts and other wares from a stable of hand-selected artists. Witchsy eschews the “Live Laugh Love” vibe of knickknacks commonly found on sites like Etsy in favor of art that is at once darkly nihilistic and lightheartedly funny, ranging in spirit from fiercely feminist to obscene just for the fun of it.
In its first year, Witchsy has sold about $200,000 worth of this art, paying its creators 80% of each transaction and managing to turn what Dwyer says is a small profit…
But along the way, Gazin and Dwyer had to come up with clever ways to overcome some of the more unexpected obstacles they faced. Some hurdles were overt: early on, a web developer they brought on to help build the site tried to stealthily delete everything after Gazin declined to go on a date with him. But most of the obstacles were much more subtle.
After setting out to build Witchsy, it didn’t take long for them to notice a pattern: In many cases, the outside developers and graphic designers they enlisted to help often took a condescending tone over email. These collaborators, who were almost always male, were often short, slow to respond, and vaguely disrespectful in correspondence. In response to one request, a developer started an email with the words “Okay, girls…”
That’s when Gazin and Dwyer introduced a third cofounder: Keith Mann, an aptly named fictional character who could communicate with outsiders over email.
“It was like night and day,” says Dwyer. “It would take me days to get a response, but Keith could not only get a response and a status update, but also be asked if he wanted anything else or if there was anything else that Keith needed help with.”
«
The web developer! The collaborators! Good grief. Is it this bad in the UK or other countries? As some have pointed out, the premise here of needing the fake male is exactly that of the TV series Remington Steele.
link to this extract
Google critic ousted from think tank funded by the tech giant • The New York Times
»
not long after one of New America’s scholars posted a statement on the think tank’s website praising the European Union’s penalty against Google, Mr. Schmidt, who had chaired New America until 2016, communicated his displeasure with the statement to the group’s president, Anne-Marie Slaughter, according to the scholar.
The statement disappeared from New America’s website, only to be reposted without explanation a few hours later. But word of Mr. Schmidt’s displeasure rippled through New America, which employs more than 200 people, including dozens of researchers, writers and scholars, most of whom work in sleek Washington offices where the main conference room is called the “Eric Schmidt Ideas Lab.” The episode left some people concerned that Google intended to discontinue funding, while others worried whether the think tank could truly be independent if it had to worry about offending its donors.
Those worries seemed to be substantiated a couple of days later, when Ms. Slaughter summoned the scholar who wrote the critical statement, Barry Lynn, to her office. He ran a New America initiative called Open Markets that has led a growing chorus of liberal criticism of the market dominance of telecom and tech giants, including Google, which is now part of a larger corporate entity known as Alphabet, for which Mr. Schmidt serves as executive chairman.
Ms. Slaughter told Mr. Lynn that “the time has come for Open Markets and New America to part ways,” according to an email from Ms. Slaughter to Mr. Lynn. The email suggested that the entire Open Markets team — nearly 10 full-time employees and unpaid fellows — would be exiled from New America.
While she asserted in the email, which was reviewed by The New York Times, that the decision was “in no way based on the content of your work,” Ms. Slaughter accused Mr. Lynn of “imperiling the institution as a whole.”
Mr. Lynn, in an interview, charged that Ms. Slaughter caved to pressure from Mr. Schmidt and Google, and, in so doing, set the desires of a donor over the think tank’s intellectual integrity.
“Google is very aggressive in throwing its money around Washington and Brussels, and then pulling the strings,” Mr. Lynn said. “People are so afraid of Google now.”
Google rejected any suggestion that it played a role in New America’s split with Open Markets.
«
The Open Markets statement said, inter alia,
»
By requiring that Google give equal treatment to rival services instead of privileging its own, Vestager is protecting the free flow of information and commerce upon which all democracies depend. We call upon U.S. enforcers, including the Federal Trade Commission, the Department of Justice, and states attorneys general, to build upon this important precedent, both in respect to Google and to other dominant platform monopolists including Amazon.
«
New America’s CEO said that she had been working “for the past two months” to spin out Open Markets as an independent program, and responded that
»
“As I reiterated to [Lynn] in June, his repeated refusal to adhere to New America’s standards of openness and institutional collegiality meant that we could no longer work together as part of the same institution. I continued, however, to seek a cooperative solution with Barry; unfortunately, I have been unsuccessful.”
«
That phrase “institutional collegiality” is an interesting one, hinting at “not being part of the team”. Meanwhile, Open Markets has set up a campaign at Citizens Against Monopoly.
link to this extract
Google to comply with EU search demands to avoid more fines • Bloomberg
»
Google will comply with Europe’s demands to change the way it runs its shopping search service, a rare instance of the internet giant bowing to regulatory pressure to avoid more fines.
The Alphabet Inc. unit faced a Tuesday deadline to tell the European Union how it planned to follow an order to stop discriminating against rival shopping search services in the region. A Google spokeswoman said it is sharing that plan with regulators before the deadline expires, but declined to comment further.
The EU fined Google a record 2.4bn euros ($2.7bn) in late June for breaking antitrust rules by skewing its general search results to unfairly favor its own shopping service over rival sites. The company had 60 days to propose how it would “stop its illegal conduct” and 90 days to make changes to how the company displays shopping results when users search for a product. Those changes need to be put in place by Sept. 28 to stave off a risk that the EU could fine the company 5% of daily revenue for each day it fails to comply.
“The obligation to comply is fully Google’s responsibility,” the European Commission said in an emailed statement, without elaborating on what the company must do to comply.
«
The question really is, how is it going to do this?
link to this extract
How to escape a submerged car • Popular Mechanics
»
The good news is that you can escape a sinking vehicle. But you’ve got to be quick. According to the University of Manitoba’s Gordon Giesbrecht, who trains law enforcement officers and others on underwater-vehicle escape, a person has about a minute to get out alive. Here are his five rules of survival—and one caveat.
Rule 1. Don’t Call 911 until you’re out of the car. You’re going to need every second to get out of that vehicle. Worry about calling 911 once you’ve made it out alive, or, as in the case of the I-5 collapse, if your vehicle isn’t submerged. “Time is critical,” says Giesbrecht. “If you touch your cell phone you’re probably going to die.”
Rule 2. Unbuckle.
Rule 3. Don’t open the door! Roll down the windows instead. Opening the door is very difficult against the water pressure and it also allows so much water into the vehicle that it will speed up the sinking process.
You’ll have 30 seconds to a minute until the water rises to the bottom of the passenger windows. This is what Giesbrecht calls the floating period. After that, the water pressure will force the window against the doorframe, making it essentially impossible to roll down.
Caveat to Rule 3: Break that window. Since most vehicles these days have electronically controlled windows, the circuits probably will short before you have a chance to roll them down. In that case, you’ll need a tool to break the window open.
«
Click through for rules 4 and 5, of course. This is clearly a very dangerous situation; let’s hope you never find yourself in it, but that if you do, you can remember at least a few of these. Prompted by the sad story of a family swept away in Houston’s floods.
link to this extract
What we get wrong about technology • Tim Harford
»
Blade Runner (1982) is a magnificent film, but there’s something odd about it. The heroine, Rachael, seems to be a beautiful young woman. In reality, she’s a piece of technology — an organic robot designed by the Tyrell Corporation. She has a lifelike mind, imbued with memories extracted from a human being. So sophisticated is Rachael that she is impossible to distinguish from a human without specialised equipment; she even believes herself to be human. Los Angeles police detective Rick Deckard knows otherwise; in Rachael, Deckard is faced with an artificial intelligence so beguiling, he finds himself falling in love. Yet when he wants to invite Rachael out for a drink, what does he do?
He calls her up from a payphone.
There is something revealing about the contrast between the two technologies — the biotech miracle that is Rachael, and the graffiti-scrawled videophone that Deckard uses to talk to her. It’s not simply that Blade Runner fumbled its futurism by failing to anticipate the smartphone. That’s a forgivable slip, and Blade Runner is hardly the only film to make it. It’s that, when asked to think about how new inventions might shape the future, our imaginations tend to leap to technologies that are sophisticated beyond comprehension. We readily imagine cracking the secrets of artificial life, and downloading and uploading a human mind. Yet when asked to picture how everyday life might look in a society sophisticated enough to build such biological androids, our imaginations falter.
«
Just as filmmakers fail, so do our planners. But we also fail to recognise the subtler need that underlies what actually happens: being able to make lots of things, consistently. This is a great essay; Harford’s “Fifty Things That Made The Modern Economy” would be a good Christmas present for the reader in your life.
link to this extract
Misidentification and improvised rules – we lift the lid on the Met’s Notting Hill facial recognition operation • Liberty
Silkie Carlo was allowed to watch the Met’s facial recognition system trying to identify criminals at the Notting Hill Carnival in London:
»
The project leads explained they had constructed a “bespoke dataset” for the weekend – more than 500 images of people they were concerned might attend. Some, police were seeking to arrest; others they were looking to apprehend if they were banned from attending.
I asked what kind of crimes those on the ‘arrest’ watch list could be wanted for. We weren’t given details, but were told it could be anything from sexual assault to non-payment of fines.
I watched the facial recognition screen in action for less than 10 minutes. In that short time, I witnessed the algorithm produce two ‘matches’ – both immediately obvious, to the human eye, as false positives. In fact both alerts had matched innocent women with wanted men.
The software couldn’t even differentiate sex. I was astonished.
The officers dismissed the alerts without a hint of self-reflection – they make their own analysis before stopping and arresting the identified person anyway, they said.
I wondered how much police time and taxpayers’ money this complex trial and the monitoring of false positives was taking – and for what benefit.
I asked how many false positives had been produced on Sunday – around 35, they told me. At least five of these they had pursued with interventions, stopping innocent members of the public who had, they discovered, been falsely identified.
There was no concern about this from the project leaders.
There was a palpable dark absurdity as we watched the screen, aghast, red boxes bobbing over the faces of a Hare Krishna troupe relentlessly spreading peace and love as people wearing Caribbean flags danced to tambourines.
“It is a top-of-the-range algorithm,” the project lead told us, as the false positive match of a young woman with a balding man hovered in the corner of the screen.
«
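Some back-of-the-envelope arithmetic shows why a system like this produces mostly false alarms: the watchlist is tiny (more than 500 images) compared with a Carnival crowd in the hundreds of thousands, so even a very accurate matcher drowns the occasional true hit in false positives. A rough sketch – the crowd size and error rates below are assumptions for illustration, not figures from the article:

```python
# Back-of-the-envelope precision estimate for a watchlist face search.
# Only the watchlist size (more than 500 images) comes from the Liberty
# piece; the crowd size and the matcher's error rates are assumptions.

crowd = 200_000              # assumed faces scanned over the weekend
watchlist_present = 50       # assumed watchlisted people who actually attend
false_match_rate = 0.000_5   # assumed: 0.05% of innocent faces trigger an alert
true_match_rate = 0.60       # assumed: 60% of watchlisted faces are caught

false_alerts = (crowd - watchlist_present) * false_match_rate
true_alerts = watchlist_present * true_match_rate
precision = true_alerts / (true_alerts + false_alerts)

print(f"False alerts: {false_alerts:.0f}")
print(f"True alerts:  {true_alerts:.0f}")
print(f"Chance an alert is a real match: {precision:.1%}")
# Even with these generous assumptions, only about 23% of alerts point at
# the right person; with the error rates implied by "two matches in ten
# minutes, both wrong", the share would be far lower.
```

link to this extract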
Uber faces investigation of possible foreign-bribery law violations • WSJ
Douglas MacMillan and Aruna Viswanatha:
»
Under former Chief Executive Travis Kalanick, the eight-year-old company spread rapidly to more than 70 countries around the world in part by giving regional teams authority to adapt to local markets and expand as quickly as possible, sometimes flouting local laws.
In South Korea and France, for example, it was found to violate transportation laws. In Singapore, local managers bought more than 1,000 defective cars last year and rented them out to drivers, only fixing the safety defect after one of the cars caught on fire, an investigation by The Wall Street Journal this month found. Uber said it has since added safety measures and fixed all the defective cars in Singapore.
News of the preliminary bribery probe comes as Uber plans to usher in a new chief executive, Expedia Inc. CEO Dara Khosrowshahi, to replace Mr. Kalanick, who resigned in June following months of scandals, legal issues and an internal investigation into allegations of sexism. Mr. Khosrowshahi said Tuesday he plans to accept the job once his employment contract is ironed out.
As Mr. Khosrowshahi steps in, Uber faces growing pressure from U.S. authorities. The Justice Department is separately pursuing a criminal investigation into “Greyball,” a software tool employees used to evade law-enforcement officials, people familiar with the matter said in May. Uber hasn’t commented on the probe.
«
Uber looks like the Augean stables just at the moment.
link to this extract
Researchers taught AI to write totally believable fake reviews, and the implications are terrifying • Business Insider
»
there will soon be a major new threat to the world of online reviews: Fake reviews written automatically by artificial intelligence (AI).
Allowed to rise unchecked, they could irreparably tarnish the credibility of review sites — and the tech could have far broader (and more worrying) implications for society, trust, and fake news.
“In general, the threat is bigger. I think the threat towards society at large and really disillusioned users and to shake our belief in what is real and what is not, I think that’s going to be even more fundamental,” Ben Y. Zhao, a professor of computer science at the University of Chicago, told Business Insider.
Fake reviews are undetectable — and considered reliable
Researchers from the University of Chicago (including Ben Zhao) have written a paper (“Automated Crowdturfing Attacks and Defenses in Online Review Systems”) that shows how AI can be used to develop sophisticated reviews that are not only undetectable using contemporary methods, but are also considered highly reliable by unwitting readers. The paper will be presented at the ACM Conference on Computer and Communications Security later this year.
Here’s one example of a synthesised review: “I love this place. I went with my brother and we had the vegetarian pasta and it was delicious. The beer was good and the service was amazing. I would definitely recommend this place to anyone looking for a great place to go for a great breakfast and a small spot with a great deal.”
There’s nothing immediately strange about this review. It gives some specific recommendations and a believable backstory, and while the last phrase is a little odd (“a small spot with a great deal”), it’s still an entirely plausible human turn of phrase.
«
Based on this, we’re either going to need better ways to identify humans, or online reviews are going the way of the dinosaur.
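For the curious, the paper generates its fakes with a character-level recurrent neural network trained on a large corpus of real reviews. As a much simpler stand-in for that idea – not the paper’s method – here is a toy character-level model that learns which character tends to follow each short context and then samples new text in the same style; the training strings are invented:

```python
# A toy character-level generative model, a simplified stand-in for the
# LSTM-based approach described in the paper (this sketch does NOT
# reproduce it). It only illustrates the general idea: learn character
# statistics from a corpus of reviews, then sample text that mimics their
# style. The training strings below are invented, not real review data.
import random
from collections import defaultdict

ORDER = 4  # characters of context; higher order means more fluent, less novel

def train(corpus, order=ORDER):
    """Count which character tends to follow each `order`-length context."""
    model = defaultdict(list)
    for text in corpus:
        padded = "~" * order + text
        for i in range(len(text)):
            context = padded[i:i + order]
            model[context].append(padded[i + order])
    return model

def generate(model, length=200, order=ORDER):
    """Sample characters one at a time from the learned context statistics."""
    out = "~" * order
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:
            break
        out += random.choice(choices)
    return out[order:]

if __name__ == "__main__":
    fake_training_reviews = [
        "I love this place. The pasta was delicious and the service was amazing.",
        "Great spot for breakfast, good beer and a great deal.",
        "The service was great and I would definitely recommend this place.",
    ]
    model = train(fake_training_reviews)
    print(generate(model))
```

Swap the character-frequency table for a neural network and train it on millions of real reviews, and you get output much like the example quoted above.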
link to this extract
Errata, corrigenda and ai no corrida: none notified.
Website readers! You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.