Start Up No.1911: Online Safety Bill resurfaces yet again, Epson ditches lasers for inkjets, Pegasus v disinformation, and more

What if… the hashing process in your computer produced a file that was illegal to own… or destructive? (Picture of “a computer hash” by Diffusion Bee.)

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


There’s another post coming this week at the Social Warming Substack on Friday at about 0845 UK time. Free signup.


A selection of 9 links for you. Programming note: Elon Musk tweets on their own will not under any circumstances be treated as newsworthy. I’m @charlesarthur on Twitter. Observations and links welcome.


Social media firms face big UK fines if they fail to stop sexist and racist content • The Guardian

Dan Milmo:

»

Social media platforms that breach pledges to block sexist and racist content face the threat of substantial fines under government changes to the online safety bill announced on Monday.

Under the new approach, social media sites such as Facebook and Twitter must also give users the option of avoiding content that is harmful but does not constitute a criminal offence. This could include racism, misogyny or the glorification of eating disorders.

Ofcom, the communications regulator, will have the power to fine companies up to 10% of global turnover for breaches of the act. Facebook’s parent, Meta, posted revenues of $118bn (£99bn) last year.

A harmful communications offence has, however, been dropped from the legislation after criticism from Conservative MPs that it was legislating for “hurt feelings”.

Ministers have scrapped the provision on regulating “legal but harmful” material – such as offensive content that does not constitute a criminal offence – and are instead requiring platforms to enforce their terms and conditions for users.

If those terms explicitly prohibit content that falls below the threshold of criminality – such as some forms of abuse – Ofcom will then have the power to ensure they police them adequately.

Under another adjustment to the bill, big tech companies must offer people a way of avoiding harmful content on their platform, even if it is legal, through methods that could include content moderation or warning screens. Examples of such material include those that are abusive, or incite hatred on the basis of race, ethnicity, religion, disability, sex, gender reassignment or sexual orientation.

«

Hoo boy, that’s all going to be a doddle for the newly shrunk Twitter, I bet. By my count this is the third time the Online Safety/Harms Bill has come around, subtly different each time.
unique link to this extract


Epson ditches lasers and goes all in on inkjet • TechRadar

Will McCurdy:

»

Epson claims its own inkjet printers use up to 85% less energy than a comparable laser printer and produce 85% less carbon dioxide.

Inkjet printers use wet ink and a nozzle assembly to print onto paper, whereas laser printers use a laser and dry ink (also called toner).

In general, inkjet printers tend to be somewhat smaller than their laser counterparts, but also have a slightly higher cost per page.

This news comes a year after Epson announced a ¥100bn ($700m) investment into sustainable innovation. But despite the latest public commitment to sustainability, Epson has attracted some intense criticism regarding its environmental practices in recent years.

Epson confirmed in July 2022 that some of its printers are designed to stop working after a certain period of time, forcing customers either to replace the hardware or pay for it to be serviced by an authorized repair person.

The time-coded limit was reported to affect Epson’s L360, L130, L220, L310, and L365 model printers.

Commenting on the news to the Fight to Repair blog, Harvard professor Jonathan Zittrain said:

“A printer self-bricking after a while is a great example of ‘you think you bought a product, but you really rented a service’.”

«

I have a suspicion that Epson has been getting kicked around in the laser business and has found a neat green exit.
unique link to this extract


Pegasus spyware inquiry targeted by disinformation campaign, say experts • The Guardian

Stephanie Kirchgaessner and Sam Jones:

»

Victims of spyware and a group of security experts have privately warned that a European parliament investigatory committee risks being thrown off course by an alleged “disinformation campaign”.

The warning, contained in a letter to MEPs signed by the victims, academics and some of the world’s most renowned surveillance experts, followed news last week that two individuals accused of trying to discredit widely accepted evidence in spyware cases in Spain had been invited to appear before the committee investigating abuse of hacking software.

“The invitation to these individuals would impede the committee’s goal of fact-finding and accountability and will discourage victims from testifying before the committee in the future,” the letter said.

It was signed by two people who have previously been targeted multiple times by governments using Pegasus: Carine Kanimba, the daughter of Paul Rusesabagina, who is in prison in Rwanda, and the Hungarian journalist Szabolcs Panyi. Other signatories included Access Now, the Electronic Frontier Foundation, Red en Defensa de los Derechos Digitales, and the Human Rights Foundation.

One MEP said it appeared that Spain’s “national interest” was influencing the committee’s inquiry.

The invitation to one of the individuals – José Javier Olivas, a political scientist from Spain’s Universidad Nacional de Educación a Distancia – was rescinded but the other, to Gregorio Martín from the University of Valencia, was not and he is expected to appear before the parliamentary panel on Tuesday.

«

Pegasus, in case you forgot, is the spying software written by NSO which infects phones through zero-click exploits and can download anything from them. NSO has always insisted that it vets its clients carefully against misuse. The evidence shows that its clients hack journalists and human rights activists quite indiscriminately.
unique link to this extract


The exceptionally American problem of rising roadway deaths • The New York Times

Emily Badger and Alicia Parlapiano:

»

as cars grew safer for the people inside them, the US didn’t progress as other countries did to prioritizing the safety of people outside them.

“Other countries started to take seriously pedestrian and cyclist injuries in the 2000s — and started making that a priority in both vehicle design and street design — in a way that has never been committed to in the United States,” [researcher at the Urban Institute, Yonah] Freemark said.

Other developed countries lowered speed limits and built more protected bike lanes. They moved faster in making standard in-vehicle technology like automatic braking systems that detect pedestrians, and vehicle hoods that are less deadly to them. They designed roundabouts that reduce the danger at intersections, where fatalities disproportionately occur.

In the US in the past two decades, by contrast, vehicles have grown significantly bigger and thus deadlier to the people they hit. Many states curb the ability of local governments to set lower speed limits. The five-star federal safety rating that consumers can look for when buying a car today doesn’t take into consideration what that car might do to pedestrians.

These diverging histories mean that while the US and France had similar per capita fatality rates in the 1990s, Americans today are three times as likely to die in a traffic crash, according to Mr. Freemark’s research.

«

Road deaths are just ahead of gun deaths in the US. Though it’s a pretty close thing. Exceptionalism gone wrong.
unique link to this extract


Google partners with med tech company to develop AI breast cancer screening tools • The Verge

Justine Calma:

»

Google announced today that it has licensed its AI research model for breast cancer screening to medical technology company iCAD. This is the first time Google is licensing the technology, with the hopes that it will eventually lead to more accurate breast cancer detection and risk assessment.

The two companies aim to eventually deploy the technology in real-world clinical settings — targeting a “2024 release,” Google communications manager Nicole Linton told The Verge in an email. Commercial deployment, however, still depends on how successful continued research and testing are. “We will move deliberately and test things as we go,” Linton said in the email.

The partnership builds on Google’s prior work to improve breast cancer detection. Back in 2020, Google researchers published a paper in the journal Nature that found that its AI system outperformed several radiologists in identifying signs of breast cancer. The model reduced false negatives by up to 9.4% and reduced false positives by up to 5.7% among thousands of mammograms studied.

«

Quiet improvements: this is what we want from technology. (Not loudmouthed idiots. Please.)
unique link to this extract


Crypto lender BlockFi files for bankruptcy after FTX collapse • The Guardian

Alex Hern:

»

BlockFi, which operates in a similar fashion to a conventional bank, paying interest on savings and using customer deposits to fund lending, says it has $256.9m cash in hand. According to court documents, its creditors include FTX itself, to which it owes $275m, and the US Securities and Exchange Commission (SEC), to which it owes $30m.

In a statement announcing its Chapter 11 bankruptcy filing, BlockFi said: “This action follows the shocking events surrounding FTX and associated corporate entities and the difficult but necessary decision we made as a result to pause most activities on our platform.

“Since the pause, our team has explored every strategic option and alternative available to us, and has remained laser-focused on our primary objective of doing the best we can for our clients.

“These Chapter 11 cases will enable BlockFi to stabilise the business and provide BlockFi with the opportunity to consummate a reorganisation plan that maximises value for all stakeholders, including our valued clients.”

The SEC levied a $100m fine on the company in February for violating securities laws, arguing that the investment products the company offered qualified as unregistered securities. The outstanding $30m debt is apparently the unpaid portion of that fine.

BlockFi has stumbled close to bankruptcy once already this year, in the wake of spring’s crypto crash.

After chief executive Zac Prince said the company needed an injection of capital to stave off a liquidity crisis, it signed a deal with none other than FTX, which gave the company access to $400m in loans. The price of the deal was an option from FTX to buy the lender for about $240m, a sharp decline from a peak valuation of $3bn.

«

More dominoes. Wonder if the SEC will push itself to the front of the queue of creditors.
unique link to this extract


Illegal hashes • Terence Eden’s Blog

The aforesaid Eden:

»

To understand this blog post, you need to know two things.

• There exists a class of numbers which are illegal in some jurisdictions. For example, a number may be copyrighted content, a decryption key, or other text considered illegal.
• There exists a class of algorithms which will take any arbitrary data and produce a fixed length text from it. This process is known as “hashing”. These algorithms are deterministic – that is, entering the same data will always produce the same hash.

Let’s take the MD5 hashing algorithm. Feed it any data and it will produce a hash with a fixed length of 128 bits. Using an 8-bit alphabet, that’s 16 human-readable characters.

Suppose you live in a country with Lèse-majesté – laws which make it treasonous to insult or threaten the monarch.

There exists a seemingly innocent piece of data – an image, an MP3, a text file – which when fed to MD5 produces these 128 bits:

01001001 00100000 01101000 01100001
01110100 01100101 00100000 01110100
01101000 01100101 00100000 01110001
01110101 01100101 01100101 01101110

Decoded into ASCII, that spells “I hate the queen”. 128 bits is probably too short to be illegal in all but the most repressive of regimes. It would be hard, if not impossible, to squeeze terrorist plans into that little space. But it is just enough space to store an encryption key for copyrighted material.

Therefore, it is possible that there exists a file which – by pure coincidence – happens to have an MD5 hash which is illegal.

«

Take it even further: what if there was a string that could make a machine wreck itself (in the sense of a Turing machine instruction)? Then you’d have a destructive hash. Seems like there’s a fun SF story buried in this concept.
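Here’s a minimal Python sketch of the coincidence Eden describes (mine, not his, and purely illustrative: a deliberate search would take on the order of 2^128 attempts):

import hashlib

# MD5 always produces a 128-bit (16-byte) digest, whatever the input size.
TARGET = b"I hate the queen"  # exactly 16 bytes, i.e. 128 bits

def has_treasonous_hash(data: bytes) -> bool:
    # Deterministic: the same input always yields the same digest.
    return hashlib.md5(data).digest() == TARGET

# Illustrative brute-force hunt; in practice you'd expect ~2**128 tries.
for i in range(1_000_000):
    candidate = f"innocent file {i}".encode()
    if has_treasonous_hash(candidate):
        print(f"found a coincidentally illegal file: {candidate!r}")
        break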
unique link to this extract


5.4 million Twitter users’ stolen data leaked online, more shared privately • Bleeping Computer

Lawrence Abrams:

»

More than 5.4 million Twitter user records containing non-public information stolen using an API vulnerability fixed in January have been shared for free on a hacker forum.

Another massive, potentially more significant, data dump of millions of Twitter records has also been disclosed by a security researcher, demonstrating how widely abused this bug was by threat actors.

The data consists of scraped public information as well as private phone numbers and email addresses that are not meant to be public.

Last July, a threat actor began selling the private information of over 5.4 million Twitter users on a hacking forum for $30,000.

While most of the data consisted of public information, such as Twitter IDs, names, login names, locations, and verified status, it also included private information, such as phone numbers and email addresses.

This data was collected in December 2021 using a Twitter API vulnerability disclosed in the HackerOne bug bounty program that allowed people to submit phone numbers and email addresses into the API to retrieve the associated Twitter ID.

Using this ID, the threat actors could then scrape public information about the account to create a user record containing both private and public information.

«

The API flaw seems like a pretty bad (and obvious?) one: “The vulnerability allows any party without any authentication to obtain a twitter ID (which is almost equal to getting the username of an account) of any user by submitting a phone number/email even though the user has prohibitted this action in the privacy settings,” reads the vulnerability disclosure by security researcher ‘zhirinovskiy’. Clearly having lots of staff didn’t necessarily equate to having great checks on API security.
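For what it’s worth, the general shape of such an enumeration attack looks like this — the endpoint and parameter names here are hypothetical placeholders, not Twitter’s actual API:

import requests

# Hypothetical lookup endpoint, for illustration only — not Twitter's real API.
LOOKUP_URL = "https://api.example.com/users/lookup"

def enumerate_ids(contacts):
    # Submit each phone number/email; collect any account ID the server
    # returns, regardless of the target's privacy settings.
    found = {}
    for contact in contacts:
        resp = requests.get(LOOKUP_URL, params={"contact": contact})
        if resp.ok:
            data = resp.json()
            if "id" in data:
                found[contact] = data["id"]
    return found

# An attacker then scrapes public profile data for each recovered ID
# and merges it with the private contact detail into one record.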
unique link to this extract


Twitter failed to detect upload of Christchurch mosque terror attack videos • The Guardian

Eva Corlett:

»

The video clips, filmed by the Australian white supremacist who murdered 51 Muslim worshippers at two mosques in Christchurch in 2019, were uploaded by some Twitter users on Saturday, according to the office of the prime minister, Jacinda Ardern.

A spokesperson for the prime minister said Twitter’s automated reporting function didn’t pick up the content as harmful.

Other users reported the videos and the government separately raised it with Twitter, the office said. “Twitter advised us overnight that the clips have been taken down and said they would do a sweep for other instances.”

The mosque attack was livestreamed on multiple social media platforms and the terrorist’s manifesto published online.

Ardern launched the Christchurch Call after the attack, asking social media companies to counter online extremism and misinformation. Twitter founder Jack Dorsey had supported the initiative.

Speaking to media on Monday afternoon, Ardern said that while “time will tell” over Twitter’s commitment to removing harmful content, the company had advised the government it had not changed its view over its membership of the Christchurch Call community.

“We will continue to maintain our expectation that [Twitter does] everything they can on a day-to-day basis to remove that content but also to reduce terrorist content and violent extremist content online, as they’ve committed to,” Ardern said.

«

This seems to match level 9 (of 20 set out) in Mike Masnick’s Content Moderation Speed Run, as linked yesterday. Plenty of headroom yet.
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified
