Start Up No.2137: TikTok and the underage accounts, Beeper rinse-repeats again, the NYT’s vast games staff, and more


How much sugar was removed from people’s diets in the UK by a £300m tax system? A lot less than you think. CC-licensed photo by Uwe Hermann on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


There’s another post coming this week at the Social Warming Substack on Friday at 0845 UK time. Free signup.


A selection of 9 links for you. Use them wisely. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. Observations and links welcome.


TikTok allowing under-13s to keep accounts, evidence suggests • The Guardian

Hibaq Farah and Dan Milmo:

»

TikTok faces questions over safeguards for child users after a Guardian investigation found that moderators were being told to allow under-13s to stay on the platform if they claimed their parents were overseeing their accounts.

In one example seen by the Guardian, a user who declared themselves to be 12 in their account bio, under TikTok’s minimum age of 13, was allowed to stay on the platform because their user profile stated the account was managed by their parents.

The internal communication sent in the autumn involved a quality analyst – someone who is responsible for any queries related to moderating video queues – who was asked by a moderator whether they should ban the user’s account.

The advice from the TikTok quality analyst was that if the account bio said it was managed by parents then moderators could allow the account to stay on the platform. The message was sent into a group chat with more than 70 moderators, who are responsible for looking at content mostly from Europe, the Middle East and Africa.

It has also been alleged that moderators have been told in meetings that if a parent is in the background of a seemingly underage video, or if the bio says an account is managed by a parent, those accounts can stay on the platform.

Suspected cases of underage account holders are sent to an “underage” queue for further moderation. Moderators have two options: to ban, which would mean the removal of the account, or to approve, allowing the account to stay on the platform.

A staff member at TikTok said they believed it was “incredibly easy to avoid getting banned for being underage. Once a kid learns that this works, they will tell their friends.”

TikTok said it was false to claim that children under 13 were allowed on the platform if they stated in their bio that the account was managed by an adult.

«

Who you going to believe, your lying eyes or TikTok’s PR person?
unique link to this extract


Next Beeper Mini fix requires users to have a Mac • MacRumors

Juli Clover:

»

The developers behind Beeper Mini are continuing with their effort to make iMessage for Android function despite Apple’s mitigations, and the latest “fix” requires Beeper Mini users to have access to a Mac.

On Reddit, the Beeper Mini team says that the Mac-based fix coming on December 20 stabilizes iMessage for Beeper Cloud and Mini, and it “works well” and “is very reliable.”

It is unclear how many Android users have a Mac or have a friend with a Mac to rely on, but the fix requires using a Mac to connect to iMessage on Beeper. According to Beeper Mini’s developers, registration data from an actual Mac has to be sent to Apple to use iMessage on Beeper. Beeper has been using its own Mac servers to provide that information to Apple, but that resulted in thousands of Beeper users having the same registration info, which was an “easy target for Apple.”

The Beeper update will instead generate unique registration data for each Mac, making it harder for Apple to tell which users are accessing iMessage through an Android device.

«

Oh good grief. Don’t tell me, the next forced update to Beeper will require users to have an iPhone.
unique link to this extract


A bitter pill for public health • The Critic Magazine

Christopher Snowdon:

»

A study was published claiming that the UK sugar tax had led to a 10% reduction in the amount of sugar consumed in soft drinks. Although one of its authors admitted that a decline of this magnitude “might sound modest”, it was presented as a win for public health. The preposterous pressure group Action on Sugar called for the tax to be “extended to other categories” and the 10% figure soon found its way into the National Food Strategy and several World Health Organisation reports.

Last week the study was retracted, along with an editorial titled “UK sugar tax hits the sweet spot” that had been published in the British Medical Journal claiming that the tax was “working exactly as intended”.

It turns out that the tax has not been working exactly as intended. In a new version of the study, the authors estimate that the decline in sugar consumption from soft drinks was just 2.7%, barely a quarter of the original figure, and that in contrast to the original study, which claimed that there had been no change in soft drink sales, the volume of soft drinks rose by 2.6%.

The decline in sugar consumption was originally said to be 30 grams per household per week. In the new study, it is estimated to be 8 grams per household per week. That works out at less than two calories per person per day. To get an idea of how little that is, get a slice of bread and take the tiniest nibble off one of the corners. That is the amount of calories reduced by a tax that costs consumers £300m a year.

«

Snowdon tends to start from a place of “this won’t work” about such taxes used to change behaviour, but in this case he turns out to have been dead right.
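Snowdon’s “less than two calories per person per day” figure checks out, by the way. A quick sanity check — assuming sugar at roughly 4 kcal per gram and a UK average household size of about 2.4 people, neither of which is stated in the article:

```python
# Sanity check of the per-person calorie figure quoted above.
# Assumptions (not from the article): sugar ~4 kcal per gram,
# UK average household size ~2.4 people.
SUGAR_KCAL_PER_GRAM = 4
HOUSEHOLD_SIZE = 2.4   # assumed UK average
DAYS_PER_WEEK = 7

# Revised estimate from the corrected study: 8g of sugar removed
# per household per week.
grams_per_household_per_week = 8

kcal_per_person_per_day = (
    grams_per_household_per_week * SUGAR_KCAL_PER_GRAM
    / HOUSEHOLD_SIZE
    / DAYS_PER_WEEK
)

print(round(kcal_per_person_per_day, 1))  # ~1.9 kcal/day: "less than two calories"
```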
unique link to this extract


Inside The New York Times’ big bet on games • Vanity Fair

Charlotte Klein:

»

On the ninth floor of the New York Times headquarters, high above the bustling newsroom, a group of editors are doing the Sunday crossword. Or, rather, they’re undoing it. The editors already accepted this submission, one of the 150 to 200 puzzles arriving weekly, and are now working through it clue by clue—questioning, waffling, rewriting. They nitpick and fact-check. They debate the timelessness of a hint; whether the solver’s reaction will be Oh, I guess versus Aha!

…Joel Fagliano, sporting a New York Times T-shirt, a hoodie, and Allbirds sneakers, hunches over his computer and clicks around the grid while reading each clue and its answer aloud. Fagliano, known for never letting a meeting run long, works efficiently but also lets the group nerd out when appropriate. Like now. “All right, ‘Norwegian city depicted in’—oh really? I didn’t know Oslo was in the background of The Scream,” Fagliano says. He pulls up an image of the iconic Edvard Munch painting. “Hard to say what’s back there,” he chuckles, squinting at the ghostly image. “It seems like it’s sort of a whirling.”

The other editors are similarly skeptical as to whether the city of Oslo, the answer to the proposed clue, is clearly identifiable in the painting. “It’s just a blur. Maybe Munch says it was. I know it’s in Oslo,” says Iverson, referring to the physical location of the work. “If the painting is in an Oslo art gallery, I like that,” Ezersky says. Fagliano, still googling, adds, “This says they located the spot to a fjord overlooking Oslo.”

«

Sweet mother of god. 1) That is TOO MANY PEOPLE. American newspapers suffer from chronic overemployment, and this is a classic case. The Guardian, whose crosswords have a world-beating reputation, has one or two people who check the crossword. 2) That crossword clue is atrocious. You’d either know the answer, or guess it, but that’s an appalling way to clue anything.
unique link to this extract


Is this the end of geofence warrants? • Electronic Frontier Foundation

Jennifer Lynch is the EFF’s general counsel:

»

Google announced this week that it will be making several important changes to the way it handles users’ “Location History” data. These changes would appear to make it much more difficult—if not impossible—for Google to provide mass location data in response to a geofence warrant, a change we’ve been asking Google to implement for years.

Geofence warrants require a provider—almost always Google—to search its entire reserve of user location data to identify all users or devices located within a geographic area during a time period specified by law enforcement. These warrants violate the Fourth Amendment because they are not targeted to a particular individual or device, like a typical warrant for digital communications. The only “evidence” supporting a geofence warrant is that a crime occurred in a particular area, and the perpetrator likely carried a cell phone that shared location data with Google. For this reason, they inevitably sweep up potentially hundreds of people who have no connection to the crime under investigation—and could turn each of those people into a suspect.

Geofence warrants have been possible because Google collects and stores specific user location data (which Google calls “Location History” data) altogether in a massive database called “Sensorvault.” Google reported several years ago that geofence warrants make up 25% of all warrants it receives each year.

Google’s announcement outlined three changes to how it will treat Location History data. First, going forward, this data will be stored, by default, on a user’s device, instead of with Google in the cloud. Second, it will be set by default to delete after three months; currently Google stores the data for at least 18 months. Finally, if users choose to back up their data to the cloud, Google will “automatically encrypt your backed-up data so no one can read it, including Google.”

«

It’s a sort of arms race between legal loopholes and technological ones.
unique link to this extract


Studios are loosening their reluctance to send old shows back to Netflix • The New York Times

John Koblin and Nicole Sperling:

»

For years, entertainment company executives happily licensed classic movies and television shows to Netflix. Both sides enjoyed the spoils: Netflix received popular content like “Friends” and Disney’s “Moana,” which satisfied its ever-growing subscriber base, and it sent bags of cash back to the companies.

But around five years ago, executives realized they were “selling nuclear weapons technology” to a powerful rival, as Disney’s chief executive, Robert A. Iger, put it. Studios needed those same beloved movies and shows for the streaming services they were building from scratch, and fueling Netflix’s rise was only hurting them. The content spigots were, in large part, turned off.

Then the harsh realities of streaming began to emerge.

Confronting sizable debt burdens and the fact that most streaming services still don’t make money, studios like Disney and Warner Bros. Discovery have begun to soften their do-not-sell-to-Netflix stances. The companies are still holding back their most popular content — movies from the Disney-owned Star Wars and Marvel universes and blockbuster original series like HBO’s “Game of Thrones” aren’t going anywhere — but dozens of other films like “Dune” and “Prometheus” and series like “Young Sheldon” are being sent to the streaming behemoth in return for much-needed cash. And Netflix is once again benefiting.

Ted Sarandos, one of Netflix’s co-chief executives, said at an investor conference last week that the “availability to license has opened up a lot more than it was in the past,” arguing that the studios’ earlier decision to hold back content was “unnatural.”

“They’ve always built the studios to license,” he said.

As David Decker, the content sales president for Warner Bros. Discovery, said: “Licensing is becoming in vogue again. It never went away, but there’s more of a willingness to license things again. It generates money, and it gets content viewed and seen.”

«

This story could have been headlined “Studios are rediscovering their eagerness to make money from their content, who cares where it’s shown”. It also shows that Netflix has won the streaming wars. Now it’s just a question of which ones drop out and which ones can find a niche in which to thrive.
unique link to this extract


Carter-Ruck and the Ponzi scheme • Tax Policy

Dan Neidle:

»

OneCoin was one of the biggest scams in history. There was no “mining”. There was no blockchain. The “exchange” presented fake prices, designed to make investors think the price of OneCoin was rising when, in reality, there was no price at all. OneCoin was a fraud from the start – a Ponzi [pyramid] scheme, where new investors’ money was used to pay old investors. It also had pyramid scheme features – existing investors were incentivised to sell packages to new investors, who’d pay up to €118,000 for worthless “training courses” accompanied by “tokens” that could be exchanged for OneCoins.

OneCoin failed spectacularly in 2017, and its executives are all now either in jail or in hiding. Around $4bn was stolen from millions of investors. Its founder, Ruja Ignatova, is one of the FBI’s ten most wanted fugitives.

Carter-Ruck is possibly the UK’s most well-known libel-specialist law firm. At some point in 2016 it decided to act for OneCoin and Ruja Ignatova. How did it make that decision?

I was a partner in a large law firm for many years. Before a partner could act for a new client, a team went through procedures to check the bona fides of that client and their business. This included searches of the internet and other open source materials, as well as searches of private databases. Partly this was about protecting the firm’s reputation. But also it was about the serious consequences for a law firm which facilitated criminal activity or received money that was the proceeds of crime. I am not giving away any secrets by saying this, because these are procedures followed by all UK law firms.

What would reasonable due diligence have found in mid-2016, if we limit ourselves to material available on the public internet?

«

Neidle’s dissection of this, and letters sent to various publications by Carter-Ruck, really is a delight. As a side note on the whole business, there’s a great deal of suspicion that Ignatova is now resting at the bottom of several lakes and/or concrete piles, as OneCoin is thought by some to have been a reservoir for some extremely shady money.

I wonder if Carter-Ruck’s bills were paid, and in what currency.
unique link to this extract


Facebook is being overrun with stolen, AI-generated images that people think are real • 404 Media

Jason Koebler:

»

In the photo, a man kneels in an outdoor sawmill next to his painstaking work: An intricate wooden carving of his bulldog, which he proudly gazes at. “Made it with my own hands,” the Facebook caption reads. The image has 1,300 likes, 405 comments, and 47 shares. “Beautiful work of art,” one of the comments reads. “You have an AMAZING talent!,” another says. “Nice work, love it!” “Awesome work keep it up.” 

This incredible work of art, a “wooden monument to my dog,” has been posted dozens of times across dozens of engagement bait Facebook pages. But every time, the man and the dog are different. Sometimes the dog is hyperrealistic. Sometimes the bulldog is a German Shepherd. Sometimes the man’s hair is slicked back, sometimes it stands up. Sometimes the man sits on the other side of the dog. Sometimes the man looks Latino, other times he looks white; clearly, it is a different man, and a different dog, in most of the images. 

Depending on the image, it is obvious, to me, that the man and the dog are not real. The dog often looks weirdly polygonal, or like some wood carving filter has been applied to an image of a real dog. Sometimes the dog’s ear has obvious artifacts associated with AI-generated images. Other times, it’s the man who looks fake. Variations of this picture are being posted all over Facebook by a series of gigantic meme pages with names like “Go Story,” “Amazing World,” “Did you know?” “Follow me,” “Avokaddo,” and so on.

Universally, the comment sections of these pages feature hundreds of people who have no idea that these are AI-generated and are truly inspired by the dog carving. A version of this image posted on Dogs 4 life has 1 million likes, 39,000 comments, and 17,000 shares. The Dogs 4 life account has spammed links to buy cheap, dog-branded stuff to the top of the comments section.

In many ways, this is a tale as old as time: people lie and steal content online in exchange for likes, influence and money all the time. But the spread of this type of content on Facebook over the last several months has shown that the once-prophesied future where cheap, AI-generated trash content floods out the hard work of real humans is already here, and is already taking over Facebook.

It also shows Facebook is doing essentially nothing to help its users decipher real content from AI-generated content masquerading as real content, and that huge masses of Facebook users are completely unprepared for our AI-generated future.

«

And that last paragraph is the point, really. (Also, just as frustrating: there is a real man and the wooden dog he made.) How shocking of course that it should be Facebook’s denizens who aren’t willing to dig just a little bit to confirm whether something is real.
unique link to this extract


Nobody knows what’s happening online anymore • The Atlantic

Charlie Warzel:

»

You are currently logged on to the largest version of the internet that has ever existed. By clicking and scrolling, you’re one of the 5 billion–plus people contributing to an unfathomable array of networked information—quintillions of bytes produced each day.

The sprawl has become disorienting. Some of my peers in the media have written about how the internet has started to feel “placeless” and more ephemeral, even like it is “evaporating.” Perhaps this is because, as my colleague Ian Bogost has argued, “the age of social media is ending,” and there is no clear replacement. Or maybe artificial intelligence is flooding the internet with synthetic information and killing the old web. Behind these theories is the same general perception: Understanding what is actually happening online has become harder than ever.

…Consider TikTok for a second—arguably the most vibrant platform on the internet. Try to imagine which posts might have been most popular on the site this year. Perhaps a dispatch from the Middle East or incendiary commentary on the mass bombings in Gaza? Or maybe something lighter, like a Gen Z dance trend or gossip about Taylor Swift and Travis Kelce? Well, no: According to TikTok’s year-end report, the most popular videos in the U.S.—clips racking up as many as half a billion views each—aren’t topical at all. They include makeup tutorials, food ASMR, a woman showing off a huge house cat, and a guy spray-painting his ceiling to look like Iron Man. As a Verge headline noted earlier this month, “TikTok’s biggest hits are videos you’ve probably never seen.” Other platforms have the same issue: Facebook’s most recent “Widely Viewed Content Report” is full of vapid, pixelated, mostly repackaged memes and videos getting tens of millions of views.

The dynamic extends beyond social media too. Just last week, Netflix unexpectedly released an unusually comprehensive “engagement report” revealing audience-consumption numbers for most of the TV shows and movies in its library—more than 18,000 titles in all. The attempt at transparency caused confusion among some viewers: Netflix’s single most popular anything from January to June 2023 was a recent thriller series called The Night Agent, which was streamed for 812 million hours globally. “I stay pretty plugged in with media, especially TV shows – legit have never heard of what’s apparently the most watched scripted show in the world,” one person posted on Threads.

«

I thought I’d never seen The Night Agent, and then recalled that I’d watched the first five minutes or so and filed it in the “White House CIA conspiracy murder lone good guy” mental drawer, along with stuff like Salt (OK female lead but was written for Tom Cruise), White House Down, Designated Survivor, and so on. But the wider question is the big one: we’re now in a position where we’re trying to understand everything in the world, and that’s impossible.
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified
