Start Up No.1949: Musk firing view decline, where’s Mastodon’s Android hit?, SpaceX ‘not for drones’, smell the Moon, and more


What if, and just hear me out, we could get AI to redo Futurama as a 1980s US sitcom with humans? Or do the same for other modern series? CC-licensed photo by Dave Monk on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


There’s another posting at the Social Warming Substack going live from 0845 UK time. Read or sign up for free and welcome it into your inbox.


A selection of 10 links for you. View them, dammit! I’m @charlesarthur on Twitter. Observations and links welcome.


Elon Musk fires a top Twitter engineer over his declining view count • Platformer

Zoë Schiffer and Casey Newton:

»

For weeks now, Elon Musk has been preoccupied with worries about how many people are seeing his tweets. Last week, the Twitter CEO took his Twitter account private for a day to test whether that might boost the size of his audience. The move came after several prominent right-wing accounts that Musk interacts with complained that recent changes to Twitter had reduced their reach.

On Tuesday, Musk gathered a group of engineers and advisors into a room at Twitter’s headquarters looking for answers. Why are his engagement numbers tanking?

“This is ridiculous,” he said, according to multiple sources with direct knowledge of the meeting. “I have more than 100 million followers, and I’m only getting tens of thousands of impressions.”

One of the company’s two remaining principal engineers offered a possible explanation for Musk’s declining reach: just under a year after the Tesla CEO made his surprise offer to buy Twitter for $44bn, public interest in his antics is waning.  

Employees showed Musk internal data regarding engagement with his account, along with a Google Trends chart. Last April, they told him, Musk was at “peak” popularity in search rankings, indicated by a score of “100.” Today, he’s at a score of nine. Engineers had previously investigated whether Musk’s reach had somehow been artificially restricted, but found no evidence that the algorithm was biased against him.

Musk did not take the news well. 

“You’re fired, you’re fired,” Musk told the engineer. (Platformer is withholding the engineer’s name in light of the harassment Musk has directed at former Twitter employees.)

Dissatisfied with engineers’ work so far, Musk has instructed employees to track how many times each of his tweets are recommended, according to one current worker.

It has now been seven weeks since Twitter added public view counts for every tweet. At the time, Musk promised that the feature would give the world a better sense of how vibrant the platform is. 

“Shows how much more alive Twitter is than it may seem, as over 90% of Twitter users read, but don’t tweet, reply or like, as those are public actions,” he tweeted.

Almost two months later, though, view counts have had the opposite effect, emphasizing how little engagement most posts get relative to their audience size.

«

Damned reality not living up to Musk’s expectations again.
unique link to this extract


Where are the good Android apps for Mastodon? • The Verge

Barbara Krasnoff would like to know, and pretty much hunts down the answer:

»

Samsung, which is responsible for a large number of the Android phones on the market, offers a version of Android whose interface and most basic features can be pretty different from those of Google’s version (which can be found on phones like the Pixel line).

It takes resources to deal with those differences — resources that individual developers and smaller companies may not have. JR Raphael, founder and publisher of Android Intelligence, says, “These days, it’s pretty rare to see any major company fail to release an app for both Android and iOS at the same time, with equal priorities. Where I think we see a noticeable contrast is with the smaller, startup-based services and more indie app developers. In those sorts of scenarios, where resources are clearly limited and a company has to make decisions about where its attention is most valuable, we do still see places sometimes focusing on iOS initially and then coming back to Android later, down the line — or sometimes even just focusing on iOS exclusively. It’s a frustrating reality and one I wish we could change.”

«

Twitter’s own mobile app is pretty horrible by iOS standards, certainly compared to an app made by the two-man team at Tapbots (who made Tweetbot); I suspect it looks the same on Android, and that consistency is sort of the point – a single, fairly uniform interface across platforms.

But Mastodon doesn’t have a big organisation to write the “official” app, and so you get all the small developers having a stab at it. Often with a knife and fork. And that’s where the finicky attention to detail that iOS developers devote to their work (for which they do get rewarded) shows up. Krasnoff links to John Gruber’s notes on this, and he’s right: the two platforms simply have different baselines for acceptable design, just as happens on Windows and macOS.
unique link to this extract


Ukraine war: Elon Musk’s SpaceX firm bars Kyiv from using Starlink tech for drone control • BBC News

James Fitzgerald:

»

SpaceX has limited Ukraine’s ability to use its satellite internet service for military purposes – after reports that Kyiv has used it to control drones.

Early in the war, Ukraine was given thousands of SpaceX Starlink dishes – which connect to satellites and help people stay connected to the internet.

But it is also said to have used the tech to target Russian positions – breaking policies set out by SpaceX.

A Ukrainian official said companies had to choose which “side” they were on. They could join Ukraine and “the right to freedom”, or pick Russia and “its ‘right’ to kill and seize territories”, tweeted presidential adviser Mykhailo Podolyak.

At an event in Washington DC on Wednesday, SpaceX president Gwynne Shotwell explained that Starlink technology was “never meant to be weaponised”.

She made reference to Ukraine’s alleged use of Starlink to control drones, and stressed that the equipment had been provided for humanitarian use.

Uncrewed aircraft have played an important role in the war, having been used by Kyiv to search out Russian troops, drop bombs and counter Moscow’s own drone attacks.

Russia has been accused of attempting to jam Starlink signals by SpaceX founder Elon Musk.

Ms Shotwell confirmed that it was acceptable for the Ukrainian military to deploy Starlink technology “for comms”, but said her intent was “never to have them use it for offensive purposes”.

«

Sooo.. the Ukrainian military should just use it for.. defensive purposes? What amazing nonsense from Starlink. You can’t use it for things that are weapons, but you can use it for people who are holding and using weapons?
unique link to this extract


The Moon smells like gunpowder • Nautilus

Jillian Scudder:

»

Dirt on Earth is usually not very sharp; small pieces of rock and degraded plant material are tumbled against each other and generally turn out somewhat polished, like river rocks, before they enter our noses. If you happen to be allergic to dust, it’s bad luck, but it’s not doing much in the way of physical harm.

The lunar dust, on the other hand, is the shattered remains of rocks, broken repeatedly by tiny meteorites striking the surface. It’s sharp. So sharp, in fact, that it slashed the seals on some of the vacuum-sealed bags meant to preserve moon dust on the way home; they wound up being contaminated with oxygen by the time the Apollo missions made their three-day trip back to Earth.

It clung so severely to the moonwalking space suits, that even brushing each other off before returning to the module effectively did nothing to remove the dust. Considering that the astronauts were notoriously clumsy on the lunar surface, trying to adapt to both the unwieldy suits and the lowered gravity, most of them had taken several tumbles over the course of their moonwalks, and these suits were no longer pristine after many hours on the surface. They were, instead, rather comprehensively covered in lunar dirt.

It was more than just getting wedged in the folds of the suit—it was static cling. If you have ever seen a cat try to extract itself from a box of packing styrofoam without trailing pieces stuck to all parts of itself, that’s the problem we were having with the lunar dust on the moon.

…The human lung does not like tiny microscopic shards of rock. Breathing these in can damage lung tissue in a way that is difficult to repair, because the rocks are so sharp and so tiny, that simply coughing won’t expel them, and so they stay embedded in the lungs, continuously doing damage and eventually causing problems similar to very severe pneumonia.

There’s an earthbound parallel called silicosis, which comes from breathing fine mineral dust, most notably from mining quartz, and which still causes deaths today, less now from mining and more from the cutting of quartz countertops without proper protection. Between 1999 and 2019, 2,512 people in the United States died of silicosis. Like the moon dust, quartz isn’t intrinsically toxic, it’s just that it’s like inhaling fine shards of glass, which isn’t a great idea.

But it’s one of many problems we’re going to have to solve if we want humans to go live on the moon.

«

*Sighs* *crosses off timeshare sales on the Moon*. (The gunpowder thing is because it’s wrecking the inside of your nose.)
unique link to this extract


Extracting tables from images in Python • Better Programming

Xavier Canton:

»

About a year ago, I was tasked with extracting and structuring data from documents, mainly contained in tables. I had no prior knowledge in computer vision and struggled to find a suitable “plug-and-play” solution. The options available were either state-of-the-art neural network (NN) based solutions that were heavy and tedious, or simpler OpenCV-based solutions that were inconsistent.

Inspired by existing OpenCV scripts, I developed a simple and consistent method to extract tables and turned it into an open-source Python library: img2table.

What does my library do?
Lightweight (compared to deep learning solutions), the package requires no training and minimal parametrization. It provides:
• Table identification for images and PDF files, including bounding boxes at the table cell level
• Table content extraction by providing support for OCR services/tools (Tesseract, AWS Textract, Google Vision, and Azure OCR as of now)
• Extraction of table titles
• Handling of complex table structures such as merged cells
• Implementation of a method to correct skew and rotation of images
• Extracted tables are returned as a simple object, including a Pandas DataFrame representation
• Export extracted tables to an Excel file, preserving their original structure.

«

For those who have been looking for something like this. Usual proviso: I haven’t had time to try it myself, so approach with care. But I can already think of projects, such as extracting data from financial reports or government documents, where this is just what you want. (You could also try this entirely different service – small amounts free, then it’s paid – but I haven’t tried that either, and the usual warnings about uploading sensitive content apply.)
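For a sense of what driving the library might look like, here is a minimal sketch. The `Image`/`TesseractOCR` names follow the project’s README but treat them as assumptions; the `cells_to_rows` helper is my own illustration of reassembling per-cell OCR text into ordered rows, not part of the library.

```python
# Sketch of using img2table (class names assumed from the project's README,
# unverified here). cells_to_rows is a standalone pure-Python illustration
# of turning (row, column, text) cell tuples back into ordered rows.

def cells_to_rows(cells):
    """Group (row_index, col_index, text) tuples into row-ordered lists."""
    rows = {}
    for r, c, text in cells:
        rows.setdefault(r, {})[c] = text
    # Sort by row index, then by column index within each row
    return [[cols[c] for c in sorted(cols)]
            for _, cols in sorted(rows.items())]

def demo():
    """Hypothetical end-to-end usage; requires img2table and Tesseract."""
    from img2table.document import Image
    from img2table.ocr import TesseractOCR

    ocr = TesseractOCR(lang="eng")
    doc = Image("report_page.png")       # hypothetical input scan
    for table in doc.extract_tables(ocr=ocr):
        print(table.title)               # extracted table title, if any
        print(table.df)                  # Pandas DataFrame of the cells
```

The appeal, as the author says, is that none of this needs training: it is plain OpenCV-style line detection plus an OCR backend of your choice.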
unique link to this extract


Viral spread: Peter Hotez on the increase of anti-science aggression on social media • Bulletin of the Atomic Scientists

Sara Goudarzi interviews Hotez, a professor of virology and microbiology (who incidentally has a daughter with autism, a condition he has pointed out is not caused by childhood vaccination):

»

Goudarzi: Are you seeing a difference in the attitudes towards COVID-19 vaccines versus say the flu or MMR vaccines?

Hotez: Yeah, because remember, MMR came out of autism; that was the [original] assertion that the MMR vaccine causes autism. That was [The Lancet] paper from 1998. But then around 2014, 2015 (I like to think because I was helping to take some of the wind out of the sails around autism), you started to see the anti-vaccine movement pivot around this concept of health and medical freedom, that you can’t tell us what to do about our kids. And that was the first link to the Republican Tea Party here in Texas. In fact, they started getting PAC [political action committee] money… It really took off in Texas and Oklahoma and that’s what you see came off the rails with COVID-19.

It has been documented now by Charles Gaba, a health analyst whose work has been mentioned in the New York Times, National Public Radio, Pew Research Center and Peterson Academic Center, that the lowest vaccination rates were where health freedom propaganda was the strongest, and among those who are in red states like Texas, the redder the county, the lower the vaccination rate, the higher the numbers of deaths. It’s a really tight correlation, so much so that David Leonhardt in the New York Times has called it red COVID. My forthcoming book, The Deadly Rise of Antiscience: A Scientist’s Warning, [documents how] it’s full on linked to far-right politics and it’s been embraced by the House Freedom Caucus.

After vaccines became widely available, the statements from the 2021 CPAC conference of conservatives in Dallas was first along the lines of: “they’re going to vaccinate you, and then they’re going to take away your guns and your bibles.” It’s ridiculous.

«

Ridiculous, and yet it’s happening. Nobody knows what the solution to such partisan thinking is. The really concerning thing might be that there isn’t one, short of a devastating event that means everyone has to work together for survival. That seems like too high a price to get rid of partisanship.
unique link to this extract


What ChatGPT and generative AI mean for science • Nature

Chris Stokel-Walker and Richard Van Noorden:

»

Some researchers think LLMs are well-suited to speeding up tasks such as writing papers or grants, as long as there’s human oversight. “Scientists are not going to sit and write long introductions for grant applications any more,” says Almira Osmanovic Thunström, a neurobiologist at Sahlgrenska University Hospital in Gothenburg, Sweden, who has co-authored a manuscript using GPT-3 as an experiment. “They’re just going to ask systems to do that.”

Tom Tumiel, a research engineer at InstaDeep, a London-based software consultancy firm, says he uses LLMs every day as assistants to help write code. “It’s almost like a better Stack Overflow,” he says, referring to the popular community website where coders answer each others’ queries.

But researchers emphasize that LLMs are fundamentally unreliable at answering questions, sometimes generating false responses. “We need to be wary when we use these systems to produce knowledge,” says Osmanovic Thunström.

This unreliability is baked into how LLMs are built. ChatGPT and its competitors work by learning the statistical patterns of language in enormous databases of online text — including any untruths, biases or outmoded knowledge. When LLMs are then given prompts (such as Greene and Pividori’s carefully structured requests to rewrite parts of manuscripts), they simply spit out, word by word, any way to continue the conversation that seems stylistically plausible.

The result is that LLMs easily produce errors and misleading information, particularly for technical topics that they might have had little data to train on. LLMs also can’t show the origins of their information; if asked to write an academic paper, they make up fictitious citations.

«
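That “spit out, word by word” description is, at heart, weighted sampling over next-token scores. A toy sketch (the vocabulary and scores here are invented for illustration; real models compute scores over tens of thousands of tokens with a neural network):

```python
import math
import random

def sample_next(logits, temperature=1.0, rng=random):
    """Pick one token from a {token: score} dict via softmax sampling.

    Lower temperature sharpens the distribution (more predictable text);
    higher temperature flattens it (more surprising continuations).
    """
    tokens = list(logits)
    scaled = [logits[t] / temperature for t in tokens]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    # Invented next-token scores after a prompt; not from any real model.
    logits = {"plausible": 2.0, "true": 1.0, "cited": 0.5}
    print(sample_next(logits, temperature=0.7, rng=random.Random(0)))
```

The point the researchers make follows directly: nothing in that loop checks whether a continuation is *true*, only whether it is statistically likely.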

What’s really telling is that in the middle of the article there’s a “What’s your experience with ChatGPT? Take Nature’s poll”, which indicates how keenly this is being watched in a field where, after all, writing authoritative-sounding text is what it’s all about. (Thanks G for the link.)
unique link to this extract


Why is YouTube filled with these AI-generated show intros? • Daily Dot

Audra Schroeder:

»

Over the last month, a new kind of uncanny content has been flooding YouTube: Intros to animated series and TV shows recreated with AI-produced images, made to look like they came from another decade. 

There’s Family Guy, The Simpsons, Bob’s Burgers, Adventure Time, Futurama, South Park, and Beavis and Butt-Head. These are just a handful of examples from the last month, but they all have the same waking-nightmare feel to them, likely because a majority were made in Midjourney. I didn’t search for any of these videos, but around late January, YouTube started recommending them, and wouldn’t stop. Most recently, it thought I would like “The Joe Rogan Experience as a 90’s Sitcom.”

Three weeks ago, YouTuber Lyrical Realms posted a surreal Family Guy intro, which has more than 5 million views. It features just stills, presented as a slideshow, showing the main characters from the animated show as live-action renderings. But once you zoom in on the teeth and hands, it looks a little less “human,” and the characters don’t feel like they’d be pleasant to actually watch. (Largely because there is no dialogue or plot.)

One of the comments on the clip: “mind blowing. you could convince somebody who didn’t know about family guy that this was a real sitcom.” 

Lyrical Realms told the Daily Dot that they were inspired by “the recent surge in popularity of ‘80s sci-fi movies.

“I have a background in machine learning and a love for AI, so this was a natural fit for me. The process was quite long, taking me five full days to generate between 1,000 to 1,500 images. The most challenging part was the prompt engineering and having to discard many images during the process until I found the perfect prompts.” 

YouTuber Suburban Garden says they saw a video titled “Dark Souls as an 80’s Dark Fantasy Film” in early January and got inspired to try one out: “I got the free trial for Adobe premiere and stayed up until six in the morning trying to release my video before someone else took the idea.”

After publishing a Futurama intro that got more than 900,000 views in two weeks, they say their “channel of only 10 subscribers quickly rose to 1,000 in three days.” It currently has more than 5,000 subscribers. 

Last month, writer Ryan Broderick pondered “how long it’ll take for YouTube to start trying to downrank this stuff algorithmically.” But it seems these kinds of videos are only proliferating, and some of them are getting millions of views; YouTube doesn’t have much incentive to stop it.

«

They are terrible, as much as anything because they capture the terrible lighting and casting of 80s sitcoms. It’s the AI tsunami coming at us.
unique link to this extract


Inside Safe City, Moscow’s AI surveillance dystopia • WIRED

Masha Borak:

»

The Russian capital is now the seventh-most-surveilled city in the world. Across Russia, there are an estimated 21 million surveillance cameras, and the country ranks among the top in the world in terms of the number of connected surveillance cameras. The system created by Moscow’s government, dubbed Safe City, was touted by city officials as a way to streamline its public safety systems. In recent years, however, its 217,000 surveillance cameras, designed to catch criminals and terrorists, have been turned against protestors, political rivals, and journalists. 

“Facial recognition was supposed to be the ‘cherry on top,’ the reason why all of this was built,” says a former employee of NTechLab, one of the principal companies building Safe City’s face recognition system.

Following Russia’s invasion of Ukraine, Safe City’s data collection practices have become increasingly opaque. The project is now seen as a tool of rising digital repression as Russia wages war against Ukraine and dissenting voices within its own borders. It is an example of the danger smart city technologies pose. And for the engineers and programmers who built such systems, its transformation into a tool of oppression has led to a moment of reckoning. 

…In addition to its network of more than 200,000 cameras, Safe City also incorporates data from 169 information systems, managing data on citizens, public services, transportation, and nearly everything else that makes up Moscow’s infrastructure. This includes anonymized cell phone geolocation data collection, vehicle license plate recognition, data from ride-hailing services, and voice recognition devices. As Safe City was still rolling out in 2020, the Russian government announced plans to spend $1.3bn deploying similar Safe City systems across Russia. From the outside, the potential for the system to be abused seemed obvious. But for those involved in its development, it looked like many other smart city projects. “No one expected that the country would turn into hell in two years,” says one former NTechLab employee…

«

What’s also notable in this story is how all the companies mentioned as suppliers scramble to be quoted saying they’ve had no involvement since the Ukraine invasion.
unique link to this extract


How deepfake videos are used to spread disinformation • The New York Times

Adam Satariano and Paul Mozur:

»

In one video, a news anchor with perfectly combed dark hair and a stubbly beard outlined what he saw as the United States’ shameful lack of action against gun violence.

In another video, a female news anchor heralded China’s role in geopolitical relations at an international summit meeting.

But something was off. Their voices were stilted and failed to sync with the movement of their mouths. Their faces had a pixelated, video-game quality and their hair appeared unnaturally plastered to the head. The captions were filled with grammatical mistakes.

The two broadcasters, purportedly anchors for a news outlet called Wolf News, are not real people. They are computer-generated avatars created by artificial intelligence software. And late last year, videos of them were distributed by pro-China bot accounts on Facebook and Twitter, in the first known instance of “deepfake” video technology being used to create fictitious people as part of a state-aligned information campaign.

“This is the first time we’ve seen this in the wild,” said Jack Stubbs, the vice president of intelligence at Graphika, a research firm that studies disinformation. Graphika discovered the pro-China campaign, which appeared intended to promote the interests of the Chinese Communist Party and undercut the United States for English-speaking viewers.

… Graphika linked the two fake Wolf News presenters to technology made by Synthesia, an A.I. company based above a clothing shop in London’s Oxford Circus.

The five-year-old start-up makes software for creating deepfake avatars. A customer simply needs to type up a script, which is then read by one of the digital actors made with Synthesia’s tools.

AI avatars are “digital twins,” Synthesia said, that are based on the appearances of hired actors and can be manipulated to speak in 120 languages and accents. It offers more than 85 characters to choose from with different genders, ages, ethnicities, voice tones and fashion choices.

«

unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified
