Start Up No.1920: the deepfake photo threat, pricing ChatGPT, the fossil fuel job ban, ban that post!, how Twitter ends, and more

Has fusion power finally, at last, moved past being a terrific backdrop for research groups to pose in front of? New results might be encouraging. Perhaps. CC-licensed photo by Steve Jurvetson on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.

Have you seen the latest post at the Social Warming Substack? It’s about Google and ChatGPT. Topical, as you’d expect.

A selection of 10 links for you. Well, always nice to be optimistic. I’m @charlesarthur on Twitter. Observations and links welcome.

Thanks to AI, it’s probably time to take your photos off the Internet • Ars Technica

Benj Edwards:


If you’re one of the billions of people who have posted pictures of themselves on social media over the past decade, it may be time to rethink that behavior. New AI image-generation technology allows anyone to save a handful of photos (or video frames) of you, then train AI to create realistic fake photos that show you doing embarrassing or illegal things. Not everyone may be at risk, but everyone should know about it.

Photographs have always been subject to falsifications—first in darkrooms with scissors and paste and then via Adobe Photoshop through pixels. But it took a great deal of skill to pull off convincingly. Today, creating convincing photorealistic fakes has become almost trivial.

Once an AI model learns how to render someone, their image becomes a software plaything. The AI can create images of them in infinite quantities. And the AI model can be shared, allowing other people to create images of that person as well.

When we started writing this article, we asked a brave volunteer if we could use their social media images to attempt to train an AI model to create fakes. They agreed, but the results were too convincing, and the reputational risk proved too great. So instead, we used AI to create a set of seven simulated social media photos of a fictitious person we’ll call “John.” That way, we can safely show you the results. For now, let’s pretend John is a real guy. The outcome is exactly the same, as you’ll see below.

In our pretend scenario, “John” is an elementary school teacher. Like many of us, over the past 12 years, John has posted photos of himself on Facebook at his job, relaxing at home, or while going places.

Using nothing but those seven images, someone could train AI to generate images that make it seem like John has a secret life. For example, he might like to take nude selfies in his classroom. At night, John might go to bars dressed like a clown. On weekends, he could be part of an extremist paramilitary group. And maybe he served prison time for an illegal drug charge but has hidden that from his employer.


Smart article: rather than waiting for a pressure group or academic to come up with this idea, they thought about the problem themselves. And it’s clearly a potentially big problem. Sure to come up in political campaigns near you.
unique link to this extract

Comparing Google and ChatGPT • Hacker News

From the comments, this first is by “hncel”:


I work at Alphabet and I recently went to an internal tech talk about deploying large language models like this at Google. As a disclaimer I’ll first note that this is not my area of expertise, I just attended the tech talk because it sounded interesting.

Large language models like GPT are one of the biggest areas of active ML research at Google, and there’s a ton of pretty obvious applications for how they can be used to answer queries, index information, etc. There is a huge budget at Google related to staffing people to work on these kinds of models and do the actual training, which is very expensive because it takes a ton of compute capacity to train these super huge language models. However what I gathered from the talk is the economics of actually using these kinds of language models in the biggest Google products (e.g. search, gmail) isn’t quite there yet. It’s one thing to put up a demo that interested nerds can play with, but it’s quite another thing to try to integrate it deeply in a system that serves billions of requests a day when you take into account serving costs, added latency, and the fact that the average revenue on something like a Google search is close to infinitesimal already. I think I remember the presenter saying something like they’d want to reduce the costs by at least 10x before it would be feasible to integrate models like this in products like search. A 10x or even 100x improvement is obviously an attainable target in the next few years, so I think technology like this is coming in the next few years.

Commenter “summerlight”: This is so true. Some folks in Ads also tried to explore using large language models (one example: LLM is going to be the ultimate solution for contextual targeting if it’s properly done), but one of the major bottlenecks is always cost and latency. Even if you can afford cpu/gpu/tpu costs, you always have to play within a finite latency budget. Large language models often add latency on the order of seconds, not even milliseconds! This is simply not acceptable.


One proviso: hncel only created their account in June 2021, and this is their first comment. Hard to be certain that they really know this, but it sounds plausible.
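The economics the commenter describes can be made concrete with a back-of-envelope calculation. Every number below is an assumption for illustration (rough public estimates, not figures from the comment or from Google):

```python
# Back-of-envelope for why per-query LLM serving cost matters at Google scale.
# All figures are illustrative assumptions, not disclosed numbers.
queries_per_day = 8.5e9        # rough public estimate of daily Google searches
llm_cost_per_query = 0.003     # assumed LLM serving cost per query, in dollars
revenue_per_query = 0.01       # assumed average ad revenue per search

daily_cost = queries_per_day * llm_cost_per_query
print(f"daily LLM serving cost: ${daily_cost / 1e6:.1f}M")

# The presenter's target: cut costs by at least 10x before integration is feasible.
target_cost = llm_cost_per_query / 10
print(f"per-query cost after a 10x reduction: ${target_cost:.4f}")
```

On those made-up numbers, LLM serving would eat a large fraction of per-search revenue, which is the commenter’s point about the economics not being “quite there yet”.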
unique link to this extract

Fossil fuel recruiters banned from three more UK universities • The Guardian

Damian Carrington:


Three more UK universities have banned fossil fuel companies from recruiting students through their career services, with one citing the industry as a “fundamental barrier to a more just and sustainable world”.

The University of the Arts London, University of Bedfordshire, and Wrexham Glyndwr University join Birkbeck, University of London, which was the first to adopt a fossil-free careers service policy in September.

The moves follow a campaign supported by the student-led group People & Planet, which is now active in dozens of universities. The group said universities have been “propping up the companies most responsible for destroying the planet”, while the climate crisis was “the defining issue of most students’ lifetimes”. The campaign is backed by the National Union of Students and the Universities and College Union, which represents academics and support staff.

“The approach supports future generations to make meaningful career decisions,” said Lynda Powell, the executive director of operations at Wrexham Glyndwr University (WGU). “Through this we are supporting the development of a sustainable workforce for the future.”

…The Guardian revealed in May that the world’s biggest fossil fuel firms were planning scores of “carbon bomb” oil and gas projects that would drive the climate past internationally agreed temperature limits and lead to catastrophic global impacts. UN secretary general, António Guterres, also told US students that month: “Don’t work for climate wreckers. Use your talents to drive us towards a renewable future.”


No quote from the fossil fuel companies in this or preceding stories on the topic. I suppose they’d argue that they’d like the brightest talents so they can speed up the transition away from fossil fuels? (Tough argument when the UNSG is against you, though.) Though I’m not sure how many they’d be looking to recruit from the University of the Arts London. Also, how far does “fossil-free” extend? No car companies? Electricity generation companies? Plastic manufacturers?
unique link to this extract

Quiz: pretend you’re a Facebook content moderator • PBS


In 2018, after much debate and controversy, Facebook finally published its censorship policies. All 27 pages of them. The move, wrote the LA Times, “adds a new degree of transparency to a process that users, the public and advocates have criticized as arbitrary and opaque.” But as explored in the Independent Lens film The Cleaners, to what extent do those policies translate into something sensible that a contractor hired to do the actual censoring can understand and apply? 

And if you were one of those “cleaners,” what decisions would you make based on FB policy and your background?

This quiz is based on real scenarios as well as Facebook’s own censorship guidelines. Your task: Imagine that you yourself are a censor for hire, a “cleaner” whose job it is to monitor a social media feed. Get into the mindset of these real-life cleaners and try to guess what they actually decided.


I got 5/11 (and mostly got those correct 5 when I went against my initial instincts). Content moderation, at least by Facebook’s standards, is hard. (Though I think its rules on Holocaust denial have changed since the quiz was created.)

Via Katie Harbath’s Anchor Change Substack. Harbath used to be a key player in moderation around election content at Facebook (when it was just that). She lists the elections coming up worldwide in 2023: at least 39, and another 13 to be announced. In other words, at least one per week, and long runups to some of them.
unique link to this extract

With a thud, not a bang • The Fence

Séamas O’Reilly:


Boris Johnson’s premiership was eventually brought down over his handling of the misdeeds of Chris Pincher, having been bloodied by the months-long reveals of Partygate. And yet, equally well-documented scandals relating to Covid policy, VIP lane contracts for Tory donors, extrajudicial overreach and even funnelling cash to his American mistress made little or no impact at all.

Some stories, it seems, have just enough currency to survive the ever-tightening gyre of the 24-hour news cycle, while others barely scratch the sides as they reach escape velocity and pass out the other end, unremarked upon.

We asked three highly esteemed investigative journalists what hope years-long investigations have in a landscape where a single tweet or TV appearance can dominate a weekend’s press, and asked: what happens when their hard-earned scoop lands not with a bang, but with a thud?


It is the most frustrating thing to work on a piece that you think is the absolute bees’ knees and discover that everyone else thinks it’s a bee’s fart. Yet as this shows, it can have nothing to do with the importance or quality of the piece at all.
unique link to this extract

What if failure is the plan? • Zephoria

danah boyd ponders how Twitter might end:


consider the collapse of local news journalism. The myth that this was caused by craigslist or Google drives me bonkers. Throughout the 80s and 90s, private equity firms and hedge funds gobbled up local news enterprises to extract their real estate. They didn’t give a shit about journalism; they just wanted prime real estate that they could develop. And news organizations had it in the form of buildings in the middle of town. So financiers squeezed the news orgs until there was no money to be squeezed and then they hung them out to dry. There was no configuration in which local news was going to survive, no magical upwards trajectory of revenue based on advertising alone. If it weren’t for craigslist and Google, the financiers would’ve squeezed these enterprises for a few more years, but the end state was always failure. Failure was the profit strategy for the financiers. (It still boggles my mind how many people believe that the loss of news journalism is because of internet advertising. I have to give financiers credit for their tremendous skill at shifting the blame.)

I highly doubt that Twitter is going to be a 100-year company. For better or worse, I think failure is the end state for Twitter. The question is not if but when, how, and who will be hurt in the process?

Right now, what worries me are the people getting hurt. I’m sickened to watch “journalists” aid and abet efforts to publicly shame former workers (especially junior employees) in a sadistic game of “accountability” that truly perverts the concept. I’m terrified for the activists and vulnerable people around the world whose content exists in Twitter’s databases, whose private tweets and DMs can be used against them if they land in the wrong hands (either by direct action or hacked activity). I’m disgusted to think that this data will almost certainly be auctioned off.

Frankly, there’s a part of me that keeps wondering if there’s a way to end this circus faster to prevent even greater harms. (Dear Delaware courts, any advice?)

No one who creates a product wants to envision failure as an inevitable end state. Then again, humans aren’t so good at remembering that death is an inevitable end state either.


unique link to this extract

Tesla says it is adding radar in its cars next month amid self-driving suite concerns • Electrek

Fred Lambert:


Tesla has told the FCC that it plans to market a new radar starting next month. The move raises even more concerns about potentially needed updates to its hardware suite to achieve the promised self-driving capability.

Since 2016, Tesla has claimed that all its vehicles produced going forward have “all the needed hardware” to become self-driving with future software updates. It turned out not to be true.

Tesla already had to upgrade its onboard computer and cameras in earlier vehicles, and it has yet to achieve self-driving capability. Its Full Self-Driving (FSD) software is still in beta and doesn’t enable fully autonomous driving.

The automaker not only had to upgrade its hardware in some cases, but it even removed some hardware. First, it was the front-facing radar and more recently the ultrasonic sensors.

It’s all part of its “Tesla Vision” approach where the automaker believes that the best way to achieve self-driving capability is through cameras being the only sensors. The logic is that the roads are designed to be operated by humans who operate cars through vision (eyes) and biological neural nets (brain).


Tesla removed radar from its vehicles in 2021, and the ultrasonic sensors earlier this year. Now it looks like at least the radar is coming back. Which creates the possibility that there will be a group of Teslas from 2021/22 which won’t be able to do the self-driving function, if it ever arrives. Then again, that might be a while. The cars might be obsolete by then.
unique link to this extract

Inside the frantic texts exchanged by crypto executives as FTX collapsed • The New York Times

David Yaffe-Bellany and Emily Flitter:


The day before the embattled cryptocurrency exchange FTX filed for bankruptcy, Changpeng Zhao, the chief executive of the rival exchange Binance, sent an alarmed text to Sam Bankman-Fried, FTX’s founder.

Mr. Zhao was concerned that Mr. Bankman-Fried was orchestrating crypto trades that could send the industry into a meltdown. “Stop now, don’t cause more damage,” Mr. Zhao wrote in a group chat with Mr. Bankman-Fried and other crypto executives on Nov. 10. “The more damage you do now, the more jail time.”

FTX and its sister hedge fund, Alameda Research, had just collapsed after a run on deposits exposed an $8bn hole in the exchange’s accounts. The implosion unleashed a crypto crisis, as firms with ties to FTX teetered on the brink of bankruptcy, calling the future of the entire industry into question.

The series of about a dozen group texts between Mr. Zhao and Mr. Bankman-Fried on Nov. 10, which were obtained by The New York Times, show that key crypto leaders feared that the situation could get even worse. And their frantic communications offer a rare glimpse into the unusual way business is conducted behind the scenes in the industry, with at least three top officials from rival companies exchanging messages in a group chat on the encrypted messaging app Signal.


Such a group (and chat) would be wildly illegal in the regulated world of fiat exchanges. Zhao had earlier that week pulled out of a provisional agreement to buy FTX to get it out of, well, not having any money or pretend money. FTX began shorting Tether, the stablecoin nominally tied to the dollar, which did decline – briefly.

Among the people in the “Exchange collaboration” Signal group was the CTO of Tether. Why? Tether doesn’t operate an exchange, and (as @bitfinexed points out) people selling tethers for less than a dollar should mean free money for Tether, since it claims to have 100% fiat backing for every tether issued. (If you “short sell” your £5 for £4, someone in theory has profited by £1.) So maybe tether isn’t actually backed by fiat?
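The arbitrage behind that parenthetical can be spelled out. A toy sketch with entirely made-up numbers (this is an illustration of the logic, not a model of Tether’s actual mechanics):

```python
# If a stablecoin is genuinely backed 1:1 by dollars, a depeg below $1
# is free money for whoever can redeem at par.
par = 1.00           # redemption value per token, if fully backed
market_price = 0.96  # hypothetical depegged market price

tokens = 1_000_000
cost = tokens * market_price      # buy depegged tokens on the open market
proceeds = tokens * par           # redeem them against the claimed reserves

profit = proceeds - cost
print(f"risk-free profit: ${profit:,.0f}")  # about $40,000 on $960k outlay
```

The same logic applies from the issuer’s side: buying back its own tokens below par leaves surplus reserves. Which is why an issuer with genuine 100% backing should, in theory, welcome a depeg rather than scramble to stop it.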
unique link to this extract

‘Made my blood run cold’: unmasking a TikTok creator who doesn’t really exist • Vice

Katherine Denkinson:


Relatively unknown until November 2020, [Carrie Jade] Williams’ status in the literary community grew after she won the Financial Times’ Bodley Head/FT Essay Prize, which is open to writers under the age of 35. The winning entry is published in the FT Weekend, the weekend edition of the British newspaper, although the competition does not appear to have been run for the last two years. Williams’ entry was a moving essay about her diagnosis with Huntington’s Disease, a debilitating, degenerative genetic condition that affects the brain. Written using a speech-to-text computer programme, the essay won her a £1,000 prize. 

The piece was also praised by influential people. Hilary Knight, director of digital strategy at the Tate, a leading group of art galleries in the UK, described it as “an incredibly moving read and a reminder we shouldn’t need about designing for inclusion”. 

“When I received my diagnosis I wrote a bucket list and decided I wanted to write a novel to leave behind, and that’s really how my writing started,” Williams told the Financial Times. “Getting a diagnosis that means you’ll stop being able to communicate is terrifying, but writing gave me back my voice.”

Williams hasn’t published a novel, but she has become a high-profile advocate for people living with disabilities, and a well-known figure on the Irish literary scene. She has a profile on the publishing house Penguin’s website, and has appeared at festivals in County Kerry, on the Guilty Feminist podcast, and at writers’ workshops in St John’s Theatre, Listowel and online. 


But of course Denkinson started looking into it, and things fell apart. It’s an astonishing story, and a terrific piece of finding-out-facts journalism. Though people like this function as a sort of urban legend: a warning of what happens if we trust people are who they say they are.
unique link to this extract

US scientists boost clean power hopes with fusion energy breakthrough • Financial Times

Tom Wilson:


Physicists have since the 1950s sought to harness the fusion reaction that powers the sun, but no group had been able to produce more energy from the reaction than it consumes — a milestone known as net energy gain or target gain, which would help prove the process could provide a reliable, abundant alternative to fossil fuels and conventional nuclear energy.

The federal Lawrence Livermore National Laboratory in California, which uses a process called inertial confinement fusion that involves bombarding a tiny pellet of hydrogen plasma with the world’s biggest laser, had achieved net energy gain in a fusion experiment in the past two weeks, the people said.

Although many scientists believe fusion power stations are still decades away, the technology’s potential is hard to ignore. Fusion reactions emit no carbon, produce no long-lived radioactive waste, and a small cup of the hydrogen fuel could theoretically power a house for hundreds of years.


Wow! Net energy gain! Does this mean all my scepticism should be shelved? I was prepared to eat my words. Then I kept on reading:


The fusion reaction at the US government facility produced about 2.5 megajoules of energy, which was about 120% of the 2.1 megajoules of energy in the lasers, the people with knowledge of the results said, adding that the data was still being analysed.


Of the lasers? Yes, but the energy delivered by the lasers is only a small part of the energy needed to power the whole system. Some distance away from “in total, more energy out than in”. (Thanks Diggory for the link.)
unique link to this extract

• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.

Errata, corrigenda and ai no corrida: none notified
