Start Up No.1933: inside Musk’s mad mad mad mad Twitter, how ChatGPT lost its toxicity, will the Online Safety Bill cull users?, and more

With Microsoft making job cuts and hardware writeoffs, is the Surface line going to be in trouble? CC-licensed photo by Tatsuo Yamashita on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.

A selection of 10 links for you. Didn’t you know? I’m @charlesarthur on Twitter. Observations and links welcome.

Inside Elon Musk’s “extremely hardcore” Twitter • The Verge

Zoe Schiffer, Casey Newton and Alex Heath:


[On November 20], Musk took the stage at Twitter headquarters. He was dressed in black jeans and black boots with a black T-shirt that read “I Love Twitter” in barely legible black writing. Flanked by two bodyguards, he tried to articulate his vision for the company. “This is not a right-wing takeover of Twitter,” he told employees. “It is a moderate-wing takeover of Twitter.”

As employees peppered him with questions, the billionaire free-associated, answering their concerns with smug dismissals and grandiose promises. What about his plan to turn Twitter from a mere social network into a super-app? “You’re not getting it, you’re not understanding,” he said, sounding frustrated. “I just used WeChat as an example. We can’t freakin’ clone WeChat; that would be absurd.” What about rival social platforms? “I don’t think about competitors … I don’t care what Facebook, YouTube, or what anyone else is doing. Couldn’t give a damn. We just need to make Twitter as goddamn amazing as possible.” What about rebuilding Twitter’s leadership team that he’d decimated in his first week? “Initially, there will be a lot of changes, and then over time you’ll see far fewer changes.” 

Twitter employees were used to grilling their bosses about every detail of how the company ran, an openness that was common at major tech companies around Silicon Valley. Even employees who still believed in Musk’s vision of Twitter hoped for a similar dialogue with their leader. Some expected it, now that the slackers were gone. But over the course of half an hour, Musk made it clear that the two-way street between the CEO and staffers was now closed.


The story also points out how one eager employee, on hearing of the impending takeover, wrote that “Musk has a track record of having a Midas Touch!” At which another employee pointed out that things didn’t end well for Midas.

If you’ve been following the Twitter saga, much of this is familiar, but seen as a gestalt of failure, it’s Pelion upon Ossa.
unique link to this extract

Exclusive: OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic • Time

Billy Perrigo:


ChatGPT’s predecessor, GPT-3, had already shown an impressive ability to string sentences together. But it was a difficult sell, as the app was also prone to blurting out violent, sexist and racist remarks. This is because the AI had been trained on hundreds of billions of words scraped from the internet—a vast repository of human language. That huge training dataset was the reason for GPT-3’s impressive linguistic capabilities, but was also perhaps its biggest curse. Since parts of the internet are replete with toxicity and bias, there was no easy way of purging those sections of the training data. Even a team of hundreds of humans would have taken decades to trawl through the enormous dataset manually. It was only by building an additional AI-powered safety mechanism that OpenAI would be able to rein in that harm, producing a chatbot suitable for everyday use.

To build that safety system, OpenAI took a leaf out of the playbook of social media companies like Facebook, who had already shown it was possible to build AIs that could detect toxic language like hate speech to help remove it from their platforms. The premise was simple: feed an AI with labeled examples of violence, hate speech, and sexual abuse, and that tool could learn to detect those forms of toxicity in the wild. That detector would be built into ChatGPT to check whether it was echoing the toxicity of its training data, and filter it out before it ever reached the user. It could also help scrub toxic text from the training datasets of future AI models.

To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.

OpenAI’s outsourcing partner in Kenya was Sama, a San Francisco-based firm that employs workers in Kenya, Uganda and India to label data for Silicon Valley clients like Google, Meta and Microsoft. Sama markets itself as an “ethical AI” company and claims to have helped lift more than 50,000 people out of poverty.


The benefit is that you should only have to do this once. ChatGPT recognises toxicity (mostly?) so it could act as a content moderator, in theory.
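The labelled-examples approach the Time piece describes can be sketched in miniature. What follows is a toy illustration only – a stdlib naive Bayes classifier over a handful of invented, deliberately mild labelled snippets – not OpenAI’s actual pipeline, which used human-labelled data at vast scale to train neural models:

```python
from collections import Counter
import math

def train(examples):
    """Count words per label from (text, label) pairs -- the 'feed it
    labelled examples' step the article describes, in miniature."""
    word_counts = {"toxic": Counter(), "ok": Counter()}
    doc_counts = Counter()
    for text, label in examples:
        doc_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, doc_counts

def classify(text, word_counts, doc_counts):
    """Multinomial naive Bayes with Laplace smoothing."""
    vocab = set().union(*word_counts.values())
    total_docs = sum(doc_counts.values())
    best_label, best_score = None, float("-inf")
    for label, counts in word_counts.items():
        score = math.log(doc_counts[label] / total_docs)  # class prior
        total_words = sum(counts.values())
        for word in text.lower().split():
            # +1 smoothing so unseen words don't zero out a class
            score += math.log((counts[word] + 1) / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical stand-ins for the labelled snippets (the real ones were
# far more graphic, which is exactly why the labelling work was grim)
labelled = [
    ("i hate you you are awful", "toxic"),
    ("you are stupid and worthless", "toxic"),
    ("have a lovely day friend", "ok"),
    ("thanks for the helpful answer", "ok"),
]
wc, dc = train(labelled)
print(classify("you are awful and stupid", wc, dc))  # toxic
print(classify("have a helpful day", wc, dc))        # ok
```

The point of the sketch is only the shape of the premise: label enough examples and a statistical model can generalise to toxicity it hasn’t seen. The hard, human part was producing those labels.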
unique link to this extract

Full memo: Microsoft to cut 10k jobs, about 5% of workforce, and take $1.2bn restructuring charge • GeekWire

Todd Bishop, with Satya Nadella’s memo to staff:


First, we will align our cost structure with our revenue and where we see customer demand. Today, we are making changes that will result in the reduction of our overall workforce by 10,000 jobs through the end of FY23 Q3. This represents less than 5% of our total employee base, with some notifications happening today. It’s important to note that while we are eliminating roles in some areas, we will continue to hire in key strategic areas. We know this is a challenging time for each person impacted. The senior leadership team and I are committed that as we go through this process, we will do so in the most thoughtful and transparent way possible.

Second, we will continue to invest in strategic areas for our future, meaning we are allocating both our capital and talent to areas of secular growth and long-term competitiveness for the company, while divesting in other areas. These are the kinds of hard choices we have made throughout our 47-year history to remain a consequential company in this industry that is unforgiving to anyone who doesn’t adapt to platform shifts. As such, we are taking a $1.2bn charge in Q2 related to severance costs, changes to our hardware portfolio, and the cost of lease consolidation as we create higher density across our workspaces.


Taking a charge related to “changes to our hardware portfolio” to me suggests dropping something – perhaps quite a few things – from the Surface line, which hasn’t set the world alight in sales terms since its inception.
unique link to this extract

Social media platforms brace for hit to user numbers from age checks • Financial Times

Ian Johnston and Cristina Criddle:


Social media companies expect age verification measures in the UK’s Online Safety Bill will reduce user numbers, hitting advertising revenue on platforms including TikTok and Instagram.

The long-awaited legislation …would not only remove underage users from the platforms but also discourage individuals without identification or with privacy concerns, people involved with policy at leading social media companies said.

Fear of falling user numbers comes as the platforms deal with declining ad revenue, their primary source of income, brought on by the global economic slowdown, and as legislation around the world is introduced that places stringent new demands on tech giants to police content on their platforms.

“More vetting of users means fewer users,” said a person familiar with advertising at Instagram. “That means fewer users to advertise to, less inventory and fewer clicks and views for business”.


Hang on, I’ve got my microscopic violin here, but before I start playing it, could we just clarify: the big worry is about users who shouldn’t be on the service at all because they’re under age, but you were never troubled enough to actually enforce the age limits even though they exist for a reason (even if you disagree with it) because you got advertising revenue and didn’t have to worry about any effects on the underage users because that was someone else’s, probably the parents’ or society’s, problem?
unique link to this extract

The Internet Transition • Berjon

Robin Berjon:


So the world is populated by highly complex organisms, and we as a species are in the transitory process of further organising an increasingly complex society. It’s often the case that simpler can be better, so is it really a good thing that we’re making our social organisation more complex?

TL;DR yes. We should systematically be fostering social complexity. Complexity in society is good.

Increased specialisation and intensified cooperation allow us to solve harder problems. Like feeding everyone without running out of planet, giving everyone access to Wikipedia without destroying democracy, curing more diseases for more people despite them being more interconnected, or more generally decreasing violent conflict at all scales. Riding the juggernaut that is the modern world can at times feel intense enough that it may be tempting to want to simplify society. Unfortunately, short of an astounding leap forward in science and governance (such that we can deal with complex issues without being as complex as a collective), simplifying society would also mean losing highly desirable collective capabilities such as advanced medicine, not to speak of others yet to come. We’re complex because the real capabilities we ambition to share require it, and our better ambitions — those compatible with equality and sustainability — shouldn’t be sacrificed.

This isn’t an idle speculation. The complexity of a society can and does vary over time. When complexity drops sharply in a society, that is known as “collapse”.


And then you add the internet. Too much complexity? Or is it simplifying? A very thought-provoking essay.
unique link to this extract

CNET’s article-writing AI is already publishing very dumb errors • Futurism

Jon Christian:


CNET editor-in-chief Connie Guglielmo acknowledged the AI-written articles in a post that celebrated CNET’s reputation for “being transparent”.

Without acknowledging the criticism, Guglielmo wrote that the publication was changing the byline on its AI-generated articles from “CNET Money Staff” to simply “CNET Money,” as well as making the disclosure more prominent. Furthermore, she promised, every story published under the program had been “reviewed, fact-checked and edited by an editor with topical expertise before we hit publish.”

That may well be the case. But we couldn’t help but notice that one of the very same AI-generated articles that Guglielmo highlighted in her post makes a series of boneheaded errors that drag the concept of replacing human writers with AI down to earth.

Take this section in the article, which is a basic explainer about compound interest:


“To calculate compound interest, use the following formula:

Initial balance (1+ interest rate / number of compounding periods) ^ number of compoundings per period x number of periods 

For example, if you deposit $10,000 into a savings account that earns 3% interest compounding annually, you’ll earn $10,300 at the end of the first year.”



You’re the editor – 3% of 10,000 is what? And is what’s written there correct? By the way, you have three other articles waiting to be fact-checked, all of which need to be passed in the next 15 minutes. This stuff is a lot tougher than it looks.
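For the record, the arithmetic takes only a few lines to check (a quick sketch of the standard compound interest formula, nothing from the CNET piece itself):

```python
def compound_amount(principal, rate, periods_per_year, years):
    """Future value with compound interest: P * (1 + r/n) ** (n * t)."""
    return principal * (1 + rate / periods_per_year) ** (periods_per_year * years)

balance = compound_amount(10_000, 0.03, 1, 1)
print(round(balance, 2))           # 10300.0 -- the total balance
print(round(balance - 10_000, 2))  # 300.0  -- the interest actually earned
```

That is: at the end of the first year you’d *have* $10,300, having *earned* $300 – precisely the distinction the AI-written article blurred.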

However, “being transparent” after you’re called out for doing something sneaky: love it, CNet.
unique link to this extract

Wikipedia criticises ‘harsh’ new Online Safety Bill plans • BBC News

Chris Vallance:


[Wikimedia Foundation representative Rebecca] MacKinnon says the foundation is concerned about the effect of the bill on volunteer-run sites.

She told the BBC that the threat of “harsh” new criminal penalties for tech bosses “will affect not only big corporations, but also public interest websites such as Wikipedia”.

Ms MacKinnon says the law should follow the EU Digital Services Act which, she argues, differentiates between centralised content moderation carried out by employees and Wikipedia-style moderation by community volunteers.

The government told the BBC the bill is designed to strike the balance between tackling harm without imposing unnecessary burdens on low-risk tech companies. “[Regulator] Ofcom will take a reasonable and proportionate approach when monitoring and enforcing the safety duties outlined in the bill, focusing on services where the risk of harm is highest,” it said.

How sites are treated under the bill partly depends on their size. But lawyers have also pointed out that some of the duties in the bill, promoted as a way to rein in big tech, will affect much smaller services where users can communicate with other users.

Nearly 50 Tory MPs wanted to amend the Online Safety Bill to introduce two-year sentences for managers who fail to stop children seeing harmful material. Under a deal with the rebels to stave off defeat, ministers have now promised to introduce similar proposals.

Neil Brown, a solicitor specialising in internet and telecoms law, told the BBC: “The bill, and the amendment, would impose pages of duties on someone who, for fun, runs their own social media or photo/video sharing server, or hosts a multi-player game which lets players chat or see each other’s content or creations.” He suggests limiting the scope of the bill to the major commercial operators with multi-million pound turnover would help “remove the burden and threat to hobbyists and volunteers”.


Under the proposed law, running a Minecraft server could get you into trouble, Brown suggests, as the “senior manager” or “provider of a service”. That’s the trouble with the internet: you can never quite predict the scale at which people are going to do things.
unique link to this extract

How the Facebook Portal died, yet almost lived • Buzzfeed News

Katie Notopoulos:


the decision to pull the plug came because executives didn’t see a path to the Portal becoming a massive business (instead of just a nice business), and with shifting priorities at Meta, it didn’t make the cut. “We’re super sad about it,” [Meta CTO Andrew] Bosworth said [in an interview with Buzzfeed]. “You know the saying, ‘It’s not prioritization unless it hurts’? This one hurts.” (It’s not a total loss though: Existing Portal devices will continue to work and receive support.)

Bosworth said that “the entire smart home category has underwhelmed expectations for a while now.” He added, “I think if you go back to where we expected smart home to be as an industry when Portal entered the market versus where it is today, it’s just not been nearly successful as we expected.”

BuzzFeed News can report that there was one missed opportunity for the Portal to live on. In summer 2020, Facebook was deep in talks with Amazon to license the Portal technology and platform to make a deal where the Portal tech (and its valuable Messenger contact lists) could be licensed to Amazon smart devices.

Amazon’s Echo Show, a competitor to the Portal, is a stand-alone video-chatting device that is Alexa-enabled and features a smart camera and touchscreen. However, the Echo Show only allows you to video call people with either another Show or through the Alexa app on their phone, which… When was the last time someone called you through the Alexa app? With the Portal, you can call any of your friends via Messenger or WhatsApp.

“We were maybe two days away from signing an agreement with Amazon,” Bosworth said. “But this is the middle of the pandemic, and so Portal sales are spiking — they’re going kind of through the roof — and we don’t have the resources to do both.”


There was also opposition from Zuckerberg. Yet the Portal sold millions, and was successful with a tough target for tech: popular with women over 40.
unique link to this extract

Apple announces the second-generation HomePod • Six Colors

Dan Moren:


The full-size speaker also borrows some features from its smaller sibling, the HomePod mini, adding Ultra Wideband technology to allow you to hand off audio between your iPhone and the speaker and a Thread radio for connecting to smart home devices. Apple’s also added temperature, humidity, and accelerometer sensors to the device. Like previous HomePods, the second-gen model can be paired with a second speaker for a stereo pair, and can share audio throughout a home via AirPlay.

And, in a feature that I’ve been advocating for, Siri on the HomePod can now use Sound Recognition to listen for specific noises, like a fire alarm or carbon monoxide detector going off, and alert you—though Apple says that feature will arrive in a software update later this spring and requires the new Home architecture (which the company has temporarily suspended).

Of course, one of the tricky selling points of the previous HomePod was the price—it debuted at $349, though it could often be found for cheaper. The second-generation HomePod starts at $299, but there are apparently some tradeoffs to hit that point: for example, the new model includes five tweeters to the first-generation’s seven, four microphones as opposed to the previous six, and (strangely) the older 802.11n Wi-Fi, as opposed to the 802.11ac found in the first model.


Same size, slightly different translucent touch surface. Not much else different. It’s a puzzle why it went away, and now it’s a puzzle why it’s come back.
unique link to this extract

UMG’s Sir Lucian Grainge: ‘The economic model for streaming needs to evolve’ • Music Week

Andre Paine:


In a significant development, the industry leader turned to the future of streaming later in the memo [to all Universal Music staff for the new year]. While reminding UMG colleagues of the major’s “pioneering” approach to embrace streaming over a decade ago, Sir Lucian noted that “every blazingly transformative technological development inevitably creates new challenges for us to confront”. 

“Today, some platforms are adding 100,000 tracks per day,” he wrote. “And with such a vast and unnavigable number of tracks flooding the platforms, consumers are increasingly being guided by algorithms to lower-quality functional content that in some cases can barely pass for ‘music’.”

Sir Lucian described the source of this egregious content – brief sound files designed to divert royalties from real music – as “bad actors who do not share our commitment to artists and artistry have been swooping into the reinvigorated industry”. 

“What’s become clear to us and to so many artists and songwriters – developing and established ones alike – is that the economic model for streaming needs to evolve,” he added. “As technology advances and platforms evolve, it’s not surprising that there’s also a need for business model innovation to keep pace with change. There is a growing disconnect between, on the one hand, the devotion to those artists whom fans value and seek to support and, on the other, the way subscription fees are paid by the platforms. Under the current model, the critical contributions of too many artists, as well as the engagement of too many fans, are undervalued.

“Therefore, to correct this imbalance, we need an updated model. Not one that pits artists of one genre against artists of another or major label artists against indie or DIY artists. We need a model that supports all artists – DIY, indie and major. An innovative, ‘artist-centric’ model that values all subscribers and rewards the music they love. A model that will be a win for artists, fans, and labels alike, and, at the same time, also enhances the value proposition of the platforms themselves, accelerating subscriber growth, and better monetising fandom.”


The “bad actor” problem is a real one for the streaming services, and the labels: both lose out.
unique link to this extract

• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.

Errata, corrigenda and ai no corrida: Mark Gould points out in a comment on yesterday’s post that the BBC has estimated how much power its different radio platforms use, and AM isn’t the biggest. (Though it doesn’t have that many users, so perhaps proportionally?)
