Start Up No.1997: the fresh threat of AI, ChatGPT reads minds?, life in digital journalism, NYPD boosts AirTags, and more


You can now tour every Star Trek bridge via a web portal – you won’t even need to wear the clothes to fit in. But shouldn’t it be VR instead? CC-licensed photo by WarvanWarvan on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


There’s another post coming this week at the Social Warming Substack on Friday at about 0845 UK time. Free signup.


A selection of 9 links for you. Use them wisely. I’m @charlesarthur on Twitter. On Mastodon: https://newsie.social/@charlesarthur. Observations and links welcome.


Yuval Noah Harari argues that AI has hacked the operating system of human civilisation • The Economist

Yuval Noah Harari:

»

What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures? When people think about ChatGPT and other new AI tools, they are often drawn to examples like school children using AI to write their essays. What will happen to the school system when kids do that? But this kind of question misses the big picture. Forget about school essays. Think of the next American presidential race in 2024, and try to imagine the impact of AI tools that can be made to mass-produce political content, fake-news stories and scriptures for new cults.

In recent years the QAnon cult has coalesced around anonymous online messages, known as “Q drops”. Followers collected, revered and interpreted these Q drops as a sacred text. While to the best of our knowledge all previous Q drops were composed by humans, and bots merely helped disseminate them, in future we might see the first cults in history whose revered texts were written by a non-human intelligence. Religions throughout history have claimed a non-human source for their holy books. Soon that might be a reality.

On a more prosaic level, we might soon find ourselves conducting lengthy online discussions about abortion, climate change or the Russian invasion of Ukraine with entities that we think are humans—but are actually AI. The catch is that it is utterly pointless for us to spend time trying to change the declared opinions of an AI bot, while the AI could hone its messages so precisely that it stands a good chance of influencing us.

«

There’s also an interview – released in a rush – with Geoffrey Hinton, ex-Google, at MIT Tech Review:

»

“These things are totally different from us,” he says. “Sometimes I think it’s as if aliens had landed and people haven’t realized because they speak very good English.”

«

(YNH piece via John Naughton.)
unique link to this extract


Scientists use GPT AI to passively read people’s thoughts in breakthrough • Vice

Becky Ferreira:

»

The breakthrough marks the first time that continuous language has been non-invasively reconstructed from human brain activities, which are read through a functional magnetic resonance imaging (fMRI) machine. 

The decoder was able to interpret the gist of stories that human subjects watched or listened to—or even simply imagined—using fMRI brain patterns, an achievement that essentially allows it to read people’s minds with unprecedented efficacy. While this technology is still in its early stages, scientists hope it might one day help people with neurological conditions that affect speech to clearly communicate with the outside world.

However, the team that made the decoder also warned that brain-reading platforms could eventually have nefarious applications, including as a means of surveillance for governments and employers. Though the researchers emphasized that their decoder requires the cooperation of human subjects to work, they argued that “brain–computer interfaces should respect mental privacy,” according to a study published on Monday in Nature Neuroscience.

“Currently, language-decoding is done using implanted devices that require neurosurgery, and our study is the first to decode continuous language, meaning more than full words or sentences, from non-invasive brain recordings, which we collect using functional MRI,” said Jerry Tang, a graduate student in computer science at the University of Texas at Austin who led the study, in a press briefing held last Thursday.

“The goal of language-decoding is to take recordings of a user’s brain activity and predict the words that the user was hearing or saying or imagining,” he noted. “Eventually, we hope that this technology can help people who have lost the ability to speak due to injuries like strokes, or diseases like ALS.”

«

It’s very dramatic, though fMRI is one of those Tinkerbell technologies – the more you believe in it, the better it works. But if you don’t…
unique link to this extract


May 2 1997: Labour routs Tories in historic election • BBC On This Day

May 1997:

»

The Labour Party has won the general election in a landslide victory, leaving the Conservatives in tatters after 18 years in power, with Scotland and Wales left devoid of Tory representation.

Labour now has a formidable 419 seats (including the speaker) – the largest the party has ever taken. The Conservatives took just 165, their worst performance since 1906.

Tony Blair – at 43 the youngest British prime minister this century – promised he would deliver “unity and purpose for the future”.

John Major has resigned as Conservative leader, saying “When the curtain falls it’s time to get off the stage and that is what I propose to do.”

«

Major went to spend the May afternoon watching a cricket match, and stayed mostly quiet for more than a decade until the Brexit vote hove into view.

The next election is due some time before the end of 2024. Let’s see how that goes.
unique link to this extract


Surviving (just about) the digital media carnage • The Fence

An anonymous insider (who I think worked at Buzzfeed) on the highs and lows:

»

Even when I arrived, there was a sense that the glory days were over. Staff complained loudly that there was no free swag bag at the Christmas party. (A few years back, everyone had been gifted wraps of coke.) Our company had recently secured mega-bucks investment from corporate investors, and over the coming years the pressure to make good on that investment became increasingly strained.

Redundancies crashed over the editorial team in waves. First our news division was laid off. Then the parts of the site that were trafficking badly were excised, then a wider round of lay-offs that seemed to cherry-pick people at random. One time, they forgot to lay off a colleague for the simple reason that they forgot he existed. Someone eventually remembered him, and got in touch to let him know his services would no longer be required – after he’d enjoyed the sweet relief of thinking he’d escaped.

So many talented people were laid off, and so many mediocre employees survived. There was no way the lay-offs could be performance-related. Executives crashed through strategies. We were pivoting to video, pivoting away from video, pivoting to a digital-first strategy, pivoting to a multi-platform strategy, consolidating our brands under one brand, unconsolidating them again.

I came to understand the vagaries of my employer in the same way that a child learns to study the rhythms and temper of an abusive parent. Typically, there would be an eight to twelve-month period of calm, before the sudden, stuttering shock of a round of Friday afternoon lay-offs. Colleagues were mourned on Twitter. We would mutter about unionising.

…But even the best managers would have been powerless to face down the Facebook and Google duopoly. It was like a suicide attempt, where the person realises too late that they don’t actually want to die and scrambles for a foothold – their toe hooked on an overturned chair – before succumbing to a slow, asphyxiating death.

«

To be honest, it sounds like working on The Independent from 1995-2000, which was a constant round of layoffs and dwindling numbers. Except we were already unionised. Still, it had an upside: I met my future wife at one of the leaving dos.

unique link to this extract


NYPD urges citizens to buy AirTags to fight surge in car thefts • Ars Technica

Scharon Harding:

»

The New York Police Department (NYPD) and New York City’s self-proclaimed computer geek of a mayor are urging resident car owners to equip their vehicles with an Apple AirTag. During a press conference on Sunday, Mayor Eric Adams announced the distribution of 500 free AirTags to New Yorkers, saying the technology would aid in reducing the city’s surging car theft numbers.

Adams held the press conference at the 43rd precinct in the Bronx, where he said there had been 200 instances of grand larceny of autos. An NYPD official said that in New York City, 966 Hyundais and Kias have been stolen this year thus far, already surpassing 2022’s 819 total. The NYPD’s public crime statistics tracker says there have been 4,492 vehicle thefts this year, a 13.3% increase compared to the same period last year and the largest increase among NYC’s seven major crime categories.

Adams, as the city did when announcing litigation against Kia and Hyundai on April 7, largely blamed the rise in car thefts on Kia and Hyundai, which he said are “leading the way” in stolen car brands.

Hyundais and Kias were the subjects of the Kia Challenge TikTok trend that encouraged people to jack said vehicles with a mere USB-A cable. The topic has graduated way beyond a social media fad and into a serious concern. Adams, for example, pointed to stolen cars as a gateway to other crimes, like hit-and-runs. It can also be dangerous; four teenagers in upstate New York died during a joyride with a stolen Kia last year. And some insurance companies even stopped taking new insurance policies for some Hyundais and Kias. In February, Kia and Hyundai issued updates to make the cars harder to lift.

Adams was adamant grand larceny auto numbers were dragging the city’s overall crime numbers up and urged New Yorkers to “participate” in the fight against car theft by using an AirTag.

«

I thought the UK had cut down on car theft through immobilisers, but London in 2022 had more than 26,000 thefts – though it seems a large proportion of the targeted cars are high-end, keyless models stolen by complex methods (as we’ve discussed here recently). Those have trackers built in. Still get nicked.
unique link to this extract


Brazil pushes back on big tech firms’ campaign against ‘fake news law’ • Reuters

Anthony Boadle:

»

Brazil’s government and judiciary objected on Tuesday to big tech firms campaigning against an internet regulation bill aimed at cracking down on fake news, alleging undue interference in the debate in Congress.

Bill 2630, also known as the Fake News Law, puts the onus on the internet companies, search engines and social messaging services to find and report illegal material, instead of leaving it to the courts, charging hefty fines for failures to do so.

Tech firms have been campaigning against the bill, including Google which had added a link on its search engine in Brazil connecting to blogs against the bill and asking users to lobby their representatives.

Justice Minister Flavio Dino ordered Google to change the link on Tuesday, saying the company had two hours after notification or would face fines of one million reais ($198,000) per hour if it did not.

“What is this? An editorial? This is not a media or an advertising company,” the minister told a news conference, calling Google’s link disguised and misleading advertising for the company’s stance against the law.

The US company promptly pulled the link, though Google defended its right to communicate its concerns through “marketing campaigns” on its platforms and denied altering search results to favor material contrary to the bill.

“We support discussions on measures to combat the phenomenon of misinformation. All Brazilians have the right to be part of this conversation, and as such, we are committed to communicating our concerns about Bill 2630 publicly and transparently,” it said in a statement.

«

On its face, this law is like the XKCD cartoon “Someone is wrong on the internet”, except Google and other tech firms have to correct it all the time. What does that mean for YouTube – do all the flat earth videos vanish in Brazil? Or all of the videos?
unique link to this extract


‘Star Trek’ fans can now virtually tour every Starship Enterprise bridge • Smithsonian Magazine

Sarah Kuta:

»

For decades, many “Star Trek” fans have imagined what it would be like to work from the bridge of the starship Enterprise, the long-running franchise’s high-tech space-exploring vessel. Through various iterations and seasons of the series, created by Gene Roddenberry in the ’60s, the bridge has remained a constant, serving as the backdrop for many important moments in the show’s 800-plus episodes.

Now, die-hard Trekkies and casual watchers alike can virtually roam around the Enterprise’s bridge to their heart’s content, thanks to a sophisticated and highly detailed new web portal that brings the space to life.

The site features 360-degree, 3D models of the various versions of the Enterprise, as well as a timeline of the ship’s evolution throughout the franchise’s history. Fans of the show can also read detailed information about each version of the ship’s design, its significance to the “Star Trek” storyline and its production backstory.

«

This seems like the ideal thing for virtual reality. Though, OK, you might need some way of floating around rather than walking. So VR with no legs, so you’re not tempted to walk?
unique link to this extract


About us • Fakespot

»

Fakespot’s mission is to bring trust and transparency to the Internet by eliminating misinformation and fraud, starting with eCommerce. Fakespot protects consumers while saving them both time and money by using AI to detect fraudulent product reviews and third-party sellers in real-time. Our proprietary technology analyzes billions of consumer reviews to quickly identify suspicious activity and then recommend better alternatives to consumers. We got tired of getting ripped off online, so we made it our mission to never let it happen to anyone else.

«

Mozilla has just bought this company:

»

In Mozilla, we have found a partner that shares a similar mission as to what the future of the internet should look like, where the convergence of trust, privacy and security play an imperative part of our digital experiences.

In a time where it’s simpler than ever before to generate fake content, the browser is the first entry point to consuming that content. As such, browsers have the most potential for true innovations where actions, like shopping, become better than ever before.

«

Mozilla, clinging to life through Google’s generous sponsorship of its search box (renews this year!), and still looking for that new USP.
unique link to this extract


Intel: Just You Wait. Again. • Monday Note

Jean-Louis Gassée looks at how Intel has promised – or threatened – to catch up with the ARM-based world ever since it missed out on putting its chips in the first iPhone:

»

the company’s revenue for its new IFS foundry business decreased by 24% to an insignificant $118m, with a $140m operating loss gingerly explained as “increased spending to support strategic growth”. Other Intel businesses such as Networking (NEX) products and Mobileye — yet another Autonomous Driving Technology — add nothing promising to the company’s picture.

This doesn’t prevent Gelsinger from once again intoning the Just You Wait refrain. This time, the promise is to “regain transistor performance and power performance leadership by 2025”.

Is it credible?

We all agree that the US tech industry would be better served by Intel providing a better alternative to TSMC’s and Samsung’s advanced foundries. Indeed, We The Taxpayers are funding efforts to stimulate our country’s semiconductor sector at the tune of $52B. I won’t comment other than to reminisce about a difficult late 80s conversation with an industry CEO when, as an Apple exec, I naively opposed an attempt to combat the loss of semiconductor memory business to foreign competitors by subsidizing something tentatively called US Memories. But, in this really complicated 2023 world, what choices do we actually have?

For years I’ve watched Intel’s repeated mistakes, the misplaced self-regard, the ineffective leadership changes for this Silicon Valley icon, for the inventor of the first commercial microprocessor, only to be disappointed time and again as the company failed to shake the Wintel yoke — while Microsoft successfully diversified.

I fervently hope Pat Gelsinger succeeds.

«

Chances aren’t looking that good that he will, though. Maybe Intel will become an also-ran in the category it invented.
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Start Up No.1996: Ex-Google AI scientist sounds worried, China’s search censors, Germany probes Huawei, Dorsey disses Musk, and more


What if – just imagine – queries to doctors were answered by ChatGPT? It turns out people like that. CC-licensed photo by Camilo Rueda López on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


There’s another post coming this week at the Social Warming Substack on Friday at about 0845 UK time. Free signup.


A selection of 9 links for you. Still waiting. I’m @charlesarthur on Twitter. On Mastodon: https://newsie.social/@charlesarthur. Observations and links welcome.


‘The godfather of AI’ quits Google and warns of danger ahead • The New York Times

Cade Metz interviewed Dr Geoffrey Hinton, the British-born scientist who pioneered neural networks, and who in 2012 built a neural net that could identify common objects in photos:

»

Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of AI. A part of him, he said, now regrets his life’s work.

…As companies improve their AI systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of AI technology. “Take the difference and propagate it forwards. That’s scary.”

Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.

His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”

He is also worried that AI technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”

Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow AI systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

«

This interview raises lots of questions. He was worried while inside Google? He’s worried now he’s outside, but Google is saying everything’s peachy? They’re all rushing too quickly towards putting this stuff out, even while denying that’s the case? It’s not encouraging.
unique link to this extract


ChatGPT will see you now: doctors using AI to answer patient questions • WSJ

Nidhi Subbaraman:

»

In California and Wisconsin, OpenAI’s “GPT” generative artificial intelligence is reading patient messages and drafting responses from their doctors. The operation is part of a pilot program in which three health systems test if the AI will cut the time that medical staff spend replying to patients’ online inquiries.

UC San Diego Health and UW Health began testing the tool in April. Stanford Health Care aims to join the rollout early next week. Altogether, about two dozen healthcare staff are piloting this tool. 

Marlene Millen, a primary care physician at UC San Diego Health who is helping lead the AI test, has been testing GPT in her inbox for about a week. Early AI-generated responses needed heavy editing, she said, and her team has been working to improve the replies. They are also adding a kind of bedside manner: If a patient mentioned returning from a trip, the draft could include a line that asked if their travels went well. “It gives the human touch that we would,” Dr. Millen said.

There is preliminary data that suggests AI could add value. ChatGPT scored better than real doctors at responding to patient queries posted online, according to a study published Friday in the journal JAMA Internal Medicine, in which a panel of doctors did blind evaluations of posts.

As many industries test ChatGPT as a business tool, hospital administrators and doctors are hopeful that the AI-assist will ease burnout among their staff, a problem that skyrocketed during the pandemic. The crush of messages and health-records management is a contributor, among administrative tasks, according to the American Medical Association. 

«

I guess it’s sort of equal: the patients are using search engines (usually Dr Google), and now the doctors are entering the arms race (a little late). The advantage is that ChatGPT is polite and you can narrow (or train) its expertise.
unique link to this extract


Missing Links: A comparison of search censorship in China • The Citizen Lab

Jeffrey Knockel, Ken Kato, and Emile Dirks:

»

Key findings:
• Across eight China-accessible search platforms analyzed — Baidu, Baidu Zhidao, Bilibili, Microsoft Bing, Douyin, Jingdong, Sogou, and Weibo — we discovered over 60,000 unique censorship rules used to partially or totally censor search results returned on these platforms.

• We investigated different levels of censorship affecting each platform, which might either totally block all results or selectively allow some through, and we applied novel methods to unambiguously and exactly determine the rules triggering each of these types of censorship across all platforms.

• Among web search engines Microsoft Bing and Baidu, Bing’s chief competitor in China, we found that, although Baidu has more censorship rules than Bing, Bing’s political censorship rules were broader and affected more search results than Baidu. Bing on average also restricted displaying search results from a greater number of website domains.

• These findings call into question the ability of non-Chinese technology companies to better resist censorship demands than their Chinese counterparts and serve as a dismal forecast concerning the ability of other non-Chinese technology companies to introduce search products or other services in China without integrating at least as many restrictions on political and religious expression as their Chinese competitors.

«

One has to wonder if the people of China are aware of this, and there’s a sort of silent consensus that it’s OK, or if there’s some growing unhappiness with it.
unique link to this extract


How China’s Huawei spooked Germany into launching a probe • POLITICO

Louis Westendarp:

»

While much of the fear around Huawei in the West has focused on espionage and the risk of data leaking to Beijing, Germany’s latest investigation — and the intelligence that triggered it — point to another risk: the potential of sabotage through critical components that could collapse telecoms networks.

In March, the interior ministry announced it was checking all components with security implications from two Chinese telecoms suppliers, Huawei and ZTE. The review was launched to identify technology “that could enable a state to exercise political power,” a high-ranking official from the interior ministry said at the time.

German lawmakers were briefed on the probe by security authorities in a classified parliamentary hearing at the German Bundestag’s digital committee in early April. The session was held by the German interior ministry and the federal intelligence service, the two lawmakers said. Germany’s cybersecurity agency was also present, one lawmaker added.

In the briefing, security officials told lawmakers that one tech component in particular triggered the ministry’s investigation, namely an energy management component from Huawei, two lawmakers present at the briefing who spoke under the condition of anonymity because of the classified nature of the information told POLITICO.

The revelations suggest security officials feared such a component could be used to disrupt telecoms operations or — in a worst case scenario — be exploited to bring down a network.

…In its review, the German interior ministry is asking network operators to submit a list of all Chinese “security-relevant” components. The checks are expected to conclude in the coming months.

The review could lead to operators having to “rip and replace” components provided by Chinese suppliers in past years if they’re deemed too risky.

«

The paranoid style of politics is returning.

unique link to this extract


April 19 1995: Many feared dead in Oklahoma bombing • BBC On This Day

April 1995:

»

A huge car bomb has exploded at a government building in Oklahoma City killing at least 80 people including 17 children at a nursery.

At least 100 people have been injured and the number of dead is expected to rise.

In an emotional speech, President Bill Clinton vowed “swift, certain and severe” punishment for those behind the atrocity.

“The United States will not tolerate and I will not allow the people of this country to be intimidated by evil cowards,” he told a White House news conference this evening.

The blast happened just after 0900 local time when most workers were in their offices. It destroyed the facade of the ten-storey Alfred Murrah Building.

One survivor said he thought there was an earthquake: “I never heard anything that loud. It was a horrible noise…the roar of the whole building crumbling,”

There were scenes of chaos as paramedics treated the wounded on the pavement and rescue workers battled to dig out those still trapped in the rubble.

«

The work of Timothy McVeigh, Gulf War veteran, as retaliation – he said – for the government siege of Waco, Texas in which 82 of the Branch Davidian sect died. McVeigh’s actions led an entire right-wing conspiracist militia movement to surface over the next 30 years.

unique link to this extract


15 June 1996: Huge explosion rocks central Manchester • BBC On This Day

June 1996:

»

A massive bomb has devastated a busy shopping area in central Manchester.

Two hundred people were injured in the attack, mostly by flying glass, and seven are said to be in a serious condition. Police believe the IRA planted the device.

The bomb exploded at about 1120 BST on Corporation Street outside the Arndale shopping centre.

It is the seventh attack by the Irish Republican group since it broke its ceasefire in February and is the second largest on the British mainland.

A local television station received a telephone warning at 1000 BST – just as the city centre was filling up with Saturday shoppers.

The caller used a recognised IRA codeword.

One hour and 20 minutes after the warning, police were still clearing hundreds of people from a huge area of central Manchester.

Army bomb disposal experts were using a remote-controlled device to examine a suspect van parked outside Marks & Spencer when it blew up in an uncontrolled explosion.

«

Less than two years later, the IRA’s political wing, Sinn Féin, signed the Good Friday Agreement, which ended the terrorism campaign and brought peace to Northern Ireland. It’s arguably the only successful political negotiation to end a conflict in the past 25 years.
unique link to this extract


Jack Dorsey says Elon Musk shouldn’t have bought Twitter after all • The Washington Post

Faiz Siddiqui and Will Oremus:

»

[Former Twitter CEO Jack] Dorsey said he thought Musk, the Tesla CEO who serves in the same role at Twitter today, should have paid $1bn to back out of the deal to acquire the social media platform. The comments are a stark reversal from Dorsey’s strong endorsement of Musk’s takeover, when he wrote a year ago that if Twitter had to be a company at all, “Elon is the singular solution I trust.”

“I trust his mission to extend the light of consciousness,” Dorsey tweeted at the time.

In his remarks on Bluesky on Friday, Dorsey struck a far different tone. Dorsey said he didn’t think Musk “acted right” after pursuing the site and realizing his potential mistake, adding that he did not believe the company’s board should have forced the sale.

“It all went south,” Dorsey added.

Musk did not respond to a request for comment on Dorsey’s remarks. Musk appeared on Friday night’s “Real Time With Bill Maher” on HBO, and spoke on topics including his time in charge of the company, a recent meeting with U.S. Senate Majority Leader Charles E. Schumer (D-N.Y.), and his concerns about rhetoric coming from the political left.

“It was on a fast track to bankruptcy,” Musk said of Twitter. “So I had to take drastic action. There wasn’t any choice.”

«

Pity Musk didn’t come to the same realisation a lot earlier. Possibly he did and thought that actually, he could make it all happen.
unique link to this extract


Scientists in India protest move to drop Darwinian evolution from textbooks • Science

Athar Parvaiz:

»

Scientists in India are protesting a decision to remove discussion of Charles Darwin’s theory of evolution from textbooks used by millions of students in ninth and 10th grades. More than 4000 researchers and others have so far signed an open letter asking officials to restore the material.

The removal makes “a travesty of the notion of a well-rounded secondary education,” says evolutionary biologist Amitabh Joshi of the Jawaharlal Nehru Centre for Advanced Scientific Research. Other researchers fear it signals a growing embrace of pseudoscience by Indian officials.

The Breakthrough Science Society, a nonprofit group, launched the open letter on 20 April after learning that the National Council of Educational Research and Training (NCERT), an autonomous government organization that sets curricula and publishes textbooks for India’s 256 million primary and secondary students, had made the move as part of a “content rationalization” process. NCERT first removed discussion of Darwinian evolution from the textbooks at the height of the COVID-19 pandemic in order to streamline online classes, the society says. (Last year, NCERT issued a document that said it wanted to avoid content that was “irrelevant” in the “present context.”)

…One major concern, Joshi says, is that most Indian students will get no exposure to the concept of evolution if it is dropped from the ninth and 10th grade curriculum, because they do not go on to study biology in later grades. “Evolution is perhaps the most important part of biology that all educated citizens should be aware of,” Joshi says. “It speaks directly to who we are, as humans, and our position within the living world.”

«

No word on whether they’re replacing it with something else, or just hoping children absorb the idea by osmosis.
unique link to this extract


Rise of the Newsbots: AI-generated news websites are proliferating • NewsGuard

McKenzie Sadeghi and Lorenzo Arvanitis:

»

In April 2023, NewsGuard identified 49 websites spanning seven languages — Chinese, Czech, English, French, Portuguese, Tagalog, and Thai — that appear to be entirely or mostly generated by artificial intelligence language models designed to mimic human communication — here in the form of what appear to be typical news websites. 

The websites, which often fail to disclose ownership or control, produce a high volume of content related to a variety of topics, including politics, health, entertainment, finance, and technology. Some publish hundreds of articles a day. Some of the content advances false narratives. Nearly all of the content features bland language and repetitive phrases, hallmarks of artificial intelligence.

Many of the sites are saturated with advertisements, indicating that they were likely designed to generate revenue from programmatic ads — ads that are placed algorithmically across the web and that finance much of the world’s media — just as the internet’s first generation of content farms, operated by humans, were built to do.

In short, as numerous and more powerful AI tools have been unveiled and made available to the public in recent months, concerns that they could be used to conjure up entire news organizations  — once the subject of speculation by media scholars — have now become a reality.

In April 2023, NewsGuard sent emails to the 29 sites in the analysis that listed contact information, and two confirmed that they have used AI. Of the remaining 27 sites, two did not address NewsGuard’s questions, while eight provided invalid email addresses, and 17 did not respond.

«

Used to be you’d just feed a normal site through a thesaurus to produce a junk news site, but now we have machines to generate it from scratch. Hurrah?
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Start Up No.1995: Hollywood writers wary of AI, Wikipedia’s UK threat, that Google engineer on its AI, bad black holes, and more


The Buzzfeed offices in New York were a microcosm of the company – but the tech industry only wanted to chew it up and spit it out, a former staffer says. CC-licensed photo by Anthony Quintano on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.

A selection of 11 links for you. Got a spare ribbon? I’m @charlesarthur on Twitter. On Mastodon: https://newsie.social/@charlesarthur. Observations and links welcome.


Unions representing Hollywood writers and actors seek limits on AI and chatbots • The New York Times

Noam Scheiber and John Koblin:

»

When the union representing Hollywood writers laid out its list of objectives for contract negotiations with studios this spring, it included familiar language on compensation, which the writers say has either stagnated or dropped amid an explosion of new shows.

But far down, the document added a distinctly 2023 twist. Under a section titled “Professional Standards and Protection in the Employment of Writers,” the union wrote that it aimed to “regulate use of material produced using artificial intelligence or similar technologies.”

To the mix of computer programmers, marketing copywriters, travel advisers, lawyers and comic illustrators suddenly alarmed by the rising prowess of generative AI, one can now add screenwriters.

“It is not out of the realm of possibility that before 2026, which is the next time we will negotiate with these companies, they might just go, ‘you know what, we’re good,’” said Mike Schur, the creator of “The Good Place” and co-creator of “Parks and Recreation.”

“We don’t need you,” he imagines hearing from the other side. “We have a bunch of A.I.s that are creating a bunch of entertainment that people are kind of OK with.”

In their attempts to push back, the writers have what a lot of other white-collar workers don’t: a labour union.

Mr. Schur, who serves on the bargaining committee of the Writers Guild of America as it seeks to avert a strike before its contract expires on Monday, said the union hopes to “draw a line in the sand right now and say, ‘Writers are human beings.’”

«

The point about the WGA (as it’s known) being an unusual beast – a union for white-collar workers – is very salient. Being pessimistic – or optimistic – about the potential for AI to evolve and develop is a sensible defensive position. Of course it isn’t close now. And it probably won’t be close to being able to write a scene for years. But you wouldn’t want your forebears to have sold your future for a mess of pottage, would you?

unique link to this extract


We interviewed the engineer Google fired for saying its AI had come to life • Futurism

Maggie Harrison spoke to Blake Lemoine, who Told You It Was Bad:

»

BL: In mid-2021 — before ChatGPT was an app — during that safety effort I mentioned, Bard was already in the works. It wasn’t called Bard then, but they were working on it, and they were trying to figure out whether or not it was safe to release it. They were on the verge of releasing something in the fall of 2022. So it would have come out right around the same time as ChatGPT, or right before it. Then, in part because of some of the safety concerns I raised, they deleted it.

So I don’t think they’re being pushed around by OpenAI. I think that’s just a media narrative. I think Google is going about doing things in what they believe is a safe and responsible manner, and OpenAI just happened to release something.

MH: So, as you say, Google could have released something a bit sooner, but you very specifically said maybe we should slow down, and they — 

BL: They still have far more advanced technology that they haven’t made publicly available yet. Something that does more or less what Bard does could have been released over two years ago. They’ve had that technology for over two years. What they’ve spent the intervening two years doing is working on the safety of it — making sure that it doesn’t make things up too often, making sure that it doesn’t have racial or gender biases, or political biases, things like that. That’s what they spent those two years doing. But the basic existence of that technology is years old, at this point.

And in those two years, it wasn’t like they weren’t inventing other things. There are plenty of other systems that give Google’s AI more capabilities, more features, make it smarter. The most sophisticated system I ever got to play with was heavily multimodal — not just incorporating images, but incorporating sounds, giving it access to the Google Books API, giving it access to essentially every API backend that Google had, and allowing it to just gain an understanding of all of it.

That’s the one that I was like, “you know this thing, this thing’s awake.” And they haven’t let the public play with that one yet. But Bard is kind of a simplified version of that, so it still has a lot of the kind of liveliness of that model.

«

I still don’t believe Lemoine’s correct about the consciousness part, but the inside info about Google is fascinating.
unique link to this extract


Black holes resolve paradoxes by destroying quantum states • Science News

Lisa Grossman:

»

Don’t try to do a quantum experiment near a black hole — its mere presence ruins all quantum states in its vicinity, researchers say.

The finding comes from a thought experiment that pits the rules of quantum mechanics and black holes against each other, physicists reported April 17 at a meeting of the American Physical Society. Any quantum experiment done near a black hole could set up a paradox, the researchers find, in which the black hole reveals information about its interior — something physics says is forbidden. The way around the paradox, the team reports, is if the black hole simply destroys any quantum states that come close.

That destruction could have implications for future theories of quantum gravity. These sought-after theories aim to unite quantum mechanics, the set of rules governing subatomic particles, and general relativity, which describes how mass moves on cosmic scales.

“The idea is to use properties of the [theories] that you understand, which [are] quantum mechanics and gravity, to probe aspects of the fundamental theory,” which is quantum gravity, says theoretical physicist Gautam Satishchandran of Princeton University.

Here’s how Satishchandran, along with theoretical physicists Daine Danielson and Robert Wald, both of the University of Chicago, did just that.

«

This is a really quite puzzling – as in non-obvious, but logical – outcome, but it seems to ascribe a bizarre power to black holes that’s hard to square with something that’s just a big mass. The next, obvious, question is, well, how close exactly can you be to the black hole before it starts messing around with your quantum experiments? (And, presumably, your quantum computers on your gleaming new starship?)
unique link to this extract


How Buzzfeed News went bust • NY Mag

John Herrman:

»

Even back when I worked at Buzzfeed, it was clear enough that one of two things was likely to happen. Scenario one, which [Buzzfeed founder] Jonah [Peretti] embraced and preached, was a world where “social news” made sense, and running alongside the tech giants was the profitable and righteous way of the future. Publishers’ adjacency to social media wasn’t a temporary and inherently doomed state of affairs — it was bankable, and major investments in pre-profit digital media were rational. Scenario two was less appealing to contemplate. In this world, all ad-supported news — not just BuzzFeed — really was as fucked as it otherwise seemed to be when Google showed up, even before Facebook made its brazen bid to capture and monetize the online commons. From the vantage point of 2023, the history described in [Buzzfeed editor Ben Smith’s forthcoming book] Traffic sounds less like a story of entrepreneurial experimentation than an account of a recurring industry delusion. But it’s a delusion worth studying today as it threatens to manifest again. The tech industry will not ever save the media. It will sooner eat it alive.

There are many more books’ worth of material to write about the last ten years in online journalism, but I’d like to take a moment to emphasize the straightforwardness of the overall story. Just over a decade ago, a small group of social-media services became very large. Facebook, which had started as a place to keep up with friends, evolved into a tool for consuming media. This created a massive and sudden demand for fresh content, including, at the margins, news. Publishers old and new rushed to address the need, epitomized by BuzzFeed, which raised huge sums of VC money on the promise it could do so profitably, with maximally sharable and engaging content, some of which was sponsored. The newsroom portion of the proposition was straightforward, if incomplete. The platforms were hungry for stories, and what is a newsroom if not a machine for producing fresh and authoritative links, ready to share, comment on, or get mad about? And so this era, whatever it was, began.

What came next wasn’t much more complicated. Social media kept growing, ingesting and digesting the web around it, and sending some of its users back as readers in exchange. Its business model was an improvement, in nearly every way, on that of the news sites that were now supplying them with free content: bigger audiences, better targeting, and endless user-generated media. In the early days — let’s say 2011 to 2012 — there was a lot of windfall traffic for media companies. Random stories from years ago would suddenly have hundreds of thousands of readers, having been stripped of their original context and reshared by Facebook users. These new readers arrived in large numbers but didn’t really stick around. Their arrival was interpreted as an invitation. In hindsight, it was a warning.

«

So true. The tech industry doesn’t want to share. It wants to take. Everything.
unique link to this extract


UK readers may lose access to Wikipedia amid online safety bill requirements • The Guardian

Dan Milmo:

»

The Wikimedia Foundation, which hosts the Wikipedia site, has said it will not carry out age checks on users, which it fears will be required by the [online safety bill when it becomes an] act.

[Wikimedia UK chief executive Lucy] Crompton-Reid said some content on the site could trigger age verification measures under the terms of the bill.

“For example, educational text and images about sexuality could be misinterpreted as pornography,” she said.

She added: “The increased bureaucracy imposed by this bill will have the effect that only the really big players with significant compliance budgets will be able to operate in the UK market. This could have dire consequences on the information ecosystem here and is, in my view, quite the opposite of what the legislation originally set out to achieve.”

Rebecca MacKinnon, vice-president of global advocacy at the Wikimedia Foundation, has said carrying out age verification would “violate our commitment to collect minimal data about readers and contributors”.

The online safety bill requires commercial pornography sites to carry out age checks. It will also require sites such as Wikipedia to proactively prevent children from encountering pornographic material, with the bill in its current form referring to age verification as one of the possible tools for this. However, there is also a question mark over whether any of Wikipedia’s content would meet the definition of pornographic material in the bill.

«

This was presented on social media as OMG GUVMINT IS SHUTTING DOWN WIKIPEDIA. Except as the story here notes, there’s a question mark – rather a big one, I’d suggest – over how you’d describe Wikipedia as pornography. It’s self-evidently an education and information site. The government’s description is that “all sites that publish pornography” will have to put in checks. You’d need to add a lot of pornography to Wikipedia to really make it fit that description, which would be a perverse way to prove that you don’t like that requirement of the OSB.
unique link to this extract


Language experience predicts music processing in a half-million speakers of 54 languages • Current Biology

Jingxuan Liu et al:

»

we used web-based citizen science to assess music perception skill on a global scale in 34,034 native speakers of 19 tonal languages (e.g., Mandarin, Yoruba). We compared their performance to 459,066 native speakers of other languages, including 6 pitch-accented (e.g., Japanese) and 29 non-tonal languages (e.g., Hungarian).

Whether or not participants had taken music lessons, native speakers of all 19 tonal languages had an improved ability to discriminate musical melodies on average, relative to speakers of non-tonal languages. But this improvement came with a trade-off: tonal language speakers were also worse at processing the musical beat.

The results, which held across native speakers of many diverse languages and were robust to geographic and demographic variation, demonstrate that linguistic experience shapes music perception, with implications for relations between music, language, and culture in the human mind.

«

They got people to respond at The Music Lab. The implication seems to be that tonal language speakers are less good at keeping rhythm. Don’t ask them to judge that scene in Whiplash, then. Rushing! Dragging! WHICH IS IT!

unique link to this extract


Prompt engineering techniques with Azure OpenAI • Microsoft Learn

»

This guide will walk you through some advanced techniques in prompt design and prompt engineering. If you’re new to prompt engineering, we recommend starting with our introduction to prompt engineering guide.

While the principles of prompt engineering can be generalized across many different model types, certain models expect a specialized prompt structure. For Azure OpenAI GPT models, there are currently two distinct APIs where prompt engineering comes into play:
• Chat Completion API
• Completion API

Each API requires input data to be formatted differently, which in turn impacts overall prompt design. The Chat Completion API supports the ChatGPT (preview) and GPT-4 (preview) models. These models are designed to take input formatted in a specific chat-like transcript stored inside an array of dictionaries.

«

If you need an introduction – and let’s face it, this is probably going to be in the sixth form curriculum in a few years (or should be) – then this is as good a place as any to start.
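
For a concrete sense of what that “chat-like transcript stored inside an array of dictionaries” looks like, here’s a minimal sketch of a Chat Completion call using the 2023-era openai Python package (the 0.x interface) in its Azure mode. The resource endpoint, deployment name, API version and prompt text are placeholders of mine, not values taken from the Microsoft guide:

```python
import os

import openai

# Azure OpenAI is addressed by resource endpoint plus deployment name rather
# than a bare model name; endpoint, deployment and API version below are
# placeholders for whatever you created in your own Azure resource.
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE-NAME.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = os.environ["AZURE_OPENAI_KEY"]

# The Chat Completion API takes its prompt as an array of dictionaries:
# a system message to set behaviour, then alternating user/assistant turns.
response = openai.ChatCompletion.create(
    engine="my-gpt-35-turbo-deployment",  # the name you gave your deployment
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain prompt engineering in two sentences."},
    ],
    temperature=0.2,
    max_tokens=200,
)

print(response["choices"][0]["message"]["content"])
```

The Completion API, by contrast, takes a single prompt string rather than a message array, which is why the guide treats prompt design for the two separately.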
unique link to this extract


Web3 funding continues to crater — drops 82% year to year • Crunchbase

Chris Metinko:

»

In the first quarter of the year, funding to VC-backed Web3 startups hit its lowest point since the very early days of the space as deal flow continues to slow.

Venture funding plummeted 82% year to year, dropping from $9.1bn in Q1 of 2022 to only $1.7bn, per Crunchbase data.

The funding number is also a 30% decline from the final quarter of last year, and the lowest total since the fourth quarter of 2020 — which saw only $1.1bn — when many people had never heard of Web3. [Many people still haven’t – Overspill Ed.]

Deal flow also continued its pronounced drop, as only 333 deals were completed in the first quarter — down from 369 in the previous quarter and a sharp drop from the  more than 500 announced in Q1 2022. The total number of deals is the lowest since Q4 2020.

Perhaps nothing illustrates the differences between the first quarter of last year and the first quarter of the current one in terms of funding to Web3 startups more than the dramatic fall of big rounds.

In Q1 2022, VC-backed startups raised 29 rounds of more than $100m. That included massive raises of $400m or more by ConsenSys and Polygon Technology, as well as — of course —  FTX and its US affiliate FTX US.

«

FTX? Gosh I wonder what happened to them. Bet all the VCs took a lot of guidance from them.
unique link to this extract


Elizabeth Holmes delays start of prison sentence with last-minute appeal • CNN Business

Jennifer Korn and Catherine Thorbecke:

»

Elizabeth Holmes won’t be starting her 11-year prison sentence just yet.

The disgraced Theranos founder was previously expected to report to prison on Thursday, but she will remain free a little longer while the court considers a last-minute appeal, according to a filing Tuesday night.

Holmes was sentenced last November after she was convicted months earlier on multiple charges of defrauding investors while running the now-defunct blood testing startup. Earlier this month, her request to remain free while she appeals her conviction was denied by a judge, setting her up to report to prison on April 27.

On Tuesday, however, Holmes’ legal team filed an appeal of the judge’s decision. As a result, Holmes can remain free on bail while the latest appeal is considered by the court, as per the court’s rules.

«

Gnnnnnngggh. However:

»

Holmes’ ex-boyfriend and former COO Ramesh “Sunny” Balwani was indicted alongside Holmes and convicted of fraud in a separate trial. Like Holmes, Balwani’s legal team delayed the start of his prison sentence by roughly a month with an appeal.

Balwani reported to prison last week to serve out his nearly 13-year sentence.

«

Oh well, the wheels of justice grind slow, but they do at least grind.
unique link to this extract


AI journalism is getting harder to tell from the old-fashioned, human-generated kind • The Guardian

Ian Tucker:

»

A couple of weeks ago I tweeted a call-out for freelance journalists to pitch me feature ideas for the science and technology section of the Observer’s New Review. Unsurprisingly, given headlines, fears and interest in LLM (large language model) chatbots such as ChatGPT, many of the suggestions that flooded in focused on artificial intelligence – including a pitch about how it is being employed to predict deforestation in the Amazon.

One submission however, from an engineering student who had posted a couple of articles on Medium, seemed to be riding the artificial intelligence wave with more chutzpah. He offered three feature ideas – pitches on innovative agriculture, data storage and the therapeutic potential of VR. While coherent, the pitches had a bland authority about them, repetitive paragraph structure, and featured upbeat endings, which if you’ve been toying with ChatGPT or reading about Google chatbot Bard’s latest mishaps, are hints of chatbot-generated content.

I showed them to a colleague. “They feel synthetic,” he said. Another described them as having the tone of a “life insurance policy document”. Were our suspicions correct? I decided to ask ChatGPT. The bot wasn’t so sure: “The texts could have been written by a human, as they demonstrate a high level of domain knowledge and expertise, and do not contain any obvious errors or inconsistencies,” it responded.

…If the chatbot were a bit more intelligent it might have suggested that I put the suspect content through OpenAI’s text classifier. When I did, two of the pitches were rated “possibly” AI generated. Of the two Medium blog posts with the student’s name on, one was rated “possibly” and the other “likely”.

I decided to email him and ask him if his pitches were written by a chatbot. His response was honest: “I must confess that you are correct in your assumption that my writing was indeed generated with the assistance of AI technology.”

But he was unashamed: “My goal is to leverage the power of AI to produce high-quality content that meets the needs of my clients and readers. I believe that by combining the best of both worlds – human creativity and AI technology – we can achieve great things.” Even this email, according to OpenAI’s detector, was “likely” AI generated.

«

unique link to this extract


Requiem for the newsroom • The New York Times

Maureen Dowd:

»

“What would a newspaper movie look like today?” wondered my New York Times colleague Jim Rutenberg. “A bunch of individuals at their apartments, surrounded by sad houseplants, using Slack?”

Mike Isikoff, an investigative reporter at Yahoo who worked with me at The Washington Star back in the ’70s, agreed: “Newsrooms were a crackling gaggle of gossip, jokes, anxiety and oddball hilarious characters. Now we sit at home alone staring at our computers. What a drag.”

As my friend Mark Leibovich, a writer at The Atlantic, noted: “I can’t think of a profession that relies more on osmosis, and just being around other people, than journalism. There’s a reason they made all those newspaper movies, ‘All the President’s Men,’ ‘Spotlight,’ ‘The Paper.’

“There’s a reason people get tours of newsrooms. You don’t want a tour of your local H&R Block office.”

Now, Leibovich said, he does most meetings from home. “At the end of a Zoom call, nobody says, ‘Hey, do you want to get a drink?’ There’s just a click at the end of the meetings. Nothing dribbles out afterward, and you can really learn things from the little meetings after the meetings.”

When Leibovich got his first newspaper job answering phones and sorting mail at The Boston Phoenix, he soon learned that “the best journalism school is overhearing journalists doing their jobs.”

Isikoff still recalls how excited he was when he heard his seatmate at The Star, Robert Pear, the late, great reporter who later worked at The Times, track down the fugitive financier Robert Vesco in Cuba. “Hello, Mr. Vesco,” Pear said in his whispery voice. “This is Robert Pear of The Washington Star.”

With journalists swarming around Washington for the annual White House Correspondents’ Dinner and cascade of parties, it seems like a good time to write the final obituary for the American newspaper newsroom.

«

I haven’t been inside a newsroom for a long time, but they did seem to be getting quieter. The biggest trend was away from boozy lunches and towards sandwiches at a desk.
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: Apologies: I forgot to include a link to a news story from 1995. Tomorrow we’ll have two years to cover!