What if – just imagine – queries to doctors were answered by ChatGPT? It turns out people like that. CC-licensed photo by Camilo Rueda López on Flickr.
You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.
There’s another post coming this week at the Social Warming Substack on Friday at about 0845 UK time. Free signup.
A selection of 9 links for you. Still waiting. I’m @charlesarthur on Twitter. On Mastodon: https://newsie.social/@charlesarthur. Observations and links welcome.
‘The godfather of AI’ quits Google and warns of danger ahead • The New York Times
Cade Metz interviewed Dr Geoffrey Hinton, the British scientist who pioneered neural networks and in 2012 built a neural net that could identify common objects in photos:
Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of AI. A part of him, he said, now regrets his life’s work.
…As companies improve their AI systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of AI technology. “Take the difference and propagate it forwards. That’s scary.”
Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.
His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
He is also worried that AI technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow AI systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.
“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
This interview raises lots of questions. He was worried while inside Google? He’s worried now he’s outside, but Google is saying everything’s peachy? They’re all rushing too quickly towards putting this stuff out, even while denying that’s the case? It’s not encouraging.
unique link to this extract
ChatGPT will see you now: doctors using AI to answer patient questions • WSJ
In California and Wisconsin, OpenAI’s “GPT” generative artificial intelligence is reading patient messages and drafting responses from their doctors. The operation is part of a pilot program in which three health systems test if the AI will cut the time that medical staff spend replying to patients’ online inquiries.
UC San Diego Health and UW Health began testing the tool in April. Stanford Health Care aims to join the rollout early next week. Altogether, about two dozen healthcare staff are piloting this tool.
Marlene Millen, a primary care physician at UC San Diego Health who is helping lead the AI test, has been testing GPT in her inbox for about a week. Early AI-generated responses needed heavy editing, she said, and her team has been working to improve the replies. They are also adding a kind of bedside manner: If a patient mentioned returning from a trip, the draft could include a line that asked if their travels went well. “It gives the human touch that we would,” Dr. Millen said.
There is preliminary data that suggests AI could add value. ChatGPT scored better than real doctors at responding to patient queries posted online, according to a study published Friday in the journal JAMA Internal Medicine, in which a panel of doctors did blind evaluations of posts.
As many industries test ChatGPT as a business tool, hospital administrators and doctors are hopeful that the AI assist will ease burnout among their staff, a problem that skyrocketed during the pandemic. The crush of messages, along with health-records management and other administrative tasks, is a contributor, according to the American Medical Association.
I guess it’s sort of equal: the patients are using search engines (usually Dr Google), and now the doctors are entering the arms race (a little late). The advantage is that ChatGPT is polite and you can narrow (or train) its expertise.
unique link to this extract
Missing Links: A comparison of search censorship in China • The Citizen Lab
Jeffrey Knockel, Ken Kato, and Emile Dirks:
• Across eight China-accessible search platforms analyzed — Baidu, Baidu Zhidao, Bilibili, Microsoft Bing, Douyin, Jingdong, Sogou, and Weibo — we discovered over 60,000 unique censorship rules used to partially or totally censor search results returned on these platforms.
• We investigated different levels of censorship affecting each platform, which might either totally block all results or selectively allow some through, and we applied novel methods to unambiguously and exactly determine the rules triggering each of these types of censorship across all platforms.
• Among web search engines Microsoft Bing and Baidu, Bing’s chief competitor in China, we found that, although Baidu has more censorship rules than Bing, Bing’s political censorship rules were broader and affected more search results than Baidu’s. Bing on average also restricted displaying search results from a greater number of website domains.
• These findings call into question the ability of non-Chinese technology companies to better resist censorship demands than their Chinese counterparts and serve as a dismal forecast concerning the ability of other non-Chinese technology companies to introduce search products or other services in China without integrating at least as many restrictions on political and religious expression as their Chinese competitors.
One has to wonder if the people of China are aware of this, and there’s a sort of silent consensus that it’s OK, or if there’s some growing unhappiness with it.
unique link to this extract
How China’s Huawei spooked Germany into launching a probe • POLITICO
While much of the fear around Huawei in the West has focused on espionage and the risk of data leaking to Beijing, Germany’s latest investigation — and the intelligence that triggered it — point to another risk: the potential of sabotage through critical components that could collapse telecoms networks.
In March, the interior ministry announced it was checking all components with security implications from two Chinese telecoms suppliers, Huawei and ZTE. The review was launched to identify technology “that could enable a state to exercise political power,” a high-ranking official from the interior ministry said at the time.
German lawmakers were briefed on the probe by security authorities in a classified parliamentary hearing at the German Bundestag’s digital committee in early April. The session was held by the German interior ministry and the federal intelligence service, the two lawmakers said. Germany’s cybersecurity agency was also present, one lawmaker added.
In the briefing, security officials told lawmakers that one tech component in particular triggered the ministry’s investigation, namely an energy management component from Huawei, two lawmakers present at the briefing who spoke under the condition of anonymity because of the classified nature of the information told POLITICO.
The revelations suggest security officials feared such a component could be used to disrupt telecoms operations or — in a worst case scenario — be exploited to bring down a network.
…In its review, the German interior ministry is asking network operators to submit a list of all Chinese “security-relevant” components. The checks are expected to conclude in the coming months.
The review could lead to operators having to “rip and replace” components provided by Chinese suppliers in past years if they’re deemed too risky.
The paranoid style of politics is returning.
April 19 1995: Many feared dead in Oklahoma bombing • BBC On This Day
A huge car bomb has exploded at a government building in Oklahoma City killing at least 80 people including 17 children at a nursery.
At least 100 people have been injured and the number of dead is expected to rise.
In an emotional speech, President Bill Clinton vowed “swift, certain and severe” punishment for those behind the atrocity.
“The United States will not tolerate and I will not allow the people of this country to be intimidated by evil cowards,” he told a White House news conference this evening.
The blast happened just after 0900 local time when most workers were in their offices. It destroyed the facade of the ten-storey Alfred Murrah Building.
One survivor said he thought there was an earthquake: “I never heard anything that loud. It was a horrible noise…the roar of the whole building crumbling.”
There were scenes of chaos as paramedics treated the wounded on the pavement and rescue workers battled to dig out those still trapped in the rubble.
The work of Timothy McVeigh, a Gulf War veteran, as retaliation – he said – for the government siege at Waco, Texas, in which 82 members of the Branch Davidian sect died. McVeigh’s actions led an entire right-wing conspiracist militia movement to surface over the next 30 years.
15 June 1996: Huge explosion rocks central Manchester • BBC On This Day
A massive bomb has devastated a busy shopping area in central Manchester.
Two hundred people were injured in the attack, mostly by flying glass, and seven are said to be in a serious condition. Police believe the IRA planted the device.
The bomb exploded at about 1120 BST on Corporation Street outside the Arndale shopping centre.
It is the seventh attack by the Irish Republican group since it broke its ceasefire in February and is the second largest on the British mainland.
A local television station received a telephone warning at 1000 BST – just as the city centre was filling up with Saturday shoppers.
The caller used a recognised IRA codeword.
One hour and 20 minutes after the warning, police were still clearing hundreds of people from a huge area of central Manchester.
Army bomb disposal experts were using a remote-controlled device to examine a suspect van parked outside Marks & Spencer when it blew up in an uncontrolled explosion.
Less than two years later, the IRA’s political wing, Sinn Fein, signed the Good Friday Agreement, which ended the terrorism campaign and brought peace to Northern Ireland. It is arguably the only successful political negotiation to have ended a conflict in the past 25 years.
unique link to this extract
Jack Dorsey says Elon Musk shouldn’t have bought Twitter after all • The Washington Post
Faiz Siddiqui and Will Oremus:
[Former Twitter CEO Jack] Dorsey said he thought Musk, the Tesla CEO who serves in the same role at Twitter today, should have paid $1bn to back out of the deal to acquire the social media platform. The comments are a stark reversal from Dorsey’s strong endorsement of Musk’s takeover, when he wrote a year ago that if Twitter had to be a company at all, “Elon is the singular solution I trust.”
“I trust his mission to extend the light of consciousness,” Dorsey tweeted at the time.
In his remarks on Bluesky on Friday, Dorsey struck a far different tone. Dorsey said he didn’t think Musk “acted right” after pursuing the site and realizing his potential mistake, adding that he did not believe the company’s board should have forced the sale.
“It all went south,” Dorsey added.
Musk did not respond to a request for comment on Dorsey’s remarks. Musk appeared on Friday night’s “Real Time With Bill Maher” on HBO, and spoke on topics including his time in charge of the company, a recent meeting with U.S. Senate Majority Leader Charles E. Schumer (D-N.Y.), and his concerns about rhetoric coming from the political left.
“It was on a fast track to bankruptcy,” Musk said of Twitter. “So I had to take drastic action. There wasn’t any choice.”
Pity Musk didn’t come to the same realisation a lot earlier. Possibly he did, and thought that actually he could make it all work.
unique link to this extract
Scientists in India protest move to drop Darwinian evolution from textbooks • Science
Scientists in India are protesting a decision to remove discussion of Charles Darwin’s theory of evolution from textbooks used by millions of students in ninth and 10th grades. More than 4000 researchers and others have so far signed an open letter asking officials to restore the material.
The removal makes “a travesty of the notion of a well-rounded secondary education,” says evolutionary biologist Amitabh Joshi of the Jawaharlal Nehru Centre for Advanced Scientific Research. Other researchers fear it signals a growing embrace of pseudoscience by Indian officials.
The Breakthrough Science Society, a nonprofit group, launched the open letter on 20 April after learning that the National Council of Educational Research and Training (NCERT), an autonomous government organization that sets curricula and publishes textbooks for India’s 256 million primary and secondary students, had made the move as part of a “content rationalization” process. NCERT first removed discussion of Darwinian evolution from the textbooks at the height of the COVID-19 pandemic in order to streamline online classes, the society says. (Last year, NCERT issued a document that said it wanted to avoid content that was “irrelevant” in the “present context.”)
…One major concern, Joshi says, is that most Indian students will get no exposure to the concept of evolution if it is dropped from the ninth and 10th grade curriculum, because they do not go on to study biology in later grades. “Evolution is perhaps the most important part of biology that all educated citizens should be aware of,” Joshi says. “It speaks directly to who we are, as humans, and our position within the living world.”
No word on whether they’re replacing it with something else, or just hoping children absorb the idea by osmosis.
unique link to this extract
Rise of the Newsbots: AI-generated news websites are proliferating • NewsGuard
McKenzie Sadeghi and Lorenzo Arvanitis:
In April 2023, NewsGuard identified 49 websites spanning seven languages — Chinese, Czech, English, French, Portuguese, Tagalog, and Thai — that appear to be entirely or mostly generated by artificial intelligence language models designed to mimic human communication — here in the form of what appear to be typical news websites.
The websites, which often fail to disclose ownership or control, produce a high volume of content related to a variety of topics, including politics, health, entertainment, finance, and technology. Some publish hundreds of articles a day. Some of the content advances false narratives. Nearly all of the content features bland language and repetitive phrases, hallmarks of artificial intelligence.
Many of the sites are saturated with advertisements, indicating that they were likely designed to generate revenue from programmatic ads — ads that are placed algorithmically across the web and that finance much of the world’s media — just as the internet’s first generation of content farms, operated by humans, were built to do.
In short, as numerous and more powerful AI tools have been unveiled and made available to the public in recent months, concerns that they could be used to conjure up entire news organizations — once the subject of speculation by media scholars — have now become a reality.
In April 2023, NewsGuard sent emails to the 29 sites in the analysis that listed contact information, and two confirmed that they have used AI. Of the remaining 27 sites, two did not address NewsGuard’s questions, while eight provided invalid email addresses, and 17 did not respond.
Used to be you’d just feed a normal site through a thesaurus to produce a junk news site, but now we have machines to generate it from scratch. Hurrah?
unique link to this extract
• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?
Read Social Warming, my latest book, and find answers – and more.
Errata, corrigenda and ai no corrida: none notified