Start Up No.1965: influencer parents and their children, chatbots infuse Office, screwing up self-driving, China v LLMs, and more


The birthday cake might say 50 (or 70!), but in your mind you’re 35. It’s a very common feeling – but why? CC-licensed photo by Dark Dwarf on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.

A selection of 10 links for you. Great to hear that nothing happened during the break. I’m @charlesarthur on Twitter. On Mastodon: https://newsie.social/@charlesarthur. Observations and links welcome.


Influencer parents and their children are rethinking growing up on social media • Teen Vogue

Fortesa Latifi:

»

For Gen-Z, social media has always been a given. Many consider the first social networking site to have launched in 1997, the same year that Pew Research marks as the beginning of Generation Z. It’s commonplace for young people of this generation to have their triumphs and travails documented on the Internet, with a digital footprint that follows them from platform to platform over the years. But for some young people, their parents shared more than evidence of an elementary school spelling bee win or a smiling photo of their first day in college. Instead, the intimate details of their lives – from videos of them as crying children to footage of a parent disciplining them – are shared and sometimes monetized without their explicit consent.

Claire, whose name has been changed to protect her privacy, has never known a life that doesn’t include a camera being pointed in her direction. The first time she went viral, she was a toddler. When the family’s channel started to rake in the views, Claire says both her parents left their jobs because the revenue from the YouTube channel was enough to support the family and to land them a nicer house and new car. “That’s not fair that I have to support everyone,” she said. “I try not to be resentful but I kind of [am].” Once, she told her dad she didn’t want to do YouTube videos anymore and he told her they would have to move out of their house and her parents would have to go back to work, leaving no money for “nice things.”

When the family is together, the YouTube channel is what they talk about. Claire says her father has told her he may be her father, but he’s also her boss. “It’s a lot of pressure,” she said. When Claire turns 18 and can move out on her own, she’s considering going no-contact with her parents.

«

unique link to this extract


I’m 53 years old. So why am I 36 in my head? • The Atlantic

Jennifer Senior:

»

“How old do you feel?” is an altogether different question from “How old are you in your head?” The most inspired paper I read about subjective age, from 2006, asked this of its 1,470 participants—in a Danish population (Denmark being the kind of place where studies like these would happen)—and what the two authors discovered is that adults over 40 perceive themselves to be, on average, about 20% younger than their actual age. “We ran this thing, and the data were gorgeous,” says David C. Rubin (75 in real life, 60 in his head), one of the paper’s authors and a psychology and neuroscience professor at Duke University. “It was just all these beautiful, smooth curves.”

Why we’re possessed of this urge to subtract is another matter. Rubin and his co-author, Dorthe Berntsen, didn’t make it the focus of this particular paper, and the researchers who do often propose a crude, predictable answer—namely, that lots of people consider aging a catastrophe, which, while true, seems to tell only a fraction of the story. You could just as well make a different case: that viewing yourself as younger is a form of optimism, rather than denialism. It says that you envision many generative years ahead of you, that you will not be written off, that your future is not one long, dreary corridor of locked doors.

I think of my own numbers, for instance—which, though a slight departure from the Rubin-Berntsen rule, are still within a reasonable range (or so Rubin assures me). I’m 53 in real life but suspended at 36 in my head, and if I stop my brain from doing its usual Tilt-A-Whirl for long enough, I land on the same explanation: At 36, I knew the broad contours of my life, but hadn’t yet filled them in. I was professionally established, but still brimmed with potential. I was paired off with my husband, but not yet lost in the marshes of a long marriage (and, okay, not yet a tiresome fishwife). I was soon to be pregnant, but not yet a mother fretting about eating habits, screen habits, study habits, the brutal folkways of adolescents, the porn merchants of the internet.

I was not yet on the gray turnpike of middle age, in other words.

«

The linked study asked people from the (physical) age of 20 upwards. The “younger mentally than physically” perception starts to take hold at about 30. But why? Ah, that’s what the article gets into.
unique link to this extract


Just because chatbots can’t think doesn’t mean they can’t lie • The Nation

Maria Bustillos:

»

In late February, Tyler Cowen, a libertarian economics professor at George Mason University, published a blog post titled, “Who was the most important critic of the printing press in the 17th century?” Cowen’s post contended that the polymath and statesman Francis Bacon was an “important” critic of the printing press; unfortunately, the post contains long, fake quotes attributed to Bacon’s The Advancement of Learning (1605), complete with false chapter and section numbers.

Tech writer Mathew Ingram drew attention to the fabrications a few days later, noting that Cowen has been writing approvingly about the AI chatbot ChatGPT for some time now; several commenters on Cowen’s post assumed the fake quotes must be the handiwork of ChatGPT. (Cowen did not reply to e-mailed questions regarding the post by press time, and later removed the post entirely, with no explanation whatsoever. However, a copy remains at the Internet Archive’s Wayback Machine).

Fortunately, it was child’s play to fact-check Cowen’s fake quotes against the original text of The Advancement of Learning, for free, at the Internet Archive’s Open Library. After checking out the real book, I popped over to ChatGPT for a Q&A session of my own. The bot promptly started concocting fake, grossly inelegant Bacon quotes and chapter titles for me, too, so I called it out.

…But here’s the worst part. When I searched Google on the phrase, “17th century criticism of the printing press,” the results linked to Cowen’s fake-filled blog post! These published falsehoods have already polluted Google. It was a bit weird to realize, right then, that I am going to have to stop using Google for work, but it’s true. The breakneck deployment of half-baked AI, and its unthinking adoption by a load of credulous writers, means that Google—where, admittedly, I’ve found the quality of search results to be steadily deteriorating for years—is no longer a reliable starting point for research.

«

Google gives priority to fresher content in its results, so this is going to be a growing problem as people start using ChatGPT and its relatives to generate more and more content. The ranking algorithm is going to need a lot of rethinking. (The rest of the article is about the publishing industry’s lawsuit against the Internet Archive over ebook lending, on the basis that you need Bacon’s book to be accessible in order to check those facts, and that if the Archive loses its case then searchable text of out-of-copyright books will disappear. That claim seems unsupported.)
unique link to this extract


Microsoft’s new Copilot will change Office documents forever • The Verge

Tom Warren:

»

Microsoft’s new AI-powered Copilot summarized my meeting instantly yesterday (the meeting was with Microsoft to discuss Copilot, of course) before listing out the questions I’d asked just seconds before. I’ve watched Microsoft demo the future of work for years with concepts about virtual assistants, but Copilot is the closest thing I’ve ever seen to them coming true.

“In our minds this is the new way of computing, the new way of working with technology, and the most adaptive technology we’ve seen,” says Jon Friedman, corporate vice president of design and research at Microsoft, in an interview with The Verge.

I was speaking to Friedman in a Teams call when he activated Copilot midway through our meeting to perform its AI-powered magic. Microsoft has a flashy marketing video that shows off Copilot’s potential, but seeing Friedman demonstrate this in real time across Office apps and in Teams left me convinced it will forever change how we interact with software, create documents, and ultimately, how we work.

Copilot appears in Office apps as a useful AI chatbot on the sidebar, but it’s much more than just that. You could be in the middle of a Word document, and it will gently appear when you highlight an entire paragraph — much like how Word has UI prompts that highlight your spelling mistakes. You can use it to rewrite your paragraphs with 10 suggestions of new text to flick through and freely edit, or you can have Copilot generate entire documents for you.

«

Let’s be positive – soon we should be in the position where we can set an AI to write and send the documents, the response will be written and sent by another AI, and we humans can go off and do something much more interesting.
unique link to this extract


Wonder Dynamics puts a full-service CG character studio in a web platform • TechCrunch

Devin Coldewey:

»

The tools of modern cinema have become increasingly accessible to independent and even amateur filmmakers, but realistic CG characters (like them or not) have remained the province of big-budget projects. Wonder Dynamics aims to change that with a platform that lets creators literally drag and drop a CG character into any scene as if it was professionally captured and edited.

Yes, it sounds a bit like overpromising. Your skepticism is warranted, but as a skeptic myself I have to say I was extremely impressed with what the startup showed of Wonder Studio, the company’s web-based editor. This isn’t a toy like an AR filter — it’s a full-scale tool, and one that co-founders Nikola Todorovic and Tye Sheridan have longed for themselves. And most importantly, it’s meant to make artists’ jobs easier, not replace them outright.

“The goal all along was to make a tool for artists, to empower them. Someone who has big dreams doesn’t always have the resources to manifest them,” said Sheridan, whom many will have seen starring in Spielberg’s film adaptation of Ready Player One — so his familiarity with the complexities of CG-assisted production and motion capture is very much firsthand.

Todorovic and Sheridan have known and worked with each other for years and frequently hit this wall: “Both Tye and I were writing films we couldn’t afford to make,” said Todorovic. Their company, which has operated mostly in stealth until now, raised a $2.5m seed round in early 2021 and an additional $10m A round later that year.

«

There’s a short video showing how it works. Compare and contrast with this next story, on illustration.
unique link to this extract


Netflix’s anime AI art causes background artist panic • Rest of World

Andrew Deck:

»

On January 31, Netflix turned heads with the release of a new anime short film. Posted to Netflix Japan’s official YouTube account, The Dog and the Boy follows a robotic dog and his human companion, who are separated by war and then reunited in old age. All background art for the three-minute video was created using an AI image generator, similar to tools like Stable Diffusion and Midjourney.

A tweet from the official Netflix Japan account describes the novel technique as “an experimental effort to help the anime industry, which has a labor shortage.”

Backlash from anime fans and illustrator communities has been swift, reflecting real fears about being automated out of a job. “A lot of [anime] artists are scared, and rightly so,” said Zakuga Mignon, an illustrator who asked to use their professional name due to ongoing threats. Mignon founded the hashtag #SupportHumanArtists, which first took off in December but has become prominent in the backlash against Netflix’s film.

But The Dog and the Boy wasn’t just a threat to artists generally. It targeted background artists specifically: a class of animation workers that is particularly vulnerable to automation and downsizing. For those fighting to elevate background artists’ work, it’s an alarming trend — and a troubling reminder of how automated tools can play on divisions within a profession.

«

unique link to this extract


How Elon Musk spoiled the dream of ‘Full Self-Driving’ • The Washington Post

Faiz Siddiqui:

»

Long before he became “Chief Twit” of Twitter, Elon Musk had a different obsession: making Teslas drive themselves. The technology was expensive and, two years ago when the supply chain was falling apart, Musk became determined to bring down the cost.

He zeroed in on a target: the car radar sensors, which are designed to detect hazards at long ranges and prevent the vehicles from barreling into other cars in traffic. The sleek bodies of the cars already bristled with eight cameras designed to view the road and spot hazards in each direction. That, Musk argued, should be enough.

Some Tesla engineers were aghast, said former employees with knowledge of his reaction, speaking on the condition of anonymity for fear of retribution. They contacted a trusted former executive for advice on how to talk Musk out of it, in previously unreported pushback. Without radar, Teslas would be susceptible to basic perception errors if the cameras were obscured by raindrops or even bright sunlight, problems that could lead to crashes.

[Video caption: Six years after Tesla promoted a self-driving car’s flawless drive, a car using recent ‘Full Self-Driving’ beta software couldn’t drive the route without error. (Video: Jonathan Baran/The Washington Post)]

Musk was unconvinced and overruled his engineers. In May 2021, Tesla announced it was eliminating radar on new cars. Soon after, the company began disabling radar in cars already on the road. The result, according to interviews with nearly a dozen former employees and test drivers, safety officials and other experts, was an uptick in crashes, near misses and other embarrassing mistakes by Tesla vehicles suddenly deprived of a critical sensor.

Musk has described the Tesla “Full Self-Driving” technology as “the difference between Tesla being worth a lot of money and being worth basically zero,” but his dream of autonomous cars is hitting roadblocks.

«

A really comprehensive piece of reporting. Includes the now-expected phrase for a story about people working for Musk: “Most spoke on the condition of anonymity for fear of retribution.”
unique link to this extract


Room-temperature superconductor discovery claim meets with resistance • Quanta Magazine

Charlie Wood and Zack Savitsky:

»

In a packed talk on Tuesday afternoon at the American Physical Society’s annual March meeting in Las Vegas, Ranga Dias, a physicist at the University of Rochester, announced that he and his team had achieved a century-old dream of the field: a superconductor that works at room temperature and near-room pressure. Interest in the presentation was so intense that security personnel stopped entry to the overflowing room more than fifteen minutes before the talk. They could be overheard shooing curious onlookers away shortly before Dias began speaking.

The results, published in Nature, appear to show that a conventional conductor — a solid composed of hydrogen, nitrogen and the rare-earth metal lutetium — was transformed into a flawless material capable of conducting electricity with perfect efficiency.

While the announcement has been greeted with enthusiasm by some scientists, others are far more cautious, pointing to the research group’s controversial history of alleged research malfeasance.

«

I mean, come on. “Meets resistance” for a superconductor story being disbelieved is next-level headline writing.
unique link to this extract


The strongest evidence yet that an animal started the pandemic • The Atlantic

Katherine J. Wu:

»

A few weeks ago, the [genetic sequence] data appeared on an open-access genomic database called GISAID, after being quietly posted by researchers affiliated with [China’s] Center for Disease Control and Prevention. By almost pure happenstance, scientists in Europe, North America, and Australia spotted the sequences, downloaded them, and began an analysis.

The samples were already known to be positive for the coronavirus, and had been scrutinized before by the same group of Chinese researchers who uploaded the data to GISAID. But that prior analysis, released as a preprint publication in February 2022, asserted that “no animal host of SARS-CoV-2 can be deduced.” Any motes of coronavirus at the market, the study suggested, had most likely been chauffeured in by infected humans, rather than wild creatures for sale.

The new analysis, led by Kristian Andersen, Edward Holmes, and Michael Worobey—three prominent researchers who have been looking into the virus’s roots—shows that that may not be the case. Within about half a day of downloading the data from GISAID, the trio and their collaborators discovered that several market samples that tested positive for SARS-CoV-2 were also coming back chock-full of animal genetic material—much of which was a match for the common raccoon dog, a small animal related to foxes that has a raccoon-like face. Because of how the samples were gathered, and because viruses can’t persist by themselves in the environment, the scientists think that their findings could indicate the presence of a coronavirus-infected raccoon dog in the spots where the swabs were taken.

…China has, for years, been keen on pushing the narrative that the pandemic didn’t start within its borders. In early 2020, a Chinese official suggested that the novel coronavirus may have emerged from a US Army lab in Maryland. The notion that a dangerous virus sprang out from wet-market mammals echoed the beginnings of the SARS-CoV-1 epidemic two decades ago—and this time, officials immediately shut down the Huanan market, and vehemently pushed back against assertions that live animals being sold illegally in the country were to blame.

«

The latter point is the most interesting in the article: China doesn’t want SARS-CoV-2 to have come from a lab leak or a wet market. So it just clamps down on everything.
unique link to this extract


China’s censors are afraid of ChatGPT • Foreign Policy

Nicholas Welch and Jordan Schneider:

»

China’s aspirations to become a world-leading AI superpower are fast approaching a head-on collision with none other than its own censorship regime. The Chinese Communist Party (CCP) prioritizes controlling the information space over innovation and creativity, human or otherwise. That may dramatically hinder the development and rollout of LLMs [large language models], leaving China to find itself a pace behind the West in the AI race.

According to a bombshell report from Nikkei Asia, Chinese regulators have instructed key Chinese tech companies not to offer ChatGPT services “amid growing alarm in Beijing over the AI-powered chatbot’s uncensored replies to user queries.” A cited justification, from state-sponsored newspaper China Daily, is that such chatbots “could provide a helping hand to the U.S. government in its spread of disinformation and its manipulation of global narratives for its own geopolitical interests.”

The fundamental problem is that plenty of speech is forbidden in China—and the political penalties for straying over the line are harsh. A chatbot that produces racist content or threatens to stalk a user makes for an embarrassing story in the United States; a chatbot that implies Taiwan is an independent country or says Tiananmen Square was a massacre can bring down the wrath of the CCP on its parent company.

Ensuring that LLMs never say anything disparaging about the CCP is a genuinely herculean and perhaps impossible task. As Yonadav Shavit, a computer science Ph.D. student at Harvard University, put it: “Getting a chatbot to follow the rules 90% of the time is fairly easy. But getting it to follow the rules 99.99% of the time is a major unsolved research problem.”

…the de facto method by which Chinese AI companies compete among one another would involve feeding clever and suggestive prompts to an opponent’s AI chatbot, waiting until it produces material critical of the CCP, and forwarding a screenshot to the CAC. That’s what happened with Bluegogo, a bikeshare company. In early June 2017, the app featured a promotion using tank icons around Tiananmen Square. The $140m company folded immediately. Although most guessed that Bluegogo had been hacked by a competitor, to the CCP that defence was clearly irrelevant.

«

unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified
