Start Up No.2024: DeSantis campaign uses AI-faked Trump pics, labelling you 650,000 ways, further Vision Pro thoughts, and more


Analysts on Wall Street earnings calls have a new favourite phrase which is both bizarre and humdrum. CC-licensed photo by Dave Dugdale on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


Operational note: The Overspill is going to be on a break for two weeks. Next edition on Monday 26th.


There’s another post at the Social Warming Substack going live at about 0845 BST: it’s about India.

A selection of 10 links for you. Don’t you mouse over me, buddy. I’m @charlesarthur on Twitter. On Mastodon: https://newsie.social/@charlesarthur. Observations and links welcome.


Ron DeSantis ad uses AI-generated photos of Trump, Fauci • AFP Fact Check

Bill McCarthy:

»

A new Ron DeSantis campaign video attacking Donald Trump purports to show three photos of the former president embracing Anthony Fauci, a key member of the US coronavirus task force, with kisses on the cheek. But the images have the markings of fakes created using artificial intelligence technology, three experts in media forensics told AFP.

“Donald Trump became a household name by FIRING countless people *on television*,” the DeSantis rapid response team wrote in a June 5, 2023 tweet sharing the video. “But when it came to Fauci…”

The 44-second spot contrasts footage of Trump telling contestants “You’re fired” during his time as a reality TV show host with clips of him explaining why he would not give the boot to Fauci, who headed the US National Institute of Allergy and Infectious Diseases and was the face of America’s coronavirus response.

…”It was sneaky to intermix what appears to be authentic photos with fake photos, but these three images are almost certainly AI generated,” said Hany Farid, a professor at the University of California, Berkeley and expert in digital forensics, misinformation and image analysis.

Farid and two other media forensics experts polled by AFP agreed the images have irregular characteristics typical of those produced by AI.

“These images contain many signs indicating that they were AI-generated,” said Matthew Stamm, an associate professor of electrical and computer engineering at Drexel University, who specializes in detecting falsified images and videos. “For example, if you look closely at Donald Trump’s hair in the top-left, bottom-middle, and bottom-right images, you can see that it contains inconsistent textures and is significantly blurrier than other nearby content such as his ears or other regions of his face.”

«

And so it begins: AI fakery becomes part of the US political landscape. The campaigning proper has barely begun, and audio deepfakes are coming too; the Trump campaign has already used a deepfake of Trump’s voice to mock DeSantis’s Twitter campaign launch.
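The blur mismatch Stamm describes is easy to probe with a few lines of code. Below is a minimal sketch using OpenCV’s variance-of-Laplacian sharpness measure to compare regions of an image; the file name and region coordinates are hypothetical, and this illustrates only the kind of consistency check forensic analysts describe, not their actual tooling.

```python
# Minimal sketch: compare local sharpness across image regions using the
# variance of the Laplacian, a standard blur measure. Illustrative only:
# the file name and region coordinates are hypothetical.
import cv2

def sharpness(gray_patch):
    # Higher variance of the Laplacian means a sharper patch.
    return cv2.Laplacian(gray_patch, cv2.CV_64F).var()

img = cv2.imread("campaign_still.jpg", cv2.IMREAD_GRAYSCALE)

# Hypothetical regions of interest: (label, x, y, width, height)
regions = [("hair", 100, 40, 80, 80), ("ear", 200, 120, 80, 80)]
for label, x, y, w, h in regions:
    print(label, round(sharpness(img[y:y+h, x:x+w]), 1))

# In a real photo, adjacent regions at the same depth tend to show similar
# sharpness; a strong mismatch (blurry hair beside a sharp ear) is one of
# the irregularities the experts point to.
```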
unique link to this extract


From “heavy purchasers” of pregnancy tests to the depression-prone: we found 650,000 ways advertisers label you • The Markup

Jon Keegan:

»

What words would you use to describe yourself? You might say you’re a dog owner, a parent, that you like Taylor Swift, or that you’re into knitting. If you feel like sharing, you might say you have a sunny personality or that you follow a certain religion. 

If you spend any time online, you probably have some idea that the digital ad industry is constantly collecting data about you, including a lot of personal information, and sorting you into specialized categories so you’re more likely to buy the things they advertise to you. But in a rare look at just how deep—and weird—the rabbit hole of targeted advertising gets, The Markup has analyzed a database of 650,000 of these audience segments, newly unearthed on the website of Microsoft’s ad platform Xandr. The trove of data indicates that advertisers could also target people based on sensitive information like being “heavy purchasers” of pregnancy test kits, having an interest in brain tumors, being prone to depression, visiting places of worship, or feeling “easily deflated” or that they “get a raw deal out of life.”

Many of the Xandr ad categories are more prosaic, classifying people as “Affluent Millennials,” for example, or as “Dunkin Donuts Visitors.” Industry critics have raised questions about the accuracy of this type of targeting. And the practice of slicing and dicing audiences for advertisers is an old one. 

But the exposure of a collection of audience segments this size offers consumers an unusual look at how they and their families are packaged, described, and categorized by ad companies. 

Because the segments also include the names of the companies involved in creating them, they also shed light on how disparate pools of personal data—collected by tracking people’s online activity and real-world movements—are combined into bespoke, branded groups of potential ad viewers that can be marketed to publishers and advertisers.

«

The Markup has built a search system where you can see how people get segmented. It’s remarkable how narrow the brackets can be.
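If you’d rather poke at the data yourself than use their search page, here’s a minimal sketch of a keyword search over a segment list, assuming it has been saved locally as a CSV; the file name and column names are hypothetical, not The Markup’s or Xandr’s actual schema.

```python
# Minimal sketch: keyword search over an audience-segment list saved as a
# CSV. The file name and column names are hypothetical.
import csv

def find_segments(path, keyword):
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if keyword.lower() in row["segment_name"].lower():
                yield row["segment_name"], row["data_provider"]

for name, provider in find_segments("xandr_segments.csv", "pregnancy"):
    print(provider, "->", name)
```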
unique link to this extract


Apple Vision Pro hands-on: way ahead of Meta in critical ways • UploadVR

Ian Hamilton has tried all the VR headsets:

»

Vision Pro outclassed Meta Quest Pro and every other headset I’ve ever tried to a degree that is utterly show-stopping. I could see the weight of the headset still being a bit straining, and Apple wouldn’t talk about the field of view, but it felt at least competitive if not wider than existing headsets. Overall, Vision Pro provided easily the best headset demo I’ve ever tried, by a wide margin.

My first moment with Vision Pro seeing the physical room viewed through the headset’s display in passthrough, I looked down at my own hands and it felt as if I was looking at them directly. This was a powerful moment, more powerful than any previous “first” I’d experienced in VR. I feel the need to reiterate. I was looking at my own hands reconstructed by a headset’s sensors and it felt as if I was looking at them directly.

For the last decade, VR developers struggled with a number of tough design questions. Should they only show tracked controller models? Should they attach cartoonish hands? What about connecting those hands to arms? Sure, those are all interesting design questions, but those should be secondary implementations to a person simply looking down and feeling like their hands are their own. Vision Pro did this right out of the gate for the first time in VR hardware. It worked so well that I question how transparent optics, like those in use by HoloLens 2 or Magic Leap 2, will ever hope to match Apple’s version of passthrough augmented reality in an opaque headset.

Passthrough was by no means perfect – I could still see a kind of jittery visual artifact with fast head or hand movements. Also, in some of Apple’s software, I could see a thin outline around my fingertips. But these are incredibly minor critiques relative to the idea that this was night-and-day better than every other passthrough experience I’ve ever seen. From now on, every time I look through Quest Pro passthrough, I’ll be frustrated that I’m not using Vision Pro.

Is the difference between the $1000 Quest Pro and $3500 Vision Pro worth it? I’ll frame my answer to this question this way – can you afford to pay $2500 extra for a better sense of sight?

«

unique link to this extract


Apple’s Vision Pro isn’t the future • WIRED

Kate Knibbs is “a senior writer at WIRED, covering culture”:

»

I’m not a gambler, but I’d bet everything that Apple’s Vision Pro will flop.

…This is not a “revolutionary” gadget, no matter how confident Tim Cook looks when he says it is. It’s a rare misfire, and a sign that Apple is losing its ability to turn tech-geek novelties into normie must-haves. It doesn’t augur the future so much as suggest that Cupertino doesn’t have a clear view forward. 

“Every successful Apple product of the past two decades has disappeared into our lives in some way—the iPhone into our pockets, the iPad into our purses, the Apple Watch living on our wrists, and the AirPods resting in our ears,” my colleague Lauren Goode wrote this week, after demoing the device at WWDC. “But the Vision Pro is also unlike almost every other modern Apple product in one crucial way: It doesn’t disappear.” Instead, Goode wrote, the device settles onto your face, hides your eyes, “sensory organs that are a crucial part of the lived human experience.” The same was true of all virtual reality headsets and augmented reality glasses, she conceded, but the Vision Pro marked the first time an Apple product had made such an intrusion into people’s lives.  

Reading Lauren’s review converted me into a full-fledged Vision Pro doomer. It drives home the reality that an Apple headset, no matter how nifty its specs, is still a big honking gizmo plonked between its wearer and the rest of the world, inherently a barrier more than a conduit. 

«

Well, it’s a point of view. Of course the difficulty is in proving “flop”. Take the iPhone 5C and the Apple Watch: both were declared flops within about six months of going on sale. For the iPhone 5C, that was almost certainly true (Chinese buyers preferred the metal 5S), but the Watch is going well. So when do you know it’s a flop? Who decides? Let’s come back in two years.
unique link to this extract


Vision Pro’s big reveal • ROUGH TYPE

Nick Carr:

»

Vision Pro’s value seems to lie largely in the realm of metaphor. There’s that brilliant little reality dial—the “digital crown”—that allows you to fade in and out of the world, an analog rendering of the way our consciousness now wavers between presence and absence, here and not-here. And there’s the projection of your eyes onto the outer surface of the lens, so those around you can judge your degree of social and emotional availability at any given moment. Your eyes disappear, Apple explains, as you become more “immersed,” as you retreat from your physical surroundings into the screen’s captivating images. See you later. Your fingers keep moving, though, worrying their virtual worry beads, the body reduced to interface. In its metaphors, Vision Pro reveals us for what we have become: avatars in the uncanny valley.

«

“Virtual worry beads”. What a phrase. Carr, of course, always has an orthogonal take on tech, which is what makes him worth reading.

One point, though: nobody outside of Apple has seen what the “virtual eyes” projected on the goggles look like in real life. (Yes yes adverts sure.) All the demos have been to a single person, who wears the headset. Minor point, but might affect the spookiness in a shared space.
unique link to this extract


Smoke forecast • FireSmoke.ca

This interactive map shows where the smoke – particularly the most dangerous small particles, known as PM2.5 – is expected to move over the next few days. There’s no safe lower limit for PM2.5 exposure, but levels above 10 µg/m³ are seen as particularly risky for any extended period. Some of the US eastern seaboard, and its cities, are forecast to see levels of 60 or more.
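For context on those numbers, here’s a minimal sketch mapping a PM2.5 reading to the US EPA’s 24-hour risk bands as they stood in 2023. The breakpoints are the EPA’s; the function itself is just an illustration.

```python
# Minimal sketch: map a PM2.5 concentration (µg/m³, 24-hour average) to the
# US EPA risk band in force in 2023. Breakpoints are the EPA's; the code is
# illustrative.
def pm25_band(ug_m3: float) -> str:
    bands = [
        (12.0, "Good"),
        (35.4, "Moderate"),
        (55.4, "Unhealthy for sensitive groups"),
        (150.4, "Unhealthy"),
        (250.4, "Very unhealthy"),
    ]
    for upper, label in bands:
        if ug_m3 <= upper:
            return label
    return "Hazardous"

print(pm25_band(10))   # Good: at the old WHO annual guideline level
print(pm25_band(60))   # Unhealthy: the level forecast for the eastern seaboard
```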
unique link to this extract


Wall Street has a new favourite phrase and it’s utterly nauseating • Financial Times

Louis Ashworth:

»

A spectre is haunting earnings calls — the spectre of double-clicking.

If you haven’t encountered this phrase previously, you might — naive, tiny baby that you are — think it’s just about interfacing with software.

And, of course, a search for “double click” on analytics platform AlphaSense also throws up a few tech demonstrations (though not very many, based on Alphaville’s half-arsed QA process).

But the truth is far more sinister.

On cloud verticalisation: “Satya, in your prepared remarks, you spoke about an increase in verticalisation of Azure. Can we double-click on that a bit more?” Gregg Moskowitz, of Mizuho, on the Microsoft April 2023 call.

On the exceptional growth in Europe: “[C]urious to hear or maybe if you can double-click on what’s driving the exceptional growth here in Europe.” Samik Chatterjee, of JPMorgan, on the Apple April 2023 call.

On new customers: “Just double-click on what customers are coming to Salesforce and engaging with you around some of the new things that we’ll hear about it sounds like in June.” Brent Bracelin, of Piper Sandler, on the Salesforce May 2023 call.

«

Heaven only knows what started this. It’s absolutely 😱😱 and is starting to challenge “Great quarter, guys” for the most-used cliché in earnings calls. You might be grateful that the article is behind the paywall. Meanwhile, the suggestions for “Further reading” at the end of the article are:

»

— How I lost my 25-year battle against corporate claptrap (FT)
— How to Clean Up Vomit: 12 Steps (WikiHow)

«

unique link to this extract


Scientists claim over 99% identification of ChatGPT • The Register

Katyanna Quach:

»

“Right now, there are some pretty glaring problems with AI writing,” said Heather Desaire, first author of a paper published in the journal Cell Reports Physical Science, and a chemistry professor at the University of Kansas, in a statement. “One of the biggest problems is that it assembles text from many sources and there isn’t any kind of accuracy check – it’s kind of like the game Two Truths and a Lie.”

Desaire and her colleagues compiled datasets to train and test an algorithm to classify papers written by scientists and by ChatGPT. They selected 64 “perspectives” articles – a specific style of article published in science journals – representing a diverse range of topics from biology to physics, and prompted ChatGPT to generate paragraphs describing the same research to create 128 fake articles. A total of 1,276 paragraphs were produced by AI and used to train the classifier.

Next, the team compiled two more datasets, each containing 30 real perspectives articles and 60 ChatGPT-written papers, totaling 1,210 paragraphs to test the algorithm.

Initial experiments reported the classifier was able to discern between real science writing from humans and AI-generated papers 100% of the time. Accuracy at the individual paragraph level, however, dropped slightly – to 92%, it’s claimed. 

They believe their classifier is effective, because it homes in on a range of stylistic differences between human and AI writing. Scientists are more likely to have a richer vocabulary and write longer paragraphs containing more diverse words than machines. They also use punctuation like question marks, brackets, semicolons more frequently than ChatGPT, except for speech marks used for quotations. 

ChatGPT is also less precise, and doesn’t provide specific information about figures or other scientist names compared to humans. Real science papers also use more equivocal language – like “however”, “but”, “although” as well as “this” and “because”.

«

Good points about the punctuation and vocabulary. Perhaps they’ve discovered the formula!
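To make the approach concrete, here’s a minimal sketch of stylometric features like those the paper describes (paragraph length, lexical diversity, punctuation counts) fed to an off-the-shelf classifier. The feature list is paraphrased from the article; the toy training data and the choice of model are purely illustrative, not the authors’ method.

```python
# Minimal sketch: stylometric features of the kind the paper describes, fed
# to a simple classifier. Toy data and model choice are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(paragraph: str) -> list[float]:
    words = paragraph.split()
    return [
        len(words),                                    # paragraph length
        len({w.lower() for w in words}) / max(len(words), 1),  # lexical diversity
        sum(paragraph.count(c) for c in "?();:"),      # punctuation the paper flags
        sum(paragraph.count(w) for w in ("however", "but", "although")),
    ]

# Paragraphs labelled human (0) or ChatGPT (1); real training data would be
# the 1,276 paragraphs described above.
X_train = np.array([features(p) for p in [
    "A longer, varied paragraph; however, it hedges (as human writers do)?",
    "Short uniform text. Short uniform text.",
]])
y_train = np.array([0, 1])

clf = LogisticRegression().fit(X_train, y_train)
print(clf.predict([features("Although brief, this asks a question?")]))
```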
unique link to this extract


Adobe will cover any legal bills around generative AI copyright issues • Fast Company

Chris Stokel-Walker:

»

Adobe Firefly, the software giant’s AI-powered image generation and expansion tool, is being rolled out to businesses today. At its flagship Adobe Summit event, the company is unveiling an expansion of Firefly for enterprise users that will include “full indemnification for the content created through these features,” says Claude Alexandre, VP of digital media at Adobe. (The publicly available beta of Firefly has already been used to create AI-generated riffs on classic album covers and works of art.)

Anything created using Firefly’s text-to-image generation tool will be fully indemnified by the company “as a proof point that we stand behind the commercial safety and readiness of these features,” Alexandre says.

That’s important because of the challenges around the legal status of generative AI tools and their outputs. The standards around generative AI and copyright have not yet been settled legally, which is causing companies to hold off using generative AI in their business operations. This decision, Alexandre hopes, provides clarity.

The Firefly model is trained on stock images for which Adobe already holds the rights, as well as on openly licensed content (for example, Creative Commons images) and public-domain content. “Adobe has actually offered indemnification for quite some time against the use of its own products, and in particular for stock [images],” Alexandre says, noting that this is an extension of the practice.

«

Note the catch, though: “The offer will be available only to enterprise customers”. Presumably that means customers who are still paying their subscription when the litigation arrives, and while it’s in progress. So the subscription becomes a form of insurance too.
unique link to this extract


Wind and solar overtake fossil generation in the EU • Ember

»

New data from energy think tank Ember shows that wind and solar produced more EU electricity than fossil fuels in May, for the first full month on record. Almost a third of the EU’s electricity in May was generated from wind and solar (31%, 59 TWh), while fossil fuels generated a record low of 27% (53 TWh).

“Europe’s electricity transition has hit hyperdrive,” said Ember’s Europe lead Sarah Brown. “Clean power keeps smashing record after record.”

The new milestone was driven by solar growth, strong wind performance and low electricity demand. Solar generated a record 14% of EU electricity in May, hitting an all-time high of 27 TWh, which exceeds the monthly solar records set in July last year. For the first time, EU solar generation overtook coal generation, with coal generating just 10% of EU electricity in May.

Wind power grew year-on-year to generate 17% of EU electricity in May (32 TWh). However, this was lower than the record set in January this year when wind produced 23% (54 TWh) of EU electricity. 

The strong performance of wind and solar meant that EU coal generation fell to an all-time monthly low in May, with just 10% (20 TWh) of EU electricity coming from the most polluting source. The record-low coal generation in May was just below the previous record set during the pandemic lockdowns, when coal power generated slightly above 10% of EU electricity in April 2020.

«

“Low electricity demand” is a strange one, which isn’t explained. Why, given everything, is electricity demand lower? Could microgeneration from solar panels be having some broader impact, since it reduces direct demand for power?
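The quoted figures do hang together, incidentally: each share/TWh pair implies a total for EU generation in May, and a quick check shows they agree to within the rounding of the percentages.

```python
# Quick sanity check: each quoted share/TWh pair implies a total for EU
# electricity generation in May; the implied totals should roughly agree.
pairs = {"wind+solar": (59, 0.31), "fossil": (53, 0.27),
         "solar": (27, 0.14), "wind": (32, 0.17), "coal": (20, 0.10)}
for source, (twh, share) in pairs.items():
    print(f"{source}: implied total {twh / share:.0f} TWh")
# All land in the ~190-200 TWh range, so the numbers are consistent.
```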
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified
