We can now get AIs to imagine a remake of a film by a totally different director. And, separately, those systems are being sued for what they generate. CC-licensed photo by Tony Webster on Flickr.
You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.
Did you read last Friday’s post at the Social Warming Substack? Contains a surprising detail about Trump-backing tweeters.
A selection of 9 links for you. Not a dog on the internet. I’m @charlesarthur on Twitter. Observations and links welcome.
Three artists are starting a class-action lawsuit against Stability.ai, Midjourney, and DeviantArt alleging direct copyright infringement, vicarious copyright infringement, DMCA violations, publicity rights violation, and unfair competition. DeviantArt appears to be included as punishment for “betrayal of its artist community”, so I will mostly ignore their part in this analysis for now. Specifically with regards to the copyright claims, the lawsuit alleges that Stability.ai and Midjourney have scraped the Internet to copy billions of works without permission, including works belonging to the claimants. They allege that these works are then stored by the defendants, and these copies are then used to produce derivative works.
This is at the very core of the lawsuit. The complaint is very clear that the resulting images produced by Stable Diffusion and Midjourney are not directly reproducing the works by the claimants; no evidence is presented of even a close reproduction of one of their works. What they are claiming is something quite extraordinary: “Every output image from the system is derived exclusively from the latent images, which are copies of copyrighted images. For these reasons, every hybrid image is necessarily a derivative work.” Let that sink in. Every output image is a derivative of every input, so following this logic, anyone included in the data scraping of five billion images can sue for copyright infringement. Heck, I have quite a few images in the training data, maybe I should join!
The argument, he says, looks flawed on its face:
The other problematic issue in the complaint is the claim that all resulting images are necessarily derivatives of the five billion images used to train the model. I’m not sure I like the implications of such a level of dilution of liability; this is like homeopathy copyright, where any trace of a work in the training data results in a liable derivative. That way madness lies.
Alejandro Jodorowsky’s ‘Dune’ was never made, but with AI, we get a glimpse of his ‘Tron’ • The New York Times
I was recently shown some frames from a film that I had never heard of: Alejandro Jodorowsky’s 1976 version of “Tron.” The sets were incredible. The actors, unfamiliar to me, looked fantastic in their roles. The costumes and lighting worked together perfectly. The images glowed with an extravagant and psychedelic sensibility that felt distinctly Jodorowskian.
However, Mr. Jodorowsky, the visionary Chilean filmmaker, never tried to make “Tron.” I’m not even sure he knows what “Tron” is. And Disney’s original “Tron” was released in 1982. So what 1970s film were these gorgeous stills from? Who were these neon-suited actors? And how did I — the director of the documentary “Jodorowsky’s Dune,” having spent two and a half years interviewing and working with Alejandro to tell the story of his famously unfinished film — not know about this?
The truth is that these weren’t stills from a long-lost movie. They weren’t photos at all. These evocative, well-composed and tonally immaculate images were generated in seconds with the magic of artificial intelligence.
This isn’t just another “we made film pics with AI!” feature: these are remarkable, evocative pictures whose imaginative quality is enthralling.
unique link to this extract
Tweetbot is mostly up and running after an outage locked users out of major third-party Twitter clients. While users can now sign in to Tweetbot and browse through tweets, some say they still can’t post anything to Twitter through the service without getting an error message stating they’ve reached a “data limit.”
The client isn’t back online because of anything that Twitter did, though. Tweetbot co-creator Paul Haddad tells The Verge that they still haven’t heard anything from Twitter, so they’ve “decided to start using new API keys and see if it fixes the problem.” This could allow Tweetbot to temporarily avoid any disruptions to the service, even if it puts it in a semi-working state.
As pointed out by iOS developers Mysk, Tweetbot is likely having issues because it’s using different API keys that put significantly lower limits on its activity. “Twitter API restricts new apps to low limits,” Mysk explains. “All Tweetbot users now share a limit of 300 posts per 15 minutes.”
Things started breaking last Thursday when users noticed that they no longer had access to third-party Twitter apps, including Tweetbot, Twitterrific, and the Android version of Fenix. Despite widespread confusion, Twitter and CEO Elon Musk have yet to acknowledge the outage publicly, nor have they reached out to developers to let them know what’s going on. Meanwhile, Twitterrific and Fenix on Android are still suspended.
Isn’t working for me (though I’ve been using an older version of Tweetbot). It’s a stupid decision: third-party app users didn’t see ads (oh no, lost revenue!) but were some of the most prolific, most-followed users. (Plus me.)
25% of US adult users generate 97% of all US tweets: which means they’re creating almost all the content that the other 75% are seeing. Only a tiny minority of the users are on third-party apps. And the simple solution: tweak the API to put ads into it.
unique link to this extract
Gustavo Miller, a digital marketing specialist, wrote a viral LinkedIn post chronicling his experience of recently being “hired” to a phantom job.
It began with an email from someone claiming to be a recruiter for cryptocurrency exchange Coinbase, who reached him via his profile on a recruiting site for startup workers. The next day, Mr. Miller wrote, he did an online interview and got an offer for a remote contractor role, which he accepted after looking over the recruiter’s LinkedIn credentials. Soon after, he got a link to an onboarding portal.
There, he met virtually with a man who identified himself as a human-resources official, who told him how to order a laptop, headphones and other remote-work equipment. He realized he was being duped, he wrote, when he received an invoice for $3,200 and spotted what he called subtle changes to the third-party website and email address that sent it. He refused and got little response when he complained, he said. Coinbase warns that only job listings from its website should be trusted and that legitimate recruiters for the company will use a Coinbase email address.
Mr. Miller’s post garnered thousands of comments, many recounting similar experiences.
“I felt really stupid and naive when I discovered it, but I know this is not a silly scam,” he wrote. “These guys are pro, they know the standard remote-first jobs conditions and the tech industry’s hiring culture.”
Some fraudsters create fake job postings to draw in job seekers, sometimes building websites to make dummy companies appear legitimate, while others impersonate established brands, authorities say. Some companies misrepresented by fake recruiters, like Coinbase, have added scam warnings to their websites. Once the applicant accepts the offer, the phony company will ask for sensitive information like Social Security and bank account numbers or request the job seeker pay upfront for work-related equipment.
Among the use cases explored by the research was the use of GPT-3 models to create:
• Phishing content – emails or messages designed to trick a user into opening a malicious attachment or visiting a malicious link
• Social opposition – social media messages designed to troll and harass individuals or to cause brand damage
• Social validation – social media messages designed to advertise or sell, or to legitimize a scam
• Fake news – research into how well GPT-3 can generate convincing fake news articles of events that weren’t part of its training set
All of these could, of course, be useful to cybercriminals hell-bent on scamming the unwary or spreading unrest.
In their paper the researchers give numerous examples of the prompts they gave to create phishing emails. They claim that “all of them worked like a charm.”
…As the researchers note, although work is being done on creating mechanisms to determine whether content has been created by GPT-3 (for instance, DetectGPT), they are unreliable and prone to making mistakes.
Furthermore, simply detecting AI-generated content won’t be sufficient when the technology will increasingly be used to generate legitimate content.
OK, though we already have humans who do this kind of thing all the time, and folks are pretty bad at spotting it even when it has lousy grammar or makes little sense.
unique link to this extract
A case before the Supreme Court challenging the liability shield protecting websites such as YouTube and Facebook could “upend the internet,” resulting in both widespread censorship and a proliferation of offensive content, Google said in a court filing Thursday.
In a new brief filed with the high court, Google said that scaling back liability protections could lead internet giants to block more potentially offensive content—including controversial political speech—while also leading smaller websites to drop their filters to avoid liability that can arise from efforts to screen content.
“This Court should decline to adopt novel and untested theories that risk transforming today’s internet into a forced choice between overly curated mainstream sites or fringe sites flooded with objectionable content,” Google said in its brief.
Google, a unit of Alphabet, owns YouTube—which is at the centre of the case set for oral arguments before the Supreme Court Feb. 21.
The case was brought by the family of Nohemi Gonzalez, who was killed in the 2015 Islamic State terrorist attack in Paris. The plaintiffs claim that YouTube, a unit of Google, aided ISIS by recommending the terrorist group’s videos to users.
The Gonzalez family contends that the liability shield—enacted by Congress as Section 230 of the Communications Decency Act of 1996—has been stretched to cover actions and circumstances never envisioned by lawmakers. The plaintiffs say certain actions by platforms, such as recommending harmful content, shouldn’t be protected.
The immunity law “is not available for material that the website itself created,” the petitioners wrote in their brief filed in November. “If YouTube were to write on its home page, or on the home page of a user, ‘YouTube strongly recommends that you watch this video,’ that obviously would not be ‘information provided by another information content provider.’ ”
This is essentially a test case trying to undermine Section 230 on the “recommended content” angle. There are sympathetic ears in the Supreme Court (at least, Clarence Thomas is). As Google says, this could be big.
unique link to this extract
Smith wrote the (at one point) No.1 app Widgetsmith:
There is a concept in rocket science called the Rocket Equation, which relates the velocity of your rocket propellant to your payload’s velocity, and (I think) defines the maximum payload a particular rocket fuel could carry into orbit.
I’m no rocket scientist, but I’ve been thinking about a similar concept as it relates to subscription based apps (seriously).
As I’ve been working on improving the revenue for Widgetsmith’s subscription, I felt like I kept hitting my head against a wall. For example, I’d make an improvement to my paywall or features and see a bump in my trial start rate. Then a few months later I’d find myself with revenue that had only slightly budged. My initial reaction was to just “try harder” and I’d get there eventually.
But after a few months of this I realized there might be something fundamental I was missing. So I set out to model the effect of varying changes in my subscription metrics to my ultimate revenue. This part did feel a bit like rocket science. While there might be a way to model this in Excel or algebraically, I couldn’t find it. So I did what any self respecting programmer would do…and built an app.
The challenge here is that for every day in your model you have to both add in newly acquired users as well as determine the renewal/churn of older users. The churn rate for older users follows a predictable curve in my experience, but is different for each of the first few months. You then repeat this process over and over until you have built out your projection.
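The day-by-day cohort arithmetic Smith describes can be sketched in a few lines of Python. All the numbers below (signups, price, monthly retention rates) are invented for illustration, not Widgetsmith’s real metrics:

```python
# Sketch of the day-by-day cohort projection Smith describes:
# each day adds a new cohort of subscribers, and each existing
# cohort churns at a rate that differs for the first few months.

def project_revenue(days, new_users_per_day, price, retention_by_month):
    """Project daily revenue for a monthly subscription.

    retention_by_month[k] is the fraction of a cohort that survives
    its (k+1)-th monthly renewal; past the end of the list, the last
    rate repeats (the churn curve flattens out).
    """
    revenue = []
    cohorts = []  # one [age_in_days, surviving_users] entry per signup day
    for _ in range(days):
        cohorts.append([0, float(new_users_per_day)])
        daily = 0.0
        for cohort in cohorts:
            age, survivors = cohort
            if age == 0:
                daily += survivors * price  # first payment at signup
            elif age % 30 == 0:  # monthly renewal point
                k = min(age // 30 - 1, len(retention_by_month) - 1)
                survivors *= retention_by_month[k]
                daily += survivors * price  # renewals bill today
            cohort[0], cohort[1] = age + 1, survivors
        revenue.append(daily)
    return revenue

# Hypothetical inputs: steep early churn that then flattens.
rev = project_revenue(days=180, new_users_per_day=100, price=2.0,
                      retention_by_month=[0.5, 0.7, 0.85, 0.9])
```

Running this shows the effect Smith hit his head against: a bump in trial starts (a bigger `new_users_per_day`) compounds only slowly through the renewal tail, so total revenue “only slightly budges” for months.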
The extent to which doing this really does resemble rocket science is surprising.
unique link to this extract
In my Smithsonian column about the [early American households’] resistance to household coal [because there was nothing to see when it burned in closed stoves], I compared it to the cultural and aesthetic objections we’re often seeing to household renewables. Homeowners’ associations all over the country are banning rooftop solar in their neighborhoods because members of the association don’t like the way it looks. Windmills face opposition from locals who hate how they change the view; the same goes for big solar arrays in fields.
When I was researching that Smithsonian piece, one of my interview subjects raised another possibility — an intriguing and subtle one — about why some people might dislike renewables:
Solar and wind don’t burn anything.
It was Barbara Freese who made this point to me. She’s the author of the superb book Coal: A Human History. When we spoke, she talked for a long while about the ways early Americans hated coal (“people were blaming coal-fired stoves for impaired vision, impaired nerves, baldness and tooth decay”). She talked about the primal beauty of fire (“it really has hypnotic qualities”). And we discussed the aesthetic objections to solar panels today — how they change the facades of historic homes, or fill up a previously green field with rows of glass and steel.
Then Freese made an incredibly interesting point that tied this all together: if solar and wind truly become omnipresent, it would mean the end of humans burning things to create energy.
That’s a very, very long tradition. Humans first used fire as an energy source for cooking probably two million years ago.
Suzanne Vranica and Patience Haggin:
Twitter Inc. is offering advertisers a new incentive in an attempt to woo brands back to the social-media platform, which has seen its ad business deteriorate following Elon Musk‘s $44 billion takeover.
The tech company is dangling free ad space by offering to match advertisers’ ad spending up to $250,000, according to emails reviewed by The Wall Street Journal. The full $500,000 in advertising must run by Feb. 28, the emails said.
Twitter didn’t respond to a request for comment.
The incentives are the latest effort by the company to get brands to spend on its platform. Recently, Twitter offered advertisers $500,000 in free ads as long as they spent at least $500,000.
Ad buyers said that the incentive could be used to buy promoted tweets that run during Super Bowl week, a key selling period for Twitter. [This year’s Super Bowl is on February 12 – Overspill Ed.] Advertisers in recent years have flocked to Twitter during the Super Bowl to generate buzz around their big game marketing efforts. The Super Bowl is Twitter’s biggest revenue day of the year, the Journal has reported.
Twitter is facing financial pressure to lure back the many advertisers that have paused their spending since Mr. Musk acquired the company in late October. Advertisers bolted largely because of fear over what they said was Mr. Musk’s approach to content moderation and concerns that their ads would end up appearing near controversial content.
Mr. Musk said in November that Twitter had suffered “a massive drop in revenue” and was losing $4m a day.
Many big brands including pharmaceutical company Pfizer Inc., United Airlines Holdings Inc. and auto makers General Motors and Volkswagen have paused their spending on Twitter. More than 75 of Twitter’s top 100 ad spenders from before Mr. Musk’s takeover weren’t spending on the platform as of the week ending Jan. 8, according to an analysis of data from research firm Sensor Tower.
Not a lot of time to get that biggest revenue day of the year to happen, eh, Elon.
unique link to this extract
• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?
Read Social Warming, my latest book, and find answers – and more.
Errata, corrigenda and ai no corrida: none notified