Carmakers are being too quick to push “self-driving” systems, according to a specialist in the US, who says they make drivers incautious. CC-licensed photo by Ted Drake on Flickr.
You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.
On Friday there was another post at the Social Warming Substack though Substack’s emails didn’t reach everyone (such as me). It’s about Mastodon and Content Warnings.
A selection of 10 links for you. Non-toxic. I’m @charlesarthur on Twitter. On Mastodon: https://newsie.social/@charlesarthur. Observations and links welcome.
The death of Nohemi Gonzalez led to a Supreme Court fight over Section 230 with Google • The Washington Post
Gerrit De Vynck:
»
In 2017, the Gonzalez family and the lawyers filed their case, arguing that Google’s YouTube video site broke the US Anti-Terrorism Act by promoting Islamic State propaganda videos with its recommendation algorithms. Google says the case is without merit because the law protects internet companies from liability for content posted by their users. The lower courts sided with Google, but the family appealed, and last October the Supreme Court agreed to hear the case.
The Supreme Court’s decision could have major ramifications for both the internet as we know it and the tech giants who dominate it. For nearly three decades, Section 230, the provision of law that is at the heart of the Supreme Court case, has protected internet companies from being liable for the content posted by their users, allowing platforms like Facebook and YouTube to grow into the cultural and commercial behemoths they are today.
Advocates argue the law is vital to a free and open internet, giving companies space to allow users to freely post what they want, while also giving them the ability to police their platforms as they see fit, keeping them from being further inundated with spam or harassment. Critics of the law say it gives tech companies a pass to shirk responsibility or engage in unfair censorship. Seventy-nine outside companies, trade organizations, politicians and nonprofits have submitted arguments in the case.
Gonzalez said she never imagined the case would become so significant. “I can’t even believe now that I’m here in Washington and about to go to court,” she said.
«
Well now, there was no internet when the Constitution was written, so.. I don’t know how they’ll interpret this.
unique link to this extract
Meta launches subscription service for Facebook and Instagram • Bloomberg via BNN Bloomberg
Kurt Wagner:
»
Facebook parent company Meta Platforms Inc. is launching a subscription service called Meta Verified that will include a handful of additional perks and features, including account verification badges for those who pay.
The new subscription will cost $11.99 per month — $14.99 if purchased through the iOS app — and is primarily targeted toward content creators. In addition to a verification badge, the subscription includes “proactive account protection, access to account support, and increased visibility and reach,” a Meta spokesperson said in an email.
Chief Executive Officer Mark Zuckerberg announced the new product via his Instagram channel, a service unveiled in the past week. The option will be available on both Facebook and Instagram, but they’ll be separate subscriptions.
Subscription offerings have become popular for social networking companies in recent years as a way to diversify their businesses, which are heavily reliant on advertising. Snap Inc. has an offering called Snapchat Plus, and Twitter Inc. is also pushing a subscription offering right now, with account verification being a major selling point.
Meta makes almost all of its revenue from advertising, but that business can be inconsistent and severely affected by the broader economy. Meta’s business was hit hard at the beginning of the pandemic, for instance, and again last year during the war in Europe and the rise of inflation. Subscriptions offer a more consistent revenue stream.
It’s unclear, though, if users want to pay for services that have always been free. Twitter’s subscription offering has been slow to take off. Perhaps the most valuable aspect of Meta’s subscription package will be “increased visibility.” Standing out on Facebook or Instagram is more difficult these days, even among a user’s own followers. The company has started to push users toward more content they may be interested in, not necessarily content from people they follow.
«
Access to account support! Imagine that. How revolutionary to offer support for a service that you offer. But of course this is only for “content creators”, not all the rest of the public who.. create content, just not sufficiently enthusiastically.
unique link to this extract
Peabody EDI Office responds to MSU shooting with email written using ChatGPT • The Vanderbilt Hustler
Rachael Perrotta:
»
A note at the bottom of a Feb. 16 email from the Peabody Office of Equity, Diversity and Inclusion regarding the recent shooting at Michigan State University stated that the message had been written using ChatGPT, an AI text generator.
Associate Dean for Equity, Diversity and Inclusion Nicole Joseph sent a follow-up apology email to the Peabody community on Feb. 17 at 6:30 p.m. CST. She stated that using ChatGPT to write the initial email was “poor judgment.”
“While we believe in the message of inclusivity expressed in the email, using ChatGPT to generate communications on behalf of our community in a time of sorrow and in response to a tragedy contradicts the values that characterize Peabody College,” the follow-up email reads. “As with all new technologies that affect higher education, this moment gives us all an opportunity to reflect on what we know and what we still must learn about AI.”
«
Maybe.. not to use it to write letters expressing deep human empathy? Though I can’t get into the mindset of someone who thinks “hey, a commiserating letter to write after a shooting that killed three students and left five in critical condition – I know, I’ll get the computer to write it!”
Or in the words of Jackson Davis, a senior (final-year undergraduate) there,
»
“They release milquetoast, mealymouthed statements that really say nothing whenever an issue arises on or off campus with real political and moral stakes,” Davis said. “I consider this more of a mask-off moment than any sort of revelation about the disingenuous nature of academic bureaucracy.”
«
Bing and Google’s chatbots are a disaster • The Atlantic
Matteo Wong:
»
even if ChatGPT and its cousins had learned to predict words perfectly, they would still lack other basic skills. For instance, they don’t understand the physical world or how to use logic, are terrible at math, and, most germane to searching the internet, can’t fact-check themselves. Just yesterday, ChatGPT told me there are six letters in its name.
These language programs do write some “new” things—they’re called “hallucinations,” but they could also be described as lies. Similar to how autocorrect is ducking terrible at getting single letters right, these models mess up entire sentences and paragraphs. The new Bing reportedly said that 2022 comes after 2023, and then stated that the current year is 2022, all while gaslighting users when they argued with it; ChatGPT is known for conjuring statistics from fabricated sources. Bing made up personality traits about the political scientist Rumman Chowdhury and engaged in plenty of creepy, gendered speculation about her personal life. The journalist Mark Hachman, trying to show his son how the new Bing has antibias filters, instead induced the AI to teach his youngest child a vile host of ethnic slurs (Microsoft said it took “immediate action … to address this issue”).
Asked about these problems, a Microsoft spokesperson wrote in an email that, “given this is an early preview, [the new Bing] can sometimes show unexpected or inaccurate answers,” and that “we are adjusting its responses to create coherent, relevant and positive answers.” And a Google spokesperson told me over email, “Testing and feedback, from Googlers and external trusted testers, are important aspects of improving Bard to ensure it’s ready for our users.”
«
Maybe we should think of these systems as giving voice to the id of the internet: to the frothing roar subsumed and encoded in billions of web pages. When Sydney, Bing’s evil twin, tells you to leave your spouse, it’s the internet roaring at you as it roars at everyone else.
unique link to this extract
Tootfinder
»
The search is case-insensitive. You can append * to search for words starting with the search term, but you cannot prepend *. Words must be at least three letters long. You can use NEAR, NOT, AND and OR.
«
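So a hypothetical query such as climat* AND energy NOT nuclear should, on that description, match posts containing a word beginning with “climat” and the word “energy” but not “nuclear”.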
This is an opt-in system, which means that though it’s a great idea, it won’t get traction. People just don’t do things which aren’t defaults. (Rather like my discovery that you can opt out of seeing Content Warnings on Mastodon – it’s a setting, on Ivory and others, but turned on by default.)
Further reading: explaining Mastodon and the Fediverse.
unique link to this extract
Even the FBI says you should use an ad blocker • TechCrunch
Zack Whittaker:
»
consider giving the gift of security with an ad blocker.
That’s the takeaway message from an unlikely source — the FBI — which this week issued an alert warning that cybercriminals are using online ads in search results with the ultimate goal of stealing or extorting money from victims.
In a pre-holiday public service announcement, the FBI said that cybercriminals are buying ads to impersonate legitimate brands, like cryptocurrency exchanges. The ads are often placed at the top of search results with “minimum distinction” between the ads and the organic results, the feds say, and can lead to sites that look identical to those of the brands being impersonated. Malicious ads are also used to trick victims into installing malware disguised as genuine apps, which can steal passwords and deploy file-encrypting ransomware.
One of the FBI’s recommendations for consumers is to install an ad blocker.
As the name suggests, ad blockers are web browser extensions that broadly block online ads from loading in your browser, including in search results. By blocking ads, would-be victims are not shown any ads at all, making it easier to find and access the websites of legitimate brands.
Ad blockers don’t just remove the enormous bloat from websites, like auto-playing video and splashy ads that take up half the page, which make your computer fans run like jet engines. Ad blockers are also good for privacy, because they prevent the tracking code within ads from loading. That means the ad companies, like Google and Facebook, cannot track you as you browse the web, or learn which websites you visit, or infer what things you might be interested in based on your web history.
«
I missed this when it happened, on December 21 last year. But it’s nice that the US government is telling you to adblock. The linked article has a few recommendations.
unique link to this extract
An update on two-factor authentication using SMS on Twitter • Twitter Blog
“Twitter Inc”:
»
To date, we have offered three methods of 2FA: text message, authentication app, and security key.
While historically a popular form of 2FA, unfortunately we have seen phone-number based 2FA be used – and abused – by bad actors. So starting today, we will no longer allow accounts to enroll in the text message/SMS method of 2FA unless they are Twitter Blue subscribers. The availability of text message 2FA for Twitter Blue may vary by country and carrier.
Non-Twitter Blue subscribers that are already enrolled will have 30 days to disable this method and enroll in another. After 20 March 2023, we will no longer permit non-Twitter Blue subscribers to use text messages as a 2FA method. At that time, accounts with text message 2FA still enabled will have it disabled. Disabling text message 2FA does not automatically disassociate your phone number from your Twitter account.
«
Twitter’s transparency report from 2H 2021 shows that only 2.6% of Twitter users had 2FA enabled – yet of those, 74% were using SMS.
The obvious reason for doing this is cost (the messages are charged via Twilio, which handles the SMS side with carriers), but as this thread points out, SMS is also very liable to fraud between complicit hackers and unscrupulous telcos. However, charging $8/month won’t stop people who can earn tens of thousands of dollars per month per account.
Arguably, better to deprecate SMS. Authentication apps such as Authy are better in every way. (Thanks Nick for the fraud thread.)
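For the curious: authenticator apps generate time-based one-time codes locally from a secret shared at enrolment (TOTP, RFC 6238), so nothing travels over the phone network to be intercepted or “pumped”. A minimal sketch of the idea, assuming the pyotp library:

```python
# Time-based one-time passwords (TOTP) without SMS: both sides share a secret once,
# then derive matching six-digit codes from the secret plus the current time.
import pyotp

secret = pyotp.random_base32()   # shared between server and authenticator app at enrolment
totp = pyotp.TOTP(secret)

code = totp.now()                # what the app displays, rotating every 30 seconds
print(code)
print(totp.verify(code))         # what the server checks – no phone network involved
```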
unique link to this extract
I watched Elon Musk kill Twitter’s culture from the inside • The Atlantic
Rumman Chowdhury:
»
Twitter has never been perfect. Jack Dorsey’s distracted leadership across multiple companies kept him from defining a clear strategic direction for the platform. His short-tenured successor, Parag Agrawal, was well intentioned but ineffectual. Constant chaos and endless structuring and restructuring were ongoing internal jokes. Competing imperatives sometimes manifested in disagreements between those of us charged with protecting users and the team leading algorithmic personalization. Our mandate was to seek outcomes that kept people safe. Theirs was to drive up engagement and therefore revenue. The big takeaway: ethics don’t always scale with short-term engagement.
A mentor once told me that my role was to be a truth teller. Sometimes that meant confronting leadership with uncomfortable realities. At Twitter, it meant pointing to revenue-enhancing methods (such as increased personalization) that would lead to ideological filter bubbles, open up methods of algorithmic bot manipulation, or inadvertently popularize misinformation. We worked on ways to improve our toxic-speech-identification algorithms so they would not discriminate against African-American Vernacular English as well as forms of reclaimed speech. All of this depended on rank-and-file employees. Messy as it was, Twitter sometimes seemed to function mostly on goodwill and the dedication of its staff. But it functioned.
Those days are over. From the announcement of Musk’s bid to the day he walked into the office holding a sink, I watched, horrified, as he slowly killed Twitter’s culture. Debate and constructive dissent was stifled on Slack, leaders accepted their fate or quietly resigned, and Twitter slowly shifted from being a company that cared about the people on the platform to a company that only cares about people as monetizable units. The few days I spent at Musk’s Twitter could best be described as a Lord of the Flies–like test of character as existing leadership crumbled, Musk’s cronies moved in, and his haphazard management—if it could be called that—instilled a sense of fear and confusion.
«
In addition: Twitter is now having trouble paying some employees [in Europe] on time.
unique link to this extract
How we boosted marketing email open rate from 20% to 60% • Catonmat
»
At Browserling and Online Tools, one simple trick changed our marketing email open rate from 20% to 60%.
This trick was to start sending the emails at the same hour the user last visited our website.
For example, if a user was last on our website at 3:43pm, we now send the marketing emails to that user at around 3pm.
Before this trick, we were sending the emails at random times.
«
Not even regular times? Like some newsletter writers do for their daily emails? Even so, this is a surprising-and-obvious move, and they’ve got the data to confirm it.
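The mechanics are simple enough to sketch – assuming you store each user’s last-visit timestamp, something along these lines (a hypothetical function, not Browserling’s actual code) picks the next send time at the top of that same hour:

```python
from datetime import datetime, timedelta

def next_send_time(last_visit: datetime, now: datetime) -> datetime:
    """Schedule the next marketing email at the top of the hour the user last visited."""
    target = now.replace(hour=last_visit.hour, minute=0, second=0, microsecond=0)
    if target <= now:
        target += timedelta(days=1)   # that hour has already passed today, so send tomorrow
    return target

# Last visit at 3:43pm -> next send at 3pm today (if it's before 3pm) or 3pm tomorrow.
print(next_send_time(datetime(2023, 2, 10, 15, 43), datetime(2023, 2, 20, 9, 0)))
```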
unique link to this extract
Carmakers are pushing autonomous tech. This engineer wants limits • The New York Times
Cade Metz:
»
Last fall, Missy Cummings sent a document to her colleagues at the National Highway Traffic Safety Administration that revealed a surprising trend: When people using advanced driver-assistance systems die or are injured in a car crash, they are more likely to have been speeding than people driving cars on their own.
The two-page analysis of nearly 400 crashes involving systems like Tesla’s Autopilot and General Motors’ Super Cruise is far from conclusive. But it raises fresh questions about the technologies that have been installed in hundreds of thousands of cars on US roads. Dr. Cummings said the data indicated that drivers were becoming too confident in the systems’ abilities and that automakers and regulators should restrict when and how the technology was used.
People “are over-trusting the technology,” she said. “They are letting the cars speed. And they are getting into accidents that are seriously injuring them or killing them.”
Dr. Cummings, an engineering and computer science professor at George Mason University who specializes in autonomous systems, recently returned to academia after more than a year at the safety agency. On Wednesday, she will present some of her findings at the University of Michigan, a short drive from Detroit, the main hub of the US auto industry.
…In interviews last week, Dr. Cummings said automakers and regulators ought to prevent such systems from operating over the speed limit and require drivers using them to keep their hands on the steering wheel and eyes on the road.
“Car companies — meaning Tesla and others — are marketing this as a hands-free technology,” she said. “That is a nightmare.”
«
• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?
Read Social Warming, my latest book, and find answers – and more.
Errata, corrigenda and ai no corrida: none notified