Sure, you could use a cheap sticker to cover your webcam. But isn’t that just paranoia, and aren’t there cameras always on everywhere already? CC-licensed photo by Quinn Comendant on Flickr.
You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.
A selection of 10 links for you. Very much here with you. I’m @charlesarthur on Twitter. Observations and links welcome.
IEEE Spectrum: Second Life was almost like a proto-metaverse. Why do you think it didn’t break through to the mainstream?
[Former chief architect of Second Life, Philip] Rosedale: It’s interesting to note that Second Life is, in my opinion, still the largest and the closest thing to a metaverse that we have as it relates to grown-ups. The environments that are used by kids, such as Roblox, are very interesting as well but very different in terms of what they offer. If you talk about people wanting to go to a live concert, or wanting to go shopping or something like that, I think Second Life is still US $650 million a year in transactions and a million people using it. But Second Life didn’t grow beyond about a million people. It’s been growing more with COVID, but as you say, it didn’t break out, it didn’t become a billion people. And the hope that Facebook has is that there’ll be a billion people using a metaverse.
So I think the reason why it didn’t, and this reason is still very true today, is simply that most adults are not yet comfortable engaging with new people, or engaging socially, in a multi-player context online. I’ve worked on this a lot and it’s been incredibly rewarding for the people for whom it has worked. And even work we did more recently with High Fidelity, which was very similar—a full VR environment, but with the headsets rather than with desktop—there are small groups of people that have gotten immense pleasure or opportunity to make money, and things like that, out of these environments.
But they’re still not for everybody. People are not able to communicate with facial and body language yet, in a way that is anywhere near adequate. And I think that it’s a very steep cliff. If you have the alternative, to have your social life happen in the real world, I think a great majority of people make that choice, and it’s a binary choice. They don’t split their social life partly between the real world and partly online. I think that’s the reason why we don’t see the breakout yet, and nothing that Facebook has said or demonstrated changes what I just said.
The point about facial expressions is a big one that I hadn’t thought of before, but we take so much from how people respond facially in a meeting. Even the annoyance of Zoom does let you get some of that information. A half-body avatar, nope.
unique link to this extract
The omicron coronavirus variant was already in the Netherlands a week before South Africa reported the new variant to the World Health Organization, according to a Dutch health agency.
The variant was recently identified in retests of samples that were taken on Nov. 19 and 23, the National Institute for Public Health and the Environment, or RIVM, announced on Tuesday.
Revelations about the variant’s existence in Europe before it was reported in Africa add a new twist to questions about where and how the variant originated — and whether travel bans on South Africa and its neighbors are an appropriate response to the variant.
South African President Cyril Ramaphosa says his country is being punished for detecting the variant and informing global health authorities about it.
“You do not try and contain a virus through imposing bans unscientifically and indiscriminately,” Ramaphosa said on Tuesday, adding that measures such as testing all travelers are the best tools for combating the pandemic.
South African officials raised the alarm about the heavily mutated variant, B.1.1.529, on Nov. 24. Two days later, the WHO classified it as a variant of concern and dubbed it omicron.
Clear that there has been community transmission for ages (relative to incubation). Where it really originated is anyone’s guess. But it certainly strengthens the case for not naming variants after the places where they were first identified.
unique link to this extract
Tatum Hunter, with a pretty daft (and paranoid) article – judging by the headline – which nevertheless has this interesting observation:
it’s only a matter of time before smart glasses are part of our everyday lives, according to [CEO and co-founder of nonprofit security company XR Safety Initiative, Kavya] Pearlman. And that’s not to mention the camera-enabled connected devices springing up in our homes, cities and workplaces.
We’re heading toward an era of “constant reality capture,” she said, in which people and companies will be recording wherever we go. It raises privacy questions we haven’t yet tackled.
“What happens to our privacy when these [webcam] covers are just completely a historical phenomenon, and nobody cares anymore because everything is recorded anyway?” she said. “We’re moving into this culture where the question of ‘Should I have a mechanical cover to shut off any camera that could be spying on me?’ is moot.”
For Pearlman, real privacy is a matter of context, control and choice: In what context am I willing to be recorded? What control do I get over the data that’s captured? And was I given a choice to opt out?
Right now, companies make those types of privacy decisions, not consumers. In the future, that needs to change, Pearlman said.
“I think we need to open, decentralize and make these decisions collectively so that billions of people don’t feel powerless when these choices are taken away,” she said.
Doesn’t matter whether you decentralise it or not. What matters is what the default is: if it’s on by default, it’ll stay on for over 90% of users. Ditto for off. (Unless somehow you make users choose one or the other with neither preferred early on.) In effect, companies always make the decision.
unique link to this extract
Commuters in Los Angeles now spend 119 hours each year stuck in unmoving traffic; in Moscow, they spend an average of 210 hours, or nine entire days. There are as many as 2 billion parking spaces in the U.S. (eight times more than there are cars), often on valuable urban land that could otherwise be used—along with excess road space—for housing or parks. Pollution from tailpipes is linked to hundreds of thousands of deaths globally each year. SUVs, alone, now emit more than 700 megatons of greenhouse gases annually, more than the total emissions of the U.K. and the Netherlands. More than 1.25 million people are killed in road crashes each year.
In response, some cities and neighborhoods are beginning to rethink where cars can go—and redesigning streets to prioritize other uses, from public transportation to parks. It’s happening around the world, including on major streets in cities like San Francisco and New York, but happening at the largest scale in several European cities. Here are a few of the most interesting examples.
She starts with “Amsterdam”, and then says that it’s not actually car-free, which is definitely an annoying way to start a list of car-free cities. It’s going car-fewer, which is progress at least.
unique link to this extract
There are two related questions here: a) Could this new prototype ever work well enough and affordably enough that it could be in wide(r) use? And more alchemically, b) does it offer enough people a sufficiently interesting and useful new ability that they’d change their behavior around it? Do we desire this new thing?
I think b) is, of the two, the much harder question to answer. There are a lot of convoluted reasons why a technology becomes desirable. Sometimes it’s because the tech solves a problem that’s low on Maslow’s pyramid, like clean-water engineering. Everyone wants that. (Indeed, many technologies that are critical to basic existence are often infrastructural and civic.) But even with many consumer technologies — i.e. when you’re buying something that isn’t for basic survival — you can detect when a new tech triggers a novel, previously latent desire.
Personal cameras did that. In the late 19th century, people were very familiar with photography, but the demand for owning and carrying around a camera wasn’t obvious until the Brownie came out. Suddenly, everyday people discovered photography was delightful for personal expression, and a way to document the arc of their lives.
But other times in consumer tech, b) is much trickier to discern. GPS chips in our phones: Did people really want that? On the one hand, GPS gives your phone enormous utility, as with turn-by-turn maps. On the other hand, GPS lets authorities track your every move, which most people find icky. Worse, the market tends to seal off options, making it difficult to know whether people really prefer the current state of affairs.
A lot of these questions are more easily analysed by what Steven Johnson calls “adjacent technologies” – finding what things are easily integrated or repurposed for an existing need. GPS had been around a long time before it found its way into phones; the adjacent technology was chip fabs capable of turning out chips that could analyse GPS signals by the multimillion. But to know what the adjacent technologies are, you need to be really immersed in all of them.
unique link to this extract
The US scientists who created the first living robots say the life forms, known as xenobots, can now reproduce — and in a way not seen in plants and animals.
Formed from the stem cells of the African clawed frog (Xenopus laevis) from which it takes its name, xenobots are less than a millimeter (0.04in) wide. The tiny blobs were first unveiled in 2020 after experiments showed that they could move, work together in groups and self-heal.
Now the scientists that developed them at the University of Vermont, Tufts University and Harvard University’s Wyss Institute for Biologically Inspired Engineering said they have discovered an entirely new form of biological reproduction different from any animal or plant known to science.
“I was astounded by it,” said Michael Levin, a professor of biology and director of the Allen Discovery Center at Tufts University who was co-lead author of the new research.
“Frogs have a way of reproducing that they normally use but when you … liberate (the cells) from the rest of the embryo and you give them a chance to figure out how to be in a new environment, not only do they figure out a new way to move, but they also figure out apparently a new way to reproduce.”
Has anyone there read Jurassic Park? Frogs tend to be important to Life Finding A Way.
unique link to this extract
Moorfields patient receives world’s first 3D printed eye • Moorfields Eye Hospital NHS Foundation Trust
A Moorfields Eye Hospital NHS Foundation Trust patient was the first person in the world to be supplied solely with a fully digital 3D printed prosthetic eye on 25 November 2021. He first tried his eye on 11 November alongside a traditional acrylic prosthetic. By going home on 25 November with just his printed eye, he is the first patient to use a 3D printed eye as their sole prosthetic.
A 3D printed eye is a true biomimic and a more realistic prosthetic, with clearer definition and real depth to the pupil. Unlike traditional methods, it uses scans of the eye instead of an invasive mould of the eye socket, a procedure so difficult for children that they may need a general anaesthetic.
Crucially, the production process is much faster. Traditional acrylic prosthetic eyes are hand-painted and take about six weeks to complete. With 3D printing, once a scan has been taken, the prosthesis can be printed within two and a half hours. It is then sent to an ocularist to finish, polish and fit. The whole process takes just two to three weeks.
Steve Verze, the first patient, is an engineer in his 40s from Hackney.
Given the way 3D printing has fallen out of the public consciousness, it’s worth remembering that it does actually have great uses. It’s widely used for reconstructive surgery (teeth and skulls) and, of course, for eyes too now.
unique link to this extract
According to internal documents reviewed by Reveal from The Center for Investigative Reporting and WIRED, Amazon’s vast empire of customer data – its metastasizing record of what you search for, what you buy, what shows you watch, what pills you take, what you say to Alexa and who’s at your front door – had become so sprawling, fragmented and promiscuously shared within the company that the security division couldn’t even map all of it, much less adequately defend its borders.
In the name of speedy customer service, unbridled growth and rapid-fire “invention on behalf of customers” – in the name of delighting you – Amazon had given broad swathes of its global workforce extraordinary latitude to tap into customer data at will. It was, as former Amazon chief information security officer Gary Gagnon calls it, a “free-for-all” of internal access to customer information. And as information security leaders warned, that free-for-all left the company wide open to “internal threat actors” while simultaneously making it inordinately difficult to track where all of Amazon’s data was flowing.
To be clear: This story is not about Amazon Web Services, the cloud-computing wing that manages data for millions of enterprises and government agencies, which has its own, separate information security apparatus. It’s about the online retail platform used by hundreds of millions of ordinary consumers. And on that side of Amazon’s business, InfoSec staffers warned of an unnerving “inability to detect security incidents.”
By the time DeVore started testifying about Amazon’s long-standing commitment to privacy and security, the dangers that the security division had identified weren’t just theoretical. According to Reveal and WIRED’s findings, they were real, and they were pervasive. Across Amazon, some low-level employees were using their data privileges to snoop on the purchases of celebrities, while others were taking bribes to help shady sellers sabotage competitors’ businesses, doctor Amazon’s review system and sell knock-off products to unsuspecting customers. Millions of credit card numbers had sat in the wrong place on Amazon’s internal network for years, with the security team unable to establish definitively whether they’d been unduly accessed.
Though I suppose you could sort of intuit some of it from the way they’re so quick, on the chat function, to go over your purchases and find what you’ve got a problem with. “God Mode” access is so commonplace in startups that it’s hard to remove once they grow.
unique link to this extract
‘I am not gonna die on the internet for you!’: how game streaming went from dream job to a burnout nightmare • The Guardian
The fact is that, especially for up-and-coming streamers trying to make it in the crowded world of playing video games on the internet, the camera is almost never off. Sticking to a regular schedule is the best way to build an audience on Twitch, and those schedules regularly involve at least eight hours of continuous streaming, five days a week or more. “My sleep schedule shifted into the North American time zone because most of the people who were viewing my channel at the time were there,” says 36-year-old Cassie, a founder of the Black Twitch UK network, who has been streaming for five years under the name GeekyCassie. “I would do my day at work, nap a bit, and then stream for up to eight to 12 hours at night-time. I’d be absolutely beat, and then get up and do my work again … People burn out and then they don’t enjoy it any more.”
At that time Cassie was living at home with her mum, whose cooking and care enabled these ridiculous hours. “There’s absolutely no way that I would do that now. I don’t really feel like we should be encouraging it,” she says. “I see [young streamers] do things like 24‑hour live gaming marathons, then have an hour’s sleep, and then later that day I’ll see photos of them skating outside on Insta. I’m like: ‘How are you doing this? What is going on!?’”
“Burnout is an incredibly real thing in gaming,” says Imane Anys, AKA Pokimane, who has put in thousands of hours to become the most popular female streamer on Twitch, with 8.4 million subscribers. “A streamer sets their own work hours and it can be easy to fall into the trap of streaming eight to 12 hours a day, seven days a week. It’s frightening because people grind crazy long hours, and see results – hence why so many do it. I’ve veered away from doing extreme hours of livestreaming in an effort to upkeep my mental health and I’ve found that it aids in the longevity of my career.” Now she streams in shorter bursts, but even so, she only “usually” takes a day off a week to spend with friends or relaxing.
Ben Thompson has a modest proposal:
one of the biggest challenges facing would-be Twitter clones is not simply that a complete lack of moderation leads to an overwhelming amount of crap, but also that the sort of person who thrives on Twitter very much wants to know everything that is happening in the world, including amongst those outside of their circle. Being stuck on a text-based social network that only has some of the information to be consumed is lame; having access to anyone and everything, for better or worse, is a value prop that only Twitter can provide.
This, then, is the other thing that often baffles analysts: Twitter has one of the most powerful moats on the Internet. Sure, Facebook has ubiquity, Instagram has influencers, and TikTok has homegrown stars, but I find it easier to imagine any of those fading before Twitter’s grip on information flow disappears (in part, of course, because Twitter has shown that it’s a pretty crappy business).
So let’s review: there is both little evidence that Twitter can monetize via direct response marketing, and reason to believe that the problem is not simply mismanagement. At the same time, Twitter is absolutely essential to a core group of users who are not simply unconcerned with the problems inherent to Twitter’s public broadcast model (including abuse and mob behavior), but actually find the platform indispensable for precisely those reasons: Twitter is where the news is made, shaped, and battled over, and there is very little chance of another platform displacing it, in large part because no one is economically motivated to do so.
Given this, why not charge for access?
• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?
Read Social Warming, my latest book, and find answers – and more.
Errata, corrigenda and ai no corrida: is it just me, or was there not really that much informed follow-up on Jack Dorsey leaving Twitter? Or maybe it was in the wrong part of the news cycle.