Wouldn’t it be great if computer webcams were situated in the middle of the screen, or somewhere better? Dell has a concept for that. CC-licensed photo by JJ Merelo on Flickr.
You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.
A selection of 11 links for you. Not increasing every two days. I’m @charlesarthur on Twitter. Observations and links welcome.
Robert McMillan and Dustin Volz:
Hackers linked to China and other governments are among a growing assortment of cyberattackers seeking to exploit a widespread and severe vulnerability in computer server software, according to cybersecurity firms and Microsoft.
The involvement of hackers whom analysts have linked to nation-states underscored the increasing gravity of the flaw in Log4j software, a free bit of code that logs activity in computer networks and applications.
Cybersecurity researchers say it is one of the most dire cybersecurity threats to emerge in years and could enable devastating attacks, including ransomware, in both the immediate and distant future. Government-sponsored hackers are often among the best-resourced and most capable, analysts say.
“The effects of this vulnerability will reverberate for months to come—maybe even years—as we try to close these doors and try to hunt down all the actors who made their way in,” said John Hultquist, vice president of intelligence analysis at the US-based cybersecurity firm Mandiant.
Both Microsoft and Mandiant said they have observed hacking groups linked to China and Iran launching attacks that exploit the flaw in Log4j. In an update to its website posted late Tuesday, Microsoft said that it had also seen nation-backed hackers from North Korea and Turkey using the attack. Some attackers appear to be experimenting with the attack; others are trying to use it to break into online targets, Microsoft said.
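For context, the flaw (dubbed Log4Shell, CVE-2021-44228) stems from Log4j expanding `${...}` lookup tokens found inside the very strings it is asked to log, including attacker-supplied ones. A minimal Python sketch of that class of bug — not Log4j itself; the lookup table here is a hypothetical stand-in for Log4j's JNDI and environment lookups:

```python
import re

# Toy logger that, like vulnerable Log4j versions, expands ${...}
# lookup tokens found *inside* the message it records. LOOKUPS is a
# hypothetical resolver standing in for Log4j's jndi:/env:/sys: lookups.
LOOKUPS = {"env:USER": "alice"}

def expand(message: str) -> str:
    """Replace ${key} tokens with resolved values, as a naive logger might."""
    return re.sub(r"\$\{([^}]*)\}",
                  lambda m: LOOKUPS.get(m.group(1), m.group(0)),
                  message)

# Logging attacker-supplied text then becomes an injection point:
user_input = "${env:USER}"  # attacker controls this string
log_line = expand(f"login attempt from {user_input}")
# In Log4Shell the analogous token was ${jndi:ldap://attacker/x},
# which made the logger fetch and execute remote code.
```

The point the sketch makes: the attacker never touches the application's logic, only a string that happens to get logged.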
This is going to go on and on. How long before it pops up in a seriously big exploit?
unique link to this extract
A deep dive into an NSO zero-click iMessage exploit: Remote Code Execution • Google Project Zero blog
Ian Beer and Samuel Groß:
In the late 1990’s, bandwidth and storage were much more scarce than they are now. It was in that environment that the JBIG2 standard emerged. JBIG2 is a domain specific image codec designed to compress images where pixels can only be black or white.
It was developed to achieve extremely high compression ratios for scans of text documents and was implemented and used in high-end office scanner/printer devices like the XEROX WorkCenter. If you used the scan-to-PDF functionality of a device like this a decade ago, your PDF likely had a JBIG2 stream in it.
The PDF files produced by those scanners were exceptionally small, perhaps only a few kilobytes. There are two novel techniques which JBIG2 uses to achieve these extreme compression ratios which are relevant to this exploit.
Effectively every text document, especially those written in languages with small alphabets like English or German, consists of many repeated letters (also known as glyphs) on each page. JBIG2 tries to segment each page into glyphs then uses simple pattern matching to match up glyphs which look the same:
JBIG2 doesn’t actually know anything about glyphs and it isn’t doing OCR (optical character recognition). A JBIG2 encoder is just looking for connected regions of pixels and grouping similar-looking regions together. The compression algorithm simply substitutes all sufficiently similar-looking regions with a copy of just one of them.
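The substitution idea described above can be sketched in a few lines — this is an illustrative toy, not JBIG2's actual bitstream, and the Hamming-distance threshold for "sufficiently similar" is an assumption:

```python
# Toy glyph-substitution compression: store one representative bitmap
# per group of similar glyphs, plus an index per glyph into that list.

def hamming(a, b):
    """Count differing pixels between two flattened bitmaps."""
    return sum(x != y for x, y in zip(a, b))

def compress(glyphs, threshold=1):
    """Return (representatives, indices): each glyph is replaced by the
    index of the first sufficiently similar representative seen."""
    reps, indices = [], []
    for g in glyphs:
        for i, r in enumerate(reps):
            if hamming(g, r) <= threshold:
                indices.append(i)      # reuse an existing pattern
                break
        else:
            indices.append(len(reps))  # new pattern
            reps.append(g)
    return reps, indices

# Four 3x3 bitmaps flattened to 9 bits; the second differs from the
# first by a single pixel, so it is silently replaced by the first --
# exactly the lossy swap the next paragraph warns about.
glyphs = ["111101111", "111100111", "010010010", "111101111"]
reps, idx = compress(glyphs)
```

With a looser threshold, genuinely different characters collapse into one — which is how scanned invoices end up with wrong digits.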
There’s a significant issue with such a scheme: it’s far too easy for a poor encoder to accidentally swap similar looking characters, and this can happen with interesting consequences. D. Kriesel’s blog has some motivating examples where PDFs of scanned invoices have different figures or PDFs of scanned construction drawings end up with incorrect measurements. These aren’t the issues we’re looking at, but they are one significant reason why JBIG2 is not a common compression format anymore.
So it turns out that NSO’s Pegasus relies on a flaw in a decades-old piece of open source software originally intended for scanning. One observer’s description: “NSO has been using simple logical operators in an old compression format to basically build a whole virtual computer on top of it.” The blogpost isn’t a short read, by the way. There’s also the discussion on Hacker News, with its predictable mixture of yawning and awe.
unique link to this extract
In essence, Drax [power station] is a gigantic woodstove. In 2019, Drax emitted more than fifteen million tons of CO2, which is roughly equivalent to the greenhouse-gas emissions produced by three million typical passenger vehicles in one year. Of those emissions, Drax reported that 12.8m tons were “biologically sequestered carbon” from biomass (wood). In 2020, the numbers increased: 16.5m tons, 13.2m from biomass. Meanwhile, the Drax Group calls itself “the biggest decarbonization project in Europe,” delivering “a decarbonized economy and healthy forests.”
The apparent conflict between what Drax does and what it says it does has its origins in the United Nations Conference on Climate Change of 1997. The conference established the Kyoto Protocol, which was intended to reduce emissions and “prevent dangerous anthropogenic interference with the climate system.” The UN’s Intergovernmental Panel on Climate Change (IPCC) classified wind and solar power as renewable-energy sources. But wood-burning was harder to categorize: It’s renewable, technically, because trees grow back. In accounting for greenhouse gases, the IPCC sorts emissions into different “sectors,” which include land-use and energy production. It’s hard to imagine now, but at the time, the IPCC was concerned that if they counted emissions from harvesting trees in the land sector, it would be duplicative to count emissions from the burning of pellets in the energy sector.
According to William Moomaw, an emeritus professor of international environmental policy at Tufts University, and lead author of several IPCC reports, negotiators thought of biomass as only a minor part of energy production—small-scale enough that forest regrowth could theoretically keep up with the incidental harvesting of trees. “At the time these guidelines were drawn up, the IPCC did not imagine a situation where millions of tons of wood would be shipped four thousand miles away to be burned in another country,” Moomaw said.
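The figures quoted above are easy to sanity-check with back-of-envelope arithmetic (treating the article's "more than fifteen million tons" as roughly 15):

```python
# Back-of-envelope check of Drax's reported emissions figures.
total_2019 = 15.0        # Mt CO2, 2019 (approximate; article says "more than")
biomass_2019 = 12.8      # Mt CO2 reported as "biologically sequestered"
total_2020, biomass_2020 = 16.5, 13.2

share_2019 = biomass_2019 / total_2019   # fraction of emissions from wood
share_2020 = biomass_2020 / total_2020
per_vehicle = total_2019 / 3.0           # implied t CO2 per car per year
```

So roughly 80–85% of Drax's emissions come from burning wood — the portion its accounting treats as carbon-neutral — and the comparison implies about 5 tons of CO2 per typical passenger vehicle per year, consistent with common estimates.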
Google was one of the early leaders in the first wave of modern augmented reality (AR) research and devices, but the company has appeared to cool to AR in recent years even as Apple and Facebook have invested heavily in it. But it looks like that trend will soon be reversed.
On LinkedIn, operating system engineering director Mark Lucovsky announced that he has joined Google. He previously headed up mixed reality operating system work for Meta, and before that he was one of the key architects of Windows NT at Microsoft. “My role is to lead the Operating System team for Augmented Reality at Google,” he wrote.
He also posted a link to some job listings at Google that give the impression Google is getting just as serious about AR as Apple or Meta.
…Other job listings say new hires will be working on an “innovative AR device.” And one specifies that Google is “focused on making immersive computing accessible to billions of people through mobile devices.”
So Google is getting serious about AR… again? It’s as if there’s no institutional memory there, or they think that everything about Google Glass should be consigned to the bin. Which might, actually, not be wrong.
unique link to this extract
So what is social VR like? Imagine gaming combined with zany, old-style Internet chat rooms: messy, experimental and often dominated by men. There are trolls and obnoxious kids. And while most people are generally well-behaved and enthusiastic about the new medium, there seem to be few measures in place to prevent bad behavior beyond a few quick guidelines when you enter a space and features that let you block and mute problematic users.
On a visit to Horizon Venues for my first mingling experience, I picked an avatar that was a close approximation of what I looked like in real life: straight brown hair and a blazer and jeans. But it meant that when I was teleported into the main lobby area — a vast room with a tree in the middle — I was the only woman among a dozen or so men. We were all cartoonish-looking avatars floating around with no legs. Quite a few of us were in leather jackets.
Within moments, I was surprised by a deep voice in my ear, as if someone was whispering into it. “Hey. How are you?” One of the avatars had zoomed up to within inches of me, then floated away, taking me aback. A small group of male avatars began to form around me, staying silent. As I chatted with a man from Israel named Eran who was showing me how to jump (you need to figure out how to activate it via your settings), several in the surrounding crowd started holding their thumbs and forefingers out in front of them, making a frame. Digital photos of my bemused avatar appeared between their hands. One by one, they began handing the photos to me. The experience was awkward and I felt a bit like a specimen.
“Just chuck ’em away,” said a man in a bright blue suit with a London accent who had just floated up to us. Despite many attempts to shake away the portraits, they kept sticking to my digital hand like flypaper.
Meta warns all visitors to Horizon Venues that its “trained safety specialists” can dredge up a recording of any incident, and that users can activate a Safe Zone around themselves by pressing a button on their virtual wrist, muting the people around them. I didn’t feel unsafe, but I was uncomfortable, and there were no clear rules about etiquette and personal space.
Amazing: again and again, these digital spaces are created with no thought of how women will respond to them.
unique link to this extract
Even if the wind stops blowing in the next three weeks, wind power will end the year as the leading source of electricity in Spain. This will mean wind overtaking nuclear in the national energy matrix for the first time since 2013, the only year since records began in which wind turbines were the main source of power. That year was particularly good in terms of wind resources, while nuclear was affected by the closure of the Garoña plant in Burgos. Since then, however, wind power has continued to grow, both in absolute terms and as a share of total energy generated, a trend that looks set to continue in the near future.
The milestone, advanced by Spanish news site Nius, is just a taste of things to come. “Wind power is going to dominate the Spanish electricity grid for a long time,” says Francisco Valverde, a consultant at the energy company Menta Energía.
According to the National Integrated Energy and Climate Plan (PNIEC), released by the Spanish government last year, the installed capacity of wind turbines will almost double between now and 2030. During this period, the rate of growth of solar photovoltaic will be even greater as installed capacity more than quadruples, making it the second most important electricity source, though it will still lag far behind wind power, even when solar thermal is taken into account. Meanwhile, installed nuclear power will fall to less than half its current level. And both combined-cycle plants, which use natural gas, and hydroelectricity will maintain their weight in a mix in which coal will no longer be included.
Nuclear has been the biggest single source for quite a while; both nuclear and wind are more than 20% of generation, and CCGT about 17% this year. (Solar has only recently gone above 4%.)
unique link to this extract
Paul Hudson @twostraws:
Folks have been requesting Xcode for iPad for some time, but that would have required a pretty epic effort – does that mean all of Interface Builder? All the Objective-C and C++ support? Or – *cue silent screaming* – Info.plist files?
Swift Playgrounds has chosen a different way: rather than trying to recreate all of Xcode on iPadOS, it instead aims to produce “Diet Xcode” – by which I mean “slimmer, faster, and streamlined” and not “why does my drink taste weird.” That means we get Xcode-style code completion that appears instantly, we get Xcode-style instant SwiftUI previews as we type, we get Xcode-style imports for SPM packages through Git, and much more.
And don’t think for a moment there are compromises on what you can code, because there really aren’t: this is full Swift 5.5 with all the latest concurrency features, plus access to the full set of SwiftUI API for iOS 15. Even better, at last there is access to debug output using print() and similar – by default it slides up from the bottom in a toast-style notification then animates away after a few seconds, but you can also make the console permanently visible if you prefer.
But, critically, we don’t get some of Xcode’s biggest problems. For example, when you want to add a capability to a Swift Playgrounds app, it’s all done using a beautiful new user interface where you select from a list, then enter any additional data as prompted – that means goodbye to adding keys like “NSLocationAlwaysAndWhenInUseUsageDescription” to your property list.
Best of all, if you decide you want to move your project over from Swift Playgrounds to Xcode, you can do just that: just hit Share, then AirDrop it to your Mac, and Xcode will pick up exactly where you left off.
So, in brief, you can now write apps for the iPad (or iPhone) on the iPad. Which was a longstanding criticism of the iPad – that it wasn’t a “proper” computer because you couldn’t write apps on it to run on it. Guess they’ll need new ones now.
unique link to this extract
Winter is coming: researchers uncover the surprising cause of the Little Ice Age • University of Massachusetts Amherst
The Little Ice Age was one of the coldest periods of the past 10,000 years, a period of cooling that was particularly pronounced in the North Atlantic region. This cold spell, whose precise timeline scholars debate, but which seems to have set in around 600 years ago, was responsible for crop failures, famines and pandemics throughout Europe, resulting in misery and death for millions. To date, the mechanisms that led to this harsh climate state have remained inconclusive. However, a new paper published recently in Science Advances gives an up-to-date picture of the events that brought about the Little Ice Age. Surprisingly, the cooling appears to have been triggered by an unusually warm episode.
When lead author Francois Lapointe, postdoctoral researcher and lecturer in geosciences at UMass Amherst, and Raymond Bradley, distinguished professor in geosciences at UMass Amherst, began carefully examining their 3,000-year reconstruction of North Atlantic sea surface temperatures, results of which were published in the Proceedings of the National Academy of Sciences in 2020, they noticed something surprising: a sudden change from very warm conditions in the late 1300s to unprecedented cold conditions in the early 1400s, only 20 years later.
Using many detailed marine records, Lapointe and Bradley discovered that there was an abnormally strong northward transfer of warm water in the late 1300s which peaked around 1380. As a result, the waters south of Greenland and the Nordic Seas became much warmer than usual. “No one has recognized this before,” notes Lapointe.
Normally, there is always a transfer of warm water from the tropics to the arctic. It’s a well-known process called the Atlantic Meridional Overturning Circulation (AMOC), which is like a planetary conveyor belt. Typically, warm water from the tropics flows north along the coast of Northern Europe, and when it reaches higher latitudes and meets colder arctic waters, it loses heat and becomes denser, causing the water to sink at the bottom of the ocean. This deep-water formation then flows south along the coast of North America and continues on to circulate around the world.
But in the late 1300s, AMOC strengthened significantly, which meant that far more warm water than usual was moving north, which in turn caused rapid arctic ice loss. Over the course of a few decades in the late 1300s and 1400s, vast amounts of ice were flushed out into the North Atlantic, which not only cooled the North Atlantic waters but also diluted their saltiness, ultimately causing AMOC to collapse. It is this collapse that then triggered a substantial cooling.
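The density mechanism driving all of this can be illustrated with a standard linear equation of state for seawater — the coefficients below are typical textbook values, an assumption rather than figures from the study:

```python
# Why cooling makes surface seawater sink while meltwater keeps it
# afloat: rho = rho0 * (1 - alpha*(T - T0) + beta*(S - S0)).
# Reference state and coefficients are illustrative textbook values.
RHO0, T0, S0 = 1027.0, 10.0, 35.0   # kg/m^3, degC, psu
ALPHA, BETA = 2e-4, 8e-4            # thermal expansion, haline contraction

def density(T, S):
    return RHO0 * (1 - ALPHA * (T - T0) + BETA * (S - S0))

tropical  = density(T=20.0, S=35.0)  # warm, salty water heading north
cooled    = density(T=2.0,  S=35.0)  # after losing heat at high latitude: denser, sinks
freshened = density(T=2.0,  S=33.0)  # after dilution by melted ice: lighter again
```

Cooling raises the density enough to drive sinking; enough freshwater from melting ice cancels that increase, which is the sense in which flushed-out ice could shut the overturning down.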
Amazon Polly is a service that turns text into lifelike speech, allowing you to create applications that talk, and build entirely new categories of speech-enabled products. Polly’s Text-to-Speech (TTS) service uses advanced deep learning technologies to synthesize natural sounding human speech. With dozens of lifelike voices across a broad set of languages, you can build speech-enabled applications that work in many different countries.
In addition to Standard TTS voices, Amazon Polly offers Neural Text-to-Speech (NTTS) voices that deliver advanced improvements in speech quality through a new machine learning approach. Polly’s Neural TTS technology also supports a Newscaster speaking style that is tailored to news narration use cases.
Finally, Amazon Polly Brand Voice can create a custom voice for your organization. This is a custom engagement where you will work with the Amazon Polly team to build an NTTS voice for the exclusive use of your organization.
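For a sense of the developer-facing shape of this, here is a hedged sketch of a Polly request via boto3 (AWS's Python SDK); the parameters are built as a plain dict so the example needs no AWS credentials — in real use you would pass them to `boto3.client("polly").synthesize_speech(**params)`:

```python
# Request parameters for a neural (NTTS) Polly voice. The text and
# voice choice here are illustrative.
params = {
    "Text": "Hello from Amazon Polly.",
    "OutputFormat": "mp3",
    "VoiceId": "Joanna",   # one of the available lifelike voices
    "Engine": "neural",    # request an NTTS voice rather than standard
}
# With a neural engine you can also request the newscaster speaking
# style via SSML, e.g. wrapping the text in <amazon:domain name="news">.
```

The response's `AudioStream` is the synthesized audio, ready to write to a file.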
Alexa-responding-and-more-as-a-service. Available in eight languages, and both male and female in six of those. I reckon that they’ve covered most of the world’s population there (it includes Chinese) – if they can get Hindi in too, they’re sorted. And the free tier offers 5 million characters per month for the first 12 months. Unclear whether that’s permanent; the implication seems to be that it isn’t.
unique link to this extract
The future no longer seems very open; in fact, its contours seem very, very clear. What had once been a rat’s nest of 15 or 20 recognizable independent digital-media startups has been reduced, through purchases and mergers, to four consolidated brand portfolios that have any chance at medium-term survival: BuzzFeed-Huffpost-Complex, Vox-Verge-SB Nation-Eater-NYMag, Bustle-Mic-Gawker, and Vice-Refinery29.
All of these rely for revenue on some mix of display advertising and sponcon, ecommerce and affiliate marketing, and direct payments in the form of subscriptions or membership. All of them have laid off workers. All of them are likely to go public (or try) within the next couple years, to cash out investors and to make consolidation easier. The most optimistic outcome for any one of these companies is that it leads the next round of mergers and acquisitions and emerges at the top of a larger portfolio of brands, giving it more leverage with advertisers and further diversifying its audience and revenue streams.
The sector is now the province of private-equity vultures rather than venture-capital sharks. No one looks at digital media companies and sees unicorns anymore; they see stones that might have a little more blood in them.
What’s changed? Not much about the fundamentals, really. The big difference between now and 2011 is that there’s no longer the expectation (or recent experience) of “disruptive” upheaval in media infrastructure. Part of what made the digital media sector so attractive to venture capitalists in the early 2010s was how frequently and how quickly the landscape of media distribution was changing. Every few years a new growth opportunity would emerge — SEO! No, wait, social sharing! No, wait, dark social! No, wait, video! No, wait ecommerce! — offering potentially huge audiences or revenue figures. (But also necessitating debilitating shifts in editorial tone, resources, and strategies.)
No one expects a new Facebook (or, for that matter, a new iPhone) to emerge anytime soon, transforming the whole sector; we’ve reached a point where we know what works and what needs to happen.
Though we think we’ve reached the point where we know what works every few years; and then it changes.
unique link to this extract
At first glance, the Concept Pari looks like the typical webcam that sits atop your monitor. However, Dell made it so that you can remove the cylindrical camera housing from its dock (which also doubles as a USB-C wireless charging station) and carry the 1oz camera around in your hand. The camera itself shoots in 1080p, has a built-in mic, and, because it’s wireless, is connected to Wi-Fi. Dell also notes that it comes with a vertical indicator light, helping you maintain alignment as you use the camera freehandedly.
What might be neater than its wire-free housing is that if you’re tired of staring into the face of a soulless webcam during virtual meetings, the Concept Pari comes with a magnetic backing that allows you to stick the camera anywhere on your monitor (hopefully without affecting the display). Place the camera just above the head of the person you’re talking with, and you should be able to comfortably maintain eye contact while actually looking at the person on your screen.
When you’re done with a meeting, you can reverse the camera in its dock so that it’s facing away from you, offering additional privacy when the camera’s not in use. Dell says it will still charge when it’s in this position. Although the webcam isn’t for sale just yet, and very well may never be, it’s a concept that doesn’t seem too unrealistic to see a lot of people adopting.
It all reads so well until that very final sentence. The problem of webcams being off-centre – up, down, sideways – is so very frustrating given how prevalent video calls are now.
unique link to this extract
Still some time to order my book (for yourself or a friend): Social Warming explains why outrage and fake news travel so much further and faster on social networks, and how that creates problems for journalism, democracy – and society.
Errata, corrigenda and ai no corrida: none notified