A Formula 1 pit stop team has skills that have turned out to be useful for intensive care teams – saving lives on the way. CC-licensed photo by United Autosports on Flickr.
You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.
A selection of 9 links for you. Free of pegs. I’m @charlesarthur on Twitter. Observations and links welcome.
Mitchell Clark has the roundup, which features another phone (preorder on July 21st! Leave that browser tab open for ten weeks!), two more pre-announced phones, AirPods Pro copies, a Pixel Watch (though Clark says “we don’t know what kind of chip it’ll be powered by nor do we know how much it’ll cost”) to launch in the autumn – keep another browser tab open – and these:
Google announced that it plans to release an Android-powered tablet next year to act as a “perfect companion for Pixel with a larger form factor.” The writing for this one has been on the wall for a while. (Android 12L focused on large-screen experiences, and there have been some tablet-related hires over in Mountain View.) But it’s good to hear that Google is looking to get into tablets again. The only real hardware detail we have about Google’s upcoming device is that it’ll have a Tensor chip in it.
Right at the end of its presentation, Google showed off a pair of AR glasses that were capable of real-time translation during a conversation. There are pretty much no details on whether this will be a product people can buy, but it’s certainly interesting to see more hints of Google’s plan for joining companies like Snap and Meta in the race to put AR on your face.
Google has scaled those glasses back a fair bit since the excitement of Google I/O 2012. Do you remember the promise of that concept video? Plus what happened to the restaurant-booking voice AI?
A month ago, the future looked bright for Terra and its main backer Do Kwon: A consortium called the Luna Foundation Guard (LFG) aimed at providing collateral for Luna — then at an all-time high value of $119 — had bought more than $1.5bn in Bitcoin to shore up UST’s peg, with its members reading like a Who’s Who of crypto.
But on Monday, all of the mechanisms that were supposed to keep UST stable didn’t. It fell to a low of 60 cents on that day, and reached a further low of around 20 cents in another crash on Wednesday, taking its market value down from $18.4bn to $5bn. Luna also fell considerably, dropping to as low as $2.35.
“Many people were caught off guard,” said Nikita Fadeev, partner and head of crypto fund Fasanara Digital, which de-risked its position in advance of the crash. “Everything broke there. It is full capitulation.”
Exactly why all of Terra’s carefully planned mechanisms failed to do their job remains unclear, and conspiracy theories abound about shadowy actors with untold wealth to play with. But one thing’s for certain: Kwon isn’t going down without a fight.
He’s now attempting to raise $1.5bn from new and old investors alike to provide more collateral to UST, hoping to rebuild the token’s liquidity after it virtually disappeared from order books overnight. Some suspect that $1.5bn won’t even be enough, and it could take days, if not weeks, for UST to re-peg to the dollar.
This has wiped out a lot of people. The theory is this: someone calculated that if the Terra/Luna pairing was disrupted, the LFG would have to sell a lot of bitcoin to stabilise it; that would drive down the price of bitcoin.
So they borrowed a billion dollars or so of bitcoin at a high price and disrupted the Terra/Luna pairing; when LFG had to sell bitcoin and the price fell, they bought bitcoin back at the lower price to repay the loan – and kept the difference.
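The arithmetic of that alleged trade is just a classic short sale, and can be sketched in a few lines of Python. The figures below are illustrative round numbers, not the attacker’s actual positions (which are unknown):

```python
# A minimal sketch of the short-selling arithmetic described above.
# All quantities are made-up round figures for illustration only.
def short_profit(btc_borrowed: float, entry_price: float, exit_price: float) -> float:
    """Profit from borrowing BTC, selling high, then buying back low to repay."""
    proceeds = btc_borrowed * entry_price    # sell the borrowed BTC at the high price
    repay_cost = btc_borrowed * exit_price   # buy the same amount back after the crash
    return proceeds - repay_cost

# e.g. borrow 25,000 BTC at $40,000 (about $1bn), buy back at $30,000 to repay:
print(short_profit(25_000, 40_000, 30_000))  # → 250000000.0, i.e. $250m
```

The loan is denominated in bitcoin, not dollars, which is the whole point: the borrower owes a fixed number of coins, so any fall in the price between selling and repaying is pure profit.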
The Chinese military deployed forces all around the island of Taiwan over the weekend in a set of large-scale military drills that one Chinese military analyst called a “rehearsal of possible real action.”
On Monday, the Chinese People’s Liberation Army (PLA) announced its Eastern Theater Command organized maritime, aerial, conventional missile and other forces around Taiwan and carried out drills around the island from Friday to Sunday. The Eastern Theater Command said the drills were intended “to test and improve the joint operations capability of multiple services and arms.”
While Taiwan governs itself as an independent nation, China considers the island a part of its territory and Chinese officials have repeatedly discussed “reunification” with the island, including by means of military force.
The Chinese state-run Global Times publication reported maritime, aerial, conventional missile and “other forces” participated in the drills around Taiwan. During the drills, China’s Liaoning aircraft carrier deployed east of the island while a large number of Chinese aircraft and warships carried out drills to the island’s west.
The Ministry of National Defence for the Republic of China (the formal name of the Taiwanese government) documented several instances of Chinese military aircraft entering its air defense identification zone (ADIZ) over the course of the three-day exercise.
Not now, China. (Which perhaps is why China thinks: maybe now, China.) The Global Times says “the PLA exercise was a partial rehearsal of a possible reunification-by-force operation”. When people tell you what they’re planning…
On Monday night, I saw one of the most despair-inducing performances about the hope of climate action that I’ve witnessed in years.
Nancy Pelosi, the Speaker of the House, took the stage here at the Aspen Ideas: Climate festival to discuss what congressional Democrats are doing on climate change. Her remarks were more effective as a litany of missed opportunities. Susan Goldberg, recently the editor in chief of National Geographic, now a dean at Arizona State University, asked the Speaker point-blank whether Democrats were going to pass climate legislation, and Pelosi all but shrugged. The House has already passed a roughly $2 trillion bill containing President Joe Biden’s climate priorities, she said. Now it was in the Senate’s hands. If it happened to get a bill back to her, the House would pass it.
Missing was any sense that this legislation is a make-or-break moment for the broader Democratic caucus. Gone was any suggestion that if Democrats fail to pass a bill this term, then America’s climate commitment under the Paris Agreement will be out of reach, and worse heat waves, larger wildfires, and damaging famines across the country and around the world within the next decade and a half will be all but assured.
Pelosi did not seem to understand, really, why Congress needed to pass a climate law this session. (She seemed to blame the fossil-fuel industry for the current Congress’s inaction.) She repeatedly justified climate action by saying it was “for the children.” This became the rhetorical leitmotif of her remarks—Congress had to act “for the children.” Explaining why she wanted more women in Congress, she said that they had to learn to “throw a punch—for the children.” That line was how she closed.
Britain, with an ostensibly right-wing government, is doing a lot more on the green agenda than the US, with an ostensibly left-wing one. But labels are deceptive. Plus the US political system is sclerotic. The Democrats look likely to let power slip away over the next two years. We’ll all suffer.
[Google senior director of product management, Josh] Woodward is showing me AI Test Kitchen, an Android app that will give select users limited access to Google’s latest and greatest AI language model, LaMDA 2. The model itself is an update to the original LaMDA announced at last year’s I/O and has the same basic functionality: you talk to it, and it talks back. But Test Kitchen wraps the system in a new, accessible interface, which encourages users to give feedback about its performance.
As Woodward explains, the idea is to create an experimental space for Google’s latest AI models. “These language models are very exciting, but they’re also very incomplete,” he says. “And we want to come up with a way to gradually get something in the hands of people to both see hopefully how it’s useful but also give feedback and point out areas where it comes up short.”
The app has three modes: “Imagine It,” “Talk About It,” and “List It,” with each intended to test a different aspect of the system’s functionality. “Imagine It” asks users to name a real or imaginary place, which LaMDA will then describe (the test is whether LaMDA can match your description); “Talk About It” offers a conversational prompt (like “talk to a tennis ball about dog”) with the intention of testing whether the AI stays on topic; while “List It” asks users to name any task or topic, with the aim of seeing if LaMDA can break it down into useful bullet points (so, if you say “I want to plant a vegetable garden,” the response might include sub-topics like “What do you want to grow?” and “Water and care”).
AI Test Kitchen will be rolling out in the US in the coming months but won’t be on the Play Store for just anyone to download. Woodward says Google hasn’t fully decided how it will offer access but suggests it will be on an invitation-only basis, with the company reaching out to academics, researchers, and policymakers to see if they’re interested in trying it out.
“List It” definitely sounds useful for task-oriented work. The other two… I don’t quite see the point. Shouldn’t AI be good for organising information and then repeating it back to us?
Thomas Stackpole talks to Molly White, who has been documenting the madness of crowds, aka web3:
MW: If a person’s wallet address is known and they are using a popular chain like Ethereum to transact, anyone [else] can see all transactions they’ve made.
Imagine if you went on a first date, and when you paid them back for your half of the meal, they could now see every other transaction you’d ever made — not just the public transactions on some app you used to transfer the cash but any transactions: the split checks with all of your previous dates, that monthly transfer to your therapist, the debts you’re paying off (or not), the charities to which you’re donating (or not), the amount you’re putting in a retirement account (or not). What if they could see the location of the corner store by your apartment where you so frequently go to grab a pint of ice cream at 10 PM? And this would also be visible to your ex-partners, your estranged family members, your prospective employers, or any number of outside parties interested in collecting your data and using it for any purpose they like. If you had a stalker or had left an abusive relationship or were the target of harassment, the granular details of your life are right there.
There are some blockchains that try to obfuscate these types of details for privacy purposes. But there are trade-offs here: While transparency can enable harassment, the features that make it possible to achieve privacy in a trustless system also enable financial crimes like money laundering. It is also very difficult to use those currencies (and to cash them out to traditional forms of currency). There are various techniques that people can use to try to remain anonymous, but they tend to require technical skill and quite a lot of work on the user’s end to maintain that anonymity.
TS: This point of view seems almost totally absent from the conversation. Why do you think that is?
MW: I think a lot of companies haven’t put much thought into the technology’s abuse potential. I’m surprised at how often I bring it up and the person I’m talking to admits that it’s never crossed their mind.
Facebook and its parent company, Meta, recently released a new tool that can be used to quickly develop state-of-the-art AI. But according to the company’s researchers, the system has the same problem as its predecessors: It’s extremely bad at avoiding results that reinforce racist and sexist stereotypes.
The new system, called OPT-175B, is a kind of template known as a large language model, a collection of pre-trained components that are increasingly used in machine-learning tools that process human language. More recently, natural language processing systems have been used to produce some uncannily accurate results, like the ability to generate images from a short text description. But large language models have been repeatedly criticized for encoding biases into machine-learning systems, and Facebook’s model seems to be no different from – or even worse than – the tools that preceded it.
In a paper accompanying the release, Meta researchers write that the model “has a high propensity to generate toxic language and reinforce harmful stereotypes, even when provided with a relatively innocuous prompt.” This means it’s easy to get biased and harmful results even when you’re not trying. The system is also vulnerable to “adversarial prompts,” where small, trivial changes in phrasing can be used to evade the system’s safeguards and produce toxic content.
In the paper they explicitly acknowledge that there’s a problem, and essentially seem to believe it’s to do with the dataset. Which it is. If you train a system with “dialogue” from the internet, it’s going to come from forums (though this isn’t overt), and we all know why Godwin’s Law came about; in some cases it’s actually true.
John Koblin and Nicole Sperling:
Netflix could introduce its lower-priced ad-supported tier by the end of the year, a more accelerated timeline than originally indicated, the company told employees in a recent note.
In the note, Netflix executives said they were aiming to introduce the ad tier in the final three months of the year, said two people who shared details of the communication on the condition of anonymity to describe internal company discussions. The note also said Netflix planned to begin cracking down on password sharing among its subscriber base around the same time, the people said.
Last month, Netflix stunned the media industry and Madison Avenue when it revealed that it would begin offering a lower-priced subscription featuring ads, after years of publicly stating that commercials would never be seen on the streaming platform.
But Netflix is facing significant business challenges. In announcing first-quarter earnings last month, Netflix said it lost 200,000 subscribers in the first three months of the year — the first time that has happened in a decade — and expected to lose two million more in the months to come. Since the subscriber announcement, Netflix’s share price has dropped sharply, wiping away roughly $70bn in the company’s market capitalization.
Reed Hastings, Netflix’s co-chief executive, told investors that the company would examine the possibility of introducing an advertising-supported platform and that it would try to “figure it out over the next year or two.”
… in the note to employees, Netflix executives invoked their competitors, saying HBO and Hulu have been able to “maintain strong brands while offering an ad-supported service.”
“Every major streaming company excluding Apple has or has announced an ad-supported service,” the note said. “For good reason, people want lower-priced options.”
As expected, the signs were clear enough – it’s approached an outside company about ad infrastructure.
Phan writes occasional fascinating threads on Twitter, and then gathers them together on a page. After pointing out how many F1 innovations have come to our roads (paddle gearshifts, push-button ignition, disc brakes, regenerative brakes, aerodynamic design) he also notes that:
F1 technology and best practices have also found their way into non-road car industries.
Hospitals. This is my favourite example of F1 knowledge transfer. In the mid-90s, a Children’s Hospital in the UK improved its ICU hand-off process by consulting with the Ferrari F1 pit crew team.
The hospital recorded its surgery room operation and Ferrari suggested a new protocol. One big change was for the hospital to have the equivalent of a pit crew “lollipop man”; this is the individual that holds a sign on a long stick and only waves a driver through after making sure everyone else on the team has put the tires on.
After changing its protocol, the hospital’s error rate dropped from 30% to 10%.
The Williams F1 pit team similarly helped a hospital in Wales improve its neonatal resuscitation process.
Apparently Mercedes calls F1 “the fastest R&D lab in the world”; McLaren had a division that sold its telemetry and control systems to third parties; that unit was then sold for an undisclosed amount – but given that its revenues were $43m, probably north of $200m.
• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?
Read Social Warming, my latest book, and find answers – and more.
Errata, corrigenda and ai no corrida: none notified