Start Up No.2231: how early western PCs were made to work in China, traffic lights for self-driving cars?, AI headphones, and more


Zebrafish are surviving, so far, on the Chinese space station – but seem a little disoriented. CC-licensed photo by Oregon State University on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


There’s another post coming this week at the Social Warming Substack on Friday at 0845 UK time. Free signup.


A selection of 10 links for you. Which way is up, though? I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. Observations and links welcome.


How China’s 1980s PC industry hacked dot-matrix printers • Fast Company

Thomas Mullaney:

»

Commercial dot-matrix printing was yet another arena in which the needs of Chinese character I/O were not accounted for. This is witnessed most clearly in the then-dominant configuration of printer heads—specifically the 9-pin printer heads found in mass-manufactured dot-matrix printers during the 1970s. Using nine pins, these early dot-matrix printers were able to produce low-resolution Latin alphabet bitmaps with just one pass of the printer head. The choice of nine pins, in other words, was “tuned” to the needs of Latin alphabetic script.

These same printer heads were incapable of printing low-resolution Chinese character bitmaps using anything less than two full passes of the printer head, one below the other. Two-pass printing dramatically increased the time needed to print Chinese as compared to English, however, and introduced graphical inaccuracies, whether due to inconsistencies in the advancement of the platen or uneven ink registration (that is, characters with differing ink densities on their upper and lower halves).

Compounding these problems, Chinese characters printed in this way were twice the height of English words. This created comically distorted printouts in which English words appeared austere and economical, while Chinese characters appeared grotesquely oversized. Not only did this waste paper, but it left Chinese-language documents looking something like large-print children’s books. When consumers in the Chinese-Japanese-Korean (CJK) world began to import Western-manufactured dot-matrix printers, then, they faced yet another facet of Latin alphabetic bias.

«

This is an extract from what looks like a fascinating book about how China yanked itself into the computer age. Necessity as the mother of invention, and all that.
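
The arithmetic behind the two-pass problem is simple enough to sketch: a print head can only lay down as many rows of dots per pass as it has pins. A minimal illustration in Python, assuming a 9-row Latin glyph and a 16-row Chinese character bitmap (a typical low-resolution size; the extract doesn’t give the exact height):

```python
import math

PIN_COUNT = 9  # vertical pins on a typical 1970s dot-matrix print head

def passes_needed(glyph_height_dots: int) -> int:
    """Number of print-head passes needed to render a glyph of the given height."""
    return math.ceil(glyph_height_dots / PIN_COUNT)

# Assumed bitmap heights, for illustration only (not stated in the extract):
latin_height = 9      # a Latin letter fits the 9-pin head in a single pass
chinese_height = 16   # a 16x16 bitmap is a common low-resolution Chinese character

print(passes_needed(latin_height))    # 1 pass
print(passes_needed(chinese_height))  # 2 passes, the second printed below the first
```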
unique link to this extract


Fish are adapting to weightlessness on the Chinese space station • Universe Today

Scott Alan Johnston:

»

Four zebrafish are alive and well after nearly a month in space aboard China’s Tiangong space station. As part of an experiment testing the development of vertebrates in microgravity, the fish live and swim within a small habitat aboard the station.

While the zebrafish have thus far survived, they are showing some signs of disorientation. The taikonauts aboard Tiangong – Ye Guangfu, Li Cong, and Li Guangsu – have reported instances of the fish swimming upside down, backward, and in circular motions, suggesting that microgravity is having an effect on their spatial awareness.

The zebrafish were launched aboard Shenzhou-18, which carried them, as well as a batch of hornwort, to orbit on April 25, 2024. The aim of the project is to create a self-sustaining ecosystem, studying the effects of both microgravity and radiation on the development and growth of these species.

As a test subject, zebrafish have several advantages. Their short reproductive and development cycle, and transparent eggs, allow scientists to study their growth quickly and effectively, and their genetic makeup shares similarities with humans, potentially offering insights that are relevant to human health. The zebrafish genome has been fully sequenced, and for these reasons zebrafish are commonly used in scientific experiments on Earth. Seeing how these well-studied creatures behave in such an extreme environment may have a lot to tell us about the life and development of vertebrates across species while exposed to microgravity.

«

“Showing some signs of disorientation” sounds like they’re doing remarkably well, all things considered. (My understanding is they were taken up as fertilised embryos, rather than adult fish, because I’d have questions about how fish experience – or survive – high G forces.)
unique link to this extract


AI headphones let wearer listen to a single person in a crowd, by looking at them just once • UW News

Stefan Milne and Kiyomi Taguchi:

»

A University of Washington team has developed an artificial intelligence system that lets a user wearing headphones look at a person speaking for three to five seconds to “enroll” them. The system, called “Target Speech Hearing,” then cancels all other sounds in the environment and plays just the enrolled speaker’s voice in real time even as the listener moves around in noisy places and no longer faces the speaker.

The team presented its findings May 14 in Honolulu at the ACM CHI Conference on Human Factors in Computing Systems. The code for the proof-of-concept device is available for others to build on. The system is not commercially available.

“We tend to think of AI now as web-based chatbots that answer questions,” said senior author Shyam Gollakota, a UW professor in the Paul G. Allen School of Computer Science & Engineering. “But in this project, we develop AI to modify the auditory perception of anyone wearing headphones, given their preferences. With our devices you can now hear a single speaker clearly even if you are in a noisy environment with lots of other people talking.”

To use the system, a person wearing off-the-shelf headphones fitted with microphones taps a button while directing their head at someone talking. The sound waves from that speaker’s voice then should reach the microphones on both sides of the headset simultaneously; there’s a 16° margin of error. The headphones send that signal to an on-board embedded computer, where the team’s machine learning software learns the desired speaker’s vocal patterns.

«

Nifty. Wonder how long it will take for this to be incorporated into commercial systems. (It would be madness to try to do this as a standalone commercial project. One hopes the UW team realise that.)
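
The “reach the microphones on both sides of the headset simultaneously” condition is essentially a time-difference-of-arrival check. A rough sketch of the geometry, assuming a mic spacing of about 18cm (roughly head width – the extract doesn’t give the real figure):

```python
import math

SPEED_OF_SOUND = 343.0   # metres per second, at room temperature
MIC_SPACING = 0.18       # metres between the two headset mics (assumed, not from the extract)

def max_arrival_delay(margin_deg: float) -> float:
    """Largest inter-mic arrival-time difference (seconds) for a sound source
    within margin_deg of straight ahead, using a far-field approximation."""
    return MIC_SPACING * math.sin(math.radians(margin_deg)) / SPEED_OF_SOUND

def within_enrollment_cone(measured_delay_s: float, margin_deg: float = 16.0) -> bool:
    """Treat a source as 'the person being looked at' if its measured inter-mic
    delay is small enough to fall inside the angular margin."""
    return abs(measured_delay_s) <= max_arrival_delay(margin_deg)

print(f"{max_arrival_delay(16.0) * 1e6:.0f} microseconds")  # ≈ 145 µs for the 16° margin
```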
unique link to this extract


You were promised a jetpack by liars • Pluralistic

Cory Doctorow:

»

As a science fiction writer, I find it weird that some sf tropes – like space colonization – have become culture-war touchstones. You know, that whole “we were promised jetpacks” thing.

I confess, I never looked too hard at the practicalities of jetpacks, because they are so obviously either used as a visual shorthand (as in the Jetsons) or as a metaphor. Even a brief moment’s serious consideration should make it clear why we wouldn’t want the distracted, stoned, drunk, suicidal, homicidal maniacs who pilot their two-ton killbots through our residential streets at 75mph to be flying over our heads with a reservoir of high explosives strapped to their backs.

Jetpacks can make for interesting sf eyeball kicks or literary symbols, but I don’t actually want to live in a world of jetpacks. I just want to read about them, and, of course, write about them.

I had blithely assumed that this was the principal reason we never got the jetpacks we were “promised.” I mean, there kind of was a promise, right? I grew up seeing videos of rocketeers flying their jetpacks high above the heads of amazed crowds, at World’s Fairs and Disneyland and big public spectacles. There was that scene in Thunderball where James Bond (the canonical Connery Bond, no less) makes an escape by jetpack. There was even a Gilligan’s Island episode where the castaways find a jetpack and scheme to fly it all the way back to Hawai’i.

Clearly, jetpacks were possible, but they didn’t make any sense, so we decided not to use them, right?

Well, I was wrong. In a terrific new 99% Invisible episode, Chris Berube tracks the history of all those jetpacks we saw on TV for decades, and reveals that they were all the same jetpack, flown by just one guy, who risked his life every time he went up in it.

«

Terrifying. (Rather like Doctorow’s work rate. How in the world does he do it.)
unique link to this extract


A brief history of the traffic light and why we need a new colour • Al Jazeera

»

The universally known traffic light has not experienced a significant redesign in almost 100 years, ever since William Potts, a Detroit police officer, created the first three-section traffic light in the United States in 1921. Now, say experts, the rise of driverless cars means that a new set of safety guidelines is needed to ensure they interact correctly with traffic signals.

Traffic lights around the world typically use red, amber and green lights to signal to drivers whether they should stop, go or get ready to either stop or go at intersections and pedestrian crossings. Ali Hajbabaie, a North Carolina State University (NCSU) engineering professor, is leading a team to design a traffic system that considers how driverless cars respond to traffic signals.

Hajbabaie told The Associated Press news agency that he proposes adding another light – possibly a white one.

…Humans and autonomous cars use different sets of visual cues when it comes to interpreting lighting systems. Different colours – sometimes flashing to indicate that a change is imminent – work best for the human brain, while a single light works better for autonomous cars.

Therefore, a fourth light – most likely white – would be added for the benefit of self-driving cars. The white light would be interpreted by a self-driven car as an instruction to “keep going unless instructed otherwise”.

Hajbabaie, the NCSU professor, explained: “If the white light is active, you just follow the vehicle in front of you.”

«

This is going to make those captchas with “click the squares with traffic lights” much trickier. Also, though it sounds like a Beatles lyric, how many traffic lights would you have to change around the world? I’m going to guess there are hundreds of millions in existence.
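
For what it’s worth, the proposal amounts to adding one phase to the signal’s vocabulary. A toy sketch of the rule as described – the names and structure are mine, not Hajbabaie’s:

```python
from enum import Enum

class SignalPhase(Enum):
    RED = "stop"
    AMBER = "prepare to stop or go"
    GREEN = "go"
    WHITE = "follow the vehicle in front"  # the proposed fourth light

def autonomous_car_action(phase: SignalPhase) -> str:
    # Under the proposal, a self-driving car treats white as "keep going unless
    # instructed otherwise" by mirroring whatever the vehicle ahead does.
    return phase.value

print(autonomous_car_action(SignalPhase.WHITE))  # "follow the vehicle in front"
```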
unique link to this extract


Announcing Exa, a search engine built for the AI world

exa:

»

The internet contains the collective knowledge output of mankind – all the great works of art and literature, millions of essays, hundreds of millions of research papers, billions of images and videos, trillions of ideas sprinkled across tweets, forums, and memes.

Searching the internet should feel like navigating a grand library of knowledge, where you could weave insights across cultures, industries, and millennia.

Of course it doesn’t feel that way. Today, searching the internet feels more like navigating a landfill.

Many have debated what’s wrong with search.

But the core problem is actually simple – knowledge on the internet is buried under an overwhelming amount of information.

The core solution is also simple – we need a better search algorithm to filter all that information and organize the knowledge buried inside.

Exa is going to organize the world’s knowledge. Exa’s goal is to understand any query – no matter how complex – and filter the internet to exactly the knowledge required for that query.

To illustrate the problem, try googling “startups working on climate change”.

«

Interesting: I’ve noticed a few people saying “this is the new Google, like Google was back in 1999”. Notable that it needs you to ask in a slightly stilted style:

»

Exa uses a transformer architecture to predict links given text, and it gets its power from having been trained on the way that people talk about links on the Internet. This training produces a model that returns links that are both high in relevance and quality. However, the model does expect queries that look like how people describe a link on the Internet. For example, “best restaurants in SF” is a bad query, whereas “Here is the best restaurant in SF:” is a good query.

«

I tried it with a query local to me. It’s… a work in progress.
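
The stilted style is easy enough to paper over in software: rephrase the question as the sentence someone would type just before pasting a link. A minimal sketch – the wrapper function is my own invention, not Exa’s API:

```python
def to_link_style_query(question: str) -> str:
    """Rewrite a plain question as the kind of sentence that usually precedes
    a link online, which is the form Exa's model is trained to complete."""
    topic = question.strip().rstrip("?")
    return f"Here is a great resource on {topic}:"

print(to_link_style_query("startups working on climate change"))
# -> "Here is a great resource on startups working on climate change:"
```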
unique link to this extract


From pollution to solution • Cremieux Recueil

Andrew Song:

»

This post examines the transformative role that stratospheric aerosol injection of SO₂ can play in moderating change in Earth’s climate. While many fear the corrosive effects of SO₂ based on its familiar ground-level impacts, its application in the upper atmosphere tells a different story altogether—one of cooling the Earth effectively and potentially reversing some of the most severe effects of global warming.

…First, we’ll determine the total area affected. To do this, we’ll assume aerosols spread uniformly over the Earth’s surface, an area of approximately 510 million square kilometers.

Second, we’ll compute the sulfuric acid required to produce mildly acidic rain. To impact our designated area, we’ll need 0.01 metric tons of sulfuric acid per square kilometer. Thus, the total required sulfuric acid is 0.01 metric tons per square kilometer * 510 million square kilometers = 5.1 million metric tons.

Third, we’ll compute the SO₂ needed to produce mildly acidic rain. Considering the conversion efficiency of 70%, the total SO₂ needed would be 5.1 million metric tons over the 70% efficiency, or approximately 7.29 million metric tons of SO₂.

Thus, we have our upper limit of 7.29 million metric tons of SO₂ that can be pumped into the atmosphere safely each year. Any more and we’ll end up with acidic rain. The next question is, does the amount bring global temperatures down meaningfully? Luckily, this exact question has been modeled extensively by scientists and featured in a report by the United Nations Environment Programme, which states:

»

It is estimated that continuous injection rates of 8–16 million metric tons of sulphur dioxide (SO₂) per year (approximately equivalent to the estimated injection amount of Mount Pinatubo in the single year of 1991) would reduce global mean temperature by 1°C.

«

«

Song is a co-founder of Make Sunsets, “the first company in the world to commercialise stratospheric aerosol injection as a service to cool the Earth”. It has some big-name VC backers. The bigger question is whether we want private companies noodling with the atmosphere. (Though in a sense, they have been for centuries already.)
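
The back-of-envelope numbers in the extract are easy to check; here they are as a few lines of Python, using only the figures quoted above:

```python
EARTH_SURFACE_KM2 = 510e6      # approximate surface area of the Earth, in km²
H2SO4_PER_KM2 = 0.01           # metric tons of sulfuric acid per km² for "mildly acidic" rain
CONVERSION_EFFICIENCY = 0.70   # fraction of injected SO₂ that ends up as sulfuric acid

sulfuric_acid_total = H2SO4_PER_KM2 * EARTH_SURFACE_KM2        # 5.1 million metric tons
so2_upper_limit = sulfuric_acid_total / CONVERSION_EFFICIENCY  # ≈ 7.29 million metric tons

print(f"{sulfuric_acid_total / 1e6:.2f} million metric tons of H2SO4")   # 5.10
print(f"{so2_upper_limit / 1e6:.2f} million metric tons of SO2 a year")  # 7.29
```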
unique link to this extract


“I was misidentified as shoplifter by facial recognition tech” • BBC News

James Clayton:

»

Sara needed some chocolate – she had had one of those days – so wandered into a Home Bargains store.

“Within less than a minute, I’m approached by a store worker who comes up to me and says, ‘You’re a thief, you need to leave the store’.”

Sara – who wants to remain anonymous – was wrongly accused after being flagged by a facial-recognition system called Facewatch. She says after her bag was searched she was led out of the shop, and told she was banned from all stores using the technology.

“I was just crying and crying the entire journey home… I thought, ‘Oh, will my life be the same? I’m going to be looked at as a shoplifter when I’ve never stolen’.” Facewatch later wrote to Sara and acknowledged it had made an error.

Facewatch is used in numerous stores in the UK – including Budgens, Sports Direct and Costcutter – to identify shoplifters. The company declined to comment on Sara’s case to the BBC, but did say its technology helped to prevent crime and protect frontline workers. Home Bargains, too, declined to comment.

It’s not just retailers who are turning to the technology. On a humid day in Bethnal Green, in east London, we joined the police as they positioned a modified white van on the high street. Cameras attached to its roof captured thousands of images of people’s faces. If they matched people on a police watchlist, officers would speak to them and potentially arrest them.

Unflattering references to the technology liken the process to a supermarket checkout – where your face becomes a bar code.

On the day we were filming, the Metropolitan Police said they made six arrests with the assistance of the tech. That included two people who breached the terms of their sexual-harm prevention orders, a man wanted for grievous bodily harm and a person wanted for the assault of a police officer.

«

unique link to this extract


How BBC’s breaking news alerts are giving voters – and political parties – an electoral buzz • The Guardian

Jim Waterson:

»

The most powerful person in British media during this election, in terms of having the most direct access to voters, is no longer the editor of BBC’s News at Six or the person who chooses the headlines on Radio 2. Nor are they a newspaper editor, a TikTok influencer, or a podcaster. Instead, they’re the anonymous on-shift editor of the BBC News app, making snap judgments on whether to make the phones of millions of Britons buzz with a breaking news push alert.

The BBC does not publish user numbers, but external research suggests about 12.6 million Britons have its news app installed. BBC newsroom sources say the actual number is higher and the assumption is that about 60% of users have notifications enabled. This means that on a conservative estimate, a typical push alert is reaching the phones of seven million Britons – more than any other broadcast news bulletin in the UK.

Craig Oliver, David Cameron’s former director of communications, said influencing the BBC’s coverage was the main objective for all political press officers. This used to mean phoning the editors of specific television news shows. Now the focus is shifting online: “The sheer scale of the website alone and its breaking news alerts is huge. Once something gets into the water supply of the BBC, it’s very hard to get it out.”

The audience for the BBC’s News at Six, the most-watched television news show in the country, is down to about 3.5 million viewers – and its audience is ageing; the average BBC One television viewer is now in their 60s. Print newspaper front pages remain heavily analysed, but their reach has collapsed. Twenty years ago, the Sun still sold 3.5m copies a day. Now the biggest selling print newspaper is the Daily Mail, on 700,000 copies.

All have been quietly overtaken by the BBC News app push alert, which was only widely adopted a decade ago. An alert can drive readers to open the full news story – or its headline can exist as a standalone nugget of news, a 25-word summary destined to remain unclicked. Some people are driven to distraction by breaking news buzzes, while others grab their phones the moment they see them arrive.

«

Waterson is billed as the “political media editor”, which I hope is only temporary during the election.
unique link to this extract


Google is playing a dangerous game with AI search • The Atlantic

Caroline Mimbs Nyce tried Google’s new AI search on health questions:

»

This risk with generative AI isn’t just about Google spitting out blatantly wrong, eyeroll-worthy answers. As the AI research scientist Margaret Mitchell tweeted, “This isn’t about ‘gotchas,’ this is about pointing out clearly foreseeable harms.” Most people, I hope, should know not to eat rocks. The bigger concern is smaller sourcing and reasoning errors—especially when someone is Googling for an immediate answer, and might be more likely to read nothing more than the AI overview. For instance, it told me that pregnant women could eat sushi as long as it doesn’t contain raw fish. Which is technically true, but basically all sushi has raw fish. When I asked about ADHD, it cited AccreditedSchoolsOnline.org, an irrelevant website about school quality.

When I Googled How effective is chemotherapy?, the AI overview said that the one-year survival rate is 52%. That statistic comes from a real scientific paper, but it’s specifically about head and neck cancers, and the survival rate for patients not receiving chemotherapy was far lower. The AI overview confidently bolded and highlighted the stat as if it applied to all cancers.

In certain instances, a search bot might genuinely be helpful. Wading through a huge list of Google search results can be a pain, especially compared with a chatbot response that sums it up for you. The tool might also get better with time.

Still, it may never be perfect. At Google’s size, content moderation is incredibly challenging even without generative AI. One Google executive told me last year that 15% of daily searches are ones the company has never seen before. Now Google Search is stuck with the same problems that other chatbots have: Companies can create rules about what they should and shouldn’t respond to, but they can’t always be enforced with precision.

«

Not that it “may” never be perfect; it can never be perfect. Search is already incredibly difficult, and the sooner people – including people at Google – understand that, the better.
unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.

Errata, corrigenda and ai no corrida: none notified
