
Production of Sony’s PSVR 2 headset has been halted as stocks build up, according to a new report. CC-licensed photo by Marco Verch on Flickr.
You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.
There’s another post coming this week at the Social Warming Substack on Friday at 0845 UK time. Free signup.
A selection of 10 links for you. Not overstocked. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. Observations and links welcome.
Apple may hire Google to power new iPhone AI features using Gemini—report • Ars Technica
Benj Edwards:
»
On Monday, Bloomberg reported that Apple is in talks to license Google’s Gemini model to power AI features like Siri in a future iPhone software update coming later in 2024, according to people familiar with the situation. Apple has also reportedly conducted similar talks with ChatGPT maker OpenAI.
The potential integration of Google Gemini into iOS 18 could bring a range of new cloud-based (off-device) AI-powered features to Apple’s smartphone, including image creation or essay writing based on simple prompts. However, the terms and branding of the agreement have not yet been finalized, and the implementation details remain unclear. The companies are unlikely to announce any deal until Apple’s annual Worldwide Developers Conference in June.
Gemini could also bring new capabilities to Apple’s widely criticized voice assistant, Siri, which trails newer AI assistants powered by large language models (LLMs) in understanding and responding to complex questions. Rumors of Apple’s own internal frustration with Siri—and potential remedies—have been kicking around for some time. In January, 9to5Mac revealed that Apple had been conducting tests with a beta version of iOS 17.4 that used OpenAI’s ChatGPT API to power Siri.
As we have previously reported, Apple has also been developing its own AI models, including a large language model codenamed Ajax and a basic chatbot called Apple GPT. However, the company’s LLM technology is said to lag behind that of its competitors, making a partnership with Google or another AI provider a more attractive option.
Google launched Gemini, a language-based AI assistant similar to ChatGPT, in December and has updated it several times since. Many industry experts consider the larger Gemini models to be roughly as capable as OpenAI’s GPT-4 Turbo, which powers the subscription versions of ChatGPT. Until just recently, with the emergence of Gemini Ultra and Claude 3, OpenAI’s top model held a fairly wide lead in perceived LLM capability.
The potential partnership between Apple and Google could significantly impact the AI industry, as Apple’s platform represents more than 2 billion active devices worldwide. If the agreement gets finalized, it would build upon the existing search partnership between the two companies, which has seen Google pay Apple billions of dollars annually to make its search engine the default option on iPhones and other Apple devices.
«
One would slightly expect that Google will be paying Apple for this, rather than vice versa, as would happen anywhere else.
unique link to this extract
Apple and Microsoft’s industry-defining legal battle started 36 years ago • Quartz
Laura Bratton:
»
Apple launched a landmark lawsuit against Microsoft 36 years ago on March 18, 1988 — the outcome of which defined how tech companies could use one another’s ideas as they developed then-groundbreaking computer software.
In its $5.5bn suit, Apple alleged that Microsoft copied its computers’ look and feel with Windows version 2.03. At the time, Apple’s computers were the first to have graphical user interfaces (or GUIs) — a new and exciting way for users to interact with computer screens using visual elements like icons, buttons, and menus rather than text. Apple released its first commercial computer with a GUI named “Lisa” in 1983. Here’s how the New York Times described GUIs five years later:
“…virtually all personal computer makers are moving toward more of a Macintosh look. That appearance is based on what the industry calls a graphical user interface, in which information appears in windows and operations are carried out by pointing at objects and menus using a handheld device called a mouse – a major selling point of the Macintosh.”
Before the suit, the burgeoning tech giants were amicable. In 1985, they worked out a deal. Apple licensed its design elements to Microsoft for Windows version 1, and Apple got the rights to use some Microsoft products. But that all went by the wayside when Microsoft released the next version of Windows, which contained even more elements of Apple’s GUI.
…Apple’s suit was also dismissed, and the Supreme Court denied Apple’s final appeal in 1995, upholding a prior ruling that Microsoft’s use of its design elements was covered by their 1985 agreement or otherwise not copyrightable.
«
You could say that Apple has been launching quixotic lawsuits for this long, really. Failing at this didn’t stop it doing very much the same against Google over Android and Samsung over phone design.
unique link to this extract
Gatekeeping is Apple’s brand promise • Marginal REVOLUTION
Alex Tabarrok starts off quoting Steve Sinofsky, ex-Microsoft and now analyst:
»
»
Android has the kind of success Microsoft would envy, but not Apple, primarily because with that success came most all the same issues that Microsoft sees (still) with the Windows PC. The security, privacy, abuse, fragility, and other problems of the PC show up on Android at a rate like the PC compared to Macintosh and iPhone. Only this time it is not the lack of motivation bad actors have to exploit iPhone, rather it is the foresight of the Steve Jobs vision for computing. He pushed to have a new kind of computer that further encapsulated and abstracted the computer to make it safer, more reliable, more private, and secure, great battery life, more accessible, more consistent, always easier to use, and so on. These attributes did not happen by accident. They were the process of design and architecture from the very start. These attributes are the brand promise of iPhone as much as the brand promise of Android is openness, ubiquity, low price, choice.
The lesson of the first two decades of the PC and the first almost two decades of smartphones are that these ends of a spectrum are not accidental. These choices are not mutually compatible. You don’t get both. I know this is horrible to say and everyone believes that there is somehow malicious intent to lock people into a closed environment or an unintentional incompetence that permits bad software to invade an ecosystem. Neither of those would be the case. Quite simply, there’s a choice between engineering and architecting for one or the other and once you start you can’t go back. More importantly, the market values and demands both.
That is unless you’re a regulator in Brussels. Then you sit in an amazing government building and decide that it is entirely possible to just by fiat declare that the iPhone should have all the attributes of openness.
«
Apple’s promise to iPhone users is that it will be a gatekeeper. Gatekeeping is what allows Apple to promise greater security, privacy, usability and reliability. Gatekeeping is Apple’s brand promise.
«
Sinofsky is not a fan of the EU’s DMA.
unique link to this extract
DNA tests are uncovering the true prevalence of incest • The Atlantic
Sarah Zhang:
»
In 1975 a psychiatric textbook put the frequency of incest at one in a million.
But this number is almost certainly a dramatic underestimate. The stigma around openly discussing incest, which often involves child sexual abuse, has long made the subject difficult to study. In the 1980s, feminist scholars argued, based on the testimonies of victims, that incest was far more common than recognized, and in recent years, DNA has offered a new kind of biological proof. Widespread genetic testing is uncovering case after secret case of children born to close biological relatives—providing an unprecedented accounting of incest in modern society.
The geneticist Jim Wilson, at the University of Edinburgh, was shocked by the frequency he found in the UK Biobank, an anonymized research database: one in 7,000 people, according to his unpublished analysis, was born to parents who were first-degree relatives—a brother and a sister or a parent and a child. “That’s way, way more than I think many people would ever imagine,” he told me. And this number is just a floor: It reflects only the cases that resulted in pregnancy, that did not end in miscarriage or abortion, and that led to the birth of a child who grew into an adult who volunteered for a research study.
Most of the people affected may never know about their parentage, but these days, many are stumbling into the truth after AncestryDNA and 23andMe tests. Steve’s case was one of the first that [genetic genealogist CeCe] Moore worked on involving closely related parents. She now knows of well over 1,000 additional cases of people born from incest, the significant majority between first-degree relatives, with the rest between second-degree relatives (half-siblings, uncle-niece, aunt-nephew, grandparent-grandchild). The cases show up in every part of society, every stratum of income, she told me.
Neither AncestryDNA nor 23andMe informs customers about incest directly, so the thousand-plus cases Moore knows of all come from the tiny proportion of testers who investigated further. This meant, for example, uploading their DNA profiles to a third-party genealogy site to analyze what are known as “runs of homozygosity,” or ROH: long stretches where the DNA inherited from one’s mother and father is identical.
«
The implications of this are quite dramatic: the diseases from inbreeding can be horrendous.
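The ROH signal described in the extract is easy to illustrate. Below is a toy sketch with invented allele strings (not real genotype data): at each site a person carries a maternal and a paternal allele, and long stretches where the two agree are flagged as runs of homozygosity. The strings and the run-length threshold are illustrative assumptions only.

```python
# Toy illustration of "runs of homozygosity" (ROH): long stretches where
# the maternal and paternal alleles match. Data and threshold are invented.
def homozygous_runs(maternal: str, paternal: str, min_len: int = 5):
    """Yield (start, length) of runs where both alleles agree."""
    run_start, run_len = None, 0
    for i, (m, p) in enumerate(zip(maternal, paternal)):
        if m == p:
            if run_start is None:
                run_start = i
            run_len += 1
        else:
            if run_len >= min_len:
                yield (run_start, run_len)
            run_start, run_len = None, 0
    if run_len >= min_len:
        yield (run_start, run_len)

# Two invented allele sequences; real analyses span millions of sites.
mat = "AACGTTACGGTACCA"
pat = "AACGTTACGTTACCA"
print(list(homozygous_runs(mat, pat)))  # [(0, 9), (10, 5)]
```

In real data the runs of interest span megabases, and their total length is what distinguishes first-degree parental relatedness from ordinary background homozygosity.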
unique link to this extract
Nvidia reveals Blackwell B200 GPU, the ‘world’s most powerful chip’ for AI • The Verge
Sean Hollister:
»
Nvidia’s must-have H100 AI chip made it a multitrillion-dollar company, one that may be worth more than Alphabet and Amazon, and competitors have been fighting to catch up. But perhaps Nvidia is about to extend its lead — with the new Blackwell B200 GPU and GB200 “superchip.”
Nvidia says the new B200 GPU offers up to 20 petaflops of FP4 horsepower from its 208 billion transistors. Also, it says, a GB200 that combines two of those GPUs with a single Grace CPU can offer 30 times the performance for LLM inference workloads while also potentially being substantially more efficient. It “reduces cost and energy consumption by up to 25x” over an H100, says Nvidia.
Training a 1.8 trillion parameter model would have previously taken 8,000 Hopper GPUs and 15 megawatts of power, Nvidia claims. Today, Nvidia’s CEO says 2,000 Blackwell GPUs can do it while consuming just four megawatts.
On a GPT-3 LLM benchmark with 175 billion parameters, Nvidia says the GB200 has a somewhat more modest seven times the performance of an H100, and Nvidia says it offers four times the training speed.
«
I have absolutely no idea whether these numbers are impressive, modest, surprising or what. They just mean nothing at all to me. But I record them here so you know it happened.
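For what it’s worth, the training-cluster figures quoted above can be sanity-checked with simple arithmetic; this sketch uses only the numbers in the article (the “25x” cost/energy figure is Nvidia’s own separate claim):

```python
# Sanity-checking Nvidia's claim with the figures quoted in the article.
hopper_gpus, hopper_mw = 8_000, 15.0        # prior Hopper setup, 1.8T-param model
blackwell_gpus, blackwell_mw = 2_000, 4.0   # claimed Blackwell setup

gpu_ratio = hopper_gpus / blackwell_gpus    # 4x fewer GPUs
power_ratio = hopper_mw / blackwell_mw      # 3.75x less total power
print(f"GPU count down {gpu_ratio:.2f}x, cluster power down {power_ratio:.2f}x")

# Per-GPU draw actually rises slightly, so the gain is per-chip throughput,
# not lower power per chip.
per_gpu_kw_hopper = hopper_mw * 1000 / hopper_gpus      # 1.875 kW
per_gpu_kw_blackwell = blackwell_mw * 1000 / blackwell_gpus  # 2.0 kW
print(f"Per-GPU draw: {per_gpu_kw_hopper:.3f} kW -> {per_gpu_kw_blackwell:.3f} kW")
```

So the quoted numbers amount to roughly a 4x reduction in GPUs and a 3.75x reduction in power for the same training job, on Nvidia’s own figures.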
unique link to this extract
Sony hits pause on PSVR2 production as unsold inventory piles up • Bloomberg via The Business Times
Takashi Mochizuki:
»
Sony Group has paused production of its PSVR2 headset until it clears a backlog of unsold units, according to sources familiar with its plans, adding to doubts about the appeal of virtual reality gadgets.
Sales of the US$550 wearable accessory to the PlayStation 5 have slowed progressively since its launch and stocks of the device are building up, according to the sources, who asked not to be named as the information is not public. Sony has produced well over two million units of the product launched in February of last year, the sources said.
PSVR2 shipments have declined every quarter since its debut, according to IDC, which tracks deliveries to retailers rather than consumers. The surplus of assembled devices is throughout Sony’s supply chain, the sources said. Still, IDC’s Francisco Jeronimo sees a recovery for the product category in the coming years with the help of Apple’s entry. “We forecast the VR market to grow on average 31.5% per year between 2023 and 2028,” he said.
Alongside Meta Platforms, Sony has been one of the leading purveyors of virtual reality gear, but both have struggled to attract enough content and entertainment creators to make their platforms compelling. A similar problem stalks Apple’s much pricier Vision Pro headset, as it made its debut without tailored apps from key entertainment platforms Netflix and Alphabet’s YouTube.
…Tokyo-based Sony last month announced it is shutting down its PlayStation London division, which was focused on making virtual reality games. That move was part of a wider set of layoffs that also affected in-house studios such as Guerrilla Games, which had worked on creating a PSVR2-exclusive game in its popular Horizon series, Horizon Call of the Mountain.
«
unique link to this extract
VR headsets are approaching the eye’s resolution limits • IEEE Spectrum
Matthew Smith:
»
The Chinese consumer electronics company TCL Technology recently unveiled a monstrous, 163-inch 4K Micro-LED television that one home theater expert described as “tall as Darth Vader.” Each of the TV’s 8.3 million pixels is an independent, minuscule LED, a feat for which TCL charges over $100,000.
But here’s the real surprise: TCL’s new TV isn’t the most pixel-dense or exotic display ever produced. That honor goes to the emerging frontier of Micro-OLED and Micro-LED displays built for AR/VR headsets. Mojo Vision, a leader in micro-LED displays, recently demonstrated a full-color Micro-LED display frontplane with a density of 5,510 pixels per centimeter (14,000 pixels per inch) at CES 2024. That display, if blown up to the size of TCL’s television, would pack over 220 billion pixels.
Pixel densities that high may seem absurd, but absurd density is key for the next generation of augmented, mixed, and virtual reality headsets. Stuffing more pixels inside each centimeter allows not only lifelike visuals but also smaller, more compact displays that achieve a necessary level of visual resolution. But building displays at this scale isn’t easy, and it leads to unique technical hurdles the AR/VR industry is still learning to leap.
“Why so many pixels? Well, who wouldn’t want more pixels?” asks Patrick Wyatt, chief product officer at VR headset maker Varjo. “Now the question becomes, how do you do it?”
…Nordic Ren, CEO of Pimax, says the goal is not total pixel count but instead pixels-per-degree (PPD), a measurement of the pixels in each degree of the user’s field of vision. “PPD is pivotal in defining visual clarity,” says Ren. “Even for industry front-runners like Pimax and Apple, the immersion quality still isn’t optimal. Every incremental enhancement to resolution elevates the experience.”
«
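The pixels-per-degree measure Ren describes is simple division: horizontal pixels in the eye buffer over the horizontal field of view. A minimal sketch, with headset figures that are illustrative assumptions rather than specs from the article:

```python
# Pixels-per-degree (PPD): average pixels in each degree of field of view.
# Human foveal acuity is often cited at around 60 PPD ("retinal" clarity).
def pixels_per_degree(h_pixels: int, h_fov_deg: float) -> float:
    """Average horizontal pixels per degree of field of view."""
    return h_pixels / h_fov_deg

# Hypothetical per-eye buffers over a 100-degree FOV (illustrative numbers).
print(pixels_per_degree(2448, 100))  # 24.48 PPD, well short of 60
print(pixels_per_degree(3840, 100))  # 38.4 PPD
```

Note this is an average: lens distortion means actual PPD varies across the field, which is why high-end headsets quote centre-of-view figures.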
New Havana Syndrome studies find no evidence of brain injuries • The New York Times
Julian Barnes:
»
New studies by the National Institutes of Health failed to find evidence of brain injury in scans or blood markers of the diplomats and spies who suffered symptoms of Havana syndrome, bolstering the conclusions of U.S. intelligence agencies about the strange health incidents.
Spy agencies have concluded that the debilitating symptoms associated with Havana syndrome, including dizziness and migraines, are not the work of a hostile foreign power. They have not identified a weapon or device that caused the injuries, and intelligence analysts now believe the symptoms are most likely explained by environmental factors, existing medical conditions or stress.
The lead scientist on one of the two new studies said that while the study was not designed to find a cause, the findings were consistent with those determinations.
The authors said the studies are at odds with findings from researchers at the University of Pennsylvania, who found differences in brain scans of people with Havana syndrome symptoms and a control group.
Dr. David Relman, a prominent scientist who has had access to the classified files involving the cases and representatives of people suffering from Havana syndrome, said the new studies were flawed. Many brain injuries are difficult to detect with scans or blood markers, he said. He added that the findings do not dispute that an external force, like a directed energy device, could have injured the current and former government workers.
«
This is going to drag on and on. It seems strange that stress, alone, could cause the physical symptoms that people have complained of. At the very least, if there is a directed energy weapon – even a low-power one – the Russians have certainly got their money’s worth out of it in terms of disruption.
unique link to this extract
The exponential enshittification of science • Marcus on AI
Gary Marcus:
»
In my opinion, every [science publication] article with ChatGPT remnants should be considered suspect and perhaps retracted, because hallucinations may have filtered in, and both authors and reviewers were asleep at the switch.
And there is no way reviewers and journals are going to be able to keep up. Reviewers are typically unpaid academics who are already stretched to their limits; tripling their workload would not be feasible. And GenAI might do a lot worse than merely tripling workloads; the total number of articles may radically spike, many of them dubious and a waste of reviewers’ time. Lots of bad stuff is going to sneak in.
Not long ago a science-fiction outlet called Clarkesworld that allowed open submission was overrun by fake submissions. I will never forget the graph they shared, showing the exponential increase in the number of users they needed to ban each month.
I warned then that other areas would see the same. It looks like science is next.
If science journals and science itself are overrun by LLM-generated garbage, as now seems imminent, my agnostic “I can’t quite tell yet if LLMs will ultimately be of net benefit to society” is going to switch to “shut it down if they can’t fix this problem.” That’s a huge potential cost.
«
It’s pretty simple – you look for “Certainly, I can help you with”. It is, as he says, a big red flag.
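That red-flag check is trivially automatable; a minimal sketch, with a phrase list that is an illustrative assumption rather than any vetted or exhaustive set:

```python
# Scan text for boilerplate phrases that LLM chatbots commonly emit.
# The phrase list below is illustrative, not an official detection set.
RED_FLAGS = [
    "certainly, i can help you with",
    "as an ai language model",
    "regenerate response",
]

def llm_remnants(text: str) -> list[str]:
    """Return the red-flag phrases found (case-insensitive) in the text."""
    lowered = text.lower()
    return [phrase for phrase in RED_FLAGS if phrase in lowered]

abstract = "Certainly, I can help you with that. We studied tumor growth in..."
print(llm_remnants(abstract))  # ['certainly, i can help you with']
```

Of course, a string match only catches the laziest cases; once authors learn to delete the boilerplate, the hallucinations remain with no marker at all, which is Marcus’s point.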
unique link to this extract
How science sleuths track down bad research • WSJ
Nidhi Subbaraman:
»
It was early January when the Dana-Farber Cancer Institute received a complaint about signs of image manipulation in dozens of papers by senior researchers. Days later, the organization said it was seeking to retract or correct several of the studies, sending shock waves through the scientific community.
Mass General Brigham and Harvard Medical School were sent a complaint the same month: A collection of nearly 30 papers co-authored by another professor appeared to contain copied or doctored images.
The complaints were from different critics, but they had something in common. Both scientists—molecular biologist Sholto David and image expert Elisabeth Bik—had used the same tool in their analyses: an image-scanning software called Imagetwin.
Behind the recent spotlight on suspect science lies software such as Imagetwin, from a company based in Vienna, and another called Proofig AI, made by a company in Israel. The software tools aid scientists in scouring hundreds of studies and are turbocharging the process of spotting deceptive images.
Before the tools emerged, data detectives pored over images in published research with their own eyes, something that could take a few minutes or about an hour, with some people possessing a flair for seeing patterns. Now, the tech tools automate this effort, pointing to problematic images within a minute or two.
Scientific images offer a rare glimpse of raw data: millions of pixels tidily presented alongside the text of a paper. Common types of images include photographs of tissue slices and cells. Researchers say no two tissue samples taken from different animals should ever look the same under a microscope, nor should two different cell cultures.
When they do, that is a red flag.
…Imagetwin in particular offers a feature that none of the human detectives can replicate: It compares photos in one paper against a database of 51 million images reaching back 20 years, flagging photos copied from previous studies. “That is an amazing find that humans can never do,” Bik said.
«
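Imagetwin’s matching method isn’t public; as a much weaker illustration of the idea, here is a toy sketch that flags only byte-identical images reused across papers via content hashing. Real tools must also catch crops, rotations and contrast tweaks, which exact hashing cannot. Paper names and image bytes below are invented.

```python
# Toy duplicate-image flagging: hash each image's bytes and report any
# digest that appears in more than one paper. Exact matches only; a real
# tool like Imagetwin must handle cropped or adjusted copies too.
import hashlib

def find_exact_duplicates(papers: dict[str, list[bytes]]) -> dict[str, list[str]]:
    """Map each image digest seen more than once to the papers containing it."""
    seen: dict[str, list[str]] = {}
    for paper, images in papers.items():
        for img in images:
            digest = hashlib.sha256(img).hexdigest()
            seen.setdefault(digest, []).append(paper)
    return {d: ps for d, ps in seen.items() if len(ps) > 1}

# Invented "images" as raw bytes; paper_B reuses one of paper_A's blots.
papers = {
    "paper_A": [b"blot-1", b"blot-2"],
    "paper_B": [b"blot-2", b"blot-3"],
}
dupes = find_exact_duplicates(papers)
print([sorted(ps) for ps in dupes.values()])  # [['paper_A', 'paper_B']]
```

The 51-million-image comparison Bik describes is the hard part: it needs perceptual (similarity-tolerant) hashing and an index over two decades of published figures, not a simple digest lookup like this.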
• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?
Read Social Warming, my latest book, and find answers – and more.
Errata, corrigenda and ai no corrida: none notified