
Can Amazon make its Alexa line of products turn a profit by adding an LLM to them? CC-licensed photo by Stock Catalog on Flickr.
You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.
There’s another post coming this week at the Social Warming Substack on Friday at 0845 UK time. Free signup.
A selection of 9 links for you. Intelligent enough. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.
Samsung and Meta are looking into earbuds with cameras, following Apple’s AirPods’ lead • TechRadar
Carrie Marshall:
»
Apple isn’t the only firm considering sticking cameras into your earbuds, although it’s probably closer than most: as we reported last year, Apple has been experimenting with IR cameras in AirPods, and is apparently planning to use them to help inform AI and deliver the audio equivalent of smart glasses.
A new report says that Apple isn’t the only firm wanting to be an eye-in-ear pioneer. Meta and Samsung are apparently looking into people’s ears too, but the path to in-ear cameras has proved to be a little tricky.
The report, by Bloomberg, details the efforts of Apple’s earbud rivals. Meta’s system appears to have the same goal as Apple’s one – not to take photos or record video, but to analyze the world around you and provide input to AI assistants – and “would let users look at an object and ask the earbuds to analyze the item”, much like Meta’s Ray-Ban glasses do. However, such devices are at least a few years away.
Meta has encountered several issues, which presumably Apple has encountered too. The report says that there have been issues with people who have long hair, and Meta is apparently unsatisfied with the camera angles of the devices currently named “Camera Buds”.
As for Samsung, those legendary leakers “people with knowledge of the matter” say that the firm is also considering a version of earbuds with cameras inside. However as yet there’s no detail of how advanced that project is, or if it’s even begun.
Cameras on earbuds make a lot of sense as an alternative to the idea of smart AR glasses, because there will be a big hurdle to get people who don’t wear glasses normally to put them on.
«
Fabulous story about not one, not two, but three companies’ plans for products which may or may not be under development. The absolute pinnacle of speculative reporting. Though I think it will be a lot easier to persuade people to put AirPods (and similar) in their ears than to wear glasses they don’t need.
Though those would be some tiny, tiny cameras. How do you get the weight down, and the battery charged?
unique link to this extract
Beijing’s targeting of Taiwan’s undersea cables previews cross-strait tensions under a Trump presidency • The Diplomat
Hans Horan:
»
On January 5, the Taiwanese government alleged that the Chinese-owned vessel Shunxin-39 cut an undersea fiber-optic cable near Taiwan’s Keelung Harbor by reportedly dragging its anchor across the seabed. Taiwan’s government-run telecommunications operator, Chunghwa Telecom, discovered the alleged sabotage after receiving a disruption warning around 7:51 a.m. While the ship is reportedly registered in Cameroon and Tanzania, the Taiwanese Coast Guard stated that all seven crew members were Chinese nationals and the ship’s owner was based in Hong Kong.
On January 10, a director of the company operating Shunxin-39 denied the allegations, despite the ship’s movements reportedly supporting the sabotage hypothesis.
This incident appears to be the latest example of Beijing-directed “gray-zone harassment.” In 2023, similar sabotage severed two submarine cables connecting Taiwan’s Matsu Islands, which temporarily disrupted their internet services. This most recent incident highlights the complex dynamics of China’s gray-zone tactics against Taiwan. Most notably, its timing – just weeks before Donald Trump’s inauguration for a second term as the United States’ president – raises the stakes, with China potentially testing the resilience of the Taiwan-U.S. partnership and Washington’s broader commitment to Indo-Pacific security.
The investigation into the Shunxin-39 incident remains inconclusive thus far, though the incident is far from isolated.
«
“Dragging our anchor” is the new “shooting down your satellite”.
unique link to this extract
AI mistakes are very different from human mistakes • Schneier on Security
Bruce Schneier and Nathan Sanders:
»
Someone who makes calculus mistakes is also likely to respond “I don’t know” to calculus-related questions.
To the extent that AI systems make these human-like mistakes, we can bring all of our mistake-correcting systems to bear on their output. But the current crop of AI models—particularly LLMs—make mistakes differently.
AI errors come at seemingly random times, without any clustering around particular topics. LLM mistakes tend to be more evenly distributed through the knowledge space. A model might be equally likely to make a mistake on a calculus question as it is to propose that cabbages eat goats.
And AI mistakes aren’t accompanied by ignorance. An LLM will be just as confident when saying something completely wrong—and obviously so, to a human—as it will be when saying something true. The seemingly random inconsistency of LLMs makes it hard to trust their reasoning in complex, multi-step problems. If you want to use an AI model to help with a business problem, it’s not enough to see that it understands what factors make a product profitable; you need to be sure it won’t forget what money is.
This situation indicates two possible areas of research. The first is to engineer LLMs that make more human-like mistakes. The second is to build new mistake-correcting systems that deal with the specific sorts of mistakes that LLMs tend to make.
…Humans may occasionally make seemingly random, incomprehensible, and inconsistent mistakes, but such occurrences are rare and often indicative of more serious problems. We also tend not to put people exhibiting these behaviors in decision-making positions. Likewise, we should confine AI decision-making systems to applications that suit their actual abilities—while keeping the potential ramifications of their mistakes firmly in mind.
«
‘Severance’: Apple TV+ series has generated $200m for streamer • Deadline
Max Goldbart:
»
Severance’s long-awaited second season returned to the small screen last Friday and research from Parrot Analytics has found that the first season generated more than $200m for the tech giant.
Parrot came to these figures via its Content Valuation methodology, which uses a formula to correlate audience demand with subscribers and therefore revenue. The system also examines how shows and movies generate value for streamers in markets across the globe.
According to Parrot, Severance is doing well compared with Apple hits like Slow Horses and The Morning Show. The former generated $184.8M during a similar timeframe to Severance Season 1, while the latter made $299.4M but across a much longer period of time. From Q3 2020 to Q3 2024, Ted Lasso, which has been teasing a fourth season, generated a whopping $609.4M, Parrot said.
As an acquisition driver, Parrot noted that the EMEA and Latin America regions have seen the greatest contribution from Severance.
Severance Season 2 launched last Friday but is dropping episodes weekly, meaning fans will have to wait patiently for their fix of Mark S, Helly R and Mr Milchick. This builds into Parrot Senior Entertainment Industry Strategist Brandon Katz’s notion that “the critically acclaimed first season of Severance not only aligns with Apple’s premium brand, but provided a long tail of value for the streamer.”
Parrot’s research found that almost half of the revenue generated by Dan Erickson and Ben Stiller’s hit came in the 12 months after the finale, which “underscores the show’s unique ability to elicit catch-up viewing and rewatches from hungry fans,” according to Katz. It is perhaps no wonder then that Apple has chosen weekly drops for Severance Season 2.
“All of this, plus its healthy pre-release demand trends, sets the stage for a ‘break out sequel’ type of performance for Season 2, which would help Apple fill the anchor series void without Ted Lasso,” added Katz. When looking at the 28 days leading into the upcoming season, Severance Season 2 now compares favorably to rival hits such as Cobra Kai, The Mandalorian and The Lord of the Rings: The Rings of Power, Parrot said, backing up Katz’s claim.
«
The subscription for Apple TV+ (to get Severance etc) is $9.99 monthly, or £8.99 – which is ~$120 annually. I don’t see how you can get to $200m “generated” from the first season unless you think around two million people signed up and stayed signed up specifically because of Severance. (Which is, no argument, terrific.) I find that hard to credit.
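The back-of-envelope version of that arithmetic, for anyone who wants to check it (this assumes the $9.99/month US price and treats the whole $200m as subscription revenue — a simplification, since Parrot’s “Content Valuation” methodology isn’t public):

```python
# Rough check: how many year-long subscribers would it take for
# Severance season 1 alone to "generate" $200m for Apple TV+?
MONTHLY_PRICE_USD = 9.99                     # US price at time of writing
ANNUAL_PRICE_USD = MONTHLY_PRICE_USD * 12    # ~$120 per year
REVENUE_CLAIM_USD = 200_000_000              # Parrot Analytics' figure

subscriber_years = REVENUE_CLAIM_USD / ANNUAL_PRICE_USD
print(f"Implied subscriber-years: {subscriber_years:,.0f}")
# i.e. roughly 1.7 million people paying for a full year
# attributable to this one show
```

Even on this generous reading, you need the best part of two million people subscribing for a whole year purely because of one series — which is the claim I find hard to credit.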
unique link to this extract
A Samsung integration helps make Google’s Gemini the AI assistant to beat • The Verge
David Pierce:
»
According to recent reporting from The Wall Street Journal, CEO Sundar Pichai now believes Gemini has surpassed ChatGPT, and he wants Google to have 500 million users by the end of this year. It might just get there one Samsung phone at a time. [The new Galaxy phones use Google Gemini by default, rather than Samsung’s Bixby.]
Gemini is now a front-and-center feature on the world’s most popular Android phones, and millions upon millions of people will likely start to use it more — or use it at all — now that it’s so accessible. For Google, which is essentially betting that Gemini is the future of every single one of its products, that brings a hugely important new set of users and interactions. All that data makes Gemini better, which makes it more useful, which makes it more popular. Which makes it better again.
Right now, Google appears to be well ahead of its competitors in one important way: Gemini is the most capable virtual assistant on the market right now, and it’s not particularly close. It’s not that Gemini is specifically great; it’s just that it has more access to more information and more users than anyone else. This race is still in its early stages, and no AI product is very good yet — but Google knows better than anyone that if you can be everywhere, you can get good really fast. That worked so well with search that it got Google into antitrust trouble. This time, at least so far, it seems like Google’s going to have an even easier time taking over the market.
«
Bet there’s a juicy contract for Samsung to use Gemini rather than Bixby. However, I keep seeing examples of terrible misinformation being quoted by people in screenshots of Gemini results. Do not trust the chatbots. Do not trust the search engines. Check it yourself.
unique link to this extract
Anthropic chief says AI could surpass “almost all humans at almost everything” shortly after 2027 • Ars Technica
Benj Edwards:
»
Anthropic CEO Dario Amodei predicted that AI models may surpass human capabilities “in almost everything” within two to three years, according to a Wall Street Journal interview at the World Economic Forum in Davos, Switzerland.
Speaking at Journal House in Davos, Amodei said, “I don’t know exactly when it’ll come, I don’t know if it’ll be 2027. I think it’s plausible it could be longer than that. I don’t think it will be a whole bunch longer than that when AI systems are better than humans at almost everything. Better than almost all humans at almost everything. And then eventually better than all humans at everything, even robotics.”
Amodei co-founded Anthropic in 2021 with his sister, Daniela Amodei, and five other former OpenAI employees. Not long after, Anthropic emerged as a strong technological competitor to OpenAI’s AI products (such as GPT-4 and ChatGPT). Most recently, its Claude 3.5 Sonnet model has remained highly regarded among some AI users and highly ranked among AI benchmarks.
During the WSJ interview, Amodei also spoke some about the potential implications of highly intelligent AI systems when these AI models can control advanced robotics.
“[If] we make good enough AI systems, they’ll enable us to make better robots. And so when that happens, we will need to have a conversation… at places like this event, about how do we organize our economy, right? How do humans find meaning?”
He then shared his concerns about how human-level AI models and robotics that are capable of replacing all human labour may require a complete re-think of how humans value both labour and themselves.
“We’ve recognized that we’ve reached the point as a technological civilization where the idea, there’s huge abundance and huge economic value, but the idea that the way to distribute that value is for humans to produce economic labour, and this is where they feel their sense of self worth,” he added. “Once that idea gets invalidated, we’re all going to have to sit down and figure it out.”
«
Clever timescale: close enough to feel dangerous, far enough away to be deniable. Perhaps even further. “Better than humans at almost everything”? Really?
unique link to this extract
Solar-charging backpacks are helping children in Africa to read after dark • CNN
Joshua Korber Hoffman:
»
Fewer than half of households in mainland Tanzania are connected to electricity. This falls to just over a third in rural areas. Consequently, many families rely on kerosene lamps to provide light after dark.
These lamps produce dim light and are expensive to fill. They also pollute the air and carry the risk of burns. Parents often opt to send their children to bed, James explained, rather than allowing them to use the lamp to read.
James’ solution – flexible solar panels sewn onto the outside of bags to power a reading light – was inspired by a university professor who carried around a solar charger for his phone, sewn into a fabric pouch. “It gave me the confidence that what I want is going to work,” said James.
He started in 2016 by handmaking 80 backpacks per month, sewing on a solar panel sourced from China that charged during the children’s walk to and from school. By the time they returned home, they would have enough power for a reading light. A fully charged bag can power a light for six to eight hours, meaning that one day of bright weather can allow for multiple nights of reading, even if cloudy weather arrives.
James says the solar backpacks are more affordable than using an oil lamp. A solar bag costs between 12,000 and 22,500 Tanzanian shillings (approximately $4-8), with the reading light included – the same price as 12-22.5 days of using a kerosene lamp, according to an average cost estimated in a survey of Soma Bags customers.
Sold mainly from his growing franchise of mobile library carts, the bags became popular, and James increased production. He founded Soma Bags in 2019 and oversaw the construction of his own factory in the village of Bulale, in the Mwanza region, in 2020. The company now employs 65 staff.
«
AI simulates 500 million years of evolution to discover artificial fluorescent protein • EL PAÍS English
Javier Yanes:
»
In New York, a group of former researchers from Meta — the parent company of social networks Facebook, Instagram, and WhatsApp — founded EvolutionaryScale, an AI startup focused on biology. The EvolutionaryScale Model 3 (ESM3) system created by the company is a generative language model — the same kind of platform that powers ChatGPT. However, while ChatGPT generates text, ESM3 generates proteins, the fundamental building blocks of life.
ESM3 feeds on sequence, structure, and function data from existing proteins to learn the biological language of these molecules and create new ones. Its creators have trained it with 771 billion tokens of data derived from 3.15 billion sequences, 236 million structures, and 539 million functional traits. This adds up to more than one trillion teraflops (a measure of computational performance) — the most computing power ever used in biology, according to the company.
…Rives and his collaborators applied ESM3 to the task of creating a new green fluorescent protein (GFP). GFP is a naturally occurring protein that glows green under ultraviolet light and is commonly used in research as a marker. The first GFP was discovered in a jellyfish, but other versions can also be found in corals and anemones. The scientists trained ESM3 to generate a new GFP, and the result surprised them: a fluorescent protein, which they named esmGFP, that is only 58% similar to the most closely related GFP. According to the researchers, this is equivalent to simulating 500 million years of evolution. ESM3 is now available to the scientific community as a new tool for designing proteins with therapeutic functions, environmental remediation capabilities, and other potential applications.
Thus, AI has uncovered a path that nature could have taken 500 million years ago, but for reasons unknown, did not.
«
Amazon races to transplant Alexa’s ‘brain’ with generative AI • Financial Times
Madhumita Murgia and Camilla Hodgson:
»
Amazon is gearing up to relaunch its Alexa voice-powered digital assistant as an artificial intelligence “agent” that can complete practical tasks, as the tech group races to resolve the challenges that have dogged the system’s AI overhaul.
The $2.4tn company has for the past two years sought to redesign Alexa, its conversational system embedded within 500mn consumer devices worldwide, so the software’s “brain” is transplanted with generative AI.
Rohit Prasad, who leads the artificial general intelligence (AGI) team at Amazon, told the Financial Times the voice assistant still needed to surmount several technical hurdles before the rollout.
This includes solving the problem of “hallucinations” or fabricated answers, its response speed or “latency”, and reliability. “Hallucinations have to be close to zero,” said Prasad. “It’s still an open problem in the industry, but we are working extremely hard on it.”
The vision of Amazon’s leaders is to transform Alexa, which is still used for a narrow set of simple tasks such as playing music and setting alarms, to an “agentic” product that acts as a personalised concierge. This could include anything from suggesting restaurants to configuring the lights in the bedroom based on a person’s sleep cycles.
Alexa’s redesign has been in train since the launch of OpenAI’s ChatGPT, backed by Microsoft, in late 2022.
«
Just a reminder that back in July, the WSJ reported that “Between 2017 and 2021, Amazon had more than $25bn in losses from its devices business, according to the documents. The losses for the years before and after that period couldn’t be determined.” Wonder if ChatGPT is going to fix that. I have my doubts.
unique link to this extract
• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?
Read Social Warming, my latest book, and find answers – and more.
Errata, corrigenda and ai no corrida: none notified