Start Up No.2455: how 2017’s US tax law kills tech jobs, Apple’s comma problem, DOGE’s flawed AI, testing Alexa+, and more


Chinlone, or caneball, is a popular game under threat in Myanmar due to the country’s internal conflicts. CC-licensed photo by Scott Edmunds on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.

A selection of 9 links for you. Keeping up. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


The tax code time bomb fueling mass tech layoffs • Quartz

Catherine Baab:

»

Between 2022 and today, a little-noticed tweak to the U.S. tax code has quietly rewired the financial logic of how American companies invest in research and development. Outside of CFO and accounting circles, almost no one knew it existed. “I work on these tax write-offs and still hadn’t heard about this,” a chief operating officer at a private-equity-backed tech company told Quartz. “It’s just been so weirdly silent.”

Still, the delayed change to a decades-old tax provision — buried deep in the 2017 tax law — has contributed to the loss of hundreds of thousands of high-paying, white-collar jobs. That’s the picture that emerges from a review of corporate filings, public financial data, analysis of timelines, and interviews with industry insiders. One accountant, working in-house at a tech company, described it as a “niche issue with broad impact,” echoing sentiments from venture capital investors also interviewed for this article. Some spoke on condition of anonymity to discuss sensitive political matters.

Since the start of 2023, more than 500,000 tech workers have been laid off, according to industry tallies. Headlines have blamed over-hiring during the pandemic and, more recently, AI. But beneath the surface was a hidden accelerant: a change to what’s known as Section 174 that helped gut in-house software and product development teams everywhere from tech giants such as Microsoft and Meta to much smaller, private, direct-to-consumer and other internet-first companies.

…For almost 70 years, American companies could deduct 100% of qualified research and development spending in the year they incurred the costs. Salaries, software, contractor payments — if it contributed to creating or improving a product, it came off the top of a firm’s taxable income.

The deduction was guaranteed by Section 174 of the Internal Revenue Code of 1954, and under the provision, R&D flourished in the U.S.

…When Congress passed the Tax Cuts and Jobs Act (TCJA), the signature legislative achievement of President Donald Trump’s first term, it slashed the corporate tax rate from 35% to 21% — a massive revenue loss on paper for the federal government.

To make the 2017 bill comply with Senate budget rules, lawmakers needed to offset the cost. So they added future tax hikes that wouldn’t kick in right away, wouldn’t provoke immediate backlash from businesses, and could, in theory, be quietly repealed later.

The delayed change to Section 174 — from immediate expensing of R&D to mandatory amortization, meaning that companies must spread the deduction out in smaller chunks over five or even 15-year periods — was that kind of provision.

«
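
To see why that bites, here is a minimal sketch with made-up numbers, simplified to straight-line five-year amortization (the real rules add further wrinkles, including the 15-year schedule the extract mentions). A firm that spends all of its revenue on developer salaries used to report zero taxable profit; under amortization it can deduct only a fifth of that spend in year one, so it owes tax on a paper profit even though its cash position is unchanged.

```python
# Illustrative toy numbers only: simplified straight-line amortization of R&D.
CORPORATE_TAX_RATE = 0.21

def taxable_income(revenue: float, r_and_d: float, amortization_years: int = 1) -> float:
    """Deduct R&D immediately (amortization_years=1) or spread it evenly."""
    deduction_this_year = r_and_d / amortization_years
    return revenue - deduction_this_year

# A hypothetical software firm: $10m revenue, $10m spent on developer salaries.
old_rules = taxable_income(10_000_000, 10_000_000, amortization_years=1)
new_rules = taxable_income(10_000_000, 10_000_000, amortization_years=5)

print(f"Immediate expensing:    taxable income ${old_rules:,.0f}, tax ${old_rules * CORPORATE_TAX_RATE:,.0f}")
print(f"Five-year amortization: taxable income ${new_rules:,.0f}, tax ${new_rules * CORPORATE_TAX_RATE:,.0f}")
# Immediate expensing:    taxable income $0, tax $0
# Five-year amortization: taxable income $8,000,000, tax $1,680,000
```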

Complex, but: significant. Goes to show how a badly-written bill can screw things up. Speaking of which…
unique link to this extract


A simple comma is going to cost Apple billions in Europe • Cafetech

Jérôme Marin:

»

The disagreement between Apple and Brussels centres on Article 5.4 [of the Digital Markets Act]. In its English version, the article states that the gatekeeper—the term used by the Commission for the seven major tech companies subject to the DMA—“shall allow business users, free of charge, to communicate and promote offers, including under different conditions […], and to conclude contracts with those end users.”

This lengthy sentence creates ambiguity: what exactly does “free of charge” apply to? Apple claims it only applies to “communicate” and “promote,” meaning the right to insert redirect links in an app. But not to “conclude contracts,” meaning making purchases. Based on that, Apple argues it can still charge commissions on those external transactions.

The European Commission interprets it differently: contract conclusion must also be free of charge. It relies on the comma before the phrase “and to conclude contracts,” turning the sentence into an “enumeration.” “That ‘free of charge’ applies to all that is being enumerated after”, it explains in its detailed decision sent to Apple as part of the €500m fine, which was made public last week.

“In other words, the price for app developers to pay [for external purchases] is zero,” writes the Commission. However, its case could be weakened by inconsistencies in the French and German translations of the text, which it acknowledges are “ambiguous.” Still, “other linguistic versions leave no room for interpretation,” notes Brussels.

«

My understanding was always that lawyers would draft clauses like that without commas in order to avoid ambiguities like this. Commas should only be used in legal documents where their use won’t introduce ambiguity (which isn’t often).

You’d think Apple might have allowed for an adversarial reading like this.
unique link to this extract


DOGE developed error-prone AI to help kill Veterans Affairs contracts • ProPublica

Brandon Roberts, Vernal Coleman and Eric Umansky:

»

As the Trump administration prepared to cancel contracts at the Department of Veterans Affairs this year, officials turned to a software engineer with no health care or government experience to guide them.

The engineer, working for the Department of Government Efficiency, quickly built an artificial intelligence tool to identify which services from private companies were not essential. He labeled those contracts “MUNCHABLE.”

The code, using outdated and inexpensive AI models, produced results with glaring mistakes. For instance, it hallucinated the size of contracts, frequently misreading them and inflating their value. It concluded more than a thousand were each worth $34 million, when in fact some were for as little as $35,000.

The DOGE AI tool flagged more than 2,000 contracts for “munching.” It’s unclear how many have been or are on track to be canceled — the Trump administration’s decisions on VA contracts have largely been a black box. The VA uses contractors for many reasons, including to support hospitals, research and other services aimed at caring for ailing veterans.

VA officials have said they’ve killed nearly 600 contracts overall. Congressional Democrats have been pressing VA leaders for specific details of what’s been canceled without success.

We identified at least two dozen on the DOGE list that have been canceled so far. Among the canceled contracts was one to maintain a gene sequencing device used to develop better cancer treatments. Another was for blood sample analysis in support of a VA research project. Another was to provide additional tools to measure and improve the care nurses provide.

ProPublica obtained the code and the contracts it flagged from a source and shared them with a half dozen AI and procurement experts. All said the script was flawed.

«

Who could have guessed etc.
unique link to this extract


Hollywood already uses generative AI (and is hiding it) • Vulture

Lila Shapiro:

»

Hollywood has lately been in what media observers have described as an “existential crisis,” an implosion, and a “death spiral.” Studios were making fewer films, and fewer people were watching the ones that got made. Layoffs were mounting. Depending on whom you asked, AI was either hastening the end or offering a lifeline. It had brought a proliferation of tools that were capable, to varying degrees of success, of creating every component of a film: the script, the footage, the soundtrack, the actors.

Among the many concerns rattling nerves around town, one significant issue was that nearly every AI model capable of generating video had been trained on vast troves of copyrighted material scraped from the internet without permission or compensation. When the writers and actors unions ended their strikes in 2023, the new contracts included guardrails on AI — ensuring, for the moment anyway, that their members retained some measure of control over how the studios could use the technology.

The new contracts barred studios from using scripts written by AI and from digitally cloning actors without explicit consent and compensation. But they left the door open for certain uses, particularly with generative video: studios can use models to create synthetic performers and other sorts of footage — including visual effects and entire scenes — so long as a human is nominally in charge and the unions are given a chance to bargain over terms. Even with that latitude, the industry is hemmed in by a growing thicket of legal uncertainty, especially around how these systems were trained in the first place.

Over 35 copyright-related lawsuits have been filed against AI companies so far, the outcomes of which could determine whether generative AI has a future in Hollywood at all. As one producer put it to me, “The biggest fear in all of Hollywood is that you’re going to make a blockbuster, and guess what? You’re going to sit in litigation for the next 30 years.”

«

But, as Shapiro finds, they are using it and hoping they’ll be able to short-circuit the courts bit relatively quickly.
unique link to this extract


Seventeen takeaways from Mary Meeker’s AI Deck • Read Trung

Trung Phan:

»

Mary Meeker — a leading Wall Street tech analyst during the 1990s turned widely-read VC — published her famous Internet Trends report for the first time since 2019.

She came out of retirement with a 340-page slide deck because of the rise of generative AI.

If you spend any time on social, the AI hype has gotten a bit out of hand. Every day, someone is posting a new AI video or text or image feature with the clickbait title “this is insane”, “this is wild”, “this is crazy”, “Google is dead”, “wow, Apple is cooked”, “[Insert Fortune 50 Company] is over” or “drive down to Walmart, buy some adult diapers and put them on because you will lose bladder control after seeing what you’re about to see”.

It’s a bit much (and I am guilty of doing every single one of those hooks).

Meeker’s deck does a great job of contextualizing the hype and it’s not as hype-y as it seems once you read through her stats. Specifically, the consumer adoption and infrastructure buildout of AI is happening at an unprecedented pace.

While the $1T in Big Tech data centre spend may not yield a great return for all the players involved, I think it’s a pretty massive win for consumers. There are clear parallels to the Dotcom Bubble when telecoms spent $500B+ on fiber and cables etc. Many went bust but we were left with massive data infrastructure that led to later internet and mobile waves (for sure, the technology itself is amoral and was used for the entirety of the human experience…just like AI will).

On the consumer side, Meeker paints an impressive picture of ChatGPT. Its growth blows away previous tech platforms. While this does not guarantee OpenAI will win the long-run generative AI race — or even be around decades from now — it has an enviable position with huge mindshare.

«

Happily, by cutting Meeker’s deck down to 5% of its original length, Phan makes this eminently readable. In short: number go up.
unique link to this extract


Early access to Alexa+: testing out Amazon’s next-gen AI assistant • USA Today

Jason St Angelo:

»

I’ve been very pleased – and occasionally quite impressed – with Alexa+. It can do everything the original Alexa can do, only with a flexible, approachable personality that makes interactions more pleasant and accurate. The AI-enabled Alexa+ can also handle more complex tasks, from brainstorming menus and managing calendars to planning repairs and finding the right restaurant – and making the reservation.

Of course, my time with the new-and-improved AI assistant hasn’t been without off-the-rails interactions. That’s to be expected with generative AI, regardless of whether it’s in pre-release or a finished product. More often, though, I’ve had expansive, interesting and helpful conversations that, like the sweet smack of a perfect golf drive, make you want to come back and try again.

Those experiences are critical, I’ve found. Because Alexa+ doesn’t magically integrate into your life. It requires a routine change. Which means that if you don’t work to find reasons to engage, Alexa+ will end up being just a wittier, more pleasant-sounding egg timer than the one you have now.

The more you engage – and the more Alexa+ delivers – the easier it gets.

…Our smart home may not be any smarter now. But Alexa+ makes it much easier to use. Alexa+ is also much more tolerant of word choice. It doesn’t care, for example, if I call it the entry light or foyer light, or front door light. It will turn it on regardless. It’s much easier than before to ask for live views of our Ring cameras on our Echo Show 21. As well, I can ask Alexa+ to show me all packages delivered in the past day. 

I was even able to build a routine – turn off a set of lights at bedtime, then report the weather and review my day tomorrow on the Echo device in our walk-in closet – by talking to the Echo Show 21 in the kitchen. That’s next-level accessible! This feature alone could turn out to be the killer smart home app.

My wife and I have had some fruitful conversations with Alexa+. As we prepared dinner one night, we lamented how the seafood quality has declined at our go-to grocer. Alexa helped us find a specialty shop with super-fresh – albeit pricey – fish.

Another valuable back-and-forth came when I took my DIY project to Alexa. I wanted to add an electrical outlet to an exterior wall. Alexa offered step-by-step instructions that we iterated on together as I shared more specifics. It felt natural.

«

Did he check that he wouldn’t be electrocuted by following the offered instructions?
unique link to this extract


The worst argument against Ozempic • Cremieux Recueil

“Cremieux”:

»

The argument is that GLP-1RAs [Ozempic, Wegovy, etc] are bad… because you regain weight when you stop them? By the same logic, diet and exercise are bad because, when you cut them out, you’ll also tend to regain weight. Weight rebound after giving up exercise programs and dieting is universal, so to the extent that’s a problem for GLP-1RAs, why isn’t it also stated as a problem for everything else?

There’s a more sophisticated argument that accepts people who need GLP-1RAs might need to use them forever in order to maintain their reduced weight, but that this is dangerous for “reasons”. After all, the proponents of this argument claim, we don’t know about the potential long-term harms of GLP-1RAs. We know about effects since they’ve been approved (circa 2017) but that’s it, and what if being on them much longer has unexpected side effects?

This argument at least facially makes sense and it’s fine to humour it, but we have to do so in a calibrated way.

What might the long-term harms actually be? No one has provided a mechanism to guide the search and the primary mechanism of GLP-1-induced weight loss (which is agonism of brainstem GLP-1 neurons involved in appetite) doesn’t seem likely to be directly harmful. Why would the harms have not shown up over the past near-decade of use? They must be so unpredictable as to evade detection over a reasonably long period of time. And most importantly, what’s the counterfactual?

Every other day, a new story comes out about something that GLP-1RAs help to address, from chronic kidney disease to infertility. Given GLP-1RAs are so universally helpful, I believe we should update against them being mysteriously harmful. Additionally, we should weight the benefits versus the hypothetical costs. We know the benefits to living a life without obesity are enormous, and if I had to bet, I would say that the people taking GLP-1RAs for weight management long-term will have longer, happier, healthier lives thanks to these drugs, unless the unexpected happens and there really is a lurking harm to GLP-1RAs—a harm that has, so far, evaded detection and prediction.

«

The real force of the “you have to keep taking them” argument is this: the drugs are made by pharmaceutical companies, and you don’t know how those companies will change the price. (Look what happened in the US to EpiPen costs.) Whereas exercise and dieting are essentially free.

But perhaps if there’s enough competition the price will stay down, as with sildenafil. I do find these drugs fascinating – not because I need them, but because of their potential effect on society.
unique link to this extract


Myanmar’s chinlone ball sport threatened by conflict and rattan shortages • Al Jazeera

»

Myanmar’s version is believed to date back 1,500 years.

Evidence for its longevity is seen in a French archaeologist’s discovery of a replica silver chinlone ball at a pagoda built during the Pyu era, which stretched from 200 BC to 900 AD.

Originally, the sport was played as a casual pastime, a form of exercise and for royal amusement. In 1953, however, the game was codified with formal rules and a scoring system, part of efforts to define Myanmar’s national culture after independence from Britain.

“No one else will preserve Myanmar’s traditional heritage unless the Myanmar people do it,” player Min Naing, 42, says.

Despite ongoing conflict, players continue to congregate beneath motorway flyovers, around street lamps dimmed by wartime blackouts and on purpose-made chinlone courts – often open-sided metal sheds with concrete floors.

“I worry about this sport disappearing,” master chinlone ball maker Pe Thein says while labouring in a sweltering workshop in Hinthada, 110km (68 miles) northwest of Yangon. “That’s the reason we are passing it on through our handiwork.”

Seated cross-legged, men shave cane into strips, curve them with a hand crank and deftly weave them into melon-sized balls with pentagonal holes before boiling them in vats of water to enhance their durability.

“We check our chinlone’s quality as if we’re checking diamonds or gemstones,” the 64-year-old Pe Thein says. “As we respect the chinlone, it respects us back.”

Each ball takes about two hours to produce and brings business-owner Maung Kaw $2.40. But supplies of the premium rattan he seeks from Rakhine state in western Myanmar are becoming scarce. Fierce fighting between military forces and opposition groups that now control nearly all of the state has made supplies precarious. Farmers are too frightened to venture into the jungle battlegrounds to cut cane, Maung Kaw says, which jeopardises his livelihood.

«

There’s some video of it being played. It’s like volleyball crossed with keepy-uppy. These sorts of ancient sports struggle to continue in our monetised world.
unique link to this extract


Artificial intelligence is not intelligent • The Atlantic

Tyler Austin Harper:

»

Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age. The cognitive-robotics professor Tony Prescott has asserted, “In an age when many people describe their lives as lonely, there may be value in having AI companionship as a form of reciprocal social interaction that is stimulating and personalised.” The fact that the very point of friendship is that it is not personalized—that friends are humans whose interior lives we have to consider and reciprocally negotiate, rather than mere vessels for our own self-actualization—does not seem to occur to him.

This same flawed logic has led Silicon Valley to champion artificial intelligence as a cure for romantic frustrations. Whitney Wolfe Herd, the founder of the dating app Bumble, proclaimed last year that the platform may soon allow users to automate dating itself, disrupting old-fashioned human courtship by providing them with an AI “dating concierge” that will interact with other users’ concierges until the chatbots find a good fit. Herd doubled down on these claims in a lengthy New York Times interview last month. Some technologists want to cut out the human altogether: see the booming market for “AI girlfriends.”

Although each of these AI services aims to replace a different sphere of human activity, they all market themselves through what Hao calls the industry’s “tradition of anthropomorphizing”: talking about LLMs as though they contain humanlike minds, and selling them to the public on this basis. Many world-transforming Silicon Valley technologies from the past 30 years have been promoted as a way to increase human happiness, connection, and self-understanding—in theory—only to produce the opposite in practice. These technologies maximize shareholder value while minimizing attention spans, literacy, and social cohesion.

«

unique link to this extract


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

2 thoughts on “Start Up No.2455: how 2017’s US tax law kills tech jobs, Apple’s comma problem, DOGE’s flawed AI, testing Alexa+, and more”

  1. One aspect of the DOGE fiasco I believe has been under-examined by writers – not ignored, but not given the depth of investigation it should have – is just how embarrassingly bad it’s been from a technical point of view. Whatever else one thinks about the Great Musk Satan, his redeeming quality is supposed to be that he’s actually smart about technology. But he makes errors of the sort which a real experienced technologist should know to avoid. Things like 150-year-olds getting Social Security. If something like that appears to come up in an initial investigation, the first thought should be some sort of legacy database issue. Not that an obvious and massive source of waste, fraud, and abuse has been discovered, which has heretofore escaped the notice of every auditor and investigator.

    In theory, there’s nothing wrong with using the dreaded AI (spit on ground) to generate a list of potential leads for analysis, same as looking at a Wikipedia page. But, like we’re constantly told in terms of excusing Wikipedia, it’s your responsibility to check that you didn’t get nonsense from it.

    But sadly this is too complex, as opposed perhaps to simply sneering “techbro!” at him (which tends to mean just “uncool nerd!”). Still, I’d hope a proper take-down here would lose Musk much respect among some of his actual techie fans.

  2. The Section 174 change to R&D expensing will pale in comparison to many of the “poison pill” clauses written into the big beautiful bill as currently drafted, which take the savings now but defer a huge number of actions to the next administration (assuming there is one).

    The Republicans, much like the Conservatives in the UK, always do this; only this time it is on an unprecedented scale.
