
The TouchID system on the iPhone 6 is difficult to fix because it’s linked to Apple Pay. Photo by Janitors on Flickr.
There’s been a huge amount of coverage of “error 53”, a message thrown up by iTunes when it detects a particular fault on newer iPhones. But for the most part the rewritten coverage hasn’t actually tried to add any value or understanding. (Oh, internet journalism, if you only knew how crap you are.)

Techmeme coverage of “error 53”: did any of it add any detail?
So here’s my attempt to explain it, starting from what we know, and what we can find out, and what we can deduce. On with the show!
What is #error53?
It’s the error shown in iTunes for an iPhone 6, 6 Plus, 6S or 6S Plus after an operating software upgrade (eg upgrading from iOS 8.1 to iOS 8.2, or 8.1 to 9.0, or 9.2 to 9.2.1) if the phone has had its TouchID sensor replaced or its cable interfered with since the last software upgrade.
Error 53 (almost) bricks the device: it tells you to plug it in to iTunes and recover it, but in the instance above it won’t work. There is a way to bring the phone back to life if you’ve had Error 53, which we’ll come to presently.
Update: Apple has now (February 18, ten days later) released an iOS update for those using 9.2.1 and updating via iTunes which fixes this. Read the support document.
This is just Apple trying to stop third-party repairs, isn’t it?
That’s the conspiracy version of the explanation, but it isn’t self-consistent. Third-party repairers say they can still replace batteries, screens, and various other bits. What they’ve learnt, though, is that doing anything with TouchID on the iPhone 6/etc can kill the phone. So they avoid doing those repairs, and tell people to take affected phones to Apple repair shops.
Note that third-party repair shops have known about the home button problem for a long time. However, it’s only just come to media attention.
Why doesn’t it happen to the iPhone 5S?
The fact that this only began happening with the iPhone 6/Plus sharpened the conspiracy theory that this is Apple trying to shut down third-party repairs. (Though it also weakens the theory: wouldn’t Apple seek to block repairs on all devices?) The reason is down to the key difference between the 5S and the 6/Plus: the 5S doesn’t have NFC, and so can’t do Apple Pay.
Why does Apple Pay matter in this?
Apple Pay means the phone contains Secure Elements, which are cryptographic stores with credit card and payment data – including (I surmise) how to turn a credit card number into an NFC payment mechanism, which is not the sort of information that banks want to be leaked everywhere.
Why does it only happen after an OS upgrade, rather than right after a replacement?
To understand this, we have to go to Apple’s security documents about iOS 9, and how security works with TouchID (the fingerprint reader), the Secure Enclave (which stores a hashed version of your fingerprint) and the Secure Elements, which store key financial data in an encrypted form.
Here’s a diagram from Apple’s security document, showing the direction of trust as the device boots up: it travels from the bottom to the top. We’re only interested in the stuff at the bottom of this stack at present (from “Apple root certificate” upward to the top of the “hardware/firmware” part).

Apple’s explanation of how the security system works in the iPhone: booting starts from the bottom and progresses upwards.
On bootup, the system goes through various hardware checks to ensure that everything is tickety-boo, cryptographically speaking. If it finds something wrong, then it gives you the “Connect to iTunes” screen, and if you’re lucky, throws up an error message. Note that if something is wrong at this bootup stage, you don’t reach the higher level of the file system and OS partition; you’re stuck at the hardware/firmware level.
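The chain-of-trust idea can be sketched in a few lines of Python. This is a toy model, not Apple’s implementation: the stage names come from Apple’s document, but the “signature” here is just a keyed hash standing in for real public-key verification, and all function names are my own invention.

```python
import hashlib

# Toy model of a boot-time chain of trust: each stage's image must carry
# a valid signature before the boot proceeds to the next stage. This is
# illustrative only -- Apple uses real asymmetric cryptography, not a
# keyed hash like this.

BOOT_STAGES = ["LLB", "iBoot", "kernel"]  # order from Apple's diagram, bottom up

def verify_signature(stage_image: bytes, signature: str, root_key: str) -> bool:
    # Stand-in for public-key signature verification: recompute a keyed
    # hash and compare it with the stored value.
    expected = hashlib.sha256(root_key.encode() + stage_image).hexdigest()
    return expected == signature

def boot(images: dict, signatures: dict, root_key: str) -> str:
    # Walk the chain from the lowest stage upward; halt at the first
    # stage whose signature fails, mimicking the "Connect to iTunes" state.
    for stage in BOOT_STAGES:
        if not verify_signature(images[stage], signatures[stage], root_key):
            return f"halt at {stage}: connect to iTunes"
    return "boot OK"
```

The point the model captures is that a failure low in the chain stops everything above it: a device that halts at iBoot never reaches the kernel or the OS partition.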
If you replace the TouchID system on a device, the system doesn’t throw an error at this point. Why not? I’m not completely sure, but I think that the TouchID subsystem doesn’t have an entry in the device’s own hardware/firmware listing, so the device can’t tell whether the TouchID system that’s installed is the same one it originally had at manufacture.
Update: on thinking some more about it, I think this is why. The security model doesn’t trust values that are stored on-device unless they are burnt into hardware; any value in a firmware register could have been changed, so it can’t be trusted at startup. Now, if the TouchID serial were burnt into hardware, it could be checked on boot to see if it’s trusted – but then you’d never be able to replace the TouchID sensor, because the old serial would be fixed in the chip forever.
Therefore the device doesn’t brick when it’s first turned on after repair. It has to rely on something external which has stored the TouchID serial – that is, Apple’s installation authorisation server. /Update
What happens on a software upgrade is subtly different from simply booting. From Apple’s document, on p6:
During an iOS upgrade, iTunes (or the device itself, in the case of OTA [over-the-air] software updates) connects to the Apple installation authorization server and sends it a list of cryptographic measurements for each part of the installation bundle to be installed [emphasis added] (for example, LLB, iBoot, the kernel, and OS image), a random anti-replay value (nonce), and the device’s unique ID (ECID).
The authorization server checks the presented list of measurements against versions for which installation is permitted and, if it finds a match, adds the ECID to the measurement and signs the result. The server passes a complete set of signed data to the device as part of the upgrade process.
Adding the ECID “personalizes” the authorization for the requesting device. By authorizing and signing only for known measurements, the server ensures that the update takes place exactly as provided by Apple. The boot-time chain-of-trust evaluation verifies that the signature comes from Apple and that the measurement of the item loaded from disk, combined with the device’s ECID, matches what was covered by the signature.
These steps ensure that the authorization is for a specific device and that an old iOS version from one device can’t be copied to another. The nonce prevents an attacker from saving the server’s response and using it to tamper with a device or otherwise alter the system software.
What I think is happening is that the new TouchID system’s serial number is included in the cryptographic data sent to the authorisation server, and when that is compared against what it should be for the given ECID, the numbers don’t match.
At that point, the authorisation server decides that Something Bad is going on, and blocks the update. The device now fails the low-level boot – it can’t get past the kernel level to the OS boot – and so the device is bricked.
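Putting the quoted protocol and my surmise together, the server-side check might look something like this sketch. To be clear: the field names, the signing scheme and the idea that a TouchID pairing value travels with the measurements are all my assumptions for illustration, not Apple’s documented API.

```python
import hashlib
import hmac
from typing import Optional

# Hypothetical sketch of the update-authorisation exchange: the device
# sends component measurements, a nonce and its ECID; the server checks
# the measurements against permitted versions and, if all is well,
# returns a signed, device-personalised ticket.

SERVER_KEY = b"apple-signing-key"  # stand-in for Apple's private signing key

def authorise_update(measurements: dict, nonce: str, ecid: str,
                     permitted: dict, expected_touchid: str) -> Optional[dict]:
    # Reject if any component measurement isn't a version Apple permits.
    for component, digest in measurements.items():
        if component == "touchid_pairing":
            continue
        if digest not in permitted.get(component, set()):
            return None
    # Surmised step: the TouchID pairing value recorded for this ECID
    # must match what the device reports -- otherwise, "error 53".
    if measurements.get("touchid_pairing") != expected_touchid:
        return None
    # Personalise: fold the ECID and nonce into the signed blob so the
    # ticket works only for this device and this request (no replays).
    blob = repr(sorted(measurements.items())) + ecid + nonce
    signature = hmac.new(SERVER_KEY, blob.encode(), hashlib.sha256).hexdigest()
    return {"ecid": ecid, "nonce": nonce, "signature": signature}
```

On this model, a third-party TouchID swap changes the pairing value, the server returns nothing, the update is never signed, and the device is left stranded below the OS level.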
And that is why it bricks on a software update.
Why doesn’t it check with the authorisation server after the repair?
The phone doesn’t have any way of “knowing” whether it’s restarting after a repair, or after it ran out of battery, or you just turned it off for the night. If every phone were to check in with the authorisation server on being powered on, three things would happen: (1) the authorisation server would die; (2) people would be furious because their phone wouldn’t boot, since it would need connectivity to check the details for its ECID, and you don’t always have connectivity when you turn your phone on; (3) Apple would get majorly dinged for “snooping on when people turn their phone on”.
That doesn’t explain why it doesn’t happen on the 5S, though.
Damn right. At which point we have to consider that the “cryptographic measurements” sent back for an iPhone 6/etc differ from those of an iPhone 5S, specifically because of the Apple Pay-related Secure Elements.
Why does the device still work after the third-party replacement?
Let’s qualify this: the phone does work, but TouchID (and with it Apple Pay and anything else relying on the fingerprint sensor) doesn’t work after a third-party fix that affects TouchID. The pairing between the Secure Element, Secure Enclave and TouchID, which was set up when the device was manufactured, is lost. It carries on not working; then at some point, you get a software upgrade notification. And then – disaster.
Considering this, I think what is stored for communication with the server is the TouchID pairing status. If it’s unpaired, the update can’t go ahead.
Update: the fix issued by Apple must tell the server to let the update go ahead even if the TouchID pairing status has changed, while leaving TouchID disabled.
What if you’ve never set up Apple Pay?
Doesn’t matter. The issue is not the data you’ve stored in the device, but the data that’s built into the device – cryptographic keys used for creating payment authorisation for credit cards. Those are in the Secure Elements.
What are the Secure Elements, and what do they contain?
Here’s a definition:
An SE is a tamper resistant hardware platform, capable of securely hosting applications and storing confidential and cryptographic data. For example, in the finance industry SEs are used to host personalized card applications and cryptographic keys required to perform financial (EMV) transactions at a point-of-sale terminal. SEs used in the identity market may hold biometric data or certificates which can be used for signing documents. Whichever purpose, the secure environment provided by the SE protects the user’s credentials ensuring the safety of the user’s data.
The reason why Error 53 happens when you change or interfere with the TouchID sensor on a more-recent-than-5S phone is that the system detects – during the software upgrade – that something has changed, and that the embedded trust system has been broken. And so the device doesn’t get authorisation to update.
Why does the Secure Elements stuff matter, though?
The banks/financial institutions specify that the operating system must not be able to directly access the data in the “trusted zone” (the Secure Elements).
How can you recover from Error 53?
Quite simple: replace the new TouchID processor with the old one. (People say they have successfully done this.) However, saying it is a lot easier than doing it. Some people don’t have the old one. Or the old one might just be broken.
How does Apple replace TouchID systems?
We don’t know, but we know it can, because it does. There must be a method for updating the cryptographic measurement list held by the authorisation server for a particular ECID. I’d imagine that involves logging into a server, entering an ECID (or connecting the phone) and letting the two talk to each other.
Note that when you have your screen repaired by Apple, it will tell you to disable TouchID first. And afterwards, you’ll have to recalibrate it. So there might be something there.
Why can’t Apple do that to devices which have failed on Error 53?
We don’t know. (Possibly it can.)
Could Apple change things so that in future it just disables TouchID and software updates still work?
Perhaps. I suspect it would need some sort of adjustment to what gets sent to the authorisation server, or what the server considers OK to approve. But if Apple is tied here by what the financial institutions demand around the Secure Elements, it might not have the choice.
Why hasn’t Apple explained that this is a risk of third-party replacement?
Ah, now we come to the challenge of Being Apple. Its mystique (for that’s what a lot of it is) lies in saying very little about how it does things, and asking people to take this stuff on trust, or for granted.
Thus when it comes to repairs, Apple’s implied assumption is that everyone will bring their device to an Authorised Apple Dealer, or Apple, to get it fixed. This ignores the fact that it now sells phones in countries where you’d have to travel for hours and hours to reach either of those – if you were lucky.
Naturally, people go to third-party repair shops to get these things done. And then problems start, because you’re talking about a pocket supercomputer with embedded cryptographic systems that are sensitive to being fiddled with.
But Apple has done a bad job here in communicating the risks of getting anything around the TouchID system replaced. It really needed to get the message out there.
Why didn’t Apple get the message out there?
Probably it’s been difficult to separate the signal from the noise on this. If someone comes in to an Apple Store with an Error 53 phone, it’s hard to know at first why it has failed. The device gets replaced, and the old one is sent back to Apple, but that’s barely half of the feedback loop: the device has to reach Apple, someone has to figure out why it doesn’t work, then inform the stores, and also inform the marketing people that this is a problem which needs to be communicated.
Very likely there are people in Apple Engineering, Apple Retail and Apple Marketing who are right now looking at an email trail and smacking their foreheads as they realise what the problem they missed was. Those phones sent back from the stores marked as “will not boot”… ohhh.
That’s the problem with big organisations, though: that sort of feedback loop is really, really hard to organise well. Alternatively, perhaps it has been noticed, but it hasn’t affected a large number of people, and so isn’t as high a priority as… something else. (We don’t know what.) Of course, to the affected people, it’s a bloody high priority.
Shouldn’t Apple allow third-party TouchID repairs, though? After all, the phone is your property.
The “property” argument isn’t a great one, to be honest. Apple sells you a device, but it doesn’t give you untrammelled rights to it; you aren’t legally allowed to (try to) decompile the software, or the firmware, or to dig into things like the Secure Elements. You don’t own the entire thing.
That’s how things are these days; the open-software absolutists run into a problem with mobile phones, because even if you can download and compile the operating system (à la Andy Rubin) you won’t be able to do that with the baseband software which actually provides the mobile functions. So it’s never completely “your” phone. That’s the case with PCs too these days – there’s stuff on the motherboard you don’t get to mess with.
None of this proves it isn’t Apple just shutting out third-party repairs, though.
Ah, proof. It’s so hard to prove the imaginary, or to refute it. However, the scenario where some Apple executives gather round a table and say “You know what? We’re losing valuable revenues and profits from people using third-party repairs! We need to brick those phones!” fails both Occam’s Razor and Hanlon’s Razor, the two logical tests that help you filter through a lot of modern crap.
Occam’s, you’ll recall, is “don’t let entities multiply unnecessarily” – aka “the simplest explanation is probably the right one”. Hanlon’s, meanwhile, is “never ascribe to conspiracy what can more easily be ascribed to cockup.”
Why does “shutting out third party repairs” fail Occam’s? Because it requires a lot of people putting in varying amounts of effort to make it happen.
For the malicious version: Apple has to have decided (1) that it doesn’t like third-party repairs; (2) that it wants people to have a bad experience when they try to upgrade their software (is it certain people will connect the third-party repair with the bricking, given that the events might be weeks or months apart? They might even have had an Apple fix of some sort in the meantime); (3) to set in motion an internal programme whereby third-party replacements using correctly sourced parts will fail, but its own repairs using the same parts won’t (quite risky); (4) to keep all this secret while also instructing its repair shops how to do it.
For the accidental explanation: the new TouchID system on the iPhone 6/etc now pairs with the Secure Elements and its cryptographic signature is sent to the update server on device activation. If the signature doesn’t match on subsequent update requests, the device isn’t authorised.
See how much simpler the latter one is? It doesn’t require any executives, or nefarious planning; just some work by the engineers updating the TouchID/Secure Elements systems. That satisfies Occam.
But equally, the second also satisfies Hanlon’s Razor. Nobody has been malicious; if anything, they’ve been trying to safeguard customers by making sure that sensitive (to financial groups) information can’t get hacked off your phone. However, in doing that, they’ve created a situation where customers get a bad experience and Apple gets bad publicity over something it would have hoped would give it kudos.
The shibboleth
In all the coverage of this topic, it is quite amazing how ready people are to assume the worst. Apple is uniquely capable of polarising people, who find it exceptionally hard to be indifferent about what it does. Either it’s a sort of wellspring of ideas and direction in all sorts of markets, from PCs to mobile phones to smart watches; or it’s a malicious money-grabbing marketing machine seeking ever more ways to rip people and governments off, while foisting commodity products on people at sky-high prices.
For instance, where do you think Cory Doctorow stands on it?
Punish. There’s a verb.
Or Dan Gillmor?
(Both links in those tweets are to the same Guardian article that kicked this all off on Saturday.)
Yet if you look on Hacker News, you’ll find the tenor of the discussion is much more like “oh, that makes sense from a security point of view”. And security experts on Twitter such as Steve Bellovin and Matthew Green could discuss the matter without invoking conspiracy theories.
I find it odd that people who write publicly for money seem more willing to go for the conspiracy theory than those who don’t. Doesn’t exposure to enough organisations teach you that the bigger they get, the more easily screwups happen, and the less communication there is between their many arms?
And Apple really is big these days, stretching across an incredibly broad area of the computing market – from Macs to mobile phones to tablets to smart watches to iPods, from desktop operating systems to mobile operating systems (tweaked differently for the tablet and the phone), to smartwatch and TV set-top box operating systems, to desktop and mobile applications, to cross-platform music programs (iTunes is on OS X and Windows; Apple Music is on iOS, Windows, OS X and Android), to web services (CloudKit) and even chip design.
I’m pretty confident in saying that no other company is doing as many things across as many hardware and software platforms. Google is huge, but doesn’t make hardware in anything like that volume; Microsoft is huge too, but doesn’t make hardware in any appreciable volume. Apple does the whole thing, including chip design. The combination of hardware and software challenge in adding just one new feature to any individual device line is mind-boggling, because you have to consider how it’s going to affect everything else.
In that context, an engineering team working away on an improved TouchID system which guards against tampering probably thought they were doing just the right thing. Instead, they were throwing their retail and PR people into a media storm. The size of the teacup is yet to be determined.
Quite how Apple is going to get its explanation across will be educative to watch. (I haven’t spoken to Apple in writing this.) The more interesting question though is: what will happen once lots of Android devices start using Android Pay (which has pretty much the same trust requirements) and those start breaking? Will third-party repairers be able to fix them, or will they have to be sent back to the manufacturer? And if it’s the latter (or if people try the former) how much hell is there going to be to pay?
Though you suspect you know the answer already. It won’t arise, because not that many OEMs will implement Android Pay, and the people who get inconvenienced won’t make as much noise about it. Who cares if someone with an HTC phone has to swap it and loses their data? You’d struggle to get most newsdesks to know what an HTC phone was. Say “iPhone”, though…