Oh, and also: Microsoft’s warning it might write down the mobile division

Got anything you want to dump? “Goodwill” by Editor B on Flickr.

Did you read my piece about the negative per-handset margins of Microsoft’s Lumia phones? Oh, you should, it’s great.

Bearing that in mind…

Spotted in Microsoft’s 10-Q (that’s the full quarterly writeup of everything going on in the business) is this rather important couple of paragraphs. (“Goodwill” is accounting talk for “stuff that you can’t drop on your foot but honestly, it’s worth something, that’s why we paid so much for the company”.) I’ve added some emphasis:

We determined that none of our reporting units were at risk of impairment as of our most recent annual goodwill impairment testing date. The valuation of acquired assets and liabilities, including goodwill, resulting from the acquisition of NDS, is reflective of the enterprise value based on the long-term financial forecast for the Phone Hardware business. In this highly competitive and volatile market, it is possible that we may not realize our forecast.

Considering the magnitude of the goodwill and intangible assets in the Phone Hardware reporting unit (see Note 8 – Business Combinations of the Notes to Financial Statements), we closely monitor the performance of the business versus the long-term forecast to determine if any impairments exist in our Phone Hardware reporting unit.

In the third quarter of fiscal year 2015, Phone Hardware did not meet its sales volume and revenue goals, and the mix of units sold had lower margins than planned.

We are currently beginning our annual budgeting and planning process. We use the targets, resource allocations, and strategic decisions made in this process as the inputs for the associated cash flows and valuations in our annual impairment test.

Given its recent performance, the Phone Hardware reporting unit is at an elevated risk of impairment. Declines in expected future cash flows, reduction in future unit volume growth rates, or an increase in the risk-adjusted discount rate used to estimate the fair value of the Phone Hardware reporting unit may result in a determination that an impairment adjustment is required, resulting in a potentially material charge to earnings.

And how much goodwill is there? In Note 8, we find that there’s an allocation of $5,456m to “goodwill” for the acquisition of Nokia Devices and Services (NDS). Five and a half billion dollars of “no, honestly, it’s worth this much in the long term, just watch”.

Or as the 10-Q puts it, “The goodwill was primarily attributed to increased synergies that are expected to be achieved from the integration of NDS.”

Now it seems those synergies maybe aren’t happening.

Sony in September wrote down the expected value of its phone business by £1bn. And it has a mobile business that competes quite well at the high end.

Microsoft’s writedown could be much, much bigger. Perhaps not as big as the $6.2bn writedown of aQuantive in 2012 (it wrote off the whole value of the acquisition), but potentially quite a lot.

Microsoft’s per-handset profit, or the lack of it – and its impact on Windows Phone’s future

How much did this Lumia 920 cost to make? And will it have a successor? Photo by Whatleydude on Flickr.

In case anyone was in any doubt, Microsoft’s results last week demonstrated once more what we’re coming to know about the mobile handset industry: it’s damned hard to make any money in it. When I published a fairly simple analysis of the state of the top-end Android handset market (with a comparator to Apple’s iPhone profits), people were apparently flabbergasted by how thin the per-handset operating margins were on these devices which sold for hundreds of dollars.

Estimated Android handset operating profit Q4 2014

See the original post for more detail and caveats.

But Microsoft showed that it’s not even able to generate gross margin while selling millions of handsets. (Gross margin is the difference between how much it costs you to make the item – usually factory costs and distribution costs – and what you get for it. Gross margin normally excludes R&D and sales & marketing costs; to get the operating profit, you subtract those costs too, so operating profit is always less than gross margin.) My analysis of Android handset makers looked at operating profit.

Negative gross margin takes some doing; spending more making stuff than you take in for it is exceptionally bad business. But Microsoft Mobile did, officially: take a look at Microsoft’s 10-Q for the calendar first quarter of 2015.

Microsoft phone hardware revenues, Jan-Mar 2015

It says

Phone Hardware revenue was $1.4bn in the third quarter of fiscal year 2015, as we sold 8.6m Lumia phones and 24.7m non-Lumia phones. We acquired NDS in the fourth quarter of fiscal year 2014. Phone Hardware gross margin was $(4) million in the third quarter of fiscal year 2015. Phone Hardware cost of revenue, including $147m amortization of acquired intangible assets, was $1.4bn.

For those unversed in accountancy notation, that “$(4) million” means “minus $4 million”. Accountants use brackets rather than a minus sign because it’s easy to overlook a minus sign and create a horrendous hash in your calculations.

For the nine months,

Phone Hardware revenue was $6.3bn in fiscal year 2015, as we sold 28.5m Lumia phones and 107.3m non-Lumia phones. Phone Hardware gross margin was $805m in fiscal year 2015. Phone Hardware cost of revenue, including $401m amortization of acquired intangible assets, was $5.5bn.

This does take some untangling. In my analysis, I’m going to ignore the writeoffs (amortisation) of the acquired intangible assets – essentially, the premium Microsoft paid over the physical assets, being written down over time. This actually makes the gross margin look better – as in, in positive territory. That’s a start.

Pause for history

Some brief history. When Nokia made phones, it used to provide wonderfully detailed results, in which it would tell you how many featurephones and smartphones it had sold, and at what average selling price (ASP). This made it easy to see how its business was going. It didn’t give you gross margins – only operating margin for the whole phone business. In general, though, we knew its featurephone business was profitable, and that once it moved to the Windows Phone Lumia range, the smartphone side lost a ton of money.

Enter Microsoft, buying Nokia’s phone business – including featurephones – for €5.4bn, a deal completed on April 25th 2014. That’s when the featurephone and Lumia sales start showing up in Microsoft’s results, and we shift to the “gross margin” measurement. (Microsoft does this because Steve Ballmer reorganised it to an Apple-style “apportion all cost across the board”, rather than making each division its own profit-and-loss fiefdom.)

Given the Lumia ASP and sales figures at Nokia, you could work out the ASP of featurephones, and their contribution to revenues. What I’ve done in the table below is use Nokia’s featurephone and Lumia ASP (converted from euros to dollars at the prevailing rate at the end of each quarter) and try to carry that forward to estimate the recent ASPs of Lumia handsets under Microsoft’s ownership, and their contribution to revenues.

Featurephone and estimated Lumia ASPs

If you estimate the ASP for featurephones based on the Nokia numbers, you can figure out those for the Lumia phones at Microsoft.

A few things to note: I’m assuming that featurephone ASPs are falling. Even with that, there’s a clear fall in the ASP of the Lumia phones – from (a really quite high) $238 in the second calendar quarter of 2014 to the present. There hasn’t been a flagship phone released in that time, so perhaps that’s not surprising.
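The back-calculation itself is simple enough to sketch in a few lines of Python. The revenue and unit totals are from the 10-Q quoted above; the $15 featurephone ASP is purely an assumed input, so treat the result as illustrative rather than as the table’s exact figure.

```python
# Back out the Lumia ASP from Microsoft's disclosed totals, given an
# assumed featurephone ASP. Revenue and unit figures are the Q3 FY2015
# (calendar Q1 2015) numbers from the 10-Q; the $15 featurephone ASP
# is an assumption, so the result is illustrative.

def implied_lumia_asp(total_revenue_m, fp_units_m, fp_asp, lumia_units_m):
    """Revenue left after featurephones, spread across Lumia units."""
    return (total_revenue_m - fp_units_m * fp_asp) / lumia_units_m

asp = implied_lumia_asp(total_revenue_m=1400,  # $1.4bn Phone Hardware revenue
                        fp_units_m=24.7,       # 24.7m non-Lumia phones
                        fp_asp=15,             # assumed featurephone ASP
                        lumia_units_m=8.6)     # 8.6m Lumia phones
print(f"Implied Lumia ASP: ${asp:.0f}")        # roughly $120
```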

Also, smartphone revenue has flipped from being the minority source at Nokia to being the majority source now (even at $20 featurephone ASP, it’s still like that) because featurephone sales are collapsing, while those of smartphones are remaining fairly static – like this:

Estimated split of smartphone and featurephone revenue at Microsoft

Based on ASP assumptions, you can figure out how much revenue smartphones and featurephones generate. That’s not profit, though.

Now we move on, to seek out gross margin. There’s no data from Microsoft about the separate gross margins of the featurephones or Lumias. We don’t know how much they cost to build, or which might be profitable. So we have to use estimates and what people tell us.

Fortunately, we do have some indication of how profitable Nokia featurephones were. In an interview in April 2013, Nokia’s director of platform and content said that the profit margins on the $20 Nokia 105 were the same as those on the Lumia phones. How much might that be? Again, we don’t know, but it can’t be a lot. Putting it at $5 seems reasonable.

We have the figures for total gross margin; we also have the intangibles writeoff in the financials. So that gives us a “real” gross margin (ie the day-to-day gross margin for the quarter, excluding accountancy writeoffs):

Gross margin, excluding intangibles

Microsoft gives quarterly figures for intangible writeoffs; subtracting that gives the hardware gross margin.

Now we make assumptions about featurephone gross margin. I’ve gone for $5, falling to $4 as the average price of the handset falls from $20 to $15.

From this, and from the data we’ve got about total phone shipments, it’s quite simple to back-calculate to come out with figures for the total contribution to gross margin by featurephones and Lumias.

Progress! GM and profit gives Lumia data

If we assume per-handset profit on featurephones, we can use that with the GM data to figure out how much Lumias cost to make. And we have the ASP.
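That back-calculation, too, can be sketched in Python. The $4 per-featurephone margin is the assumption from the text above; the gross margin and unit figures are as reported for Q3 FY2015.

```python
# Strip the assumed featurephone contribution out of the quarter's
# "real" gross margin (reported GM plus the intangibles amortisation),
# then spread what's left across the Lumia units. The $4 featurephone
# margin is an assumption; the rest is as reported for Q3 FY2015.

def lumia_gm_per_unit(real_gm_m, fp_units_m, fp_margin, lumia_units_m):
    """Back-calculate the per-Lumia gross margin contribution."""
    return (real_gm_m - fp_units_m * fp_margin) / lumia_units_m

real_gm = -4 + 147   # $(4)m reported GM + $147m amortisation = $143m
per_lumia = lumia_gm_per_unit(real_gm_m=real_gm, fp_units_m=24.7,
                              fp_margin=4, lumia_units_m=8.6)
print(f"Estimated gross margin per Lumia: ${per_lumia:.2f}")  # about $5
```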

Which tells us what? The CQ2 figure is anomalous – Microsoft mentions an intangibles writeoff in that period, but doesn’t specify how much (unlike other periods). It’s likely the total gross margin was larger if you ignore that, which would put the gross margin per Lumia into the black.

What we also see is that even allowing a minuscule per-handset gross margin for each featurephone, the gross margin for the Lumia line is falling rapidly. Here’s an alternative scenario, if we think that the featurephones have an $8 gross margin, falling to $6 in the latest quarter:

What if featurephones have higher profit?

Giving a per-handset profit of $8 for featurephones makes the Lumia business look much worse. It’s unlikely, though.

On this higher profit for featurephones, the Lumia gross margin goes into negative territory. You can argue that’s too high a margin for a featurephone. Doesn’t matter though – the direction of travel is clear: the Lumia barely washes its face.

And this, don’t forget, is before you include the costs of sales and marketing – all those Lumia ads! – and research and development. Here’s the R+D impact:

Three months ended March 31, 2015 compared with three months ended March 31, 2014: Research and development expenses increased $241 million or 9%, mainly due to increased investment in new products and services, including NDS expenses of $212 million.

For the nine months:

Research and development expenses increased $694 million or 8%, mainly due to increased investment in new products and services, including NDS expenses of $815 million. These increases were partially offset by a decline in research and development expenses in our Operating Systems engineering group, primarily driven by reduced headcount-related expenses.

A little digging shows the R+D costs for the NDS (Nokia Devices and Services) segment by quarter. That gives us a sort of “halfway” operating profit once you deduct R+D, which shows that the division has moved into the red even before you consider sales and marketing costs.

R+D by quarter for Microsoft's mobile business

R+D numbers are mentioned in the quarterly 10Q. To get towards the operating profit (or loss), we need to subtract that.

That’s not profit

To figure out whether the handset division makes an operating profit we’d have to know the sales and marketing costs. It’s pretty improbable that those were anything less than $300m per quarter (over $3m per day, worldwide). Which means that Microsoft’s handset division has been loss-making since Microsoft took it over, despite those profitable featurephones, and ignoring the writedown of intangibles.
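A rough sanity check of that, using the nine-month figures quoted above – the $300m-a-quarter sales and marketing number is, again, my assumption:

```python
# Nine-month sanity check: add the intangibles amortisation back to the
# reported gross margin, then deduct NDS R+D and an assumed $300m a
# quarter for sales and marketing. All other figures are as reported
# in the 10-Q.

gross_margin_m = 805         # nine-month Phone Hardware gross margin
amortisation_m = 401         # intangibles amortisation within cost of revenue
nds_rd_m = 815               # nine-month NDS research and development
sales_marketing_m = 3 * 300  # assumption: $300m per quarter, three quarters

real_gm_m = gross_margin_m + amortisation_m      # GM excluding intangibles
operating_m = real_gm_m - nds_rd_m - sales_marketing_m
print(f"Estimated nine-month operating result: ${operating_m}m")
```

On those inputs the division comes out around half a billion dollars in the red over the nine months, before intangibles.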

On a per-handset basis, if you allow $300m per quarter for sales and marketing (into which we’ll also roll administration), then you get a clear picture: Lumia handsets don’t make an operating profit at all.

Estimated per-handset loss for Lumias

Don’t forget, this relies on assumptions around featurephone profitability and price.

But hey, you say, that’s all assuming that featurephones are making $5 per handset. What if it’s $0? No problem – that’s the fun of spreadsheets. You can play with assumptions:

What if featurephones make $0?

Lumias still show a per-handset loss. (Remember this is with featurephone pricing of $15 in the latest quarter.)

OK, dammit – what if featurephones lose $5 per handset? Lumias still plunge into operating loss in the latest quarter – and remember, this is ignoring intangible writeoffs, and allowing just $300m a quarter worldwide for sales and marketing:

What if featurephones lose money?

Even at a $5 loss per featurephone, Lumias aren’t moneymakers at the operating level (on my assumptions).

You might have thought that Android handset makers had it tough, but at least – at the top end – they’re eking out something.

(Yes, these numbers are based on assumptions about featurephone pricing and margin, but I’d defend them as all being reasonable based on what we know about the business in the present and past.)

Update: thanks to Vlad Dudau of Neowin, who pointed me to Microsoft’s discussion of its results, where they say

We have made significant progress in reducing the operating expense base in the Phone business, moving from an annualized rate of $4.5bn at acquisition to a run rate under $2.5bn.

Opex is R+D plus sales+marketing and things like general+administration; an annualised run rate of $2.5bn is $625m per quarter – which is slightly more than I was allowing there. That would make the Lumia margins worse.

Microsoft adds:

That said, the changing mix of our portfolio to the value segment and the significant negative headwind from FX [foreign exchange rates] will impact our ability to reach operational break even in FY16.

So, no good news in the offing. (/Update.)

Why carry on, then?

Why, then, does Microsoft persist with Windows Phone? It can’t really think that it’s somehow going to come good and suddenly take off to challenge iOS and Android. The idea (which some outside Microsoft cling to) that the introduction of Windows 10, where apps can be written for both desktop and mobile, will suddenly lead to a huge uptake (by businesses?) is pie in the sky. Mobile and desktop have different design demands. Corporations with mobile needs haven’t been sitting on their hands for the past five years waiting for Windows Phone to reach a sort of maturity; they’ve been hiring people who can hook into their systems using iPhones and Android phones. Under Satya Nadella, Microsoft has recognised this, offering Office and other key software on rival platforms to capture (or retain) users and revenue.

You can’t justify it on “they’ll make it back in profits on services”; the 80m or so Lumia owners around the world aren’t the high-end users, but low-end ones who are less likely to spend on apps, or pricey Microsoft products.

So why? Two clear reasons. First, it’s important to keep playing in this space; Microsoft needs to have a mobile offering because it’s impossible to say where in the future a mobile-focussed offering might be key.

The second, though, is simpler: pride. Couple that with the inertia of a big organisation, and the fact that in the scheme of Microsoft’s profits the losses from the mobile division (about $500m, ignoring intangibles, over the past three quarters) are piffling, and there’s no reason to stop.

However, things could change. I’ve argued previously that Nadella should just give up on Windows Phone, and move to an Android fork. Not long after I argued that, Nokia (then still Finnish-owned) introduced the Nokia X, using Microsoft services and AOSP (Android without Google services on top). Microsoft rapidly killed it.

But now Microsoft is preinstalling its apps on the Samsung Galaxy S6 – and more importantly, has a “strategic partnership” with Cyanogen. The latter is a huge, and smart, move: it seems to me the easiest way for Microsoft to make a real impact on mobile.

If the Cyanogen move takes off, though, I could see Windows Phone withering. Why bother with loss-making hardware when you can piggyback on the world’s most successful mobile OS (that’s Android/AOSP) for the pure gravy of services profit? I wouldn’t.

Other posts you might find interesting:
Android OEM profitability, and the most surprising number from Q4 2014

Why Google’s struggles with the EC – and FTC – matter

How Gresham’s Law explains why news sites are turning off online comments

Amazon: ever-growing behemoth, or topped out?

An Amazon warehouse. Photo by hnnbz on Flickr

It’s the London Book Fair this week, and I was kindly invited to speak at its Digital Minds session on Monday. These are the slides that I created for the talk. (Plus the CC-licensed photo, as above.)

Obviously, for book publishers the terror over the past few years is that Amazon is going to eat up everything, laying waste to the old book-buying system and forcing down the prices they can ask while at the same time everyone dumps paper books in favour of Kindles.

For publishers, that looks like the worst kind of lock-in.

But I prefer a data-driven approach: look at the numbers, and the numbers in a broader context. Amazon provides pretty clear financial results (with useful breakdowns by geography and segment), and there are also useful datasets from book publishers about the size of the UK market. (I focussed on the UK market because that’s what was available, but if anyone wants to pay me to do a bigger study relating to other countries, get in touch :-))

Here’s the presentation:

A few words to add extra context (since I did actually talk too – this wasn’t just a mime show). The numbers relate to the slide number.

5) and 6) yeah, Amazon does sell beer, but my more general point is that these declines in numbers (of petrol filling stations and pubs open in the UK) are due to structural changes in society, not something Amazon has done. If you ascribe changes to the wrong cause, you’ll come up with the wrong solution to it.

7) Clearly, the decline in independent bookshops (overlaid onto the right-hand chart, showing the growth in book sales and ebook sales) predates ebooks – though not Amazon itself. This doesn’t look at concentration of the industry; I didn’t look at the simple number of books published. I think that has gone up, even excluding ebooks.

9) figures taken from Amazon’s results, and using a four-quarter moving average. The international media sales (red line) actually went negative in the most recent quarter, while US media sales (blue line) went to just 1%. “Media” covers everything from books to DVDs.

10) data from the Pew Research Center in the US, which does very robust studies. They haven’t found any growth in ereader ownership since January 2014. There’s a natural ceiling on ereader desire-to-buy.

11) Ereaders are popular with people who read a lot of books. The difference between the median and mean numbers here tells us this is a skewed population – those who read a lot really read a lot. They’re likely to have an ereader. But not everyone will get an ereader. The eager buyers have bought one.

13) See? New sales of Kindles have pretty much halted. Other more recent stories confirm this.

14) 30m Kindles sold in total is a lot – but compare that to total population in the US+Europe of about 500m. It’s not taking over the world.

15) 16) Amazon turns out not to be so great at making hardware that people want to buy.

17) Even in tablets, the rest of the market is growing, but the Kindle Fire HD isn’t doing much. Total about 30m sold (my calculation), also throughout the US and Europe – but I doubt there’s a lot of book reading going on with them; they’re for other media.

20) You may be able to think of another ebook that a “standard” publisher was able to turn into a bestselling book that was then made into a big film. (The Martian is being made into a film with Matt Damon. Looking forward to that.)

22) Amazon’s FCF (free cash flow) is a hot topic, at least in some quarters. The company shows very little profit, but its FCF is great. Isn’t it?

23) Well, the use of capital leases means that – rather as with the Labour government and PFI – the spending is all being pushed off the balance sheet and into a sort of future reckoning. Great as long as nobody worries about it; bad if Wall Street does worry about it.

24) you can just skip to this one if you want the conclusions.

Thanks for reading. I’m happy to come and give speeches at all sorts of events on topics like this.

Why Google’s struggles with the EC – and FTC – matter

Margrethe Vestager, the Danish-born EC competition commissioner. Photo by Radikal Venstre on Flickr.

“Google doesn’t have any friends,” I was told by someone who has watched the search engine’s tussle with the US Federal Trade Commission and latterly with the European Commission. “It makes enemies all over the place. Look how nobody is standing up for it in this fight. It’s on its own.”

The release, apparently accidentally, of the FTC staff’s report on whether to sue Google over antitrust in 2012 to the Wall Street Journal has highlighted just how true that is. We only got every other page of one of two reports. But that gives us a lot to chew on as the EC prepares a Statement of Objections against Google that will force some sort of settlement. (It’s obvious that the EC is going for an SOO: three previous attempts to settle without one foundered, and the new competition commissioner Margrethe Vestager clearly isn’t going to go down the same road into the teeth of political disapproval.)

The Wall Street Journal has published the FTC staffers’ internal report to the commissioners. And guess what? It shows them outlining many ways in which Google was behaving anticompetitively.

The FTC report says Google
• demoted rivals for vertical business (such as Shopping) in its search engine results pages (SERPs), and promoted its own businesses above those rivals, even when its own offered worse options
• scraped content such as Amazon rankings in order to populate its own rankings for competing services
• scraped content from sites such as Yelp, and when they complained, threatened to remove them from search listings
• crucially, acted in a way that (the report says) resulted “in real harm to consumers and to innovation in the online search and advertising markets. Google has strengthened its monopolies over search and search advertising through anticompetitive means, and has forestalled competitors and would-be competitors’ ability to challenge those monopolies, and this will have lasting negative effects on consumer welfare.”

Among the companies that complained to the FTC, confidentially, were Amazon, eBay, Yelp, Shopzilla and more. Amazon and eBay stand out, because they’re two of Google’s biggest advertisers – yet there they are, saying they don’t like its tactics.

Now the WSJ has published what it got from the FTC: every other page of the report prepared by the staff looking at what happened, with some amazing stories. It’s worth a read. Particularly worth looking at is “footnote 154”, which is on p132 of the physical report, p71 of the electronic one on the WSJ. This is where it shows how Google put its thumb on the scale when it came to competing with rival vertical sites.

What does Google want, though?

Before you do that, though, bear in mind the prism through which you have to understand Google’s actions.

Google’s key business model is to offer search across the internet, and sell ads against people’s searches for information (AdWords) or reading on sites where it controls the ads (AdSense).

For that business model to work at maximum efficiency, Google needs
• to be able to offer the “best” search results, as perceived by users (though it’s willing to sacrifice this – see later – and you could ask whether the majority of users will notice)
• to have the maximum possible access to information across the internet to populate search results. Note that this is why it’s in Google’s interests to push cost barriers to information to zero, even if that isn’t in the interests of the people or organisations that initially gather, own and collate the information; it’s also in Google’s interests to ignore copyright for as long and as far as possible until forced to comply, because that means it can use datasets of dubious legality to improve search results
• to capture as much search advertising as it can
• to capture as much online display advertising as it can

None of those is “evil” in itself. But equally, none is fairies and kittens. It’s rapacious; the image in Dave Eggers’s The Circle (a parable about Google), of a transparent shark that swallows everything it can and turns it into silt, is apt.

YouTube (which it has owned since 2006) is an interesting supporting example here. It’s in Google’s interests for there to be as much material as possible on it, regardless of copyright, so that it can show display adverts (those irritating pre-rolls). It’s in its interests for videos to play on endlessly unless you stop them (an “innovation” it has recently introduced, and from which you have to opt out).

It’s also in its interests for YouTube to rank as highly as possible in search results even if it isn’t the optimum, original or most-linked source of a video, because that way Google captures the advertising around that content, rather than any content owner capturing value (from rental or sale or associated advertising).

It’s also in its interests to do only as much as it absolutely has to in order to remove copyrighted content – and even then, it will often suggest to the copyright owner instead that they just overlook the copyright infringement, and monetise it instead in an ad revenue split. Where of course Google gets to decide the split. (Example: film studios, and all the pirated content from their productions; record labels, and all the uploaded content there, which is monetised through ContentID. Pause for a moment and think about this: you and I wouldn’t have a hope of making money from content other people had uploaded without permission to our website. And particularly not to be able to decide the revenue split from any such monetisation. That Google can and does with YouTube shows its market power – and also the weakness of the law in this space. The record labels couldn’t get a preemptive injunction; so they were left with a fait accompli.)

Think vertical

In building associated businesses (aka “vertical search” – so-called because they’re specific to a field) such as Google Shopping (where listing was at first free, but then became paid-for just like AdWords), Google Flight Search (where Google could benefit from being top), and Google Product Search, Google did what everyone had said repeatedly, and what the FTC report confirmed: it pushed its own product above rivals, even when its own were worse, and even at its own expense.

The FTC report is instructive here. It cites a number of examples where Google either forced other sites to give it content, or took that content (even when the other sites didn’t want it to), or sacrificed search quality in order to push its own vertical products.

Forcing sites to give it content? In building Google Local, Google copied content from Yelp and many other local websites. When they protested – Yelp cut off its data feed to Google – Google tried for a bit, and then came up with a masterplan: it set up Google Places and told local websites that they had to allow it to scrape their content and allow it there, or it would exclude them altogether from web search. Ta-da! There were all the reviews that Google needed to populate Google Local, provided by its putative rivals for free, despite all the effort and cost it had taken them to gather them.

Classic Google: access other people’s content for free; ignore the consequential benefits. For Google, it isn’t important whether those local websites survive or not, because it has their data. For a company like Yelp, which relies on people coming to its site and using it, and inputting data, and makes its money from local ads and brand ads, any move by Google to annex its content is a serious threat.

This also points to Google’s dominance. Sites like Shopzilla, the FTC noted, were scared to deny Google free rein over their data because they worried that people wouldn’t find them.

Shopzilla worried about exclusion from Google's listings

Google offers you a ‘standard licence’, and you’d better accept it.

That’s arm-twisting of the first order.

Google was definitely worried about verticals taking away from its core business: in 2005 Bill Brougher, a Google product manager, said in an internal email that “the real threat” of Google not “executing on verticals” (ie having its own offerings) was

“(a) loss of traffic from Google.com because folks search elsewhere for some queries (b) related revenue loss for high spend verticals like travel (c) missing opportunity if someone else creates the platform to build verticals (d) if one of our big competitors builds a constellation of high quality verticals, we are hurt badly”.

You’ve got questions

Obviously, you’ll be going “but…”:
1) But aren’t “verticals” just another form of search? No – though they need search to be visible. A retailer of any sort is a “vertical”: a shop needs to know what it has to sell in order to offer it for sale. But populating the shop, tying up deals with wholesalers, figuring out pricing – those aren’t “search”. Amazon is a “vertical”; Moneysupermarket is a “vertical” (where it sells various deals, and wraps it with information in its forums). Hotel booking sites, shopping sites, they’re all “verticals”.

Their problem is that they need what they’re offering (“hotel tonight in Wolverhampton”) to be visible via general search, but they don’t want that to be something that can be scraped easily.

Amazon, for example, gave Google limited access to its raw feed; but Google wanted more, including star ratings and sales rankings. Amazon didn’t want to give that up for bulk use (though it was happy for it to be visible individually, when users called a page up). Google simply scraped the Amazon data, page by page – and used the rankings to populate its own shopping services. It did the same with Yelp – which eventually complained and sent a formal cease-and-desist notice.

In passing, this is a classic example of Google having it both ways: if your dataset is big enough, as with Amazon’s, then Google – and its supporters – can claim that scooping up of extra data such as shopping rankings and star ratings is “fair use”; if your dataset is small, then you’re probably small too, and will be threatened by the possibility of exclusion if you refuse to yield it up – witness Shopzilla, above.

(Side note: Microsoft wasn’t above doing something similar when it was dominant. Just read about the Stac compression case: Microsoft got a deep look at a third-party technology that effectively doubled your storage space in the bad old days of MS-DOS; then it took the idea and rolled it into MS-DOS for free, rather than licensing it. Monopolists act in very similar ways.)

2) But rival search sites are “just a click away”. You don’t have to use Google. The FTC acknowledges this point, which is one that Eric Schmidt and Google have made often. There’s a true/not true element to this. The search engine business effectively collapsed after the dot-com boom in 2000: Alta Vista, which was then the biggest (in revenue and staffing terms), lost all its display ads. And Google did the job better. That’s undeniable. But for at least five crucial years, it had pretty much zero competition. Microsoft was in disarray, and Google was able to attract both search data and advertisers to corner the market.

What’s more, it was the default for search on Firefox and Safari, which helped propel its use. The combination of “better, unrivalled and default” made it a monopoly. Most people don’t even know there’s an alternative, and couldn’t find one if asked. Just listen to how many times in everyday conversation – on the radio, in the street, in newspapers – you hear “google” used as a verb.

One thought on that “just a click away” – Google has poured huge amounts of money into making sure that people aren’t presented with any other search engine to begin with. The Mozilla organisation’s biggest source of funds for years has been Google, paying to be its default search (until last autumn, when Yahoo paid for the US default and Google, I understand, didn’t enter a bid – because Google Chrome is now bigger than Firefox). Google pays Apple billions every year to be the default search on Safari on the Mac, iPhone and iPad.

Clearly, Google doesn’t want to be in the position where it’s the one that’s a click away. That’s because it knows that the vast majority of people – usually 95% or so, for any setting – use the defaults.

The reality is that we are where we are: Google is the most-used search engine, it has the largest number and value of search advertisers, and crucially it is annexing other markets in verticals. This dominance/annexation nexus is exactly the point that Microsoft was at with Windows and Internet Explorer.

The difference, the FTC acknowledged, is in the “harm to consumers”. Antitrust, under the US Sherman Act, rests on three legs: monopoly of a market; using that monopoly to annex other markets; harm to consumers. In US v Microsoft, the “harm to consumers” was that by forcing inclusion of Internet Explorer,

“Microsoft foreclosed an opportunity for OEMs to make Windows PC systems less confusing and more user-friendly, as consumers desired” and “by pressuring Intel to drop the development of platform-level NSP software, and otherwise to cut back on its software development efforts, Microsoft deprived consumers of software innovation that they very well may have found valuable, had the innovation been allowed to reach the marketplace. None of these actions had pro-competitive justifications”; furthermore, in the final line of the judgement, Thomas Penfield Jackson says “The ultimate result is that some innovations that would truly benefit consumers never occur for the sole reason that they do not coincide with Microsoft’s self-interest.”

In the case of the FTC and Google, the harm to consumers is less clear-cut; in fact, that’s part of why the FTC held off. Yet it’s hard to look at the tactics that Google used – grabbing other companies’ content, demoting vertical rivals in search, promoting its own verticals even though they’re worse – and not see the same restriction of innovation going on. Might Shopzilla have turned into a rival to Amazon? Could Yelp have built its own map service? Or become something else? History is full of companies which have sort-of-accidentally “pivoted” into something remarkable: Microsoft with MS-DOS for IBM (a contract it got because the company IBM first contacted didn’t respond); Instagram into photos (it was going to be a rival to Foursquare).

What’s most remarkable about the demotion of rivals is that users actually preferred the rivals to be ranked higher according to Google’s own tests.

Footnote 154: the smoking gun

In footnote 154 (on page 132 of the report, but referring to page 29 of the body – which is sadly missing), the FTC describes what happened in 2006-7, when Google was essentially trying to push “vertical search” sites off the front page of results. Google would test big changes to its algorithms on “raters” – ordinary people who were asked to judge how much better a set of SERPs was, according to criteria given them by Google. I’m quoting at length from the footnote:

Initially, Google compiled a list of target comparison shopping sites and demoted them from the top 10 web results, but users preferred comparison shopping sites to the merchant sites that were often boosted by the demotion. (Internal email quote: “We had moderate losses [in raters’ rating – CA] when we promoted an etailer page which listed a single product because the raters thought this was worse than a bizrate or nextag page which listed several similar products. Etailer pagers which listed multiple products fared better but were still not considered better than the meta-shopping pages like bizrate or nextag”).

Google then tried an algorithm that would demote the CSEs [comparison shopping engines], but not below sites of a certain relevance. Again, the experiment failed, because users liked the quality of the CSE sites. (Internal email quote: “The bizrate/nextag/epinions pages are decently good results, They are usually formatted, rarely broken, load quickly and usually on-topic. Raters tend to like them. I make this point because the replacement pages that we promoted are occasionally off-topic or dead links. Another positive aspect of the meta-shopping pages is that they usually give a variety of choices… The single retailer pagers tend to be single product pages, For a more general query, raters like the variety of choices the meta-shopping site seems to give.”)

Google tried another experiment which kept a CSE within the top five results if it was already there, but demoted others “aggressively”. This too resulted in slightly negative results.

Unable to get positive reviews from raters when Google demoted comparison shopping sites, Google changed the raters’ criteria [my emphasis – CA] to try to get positive results.

Previously, raters judged new algorithms by looking at search results before and after the change “side by side” (SxS), and rated which search result was more relevant in each position. After the first set of results, Google asked the users to instead focus on the diversity and utility of the whole set of results, rather than going result by result, telling users explicitly that “if two results on the same side have very similar content then having those two results may not be more valuable than just having one.” When Google tried the new rating criteria with an algorithm which demoted CSEs such that sometimes no CSEs remained in the top 10, the test again came back “solidly negative”.

Google again changed its algorithm to demote CSEs only if more than two appeared in the top 10 results, and then only demoting those beyond the top two. With this change, Google finally got a slightly positive rating in its “diversity test” from its raters. Google launched this algorithm change in June 2007.

Here’s the point to hold on to: users preferred having the comparison sites on the first page. But Google was trying to push them off because, as page 28 of the report explains,

“While Google embarked on a multi-year strategy of developing and showcasing its own vertical properties, Google simultaneously adopted a strategy of demoting, or refusing to display, links to certain vertical websites in highly commercial categories. According to Google, the company has targeted for demotion vertical websites that have ‘little or no original content’ or that contain ‘duplicative’ content.”

On that basis, wouldn’t Google have to demote its own verticals? There’s nothing original there. But Google also decided that comparison sites were “undesirable to users” – despite all the evidence that it kept getting from its raters – while at the same time deciding that its own verticals, which sometimes held worse results, were desirable to users.

Clearly, Google doesn’t necessarily pursue what users perceive to be the best results. It’s quite happy to abandon that in the pursuit of what’s perceived as best for Google.

Fair fight?

Now, that’s fair enough – up to a point. Google can mess around with its SERPs, but only up to the point where it uses its search monopoly to annex other markets to the disbenefit of consumers. It’s easy to argue that in preventing rival verticals getting visibility, it reduced the options open to consumers. What’s much harder is proving harm. That’s where the FTC stalled.

But in Europe, that last part isn’t a block. Monopoly power together with annexation is enough to get you hauled before the European Commission’s DGCOMP (directorate-general for competition). The FTC and EC coordinated closely on their investigations, to the extent of swapping papers and evidence. So the EC DGCOMP has full copies of both the FTC reports. (If only they would leak…)

There’s been plenty of complaining that the EC’s pursuit of Google is just petty nationalism. People – well, Americans – point to the experiment where papers prevented Google News linking to them. Their traffic collapsed. They came back to Google News. Traffic recovered. Sure, this shows that Google is essential; cue Americans crowing about how stupid the newspapers were.

However, if you stop to think about the meaning of the word “monopoly”, that’s not necessarily a good thing for Google to have demonstrated – even unwittingly – in Europe. Now the publishers, who have what could generously be called a love-hate relationship with Google, can show yet another piece of evidence to DGCOMP about the company’s dominant position.

What happens next?

Vestager will issue a Statement of Objections (which, sadly, won’t be public) some time in the next few weeks; that will go to Google, which will redact the commercially confidential bits, then send it back to Vestager, who will show it to complainants (of whom there are quite a few), who will comment and then give it back to Vestager.

Then the hard work starts. Whether Google seeks to settle will depend on what Vestager is demanding. Will she try to prevent Google from foreclosing emerging spaces – the future verticals we don’t know about? Or just try to change how it treats existing verticals? (Ideally, she’d do both.) Many of the issues around scraping and portability of advertising which Almunia enumerated in May 2012 have been settled already (now that Google has wrapped them up; the scraped datasets aren’t coming out of its data roach motel).

Neither is going to make all the revenue lost to Google favouring its own services come back. And as with record labels and YouTube, it’s likely that Google will try to stretch this out for as long as possible; the more it does, the more money it gets, and the less leverage its rivals have.

Even so, I can’t help thinking that, rather as with Microsoft and Internet Explorer, the chance to act decisively has long been missed. Instead, a different phenomenon is pushing Google’s dominance on the desktop aside: mobile. Mobile ads are cheaper and see fewer clicks, and search is used less than apps. I’d love to see a breakdown of Google’s income from mobile between app sales and search ad sales (and YouTube ad sales): I wonder if apps might be the bigger revenue generator. Yelp, meanwhile, seems to do OK in the new world of mobile. It’s possible – maybe even likely – that Google’s dominance of the desktop will be, like Microsoft’s, broken not by the actions of legislators but by the broader change in technologies.

Right and wrong lessons

But the wrong lesson to take from that would be “legislators shouldn’t do anything”. Because there’s always the potential for inaction to corner a market and foreclose on real innovation. Big companies which become dominant need to worry that legislators will come after them, because even that consideration makes them play more fairly.

And that’s why the Google tussle with the FTC and EC matters. It might not make any difference to those that feel wronged by Google on the desktop. But it could forestall whoever comes next, and it will focus the minds of the legislators and the would-be rivals. Google might not have any friends. There might come a time when it will wish it had some, though.

BlackBerry’s lucky that BB10 handsets sell so badly, or it would have real problems

A broken business model? Photo by MattHurst on Flickr.

BlackBerry announced its fiscal fourth-quarter results on Friday, covering December 2014 to the end of February 2015, and they were pretty woeful. Revenues came in at $660m, way below what analysts were expecting, and the company made an operating loss of $106m ($50m of which was an adjustment for the potential value of the $1.25bn cash injection it got from a debenture issue). It managed to squeak a net profit, helped by $115m gained from selling its share in the Rockstar patent consortium.

I’m continually fascinated by BlackBerry, because it’s a company struggling to turn itself from one thing (a business that sells handsets and takes an ongoing fee for handling their data) into another (a company that makes its money from licensing software to manage handsets in businesses).

BlackBerry’s problem is that it’s still stuck on the old model, while the new model isn’t coming up fast enough to help it. Here’s the revenue breakdown:
• hardware revenues (from device sales) were $274m;
• service revenues (principally from carriers paying it for carrying data for its BB7 handsets) were $309m;
• software revenues were $67m. (There’s another $10m of “other” which is things like currency hedging and handset warranties.)

Service with a smile

BlackBerry gets the vast majority of its services revenue from the Service Activation Fee (SAF) on BB7 handsets, which is a recurring monthly payment from carriers for carrying data. When people buy BB10 handsets, it doesn’t get an SAF. However, users of its BB10 handsets are counted as “subscribers” in its numbers – its latest 40F annual report says

“BlackBerry World is a content distribution storefront managed by the Company that enables developers to reach BlackBerry subscribers around the world”

BB7 and BB10 handsets can access that storefront. Later it says that

“The Company currently generates service revenue from billings to its BlackBerry subscriber account base that utilize BlackBerry 7 and prior BlackBerry operating systems primarily from a monthly infrastructure access fee (sometimes referred to as a “service access fee” or “SAF”)…”

So both BB10 and BB7 users are “subscribers”, but only the BB7 ones generate SAF revenue.

Now consider this: BlackBerry isn’t gaining any new consumer subscribers. It’s losing them hand over fist, though it might be hanging on to some business users. BlackBerry is very coy about its subscriber count and how many BB10 handsets have reached end users, not mentioning them in its earnings releases, and squirrelling them away in its financial documents. But they can be unearthed if you’re determined enough. (I am.) Here’s the latest subscriber number, on p106 of the 40F, which was released some time after the financial results on Friday.

BlackBerry subscribers: 37m

Detail from BlackBerry’s 40F report for the latest year: it says it has 37m subscribers.

Here’s how its subscriber count has been going – dug out, again, from details in financials going back over many quarters:

Total BlackBerry subscribers over time

The peak – 80m – occurred before the release of BB10.

At the end of February 2015, the subscriber count was 37m. The total number of handsets that reached customers (“sell-through”) in the eight quarters since BB10 was launched is 26.2m. Digging back reveals how many BB10 handsets have actually shipped to end users – surely replacing existing BB7 handsets: just 10.1m.

Handset sales mix since BB10 launch

BB7 handsets have been substantial for some time.

Two things:
• this suggests that 70% of BlackBerry subscriber handsets now in use were bought in the past two years.
• isn’t it amazing that BB10, the platform that was going to be BlackBerry’s salvation, has only sold two-thirds as many handsets as the platform it was supposedly making redundant two years ago? It’s as if the iPhone 4S were running iOS 6 and radically outselling the iPhone 6, or the Galaxy S3 were outselling the Galaxy S5.

So this is how the subscriber base looks, split into BB7 and BB10:

BlackBerry subscriber base, by handset

BB10 still makes up only a small proportion of users – about 10m out of 37m

Easy assumptions

Let’s assume all 10.1m BB10 handsets are in use, and all replaced BB7 handsets. That means there are now 26.9m BB7 handsets generating SAF revenues, at an average $309m/26.9m = $11.49 per quarter. (Remember this number, we’ll use it later.)

But – imagine – what if every handset sold since the introduction of BB10 had been a BB10 handset? That would mean 26.2m BB10 handsets, so only 37m-26.2m = 10.8m BB7 handsets in use. In the just-gone quarter they would have generated $11.49 x 10.8m = $124.1m in service revenue – a drop of $184.9m in service revenue. Yow!
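If you want to check the arithmetic, here’s the counterfactual as a short Python sketch. All the figures are the ones quoted above; the variable names are mine:

```python
# Sketch of the counterfactual: what if every handset sold since BB10's
# launch had been a BB10 handset? Figures as quoted in the post.
subscribers_m = 37.0      # total subscribers, millions
sell_through_m = 26.2     # handsets reaching users since BB10 launch, millions
bb10_shipped_m = 10.1     # actual BB10 sell-through, millions

# Actual: 37m - 10.1m = 26.9m BB7 handsets share $309m of quarterly SAF revenue
saf_per_quarter = 309.0 / (subscribers_m - bb10_shipped_m)   # ~ $11.49

# Counterfactual: all 26.2m sales were BB10, leaving only 10.8m BB7 units on SAF
bb7_remaining_m = subscribers_m - sell_through_m
counterfactual_service_rev = saf_per_quarter * bb7_remaining_m

print(f"SAF per handset per quarter: ${saf_per_quarter:.2f}")
print(f"Counterfactual service revenue: ${counterfactual_service_rev:.1f}m")
print(f"Lost vs actual $309m: ${309.0 - counterfactual_service_rev:.1f}m")
```

Same answer: roughly $124.1m of service revenue instead of $309m, a drop of about $184.9m per quarter.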

Then again, think of all the hardware revenue! Surely that would more than make up for it?

Yes, it would – ah, but for one detail: SAF is crazy profitable, and it’s profitable during the lifetime of the handset. This has big implications. Companies want to be profitable, and want to be profitable for a long time – not just one-offs from handset sales.

Services are super-profitable for BlackBerry because it has the infrastructure for it, and can easily handle the data volumes involved (it used to handle it for twice as many devices, after all). Another little detail I found in the financials is that in the fiscal year to end Feb 2015, services and software brought in revenues of $1,854m – but the cost of sales (ie how much it cost to do) was just $287m.

Here are the numbers for three preceding years:

BlackBerry hardware and software margins

Hardware slumps towards loss while services and software coin it.

And now the three most recent years:
Hardware, services software cost of sales

For the past three fiscal years

Services and software together have a gross margin of nearly 85%, compared with hardware gross margins, which have wavered: 4% in FY13, -2.7% in FY14, and 6.7% in the latest year. (They were as high as 36% back in the year to March 2011, when it shipped 52m handsets, and 20% in the year following, when it shipped 49m. In the just-gone year it shipped 7m. You don’t get 20% margins at that scale.)
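The “nearly 85%” figure is easy to verify from the revenue and cost-of-sales numbers quoted above – a one-liner, really:

```python
# Services + software gross margin for the fiscal year to end-Feb 2015,
# using the figures quoted in the post
revenue_m = 1854.0   # services + software revenue, $m
cost_m = 287.0       # associated cost of sales, $m
gross_margin = (revenue_m - cost_m) / revenue_m
print(f"Services + software gross margin: {gross_margin:.1%}")
```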

Hardware, services: comparative margins

Hardware’s a pretty lousy way to make money (at least if you’re BlackBerry)

Even if you treat software revenues as pure 100% profit, the services gross margin still comes out as 82%, even in a “bad” year.

Services: still very profitable

Even if you assume software is 100% profitable, services come in with a margin of 82% or so.

So in our scenario where BlackBerry has lost $184.9m in service revenue (because all the handsets it sold were BB10), that means it has forgone $151.6m in gross margin profit in a single quarter. At a time when it’s struggling to show any profit (remember, it recorded an operating loss, because of things like R+D and marketing), that’s bad.

What price do handsets need to sell for to make up for that? Let’s first find out how much we need to collect. Take the $11.49 per-handset SAF revenue from the latest quarter: at 82% margin, that yields $9.42 in gross margin profit per BB7 handset per quarter.

We saw above that 70% of handsets were replaced over two years; logically all 100% should refresh over three years. So SAF revenue yields profit for, let’s say, 12 quarters. Even at the low SAF revenues we’re seeing, if you take that as read, then over 12 quarters (at $11.49 revenue per quarter, $9.42 gross margin) services yield a per-handset gross margin profit of $9.42 x 12 = $113.04 over a BB7 handset’s life.
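The same sum as a sketch; the 12-quarter life and the 82% services margin are the assumptions stated above:

```python
# Lifetime service profit per BB7 handset, on the assumptions in the post:
# a three-year (12-quarter) handset life and an 82% services gross margin
saf_revenue_q = 11.49    # SAF revenue per handset per quarter, $
service_margin = 0.82
quarters = 12

profit_per_quarter = round(saf_revenue_q * service_margin, 2)   # ~ $9.42
lifetime_profit = profit_per_quarter * quarters                 # ~ $113.04
print(f"Per-quarter service profit: ${profit_per_quarter:.2f}")
print(f"12-quarter lifetime profit: ${lifetime_profit:.2f}")
```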

Hardware: bad news

How does hardware compare? Well, for BB10 hardware to be worth it for BlackBerry, it has to generate as much, or more, gross profit over three years.

At 4% gross margin, that means BB10 handsets would have to sell at an average price of $113.04/0.04 = $2,826. Yes, nearly three thousand dollars. Even at 20% margin, it would need a handset that it sells to carriers for $565.20. That’s iPhone-style pricing.

They’re nowhere near that. Yeah, I know, the BlackBerry fans will tell me that the BB10 Classic is going to sell like crazy, because it looks like a BB7 handset, and that carriers are all behind the company, and so on. Look though at the price of the Classic: $449. At 10% gross margin (and taking the retail price as what BlackBerry gets, which it isn’t), that’s a hardware gross profit of $44.90 (and a service profit of $0). At 20% (which it won’t make at the tiny scale it operates on), the one-off hardware profit is $89.80.
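Here are those break-even sums gathered in one sketch, using the $113.04 lifetime SAF figure and the Classic’s $449 retail price; the margin levels are the illustrative ones above:

```python
# Back-of-envelope: what a BB10 handset would have to sell for so that its
# one-off hardware profit matches 12 quarters of BB7 SAF profit
lifetime_saf_profit = 113.04   # $, from the calculation above

break_even_asp_4 = lifetime_saf_profit / 0.04    # ASP needed at 4% margin
break_even_asp_20 = lifetime_saf_profit / 0.20   # ASP needed at 20% margin
print(f"Break-even ASP at 4% margin: ${break_even_asp_4:,.2f}")
print(f"Break-even ASP at 20% margin: ${break_even_asp_20:.2f}")

# The BB10 Classic at its $449 retail price (generously treating retail
# price as what BlackBerry receives)
classic_price = 449.0
classic_profit_10 = classic_price * 0.10   # one-off profit at 10% margin
classic_profit_20 = classic_price * 0.20   # at 20% margin
print(f"Classic one-off profit at 10%: ${classic_profit_10:.2f}")
print(f"Classic one-off profit at 20%: ${classic_profit_20:.2f}")
```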

On its current SAF, BlackBerry gets more profit from a BB7 handset within five quarters than from a BB10 handset sold at 10% margin; within ten quarters at 20% margin. You might argue that BB7 handsets lose money at sale while BB10 ones make it. The numbers don’t say that, though. BlackBerry’s numbers have all trended down ever since the launch of BB10, and its accounts are littered with writedowns on inventory.

Conclusion: squaring the circle

In short? John Chen is pretty fortunate that BB10 handsets don’t sell that well, because they’d tear down the already faltering finances of the company. It actually makes better financial sense to keep selling BB7 handsets. The immediate handset profit is lousy, but the recurring revenues are great.

Now, it’s true that carriers are pushing down the SAF. It’s also true that BlackBerry hardware average selling prices (ASPs) are edging up – to $231 in the most recent quarter, when “most” (66% – it’s on page 128 of the 40F) handsets that reached customers were BB10s. At a 4% margin, that yields a one-off gross profit of $9.24 per handset – but unlike the SAF, that single sum has to last over 12 quarters. (This is why one-off hardware is such a hard game to make pay, and why recurring high-margin software revenue is so great. Contrast the business models of Apple v Microsoft.)

Apart from continuing to slash costs and headcount, there’s no obvious way for Chen to square this circle. He needs enterprise customers to sign up to the BES 12 service (enterprise server), but it was very noticeable that whenever he was asked about this during the earnings call, he said he didn’t have with him the numbers for how many had converted:

Q: So maybe give us a quarterly total and somewhat of a split between while under the EZ PASS program and after the EZ PASS program?

John Chen – Chief Executive Officer: First of all after the — the EZ PASS program ended at 6.8 million licenses, no, I don’t have that number with me and I will have to look at some metrics.

Uh-huh. I bet if the numbers had been good, he’d have made sure that they were right there at his side. The fact he couldn’t offer anything didn’t sound good to me.

Still, it could have been worse. He could have had to tell people that they’d only sold BB10 handsets.

(None of this, by the way, is a comment on the quality or otherwise of BB10. It’s simply what emerges from the numbers. But I will say that the hope held out by BlackBerry fans that people will buy it for “security” is misplaced. When the head of the FBI is demanding back doors into iPhones and Android phones because they’re too secure, smartphone security has easily reached “good enough”.)

Committing acts of journalism: the New Yorker profile of Jony Ive

Pretty sure they’re not rewriting that week’s New Yorker. The Toronto Star newsroom in December 1930. Photo by Toronto History on Flickr.

Ian Parker’s magnificent profile of Jony Ive (and to a lesser extent Apple) in the New Yorker has received lots of attention – mainly from journalists at a multitude of news outlets who each spent a jolly hour or so filleting it for 23 Things You Maybe Didn’t Know But Might Be Persuaded To Read Because They’re In The Form Of A List Rather Than A 17,000-Word Article.

What’s been largely overlooked is the sheer amount of work done by Parker in putting this together. Clearly, Apple’s PR people played an important part: they set up at least three interviews with Ive (some time in late July/early August; the day after the iPhone release; some time later when they go to see the new campus being built). There’s also an interview with Tim Cook, whose time isn’t exactly limitless.

But once you’ve got past those, you realise that Parker has spoken to loads more people. Let’s list the people internal to Apple whom Parker spoke to, even briefly, and quoted:
• Marc Newson
• Craig Federighi
• Jeff Williams
• Bart Andre
• Hartmut Esslinger
• Eugene Whang
• Julian Hönig
• Jody Akana
• Dan Riccio
• Evans Hankey
• Alan Dye

That’s 11 people beside Ive and Cook, and that’s only inside Apple.

Now here’s a list of the people Parker spoke to outside the company:

• Laurene Powell Jobs
• Robert Brunner
• Jeremy Kuempel
• JJ Abrams
• Richard Sapper
• Richard Seymour
• Clive Grinyer
• Paola Antonelli
• Doug Satzger (who Parker contacted, but who wouldn’t comment – but Parker at least tried)
• Bob Mansfield
• Michael Ive (Ive’s father) [added after a reader familiar with the article pointed out this omission]

That’s another 11 people, of varying difficulty on access. (He also has a brief conversation with Heather Ive, Ive’s wife, but it hardly counts as an interview. It’s not clear from the text whether he communicated with Paul Smith to verify something about the contents of notes, but you can bet that – this being the New Yorker – the point did get fact-checked.)

One score and three

In all, that’s 24 people with whom he had longer or shorter interviews. And there’s also a hell of a lot of reading and information mixed in there – worn very lightly (such as the point about how much profit Apple makes during a 25-second pause, or how quickly phones were coming off Chinese assembly lines as Tim Cook announced them). Deciding what parts of 24 interviews to use (once you’ve transcribed them, of course) and what to throw away, plus what part of the observations around them to use (Jony Ive’s manner), and then simply writing it and getting it straight, and through fact-checking, subbing, and editing, is a huge task. At that length, it’s a short novella.

Parker’s reward? I’m sure that a ton of people have read the article and learnt from it. Lots of people have done rapid rewrites of salient bits of it, but the only original piece of journalism was Parker’s, in the first place. Sure, Apple gave him access, and lots flowed from that.

Sure, Apple chose to set it up – and this is the first substantial piece about Apple I’ve ever seen in the New Yorker, which I’ve been reading for decades – but the company wouldn’t have been able to dictate the content or conclusions; notice Parker’s sardonic take on Cook’s comments about how the forthcoming Watch, which Cook was wearing, would be all about notifications: “I noticed that, at this moment in the history of personal technology, Cook still uses notifications in the form of a young woman appearing silently from nowhere to hold a sheet of paper in his line of sight.”

And the access that Parker managed to get to other people? Clearly, he was helped by the name of the New Yorker (and quite possibly some nudging from Apple to people like Powell Jobs and Bob Mansfield).

Views differ. So go and find some

But the contrast between Parker’s in-depth, cover-the-bases profile and the many pieces that get thrown off every day on news outlets – where the writer doesn’t bother to get an opposing, a neutral, or indeed any outside view about a piece of information – often bugs me. When I worked at New Scientist, it was an ironclad rule (and, reading it, I think it still is) that in any news piece you were writing, you had to get an outside opinion about whatever marvel had been unveiled, whether it was the birth of the universe or the discovery of a new species of beetle under a log in the Amazon. As a writer, this put one to the test, and the stress was only magnified at a daily paper, where one had to find people willing to comment to tight deadlines.

Yet those outside voices play an important part in news: they counterbalance, and stop the pell-mell rush towards what can otherwise be the recycling of press releases. It dismays me when, as often happens, a Big Company announces something which is both (a) years away from fruition (b) being spun furiously for its “innovative” potential, and then sees yards of approving writeups in which no news editor has thrown the copy back at the writer and said “No. Go back and find someone – preferably two people – with an opinion about it.”

Yes, I know, web deadlines, bla bla bla. I cleave to the view that if someone else has already written the story, your job as a journalist/writer is to move the story on – do something more, get a new perspective, find out something new, unearth the fact nobody else has brought to light.

This applies even more when everyone’s rushing to put something, dear god anything, online so Google News will anoint it as this minute’s top story. If you move the story on, then you become the one they look to. Quality will tell because monkeys (well, software) will eventually eat those jobs doing rewrite anyway. Look, a company that automatically writes sports and finance stories just sold for $80m. It’s coming for your listicles next.

Parker, obviously, demonstrates that quality becomes a virtuous circle. Do it well, and you get better access. But you have to do it well in the first place.