It’s all AI, all the time

All my feeds seem to be full of reflections on the inevitability of the changes that will soon be brought about by artificial intelligence. It may be my cognitive biases kicking in after spending time thinking about this at length last week, but I’m pretty sure it’s not just me noticing these posts more.

Ton Zijlstra has an interesting take on today’s corporations as ‘slow AI’, and on how they are geared to take advantage of digital AI:

…‘Slow AI’ as corporations are context blind, single purpose algorithms. That single purpose being shareholder value. Jeremy Lent (in 2017) made the same point when he dubbed corporations ‘socio-paths with global reach’ and said that the fear of runaway AI was focusing on the wrong thing because “humans have already created a force that is well on its way to devouring both humanity and the earth in just the way they fear. It’s called the Corporation”. Basically our AI overlords are already here: they likely employ you. Of course existing Slow AI is best positioned to adopt its faster young, digital algorithms. It as such can be seen as the first step of the feared iterative path of run-away AI.

Daniel Miessler conceptualises Universal Business Components, a way of looking at and breaking down the knowledge work performed by white-collar staff today:

Companies like Bain, KPMG, and McKinsey will thrive in this world. They’ll send armies of smiling 22-year-olds to come in and talk about “optimizing the work that humans do”, and “making sure they’re working on the fulfilling part of their jobs”.

So, assuming you’re realizing how devastating this is going to be to jobs, which if you’re reading this you probably are—what can we do?

The answer is mostly nothing.

This is coming. Like, immediately. This, combined with SPQA architectures, is going to be the most powerful tool business leaders have ever had.

When I first heard about the open letter published in late March calling on AI labs to pause their research for six months, I immediately assumed it was a ploy by those who wanted to catch up. In some cases, it might have been — but I now feel much more inclined to take the letter and its signatories at face value.

The hallucinations of AI creators

Naomi Klein, writing in The Guardian:

The former Google CEO Eric Schmidt summed up the case when he told the Atlantic that AI’s risks were worth taking, because “If you think about the biggest problems in the world, they are all really hard – climate change, human organizations, and so forth. And so, I always want people to be smarter.”

According to this logic, the failure to “solve” big problems like climate change is due to a deficit of smarts. Never mind that smart people, heavy with PhDs and Nobel prizes, have been telling our governments for decades what needs to happen to get out of this mess: slash our emissions, leave carbon in the ground, tackle the overconsumption of the rich and the underconsumption of the poor because no energy source is free of ecological costs.

The reason this very smart counsel has been ignored is not due to a reading comprehension problem, or because we somehow need machines to do our thinking for us. It’s because doing what the climate crisis demands of us would strand trillions of dollars of fossil fuel assets, while challenging the consumption-based growth model at the heart of our interconnected economies. The climate crisis is not, in fact, a mystery or a riddle we haven’t yet solved due to insufficiently robust data sets. We know what it would take, but it’s not a quick fix – it’s a paradigm shift. Waiting for machines to spit out a more palatable and/or profitable answer is not a cure for this crisis, it’s one more symptom of it.

The whole article is an excellent read. I’d love us to move to a Star Trek-like future where everyone has what they need and the planet isn’t burning. But — being generous to the motives of AI developers and those with a financial interest in their work — there’s an avalanche of wishful thinking that the market will somehow get us there from here.

U.S. music revenues

A recent Stratechery post pointed me in the direction of these wonderful graphs of music revenues and sales. They fascinate me.

Some random thoughts:

  • Ringtone revenues were massive, for a very short period. They first register on the graphs in 2005, but by 2008 they were already in decline. I would have thought they would be correlated with the introduction of the iPhone, but the sales seem to pre-date it.
  • I’d never heard the term ‘synchronisation’ before. Apparently it is the payment for using songs in films, TV shows and adverts (and presumably videogames too).
  • Physical music video sales were never a big thing, but they don’t seem to have been killed off until about 10 years after YouTube turned up in 2005.
  • I’d forgotten that cassette singles were a thing. I had some. I barely played them.
  • 1998 was the peak year for recorded-music revenue in the U.S., adjusted for inflation. Paid streaming subscriptions now dominate, as you would expect, but revenues are way down. Spotify has never turned an annual profit. It would be interesting (and probably impossible) to see what a graph of total artist payouts for recorded music sales looks like. Maybe the revenue decline for artists is a return to a historical norm.
  • I don’t know what the ‘Kiosk’ category is. It seems to start in 2005, but is barely noticeable.
  • LPs/EPs really dominate the early years of the graph. I guess that by 1973 we were already past the era when singles were the main focus.

Superfans and marginal customer acquisition costs

Ben Thompson has been running some superb interviews for subscribers of his Stratechery newsletter. His recent interview with Michael Nathanson of the MoffettNathanson research group is no exception.

I loved this insight about how companies are valued when they are relatively new and growing quickly. Their maths may be over-optimistic, because they underestimate the cost of adding marginal customers after the initial rapid rise:

Ben Thompson: …a mistake a lot of companies make is they over-index on their initial customer. The problem is when you’re watching a company, that customer wants your product really bad, they’ll jump through a lot of hoops, they’ll pay a high price to get it. Companies build lifetime value models and derive their customer acquisition costs numbers from those initial customers and then they say, “These are our unit costs”, and those unit costs don’t actually apply when you get to the marginal customer because you end up spending way more to acquire them than you thought you would have.

Michael Nathanson: That’s my question to Disney, which is, and I think you wrote this — your first 100 million subs, look at the efficiency of how you built Disney+, it was a hot knife through butter. But now to get the next 100 million subs, what are you going to do? You’re going to add sports, do entertainment, more localized content. My question to Disney is, is it better just to play the super-fan strategy where you know your fans are going to be paying a high ARPU [average revenue per user] and always be there, or do you want to, like Netflix, go broader? I don’t have an answer, but I keep asking management, “Have you done the math?”
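To make the point concrete, here is a minimal sketch with entirely hypothetical numbers (the `ltv` helper, the ARPU, margin, retention and acquisition-cost figures are all my own illustrative assumptions, not anything from the interview or from Disney’s actual economics). It just shows how an LTV/CAC ratio derived from early adopters can flatter a business once marginal customers cost more to acquire and churn faster:

```python
# Illustrative sketch only: all numbers below are hypothetical.
# It shows why unit economics built on early adopters can mislead
# once the marginal customer is more expensive and less sticky.

def ltv(monthly_arpu: float, gross_margin: float, avg_months_retained: float) -> float:
    """Lifetime value: margin-adjusted revenue over the average subscriber lifetime."""
    return monthly_arpu * gross_margin * avg_months_retained

# Early adopters: cheap to reach, sticky, happy to pay.
early_ltv = ltv(monthly_arpu=8.0, gross_margin=0.6, avg_months_retained=36)
early_cac = 40.0   # hypothetical acquisition cost per early subscriber

# Marginal customers: need discounts, sports rights, localised content, more marketing.
marginal_ltv = ltv(monthly_arpu=6.0, gross_margin=0.6, avg_months_retained=18)
marginal_cac = 120.0  # hypothetical, and typically much higher than the early figure

print(f"Early cohort LTV/CAC:    {early_ltv / early_cac:.1f}")        # ~4.3, looks great
print(f"Marginal cohort LTV/CAC: {marginal_ltv / marginal_cac:.1f}")  # ~0.5, value-destroying
```

If you only ever ran the first calculation, you would conclude the next 100 million subscribers are as profitable as the first 100 million, which is exactly the mistake Thompson is describing.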