For some reason my rate of reading has been very slow this year. This may explain the feeling I had when I finished New Dark Age: Technology and the End of the Future by James Bridle — that it hadn’t made a big impression on me. Looking back over the 150 highlights I made as I read the book, I think I am mistaken. Bridle covers a lot of ground, and I can see in the highlights the origins of ideas that have been buzzing around in my head over the past couple of months.
The fascinating premise of the book is that the further technology seeps into our world, the less we understand about it — we enter a collective ‘dark age’ of understanding. This is a paradox, given that we now have greater access to knowledge than at any time in the past. It made me think of something else I read or heard — perhaps from Alain de Botton — that modern knowledge work is now largely invisible. You can stand in the middle of an office full of people and not be able to see or understand what anyone is doing. This wasn’t true back in the days when computers were human. Scaling this notion up from a single office to our whole society, the premise still holds true.
It was fascinating to read about the SSEC, a working computer put on show in a shopfront window next to IBM’s headquarters in Manhattan. It’s a perfect metaphor for our inability to see what technology is doing:
[…]the IBM Selective Sequence Electronic Calculator (SSEC), installed in New York in 1948, refused such easy reading. It was called a calculator because in 1948 computers were still people, and the president of IBM, Thomas J. Watson, wanted to reassure the public that his products were not designed to replace them. […] The SSEC was installed in full view of the public inside a former ladies’ shoe shop next to IBM’s offices on East Fifty-Seventh Street, behind thick plate glass. […] To the crowds pressed up against the glass, even with the columns in place, the SSEC radiated a sleek, modern appearance. It took its aesthetic cues from the Harvard Mark I, which was designed by Norman Bel Geddes, the architect of the celebrated Futurama exhibit at the 1939 New York World’s Fair. It was housed in the first computer room to utilise a raised floor, now standard in data centres, to hide unsightly cabling from its audience […] after the first couple of weeks, the machine was largely taken up by top secret calculations for a programme called Hippo, devised by John von Neumann’s team at Los Alamos to simulate the first hydrogen bomb. Programming Hippo took almost a year, and when it was ready it was run continuously on the SSEC, twenty-four hours a day, seven days a week, for several months. The result of the calculations was at least three full simulations of a hydrogen bomb explosion: calculations carried out in full view of the public, in a shopfront in New York City, without anyone on the street being even slightly aware of what was going on.
Bridle asserts that we have mistaken the mass collection of data for an increase in information and knowledge, but this belief is misplaced. The more data we have, the harder it is to make sense of it:
And so we find ourselves today connected to vast repositories of knowledge, and yet we have not learned to think. In fact, the opposite is true: that which was intended to enlighten the world in practice darkens it. The abundance of information and the plurality of worldviews now accessible to us through the internet are not producing a coherent consensus reality, but one riven by fundamentalist insistence on simplistic narratives, conspiracy theories, and post-factual politics. It is on this contradiction that the idea of a new dark age turns: an age in which the value we have placed upon knowledge is destroyed by the abundance of that profitable commodity, and in which we look about ourselves in search of new ways to understand the world.
With the rapid deployment of large language models and other forms of artificial intelligence, this problem is only likely to get worse. Researchers are still working to understand why generative AI behaves as it does; as I learned recently, the history of AI contains a substantial amount of trial and error.
It was also shocking to read that the mass surveillance that came to light through the Edward Snowden revelations a decade ago has been collectively shrugged off and continues to this day:
Ultimately, the public appetite for confronting the insane, insatiable demands of the intelligence agencies was never there and, having briefly surfaced in 2013, has fallen off, wearied by the drip-drip of revelation and the sheer existential horror of it all. We never really wanted to know what was in those secret rooms, those windowless buildings in the centre of the city, because the answer was always going to be bad. Much like climate change, mass surveillance has proved to be too vast and destabilising an idea for society to really get its head around.
And this is despite there being evidence that this kind of mass surveillance doesn’t work very well:
Studies have repeatedly shown that mass surveillance generates little to no useful information for counterterrorism offices. In 2013, the President’s Review Group on Intelligence and Communications Technologies declared mass surveillance ‘not essential to preventing attacks’, finding that most leads were generated by traditional investigative techniques such as informants and reports of suspicious activities.
I think that people don’t understand, or don’t care, enough about surveillance. When I tell people that I have Siri turned off on my Apple devices, that I won’t have an Amazon Alexa or Google Home ‘smart speaker’ in my house, and that I wouldn’t install a Ring doorbell, I sound like a tin-foil-hat-wearing crazy person. But I’m really not keen on everything I say being recorded, stored on some random servers somewhere, and available to engineers who work at the company that owns them.
I’ve also been thinking about how our 1990s-era visions of the Internet as a democratising, distributed force have not played out like that at all. Both IT services and infrastructure have tended towards monopolies and oligopolies. And when regulation arrives, the incumbents are the beneficiaries: with their deep pockets they can respond and implement any required changes, while the price of entry for new companies may become prohibitively high. The rising tide of technology seeping into everything doesn’t lift all boats equally.
Technology is in fact a key driver of inequality across many sectors. The relentless progress of automation–from supermarket checkouts to trading algorithms, factory robots to self-driving cars–increasingly threatens human employment across the board. There is no safety net for those whose skills are rendered obsolete by machines; and even those who programme the machines are not immune. As the capabilities of machines increase, more and more professions are under attack, with artificial intelligence augmenting the process. The internet itself helps shape this path to inequality, as network effects and the global availability of services produces a winner-takes-all marketplace, from social networks and search engines to grocery stores and taxi companies. The complaint of the Right against communism–that we’d all have to buy our goods from a single state supplier–has been supplanted by the necessity of buying everything from Amazon. And one of the keys to this augmented inequality is the opacity of technological systems themselves.
It’s a fascinating read. I was already some way through the book before realising that there is an updated edition available. I haven’t been able to find out what has changed with this new version, but I am sure it will only have enhanced what is already a very good book.
@adoran2 just on the subject of voice control — which I hate, by the way — in terms of surveillance, I was under the impression that sound is only recorded once you have activated the assistant with the trigger word. I.e. not everything you’re saying is recorded and stored.
@matharden Please forgive me for not completely trusting the software 😊.
My stance against it is futile as everybody else around me has it turned on. It’s like deciding not to use Gmail — everyone else is using Gmail, so my emails will still be in Gmail.
@adoran2 maybe I’m wrong. But I thought that was the case. If that is what is said to be the case, do you not trust that it’s the truth?
@matharden Yes, I don’t trust it.