I finally got around to reading the Stephen Wolfram essay on What Is ChatGPT Doing … and Why Does It Work? Despite being written in relatively simple terms, the article still pushed the boundaries of my comprehension. Parts of it landed on my brain like an impressionist painting.
Things that stuck out for me:
In order to improve the output, a deliberate injection of randomness (controlled by a parameter called ‘temperature’) is required, which means that ‘lower-probability’ words sometimes get chosen as text is generated. Without this, the output seems to be “flatter”, “less interesting” and doesn’t “show any creativity”. (A rough sketch of this kind of sampling appears at the end of these notes.)
Neural networks can be better at complex, fuzzy problems than at simple, precise ones. Doing arithmetic via a neural network-based AI is very difficult, as there is no explicit sequence of operations as you would find in a traditional procedural computer program. Humans can do lots of complicated tasks, but we use computers for calculations because they are better at that type of work than we are. Now that plugins are available for ChatGPT, it can itself ‘use a computer’ in much the same way that we do, offloading this type of traditional computational work.
Many times, Wolfram says something along the lines of “we don’t know why this works, it just does”. The whole field of AI using neural networks seems to be trial and error, as the models are too complex for us to fathom and reason about.
People do seem to be looking at the output from ChatGPT and then quickly drawing conclusions about where things are headed from a ‘general intelligence’ point of view. As Matt Ballantine puts it, this may be a kind of ‘Halo effect’, where we are projecting our hopes and fears onto the technology. However, just because it is good at one type of task — generating text — doesn’t necessarily mean that it is good at other types of tasks. From Wolfram’s essay:
But there’s something potentially confusing about all of this. In the past there were plenty of tasks—including writing essays—that we’ve assumed were somehow “fundamentally too hard” for computers. And now that we see them done by the likes of ChatGPT we tend to suddenly think that computers must have become vastly more powerful—in particular surpassing things they were already basically able to do […]
But this isn’t the right conclusion to draw. Computationally irreducible processes are still computationally irreducible, and are still fundamentally hard for computers—even if computers can readily compute their individual steps. And instead what we should conclude is that tasks—like writing essays—that we humans could do, but we didn’t think computers could do, are actually in some sense computationally easier than we thought.
So my last big takeaway is that — maybe — human language is much less complex than we thought it was.
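To make the ‘temperature’ point a little more concrete, here is a minimal sketch of temperature-scaled sampling over a handful of made-up next-word scores. Real models do this over tens of thousands of tokens; the vocabulary and numbers below are purely illustrative.

```python
import math
import random

def sample_with_temperature(logits, temperature=0.8):
    """Pick one token from a dict of token -> score (un-normalised logit).

    Lower temperature sharpens the distribution (more predictable text);
    higher temperature flattens it, letting lower-probability words through.
    """
    # Scale the scores by the temperature, then apply a softmax.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_score = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - max_score) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}

    # Draw a single token according to those probabilities.
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Hypothetical scores for the word following "The cat sat on the".
next_word_scores = {"mat": 3.2, "sofa": 2.1, "roof": 1.4, "moon": 0.2}
print([sample_with_temperature(next_word_scores, temperature=0.8) for _ in range(5)])
```

With the temperature close to zero the most likely word wins almost every time, which is where the “flatter”, “less interesting” output comes from; nudging it up lets the occasional surprise through.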
Every 18 months or so I find myself feeling that my personal information workflow is working against me. Sometimes I end up diving into an inevitably fruitless quest to find an application that could be ‘the answer to everything’.
Last year I thought that some of the friction might have been coming from where I am able to access each application that I use. In my personal life I have an iPhone, an iPad and a MacBook, but at work I use a Windows laptop. I always prefer web applications as they can, in theory, be accessed from anywhere. However, it’s difficult to find web apps that have all of the features that I want.
My whiteboard from December 2021 trying to work all of this out.
Mapping out each of the applications was useful; it made me realise that I could move my old documents and notes archive in Evernote over to OneNote, saving money on a subscription. After wrestling with the migration over a few days, that was that. Things got busy and I didn’t look at my personal workflow again. Until now.
After getting ‘the itch’ again, this time I’ve tried to map out exactly what my current personal workflow looks like, regardless of where the applications are accessible. Here is the resulting mess:
My workflow, such as it is, today. (Click to enlarge.)
I haven’t decided where to go from here. What I do know is that I need to ponder this for a bit before making any changes. Experience tells me that the problems I have (or feel that I have) are less about the applications and more about the purposeful habits that I need to form.
Some disorganised thoughts:
There is still definitely an issue with where I can access each of the components from. Every time I need to switch devices, there is friction.
Apps that are super secure — i.e. those that encrypt data locally before it is sent to the application’s cloud storage — do exist, but at the moment they feel like using a cheese grater to shave your legs. Yes, I could use Standard Notes everywhere, but the friction of working with it is much higher than being forced onto my Apple devices to use Ulysses.
Some of the apps are replacements for each other in theory, but not in practice.
Readwise Reader can keep YouTube videos I want to watch later, but they then become slightly less accessible if I am sitting down to watch them in front of a TV.
Readwise Reader can also accept RSS feeds, but at the moment the implementation is nowhere near as good as Feedbin. I tried it by exporting my OPML file of feed subscriptions and importing it into Reader, but when it didn’t work for me I had to painstakingly back out my RSS subscriptions one by one.
I’m still searching for a good way to curate my reading backlog. I estimate that I have over 1,000 ebooks1, hundreds of physical books, hundreds of PDFs and nearly 9,000 articles saved to my ‘read later’ app. I’ve already done the maths to work out that even if I live to a ripe old age, there is not enough time left to get through all of the books that I’ve bought (a rough sketch of that maths follows this list). As Ben Thompson has been saying: in an age of abundance, the most precious and valuable thing becomes attention. I have lists of all my books in Dynalist, but still rely on serendipity when it’s time to pick up another one to read.
I need to work out the best way to distinguish between the things I have to do and the things I want to do. Not that these are absolutes; the number of things that I absolutely, positively have to do is probably minimal. I might save a YouTube video that would be super helpful for my job right now, and want to prioritise this above others that I have saved for broader learning or entertainment. What’s the easiest way to distinguish them and be purposeful about what I pick up next?
Similarly, where should a list of ‘check out concept x’ tasks go? These aren’t really ‘tasks’. When is the right time to pick one of these up?
I’m finding that using Kanban for projects is much easier than long lists of tasks in a to-do app. At work we use Planview AgilePlace (formerly known as LeanKit) which, from what I can tell, is the most incredible Kanban tool out there; if you can imagine the swimlanes, you can probably draw them in AgilePlace. But it’s difficult to justify the cost of $20/month for a personal licence. I’m using Trello for now.
Needing to look at different apps to decide what to do next is a problem. But how much worse is it than using one app and changing focus between project views and task views?
Are date-based reminders (put the bins out, clean the dishwasher, replace the cycle helmet, stain the garden fence) a different class of tasks altogether? Are they the only things that should be put in a classic ‘to do’ tool?
One of the main sticking points of my current workflow is items hanging around for too long in my capture tools (Drafts and Dynalist) when they should be moved off somewhere else. Taking the time to regularly review any of these lists is also a key practice. Sometimes I haven’t decided what I want to do with a thing so it doesn’t move on anywhere, which is also a problem. I need to get more decisive the first time I capture a thing.
Document storage is a lost art. After drawing the diagram above, I consolidated all of my cloud documents onto one platform — OneDrive — but I now need to go through and file what’s there.
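Here is the rough backlog maths promised above. The reading rate is my own assumption rather than a measured figure, but almost any plausible number leads to the same conclusion.

```python
# Back-of-the-envelope estimate of how long the book backlog would take to clear.
ebooks = 1_000          # rough count from my lists
physical_books = 300    # "hundreds" -- an assumed figure
books_per_year = 25     # assumed: roughly one book a fortnight

total_books = ebooks + physical_books
years_needed = total_books / books_per_year
print(f"{total_books} books at {books_per_year} a year is about {years_needed:.0f} years")
# -> 1300 books at 25 a year is about 52 years: more reading time than I realistically have left.
```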
I know that there are no right answers. However, now that I can see it all, hopefully I can start to work out some purposeful, meaningful changes to how I manage all of this stuff. I’m going to make sure that I measure twice, cut once.
The consequence of slowly building up a library as Kindle books were discounted. Aside from checking the Kindle Daily Deal page, I’ve largely stopped now. Looking back, I don’t think this was a great strategy. It seems much better to be mindful about making a few well-intentioned purchases, deliberately paying full price for books from authors I like. ↩
Interesting to read Nick Drage’s riposte to The A.I. Dilemma, which I watched a few weeks ago. I agree with his points about the presentation’s lack of citations and extreme interpretations, which, when scrutinised, do the subject a disservice.
The presentation is worth watching just to see what they get away with. And because the benefits and threats of AI are worth considering and adapting to, and especially because the presenters are so right in encouraging us to think about the systemic changes taking place and who is making those changes, but I’m really not sure this presentation helps anyone in that endeavor.
This isn’t to say that the topics raised are not important ones. I’m currently a third of the way through listening to a very long podcast interview between Lex Fridman and Eliezer Yudkowsky on “Dangers of AI and the End of Human Civilization”. Both of them know infinitely more about the topic than I do. It’s very philosophical, questioning whether we’d know if something had become ‘sentient’ in a world where the progress of AIs is gradual in a ‘boiling frogs’ sense. The way they talk about GPT-4 and the emergent properties of transformers in particular makes it sound like even the researchers aren’t fully sure of how these systems work. Which is interesting to me, a complete layperson in this space.
This is an excellent blog post on working with ChatGPT to generate insightful book summaries. It’s long, but it covers a lot of ground in terms of what the technology does well and what it struggles with right now. Jumping to the conclusion, it seems that you get much better results if you feed the tool with your own notes first; it isn’t immediately obvious that the model doesn’t have access to (or hasn’t been trained on) the contents of a particular book.
When I finish a book that I’ve enjoyed, I like to write a blog post about it. It’s this process of writing which properly embeds the book into my memory. It also gives me something that I can refer back to, which I often do. As I read, I make copious highlights — and occasionally, notes — which all go into Readwise. If the book has captured my imagination, I start writing by browsing through these highlights. Any that seem particularly important, or make or support a point that I want to make somewhere in the write-up, get copied into a draft blog post. From there I try to work out what I’m really thinking. I love this process. It takes a lot of effort, but the end result can be super satisfying.
The summary that I’ve shared most often is A Seat at The Table by Mark Schwartz, which seems to pop up in conversations at work all the time. Going back to my own blog post is a great way to refresh my memory on the key points and to continue whatever conversation I happen to be in.
My favourite write-up is Hitman by Bret Hart. I picked the book up this time last year as a holiday read. I had no idea it would have such a big impact on me, bringing back lots of childhood memories and getting me thinking about the strange ways in which the rise of the Internet has changed our world. Getting my thoughts in order after I put the book down was incredibly satisfying.
Using ChatGPT or another Large Language Model to generate a book summary for me defeats the point. The process of crafting a narrative, in my head and then on a digital page, is arguably more valuable than the output. Getting a tool to do this for me could be a shortcut to a write-up, but at the expense of me learning and growing from what I’ve read.
All my feeds seem to be full of reflections on the inevitability of the changes that will soon be brought about by artificial intelligence. After spending time thinking about this at length last week, it may be my cognitive biases kicking in, but I’m pretty sure it’s not just me noticing these posts more.
…‘Slow AI’ as corporations are context blind, single purpose algorithms. That single purpose being shareholder value. Jeremy Lent (in 2017) made the same point when he dubbed corporations ‘socio-paths with global reach’ and said that the fear of runaway AI was focusing on the wrong thing because “humans have already created a force that is well on its way to devouring both humanity and the earth in just the way they fear. It’s called the Corporation”. Basically our AI overlords are already here: they likely employ you. Of course existing Slow AI is best positioned to adopt its faster young, digital algorithms. It as such can be seen as the first step of the feared iterative path of run-away AI.
Daniel Miessler conceptualises Universal Business Components, a way of looking at and breaking down the knowledge work performed by white-collar staff today:
Companies like Bain, KPMG, and McKinsey will thrive in this world. They’ll send armies of smiling 22-year-olds to come in and talk about “optimizing the work that humans do”, and “making sure they’re working on the fulfilling part of their jobs”.
…
So, assuming you’re realizing how devastating this is going to be to jobs, which if you’re reading this you probably are—what can we do?
The answer is mostly nothing.
This is coming. Like, immediately. This, combined with SPQA architectures, is going to be the most powerful tool business leaders have ever had.
When I first heard about the open letter published in late March calling on AI labs to pause their research for six months, I immediately assumed it was a ploy by those who wanted to catch up. In some cases, it might have been — but I now feel much more inclined to take the letter and its signatories at face value.
The former Google CEO Eric Schmidt summed up the case when he told the Atlantic that AI’s risks were worth taking, because “If you think about the biggest problems in the world, they are all really hard – climate change, human organizations, and so forth. And so, I always want people to be smarter.”
According to this logic, the failure to “solve” big problems like climate change is due to a deficit of smarts. Never mind that smart people, heavy with PhDs and Nobel prizes, have been telling our governments for decades what needs to happen to get out of this mess: slash our emissions, leave carbon in the ground, tackle the overconsumption of the rich and the underconsumption of the poor because no energy source is free of ecological costs.
The reason this very smart counsel has been ignored is not due to a reading comprehension problem, or because we somehow need machines to do our thinking for us. It’s because doing what the climate crisis demands of us would strand trillions of dollars of fossil fuel assets, while challenging the consumption-based growth model at the heart of our interconnected economies. The climate crisis is not, in fact, a mystery or a riddle we haven’t yet solved due to insufficiently robust data sets. We know what it would take, but it’s not a quick fix – it’s a paradigm shift. Waiting for machines to spit out a more palatable and/or profitable answer is not a cure for this crisis, it’s one more symptom of it.
The whole article is an excellent read. I’d love us to move to a Star Trek-like future where everyone has what they need and the planet isn’t burning. But — being generous to the motives of AI developers and those with a financial interest in their work — there’s an avalanche of wishful thinking that the market will somehow get us there from here.
I recently watched this video from the Center for Humane Technology. At one point during the presentation, the presenters stop and ask everyone in the audience to join them in taking a deep breath. There is no irony. Nobody laughs. I don’t mind admitting that at that point I wanted to cry.
Back in the year 2000, I can remember exactly where I was when I read Bill Joy’s article in Wired magazine, Why the Future Doesn’t Need Us. I was in my first year of work after I graduated from university, commuting to the office on a Northern Line tube train, totally absorbed in the text. The impact of the article was massive — the issue of Wired that came out two months later contained multiple pages dedicated to emails, letters and faxes that they had received in response:
James G. Callaway, CEO, Capital Unity Network: Just read Joy’s warning in Wired – went up and kissed my kids while they were sleeping.
The essay even has its own Wikipedia page. The article has been with me ever since, and I keep coming back to it. The AI Dilemma video made me go back and read it once again.
OpenAI released ChatGPT at the end of last year. I have never known a technology to move so quickly to being the focus of everyone’s attention. It pops up in meetings, on podcasts, in town hall addresses, in webinars, in email newsletters, in the corridor. It’s everywhere. ‘ChatGPT’ has already become an anepronym for large language models (LLMs) as a whole — artificial intelligence models designed to understand and generate natural language text. As shown in the video, it is the fastest growing consumer application in history. A few months later, Microsoft announced CoPilot, an integration of the OpenAI technology into the Microsoft 365 ecosystem. At work, we watched the preview video with our eyes turning into saucers and our jaws on the floor.
Every day I seem to read about new AI-powered tools. You can use plain language to develop Excel spreadsheet formulas. You can accelerate your writing and editing. The race is on to work out how we can use the technology. The feeling is that we have to do it — and have to try to do it before everybody else does — so that we can gain some competitive advantage. It is so compelling. I’m already out of breath. But something doesn’t feel right.
My dad left school at 15. But his lack of further education was made up for by his fascination with the world. His interests were infectious. As a child I used to love it when we sat down in front of the TV together, hearing what he had to say as we watched. Alongside David Attenborough documentaries on the natural world and our shared love of music through Top of The Pops, one of our favourite shows was Tomorrow’s World. It was fascinating. I have vivid memories of sitting there, finding out about compact discs and learning about how information could be sent down fibre optic cables. I was lucky to be born in the mid-1970s, at just the right time to benefit from the BBC Computer Literacy Project which sparked my interest in computers. When I left school in the mid-1990s, I couldn’t believe my luck that the Internet and World Wide Web had turned up as I was about to start my adult life. Getting online and connecting with other people blew my mind. In 1995 I turned 18 and felt I needed to take some time off before going to university. I landed on my feet with a temporary job at a telecommunications company, being paid to learn HTML and to develop one of the first intranet sites. Every day brought something new. I was in my element. Technology has always been exciting to me.
Watching The AI Dilemma gave me the complete opposite feeling to those evenings I spent watching Tomorrow’s World with my dad. As I took the deep breaths along with the presenters, I couldn’t help but think about my two teenage boys and what the world is going to look like for them. I wonder if I am becoming a luddite in my old age. I don’t know; maybe. But for the first time I do feel like an old man, with the world changing around me in ways I don’t understand, and an overwhelming desire to ask it to slow down a bit.
Perhaps it is always hard to see the bigger impact while you are in the vortex of a change. Failing to understand the consequences of our inventions while we are in the rapture of discovery and innovation seems to be a common fault of scientists and technologists; we have long been driven by the overarching desire to know that is the nature of science’s quest, not stopping to notice that the progress to newer and more powerful technologies can take on a life of its own. —[Bill Joy, Why The Future Doesn’t Need Us]
I’ve had conversations about the dangers of these new tools with colleagues and friends who work in technology. My initial assessment of the threat posed to an organisation was that this has the same risks as any other method of confidential data accidentally leaking out onto the Internet. Company staff shouldn’t be copying and pasting swathes of internal text or source code into a random web tool, e.g. asking the system for improvements to what they have written, as they would effectively be giving the information away to the tool’s service provider, and potentially anyone else who uses that tool in the future. This alone is a difficult problem to solve. For example, most people do not understand that email isn’t a guaranteed safe and secure mechanism for sending sensitive data. Even if they do think about this, their need to get a thing done can outweigh any security concerns. Those of us with a ‘geek mindset’ who believe we are good at critiquing new technologies, treading carefully and pointing out the flaws are going to be completely outnumbered by those who rush in and start embracing the new tools without a care in the world.
The AI Dilemma has made me realise that I’ve not been thinking hard enough. The downside risks are much, much greater. Even if we do not think that there will soon be a super intelligent, self-learning, self-replicating machine coming after us, we are already in an era where we can no longer trust anything we see or hear. Any security that relies on voice matching should now be considered to be broken. Photographs and videos can’t be trusted. People have tools that can give them any answer, good or bad, for what they want to achieve, with no simple or easy way for a responsible company to filter the responses. We are giving children the ability to get advice from these anthropomorphised systems, without checking how the systems are guiding them. The implications for society are profound.
Joy’s article was concerned with three emerging threats — robotics, genetic engineering and nanotech. Re-reading the article in 2023, I think that ‘robotics’ is shorthand for ‘robotics and AI’.
The 21st-century technologies—genetics, nanotechnology, and robotics (GNR)—are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them. —[Bill Joy, Why The Future Doesn’t Need Us]
The video gives us guidance in the form of “3 Rules of Technology”:
When you invent a new technology, you uncover a new class of responsibilities [— think about the need to have laws on ‘the right to be forgotten’ now that all of our histories can be surfaced via search engines; the need for this law was much less pronounced before we were all online]
If the tech confers power, it starts a race [— look at how Microsoft, Google et al have been getting their AI chatbot products out into the world following the release of ChatGPT, without worrying too much about whether they are ready or not]
If you do not coordinate, the race ends in tragedy.
It feels like the desire to be the first to harness the power and wealth from utilising these new tools is completely dominating any calls for caution.
Nearly 20 years ago, in the documentary The Day After Trinity, Freeman Dyson summarized the scientific attitudes that brought us to the nuclear precipice:
“I have felt it myself. The glitter of nuclear weapons. It is irresistible if you come to them as a scientist. To feel it’s there in your hands, to release this energy that fuels the stars, to let it do your bidding. To perform these miracles, to lift a million tons of rock into the sky. It is something that gives people an illusion of illimitable power, and it is, in some ways, responsible for all our troubles—this, what you might call technical arrogance, that overcomes people when they see what they can do with their minds.” —[Bill Joy, Why The Future Doesn’t Need Us]
Over the years, what has stuck in my mind the most from Joy’s article is how the desire to experiment and find out can override all caution (emphasis mine):
We know that in preparing this first atomic test the physicists proceeded despite a large number of possible dangers. They were initially worried, based on a calculation by Edward Teller, that an atomic explosion might set fire to the atmosphere. A revised calculation reduced the danger of destroying the world to a three-in-a-million chance. (Teller says he was later able to dismiss the prospect of atmospheric ignition entirely.) Oppenheimer, though, was sufficiently concerned about the result of Trinity that he arranged for a possible evacuation of the southwest part of the state of New Mexico. —[Bill Joy, Why The Future Doesn’t Need Us]
There is some hope. We managed to limit the proliferation of nuclear weapons to a handful of countries. But developing a nuclear weapon is a logistically difficult process. Taking powerful software and putting it out in the world — not so much.
The new Pandora’s boxes of genetics, nanotechnology, and robotics are almost open, yet we seem hardly to have noticed. Ideas can’t be put back in a box; unlike uranium or plutonium, they don’t need to be mined and refined, and they can be freely copied. Once they are out, they are out. —[Bill Joy, Why The Future Doesn’t Need Us]
The future seems increasingly obscured to me, with so much uncertainty. As the progress of these technologies accelerates, I feel less and less sure of what is just around the corner.
I’ve been pondering: does the fact that Twitter is still functioning set expectations for business executives, who will think it’s fine to slash a technology budget and still expect core services to remain? Will they be asking “what were all these IT staff doing all day”?
Here’s an organizational hack I’ve used a few times which I think more people should try: run your own personal engineering blog inside your organization
You can use it as a place to write about projects you are working on, share TILs about how things work internally, and occasionally informally advocate for larger changes you’d like to make
Crucially: don’t ask for permission to do this! Find some existing system you can cram it into and just start writing
Systems I’ve used for this include:
a Slack channel, where you post long messages, maybe using the Slack “posts” feature
Confluence has a blog feature which isn’t great but it’s definitely Good Enough
A GitHub repo within your organization works fine too, you can write content there in Markdown files
…
One thing to consider with this: if you want your content to live on after you leave the organization (I certainly do) it’s a good idea to pick a platform for it that’s not likely to vanish in a puff of smoke when the IT team shuts down your organizational accounts
That’s one of the things I like about Confluence, Slack and private GitHub repos for this
The most liberating thing about having a personal internal blog is that it gives you somewhere to drop informal system documentation, without making a commitment to keep it updated in the future
Unlike out-of-date official documentation there’s no harm caused by a clearly dated blog post from two years ago that describes how the system worked at that point in time
I thoroughly endorse this. I’ve been setting up blogs and internal communication channels at all of the organisations I have worked at over the past few years. We’ve recently started an internal blog for our team using a ‘community’ on Viva Engage (formerly Yammer) as it is the only ready-made platform that has reach across the whole company. At the moment we are still talking into the void, but these things take time.
Microsoft 365 used to offer a blogging facility on your ‘Delve profile’, but this was squirrelled away on the web and was tied to your account; it wouldn’t be widely visible and would disappear when you left the company. That facility now seems to have gone away. We tried using SharePoint, but it felt a bit like using a cheese grater to shave your legs — it would do the job, but not without a lot of pain.
There is so much value in working out loud, but I’ve never had much success in persuading other people to start posting their thoughts in blog form. The closest thing we have are internal Teams posts, which team members do write and which do look like blogs — they may have a title, there’s some content and then there is a thread of comments. Perhaps these are easier to write because the audience is limited to a few known colleagues. We’ll keep experimenting.
My five-year-old MacBook Pro has started to play up again. I had Apple replace the battery late last year. Now when I turn it on it works for a couple of minutes before the screen goes black and the touchpad stops giving feedback, despite the battery being charged. Plugging it in brings it back after a minute or so. This is the problem that made me schedule a battery replacement in the first place.
For five years I’ve had a MacBook Pro at home and a couple of different well-specced Lenovo ThinkPads for work (X280, T14s). I have to say that I much prefer working on the ThinkPads. Windows in its current guise is excellent and rarely causes me any issues. There’s a lot to love about Apple products and the integration between devices, but I have never fallen in love with my Mac.
I reached the milestone of 100,000 scrobbles on Last.FM today. Every time a song plays on my hi-fi at home, or on my Spotify account when I am out and about, it gets logged on the service. I love that I have all of this data about my listening habits. It’s fascinating to see all of those song plays displayed graphically and look back on what I’ve been listening to.
From Scatter.FM. Those plays in the early hours are intriguing!
My top artists and top albums from the Last.FM site
Last.FM used to be a big deal back in the day but has faded into semi-obscurity. As my listening habits have moved back towards physical and downloaded media I’ve had to compensate by using different tools to get things logged:
I buy music from Bandcamp and download the lossless files which I like to listen to on my iPhone. Eavescrob does a great job of logging things played on my iPhone’s native music app (although you have to remember to open it after a listening session).
Discographic integrates with my physical music collection that I have catalogued in Discogs and lets me log an album play with a swipe.
I recently discovered Finale which has a myriad of useful features, such as listening to what’s playing around you now, Shazam-style, and logging it for you.
It’ll be interesting to see how quickly I log the next 100,000. Will it take another 12 years?
The latest episode of the excellent WB-40 podcast is filled with a series of interesting interviews from the recent OpenUK Open Source Software thought leadership event. The conversations are wide-ranging and well worth a listen.
One of the discussions noted that open source software development was resilient in the face of the COVID-19 pandemic, given that contributors worked remotely and asynchronously in the first place. This got me thinking about Automattic, the company behind WordPress. They are remote-first, with staff spread all around the world. Last year, Matt Mullenweg, Automattic’s founder, appeared on the Postlight Podcast, where his passion for all things open source was infectious:
…WordPress is actually not the most important thing in the world to me, open source is. […] essentially a hack to get competitors to work together and sort of create a shared commons of knowledge and functionality in the case of software, that something getting bigger, it becomes better, where with most proprietary solutions, when something gets bigger, it becomes worse or becomes less lined with its users. Because the owners of WordPress are its users. And […] the sort of survival rate of proprietary software like they’re all evolutionary dead ends, the very long term, that might be 20, 30, 40 years. But it’s all going to move to open source because that’s where all the incentives are. I think even a company like Microsoft, being now one of the largest open contributors, source contributors in the world, is astounding, and something that I think most people wouldn’t have predicted 10 or 20 years ago, but I believe it’s actually inevitable.
Another interview covered the concept of a ‘software bill of materials’, where applications come with a breakdown of the components that they use. Driven by the US Government Cybersecurity and Infrastructure Security Agency (CISA), the goal is for organisations that use specific software to quickly identify where they may be exposed to security vulnerabilities in the underlying components. For open source projects that have not published this information, there are some automated tools such as It-Depends that go some way to discovering these dependencies.
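As a toy illustration of the idea (and emphatically not how It-Depends or formal SBOM standards such as SPDX or CycloneDX actually work), here is a sketch that reads a Python project’s pinned requirements file and emits a minimal ‘bill of materials’. The field names are my own invention.

```python
import json
from pathlib import Path

def naive_sbom(requirements_file="requirements.txt"):
    """Build a very rough software bill of materials from pinned requirements.

    Real SBOM formats carry far more detail (licences, hashes, transitive
    dependencies); this only lists the direct components named in one file.
    """
    components = []
    for line in Path(requirements_file).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if "==" in line:
            name, version = line.split("==", 1)
            components.append({"name": name, "version": version})
        else:
            components.append({"name": line, "version": "unpinned"})
    return {"component_count": len(components), "components": components}

if __name__ == "__main__":
    print(json.dumps(naive_sbom(), indent=2))
```

Even a list this crude hints at the value: when a vulnerability is announced in a component, an organisation can search these inventories rather than auditing every application by hand.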
There is often an argument that open source software is safer than closed source, proprietary software. The idea is that open source software will have more eyes on it, and therefore people will have the ability to discover, report and fix critical security defects. I wonder whether this only holds true once a big enough community has built up around a product, with less popular products being more at risk of undiscovered vulnerabilities or deliberately rogue code.
Ben Higgins and Ted Driggs from ExtraHop appeared on an episode of the Risky Business podcast last year to take the ‘software bill of materials’ idea one step further. They advocate for a ‘bill of behaviours’, where software is supplied with details of what its users can expect, e.g. external and internal network destinations, and a list of ports and how they are used. These behaviours would be published in a format that common security products can understand. I love this idea and hope it gains traction. Driggs gave an update on the podcast in February about how the initiative is going.
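There is no published schema for the ‘bill of behaviours’ idea that I am aware of, so the format below is entirely hypothetical: just a sketch of the kind of declaration a vendor might ship with their software, which a monitoring tool could compare against the traffic it actually observes.

```python
# Hypothetical 'bill of behaviours' for a piece of software: the network
# activity its operators should expect to see. All field names are invented.
EXPECTED_BEHAVIOURS = {
    "software": "example-agent",
    "version": "2.3.1",
    "outbound_destinations": [
        {"host": "updates.example.com", "port": 443, "purpose": "update checks"},
        {"host": "telemetry.example.com", "port": 443, "purpose": "metrics"},
    ],
    "listening_ports": [{"port": 8443, "purpose": "local admin console"}],
}

def is_expected(host: str, port: int, behaviours: dict = EXPECTED_BEHAVIOURS) -> bool:
    """Return True if an observed outbound connection matches the declaration."""
    return any(
        dest["host"] == host and dest["port"] == port
        for dest in behaviours["outbound_destinations"]
    )

# A security product could then flag anything the vendor never declared:
print(is_expected("updates.example.com", 443))  # True: declared, nothing to see
print(is_expected("198.51.100.7", 4444))        # False: worth investigating
```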
At the end of July I requalified as a First Aider. As soon as I received my certificate I uploaded it to the GoodSAM app to reactivate my account. GoodSAM is a service where anyone who is suitably qualified can be alerted as a first responder if an ambulance or paramedic can’t get to a location quickly enough.
The concept is brilliant. The app is terrible.
Last night’s alert did its job. It was loud and made me jump out of my skin. Once I’d worked out what was going on, the app showed me where the incident was and asked whether I wanted to accept or reject it. I accepted, and was then shown a chat window with an address, age and sex of the person who was in trouble. As I put my shoes on and grabbed my CPR face mask I saw another person message on the chat to say that they were on their way. I guessed that they could see my messages so I said I was also heading there, but every time I wrote on the chat I got an automatic response to say that the chat wasn’t monitored.
I quickly found the house. Fortunately, the casualty was stood in their doorway, talking on the phone and already feeling better. About five minutes later another GoodSAM first responder turned up — the same person from the chat who had said that he was on his way. The casualty felt very upset about having called out a few volunteers to her house; the main job was therefore to reassure her that it was absolutely fine and that we were glad to help. What we couldn’t tell her was whether anyone else would be coming to see her. We don’t work for the ambulance service, and nothing on the app gave us any clue as to how to find out if a medical professional would be on their way. She called her sister to reassure her that she was feeling better, and then passed the phone to me; I felt like a wally as I was unable to tell her sister whether anyone would or wouldn’t be coming.
What the app did show me was this:
GoodSAM buttons
This user interface is terrible. I pressed the button marked ‘On scene’. The text then changed to ‘No longer on scene’, but the button didn’t change appearance. I couldn’t make it out — did the words indicate a status, with the app showing that I was now no longer on the scene, or did I need to press the button to tell people that I am no longer on the scene? There was no other indicator anywhere in the app to say that a message had been sent to anyone. (And what on earth does ‘Show Metronom’ mean? I didn’t press it to find out.)
After fifteen minutes or so, two paramedics arrived. The two of us explained that we were GoodSAM first responders, which was the last interaction that we had with anybody. Baffled, we walked away from the scene complaining about how rubbish the GoodSAM app was. The other responder said that he thought up to two people get alerted for any given incident, but this seemed to be guesswork based on experience rather than knowing this for sure.
I checked the app again a while later and found something under the ‘Reports’ tab. It also showed a whole bunch of unread notifications from my previous stint as a first responder about five years ago:
Reports? Alerts? Feedback?
Swiping left on the latest alert gave me a form to complete which included a plethora of fields whose labels didn’t make any sense to me. I had to answer questions such as whether the casualty lived, died, or is still being treated. How would I know, if I had left the scene an hour ago?
Outcome. I’ve no idea which of the bottom three options was the correct one to pick.
What happens with this report? Who gets it and reviews it? I have no idea, and the app offers no clues.
I now have a chat thread called ‘Organisational messages’, which is another example of how confusing the application is. The messages that I and the other responder sent are no longer visible, but some messages from 2018 are. It’s so random.
What will now disappear from my device?
I love that this app exists and that it allows me to put my first aid skills to good use. I am sure that it has saved lives by getting skilled first aiders to casualties quickly. I just don’t understand how the interface can be so dreadful, and how it hasn’t improved in all the years that it has been available. NHS Trusts are paying to use this service, but I am not sure they are aware of how awful the experience is.
I loved this insight about how companies are valued when they are relatively new and growing quickly. Their maths may be over-optimistic as they underestimate the cost of adding marginal customers after their initial rapid rise:
Ben Thompson: …a mistake a lot of companies make is they over-index on their initial customer. The problem is when you’re watching a company, that customer wants your product really bad, they’ll jump through a lot of hoops, they’ll pay a high price to get it. Companies build lifetime value models and derive their customer acquisition costs numbers from those initial customers and then they say, “These are our unit costs”, and those unit costs don’t actually apply when you get to the marginal customer because you end up spending way more to acquire them than you thought you would have.
Michael Nathanson: That’s my question to Disney, which is, and I think you wrote this — your first 100 million subs, look at the efficiency of how you built Disney+, it was a hot knife through butter. But now to get the next 100 million subs, what are you going to do? You’re going to add sports, do entertainment, more localized content. My question to Disney is, is it better just to play the super-fan strategy where you know your fans are going to be paying a high ARPU [average revenue per user] and always be there, or do you want to, like Netflix, go broader? I don’t have an answer, but I keep asking management, “Have you done the math?”
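To make the unit-economics point concrete, here is a toy calculation with invented numbers: the same lifetime-value model looks very different once the cost of acquiring a marginal customer rises.

```python
def lifetime_value(monthly_revenue, gross_margin, avg_months_retained):
    """A very simplified lifetime value: margin per month times months retained."""
    return monthly_revenue * gross_margin * avg_months_retained

# Illustrative figures only; these are not real streaming-service numbers.
ltv = lifetime_value(monthly_revenue=10, gross_margin=0.6, avg_months_retained=30)  # 180

early_adopter_cac = 50   # eager super-fans are cheap to acquire
marginal_cac = 200       # later customers cost far more to reach

print(f"LTV = {ltv:.0f}")
print(f"Early adopters:    LTV/CAC = {ltv / early_adopter_cac:.1f}")  # 3.6 -- looks great
print(f"Marginal customer: LTV/CAC = {ltv / marginal_cac:.1f}")       # 0.9 -- loses money
```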
I had such a great time at the Centre for Computing History, indulging myself in memories of the years when my friends and I regularly gathered around a TV to play GoldenEye on the Nintendo 64. The panel session with three members of the development team is now up on YouTube, along with the post-talk Q&A. Both are well worth watching.
They developed the game with very little help from other teams at Rare. It was fascinating to hear about the challenges of having so much going on in the game without the frame rate suffering to an unacceptable degree. Who knew that the placement of doorways on levels would have such a huge bearing on processing?
Some of the game features — such as AI characters that go off to do things other than running straight at you, and the inclusion of a sniper rifle — are now standard elements that you expect in any first-person shooter. Multiplayer was added to the game only four months before the deadline, which is remarkable given that the game probably wouldn’t be as celebrated today without it.
I managed to get tickets to the event by being a patron of the Centre on Patreon and being alerted before they went on general release. If this kind of thing interests you, it is well worth setting up a small monthly donation to help them continue their work.
In the main exhibition space there is a bowl of microprocessors in front of some slices of silicon:
One of my friends is a complete microprocessor geek. I guessed that sending him the photo above would be like catnip. It turns out I was right. Within minutes, I got this response:
And here was the summary that followed:
Core ix (Arrandale, mobile variant of Westmere)
LGA775 desktop package, so anything from late Pentium 4, Pentium D, Core 2 Duo or Quad, or their Celeron variants.
Socket 939 AMD, either an Athlon 64 Sledgehammer, Clawhammer or Winchester. The giveaway is the gap in the middle away from the 4x key pins in the circumference.
One of my all time favourites, an AMD K6-2+ — “Sharptooth” — basically took the original Pentium (which topped out at 200MHz) socket to 600MHz, and even has 128KB of L2 cache on board.
An AMD opteron, probably one of the early dual or quad models. Can’t tell without the pinout or model.
6.1 almost certainly a Pentium-3 Mobile
6.2 hard to say. Definitely 286 era, but probably a Motorola like 10 [below]
Impossible to say from that angle.
Either a Pentium-MMX (166-233) or a Celeron Mendocino. Both used the same black OPGA flip chip design.
A 486, either from: AMD, Intel, Cyrix, SGS-Thomson or Texas Instruments
I remember purchasing a discounted Morley Teletext Adapter for my BBC Micro with the idea that I would download software that was embedded in the UK analogue TV signals. I struggled as I had to use an indoor aerial which gave relatively terrible results. (I don’t remember asking my parents to get a second aerial cable installed, but I am pretty sure they would have said no.) By the time I had bought the adapter I had missed the boat as telesoftware broadcasts had stopped. Still, it was fun to go digging around in the depths of the pages, using my computer keyboard to input the hexadecimal page numbers that were impossible with a TV remote control. Until I watched this video, I hadn’t realised that I might have been able to stumble across someone accessing services such as ‘online’ banking. Amazing.
A couple of friends shared this video with me; a feature-length tour-de-force that goes from the 2008-era financial crisis through the invention of blockchains and cryptocurrencies, Non-Fungible Tokens (NFTs) and Decentralised Autonomous Organisations (DAOs). This must have taken an age to put together.
I heard this blog post mentioned on a couple of podcasts a few weeks back and have only just got around to reading it. It’s fascinating. The author, Moxie Marlinspike, dissects the state of the current ‘web3’ world of decentralised blockchains and non-fungible tokens (NFTs). Reading his post is probably the most educational use I’ve made of ten minutes this year, and I can’t recommend it highly enough. I’m jotting down my thoughts here in order to record and check my understanding.
When we all started out on the Internet, companies ran their own servers — email servers, web servers for personal or corporate websites etc. We pulled data from other people’s servers, and they pulled data from ours. Everything was distributed. This was actually the point, by design — ARPANET, the forerunner to the Internet, was architected in such a way that if a chunk of the network was taken down, the rest of the network would stay up. Your requests for information would route around whatever problem had occurred. This was effectively ‘web1’.
Over time, the Internet has become more centralised around a number of ‘big tech’ companies for a variety of reasons:
Servers can now be ‘spun up’ at the point of need and financed as operational expense using Amazon Web Services, Azure, Google Cloud etc. instead of costly capital investment. Lots of companies use a small number of these platforms.
Services like Microsoft 365 or Google Workspace allow companies to provide email, file sharing etc. and pay for the vendors to take care of the servers that run them instead of employing people to do it in-house.
People have coalesced around platforms where other people are instead of communicating point-to-point. Facebook, Twitter, Instagram, WhatsApp and Snapchat are good examples.
Although the Internet as a whole is resilient to failure, the architecture of ‘web1’ meant that your web or email servers could catastrophically go offline. Individual servers are vulnerable to problems such as ‘distributed denial of service’ (DDoS) attacks, where they are deliberately hit with more requests than they can handle and become unable to serve legitimate visitors. This could also happen if there is a ‘real’ spike in demand, e.g. if the content you are hosting or service you are running suddenly becomes extremely popular. People have mitigated this by using services from ‘content delivery networks’ (CDNs) such as Cloudflare and Fastly which, amongst other services, ‘cache’ copies of the content all over the world on their own network of servers, close to where the requestors are.
This is ‘web2’. Companies no longer need to buy, manage, administer and patch their own servers as they can rent them, or the applications that run on them, from specialist firms. But this creates a different set of issues. New entrants to these web2 service markets struggle to get a foothold, and the over-reliance on a small number of big players creates vulnerabilities for the system as a whole. For example, when Fastly had an outage in 2021, it took many of its customers offline. Companies like Fastly reduce your risk on a day-to-day basis but increase the overall risk of the system by being a point of concentration. They are a big single point of failure.
‘Blockchain’ has been a buzzword for a long time now. The idea of blockchains is attractive: they are decentralised, with no company or state owning them. A blockchain of ledger entries exists across a network of computers, with work being done by — and between — those computers to agree on the canonical version through consensus. In other words, those computers talk to each other to come to agreement on what the blockchain ledger looks like, and nobody has ownership or control. Everyone has also heard of cryptocurrencies such as Bitcoin. The Bitcoin blockchain is a distributed ledger of ownership of the currency. The big buzz at the moment is around NFTs, each of which is effectively a digital ledger entry on a blockchain that records someone is the ‘owner’ of something.
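As a minimal sketch of the ‘chain’ part of a blockchain (setting aside the consensus machinery entirely), each entry carries the hash of the previous one, so tampering with any earlier entry invalidates everything that follows it.

```python
import hashlib
import json

def make_block(index, data, previous_hash):
    """Create a ledger entry whose identity depends on the entry before it."""
    block = {"index": index, "data": data, "previous_hash": previous_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# Build a tiny three-entry chain.
chain = [make_block(0, "genesis", "0" * 64)]
chain.append(make_block(1, "alice pays bob 5", chain[-1]["hash"]))
chain.append(make_block(2, "bob pays carol 2", chain[-1]["hash"]))

def chain_is_valid(chain):
    """Recompute each hash and check every link back to its predecessor."""
    for prev, block in zip(chain, chain[1:]):
        body = {k: block[k] for k in ("index", "data", "previous_hash")}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != recomputed or block["previous_hash"] != prev["hash"]:
            return False
    return True

print(chain_is_valid(chain))              # True
chain[1]["data"] = "alice pays bob 500"   # try to rewrite history
print(chain_is_valid(chain))              # False: the tampering is detectable
```

The hard part that this sketch ignores is getting thousands of independent computers to agree on which chain is the canonical one, which is what the consensus mechanisms are for.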
A blockchain doesn’t live on your mobile device or in your web browser, it lives on servers. In order to write a new entry onto a blockchain, you need to start by sending a request to one of these servers. This is actually a limitation. From Marlinspike’s blog post:
“When people talk about blockchains, they talk about distributed trust, leaderless consensus, and all the mechanics of how that works, but often gloss over the reality that clients ultimately can’t participate in those mechanics. All the network diagrams are of servers, the trust model is between servers, everything is about servers. Blockchains are designed to be a network of peers, but not designed such that it’s really possible for your mobile device or your browser to be one of those peers.”
A number of platforms have sprung up that provide the ability to write to popular blockchains. People would rather use these platforms than create and run something themselves, for many of the reasons that the ‘web2’ platforms came to be. People do not want to run their own servers. This brings its own problems: in order to add a ledger entry to a blockchain, you now have an additional ‘hop’ to go through. The quality and architecture of the platforms used to access a blockchain really matters.
At the moment, calls and responses to these platforms are not particularly complex; there is little verification that what you get back in response to a request to retrieve data is actually what is really stored on the blockchain. These access platforms also get visibility of all of the calls that are made via their services. If you’re writing something to the blockchain, via a platform, they’ll know who you are and what you wrote because they helped you to do it.
“So much work, energy, and time has gone into creating a trustless distributed consensus mechanism, but virtually all clients that wish to access it do so by simply trusting the outputs from these two companies without any further verification. It also doesn’t seem like the best privacy situation.”
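To illustrate that trust relationship, here is roughly what a client-side call looks like in practice: one HTTPS request to a hosted node, whose answer is simply taken on faith. The endpoint URL and API key below are placeholders; the JSON-RPC method name is a standard Ethereum one.

```python
import json
import urllib.request

# Placeholder: in practice this would be a hosted node provider's endpoint.
ENDPOINT = "https://ethereum-node.example.com/v1/YOUR-API-KEY"

def rpc_call(method, params):
    """Send a standard Ethereum JSON-RPC request and trust whatever comes back."""
    payload = json.dumps(
        {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    ).encode()
    request = urllib.request.Request(
        ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["result"]

# The client has no practical way to verify this answer against the chain itself;
# it simply believes whatever the access platform serves up.
balance_wei = int(rpc_call("eth_getBalance", ["0x" + "0" * 40, "latest"]), 16)
print(f"Balance: {balance_wei / 10**18} ETH")
```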
The author illustrates the point with an example of an NFT that he created which looks different depending on where you take a look at it from. He can do this because the blockchain doesn’t actually contain the data that defines the NFT itself, just a link that points to where the NFT is. So, as he owns the location of the NFT image, he can serve up different content depending on who or what is asking to see it. At some point, OpenSea, one of the popular NFT marketplaces, decided to remove his NFT from their catalogue. It was still on the blockchain, but invisible to anyone using OpenSea. This is interesting as it shows how much control a ‘web2’ platform has over the ‘web3’ blockchain.
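Here is a rough sketch of the trick Marlinspike describes (not his actual code): because the token on the chain only stores a URL, whoever controls that URL can serve different content to different viewers, for example by inspecting which marketplace’s crawler is asking. The ‘MarketplaceBot’ user agent below is invented.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# A hypothetical server sitting behind the URL that an NFT's metadata points at.
class ShapeShiftingNFT(BaseHTTPRequestHandler):
    def do_GET(self):
        # Decide what to serve based on who appears to be asking.
        user_agent = self.headers.get("User-Agent", "")
        if "MarketplaceBot" in user_agent:   # e.g. a marketplace crawling the listing
            body = b"<svg xmlns='http://www.w3.org/2000/svg'><text y='20'>Pretty listing image</text></svg>"
        else:                                # everyone else, e.g. the buyer's wallet
            body = b"<svg xmlns='http://www.w3.org/2000/svg'><text y='20'>Something else entirely</text></svg>"
        self.send_response(200)
        self.send_header("Content-Type", "image/svg+xml")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The blockchain entry never changes; only this server's answers do.
    HTTPServer(("localhost", 8000), ShapeShiftingNFT).serve_forever()
```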
If you have to go through one of these ‘web2’ platforms to interact with a blockchain, therefore losing some of the distributed benefits, why do the platforms bother with the blockchain at all? Writing new entries to a blockchain such as Ethereum is very expensive. So why not have a marketplace for NFTs where ownership is simply written into a database owned by a company like OpenSea? The author’s conclusion is that it is because there is a blockchain gold rush, for now at least. Without the buzzword and everyone piling in, a platform like OpenSea would never take off.
Postlight recently published a podcast episode with Michael Sippey called “On web3, Again” which is well worth a listen. The whole episode is great, but there are some pointers from about 35m25s in on how to start to experiment with all of this yourself, if you have the disposable income to do it.