BadSAM

Last night my phone scared the bejesus out of me.

At the end of July I requalified as a First Aider. As soon as I received my certificate I uploaded it to the GoodSAM app to reactivate my account. GoodSAM is a service where anyone who is suitably qualified can be alerted as a first responder if an ambulance or paramedic can’t get to a location quickly enough.

The concept is brilliant. The app is terrible.

Last night’s alert did its job. It was loud and made me jump out of my skin. Once I’d worked out what was going on, the app showed me where the incident was and asked whether I wanted to accept or reject it. I accepted, and was then shown a chat window with an address, age and sex of the person who was in trouble. As I put my shoes on and grabbed my CPR face mask I saw another person message on the chat to say that they were on their way. I guessed that they could see my messages so I said I was also heading there, but every time I wrote on the chat I got an automatic response to say that the chat wasn’t monitored.

I quickly found the house. Fortunately, the casualty was stood in their doorway, talking on the phone and already feeling better. About five minutes later another GoodSAM first responder turned up — the same person from the chat who had said that he was on his way. The casualty felt very upset about having called out a few volunteers to her house; the main job was therefore to reassure her that it was absolutely fine and that we were glad to help. What we couldn’t tell her was whether anyone else would be coming to see her. We don’t work for the ambulance service, and nothing on the app gave us any clue as to how to find out if a medical professional would be on their way. She called her sister to reassure her that she was feeling better, and then passed the phone to me; I felt like a wally as I was unable to tell her sister whether anyone would or wouldn’t be coming.

What the app did show me was this:

GoodSAM buttons

This user interface is terrible. I pressed the button marked ‘On scene’. The text then changed to ‘No longer on scene’, but the button didn’t change appearance. I couldn’t work it out — did the words indicate a status, with the app showing that I was now no longer on the scene, or did I need to press the button to tell people that I was no longer on the scene? There was no other indicator anywhere in the app to say that a message had been sent to anyone. (And what on earth does ‘Show Metronom’ mean? I didn’t press it to find out.)

After fifteen minutes or so, two paramedics arrived. The two of us explained that we were GoodSAM first responders, which was the last interaction that we had with anybody. Baffled, we walked away from the scene complaining about how rubbish the GoodSAM app was. The other responder said that he thought up to two people get alerted for any given incident, but this seemed to be guesswork based on experience rather than something he knew for sure.

I checked the app again a while later and found something under the ‘Reports’ tab. It also showed a whole bunch of unread notifications from my previous stint as a first responder about five years ago:

Reports? Alerts? Feedback?

Swiping left on the latest alert gave me a form to complete which included a plethora of fields whose labels didn’t make any sense to me. I had to answer questions such as whether the casualty lived, died, or was still being treated. How would I know, if I had left the scene an hour ago?

Outcome. I’ve no idea which of the bottom three options was the correct one to pick.

What happens with this report? Who gets it and reviews it? I have no idea, and the app offers no clues.

I now have a chat thread called ‘Organisational messages’, which is another example of how confusing the application is. The messages that I and the other responder sent are no longer visible, but some messages from 2018 are. It’s so random.

What will now disappear from my device?

I love that this app exists and that it allows me to put my first aid skills to good use. I am sure that it has saved lives by getting skilled first aiders to casualties quickly. I just don’t understand how the interface can be so dreadful, and how it hasn’t improved in all the years that it has been available. NHS Trusts are paying to use this service, but I am not sure they are aware of how awful the experience is.

Superfans and marginal customer acquisition costs

Ben Thompson has been running some superb interviews for subscribers of his Stratechery newsletter. His recent interview with Michael Nathanson of the MoffettNathanson research group is no exception.

I loved this insight about how companies are valued when they are relatively new and growing quickly. Their maths may be over-optimistic as they underestimate the cost of adding marginal customers after their initial rapid rise:

Ben Thompson: …a mistake a lot of companies make is they over-index on their initial customer. The problem is when you’re watching a company, that customer wants your product really bad, they’ll jump through a lot of hoops, they’ll pay a high price to get it. Companies build lifetime value models and derive their customer acquisition costs numbers from those initial customers and then they say, “These are our unit costs”, and those unit costs don’t actually apply when you get to the marginal customer because you end up spending way more to acquire them than you thought you would have.

Michael Nathanson: That’s my question to Disney, which is, and I think you wrote this — your first 100 million subs, look at the efficiency of how you built Disney+, it was a hot knife through butter. But now to get the next 100 million subs, what are you going to do? You’re going to add sports, do entertainment, more localized content. My question to Disney is, is it better just to play the super-fan strategy where you know your fans are going to be paying a high ARPU [average revenue per user] and always be there, or do you want to, like Netflix, go broader? I don’t have an answer, but I keep asking management, “Have you done the math?”

📺 Interview with Sophie Wilson

I enjoyed this interview with Sophie Wilson, co-inventor of the ARM processor architecture. I hadn’t realised just how important the BBC Micro had been in creating the design. The story of Wilson and her colleagues being motivated by seeing how small the team was when they visited Western Design Center reminded me of the tale of Steve Jobs visiting Xerox PARC and seeing a graphical user interface and mouse for the first time, even if the latter may not be completely true.

25 Years of GoldenEye Dev

I had such a great time at the Centre for Computing History, indulging myself in memories of the years when my friends and I regularly gathered around a TV to play GoldenEye on the Nintendo 64. The panel session with three members of the development team is now up on YouTube, along with the post-talk Q&A. Both are well worth watching.

They developed the game with very little help from other teams at Rare. It was fascinating to hear about the challenges of having so much going on in the game without the frame rate suffering to an unacceptable degree. Who knew that the placement of doorways on levels would have such a huge bearing on processing?

Some of the game features — such as AI characters that go off to do things other than running straight at you, and the inclusion of a sniper rifle — are now standard elements that you expect in any first-person shooter. Multiplayer was added to the game only four months before the deadline, which is remarkable as the game probably wouldn’t be as celebrated today without it.

I managed to get tickets to the event through supporting the Centre on Patreon and being alerted before they went on general release. If this kind of thing interests you, it is well worth setting up a small monthly donation to help them to continue their work.

Microchip catnip

Saturday’s visit to the Centre for Computing History for a talk with some of the GoldenEye development team gave me the opportunity to look around their exhibits. It’s a really wonderful place.

In the main exhibition space there is a bowl of microprocessors in front of some slices of silicon:

One of my friends is a complete microprocessor geek. I guessed that sending him the photo above would be like catnip. It turns out I was right. Within minutes, I got this response:

And here was the summary that followed:

  1. Core ix (Arrandale, mobile variant of Westmere)
  2. LGA775 desktop package, so anything from late Pentium 4, Pentium D, Core 2 Duo or Quad, or their Celeron variants.
  3. Socket 939 AMD, either an Athlon 64 Sledgehammer, Clawhammer or Winchester. The giveaway is the gap in the middle away from the 4x key pins in the circumference.
  4. One of my all time favourites, an AMD K6-2+ — “Sharptooth” — basically took the original Pentium (which topped out at 200MHz) socket to 600MHz, and even has 128KB of L2 cache on board.
  5. An AMD opteron, probably one of the early dual or quad models. Can’t tell without the pinout or model.
  6. 6.1 almost certainly a Pentium-3 Mobile
    6.2 hard to say. Definitely 286 era, but probably a Motorola like 10 [below]
  7. Impossible to say from that angle.
  8. Either a Pentium-MMX (166-233) or a Celeron Mendocino. Both used the same black OPGA flip chip design.
  9. A 486, either from: AMD, Intel, Cyrix, SGS-Thomson or Texas Instruments
  10. A Motorola 6800 or 68000

Bravo, my friend. 👏

Teletext archaeology

Jason Robertson’s talk on recovering teletext pages from videotapes is fascinating. Teletext was magical in the pre-Internet days. I had no idea that the service went back to the early 1970s. There is a searchable archive of pages that have been recovered. I love this preservation of ephemeral digital artefacts.

I remember purchasing a discounted Morley Teletext Adapter for my BBC Micro with the idea that I would download software that was embedded in the UK analogue TV signals. I struggled, as I had to use an indoor aerial which gave terrible results. (I don’t remember asking my parents to get a second aerial cable installed, but I am pretty sure they would have said no.) By the time I had bought the adapter I had missed the boat, as telesoftware broadcasts had stopped. Still, it was fun to go digging around in the depths of the pages, using my computer keyboard to input the hexadecimal page numbers that were impossible to enter with a TV remote control. Until I watched this video, I hadn’t realised that I might have been able to stumble across someone accessing services such as ‘online’ banking. Amazing.

Line Goes Up

A couple of friends shared this video with me; a feature-length tour-de-force that goes from the 2008-era financial crisis through the invention of blockchains and cryptocurrencies, Non-Fungible Tokens (NFTs) and Decentralised Autonomous Organisations (DAOs). This must have taken an age to put together.

Getting my head around ‘My first impressions of web3’

I heard this blog post mentioned on a couple of podcasts a few weeks back and have only just got around to reading it. It’s fascinating. The author, Moxie Marlinspike, dissects the state of the current ‘web3’ world of decentralised blockchains and non-fungible tokens (NFTs). Reading his post is probably the most educational use I’ve made of ten minutes this year, and I can’t recommend it highly enough. I’m jotting down my thoughts here in order to record and check my understanding.

When we all started out on the Internet, companies ran their own servers — email servers, web servers for personal or corporate websites etc. We pulled data from other people’s servers, and they pulled data from ours. Everything was distributed. This was actually the point, by design — ARPANET, the forerunner to the Internet, was architected in such a way that if a chunk of the network was taken down, the rest of the network would stay up. Your requests for information would route around whatever problem had occurred. This was effectively ‘web1’.

Over time, the Internet has become more centralised around a number of ‘big tech’ companies for a variety of reasons:

  • Servers can now be ‘spun up’ at the point of need and financed as operational expense using Amazon Web Services, Azure, Google Cloud etc. instead of costly capital investment. Lots of companies use a small number of these platforms.
  • Services like Microsoft 365 or Google Workspace allow companies to provide email, file sharing etc. and pay for the vendors to take care of the servers that run them instead of employing people to do it in-house.
  • People have coalesced around platforms where other people are instead of communicating point-to-point. Facebook, Twitter, Instagram, WhatsApp and Snapchat are good examples.
  • Although the Internet as a whole is resilient to failure, the architecture of ‘web1’ meant that your web or email servers could catastrophically go offline. Individual servers are vulnerable to problems such as ‘distributed denial of service’ (DDoS) attacks, where they are deliberately hit with more requests than they can handle and become unable to serve legitimate visitors. This could also happen if there is a ‘real’ spike in demand, e.g. if the content you are hosting or service you are running suddenly becomes extremely popular. People have mitigated this by using services from ‘content delivery networks’ (CDNs) such as Cloudflare and Fastly which, amongst other services, ‘cache’ copies of the content all over the world on their own network of servers, close to where the requestors are.

This is ‘web2’. Companies no longer need to buy, manage, administer and patch their own servers as they can rent them, or the applications that run on them, from specialist firms. But this creates a different set of issues. New entrants to these web2 service markets struggle to get a foothold, and the over-reliance on a small number of big players creates vulnerabilities for the system as a whole. For example, when Fastly had an outage in 2021, it took many of its customers offline. Companies like Fastly reduce your risk on a day-to-day basis but increase the overall risk of the system by being a point of concentration. They are a big single point of failure.

‘Blockchain’ has been a buzzword for a long time now. The idea of blockchains is attractive: they are decentralised, with no company or state owning them. A blockchain of ledger entries exists across a network of computers, with work being done by — and between — those computers to agree on the canonical version through consensus. In other words, those computers talk to each other to come to agreement on what the blockchain ledger looks like, and nobody has ownership or control. Everyone has also heard of cryptocurrencies such as Bitcoin. The Bitcoin blockchain is a distributed ledger of ownership of the currency. The big buzz at the moment is around NFTs, each of which is effectively a digital ledger entry on a blockchain that records someone is the ‘owner’ of something.
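As a toy illustration of the ledger idea only (nothing here models the distributed consensus part, and none of it reflects any real chain’s data format), a few lines of Python show how entries chained together by hashes make tampering with history detectable:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_entry(chain: list, entry: dict) -> None:
    """Add a ledger entry, linking it to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev, "entry": entry})

# A tiny ledger recording who 'owns' a token.
chain: list = []
append_entry(chain, {"asset": "token-1", "owner": "alice"})
append_entry(chain, {"asset": "token-1", "owner": "bob"})

# Quietly rewriting an old entry breaks every hash link that follows it.
chain[0]["entry"]["owner"] = "mallory"
print(block_hash(chain[0]) == chain[1]["prev_hash"])  # False
```

The hard part that real blockchains solve is getting thousands of independent computers to agree on which version of a chain like this is the canonical one.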

A blockchain doesn’t live on your mobile device or in your web browser, it lives on servers. In order to write a new entry onto a blockchain, you need to start by sending a request to one of these servers. This is actually a limitation. From Marlinspike’s blog post:

“When people talk about blockchains, they talk about distributed trust, leaderless consensus, and all the mechanics of how that works, but often gloss over the reality that clients ultimately can’t participate in those mechanics. All the network diagrams are of servers, the trust model is between servers, everything is about servers. Blockchains are designed to be a network of peers, but not designed such that it’s really possible for your mobile device or your browser to be one of those peers.”

A number of platforms have sprung up that provide the ability to write to popular blockchains. People would rather use these platforms than create and run something themselves, for many of the reasons that the ‘web2’ platforms came to be. People do not want to run their own servers. This brings its own problems: in order to add a ledger entry to a blockchain, you now have an additional ‘hop’ to go through. The quality and architecture of the platforms used to access a blockchain therefore really matter.
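To make that extra hop concrete, here is a minimal sketch of what a client typically does: it sends a JSON-RPC request to an access platform rather than talking to the chain itself. The endpoint URL below is a made-up placeholder, not a real service.

```python
import requests

# Hypothetical access-platform endpoint (placeholder, not a real provider URL).
PROVIDER_URL = "https://example-provider.invalid/v1/your-api-key"

def latest_block_number() -> int:
    """Ask the platform for the current Ethereum block number."""
    payload = {"jsonrpc": "2.0", "method": "eth_blockNumber", "params": [], "id": 1}
    response = requests.post(PROVIDER_URL, json=payload, timeout=10)
    response.raise_for_status()
    # The platform sees this request, and the client simply takes the
    # returned value on trust rather than verifying it against the chain.
    return int(response.json()["result"], 16)
```

Every read and write goes through a call along these lines.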

At the moment, calls and responses to these platforms are not particularly complex; there is little verification that what you get back in response to a request to retrieve data is actually what is really stored on the blockchain. These access platforms also get visibility of all of the calls that are made via their services. If you’re writing something to the blockchain, via a platform, they’ll know who you are and what you wrote because they helped you to do it.

“So much work, energy, and time has gone into creating a trustless distributed consensus mechanism, but virtually all clients that wish to access it do so by simply trusting the outputs from these two companies without any further verification. It also doesn’t seem like the best privacy situation.”

The author illustrates the point with an example of an NFT that he created which looks different depending on where you view it from. He can do this because the blockchain doesn’t actually contain the data that defines the NFT itself, just a link that points to where the image is. So, as he controls the location that the link points to, he can serve up different content depending on who or what is asking to see it. At some point, OpenSea, one of the popular NFT marketplaces, decided to remove his NFT from their catalogue. It was still on the blockchain, but invisible to anyone using OpenSea. This is interesting as it shows how much control a ‘web2’ platform has over the ‘web3’ blockchain.
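Marlinspike’s post has the specifics of how his own NFT did this; as a rough sketch of the general trick (hypothetical filenames, and a crude Referer check I have made up for illustration, not his implementation), the server that owns the image URL can simply branch on who appears to be asking:

```python
from flask import Flask, request, send_file

app = Flask(__name__)

@app.route("/nft/artwork.png")
def artwork():
    """Serve a different image depending on who seems to be requesting it."""
    referer = request.headers.get("Referer", "")
    if "opensea" in referer.lower():
        # What a marketplace embedding the image would see.
        return send_file("marketplace_version.png")
    # What a wallet app or ordinary browser would see.
    return send_file("everyone_else_version.png")

if __name__ == "__main__":
    app.run()
```

The ledger entry itself never changes; only the content behind the link does.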

If you have to go through one of these ‘web2’ platforms to interact with a blockchain, thereby losing some of the distributed benefits, why do the platforms bother with the blockchain at all? Writing new entries to a blockchain such as Ethereum is very expensive. So why not have a marketplace for NFTs where ownership is simply written into a database owned by a company like OpenSea? The author’s conclusion is that it is because there is a blockchain gold rush, for now at least. Without the buzzword and everyone piling in, a platform like OpenSea would never take off.

Postlight recently published a podcast episode with Michael Sippey called “On web3, Again” which is well worth a listen. The whole episode is great, but there are some pointers from about 35m25s in on how to start to experiment with all of this yourself, if you have the disposable income to do it.

Thinking about proxies

Recent conversations at work have got me thinking about the proxy metrics that we use, and how much nuance and detail they hide.

Cybersecurity

Last week, we had a look at a tool that presented a ‘cybersecurity dashboard’ for our organisation. It is a powerful tool, with lots of capabilities for investigating and remediating security issues across our IT infrastructure estate. But what struck me was a big number presented front-and-centre on the first page. It looked a bit like this1:

It was simply a percentage. I’ve been pondering it since, wondering if it is useful or not.

80.4%. Is this good? If that’s my organisation’s score, can I sleep well at night? When I was at university, an average score of 70% in your exams and coursework meant that you were awarded a first-class degree. So that number has always stayed with me and has felt intrinsically ‘good’. 80.4% is substantially higher than this. But what about that other 19.6%? Can we relax, or do we need to keep pushing to 100%? Can you ever truly be 100% secure if you’re running any kind of IT system?

Perhaps it is meant as a useful jumping off point for investigation. Or it is meant to be used over time, i.e. last week we were 78.9% and now we’re 80.4%, so things are going in the right direction. Maybe both. I’m not sure.

It’s a common idea that executives don’t want the detail. They simply want to see a big green light that says that things are ok. If there’s no green, they want to know that things are being dealt with in order to bring the amber or red thing back to green again. In the example above, although the ‘speed gauge’ is blue, it is still an attempt to aggregate all of the cybersecurity data across an organisation into a simple number. To me, it feels dangerous to boil it down to a single proxy metric.

I likened the single score to a song being reduced to its average frequency. Music can make us laugh, sing or cry. It can make our pulses race and our hearts throb. But the beauty and nuance is completely lost if you take the average of all of the sound and boil it down to one long continuous tone. (Someone has actually done this so you can hear examples for yourself.2)

Inflation

Food writer, journalist and activist Jack Monroe wrote an incredibly insightful thread on the latest inflation figures. The news headlines were screaming that the inflation number is 5.4% — a 30-year high. However, this hides the nuance of what exactly has been increasing in price and what has remained static. As usual, the poorest in society bear a disproportionate share of the increase. For people who depend on the cheapest goods, inflation is much higher, as the cost of those goods has been increasing at a much faster rate. Her original thread is well worth a read:

It was wonderful to see this thread get so much attention. Today Monroe announced that the Office for National Statistics will be making changes:

Financial data

I’ve been working in Financial Services for over 20 years. During the financial crisis of 2007–2008 I was employed by one of the banks that suffered terrible losses. In the lengthy report that was published to shareholders, it was notable that there was a dependency on a number of metrics such as Value at Risk which were in effect ’green’ even when the global financial system started to unravel. The actual problem was the sheer amount of toxic financial products that were on the balance sheet; as soon as the assumption of how much they were worth was revised, it triggered eye-watering losses.

From the report:

UBS’s Market Risk framework relies upon VaR and Stress Loss to set and monitor market risks at a portfolio level. [p19]

In the context of the CDO structuring business and Negative Basis and AMPS trades, IB MRC [Market Risk Control] relied primarily upon VaR and Stress limits and monitoring to provide risk control for the CDO desk. As noted above, there were no Operational limits on the CDO Warehouse and throughout 2006 and 2007, there were no notional limits on the retention of unhedged Super Senior positions and AMPS Super Senior positions, or the CDO Warehouse… [p20]

In other words, the amount of ‘good quality’ collateralised debt obligations (CDOs) that could be held on the balance sheet wasn’t subject to a cap. These were the instruments that were later found to be ‘toxic’.

MRC VaR methodologies relied on the AAA rating of the Super Senior positions. The AAA rating determined the relevant product-type time series to be used in calculating VaR. In turn, the product-type time series determined the volatility sensitivities to be applied to Super Senior positions. Until Q3 2007, the 5-year time series had demonstrated very low levels of volatility sensitivities. As a consequence, even unhedged Super Senior positions contributed little to VaR utilisation. [p20]

This means that the model, which produced a ‘green’ status for Value at Risk, was based on historical data which said that ‘everything is fine’. No consideration seemed to have been given to the sheer quantity of CDOs that were being held. As the financial crisis unfolded and it became clear that the assets were no longer worth 100%, the revaluations resulted in nearly USD 50bn in losses.
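To see how a calm historical window produces a reassuringly small number, here is a toy historical-simulation VaR calculation with made-up synthetic returns (this illustrates the general mechanism described in the report, not a reconstruction of UBS’s actual model):

```python
import numpy as np

def historical_var(returns: np.ndarray, confidence: float = 0.99) -> float:
    """One-day Value at Risk by historical simulation: the loss that was
    exceeded on only (1 - confidence) of days in the observed window."""
    return -np.percentile(returns, (1 - confidence) * 100)

rng = np.random.default_rng(0)

# Five years of tiny daily moves: the resulting VaR looks negligible...
calm_history = rng.normal(0.0, 0.0005, 1250)
print(f"VaR from calm history:     {historical_var(calm_history):.4%}")

# ...whereas the same position under stressed conditions tells a different story.
stressed_history = rng.normal(0.0, 0.01, 1250)
print(f"VaR from stressed history: {historical_var(stressed_history):.4%}")
```

Feed the model only the quiet years and even a huge, unhedged position ‘contributes little to VaR utilisation’.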

Proxies should be a jumping off point

Proxies are attractive as they often boil down complex things into simple metrics that we think we can interpret and understand. Wherever I see or use them I need to think about what assumptions they are built on and to check that they are not being substituted for the important details.


  1. Taken from the Microsoft Windows Defender ATP Fall Creators Update 
  2. Lots of people on forums seem baffled as to why anyone would want to do that. I love that someone has done it. 

Enabling everyone in hybrid meetings

On Wednesday I attended a long workshop from home. A few of us were dialled in via BlueJeans, while the vast majority of attendees were physically present in the room. It struck me that being a remote participant must be a teeny tiny bit like having a disability; it was difficult to hear, difficult to see and I had to work extra hard to participate. We spent a significant amount of time staring at an empty lectern, hearing voices fade in and out but not seeing anyone on screen.

There’s a big push to ‘crack hybrid’. I know that the technology will inevitably improve to make these meetings better, reducing the friction between being in the room and out of it. But for now, if the meeting is a workshop, or just the kind where you want to democratise participation and involve everyone (as opposed to talking at them webinar or lecture style), then it makes sense to me to have everyone join in the same way.

Elizabeth Stokoe puts it better than me:

As we go back to our offices, the best meetings are going to be those where the organiser has put thought and energy into how they should be configured to meet their goals.

📚 A Seat at the Table

I picked up Mark Schwartz’s A Seat at the Table as I have recently been thinking about how we can move away from the perception of our IT team as the people who ‘turn up and fix the Wi-Fi’ to one where we are seen as true business partners. The book took me by surprise in being less of a self-help manual and more of a well-articulated argument as to why the old ways in which we did things no longer apply in the digital age. It is brilliant.

Schwartz has a way of encapsulating key concepts and arguments in short, smart prose. The book contains the best articulation of the case for Agile, Lean and DevOps that I have read. There is so much wisdom in a single sentence, for example:

What is the value of adhering to a plan that was made at the beginning of a project, when uncertainty was greatest?

One of the books referenced heavily in A Seat at the Table is Lean Enterprise by Jez Humble, Joanne Molesky and Barry O’Reilly which I read some time ago. Lean Enterprise goes into more detail in terms of the concepts and mechanics used in modern software development such as continuous integration, automated testing etc. and brings them together into a coherent whole. Schwartz does not cover these topics in detail but gives just enough information to make his case as to why they are the sensible way forward for developing software.

A company may typically engage their IT department as if they are an external supplier. They haggle and negotiate, they fix scope and cost, and then the work starts. This approach does make some sense when working with a truly external vendor who is taking on some of the financial risk of overrunning and where you are able to specify exactly what you want in detail, for example where physical IT infrastructure is being delivered, installed and configured. It makes little sense when you are creating a new software system. It makes even less sense when the IT team are colleagues in the same organisation, trying to work out what investments will make the biggest impact on the company. We win and lose together.

First of all, we came to speak about “IT and the business” as two separate things, as if IT were an outside contractor. It had to be so: the business was us and IT was them. The arms-length contracting paradigm was amplified, in some companies, by the use of a chargeback model under which IT “charged” business units based on their consumption of IT services. Since it was essentially managing a contractor relationship, the business needed to specify its requirements perfectly and in detail so that it could hold IT to delivering on them, on schedule, completely, with high quality, and within budget. The contractor-control model led, inevitably, to the idea that IT should be delivering “customer service” to the enterprise—you’d certainly expect service with a smile if you were paying so much money to your contractors.

For readers who are familiar with why we use Agile software development methods, the arguments against the old ‘waterfall’ approach are well-known. What is more interesting is that Schwartz also points to issues that advocates of the Agile approach have exacerbated. Agile people can be suspicious of anyone that looks like a manager, and want them to get out of the way so that they can get on with the job. Schwartz argues that the role of managers and leadership is to remove impediments, many of which the Agile team cannot easily deal with on their own:

When the team cannot accomplish objectives, I am forced to conclude that they cannot do it within the given constraints. The team might need members with different skills. It might need permission to try an experiment. It might need the help of another part of the organization. It might need a policy to be waived. But if the task is possible and the team cannot achieve it, then there is a constraining factor. My job is to remove it.

What if someone on the team is really just not performing? Perhaps not putting in his or her share of effort, or being careless, or uncooperative? Well, then, dealing with the problem is simply another example of removing an impediment for the team.

The critical role of middle management, it would seem, is to give delivery teams the tools they need to do their jobs, to participate in problem-solving where the problems to be solved cross the boundaries of delivery teams, to support the delivery teams by making critical tactical decisions that the team is not empowered to make, and to help remove impediments on a day-to-day basis. The critical insight here, I think, is that middle management is a creative role, not a span-of-control role. Middle managers add value by contributing their creativity, skills, and authority to the community effort of delivering IT value.

He makes a clear case for getting rid of ‘project thinking’ completely. If you want a software delivery initiative to stay on budget, an agile approach is the only way to do it: the team costs the organisation its run rate, which is almost always known in advance, and work can be stopped at any time, preserving the developments and insights that have been created up to that point.

As a former PMO head, and with my current responsibilities of running a portfolio of change initiatives, it was interesting to see the approach to ‘business cases’ recommended in the book. Instead of signing off on a set of requirements for a particular cost by a certain date, you should be looking to assess the team on what they want to achieve and whether they have the skills, processes and discipline to give you confidence that they will:

  • be effective,
  • manage a robust process for determining the work they will do,
  • make good decisions,
  • seek feedback,
  • continually improve.

Schwartz gives a brilliant example of how difficult it is to articulate the value of something in the IT world, which gave me flashbacks to the hours I have spent wrestling with colleagues over their project business cases:

How much value does a new firewall have? Well … let’s see … the cost of a typical hacker event is X dollars, and it is Y% less likely if we have the firewall. Really? How do we know that it will be the firewall that will block the next intrusion rather than one of our other security controls? How do we know how likely it is that the hackers will be targeting us? For how long will the firewall protect us? Will the value of our assets—that is, the cost of the potential hack—remain steady over time? Or will we have more valuable assets later?

The word ‘requirements’ should go away, but so should the word ’needs’; if the organisation ‘requires’ or ‘needs’ something, what are the implications for right now when the organisation doesn’t have it? Instead of using these terms, we should be formulating hypotheses about things we can change which will help bring value to the organisation. Things that we can test and get fast feedback on.

Schwartz also argues against product as a metaphor, which was a surprise to me given how prevalent product management is within the industry today:

But the product metaphor, like many others in this book, has outlived its usefulness. We maintain a car to make it continue to function as if it were new. A piece of software, on the other hand, does not require lubrication—it continues to operate the way it always has even if we don’t “maintain” it. What we call maintenance is really making changes to keep up with changes in the business need or technology standards.

Senior IT leaders are ’stewards’ of three critical ‘assets’ in the organisation:

  1. The Enterprise Architecture asset  — the collection of capabilities that allows the organisation to function, polished and groomed by the IT team.
  2. The IT people asset — ensuring that the organisation has the right skills.
  3. The Data asset — the information contained in the company’s databases, and the company’s ability to use that information.

Much of the book comes back to these three assets to emphasise and elaborate on their meaning, and the work required to “polish and groom” them.

The author makes the case that CIOs should take their seat at the table with the rest of the CxOs through being confident, bold, and simply taking the seat in the same way that the others do. To talk of IT being ‘aligned’ to the business is to imply that IT can be ‘misaligned’, doing its own thing without giving any thought to the rest of the organisation. The CFO, CMO or any other CxO does not need to continually justify their existence and prove their worth to the business, and neither should the CIO. The CIO needs to have deep technology knowledge — deeper than the rest of the people around the table — and bring this knowledge to bear to deliver value for the organisation, owning the outcomes instead of just ‘delivering products’.

It follows that the CIO is the member of the senior leadership team—the team that oversees the entire enterprise—who contributes deep expertise in information technology. I do mean to say deep expertise. Increasingly, everyone in the enterprise knows a lot about technology; the CIO, then, is the person who knows more than everyone else. The CIO should be more technical, not less—that is how he or she contributes to enterprise value creation; otherwise, the role would not be needed.

The age of IT organizations hiding behind requirements—“just tell me what you need”— is gone. IT leaders must instead take ownership, responsibility, and accountability for accomplishing the business’s objectives. The IT leader must have the courage to own outcomes.

IT investments are so central to corporate initiatives that it is hard to make any other investment decisions without first making IT decisions. This last point is interesting, right? Perhaps it suggests that IT governance decisions should be made together with or in advance of other business governance decisions. Instead, in our traditional model, we think first about “business” decisions, and then try to “align” the IT decisions with them. But in our digital world—if we are truly committed to the idea that that’s the world we live in—IT should not follow business decisions but drive them.

CIOs and their staff have an excellent “end-to-end understanding of the business, a discipline and mindset of accomplishing goals, and an inclination toward innovation and change.” They bring a lot to the table.

Schwartz makes a case for the rest of the organisation becoming digitally literate and sophisticated in their use of technology. This may extend to people from all parts of the organisation being able to contribute to the codebase (or “Enterprise Architecture asset”) that is managed by IT. This should be no different to developers on an open source project making changes and submitting a ‘pull request’ to have those changes incorporated into the official codebase. We should embrace it, fostering and harnessing the enthusiasm of our colleagues. We should care less about who is doing the work and more about whether the company’s needs are met.

As much as I enjoyed the book, there were points where I disagreed. Schwartz argues strongly against purchasing off-the-shelf software — ever, it seems — and advocates building things in-house. He makes the point that software developed for the marketplace may not be a good fit for our business and may come with a lot of baggage. My view is that this completely depends on where the software sits in the stack and how commoditised it is. It makes no sense to implement our own TCP/IP stack, for example, nor does it make any sense to develop our own email client. (Nobody ever gained a new customer based on how good their email system was. Probably.) But I do agree that for software that is going to give us a competitive edge, we want to be developing this in-house. I think that something along the lines of a Wardley Map could be useful for thinking about this, where the further along the evolution curve a component is, the less likely Agile in-house development is to be the preferred choice:

Overall this book is a fantastic read and will be one I come back to. It’s given me lots to think about as we start to make a case for new ways of working that go beyond the IT department.

Using a LeanKit board to manage risks

LeanKit is my favourite productivity tool. Our team has been using it for the past couple of years to manage our work across a series of Kanban boards. It is super easy to use, and offers a massive amount of flexibility compared to the implementation of Kanban boards in Jira, Trello or Microsoft Planner. You can configure both vertical and horizontal swimlanes, instead of just the vertical columns of tasks that the other tools offer. It is easy to represent your team’s workflow as it feels like the tool is working with you instead of against you.

Recently we have also started to use LeanKit to manage our department’s risks. At our company, we have an Operational Risk framework used across the organisation that looks like this:

The first job is to reproduce this layout in LeanKit so that everyone can relate the board back to the model. The board editor makes this super easy.

Here the typical Kanban setup of ‘to do’, ‘doing’ and ‘done’ is represented by the ‘New Risks’, ‘Current (Residual) Risk — After Mitigation’ and ‘Closed — Ready To Archive’ lanes respectively:

The main section is then broken down into rows for ‘likelihood’, with ‘severity’ columns in each row. The coloured circles in each of the box titles are emojis that are added when editing the title of each box (on Windows use the Windows key plus ‘.’ to bring up the emoji picker, and on a Mac press ‘fn’ or use the Edit > Emoji & Symbols dropdown in the menu bar). We also have a ‘Themes’ section at the top of this lane — more on this later.

In the ‘New Risks’ column we have a space for a template as well as a ‘For review’ section which has been allocated as the drop lane. By default, new risks will go here when we think of them; we then periodically review them as a team and drag them to the appropriate place on the board.

We then need to configure the board. The Board Settings tab can be used to set a title, description, custom URL and specify who gets access:

In this example, I’m not yet ready to let anyone else use it so I have set the default security to ‘No Access’:

In the Card Types section we define three types of card. The main one is ‘Risk’ but we also create ‘Theme’ to group our risks together. We also leave ‘Subtask’ as one of the defaults in case someone wants to use the on-card mini-Kanban board to manage the tasks relating to an individual risk. We pick some colours we like, and delete all of the other default types of card:

We also set up a Custom icon so that we can see at a glance which of our risks are mitigated or accepted, which we are working on, and which need our attention.

We ensure that every card has one of these custom icons when we create it. During a review we can then filter the board so that, for example, only the red-starred cards appear.

Next we create the template card. First, we set the Card Header to allow custom header text. With templates, I like to leave the board user with instructions such as ‘Copy me!’:

We then create the template card itself. This goes some way to ensuring that all of the new risks get created in a similar way, with similar information. This card will be put into the ‘Template’ section of the board:

In order to distinguish one risk from another, and report them to wherever they need to go, we want each risk to have a unique identifier. We can now go back to the Card Header in the board’s settings and select ‘Auto-incremented Number’ with a header prefix of ‘Risk ‘. This means that new cards added to the board will be called ‘Risk 1’, ‘Risk 2’, ‘Risk 3’ etc.:

The ‘Risk ‘ prefix does have the effect of changing the name of the template card, but this isn’t too confusing:

We can now start adding risks to the board, and linking them to themes as shown below:

Having a visual representation of our risks in this way is so much better than the usual spreadsheet with one risk per row. It’s allowed us to incorporate risk management much more into our day-to-day work. We can assign owners to each risk, and use all of the rich features of LeanKit such as adding comments, due dates etc.

If we decide a risk needs to be reclassified in terms of its likelihood or severity, we simply drag the card to the new location on the board. The card itself will keep a history of its journey in its audit log. If we absolutely have to submit our risks in a spreadsheet somewhere, we can export the board contents as a CSV file and format it in Excel.
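If we do need that spreadsheet, the exported CSV is also easy to reshape programmatically. Here is a small sketch using pandas; the column names are assumptions on my part, as LeanKit’s actual export headers may differ:

```python
import pandas as pd

# Hypothetical column names; check them against your own LeanKit export.
risks = pd.read_csv("leankit_export.csv")

# Count how many risks sit in each cell of the likelihood x severity grid.
matrix = risks.pivot_table(
    index="Likelihood",
    columns="Severity",
    values="Card ID",
    aggfunc="count",
    fill_value=0,
)

matrix.to_excel("risk_matrix.xlsx")
```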

The best thing about managing the risks in this way is that we can link any mitigation work directly to the risks themselves. Where we agree a follow-up action, we create task cards on the appropriate team Kanban boards and then link each of those cards to the risk — the risk card becomes the parent card of the task. In this way, we can see at a glance all of our risks and track the work as it gets completed across the organisation.

The loveliest place on the Internet

A close friend of mine recently asked, out of the blue:

In a word, the answer is yes. For the past few years I’ve been using micro.blog, which I’ve come to think of as the loveliest place on the Internet. It isn’t lovely by chance, it has been deliberately designed that way, and is lovingly nurtured to keep it full of positive vibes across its wonderful community.

On the surface, micro.blog is selling blog hosting. For $5/month, you can sign up and host your blog posts there. The monthly fee makes sure that your site isn’t cluttered with any adverts. You can post using apps for iOS, iPadOS, macOS and Android as well as via the web. Blog posts can be ‘micro’ status updates of 280 characters or less, and if you go over this limit, the official apps and web interface reveal a ‘title’ field to accompany a more traditional blog post. You can syndicate your posts to Twitter, Medium, Mastodon, LinkedIn and Tumblr, with the full text being posted if it can fit in a tweet, and a title and link posted if it is longer. Photo uploads are included; the official apps helpfully strip out any EXIF metadata, such as where the photo was taken, in order to protect your privacy. A $10/month plan gives you the ability to host short videos or even podcasts (‘microcasts’), which you can record using the Wavelength iOS app.
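That syndication rule (full text if it fits, title and link if it doesn’t) is simple enough to sketch. This is just my reading of the behaviour described above, not micro.blog’s actual code, and the fallback title is my own placeholder:

```python
from typing import Optional

def crosspost(body: str, url: str, title: Optional[str] = None, limit: int = 280) -> str:
    """Short, title-less posts are syndicated in full; longer posts (which get
    a title) go out as the title plus a link back to the original."""
    if title is None and len(body) <= limit:
        return body
    return f"{title or 'New post'}: {url}"

# A short status update goes out as-is; a longer piece becomes title + link.
print(crosspost("Enjoying a coffee in the sun.", "https://example.com/2022/01/coffee"))
print(crosspost("A much longer essay... " * 50, "https://example.com/2022/01/essay", title="On coffee"))
```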

But this is just where the magic starts. You don’t actually need to host your content on micro.blog. I’ve had my own blog for many years, and have recently started to take my content off of other platforms such as Instagram and Goodreads and host it myself — I want my content on my own platform, not somebody else’s. If you have an existing blog like I do, you can create an account and link it to your existing website via an RSS feed. Any post you write on your own blog then gets posted to your micro.blog account, and syndicated to wherever you want it to go. You can even set up multiple feeds with multiple destinations for cross-posting:

I post at my own blog and micro.blog picks up the feed

Once you have an account set up, you can start to use the main micro.blog interface. Here’s where you will see posts from everyone that you have ‘followed’, whether they have a micro.blog hosted account or syndicate from their own site. It’s a bit like a calmer, happier version of Twitter:

Here’s where some of the thoughtfulness of the design comes in. Discovering people to follow actually takes some effort. You can click on the Discover button to see a set of recent posts from a variety of users, lovingly hand-curated by the micro.blog Community Manager Jean MacDonald. If one of these posts catches your eye, you can click on the username or profile photo and press ‘follow’. That person’s posts will now appear in your timeline. By default you will also see their ‘replies’ to other members of the community. You can click on a button to view the whole conversation thread, which can lead you to explore and discover more people. This effort means that the emphasis is on quality over quantity, with gradual discovery.

You can also discover people by selecting someone’s profile and clicking through to see who they follow that you aren’t following:

One of the best design choices made on the platform is that there are no follower counts. I have no idea how many other users follow me, or anybody else. There are also no ‘likes’, just a button to privately bookmark a post for you to access again easily in the future. Reposts (or ‘retweets’) don’t exist either — if you like a post that you read and want to amplify it, you need to create a new post of your own. Hashtags are also not supported. Again, the emphasis is on quality and not quantity, on conversations instead of engagement metrics. This works to reduce people posting content just to ‘go viral’, and keeps the noise down.

So far, so straightforward. But there’s more magic.

The creator of micro.blog has authored an iOS app called Sunlit, which allows the creation of photo posts on your blog, whether you are hosting your content on micro.blog or externally. You can also see posts from other people that you follow on micro.blog. It’s like viewing the micro.blog content through an Instagram-type lens. The photos you see are from the same timeline that you see in micro.blog. You can comment on and bookmark posts, and switch between the Sunlit app and the main micro.blog apps or web interface. I love an occasional browse through the timeline using this app if I’m in the mood just to see some wonderful photos.

So, this means that all of your content sits on your own blog. No more silos of posts on different platforms.

But here’s the thing that feels most magical to me. If someone reads a post of yours on micro.blog, they can reply to it. But in the spirit of you owning your own content, these replies get posted as comments on your original blog post, even if the blog is hosted on your own independent website. Every time this happens, it blows my mind a tiny bit. Here’s an example — I recently posted about how exhausting the Clubhouse app is, which sparked a few comments and conversations. People responded on micro.blog and these ended up as comments on the post on my site:

It really is a wonderful place to spend some time. If you’re looking for an alternative to the noisy, regularly hostile place that the traditional social media platforms have become, and/or owning your content is important to you, it is well worth checking out. It’s been a source of joy over the past couple of weeks to see a real-life friend regularly posting there, and I know from talking to him that he’s loving it.

UC Today podcast

I really enjoy the UC Today podcast. If you’re involved in administering or working with Microsoft Teams, the latest episode is well worth 20 minutes of your time.

Key points covered in this episode that stuck in my head after listening:

  • Calling plan will now be part of the Microsoft E5 licence (so no additional purchase necessary) everywhere that Microsoft is a telco except the US and Puerto Rico.
  • Teams is getting native integration to WebEx meetings, so you can join WebEx calls from Teams meeting rooms.
  • Microsoft are selling a bolt-on for API access to record calls/meetings. The list price is USD 12/user/month, which seems quite expensive. You will also need a third party tool to record.
  • Other features and enhancements to the user interface including the ‘Together’ mode, which is meant to make calls less tiring.

Keeping a ScanSnap going on macOS Mojave

After trying a few different apps, I switched to ExactScan Pro on macOS given that Fujitsu no longer support my scanner. Expensive, but not as much as buying a whole new piece of hardware to achieve the same result. I’ve only scanned a few thousand pages and it seemed tragic to get rid of it when it was working perfectly well.

The saddest thing was the way the old Fujitsu software died. It still seemed to work, but created PDFs with the pages out of order. I was scratching my head as it was almost like someone had deliberately sabotaged the code so that it wouldn’t work properly.

Debugging images in a WordPress upload via Ulysses

I spent hours trying to debug why Ulysses wouldn’t post my latest weeknotes to WordPress from my iPad. I kept getting an error that the document contained unsupported HEIC-format images but I couldn’t see which, as the image filenames are hidden in the Ulysses sheet. The post took ten minutes or so to upload, and each time I found I had to go to the WordPress media library and delete the images that did upload successfully so that they weren’t duplicated on my next attempt.

Eventually I found a way to view them, by dragging the images one by one from Ulysses into Gladys where the filenames became visible.

It turns out that the root cause was my having dragged-and-dropped enhanced images directly from the lightbox in Camera+ 2. Saving them to Photos first and then dragging them in from there got around the problem.

New host

What started with a Troy Hunt-inspired investigation into how I can enforce HTTPS on my Bitnami/AWS-hosted personal website turned into a full-on migration over to a new hosted web provider. After some initial teething problems related to the fact that my new site would be hosted at andrewdoran.uk and my old site was already at that same address, I managed to get it up and running with minimal hassle. Support from the staff at Siteground was excellent, answering my questions quickly and pointing me to exactly the resources I needed to get going.

Once I had everything in place the migration itself only took a couple of hours. I started with an export and import of my site using the WordPress-provided tools but found that this only transferred the basics — mainly the text. I’ve spent a lot of time over the years tweaking different aspects of the site and didn’t want to go through trying to reassemble it again. The All-in-One WP Migration plugin came to my rescue — this exports pretty much every aspect of a WordPress site including media, plugins, customisations etc. and lets you drag and drop the exported file to its new home. In order to export the data I had to create a new folder on the server and grant write permissions to it, but I didn’t need to make any customisations for the upload to work on the new site. Exporting is free but importing a file of over 512 MB means that you need to buy a licence for USD 69 (about GBP 50). My export file was 1.4 GB so I had to pay; in my mind this was money well spent considering the alternative of spending hours making all the customisations, reinstalling plugins and uploading all of my old media again.

Uploading my own static HTML content was as simple as can be, again making reference to the many straightforward Siteground tools and reference pages on how to create a key pair to enable an SFTP connection.

Once the content was uploaded and I’d tested it out, I had to make a few tweaks to the configuration variables so that the site recognised itself as the canonical andrewdoran.uk, and then repointed the DNS entries to the new site. I know that DNS is meant to take up to 48 hours to propagate, but the change seemed almost instant from where I was connecting. I also made a simple change to redirect ‘www’ requests to the non-www equivalent.

Having got the site up and running it was exceedingly easy to use the Siteground-provided tools to not only install a valid Let’s Encrypt SSL certificate but also to get any requests to HTTP pages redirected to the HTTPS equivalent on the site.

As far as migrations go it was exceptionally straightforward. It’s great to know that not only am I now able to serve up site content over HTTPS but also that I don’t need to worry about maintaining the operating system on my web server, as the host will do that for me.

Unintended consequences

Awful to read that Bitcoin Mining Now Consuming More Electricity Than 159 Countries Including Ireland & Most Countries In Africa. This is the tragedy of the commons writ large.

I could be overly sensitive after having just read The Internet Is Not The Answer, but this just reinforces the point raised in the book that the Internet has led to many terrible unintended consequences. We are literally paying to burn fossil fuel and hasten our own destruction through climate change in the real world in order to gamble on being lucky enough to obtain a slice of currency in the digital one.

I love technology where it brings enjoyment or helps us to be better at what we do. But whenever I read something like this, as I have mentioned here before, Bill Joy’s words are never far from my mind:

Now, as then, we are creators of new technologies and stars of the imagined future, driven – this time by great financial rewards and global competition – despite the clear dangers, hardly evaluating what it may be like to try to live in a world that is the realistic outcome of what we are creating and imagining.

I’m starting to think his article is the most important thing I’ve ever read. It’s certainly something that has shaped my worldview and continues to pop back into my mind again and again.