Microchip catnip

Saturday’s visit to the Centre for Computing History for a talk with some of the GoldenEye development team gave me the opportunity to look around their exhibits. It’s a really wonderful place.

In the main exhibition space there is a bowl of microprocessors in front of some slices of silicon:

One of my friends is a complete microprocessor geek. I guessed that sending him the photo above would be like catnip. It turns out I was right. Within minutes, I got this response:

And here was the summary that followed:

  1. Core ix (Arrandale, mobile variant of Westmere)
  2. LGA775 desktop package, so anything from late Pentium 4, Pentium D, Core 2 Duo or Quad, or their Celeron variants.
  3. Socket 939 AMD, either an Athlon 64 Sledgehammer, Clawhammer or Winchester. The giveaway is the gap in the middle away from the 4x key pins in the circumference.
  4. One of my all time favourites, an AMD K6-2+ — “Sharptooth” — basically took the original Pentium (which topped out at 200MHz) socket to 600MHz, and even has 128KB of L2 cache on board.
  5. An AMD Opteron, probably one of the early dual or quad models. Can’t tell without the pinout or model.
  6. 6.1 almost certainly a Pentium-3 Mobile
    6.2 hard to say. Definitely 286 era, but probably a Motorola like 10 [below]
  7. Impossible to say from that angle.
  8. Either a Pentium-MMX (166-233) or a Celeron Mendocino. Both used the same black OPGA flip chip design.
  9. A 486, either from: AMD, Intel, Cyrix, SGS-Thomson or Texas Instruments
  10. A Motorola 6800 or 68000

Bravo, my friend. 👏

Teletext archaeology

Jason Robertson’s talk on recovering teletext pages from videotapes is fascinating. Teletext was magical in the pre-Internet days. I had no idea that the service went back to the early 1970s. There is a searchable archive of pages that have been recovered. I love this preservation of ephemeral digital artefacts.

I remember purchasing a discounted Morley Teletext Adapter for my BBC Micro with the idea that I would download software that was embedded in the UK analogue TV signals. I struggled as I had to use an indoor aerial which gave relatively terrible results. (I don’t remember asking my parents to get a second aerial cable installed, but I am pretty sure they would have said no.) By the time I had bought the adapter I had missed the boat as telesoftware broadcasts had stopped. Still, it was fun to go digging around in the depths of the pages, using my computer keyboard to input the hexadecimal page numbers that were impossible with a TV remote control. Until I watched this video, I hadn’t realised that I might have been able to stumble across someone accessing services such as ‘online’ banking. Amazing.

Line Goes Up

A couple of friends shared this video with me; a feature-length tour-de-force that goes from the 2008-era financial crisis through the invention of blockchains and cryptocurrencies, Non-Fungible Tokens (NFTs) and Decentralised Autonomous Organisations (DAOs). This must have taken an age to put together.

Getting my head around ‘My first impressions of web3’

I heard this blog post mentioned on a couple of podcasts a few weeks back and have only just got around to reading it. It’s fascinating. The author, Moxie Marlinspike, dissects the state of the current ‘web3’ world of decentralised blockchains and non-fungible tokens (NFTs). Reading his post is probably the most educational use I’ve made of ten minutes this year, and I can’t recommend it highly enough. I’m jotting down my thoughts here in order to record and check my understanding.

When we all started out on the Internet, companies ran their own servers — email servers, web servers for personal or corporate websites etc. We pulled data from other people’s servers, and they pulled data from ours. Everything was distributed. This was actually the point, by design — ARPANET, the forerunner to the Internet, was architected in such a way that if a chunk of the network was taken down, the rest of the network would stay up. Your requests for information would route around whatever problem had occurred. This was effectively ‘web1’.

Over time, the Internet has become more centralised around a number of ‘big tech’ companies for a variety of reasons:

  • Servers can now be ‘spun up’ at the point of need and financed as operational expense using Amazon Web Services, Azure, Google Cloud etc. instead of costly capital investment. Lots of companies use a small number of these platforms.
  • Services like Microsoft 365 or Google Workspace allow companies to provide email, file sharing etc. and pay for the vendors to take care of the servers that run them instead of employing people to do it in-house.
  • People have coalesced around platforms where other people are instead of communicating point-to-point. Facebook, Twitter, Instagram, WhatsApp and Snapchat are good examples.
  • Although the Internet as a whole is resilient to failure, the architecture of ‘web1’ meant that your web or email servers could catastrophically go offline. Individual servers are vulnerable to problems such as ‘distributed denial of service’ (DDoS) attacks, where they are deliberately hit with more requests than they can handle and become unable to serve legitimate visitors. This could also happen if there is a ‘real’ spike in demand, e.g. if the content you are hosting or service you are running suddenly becomes extremely popular. People have mitigated this by using services from ‘content delivery networks’ (CDNs) such as Cloudflare and Fastly which, amongst other services, ‘cache’ copies of the content all over the world on their own network of servers, close to where the requestors are.

This is ‘web2’. Companies no longer need to buy, manage, administer and patch their own servers as they can rent them, or the applications that run on them, from specialist firms. But this creates a different set of issues. New entrants to these web2 service markets struggle to get a foothold, and the over-reliance on a small number of big players creates vulnerabilities for the system as a whole. For example, when Fastly had an outage in 2021, it took many of its customers offline. Companies like Fastly reduce your risk on a day-to-day basis but increase the overall risk of the system by being a point of concentration. They are a big single point of failure.

‘Blockchain’ has been a buzzword for a long time now. The idea of blockchains is attractive: they are decentralised, with no company or state owning them. A blockchain of ledger entries exists across a network of computers, with work being done by — and between — those computers to agree on the canonical version through consensus. In other words, those computers talk to each other to come to agreement on what the blockchain ledger looks like, and nobody has ownership or control. Everyone has also heard of cryptocurrencies such as Bitcoin. The Bitcoin blockchain is a distributed ledger of ownership of the currency. The big buzz at the moment is around NFTs, each of which is effectively a digital ledger entry on a blockchain that records someone is the ‘owner’ of something.

A blockchain doesn’t live on your mobile device or in your web browser, it lives on servers. In order to write a new entry onto a blockchain, you need to start by sending a request to one of these servers. This is actually a limitation. From Marlinspike’s blog post:

“When people talk about blockchains, they talk about distributed trust, leaderless consensus, and all the mechanics of how that works, but often gloss over the reality that clients ultimately can’t participate in those mechanics. All the network diagrams are of servers, the trust model is between servers, everything is about servers. Blockchains are designed to be a network of peers, but not designed such that it’s really possible for your mobile device or your browser to be one of those peers.”

A number of platforms have sprung up that provide the ability to write to popular blockchains. People would rather use these platforms than create and run something themselves, for many of the reasons that the ‘web2’ platforms came to be. People do not want to run their own servers. This brings its own problems in that in order to add a ledger entry to a blockchain, you now have an additional ‘hop’ to go through. The quality and architecture of these platforms used to access a blockchain really matters.

At the moment, calls and responses to these platforms are not particularly complex; there is little verification that what you get back in response to a request to retrieve data is actually what is really stored on the blockchain. These access platforms also get visibility of all of the calls that are made via their services. If you’re writing something to the blockchain, via a platform, they’ll know who you are and what you wrote because they helped you to do it.

“So much work, energy, and time has gone into creating a trustless distributed consensus mechanism, but virtually all clients that wish to access it do so by simply trusting the outputs from these two companies without any further verification. It also doesn’t seem like the best privacy situation.”
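
To make that point concrete, here is a minimal sketch (in Python, with a made-up provider URL and project ID) of how a typical client reads from Ethereum today: a plain HTTPS request to somebody else’s JSON-RPC server, with the answer taken entirely on trust.

```python
import requests

# Hypothetical hosted endpoint of the kind Infura or Alchemy provide;
# the URL and project ID below are placeholders, not real credentials.
ENDPOINT = "https://mainnet.example-provider.io/v3/YOUR_PROJECT_ID"

payload = {"jsonrpc": "2.0", "method": "eth_blockNumber", "params": [], "id": 1}
response = requests.post(ENDPOINT, json=payload, timeout=10).json()

# The client simply trusts whatever hex string comes back; there is no Merkle
# proof or any other check that this reflects the 'real' state of the chain.
latest_block = int(response["result"], 16)
print(f"Latest block (according to the provider): {latest_block}")
```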

The author illustrates the point with an example of an NFT that he created which looks different depending on where you take a look at it from. He can do this because the blockchain doesn’t actually contain the data that defines the NFT itself, just a link that points to where the NFT is. So, as he owns the location of the NFT image, he can serve up different content depending on who or what is asking to see it. At some point, OpenSea, one of the popular NFT marketplaces, decided to remove his NFT from their catalogue. It was still on the blockchain, but invisible to anyone using OpenSea. This is interesting as it shows how much control a ‘web2’ platform has over the ‘web3’ blockchain.
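
This isn’t Marlinspike’s actual code, but a rough sketch of the general trick: because the chain only stores a URL, whoever controls that URL can inspect the incoming request and decide what to serve. The user-agent check below is purely illustrative.

```python
from flask import Flask, request, send_file

app = Flask(__name__)

@app.route("/nft/<token_id>/image")
def nft_image(token_id):
    # The blockchain entry just points at this URL; the server behind it is
    # free to vary the response depending on who is asking.
    user_agent = request.headers.get("User-Agent", "").lower()
    if "opensea" in user_agent:
        return send_file("marketplace_version.png")  # what the marketplace sees
    return send_file("wallet_version.png")           # what everyone else sees
```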

If you have to go through one of these ‘web2’ platforms to interact with a blockchain, thereby losing some of the distributed benefits, why do the platforms bother with the blockchain at all? Writing new entries to a blockchain such as Ethereum is very expensive. So why not have a marketplace for NFTs where ownership is simply written into a database owned by a company like OpenSea? The author’s conclusion is that it is because there is a blockchain gold rush, for now at least. Without the buzzword and everyone piling in, a platform like OpenSea would never take off.

Postlight recently published a podcast episode with Michael Sippey called “On web3, Again” which is well worth a listen. The whole episode is great, but there are some pointers from about 35m25s in on how to start to experiment with all of this yourself, if you have the disposable income to do it.

Thinking about proxies

Recent conversations at work have got me thinking about the proxy metrics that we use, and how much nuance and detail they hide.

Cybersecurity

Last week, we had a look at a tool that presented a ‘cybersecurity dashboard’ for our organisation. It is a powerful tool, with lots of capabilities for investigating and remediating security issues across our IT infrastructure estate. But what struck me was a big number presented front-and-centre on the first page. It looked a bit like this [1]:

It was simply a percentage. I’ve been pondering it since, wondering if it is useful or not.

80.4%. Is this good? If that’s my organisation’s score, can I sleep well at night? When I was at university, an average score of 70% in your exams and coursework meant that you were awarded a first-class degree. So that number has always stayed with me and has felt intrinsically ‘good’. 80.4% is substantially higher than this. But what about that other 19.6%? Can we relax, or do we need to keep pushing to 100%? Can you ever truly be 100% secure if you’re running any kind of IT system?

Perhaps it is meant as a useful jumping off point for investigation. Or it is meant to be used over time, i.e. last week we were 78.9% and now we’re 80.4%, so things are going in the right direction. Maybe both. I’m not sure.

It’s a common idea that executives don’t want the detail. They simply want to see a big green light that says that things are ok. If there’s no green, they want to know that things are being dealt with in order to bring the amber or red thing back to green again. In the example above, although the ‘speed gauge’ is blue, it is still an attempt to aggregate all of the cybersecurity data across an organisation into a simple number. To me, it feels dangerous to boil it down to a single proxy metric.

I likened the single score to a song being reduced to its average frequency. Music can make us laugh, sing or cry. It can make our pulses race and our hearts throb. But the beauty and nuance is completely lost if you take the average of all of the sound and boil it down to one long continuous tone. (Someone has actually done this so you can hear examples for yourself [2].)
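
If you are curious how you would even compute such a thing, here is a rough sketch of one interpretation, using the spectral centroid as the ‘average’ frequency of a WAV file you supply; the file names are just placeholders.

```python
import numpy as np
from scipy.io import wavfile

rate, samples = wavfile.read("song.wav")      # any WAV file you have to hand
if samples.ndim > 1:
    samples = samples.mean(axis=1)            # mix stereo down to mono

spectrum = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(len(samples), d=1 / rate)
average_hz = (freqs * spectrum).sum() / spectrum.sum()   # the 'average' frequency

t = np.arange(rate * 10) / rate               # ten seconds of that single tone
tone = 0.5 * np.sin(2 * np.pi * average_hz * t)
wavfile.write("average_tone.wav", rate, tone.astype(np.float32))
```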

Inflation

Food writer, journalist and activist Jack Monroe wrote an incredibly insightful thread on the latest inflation figures. The news headlines were screaming that the inflation number is 5.4% — a 30-year high. However, this hides the nuance of what exactly has been increasing in price and what has remained static. As usual, the poorest in society bear a disproportionate brunt of the increase. For people that depend on the cheapest goods, inflation is much higher, as the cost of those goods has been increasing at a much higher rate. Her original thread is well worth a read:

It was wonderful to see this thread get so much attention. Today Monroe announced that the Office for National Statistics will be making changes:

Financial data

I’ve been working in Financial Services for over 20 years. During the financial crisis of 2007–2008 I was employed by one of the banks that suffered terrible losses. In the lengthy report that was published to shareholders, it was notable that there was a dependency on a number of metrics such as Value at Risk which were in effect ‘green’ even when the global financial system started to unravel. The actual problem was the sheer amount of toxic financial products that were on the balance sheet; as soon as the assumption of how much they were worth was revised, it triggered eye-watering losses.

From the report:

UBS’s Market Risk framework relies upon VaR and Stress Loss to set and monitor market risks at a portfolio level. [p19]

In the context of the CDO structuring business and Negative Basis and AMPS trades, IB MRC [Market Risk Control] relied primarily upon VaR and Stress limits and monitoring to provide risk control for the CDO desk. As noted above, there were no Operational limits on the CDO Warehouse and throughout 2006 and 2007, there were no notional limits on the retention of unhedged Super Senior positions and AMPS Super Senior positions, or the CDO Warehouse… [p20]

In other words, the amount of ‘good quality’ collateralised debt obligations (CDOs) that could be held on the balance sheet wasn’t subject to a cap. These were the instruments that were later found to be ‘toxic’.

MRC VaR methodologies relied on the AAA rating of the Super Senior positions. The AAA rating determined the relevant product-type time series to be used in calculating VaR. In turn, the product-type time series determined the volatility sensitivities to be applied to Super Senior positions. Until Q3 2007, the 5-year time series had demonstrated very low levels of volatility sensitivities. As a consequence, even unhedged Super Senior positions contributed little to VaR utilisation. [p20]

This means that the model, which produced a ‘green’ status for Value at Risk, was based on the historical data which said that ‘everything is fine’. No consideration seemed to have been taken on the sheer amount of CDOs that were being held. As the financial crisis unfolded and it became clear that the assets were no longer worth 100%, the revaluations resulted in nearly USD 50bn in losses.
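
As a toy illustration of why that happens (this is not UBS’s actual model, and the numbers are invented), here is a historical-simulation Value at Risk calculated from a deliberately calm return series. A placid history makes even a huge unhedged position look almost risk-free.

```python
import numpy as np

np.random.seed(0)
# Five years of invented daily returns with roughly 0.1% volatility, standing
# in for the 'very low volatility' time series the report describes.
calm_history = np.random.normal(loc=0.0, scale=0.001, size=5 * 252)

position = 1_000_000_000  # an illustrative USD 1bn of 'AAA' super senior exposure

# One-day 99% historical-simulation VaR: the loss at the 1st percentile of P&L.
pnl = position * calm_history
var_99 = -np.percentile(pnl, 1)
print(f"99% one-day VaR: USD {var_99:,.0f}")  # only a few million against a billion held
```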

Proxies should be a jumping off point

Proxies are attractive as they often boil down complex things into simple metrics that we think we can interpret and understand. Wherever I see or use them I need to think about what assumptions they are built on and to check that they are not being substituted for the important details.


  1. Taken from the Microsoft Windows Defender ATP Fall Creators Update 
  2. Lots of people on forums seem baffled as to why anyone would want to do that. I love that someone has done it. 

Enabling everyone in hybrid meetings

On Wednesday I attended a long workshop from home. A few of us were dialled in via BlueJeans, while the vast majority of attendees were physically present in the room. It struck me that being a remote participant must be a teeny tiny bit like having a disability; it was difficult to hear, difficult to see and I had to work extra hard to participate. We spent a significant amount of time staring at an empty lectern, hearing voices fade in and out but not seeing anyone on screen.

There’s a big push to ‘crack hybrid’. I know that the technology will inevitably improve to make these meetings better, reducing the friction between being in the room and out of it. But for now, if the meeting is a workshop, or just the kind where you want to democratise participation and involve everyone (as opposed to talking at them webinar or lecture style), then it makes sense to me to have everyone join in the same way.

Elizabeth Stokoe puts it better than me:

As we go back to our offices, the best meetings are going to be those where the organiser has put thought and energy into how they should be configured to meet their goals.

📚 A Seat at the Table

I picked up Mark Schwartz’s A Seat at the Table as I have recently been thinking about how we can move away from the perception of our IT team as the people who ‘turn up and fix the Wi-Fi’ to one where we are seen as true business partners. The book took me by surprise in being less of a self-help manual and more of a well-articulated argument as to why the old ways in which we did things no longer apply in the digital age. It is brilliant.

Schwartz has a way of encapsulating key concepts and arguments in short, smart prose. The book contains the best articulation of the case for Agile, Lean and DevOps that I have read. There is so much wisdom in a single sentence, for example:

What is the value of adhering to a plan that was made at the beginning of a project, when uncertainty was greatest?

One of the books referenced heavily in A Seat at the Table is Lean Enterprise by Jez Humble, Joanne Molesky and Barry O’Reilly which I read some time ago. Lean Enterprise goes into more detail in terms of the concepts and mechanics used in modern software development such as continuous integration, automated testing etc. and brings them together into a coherent whole. Schwartz does not cover these topics in detail but gives just enough information to make his case as to why they are the sensible way forward for developing software.

A company may typically engage their IT department as if they are an external supplier. They haggle and negotiate, they fix scope and cost, and then the work starts. This approach does make some sense for working with a truly external vendor where they are taking on some of the financial risk of overrunning and you are able to specify exactly what you want in detail, for example where physical IT infrastructure is being delivered, installed and configured. It makes little sense when you are creating a new software system. It makes even less sense when the IT team are colleagues in the same organisation, trying to work out what investments will make the biggest impact on the company. We win and lose together.

First of all, we came to speak about “IT and the business” as two separate things, as if IT were an outside contractor. It had to be so: the business was us and IT was them. The arms-length contracting paradigm was amplified, in some companies, by the use of a chargeback model under which IT “charged” business units based on their consumption of IT services. Since it was essentially managing a contractor relationship, the business needed to specify its requirements perfectly and in detail so that it could hold IT to delivering on them, on schedule, completely, with high quality, and within budget. The contractor-control model led, inevitably, to the idea that IT should be delivering “customer service” to the enterprise—you’d certainly expect service with a smile if you were paying so much money to your contractors.

For readers who are familiar with why we use Agile software development methods, the arguments against the old ‘waterfall’ approach are well-known. What is more interesting is that Schwartz also points to issues that advocates of the Agile approach have exacerbated. Agile people can be suspicious of anyone that looks like a manager, and want them to get out of the way so that they can get on with the job. Schwartz argues that the role of managers and leadership is to remove impediments, many of which the Agile team cannot easily deal with on their own:

When the team cannot accomplish objectives, I am forced to conclude that they cannot do it within the given constraints. The team might need members with different skills. It might need permission to try an experiment. It might need the help of another part of the organization. It might need a policy to be waived. But if the task is possible and the team cannot achieve it, then there is a constraining factor. My job is to remove it.

What if someone on the team is really just not performing? Perhaps not putting in his or her share of effort, or being careless, or uncooperative? Well, then, dealing with the problem is simply another example of removing an impediment for the team.

The critical role of middle management, it would seem, is to give delivery teams the tools they need to do their jobs, to participate in problem-solving where the problems to be solved cross the boundaries of delivery teams, to support the delivery teams by making critical tactical decisions that the team is not empowered to make, and to help remove impediments on a day-to-day basis. The critical insight here, I think, is that middle management is a creative role, not a span-of-control role. Middle managers add value by contributing their creativity, skills, and authority to the community effort of delivering IT value.

He makes a clear case for getting rid of ‘project thinking’ completely. If you want a software delivery initiative to stay on budget, the only way to do that is to run it in an Agile way. The team will cost the organisation their run rate, which is almost always known in advance. Work can be stopped at any time, preserving the developments and insights that have been created up to that point.

As a former PMO head, and with my current responsibilities of running a portfolio of change initiatives, it was interesting to see the approach to ‘business cases’ recommended in the book. Instead of signing off on a set of requirements for a particular cost by a certain date, you should be looking to assess the team on what they want to achieve and whether they have the skills, processes and discipline to give you confidence that they will:

  • be effective,
  • manage a robust process for determining the work they will do,
  • make good decisions,
  • seek feedback,
  • continually improve.

Schwartz gives a brilliant example of how difficult it is to articulate the value of something in the IT world, which gave me flashbacks to the hours I have spent wrestling with colleagues over their project business cases:

How much value does a new firewall have? Well … let’s see … the cost of a typical hacker event is X dollars, and it is Y% less likely if we have the firewall. Really? How do we know that it will be the firewall that will block the next intrusion rather than one of our other security controls? How do we know how likely it is that the hackers will be targeting us? For how long will the firewall protect us? Will the value of our assets—that is, the cost of the potential hack—remain steady over time? Or will we have more valuable assets later?

The word ‘requirements’ should go away, but so should the word ‘needs’; if the organisation ‘requires’ or ‘needs’ something, what are the implications for right now when the organisation doesn’t have it? Instead of using these terms, we should be formulating hypotheses about things we can change which will help bring value to the organisation. Things that we can test and get fast feedback on.

Schwartz also argues against product as a metaphor, which was a surprise to me given how prevalent product management is within the industry today:

But the product metaphor, like many others in this book, has outlived its usefulness. We maintain a car to make it continue to function as if it were new. A piece of software, on the other hand, does not require lubrication—it continues to operate the way it always has even if we don’t “maintain” it. What we call maintenance is really making changes to keep up with changes in the business need or technology standards.

Senior IT leaders are ‘stewards’ of three critical ‘assets’ in the organisation:

  1. The Enterprise Architecture asset  — the collection of capabilities that allows the organisation to function, polished and groomed by the IT team.
  2. The IT people asset — ensuring that the organisation has the right skills.
  3. The Data asset — the information contained in the company’s databases, and the company’s ability to use that information.

Much of the book comes back to these three assets to emphasise and elaborate on their meaning, and the work required to “polish and groom” them.

The author makes the case that CIOs should take their seat at the table with the rest of the CxOs through being confident, bold, and simply taking the seat in the same way that the others do. To talk of IT being ‘aligned’ to the business is to imply that IT can be ‘misaligned’, doing its own thing without giving any thought to the rest of the organisation. The CFO, CMO or any other CxO does not need to continually justify their existence and prove their worth to the business, and neither should the CIO. The CIO needs to have deep technology knowledge — deeper than the rest of the people around the table — and bring this knowledge to bear to deliver value for the organisation, owning the outcomes instead of just ‘delivering products’.

It follows that the CIO is the member of the senior leadership team—the team that oversees the entire enterprise—who contributes deep expertise in information technology. I do mean to say deep expertise. Increasingly, everyone in the enterprise knows a lot about technology; the CIO, then, is the person who knows more than everyone else. The CIO should be more technical, not less—that is how he or she contributes to enterprise value creation; otherwise, the role would not be needed.

The age of IT organizations hiding behind requirements—“just tell me what you need”— is gone. IT leaders must instead take ownership, responsibility, and accountability for accomplishing the business’s objectives. The IT leader must have the courage to own outcomes.

IT investments are so central to corporate initiatives that it is hard to make any other investment decisions without first making IT decisions. This last point is interesting, right? Perhaps it suggests that IT governance decisions should be made together with or in advance of other business governance decisions. Instead, in our traditional model, we think first about “business” decisions, and then try to “align” the IT decisions with them. But in our digital world—if we are truly committed to the idea that that’s the world we live in—IT should not follow business decisions but drive them.

CIOs and their staff have an excellent “end-to-end understanding of the business, a discipline and mindset of accomplishing goals, and an inclination toward innovation and change.” They bring a lot to the table.

Schwartz makes a case for the rest of the organisation becoming digitally literate and sophisticated in their use of technology. This may extend to people from all parts of the organisation being able to contribute to the codebase (or “Enterprise Architecture asset”) that is managed by IT. This should be no different to developers on an open source project making changes and submitting a ‘pull request’ to have those changes incorporated into the official codebase. We should embrace it, fostering and harnessing the enthusiasm of our colleagues. We should care less about who is doing the work and more about whether the company’s needs are met.

As much as I enjoyed the book, there were points where I disagreed. Schwartz argues strongly against purchasing off-the-shelf software — ever, it seems — and advocates building things in-house. He makes the point that software developed for the marketplace may not be a good fit for our business and may come with a lot of baggage. My view is that this completely depends on where the software sits in the stack and how commoditised it is. It makes no sense to implement our own TCP/IP stack, for example, nor does it make any sense to develop our own email client. (Nobody ever gained a new customer based on how good their email system was. Probably.) But I do agree that for software that is going to give us a competitive edge, we want to be developing this in-house. I think that something along the lines of a Wardley Map could be useful for thinking about this, where the further along the evolution curve a component is, the less likely it is that Agile in-house development would be the preferred choice:

Overall this book is a fantastic read and will be one I come back to. It’s given me lots to think about as we start to make a case for new ways of working that go beyond the IT department.

Using a LeanKit board to manage risks

LeanKit is my favourite productivity tool. Our team has been using it for the past couple of years to manage our work across a series of Kanban boards. It is super easy to use, and offers a massive amount of flexibility compared to the implementation of Kanban boards in Jira, Trello or Microsoft Planner. You can configure both vertical and horizontal swimlanes, instead of just the vertical columns of tasks that the other tools offer. It is easy to represent your team’s workflow as it feels like the tool is working with you instead of against you.

Recently we have also started to use LeanKit to manage our department’s risks. At our company, we have an Operational Risk framework used across the organisation that looks like this:

The first job is to reproduce this layout in LeanKit so that everyone can relate the board back to the model. The board editor makes this super easy.

Here the typical Kanban setup of ‘to do’, ‘doing’ and ‘done’ is represented by the ‘New Risks’, ‘Current (Residual) Risk — After Mitigation’ and ‘Closed — Ready To Archive’ lanes respectively:

The main section is then broken down into rows for ‘likelihood’, with ‘severity’ columns in each row. The coloured circles in each of the box titles are emojis that are added when editing the title of each box (on Windows use the Windows key plus ‘.’ to bring up the emoji picker, and on a Mac press ‘fn’ or use the Edit > Emoji & Symbols dropdown in the menu bar). We also have a ‘Themes’ section at the top of this lane — more on this later.

In the ‘New Risks’ column we have a space for a template as well as a ‘For review’ section which has been allocated as the drop lane. By default, new risks will go here when we think of them; we then periodically review them as a team and drag them to the appropriate place on the board.

We then need to configure the board. The Board Settings tab can be used to set a title, description, custom URL and specify who gets access:

In this example, I’m not yet ready to let anyone else use it so I have set the default security to ‘No Access’:

In the Card Types section we define three types of card. The main one is ‘Risk’ but we also create ‘Theme’ to group our risks together. We also left ‘Subtask’ as one of the defaults in case someone wants to use the on-card mini-Kanban board to manage the tasks relating to an individual risk. We pick some colours we like, and delete all of the other default types of card:

We also set up a Custom icon so that we can see at a glance which of our risks are mitigated/accepted, which we are working on, and which need our attention.

We ensure that every card has one of these custom icons when we create it. During a review we can then filter the board so that, for example, only the red-starred cards appear.

Next we create the template card. First, we set the Card Header to allow custom header text. With templates, I like to leave the board user with instructions such as ‘Copy me!’:

We then create the template card itself. This goes some way to ensuring that all of the new risks get created in a similar way, with similar information. This card will be put into the ‘Template’ section of the board:

In order to distinguish one risk from another, and report them to wherever they need to go, we want each risk to have a unique identifier. We can now go back to the Card Header in the board’s settings and select ‘Auto-incremented Number’ with a header prefix of ‘Risk ‘. This means that new cards added to the board will be called ‘Risk 1’, ‘Risk 2’, ‘Risk 3’ etc.:

The ‘Risk ‘ prefix does have the effect of changing the name of the template card, but this isn’t too confusing:

We can now start adding risks to the board, and linking them to themes as shown below:

Having a visual representation of our risks in this way is so much better than the usual spreadsheet with one risk per row. It’s allowed us to incorporate risk management much more into our day-to-day work. We can assign owners to each risk, and use all of the rich features of LeanKit such as adding comments, due dates etc.

If we decide a risk needs to be reclassified in terms of its likelihood or severity, we simply drag the card to the new location on the board. The card itself will keep a history of its journey in its audit log. If we absolutely have to submit our risks in a spreadsheet somewhere, we can export the board contents as a CSV file and format it in Excel.
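
As an aside, a few lines of Python can turn that CSV export into a summary table before it goes anywhere near Excel. The column names below are guesses at what the export contains for illustration, not LeanKit’s documented schema.

```python
import pandas as pd

# 'Lane' and 'Title' are assumed column names, used for illustration only.
risks = pd.read_csv("board_export.csv")

# Count how many risk cards sit in each lane (i.e. each likelihood/severity box)
# and write the summary out for anyone who insists on a spreadsheet.
summary = risks.groupby("Lane")["Title"].count().rename("Number of risks")
summary.to_excel("risk_summary.xlsx")
```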

The best thing about managing the risks in this way is that we can link any mitigation work directly to the risks themselves. Where we agree a follow-up action, we create task cards on the appropriate team Kanban boards and then link each of those cards to the risk — the risk card becomes the parent card of the task. In this way, we can see at a glance all of our risks and track the work as it gets completed across the organisation.

The loveliest place on the Internet

A close friend of mine recently asked, out of the blue:

In a word, the answer is yes. For the past few years I’ve been using micro.blog, which I’ve come to think of as the loveliest place on the Internet. It isn’t lovely by chance, it has been deliberately designed that way, and is lovingly nurtured to keep it full of positive vibes across its wonderful community.

On the surface, micro.blog is selling blog hosting. For $5/month, you can sign up and host your blog posts there. The monthly fee makes sure that your site isn’t cluttered with any adverts. You can post using apps for iOS, iPadOS, macOS and Android as well as via the web. Blog posts can be ‘micro’ status updates of 280 characters or less, and if you go over this limit, the official apps and web interface reveal a ‘title’ field to accompany a more traditional blog post. You can syndicate your posts to Twitter, Medium, Mastodon, LinkedIn and Tumblr, with the full text being posted if it can fit in a tweet, and a title and link posted if it is longer. Photo uploads are included; the official apps helpfully strip out any EXIF metadata, such as where the photo was taken, in order to protect your privacy. A $10/month plan gives you the ability to host short videos or even podcasts (‘microcasts’), which you can record using the Wavelength iOS app.
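
That cross-posting rule is simple enough to sketch in a few lines. This is just my reading of the behaviour, not micro.blog’s actual code, and the function name is my own.

```python
TWEET_LIMIT = 280

def cross_post_text(title: str, body: str, url: str) -> str:
    """Decide what goes out to Twitter et al. for a given blog post."""
    if not title and len(body) <= TWEET_LIMIT:
        return body               # a 'micro' status update: send the full text
    return f"{title} {url}"       # a longer, titled post: send title plus link
```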

But this is just where the magic starts. You don’t actually need to host your content on micro.blog. I’ve had my own blog for many years, and have recently started to take my content off of other platforms such as Instagram and Goodreads and host it myself — I want my content on my own platform, not somebody else’s. If you have an existing blog like I do, you can create an account and link it to your existing website via an RSS feed. Any post you write on your own blog then gets posted to your micro.blog account, and syndicated to wherever you want it to go. You can even set up multiple feeds with multiple destinations for cross-posting:

I post at my own blog and micro.blog picks up the feed

Once you have an account set up, you can start to use the main micro.blog interface. Here’s where you will see posts from everyone that you have ‘followed’, whether they have a micro.blog hosted account or syndicate from their own site. It’s a bit like a calmer, happier version of Twitter:

Here’s where some of the thoughtfulness of the design comes in. Discovering people to follow actually takes some effort. You can click on the Discover button to see a set of recent posts from a variety of users, lovingly hand-curated by the micro.blog Community Manager Jean MacDonald. If one of these posts catches your eye, you can click on the username or profile photo and press ‘follow’. That person’s posts will now appear in your timeline. By default you will also see their ‘replies’ to other members of the community. You can click on a button to view the whole conversation thread, which can lead you to explore and discover more people. This effort means that the emphasis is on quality over quantity, with gradual discovery.

You can also discover people by selecting someone’s profile and clicking through to see who they follow that you aren’t following:

One of the best design choices made on the platform is that there are no follower counts. I have no idea how many other users follow me, or anybody else. There are also no ‘likes’, just a button to privately bookmark a post for you to access again easily in the future. Reposts (or ‘retweets’) don’t exist either — if you like a post that you read and want to amplify it, you need to create a new post of your own. Hashtags are also not supported. Again, the emphasis is on quality and not quantity, on conversations instead of engagement metrics. This works to reduce people posting content just to ‘go viral’, and keeps the noise down.

So far, so straightforward. But there’s more magic.

The creator of micro.blog has authored an iOS app called Sunlit, which allows the creation of photo posts on your blog, whether you are hosting your content on micro.blog or externally. You can also see posts from other people that you follow on micro.blog. It’s like viewing the micro.blog content through an Instagram-type lens. The photos you see are from the same timeline that you see in micro.blog. You can comment on and bookmark posts, and switch between the Sunlit app and the main micro.blog apps or web interface. I love an occasional browse through the timeline using this app if I’m in the mood just to see some wonderful photos.

So, this means that all of your content sits on your own blog. No more silos of posts on different platforms.

But here’s the thing that feels most magical to me. If someone reads a post of yours on micro.blog, they can reply to it. But in the spirit of you owning your own content, these replies get posted as comments on your original blog post, even if the blog is hosted on your own independent website. Every time this happens, it blows my mind a tiny bit. Here’s an example — I recently posted about how exhausting the Clubhouse app is, which sparked a few comments and conversations. People responded on micro.blog and these ended up as comments on the post on my site:

It really is a wonderful place to spend some time. If you’re looking for an alternative to the noisy, regularly hostile place that the traditional social media platforms have become, and/or owning your content is important to you, it is well worth checking out. It’s been a source of joy over the past couple of weeks to see a real-life friend regularly posting there, and I know from talking to him that he’s loving it.

UC Today podcast

I really enjoy the UC Today podcast. If you’re involved in administering or working with Microsoft Teams, the latest episode is well worth 20 minutes of your time.

Key points covered in this episode that stuck in my head after listening:

  • Calling plan will now be part of the Microsoft E5 licence (so no additional purchase necessary) everywhere that Microsoft is a telco except the US and Puerto Rico.
  • Teams is getting native integration to WebEx meetings, so you can join WebEx calls from Teams meeting rooms.
  • Microsoft are selling a bolt-on for API access to record calls/meetings. The list price is USD 12/user/month, which seems quite expensive. You will also need a third party tool to record.
  • Other features and enhancements to the user interface including the ‘Together’ mode, which is meant to make calls less tiring.

Keeping a ScanSnap going on macOS Mojave

After trying a few different apps, I switched to ExactScan Pro on macOS given that Fujitsu no longer support my scanner. Expensive, but not as much as buying a whole new piece of hardware to achieve the same result. I’ve only scanned a few thousand pages and it seemed tragic to get rid of it when it was working perfectly well.

The saddest thing was the way the old Fujitsu software died. It still seemed to work, but created PDFs with the pages out of order. I was scratching my head as it was almost like someone had deliberately sabotaged the code so that it wouldn’t work properly.

Debugging images in a WordPress upload via Ulysses

I spent hours trying to debug why Ulysses wouldn’t post my latest weeknotes to WordPress from my iPad. I kept getting an error that the document contained unsupported HEIC-format images but I couldn’t see which, as the image filenames are hidden in the Ulysses sheet. The post took ten minutes or so to upload, and each time I found I had to go to the WordPress media library and delete the images that did upload successfully so that they weren’t duplicated on my next attempt.

Eventually I found a way to view them, by dragging the images one by one from Ulysses into Gladys where the filenames became visible.

It turns out that the root cause was my having dragged-and-dropped enhanced images directly from the lightbox in Camera+ 2. Saving them to Photos first and then dragging them in from there got around the problem.

New host

What started with a Troy Hunt-inspired investigation into how I can enforce HTTPS on my Bitnami/AWS-hosted personal website turned into a full-on migration over to a new web hosting provider. After some initial teething problems related to the fact that my new site would be hosted at andrewdoran.uk and my old site was already at that same address, I managed to get it up and running with minimal hassle. Support from the staff at Siteground was excellent, answering my questions quickly and pointing me to exactly the resources I needed to get going.

Once I had everything in place the migration itself only took a couple of hours. I started with an export and import of my site using the WordPress-provided tools but found that this only transferred the basics — mainly the text. I’ve spent a lot of time over the years tweaking different aspects of the site and didn’t want to go through trying to reassemble it again. The All-in-One WP Migration plugin came to my rescue — this exports pretty much every aspect of a WordPress site including media, plugins, customisations etc. and lets you drag and drop the exported file to its new home. In order to export the data I had to create a new folder on the server and grant write permissions to it, but I didn’t need to make any customisations for the upload to work on the new site. Exporting is free but importing a file of over 512MB means that you need to buy a licence for USD 69 (about GBP 50). My export file was 1.4GB so I had to pay; in my mind this was money well-spent considering the alternative of spending hours making all the customisations, reinstalling plugins and uploading all of my old media again.

Uploading my own static HTML content was as simple as can be, again making reference to the many straightforward Siteground tools and reference pages on how to create a key pair to enable an SFTP connection.

Once the content was uploaded and I’d tested it out I had to make a few tweaks to the variables so that it recognised itself as the canonical andrewdoran.uk and then repointed the DNS entries to the new site. I know that DNS is meant to take up to 48 hours to propagate, but the change seemed almost instant from where I was connecting. I also made a simple change to redirect ‘www’ requests to the non-www equivalent.

Having got the site up and running it was exceedingly easy to use the Siteground-provided tools to not only install a valid Let’s Encrypt SSL certificate but also to get any requests to HTTP pages redirected to the HTTPS equivalent on the site.

As far as migrations go it was exceptionally straightforward. It’s great to know that not only am I now able to serve up site content over HTTPS but also that I don’t need to worry about maintaining the operating system on my web server, as the host will do that for me.

Unintended consequences

Awful to read that Bitcoin Mining Now Consuming More Electricity Than 159 Countries Including Ireland & Most Countries In Africa. This is the tragedy of the commons writ large.

I could be overly-sensitive after having just read The Internet Is Not The Answer, but this just reinforces the point raised in the book that the Internet has led to many terrible unintended consequences. We are literally paying to burn fossil fuel and hasten our own destruction through climate change in the real world in order to gamble on being lucky enough to obtain a slice of currency in the digital one.

I love technology where it brings enjoyment or helps us to be better at what we do. But whenever I read something like this, as I have mentioned here before, Bill Joy’s words are never far from my mind:

Now, as then, we are creators of new technologies and stars of the imagined future, driven – this time by great financial rewards and global competition – despite the clear dangers, hardly evaluating what it may be like to try to live in a world that is the realistic outcome of what we are creating and imagining.

I’m starting to think his article is the most important thing I’ve ever read. It’s certainly something that has shaped my worldview and continues to pop back into my mind again and again.

Spark

Spark is absolutely killing it with these new features. I switched from using Microsoft Outlook as my primary email client a while ago on both macOS and iOS and it does pretty much everything that I need. Recent updates have included Integrations with a whole bunch of third-party applications and massive improvements to search.

I remain concerned that Spark is free, but Readdle seem to have a game plan where it will earn them an income at some point.

Response hierarchy

It was really interesting to read Michael Lopp’s latest blog post showing how likely he is to respond to an incoming communication based on the medium through which it is sent:

…I realized that I had updated the prioritized hierarchy to how likely I will respond to a piece of communication. From least likely to most likely, this is the hierarchy:
Spam < LinkedIn < Facebook < Twitter < Email < Slack < Phone < SMS < Face to Face

This struck a chord with me. A while ago I wrote down a list of all of the electronic inboxes that were playing a part in my life as I needed to take a step back and see it all. Discounting the ones that are both from and to myself (namely my unprocessed Drafts entries and my Evernote inbox), my own response hierarchy today looks something like this:

Spam < Flickr comments < LinkedIn < Voicemail < Facebook Messenger < Blog comments < Personal email < Goodreads/Strava comments < Facebook mentions < School governor email < Work email < Phone < Twitter < Skype for Business < Telegram/WhatsApp < SMS < Face to Face

Maybe I am over-thinking it as the comments and mentions don’t always require a response (although the notifications do nag at me on my phone and I have a lingering guilt about not looking at them as often as I perhaps should). Anyway, let’s remove those:

Spam < LinkedIn < Voicemail < Facebook Messenger < Personal email < School governor email < Work email < Phone < Twitter < Skype for Business < Telegram/WhatsApp < SMS < Face to Face

Lopp’s analysis of each form of communication is interesting. I’m impressed that he manages to get to Inbox Zero every day both at work and home. I get there sometimes, but it isn’t as frequent as I would like.

My hierarchy isn’t always consistent. Voicemails on my mobile from strangers get much less attention than voicemails from people I know, but even then iOS doesn’t do a great job of nagging me about the ones that I have listened to but not actioned. Occasionally I’ll flick across to voicemail and find six or seven that stretch back over the past few months.

I don’t answer the phone to external numbers on my work phone as 95% of the time it is a sales call; unfortunately for those callers I have also removed my work Voicemail so I don’t need to deal with changing the security PIN every month. The value of voicemail is far outweighed by the inconvenience of accessing it — most of the time my missed calls list is sufficient for me to know who to get in contact with. People who really need to contact me in a work context from outside my company will have my email address or mobile number.

Email is fine for business type things but completely broken for ‘proper’ correspondence in that the more important a personal note is to me, the longer I’ll tend to leave it until I find the time to sit down and write a considered, meaningful response. I fully understand that this may be no more email’s fault than it is the fault of the letter-writing paper that also goes untouched in our house. Perhaps the long-form two-way personal communication is dead in the era of instant responses, or only useful when you have a lot to say to the other person and don’t want to be interrupted or get a reply.

We use Skype at work for instant messaging but it is almost completely on a 1:1 basis with barely any shared channels. It feels like a missed opportunity but multiple attempts to get it started have never caught on. Perhaps our company is too small, or we don’t have enough geeks.

Slack doesn’t feature at all as an inbox for me yet — I’m a member of three ‘teams’, none of which are directly linked to my employer. I mainly lurk and therefore don’t get many communications that way.

Twitter used to occupy a giant amount of time but my usage has tailed off significantly over the past couple of years. For a long while it felt like a real community and that I was part of something — I even organised a small handful of well-attended ‘tweetups’ in our town for everyone to meet — but over time I had subconsciously given up trying to keep up and have gone back to reading blogs and books. I get very little direct communication from it and when I do I’m pretty responsive. The main role it plays in my life now is as an aggregation source of interesting things to read via the wonderful Nuzzel app.

It’s interesting to me to write this down as it gives me a realisation of how complicated things are these days and how much of a cognitive burden it is to keep up with it all. It’s no longer sufficient to get to Inbox Zero with my three email accounts and feel that I am ‘done’; all of the others need to be checked and drained as well on a regular basis.

Of project portfolios and an uncertain future

Earlier this week I was fortunate enough to attend the Gartner Project Portfolio Management (PPM) and IT Governance summit. I approached the event with some trepidation—this is my field and it would clearly be useful to be amongst peers and understand what the latest thinking is, but I expected it to be full of vendors voraciously pushing their wares and lots of attendees who were wrestling with nuances of Microsoft Project and getting ‘resources’ to work with the processes they had rolled out. Although there was indeed some of this, 80% of the content over the two days was extremely valuable. Many of the sessions had big overlaps with topics covered in Lean Enterprise, a fantastic book I read earlier this year which has really honed my thinking about the right way to approach IT and product development work within a large organisation.

The final keynote presentation of the day was extremely thought-provoking. Donna Fitzgerald and Robert Handler gave a talk called ‘Gartner Predicts the Future of PPM’ but it was so much more than just a talk about project portfolio management. The key issue is how fast the world is changing around us and how quickly we will need to adapt. Early on, they quoted Ray Kurzweil in saying that:

“Our intuition about the future is linear. But the reality of information technology is exponential, and that makes a profound difference.”

The basic messages that I took away from the presentation were as follows:

  • Technological advances are exponential (think Moore’s Law, Metcalfe’s law etc.)
  • Anything that can be automated will be automated.
  • Things that yesterday we generally believed were impossible to automate are being automated (e.g. self-driving trucks, textual analytics etc.—this table from this paper was reproduced in the Gartner slide deck)
  • Only the very highest-level cerebral work will be left, to be done by good people who are seasoned experts.
  • Applying this to PPM, classic project management will be a generic skill, versatility of skills will be a necessity and the role of the PM will be to enable the team to get things done and shift obstacles out of the way.

As I sat there in the audience I couldn’t help but drift away from the PPM world and think back to an article I read in Wired magazine fifteen years ago. The article had such a profound effect on me that I can still remember exactly where I was as I read it—on a Northern Line tube train, heading to work one morning. It’s called Why The Future Doesn’t Need Us and is by Bill Joy, then Chief Scientist at, and one of the founders of, Sun Microsystems.

My main memory of the article was that humans pursue technological and scientific progress for its own sake, because it is in our nature to explore and discover. Out of this could come unintended consequences such as self-replicating nanotechnology that takes over the world in a horrendous ‘grey goo’ scenario. Over the past fifteen years, this quest for scientific progress has become synonymous in my mind with the quest for continual ‘economic growth’.

On my way home from the conference, with thoughts from the keynote still fresh in my mind, I decided to re-read the article. Two things surprised and struck me when I did so: (1) the first part of the article focused on Joy’s meeting with Kurzweil, so the keynote and the article seemed to have a common thread or root, and (2) the article didn’t just focus on technology but also economic growth:

“Now, as then, we are creators of new technologies and stars of the imagined future, driven – this time by great financial rewards and global competition – despite the clear dangers, hardly evaluating what it may be like to try to live in a world that is the realistic outcome of what we are creating and imagining.”

“I believe we must find alternative outlets for our creative forces, beyond the culture of perpetual economic growth; this growth has largely been a blessing for several hundred years, but it has not brought us unalloyed happiness, and we must now choose between the pursuit of unrestricted and undirected growth through science and technology and the clear accompanying dangers.”

I tweeted that the keynote had got me “worried about our drive to automation and the unemployed masses” and a friend responded, stating that I should read about the so-called ‘lump of labour fallacy’.

I did look into this, and found that it basically says the following line of thinking is fallacious:

  • There is a finite amount of work to be done.
  • Where work is automated, the overall pool of work is decreased, so the people unemployed by automation will not be able to find new jobs.

The reason it is ‘widely accepted’ as a fallacy is that, historically, technological, economic and societal change has created new work to be done, and over time the labour force shifts to acquire the new skills. Think about the industrial revolution: jobs were lost because of the changes, but over time the population developed new skills for new jobs that had previously never existed.

Of the various excellent articles I have read on this, three things make me think that this historically has been a fallacy but may not be in the future:
  1. The speed of technological change. As per the Kurzweil quote at the top of this post, progress is not linear and it is getting faster. The speed of progress means that people may not have time to re-skill within their own lifetime.
  2. If it is true that “Only the very highest-level cerebral work will be left, to be done by good people who are seasoned experts” then how do you become a seasoned expert if there are no lower-level tasks to be done that allow you to learn the ropes? Will your field still be un-automated by the time you get to be a seasoned expert with a couple of decades of experience behind you?
  3. The work to be done may shift and new jobs may be invented, but who is going to do that work? Will it be automated from the get-go?

I think there are genuine reasons to be concerned. Personally, I do not understand how we blindly accept ‘economic growth’ through capitalism as a singular goal that is commonly agreed on as being an aim for a company, a society, a country or humanity. Expanding populations and finite resources surely mean that there are limits to continual ‘growth’. I know that many people much smarter than me must have examined this question and that people can point me towards countless texts where this is considered. What I do understand is that even through something as gigantic as the recent financial crisis we did not come up with anything better than what we have today—even though many great minds were questioning it and reasoning as to where we should go from here—and that while our current configuration is still in place, it is not an option for an organisation to avoid seeking growth in the form of increased revenues and lower costs through technological innovation, automation etc. If you are participating in capitalism and not striving to be the best that you can be then someone else will take your customers and you will be out of business. The pace at which this is happening is accelerating. 50 years ago, the average life expectancy of a Fortune 500 company was 75 years; as of 2014 it was less than 15 years.

I don’t have any conclusions right now. I know that as an individual with a family to look after, a mortgage to pay etc. I am very much an active participant in this process. But as per Bill Joy’s article that I read all those years ago:

“My continuing professional work is on improving the reliability of software. Software is a tool, and as a toolbuilder I must struggle with the uses to which the tools I make are put. I have always believed that making software more reliable, given its many uses, will make the world a safer and better place; if I were to come to believe the opposite, then I would be morally obligated to stop this work. I can now imagine such a day may come.

This all leaves me not angry but at least a bit melancholic. Henceforth, for me, progress will be somewhat bittersweet.”

Final year project

mixlogo

Mat has generously given me some space on his web server to host a copy of the project I completed in the final year of my degree about ten years ago.  It’s called An Implementation of Donald Knuth’s MIX and is a Java applet version of a mythical computer that Knuth wrote about in The Art of Computer Programming Vol 1.  I’ve added the source code to the page which has never been out in the big wide world before.  Hopefully someone will find it interesting!

UPDATE – 29 October 2014: I now have my own web space and am hosting this directly.