Trying to understand how ChatGPT works

I finally got around to reading the Stephen Wolfram essay on What Is ChatGPT Doing … and Why Does It Work? Despite being written in relatively simple terms, the article still pushed the boundaries of my comprehension. Parts of it landed on my brain like an impressionist painting.

Things that stuck out for me:

  • In order to improve the output, a deliberate injection of randomness is required, controlled by a parameter called ‘temperature’, which means that ‘lower-probability’ words sometimes get chosen as text is generated. Without this, the output seems to be “flatter”, “less interesting” and doesn’t “show any creativity”. (There is a small illustrative sketch of this at the end of this list.)
  • Neural networks can be better at complex, ‘human-like’ problems than at simple ones. Doing arithmetic via a neural network-based AI is very difficult, as there is no explicit sequence of operations as you would find in a traditional procedural computer program. Humans can do lots of complicated tasks, but we use computers for calculations because they are better at that type of work than we are. Now that plugins are available for ChatGPT, it can itself ‘use a computer’ in much the same way that we do, offloading this type of traditional computational work.
  • Many times, Wolfram says something along the lines of “we don’t know why this works, it just does”. The whole field of AI using neural networks seems to be trial and error, as the models are too complex for us to fathom and reason about.

Particularly over the past decade, there’ve been many advances in the art of training neural nets. And, yes, it is basically an art. Sometimes—especially in retrospect—one can see at least a glimmer of a “scientific explanation” for something that’s being done. But mostly things have been discovered by trial and error, adding ideas and tricks that have progressively built a significant lore about how to work with neural nets.

  • People do seem to be looking at the output from ChatGPT and then quickly drawing conclusions about where things are headed from a ‘general intelligence’ point of view. As Matt Ballantine puts it, this may be a kind of ‘halo effect’, where we are projecting our hopes and fears onto the technology. However, just because it is good at one type of task — generating text — doesn’t necessarily mean that it is good at other types of tasks. From Wolfram’s essay:

But there’s something potentially confusing about all of this. In the past there were plenty of tasks—including writing essays—that we’ve assumed were somehow “fundamentally too hard” for computers. And now that we see them done by the likes of ChatGPT we tend to suddenly think that computers must have become vastly more powerful—in particular surpassing things they were already basically able to do […]

But this isn’t the right conclusion to draw. Computationally irreducible processes are still computationally irreducible, and are still fundamentally hard for computers—even if computers can readily compute their individual steps. And instead what we should conclude is that tasks—like writing essays—that we humans could do, but we didn’t think computers could do, are actually in some sense computationally easier than we thought.

  • So my last big takeaway is that — maybe — human language is much less complex than we thought it was.
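
To make the ‘temperature’ point above a little more concrete, here is a minimal sketch of how temperature can change which word gets picked from a model’s raw scores. This is my own illustration, not code from Wolfram’s essay or from OpenAI; the tiny vocabulary and the scores are invented.

```python
import math
import random

def sample_next_word(logits, temperature=0.8):
    """Pick the next word from raw model scores, softened by temperature."""
    if temperature <= 0:
        # A temperature of zero means: always take the single most likely word.
        return max(logits, key=logits.get)
    # Divide each score by the temperature, then apply a softmax.
    scaled = {word: score / temperature for word, score in logits.items()}
    top = max(scaled.values())
    exps = {word: math.exp(score - top) for word, score in scaled.items()}
    total = sum(exps.values())
    probs = {word: value / total for word, value in exps.items()}
    # Higher temperature flattens the distribution, so lower-probability
    # words get chosen more often.
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Invented scores for three candidate next words.
logits = {"cat": 3.2, "dog": 2.9, "aardvark": 0.5}
print(sample_next_word(logits, temperature=0.0))  # always "cat"
print(sample_next_word(logits, temperature=1.5))  # occasionally "aardvark"
```

At temperature zero the output is deterministic and, as Wolfram notes, tends to read as flat; raising the temperature lets the ‘lower-probability’ words through.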

Another view on The A.I. Dilemma

Interesting to read Nick Drage’s riposte to The A.I. Dilemma, which I watched a few weeks ago. I agree with his points about the presentation’s lack of citations and extreme interpretations, which, when scrutinised, do the subject a disservice.

The presentation is worth watching just to see what they get away with. And because the benefits and threats of AI are worth considering and adapting to, and especially because the presenters are so right in encouraging us to think about the systemic changes taking place and who is making those changes, but I’m really not sure this presentation helps anyone in that endeavor.

This isn’t to say that the topics raised are not important ones. I’m currently a third of the way through listening to a very long podcast interview between Lex Fridman and Eliezer Yudkowsky on “Dangers of AI and the End of Human Civilization”. Both of them know infinitely more about the topic than I do. It’s very philosophical, questioning whether we’d know if something had become ‘sentient’ in a world where the progress of AIs is gradual in a ‘boiling frogs’ sense. The way they talk about GPT-4 and the emergent properties of transformers in particular makes it sound like even the researchers aren’t fully sure of how these systems work. Which is interesting to me, a complete layperson in this space.

It’s all AI, all the time

All my feeds seem to be full of reflections on the inevitability of the changes that will soon be brought about by artificial intelligence. After spending time thinking about this at length last week, it may be my cognitive biases kicking in, but I’m pretty sure it’s not just me noticing these posts more.

Ton Zijlstra has an interesting view on today’s corporations as ‘slow AI’, and how they are geared to take advantage of digital AI:

…‘Slow AI’ as corporations are context blind, single purpose algorithms. That single purpose being shareholder value. Jeremy Lent (in 2017) made the same point when he dubbed corporations ‘socio-paths with global reach’ and said that the fear of runaway AI was focusing on the wrong thing because “humans have already created a force that is well on its way to devouring both humanity and the earth in just the way they fear. It’s called the Corporation”. Basically our AI overlords are already here: they likely employ you. Of course existing Slow AI is best positioned to adopt its faster young, digital algorithms. It as such can be seen as the first step of the feared iterative path of run-away AI.

Daniel Miessler conceptualises Universal Business Components, a way of looking at and breaking down the knowledge work performed by white-collar staff today:

Companies like Bain, KPMG, and McKinsey will thrive in this world. They’ll send armies of smiling 22-year-olds to come in and talk about “optimizing the work that humans do”, and “making sure they’re working on the fulfilling part of their jobs”.

So, assuming you’re realizing how devastating this is going to be to jobs, which if you’re reading this you probably are—what can we do?

The answer is mostly nothing.

This is coming. Like, immediately. This, combined with SPQA architectures, is going to be the most powerful tool business leaders have ever had.

When I first heard about the open letter published in late March calling on AI labs to pause their research for six months, I immediately assumed it was a ploy by those who wanted to catch up. In some cases, it might have been — but I now feel much more inclined to take the letter and its signatories at face value.

The hallucinations of AI creators

Naomi Klein, writing in The Guardian:

The former Google CEO Eric Schmidt summed up the case when he told the Atlantic that AI’s risks were worth taking, because “If you think about the biggest problems in the world, they are all really hard – climate change, human organizations, and so forth. And so, I always want people to be smarter.”

According to this logic, the failure to “solve” big problems like climate change is due to a deficit of smarts. Never mind that smart people, heavy with PhDs and Nobel prizes, have been telling our governments for decades what needs to happen to get out of this mess: slash our emissions, leave carbon in the ground, tackle the overconsumption of the rich and the underconsumption of the poor because no energy source is free of ecological costs.

The reason this very smart counsel has been ignored is not due to a reading comprehension problem, or because we somehow need machines to do our thinking for us. It’s because doing what the climate crisis demands of us would strand trillions of dollars of fossil fuel assets, while challenging the consumption-based growth model at the heart of our interconnected economies. The climate crisis is not, in fact, a mystery or a riddle we haven’t yet solved due to insufficiently robust data sets. We know what it would take, but it’s not a quick fix – it’s a paradigm shift. Waiting for machines to spit out a more palatable and/or profitable answer is not a cure for this crisis, it’s one more symptom of it.

The whole article is an excellent read. I’d love us to move to a Star Trek-like future where everyone has what they need and the planet isn’t burning. But — being generous to the motives of AI developers and those with a financial interest in their work — there’s an avalanche of wishful thinking that the market will somehow get us there from here.

Increasingly obscured future

I recently watched this video from the Center for Humane Technology. At one point during the presentation, the presenters stop and ask everyone in the audience to join them in taking a deep breath. There is no irony. Nobody laughs. I don’t mind admitting that at that point I wanted to cry.

Back in the year 2000, I can remember exactly where I was when I read Bill Joy’s article in Wired magazine, Why the Future Doesn’t Need Us. I was in my first year of work after I graduated from university, commuting to the office on a Northern Line tube train, totally absorbed in the text. The impact of the article was massive — the issue of Wired that came out two months later contained multiple pages dedicated to emails, letters and faxes that they had received in response:

James G. Callaway, CEO, Capital Unity Network: Just read Joy’s warning in Wired – went up and kissed my kids while they were sleeping.

The essay even has its own Wikipedia page. The article has been with me ever since, and I keep coming back to it. The AI Dilemma video made me go back and read it once again.

OpenAI released ChatGPT at the end of last year. I have never known a technology to move so quickly to being the focus of everyone’s attention. It pops up in meetings, on podcasts, in town hall addresses, in webinars, in email newsletters, in the corridor. It’s everywhere. ‘ChatGPT’ has already become a generic shorthand for large language models (LLMs) as a whole — artificial intelligence models designed to understand and generate natural language text. As shown in the video, it is the fastest-growing consumer application in history. A few months later, Microsoft announced Copilot, an integration of the OpenAI technology into the Microsoft 365 ecosystem. At work, we watched the preview video with our eyes turning into saucers and our jaws on the floor.

Every day I seem to read about new AI-powered tools. You can use plain language to develop Excel spreadsheet formulas. You can accelerate your writing and editing. The race is on to work out how we can use the technology. The feeling is that we have to do it — and have to try to do it before everybody else does — so that we can gain some competitive advantage. It is so compelling. I’m already out of breath. But something doesn’t feel right.

My dad left school at 15. But his lack of further education was made up for by his fascination with the world. His interests were infectious. As a child I used to love it when we sat down in front of the TV together, hearing what he had to say as we watched. Alongside David Attenborough documentaries on the natural world and our shared love of music through Top of the Pops, one of our favourite shows was Tomorrow’s World. It was fascinating. I have vivid memories of sitting there, finding out about compact discs and learning about how information could be sent down fibre optic cables. I was lucky to be born in the mid-1970s, at just the right time to benefit from the BBC Computer Literacy Project, which sparked my interest in computers. When I left school in the mid-1990s, I couldn’t believe my luck that the Internet and World Wide Web had turned up as I was about to start my adult life. Getting online and connecting with other people blew my mind. In 1995 I turned 18 and felt I needed to take some time off before going to university. I landed on my feet with a temporary job at a telecommunications company, being paid to learn HTML and to develop one of the first intranet sites. Every day brought something new. I was in my element. Technology has always been exciting to me.

Watching The AI Dilemma gave me the complete opposite feeling to those evenings I spent watching Tomorrow’s World with my dad. As I took the deep breaths along with the presenters, I couldn’t help but think about my two teenage boys and what the world is going to look like for them. I wonder if I am becoming a luddite in my old age. I don’t know; maybe. But for the first time I do feel like an old man, with the world changing around me in ways I don’t understand, and an overwhelming desire to ask it to slow down a bit.

Perhaps it is always hard to see the bigger impact while you are in the vortex of a change. Failing to understand the consequences of our inventions while we are in the rapture of discovery and innovation seems to be a common fault of scientists and technologists; we have long been driven by the overarching desire to know that is the nature of science’s quest, not stopping to notice that the progress to newer and more powerful technologies can take on a life of its own. —[Bill Joy, Why The Future Doesn’t Need Us]

I’ve had conversations about the dangers of these new tools with colleagues and friends who work in technology. My initial assessment of the threat posed to an organisation was that it carries the same risks as any other way in which confidential data can accidentally leak onto the Internet. Company staff shouldn’t be copying and pasting swathes of internal text or source code into a random web tool, e.g. asking the system for improvements to what they have written, as they would effectively be giving the information away to the tool’s service provider, and potentially to anyone else who uses that tool in the future. This alone is a difficult problem to solve. For example, most people do not understand that email isn’t a guaranteed safe and secure mechanism for sending sensitive data. Even if they do think about this, their need to get a thing done can outweigh any security concerns. Those of us with a ‘geek mindset’ who believe we are good at critiquing new technologies, treading carefully and pointing out the flaws are going to be completely outnumbered by those who rush in and start embracing the new tools without a care in the world.

The AI Dilemma has made me realise that I’ve not been thinking hard enough. The downside risks are much, much greater. Even if we do not think that there will soon be a super intelligent, self-learning, self-replicating machine coming after us, we are already in an era where we can no longer trust anything we see or hear. Any security that relies on voice matching should now be considered to be broken. Photographs and videos can’t be trusted. People have tools that can give them any answer, good or bad, for what they want to achieve, with no simple or easy way for a responsible company to filter the responses. We are giving children the ability to get advice from these anthropomorphised systems, without checking how the systems are guiding them. The implications for society are profound.

Joy’s article was concerned with three emerging threats — robotics, genetic engineering and nanotech. Re-reading the article in 2023, I think that ‘robotics’ is shorthand for ‘robotics and AI’.

The 21st-century technologies—genetics, nanotechnology, and robotics (GNR)—are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them. —[Bill Joy, Why The Future Doesn’t Need Us]

The video gives us guidance in the form of “3 Rules of Technology”:

  1. When you invent a new technology, you uncover a new class of responsibilities [— think about the need to have laws on ‘the right to be forgotten’ now that all of our histories can be surfaced via search engines; the need for this law was much less pronounced before we were all online]
  2. If the tech confers power, it starts a race [— look at how Microsoft, Google et al have been getting their AI chatbot products out into the world following the release of ChatGPT, without worrying too much about whether they are ready or not]
  3. If you do not coordinate, the race ends in tragedy.

It feels like the desire to be the first to harness the power and wealth from utilising these new tools is completely dominating any calls for caution.

In his article, Joy recalls the documentary The Day After Trinity, in which Freeman Dyson summarized the scientific attitudes that brought us to the nuclear precipice:

“I have felt it myself. The glitter of nuclear weapons. It is irresistible if you come to them as a scientist. To feel it’s there in your hands, to release this energy that fuels the stars, to let it do your bidding. To perform these miracles, to lift a million tons of rock into the sky. It is something that gives people an illusion of illimitable power, and it is, in some ways, responsible for all our troubles—this, what you might call technical arrogance, that overcomes people when they see what they can do with their minds.” —[Bill Joy, Why The Future Doesn’t Need Us]

Over the years, what has stuck in my mind the most from Joy’s article is how the desire to experiment and find out can override all caution:

We know that in preparing this first atomic test the physicists proceeded despite a large number of possible dangers. They were initially worried, based on a calculation by Edward Teller, that an atomic explosion might set fire to the atmosphere. A revised calculation reduced the danger of destroying the world to a three-in-a-million chance. (Teller says he was later able to dismiss the prospect of atmospheric ignition entirely.) Oppenheimer, though, was sufficiently concerned about the result of Trinity that he arranged for a possible evacuation of the southwest part of the state of New Mexico. —[Bill Joy, Why The Future Doesn’t Need Us]

There is some hope. We managed to limit the proliferation of nuclear weapons to a handful of countries. But developing a nuclear weapon is a logistically difficult process. Taking powerful software and putting it out in the world — not so much.

The new Pandora’s boxes of genetics, nanotechnology, and robotics are almost open, yet we seem hardly to have noticed. Ideas can’t be put back in a box; unlike uranium or plutonium, they don’t need to be mined and refined, and they can be freely copied. Once they are out, they are out. —[Bill Joy, Why The Future Doesn’t Need Us]

The future seems increasingly obscured to me, with so much uncertainty. As the progress of these technologies accelerates, I feel less and less sure of what is just around the corner.

📚 Everything You Need to Know About the Menopause

There are some things that happen in life that people don’t talk about, despite the commonality of the experience. Recently, a group of my online friends started discussing their, and their partners’, experience of the menopause. One person shared with the group, and all of a sudden the stories came pouring out. I knew the basics, but I didn’t realise how much of a difficult — and sometimes devastating — experience it could be.

My wife and I are both 45 so it felt like a good time to learn a lot more about it. Kate Muir’s book, Everything You Need to Know About the Menopause (but were too afraid to ask) is an excellent place to start.

The key points I took from the book were:

  • Dealing with the effects of the menopause over a long period of time is a relatively recent phenomenon. In the Victorian era in the UK, people used to die at the average age of 59. With average life expectancy now extended by thirty years, women have to live in a post-menopausal state for much longer.
  • There is nowhere near enough education about the menopause. We learn about puberty at school but not about what happens to half of the population in later life. Given how reluctant people are to talk about it, access to information can be difficult.

The divide between those who have menopause support and knowledge and those left to suffer is massive.

  • More worryingly, the lack of education also extends to the medical profession. The book contains horrific stories of undiagnosed and misdiagnosed patients, including the case of one woman ultimately being given electroshock therapy after being diagnosed with ‘treatment-resistant depression’. It turned out that her symptoms were caused by hormone deficiency:

Although the menopause will happen to every woman in the world, and has massive health consequences, according to a Menopause Support investigation, 41 per cent of UK medical schools do not give mandatory menopause education.

… in one study of around 3,000 British menopausal women, after complaining of the onset of low mood or anxiety, 66 per cent were offered antidepressants by their doctor instead of hormones.

  • Some good news is that there is freely-accessible information out there for medical professionals, for example this 90-minute video from Dr Louise Newson on assessing perimenopausal and menopausal women, and safely prescribing HRT during remote consultations:

  • Menopause leads to other major health issues — osteoporosis (brittle and fragile bones), Alzheimer’s (dementia) and heart disease. There are some things you can do to combat a reduction in bone density, such as high-impact exercise, but on their own they are not as effective as when they are combined with Hormone Replacement Therapy (HRT). Using body-identical transdermal estrogen after the age of 50 halves a woman’s chances of breaking a hip and reduces her chances of having a heart attack.
  • A Women’s Health Initiative study in 2002 made people extremely wary of HRT. It turns out that there are different types of treatment; compounded ‘bioidentical’ tablets are awful as there is no reliable way to know what they contain, whereas body-identical hormone cream does not carry the same risks:

We need to question the conventional wisdom, which says that HRT causes breast cancer and that the risks of taking HRT outweigh the benefits. What most people – including me, until I began my investigation – think they know about HRT is wrong on two counts: every form of HRT is not the same, and the terrifying cancer-scare headlines which erupted with the Women’s Health Initiative Study back in 2002 refer to the older, synthetic forms of HRT that have now been superseded by completely different products.

The bad news: In the general population, 23 cases of breast cancer will be diagnosed per 1,000 women. If women take the old, synthetic HRT, an additional 4 cases appear. If women drink a large glass of wine every day, an additional 5 cases appear. If women are obese (BMI over 30), an additional 24 cases appear. The good news: If women take 2.5 hours of moderate exercise per week, 7 cases disappear. If women take estrogen-only HRT, 4 cases disappear.

  • The experience of the menopause is yet another burden for women that can hold them back in their careers. It typically turns up at a time when they already have a lot on their plates, trying to sustain a career whilst dealing with moody teenagers and ageing parents. Hot flushes can be debilitating. Thanks to reports on COVID-19 we have heard a lot about ‘brain fog’; unfortunately this is another symptom of the menopause:

When scientists ask menopausal women about their symptoms, 80 per cent report hot flushes, 77 per cent report joint pain, and 60 per cent memory issues. Aside from these three, further plagues of the menopause include: heart palpitations, sleeplessness, anxiety, depression, headaches, panic attacks, exhaustion, irritability, muscle pain, night sweats, loss of libido, vaginal dryness, body odour, brittle nails, dry mouth, digestive problems, gum disease, dry skin, hair loss, poor concentration, weight gain, dizzy spells, stress incontinence – and last but not least, something that might be from a horror movie: formication, which means an itchy feeling under the skin, like ants. I had that. Quite simply, the majority of women battle through the menopause, and only a lucky few are symptom-free.

  • Suicide is at its highest for women aged 45–49, and at its second highest in the 50–54 age group.
  • Some women have to deal with menopause much earlier in their lives than they would otherwise expect. Early onset menopause, and medical menopause (i.e. following a medical procedure), can both be extremely traumatic. One in 40 women experience the menopause before they turn 40.
  • Women actually produce more testosterone than estrogen. According to menopause experts, testosterone is an essential hormone that should be replaced and yet it is not officially prescribed ‘on licence’ on the UK National Health Service as part of HRT. It shouldn’t be considered a ‘lifestyle drug’ just used to enhance a person’s libido, but “a life-saving hormone that will preserve [women’s] brains, bodies and long-term health.” It enhances “cognition, muscle, mood, bone density and energy.”
  • There is a ‘window of opportunity’ at the start of the menopause to begin estrogen replacement which reduces the chances of dementia and Alzheimer’s.
  • However, there is promising and growing research on older women starting HRT a decade or more after the menopause.
  • There is a small group of oncologists looking at prescribing HRT to breast cancer survivors who have made a good recovery, used in conjunction with anti-cancer drugs such as tamoxifen. It may be that in some cases, the quality of a person’s life post-menopause outweighs the risks.

The book is a must-read. It has increased my knowledge from next-to-nothing to a broad, general understanding of something that half of the people around me will go through at some point in their lives. I’ve bought a second copy to be left in our book-swap rack at my office.

Exercise or sleep?

It was a struggle today. I only managed just over four hours’ sleep last night. I was up very early this morning in order to fit my indoor bike trainer session in before an early work meeting.

I’m very surprised it was as high as 67%!

I had planned to go to bed earlier, but we have a young teenager who has just moved into the ‘not tired at night’ phase and it doesn’t yet feel right to leave him to shut down the house while we ascend the wooden hill.

Due to the lockdown I am missing the hour of walking that used to be part of my daily commute to and from the office, so since March I’ve been prioritising exercise on most days. I enjoy exercise for its own sake, but it’s also motivating that there are numerous articles about how desk-based jobs are literally killing us:

Both the total volume of sedentary time and its accrual in prolonged, uninterrupted bouts are associated with all-cause mortality, suggesting that physical activity guidelines should target reducing and interrupting sedentary time to reduce risk for death.

But…other research says that lack of sleep may lead to Alzheimer’s disease in later life. I remember hearing how Margaret Thatcher got by on four hours’ sleep a night, making her the “best informed person in the room” according to her biographer; she suffered from dementia in her final years.

A wise man who once worked with me said “you can’t cheat the body”, and he’s right. But given the choice between exercising and sleeping, what’s the right balance to strike?

Are you still not going out?

Friends and family think I’m at best over-cautious, or at worst ridiculous. They don’t say it to me directly, but I sense it.

Most people I know seem to have returned to some kind of normality. Getting together indoors, going to pubs and restaurants, eating out, sharing trips in cars. These things crept back in gradually. People are fed up with keeping away from others and so badly want it all to be over. We stopped hearing about the people catching it, going to hospital with it, dying from it. It feels like the risks abated, and behaviour changed day by day.

Because I am not joining in, and continue to avoid any unnecessary face-to-face contact, I’m now very much an outlier. “Are you still not going out, Andrew?” “Life has to go on.”

I question my attitude all the time. I get drawn in. Perhaps I am being over-cautious, and need to get back to being social again. I’m certainly missing human contact and having any kind of a social life. But then I read a horror story about the long-term problems that some COVID survivors are trying to cope with, and it just reinforces my desire to keep away from everyone. It’s as if there is one version of events out there in the real world, and then people I know are gaslighting me.

COVID-19 has not been with us for very long, and every day there seem to be new stories about possible impacts on the human body, or new developments such as being able to catch the virus more than once. Even if the long-term impacts are mild, I am happy to make sacrifices to avoid them. From the New York Times:

In meetings, “I can’t find words,” said Mr. Reagan, who has now taken a leave. “I feel like I sound like an idiot.”

I remember one December when I had to run a workshop after a big night out of festive drinking. My hangover manifested itself as an inability to string sentences together properly. Something had altered in my brain, albeit temporarily, and it was torture. As I spoke, it was as though I had a separate inner dialogue that was asking me “Where is this sentence going?”, and I didn’t know. The thought of being stuck like that permanently fills me with dread.

The film Awakenings (1990) with Robert De Niro and Robin Williams has always fascinated me. Based on a book by neurologist Oliver Sacks, it depicts people who had become victims of the encephalitis lethargica epidemic of the 1920s. From Wikipedia:

The disease attacks the brain, leaving some victims in a statue-like condition, speechless and motionless. Between 1915 and 1926, an epidemic of encephalitis lethargica spread around the world. Nearly five million people were affected, a third of whom died in the acute stages. Many of those who survived never returned to their pre-morbid vigour.

The book and/or the film draws a link between the influenza pandemic of 1918 and the subsequent encephalitis lethargica pandemic that followed. My understanding is that there is no irrefutable evidence that the first pandemic caused the second one, but this continues to be the subject of scientific debate.

Curious, I searched the web for “encephalitis lethargica” and “COVID” and found that (of course) I am not the only one to be thinking about this. Some examples:

US National Library of Medicine: From encephalitis lethargica to COVID-19: Is there another epidemic ahead?

The above characteristics can be indicative of the ability of coronaviruses to produce persistent neurological lesions. Acute COVID-19-related encephalitis, along with the potentially long-term worrying consequences of the disease, underscore the need for clinicians to pay attention to the suspected cases of encephalitis in this regard.

The Lancet: COVID-19: can we learn from encephalitis lethargica?

We should take advantage of both historical and novel evidence. The prevalence of anosmia, combined with the neuroinvasive properties of coronaviruses, might support neuroinvasion by SARS-CoV-2. Whether the infection might trigger neurodegeneration, starting in the olfactory bulb, in predisposed patients is unknown. We should not underestimate the potential long-term neurological sequelae of this novel coronavirus.

NHS University College London Hospitals: Increase in delirium, rare brain inflammation and stroke linked to COVID-19

“We should be vigilant and look out for these complications in people who have had COVID-19. Whether we will see an epidemic on a large scale of brain damage linked to the pandemic – perhaps similar to the encephalitis lethargica outbreak in the 1920s and 1930s after the 1918 influenza pandemic – remains to be seen.”

The Conversation: How coronavirus affects the brain

Encephalitis and sleeping sickness had been linked to previous influenza outbreaks between the 1580s to 1890s. But the 20th-century epidemic of encephalitis lethargica started in 1915, before the influenza pandemic, and continued into the 1930s, so a direct link between the two has remained difficult to prove.

In those who died, postmortems revealed a pattern of inflammation in the seat of the brain (known as the brainstem). Some patients who had damage to areas of the brain involved in movement were locked in their bodies, unable to move for decades (post-encephalitic Parkinsonism), and were only “awakened” by treatment with L-Dopa (a chemical that naturally occurs in the body) by Oliver Sacks in the 1960s. It is too early to tell if we will see a similar outbreak associated with the COVID-19 pandemic, though early reports of encephalitis in COVID-19 have shown features similar to those in encephalitis lethargica.

The aftermath of this global event has many lessons for us now in the time of COVID-19. One, of course, is that we may see widespread brain damage following this viral pandemic.

I’m not sure when I’ll be at the stage where I feel comfortable visiting friends at their houses, sharing car journeys, or meeting up in pubs or restaurants. I doubt that there is a rigorous logical set of conditions that would need to be specifically met before I start doing those things again. I’ll know it when I feel it. Perhaps this stuff is just different for everyone based on their perception of risk versus their need to socialise to maintain a quality of life and good mental health. Perhaps part of it is that I am lucky to have a job that I can do from home so my need to venture out is minimal. Perhaps my interest in politics over the past few years has made me much more deeply distrustful of our government and their response to the pandemic than many other people. Eight months in, the novelty of being at home all the time has worn off, but I’m still ok to keep hunkering down for now.

It’s all relative

On my walk with the boys on Monday night we had a great ‘ramble chat’ that covered a vast range of topics. It was so lovely to hear them ask questions and respond to each other on how they interpreted the world. We got to talking about my work and I told them that my team had been tackling a problem of helping staff in different cities to have a faster connection to each other, but that there is a natural limit to how fast this can be. We talked about computer networks, the speed of light, and relativity. I gave them my understanding that time isn’t a thing that just exists on its own; it is related to space, and that time is perceived to be (or is?) slower for things that move faster. We watch a lot of Star Trek together and my youngest boy pointed out the connection where various fictional spaceships have ‘slingshotted’ around the Sun in order to gain enough speed to travel through time. I told them about the experiments where very accurate clocks were flown around in aircraft and got out of sync with the same clocks on the ground.

Back in the world of fact, it got me thinking about how much relativity really has an impact on everyday life. I wondered if going on lots of international business trips kept you younger, for example.

I had read before that GPS satellites have had to be designed to take relativity into account:

…the relativistic offset in the rates of the satellite clocks is so large that, if left uncompensated, it would cause navigational errors that accumulate faster than 10 km per day! GPS accounts for relativity by electronically adjusting the rates of the satellite clocks, and by building mathematical corrections into the computer chips which solve for the user’s location. Without the proper application of relativity, GPS would fail in its navigational functions within about 2 minutes.
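
Out of curiosity, here is a rough back-of-envelope check of that “10 km per day” figure. It is my own sketch using approximate textbook values for the Earth and the GPS orbit, not the real system’s engineering parameters.

```python
import math

c = 299_792_458.0            # speed of light, m/s
G = 6.674e-11                # gravitational constant
M = 5.972e24                 # mass of the Earth, kg
r_earth = 6.371e6            # radius of the Earth, m
r_sat = r_earth + 20.2e6     # GPS orbital radius (~20,200 km altitude), m
v_sat = math.sqrt(G * M / r_sat)   # orbital speed, roughly 3.9 km/s
day = 86_400                 # seconds in a day

# Special relativity: the moving satellite clock runs slow by ~v^2 / (2c^2).
sr_per_day = -(v_sat**2 / (2 * c**2)) * day

# General relativity: the clock higher up in Earth's gravity runs fast.
gr_per_day = (G * M / c**2) * (1 / r_earth - 1 / r_sat) * day

net_per_day = sr_per_day + gr_per_day   # roughly +38 microseconds per day
range_error = net_per_day * c           # roughly 11-12 km per day if uncorrected

print(f"net clock offset: {net_per_day * 1e6:.0f} microseconds per day")
print(f"equivalent range error: {range_error / 1000:.0f} km per day")
```

The two effects pull in opposite directions and the gravitational one wins; left uncorrected, that microseconds-per-day drift multiplied by the speed of light works out to roughly 10 km of position error per day, as the quote says.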

So much for satellites that are moving at 14,000 km/h in orbits 20,000 km above the Earth. What about people?

This article explains the impact on someone who travels around a lot at high speed relative to the rest of us:

For this example we will look at an airline pilot. For simplicity let’s say that our pilot spends his or her whole career on the Atlantic route, flying (on average) 25 hours a week for 40 years at an average speed of 550 mph (880 km/h). This is undoubtedly a lot of “high” speed travelling but how much time will our pilot “save” due to time dilation?

… in a lifetime of flying our airline pilot saves a total of 0.000056 seconds as compared to an external observer.
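
The same low-speed approximation of time dilation reproduces that figure. This is my own arithmetic, and the number of working weeks per year is an assumption on my part, so treat it as a ballpark check rather than the article’s exact calculation.

```python
c = 299_792_458.0               # speed of light, m/s
v = 880_000 / 3600              # 880 km/h expressed in m/s (~244 m/s)

hours_per_week = 25
weeks_per_year = 47             # assumption: allows for holidays and leave
years = 40
flying_seconds = hours_per_week * weeks_per_year * years * 3600

# Low-speed approximation of time dilation: delta_t ~= t * v^2 / (2 * c^2)
saved = flying_seconds * v**2 / (2 * c**2)
print(f"time 'saved' over a career: {saved * 1e6:.1f} microseconds")
# -> roughly 56 microseconds
```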

Not much to be concerned about, but that number of 0.000056 seconds (56 microseconds) still seems significant, in that there are real-world durations of roughly that length. It’s about the same as the cycle time of the highest human-audible tone (a 20 kHz tone has a 50-microsecond cycle), or the read-access latency of a modern solid-state drive holding non-volatile data.