Reposted from Security Management Magazine
As COVID-19 approaches its second anniversary, security and risk management professionals would be wise to reflect on past failures in hopes of making our society antifragile to future global calamities.
Antifragility is the property of a system or organization that gains strength and resilience as it encounters shocks, stressors, or volatility. The term was coined by Nassim Taleb, a scholar and former trader who raised the alarm on COVID-19 months before it devastated North America.
To enable the security industry to better prepare for future events, we need to fundamentally change the way we think about risk, and Taleb has more lessons to share.
Before 2008, Nassim Nicholas Taleb was known within finance and statistics circles as an expert (a title he would likely bemoan) in risk, uncertainty, and probability. He made a fortune as an institutional trader during the market crash of 1987 and later added to his wealth by independently using financial derivatives to bet against the markets in the run-up to both the dotcom bust of 2000-2001 and the 2007-2008 housing and financial crisis. In 2001, he wrote Fooled by Randomness, part one of his five-volume philosophical essay on uncertainty—Incerto. Taleb would go on to write The Black Swan (2007), The Bed of Procrustes (2010), Antifragile (2012), and Skin in the Game (2018).
The Black Swan catapulted Taleb to international stardom, mostly resulting from his accurate prediction of the ensuing 2008 financial collapse. While the book offers a complex examination of epistemology, the failures of human cognition, and the inability to adequately measure and react to risk, its underlying concept is that most events in daily life are common, easily anticipated, and inconsequential. Nevertheless, certain extreme events (black swans) have major impacts on our world, often transforming the way we think about everything afterward.
The original use of the black swan concept significantly predates Taleb and describes events that were incredibly rare. Semantically, the black swan metaphor has its origins in the idea of empirical falsifiability: if someone were to claim that “only white swans exist,” one would need either to examine every swan in the world (an impossible task) or to find a single black swan to falsify the statement.
Taleb expanded the metaphor: a black swan event is one that has massive consequences and is unpredictable, yet once it occurs, it suddenly seems retroactively explainable or predictable. Taleb’s theory helps explain how such rare events slip under the radar of even the greatest experts and most powerful governments. It also describes how psychological biases tend to blind people in their assessment of risk.
Some notable examples of black swan events—according to Taleb—are the 1987 market crash, the Russian financial default of 1998, the dotcom bubble of 2001, and the 9/11 terrorist attacks.
Is COVID-19 a Black Swan?
Notice that COVID-19 was not listed with the other black swan examples above. On talk shows and webinars over the past two years, Taleb has repeatedly corrected hosts who call the pandemic a black swan event. Instead, he says, COVID-19 is a white swan.
While the COVID-19 outbreak was rare and continues to have massive consequences, the world has witnessed dozens of deadly pandemics over the past centuries. More importantly, the next large pandemic had been widely predicted by various experts and scholars. For instance, the tennis championships at Wimbledon paid $1.9 million per year for pandemic insurance over the past 17 years, which paid out $141 million in 2020, Forbes reported.
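The economics of that insurance bet are easy to check. A minimal sketch using the premium, payout, and 17-year figures reported in the Forbes account cited above:

```python
# Wimbledon's pandemic insurance, per the figures Forbes reported.
annual_premium = 1.9e6   # dollars per year
years_paid = 17
payout_2020 = 141e6      # dollars

total_premiums = annual_premium * years_paid
print(f"Total premiums paid: ${total_premiums / 1e6:.1f}M")   # $32.3M
print(f"Payout multiple: {payout_2020 / total_premiums:.1f}x")
```

Seventeen years of premiums came to about $32.3 million, so the 2020 payout returned roughly 4.4 times everything ever paid in, illustrating how cheap insurance against a predictable "white swan" can be relative to its eventual cost.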
Despite COVID-19’s ineligibility for the black swan club, Taleb’s work is still incredibly applicable. Nearly two years after the emergence of COVID-19, the quality of political and public discourse still sows confusion and doubt about which policy measures are right or wrong, trivial or consequential. Instead of trying to patch pre-existing problems with the systems currently in place, we need to make our systems not merely robust but antifragile in the face of future shocks.
An antifragile system is better off after a shock. The good news is that the world today is far more robust and prepared for future pandemics than it was one or two years ago. Some industries are also much stronger today as a result of the pandemic. Healthcare has become more antifragile by investing in supply chain management and home delivery services, and by redesigning just-in-time infrastructure to hold more inventory. These vulnerabilities had been a huge risk management blind spot prior to COVID-19, overlooked largely because of the commercial savings associated with running lean.
The bad news is that there are still too many unknowns around economics and health to understand how the current pandemic will affect the ability to deal with future non-pandemic shocks. To better prepare for the shock to global norms, it behooves security professionals to go beyond black swans and discern some of Taleb's other valuable lessons.
The global effects of COVID-19 were preventable. Had countries shut everything down in January 2020, even for a limited time, the virus would not have spread the way it did. Subsidizing the financial loss of airlines back in January 2020 would have paled in comparison to the trillions lost around the world in subsequent months. In a mid-2020 interview with Bloomberg, Taleb said of governments and corporations: “They didn’t want to spend pennies, now they need to spend trillions.” In early 2020, many experts argued that there was little to no evidence that COVID-19 was harmful, but Taleb’s work reminds us that there is a difference between absence of evidence and evidence of absence.
Prepare for the worst. Reacting early to something that turns out to be nothing is far less costly than reacting late to something that ends up being significant. According to Taleb, epidemics have some of the fattest tails of any type of event; in non-statistical parlance, this means that most epidemics will have a negligible effect on the overall human population, while a select few (say, one every 100 years) will become pandemics, have catastrophic effects, and be several orders of magnitude more powerful than any preceding epidemic.
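Fat-tailed behavior of this kind is easy to demonstrate. The sketch below uses illustrative numbers only (not epidemiological data): it draws hypothetical event sizes from a Pareto distribution, a standard model for fat tails, and shows that the mean dwarfs the median while a handful of extreme events dominate the total impact.

```python
import random

random.seed(42)

# Draw 100,000 hypothetical "epidemic sizes" from a fat-tailed
# Pareto distribution (shape alpha near 1 means a very fat tail).
alpha = 1.1
sizes = sorted(random.paretovariate(alpha) for _ in range(100_000))

median = sizes[len(sizes) // 2]
mean = sum(sizes) / len(sizes)
top_share = sum(sizes[-10:]) / sum(sizes)  # share of the 10 largest events

print(f"median event size: {median:.1f}")
print(f"mean event size:   {mean:.1f}")   # far above the median
print(f"share of total impact from the top 10 events: {top_share:.0%}")
```

In a thin-tailed world the mean and median sit close together; here the typical (median) event is tiny while a few outliers carry most of the impact, which is exactly why planning around the "average" epidemic fails.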
Respond early. The idea of tradeoffs between harms caused to the economy due to lockdowns and deaths caused by COVID-19 is fallacious when talking about implementing lockdowns early; the earlier one reacts to the pandemic, the less overall damage the economy will suffer because of future policy decisions. Once countries fail to act early, tradeoffs become much more important, especially when infection and mortality rates become clearer.
Gear up. Early adoption of face masks (even non-medical grade cloth masks)—when implemented by many people—has a multiplicative effect on reducing transmission. This stems from the nonlinearity of viral load: a 50 percent reduction in transmitted viral load from wearing a mask can easily lead to a 99 percent reduction in infection, because a certain threshold of viral load is needed to cause an infection, as Taleb noted in a subsequent Bloomberg interview. This is why it was a dangerous policy for some governments and NGOs to originally mislead the public by stating that face masks didn’t protect against COVID-19, even if such pronouncements were made to reduce N95 mask hoarding and hospital shortages, he said.
If more medical professionals had been incapacitated by COVID-19 due to increased PPE shortages, this would have placed additional strain on a system already overwhelmed by the virus. However, much of this hypothetical strain could have been avoided through earlier adoption of masking policies. According to Taleb, it would have been far more effective to have everyone wear masks and shut down the economy to a lesser extent, focusing instead on super-spreader events and vulnerable groups.
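The threshold nonlinearity behind these figures can be reproduced with a toy dose-response model. In the sketch below the parameters are illustrative, not epidemiological: exposure dose is lognormally distributed and infection occurs only when the dose exceeds a fixed threshold, so halving the transmitted dose cuts infections by far more than half.

```python
from math import e, erf, log, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def infection_prob(dose_scale: float, threshold: float,
                   mu: float = 0.0, sigma: float = 0.5) -> float:
    """P(infection) = P(dose_scale * D > threshold) for lognormal dose D."""
    # dose_scale * D > t  <=>  ln D > ln t - ln dose_scale
    z = (log(threshold) - log(dose_scale) - mu) / sigma
    return 1.0 - normal_cdf(z)

threshold = e                                 # illustrative infection threshold
p_no_mask = infection_prob(1.0, threshold)    # full dose transmitted
p_mask = infection_prob(0.5, threshold)       # mask halves the dose

reduction = 1.0 - p_mask / p_no_mask
print(f"infection risk without mask: {p_no_mask:.4f}")
print(f"infection risk with mask:    {p_mask:.4f}")
print(f"risk reduction from a 50% dose cut: {reduction:.0%}")
```

With these (assumed) parameters, a 50 percent dose reduction yields roughly a 98 percent drop in infection probability: because risk is concentrated in the tail above the threshold, a modest dose cut pushes most exposures below it.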
Rationality is not scalable. In January 2020, Taleb co-wrote and published a paper warning about the pandemic, long before COVID-19 was top-of-mind worldwide. He urged policymakers to respond early by “killing [the virus] in the egg before [it] can hatch.” In other words, it was not rational to wear a mask, socially distance, or stock up on food back in February 2020; doing so would have been considered a paranoid act. But when the consequences of such paranoia are limited and the payoffs are hypothetically unlimited, acting irrationally—especially when considering the multiplicative effects of mask wearing—is the correct decision.
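This precautionary logic reduces to a simple expected-cost comparison: a small, bounded cost of acting early versus a small probability of an enormous loss. A minimal sketch with hypothetical numbers (none of these figures come from Taleb's paper):

```python
def act_early(precaution_cost: float, catastrophe_prob: float,
              catastrophe_loss: float) -> bool:
    """Act when the expected catastrophe loss exceeds the bounded cost of precaution."""
    return catastrophe_prob * catastrophe_loss > precaution_cost

# Hypothetical: a $1B early shutdown vs. a 2% chance of a $1T loss.
# Expected loss avoided (0.02 * $1T = $20B) dwarfs the cost of acting.
print(act_early(precaution_cost=1e9,
                catastrophe_prob=0.02,
                catastrophe_loss=1e12))
```

The "paranoid" act is cheap and bounded; the catastrophe is not, so even a small probability of ruin justifies acting before the evidence is conclusive.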
Beware the ludic fallacy. This fallacy involves being fooled by closed models or games and their misuse in modeling real-life situations. In The Black Swan, Taleb gave an example of a Las Vegas casino that spent hundreds of millions of dollars on the most advanced technology to discover cheaters and card counters, as well as to track whales who could jeopardize the casino’s bottom line with a few substantial lucky bets.
That very same casino—the MGM Mirage—failed to see various other risks absent from its models. First, it lost approximately $100 million in revenue when a tiger unexpectedly maimed Roy Horn of Siegfried and Roy, ending the duo’s illustrious Las Vegas show. The casino was also threatened by a disgruntled contractor who, insulted by its settlement offer after he was injured on the job, unsuccessfully attempted to dynamite the building. Lastly, the casino was forced to pay a $5 million fine and nearly lost its gambling license when an incompetent employee hid gambling profit forms requested by the IRS in a box under his desk for no particular reason.
Each of these scenarios represented a far greater risk to the casino’s profitability and survivability, yet all were absent from the original risk assessment, which had directed the entire security budget toward anti-cheating technology.
Be wary of some experts. Most people, including experts, are bad at evaluating their own knowledge. When asked to provide an estimate along with a confidence range around it, people of all backgrounds choose ranges that are far too narrow, drastically underestimating their own rate of error. Importantly, this has nothing to do with their underlying knowledge; it reflects hubris about their ability to gauge the accuracy of that knowledge.
Not all experts are created equal, Taleb warned. Experts in static or technical fields (such as plumbing, accounting, or neurosurgery) tend to be genuinely skilled and useful. In dynamic or forward-looking fields (such as economics, financial analysis, or clinical psychology), expertise is far less reliable because it depends on predicting future events, and such fields are often fraught with fraudulent and pseudointellectual behavior.
Now, to be clear, this doesn’t mean that one shouldn’t pursue such careers, or that anyone involved in these fields is fraudulent; instead, one should simply acknowledge that a brain surgeon is far more capable of accurately identifying and removing a tumor from a patient than an economist or a financial analyst is able to predict how interest rates or oil prices will fluctuate over the next five years.
The field of epidemiology falls somewhere between a static and dynamic state, which helps explain why Dr. Anthony Fauci, director of the U.S. National Institute of Allergy and Infectious Diseases and the chief medical advisor to the U.S. president, has faced such a barrage of criticism throughout the COVID-19 pandemic. His field (medicine) is rooted in empiricism and technical know-how, but epidemics and pandemics offer relatively small amounts of data due to their rarity, while simultaneously embodying extremely high levels of dynamism.
You can’t predict black swans. That’s what makes them black swans. Taleb often gets irritated when a newscaster or interviewer calls him an oracle or says he can predict such events before they occur; instead, he simply insures himself and his clients, at a cost that is feasible and reasonable given all other variables at play, against the unlikely event that they do occur. Taleb has said that “the policies we need to make decisions on should depend far more on the range of possible outcomes than on the expected final number.”
Taleb states that in more than 30 years of trading he had only four profitable years, but that even one of those years more than covered all his cumulative losses. “Take all the risks you can, but make sure you’re here tomorrow.” This is the opposite of the culture of modern finance and public policy. In other words, instead of being conservative about small and medium-sized daily risks while remaining oblivious to large or catastrophic ones, insure yourself against catastrophic risk (via tail-risk hedging) while taking more small and medium-sized risks to increase overall returns or, in the case of public policy, results.
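The arithmetic of "lose small most years, win big rarely" can be checked directly. A minimal sketch with hypothetical returns (not Taleb's actual track record): a strategy that bleeds a small premium for 29 years and pays off massively in a single crash year still compounds to a net gain, while surviving every year along the way.

```python
def compound(returns):
    """Compound a sequence of yearly returns into a total growth factor."""
    growth = 1.0
    for r in returns:
        growth *= 1.0 + r
    return growth

# Hypothetical tail-hedged book: lose 2% a year for 29 years,
# then gain 150% in the one crash year the hedge pays off.
returns = [-0.02] * 29 + [1.50]
print(f"growth over 30 years: {compound(returns):.2f}x")
```

Despite losing money in 29 of 30 years, the book ends up at roughly 1.39x, and, crucially, its worst year is a survivable -2 percent rather than a blow-up.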
As Taleb wrote in The Black Swan, “the wise one is the one who knows that he cannot see things far away.”