From Ankle Bones to Algorithms: A Brief History of Risk Management

Risk is an inseparable part of the human experience. From our earliest ancestors facing the uncertainties of a hunt to modern pharmaceutical and biotech companies navigating complex global markets, we’ve always grappled with the unknown and sought to manage it as best we can.
This constant dance with uncertainty has driven us to develop ever more elaborate and sophisticated strategies and tools for managing risk. We’ve evolved from relying on gut feelings and superstition to objectively analyzing data and deploying advanced technologies to guide our decisions.
This article explores the surprisingly long and interesting history of risk management, tracing its evolution from ancient rituals to the cutting-edge systems and algorithms that shape its practice today.
Early Days: When Sheep Bones Were High-Tech
Imagine relying on sheep ankle bones to help predict the future. That’s exactly what the ancient Sumerians did around 3200 BCE. These bones, called astragali, were the earliest form of dice, used for everything from games of chance to settling legal disputes, and even making wartime decisions! It’s a far cry from the risk models we use today, but it highlights an early example of our fascination with studying, and seeking to predict, the unknown.
Interestingly, these same Sumerians were also pioneers in medicine, developing early pharmacological recipes. Their remedies, documented on clay tablets, often combined practical ingredients with spells and incantations, reflecting a world where science and superstition were heavily intertwined.
Around the same time, people in China were developing their own unique ways to try and influence the future. They consulted the I Ching, an ancient text of wisdom, to gain insights into potential outcomes. They practiced Feng Shui, harmonizing their surroundings to invite good fortune and minimize risks. Farmers diversified their crops and built irrigation systems to mitigate the unpredictable forces of nature. And communities formed mutual aid societies to share the burden of unexpected hardship. These diverse practices, rooted in philosophy, religion, and practical knowledge, suggest that the quest to manage risk was a universal human endeavour, existing from the very beginning of civilisation.
The Seeds of Reason: The Ancient Greeks and Romans
In ancient Greece, while oracles and omens still held sway, thinkers like Thales of Miletus began seeking natural explanations for the world around them. Hippocrates, considered the “father of medicine,” advocated for observation and rational diagnosis, moving away from purely mystical explanations for naturally occurring phenomena, including illness. This marked a significant step towards a more systematic understanding of the concept of cause and effect, and ultimately risk.
The Romans, known for their practicality, were also keen gamblers. Graffiti found in Pompeii even depicts heated arguments between players, proving that some things never change! Their famous phrase “alea iacta est” (“the die is cast”), spoken by Julius Caesar as he crossed the Rubicon, encapsulates their understanding of risk and the need to make decisions, even in the face of significant uncertainty.
The Islamic Golden Age: Laying the Foundation for Modern Risk Management
The Golden Age of Islam witnessed remarkable advancements in science and mathematics. Scholars like Jabir Ibn Hayyan and Ibn Sina (Avicenna) made groundbreaking contributions to chemistry and medicine. But perhaps the most crucial development for human progress during this period was the adoption and refinement of the Hindu-Arabic numeral system, including the revolutionary concept of zero. This system, later brought to Europe by Fibonacci, provided the essential building blocks and logic for precise calculation, paving the way for the development of applied mathematics, including probability theory.
The Renaissance: Quantifying the Unknown
A seemingly trivial question about dividing stakes in an unfinished game of chance led to a major breakthrough in the 17th century. When gambler Antoine Gombaud posed this question to Blaise Pascal and Pierre de Fermat, their correspondence laid the foundation for modern probability theory. Suddenly, risk could be quantified, analyzed, and ultimately managed.
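To make that concrete, here is a simple, purely illustrative version of the “problem of points” their letters addressed: two players stake equal amounts on a fair coin-tossing game, and play is interrupted when Player A needs one more win and Player B needs two. Player A loses only if B wins both remaining tosses, so

\[
P(A \text{ wins}) = 1 - \left(\tfrac{1}{2}\right)^{2} = \tfrac{3}{4}, \qquad P(B \text{ wins}) = \tfrac{1}{4},
\]

and a fair division of the stakes is 3:1 in A’s favour. The leap was replacing argument and custom with calculation.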
This new understanding of probability and risk had a profound impact. It gave rise to the insurance industry, with marine insurance becoming vital during the “Age of Discovery”. During this time, Lloyd’s Coffee House in London emerged as a central hub for this burgeoning industry.
In medicine, the impact of the work of Pascal and Fermat was equally significant. Systematic thinking, fuelled by probability theory, would help determine modern approaches to clinical trials and evidence-based medicine, shaping the way we develop and evaluate drugs today.
The Industrial Revolution: Managing the Risks of Progress
The Industrial Revolution brought new opportunities, but also significant challenges. Factories were hazardous, machines were unreliable, and new products often carried unforeseen dangers. New forms of risk emerged, including workplace safety and environmental pollution, requiring specialist understanding and management and eventually the development of dedicated disciplines. We can thank Health and Safety legislation for the advent of the ‘risk register’.
Around the same period, mathematicians such as Abraham de Moivre and Thomas Bayes developed new statistical tools for modelling uncertainty and for updating probabilities as new information arrives, tools that remain essential to risk management in many fields, including pharmaceuticals.
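Bayes’ result, in particular, formalizes that updating step. With purely hypothetical numbers: suppose 1% of production batches carry a defect, a screening test flags 90% of defective batches, and it also falsely flags 10% of good ones. The probability that a flagged batch is actually defective is

\[
P(\text{defect} \mid \text{flag}) = \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.1 \times 0.99} \approx 0.08,
\]

a useful reminder that an alert is only as meaningful as the base rate behind it.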
Regulation and Governance: Protecting People and Ensuring Accountability
As industries grew, it became clear that managing risk could not be left to companies alone. This was starkly evident in the pharmaceutical industry, where disasters such as the 1937 Elixir Sulfanilamide incident (in which a toxic solvent in a new drug formulation led to over 100 deaths) and the thalidomide crisis (which caused thousands of birth defects) exposed the need for greater control and oversight.
These events spurred significant changes in drug development regulations that remain the basis for oversight today, including at the Food and Drug Administration (FDA) in the U.S., the Medicines and Healthcare products Regulatory Agency (MHRA) in the U.K., and the European Medicines Agency (EMA) in the E.U.
In the early 2000s, a wave of corporate scandals, including the infamous cases of Enron and WorldCom, revealed the consequences of unchecked financial risk-taking and lax governance. These scandals weren’t just about bad accounting; they exposed a fundamental lack of responsibility and oversight at the highest levels of some of the world’s biggest companies. In response, governments stepped in.
The U.S. passed the Sarbanes-Oxley Act (SOX), designed to hold directors accountable and improve financial reporting. Similar reforms emerged in other countries, such as the Corporate Governance Code in the U.K., which aimed to standardize and embed formal risk management practices in a bid to help protect shareholders and other stakeholders.
These new regulations, while well-intentioned, often acted like narrow spotlights, illuminating specific risks in isolated areas – drug safety here, financial reporting there. But this created a kind of “tunnel vision”, and a fragmented approach to risk management.
Companies became more adept at (and more exclusively focussed on) managing the risks under each spotlight. However, the risks identified in each domain could not easily be compared with each other, preventing any overall view of the company’s risk profile.
Worse still, this siloed approach meant many other types of “unregulated” risk, many with the potential to cause serious damage (such as those related to strategy and execution), remained hidden in the shadows.
The Rise of ERM: Connecting the Dots
This “spotlight effect” highlighted the need for a more holistic, all-encompassing and integrated approach to risk management – one that could illuminate the entire forest, not just individual trees.
And so, in the early 21st century, following consultations with regulators, academics and business leaders, “Enterprise Risk Management” (ERM) emerged, with organizations like ISO and COSO subsequently developing globally recognised ERM standards.
ERM links risk to strategy by providing a unifying framework. It consolidates the risks already managed through traditional methods (for example, GxP, quality assurance, finance and pharmacovigilance), while also offering a means to systematically identify, assess, and respond to all other significant risks across an organization’s entire value chain, regardless of their nature or source.
However, despite being practiced for two decades, ERM can still feel like a “work in progress” for many organizations. It is a maturing discipline, perhaps best thought of as an adolescent, still trying to articulate its place within the organization and how best to demonstrate the unique value it can bring to Boards, Leadership Teams and the business.
This challenge may be particularly acute in the pharmaceutical industry, where a history of specialized risk management functions – GxP, quality assurance, etc. – can create internal confusion, even ‘competition’ over the ownership of risk. Without careful management, this internal resistance and friction can often hinder the adoption of a truly integrated ERM approach.
The Digital Age: Where Bits Meet Humanity
The digital age has unleashed a torrent of data and technology which is poised to transform the landscape of risk management once again.
We now have powerful tools, such as Artificial Intelligence and Machine Learning, that can sift through and analyse vast swathes of data to identify patterns and anticipate potential problems otherwise invisible to the human eye.
But this digital revolution also brings new challenges, from the ever-present threat of cyberattacks to the complex ethical dilemmas surrounding the use of AI in healthcare.
Yet, even as our tools become more sophisticated, the human element, a common strand dating back to ancient times, remains absolutely central to effective risk management. It’s the human mind that interprets a system’s outputs, makes judgment calls, and ultimately decides how best to respond to uncertainty. Thankfully, then, it appears unlikely that we are all going to be replaced by machines in the near future!
In this digital age, the key is to find the right balance between technological prowess and human insight, using data and algorithms to inform our decisions while relying on experience, professional judgement, and ethical values to guide us – all very much human qualities.
Conclusion: Embracing Uncertainty, Shaping the Future
The journey of risk management is a testament to human ingenuity, a story of continuous adaptation and innovation. From the sheep ankle bones cast by ancient civilizations to the sophisticated algorithms of today, we’ve always sought to understand and manage the unknown.
Today, we have unprecedented tools at our disposal. We no longer have to simply accept our fate. To some extent, we can now act on causes rather than merely suffer effects, while still acknowledging the limits of our knowledge. We can simulate complex scenarios, predict future trends, and make data-driven decisions with greater confidence than ever before.
But the essence of risk management remains the same: it’s about embracing uncertainty and making the most informed choices possible in the face of the unknown.
It’s about recognizing that in business, as in life, progress requires us to take risks, but to do so with wisdom and foresight. Profit, after all, is the reward for successful risk-taking.