The easy path: Finding the balance of automation in a great governance strategy

Regulation & Legislation
AI Governance & Assurance
Risks & Liability

Humans are inclined to take the easy path, looking for ways to make life more comfortable and less demanding. However, those who have achieved great things know that true fulfillment and satisfaction come not from taking the easy road, but from tackling challenges and obstacles early on. Ryan Holiday explores this concept in his insightful book on Stoicism, "The Obstacle Is the Way"; or, as the US Navy SEALs say, "The only easy day was yesterday."

Despite this knowledge, it's too tempting to take the easy way, avoiding the fundamentals and seeking instant gratification. That is, until the habit of taking the easy way generates greater, and often immovable, obstacles. We then regret not having addressed the obstacles at the beginning, when they were small in comparison to our present situation.

This push and pull of taking the easy way often spills over into our professional lives as we constantly pursue automation and productivity enhancements. There is a time and place for productivity enhancement and automation. But in the context of why you pursue governance in the first place, there must be a clear line of demarcation between automating business operations and automating their governance. The more automated our world becomes, the more crucial it is to revisit the fundamentals, question our assumptions, and understand what we are doing and why.

Governance and the slippery slope of automation

The purpose of governance is to ensure that operational processes, which are becoming increasingly automated, are performing as intended. However, there is a pernicious rise in ‘Cognitive GRC’, ‘automating governance documentation with LLMs’, and the like, that runs counter to the proper execution of governance.

One of the key advantages of automation is that deterministic computer programs do not make mistakes. They may produce erroneous results because they were designed to do the wrong thing, or fail on unforeseen edge cases, but if the automation is to add two numbers together, it will not occasionally say 2+2=3. It will return 2+2=4, 1,000 times out of 1,000. When we introduce stochastic (random) models, however, such as pricing regression models, fraud detection classification models, or next-word-prediction large language models, a degree of variability enters the results that needs to be accounted for.
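To make the distinction concrete, here is a minimal sketch in Python. The function names and the noise model are illustrative assumptions, not any particular production system; the point is the contrast between repeatable and variable outputs.

```python
import random

def add(a: float, b: float) -> float:
    """A deterministic program: the same inputs always give the same output."""
    return a + b

def noisy_price_estimate(base: float, rng: random.Random) -> float:
    """A stand-in for a stochastic model: outputs vary around a target."""
    return base + rng.gauss(0, 0.5)

# Deterministic: 2 + 2 is 4 on every one of 1,000 runs.
assert all(add(2, 2) == 4 for _ in range(1000))

# Stochastic: repeated calls produce a spread of values, and it is
# that spread which governance must characterize and account for.
rng = random.Random(42)
estimates = [noisy_price_estimate(100.0, rng) for _ in range(1000)]
print(min(estimates), max(estimates))  # a range, not a single value
```

The deterministic block can be verified once; the stochastic block can only be characterized statistically, which is precisely why its assumptions need documenting.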

Taken together, deterministic ‘programs’ and stochastic ‘models’ have been proven to reduce effort, increase productivity, lower costs, and drive additional revenue for businesses. Herein lies the rub: the more automation we use, the more careful we must be that these systems are built properly, documented thoroughly, and that their assumptions and caveats are fully understood.

The trend toward automating the crucial steps of understanding what you are doing, why you are doing it, and how the system works is taking the easy path: a short-term efficiency gain in exchange for a long-term detrimental result.

Assumptions, documentation, and false positives

Actuaries, statisticians, economists, control theorists, et al. understand that detailed, thoughtful documentation and review of underlying assumptions are crucial components of complex systems. Many efficiencies can indeed be gained with continuous detective controls: monitoring for input drift, output drift, outliers, and bias; tracking model changes; security reviews; validations; stress tests; the construction of regulatory filing reports; and so on. But the crucial first step of governance, taking the hard road to objectively, independently, and manually review and validate that the system is constructed properly and functioning as intended, cannot be automated.
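As a sketch of what one such detective control might look like, here is a small pure-Python Population Stability Index (PSI) check for input drift. The function name, thresholds, and data are illustrative assumptions, not a prescribed implementation.

```python
import math
import random

def psi(expected: list[float], actual: list[float], n_bins: int = 10) -> float:
    """Population Stability Index: a common detective control for input drift.
    Bins are derived from quantiles of the expected (baseline) distribution."""
    qs = sorted(expected)
    edges = [qs[int(len(qs) * i / n_bins)] for i in range(1, n_bins)]

    def bin_fractions(data: list[float]) -> list[float]:
        counts = [0] * n_bins
        for x in data:
            counts[sum(x > e for e in edges)] += 1  # index of the bin x falls in
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(data), 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

rng = random.Random(0)
baseline = [rng.gauss(0, 1) for _ in range(5000)]
stable = [rng.gauss(0, 1) for _ in range(5000)]      # same distribution
drifted = [rng.gauss(0.5, 1.2) for _ in range(5000)]  # shifted and wider

print(psi(baseline, stable))   # small value: no alert
print(psi(baseline, drifted))  # larger value: flag for manual review
```

A control like this can run continuously and cheaply, but note what it does not do: it flags that inputs have changed, and a human still has to judge whether the system remains fit for purpose.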

The dangerous trend of generating model card documentation from source code is an excellent example of this. The implicit assumption is that the modeling system was built properly, and now the annoying risk team wants a model white paper; let's generate one so that they leave us alone. This cuts out the crucial step of understanding what you did and why. Concretely:

  • Why did you use a neural network for this model? Would XGBoost have worked instead, or even a logistic regression? If not, why not?
  • How did you validate the model?
  • What underlying assumptions are in your data? Are the samples independent?
  • What distributions are your data from?
  • If you are using parametric methods such as Z-scores, are the assumptions for normality met?
  • etc.

These are just a few of the questions and considerations that get omitted from governance if you cut out the crucial step of writing up what you did and, most importantly, why.

To conclude: for businesses pursuing efficiency gains through automation and AI, ensuring your modeling systems perform as anticipated makes the expert curation and review of governance documentation paramount. Join Monitaur in doing the hard things now, grounded in the fundamentals, to yield safe, performant, resilient, and antifragile AI systems for the long term.

Read more about governance strategy, evidence, and documentation