Preparing for Regulatory Oversight of Advanced Modeling and AI

In today’s competitive and tumultuous environment, companies are increasingly relying on advanced modeling and artificial intelligence tools to drive decisions. With the growing use of Monte Carlo simulation and other tech-based tools, regulators are beginning to ask more questions about the models themselves and the data that goes into them.
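For readers less familiar with these tools, here is a minimal sketch (in Python, using entirely made-up figures and distributions) of the kind of Monte Carlo simulation a company might run to understand the range of outcomes behind a decision, such as next year’s project cost:

```python
import random

# A minimal Monte Carlo sketch: estimate the range of total project cost
# from a few uncertain inputs (all figures and distributions are hypothetical).
def simulate_project_cost(trials=10_000):
    results = []
    for _ in range(trials):
        labor = random.gauss(500_000, 75_000)         # assumed normally distributed labor cost
        materials = random.uniform(200_000, 350_000)  # assumed uniform material cost
        delay_penalty = 100_000 if random.random() < 0.2 else 0  # assumed 20% chance of a delay
        results.append(labor + materials + delay_penalty)
    results.sort()
    return {
        "mean": sum(results) / trials,
        "p90": results[int(0.9 * trials)],  # 90th-percentile cost, a common planning figure
    }

print(simulate_project_cost())
```

The point isn’t the particular numbers; it’s that the inputs, distributions, and thresholds baked into a model like this are exactly what regulators will want to see explained and supported.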

An article prepared by McKinsey & Company offers a glimpse into why regulators are placing greater emphasis on “model risk management”:

The stakes in managing model risk have never been higher. When things go wrong, consequences can be severe. With digitization and automation, more models are being integrated into business processes, exposing institutions to greater model risk and consequent operational losses. The risk lies equally in defective models and model misuse.

Defective models (or ones used incorrectly) can lead to losses in the hundreds of millions or even billions of dollars. As I discuss in this post on outputs and reports, regulators are looking for assurance that the company is being well run, is compliant with relevant laws, and is financially solvent.

If your company is starting to integrate models, AI, and other technology tools into its decision making, regulators and other third parties are going to scrutinize how well tested and proven these tools are.

Some questions they may ask include:

  • Where is the data coming from?
  • How are you managing the data?
  • Why should we [as regulators] trust the data?
  • To what extent are your data and subsequent models impacting decisions?
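One practical way to be ready for questions like these is to keep a provenance record for every data feed a model consumes. Here is a minimal sketch in Python; the field names and example values are hypothetical, not a prescribed standard:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Hypothetical provenance record for a single data feed, so questions like
# "Where is the data coming from?" and "How are you managing it?" can be
# answered from documentation rather than from memory.
@dataclass
class DataFeedRecord:
    name: str
    source: str                 # e.g. "third party", "consumer", "government"
    owner: str                  # who is accountable for the feed
    refresh_frequency: str      # how often the data is refreshed
    last_validated: date        # when completeness/accuracy was last checked
    used_by_models: List[str] = field(default_factory=list)

claims_feed = DataFeedRecord(
    name="claims_history",
    source="internal claims system",
    owner="Data Management team",
    refresh_frequency="monthly",
    last_validated=date(2020, 3, 31),
    used_by_models=["loss_projection_model"],
)
```

Even a simple record like this, kept current, goes a long way toward answering the “why should we trust the data?” question.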

It’s important to remember that “model risk” regulations for insurance and other industries may be several years behind those for financial institutions, but that doesn’t mean you shouldn’t be preparing.

It’s better to understand potential questions beforehand, is it not?

How can organizations prepare for regulator questions around AI, modeling and other tech tools for decision-making?

Because these tools are so new, especially for non-financial firms, there is little historical data on how accurate models and other methods are.

Besides some of the general questions mentioned above, think about questions regulators and other parties outside the organization may ask as you develop your models. Those questions may include:

  • How historically accurate has this data been?
  • Has there been any in-depth trending and analysis done on this data before?
  • What has the organization done to ensure the completeness of the information?
  • Where was the information sourced? (third party, consumer, government, etc.)
  • What assumptions are being made in the use of this data?
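Answering questions about completeness is much easier if basic checks run every time data is loaded. Here is a minimal sketch; the record structure, field names, and the 2% threshold are all hypothetical:

```python
# Minimal data-quality check run before records are fed into a model.
# Field names and the missing-value threshold are hypothetical examples.
def check_completeness(records, required_fields, max_missing_pct=0.02):
    issues = []
    for f in required_fields:
        missing = sum(1 for r in records if r.get(f) in (None, ""))
        pct = missing / len(records) if records else 1.0
        if pct > max_missing_pct:
            issues.append(f"{f}: {pct:.1%} missing exceeds the {max_missing_pct:.0%} threshold")
    return issues

records = [
    {"policy_id": "A1", "premium": 1200, "loss": 300},
    {"policy_id": "A2", "premium": None, "loss": 0},
]
print(check_completeness(records, ["policy_id", "premium", "loss"]))
# -> ['premium: 50.0% missing exceeds the 2% threshold']
```

Logging the results of checks like these also creates the kind of paper trail regulators like to see.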

In addition to asking questions, regulators will also want to see documentation of how your models were developed and used. This guidance for financial firms from the FDIC explains:

Documentation of model development and validation should be sufficiently detailed so that parties unfamiliar with a model can understand how the model operates, its limitations, and its key assumptions. Documentation provides for continuity of operations, makes compliance with policy transparent, and helps track recommendations, responses, and exceptions.
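How you capture that documentation will vary by organization; one lightweight approach (purely illustrative, with hypothetical names and dates) is to keep a structured record alongside the model itself:

```python
# Hypothetical documentation record kept alongside the model, in the spirit
# of the guidance quoted above: purpose, assumptions, limitations, and who
# developed and validated the model.
MODEL_DOC = {
    "model_name": "loss_projection_model",
    "version": "1.3",
    "purpose": "Project expected losses for annual planning decisions",
    "developed_by": "Actuarial / modeling team",
    "validated_by": "Independent risk function",
    "key_assumptions": [
        "Claim severity follows the historical distribution",
        "No material change in the policy mix next year",
    ],
    "known_limitations": [
        "Not calibrated for catastrophe events",
    ],
    "data_sources": ["claims_history", "policy_master"],
    "last_validation_date": "2020-03-31",
}
```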

But as I explain in this article on regulators and ERM, you have to walk a fine line: sharing too little OR too much with regulators could prompt additional scrutiny. To be clear, I am NOT advocating operating the model(s) in a black-box environment where the model’s workings are held in secret.

In the long run, companies that rely heavily on models and AI may want to consider a formal risk management framework.

Banks and other financial firms may already be doing this since they are at the forefront of using models, AI, and machine learning to drive decisions. Some lenders are even using AI instead of traditional FICO credit scores to make decisions on credit applications.

Therefore, guidance for developing a risk management framework around models is most advanced for the financial industry. Standards such as the SR 11-7 guidance issued by the Federal Reserve System in 2011 can provide some good clues on where to start, even if your company is not a financial institution.

At a fundamental level, a governance framework for modeling and AI:

…provides explicit support and structure to risk management functions through policies defining relevant risk management activities, procedures that implement those policies, allocation of resources, and mechanisms for evaluating whether policies and procedures are being carried out as specified.

Does this mean you need to have a complex, formal framework before using modeling and other tech-based tools to drive decisions?

Absolutely not!

The complexity of any framework will be driven by a variety of factors, including the number of data sources, the number of stakeholders using the output, and how frequently the model will be updated.

Simply having some rules around roles and responsibilities, guidance on what the model may (and may not) be used for, and requirements for the data going into the model can serve as a basic framework, as sketched below. Unless your company is in a highly regulated industry and subject to more intense scrutiny, this should be sufficient.
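To make that concrete, here is a minimal, hypothetical sketch of what those rules might look like when written down, in this case as a simple Python structure rather than a lengthy policy document:

```python
# A minimal, hypothetical model-governance record covering the basics
# described above: roles and responsibilities, intended use, and
# requirements for the data going into the model.
GOVERNANCE = {
    "model": "loss_projection_model",
    "intended_use": "Annual planning decisions only; not approved for pricing",
    "roles": {
        "owner": "Head of Actuarial",
        "developer": "Modeling team",
        "independent_validator": "Enterprise Risk Management",
        "approver": "Chief Risk Officer",
    },
    "data_requirements": {
        "minimum_history": "5 years",
        "completeness_threshold": 0.98,
        "approved_sources": ["claims_history", "policy_master"],
    },
    "review_frequency": "annual",
}
```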

As modeling, AI, machine learning, and other tech-based tools become more common in the years ahead, organizations should expect more questions and scrutiny around how they are using them to drive decisions. Taking a little bit of time now to understand how this oversight will unfold will go a long way toward ensuring you can satisfy regulators’ needs with as few headaches as possible.

How is your company or industry preparing for the potential of regulatory scrutiny of modeling and AI?

I’m interested in learning more from you on how we as risk professionals can factor the future of oversight into how we plan and execute risk management activities. Leave a comment below or join the conversation on LinkedIn.

And if your company would like to use modeling like Monte Carlo simulation and other technology-based tools to better inform decisions but doesn’t know where to start, please feel free to reach out to discuss your situation today!

Featured image courtesy of Michael Dziedzic via Unsplash.com



Comments

  • Hans Læssøe
    April 17, 2020 2:34 am

    Interesting, I must say. Today, trading on stock exchanges anywhere in the world is largely driven by algorithms and elements of artificial intelligence, where the process is “upside down”: machines make the decisions and humans execute them. I am so old, I remember the good old days, where humans decided and machines “did”.

    In my view, there is an inherent danger in legislatively requiring risk management of decision modelling, as this would take some of the decision responsibility away from investors/traders/…. This is potentially dangerous, as you cannot blame/punish a software algorithm.

    What about insider trading? If/when different investor and company AI systems talk to each other or share/affect available data, will they not be likely to share non-public information and thereby essentially execute insider trading to some extent? Where is the borderline between legal and illegal?

    That said – the use of modelling and algorithms, machine learning and artificial intelligence (i.e. smarter computers) is destined to explode in the decade(s) to come to a level none of us can even imagine today. One just has to think of autonomous vehicles (including airplanes), healthcare and diagnosis, product pricing, education, judicial processing, … plus a few areas I haven’t thought about. So – having a comprehensive and systematic “system quality assurance” methodology will be highly advisable – even when not demanded by regulators.

    PS: Note that when AI systems are fed biased data, the decisions that come out of them will be biased as well. One example: because statistics show that white Anglo-Saxon students from middle+ income homes (on average) perform better than e.g. African-American students, AI systems will tend to favour the former over the latter. These and similar, but more subtle and opaque, biases are a huge danger to the optimal use of AI.

    • Thank you for your always insightful comments, Hans! You bring up some great points and interesting questions. There is certainly an inherent danger like you say, but I would imagine this would affect various industries differently. Having a basic framework for ensuring the outputs from models are trustworthy and accurate is certainly advisable regardless of any regulatory demands. If nothing else, creditors and investors will appreciate a company that goes the extra mile in this regard.

