Artificial Intelligence, Liability and Risk Management

In the world of insurance, “AI” has long referred to Additional Insured. But its 21st-century meaning, Artificial Intelligence, is crowding out the older acronym, as AI is increasingly seen as a panacea for speeding up workflows, reducing labor and improving accuracy. But is it?

A November 2019 report by global consulting firm McKinsey found a 25% annual increase in the use of AI by businesses, with 58% of companies stating they have embedded AI technology into at least one process or business product. And, as with anything new, the implications are not fully understood, leading to new layers of technical complexity and exposure to types of risk that were not previously a concern.

Businesses contemplating the introduction of AI into their work and decision-making processes need to ask themselves the following questions:

  1. Does the AI create new pathways for invasion of privacy?
    This should be of particular concern to firms required to conform to the European Union’s General Data Protection Regulation (GDPR). The E.U.’s “right to be forgotten” rule needs to be factored into the design of any AI process where large amounts of data are collected, stored and used.
  2. How good is the AI algorithm?
    Software developers are no different from the rest of us: there are elements of human bias in all our actions and decisions. A seemingly trivial design assumption could lead to erroneous outputs. Any AI system must be evaluated and tested to ensure that the information it produces is not inherently slanted by human failures introduced unwittingly into the underlying code.
  3. How good are the data being analyzed?
    As they say, garbage in, garbage out. Consideration must be given to the type(s) of data being collected for evaluation. Is the information reliable? Is it the most appropriate resource for the process? Does it contain bias that could skew the results?
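The data-quality questions above can be made concrete as a simple pre-flight audit run before records are fed into any AI process. The sketch below is purely illustrative: the `audit_records` function, the field names, and the 15% representation threshold are assumptions for the example, not part of any particular product or regulation.

```python
from collections import Counter

def audit_records(records, required_fields, group_field):
    """Flag basic data-quality problems before records reach a model.

    records: list of dicts; required_fields: fields every record must have;
    group_field: a category (e.g. region) to check for underrepresentation.
    """
    # "Garbage in": count records missing any required field.
    incomplete = [r for r in records
                  if any(r.get(f) in (None, "") for f in required_fields)]

    # Possible bias: groups supplying very few records may skew results.
    groups = Counter(r[group_field] for r in records if r.get(group_field))
    total = sum(groups.values())
    # The 15% cutoff is an illustrative threshold, not a standard.
    underrepresented = [g for g, n in groups.items() if n / total < 0.15]

    return {"incomplete": len(incomplete),
            "underrepresented": underrepresented}

# Toy dataset: one record has a missing income; "south" is scarce.
sample = [
    {"income": 50000, "region": "north"},
    {"income": None,  "region": "north"},
    {"income": 72000, "region": "north"},
    {"income": 61000, "region": "north"},
    {"income": 48000, "region": "north"},
    {"income": 55000, "region": "north"},
    {"income": 67000, "region": "north"},
    {"income": 59000, "region": "north"},
    {"income": 62000, "region": "north"},
    {"income": 53000, "region": "south"},
]

print(audit_records(sample, ["income", "region"], "region"))
```

A check like this does not prove a dataset is fair or reliable, but it surfaces the obvious problems (missing values, lopsided representation) cheaply, before they become model outputs that someone has to defend to a regulator or an underwriter.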

Along with these thorny questions, the matter of risk control must be considered. Cyber insurance is an ever-evolving product, and most likely one that is not keeping up with the pace of technological change and its associated risk. At present, cyber insurance is typically broad enough to cover claims related to security, such as a data breach caused by malware. But as the use of AI grows, so too will specific AI-related claims, and they will inevitably splinter off into areas not previously considered by underwriters, such as the questions posed above.

“Silent cyber,” in which the risks associated with AI are not clearly defined in the policy language, remains the norm, and the P&C sector of the insurance industry is not moving fast enough to address these concerns. It is possible that once AI-driven cyber claims become more common, insurers will move to exclude coverage from traditional P&C policies and carve AI-type coverages out into new insurance products.

In sum, AI calls for all elements of the business to participate in the formulation of AI processes and to carefully address the risks that AI creates. Apart from security and regulatory concerns, the introduction of AI into decision-making calls for a holistic evaluation of all aspects of the business. For example, the contractors who build the AI tools are central to the results those tools produce; these relationships must be evaluated to determine who is responsible for which piece of the risk matrix. It is essential for stakeholders contemplating the use of AI to communicate their activities to those responsible for managing insurance and risk transfer.

As with all things technical, laws, regulations and best practices are playing catch-up to reality. As important as AI may be to the future success of the business, it must also be recognized that newness has its own inherent downside, and it is incumbent on management to minimize liability through careful planning and asking the hard questions.
