"The Acts of a Thing?" Well, Hello, Skynet
Oct 3, 2019

How does AI figure in risk management? A few years ago, the question would have been unanswerable and unnecessary, but events move with great dispatch nowadays. A lawsuit/countersuit now underway over the performance of an AI-driven investment venture opens the door to a whole new world of AI-related risk questions. While the particulars of this case relate only to financial and investment dealings and are being tried under Canadian law, they raise fundamental questions: Who is responsible for an AI's operation? What entity exerts control? What are the role and liability of the people who built it in the first place? Does anyone really understand what the multiple complex algorithms embedded in the AI actually do?

In short, an AI-driven trading program built by a software company and offered by a financial services firm performed about like any rookie encountering international markets for the first time: it lost a lot of money on trades that, in retrospect, look dumb. The people who lost that money are suing. Now, under the law in Quebec, "the custodian of a thing is bound to make reparation for injury resulting from the autonomous act of the thing, unless he proves that he is not at fault."

Yes, but - how any of these terms apply in this case is utterly unclear. Who is the "custodian," for example? Which acts are "autonomous"? And so forth.

This may sound esoteric, but it's not. AI, the "thing" in this case, is being embedded in more and more aspects of how we all do business every day. Your company is undoubtedly enjoying the services of suppliers and vendors using AI right now. Regardless of the business you're in, your organization is probably either deploying AI-based functions and services already or planning to in the near future. Understanding how liability flows from the "acts of a thing," whether the thing in question resides in your own operations or inside a vendor's, is just about guaranteed to be central to how you manage risk going forward.

The applicable law appears to be somewhat undercooked at the moment, not seasoned by nearly enough actual decisions, but a few common-sense guidelines suggest themselves. For example, any contract should specify responsibility for the actions of any AI involved. Terms like "autonomous acts" need to be nailed down. If you're the entity providing the AI-based service, can you avoid giving the AI agency and relegate its role to support and advice, keeping real, live humans in the loop at all times? And of course, you and your broker need to have a deep meeting of the minds on how your new AI affects your liability, E&O, D&O, and every other policy.

Keep in mind that building and deploying your very own AI, like Skynet, could become your very own Terminator if it's not properly hedged and insured. AI is on the move because it's powerful. Our Homo sapiens V1.0 brain can't handle more than about seven variables at a time; AI applications churn through millions of operations per second. Use your V1.0 brain to manage that megaflop-sized risk.

 

Are You ESG Transparent?

You chose risk management as a career because you thrive on challenge, right? Lucky you - we have a whole new type of challenge taking form before our eyes. Our new acronym for this issue is ESG, which stands for environmental, social, and governance data. ESG represents a different kind of risk. No accidents or injuries are involved, nothing gets smashed or flooded, no electrons are sequestered or otherwise molested. ESG transparency is about how the world sees your investment appeal. Is yours a company that large institutions and "high net worth individuals" (love that euphemism) want to own part of?

Here's the rub. Until recently, the conventional wisdom favored keeping ESG data as far off the books as feasible. That tide is now reversing. The Dow Jones News Service tells us that "energy giant Exxon Mobil Corp. learned that lesson in June when Legal & General Investment Management America, an arm of Europe's second-largest asset manager, removed the company from the holdings of its GBP5 billion ($6.2 billion) Future World funds, a fund group for investors who consider ESG criteria." ESG transparency is increasingly viewed as a key to a company's long-term viability. What are you doing about emissions, fresh water use, raw materials utilization? Are you a good corporate citizen, as that concept is understood in 2019?

Here's the real risk, according to Dow Jones: "while companies that don't disclose environmental and social data may not always lose investors, they are more often being passed over by new investors, in favor of firms with better disclosure practices, ESG investors say." That's a risk that should be visible in any ERM discussion. This risk is based not on events but on non-events - investments that don't happen.

See, this really is a great time to be a risk manager. No more same old, same old.

 

Quick Take 1:
Taking Steps - 10,000 of Them

The September issue of the Journal of Occupational and Environmental Medicine (membership required) brings some useful news concerning employee wellness. As this journal has noted before, wellness is too important to be left to HR alone, considering that overall employee fitness, or lack thereof, has a serious impact on the cost and duration of workers' comp claims.

The examples studied in the JOEM article come from Australia, but the lessons translate readily to our own versions of the 10,000 steps challenge here in America. The government of Queensland ran a workplace challenge from 2012 through last year, offering "microgrants" to local businesses to fund pedometers and employee step challenges aimed at improving mobility. The point for our purposes is that the program succeeded in raising physical activity levels among employees. Adoption rates were good and, for the most part, the increased activity was maintained over time. A simple, low-cost program - a box of pedometers, an app to track activity, some PR, and some modest prizes - and it worked.

Improving and maintaining employee health should be part of loss engineering and safety, not just HR and benefits. Employee wellness is an essential risk management activity, not a frill. Have you discussed this with HR lately, planned any joint programs to get your people off their duffs and in motion? Do you need a step challenge?

[Photograph of a group of people walking]

 
Walking shoes, anyone?

 

Quick Take 2:
Is That a Driverless Delivery Van at Your Loading Dock?

We've been watching the slow development of driverless/autonomous vehicles for some time - and the fleet of largely unanswered risk questions that dog this fledgling industry. An item on CNBC last week shows that the risk is closer than you might think. CNBC's excellent correspondent on all matters automotive, Phil LeBeau, describes how another new entrant, Postmates Serve, is on the brink of rolling out its new hybrid delivery service in LA and San Francisco.

Postmates has partnered with Phantom Auto to package a semi-autonomous delivery service in which self-driving delivery vehicles are overseen by remote human operators who can intervene if a problem occurs that the vehicle cannot solve on its own. The model reminds us of the drone tech that our armed forces have been using for several years now. The machine does its own thing until it has a question. Note that another newcomer in Texas, Kodiak Robotics, is just now rolling autonomous trucks between Dallas and Houston in another stage one trial.

The risk angle? Is it possible that a local "last mile" delivery service like Postmates might be handling delivery chores somewhere in your logistics structure - without your knowing it? Let's be frank. If one of these new-style delivery vans chugging along on its own recognizance plows into a full school bus while taking your goods to a client, you will be named by every plaintiff's attorney on the block. Who knows what novel theories of liability may be spun around such an incident? Now that autonomous delivery is becoming a fact and not just an experiment, you have some research to do and some decisions to make. Do you allow the use of these vehicles at any point in your delivery pipeline? If not, what controls do you have in place? If yes, does your risk program include suitable policies, riders, and the like, just in case?

The best free advice your faithful correspondent ever got from a lawyer applies here: never, ever have your name on a landmark Supreme Court decision.


 
A 26 ton self-driving delivery truck already on the roads in Sweden.

 

A Stat We Like

According to a new survey by Mercer, fifty percent of all employers say they have enhanced their employee assistance programs, while just over one-third have implemented a tele-therapy program. Yeah, that's those people over in benefits again doing something right. Are risk, comp, and safety plugged in and taking advantage to improve the management of employee injuries and their recovery process? If not, whassammata wid youse?
