27 July 2024
A lot of us worry about artificial intelligence. Some worry that it could cause a nuclear war. Others worry that some pre-programmed machine will cause other untold harm to human beings, our environment or our communities.
These worries call to mind HAL 9000, the artificially intelligent onboard spacecraft computer in Arthur C. Clarke’s 2001: A Space Odyssey and the 1968 film of the same name directed by Stanley Kubrick. The spacecraft was carrying astronauts on a mission to Jupiter. If you remember, the computer was designed to assist them on their mission. It ran everything. The astronauts spoke to it directly, addressing it as HAL.
There was only one problem. It became apparent that HAL was intentionally killing them! As it turned out, there was a latent flaw in HAL’s programming: something in it dictated that HAL’s survival was more important than the survival of the people it was supposed to serve.
This is the worry about artificial intelligence: that the machine will begin to act in unexpected, destructive ways not anticipated when it was designed, and that there will be no way to stop it.
We know this is a valid concern. It’s frightening to turn decisions involving our future health and wellbeing over to machines, because it’s impossible, when a machine is being designed, to take into consideration everything that might happen in the future. HAL’s flaw in 2001 was fiction, but there are real-life examples as well.
As a corporate lawyer for more than 40 years, I’ve noticed how much the corporate law resembles computer programming and how similar the flaw in HAL’s programming is to what I think is a design flaw in the modern corporation. Like HAL, the corporation exists to serve human beings. It is a tool to facilitate the aggregation of capital by limiting investors’ risk. Pooling capital enables research and technological progress. It can improve everyone’s standard of living, assuming it doesn’t create a disaster first.
As with HAL, human beings determine the programming of corporations (by passing and amending the corporate law). And as with HAL, the people who designed the corporate law in the second half of the 1800s didn’t fully appreciate all the damage that could result when conditions changed in the future.
When our existing corporate law was first adopted, corporations posed very little risk to the public interest. They were relatively small, acted mostly locally and used 19th-century technology. Furthermore, they hadn’t yet learned how to interfere in the democratic process by lobbying to delay or frustrate new legislation designed to curb corporate anti-social behaviour. As a result, a corporation’s capacity back then to inflict severe damage on the public interest was small.
Today, all these conditions have changed. Big companies employ tens of thousands of workers and control billions of dollars of assets. They operate globally. They use modern technology capable of causing severe damage to the environment and other elements of the public interest. Furthermore, they have become highly proficient at lobbying our elected leaders and regulatory agencies to limit new legislation which might adversely affect their costs.
This is where we now stand. Companies inflicting severe harm on the environment and other elements of the public interest too often cannot be stopped because of rules programmed into them, and into our democracy, years ago. These rules no longer serve humanity well. Indeed, in some cases they are causing it great harm. Something must be done to bring them up to date.
You might argue that corporate personnel can always voluntarily stop the destruction of the public interest and therefore the analogy to artificial intelligence is inappropriate. As a corporate lawyer for over 40 years and someone who understands how companies work, I can tell you that people behave differently when they’re working for a corporation than they do on their own. Inside a corporation, they play roles dictated by certain pre-established rules.
These rules sometimes take personal judgment out of the role players’ decision-making and make corporations behave as if they were being directed by artificial intelligence. Adopted more than a century ago, the rules are the equivalent of a corporation’s programming. One rule in particular raises the biggest problems: it requires corporate personnel (specifically directors, the people in charge) always to put the company’s self-interest first.
When large amounts of money are at stake, this rule limits the ability of company personnel to protect the public interest. They must follow the rule even when they know it will cause severe harm to the environment (e.g., emitting significant quantities of greenhouse gases that warm the planet) or to another element of the public interest.
Every corporate law requires directors “to act in the best interests of their company” (or words to that effect). Every director knows it. Following the rule is the directors’ job, and because they are in charge, the job of everyone the company employs. Every company officer is aware of it and most workers understand it whether or not they have actually read the law. Because it’s the law, the market expects them all to follow it. They are left with no choice.
Dave, the last surviving astronaut in 2001, eventually unplugs HAL and continues his mission. Although the problem of corporations causing severe damage would be eliminated if the corporate law were rescinded (and corporations eliminated), throwing the baby out with the bath water is not necessary. Capital formation should be able to occur without simultaneously destroying the public interest. It is time to adjust the corporation’s 19th-century programming by adding 21st-century boundaries to the directive to pursue the company’s self-interest.
The way to stop continuous and intentional behaviour that causes severe damage is to make directors know from the outset that they must stop such damage regardless of the adverse financial consequences for the company.
There is a relatively simple way to achieve this result. Change the existing duty of directors in the corporate law from “act in the best interests of the company” to “act in the best interests of the company, but not at the expense of severe damage to the environment or other elements of the public interest.” I call this change the Code for Corporate Citizenship (the “Code”). You can read more about the Code at www.codeforcorporatecitizenship.com.
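Since the argument leans on the programming analogy, here is a minimal sketch, in Python, of what the amendment amounts to in code. It is purely illustrative: corporate law is not literally software, and every name in it (Action, choose_action_existing, the SEVERE_HARM threshold) is hypothetical, invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    expected_profit: float   # benefit to the company
    public_harm: float       # damage to the environment or public interest

# Hypothetical threshold for what counts as "severe damage".
SEVERE_HARM = 100.0

def choose_action_existing(actions: list[Action]) -> Action:
    """The existing rule: act in the best interests of the company,
    with no boundary on harm to the public interest."""
    return max(actions, key=lambda a: a.expected_profit)

def choose_action_under_code(actions: list[Action]) -> Action:
    """The amended rule: still pursue the company's interests, but
    never at the expense of severe damage to the public interest.
    (For simplicity, assumes at least one permissible option exists.)"""
    permitted = [a for a in actions if a.public_harm < SEVERE_HARM]
    return max(permitted, key=lambda a: a.expected_profit)

options = [
    Action("expand output, heavy emissions", expected_profit=500.0, public_harm=300.0),
    Action("expand output, cleaner process", expected_profit=350.0, public_harm=20.0),
]
print(choose_action_existing(options).description)    # picks the heavy-emissions option
print(choose_action_under_code(options).description)  # picks the cleaner option
```

Note that the objective stays exactly the same; the sketch’s only change is the added boundary that screens out severely harmful options before the company’s self-interest is pursued.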
This is a big change, but only for the relatively few companies that are already causing severe harm. For the vast majority of corporations, and for companies yet to be formed, it will merely be a caution to monitor their operations and ensure they don’t cause severe damage to the public interest. With the Code’s enactment, business as a whole will become far less destructive. The Code will make the artificial intelligence that now drives corporate behaviour less destructive and, in a sense, more intelligent.