It’s Time To Look Harder At The Morality Of AI Innovation

Stephen Ritter is Chief Technology Officer at Mitek, a global leader in mobile deposit and digital identity verification solutions.


Lately, the crescendo of calls for more scrutiny of the algorithms that—whether we know it or not—shape our daily lives has drawn me back to the AI Bill of Rights draft developed by the White House Office of Science and Technology Policy and released in 2022.

I’ve called for moving this concept forward because it’s time. But the more I think about it, the more I believe it isn’t just timely; it’s necessary. As business leaders, we need it.

The current draft is designed to serve as a catalyst for conversation about what should be in the final version. I’d say the majority of feedback so far has been on data and human privacy, which makes perfect sense to me as a data privacy guy. However, what’s missing is adequate recognition of the ethical and even moral aspects of deploying machine learning and AI. What framework should we consider when making critical decisions about developing or deploying AI in our companies? What impact will our decisions have on our human workforce? What responsibility do we have to factor in human consequences as part of the business case we build for the deployment of technology?

I admit these kinds of questions come to me naturally. Today, I am the CTO of a technology company. My university study included computer science (which was primarily the study of theory and coding) as well as cognitive science, with a curriculum that included everything from a study of how the human brain works to philosophical questions such as what makes us human. It was the latter of the two that encouraged me to think about the ethics of AI.

As AI has moved from the domain of researchers to engineers, I question whether we are giving adequate thought to—and building sufficient guardrails around—AI’s impact on the world’s workers.

A Tale As Old As Time—With A Twist

These concerns aren’t new. During the Industrial Revolution, British textile workers known as the Luddites destroyed the weaving machinery that threatened their trade. Many people feared the coming of steam engines, automobiles and any other invention with the capacity to bring significant social or economic change. People in those industries were rightfully concerned. People did lose their jobs. They did have to find new ways to earn a living.

What makes this issue even more worth raising today is the pace and breadth of this generation’s technological advancements. Not only do we think this same cycle will happen again, we know it will. AI can do certain—and, soon, many—things better, faster and cheaper than humans.

At some point before long, driverless trucks will hit the road. AI will interpret medical images. Computer systems will scan passports at travel and entry points. AI will collect, manipulate and analyze data with little human intervention in order to make company decisions.

At the same time, it is ushering in a future with almost limitless possibilities. As an expert in machine learning, I see how this technology has the potential to help us solve previously intractable problems, from world hunger to income disparity and the climate crisis. It is already being used to speed up medical research and the development of lifesaving treatments. The questions I raise aren’t meant to stop or slow the adoption and use of AI and machine learning—just the opposite. People in past generations feared technological change and tried to stop it. That doesn’t work. A far better approach is to identify what may occur and then ensure we have safeguards in place to make sure those who are impacted are prepared and have options.

Expanding The AI Bill Of Rights

I believe it’s time to adopt the AI Bill of Rights—after adding guidelines to address its human impacts. Here are three areas to start.

• Education. Many today are graduating with computer science degrees with an emphasis on machine learning. Their degrees prepare them to design and build technology systems that are more breathtaking than anything humankind has ever imagined. Are they getting a grounding in the human ethics of the fields they are preparing to enter? Should they? I benefited from a well-rounded curriculum that prepared me to be aware of some of the ethical and moral issues raised by these technological advances. Courses in philosophy, ethics and morals tailored to the requirements of future research and computer scientists would be terrific additions to computer science curriculums—and I would make them mandatory.

• Business. As business leaders, many of us are rightfully excited about using AI and machine learning to improve our operations and help manage workforce shortages. When building a business case for the increased use of AI, we can make human impact part of our decision-making process. Is there anything we can do to champion our employees and prepare them for changes that result from our decision? Some of the best companies in the world today are known for their superior training programs and commitment to helping their employees advance along their career paths. Data processors can become knowledge workers. Line workers can become quality inspectors and trainers.

• Government. The government can recognize the societal changes ahead and think about whether there should be more focus and protection for workers in a future version of the AI Bill of Rights. It can sponsor discussions about what role government should take to make sure our workers are prepared and have options. When developing a human-impact framework, it must consult experts in the fields of business organization, change and economic analysis as well as human potential so the result is an approach that engages and inspires as many people as possible.

I applaud any conversations about how to integrate the benefits of technology with our responsibilities as world citizens. We need technology to help us solve today’s most difficult problems, and we will benefit from the advantages it gives us. Now, we just need to be sure we have adequately planned for the impact those benefits will bring.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.