Should We Be Concerned About Artificial Intelligence Ethics?

By: Arthur Cole


As the enterprise becomes more steeped in machine learning, cognitive computing and other forms of autonomous infrastructure, the issue of artificial intelligence ethics keeps coming up.

Science fiction abounds with tales of benign computer systems that suddenly conclude people are inadequate and set about destroying humanity. Entertaining as those stories are, how realistic is it that artificial intelligence (AI) could pose a genuine hazard as it continues to evolve? And are there reasons, and reliable means, to keep it in check?

Deciding Who Makes the Rules

According to WebVisions, there are two ways to program ethics into machines: hard-code the rules into their operating systems, or establish guidelines that allow the machines to reach ethical conclusions on their own. With hard-coding, you run the risk of different ethics governing different machines, leaving them just as likely to become confused as they try to learn and reconcile multiple needs at once.
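
To make the first approach concrete, here is a minimal sketch in Python; the action names and the deny list are invented for illustration, not drawn from any real system. The point is simply that the "ethics" live in the program itself, so two machines shipped with different rule sets will behave differently.

```python
# Hard-coded approach (hypothetical): the ethical rules are fixed in the
# program, and every proposed action is checked against a deny list.
FORBIDDEN_ACTIONS = {"disable_safety_interlock", "share_private_data"}

def is_permitted(action: str) -> bool:
    """Return True only if the action is not on the fixed deny list."""
    return action not in FORBIDDEN_ACTIONS

for action in ("adjust_thermostat", "share_private_data"):
    print(action, "->", "allowed" if is_permitted(action) else "blocked")
```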

The latter approach, however, opens the door to misinterpretation of ethical behavior based on the machine’s own observations. A key example is Microsoft’s recent Twitter bot, Tay, whose AI algorithm began tweeting offensive comments after users tricked it into treating them as acceptable in normal conversation.
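
The second approach can be sketched just as simply. The toy Python below is loosely modeled on the Tay failure mode and uses invented data: the bot treats whatever it observes most often as acceptable, so a coordinated flood of hostile input becomes its norm.

```python
# Learned approach (toy sketch): the system infers what is "acceptable"
# from the conversations it observes, with no moderation step.
from collections import Counter

observed_phrases = Counter()

def learn_from_users(messages):
    """Absorb user messages as examples of normal conversation."""
    observed_phrases.update(messages)

def generate_reply():
    """Echo the phrase seen most often so far, good or bad."""
    return observed_phrases.most_common(1)[0][0]

learn_from_users(["have a nice day"] * 3)
learn_from_users(["<offensive phrase>"] * 10)   # coordinated trolling
print(generate_reply())                         # -> "<offensive phrase>"
```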

Defining Ethics for a Machine

Prominent figures in science and technology, Stephen Hawking, Elon Musk and Steve Wozniak among them, suggest that because ethics vary with culture and community, machine learning will be hard-pressed to behave any more or less “ethically” than humans do. Yet as AI starts to make its way into increasingly sensitive areas, including weapons systems, power generation and mass transportation, the consequences of unethical machine behavior can be significant.

As Media Genesis noted, even if computers are programmed to do good, there is always a chance, albeit a small one, that their autonomous problem-solving capabilities will lead them to conclusions people would deem unethical despite their good objectives.
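
A toy example of that risk, with scores and actions invented purely for illustration: a system “programmed to do good” by maximizing a single benefit score can still select the option most people would call unethical, because harm never enters its objective.

```python
# Hypothetical optimizer: it maximizes "benefit" only, so harm is invisible
# to it even though a person would weigh both.
candidate_actions = {
    "publish accurate but modest findings": {"benefit": 5, "harm": 0},
    "exaggerate findings to drive adoption": {"benefit": 9, "harm": 7},
}

best = max(candidate_actions, key=lambda a: candidate_actions[a]["benefit"])
print(best)  # -> "exaggerate findings to drive adoption"
```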

This is one of the issues that Lucid, an Austin, Texas-based company, is trying to work through with its Ethics Advisory Panel. The group consists of leading academics and researchers, the latest addition being Dr. Adrian Weller, senior research fellow in the Machine Learning Group at the University of Cambridge, who will attempt to identify and codify the many ethical challenges confronting AI development. The group is drawing on disciplines ranging from history and economics to humanitarianism and international law, in the hope of finding a way for artificial intelligence ethics to permeate emerging digital ecosystems. The panel currently has seven members, with five seats still to be filled.

Risk vs. Innovation

Perhaps the biggest concern surrounding the emerging AI field is Hollywood-driven fear, which puts the brakes on development and deployment at a time when the technology stands to produce some truly revolutionary improvements to the human condition. As tech journalist Chris Price reports for ShinyShiny, leading platforms are already showing vast improvements in areas like medical diagnosis, giving patients with extremely rare conditions a fighting chance when most of the medical industry is focused on mass-market treatments.

The simple fact is that the broad term “artificial intelligence” covers many gradations of “intelligence,” so the most common applications, things like smart cars and smart machines, are not actually that intelligent at all. They simply have the ability to perform limited tasks on their own, based on the programming behind their primary functions.
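
A minimal sketch of what “smart” usually means in such devices, with thresholds chosen arbitrarily for the example: a fixed rule wrapped around a primary function, not general intelligence.

```python
# Hypothetical "smart" device logic: a simple conditional, nothing more.
def smart_thermostat(temperature_c: float) -> str:
    if temperature_c < 18.0:
        return "heat on"
    if temperature_c > 24.0:
        return "cooling on"
    return "idle"

print(smart_thermostat(16.5))  # -> "heat on"
```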

Ultimately, artificial intelligence ethics are no more or less measurable than the human ethics we already rely on to make the same decisions. A more appropriate strategy would be to subject the world’s thinking machines to the same checks and balances that govern human activity, so that no single system can cause harm on its own, even if it is technically capable of doing so.
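
One way to read “checks and balances” in code, sketched here with hypothetical action names rather than any real framework: high-impact actions can be proposed by the system but are held until a person approves them, so no single machine acts alone.

```python
# Hypothetical human-in-the-loop gate: sensitive actions require approval.
HIGH_IMPACT = {"shut_down_grid_sector", "reroute_mass_transit"}

def execute(action: str, human_approved: bool = False) -> str:
    if action in HIGH_IMPACT and not human_approved:
        return f"{action}: held for human review"
    return f"{action}: executed"

print(execute("adjust_signal_timing"))   # low impact, runs immediately
print(execute("reroute_mass_transit"))   # held until a person signs off
```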

Ignorance is almost always the primary driver of fear, which is why it should be incumbent on the tech industry to educate the public about the realities of AI so that there are no misunderstandings over what it is, what it is not and what it is capable of.



About The Author

Arthur Cole

Freelance Writer

With more than 20 years of experience in technology journalism, Arthur has written on the rise of everything from the first digital video editing platforms to virtualization, advanced cloud architectures and the Internet of Things. He is a regular contributor to IT Business Edge and Enterprise Networking Planet and provides blog posts and other web content to numerous company web sites in the high-tech and data communications industries.
