Even as AI continues to deliver significant benefits, it raises important ethical and social issues. As a society, we are now at a crucial juncture in determining how to deploy AI-based technologies in ways that encourage, not encumber, our core values. Over the next several years, AI research, systems development, and social and regulatory frameworks will shape how the benefits of AI are weighed against its costs and risks.
The internet and enormous, easily accessible computing power in the cloud have enabled the unprecedented accumulation and analysis of vast amounts of data. We believe that Artificial Intelligence will, over time, bring extraordinary capabilities to a variety of industries, nations, and communities.
At the same time, we acknowledge three areas of concern, and we welcome conversations with startups building innovative solutions to these big problems:
1) Unintended Results Due to Lack of Context
Most Science Fiction tends to demonize AI. The Borg in Star Trek, the Cylons in Battlestar Galactica, or the Terminator are all pop-culture examples of Artificial Intelligence turning on humanity leading to devastating consequences. At Omega, we don’t believe that generalized artificial intelligence is a threat (or possibility) for the foreseeable future. However, we recognize that AI can misinterpret instructions due to its lack of understanding of context.
For instance, many people remember the AI chatbot called Tay, which was designed to generate the friendly conversational banter of a 19-year-old. It turned out, however, that the bot couldn’t figure out what constituted “reasonable” or “friendly” conversation (as its creators intended), and it rapidly devolved into sending inflammatory messages laced with hate speech.
While many have acknowledged that Tay was inadequately conceived and executed, it illustrates that AI struggles to understand the context that informs its instructions. Indeed, different communities have different definitions of what “offensive” means, so you have to wonder to what extent any bot can truly learn what it means to be offensive.
2) Tunnel-Vision Goal Orientation
Much has been made of AI programs beating humans at games such as Jeopardy, Chess, and Go. When humans play a game, we play for enjoyment, for collaboration, to build camaraderie, and to learn things that might apply to other facets of our lives. By contrast, AI systems are programmed with a single-minded objective (utility) function, and they care only about maximizing it.
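This single-minded optimization can be sketched in a few lines of Python. The actions, scores, and utility function below are hypothetical illustrations, not any real system: the point is that the agent picks whatever maximizes its objective, and anything the objective doesn't measure simply never matters.

```python
# Hypothetical sketch of single-objective optimization: the agent
# evaluates candidate actions solely by a numeric utility function
# and picks the highest-scoring one. Nothing else factors in.

def utility(action):
    """Toy utility: points earned by each (made-up) action."""
    scores = {"safe_move": 1, "risky_move": 3, "friendly_move": 0}
    return scores[action]

def choose_action(actions):
    # The objective is the ONLY criterion: "friendly_move" is never
    # chosen, because friendliness contributes nothing to the score.
    return max(actions, key=utility)

best = choose_action(["safe_move", "risky_move", "friendly_move"])
```

However enjoyable or sociable an action might be to a human player, it is invisible to the agent unless someone encodes it into the utility function.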
For example, children often ask Alexa all sorts of questions. When the questions are factual (e.g., “What is the weather?”), Alexa gives a factual answer. But what about subjective questions? What does Alexa answer when a child asks about the meaning of life or about their relationships at school? Do we want algorithms teaching subjective values to an entire generation?
3) Lack of Explainability
A fundamental limitation of many AI techniques is that the AI cannot explain how it generated its conclusion. Human communities have evolved, in great part, based on our need to create ways to explain the things around us—Religion in the Middle Ages, Reasoning during the Renaissance, Ideology in the contemporary era.
Many AI systems work like black boxes: the AI can’t explain why it made a particular decision. Also known as the transparency problem, this raises a variety of problematic issues, e.g., the risk of inadvertent discrimination in automated loan underwriting, liability for autonomous cars, accountability for bad outcomes, and the role of ethics in decision making.
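The loan-underwriting example can be made concrete with a minimal sketch. The weights and features below are invented for illustration; they stand in for parameters a model has learned. The decision function returns approve or deny, but there is no human-readable answer to “why?”:

```python
# Hypothetical sketch of the black-box problem in automated loan
# underwriting. OPAQUE_WEIGHTS stands in for learned model
# parameters; individually they carry no human-readable meaning.

OPAQUE_WEIGHTS = [0.42, -1.37, 0.05, 2.11]  # learned, not interpretable

def score(features):
    """The weighted sum the model computes internally."""
    return sum(w * x for w, x in zip(OPAQUE_WEIGHTS, features))

def underwrite(features, threshold=1.0):
    # The only output is a decision. The best available "explanation"
    # is that a weighted sum crossed a threshold, which tells the
    # applicant nothing about which factor drove the outcome.
    return "approve" if score(features) > threshold else "deny"
```

If one of those opaque weights happens to correlate with a protected attribute, the system can discriminate without anyone, including its builders, being able to point to the reason in the decision itself.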