When Machines Are Wiser than their Programmers | Dr. R. Jared Staudt | CWR
How do we decide on how to program computers when we don’t know how to live our own lives?
Sci-fi is now reality. The rapid growth of artificial intelligence truly baffles the mind, though not the imagination. Long envisioned in fiction, the creation of "intelligent" computers has arrived, and not just for research purposes. Projects are underway to entrust transportation, commerce, and even warfare to the hands of autonomous robots.
In the growth of artificial intelligence, one key question emerges: how do we judge the morality of robotic choices? This, of course, only intensifies the debate about human morality itself. In order to program morality, one must first understand the nature of morality. Materialists envision morality solely in terms of the mechanics of the brain and thus imagine replicating this material process in computers. If, however, the essence of morality resides in our spiritual soul (even if working through the material brain), then it would be impossible to replicate a moral sense or conscience.
The attempt to replicate morality became apparent recently in a project at Google. In a "conversation" with an "intelligent" computer, the following dialogue emerged, revealing the heart of the controversy: