Concierge AI: Machines and morality. Within five decades, we hope to relinquish full control of our defense network to Rasputin. But can we trust an artificial intelligence to make the moral decision? Would you like to know more?
Concierge AI: Autonomous vehicles were first built at the dawn of the millennium, prior to which there were over a million automobile-related deaths per year. AI programmers at the time found themselves confronted with moral quandaries that were once mere experiments in thought. A self-driving car on a collision course that kills five people has but one viable alternative: a new path where one innocent bystander is killed. For a machine to make a decision on whether to divert its course, its creator must undertake the impossible task of assigning an objective value to individual Human life. It turns out that the ethical implications of artificial intelligence are a great deal more complicated than Asimov's Laws of Robotics would lead us to believe.
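The diversion quandary above can be reduced to a comparison of path costs. This is a minimal sketch only: `value_of_life` is a hypothetical placeholder that weights every life equally, precisely the "impossible task" the passage names, not a real valuation.

```python
def value_of_life(person: str) -> float:
    """Hypothetical placeholder: every life is weighted equally."""
    return 1.0

def should_divert(current_path: list[str], alternate_path: list[str]) -> bool:
    """Divert only if the alternate path costs fewer lives in total."""
    current_cost = sum(value_of_life(p) for p in current_path)
    alternate_cost = sum(value_of_life(p) for p in alternate_path)
    return alternate_cost < current_cost

# Five people on the current course, one bystander on the alternate path:
print(should_divert(["p1", "p2", "p3", "p4", "p5"], ["bystander"]))  # True
```

Under equal weighting the machine always diverts toward the smaller group; the entire moral burden hides inside whatever `value_of_life` is made to return.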
Concierge AI: The engineers of Clovis Bray conceived a solution during the development of our Warmind project. By relegating ethical decision-making to a Black Box Morality system, the Warmind implements its own proprietary virtue quantifiers, incomprehensible even to its creators. Rasputin determines morality on its own terms, and by design we are blind to that process in order to preserve its objectivity.
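The "Black Box Morality" design described above amounts to strict information hiding: callers receive only a verdict, never the quantifiers that produced it. A minimal sketch, with all names and weights hypothetical:

```python
class BlackBoxMorality:
    """Sketch of an opaque moral evaluator: only verdicts are exposed."""

    def __init__(self) -> None:
        # Hypothetical internal virtue quantifiers. Name mangling keeps
        # them private by convention, and no accessor exposes them.
        self.__virtue_weights = {"harm": -5.0, "preservation": 3.0}

    def judge(self, action: dict[str, float]) -> str:
        """Return only a verdict, keeping the scoring process opaque."""
        score = sum(self.__virtue_weights.get(k, 0.0) * v
                    for k, v in action.items())
        return "permitted" if score >= 0 else "forbidden"

warmind = BlackBoxMorality()
print(warmind.judge({"harm": 1.0, "preservation": 2.0}))  # permitted
```

The design choice is the narrow interface: since creators can only observe verdicts, they cannot bias the quantifiers, which is the objectivity-by-blindness trade the passage describes.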