Who's responsible for AI?
Three-quarters of executives say they expect artificial intelligence to ‘substantially transform’ their companies within just three years, while one-third name ethical risks among their top three concerns about AI, according to a recent Deloitte survey.
Some of the ethical risks associated with AI differ from those tied to conventional information technology, Deloitte says. This is due in part to the role large datasets play in AI systems, to particular applications of the technology such as facial recognition, and to the capabilities some systems demonstrate, from automatic learning to superhuman perception.
And there’s the issue of responsibility. AI technologies increasingly automate decision-making in critical applications such as autonomous driving and disease diagnosis, raising the question of who bears responsibility for any harm these systems cause. According to Deloitte, it’s now up to businesses, governments and the public to establish proper accountability structures.
“Even as they seek to take advantage of AI technology to improve business performance, companies should consider the ethical questions raised by this technology and begin to develop their capacity to leverage it effectively and in an ethically responsible way,” the consultancy says.