It’s time to really scrutinise AI
Artificial intelligence algorithms should be subjected to the equivalent of clinical trials. That’s because algorithms offering personalised feeds to suit users’ tastes can have pernicious side effects, edging people towards extremes and possibly eroding civility and trust in society, argue Olaf Groth, professor at Hult Business School, Mark Nitzberg of the Center for Human-Compatible AI (CHAI) at UC Berkeley, and Stuart Russell, professor at UC Berkeley, writing in Wired.
They maintain that intelligent systems deployed at scale need regulation because they are an “unprecedented force multiplier” for promoting the interests of an individual or group.
“Manipulating user preferences and using bot armies to leverage widespread deceit has disrupted societal cohesion and democratic processes,” the authors say. “To protect the cognitive autonomy of individuals and the political health of society at large, we need to make the function and application of algorithms transparent, and the [US Food and Drug Administration] provides a useful model.”
A new AI agency would require a broad diversity of expertise, slotting psychologists and sociologists alongside programmers and economists. And it would need to deploy system-level testing, not only observing how the software works but also modelling any second- and third-order social effects it could have.
“While the drug approval process can be flawed or corrupted… its model is far better than the minuscule defences we have today to protect us from weapons of mass societal destruction.”