A paper on artificial intelligence governance (AI governance) has just been published by Mark Anthony Camilleri. Camilleri, an associate professor at the University of Malta, has been featured among the world’s top two percent of scientists.
Camilleri’s research argues that all those involved in the research, development and maintenance of AI systems bear social and ethical responsibilities toward their consumers as well as toward other stakeholders in society.
In the paper, the professor notes that, to date, there has been limited research on AI principles and regulatory guidelines for the developers of expert systems such as machine learning (ML) and deep learning (DL) technologies. His research, he says, addresses this knowledge gap in the academic literature.
The abstract of the sole-authored paper is reproduced below.
“The objectives of this contribution are threefold. It describes AI governance frameworks that were put forward by technology conglomerates, policy makers and by intergovernmental organizations; it sheds light on the extant literature on AI governance, as well as on the intersection of AI and corporate social responsibility; it identifies key dimensions of AI governance, and elaborates about the promotion of accountability and transparency; explainability, interpretability and reproducibility; fairness and inclusiveness; privacy and safety of end users, as well as on the prevention of risks and of cyber security issues from AI systems.”
The full paper can be accessed online.