Responsible AI / Panelist

Belona Sonna

Bel’s AI Initiative; AfroLeadership

Australia

Belona Sonna is a Ph.D. candidate in the Humanising Machine Intelligence program at the Australian National University. She earned a bachelor's degree in software engineering and a master's degree in computer science from the University of Ngaoundere, Cameroon, before joining the African Master in Machine Intelligence scholarship program at the African Institute for Mathematical Sciences in Rwanda. Her current research focuses on explainability and privacy preservation in AI-based solutions for health care applications. Sonna has been named one of 2022's 100 Brilliant Women in AI Ethics.

Voting History

Statement: As the business community becomes more aware of AI's risks, companies are making adequate investments in RAI.
Response: Agree
"Many factors can help us recognize the efforts of companies investing in responsible AI programs. Recently, most of them have been striving to follow RAI principles as part of their AI development processes. However, what most reinforces my sense of their real investment is seeing business leaders seek out knowledge for RAI deployment (for example, the TRAIL [Trustworthy and Responsible AI Learning] certificate program for industry, led by Mila). As we have already pointed out in this series of articles, RAI will be implemented in companies when top management becomes involved in the process."

Statement: The management of RAI should be centralized in a specific function (versus decentralized across multiple functions and business units).
Response: Strongly disagree
"Although centralized management of responsible AI can guarantee that all projects follow the same control process, this can quickly become a disadvantage when the project under examination is complex. Moreover, centralized management runs counter to the vision of responsible AI development, which should preferably involve all players in the development chain. This is why, in my opinion, decentralized management makes it possible to distribute roles among units according to their expertise, ensuring not only positive interaction but also the involvement of all."

Statement: Most RAI programs are unprepared to address the risks of new generative AI tools.
Response: Strongly agree
"Generative AI tools are distinctive in being able to produce content without direct human intervention, so there is a huge debate about who is responsible for their results, which in most cases are unexpected. In addition, generative AI tools can sometimes be misleading, as they do not have a humanlike understanding of the subject matter. Another concern is intellectual property rights and privacy. It is important to mitigate these risks before adopting generative AI tool services.

Overall, I believe that most current RAI programs are not prepared to deal with the risks associated with new generative AI tools.”

Statement: RAI programs effectively address the risks of third-party AI tools.
Response: Disagree
"RAI programs alone cannot effectively address the risks associated with the use or integration of third-party AI tools. This is because RAI programs are built within a framework whose principles are required to ensure that the results have only a positive impact on end users. The same assurance might not exist for third-party AI tools, which can cause many problems, including a lack of transparency and security and privacy issues. Using them without a prior verification phase could then corrupt the primary product.

With this context in mind, it is necessary to develop, in addition to RAI programs, a related risk management system for third-party tools that ensures cohesion between the two according to RAI principles."

Statement: Executives usually think of RAI as a technology issue.
Response: Neither agree nor disagree
"Executives' views on RAI are strongly related to their backgrounds. While those with a technical background think that RAI is about building an efficient and robust model, those with a social background see it more as a way to have a model that is consistent with societal values. The good news is that the requirements of RAI are both technical and social. Hence, the real question for effective RAI in organizations is how to establish a management program that addresses both the technical and social aspects. A suitable answer requires an organization's executives to be open-minded about the needs of society, the AI literacy of human users, the choice of technology tools, and adherence to AI ethics principles during the design of AI solutions.

Overall, RAI should not be considered only a technology issue. Instead, executives, regardless of their backgrounds, should treat it as a rights management plan established by taking into account the technology, the realities of society, and the end users exposed to the final AI solution."

Statement: Mature RAI programs minimize AI system failures.
Response: Strongly agree
"RAI enables the design of AI solutions that address both the technical and societal failures of AI systems. Although some AI ethics principles are difficult to implement at this time, the logic behind RAI programs is to minimize the risk of errors in the solutions they produce. Therefore, I strongly believe that mature RAI programs minimize AI system failures, as they are meant to build solutions that are robust while also preserving human dignity."

Statement: RAI constrains AI-related innovation.
Response: Strongly disagree
"If we consider an innovation to be a technical or scientific change in a process aimed at improving the use of a service, then AI is undoubtedly the innovative means of our era and of the future. Every day, new models are produced with ever greater predictive capabilities. However, most of them are complex and difficult to explain to the average user, which makes their adoption in society difficult. Of course, it is legitimate that people want to understand why and how decisions affecting their daily lives are made. So is it necessary to create increasingly powerful models if they are not ultimately put to use in our societies?

RAI is the way to couple the computational power of AI with the social dimension necessary to build and keep a trustworthy relationship with end users. Thus, rather than being a constraint, RAI aims to move AI-related innovation beyond the technical to the social dimension needed to improve people's lives through bias-free solutions. Of course, respecting RAI rules may slightly reduce model accuracy, but then again, what really matters: demonstrating the computational power of models, or putting that power to work for the benefit of humans?"

Statement: Organizations should tie their responsible AI efforts to their corporate social responsibility efforts.
Response: Strongly agree
"Corporate social responsibility comprises four responsibilities: environmental, ethical, philanthropic, and economic. Simply put, social responsibility efforts are about companies maximizing profits while respecting society. Therefore, I am all for organizations combining their responsible AI efforts with their social responsibility efforts, as the two share the same goal. When building an AI model, the goal is to use the power of AI algorithms to make profits; on top of that, responsible AI aims to produce solutions that are ethical, environmentally friendly, trustworthy, and explainable.

Beyond designing AI-based solutions, organizations involved in responsible AI establish a close relationship between their solutions and society by incorporating values that place humans at the center of development. In such a context, people are more likely to use, trust, and recommend the products: The organization is accountable and can capture the lion's share of the market. Furthermore, through this relationship, the organization is more aware of society's needs and concerns and is therefore better able to produce solutions that matter to it. This is a key to business success and innovation."

Statement: Responsible AI should be a part of the top management agenda.
Response: Strongly agree
"Like a communication or business strategy, responsible AI is a decision-making strategy that every company's management team should develop and see implemented within the business. To this end, it is essential that this strategy be present in the daily life of the company through its development activities. Several reasons justify this point of view. If subscribing to responsible AI is a commitment that all of a company's AI-based solutions must respect certain standards, then systematic oversight must take place throughout the development process to ensure those standards are met. In addition, the company must keep itself frequently updated on the evolution of these standards, which are very often proposed by external agencies, in order to avoid external censure or harm.

Furthermore, the decision to subscribe to responsible AI must be carried out by the management team for its implementation to be effective. As the makers of the company's general policy, it is up to them to assert what should be done by reminding those executing the work of the company's expectations and goals."