As AI evolves into GenAI, there emerges the need for a kind of digital Hippocratic oath for the creators of these technologies. While ethics in AI has become a subject of discussion only recently, ethicists worldwide have been exploring the subject for decades, proposing frameworks on which to build technology.
Out of the lot, we have handpicked seven ethical AI frameworks that developers can use to incorporate ethics into their technological innovations.
In 2021, Thoughtworks employees put together a catalogue of techniques for stepping back and examining the ethical implications of their work, which they published as the Responsible Tech Playbook. The 50-slide playbook highlights three techniques that solicit different points of view, identify and address ethical challenges before they become bigger problems, and ensure that the technology is designed to meet the needs and support the values of the people it affects.
There is a fair amount of overlap among the approaches, so the playbook highlights a three-step subset that delivers much of the benefit with a relatively small impact on a business’s existing processes.
Developed by Cansu Canca, founder of the AI Ethics Lab, the Puzzle-solving in Ethics (PiE) Model integrates ethics into the AI innovation cycle and is implemented through consulting with the AI Ethics Lab. Introduced in 2018, the model systematically addresses the core question ‘What is the right thing to do?’ while integrating ethics analyses and solutions at every step of innovation.
In 2020, Rolls-Royce released a comprehensive guide for businesses to check and balance their AI projects in a fair, ethical and trustworthy manner. The company described it as essentially a checklist for companies to consider the impact of using AI before deciding whether to proceed.
The document covers a total of 32 facets spanning societal impact, governance, and trust and transparency, and requires executives and boards to provide evidence that each has been rigorously considered. The company released an updated version in 2021.
The framework guides government agencies and the private sector on managing new AI risks and promoting responsible AI. AI’s general-purpose nature makes it challenging for conventional information technology risk management, so the framework introduces ‘socio-technical’ dimensions into its approach. Experts have pointed to the depth of the framework, especially its specificity in implementing controls and policies to better govern AI systems within different organisational contexts.
The European Union Agency for Cybersecurity (ENISA) released a framework in December 2021 addressing the security of machine learning (ML) algorithms. The report focuses mainly on identifying the potential risks and weaknesses associated with the technology.
Additionally, it recommends a set of security measures to strengthen cybersecurity in ML-powered systems. One challenge the report highlights is the delicate balance between implementing security controls and maintaining the expected performance levels of these systems.
In collaboration with the Institute for the Future (IFTF), Omidyar Network, a Silicon Valley investment firm, took the first step with the release of the ‘Ethical OS’ toolkit in 2018. The framework attempts to tackle pressing challenges that have arisen as unintended consequences of technology. It is a three-part toolkit that helps technologists understand how tech might be compromised down the road and build safeguards against future risks.
A 2020 paper authored by Bernard Arogyaswamy argues that sustainability consists of three forces, economic, social, and ecological, held in tension with one another.
The framework suggests that companies should have clear rules about what is right and wrong, and that the people who enforce these rules should have sufficient authority. It highlights the importance of ethical criteria, social impacts, and user perceptions in deploying big tech in a way that is sustainable over the long term and adheres to ethical principles.