Ethical AI
Ethical AI and AI for good are at the core of our work, and we adhere to a strict Ethical Artificial Intelligence Policy.
At Humans in the Loop, we not only have a mission to help conflict-affected and displaced people prosper through integration into the global digital economy, but also aim, through our advocacy and business model, to make the AI industry more human-centric.
We provide training and upskilling opportunities on the subject of AI, and through our social enterprise we provide services related to the development of AI models.
As a service provider and an important link in the AI supply chain, Humans in the Loop is committed to contributing to the implementation of AI systems which are ethical, trustworthy, and bias-aware. The work of the social enterprise mostly focuses on computer vision applications, so the Ethical AI Policy is built around this premise. Our policy is in line with the proposed EU AI Act and all other applicable legal and regulatory frameworks.
Humans in the Loop promotes the following ethical AI principles:
- Accountability: As a dataset collection and annotation company, we share the responsibility with our clients for the datasets that we produce under their instructions and guidance.
- Data privacy: We respect the privacy of individuals and provide mechanisms to give consent and receive compensation for personal data used in datasets.
- Compliance: We act in accordance with the law and all relevant regulatory regimes.
- Human agency: We promote human intervention, monitoring, and validation of AI systems in accordance with the severity of the risk involved.
- Fairness: We strive to produce datasets which result in individuals within similar groups being treated in a similar manner, without discrimination or unfair biases.
- Transparency: We implement documentation practices in order to ensure that the datasets we produce are built in a transparent and auditable way.
- Stewardship: We collect and handle data with care and intentionality and we exercise stewardship over the datasets that we produce.
- Beneficial AI: When selecting clients, we favor those who use AI for the common good and reject those whose work might have harmful effects for humans or the environment.
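As an illustration of what the transparency and stewardship principles above might capture in practice, here is a minimal, hypothetical sketch of a dataset documentation record. The class and field names are our own invention for illustration, not a prescribed standard or an actual Humans in the Loop template; in practice the documentation format would be agreed with each client.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    """Hypothetical documentation record for an annotated dataset.

    All field names are illustrative; a real record would follow the
    documentation template agreed with the client for the project.
    """
    dataset_name: str
    client_use_case: str            # intended purpose, shared with annotators
    data_sources: list[str]         # provenance of the raw images
    consent_obtained: bool          # data privacy: subjects gave consent
    compensation_provided: bool     # subjects compensated for personal data
    guidelines_version: str         # exact annotation guidelines used
    annotator_training: list[str]   # e.g. completed training modules
    known_limitations: list[str] = field(default_factory=list)  # documented gaps or biases
```

Keeping a record like this alongside each dataset is one way to make the "transparent and auditable" claim concrete: an auditor can trace any annotation back to its source data, consent status, and the exact guidelines in force at the time.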
Client Selection
At Humans in the Loop we have a strict client selection policy based on our social impact scoring system. In line with our commitment to social impact and creating a dignified and safe workspace for our annotators, Humans in the Loop does not provide services:
- to companies in the military and defense sectors, as well as any company developing solutions involving weapons, killer drones, or other technologies whose purpose is to cause or facilitate injury to people.
- related to content moderation and explicit images.
- to any AI applications classified as posing an “unacceptable risk” by the EU AI Act.
Humans in the Loop is committed to working closely with our clients to scope their projects with ethics in mind, raise awareness about potential biases, and promote best practices for building trustworthy AI.
Best Practices
Special attention is paid to the following types of projects, each with its own best practices:
- Projects which require the annotation of subjective, self-held human characteristics and protected attributes, such as gender, race, and disability (a hypothetical schema sketch follows this list):
  - Instead of annotating gender (especially with binary variables), context-aware labels such as beard, makeup, shirt, or haircut should be used for objective annotation.
  - Instead of annotating race or ethnicity, skin color should be annotated according to a widely accepted dermatological convention, in addition to other objective characteristics such as the presence of an eye crease.
  - Instead of annotating ability, context-aware labels such as wheelchair, walking stick, hearing aids, or guide dog should be used.
- Projects which require the annotation of image-level tags for classification purposes; these should be discouraged given the challenges in the interpretability of model predictions.
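To make the context-aware labelling concrete, here is a minimal sketch of what such a label schema might look like. The class name, field names, and values are hypothetical and would be defined per project; the skin-tone field uses the Fitzpatrick scale (phototypes I to VI) as one example of a widely accepted dermatological convention.

```python
from enum import Enum

class FitzpatrickType(Enum):
    # Fitzpatrick skin phototypes: one widely used dermatological scale
    # for recording skin tone objectively instead of race or ethnicity.
    TYPE_I = 1
    TYPE_II = 2
    TYPE_III = 3
    TYPE_IV = 4
    TYPE_V = 5
    TYPE_VI = 6

# Hypothetical per-person label set: objective, visible attributes
# in place of subjective protected attributes (gender, race, ability).
person_labels = {
    # instead of a (binary) gender label:
    "beard": False,
    "makeup": True,
    "haircut": "short",
    # instead of a race or ethnicity label:
    "skin_tone": FitzpatrickType.TYPE_III.name,
    "eye_crease": True,
    # instead of a disability label:
    "wheelchair": False,
    "walking_stick": False,
    "hearing_aid": False,
    "guide_dog": False,
}
```

The design intent is that annotators record only what is visible in the image, rather than inferring an identity or attribute the person has not disclosed.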
Annotators and Training
We are conscious that each person comes with their own set of values, beliefs, and personal biases, and we raise awareness among our annotators about the potential biases they might exhibit and transmit to the data while performing annotation. All annotators must complete the “Bias in AI” training module in the Humans in the Loop Training Center and earn the “Ethical AI Annotator” badge in order to be eligible for annotation projects.
However, we also promote a view of annotation as interpretative and collaborative work where a shared understanding must be built between annotators and the client.
We promote the use of clear guidelines, open communication, edge case examples, as well as straightforward annotation interfaces and processes, as ways to mitigate potential biases.
Humans in the Loop believes that informed annotators are empowered to make better decisions when annotating a dataset. Therefore, we always include background information about the AI project and its purpose in the Guidelines shared with annotators.
Avoiding bias in computer vision
This section presents the best practices and research of Humans in the Loop on how to avoid bias through better dataset collection and annotation. Use the link below to find out more about our whitepapers and download them.
Supporting research in ethical annotation
Humans in the Loop has participated in the research of the Critical AI Lab at the Weizenbaum Institute, TU Berlin.