As an important link in the AI supply chain, we at Humans in the Loop recognize the role we play in ensuring that AI models do not carry harmful biases.
We are publishing a whitepaper series dedicated to raising awareness of this issue among our customers and partners and to providing practical examples of how to avoid bias.
The first part of the series covers dataset collection, shedding light on practices in the computer vision community that may result in harmful biases being absorbed by AI systems. We discuss the history of large-scale dataset collection, best practices for ensuring fair representation, and what happens when things go wrong.
Stay tuned for the second part of the series, which will cover dataset annotation.