As an important link in the AI supply chain, we at Humans in the Loop recognize the role that we play in ensuring that AI models do not carry harmful biases.
This is the second part of a whitepaper series dedicated to raising awareness of bias among our customers and partners, and to providing practical examples of how to avoid it.
This part of the series focuses on dataset annotation and the importance of iterations. We touch upon the history of data annotation, best practices for bias-free labeling, strategies for dealing with labeling bias, and how iterations and audits can help identify bias on a continuous basis.