At HITL, we are dedicated to fostering an inclusive, ethical, and socially responsible approach to artificial intelligence. Our recent webinar, featuring Morgan Scheuerman, a leading expert in ethical AI, provided a deep dive into the critical intersections of identity, ethics, and AI. Here’s an overview of the compelling insights and practical strategies shared during the session.

Understanding the Motivation Behind Ethical AI Research

One of the most engaging parts of the webinar was when Morgan shared his personal and academic motivations for focusing on ethical AI. His identity as a member of the LGBTQ+ community and his academic background in women’s and gender studies have significantly shaped his research path. This unique perspective illustrates how personal experience and academic insight can together drive impactful research in AI ethics. Morgan’s work is motivated by a desire to address the intersections of gender-based and racial discrimination in AI, ensuring that these systems are fair and inclusive.

How Worker Identities Shape Data Annotation Practices

Morgan’s research delves into how the identities and experiences of data workers influence data annotation practices and AI outcomes. He highlighted the significant impact that familiarity with certain demographics can have on empathy and understanding during the annotation process. For instance, workers found it more challenging to annotate faces from demographics they were less familiar with, underscoring the importance of having diverse representation within data annotation teams. These insights are crucial for understanding how biases can subtly influence AI systems and the importance of addressing these biases through diverse and inclusive practices.

Practical Implementation: HITL's Approach to Ethical AI

At HITL, we’re not just discussing ethical AI—we’re putting it into practice. Tess Valbuena, our COO and Interim CEO, shared practical strategies that HITL employs to incorporate ethical considerations into our training programs and project guidelines. Our ‘Ethical AI’ training module, mandatory for all annotators, focuses on recognizing and mitigating biases. Additionally, we ensure that our project guidelines are communicated in the annotators’ native languages and involve diverse teams in the quality assurance process. Tess also mentioned that we continuously seek ways to improve, such as including more explicit training on positionality and bias, encouraging clients to create culturally contextualized guidelines, and involving annotators in project design and client interactions where appropriate.

Conclusion: The Value of Humans in the Loop

The webinar concluded with a powerful message about the importance of keeping humans in the loop throughout the AI lifecycle. Tess emphasized that human involvement helps to mitigate bias, improve explainability, ensure ethical use, catch errors, and provide real-world context to AI decisions. Morgan reinforced this by highlighting the unique expertise that data workers bring to the table and the need for better communication channels between workers and clients, so that diverse perspectives are considered in AI development.

We extend our deepest gratitude to Morgan Scheuerman for sharing his profound research and insights. Thank you to everyone who joined us in this enlightening session. Stay tuned for more engaging discussions in our HITL webinar series!

