Using annotations to optimize the plant phenotyping process
The Global Wheat Head Dataset is an international research initiative led by CAPTE, the University of Saskatchewan, and the University of Tokyo, which aims to provide a large and diverse dataset for wheat head localization. It is the first dataset of its kind and includes a very large range of cultivars contributed by 16 institutions across Europe, Africa, North America, Oceania, and Asia.
Together, these institutions launched the Global Wheat Head Challenge, an international data science competition held as part of ICCV 2021. The objective of the challenge is to produce a model capable of locating wheat heads across a wide variety of data, without bias. Previously, the initiative had relied on university students for labeling, but in order to expand the existing wheat head dataset and ensure the highest possible quality, the researchers opted for the professional annotation services of Humans in the Loop.
For several years, agricultural research has used sensors to observe plants at key moments in their development. Nonetheless, some important plant traits are still measured by hand: wheat heads, for example, are counted manually from digital images, a long and tedious process. This is where the Global Wheat Head initiative saw an opportunity to optimize the plant phenotyping process using deep learning.
However, this task can be visually challenging. Dense wheat plants often overlap, and wind can blur the photographs, making it difficult to identify single heads. Additionally, appearances vary with maturity, colour, genotype, head orientation, and the presence or absence of barbs. Finally, because wheat is grown worldwide, different varieties, planting densities, patterns, and field conditions must be considered. To end manual counting, the initiative needed a robust algorithm that could address all of these issues.
The researchers chose Humans in the Loop for the task thanks to our commitment to ethical AI labeling: “We know that labelling is not the most interesting career. HITL is more than labelling, it is also using the money to help people to be trained for another career and improve their situation, while having an immediate salary to support their family. They guarantee a certain ethic in the labelling business, which was why we chose them in the first place.”
Etienne David from INRAe, who managed the labeling operations, told us:
We did a benchmark of 20+ labelling companies and chose HITL for the reactiveness of the project managers, competitive prices, and tech for good aspects.
The main concern at the beginning of the project was training the team to recognize wheat heads in the images, which can be a challenging task for non-agronomists. However, with a few rounds of calibration, Humans in the Loop was able to train the entire team up to the researchers' standards. We also applied our standard process of quick iterations, which helped to address systematic mistakes and clarify open questions.
Humans in the Loop started with bounding box annotation, a useful learning stage that allowed the annotators to get familiar with the data and its specificities. For example, the images contained many additional objects that were easy to mistake for wheat heads, and the heads themselves varied greatly in shape and size. A very common edge case was overlapping heads, which required a clear handling protocol.
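Overlap between neighbouring boxes is conventionally measured with intersection-over-union (IoU); a minimal sketch of the computation is below. The function name and box format (corner coordinates) are illustrative assumptions, not the project's actual protocol or tooling:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2).

    Returns a value in [0, 1]: 0 for disjoint boxes, 1 for identical ones.
    Illustrative helper, not part of the Global Wheat Head tooling.
    """
    # Coordinates of the intersection rectangle (may be empty).
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A labeling protocol might, for instance, flag box pairs above some IoU threshold for review so that overlapping heads are handled consistently across annotators.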
At the end of the project, the annotation was switched to polygons. By that time, the annotators were already familiar with the different types of wheat heads and the various overlapping cases, which made the annotations even more precise, with deadlines almost always met or beaten.
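Polygon labels fit a head's outline much more tightly than a bounding box. One way to see the difference is to compare a polygon's area, computed with the standard shoelace formula, against the area of its bounding box: the gap is background a box label would have included. The sketch below is a generic illustration, not code from the project:

```python
def polygon_area(points):
    """Area of a simple polygon via the shoelace formula.

    `points` is a list of (x, y) vertices in order (clockwise or
    counter-clockwise). Illustrative helper, not project tooling.
    """
    n = len(points)
    twice_area = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the polygon
        twice_area += x1 * y2 - x2 * y1
    return abs(twice_area) / 2.0
```

For a triangular outline inside a box, for example, the polygon covers only half the box's area, so switching to polygons can substantially reduce the background pixels included in each label.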
Etienne David shared that “Humans in the Loop reduced the labelling time and the management of internal labelers which was crucial in our choice to outsource labeling work. Having one project manager simplifies the labelling burden.”
With the guidance of the researchers from the Global Wheat Head initiative, Humans in the Loop annotated more than 87,000 wheat heads in close to 2,000 images. This expanded the existing dataset with 4 new countries and 22 measurement sessions, helping to reinforce the quality of the existing data. The complete dataset was published on the Global Wheat Head Challenge page, and over the course of 2 months it gathered 2,400+ submissions from participants in over 25 countries.
The resulting solutions from the challenge are now available for researchers and producers around the world to use, making it possible to assess wheat head density, health, and maturity much more effectively.
Interested in implementing a human in the loop in your agricultural AI pipeline? Get in touch with us and we would be happy to set up a call.