About the series
As you know, we at Humans in the Loop have a great love and appreciation for a well-designed annotation tool. After the great feedback on the reviews we published of the best platforms on the market here and here, we decided that it's time for a deep dive into some of our all-time favorites!
This article is the fifth in a series of 10 reviews, published weekly. Our first four reviews, on Supervise.ly, TrainingData.io, Annotate.Online and Hasty.ai, can be found here, here, here and here. We will be adding links to the other articles as they are released.
The whole series is based on the premise of transparency and honesty and none of these reviews are sponsored. They are just our way to give props to the best teams out there working on making annotation easier for AI teams, and to share some of the know-how that we have been accumulating over the past few years as a professional annotation company.
As in previous reviews, our parameters are:
- project management
If you have additional questions or want to get in touch with us to beta test or feature your tool in an upcoming article, feel free to email us at email@example.com!
Unlike other platforms that we have featured so far, Picterra is not exactly an annotation tool but rather a platform for training AI models on satellite imagery.
The platform was founded in 2016 in Switzerland with the mission to allow individuals and organizations to process and analyze the ever-growing volume of earth observation imagery, primarily from satellites and drones. It democratizes access to data and expertise by allowing non-experts to train their own AI detectors and receive training through the Picterra University.
The free version of the platform supports the processing of 500 megapixels (MP) of imagery per month. For larger volumes, the platform offers a monthly/yearly subscription and additional MP credits.
The Picterra platform is most suitable for large maps of tiled imagery taken from drones and satellites with a large number of objects that need to be detected. From water bodies and greenery to buildings, cars, birds and cattle, the platform can be used on a variety of scales and for a variety of purposes.
The platform works natively with 8-bit RGBA GeoTIFF (.tiff) imagery. Other formats (.png, .jpg, multispectral, single-band, 16- or 32-bit) are also supported: non-georeferenced imagery is georeferenced automatically, and images with more than three channels are mapped down to RGB. Recently, the platform has also added the option to purchase satellite data directly and to import online maps.
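To make the channel mapping concrete, here is a minimal sketch of what reducing a multi-band pixel to RGB could look like. This is purely illustrative (keep the first three bands, replicate a single band across all three); Picterra's actual conversion logic is not public and may differ.

```python
def to_rgb(pixel):
    """Reduce a multi-band pixel to RGB.

    Illustrative simplification: keep the first three bands, or
    replicate a single band across R, G and B. Picterra's real
    mapping may be more sophisticated (e.g. band selection or
    rescaling of 16/32-bit values to 8-bit).
    """
    if len(pixel) < 3:
        # single-band imagery: replicate the band across R, G, B
        return (pixel[0], pixel[0], pixel[0])
    return tuple(pixel[:3])

# a 4-band pixel (e.g. RGB + near-infrared) collapses to its first three bands
print(to_rgb((120, 200, 90, 255)))  # (120, 200, 90)
print(to_rgb((42,)))                # (42, 42, 42)
```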
Once the imagery is uploaded, you can go to 'Training mode' to train a detector by annotating relevant data. The first step is to draw a 'training area' within the imagery and then annotate all objects inside of it using polygons, bounding boxes or circles. Areas and object outlines can also be uploaded as GeoJSON.
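For readers unfamiliar with the format, a polygon outline uploaded as GeoJSON is just a FeatureCollection like the one below. The coordinates here are made up for illustration (longitude/latitude pairs, with the ring closing on its first point); they are not from a real project.

```python
import json

# hypothetical GeoJSON FeatureCollection with one polygon outline;
# coordinates are illustrative (lon, lat), not real annotation data
outline = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "properties": {},
            "geometry": {
                "type": "Polygon",
                "coordinates": [[
                    [6.5668, 46.5191],
                    [6.5674, 46.5191],
                    [6.5674, 46.5196],
                    [6.5668, 46.5196],
                    [6.5668, 46.5191],  # the ring closes on its first point
                ]],
            },
        }
    ],
}

# serialize to the .geojson text you would actually upload
print(json.dumps(outline)[:30], "...")
```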
One big difference between Picterra and other platforms that we have reviewed is that it’s largely meant for individual use. It does not support user permissions or project management functionalities (e.g. assigning tasks for annotations, reviewing work) in its standard version and these features are only available in the Enterprise tier.
Currently, it's perfect for individual use, especially drone enthusiasts or small AI departments within organizations. We are hoping that as it grows and more teams start using it, Picterra will implement even more project management features. For now, they mostly relate to the file and imagery management suite, as well as the model library that we will touch upon in the next section.
Once a model has been trained and applied to images, there are also quite a few useful statistics that you can access, including the total number of objects detected in the image or the total detected area. Using the 'Generate a report' function, users can get PDF or web reports including a heatmap of the region or a change tracking comparison.
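Statistics like "total detected area" are just aggregates over the detection polygons. As a rough sketch (not Picterra's implementation), the planar area of each polygon can be computed with the shoelace formula and summed, assuming coordinates are already in a projected CRS with metric units:

```python
def polygon_area(ring):
    """Planar area of a simple polygon via the shoelace formula.

    Assumes the ring's vertices are in a projected CRS (e.g. metres),
    listed in order and without the closing duplicate point.
    """
    area = 0.0
    for (x1, y1), (x2, y2) in zip(ring, ring[1:] + ring[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# two hypothetical detections, coordinates in metres
detections = [
    [(0, 0), (10, 0), (10, 5), (0, 5)],        # 10 m x 5 m -> 50 m^2
    [(20, 20), (24, 20), (24, 24), (20, 24)],  # 4 m x 4 m  -> 16 m^2
]
total_area = sum(polygon_area(d) for d in detections)
print(len(detections), total_area)  # 2 66.0
```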
The automation workflow is simple and accessible. Once you have completed the initial annotations in 'Training mode', you can select some 'Test areas' on the map and click 'Train detector'. The custom model is then trained and applied to the test areas so that you can review how well it is detecting objects so far. Using 'Accuracy areas', you can also test the detection accuracy against your own annotations.
Essentially, this creates a simple human-in-the-loop workflow where the user monitors how the model is performing and annotates additional data when improvements are needed. For example, if the model is performing well on rural areas but not urban ones, you can easily annotate some additional urban areas and re-train it. Finally, when the model has achieved good levels of accuracy, it can be applied over the whole set of imagery using ‘Detection Mode’.
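The annotate / train / review loop described above can be sketched schematically as follows. Everything here is a hypothetical stand-in (the `train()` function, the accuracy target, the area names); it is not the Picterra API, just the shape of the iteration:

```python
# Schematic of the human-in-the-loop workflow: annotate, train, review,
# annotate more where the detector underperforms, and repeat.
# train() is a toy stand-in, NOT a Picterra API call.
def train(annotated_areas):
    # pretend accuracy (in %) grows with the number of annotated areas
    return min(50 + 10 * len(annotated_areas), 95)

TARGET_ACCURACY = 90                      # hypothetical quality bar
annotated_areas = ["rural-1", "rural-2"]  # initial training areas

accuracy = train(annotated_areas)
while accuracy < TARGET_ACCURACY:
    # the detector underperforms somewhere (e.g. urban areas),
    # so annotate an additional area of that kind and re-train
    annotated_areas.append(f"urban-{len(annotated_areas) - 1}")
    accuracy = train(annotated_areas)

print(accuracy, len(annotated_areas))  # 90 4
```

Once the loop exits, the trained detector would be run over the full imagery set, which corresponds to switching to 'Detection Mode' in the platform.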
The custom model training is the best thing about Picterra, and its potential is enormous: anyone in the Picterra community can train their own models, share them, or draw on the model library. Picterra is also actively involved in humanitarian and ecological causes, and its applications in wildfire tracking, disaster management, and wildlife preservation are amazing.
Hope this was helpful! If you are working on an AI project and are currently reviewing which tool might be the most appropriate for it, get in touch with us and we would be happy to have a call and advise you on the best way to build your pipeline.