Auto-fit: A human-machine collaboration feature for fitting bounding box annotations


Bibliographic Details
Main Authors: Cruz, Meygen, Keh, Jefferson, Velasco, Neil Oliver M., Jose, John Anthony, Sybingco, Edwin, Dadios, Elmer P., Madria, Wira F., Miguel, Angelimarie
Format: text
Published: Animo Repository 2020
Online Access:https://animorepository.dlsu.edu.ph/faculty_research/12591
Institution: De La Salle University
Description
Summary: Large, high-quality annotated datasets are essential for training deep learning models, but they are expensive and time-consuming to create. A large portion of annotation time goes into adjusting bounding boxes to fit the desired object. In this paper, we propose facilitating human-machine collaboration through an Auto-Fit feature that automatically tightens an initial bounding box around the object being annotated. The challenge lies in making this feature class-agnostic so that it can be used regardless of the type of object being annotated.

This is achieved by using various computer vision algorithms to extract the desired object as a foreground mask, determine the coordinates of its extremities, and redraw the bounding box based on these new coordinates. The best results were achieved with the GrabCut algorithm, which attained an accuracy of 84.69% on small boxes. The PyTorch implementation of ResNet-101 pre-trained on the COCO train2017 dataset is also used as a foreground extractor in one iteration of the implementation, in order to provide a baseline comparison between the performance of a computer vision-based solution and one based on a standalone object detection model. The latter garnered an accuracy of 83.04% on small boxes, showing that the computer vision-based solution is able to surpass the accuracy of a standalone object detection model.
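The box-tightening step described in the abstract (extract a foreground mask, find its extremities, redraw the box) can be sketched in plain Python. This is an illustrative sketch, not the paper's implementation: the function name and the list-of-lists mask representation are assumptions, and in practice the mask would come from GrabCut (e.g. OpenCV's `cv2.grabCut`) or a pretrained segmentation model rather than being hand-written.

```python
def tighten_box(mask):
    """Return the tightest bounding box (x_min, y_min, x_max, y_max)
    around the foreground pixels of a binary mask, given as a list of
    rows of 0/1 values. Returns None if the mask has no foreground.

    In Auto-Fit, `mask` would be the foreground extracted from inside
    the annotator's initial (loose) bounding box.
    """
    # Collect the column indices of every foreground pixel.
    xs = [x for row in mask for x, v in enumerate(row) if v]
    if not xs:
        return None  # nothing to fit: keep the original box instead
    # Collect the row indices of rows that contain any foreground.
    ys = [y for y, row in enumerate(mask) if any(row)]
    # The extremities of the mask define the tightened box.
    return (min(xs), min(ys), max(xs), max(ys))

# Toy example: a loose 5x6 region whose foreground occupies only the middle.
mask = [
    [0, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
]
print(tighten_box(mask))  # (1, 1, 4, 3)
```

The class-agnostic property the abstract emphasizes falls out of this design: the tightening logic only looks at which pixels are foreground, so any mask source (GrabCut or a learned extractor) can be swapped in without changing the box computation.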