NREC developed advanced machine vision techniques for safety around agricultural vehicles. Robotics offers the opportunity to improve efficiency on the farm, but these systems must reliably detect nearby workers to ensure their safety.
Enabling the full promise of robotics in agriculture requires reliable detection and tracking of human coworkers so that people and machines can perform the required tasks effectively and safely. Many agricultural machines are powerful and potentially dangerous, and certain tasks require humans to work in close proximity to them. Other applications may need to enforce a safety buffer, and agricultural fields generally have minimal access controls. Even smaller agricultural robots often need to know where the people in their environment are in order to complete their tasks effectively.
Our previous work resulted in a spatially distributed, multi-vehicle system of autonomous tractors that shared task responsibilities with multiple human coworkers to accomplish agricultural operations in a citrus orchard. This system has demonstrated over 2,400 km of autonomous operation and performed significant useful work at a higher productivity level than current methods. The system includes a sophisticated obstacle detection system, but a key limiting factor was the reliable detection of people when partially occluded by tree branches and weeds, or when lying on the ground or in other non-standard poses [1], [2].
[1] Carnegie Mellon University. “Integrated Automation for Sustainable Specialty Crops.”
[2] S. J. Moorehead, C. K. Wellington, B. J. Gilmore, and C. Vallespi, “Automating orchards: A system of autonomous tractors for orchard maintenance,” in IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) Workshop on Agricultural Robotics, 2012.
The aim of this work was to advance the state of the art in detection and tracking of people in agricultural environments. We benchmarked many current pedestrian-detection methods, developed new ones, and released the dataset below [3], [4]. We hope that this common benchmark allows the field to move forward with a spirit of both competition and cooperation.
[3] T. Tabor, Z. Pezzementi, C. Vallespi, and C. Wellington, “People in the Weeds: Pedestrian Detection Goes Off-road,” in 2015 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Purdue University, West Lafayette, IN, 2015.
[4] Z. Pezzementi, T. Tabor, P. Hu, J. Chang, D. Ramanan, C. Wellington, B. Babu, and H. Herman. Comparing apples and oranges: Off-road pedestrian detection on the National Robotics Engineering Center agricultural person-detection dataset. J Field Robotics. 2017;00:1–19. DOI: 10.1002/rob.21760. arXiv: 1707.07169.
The NREC Person Detection Dataset is a collection of off-road videos taken in an apple orchard and an orange grove. The videos were collected with visible people in a variety of outfits, locations, and times. We encourage you to train a detector on our dataset and submit your ROC curves for display on this webpage.
Labels are provided in Pascal VOC format, and images are provided as rectified PNGs. The data is partitioned into a training set for algorithm training, a validation set for algorithm tuning and development results, and a test set for final evaluation and publication. We ask that the test set be used only after development is complete, in order to preserve the integrity of the dataset.
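For illustration, each Pascal VOC label file is an XML document with one <object> entry per person bounding box. Below is a minimal parsing sketch in Python; the annotation path shown is hypothetical, and the official tooling linked further down should be preferred for evaluation.

```python
import xml.etree.ElementTree as ET

def load_voc_boxes(xml_path):
    """Parse a Pascal VOC annotation file into (label, xmin, ymin, xmax, ymax) tuples."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall("object"):
        label = obj.find("name").text
        bb = obj.find("bndbox")
        coords = [int(float(bb.find(k).text)) for k in ("xmin", "ymin", "xmax", "ymax")]
        boxes.append((label, *coords))
    return boxes

# Hypothetical path inside one of the extracted archives:
# print(load_voc_boxes("apples_left_labeled/annotations/frame_000001.xml"))
```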
Details, analysis, and initial results on the dataset can be found in our paper. Please cite this paper in any work making use of the dataset:
Z. Pezzementi, T. Tabor, P. Hu, J. Chang, D. Ramanan, C. Wellington, B. Babu, and H. Herman. Comparing apples and oranges: Off-road pedestrian detection on the National Robotics Engineering Center agricultural person-detection dataset. J Field Robotics. 2017;00:1–19. DOI: 10.1002/rob.21760. arXiv: 1707.07169.
Scripts for working with the dataset are available at: https://github.com/zpezz/nrecAgPersonEval
The benchmark requires only the “apples left labeled” and “oranges left labeled” sets; the right images are provided to enable stereo. Additional left and right images, including 7 frames (1 second) before the labeled data begins, are available in the unlabeled files. These can be used to compute motion features for detection, or for visual odometry and new view synthesis benchmarking; a minimal motion-feature sketch follows below. Finally, the unassigned.zip file includes additional labeled data not included in the benchmark, for instance, videos taken at night.
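As one example of using the extra frames, dense optical flow between consecutive unlabeled frames yields a per-pixel motion feature. This is a minimal sketch using OpenCV, not part of the official tooling; the frame paths are hypothetical.

```python
import cv2

# Two consecutive frames from the unlabeled archives (hypothetical paths).
prev = cv2.imread("apples_left_unlabeled/frame_000000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("apples_left_unlabeled/frame_000001.png", cv2.IMREAD_GRAYSCALE)

# Dense Farneback optical flow: one (dx, dy) vector per pixel, which can be
# stacked with the RGB channels as extra input features for a detector.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
print(flow.shape)  # (height, width, 2)
```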
PLEASE READ: The links for each data set will take you to a corresponding folder on Box.com. Each folder contains .zip files of the data, labeled numerically: “example-file-name-1.zip, example-file-name-2.zip, etc.” Please download and open them in order, starting with 1.
Benchmark
apples left labeled
oranges left labeled
Right Stereo Images for Benchmark
apples right labeled
oranges right labeled
Other Images from Benchmark videos
(1 second of video before labels, images not subsampled, variable length after labels)
apples left unlabeled
apples right unlabeled
oranges left unlabeled
oranges right unlabeled
Pose Data in KITTI Odometry Format
(requires Benchmark labeled and unlabeled data for alignment; see the parsing sketch after this list)
poses
Other Videos
unassigned video
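For reference, each line of a KITTI odometry-format pose file contains the 12 row-major entries of a 3x4 [R | t] transform for one frame. A minimal reading sketch follows; the file name is hypothetical.

```python
import numpy as np

def load_kitti_poses(path):
    """Read KITTI odometry-format poses: one 3x4 row-major [R | t] per line."""
    poses = []
    with open(path) as f:
        for line in f:
            vals = np.array(line.split(), dtype=float)
            poses.append(vals.reshape(3, 4))  # transforms frame i into frame 0
    return poses

# Hypothetical file name inside the poses archive:
# poses = load_kitti_poses("poses/apples_00.txt")
```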
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Below are standings on the various benchmark metrics for the dataset. Please see https://github.com/zpezz/nrecAgPersonEval for the detailed definition of each category and implementation of the evaluation criteria.
ROCs are shown for a bounding box overlap (intersection over union: IoU) requirement of 0.5. Values shown in the table are average detection rate, which averages performance over IoU thresholds from 0.3 to 0.7 in steps of 0.1. The average over each constituent ROC is computed by sampling the curve at false positive rates between 10^-3 and 10^-1 in multiplicative steps of 10^(1/4).
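The sketch below illustrates this averaging procedure. The authoritative implementation is the nrecAgPersonEval repository linked above; the interface here (a dict of per-IoU ROC curves) is assumed purely for illustration.

```python
import numpy as np

def average_detection_rate(roc_curves):
    """Average detection rate over IoU thresholds 0.3-0.7 (illustrative).

    roc_curves: dict mapping IoU threshold -> (fpr, tpr), where fpr and tpr
    are arrays sorted by increasing false positive rate.
    """
    # False positive rates from 10^-3 to 10^-1 in multiplicative steps of 10^(1/4).
    sample_fprs = 10.0 ** np.arange(-3.0, -1.0 + 1e-9, 0.25)
    per_iou = []
    for iou in (0.3, 0.4, 0.5, 0.6, 0.7):
        fpr, tpr = roc_curves[iou]
        # Detection rate interpolated at each sampled false positive rate.
        per_iou.append(np.interp(sample_fprs, fpr, tpr).mean())
    return float(np.mean(per_iou))
```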
See the “Other Videos” section for visualization of the top algorithms’ detections on the full test set.
To submit your own results, contact humandetection@nrec.ri.cmu.edu
MFC
Z. Pezzementi, T. Tabor, P. Hu, J. Chang, D. Ramanan, C. Wellington, B. Babu, and H. Herman. Comparing apples and oranges: Off-road pedestrian detection on the National Robotics Engineering Center agricultural person-detection dataset. J Field Robotics. 2017;00:1–19. https://doi.org/10.1002/rob.21760. arXiv preprint arXiv:1707.07169.
RPN+BF
Ref: Zhang, L., Lin, L., Liang, X., and He, K. Is faster R-CNN doing well for pedestrian detection? ECCV 2016.
Notes: Using default settings of their open-source implementation
MSCNN
Ref: Cai, Z., Fan, Q., Feris, R. S., and Vasconcelos, N. A unified multi-scale deep convolutional neural network for fast object detection. ECCV 2016.
Notes: Using default settings of their open-source implementation
DetectNet
Ref: Tao, A., Barker, J., and Sarathy, S. (2016). DetectNet: Deep neural network for object detection in DIGITS. Website.
Notes: Using default settings of their open-source implementation
This work was supported by the USDA National Institute of Food and Agriculture as part of the National Robotics Initiative under award number 2014-67021-22171.
National Robotics Engineering Center
10 40th Street
Pittsburgh, PA 15201
+1 (412) 681-6900
Carnegie Mellon University