The University of Arizona

Automatically Identifying, Counting, and Describing Wild Animals in Camera-trap Images with Deep Learning.

Series: TRIPODS Seminar
Location: ENR2 S210
Presenter: Mohammad (Arash) Norouzzadeh, Ph.D. candidate in Computer Science at the University of Wyoming.

Having accurate, detailed, and up-to-date information about the location and behavior of animals in the wild would revolutionize our ability to study and conserve ecosystems. We investigate the ability to automatically, accurately, and inexpensively collect such data, which could transform many fields of biology, ecology, and zoology into “big data” sciences. Motion-sensor “camera traps” enable collecting wildlife pictures inexpensively, unobtrusively, and frequently. However, extracting information from these pictures remains an expensive, time-consuming, manual task. We demonstrate that such information can be automatically extracted by deep learning, a cutting-edge type of artificial intelligence. We train deep convolutional neural networks to identify, count, and describe the behaviors of 48 species in the 3.2-million-image Snapshot Serengeti dataset. Our deep neural networks automatically identify animals with over 93.8% accuracy, and we expect that number to improve rapidly in the years to come. More importantly, if our system classifies only the images it is confident about, it can automate animal identification for 99.3% of the data while still performing at the same 96.6% accuracy as crowdsourced teams of human volunteers, saving more than 8.4 years (at 40 hours per week) of human labeling effort (i.e., over 17,000 hours) on this 3.2-million-image dataset. These efficiency gains highlight the importance of using deep neural networks to automate data extraction from camera-trap images. Our results suggest that this technology could enable the inexpensive, unobtrusive, high-volume, and even real-time collection of a wealth of information about vast numbers of animals in the wild.
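The confidence-gating idea described above can be sketched in a few lines: the network auto-labels an image only when its top softmax probability clears a threshold, and routes everything else to human volunteers. This is an illustrative sketch, not the speaker's implementation; the function name, threshold value, and data layout are all assumptions.

```python
# Hypothetical sketch of confidence-gated classification: auto-label an
# image only when the network's top softmax probability clears a
# threshold; otherwise queue it for human volunteers. The threshold
# value (0.9) is illustrative, not taken from the talk.

def gate_predictions(softmax_outputs, threshold=0.9):
    """Split predictions into auto-labeled and human-review queues.

    softmax_outputs: list of (image_id, class_probabilities) pairs.
    Returns (auto, human): auto-labeled (image_id, class_index) pairs
    and image_ids deferred to human labeling.
    """
    auto, human = [], []
    for image_id, probs in softmax_outputs:
        top_class = max(range(len(probs)), key=lambda i: probs[i])
        if probs[top_class] >= threshold:
            auto.append((image_id, top_class))  # trust the network
        else:
            human.append(image_id)              # send to volunteers
    return auto, human

# Toy example: two confident predictions, one uncertain one.
outputs = [
    ("img_001", [0.02, 0.95, 0.03]),
    ("img_002", [0.60, 0.30, 0.10]),
    ("img_003", [0.01, 0.01, 0.98]),
]
auto, human = gate_predictions(outputs, threshold=0.9)
```

Raising the threshold trades automation rate for accuracy: fewer images are auto-labeled, but those that are carry higher-confidence predictions, which is how the system can match human-volunteer accuracy on 99.3% of the data.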

Future directions to investigate are:

1- Finding more accurate and more data-efficient methods
2- Applying the proposed techniques to other camera-trap projects
3- Extracting other types of information from camera-trap images
4- Integrating the proposed tools with an existing citizen science platform (the Snapshot Serengeti project)

(Pizza, coffee & tea will be provided at 11:20am)