Classifying species from camera trap photographs
Research team: Andrea Zonnekus, Dr. Andrew Squelch, Dr. Bill Bateman
CIC specialist: Shiv Meka
In collaboration with researchers from Curtin University Behavioural Ecology (CUBE), a machine learning model was built to classify species in wildlife research imagery. The workflow has the potential to shorten wildlife studies by months, as researchers no longer need to spend time manually sorting and tagging images from photo sets. Camera traps, as they are known in the field, are motion-activated cameras; they are typically deployed for a period of time across a range of predetermined locations to monitor animals in the wild. Depending on factors such as light, length of exposure, and type of interference, a camera trap deployment can produce between 10,000 and 100,000 images. A 4-layer convolutional neural network was trained on the manually labelled data, and the resulting model predicted species classes with 93% accuracy. Future work involves identifying individual animals within a species.
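As a rough illustration of the approach, the model described above could be sketched as a small 4-layer convolutional network in PyTorch. This is a minimal sketch only: the channel widths, input resolution, and number of species classes here are assumptions, not the team's actual configuration.

```python
import torch
import torch.nn as nn

class CameraTrapCNN(nn.Module):
    """Illustrative 4-conv-layer classifier for camera trap images.
    Layer sizes and the species count (num_classes) are assumed values."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        x = self.features(x)       # (N, 128, H/16, W/16)
        x = x.mean(dim=(2, 3))     # global average pooling over spatial dims
        return self.classifier(x)  # one logit per candidate species

# Forward pass on a dummy batch of two RGB images
model = CameraTrapCNN(num_classes=10)
logits = model(torch.randn(2, 3, 224, 224))
```

Such a network would be trained with a standard cross-entropy loss against the manually labelled species tags, with the highest-scoring logit taken as the predicted class at inference time.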
This work was supported by resources provided by the Pawsey Supercomputing Centre with funding from the Australian Government and the Government of Western Australia.