A few years ago the pair became mesmerised, like many of us, by an Alaskan webcam broadcasting brown bears from Katmai National Park. They also happened to be seeking a project to hone their machine learning expertise.
“We thought, machine learning is really great at identifying people, what could it do for bears?” Miller said. Could artificial intelligence used for face recognition be harnessed to discern one bear face from another? At Knight Inlet in British Columbia, Canada, Melanie Clapham was pondering the same question. Dr. Clapham, a postdoctoral researcher at the University of Victoria working with Chris Darimont of the Raincoast Conservation Foundation, was keen to explore face recognition technology as an aid to her grizzly bear studies. But her expertise was bear biology, not A.I.
Fortuitously, the four found a match on Wildlabs.net, an online broker of collaborations between technologists and conservationists. Combining their skill sets, Miller and Nguyen volunteered spare time over several years for this passion project that would eventually bear fruit, reporting the results of their experiment last week in the journal Ecology and Evolution. The project they produced, BearID, could help conservationists monitor the health of bear populations in various parts of the world, and perhaps aid work with other animals, too. They got started by looking for other animals that had gotten the deep learning treatment. “In typical engineering fashion, we’re always looking for a shortcut,” Miller said.
They discovered “dog hipsterizer,” a program that found the faces, eyes and noses of dogs in photos and placed rimmed glasses and mustaches on them. “That was where we started,” Nguyen said. Although trained on dogs, dog hipsterizer worked reasonably well on the similarly shaped faces of bears, giving them a programming head start. Nevertheless, Nguyen said, the work’s initial stages were tedious. Creating a training data set for the deep learning program involved examining more than 4,000 photos containing bears and manually drawing boxes around each bear’s eyes, nose and ears so the program could learn to find those features.
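As a rough illustration only, not the team’s actual tooling, a hand-labeled training example of the kind described above might be recorded as one bounding box per facial feature. The file name, coordinates and feature names here are invented:

```python
# Hypothetical annotation record for one training photo: each facial
# feature gets a hand-drawn pixel-coordinate bounding box (x, y, w, h).
def make_annotation(photo, features):
    """Validate and store hand-drawn boxes for a bear's eyes, nose and ears."""
    required = {"left_eye", "right_eye", "nose", "left_ear", "right_ear"}
    missing = required - features.keys()
    if missing:
        raise ValueError(f"photo {photo} is missing boxes for: {sorted(missing)}")
    return {"photo": photo, "boxes": features}

example = make_annotation(
    "brooks_river_0001.jpg",  # invented file name
    {
        "left_eye": (412, 230, 38, 24),
        "right_eye": (520, 228, 40, 25),
        "nose": (455, 310, 60, 45),
        "left_ear": (360, 120, 70, 80),
        "right_ear": (580, 118, 72, 78),
    },
)
```

Thousands of records like this, accumulated by hand, are what let a detector learn where a bear’s features sit in a photograph.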
The system also had to overcome a challenge of brown bears’ physical appearance.
To monitor populations, “we have to be able to recognise individuals,” said Dr. Clapham. But bears lack a distinctive natural marking that can serve as a fingerprint, the way a zebra’s stripes or a giraffe’s spots can. From 4,675 fully labelled bear faces in DSLR photographs, taken at research and bear-viewing sites at Brooks River, Alaska, and Knight Inlet, they randomly split the images into training and testing data sets. Once trained on 3,740 bear faces, the deep learning system went to work “unsupervised,” Dr. Clapham said, to see how well it could tell known bears apart in the remaining 935 photographs.
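The random split the study describes can be sketched in a few lines. The 4,675-image total and the 3,740/935 proportions (roughly 80/20) come from the article; the file names and the random seed here are invented:

```python
import random

def split_dataset(images, n_train, seed=0):
    """Randomly partition labeled images into training and testing sets."""
    shuffled = list(images)
    random.Random(seed).shuffle(shuffled)  # deterministic shuffle for repeatability
    return shuffled[:n_train], shuffled[n_train:]

# 4,675 labeled bear faces -> 3,740 for training, 935 held out for testing.
images = [f"bear_face_{i:04d}.jpg" for i in range(4675)]
train, test = split_dataset(images, n_train=3740)
```

Holding out the 935 test images matters: the system is scored on bears’ faces it was never trained on, which is the only honest measure of whether it can really tell individuals apart.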
But how does it actually tell those bears apart? Before the era of deep learning, “we tried to imagine how humans perceive faces and how we distinguish individuals,” said Alexander Loos, a research engineer at the Fraunhofer Institute for Digital Media Technology, in Germany, who was not involved in the study but has collaborated with Dr. Clapham in the past. Programmers would manually input face descriptors into a computer.
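In that earlier, pre-deep-learning approach, a face was reduced to a vector of hand-chosen measurements, and two faces were compared by the distance between their vectors. A minimal sketch, with invented descriptor values (the specific measurements and numbers are assumptions, not from the study):

```python
import math

def euclidean(a, b):
    """Distance between two hand-crafted face-descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical descriptors, e.g. eye spacing, muzzle length, ear width,
# each normalized to face size. A smaller distance means more similar faces.
bear_a = [0.42, 0.61, 0.33]
bear_b = [0.44, 0.59, 0.35]
bear_c = [0.30, 0.75, 0.20]

closest_to_a = min(["b", "c"],
                   key=lambda name: euclidean(bear_a, {"b": bear_b, "c": bear_c}[name]))
```

Deep learning inverts this workflow: instead of programmers deciding which measurements matter, the network learns its own descriptors directly from the labeled photographs.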
Lesley Evans Ogden is a journalist. © 2020 The New York Times