Vision in Dogs

How do dogs see us? The average field of vision in dogs is about 250 degrees, while humans have only about 180 degrees.

a dog's nose in closeup as it sniffs the camera
A still from one of the home movies, featuring a dog interacting with the camera. (Emory Canine Cognitive Neuroscience Lab)

Putting cute dogs in an MRI machine and watching their brains while they watch home movies might sound like a rollicking good time just for its own sake. As a bonus, it can also be educational.

A team of scientists has done just that, using machine learning to decode the visual processing taking place inside the minds of a pair of pooches. They discovered a fascinating difference between canine and human perception: dogs are far more visually attuned to actions than to who or what is performing them.

This could be an important piece of the canine cognition puzzle, since it reveals what a dog’s brain prioritizes when it comes to vision.

“While our work is based on just two dogs, it offers proof of concept that these methods work on canines,” says neuroscientist Erin Phillips, then of Emory University, now at Princeton.

“I hope this paper helps pave the way for other researchers to apply these methods on dogs, as well as on other species, so we can get more data and bigger insights into how the minds of different animals work.”

The research, as Phillips noted, was conducted on two dogs, Daisy and Bhubo. Using a gimbal and a selfie stick, the team had filmed three 30-minute videos of dog-specific content: dogs running around, humans interacting with dogs and giving them pets or treats, vehicles passing by, humans interacting with each other, a deer crossing a path, a cat in a house, and dogs walking on leashes.

bhubo the dog preparing to watch a movie
Bhubo and his human, Ashwin Sakhardande, preparing for a movie. Bhubo’s ears are taped down to keep noise-dampening earplugs in place, because MRIs are very loud. (Emory Canine Cognitive Neuroscience Lab)

Daisy and Bhubo were each shown these movies in three 30-minute sessions, for a total of 90 minutes, while relaxing unrestrained in an fMRI machine. This remarkable accomplishment was made possible by training techniques developed by psychologist Gregory Berns, who first managed to take an MRI of a fully awake, unrestrained dog a decade ago.

The researchers were thus able to scan the brains of Daisy and Bhubo as they sat awake, alert, and comfortable in the machine, watching home movies filmed just for them. Sounds pretty nice, actually.

“They didn’t even need treats,” says Phillips. “It was amusing because it’s serious science, and a lot of time and effort went into it, but it came down to these dogs watching videos of other dogs and humans acting kind of silly.”

Daisy the dog in the fMRI machine
Daisy the dog taking a turn in the fMRI machine. Her human, Rebecca Beasley, is not pictured. (Emory Canine Cognitive Neuroscience Lab)

The video data was segmented by timestamp and labeled with classifiers: objects (such as dogs, humans, vehicles, or other animals) and actions (such as sniffing, eating, or playing). This information, along with the brain activity of the two dogs, was fed into a neural network called Ivis, which was designed to map brain activity to those classifiers.
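For a rough sense of what that labeling step looks like, here is a minimal Python sketch that turns timestamped annotations into binary label vectors. The annotation format, label names, and helper function are invented for illustration; the published pipeline may organize this quite differently.

```python
import numpy as np

# Hypothetical timestamped annotations for one video (times in seconds).
annotations = [
    {"start": 0.0, "end": 4.0, "objects": {"dog"}, "actions": {"sniffing"}},
    {"start": 4.0, "end": 9.0, "objects": {"dog", "human"}, "actions": {"playing"}},
    {"start": 9.0, "end": 12.0, "objects": {"vehicle"}, "actions": set()},
]

OBJECT_LABELS = ["dog", "human", "vehicle"]
ACTION_LABELS = ["sniffing", "eating", "playing"]

def one_hot(present, vocabulary):
    """Binary vector marking which vocabulary entries appear in a segment."""
    return np.array([1.0 if label in present else 0.0 for label in vocabulary])

# One row of object flags and one row of action flags per video segment.
object_matrix = np.stack([one_hot(a["objects"], OBJECT_LABELS) for a in annotations])
action_matrix = np.stack([one_hot(a["actions"], ACTION_LABELS) for a in annotations])

print(object_matrix)  # [[1. 0. 0.], [1. 1. 0.], [0. 0. 1.]]
```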

Two humans also watched the videos while undergoing fMRI scans; that data was given to Ivis as well.

The AI was able to map the human brain data to the classifiers with 99 percent accuracy, for both the object and action classifiers. With the dogs, Ivis was a little shakier: it didn't work at all for the object classifiers, but for the actions, it mapped the brain activity to the classifiers with between 75 and 88 percent accuracy.
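As a rough illustration of how that kind of decoding accuracy is measured, the sketch below trains a simple classifier to predict action labels from simulated voxel data and scores it on held-out segments. A scikit-learn logistic regression stands in for the Ivis network used in the study, and all shapes, labels, and numbers here are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated fMRI data: one row of voxel activations per video segment.
n_segments, n_voxels = 600, 2000
X = rng.normal(size=(n_segments, n_voxels))

# Hypothetical action label for each segment, derived from the timestamps.
y = rng.choice(["sniffing", "eating", "playing"], size=n_segments)

# Hold out segments to test whether brain activity predicts the label.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# With random data this hovers near chance (about 33 percent); the study
# reported 75-88 percent for action classifiers in dogs, 99 percent in humans.
print(f"Decoding accuracy: {accuracy_score(y_test, decoder.predict(X_test)):.2f}")
```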

“We humans are very object oriented,” Berns says. “There are 10 times as many nouns as there are verbs in the English language because we have a particular obsession with naming objects. Dogs appear to be less concerned with who or what they are seeing and more concerned with the action itself.”

Dogs, he added, perceive the world in significantly different ways than humans do. They distinguish only shades of what we'd perceive as the blue and yellow parts of the spectrum, but they have a higher density of motion-sensitive visual receptors.

This could be because dogs need to be more aware of threats in their environment than humans do; or it could have something to do with their reliance on other senses; or perhaps both. Humans are very visually oriented, but a dog's most powerful sense is smell, and a much larger proportion of the canine brain is devoted to processing olfactory information.

Mapping brain activity to olfactory input might be a trickier experiment to design, but it could be enlightening, too. As could conducting further, more detailed research into the visual perception of dogs, and potentially of other animals, in the future.

“We showed that we can monitor the activity in a dog’s brain while it is watching a video and, to at least a limited degree, reconstruct what it is looking at,” Berns says. “The fact that we are able to do that is remarkable.”

The research has been published in the Journal of Visualized Experiments.