Figure 1: Visual representation of the angle study used to conduct the experiment. (Image: Zuckerman Institute)

A team of researchers from Columbia’s Zuckerman Institute has traced the pathway the brain uses to perceive the world around it. When we see or hear something, the brain follows a particular chain of processing to build internal representations. The team studied this by measuring simple perceptual responses to stationary lines, and used the results to build a mathematical model that may also explain reflexive responses such as stereotyping in humans.

“The order by which the brain reacts to, or encodes, information about the outside world is very well understood,” said Ning Qian, a neuroscientist and a principal investigator at Columbia’s Mortimer B. Zuckerman Mind Brain Behavior Institute in a report published on the Institute's website. “Encoding always goes from simple things to the more complex. But recalling, or decoding, that information is trickier to understand, in large part because there was no method — aside from mathematical modeling — to relate the activity of brain cells to a person’s perceptual judgment.”

Until now, researchers lacked the neural-mapping techniques and imaging equipment to study how the brain recalls information. Because the path of encoding (forming new memories) was already known, scientists assumed that decoding would follow the same pathway. During encoding, memories start from simple details, and more complex features are added with further observation; this is generally a bottom-up process, and researchers expected a similar low-to-high progression for decoding (recalling processed information and memories).

The team set out to study the way we decode our memories, and the findings showed that this preconceived notion was wrong. According to Qian, “Decoding actually goes backward, from high levels to low.”

The experiment had a simple design to avoid confounded interpretations of the results. Twelve people were asked to perform a series of simple, similar tasks. They viewed a line tilted at a 50-degree angle on a computer screen for half a second. After this fleeting image disappeared, the participants repositioned two dots on the screen to match the angle they remembered seeing.

This angle-matching task was repeated 50 times. The second activity was the same task with a line tilted at 53 degrees. The third activity presented the two lines together: participants had to decode the angle of each line individually and judge which one was greater.

The team expected that people would recall the 50-degree and 53-degree angles individually and then compare them to decide which line was tilted at the greater angle.

“Memories of exact angles are usually imprecise, which we confirmed during the first set of one-line tasks. So, in the two-line task, traditional models predicted that the angle of the 50-degree line would frequently be reported as greater than the angle of the 53-degree line,” said Dr. Qian in the report.
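The traditional prediction can be illustrated with a small simulation. This is an illustrative sketch, not the authors’ actual model: the Gaussian noise and its magnitude are assumptions chosen only to show why independent, imprecise memories of 50- and 53-degree lines would frequently swap order.

```python
import random

random.seed(0)

def noisy_memory(true_angle_deg, noise_sd=5.0):
    # Assumption for illustration: remembered angles carry Gaussian noise
    return random.gauss(true_angle_deg, noise_sd)

def independent_decoding_trial():
    # Traditional model: recall each line's angle independently, then compare
    a50 = noisy_memory(50.0)
    a53 = noisy_memory(53.0)
    return a50 > a53  # a reversal: the 50-degree line judged greater

trials = 10_000
reversals = sum(independent_decoding_trial() for _ in range(trials))
print(f"Reversal rate under independent decoding: {reversals / trials:.1%}")
```

With noise of this size, the 50-degree line is judged greater on a substantial fraction of trials, which is the frequent-reversal pattern the traditional models predicted.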

But what happened astonished everyone. Traditional models failed to explain several findings in the data, which revealed that participants used bi-directional interactions to judge the angles of the individual lines. While they used a low-to-high path to perceive the angles of the lines — encoding — they used the opposite process to decode that information. The subjects seemed to first compare the two angles to judge which was greater and then make a decision, which is not a simple feed-forward thought process at all.

The subjects relied more on context. In simple terms, when asked to recall the two angles, they used the relative angle between the two lines to judge which was greater. Instead of judging the first line and then the second, the brain in effect superimposed both images to grasp the difference in angle first, and that comparison guided the judgment of which line was tilted more. “This was striking evidence of participants employing this reverse decoding method,” said Qian.
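The comparison-first pattern can be sketched the same way. Again, this is a toy illustration under assumed noise levels, not the study’s model: here the high-level judgment (the relative angle) is decoded first, and the individual angle estimates are anchored to it, so order reversals become rare.

```python
import random

random.seed(1)

def reverse_decoding_trial(diff_noise_sd=1.0, angle_noise_sd=5.0):
    # Comparison-first sketch: decode the high-level relative angle first
    # (assumed to be remembered more precisely than either angle alone)
    perceived_diff = random.gauss(53.0 - 50.0, diff_noise_sd)
    # Low-level detail is then filled in, anchored to that comparison
    a50 = random.gauss(50.0, angle_noise_sd)
    a53 = a50 + perceived_diff
    return a50 > a53  # a reversal: the 50-degree line judged greater

trials = 10_000
reversals = sum(reverse_decoding_trial() for _ in range(trials))
print(f"Reversal rate under comparison-first decoding: {reversals / trials:.1%}")
```

Because the two individual estimates now share the same anchor, their relative order is almost never swapped, even though each angle on its own remains imprecise — matching the pattern the participants showed.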

The researchers argue that reverse decoding makes sense, because context matters more to memory than fine detail. Looking at a face, you quickly assess whether someone is frowning, and only when questioned do you try to figure out the exact angles of the eyebrows. “Even your daily experience shows that perception seems to go from high to low levels,” Qian added.

The team feels that knowing how we decode information could make education and data processing easier: we could learn concepts faster, they believe, if teaching worked with the way we form and recall memories.