Advanced technology is driving research in the neuroscience of cognition.
Traditionally, when students are tested on a subject, they use paper and pencil, or perhaps a computer, to demonstrate what they have learned. But what if brain scans could show them in the act of acquiring knowledge and then applying it to questions and problems?
That's the question driving research by David Kraemer, assistant professor of education and a graduate adviser in psychological and brain sciences, and his team of graduate and undergraduate students at the Dartmouth Cognitive Neuroscience of Learning Laboratory in the Department of Education.
"We are looking for patterns of brain activity that are meaningfully related to having learned something, versus not having learned it. There's very little data on how learning something in school changes the brain, especially at the level of an individual student," says Kraemer.
Lab members use advanced technology, including fMRI scans and virtual reality devices, to map the experiences, conditions, and strategies that lead to successful learning. Kraemer works closely with Xia Zhou and Devin Balkcom, both associate professors of computer science, among other faculty.
A common thread runs through the lab's work: the examination of learning in the minds of individuals. But the aims of specific experiments vary widely. For example, the results of one study may help teachers teach STEM concepts more effectively. Another set of experiments could enable sign-language learners to advance more quickly. And a third project explores how to keep people's prior beliefs from biasing the way they process information.
The lab's STEM learning study, a collaboration with Vicki May, a professor of engineering at Thayer School of Engineering, and Solomon Diamond, an associate professor of engineering at Thayer, compares how intermediate engineering students and peers with no advanced physics or engineering education grasp concepts relating to the stability of structures. Joshua Cetron '16 also worked on this project, first for his neuroscience senior honors thesis and then as a full-time researcher in the lab.
Both groups were given a brief overview of Newton's third law: For every action there is an equal and opposite reaction. They were then shown pictures—bridges, lampposts, buildings, and so on—and asked whether arrows superimposed on the images accurately described the forces at work. The intermediate engineering students were correct about 75 percent of the time, while their novice peers were correct about half the time (which is the same as guessing).
The test scores correlated with brain scans. "The two groups are literally seeing the same thing," Kraemer says, referring to sections in the scans where visual processing takes place. "But they are thinking about what they see in different ways."
The brains of the more advanced engineering students showed neural activity in places where the beginners' brains were less active: in the motor cortex, which, interestingly, controls the hands, and in regions of the visual cortex that process higher-level categorical knowledge. This raises the possibility that successful learners gain valuable information through hands-on instruction, a hypothesis the team is now testing in a follow-up study.
"The results can be pretty meaningful for developers of educational curriculums, for example," says Kraemer. "As you're developing an instructional approach, you might have focus groups, with one class learning in one way and another learning in a different way. You could give them a test at the end to see how much they learned, but also give them a brain scan, to see if the brain test and the traditional test together can predict which curriculum leads to better learning and long-term retention."
Another study about cognition could change the way American Sign Language (ASL) is taught online. Participants wear virtual reality goggles and gloves. They learn a small set of commonly used ASL signs and try to use them to communicate, while sensors in the gloves and a computer camera compare their hand motions to the correct positions. The study, conducted in tandem with Gallaudet University in Washington, D.C., is in a preliminary stage, although initial pilot testing seems promising. When complete, it could enhance self-guided ASL tutorials.
Such a system would not replace face-to-face learning, Kraemer says, but it could improve practice. "And then the question is, can you learn from this faster than you would on your own? Down the road, we also want to learn how the brain attributes meaning to what are, initially, unfamiliar hand motions."
Can brain imaging reveal the difference between logical thinking and using facts selectively to support a foregone conclusion? The latter is called "motivated reasoning," and it's the focus of work being done in the lab by Katherine Alfred, Guarini '20, with additional guidance from Professor of Government Brendan Nyhan. Participants are given sets of data relating to two very different questions: "Is this gun control legislation effective at reducing crime rates?" and, "Is this skin cream effective at reducing rashes?"
"So then we get a really good comparison of how people approach a problem when there's just sort of a neutral content—skin cream— versus content they have strong prior beliefs about—guns," says Alfred. "We're looking at how the reasoning process changes depending on the topic, and at interventions designed to improve that process."
All these experiments lead to a similar conclusion, says Kraemer.
"What you've learned about something can change the way that you look at it. "And somebody else can be looking at the very same thing and be thinking something very different."
Charlotte Albright can be reached at charlotte.e.albright@dartmouth.edu.