A new study suggests that a toddler’s visual experience may play a key role in learning their first words.
After their first year of staring and babbling, babies eventually begin to say their first words. Millions of parents know this milestone well, but researchers at Indiana University and the Georgia Institute of Technology have now identified a major factor behind it: a baby’s first words are strongly tied to their visual experience.
Drawing on theories of statistical learning, the researchers found that the more often an object enters a baby’s visual field, the more likely the baby is to associate that object with its word. Visual memory is key to attaching words to objects: repeated encounters with familiar objects such as a fork or a bottle accumulate into an aggregated experience, and first words are slowly learned for the few objects that are most visually pervasive.
Linda Smith, professor of psychological and brain sciences, and her colleagues set out to see the world through a baby’s eyes. People assume babies see the same things their parents see; after all, they live in the same house and ride in the same cars. It turns out, however, that babies have limited control over their bodies and are not interested in looking at the same things adults look at. As babies develop, their visual world shifts: a 3-month-old and a 1-year-old have very different visual experiences.

The researchers wanted a sense of the visual world of babies who are close to saying their first words, so they fitted 8- to 10-month-old infants with head-mounted video cameras, captured 247 at-home mealtime events, and analyzed the objects in view. Why mealtimes? These activities recur daily and make up a large share of a baby’s visual experience. The results showed a strong correlation between the objects that appeared most frequently in the collected images and the words infants typically learn first. The study’s conclusion suggests that visual experience is a key factor in early word learning.
This discovery may lay the groundwork for new theories of how infants acquire language, and may even inform how children with language deficits and autism are treated. Research has shown for some time that by age 2, toddlers can use social cues and context to work out which words and objects go together. By that age, toddlers are already good at grouping things into categories: if they know that the utensil at home is called a fork, they will know that a fork in a restaurant is called the same thing, even if the two don’t look exactly alike. These skills, however, develop over time.
To help a child learn a word, it may be more effective for the child to see the object constantly than to point at the object and say the word aloud in hopes of teaching it. This could have strong implications for how children with delayed speech and other disorders are treated: through visual exposure rather than other association methods. Furthermore, a child who struggles to learn words might have issues with visual processing, or might live somewhere without a stable set of objects in constant view. Visual differences among children may influence how quickly they begin to say words; a child in an unstable environment, whose visual world is constantly shifting, may struggle more with associating objects and words. While there are many studies on language development, there has been less focus on how babies learn those very first words, and the visual element has been overlooked.
“Taking account of the visual brings a whole new dimension of word-learning into view,” Smith says. “If all you ever worry about is the word side of word-learning, you may be missing half the problem: visual cues that aid language learning.”