
From Baby Talk to Baby AI | GuyWhoKnowsThings

We ask a lot of ourselves as babies. Somehow we must grow from sensory blobs into mobile, rational, attentive communicators in just a few years. Picture yourself as a baby with no vocabulary, in a room full of toys and stuffed animals. You pick up a Lincoln Log and your caregiver says, "This is a 'log.'" Over time, you come to understand that "log" does not refer strictly to this particular brown plastic cylinder, nor to brown plastic cylinders in general, but to brown plastic cylinders that embody the characteristics of felled, stripped tree parts, which are also, of course, "logs."

There has been much research and heated debate about how babies accomplish this. Some scientists argue that most of our language acquisition can be explained by associative learning, as we relate sounds to sensations, much the way dogs associate the sound of a bell with food. Others claim that features built into the human mind have shaped the forms of all languages and are crucial to our learning. Still others maintain that toddlers build their understanding of new words on top of their understanding of other words.

This debate advanced on a recent Sunday morning, as Tammy Kwan and Brenden Lake fed blackberries from a bowl to their twenty-one-month-old daughter, Luna. Luna was dressed in pink leggings and a pink tutu, with a silicone bib around her neck and a soft pink hat on her head. A lightweight GoPro-style camera was attached to the front of it.

"Babooga," she said, pointing a round finger at the berries. Dr. Kwan gave her the rest, and Dr. Lake looked at the empty bowl with amusement. "That's like $10," he said. A light on the camera blinked.

For an hour each week over the past 11 months, Dr. Lake, a psychologist at New York University whose research focuses on human and artificial intelligence, has been attaching a camera to Luna and recording things from her point of view as she plays. His goal is to use the videos to train a language model on the same sensory input that a toddler is exposed to: a LunaBot, if you will. In doing so, he hopes to create better tools for understanding both AI and ourselves. "We think this research finally establishes the link between those two areas of study," Dr. Lake said. "We can finally put them in dialogue with each other."
