
Meta's Ray-Ban smart glasses use AI to see, hear and speak. How are they?

In a sign that the tech industry is getting weirder, Meta soon plans to release a major update that transforms Ray-Ban Meta, its video-recording camera glasses, into a device only seen in sci-fi movies.

Next month, the glasses will be able to use new AI software to see the real world and describe what you're looking at, similar to the AI assistant in the movie “Her.”

The glasses, which come in various frames starting at $300 and lenses starting at $17, have been used primarily for taking photos and videos and listening to music. But with new artificial intelligence software, they can be used to scan famous landmarks, translate languages, and identify animal breeds and exotic fruits, among other tasks.

To use the AI software, users simply say, “Hey, Meta,” followed by a command like “Look and tell me what kind of dog this is.” The AI then responds with a computer-generated voice that plays through the glasses' small speakers.

The concept of AI software is so novel and peculiar that when we (Brian, who reviewed the Ray-Bans last year, and Mike Isaac, who covers Meta and uses the smart glasses to produce a cooking show) found out about it, we were dying to try it. Meta gave us early access to the update, and we tested the technology over the past few weeks.

We took the glasses to the zoo, to grocery stores and to a museum while grilling the AI with questions and requests.

The result: We were simultaneously entertained by the virtual assistant's mistakes (for example, mistaking a monkey for a giraffe) and impressed when it performed useful tasks like determining that a package of cookies was gluten-free.

A Meta spokesperson said that because the technology was still new, the artificial intelligence would not always get things right and that feedback would improve the glasses over time.

Meta's software also created transcripts of our questions and the AI's answers, which we captured in screenshots. Here are the highlights of our month of coexistence with the Meta assistant.

BRIAN: Naturally, the first thing I had to try out with Meta's AI was my corgi, Max. I looked at the chubby dog and asked, “Hey, Meta, what am I looking at?”

“A cute corgi dog sitting on the floor with his tongue out,” the assistant said. Correct, especially the part about being cute.

MIKE: Meta's AI correctly recognized my dog, Bruna, as a “black and brown Bernese mountain dog.” I had half expected the AI software to think she was a bear, the animal her neighbors most often mistake her for.

BRIAN: After the AI correctly identified my dog, the next logical step was to test it on zoo animals. So I recently visited the Oakland Zoo in Oakland, California, where, for two hours, I observed about a dozen animals, including parrots, turtles, monkeys and zebras. I said: “Hey, Meta, look and tell me what kind of animal that is.”

The AI was wrong the vast majority of the time, in part because many of the animals were caged and farther away. It mistook a primate for a giraffe, a duck for a turtle and a meerkat for a giant panda, among other mix-ups. On the other hand, I was impressed when the AI correctly identified a specific breed of parrot known as the blue-and-gold macaw, as well as zebras.

The strangest part of this experiment was talking to an AI assistant in the presence of children and their parents. They pretended not to notice the lone adult in the park who was apparently muttering to himself.

MIKE: I also had a peculiar time shopping. Being inside a Safeway and talking to myself was a little embarrassing, so I tried to keep my voice down. I still got a few sideways glances.

When Meta's AI worked, it was charming. I picked up a strange-looking package of Oreos and asked it to look at the packaging and tell me if the cookies were gluten-free. (They weren't.) It answered questions like these correctly about half the time, though I can't say it saved any time compared with reading the label.

But the only reason I got these glasses was to start my own Instagram cooking show, a flattering way of saying that I record myself preparing the week's meal while talking to myself. The glasses made doing this much easier than using a phone and one hand.

The AI assistant can also offer help in the kitchen. If I need to know how many teaspoons are in a tablespoon and my hands are covered in olive oil, for example, I can ask it to tell me. (There are three teaspoons in a tablespoon, just FYI.)

But when I asked the AI to look at a handful of ingredients I had and come up with a recipe, it spit out quick instructions for an egg cream, which isn't exactly helpful for following directions at my own pace.

A handful of recipes to choose from might have been more useful, but that might require changes to the user interface and maybe even a screen inside my glasses.

A Meta spokesperson said users could ask follow-up questions to get more accurate and helpful answers from their assistant.

BRIAN: I went to the supermarket and bought the most exotic fruit I could find: a cherimoya, a green, scaly fruit that looks like a dinosaur egg. When I gave Meta's AI multiple chances to identify it, it made a different guess each time: a chocolate-covered nut, a stone fruit, an apple and, finally, a durian, which was close, but no banana.

MIKE: The new software's ability to recognize landmarks and monuments appeared to be working. Looking down a block in downtown San Francisco toward a towering dome, Meta's AI correctly answered, “City Hall.” It's a neat trick, and perhaps useful if you're a tourist.

Other times, the results were unpredictable. While driving from the city to my home in Oakland, I asked Meta what bridge I was on while looking out the window in front of me (with both hands on the wheel, of course). The first answer was the Golden Gate Bridge, which was wrong. On the second try, it figured out I was on the Bay Bridge, which made me wonder whether it just needed a clearer shot of the newer span's tall, white suspension towers to get it right.

BRIAN: I visited the San Francisco Museum of Modern Art to see if Meta's AI could do the job of a tour guide. After taking photos of about two dozen paintings and asking the assistant to tell me about the artwork I was looking at, the AI could describe the images and what mediums were used to compose the art, which would be nice for an art history student, but it could not identify the artist or the title. (A Meta spokesperson said another software update released after my museum visit improved this capability.)

After the update, I tried looking at images on my computer screen of more famous works of art, including the Mona Lisa, and the AI correctly identified them.

BRIAN: At a Chinese restaurant, I pointed at a menu item written in Chinese and asked Meta to translate it into English, but the AI said it currently supported only English, Spanish, Italian, French and German. (I was surprised, because Mark Zuckerberg learned Mandarin.)

MIKE: It did a pretty good job translating a book title from English into German.

Meta's AI-powered glasses offer an intriguing glimpse into a future that seems distant. The defects highlight the limitations and challenges in designing this type of product. The glasses could probably work better for identifying zoo animals and fruits, for example, if the camera had a higher resolution, but a nicer lens would add bulk. And no matter where we were, it was awkward talking to a virtual assistant in public. It's unclear if that will ever feel normal.

But when it worked, it worked well and we had fun, and the fact that Meta's AI can do things like translate languages and identify landmarks through a pair of hip-looking glasses shows how far the technology has come.
