WTF Fun Fact 13720 – Brain-Computer Interfaces

Interactive technology took a significant leap forward with the latest brain-computer interface developed by engineers at The University of Texas at Austin. The new technology allows users to control video games using nothing but their thoughts, eliminating the need for traditional manual controls.

Breaking Barriers with Brain-Computer Interfaces

One of the groundbreaking aspects of this interface is that it requires no individual calibration. Traditional brain-computer interfaces demand extensive customization to align with each user’s unique neurological patterns. This new system, however, uses machine learning to adapt to individual users quickly, allowing for a much more user-friendly experience. The innovation drastically reduces setup time and makes the technology accessible to a broader audience, including people with motor disabilities.

The interface works by using a cap fitted with electrodes that capture brain activity. These signals are then translated into commands that control game elements, such as steering a car in a racing game. This setup not only introduces a new way of gaming but also holds the potential for significant advancements in assistive technology.
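
For the technically curious, here is a minimal sketch of how such a pipeline might look in code: band-power features computed from each electrode feed a simple classifier that emits a steering command. Every detail below (the 8–30 Hz band, the LDA decoder, the synthetic data, the per-user training step) is an illustrative assumption rather than the UT Austin team’s actual design; indeed, their headline result is that a decoder pretrained across users can skip lengthy per-user calibration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def band_power_features(eeg_window, fs=250):
    """Crude per-channel power in the 8-30 Hz band (the mu/beta rhythms
    often used in motor-imagery BCIs). eeg_window: (channels, samples)."""
    freqs = np.fft.rfftfreq(eeg_window.shape[1], d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(eeg_window, axis=1)) ** 2
    band = (freqs >= 8) & (freqs <= 30)
    return spectrum[:, band].mean(axis=1)  # one feature per electrode

# Hypothetical labeled windows: 0 = "steer left", 1 = "steer right"
# (e.g., recorded while the user imagines left- or right-hand movement).
rng = np.random.default_rng(0)
X_train = np.stack([band_power_features(rng.normal(size=(8, 500)))
                    for _ in range(200)])
y_train = rng.integers(0, 2, size=200)

decoder = LinearDiscriminantAnalysis().fit(X_train, y_train)

# At play time, each new window of brain activity becomes a game command.
new_window = rng.normal(size=(8, 500))
prediction = decoder.predict([band_power_features(new_window)])[0]
print("LEFT" if prediction == 0 else "RIGHT")
```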

Enhancing Neuroplasticity Through Gaming

The research, led by José del R. Millán, explores the technology and its impact on neuroplasticity, the brain’s ability to form new neural connections. The team’s efforts focus on harnessing this capability to improve brain function and quality of life for patients with neurological impairments.

Participants in the study engaged in two tasks: first, a complex car racing game requiring strategic thinking for maneuvers like turns, and then a simpler task that involved balancing a digital bar. These activities were chosen to train the brain in different ways, leveraging the interface’s capacity to translate neural commands into digital actions.

Foundational Research and Future Applications

The research represents foundational work in the field of brain-computer interfaces. Initially tested on subjects without motor impairments, the next step involves trials with individuals who have motor disabilities. This expansion is crucial for validating the interface’s potential clinical applications.

Beyond gaming, the technology is poised to revolutionize how individuals with disabilities interact with their environments. Ongoing projects include a wheelchair navigable via thought and rehabilitation robots for hand and arm therapy, which were recently demonstrated at the South by Southwest Conference and Festivals.

This brain-computer interface stands out not only for its technological innovation but also for its commitment to improving lives. It exemplifies the potential of using machine learning to enhance independence and quality of life for people with disabilities. As this technology progresses, it promises to open new avenues for accessibility and personal empowerment, making everyday tasks more manageable and integrating advanced assistive technologies into the fabric of daily living.

 WTF fun facts

Source: “Universal brain-computer interface lets people play games with just their thoughts” — ScienceDaily

WTF Fun Fact 13536 – Digitizing Smell

In order to smell, our brains and noses have to work together, so the idea of digitizing smell seems pretty “out there.”

However, if you think about it, our noses are sensing molecules. Those molecules can be identified by a computer, and the smells humans associate with them can be cataloged. It’s not quite teaching a computer to smell on its own, but maybe it’s best we don’t give them too many human abilities.

The Enigma of Olfaction

While we’ve successfully translated light into sight and sound into hearing, decoding the intricate world of smell remains a challenge.

Olfaction, compared to our other senses, is mysterious, diverse, and deeply rooted in both emotion and memory. Knowing this, can we teach machines to interpret this elusive sense?

Digitizing Smell

A collaboration between the Monell Chemical Senses Center and the startup Osmo aimed to bridge the gap between airborne chemicals and our brain’s odor perception. Their objective was not just to understand the science of smell better but to make a machine proficient enough to describe, in human terms, what various chemicals smell like.

Osmo, with roots in Google’s advanced research division, embarked on creating a machine-learning model. The foundation of this model was an industry dataset, which detailed the molecular structures and scent profiles of 5,000 known odorants.

The idea? Feed the model a molecule’s shape and get a descriptive prediction of its smell.

That might sound simple, but the team still had to prove the model’s accuracy.
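
As a flavor of what such a model might look like, here is a toy sketch: a multi-label classifier mapping molecular feature vectors to human odor descriptors. Everything in it (the feature vectors, the descriptor list, the random data, the random-forest model) is a stand-in assumption; the actual Monell/Osmo model was trained on real molecular structures and scent profiles.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical stand-ins: each molecule is a numeric feature vector (in the
# real work, features derive from molecular structure), and each is tagged
# with a subset of human odor descriptors.
DESCRIPTORS = ["floral", "fruity", "woody", "musty", "sweet"]

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 64))  # 5,000 odorants, 64 features each
Y = rng.integers(0, 2, size=(5000, len(DESCRIPTORS)))  # multi-label targets

# Random forests in scikit-learn accept multi-label targets directly.
model = RandomForestClassifier(n_estimators=50).fit(X, Y)

def describe(molecule_features):
    """Return the predicted odor descriptors for one molecule."""
    pred = model.predict(molecule_features.reshape(1, -1))[0]
    return [d for d, hit in zip(DESCRIPTORS, pred) if hit]

print(describe(rng.normal(size=64)))
```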

The Litmus Test: Man vs. Machine

To validate the machine’s “sense of smell,” a unique test was devised.

A group of 15 panelists, trained rigorously using specialized odor kits, was tasked with describing 400 unique odors. The model then predicted descriptions for the same set.

Astonishingly, the machine’s predictions often matched or even outperformed individual human assessments, showcasing its unprecedented accuracy.
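
One plausible way to score such a head-to-head comparison (an illustrative assumption, not necessarily the paper’s exact metric) is to check how often the model’s description lands closer to the panel’s consensus than a typical individual panelist does:

```python
import numpy as np

rng = np.random.default_rng(2)
n_odors, n_panelists, n_descriptors = 400, 15, 5

# Hypothetical ratings: panel[i, j] holds panelist j's descriptor ratings
# for odor i; model[i] holds the model's predicted ratings for odor i.
panel = rng.random((n_odors, n_panelists, n_descriptors))
model = rng.random((n_odors, n_descriptors))

panel_mean = panel.mean(axis=1)  # the consensus rating for each odor

# Distance from the consensus, for the model and for each panelist.
model_err = np.linalg.norm(model - panel_mean, axis=1)
panelist_err = np.linalg.norm(panel - panel_mean[:, None, :], axis=2)

# Fraction of odors where the model beats the median panelist.
beats = (model_err < np.median(panelist_err, axis=1)).mean()
print(f"Model beats the median panelist on {beats:.0%} of odors")
```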

Machines That Can ‘Smell’ vs. Digitizing Smell

Beyond its core training, the model displayed unexpected capabilities. It accurately predicted odor strength, a feature it wasn’t explicitly trained for, and identified distinct molecules with surprisingly similar scents. This accomplishment suggests we’re inching closer to a world where machines can reliably “smell.”

But for now, that’s overstating it. The team has made a major leap towards digitizing smell. But machines don’t have senses. They can only replicate the kind of information our brains produce when we smell things. Of course, they don’t have any sense of enjoyment (or repulsion) at certain smells.

In any case, the Monell and Osmo collaboration has significantly advanced our journey in understanding and replicating the sense of smell. As we move forward, this research could revolutionize industries from perfumery to food and beyond.

 WTF fun facts

Source: “A step closer to digitizing the sense of smell: Model describes odors better than human panelists” — ScienceDaily

WTF Fun Fact 13482 – GPT-3 Reasoning Skills

Research from UCLA psychologists has discovered a surprising new contender in our analogical reasoning battles: the artificial intelligence language model GPT-3. Apparently, it holds its own against college undergraduates on reasoning problems typical of intelligence tests and the SAT.

But the research fails to answer a key question: Is GPT-3 merely parroting human reasoning, or has it stumbled onto a brand-new cognitive process? (And does this research say more about technology, college students, or intelligence tests?!)

Humans vs GPT-3

OpenAI holds GPT-3’s secrets under tight wraps, so they aren’t going to be much help in figuring out how the algorithm works its “magic.” Despite the mystery, the UCLA researchers found that GPT-3 outperformed their expectations on some tasks. Yet, other tasks saw it crash and burn.

Despite GPT-3’s ability to embarrass some college students, the study’s first author, Taylor Webb, emphasized its limitations. While it excels at analogical reasoning, it fails spectacularly at tasks that are simple for humans, like using tools to solve physical problems.

Webb and his colleagues tested GPT-3 on problems inspired by Raven’s Progressive Matrices. They translated the visual problems into text and gave the same problems to 40 UCLA undergraduate students.

Not only did GPT-3 perform as well as humans, but it also made similar mistakes.
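
What does a matrix problem “translated into text” look like? Here is a hypothetical example of the general idea; the exact encoding and prompt format used in the UCLA study are not reproduced here, so treat this purely as an illustration:

```python
# A Raven's-style 3x3 pattern rendered as text, with the final cell blank.
# The rule in this toy example: each row repeats one digit. (The format is
# illustrative only, not the actual encoding used in the UCLA study.)
problem = """\
[1 1 1]
[2 2 2]
[3 3 ?]"""

choices = ["1", "2", "3", "4"]
correct = "3"

prompt = (
    "Complete the pattern by choosing the symbol that replaces '?'.\n"
    f"{problem}\n"
    f"Choices: {', '.join(choices)}\n"
    "Answer:"
)
print(prompt)

def score(answer: str) -> bool:
    """Score a free-text answer from a model (or a student)."""
    return answer.strip() == correct

print(score(" 3 "))  # True
```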

What the Study Results Mean

GPT-3 solved 80% of the problems correctly, while the human average score was below 60%. The team then tested GPT-3 with SAT analogy questions they believed had never been on the internet (which would mean they weren’t part of the GPT training data). Again, GPT-3 outperformed the average college applicant’s score (then again, we know these tests aren’t really a measure of intelligence).

However, when the researchers tested the program against student volunteers on analogy problems based on short stories, GPT-3 struggled.

And tasks that require understanding physical space continue to baffle the so-called “artificial intelligence.”

“No matter how impressive our results, it’s important to emphasize that this system has major limitations,” said Webb. “It can do analogical reasoning, but it can’t do things that are very easy for people, such as using tools to solve a physical task. When we gave it those sorts of problems — some of which children can solve quickly — the things it suggested were nonsensical.”

 WTF fun facts

Source: “GPT-3 can reason about as well as a college student, psychologists report” — ScienceDaily

WTF Fun Fact 12824 – Teen Invents Tool To Catch Elephant Poachers

A teen named Anika Puri has invented a new way to catch elephant poachers.

“I was quite taken aback,” the 17-year-old student from Chappaqua, New York, told Smithsonian Magazine. “Because I always thought, ‘well, poaching is illegal, how come it really is still such a big issue?’”

Learning more about elephant poachers

Puri and her family visited India a few years ago and saw ivory lined up at a Bombay market, even though the ivory trade has been illegal in India for decades.

After some research, Puri realized African elephants are still being hunted and that the “forest elephant population had declined by about 62 percent between 2002 and 2011.” Those numbers continue to drop today.

As a wildlife lover who is gifted in science and technology, Puri invented a system to help catch poachers.

According to Smithsonian: “Drones are currently used to detect and capture images of poachers, and they aren’t that accurate, the teenager explains. But after watching videos of elephants and humans, she saw how the two differed vastly in the way they move—their speed, their turning patterns and other motions.”

Tracking and stopping elephant poachers

Once she saw the difference in movement between humans and elephants, she realized she could build a piece of technology to tell the two apart.

As a result, she spent two years creating ElSa (short for “elephant savior”). Still in the prototype stage, the machine-learning-driven device “analyzes movement patterns in thermal infrared videos of humans and elephants.”
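
To make the idea concrete, here is a minimal sketch of movement-based classification: extract speed and turning statistics from a tracked centroid in thermal video and feed them to a simple classifier. The features, classifier, and synthetic tracks below are all illustrative assumptions, not ElSa’s actual design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def motion_features(track):
    """track: (frames, 2) array of (x, y) centroid positions from thermal
    video. Returns [mean speed, speed variability, mean turning angle]."""
    steps = np.diff(track, axis=0)
    speeds = np.linalg.norm(steps, axis=1)
    headings = np.arctan2(steps[:, 1], steps[:, 0])
    turns = np.abs(np.diff(np.unwrap(headings)))
    return np.array([speeds.mean(), speeds.std(), turns.mean()])

# Synthetic stand-in tracks: "elephants" amble smoothly, "humans" move
# faster with sharper turns. Labels: 0 = elephant, 1 = human.
rng = np.random.default_rng(3)

def fake_track(fast):
    speed, jitter = (3.0, 1.5) if fast else (1.0, 0.3)
    steps = rng.normal(loc=(speed, 0.0), scale=jitter, size=(60, 2))
    return np.cumsum(steps, axis=0)

X = np.stack([motion_features(fake_track(fast=i % 2 == 1)) for i in range(200)])
y = np.array([i % 2 for i in range(200)])

clf = LogisticRegression().fit(X, y)
print("human" if clf.predict([motion_features(fake_track(True))])[0] else "elephant")
```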

Better yet, Puri says its accuracy is four times that of other tools. Her device also costs a mere $250 to make, whereas others run into the thousands of dollars due to their use of high-resolution cameras.

The magazine explains: “ElSa uses a $250 FLIR ONE Pro thermal camera with 206×156 pixel resolution that plugs into an off-the-shelf iPhone 6. The camera and iPhone are then attached to a drone, and the system produces real-time inferences as it flies over parks as to whether objects below are human or elephant.”

 WTF fun facts

Source: “This Teenager Invented a Low-Cost Tool to Spot Elephant Poachers in Real Time” — Smithsonian Magazine