WTF Fun Fact 13624 – The Phantom Touch Illusion

Using virtual reality (VR) scenarios in which subjects touched their own bodies with virtual objects, a research team from Ruhr University Bochum in Germany uncovered the phenomenon of the phantom touch illusion. This sensation occurs when individuals in VR environments experience a tingling feeling upon virtual contact, despite the absence of any physical interaction.

Unraveling the Mystery of Phantom Touch

Dr. Artur Pilacinski and Professor Christian Klaes, spearheading the research, were intrigued by this illusion. “People in virtual reality sometimes feel as though they’re touching real objects,” explains Pilacinski. The subjects described this sensation as a tingling or electrifying experience, akin to a breeze passing through their hand. This study, detailed in the journal Scientific Reports, sheds light on how our brains and bodies interpret virtual experiences.

The research involved 36 volunteers who, equipped with VR glasses, first acclimated to the virtual environment. Their task was to touch their hand with a virtual stick in this environment. The participants reported sensations, predominantly tingling, even when touching parts of their bodies not visible in the VR setting. This finding suggests that our perception and body sensation stem from a blend of sensory inputs.

Control Experiments and Unique Results

A control experiment was conducted to discern whether similar sensations could arise without VR, using a laser pointer in place of virtual objects. That experiment did not produce the phantom touch, underscoring that the phenomenon is unique to virtual environments.

The discovery of the phantom touch illusion propels research in human perception and holds potential applications in VR technology and medicine. “This could enhance our understanding of neurological diseases affecting body perception,” notes neuroscience researcher Christian Klaes.

Future Research and Collaborative Efforts

The team at Bochum is eager to delve deeper into this illusion and its underlying mechanisms. A partnership with the University of Sussex aims to differentiate actual phantom touch sensations from cognitive processes like suggestion or experimental conditions. “We are keen to explore the neural basis of this illusion and expand our understanding,” says Pilacinski.

This research marks a significant step in VR technology, offering a new perspective on how virtual experiences can influence our sensory perceptions. As VR continues to evolve, its applications in understanding human cognition and aiding medical advancements become increasingly evident. The phantom touch illusion not only intrigues the scientific community but also paves the way for innovative uses of VR in various fields.

 WTF fun facts


WTF Fun Fact 13536 – Digitizing Smell

In order to smell, our brains and noses have to work together, so the idea of digitizing smell seems pretty “out there.”

However, if you think about it, our noses are sensing molecules. Those molecules can be identified by a computer, and the smells humans associate with them can be cataloged. It’s not quite teaching a computer to smell on its own, but maybe it’s best we don’t give them too many human abilities.

The Enigma of Olfaction

While we’ve successfully translated light into sight and sound into hearing, decoding the intricate world of smell remains a challenge.

Olfaction, compared to our other senses, is mysterious, diverse, and deeply rooted in both emotion and memory. Knowing this, can we teach machines to interpret this elusive sense?

Digitizing Smell

A collaboration between the Monell Chemical Senses Center and the startup Osmo aimed to bridge the gap between airborne chemicals and our brain’s odor perception. Their objective was not just to understand the science of smell better but to make a machine proficient enough to describe, in human terms, what various chemicals smell like.

Osmo, with roots in Google’s advanced research division, embarked on creating a machine-learning model. The foundation of this model was an industry dataset, which detailed the molecular structures and scent profiles of 5,000 known odorants.

The idea? Feed the model a molecule’s shape and get a descriptive prediction of its smell.
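
In spirit, that pipeline is just supervised learning: featurize a molecule, train on labeled odorants, then predict descriptors for a new molecule. Here’s a deliberately tiny sketch of that idea — the feature vectors, odor labels, and nearest-neighbor approach below are all invented for illustration; the actual Monell/Osmo model was far richer, trained on thousands of real odorant structures.

```python
import math

# Toy "molecular features" (e.g., counts of functional groups) mapped to
# odor labels. All data here is invented for illustration only.
training_data = [
    ((2, 0, 1), "fruity"),     # pretend: 2 ester groups, 0 sulfurs, 1 ring
    ((0, 1, 0), "sulfurous"),
    ((1, 0, 2), "floral"),
    ((0, 0, 3), "woody"),
]

def predict_odor(features):
    """Predict an odor label via nearest neighbor in feature space."""
    best = min(training_data,
               key=lambda item: math.dist(item[0], features))
    return best[1]

print(predict_odor((2, 0, 0)))  # closest to the "fruity" training example
```

The real model has to handle the hard part this sketch skips: turning an arbitrary molecular structure into features informative enough that structurally different molecules with similar smells land near each other.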

That might sound simple, but the team had to verify the model’s accuracy.

The Litmus Test: Man vs. Machine

To validate the machine’s “sense of smell,” a unique test was devised.

A group of 15 panelists, trained rigorously using specialized odor kits, was tasked with describing 400 unique odors. The model then predicted descriptions for the same set.

Astonishingly, the machine’s predictions often matched or even outperformed individual human assessments, showcasing its unprecedented accuracy.

Machines That Can ‘Smell’ vs. Digitizing Smell

Beyond its core training, the model displayed unexpected capabilities. It accurately predicted odor strength, a feature it wasn’t explicitly trained for, and identified distinct molecules with surprisingly similar scents. This accomplishment suggests we’re inching closer to a world where machines can reliably “smell.”

But for now, that’s overstating it. The team has made a major leap towards digitizing smell. But machines don’t have senses. They can only replicate the kind of information our brains produce when we smell things. Of course, they don’t have any sense of enjoyment (or repulsion) at certain smells.

In any case, the Monell and Osmo collaboration has significantly advanced our journey in understanding and replicating the sense of smell. As we move forward, this research could revolutionize industries from perfumery to food and beyond.


Source: “A step closer to digitizing the sense of smell: Model describes odors better than human panelists” — Science Daily

WTF Fun Fact 13484 – Robots That Feel

Robots that feel?! Ok, no. We don’t mean robots that have feelings. We mean robots that have a “sense” of touch. Or at the very least robots programmed not to crush things they pick up. That’s still progress!

The modern robotics field is continuously pushing the boundaries of technology and automation. As a part of this ongoing exploration, scientists from the Queen Mary University of London, alongside their international colleagues from China and the USA, have developed an innovative, affordable sensor called the L3 F-TOUCH. This unique invention enhances a robot’s tactile abilities, granting it a human-like sense of touch.

Robots That Feel Thanks to the L3 F-TOUCH Sensor

A principal objective in robotics has been achieving human-level dexterity, specifically during manipulation and grasping tasks. The human hand’s ability to sense factors such as pressure, temperature, texture, and pain, in addition to distinguishing objects based on properties like shape, size, and weight, has set the standard.

Until now, many robot hands or graspers have fallen short, lacking these vital haptic capabilities. As you might imagine, this makes handling objects a complicated task. Robots’ fingers lack the “feel of touch,” resulting in objects slipping away or being unintentionally crushed if fragile. And that’s not something we want if we’re ever going to let them work with people, like the elderly.

Mechanics and Functionality

Leading the groundbreaking study, Professor Kaspar Althoefer of Queen Mary University of London and his team introduce the L3 F-TOUCH. The name stands for Lightweight, Low-cost, and wireless communication. It’s a high-resolution fingertip sensor that directly measures an object’s geometry and the forces necessary to interact with it.

This sensor sets itself apart from others in its league that estimate interaction forces via camera-acquired tactile information. The L3 F-TOUCH takes a direct approach, achieving a higher measurement accuracy.

Professor Althoefer and his team plan to further enhance the sensor’s capabilities. They aim to add rotational forces such as twists, vital in tasks like screw fastening.

These advancements could extend the sense of touch to more dynamic and agile robots, improving their functionality in manipulation tasks and even in human-robot interaction settings, such as patient rehabilitation or physical support for the elderly.


Source: “Researchers develop low-cost sensor to enhance robots’ sense of touch” — ScienceDaily

WTF Fun Fact 13482 – GPT-3 Reasoning Skills

Research from UCLA psychologists has revealed a surprising new contender in our analogical reasoning battles: the artificial intelligence language model GPT-3. Apparently, it holds its own against college undergraduates on reasoning problems typical of intelligence tests and the SAT.

But it fails to answer a key question: Is GPT-3 merely parroting human reasoning, or has it stumbled onto a brand-new cognitive process? (And, does this research say more about technology, college students, or intelligence tests?!)

Humans vs GPT-3

OpenAI holds GPT-3’s secrets under tight wraps, so they aren’t going to be much help in figuring out how the algorithm works its “magic.” Despite the mystery, the UCLA researchers found that GPT-3 outperformed their expectations on some tasks. Yet, other tasks saw it crash and burn.

Despite its ability to embarrass some college students, the study’s first author, Taylor Webb, emphasized GPT-3’s limitations. While it excels at analogical reasoning, it fails spectacularly at tasks simple for humans, like using tools to solve physical problems.

Webb and his colleagues tested GPT-3 on problems inspired by Raven’s Progressive Matrices. They translated the visual problems into text and gave the same problems to 40 UCLA undergraduate students.
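
To get a feel for what a text-translated matrix problem looks like, here’s a toy example. The specific numbers and the completion rule (a constant step along each row) are invented for illustration — the study’s actual items used a range of harder relational rules.

```python
# Toy text version of a Raven's-style matrix: each row follows a rule,
# and the solver must fill in the missing final cell.
matrix = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, None],  # the cell to predict
]

def complete_last_cell(m):
    """Fill the missing cell, assuming each row increases by a constant step."""
    last_row = m[-1]
    step = last_row[1] - last_row[0]
    return last_row[1] + step

print(complete_last_cell(matrix))  # 9
```

The point of testing GPT-3 on problems like this is that the answer depends on spotting an abstract relation (the row rule), not on retrieving a memorized fact.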

Not only did GPT-3 perform as well as humans, but it also made similar mistakes.

What the Study Results Mean

GPT-3 solved 80% of the problems correctly, while the human average score was below 60%. The team then tested GPT-3 with SAT analogy questions they believed had never been on the internet (which would mean they weren’t part of the GPT training data). Again, GPT-3 outperformed the average college applicant’s score (then again, we know these tests aren’t really a measure of intelligence).

However, when the researchers tested the program against student volunteers on analogy problems based on short stories, GPT-3 struggled.

And tasks that require understanding physical space continue to baffle the so-called “artificial intelligence.”

“No matter how impressive our results, it’s important to emphasize that this system has major limitations,” said Taylor Webb, the study’s first author. “It can do analogical reasoning, but it can’t do things that are very easy for people, such as using tools to solve a physical task. When we gave it those sorts of problems — some of which children can solve quickly — the things it suggested were nonsensical.”


Source: “GPT-3 can reason about as well as a college student, psychologists report” — ScienceDaily

WTF Fun Fact 13451 – Shape-Shifting Robot

You’ve seen robotic dogs, humanoid robots that can do backflips, etc. – but have you seen the new shape-shifting robots? Just what the world needs, right? Well…maybe!

Do shape-shifting robots really exist?

They do exist, but they’re small – and they’re certainly not a threat. Although we don’t exactly love the headline from EurekAlert “Watch this person-shaped robot liquify and escape jail, all with the power of magnets.” But whatever. Like all robots, they’re pretty cool, aside from their (granted, far-fetched) potential to destroy us all.

This robot can indeed transform, liquefy itself, slip into the smallest crevices, and then reassemble with absolute precision. The shape-shifting robot creators drew inspiration from a sea cucumber.

What do a sea cucumber and a shape-shifting robot have in common?

Sea cucumbers have a unique ability—they can alter their stiffness rapidly and reversibly. This is the fascinating biological phenomenon that the researchers hoped to replicate in their robotic system.

Traditional robots, with their rigid bodies, lack the flexibility to navigate small spaces. There are “softer” robots, but while they’re more pliable, they’re often weaker and harder to control. So, to overcome these challenges, the team aimed to create a robot that could oscillate between being a solid and a liquid.

The new breed of robot is an alloy of gallium—a metal with a low melting point—and includes embedded magnetic particles. The particles allow the robot to respond to magnetic fields, which scientists can use to control its movement and induce changes in its state—from solid to liquid and vice versa.

The team from Carnegie Mellon University christened their groundbreaking creation the “magnetoactive solid-liquid phase transitional machine.” Catchy!

The power of transformation

In a magnetic field, the robot can jump, stretch, climb walls, and even solder a circuit board. Its most impressive trick? The ability to liquefy and squeeze itself out of a mock prison—only to solidify once again on the outside. In its solid state, the robot can bear 30 times its own weight, demonstrating remarkable strength and flexibility.

Interestingly, the shapeshifting robot might have potential applications in the medical field. In a proof-of-concept experiment, the robot successfully removed a ball from a model of a human stomach. It quickly moved to the ball, melted around it, reformed, and exited the model stomach—ball in tow.

Although gallium was the metal of choice in these experiments, other metals could be introduced to adjust the melting point for real-life applications.

Future applications

Looking ahead, the gallium robots could serve a variety of purposes. From assembling and repairing hard-to-reach circuits to acting as a universal screw that melts and reforms to fit any socket, the possibilities seem endless.

The technology might have significant biomedical applications as well. For instance, these robots could deliver drugs inside a patient’s body or remove foreign objects. However, before any in-human application, tracking the robot’s position within a patient’s body is a hurdle scientists need to overcome.

Who knows, maybe a doctor will ask you to swallow a shape-shifting robot someday. What a thing to look forward to!

Wanna see the robot melt and reconstitute? Someone set it to some soothing music on YouTube.


Source: “This Shape-Shifting Robot Can Liquefy Itself and Reform” — Smithsonian Magazine

WTF Fun Fact 13353 – New Deepfake Research

New deepfake research is attempting to resurrect victims of crime for the common good. Of course, like all deepfakes, this too raises ethical concerns.

How new deepfake research brings people “back from the dead”

Deepfakes are a technology that uses artificial intelligence to create hyper-realistic images and videos of people. But so far, they’ve largely been misused to spread misinformation.

Now, researchers at the University of Florida and the University of Michigan are exploring the possibility of using deepfake resurrections to promote the public good.

Their study focuses on “deepfake resurrections.” This refers to bringing deceased individuals virtually “back to life” using AI-generated images and videos. The researchers emphasize that their approach is different from controversial cases of deepfake resurrections, such as the ones used for political manipulation or commercial purposes. Instead, they aim to explore scenarios where deepfake resurrections could have a positive impact on society.

Can this technology be used for good?

The researchers conducted a study involving approximately 2,000 participants to explore the potential applications of deepfake resurrections. In this study, they focused on creating deepfake resurrections of victims of drunk driving and domestic violence. The aim was to examine the reactions of the participants and assess whether such resurrections could effectively raise awareness about these pressing social issues.

By using deepfake resurrections to share the stories of these victims, the researchers sought to humanize the issues and evoke empathy in the audience.

However, the PSAs had little effect, drawing a more negative than positive reaction. The researchers chalked this up to a general lack of trust in deepfakes, which undermined the effectiveness of deepfake resurrections in raising awareness about social issues.

Ethical considerations

The exploration of deepfake resurrections for the public good raises several ethical questions. One major concern is consent. Since the deceased cannot provide consent, the researchers suggest obtaining permission from the deceased’s estate or family members. This would require creating guidelines to ensure that deepfake resurrections are used in a manner that respects the individual’s legacy and values.

Another ethical consideration is the potential emotional impact on the deceased’s loved ones. While some may find comfort in deepfake resurrections, others might perceive it as a disturbing or disrespectful act. To address this concern, researchers propose involving mental health professionals in the development of deepfake resurrections to ensure they are created with sensitivity and empathy.

Lastly, there is the question of authenticity. The researchers acknowledge the potential for deepfake resurrections to spread misinformation or perpetuate false narratives. To mitigate this risk, they suggest that deepfake resurrections should be transparently labeled as such and accompanied by disclaimers.


Source: “Dying To Tell You: “Deepfake Resurrections” To Promote Public Good Explored By Researchers” — IFL Science

WTF Fun Fact 13352 – How CAPTCHA Works

The CAPTCHA test is a widely used tool for preventing automated bots from accessing websites and online services. But do you know how CAPTCHA works? For example, does it seem like the “I am not a robot” checkbox might be a bit too easy to fool?

Are you a robot?

CAPTCHAs help protect sensitive information and prevent malicious activities, such as spamming, data scraping, and brute-force attacks. Additionally, they help ensure the fair use of online resources by limiting access to human users.

CAPTCHA stands for “Completely Automated Public Turing test to tell Computers and Humans Apart.” The primary purpose of a CAPTCHA is to differentiate between human users and automated bots, preventing bots from accessing sensitive information or conducting malicious activities. CAPTCHAs come in various forms, including distorted text, image recognition, and audio challenges.

How CAPTCHA works

The “I am not a robot” CAPTCHA test, also known as the Google reCAPTCHA, has become increasingly popular due to its simplicity and user-friendly design. At first glance, it appears that users simply need to click the checkbox to prove they are human. However, there is more to the test than meets the eye.

The “I am not a robot” test relies on advanced risk analysis to determine whether a user is a human or a bot. When a user clicks the checkbox, the test assesses various factors to make its determination. Some of these factors include:

  • Mouse Movements: The test tracks the user’s mouse movements as they approach and click on the checkbox. Human users tend to have irregular and varied mouse movements, while bots typically exhibit more uniform and direct paths.
  • Browsing Behavior: The test analyzes the user’s browsing behavior and history. This may include how long they have been on the page, their scrolling patterns, and the number of clicks made. This data helps the test to identify patterns that are characteristic of human users.
  • Cookies: The test checks for the presence of cookies in the user’s browser. Cookies are small pieces of data stored on a user’s device by websites they visit. Human users are more likely to have a variety of cookies from different websites. Bots typically have fewer or no cookies.
  • Browser and Device Information: The test collects information about the user’s browser and device. This can include the browser version, operating system, and screen resolution. This information helps to determine if the user is using a known bot or a legitimate browser.

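A toy version of that kind of risk analysis might look like the sketch below. The signal names, weights, and threshold are all invented for illustration — Google’s actual reCAPTCHA model is proprietary and vastly more sophisticated.

```python
# Toy risk score combining the kinds of signals listed above.
# Weights and thresholds are invented for illustration only.
def risk_score(mouse_path_irregularity, seconds_on_page, cookie_count,
               known_browser):
    score = 0.0
    score += 0.4 if mouse_path_irregularity > 0.2 else 0.0  # humans wobble
    score += 0.3 if seconds_on_page > 5 else 0.0            # bots rush
    score += 0.2 if cookie_count > 3 else 0.0               # browsing history
    score += 0.1 if known_browser else 0.0                  # plausible client
    return score

def verdict(score, threshold=0.7):
    """Pass the user, or fall back to an extra image/puzzle challenge."""
    return "human" if score >= threshold else "show extra challenge"

print(verdict(risk_score(0.35, 12, 8, True)))  # all signals look human
```

Note how the fallback works: an uncertain score doesn’t mean rejection, just a harder follow-up challenge — which matches the behavior described in the next section.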
CAPTCHA captures more than just a click

If the test determines a user is human based on these factors, they are granted access to the website. However, if the test is uncertain or detects bot-like behavior, the user may be prompted to complete additional CAPTCHA challenges, such as solving a puzzle or identifying objects in images.


Source: “People Are Just Now Learning How The “I Am Not A Robot” Captcha Test Actually Works” — IFL Science

WTF Fun Fact 13302 – Bug Bounty Programs

Have you heard of “bug bounty programs”? No, they’re not about capturing critters in your yard. These programs are run by major tech companies. Companies like Google, Microsoft, and Facebook use these programs to incentivize hackers and security researchers to find and report vulnerabilities in their systems by offering rewards or cash bounties.

Big Tech’s bug bounty programs

Bug bounty programs allow tech companies to identify and address security weaknesses. But more importantly, they do so before the weaknesses can be exploited by cybercriminals. Some programs have paid out millions to researchers and hackers who found major vulnerabilities. For example, in 2019, Google paid out over $6.5 million in bug bounties to people around the world.

Bug bounty programs typically have guidelines and rules. These outline what types of vulnerabilities are eligible for rewards and how they should be reported. Once a researcher or hacker identifies a vulnerability, they submit it to the company’s bug bounty program. The company then verifies the bug and determines if it is eligible for a reward. If the vulnerability is valid, the company forks over the reward to the person who reported it.

Some companies may also offer other incentives, such as swag or recognition. This helps encourage participation. Some programs may even have different reward tiers for different types of vulnerabilities. For example, more critical or severe vulnerabilities earn higher payouts.
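
That verify-then-pay workflow with severity tiers can be sketched in a few lines. The tier names and dollar amounts below are invented for illustration — each company’s real program publishes its own reward schedule.

```python
# Toy severity-tiered bounty table. Tiers and amounts are invented
# for illustration; real programs define their own schedules.
BOUNTY_TIERS = {
    "low": 500,
    "medium": 2_500,
    "high": 10_000,
    "critical": 50_000,
}

def payout(severity, valid):
    """Pay only for verified, valid reports — invalid ones earn nothing."""
    if not valid:
        return 0
    return BOUNTY_TIERS[severity]

print(payout("critical", valid=True))   # 50000
print(payout("critical", valid=False))  # 0
```

The key design choice is that payment is gated on the company’s own verification step, which is what keeps these programs cost-effective.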

A win-win solution for cybersecurity

There are several reasons why companies use these programs. Identifying security vulnerabilities before they can be exploited by cybercriminals saves the company from potential data breaches, financial losses, and reputational damage.

The programs also allow companies to work with the security community, helping them improve their security measures and stay ahead of emerging threats. They’re also cost-effective: companies pay only for valid bugs that are actually reported.


Source: “Google paid $6.7 million to bug bounty hunters in 2020” — ZDNet