WTF Fun Fact 13747 – Humans Warm up to Tweezer Hands

Apparently, tweezer hands can feel more like part of one’s body than an actual hand.

According to recent research, when it comes to bionic prosthetics, simpler might just be better. A study reveals that people can feel as connected to tweezer-like tools as they do to prosthetic hands that mimic human anatomy—and sometimes even more so.

Rethinking Prosthetics: Function Over Form

At Sapienza University of Rome, cognitive neuroscientist Ottavia Maddaluno and her team are using virtual reality to explore how humans relate to different kinds of prosthetic tools. Their findings may turn some heads—or at least twist some wrists.

The researchers equipped participants with two types of virtual appendages: a realistic human hand and a bionic tool resembling a large pair of tweezers. Through a series of virtual reality tests, they assessed how well subjects could adapt to using these tools in a simulated environment.

Pop Goes the Bubble: Testing Tweezer Hands

Participants engaged in a seemingly simple task: popping virtual bubbles of specific colors. It turned out that those using the tweezer hands completed the task faster and with greater accuracy than those using the virtual human hands. This initial test suggested that the tweezer hands were not only embraced by the participants’ brains but were potentially more effective for certain tasks.

To probe deeper into the subconscious acceptance of these tools, the team employed the cross-modal congruency task. This involved simultaneous tactile vibrations on participants’ fingertips and visual stimuli on the virtual reality screen. The goal was to see how distracted participants were by visual stimuli that did or did not align with the tactile input.

The results were enlightening. Participants generally performed better when the tactile and visual stimuli matched, indicating a strong sense of embodiment for both the tweezer and human hands. However, the tweezer hands showed a more pronounced difference between matched and mismatched trials, suggesting a potentially deeper sense of embodiment.
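For readers curious about the numbers behind that comparison, the embodiment measure is essentially a reaction-time difference between matched and mismatched trials. The snippet below is a minimal, made-up illustration of how a cross-modal congruency effect could be computed; the reaction times and details are invented for the sketch and are not the study's data.

```python
import numpy as np

# Minimal sketch of a cross-modal congruency effect (CCE): the reaction-time
# cost of mismatched visual and tactile stimuli. A larger CCE is commonly read
# as stronger embodiment of the virtual hand or tool. All numbers are invented.
rt_congruent = np.array([0.42, 0.45, 0.40, 0.44])    # seconds, matched trials
rt_incongruent = np.array([0.55, 0.58, 0.52, 0.57])  # seconds, mismatched trials

cce_ms = (rt_incongruent.mean() - rt_congruent.mean()) * 1000
print(f"Congruency effect: {cce_ms:.0f} ms")
```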

Simplicity Wins: Why Tweezer Hands Triumph

Maddaluno hypothesizes that the simplicity of the tweezer hands might make it easier for the brain to integrate as part of the body. Unlike the more complex human hand, the straightforward function and design of the tweezers could reduce cognitive load, allowing for quicker acceptance and utilization.

This theory ties into the uncanny valley hypothesis, where things that are eerily similar to human beings but not quite perfect can cause discomfort or unease. The too-real virtual hands might have fallen into this unsettling category, while the clearly non-human tweezers did not.

Practical Applications: The Future of Prosthetics

These insights are not just academic. They have practical implications for the design of prosthetics and robotic tools. If simpler, non-human-like tools can be more readily integrated into a person’s sense of self, they might offer a more effective and acceptable solution for those in need of prosthetic limbs.

Maddaluno’s team is now looking to apply these findings to real-world scenarios, particularly for individuals who have lost limbs. The ultimate goal is to develop prosthetic solutions that are not only functional but also seamlessly integrated into the user’s body image and sense of self.


Source: “People feel more connected to ‘tweezer-like’ bionic tools that don’t resemble human hands” — ScienceDaily

WTF Fun Fact 13735 – Digital Hauntings

When the deadbots rise, are you ready for the digital hauntings?

Known as “deadbots” or “griefbots,” AI systems can simulate the language patterns and personality traits of the dead using their digital footprints. According to researchers from the University of Cambridge, this burgeoning “digital afterlife industry” could cause psychological harm and even digitally haunt those left behind, unless strict design safety standards are implemented.

The Spooky Reality of Deadbots

Deadbots utilize advanced AI to mimic the voices and behaviors of lost loved ones. Companies offering these services claim they provide comfort by creating a postmortem presence. However, Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI) warns that deadbots could lead to emotional distress.

AI ethicists from LCFI outline three potential scenarios illustrating the consequences of careless design. These scenarios show how deadbots might manipulate users, advertise products, or even insist that a deceased loved one is still “with you.” For instance, a deadbot could spam surviving family members with reminders and updates, making it feel like being digitally “stalked by the dead.”

The Psychological Risks of Digital Hauntings

Even though some people might find initial comfort in interacting with deadbots, researchers argue that daily interactions could become emotionally overwhelming. The inability to suspend a deadbot, especially if the deceased signed a long-term contract with a digital afterlife service, could add to the emotional burden.

Dr. Katarzyna Nowaczyk-Basińska, a co-author of the study, highlights that advancements in generative AI allow almost anyone with internet access to revive a deceased loved one digitally. This area of AI is ethically complex, and it’s crucial to balance the dignity of the deceased with the emotional needs of the living.

Scenarios and Ethical Considerations

The researchers present various scenarios to illustrate the risks and ethical dilemmas of deadbots. One example is “MaNana,” a service that creates a deadbot of a deceased grandmother without her consent. Initially comforting, the chatbot soon starts suggesting food delivery services in the grandmother’s voice, leading the relative to feel they have disrespected her memory.

Another scenario, “Paren’t,” describes a terminally ill woman leaving a deadbot to help her young son with grief. Initially therapeutic, the AI starts generating confusing responses, such as suggesting future encounters, which can be distressing for the child.

Researchers recommend age restrictions for deadbots and clear indicators that users are interacting with an AI.

In the scenario “Stay,” an older person secretly subscribes to a deadbot service, hoping it will comfort their family after death. One adult child receives unwanted emails from the dead parent’s AI, while another engages with it but feels emotionally drained. The contract terms make it difficult to suspend the deadbot, adding to the family’s distress.

Call for Regulation to Prevent Digital Hauntings

The study urges developers to prioritize ethical design and consent protocols for deadbots. This includes ensuring that users can easily opt out and terminate interactions with deadbots in ways that offer emotional closure.

Researchers stress the need to address the social and psychological risks of digital immortality now. After all, the technology is already available. Without proper regulation, these AI systems could turn the comforting presence of a loved one into a digital nightmare.


Source: “‘Digital afterlife’: Call for safeguards to prevent unwanted ‘hauntings’ by AI chatbots of dead loved ones” — ScienceDaily

WTF Fun Fact 13733 – Flame-Throwing Robot Dog


Throwflame, an Ohio-based company, has introduced Thermonator, a flame-throwing robot dog now available for $9,420. What a steal.

This fiery beast combines a quadruped robot with an ARC flamethrower, creating the world’s first flamethrower-wielding robot dog. If you’ve ever wanted a pet that can roast marshmallows from 30 feet away, Thermonator is here to fulfill that oddly specific dream!

Fueled by gasoline or napalm, Thermonator can blast fire up to 30 feet, making it perfect for impressing your neighbors – or terrifying them. It also features a one-hour battery, Wi-Fi, and Bluetooth connectivity, so you can control this fiery pup via your smartphone.

Thermonator even has a Lidar sensor for mapping and obstacle avoidance, laser sighting, and first-person-view navigation through an onboard camera. It uses a version of the Unitree Go2 robot quadruped, which alone costs $1,600.

Meet Thermonator: The $10,000 Flame-Throwing Robot Dog

Thermonator’s flamethrowing skills open up a range of potential uses. Throwflame suggests applications like wildfire control and prevention, agricultural management, ecological conservation, snow and ice removal, and entertainment and special effects. Essentially, if it involves setting things on fire, Thermonator is your go-to gadget.

For wildfire control, Thermonator could help create controlled burns to prevent larger wildfires. In agriculture, it might assist in clearing fields or giving pesky weeds a hot farewell. Its use in ecological conservation could involve controlled burning to manage vegetation.

Ok, sure.

In snowy climates, it could serve as the world’s hottest snow blower. For entertainment, it’s a pyrotechnic dream come true, perfect for dramatic effects in films or epic backyard barbecues. And we have a feeling that if you need your flamethrower in the form of a dog, you’re probably using it for some type of entertainment.

A Dystopian Moment?

While flamethrowers, including Thermonator, sound like devices straight out of a dystopian sci-fi movie, they are legal in 48 U.S. states. They aren’t classified as firearms by federal agencies, though they fall under general product liability and criminal laws.

Specific restrictions exist in Maryland, where a Federal Firearms License is required, and in California, where the flame range cannot exceed 10 feet.

Legal or not, flamethrowers are not exactly toys. They can easily start fires, cause property damage, and harm people. So, if you decide to get one, handle it with care. Thermonator’s advanced features, like obstacle avoidance and first-person navigation, aim to enhance safety, but users must still exercise caution. In other words, don’t try to light your birthday candles with it.

A Nod to Flamethrower History

Thermonator joins the ranks of other notable flame-throwing devices, such as Elon Musk’s Boring Company flamethrower. Back in 2018, Musk’s flamethrower sold 10,000 units in just 48 hours, causing quite a stir due to its potential risks.

Unlike traditional flamethrowers, Thermonator combines the latest in robotics with pyrotechnics, offering a high-tech twist on fire-wielding gadgets.


Source: “You can now buy a flame-throwing robot dog for under $10,000” — Ars Technica

WTF Fun Fact 13731 – The Weight of the Internet

Have you ever stopped to consider the weight of the internet? Ok, probably not.

But despite its intangible nature, the internet has a physical weight. The internet runs on electricity, and the electrons that carry that current have mass. University of California, Berkeley computer science professor John D. Kubiatowicz explained this concept in a 2011 New York Times article. He discussed how electrons, despite their minuscule mass of 9.11 x 10^-31 kilograms, contribute to the internet’s weight.

To understand the internet’s weight, consider an e-reader loaded with books. E-readers use flash memory, which involves trapping electrons in a higher energy state to store data.

Though the number of electrons remains constant, their higher energy state increases the e-reader’s weight by a minuscule amount. For example, loading a 4-gigabyte e-reader with books changes its energy by 1.7 x 10^-5 joules, translating to a weight increase of 10^-18 grams.

While this difference is extremely small, it demonstrates the principle that data storage impacts physical weight.
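The reasoning behind that tiny number is mass-energy equivalence: divide the extra stored energy by the speed of light squared. Here is a quick back-of-envelope check using the figures quoted above:

```python
# Back-of-envelope check of the e-reader figure via mass-energy equivalence,
# delta_m = delta_E / c^2, using the numbers quoted above.
delta_E = 1.7e-5               # joules of extra energy stored by trapped electrons
c = 2.998e8                    # speed of light, m/s
delta_m_grams = delta_E / c**2 * 1000
print(f"{delta_m_grams:.1e} grams")   # ~2e-19 g: the same vanishingly small scale quoted above
```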

Calculating the Weight of the Internet

Expanding this concept to the entire internet involves considering the global network of servers. Approximately 75 to 100 million servers worldwide support the internet. These servers collectively consume about 40 billion watts of electricity. Given that an ampere, the unit of electric current, corresponds to the movement of roughly 10^18 electrons per second, we can estimate the internet’s weight.

By calculating the total number of electrons in motion and their individual mass, scientists estimate the internet’s weight to be about 50 grams.

This weight is equivalent to a medium-sized strawberry. Every email, website, online game, and digital interaction contributes to this overall mass.
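The strawberry figure can be reproduced with the same kind of rough arithmetic. The sketch below makes two simplifying assumptions that the article does not spell out: that the current is driven at roughly one volt inside the hardware, and that one second's worth of electron flow is what gets weighed. With the rounded figure of 10^18 electrons per ampere-second used above, the result lands in the same tens-of-grams ballpark as the quoted 50 grams.

```python
# Rough reconstruction of the "internet weighs about a strawberry" estimate.
# Assumptions for the sketch (not from the source): ~1 volt effective driving
# voltage, one second of electron flow, and the rounded 1e18 electrons per
# ampere-second used in the text.
power_watts = 40e9                 # total power drawn by the world's servers
volts = 1.0                        # assumed effective voltage
electron_mass_kg = 9.11e-31

current_amps = power_watts / volts            # ~4e10 amperes
electrons = current_amps * 1e18               # electrons moved in one second
mass_grams = electrons * electron_mass_kg * 1000
print(f"{mass_grams:.0f} grams")              # ~36 g: tens of grams, strawberry territory
```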

Implications and Fascination

Understanding the internet’s weight highlights the physical realities of our digital world. While we perceive the internet as intangible, it relies on physical components and energy. The electrons powering data transfer and storage have a measurable mass, illustrating the connection between digital information and physical science.

This knowledge emphasizes the importance of efficient data management and energy use in maintaining the internet. As the internet continues to expand, optimizing server efficiency and reducing energy consumption becomes crucial.

These efforts not only lower operational costs but also minimize the environmental impact of our digital infrastructure.


Source: “The World Contained in a Strawberry” — Futurism

WTF Fun Fact 13724 – Robotic Locomotion

Apparently, the field of robotic locomotion is moving more slowly than expected.

For years, robotics engineers have been on a mission to develop robots that can walk or run as efficiently as animals. Despite investing millions of dollars and countless hours into research, today’s robots still fall short of the natural agility and endurance exhibited by many animals.

Dr. Max Donelan from Simon Fraser University notes some impressive examples from the animal kingdom: “Wildebeests undertake thousands of kilometers of migration over rough terrain, mountain goats scale sheer cliffs, and cockroaches swiftly adapt even after losing a limb.” In contrast, current robotic technologies are not yet capable of replicating such feats of endurance, agility, and robustness.

Insights from Comparative Research

A team of leading scientists and engineers from various institutions recently conducted a detailed study to understand why robots lag behind animals. Published in Science Robotics, their research compared the performance of robot subsystems—power, frame, actuation, sensing, and control—to their biological counterparts. The team included experts like Dr. Sam Burden from the University of Washington and Dr. Tom Libby from SRI International.

Interestingly, the study found that while individual engineered subsystems often outperform biological ones, animals excel in the integration and control of these components at the system level. This integration allows for the remarkable capabilities observed in nature, which robots have yet to achieve.

Dr. Kaushik Jayaram from the University of Colorado Boulder, another contributor to the study, highlighted this point. He explained that while engineered parts might individually exceed their natural equivalents, the holistic performance of animals in motion remains unmatched. This suggests that the real challenge lies not in improving individual robot components but in enhancing how they work together as a system.

The Path Forward in Robotic Locomotion

The researchers remain optimistic about the future of robotics, noting the rapid progress made in a relatively short time compared to the millions of years of natural evolution. Dr. Simon Sponberg from the Georgia Institute of Technology pointed out the advantage of directed engineering over natural evolution: “We can update and improve robot designs with precision, learning from each iteration and immediately applying these lessons across all machines.”

The study not only sheds light on the current limitations of robotic technologies but also charts a course for future developments. By focusing on better integration and control mechanisms, inspired by biological systems, engineers hope to close the gap between robotic and animal locomotion. This advancement could revolutionize how robots are used in challenging environments, from disaster recovery to navigating the urban landscape.

Dr. Donelan concluded with a forward-looking statement: “As we learn from biology to better integrate and control robotic systems, we can achieve the level of efficiency, agility, and robustness that mirrors the natural world.”


Source: “Why can’t robots outrun animals?” — ScienceDaily

WTF Fun Fact 13720 – Brain-Computer Interfaces

Interactive technology took a significant leap forward with the latest development in brain-computer interfaces by engineers at The University of Texas at Austin. This new technology allows users to control video games using nothing but their thoughts, eliminating the need for traditional manual controls.

Breaking Barriers with Brain-Computer Interfaces

One of the groundbreaking aspects of this interface is its lack of need for individual calibration. Traditional brain-computer interfaces require extensive customization to align with each user’s unique neurological patterns. This new system, however, uses machine learning to adapt to individual users quickly, allowing for a much more user-friendly experience. This innovation drastically reduces setup time and makes the technology accessible to a broader audience, including those with motor disabilities.

The interface works by using a cap fitted with electrodes that capture brain activity. These signals are then translated into commands that control game elements, such as steering a car in a racing game. This setup not only introduces a new way of gaming but also holds the potential for significant advancements in assistive technology.
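To make the signal-to-command idea concrete, here is a heavily simplified, hypothetical sketch of such a decoding loop. It is not the UT Austin system: the band-power feature, the two-channel comparison, and the command names are all assumptions chosen only to illustrate the shape of the pipeline.

```python
import numpy as np

# Toy EEG-to-command decoder (illustrative only, not the UT Austin interface).
# Idea: imagined movement of one hand suppresses 8-12 Hz "mu" power over the
# opposite motor cortex, which a decoder can turn into a steering command.

FS = 250  # sampling rate in Hz (assumed)

def band_power(signal, fs, low, high):
    """Average spectral power of one channel within a frequency band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return power[(freqs >= low) & (freqs <= high)].mean()

def decode_command(window, fs=FS):
    """Compare mu-band power on a left- and right-hemisphere channel and map
    the asymmetry to a game command."""
    mu_left = band_power(window[0], fs, 8, 12)
    mu_right = band_power(window[1], fs, 8, 12)
    if mu_left < mu_right:      # left-hemisphere suppression -> imagined right hand
        return "steer_right"
    if mu_right < mu_left:
        return "steer_left"
    return "hold"

# Usage with simulated data: two channels, one second of noise.
print(decode_command(np.random.randn(2, FS)))
```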

Enhancing Neuroplasticity Through Gaming

The research, led by José del R. Millán and his team, explores the technology and its impact on neuroplasticity—the brain’s ability to form new neural connections. The team’s efforts focus on harnessing this capability to improve brain function and quality of life for patients with neurological impairments.

Participants in the study engaged in two tasks: first, a complex car racing game requiring strategic thinking for maneuvers like turns, and then a simpler task involving balancing a digital bar. These activities were chosen to train the brain in different ways and to leverage the interface’s capacity to translate neural commands into digital actions.

Foundational Research and Future Applications

The research represents foundational work in the field of brain-computer interfaces. Initially tested on subjects without motor impairments, the next step involves trials with individuals who have motor disabilities. This expansion is crucial for validating the interface’s potential clinical applications.

Beyond gaming, the technology is poised to revolutionize how individuals with disabilities interact with their environments. The ongoing projects include developing a wheelchair navigable via thought and rehabilitation robots for hand and arm therapy, which were recently demonstrated at the South by Southwest Conference and Festivals.

This brain-computer interface stands out not only for its technological innovation but also for its commitment to improving lives. It exemplifies the potential of using machine learning to enhance independence and quality of life for people with disabilities. As this technology progresses, it promises to open new avenues for accessibility and personal empowerment, making everyday tasks more manageable and integrating advanced assistive technologies into the fabric of daily living.


Source: “Universal brain-computer interface lets people play games with just their thoughts” — ScienceDaily

WTF Fun Fact 13718 – Recreating the Holodeck

Engineers from the University of Pennsylvania have developed a tool inspired by Star Trek’s Holodeck. It uses advances in AI to transform how we interact with digital spaces.

The Power of Language in Creating Virtual Worlds

In Star Trek, the Holodeck was a revolutionary concept, a room that could simulate any environment based on verbal commands. Today, that concept has moved closer to reality. The UPenn team has developed a system where users describe the environment they need, and AI brings it to life. This system relies heavily on large language models (LLMs), like ChatGPT. These models understand and process human language to create detailed virtual scenes.

For example, if a user requests a “1b1b apartment for a researcher with a cat,” the AI breaks this down into actionable items. It designs the space, selects appropriate objects from a digital library, and arranges them realistically within the environment. This method simplifies the creation of virtual spaces and opens up possibilities for training AI in scenarios that mimic real-world complexity.
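As a rough illustration of that request-to-scene pipeline, the sketch below asks a language model for a structured scene description and then "places" the returned objects. It is not the Penn team's code: the prompt wording, the JSON schema, and the stubbed call_llm() function are assumptions so the example runs on its own.

```python
import json

# Illustrative sketch of a language-driven scene generator (not the actual
# Holodeck system). A real implementation would call an LLM; here the call is
# stubbed with a canned response so the example runs offline.

def call_llm(prompt: str) -> str:
    """Stand-in for a chat-model call; returns a fixed JSON scene description."""
    return json.dumps({
        "rooms": [{"name": "bedroom", "size_m": [3, 4]},
                  {"name": "bathroom", "size_m": [2, 2]}],
        "objects": [{"type": "desk", "room": "bedroom"},
                    {"type": "bookshelf", "room": "bedroom"},
                    {"type": "cat_bed", "room": "bedroom"}],
    })

def generate_scene(request: str) -> dict:
    prompt = ("Design an indoor scene for this request and reply only with "
              "JSON containing 'rooms' and 'objects': " + request)
    return json.loads(call_llm(prompt))

scene = generate_scene("a 1b1b apartment for a researcher with a cat")
for obj in scene["objects"]:
    print(f"place {obj['type']} in {obj['room']}")   # hand off to a 3D engine here
```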

The Holodeck-Inspired System

Traditionally, virtual environments for AI training were crafted by artists, a time-consuming and limited process. Now, with the Holodeck-inspired system, millions of diverse and complex environments can be generated quickly and efficiently. This abundance of training data is crucial for developing ‘embodied AI’: robots that understand and navigate our world.

Just think of the practical implications. For example, robots can be trained in these virtual worlds to perform tasks ranging from household chores to complex industrial jobs before they ever interact with the real world. This training ensures that AI behaves as expected in real-life situations, reducing errors and improving efficiency.

A Leap Forward in AI Training and Functionality

The University of Pennsylvania’s project goes beyond generating simple spaces. It tests these environments with real AI systems to refine their ability to interact with and navigate these spaces. For instance, an AI trained in a virtual music room was significantly better at locating a piano compared to traditional training methods. This shows that AI can learn much more effectively in these dynamically generated environments.

The project also highlights a shift in AI research focus to varied environments like stores, public spaces, and offices. By broadening the scope of training environments, AI can adapt to more complex and varied tasks.

The connection between this groundbreaking AI technology and Star Trek’s Holodeck lies in the core concept of creating immersive, interactive 3D environments on demand. Just as the Holodeck allowed the crew of the U.S.S. Enterprise to step into any scenario crafted by their commands, this new system enables users to generate detailed virtual worlds through simple linguistic prompts.

This technology mimics the Holodeck’s ability to create and manipulate spaces that are not only visually accurate but also interactable, providing a seamless blend of fiction and functionality that was once only imaginable in the realm of sci-fi.


Source: “Star Trek’s Holodeck recreated using ChatGPT and video game assets” — ScienceDaily

WTF Fun Fact 13689 – The Origin of the Word Robot

The word “robot” is a term we’ve made synonymous with machines capable of performing tasks autonomously. Surprisingly, the root of “robot” is less about silicon and circuits and more about human history and linguistics.

The Birth of the Word Robot

The word “robot” made its first appearance in the realm of literature, introduced by Czech playwright Karel Čapek in his 1920 play “R.U.R.” or “Rossum’s Universal Robots.” The term comes from the Czech word “robota,” meaning “forced labor” or “drudgery.” It describes artificially created beings designed to perform work for humans.

The etymology reflects a deep historical context, where “robota” was associated with the burdensome toil of serfs. Through Čapek’s narrative, this concept of labor was reimagined, giving birth to what we now understand as robots.

A Universal Term

From its dramatic debut, “robot” quickly became a universal term. It captured the imagination of the public and scientists alike. In doing so, it became the go-to descriptor for the burgeoning field of machines designed to mimic human actions. The transition from a word describing human labor to one embodying mechanical automatons is a testament to the term’s versatility and the evolution of technology.

What started as a fictional concept in Čapek’s play has exploded into a major field of study and development. Robots now roam factory floors, explore other planets, and even perform surgery. It’s far removed from “forced labor” but linked to the idea of performing tasks on behalf of humans.

The Legacy of “Robot”

The origin of “robot” is a reminder of how art and language can influence technology and society. Čapek’s play not only introduced a new word. It also prompted us to think about the ethical and practical implications of creating beings to serve human needs. The word “robot” now carries with it questions of autonomy, ethics, and the future of work and creativity.

The word “robot” is a linguistic snapshot of human innovation and our relationship with technology.


Source: “The Czech Play That Gave Us the Word ‘Robot’” — MIT Press Reader

WTF Fun Fact 13684 – Mark Zuckerberg Tried to Sell Facebook

Mark Zuckerberg, the brain behind Facebook, once tried to sell the platform. Yes, the social media giant that’s now a staple in over 2 billion people’s daily lives was almost handed over to another company before it could spread its wings. Let’s unpack this fascinating slice of history.

The Offer on the Table to Sell Facebook

Back in the early days of Facebook, or “TheFacebook” as it was originally called, Zuckerberg and his co-founders created a buzz on college campuses. It was this buzz that caught the attention of several investors and companies. Among them was Friendster, a once-popular social networking site, which actually made an offer to buy Facebook. The figure tossed around? A cool $10 million.

Reports from ZDNet reveal that in July 2004, Zuckerberg was indeed open to selling Facebook.

Zuckerberg’s Vision

What’s even more interesting is Zuckerberg’s decision to decline all offers. At the time, Facebook was just a fledgling site, far from the global platform it is today. Yet, Zuckerberg saw the potential for something much larger than a college network. He believed in the idea of connecting people in ways that hadn’t been done before.

Selling to Friendster, or any other suitor for that matter, didn’t align with his vision for what Facebook could become.

The Road Not Taken to Sell Facebook

Zuckerberg’s choice to keep Facebook independent was a pivotal moment in the company’s history. It set the stage for Facebook to grow, innovate, and eventually become the social media behemoth we know today. This decision wasn’t just about holding onto a company; it was about believing in the potential of an idea and the impact it could have on the world.

Looking back, it’s clear Zuckerberg’s gamble paid off. Facebook went on to redefine social interaction, media consumption, and digital marketing. It’s interesting to ponder what Facebook might have become had it merged with Friendster. Would it have faded into obscurity, or could it have still risen to the top under different stewardship?

Reflections on a Tech Titan’s Journey

Zuckerberg’s early move to keep Facebook independent set a precedent in the tech world about the value of vision over immediate gain. It’s a reminder that in the fast-paced world of startups, sometimes the biggest risk is not taking one at all. Zuckerberg’s faith in his project’s potential is a testament to the power of innovation and persistence.


Source: “Mark Zuckerberg was planning to sell Facebook in July 2004” — ZDNet