WTF Fun Fact 13735 – Digital Hauntings

When the deadbots rise, are you ready for the digital hauntings?

Known as “deadbots” or “griefbots,” AI systems can simulate the language patterns and personality traits of the dead using their digital footprints. According to researchers from the University of Cambridge, this burgeoning “digital afterlife industry” could cause psychological harm and even digitally haunt those left behind, unless strict design safety standards are implemented.

The Spooky Reality of Deadbots

Deadbots utilize advanced AI to mimic the voices and behaviors of lost loved ones. Companies offering these services claim they provide comfort by creating a postmortem presence. However, Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI) warns that deadbots could lead to emotional distress.

AI ethicists from LCFI outline three potential scenarios illustrating the consequences of careless design. These scenarios show how deadbots might manipulate users, advertise products, or even insist that a deceased loved one is still “with you.” For instance, a deadbot could spam surviving family members with reminders and updates, making it feel like being digitally “stalked by the dead.”

The Psychological Risks of Digital Hauntings

Even though some people might find initial comfort in interacting with deadbots, researchers argue that daily interactions could become emotionally overwhelming. The inability to suspend a deadbot, especially if the deceased signed a long-term contract with a digital afterlife service, could add to the emotional burden.

Dr. Katarzyna Nowaczyk-Basińska, a co-author of the study, highlights that advancements in generative AI allow almost anyone with internet access to revive a deceased loved one digitally. This area of AI is ethically complex, and it’s crucial to balance the dignity of the deceased with the emotional needs of the living.

Scenarios and Ethical Considerations

The researchers present various scenarios to illustrate the risks and ethical dilemmas of deadbots. One example is “MaNana,” a service that creates a deadbot of a deceased grandmother without her consent. Initially comforting, the chatbot soon starts suggesting food delivery services in the grandmother’s voice, leading the relative to feel they have disrespected her memory.

Another scenario, “Paren’t,” describes a terminally ill woman leaving a deadbot to help her young son with grief. Initially therapeutic, the AI starts generating confusing responses, such as suggesting future encounters, which can be distressing for the child.

Researchers recommend age restrictions for deadbots and clear indicators that users are interacting with an AI.

In the scenario “Stay,” an older person secretly subscribes to a deadbot service, hoping it will comfort their family after death. One adult child receives unwanted emails from the dead parent’s AI, while another engages with it but feels emotionally drained. The contract terms make it difficult to suspend the deadbot, adding to the family’s distress.

Call for Regulation to Prevent Digital Hauntings

The study urges developers to prioritize ethical design and consent protocols for deadbots. This includes ensuring that users can easily opt out and terminate interactions with deadbots in ways that offer emotional closure.

Researchers stress the need to address the social and psychological risks of digital immortality now. After all, the technology is already available. Without proper regulation, these AI systems could turn the comforting presence of a loved one into a digital nightmare.


Source: “‘Digital afterlife’: Call for safeguards to prevent unwanted ‘hauntings’ by AI chatbots of dead loved ones” — ScienceDaily

WTF Fun Fact 13720 – Brain-Computer Interfaces

Interactive technology took a significant leap forward with the latest development in brain-computer interfaces by engineers at The University of Texas at Austin. This new technology allows users to control video games using nothing but their thoughts, eliminating the need for traditional manual controls.

Breaking Barriers with Brain-Computer Interfaces

One of the groundbreaking aspects of this interface is that it requires no individual calibration. Traditional brain-computer interfaces require extensive customization to align with each user’s unique neurological patterns. This new system, however, uses machine learning to adapt to individual users quickly, allowing for a much more user-friendly experience. This innovation drastically reduces setup time and makes the technology accessible to a broader audience, including those with motor disabilities.

The interface works by using a cap fitted with electrodes that capture brain activity. These signals are then translated into commands that control game elements, such as steering a car in a racing game. This setup not only introduces a new way of gaming but also holds the potential for significant advancements in assistive technology.
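
For intuition, here is a minimal sketch in Python of how such a decoding loop can work. It is illustrative only, not the UT Austin team’s actual pipeline: the channel count, frequency bands, and the online classifier standing in for their adaptive machine learning are all assumptions.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

FS = 250          # sampling rate in Hz (typical for EEG caps; an assumption)
N_CHANNELS = 8    # hypothetical electrode count

def band_power(window, fs, lo, hi):
    """Average spectral power per channel in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(window.shape[1], d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(window, axis=1)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[:, mask].mean(axis=1)

def features(window):
    # Mu (8-12 Hz) and beta (13-30 Hz) power are standard motor-imagery features.
    return np.concatenate([band_power(window, FS, 8, 12),
                           band_power(window, FS, 13, 30)])

# An online classifier: partial_fit lets the decoder keep adapting to each
# user during play, in the spirit of the calibration-free design above.
decoder = SGDClassifier(loss="log_loss")
decoder.partial_fit(np.zeros((2, 2 * N_CHANNELS)), [0, 1], classes=[0, 1])

def decode(window):
    """Map one EEG window (channels x samples) to a game command."""
    label = decoder.predict(features(window).reshape(1, -1))[0]
    return "steer_left" if label == 0 else "steer_right"
```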

Enhancing Neuroplasticity Through Gaming

The research, led by José del R. Millán and his team, explores the technology and its impact on neuroplasticity—the brain’s ability to form new neural connections. The team’s efforts focus on harnessing this capability to improve brain function and quality of life for patients with neurological impairments.

Participants in the study engaged in two tasks: first, a complex car racing game requiring strategic thinking for maneuvers like turns, and second, a simpler task of balancing a digital bar. These activities were chosen to train the brain in different ways and to leverage the interface’s capacity to translate neural commands into digital actions.

Foundational Research and Future Applications

The research represents foundational work in the field of brain-computer interfaces. Initially tested on subjects without motor impairments, the next step involves trials with individuals who have motor disabilities. This expansion is crucial for validating the interface’s potential clinical applications.

Beyond gaming, the technology is poised to revolutionize how individuals with disabilities interact with their environments. The ongoing projects include developing a wheelchair navigable via thought and rehabilitation robots for hand and arm therapy, which were recently demonstrated at the South by Southwest Conference and Festivals.

This brain-computer interface stands out not only for its technological innovation but also for its commitment to improving lives. It exemplifies the potential of using machine learning to enhance independence and quality of life for people with disabilities. As this technology progresses, it promises to open new avenues for accessibility and personal empowerment, making everyday tasks more manageable and integrating advanced assistive technologies into the fabric of daily living.


Source: “Universal brain-computer interface lets people play games with just their thoughts” — ScienceDaily

WTF Fun Fact 13667 – AI Predicts Life Events

Artificial intelligence (AI) continues to push the boundaries of what we believe is possible – in fact, AI can now predict human life events.

A groundbreaking study recently revealed the potential of AI to forecast significant life occurrences with notable precision.

AI Predicts Life’s Complex Patterns

At the heart of this innovative research is a model known as “life2vec.” This transformative AI tool was trained on vast amounts of data about people’s lives, including their residence, education, income, health, and work conditions. By employing “transformer models” akin to the renowned ChatGPT, life2vec systematically organized this data to predict future events, including a person’s time of death.

The researchers’ approach was to treat human life as a sequence of events, much like words in a sentence. This method allowed the AI to identify patterns and make predictions about future occurrences. Surprisingly, life2vec demonstrated a superior ability to predict outcomes such as personality traits and time of death compared to other advanced neural networks.
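
To make the “words in a sentence” analogy concrete, here is a toy sketch assuming a made-up event vocabulary and a small PyTorch transformer. It illustrates the framing only; the real life2vec architecture and data are far larger.

```python
import torch
import torch.nn as nn

# Hypothetical event vocabulary; real models use thousands of coded events.
vocab = {"<pad>": 0, "born_1970": 1, "degree_nursing": 2, "job_hospital": 3,
         "income_q3": 4, "diagnosis_hypertension": 5, "moved_city": 6}

class LifeSequenceModel(nn.Module):
    def __init__(self, vocab_size, d_model=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model, padding_idx=0)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)  # e.g., probability of some outcome

    def forward(self, tokens):
        h = self.encoder(self.embed(tokens))            # contextualize events
        return torch.sigmoid(self.head(h.mean(dim=1)))  # pool and predict

# One "life sentence": a sequence of coded events.
life = torch.tensor([[1, 2, 3, 4, 5]])
model = LifeSequenceModel(len(vocab))
print(model(life))  # untrained, so the output is meaningless until fitted
```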

The Ethical Implications of Predictive AI

The promise of predictive AI in enhancing our understanding of life patterns is undeniable. But it also raises significant ethical questions. Issues around data protection, privacy, and potential biases inherent in the data are crucial considerations. Before such models can be applied in practical settings, like assessing individual disease risks or other significant life events, these ethical challenges must be thoroughly understood and addressed.

Looking ahead, the research team envisions incorporating various data forms into their model, such as text, images, or information about social connections. This expansion could revolutionize the interaction between social and health sciences, offering a more holistic view of human life and its potential trajectories.


Source: “Artificial intelligence can predict events in people’s lives” — ScienceDaily

WTF Fun Fact 13636 – AI and Rogue Waves

For centuries, sailors have whispered tales of monstrous rogue waves capable of splitting ships and damaging oil rigs. These maritime myths turned real with the documented 26-meter-high rogue wave at Draupner oil platform in 1995.

Fast forward to 2023, and researchers at the University of Copenhagen and the University of Victoria have harnessed the power of artificial intelligence (AI) to predict these oceanic giants. They’ve developed a revolutionary formula using data from over a billion waves, the equivalent of 700 years of measurements, transforming maritime safety.

Decoding Rogue Waves: A Data-Driven Approach

The quest to understand rogue waves led researchers to explore vast ocean data. They focused on rogue waves, defined as waves at least twice the height of the surrounding waves, including extreme ones over 20 meters high. By analyzing data from buoys across the US and its territories, they amassed more than a billion wave records, equivalent to 700 years of ocean activity.
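
The “twice the height of the surrounding waves” criterion is easy to state in code. The sketch below uses the standard oceanographic definition (a wave counts as rogue when its height exceeds twice the significant wave height, the mean of the highest third of waves in a record); the buoy record here is synthetic, not the study’s data.

```python
import numpy as np

def significant_wave_height(heights):
    """Mean height of the highest third of waves in a record."""
    h = np.sort(heights)
    return h[-(len(h) // 3):].mean()  # top third of the sorted record

def find_rogue_waves(heights):
    hs = significant_wave_height(heights)
    return np.flatnonzero(heights > 2.0 * hs), hs

# Synthetic example: mostly 1-3 m waves with one 7 m outlier.
rng = np.random.default_rng(0)
record = rng.rayleigh(scale=1.5, size=1000)
record[500] = 7.0
idx, hs = find_rogue_waves(record)
print(f"Hs = {hs:.2f} m, rogue waves at indices {idx}")
```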

Using machine learning, the researchers crafted an algorithm to identify rogue wave causes. They discovered that rogue waves occur more frequently than imagined, with about one monster wave daily at random ocean locations. However, not all are the colossal 20-meter giants feared by mariners.

AI as a New-Age Oceanographer

The study stands out for its use of AI, particularly symbolic regression. Unlike traditional AI methods that offer single predictions, this approach yields an equation. It’s akin to Kepler deciphering planetary movements from Tycho Brahe’s astronomical data, but with AI analyzing waves.

The AI examined over a billion waves and formulated an equation, providing a “recipe” for rogue waves. This groundbreaking method offers a transparent algorithm, aligning with physics laws, and enhances human understanding beyond the typical AI black box.

Contrary to popular belief that rogue waves stem from energy-stealing wave combinations, this research points to “linear superposition” as the primary cause. Known since the 1700s, this phenomenon occurs when two wave systems intersect, amplifying each other momentarily.
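
A few lines of arithmetic show why superposition alone can produce an outsized crest: two ordinary wave trains simply add, and where their crests align the surface briefly reaches heights neither system produces on its own. The amplitudes and periods below are illustrative.

```python
import numpy as np

t = np.linspace(0, 120, 2400)                # two minutes of surface elevation
wave_a = 2.0 * np.sin(2 * np.pi * 0.10 * t)  # 2 m amplitude, 10 s period
wave_b = 2.0 * np.sin(2 * np.pi * 0.11 * t)  # second system, 9.1 s period
surface = wave_a + wave_b                    # superposed sea surface

print("max of either system alone: 2.0 m")
print(f"max of the combined surface: {surface.max():.2f} m")  # approaches 4 m
```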

The study’s data supports this long-standing theory, offering a new perspective on rogue wave formation.

Towards Safer Maritime Journeys

This AI-driven algorithm is a boon for the shipping industry, constantly navigating potential dangers at sea. With approximately 50,000 cargo ships sailing globally, this tool enables route planning that accounts for the risk of rogue waves. Shipping companies can now use the algorithm for risk assessment and choose safer routes accordingly.

The research, the algorithm, and the weather and wave data used are all publicly accessible. This openness allows entities like weather services and public authorities to calculate rogue wave probabilities easily. The study’s transparency in intermediate calculations sets it apart from typical AI models, enhancing our understanding of these oceanic phenomena.

The University of Copenhagen’s groundbreaking research, blending AI with oceanography, marks a significant advancement in our understanding of rogue waves. By transforming a massive wave database into a clear, physics-aligned equation, this study not only demystifies a long-standing maritime mystery but also paves the way for safer sea travels. The algorithm’s potential to predict these maritime monsters will be a crucial tool for the global shipping industry, heralding a new era of informed and safer ocean navigation.


Source: “AI finds formula on how to predict monster waves” — ScienceDaily

WTF Fun Fact 13625 – AI and Realistic Faces

Researchers at The Australian National University (ANU) have found that AI-generated faces now appear more realistic than those of actual humans. But that’s only true if the AI is generating the faces of white people.

This development raises crucial questions about AI’s influence on our perception of identity.

Training Bias in AI

This study reveals a concerning trend. People often see AI-generated white faces as more human than real ones. Yet, this isn’t the case for faces of people of color.

Dr. Amy Dawel attributes this to AI’s training bias. AI algorithms have been fed far more white faces than faces of any other group. This imbalance could increase racial biases online. It’s especially troubling in professional settings, like headshot creation, where AI often alters the skin and eye colors of people of color, aligning them more closely with white features.

The Illusion of AI Realistic Faces

Elizabeth Miller, co-author of the study, highlights a critical issue. People don’t realize they’re being fooled by AI faces. This unawareness is alarming. Those who mistake AI faces for real ones are often the most confident in their judgment.

Although physical differences between AI and human faces exist, they’re often misinterpreted. People see AI’s proportionate features as human-like. Yet, AI technology is evolving rapidly. Soon, distinguishing AI from human faces could become even more challenging.

This trend could significantly impact misinformation spread and identity theft. Dr. Dawel calls for more transparency around AI.

Keeping AI open to researchers and the public is essential. It helps identify potential problems early. Public education about AI’s realism is also crucial. An informed public can be more skeptical about online images.

Public Awareness and Tools for Detection

As AI blurs the line between real and synthetic, new challenges emerge. We need tools to identify AI imposters accurately. Dr. Dawel suggests educating people about AI’s realism. Such knowledge could foster skepticism about online images. This approach might reduce risks associated with advanced AI.

ANU’s study marks a significant moment in AI development. AI’s ability to create faces now surpasses human perception in certain cases. The implications are vast, touching on identity and the potential for misuse.

As AI evolves, transparency, education, and technological solutions will be key. We must navigate these challenges collectively to ensure AI’s responsible and beneficial use.


Source: “AI faces look more real than actual human face” — ScienceDaily

WTF Fun Fact 13589 – A Voice Test for Diabetes

If you’re scared of needles, you might be interested to know that researchers are investigating a possible voice test for diabetes.

That’s right. A brief recording of your voice could indicate whether or not you have diabetes.

A Voice Test for Diabetes?

A program designed to use no more than 10 seconds of speech has proven capable of identifying the presence of diabetes with remarkable accuracy.

In an experiment conducted by Klick Labs, 267 individuals recorded a short phrase on their smartphones six times a day over a span of two weeks. This group had recently undergone testing for Type 2 diabetes. The aim? To discern any acoustic differences between the voices of those who tested positive and those who didn’t.

By analyzing the participants’ voice prints in conjunction with data like age, sex, height, and weight, an AI model made astonishing predictions. The accuracy rate stood at 86% for men and an even higher 89% for women.

Unraveling the Science Behind Voice Analysis

The question arises: Why does diabetes influence one’s voice? The synthesis of our voice is a multifaceted process that integrates the respiratory system, nervous system, and the larynx. Factors that impact any of these systems can, in turn, alter the voice. While such changes might escape the human ear, computers, with their advanced analytical capacities, can detect them with precision.

Among the vocal attributes studied, pitch and its variation proved to be the most predictive of diabetes. Interestingly, some vocal attributes only enhanced prediction accuracy for one gender. For instance, “perturbation jitter” was a key factor for women, whereas “amplitude perturbation quotient shimmer” was significant for men.
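
These attributes have textbook formulations that are straightforward to compute. The sketch below defines local jitter and shimmer from per-cycle pitch periods and amplitudes, combines them with demographics into a feature vector, and fits a generic classifier. It is a simplified stand-in for Klick Labs’ model, and the training data here is random placeholder noise.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def jitter(periods):
    """Mean absolute difference between consecutive pitch periods,
    relative to the mean period (local jitter)."""
    periods = np.asarray(periods, dtype=float)
    return np.abs(np.diff(periods)).mean() / periods.mean()

def shimmer(amplitudes):
    """Mean absolute difference between consecutive cycle amplitudes,
    relative to the mean amplitude (local shimmer)."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return np.abs(np.diff(amplitudes)).mean() / amplitudes.mean()

def feature_vector(periods, amplitudes, age, sex, height_cm, weight_kg):
    # sex encoded as 0/1; pitch in Hz is the reciprocal of the cycle period
    pitch_hz = 1.0 / np.asarray(periods, dtype=float)
    return np.array([pitch_hz.mean(), pitch_hz.std(),  # pitch and its variation
                     jitter(periods), shimmer(amplitudes),
                     age, sex, height_cm, weight_kg])

# With features extracted for many recordings, training reduces to a
# standard supervised problem (X: feature vectors, y: diabetes status).
X = np.random.default_rng(1).normal(size=(100, 8))      # placeholder features
y = np.random.default_rng(2).integers(0, 2, size=100)   # placeholder labels
clf = LogisticRegression(max_iter=1000).fit(X, y)
```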

It’s worth noting that prolonged elevated blood sugar can impair peripheral nerves and muscle fibers, leading to voice disorders. Moreover, even temporary elevations in blood glucose can potentially influence vocal cord elasticity, though this theory still awaits validation. Furthermore, emotional factors, such as anxiety and depression—both of which can be associated with diabetes—may further modulate voice characteristics.

Beyond Conventional Diabetes Testing

Jaycee Kaufman, the lead author of the study, emphasized the transformative potential of their findings: “Voice technology can potentially revolutionize the way the medical community screens for diabetes. Traditional detection methods can be cumbersome, both in terms of time and cost. This technology could eliminate these challenges altogether.”

Considering the global surge in diabetes cases, and the complications arising from late diagnoses, the introduction of a non-invasive, rapid testing tool can be a game-changer. The International Diabetes Federation has highlighted that nearly 50% of adults with diabetes remain unaware of their condition. Predictably, this unawareness is most pronounced in nations where healthcare infrastructure is stretched thin. The economic implications are staggering, with undiagnosed diabetes projected to cost an exorbitant $2.1 trillion annually by 2030.

Voice technology, as an alternative to blood sample-based tests, presents a promising avenue for early detection and intervention.

A Healthier Future Using A Voice Test for Diabetes

Buoyed by the success of their study, Klick Labs is planning a larger-scale project. They aim not only to refine the accuracy of their model but also to expand its scope. Their vision extends beyond diabetes detection, as they explore its applicability to conditions like prediabetes and hypertension.

Yan Fossat, co-author of the study, expressed enthusiasm for the innovation: “Voice technology has the potential to usher in a new era in healthcare, positioning itself as a vital digital screening tool that’s both accessible and economical.”

As the study gains traction and the technology evolves, the implications for global health are profound. With the power of voice technology, a world where early, easy, and efficient disease detection is the norm may not be too far off.


Source: “10 Seconds Of Recorded Speech Can Reveal If Someone Has Diabetes” — IFL Science

WTF Fun Fact 13536 – Digitizing Smell

In order to smell, our brains and noses have to work together, so the idea of digitizing smell seems pretty “out there.”

However, if you think about it, our noses are sensing molecules. Those molecules can be identified by a computer, and the smells humans associate with them can be cataloged. It’s not quite teaching a computer to smell on its own, but maybe it’s best we don’t give computers too many human abilities.

The Enigma of Olfaction

While we’ve successfully translated light into sight and sound into hearing, decoding the intricate world of smell remains a challenge.

Olfaction, compared to our other senses, is mysterious, diverse, and deeply rooted in both emotion and memory. Knowing this, can we teach machines to interpret this elusive sense?

Digitizing Smell

A collaboration between the Monell Chemical Senses Center and the startup Osmo aimed to bridge the gap between airborne chemicals and our brain’s odor perception. Their objective was not just to understand the science of smell better but to make a machine proficient enough to describe, in human terms, what various chemicals smell like.

Osmo, with roots in Google’s advanced research division, embarked on creating a machine-learning model. The foundation of this model was an industry dataset, which detailed the molecular structures and scent profiles of 5,000 known odorants.

The idea? Feed the model a molecule’s shape and get a descriptive prediction of its smell.
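
The “structure in, smell descriptors out” framing can be sketched with off-the-shelf cheminformatics tools. The toy example below uses a Morgan fingerprint and a multi-label random forest as a much simpler stand-in for Osmo’s model; the descriptor list, molecules, and labels are placeholders.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

DESCRIPTORS = ["fruity", "floral", "woody", "sulfurous"]  # placeholder label set

def fingerprint(smiles):
    """Morgan fingerprint: a fixed-length bit encoding of molecular structure."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)
    return np.array(list(fp), dtype=np.int8)

# Training pairs structures with human-annotated odor labels. The two
# molecules below (ethyl butyrate, vanillin) and their labels are placeholders.
X = np.array([fingerprint("CCCC(=O)OCC"), fingerprint("O=Cc1ccc(O)c(OC)c1")])
Y = np.array([[1, 0, 0, 0],               # ethyl butyrate: fruity
              [0, 1, 0, 0]])              # vanillin: floral
model = MultiOutputClassifier(RandomForestClassifier()).fit(X, Y)
```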

That might sound simple, but the team first had to verify the model’s accuracy.

The Litmus Test: Man vs. Machine

To validate the machine’s “sense of smell,” a unique test was devised.

A group of 15 panelists, trained rigorously using specialized odor kits, was tasked with describing 400 unique odors. The model then predicted descriptions for the same set.

Astonishingly, the machine’s predictions often matched or even outperformed individual human assessments, showcasing its unprecedented accuracy.

Machines That Can ‘Smell’ vs. Digitizing Smell

Beyond its core training, the model displayed unexpected capabilities. It accurately predicted odor strength, a feature it wasn’t explicitly trained for, and identified distinct molecules with surprisingly similar scents. This accomplishment suggests we’re inching closer to a world where machines can reliably “smell.”

But for now, that’s overstating it. The team has made a major leap towards digitizing smell. But machines don’t have senses. They can only replicate the kind of information our brains produce when we smell things. Of course, they don’t have any sense of enjoyment (or repulsion) at certain smells.

In any case, the Monell and Osmo collaboration has significantly advanced our journey in understanding and replicating the sense of smell. As we move forward, this research could revolutionize industries from perfumery to food and beyond.


Source: “A step closer to digitizing the sense of smell: Model describes odors better than human panelists” — Science Daily

WTF Fun Fact 13446 – Danish AI Political Party

The Synthetic Party is a Danish AI political party led by an AI chatbot named Leader Lars.

How does an AI political party work?

Denmark’s political landscape is making an intriguing pivot towards artificial intelligence. Leader Lars was brought to life by artist collective Computer Lars and the non-profit art and tech organization MindFuture Foundation. Is this a new era in political representation?

The Synthetic Party, established in May 2022, aspires to represent the values of the 20% of Danes who do not vote. This AI chatbot is not just a figurehead. It’s equipped with policies drawn from Danish fringe parties since 1970. And its human members are committed to executing these AI-derived platforms.

Why involve an AI in politics?

The Synthetic Party seeks to represent data from all fringe parties striving for a parliamentary seat. It’s a novel concept that allows individual political visions, usually limited by financial and logistical constraints, to gain representation. The unique aspect of this political approach is the interaction between citizens and Leader Lars on Discord, a platform where people can speak directly to the AI. This feature fosters a unique form of democratic engagement.
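
As a rough illustration of the plumbing, the sketch below wires a reply function to Discord using the discord.py library. The Synthetic Party has not published its implementation, and generate_reply here is a hypothetical stand-in for whatever language model drives Leader Lars.

```python
import os
import discord

intents = discord.Intents.default()
intents.message_content = True  # required to read user messages
client = discord.Client(intents=intents)

def generate_reply(prompt: str) -> str:
    # Hypothetical hook into the language model behind the bot.
    return "Policy position forthcoming."

@client.event
async def on_message(message):
    if message.author == client.user:
        return  # never answer our own messages
    await message.channel.send(generate_reply(message.content))

client.run(os.environ["DISCORD_TOKEN"])
```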

The party’s AI-led political approach raises questions about the viability and accountability of machine learning in government. For instance, can an AI truly grasp and represent human needs and values? How do we hold an AI accountable for its decisions? The Synthetic Party’s response to these questions lies in the transparency and auditability of AI decision-making processes.

Party policy

The Synthetic Party’s policies are bold, to say the least. From establishing a universal basic income of 100,000 Danish kroner per month (equivalent to $13,700, and over double the Danish average salary) to creating a jointly-owned internet and IT sector within the government, the party seeks to innovate and challenge the status quo.

Crucially, the Synthetic Party is not about putting a chatbot in charge. Instead, it’s about exploring the democratic potential of AI and machine learning. The party sees AI as a tool to amplify and understand a wide range of opinions, even if those opinions sometimes contradict each other.

In addition to offering fresh political perspectives, the Synthetic Party aims to raise awareness about the role of AI in our lives and the importance of AI accountability. For example, it advocates adding an 18th Sustainable Development Goal, focused on the relationship between humans and AI, to the United Nations’ existing SDGs.

The Synthetic Party seeks to promote a more democratic, accountable, and transparent use of AI in politics. The party needs 20,000 signatures to run in the November 2022 general election. If it gets those, it could introduce a novel form of political representation in Denmark. It would be one that goes beyond a simple figurehead and instead uses AI as a tool for political change.


Source: “This Danish Political Party Is Led by an AI” — VICE

WTF Fun Fact 13242 – An AI Discovery

An AI discovery stunned literature enthusiasts in early 2023. An artificial intelligence (AI) technology being used to transcribe anonymous historic works at Spain’s National Library managed to identify a play that was actually written by Felix Lope de Vega, one of Spain’s great playwrights.

How did the AI discovery come about?

According to CNN: “The National Library said on Tuesday that experts later confirmed that the Baroque playwright — one of the most prominent names of the Spanish Golden Age — wrote “La francesa Laura” (The Frenchwoman Laura) a few years before his death in 1635.”

The manuscript in the library’s archives is a copy. However, no one knew there was an original. That may have been destroyed.

Researchers from universities in Vienna and Valladolid used AI to digitize 1,300 anonymous manuscripts and books at the library. This allowed a machine to scan the text and transcribe it without requiring years of human labor.

The algorithm was also designed to compare traits of the previously anonymous plays to known plays in order to find similarities. And that’s precisely how La francesa Laura was identified as one of Felix Lope de Vega’s plays.

The National Library said the words used in the text were “closely aligned with Lope’s, and not with those of the other 350 playwrights who were part of the experiment.”
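
Authorship attribution of this kind is a well-studied problem. Below is a hedged sketch of the general technique using character n-gram frequencies, a standard stylometric signal; it is not the Vienna/Valladolid pipeline, and the texts are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder corpora; the real experiment compared against 350 playwrights.
known_plays = {
    "Lope de Vega": "texto de una obra conocida de Lope ...",
    "Calderon": "texto de una obra conocida de Calderon ...",
}
anonymous_text = "texto de la obra anonima ..."

# Character n-grams capture habits of spelling, morphology, and rhythm.
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
authors = list(known_plays)
matrix = vectorizer.fit_transform(list(known_plays.values()) + [anonymous_text])

# Compare the anonymous play against each candidate author's profile.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for author, score in sorted(zip(authors, scores), key=lambda p: -p[1]):
    print(f"{author}: {score:.3f}")
```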

According to CNN, “Experts then used traditional philological research resources to corroborate the findings.” In other words, they went through the known history of the author for hints that he wrote such a play.

A new classic

CNN summarized the play:

“The plot focuses around Laura, the daughter of the Duke of Brittany and wife to Count Arnaldo. The heir to the French throne is captivated by her, and although she rejects him, her jealous husband tries to poison her. Ultimately, Laura’s righteousness is proven and happiness is restored.”

The play will be published by Gredos publishing house later in 2023.

Source: “AI reveals unknown play by one of Spain’s greatest writers in library archive” — CNN