WTF Fun Fact 13667 – AI Predicts Life Events

Artificial intelligence (AI) continues to push the boundaries of what we believe is possible – in fact, AI can now predict human life events.

A groundbreaking study recently revealed the potential of AI to forecast significant life occurrences with notable precision.

AI Predicts Life’s Complex Patterns

At the heart of this innovative research is a model known as “life2vec.” This transformative AI tool was trained on vast amounts of data about people’s lives, including residence, education, income, health, and working conditions. By employing ‘transformer models’ akin to the renowned ChatGPT, life2vec systematically organized this data to predict future events, up to and including a person’s time of death.

The researchers’ approach was to treat human life as a sequence of events, much like words in a sentence. This method allowed the AI to identify patterns and make predictions about future occurrences. Surprisingly, life2vec demonstrated a superior ability to predict outcomes such as personality traits and time of death compared to other advanced neural networks.
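
For the technically curious, here’s a minimal sketch of that “life as a sentence” idea in Python. It is not the actual life2vec model; the event vocabulary, dimensions, and prediction target are all invented for illustration.

```python
# A minimal sketch of the "life as a sequence of events" idea, NOT the
# actual life2vec model. The event vocabulary, dimensions, and
# prediction head are all invented for illustration.
import torch
import torch.nn as nn

EVENTS = ["<pad>", "moved_city", "new_job", "salary_raise",
          "hospital_visit", "graduated", "retired"]
vocab = {event: i for i, event in enumerate(EVENTS)}

class LifeSequenceModel(nn.Module):
    def __init__(self, vocab_size, d_model=32, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, vocab_size)  # next-event logits

    def forward(self, tokens):
        x = self.embed(tokens)         # (batch, seq_len, d_model)
        h = self.encoder(x)            # contextualized event embeddings
        return self.head(h[:, -1, :])  # predict the next event

# One person's life, encoded like a sentence of event "words".
life = torch.tensor([[vocab["graduated"], vocab["new_job"],
                      vocab["moved_city"], vocab["salary_raise"]]])
model = LifeSequenceModel(len(EVENTS))
print(model(life).softmax(-1))  # untrained next-event probabilities
```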

The Ethical Implications of Predictive AI

The promise of predictive AI in enhancing our understanding of life patterns is undeniable. But it also raises significant ethical questions. Issues around data protection, privacy, and potential biases inherent in the data are crucial considerations. Before such models can be applied in practical settings, like assessing individual disease risks or other significant life events, these ethical challenges must be thoroughly understood and addressed.

Looking ahead, the research team envisions incorporating various data forms into their model, such as text, images, or information about social connections. This expansion could revolutionize the interaction between social and health sciences, offering a more holistic view of human life and its potential trajectories.


Source: “Artificial intelligence can predict events in people’s lives” — ScienceDaily

WTF Fun Fact 13636 – AI and Rogue Waves

For centuries, sailors have whispered tales of monstrous rogue waves capable of splitting ships and damaging oil rigs. These maritime myths turned real with the 26-meter rogue wave documented at the Draupner oil platform in 1995.

Fast forward to 2023, and researchers at the University of Copenhagen and the University of Victoria have harnessed the power of artificial intelligence (AI) to predict these oceanic giants. They’ve developed a revolutionary formula using data from more than a billion waves, the equivalent of 700 years of ocean activity, transforming maritime safety.

Decoding Rogue Waves: A Data-Driven Approach

The quest to understand rogue waves led researchers to explore vast amounts of ocean data. They focused on rogue waves, defined as waves at least twice the height of those around them, including extreme ones over 20 meters high. By analyzing data from buoys across the US and its territories, they amassed more than a billion wave records, equivalent to 700 years of ocean activity.

Using machine learning, the researchers crafted an algorithm to identify rogue wave causes. They discovered that rogue waves occur more frequently than imagined, with about one monster wave daily at random ocean locations. However, not all are the colossal 20-meter giants feared by mariners.
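
To make the definition concrete, here’s a tiny Python sketch of the rogue-wave criterion used in this line of research: a wave counts as rogue if it is at least twice the significant wave height (the mean of the highest third of waves). The data below is synthetic, not the researchers’ pipeline.

```python
# A toy illustration of the rogue-wave criterion: a wave counts as
# "rogue" if its height is at least twice the significant wave height
# (the mean of the highest third of waves). Synthetic Rayleigh-
# distributed heights; not the researchers' actual pipeline.
import numpy as np

rng = np.random.default_rng(0)
heights = rng.rayleigh(scale=1.5, size=1_000_000)  # wave heights, meters

n_top = len(heights) // 3
significant_height = np.sort(heights)[-n_top:].mean()

rogue = heights >= 2 * significant_height
print(f"significant wave height: {significant_height:.2f} m")
print(f"rogue waves: {rogue.sum()} of {len(heights):,} ({rogue.mean():.4%})")
```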

AI as a New-Age Oceanographer

The study stands out for its use of AI, particularly symbolic regression. Unlike traditional AI methods that offer single predictions, this approach yields an equation. It’s akin to Kepler deciphering planetary movements from Tycho Brahe’s astronomical data, but with AI analyzing waves.

The AI examined over a billion waves and formulated an equation, providing a “recipe” for rogue waves. This groundbreaking method offers a transparent algorithm, aligning with physics laws, and enhances human understanding beyond the typical AI black box.
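
To see what “yields an equation” means in practice, here’s a deliberately simple stand-in for symbolic regression in Python: score a handful of candidate formulas against data and keep the best fit. Real symbolic regression, including this study’s, searches a vastly larger space of expressions, and the “hidden law” below is invented.

```python
# A deliberately simple stand-in for symbolic regression: score a small
# set of candidate formulas against data and keep the best fit. Real
# symbolic regression searches a vastly larger expression space; the
# "hidden law" below is invented for the demo.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.1, 5, size=200)
y = 3 * x**2 + np.sin(x)  # the law we pretend not to know

candidates = {
    "a*x": lambda x, a: a * x,
    "a*x**2": lambda x, a: a * x**2,
    "a*x**2 + sin(x)": lambda x, a: a * x**2 + np.sin(x),
    "a*exp(x)": lambda x, a: a * np.exp(x),
}

best = None
for name, f in candidates.items():
    # Each candidate is linear in its coefficient a, so fit a by
    # least squares, then score the mean squared error.
    basis = f(x, 1.0) - f(x, 0.0)
    offset = f(x, 0.0)
    a = np.dot(basis, y - offset) / np.dot(basis, basis)
    mse = np.mean((f(x, a) - y) ** 2)
    if best is None or mse < best[2]:
        best = (name, a, mse)

print(f"best formula: {best[0]}  a={best[1]:.3f}  mse={best[2]:.5f}")
```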

Contrary to popular belief that rogue waves stem from energy-stealing wave combinations, this research points to “linear superposition” as the primary cause. Known since the 1700s, this phenomenon occurs when two wave systems intersect, amplifying each other momentarily.

The study’s data supports this long-standing theory, offering a new perspective on rogue wave formation.
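
Superposition itself is easy to demonstrate: the heights of overlapping wave systems simply add. In the Python toy below, a 2-meter swell and a 1.5-meter wind sea combine into moments approaching 3.5 meters, with all numbers chosen arbitrarily.

```python
# Linear superposition in miniature: when two wave systems cross, their
# surface elevations simply add, so coinciding crests can briefly pile
# up into an outsized wave. Amplitudes and periods are arbitrary.
import numpy as np

t = np.linspace(0, 600, 60_000)               # ten minutes of sea surface
swell = 2.0 * np.sin(2 * np.pi * t / 12.0)    # 2 m swell, 12 s period
wind_sea = 1.5 * np.sin(2 * np.pi * t / 7.0)  # 1.5 m wind sea, 7 s period

combined = swell + wind_sea
print(f"max of either system alone: {max(swell.max(), wind_sea.max()):.2f} m")
print(f"max of the combined sea:    {combined.max():.2f} m")  # near 3.5 m
```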

Towards Safer Maritime Journeys

This AI-driven algorithm is a boon for the shipping industry, constantly navigating potential dangers at sea. With approximately 50,000 cargo ships sailing globally, this tool enables route planning that accounts for the risk of rogue waves. Shipping companies can now use the algorithm for risk assessment and choose safer routes accordingly.

The research, algorithm, and utilized weather and wave data are publicly accessible. This openness allows entities like weather services and public authorities to calculate rogue wave probabilities easily. The study’s transparency in intermediate calculations sets it apart from typical AI models, enhancing our understanding of these oceanic phenomena.

The University of Copenhagen’s groundbreaking research, blending AI with oceanography, marks a significant advancement in our understanding of rogue waves. By transforming a massive wave database into a clear, physics-aligned equation, this study not only demystifies a long-standing maritime mystery but also paves the way for safer sea travels. The algorithm’s potential to predict these maritime monsters will be a crucial tool for the global shipping industry, heralding a new era of informed and safer ocean navigation.


Source: “AI finds formula on how to predict monster waves” — ScienceDaily

WTF Fun Fact 13625 – AI and Realistic Faces

Researchers at The Australian National University (ANU) have found that AI-generated faces now appear more realistic than those of actual humans. But that’s only true if the AI is generating the faces of white people.

This development raises crucial questions about AI’s influence on our perception of identity.

Training Bias in AI

This study reveals a concerning trend. People often see AI-generated white faces as more human than real ones. Yet, this isn’t the case for faces of people of color.

Dr. Amy Dawel attributes this to AI’s training bias. AI algorithms have been fed far more white faces than faces of any other group. This imbalance could deepen racial biases online. It’s especially troubling in professional settings, like headshot creation, where AI often alters the skin and eye colors of people of color, aligning them more with white features.

The Illusion of AI Realistic Faces

Elizabeth Miller, co-author of the study, highlights a critical issue. People don’t realize they’re being fooled by AI faces. This unawareness is alarming. Those who mistake AI faces for real ones are often the most confident in their judgment.

Although physical differences between AI and human faces exist, they’re often misinterpreted. People see AI’s proportionate features as human-like. Yet, AI technology is evolving rapidly. Soon, distinguishing AI from human faces could become even more challenging.

This trend could significantly impact misinformation spread and identity theft. Dr. Dawel calls for more transparency around AI.

Keeping AI open to researchers and the public is essential. It helps identify potential problems early. Public education about AI’s realism is also crucial. An informed public can be more skeptical about online images.

Public Awareness and Tools for Detection

As AI blurs the line between real and synthetic, new challenges emerge. We need tools to identify AI imposters accurately. Dr. Dawel suggests educating people about AI’s realism. Such knowledge could foster skepticism about online images. This approach might reduce risks associated with advanced AI.

ANU’s study marks a significant moment in AI development. AI’s ability to create faces now surpasses human perception in certain cases. The implications are vast, touching on identity and the potential for misuse.

As AI evolves, transparency, education, and technological solutions will be key. We must navigate these challenges collectively to ensure AI’s responsible and beneficial use.


Source: “AI faces look more real than actual human face” — ScienceDaily

WTF Fun Fact 13589 – A Voice Test for Diabetes

If you’re scared of needles, you might be interested to know that researchers are investigating a possible voice test for diabetes.

That’s right. A brief recording of your voice could indicate whether or not you have diabetes.

A Voice Test for Diabetes?

A program designed to use no more than 10 seconds of speech has proven capable of identifying the presence of diabetes with remarkable accuracy.

In an experiment conducted by Klick Labs, 267 individuals recorded a short phrase on their smartphones six times a day over a span of two weeks. This group had recently undergone testing for Type 2 diabetes. The aim? To discern any acoustic differences between the voices of those who tested positive and those who didn’t.

By analyzing the participants’ voice prints in conjunction with data like age, sex, height, and weight, an AI model made astonishing predictions. The accuracy rate stood at 86% for men and an even higher 89% for women.
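
For a sense of what such a model might look like, here’s a hedged Python sketch: a basic classifier over voice features plus demographics. Everything in it (the feature names, the data, the model choice) is a placeholder, not Klick Labs’ actual system.

```python
# A hedged sketch of the kind of model described: a classifier over
# voice features plus basic demographics. The data is synthetic and the
# feature names are placeholders; this is not Klick Labs' pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 267  # matches the study's participant count; the data does not

# Placeholder columns: [mean_pitch, pitch_sd, jitter, shimmer, age, bmi]
X = rng.normal(size=(n, 6))
# Fake labels loosely tied to the "jitter" column, just for the demo.
y = (X[:, 2] + 0.5 * rng.normal(size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"toy accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```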

Unraveling the Science Behind Voice Analysis

The question arises: Why does diabetes influence one’s voice? Producing the voice is a multifaceted process that integrates the respiratory system, the nervous system, and the larynx. Factors that impact any of these systems can, in turn, alter the voice. While such changes might escape the human ear, computers, with their advanced analytical capacities, can detect them with precision.

Among the vocal attributes studied, pitch and its variation proved to be the most predictive of diabetes. Interestingly, some vocal attributes only enhanced prediction accuracy for one gender. For instance, “perturbation jitter” was a key factor for women, whereas “amplitude perturbation quotient shimmer” was significant for men.
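
Jitter and shimmer have simple textbook definitions, sketched below in Python: jitter measures cycle-to-cycle variation in pitch period, and shimmer measures the same for amplitude. The input values are made up, and real analyses use more refined variants of these measures.

```python
# Simplified textbook versions of two features named in the study.
# Jitter: mean absolute change between consecutive pitch periods,
# relative to the mean period. Shimmer: the same idea applied to
# cycle-to-cycle amplitude. The input values are made up.
import numpy as np

periods = np.array([0.0081, 0.0083, 0.0080, 0.0084, 0.0082])  # seconds
amplitudes = np.array([0.61, 0.59, 0.63, 0.60, 0.62])         # arbitrary

jitter = np.mean(np.abs(np.diff(periods))) / periods.mean()
shimmer = np.mean(np.abs(np.diff(amplitudes))) / amplitudes.mean()

print(f"jitter:  {jitter:.2%}")
print(f"shimmer: {shimmer:.2%}")
```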

It’s worth noting that prolonged elevated blood sugar can impair peripheral nerves and muscle fibers, leading to voice disorders. Moreover, even temporary elevations in blood glucose can potentially influence vocal cord elasticity, though this theory still awaits validation. Furthermore, emotional factors, such as anxiety and depression—both of which can be associated with diabetes—may further modulate voice characteristics.

Beyond Conventional Diabetes Testing

Jaycee Kaufman, the leading author of the study, emphasized the transformative potential of their findings: “Voice technology can potentially revolutionize the way the medical community screens for diabetes. Traditional detection methods can be cumbersome, both in terms of time and cost. This technology could eliminate these challenges altogether.”

Considering the global surge in diabetes cases, and the complications arising from late diagnoses, the introduction of a non-invasive, rapid testing tool can be a game-changer. The International Diabetes Federation has highlighted that nearly 50% of adults with diabetes remain unaware of their condition. Predictably, this unawareness is most pronounced in nations where healthcare infrastructure is stretched thin. The economic implications are staggering, with undiagnosed diabetes projected to cost an exorbitant $2.1 trillion annually by 2030.

Voice technology, as an alternative to blood sample-based tests, presents a promising avenue for early detection and intervention.

A Healthier Future Using A Voice Test for Diabetes

Buoyed by the success of their study, Klick Labs is planning a larger-scale project. They aim not only to refine the accuracy of their model but also to expand its scope. Their vision extends beyond diabetes detection, as they explore its applicability to conditions like prediabetes and hypertension.

Yan Fossat, co-author of the study, expressed enthusiasm for the innovation: “Voice technology has the potential to usher in a new era in healthcare, positioning itself as a vital digital screening tool that’s both accessible and economical.”

As the study gains traction and the technology evolves, the implications for global health are profound. With the power of voice technology, a world where early, easy, and efficient disease detection is the norm, may not be too far off.


Source: “10 Seconds Of Recorded Speech Can Reveal If Someone Has Diabetes” — IFL Science

WTF Fun Fact 13536 – Digitizing Smell

In order to smell, our brains and noses have to work together, so the idea of digitizing smell seems pretty “out there.”

However, if you think about it, our noses are sensing molecules. Those molecules can be identified by a computer, and the smells humans associate with them can be cataloged. It’s not quite teaching a computer to smell on its own, but maybe it’s best we don’t give them too many human abilities.

The Enigma of Olfaction

While we’ve successfully translated light into sight and sound into hearing, decoding the intricate world of smell remains a challenge.

Olfaction, compared to our other senses, is mysterious, diverse, and deeply rooted in both emotion and memory. Knowing this, can we teach machines to interpret this elusive sense?

Digitizing Smell

A collaboration between the Monell Chemical Senses Center and the startup Osmo aimed to bridge the gap between airborne chemicals and our brain’s odor perception. Their objective was not just to understand the science of smell better but to make a machine proficient enough to describe, in human terms, what various chemicals smell like.

Osmo, with roots in Google’s advanced research division, embarked on creating a machine-learning model. The foundation of this model was an industry dataset, which detailed the molecular structures and scent profiles of 5,000 known odorants.

The idea? Feed the model a molecule’s shape and get a descriptive prediction of its smell.
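
As a rough illustration (and emphatically not the Monell/Osmo model, which works on much richer molecular representations and data), here’s a Python sketch of the general recipe: fingerprint a molecule’s structure, then predict odor descriptors with a multi-label classifier.

```python
# Emphatically NOT the Monell/Osmo model, which uses far richer
# molecular representations and data. A toy version of the recipe:
# fingerprint a molecule's structure, then predict odor labels with a
# multi-label classifier. Four molecules stand in for 5,000 odorants.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

data = [
    ("CCO",                "alcoholic"),  # ethanol
    ("CC(=O)OCC",          "fruity"),     # ethyl acetate
    ("O=Cc1ccc(O)c(OC)c1", "vanilla"),    # vanillin
    ("CC(=O)C",            "solvent"),    # acetone
]
labels = sorted({descriptor for _, descriptor in data})

def featurize(smiles, n_bits=1024):
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

X = np.array([featurize(s) for s, _ in data])
y = np.array([[int(d == label) for label in labels] for _, d in data])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = clf.predict(X[:1])  # "describe" ethanol (overfit toy demo)
print(dict(zip(labels, pred[0])))
```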

That might sound simple, but the team still had to validate the model’s accuracy.

The Litmus Test: Man vs. Machine

To validate the machine’s “sense of smell,” a unique test was devised.

A group of 15 panelists, trained rigorously using specialized odor kits, was tasked with describing 400 unique odors. The model then predicted descriptions for the same set.

Astonishingly, the machine’s predictions often matched or even outperformed individual human assessments, showcasing its unprecedented accuracy.

Machines That Can ‘Smell’ vs. Digitizing Smell

Beyond its core training, the model displayed unexpected capabilities. It accurately predicted odor strength, a feature it wasn’t explicitly trained for, and identified distinct molecules with surprisingly similar scents. This accomplishment suggests we’re inching closer to a world where machines can reliably “smell.”

But for now, that’s overstating it. The team has made a major leap towards digitizing smell. But machines don’t have senses. They can only replicate the kind of information our brains produce when we smell things. Of course, they don’t have any sense of enjoyment (or repulsion) at certain smells.

In any case, the Monell and Osmo collaboration has significantly advanced our journey in understanding and replicating the sense of smell. As we move forward, this research could revolutionize industries from perfumery to food and beyond.


Source: “A step closer to digitizing the sense of smell: Model describes odors better than human panelists” — Science Daily

WTF Fun Fact 13446 – Danish AI Political Party

The Synthetic Party is a Danish AI political party led by an AI chatbot named Leader Lars.

How does an AI political party work?

Denmark’s political landscape is making an intriguing pivot towards artificial intelligence. Leader Lars was brought to life by artist collective Computer Lars and the non-profit art and tech organization MindFuture Foundation. Is this a new era in political representation?

The Synthetic Party, established in May 2022, aspires to represent the values of the 20% of Danes who do not vote. This AI chatbot is not just a figurehead. It’s equipped with policies drawn from Danish fringe parties since 1970. And its human members are committed to executing these AI-derived platforms.

Why involve an AI in politics?

The Synthetic Party seeks to represent data from all fringe parties striving for a parliamentary seat. It’s a novel concept that allows individual political visions, usually limited by financial and logistical constraints, to gain representation. The unique aspect of this political approach is the interaction between citizens and Leader Lars on Discord, a platform where people can speak directly to the AI. This feature fosters a unique form of democratic engagement.

The party’s AI-led political approach raises questions about the viability and accountability of machine learning in government. For instance, can an AI truly grasp and represent human needs and values? How do we hold an AI accountable for its decisions? The Synthetic Party’s response to these questions lies in the transparency and auditability of AI decision-making processes.

Party policy

The Synthetic Party’s policies are bold, to say the least. From establishing a universal basic income of 100,000 Danish kroner per month (equivalent to $13,700, and over double the Danish average salary) to creating a jointly-owned internet and IT sector within the government, the party seeks to innovate and challenge the status quo.

Crucially, the Synthetic Party is not about putting a chatbot in charge. Instead, it’s about exploring the democratic potential of AI and machine learning. The party sees AI as a tool to amplify and understand a wide range of opinions, even if those opinions sometimes contradict each other.

In addition to offering fresh political perspectives, the Synthetic Party aims to raise awareness about the role of AI in our lives and the importance of AI accountability. For example, it advocates adding an 18th Sustainable Development Goal, focused on the relationship between humans and AI, to the United Nations’ SDGs.

The Synthetic Party seeks to promote a more democratic, accountable, and transparent use of AI in politics. The party needs 20,000 signatures to run in the November 2022 general election. If it gets those, it could introduce a novel form of political representation in Denmark, one that goes beyond a simple figurehead and instead uses AI as a tool for political change.


Source: “This Danish Political Party Is Led by an AI” — VICE

WTF Fun Fact 13242 – An AI Discovery

An AI discovery stunned literature enthusiasts in early 2023. An artificial intelligence (AI) technology being used to transcribe anonymous historic works at Spain’s National Library identified a play that was actually written by Félix Lope de Vega, one of Spain’s great playwrights.

How did the AI discovery come about?

According to CNN: “The National Library said on Tuesday that experts later confirmed that the Baroque playwright — one of the most prominent names of the Spanish Golden Age — wrote “La francesa Laura” (The Frenchwoman Laura) a few years before his death in 1635.”

The manuscript in the library’s archives is a copy. The original was never found and may have been destroyed.

Researchers from universities in Vienna and Valladolid used AI to digitize 1,300 anonymous manuscripts and books at the library. This allowed a machine to scan the text and transcribe it without requiring years of human labor.

The algorithm was also designed to compare traits of the previously anonymous plays to known plays in order to find similarities. And that’s precisely how La francesa Laura was identified as one of Félix Lope de Vega’s plays.

The National Library said the words used in the text were “closely aligned with Lope’s, and not with those of the other 350 playwrights who were part of the experiment.”
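
The underlying idea, comparing word usage in an anonymous text against known authors’ profiles, can be sketched in a few lines of Python. This toy version uses plain word frequencies and cosine similarity; the actual study’s features and methods were far more sophisticated, and the mini-corpora below are invented.

```python
# A toy version of the attribution idea: represent each author by the
# relative frequencies of common words, then check whose profile the
# anonymous text most resembles. The mini-corpora are invented; the
# real study used far richer features across 350 playwrights.
import math
from collections import Counter

def profile(text, vocab):
    counts = Counter(text.lower().split())
    total = sum(counts.values()) or 1
    return [counts[w] / total for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

known = {  # stand-ins for each playwright's known works
    "Lope de Vega": "de la que en y no con por amor el cielo de la que",
    "Calderon": "que si el de en un la ser sueno la vida es sueno",
}
anonymous = "de la que en y amor no el cielo con de la"

vocab = sorted(set(" ".join(known.values()).split()))
scores = {author: cosine(profile(text, vocab), profile(anonymous, vocab))
          for author, text in known.items()}
print(max(scores, key=scores.get), scores)
```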

According to CNN, “Experts then used traditional philological research resources to corroborate the findings.” In other words, they went through the known history of the author for hints that he wrote such a play.

A new classic

CNN summarized the play:

“The plot focuses around Laura, the daughter of the Duke of Brittany and wife to Count Arnaldo. The heir to the French throne is captivated by her, and although she rejects him, her jealous husband tries to poison her. Ultimately, Laura’s righteousness is proven and happiness is restored.”

The play will be published by the Gredos publishing house later in 2023.

Source: “AI reveals unknown play by one of Spain’s greatest writers in library archive” — CNN

WTF Fun Fact 13240 – ChatGPT Medical Exam

There’s been a lot of speculation and panic around what the AI tool ChatGPT can and can’t do and whether it’s going to replace/destroy us all. But it looks like it’s not going to be replacing doctors any time soon, even though it may be a semi-reliable source for those studying for the United States Medical Licensing Exam (USMLE).

Did ChatGPT pass a medical licensing exam?

In a word, no. ChatGPT did not “pass” a medical licensing exam, though there will probably be some sensational headlines to the contrary.

A study published February 9, 2023 in the open-access journal PLOS Digital Health by Tiffany Kung, Victor Tseng, and others at AnsibleHealth found that:

“ChatGPT can score at or around the approximately 60 percent passing threshold for the United States Medical Licensing Exam (USMLE), with responses that make coherent, internal sense and contain frequent insights…”

In other words, ChatGPT can generate human-like responses to certain types of questions on the exam.

According to a press release about the study:

“Kung and colleagues tested ChatGPT’s performance on the USMLE, a highly standardized and regulated series of three exams (Steps 1, 2CK, and 3) required for medical licensure in the United States. Taken by medical students and physicians-in-training, the USMLE assesses knowledge spanning most medical disciplines, ranging from biochemistry, to diagnostic reasoning, to bioethics.”

However, they had to remove all image-based questions, so the AI could only attempt 350 of the 376 publicly viewable questions that appeared on the June 2022 exam.

How well did AI do on the medical exam?

There’s a joke people tell that goes a little something like this:
What do you call the person who graduates first in their medical class?
Answer: Doctor.
What do you call the person who graduates last in their medical class?
Answer: Doctor.

In other words, you’re never going to know if your doctor aced their exam or barely passed.

But if ChatGPT is your doctor, you will know it’s incapable of doing really well on the exam (though in some cases, it did pass). After the researchers removed all “indeterminate responses,” “ChatGPT scored between 52.4% and 75.0% across the three USMLE exams. The passing threshold each year is approximately 60%.”

It did, however, produce novel and clinically significant insights the vast majority of the time. But if a model can’t reliably pass the exam, that doesn’t mean much.

What’s also interesting is that “ChatGPT exceeded the performance of PubMedGPT, a counterpart model trained exclusively on biomedical domain literature, which scored 50.8% on an older dataset of USMLE-style questions.”

But all that means is that ChatGPT is a really good AI language model. It doesn’t mean it’ll be replacing your doctor any time soon. And since it’s not always good at producing correct information, you may not want to use it to get medical advice.

What ChatGPT can be used for is helping to distill jargon-heavy medical writing into language that non-experts can understand. So if there’s a scientific study you’d like to know more about, you can plug that into ChatGPT and get a fairly reliable summary.

Source: “ChatGPT can (almost) pass the US Medical Licensing Exam” — EurekAlert

WTF Fun Fact 13231 – The Downsides of ChatGPT

ChatGPT may seem like the answer students, coders, and writers have been looking for, but it’s far from perfect. In fact, the generative AI’s output can be riddled with misinformation that’s likely to get you caught. And the downsides of ChatGPT are easy to discern, since you can just ask the chatbot itself.

What is ChatGPT?

ChatGPT is a chatbot built by OpenAI. Just go to chat.openai.com and see for yourself. You can ask it anything.

Of course, that doesn’t mean you’ll get the right answer. Sure, it may be the answer you want to hear. It may be grammatically correct. It may even seem elegantly stated. But look a little closer, and you’ll find it’s relatively simplistic and repetitive. And it can be downright incorrect and misleading.

What are the downsides of ChatGPT?

We went right to the source to find out how the AI might come up short. According to ChatGPT itself (when asked “what is one of the major drawbacks of chatgpt?”):

“One of the major drawbacks of ChatGPT is its lack of context awareness and the possibility of generating biased or incorrect responses.

Since ChatGPT is trained on a large corpus of text data from the internet, it may sometimes generate responses that reflect the biases and inaccuracies present in that data. This can lead to the generation of incorrect information or responses that perpetuate harmful stereotypes or beliefs.

Additionally, ChatGPT operates on a token-by-token basis, meaning that it doesn’t have an overall understanding of the context of a conversation. This can lead to responses that are not fully relevant or coherent, especially when the input is complex or requires a deep understanding of a particular subject.

To mitigate these issues, it is important to carefully evaluate and curate the training data used to train language models like ChatGPT, and to use appropriate post-processing techniques to ensure the quality and accuracy of the generated responses.”

We’re impressed the AI went the extra mile and suggested some more shortcomings!
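
The “token-by-token” point in that answer is worth seeing in miniature. The Python sketch below generates text one token at a time, each choice conditioned only on what came before; the “model” is a random bigram table, not GPT.

```python
# The "token-by-token" point in miniature: an autoregressive model
# repeatedly samples the next token given what it has produced so far.
# The "model" here is a random bigram table, not GPT.
import numpy as np

vocab = ["the", "model", "predicts", "next", "token", "."]
rng = np.random.default_rng(7)
# Fake next-token probabilities: row = current token, column = next.
table = rng.dirichlet(np.ones(len(vocab)), size=len(vocab))

tokens = [0]  # start from "the"
for _ in range(8):
    probs = table[tokens[-1]]  # condition only on the context so far
    tokens.append(int(rng.choice(len(vocab), p=probs)))

print(" ".join(vocab[t] for t in tokens))
```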

The future of ChatGPT

Yes, the chatbot is a viral sensation. And it absolutely has the ability to do some good and save some time. But it also has the ability to fill the Internet (and your homework assignments) with all sorts of confident-sounding information that’s factually incorrect. And the kicker is that it may someday be trained on the same incorrect websites it helped create.

Of course, there are some pretty ominous things ChatGPT can be used for. And to be fair, OpenAI has been…well, open about the downsides of ChatGPT. According to Bleeping Computer (cited below), “OpenAI has been upfront about the AI’s shortcomings, including its ability to ‘produce harmful instructions or biased content,’ and continues to fine-tune ChatGPT.”

Source: “OpenAI’s new ChatGPT bot: 10 dangerous things it’s capable of” — Bleeping Computer