WTF Fun Fact 13667 – AI Predicts Life Events

Artificial intelligence (AI) continues to push the boundaries of what we believe is possible. In fact, AI can now predict human life events.

A groundbreaking study recently revealed the potential of AI to forecast significant life occurrences with notable precision.

AI Predicts Life’s Complex Patterns

At the heart of this innovative research is a model known as “life2vec.” This transformative AI tool was trained on vast amounts of data about people’s lives, including residence, education, income, health, and work conditions. By employing ‘transformer models’ akin to the renowned ChatGPT, life2vec systematically organized this data to predict future events, up to and including a person’s time of death.

The researchers’ approach was to treat human life as a sequence of events, much like words in a sentence. This method allowed the AI to identify patterns and make predictions about future occurrences. Surprisingly, life2vec demonstrated a superior ability to predict outcomes such as personality traits and time of death compared to other advanced neural networks.
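
To picture the “words in a sentence” analogy, here is a minimal sketch in Python, not the actual life2vec code: invented life events become tokens, and a small transformer encoder (the same architecture family behind ChatGPT) turns the sequence into a prediction. Every event name and model size here is illustrative.

```python
import torch
import torch.nn as nn

# A minimal sketch of the idea, NOT the actual life2vec code: a life is
# a sequence of event tokens, processed by a transformer encoder.
# The event vocabulary, sizes, and prediction head are all invented.
vocab = {"<pad>": 0, "graduated": 1, "new_job": 2, "moved_city": 3,
         "salary_up": 4, "diagnosis": 5}

# One person's life, oldest event first, encoded as token ids.
life = torch.tensor([[vocab["graduated"], vocab["new_job"],
                      vocab["moved_city"], vocab["salary_up"]]])

embed = nn.Embedding(len(vocab), 32)  # event token -> 32-dim vector
layer = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
head = nn.Linear(32, 1)  # e.g., a risk score for some future outcome

x = encoder(embed(life))           # contextualized event representations
prediction = head(x.mean(dim=1))   # pool over the sequence, then predict
print(prediction.shape)            # torch.Size([1, 1])
```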

The Ethical Implications of Predictive AI

The promise of predictive AI in enhancing our understanding of life patterns is undeniable. But it also raises significant ethical questions. Issues around data protection, privacy, and potential biases inherent in the data are crucial considerations. Before such models can be applied in practical settings, like assessing individual disease risks or other significant life events, these ethical challenges must be thoroughly understood and addressed.

Looking ahead, the research team envisions incorporating various data forms into their model, such as text, images, or information about social connections. This expansion could revolutionize the interaction between social and health sciences, offering a more holistic view of human life and its potential trajectories.


Source: “Artificial intelligence can predict events in people’s lives” — ScienceDaily

WTF Fun Fact 13636 – AI and Rogue Waves

For centuries, sailors have whispered tales of monstrous rogue waves capable of splitting ships and damaging oil rigs. These maritime myths turned real with the 26-meter rogue wave documented at the Draupner oil platform in 1995.

Fast forward to 2023, and researchers at the University of Copenhagen and the University of Victoria have harnessed the power of artificial intelligence (AI) to predict these oceanic giants. They’ve developed a revolutionary formula using data from over a billion waves spanning 700 years, transforming maritime safety.

Decoding Rogue Waves: A Data-Driven Approach

The quest to understand rogue waves led researchers to explore vast ocean data. They focused on rogue waves, defined as waves at least twice the height of those around them, including extreme ones over 20 meters high. By analyzing data from buoys across the US and its territories, they amassed more than a billion wave records, equivalent to 700 years of ocean activity.

Using machine learning, the researchers crafted an algorithm to identify rogue wave causes. They discovered that rogue waves occur more frequently than imagined, with about one monster wave daily at random ocean locations. However, not all are the colossal 20-meter giants feared by mariners.

AI as a New-Age Oceanographer

The study stands out for its use of AI, particularly symbolic regression. Unlike traditional AI methods that offer single predictions, this approach yields an equation. It’s akin to Kepler deciphering planetary movements from Tycho Brahe’s astronomical data, but with AI analyzing waves.

The AI examined over a billion waves and formulated an equation, providing a “recipe” for rogue waves. This groundbreaking method offers a transparent algorithm, aligning with physics laws, and enhances human understanding beyond the typical AI black box.
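
To see what “yields an equation” means in practice, here is a toy Python sketch of symbolic regression. The wave variables, candidate formulas, and hidden “law” are all invented; the point is only that the output is a readable formula rather than a black-box score.

```python
import itertools
import numpy as np

# A toy illustration of symbolic regression, NOT the study's method or
# data: search a small space of candidate formulas and keep the one
# that best fits. The variables and the "hidden" law are invented.
rng = np.random.default_rng(0)
steepness = rng.uniform(0.01, 0.10, 200)   # hypothetical wave steepness
depth = rng.uniform(10.0, 100.0, 200)      # hypothetical water depth (m)
target = 3.0 * steepness + 0.01 * np.sqrt(depth)  # law to be recovered

candidates = {
    "a*steepness + b*sqrt(depth)": lambda a, b: a * steepness + b * np.sqrt(depth),
    "a*steepness + b*depth":       lambda a, b: a * steepness + b * depth,
    "a*steepness*depth":           lambda a, b: a * steepness * depth + 0 * b,
}

best = None
for name, formula in candidates.items():
    # Crude grid search for coefficients; real systems fit these properly.
    for a, b in itertools.product(np.linspace(0, 5, 51), np.linspace(0, 0.1, 51)):
        err = float(np.mean((formula(a, b) - target) ** 2))
        if best is None or err < best[0]:
            best = (err, name, a, b)

err, name, a, b = best
print(f"best formula: {name}  (a={a:.2f}, b={b:.3f}, MSE={err:.2e})")
```

Because the winning candidate is an explicit expression, a human can check it against known physics, which is exactly the transparency the study highlights.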

Contrary to popular belief that rogue waves stem from energy-stealing wave combinations, this research points to “linear superposition” as the primary cause. Known since the 1700s, this phenomenon occurs when two wave systems intersect, amplifying each other momentarily.

The study’s data supports this long-standing theory, offering a new perspective on rogue wave formation.
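
The phenomenon itself can be demonstrated in a few lines of Python; the two wave systems below are invented numbers, not the study’s data.

```python
import numpy as np

# Linear superposition in a few lines. When the crests of two
# independent wave systems happen to align, the combined surface is
# briefly far taller than either system on its own.
t = np.linspace(0, 60, 6001)                  # 60 seconds of ocean surface
swell = 2.0 * np.sin(2 * np.pi * t / 10.0)    # 2 m swell, 10 s period
wind_sea = 1.5 * np.sin(2 * np.pi * t / 7.0)  # 1.5 m wind sea, 7 s period
combined = swell + wind_sea                   # heights simply add

print(f"tallest single-system crest: {max(swell.max(), wind_sea.max()):.1f} m")
print(f"tallest combined crest:      {combined.max():.1f} m")  # ~3.5 m
```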

Towards Safer Maritime Journeys

This AI-driven algorithm is a boon for the shipping industry, constantly navigating potential dangers at sea. With approximately 50,000 cargo ships sailing globally, this tool enables route planning that accounts for the risk of rogue waves. Shipping companies can now use the algorithm for risk assessment and choose safer routes accordingly.

The research, algorithm, and utilized weather and wave data are publicly accessible. This openness allows entities like weather services and public authorities to calculate rogue wave probabilities easily. The study’s transparency in intermediate calculations sets it apart from typical AI models, enhancing our understanding of these oceanic phenomena.

The University of Copenhagen’s groundbreaking research, blending AI with oceanography, marks a significant advancement in our understanding of rogue waves. By transforming a massive wave database into a clear, physics-aligned equation, this study not only demystifies a long-standing maritime mystery but also paves the way for safer sea travels. The algorithm’s potential to predict these maritime monsters will be a crucial tool for the global shipping industry, heralding a new era of informed and safer ocean navigation.


Source: “AI finds formula on how to predict monster waves” — ScienceDaily

WTF Fun Fact 13625 – AI and Realistic Faces

Researchers at The Australian National University (ANU) have found that AI-generated faces now appear more realistic than actual human faces. But that’s only true if the AI is generating the faces of white people.

This development raises crucial questions about AI’s influence on our perception of identity.

Training Bias in AI

This study reveals a concerning trend. People often see AI-generated white faces as more human than real ones. Yet, this isn’t the case for faces of people of color.

Dr. Amy Dawel attributes this to AI’s training bias. AI algorithms have been fed far more white faces than faces of any other group. This imbalance could increase racial biases online. It’s especially troubling in professional settings, like headshot creation, where AI often alters the skin and eye colors of people of color, aligning them more closely with white features.

The Illusion of AI Realistic Faces

Elizabeth Miller, co-author of the study, highlights a critical issue. People don’t realize they’re being fooled by AI faces. This unawareness is alarming. Those who mistake AI faces for real ones are often the most confident in their judgment.

Although physical differences between AI and human faces exist, they’re often misinterpreted. People see AI’s proportionate features as human-like. Yet, AI technology is evolving rapidly. Soon, distinguishing AI from human faces could become even more challenging.

This trend could significantly impact misinformation spread and identity theft. Dr. Dawel calls for more transparency around AI.

Keeping AI open to researchers and the public is essential. It helps identify potential problems early. Public education about AI’s realism is also crucial. An informed public can be more skeptical about online images.

Public Awareness and Tools for Detection

As AI blurs the line between real and synthetic, new challenges emerge. We need tools to identify AI imposters accurately. Dr. Dawel suggests educating people about AI’s realism. Such knowledge could foster skepticism about online images. This approach might reduce risks associated with advanced AI.

ANU’s study marks a significant moment in AI development. AI’s ability to create faces now surpasses human perception in certain cases. The implications are vast, touching on identity and the potential for misuse.

As AI evolves, transparency, education, and technological solutions will be key. We must navigate these challenges collectively to ensure AI’s responsible and beneficial use.


Source: “AI faces look more real than actual human face” — ScienceDaily

WTF Fun Fact 13589 – A Voice Test for Diabetes

If you’re scared of needles, you might be interested to know that researchers are investigating a possible voice test for diabetes.

That’s right. A brief recording of your voice could indicate whether or not you have diabetes.

A Voice Test for Diabetes?

A program designed to use no more than 10 seconds of speech has proven capable of identifying the presence of diabetes with remarkable accuracy.

In an experiment conducted by Klick Labs, 267 individuals recorded a short phrase on their smartphones six times a day over a span of two weeks. This group had recently undergone testing for Type 2 diabetes. The aim? To discern any acoustic differences between the voices of those who tested positive and those who didn’t.

By analyzing the participants’ voice prints in conjunction with data like age, sex, height, and weight, an AI model made astonishing predictions. The accuracy rate stood at 86% for men and an even higher 89% for women.

Unraveling the Science Behind Voice Analysis

The question arises: Why does diabetes influence one’s voice? The synthesis of our voice is a multifaceted process that integrates the respiratory system, nervous system, and the larynx. Factors that impact any of these systems can, in turn, alter the voice. While such changes might escape the human ear, computers, with their advanced analytical capacities, can detect them with precision.

Among the vocal attributes studied, pitch and its variation proved to be the most predictive of diabetes. Interestingly, some vocal attributes only enhanced prediction accuracy for one gender. For instance, “perturbation jitter” was a key factor for women, whereas “amplitude perturbation quotient shimmer” was significant for men.
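
For a rough sense of what these features measure, here is a toy Python sketch computed from invented per-cycle voice measurements; real analyses extract them from recordings with dedicated speech-processing tools.

```python
import numpy as np

# Toy versions of the voice features named above, computed from invented
# per-cycle measurements. This only illustrates the definitions; it is
# not the study's feature-extraction pipeline.
periods = np.array([0.0080, 0.0082, 0.0079, 0.0081, 0.0083, 0.0080])  # s
amplitudes = np.array([0.61, 0.59, 0.62, 0.60, 0.58, 0.61])           # a.u.

# Pitch and its variation (the study's strongest predictors overall).
f0 = 1.0 / periods
print(f"mean pitch: {f0.mean():.1f} Hz, pitch SD: {f0.std():.1f} Hz")

# Local jitter: cycle-to-cycle variation in period, relative to the mean.
jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)

# Local shimmer: the same idea applied to cycle-to-cycle amplitude.
shimmer = np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

print(f"local jitter: {jitter:.4f}, local shimmer: {shimmer:.4f}")
```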

It’s worth noting that prolonged elevated blood sugar can impair peripheral nerves and muscle fibers, leading to voice disorders. Moreover, even temporary elevations in blood glucose can potentially influence vocal cord elasticity, though this theory still awaits validation. Furthermore, emotional factors, such as anxiety and depression—both of which can be associated with diabetes—may further modulate voice characteristics.

Beyond Conventional Diabetes Testing

Jaycee Kaufman, the lead author of the study, emphasized the transformative potential of their findings: “Voice technology can potentially revolutionize the way the medical community screens for diabetes. Traditional detection methods can be cumbersome, both in terms of time and cost. This technology could eliminate these challenges altogether.”

Considering the global surge in diabetes cases, and the complications arising from late diagnoses, the introduction of a non-invasive, rapid testing tool can be a game-changer. The International Diabetes Federation has highlighted that nearly 50% of adults with diabetes remain unaware of their condition. Predictably, this unawareness is most pronounced in nations where healthcare infrastructure is stretched thin. The economic implications are staggering, with undiagnosed diabetes projected to cost an exorbitant $2.1 trillion annually by 2030.

Voice technology, as an alternative to blood sample-based tests, presents a promising avenue for early detection and intervention.

A Healthier Future Using A Voice Test for Diabetes

Buoyed by the success of their study, Klick Labs is planning a larger-scale project. They aim not only to refine the accuracy of their model but also to expand its scope. Their vision extends beyond diabetes detection, as they explore its applicability to conditions like prediabetes and hypertension.

Yan Fossat, co-author of the study, expressed enthusiasm for the innovation: “Voice technology has the potential to usher in a new era in healthcare, positioning itself as a vital digital screening tool that’s both accessible and economical.”

As the study gains traction and the technology evolves, the implications for global health are profound. With the power of voice technology, a world where early, easy, and efficient disease detection is the norm may not be too far off.


Source: “10 Seconds Of Recorded Speech Can Reveal If Someone Has Diabetes” — IFL Science

WTF Fun Fact 13482 – GPT-3 Reasoning Skills

Research from UCLA psychologists has discovered a surprising new contender in our analogical reasoning battles – the artificial intelligence language model, GPT-3. Apparently, it holds its own against college undergraduates on reasoning problems typical of intelligence tests and the SAT.

But the research fails to answer a key question: Is GPT-3 merely parroting human reasoning, or has it stumbled onto a brand-new cognitive process? (And does this research say more about technology, college students, or intelligence tests?!)

Humans vs GPT-3

OpenAI holds GPT-3’s secrets under tight wraps, so they aren’t going to be much help in figuring out how the algorithm works its “magic.” Despite the mystery, the UCLA researchers found that GPT-3 outperformed their expectations on some tasks. Yet, other tasks saw it crash and burn.

Despite its ability to embarrass some college students, the study’s first author, Taylor Webb, emphasized GPT-3’s limitations. While it excels at analogical reasoning, it fails spectacularly at tasks simple for humans, like using tools to solve physical problems.

Webb and his colleagues tested GPT-3 on problems inspired by Raven’s Progressive Matrices. They translated the visual problems into text and gave the same problems to 40 UCLA undergraduate students.
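
For illustration, here is a hypothetical example, not the study’s actual materials, of how a visual matrix puzzle might be flattened into text a language model can read:

```python
# A hypothetical example, NOT the study's actual materials, of how a
# visual matrix-reasoning puzzle can be rendered as plain text for a
# language model: digits stand in for shapes, and the model must infer
# the rule and fill in the blank.
rows = [[1, 2, 3],
        [2, 3, 4],
        [3, 4, None]]  # each row and column increases by 1; answer: 5

prompt = "Complete the pattern:\n" + "\n".join(
    "[" + " ".join("?" if v is None else str(v) for v in row) + "]"
    for row in rows
)
print(prompt)
```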

Not only did GPT-3 perform as well as humans, but it also made similar mistakes.

What the Study Results Mean

GPT-3 solved 80% of the problems correctly, while the human average score was below 60%. The team then tested GPT-3 with SAT analogy questions they believed had never been on the internet (which would mean they weren’t part of the GPT training data). Again, GPT-3 outperformed the average college applicant’s score (then again, we know these tests aren’t really a measure of intelligence).

However, when the researchers tested the program against student volunteers on analogy problems based on short stories, GPT-3 struggled.

And tasks that require understanding physical space continue to baffle the so-called “artificial intelligence.”

“No matter how impressive our results, it’s important to emphasize that this system has major limitations,” said Taylor Webb, the study’s first author. “It can do analogical reasoning, but it can’t do things that are very easy for people, such as using tools to solve a physical task. When we gave it those sorts of problems — some of which children can solve quickly — the things it suggested were nonsensical.”


Source: “GPT-3 can reason about as well as a college student, psychologists report” — ScienceDaily

WTF Fun Fact 13446 – Danish AI Political Party

The Synthetic Party is a Danish AI political party led by an AI chatbot named Leader Lars.

How does an AI political party work?

Denmark’s political landscape is making an intriguing pivot towards artificial intelligence. Leader Lars was brought to life by artist collective Computer Lars and the non-profit art and tech organization MindFuture Foundation. Is this a new era in political representation?

The Synthetic Party, established in May 2022, aspires to represent the values of the 20% of Danes who do not vote. This AI chatbot is not just a figurehead. It’s equipped with policies drawn from Danish fringe parties since 1970. And its human members are committed to executing these AI-derived platforms.

Why involve an AI in politics?

The Synthetic Party seeks to represent data from all fringe parties striving for a parliamentary seat. It’s a novel concept that allows individual political visions, usually limited by financial and logistical constraints, to gain representation. The unique aspect of this political approach is the interaction between citizens and Leader Lars on Discord, a platform where people can speak directly to the AI. This feature fosters a unique form of democratic engagement.

The party’s AI-led political approach raises questions about the viability and accountability of machine learning in government. For instance, can an AI truly grasp and represent human needs and values? How do we hold an AI accountable for its decisions? The Synthetic Party’s response to these questions lies in the transparency and auditability of AI decision-making processes.

Party policy

The Synthetic Party’s policies are bold, to say the least. From establishing a universal basic income of 100,000 Danish kroner per month (equivalent to $13,700, and over double the Danish average salary) to creating a jointly-owned internet and IT sector within the government, the party seeks to innovate and challenge the status quo.

Crucially, the Synthetic Party is not about putting a chatbot in charge. Instead, it’s about exploring the democratic potential of AI and machine learning. The party sees AI as a tool to amplify and understand a wide range of opinions, even if those opinions sometimes contradict each other.

In addition to offering fresh political perspectives, the Synthetic Party aims to raise awareness about the role of AI in our lives and the importance of AI accountability. For example, it advocates for adding an 18th Sustainable Development Goal, focused on the relationship between humans and AI, to the United Nations’ SDGs.

The Synthetic Party seeks to promote a more democratic, accountable, and transparent use of AI in politics. The party needs 20,000 signatures to run in Denmark’s November 2022 general election. If it gets them, it could introduce a novel form of political representation in Denmark, one that goes beyond a simple figurehead and instead uses AI as a tool for political change.


Source: “This Danish Political Party Is Led by an AI” — VICE

WTF Fun Fact 13249 – ChatGPT Consequences for Creatives

ChatGPT is just a generative language model – a very fancy form of autocorrect, in some cases. And it doesn’t give answers that indicate it’s out to replace human writers. But we don’t yet know what the ChatGPT consequences for creatives are – and neither does the AI. When asked, ChatGPT spat out this answer:

“While AI-generated content may be able to produce work that is similar in style and content to that produced by humans, it is not a replacement for human creativity. There will always be a demand for unique and original human-generated content, and AI-generated content is unlikely to completely replace the work of human writers.”

The word “unlikely” is a bit eye-opening, but a machine can’t predict how humans may misuse it to replace other humans.

What are some of the possible consequences of ChatGPT for creatives?

ChatGPT is already impacting writers’ ability to make a living. And because it’s not a very good writer, it’s affecting people’s access to good writing.

ChatGPT produces the following argument in favor of itself:

“…there is potential for generative AI to be used in collaboration with human writers, rather than as a replacement. For example, an AI language model could be used to suggest ideas, provide inspiration, or even generate a first draft of a piece of writing, which could then be refined and edited by a human writer.”

It “suggests” that humans will learn to work around it and develop new business models to make room for everyone. Of course, it is quite frequently wrong about things.

“The toasters look like us now”

Plenty of writers are having fun with ChatGPT, however. Some have even written articles with its help while citing it as a resource. What a way to add to your word count!

And when pressed a bit on its ability to replace humans, it recites the same old line about the potential for collaboration before producing the following paragraph:

“Overall, it is important to approach the development of AI in creative fields with caution and a recognition of the potential risks and benefits. While AI has the potential to transform the creative process in many ways, it is important to find ways to ensure that it does not have a negative impact on the livelihoods of human artists and writers.”

Of course, that last part is up to humans.

Source: “ChatGPT isn’t putting me out of a job yet, but it’s very good fun” — TechCrunch (paywalled, sadly)

WTF Fun Fact 13242 – An AI Discovery

An AI discovery stunned literature enthusiasts in early 2023. An artificial intelligence (AI) technology being used to transcribe anonymous historic works at Spain’s National Library managed to identify a play that was actually written by Felix Lope de Vega, one of Spain’s great playwrights.

How did the AI discovery come about?

According to CNN: “The National Library said on Tuesday that experts later confirmed that the Baroque playwright — one of the most prominent names of the Spanish Golden Age — wrote “La francesa Laura” (The Frenchwoman Laura) a few years before his death in 1635.”

The manuscript in the library’s archives is a copy; no one knew there was an original, which may have been destroyed.

Researchers from universities in Vienna and Valladolid used AI to digitize 1,300 anonymous manuscripts and books at the library. This allowed a machine to scan the text and transcribe it without requiring years of human labor.

The algorithm was also designed to compare traits of the previously anonymous plays to known plays in order to find similarities. And that’s precisely how “La francesa Laura” was identified as one of Felix Lope de Vega’s plays.

The National Library said the words used in the text were “closely aligned with Lope’s, and not with those of the other 350 playwrights who were part of the experiment.”
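
That kind of word-usage comparison can be pictured with a toy stylometry sketch like the one below; the miniature corpora and the cosine-similarity measure are invented for illustration and are not the researchers’ actual pipeline.

```python
import numpy as np
from collections import Counter

# A toy stylometry sketch, unrelated to the researchers' real pipeline:
# compare the word-frequency profile of an anonymous text against known
# authors and report the closest match. All "corpora" here are invented.
def profile(text, vocab):
    counts = Counter(text.lower().split())
    vec = np.array([counts[w] for w in vocab], dtype=float)
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

known = {
    "Lope de Vega": "amor honor celos dama galan honor amor",
    "Calderon":     "sueno vida sombra ilusion vida sueno sombra",
}
anonymous = "amor celos honor dama amor galan"

vocab = sorted(set(" ".join(list(known.values()) + [anonymous]).split()))
anon_vec = profile(anonymous, vocab)
for author, corpus in known.items():
    similarity = float(profile(corpus, vocab) @ anon_vec)
    print(f"{author}: cosine similarity {similarity:.2f}")
```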

According to CNN, “Experts then used traditional philological research resources to corroborate the findings.” In other words, they went through the known history of the author for hints that he wrote such a play.

A new classic

CNN summarized the play:

“The plot focuses around Laura, the daughter of the Duke of Brittany and wife to Count Arnaldo. The heir to the French throne is captivated by her, and although she rejects him, her jealous husband tries to poison her. Ultimately, Laura’s righteousness is proven and happiness is restored.”

The play will be published by Gredos publishing house later in 2023.

Source: “AI reveals unknown play by one of Spain’s greatest writers in library archive” — CNN

WTF Fun Fact 13231 – The Downsides of ChatGPT

ChatGPT may seem like the answer students, coders, and writers have been looking for, but it’s far from perfect. In fact, the generative AI can be riddled with misinformation that’s likely to get you caught. And the downsides of ChatGPT are easy to discern since you can just ask the chatbot itself.

What is ChatGPT?

ChatGPT is a chatbot built by OpenAI. Just go to chat.openai.com and see for yourself. You can ask it anything.

Of course, that doesn’t mean you’ll get the right answer. Sure, it may be the answer you want to hear. It may be grammatically correct. It may even seem elegantly stated. But look a little closer, and you’ll find it’s relatively simplistic and repetitive. And it can be downright incorrect and misleading.

What are the downsides of ChatGPT?

We went right to the source to find out how the AI might come up short. According to ChatGPT itself (when asked “what is one of the major drawbacks of chatgpt?”):

“One of the major drawbacks of ChatGPT is its lack of context awareness and the possibility of generating biased or incorrect responses.

Since ChatGPT is trained on a large corpus of text data from the internet, it may sometimes generate responses that reflect the biases and inaccuracies present in that data. This can lead to the generation of incorrect information or responses that perpetuate harmful stereotypes or beliefs.

Additionally, ChatGPT operates on a token-by-token basis, meaning that it doesn’t have an overall understanding of the context of a conversation. This can lead to responses that are not fully relevant or coherent, especially when the input is complex or requires a deep understanding of a particular subject.

To mitigate these issues, it is important to carefully evaluate and curate the training data used to train language models like ChatGPT, and to use appropriate post-processing techniques to ensure the quality and accuracy of the generated responses.”

We’re impressed the AI went the extra mile and suggested some more shortcomings!
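
For anyone who wants to put the same question to the model programmatically, here is a minimal sketch assuming OpenAI’s official Python client (openai >= 1.0) and an API key set in the OPENAI_API_KEY environment variable; the model name is illustrative.

```python
# A minimal sketch of asking the same question through OpenAI's Python
# client (openai >= 1.0). Assumes OPENAI_API_KEY is set in the
# environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": "What is one of the major drawbacks of ChatGPT?"}],
)
print(response.choices[0].message.content)
```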

The future of ChatGPT

Yes, the chatbot is a viral sensation. And it absolutely has the ability to do some good and save some time. But it also has the ability to be used to fill the Internet (and your homework assignments) with all sorts of confident-sounding information that’s factually incorrect. And the kicker is that it may someday be trained on the same incorrect websites it helped create.

Of course, there are some pretty ominous things ChatGPT can be used for. And to be fair, OpenAI has been… well, open about the downsides of ChatGPT. According to Bleeping Computer (cited below), “OpenAI has been upfront about the AI’s shortcomings, including its ability to ‘produce harmful instructions or biased content,’ and continues to fine-tune ChatGPT.”

Source: “OpenAI’s new ChatGPT bot: 10 dangerous things it’s capable of” — Bleeping Computer