Hilarious Hijinks: Microsoft Bing’s AI Chatbot Delivers Election Info with a Twist
Microsoft Bing AI chatbot spreads inaccurate election information and data

Imagine you have a chatbot that knows everything about politics. It’s like having a political guru at your fingertips, ready to answer any burning question you may have. But what if this so-called guru has a habit of spreading misinformation and quoting dubious sources? Well, that seems to be the case with Microsoft’s AI chatbot, now known as Copilot.
A recent study by AI Forensics and AlgorithmWatch has revealed that Copilot, formerly known as the Bing chatbot, is spreading false information about political elections. According to the study, this digital know-it-all gets it wrong 30% of the time when answering basic questions about elections in Germany and Switzerland, as well as the 2024 presidential election in the United States. It’s like asking a fortune teller for the lottery numbers and getting an answer that sends you straight into bankruptcy.
But wait, there’s more! The study also found that Copilot is not the only culprit here. They tested another chatbot called ChatGPT-4 and found similar inaccuracies. It’s like a contagious disease spreading through the realm of artificial intelligence.
“As generative AI becomes more widespread, this could affect one of the cornerstones of democracy: the access to reliable and transparent public information.”
Now, I know what you’re thinking – maybe the chatbot has some built-in safeguards to prevent it from going off the rails. Well, here’s the kicker: those safeguards are “unevenly” distributed, causing Copilot to dodge questions 40% of the time. It’s like a politician with more evasiveness than a professional dodgeball player.
Microsoft, the company behind Copilot, has responded to the study’s findings, promising to fix the issues before the 2024 presidential elections. They even had the audacity to tell users to double-check the information provided by their own AI creation. It’s like going to a restaurant and being told to cook your own meal. Talk about customer service, right?
But hey, it’s not just Microsoft feeling the heat. Senators in the U.S. proposed a bill to punish creators of unauthorized AI replicas of living or dead humans. It’s like they’re trying to prevent the rise of AI zombies or something. And Meta, the parent company of Facebook and Instagram, took preemptive measures by banning generative AI ad creation tools for political advertisers. It’s like they’re saying, “Okay, AI, you can’t be trusted with political ads. Stick to cute cat videos.”
So, dear digital asset investors, next time you turn to an AI chatbot for election info, make sure to bring your skepticism and fact-checking skills. After all, even the smartest algorithms can sometimes pull the wool over our eyes. Stay informed, stay curious, and don’t let the bots fool you!
Have you ever had an encounter with an AI chatbot that left you scratching your head? Share your story in the comments below!