Ars Technica on 2024-03-16 01:17
Researchers use ASCII art to elicit harmful responses from 5 major AI chatbots
Related news
- Chatbots’ inaccurate, misleading responses about U.S. elections threaten to keep voters from polls (Seattle Times)
- AI chatbots provided harmful eating disorder content: report (thehill.com)
- Microsoft Investigates Harmful Chatbot Responses—The Latest Chatbot Blunder From Top AI Companies (Forbes)
- Two versions of the trolley problem elicit similar responses everywhere (Ars Technica)
- Jailbreaking LLMs with ASCII Art (schneier.com)
- On Chatbots (TechCrunch)
- AI chatbots ‘lack safeguards to prevent spread of health disinformation’ (Shropshire Star)
- Disinformation Researchers Raise Alarms About A.I. Chatbots (The New York Times)
- UNM researchers study PTSD responses, seek volunteers (KOB 4)
- Don’t Trust the Chatbots (The Atlantic)
- Why do chatbots suck? (TechCrunch)
- Are Chatbots Useful? (Forbes)
- Meta launches consumer AI chatbots with celebrity avatars in its social apps (Ars Technica)
- Google and Microsoft’s AI Chatbots Refuse to Say Who Won the 2020 US Election (Wired.com)
- DOOM in ASCII mode is equal parts fun and frustrating (thenextweb.com)
- Baidu teases Ernie 4 large language model after China approves ChatGPT-style chatbots for public use (scmp.com)
- Chatbots’ inaccurate, misleading responses about U.S. elections threaten to keep voters from polls (Star Tribune)