
Engadget on 2024-05-20 15:39

UK's AI Safety Institute easily jailbreaks major LLMs

Researchers found that major LLMs are easy to jailbreak and can be induced to produce harmful outputs.
