Survey reveals mass concern over generative AI security risks

A new Malwarebytes survey has found that 81% of respondents are concerned about the security risks posed by ChatGPT and generative AI. The cybersecurity vendor collected 1,449 responses to a survey conducted in late May. Of those polled, 63% said they do not trust the information ChatGPT produces, and 51% questioned whether AI tools can improve internet safety at all. What’s more, 52% want ChatGPT development paused so regulations can catch up, and just 7% agreed that ChatGPT and other AI tools will improve internet safety.

In March, a raft of tech luminaries signed a letter calling for all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months, to allow time to “jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.” The letter cited the “profound risks” posed by “AI systems with human-competitive intelligence.”