AI safety: I'm a cybersecurity expert and here's why using AI chatbots can be risky
- Concerns over how AI chatbots protect users’ data.
- Microsoft’s Recall feature faced backlash over security flaws.
- OpenAI, maker of ChatGPT, launches new safety and security committee.
A cybersecurity expert has warned that using AI chatbots can present “quite a risk”.
Microsoft’s disastrous unveiling of its Recall feature for laptops has once again raised concerns about the safety of artificial intelligence (AI) products. Tech giants - from Apple to Google - have gone all-in on the AI gold rush in 2024.
Inquisitive users may have already dabbled in the world of generative AI tools such as ChatGPT, Midjourney or Bard (later rebranded as Gemini). Chatbots in particular have grown in popularity - and Microsoft shone the spotlight on its Copilot+ and Recall features at the recent Surface event.
Unfortunately for the Bill Gates-founded company, the Recall announcement in particular went down like a lead balloon. It didn’t help when it was quickly revealed that hackers could easily access and steal all the data stored by the AI tool, prompting Microsoft to backtrack on having the feature enabled by default on eligible devices.
Recall’s botched unveiling has renewed concerns about the safety of AI chatbots, particularly when it comes to protecting users’ information. Keith Martin, Professor of Information Security at Royal Holloway, University of London, warned that such tools “present quite a risk”.
He explained: “We don’t really have current norms and laws around this technology. There’s a real risk that people submit data to these tools that they shouldn’t be releasing. When a new technology comes out and society hasn't decided how we are going to use this technology, it does pose risks.”
John Sun, founder and CEO of Spring Labs, writing for Forbes, said: “The fears some executives have around data security and generative AI are far from unfounded. Information employees give AI tools, including customer data and intellectual property, could become the property of the companies behind these tools.”
Responding to security concerns, Perplexity AI Chief Strategy Officer Johnny Ho told us: "Data privacy and security are core components of building user trust, which is why we adhere to industry best practices for data privacy and account security. To learn more, you can check out our Privacy Policy and Trust Center, where we've outlined these best practices in more detail."
OpenAI, the company behind ChatGPT, launched a new safety and security committee last month - though it features three members of the company’s own board, including CEO Sam Altman. On its website the company explained: “While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment.
“A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI’s processes and safeguards. Additionally, OpenAI will retain and consult with other safety, security, and technical experts to support this work, including former cybersecurity officials.”
If you want to learn more about the concerns surrounding AI, computer scientist Sasha Luccioni has a fantastic TED Talk called AI Is Dangerous, but Not for the Reasons You Think. It is just 10 minutes long and can be watched on YouTube right now.