Staff using chatbots are risking sensitive data, warns AI founder

Businesses should be cautious about employees using generative AI tools like ChatGPT due to the risk of data leaks, according to the founder of a cybersecurity AI startup.
Speaking to UKTN, James Moore, founder of CultureAI, said employees are putting sensitive company information into AI chatbots, putting that information at risk.
ChatGPT, Gemini and other AI chatbots have quickly become a staple of office work for many employees looking to improve their efficiency.
Some use these tools to review and improve code, others to organise sales data or analyse business strategy.
Moore said that this presents a serious risk to a company, due in part to the limited transparency around data storage by generative AI platforms.
Moore gave the example of a fintech employee who put thousands of customer records containing personal and financial information into ChatGPT.
“The risk there primarily is you don’t then know how ChatGPT is storing that data,” the CultureAI founder said.
“You don’t know if they’re storing that data – they probably are – and finally, whether or not they’re going to use that for training purposes.”
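One common safeguard against exactly this scenario is to strip obvious identifiers from text before it ever reaches a chatbot. The Python sketch below is a minimal illustration of that idea, not production-grade data loss prevention; the regex patterns and the sample record are invented for the example.

```python
import re

# Toy patterns for common identifiers. Real data-loss-prevention
# tooling uses far more robust detection (checksum validation,
# NER models, context rules); these regexes are illustrative only.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"(?<!\d)(?:\d[ -]?){13,16}(?!\d)"),
    "PHONE": re.compile(r"(?<!\w)(?:\+44|0)\d{9,10}"),
}

def redact(text: str) -> str:
    """Replace anything matching a known identifier pattern
    with a placeholder token before the text leaves the company."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Hypothetical customer record of the kind Moore describes.
record = "Jane Doe, jane.doe@example.com, card 4111 1111 1111 1111"
print(redact(record))  # Jane Doe, [EMAIL], card [CARD]
```

Even a crude filter like this narrows what can leak through a pasted prompt, though it does nothing about names, addresses or free-text context.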
CultureAI is a Manchester-based startup that has developed software tools for enterprises to manage cyber risks stemming from employee behaviour.
Moore is not alone in this view. Google parent company Alphabet has previously warned staff not to enter confidential information into chatbots – including its own large language model, Gemini, formerly Bard.
The issue of what data should be allowed to train large language models is the subject of fierce debate. A number of ongoing lawsuits – including the New York Times case against OpenAI – are likely to inform the practices of AI firms down the line.
In cases of staff using ChatGPT, however, Moore warned that OpenAI is free to use data that users willingly hand over – and because employees often do so through personal accounts, their employer has no say in the matter.
“There’s a real risk there from the two standpoints,” Moore said. “ChatGPT, or any of the other platforms out there could get breached and that data then ends up in the wild.”
Last April, OpenAI gave ChatGPT users the ability to turn off chat history, which means those conversations won’t be used to train and improve the model. With history disabled, OpenAI says chats are deleted after 30 days. OpenAI has also launched an enterprise subscription that comes with additional privacy and security tools.
But the other risk, according to Moore, is the chance that the platform will “reveal that data to a third party that then requests it.”
Moore said that through manipulatively worded prompts, AI models can essentially be tricked into sharing confidential information.
“We’ve seen cases where organisations have had their private data uploaded and then you can say, tell me a story about such and such company’s approach to developing such and such software,” said Moore. “What would that code look like? And the AI platform can spit that out.”
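What Moore describes is essentially an extraction attack against a model that has absorbed private uploads. As a toy illustration of one output-side control a security team might add, the Python sketch below scans a chatbot response for known confidential markers before it is shown to the user; the marker names are hypothetical, and this is not a description of CultureAI's product.

```python
# Hypothetical list of confidential markers a security team might
# maintain: project codenames, secret identifiers, repo names.
CONFIDENTIAL_MARKERS = [
    "project-apollo",      # invented internal codename
    "acme_signing_key",    # invented secret identifier
]

def flags_confidential(text: str) -> list[str]:
    """Return every known marker found in the text, matched
    case-insensitively. Literal matching only; a real control
    would also need to catch paraphrases and partial leaks."""
    lowered = text.lower()
    return [m for m in CONFIDENTIAL_MARKERS if m in lowered]

response = (
    "Sure! Project-Apollo builds are signed with ACME_SIGNING_KEY "
    "before release..."
)
hits = flags_confidential(response)
if hits:
    print(f"Blocked chatbot response: references {hits}")
```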
Read more: Most generative AI models likely ‘illegal’, says former Stability VP