(CTN News) – The federal government is currently advising about a cyber security threat involving ChatGPT, stating that information-stealing malware (Raccoon, Vidar, RedLine) has compromised approximately 100,000 ChatGPT accounts, with the stolen credentials appearing on the dark web.
According to the advisory, the report about the breach also highlights one of the major challenges facing AI-driven projects, including ChatGPT: the growing sophistication of cyber-attacks.
The government has recommended precautionary measures and cautious use of ChatGPT at both the organizational and individual levels.
ChatGPT and other AI-powered APIs are being integrated into operational flows and information systems worldwide. The breached ChatGPT accounts demonstrate the significance of AI-powered tools as well as the cyber risks associated with storing conversations.
The breach of a user account may expose proprietary information, areas of interest/research, internal operational/business strategies, personal communications, and software code.
Users should take precautions by not entering sensitive data into ChatGPT. Specifically: (1) Disable the chat-saving feature, or delete saved conversations manually as soon as possible. (2) Use a malware-free/screened system for ChatGPT. (3) Users handling extremely sensitive data should avoid ChatGPT and other AI-powered tools and APIs, particularly on systems that may be infected with information-stealer malware. When use is absolutely necessary, dummy data or masking of critical information may be used.
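The masking suggested in point (3) can be sketched as a simple redaction pass before any text leaves the organization. The patterns and function name below are illustrative only, not an exhaustive or production-grade filter:

```python
import re

# Illustrative patterns; a real deployment would cover many more data types.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def mask_sensitive(text: str) -> str:
    """Replace sensitive substrings with placeholder tokens before
    the text is sent to a chatbot or any external API."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_sensitive("Contact alice@example.com, key sk-abcdefghijklmnopqrstuv"))
# → Contact [EMAIL], key [API_KEY]
```

A masking step like this keeps the structure of a prompt intact while ensuring that credentials and personal identifiers never reach the chatbot or its saved conversation history.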
By following best practices and taking precautionary measures, organizations can ensure ChatGPT is used securely and their data is protected.
Furthermore, AI technology is constantly evolving, so keeping up to date with the latest security trends is key to protection. Best practices include (but are not limited to):
(1) Perform a Risk Assessment: Conduct a comprehensive risk assessment before using ChatGPT to identify potentially exploitable vulnerabilities, and use the results to develop a plan to mitigate risks and protect data.
(2) Use Secure Channels: Communicate with ChatGPT over secure channels to prevent unauthorized access. This includes secure APIs and encrypted communication channels.
(3) Monitor Access: It is important to monitor who has access to ChatGPT. Restrict access to authorized individuals only, enforce strong access controls, and review access logs regularly.
(4) Implement Zero-Trust Security: Adopt a zero-trust security strategy that treats every device on the network as a potential threat. Use strong authentication mechanisms and grant access to resources on a need-to-know basis.
(5) Train Employees: Train employees on using ChatGPT and the risks associated with it, so that they are aware of the threat of social engineering attacks and do not share sensitive data with chatbots.
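As a minimal illustration of point (2), the sketch below (Python standard library only; the endpoint URL and environment-variable name are hypothetical, not the real ChatGPT API) reads the API key from the environment rather than hard-coding it, and refuses to send data over anything but HTTPS:

```python
import os
import urllib.request

# Illustrative endpoint; not the real ChatGPT API URL.
API_URL = "https://api.example.com/v1/chat"

def build_request(prompt: str) -> urllib.request.Request:
    """Build an HTTPS request whose API key comes from the environment,
    so credentials are never hard-coded or written into saved chats."""
    api_key = os.environ.get("CHAT_API_KEY")  # hypothetical variable name
    if not api_key:
        raise RuntimeError("CHAT_API_KEY is not set")
    if not API_URL.startswith("https://"):
        raise RuntimeError("refusing to send data over an unencrypted channel")
    return urllib.request.Request(
        API_URL,
        data=prompt.encode("utf-8"),
        headers={"Authorization": f"Bearer {api_key}"},
        method="POST",
    )
```

Keeping the key in the environment (or a secrets manager) and enforcing an HTTPS-only policy at the client addresses two of the most common leak paths: credentials copied into source code, and prompts transmitted in the clear.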