
ChatGPT: 6 risks financial institutions should know

Risks need to be carefully considered and managed to ensure the safe and responsible use of ChatGPT in financial institutions.


As a language model, ChatGPT can be a useful tool for financial institutions to enhance customer service and improve efficiency in various tasks. However, there are also potential risks associated with its use, including privacy concerns, accuracy issues, and the possibility of errors in decision-making processes. These risks need to be carefully considered and managed to ensure the safe and responsible use of ChatGPT in financial institutions.

In a recent blog post, CTM360 shared insights into some of the risks associated with integrating ChatGPT.

Internal risks of using ChatGPT:

  • Data Exposure: Using ChatGPT in the workplace poses a risk of inadvertently exposing sensitive data, such as confidential financial information or proprietary code, which could lead to privacy or security breaches (see the redaction sketch after this list).
  • Misinformation: ChatGPT may generate inaccurate responses due to limitations in its programming and training data. Because its training data only extends through 2021, it may also return outdated information.
  • Technology Dependency: Excessive reliance on technology could lead to overlooking human judgment and intuition, highlighting the importance of maintaining a balance between technology and human expertise for financial professionals.
  • Privacy Concerns: The collection of personal data by ChatGPT to train and improve the AI model can pose a significant risk to individuals and organisations if the information is exposed or used maliciously.
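
On the data-exposure point above, one common mitigation is to redact obviously sensitive patterns before any text leaves the institution. The sketch below is a minimal, hypothetical Python example; the patterns and the redact helper are illustrative assumptions, not a complete data-loss-prevention solution.

```python
import re

# Hypothetical, illustrative patterns only -- a real deployment would rely on a
# vetted DLP tool and patterns matched to the institution's own data.
SENSITIVE_PATTERNS = {
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labelled placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Customer john.doe@example.com paid with card 4111 1111 1111 1111."
print(redact(prompt))
# -> Customer [EMAIL REDACTED] paid with card [CARD_NUMBER REDACTED].
```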

External risks of using ChatGPT:

  • Social Engineering: Cybercriminals can use ChatGPT to create convincing, highly personalised phishing emails that impersonate individuals or organisations, making such attacks harder for victims to detect and more likely to succeed.
  • Creating malicious scripts and malware: Cybercriminals can use ChatGPT to help develop malware strains that are difficult to detect, bypassing traditional security defences through polymorphic techniques such as encryption and obfuscation that dynamically alter the malware’s code and behaviour.