Since the recent introduction of Chat Generative Pre-trained Transformer (ChatGPT) and other artificial intelligence (AI) models, professionals across various industries have embraced them to enhance efficiency and productivity and to save time on tasks such as research, email rephrasing and creating in-house presentations. However, this reliance on AI models, coupled with feeding real-time enterprise data into Internet-connected or dark web-linked AI models, poses a significant threat to data security.
Cybercriminals are capitalizing on the availability of AI tools such as ChatGPT to create adversarial variants with no ethical safeguards. FraudGPT exemplifies this trend, enabling novice actors to launch sophisticated phishing and business email compromise (BEC) attacks at scale.1 Cybercriminals have also used this technology to create convincing deepfakes of celebrities and to spread political misinformation.2 In one 2019 case, attackers used an AI-generated voice deepfake to trick a UK-based energy firm into transferring €220,000 to a Hungarian bank account.3 In addition, although password-cracking algorithms have long existed, cybercriminals can now use machine learning (ML) and AI to analyze large password data sets and generate likely password variations.
AI algorithms can also automate and enhance the distribution of ransomware, which encrypts critical business data and demands a ransom for the decryption keys. These algorithms can selectively target an enterprise's most valuable assets, increasing the potential payout for cybercriminals. In addition, BEC attacks target enterprises in an attempt to steal money or critical information. AI algorithms can analyze communication patterns and generate convincing phishing emails that impersonate high-level executives or business partners, tricking employees into performing unauthorized actions such as initiating fraudulent transactions or disclosing sensitive information.
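Enterprises typically counter such impersonation attempts with a combination of employee awareness and automated email checks. As a purely illustrative example, the following Python sketch shows a hypothetical gateway rule that flags messages whose display name matches a known executive but whose sending domain is not the enterprise's own; the directory, domain and function names are assumptions for illustration, not a reference to any specific product.

```python
# Hypothetical directory mapping executive display names to the enterprise domain.
KNOWN_EXECUTIVES = {
    "jane doe": "example.com",
    "john smith": "example.com",
}

def looks_like_impersonation(display_name: str, from_address: str) -> bool:
    """Flag a message if the sender name matches an executive but the domain does not."""
    name = display_name.strip().lower()
    domain = from_address.rsplit("@", 1)[-1].lower()
    expected_domain = KNOWN_EXECUTIVES.get(name)
    return expected_domain is not None and domain != expected_domain

# A message claiming to be from "Jane Doe" but sent from a look-alike domain is flagged.
print(looks_like_impersonation("Jane Doe", "jane.doe@examp1e-corp.com"))  # True
```

On its own, a rule such as this is easily evaded, which is why it is normally layered with ML-based filtering and the employee training discussed later in this article.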
Although AI can save time and effort, many users remain unaware of the risk it poses to an enterprise’s sensitive data. Leaked or compromised real-time data can be exploited or sold to competitors at exorbitant prices, leading to substantial penalties, lawsuits, loss of investor confidence and long-lasting reputational damage. Both the European Union and the United States enforce strict data security requirements and may restrict the use of AI until privacy regulations are formulated and adhered to.
Figure 1 lists some data security concerns and mitigation strategies when utilizing AI technologies.
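As one concrete illustration of such a mitigation, the following minimal Python sketch masks likely sensitive values (email addresses and payment card numbers) in a prompt before it leaves the enterprise boundary for an external AI model. The regular expression patterns and function name are simplifying assumptions; production environments would instead rely on vetted data loss prevention (DLP) tooling governed by enterprise policy.

```python
import re

# Assumed patterns for two common categories of sensitive data; real deployments
# would use far more comprehensive, policy-driven detection.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Mask likely sensitive values before the prompt is sent to an external AI model."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Summarize this complaint from jane.doe@example.com about card 4111 1111 1111 1111."))
# -> "Summarize this complaint from [EMAIL REDACTED] about card [CARD REDACTED]."
```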
The evolving AI threat landscape offers both challenges and opportunities for information security professionals. Backed by data and research, AI models are becoming integral to cybersecurity, serving defenders and attackers alike. Staying informed, embracing AI for defense and advocating ethical practices can help navigate this dynamic landscape, protect against emerging threats and maintain the positive impact of AI models in the digital realm.
Incorporating mitigation strategies during AI model development and deployment enhances data security, fosters stakeholder trust and reduces the risk of data breaches and privacy violations. A comprehensive approach involving data governance, risk assessment and security measures such as regular audits, encryption, access controls and employee training in data handling is vital to safeguard data confidentiality while harnessing the advantages of AI models. These efforts not only bolster overall security but also provide a competitive edge in the market.
Endnotes
1 Sabreena, K.; “FraudGPT: The Alarming Rise of AI-Powered Cybercrime Tools,” Analytics Vidhya, 28 July 2023, http://www.analyticsvidhya.com/blog/2023/07/fraudgpt-the-alarming-rise-of-ai-powered-cybercrime-tools/
2 Damiani, J.; “A Voice Deepfake Was Used to Scam a CEO Out of $243,000,” Forbes, 3 September 2019, http://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/?sh=1dd777e32241
3 Stupp, C.; “Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case,” The Wall Street Journal, 30 August 2019, http://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402
SHARAD VERMA | CISA, CRISC
Is a senior manager at EY Global Delivery Services, specializing in business consulting with a focus on third-party cyberrisk management for global clients. With a diverse background spanning more than 14 years, he has expertise in cybersecurity, IT process consultancy, IT audits and business continuity management across various domains. He has authored two white papers on the Payment Card Industry Data Security Standard (PCI DSS) and has published two thought leadership articles on PCI DSS in the ISACA® Journal.