Researching the Potentially Harmful Impact of Generative AI on Cybersecurity
By Emilia Chiscop-Head
Pratt graduate students and industry partners researched how ChatGPT and other AI models can make cyber-attacks easier.
Students from the Pratt School of Engineering, industry partners, professors, and a high schooler researched the potentially harmful impact of Generative Artificial Intelligence (GAI) on cybersecurity. Since the summer of 2023, this Duke team and their corporate partners from Coalfire and SafeBreach have analyzed how malicious agents can leverage GAI to create potent and sophisticated cyber-attacks. Based on their findings, the authors recommend that organizations build more layered defensive strategies to counteract the sophisticated use of GAI in cyber-attacks. “This research project is all about Duke’s mission to inspire the next generation, suggest solutions to the world’s biggest challenges, and become a key driver of innovation,” said Michael Roman, Ph.D., a corresponding author on the research paper, which was initiated by Arturo Ehuan, Executive Director of the ME in Cybersecurity Program and the Duke CISO Executive Certificate Program.
A cross-disciplinary team
The research team included Shivani Metta, a Duke Cybersecurity student who graduated last December; Jack Parker, a 2024 candidate in the Duke MS in Electrical and Computer Engineering; Professors Ehuan and Roman; three representatives of Coalfire (Adam Kerns, Pete Deros and Priyadharshini Parthasarathy); and Itzik Kotler, Co-Founder & Chief Technical Officer of SafeBreach. Professor Ehuan also included Isaac Chang, a Lynbrook High School senior who emailed him this past summer to inquire about an internship. “I wanted to allow him to come on board, seeing his enthusiasm and knowing that these opportunities are inspirational and life-changing,” said Ehuan, who firmly believes in Duke’s mission to be inclusive and to inspire others.
Exploring the good, the bad, and the ugly of Generative AI
The team studied advanced GAI models, such as Generative Pre-trained Transformers (GPT), and other Large Language Models (LLMs) like OpenAI’s ChatGPT and Google’s BERT. They showed that these cutting-edge technologies “ushered in innovative data processing and automation opportunities and introduced significant cybersecurity challenges. As GAI rapidly progresses, it outstrips the current pace of cybersecurity protocols and regulatory frameworks, leading to a paradox wherein the innovations meant to safeguard digital infrastructures also enhance the arsenal available to cyber criminals. These adversaries, adept at swiftly integrating and exploiting emerging technologies, may utilize GAI to develop more covert and adaptable malware, thus complicating traditional cybersecurity efforts. The acceleration of GAI presents an ambiguous frontier for cybersecurity experts, offering potent tools for threat detection and response while concurrently providing cyber attackers with the means to engineer more intricate and potent malware,” the students wrote in the paper’s abstract.
Students’ passions and goals inspired the research
The students shared how this paper was shaped and inspired by their aspirations and goals for the future. “My curiosity about the impact of AI on cybersecurity has been piqued ever since the release of ChatGPT, which fueled my decision to enroll in the Advanced Machine Learning course led by Professor Michael Roman in the fall of 2023. During a conversation about my career aspirations, Professor Arturo Ehuan offered me the chance to participate in a research project that aligned perfectly with my interests and academic pursuits. It presented a valuable chance to explore AI’s broader implications beyond the classroom,” said Shivani Metta, a Duke Cybersecurity December ‘23 graduate whose contribution to this research will serve as her portfolio for her next career step: “After finishing this research project, I am focused on securing a data security analyst position in the industry, which will allow me to further refine the skills I cultivated while studying at Duke University. I aspire to deepen my understanding of various facets of cybersecurity,” added Shivani.
The industry partners helped test how hackers can use chatbots
The team studied the evolution of artificial intelligence and its impact on industry, then analyzed how GAI can be used to facilitate malicious activities, an analysis for which the partnership with the two corporate partners was essential. “I helped investigate the section on how Generative AI is used in the attack narratives of malware development. Even with hands-on malicious tools such as WormGPT, FraudGPT, etc., we can use chatbots to write malicious code in snippets and disguise it under the notion of learning to evade the rules set by the models,” said Priyadharshini Parthasarathy, senior consultant for Coalfire. Itzik Kotler, Co-Founder and Chief Technical Officer of SafeBreach, helped with the offensive security aspects of the paper, including how to abuse and potentially circumvent ChatGPT’s safety and ethical mechanisms. The industry partners shared that this was an exciting experience. “I joined the research group due to my fervent passion for AI and its responsible development and protection,” said the other Coalfire representative, Adam Kerns, managing principal for offensive security. The corporate partners performed data analysis and participated in all phases, from meetings to literature reviews and manuscript writing and review. “My decision to join this team was driven by a desire to delve deeper into the ethical and technical aspects of cybersecurity within the context of AI, ultimately aiming to contribute to advancing these technologies responsibly and securely,” added Adam.
Lessons learned: Taking an alarmist stance towards AI is not the answer
After completing this project, the students shared that they learned a lot and hope to continue answering essential questions about GAI’s misuse in cybersecurity. “One key takeaway for me is the understanding that there might not always be a definitive correct answer when it comes to AI and cybersecurity laws, as they are still in a phase of ongoing development. This is partly because these technologies evolve faster than regulations, placing the responsibility on industry professionals to help narrow this disparity and contemplate how to govern these technologies more effectively,” said Shivani. Jack Parker was surprised to discover the considerable asymmetry (in funding and number of researchers) between efforts to push the frontier of AI capabilities and efforts to ensure that AI is safe, secure, and beneficial. For Isaac, the main lesson was learning to think outside the box to understand the impact of GAI on the industry. “Through the guidance of the mentor team and brainstorming with my project coworkers, I have learned to adopt a forward-thinking mindset while researching the applications of generative AI on cybersecurity.” The project made the industry partners realize that this topic requires careful analysis and that definitive answers cannot yet be given. “Taking an alarmist stance towards AI isn’t necessary. Instead, organizations should approach AI endeavors cautiously, fully aware that developing AI models demands careful and secure handling. Engaging in AI development requires a relentless mindset and a commitment to thoroughness,” said Adam Kerns from Coalfire.
Goals for continuing the research to inform the industry
The team hopes to continue the research to support all organizations concerned about how new forms of AI can benefit hackers and disrupt the cybersecurity industry. “I hope this research will inspire decision-makers at leading AI companies to pay serious attention to the medium and long-term security risks associated with their work, especially the potential for model weight theft and subsequent malicious fine-tuning by bad actors. If threat vectors like this can be cut off in time, society will benefit,” said ECE student Jack Parker. “Determining the optimal approach for organizations remains a challenge. Compliance and regulation alone don’t guarantee robust cybersecurity practices. It’s essential for organizations to carefully assess these frameworks and ensure that their requirements are implemented effectively to establish tangible security measures rather than just adhering to ‘check the box’ security protocols,” concluded Coalfire’s Adam Kerns.
Photo by Tima Miroshnichenko, Pexels.com