Hacking Humans Using LLMs with Fredrik Heiding: Devising and Detecting Phishing: Large Language Models vs. Smaller Human Models | Las Vegas Black Hat 2023 Event Coverage | Redefining CyberSecurity Podcast With Sean Martin and Marco Ciappelli

Guest: Fredrik Heiding, Research Fellow at Harvard University [@Harvard]
On LinkedIn | https://www.linkedin.com/in/fheiding/

____________________________

Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]
On ITSPmagazine | https://www.itspmagazine.com/i....tspmagazine-podcast-

Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast
On ITSPmagazine | https://www.itspmagazine.com/i....tspmagazine-podcast-

____________________________

This Episode's Sponsors

Island.io | https://itspm.ag/island-io-6b5....ffd

____________________________

Episode Notes

In this Chats on the Road to Black Hat USA, hosts Sean and Marco discuss the use of AI in hacking and cybersecurity with guest Fredrik Heiding, focusing on large language models such as GPT-3 and GPT-4 (ChatGPT). They explore how AI can be used to create realistic phishing emails that are difficult to detect, and how cybercriminals can exploit this technology to deceive individuals and organizations.

The episode also looks at the ease with which AI can generate content that appears genuine, making it a powerful tool in the hands of attackers. The trio discuss the dangers of AI-powered phishing emails and the need for more sophisticated spam filters that can accurately detect the intent of these emails, providing users with more granular information and recommended actions.

Throughout the episode, AI is recognized as a tool that can be used for both good and bad purposes, underscoring the importance of ethics and the ongoing race between cybercriminals and cybersecurity professionals. The conversation also covers the positive applications of AI, showcasing how the "good guys" in the cybersecurity world can use it to detect, block, and prevent phishing attacks.

About the Session

AI programs, built using large language models, make it possible to automatically create realistic phishing emails based on a few data points about a user. They stand in contrast to "traditional" phishing emails that hackers design using a handful of general rules they have gleaned from experience.

The V-Triad is an inductive model that replicates these rules. In this study, we compare users' suspicion towards emails created automatically by GPT-4 and emails created using the V-Triad. We also combine GPT-4 with the V-Triad to assess their combined potential. A fourth group, exposed to generic phishing emails created without a specific method, served as our control group. We used a factorial approach, targeting 200 randomly selected participants recruited for the study. First, we measured the behavioral and cognitive reasons for falling for the phish. Next, we trained GPT-4 to detect the phishing emails created in the study, after having trained it on the extensive cybercrime dataset hosted by Cambridge. We hypothesize that the emails created by GPT-4 will yield a similar click-through rate as those creat
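The "more granular spam filter" idea discussed in the episode lends itself to a short illustration. The sketch below is not code from the study or the episode; it is a minimal, hypothetical example assuming the openai Python package (v1 or later), an OPENAI_API_KEY environment variable, and the gpt-4 model name. It asks the model to rate an email's phishing likelihood, name the inferred intent, and suggest a recommended action for the recipient.

```python
# Hypothetical sketch only -- not the study's code. Assumes the openai
# Python package (>= 1.0) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

PROMPT_TEMPLATE = (
    "You are an email security assistant. Analyze the email below and report:\n"
    "1. phishing likelihood (low / medium / high)\n"
    "2. the sender's likely intent, in one short phrase\n"
    "3. a recommended action for the recipient, in one sentence\n\n"
    "Subject: {subject}\n\nBody:\n{body}"
)

def assess_email(subject: str, body: str) -> str:
    """Ask the model for a granular phishing verdict on a single email."""
    response = client.chat.completions.create(
        model="gpt-4",  # model choice is an assumption for illustration
        messages=[{"role": "user",
                   "content": PROMPT_TEMPLATE.format(subject=subject, body=body)}],
        temperature=0,  # deterministic output suits a filter-style check
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    verdict = assess_email(
        subject="Action required: verify your payroll details",
        body="Hi, please confirm your bank information at the link below within 24 hours...",
    )
    print(verdict)
```

A production filter would request structured output and route the verdict into the mail pipeline rather than printing it, but the prompt-and-classify loop above is the core of the idea the guests describe: going beyond a binary spam flag to tell users what an email is trying to do and what they should do about it.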
