Abstract
This article investigates the potential misuse of large language models (LLMs) for low-resource, highly personalised social engineering attacks. The study explores how ChatGPT can infer personality traits during natural conversations by leveraging publicly available personal information, such as social media data, as an entry point. Utilising the social engineering personality framework (SEPF), the research seeks to optimise attack vectors based on the Big Five personality traits, with the objective of enhancing the persuasiveness of social engineering strategies. The approach is divided into four phases: verifying conversational capabilities, conducting personality analyses, applying the SEPF for attack optimisation, and evaluating the persuasiveness of personalised attacks. The present paper offers a proof of principle for the initial phase, demonstrating ChatGPT’s capacity to engage in natural conversation while discreetly conducting personality analyses. The findings indicate that although ChatGPT can simulate human-like interactions, limitations were observed in conversational variance and in the reliability of its personality assessments. The study identifies challenges such as generalisation, a lack of score differentiation, and confirmation bias, and proposes refinements such as increasing interaction depth, adjusting scoring scales, and using tailored personas. Subsequent research will investigate enhanced personality inference techniques, the personalisation of attack vectors, and their impact on susceptibility to social engineering attacks.