Prospective Students
I am seeking motivated Ph.D. students to join my Cybersecurity research group at Towson University for Fall 2026 or Spring 2027. I am particularly interested in students with strong interest or experience in Artificial Intelligence, especially Large Language Models (LLMs), and their application to Cybersecurity and Cybersecurity Education. Support is available for qualified students. Applicants with an M.S. degree and prior graduate-level research experience will be given preference.
Please email your CV/Resume and a brief statement of interest outlining your background and research goals. TA/RA positions are available.
Preferred Qualifications:
- Strong background in cybersecurity, system security, or related areas.
- Experience with AI/ML, particularly LLMs, prompt engineering, reasoning workflows, or data-driven security analysis.
- Solid programming skills in Python, C/C++, or systems-level development.
- Master’s degree in Computer Science, Computer Engineering, Cybersecurity, or a related field (preferred). Towson University prioritizes applicants who already hold an M.S., and the stipend differs significantly for students entering the Ph.D. program with versus without one.
- Prior research experience, including thesis work, independent projects, or publications.
- Strong motivation, curiosity, and the ability to work independently on research problems.
My current research priorities include:
- AI-Driven Cyber Deception and Resilient Defense Design: This direction focuses on designing high-fidelity cyber deception strategies using AI and Large Language Models (LLMs). I study how to identify system-level cyber objects that can be manipulated into believable virtual baits, and how LLM-guided reasoning can support the design, adaptation, and orchestration of deception mechanisms for proactive defense. This research also examines the security challenges of deception itself, including how attackers may detect, evade, or counter such defenses, and how to make deception mechanisms more resilient and effective in practice.
- Advanced Malware Analysis and Automated Reasoning with AI: This direction focuses on malware analysis, behavioral understanding, and automated reasoning using AI. I am interested in leveraging AI to analyze malware execution traces, system events, API behaviors, and related artifacts to extract attacker tactics, techniques, and procedures; support threat intelligence generation; and improve security analysis workflows. The goal is to build intelligent approaches for understanding malicious behavior and supporting security decision-making at scale.
- Cybersecurity Education with AI Integration: I explore how AI and LLMs can support cybersecurity education through curriculum generation, curriculum adoption, content updating, knowledge unit mapping, outcome alignment, and adaptive learning support. I am also interested in building intelligent educational tools and agents, such as chatbots and LLM-based assistants, to improve student engagement, personalize learning experiences, support instructors, and help students better achieve learning objectives in cybersecurity courses and programs.