Prospective Students
I am actively recruiting dedicated Ph.D. candidates who are passionate about cutting-edge Cybersecurity research and plan to start their Ph.D. journey at Towson University in Spring/Fall 2026. If you have strong interests in Malware Analysis, Defense Strategies, Deception-based System Orchestration, and the use of Large Language Models (LLMs) in Cybersecurity, I encourage you to contact me at msajid@towson.edu.
TA/RA positions are available. Please include your CV/resume, GRE/TOEFL scores, and a brief statement outlining your motivation to pursue a Ph.D.
Qualifications:
- Master’s degree in Computer Science/Engineering or a related field (required), with experience or background in cybersecurity or system security.
- Research experience and prior publication(s) (required).
- Hands-on experience with large-scale datasets, malware analysis, reverse engineering, or the application of ML/DL/LLMs to security analytics.
- Familiarity with AI/LLM integration into security analysis and detection frameworks.
My current priority research areas include:
- Autonomous and Verifiable Deceptive System Orchestration: My research focuses on systematically identifying and manipulating the system-level cyber objects that attackers leverage, in order to build deception-based detection mechanisms. Through dynamic malware analysis and behavioral profiling, I aim to extract deception parameters directly from malicious samples and to design deceptive environments that proactively disrupt attacker operations. Optimizing deception planning through reinforcement learning and adaptive system orchestration is a key part of this work.
- Advanced Automated Dynamic Malware Analysis and Reasoning: This project seeks to enhance dynamic malware analysis pipelines using AI-guided symbolic execution, high-level behavioral mapping, and automated reasoning. Our goal is to target evasive malware families, intelligently drive execution toward malicious TTPs (tactics, techniques, and procedures), and automatically extract threat intelligence using AI, CTIO models, and NLP-powered reasoning engines.
- Leveraging Large Language Models (LLMs) for Cyber Deception and Malware Understanding: In this direction, we use LLMs (e.g., ChatGPT, Code Interpreter, and specialized security-tuned models) to analyze malware execution traces, log graphs, event sequences, and system interactions. Using advanced prompt engineering and contextual reasoning, we aim to extract actionable knowledge for deception design, malware attribution, and behavior classification, enabling new forms of AI-powered cyber deception.
- Cybersecurity Education with AI Integration: I am also exploring how LLMs can enhance cybersecurity education, curriculum generation, knowledge mapping, and adaptive student learning experiences. This direction may be available for highly motivated applicants with a strong interest in cybersecurity education research.
Please note: If your primary interest is solely in generic ML/DL research without a system-level security focus, I kindly ask that you refrain from contacting me. I am specifically looking for students motivated by problems in system security, malware analysis, deception, and AI-enabled defense techniques.
While my ongoing projects have clear focus areas, I am open to considering strong research proposals that demonstrate creativity, rigor, and alignment with my core expertise. If you have a well-defined project idea within cybersecurity or related subfields, feel free to reach out with your proposal and timeline for discussion.
Important visa note: Due to temporary pauses in U.S. visa appointment scheduling, international applicants who need a new F-1 visa may experience delays. Applicants who are already in the U.S. in valid visa status, or who are U.S. Permanent Residents or U.S. Citizens, may receive priority consideration. Please feel free to reach out to discuss your individual situation.