Abstract
The rapid development of large language models (LLMs) in recent years has opened new possibilities across diverse domains. In this talk, I will share several of my recent works that apply natural language processing techniques to both cybersecurity and educational assessment. In the first part, I will demonstrate how LLMs can assist malware analysis, including generating human-readable reports from execution traces and even simulating malware behavior for research and defense purposes. In the second part, I will present our work on automated essay scoring, where data augmentation methods are employed to overcome data scarcity and a mixture-of-experts model is designed to imitate the decision-making process of human evaluators.