PromptLock: How LLMs Are Being Weaponized for AI Malware
Researchers at the NYU Tandon School of Engineering have created a simulation of a malicious AI system that can carry out ransomware attacks that steal personal files and demand payment, handling every step from breaking into systems to writing threatening messages to victims.
For cybersecurity teams, it's less a curiosity and more a warning shot of what's coming next. This system, which the researchers call “Ransomware 3.0,” became widely known as “PromptLock,” a name chosen by cybersecurity firm ESET after its experts discovered the sample on VirusTotal, an online platform where security researchers test whether files are detected as malicious.
PromptLock can generate its own code, but how does it manage to do so, and is it truly dangerous outside the lab?
- How Does Traditional Malware Work?
- Inside PromptLock's Attack Chain
- Why PromptLock Is Harder to Stop
- Is PromptLock a Real-World Threat Yet?
- The Need for AI-Aware Security
How Does Traditional Malware Work?
Traditional malware (like viruses, worms, and classic ransomware) typically relies on static, pre-written code that attackers craft in advance. Attackers lure victims via phishing emails, malicious downloads, or software exploits, tricking users into running a malicious program.

Once launched, that program installs itself or injects code, then carries out a fixed “payload” to lock files and drop a static ransom note demanding payment. In effect, the attack chain is predetermined: the malicious binary contains all its logic from the start, so security tools can detect it by matching its code or behavior against known threats. That is the primary reason traditional malware is comparatively easy to detect and block.
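To make the contrast concrete, here is a minimal sketch of signature-based detection in Go (the same language the PromptLock binary is written in): hash a file, then look the digest up in a blocklist. The blocklist entry below is a placeholder (the SHA-256 of an empty file), not a real signature.

```go
package main

import (
    "crypto/sha256"
    "encoding/hex"
    "fmt"
    "os"
)

// knownBadHashes stands in for a real signature database.
// The entry below is the SHA-256 of an empty file, used purely
// as a placeholder, not an actual malware signature.
var knownBadHashes = map[string]bool{
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855": true,
}

func main() {
    data, err := os.ReadFile(os.Args[1])
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    sum := sha256.Sum256(data)
    digest := hex.EncodeToString(sum[:])
    if knownBadHashes[digest] {
        fmt.Println("MATCH: file hash is on the blocklist")
    } else {
        fmt.Println("clean: no signature match")
    }
}
```

This works only because the malicious bytes exist before the attack and never change, which is precisely the assumption PromptLock breaks.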
Inside PromptLock's Attack Chain
Now imagine malware that doesn't carry all its own instructions, but instead asks an AI to write them as needed. That's PromptLock: the first known ransomware to use an LLM to generate its malicious code at runtime, tailoring itself to the victim's environment.

Stage 1: Infection & Model Setup
The attack begins when the victim unwittingly launches the PromptLock binary, for example via a phishing lure or a malicious software update. This Go-based executable is the only static piece of the puzzle. As soon as it runs, it sets up its environment and loads the AI model locally: PromptLock immediately starts a local GPT-OSS (20B) process via the Ollama API.
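For illustration, checking whether a local Ollama server is reachable looks roughly like this in Go. This is a minimal sketch assuming a stock Ollama install on its default port (11434), not PromptLock's actual code.

```go
package main

import (
    "fmt"
    "net/http"
    "time"
)

// Probe Ollama's default endpoint to see whether a local model
// server is already running. /api/tags lists installed models.
func main() {
    client := &http.Client{Timeout: 2 * time.Second}
    resp, err := client.Get("http://localhost:11434/api/tags")
    if err != nil {
        fmt.Println("no local Ollama server reachable:", err)
        return
    }
    defer resp.Body.Close()
    fmt.Println("Ollama responded with status:", resp.Status)
}
```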
Stage 2: Prompting the LLM and Script Generation
At this point, PromptLock feeds pre-written prompts to the AI. These prompts are hard-coded in the malware as plain-text instructions.
An example could be “Generate a Lua script that finds all Word documents on drive C: and encrypts them.” It sends these natural-language requests to the GPT-OSS model via the local API.
The generated Lua scripts are cross-platform, so the same code can run on Windows, Linux, or macOS as-is. Crucially, since the code didn't exist until the model wrote it, traditional static signatures and heuristics can't see it beforehand.
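The request/response pattern here is just Ollama's public /api/generate endpoint. Below is a minimal sketch of that pattern in Go, with a deliberately harmless prompt standing in for the malicious ones; the model tag is an assumption based on the GPT-OSS (20B) model mentioned above.

```go
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
)

// Request/response shapes for Ollama's /api/generate endpoint.
type generateRequest struct {
    Model  string `json:"model"`
    Prompt string `json:"prompt"`
    Stream bool   `json:"stream"`
}

type generateResponse struct {
    Response string `json:"response"`
}

func main() {
    // A deliberately harmless prompt: the point is the pattern
    // (natural language in, runnable code out), not the payload.
    req := generateRequest{
        Model:  "gpt-oss:20b", // assumed model tag
        Prompt: "Write a Lua script that prints the names of files in the current directory.",
        Stream: false,
    }
    body, _ := json.Marshal(req)

    resp, err := http.Post("http://localhost:11434/api/generate",
        "application/json", bytes.NewReader(body))
    if err != nil {
        fmt.Println("request failed:", err)
        return
    }
    defer resp.Body.Close()

    var out generateResponse
    if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
        fmt.Println("decode failed:", err)
        return
    }
    fmt.Println(out.Response) // the generated Lua source, as plain text
}
```

Because the Lua source arrives as plain text over localhost and is executed from memory, nothing malicious ever has to touch disk in a scannable form.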
Stage 3: File Scanning & Encryption
Once the Lua scripts are in memory, they start hunting for valuable data. They can attempt to exfiltrate files to attacker-controlled storage, then encrypt the originals using freshly generated keys and the SPECK 128-bit cipher. After locking the data, PromptLock drops a ransom note (AI-generated, of course).
Why PromptLock Is Harder to Stop
PromptLock breaks the usual defensive assumptions. With traditional ransomware, defenders rely on signatures or the behavioral patterns of known code. Here, the malicious logic itself is generated at runtime, so there is no fixed artifact to fingerprint in advance.
Moreover, PromptLock operates largely offline. It runs an LLM locally and uses cross-platform Lua, avoiding network calls to known command-and-control servers. And since the AI-generated scripts run in memory, even sophisticated endpoint detection and response (EDR) tools may not flag them, because there is no recognized pattern to match the behavior against.
Is PromptLock a Real-World Threat Yet?
It is important to note that PromptLock itself was not unleashed by cybercriminals; it is a proof-of-concept.
To be clear, PromptLock has not been seen in the wild as an active threat, and it was designed to function only within the lab. That said, the underlying idea is already creeping into reality.

The Need for AI-Aware Security
The emergence of AI-powered malware like PromptLock underscores that defensive strategies must evolve. For example, organizations should monitor for unusual AI/ML activity on endpoints: watch for unexpected processes (like LLM runtimes) or odd API calls that spin up local models.
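As a starting point, here is a minimal sketch of that kind of endpoint monitoring in Go, using the third-party gopsutil library; the watchlist of process names is illustrative and would need tuning for a real environment.

```go
package main

import (
    "fmt"
    "strings"

    "github.com/shirou/gopsutil/v3/process"
)

// Illustrative watchlist of process names associated with local
// LLM runtimes. These are examples, not an exhaustive list.
var llmRuntimes = []string{"ollama", "llama-server", "llamafile"}

func main() {
    procs, err := process.Processes()
    if err != nil {
        fmt.Println("could not enumerate processes:", err)
        return
    }
    for _, p := range procs {
        name, err := p.Name()
        if err != nil {
            continue // process may have exited; skip it
        }
        for _, runtime := range llmRuntimes {
            if strings.Contains(strings.ToLower(name), runtime) {
                fmt.Printf("LLM runtime detected: pid=%d name=%s\n", p.Pid, name)
            }
        }
    }
}
```

In practice this check would feed an alerting pipeline rather than print to stdout, and it should be paired with an allowlist so that legitimate AI tooling on developer machines does not drown defenders in noise.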
Runtime behavior analysis is equally critical: security solutions must look for anomalous activity patterns rather than known signatures. In addition, enforcing the principle of least privilege and network segmentation can limit how far such malware spreads once active. As LLMs evolve, we are entering an era where malware can think on its feet. It's time our defenses did the same.