Postmortem light: Zettlr and Electron Fuses

On February 6, 2024, a security researcher alerted us to a potential security issue affecting Zettlr version 3.0.3 and below. On February 7, we fixed this issue with the release of Zettlr version 3.0.4, shortly followed by 3.0.5. In this postmortem, we want to inform you about what happened, whether you have been affected, and what we have learned so far.

[Update 29-02-2024] The CVE number has now been issued by MITRE. You can find more info about this bug under CVE-2024-25757.

On February 7, 2024, a security update was released for Zettlr – version 3.0.4 – that fixes a vulnerability reported against Zettlr versions 3.0.3 and below. These versions were vulnerable to a relatively convoluted attack that could allow malicious actors to use the application to execute code on your computer.

This is the second postmortem I have written for Zettlr, after a first one about three years ago. I try to be as transparent as possible with you, Zettlr’s users, whenever something happens that affects the security of using the app.

Today’s postmortem is a bit different from what postmortems usually look like. While there will be a CVE (Common Vulnerabilities and Exposures) number assigned to this incident (unfortunately, the request is still pending with MITRE), the CVE is fortunately not critical. But I would like to use this opportunity to highlight issues in the communication from the Electron team that I believe were less than ideal in this instance.

As in the last case, this postmortem will feature three sections: a timeline of what went wrong, an analysis of why it went wrong, and a “lessons learned” section.

Was I affected? Very unlikely. The issue I am going to describe would have required an attacker to already have infiltrated your entire system, and in that case there is a very slim chance that the attacker would have even touched the Zettlr binary. So I am fairly confident that not a single user was affected. I may be wrong, but after reading this post you will be in a position to double-check in case you want to be sure.

Timeline: What Went Wrong?

Let’s first get everybody on the same page: What went wrong, and when did it go wrong?

February 6 and 7, 2024: From Notification to Fix

In the morning (Central European Time) of February 6, we received an email outlining a security vulnerability in Zettlr, enabled by the binary not being sufficiently locked down. User “soulfood” described how someone could force Zettlr’s binary to start a debug server from within the app: if the command-line flag --inspect is passed to Zettlr on startup, the underlying Electron framework spins up a web server that listens for requests and allows the execution of arbitrary code within the main process of the app. “soulfood” referred to a presentation given at the DEFCON security conference in Las Vegas in the summer of 2023, where this exploit was presented to the world.
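
If you want to double-check your own installation, one rough way is to launch the binary with --inspect and watch for the debugger banner on stderr. Here is a minimal sketch in TypeScript for Node.js – the install path is a hypothetical macOS example, so adjust it for your system:

```typescript
import { spawn } from 'node:child_process';

// Hypothetical install path – adjust for your operating system.
const zettlrBinary = '/Applications/Zettlr.app/Contents/MacOS/Zettlr';

const child = spawn(zettlrBinary, ['--inspect=9229']);

child.stderr.on('data', (chunk: Buffer) => {
  // A vulnerable (pre-3.0.4) build prints a line similar to
  // "Debugger listening on ws://127.0.0.1:9229/..." when it honors --inspect.
  if (chunk.toString().includes('Debugger listening')) {
    console.log('This binary still honors the --inspect flag.');
    child.kill();
  }
});
```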

Mitigation was very simple, however, and within 24 hours – on the morning of February 7 – we released Zettlr 3.0.4, which patches this issue. Specifically, the update contains code that flips a series of “fuses” in the binary, making it ignore the --inspect command-line flag (and a few others that we don’t need).
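
For fellow Electron developers: flipping these fuses on a packaged app takes only a few lines using the @electron/fuses package. The following is a rough sketch of such a build step – the binary path is an example, and this is not Zettlr’s exact build code:

```typescript
import { flipFuses, FuseVersion, FuseV1Options } from '@electron/fuses';

// Example path to the packaged binary – adjust to your build output.
const binaryPath = './out/Zettlr-linux-x64/zettlr';

await flipFuses(binaryPath, {
  version: FuseVersion.V1,
  // Prevent the binary from being run as a plain Node.js process.
  [FuseV1Options.RunAsNode]: false,
  // Make the binary ignore --inspect and friends – the flag at the heart of this issue.
  [FuseV1Options.EnableNodeCliInspectArguments]: false,
});
```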

There is one exotic use case for this exploit that I can think of. Imagine someone wants to steal what you write – for example, imagine you are working with confidential information. This vulnerability would enable a person to execute code within the Zettlr binary that sends your files’ contents to a remote server – all without you knowing. There are more convenient ways of achieving that, however, so it is still very improbable that any malicious actor would think of this approach first. But there could be instances in which it would raise less suspicion if the app itself sends the contents, rather than some third-party script constantly running in the background.

February 7, 2024: The Electron Team Complains About The CVE

That same day, the Electron team published a blog post addressing this CVE, as it had apparently been raised against several Electron applications. Given that one of the post’s co-authors is Keeley Hammond, a software engineer at Slack – another Electron application – I suspect that Slack is among them. The authors express their discontent with the raising of these CVEs. In particular, they say they believe the CVEs have “not been raised in good faith”. They suspect “CVE farming”, i.e., raising CVEs simply to increase the clout of the corresponding security researcher, with no added benefit to the community.

A CVE, or Common Vulnerabilities and Exposures identifier, is a number assigned to security issues in software. It is used to communicate that something is wrong with either an application such as Zettlr or a software library. It is usually applied for by security researchers who constantly monitor software for potential security issues. Thus, raising CVEs can also be used to signal that a security researcher knows what they are doing.

The post continues by noting that exploiting the mentioned CVE requires an attacker to already have code execution access to the computer. In layman’s terms, this means: before the attacker can execute malicious code within unpatched Electron apps, they will already be able to execute malicious code anywhere on the computer. In short, they claim no attacker would use this vulnerability, because they simply don’t need it to wreak havoc.

I agree with the Electron team here that the vulnerabilities are not critical. But I believe the post’s focus on the severity level of the CVEs – “critical” – is just a straw man, diverting attention away from the Electron team’s lack of communication in this matter.

Specifically, they argue that “the mere existence of a CVE with the scary critical severity might lead to end user confusion”. This raises the question: whom do they mean by end users? If they mean “end-end users”, i.e., those who simply use Electron applications – no, not at all. These users usually only hear of CVEs through postmortems such as this one, which gives the developer ample room to explain what has happened and what happens next. If they mean people like me, who use Electron to develop applications – yes, I am confused! But not because of the CVE – because of the blog post by the Electron team. I will return to this matter.

Thus, contra the Electron team, I do believe that this issue needed to be raised. Specifically: neither you as users – nor I as the developer (!) – should expect a production-ready binary to be able to spin up a web server. Why would it ever need to do that? Exactly: it doesn’t. It is unintended behavior, and it is often unintended behavior that allows attackers to infiltrate computer systems. I will come back to this later as well.

But before that, it is worth going back in time a bit, because while researching the full timeline of this incident, I stumbled upon something interesting.

January 31, 2024: Electron Forge Enables Fuses By Default

While updating our Electron Forge dependency, I discovered in the changelog of the new version 7.3.0 that it includes a pull request which sets many of the available Electron fuses by default in the templates. This means: any new Electron app developed using one of the templates that Electron Forge ships with will be locked down by default. No more --inspect flag for those binaries.
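
In practice, the templates do this through Forge’s fuses plugin in the project’s Forge configuration. Here is a sketch of what such a configuration roughly looks like – the exact set of fuses in the official templates may differ:

```typescript
// forge.config.ts – a sketch, not the official template verbatim.
import { FusesPlugin } from '@electron-forge/plugin-fuses';
import { FuseV1Options, FuseVersion } from '@electron/fuses';

export default {
  plugins: [
    new FusesPlugin({
      version: FuseVersion.V1,
      // Disable the developer-oriented escape hatches in the packaged binary.
      [FuseV1Options.RunAsNode]: false,
      [FuseV1Options.EnableNodeCliInspectArguments]: false,
      // Only load the application code that was actually bundled with the app.
      [FuseV1Options.OnlyLoadAppFromAsar]: true,
    }),
  ],
};
```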

This made me even more curious, so I decided to confirm a faint suspicion I had. At the time of writing, the “Security” guide by the Electron team, which collects many best practices for securing an Electron app, includes a section on Electron fuses. I decided to double-check with the Wayback Machine, and, lo and behold, this section did not exist a month ago – it is missing from the snapshot of the security guide from early December 2023.

The Electron team themselves writes:

This [flag, i.e., --inspect] can let external scripts run commands that they potentially would not be allowed to, but that your application might have the rights for.

Does the Electron team maybe believe that fuses can be a security risk after all? I do not believe that anyone on the Electron team is acting in bad faith. But I do believe that the communication strategy is a bit lacking. This leads me to the analysis.

For reference, here is the full timeline as a quick list:

  1. Early 2021: Electron introduces fuses – without a dedicated announcement.
  2. April 2023: Electron Forge 6.1.1 adds support for flipping fuses, mentioned only in a changelog line.
  3. September 15, 2023: The DEFCON talk describing the --inspect exploit is published.
  4. Sometime between December 2023 and February 2024: A section on fuses appears in Electron’s official security guide.
  5. January 31, 2024: Electron Forge 7.3.0 enables many fuses by default in its templates.
  6. February 6, 2024: We receive the vulnerability report against Zettlr 3.0.3 and below.
  7. February 7, 2024: Zettlr 3.0.4 is released with the fuses flipped; the same day, the Electron team publishes its blog post about the CVEs.
  8. February 29, 2024: MITRE issues CVE-2024-25757.

Analysis: Why did it go wrong?

To understand what my problem with this timeline is, we need to understand fuses and what they do in the context of Electron. Also, we need to understand how the Electron team has communicated the existence of fuses until now.

How do Fuses Work?

At a very high level, an Electron application is a combination of a web server (the “main process”) that has access to the operating system (including files and, depending on the application, Bluetooth devices, the camera, and more) and browsers (the “renderer processes”) that this web server spins up to display something you can see and interact with: an application window.
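
For readers who have never seen Electron code: the main process is ordinary JavaScript that runs with Node.js capabilities and opens windows. A minimal sketch – not Zettlr’s actual code:

```typescript
import { app, BrowserWindow } from 'electron';

// The main process runs with full operating-system access ...
app.whenReady().then(() => {
  // ... and spawns renderer processes: browser windows that draw the UI.
  const window = new BrowserWindow({ width: 1024, height: 768 });
  window.loadFile('index.html');
});
```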

During development, developers need to interact with both parts of the Electron application in order to debug it. This includes spinning up a debug server within the application, to which commands can then be sent to inspect various parts of the app while it is running.

This is insanely helpful, as it makes finding and fixing bugs much quicker. However, while very useful during development, this debug server can pose a security problem once regular users run the app: (a) they won’t need it, and (b) it has a frightening amount of permissions and capabilities.

Many applications use so-called compiler flags that tell the compiler whether to include certain code that is only used during development when it builds the finished, polished application. This means that potentially powerful code (such as a debug server) is not even included in the finished application.

(N.B.: If you ever downloaded a debug build of an application and it was larger than the final version, this is the reason: it simply included all the debug code that developers use during development.)
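
The JavaScript world has an analogous pattern: bundlers can replace a build-time constant with a literal value and then drop the resulting dead branch entirely, so debug-only code never reaches the shipped bundle. A generic sketch – the constant name __DEV__ is just an example, injected for instance via a bundler’s define option:

```typescript
// __DEV__ is replaced with a literal true/false at build time,
// e.g. through a bundler's "define" feature.
declare const __DEV__: boolean;

export function debugLog(message: string): void {
  if (__DEV__) {
    // In production builds this becomes `if (false) { ... }` and is stripped
    // out entirely – the analogue of a C-style "#ifdef DEBUG" block.
    console.debug(`[dev] ${message}`);
  }
}
```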

Electron development is different, however. Specifically, when we “compile” an Electron app, we do not really compile it. Rather, we bundle together all of our code and rename a pre-built binary that we literally download from the internet. That pre-built binary then executes our bundled code, and this is what you see as an application. This means that, at a high level, the actual binary of Zettlr is exactly the same as that of, e.g., Slack or Discord. The only difference is which code it executes.

Now, in this instance, the problem is: if we developers don’t recompile the binary (which is nice, because it makes development faster), the same code that includes said debug server will inevitably end up in the final binaries.

The solution the Electron team came up with is so-called “fuses”. The idea is the following: because it is impossible to find the debug server code within a binary file and simply remove it, they included a variable in the code that is always set to “true”. When set to “false”, however, this same variable ensures that the app ignores any request to actually start the debug server. In other words, the server code is still in the application binary, but nobody can execute it. And since such a variable has a very simple binary representation, it is easy to find it in the binary and change it after the app has been compiled. This is what fuses do.
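
To make this concrete, here is a purely conceptual sketch of what “flipping” such a value in an already-compiled binary amounts to. The sentinel string and the byte layout below are made up for illustration – the real format is handled for you by the @electron/fuses package:

```typescript
import { readFileSync, writeFileSync } from 'node:fs';

// Made-up marker and layout, for illustration only.
const SENTINEL = 'EXAMPLE_FUSE_SENTINEL';
const INSPECT_FUSE_OFFSET = 2; // hypothetical position of the --inspect fuse byte

function disableInspectFuse(binaryPath: string): void {
  const binary = readFileSync(binaryPath);
  const index = binary.indexOf(SENTINEL);
  if (index === -1) {
    throw new Error('Fuse sentinel not found in binary');
  }
  // Flip the single byte representing the fuse from "enabled" ("1") to "disabled" ("0").
  binary[index + SENTINEL.length + INSPECT_FUSE_OFFSET] = '0'.charCodeAt(0);
  writeFileSync(binaryPath, binary);
}
```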

What is the Problem With Fuses?

Now to the problem I have with fuses. They were introduced a few years back, but they were never really announced. Sure, everyone knew they existed, but nobody suspected that they were an important part of developing Electron applications – me included. So I never bothered implementing them.

But now it turns out that there is actually a security dimension to them. I quickly flipped the fuses not just in Zettlr, but in every other Electron app I had lying around. And the Electron team does seem to agree that there is something to it – otherwise, why would they have started adding fuses to Forge’s boilerplate and to the security guidelines?

The blog post they published did not help, either. The entire article makes it sound as if it’s not a big deal; it has a very hand-wavy tone that seems to dismiss the implications of these fuses. All of this ignores that this is basically developer-grade power shipped in a consumer application.

This is as if you gave a child a bar of TNT, removed the fuse, and told them it was a brick. Sure, without a fuse, TNT is incredibly safe. But what stops another person from simply adding a new fuse and lighting it?

The Electron team never communicated the fact that production-ready binaries shipped with powerful developer features, which kept many developers in blissful ignorance of the potential dangers. Sure, had I thought about this for longer, I might have come up with this very convoluted attack vector myself. But then: why would the Electron team implement this equally convoluted way of getting rid of development code in production applications, instead of just providing two builds – one for production, one for development?

I had never thought about this, but it makes perfect sense. And it makes me all the angrier about the dismissive tone of the Electron team. In fact, the Forge team (which belongs to the Electron core) did not exactly rush to implement fuses support (it came with version 6.1.1 in April 2023 – two years after the release of Electron fuses).

All in all, I think that the way the Electron team communicated this was less than ideal. And I believe there are a few lessons that the developer community as a whole should take away from this incident.

Lessons To Be Learned: Communication Within the Developer Community

I believe the communication from Electron towards its community was suboptimal. I fully believe the Electron team when they say they do not deem fuses an extreme security problem. But they were aware that shipping debug features with a production-ready application might be a problem – why else would they have developed the fuses approach in the first place? The only plausible explanation is that they already suspected it might be problematic to ship a binary with all of these features enabled.

I think there are three instances in which the communication should have been better:

  1. When they initially launched the fuses support. That was in early 2021. I double-checked: There was no dedicated blog post saying something along the lines of “Hey, you may want to take a look at this new thing we did. It could be useful”. Indeed, the only mentions of fuses on the Electron blog are when they added one for some feature.
  2. When the Electron Forge team added simple support for them in early 2023. Yes, it is in the changelog, but only as a single line stating that they added the fuses plugin. What is it? What are its implications? Nobody knows.
  3. When security researchers started raising CVEs against various Electron-based apps. I don’t know when that started, but probably sometime in late 2023, coinciding with the publication of the corresponding DEFCON talk on September 15. This would have been the perfect chance for the Electron team to publish a blog post explaining to Electron developers that Electron fuses might be more important than previously anticipated.

Do not get me wrong: I fully support that the Electron team has now ensured that fuses have a more central standing within the ecosystem. I do not hold a grudge against them, because I can absolutely see how they did not consider this to be much of a drama. However, I do disagree with their communication strategy. In particular, while “farming CVEs” for the personal gain of security researchers is indeed a problem, I don’t believe that argument holds any water in this particular instance. I would have wished for more of a “huh, turns out it’s indeed problematic” tone – especially since that is literally what they did by adding fuses to the security guidelines and adding default fuses support to Electron Forge: acknowledging that fuses have a security-relevant dimension.

I believe that the blog post has hurt everyone involved, without need: the CVE system, which might now be perceived as an ill-conceived attempt at managing security; the security researchers, who are supposedly “just in it for the clout”; and the Electron framework, which has to deal with enough criticism as it is. Quietly adding the correct information to the right places while openly arguing that it’s all not a big deal raises suspicion – and rightfully so.

CVEs are a way for the software development world to communicate something, namely potentially security-critical issues with software. There are CVEs that turn out to describe expected behavior, and there are also CVEs filed in bad faith. But then, no social institution is perfect, and nitpicking the instances in which it fails hurts all the important functions it fulfills just as much. This is exactly where political apathy comes from. “Trust me, I’m a sociologist.”

But that is also why I don’t understand the hostile stance the Electron team has taken towards them: a CVE is merely a symbol, a signal that something may be amiss. It should be used to flag to developers and users alike: “Hey, there is something potentially important to you.” If it then turns out not to be too critical after all, that is perfectly fine – better to warn users one time too many than one time too few. Every user can reflect on whether they may have been affected by something. But if you don’t tell them, they have no chance to actually think about it and determine: was I affected, or not?

Therefore, here are my lessons for us all:

  1. If a security researcher raises a CVE, take it seriously. Do not whine if the security researcher arrives at a different estimate of the issue’s severity. Just assess the CVE objectively and decide what to do. If the solution is to just write a PSA, then what is the problem?
  2. If you come to the conclusion that the CVE touches upon a valid issue, communicate that. Do not update your security guides to implicitly affirm the CVE while publicly saying that there’s nothing to see here. This erodes trust in you and/or your software.
  3. If you add a feature that might help security, announce it. Even if you yourself don’t think the underlying issue is all that problematic, for others it may be. Always consider what other developers and users might make of it. The belief that announcing something potentially severe will only cause “confusion” is false. As I have mentioned above: my own confusion was not caused by the CVE. It was caused by the dismissive tone of the blog post.

To lighten up the mood, while writing this blog post, I continuously had a song with a more than fitting title stuck in my head, so give it a listen: Slydigs — Light the Fuse.

Stay sharp!