Free as in Liability
Open-Source Sustainability and the Java Advantage
Imagine you’re running a claims processing system for a mid-size insurer. It’s been in production for six years. It works. Nobody’s touched it for ages.
Somewhere in the middle of that stack, embedded inside a Spring Boot fat JAR, is Apache Tomcat. Not the latest Tomcat: just the one that was current when the application was last repackaged. Maybe 8.5. Maybe 9.0. You’re not entirely sure, because nobody’s checked in a while. Your vulnerability scanner says it’s clean.
This article is about why that clean scan might be the most dangerous thing in your risk register.
We’re going to look at Java differently: not as a programming language, but as a supply chain. Who maintains what we consume? Who funds that maintenance? And what happens (as it looks increasingly likely) when parts of that pipeline go dark?
The Platform That Runs the Business Logic
Java occupies a unique position in the software world. It’s not the language most people learn first, or the one that wins popularity contests on social media. It’s the language where most business logic lives. Not the UI layer, not the infrastructure plumbing. The actual decision-making core of enterprises: insurance claims, trade execution, regulatory reporting, healthcare processing, payroll, and tax.
The code that decides who gets paid, what gets approved, and whether a transaction clears. That’s Java.
And what does that business logic run on? Frameworks, libraries, and embedded servers. Almost all are written in Java or a JVM language. That sounds clean enough, but the reality is considerably messier. We don’t all live on latest. In the Java ecosystem, we pretty much never live on the latest. What’s in production is a long tail stretching back to earlier versions of Java than anyone would care to admit.
And beyond the version tail, there’s a fork tail, a build-it-yourself tail, a shaded-JAR tail, and a dozen other ways in which code that started as “latest” morphed into something mostly the same but slightly different. This isn’t occasional. It’s happening all the time, across the entire ecosystem.
Open-source consumption hit extraordinary volumes in 2025. Maven Central alone processes billions of requests, but the funding flowing back to maintainers remains a fraction of the value extracted. Consumption scales effortlessly. Maintenance doesn’t.
We eat a lot of open source. We contribute very little back. And we’re about to discover what that imbalance costs.
The Tomcat Delta
Take Apache Tomcat. Tomcat doesn’t just exist as the official Apache distribution. It’s inside enterprise application servers, inside product bundles, inside Spring Boot JARs, inside appliances, inside internal systems last touched in 2017. It’s not a calm river flowing from upstream to downstream. It’s a vast delta — with rapids and the occasional waterfall. The code spreads, splits, branches, gets repackaged, embedded, wrapped, containerised, and shipped in directions nobody can fully track.
Some of those branches are maintained. Many are not. That Spring Boot starter you pulled in three years ago embedded a specific Tomcat version at build time. You’ve updated Spring Boot since then (probably), but do you know which Tomcat is inside the current fat JAR? Do you know whether that version still receives security patches from Apache? Do you know whether the version you’re running is the Apache version at all, or a vendor-modified fork with its own patch history?
Most people don’t. Most people have never needed to ask.
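For the curious, here is one way to answer the first of those questions. The sketch below uses plain JDK code and assumes the standard Spring Boot fat-JAR layout, where dependencies live under BOOT-INF/lib/; the temporary JAR it builds is a stand-in so the example is self-contained, and in practice you would point it at your real artefact.

```java
import java.nio.file.*;
import java.util.*;
import java.util.zip.*;

public class TomcatFinder {
    // Scan a Spring Boot fat JAR for embedded Tomcat jars. Spring Boot
    // packages dependencies under BOOT-INF/lib/, so the embedded server
    // version is visible in the entry names themselves.
    static List<String> findEmbeddedTomcat(Path fatJar) throws Exception {
        List<String> hits = new ArrayList<>();
        try (ZipFile zip = new ZipFile(fatJar.toFile())) {
            for (Enumeration<? extends ZipEntry> e = zip.entries(); e.hasMoreElements(); ) {
                String name = e.nextElement().getName();
                if (name.startsWith("BOOT-INF/lib/") && name.contains("tomcat-embed")) {
                    hits.add(name.substring("BOOT-INF/lib/".length()));
                }
            }
        }
        return hits;
    }

    public static void main(String[] args) throws Exception {
        // Build a tiny stand-in fat JAR so the sketch runs on its own;
        // the artefact name and version here are illustrative only.
        Path jar = Files.createTempFile("claims-app", ".jar");
        try (ZipOutputStream out = new ZipOutputStream(Files.newOutputStream(jar))) {
            out.putNextEntry(new ZipEntry("BOOT-INF/lib/tomcat-embed-core-9.0.65.jar"));
            out.closeEntry();
            out.putNextEntry(new ZipEntry("BOOT-INF/lib/jackson-databind-2.13.4.jar"));
            out.closeEntry();
        }
        System.out.println(findEmbeddedTomcat(jar)); // prints [tomcat-embed-core-9.0.65.jar]
    }
}
```

Ten lines of code, and yet most teams have never run anything like it against their own deployables.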
That claims processing system? It’s one of thousands of nodes in that delta. And the machinery that keeps it secure, compatible, and functional is far more fragile than the download link suggests. We might recognise the cost, effort, and complexity involved in shipping our own business systems (the CI/CD infrastructure, the testing, the release coordination), but we frequently give little thought to the equivalent effort upstream: the TCK licensing and certification, security triage, backport engineering, and coordinated disclosure. Somebody has to do all of that. For every version. On every branch. Continuously.
That’s understandable. It’s also about to become a serious problem.
The Version Promise That Isn’t
Before we get to the security question, there’s a more fundamental issue lurking in the supply chain that most Java developers encounter regularly but rarely think about in structural terms.
When you pull a library update from Maven Central, how confident are you that a patch version is actually safe to adopt?
The answer, if you look at the data, is: not very.
Research into the Maven Central ecosystem has found that breaking changes (genuine binary and source incompatibilities) are distributed across all version levels in roughly equal proportions. Major versions, minor versions, patch versions. The breaking changes don’t cluster where semantic versioning says they should. They’re spread everywhere, in approximately even thirds.
Several studies, including Raemaekers et al., “Semantic Versioning versus Breaking Changes: A Study of the Maven Repository” (IEEE 2014), and Jezek & Crossley, “An Empirical Study of Java API Usages” (2013), have established this “roughly equal thirds” finding through analysis of Maven Central artefacts.
That distribution is not due to library maintainers being careless. Most of them are doing their best with limited time, limited tooling, and no test harness that represents how their library is actually used in the wild. A method signature change that looks innocuous on a Monday morning becomes a NoSuchMethodError problem in your production system on Friday afternoon. The maintainer bumped the patch version because they thought they’d only fixed a bug. They didn’t realise they’d also narrowed a return type, or removed a method that was technically internal but had been public since 2014.
Java makes this particularly insidious because of how the JVM resolves methods at runtime. Binary compatibility (whether compiled code works against a new version of a library without recompilation) is governed by rules in the JVM specification that most developers never learn and that no version number encodes. A change can be source-compatible (it compiles fine if you recompile) but binary-incompatible (it throws NoSuchMethodError at runtime against pre-compiled code). The compiler and the JVM enforce different rules. Version numbers tell you nothing about which rules have been violated.
Consider the simplest possible example: a library method whose return type changes from Object to String. Source-compatible. Any code that compiled against the old version will compile against the new one, because String is an Object. But binary-incompatible, because the JVM resolves methods by their full descriptor, including the return type. Pre-compiled calling code will look for a method returning Object, find one returning String, and throw a linkage error. The version number on Maven Central says “patch release.” The JVM says “crash.”
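You don’t need two compiled versions of a library to see this; the JVM’s own java.lang.invoke.MethodType API will show you the descriptors it links against. A minimal illustration:

```java
import java.lang.invoke.MethodType;

public class DescriptorDemo {
    public static void main(String[] args) {
        // The JVM identifies a method by its full descriptor, which includes
        // the return type. Changing a return type from Object to String
        // changes the descriptor, so pre-compiled callers no longer link,
        // even though source code recompiles without complaint.
        String before = MethodType.methodType(Object.class).toMethodDescriptorString();
        String after  = MethodType.methodType(String.class).toMethodDescriptorString();
        System.out.println(before); // prints ()Ljava/lang/Object;
        System.out.println(after);  // prints ()Ljava/lang/String;
        System.out.println(before.equals(after)); // prints false: binary-incompatible
    }
}
```

Two different descriptors, one “patch” release, and a linkage error waiting for the first pre-compiled caller.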
Semantic versioning is a social contract, not a technical guarantee.
And in the Java ecosystem, where applications routinely have hundreds of transitive dependencies, each one trusting the version numbers of the others, the entire edifice rests on a promise that the data shows nobody consistently keeps.
This means the supply chain problem isn’t only about security vulnerabilities. It’s also about the plain, unglamorous reality that updating your dependencies can break your application even when the update is supposed to be safe. When it comes to the libraries we depend on, we are simultaneously worried about the ones we haven’t updated and nervous about updating them.
That’s not a comfortable position. And it gets worse: as much of a problem as upgrading is, not upgrading is worse still.
Three Pressures, One Converging Problem
Three forces are making the status quo untenable. Every one of them hits our Tomcat-in-the-claims-system scenario directly.
The Regulatory Stick
If you’re in Europe, the headline is the EU Cyber Resilience Act. The CRA imposes due diligence obligations on anyone placing software products on the European market, with compliance deadlines arriving in 2026.
What does “due diligence” mean when your software supply chain includes an embedded Tomcat version that went end of life two years ago? The CRA doesn’t care whether your dependency is maintained by a foundation or a lone developer in their spare time. The obligation lands on you, the deployer. And the penalties for non-compliance have teeth.
If you’re in the United States, you might be tempted to think this doesn’t apply to you. That would be a mistake.
The US has its own regulatory momentum. Executive Order 14028, signed in 2021, directed federal agencies to require SBOMs from their software suppliers and set in motion a chain of requirements that ripple outward through any organisation that sells to the US government. NIST’s Secure Software Development Framework lays out expectations for how software should be built, tested, and maintained; expectations that federal procurement is increasingly making contractual. The FTC has signalled willingness to pursue enforcement actions against companies whose security practices fall short of what the market would consider reasonable. And the SEC now requires public companies to disclose material cybersecurity incidents, which creates board-level attention for supply chain risk in a way that no amount of conference talks ever managed.
None of these carries the CRA’s explicit product liability framing. But here’s the thing: regulations don’t have to be legally binding in your jurisdiction to change your obligations. They set expectations. Industries adopt them. Customers demand them. Insurers require them. The CRA’s requirements around vulnerability handling, SBOM provision, and end-of-life disclosure are rapidly becoming the baseline that regulated industries worldwide measure themselves against.
Not because Brussels has jurisdiction in Kansas City, but because the financial services sector in Kansas City does business with counterparties who are subject to the CRA. And because the underlying logic of these regulations (know what’s in your software, maintain it, disclose when you can’t) is simply good engineering discipline that nobody should have needed a law to enforce.
The practical effect is convergence. Whether you’re complying with the CRA, aligning with NIST’s SSDF, meeting FTC expectations of “reasonable security practices,” or satisfying your cyber insurer’s questionnaire, the obligations look remarkably similar: know your dependencies, track their support status, have a plan for end-of-life components, and be able to demonstrate all of this on demand. The penalty regime differs (the CRA can fine you, the FTC can sue you, your insurer can refuse to cover you), but the work you need to do is essentially the same.
Because Java is disproportionately present in regulated industries (finance, insurance, healthcare, government), this regulatory convergence hits the Java ecosystem harder than most other platforms.
That insurer running the claims system? Whether they’re in Frankfurt or Philadelphia, they’re about to have a very uncomfortable conversation with their compliance team.
The Maintainer Crisis
Burnout among open-source maintainers isn’t new. But AI-generated issues, pull requests, and vulnerability reports are a new kind of noise, and the signal-to-noise ratio for maintainers is collapsing at exactly the wrong moment. Even well-resourced projects like Apache Tomcat feel this. Smaller libraries (the ones sitting three or four levels deep in your dependency tree, maintained by one person who hasn’t committed in eight months) feel it far more acutely.
Roughly 60 per cent of open-source maintainers are unpaid. Many of the libraries that underpin billion-dollar enterprises are maintained in the margins of someone’s evening, between dinner and bed, for no compensation beyond the occasional thank-you issue.
That model worked, more or less, when the demands on those maintainers were limited to fixing bugs and cutting releases.
It doesn’t work when every maintainer also has to be a security analyst, a compliance consultant, and a spam filter for AI-generated pull requests that look plausible but subtly introduce regressions.
Nobody worried about this for years because the risk felt abstract. It doesn’t feel abstract any more.
The Funding Gap
Corporate dependency on open source far outstrips corporate contribution to it. The asymmetry is well documented and stubbornly persistent. Some organisations are trying to close the gap. HeroDevs, for instance, channels a portion of revenue from its end-of-life support services back into the open-source ecosystem and has committed to a $20 million sustainability fund to support creators and maintainers who follow end-of-life best practices.
That’s one model. It’s not the only model, and it won’t solve the structural problem on its own. But it’s worth examining what happens when commercial support for EOL software creates a revenue stream that feeds back upstream, because the alternative is that upstream slowly starves while downstream keeps consuming.
Here’s a question that nobody in a boardroom wants to answer: if the open-source software your business depends on disappeared tomorrow, what would it cost you to replace it? Now compare that number with what you’re paying the people who maintain it.
The gap between those two figures is your real exposure.
Why Java Is Better Positioned (and Why That’s Not Enough)
Java has structural advantages in this landscape that most other platforms lack. A coordinated quarterly security cadence across multiple vendors. Vendor-neutral distributions through Adoptium. Foundation governance via Eclipse, Apache, and CommonHaus. Strong commercial stewardship from competing vendors who have a vested interest in keeping the platform healthy.
No other major language ecosystem has this layered defence.
That’s not an accident. It exists precisely because Java has always been the platform where the consequences of getting it wrong are measured in regulatory fines and audit failures, not just downtime. The ecosystem evolved the governance it needed because the workloads demanded it.
At IBM, when we produced Java runtimes for our customers, the thing that kept us honest wasn’t best practice; it was the knowledge that banks, insurers, and government agencies were running production workloads on what we shipped. You don’t get sloppy when a securities exchange is on the other end of your release pipeline.
Our hypothetical insurer benefits from all of this, but only at the JDK layer.
The JDK itself has never been more secure or better maintained. But that Tomcat 8.5 sitting inside the fat JAR? That’s above the JDK governance line. The same business logic that justifies Java’s enterprise governance runs atop libraries and servers that lack those protections. The governance layer covers the foundation. It doesn’t cover the building materials.
Java does have one emerging advantage here: integration between EOL support providers and Maven Central is starting to surface secure, drop-in replacements for end-of-life components directly in the toolchain that Java developers already use.
The remediation path for Java is becoming shorter than in ecosystems where EOL dependencies simply rot in place. But you have to know you need the replacement first.
Silence Is the Real Exposure
Back to the claims system. The vulnerability scanner says Tomcat 8.5 is clean. No CVEs. All green. Time for a cup of tea.
Except here’s what most people don’t realise: a CVE records the existence of a vulnerability. It doesn’t demand a patch, and it certainly doesn’t guarantee one. You can have a CVE filed against software that is unmaintained, unpatchable, or abandoned. And more importantly, the absence of a CVE doesn’t mean the absence of a vulnerability. It might just mean that nobody looked, that the maintainers rejected the report because they considered the behaviour to be working as designed, or that, since nobody planned to fix it, nobody saw the need for a CVE at all.
Or (and this is the bit that should keep you up at night) somebody looked, somebody found something, somebody fixed it in their own fork, and nobody told the rest of the ecosystem.
This is the downstream-fix, upstream-blind problem. Vulnerabilities don’t surface neatly at the source and flow down to consumers. More often, they’re found downstream first by a customer running a fuzz test, by a developer who spots a strange edge case during a refactor, by a researcher who wasn’t even looking at Tomcat but triggered a shared code path.
The first person to see a vulnerability is very often someone far from the upstream project.
When a downstream vendor finds and fixes a vulnerability in end-of-life code, they face a genuine dilemma. Upstream has moved on. No patch is coming for that branch. Some vendors stay silent (reasoning that reporting a vulnerability with no available fix just hands attackers a target list). The logic sounds reasonable. It’s also an illusion.
Attackers aren’t waiting for CVE databases. They’re diffing patches, fuzzing old codebases, and targeting EOL systems precisely because they know nobody is watching. A silent vulnerability isn’t safer. It’s more profitable for attackers. The only people kept in the dark are the defenders.
So that clean Tomcat scan on the claims system? It might not mean “no vulnerabilities.” It might mean “no one who found them bothered to tell you.”
SBOMs misrepresent reality. Scanners report no known issues. And the business logic that decides whether claims get paid continues to run on code that the ecosystem has moved past.
The Gap Your Scanner Doesn’t Know It Has
Most software composition analysis tools do one job well: they match your dependency list against known vulnerability databases. What they typically don’t tell you is which of those dependencies are unsupported and can’t be patched — not because there’s a known vulnerability right now, but because there’s nobody on the other end to fix the next one.
That’s the gap. Not “is this component vulnerable today?” but “is anyone still minding the shop?”
The practical question is how to find these ghost dependencies: components that are functionally abandoned but still running in production. What signals indicate a project is dead? A long-quiet commit history is the obvious one, but it’s not sufficient. Projects can be quiet because they’re genuinely stable, or quiet because the maintainer burned out and walked away. You need to look at issue responsiveness, security patch cadence, whether the project has any mechanism for reporting vulnerabilities, and whether it has declared its own end-of-life status.
Most projects don’t declare EOL. They just stop. One day, there’s a commit, and then there isn’t. The issues pile up. The pull requests go unreviewed. The CI pipeline fails and nobody fixes it. The dependency is still in your pom.xml, still passing vulnerability scans, still technically “fine.” But it’s dead. And you’re building on top of it.
Historically, libraries were rarely considered abandoned. They were called “stable.”
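Those signals can be reduced to a crude but useful triage rule: flag anything past a declared end-of-life date, or anything with no upstream activity inside a staleness window. The sketch below is a hypothetical illustration, not a real tool; the dependency metadata is hardcoded (in practice it would come from your SCA tooling, repository APIs, or an EOL dataset), and the 540-day window is an arbitrary assumption. The Tomcat 8.5 end-of-life date of 31 March 2024, though, is real.

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;
import java.util.List;

public class GhostCheck {
    // Hypothetical per-dependency metadata; sourcing it is the hard part,
    // and is exactly what most SCA pipelines don't do today.
    record DepStatus(String artifact, LocalDate lastCommit, LocalDate declaredEol) {}

    // A dependency is a ghost candidate if upstream has declared EOL, or if
    // there has been no commit for longer than the staleness window.
    static boolean isGhost(DepStatus dep, LocalDate today, long staleDays) {
        if (dep.declaredEol() != null && !today.isBefore(dep.declaredEol())) return true;
        return ChronoUnit.DAYS.between(dep.lastCommit(), today) > staleDays;
    }

    public static void main(String[] args) {
        LocalDate today = LocalDate.of(2025, 11, 1); // fixed date for a stable demo
        List<DepStatus> deps = List.of(
            // Tomcat 8.5 reached end of life on 31 March 2024.
            new DepStatus("tomcat-embed-core-8.5.x", LocalDate.of(2024, 3, 1), LocalDate.of(2024, 3, 31)),
            // Quiet for years, no EOL declared: could be stable, could be dead.
            new DepStatus("some-quiet-util", LocalDate.of(2022, 6, 1), null),
            new DepStatus("actively-maintained-lib", LocalDate.of(2025, 10, 20), null));
        for (DepStatus d : deps) {
            System.out.println(d.artifact() + " ghost=" + isGhost(d, today, 540));
        }
    }
}
```

The second entry is the interesting one: the rule flags it, but only a human looking at issue responsiveness and patch cadence can tell “stable” from “abandoned.”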
While we’re here, it’s worth mentioning that HeroDevs maintains a free EOL dataset scanner that identifies end-of-life packages in your codebase within minutes, surfacing unsupported components that your existing security tooling may miss. It’s not a replacement for your SCA pipeline; it fills the gap your SCA pipeline doesn’t know it has. The point isn’t to generate another dashboard of red and amber lights. It’s to answer the question your vulnerability scanner can’t: “Am I depending on code that nobody will ever fix?”
If the answer is yes (and in most non-trivial Java applications, it often is), then you need a plan.
“Hope nobody finds a vulnerability” is not a plan.
Back to the Claims System
So, our hypothetical insurer and its claims processing system. That Tomcat 8.5 inside the fat JAR.
They have options. A panicked migration at two in the morning, after an auditor flags the EOL dependency, is one of them. It’s also the expensive one, and the one most likely to break something in production. The smarter path is to buy yourself a rational migration timeline: use a commercially supported drop-in replacement that maintains the same APIs and behaviour while providing ongoing security and compliance patches, then migrate properly, on your own schedule, with commercial cover in the meantime. That’s the model HeroDevs’ Never-Ending Support was built for. Not as a permanent solution, but as a bridge between “we can’t move yet” and “we’ve moved safely.”
But whatever the migration strategy, one principle matters more than any commercial arrangement.
When downstream vendors find and fix vulnerabilities in end-of-life code, they have a responsibility to report upstream and ensure CVEs are filed. Even when no upstream fix is coming. That transparency is what keeps the whole ecosystem honest. Silence is not caution. It’s exposure. Disclosure doesn’t create risk. It distributes risk knowledge to the people who need it.
The insurer’s claims system is running today. Somewhere inside it, embedded in a JAR last updated years ago, is code that the rest of the world has moved past. The scanner says it’s clean. The SBOM says it’s accounted for. The version number says the last update was safe.
None of those things means what you think they mean.

