Due to ongoing geopolitical events, particularly the Russia-Ukraine conflict, some individuals have begun to “poison” major open-source projects to try to cause damage. Some developers (cyber-activists) chose to introduce code specifically targeting computers in Russia and Belarus; when run on a target system, this code would wipe the machine. We have seen this type of supply chain attack cause devastating results in private codebases in the recent past. Although the idea of “poisoned” open-source projects is not new per se, it is coming back with a vengeance.
When code is contributed to an open-source project, others can examine it, repurpose it, and even use it for nefarious purposes. We have seen this happen when cybersecurity toolsets were leaked and repurposed for attacks. What is designed to be a strength of the open-source model (open code inspection and reusability) is exploited by threat actors. In general, programmers look to bring code to production as quickly as possible, and open-source libraries are faster to adopt and, in most cases, more reliable than developing your own implementation of, say, a network socket.
The key point is that while open-source tools and libraries can be helpful, companies must recognize that they can also be a source of security vulnerabilities, and proper care should be taken when using them. Your security team should be able to explain what protections are in place when open-source projects are used in the environment.
An important question to ask is: how are projects being reviewed for security vulnerabilities before they are introduced to your applications and organization? If you are not performing dynamic and static code analysis, software composition analysis, and compile-time security evaluations in your environment, your organization is left open to these kinds of vulnerabilities.
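One simple, complementary protection is verifying that a downloaded dependency matches a pinned checksum before it is used, so a tampered artifact is rejected. The sketch below is illustrative only (the artifact bytes and the pinned digest are hypothetical, standing in for a real package file and a lockfile entry); real tooling such as a package manager's hash-checking mode automates this.

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Return True only if the artifact's SHA-256 digest matches the pinned value."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Hypothetical example: these bytes stand in for a downloaded package file.
artifact = b"example package contents"
# In practice, the pinned digest comes from a lockfile or a trusted release page.
pinned = hashlib.sha256(artifact).hexdigest()

print(verify_artifact(artifact, pinned))         # True: artifact is untampered
print(verify_artifact(artifact + b"!", pinned))  # False: artifact was modified
```

Checksum pinning does not catch a malicious release that was published legitimately (as in the wiper incidents above), which is why it should be paired with the code-analysis reviews described here.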
The experts at Nexum are here to discuss your current approach and have several solutions in mind to help address these risks.