Can the federal government jump-start the market for secure software?
It thinks that it can, and that it must do so. It said as much in May, when the administration of President Joe Biden released Executive Order 14028 on improving the nation’s cybersecurity. “[W]e’re going to use the power of federal procurement to jump-start this market because everything we buy has to be built securely,” boasted Anne Neuberger, deputy national security advisor for cybersecurity and emerging technology, at a press briefing when the EO was announced.
If the feds do jump-start the secure software market, a key driver will be the definition of critical software. That will determine which software is in scope for the security recommendations flowing from the executive order.
The EO required the secretary of Commerce to publish a definition for critical software within 45 days. The National Institute of Standards and Technology (NIST), in consultation with the National Security Agency (NSA), the Cybersecurity and Infrastructure Security Agency (CISA), and other federal departments, completed this task on time in late June.
Here’s what security teams need to know.
Critical software defined
According to NIST, EO critical software is any software that has, or has direct software dependencies upon, one or more components with at least one of these attributes:
- Is designed to run with elevated privilege or manage privileges
- Has direct or privileged access to networking or computing resources
- Is designed to control access to data or operational technology
- Performs a function critical to trust
- Operates outside of normal trust boundaries with privileged access
The new definition of critical software covers a lot of behind-the-scenes compute tools that perform functions dealing with user trust and operational monitoring and are designed to be managed by users with an elevated privilege level.
The definition applies to “software of all forms,” including cloud-based software. However, NIST is recommending that initially US agencies charged with implementing the EO focus on “standalone, on-premises software that has security-critical functions or poses similar significant potential for harm if compromised.”
New security measures
In addition to the definition of critical software required by the EO, NIST published “Security Measures for ‘EO-Critical Software’” and “Recommended Minimum Standards for Vendor or Developer Verification (Testing) of Software,” both aimed at creating guidance for securing software used by US federal agencies.
The security measures publication largely entails a set of principles for zero-trust security. Of the objectives laid out in the publication, the most important ones are:
- Protect EO-critical software and EO-critical software platforms (the platforms on which EO-critical software runs, such as endpoints, servers, and cloud resources) from unauthorized access and usage.
- Protect the confidentiality, integrity, and availability of data used by EO-critical software and EO-critical software platforms. (Audits will most likely align with NIST 800-171 and the Cybersecurity Maturity Model Certification (CMMC) to enhance the protection of controlled unclassified information (CUI) within the supply chain. A separate executive order issued in June also addressed privacy data for IoT devices and connected applications.)
- Quickly detect, respond to, and recover from threats and incidents involving EO-critical software and EO-critical software platforms.
New security standards
While the “Security Measures” publication focuses on running software, the second document NIST published focuses on developing it and seeks to provide “minimum standards recommended for verification by software vendors or developers.” Some of the techniques for verifying code by developers called out in the document include:
- Threat modeling to help identify key or potentially overlooked testing targets
- Automated testing so tests can be repeated often, such as during every commit or before an issue is retired
- Static code analysis for finding bugs in code and identifying hard-coded passwords and private encryption keys buried in code
- Dynamic analysis for identifying runtime bugs and creating “black box” tests that can address functional specifications or requirements, negative tests for flagging invalid inputs and behavior, denial-of-service and overload attempts, input boundary analysis, and input combinations. Dynamic testing also encompasses code-based testing, test cases for catching previously discovered bugs, fuzz testing, and web app scanning.
- Third-party and open source code analysis to make sure libraries, services, and packages used by applications are as secure as those produced locally
- Bug fixes as soon as possible after discovery, and process improvements to prevent such bugs in the future, or to at least catch them earlier in the development process
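To make one of the techniques above concrete, here is a minimal sketch of the kind of check a static analyzer performs when hunting for hard-coded passwords and embedded private keys. The patterns, the `scan_source` function, and the sample input are all illustrative assumptions, not part of NIST's guidance; real static analysis tools use far richer rule sets and data-flow analysis.

```python
import re

# Illustrative patterns for likely hard-coded credentials. A production
# static analyzer would use many more rules plus data-flow analysis.
SECRET_PATTERNS = [
    re.compile(r'(?i)\b(password|passwd|pwd)\s*=\s*["\'][^"\']+["\']'),
    re.compile(r'-----BEGIN (?:RSA |EC )?PRIVATE KEY-----'),
    re.compile(r'(?i)\b(api[_-]?key|secret)\s*=\s*["\'][^"\']{8,}["\']'),
]

def scan_source(text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that match a secret pattern."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            findings.append((lineno, line.strip()))
    return findings

# Hypothetical source file with two embedded secrets.
sample = 'db_user = "app"\npassword = "hunter2"\napi_key = "abcd1234efgh"\n'
for lineno, line in scan_source(sample):
    print(f"line {lineno}: {line}")
```

Run against the sample input, the scanner flags lines 2 and 3. The point of the sketch is the workflow NIST's verification standard describes: an automated, repeatable check that can run on every commit rather than relying on manual review.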
Who needs to act
While software vendors that sell to the government are certainly going to be affected, so will commercial entities that aren’t software vendors. That includes those in the defense industrial base (DIB), since they have developers who write custom software for the government.
Even companies that simply provide services to federal agencies will face data and user-access requirements. That includes the 30% of US companies with more than $50 million in revenue that do business with the federal government.
Larger states such as California, Texas, Florida, and New York are already drafting legislation or executive orders of their own requiring their agencies, and any companies that do business with the state, to follow similar guidance.
Now that NIST has published the definition of critical software and recommended security measures, the next step is for the government to review the Federal Acquisition Regulation (FAR) to see whether contract language should be amended. The EO provides one year from its issuance for this process to be completed.
Once fully enacted, based on amended FAR language, the government will essentially prevent itself from acquiring and using any software that meets the definition of “critical” but cannot satisfy these security measures.
Jump-start, or doomed effort?
Still, the question remains: Will these changes jump-start the market for secure software?
Some critics argue that the federal government is overreaching with the new standards it’s imposing on the critical applications it purchases. They argue that some vendors will be discouraged from bidding for government contracts because they can’t afford to comply with the new security requirements. That will shrink the pool of contractors available to the government and curtail its access to innovation.
This also isn’t the first attempt by the government to move the needle on securing the software and products it acquires. A federal directive issued during the Reagan administration said government computer systems storing sensitive data would have to be C2-equivalent by 1992, and a commonly heard rallying cry at the time became “C2 by ’92.” But that never happened. The government never purchased enough Orange Book-evaluated products to justify the money the IT industry poured into having its products evaluated.
Will history repeat itself? Is the cost of continuing the status quo truly unacceptable this time? I certainly hope so—only time will tell. But as Neuberger said at a forum held by the Center for Strategic & International Studies in May:
“[W]e need to change our mindset around software and hardware to demand security in those products. For too often it’s been OK to sell software and hardware products and sell security software separately, or, frankly, make security the configuration responsibility of the user. And I think, given the criticality of technology to our lives today, we as consumers have to begin—and when I say consumers, I mean individuals, companies, and governments—need to start demanding that we can have more confidence in the technology our lives rely on.”