In the wake of the CrowdStrike outage, here’s a workable four-step patching strategy

ANALYSIS: The recent CrowdStrike incident, in which an auto-update took down airports and medical facilities around the world, highlights one of the biggest risks companies face today: relying on vendor-enabled auto-updates.

Many organizations use application and operating-system auto-updates to keep their software current and secure. While it's vital that all software gets patched, both to protect the organization from threats and to comply with security standards and regulatory requirements, it's extremely dangerous to rely on auto-updates alone to perform these tasks.

We’ve all now seen the results of an auto-update gone bad.

While I have seen cybersecurity leaders recommend turning auto-updates on so that patches and new features land more quickly, doing so introduces significant risk: an auto-delivered update does not get tested as thoroughly as a major software release. That's one reason I do not recommend auto-updates. With auto-updates enabled, every deployed patch carries a significantly higher risk of disrupting IT operations.

I have spent 30 years helping global organizations patch and secure their endpoints, and this recent global IT outage involving CrowdStrike and Microsoft machines highlights how fundamental, and yet how difficult, patching has become for many organizations. Our team helps enterprises refine their patch and update strategy to ensure their endpoints stay secure while minimizing operational risk. Here are some of the best practices we've learned along the way.

  • Stage 1 – Alpha Group: Start by creating a test lab. The lab should mimic the production environment and contain every operating system version, server, image, and configuration used in production. Next, deploy patches and updates to the test lab and monitor the results for 24 hours. It's crucial that this initial test also includes a reboot of every test lab device; many updates do not fully install until the device has been rebooted, so completing the reboot cycle is part of the test.
  • Stage 2 – Beta Group: Select a few devices in each department for initial production testing, choosing the most diverse mix of operating systems and images available. Deploy patches to this group and monitor the results. Remember to coordinate this deployment with the company's IT help desk; the desk needs to know about these updates so the team gets rapid feedback on any problems.
  • Stage 3 – Phased Production Rollout: Once the Alpha and Beta groups are running cleanly, it's time for production deployment. Roll out in groups of devices rather than all at once. For example, deploy to 20% of each department on the first day, 30% on the second day, 30% on the third day, and 20% on the final day (see the wave-planning sketch after this list). Deploying in phases across departments means that if errors occur, the company doesn't take out an entire department.
  • Stage 4 – Reporting: After all endpoints have received the update or patch, it's vital to produce reports showing that the organization's endpoints have been fully updated and are operational (see the reporting sketch after this list). Should a breach occur in the future, these reports offer evidence that the organization was fully patched in accordance with company policies, security standards, and regulatory requirements.
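
To make the Stage 3 rollout concrete, here is a minimal sketch in Python of how deployment waves could be planned from a device inventory. The 20/30/30/20 split mirrors the example above; the inventory data, department names, and function names are hypothetical illustrations, not any particular product's API.

```python
# A sketch of Stage 3 wave planning, assuming a simple in-memory inventory.
# Device names, departments, and the wave logic are hypothetical placeholders.
from collections import defaultdict

# Hypothetical inventory of (device name, department) pairs.
inventory = [(f"ws-{dept[:3]}-{i:02d}", dept)
             for dept in ("finance", "sales", "engineering")
             for i in range(1, 11)]

# The rollout schedule from Stage 3: 20% / 30% / 30% / 20% of each department.
wave_shares = [0.20, 0.30, 0.30, 0.20]

def build_waves(devices, shares):
    """Split each department's devices into deployment waves by percentage."""
    by_dept = defaultdict(list)
    for name, dept in devices:
        by_dept[dept].append(name)

    waves = [[] for _ in shares]
    for names in by_dept.values():
        start = 0
        for wave_idx, share in enumerate(shares):
            if wave_idx == len(shares) - 1:
                # The last wave takes the remainder so no device is missed.
                count = len(names) - start
            else:
                count = round(len(names) * share)
            waves[wave_idx].extend(names[start:start + count])
            start += count
    return waves

for day, wave in enumerate(build_waves(inventory, wave_shares), start=1):
    print(f"Day {day}: deploy to {len(wave)} devices, e.g. {wave[:3]}")
```

Keeping the split per department, rather than per site or per platform, is what limits the blast radius: a bad patch on day one touches only a fifth of any given team.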
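For Stage 4, a similar sketch shows how per-endpoint patch status could be rolled up into a simple compliance report. The status fields and CSV layout are assumptions for illustration, not the output format of any specific patch-management tool.

```python
# A sketch of Stage 4 reporting, assuming patch status has already been
# collected per endpoint. Field names and values here are illustrative only.
import csv
from datetime import date

# Hypothetical per-endpoint status pulled from an inventory system.
endpoint_status = [
    {"device": "ws-fin-01",  "patch": "2024-07-CU", "installed": True,  "rebooted": True},
    {"device": "ws-sal-02",  "patch": "2024-07-CU", "installed": True,  "rebooted": False},
    {"device": "srv-eng-01", "patch": "2024-07-CU", "installed": False, "rebooted": False},
]

def write_compliance_report(rows, path):
    """Write a CSV evidencing which endpoints are fully patched and rebooted."""
    fields = ["device", "patch", "installed", "rebooted", "compliant"]
    with open(path, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=fields)
        writer.writeheader()
        for row in rows:
            # Compliant only if the patch installed and the required reboot completed.
            writer.writerow({**row, "compliant": row["installed"] and row["rebooted"]})

write_compliance_report(endpoint_status, f"patch-report-{date.today()}.csv")
compliant = sum(r["installed"] and r["rebooted"] for r in endpoint_status)
print(f"{compliant}/{len(endpoint_status)} endpoints fully compliant")
```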

This process is not completed in a day. It requires planning, coordination across the enterprise, and continuous monitoring.

Along with following these four steps to implement a thorough vetting and patching strategy, organizations can reduce their risk when the next outage hits by adopting cloud-native management technologies that let them run these complex processes from anywhere, without requiring physical access to the machines themselves.

Time to get to work.

Ashley Leonard, founder and CEO, Syxsense

SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.
