Summary & Key Takeaways
You may have heard about the Salt configuration management vulnerabilities (CVE-2020-11651 and CVE-2020-11652), which impacted several large-scale organizations. The flaw lies in the communication protocol of the Salt master, and more specifically in its authentication layer: sanity checks could be bypassed, exposing sensitive internal functions that allowed the theft of communication keys and, from there, the execution of unauthenticated commands.
We were indeed hit by an attack exploiting this very vulnerability during the night of May 3rd, 2020.
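For readers who run Salt themselves: the fix for these CVEs shipped in the 2019.2.4 and 3000.2 releases, so the first thing to check is whether your masters and minions are on an older build. The snippet below is only an illustrative version check (it is not part of our tooling, and it is no substitute for the official advisory):

    # Illustrative only: flag Salt versions that predate the fixes for
    # CVE-2020-11651 / CVE-2020-11652 (2019.2.4 and 3000.2 per the advisory).
    PATCHED = {"2019.2": (2019, 2, 4), "3000": (3000, 2)}

    def is_vulnerable(version: str) -> bool:
        """Return True if this release line predates the patched builds."""
        parts = tuple(int(p) for p in version.split("."))
        for prefix, fixed in PATCHED.items():
            if version == prefix or version.startswith(prefix + "."):
                padded = parts + (0,) * max(0, len(fixed) - len(parts))
                return padded < fixed
        return True  # unknown release line: review it manually

    if __name__ == "__main__":
        for v in ("2019.2.3", "2019.2.4", "3000", "3000.1", "3000.2"):
            print(v, "-> vulnerable" if is_vulnerable(v) else "-> patched")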
We at Algolia have always been candid and transparent about problems. We are publishing this post-mortem to provide our users with full and detailed information about the incident, and to improve ourselves in the long run.
It all started on Sunday, May 3, at 3:12am Paris time, when the phone of one of our core engineers suddenly rang. Multiple alerts were firing at the same time because the API was no longer available for a number of customers. The scale of the incident was large enough to justify waking up our infrastructure team to assist, and two additional engineers quickly joined to help investigate the issue in the middle of the night.
Servers kept firing alerts and shutting down one by one without a clear cause, and remote access to them was impossible. A large-scale network outage was suspected at first, and one of our providers, OVH, quickly assisted in the investigation and ruled out this hypothesis. At the same time, we found suspicious commands while auditing our logs.
We were quickly able to determine that our configuration manager had been the victim of an attack and was propagating malicious commands to a number of servers in our European clusters. Part of our infrastructure was now running code that was not ours.
Sometimes overlooked, configuration management is a critical part of an infrastructure, especially in a large cloud. Being able to deploy packages, set up services, and maintain them on thousands of servers is a must-have. But it can also become an Achilles’ heel: it relies on a central server with privileged access to a large number of machines, so a single mistake can be amplified to painful extents.
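To make that blast radius concrete, here is a small illustration using Salt's documented Python client (an example, not an excerpt from our own systems). A single call from the master fans out to every connected minion, which is exactly the reach an attacker gains by compromising the master or bypassing its authentication:

    # Illustration using Salt's documented Python API (salt.client.LocalClient),
    # run on the master. One call reaches every connected minion.
    import salt.client

    local = salt.client.LocalClient()

    # Target all minions ('*') and run an arbitrary shell command on each.
    result = local.cmd("*", "cmd.run", ["uptime"])

    for minion_id, output in result.items():
        print(minion_id, "->", output)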
More than five hundred servers were impacted, most of them temporarily losing indexing service, but some of them also losing search capabilities. Thanks to the way that we designed our architecture, this didn’t significantly impact our customers as additional servers stayed healthy and took over the search traffic.
While recovering the servers one by one, our main concern was to accurately evaluate the exact scale of the attack. The first question is always: was any data breached? Analyzing the payloads executed by the malware, we concluded that the only goal of the attack was to mine crypto-currencies, not to collect, alter, destroy, or damage data. We could have been far less fortunate, and this is a lesson we are not going to forget any time soon.
In addition, a portion of our monitoring network was overloaded by the detected failures, yet the status page showed everything as green. This was confusing and misleading. We are working on fixing it, and we will retroactively update the status indicators to reflect the issues we detected.
Here is the downtime a number of our users suffered, in terms of indexing, but also sometimes on the search service:
We started by shutting down the configuration manager involved in the incident across all of our infrastructure, keeping its files for later forensic analysis. Then we teamed up with our different providers, rebooted all of the impacted servers one by one, and investigated their state. We identified that two pieces of malware had been injected: one to mine crypto-currencies and another acting as a backdoor server. We killed the malicious processes, restored files to their original state, and then built a plan to reinstall all of the impacted servers one by one.
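As a rough illustration of what that per-server triage can look like (a hypothetical sketch, not our actual cleanup script), a first pass is to list processes that burn CPU but run from binaries outside the paths your packages install to, and hand that list to a human before anything gets killed:

    # Hypothetical triage sketch: flag high-CPU processes whose binaries live
    # outside the expected install paths. TRUSTED_PREFIXES is an assumption
    # and would differ on every fleet.
    import time
    import psutil

    TRUSTED_PREFIXES = ("/usr/bin", "/usr/sbin", "/usr/lib")

    def suspicious_processes(cpu_threshold=50.0):
        procs = list(psutil.process_iter(["pid", "name", "exe"]))
        for p in procs:                  # first pass primes the CPU counters
            try:
                p.cpu_percent(interval=None)
            except psutil.Error:
                pass
        time.sleep(1.0)                  # measure CPU usage over one second
        for p in procs:
            try:
                cpu = p.cpu_percent(interval=None)
                exe = p.info.get("exe") or ""
                if cpu >= cpu_threshold and not exe.startswith(TRUSTED_PREFIXES):
                    yield p.info["pid"], p.info["name"], exe, cpu
            except psutil.Error:
                continue

    for pid, name, exe, cpu in suspicious_processes():
        print(f"review: pid={pid} name={name!r} exe={exe!r} cpu={cpu:.0f}%")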
As part of our standard protocol, public announcements were regularly posted on https://status.algolia.com/, and we kept customers who reached out to our support team informed of the situation.
Seven hours after the first alert was triggered, we rebooted the last server after removing all of the injected malware, and started working on the configuration management side. A dozen people were involved in the recovery, working tirelessly to rebuild the damaged services.
The impacted component was an older tool that had been running fine, and we were already planning to rework it over the following two quarters. We have now put in place a temporary fix that secures the environment, and we are actively working on reworking the system sooner than planned.
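Without going into the details of that temporary fix, the mitigation recommended publicly at the time, besides upgrading, was to stop exposing the Salt master's ZeroMQ ports (4505 and 4506) to untrusted networks. The sketch below is only a hypothetical reachability check with a placeholder hostname, not a description of our setup:

    # Not our actual fix: a hedged check, run from an untrusted network, that
    # the Salt master's ports are no longer reachable. Hostname is a placeholder.
    import socket

    SALT_MASTER = "salt-master.example.internal"   # hypothetical hostname
    SALT_PORTS = (4505, 4506)                      # Salt publish and request ports

    for port in SALT_PORTS:
        try:
            with socket.create_connection((SALT_MASTER, port), timeout=3):
                print(f"port {port} is reachable: the master is still exposed")
        except OSError:
            print(f"port {port} is filtered or closed, as expected")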
The security and uptime of our services are two of our core priorities, so there is no acceptable excuse for this outage. You can count on us to learn from it and to improve our setup and infrastructure so this never happens again.
____________
https://nvd.nist.gov/vuln/detail/CVE-2020-11651
https://nvd.nist.gov/vuln/detail/CVE-2020-11652
https://thehackernews.com/2020/05/saltstack-rce-exploit.html
Julien Lemoine
Co-founder & former CTO at Algolia