Frequently Asked Questions (FAQ):

How do these rankings help the ranked organizations?
Several ways, including:
  1. By showing ranked organizations and their customers how they compare to their peers.
  2. By letting ranked organizations see how their rankings change when they change their security measures.
  3. By increasing awareness of security problems for ranked organizations, their customers, and the public, thus contributing to a consensus on fixing security problems.
In addition, we can do longitudinal studies of individual organizations and of peer or other groups; we have data going back for years.

How can rankings that make organizations look bad help?
Reputation is a spectrum from fame to shame. Companies that rank well may want to brag about that, and the resulting fame can help attract and retain customers. Companies that rank poorly have an incentive to fix their problems and improve their rankings. Disclosure is what enables reputation, so ranked companies may have an incentive to promote further disclosure, enabling further reputational rankings.

Will these rankings help fix all security problems?
Our approach is a proxy, not a panacea. We use outbound spam as a symptom of underlying security problems, not as a magic medicine to cure them all. The work of actually stopping spam or fixing underlying security problems still requires all the same kinds of infosec techniques, policies, and procedures it always has. These reputational rankings provide incentives to apply such infosec, and information to assist in doing so.

Why do we not find the IP addresses mentioned in the email in the CBL or PSBL blocklists?
These advisories aggregate numeric data from the previous month. Any specific IP address that a blocklist saw emitting spam during that month may no longer be listed. You can see the variation during that month in the charts on the organizational analysis web page for each organization. We use that data to construct a composite Borda count, so the ranked organizations can see how well they are doing compared to their peers on one symptom of Internet security.

Can you provide us with some logs including date and time stamp?
No, we cannot; sorry. Please contact the source blocklists for that.

Can you send us the email headers for each spam message you saw?
These organizational analyses use aggregated numeric data from the previous month to serve a different, more proactive and preventative purpose than traditional reactive incident reports. The main point of our analyses is the cross-organizational peer comparisons enabled by constructing the Borda count from the raw data. Such comparisons have not previously been possible for information security on an ongoing basis. Using spam as a proxy for security enables such comparisons, which can provide positive publicity or fame for organizations that rank well. We do not collect the raw data; we get it from the CBL and PSBL anti-spam blocklists, which do not usually release email headers.

Do these rankings reveal information that will help the bad guys?
Our detailed data comes from information the subject organizations are already releasing to the outside world, namely outbound spam. Therefore we are not revealing the vulnerabilities that let the miscreants in, nor are we revealing any other corporate secrets.

If an organization doesn't want outbound spam information disclosed, it can improve its infosec and thus reduce the information it releases, and everyone will win — except the bad guys. Further, the bad guys already cooperate through their extensive black market (of spammers renting from bot herders using exploits bought from malware writers, facilitated by their own use of numerous forms of online interaction), so the miscreants already know more than the good guys in many ways. These rankings let the ranked organizations cooperate through competition or collaboration to improve their security and defeat the bad guys.

How are these rankings produced?
We combine anti-spam blocklist information with Internet routing information from several sources, including CBL and Team Cymru, to map IP addresses to netblocks to Autonomous Systems (ASes), and we use further information from LexisNexis and other sources to map ASes to organizations and to categorize organizations by industry. Then we build peer rankings of organizations from two metrics (volume and host), and we use those constituent rankings to build a Borda count for overall composite rankings. This data is quite voluminous; we collect it daily, and we have archives of it going back for years for control purposes. We mine it into different presentation formats and interaction methods, similar to sports scores. These presentations are intended to engage treated organizations, their competitors, their customers, their trade groups, stock exchanges, governments, and the public.
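As a minimal sketch of the aggregation step, assuming toy inputs (all IP addresses, netblocks, AS numbers, and organization names below are hypothetical; the real inputs come from the CBL and PSBL blocklists and from routing data such as Team Cymru's), the mapping from spamming IPs to per-organization volume and host metrics might look like this:

```python
import ipaddress
from collections import defaultdict

# Hypothetical blocklist hits: (IP address, spam messages seen).
blocklist_hits = [
    ("198.51.100.7", 120),
    ("198.51.100.9", 30),
    ("203.0.113.5", 45),
]

# Hypothetical routing data: netblock -> AS number, AS number -> organization.
netblock_to_as = {
    "198.51.100.0/24": 64500,
    "203.0.113.0/24": 64501,
}
as_to_org = {64500: "ExampleNet", 64501: "DemoTelecom"}

def ip_to_org(ip):
    """Map an IP address to an organization via netblock and AS number."""
    addr = ipaddress.ip_address(ip)
    for block, asn in netblock_to_as.items():
        if addr in ipaddress.ip_network(block):
            return as_to_org[asn]
    return None

# Two metrics per organization: total spam volume, and distinct spamming hosts.
volume = defaultdict(int)
hosts = defaultdict(set)
for ip, count in blocklist_hits:
    org = ip_to_org(ip)
    if org is not None:
        volume[org] += count
        hosts[org].add(ip)
```

Each organization then gets two constituent rankings per data source, one by total volume and one by distinct host count, which feed the Borda count described below.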

Where does the Borda Count come from?
The Borda count is a voting system that combines multiple orders of preference into a single metric. In our case, we have four rankings from four combinations of data sources and metrics: CBL Volume, CBL Host, PSBL Volume, and PSBL Host.
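As a minimal sketch of the combination step (the organization names and orderings below are hypothetical; the real constituent rankings come from the blocklist data), a Borda count over the four rankings might look like this:

```python
def borda_count(rankings):
    """Combine several orders of preference into one composite ranking.

    Each ranking is a list of items, best first. An item in position i
    of an n-item ranking earns n - 1 - i points; points are summed
    across rankings, and the composite orders items by total points,
    highest first.
    """
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for position, item in enumerate(ranking):
            scores[item] = scores.get(item, 0) + (n - 1 - position)
    # Sort by descending total; break ties alphabetically for stability.
    return sorted(scores, key=lambda item: (-scores[item], item))

# Four hypothetical constituent rankings (best first), one per
# data-source/metric combination:
cbl_volume  = ["OrgA", "OrgB", "OrgC"]
cbl_host    = ["OrgA", "OrgC", "OrgB"]
psbl_volume = ["OrgB", "OrgA", "OrgC"]
psbl_host   = ["OrgA", "OrgB", "OrgC"]

composite = borda_count([cbl_volume, cbl_host, psbl_volume, psbl_host])
# OrgA scores 7 points, OrgB 4, OrgC 1, so the composite is
# ["OrgA", "OrgB", "OrgC"].
```

An organization that places well across all four constituent rankings thus places well in the composite, while a single outlying ranking cannot dominate the result.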

How can I be ranked if your report shows no spam for my organization?
Your organization got the best possible ranking because we observed no spam from it for the relevant month. Compared to organizations that sent spam, yours has gained a positive reputation, or fame. For what your rank number means in that case, see Zero score.

How do these rankings differ from college rankings?
This is live time-series data, not surveys. Most college or consumer rankings are based on surveys, but our approach uses much more voluminous and much more independent third-party data. Though a survey can provide more depth of context, automated time-series data can provide far more detailed and comprehensive information on certain aspects of an organization. This difference is similar to that between sending a survey to corporate executives vs. collecting automated time-series network performance data on the same company. These rankings do not deal with network performance in the traditional sense (latency, throughput, traffic, etc.). Rather these rankings indicate security performance through the proxy of outbound spam.

How are these rankings different from previous efforts?
We appreciate and encourage most non-intrusive and non-coercive security disclosure efforts, such as NVD, DShield, Verizon's DBIR, SEC breach disclosure guidance, and other spam rankings such as those from CBL or Cisco. However, there is an important distinction between passive and active disclosure. Most disclosure of security information to date has been relatively passive, often incomplete, obscure, or infrequent, and often in the form of records and requirements. Useful disclosure needs to be more active: comprehensive in coverage, with regular, frequent, and ongoing reporting, and presented to the whole world in forms that are readily understandable.

How are these rankings different from SEC breach disclosure guidance?
The 2011 U.S. Securities and Exchange Commission (SEC) guidance was an excellent move, but it is merely advisory and is likely to be spottily implemented. We use copious anti-spam blocklist data that is already available and covers the entire Internet, and we convert that data into information presentations, like sports scores, that the whole world can see and understand.

After SEC-required breach disclosure, a company could show customers improved cloud.SpamRankings.net reputational rankings as evidence of improved security, and thus as incentives to remain customers. Only true fans dig into baseball fielding statistics such as putouts, yet everyone understands the scores of a game, and most people understand league rankings. A motivated investor (unlike an average customer) may dig into the SEC-required breach reports, but comparisons among companies will require a lot of work by the investor. If those breaches were turned into reputational rankings, a breached company could use those and other such rankings as third-party evidence of their improved league standing, as it were. Our task is to actively present rankings with the clarity of sports scores while providing paths to statistics and other aspects of interest to the numerous specialist markets.

How are these rankings different from the National Vulnerability Database (NVD)?
NVD is an excellent initiative, but most people don't know about it, those who do need expertise in mining it, and most breaches and their vulnerabilities probably are not in it. These rankings cover every organization in the U.S. (although we do not publish on all of them), and we publish much more frequently and with much more effort to reach the public in accessible presentations.

How are these rankings different from DShield by the SANS Internet Storm Center?
DShield is an excellent active collaborative firewall intrusion log database, yet it requires specialized knowledge to search and interpret. These rankings cover every organization in the U.S. (although we do not publish on all of them), and we publish much more frequently and with much more effort to reach the public in accessible presentations.

How are these rankings different from other spam rankings such as those from CBL, Cisco, or spamcop.net?
While we encourage all public spam rankings, most others are hidden several levels deep in websites for other purposes. Many of them are in highly non-intuitive formats (such as the rank number as column number 7 of 14). And none of them are actively promoted. Cloud.SpamRankings.net is a publicly visible top-level set of rankings actively promoted. See our table of comparisons with other rankings.

How are these rankings different from Verizon's DBIR?
Verizon's Data Breach Investigations Report comes out only annually and is targeted toward information security professionals, to the exclusion of corporate executives, much less the public. Sharing of security information among specialists is productive, yet more can be done. Verizon's new VERIS Community Database is one such step in a good direction.

Are there public policy implications of these reputational rankings?
Our project involves statistical experiments on changes in these rankings over time to determine whether they cause reduced spam volume and thus, by proxy, probably improve underlying infosec in the ranked organizations. Such a result would motivate policy recommendations for further infosec disclosure, enabling further reputational rankings and improved infosec.

How should further disclosures be done?
Any requirements for further disclosure should attempt to go beyond passive databases and occasional press releases. Rather, disclosure should provide frequent, regular, high-density, worldwide, and ongoing information, published to the public through the web in formats that are easily usable for constructing reputational rankings, which in turn enable further active disclosure through presentation, delivery, and interaction with the ranked organizations for improved infosec.

Can disclosure promote cooperation?
Such active disclosure, insofar as it provides incentives for organizational change in infosec, may also promote cooperation in promoting further active disclosure, through best practices, standards, and more widespread use of reputational rankings. This approach could thus avoid some pitfalls of legislative approaches, such as the failure of mark-to-market regulations for loans because of lobbying by the affected organizations. If the experiments determine that reputational incentives work to transform organizational infosec, the marketing methods used in the experiments may also provide incentives for the treated organizations (and their customers) to lobby for further policy requirements for further active disclosure.

What organizations should require disclosure?
The various policy-setting organizations may want to examine any positive results of these experiments and consider requirements in various forms, ranging from voluntary regulatory standards such as Basel III, to membership rules by stock markets, to legislation by governments.

What are the potential benefits of disclosure?
The potential benefits of reputational incentives for cooperation range from a safer Internet for better commerce to better national and global security.