School of Computer and Information Science, Edith Cowan University, Perth, Western Australia
Each year the latest information security surveys are released to the computing and business communities. Their findings and methodologies are often criticised by the information security community, professional bodies and others in the profession. This paper examines the viewpoints of both the producers and the critics of the surveys. The criticisms cover issues such as the methodologies, the response rates, the experience of the respondents, the design of the questions and the interpretation of the results. The paper discusses the validity of these criticisms, the impact of the surveys and their value to business and government, and compares the methodologies of some of the largest local and international players in the area. It then considers the issues arising from flawed methodologies, inaccurate information and poor processes, including the perceived lack of integrity and accuracy of the measurements and methodologies. Despite the strong criticism, a middle ground emerges. Data input by the participants, whether accurate or not, may be highly subjective and influenced by their environment and business profile. Furthermore, security at a business level may be extremely complex: governance principles dictate that the organisation's profile, management ideologies and core business values must be accounted for and balanced, even for IT. The paper also considers how the interpretation of the results may be influenced by current and future products and by the vendors of those products. Finally, the paper takes a closer look at the use of the surveys in a business context and attempts to show that, if used constructively, these surveys can be powerful metric tools for driving information security strategies in spite of their perceived deficiencies.