Questions to Ask When Choosing a Survey

There are many school climate surveys available today. Some of these surveys are well designed, but some are not--including some that are highly recommended and some that are available from reputable educational sources. A poorly designed survey wastes time and resources, can mislead your anti-bullying efforts, and could even cause complications by producing apparent evidence that your efforts to educate students about bullying have made your school climate worse. This page suggests a number of questions that you might want to ask if you have found a survey that you are considering using in your school or district. A free survey might not be free if it is expensive in other ways. Click on each question/sub-question block to see more information.

How long has this survey been in use?
  • Does the survey have a track record, showing successful identification of problem areas?
  • Has the survey demonstrated the ability to produce evidence of yearly improvement when follow-up surveys are done in subsequent years?
Look for a survey with a track record of at least a few years, so that the survey is fully field-tested, and the behavior of the measures in the survey, including their ability to produce useful data and effectively measure changes from year to year, is demonstrated. If the survey has a long history, ask if it has been recently updated so that it reflects contemporary issues faced by students and recent research on the aspects of school climate that are relevant to bullying.
If your school is in New Jersey, is the survey designed specifically with New Jersey schools, and New Jersey laws and regulations, in mind?
  • Is the survey designed to facilitate compliance with the ABR?
  • Is the survey designed to avoid the requirement for active parental consent, particularly in New Jersey, which has more restrictive requirements than those established by Federal law?
New Jersey is unique for many reasons, including our diversity, our politics, our different regions, and our laws. This is not Texas, California, or Sweden; this is New Jersey. For example, national surveys use definitions of "bullying" that might be inconsistent with statutory NJ definitions, causing confusion and difficulty fitting data into NJ rubrics and forms.

In New Jersey, both because of our diversity and because HIB is defined as behavior based on enumerated or other distinguishing "characteristics," it is essential that assessments of school climate include appropriate assessments of the climate for diverse groups of students, e.g., students with disabilities and students with a variety of ethnic/cultural backgrounds. These issues of diversity are often overlooked by standardized surveys of "school climate," which usually assess the general "positiveness" of school climate, thereby missing exactly the issues—biases—that define HIB in New Jersey.

Also, New Jersey's active parental consent law for student surveys is more restrictive than similar federal regulations. Active parental consent reduces response rates and biases findings: research shows that students with more negative experiences at school are less likely to participate in a survey if parental consent is required, so avoiding the active parental consent requirement will produce more accurate, useful findings. (Note: Responsibility for determining the need for active parental consent rests with the school district; no provider should guarantee that their survey does not require active parental consent.)

For New Jersey schools, surveys are most useful if they are designed with New Jersey, including the ABR and NJ active parental consent requirements, in mind.
Who designed the survey?
  • Were the topics covered in the survey chosen by professional anti-bullying experts with evidence-based knowledge of research on school climate? 
  • Do these experts work directly in preK-Grade12 schools and have experience with actual school bullying, in addition to academic knowledge of the research on bullying? 
  • Was the survey designed by professional survey researchers with experience surveying youth, and a working knowledge of youth culture, including cyber culture?
What are the qualifications of the person who actually designed the survey questionnaire itself? You cannot necessarily rely on the reputation of the organization or website from which you obtained the survey questionnaire. A survey on the website of a reputable organization might appear to carry the clout of that organization, but it might not have been developed by members of that organization. Find out:

What are the credentials of the person who actually chose the topics to be covered in the survey, wrote the questions, selected the response options, and determined the order of questions in the questionnaire? The person who developed the survey should have:
  1. Professional expertise in survey question and questionnaire design. Survey research is a professional specialty, and the quality and usefulness of survey findings is a direct result of the quality of the survey questionnaire. Survey research skills include questionnaire design & construction; question context, order, and wording; dataset management; sampling methodology and sample bias correction; cultural differences in response patterns; techniques for eliminating social desirability response biases; and statistical analysis and interpretation. These are professional skills that individuals with master's or doctoral degrees in fields like sociology, psychology, public policy, and political science might possess, if they have specialized in the survey research branches of their fields. It might seem that writing a questionnaire is a simple, straightforward task; click here to see some examples of how even simple questions can lead to problems if they are not worded carefully.
  2. Authentic knowledge of the research literature on aspects of school climate that impact bullying behavior. Authentic knowledge means that an individual has the professional background to read and understand social scientific literature, evaluate the methodology and validity of the findings, recognize the implications of scientific research for practical applications, and understand why certain research findings are controversial. This requires a background understanding of the way that people behave in groups and institutional situations, and the social factors that influence individual and group behavior. Reading several articles about research on bullying does not produce authentic knowledge of the aspects of school climate that impact bullying and should be assessed in a student survey.
  3. An understanding of K-12 schools, including practical experience in addressing bullying in schools. Even a well educated social scientist with a thorough understanding of the research literature on bullying cannot apply this knowledge realistically to school situations without also understanding the existing culture, climate, and structure of schools.
  4. In New Jersey, familiarity with relevant laws, including the ABR, the NJ LAD, and active parental consent requirements for student surveys.
Does the survey test knowledge, or does it measure attitudes, opinions, perceptions, and experiences?
  • Does the survey ask questions to see whether or not students have accurate factual information about bullying, for example, statistics about bullying or definitions of bullying? 
  • Does the survey ask about students' perceptions of the school, experiences in school, feelings, and attitudes about school, teachers, and their peers? 
A survey is not a test. If your survey asks students to identify correct facts about bullying, or to tell you what they think someone "should" do when they see bullying, it is not a survey. Your students' answers to such questions reveal only how well they can tell you what you want to hear; they lack programming implications and waste the unique potential of an anonymous survey. Also, a "survey" that resembles a test can cause resentment and disengagement among students and staff members. A survey should measure attitudes, opinions, perceptions (e.g., of school climate), and experiences (e.g., of being targeted by peers). Your survey findings should paint a picture of your school from the point of view of your students. If there are "right" or "wrong" answers to questions on the survey, then think carefully about what you want to measure, and whether the answers to those questions will give you useful information.
How does the survey assess the prevalence of "bullying" in your school?
  • Does the survey define “bullying” for students and then ask them whether they’ve been bullied or known anyone who was bullied? 
  • Or, does the survey ask students about specific behaviors, without using the word “bullying”? 
  • Does the survey distinguish between verbal, social, physical, and cyber behaviors, or does it ask global questions about “bullying” in general?
Although it is important that all questions in a survey be carefully designed, in reviewing many school climate surveys, I have found that questions assessing the prevalence of bullying are the most likely to be poorly written. The most obvious way of assessing prevalence would be to ask students "Have you been bullied?" (yes/no), with the percentage of students who answer "yes" taken as an indication of the prevalence of bullying. However, this question is fraught with problems: it does not produce meaningful data, it does not produce data with practical implications for bullying prevention, it might produce data that could lead to liability complications, and, if you ask the question again the following year to assess your progress in addressing bullying, it might actually produce findings that indicate your school climate is getting worse even if it has gotten better. Click here for a further explanation of why this question is a poor choice as an assessment of bullying prevalence.
What response scales do the questions in the survey use? In other words, what response choices are offered to students for their use in answering the questions? Does the survey include open-ended as well as closed-ended questions?
  • Are the response scales appropriate to the questions asked, and will they produce findings with direct application to your programming efforts? 
  • If your school climate improves, will these response scales be able to detect the improvement?
  • Are the response scales appropriate for students of different ages?
Make sure the response scales (the answers students are given to choose from) are appropriate to the questions on the survey. The following are a few examples of things to watch out for.

I have seen questions that ask about the prevalence of behaviors followed by 5- or 7-point Likert agree-disagree scales; this is poor design (e.g., "bullying is common at my school" agree…disagree). A measure of prevalence should have responses that reflect frequency or proportion, e.g., "once a day," "once a week," etc. or "all students," "most students," etc. "Common" is an ambiguous description of frequency; how often is "common"? What does it mean to "agree" that "bullying is common," and how is this different from "strongly agreeing" that "bullying is common"? Do those who strongly agree think bullying is more "common" than those who simply agree? I suspect that what the authors of such questions intended to measure was how common students think bullying is, not how much they agree that bullying is common. In general, beware of any question that uses words denoting proportion or frequency ("most" "often") followed by agree-disagree response scales.

An opinion question, on the other hand, may be followed by an agree-disagree scale. For example, "Discipline at this school is fair" could be followed by an agree-disagree scale.

Response scales should also be appropriate to the age of the students; elementary school students need fewer response choices than high school students, but the choices should be specific enough for you to be able to detect changes in school climate from year to year. For most purposes, Yes/No response choices do not provide enough precision for you to detect improvement from year to year; if a survey relies heavily on yes/no questions, look for a survey that provides students with greater flexibility in response options.
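The precision problem with yes/no questions can be illustrated with a small sketch. The numbers below are invented for the example: a school where teasing genuinely becomes less frequent between two survey years, but where a yes/no question shows no change at all because the same share of students still experienced the behavior at least once.

```python
# Hypothetical illustration (all numbers invented): a real improvement that
# a 4-point frequency scale detects but a yes/no question misses.

# Counts of 100 students' answers to "How often are you teased at school?"
year1 = {"never": 40, "rarely": 20, "sometimes": 20, "often": 20}
year2 = {"never": 40, "rarely": 35, "sometimes": 20, "often": 5}  # improved

def pct_yes(counts):
    """Collapse the scale to a yes/no question: 'Have you been teased?'"""
    total = sum(counts.values())
    return 100 * (total - counts["never"]) / total

def pct_frequent(counts):
    """Share of students reporting 'sometimes' or 'often'."""
    total = sum(counts.values())
    return 100 * (counts["sometimes"] + counts["often"]) / total

# The yes/no view shows no change at all between years...
assert pct_yes(year1) == pct_yes(year2) == 60.0
# ...while the frequency scale detects the drop in frequent teasing.
assert pct_frequent(year1) == 40.0
assert pct_frequent(year2) == 25.0
```

The same share of students answered something other than "never" in both years, so a yes/no item would report a flat 60% and miss the real improvement that the frequency scale captures.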

Finally, beware of "false objectivity": just because you can add up a series of scores and come up with an "average" or an "index" does not mean you have useful information; often, students' answers to individual questions will be more useful than composite scores. Composite scores are only useful if the indices have been developed through a procedure like factor analysis, and then statistically "normed" for the population you are surveying (in this case, New Jersey students). Simply summing or averaging students' answers to a series of related questions does not produce a meaningful result.
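The point about composite scores can be made concrete with a minimal sketch, using invented item names and numbers: two schools can earn identical composite "climate index" averages even though one of them has a serious problem that only the item-level answers reveal.

```python
# Hypothetical illustration (item names and scores invented): a naive
# composite score masks a serious item-level problem.

# Each dict holds a school's average score (1-5 scale, higher = better)
# on four climate items.
school_a = {"trust_adults": 4.0, "fair_rules": 4.0,
            "peer_support": 4.0, "bias_language_rare": 4.0}
school_b = {"trust_adults": 5.0, "fair_rules": 5.0,
            "peer_support": 5.0, "bias_language_rare": 1.0}

def composite(scores):
    """Naive composite: a simple average of the item scores."""
    return sum(scores.values()) / len(scores)

# Both schools receive the same composite score...
assert composite(school_a) == composite(school_b) == 4.0

# ...but only the item-level data exposes School B's severe problem
# with bias-based language.
assert school_b["bias_language_rare"] < school_a["bias_language_rare"]
```

A single averaged index would rate these two schools identically; looking at the individual items is what surfaces the bias-language problem that, in New Jersey, is exactly the kind of issue that defines HIB.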
Does the survey measure aspects of school climate that have been found by research to be related to bullying?
If the survey purports to be a survey of school climate with relevance to bullying prevention, then make sure the survey includes questions about those aspects of school climate that have been shown by research to be related to bullying, or that are expected to be related to bullying based on sound social scientific principles. If you also want to know whether students feel that their school has enough extra-curricular activities, whether they like the cafeteria food, and whether they think their peers are intelligent, that's fine, but if you have limited time to allow students to do the survey, you might want a survey that is more focused on evidence-based climate correlates of bullying behavior. Some of those aspects of school climate are:
  • Students' perceptions of what the problems are in school, & how serious these problems are
  • Prevalence of bias-based language; prevalence of quasi-bullying behaviors
  • Clarity of school rules and students' perceptions of fairness of enforcement and discipline
  • Social norms; students' perceptions of their peers' attitudes and responses to incidents
  • Students' trust of adults, and perceptions of whether adults take bullying seriously, intervene when incidents occur, and treat each other with respect
  • Density of social support networks within the student body (having friends is a protective factor against bullying and other risks to mental and social health)
  • Climate for various groups of students, based on characteristics enumerated in the NJ Law Against Discrimination and other relevant "distinguishing characteristics" as per the ABR

For information about the School Climate and Bullying Prevention Survey (SCBPS) offered by Spectrum Diversity LLC, please visit the FAQs about the SCBPS page on this website.