Survey researchers employ a variety of techniques in the collection of survey data. People can be contacted and surveyed using several different modes: by an interviewer in-person or on the telephone (either a landline or cellphone), via the internet or by paper questionnaires (delivered in person or in the mail).
The choice of mode can affect who can be interviewed in the survey, the availability of an effective way to sample people in the population, how people can be contacted and selected to be respondents, and who responds to the survey. In addition, factors related to the mode, such as the presence of an interviewer and whether information is communicated aurally or visually, can influence how people respond. Surveyors are increasingly conducting mixed-mode surveys where respondents are contacted and interviewed using a variety of modes.
Survey response rates can vary for each mode and are affected by aspects of the survey design (e.g., number of calls/contacts, length of field period, use of incentives, survey length, etc.). In recent years surveyors have been faced with declining response rates for most surveys, which we discuss in more detail in the section on the problem of declining response rates.
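To make "response rate" concrete, survey researchers commonly compute it from final case dispositions using the American Association for Public Opinion Research (AAPOR) formulas. The sketch below (in Python) illustrates AAPOR Response Rate 1 and Response Rate 3; the disposition counts and the eligibility factor `e` are purely hypothetical assumptions, not figures from any Pew Research Center survey.

```python
# Hypothetical final case dispositions for a telephone survey (illustrative only).
completes = 900            # I:  completed interviews
partials = 50              # P:  partial interviews
refusals = 1200            # R:  refusals and break-offs
non_contacts = 800         # NC: eligible cases never reached
other_eligible = 50        # O:  other eligible non-interviews
unknown_household = 1500   # UH: unknown whether the number reaches a household
unknown_other = 500        # UO: unknown eligibility, other
e = 0.40                   # assumed share of unknown-eligibility cases that are actually eligible

def aapor_rr1():
    """AAPOR Response Rate 1: completes over all potentially eligible cases."""
    denom = (completes + partials + refusals + non_contacts + other_eligible
             + unknown_household + unknown_other)
    return completes / denom

def aapor_rr3():
    """AAPOR Response Rate 3: unknown-eligibility cases discounted by the factor e."""
    denom = (completes + partials + refusals + non_contacts + other_eligible
             + e * (unknown_household + unknown_other))
    return completes / denom

print(f"RR1 = {aapor_rr1():.1%}")
print(f"RR3 = {aapor_rr3():.1%}")
```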
In addition to landline and cellphone surveys, Pew Research Center also conducts web surveys and mixed-mode surveys, where people can be surveyed by more than one mode. We discuss these types of surveys in the following sections and provide examples from polls that used each method. In addition, some of our surveys involve reinterviewing people we have previously surveyed to see if their attitudes or behaviors have changed. For example, in presidential election years we often reinterview voters after the election who were first surveyed earlier in the fall, in order to understand how their opinions may have changed.
Cellphone surveys
Telephone surveys have traditionally been conducted only by landline telephone. However, now that almost half of Americans have a cellphone but no landline telephone service, more surveys are including interviews with people on their cellphones. For certain subgroups, such as young adults, Hispanics and African Americans, the cellphone-only rate is even higher. Research has shown that as the number of cell-only adults has grown, so has the potential for bias in landline surveys that do not include cellphone interviews.
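The size of this potential bias can be illustrated with the standard undercoverage decomposition: the bias in a landline-only estimate is roughly the share of the population not covered times the difference between the covered and non-covered groups. The short sketch below uses made-up numbers purely for illustration.

```python
# Illustrative (made-up) figures for a hypothetical yes/no survey question.
p_cell_only = 0.45      # assumed share of adults reachable only by cellphone
mean_landline = 0.52    # assumed estimate among adults covered by the landline frame
mean_cell_only = 0.40   # assumed estimate among cell-only adults (not covered)

# Undercoverage bias of a landline-only estimate relative to the full population:
# bias ~= (share not covered) * (covered mean - non-covered mean)
bias = p_cell_only * (mean_landline - mean_cell_only)
full_population_mean = (1 - p_cell_only) * mean_landline + p_cell_only * mean_cell_only

print(f"Landline-only estimate: {mean_landline:.3f}")
print(f"Full-population value:  {full_population_mean:.3f}")
print(f"Coverage bias:          {bias:.3f}")
```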
Cellphone surveys are conducted in conjunction with a landline survey to improve coverage. The data are then combined for analysis. In addition to the issues associated with sampling cellphones, there are also unique challenges that arise when interviewing people on their cellphones.
One of the most important considerations when conducting cellphone surveys is that the costs are substantially higher than for a traditional landline survey. The cost of a completed cellphone interview is roughly one-and-a-half to two times that of a completed landline interview. Although some of the fixed costs associated with landline surveys are not duplicated when a cellphone sample is added (such as programming the questionnaire), other costs are higher (data processing and weighting are more complex in dual-frame surveys).
Cellphone surveys are more expensive because of the additional effort needed to screen for eligible respondents. A significant number of people reached on a cellphone are under the age of 18 and thus are not eligible for most of our surveys of adults. Cellphone surveys also cost more because federal regulations require cellphone numbers to be dialed manually (whereas auto-dialers can be used to dial landline numbers before calls are transferred to interviewers). In addition, respondents (including those to Pew Research surveys) are often offered small cash reimbursements to help offset any costs they might incur for completing the survey on their cellphone. These payments, as well as the additional time necessary for interviewers to collect contact information in order to reimburse respondents, add to the cost of conducting cellphone surveys.
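A quick back-of-the-envelope calculation shows how this cost difference plays out in a dual-frame design. The unit costs and sample sizes below are entirely assumed for illustration; only the one-and-a-half to two times ratio comes from the text above.

```python
# Entirely assumed unit costs and sample sizes, for illustration only.
landline_cost_per_complete = 60.0                            # hypothetical dollars per completed interview
cell_cost_per_complete = 1.75 * landline_cost_per_complete   # within the 1.5x-2x range noted above
landline_completes = 900
cell_completes = 600

total_cost = (landline_completes * landline_cost_per_complete
              + cell_completes * cell_cost_per_complete)
blended_cost = total_cost / (landline_completes + cell_completes)

print(f"Cellphone cost per complete: ${cell_cost_per_complete:.2f}")
print(f"Blended cost per complete:   ${blended_cost:.2f}")
```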
Most cellphones also have caller identification or other screening features that allow people to see the number that is calling before deciding to answer. People also differ considerably in how they use their cellphones (e.g., whether they are turned on all the time or used only during work hours or for emergencies). The respondents' environment also can have a greater influence on cellphone surveys. Although people responding to landline surveys are generally at home, cellphone respondents can be virtually anywhere when receiving the call. Legal restrictions on cellphone use behind the wheel, as well as safety concerns, have also raised the question of whether people should be responding to surveys while driving at all. In addition, people often talk on their cellphones in more open places where they may have less privacy; this may affect how they respond to survey questions, especially those that cover more sensitive topics. These concerns have led some surveyors (including Pew Research Center) to ask cellphone respondents whether they are in a safe place and whether they can speak freely before continuing with the interview. Lastly, the quality of the connection may influence whether an interview can be completed at that time, and interruptions may be more common on cellphones.
Response rates are typically lower for cellphone surveys than for landline surveys. In terms of data quality, some researchers have suggested that respondents may be more distracted during a cellphone interview, but our research has not found substantive differences in the quality of responses between landline and cellphone interviews. Interviewer ratings of respondent cooperation and levels of distraction have been similar in the cell and landline samples, with cellphone respondents sometimes demonstrating even slightly greater cooperation and less distraction than landline respondents.
Related publications
- The Growing Gap between Landline and Dual Frame Election Polls: Republican Vote Share Bigger in Landline-Only Surveys Nov. 22, 2010
- Cell Phones and Election Polls: An Update Oct. 13, 2010
- Assessing the Cell Phone Challenge May 20, 2010
- Accurately Locating Where Wireless Respondents Live Requires More Than a Phone Number July 9, 2009
- Calling Cell Phones in '08 Pre-Election Polls Dec. 18, 2008
- Cell Phones and the 2008 Vote: An Update Sept. 23, 2008
- Cell Phones and the 2008 Vote: An Update July 17, 2008
- Research Roundup: Latest Findings on Cell Phones and Polling May 22, 2008
- The Impact of 'Cell-Onlys' On Public Opinion Polling Jan. 31, 2008
- How Serious is Polling's Cell-Only Problem? June 20, 2007
- The Cell Phone Challenge to Survey Research May 15, 2006
Internet surveys
The number of surveys being conducted over the internet has increased dramatically in the last 10 years, driven by a sharp rise in internet penetration and the relatively low cost of conducting web surveys in comparison with other methods. Web surveys have a number of advantages over other modes of interview. They are convenient for respondents, who can take them on their own time and at their own pace. The lack of an interviewer means web surveys tend to suffer less from social desirability bias than interviewer-administered modes. Web surveys also allow researchers to use a host of multimedia elements, such as having respondents view videos or listen to audio clips, that are not available in other survey modes.
Although more surveys are being conducted via the web, internet surveys are not without their drawbacks. Surveys of the general population that rely only on the internet can be subject to significant biases resulting from undercoverage and nonresponse. Not everyone in the U.S. has access to the internet, and there are significant demographic differences between those who do have access and those who do not. People with lower incomes, people with less education, those living in rural areas and those ages 65 and older are underrepresented among internet users and among those with high-speed internet access (see our internet research for the latest trends).
There also is no systematic way to collect a traditional probability sample of the general population using the internet. There is no national list of email addresses from which people could be sampled, and there is no standard convention for email addresses, as there is for phone numbers, that would allow random sampling. Internet surveys of the general public must thus first contact people by another method, such as through the mail or by phone, and ask them to complete the survey online.
Because of these limitations, researchers use two main strategies for surveying the general population using the internet. One strategy is to randomly sample and contact people using another mode (mail, telephone or face-to-face) and ask them to complete a survey on the web. Some of the surveys may allow respondents to complete the survey by a variety of modes and therefore potentially avoid the undercoverage problem created by the fact that not everyone has access to the web. This method is used for one-time surveys and for creating survey panels where all or a portion of the panelists take surveys via the web (such as the GfK KnowledgePanel and more recently the Pew Research Center's American Trends Panel). Contacting respondents using probability-based sampling via another mode allows surveyors to estimate a margin of error for the survey (see probability and non-probability sampling for more information).
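Because a probability-based sample has known selection probabilities, a margin of sampling error can be computed in the usual way. The sketch below shows the standard 95% margin-of-error calculation for a proportion, with an optional design-effect adjustment for weighting; the sample size and design effect are assumed values for illustration, not parameters of any particular Pew Research Center survey.

```python
import math

def margin_of_error(n, p=0.5, deff=1.0, z=1.96):
    """95% margin of error for a proportion estimated from a probability sample.

    n    -- number of respondents (hypothetical)
    p    -- assumed proportion (0.5 is the most conservative choice)
    deff -- design effect from weighting/clustering (1.0 = simple random sample)
    z    -- critical value (1.96 for a 95% confidence level)
    """
    effective_n = n / deff
    return z * math.sqrt(p * (1 - p) / effective_n)

# Hypothetical web-panel wave of 2,000 respondents with an assumed design effect of 1.3.
print(f"MOE: +/- {100 * margin_of_error(2000, deff=1.3):.1f} percentage points")
```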
Pew Research Center also has conducted internet surveys of random samples of elite and special populations, where a list of the population exists and can be used to draw a random sample. Then, the sampled persons are asked to complete the survey online or by other modes. For example, see the scientist survey reported in 'Public and Scientists' Views on Science and Society.'
Another internet survey strategy relies on convenience samples of internet users. Researchers use one-time surveys that invite participation from whoever sees the survey invitation online, or rely on panels of respondents who opt-in or volunteer to participate in the panel. These surveys are subject to the same limitations facing other surveys using non-probability-based samples: The relationship between the sample and the population is unknown, so there is no theoretical basis for computing or reporting a margin of sampling error and thus for estimating how representative the sample is of the population as a whole. (Also see the American Association for Public Opinion Research's (AAPOR) Non-Probability Sampling Task Force Report and the AAPOR report on Opt-In Surveys and Margin of Error.) Many organizations are now experimenting with non-probability sampling in hopes of overcoming some of the traditional limitations these methods have faced. One example of this is sample matching, where a non-probability sample is drawn with similar characteristics to a target probability-based sample, and the former uses the selection probabilities of the latter to weight the final data. Another example is sample blending, whereby probability-based samples are combined with non-probability samples using specialized weighting techniques to blend the two. Here at the Pew Research Center we are closely following experiments with these methodologies, and conducting some of our own, to better understand the strengths and weaknesses of varying approaches.
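There is no single standard recipe for sample blending, but one simple illustrative approach is to combine the probability-based and opt-in estimates with weights that reflect how much confidence is placed in each source, for example their effective sample sizes. The sketch below is only a toy version of that idea, with made-up numbers; it is not Pew Research Center's or any vendor's actual blending method.

```python
# Toy illustration of blending a probability-based estimate with an opt-in estimate.
# All numbers are hypothetical.
prob_estimate, prob_n, prob_deff = 0.48, 1500, 1.3      # probability sample
optin_estimate, optin_n, optin_deff = 0.44, 4000, 2.5   # opt-in sample (after weighting)

# Weight each source by its effective sample size (n divided by its design effect).
prob_eff_n = prob_n / prob_deff
optin_eff_n = optin_n / optin_deff

blended = (prob_eff_n * prob_estimate + optin_eff_n * optin_estimate) / (prob_eff_n + optin_eff_n)
print(f"Blended estimate: {blended:.3f}")
```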
Related publications
- Public and Scientists' Views on Science and Society Jan. 29, 2015
- Women and Leadership Jan. 14, 2015
- Public Sees U.S. Power Declining as Support for Global Engagement Slips Dec. 3, 2013
- What the Public Knows – In Words, Pictures, Maps and Graphs Sept. 5, 2013
- A Survey of LGBT Americans June 13, 2013
The problem of declining response rates
As Americans are now faced with more demands on their time, they are exercising more choice over when and how they can be contacted. The growth in the number of unsolicited telephone calls has also resulted in people employing more sophisticated technology for screening their calls (e.g., voice mail, caller identification, call blocking and privacy managers). This has resulted in fewer people participating in telephone polls than was the case when telephone surveys first became prevalent. As a consequence, response rates have continued to decline over the past decade.
Pew Research Center has conducted several survey experiments to gauge the effects of respondent cooperation on the validity of the results. These experiments compare responses from a standard survey, conducted with commonly used polling procedures over a five-day field period, with responses from a survey conducted over a much longer period that employed more rigorous techniques aimed at obtaining a higher response rate and interviewing harder-to-reach respondents.
Findings from the 2012 study 'Assessing the Representativeness of Public Opinion Surveys,' the 2003 study 'Polls Face Growing Resistance, But Still Representative' and the 1997 study 'Conservative Opinions Not Underestimated, But Racial Hostility Missed' indicate that carefully conducted polls continue to obtain representative samples of the public and provide accurate data about the views and experiences of Americans. These results are also reported in Public Opinion Quarterly.
Related publications
- Assessing the Representativeness of Public Opinion Surveys May 15, 2012
- Gauging the Impact of Growing Nonresponse on Estimates from a National RDD Telephone Survey (2006) Public Opinion Quarterly 70: 759-779.
- Polls Face Growing Resistance, But Still Representative April 20, 2004
- Consequences of Reducing Nonresponses in a National Telephone Survey (2000) Public Opinion Quarterly 64:125-148.
- Conservative Opinions Not Underestimated, But Racial Hostility Missed March 27, 1998
Mixed-mode surveys
Over the past decade, there has been a rise in mixed-mode surveys where multiple modes are used to contact and survey respondents. The increase in mixed-mode surveys has been driven by several factors, including declining response rates, coverage problems in single-mode surveys and the development of web surveys. Because there are now a variety of methods available, survey researchers can determine the best mode or combination of modes to fit the needs of each particular study and the population to be surveyed. However, when multiple modes are used for data collection, factors related to each mode, such as the presence of an interviewer and whether information is communicated aurally or visually, may affect how people respond.
Although Pew Research Center primarily conducts telephone surveys, we also occasionally conduct mixed-mode surveys, where people are surveyed by more than one mode. For example, we have conducted mixed-mode surveys of foreign policy experts and journalists, where respondents can complete the survey via the web or by telephone.
Related publications
- U.S. Seen as Less Important, China as More Powerful Dec. 3, 2009
- Financial Woes Now Overshadow All Other Concerns for Journalists March 17, 2008
- Bottom-Line Pressures Now Hurting Coverage, Say Journalists May 23, 2004
- Self Censorship: How Often and Why April 30, 2000
Reinterviews
Reinterviews are typically used to examine whether individuals have changed their opinions, behaviors or circumstances (such as employment, health status or income) over time. Survey designs that include reinterviews are sometimes called panel surveys. The key feature of this survey design is that the same individuals who were interviewed at the time of the first survey are interviewed again at a later date. Pew Research Center sometimes conducts reinterviews, especially to learn more about whether and how voters' opinions change during the course of a presidential election campaign. For an example from the 2012 presidential campaign, see 'Low Marks for the 2012 Campaign.' For an example comparing foreign policy opinions before and after the events of Sept. 11, 2001, see 'America's New Internationalist Point of View.'
Some of the reports listed below used reinterviews primarily to ask follow-up questions about respondents' opinions rather than to analyze opinion change on the same issues. Survey reports of this sort include 'Beyond Red vs. Blue' and 'Voters Liked Campaign 2004, But Too Much ‘Mud-Slinging.''
Related publications
- Low Marks for the 2012 Campaign, Nov. 15, 2012
- High Marks for the Campaign, a High Bar for Obama Nov. 13, 2008
- Beyond Red vs. Blue May 10, 2005
- Voters Liked Campaign 2004, But Too Much ‘Mud-Slinging' Nov. 11, 2004
- Swing Voters Slow to Decide, Still Cross-Pressured Oct. 27, 2004
- America's New Internationalist Point of View Oct. 24, 2001
- Campaign 2000 Highly Rated Nov. 16, 2000
- Voters Side with Bush for Now Nov. 14, 2000
- Retro-Politics Nov. 11, 1999
- Popular Policies and Unpopular Press Lift Clinton Ratings Feb. 6, 1998
- News Attracts Most Internet Users Dec. 16, 1996
- Campaign '96 Gets Lower Grades from Voters Nov. 15, 1996
- The People, the Press & Politics Sept. 21, 1994
- Voters Say ‘Thumbs Up' To Campaign, Process & Coverage Nov. 15, 1992
- Perot is Back Oct. 26, 1992
Panel surveys
A survey panel is a sample of respondents who have agreed to take part in multiple surveys over time. Pew Research Center has used panels on a number of occasions and now has its own nationally representative survey panel known as the American Trends Panel.
Panels have several advantages over alternative methods of collecting survey data. Perhaps the most familiar use of panels is to track change in attitudes or behaviors of the same individuals over time. Whereas independent samples can yield evidence about change, it is more difficult to estimate exactly how much change is occurring – and among whom it is occurring – without being able to track the same individuals at two or more points in time.
A second advantage of a panel is that considerable information about the panelists can be accumulated over time. Because panelists may respond to multiple surveys on different topics, it is possible to build a much richer portrait of the respondents than is feasible in a single survey interview, which must be limited in length to prevent respondent fatigue.
Related to this is another advantage. Additional identifying information about respondents (such as an address) is often obtained for panelists, and this information can be used to help match externally available data, such as voting history, to the respondents. The information necessary to make an accurate match is often somewhat sensitive and difficult to obtain from respondents in a one-time interview.
A fourth advantage is that panels can provide a relatively efficient method of data collection compared with fresh samples because the participants have already agreed to take part in more surveys. The major effort required with a fresh sample – making an initial contact, persuading respondents to take part and gathering the necessary demographic information for weighting – is not needed once a respondent has joined a panel.
A fifth advantage is that it may be possible to survey members of a panel using different interviewing modes at different points in time. Contact information can be gathered from panelists (e.g., mailing addresses or email addresses) and used to facilitate a different interview mode than the original one, or to contact respondents in different ways to encourage participation.
But panels have some limitations as well. They can be expensive to create and maintain, requiring more extensive technical skill and oversight than a single-shot survey. A second concern is that repeated questioning of the same individuals may yield different results than we would obtain with independent or 'fresh' samples. If the same questions are asked repeatedly, respondents may remember their answers and feel some pressure to be consistent over time. Respondents might also change their behavior because of questions they have been asked; for example, questions about voting might spur them to register to vote. Respondents also become more skilled at answering particular kinds of questions. This may be beneficial in some instances, but to the extent it occurs, the panel results may differ from what would have been obtained from independent samples of people who have not had the practice in responding to surveys. A final disadvantage is that panelists may drop out over time, making the panel less representative of the target population as time passes if the kinds of people who drop out differ from those who tend to remain. For example, young people may move more frequently and thus be more likely to be lost to the panel when they move.
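A simple worked example shows how differential attrition can shift a panel's composition. Suppose (purely hypothetically) that younger panelists drop out at a higher rate than older panelists; the remaining panel then over-represents older adults unless the loss is corrected through recruitment or weighting. The retention rates below are assumptions for illustration only.

```python
# Hypothetical panel of 1,000 members: 40% under age 30, 60% age 30 and older.
young_start, old_start = 400, 600
young_retention, old_retention = 0.60, 0.85   # assumed differential retention after one year

young_left = young_start * young_retention
old_left = old_start * old_retention

share_young_start = young_start / (young_start + old_start)
share_young_after = young_left / (young_left + old_left)

print(f"Share under 30 at recruitment:  {share_young_start:.1%}")   # 40.0%
print(f"Share under 30 after attrition: {share_young_after:.1%}")   # about 32%
```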
Probability and non-probability panels
Survey panels comprise many different types of samples. A fundamental distinction is between panels built with probability samples and those built with non-probability, or 'opt-in,' samples (see the discussion of probability and non-probability sampling for what makes a probability sample).
Among both types of survey panels, the samples may be intended to represent the entire population or only a portion of it. Pew Research Center's American Trends Panel (described below) is a nationally representative probability sample of U.S. adults. Another nationally representative panel is GfK's KnowledgePanel. An example of a panel representing a subgroup of the population is the National Longitudinal Survey of Youth 1979. It is a nationally representative sample of young men and women who were 14-22 years old when they were first surveyed in 1979. This panel is a product of the U.S. Bureau of Labor Statistics.
There are numerous non-probability or opt-in panels in operation. The methods used to build the samples for these panels differ, but in most cases the panelists have volunteered to join the panel and take surveys in exchange for some type of modest reward, either for themselves or for a charity. These panels tend to be used more for market research than for opinion and policy research, but the distinction is not a sharp one. Two well-known opt-in panels are YouGov and SurveyMonkey Audience. In addition to the debate surrounding non-probability samples, opt-in panels are typically limited to people who already have internet access, and thus do not represent the entire population of the U.S.