Avian Conservation and Ecology

Copyright © 2010 by the author(s). Published here under license by The Resilience Alliance.

The following is the established format for referencing this article:
Wiersma, Y. F. 2010. Birding 2.0: citizen science and effective monitoring in the Web 2.0 world. Avian Conservation and Ecology 5(2): 13. [online] URL: http://www.ace-eco.org/vol5/iss2/art13/
http://dx.doi.org/10.5751/ACE-00427-050213


Essay

Birding 2.0: Citizen Science and Effective Monitoring in the Web 2.0 World
 
Ornithologie 2.0: la science citoyenne et les programmes de suivi à l’ère d’internet 2.0

Yolanda F. Wiersma 1


1Department of Biology, Memorial University



ABSTRACT


The amateur birding community has a long and proud tradition of contributing to bird surveys and bird atlases. Coordinated activities such as Breeding Bird Atlases and the Christmas Bird Count are examples of "citizen science" projects. With the advent of technology, Web 2.0 sites such as eBird have been developed to facilitate online sharing of data and thus increase the potential for real-time monitoring. However, as recently articulated in an editorial in this journal and elsewhere, monitoring is best served when based on a priori hypotheses. Harnessing citizen scientists to collect data following a hypothetico-deductive approach carries challenges. Moreover, the use of citizen science in scientific and monitoring studies has raised issues of data accuracy and quality. These issues are compounded when data collection moves into the Web 2.0 world. An examination of the literature from social geography on the concept of "citizen sensors" and volunteered geographic information (VGI) yields thoughtful reflections on the challenges of data quality/data accuracy when applying information from citizen sensors to research and management questions. VGI has been harnessed in a number of contexts, including for environmental and ecological monitoring activities. Here, I argue that conceptualizing a monitoring project as an experiment following the scientific method can further contribute to the use of VGI. I show how principles of experimental design can be applied to monitoring projects to better control for data quality of VGI. This includes suggestions for how citizen sensors can be harnessed to address issues of experimental controls and how to design monitoring projects to increase randomization and replication of sampled data, hence increasing scientific reliability and statistical power.

RÉSUMÉ

La communauté des ornithologues amateurs a une longue et fière tradition de contribution aux dénombrements d’oiseaux et aux projets d’atlas. Les activités telles que les atlas d’oiseaux nicheurs et le Recensement des oiseaux de Noël sont des exemples de "science citoyenne". Avec l’avènement de la technologie informatique, des sites internet 2.0 comme celui d’eBird ont été développés afin de permettre l’entrée en ligne des données, ouvrant ainsi la possibilité d’assurer un suivi en temps réel. Cependant, tel que mentionné dans un éditorial récemment publié dans cette revue et dans d’autres publications, le suivi est plus efficace lorsqu’il repose sur des hypothèses a priori. Mettre la science citoyenne à contribution pour récolter des données en suivant une approche hypothético-déductive comporte des défis. De plus, le recours aux citoyens dans le cadre d’études scientifiques et de programmes de suivi soulève la question de la précision et de la qualité des données. Cette question est d’autant plus sérieuse lorsque les données sont récoltées dans le contexte de l’internet 2.0. Une revue de la littérature en géographie sociale sur les concepts de "citoyens capteurs" et d’information géographique volontaire (IGV) fournit des réflexions utiles sur les défis de qualité et de précision des données lorsqu’on applique l’information fournie par les citoyens capteurs dans les domaines de la recherche et de la gestion. L’IGV a été mise à profit dans un grand nombre de contextes, incluant les programmes de suivi écologique et environnemental. Dans cet essai, je soutiens que la conceptualisation d’un projet de suivi en tant qu’expérience scientifique peut contribuer davantage à l’utilisation des IGV. J’illustre de quelle façon les dispositifs expérimentaux peuvent être appliqués aux projets de suivi afin de mieux contrôler la qualité des données issues d’IGV. Ceci inclut des suggestions sur l’implication des citoyens capteurs afin de mettre en place des témoins expérimentaux et de développer des programmes de suivi afin d’accroître la randomisation et la réplication des données échantillonnées, ce qui augmente la fiabilité scientifique et la puissance statistique.


Key words: citizen science; citizen sensors; experimental design; monitoring; volunteered geographical information (VGI); Web 2.0



MONITORING, CITIZEN SCIENCE, AND VOLUNTEERED GEOGRAPHIC INFORMATION

The use of volunteer "citizen scientists" in long-term monitoring projects is not a new concept in avian studies. In ornithology, citizen science dates back more than a century, with projects such as the Christmas Bird Count having a long history of harnessing volunteers to collect data. The Cornell Lab of Ornithology is a leader in engaging citizen scientists in avian research and has developed a number of projects aimed at answering specific questions (see Bonney et al. 2010 for a summary). One of these projects, eBird, represents the latest trend in citizen science: harnessing the power of Web 2.0 Internet technology (Sullivan et al. 2009). Web 2.0 refers to the new generation of Web applications defined by highly interactive information. In the Web 2.0 world, the audience is made up of active users who generate their own content for sharing with select groups or with the public at large via the World Wide Web. eBird is a Web 2.0 application that allows citizen scientists to share and manage their own sightings in a globally accessible database (Sullivan et al. 2009). Sullivan et al. refer to these amateur birdwatchers as "avian biological sensors" (2009:2290). This concept closely parallels the term "citizen sensor," coined by the geographer Goodchild (2007a, b) to describe individuals who "sense" geographic information and who use this information to construct an "elaborate mental understanding of the areas where [they] live and work" (Goodchild 2007a:25). Goodchild further suggested the term "volunteered geographic information" (VGI) to emphasize the spatial component of data common to many citizen science projects.

Goodchild (2007a, b) outlines how the development of mobile devices, increased broadband access, and the emergence of Web 2.0 technologies further allow VGI to be shared widely. Much of Goodchild's work focuses on situations in which VGI has been used to generate better maps or to create application-specific cartographic products (e.g., hiking trails, cycling routes). However, VGI and Web 2.0 technologies have also been incorporated into a range of environmental monitoring projects. Silvertown (2009) highlights how mobile devices have been developed to aid in monitoring wildlife in South Africa (CyberTracker); the tool is now used on five continents. Tulloch (2008) notes that VGI has been harnessed for activities such as monitoring the timing and location of vernal pools that serve as important amphibian habitat in New Jersey. Sullivan et al. (2009) provide examples of how data from eBird (another example of VGI) have been used to visualize seasonal changes in distribution, to monitor range change, and to provide data for conservation prioritization and decision-support tools. eBird data have also been used to develop spatiotemporal explanatory models to better understand changes in species distribution (Fink et al. 2010).

Unlike other citizen science projects coordinated by the Cornell Lab (Bonney et al. 2010: Table 1), the eBird site is not explicitly driven by a specific hypothesis. Thus, eBird may not fit the criteria for monitoring as outlined in a recent editorial in this journal (Nudds and Villard 2009), which stated that the purpose of monitoring should be to test a priori hypotheses and not simply to generate a series of observations that can be used to generate post hoc hypotheses. Others have concurred that monitoring should be hypothesis driven and should be viewed as a systematic program set up to assist in evaluating the effects of a given human activity, or set of activities, on the environment or on a particular ecosystem (Wiersma 2005; Nichols and Williams 2006; Francis et al. 2009). Monitoring should not be viewed as an activity in isolation or as simply inventory work, but rather as a key part of an adaptive management process (Nudds 1999; Wiersma and Campbell 2002; Nichols and Williams 2006; Francis et al. 2009; Nudds and Villard 2009). However, Wintle et al. (2010) have argued that monitoring that is not driven by a specific hypothesis or research question may have value as well. Wintle et al. (2010) differentiate between "targeted" and "surveillance" monitoring, the former fitting the criteria outlined by Nudds and Villard (2009), the latter lacking any a priori hypothesis and focused more on inventory work. Wintle et al. (2010) argue that surveillance monitoring may have value in detecting (Rumsfeldian) "unknown unknowns" and may assist in the generation of new, and as yet unanticipated, hypotheses. Nichols and Williams (2006) argue that, although surveillance monitoring can be used to generate hypotheses, it allows only weak inference of trends and causal mechanisms and is usually less cost-effective than targeted monitoring focused on a particular set of conservation hypotheses. This may be true in many cases of surveillance monitoring, although in a Web 2.0 environment, where data are supplied by volunteer citizen scientists, cost efficiency may be less of an issue. Dickinson et al. (2010) suggest that the Cornell Lab's Project FeederWatch contributes $3 million/year worth of observer effort, and Sullivan et al. (2009) noted that the cost per datum on eBird in 2008 was only 3 cents.

The debate over the perceived merits of surveillance vs. targeted monitoring continues with respect to conservation projects (see Haughland et al. 2010; Lindenmayer and Likens 2010). The recent widespread availability of large data sets, the ability of citizen scientists to contribute quickly to data collection (e.g., by using mobile devices), and the relatively low cost of computers to manage and handle large amounts of data have led some to suggest that "data-intensive" science may represent a new paradigm in scientific research (Frankel and Reid 2008; Kelling et al. 2009; Dickinson et al. 2010). Although many who advocate a hypothetico-deductive approach to ecological monitoring may view surveillance monitoring and data-mining approaches as anathema, these activities are not going to disappear and are likely to increase. In the Web 2.0 world, surveillance monitoring programs are relatively easy and inexpensive to set up and have the potential to generate large quantities of VGI (and may additionally provide highly tangible benefits in the form of public education and awareness). For example, eBird (which sits largely at the surveillance end of Wintle et al.'s [2010] targeted-surveillance monitoring spectrum) has already generated more than 21 million records from more than half a million users, and these data have contributed to conservation and management (Sullivan et al. 2009). Dickinson et al. (2010) summarize a number of contributions that large-scale citizen science projects have made to ecological questions. Formal comparisons between the utility of surveillance data (eBird) and data collected following more prescribed approaches (Breeding Bird Surveys) suggest that, for many species, surveillance data provide information similar to targeted data (Munson et al. 2010). Thus, surveillance monitoring does not appear to be without value, and its relative cost is generally low, particularly for projects that encompass a broad spatial extent and capture large amounts of data (Dickinson et al. 2010). However, the scientific validity of surveillance monitoring, in terms of the ability to explicitly test hypotheses and infer causal mechanisms when compared with targeted monitoring, remains open to question and debate.

Much of the discussion about VGI and citizen sensors is centered within the intellectual domains of social science and social geography (e.g., Elwood 2008a,b; Flanagin and Metzger 2008; Tulloch 2008; see also the UCSB workshop on Volunteered Geographic Information: http://www.ncgia.ucsb.edu/projects/vgi/ and recent special issues on the topic of VGI [Elwood 2008a; Feick and Roche 2010]). Because VGI involves large numbers of individuals embedded in society and interacting via technology, it may be useful to consider these questions within a social science framework. In this essay, I highlight some of the recent literature on VGI from the perspective of social geographers and identify ways in which conservation scientists and ecologists might harness these insights to maximize the value of citizen science data in the Web 2.0 world. I also offer suggestions for ways to harness VGI to enhance scientific reliability.


AN OVERVIEW OF VOLUNTEERED GEOGRAPHIC INFORMATION

VGI has several advantages over data collected by professional scientists. These include a relatively low cost to implement, a potentially large number of samples gathered, the potential for data to be collected across a wider geographic area than a single scientist can cover, and a potentially fine resolution of data, because most citizen sensors will submit observations from areas close to home and/or areas that they know very well. The fact that data from many citizen sensors accumulated via Web 2.0 technology can result in a large number of observations over a large spatial extent has already been demonstrated with eBird. However, there are disadvantages to VGI. One disadvantage is potential bias due to issues of the "digital divide." The concept of the digital divide is well studied in sociology and points to differences in levels of technological literacy (e.g., Chakraborty and Bosman 2005) and in access to broadband in remote and rural areas (e.g., Chinn and Fairlie 2007) that result in differential levels of public participation based on factors such as income, education, and place of residence. A further challenge of VGI is that there is little top-down control of data gathering, and thus a greater potential for errors and user biases to creep in, unless explicit protocols are developed for data collection along with post hoc data filtering.

Recently, Gouveia and Fonseca (2008) developed a framework to illustrate how VGI can be harnessed with information and communication technologies (including Web 2.0) to enhance monitoring activities. They cite the essential elements of a monitoring network as including: 1) motivated citizens, 2) sensing "devices" (whether instruments or human observers) that can detect and register environmental variables, and 3) a back-end information structure to support these activities. Motivated citizens (item 1) are important, but it is also important to know who the individuals contributing the data are. Coleman et al. (2009, 2010) provide a taxonomy of the different types of contributors of VGI and their motivations (Table 1). An understanding of where individuals fit within this taxonomy may be helpful in addressing data quality issues. Although Gouveia and Fonseca (2008) outline critical components of a monitoring framework that are essential to generate VGI for monitoring, the data generated will not be of much use if their accuracy and quality are limited (or unknown). Bishr and Mantelas (2008) propose a trust and reputation model for evaluating and filtering VGI based on agreement between observations as well as spatial and social proximity between observers and the objects under observation. The automated data filtering used by eBird is an example of such a trust and reputation model: data submitted to eBird are automatically assessed by comparing each new entry with existing ones to filter out improbable sightings based on proximity to previous sightings. Data quality is a key consideration of end users (i.e., scientists and managers) who wish to integrate large data sets and use them in decision making, and geographers are developing a range of methods to evaluate VGI (e.g., Bishr and Mantelas 2008; Coleman et al. 2009).
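To make the idea concrete, the following is a minimal sketch of how such a proximity-based plausibility filter might work. It is not eBird's actual implementation; the record structure, the distance and seasonal thresholds, and the function names are all hypothetical illustrations.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Sighting:
    species: str
    lat: float        # decimal degrees
    lon: float        # decimal degrees
    day_of_year: int  # 1-366; crude proxy for season (no year-end wraparound here)

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371.0 * 2 * asin(sqrt(a))

def is_plausible(new: Sighting, history: list[Sighting],
                 radius_km: float = 100.0, season_window_days: int = 30) -> bool:
    """Accept a new record only if the same species has previously been
    reported nearby at a similar time of year; otherwise flag it for review."""
    for old in history:
        if (old.species == new.species
                and haversine_km(new.lat, new.lon, old.lat, old.lon) <= radius_km
                and abs(old.day_of_year - new.day_of_year) <= season_window_days):
            return True
    return False  # improbable: route to a human reviewer rather than discard outright
```

In a real system, implausible records would be queued for expert review (as eBird's regional editors do) rather than being silently rejected.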

Flanagin and Metzger (2008) question the credibility of VGI and cite research carried out to enhance our understanding of data quality (or perceptions thereof) when dealing with VGI. Relevant factors include who is volunteering the information, how active an individual is on a particular website, and the number of corroborating views from other citizen sensors. Tulloch (2008) discusses the issues of identifying the "public" that is contributing the information and of what constitutes their participation. De Longueville et al. (2010) propose three strategies to increase the credibility of VGI: standardized data creation methods, volunteer-based quality control (see also Bishr and Mantelas 2008), and data aggregation and cross-validation. However, these methods for enhancing the credibility of VGI with respect to real-world scientific problems have been implemented in only a handful of cases on Web 2.0 surveillance monitoring websites (see detail on eBird in Sullivan et al. 2009, and the natural hazards example in De Longueville et al. 2010).
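As an illustration of how observer activity and corroboration might be combined into a single credibility index, consider the toy scoring function below. The weights and saturation constants are arbitrary assumptions for illustration, not values drawn from any of the cited studies.

```python
def credibility_score(observer_reports: int, corroborations: int,
                      w_activity: float = 0.4, w_agreement: float = 0.6) -> float:
    """Toy credibility index in [0, 1]: a saturating function of how active the
    observer is and how many independent observers corroborate the record."""
    activity = observer_reports / (observer_reports + 10)   # approaches 1 for very active users
    agreement = corroborations / (corroborations + 2)       # approaches 1 with many corroborations
    return w_activity * activity + w_agreement * agreement

# A record from a prolific observer with two independent corroborations:
print(round(credibility_score(observer_reports=200, corroborations=2), 2))  # ~0.68
```

Records scoring below some chosen threshold could then be held back for manual review or excluded from analyses.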

These insights from social geographers are valuable for anyone considering implementing or using surveillance monitoring data, particularly data generated via Web 2.0 applications. In addition to these considerations from social geography, I contend that some consideration of data quality issues from the perspective of experimental science will assist in enhancing the quality of VGI. Thus, I propose that the use of VGI for monitoring also be considered within a scientific experimental framework. In the final section of this essay, I provide an overview of the parallels between monitoring and scientific experimentation, identify the key criteria for robust experimental design, and outline a framework for addressing these criteria with VGI.


MONITORING AND THE SCIENTIFIC METHOD

Francis et al. (2009) outline some key questions for avian monitoring, including questions about geographic scope/scale, sample size, survey protocols, and sampling design. As highlighted above, it is often not difficult (although there are exceptions) for VGI to generate large sample sizes and cover a wide geographic scope. Survey protocols can be developed and shared online, but how well citizen scientists follow them will often be uncertain. However, we should not discount that citizen scientists have some level of scientific literacy. For example, Trumbull et al. (2000) document citizen scientists thinking quite carefully about elements of experimental design in a backyard feeder study. Most problematic for VGI and surveillance monitoring are issues of sampling design, specifically control, replication, and randomization. Lindenmayer and Likens (2009) emphasize the importance of solid experimental design at the outset of any monitoring project, and stress that good design comes from good questions, an attribute surveillance monitoring usually lacks. Elements of experimental design are much easier to implement in a monitoring program (e.g., the Christmas Bird Count, Breeding Bird Surveys) in which there is a good deal of top-down control over the activities of citizen scientists (although Lindenmayer and Likens [2009] point out that, even with more targeted programs, experimental design is still often missing). When anonymous members of the public are generating VGI via Web 2.0, there is even less control over sampling design.

A recent evaluation (Munson et al. 2010) showed that, despite differences in survey method and coverage, broad-level conclusions about distribution and population trends for a subset of birds were not significantly different when evaluated using surveillance data from eBird vs. data from Breeding Bird Surveys. However, eBird is generally considered the "gold standard" for data quality control in a Web 2.0 surveillance monitoring project, and such congruence may not always be the case. Many Web 2.0 citizen science projects do not appear to include a rigorous protocol for data filtering. If we accept that surveillance monitoring and VGI will increase, and concede that smaller-scale projects may not have the resources of the Cornell Lab available when implementing a Web 2.0 project, we need to consider how to better design monitoring and collection of VGI to meet the requirements of good sampling design (Francis et al. 2009) if we are to make effective use of these data sets.

Experimental control

The concept of an experimental control goes somewhat against the spirit of VGI, which emphasizes independence, democratization, and individuality (Goodchild 2007b). Proponents of VGI emphasize that its strength lies in individuals providing information on locations that are personally important to them (Tulloch 2008), and feel that it is not desirable to control from which points in geographic space volunteers contribute information. From an experimental perspective, then, it can be difficult to set up observations in "treatment" and "control" areas, because there may be little top-down control of how the data are gathered (however, see Seeger [2008] for a description of "facilitated VGI," which might fall more toward the "targeted" end of the targeted-surveillance monitoring spectrum). Grira et al. (2010) suggest implementing an a priori approach to addressing issues of spatial data usability; De Longueville et al. (2010), for example, have created a fairly prescribed set of steps to enhance the credibility of VGI and make it more useful to scientific and technical investigators. In the absence of such an approach, the analyst may need to conduct a post hoc analysis of all the VGI, assigning certain points as "controls" based on their location relative to the environmental effect of interest.
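Such post hoc assignment could be as simple as classifying observations by distance from a known disturbance, as in the sketch below. The planar coordinates, radii, and buffer width are hypothetical; real data would use geographic distances and a disturbance footprint rather than a single point.

```python
from math import hypot

def assign_groups(points: dict, impact_xy: tuple,
                  treatment_radius_km: float, buffer_km: float) -> dict:
    """Post hoc classification of VGI points: 'treatment' near the impact,
    'control' well beyond it, and None for ambiguous points in the buffer zone."""
    groups = {}
    for point_id, (x, y) in points.items():
        d = hypot(x - impact_xy[0], y - impact_xy[1])
        if d <= treatment_radius_km:
            groups[point_id] = "treatment"
        elif d > treatment_radius_km + buffer_km:
            groups[point_id] = "control"
        else:
            groups[point_id] = None  # exclude: too close to the impact to serve as a control
    return groups

# Points within 5 km of a disturbance at the origin are 'treatment';
# points beyond 15 km serve as post hoc 'controls'.
pts = {"a": (2.0, 3.0), "b": (40.0, 1.0), "c": (8.0, 6.0)}
print(assign_groups(pts, (0.0, 0.0), 5.0, 10.0))  # {'a': 'treatment', 'b': 'control', 'c': None}
```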

Replication

One of the biggest advantages of VGI is the potential for a large number of observations to be generated, so there may be sufficient replication to detect significant trends. Despite this potential, there are some issues of concern. High numbers of observations are more likely for charismatic and easy-to-detect species and phenomena, and less likely for hard-to-detect species (those that are shy, present in low numbers, or only active at night) or species that do not capture public interest. In addition, because observations are submitted via the Internet, there is the potential for some replicates to be illegitimate (e.g., fictional records from saboteurs); however, these are likely to be few in number, and rarer than genuine but unintentional errors. A number of techniques are available to filter data and impose quality control standards, and eBird makes effective use of automatic filtering processes. In addition to the automatic filtering, a network of more than 500 individuals acts as regional editors who screen records caught by the automatic filters. This is an example of "crowdsourcing," a term that describes the use of large numbers of individuals connected via social networking or other Web 2.0 applications to carry out tasks such as data validation (see Raykar et al. 2010 for an overview). Dickinson et al. (2010) provide a range of strategies for dealing with observer error and biases in data. These include providing comprehensive training, collecting information on survey effort, standardizing protocols, and filtering out data based on participants' levels of activity or years of experience. Collecting biographical/demographic information from citizen sensors (home address, level of expertise) can be critical for filtering out samples that are not in the area of interest and can also be used to verify sightings.
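An effort-based filter of the kind Dickinson et al. (2010) describe might look like the sketch below. The field names and thresholds (minimum checklists submitted, minimum years active) are hypothetical choices for illustration, not values from any published protocol.

```python
def filter_by_observer(records: list, observers: dict,
                       min_checklists: int = 20, min_years_active: int = 2) -> list:
    """Retain only records from observers whose activity history suggests
    reliable identification skills; everything else is set aside for review."""
    kept = []
    for rec in records:
        profile = observers.get(rec["observer_id"])
        if (profile is not None
                and profile["checklists"] >= min_checklists
                and profile["years_active"] >= min_years_active):
            kept.append(rec)
    return kept

# Example: only the experienced observer's record survives the filter.
observers = {"u1": {"checklists": 150, "years_active": 6},
             "u2": {"checklists": 3, "years_active": 1}}
records = [{"observer_id": "u1", "species": "Boreal Chickadee"},
           {"observer_id": "u2", "species": "Ivory-billed Woodpecker"}]
print(filter_by_observer(records, observers))
```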

The large data sets generated by VGI require consideration of how to handle, display, and manipulate large amounts of data (Dickinson et al. 2010). Novel techniques are emerging, mainly from the field of computer science (Frankel and Reid 2008; Howe et al. 2008; Lynch 2008). Such techniques will be useful for maximizing the scientific utility of large, citizen science-generated data sets. Without discounting the value of targeted, hypothesis-driven research, Kelling et al. (2009) propose a new analysis paradigm that can be applied to large surveillance monitoring data sets, in which occurrence patterns emerge through techniques tailored to the discovery of complex patterns in high-dimensional data.

Randomization

The assumption that citizen scientists are diverse enough to represent a random sample of the population may not be valid. The use of Web 2.0 technology to upload sightings may introduce biases that project managers should be aware of. Citizen sensors may be biased toward younger people who are more computer savvy. A certain level of scientific literacy may be required to contribute information (e.g., knowledge of species taxonomy). As well, citizen sensors in remote areas with limited or no Internet access will be unlikely to participate, and those who lack the financial means to acquire the tools (digital cameras, GPS-enabled devices) or the time to participate will not be able to contribute. Observations are therefore likely to be biased toward urban and settled areas rather than randomly distributed in space. The well-known power law of online participation (Wilkinson 2008; Stuckman and Purtilo 2009), whereby a few very keen individuals contribute the bulk of the observations on any kind of open platform (e.g., OpenStreetMap, Wikipedia), will also contribute to the nonrandomness of VGI data. There are further biases in how citizen scientists are made aware of the opportunity to contribute VGI: if a Web-based interface is promoted only via certain media or within certain geographic areas (e.g., cities but not rural areas), then potential contributors risk being excluded.

Problems of nonrandom VGI can be addressed fairly easily with post hoc data filtering and statistical tools if individual data points can be linked to individual citizen scientists. Thus, a Web interface that allows each user to set up a personal user account is a valuable strategy. This allows project managers to track which users are generating the most information, and that knowledge can be used to remove potential biases from the data. Account IDs can be fictitious names to preserve anonymity while still allowing individuals to be linked to their VGI (assuming one individual is not setting up multiple user accounts). The more that is known about the identity of the observers, the more the data can be filtered based on where each observer lies on the spectrum of contributors (Coleman et al. 2009, 2010). This information can be collected through a voluntary form that asks for simple biographical/demographic information. More creative options are to integrate monitoring/data collection with mobile gaming that moves users randomly through a landscape or prompts them to visit randomly assigned sites; see Tulloch (2008) for an example of a gaming application used to generate VGI. Both of these suggestions might be viewed as forms of facilitated VGI (sensu Seeger 2008).
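Once records are linked to user accounts, one simple post hoc correction for power-law participation is to cap the number of records any single observer contributes to an analysis, as in this sketch. The cap value and record structure are assumptions for illustration.

```python
import random
from collections import defaultdict

def cap_per_observer(records: list, max_per_observer: int = 50, seed: int = 42) -> list:
    """Randomly downsample each observer's records to a fixed cap so that the
    few hyperactive contributors typical of power-law participation do not
    dominate spatial or trend analyses."""
    rng = random.Random(seed)  # fixed seed so the subsample is reproducible
    by_observer = defaultdict(list)
    for rec in records:
        by_observer[rec["observer_id"]].append(rec)
    capped = []
    for recs in by_observer.values():
        if len(recs) > max_per_observer:
            recs = rng.sample(recs, max_per_observer)
        capped.extend(recs)
    return capped
```

More sophisticated alternatives (e.g., weighting observations inversely by contributor activity, or modeling observer effects explicitly) would preserve more data, but capping is easy to implement and explain.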


CONCLUSIONS

There is a large and growing literature on the use of VGI in environmental and ecological monitoring. Despite the fact that many of these monitoring projects are not based on a priori hypotheses, Wintle et al. (2010) contend that such extensive databases ("surveillance monitoring") may have as yet unanticipated benefits and thus should not always be discounted. VGI can have advantages in generating fine-grained data over large spatial extents and potentially over long time periods. Dickinson et al. (2010) suggest that such projects can complement hypothesis-driven research. Thus, it would be beneficial to design surveillance monitoring/citizen science projects to maximize their usefulness. Social science research that investigates the behavior of individuals on Web 2.0 sites, identifies which "publics" are involved in VGI, and quantifies the nature of their participation (e.g., Sieber 2006; Tulloch 2008) can play an important role in refining the use of VGI in monitoring. Careful consideration of how to manage and effectively use (including understanding the limits of) the potentially large data sets generated via VGI will also be important. VGI and surveillance monitoring programs that are carefully designed can minimize (or at least anticipate) the challenges of establishing scientific controls and achieving sufficient randomization and replication of data collection, and can thus enhance the scientific credibility of the monitoring endeavor. Such programs have a higher likelihood of contributing positively to management and policy decisions.





ACKNOWLEDGMENTS

This essay is based on discussions with my colleagues on Project P-IV 41, "The Participatory Geoweb and Environmental Change," which is funded by GEOIDE (GEOmatics for Informed DEcisions), a Canadian National Centre of Excellence. Thanks to R. Sieber, R. Lukyanenko, B. Klinkenberg, C. Rinner, and two anonymous reviewers for critical feedback on an earlier draft of this manuscript.



LITERATURE CITED

Bishr, M., and L. Mantelas. 2008. A trust and reputation model for filtering and classifying knowledge about urban growth. GeoJournal 72:229-237.

Bonney, R., C. B. Cooper, J. Dickinson, S. Kelling, T. Phillips, K. V. Rosenberg, and J. Shirk. 2010. Citizen science: a developing tool for expanding science knowledge and scientific literacy. BioScience 59:977-984.

Chakraborty, J., and M. M. Bosman. 2005. Measuring the digital divide in the United States: race, income, and personal computer ownership. Professional Geographer 57:395-410.

Chinn, M. D., and R. W. Fairlie. 2007. The determinants of the global digital divide: a cross-country analysis of computer and internet penetration. Oxford Economic Papers 59:16-44.

Coleman, D., Y. Georgiadou, and J. Labonte. 2009. Volunteered geographic information: the nature and motivation of produsers. International Journal of Spatial Data Infrastructures Research 4:332-358.

Coleman, D., B. Sabone, and N. J. Nkhwanana. 2010. Volunteering geographic information to authoritative databases: linking contributor motivations to program characteristics. Geomatica 64:27-39.

De Longueville, B., G. Luraschi, P. Smits, S. Peedell, and T. De Groeve. 2010. Citizens as sensors for natural hazards: a VGI integration workflow. Geomatica 64:41-59.

Dickinson, J. L., B. Zuckerberg, and D. N. Bonter. 2010. Citizen science as an ecological research tool: challenges and benefits. Annual Review of Ecology, Evolution, and Systematics 41:149-172.

Elwood, S. 2008a. Volunteered geographic information: key questions, concepts and methods to guide emerging research and practice. GeoJournal 72:133-135.

Elwood, S. 2008b. Volunteered geographic information: future research directions motivated by critical, participatory and feminist GIS. GeoJournal 72:173-183.

Feick, R., and S. Roche. 2010. Introduction – special issue on volunteered geographic information (VGI). Geomatica 64:7-9.

Fink, D., W. M. Hochachka, B. Zuckerberg, D. W. Winkler, B. Shaby, M. A. Munson, G. Hooker, M. Riedewald, D. Sheldon, and S. Kelling. 2010. Spatiotemporal explanatory models for broad-scale survey data. Ecological Applications 20:2131-2147.

Flanagin, A. J., and M. J. Metzger. 2008. The credibility of volunteered geographic information. GeoJournal 72:137-148.

Francis, C. M., P. J. Blancher, and R. D. Phoenix. 2009. Bird monitoring programs in Ontario: what have we got and what do we need? The Forestry Chronicle 85:202-217.

Frankel, F., and R. Reid. 2008. Distilling meaning from data. Nature 455:30.

Goodchild, M. 2007a. Citizens as voluntary sensors: spatial data infrastructure in the world of Web 2.0. International Journal of Spatial Data Infrastructure Research 2:24-32.

Goodchild, M. F. 2007b. Citizens as sensors: the world of volunteered geography. GeoJournal 69:211-221.

Gouveia, C., and A. Fonesca. 2008. New approaches to environmental monitoring: the use of ICT to explore volunteered geographic information. GeoJournal 72:185-197.

Grira, J., Y. Bédard, and S. Roche. 2010. Spatial data uncertainty in the VGI world: going from consumer to producer. Geomatica 64:61-71.

Haughland, D. L., J.-M. Hero, J. Schieck, J. G. Castley, S. Boutin, P. Sólymos, B. E. Lawson, G. Holloway, and W. E. Magnusson. 2010. Planning forwards: biodiversity research and monitoring systems for better management. Trends in Ecology and Evolution 25:199-200.

Howe, D., M. Costanzo, P. Fey, T. Gojobori, L. Hannick, W. Hide, D. P. Hill, R. Kania, M. Schaeffer, S. St. Pierre, S. Twigger, O. White, and S. Y. Rhee. 2008. The future of biocuration. Nature 455:47-50.

Kelling, S., W. M. Hochachka, D. Fink, M. Riedewald, R. Caruana, G. Ballard, and G. Hooker. 2009. Data-intensive science: a new paradigm for biodiversity studies. BioScience 59:613-620.

Lindenmayer, D. B., and G. E. Likens. 2009. Adaptive monitoring: a new paradigm for long-term research and monitoring. Trends in Ecology and Evolution 24:482-486.

Lindenmayer, D. B., and G. E. Likens. 2010. Improving ecological monitoring. Trends in Ecology and Evolution 25:200-201.

Lynch, C. 2008. How do your data grow? Nature 455:28-29.

Munson, M. A., R. Caruana, D. Fink, W. M. Hochachka, M. Iliff, K. V. Rosenberg, D. Sheldon, B. L. Sullivan, C. Wood, and S. Kelling. 2010. A method for measuring the relative information content of data from different monitoring protocols. Methods in Ecology and Evolution 1:263-273.

Nichols, J. D., and B. K. Williams. 2006. Monitoring for conservation. Trends in Ecology and Evolution 21:668-673.

Nudds, T. D. 1999. Adaptive management and the conservation of biodiversity. Pages 179-193 in R. K. Baydack, H. Campa, III, and J. B. Haufler, editors. Practical approaches to the conservation of biological diversity. Island Press, Washington, D.C., USA.

Nudds, T. D., and M.-A. Villard. 2009. Is monitoring growing up? Avian Conservation and Ecology 4(1):7.

Raykar, V. C., S. Yu, L. H. Zhao, G. H. Valadez, C. Florin, L. Bogoni, and L. Moy. 2010. Learning from crowds. Journal of Machine Learning Research 11:1297-1322.

Seeger, C. J. 2008. The role of facilitated volunteered geographic information in the landscape planning and site design process. GeoJournal 72:199-213.

Sieber, R. E. 2006. Public participation geographic information systems: a literature review and framework. Annals of the Association of American Geographers 96:491-507.

Silvertown, J. 2009. A new dawn for citizen science. Trends in Ecology and Evolution 24:467-471.

Stuckman, J., and J. Purtilo. 2009. Measuring the wikisphere. Proceedings of the 5th International Symposium on Wikis and Open Collaboration, WikiSym 2009 October 25-27. Article No. 111. Orlando, Florida, USA. [online] URL: http://www.wikisym.org/ws2009/procfiles/p111-stuckman.pdf.

Sullivan, B. L., C. L. Wood, M. J. Iliff, R. E. Bonney, D. Fink, and S. Kelling. 2009. eBird: a citizen-based bird observation network in the biological sciences. Biological Conservation 142:2282-2292.

Trumbull, D. J., R. Bonney, D. Bascom, and A. Cabral. 2000. Thinking scientifically during participation in a citizen science project. Science Education 84:265-275.

Tulloch, D. L. 2008. Is VGI participation? From vernal pools to video games. GeoJournal 72:161-171.

Wiersma, Y. F. 2005. Environmental benchmarks vs. ecological benchmarks in assessment and monitoring in Canada: is there a difference? Environmental Monitoring and Assessment 100:1-9.

Wiersma, Y. F., and M. Campbell. 2002. A monitoring framework for Canada's National Parks: assessing integrity across a system. Pages 196-212 in S. Bondrop-Nielsen and N. W. P. Munro, editors. Managing Protected Areas in a Changing World, Proceedings of the Fourth International Conference on Science and Management of Protected Areas (Waterloo, ON, Canada, 2000). Science and Management of Protected Areas Association, Wolfville, NS, Canada.

Wilkinson, D. M. 2008. Strong regularities in online peer production. Pages 302-309 in L. Fortnow, J. Riedl, and T. Sandholm, editors. Proceedings of the 9th ACM Conference on Electronic Commerce. ISBN 978-1-60558-169-9.

Wintle, B. A., M. C. Runge, and S. A. Bekessy. 2010. Allocating monitoring effort in the face of unknown unknowns. Ecology Letters 13:1325-1337.


Address of Correspondent:
Yolanda F. Wiersma
300 Prince Phillip Drive
St. John's, NL, Canada A1B 3X9
ywiersma@mun.ca