Description of Project

Robert Borofsky

Center for a Public Anthropology

Hawaii Pacific University

This project, presented by the Center for a Public Anthropology, highlights anthropological publications that attract the broader public’s attention – in news outlets, policy documents, and blogs. The emphasis is on publications, not individuals. This has the advantage of avoiding sensationalist statements in various media that attract attention for one reason or another. The focus is on the professional, intellectual productions that lie at the heart of the discipline – publications. We track which publications draw the attention of the broader public. As noted below, the project is presently limited to publications by full-time faculty at the top 60 American schools (as ranked by their research funding).

Differing Perspectives

Anthropologists often address their publications to small coteries of colleagues. Reinforcing this tendency, they frequently assess a publication’s value by the number of times colleagues cite it. Such assessments suggest anthropology is primarily meant for anthropologists. They convey an intellectual insularity.

Agencies and foundations outside academia, however, fund most anthropological research. (Anthropologists rarely fund their own work.) A key criterion of these funders is that the research they fund have value for the broader public, not just for a few individuals. The National Science Foundation (NSF), for example, requires all proposals and final reports to specify the “broader impacts” of their research, defined as encompassing “the potential to benefit society and contribute to the achievement of specific, desired, societal outcomes” and written “insofar as possible, [to] be understandable to a scientifically . . . literate lay reader” (http://www.nsf.gov/pubs/policydocs/pappguide/nsf13001/gpg_2.jsp#IIC2d and http://www.nsf.gov/pubs/policydocs/pappguide/nsf13001/gpg_3.jsp#IIIA2). The National Institutes of Health (NIH) Office of Behavioral and Social Sciences Research affirms that “realizing the full potential of our Nation’s investment in health research requires that science inform both practice and policy . . . we can stimulate relevant and usable research that is informed by the needs of end users whether they are healthy individuals, patients, practitioners, community leaders, or policymakers” (https://obssr.od.nih.gov/pdf/OBSSR_Prospectus.pdf, p. 17). Paralleling these perspectives, Research Councils UK (RCUK) stresses a commitment “to supporting and rewarding researchers to engage with the public” (http://www.rcuk.ac.uk/pe/embedding/).

The fact that anthropologists frequently write for a small coterie of colleagues does not mean their research lacks relevance to the broader public. It can be quite relevant. But few metrics measure and reward those whose publications are noticed by people outside the academy. The main metrics, such as Google Scholar, highlight the attention academics pay to other academics’ publications.

Three Audiences: Funding Agencies, University Administrators, and Faculty

Altmetrics – short for alternative metrics – offers a set of measures capable of tracking where publications are mentioned. These can be used to highlight the public value of anthropological publications. By clicking on the red and orange rectangles to the left of a publication, readers can see which news outlets, policy documents, and blogs refer to that publication. Such metrics matter because, quoting a recent newsletter, “Funders, review and hiring committees are always on the look out for extra evidence of the broader dissemination and influence of research” (http://www.altmetric.com/blog/tell-us-how-you-are-making-a-difference-5-top-tips-for-interpreting-your-altmetrics-data/).
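For readers curious how such mention counts might be tallied in practice, the sketch below processes a JSON record shaped like the ones Altmetric’s public details API returns for a publication. The field names (e.g., cited_by_msm_count) are modeled on that API, but the record and its figures here are illustrative assumptions, not actual data for any publication:

```python
import json

# A sample record shaped like an Altmetric-style API response.
# The counts below are illustrative assumptions, not real data.
sample_record = json.loads("""
{
  "title": "Example anthropology article",
  "cited_by_msm_count": 4,
  "cited_by_policies_count": 1,
  "cited_by_feeds_count": 7
}
""")

def summarize_attention(record):
    """Tally the mention types this project emphasizes:
    news outlets, policy documents, and blogs."""
    return {
        "news": record.get("cited_by_msm_count", 0),
        "policy": record.get("cited_by_policies_count", 0),
        "blogs": record.get("cited_by_feeds_count", 0),
    }

summary = summarize_attention(sample_record)
print(summary)
```

Missing fields default to zero, since a publication with no mentions of a given type simply lacks that count in the record.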

Given funding agencies’ concern with public outreach, having an applicant’s research cited in news outlets or policy documents offers an advantage in the grant review process. A recent commentary in Nature notes: “In the digital age, a growing number of researchers and publishers are using more than just [academic] citation counts to track the impact of their publications. In an essay in PLoS Biology, three authors from a major UK research-funding agency argue that alternative metrics — or altmetrics, such as social-media mentions — can help funders to measure the full reach of the research that they support” (http://www.nature.com/news/funders-drawn-to-alternative-metrics-1.16524).

Likewise, university administrators are concerned with the public significance of research conducted at their institutions. Having faculty cited by news outlets enhances the reputations of their schools. While administrators rarely structure promotion and tenure solely around a scholar’s outreach, they clearly value the public recognition of their faculty.

By providing a clear measurable standard for public recognition, this project can enhance an institution’s status with the funders, alumni, and politicians who support it. Quoting from the Society of Research Administrators International, “The need for metrics is apparent. When thoughtfully composed and well aligned with the strategic mission of an institution and its units, such metrics can help assess success and areas for improvement at department, office, school, or center level[s]” (http://srainternational.org/).

By training and experience, faculty are often encouraged to focus on narrow academic subjects. That does not mean public outreach cannot enhance their careers. Such outreach can prove an asset in funding applications as well as in promotion and tenure reviews. The Evolution of Impact Indicators suggests, “the potential uses of altmetrics for academics fall into three main categories: for monitoring and tracking early attention [to their publication vs. the much delayed citations in academic journals], for showcasing engagement, and for discovery purposes . . . at present, authors rely on download stats, citation data (which takes a long time to accrue), and direct feedback from the academic community to gauge how their work has been received. With altmetrics, those same authors can start to see not only how academics but also how the wider public are responding to their work as soon as it is published” (http://www.opda.cam.ac.uk/file/evolution-of-impact-indicators.pdf, pp. 23-24).

With respect to “discovery purposes,” the project’s metrics offer guidance for increasing public recognition of anthropological publications. As noted under “How data are collected and scored,” publications relating to archeology and biological anthropology tend to be cited in the media more than those dealing with cultural anthropology. Because they appear in more interdisciplinary journals, archeological and biological anthropology publications tend to have a wider distribution than the discipline’s culturally focused journals. This fits with a point Borofsky emphasizes in Why a Public Anthropology? (2011) – anthropology tends to work best when it works with others.

The Top 60 Departments

Why does the project focus on anthropology departments at the top 60 American universities as ranked by their research funding (see https://ncsesdata.nsf.gov/profiles/site?method=rankingBySource&ds=herd)? Partly because the project’s metric may attract more attention at these schools. But primarily the choice derives from a pragmatic concern: restricting the number of anthropologists and schools examined makes sorting the data more manageable. Especially in the project’s early stages, this sorting is done manually. Even a focus on only 60 schools means tracking references to publications authored by 1,475 anthropologists.

Attention Versus Impact

A common frame of reference for assessing a publication’s value is its “impact factor” – a standard suggested by Eugene Garfield in 1955 (see https://en.wikipedia.org/wiki/Impact_factor). According to Webster’s Third International Dictionary, impact, as a noun, refers to “the force of impression of one thing on another.” While some anthropological publications may “arouse and hold attention and interest,” they rarely constitute a “concerted force producing change.” That is to say, they rarely have real impact.

In a study Borofsky conducted (2011) of how academics cite prominent anthropologists – such as Lévi-Strauss, Geertz, and Wolf – most citations to these figures included no extensive discussion. The citations mostly referred to the figures in passing, discussing them for no more than two sentences. Assessing impact by academic citation scores, in other words, conveys a false impression: it overstates the actual attention a publication receives.

Instead of referring to a publication’s impact factor, the project focuses on the degree to which a publication attracts attention – especially in news outlets and policy documents. More references in more media outlets suggest more public attention.

The Center

The Center for a Public Anthropology, which supports this project, is a non-profit – a 501(c)(3) under the U.S. tax code – that encourages scholars and their students to address public problems in public ways. It operates two websites: www.publicanthropology.org and www.publicanthropology.net. As the Center’s logo affirms, the Center fosters accountability in higher education. Phrased another way, the Center seeks to encourage academics to move beyond the traditional “do no harm” ethos to one that strives to “do good” – to focus on benefiting others.

In collaboration with the University of California Press, the Center created and now directs the California Series in Public Anthropology. It also adjudicates, with the Press, the yearly California Public Anthropology International Competition, which selects promising proposals – prior to their completion – for publication in the Series.

Over the past several years, the Center’s Community Action Project (www.publicanthropology.net) has worked with more than 70 colleges and universities from across North America to address various ethical concerns. Through this project, for example, the Center has played a role in facilitating the return of the Yanomami blood samples, once stored in American research institutions, to the Yanomami. It has also facilitated greater enforcement of an NSF regulation requiring reports of a project’s benefits (see http://www.nsf.gov/pubs/policydocs/pappguide/nsf15001/aag_2.jsp#IID3).

In addition to the current project, being conducted in collaboration with Altmetric.com, the Center is working on an Assessment of Core Educational Proficiencies (ACEP). This assessment seeks to measure and improve college students’ critical thinking, problem solving and effective writing skills. For further information about the Center, please write Borofsky@publicanthropology.org.

References:

Williams, Catherine, and Danielle Padula. The Evolution of Impact Indicators: From Bibliometrics to Altmetrics. 2015. London: Altmetric; Chicago: Scholastic.

Borofsky, Robert. Why a Public Anthropology? 2011. Kailua, HI: Center for a Public Anthropology.