
PLEASE READ

(Meant especially for those skeptical about this project).

Robert Borofsky

Center for a Public Anthropology

No doubt some will view Public Anthropology’s Metrics Project as provocative. Viewed from one perspective, it presents, as never before, clearly delineated metric details of which anthropologists in which departments publish material that is commented on by media from around the world. It moves beyond Google Scholar’s focus on who cites whom in academic publications. Moreover, the data are transparent. Readers can see which of the world’s media take note of what anthropologists are writing, and what they say about them.

Understanding the public impact of anthropological publications is particularly important at this time of disciplinary decline. As An Anthropology of Anthropology notes,1 the field is diminishing in terms of members, departments, and impact.2 And there is the problem raised by David Graeber:

There is a general consensus that anthropology is in trouble. It is a discipline sitting on top of a vast compendium of accumulated knowledge about human social and cultural achievement and possibility, increasingly uncertain as to what, precisely, was the point of compiling it.3

Outside funding would help address the problem. But why should governments and foundations fund a discipline that only seems relevant to a small coterie of scholars? The Metrics Project draws anthropologists to write for a broader public.

Viewed from a different perspective, using the Project’s metric data to not only assess the public outreach of particular individuals and departments, but to rank them vis-à-vis one another is akin to stepping on a large hornet’s nest. As An Anthropology of Anthropology observes, there is considerable faculty pushback against framing intellectual productivity in metric terms.4 A report by the American Association of University Professors (AAUP) regarding Academic Analytics cautions that “measuring faculty ‘productivity’ with an exclusive or excessive emphasis on quantitative measures of research output must inevitably fail to take adequate account of the variety and totality of scholarly accomplishments.”5 Paul Jump, reporting on metric-based assessments, suggests:

Research managers can become “over-reliant on indicators that are widely felt to be problematic or not properly understood . . . or on indicators that may be used insensitively or inappropriately,” and do not “fully recognize the diverse contributions of individual researchers to the overall institutional mission or the wider public good.”6

Cris Shore and Susan Wright write that a key result of this focus on metrics “has been its effect in changing the identity of professionals and the way they conceptualize themselves. The audited subject is recast as a depersonalized unit of economic resource whose productivity and performance must constantly be measured and enhanced.”7

Readers should also realize, quoting Jump again, that this focus on metric assessments:

…is being whipped up by “powerful currents” arising from, inter alia, “growing pressures for audit and evaluation of public spending on higher education and research; demands by policymakers for more strategic intelligence on research quality and impact; [and] competition within and between institutions for prestige, students, staff and resources.”

Metrics—numbers—give at least the impression of objectivity, and they have become increasingly important in the management and assessment of research ever since citation databases such as the Science Citation Index, Scopus, and Google Scholar became available online in the early 2000s. Metrics are particularly popular in political circles. . . . Within universities, too, metrics have been widely adopted, not merely for institutional benchmarking but also, increasingly, for managing the performance of academics. . . . The Metric Tide . . . attributes this state of affairs to the increasing pressure on universities to be “more accountable to government and public funders of research,” and also to the financial pressures imposed on institutions by constrained funding and globalization.8

At issue, in enhancing anthropology’s public outreach, is how to proceed without “throwing out the baby with the bathwater.” How can we gain the benefits of assessing the public outreach of anthropological writings beyond the academy without turning anthropologists into pen-pushing peons producing publications of ambiguous quality? To me, emphasizing anthropology’s value to those beyond the academy – whether they be funders, the broader public, or those who facilitate an anthropologist’s research in the field – seems a reasonable way to help reverse the discipline’s decline.

That said, it is critical to note that the data presented here are not perfect. They do not reflect truth in the correspondence sense; they do not perfectly match reality. They present an important picture of which anthropology departments and faculty produce publications that are noted by those beyond the academy. But it stands to reason that, given the massive amount of data presented here, there would be problems. Let me point out two.

Patrick V. Kirch is a distinguished archeologist focusing on the Pacific. He has published a number of important books, including Anahulu (co-authored with Marshall Sahlins), which was awarded the 1998 J. I. Staley Prize.9 His webpage at the University of Hawaii Anthropology Department notes his considerable community engagement.10 Yet he is listed in the Metrics Project with only one publication, “Simulating social-ecological systems: the Island Digital Ecosystem Avatars (IDEA) consortium,” in GigaScience. This clearly does not express the breadth of Kirch’s intellectual endeavors. As for his prominent books, it should be noted that Altmetric only started including books in 2016. Kirch’s only book published after that, Heiau, ‘Āina, Lani: The Hawaiian Temple System in Ancient Kahikinui and Kaupō, Maui, co-authored with Clive Ruggles, is a study of 78 temple sites in southeastern Maui.11 I presume a number of his articles after 2011 – the date Altmetric started seriously collecting data – are specialized as well.

Clearly the Metrics Project’s data do not convey the breadth of Kirch’s contributions – such as his membership on the Board of Directors of the Bishop Museum or on the Advisory Board of the Hawai’i Land Trust. The Metrics Project is focused on the degree to which an author’s publications are taken note of by national and international media outside the academy. Specialized studies on Hawaii do not necessarily attract such attention.

Generally, when one clicks on a publication listed, the individual referred to will be one of the publication’s authors. Occasionally, however, it is not. In an earlier version of the Metrics Project, I found 67 references to the recently deceased Paul Farmer, and the results of that analysis point to another problem with the Metrics material. For most of these references, Paul Farmer is listed as an author. Of the others, one cited him in the publication’s text, another in the publication’s references, and for one he authored the foreword. There are also two anomalous cases. One publication lists a Paul Hamilton and a Madison Farmer as authors; presumably, the search engine combined them into Paul Farmer. The other, “The Year That Ebola Virus Took Over West Africa” by Bausch in the American Journal of Tropical Medicine and Hygiene,12 makes no specific mention of Paul Farmer. As its title suggests, the article is about the Ebola epidemic in West Africa. Paul Farmer, in his role as Chief Strategist for Partners in Health, was quite involved in treating Ebola in West Africa during the period referred to, and I am unsure how his name became associated with this article. But the error rate, if one includes both anomalies, is two of sixty-seven references, or roughly three percent.

“The perfect is the enemy of the good.” I first heard this aphorism – commonly attributed to Voltaire – when Nancy Pelosi, then Speaker of the U.S. House of Representatives, invoked it in describing President Obama’s healthcare plan.13 In the present context, it refers to the Metrics Project, which, while not perfect, offers a powerful way to nudge anthropologists to focus more on the broader public in their publications.

To stem the discipline’s decline, the Metrics Project seeks to encourage anthropologists to be more inclusive in their writings. Quoting Primo Levi, “if not now, when?”14 Waiting for the Metrics Project to be completely cleansed of its errors at some unspecified date in the future will not do. Too much time would be lost. In the case of the Affordable Care Act (aka Obamacare), 21.3 million Americans are now enrolled in its coverage.15 According to the Center on Budget and Policy Priorities, it has “saved the lives of at least 19,200 adults aged 55 to 64 over the four-year period from 2014 to 2017” in states that expanded medical coverage.16 Imagine what the statistics would be if the U.S. Congress had waited.

I agree that academic assessments such as Academic Analytics can be deeply problematic.17 But I would suggest the problem lies less with the metric assessments themselves than with the careless and manipulative ways various academic administrators apply them. As the above example of Patrick Kirch demonstrates, the Metrics Project is not a total measure of an individual’s “public outreach.” It is a measure of the degree to which anthropological publications are noted in a host of publications from around the world, across a range of subjects, beyond the academy and its academic focus. It is a nudge to speak to a wider audience.

Despite this limited objective, some will still likely criticize the Metrics Project for its intent. But what are they offering to reverse the discipline’s decline? Is it something that can be implemented as easily and effectively as the Metrics Project? Does it have its own imperfections?

Let me suggest two ways to address errors readers find in the massive amount of data presented by the Metrics Project. First, readers can email me errors they notice in their work. (My address is borofsky@publicanthropology.org). To make a change, I need:

  1. the URL for the relevant publication and
  2. the URLs for at least three reasonably prominent national or international media sources that discuss the publication (since that is the standard currently being applied for the Metrics Project).

This solution lacks the “bells and whistles” that Altmetric provides, but it does address biases in the departmental rankings. Second, for those interested in Altmetric’s more elaborate presentations as well as the ability to cross-check citations, please contact Altmetric via this form: https://form.typeform.com/to/SWoxIY?typeform-source=altmetric.typeform.com. For more information about Altmetric and how it operates, readers might wish to visit: https://help.altmetric.com/support/solutions.

The Metrics Project will certainly have its critics. Given the current temper of the discipline, it is inevitable. But might we agree that the considerable metric data Altmetric has compiled on anthropological publications can be used to a positive end – encouraging anthropologists to write for broader audiences? Some anthropologists already do an excellent job of speaking to the broader public. But, as the Metrics Project illustrates, others need to be nudged along. Are you open to helping?



REFERENCES

  1. Robert Borofsky, An Anthropology of Anthropology (Center for a Public Anthropology, 2019), 206-207.
  2. Compare with Charles King, Gods of the Upper Air (Doubleday, 2019).
  3. Robert Borofsky, An Anthropology of Anthropology (Center for a Public Anthropology, 2019), front matter.
  4. Robert Borofsky, An Anthropology of Anthropology (Center for a Public Anthropology, 2019), 198 ff.
  5. “Statement on ‘Academic Analytics’ and Research Metrics,” American Association of University Professors (AAUP), posted March 22, 2016, https://www.aaup.org/sites/default/files/2016-AcademicAnalytics_statement.pdf, accessed June 21, 2024.
  6. Paul Jump, “Metrics: How to Handle Them Responsibly,” Times Higher Education, July 9, 2015, https://www.timeshighereducation.com/features/metrics-how-to-handle-them-responsibly, accessed June 21, 2024.
  7. Cris Shore and Susan Wright, “Coercive Accountability,” in Audit Cultures, ed. Marilyn Strathern (Routledge, 2000), 62.
  8. Paul Jump, “Metrics: How to Handle Them Responsibly,” Times Higher Education, July 9, 2015, https://www.timeshighereducation.com/features/metrics-how-to-handle-them-responsibly, accessed June 21, 2024.
  9. Patrick Vinton Kirch and Marshall Sahlins, Anahulu, Volumes 1 and 2 (University of Chicago Press, 1994).
  10. “Patrick V. Kirch,” Anthropology, University of Hawai’i at Mānoa, https://anthropology.manoa.hawaii.edu/pat-kirch/, accessed June 21, 2024.
  11. Patrick Vinton Kirch and Clive Ruggles, Heiau, ‘Āina, Lani: The Hawaiian Temple System in Ancient Kahikinui and Kaupō, Maui (University of Hawaii Press, 2019).
  12. Daniel G. Bausch, “The Year that Ebola Virus Took Over West Africa: Missed Opportunities for Prevention,” The American Journal of Tropical Medicine and Hygiene 92, no. 2 (2015): 229-232, https://doi.org/10.4269/ajtmh.14-0818.
  13. “Perfect is the Enemy of Good,” Wikipedia, last modified August 23, 2024, 15:46 (UTC), https://en.wikipedia.org/wiki/Perfect_is_the_enemy_of_good.
  14. “If Not Now, When?,” Wikipedia, last modified June 5, 2024, 13:50 (UTC), https://en.wikipedia.org/wiki/If_Not_Now,_When%3F_(novel).
  15. Centers for Medicare & Medicaid Services, press release, January 24, 2024, https://www.cms.gov/newsroom/press-releases/historic-213-million-people-choose-aca-marketplace-coverage, accessed June 21, 2024.
  16. “Medicaid Expansion Has Saved at Least 19,000 Lives, New Research Finds,” Center on Budget and Policy Priorities, posted November 6, 2019, https://www.cbpp.org/research/health/medicaid-expansion-has-saved-at-least-19000-lives-new-research-finds#:~:text=The%20Affordable%20Care%20Act's%20(ACA,period%20from%202014%20to%20201, accessed June 21, 2024.
  17. Robert Borofsky, An Anthropology of Anthropology (Center for a Public Anthropology, 2019), 197-199.
