The Evaluating Merit Review webinar series features AIBS staff, academics, research funders, and government officials presenting results and perspectives on the science of the peer review process, with a particular focus on the review of funding applications. AIBS SPARS is committed to analyzing and sharing the results of our own studies, as well as disseminating and promoting discussion of others’ findings in the growing body of research on peer review. One of our goals at AIBS SPARS is to foster a community-wide discussion that enhances the integrity of the peer review process, which has served science well for many years. Links to archived webinars in the series are listed below:
Monday, July 9, 2018
4:00 PM EDT
About the Presenter:
Professor Adrian Barnett has a Bachelor of Science in Statistics from University College London and a PhD from the University of Queensland. He has worked for over 21 years as a statistician for drug companies, research councils, and universities. He holds a Senior Research Fellowship from the Australian National Health and Medical Research Council for a project titled "Meta-research: Using research to increase the value of health and medical research". He is the current vice-president of the Statistical Society of Australia.
Event took place on Wednesday, February 14, 2018
George Chacko will present a simple, accessible framework for mining, linking, and analyzing data from policy documents, regulatory approvals, research grants, bibliographic and patent databases, and clinical trials in order to document collaborations and identify influential research accomplishments. He will present observations on the role of peer review in these accomplishments and discuss how the framework could be used to extend these studies.
Presenters:
George Chacko, Chief Scientist at NET ESolutions Corporation
George Chacko is Chief Scientist at NET ESolutions Corporation, McLean, VA. His group is interested in applying modern information technology techniques to problems in research evaluation. A particular focus is the acquisition, integration, and curation of linked research data, coupled with network methods, to illuminate the historical scientific achievements that invariably precede major translational breakthroughs. An example of the group's work can be seen in Keserci et al. (2017) http://www.heliyon.com/article/e00442/. Prior to joining NETE, he worked at the University of Illinois Urbana-Champaign and at the Center for Scientific Review, National Institutes of Health, in various roles spanning peer review, portfolio analysis, and research analytics. His doctoral and postdoctoral research concerned platelet and lymphocyte activation by immunoreceptor tyrosine-based activation motif (ITAM) receptors.
Event took place on Thursday, May 19, 2016
Science relies on experts to evaluate the correctness, quality, relevance, and importance of manuscripts. Finding appropriate matches between reviewers and manuscripts is surprisingly labor intensive, leading to unintended tardiness, subjectivity, and errors. In this talk, Dr. Daniel Acuna of Northwestern University will describe his work on automating this matching using machine learning. Dr. Acuna will also describe an article-scoring approach that attempts to correct for excessive harshness and other biases.
Event took place on Thursday, February 18, 2016
Speaker: Michael Lauer, Deputy Director for Extramural Research at the National Institutes of Health (NIH)
Event took place on Thursday, October 15, 2015
Speaker: Carole Lee, University of Washington, Seattle
Event took place on Thursday, August 27, 2015
Speaker: Stephen Gallo, Technical Operations Manager, Scientific Peer Advisory & Review Services (SPARS)