
EbookBell.com

Most ebook files are in PDF format, so you can easily read them with software such as Foxit Reader, or directly in the Google Chrome browser.
Some ebook files are released by publishers in other formats such as .azw, .mobi, .epub, and .fb2. You may need to install specific software, such as Calibre, to read these formats on mobile or PC.

Please read the tutorial at this link: https://ebookbell.com/faq


We offer FREE conversion to the popular formats you request; however, this may take some time. Therefore, right after payment, please email us, and we will try to provide the service as quickly as possible.


If you encounter an unusual file format or a broken link, please do not open a dispute. Instead, email us first, and we will try to assist within a maximum of 6 hours.

EbookBell Team

Bibliometrics And Research Evaluation Uses And Abuses Yves Gingras

  • SKU: BELL-28537540
$31.00 (list price $45.00, 31% off)

0.0 (0 reviews)

Bibliometrics and Research Evaluation: Uses and Abuses by Yves Gingras is available for instant download after payment.

Publisher: The MIT Press
File Extension: PDF
File size: 2.53 MB
Pages: 156
Author: Yves Gingras
ISBN: 9780262337649, 0262337649
Language: English
Year: 2016

Product description


Why bibliometrics is useful for understanding the global dynamics of science but generates perverse effects when applied inappropriately in research evaluation and university rankings.
The research evaluation market is booming. “Ranking,” “metrics,” “h-index,” and “impact factors” are reigning buzzwords. Government and research administrators want to evaluate everything—teachers, professors, training programs, universities—using quantitative indicators. Among the tools used to measure “research excellence,” bibliometrics—aggregate data on publications and citations—has become dominant. Bibliometrics is hailed as an “objective” measure of research quality, a quantitative measure more useful than “subjective” and intuitive evaluation methods such as peer review that have been used since scientific papers were first published in the seventeenth century. In this book, Yves Gingras offers a spirited argument against an unquestioning reliance on bibliometrics as an indicator of research quality. Gingras shows that bibliometric rankings have no real scientific validity, rarely measuring what they pretend to.
Although the study of publication and citation patterns, at the proper scales, can yield insights on the global dynamics of science over time, ill-defined quantitative indicators often generate perverse and unintended effects on the direction of research. Moreover, abuse of bibliometrics occurs when data is manipulated to boost rankings. Gingras looks at the politics of evaluation and argues that using numbers can be a way to control scientists and diminish their autonomy in the evaluation process. Proposing precise criteria for establishing the validity of indicators at a given scale of analysis, Gingras questions why universities are so eager to let invalid indicators influence their research strategy.
