[ANPPOM-Lista] San Francisco Declaration on Research Assessment

Carlos Palombini cpalombini at gmail.com
Tue Oct 25 11:59:03 BRST 2016


Hello,

A few days ago I came across this declaration on research assessment,
which may interest others who might wish to sign it. The address is
http://www.ascb.org/dora.

Carlos
San Francisco Declaration on Research Assessment

Putting Science Into The Assessment of Research

There is a pressing need to improve the ways in which the output of
scientific research is evaluated by funding agencies, academic
institutions, and other parties.

To address this issue, a group of editors and publishers of scholarly
journals met during the Annual Meeting of The American Society for Cell
Biology (ASCB) in San Francisco, CA, on December 16, 2012. The group
developed a set of recommendations, referred to as the San Francisco
Declaration on Research Assessment. We invite interested parties across all
scientific disciplines to indicate their support by adding their names to
this Declaration.

The outputs from scientific research are many and varied, including:
research articles reporting new knowledge, data, reagents, and software;
intellectual property; and highly trained young scientists. Funding
agencies, institutions that employ scientists, and scientists themselves,
all have a desire, and need, to assess the quality and impact of scientific
outputs. It is thus imperative that scientific output is measured
accurately and evaluated wisely.

The Journal Impact Factor is frequently used as the primary parameter with
which to compare the scientific output of individuals and institutions. The
Journal Impact Factor, as calculated by Thomson Reuters, was originally
created as a tool to help librarians identify journals to purchase, not as
a measure of the scientific quality of research in an article. With that in
mind, it is critical to understand that the Journal Impact Factor has a
number of well-documented deficiencies as a tool for research assessment.
These limitations include:

   - citation distributions within journals are highly skewed [1–3];
   - the properties of the Journal Impact Factor are field-specific: it
   is a composite of multiple, highly diverse article types, including
   primary research papers and reviews [1, 4];
   - Journal Impact Factors can be manipulated (or “gamed”) by
   editorial policy [5]; and
   - the data used to calculate the Journal Impact Factors are neither
   transparent nor openly available to the public [4, 6, 7].
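
As a numerical illustration of the first point (this example is not
part of the declaration): the impact factor is a mean over a heavily
skewed citation distribution, so a handful of highly cited papers can
dominate it while the typical article sits far below. A minimal Python
sketch with made-up citation counts contrasts the impact-factor-style
mean with the median:

    # Illustrative only: synthetic citation counts for one journal's
    # citable items from the two preceding years (heavily skewed).
    citations = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 5, 7, 12, 48, 230]

    # The two-year Journal Impact Factor is constructed as a mean:
    # citations received this year to items published in the previous
    # two years, divided by the number of citable items in those years.
    impact_factor = sum(citations) / len(citations)

    # The median is a better description of the "typical" article.
    median = sorted(citations)[len(citations) // 2]

    print(f"mean (impact-factor-style): {impact_factor:.1f}")  # 21.3
    print(f"median article:             {median}")             # 3

Here a single paper with 230 citations pulls the mean to 21.3, even
though half the articles in the set are cited three times or fewer.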

Below we make a number of recommendations for improving the way in which
the quality of research output is evaluated. Outputs other than research
articles will grow in importance in assessing research effectiveness in the
future, but the peer-reviewed research paper will remain a central research
output that informs research assessment. Our recommendations therefore
focus primarily on practices relating to research articles published in
peer-reviewed journals but can and should be extended by recognizing
additional products, such as datasets, as important research outputs. These
recommendations are aimed at funding agencies, academic institutions,
journals, organizations that supply metrics, and individual researchers.

A number of themes run through these recommendations:

   - The need to eliminate the use of journal-based metrics, such as
   Journal Impact Factors, in funding, appointment, and promotion
   considerations;
   - The need to assess research on its own merits rather than on the basis
   of the journal in which the research is published; and
   - The need to capitalize on the opportunities provided by online
   publication (such as relaxing unnecessary limits on the number of words,
   figures, and references in articles, and exploring new indicators of
   significance and impact).



We recognize that many funding agencies, institutions, publishers, and
researchers are already encouraging improved practices in research
assessment. Such steps are beginning to increase the momentum toward more
sophisticated and meaningful approaches to research evaluation that can now
be built upon and adopted by all of the key constituencies involved.

The signatories of the *San Francisco Declaration on Research Assessment*
support the adoption of the following practices in research assessment.

General Recommendation

1. Do not use journal-based metrics, such as Journal
Impact Factors, as a surrogate measure of the quality of individual
research articles, to assess an individual scientist’s contributions, or in
hiring, promotion, or funding decisions.

For Funding Agencies

2. Be explicit about the criteria used in evaluating
the scientific productivity of grant applicants and clearly highlight,
especially for early-stage investigators, that the scientific content of a
paper is much more important than publication metrics or the identity of
the journal in which it was published.

3. For the purposes of research assessment, consider the value and impact
of all research outputs (including datasets and software) in addition to
research publications, and consider a broad range of impact measures
including qualitative indicators of research impact, such as influence on
policy and practice.

For Institutions

4. Be explicit about the criteria used to reach hiring,
tenure, and promotion decisions, clearly highlighting, especially for
early-stage investigators, that the scientific content of a paper is much
more important than publication metrics or the identity of the journal in
which it was published.

5. For the purposes of research assessment, consider the value and impact
of all research outputs (including datasets and software) in addition to
research publications, and consider a broad range of impact measures
including qualitative indicators of research impact, such as influence on
policy and practice.

For Publishers

6. Greatly reduce emphasis on the journal impact factor as a
promotional tool, ideally by ceasing to promote the impact factor or by
presenting the metric in the context of a variety of journal-based metrics
(e.g., 5-year impact factor, EigenFactor [8], SCImago [9], h-index,
editorial and publication times, etc.) that provide a richer view of
journal performance.

7. Make available a range of article-level metrics to encourage a shift
toward assessment based on the scientific content of an article rather than
publication metrics of the journal in which it was published.
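
As one concrete possibility (an illustration, not a service the
declaration prescribes), article-level citation counts can already be
retrieved programmatically. The sketch below assumes the public
Crossref REST API, whose works endpoint reports an
"is-referenced-by-count" field per DOI:

    # Minimal sketch: fetch an article-level metric for a single DOI
    # from the public Crossref REST API (assumed available; no key
    # required for low-volume use).
    import json
    import urllib.request

    def citation_count(doi: str) -> int:
        """Return Crossref's citation count for one article."""
        url = f"https://api.crossref.org/works/{doi}"
        with urllib.request.urlopen(url) as response:
            record = json.load(response)
        return record["message"]["is-referenced-by-count"]

    # Example: reference [5] in the list below, the PLoS Medicine
    # editorial on the impact factor game.
    print(citation_count("10.1371/journal.pmed.0030291"))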

8. Encourage responsible authorship practices and the provision of
information about the specific contributions of each author.

9. Whether a journal is open-access or subscription-based, remove all reuse
limitations on reference lists in research articles and make them available
under the Creative Commons Public Domain Dedication [10].

10. Remove or reduce the constraints on the number of references in
research articles, and, where appropriate, mandate the citation of primary
literature in favor of reviews in order to give credit to the group(s) who
first reported a finding.

For Organizations That Supply Metrics

11. Be open and transparent by
providing data and methods used to calculate all metrics.

12. Provide the data under a licence that allows unrestricted reuse, and
provide computational access to data, where possible.

13. Be clear that inappropriate manipulation of metrics will not be
tolerated; be explicit about what constitutes inappropriate manipulation
and what measures will be taken to combat this.

14. Account for the variation in article types (e.g., reviews versus
research articles), and in different subject areas when metrics are used,
aggregated, or compared.
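
A common way to meet this recommendation (sketched here with invented
baseline numbers; the declaration does not mandate any particular
method) is field normalization: dividing an article's citation count by
the average count for comparable articles of the same field and type,
so that a review in a high-citation field is not automatically ranked
above a research article in a low-citation one:

    # Illustrative sketch with invented baselines: normalized citation
    # score = raw citations / mean citations for comparable articles
    # (same field and article type; real baselines would also fix the
    # publication year).
    baselines = {
        ("cell biology", "research"): 18.0,
        ("cell biology", "review"):   55.0,
        ("mathematics",  "research"):  4.0,
    }

    def normalized_score(citations, field, article_type):
        """A score of 1.0 means 'cited as much as its peers'."""
        return citations / baselines[(field, article_type)]

    # A mathematics paper with 8 citations doubles its field average...
    print(normalized_score(8, "mathematics", "research"))   # 2.0
    # ...while a review with 40 citations sits below its peers.
    print(normalized_score(40, "cell biology", "review"))   # ~0.73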

For Researchers

15. When involved in committees making decisions about
funding, hiring, tenure, or promotion, make assessments based on scientific
content rather than publication metrics.

16. Wherever appropriate, cite primary literature in which observations are
first reported rather than reviews in order to give credit where credit is
due.

17. Use a range of article metrics and indicators on personal/supporting
statements, as evidence of the impact of individual published articles and
other research outputs [11].

18. Challenge research assessment practices that rely inappropriately on
Journal Impact Factors and promote and teach best practice that focuses on
the value and influence of specific research outputs.

References

1. Adler, R., Ewing, J., and Taylor, P. (2008). Citation statistics. A
report from the International Mathematical Union.
www.mathunion.org/publications/report/citationstatistics0

2. Seglen, P.O. (1997). Why the impact factor of journals should not be
used for evaluating research. BMJ 314, 498–502.

3. Editorial (2005). Not so deep impact. Nature 435, 1003–1004.

4. Vanclay, J.K. (2012). Impact Factor: Outdated artefact or
stepping-stone to journal certification. Scientometrics 92, 211–238.

5. The PLoS Medicine Editors (2006). The impact factor game. PLoS Med
3(6): e291. doi:10.1371/journal.pmed.0030291

6. Rossner, M., Van Epps, H., and Hill, E. (2007). Show me the data.
J. Cell Biol. 179, 1091–1092.

7. Rossner, M., Van Epps, H., and Hill, E. (2008). Irreproducible
results: a response to Thomson Scientific. J. Cell Biol. 180, 254–255.

8. http://www.eigenfactor.org/

9. http://www.scimagojr.com/

10. http://opencitations.wordpress.com/2013/01/03/open-letter-to-publishers

-- 
carlos palombini, ph.d. (dunelm)
professor of musicology, ufmg
collaborating professor, ppgm-unirio


More information about the Anppom-L mailing list