
A Question About Peer-Review

A paper by a "good" author in a mediocre journal is probably not a very good paper.

A paper by a mediocre author in a good journal is probably a good paper.

I think this statement doesn't mean too much if you don't define what you mean by a "good paper".

Articles in Nature generally present work that is of broad interest and that reports significant results. Papers in a journal like Physical Review B, by contrast, are generally not as significant in this regard, and are more likely either incremental advances or only relevant to a small, select audience. But that really says almost nothing about the relative quality of the data, or the reliability of the author's conclusions. In fact, high-profile journals can sometimes be at higher risk of being wrong, since they often get the first submissions on exciting new topics, while the lesser journals get the subsequent publications where theories get refined, interpretations corrected, and more extensive data presented. So it's really significance, not reliability, where the high-profile journals have an advantage.
 
It's rather like "accreditation," in that regard. I could start a university tomorrow if I liked, just by signing a few papers and giving a credit card number. Starting an "accredited" university would be a little bit more difficult, since I would need to find an accreditation board and persuade them that a university without faculty, facilities, or coursework was worth accrediting, and I'd probably have to pay that board a substantial sum of money to do so.
Or you could found your own accreditation board, and have them accredit your university.
 
I think this statement doesn't mean too much if you don't define what you mean by a "good paper".
agreed.
So it's really significance, not reliability, where the high-profile journals have an advantage.
but what is more significant: to be cited twenty times in the year after publication, or to be cited twenty times in years 10-15 after publication?

(not that i take citation rate to equal significance, but i do think this is an interesting question!)
 
The results are fairly clear that journal is a better predictor than author.
you tell a nice story. and it is the standard story. widely believed by people who have never submitted to or reviewed for Nature or PRL. perhaps it is a true story.

i do not wish to reinvent the wheel, i am happy to use yours; i question if you have one, this being a sceptical website and all.

so can i have a pointer to your evidence please? i am curious if i am wrong, if we are asking different questions, or if we are using the same terms in different ways and just misunderstand each other.

thanks

ps. have you asked someone that submits regularly to Nature (et al) if it is their best papers and only their best papers that eventually appear in Nature?
 
The link isn't very strong, and there's not much feedback, simply because "peer review" is not the sort of thing that can be put on a sliding scale. Your journal either has it or it doesn't...
sorry i was not clear: i meant the "quality" of the journal and the quality of its peer review are linked.

as "impact factor" becomes more "important", journals get swamped with submissions, the quality of their review process drops, and over-worked editors tend to evaluate the paper in light of the reviews less and echo the conclusions of the reviewers. unless of course they know the authors, or are particularly interested in the paper.

i think those count as "nasty feedbacks".
 
I should add to that, that even the rejection rate for a journal is misleading.
arguably every (publicly known) statistic is misleading on its own; that does not mean it is not useful, of course.

bpesta said:
Impact factor pretty much answers the question.
can you define this for us? and exactly which question it answers?

and is your "A" "B" "C" rating of journals part of the IF?
bpesta said:
Plus, journal quality is probably the most important factor universities look at when deciding on tenure.
sad, if true, that they no longer look at the skills of the candidate. perhaps even read a few of the papers...
 
arguably every (publicly known) statistic is misleading on its own; that does not mean it is not useful, of course.

can you define this for us? and exactly which question it answers?

and is your "A" "B" "C" rating of journals part of the IF?

sad, if true, that they no longer look at the skills of the candidate. perhaps even read a few of the papers...


I could be misreading yer tone, but I don't understand why you seem so peeved by the argument. What exactly are you arguing for-- sorry if I missed it, or mistook your tone.

That said, I wonder how many people go up for tenure where the PRC doesn't even read a single article. I bet it happens all the time, but I have no data.

I think journal quality is like porno: those in the field know it when they see it.

Also, there are lots of studies on journal rankings-- survey 100s of academics in a particular field and have them rank journals based on whatever measure of quality the study looks at.

The correlations between the expert ranking and other measures (like impact) are huge.

I'm personally ok with the fact that any field has a set of top tier journals-- nearly universally recognized as such by people who are experts in the area-- and that publishing in these journals (or not) is perhaps the most important thing determining your rep as a scientist.

Also, I'm not sure Nature or Science are the best examples to counter-argue with. Their goal seems to be to report interesting results in an area to readers who probably aren't experts in that area.

I'm talking about journals that few people would want to read unless they were researching the field. The best work is reported in the best journals (at the risk of being circular)
 
sorry i was not clear: i meant the "quality" of the journal and the quality of its peer review are linked.

as "impact factor" becomes more "important", journals get swamped with submissions, the quality of their review process drops, and over-worked editors tend to evaluate the paper in light of the reviews less and echo the conclusions of the reviewers. unless of course they know the authors, or are particularly interested in the paper.

i think those count as "nasty feedbacks".


You seem to be demanding data from others; do you have any data that shows this? I just don't buy that a quality journal would whore its reputation by lowering its standards to publish more and more crap.

It's been my experience that no one sends crap to A journals-- it's embarrassing for you as an author, and you get an ass reaming from the reviewers / editor.

There's a difference between wanting to publish in an A and actually being able to do so.
 
You seem to be demanding data from others
it is not a demand. but a request. two in fact.

i asked for supporting evidence after someone made a claim i either disagreed with or did not understand (hopefully not both!).

isn't that the way it is supposed to work? the one who posts the claim provides support for his/her statement?
 
I think journal quality is like porno: those in the field know it when they see it.
here i think we agree. and i think it is a great analogy.

but note quantitative standards for porn are very very difficult to establish; and once established they are usually breached by stuff that is obviously porn!

but that seems to indicate you do not believe a number like "impact factor" measures quality???

Also, I'm not sure Nature or Science are the best examples to counter-argue with. Their goal seems to be to report interesting results in an area to readers who probably aren't experts in that area.
you guys picked them, not me!

and they have relatively large impact factors.

and i agree, they are not the best examples from your point of view!

I'm talking about journals that few people would want to read unless they were researching the field. The best work is reported in the best journals (at the risk of being circular)
even if that journal has a low impact factor?

yet i agree with your comments in this post (contrast Physica D with Phy Rev Lett, for example...)

and this question of ranking journals, while interesting, differs from the question of identifying good papers (without reading them!).
 
I could be misreading yer tone, but I don't understand why you seem so peeved by the argument. What exactly are you arguing for-- sorry if I missed it, or mistook your tone.
i am not peeved! i regret giving that impression. i honestly wanted a citation or link! each question asked was asked in good faith. sorry it sounded otherwise, it was not my intention!
I wonder how many people go up for tenure where the PRC doesn't even read a single article. I bet it happens all the time, but I have no data.
i have very small N stats, but they probably support (a slightly weaker form of) your side of the wager.

g'night
 
you tell a nice story. and it is the standard story. widely believed by people who have never submitted to or reviewed for Nature or PRL. perhaps it is a true story.

It's also believed by people who have submitted to or reviewed for Nature or PRL. Myself, for one, and I suspect some others on this very thread.

so can i have a pointer to your evidence please?

Certainly. The past fifty years of research on journal quality by ISI.
 
Since no one's given a definition of Impact Factor yet, here it is for the folks following along at home:

Impact Factor = the average number of citations (in proper journals) that an article in that journal gets in the year following its publication.

For a given journal it'll vary from year to year, usually only a little, but for some journals which don't publish many articles in a year just one highly cited paper can make a dramatic difference. It's a reasonably good way to compare journals within a discipline, not so much across disciplines, as medicine and life-sciences articles tend to have higher citation rates than natural sciences and engineering journals. This is because if I invent a good new heart operation and report it, then the next hundred people to try it out will (rightly) publish the results when they try it and cite my original paper. If I prove the Riemann hypothesis there may not be any more to be said on the subject.

While it's hard for a journal to have a high I.F. for its subject and not be good, it is possible for an article to be highly cited without being influential.

Ways to get more citations include: writing review articles which don't contain new research but tend to be highly cited (not that there's anything wrong with that); publishing articles with mistakes or controversial statements in them (if you can get them past the reviewers); citing yourself whenever possible (as a reviewer I've seen outrageous examples of this).

If you're a journal editor/publisher and you want to raise your IF, then encourage all the above behaviours and speed up your processing of papers. Plenty of engineering journals can take the best part of a year to review and publish an article, which dooms their chances of a high IF from the get-go.

Current highest impact factor is for Annual Review of Immunology with 52.431. Nature is at 32.182, Science at 31.853. Surface Science Reports is the highest of all the natural science journals with 21.35 (21st highest); Solid State Physics has 16 (39th). All those not mentioned with higher factors are medical journals. Physical Review Letters is regarded as an important journal in physics (it only publishes 4-page reports). Its IF is 7.218. ISI indexes about 6,000 journals, and about half of them have an IF > 1. I regularly read, and publish in, journals with IF < 1 because that's how it works out for my field.

Most of Stephen Hawking's papers are published in Physical Review D. Its impact factor is currently 5.156. I've heard biomedical PhD students turn their noses up at medical journals with such a low factor. OK, I'm labouring the point now, I'll stop.
 
lenny said:
so can i have a pointer to your evidence please?
Certainly. The past fifty years of research on journal quality by ISI.
seriously, do you want to provide evidence to support your claim or not?

one citation of a paper we can all read in which "quality" is defined in one or more interesting ways, and which provides evidence supporting your claim that a random article in a high IF journal is more likely to be of high "quality" than a random paper by an outstanding scientist.

ideally targeting the physics literature, as you publish there and it would be easier for me to understand.
 
2 questions from a new author:

I recently published a piece in a medical journal

1) How do I find out what its impact factor rating is? Is there a website for this?

2) How do I find out how many times my paper has been cited?
 
2 questions from a new author:

I recently published a piece in a medical journal

1) How do I find out what its impact factor rating is? Is there a website for this?

2) How do I find out how many times my paper has been cited?

i'd ask your e-librarian, or IT people; it is not a free service but i expect Hopkins must subscribe. once you get to ISI/WoS, search on the year AND your name; it is all menu driven and finding your paper &c is pretty straightforward. but to answer your question, you can get there via

www.isinet.com
 
I said
Impact Factor = the average number of citations (in proper journals) that an article in that journal gets in the year following its publication.

...but it's not quite so simple. In the unlikely event that anyone gives a toss, the impact factor for the year 2004 (the most recent available) is actually calculated as (Cites in 2004 to articles published in 2003 + Cites in 2004 to articles published in 2002)/(No. of articles published in 2003 + No. of articles published in 2002).

There's also an "immediacy index" which = (Cites in 2004 to articles published in 2004)/(No. of articles published in 2004). No one seems to get so bothered about it though.
 
Peer just means equal, so the work has been passed by your equals as being OK.

So homoeopaths pass the work of other homoeopaths and so on. This means that their journals are legitimately described as "peer-reviewed" while at the same time being complete deep-fried balderdash.

And I've seen some ghastly horrors even in allegedly A-list journals, just because the editor didn't send the paper to the right people to get a proper critique.

Rolfe.
 
thanks Thing,
In the unlikely event that anyone gives a toss, the impact factor for the year 2004 (the most recent available) is actually calculated as (Cites in 2004 to articles published in 2003 + Cites in 2004 to articles published in 2002)/(No. of articles published in 2003 + No. of articles published in 2002).
it is a useful number, but (like every single-valued statistic) fundamentally limited; in this case it is (by design) blind to long-term impact, and it does not serve as a basis for comparison between fast-turnaround journals which get many short items out quickly and the lumbering heavy-science journals (where iterating with the reviewers and printing delays can take a paper beyond the horizon of the IF).
http://www.internationalskeptics.com/forums/showpost.php?p=1618633&postcount=23

would be interesting to break the IF down to account for self-citations (Nature citing Nature) or look at the relative contributions to IF via the weighted variety of journals (like a breadth index).
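
as a rough cut, something like this python sketch. the citation records are invented, and "self-cite" here just means the citing journal is the same as the cited journal:

# toy sketch: impact factor with and without journal self-citations.
# every record below is invented, just to show the idea.
# each record: (citing_journal, cited_journal, year the cited article appeared)
cites_in_2004 = [
    ("Nature",  "Nature", 2003),   # Nature citing Nature: a self-citation
    ("Science", "Nature", 2003),
    ("PRL",     "Nature", 2002),
    ("Nature",  "Nature", 2002),   # another self-citation
    ("Cell",    "Nature", 2003),
]
n_articles_2002_2003 = 4  # pretend the journal published only 4 articles in 2002-03

def impact_factor(journal, cites, n_articles, drop_self_cites=False):
    # 2004-style IF: 2004 cites to the journal's 2002-03 articles,
    # divided by the number of articles it published in 2002-03.
    counted = sum(
        1 for citing, cited, year in cites
        if cited == journal and year in (2002, 2003)
        and not (drop_self_cites and citing == journal)
    )
    return counted / n_articles

print(impact_factor("Nature", cites_in_2004, n_articles_2002_2003))        # 1.25
print(impact_factor("Nature", cites_in_2004, n_articles_2002_2003, True))  # 0.75 once self-cites go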

and its limitations are well known to ISI, who regularly include disclaimers like
"The impact factor will help you evaluate a journal's relative importance, especially when you compare it to others in the same field." (from a paid access page at portal.isiknowledge.com)

evangelicals sometimes take it as something more than it is. (the evidence in the posts on this thread suffices to support that.)
 
Further discussion and criticism of impact factors in pdf form here, courtesy of David Colquhoun's web pages.
 
