Your review should answer the following questions:
- Relevance: Does the paper topic fit with the publication?
This is typically more of an issue for conferences. A very good paper
can and should be rejected if it is out of scope.
- Novelty: Are the paper's results novel? The same or a very
similar result must not have been published elsewhere by somebody else.
(For experimental papers, it is often useful to confirm results, so it
is not unreasonable to publish a paper that takes the same measurement
techniques published earlier and applies them to a different
environment, as long as that environment is indeed sufficiently
different from the original.) It generally does not matter whether the
authors knew or could reasonably have known about the earlier results -
that's just tough luck. This may differ for archival journals, if the
authors were the first to publish the result at a conference. If the
same novel results are submitted to the same conference, sometimes both
papers are published, but that appears to be a rare occurrence.
- Is the paper sufficiently different from previous papers by the same
authors? Some people like to explore the "least publishable unit" and
thus submit a paper that only adds minor results each time. There is no
hard rule for this, but my guess is that at least two-thirds of the
paper should be new (or the remaining third of such exceptional value
that it would stand on its own). Authors should not get "credit" for the parts
that have already been published; they should be treated as an
introduction and background.
- Interest: Are the results interesting? Particularly for
conferences, the paper should be of interest to more than one person in
the room, so highly specialized results, even if buttressed by ten pages
of mathematics, are probably of lesser interest.
- Is all relevant related work cited?
- Correctness: Does the paper appear to be correct? While the
reviewer is not expected to re-run simulations or do mathematical proofs
(although the latter used to be common in less hurried times), the
reviewer should note anything that makes the results doubtful,
particularly if the results are counter-intuitive. Unfortunately, many
experimental papers are rather sloppy, e.g., they don't provide confidence
intervals or don't indicate the simulation lengths used.
- Applicability: For some conferences, applicability matters.
Thus, theoretical results are not likely to be of interest unless they
have a direct application to a real system. However, in most
conferences, results that make assumptions that are not wholly realistic
are often acceptable if they highlight interesting features of the real
system or allow comparison to other results.
- Presentation: Is the paper written well enough to be readable
without undue effort? While authors can be expected to improve minor
presentation faults, if a conference paper is unreadable, it can be too
much of a leap of faith to assume that the authors will rewrite the
paper. They should be encouraged to resubmit elsewhere. For a journal,
authors need to indicate how they have addressed reviewer concerns, so a
bit more latitude is possible.
Some people take the not unreasonable attitude that if an author
didn't have the time to spell-check and proofread the paper, she
probably didn't have the time to do careful experiments and proofs.
Since many authors are not native speakers of English, it is quite
helpful to provide a list of places that need particular attention. You
don't need to do a full proofread, but alerting the writer to things a spell
checker is likely to miss is useful (e.g., break vs. brake, effect vs.
affect).
It helps to know what fraction of submitted papers is likely to be
accepted. Typical journals like Transactions on Networking
accept only about a quarter of the papers submitted; some conferences
accept even fewer.
by Henning Schulzrinne