Guide to Reviewing CHI Papers and Notes
Introduction
The CHI review process for 2011 will continue to make use of the new subcommittee structure and contribution types that were introduced for CHI 2009. This review structure is different from prior years and from many other HCI conferences. The key points in undertaking reviews under this structure are:
- Your primary criterion for judging a paper is: Does
this submission provide a strong contribution to the field of HCI?
- In recognition of the diversity of different
contributions that can be made in the interdisciplinary work of HCI,
authors indicate a contribution type. As part of judging whether the
paper makes a strong contribution, consider the questions about
contributions as specified in the description of each contribution type
(see: Selecting a contribution
type).
- In recognition that there is a range
of different types of expertise in our field, and in an attempt to have
each paper judged by true experts in its topic, authors submit their
papers to subcommittees which are clustered around different topics (see: Selecting a subcommittee). However,
matches of papers to subcommittees are not always clean or perfect. So as
a reviewer you should not judge a
paper by how well it fits the subcommittee theme.
- Reviewers rate the paper using a 5-point ranking scale;
your written appraisal must support your numeric ranking.
Contribution Types
CHI encourages strong contributions from many different types of papers and is actively trying to discourage the perception that papers must follow some narrow formula. To support this, each author can now specify a contribution type for their submission (see: Selecting a contribution type). The description of each contribution type includes a set of questions that you, as a reviewer, should consider when evaluating the contribution of the paper. These questions are important because they may encourage you to think about the contribution in ways that differ from what you may perceive as the CHI formula paper.
They should also encourage you to think about the contribution in ways that differ from what you might have done yourself - i.e., to recognize and avoid your own biases about, for example, what types of questions, theories, or approaches you would have applied had you done this research, or what type of system you would have built. You are asked to judge the paper with respect to the questions that the paper itself asks, whether the methods were appropriate to those questions, and whether the results are of value to the CHI community.
However, these questions are not a checklist. The submission(s) you are handling may not fall cleanly within a single contribution type, and the paper should not be penalized for that. When this happens, your primary criterion for judging a paper should still be: Does this submission provide a strong contribution to the field of HCI? This is important. The contribution type and its associated questions are a guide to assist your thinking; they are not strict criteria by which you should measure acceptance. They are there to help you think more liberally about the paper’s contribution. If, in your opinion, the paper still makes a contribution outside of the specific contribution type questions, then judge it accordingly.
What you will see on the review form is:
- A summary of the contribution type questions (i.e.,
review criteria), followed by a Contribution Type Specific Rating.
By filling in your response, you are judging the paper according to its
contribution type.
- Immediately following this is an Overall Rating.
By filling this in, you are judging the overall contribution of the paper.
While we suggest you use the contribution type as a starting point for
appraising the contribution, the paper does not need to exactly match the
contribution type criteria.
- The various other boxes ask you to provide details.
Again, while we suggest you use the contribution type as a starting point
for appraising the contribution, we stress that the paper does not need to
exactly match the contribution type criteria.
Subcommittees
To improve the reviewing process, authors are now asked to submit their papers to subcommittees that best reflect their most salient contribution, where each subcommittee is responsible for various topic areas typically seen at CHI (see: Selecting a subcommittee for details). Each subcommittee comprises a subcommittee chair and associate chairs who are knowledgeable in these topics. The idea is that, as specialists in the theme areas, they should be able to find good referees (such as you) for each submission, and that as specialists they should be able to better handle the meta-review process.
However, as a reviewer, you should not judge the paper by how well
it fits the subcommittee theme(s). Many papers will not cleanly fit
into a particular subcommittee for a variety of reasons, and we do not want
to penalize authors for this. Remember, the subcommittee organization is there
only to try to improve reviewer matches and to better handle the volume of
submissions. If you have a paper that does not fit the subcommittee theme, evaluate
it as best you can with respect to its own quality. Any topic is valid,
as long as it fits within the interests of a reasonable fraction of the overall
CHI audience - the primary criterion remains the contribution to HCI.
Papers and Notes
Papers and Notes are both reviewed within the same rigorous review process
and at the highest level are judged by very similar criteria (i.e., does
this paper or note provide a strong contribution to the field of HCI?).
However, it is important as a reviewer to realize that the type of content that
is appropriate for each is somewhat different. In particular, Notes present
brief and focused research contributions that are noteworthy, but may not be as
complete or comprehensive, or provide the same depth of results, as a full paper.
For details about these differences, see Papers versus Notes: What is the difference?
Numeric Ratings and Written Appraisals
The numeric ratings on the review form are on a 5-point scale (with half
steps) ranging from 1.0 (the lowest rating) to 5.0 (the highest). Be as
accurate with your ratings as possible. As in previous years, it is your
written appraisal that is crucial. Make sure your review is balanced,
and that its details reflect the numeric rating. The appraisal of a paper
should always indicate why the paper deserves that rating.
Your review may be discounted if, for example, you numerically rate a paper
highly but only indicate its flaws. As well, please be polite
to authors. Even if you rate a paper poorly, you can critique it in a positive
voice. As part of polite reviewing practice, you should always state
what is good about a paper first, followed by your criticisms.
If possible, you should offer suggestions for improvement
along with your criticism.
Prior Publication
Content appearing at CHI should be new and groundbreaking. Therefore, material that has been previously published in widely disseminated publications should not be republished unless the work has been “significantly” revised. Guidelines for determining “significance” of a revision are stated in the ACM Policy on Pre-Publication Evaluation and the ACM Policy on Prior Publication and Simultaneous Submissions. Roughly, a significant revision would contain more than 25% new content material (i.e., material that offers new insights, new results, etc.) and significantly amplify or clarify the original material. These are subjective measures left to the interpretation and judgment of the reviewers and committee members – authors are advised to revise well beyond the Policy guidelines.
References
The debate about what makes a good CHI paper has been ongoing for several years, and the introduction of contribution types is an attempt to address this debate. If you are interested, the papers below touch upon this debate and contain references to additional papers that concern it.
- Greenberg, S. and Buxton, B. 2008. Usability evaluation considered harmful (some of the time). In Proceedings of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (CHI '08). ACM, 111-120. DOI=http://doi.acm.org/10.1145/1357054.1357074
- Olsen, D. R. 2007. Evaluating user interface systems research. In Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology (UIST '07). ACM, 251-258. DOI=http://doi.acm.org/10.1145/1294211.1294256
- Dourish, P. 2006. Implications for design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '06). ACM, 541-550. DOI=http://doi.acm.org/10.1145/1124772.1124855
- Newman, W. 1994. A preliminary analysis of the products of HCI research, using pro forma abstracts. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Celebrating Interdependence (CHI '94). ACM, New York, NY, 278-284. DOI=http://doi.acm.org/10.1145/191666.191766