Reviewing

First of all, we dearly thank all reviewers and meta-reviewers for offering their time and energy to review submissions for ICTIR 2024. We could not have this conference without your help.

Criteria for reviewers

The review form will have a text field for the more objective part of your review, and another text field for the more subjective part of your review. For the objective part, be concrete and provide falsifiable evidence for your criticism. See the following paragraphs for what is meant by this. When in doubt, please ask the PC Chairs.

Soundness

The primary thing to judge about a paper is whether it is sound. That is: Is the problem clearly stated? Are the methods clearly described? Are the results clearly stated? Are the various claims in the paper supported? Note that a result need not always be quantitative; it can also take the form of a non-trivial insight. We do not expect reviewers to check correctness in depth (this can be very hard, depending on the topic), but watch out for contradictions or other signs that something is wrong.

Presentation

Be careful when judging the quality of the presentation. Maybe you did not sleep well, or the paper is far from your area of expertise and therefore hard for you to understand. Stick to objective markers such as: Are the concepts and the terminology properly defined before they are used? Can the abstract and introduction be understood before having read and understood the rest of the paper? Are enough details provided so that it is clear what the authors have actually done? If you find that a paper is "poorly written", provide concrete examples of where it fails any of these criteria.

Difficulty

Be careful when judging the difficulty of a paper. A method can be simple, yet solve a problem that nobody has solved before. A paper can also be relatively straightforward but involve a lot of work, with results that were not known before or that were assumed but not yet proven. All of these are fine. Of course, if the problem is obviously unrealistic or an easy exercise to solve, that can be a valid reason for rejection or downgrading.

Impact

Be very careful when judging the future impact of a paper. If a paper is sound (see above) and reasonably presented (see above), leave it to the community to decide how interesting or useful it is. Also, judge a submission for what it does, not for what it could have also done (unless an integral part of the problem or its analysis was omitted).

Related Work

Roughly speaking, there are two categories of related work. The first category is work that is directly relevant to or competing with the submission. Not citing such work can be a valid reason for rejection, provided that you name the missing reference and clarify how it is directly relevant. The second category is work that is only indirectly relevant. Be more lenient when such work is not cited, especially if there is a large body of it. Of course, if a paper misses not just one individual paper but a whole body of work, that can be a reason for rejection or downgrading.

Scope regarding topic

The Call for Papers makes a clear statement about this. A paper should be about Information Retrieval, or from a neighboring field with a clear and significant connection to Information Retrieval. Do not use this as the sole criterion for rejection unless the topic is very clearly out of scope.

Scope regarding methodology

The Call for Papers makes a clear statement about this. There should be a theoretical part that is a significant, integral, and non-trivial part of the paper, and not just an add-on. The term "theoretical" is meant in a comprehensive sense, encompassing not only mathematical work, but also conceptual work, modelling work, generalizations, etc. Do not use this as the sole criterion for rejection unless the paper has no significant theoretical part (in the described sense) at all.

Guidelines for Meta-Reviewers

In a nutshell, the central job of the meta-reviewers is to "review the reviews", that is, separate the wheat from the chaff, ask for clarification wherever it is needed, and resolve or otherwise integrate conflicting opinions in the reviews. Specifically, here are the instructions sent to the meta-reviewers:

1. Familiarize yourself with each paper by at least understanding what the problem is, what the main method to solve it is, and what the main results are. Be prepared to look deeper into the paper if the reviews require it, either because the reviews contradict each other or because important information is missing from the reviews or provided with low confidence.

2. Reviews are of mixed quality, and there is nothing we can do about that. Your main task is to carefully read the reviews and separate the wheat from the chaff. To that end, the first thing you should do is read the reviewing criteria in the first section of this page, which we have also communicated explicitly to the reviewers by mail. It is very important and only a 5-minute read, so please do read it. To further ease your task, the review form has two text fields, one for the *objective* part of the review and one for the *subjective* part of the review. You will see it when you read the reviews.

3. The most important part of each review is the objective part. Objective means that each criticism should be supported by falsifiable evidence. For example, a statement that the paper is full of mistakes should be supported by concrete examples of these mistakes. A statement that important related work is missing should be supported by naming that related work and saying how it is directly relevant. You get the idea.

4. Your task is to check whether the criticism in the reviews is indeed supported by such evidence and whether that evidence is correct. If not, please ask the reviewers to provide the evidence (or try to find it yourself). Similarly, if statements in the reviews are unclear, try to clarify them by asking the reviewers (or try to find it out yourself). If, even after these efforts, criticism remains that is not supported by evidence, or the evidence turns out to be false or doubtful, you are free to discard it (and you probably should).

5. Some of the reviews will be rather generic, either because the reviewer did not make an effort or because they lack expertise, or both. In such a case, try to get some additional information out of the reviewer by asking them at least once (how you phrase those requests depends on the reviews and your personal style). If you are not successful, accept it and feel free to discard that review (or parts of it) as not very useful for making a decision on the paper. We have accounted for this by allocating four reviews to each paper, which increases the chance of having at least two and hopefully three good-quality reviews for each submission.

6. The main part of your meta-review should *not* be a summary of the reviews (it is useful for us, though, if you start your meta-review with a *very brief* summary of what the paper is about and what its main contribution is). The main part of your meta-review should be an account of which parts of the reviews you considered for your assessment and which parts you discarded. Also provide background information on your decision process that is not evident from just reading the reviews. Write the meta-review so that it is useful in this sense both for the PC chairs (us) and for the authors.

Schedule

Submission deadline for authors: April 25, 2024
Submission deadline for reviews: May 16, 2024
Submission deadline for meta-reviews: May 30, 2024
Notification of acceptance: June 6, 2024
Time zone for all deadlines: Anywhere on Earth (AoE)
Note that all dates are on a Thursday