Reviewer, AC & SAC Guidelines
Frequently asked questions
Frequently asked questions can be found here.
Contacting the program chairs
If you encounter a situation that you are unable to resolve on your own, please contact the program chairs. Your email will be entered into a task-management system to ensure it is handled appropriately. Please refrain from writing to the program chairs at their own email addresses.
Introduction
Thank you for agreeing to serve for NIPS 2018! The community needs outstanding people like yourself to make NIPS a success, and we will work hard to make your duties as easy as possible. This page provides an overview of SAC, AC, and reviewer responsibilities and key dates.
Key dates
- SACs, ACs & reviewers enter domain conflicts, subject areas, TPMS information, etc.: before Wed May 23
- Submission deadline: Fri May 18 (4pm ET; 8pm UTC)
- PCs clean submissions, test for duplicates, run TPMS: 5 days (Sat May 19--Wed May 23)
- SACs bid on ACs: 5 days (Sat May 19--Wed May 23)
- PCs assign ACs to SACs: 1 week (Thu May 24--Wed May 30)
- ACs & reviewers bid on submissions: 1 week (Thu May 24--Wed May 30)
- SACs, ACs, reviewers & authors enter individual conflicts: 1 week (Thu May 24--Wed May 30)
- PCs assign ACs to submissions: 3 days (Thu May 31--Sat Jun 2)
- SACs micro-adjust AC assignments: 4 days (Sun Jun 3--Wed Jun 6)
- ACs bid on reviewers for submissions: 1 week (Thu Jun 7--Wed Jun 13)
- PCs assign submissions to reviewers: 3 days (Thu Jun 14--Sat Jun 16)
- SACs & ACs micro-adjust reviewer assignments: 4 days (Sun Jun 17--Wed Jun 20)
- SACs check for conflicts: 2.5 weeks (Sun Jun 3--Wed Jun 20)
- Reviewers write reviews: 3 weeks (Thu Jun 21--Wed Jul 11)
- Authors respond to reviews: 1 week (Thu Jul 26--Wed Aug 1)
- ACs & reviewers discuss reviews & responses: 2 weeks (Thu Aug 2--Wed Aug 15)
- SACs & ACs make initial decisions (SACs focus on borderline cases): 1 week (Thu Aug 9--Wed Aug 15)
- ACs write metareviews (SACs focus on borderline cases): 1 week (Thu Aug 16--Wed Aug 22)
- PCs & SACs finalize decisions (ACs & reviewers involved as necessary): 1 week (Thu Aug 23--Wed Aug 29)
- Notification date: Wed Sep 5
General information
- Please respect deadlines and respond to emails as promptly as possible!
- It is crucial that we are able to reach you in a timely manner. We will send most emails from CMT (i.e., email@msr-cmt.org). Such emails are sometimes accidentally marked as spam. Please check your spam folder regularly and, if you find such an email there, whitelist the CMT email address so that you will receive future emails from CMT.
- If you have changed or plan to change your email address, please update CMT accordingly. We have no way of knowing whether an email sent to you from CMT has bounced, so it is crucial that you make sure that CMT has the correct email address for you at all times. You should also make sure that your domain conflicts in CMT are up to date; these are important for preventing conflicts during the review process.
- The NIPS definitions of conflicts of interest (and instructions for entering them) have changed a little from last year, so please make sure you read this year's definitions.
- NIPS uses the Toronto Paper Matching System (TPMS) to assign submissions to ACs and reviewers. Please log into TPMS here and make sure that your profile is up to date.
- All participants must agree to abide by the NIPS code of conduct.
Responsibilities
Senior Area Chair. With the growth in submissions, NIPS recently incorporated the role of senior area chair (SAC). SACs work alongside the area chairs (ACs) and program chairs (PCs). Each SAC oversees the work of a small number of ACs (around 7), making sure that the reviewing process goes smoothly. SACs serve as the first port of call for ACs if they need assistance or guidance. The reviewing process is double blind at the level of ACs (i.e., ACs cannot see author identities), but not at the level of SACs and program chairs; SACs are therefore responsible for identifying conflicts of interest (and other unusual activity, such as suspicious bidding patterns) and re-assigning submissions to ACs or reviewers accordingly. SACs are also responsible for helping ACs chase late reviewers, calibrating decisions across ACs, and discussing borderline papers. During the final decision-making phase, SACs will discuss all proposed decisions with the PCs. There is no physical SAC/AC meeting; most discussions with reviewers, ACs, and PCs will take place via CMT or email, with some video conferences toward the end of the reviewing process. Although reviewer identities are visible to ACs and SACs, they are hidden from the other reviewers. Please therefore refrain from using reviewers’ names during the discussion phase. After decisions have been made, reviews and meta-reviews will be made public (but reviewer and SAC/AC identities will remain anonymous).
Area Chair. Each area chair (AC) oversees around 20 submissions, making sure that the reviewing process goes smoothly. ACs are responsible for helping the program chairs (PCs) recruit reviewers, recommending reviewers for submissions, chasing late reviewers, facilitating discussions among reviewers, writing meta-reviews, evaluating the quality of reviews, and helping make decisions. The reviewing process is double blind at the level of ACs; each AC will work with a senior area chair (SAC), who is responsible for identifying conflicts of interest (and other unusual activity, such as suspicious bidding patterns) and re-assigning submissions to ACs or reviewers accordingly. SACs serve as the first port of call for ACs if they need assistance or guidance throughout the reviewing process. SACs also calibrate decisions across ACs. There are no physical meetings; most discussions with reviewers, SACs, and PCs will take place via CMT or email, with some video conferences toward the end of the reviewing process. Although reviewer identities are visible to ACs and SACs, they are hidden from the other reviewers. Please therefore refrain from using reviewers’ names during the discussion phase. AC identities are accessible by reviewers. After decisions have been made, reviews and meta-reviews will be made public (but reviewer and SAC/AC identities will remain anonymous).
Reviewer. Each reviewer will be assigned around 4--6 submissions to review. Reviewers are responsible for reviewing submissions, reading author responses, discussing submissions and author responses with other reviewers and area chairs (ACs), and helping make decisions. The reviewing process is double blind at the level of reviewers. There are no physical meetings; discussions with other reviewers and ACs will take place via CMT or email. After decisions have been made, reviews and meta-reviews will be made public (but reviewer and AC identities will remain anonymous). This year, as an incentive, ACs will be asked to evaluate the quality of each review using three scores: "exceeded expectations," "met expectations," and "failed to meet expectations." The 200 or so highest-scoring reviewers will be awarded free NIPS registrations. The next 800 or so highest-scoring reviewers will have registrations reserved for them (for a limited time frame). The lowest-scoring reviewers may not be invited to review for future conferences.
SAC best practices
- It is okay to be unavailable for part of the review process (e.g., on vacation for a few days), but if you will be unavailable for more than that -- especially during important windows (e.g., decision-making) -- you must let the program chairs know ASAP.
- With great power comes great responsibility! Take your job seriously and be fair.
- If you have a conflict of interest with a submission that is assigned to one of your ACs, please contact the program chairs immediately. (Note that our definitions have changed a little from last year, so please carefully read this year's definitions here.)
- DO NOT talk to other SACs about submissions assigned to your ACs without prior approval from the program chairs; other SACs may have conflicts with these submissions.
- DO NOT talk to other SACs or ACs about your own submissions (i.e., submissions you are an author on) or submissions with which you have a conflict of interest.
- Be professional and listen to the reviewers and ACs, but do not give in to undue influence.
- If an AC wants to make a decision that is not clearly supported by the reviews, please check that they justify their decision appropriately, including, but not limited to, reading the submission in depth and writing a detailed meta-review that explains their decision.
- Help calibrate decisions by working closely with your ACs. It is your responsibility to figure out how best to work with your ACs during this process (e.g., over email, phone, video conferences, etc.). Pay particularly close attention to borderline papers.
AC best practices
- It is okay to be unavailable for part of the review process (e.g., on vacation for a few days), but if you will be unavailable for more than that -- especially during important windows (e.g., discussion, decision-making) -- you must let your SAC know ASAP.
- With great power comes great responsibility! Take your job seriously and be fair.
- If you have a conflict of interest with a submission that is assigned to you, please contact your SAC immediately so that the paper can be reassigned. (Note that our definitions have changed a little from last year, so please carefully read this year's definitions here.)
- DO NOT talk to other ACs about submissions that are assigned to you without prior approval from your SAC; other ACs may have conflicts with these submissions. In general, your primary point of contact for any discussions should be your SAC. Your SAC does not have any conflicts with any of the submissions that are assigned to you.
- DO NOT talk to other SACs or ACs about your own submissions (i.e., submissions you are an author on) or submissions with which you have a conflict of interest.
- Be professional and listen to the reviewers, but do not give in to undue influence.
- Make sure your reviewers read and (if appropriate) respond to all author responses.
- After the author response phase, you must initiate a discussion via CMT for each submission and make sure the reviewers engage in the discussion phase.
- Read all reviews carefully. After reading a review, please evaluate its quality by indicating (on CMT) whether it "exceeded expectations," "met expectations," or "failed to meet expectations." You should use the information in the reviewer instructions (see the section "Review content" below), as well as how helpful the review and subsequent discussion by the reviewer were, in making your decision about the submission. The 200 or so highest-scoring reviewers will be awarded free NIPS registrations. The next 800 or so highest-scoring reviewers will have registrations reserved for them (for a limited time frame). The lowest-scoring reviewers may not be invited to review for future conferences.
- Your meta-review should explain your decision to the authors. Your comments should augment the reviews, and explain how the reviews, author response, and discussion were used to arrive at your decision. Dismissing or ignoring a review is not acceptable unless you have a good reason for doing so. If you want to make a decision that is not clearly supported by the reviews, perhaps because the reviewers did not come to a consensus, please justify your decision appropriately, including, but not limited to, reading the submission in depth and writing a detailed meta-review that explains your decision.
Reviewer best practices
- It is okay to be unavailable for part of the review process (e.g., on vacation for a few days), but if you will be unavailable for more than that -- especially during important windows (e.g., discussion, decision-making) -- you must let your ACs know ASAP.
- With great power comes great responsibility! Take your job seriously and be fair.
- Write thoughtful and constructive reviews. Your reviews must accord with the NIPS code of conduct. Although the double-blind review process reduces the risk of discrimination, reviews can inadvertently contain subtle discrimination, which should be actively avoided.
- In particular, please be diligent about avoiding comments regarding English style or grammar that may be interpreted as implying the author is "foreign" or "non-native". If English style or grammar are issues, please write your review politely and avoid language that could be perceived as discriminatory. For example, please do NOT write, "Please have your submission proof-read by a native English speaker" (i.e., avoid the phrase "native English speaker"). Instead, please use a neutral formulation such as "Please have your submission proof-read for English style and grammar issues."
- If you have a conflict of interest with a submission that is assigned to you, please contact your AC immediately so that the paper can be reassigned. (Note that our definitions have changed a little from last year, so please carefully read this year's definitions here.)
- The reviewing process will be double blind at the level of reviewers and ACs (i.e., reviewers and ACs cannot see author identities) but not at the level of SACs or program chairs. If you are assigned a submission that is not adequately anonymized (e.g., includes author names, author affiliations, acknowledgements, or other identifying information), then please contact the corresponding AC. Under no circumstances should you attempt to find out the identities of the authors for any of your assigned submissions (e.g., by searching on Google or arXiv). If you accidentally find out, please do not divulge the identities to anyone, but do tell your AC that this has happened and make a note of this in the "Confidential Comments to Program Chairs" text field when you submit your review. You should not let the authors' identities influence your decision in any way.
- DO NOT talk to other reviewers, ACs, or SACs about submissions that are assigned to you without prior approval from your AC; other reviewers, ACs, and SACs may have conflicts with these submissions. In general, your primary point of contact for any discussions should be the corresponding AC for that submission.
- DO NOT talk to other reviewers, ACs, or SACs about your own submissions (i.e., submissions you are an author on) or submissions with which you have a conflict of interest.
- Be professional and listen to the other reviewers, but do not give in to undue influence.
- Read and (if appropriate) respond to all author responses. It is not fair to ignore any author response, even for submissions that you think should be rejected. Sometimes author responses contain clarifications that change reviewers' minds.
- Engage actively in the discussion phase for each of the submissions that you are assigned, even if you are not specifically prompted to do so by the corresponding AC.
- It is not fair to dismiss any submission without having thoroughly read it. Think about the times when you received an unfair, unjustified, short, or dismissive review. Try not to be that reviewer! Always be constructive and help the authors understand your viewpoint, without being dismissive or using inappropriate language. If you need to cite existing work to justify one of your comments, please be as precise as possible and give a complete citation.
- If you would like the authors to clarify something during the author response phase, please articulate this clearly in your review (e.g., "I would like to see results of experiment X" or "Can you please include details about the parameter settings used for experiment Y?").
Reviewer Instructions
Online submission system (CMT)
All reviews must be submitted via the NIPS 2018 CMT site. You may visit the site multiple times and revise your reviews as often as necessary. If you are both an author and a reviewer, please use the same email address for both roles in CMT. During the reviewing process, you will receive many emails from CMT (i.e., email@msr-cmt.org). Such emails are sometimes accidentally marked as spam. Please check your spam folder regularly and if you find such an email in there, please whitelist the CMT email address so that you will receive future emails from CMT.
Please note that NIPS is using CMT3 for the first time this year.
Confidentiality
You must keep everything relating to the review process confidential. Do not use ideas and results from submissions in your own work until they become publicly available (e.g., via a technical report or a published paper). Do not talk about or distribute submissions (or the ideas and results described in them) to anyone without prior approval from the program chairs.
Double-blind reviewing
The reviewing process will be double blind at the level of reviewers and ACs (i.e., reviewers and ACs cannot see author identities) but not at the level of SACs and program chairs. Authors are responsible for anonymizing their submissions. In particular, they should not include author names, author affiliations, or acknowledgements in their submissions and they should avoid providing any other identifying information (even in the supplementary material). If you are assigned a submission that is not adequately anonymized (e.g., includes author names, author affiliations, acknowledgements, or other identifying information) then please contact the corresponding AC. Under no circumstances should you attempt to find out the identities of the authors for any of your assigned submissions (e.g., by searching on Google or arXiv). If you accidentally find out, please do not divulge the identities to anyone, but do tell your AC that this has happened and make a note of this in the “Confidential Comments to Program Chairs” text field when you submit your review. You should not let the authors’ identities influence your decision in any way.
Supplementary material
Authors may submit up to 100MB of supplementary material, such as proofs, derivations, data, or source code; all supplementary material must be in PDF or ZIP format. Your responsibility as a reviewer is to read and review the submission itself; looking at supplementary material is at your discretion. That said, NIPS submissions are short, so you may wish to look at supplementary material before complaining about insufficient details, proofs, or experimental results.
Formatting instructions
Submissions are limited to eight content pages, including all figures and tables, in the NIPS “submission” style; additional pages containing only references are allowed. Authors must use the NIPS 2018 LaTeX style file. If you are assigned any submissions that violate the NIPS style (e.g., by decreasing margins or font size) or page limits, please contact the program chairs.
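For reference, a minimal LaTeX skeleton in the submission style might look like the sketch below. This is only an illustration and assumes the style file distributed with the call for papers is named nips_2018.sty; loading the package with no options is assumed to produce the anonymized submission format (with line numbers), while the [final] option is reserved for camera-ready copies of accepted papers.

    \documentclass{article}
    \usepackage{nips_2018}           % submission mode (anonymized, with line numbers)
    % \usepackage[final]{nips_2018}  % camera-ready mode, for accepted papers only
    \title{An Illustrative Submission Title}
    \author{Anonymous Author(s)}     % author details are hidden by the style file in submission mode
    \begin{document}
    \maketitle
    \begin{abstract}
      The abstract goes here.
    \end{abstract}
    Up to eight content pages, including all figures and tables, go here; additional
    pages containing only references are allowed.
    \end{document}

If a submission you are reviewing appears to have narrower margins, a smaller font, or tighter line spacing than other submissions, that is a likely sign that the style was modified, and the program chairs should be notified.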
Dual submissions
Submissions that are identical or substantially similar to papers that are in submission to, have been accepted to, or have been published in other archival conferences, journals, workshops, etc. should be deemed dual submissions. Submissions that are identical or substantially similar to other NIPS submissions should also be deemed dual submissions; submissions should be distinct and sufficiently substantial. Slicing contributions too thinly may be grounds for deeming submissions dual submissions. If you suspect that a submission that has been assigned to you is a dual submission, or if you require further clarification, please contact the corresponding AC and program chairs. For more information about dual submissions, please see the author guidelines.
Review content
We know that serving as a reviewer for NIPS is time consuming, but the community needs outstanding people like yourself to uphold the scientific quality of NIPS. Review content is the primary means by which ACs, SACs, and program chairs make decisions about submissions. Please make your review as detailed and informative as possible; short, superficial reviews that venture uninformed opinions or guesses are worse than no review since they may result in the rejection of a high-quality submission. We ask that you pay particular attention to the question, “Does this submission add value to the NIPS community?” Solid, technical papers that explore new territory or point out new directions for research are preferable to papers that advance the state of the art, but only incrementally. Review content is also the primary means by which authors understand their submissions’ decisions. Reviews for rejected submissions help authors understand how to improve their work for other conferences or journals. Reviews for accepted submissions help authors understand how to improve their work for the camera-ready versions.
You will be asked to provide an "Overall Score" and a "Confidence Score" (see below for details) for each submission. You should explain these values in the "Detailed Comments" text field. Your comments should begin by summarizing the main ideas of the submission and relating these ideas to previous work at NIPS and in other archival conferences and journals. Although this part of the review may not provide much new information to authors, it is invaluable to ACs, SACs, and program chairs, and it can help the authors determine whether there are misunderstandings that need to be addressed in their author response. You should then summarize the strengths and weaknesses of the submission, focusing on each of the following four criteria:
Quality: Is the submission technically sound? Are claims well supported by theoretical analysis or experimental results? Is this a complete piece of work or work in progress? Are the authors careful and honest about evaluating both the strengths and weaknesses of their work?
Example from /nips30/reviews/1548.html
“The technical content of the paper appears to be correct albeit some small careless mistakes that I believe are typos instead of technical flaw (see #4 below).
…
4. The equation in line 125 appears to be wrong. Shouldn't there be a line break before the last equal sign, and shouldn't the last expression be equal to E_q[(\frac{p(z,x)}{q(z)})^2]?”
“The idea of having a sandwich bound for the log-marginal likelihood is certainly good. While the authors did demonstrate that the bound does indeed contain the log-marginal likelihood as expected, it is not entirely clear that the sandwich bound will be useful for model selection. This is not demonstrated in the experiment despite being one of the selling point of the paper. It's important to back up this claim using simulated data in experiment.”
Example from OpenReview
“Technical issues: The move from (1) to (2) is problematic. Yes it is a lower bound, but by igoring H(Z), equation (2) ignores the fact that H(Z) will potentially vary more significantly that H(Z|Y). As a result of removing H(Z), the objective (2) encourages Z that are low entropy as the H(Z) term is ignored, doubly so as low entropy Z results in low entropy Z|Y. Yes the -H(X|Z) mitigates against a complete entropy collapse for H(Z), but it still neglects critical terms. In fact one might wonder if this is the reason that semantic noise addition needs to be done anyway, just to push up the entropy of Z to stop it reducing too much. In (3) arbitrary balancing parameters lamda_1 and lambda_2 are introduced ex-nihilo - they were not there in (2). This is not ever justified. Then in (5), a further choice is made by simply adding L_{NLL} to the objective. But in the supervised case, the targets are known and so turn up in H(Z|Y). Hence now H(Z|Y) should be conditioned on the targets. However instead another objective is added again without justification, and the conditional entropy of Z is left disconnected from the data it is to be conditioned on. One might argue the C(X,Y,Z) simply acts as a prior on the networks (and hence implicitly on the weights) that we consider, which is then combined with a likelihood term, but this case is not made. In fact there is no explicit probabilistic or information theoretic motivation for the chosen objective. Given these issues, it is then not too surprising that some further things need to be done, such as semantic noise addition to actually get things working properly. It may be the form of noise addition is a good idea, but given the troublesome objective being used in the first place, it is very hard to draw conclusions. In summary, substantially better theoretical justification of the chosen model is needed, before any reasonable conclusion on the semantic noise modelling can be made.”
Clarity: Is the submission clearly written? Is it well organized? (If not, please make constructive suggestions for improving its clarity.) Does it adequately inform the reader? (Note: a superbly written paper provides enough information for an expert reader to reproduce its results.)
Example from /nips30/reviews/1548.html
“While the paper is pretty readable, there is certainly room for improvements in the clarity of the paper. I find paragraphs in section 1 and 2 to be repetitive. It is clear enough from the Introduction that the key advantages of CHIVI are the zero avoiding approximations and the sandwich bound. I don't find it necessary to be stressing that much more in section 2. Other than that, many equations in the paper do not have numbers. The references to the appendices are also wrong (There is no Appendix D or F). There is an extra period in line 188.
The Related Work section is well-written. Good job!”
Example from /nips30/reviews/1173.html
“The paper is generally well-written and structured clearly. The notation could be improved in a couple of places. In the inference model (equations between ll. 82-83), I would suggest adding a frame superscript to clarify that inference is occurring within each frame, e.g. q_{\phi}(z_2^{(n)} | x^{(n)}) and q_{\phi}(z_1^{(n)} | x^{(n)}, z_2^{(n)}). In addition, in Section 3 it was not immediately clear that a frame is defined to itself be a sub-sequence.”
Originality: Are the tasks or methods new? Is the work a novel combination of well-known techniques? Is it clear how this work differs from previous contributions? Is related work adequately cited? (Abstracts and links to many previous NIPS papers are available here.)
Example from /nips30/reviews/60.html
“The main contribution of this paper is to offer a convergence proof for minimizing sum fi(x) + g(x) where fi(x) is smooth, and g is nonsmooth, in an asynchronous setting. The problem is well-motivated; there is indeed no known proof for this, in my knowledge.
…
There are two main theoretical results. Theorem 1 gives a convergence rate for proxSAGA, which is incrementally better than a previous result. Theorem 2 gives the rate for an asynchronous setting, which is more groundbreaking.”
Example from /nips30/reviews/1173.html
“The paper is missing a related work section and also does not cite several related works, particularly regarding RNN variants with latent variables (Fraccaro et al. 2016; Chung et al. 2017), hierarchical probabilistic generative models (Johnson et al. 2016; Edwards & Storkey 2017) and disentanglement in generative models (Higgins et al. 2017). The proposed graphical model is similar to that of Edwards & Storkey (2017), though the frame-level Seq2Seq makes the proposed method sufficiently original. The study of disentanglement for sequential data is also fairly novel.”
Significance: Are the results important? Are others (researchers or practitioners) likely to use the ideas or build on them? Does the submission address a difficult task in a better way than previous work? Does it advance the state of the art in a demonstrable way? Does it provide unique data, unique conclusions about existing data, or a unique theoretical or experimental approach?
Example from /nips30/reviews/688.html
“I liked this article very much. It answers a very natural question: gradient descent is an extremely classical, and very simple algorithm. Although it is known not to be the fastest one in many situations, it is widely used in practice; we need to understand its convergence rate. The proof is also conceptually simple and elegant, and I found its presentation very clear.”
Example from /nips30/reviews/3278.html
"This paper seems to be a useful contribution to the literature on protein docking, showing a modest improvement over the state of the art. As such, I think the paper would be well-suited for publication in a molecular biology venue, or perhaps as an application paper at NIPS. The main weakness of the paper in my view is that it is a fairly straightforward application of an existing technique (GCNs) to a new domain (plus some feature engineering). As such I am leaning towards a rejection for NIPS."
Please comment on and take into account the strengths of the submission. It can be tempting to only comment on the weaknesses; however, ACs, SACs, and program chairs need to understand both the strengths and the weaknesses in order to make an informed decision. It is useful for the ACs, SACs, and program chairs if you include a list of arguments for and against acceptance. If you believe that a submission is out of scope for NIPS, then please justify this judgement appropriately, including, but not limited to, looking at subject areas and previous NIPS papers. If you need to cite existing work, please be as precise as possible and give a complete citation.
Example from /nips30/reviews/587.html
“There are several things to like about this paper:
- The problem of safe RL is very important, of great interest to the community and without too much in the way of high quality solutions.
- The authors make good use of the developed tools in model-based control and provide some bridge between developments across sub-fields.
- The simulations support the insight from the main theoretical analysis, and the algorithm seems to outperform its baseline.
However, I found that there were several shortcomings:
- I found the paper as a whole a little hard to follow and even poorly written as a whole. For a specific example of this see the paragraph beginning 197.
- The treatment of prior work and especially the "exploration/exploitation" problem is inadequate and seems to be treated as an afterthought: but of course it is totally central to the problem! Prior work such as [34] deserve a much more detailed discussion and comparison so that the reader can understand how/why this method is different.
- Something is confusing (or perhaps even wrong) about the way that Figure 1 is presented. In an RL problem you cannot just "sample" state-actions, but instead you may need to plan ahead over multiple timesteps for efficient exploration.
- The main theorems are hard to really internalize in any practical way, would something like a "regret bound" be possible instead? I'm not sure that these types of guarantees are that useful.
- The experiments are really on quite a simple toy domain that didn't really enthuse me.”
Example from https://openreview.net/forum?id=SkkTMpjex&noteId=rkgMSRKrx
“The main contributions of the paper are:
1) Distributed variant of K-FAC that is efficient for optimizing deep neural networks. The authors mitigate the computational bottlenecks of the method (second order statistic computation and Fisher Block inverses) by asynchronous updating.
2) The authors propose a “doubly-factored” Kronecker approximation for layers whose inputs are too large to be handled by the standard Kronecker-factored approximation. They also present (Appendix A) a cheaper Kronecker factored approximation for convolutional layers.
3) Empirically illustrate the performance of the method, and show:
- Asynchronous Fisher Block inversions do not adversely affect the performance of the method (CIFAR-10)
- K-FAC is faster than Synchronous SGD (with and without BN, and with momentum) (ImageNet)
- Doubly-factored K-FAC method does not deteriorate the performance of the method (ImageNet and ResNet)
- Favorable scaling properties of K-FAC with mini-batch size
Pros:
- Paper presents interesting ideas on how to make computationally demanding aspects of K-FAC tractable.
- Experiments are well thought out and highlight the key advantages of the method over Synchronous SGD (with and without BN).
Cons:
- “…it should be possible to scale our implementation to a larger distributed system with hundreds of workers.” The authors mention that this should be possible, but fail to mention the potential issues with respect to communication, load balancing and node (worker) failure. That being said, as a proof-of-concept, the method seems to perform well and this is a good starting point.
- Mini-batch size scaling experiments: the authors do not provide validation curves, which may be interesting for such an experiment. Keskar et. al. 2016 (On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima) provide empirical evidence that large-batch methods do not generalize as well as small batch methods. As a result, even if the method has favorable scaling properties (in terms of mini-batch sizes), this may not be effective.”
Your comments should be detailed, specific, and polite. Please avoid vague, subjective complaints. Think about the times when you received an unfair, unjustified, short, or dismissive review. Try not to be that reviewer! Always be constructive and help the authors understand your viewpoint, without being dismissive or using inappropriate language. Remember that you are not reviewing your level of interest in the submission, but its scientific contribution to the field!
If you have comments that you wish to be kept confidential from the authors, you can use either the “Confidential Comments to Area Chair” text field or the “Confidential Comments to Program Chairs” text field. Such comments might include explicit comparisons of the submission to other submissions and criticisms that are more bluntly stated. If you accidentally find out the identities of the authors, please do not divulge the identities to anyone, but do tell your AC that this has happened and make a note of this in the “Confidential Comments to Program Chairs” text field.
Overall score
You will be asked to provide an "Overall Score" between 1 and 10 for each submission. The ACs, SACs, and program chairs will interpret these scores via the following scale.
- 10: Top 5% of accepted NIPS papers. Truly groundbreaking work. I will consider not reviewing for NIPS again if this submission is rejected.
- 9: Top 15% of accepted NIPS papers. An excellent submission; a strong accept. I will fight for accepting this submission.
- 8: Top 50% of accepted NIPS papers. A very good submission; a clear accept. I vote and argue for accepting this submission.
- 7: A good submission; an accept. I vote for accepting this submission, although I would not be upset if it were rejected.
- 6: Marginally above the acceptance threshold. I tend to vote for accepting this submission, but rejecting it would not be that bad.
- 5: Marginally below the acceptance threshold. I tend to vote for rejecting this submission, but accepting it would not be that bad.
- 4: An okay submission, but not good enough; a reject. I vote for rejecting this submission, although I would not be upset if it were accepted.
- 3: A clear reject. I vote and argue for rejecting this submission.
- 2: I'm surprised this work was submitted to NIPS; a strong reject. I will fight for rejecting this submission.
- 1: Trivial or wrong or already known. I will consider not reviewing for NIPS again if this submission is accepted.
You should NOT assume that you were assigned a representative sample of submissions, nor should you adjust your scores to match the overall conference acceptance rates. The “Overall Score” for each submission should reflect your assessment of the submission’s contributions.
Confidence score
You will be asked to provide a “Confidence Score” between 1 and 5 for each submission. The ACs, SACs, and program chairs will interpret these scores via the following scale.
- 5: You are absolutely certain about your assessment. You are very familiar with the related work.
- 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
- 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
- 2: You are willing to defend your assessment, but it is quite likely that you did not understand central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
- 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Author response
Authors will be given the opportunity to respond to their reviews before decisions are made. This is to enable them to address misunderstandings, point out parts of their submissions that were overlooked, or disagree with the reviewers' assessments. In previous years, some authors felt that their responses were ignored. As a reviewer, it is your responsibility to read and (if appropriate) respond to each author response. It is not fair to ignore any author response, even for submissions that you think should be rejected. Although it is possible that an author response will not change your assessment of a submission, you must convey to the authors that you have carefully considered their comments. As you read each author response, keep an open mind. Have you overlooked something? Please update each review to indicate that you have read the author response and whether you agree or disagree with it. You should be more specific than "I have read the author response and my opinion remains the same." If that is the case, you should explain why it remains the same, what the author response failed to address, etc.
Discussion
After the author response phase, the AC for each submission will initiate a discussion via CMT to encourage the reviewers to come to a consensus. If the reviewers do come to a consensus, the program chairs will take it seriously; only rarely are unanimous assessments overruled. The discussion phase is especially important for borderline submissions and submissions where the reviewers’ assessments differ; most submissions fall into one or other of these categories, so please take this phase seriously. When discussing a submission, try to remember that different people have different backgrounds and different points of view. Ask yourself, “Do the other reviewers' comments make sense?" and do consider changing your mind in light of their comments, if appropriate. That said, if you think the other reviewers are not correct, you are not required to change your mind. Reviewer consensus is valuable, but it is not mandatory.