maybelator t1_j5t6yba wrote
Reply to comment by Rainandblame in [D] CVPR Reviews are out by banmeyoucoward
Most likely no, but if your rebuttal can flip the reject, it can go through. Spend a lot of effort on this reviewer.
maybelator t1_j5t6ao4 wrote
Reply to comment by bombay_doors in [D] CVPR Reviews are out by banmeyoucoward
Had orals recently with 4,4,4 -> 5,5,4 and 4,4,4 -> 5,5,5 (pre- -> post-rebuttal). I think the strong accepts are a must; I've never had an oral without them.
maybelator t1_iyycm4g wrote
Reply to comment by xgu5 in [D] Score 4.5 GNN paper from Muhan Zhang at Peking University was amazingly accepted by NeurIPS 2022 by Even_Stay3387
Because they have 30+ papers to manage. The reviews allow them to focus on edge cases such as this one.
With CMT, the rebuttal is partially addressed to the AC. With OpenReview, I agree that it looks more uncomfortable.
maybelator t1_ixcyhrp wrote
Reply to comment by Simping4Kaiming in [D] AISTATS 2023 reviews are out by von_oldmann
Papers are not selected based on their average score, but on whether or not there is a consensus towards acceptance. Show the meta-reviewers that you can meaningfully address the reservations of the 4 and the 5, or show that they are not valid (be very careful with this route).
Based on the scores, the scale tips in your favor. But the rebuttal will be critical.
maybelator t1_ixc8iei wrote
Reply to comment by No-Connection4924 in [D] AISTATS 2023 reviews are out by von_oldmann
I think a meta-reviewer fucked up. Maybe too eager on emergency reviews?
maybelator t1_ixc8f5q wrote
Reply to comment by Simping4Kaiming in [D] AISTATS 2023 reviews are out by von_oldmann
It depends 100% on your rebuttal. Show that you understood the low reviews and improved your paper accordingly. Or, if a bad review is nonsensical, make that clear with line references and citations.
maybelator t1_ix30udx wrote
Reply to comment by No_Potato_1999 in [D] AAAI 2023 Notification of Acceptance/Rejection by errohan400
Most decisions are straightforward. But the borderline cases involve hundreds of meta-reviewers, dozens of senior meta-reviewers, and hundreds of late and emergency reviewers, all giving their time for free.
maybelator t1_iwbxutj wrote
Reply to comment by zimonitrome in [R] ZerO Initialization: Initializing Neural Networks with only Zeros and Ones by hardmaru
The Huber loss encourages the regularized variable to be close to 0. However, this loss is also smooth: the amplitude of the gradient decreases as the variable nears its stationary point. As a consequence, many coordinates end up close to 0 but not exactly 0. Achieving true sparsity then requires thresholding, which adds a lot of other complications.
In contrast, the amplitude of the gradient of the L1 norm (the absolute value in dimension 1) remains the same no matter how close the variable gets to 0. The functional has a kink: its subdifferential at 0 contains a neighborhood of 0. As a consequence, if you use a well-suited optimization algorithm, the variable will be truly sparse, i.e. have many exact zeros.
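Here's a minimal numpy sketch of the difference (my own toy example, not from the paper): plain gradient descent on a Huber-penalized least-squares problem leaves every coordinate small but nonzero, while proximal gradient descent on the L1-penalized version (soft-thresholding) returns exact zeros.

```python
import numpy as np

# Toy problem: min_x 0.5*||x - b||^2 + lam * penalty(x)
rng = np.random.default_rng(0)
b = rng.normal(size=10)
lam, lr, delta = 0.5, 0.1, 0.1  # delta = Huber transition point

def huber_grad(x, delta):
    # Smooth near 0: gradient amplitude shrinks as x -> 0
    return np.where(np.abs(x) <= delta, x / delta, np.sign(x))

x_h = b.copy()
for _ in range(1000):  # plain gradient descent on the Huber penalty
    x_h -= lr * ((x_h - b) + lam * huber_grad(x_h, delta))

def soft_threshold(x, t):
    # Proximal operator of t*||.||_1: maps small coordinates to exactly 0
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

x_l1 = b.copy()
for _ in range(1000):  # proximal gradient descent (ISTA) on the L1 penalty
    x_l1 = soft_threshold(x_l1 - lr * (x_l1 - b), lr * lam)

print("Huber: exact zeros =", np.sum(x_h == 0.0))   # 0
print("L1:    exact zeros =", np.sum(x_l1 == 0.0))  # several
```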
maybelator t1_iwbpkjo wrote
Reply to comment by zimonitrome in [R] ZerO Initialization: Initializing Neural Networks with only Zeros and Ones by hardmaru
Not if you want true sparsity!
maybelator t1_ivxgacq wrote
Reply to comment by jrkirby in [R] ZerO Initialization: Initializing Neural Networks with only Zeros and Ones by hardmaru
> Is the derivative of ReLU at 0.0 equal to NaN, 0 or 1?
The derivative of ReLU is not defined at 0, but its subderivative is, and it is the set [0,1].
You can pick any value in this set, and you end up with (stochastic) subgradient descent, which converges for small enough learning rates (to a critical point).
For ReLU, the non-differentiable point has measure zero and is not "attractive", i.e. there is no reason for the iterates to end up exactly at 0, so it can safely be ignored. This is not the case for the L1 norm, for example, whose subdifferential at 0 is [-1,1]. It presents a "kink" at 0, as the subdifferential contains a neighborhood of 0, and is hence attractive: your iterates will get stuck there. In these cases, it is recommended to use proximal algorithms, typically forward-backward schemes.
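A 1-D toy sketch of the difference (assumptions mine: minimize 0.5*(x - b)^2 + |x| with |b| < 1, so the exact minimizer is 0):

```python
import numpy as np

lr, b = 0.1, 0.1  # since |b| < 1, the minimizer of 0.5*(x-b)^2 + |x| is exactly 0

# Subgradient descent: pick sign(x) as the subgradient of |x|
# (np.sign(0) = 0, a valid element of [-1, 1]).
x = 1.0
for _ in range(200):
    x -= lr * ((x - b) + np.sign(x))
print("subgradient descent:", x)  # oscillates near 0, never lands exactly on it

# Forward-backward: gradient step on the smooth part, then the prox of |.|
# (soft-thresholding), which outputs exact zeros.
def soft_threshold(v, t):
    return np.sign(v) * max(abs(v) - t, 0.0)

x = 1.0
for _ in range(200):
    x = soft_threshold(x - lr * (x - b), lr)
print("forward-backward:", x)  # exactly 0.0
```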
maybelator t1_j5t74ae wrote
Reply to comment by Jack7heRapper in [D] CVPR Reviews are out by banmeyoucoward
Reviewers are not allowed to ask for more experiments. You could signal this to the area chair, but ultimately the paper probably won't be accepted.
Do write a rebuttal, however; it's a great exercise.