Review for NeurIPS paper: Learning Physical Constraints with Neural Projections


Weaknesses: From the results, it appears that a separate model is trained for each system, and within each system there is little procedural variation in the structure/relative distances of the particles; only the initial positions/velocities are varied. Do you expect this to work in cases with more procedural variation in the relative positioning of the points (e.g. if the rigid body is sometimes a square and sometimes an arbitrary trapezoid)? I suspect this could work if another input were added to C, such as the previous state or some reference distances, but it does not seem like it would work in the current form of the model: the C function could not tell whether the constraints are satisfied just by looking at the positions of the points, since it has no way of knowing whether the constraints to be satisfied are those of a square or those of a specific trapezoid.

Similarly, I wonder how much the model relies on the context of the other particles to infer how systems should collide with a wall. For example, consider predictions for a system with a single particle that, within a single timestep, would bounce elastically off a wall. I suspect the model would always place the particle right at the wall: the linear prediction would move it past the wall, and the constraint-satisfaction step would put it back right at the wall (where the constraint is satisfied with minimal displacement), but would not make it bounce back off the wall. In the next step the linear extrapolation would essentially do the same thing once more, and beyond that the particle would become permanently stuck at the wall once two consecutive positions place it there.
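To make the square-vs-trapezoid ambiguity concrete, here is a minimal sketch (my own hypothetical construction, not the paper's C) showing that the same particle positions are "satisfied" or "violated" depending on reference distances that a positions-only C never sees:

```python
import math

def pairwise_dists(pts):
    # All pairwise distances between 2-D points, in a fixed order.
    return [math.dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:]]

def C(pts, ref_dists):
    # Hypothetical constraint residual: largest deviation of the current
    # pairwise distances from the reference (rest-shape) distances.
    return max(abs(d - r) for d, r in zip(pairwise_dists(pts), ref_dists))

trapezoid = [(0, 0), (3, 0), (2, 1), (1, 1)]
square_ref = pairwise_dists([(0, 0), (1, 0), (1, 1), (0, 1)])
trap_ref = pairwise_dists(trapezoid)

# Identical positions, opposite verdicts depending on the reference shape:
print(C(trapezoid, trap_ref))    # → 0.0 (satisfied as a trapezoid)
print(C(trapezoid, square_ref))  # → 2.0 (violated as a square)
```

Without the extra `ref_dists` input, C(trapezoid) would have to return both 0 and a nonzero residual for the same argument, which is the inconsistency described above.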
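The single-particle sticking argument can be sketched in 1-D, assuming a predict-then-project step of the kind described in the paper (linear extrapolation from the last two positions, then a minimal-displacement projection onto the feasible set x ≤ wall; this is my reading of the scheme, not the authors' exact implementation):

```python
def predict_then_project(x_prev, x_curr, wall=1.0):
    """One step: linear prediction, then minimal-displacement projection
    onto the half-space x <= wall (1-D stand-in for the learned projection)."""
    x_pred = 2.0 * x_curr - x_prev   # linear extrapolation from last two positions
    x_proj = min(x_pred, wall)       # closest feasible point: snaps to the wall
    return x_curr, x_proj

x_prev, x_curr = 0.7, 0.9            # moving toward the wall at 0.2 per step
traj = [x_prev, x_curr]
for _ in range(5):
    x_prev, x_curr = predict_then_project(x_prev, x_curr)
    traj.append(x_curr)
print(traj)  # → [0.7, 0.9, 1.0, 1.0, 1.0, 1.0, 1.0]
```

An elastic bounce would instead return the particle to 0.9 after the impact step; under this scheme, once two consecutive positions equal the wall, the extrapolated velocity is zero and the particle never leaves.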