martenlienen OP t1_izi8k3g wrote
Reply to comment by MathChief in [R] torchode: A Parallel ODE Solver for PyTorch by martenlienen
First, the difference in steps is probably due to different tolerances in the step size controller.
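To make that concrete, here is a minimal sketch of where the tolerances enter in torchode (assuming the documented API; the values and the VdP parameters are illustrative, not the benchmark settings). Tightening atol/rtol forces smaller steps and therefore more of them:

```python
import torch
import torchode as to

MU = 4.0  # illustrative Van der Pol stiffness parameter, not the benchmark value

def vdp(t, y):
    # y has shape (batch, 2): y[:, 0] is position, y[:, 1] is velocity
    y1, y2 = y[:, 0], y[:, 1]
    return torch.stack((y2, MU * (1.0 - y1**2) * y2 - y1), dim=-1)

y0 = torch.tensor([[2.0, 0.0]])
t_eval = torch.linspace(0.0, 20.0, 100).unsqueeze(0)

term = to.ODETerm(vdp)
step_method = to.Dopri5(term=term)
# The step size controller's tolerances determine how many steps are taken;
# different defaults across libraries lead to different step counts.
controller = to.IntegralController(atol=1e-8, rtol=1e-6, term=term)
solver = to.AutoDiffAdjoint(step_method, controller)

sol = solver.solve(to.InitialValueProblem(y0=y0, t_eval=t_eval))
print(sol.stats)  # step statistics (assumption: stats dict as in the torchode docs)
```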
The loop times are measured in milliseconds. Of course, that is much slower than what you got in MATLAB. The difference is that we ran all benchmarks on a GPU, because that is the usual mode for deep learning, even though a GPU is certainly inappropriate for the VdP equation if you are interested in it for anything other than benchmarking the inner loop of an ODE solver. I think you can get numbers similar to your MATLAB code with diffrax and JIT compilation on a CPU. However, you won't get them with torchode, because PyTorch's JIT is not as good as JAX's, and this line in particular is really slow on CPUs. Nonetheless, after comparing several alternatives, we chose this approach because, as I said, in practice only GPU performance matters for most of deep learning.
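Roughly, the diffrax-on-CPU path I mean looks like this (just a sketch with illustrative tolerances and parameters, not the exact benchmark code); jax.jit compiles the whole solver loop, which is what makes the CPU path fast:

```python
import jax
import jax.numpy as jnp
import diffrax

def vdp(t, y, args):
    # Same Van der Pol system as above, written for diffrax's f(t, y, args) convention
    mu = args
    y1, y2 = y
    return jnp.array([y2, mu * (1.0 - y1**2) * y2 - y1])

term = diffrax.ODETerm(vdp)
solver = diffrax.Tsit5()
controller = diffrax.PIDController(rtol=1e-6, atol=1e-8)

@jax.jit  # compiles the entire solve, including the stepping loop
def solve(y0):
    return diffrax.diffeqsolve(
        term, solver, t0=0.0, t1=20.0, dt0=0.01, y0=y0, args=4.0,
        stepsize_controller=controller,
        saveat=diffrax.SaveAt(ts=jnp.linspace(0.0, 20.0, 100)),
    )

sol = solve(jnp.array([2.0, 0.0]))  # first call compiles; subsequent calls are fast
print(sol.stats["num_steps"])
```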
MathChief t1_izjayhi wrote
Cool. Thanks for the explanation.