1F9 t1_jcdfbje wrote
I am concerned that moving more stuff up into Python is a mistake. It limits support for other languages, like Rust, which speak to the C++ core. Also, executing Python is slower, so it limits what can be done by the framework before being considered “too slow.”
Moving a bit to a high level language seems like a win, but when that inspires moving large parts of a big project to high-level languages, I’ve seen unfortunate results. It seems each piece in a high level language often imposes non-obvious costs on all the pieces.
This is nothing new. Way back in the day, Netscape gave up on Javagator, and Microsoft “reset” Windows Longhorn to rip out all the C#. Years of work by large teams thrown away.
-Rizhiy- t1_jce09xx wrote
There is a reason it is called PyTorch :)
1F9 t1_jcfxc5b wrote
That reason is that they replaced Lua with Python as the high-level language that wrapped Torch's core, and needed to differentiate that from the original Torch. But it seems as though you believe the "py" prefix means the correct design decision for the project is to replace ever more parts of Torch with Python. Perhaps you could elaborate on your thinking there?
Philpax t1_jcdtj6o wrote
Agreed. It also complicates productionising the model if you're reliant on features that are only available in the Python interface. Of course, there are ways around that (like just rewriting the relevant bits), but it's still unfortunate.
programmerChilli t1_jcdykn2 wrote
The separation is that the "ML logic" is moving into Python, but you can still export the model to C++.
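A minimal sketch of that export path, assuming TorchScript (the model, file name, and sizes here are made up for illustration): the "ML logic" is authored in Python, then compiled to a serialized artifact that libtorch can load from C++ with no Python interpreter present.

```python
import torch

class TinyNet(torch.nn.Module):
    """Hypothetical model standing in for the Python-side 'ML logic'."""
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = TinyNet().eval()
scripted = torch.jit.script(model)   # compile the Python logic to TorchScript
scripted.save("tiny_net.pt")         # self-contained artifact for deployment
```

On the C++ side, `torch::jit::load("tiny_net.pt")` gives back a callable module, which is why the Python-heavy authoring story doesn't necessarily block productionising.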
zbyte64 t1_jcdzvhh wrote
That's why all my ML is done in Objective-C /s. Production looks different for different use cases.
ML4Bratwurst t1_jced3ae wrote
Because we all know that python can't call c++ code
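Tongue-in-cheek, but the point stands: calling native code from Python is routine. A minimal sketch with stdlib `ctypes` (the simplest binding mechanism; PyTorch itself uses richer layers like pybind11, and the library lookup here assumes a typical Unix-like system):

```python
import ctypes
import ctypes.util

# Load the C math library; find_library resolves the platform-specific name.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # → 1.0, computed in native code, called from Python
```

The overhead is in crossing the boundary per call, which is why frameworks push large fused operations (not tight loops) across it.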
Exarctus t1_jcfmqqs wrote
I think you’ve entirely misunderstood what PyTorch is and how it functions.
PyTorch is a front-end to libtorch, which is the C++ backend. Libtorch itself is a wrapper around various highly optimised libraries as well as CUDA implementations of specific ops. Virtually nothing computationally expensive is done in the Python layer.
[deleted] OP t1_jcftpu2 wrote
You should look into TorchDynamo and TorchInductor; there's a good overview here: https://dev-discuss.pytorch.org/t/torchinductor-a-pytorch-native-compiler-with-define-by-run-ir-and-symbolic-shapes/747
[deleted] OP t1_jcio9zw wrote
[removed]
duboispourlhiver t1_jcek81c wrote
IMHO this can only be answered on a case-by-case basis and there is no general rule. If anyone really understands what has been moved to Python and what the consequences are, their insight is welcome.
[deleted] OP t1_jcdnva2 wrote
[deleted]