gahaalt OP t1_izo0w5m wrote
Reply to comment by David202023 in Progress Table - is it better than TQDM for your use case? by gahaalt
Hi! Thanks for the feedback.
Actually, Progress Table is not tied to Keras or any other Deep Learning framework. You can use Progress Table to track any long-running process that produces data. The source code is not neural network specific :)
To help you start out, I've created a markdown file with a PyTorch integration example. Check this out: integrations.md. Let me know if it's clear!
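Roughly, using it outside deep learning looks like this (a minimal sketch, assuming the ProgressTable constructor, cell assignment, and the next_row/close calls from the project README):

from progress_table import ProgressTable

# Any long-running loop works; no neural networks involved.
table = ProgressTable(columns=["file", "lines", "seconds"])

for name in ["a.txt", "b.txt", "c.txt"]:  # hypothetical inputs
    table["file"] = name
    table["lines"] = 1000  # placeholder for a real measurement
    table["seconds"] = 0.5  # placeholder for a real measurement
    table.next_row()

table.close()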
Submitted by gahaalt t3_zh381d in deeplearning
gahaalt OP t1_ivnveuu wrote
Reply to comment by VinnyVeritas in Pytorch Symbolic: an equivalent of Keras Functional API [Project] by gahaalt
Why not have an API that simplifies model creation in PyTorch? Let's not debate which framework is better. Model creation is but a small brick in the whole framework ecosystem. I am sure there are people who want to stick with PyTorch while creating models conveniently.
gahaalt OP t1_ivnsc23 wrote
Reply to comment by androstudios in Pytorch Symbolic: an equivalent of Keras Functional API [Project] by gahaalt
Yes, but nn.Sequential won't allow for residual connections, for example. You can, however, create a Symbolic Model with residual connections and call the entire model as one function, in your words.
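To make that concrete, here is a minimal residual connection that nn.Sequential alone cannot express (a sketch built from the same Input/SymbolicModel calls shown elsewhere in this thread):

import torch
from torch import nn
from pytorch_symbolic import Input, SymbolicModel

inputs = Input(shape=(64,))
x = nn.Linear(64, 64)(inputs)
x = nn.ReLU()(x)
outputs = x + inputs  # the skip connection nn.Sequential cannot express
model = SymbolicModel(inputs, outputs)
print(model(torch.rand(1, 64)).shape)  # torch.Size([1, 64])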
gahaalt OP t1_ivns08k wrote
Reply to comment by Mefaso in Pytorch Symbolic: an equivalent of Keras Functional API [Project] by gahaalt
Thanks for your opinion. Please look at the article "What are Symbolic and Imperative APIs in TensorFlow 2.0?" by Josh Gordon, linked here. It seems natural to him to describe this API as "Symbolic".
Basically, if you google "Symbolic API", it seems to be the common term for this very thing.
Also, similar nomenclature is used in MXNet.
gahaalt OP t1_ivlr45z wrote
Reply to comment by gahaalt in Pytorch Symbolic: an equivalent of Keras Functional API [Project] by gahaalt
Let me copy the comparison in case somebody doesn't feel like clicking the link. This might be long, however.
ResNet with the help of Pytorch Symbolic:
from torch import nn
from pytorch_symbolic import Input, SymbolicModel

inputs = Input(shape=(3, 32, 32))  # symbolic tensor; .C, .HW and .features read its shape
x = nn.Conv2d(inputs.C, 32, 3)(inputs)(nn.ReLU())
x = nn.Conv2d(x.C, 64, 3)(x)(nn.ReLU())
block_1_output = nn.MaxPool2d(3)(x)
x = nn.Conv2d(block_1_output.C, 64, 3, padding=1)(block_1_output)(nn.ReLU())
x = nn.Conv2d(x.C, 64, 3, padding=1)(x)(nn.ReLU())
block_2_output = x + block_1_output  # residual connection
x = nn.Conv2d(block_2_output.C, 64, 3, padding=1)(block_2_output)(nn.ReLU())
x = nn.Conv2d(x.C, 64, 3, padding=1)(x)(nn.ReLU())
block_3_output = x + block_2_output  # residual connection
x = nn.Conv2d(block_3_output.C, 64, 3)(block_3_output)(nn.ReLU())
x = nn.AvgPool2d(kernel_size=x.HW)(x)(nn.Flatten())  # kernel size inferred, not hand-calculated
x = nn.Linear(x.features, 256)(x)(nn.ReLU())
x = nn.Dropout(0.5)(x)
outputs = nn.Linear(x.features, 10)(x)
model = SymbolicModel(inputs, outputs)
ResNet defined in "standard" PyTorch:
from torch import nn

class ToyResNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.relu = nn.ReLU()
        self.block1conv1 = nn.Conv2d(3, 32, 3)
        self.block1conv2 = nn.Conv2d(32, 64, 3)
        self.maxpool = nn.MaxPool2d(3)
        self.block2conv1 = nn.Conv2d(64, 64, 3, padding=1)
        self.block2conv2 = nn.Conv2d(64, 64, 3, padding=1)
        self.block3conv1 = nn.Conv2d(64, 64, 3, padding=1)
        self.block3conv2 = nn.Conv2d(64, 64, 3, padding=1)
        self.conv1 = nn.Conv2d(64, 64, 3)
        kernel_size = 7  # calculated by hand
        self.global_pool = nn.AvgPool2d(kernel_size)
        self.flatten = nn.Flatten()
        self.linear = nn.Linear(64, 256)
        self.dropout = nn.Dropout(0.5)
        self.classifier = nn.Linear(256, 10)

    def forward(self, x):
        x = self.relu(self.block1conv1(x))
        x = self.relu(self.block1conv2(x))
        block_1_output = self.maxpool(x)
        x = self.relu(self.block2conv1(block_1_output))
        x = self.relu(self.block2conv2(x))
        block_2_output = x + block_1_output
        x = self.relu(self.block3conv1(block_2_output))
        x = self.relu(self.block3conv2(x))
        block_3_output = x + block_2_output
        x = self.relu(self.conv1(block_3_output))
        x = self.global_pool(x)
        x = self.flatten(x)
        x = self.relu(self.linear(x))
        x = self.dropout(x)
        return self.classifier(x)

model = ToyResNet()
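Either way, the resulting model is called identically; a quick sanity check (batch size 4 is arbitrary):

import torch

batch = torch.rand(4, 3, 32, 32)
print(model(batch).shape)  # torch.Size([4, 10]) for both definitions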
gahaalt OP t1_ivlq9z1 wrote
Reply to comment by llun-ved in Pytorch Symbolic: an equivalent of Keras Functional API [Project] by gahaalt
Thanks a lot for the suggestion! The comparison between symbolic and imperative declarations is indeed interesting. I included it in the documentation; here is the link to the specific section. It is an example of a toy ResNet neural network, still simple but a tad more interesting.
gahaalt OP t1_ivkmxtm wrote
Reply to comment by SEND_ALL_DOG_PICS in Pytorch Symbolic: an equivalent of Keras Functional API [Project] by gahaalt
Thanks for this question!
Pytorch Symbolic simplifies the definition of neural network models. It is indeed creating a graph under the hood to do this. In this graph, every edge is an nn.Module.
torchdynamo looks great as a tool for optimizing existing models to perform better on the GPU by removing the CPU overhead entirely. Sometimes the improvement is really impressive.
Yes, torchdynamo does some kind of graph capture as well. It even modifies the bytecode to speed up execution. But in the end it is a wrapper for an nn.Module that speeds it up. To speed up the model, you have to define it first.
So the two libraries are actually independent. You can use torchdynamo to speed up models created with Pytorch Symbolic. IMO it is a great combination.
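As a sketch of that combination (in PyTorch 2.x torchdynamo is exposed through torch.compile; earlier releases used torchdynamo.optimize directly):

import torch
from torch import nn
from pytorch_symbolic import Input, SymbolicModel

inputs = Input(shape=(128,))
x = nn.Linear(128, 128)(inputs)
x = nn.ReLU()(x)
outputs = nn.Linear(128, 10)(x)
model = SymbolicModel(inputs, outputs)

compiled = torch.compile(model)  # torchdynamo captures and optimizes the model
result = compiled(torch.rand(8, 128))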
gahaalt OP t1_ivk66uu wrote
Reply to comment by KingsmanVince in Pytorch Symbolic: an equivalent of Keras Functional API [Project] by gahaalt
Here. It is just like the standard layer, but it retrieves the necessary input shape during the first forward call.
You can use lazy layers in Pytorch Symbolic too.
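A minimal sketch of that (assuming lazy modules materialize their shapes while the symbolic graph is being traced):

import torch
from torch import nn
from pytorch_symbolic import Input, SymbolicModel

inputs = Input(shape=(3, 32, 32))
x = nn.LazyConv2d(16, kernel_size=3)(inputs)  # in_channels inferred automatically
x = nn.Flatten()(x)
outputs = nn.LazyLinear(10)(x)  # in_features inferred, no shape math by hand
model = SymbolicModel(inputs, outputs)
print(model(torch.rand(2, 3, 32, 32)).shape)  # torch.Size([2, 10])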
gahaalt OP t1_ivk3tvj wrote
Reply to comment by violentdeli8 in Pytorch Symbolic: an equivalent of Keras Functional API [Project] by gahaalt
Yeah! You have a lot of flexibility to do NAS here. You can create a huge graph of layers and sample a smaller path from it to create a Symbolic Model. The one non-standard thing you need to do to pull it off is to modify the ._children attribute of Symbolic Data when you want to rewire the connections in this graph.
I might add an example of a simple NAS soon.
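Until then, a rough sketch of the sampling idea that avoids the ._children rewiring entirely (assuming, as described above, that SymbolicModel keeps only the layers on the path from inputs to outputs):

import random
from torch import nn
from pytorch_symbolic import Input, SymbolicModel

inputs = Input(shape=(32,))
candidates = [nn.Linear(32, 32)(inputs) for _ in range(4)]  # one big graph of options
chosen = random.choice(candidates)  # sample a smaller path
outputs = nn.Linear(32, 10)(chosen)
model = SymbolicModel(inputs, outputs)  # only the sampled path ends up in the model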
gahaalt OP t1_ivjw8lx wrote
Reply to comment by Zondartul in Pytorch Symbolic: an equivalent of Keras Functional API [Project] by gahaalt
No, it has a different purpose than SymPy. As I understand it, SymPy is a library mainly for manipulating symbolic mathematical expressions.
Pytorch Symbolic uses symbolic variables to record (capture) operations and later replay them on arbitrary data. Under the hood, there's a graph with symbolic variables as nodes and transformations (e.g. layers) as edges.
Pytorch Symbolic can capture and replay arbitrary Python operations, but it cannot display them in notation as neat as SymPy's.
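For example, capture and replay of plain tensor arithmetic might look like this (a sketch; the tuple-of-inputs form of SymbolicModel is my assumption here):

import torch
from pytorch_symbolic import Input, SymbolicModel

a = Input(shape=(5,))
b = Input(shape=(5,))
outputs = a * 2 + b  # arbitrary operations are recorded...
model = SymbolicModel((a, b), outputs)
print(model(torch.ones(1, 5), torch.zeros(1, 5)))  # ...and replayed on real data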
Submitted by gahaalt t3_ypkfwq in MachineLearning
gahaalt OP t1_izo28vv wrote
Reply to comment by RichardBJ1 in Progress Table - is it better than TQDM for your use case? by gahaalt
Hello! Thanks for your feedback. Actually, Progress Table is flexible and you can display arbitrary data in table cells. It can be, for example, the string f"{epoch}/{total_epochs}". It's you who defines what will be displayed :) To make it clearer, I created integrations.md, where you can see an example of Progress Table integration with PyTorch and Keras.
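For instance, a minimal sketch of such a cell (same assumed ProgressTable calls as in the README):

from progress_table import ProgressTable

total_epochs = 10
table = ProgressTable(columns=["epoch", "loss"])
for epoch in range(total_epochs):
    table["epoch"] = f"{epoch + 1}/{total_epochs}"  # any string can be a cell
    table["loss"] = 0.01  # placeholder value
    table.next_row()
table.close()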