inFamous_16
inFamous_16 OP t1_javmu8a wrote
Reply to comment by Jaffa6 in [R] Variable size input to pre-trained BERT model by inFamous_16
Ohh ok... super clear. Thanks for your time! I will check this out.
inFamous_16 OP t1_jav6112 wrote
Reply to comment by Jaffa6 in [R] Variable size input to pre-trained BERT model by inFamous_16
Ahhh... thank you! I wasn't aware of the concept of an attention mask. I also had one more doubt: since I already have tweet features of variable size after concatenation, is there a way to skip the tokenization step? I don't require it; I only need the padding and the attention mask.
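In other words, something like this is what I'm imagining: pad the pre-computed feature sequences to a common length and build the attention mask by hand (a minimal sketch; the helper name `pad_and_mask` is mine, and I'm assuming the padded tensors would then go to BERT via the HuggingFace `inputs_embeds` argument instead of `input_ids`):

```python
import numpy as np

def pad_and_mask(batch, pad_value=0.0):
    """Pad a batch of variable-length feature sequences and build attention masks.

    batch: list of arrays, each of shape (seq_len_i, hidden_dim).
    Returns padded features of shape (batch, max_len, hidden_dim) and a mask
    of shape (batch, max_len), where 1 marks real positions and 0 marks padding.
    """
    max_len = max(x.shape[0] for x in batch)
    hidden = batch[0].shape[1]
    padded = np.full((len(batch), max_len, hidden), pad_value, dtype=np.float32)
    mask = np.zeros((len(batch), max_len), dtype=np.int64)
    for i, x in enumerate(batch):
        padded[i, : x.shape[0]] = x   # copy the real features
        mask[i, : x.shape[0]] = 1     # mark them as attendable
    return padded, mask

# example: two "tweets" with 2 and 3 feature vectors of dimension 4
a = np.ones((2, 4), dtype=np.float32)
b = np.ones((3, 4), dtype=np.float32)
padded, mask = pad_and_mask([a, b])
print(padded.shape)   # (2, 3, 4)
print(mask.tolist())  # [[1, 1, 0], [1, 1, 1]]
```

The padded batch and mask could then be passed as `model(inputs_embeds=padded, attention_mask=mask)`, which BertModel supports, though the feature dimension would have to match the model's hidden size.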
inFamous_16 OP t1_jauvj21 wrote
Reply to comment by I_will_delete_myself in [R] Variable size input to pre-trained BERT model by inFamous_16
Yeah, thanks... that was the first thought that came to my mind, but won't we lose the context of the original feature vector that way?
inFamous_16 t1_j5a67of wrote
Reply to [R] Is there a way to combine a knowledge graph and other types of data for ML purposes? by Low-Mood3229
Read the TextGCN paper, which uses a Graph Neural Network for the text classification task.
inFamous_16 OP t1_jb5et2d wrote
Reply to comment by boosandy in [R] Variable size input to pre-trained BERT model by inFamous_16
Yeah, got it... thank you!