Ahhh... thank you! I wasn't aware of the concept of an attention mask. I also had one more doubt: since I already have tweet features of variable size after concatenation, is there a way to skip the tokenization step, since I don't require it? I only need the padding and the attention mask.
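Something like this sketch is what I have in mind (not sure if it's the right approach; it assumes each tweet's features are already a float tensor in BERT's embedding space, e.g. 768-dim for bert-base, so they can go in through `inputs_embeds` instead of `input_ids`):

```python
import torch
from torch.nn.utils.rnn import pad_sequence
from transformers import BertModel

# Variable-length, pre-computed per-token features (hypothetical example data).
features = [torch.randn(12, 768), torch.randn(7, 768)]

# Pad to the longest sequence in the batch (pads with 0.0 by default).
padded = pad_sequence(features, batch_first=True)  # (batch, max_len, 768)

# Attention mask: 1 for real positions, 0 for padding.
lengths = torch.tensor([f.size(0) for f in features])
attention_mask = (torch.arange(padded.size(1))[None, :] < lengths[:, None]).long()

# Feed the padded features directly, bypassing the tokenizer and embedding lookup.
model = BertModel.from_pretrained("bert-base-uncased")
outputs = model(inputs_embeds=padded, attention_mask=attention_mask)
```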
inFamous_16 OP t1_jb5et2d wrote
Reply to comment by boosandy in [R] Variable size input to pre-trained BERT model by inFamous_16
Yeah, got it... thank you!