hellrail
hellrail t1_isjxlxf wrote
You need to find a method to turn these names into feature vectors, such that in feature space similar names cluster together naturally. Start with standard string similarities to build the feature vectors (a sketch of that first step follows); if that does not result in sufficiently unambiguous cluster formations, proceed to lemmatization methods, and if that is still not sufficient, try out some pretrained models to generate the feature encoding.
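A minimal sketch of the string-similarity step, assuming scikit-learn is available; the sample names, n-gram range, and distance threshold are illustrative choices, not tuned values:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

names = ["Jon Smith", "John Smith", "J. Smyth", "Maria Garcia", "M. Garcia"]

# Character n-grams turn each name into a feature vector in which
# similarly spelled names land close together in feature space.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3))
X = vectorizer.fit_transform(names).toarray()

# Cluster in feature space; similar names should group naturally.
clustering = AgglomerativeClustering(
    n_clusters=None, distance_threshold=0.7, metric="cosine", linkage="average"
)
labels = clustering.fit_predict(X)
print(dict(zip(names, labels)))
```

If the resulting clusters are still ambiguous, the same clustering step can be reused unchanged on top of the better feature encodings (lemmatized strings, pretrained embeddings).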
hellrail t1_isjl10j wrote
Reply to [P] I built densify, a data augmentation and visualization tool for point clouds by jsonathan
Well, first, the augmentation is totally correlated with the original points, so it adds absolutely no new information. Second, this approach enlarges the input size, when typically one wants the opposite.
Therefore I'd say artificially densifying point clouds for training purposes is nonsense (see the sketch below).
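A minimal sketch of why the densified points carry no new information, assuming the augmentation inserts triangle centroids (the exact scheme densify uses may differ): every added point is a deterministic function of the originals.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
points = rng.random((50, 2))  # original 2D point cloud

tri = Delaunay(points)
# Each added point is the center of mass of an existing triangle,
# i.e. fully determined by (totally correlated with) the original points.
centroids = points[tri.simplices].mean(axis=1)

densified = np.vstack([points, centroids])
print(points.shape, "->", densified.shape)  # input size grows, information does not
```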
hellrail t1_iqsvjt7 wrote
Reply to [D] - Why do Attention layers work so well? Don't weights in DNNs already tell the network how much weight/attention to give to a specific input? (High weight = lots of attention, low weight = little attention) by 029187
Transformers are graph networks applied to graph data: self-attention treats the tokens as nodes of a fully connected graph and aggregates over all pairwise edges. CNNs do not operate on graph data.
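A minimal NumPy sketch of that view (single head, random weights for illustration): each node gathers messages from every other node, weighted by learned pairwise affinities, which is message passing on a complete graph.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Each node (token) aggregates messages from all nodes, weighted
    by pairwise affinities -- message passing on a complete graph."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # pairwise edge weights
    e = np.exp(scores - scores.max(-1, keepdims=True))
    attn = e / e.sum(-1, keepdims=True)            # row-wise softmax
    return attn @ V                                # weighted neighbor aggregation

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                        # 5 nodes, 8 features each
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)         # (5, 8)
```

A CNN, by contrast, hard-codes which inputs interact (a fixed local grid neighborhood), whereas attention learns the edge weights per input.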
hellrail t1_iskgwjz wrote
Reply to comment by VaporSprite in [P] I built densify, a data augmentation and visualization tool for point clouds by jsonathan
No, why should it?
This densification can make it easier to reach a generalizing training state, but that state will probably perform worse than a well-generalized state trained without the augmentation, because the augmentation slightly changes the distribution to be learned: it artificially imposes that a portion of the points are the centers of mass of a triangulation of another portion of the points. That is not generally the case for incoming sensor data, so the modified distribution has little relevance to the real distribution one wants to learn.