Submitted by wellfriedbeans t3_10r6qn0 in MachineLearning
jimmymvp t1_j71cgkw wrote
Reply to comment by badabummbadabing in [D] Normalizing Flows in 2023? by wellfriedbeans
There is a trick for gradually expanding your latent dimension with normalising flows: if you assume the new dimensions are independent of the existing ones up to a certain point, you can sample them from a base distribution and concatenate them in the middle of the flow.
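A minimal sketch of that trick, using toy elementwise affine layers as stand-ins for real invertible flow layers (all names and shapes here are illustrative assumptions, not from any specific library):

```python
import numpy as np

rng = np.random.default_rng(0)

def affine_flow(x, scale, shift):
    # Toy invertible layer: elementwise affine transform.
    return x * scale + shift

def sample_with_expansion(n, d_small=2, d_extra=3):
    # Stage 1: run the first part of the flow on a small latent.
    z = rng.standard_normal((n, d_small))
    h = affine_flow(z, scale=2.0, shift=0.5)
    # Expansion: the extra dimensions are assumed independent at this
    # point, so fresh base samples are concatenated mid-flow.
    z_extra = rng.standard_normal((n, d_extra))
    h = np.concatenate([h, z_extra], axis=1)
    # Stage 2: the rest of the flow acts on the full dimension.
    return affine_flow(h, scale=1.5, shift=-1.0)

x = sample_with_expansion(4)
print(x.shape)  # (4, 5)
```

Because each stage is invertible and the concatenated dimensions are independent base samples, the log-density of the full output stays tractable: it is the sum of the base log-probabilities and the log-determinants of the two stages.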
Again, MCMC sampling and simulation-based inference are examples. Imagine you have an energy function that describes the distribution (but no data): how do you sample from it? You would run some MCMC. How would you arrive at a good proposal distribution to make the MCMC algorithm more efficient? You would fit the proposal to whatever limited data you have, or build in inductive biases such as certain invariances.
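A rough sketch of this workflow, with a Gaussian fitted to a few pilot points standing in for a learned flow proposal (the energy function, the pilot data, and all names are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(x):
    # Toy 1-D energy; the target density is p(x) proportional to exp(-E(x)).
    return 0.5 * x ** 2

# Limited pilot data from the target region; fit proposal moments to it.
pilot = np.array([-0.8, 0.1, 0.5, -0.3, 1.1])
mu, sigma = pilot.mean(), pilot.std() + 1e-3

def log_q(x):
    # Log-density of the fitted Gaussian proposal (up to a constant).
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)

def independence_mh(n_steps=2000):
    # Metropolis-Hastings with an independence proposal drawn from q.
    x = mu
    samples = []
    for _ in range(n_steps):
        x_new = rng.normal(mu, sigma)
        # Acceptance ratio combines the energy difference with the
        # proposal correction q(x) / q(x_new).
        log_alpha = (energy(x) - energy(x_new)) + (log_q(x) - log_q(x_new))
        if np.log(rng.uniform()) < log_alpha:
            x = x_new
        samples.append(x)
    return np.array(samples)

samples = independence_mh()
print(samples.mean())
```

A normalising flow would replace the Gaussian: it is still a tractable density you can both sample from and evaluate, which is exactly what the acceptance ratio requires, but it can match a much more complicated target than a fitted Gaussian can.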