Submitted by LordChips4 t3_101pvlg in MachineLearning
MediumOrder5478 t1_j2p3uau wrote
I would have the network regress the lens distortion parameters (the Brown-Conrady-style coefficients, like k1 to k6, p1, p2). You should be able to generate synthetic rendered training data.
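For concreteness, the k1 to k6, p1, p2 parameters named above match OpenCV's rational distortion model. Below is a minimal NumPy sketch of that forward mapping on normalized image coordinates; the function name and the standalone implementation are my own, not from the thread:

```python
import numpy as np

def brown_conrady_distort(x, y, k=(0.0,) * 6, p=(0.0, 0.0)):
    """Apply the rational Brown-Conrady distortion model to normalized
    image coordinates. k = (k1..k6) are radial terms, p = (p1, p2) are
    tangential terms. With all parameters zero this is the identity."""
    k1, k2, k3, k4, k5, k6 = k
    p1, p2 = p
    r2 = x * x + y * y
    radial = (1 + k1 * r2 + k2 * r2**2 + k3 * r2**3) / \
             (1 + k4 * r2 + k5 * r2**2 + k6 * r2**3)
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d
```

A network trained on synthetic pairs would regress k and p from the distorted image, after which the warp can be inverted numerically to undistort it.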
LordChips4 OP t1_j2pd2ry wrote
The distortion I'm aiming to correct is not from the lens but from a QR code posted on a cylindrical surface of unknown radius (e.g. a QR code on a lamppost). So (at least by my understanding) there are no lens parameters to regress? My input would be a distorted QR code image, and the output from the trained network would be the predicted QR code image without distortion. Am I wrong in my approach/way of thinking?
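The cylindrical case still has a parameter to regress: the radius. Synthetic training pairs can be rendered by warping flat QR images. A rough sketch, assuming an orthographic, fronto-parallel view of a pattern wrapped around a vertical cylinder (the function and the simplified geometry are my own assumptions, not from the thread):

```python
import numpy as np

def cylinder_warp(img, radius):
    """Simulate a flat pattern wrapped around a vertical cylinder of the
    given radius (in pixels), viewed orthographically. A flat column at
    arc length u from center projects to x = radius * sin(u / radius),
    so columns get compressed toward the image edges; rows are unchanged.
    Inverse warp with nearest-neighbor sampling."""
    h, w = img.shape[:2]
    cx = (w - 1) / 2.0
    out = np.zeros_like(img)
    xs = np.arange(w) - cx  # output column offsets from the center
    # arc length on the cylinder that projects to each output column
    src = radius * np.arcsin(np.clip(xs / radius, -1.0, 1.0)) + cx
    src_idx = np.round(src).astype(int)
    valid = (src_idx >= 0) & (src_idx < w) & (np.abs(xs) <= radius)
    out[:, valid] = img[:, src_idx[valid]]
    return out
```

With pairs like `(cylinder_warp(qr, r), qr)` for random radii, one could either train an image-to-image network directly, or regress `r` and invert the warp analytically, which is closer to the parameter-regression idea above.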
Pyrite_Pro t1_j2p4q30 wrote
That may be a better approach than just using GANs, given that OP uses these inferred parameters for “lossless” correction. GANs themselves may not reconstruct all image details faithfully.