LordChips4 OP t1_j2pd8zf wrote
Reply to comment by Realistic_Decision99 in [P] Using machine learning to correct geometrical distortion in images by LordChips4
The distortion I'm aiming to correct is not from the lens but from a QR code posted on a cylindrical surface (e.g. a QR code on a lamppost) with an unknown radius. So (at least to my understanding) there are no known distortion parameters to estimate. My input would be a distorted QR code image, and the output from the trained network would be the predicted QR code image without distortion. Am I wrong in my approach/way of thinking? I copy-pasted the response from above since I feel it fits here as well!
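One way to get training pairs for such a network is to synthesize the cylindrical distortion yourself: warp clean QR images with many random radii, and train on (warped, clean) pairs. Below is a minimal sketch of that warp, assuming an orthographic, head-on view of the cylinder; the function name and the pixels-as-radius parameterization are my own illustration, not anything from this thread:

```python
import numpy as np

def cylinder_warp(flat, radius_px):
    """Warp a flat image as if it were pasted on a cylinder of radius
    `radius_px` (in pixels) and viewed head-on by an orthographic camera.

    Arc length s along the cylinder surface projects to image-plane
    coordinate x = R * sin(s / R), so each output column x samples the
    flat image at s = R * asin(x / R) (nearest-neighbor, for brevity).
    """
    h, w = flat.shape[:2]
    half = w / 2.0
    # the flat image's arc length must fit on the visible half-cylinder
    assert half <= radius_px * (np.pi / 2), "image too wide for this radius"

    # projected half-width of the warped image on the image plane
    x_max = radius_px * np.sin(half / radius_px)
    out_w = int(np.ceil(2 * x_max))

    xs = np.arange(out_w) - out_w / 2.0 + 0.5            # output x coords
    s = radius_px * np.arcsin(np.clip(xs / radius_px, -1.0, 1.0))
    src_cols = np.clip(np.round(s + half).astype(int), 0, w - 1)
    return flat[:, src_cols]                              # sample source columns
```

A training set could then be built by drawing a random `radius_px` per sample and using `cylinder_warp(clean_qr, radius_px)` as the network input with `clean_qr` as the target; the compression toward the image edges grows as the radius shrinks, which is exactly the effect the network would have to learn to invert.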