
waitforthestopsign t1_ja5vh3d wrote

As I understand it, it's because you are restricting the variance in the angles of light rays that enter your eye and hit the retina.

The way your eyes (and cameras) work is by using three different components to form an image.

First, you have the retina (in a camera, the sensor), which is what captures the photons and creates the image. If you have just the sensor, light hits it from every direction and all you get is a blur of varying intensity. So to get a clear image you have to do two things: focus the light rays, and eliminate the light rays that interfere with the image.

Focussing is done by a lens, which both your eye and a camera have. It refracts the light rays so that they converge, ideally, exactly at the point where the sensor/retina is located, producing a sharp image. But this still leaves you with a problem: with just a sensor and a lens you can focus the light rays, but you also receive all of the light that is not being focussed. So the third component, in both a camera and your eye, is a small opening in front of the sensor that restricts the angles at which light can enter and hit it - in a camera that's the aperture, in your eye it's the iris (the opening itself is the pupil).

If you have ever used a camera and are slightly familiar with the settings, you may know that decreasing the aperture (increasing the f-stop) increases the depth of the zone that is in focus. By making the hole smaller you are cutting out some of the light rays, which means cutting out some of the variance in the angles at which light hits the sensor, which puts more of the image in focus (although this isn't the only factor that determines sharpness, of course). And that is also what you are doing with your hand in front of your eye: decreasing the aperture, reducing the variance in the angles of the light rays that enter, and therefore making up for what your lens may not be able to focus properly.

This is actually how a camera obscura works. It uses no lens at all, only a tiny opening relative to the "sensor", cutting out all light except the rays that enter at a very narrow range of angles, and so it produces a relatively sharp image, provided the sensor is the right distance from the opening.
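To put a number on that, here's a minimal thin-lens sketch in Python of the blur spot an out-of-focus point leaves on the sensor, and how it shrinks with the aperture; the focal length, distances and function names are illustrative assumptions, not anything from the explanation above.

```python
# A minimal thin-lens sketch of why a smaller aperture sharpens the image.
# The focal length, distances and function names are illustrative choices,
# not anything specified in the comment above.

def image_distance(f, s):
    """Thin-lens equation: where a point at object distance s comes to focus."""
    return f * s / (s - f)

def blur_circle(f, aperture_d, focus_dist, obj_dist):
    """Diameter of the blur spot on the sensor for a point at obj_dist, when
    a lens of focal length f and aperture diameter aperture_d is focused at
    focus_dist. All lengths in the same unit (mm here)."""
    v_sensor = image_distance(f, focus_dist)  # the sensor sits here
    v_point = image_distance(f, obj_dist)     # this point actually focuses here
    # Similar triangles: the converging cone is aperture_d wide at the lens and
    # shrinks to a point at v_point, so at the sensor plane its width is:
    return aperture_d * abs(v_sensor - v_point) / v_point

f = 50.0             # 50 mm lens
focus_dist = 2000.0  # focused at 2 m
for aperture in (25.0, 6.25):  # roughly f/2 vs f/8
    c = blur_circle(f, aperture, focus_dist, obj_dist=1000.0)
    print(f"aperture {aperture:5.2f} mm -> blur {c:.3f} mm for a point at 1 m")
```

The blur scales linearly with the hole size: stop down from f/2 to f/8 and the out-of-focus point's smear shrinks to a quarter of its diameter, which is the same thing squinting or peering through your fingers does for your eye.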

11

maddaneccles1 t1_ja7jn9j wrote

Just to add to this good explanation...

So if you focus on a specific point, objects closer than that point blur increasingly as they get closer to you, and similarly objects further away blur increasingly as their distance increases.

The range of distances over which you have acceptable focus is known as 'Depth of Field' (or DoF), and it's affected by two factors:

1. How far away you are focusing (the further away you focus, the larger the DoF).
2. The size of the aperture (e.g. your iris, or a gap in your fingers) through which you're looking (the smaller the aperture, the larger the DoF - this is for the reasons explained by u/waitforthestopsign).

The DoF is not symmetrical - objects closer to you than the focal point blur quickly (though we tend not to notice, because those objects are often in our peripheral vision), while objects further away than the focal point blur more gradually as distance increases.
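Both points (the aperture effect and the asymmetry) fall out of the usual hyperfocal-distance approximation. Here's a small Python sketch; the lens, f-numbers and 'acceptable blur' value are illustrative assumptions, not figures from this thread.

```python
# A back-of-the-envelope depth-of-field sketch using the common hyperfocal
# approximation. The lens, f-numbers and "acceptable blur" (circle of
# confusion) are illustrative assumptions, not values from the thread.

def dof_limits(f, N, s, c):
    """Near and far limits of acceptable focus for focal length f, f-number N,
    focus distance s and acceptable blur diameter c (all lengths in mm)."""
    H = f * f / (N * c)  # hyperfocal distance (approximate)
    near = H * s / (H + s)
    far = H * s / (H - s) if s < H else float("inf")
    return near, far

f, c, s = 50.0, 0.03, 3000.0   # 50 mm lens, 0.03 mm blur tolerance, focused at 3 m
for N in (2.0, 8.0):           # wide open vs stopped down
    near, far = dof_limits(f, N, s, c)
    print(f"f/{N:.0f}: sharp from {near/1000:.2f} m to {far/1000:.2f} m "
          f"({(s - near)/1000:.2f} m in front of the subject, {(far - s)/1000:.2f} m behind)")
```

With these example numbers, at f/8 the zone of acceptable focus reaches nearly twice as far behind the 3 m subject as in front of it, and the whole zone is roughly four times deeper than at f/2.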

A consequence of this is that in very bright light, when the iris contracts to limit the amount of light entering the eye, the DoF increases and it becomes much easier for the lens to focus - the lens doesn't need to be as accurate, so deficiencies in it (e.g. long-sightedness) become less noticeable. The effect is particularly pronounced for objects that are close to us, which is one reason why reading in good light can be so much easier than in poor light.

1

GorchestopherH t1_ja8zgjh wrote

Bingo. This is why a pinhole camera can focus light sources at any distance.

Tradeoff: they are dim.
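To show both halves of that trade-off, here's a rough Python sketch of a pinhole's blur spot and its light throughput; the hole size, sensor distance and the f/2.8 comparison are illustrative assumptions.

```python
# A rough sketch of the pinhole trade-off: the spot size barely depends on
# subject distance, but very little light gets through. Hole size, sensor
# distance and the f/2.8 comparison are illustrative assumptions.
import math

def pinhole_blur(d, L, s, wavelength=0.00055):
    """Approximate blur-spot diameter for a pinhole of diameter d, sensor
    distance L and subject distance s (all in mm, wavelength ~550 nm)."""
    geometric = d * (1 + L / s)                # shadow of the hole itself
    diffraction = 2.44 * wavelength * L / d    # Airy-disk diameter estimate
    return math.hypot(geometric, diffraction)  # rough combination of the two

d, L = 0.3, 50.0                 # 0.3 mm hole, 50 mm from hole to sensor
for s in (500.0, 5000.0, 5e6):   # subject at 0.5 m, 5 m, 5 km
    print(f"subject at {s/1000:9.1f} m -> blur {pinhole_blur(d, L, s):.3f} mm")

# Brightness: this pinhole works at N = L/d ~ 167, so it passes only
# (2.8/167)^2 of the light an f/2.8 lens would.
print(f"relative brightness vs f/2.8: {(2.8 / (L / d))**2:.5f}")
```

The spot stays around 0.3-0.4 mm whether the subject is half a metre or kilometres away, but the hole passes only a few ten-thousandths of the light an ordinary lens would, hence the long exposures.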

1