
aggasalk t1_je0ov30 wrote

when you get to cortex, spatial tuning is rather precise, and binocular neurons are generally tuned for the same retinal position (this suggests another question of "what is retinal position anyway?" but I don't think that's actually too problematic). I'm sure if you looked at a large number of such neurons, you'd find that (like everything else) it's actually a random distribution, albeit very narrowly distributed.

the precision of this common input is the real basis of retinal correspondence (apart from the matter of the parenthetical question above). the more precise it is (the narrower that distribution), the more informative differences in input can be, and so the better for stereopsis.
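Here's a toy sketch of that point, assuming the left-right receptive-field offsets of binocular neurons follow a narrow Gaussian (the jitter values are made-up numbers for illustration): the spread of that distribution sets a floor below which a real disparity can't be told apart from mere wiring imprecision.

```python
import random
import statistics

random.seed(0)

def rf_offsets(jitter_deg, n=10_000):
    # Left-minus-right receptive-field center offsets (in degrees) for n
    # binocular neurons, drawn around perfect correspondence (offset 0).
    return [random.gauss(0.0, jitter_deg) for _ in range(n)]

precise = rf_offsets(jitter_deg=0.01)  # narrow distribution (assumed value)
sloppy = rf_offsets(jitter_deg=0.5)    # broad distribution (assumed value)

# The spread is the noise floor for disparity: a true disparity smaller
# than this can't be distinguished from imprecision of the wiring itself.
floor_precise = statistics.stdev(precise)  # ~0.01 deg
floor_sloppy = statistics.stdev(sloppy)    # ~0.5 deg
```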


ch1214ch OP t1_je1uve2 wrote

So would it be right to say that a binocular neuron--including one that allows for stereopsis--receives input from the same retinal position of each eye? (As opposed to it receiving input from different retinal positions.) Is this right? Because I was wondering if input from different retinal positions to the same neuron allowed for depth perception, or if it was different input from the same position.


aggasalk t1_je2nbh0 wrote

It's ok to say it, but I think "same" might give the wrong sense, since it's not necessarily clear what "same" means here.

Correspondence is really the clearest concept - two retinal locations correspond in that they both respond to the same point in physical space, given certain optical & mechanical conditions. Those conditions are that the physical point is at the same distance as the vergence distance of the two eyes (in other words, where they are both 'pointing', taking the axis of an eye to be a line between the center of the pupil and the foveola of the retina).

Under those conditions, a point in physical space will be imaged on precisely corresponding positions in the two retinas, and then I suppose it's fine to think of those as "the same positions".
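A quick geometric sketch of those conditions (the eye spacing, distances, and coordinates are all illustrative assumptions, not anatomy): a point at the fixation distance lands at nearly equal angles on the two retinas, while a nearer point does not.

```python
import math

IPD = 6.4  # interpupillary distance in cm (assumed typical value)
FIXATION = (0.0, 57.0)  # both eyes verged on a point 57 cm straight ahead

def retinal_angle(eye_x, point):
    # Angle of `point` relative to this eye's visual axis, taking the
    # axis to be the line from the eye through the fixation point.
    def direction(p):
        return math.atan2(p[0] - eye_x, p[1])
    return direction(point) - direction(FIXATION)

def disparity(point):
    return retinal_angle(-IPD / 2, point) - retinal_angle(+IPD / 2, point)

# A point at the fixation distance: (nearly) corresponding positions.
at_fix_depth = disparity((1.0, 57.0))   # ~0 rad
# A point nearer than fixation: clearly non-corresponding positions.
nearer = disparity((1.0, 40.0))         # ~0.05 rad
```

(The locus of exactly-zero disparity is actually a circle through the fixation point and the two eyes' optical centers, which is why "at the fixation distance" only comes out approximately zero here.)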

You get the finest depth information, about the smallest differences in depth, from slightly different inputs at the "same", i.e. precisely corresponding, positions. The coarser the spatial grain (i.e. the more spread out in space it is), the larger the depth it can signal. So coarser depth signals will be transmitted by neurons with larger receptive fields, and potentially also by neurons with looser or less precise binocular correspondence. But I think the general rule will be that binocular neurons are for corresponding positions, and lack of precision amounts to noise, not a special source of information in itself.
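A minimal sketch of that fine-vs-coarse tradeoff, assuming Gaussian receptive fields (the model and the sigma values are my assumptions): the overlap of two unit Gaussians offset by a disparity d falls off as exp(-d²/(4σ²)), so a small-σ neuron's response changes steeply for tiny disparities but collapses for large ones, while a large-σ neuron responds over a much bigger disparity range.

```python
import math

def binocular_response(disparity, rf_sigma):
    # Toy binocular neuron: normalized overlap of two Gaussian monocular
    # receptive fields (width rf_sigma) when the images they see are
    # offset by `disparity`.
    return math.exp(-disparity**2 / (4 * rf_sigma**2))

# Small receptive field: very sensitive to a tiny disparity...
fine_small = binocular_response(0.1, rf_sigma=0.1)    # ~0.78, already dropping
# ...but blind to a large one (response has collapsed to ~0):
fine_large = binocular_response(1.0, rf_sigma=0.1)

# Large receptive field: barely notices the tiny disparity...
coarse_small = binocular_response(0.1, rf_sigma=1.0)  # ~1.0
# ...but still responds at a disparity the small RF can't signal:
coarse_large = binocular_response(1.0, rf_sigma=1.0)  # ~0.78
```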


ch1214ch OP t1_je8vds0 wrote

Okay, let's say the left and right retinas are like two laptop screens with all the pixels numbered/labeled the same. Does a corresponding position fall on the same numbered/labeled pixel for each eye, or would the correspondence fall on different numbered pixels?

Does that make sense? Like, if the retinas were like Battleship, would the corresponding position be the same (b4 and b4) or would they be different (like b4 for the left eye and c4 for the right eye)?

I want to know if they correspond in the sense that they are b4 and b4, or whether they correspond to the same point in physical space but are in fact different "pixels", like b4 and e6.


aggasalk t1_jea3rqy wrote

The same, I guess? When it comes down to it, binocular correspondence is as precise as the locations of photoreceptors in the retina; at least that's true for central (foveal) vision, and it might be less precise in the periphery.

But... when it comes to binocular correspondence, the correspondence isn't really between receptors or pixels ("points") in the retina - starting with the optic nerve, visual neurons have "receptive fields" that cover a fuzzy (but still clearly localized) region of the retina. So correspondence isn't technically between points but between areas.

But those areas are at many scales, and I tell you it gets really complicated really fast when you look at it closely: pick a point in the binocular visual field (like, look at a single pixel on your screen). This point, if small enough, might fall on a single photoreceptor in each eye - photoreceptors at "corresponding positions". But the correspondence is being encoded, in the brain, by many many many neurons with receptive fields of different sizes, all of which overlap that point.

I guess this can suggest to you how to think about binocular correspondence. There is a tiny point of light shining out in space, and you look at it. Certain monocular neurons (in each eye, and downstream from there all the way to primary visual cortex) are excited by this point of light. Starting in primary visual cortex (and especially after that) there will be binocular neurons that are excited by that point, and that would be excited by it even if one eye were closed (meaning, they "want" a specific point in space, regardless of which eye it came from). That is, those binocular neurons are encoding the same point in space, and this is the basis of binocular correspondence.

If you move the point of light over so that it excites a different set of receptors, then the downstream activity will also shift, and some different neurons will be excited. But there will be overlap: some binocular neurons will be excited by both positions (they have "large receptive fields"), while others will be more selective, excited only by one position or the other. So not only is binocular correspondence encoded, it is multiscale - there is correspondence between regions of many sizes.
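That multiscale overlap can be sketched with a toy population (the positions and radii are made-up numbers): shifting the point swaps which small-receptive-field neurons fire, while a large-receptive-field neuron keeps firing for both positions.

```python
def excited(neurons, point):
    # Which neurons respond to a point of light? Each neuron is
    # (name, receptive-field center, receptive-field radius).
    return {name for name, center, radius in neurons
            if abs(point - center) <= radius}

population = [
    ("small@0.0", 0.0, 0.1),   # fine-grained neurons
    ("small@1.0", 1.0, 0.1),
    ("large@0.5", 0.5, 2.0),   # coarse neuron covering both spots
]

at_first = excited(population, 0.0)   # {"small@0.0", "large@0.5"}
at_second = excited(population, 1.0)  # {"small@1.0", "large@0.5"}
# The overlap is exactly the coarse neuron: correspondence at many scales.
```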
