Direction selectivity requires nonlinearity

Thanks to Damon Clark at Yale and Jacob A. Zavatone-Veth at Harvard for pointing out the following to me.

I had always thought that you could get a direction-selective neuron with a linear filter that is spatiotemporally inseparable, so that it is “tilted” in spacetime, with the gradient defining a speed and direction. I assumed you would get a bigger response for a stimulus moving at the speed and in the direction matching the filter than for one moving in the opposite direction. I knew that models like the motion energy model place a nonlinearity after the linear filtering, but I didn’t think this was necessary for direction tuning when the filter is already tilted in this way.

Well… yes it is (although it does slightly depend on what you mean by direction-selective). The video below shows a Gaussian-blob stimulus passing over a receptive field, first left to right and then right to left.

The leftmost panel shows the tilted spatiotemporal linear filter representing the receptive field. How to read this function: it responds weakly to the present stimulus (tau=0), responding most to stimulation at x=-50. It responds more strongly to stimuli as we go back into the past, peaking for stimuli presented tau=40 time-units ago at x=-11. Going further back into the past, its response decays away again: for stimuli presented 70 time-units ago it responds only weakly, most strongly to those that were at x=10. This panel doesn’t change, because the filter’s shape is a permanent feature (it’s time-invariant).
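To make “tilted” concrete: in the code below, the filter is (roughly, glossing over the wrap-around of circshift) a spatial Gaussian whose centre x_c drifts linearly as we look further back in time, multiplied by a causal temporal envelope:

$$ f(x,\tau) = \exp\!\left[-\left(\frac{x - x_c(\tau)}{20}\right)^2\right] \exp\!\left[-\left(\frac{\tau-40}{20}\right)^2\right], \qquad x_c(\tau)\ \text{linear in}\ \tau. $$

The linear drift of x_c is what tilts the filter in spacetime, and the slope dx_c/dτ sets the speed it is tuned to.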

The middle panel shows the stimulus. The bottom row (tau=0) shows where the stimulus is now; the rows above show where it was at times progressively further into the past. At any given moment of time, the stimulus is a Gaussian function of position. The contour lines show the filter for comparison.

The right-hand panel shows the response of the linear filter, which is the inner product of the filter and the stimulus at every moment in time.
The red curve shows the response when the stimulus moves rightward; the blue curve shows the response when the stimulus moves leftward.
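In symbols (just restating what the video computes), the linear response at time t is

$$ r(t) = \int\!\!\int f(x,\tau)\, s(x,\,t-\tau)\; dx\, d\tau, $$

where f is the filter in the left panel and s(x, t−τ) is the stimulus history in the middle panel.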

Here is the response as a function of time for both directions of motion. Notice that although the response to the leftward stimulus peaks at a much higher value, the total response is the same for both directions of motion. I had never realised this was the case until Damon pointed it out to me, and I found it hard to believe at first. As the video makes clear, though, it’s just because in both cases the stimulus sweeps out the same volume under the filter.
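Here’s a one-line version of that volume argument, assuming the stimulus is a blob g(x − vt − x0) that drifts at speed v all the way across the field. Swapping the order of integration,

$$ \int r(t)\,dt = \int\!\!\int f(x,\tau)\left[\int g\big(x - v(t-\tau) - x_0\big)\,dt\right] dx\,d\tau = \frac{1}{|v|}\int g(u)\,du \;\times\; \int\!\!\int f(x,\tau)\,dx\,d\tau. $$

The total depends only on the stimulus “area”, the filter “volume” and the speed |v|, not on the sign of v: reversing the direction changes when the overlap happens, not how much of it there is.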

So can you describe this linear filter as direction-selective? It certainly gives a different response to the same stimulus moving rightward vs leftward, so to that extent I’d argue you can describe it as such. But since the total response is the same for both directions, it’s hard to argue it has a preferred direction. And it’s certainly true that to use it in any meaningful way, you’d want to apply a nonlinearity, whether squaring or a threshold or whatever. For example, if you wanted to use this “leftward filter” to drive a robot to turn its head to follow a leftward-moving object, you’d be in trouble if you just turned the head leftward by an angle corresponding to the output of this filter. Sure, the robot would turn its head left by so many degrees as an object passed left in front of it, but it would also turn its head left by the exact same angle if an object passed rightward! So in that sense, this filter is not direction-selective, and a nonlinearity is required to make it so.
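To see the nonlinearity do its job, here is a minimal sketch (separate from the video code below; the function name and parameters are my own, chosen to approximate the filter above) that computes the linear response for both directions and compares the totals before and after squaring:

function JDirNonlinearityDemo
% Minimal sketch: a tilted linear filter gives (near-)equal total
% responses for rightward and leftward motion, but squaring the output
% breaks this symmetry, giving a genuine preferred direction.
xp  = -100:100;                       % space
tau = 1:100;                          % time before present
[X,T] = meshgrid(xp,tau);
f = exp(-((X-(T-50))/20).^2) .* exp(-((T-40)/20).^2);   % tilted filter

time = 0:300;
r = zeros(2,numel(time));
for jdir = 1:2
    v  = 3 - 2*jdir;                  % +1 = rightward, -1 = leftward
    x0 = -100*v;                      % start at the far edge
    for jt = 1:numel(time)
        % stimulus history: Gaussian blob at x0 + v*t, seen at lag tau
        s = exp(-(X - x0 - v*(time(jt)-T)).^2/(2*10^2));
        r(jdir,jt) = sum(sum(f.*s));  % linear response
    end
end
fprintf('Total linear response:  right %8.1f  left %8.1f\n', ...
    trapz(time,r(1,:)), trapz(time,r(2,:)));
fprintf('Total squared response: right %8.1f  left %8.1f\n', ...
    trapz(time,r(1,:).^2), trapz(time,r(2,:).^2));
end

The linear totals come out essentially identical (up to edge effects), while the squared totals differ, with leftward winning for this leftward-preferring filter.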

Many thanks Damon and Jacob for taking the time to explain this to me!


Fitzgerald & Clark (2015). Nonlinear circuits for naturalistic visual motion estimation. eLife 4:e09123.


The (slightly crummy) Matlab code I wrote to generate this video is below:

function JDirTest
% Demo: a tilted (spatiotemporally inseparable) linear filter gives
% different time-courses, but the same total response, for rightward
% and leftward motion.
close all;

% This makes the tilted RF
xp = -100:100;                   % spatial positions
X = exp(-(xp./20).^2);           % spatial Gaussian profile
figure(1000)
plot(xp,X);
nt = 100;                        % number of time samples
tau = 1:nt;
tG = exp(-((tau-40)./20).^2);    % temporal envelope, shifted so as to be causal
RFim = zeros(nt,length(xp));
for j = 1:nt
    % Shift the spatial Gaussian by an amount that grows with tau, then
    % weight by the temporal envelope: this is what tilts the filter.
    RFim(j,:) = circshift(X, j-round(nt/2)) .* tG(j);
end
figure
imagesc(xp,tau,RFim);
xlabel('position x')
ylabel('time \tau (seconds before present)')
set(gca,'ydir','norm')
title('Linear spatiotemporal filter f(x,\tau)')

% Now run a nice simulation
figure('pos',[20 374 1495 420])
time = 0:300;
response = zeros(size(time));
for jdirection = 1:2
    if jdirection==1
        x0 = -100;
        speed = +1;
        colspeed = 'r';
        label = 'rightward';
    else
        subplot(1,3,3)
        hold on              % keep the rightward curve on screen
        x0 = +100;
        speed = -1;
        colspeed = 'b';
        label = 'leftward';
    end

    subplot(1,3,1)
    imagesc(xp,tau,RFim);
    set(gca,'ydir','norm')
    xlabel('position x')
    ylabel('relative time \tau (seconds before present)')

    for jt = 1:length(time)
        timenow = time(jt);
        subplot(1,3,1)
        title(sprintf('Current time is t=%3.0f',timenow))
        subplot(1,3,2)
        s = stimulus(xp,tau,timenow);
        hold off
        imagesc(xp,tau,s);
        hold on
        contour(xp,tau,RFim);   % filter contours for comparison
        set(gca,'ydir','norm')
        xlabel('position x')
        ylabel('relative time \tau (seconds before present)')
        title('Stimulus s(x,t-\tau)')
        drawnow

        % Do inner product of current stimulus with filter:
        response(jt) = sum(sum( s.*RFim ));
        % Plot it
        if jt>1
            subplot(1,3,3)
            h(jdirection) = plot(time(1:jt),response(1:jt),'-','Color',colspeed);
            xlim([0 max(time)])
            ylim([0 1000])
            lab{jdirection} = sprintf('%s, total = %3.0f',label,trapz(time(1:jt),response(1:jt)));
            legend(h,lab)
            title('Response')
            xlabel('time (s)')
        end
    end

end % do other direction

    % Stimulus is a Gaussian blob centred on x = x0 + speed*(time-tau):
    % it was at x0 at t=0 and drifts at the given speed.
    function s = stimulus(xp,tau,time)
        % returns s(x,t-tau); x0 and speed are shared with the parent function
        [xp2,tau2] = meshgrid(xp,tau);
        s = exp(-(xp2 - x0 - speed*(time - tau2)).^2/(2*10^2));
    end

end

Da Vinci Stereopsis

I have just been asked for “a succinct explanation of da Vinci stereopsis”. I googled in the hope of finding one, but couldn’t, so thought I’d put one up here.

Leonardo da Vinci didn’t quite realise that stereoscopic depth perception was a thing, but he did explain in his “Treatise on Painting” that a given object occludes different parts of the background when viewed from the left eye as compared to the right eye. “Da Vinci Stereopsis” now refers to depth perception based on the occlusion geometry in the two eyes. The term was introduced by Nakayama and Shimojo in a 1990 paper.

Consider the left-hand figure below. Both eyes see a large black rectangular object, but the right eye also sees a black bar to its right. Most observers, seeing these images, experience a weak sense that the bar is further away than the rectangle. This is because of the geometry shown in the figure. The left eye doesn’t see the bar because it’s hidden from view (“occluded”) by the nearer black rectangle.


Conversely, in the right-hand figure, the bar is only visible in the left eye, again to the right of the rectangle. Now, most people will report a weak sense that the bar is closer than the rectangle. This is because these retinal images could be accounted for by the scene shown at the top of the figure: the bar is technically seen by both eyes, but in the right eye it appears on top of the rectangle. Both objects are black, and so the bar is invisible in the right eye.

Many vision scientists think that da Vinci stereopsis is a separate form of stereo vision that is not based on disparity (the separation between the images of the same object as seen in the left and right eyes). The argument is that because the bar is only visible in one eye’s image, a disparity cannot be defined.

Vision Sciences Society meeting 2017

Sid, Maydel, Chris and I had an excellent time at the VSS meeting last month. Thanks to Ignacio for this snap of my talk on “When invisible noise obscures the signal: the consequences of nonlinearity in motion detection.” Sid gave a great talk on his work “Modeling response variability in disparity-selective cells.”

3D glasses with glasses

I was giving a talk recently about my work on viewer experience with stereoscopic 3D television, and an audience member asked a good question, which was: Was there any relationship between people complaining of adverse effects and whether they routinely wore prescription spectacles? Such people are wearing two pairs of glasses to view S3D, which might be more uncomfortable, but equally they are already used to wearing glasses so might be less bothered than your average person who is wearing glasses only to view 3D.

We didn’t put anything about that in the papers, but I dug out the data and had a look. I haven’t done the stats, but it seems pretty clear there’s no effect of glasses. First, here is Fig 7 from Read & Bohr 2014:

[Figure 7 from Read & Bohr 2014]

And here is a version split up by whether or not participants usually wore glasses (in each pair of bars, the left-hand bar is for people who wore contacts or no correction, and the right-hand bar is for people who wore glasses).
[Figure: the same data split by habitual glasses wear]
In the graph, it looks as if there’s a striking difference in the “fake 3D passive” case, but really that’s down to the small number of participants: 1 out of 17 people without glasses reported adverse effects, compared with 3 out of 15 people with glasses. So if just one person changed their answer, it would look much less impressive. Since the effect isn’t seen in the other groups, I think it’s probably just a blip.

Averaging over all participants who wore 3D glasses (ie excluding only those in the true 2D group), the numbers are as follows:
n total = 311
n reporting adverse effects = 64 (21%)
n who habitually wear glasses = 117 (38%)
of whom n reporting adverse effects = 21 (18%)
n who do not habitually wear glasses = 194 (62%)
of whom n reporting adverse effects = 43 (22%)
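
If you wanted to do the stats, here is a hedged sketch of how one might test the association in Matlab (fishertest needs the Statistics and Machine Learning Toolbox; the counts come straight from the numbers above):

% 2x2 contingency table: rows = habitual glasses wearers / non-wearers,
% columns = adverse effects reported / not reported.
x = [21 117-21;    % glasses wearers
     43 194-43];   % non-wearers
[h,p] = fishertest(x);   % h==0 means no significant association at 5%
fprintf('Fisher''s exact test: p = %.2f\n', p);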

User experience while viewing stereoscopic 3D television

Read JCA, Bohr I (2014). User experience while viewing stereoscopic 3D television. Ergonomics 57(8): 1140–53. [view on journal website] PubMed ID: 24874550


Journal press release "Good news for couch potatoes".

Why don’t we see the world upside down?

This question comes up occasionally, and I was recently asked a similar question by email, so I thought it would be a good idea to do a blog post that everyone can see, although there’s already a great article on this here: http://mentalfloss.com/uk/biology/30542/your-eyes-see-everything-upside-down

First off, the image of the world projected onto our retina is upside down. This is just a consequence of geometry. This image from the Wikipedia article on pinhole cameras shows this nicely:

Our eye is more sophisticated than a pinhole camera — it has a lens so it can collect light over the whole of our pupil and bring it to a focus on our retina — but that isn’t important here. The retinal image is still upside-down. So why don’t we see the world upside-down?
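In case it helps, the inversion is just similar triangles: treating the eye as a pinhole camera with the aperture at the origin and the retina a distance f behind it, a point at height y and distance z in front of the eye projects to retinal height

$$ y' = -\frac{f}{z}\, y. $$

The minus sign is the inversion: up in the world maps to down on the retina, and left maps to right.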

One way of answering that is to point out that our eyes don’t, actually, “see” anything at all. Seeing happens in the brain. All your brain needs to know is the relationship between which photoreceptors are receiving the light, and where the object is in the world. We’ve learnt that if we want to touch an object whose image appears at the bottom of our eye, we usually have to raise our hands up (in the direction of our shoulders) while extending them, not move them down (towards our feet). So long as we know the correct mapping, it doesn’t actually matter where on the eye the information is.