Demo videos from our mantis 3D glasses paper

We uploaded 6 nice demo videos as Supplementary Material for our Scientific Reports paper. Unfortunately the links are currently broken (I have emailed) and in any case they are provided in a slightly clunky way where you have to download them. So I thought I would do a blog post explaining the videos here.

Here is a video of a mantis shown a 2D “bug” stimulus (zero disparity). A black disk spirals in towards the centre of the screen. Because the disk is black, it is visible as a dark disk in both eyes, i.e. it’s an ordinary 2D stimulus. The mantis therefore sees it, correctly, in the screen plane, 10cm in front of the insect. The mantis knows its fore-arms can’t reach that far, so it doesn’t bother to strike.


Next, here’s a video of a mantis shown the same bug stimulus in 3D. Now the disk is shown in blue for the left eye and green for the right (bearing in mind that the mantis is upside down). Because the mantis’s left eye is covered by a green filter, the green disk is invisible to it – it’s just bright on a bright background, i.e. effectively not there, whereas the blue disk appears dark on a bright background.
This is a “crossed” geometry, i.e. the lines of sight from each disk to the eye that can see it cross over in front of the screen, at a distance of about 2.5cm in front of the insect. This is well within the mantis’s catch range, so the insect strikes out trying to catch the bug. You can sometimes see children doing the same thing when they see 3D TV for the first time!
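(For readers who want the numbers: the depth of the crossing point follows from similar triangles. Here is a minimal Python sketch; the interocular separation and the on-screen offset are assumed illustration values chosen to reproduce the 2.5cm figure, not measurements from the paper.)

EYE_SEPARATION_CM = 0.7    # assumed mantis interocular distance (illustration only)
SCREEN_DISTANCE_CM = 10.0  # screen distance, as stated above

def crossed_depth(on_screen_offset_cm):
    # For a "crossed" stimulus the left eye's image is displaced to the
    # right and vice versa. By similar triangles the two lines of sight
    # meet at d = D * I / (I + s), where D is the screen distance, I the
    # eye separation and s the on-screen separation of the two images.
    s = on_screen_offset_cm
    return SCREEN_DISTANCE_CM * EYE_SEPARATION_CM / (EYE_SEPARATION_CM + s)

print(crossed_depth(2.1))  # -> 2.5 (cm in front of the insect)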


Here’s a slo-mo version recorded with our high-speed camera. Unfortunately the quality has taken a big hit, but at least you get to see the details of the strike…


Sceptical minds (the best kind) might wonder if this is the correct explanation. What if the filters don’t work properly and there’s lots of crosstalk? Then, the mantis is seeing a single dark disk in our “2D” condition and two dimmer disks in our “3D” condition. Maybe the two disks are the reason it strikes, nothing to do with the 3D. Or maybe there’s some other artefact. As a control, we swapped the green and blue disks over, effectively swapping the left and right eye’s images. Now the lines of sight don’t intersect at all, i.e. this image is not consistent with a single object anywhere in space. Sure enough, the mantis doesn’t strike. Obviously, in different insects we put the blue/green glasses on different eyes, so we could be sure the difference really was due to the binocular geometry, not the colours or similar confounds.
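The same similar-triangles argument shows why the swapped stimulus matches no single object. A minimal sketch, using the same assumed numbers as above:

EYE_SEPARATION_CM = 0.7    # assumed, as above
SCREEN_DISTANCE_CM = 10.0

def uncrossed_depth(on_screen_offset_cm):
    # With the images swapped, the lines of sight diverge in front of the
    # screen; they could only converge behind it, at d = D * I / (I - s).
    # If the on-screen separation s exceeds the eye separation I, the
    # lines never meet: the stimulus is consistent with no object
    # anywhere in space.
    s = on_screen_offset_cm
    if s >= EYE_SEPARATION_CM:
        return None  # no intersection at any distance
    return SCREEN_DISTANCE_CM * EYE_SEPARATION_CM / (EYE_SEPARATION_CM - s)

print(uncrossed_depth(2.1))  # -> None: no consistent depth, and no strike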


The figure in our paper illustrates this geometry and also shows the results.

Crosstalk with insect 3D glasses

Our first paper on praying mantis 3D vision has just come out in Scientific Reports: “Insect stereopsis demonstrated using a 3D insect cinema”, by Nityananda, Tarawneh, Rosner, Nicolas, Crichton and Read. There’s a press release here.

One issue we discuss in the paper is the problem of crosstalk (https://www.yahoo.com/tech/scientists-gave-praying-mantises-tiny-142122820.html). This was why we ended up using our anaglyph (green/blue) 3D glasses after having initially explored circularly-polarising glasses.
Here are two videos we prepared to illustrate the crosstalk experienced with the two systems.

The first video shows how bad the crosstalk is with a patterned-retarder display, which separates the left and right images by circular polarisation. Crosstalk in these displays is strongly viewing-angle dependent, and we think that because the mantises are so close to the screen, they sometimes see the target from very oblique angles, which is why they experience so much crosstalk.


In contrast, anaglyph glasses, which separate the images by wavelength, show very low crosstalk regardless of viewing angle. We think this is why they work so much better, as we demonstrate in the paper.
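For concreteness, crosstalk is usually quantified as the luminance leaking through to the wrong eye, expressed as a percentage of the intended image’s luminance. A minimal sketch (the luminance readings below are made up for illustration, not our measurements):

def crosstalk_percent(leakage_lum, signal_lum, black_lum=0.0):
    # Standard definition: leakage luminance as a percentage of the
    # intended image's luminance, each corrected for the display's
    # black level.
    return 100.0 * (leakage_lum - black_lum) / (signal_lum - black_lum)

# Hypothetical readings (cd/m^2): a patterned-retarder display viewed
# very obliquely might leak heavily, while spectral (anaglyph) filters
# leak little at any angle.
print(crosstalk_percent(leakage_lum=30.0, signal_lum=100.0))  # -> 30.0
print(crosstalk_percent(leakage_lum=2.0, signal_lum=100.0))   # -> 2.0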


Fire and Light Yule Festival – with added science


Re-enactor dressed as a mediaeval monk, wearing anaglyph 3D glasses
Over the weekend, Kathleen and I, along with Stacey from the Institute of Neuroscience, Gordon Love from Durham University and colleagues from Northumbria University, helped deliver some science activities for the “Fire and Light Yule Festival” at North Shields’ Old Low Light. This isn’t, as I first assumed, a lighthouse, but a leading light: pilots would guide vessels safely into North Shields harbour by steering so as to keep the lamps on the High Light and the Low Light vertically aligned with one another. This kept them on a course which avoided grounding either on the mudflats or on the treacherous Black Midden rocks. So it’s a great example of an industrial application of light technology, and it tied in well with the presentation by Prof Fary Ghassemlooy of Northumbria University on cutting-edge “li-fi” communication.

Stacey, Kathleen and I talked about how we see and perceive the world through light, with the help of some visual illusions and our ASTEROID 3D vision test. It was a great event and I’d like to do it again next year.

Percentage of variance explained

I was just looking for a good explanation of this online to point a lab member towards, and I couldn’t find anything suitable, so I thought I’d write something myself. (Since then, I’ve found that this Wikipedia article seems pretty good: https://en.wikipedia.org/wiki/Fraction_of_variance_unexplained.)

The idea is you have a set of experimental data {Yi} (i=1 to N). These might be responses collected in N different conditions, for example.
The mean is

M = sum{Yi} / N

and the variance about this mean is the total variance

TV = var(Yi) = sum{ (Yi-M)^2 } / N

Now suppose you have some model or fit which predicts values Fi. The residuals between the fit and the data are Ri=(Yi-Fi). The mean of the squared residuals is

RV = sum { (Yi - Fi )^2 } / N

(This is identical to the variance of the residuals if Fi and Yi have the same mean, as they do in linear regression.)
The fraction of UNexplained variance is RV/TV, so the fraction of explained variance is 1-RV/TV.

In a perfect world, Fi would equal Yi for all i, so the residuals would all be zero: RV=0, and all of the variance is explained. The other end of the spectrum is where all the fit values are the same, just equal to the mean of the data: Fi=M. Then we can see from the equations above that RV=TV and none of the variance is explained.

The percentage of variance which is explained is

PV = 100(1-RV/TV).
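In code (a minimal NumPy sketch; the function name is mine, not from any particular package):

import numpy as np

def percent_variance_explained(y, f):
    # PV = 100 * (1 - RV/TV): TV is the variance of the data about its
    # mean, RV the mean squared residual between data and fit.
    y, f = np.asarray(y, float), np.asarray(f, float)
    tv = np.mean((y - y.mean()) ** 2)  # total variance
    rv = np.mean((y - f) ** 2)         # residual variance
    return 100.0 * (1.0 - rv / tv)

y = np.array([1.0, 2.0, 4.0, 7.0])
print(percent_variance_explained(y, y))                          # perfect fit -> 100.0
print(percent_variance_explained(y, np.full_like(y, y.mean())))  # mean-only fit -> 0.0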

Which Windows/Matlab version with Psychophysics Toolbox?

My colleague Nicholas Port was just asking me about my experience with different versions of Windows & Matlab running Psychophysics Toolbox. I asked my team about their experience, and I thought it might be useful to record their comments here:

Paul: “I used Windows 7 and Matlab 2012b for psychophysical experiments before, and I didn’t have any issues. However, when I was using an EyeLink 1000 (external eye tracker) and timing was important, for some reason the functionality only worked on 64-bit, so it is certainly worth considering which version you should use. Windows 7 and 2012a are more than enough.”

Zoltan: “I used Windows 8.1 with Matlab R2014a. I had some issues with timing because the desktop mode is handled differently.
If you do production-quality work where timing is important, go with Windows 7 for the time being. I never had any issues with 64-bit Matlab. However, there is a software limitation with Windows 7: the refresh rate you can set on any monitor is capped at 85 Hz. You will need to do some hacking in the registry if you have a Radeon card, whereas with anything nVidia you can conveniently create a new custom resolution.
So, currently, the most stable setup is a Windows 7 PC, 64-bit Matlab, and an nVidia GPU. If you need more performance, switch off the swap file and visual effects, and have plenty of RAM.”

Ghaith: “I’ve been using PTB on Windows 7 using both 32/64-bit versions of Matlab with no problems.”

Partow: “I had some problems downloading Psychtoolbox with 32-bit Matlab on Windows 7. 64-bit is fine.”

VSS2015

Back from the Vision Sciences Society meeting; really enjoyed it as ever. The Readlab team had a productive and fun time.

Sid gave a talk about his physiological recordings of V1 neurons in Bruce Cumming’s lab. Zoltan presented a poster about his work using temporal-frequency tagging with disparity in EEG, and Paul gave a poster about how disparity cues influence our perception of size. I gave a poster about ongoing attempts in the lab to decide whether praying mantis vision has distinct spatial-frequency channels, a problem which has been engaging lab members Ghaith, Vivek and Lisa, as well as long-term collaborator Ignacio Serrano-Pedraza. Ignacio gave a talk about how patching one eye for as little as two hours can result in significant changes in surround suppression, work done in collaboration with Holly Bridge down in Oxford.

Pictured: Sid Henriksen and Zoltan Derzsi.


Wise words from Lord Rayleigh

I’ve just come across the following by Lord Rayleigh, him of the scattering, quoted in The Handbook of Perception and Human Performance:

“In science, by a fiction as remarkable as any to be found in law, what has once been published, even though it be in the Russian language, is spoken of as known, and it is too often forgotten that the rediscovery in the library may be a more difficult and uncertain process than the first discovery in the laboratory.”

Maybe slightly less true today than in 1884, thanks to Google, but still more true than I’d like!

Google cardboard

Finally got round to buying a Google Cardboard viewer and trying it out on my HTC One phone. Ooof. That is painful. I don’t know if it’s me or my phone (the magnetic clicker doesn’t work with HTC One, that much I know) but the effect is nauseating. It keeps jumping and skipping even when I don’t move my head. I had to take it off after about two minutes. I hope for the sake of the future of VR that other people aren’t having the same experience as me…

Should have asked a first-year psychology student…

I recently watched “The Machine”, an enjoyable sci-fi flick about a secret defence project to construct an artificially intelligent android who will be the ultimate super-soldier. Naturally, they design their robot killer in the form of a shapely young woman who spends much of the movie in a state of undress. What surprised me was that, in the opening scenes, they manage to totally mess up the well-known Sally-Anne “doll” test for theory of mind.

In the words of Wikipedia, “the key question [is] the Belief Question: ‘Where will Sally look for her marble?’”. In the movie, the scientist asks “Where should Sally look for her ball?”, which makes a nonsense of the whole thing! I am surprised that no one picked up on this in the course of making the movie. Come on, people, it’s not rocket science.