Learning a new language

The last programming language I learnt was Matlab, back circa 1994. Eek, that is twenty years ago now! It seems a shame that today there are all these amazing electronic devices, but coding is so much less accessible than when I was a wee lass learning to program my ZX Spectrum in BASIC. So I have been trying to “get with the program” a bit by learning some Android. I followed a couple of the tutorials at http://developer.android.com/training/basics/firstapp/index.html but found them hard going, although I did successfully code up a 2D dynamic random dot pattern. Then Brad Pearce pointed me to Game Maker Studio, which offers a very simple drag-and-drop interface for building games, supported by code as necessary. So I’ve spent an entertaining few hours trying to get to grips with that. It has a good set of tutorials, but I was too impatient to follow them for long and have now dived off-piste, trying to code up my own game. It’s hard going learning something new, isn’t it! I bet all the students I’ve sternly instructed to “learn Matlab” over the years can relate to that :).

Nuremberg – PlusOptix PowerRef 3

This week Jenny and I flew to Nuremberg (via Amsterdam) to visit PlusOptix and see their PowerRef 3 ( http://www.plusoptix.com/lang-en/accommodation-meter.html ). It made a pleasant change from being in the lab every day to get on a plane and look at some equipment (I was very excited).

We were greeted on arrival by Ralph, an employee of the company, who took us to see the equipment.

The idea behind the PowerRef 3 is to use a mirror system to shine infrared light into the eyes and record the reflections from the back of the retina, determining both each eye’s position (and hence gaze, from which VERGENCE can be measured) and the refraction of the eyes (which is a way to measure ACCOMMODATION). We are planning the main study of my PhD around decoupling these two cues, so we were very impressed by the equipment’s accuracy and ease of use. The clever way PlusOptix do the calculations is to insist that the camera sits exactly one metre from the eye, fixing the distance the infrared light travels. This means that calibration (which I am told by Jenny is a nightmare with conventional eye-tracker technology) is not required, as everything is calculated from measurable quantities relative to this 1 metre distance.
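To give a feel for why the fixed working distance makes the numbers fall out so simply, here is a toy Python sketch of the two quantities being measured. This is my own illustration, not anything from PlusOptix, and the 6.3 cm interpupillary distance is just an example value:

```python
import math

def vergence_deg(ipd_m, fixation_dist_m):
    """Vergence angle (in degrees) for two eyes separated by ipd_m,
    fixating a point fixation_dist_m straight ahead (symmetric geometry)."""
    return math.degrees(2 * math.atan((ipd_m / 2) / fixation_dist_m))

def accommodation_dioptres(fixation_dist_m):
    """Accommodative demand in dioptres is simply the reciprocal of
    the fixation distance in metres."""
    return 1.0 / fixation_dist_m

# At the instrument's 1 m working distance, with an illustrative 6.3 cm IPD:
v = vergence_deg(0.063, 1.0)      # about 3.6 degrees
a = accommodation_dioptres(1.0)   # exactly 1 dioptre
```

With the eye-to-camera distance pinned at one metre, both quantities reduce to simple functions of directly measured numbers, which is presumably why no per-subject calibration is needed.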

Jenny and I discussed the mathematics behind the data and wrote some Matlab code to run our own evaluations of its accuracy and reliability against a theoretical ‘perfect’ case, and we were astounded by how similar the results were. I am very much looking forward to the lab getting a PowerRef 3 to run our experiments with.

As a closing note, it is worth mentioning that the people at PlusOptix, from the boss all the way down to the general workforce, were polite, kind and keen to say hello and talk to us. Ralph, who gave up two days to look after us, and Christian, who I had been in contact with before heading over, were very generous and helped make the couple of days away much more enjoyable!

Welcome Ronny!

My mantis Dream Team is now complete with the arrival of Ronny Rosner this week. Ronny is busy setting up our insect electrophysiology lab. I’m delighted to have such an experienced neurophysiologist on board, and am excited to see what new insights he will produce about mantis visual processing.

It’s the most wonderful time of the year

Before Christmas started I had a hectic week or two.

First up was IC3D at the beginning of December. First time on a Eurostar, first visit to Belgium and first international conference. I thought the scientific conference was very well done: designed and implemented by scientists, for scientists. I made some good connections and had some interesting conversations. The overriding theme was more engineering-based than I would have chosen, but I can’t fault the conference. The final day and a half was the professional conference, where a lot of networking took place, in which I, as a PhD student, was unfortunately very low on the list of priorities. However, I did get to visit Galaxy Studios and experience Auro (so-called immersive 3D audio); I was very impressed, and it will certainly be the next big thing once cinemas can afford to install it and more content is created for it! I came back with one standout idea from the conference, and if I’m lucky I will hopefully get another experiment out of it!

An important part of research which I hadn’t really thought about is making sure the next generation of minds decides to continue in science: you can’t just sit in your little bubble, do science and find stuff out; you need to interact. To that end I have taken on a third-year Psychology undergraduate, Patrick, as a project student, and I am working with him on an experiment I came up with in Liege at IC3D. In it we want to pit the size of a stimulus against its stereoscopic depth and see which cue, when they provide conflicting information, is the overriding one: stereo or size. Setting up the experiment took a lot of mathematics with similar triangles and trigonometry, and the computer code took a little time to sort out some teething issues with displaying the card correctly, but I had the bulk of the experiment done before Christmas.
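For a flavour of the geometry involved, here is a minimal Python sketch of the similar-triangles relation behind matching visual angles across distances. This is illustrative only, not the actual experiment code, and the card sizes are made-up values:

```python
import math

def angular_size_deg(width_m, distance_m):
    """Visual angle (degrees) subtended by an object of a given width
    viewed from a given distance."""
    return math.degrees(2 * math.atan(width_m / (2 * distance_m)))

def width_for_same_angle(ref_width_m, ref_dist_m, new_dist_m):
    """By similar triangles, the width an object needs at new_dist_m to
    subtend the same visual angle as ref_width_m does at ref_dist_m."""
    return ref_width_m * new_dist_m / ref_dist_m

# A 5 cm card at 50 cm subtends the same angle as a 10 cm card at 1 m,
# so retinal size alone cannot tell the two apart; stereo can.
w = width_for_same_angle(0.05, 0.5, 1.0)   # 0.10 m
```

Rendering the card with the disparity of one distance but the angular size appropriate to another is what puts the two cues into conflict.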

Then followed a lot of beer and too many mince pies over the Christmas holidays.

After coming back (and vowing to lose the stone I put on over Christmas) I have typed up some explanation of the project Patrick and I are doing and fixed some of the computer code. I am going to continue getting the experiment set up and sorted, and also hopefully submit a paper to JoV soon. I am also waiting on a couple of things before I start my next experiment, so it’s all go!

IC3D (preparation)

The big brilliant news this week is that I have been accepted to present some of my data at IC3D this year in Liege! It’s an international conference focussing mostly on my field of interest, and it is peer reviewed, so the kudos is rather high!

They have received my paper on oblique viewing angles well and want me to present it. They returned it with some reviewer comments requesting minor alterations, which I am currently working through with Jenny. However, one of the comments was, quite simply, to remove some of the irrelevant pieces of information I provide. I find this in itself a bit annoying to be told, because as a scientist, and a mathematician, I really try to say only the important stuff! If life could be bullet-pointed, that would suit me to a tee.

So I am now in the process of sorting out the paper, and also making sure I can still publish the full study in a good journal (hopefully Journal of Vision), before the deadline of a week on Wednesday.

Time flies when you’re having fun; I can’t believe it’s already November! I would encourage anybody who likes hard, ever-changing, challenging work to go into research. I’m certainly living the dream!

Neuronal models of motion sensitivity

I was just assembling a (personal, biased) reading list on neuronal models of motion sensitivity, and it occurred to me that it might be good to do it publicly as a blog post. Please, any neuro readers, chip in with your own contributions!

Books

Landy & Movshon (eds), Computational models of visual processing

Russell & Karen DeValois, Spatial Vision

Concepts

Zanker, J., Modeling human motion perception. I. Classical stimuli. Naturwissenschaften, 1994. 81(4): p. 156-63.
I think this might be a good place to start – what do you reckon?

Models of simple motion sensors

roughly corresponding to V1 in my mind.

Adelson, E.H. and J.R. Bergen, Spatiotemporal energy models for the perception of motion. J Opt Soc Am [A], 1985. 2(2): p. 284-99.
Poss my all-time fave in the genre. So clear and logical.

Watson, A.B. and A.J. Ahumada, Jr., Model of human visual-motion sensing. J Opt Soc Am A, 1985. 2(2): p. 322-41.
van Santen, J.P. and G. Sperling, Temporal covariance model of human motion perception. J Opt Soc Am [A], 1984. 1(5): p. 451-73.
van Santen, J.P. and G. Sperling, Elaborated Reichardt detectors. J Opt Soc Am A, 1985. 2(2): p. 300-21.
Three more classics.
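As a toy illustration of the correlation-type scheme these papers analyse (one of van Santen & Sperling’s points being that, after opponent subtraction, the elaborated Reichardt detector is formally equivalent to the energy model), here is a minimal opponent detector in pure Python. All parameter values are made up for illustration:

```python
import math

def reichardt_response(left, right, delay):
    """Opponent Hassenstein-Reichardt detector: each subunit multiplies
    one input by a time-delayed copy of the other, and the opponent
    stage subtracts the mirror-image subunit. Positive output signals
    motion from the 'left' sample point towards the 'right' one."""
    total = 0.0
    for t in range(delay, min(len(left), len(right))):
        total += left[t - delay] * right[t] - right[t - delay] * left[t]
    return total

# A drifting sinusoid sampled at two nearby points: for rightward motion
# the right-hand sample is a phase-lagged copy of the left-hand one.
n = 200
left_sig  = [math.sin(0.2 * t) for t in range(n)]
right_sig = [math.sin(0.2 * t - math.pi / 4) for t in range(n)]
# reichardt_response(left_sig, right_sig, 4) comes out positive;
# swapping the inputs (i.e. reversing the motion) flips the sign.
```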

Perceptual consequences of this sort of motion sensor:

Sheliga, B.M., et al., Initial ocular following in humans: a response to first-order motion energy. Vision Res, 2005. 45(25-26): p. 3307-21.
Such a nice demo with the missing-fundamental stimulus.

Serrano-Pedraza, I., P. Goddard, and A. Derrington, Evidence for reciprocal antagonism between motion sensors tuned to coarse and fine features. Journal of Vision, 2007. 7(12): 8, p. 1-14.
Intriguing result and nice clear modelling.

What does MT do differently?

Heeger, D.J., E.P. Simoncelli, and J.A. Movshon, Computational models of cortical visual processing. Proc Natl Acad Sci U S A, 1996. 93: p. 623-627.
Getting into the difference between V1 and MT, pattern vs component motion etc

Perrone, J.A. and A. Thiele, Speed skills: measuring the visual speed analyzing properties of primate MT neurons. Nat Neurosci, 2001. 4(5): p. 526-32.
How do you get a neuron tuned to speed from neurons tuned to spatial and temporal frequency?
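One textbook answer, sketched very crudely below in Python, is to pool V1-like units whose preferred frequencies lie along the line tf = speed × sf, so the pooled unit cares about the ratio tf/sf rather than either frequency alone. This is my own toy, not Perrone & Thiele’s model; the tuning bandwidths and frequency samples are invented for illustration:

```python
import math

def v1_response(sf, tf, pref_sf, pref_tf, bw=0.5):
    """Gaussian tuning of a model V1 unit in log spatial and temporal
    frequency, with bandwidth bw octaves (illustrative numbers)."""
    ds = math.log2(sf / pref_sf)
    dt = math.log2(tf / pref_tf)
    return math.exp(-(ds * ds + dt * dt) / (2 * bw * bw))

def mt_speed_response(sf, tf, pref_speed, pref_sfs):
    """A speed-tuned unit: sum V1 units whose preferred frequencies sit
    on the line pref_tf = pref_speed * pref_sf, so the sum responds to
    tf/sf = pref_speed more or less regardless of sf."""
    return sum(v1_response(sf, tf, s, pref_speed * s) for s in pref_sfs)

pref_sfs = [0.5, 1.0, 2.0, 4.0]   # preferred spatial frequencies pooled over
same_speed  = mt_speed_response(2.0, 16.0, 8.0, pref_sfs)  # stimulus at 8 deg/s
wrong_speed = mt_speed_response(2.0, 8.0, 8.0, pref_sfs)   # same sf, 4 deg/s
# same_speed comes out clearly larger than wrong_speed, and a stimulus at a
# different sf but matched speed (e.g. sf=1, tf=8) responds almost identically.
```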

What do you reckon, vision science community? What must-read papers did I miss out?

Mantis in motion

Things are going well with the mantids. They are such great experimental subjects; I think I prefer them to humans! Certainly a lot less hassle :).

Although the main thrust of the M3 project will be about their stereo vision, we are getting increasingly excited about the great questions we can ask about their motion perception. Lisa is currently collecting data on a motion perception question, while Vivek is progressing the main stereo research arc by constructing ever more refined 3D glasses for the mantids. Ghaith is working on stimulus generation for both projects and starting to develop models of the underlying algorithms. Exciting times.