Peripheral Flicker Fusion at High Luminance: Beyond the Ferry–Porter Law

Peripheral Flicker Fusion at High Luminance: Beyond the Ferry–Porter Law by Fernandez-Alonso M, Innes W, Read JCA, FernandezAlonsoInnesRead2023.pdf (0.9 MiB) - The relationship between luminous intensity and the maximum frequency of flicker that can be detected defines the limits of the temporal-resolving ability of the human visual system, and characterizing it has important theoretical and practical applications, particularly for determining the optimal refresh rate for visual displays that would avoid the visibility of flicker and other temporal artifacts. Previous research has shown that this relationship is best described by the Ferry–Porter law, which states that critical flicker fusion (CFF) increases as a linear function of log retinal illuminance. The existing experimental data showed that this law holds for a wide range of stimuli and up to 10,000 Trolands; however, beyond this, it was not clear if the CFF continued to increase linearly or if the function saturated. Our aim was to extend the experimental data available to higher light intensities than previously reported in the literature. For this, we measured the peripheral CFF at a range of illuminances over six orders of magnitude. Our results showed that for up to 10^4 Trolands, the data conformed to the Ferry–Porter law with a similar slope, as previously established for this eccentricity; however, at higher intensities, the CFF function flattens and saturates at ~90 Hz for a target size of 5.7 degrees, and at ~100 Hz for a target of 10 degrees of angular size. These experimental results could prove valuable for the design of brighter visual displays and illumination sources that are temporally modulated.
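The Ferry–Porter relationship described above, together with the saturation this study reports, can be sketched as a simple capped linear function of log illuminance. The slope and intercept below are illustrative placeholders, not fitted values from the paper; only the ~90 Hz ceiling for the 5.7-degree target comes from the abstract.

```python
import numpy as np

def cff_hz(trolands, slope=9.5, intercept=35.0, cff_max=90.0):
    """Critical flicker fusion frequency vs retinal illuminance.

    Ferry-Porter law: CFF rises linearly with log10 illuminance,
    here capped at a saturation ceiling (~90 Hz for a 5.7 deg
    peripheral target, per the abstract). slope and intercept are
    illustrative placeholders, not the paper's fitted values.
    """
    linear = intercept + slope * np.log10(trolands)
    return np.minimum(linear, cff_max)
```

Below the saturation point the function rises linearly in log illuminance; above it, the cap dominates, mirroring the flattening the study reports at high intensities.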

New Approaches to 3D Vision

New Approaches to 3D Vision by Linton P, Morgan MJ, Read JCA, Vishwanath D, Creem-Regehr SH, Domini F, LintonMorganReadVishwanathCreemRegehrDomini2022.pdf (3.2 MiB) - New approaches to 3D vision are enabling new advances in artificial intelligence and autonomous vehicles, a better understanding of how animals navigate the 3D world, and new insights into human perception in virtual and augmented reality. Whilst traditional approaches to 3D vision in computer vision (SLAM: simultaneous localization and mapping), animal navigation (cognitive maps), and human vision (optimal cue integration) start from the assumption that the aim of 3D vision is to provide an accurate 3D model of the world, the new approaches to 3D vision explored in this issue challenge this assumption. Instead, they investigate the possibility that computer vision, animal navigation, and human vision can rely on partial or distorted models or no model at all. This issue also highlights the implications for artificial intelligence, autonomous vehicles, human perception in virtual and augmented reality, and the treatment of visual disorders, all of which are explored by individual articles.

This article is part of a discussion meeting issue ‘New approaches to 3D vision’.

Stereopsis without correspondence

Stereopsis without correspondence by Read JCA, Read2022.pdf (1.9 MiB) - Stereopsis has traditionally been considered a complex visual ability, restricted to large-brained animals. The discovery in the 1980s that insects, too, have stereopsis, therefore, challenged theories of stereopsis. How can such simple brains see in three dimensions? A likely answer is that insect stereopsis has evolved to produce simple behaviour, such as orienting towards the closer of two objects or triggering a strike when prey comes within range. Scientific thinking about stereopsis has been unduly anthropomorphic, for example assuming that stereopsis must require binocular fusion or a solution of the stereo correspondence problem. In fact, useful behaviour can be produced with very basic stereoscopic algorithms which make no attempt to achieve fusion or correspondence, or to produce even a coarse map of depth across the visual field. This may explain why some aspects of insect stereopsis seem poorly designed from an engineering point of view: for example, paying no attention to whether interocular contrast or velocities match. Such algorithms demonstrably work well enough in practice for their species, and may prove useful in particular autonomous applications.

This article is part of a discussion meeting issue ‘New approaches to 3D vision’.

Synthetic OCT Data Generation to Enhance the Performance of Diagnostic Models for Neurodegenerative Diseases

Synthetic OCT Data Generation to Enhance the Performance of Diagnostic Models for Neurodegenerative Diseases by Danesh H, Steel DH, Hogg J, Ashtari F, Innes WF, Bacardit J, Hurlbert A, Read JCA, Kafieh R, DaneshMaghooliDehghaniKafieh2021.pdf (3.1 MiB) - Purpose: Optical coherence tomography (OCT) has recently emerged as a source of powerful biomarkers in neurodegenerative diseases such as multiple sclerosis (MS) and neuromyelitis optica (NMO). The application of machine learning techniques to the analysis of OCT data has enabled automatic extraction of information with the potential to aid the timely diagnosis of neurodegenerative diseases. These algorithms require large amounts of labeled data, but few such OCT data sets are currently available.

Methods: To address this challenge, here we propose a synthetic data generation method yielding a tailored augmentation of three-dimensional (3D) OCT data and preserving differences between control and disease data. A 3D active shape model is used to produce synthetic retinal layer boundaries, simulating data from healthy controls (HCs) as well as from patients with MS or NMO.
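The active shape model step can be illustrated with a generic shape-sampling sketch: a synthetic boundary is the mean shape plus a random linear combination of principal modes. All names and parameter values here are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def sample_synthetic_boundary(mean_shape, modes, eigvals, rng, n_sd=2.0):
    """Generic active-shape-model sampling (illustrative sketch).

    mean_shape: (n_points,) mean boundary; modes: (n_points, n_modes)
    principal components; eigvals: (n_modes,) mode variances. A
    synthetic boundary is the mean plus the modes weighted by random
    coefficients drawn within +/- n_sd standard deviations per mode.
    """
    b = rng.uniform(-n_sd, n_sd, size=eigvals.shape) * np.sqrt(eigvals)
    return mean_shape + modes @ b
```

Drawing coefficients within a few standard deviations keeps the generated boundaries plausible while still varying them, which is the usual way such models are used for augmentation.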

Results: To evaluate the generated data, retinal thickness maps are extracted and evaluated under a broad range of quality metrics. The results show that the proposed model can generate realistic-appearing synthetic maps. Quantitatively, the image histograms of the synthetic thickness maps agree with the real thickness maps, and the cross-correlations between synthetic and real maps are also high. Finally, we use the generated data as an augmentation technique to train stronger diagnostic models than those using only the real data.

Conclusions: This approach provides valuable data augmentation, which can help overcome key bottlenecks of limited data.

Translational Relevance: By addressing the challenge posed by limited data, the proposed method helps apply machine learning methods to diagnose neurodegenerative diseases from retinal imaging.

A computational model of stereoscopic prey capture in praying mantises

A computational model of stereoscopic prey capture in praying mantises by O’Keeffe J, Yap SH, Llamas-Cornejo I, Nityananda V, Read JCA, OKeeffeYapLlamasCornejoNityanandaRead2022.pdf (4.2 MiB) - We present a simple model which can account for the stereoscopic sensitivity of praying mantis predatory strikes. The model consists of a single “disparity sensor”: a binocular neuron sensitive to stereoscopic disparity and thus to distance from the animal. The model is based closely on the known behavioural and neurophysiological properties of mantis stereopsis. The monocular inputs to the neuron reflect temporal change and are insensitive to contrast sign, making the sensor insensitive to interocular correlation. The monocular receptive fields have an excitatory centre and inhibitory surround, making them tuned to size. The disparity sensor combines inputs from the two eyes linearly, applies a threshold and then an exponent output nonlinearity. The activity of the sensor represents the model mantis’s instantaneous probability of striking. We integrate this over the stimulus duration to obtain the expected number of strikes in response to moving targets with different stereoscopic disparity, size and vertical disparity. We optimised the parameters of the model so as to bring its predictions into agreement with our empirical data on mean strike rate as a function of stimulus size and disparity. The model proves capable of reproducing the relatively broad tuning to size and narrow tuning to stereoscopic disparity seen in mantis striking behaviour. Although the model has only a single centre-surround receptive field in each eye, it displays qualitatively the same interaction between size and disparity as we observed in real mantids: the preferred size increases as simulated prey distance increases beyond the preferred distance.
We show that this occurs because of a stereoscopic “false match” between the leading edge of the stimulus in one eye and its trailing edge in the other; further work will be required to find whether such false matches occur in real mantises. Importantly, the model also displays realistic responses to stimuli with vertical disparity and to pairs of identical stimuli offering a “ghost match”, despite not being fitted to these data. This is the first image-computable model of insect stereopsis, and reproduces key features of both neurophysiology and striking behaviour.
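The sensor's output stage described above (linear binocular summation, threshold, exponent nonlinearity) can be sketched as follows. The monocular drives are assumed to be already rectified centre-surround responses; the threshold and exponent values are placeholders, not the fitted parameters from the paper.

```python
import numpy as np

def strike_probability(left_drive, right_drive, threshold=0.1, exponent=2.0):
    """Single 'disparity sensor' output sketch (illustrative parameters).

    left_drive / right_drive: monocular responses after centre-surround
    filtering and full-wave rectification (hence insensitive to contrast
    sign). The sensor sums the two eyes linearly, subtracts a threshold,
    and applies an exponent output nonlinearity; the result is read as
    an instantaneous strike probability.
    """
    drive = np.asarray(left_drive) + np.asarray(right_drive)
    return np.maximum(drive - threshold, 0.0) ** exponent
```

Summing this probability over the time steps of a stimulus presentation gives an expected number of strikes, matching the way the abstract describes fitting the model to mean strike rates.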

Binocular Vision and Stereopsis Across the Animal Kingdom

Binocular Vision and Stereopsis Across the Animal Kingdom by Read JCA, Read2021.pdf (3.4 MiB) - Most animals have at least some binocular overlap, i.e., a region of space that is viewed by both eyes. This reduces the overall visual field and raises the problem of combining two views of the world, seen from different vantage points, into a coherent whole. However, binocular vision also offers many potential advantages, including increased ability to see around obstacles and increased contrast sensitivity. One particularly interesting use for binocular vision is comparing information from both eyes to derive information about depth. There are many different ways in which this might be done, but in this review, I refer to them all under the general heading of stereopsis. This review examines the different possible uses of binocular vision and stereopsis and compares what is currently known about the neural basis of stereopsis in different taxa. Studying different animals helps us break free of preconceptions stemming from the way that stereopsis operates in human vision and provides new insights into the different possible forms of stereopsis.

Reduced surround suppression in monocular motion perception

Reduced surround suppression in monocular motion perception by Arranz-Paraiso S, Read JCA, Serrano-Pedraza I, ArranzParaisoReadSerranoPedraza2021.pdf (0.7 MiB) - Motion discrimination of large stimuli is impaired at high contrast and short durations. This psychophysical result has been linked with the center-surround suppression found in neurons of area MT. Recent physiology results have shown that most frontoparallel MT cells respond more strongly to binocular than to monocular stimulation. Here we measured the surround suppression strength under binocular and monocular viewing. Thirty-nine participants took part in two experiments: (a) where the nonstimulated eye viewed a blank field of the same luminance (n = 8) and (b) where it was occluded with a patch (n = 31). In both experiments, we measured duration thresholds for small (1 deg diameter) and large (7 deg) drifting gratings of 1 cpd with 85% contrast. For each subject, a Motion Suppression Index (MSI) was computed by subtracting the duration thresholds in logarithmic units of the large minus the small stimulus. Results were similar in both experiments. Combining the MSI of both experiments, we found that the strength of suppression for binocular condition (MSIbinocular = 0.249 ± 0.126 log10 (ms)) is 1.79 times higher than under monocular viewing (MSImonocular = 0.139 ± 0.137 log10 (ms)). This increase is too high to be explained by the higher perceived contrast of binocular stimuli and offers a new way of testing whether MT neurons account for surround suppression. Potentially, differences in surround suppression reported in clinical populations may reflect altered binocular processing.
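The Motion Suppression Index defined in this abstract is simply a difference of log-duration thresholds, which can be computed directly; the example values in the test below are hypothetical.

```python
import math

def motion_suppression_index(threshold_large_ms, threshold_small_ms):
    """MSI = log10(duration threshold, large stimulus)
           - log10(duration threshold, small stimulus),
    in log10(ms), as defined in the abstract. Positive values mean the
    large stimulus needed longer presentations, i.e. suppression.
    """
    return math.log10(threshold_large_ms) - math.log10(threshold_small_ms)
```

For instance, the reported binocular MSI of 0.249 log10(ms) corresponds to large-stimulus thresholds about 10^0.249 ≈ 1.77 times longer than small-stimulus ones.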

ASTEROID stereotest v1.0: lower stereo thresholds using smaller, denser and faster dots

ASTEROID stereotest v1.0: lower stereo thresholds using smaller, denser and faster dots by Read JCA, Wong ZY, Yek X, Wong YX, Bachtoula O, Llamas-Cornejo I, Serrano-Pedraza I, ReadWongYekWongBachtoulaLlamasCornejoSerranoPedraza2020_compressed.pdf (0.6 MiB) - Purpose: In 2019, we described ASTEROID, a new stereotest run on a 3D tablet computer which involves a four-alternative disparity detection task on a dynamic random-dot stereogram. Stereo thresholds measured with ASTEROID were well correlated with, but systematically higher than (by a factor of around 1.5), thresholds measured with previous laboratory stereotests or the Randot Preschool clinical stereotest. We speculated that this might be due to the relatively large, sparse dots used in ASTEROID v0.9. Here, we introduce the new ASTEROID v1.0, which uses precomputed images to allow stereograms made up of much smaller, denser dots, and test its stereo thresholds and test-retest repeatability.

Methods: Stereo thresholds and test/retest repeatability were tested and compared between the old and new versions of ASTEROID (n = 75) and the Randot Circles stereotest (n = 31), in healthy young adults.

Results: Thresholds on ASTEROID v1.0 are lower (better) than on ASTEROID v0.9 by a factor of 1.4, and do not differ significantly from thresholds on the Randot Circles. Thresholds were roughly log-normally distributed with a mean of 1.54 log10 arcsec (35 arcsec) on ASTEROID v1.0 compared to 1.70 log10 arcsec (50 arcsec) on ASTEROID v0.9. The standard deviation between observers was the same for both versions, 0.32 log10 arcsec, corresponding to a factor of 2 above and below the mean. There was no difference between the versions in their test/retest repeatability, with 95% coefficient of repeatability = 0.46 log10 arcsec (a factor of 2.9 or 1.5 octaves) and a Pearson correlation of 0.8 (comparable to other clinical stereotests).

Conclusion: The poorer stereo thresholds previously reported with ASTEROID v0.9 appear to have been due to the relatively large, coarse dots and low density used, rather than to some other aspect of the technology. Employing the small dots and high density used in ASTEROID v1.0, thresholds and test/retest repeatability are similar to those of other clinical stereotests.
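The log10 arcsec units used in these results convert back to linear units as powers of ten; a quick check (nothing beyond arithmetic is assumed) reproduces the bracketed figures in the abstract.

```python
import math

def log_arcsec_to_arcsec(x):
    """Convert a stereo threshold from log10 arcsec to arcsec."""
    return 10.0 ** x

def log_units_to_octaves(x):
    """Convert a difference in log10 units to octaves (factors of 2)."""
    return x / math.log10(2.0)
```

Here 10^1.54 ≈ 35 arcsec and 10^1.70 ≈ 50 arcsec match the reported means, and the 0.46 log10 arcsec coefficient of repeatability is a factor of 10^0.46 ≈ 2.9, or about 1.5 octaves.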

Extending the Human Foveal Spatial Contrast Sensitivity Function to High Luminance Range

Extending the Human Foveal Spatial Contrast Sensitivity Function to High Luminance Range by Kaspiris-Rousellis C, Fernandez-Alonso M, Read JCA, KaspirisRousellisFernandezAlonsoRead2019.pdf (3.4 MiB) - The human contrast sensitivity function (CSF) is the most general way of quantifying what human vision can perceive. It predicts which artifacts will be visible on a display and what changes to hardware will result in noticeable improvements. Contrast sensitivity varies with luminance, and as new technology is producing higher luminance range displays, it is becoming essential to understand how the CSF behaves in this regime. Following this direction, we investigated the effect of adaptation luminance on contrast sensitivity for sine-wave gratings over a large number of CSF measurements in the literature. We examined the validity of the linear to DeVries-Rose to Weber region transition that is usually assumed to predict this relationship. We found a gradual transition among the three regions with steeper/flatter slopes for higher/lower frequencies and lower/higher retinal illuminance. A further decreasing region was located at low to intermediate frequencies, which was consistent across studies. Based on this theoretical construct, we adopted a CSF model consisting of central elements in the human visual signal processing and three limiting internal noise components corresponding to each region. We assessed the model’s performance on the measured contrast sensitivities and proposed an eight-parameter form to describe the contrast sensitivity surface in the spatial frequency-luminance domain.
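The three-region behaviour mentioned above (linear, DeVries-Rose, Weber) can be illustrated with a piecewise sketch in which sensitivity grows in proportion to luminance, then to its square root, then saturates. The breakpoints and scaling below are placeholders; the paper itself finds gradual, frequency-dependent transitions captured by an eight-parameter model, not hard breakpoints.

```python
import numpy as np

def contrast_sensitivity(luminance, l_dvr=0.1, l_weber=100.0, s_weber=100.0):
    """Piecewise sketch of sensitivity vs adapting luminance L:
      linear region  (L < l_dvr):            S proportional to L
      DeVries-Rose   (l_dvr <= L < l_weber): S proportional to sqrt(L)
      Weber region   (L >= l_weber):         S constant
    Branches are scaled to join continuously at the breakpoints.
    All parameter values are illustrative placeholders.
    """
    L = np.asarray(luminance, dtype=float)
    s_sqrt = s_weber * np.sqrt(L / l_weber)                   # equals s_weber at l_weber
    s_lin = s_weber * np.sqrt(l_dvr / l_weber) * (L / l_dvr)  # joins sqrt branch at l_dvr
    return np.where(L >= l_weber, s_weber, np.where(L >= l_dvr, s_sqrt, s_lin))
```

On log-log axes the three branches have slopes of 1, 0.5 and 0 respectively, which is the signature usually used to identify the regions in measured data.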