We weren't sure at first how much of a problem this was for the existing model. In principle, it could require fundamental changes, for example indicating that stereoresolution is set at a much higher cortical level than V1. However, we thought of one fairly minor tweak which could potentially reconcile the model and the data. V1 neurons are believed to show a "size-disparity correlation": larger disparities are encoded by neurons with larger receptive fields. In our model, a single "correlation detector" represents a pool of V1 neurons tuned to different spatial frequencies and orientations, and the size of the window within which interocular correlation is computed represents the minimum receptive-field size of neurons in this pool. In our previous model, following Gepshtein, Banks, Landy et al., we had assumed that this window was the same for all disparities. Now, we made the window larger for neuronal pools tuned to larger disparities. As a result, as the amplitude of a corrugation increased, the receptive-field size of the cells encoding it also increased, limiting the ability to perceive high-frequency corrugations. It turned out that this impaired the model's ability to "see" square-wave corrugations, bringing performance down to the level found with sine waves, just as in humans.
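The tweak can be sketched in code. The snippet below is a minimal illustration, not the actual model: it computes a windowed interocular correlation where the window's standard deviation grows linearly with the pool's preferred disparity (the size-disparity correlation). The function names and the `base_sd` and `slope` values are hypothetical placeholders, not fitted parameters.

```python
import numpy as np

def correlation_window_sd(preferred_disparity, base_sd=0.07, slope=0.5):
    """Size-disparity correlation: pools tuned to larger disparities
    get a larger correlation window. Parameters are illustrative only."""
    return base_sd + slope * abs(preferred_disparity)

def interocular_correlation(left, right, x0, preferred_disparity, positions):
    """Weighted correlation of the two eyes' images within a Gaussian
    window centred at x0, whose width depends on the pool's tuning."""
    sd = correlation_window_sd(preferred_disparity)
    w = np.exp(-0.5 * ((positions - x0) / sd) ** 2)
    # Subtract the window-weighted mean from each eye's image
    l = left - np.average(left, weights=w)
    r = right - np.average(right, weights=w)
    num = np.sum(w * l * r)
    den = np.sqrt(np.sum(w * l * l) * np.sum(w * r * r))
    return num / den if den > 0 else 0.0
```

Because a high-amplitude corrugation is detected by pools tuned to large disparities, their wider windows average across several corrugation cycles at high spatial frequencies, washing out the correlation signal, which is the proposed limit on stereoresolution.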