The human contrast sensitivity function (CSF) is the most general way of quantifying what human vision can perceive. It predicts which artifacts will be visible on a display and which hardware changes will yield noticeable improvements. Contrast sensitivity varies with luminance, and as new technology produces displays with ever-higher luminance ranges, understanding how the CSF behaves in this regime is becoming essential. To this end, we investigated the effect of adaptation luminance on contrast sensitivity for sine-wave gratings across a large number of CSF measurements from the literature. We examined the validity of the linear to DeVries-Rose to Weber region transition that is usually assumed to govern this relationship. We found a gradual transition among the three regions, with steeper slopes at higher spatial frequencies and lower retinal illuminance, and flatter slopes at lower frequencies and higher illuminance. We also identified a further decreasing region at low to intermediate frequencies that was consistent across studies. Based on this theoretical framework, we adopted a CSF model consisting of central elements of human visual signal processing and three limiting internal-noise components, one corresponding to each region. We assessed the model's performance on the measured contrast sensitivities and proposed an eight-parameter form that describes the contrast sensitivity surface in the spatial frequency-luminance domain.
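The three-region transition described above can be illustrated with a minimal sketch (not the paper's fitted model): three additive internal noise components produce a smooth linear to DeVries-Rose to Weber progression. With a luminance-increment threshold of the form dL = sqrt(n0 + n1*L + n2*L**2), the sensitivity S = L/dL has log-log slope near 1 in the dark (linear) limit, near 0.5 in the intermediate (DeVries-Rose) regime, and near 0 in the bright (Weber) limit. The constants n0, n1, n2 and the luminance values are hypothetical, chosen only to separate the regimes.

```python
import numpy as np

def sensitivity(L, n0=1.0, n1=1.0, n2=1e-6):
    """Contrast sensitivity S = L / dL with three additive noise terms.

    n0: constant (dark) noise  -> linear region, slope ~1
    n1: photon-counting noise  -> DeVries-Rose region, slope ~0.5
    n2: multiplicative noise   -> Weber region, slope ~0
    (Illustrative constants, not fitted values.)
    """
    return L / np.sqrt(n0 + n1 * L + n2 * L**2)

# Adaptation luminance spanning all three regimes (arbitrary units).
L = np.logspace(-4, 10, 1000)
S = sensitivity(L)

# Local log-log slope d(log S) / d(log L) along the curve.
slope = np.gradient(np.log(S), np.log(L))

# Slopes in the dark limit, the intermediate regime, and the bright limit.
mid = np.argmin(np.abs(L - 100.0))
print(slope[0], slope[mid], slope[-1])  # approx. 1, 0.5, and 0
```

The gradual transition reported in the text corresponds to the slope moving smoothly between these asymptotes rather than switching abruptly at fixed break points.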