
ON THE PHYSICS OF PERCEPTION

Willy Wong

A thesis submitted in conformity with the requirements for the degree of DOCTOR OF PHILOSOPHY
Graduate Department of Physics and Institute of Biomedical Engineering
University of Toronto

© Copyright by Willy Wong 1997

On the Physics of Perception
PhD, 1997
Willy Wong
Department of Physics and Institute for Biomedical Engineering
University of Toronto

Abstract

A single equation has been discovered governing the flow of information or entropy from a sensory stimulus (such as light or sound) to the organ of sensation (such as a rod or a hair cell) which can account for nearly all experimental results recorded in sensory neurophysiology relating stimulus intensity and stimulus duration to biological response. This equation is capable of handling any time-varying single-stimulus input. The biological response is taken as the frequency of neural discharge.

When this equation is extended to the psychophysical level, the theory is capable of unifying many phenomena associated with thresholds.

There may be no other sensory theory of comparable generality. The scope and limitations of the theory are discussed.

Acknowledgments

I would like to take this opportunity to thank some of the people who have helped make this thesis possible.

First of all, I am greatly indebted to my supervisor, Prof. Ken Norwich, for his contribution to this work. This thesis represents a collaborative effort between Prof. Norwich and myself.

I would like to thank my supervisory committee, Prof. Rashmi Desai, Prof. Sajeev John and Prof. Hans Kunov for their many helpful suggestions, criticisms and comments regarding the research. This thesis would not be possible without their insight and expertise.

In particular, I would like to acknowledge the generosity of Prof. Kunov for granting me the use of all materials and equipment in the Bioacoustics Laboratory at the Institute of Biomedical Engineering. His generosity is greatly appreciated.

A special word of thanks to Prof. Alf Dolan, Prof. Michael Menzinger (chemistry) and Prof. John Perz, members of my final examination committee. I am particularly grateful to Prof. Perz for his methodical review of my thesis.

I would also like to thank Prof. Lawrence Ward of the University of British Columbia for his many helpful suggestions and comments as external reviewer.

This research was supported by fellowships from the University of Toronto and the Government of Ontario. Furthermore, much of the research was carried out with the aid of an NSERC operating grant to Prof. Ken Norwich.

Words cannot express the gratitude that I owe to my parents, Bing and James, and Auntie Helen for all their support and encouragement. I couldn't have gone this far without you!

To my lovely Etsuko... thank you so much for helping me grow up, for showing me that there is more to life than just intellectual pursuits.

And finally, to Ken Norwich, a friend and teacher for nearly 7 years, you have inspired me in a way like no other scientist. I will forever remember your limitless creativity, honesty, generosity and patience.

To you, Ken, I dedicate this thesis.

Contents

1 Introduction 1

2 An Overview of Sensory Physiology and Psychology 5
2.1 The Sensory System 5
2.2 The Physiology of the Senses 8
2.2.1 Neural Adaptation 8
2.2.1.1 Early Rise in the Adaptation Curve 8
2.2.1.2 Spontaneous Activity 10
2.2.1.3 De-adaptation 10
2.2.2 Driven Neural Response 10
2.2.3 Empirical Laws Governing the Neural Response 11
2.3 The Psychology of the Senses 14
2.3.1 Psychophysical Adaptation 14
2.3.2 Magnitude Estimation and the Law of Sensation 15
2.3.3 Threshold Phenomena 18
2.3.3.1 Absolute Threshold 18
2.3.3.2 Differential Threshold 22
2.3.4 Simple Reaction Time 26
2.4 A Final Note 31
2.5 Conclusions 32

3 The Sensory Entropy Theory 34
3.1 Classical Entropy Theory (pre-1993) 35
3.1.1 Properties of the Classical Entropy Equation 37
3.1.2 Failures of the Classical Theory 40
3.2 Universal Entropy Theory (1993-) 41
3.2.1 Deriving the Universal Entropy Equation 41
3.2.2 Summary 48

4 Neural Explorations with the Universal Entropy Equation 50
4.1 Neural Adaptation 50
4.2 Spontaneous Neural Activity 51
4.3 Early Rise in the Adaptation Curve 54
4.3.1 Correlated Receptor Samples 54
4.3.2 Effects of Stimulus Rise Time on Adaptation 55
4.4 Neural De-adaptation 56
4.5 On the Relationship Between the Classical and the Universal Entropy Equation 58
4.6 Further Experimental Validation of the Universal Entropy Equation 58
4.6.1 Auditory Neural Response 59
4.6.2 Other Cases 60
4.7 Other Investigators 63

5 Unifying Threshold Phenomena 64
5.1 Thresholds and the Neural Response 66
5.2 Introducing the Threshold Hypothesis 67
5.3 Differential Thresholds and Weber Fractions 71
5.3.1 Deriving the Differential Threshold Equation 71
5.3.2 Implications of the Differential Threshold Equation 72
5.3.3 Validating the Differential Threshold Equation 73
5.3.3.1 Visual Differential Thresholds 73
5.3.3.2 Auditory Differential Thresholds 76
5.3.4 A Generalization of the Threshold Hypothesis 80
5.4 Absolute Thresholds 81
5.4.1 Bloch's Law: Part I 83

6 Simple Reaction Time 84
6.1 Deriving Piéron's Law 84
6.2 Implications of the Entropic Derivation of Piéron's Law 88
6.2.1 Bloch's Law: Part II 89

7 Speculating on the Psychophysical Response 91
7.1 Psychophysical Adaptation 92
7.2 Magnitude Estimation 96
7.2.1 The Measurements of Riesz 96
7.2.2 Magnitude Estimation 98

8 Discussion 101
8.1 The Internal Signal 101
8.2 A Common Set of Acoustical Parameters? 104
8.3 The Threshold Associated with Simple Reaction Time 105
8.4 Two Remaining Points 106

9 Conclusions 107
9.1 General Remarks 107
9.2 Specific Remarks 108
9.2.1 Neurophysiological Phenomena 108
9.2.2 Psychophysical Phenomena 109

A A Model of Equilibrium Receptor Memory 112
A.1 Physical Model 112
A.2 Derivation of the Power Dependence of Receptor Memory 114

B Deriving the Neural Response to a Double Step Input

C Deriving the Differential Threshold Equation for the Continuous Increment 121

D Deriving Eq. (59)

E The Absolute Threshold of Human

F Maple Worksheets

Bibliography

Chapter 1

Introduction

The study of the senses is a highly interdisciplinary endeavour. A complete study may involve a wide range of disciplines including mathematics, philosophy, physics, physiology and psychology.

In this thesis, I hope to present a unified and conceptual approach towards the understanding of fundamental sensory processes. My approach is primarily from the point of view of a physicist.

Perhaps it is important to realize what this thesis is not about. The thesis is not about the detailed mechanisms of sensory processes. That is, I shall not be concerned with the functional differences between the eyes and the ears. While such a study would obviously be of importance to the understanding of sensory physiology, I shall instead focus on the questions: "What is sensation?" "What is the nature of perception?"

To approach issues of such fundamental nature, we cannot concern ourselves with the details of every sensory organ. Instead, we must strive for a coherent view of the entire sensory system by emphasizing the features common to all organs. One common feature is their ability to receive and transmit information. This idea was first suggested in 1965 (see Norwich, 1983). Thus, perception is linked to information or entropy. Entropy becomes, so to speak, the "fundamental currency" of perception.

Central to the entropic approach is the concept of uncertainty. During the perceptual process, the organism acquires information from the stimulus signal through the process of sampling. This information reduces the "uncertainty" imparted to the receptor at the time of first contact with the stimulus signal.

By conjecture, the associated sensory response is then equated to this uncertainty.

The greater the stimulus uncertainty, the greater the sensory response.

A mathematical theory embodying these principles has been under development for the past 30 years. The entropy theory has been put forth as a theory capable of unifying most if not all of sensory studies involving the three main variables: sensory response, stimulus intensity and duration of stimulation.

One important strength of the sensory entropy theory is its ability to predict the action and response for almost all sensory modalities (vision, audition, gustation, olfaction, muscle, temperature) in all complex organisms.

The reader might be suspicious of such an extravagant claim and be surprised that the sensory system can even be addressed with a single theory. In this age of increasing complexity, science has tended to stray from its original conceptual roots, and to approach nature empirically. There are, for example, many competing approaches which treat the sensory system purely as a non-linear black box of many parameters (e.g. Saglam et al., 1995). While such an approach would model every nuance observed in the biologically measured response, a non-linear black box would probably not give further insight into the concept of sensory perception.

What the entropy theory lacks in terms of fine details, it gains in the larger picture.

In that sense, this approach is similar to the style of art known as Impressionism. For example, several of the Impressionist artists painted with the technique of pointillism. While their works lack realism when viewed closely, a larger picture emanates from the canvas when seen farther back.

As with any other theory, the sensory entropy theory can be stripped of all its interpretation, leaving only a "calculator" of sensory variables. However, just as the equivalence principle is an essential and inseparable feature of general relativity, we suggest that the strength of the entropy theory lies primarily in its interpretational and not mathematical power.

In this thesis, I shall use data obtained from many sensory modalities to test the theory. However, for certain phenomena such as sensory thresholds, complete sets of experiments can only be found for a few modalities like audition and vision.

Nevertheless, it is expected that the entropy equations should also hold for data acquired from other modalities as well.

This thesis is divided into 9 chapters. In Chapter 2, the basic ideas of sensory physiology and psychology are introduced. This is an important chapter because all of the later developments require this background knowledge. Chapter 3 deals with the derivation of the "master equation" within sensory entropy theory. The following Chapters 4-7 deal with the application of this equation to the unification of sensory studies. Chapter 8 reviews some of the new ideas appearing in this thesis, and Chapter 9 concludes the thesis.

Chapter 2

An Overview of Sensory Physiology and Psychology

In this chapter I introduce the basic concepts and ideas of sensory physiology and psychology relevant to the research.

2.1 The Sensory System

The sensory system is a direct link between the organism and the external world. The information communicated from the external environment is mediated by the sensory stimuli detected by the sense organs. For example, vibrations of air molecules are translated into the sounds we hear and the energy of photons forms the images we see.

The transduction process responsible for changing physical stimuli (e.g. sound waves or photons) into the resulting sensory experience is not fully understood.

The sensory system can be categorized roughly into three main regions. The neural periphery is the area where the organism first comes into contact with external stimuli. These external signals are detected by the sensory receptors (e.g. the rods or cones in the eye, the hair cells of the inner ear, etc.). Associated with the receptors are the primary or first-order afferent neurons which respond to the stimuli detected by the receptor. The neurons fire action potentials with a coded frequency whereby stimuli of higher intensities elicit higher firing frequencies, and vice versa.

These neural impulses proceed along the peripheral and central nervous system towards the outer region of the brain known as the sensory cortex. It is believed that this region is responsible for converting the frequency of neural impulses to the resulting cognate experience of sensation (e.g., "I hear a loud sound" or "I see a bright light").

The three regions of the sensory system are illustrated schematically in Fig. 1. The regions are conveniently labelled as: A (neural periphery), B (nervous system) and C (brain).

Responses originating from the neural periphery will be termed the neural response. Responses originating from the conscious apprehension of the stimulus will be termed the psychophysical response. The neural response is typically measured in terms of the frequency of nerve impulses generated by the first-order sensory neuron. The psychophysical response is generally measured by some subjective scale to rate the sensory magnitude of stimuli.

[Figure 1: The three regions of the sensory system. Region A: neural periphery (sensory receptor and sensory nerve fibre); Region B: peripheral and central nervous system (nerve impulses); Region C: brain (sensory cortex).]

It has long been observed that the neural response is directly related to the psychophysical response (e.g. Borg et al., 1967). Greater neural responses are generally associated with greater psychophysical responses, and vice versa.

2.2 The Physiology of the Senses

All of the basic sensory modalities share certain common physiological features.

2.2.1 Neural Adaptation

If the isolated single sensory receptor is excited by stimuli of fixed intensity and prolonged duration, a characteristic response is observed at the associated afferent neuron. The frequency of nerve impulses is greatest near the initial onset of stimulus, falling steadily with prolonged stimulus duration. This phenomenon is termed neural adaptation and is observed nearly universally for all sensory modalities. For example, the data of Adrian and Zotterman (1926) are plotted in Fig. 2. They measured the firing frequency of impulses in a frog muscle spindle under a load of a 1 g weight. Adaptation occurred over the period of 5 to 20 seconds corresponding to the time when the muscle was loaded.

In addition to the effects of adaptation, other features have also been observed in the neural responses of the various sensory modalities.

2.2.1.1 Early Rise in the Adaptation Curve

Upon onset of stimulus, the firing frequency requires some time before reaching the maximum rate. For example, in Fig. 2, we see that approximately 2 seconds were required before the muscle nerve fibre reached its maximum firing rate.

[Figure 2: Adaptation in a frog muscle fibre. A 1 g load was applied to the muscle from time 5 to 20 seconds. Data from Adrian and Zotterman (1926).]

2.2.1.2 Spontaneous Activity

Most sensory neurons actively generate neural signals even in the absence of an external input. In the case illustrated in Fig. 2, we see a significant amount of activity generated by the muscle spindle prior to the loading.

2.2.1.3 De-adaptation

When an input is terminated, the neural response describes a mirror image of the adaptation curve (Dudel, 1986). Typically, the response will fall abruptly at the end of stimulation, and gradually shows a monotonic rise towards the spontaneous rate. De-adaptation is observed from time 20 seconds and onwards in Fig. 2.

2.2.2 Driven Neural Response

Complementary to adaptation is the effect of the driven neural response. How does the receptor/neuron complex respond to stimuli of fixed duration and differing intensities? When we observe the data of Smith in Fig. 3, we notice that the firing frequency increases with increasing stimulus magnitude. These data were recorded in the auditory fibers of the Mongolian Gerbil ear and the stimulus is a pure tone of duration 1 ms. Near the high intensities, the neural response saturates because the fibres are incapable of responding at any greater firing frequencies. At the very low intensities, the response approaches the level of spontaneous activity. Similar characteristics have been observed with other investigations as well (e.g. compare Galambos and Davis, 1943).

In summary, we see that the neural response is primarily dependent on the two variables, stimulus intensity and stimulus duration. For fixed intensities, increasing the duration will decrease the firing rate. At fixed durations, increasing the intensity will increase the firing rate. This result is summarized by an idealized three-dimensional plot of intensity and duration versus response in Fig. 4.

2.2.3 Empirical Laws Governing the Neural Response

Many empirical observations and equations have been formulated to account for the experimentally observed neural response below saturation levels.

In terms of the driven neural response (cf. Fig. 3), many investigators (e.g. Stevens, 1970; Schmidt, 1986) have observed that the firing rate F is approximately a power function of the stimulus intensity I:

F = C1 I^n ,    (1)

where C1 is a constant and n is some constant greater than zero. Other investigators (e.g. Hartline and Graham, 1932) have found that the firing rate obeys a logarithmic law of the form

F = C2 log I + C3 ,    (2)

where C2 and C3 are constants.

[Figure 3: The driven neural response of an auditory fibre to sound stimulation. Data were recorded from the inner hair cell of a Mongolian Gerbil. From Smith (1988).]

[Figure 4: Three-dimensional plot showing an idealized neural response to both stimulus intensity and duration: F = F(I, t).]

For the adaptation response, most investigators tend to use a mono- or double-exponential function (e.g. Bohnenberger, 1981). For example, a mono-exponential function would take on the form

F = C4 e^(-t/a) ,    (3)

where a serves as the adaptation time constant.

It is important to remember that Eqs. (1), (2) and (3) are all empirical. They were all obtained in a purely ad hoc fashion for the purpose of fitting experimental data.
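Since the empirical laws above are simple closed forms, their qualitative behaviour is easy to check numerically. The following sketch uses purely illustrative constants (not fitted to any of the data sets cited above) to verify that the power and logarithmic laws rise monotonically with intensity, while the mono-exponential adaptation law decays with stimulus duration:

```python
import math

# Illustrative constants only; Eqs. (1)-(3) are empirical fits,
# so C1, C2, C3, C4, n and a must be estimated per experiment.
C1, n = 10.0, 0.3        # power law:        F = C1 * I**n
C2, C3 = 15.0, 5.0       # logarithmic law:  F = C2*log(I) + C3
C4, a = 80.0, 2.0        # mono-exponential adaptation, time constant a

def firing_rate_power(I):
    return C1 * I**n

def firing_rate_log(I):
    return C2 * math.log(I) + C3

def firing_rate_adaptation(t):
    # firing rate decays with duration t, with time constant a
    return C4 * math.exp(-t / a)

# Driven response: rate grows monotonically with intensity
assert firing_rate_power(100.0) > firing_rate_power(10.0)
assert firing_rate_log(100.0) > firing_rate_log(10.0)
# Adaptation: rate falls monotonically with stimulus duration
assert firing_rate_adaptation(5.0) < firing_rate_adaptation(0.5)
```

The constants carry no physiological meaning here; only the monotonic trends, which are the content of the empirical laws, are exercised.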

2.3 The Psychology of the Senses

From the neural periphery, we now turn our attention towards the psychophysical response (region C in Fig. 1). Ever since the discovery of the nerve impulse, it has been postulated that the psychophysical response must be, in some way, derived from the neural response. We now examine a number of properties for which the psychophysical response seems to mirror the neural response.

2.3.1 Psychophysical Adaptation

The psychophysical analogue of neural adaptation is something we all experience on a daily basis. Any pungent odour that catches one's attention will gradually fade from the sensorium with prolonged stimulation. For example, the smell of paint is greatest when we first enter a newly painted room.

In Fig. 5, the data of Gent and McBurney for psychogustatory adaptation are plotted. Saltiness was measured by the investigators (using a subjective scale) for varying durations of stimulation. The stimulus was a salted filter paper placed on the human tongue. Once again, we notice that sensory response is a monotonic decreasing function of stimulus duration. This same effect has been observed for the other sensory modalities as well.

One very marked difference between the psychophysical and neural response is the time course over which adaptation proceeds. For certain modalities like vision or audition, neural adaptation is largely complete by about 100 ms, whereas psychophysical adaptation may occur over a length of several minutes. The reason for the difference in time scales for the two processes is still a mystery to researchers. Although both responses tend to be similar, it is not at all clear how the psychophysical response is derived from the neural response.

2.3.2 Magnitude Estimation and the Law of Sensation

Corresponding to the driven neural response is the law of sensation. The law of sensation states that psychophysical sensation increases as a monotonic function of increasing stimulus intensity (with duration held constant). Depending on how the subjective scale is chosen, the data may be fitted with either a power law championed by Stevens (1936, 1961),

Ψ = C5 I^n ,    (4)

where Ψ is the subject's sensory response or magnitude estimation corresponding to a stimulus of intensity I, or may follow a logarithmic law as proposed by Fechner (1860/1966),

Ψ = C6 log I + C7 ,    (5)

in direct analogy with the two empirical neural Eqs. (1) and (2). Indeed, since many investigators made the identification of Ψ with F, the Stevens' and Fechnerian psychophysical laws provided the model for both neural Eqs. (1) and (2).

[Figure 5: Psychogustatory adaptation recorded for a NaCl stimulus. Data from Gent and McBurney (1978).]

While the logarithmic law was originally proposed by Fechner, Stevens was not the first person to suggest the power law. The honour goes to a Belgian physicist named J. A. F. Plateau who worked in the nineteenth century. Interestingly, Fechner was also a physicist, working originally on voltaic cells in the mid-1800's. Fechner, who coined the term psychophysik, is now known as the father of psychophysics.

The first person to apply the power law to psychoacoustics was the physicist H. Fletcher (Fletcher and Munson, 1933). Before commencing his studies on human hearing at the AT&T Bell Labs, Fletcher was an assistant to Robert Millikan in the famous oil drop experiments.

Much of the data from experiments on magnitude estimation do not conform accurately to either the log or the power law. Consider for example the data for NaCl of Stevens (1969) plotted in Fig. 6. Neither function will fit the data properly over the entire range. We see that there is a tendency for the lower data points to conform to a power function, while the higher points conform more closely to a logarithmic function. This tendency has been observed with quite a few other experiments and would tend to suggest that there is another more complete psychophysical law encompassing both Eqs. (4) and (5) as the limiting cases for both small and large stimulus intensities.

2.3.3 Threshold Phenomena

All of the psychophysical phenomena we have considered thus far have had clear neural analogues. That is, psychophysical adaptation is the analogue of neural adaptation, and the law of sensation is the analogue of the driven neural response. We now turn our attention to another class of phenomena which does not have, or does not appear to have, a clear neural analogue.

2.3.3.1 Absolute Threshold

[Figure 6: Magnitude estimation for NaCl solution. Data from Stevens (1969). The open circles correspond to data plotted with the logarithmic left axis and the filled circles correspond to the linear right axis.]

In his book What is Life?, Schrödinger speculates on the reasons why the brain, with a "sensorial system" attached to it, is not sensitive to single atoms. Since the brain is capable of very orderly and coherent thought processes, he reasons that the physical stimuli (atoms, molecules, etc.) which account for the perceptual processes must also obey strict physical laws. For, if the random "atomistic" fluctuations were perceived directly, the information transmitted to the brain would not result in any orderly percept or thought process. However, since a large ensemble of atoms will always obey statistical laws, he postulates that the only way the brain is capable of maintaining well-ordered thought processes is by perceiving only stimuli of sufficient intensity which will obey strict physical laws to a high degree of accuracy.

We see that Schrödinger is alluding to the idea of a sensory threshold below which no stimulus is detectable. We term this the absolute detection threshold. The absolute threshold is always defined with some statistical criterion in mind because the sensory system itself is subject to fluctuations on a moment to moment basis. Hence, we may define the absolute threshold to be the stimulus intensity which is detectable, say, 50% of the time. Obviously, a different criterion will change the absolute threshold. A 90% criterion will give a much higher threshold than a 75% criterion.

Here is one example illustrating the absolute threshold. In 1942, Hecht, Pirenne and Schlaer determined that rod photoreceptors are sensitive to single photons in the dark. However, a human subject requires on average 10-20 photons before he or she can detect the light stimulus 60% of the time. Thus, a neural response does not necessarily imply a psychophysical response.

It is apparent that threshold intensity and stimulus duration must obey some sort of reciprocal relationship. For example, a weaker intensity may be detectable provided that it is on for a longer period of time. Thus, Bloch, in 1885, formulated his celebrated empirical law relating the absolute threshold intensity, ΔI, and the stimulus duration, Δt:

ΔI Δt = C8 ,    (6)

where C8 is some constant independent of intensity.

However, it soon became apparent that Eq. (6) could not account for all the measured data. In audition, Garner (1947) proposed a variant of Bloch's law,

ΔI (Δt)^B = C9 ,    (7)

where B is some positive exponent. For substantially lower values of ΔI, Blondel and Rey independently proposed an alternative equation for the visual threshold,

ΔI = ΔI∞ (1 + a/Δt) ,    (8)

where a is called the Blondel-Rey constant. ΔI∞ is the lowest possible threshold given a fixed statistical criterion. That is, in the limit where the duration of stimulation becomes very long (Δt → ∞), ΔI approaches ΔI∞. An identical equation was also proposed for audition by Hughes (1946).

Eq. (8) has the additional benefit that, for short durations (Δt ≪ a), it approximates Bloch's law,

ΔI Δt ≈ a ΔI∞ .    (9)

Later researchers were eager to refine the Blondel-Rey law because they had found that the data did not conform too closely to Eq. (8). Instead, it was proposed that

ΔI = ΔI∞ / (1 − e^(−n Δt))    (10)

be used as an alternative expression for the absolute threshold. The constant n serves much the same rôle as the inverse Blondel-Rey constant 1/a. Eq. (10) also reduces to Bloch's law in the limit of small Δt.
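The limiting behaviour of the Blondel-Rey law and its exponential refinement can be checked numerically. The sketch below uses illustrative values for ΔI∞, a and n (not values from any cited experiment) and confirms that both laws collapse onto Bloch's law for short durations, and that the threshold approaches ΔI∞ for long durations:

```python
import math

dI_inf = 1.0    # lowest attainable threshold (illustrative units)
a = 0.1         # Blondel-Rey constant, in seconds (illustrative)
n = 1.0 / a     # exponential-form constant, playing the role of 1/a

def blondel_rey(dt):
    # Blondel-Rey law: dI = dI_inf * (1 + a/dt)
    return dI_inf * (1.0 + a / dt)

def exponential_form(dt):
    # refined law: dI = dI_inf / (1 - exp(-n*dt))
    return dI_inf / (1.0 - math.exp(-n * dt))

# For dt << a, both laws approach Bloch's law: dI * dt ~ constant
for dt in (1e-4, 1e-5):
    assert abs(blondel_rey(dt) * dt - a * dI_inf) / (a * dI_inf) < 0.01
    assert abs(exponential_form(dt) * dt - dI_inf / n) / (dI_inf / n) < 0.01

# For very long durations, the threshold approaches dI_inf
assert abs(blondel_rey(1e3) - dI_inf) < 1e-3
assert abs(exponential_form(1e3) - dI_inf) < 1e-3
```

Note that the short-duration Bloch constant comes out as a·ΔI∞ for the Blondel-Rey form and ΔI∞/n for the exponential form; with n = 1/a these coincide, which is the sense in which n replaces 1/a.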

2.3.3.2 Differential Threshold

So far, we have considered thresholds for absolute detectability. When we are at stimulus intensities far above threshold, our experience tells us that we perceive in a continuum. However, is this a real or apparent continuum?

Since we can never judge the difference in intensity between two stimuli with arbitrary accuracy, there must also be a threshold associated with the process of differentiation. We call this minimum perceptible difference the differential threshold ΔI.

To give an example, we might wish to measure the differential threshold of heaviness for a human subject. Given an object of, say, 100 g, how much must the weight be changed before a just noticeable difference (jnd) in sensation is detected, say, 75% of the time?

Such experiments were carried forth by investigators as far back as the 1830's. E. H. Weber, a physiologist, postulated that if a stimulus of intensity I is presented to a subject, the fractional change in intensity to provide a jnd (just noticeable difference) in sensation is constant. That is,

ΔI/I = C10 ,    (11)

which is known as Weber's law. The expression ΔI/I is called the Weber fraction. The Weber fraction is the differential threshold divided by the base intensity.

From his work, the physicist Fechner was astute enough to make the following association. Denoting the jnd in sensation as ΔΨ, Fechner assumed that

ΔΨ = constant .    (12)

That is, the jnd in sensation is independent of the magnitude of the intensity. Combining the two equations, Fechner obtained

ΔΨ = c (ΔI/I) ,    (13)

with c a constant, which he then integrated by replacing the finite differences with differential quantities to obtain his famous logarithmic psychophysical law (cf. Eq. (5)),

Ψ = c log I + constant .    (14)

This technique is now called Fechnerian integration.
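Fechnerian integration can also be mimicked discretely: step the intensity up by one Weber fraction at a time, counting one jnd of sensation per step. The short sketch below (with an illustrative Weber fraction, not a measured one) shows that the accumulated sensation then grows as the logarithm of intensity, which is the content of the logarithmic law:

```python
import math

w = 0.1       # illustrative Weber fraction: dI/I = w at every base intensity
dPsi = 1.0    # one jnd of sensation gained per discriminable step

# Discrete Fechnerian integration: starting from I0, each step
# multiplies the intensity by (1 + w) and adds one jnd of sensation.
I0 = 1.0
I, Psi = I0, 0.0
samples = []
for _ in range(50):
    I *= (1.0 + w)
    Psi += dPsi
    samples.append((I, Psi))

# The accumulated sensation tracks the logarithm of intensity:
# Psi = dPsi * ln(I/I0) / ln(1 + w), a discrete form of Fechner's law.
for I, Psi in samples:
    predicted = dPsi * math.log(I / I0) / math.log(1.0 + w)
    assert abs(Psi - predicted) < 1e-9
```

Replacing the finite step (1 + w) by an infinitesimal one recovers the continuous integration that Fechner performed.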

Returning to Eq. (11), we see that Weber's law predicts a horizontal line when ΔI/I is plotted against I. However, later investigation revealed that the experimentally obtained data tended to deviate substantially from Weber's law, particularly for the lower intensities. Typically the Weber fraction would rise substantially for decreasing values of I.

König and Brodhun (1889) for vision, and Knudsen (1923) and Riesz (1928) for audition, obtained results which tended to conform to the equation

ΔI/I = C10 + C11/I^n ,    (15)

where, for large values of I, the Weber fraction approaches a constant as predicted by Weber's law.

Fig. 7 shows Riesz's empirical fits to his own measurements. He performed measurements at several frequencies of sound.

In the limit where I → 0, we see that the differential threshold equals the absolute threshold. For a fixed statistical criterion, we know that the absolute threshold is constant. Thus,

ΔI = C11 .    (16)

Dividing both sides by I we obtain

log(ΔI/I) = −log(I) + log(C11) .    (17)

That is, the Weber fraction data when plotted in full logarithmic coordinates (log(ΔI/I) vs. log(I)) must fall with slope −1 for I → 0. This relationship was observed experimentally by Miller (1947) for audition.
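The slope of −1 follows directly from holding ΔI fixed at the absolute threshold, as a quick numerical check (with an arbitrary illustrative threshold value) confirms:

```python
import math

dI_abs = 0.5   # constant absolute threshold as I -> 0 (illustrative)

def log_weber_fraction(I):
    # with dI fixed, log(dI/I) = -log(I) + log(dI)
    return math.log10(dI_abs / I)

# Slope in full logarithmic coordinates: d log(dI/I) / d log(I)
I1, I2 = 1e-4, 1e-3
slope = (log_weber_fraction(I2) - log_weber_fraction(I1)) / \
        (math.log10(I2) - math.log10(I1))
assert abs(slope - (-1.0)) < 1e-9
```

Any constant value of the threshold gives exactly the same slope; only the vertical intercept, log(ΔI), depends on the statistical criterion chosen.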

[Figure 7: The data of Riesz (1928) showing Weber fraction versus sound intensity across different frequencies.]

There are many ways to measure the Weber fraction (see Cornsweet and Pinsker, 1964; Viemeister, 1988). One method is shown in Fig. 8. This method requires the subject to detect an increment (or decrement) over a pedestal of intensity I. Usually the pedestal is of a length much longer than the increment. Typically, a subject is given two intervals with the pedestal. However, only one of the two intervals contains the increment. The subject tries to identify the interval in which the increment was present. This method of measuring the Weber fraction is known as the continuous method.

As with the absolute threshold, one might expect that the differential threshold would also vary with the duration of the increment intensity. For example, returning to the continuous method in Fig. 8, how would the differential threshold change if we varied Δt? One might expect that the threshold decreases with increasing Δt in a manner similar to the absolute threshold. For example, shown in Fig. 9 are two curves measured by Garner and Miller (1947) showing how the Weber fraction (or differential threshold) varies with increment duration at two different intensities. The curves fall monotonically in agreement with the various empirical laws for the absolute threshold.

2.3.4 Simple Reaction Time

[Figure 8: Stimulus input for the continuous increment.]

[Figure 9: Weber fraction versus increment duration for audition as measured by Garner and Miller (1947). The open circles are at 40 dB SL and the filled circles at 70 dB SL.]

All of the threshold phenomena considered in the last section showed a reciprocal relationship between stimulus duration and intensity. We now consider an entirely different class of sensory phenomena which also exhibits the same relationship.

Simple reaction time may be defined as the duration of time that elapses between stimulus onset and the time of first motor response to that stimulus. For example, a subject might be required to press a button immediately upon detecting a visual stimulus. From experience, one knows that a stronger stimulus elicits a faster reaction time, and a weaker stimulus elicits a slower reaction time.

Whereas the threshold values of ΔI considered in the last section are always small compared to the full physiological range of sensation (for example, the differential threshold, ΔI, may be less than 1 dB for hearing, compared to the full range of human hearing, which spans 120+ dB), the values of ΔI for simple reaction are nominal (e.g. see Fig. 10, x-axis).

In Fig. 10, the auditory data of Chocholle (1940) illustrate the relationship between ΔI, the stimulus intensity, and Δt_r, the reaction time. The curve shows a monotonic fall of reaction time with increasing intensity. Typically, Δt_r does not fall below some minimum which is called, appropriately, Δt_r,min.

In 1945, the eminent physiologist Piéron discovered that most if not all of the simple reaction data measured from the various sensory modalities could be fitted with the single empirical equation

Δt_r = Δt_r,min + b / ΔIⁿ ,        (18)

where the exponent n is in approximate agreement with the value obtained from Eq. (15) for the same sensory modality. Along with the congruence of exponents, the similarity in form of these two equations is striking (e.g. see Ward and Davidson, 1993).

Figure 10: Simple reaction time to a 1000 Hz auditory tone as measured by Chocholle (1940).
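Piéron's empirical law, Δt_r = Δt_r,min + b/ΔIⁿ, is easy to evaluate numerically. The sketch below uses hypothetical parameter values (rt_min, b and n are illustrative choices, not values fitted to Chocholle's data):

```python
def pieron_reaction_time(delta_i, rt_min, b, n):
    """Piéron's law: reaction time falls with intensity toward rt_min.

    rt_min, b and n are illustrative parameters, not fitted to any data.
    """
    return rt_min + b / delta_i ** n

# Reaction time falls monotonically with intensity and levels off at rt_min.
rts = [pieron_reaction_time(i, rt_min=0.10, b=0.5, n=1.0)
       for i in (1.0, 10.0, 100.0, 1000.0)]
```

The hyperbolic term b/ΔIⁿ dominates at weak intensities, while the floor rt_min dominates at strong intensities, reproducing the monotonic fall seen in Fig. 10.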

It should be noted that we have adopted a different symbol for the stimulus duration in this section. There is a very simple reason for this change. Δt_r is a reaction time which combines both a sensory and a motor response time. It might be helpful to think of the detection time, Δt, as being purely the time required for the sensory system to process the signal, and Δt_m as purely the time required for the subject to make any motor response. Thus, it is hypothesized that

Δt_r = Δt + Δt_m ,        (19)

an idea which was discussed by Halpern (1986) (see also Norwich, 1993). We must be mindful of the differences between Δt and Δt_r as we proceed with the theoretical development.

2.4 A Final Note

This section is a brief digression on the sociology (and not the science) of sensory perception.

From the concept of psychophysical response alone, we can easily see that all experiments involving a psychophysical element must invariably be performed on human subjects (however, see Marks, 1974, "Animal psychophysics").

On the other hand, all data obtained from the neural periphery are from non-human subjects. Because of the restrictions on human experimentation, few sensorineural experiments have ever been performed on human subjects.

Quite apart from the ethical considerations of whether or not such research should be carried out on human subjects, it is important to realize that a luxury not available to us is a complete set of sensory data (both neural and psychophysical) from a single organism. For example, there is no chance to compare, say, neural sensory parameters with psychophysical parameters.

Nevertheless, we shall try our best to infer the missing experiments from the results currently available to us.

2.5 Conclusions

This chapter has been quite long and involved. However, a proper understanding of this chapter will give the reader a better appreciation of the developments to follow.

Highlighted here are some of the main points of this chapter:

• The neural response, F, originates from Region A (cf. Fig. 1).

• The psychophysical response, Ψ, originates from Region C.

• The neural response can be measured in terms of the firing rate of nerve impulses from the primary afferent neurons associated with the sensory receptor or detector.

• The psychophysical response is measured in terms of a subjective scale. The psychophysical response, to a large extent, mirrors the neural response, although it is not clear how one is derived from the other.

• Sensory response decreases with adaptation.

• Sensory response increases with increasing stimulus intensities.

• The absolute threshold measures the minimum perceptible intensity.

• The differential threshold measures the minimum increment intensity before a change in sensation is perceptible.

• Simple reaction time measures the time required to make a motor response given a sensory cue.

• Sensory processing time is different from reaction time.

• There are many common properties shared among the basic sensory modalities.

• The sensory sciences are replete with empirical equations.

Chapter 3 The Sensory Entropy Theory

In the last chapter, many of the empirical findings and observations of sensory perception were discussed. No single theory has ever been successful at unifying even a few empirical results for a single modality. However, the sensory entropic approach, under development for the past 30 years, has been put forth as a theory capable of unifying most of the basic sensory observations within a common mathematical and philosophical framework.

The development of the Sensory Entropy Theory can be divided into two phases: the classical theory, which is the more restricted forerunner, and the universal theory, which is more general and is capable of even greater predictive power, although at the expense of greater mathematical complexity.

The task of this chapter is to derive an entropy equation which then serves as a master equation or "equation of state". Having obtained the master equation, it is then a relatively simple task to manipulate this equation in various ways to derive most if not all of the empirical results presented in the last chapter. The application and manipulation of the entropy equation is the content of the following chapters.

This chapter is divided into two sections. The first part deals with deriving the classical entropy equation. No attempt is made at being rigorous. Instead, I present a heuristic derivation, concentrating on what makes "sense". That way, the reader can obtain a better feel for the entropy equation in its simpler form.

The second part deals with a full derivation of the universal entropy equation. At the end of this chapter, we will drop the classical equation and keep only the universal equation for the remaining chapters.

3.1 Classical Entropy Theory (pre-1993)

In the previous chapter, we designated the psychophysical response by Ψ. When we consider hearing, for example, we know that Ψ must be approximately a logarithmic function of intensity because a 40 dB tone sounds roughly twice as loud as a 20 dB tone. Thus,

Ψ ≈ C₁ log I + C₂ .

Separating these constants gives

Ψ = K′ ln (β′ Iⁿ) ,

where K′, β′ and n are all positive constants.

Recall that Ψ decreases with increasing stimulus duration. Thus we might have

Ψ = K′ ln (β′ Iⁿ / t) ,        (23)

although the units of β′ must be changed to reflect the change in dimensionality. As t → ∞, all sensation should vanish due to adaptation. Instead, Eq. (23) diverges. Hence we might modify this equation to obtain

Ψ = K′ ln (1 + β′ Iⁿ / t) ,        (24)

so that as t → ∞, Ψ → 0.

Assuming that the psychophysical response exactly mirrors the neural response (except for the difference in time scales), we can now write

F = K ln (1 + β Iⁿ / t) ,        (25)

where K is a new constant and β < β′, to account for the time dilation as we proceed from neural to psychophysical response.

Although the connection to entropy or information theory is obscure in this heuristic derivation of Eq. (25), we shall elaborate fully on the entropic heritage in the derivation of the universal entropy equation.

Eq. (25), the equation for neural response, was first derived by Norwich in 1977. This equation was later extended to work at the psychophysical level. That is, Eq. (25) historically preceded Eq. (24). A modification of his original derivation from 1977 will be presented in Section 3.2.

3.1.1 Properties of the Classical Entropy Equation

As mentioned earlier, the entropy Eqs. (24) and (25) have enjoyed some success at conferring a degree of unification in the sensory sciences. The ability of the entropy equations to integrate some twenty or more sensory empirical equations, phenomena, etc., is well documented in a recent monograph (Norwich, 1993).

A quick check of Eqs. (24) and (25) yields the following results.

If I is held constant, increasing the stimulus duration will produce results consistent with adaptation (although, in this ad hoc derivation, we "built in" the effects of adaptation, the original derivation did not contain such arbitrary measures). Please see Norwich and McConville (1991).

When the stimulus duration is held constant, we can set γ = β′/t to obtain

Ψ = K′ ln (1 + γ Iⁿ) .        (26)

When γIⁿ ≪ 1 (the stimulus is weak), taking a first-order series expansion, we obtain

Ψ ≈ K′ γ Iⁿ ,        (27)

in agreement with the power law of sensation (cf. Eq. (4)). When γIⁿ ≫ 1 (the stimulus is strong), we have, approximately,

Ψ = C₂₂ log I + C₂₃ ,        (28)

in accordance with Fechner's logarithmic law. A similar process with Eq. (25) will yield Eqs. (1) and (2) for the neural response.
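The two limiting behaviours of Eq. (26) can be checked numerically. A minimal sketch, with illustrative parameter values of my own choosing:

```python
import math

# Illustrative check of the classical entropy function for fixed duration,
# Psi = K * ln(1 + gamma * I**n), Eq. (26). Parameter values are arbitrary.
K, gamma, n = 1.0, 0.10, 1.5

def psi(i):
    return K * math.log(1.0 + gamma * i ** n)

i_weak, i_strong = 1e-3, 1e6

# Weak-stimulus limit: Psi ~ K * gamma * I**n  (power law of sensation)
power_law = K * gamma * i_weak ** n

# Strong-stimulus limit: Psi ~ K * (ln(gamma) + n * ln(I))  (Fechner's log law)
log_law = K * (math.log(gamma) + n * math.log(i_strong))
```

For weak stimuli the series expansion ln(1 + x) ≈ x recovers the power law, while for strong stimuli the 1 inside the logarithm becomes negligible and the logarithmic law emerges.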

We are now in a position to understand why neither the power nor the log law could fit the data in Fig. 6 over the entire range. Since Eq. (26) is the more "complete" law, embracing the two other psychophysical laws, we can expect a better fit of the data over the entire range, as demonstrated in Fig. 11. Parameter values of K′ = 41, γ = 0.10 and n = 1.5 were used.

The method of curve-fitting used is detailed in Appendix G.

If we reverse the process of "Fechnerian integration" (cf. Section 2.3.3.2), i.e., beginning with the psychophysical law, we differentiate it with respect to intensity and replace the differential quantities with their finite differences, we obtain

ΔΨ = K′ n γ Iⁿ⁻¹ ΔI / (1 + γ Iⁿ) .        (29)

Using Fechner's assumption of the constancy of the jnd (ΔΨ = C₁₁, see Eq. (12)), we can rewrite Eq. (29) as

ΔI/I = (C₁₁ / K′n) (1 + γ Iⁿ) / (γ Iⁿ) ,        (30)

in agreement with the empirical Weber fraction equation obtained by Knudsen and Riesz (see Eq. (15)).

Figure 11: Same data as Fig. 6 (Stevens, 1969) fitted with the classical entropy Eq. (26).

Furthermore, the laws of Bloch, Piéron and Blondel-Rey can all be derived from Eq. (25), albeit with greater effort. It suffices to mention here that the entropy equation has been quite successful at unifying many empirical results.

3.1.2 Failures of the Classical Theory

However, the classical entropy theory also had its shortcomings. We now document some of these shortcomings and failures.

The classical theory only works with stimuli of constant intensity. It cannot predict the sensory response given a time-varying input.

When we examine the limit of long stimulus duration in Eq. (25), we see that F adapts to completion (F = 0). However, a quick glance at Fig. 2, or at results obtained from any other sensory modality (e.g. Matthews), fails to confirm this prediction.

Furthermore, in Fig. 2, we see neural activity even in the absence of external input. However, if we set I = 0 in Eq. (25), we find that F = 0.

The classical entropy equation is incapable of predicting the rising portion of the adaptation curve, or the effect of de-adaptation as observed in Fig. 2.

Recall that, experimentally, it has been observed that the differential threshold or Weber fraction decreases for increasing stimulus durations (see Section 2.3.3.2). In Eq. (29), recall that γ = β′/t. Inserting this expression into Eq. (29) shows that the differential threshold increases for increasing stimulus duration. In addition, the classical entropy theory fails to predict (or fails to predict well) many of the empirical results of threshold and simple reaction time phenomena.

At this point, it was evident that we required a more general theory which could address many of the shortcomings of the classical theory.

3.2 Universal Entropy Theory (1993-)

In this section, we shall derive the central result of the Universal Theory: the generalized entropy equation. This equation is capable of predicting the neural response, F, given any time-varying input I. With a few modifications, we are following the derivation of the original (classical) entropy equation as put forth by Norwich in 1977. Most of the results of this section have been published in a recent publication (Norwich and Wong, 1995).

3.2.1 Deriving the Universal Entropy Equation

Fluctuations may reflect themselves as a stationary time series. Fluctuations generate uncertainty in the mean of the series.

At the neural level, microscopic fluctuations in stimulus intensity (e.g. density fluctuations) generate uncertainty in the mean level of stimulation. Since the organism is interested in the mean intensity (cf. Schrödinger, 1989), we might attribute a degree of "uncertainty" or entropy to the sensory receptors. The receptors are regarded as reporting their state of uncertainty to the brain. It is hypothesized that the uncertainty is encoded by the firing rate of the sensory neurons.

We regard the sensory receptors as sampling the stimulus environment. Successive sampling of the stimulus produces a time series of intensity values. For steady input, a stationary time series of sample values with stimulus variance, σ_s², and mean, μ_s, is obtained. This variance then characterizes the receptor's uncertainty in the mean stimulus intensity. Any expression of the entropy should be a monotonically increasing function of the variance. That is, greater stimulus variance gives rise to greater uncertainty or entropy.

We model the uncertainty mathematically using the Boltzmann-Shannon measure of entropy (Shannon, 1948),

H = − ∫ p(x) ln p(x) dx ,        (31)

where H is the uncertainty or entropy and p(x) dx is the probability for the receptor to sample an intensity value between x and x + dx. Following Brillouin (1962), information is regarded as negentropy. That is, reduction in uncertainty will generate information.

The use of the term "entropy" might suggest to the reader that H is a measure of physical or thermodynamic entropy. While Eq. (31) certainly resembles Boltzmann's H-function, no direct relationship between informational and physical entropy was implied by Shannon. Similarly, our use of the term "entropy" within the sensory domain does not necessarily imply a connection to entropy as used in physics, although in certain restricted cases, sensorial and thermodynamical entropy have been shown to be equivalent (see Wong, 1993).

Defining some useful terminology, we take each sample to consist of many separate samplings. Together, m samplings create a sample of size m. The mean of each sample will tend to be normally distributed in accordance with the central limit theorem. This property is, of course, independent of the original stimulus distribution (be it uniform, Poisson, etc.). If x̄ is a random variable representing the mean of a sample of size m, then its density function p(x̄) dx̄ is given by a normal distribution with mean μ_s and variance σ_s²/m:

p(x̄) = (m / 2πσ_s²)^{1/2} exp[ −m (x̄ − μ_s)² / (2σ_s²) ] ,        (32)

where m is the sample size.

Using Eq. (31), we can now obtain a simple expression for the entropy of mean stimulus values. Substituting for p(x) from Eq. (32), we obtain

H = ½ ln (2πe σ_s² / m) .        (33)

For increasing sample sizes, entropy decreases monotonically, suggesting that the receptors are losing uncertainty in the mean intensity.

Transmission of sensory information is always limited by the presence of other signals or noise. We assume that these signals have a normal or Gaussian spectrum of intensities; they are termed the reference input. Since the receptor cannot distinguish between the stimulus and the reference signal, the sensory input, as sampled by the receptor, is actually a convolution of two signals. Denoting the mean and variance of the reference distribution by μ_R and σ_R² respectively, the convolution of the two normal distributions is another normal distribution (e.g. Fraser, 1976) with mean μ_s + μ_R and variance σ_s²/m + σ_R². Hence, the entropy of both signal and reference is

H = ½ ln [ 2πe (σ_s²/m + σ_R²) ] .        (34)

Subtracting off the entropy of the reference input alone, ½ ln(2πe σ_R²), we obtain

H = ½ ln [ 1 + σ_s² / (m σ_R²) ] .        (35)

This equation then gives the information which is transmitted to the brain by the sensory receptor regarding its uncertainty in the mean intensity when m samplings of the stimulus have been made.

As it stands, Eq. (35) is a complete equation which governs the neural response.

However, three additional postulates are required before this equation can be used to analyze experimental results.

In most physical systems, the variance is related to the mean. For example, statistical mechanics predicts that the fluctuation in the density of a dilute, classical gas has variance equal to mean density. Here, we assume that the variance can be related to the mean by the relationship

σ_s² = ε (I + δI)^p ,        (36)

where ε is a constant of proportionality and p is a parameter of the physical system characterizing the magnitude of fluctuations at the receptor level; p is a positive constant. Eq. (36) states that larger fluctuations are associated with larger quantities.

The term δI may be any naturally occurring internal signal within the sensory system (e.g. spontaneous otoacoustical emissions in the ear, thermal noise, etc.). As a consequence of δI, fluctuations are recorded by the receptor even in the absence of an external signal.

δI is an important parameter within the universal theory. More will be discussed in the later chapters. However, we do make the passing remark that the magnitude of δI is greater than the absolute threshold, ΔI. No theoretical reason is given for this remark except that this is what is usually observed when the equations are fitted to data.

Taking β′ = ε/σ_R², a constant, we can now write

H = ½ ln [ 1 + β′ (I + δI)^p / m ] .        (37)

Recall that m is the sample size. In order that the receptor can process information from the stimulus samples, it must have the ability to store the samples locally. Hence, m is also a measure of local receptor memory. At stimulus onset, we assume that the receptor progressively increases the sample size over time. Hence m increases, thereby reducing the stimulus uncertainty as quantified by H. If m were to increase indefinitely, the receptor would require infinite memory. Hence, it is reasonable to assume that m will grow to a saturation level, say m_eq, after which the sample size will no longer increase.

If theory is to match the experimental results, we have found it necessary to take m_eq as a monotonically increasing function of intensity of the form

m_eq = m°_eq (I + δI)^q ,        (38)

where 0 < q < 1 and m°_eq is a constant to give the correct units for m_eq.

Thus, m_eq is the required memory to store samples at intensity I.

In audition and vision, where I may change by several orders of magnitude (e.g. 10¹²⁺ for audition), q is necessary to keep the memory change within reasonable limits. Notice that m_eq is dependent on both I and δI.

While Eq. (38) appears to be an arbitrary assumption, in Appendix A I present a physical model which may account for this power dependence.

When the input intensity changes, so too will m_eq. How does m approach the new m_eq value? Since it is assumed that memory changes continuously, m will approach m_eq smoothly. Let us say that the rate of change of m is a function, f, of the difference between m and m_eq,

dm/dt = f(m − m_eq) .        (39)

Expanding f in a Taylor series and neglecting terms beyond the first order, we obtain

dm/dt = a (m_eq − m) ,        (40)

where f(0) = 0 to allow for steady state when m = m_eq. We set a = −f′(0), where the parameter a, which has the dimensions of inverse time, is expected to be always greater than zero.

While this derivation is suitable only for small differences, our experience with fitting experimental data has shown that Eq. (40) tends to work quite well under all conditions.

A physical model of Eq. (40) is given in Norwich and Wong (1995, Assembly-line Model). A possible electrochemical explanation of Eq. (40) might also be found in the work of Schroeder and Hall (1974).

Eq. (40) can be solved easily if m_eq is independent of time. However, if I is a function of time (say, a stimulus with a slowly varying intensity profile), m_eq will indirectly be a function of time as well. Under these circumstances, the solution becomes

m(t) = m(t′) e^{−a(t−t′)} + a e^{−at} ∫_{t′}^{t} e^{aτ} m_eq(τ) dτ ,        (41)

where the initial condition is evaluated at time t = t′. In principle, we are able to account for any time-varying sensory input.
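For a constant m_eq, Eq. (41) reduces to a simple exponential relaxation, which can be checked against a direct numerical integration of Eq. (40). A sketch with arbitrary parameter values (a, m_eq and the initial memory m0 are illustrative choices):

```python
import math

a, m_eq, m0 = 5.0, 4.0, 1.0       # arbitrary illustrative values

def m_exact(t):
    """Eq. (41) for constant m_eq: exponential relaxation toward m_eq."""
    return m_eq + (m0 - m_eq) * math.exp(-a * t)

def m_euler(t_end, dt=1e-4):
    """Forward-Euler integration of Eq. (40), dm/dt = a*(m_eq - m)."""
    m, t = m0, 0.0
    while t < t_end:
        m += a * (m_eq - m) * dt
        t += dt
    return m

numeric = m_euler(1.0)
analytic = m_exact(1.0)
```

The same Euler loop works unchanged when m_eq is itself time-varying, which is exactly the situation Eq. (41) is meant to handle.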

All that remains is to relate stimulus uncertainty, H, to sensory response. In 1977, Norwich postulated that the firing rate, F, of the receptor's associated primary afferent neuron is directly proportional to its uncertainty (Norwich, 1977). That is,

F = kH ,        (42)

where k is a constant and has units of inverse time or frequency.

One must pause to consider the importance of this single equation. Whereas the Boltzmann-Shannon measure of entropy may be considered a model of the uncertainty resident at the receptor, Eq. (42) is a fundamental conjecture regarding the objectivity and measurability of sensation. The true test of these equations, of course, is in their predictive power both in theory and in experiment.

As we shall see, Eq. (42), along with Eq. (37), provides a good theoretical prediction of the neural sensory response to all inputs for intensities below physiological saturation levels.

3.2.2 Summary

Only three equations from this chapter will be required for the remainder of the thesis: Eq. (37), relating receptor uncertainty to stimulus intensity and memory; Eq. (41), solving for the receptor memory; and Eq. (42), relating receptor uncertainty to neural response:

F = kH .

Chapter 4 Neural Explorations with the Universal Entropy Equation

If the universal entropy equation is to supersede the restricted or classical equation as the equation of preference, it must be capable of even greater predictive power. In the following sections, we shall utilize Eqs. (37) and (42) to explore neural phenomena in a manner not previously possible with the simpler classical entropy equation. That is, we shall attempt to address the shortcomings of the classical theory mentioned in Section 3.1.2.

4.1 Neural Adaptation

Consider the neural response to a step input as shown in Fig. 12. The stimulus is of constant intensity, I, beginning at time t = 0. The solution to Eq. (40) is

m(t) = m_eq + [m(0) − m_eq] e^{−at} .        (43)

If the receptor was completely adapted prior to the input, then m(0) = m_eq|_{I=0} = m°_eq δI^q from Eq. (38), and m_eq = m°_eq (I + δI)^q. Obtaining F from Eqs. (43), (37) and (42), we obtain

F = (k/2) ln { 1 + β (I + δI)^p / [ (I + δI)^q + (δI^q − (I + δI)^q) e^{−at} ] } ,        (44)

where β = β′/m°_eq. It is easily seen that m°_eq always factors out of m and can be incorporated into β.

The denominator in Eq. (44) can be rearranged to give (I + δI)^q − Γ e^{−at}, where Γ = (I + δI)^q − δI^q. Since Γ > 0, the denominator is monotonically increasing and, as a consequence, F decreases monotonically for increasing t, corresponding to neural adaptation. Whereas, in Eq. (25), F always adapts to completion (F = 0) for large t, Eq. (44) shows that, for t → ∞,

F = (k/2) ln [ 1 + β (I + δI)^{p−q} ] ,        (45)

where n = p − q. That is, F does not fully adapt.

4.2 Spontaneous Neural Activity

If there is no external signal (I = 0) and the receptor is completely adapted, we have from Eq. (45)

F = (k/2) ln (1 + β δI^{p−q}) .        (46)

Figure 12: Step input intensity.

The presence of δI produces a "spontaneous" firing rate independent of the external input, as commonly observed experimentally.

4.3 Early Rise in the Adaptation Curve

There are several ways to interpret the early rise in the neural adaptation curve (cf. Section 2.2.1.1).

4.3.1 Correlated Receptor Samples

The rate of change of m with respect to time indicates how quickly the receptor samples. In terms of adaptation, we see that dm/dt is greatest at initial stimulus onset, decreasing steadily for increasing durations. It is assumed that each receptor sample is statistically independent of the previous sample. However, if successive samples are separated by a period less than the correlation time of the stimulus, the samples will not be entirely independent.

The sample variance obtained from a correlated sequence of stimulus values will tend to be smaller than the variance obtained from independent samples. Thus, by Eq. (33), the entropy or response will be diminished as a consequence of the low variance. As the sample values become less correlated, both the variance and the response will rise. When the sampling period exceeds the correlation period, the entropy will fall monotonically, as expected with adaptation.
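The claim that correlated samples depress the sample variance can be illustrated with a small simulation. This is not the model of Aebersold et al.; it simply uses a first-order autoregressive (AR(1)) sequence as a stand-in for correlated receptor samples:

```python
import math
import random
import statistics

def mean_sample_variance(rho, m, n_windows, seed=1):
    """Average sample variance of windows of m correlated values.

    rho is the lag-one correlation; the noise scale keeps the stationary
    variance of the sequence equal to 1 regardless of rho.
    """
    rng = random.Random(seed)
    variances = []
    for _ in range(n_windows):
        x = rng.gauss(0.0, 1.0)            # draw from the stationary N(0, 1)
        xs = []
        for _ in range(m):
            xs.append(x)
            x = rho * x + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        variances.append(statistics.variance(xs))
    return statistics.mean(variances)

var_correlated = mean_sample_variance(rho=0.9, m=10, n_windows=5000)
var_independent = mean_sample_variance(rho=0.0, m=10, n_windows=5000)
```

With strong positive correlation the average sample variance falls well below the true stationary variance of 1, which, by Eq. (33), depresses the entropy and hence the early response.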

The effects of correlation in the olfactory system were explored in a paper by Aebersold, Norwich and Wong (1993). In this paper, the correlation period was studied using computer simulations of the Brownian motion of odorant molecules.

4.3.2 Effects of Stimulus Rise Time on Adaptation

Eq. (44) was obtained for a step function input at time t = 0. More realistically, stimuli have finite rise times, which may affect the neural response. For example, Smith and Brachman (1980) found that by decreasing the rise time, the peak of the adaptation curve rises, and vice versa. Using a numerical solution of the differential equation, it can be shown that the universal theory is capable of predicting this effect.

Suppose that we can model the rising stimulus profile with a function of the form

I(t) = Ī (1 − e^{−αt}) ,        (47)

where α is the inverse rise time constant. We have assumed that the stimulus begins at time t = 0. Eq. (40) can be solved numerically with I(t) for different values of α. Substituting these values into Eq. (37), we can now observe the effects of the rise time on the adaptation curve. Using parameter values k = 2, β = 0.1, δI = 0.1, p = 1, q = 0.5, a = 5 and Ī = 5, the results are shown in Fig. 13.

It is essential to remember that the parameter α (stimulus rise time constant) is not related to the parameter a (adaptation time constant).
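The numerical procedure just described can be sketched as follows, using the parameter values quoted above; the forward-Euler integration and the absorption of the units constant into the memory variable are my own illustrative choices:

```python
import math

# Sketch of the rise-time experiment with the parameter values quoted in
# the text (k = 2, beta = 0.1, dI = 0.1, p = 1, q = 0.5, a = 5, I_bar = 5).
k, beta, dI, p, q, a, I_bar = 2.0, 0.1, 0.1, 1.0, 0.5, 5.0, 5.0

def peak_rate(alpha, t_end=2.0, dt=1e-4):
    """Peak firing rate for the input I(t) = I_bar*(1 - exp(-alpha*t))."""
    m = dI ** q                    # fully adapted to no signal at t = 0
    f_max, t = 0.0, 0.0
    while t < t_end:
        i = I_bar * (1.0 - math.exp(-alpha * t))
        m_eq = (i + dI) ** q       # Eq. (38), units constant absorbed into m
        m += a * (m_eq - m) * dt   # Euler step of Eq. (40)
        f = (k / 2.0) * math.log(1.0 + beta * (i + dI) ** p / m)  # Eqs. (37), (42)
        f_max = max(f_max, f)
        t += dt
    return f_max

# A shorter rise time (larger alpha) produces a higher peak.
peaks = [peak_rate(alpha) for alpha in (10.0, 50.0, 1000.0)]
```

Tracking the full F(t) trajectory instead of only its maximum reproduces the family of curves in Fig. 13.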

The curve rises to a peak before falling, in agreement with Fig. 2. Furthermore, we see the peak shift upwards and towards the origin as the rise time is reduced, in agreement with experimental observation.

4.4 Neural De-adaptation

If the neural response is adapted to a stimulus of intensity I by time t = 0, the response takes on the form of Eq. (45). If the signal is now removed (I = 0), we can solve for the shape of the neural response.

Recall that Eq. (43) is the solution of the differential Eq. (40) for constant intensity. From Eq. (38), we see that m(0) = m°_eq (I + δI)^q and m_eq = m°_eq δI^q.

Substituting these values into Eq. (43), we obtain

m(t) = m°_eq { δI^q + [ (I + δI)^q − δI^q ] e^{−at} } .        (48)

Generally speaking, the stimulus intensity I is always much greater than the internal signal δI. Therefore, m(0) is much greater than m_eq. Since m(t) changes continuously from m(0) to m_eq, we see that m(t) decreases monotonically from t = 0 onwards.

F drops suddenly at t = 0 because I = 0. Afterwards, F increases steadily as m(t) decreases with time. F keeps increasing until steady state is reached in Eq. (48). In that case, F approaches the spontaneous value given in Eq. (46). This result is in agreement with the experimentally observed de-adaptation effect (see Fig. 2).

Figure 13: The effects of stimulus rise time on adaptation as predicted by the universal equation. α is the inverse rise time constant. From top to bottom, α = 1000, 50, and 10, respectively. A shorter rise time produces a higher and quicker peak.

4.5 On the Relationship Between the Classical and the Universal Entropy Equation

As mentioned earlier, the heritage of the more complex universal entropy equation can be traced directly back to the classical entropy Eq. (25). How can we complete this correspondence?

Since the classical equation is limited to step inputs as shown in Fig. 12 (see Section 3.1.2), we return to Eq. (44), which is the neural response to a step input as predicted by the universal equation.

We can simplify Eq. (44) in the limit where the stimulus is of shorter duration (t ≪ 1/a) and higher intensity (I ≫ δI) to obtain, approximately,

F ≈ (k/2) ln [ 1 + (β/a) I^{p−q} / t ] .        (49)

Setting K = k/2, identifying the classical β with β/a, and n = p − q, we see that the classical entropy Eq. (25) is recovered.

4.6 Further Experimental Validation of the Universal Entropy Equation

In this section, we utilize Eq. (44) to analyze data from real experiments. However, before proceeding, one should take cognizance of the difficulties in validating such a complicated equation as Eq. (44). The problem involves the number of parameters. While only two parameters are required to uniquely specify a straight line, adaptation curves would typically require 2-3 parameters. Fitting a curve with 6 parameters through adaptation data would yield no unique set of parameters.

The only way of obtaining robust parameters is by simultaneously fitting the equation to multiple sets of data. For example, Eq. (44) is a function of two variables: that is, F = F(I, t). If we have data from the same preparation which measures both F as a function of I for constant t and F as a function of t for constant I, then we can evaluate the 6 parameters for two curves, or 3 parameters or degrees of freedom per curve.

Curve-fitting of a single, multi-parameter function to multiple sets of data using the same parameter values is a very stringent test of the validity of an equation. Simultaneous curve fitting has, to our knowledge, never been used by other groups to test any sensory theory. For further details, please see Appendix G.
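The idea of simultaneous fitting with shared parameters can be illustrated on a toy linear model (this is not the thesis's fitting code; the model and data below are synthetic). Two datasets share the parameter A while each has its own slope, and the pooled squared error is minimized in one step via the normal equations:

```python
# Simultaneous least-squares fit of two synthetic datasets that share the
# parameter A:  dataset 1 obeys y = A + B*x, dataset 2 obeys y = A + C*x.
def solve3(M, v):
    """Solve a 3x3 linear system M theta = v by Gauss-Jordan elimination."""
    M = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [u - f * w for u, w in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

# Synthetic, noise-free data generated from A = 2, B = 0.5, C = -1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
data1 = [(x, 2.0 + 0.5 * x) for x in xs]
data2 = [(x, 2.0 - 1.0 * x) for x in xs]

# Pooled design rows: dataset 1 -> (1, x, 0), dataset 2 -> (1, 0, x),
# so the single unknown vector is theta = (A, B, C).
rows = [((1.0, x, 0.0), y) for x, y in data1] + \
       [((1.0, 0.0, x), y) for x, y in data2]
MtM = [[sum(r[i] * r[j] for r, _ in rows) for j in range(3)] for i in range(3)]
Mty = [sum(r[i] * y for r, y in rows) for i in range(3)]
A, B, C = solve3(MtM, Mty)
```

Fitting Eq. (44) simultaneously to adaptation and driven-response data follows the same principle, except that the model is nonlinear and requires an iterative minimizer rather than one linear solve.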

4.6.1 Auditory Neural Response

Brachman and Smith (Figs. 4 and 6, Smith, 1988) measured both firing rate as a function of tone duration at fixed intensity (adaptation) and firing rate as a function of tone intensity at fixed durations (driven response). Both experiments were conducted on the same auditory nerve fibre in the same organism.

In the adaptation experiment, they recorded the number of measured impulses in a 960 ms bin as a function of stimulus duration in the anesthetized Mongolian gerbil. The impulse count was converted to firing rate by dividing through by the bin interval. An averaged firing rate over 91 trials was then obtained. Fig. 14 shows their transformed data for a 39 dB SPL tone at the characteristic frequency (CF) of the fiber (2.44 kHz).

For the driven neural response, they measured the maximal firing rate during a one-millisecond interval after response onset as a function of stimulus intensity. Since the derivation of Eq. (44) demands that the duration of the stimulus be kept constant, Eq. (44) is not completely suited to their experimental procedure. Nevertheless, we shall use it as a good approximation to their measurements. The data from the graph by Smith (Fig. 6, 1988) have been digitized and are shown in Fig. 15.

Since the same fibre was used in both experiments, we require that the same set of parameter values should fit both sets of data. Hence, Eq. (44) has 6 parameters (k, β, p, q, a, δI). For the driven response, we require an additional parameter, t̄, which is the average stimulus duration. The values k = 3.3 × 10² spikes/s, β = 2.5 x 10-~, p = 1.1, q = 9.0 × 10⁻², a = 9.0 × 10 Hz, δI = 2.5 and t̄ = 1.1 × 10⁻¹ ms were obtained by simultaneously fitting the data with Eq. (44). The theoretical fit is shown as the solid line in Figs. 14 and 15. The fit is quite good in both cases. In Fig. 15, we see that, for the higher intensities, Eq. (44) fails to predict the saturation of the neural response.

Figure 14: Neural adaptation data of Smith (1988) fitted with Eq. (44). The same set of parameters were used to fit the data in Fig. 15 as well.

Figure 15: Driven neural response as measured by Smith (1988) and fitted by Eq. (44). Data was collected from the same fibre as Fig. 14. Consequently, the same set of parameters were used to fit both sets of data.

We can also predict the cat muscle fibre response to an increasing ramp-and-hold stimulus used by Awiszus and Schäfer (NB), the mechanoreceptor response to steady and sinusoidal stimulation as measured by Bohnenberger (1981) (see Norwich and Wong, 1995), and also many other input profiles for various sensory modalities as well.

4.7 Other Investigators

The only other investigator to our knowledge who has worked with comparable generality on developing a universal theory for sensory receptor action is Zwislocki (1973). In this paper, he examines the effect of stimulus intensity on the neural response. His approach is based on detailed receptor mechanisms and is somewhat empirical. Furthermore, Zwislocki does not deal with adaptation phenomena. However, he does demonstrate similarities in the neural response for a wide range of sensory modalities and organisms.

Chapter 5

Unifying Threshold Phenomena

In the previous chapter, we explored a wide range of neural phenomena using the universal entropy equation. We now wish to use the same equation to study psychophysical thresholds.

However, before taking to this task, it is important to survey what researchers currently understand or do not understand about threshold phenomena. Quoted below is a passage from a review article written with regard to the current state of auditory threshold research (Algom and Babkoff, 1984):

"Although there is general agreement with regard to the fundamental empirical findings involved in the time-intensity relation, there is no such consensus with regard to a theoretical formulation. Various theoretical interpretations have appeared in the literature over the last 40 years. These theoretical formulations have been mainly independent developments by different authors rather than representative of an organic development of a dynamic field of interest. The interpretations that have been offered differ in terms of the coverage of data as well as in their level of mathematical sophistication. This situation makes it difficult to assess the "state of the art" reliably, as well as to incorporate new experimental findings into an accepted body of relevant knowledge."

The same evaluation may also be ascribed to the understanding of thresholds for other sensory modalities as well.

In this chapter I hope to rectify this situation. Building upon the earlier work of Norwich (1987, 1989), I wish to demonstrate that we now have a theory capable of unifying the study of thresholds almost in its entirety. While the success of the classical entropy theory in this area has been limited, we shall see that many threshold phenomena can be understood using only Eq. (37) coupled with a single threshold hypothesis.

While our sensory receptors are sensitive to single atomic events, the combined action of a number of these events is required before a psychophysical response can be generated. Yet, how is the detection of threshold stimuli achieved given the neural response? That is, what condition does the brain impose upon the neural response before the stimulus can be detected psychophysically?

We shall attempt to answer these questions in the following sections.

5.1 Thresholds and the Neural Response

In Section 2.3.3, the sensory threshold was defined as the minimum stimulus level which is detectable, say, 50% of the time. Thus, a subject may be required in an experiment to detect a weak flash of light in a dark room. The experimenter adjusts the intensity of the flash until it is detectable, say, half of the time. We termed this the absolute threshold. For stimulus intensities above the absolute threshold, the differential threshold governs how closely we can differentiate between stimuli of two different intensities. As with the absolute threshold, the differential threshold is always defined with a statistical criterion in mind.

Both the absolute and the differential thresholds exhibit a reciprocal relationship between minimum perceptible intensity and stimulus duration. Thus, the three main variables governing threshold phenomena are ΔI (threshold intensity), Δt (stimulus duration) and F (sensory response).

Recall that the organism first comes into contact with the stimulus at the neural periphery (cf. Fig. 1). Hence, the threshold condition must ultimately be decided from the neural response. Eq. (37), introduced in Chapter 3, relates the neural response F to both the stimulus intensity I and stimulus duration t: F = F(I, t). That is, the three threshold variables are all related in a single equation for the neural response.

In the following section, we shall examine how the threshold condition can be derived from the neural response using Eq. (37).

It is essential to keep in mind that only Eq. (37) will be required for this entire chapter.

Some investigators incorporate signal detection theory in their study of sensory thresholds (e.g. Durlach and Braida, 1969; Hellman and Hellman, 1990). Signal detection theory is a mathematical and theoretical system that deals with both decisional and sensory components in detection and discrimination tasks (Coren and Ward, 1989). While this approach is fundamentally different from the entropic approach, the two methods are actually compatible (see Hellman and Hellman, 1990, and Wong and Norwich, 1995).

5.2 Introducing the Threshold Hypothesis

Fig. 8 shows one method of measuring the differential threshold. Given a pedestal of intensity I, what is the minimal increment ΔI which is detectable by the subject, say, 50% of the time? This method of measuring the Weber fraction is called the continuous method. Fig. 8 has been replotted in Fig. 16.

In the last chapter, we examined the phenomenon of neural adaptation by solving for the neural response given a step input. (If required, a quick review of Section 4.1 would be helpful at this point.)

Similarly, we can also solve for the neural response H = F/k corresponding to the input shown in Fig. 16. Solving for the response gives the results shown in Fig. 17. The detailed calculations are left to Appendix B.

Fig. 17 is in good agreement with the neural response as observed experimentally; please see Smith and Zwislocki (1975).

As mentioned in the introduction, one feature common to all sensory organs is their ability to transmit sensory information. Thus, following Norwich et al. (1989), we can now express the fundamental entropic conjecture regarding the detection of thresholds: An increment ΔI can be detected if and only if the change in neural entropy or information exceeds the constant ΔH. In other words, a fixed quantity of information must be transmitted neurally in the interval [0, Δt] before the increment can be detected.

Mathematically, we might say that ΔI is detectable if and only if

H(Δt) − H(t_s) ≥ ΔH,    (50)

where t_s is the time of the first receptor sample in the interval [0, Δt]. H is given by Eq. (51), where β̄ = β′/m̄, γ = ΔI/(I + δI) and C = β(I + δI)^n. Eq. (51) is derived in Appendix B and corresponds to the neural response to the increment in Fig. 16.
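The threshold hypothesis lends itself to a simple numerical illustration. In the sketch below, the entropy function is a toy stand-in for Eq. (51) (the functional form and every constant are assumptions, not values from this thesis): the entropy change grows toward an asymptote set by the relative increment γ = ΔI/(I + δI), and an increment counts as detected at the first time the change exceeds ΔH.

```python
import numpy as np

# Toy stand-in entropy growth (NOT Eq. (51)): rises from 0 toward an asymptote
# set by the increment size, with adaptation rate a. All constants are invented.
def delta_entropy(dI, t, I=10.0, dI0=1.0, a=5.0, n=0.5):
    gamma = dI / (I + dI0)                       # relative increment gamma = dI/(I + dI0)
    return 0.5 * np.log(1.0 + gamma**n) * (1.0 - np.exp(-a * t))

def detection_time(dI, dH=0.01, tmax=5.0):
    # Smallest t at which delta_entropy first reaches dH (np.inf if never reached)
    ts = np.linspace(1e-4, tmax, 20000)
    ok = delta_entropy(dI, ts) >= dH
    return ts[np.argmax(ok)] if ok.any() else np.inf

for dI in (0.5, 1.0, 2.0, 4.0):
    print(dI, detection_time(dI))
```

Sweeping ΔI and recording the detection time traces out an intensity-duration reciprocity curve directly from the hypothesis: larger increments transmit the required information sooner.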

The threshold hypothesis is somewhat analogous to the classical Bohr approach to the atom. Given an electron in its ground state, what are the energy requirements to move the electron to the next orbital or higher energy state? If a photon is incident upon an atom with less energy than the difference between the two energy states, the electron will stay in its current orbital. Only a photon with sufficient energy can move the electron to the next orbital. Similarly, our hypothesis states that the brain does not respond to any stimulus which fails to produce a change in neural entropy exceeding ΔH.

Figure 16: Stimulus input for continuous increment.

Figure 17: The neural response corresponding to Fig. 16.

We now explore how this single hypothesis can change the entire way in which thresholds are interpreted.

5.3 Differential Thresholds and Weber Fractions

5.3.1 Deriving the Differential Threshold Equation

Special cases of Eq. (50) may be expressed in simpler forms. We make the following assumptions in order to simplify the equation: ΔI ≪ I + δI, Δt ≫ t_s, and ΔH ≪ 1.

Recall that ΔI is the differential threshold (see Section 2.3.3.2). The assumption ΔI ≪ I + δI implies that the differential threshold intensity is small compared to the combined input (external + internal signal). The second assumption relates sampling time to increment duration. Qualitatively, these two assumptions appear to be compatible. Recall that there is an inverse relationship between stimulus intensity and stimulus duration for a threshold response. A longer duration would imply a smaller increment, and thus the two assumptions are indeed compatible.

The third assumption relates to the absolute magnitude of the transmitted information. Since threshold quantities are relatively small, it is reasonable to assume that the magnitude of ΔH, or the threshold criterion, will also be small.

It is important to point out that ΔH is directly related to the statistical criterion established by the experimenter (see Section 2.3.3.1). ΔH links the entropic approach to the theory of signal detectability. While there exists no current theory which can relate the two quantities mathematically, we know that ΔH is a monotonically increasing function of the response criterion. That is, less transmitted information is required if the experimenter sets a lower response criterion.

Taking a series expansion to first order in both ΔI/(I + δI) and ΔH in Eq. (50), one obtains Eq. (52), where we have set n = p − q. The details of this calculation are presented in the Appendix.

5.3.2 Implications of the Differential Threshold Equation

Recall that there are many empirical observations and equations governing the Weber fraction (see Section 2.3.3.2). Using Eq. (52), they can be explained quite simply. This equation is identical to the empirical Knudsen-Riesz Weber fraction Eq. (15) and is similar to the expressions proposed by many other investigators (e.g. Hecht, 1935; Siebert, 1968).

As the pedestal intensity is made smaller and smaller (I → 0), we can expand Eq. (52) to obtain its limiting form. In this case, a double logarithmic plot of ΔI/I vs. I will yield a straight line with slope −1, as observed by many investigators.
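The slope of −1 in the limit I → 0 is easy to verify numerically. The Weber function below is written in a Knudsen-Riesz-like form with an internal-signal floor δI; the form and all parameter values are illustrative assumptions, not the fitted values from this chapter.

```python
import numpy as np

# Illustrative Weber fraction with internal-signal floor dI0 (assumed form and values):
# dI/I = W_inf * (1 + dI0/I) * (1 + 1/(beta * (I + dI0)**n))
def weber(I, W_inf=0.1, dI0=1.0, beta=0.5, n=0.3):
    return W_inf * (1.0 + dI0 / I) * (1.0 + 1.0 / (beta * (I + dI0)**n))

def loglog_slope(I, h=1e-6):
    # d(log W)/d(log I) estimated by a small relative step
    return (np.log(weber(I * (1.0 + h))) - np.log(weber(I))) / np.log(1.0 + h)

slope_low = loglog_slope(1e-6)    # pedestal far below the internal signal
slope_high = loglog_slope(1e6)    # pedestal far above the internal signal
```

The slope of −1 at low pedestals comes entirely from the (1 + δI/I) factor; at high pedestals that factor is inert and the log-log slope instead flattens toward zero.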

5.3.3 Validating the Differential Threshold Equation

Here, we are once again dealing with an equation of many parameters. In total, Eq. (52) contains 5 parameters (ΔH/q is considered as a single parameter). We shall use the technique of simultaneous curve fitting to validate this equation.

In Eq. (52), the Weber fraction is a function of both I and t. By curve fitting simultaneously to both variables, we are subjecting this equation to a very severe experimental test.

5.3.3.1 Visual Differential Thresholds

Roufs (1971) measured the differential threshold of the eye to point sources of light (1° fovea). Both the pedestal intensity (I) and the increment duration (Δt) were varied. In Fig. 18, Roufs' measurements (symbols) along with the theoretical prediction (solid line) are plotted for ΔI vs. Δt. I is held constant as a parameter and its values are listed in the legend.

The pedestal intensity I is measured in units of troland (td). The troland is defined as the retinal illumination from a 1 cd/m^2 light source when the area of the aperture of the eye is 1 mm^2.

Just as a reminder, it is important not to confuse the differential threshold, ΔI, with the Weber fraction, ΔI/I. In Fig. 18, ΔI is plotted.

Except for a, notice that the set of 9 curves can be fitted with a single set of parameters (ΔH/q = 1.0 × 10-*, n = 7.5 × 10^-1, δI = 1.8, β = 1.1).

The reason why different a values were used to fit the data can be understood quite simply. The parameter a determines how quickly adaptation proceeds. In the visual system, there are two types of receptors. The rods and cones each have a different time course for adaptation, yet both function simultaneously in the detection of the threshold stimulus. Hence more than one value of a is required for the hybrid visual receptor described by Eq. (52). The value of a = 4.4 × 10 Hz was used for the top 3 curves, a = 2.5 × 10 Hz for the middle curve, and a = 1.6 × 10 Hz for the remaining curves. The values of a correspond roughly to the different visual conditions: photopic (vision under daylight or bright illumination, cone vision), mesopic (vision under reduced illumination, combined rod and cone vision), and scotopic (vision under low illuminance, rod vision).

Figure 18: The visual differential threshold data of Roufs (1971). Theoretical fit using Eq. (52). Notice that 9 curves were fitted with a total of 7 parameters. The unit 'td' stands for troland.

In total, 7 parameters were used to fit 9 curves. This corresponds to less than one degree of freedom per curve.

5.3.3.2 Auditory Differential Thresholds

Eq. (52) can also simultaneously fit data obtained from two different experiments.

In the first experiment (Carlyon and Moore, 1986, 'CM'), the Weber fraction was measured as a function of I for a 500 Hz tone (data from Fig. 1b, continuous pedestal condition in the absence of bandstop noise). The duration of the increment was 20 ms. That is, ΔI/I was measured as a function of I for Δt held constant.

The second experiment involved measuring the Weber fraction as a function of increment duration (Garner and Miller, 1947, 'GM'). Using a 500 Hz tone, they made measurements at both 40 and 70 dB SL (Table 1, data of GM). That is, ΔI/I was measured as a function of Δt for I held constant.

We simultaneously fitted Eq. (52) with 6 parameters to three sets of data, or an average of 2 parameters per data set. δI was fixed at 2.5 I_thresh, where I_thresh is the absolute threshold measured at 500 Hz (see Appendix E). We leave the discussion of this point to Chapter 8.

I_thresh is temporarily used to denote the absolute threshold (see Section 2.3.3.2) to avoid confusion with the differential threshold ΔI (see Section 2.3.3.1). β, n and a are the first three parameters; ΔH₁/q and ΔH₂/q are the 4th and 5th.

Since the thresholds were determined by two different criteria (CM, 71%; GM, 30%), two different values of ΔH were required. We set up the equations as Eq. (55), one Weber-fraction expression for each of the three conditions (CM; GM, 40 dB SL; GM, 70 dB SL), where we have converted all intensity values to SPL using standard threshold data (see Appendix E).

The results are shown in Figs. 19 and 20. The following parameter values were obtained: ΔH₁/q = 1.6 × 10-~, ΔH₂/q = 8.1 × 10-~, β = 7.0 × 10-~, n = 3.6 × 10^-1 and a = 1.1 × 10 Hz. Five parameters were used to fit 3 curves.

From the parameters, we see that ΔH₁ > ΔH₂, in qualitative agreement with the higher criterion used by Carlyon and Moore.

In Fig. 20, the curves deviate from the data for small Δt, which may be seen in part as a violation of the inequality ΔI ≪ I + δI. However, even when this approximation is not made, the theory still deviates from the data. Since the frequency spectrum for short tones is broader by the uncertainty relationship (Δf Δt = 1/2), it has been suggested that the subjects in the Garner and Miller experiment were using, in part, the high-frequency spectral cues. The extra stimulus artifact may have given the subjects a substantial advantage in detecting the shorter increments; hence the spuriously low values for the Weber fractions.

Figure 19: Data of Carlyon and Moore (1986) as fitted by Eq. (55), showing the Weber fraction as a function of intensity. The same parameters were used to fit the data in Fig. 20.

Figure 20: Data of Garner and Miller (1947) as fitted by Eq. (55), showing the Weber fraction as a function of increment duration. The same set of parameters was used to fit the data in Fig. 19.

5.3.4 A Generalization of the Threshold Hypothesis

We now return briefly to the experiments of Roufs. An obvious generalization of the experimental paradigm shown in Fig. 16 is to measure the threshold associated with a decrement. That is, instead of a change of +ΔI, we ask the subject to detect a change in the pedestal intensity of −ΔI.

What might we expect from the results of such an experiment? If we examine Eq. (52), we see that a first order expansion in both ΔI/(I + δI) and ΔH produced an equation which is linear in both ΔI and ΔH.

Thus, if we postulate that a decrement of −ΔI can be detected if and only if the accompanying neural response shows a change in uncertainty or entropy of the amount −ΔH, then Eq. (52) predicts that such an experiment would yield identical results to the corresponding incremental experiment (apart from the sign of ΔI). That is, a loss in information is required for the perception of a decrement.
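The symmetry between increments and decrements follows from the linearity of the first-order expansion, and can be checked mechanically. The entropy function below is a generic smooth stand-in with assumed form, not Eq. (52):

```python
import sympy as sp

I, dI, beta, n = sp.symbols('I dI beta n', positive=True)

# Generic smooth entropy function (an assumed stand-in, not Eq. (52))
H = sp.Rational(1, 2) * sp.log(1 + beta * I**n)

# First-order entropy change for an increment (+dI) and a decrement (-dI)
up = sp.series(H.subs(I, I + dI) - H, dI, 0, 2).removeO()
down = sp.series(H.subs(I, I - dI) - H, dI, 0, 2).removeO()

antisym = sp.simplify(up + down)   # vanishes: equal magnitude, opposite sign
```

Because only the linear term survives at threshold, a criterion on the magnitude of the entropy change yields the same |ΔI| for either sign of the increment, which is what Fig. 21 exhibits.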

Not surprisingly, this is what was observed experimentally. In Fig. 21, Roufs' results are plotted showing both the data for the incremental and the decremental experiments for the same pedestal intensity. The agreement is quite striking.

5.4 Absolute Thresholds

The absolute threshold is the limit of the differential threshold when the pedestal is reduced to zero (I = 0) (see Section 2.3.3.2). From Fig. 16, all that remains of the intensity profile is a stimulus of magnitude ΔI and duration Δt.

Returning to Eq. (52), multiplying both sides by I and setting I = 0, we obtain Eq. (56), with its constant defined by Eq. (57).

Eq. (56) was first obtained by Plomp and Bouman (1959), and later by Zwislocki (1960), as an alternative expression for the Blondel-Rey law (or Hughes' law in audition).

Of the pantheon of empirical relationships governing threshold phenomena, Eq. (56) appears to be the only equation which has ever been derived. Even in this case, the derivation is based upon empirical equations governing the neural response (Zwislocki) or some arbitrary model of the sensory organ involving electric circuits (Plomp and Bouman).

As long as the same response criterion is used, a single value of ΔH determines both the differential and the absolute threshold. Returning to Fig. 18, we see that the single Eq. (52) fits all curves, including the curve for I = 0, using only a single value of ΔH. The curve for I = 0 measures the absolute threshold.

Figure 21: Data of Roufs (1971) demonstrating that the magnitude of the differential threshold is the same for both a positive and a negative increment at the same pedestal intensity. The open symbols are for positive increments; the filled symbols are for negative increments.

5.4.1 Bloch's Law: Part I

When Δt ≪ 1/a, we can expand Eq. (56) to obtain, approximately, Eq. (58), which is Bloch's law, although care must be taken to observe that the approximation ΔI ≪ δI remains valid. We shall return to Bloch's law from a different perspective in the next chapter.

Chapter 6

Simple Reaction Time

Recall that reaction time is another class of phenomena which exhibits an inverse relationship between stimulus intensity and stimulus duration. It is not usually classified among threshold phenomena. Nevertheless, we endeavour to show in this chapter that there is a threshold associated with simple reaction time, and that absolute, differential and reaction thresholds are all related.

6.1 Deriving Piéron's Law

The measurement of simple reaction times typically follows a protocol where a subject makes a motor response (say, by pressing a button) immediately upon detecting the stimulus input shown in Fig. 22. Δt_r is called the simple reaction time.

Apart from the motor response, the measurement of simple reaction times is identical to the measurement of absolute thresholds. We can follow upon this idea by returning to the threshold hypothesis introduced in the last chapter.

Figure 22: Stimulus input for a simple reaction time experiment. The subject responds by making a motor response at t = Δt_r.

While most absolute threshold intensities are small compared to the full physiological range, intensities associated with reaction time measurements tend to lie within the normal perceptible levels. That is, ΔI is almost always much greater than the absolute threshold intensity.

With this in mind, we must now discard one of the assumptions of the last chapter: that ΔI is small. Instead, we adopt a new approximation, that ΔI ≫ δI. By the reciprocity relation, since ΔI is now large, Δt should be smaller in compensation.

We must be very careful to differentiate properly between Δt and Δt_r. As postulated by Eq. (19), the reaction time Δt_r is actually a sum of the sensory processing time Δt_s and the time required to make a motor response Δt_m. While Δt may be small in magnitude, Δt_m might be substantial in comparison. Caution must be observed so that we do not wrongly estimate the order of magnitude of these variables.

In the last chapter, only the sensory processing time (Δt) was required since there was no motor response.

The other approximation, that t_s ≪ Δt, must also be discarded since Δt is now much smaller.

Returning to Eqs. (50) and (51), we now set I = 0 (no pedestal is present). This time we assume that ΔI ≫ δI and ΔH ≪ 1. Δt is left unrestricted. Solving Eq. (50) for Δt and expanding the resulting equation to qth order in δI/ΔI and first order in ΔH, we obtain Eq. (59), where we observe that 0 < q < 1 and p > q (see Section 3.2.1). The detailed calculation of Eq. (59) is provided in Appendix D.

While this equation appears daunting, it is actually quite simple. Defining Δt_min through Eq. (60), and assuming that δI/ΔI ≈ 0 (i.e. we take a zeroth order expansion in δI/ΔI of Eq. (59)), we get

Δt = Δt_min + Θ/ΔI^n,    (61)

where Θ = 2ΔH (1 − e^(−a t₀))² / (a β e^(−a t₀)).

As mentioned earlier, this equation governs the time required for the sensory system to process a stimulus of intensity ΔI.

Eq. (61) is fascinating because it represents a limit or lower bound on the processing time of the human sensory system. That is, no matter how large we make ΔI, the sensory system cannot process the information any faster than Δt_min.
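The lower-bound behaviour of Eq. (61) can be illustrated by fitting the Piéron form to synthetic latency data; all numerical values below are invented for the illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def pieron(dI, t_min, theta, n):
    # Pieron-form processing time: an irreducible floor plus a power-law term
    return t_min + theta / dI**n

rng = np.random.default_rng(1)
dI = np.logspace(0, 3, 60)                              # intensities (arbitrary units)
latency = pieron(dI, 0.18, 0.25, 0.33) + rng.normal(0.0, 0.002, dI.size)

popt, _ = curve_fit(pieron, dI, latency, p0=[0.1, 0.1, 0.5])
t_min_hat, theta_hat, n_hat = popt
```

As ΔI grows, the fitted curve flattens onto the floor t_min: the recovered constant plays the role of the Δt_min of Eq. (61), the fastest the sensory stage can operate.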

If we were to obtain an expression for the simple reaction time from Eq. (61), we would add the motor response time, Δt_m, to both sides of this equation to give

Δt_r = Δt_r,min + Θ/ΔI^n,    (62)

where Δt_r,min = Δt_min + Δt_m (cf. Eq. (19)). Eq. (62) is Piéron's law, first introduced in Section 2.3.4.

It is proposed that Θ be called Piéron's constant in honour of the eminent physiologist.

6.2 Implications of the Entropic Derivation of Piéron's Law

Recall that one of the interesting features of Piéron's law is its similarity in mathematical form to the Knudsen-Riesz Weber fraction Eq. (15). We can now understand why this is so: both equations are a linearization of Eq. (50) with respect to ΔH and I or ΔI. Consequently, the exponent (n) is identical in both equations (see Section 2.3.4).

Yet, we must remember that Eq. (62) was derived with the rather poor approximation that δI/ΔI = 0 (zeroth order expansion). This assumption is not always valid. Using instead Eq. (59) for Δt, one obtains

Δt_r = Δt_r,min + Θ/ΔI^n + Θ₁/ΔI^p + Θ₂/ΔI^g,    (63)

where Θ, Θ₁ and Θ₂ are all positive constants.

When data which conform more closely to Eq. (63) are "forcibly" fitted with Eq. (62) (Piéron's law), we see that a much lower exponent will be obtained from the fitting procedure in order to compensate for the two missing positive terms in Eq. (63). This is exactly what has been observed by many researchers including Marks (1974) and Norwich (1993). The "effective" exponent obtained by fitting Eq. (62) to experimental data is almost always lower than the exponent obtained by fitting Eq. (25) to Weber fraction data for the same sensory modality.

6.2.1 Bloch's Law: Part II

Only one empirical result from Chapter 2 remains unanswered.

Recall that a variant of Bloch's law was proposed by Garner in 1947 (cf. Eq. (7)). This law is approximate at best when used to fit experimental data. Nevertheless, we shall attempt to derive this equation although, admittedly, the derivation is not entirely satisfactory.

Rewriting Eq. (59), we obtain Eq. (64). If ΔI is small (although not so small as to violate the assumption δI/ΔI ≪ 1), Δt is much greater than Δt_min. Thus, we can drop this term in favour of the remaining terms. Generally speaking, n, p and q are all of the same order of magnitude. Hence, we might approximate Eq. (64) with the simpler Eq. (65), whose exponent θ is essentially the mean of the three exponents. It is important to keep in mind that Eq. (65) is a very crude approximation to Eq. (64).

Rewriting Eq. (65) in more familiar form, we obtain Eq. (66).

In Chapter 5, Eq. (58) (Bloch's law) was derived under the condition ΔI ≪ δI. That is, ΔI is close to threshold levels. By contrast, Eq. (66) was derived with the approximation ΔI ≫ δI; that is, ΔI is at suprathreshold values. Hence, we might summarize the results of both chapters with the following table:

ΔI Δt = C₁,        ΔI near threshold (ΔI ≪ δI)
ΔI Δt^(1/θ) = C₂,  ΔI suprathreshold (ΔI ≫ δI),

a result which has been observed by many researchers. Please see Algom & Babkoff (1984).

Chapter 7

Speculating on the Psychophysical Response

This chapter deals with the extension of the universal entropy theory to new domains. The results of this chapter represent the "state of the art" and, consequently, require further investigation and research.

All of the psychophysical phenomena considered thus far have been in a sense "objective". That is, the measurement process does not require an arbitrary decision or judgment on the part of the subject or experimenter (apart from deciding on the statistical criterion for a threshold).

However, other areas of psychophysics do involve some element of arbitrariness. For example, how do we rate the subjective response of an individual? What method should be used for scaling the response?

As we proceed, we shall bear these concerns in mind.

7.1 Psychophysical Adaptation

The difficulty of scaling the subjective response to psychophysical adaptation was overcome in part by the method of Békésy, which provides an objective measure of psychophysical adaptation. Before we get to his methodology, let us briefly digress to his brilliant scientific life.

Georg von Békésy was a Hungarian scientist who obtained his doctorate degree in physics from the University of Budapest. Working in the Hungarian Telephone Research Laboratory, he became interested in how the human ear works. There has never been a scientist who contributed more to the understanding of hearing than Békésy. For his contributions, he was awarded the Nobel Prize in Physiology or Medicine in 1961, joining a large number of physicists who have won this prize.

To measure adaptation in the ear to pure tones, Békésy employed a technique whereby a subject would match the intensity of an adapting tone in one ear to the intensity of another tone in the opposite or contralateral ear. The tone in the contralateral ear is of fixed duration (200 ms) and the sound intensity in the adapting ear is of fixed intensity (94 dB). The frequency of both tones is identical (800 Hz).

Let us denote the intensity of the matching tone as I_m (in the contralateral ear) and the intensity of the tone in the adapting ear as I_a. That is, I_a is fixed at some value and I_m is adjusted at regular intervals to match the loudness of I_a. Békésy's measurements (1929) for a mean of 10 subjects are shown in Fig. 23. dB adaptation is defined as 10 log₁₀(I_m/I_a).

Figure 23: Psychophysical adaptation as measured by the technique of loudness matching. Data of Békésy (1929) as fitted by Eq. (69).

Békésy, in another experiment, was able to show that the effect observed in Fig. 23 is not due to an interaction between the two ears. That is, the measured phenomenon issues from a single ear alone. However, this is, admittedly, still a contentious issue. Please see, for example, Scharf (1983).

To obtain a theoretical equation which may account for the data in Fig. 23, we briefly return to the two classical entropy Eqs. (24) and (25). Eq. (24) governs the psychophysical response and Eq. (25) governs the neural response. The simple relationship between these two equations implies Eq. (67), where we have ignored for the moment the difference in time scale (cf. Section 2.3.1).

We shall now attempt to use this same relationship to obtain the psychophysical response from the neural response for the universal theory as well.

The neural response F(I, t) to a step input is given by Eq. (44). Since the psychophysical response is proportional to F, it does not matter which of the two we equate. Thus, for the condition of equal loudness, we demand that

F(I_m, t_c) = F(I_a, ξt),    (68)

where I_a = 94 dB, t_c = 200 ms and 0 < ξ < 1 is an empirical parameter to account for the difference in time scales (see Section 2.3.1). Simplifying this equation, we obtain Eq. (69). Using the values of p = 4.1 × 10^-1, q = 9.8 × 10-* and ξ = 9.0 × 10^-4, the theoretical prediction is shown by the solid curve in Fig. 23. a was set equal to the value obtained in Section 5.3.3.2 for the differential threshold (a = 1.1 × 10 Hz), and the value of δI was pegged at 2.5 times the absolute threshold value for an 800 Hz tone (see Appendix E).

Implicitly, we have made the assumption that the same set of parameters should satisfy both ears. However, we must keep in mind that this may not always be the case.
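The loudness-matching condition of Eq. (68) can be mimicked numerically. The loudness function below is a crude stand-in with assumed form and constants, not Eq. (44); we solve F(I_m, t_c) = F(I_a, ξt) for the matching intensity I_m at several adapting durations.

```python
import numpy as np
from scipy.optimize import brentq

# Crude stand-in loudness function (assumed, NOT Eq. (44)): F = 0.5*ln(1 + beta*I^p / t)
def loudness(I, t, beta=1.0, p=0.3):
    return 0.5 * np.log(1.0 + beta * I**p / t)

def matching_intensity(t_adapt, I_a=1.0, t_c=0.2, xi=1e-3):
    # Solve loudness(I_m, t_c) == loudness(I_a, xi * t_adapt) for I_m
    target = loudness(I_a, xi * t_adapt)
    return brentq(lambda I_m: loudness(I_m, t_c) - target, 1e-12, 1e12)

durations = [1.0, 10.0, 60.0, 120.0]          # adapting durations in seconds
matches = [matching_intensity(t) for t in durations]
```

The matching intensity falls monotonically with adapting duration, which is the signature of adaptation; with ξ of order 10^-3, neural time runs roughly a thousand times faster than psychophysical time, consistent with the fitted value of ξ in the text.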

The agreement between the theoretical curve and the data is quite striking. Notice that the value of ξ predicts that neuroacoustic adaptation proceeds approximately 1000 times faster than psychoacoustic adaptation.

In essence, we have demonstrated that psychophysical adaptation can be related to neural adaptation through a linear transformation in stimulus duration t. However, Wasserman and Kong (1974) have warned against possible errors in correlating the two responses. In particular, they distinguish between psychophysical time ("stimulus duration") and neurophysical time ("time after stimulus onset"). This difference is best understood in terms of an experiment involving stimuli of short duration. The neural response reflects an instantaneous reaction to the sensory stimulus, whereas the psychophysical response might reflect the totality of the entire stimulus (i.e. an integrated response). In the case of an integrated response, psychophysical adaptation cannot be obtained from neural adaptation through a simple isomorphic map.

7.2 Magnitude Estimation

In Section 2.3.2, the concept of magnitude estimation and the law of sensation was introduced. We now apply the universal theory towards the understanding of this psychophysical effect.

Let us begin with the Weber fraction.

7.2.1 The Measurements of Riesz

R. R. Riesz was a physicist working at AT&T Bell Labs in the 1920s and 30s. His name is nearly eponymous with the classical study of auditory Weber fractions. In Section 5.3.2, his empirical equation was derived as Eq. (53). Using Riesz's symbols, we can rewrite Eq. (15), with I_thresh now used to denote the absolute threshold to avoid confusion with the differential threshold ΔI. Rearranging the terms, we obtain a form which we showed earlier to be similar to Eq. (52), where we have multiplied both sides of the equation by I/(I + δI). Since Riesz's duration of increment (Δt) is on the order of 1 s, the duration-dependent factor can be evaluated approximately, using the value of a = 10.56 Hz obtained from Section 5.3.3.2 for the differential threshold. Hence, the Weber fraction reduces to a function of intensity alone.

Riesz measured the Weber fraction at different frequencies (see Fig. 7). By fitting all his curves to a single equation, he was able to parameterize S₀, S∞ and n.

If we now make the identification between Riesz's constants and the entropic parameters, we can use Fechnerian integration (see Section 2.3.3.2) to obtain H = H(I). Fechnerian integration implies that, to a first order approximation, dH/dI = ΔH/ΔI. Substituting for ΔH/ΔI from Eq. (74), we obtain H(I) explicitly.

Since H is linearly related to F from Eq. (42), and the psychophysical response is linearly related to F from Eq. (67), we can write for the psychophysical response Eq. (81), where A is a constant of proportionality. Note the similarity between Eq. (81) and the classical equation for psychophysical response, Eq. (26).
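Fechnerian integration, as used here, can be carried out numerically. The Weber function below is a Riesz-style two-limit form with invented constants (S0, S_inf, I_th and n are placeholders, not Riesz's fitted values):

```python
import numpy as np
from scipy.integrate import quad

# Riesz-style Weber fraction (values assumed):
# dI/I = S_inf + (S0 - S_inf) * (I_th / I)**n
def delta_I(I, S0=0.5, S_inf=0.05, I_th=1.0, n=0.5):
    return I * (S_inf + (S0 - S_inf) * (I_th / I)**n)

def H(I, dH=0.01, I_th=1.0):
    # Fechnerian integration: accumulate dH per just-noticeable difference,
    # H(I) = integral from I_th to I of dH / delta_I(I') dI'
    val, _ = quad(lambda x: dH / delta_I(x), I_th, I)
    return val

curve = [H(I) for I in (2.0, 10.0, 100.0, 1000.0)]
```

The resulting H(I) grows roughly logarithmically at high intensity, the compressive shape that Eq. (81) inherits.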

7.2.2 Magnitude Estimation

We can now use Eq. (81) to analyze magnitude estimation data. The exponent n and the remaining constants can be set to the values given by Riesz's parameterization. Once again, it is assumed that δI = 2.5 I_thresh, where the value of the absolute threshold is given in Appendix E. Thus, the only free parameter left in Eq. (81) is A.

Luce and Mo (1965) carried out experiments in which subjects were asked to estimate the loudness of tones using an assigned number scale. A 1000 Hz tone was used, and so the values of n and I_thresh were all evaluated at this frequency.

We take the subject's mean magnitude estimate to be numerically equal to the value of the psychophysical response. Eq. (81) was used to fit their data (Fig. 2, Subject 9), and a value of A = 2.1 × 10^2 was obtained. The theoretical fit along with their data is shown in Fig. 24. The agreement is quite striking considering that the parameters in Eq. (81) were drawn from three separate experiments of three different paradigms (absolute and differential thresholds, and mean magnitude estimation).
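Because every shape parameter is fixed in advance, fitting Eq. (81) to magnitude estimates reduces to estimating the single multiplicative constant A, which has a closed-form least-squares solution. The snippet below demonstrates this on synthetic data; `shape` is a hypothetical stand-in for Eq. (81), its β and n values are only borrowed from the curve-fitting example of Appendix G, and A = 210 plays the role of the fitted 2.1 × 10^2.

```python
import numpy as np

# With the shape of the response curve fixed by threshold experiments, fitting
# magnitude estimates reduces to one multiplicative constant A, which has a
# closed-form least-squares solution: A = sum(f*y) / sum(f*f).
# `shape` is a hypothetical stand-in for Eq. (81); beta and n are only
# borrowed from Appendix G, and A = 210 mimics the fitted 2.1e2.
def shape(I, beta=0.0986, n=1.49, dI=1.0):
    return 0.5 * np.log(1.0 + beta * (I + dI)**n)

rng = np.random.default_rng(0)
I = np.logspace(1, 4, 12)
psi_obs = 210.0 * shape(I) * rng.normal(1.0, 0.02, I.size)  # mock estimates

f = shape(I)
A = np.sum(f * psi_obs) / np.sum(f * f)   # closed-form least-squares scale
```

The closed form follows from setting the derivative of the sum-of-square-error with respect to A to zero; no iterative search is needed for a purely multiplicative parameter.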

Continuing this process with other frequencies, we would obtain a set of frequency-dependent magnitude estimation or loudness curves. In a recent paper (Wong and Norwich, 1995), it was demonstrated that the phenomenon known as equal loudness contours can be derived from such a process.

However, the data in Fig. 24 are by no means representative of other measurements made for magnitude estimation (e.g. compare Hellman and Zwislocki in audition, 1961). In general, Eq. (81) cannot be used in conjunction with parameters obtained from other experiments to give a good theoretical fit to all magnitude estimation data. Perhaps this discrepancy is due to problems of psychophysical scaling.

That is, different psychophysical techniques give rise to different loudness curves. A paper dealing with this issue is currently in press (Norwich and Wong, 1996).

Figure 24: Estimation of loudness. Data from Luce and Mo (1965). The theoretical fit is from Eq. (81) with only one free parameter (a scaling parameter). All other parameters were obtained from the measurements of Riesz (1928).

Chapter 8 Discussion

In this chapter we focus on several new ideas which have made their appearance in this thesis.

8.1 The Internal Signal, δI

Let us examine the key equations which contain the internal signal term δI.

In Eq. (46) (Section 4.2), δI governs the spontaneous activity of the sensory neuron. If δI = 0, then there is no spontaneous firing rate. That is, F_sp = 0.

The Weber fraction Eq. (52) will not fall with slope -1 if δI = 0. This point has also been made by Hellman and Hellman (1995), specifically for audition. To better understand this point, we set δI = 0 in Eq. (52) to obtain Eq. (82). In the limit as I → 0, Eq. (82) becomes

ΔI/I ∝ 1/I^n.    (83)

That is, the Weber fraction falls with slope -n. We recall (Section 5.3.2) that with δI > 0, ΔI/I ∝ 1/I (Eq. (54)).
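The two limiting slopes can be checked numerically. The Weber-fraction form below is a hypothetical stand-in chosen only to reproduce the limits discussed here (ΔI/I ∝ 1/I near threshold when δI > 0, and ΔI/I ∝ 1/I^n when δI = 0); it is not Eq. (52) itself.

```python
import numpy as np

# Numerical check of the two limiting slopes discussed above.  The Weber
# fraction used is a hypothetical stand-in chosen only to reproduce those
# limits (1/I near threshold when delta-I > 0, 1/I**n when delta-I = 0);
# it is not the thesis's Eq. (52).
def weber(I, dI, n=0.3, c=1.0):
    return c * (I + dI)**(1.0 - n) / I

def loglog_slope(I, w):
    # slope of log(Delta-I / I) vs. log(I)
    return np.gradient(np.log10(w), np.log10(I))

I = np.logspace(-4, 2, 400)
slope_with_dI = loglog_slope(I, weber(I, dI=1.0))  # tends to -1 as I -> 0
slope_no_dI = loglog_slope(I, weber(I, dI=0.0))    # equals -n everywhere
```

With δI > 0 the near-threshold slope is pinned at -1 regardless of n, which is exactly the property lost when the internal signal is removed.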

The absolute threshold vanishes if there is no δI. Provided that n ≤ 1, setting δI = 0 in Eq. (57) gives I_t = 0, and consequently ΔI = 0 from Eq. (56).

Piéron's law (62) becomes exact if δI = 0. That is, Eq. (62) equals Eq. (63), since g1 and g2 both equal zero when δI = 0. In this case, there would be no theoretical explanation for why the exponent obtained from the Knudsen-Riesz Weber fraction equation (Eq. (15)) is different from the exponent obtained from Piéron's law.

Generally speaking, δI divides the domain of sensory studies into two regions: (1) differential and absolute threshold phenomena, and (2) simple reaction time phenomena. That is, in Chapter 5 (thresholds), we took the approximation ΔI << δI; in Chapter 6 (reaction time), we used the approximation ΔI >> δI.

In Section 5.3.3.2 on differential thresholds, it was first postulated that δI is approximately 2.5 times the magnitude of the absolute threshold (I_AT). Furthermore, δI = 2.5 I_AT was used when the psychoacoustical adaptation data of Békésy and the magnitude estimation data of Luce and Mo were fitted (Chapter 7).

The value of 2.5 I_AT was first suggested in the work of Zwislocki (1965), who incorporated a term similar to our δI in his derivation of a psychoacoustic loudness function.

We are now in a position to see how closely δI = 2.5 I_AT matches the experimental values. First, it must be pointed out that I_AT is a function of sound frequency. That is, the absolute threshold at 500 Hz will be different from the threshold at 1000 Hz. The effect of frequency on the absolute threshold is shown in Fig. 28 (Appendix E).

The values used to fit the data of Garner and Miller, and Carlyon and Moore (Section 5.3.3.2), can be substituted into Eq. (84). This equation was obtained by inserting Eq. (57) into Eq. (56). Typically, the durations of tones (Δt) used clinically to measure the absolute threshold are such that 1 - e^(-aΔt) ≈ 1, since a = 1.1 × 10 Hz and Δt is on the order of 1 sec. Substituting the remaining values into Eq. (84) (including δI = 2.5 I_AT), we obtain, for a 500 Hz tone, I_AT = 17.8 dB SPL with the 50% criterion. The measured value according to Fig. 28 is approximately 14.7 dB SPL. The two values are in reasonable agreement. If δI = 2 I_AT were used instead, we would obtain an almost exact result (15.0 dB).

It can be seen that δI is an important parameter within the sensory system. Perhaps future experimental work will be able to identify and elucidate the origins of this term.

8.2 A Common Set of Acoustical Parameters?

It is not sufficient to judge a theory purely by how well it fits the available data. There is an additional requirement that the fitted parameters must cohere between the different experimental paradigms. Thus, we demand that the parameters obtained from threshold measurements must match the parameters obtained from simple reaction time, etc.

We have already demonstrated a concurrence of the parameter δI across the various acoustical experiments. Similarly, we can make comparisons with other parameters as well.

Recall that the value of β = 7.0 × 10^-3 I_0^-n was obtained from simultaneously fitting the data of Carlyon and Moore, and Garner and Miller (see Section 5.3.3.2), for a 500 Hz tone. From Eq. (78), Riesz's measurements predict that βI_thresh^n ≈ 3.6 × 10^-3. Using the threshold data from Appendix E, we can solve for β to obtain 1.2 × 10^-3 I_0^-n at 500 Hz. The two values are of the same order of magnitude. Furthermore, n = 3.6 × 10^-1 was obtained from the data of CM and GM, compared to Riesz's value of approximately 3.2 × 10^-1.

When we turned our attention to psychophysical adaptation, it was demonstrated that the data of Békésy could be fitted using a value of the exponent equal to n = p - q = 3.2 × 10^-1. Since the data were obtained for an 800 Hz tone, we see that this is in good agreement with Riesz's value of about 2.9 × 10^-1. Furthermore, Békésy's values of p = 4.1 × 10^-1 and q = 9.8 × 10^-2 are in good agreement with the equation given by the model of equilibrium receptor memory in Appendix A (q = p/4, cf. Eq. (A15)).

In the last chapter, the data of Riesz were also used to predict the magnitude estimation data of Luce and Mo with only one additional scaling parameter.

This is perhaps only the second time in the history of the sensory sciences that a common set of parameters has ever been found for one modality. The first attempt was made by Norwich in 1993 for gustation using NaCl data.

Thus, it appears that there is indeed some coherence between the parameters obtained from the different psychoacoustical experiments. However, there still remains the problem of obtaining a complete set of results for a single subject. This task must also be repeated for the other modalities as well.

8.3 The Threshold Associated with Simple Reaction Time

As mentioned earlier, measurements of reaction times are not usually associated with threshold phenomena. Consequently, we divided the material into two separate chapters (5 and 6). However, we have demonstrated that there is also a threshold associated with reaction times. Furthermore, it was postulated that this threshold is governed by the same condition as are the differential and the absolute thresholds. The concept that reaction times and threshold phenomena are two sides of the same coin is a relatively new idea.

8.4 Two Remaining Points...

(1) The original entropy theory put forth the idea that perception can only occur when there is a gain in information. However, we saw in Section 5.3.4 that perception can also occur when information is lost. This opens up an entirely new way of interpreting the nature of perception.

(2) Two parameters have been identified representing the fundamental lower limits to sensory ability. I_t represents the smallest signal which is detectable by the sensory system; Δt_r represents the shortest time for which the sensory system can process stimulus information. Both terms are linear functions of the information threshold, ΔH.

Chapter 9 Conclusions

9.1 General Remarks

This thesis is primarily about unification. However, unification is only possible when the underlying conceptual foundation has been elucidated.

The ideas of the sensory entropy theory, as pioneered by Professor Kenneth Norwich, have been pivotal in establishing what may be the very first conceptual and mathematical theory of sensory perception.

Based upon the hypothesis that entropy or uncertainty is the primary "state variable" of perception, we are able to demonstrate, with entropy theory, that much if not all of basic sensory studies can be understood in terms of a single derived equation.

In this thesis, I have attempted to continue the work of Prof. Norwich by extending the classical entropy theory. Having developed what I termed the universal entropy theory as a generalization of the classical theory, I then demonstrated that this theory is capable of accounting for (1) almost all of the experimentally observed neural phenomena issuing from the response of an isolated sensory receptor with its associated primary afferent neuron; and (2) almost all of the empirical results observed in studies on thresholds and reaction times.

Of particular importance is the introduction of the threshold hypothesis and the idea that there is a threshold governing reaction times.

9.2 Specific Remarks

In this thesis, I sought to demonstrate that many of the sensory equations discovered empirically issue from a single equation derived entropically. This equation relates the firing frequency F of a primary afferent sensory neuron to both the stimulus intensity I and the stimulus duration t. That is, F = F(I, t).

F(I, t) represents a sensory input/output "transfer function": given any time-varying input I, the corresponding biological output or response F can be predicted.

Several highlights from the thesis include:

9.2.1 Neurophysiological Phenomena

The single equation, which is termed the neural entropy equation, is capable of predicting:

(1) The neural adaptation effect (F vs. t) and the driven neural response (F vs. I). For example, the neural entropy equation gives a good theoretical fit to the data of Brachman and Smith (see Smith, 1988) measured from the inner hair cells of the Mongolian gerbil.

(2) The effects of the stimulus rise time on the neural adaptation curve, as well as the de-adaptation response, incremental response and spontaneous neural activity.

Much of (1) and (2) can be found in a recent publication (Norwich and Wong, 1995).

9.2.2 Psychophysical Phenomena

The same equation can also account for psychophysical phenomena such as

(1) Psychoacoiistical adaptation as measured by Békésy (1929), rnatching the loiid- ness of two tones of dinerent durationsr

(2) Magnitude estimation data of, say, Luce and Mo (1965).

Furthermore, when combined with a threshold hypothesis, this equation can be used to derive

(3) The Weber fraction equation of Knudsen (1923) and Riesz (1928);

(4) The curves of Garner and Miller (1947) and Roufs (1971) measuring the Weber fraction vs. increment duration;

(5) The absolute threshold equations of Plomp and Bouman (1959) and Zwislocki (1960);

(6) Miller's observation (1947) that the Weber fraction curve must fall with slope -1 near threshold on a log(ΔI/I) vs. log(I) plot;

(7) Bloch's law and Garner's (1947) modification to Bloch's law;

(8) Piéron's law for simple reaction times.

The theory also identifies a new parameter δI which represents a signal internal to the sensory organ. δI determines the absolute threshold I_thresh, and I_thresh ∝ δI. For example, Zwislocki (1965) postulated that δI ≈ 2.5 I_thresh. This result has now been derived in my thesis.

Some of the results mentioned above have been published in Wong and Norwich (1995), Norwich and Wong (1996) and Wong and Norwich (1996).

A guide to, and summary of, the developments in this thesis is shown in Fig. 25.

Figure 25: A guide to this thesis. [Flowchart linking: Introduction to Sensory Entropy Theory (Chapter 3); Classical Equation (Section 3.1); Derivation of the Universal Equation; Application to Psychophysical Phenomena (Chapter 7); Quantum Threshold Hypothesis; Application to Sensory Thresholds (Chapter 5); Application to Simple Reaction Time (Chapter 6).]

Appendix A A Model of Equilibrium Receptor Memory

In this appendix a simple model is developed which can account for the power dependence of m_eq on I (see Eq. (38)). Actually, it suffices to show that m_eq ∝ I^q, since we can treat the term I + δI as a combined intensity.

A.1 Physical Model

In the top diagram of Fig. 26, we see an explicit time sequence of receptor samples. Each cross represents one sampling, and the sample size m increases with each interval. We shall suppose that each sampling represents a measurement made by, say, an olfactory receptor. These measurements would reflect the density of odorant molecules in a local receptor "subvolume". Thus, a typical sequence of samples might be similar to the one found in the lower diagram of Fig. 26.

These numbers were drawn from a Poisson distribution (assuming the odorant molecules follow ideal gas statistics) with mean = 4. I have circled all instances of the number '3' for later use.

It is assumed that the receptor takes its first sampling as a "standard". All future samplings are to be compared to this one standard. In the example shown in Fig. 26, the density value of '3' is taken as the standard. We now postulate that the receptor looks for a repetitive pattern in the density sampling sequence. That is, the receptor is searching for repetitions of the standard value (in this case '3') in its sampling sequence.

Figure 26: A time sequence of receptor samples. Each value represents one sampling. In this case, the receptor is counting the number of times the value '3' is measured.

How many repetitions might the receptor look for? Let us say that the receptor counts φ occurrences of '3' during the course of adaptation. m_eq can then be defined as the number of samples required to count φ occurrences of the standard '3' for a particular mean density (in this case, mean = 4). When m = m_eq, it is assumed that the receptor no longer increases its sample size.

Continuing our example, we see that if φ = 3 then m_eq = 4. There are 3 occurrences of the value '3' by the fourth sample (in total there are 4), and thus m_eq takes on the value of 4.

One might also wish to refine the definition of m_eq by averaging it over all possible initial values or standards, so as to produce an average value m̄_eq. We shall proceed by this method.
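The counting model lends itself to a direct Monte Carlo check. The sketch below is an illustration of the procedure just described, not the program used for Fig. 27; whether the first draw itself counts as an occurrence is one reading of the example in the text. For Poisson sampling, the derivation of Section A.2 predicts m̄_eq ∝ λ^(1/4), i.e. a log-log slope near 0.25.

```python
import numpy as np

# Monte Carlo sketch of the counting model described above (not the program
# used for Fig. 27).  The first draw supplies the "standard"; the N-th sample
# contributes N new draws; a trial stops once PHI occurrences of the standard
# have been counted (the first draw itself counts as one occurrence, which is
# one reading of the example in the text).
rng = np.random.default_rng(1)
PHI, N_MAX = 50, 200
T = N_MAX * (N_MAX + 1) // 2            # total draws needed for N_MAX samples

def one_trial(lam):
    draws = rng.poisson(lam, T)
    standard = draws[0]
    occ = np.cumsum(draws == standard)  # occurrences after each draw
    ends = np.arange(1, N_MAX + 1) * np.arange(2, N_MAX + 2) // 2 - 1
    hit = np.nonzero(occ[ends] >= PHI)[0]
    return hit[0] + 1 if hit.size else N_MAX   # cap rare slow trials

lams = np.array([20.0, 40.0, 80.0, 160.0])
m_eq = [np.mean([one_trial(lam) for _ in range(400)]) for lam in lams]
slope, _ = np.polyfit(np.log10(lams), np.log10(m_eq), 1)
```

The fitted log-log slope should land near the predicted 0.25, in the same way that the best-fit line of Fig. 27 does.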

A.2 Derivation of the Power Dependence of Receptor Memory

Let us now look at the problem in the abstract. Denoting by i the density value obtained in the first sample, i becomes the value of the standard. The probability of sampling a value i is given by the Poisson distribution p(i; λ), where λ is the mean density:

p(i; λ) = λ^i e^(-λ) / i!

For the Nth sample, there will be N samplings drawn from the population. On average, there will be N p(i; λ) occurrences of i. Of course, one must remember that N p(i; λ) should be an integer; this condition is satisfied if N is large.

We require that after m = m_eq samples, the counted number of occurrences of i is φ:

Since p(i; λ) remains constant in the sum, we obtain

Σ_{N=1}^{m_eq} N p(i; λ) = p(i; λ) m_eq(m_eq + 1)/2 = φ.

Solving for m_eq gives

m_eq = [-1 + √(1 + 8φ/p(i; λ))] / 2.

Since m_eq ≥ 0, we have discarded the negative root. We can now average m_eq over all possible initial conditions,

To demonstrate that Eq. (A5) gives a power dependence of m̄_eq on the mean density λ, we must make several approximations. Writing out Eq. (A5) in full, we have

In general, 8φ/p(i; λ) >> 1, because φ ≥ 1 and p(i; λ) < 1 by definition. Therefore,

We shall now adopt the large-λ approximation, so that the Poisson distribution can be approximated by the normal distribution with variance equal to mean. Thus,

Since the step size is small compared to the full range of summation, we can approximate the sum by an integral,

Using this result, we obtain an expression involving the error function. Since both λ and σ will tend to be large, we can take erf(·) ≈ 1 and drop the final term of 1 to obtain, finally,

which is the power dependence we were looking for when we take I = λ.
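The chain of approximations can be condensed into one line (a sketch in the notation of this appendix, using the normal approximation p(i; λ) ≈ (2πλ)^(-1/2) exp(-(i - λ)²/(2λ)) and dropping the constant and error-function corrections):

```latex
\bar{m}_{eq} \approx \sum_i p(i;\lambda)\,\sqrt{\frac{2\phi}{p(i;\lambda)}}
 = \sqrt{2\phi}\,\sum_i \sqrt{p(i;\lambda)}
 \approx \sqrt{2\phi}\int_{-\infty}^{\infty} (2\pi\lambda)^{-1/4}
   e^{-(i-\lambda)^2/(4\lambda)}\,di
 = \sqrt{2\phi}\,(2\pi\lambda)^{-1/4}\cdot 2\sqrt{\pi\lambda}
 \;\propto\; \phi^{1/2}\,\lambda^{1/4},
```

which reproduces q = 1/4 for Poisson statistics and, when the variance instead grows as a power p of the mean, generalizes to q = p/4.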

This equation can be compared to values obtained by simulation. In Fig. 27, the simulation results are plotted.

Taking the logarithm of both sides of Eq. (A12), we obtain

Substituting the value of φ = 50 into this equation, we obtain a slope of 0.25 and an intercept of 3.1. The best-fit line in Fig. 27 has slope 2.0 × 10^-1 ± 3 × 10^-2 and intercept 3.1 ± 1 × 10^-1.

Eq. (A12) constrains the value of q to be 0.25. However, we do not have to restrict ourselves to a Poisson distribution; a general normal distribution can be used in its place. In this case, the variance does not necessarily equal the mean.

However, if the variance is related to the mean as a power, σ² ∝ λ^p (cf. Eq. (36)), repeating the above calculations for the normal distribution would yield q = p/4 (Eq. (A15)).

Figure 27: Simulation of the equilibrium receptor memory model with φ = 50. The best-fit straight line has approximately the same slope and intercept as predicted by Eq. (A12).

Appendix B Deriving the Neural Response to a Double Step Input

We can easily solve for the neural response found in Fig. 17, given the intensity profile in Fig. 16.

The response in region 1 is given simply by Eq. (44), with the time origin shifted from t = 0 to t = -τ.

To obtain the response in region 2, we first assume that τ is of sufficient length that the receptor has completely adapted to the pedestal. Thus, m(0) = m_eq Ī^q, where Ī = I + δI, and, referring to Eqs. (38), (40) and (43), we have

m(t) = Ī^q e^(-at) + (Ī + ΔI)^q (1 - e^(-at))
     = Ī^q [e^(-at) + (1 + Y)^q (1 - e^(-at))],

where Y = ΔI/Ī and the new equilibrium memory is m′_eq = m_eq (Ī + ΔI)^q.

Substituting m(t) into Eq. (37), we obtain for the receptor entropy in region 2 the expression with constants B = β/m′_eq² and C = βĪⁿ. Once again, we have set β′ = β/m′_eq².

Appendix C Deriving the Differential Threshold Equation for the Continuous Increment

In this appendix, we derive Eq. (52) from Eqs. (50) and (51).

Using Eq. (51), we can evaluate the difference in H over the interval [t_r, Δt], where we have used the approximation t_r ≈ 0, since t_r << Δt. This difference must exceed ΔH for detectability. Equating the two quantities, we obtain

Here we make the crucial assumption that Y = ΔI/(I + δI) is small. Recall that δI is greater than ΔI (see Section 8.1). In general, even as I → 0, Y << 1 is still not a bad approximation, because ΔI < δI. The validity of these approximations will become clearer as we proceed. Using Y << 1, we can expand the right-hand side of Eq. (C3) to first order in Y to obtain

The approximation of Eq. (C3) by Eq. (C4) may be verified manually, or by entering Eq. (C3) as an expression in a symbolic manipulator (e.g. Maple) and requesting a first-order Taylor series approximation. We have included a Maple worksheet detailing this calculation in Appendix F.
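The same first-order expansion can be reproduced with any modern symbolic package. The snippet below uses SymPy in place of Maple and expands a single representative factor, (1 + Y)^n; the thesis's full expression in Eq. (C3) is longer and is not reproduced here.

```python
import sympy as sp

# First-order Taylor expansion in the small quantity Y, mirroring the Maple
# step described in the text.  (1 + Y)**n is a single representative factor
# of the kind appearing in Eq. (C3); the full expression is longer.
Y, n = sp.symbols('Y n', positive=True)
first_order = sp.series((1 + Y)**n, Y, 0, 2).removeO()   # expands to 1 + n*Y
```

Requesting two terms of the series and discarding the order symbol is the SymPy equivalent of Maple's first-order `taylor`/`simplify` step.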

If ΔH is also small, then using e^(2ΔH) ≈ 1 + 2ΔH, we could write, approximately,

or,

which is Eq. (52).

Appendix D Deriving Eq. (59)

Once again we evaluate the difference in H over the interval [t_r, Δt] using Eq. (51), to obtain

where ζ = BΔIⁿ, χ = (δI/ΔI)^q and x = e^(-aΔt). Solving Eq. (D1) for e^(-aΔt) gives

Recall that q < p, since n = p - q is always positive from our experience. Furthermore, 0 < q < 1 (see Section 3.2.1), and x^(1/q) is of power greater than 1. Solving Eq. (D2) for Δt and taking a first-order expansion in x, we obtain

A further expansion to first order in ΔH yields

Substituting for ζ, χ and x, we obtain, finally,

which is Eq. (59).

A Maple worksheet detailing this calculation is shown in Appendix F.

Appendix E The Absolute Threshold of Human Hearing

The absolute threshold I_thresh for human hearing has been measured by many investigators. In Fig. 28, the data of Wegel (1932) are shown. Notice that the threshold is a function of sound frequency.

We fitted his data with the empirical equation

10 log₁₀(I_thresh/I_0) = 17.37 [log₁₀(f)]^(-2) + 3.468 × 10^(-6) [log₁₀(f)]^(11.43) - 1,    (E1)

where I_0 is the standard intensity 10^(-12) W/m². The data and fitted curve are shown in Fig. 28.

Figure 28: The absolute threshold measured for different sound frequencies. Data of Wegel (1932). The data were fitted by the empirical Eq. (E1).

Appendix F Maple Worksheets

On the following pages, the Maple worksheets detailing the calculations in Appendices C and D are shown.

Page 126 shows the derivation of the differential threshold Eq. (C5) (Appendix C), and pages 127 and 128 show the derivation of the reaction time Eq. (D5) (Appendix D).

For convenience, the following parameters were renamed: ΔH = DH and Δt = Dt.

All other parameters and variables remained unchanged.

[The Maple V Release 3 worksheets, including the simplify() steps, are not reproduced here.]

Appendix G

All fitted curves were obtained using a computer program implementing the "simplex algorithm" proposed by Nelder and Mead (1965). This method minimizes the sum-of-square-error (SSE) between the measured data and the theoretical curve. The square error is calculated by taking the difference between the fitted value, y_theor, and the measured value for the ith pair of the N measured points; the sum-of-square-error is defined as the sum of these squared differences over all N points. The parameters are varied until a minimum in SSE is achieved.
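A minimal modern equivalent of this procedure can be sketched with SciPy, whose `minimize(..., method="Nelder-Mead")` implements the same Nelder-Mead simplex. The three-parameter model below, y = k log(1 + βIⁿ), is a stand-in sharing the parameter names (k, β, n) of the Stevens (1969) example later in this appendix; it is not the thesis's actual fitted equation.

```python
import numpy as np
from scipy.optimize import minimize

# Sum-of-square-error minimization with the Nelder-Mead simplex, as described
# in this appendix.  The model y = k*log(1 + beta*I**n) is a stand-in with the
# same parameter names (k, beta, n) as the Stevens (1969) example; it is not
# the thesis's actual fitted equation.
def model(I, k, beta, n):
    return k * np.log(1.0 + beta * I**n)

def sse(params, I, y):
    k, beta, n = params
    if k <= 0.0 or beta <= 0.0:     # keep the simplex in the physical region
        return 1e12
    return np.sum((model(I, k, beta, n) - y) ** 2)

rng = np.random.default_rng(2)
I = np.logspace(0, 3, 15)
y = model(I, 41.56, 0.0986, 1.489) + rng.normal(0.0, 0.5, I.size)

fit = minimize(sse, x0=[30.0, 0.05, 1.2], args=(I, y), method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 5000})
k_hat, beta_hat, n_hat = fit.x
```

Because Nelder-Mead uses only function values, no derivatives of the model are required, which is what made the method attractive for fitting awkward transcendental equations.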

The simplex process can be generalized to any number of theoretical curves and sets of measured data. A combined SSE can be obtained by adding together the individual SSEs. A common set of parameters can be obtained by minimizing the combined sum-of-square-error.

Sensitivity analysis is an integral part of determining the robustness of fitted parameters. Ideally, the SSE shows a strong global minimum with respect to one set of parameters. In other words, a small change in parameter values produces a large change in SSE. However, in reality, both the scatter in the data as well as the complexity of the fitted equation can give rise to an error space which contains many local minima. Since the simplex algorithm can guarantee only finding a local (and not a global) minimum, how can we be sure that the fitted parameters give the lowest SSE? Furthermore, if several minima show approximately the same sum-of-square-error, which set of parameters should be chosen as representative?

While the theory of sensitivity analysis has been developed quite extensively for dynamical systems and is relatively easy to use (e.g. see Hearne, 1985), sensitivity analysis for more complicated equations or systems of equations requires much greater effort (Press et al., 1986).

One method to evaluate the robustness of parameters is to repeat the curve-fitting procedure several times with significantly different starting points for each parameter value. If approximately the same final values are obtained each time, then we can be reasonably sure that these values characterize a global minimum. This is the method adopted for this thesis.
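The multi-start strategy can be sketched as follows: minimize the same SSE from widely separated starting simplexes and compare the final parameter values. The model y = k Iⁿ and its true values (k = 2, n = 0.5) are illustrative stand-ins, not the fits of this thesis.

```python
import numpy as np
from scipy.optimize import minimize

# Multi-start robustness check, as described above: minimize the same SSE
# from widely separated starting points and compare the final values.
# The model y = k*I**n (true k = 2, n = 0.5) is an illustrative stand-in.
def sse(params, I, y):
    k, n = params
    return np.sum((k * I**n - y) ** 2)

rng = np.random.default_rng(3)
I = np.linspace(1.0, 10.0, 20)
y = 2.0 * I**0.5 + rng.normal(0.0, 0.05, I.size)

starts = [(0.1, 2.0), (10.0, 0.1), (1.0, 1.0), (5.0, 3.0)]
finals = np.array([minimize(sse, x0, args=(I, y), method="Nelder-Mead",
                            options={"xatol": 1e-10, "fatol": 1e-10,
                                     "maxiter": 10000}).x
                   for x0 in starts])
spread = np.ptp(finals, axis=0)   # max - min over restarts, per parameter
```

A small spread across restarts that begin an order of magnitude apart is the practical evidence, used in the tables below, that a global minimum has been found.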

To take an example, the data of Stevens (1969) from Fig. 11 were fitted 6 times in log-log coordinates, with initial parameter values differing by at least one order of magnitude. The results show that the final parameter values never differ by more than 4 parts in 10^4:

Initial values (k, β, n)     Final values (k, β, n, SSE)
0.1   1     10      41.5604    0.0985774   1.488741   0.3099874
0.1   10    1       41.54814   0.0986096   1.488846   0.3099875
1     0.1   10      41.54536   0.0986133   1.488829   0.3099874
1     10    0.1     41.54431   0.0986163   1.488873   0.3099874
10    0.1   1       41.54334   0.0986183   1.488868   0.3099875
10    1     0.1     41.55325   0.0985915   1.488859   0.3099875

The second part of the analysis involves looking at how a small change in parameter value will influence the sum-of-square-error. Summarized in the following table are the results of changing the values of the parameters, one at a time, by ±10% from their final values, and looking at the corresponding change in SSE. The values k = 41.5604, β = 0.0985774, n = 1.488741 and SSE = 0.3099874 were chosen as representative final values:

          k          β           n           SSE         % change in SSE
k + 10%   45.71644   -           -           0.3917878   26%
k - 10%   37.40436   -           -           0.4098463   32%
β + 10%   -          0.108435    -           0.3555062   15%
β - 10%   -          0.0887195   -           0.3668767   18%
n + 10%   -          -           1.637615    0.4760579   54%
n - 10%   -          -           1.3398669   0.5062194   63%
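The ±10% perturbation analysis is easy to automate. The sketch below applies it to an illustrative stand-in model (y = k Iⁿ with k = 2, n = 0.5), not to the Stevens (1969) fit itself.

```python
import numpy as np

# Automating the +/-10% sensitivity table above for an illustrative stand-in
# model (y = k*I**n, k = 2, n = 0.5), not the Stevens (1969) fit itself.
def sse(k, n, I, y):
    return np.sum((k * I**n - y) ** 2)

rng = np.random.default_rng(4)
I = np.linspace(1.0, 10.0, 20)
y = 2.0 * I**0.5 + rng.normal(0.0, 0.05, I.size)

base = sse(2.0, 0.5, I, y)
perturbed = {"k + 10%": (2.2, 0.5), "k - 10%": (1.8, 0.5),
             "n + 10%": (2.0, 0.55), "n - 10%": (2.0, 0.45)}
changes = {label: 100.0 * (sse(k, n, I, y) - base) / base
           for label, (k, n) in perturbed.items()}
# A fit is judged robust when small parameter changes inflate the SSE sharply.
```

Every perturbation should inflate the SSE; the sharper the inflation, the more tightly the data pin down that parameter.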

From this table, we see that a 10% change in parameter values produces at least a 15% change in SSE. This is a good indication that the fitted values are robust.

Bibliography

[1] Adrian, E.D. and Zotterman, Y. (1926). "The impulses produced by sensory nerve endings. Part II. The response of a single end-organ," Journal of Physiology 61, 151-171.
[2] Aebersold, B., Norwich, K.H. and Wong, W. (1993). "Density fluctuation in Brownian motion and its significance in olfaction," Mathematical and Computer Modelling 18, 19-30.
[3] Algom, D. and Babkoff, H. (1984). "Auditory temporal integration at threshold: Theories and some implications of current research," in Contributions to Sensory Physiology, Vol. 8, edited by W.D. Neff (Academic Press, New York).
[4] Awiszus, F. and Schäfer, S.S. (1993). "Subdivision of primary afferents from passive cat muscle spindles based on a single slow-adaptation parameter," Brain Research 612, 110-114.
[5] von Békésy, G. (1929). "Zur Theorie des Hörens: Über die Bestimmung des einem reinen Tonempfinden entsprechenden Erregungsgebietes der Basilarmembran vermittelst Ermüdungserscheinungen," Physikalische Zeitschrift 30, 115-125.
[6] Bloch, A.M. (1885). "Expériences sur la vision," C.R. Soc. Biol. 2, 493-495.
[7] Bohnenberger, J. (1981). "Matched transfer characteristics of single units in a compound slit sense organ," Journal of Comparative Physiology A 142, 391-402.
[8] Borg, G., Diamant, H., Ström, L. and Zotterman, Y. (1967). "The relation between neural and perceptual intensity: A comparative study on the neural and psychophysical response to taste stimuli," Journal of Physiology 192, 13-20.
[9] Brillouin, L. (1962). Science and Information Theory, 2nd edition (Academic Press, New York).
[10] Carlyon, R.P. and Moore, B.C.J. (1986). "Continuous versus gated pedestals and the 'severe departure' from Weber's law," Journal of the Acoustical Society of America 79, 453-460.
[11] Chocholle, R. (1940). "Variations des temps de réaction auditifs en fonction de l'intensité à diverses fréquences," L'Année Psychologique 41, 65-124.
[12] Coren, S. and Ward, L.M. (1989). Sensation and Perception (Harcourt Brace Jovanovich, Orlando).
[13] Cornsweet, T.N. and Pinsker, H.M. (1964). "Luminance discrimination of brief flashes under various conditions of adaptation," Journal of Physiology 176, 294-310.
[14] Dudel, J. (1986). "General sensory physiology, psychophysics," in Fundamentals of Sensory Physiology, 3rd ed., edited by R.F. Schmidt (Springer-Verlag, New York), p. 97.
[15] Durlach, N.I. and Braida, L.D. (1969). "Intensity perception. I. Preliminary theory of intensity resolution," Journal of the Acoustical Society of America 46, 372-383.
[16] Fechner, G.T. (1860/1966). Elements of Psychophysics, edited by D.H. Howes and E.G. Boring, translated by H.E. Adler (Holt, Rinehart & Winston, New York).
[17] Fletcher, H. and Munson, W.A. (1933). "Loudness, its definition, measurement and calculation," Journal of the Acoustical Society of America 5, 82-108.
[18] Fraser, D.A.S. (1976). Probability and Statistics (DAI Press, Toronto).
[19] Galambos, R. and Davis, H. (1943). "The response of single auditory nerve fibers to acoustic stimulation," Journal of Neurophysiology 6, 39-57.
[20] Garner, W.R. (1947). "The effect of frequency spectrum on temporal integration in the ear," Journal of the Acoustical Society of America 19, 808-815.
[21] Garner, W.R. and Miller, G.A. (1947). "Differential sensitivity to intensity as a function of the duration of the comparison tone," Journal of Experimental Psychology 34, 450-463.
[22] Gent, J.F. and McBurney, D.H. (1978). "The course of gustatory adaptation," Perception and Psychophysics 23, 171-175.
[23] Halpern, B.P. (1986). "Constraints imposed on taste physiology by human taste reaction time data," Neuroscience & Biobehavioral Reviews 10, 135-151.
[24] Hartline, H.K. and Graham, C.H. (1932). "Nerve impulses from single receptors in the eye," Journal of Cellular and Comparative Physiology 1, 277-295.
[25] Hearne, J.W. (1985). "Sensitivity analysis of parameter combinations," Applied Mathematical Modelling 9, 106-108.
[26] Hecht, S. (1935). "A theory of visual intensity discrimination," Journal of General Physiology 18, 767-789.
[27] Hecht, S., Shlaer, S. and Pirenne, M. (1942). "Energy, quanta and vision," Journal of General Physiology 25, 819-840.
[28] Hellman, R.P. and Zwislocki, J. (1961). "Some factors affecting the estimation of loudness," Journal of the Acoustical Society of America 33, 687-694.
[29] Hellman, W.S. and Hellman, R.P. (1990). "Intensity discrimination as the driving force for loudness. Application to pure tones in quiet," Journal of the Acoustical Society of America 87, 1253-1265.
[30] Hellman, W.S. and Hellman, R.P. (1995). "Effect of the shape of the loudness function on the Weber fraction: Predictions and measurements," Abstract #220, ARO Midwinter Meeting.
[31] Hughes, J.W. (1946). "The threshold of audition for short periods of stimulation," Proceedings of the Royal Society of London, Series B 133, 486-490.
[32] Knudsen, V.O. (1923). "The sensibility of the ear to small differences in intensity and frequency," Physical Review 21, 84-102.
[33] König, A. and Brodhun, E. (1889). "Experimentelle Untersuchungen über die psychophysische Fundamentalformel in Bezug auf den Gesichtssinn," Sitzungsber. Preuss. Akad. Wiss. 27, 641-644.
[34] Luce, R.D. and Mo, S.S. (1965). "Magnitude estimation of heaviness and loudness by individual subjects: A test of a probabilistic response theory," The British Journal of Mathematical and Statistical Psychology 18, 159-174.
[35] Marks, L. (1974). "On scales of sensation: Prolegomena to any future psychophysics that will be able to come forth as science," Perception and Psychophysics 16, 358-376.
[36] Matthews, B.H.C. (1931). "The response of a single end organ," Journal of Physiology 71, 64-110.
[37] Miller, G.A. (1947). "Sensitivity to changes in the intensity of white noise and its relation to masking and loudness," Journal of the Acoustical Society of America 19, 609-619.
[38] Nelder, J.A. and Mead, R. (1965). "A simplex method for function minimization," Computer Journal 7, 308-313.
[39] Norwich, K.H. (1977). "On the information received by sensory receptors," Bulletin of Mathematical Biology 39, 453-461.
[40] Norwich, K.H. (1983). "To perceive is to doubt: The relativity of perception," Journal of Theoretical Biology 102, 175-190.
[41] Norwich, K.H. (1987). "On the theory of Weber fractions," Perception and Psychophysics 42, 286-298.
[42] Norwich, K.H. (1993). Information, Sensation and Perception (Academic Press, San Diego).
[43] Norwich, K.H. and McConville, K.M.V. (1991). "An informational approach to sensory adaptation," Journal of Comparative Physiology A 168, 151-157.
[44] Norwich, K.H., Seburn, C.N.L. and Axelrad, E. (1989). "An informational approach to reaction times," Bulletin of Mathematical Biology 51, 347-358.
[45] Norwich, K.H. and Wong, W. (1995). "A universal model of single-unit sensory receptor action," Mathematical Biosciences 125, 83-108.
[46] Norwich, K.H. and Wong, W. (1996). "Unification of psychophysical phenomena," Perception and Psychophysics. Under second revision.
[47] Piéron, H. (1956). The Sensations: Their Functions, Processes and Mechanisms, translated by M.H. Pirenne (J. Garnet Miller, London).
[48] Plomp, R. and Bouman, M.A. (1959). "Relation between hearing threshold and duration for tone pulses," Journal of the Acoustical Society of America 31, 749-758.
[49] Press, W.H., Flannery, B.P., Teukolsky, S.A. and Vetterling, W.T. (1986). Numerical Recipes: The Art of Scientific Computing (Cambridge University Press, New York).
[50] Riesz, R.R. (1928). "Differential intensity sensitivity of the ear for pure tones," Physical Review, Series 2, 31, 867-875.
[51] Roufs, J.A.J. (1971). "Session C: Discussion," in The Perception and Application of Flashing Lights (University of Toronto Press, Toronto), pp. 163-166.
[52] Roufs, J.A.J. (1971). "Threshold perception of flashes in relation to flicker," in The Perception and Application of Flashing Lights (University of Toronto Press, Toronto), pp. 29-42.
[53] Saglam, M.A., Marmarelis, V.Z. and Berger, T.W. (1995). "Approximation of input/output relation of a biological neural system by feedforward neural nets," Annals of Biomedical Engineering 23 (Abstract #393), S-83.
[54] Scharf, B. (1983). "Loudness adaptation," in Hearing Research and Theory, edited by J.V. Tobias (Academic, New York).
[55] Schmidt, R.F. (1986). "Somatovisceral sensibility," in Fundamentals of Sensory Physiology, 3rd ed., edited by R.F. Schmidt (Springer-Verlag, New York), p. 89.
[56] Schrödinger, E. (1989). What is Life? & Mind and Matter (Cambridge University Press, Cambridge).
[57] Schroeder, M.R. and Hall, J.L. (1974). "Model for mechanical to neural transduction in the auditory receptor," Journal of the Acoustical Society of America 55, 1055-1060.
[58] Shannon, C.E. (1948). "A mathematical theory of communication," Bell System Technical Journal 27, 379-423.
[59] Siebert, W.M. (1968). "Stimulus transformations in the peripheral auditory system," in Recognizing Patterns, edited by P.A. Kolers and M. Eden (MIT Press, Cambridge, MA).
[60] Smith, R.L. (1988). "Encoding of sound intensity by auditory neurons," in Auditory Function: Neurobiological Bases of Hearing, edited by G.M. Edelman, W.E. Gall and W.M. Cowan (Wiley, New York), pp. 243-274.
[61] Smith, R.L. and Brachman, M.L. (1980). "Dynamic response of single auditory-nerve fibers: Some effects of intensity and time," in Psychophysical, Physiological and Behavioural Studies in Hearing, edited by G. van den Brink and A. Bilsen (Delft University Press, Delft), pp. 312-319.
[62] Smith, R.L. and Zwislocki, J.J. (1975). "Short-term adaptation and incremental responses of single auditory-nerve fibers," Biological Cybernetics 17, 169-182.
[63] Stevens, S.S. (1936). "A scale for the measurement of a psychological magnitude: loudness," Psychological Review 43, 405-416.
[64] Stevens, S.S. (1961). "To honor Fechner and repeal his law," Science 133, 80-86.
[65] Stevens, S.S. (1969). "Sensory scales of taste intensity," Perception and Psychophysics 6, 302-308.
[66] Stevens, S.S. (1970). "Neural events and the psychophysical law," Science 170, 1043-1050.
[67] Viemeister, N.F. (1988). "Psychophysical aspects of auditory intensity coding," in Auditory Function: Neurobiological Bases of Hearing, edited by G.M. Edelman, W.E. Gall and W.M. Cowan (Wiley, New York), pp. 213-241.
[68] Ward, L.M. and Davidson, K.P. (1993). "Where the action is: Weber fractions as a function of sound pressure at low frequencies," Journal of the Acoustical Society of America 94, 2587-2594.
[69] Wasserman, G.S. and Kong, K.-L. (1974). "Illusory correlation of brightness enhancement and transients in the nervous system," Science 184, 911-913.
[70] Wegel, R.L. (1932). "Physical data and physiology of excitation of the auditory nerve," Annals of Otology, Rhinology and Laryngology 41, 740-779.
[71] Wong, W. (1993). On the Physics of Perception (MSc Report, Department of Physics, University of Toronto).
[72] Wong, W. and Norwich, K.H. (1995). "Obtaining equal loudness contours from Weber fractions," Journal of the Acoustical Society of America 97, 3761-3767.
[73] Zwislocki, J.J. (1960). "Theory of temporal auditory summation," Journal of the Acoustical Society of America 32, 1046-1060.
[74] Zwislocki, J.J. (1965). "Analysis of some auditory characteristics," in Handbook of Mathematical Psychology, Vol. II, edited by R.R. Bush, E. Galanter and R.D. Luce (Wiley, New York).
[75] Zwislocki, J.J. (1973). "On the intensity characteristics of sensory receptors: A generalized function," Kybernetik 12, 169-183.