Minimum Description Length Model Selection in Associative Learning

C. Randy Gallistel (1) and Jason T. Wilkes (2)

(1) Rutgers Center for Cognitive Science, 152 Frelinghuysen Rd, Piscataway, NJ 08854-8020, United States
(2) Department of Psychology, UCSB, Santa Barbara, CA, United States
Corresponding author: Gallistel, C. Randy ([email protected])

Current Opinion in Behavioral Sciences 2016, 11:8–13
This review comes from a themed issue on Computational modeling, edited by Peter Dayan and Daniel Durstewitz.
http://dx.doi.org/10.1016/j.cobeha.2016.02.025
2352-1546/Published by Elsevier Ltd.

Two principles of information theory, maximum entropy and minimum description length, motivate a computational model of associative learning that explains assignment of credit, response timing, and the parametric invariances. The maximum entropy principle gives rise to two distributions, the exponential and the Gaussian, which naturally lend themselves to inference involving each of the two fundamental classes of predictors: enduring states and point events. These distributions are the 'atoms' from which more complex representations are built. The representation that is 'learned' is determined by the principle of minimum description length. In this theory, learning is a synonym for data compression: the representation of its experience that the animal learns is the representation that best allows the data of experience to be compressed.

Introduction

Associative learning, as studied in Pavlovian and operant conditioning paradigms, is an extensively studied and frequently modeled cognitive process. Most computational models of it are associative models; they assume that the learning consists of the alteration of connection strengths between elements of the mind (nodes) or brain (neurons). The assumption that changes in associative strength are realized in the brain by changes in synaptic conductances is explicit in most research aimed at establishing the neurobiological basis of learning and memory. Thus, the question whether the associative theories can be made to yield an account of well-established experimental results (and, if not, what alternative theories can) is important to progress in determining the neurobiological bases of learning and memory.

Though conditioning phenomena are commonly referred to as 'associative' learning, associative theories have failed for decades to capture many well-established experimental results. No associative theory known to us addresses all three of the following well-established bodies of experimental results: (1) the assignment of credit (aka cue competition); (2) the timing of the conditioned response; (3) the three parametric invariances: (i) time-scale invariance: trials to acquisition is proportional to the ratio of total CS time to total training time [1,2], which means, among other things, that for fixed total training time the number of CS–US pairings is irrelevant [3]; (ii) reinforced trials to acquisition is invariant under partial reinforcement [4]; (iii) omitted reinforcements to extinction is invariant under partial reinforcement during training [4].

For example, the models of Rescorla and Wagner [6], Mackintosh [7], and Pearce and Hall [8] address cue competition but neither response timing nor the parametric invariances, while associative models that do address response timing [9,10] address neither credit assignment nor the parametric invariances. Some temporal difference learning models have addressed, to some extent, both response timing and credit assignment [11,12], but they do not address the parametric invariances.

A step in the right direction has been provided by non-associative models of 'associative' learning [13,14,15]. These have the dual virtues of (i) framing behavior in conditioning paradigms as the product of normative statistical inference, and (ii) allowing the complexity of the statistical model formed by the brain to depend on the data. However, the support for the distributions in these normative-statistical-inference theories is trial types, and to the animal there is no such thing as a trial.

We have recently elaborated a non-associative, causal, algorithmic, real-time theory of associative learning [16] that addresses all three bodies of evidence.
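To make the compression idea concrete before turning to the theory itself: the sketch below is our generic illustration, not the authors' algorithm. A learner chooses between 'shocks occur at one background rate' and 'shocks occur at a different rate while the CS is on' by comparing two-part description lengths (parameter cost plus negative log likelihood). The BIC-style parameter charge and all numbers are assumptions made for the example.

```python
import math

def poisson_dl(n_events, duration, n_params, total_n):
    """Two-part code length in nats for a homogeneous Poisson model:
    a BIC-style parameter cost (an assumption of this sketch) plus the
    data cost, the negative log likelihood at the ML rate estimate."""
    rate = max(n_events / duration, 1e-12)  # ML rate; floor avoids log(0)
    nll = rate * duration - n_events * math.log(rate)
    return 0.5 * n_params * math.log(max(total_n, 2)) + nll

# Hypothetical session: 600 s total, CS on for 60 s in all, 6 shocks,
# every one of them delivered while the CS was on.
total_time, cs_time = 600.0, 60.0
shocks_in_cs, shocks_outside = 6, 0
n = shocks_in_cs + shocks_outside

# Model A: a single background rate governs the whole session.
dl_one_rate = poisson_dl(n, total_time, n_params=1, total_n=n)

# Model B: one rate while the CS is on, another for the remaining time.
dl_two_rates = (poisson_dl(shocks_in_cs, cs_time, 1, n)
                + poisson_dl(shocks_outside, total_time - cs_time, 1, n))

print(dl_one_rate, dl_two_rates)  # the CS-dependent model compresses better
```

Here the CS-dependent model pays for an extra parameter but more than recoups the cost in data compression, so credit goes to the CS; had the shocks been scattered indifferently across the session, the single-rate model would have won.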
It is founded on two principles of information theory: maximum entropy and minimum description length (MDL).

Core assumptions

We assume that the brain represents candidate predictors (possible causes of some event of interest) in one of two basic forms: all predictors are either states (such as a light being on or off) or point events (such as a light turning on or off). By definition, states are perceived to have duration, a temporal interior, while point events are perceived to be located at a point in time, with no measurable duration. States are simply continuous intervals, whose onsets and offsets are themselves point events. They therefore have the potential to convey rate information (e.g., 'the average rate of shock is 3 times per minute during state S'). Point events have the potential to convey timing information (e.g., 'a shock tends to occur approximately 10 seconds after the offset of state S').

The principle of maximum entropy [41,42] is a powerful method for translating qualitative prior information into quantitative form in a way that embodies the information fed into it, but no more. Given minimal assumptions, maximum entropy effectively builds a model for us out of a data set. For example, suppose we have a set of data points generated by a distribution of unknown form. We can make different (qualitative) assumptions about the distribution that generated the data and examine the resulting (quantitative) models. If we choose as our background assumptions a statement such as 'The mean is roughly m. That's all I know,' then the maximum entropy formalism tells us that our state of knowledge is most honestly represented by the exponential distribution. If we choose instead the statement 'The mean is roughly m. The variance is roughly v. That's all I know,' then the maximum entropy formalism tells us that our state of knowledge is most honestly represented by the Gaussian distribution.

The seemingly independent narratives of the two paragraphs above turn out to be related in a surprisingly elegant manner. As it happens, the first maximum entropy distribution, the exponential, is naturally suited to inference involving states, while the second, the Gaussian, is naturally suited to inference involving point events. This elegant relationship forms the foundation of our theory of associative learning:

One moment known (m) → exponential → states → rates.
Two moments known (m and v) → Gaussian → point events → times.

These two cognitive building blocks, states and point events (one-dimensional intervals and zero-dimensional points), are the atoms from which candidate representations of temporal experience are built. Given a conditioning protocol with a single CS, we have one state cue (the state 'CS = on') and two point cues (the events 'CS onset' and 'CS offset'). The experimental chamber itself constitutes an additional state cue, while the moments at which the animal enters and leaves it constitute additional point cues. Thus, even in a simple experiment involving a single CS, we already find ourselves with a nontrivial number of candidate predictors, and a correspondingly nontrivial number of candidate hypotheses as to what might be causing the USs the animal experiences in the chamber.
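The two maximum entropy claims above can be checked numerically from closed-form differential entropies. This is a sketch; the particular rival distributions (uniform, half-normal, Laplace) are our choices for comparison, and m and v are arbitrary example values.

```python
import math

m, v = 10.0, 4.0  # the assumed known mean and variance

# Candidates on [0, inf) sharing mean m: the exponential should have the
# largest differential entropy (nats).
h_exponential = 1.0 + math.log(m)
h_uniform_0_2m = math.log(2.0 * m)               # uniform on [0, 2m], mean m
sigma_hn = m * math.sqrt(math.pi / 2.0)          # half-normal scaled to mean m
h_half_normal = 0.5 * math.log(math.pi * sigma_hn**2 / 2.0) + 0.5

# Candidates on (-inf, inf) sharing mean m and variance v: the Gaussian
# should have the largest differential entropy.
h_gaussian = 0.5 * math.log(2.0 * math.pi * math.e * v)
h_laplace = 1.0 + math.log(2.0 * math.sqrt(v / 2.0))  # Laplace, variance v
h_uniform_var = 0.5 * math.log(12.0 * v)              # uniform, variance v

print(h_exponential, h_uniform_0_2m, h_half_normal)
print(h_gaussian, h_laplace, h_uniform_var)
```

In both families the maximum entropy distribution comes out on top, as the formalism guarantees: knowing only the mean of a positive quantity most honestly yields the exponential; knowing mean and variance yields the Gaussian.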
Figure 1. The time line at top shows the first two trials (CS presentations, white rectangles) of one of Rescorla's [17] protocols. The subject is placed in the box at Bon (gray background). The US occurs at latency L1 when its latency is measured from Bon, or L2 when measured from CSon. The state cues are the Background (gray) and the CS (white). The appropriate stochastic model is ambiguous at time t; any of the four listed is viable. Models in play at time t:
B (state): exp(−t/DB); intuitively, 'USs happen in this box at rate λ = 1/DB.'
Bon (point): BG(p, L1, w·L1); intuitively, 'there is a US at roughly L1 after they put you in this box.'
CS (state): exp(−t/(2·DCS)); intuitively, 'USs happen during the CS at rate λ = 1/(2·DCS).'
CSon (point): BG(p, L2, w·L2); intuitively, 'there is sometimes a US at roughly L2 after CS onset.'

A prediction keyed to a point cue takes a Bernoulli-weighted Gaussian form, which we call the Bernoulli-Gauss, and which we denote by BG(t | t0, p, m, s). The cumulative BG distribution function specifies the cumulative probability of an anticipated event as a function of t, for an event that occurs with probability p following a point cue at t0, with an expected latency m and an expected temporal dispersion measured by s (see Figure 2).

An animal is removed from its cage and placed in an experimental chamber. One minute later, a light turns on, and stays on for one minute (see Figure 1). Forty seconds after the onset of the light, the animal experiences a single shock. Five minutes later, the light comes on again for one minute, but with no shock. Given this limited information, how does the animal represent its situation in order to effectively predict the future? Was the shock caused by the onset of the light, with a 40 s delay? Or was it caused
by the state 'Light = on,' with a random rate of roughly one shock per 2 min? Or was it caused by the state of being in the experimental chamber, with a rate of roughly one shock every ten minutes? Or perhaps the point event of being dropped into the experimental chamber itself is to blame.

The equation is:

BG(t | t0, p, m, s) = p·Φ((t − t0 − m)/s) + (1 − p)·[1 − Φ((t − t0 − m)/s)]

where Φ is the cumulative distribution function of the standard normal.
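As a check on this function, here is a minimal sketch (the cue time and parameter values are illustrative, not from the paper): read this way, the cumulative Bernoulli-Gauss runs from 1 − p long before the expected latency to p long after it, passing through 0.5 at t0 + m.

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bg_cdf(t, t0, p, m, s):
    """Cumulative Bernoulli-Gauss as given above: with probability p the
    event follows the point cue at t0 at a latency ~ Normal(m, s); with
    probability 1 - p it does not occur at all."""
    z = (t - t0 - m) / s
    return p * phi(z) + (1.0 - p) * (1.0 - phi(z))

# A point cue at t0 = 0 s, a US with probability p = 0.8 at an expected
# latency of 40 s with dispersion 5 s (illustrative values).
early = bg_cdf(0.0, 0.0, 0.8, 40.0, 5.0)    # well before the latency: ~1 - p
mid   = bg_cdf(40.0, 0.0, 0.8, 40.0, 5.0)   # exactly at t0 + m: 0.5
late  = bg_cdf(200.0, 0.0, 0.8, 40.0, 5.0)  # well after the latency: ~p
print(early, mid, late)
```

The asymptote at p reflects the partial-reinforcement case: even long after the expected latency, the cumulative probability of the anticipated US never exceeds the probability that a US accompanies the cue at all.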
