METALEARNING, NEUROMODULATION AND EMOTION Papers and Presentations

【Basic Concepts】
Doya, K. (2002). Metalearning and neuromodulation. Neural Networks, 15(4).
Doya, K. (2000). Metalearning, neuromodulation, and emotion. In G. Hatano, N. Okada and H. Tanabe (Eds.), Affective Minds, pp. 101-104. Elsevier Science B.V.
Doya, K. (2000). Possible roles of neuromodulators in the regulation of learning processes. 30th Annual Meeting, Society for Neuroscience.

【System Integration Group】

1999
Doya, K. (1999). Metalearning, neuromodulation and emotion. The 13th Toyota Conference on Affective Minds, 46-47.

2000
Bapi, R. S., Graydon, F. X., Doya, K. (2000). Time course of learning of motor sequence representation. 30th Annual Meeting, Society for Neuroscience, 26, 707.
Doya, K. (2000). Metalearning, neuromodulation and emotion. Humanoid Challenge, JST Inter-field Exchange Forum, 87-88.
Doya, K. (2000). Possible roles of neuromodulators in the regulation of learning processes. 30th Annual Meeting, Society for Neuroscience, 26, 2103.
Doya, K. (2000). A possible role of serotonin in regulating the time scale of reward prediction. Serotonin: From the Molecule to the Clinic, 89.
Doya, K. (2000). Metalearning, neuromodulation, and emotion. In G. Hatano et al. (Eds.), Affective Minds, Elsevier Science B.V., 101-104.
Doya, K. (2000). Complementary roles of basal ganglia and cerebellum in learning and motor control. Current Opinion in Neurobiology, 10(6), 732-739.
Doya, K., Katagiri, K., Wolpert, D. M., Kawato, M. (2000). Recognition and imitation of movement patterns by a multiple predictor-controller architecture. Technical Report of IEICE, TL2000(11), 33-40.
Doya, K., Samejima, K., Katagiri, K., Kawato, M. (2000). Multiple model-based reinforcement learning. Kawato Dynamic Brain Project Technical Report, KDB-TR-08, 1-20.
Kimura, H., Doya, K. (2000). Motor control: Neural models and system theory. 14th International Symposium on Mathematical Theory of Networks and Systems, 232.
Morimoto, J., Doya, K. (2000). Acquisition of stand-up behavior by a real robot using hierarchical reinforcement learning. 17th International Conference on Machine Learning, 1(1), 623-630.
Morimoto, J., Doya, K. (2000). Robust reinforcement learning. Neural Information Processing Systems 2000, 77.
Morimoto, J., Doya, K. (2000). Robust reinforcement learning. IEICE Neuro-Computing Workshop, NC2000-49, 59-66.
Ohba, S., Ishii, S., Sato, M. (2000). Variational Bayes method for Mixture of Principal Component Analyzers. 7th International Conference on Neural Information Processing (ICONIP-2000), 2, 1416-1421.
Samejima, K., Ueda, Y., Kimura, M., Doya, K., Schweighofer, N. (2000). Information coding of the striatal neurons during sequential movement. 30th Annual Meeting, Society for Neuroscience, 26, 681.
Sato, M. (2000). On-line Bayesian Learning and Model Selection. 7th International Conference on Neural Information Processing, 1, 470-475.
Sato, M. (2000). Convergence of On-line EM Algorithm. 7th International Conference on Neural Information Processing, 1, 476-481.

2001
Bapi, R. S., Doya, K. (2001). Multiple forward model architecture for sequence processing. In R. Sun and C. L. Giles (Eds.), Sequence Learning, Springer, 309-320.
Doya, K. (2001). Robotic neuroscience: A synthetic approach to the brain. Neuroscience Research Supplement, 24(1), S16.
Doya, K. (2001). Regulatory roles of serotonin and norepinephrine in reinforcement learning. 9th International Catecholamine Symposium, 36, S19-4.
Doya, K. (2001). Specialization of cerebellum, basal ganglia, and cerebral cortex in supervised, reinforcement, and unsupervised learning. Stockholm Workshop on Computational Neuroscience, 13.
Doya, K., Kimura, H., Kawato, M. (2001). Neural mechanisms of learning and control. IEEE Control Systems Magazine, 21(4), 42-54.
Doya, K., Kimura, H., Miyamura, A. (2001). Motor control: Neural models and system theory. International Journal of Applied Mathematics and Computer Science, 11(1), 77-104.
Kuroda, S., Yamamoto, K., Miyamoto, H., Doya, K., Kawato, M. (2001). Statistical characteristics of climbing fiber spikes necessary for efficient cerebellar learning. Biological Cybernetics, 84(3), 183-192.
Matsumoto, N., Okada, M., Doya, K., Sugase, Y., Yamane, S., Kawano, K. (2001). Dynamics of the face-responsive neurons in the temporal cortex. Neuroscience Research Supplement, 24(1), S73.
Morimoto, J., Doya, K. (2001). Acquisition of stand-up behavior by a real robot using hierarchical reinforcement learning. Robotics and Autonomous Systems, 36, 37-51.
Morimoto, J., Doya, K. (2001). Robust reinforcement learning. In V. Tresp et al. (Eds.), Advances in Neural Information Processing Systems 13, MIT Press, 1061-1067.
Nakahara, H., Doya, K., Hikosaka, O. (2001). Parallel cortico-basal ganglia mechanisms for acquisition and execution of visuo-motor sequences - A computational approach. Journal of Cognitive Neuroscience, 13(5), 626-647.
Sato, M. (2001). On-line model selection based on the variational Bayes. Neural Computation, 13(7), 1649-1681.
Yoshimoto, J., Ishii, S., Sato, M. (2001). Application of reinforcement learning based on on-line EM algorithm to balancing of acrobot. Systems and Computers in Japan, 32(5), 12-20.

2002
Asahi, S., Okamoto, Y., Okada, G., Morinobu, S., Yamawaki, S., Doya, K. (2002). Relationship between brain activation during GO/NOGO task and impulsiveness: An fMRI study. 32nd Annual Meeting, Society for Neuroscience, 78.5.
Capi, G., Uchibe, E., Doya, K. (2002). Selection of neural architecture and the environment complexity. The 3rd International Symposium on Human and Artificial Intelligence Systems: Dynamic Systems Approach for Embodiment and Sociality, 231-237.
Doya, K. (2002). Metalearning and neuromodulation. Neural Networks, 15, 495-506.
Doya, K., Dayan, P., Hasselmo, M. E. (2002). Computational models of neuromodulation. Neural Networks, 15, 475-477.
Doya, K., Samejima, K., Katagiri, K., Kawato, M. (2002). Multiple model-based reinforcement learning. Neural Computation, 14(6), 1347-1369.
Okada, G., Okamoto, Y., Ueda, K., Morinobu, S., Yamawaki, S., Doya, K. (2002). Selection between small, immediate rewards and large, delayed rewards in prediction of future reward: A functional magnetic resonance imaging study. 8th International Conference on Functional Mapping of the Human Brain, 968.
Oba, S., Sato, M., Takemasa, I., Monden, M., Matsubara, K., Ishii, S. (2002). Missing value estimation using mixture of PCAs. ICANN 2002, 492-497.
Sato, M., Nakamura, Y., Ishii, S. (2002). Reinforcement Learning for Biped Locomotion. ICANN 2002, 777-782.
Sato, M., Oba, S. (2002). Incremental Sparse Kernel Machine. ICANN 2002, 700-706.
Schweighofer, N., Doya, K. (2002). A biologically plausible computational model of meta-learning in reinforcement learning. 32nd Annual Meeting, Society for Neuroscience, 283.20.
Ueda, Y., Samejima, K., Doya, K., Kimura, M. (2002). Reward value-dependent striate neuron activity of monkey performing trial-and-error behavioral decision task. 32nd Annual Meeting, Society for Neuroscience, 765.13.
2003
Capi, G., Uchibe, E., Doya, K. (2003). Selection of Neural Architecture and the Environment Complexity. Dynamic Systems Approach for Embodiment and Sociality: From Ecological Psychology to Robotics, 6, 311-317.
Capi, G., Doya, K. (2003). Evolving Recurrent Neural Controllers for Sequential Tasks - A Parallel Implementation. Congress on Evolutionary Computation.
Wolpert, D. M., Doya, K., Kawato, M. (2003). A unifying computational framework for motor control and social interaction. Philosophical Transactions of the Royal Society, 358, 593-602.
Doya, K. (2003). Enjoy now or strive for future: Neural mechanisms of reward prediction at different time scales. RIKEN BSI Summer Program 2003, 36.
Doya, K. (2003). A Computational Theory of Neuromodulation. International Symposium "New Horizons in Molecular Sciences and Systems: An Integrated Approach".
Doya, K., Sugimoto, N., Wolpert, D. M., Kawato, M. (2003). Selecting optimal behaviors based on contexts. International Symposium on Emergent Mechanisms of Communication (IEMC 2003).
Elfwing, S., Uchibe, E., Doya, K. (2003). An Evolutionary Approach to Automatic Construction of the Structure in Hierarchical Reinforcement Learning. Genetic and Evolutionary Computation Conference (GECCO 2003), LNCS 2723, 507-509.
Eriksson, A., Capi, G., Doya, K. (2003). Evolution of Meta-parameters in Reinforcement Learning Algorithm. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), 412-417.
Nakamura, Y., Sato, M., Ishii, S. (2003). Reinforcement Learning for Biped Robot. 2nd International Symposium on Adaptive Motion of Animals and Machines (AMAM2003), 36.
Oba, S., Sato, M., Ishii, S. (2003). Prior hyperparameters in Bayesian PCA. Joint 13th International Conference on Artificial Neural Networks and 10th International Conference on Neural Information Processing (ICANN/ICONIP 2003), 271-279.
Samejima, K., Ueda, Y., Doya, K., Kimura, M. (2003). Activity of striate projection neurons encodes action-selective reward expectations. 33rd Annual Meeting, Society for Neuroscience.
Samejima, K., Doya, K., Kawato, M. (2003). Inter-module credit assignment in modular reinforcement learning. Neural Networks, 16(7), 985-994.
Tanaka, S., Doya, K., Okada, G., Ueda, K., Okamoto, Y., Yamawaki, S. (2003). Prediction of Immediate and Future Rewards Differentially Recruits Cortico-Basal Ganglia Loops. 33rd Annual Meeting, Society for Neuroscience.
Tanaka, S., Doya, K., Okada, G., Ueda, K., Okamoto, Y., Yamawaki, S. (2003). Different Cortico-Basal