An Articulatory Synthesizer for Perceptual Research


Philip Rubin and Thomas Baer
Haskins Laboratories, 270 Crown Street, New Haven, Connecticut 06510

Paul Mermelstein
Bell-Northern Research and INRS Telecommunications, University of Quebec, Verdun, Quebec, Canada H3E 1H7

(Received 15 March 1979; accepted for publication 12 May 1981)

A software articulatory synthesizer, based upon a model developed by P. Mermelstein [J. Acoust. Soc. Am. 53, 1070-1082 (1973)], has been implemented on a laboratory computer. The synthesizer is designed as a tool for studying the linguistically and perceptually significant aspects of articulatory events. A prominent feature of this system is that it easily permits modification of a limited set of key parameters that control the positions of the major articulators: the lips, jaw, tongue body, tongue tip, velum, and hyoid bone. Time-varying control over vocal-tract shape and nasal coupling is possible by a straightforward procedure that is similar to key-frame animation: critical vocal-tract configurations are specified, along with excitation and timing information. Articulation then proceeds along a directed path between these key frames within the time script specified by the user. Such a procedure permits a sufficiently fine degree of control over articulator positions and movements. The organization of this system and its present and future applications are discussed.

PACS numbers: 43.70.Bk, 43.70.Jt, 43.70.Qa, 43.60.Qv

INTRODUCTION

Articulatory synthesis has not usually been considered a research tool for studies of speech perception, although many perceptual studies using synthetic stimuli are based upon articulatory premises. In order to address the need for a research articulatory synthesizer, this paper provides a brief description of a software articulatory synthesizer, implemented on a laboratory computer at Haskins Laboratories, that is presently being used for a variety of experiments designed to explore the relationships between perception and production. A related paper, by Abramson et al. (1979), describes in greater detail some of these experiments, which focus on aspects of velar control and the distinction between oral stop consonants and their nasal counterparts. The intent of the present paper is to provide an overview of the actual design and operation of this synthesizer, with specific regard to its use as a research tool for the perceptual evaluation of articulatory gestures.

Although the synthesizer is capable of producing short segments of quite natural speech with a parsimonious input specification, its primary application is not based on these characteristics. The most important aspect of its design is that the articulatory model, though simple, captures the essential ingredients of real articulation. Thus, synthetic speech may be produced in which articulator positions or the relative timing of articulatory gestures are precisely and systematically controlled, and the resulting acoustic output may be subjected to both analytical and perceptual analyses. Real talkers cannot, in general, produce utterances with systematic variations of an isolated articulatory variable. Further, for at least some articulatory variables (for example, velar elevation and the corresponding degree of velar port opening), simple variations in the articulatory parameter produce complex acoustic effects. Thus, the synthesizer can be used to perform investigations that are not possible with real talkers, or investigations that are difficult, at best, using acoustic synthesis techniques.

Briefly, the articulatory synthesizer embodies several submodels. At its heart are simple models of six key articulators. The positions of these articulators determine the outline of the vocal tract in the midsagittal plane. From this outline the width function and, subsequently, the area function of the vocal tract are determined. Source information is specified at the acoustic, rather than articulatory, level, and is independent of the articulatory model. Speech output during each frame is obtained after first calculating, for a particular vocal-tract shape, the acoustic transfer function for both the glottal and fricative sources, if they are used. For voiced sounds the transfer function accounts for both the oral and nasal branches of the vocal tract. Continuous speech is obtained by a technique similar to key-frame animation (see below).

I. THE MODEL

The model that we are using was originally developed by Mermelstein (1973) and is designed to permit simple control over a selected set of key articulatory parameters. The model is similar in many respects to that of Coker (Coker and Fujimura, 1966; Coker, 1976), but was developed independently. The two models emphasize different aspects of the speech production process; Coker's implementation stresses synthesis-by-rule, while the focus of the present model is on interactive and systematic control of supraglottal articulatory configurations and on the acoustic and perceptual consequences of this control. The particular set of parameters employed here provides for an adequate description of the vocal-tract shape, while also incorporating both individual control over articulators and physiologically constrained interaction between articulators.

FIG. 1. Key vocal-tract parameters (C: tongue-body center, V: velum, T: tongue tip, J: jaw, L: lips, H: hyoid; the vocal-tract center line is also shown).

FIG. 2. Grid system for conversion of midsagittal shape to cross-sectional area values.

Figure 1 shows a midsagittal section of the vocal tract, with the six key articulators labeled: tongue body, velum, tongue tip, jaw, lips, and hyoid bone position. (Hyoid bone position controls larynx height and pharynx width.) These articulators can be grouped into two major categories: those whose movement is independent of other articulators (the jaw, velum, and hyoid bone), and those whose positions are functions of the positions of other articulators (the tongue body, tongue tip, and lips). The articulators of this second group all move relative to the jaw. In addition, the tongue tip moves relative to the tongue body. In this manner, individual gestures can be separated into components arising from the movement of several articulators. For example, the lip-opening gesture in the production of a /ba/ is a function of the movement of two articulators: the opening of the lips themselves, and the dropping of the jaw for the vowel articulation.

Movements of the jaw and velum have one degree of freedom; all other articulators move with two degrees of freedom. Movement of the velum has two effects: it alters the shape of the oral branch of the vocal tract and, in addition, it modulates the size of the coupling port to the fixed nasal tract. A more detailed description of the models for each of these articulators is provided in Mermelstein (1973). Specification of the six key articulator positions completely determines the outline of the vocal tract in the midsagittal plane. Additional parameters suggested by factor analyses of tongue shapes (Harshman et al., 1977; Kiritani and Sekimoto, 1977; Maeda, 1979) may be included.

Validation of the articulatory model involved systematic comparisons with data derived from x-ray observations of natural utterances. The data used were obtained from Perkell (1969). An interactive procedure was used in which the experimenter matched the model-generated outline to x-ray tracings for the utterance, "Why did Ken set the soggy net on top of his desk?" Acoustic matching procedures are detailed in Mermelstein (1973).

Once articulator positions and, thus, the vocal-tract outline have been specified, cross-sectional areas are calculated by superposing a grid structure, fixed to the maxilla, on the vocal-tract outline and computing the points of intersection of the outline and the grid lines (see Fig. 2). Grid resolution is variable within a limited range. In the default mode, grid lines are 0.25 cm apart where parallel and at 5° intervals where they are radial. Figure 2 shows the lowest resolution case: parallel grid lines are separated by 0.5 cm and the radial sections are at 10° intervals. The intersection of the vocal-tract center line with each grid line is determined by estimating the point at which the distances to the superior-posterior and inferior-anterior outlines are equal. The center line of the vocal tract is formed by connecting the points from adjacent grid lines. The length of this line represents the length of the vocal tract, modeled as a sequence of acoustic tubes of piecewise uniform cross-sectional areas. Vocal-tract widths are measured using the same algorithm as that for finding the center line.

Sagittal cross-dimensions are converted to cross-sectional areas with the aid of previously published data regarding the shape of the tract along its length. Different formulas are used for the pharyngeal region (Heinz and Stevens, 1964), the oral region (Ladefoged et al., 1971), and the labial region (Mermelstein et al., 1971). More specifically, cross-sectional area in the pharyngeal region is approximated as an ellipse with the vocal-tract width (w) as one axis, while the other axis

a) Portions of this paper were presented at the 95th Meeting of the Acoustical Society of America, 16-19 May 1978, Providence, R.I.

J. Acoust. Soc. Am. 70(2), Aug. 1981. © 1981 Acoustical Society of America.
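The key-frame control scheme described above (critical vocal-tract configurations plus a time script, with articulation proceeding along a directed path between key frames) amounts to interpolating each articulatory parameter between successive key frames. A minimal sketch in Python; the parameter names and the linear path are illustrative assumptions, not the original system's code:

```python
# Sketch of key-frame interpolation over articulatory parameters.
# The parameter names ("jaw", "lip_aperture") and the linear path
# between key frames are illustrative assumptions.

def interpolate_keyframes(keyframes, t):
    """keyframes: list of (time_ms, params-dict), sorted by time.
    Returns the interpolated parameter dict at time t."""
    times = [kf[0] for kf in keyframes]
    if t <= times[0]:
        return dict(keyframes[0][1])
    if t >= times[-1]:
        return dict(keyframes[-1][1])
    for (t0, p0), (t1, p1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            frac = (t - t0) / (t1 - t0)
            return {k: p0[k] + frac * (p1[k] - p0[k]) for k in p0}

# Two key frames: closed lips for /b/, then open jaw and lips for /a/.
keys = [(0.0, {"jaw": 0.0, "lip_aperture": 0.0}),
        (100.0, {"jaw": 1.0, "lip_aperture": 1.2})]
mid = interpolate_keyframes(keys, 50.0)  # configuration halfway along the path
```

Finer-than-linear trajectories (or per-articulator timing offsets) can be layered on the same structure by replacing `frac` with a shaping function.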
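The conversion from sagittal width to cross-sectional area uses different region-specific formulas (Heinz and Stevens, 1964; Ladefoged et al., 1971; Mermelstein et al., 1971), whose exact constants are not reproduced in this excerpt. A hedged sketch using the power-law form A = α·wᵝ that is common in articulatory synthesis; the per-region constants below are purely illustrative, not the values used by the synthesizer:

```python
import math

# Sketch of sagittal-width-to-area conversion, region by region.
# The power-law form A = alpha * w**beta is a common approximation;
# the (alpha, beta) pairs below are illustrative assumptions only.
REGION_COEFFS = {
    "pharyngeal": (1.0, 1.5),  # hypothetical constants
    "oral":       (2.0, 1.5),
    "labial":     (1.6, 1.0),
}

def width_to_area(width_cm, region):
    """Convert a midsagittal width (cm) to a cross-sectional area (cm^2)."""
    alpha, beta = REGION_COEFFS[region]
    return alpha * width_cm ** beta

areas = [width_to_area(w, "pharyngeal") for w in (1.0, 2.0, 4.0)]
```

In the actual model the pharyngeal section is an ellipse with the measured width as one axis, so the formula there also depends on an estimate of the transverse axis.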
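Once the tract is modeled as a sequence of piecewise-uniform acoustic tubes, its transfer function can be computed. One standard way, sketched here under the assumptions of lossless tubes and an ideal open termination at the lips (not necessarily the method of the original implementation), is to chain 2x2 transmission matrices, one per tube section:

```python
import cmath

# Lossless-tube transfer function via chained 2x2 transmission matrices.
# Illustrative assumptions: no losses, pressure = 0 at the open lip end,
# sound speed c = 35000 cm/s, air density rho = 0.00114 g/cm^3.
C = 35000.0
RHO = 0.00114

def volume_velocity_gain(sections, f):
    """sections: list of (length_cm, area_cm2), ordered glottis to lips.
    Returns |U_lips / U_glottis| at frequency f (Hz)."""
    k = 2 * cmath.pi * f / C
    m = [[1.0, 0.0], [0.0, 1.0]]  # identity chain matrix
    for length, area in sections:
        z = RHO * C / area          # characteristic impedance of the tube
        kl = k * length
        t = [[cmath.cos(kl), 1j * z * cmath.sin(kl)],
             [1j * cmath.sin(kl) / z, cmath.cos(kl)]]
        m = [[m[0][0]*t[0][0] + m[0][1]*t[1][0], m[0][0]*t[0][1] + m[0][1]*t[1][1]],
             [m[1][0]*t[0][0] + m[1][1]*t[1][0], m[1][0]*t[0][1] + m[1][1]*t[1][1]]]
    # With P_lips = 0, U_glottis = m[1][1] * U_lips, so the gain is:
    return 1.0 / abs(m[1][1])

# Uniform 17.5 cm tract split into five sections: the first resonance of an
# open uniform tube falls near c / (4 * 17.5) = 500 Hz.
tube = [(3.5, 5.0)] * 5
peak_f = max(range(50, 1000), key=lambda f: volume_velocity_gain(tube, float(f)))
```

For nasalized sounds the oral and nasal branches form a parallel network at the velar port, so the single chain above would be replaced by two coupled chains.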