
Multimedia and Live Performance

Ian Willcock

Submitted in partial fulfilment of the requirements for the award of PhD

Institute of Creative Technologies De Montfort University

December 2011

Volume Two

Bibliography

Agile Alliance (2001) Agile Alliance: The Agile Manifesto. [Web page] Available at: http://www.agilealliance.org/the-alliance/the-agile-manifesto/ (Accessed June 9, 2011).
Allison, J. T. & Place, T. A. (2005) Teabox: A Sensor Data Interface System. Proceedings of the 2005 Conference on New Interfaces for Musical Expression (NIME-05). Vancouver, Canada, pp. 56-59.
Anderson, D. P. & Kuivila, R. (1990) A system for computer music performance. ACM Trans. Comput. Syst., 8, pp. 56-82.
Anderson, D. P. & Kuivila, R. (1991) Formula: A Programming Language for Expressive Computer Music. Computer, 24, pp. 12-21.
Apache Subversion (no date) [Software] Available at: http://subversion.apache.org/ (Accessed June 17, 2011).
Audacity (2010) [Software] (version 1.3) Available at: http://audacity.sourceforge.net/about/?lang=en
Auslander, P. (2009) Reactivation: Performance, Mediatization and the Present Moment. In Chatzichristodoulou, M., Jeffries, J. & Zerihan, R. (Eds.) Interfaces of Performance, pp. 81-94. Farnham, Ashgate Publishing Ltd.
Auvray, S. (2008) InfoQ: Distributed Version Control Systems: A Not-So-Quick Guide Through. Available at: http://www.infoq.com/articles/dvcs-guide (Accessed June 19, 2011).
Aylward, R. & Paradiso, J. (2006) Sensemble: A Wireless, Compact, Multi-User Sensor System for Interactive Dance. Proceedings of the 2006 Conference on New Interfaces for Musical Expression (NIME-06). Paris, France, pp. 134-139.
Barbosa, A. (2003) Displaced Soundscapes: A Survey of Network Systems for Music and Sonic Art Creation. Leonardo Music Journal, 13.
Barbosa, A., Cardoso, J. & Geiger, G. (2005) Network Latency Adaptive Tempo in the Public Sound Objects System. Proceedings of the 2005 Conference on New Interfaces for Musical Expression (NIME-05). Vancouver, Canada, pp. 184-187.
Bennett, P. (2006) PETECUBE: a Multimodal Feedback Interface. Proceedings of the 2006 Conference on New Interfaces for Musical Expression (NIME-06). Paris, France, pp. 81-84.
Berndtsson, M. et al. (2002) Planning and Implementing your Final Year Project with Success! London, Springer Verlag.
Bex, G.J., Neven, F. & Van den Bussche, J. (2004) DTDs versus XML Schema: A Practical Study. In Proceedings of the Seventh International Workshop on the Web and Databases (WebDB 2004). Paris, France, pp. 79-84. Available at: http://webdb2004.cs.columbia.edu/papers/6-1.pdf
Bevilacqua, F., Müller, R. & Schnell, N. (2005) MnM: a Max/MSP mapping toolbox. Proceedings of the 2005 Conference on New Interfaces for Musical Expression (NIME-05). Vancouver, Canada, pp. 85-88.
Birchfield, D., Phillips, K., Kidané, A. & Lorig, D. (2006) Interactive Public Sound Art: A Case Study. Proceedings of the 2006 Conference on New Interfaces for Musical Expression (NIME-06). Paris, France, pp. 43-48.
Birringer, J. H. (1998a) The movement of memory: scanning dance. Leonardo, 31, pp. 165-72.
Birringer, J. H. (1998b) Lively bodies, live machines: split realities. Leonardo, 31, pp. 141-2.
Birringer, J. H. (2006) Performing Art, Performing Science. In Christie, J., Gough, R. & Watt, D. (Eds.) A Performance Cosmology: Testimony from the Future, Evidence from the Past. London, Routledge.
Birringer, J. (2008) Performance, Technology, and Science. New York, PAJ Publications.
Blaine, T. (2005) The Convergence of Alternate Controllers and Musical Interfaces in Interactive Entertainment. Proceedings of the 2005 Conference on New Interfaces for Musical Expression (NIME-05). Vancouver, Canada, pp. 27-33.
Bolter, J.D. (1984) Turing's Man: Western Culture in the Computer Age. The University of North Carolina Press.


Bolter, J.D. & Grusin, R. (2000) Remediation: Understanding New Media (New edition). Cambridge, Massachusetts, MIT Press.
Bosco, P., Martini, G. & Reteuna, G. (1997) MUSIC: An Interactive MUltimedia Service Composition Environment for Distributed Systems. Lecture Notes in Computer Science, pp. 31-40.
Bosma, J. (no date) Music and the Internet ~ Musaic. [Web page] Crossfade. http://crossfade.walkerart.org/bosma/eng_text_01.html (Accessed February 28, 2006).
Bouillot, N. (2007) nJam User Experiments: Enabling Remote Musical Interaction from Milliseconds to Seconds. Proceedings of the 2007 Conference on New Interfaces for Musical Expression (NIME-07). New York, pp. 142-147.
Brandt, E. & Dannenberg, R. (1998) Low-Latency Music Software Using Off-The-Shelf Operating Systems. 1998 International Computer Music Conference. International Computer Music Association.
Brandt, E. & Dannenberg, R. (1999) Time in Distributed Real-Time Systems. 1999 International Computer Music Conference. International Computer Music Association.
Broadhurst, S. & Machon, J. (Eds.) (2006) Performance and Technology: Practices of Virtual Embodiment and Interactivity. Basingstoke, Palgrave Macmillan.
Broadhurst, S. (2007) Digital Practices. Basingstoke, Palgrave Macmillan.
Brown, C. (2003) Eternal Music Network. [Software] http://www.transjam.com/eternal/eternal_client.html
Bryan-Kinns, N. & Healey, P. G. T. (2004) Daisyphone: Support for Remote Music Collaboration. Proceedings of the 2004 Conference on New Interfaces for Musical Expression (NIME-04). Hamamatsu, Japan, pp. 27-30.
Bugzilla.org (2010) About :: Bugzilla :: bugzilla.org. Available at: http://www.bugzilla.org/about/ (Accessed June 17, 2011).
Buschmann, F. et al. (1996) Pattern-Oriented Software Architecture Volume 1: A System of Patterns (1st ed.). John Wiley & Sons.


Bush, V. (1945) As We May Think. The Atlantic Monthly, 176(1), pp. 101-108.
Cage, J. (1952) 4'33". [Music composition] Peters Edition 6777.
Camurri, A. & Ferrentino, P. (1999) Interactive environments for music and multimedia. Multimedia Systems, 7, pp. 32-47.
Camurri, A., Coletta, P. & Varni, G. (2007) Developing multimodal interactive systems with EyesWeb XMI. Proceedings of the 2007 Conference on New Interfaces for Musical Expression (NIME-07). New York, USA, pp. 305-308.
Chapman, N. & Chapman, J. (2004) Digital Multimedia. J. Wiley & Sons.
Chatzichristodoulou, M. et al. (Eds.) (2009) Interfaces of Performance. Digital Research in the Arts and Humanities. Farnham, Ashgate Publishing Ltd.
Coduys, T., Henry, C. & Cont, A. (2004) TOASTER and KROONDE: High-Resolution and High-Speed Real-time Sensor Interfaces. Proceedings of the 2004 Conference on New Interfaces for Musical Expression (NIME-04). Hamamatsu, Japan, pp. 205-206.
Coffey, A. & Atkinson, P. (1996) Making Sense of Qualitative Data. Thousand Oaks, SAGE Publications.
Cohen, A. J. (1998) The Functions of Music in Multimedia: A Cognitive Approach. 5th International Conference on Music Perception and Cognition. Seoul, Seoul National University Press.
Collins, N. & Olofsson, F. (2006) klipp av: Live Algorithmic Splicing and Audiovisual Event Capture. Computer Music Journal, 30, pp. 8-18.
Composers Desktop Project (CDP) (2010) About CDP. Available at: http://www.composersdesktop.com/about.html (Accessed August 14, 2011).
Coniglio, M. (2008) Isadora. [Software] TroikaTronix. http://www.troikatronix.com/isadora.html
Cope, D. (no date) Experiments in Musical Intelligence. [online] University of California Santa Cruz. http://arts.ucsc.edu/faculty/cope/experiments.htm (Accessed January 27, 2006).
Cope, D. (1997) EMI (Experiments in Musical Intelligence). [Software]


Corness, G. (2008) Performer Model: Towards a Framework for Interactive Performance Based on Perceived Intention. In Proceedings of the 2008 Conference on New Interfaces for Musical Expression (NIME-08). Genova, Italy. Available at: http://nime2008.casapaganini.org/documents/Proceedings/Posters/243.pdf
Creative Commons (no date) About – Creative Commons. http://creativecommons.org/about (Accessed November 26, 2011).
Dannenberg, R. (2002) Aura as a Platform for Distributed Sensing and Control. Symposium on Sensing and Input for Media-Centric Systems (SIMS 02). Santa Barbara, University of California Santa Barbara Center for Research in Electronic Art Technology.
Dannenberg, R. (2004) Aura II: Making Real-Time Systems Safe for Music. Proceedings of the 2004 Conference on New Interfaces for Musical Expression (NIME-04). Hamamatsu, Japan, pp. 132-137.
Dannenberg, R. (2007) New Interfaces for Popular Music Performance. Proceedings of the 2007 Conference on New Interfaces for Musical Expression (NIME-07). New York, USA, pp. 130-135.
Davis, M. (2001) Engines of Logic: Mathematicians and the Origin of the Computer (Reprint). W. W. Norton & Co.
Davison, A. (2003) Music and Multimedia: Theory and History. Music Analysis, 22, pp. 341-365.
deLahunta, S. (2002a) Software for Dancers: Coding Forms. Journal of Performance Research. Available at: http://www.sdela.dds.nl/sfd/scott.html (Accessed July 21, 2011).
deLahunta, S. (2002b) Software for Dancers: Isadora Article / Part I. Available at: http://www.sdela.dds.nl/sfd/isadora.html (Accessed January 7, 2011).
De Jong, S. (2006) A Tactile Closed-Loop Device for Musical Interaction. Proceedings of the 2006 Conference on New Interfaces for Musical Expression (NIME-06). Paris, France, pp. 79-80.


Deutsch, D. (1998) The Fabric of Reality: Towards a Theory of Everything (New Edition). London, Penguin.
Dewey, J. (1933) How We Think: A Restatement of the Relation of Reflective Thinking to the Educative Process (Revised edition). Boston, D. C. Heath.
Dick, B. (Ed.) (no date) Action Research Resources. [Website] http://www.scu.edu.au/schools/gcm/ar/arhome.html (Accessed March 24, 2006).
Digital Stages Festival (2011) [Website/festival] http://digitalstagesfestival.co.uk/ (Accessed November 18, 2011).
Dixon, S. (2007) Digital Performance. Cambridge, Massachusetts, MIT Press.
Downie, M. (2005) Choreographing the Extended Agent: Performance Graphics for Dance Theatre. [Unpublished doctoral thesis] The Media Lab, MIT. [Available online: http://openendedgroup.com/index.php/publications/thesis-downie/]
Downie, M. (no date) Field. [Software] Open Ended Group. [Available online: http://openendedgroup.com/field/wiki/OverviewBanners2] (Accessed February 25, 2009).
Duckworth, W. (2005) Virtual Music. New York, Routledge.

Eliasson, M. (2005) Advantages Of SubVersion Over CVS When Working With XP Projects. Agility and Software Configuration Management Project, Lund University, Sweden. [Available at: http://fileadmin.cs.lth.se/cs/Personal/Lars_Bendix/Research/ASCM/In-depth/Eliasson-2005.pdf]
Engelbart, D. (1968) The Demo. [Video documentation of demonstration] Available at: http://www.youtube.com/watch?v=JfIgzSoTMOs (Accessed December 11, 2008).
Eno, B. (1996) Generative Music. San Francisco, Imagination Conference.
Essl, K. (1997) Lexikon-Sonate. [Software] http://www.essl.at/works/lexson-online.html
Ferreirós, J. (2011) "The Early Development of Set Theory", The Stanford Encyclopedia of Philosophy (Fall 2011 Edition), Edward N. Zalta (ed.). [Available at: http://plato.stanford.edu/archives/fall2011/entries/settheory-early/]
Ffitch, J. & Padget, J. (2002) Learning to Play and Perform on Synthetic Instruments. In Nordahl, M. (Ed.) Voices of Nature: Proceedings of ICMC 2002.
Flanagan, D. (1999) Java in a Nutshell: A Desktop Quick Reference (In a Nutshell) (3rd ed.). Sebastopol, CA, O'Reilly Media.
Flety, E. (2004) Versatile sensor acquisition system utilizing Network Technology. Proceedings of the 2004 Conference on New Interfaces for Musical Expression (NIME-04). Hamamatsu, Japan, pp. 157-160.
Flety, E. (2005) The WiSe Box: a Multi-performer Wireless Sensor Interface using WiFi and OSC. Proceedings of the 2005 Conference on New Interfaces for Musical Expression (NIME-05). Vancouver, Canada, pp. 266-267.
Finer, J. (2011) Longplayer - An Overview. Available at: http://longplayer.org/what/overview.php (Accessed July 20, 2011).
Fober, D., Orlarey, Y. & Letz, S. (2002) Clock Skew Compensation over a High Latency Network. Proceedings of the International Computer Music Conference 2002. Gothenburg, Sweden, pp. 548-552.
Fober, D., Letz, S. & Orlarey, Y. (2005) Real Time Clock Skew Estimation over Network Delays. In Grame (Ed.) Technical Report, Grame Computer Music Research Lab.
Föllmer, G. (2001) Soft Music. [online] Walker Art - Crossfade. http://crossfade.walkerart.org/foellmer/text_print.html (Accessed February 26, 2006).
Forgy, C.L. (1982) Rete: A fast algorithm for the many pattern/many object pattern match problem. Artificial Intelligence, 19(1), pp. 17-37.
Foucault, M. (1969) The Archaeology of Knowledge. New York, Routledge.

François, A. & Chew, E. (2006) An Architectural Framework for Interactive Music Systems. Proceedings of the 2006 Conference on New Interfaces for Musical Expression (NIME-06). Paris, France, pp. 150-155.


Freeman, E.T. et al. (2004) Head First Design Patterns (1st ed.). Sebastopol, CA, O'Reilly Media.
Fraietta, A. (2008) Open Sound Control: Constraints and Limitations. Proceedings of the 2008 Conference on New Interfaces for Musical Expression (NIME-08). Genova, Italy. Available at: http://nime2008.casapaganini.org/documents/Proceedings/Papers/163.pdf
Friberg, A., Bresin, R. & Sundberg, J. (2006) Overview of the KTH rule system for musical performance. Advances in Cognitive Psychology, 2, pp. 145-161.
Fry, B. & Reas, C. (2001) Processing. [Software] Available at: http://www.processing.org/
Gamma, E., Helm, R., Johnson, R. & Vlissides, J. (1995) Design Patterns. Addison Wesley.
Gershenfeld, N. (1999) When Things Start to Think. London, Hodder and Stoughton.
Gibson, W. (2000) Geeks and Artboys. In Packer, R. & Jordan, K. (Eds.) Multimedia from Wagner to Virtual Reality. New York, W.W. Norton & Co.
Gilderfluke & Co. (no date) [online] Gilderfluke & Co. http://www.gilderfluke.com/ (Accessed November 23, 2007).
Goldberg, R. (1998) Performance: Live Art Since 1960. New York, Harry N. Abrams Inc.
Goulden, J. (1998) Multimedia Possibilities for Live and Lesbian Performance. New Theatre Quarterly 55, 14, p. 234.
Greggains, B. (1962) Unemployment has Boosted Training Programme. Times, London.
Guedes, C. (2007) Establishing a Musical Channel of Communication between Dancers and Musicians in Computer-Mediated Collaborations in Dance Performance. Proceedings of the 2007 Conference on New Interfaces for Musical Expression (NIME-07). New York, U.S.A., pp. 417-419.


Gurevich, M. (2006) JamSpace: Designing A Collaborative Networked Music Space for Novices. Proceedings of the 2006 Conference on New Interfaces for Musical Expression (NIME-06). Paris, France, pp. 118-123.
Gurevich, M. & Treviño, J. (2007) Expression and Its Discontents: Toward an Ecology of Musical Creation. Proceedings of the 2007 Conference on New Interfaces for Musical Expression (NIME-07). New York, U.S.A., pp. 106-111.
Gustavsen, B. (2001) Theory and Practice: the Mediating Discourse. In Reason, P. & Bradbury, H. (Eds.) Handbook of Action Research (Paperback 2006 ed.). London, Sage Publications.
Hähnel, T. (2010) From Mozart to MIDI: A Rule System for Expressive Articulation. In Proceedings of the 2010 Conference on New Interfaces for Musical Expression (NIME 2010). Sydney, Australia, pp. 72-75. [Available at: http://www.educ.dab.uts.edu.au/nime/PROCEEDINGS/papers/Paper%20C1-C5/P72_Haehnel.pdf]
Hajdu, G. (1999) Quintet.net. [Software] http://www.quintet-net.org/
Hamel, K. (2006) Integrated Interactive Music Performance Environment. Proceedings of the 2006 Conference on New Interfaces for Musical Expression (NIME-06). Paris, France, pp. 380-383.
Hashida, T., Kakehi, Y. & Naemura, T. (2004) Ensemble system with i-trace. In Proceedings of the 2004 Conference on New Interfaces for Musical Expression (NIME-04). Hamamatsu, Japan, pp. 215-216.
Haskel, L. (1996) Plugged In: Multimedia and the Arts in London. London, London Arts Board.
Heathfield, A. (Ed.) (2004) Live Art and Performance. London, Tate Publishing.
Henke, R. (2002-) Atlantic Waves. [Software/performances] Available at: http://monolake.de/concerts/atlantic_waves.html (Accessed November 14, 2011).
Hill, L. & Paris, H. (2001) Guerilla Performance and Multimedia. London, Continuum.


History of the OSI (no date) Open Source Initiative. Available at: http://www.opensource.org/history (Accessed August 13, 2011).
Hodgson, P. (1996) Improvisor. [Software] No source.
How Theme Parks Work (no date) [Web page] MKPE Consulting LLC. http://mkpe.com/theme_parks/control.php (Accessed November 26, 2011).
Hugill, A. (2007) The Digital Musician. New York, Routledge.
Hunt, A. & Kirk, R. (2003) MidiGrid: Past, Present and Future. In Proceedings of the 2003 Conference on New Interfaces for Musical Expression (NIME-03). Montreal, pp. 135-139. [Available online at: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.2.2075&rep=rep1&type=pdf]
Ifrah, G. (2000) The Universal History of Numbers III: The Computer and the Information Revolution. London, The Harvill Press.
Introducing JSON (no date) [Software] JSON. Available at: http://www.json.org/ (Accessed August 30, 2011).
Jamieson, H. V., Ptacek, K., Saarinen, L., Smith, V. & Collision, A. B. (2002) s w i m - an exercise in remote intimacy. MediaLab South Pacific, Wellington, New Zealand.
Java Compiler Compiler (JavaCC) - The Java Parser Generator (no date) [Software] http://javacc.java.net/ (Accessed September 1, 2011).
JavaCC Grammar Files (no date) Available at: http://javacc.java.net/doc/javaccgrm.html (Accessed September 1, 2011).
Jeyasingh, S., Gomez, P. & Dennehey, D. (2002) [h]interland. [Performance] Greenwich Dance Agency.
JISC infoNet - Creating the Statement of Requirements (2011) [Web page] Available at: http://www.jiscinfonet.ac.uk/InfoKits/system-selection/ss-define-1.4 (Accessed August 19, 2011).
Johnson, S. (1997) Interface Culture. Basic Books.
Jordà, S. (2005) Multi-user Instruments: Models, Examples and Promises. Proceedings of the 2005 Conference on New Interfaces for Musical Expression (NIME-05). Vancouver, Canada, pp. 23-26.


Jot, J.-M. (1999) Real-time spatial processing of sounds for music, multimedia and interactive human-computer interfaces. Multimedia Systems, 7, p. 55.
JSyn audio synthesis API for Java (2011) [Software] Available at: http://www.softsynth.com/jsyn/ (Accessed November 26, 2011).
Kaltenbrunner, M., Geiger, G. & Jorda, S. (2004) Dynamic Patching for Live Musical Performance. Proceedings of the 2004 Conference on New Interfaces for Musical Expression (NIME-04). Hamamatsu, Japan, pp. 19-22.
Koritzinsky, I.H. (2001) New Ways to Consistent Interfaces. In Nielsen, J. (Ed.) Coordinating User Interfaces for Consistency. Morgan Kaufmann.
Kurzweil, R. (1999) The Age of Spiritual Machines. New York, Viking Penguin.
Kurzweil, R. (2005) The Singularity is Near: When Humans Transcend Biology. New York, Viking Press Inc.
Larman, C. (2003) Agile and Iterative Development (1st ed.). Addison Wesley.
Laziza, K. (1996) Laziza Videodance and Lumia Project: the intersection of dance, technology and performance art. Leonardo, 29, pp. 173-82.
Leakey, R.E. (1981) Making of Mankind (BCA Edition). E. P. Dutton.
Lee, E. & Borchers, J. (2005) The Role of Time in Engineering Computer Music Systems. Proceedings of the 2005 Conference on New Interfaces for Musical Expression (NIME-05). Vancouver, Canada, pp. 204-207.
Lew, M. (2004) Live Cinema: Designing an Instrument for Cinema Editing as a Live Performance. Proceedings of the 2004 Conference on New Interfaces for Musical Expression (NIME-04). Hamamatsu, Japan, pp. 144-149.
Lévy, P. (1999) Collective Intelligence. Cambridge, Massachusetts, Perseus Books.
Levy, S. (1984) Hackers: Heroes of the Computer Revolution. Anchor.
Lewin, K. (1948) Resolving Social Conflicts: Selected Papers on Group Dynamics. Gertrude W. Lewin (ed.). New York, Harper & Row.
Lindsey, C.S., Tolliver, J.S. & Lindblad, T. (2005) JavaTech, an Introduction to Scientific and Technical Computing with Java. Cambridge University Press.


Lozano-Hemmer, R. (2005) Under Scan. [Interactive installation] Available at: http://www.lozano-hemmer.com/under_scan.php (Accessed November 26, 2011).
Lycouris, S. (2009) Choreographic Environments: New Technologies and Movement Related Artistic Work. In Butterworth, J. & Wildschut, L. (Eds.) Contemporary Choreography: A Critical Reader. London, Routledge.
Lykes, M. B. (2001) Creative Arts and Photography in Participatory Action Research in Guatemala. In Reason, P. & Bradbury, H. (Eds.) Handbook of Action Research. London, Sage Publications.
Lyotard, J.-F. (1979) The Postmodern Condition: A Report on Knowledge. Trans. Geoff Bennington et al., 1984. Minneapolis, University of Minnesota Press.
Magnusson, T. & Mendieta, E. H. (2007) The Acoustic, the Digital and the Body: A Survey on Musical Instruments. Proceedings of the 2007 Conference on New Interfaces for Musical Expression (NIME-07). New York, pp. 94-99.
Mäki-Patola, T. (2005) User Interface Comparison for Virtual Drums. Proceedings of the 2005 Conference on New Interfaces for Musical Expression (NIME-05). Vancouver, Canada, pp. 144-147.
Manovich, L. (2002) The Language of New Media (New Ed.). MIT Press.
Martin, C., Forster, B. & Cormick, H. (2010) Cross-Artform Performance Using Networked Interfaces: Last Man to Die's Vital LMTD. In Proceedings of the 2010 Conference on New Interfaces for Musical Expression (NIME 2010). Sydney, Australia, pp. 204-210.
Martin, K. D., Scheirer, E. D. & Vercoe, B. L. (1998) Music content analysis through models of audition. ACM Multimedia Workshop on Content Processing of Music for Multimedia Applications. Bristol, UK.
Mathews, M. & Miller, J. (1958) MUSIC4. [Software] Bell Labs.
Max 6 (2011) [Software] (version 6) Cycling 74. http://cycling74.com/products/max/
Mcadams, D. A. (2002) Caught in the Act: A Look at Contemporary Multimedia Performance. New York, Aperture.


Welcome to MediaMation (no date) [online] MediaMation, Inc. http://www.mediamat.com/ (Accessed November 23, 2007).
Medlin, M. (2002) Miss World. [Performance] Kiasma, Finland.
Medlin, M. (2005) Quartet (movements: duets and solo). [Performance] ICA, London.
Miklavcic, E.A. & Miklavcic, J.H. (2007) InterPlay: Performing on a High Tech Wire. In Proceedings of the 2008 Conference on Digital Resources for the Humanities and Arts (DRHA08). Dartington College of Arts, Devon, UK. [Available at: http://www.anotherlanguage.org/education/papers/papers/2007_tech_wire.pdf]
Miles, M. B. & Huberman, A. M. (1994) Qualitative Data Analysis: An Expanded Sourcebook. Thousand Oaks, Sage Publications.
Miranda, E. (2001) ARTIST: Towards AI-aided sound design systems. [online] http://website.lineone.net/~edandalex/ai.htm (Accessed April 6, 2005).
Miranda, E. & Brouse, A. (2005) Toward Direct Brain-Computer Musical Interfaces. Proceedings of the 2005 Conference on New Interfaces for Musical Expression (NIME-05). Vancouver, Canada, pp. 216-219.
Moore, J.T.S. (2001) Revolution OS. [Video documentary] Wonderview Productions. Available at: http://video.google.com/videoplay?docid=7707585592627775409 (Accessed August 13, 2011).
MusicBrainz (no date) [Software] http://musicbrainz.org/ (Accessed July 4, 2006).
Naef, M. & Collicott, D. (2006) A VR Interface for Collaborative 3D Audio Performance. In Proceedings of the 2006 Conference on New Interfaces for Musical Expression (NIME-06). Paris, France, pp. 57-60.
Nagashima, Y. (1998) Real-Time Interactive Performance with Computer Graphics and Computer Music. 7th IFAC/IFIP/IFORS/IEA Symposium.
Nagashima, Y. (2002) "Improvisession-II": A Performing/Composing System for Improvisational Sessions with Networks. International Conference on Entertainment Computing (IWEC2002). Makuhari, Japan.


Nagashima, Y. (2004) Measurement of Latency in Interactive Multimedia Art. Proceedings of the 2004 Conference on New Interfaces for Musical Expression (NIME-04). Hamamatsu, Japan, pp. 173-176.
Neary, D. (2005) Linux.ie :: Subversion - a better CVS. Available at: http://www.linux.ie/articles/subversion/ (Accessed June 19, 2011).
Negroponte, N. (1995) Being Digital. London, Hodder and Stoughton.
Nelson, T. (1965) Complex information processing: a file structure for the complex, the changing and the indeterminate. Proceedings of the 1965 20th National Conference. Cleveland, Ohio, United States, Association for Computing Machinery (ACM).
Neuhaus, M. (2004) Auracle: A Real-time, Distributed, Collaborative Instrument. [Software] http://www.auracle.org/
Nielsen, J. (1993) Noncommand User Interfaces. Communications of the ACM, 36(4), pp. 83-99.
Norman, D. (2002) The Design of Everyday Things. Basic Books.
O'Sullivan, D. & Igoe, T. (2004) Physical Computing. Premier Press.
Oliveros, P. (1999) Quantum Improvisation: The Cybernetic Presence. University of California, San Diego, Improvisation Across Borders Conference.
Orio, N., Lemouton, S. & Schwarz, D. (2003) Score Following: State of the Art and New Developments. 2003 Conference on New Instruments for Musical Expression (NIME-03). Paris, France, IRCAM.
Packer, R. & Jordan, K. (Eds.) (2001) Multimedia: From Wagner to Virtual Reality. New York, W.W. Norton.
Parkany, R. A. (2002) The Dialectics of Critical Pedagogy and Action Research. [Available online] http://www.education.duq.edu/institutes/PDF/papers2002/Parkany.pdf
Peng, L. et al. (2004) ARIA: An adaptive and programmable media-flow architecture for interactive arts. ACM Multimedia 2004: 12th ACM International Conference on Multimedia.
Prior, A. (2011) Noise at the Interface. In NYHEDSAVISEN: Public Interfaces. Public Interfaces, Aarhus University, Denmark. Available at: http://darc.imv.au.dk/publicinterfaces/?p=114 (Accessed September 4, 2011).
Puckette, M. (2002) Max at Seventeen. Computer Music Journal, 26, pp. 31-43.
Ramakrishnan, C. et al. (2004) The Architecture of Auracle: A Real-Time, Distributed, Collaborative Instrument. Proceedings of the 2004 Conference on New Interfaces for Musical Expression (NIME-04). Hamamatsu, Japan, pp. 100-103.
Raymond, E.S. (2001) The Cathedral & the Bazaar. Sebastopol, O'Reilly Media.
Rebelo, P. & Renaud, A. (2006) The Frequencyliator - Distributing Structures for Networked Laptop Improvisation. Proceedings of the 2006 Conference on New Interfaces for Musical Expression (NIME-06). Paris, France, pp. 53-56.
Reason, P. & Bradbury, H. (Eds.) (2001) Handbook of Action Research. London, Sage Publications.
Reaves, J. (2003) Defining "Digital Performance". Digital Performance Institute. Available at: http://www.digitalperformance.org/node/1 (Accessed January 6, 2011).
Rieser, M. (no date) The Street. [Web page] Available at: http://www.martinrieser.com/The%20Street.htm (Accessed July 20, 2011).
Riskin, J. (2003) The Defecating Duck, or, The Ambiguous Origins of Artificial Life. Critical Inquiry, 20, pp. 599-633.
Rohs, M., Essl, G. & Roth, M. (2006) CaMus: Live Music Performance using Camera Phones and Visual Grid Tracking. In Proceedings of the 2006 Conference on New Interfaces for Musical Expression (NIME-06). Paris, France, pp. 31-36.
Rothwell, N. (2005) Triptychos. [Interactive installation] Cassiel. Available at: http://archive.cassiel.com/space/Projects/Triptychos (Accessed August 18, 2011).
Roy, S. (2002) Technological Process. Dance Theatre Journal, 17(4). Available at: http://www.sdela.dds.nl/sfd/sanjoy.html (Accessed July 21, 2011).


Rubin, H. & Rubin, I. (2005) Qualitative Interviewing: The Art of Hearing Data. Thousand Oaks, USA, Sage.
Saldana, J. (1998) Ethical issues in an ethnographic performance text: The dramatic impact of the juicy stuff. Research in Drama Education, 3(2), pp. 181-196.
Sarkar, M. & Vercoe, B. (2007) Recognition and Prediction in a Networked Music Performance System for Indian Percussion. Proceedings of the 2007 Conference on New Interfaces for Musical Expression (NIME-07). New York, pp. 317-320.
Schiller, H. I. (1981) Who Knows. New York, Ablex Publishing Corporation.
Shneiderman, B. (1997) Designing the User Interface (3rd edition). Addison Wesley.
Shore, J. & Warden, S. (2007) The Art of Agile Development. Sebastopol, CA, O'Reilly Media.
Siegel, W. & Jacobsen, J. (1998) The challenges of interactive dance: An overview and case study. Computer Music Journal, 22.
Simple Object Access Protocol (SOAP) 1.1 (2000) World Wide Web Consortium (W3C). Available at: http://www.w3.org/TR/2000/NOTE-SOAP-20000508/ (Accessed August 30, 2011).
Simpson, G. & Polfremann, M. (2003) Report from 'Networking Multimedia Resources', the 3rd Open Archives Forum. Berlin, Germany, Open Archives Forum.
Software for Dancers: Description (2001) [Web page] http://www.sdela.dds.nl/sfd/sfd2/descrip.html (Accessed July 20, 2011).
Software for Dancers: Participants (2001) [Web page] http://www.sdela.dds.nl/sfd/sfd2/part.html (Accessed November 26, 2011).
Spiegel, L. (1986) Music Mouse. [Software] http://musicmouse.com/
Steiner, H.-C., Merrill, D. & Matthes, O. (2007) A Unified Toolkit for Accessing Human Interface Devices in Pure Data and Max/MSP. Proceedings of the 2007 Conference on New Interfaces for Musical Expression (NIME-07). New York, USA, pp. 375-378.


Stelkens, J. (2003) peersynth. [Software] [Available at: http://www.peersynth.de/]
Stephenson, N. (1999) In the Beginning was the Command Line. Perennial.
Stockhausen, K. (1968) Aus den sieben Tagen (Nr. 26). Vienna, Austria, Universal Edition.
Tanaka, A. & Gemeinboeck, P. (2006) A Framework for Spatial Interaction in Locative Media. Proceedings of the 2006 Conference on New Interfaces for Musical Expression (NIME-06). Paris, France, pp. 26-30.
Taylor, R., Torres, D. & Boulanger, P. (2005) Using Music to Interact with a Virtual Character. Proceedings of the 2005 Conference on New Interfaces for Musical Expression (NIME-05). Vancouver, Canada, pp. 220-223.
Thirumalainambi, R. (2007) Pitfalls of JESS for Dynamic Systems. In International Conference on Artificial Intelligence and Pattern Recognition. Orlando, Florida, pp. 491-494. Available at: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.154.5917&rep=rep1&type=pdf
Tognazzini, B. (1992) Tog on Interface. Addison Wesley.
Traub, P. (2005) Sounding the Net: Recent Sonic Works for the Internet and Computer Networks. Contemporary Music Review, 24, pp. 459-481.
Turing, A. (1936) On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Ser. 2, Vol. 42, 1937.
Turing, D. (2011) Whatever Dance Toolbox. [Software] BADco. Available at: http://badco.hr/works/whatever-toolbox/ (Accessed August 12, 2011).
Turner, J. (2011) Emergent Dance. [Unpublished PhD thesis] University of Chichester, an accredited institution of the University of Southampton.
Turkle, S. (1995) Life on Screen. New York, Touchstone.
van Deursen, A., Klint, P. & Visser, J. (2000) Domain-specific languages: an annotated bibliography. SIGPLAN Notices, vol. 35, pp. 26-36.
Voelter, M., Kircher, M. & Zdun, U. (2004) Remoting Patterns: Foundations of Enterprise, Internet and Realtime Distributed Object Middleware. John Wiley & Sons.


Wang, G., Misra, A. & Cook, P. (2006) Building Collaborative Graphical Interfaces in the Audicle. In Proceedings of the 2006 Conference on New Interfaces for Musical Expression (NIME-06). Paris, France, pp. 49-52.
Weinstein, M. (2011) TAMS Analyzer. [Software] Available at: http://tamsys.sourceforge.net/ (Accessed June 24, 2011).
Weiss, F. & Knorr, D. (2009) Pixelman. [Interactive installation] Available at: http://www.frieder-weiss.de/works/all/Pixelman.php (Accessed August 18, 2011).
Wells, D. (1999) User Stories. Available at: http://www.extremeprogramming.org/rules/userstories.html (Accessed June 9, 2011).
Wells, D. (2009a) Agile Software Development: A Gentle Introduction. Available at: http://www.agile-process.org/ (Accessed June 9, 2011).
Wells, D. (2009b) Manage by Features. Available at: http://www.agile-process.org/byfeature.html (Accessed June 9, 2011).
Widmer, G. & Goebl, W. (2004) Computational models of expressive music performance: The state of the art. Journal of New Music Research, vol. 33.
Willcock, I. (2002) Hyper-Polyphony: Interactivity – Open Form. In Mahnkopf, C.-S., Cox, F. & Schurig, W. (Eds.) (2002) Polyphony & Complexity. Hofheim, Wolke Verlag.
Willcock, I. (2005) Pinning Down the Artists: Developing a Methodology for Examining and Developing Creative Practice. In Proceedings of the 2006 DMU Humanities Graduate Conference. [Available online] http://www.tech.dmu.ac.uk/STRL/research/publications/pubs/2006/2006-17.pdf
Willcock, I. (2006) Composing without Composers: Creation, Control and Individuality in Computer-Based Algorithmic Composition. In Mahnkopf, C.-S., Cox, F. & Schurig, W. (Eds.) (2006) Electronics in New Music. Hofheim, Wolke Verlag.


Willcock, I. (2010) Words Of Power: eMerge, A Language for the Dynamic Control of Live Performance. International Journal of Humanities and Arts Computing, 4.1–2 (2010), pp. 67–80. Edinburgh University Press.
Wittgenstein, L. (1953) Philosophical Investigations (trans. G.E.M. Anscombe) (2nd Edition). London, Blackwell Publishers.
Wittgenstein, L. (1974) Tractatus Logico-Philosophicus (trans. Pears and McGuinness) (Paperback Edition). London, Routledge & Kegan Paul.
Wozniewski, M. & Boulliot, N. (2008) Large-Scale Mobile Audio Environments for Collaborative Musical Interaction. Proceedings of the 2008 Conference on New Interfaces for Musical Expression (NIME-08). Available at: http://nime2008.casapaganini.org/documents/Proceedings/Papers/215.pdf
Xerces (2010) [Software] (version 2.11) http://xerces.apache.org/xerces2-j/ (Accessed September 4, 2011).
Zicarelli, D. (1987) Jam Factory. [Software] Intelligent Music.
Zicarelli, D., Offenhartz, J., Widoff, A. & Chadabe, J. (1986) M. [Software] Intelligent Computer Music Systems. http://www.cycling74.com/twiki/bin/view/FAQs/MtheHistory


Appendix 1 Practitioner Survey Materials

Interview Overview

Firstly, thank you for agreeing to be interviewed and to allow your experience of working with Multimedia in Live Performance to inform my research. I am trying to see if there is a common set of functionality that people are using when they work with Multimedia in Live Performance – and, if there is, I will try to develop a system in collaboration with others to provide a support framework, allowing people to focus on their particular, individual artistic explorations rather than having to start right from the beginning each time a project is begun.

The interview will be structured around a framework of 9 areas of focus on multimedia use in live performance. This structure is made up of main and follow-up questions intended to enable you to provide detailed information about areas in which you have particular experience or knowledge, while not having to spend time on aspects that are not relevant to your practice. The framework and content are dynamic and will be continuously redesigned based on responses to completed interviews – i.e. we are not constrained by the framework; it just serves to make sure we consider certain areas of use.

Researcher's Introduction

The interview will start with a brief explanation of why I have approached you for an interview - your work and experience in the field of inquiry. Then I will describe the research project's aims and methodologies and the intentions behind the interview, including the use to which data will be put. At this stage, you will be asked formally to agree to take part in the survey and for our discussion to be recorded.

The Interview

The structured part of the interview begins with me asking you to describe briefly the work you have been doing.

We will then talk through a list I have made of ways artists could use multimedia in live performance. This is both to familiarise you with the entries and also to provide the opportunity for discussion about the basis for selecting and identifying these items. You may want to identify additions or amendments you feel are required, and this is very much part of the way I want to work; the list is not a set of fixed categories, but rather a set of possible starting points. You may well have other useful starting points and I expect (and hope) that, as my research proceeds, I will make changes to the list based on the suggestions of those I talk to.

My current list of ways one might use Multimedia in Live Performance is:

- Gathering input from performers or users
- Gathering information about the environment
- Outputting and presenting information
- Controlling devices and systems
- Processing information
- Planning and preparing performances
- Remembering and documenting
- Generating material
- Connecting performers or audiences

Each of the areas will then be looked at in turn. For each, you will be shown a diagram which includes both examples of technology-use in that category and names of technical resources used to produce that functionality. These items are not intended to be definitive, but are more to suggest starting points for you to review aspects of your practice. I will then ask you the main question(s) and, where it would be useful, some or all of the follow-up questions.

The whole interview should take about 45 minutes.


Gathering input from performers or users

[Diagram: examples of gathering input from performers or users – position or movement sensors, gestural control systems, mouse/keyboard, microphone, speech control, MIDI keyboard, mobile phone input, interactive narrative choices, audience participation]

Starting points
- Have you used digital technology or multimedia to collect information from performers (see above)?
- Have you used digital technology or multimedia to enable members of an audience to influence a performance (see above)?

Follow-up questions
- How did you actually do this gathering of information - do you remember using any of the examples in the diagram above?
- Did it work as you wanted and was it difficult to get it working as you wished?
- Did you allow performers or audience members to have control of or input to the performance in ways that are not included in the diagram above?
- Has this use of technology been central to any of your artistic intentions?


Gathering information about the environment

[Diagram: examples of gathering information about the environment – sensing environmental conditions, timing performance elements, sampling sound or visual material, monitoring network conditions; technologies include DV cameras, temperature sensors, light sensors, radiation sensors, RSS feeds and Carnivore]

Starting points
- In your work have you gathered information or monitored something via technology which you've used to influence the way a performance unfolds (see above)?

Follow-up questions
- How did you actually do this information gathering - do you remember using any of the examples in the diagram above?
- Did it work as you wanted and was it difficult to get it working as you wished?
- Did you sense or monitor anything in ways that are not included in the diagram above?
- Has this use of technology been central to any of your artistic intentions?


Outputting and presenting information

[Diagram: examples of outputting and presenting information – sound projection (spat), telematics, sending SMS messages, virtual performers, projecting video, images or text; tools include Jitter, Max MSP and Isadora]

Starting points
- Have you used digital technology to output or present information of any kind or to show participants (performers or audience) what is happening in a performance (see above)?

Follow-up questions
- How did you actually do this presentation of information - do you remember using any of the examples in the diagram above?
- Did it work as you wanted and was it difficult to get it working as you wished?
- Did you show the audience or performers what was happening in a performance in ways that are not included in the diagram above?
- Has this use of technology been central to any of your artistic intentions?


Controlling devices or systems

[Diagram: examples of controlling devices or systems – controlling lighting changes, controlling robots, triggering playback of sound or video, triggering special effects; technologies include Bluetooth, Ethernet, OSI, XML and MIDI]

Starting points
- Have you used digital technology or multimedia to control or trigger things in performance (see above)?

Follow-up questions
- How did you actually do this controlling or triggering - do you remember using any of the technology examples in the diagram above?
- Did it work as you wanted and was it difficult to get it working as you wished?
- Did you control or trigger anything not included in the diagram above?
- Has this use of technology been central to any of your artistic intentions?


Processing information

[Diagram: examples of processing information – randomising, reacting to events, rules-based systems (inference engines such as Jess), converting one sort of information to another (e.g. pitch to MIDI), combining and mixing information, MIDI event translation, coding or decoding (e.g. iChat or Skype), Peersynth]

Starting points
- Have you used technology to decide how a performance proceeds (see above)?
- Have you used live conversion or mixing during performances (see above)?

Follow-up questions
- How did you actually implement this decision-making? Do you remember using any of the examples in the diagram above?
- How did you actually do this mixing or combining? What sort(s) of information did you work with - any of the examples in the diagram above?
- Did it work as you wanted and was it difficult to get it working as you wished?
- Did you process information in ways that are not included in the diagram above?
- Has this use of technology been central to any of your artistic intentions?


Planning and preparing

Starting points
- Have you used digital technology or multimedia to prepare materials or set up cues or equipment for performances (see above)?

Follow-up questions
- How did you actually do this preparation? What sort(s) of materials were involved - any of the examples in the diagram above?
- How did you actually do this checking or validating - do you remember using any of the examples in the diagram above?
- Did it work as you wanted and was it difficult to get it working as you wished?
- Did you prepare or plan using digital multimedia in ways that are not included in the diagram above?
- Has this use of technology been central to any of your artistic intentions?


Generating material

[Diagram: examples of generating material – generating animations, synthesising sounds, generating text or images, emulating performers; tools include Max MSP, SoftImage, Flash and Director]

Starting points
- Have you used digital technology or multimedia to generate or synthesise material (see above)?

Follow-up questions
- How did you actually do this generation of material(s) - do you remember using any of the examples in the diagram above?
- Did it work as you wanted and was it difficult to get it working as you wished?
- Did you generate material(s) in ways that are not included in the diagram above?
- Has this use of technology been central to any of your artistic intentions?


Connecting performers or audiences

[Diagram: examples of connecting performers or audiences – telematics, streaming media, distributed collaboration, linking venues; technologies include QuickTime server, Peersynth, Auracle, Ethernet, OSI, XML and MIDI]

Starting points
- Have you used digital technology or multimedia to influence how performers or audience connect physically – either with the performance or each other (see above)?

Follow-up questions
- How did you actually do this connecting - do you remember using any of the technology examples in the diagram above?
- Did it work as you wanted and was it difficult to get it working as you wished?
- Did you connect the performers or audience in ways that are not included in the diagram above?
- Has this use of technology been central to any of your artistic intentions?


Appendix 2 Survey Subjects - Biographical Notes

All texts are by the subjects.

Joby Burgess

One of Britain's most diverse percussionists, Joby is best known for his virtuosic, often lissom performances, daring collaborations and extensive education work, and he appears regularly throughout Europe, the USA and beyond.

Dedicated to the development of the percussion repertoire, often in combination with electronics, Joby spends much of his time commissioning and recording new music with Powerplant, New Noise and ensemblebash. Recent highlights have included extensive tours with Peter Gabriel's New Blood Orchestra and Graham Fitkin's all-star band; releases of Gabriel Prokofiev's Import/Export on Nonclassical and Frozen River Flows on Oboe Classics.

Joby regularly performs, records and collaborates with artists including Stewart Copeland, Michael Finnissy, Graham Fitkin, Peter Gabriel, John Kenny, Akram Khan, Sarah Leonard, Joanna MacGregor, Peter Maxwell Davies, Nitin Sawhney, Andy Sheppard, Keith Tippett and Nana Vasconcelos, along with many of the world's leading chamber ensembles. Joby is also a member of Stephen Deazley's Edinburgh-based ensemble, Music at the Brewhouse, for which he was commissioned to arrange A-ha's pop classic Take On Me for the 2008 St Magnus Festival in Orkney.

In 2004 Joby was appointed professor of percussion and director of percussion ensembles at Junior Trinity College of Music, Greenwich. Joby studied at the Guildhall School of Music & Drama, London.

Steve Dixon

President of Lasalle College of Arts in Singapore and Professor of Digital Performance. His 800-page book Digital Performance: A History of New Media in Theater, Dance, Performance Art and Installation (2007, MIT Press) is the most comprehensive study of the field to date and has won two international book awards.

His creative practice-as-research includes international multimedia theatre tours as director of The Chameleons Group (since 1994), two award-winning CD-ROMs, interactive Internet performances, and telematic arts events.


Andy Lavender

Professor of Theatre & Performance and Head of the School of Arts at the University of Surrey. He was previously Dean of Research at Central School of Speech & Drama, University of London. He is the artistic director of the theatre/performance company Lightwork.

He is a co-editor of two recent volumes, Making Contemporary Theatre: International Rehearsal Processes (Manchester University Press, 2010) and Mapping Intermediality in Performance (Amsterdam University Press, 2010), and co-convener of the Intermediality working group of the International Federation of Theatre Research.

Sarah Rubidge

Sarah Rubidge is a digital choreographer. As a collaborative artist she develops choreographic digital installations that focus on the use of the somatic as the primary medium of communication (Sensuous Geographies, 2003; Fugitive Moments, 2007). Some of the installations initiate improvisational choreographies (formal and informal) that become integral choreographic elements of the installation (Passing Phases, 1994-1999; Sensuous Geographies). Others draw on the histories embodied in the fabric of old buildings (Hidden Histories, 2001; Time & Tide, 2001). The works have been shown in London, Los Angeles and Brisbane.

In order to advance understanding of the intricate interplay between artistic and philosophical practice, Sarah also reflects on her practice in papers and articles that address the interweaving of the philosophical concepts that are embodied in her work. She was appointed Professor of Choreography and New Media at the University of Chichester in 2008.

Jane Turner

Jane Turner has worked extensively as a choreographer/director in dance, theatre, TV/film and experimental contexts since forming her own dancetheatre company in 1990. For the company she has consistently created innovative, visually rich dance works ("emotionally propelled choreography... formidable", The Stage, London) which have been presented at leading venues throughout London, the UK and Europe. Her work has drawn on extensive experience working with film for music videos, commercials and TV, particularly with animation director Mario Cavalli, their most recent collaboration being Libre (2008). For her own dance theatre touring productions she has worked extensively with image manipulation and increasingly experiments with the integration of interactive software such as Isadora as part of the performance matrix. Triggered (2010/11), which premiered at Kings Place, London, involved music composition evolving in real time to dancers' responses to sculptures embedded with sound sensors.

Her choreographic method involves exploring the contemporary body and its multiple narratives. Her recent triptych of works explored femaleness (biology, representations, expression) to create vivid dancetheatre: bodies, biology and the big bang in Baby (2003/6); direct engagement with the theatrical in Troop (2007), "a classic seduction parade" (Judith Mackrell, The Guardian); and our ritualistic relationship with Nature's systems in Herd (2009).

An interest in how the world works underpins an aesthetic that embraces the epic in the everyday and the interconnections between the personal, the cultural and the scientific. Such ideas have informed a wide range of projects building bridges between professionals and enthusiasts, artists and organisations, via education, community and cross-arts collaborations. Ground-breaking projects working with scientists and computer programmers in close relationship with the Institute of Contemporary Arts, emergence (2002) and e-Merge (2004), fed into a recently completed PhD at the University of Chichester.

Examples of Turner's recent independent and collaborative works include: Bleu (2009) at Colchester Arts Centre, a unique tribute to Yves Klein's paintings created by the naked form, with theatrenomad; Biograph (2008), solo dance with shadow puppetry with puppeteer Steve Tiplady and Indefinite Articles at the Battersea Arts Centre; Carnaval (2008), a Ballets Russes-inspired event at the Blackpool Tower Ballroom, also staged at the London Assembly in September 2008; and Nymphaeum (2006), a fantastical event with synchronised swimmers for the Porchester Baths, London. www.janeturner.net

Martyn Ware

Born in 1956 in Sheffield, UK. After leaving school he worked in computers for three years, and in 1978 formed The Human League. He formed the production company/label British Electric Foundation in 1980 and formed Heaven 17 the same year. Martyn has written, performed and produced two Human League, two BEF and nine Heaven 17 albums. As an artist he has featured on recordings totalling over 50 million sales worldwide, producing Tina Turner, Terence Trent D'Arby, Chaka Khan, Erasure, Marc Almond and Mavis Staples, among others.

He founded Illustrious Co. Ltd. with Vince Clarke in 2000 to exploit the creative and commercial possibilities of their unique three-dimensional sound technology in collaboration with fine artists, the performing arts and corporate clients around the world.


Clients include BP, Ogilvy, The British Council, The Science Museum, The Royal Ballet, Amnesty International, The V&A Museum, Mute Records, BBC TV, the Royal Observatory Greenwich, BAFTA, the Museum of London, Tate Britain, Red Bull Music Academy and the City of Westminster Council, among many others.

Martyn produces and presents a series of events entitled 'Future Of Sound' (21 so far) in the UK and around the world, and has created sonic architectural works, including at the British Pavilion at the Venice Architectural Biennale in 2006, amongst many others.

Martyn is also:

- Visiting Professor at C4DM at Queen Mary College, University of London
- Fellow of the Royal Society for the Arts
- Visiting lecturer at the Royal College of Art
- Visiting lecturer at the Harvard Graduate School of Design
- Founding member of the LA-based international think-tank group Matter
- Fellow of the Academy of Urbanism
- Founder member of 5D (a US-based organization promoting future development of immersive design)
- Member of the Writer's Guild of America
- Member of the advisory board for the De Montfort University 'DMU Creative' internet radio project
- Member of the advisory board for Queen Mary College, University of London, Doctoral Training Centre in Digital Music & Media for the Creative Economy
- Curator for the main stage at the Vintage At Goodwood festival in August 2010
- Honorary Patron of the Sensoria Festival, Sheffield

He also lectures extensively on music production, technology and creativity at universities and colleges across the world, and he is Head of Sonic Experience and co-founder of the world-leading sound-branding agency SonicID.

Recent and forthcoming projects include:

- West Street Story (November 2011): Massive 3D soundscape in the centre of Brighton as part of the White Night Festival.
- Tate Topology (November 2011): 3D soundscape embodying immersive three-dimensional topology manifestations at the Tate Modern.
- Beatles Immersive (ten-year install, opened July 2011): 3D soundscape celebrating the early years of the Beatles in the brand new Museum of Liverpool.
- Michael Clarke Company (June 2011): Huge-scale immersive 3D soundscape to accompany Michael Clarke's new show in the enormous Tate Modern turbine hall.
- Elizabeth Garrett Anderson Museum (April 2011): 3D soundscape featuring the history of the famous women's medical and social campaigner at the brand new museum in London.
- Creators Project (June-Oct 2010): New York, London, Sao Paolo, Seoul and Shanghai play host to the most amazingly eclectic group show of installations, talks, performance and events. Illustrious has created a 3D spatialization of Nick Zinner's composition for his immersive photographic and sound installation called A.D.A.B.A.
- Yves Saint Laurent, 'Belle D'Opium' Five Senses installation (June 2010): Together with Skrapic Consulting in Paris, Illustrious created a 3D soundscape composition together with 360-degree projection, theatrical lighting, rotating stage and olfactory installation to create an immersive impression of the essence of their brand new perfume.
- Pret-A-Porter Paris (January 2010): A large-scale immersive 3D soundscape created for this Paris fashion trade fair based around the sounds and impressions of the various quarters of the city, incorporating contemporary and historical recordings to create a 'magic realist' impression of Paris.
- 3D DJ Soundclash (February 2010): Two DJ teams (Warp Records and Ninja Tunes) battle it out in a huge Illustrious 3D soundscape underneath the Royal Albert Hall, in collaboration with Red Bull Music Academy.
- Soundlife London (June 2009): A giant 3D soundfield one-hour looping composition distilling the sonic essence of London into three-dimensional space, occupying the whole of Leicester Square in June 2009 for 10 hours a day for 10 days and playing to over 2 million people. This was created in collaboration with a variety of community groups, schools, ethnic associations and reminiscence groups, and in association with the City of Westminster Council.
- Large Scale Audio Immersion Experiment (November 2009): In collaboration with Goldsmiths College, London and Duran Audio, Illustrious created a 70m x 70m outdoor sound installation using unique Intellivox speakers controlled by Wave Field Synthesis technology and 3DAudioScape. A competition was held for students' soundscape compositions and the winning pieces were exhibited in the space.
- Acoustic Ecologies (2010-2012): In collaboration with arts organization Education4Conservation, a series of works entitled 'Nocturne/Wild Echoes' based upon the notion of outdoor immersive soundscapes in countryside environments, creating unexpected sonic interventions in natural environments.
- Breathing Trees (2010): A collaboration between light artists Creatmosphere and Illustrious; the third iteration of this exhibition will take place in Lima, Peru and will feature a huge-scale (200m x 50m x 25m) installation in a park in the centre of the city based on the concept of wordless communication in sound and light between two trees, one male and one female.
- Threeways SEN School Sensory Studio (2006-2012): This ongoing project is at the core of the working philosophy of Illustrious: a world-first Sensory Studio for autistic, disabled and SEN pupils using state-of-the-art technology to create a highly responsive, immersive 3D sound, light and HD projection environment which can be easily controlled and modulated using a variety of sensors and controllers by teachers and the pupils themselves.


Appendix 3 eMerge Performance Scripting Guide

Version 1.4 – 9th November 2011

Ian Willcock

[email protected]


Contents

Introduction

Language overview

Events

Commands

Keyword Listing


Introduction

The eMerge system is a networked system that tracks performance events in real time and is then able to issue commands to performers based upon a rule-set which has been created before the performance.

The eMerge scripting language, which controls the decision-making and cue-issuing process, is based on simple text-based definitions of rules, events and commands. These are written by the user(s) in everyday language and are then interpreted by the system. When you enter a rule or a command, the system checks that what you have entered contains the information it needs. It then either stores the information, if it is part of performance preparation, or carries out the requested action(s), if it is a command for immediate execution.


Language Overview

The eMerge system
The eMerge system connects live performers and other system-level sensors to a central server. The system has two main modes: a pre-performance preparation state and an active performance state.

In the preparation state, the system configuration is defined and checked, and the rule structures that will determine how the performance will unfold are entered and edited. In performance, the server stores information about what is happening and continually compares what has happened against a stored set of circumstantial directives – e.g. if such and such an event occurs, carry out this action.

Scripting language syntax
The basic structure of the eMerge scripting language is the Rule. Rules are entered before a performance and are stored in the system. They define what the system should do during a performance when a particular set of events occurs. Rules can be active or inactive, so that responses to sets of events can change as a performance progresses.

When the triggering events occur for a rule, its command(s) are executed and the triggering events are cleared from the system’s memory (so that the rule does not trigger again straight away). The rule itself, however, remains active unless it specifically makes itself inactive through one of its commands.

The other type of input is a Command. When a command is entered on its own (as opposed to when it is a part of a rule – see below), the system will carry out the requested action immediately. Commands can be entered at any time - while the system is in preparation mode or during a live performance.

Commands can cause a wide variety of actions to be carried out; these include causing information to be sent to one or more performers, or changes to the internal storage and operation of the eMerge system itself.


Rules
Rules determine how the system operates when it is running a performance. A rule has two parts: events and commands. If an event happens during a performance, the command will be issued: -

If

ian says “boo”

Then

say “go” to tom

Rules can be triggered by any one of a number of events happening, and can carry out more than one command: -

If

Ian says “boo”
Jane says “boo”
tom says “boo”
sonja says “boo”

Then

say “go” to sven
show “start playing section four” to craig

Here, if any one of the named performers says (types) “boo”, then the commands will be issued.

The events needed to trigger a rule can also be combined using ‘and’: -

If
jane says “boo” and section three is active
Then
say “go” to sven
show “start playing section four” to craig
start section four
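The behaviour just described (any one of a rule’s alternative trigger events fires it, ‘and’ conditions must all hold, the commands are then issued, the triggering events are cleared and the rule itself stays active) can be illustrated with a short sketch. The Python below is purely illustrative and is not part of the eMerge implementation; the Rule class, the try_fire method and the tuple form of events are assumptions made for clarity.

```python
# Illustrative sketch only (Python); not the eMerge server implementation.
class Rule:
    def __init__(self, triggers, conditions, commands):
        self.triggers = triggers        # alternative events: any one fires the rule
        self.conditions = conditions    # extra 'and' tests that must all hold
        self.commands = commands        # actions to issue when the rule fires
        self.active = True

    def try_fire(self, pending_events, system_state):
        """Fire if any trigger is pending and every 'and' condition holds."""
        if not self.active:
            return []
        matched = [e for e in pending_events if e in self.triggers]
        if matched and all(cond(system_state) for cond in self.conditions):
            for event in matched:               # clear the triggering events so the
                pending_events.remove(event)    # rule does not fire again immediately
            return list(self.commands)          # the rule itself remains active
        return []

# Example: if jane says "boo" and section three is active, cue sven and craig.
rule = Rule(
    triggers=[("jane", "says", "boo")],
    conditions=[lambda state: state.get("section three") == "active"],
    commands=['say "go" to sven', 'show "start playing section four" to craig'],
)
events = [("jane", "says", "boo")]
state = {"section three": "active"}
print(rule.try_fire(events, state))   # both cue commands; the events list is now empty
```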


Events
An event specification has three parts: a reference to a performer or other system object, a description of something they might do (their state), and the test for the event to be judged as having happened.

Here are some events: -

Object reference        Test            What
nick                    says            “stop”
rule SysName_RULE_2     is              active
performance time        is more than    10 minutes
zone view1 x            is less than    3.5

The reference part of an event must be a word or phrase that refers to something the system knows about and holds information about (see the detailed section on referring for more information).

Things the system knows about include:

performers
zones
rules
sections
the performance

There are four tests that may be applied to judge if the event has happened. The user can use different words and symbols to specify them. Here are just some ways of writing the four tests: -

equals    does not equal    is higher than    is lower than
is        isn’t             is more than      is less than
=         ≠                 >                 <
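As an illustration of how the alternative spellings in the table above all reduce to the same four comparisons, the following Python sketch maps each spelling onto a comparison function. It is illustrative only and not part of the eMerge implementation; the dictionary-based lookup is an assumption.

```python
# Illustrative sketch only: normalising the alternative spellings of the four tests.
import operator

TESTS = {
    "equals": operator.eq, "is": operator.eq, "=": operator.eq,
    "does not equal": operator.ne, "isn't": operator.ne, "≠": operator.ne,
    "is higher than": operator.gt, "is more than": operator.gt, ">": operator.gt,
    "is lower than": operator.lt, "is less than": operator.lt, "<": operator.lt,
}

def test(value, spelling, target):
    """Apply one of the four tests, whichever spelling the rule author used."""
    return TESTS[spelling](value, target)

print(test(120, "is more than", 45))   # True
print(test("stop", "is", "stop"))      # True
```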


Commands
Commands make the system do something. They may be triggered by events happening or they may be entered directly by a user at any time – even when the system is not running a performance.

Commands have two or three parts: an action to be carried out, a reference to a performer or other system object which is the target for that action, and, where required, a description of what is to be done or sent to that target.

Here are some commands: -

Action      Object reference    What
start       performance
say (to)    Hannah              “start singing”
set         rule 3              INACTIVE

The reference part of a command must follow the same rules as for events – it must be a word or phrase that refers to something the system knows about and holds information about.

The action definitions that can be used depend partly on what the system and its components (performers, editors, sensors etc.) can do. Some of the words that will always be understood by the system are: -

say, set, show, play, give a, start, stop, delete, get, make


Referring to Things or Objects
When a rule or command is entered into the eMerge system, the user has to provide enough information for the system to be able to identify exactly which information should be examined or sent, depending on whether the reference occurs in a rule or a command.

The system works this out by examining the object reference you provide and (particularly for events) the test you specify. If a word is not one of the built-in system words, it is assumed to be the name of an object. For rules, sections and zones, you need to specify their type: -

zone left_space
section introduction
rule SysName_RULE_21

For performers, you just need to use the name they are logged in as: -

jane
ian

Object names cannot contain spaces or punctuation marks.

Describing States or Values
There are four different ways of describing what value should be tested or sent to a system component.

generalised descriptions    ANYTHING, NOTHING, EMPTY, ACTIVE, INACTIVE
absolute descriptions       “hello”, 125, “F# 3”, 2 minutes
references to media         image “symbol1”, sound “low buzz”, midi data “riff”, movie “gesture 7”
references to objects       zone upstage, rule SysName_RULE_21, performance time
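The simpler kinds of value description above (generalised keywords and absolute values such as strings, numbers and durations) could be recognised along the lines of the following Python sketch. This is illustrative only and not the eMerge parser; media and object references are omitted because they require access to the system’s stored objects.

```python
# Illustrative sketch only: recognising generalised keywords and absolute values.
import re

KEYWORDS = {"ANYTHING", "NOTHING", "EMPTY", "ACTIVE", "INACTIVE"}

def parse_value(text):
    text = text.strip()
    if text in KEYWORDS:                           # generalised description
        return ("keyword", text)
    if text.startswith('"') and text.endswith('"'):
        return ("text", text[1:-1])                # absolute text, e.g. "hello" or "F# 3"
    m = re.fullmatch(r"(\d+)\s*minutes?", text)
    if m:                                          # durations held as seconds
        return ("seconds", int(m.group(1)) * 60)
    if re.fullmatch(r"-?\d+(\.\d+)?", text):
        return ("number", float(text))
    raise ValueError("unrecognised value: " + text)

print(parse_value("2 minutes"))   # ('seconds', 120)
print(parse_value('"F# 3"'))      # ('text', 'F# 3')
print(parse_value("ACTIVE"))      # ('keyword', 'ACTIVE')
```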


Events
The eMerge system is able to keep track of, and respond to, a large number of different sorts of events. When specifying events in rules, there are often additional pieces of information about the events that can be included in rule definitions. The nature of an event is specified in rules by a combination of the choice of ‘action’ word and information type. Events may be associated with a particular performer, or with a system-level input device such as a pressure pad or an ambient light level sensor.

Identifying event sources
Performers and the system-level objects (zones, sections and rules) are identified by name.

The ANYONE and NOONE keywords are also allowed.

Examples

(if) Jane gives a mouse signal

(if) ANYONE gives a mouse signal

System objects are referred to by name.

Examples

(if) performance time is more than 5 minutes

(if) section coda time is 2 minutes

(if) rule SysName_RULE_3 is ACTIVE

(if) zone myZone x-position is 2.0

After the source identification, an event definition must contain information about what sort of input is involved and what value that input must have for the event to be considered meaningful. The types of input that are currently implemented include: signals (of various types), text and midi.


Signals
Signal events are simple cues generated by the user or device interfacing with a client. They could include such things as mouse clicks, a key being pressed on the computer, a sudden significant change in sound level (such as a performer saying “bah”) or a sensor sending a message about location. Each signal is sent to the system together with information about what sort of event it is. This additional information can be used or not as a situation demands.

Examples
The general form, which will trigger on any signal event, is: -

(if) Dean gives a signal

To specify the type of input, use a type keyword in front of signal: -

(if) Dean gives a mouse signal

To specify additional things, add keywords (see below for which words can be used with each input type): -

(if) Dean gives a double mouse signal

mouse signal
e.g. Dean gives a mouse signal
This type of signal is generated in response to a mouse click. It has 3 possible additional properties: short, long and double.

short    a ‘normal’ quick mouse click.
         e.g. Dean gives a short mouse signal
long     a click where the mouse button is held down for more than half a second before being released.
         e.g. Dean gives a long mouse signal
double   a fast double click.
         e.g. Dean gives a double mouse signal


sound signal
e.g. Claire gives a sound signal
This type of signal is generated in response to a sudden sound pulse, often through a performer’s headset. It has 3 possible additional properties: short, long and double.

short    a short sound, for example “ki”.
         e.g. Claire gives a short sound signal
long     a sound in which a high level is maintained for more than half a second, for example “shar”.
         e.g. Claire gives a long sound signal
double   a double sound impulse, where the second sound starts within a third of a second of the first, for example “di ka”.
         e.g. Claire gives a double sound signal

key signal
e.g. ethan gives a key signal
This type of signal is generated when the performer presses and releases a single key on their computer’s keyboard. It has 3 possible additional properties: short and long, together with the letter value of the key being pressed.

short    a ‘normal’ quick key press.
         e.g. ethan gives a short key signal
long     a key press where the key is held down for more than half a second before being released.
         e.g. ethan gives a long key signal
letter   the letter value of the key – for character keys, this is the character that would appear on screen if only that key were pressed; the letter is placed in speech marks. For control keys the keyword RETURN can be used; SPACE is also allowed instead of “ ”.
         e.g. ethan gives a “z” key signal
         ethan gives a RETURN key signal


Text events
Performance events involving text can, in principle, be of two types: typed or spoken (only the former is currently implemented). They are distinguished through the use of different keywords: types and says. Both types of event need the information to be matched to be specified, either using a string of characters enclosed in speech marks or through a keyword. At the present time, both are synonymous in event descriptions.

Allowable keywords are; NOTHING and ANYTHING.

Examples

(if) susan types “welcome”

(if) ali types ANYTHING

If speech to text is implemented, the following can be used

(if) kate says “hello”

(if) mike says NOTHING

types
e.g. Molly types “I wandered lonely as a cloud”
This type of event gives access to text input. The event is sent when the performer presses the RETURN key.

Note that the performer client cannot support both typed text input and key and mouse signals at the same time.


MIDI Events
Performance events involving MIDI can be collected by the performer client in 2 different ways – as individual events or as completed notes. These collection settings are set in the performer client.

Events       generates a midi signal each time certain sorts of midi events are received by the client. These are described in event specifications by the keyword gives a (see below for details).

Notes        generates a separate performance event for each note that is played (i.e. one event for each noteOn-noteOff pair). These are described in event specifications using the plays keyword.

The following 2 collection strategies may be implemented in future versions: -

Phrases      generates a list of notes played whenever the player pauses for more than 2 seconds and no notes are being held.

Statements   are similar to phrases, but they are generated (as a performance event which includes a list of notes played) whenever the player presses a particular key (usually the top or bottom note on a keyboard).

MIDI note events are specified using the plays keyword together with a short piece of text identifying the pitch to be matched. Notes are specified by letter name (in upper case) and the modifiers ‘#’ for sharp and ‘b’ for flat. There should then be a space and a number indicating which octave is wanted. The whole note specification should be contained in speech marks. The keywords NOTHING and ANYTHING are allowed.

Examples

(if) sophie plays “A 4”

(if) ian plays “F# 2”

(if) frank plays ANYTHING

plays
e.g. rashid plays “C 4”
This type of event gives access to high-level midi input. When the event is sent depends upon the performer client settings. It will be sent when one of the following conditions is met: -

Note - when a noteOn-noteOff pair is completed

Individual MIDI events are captured as signals


midi signal
e.g. yolanda gives a midi signal
This type of signal gives event-level access to midi input devices. It generates a lot of data, and midi input will generally be better gathered using higher-level MIDI events (see above). It has 2 possible additional properties: eventType and note.

eventType   the type of MIDI event that generated the signal. Possible types are noteOn, noteOff, controlChange, programChange and pitchBend.
            e.g. yolanda gives a noteOn midi signal
note        a short piece of text which identifies the note (if any) that is associated with the MIDI event. Middle C is identified as “C 3”. Notes are specified by letter name (in upper case) and the modifiers ‘#’ for sharp and ‘b’ for flat. There should then be a space and a number indicating which octave is wanted. The whole note specification should be contained in speech marks.
            e.g. yolanda gives a “G# 5” midi signal
            yolanda gives a “Bb 0” midi signal

Note that the above examples will trigger on both the starts and ends of midi notes. To get a single notification for each note that is played, use the dedicated MIDI event type (see plays above).
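For readers who want to relate the note spelling above to raw MIDI data, the following Python sketch converts the guide’s spelling (middle C written as “C 3”) into a MIDI note number, assuming the common convention that middle C is MIDI note 60. It is illustrative only and not part of the eMerge implementation.

```python
# Illustrative sketch only: "C 3" (middle C) is assumed to map to MIDI note 60.
PITCH_CLASS = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def note_number(spec):
    name, octave = spec.split()               # e.g. "F#", "2"
    pc = PITCH_CLASS[name[0]]
    if len(name) > 1:
        pc += {"#": 1, "b": -1}[name[1]]      # sharp or flat modifier
    return (int(octave) + 2) * 12 + pc        # "C 3" -> 60 (middle C)

print(note_number("C 3"))    # 60
print(note_number("F# 2"))   # 54
print(note_number("Bb 0"))   # 34
```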


System events

The system is able to generate events arising from the performance, from time-based performance divisions called sections, from representations of physical locations called zones, and from the state of rules.

performance
e.g. performance is active
Performances have 2 qualities that can be used in specifying events: activity and elapsed time.

When a performance is started, it is set to active and its clock starts counting the seconds since the start performance command was issued. To create an event which is fired at a particular point in a performance, test for the number of seconds: -

performance time is 120

or minutes: -

performance time is 2 minutes

‘more than’ and ‘less than’ are also allowed: -

performance time > 45
performance time is more than 5 minutes
performance time < 360
performance time is less than 6 minutes

zone
e.g. zone zone1 is populated
Zones are ways in which physical locations can be dynamically represented by the system. They have 2 qualities: whether they are populated or empty, and a 3-dimensional position.

Zones are identified by a name: -

zone myzone

You can use sense data to update their properties (see the commands section) and then make rules which fire when a zone is ‘in’ a particular location: -

zone myzone position is (1.5,2.0,0.0)


or when the zone is populated: -

zone myzone is populated

or when one of the zone’s position co-ordinates is equal to, less than or more than a specific value: -

zone myzone x is 5.0
zone myzone y is less than 1.3
zone myzone z > 10.5

section
e.g. section interlude1 is active
Sections are ways in which time-based divisions of the performance can be dynamically represented by the system. They have 2 qualities: whether they are active or inactive, and an elapsed time.

Sections are identified by a name: -

section intro

Sections are created when a new section name is encountered by the system, but they are not started (set to active and their clock reset to 0 and started) unless the start command is used: -

start section intro

Sections are stopped by the stop command: -

stop section intro

When a section is started, its activity can be used to fire events in rules: -

(if) section intro is active

Its elapsed time, counted in seconds, can also be used in rules (until a section is started, it remains at 0):-

(if) section intro time > 4 minutes
(if) section main time is less than 20
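The behaviour of a section’s activity flag and elapsed-time clock described above (time stays at 0 until the section is started, and is zeroed again when it is stopped) can be sketched as follows. The Python below is illustrative only; the eMerge server’s actual implementation is not reproduced here.

```python
# Illustrative sketch only: a section's activity flag and elapsed-time clock.
import time

class Section:
    def __init__(self, name):
        self.name = name
        self.active = False
        self._started_at = None

    def start(self):                  # 'start section intro'
        self.active = True
        self._started_at = time.time()

    def stop(self):                   # 'stop section intro'
        self.active = False
        self._started_at = None

    @property
    def elapsed(self):                # seconds; stays at 0 until started
        if self._started_at is None:
            return 0.0
        return time.time() - self._started_at

intro = Section("intro")
intro.start()
time.sleep(0.1)
print(intro.active, round(intro.elapsed, 1))   # True 0.1
intro.stop()
print(intro.active, intro.elapsed)             # False 0.0
```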


Commands
Commands cause the system to do something. They can be issued by the user or triggered by a rule if its triggering circumstances arise. All commands require a target reference (e.g. performer 2, rule 7), although this is implicit in a small number of cases (see the entry for ‘get’ below), and a data value. The particular action triggered by a command is often dependent on both the keyword used and the data type supplied.

The ordering of elements in a command usually follows one of 2 syntactical models, as far as possible following that of ‘natural’ English usage for each keyword. In all cases, the command keyword must be the first item.

Examples

set zone upstage to populated (keyword-target-data)
say “hello” to kevin (keyword-data-target)
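As an illustration of the two orderings, the sketch below recognises the command keyword first and then reads the remaining words in the order that keyword expects. It is Python, illustrative only, and handles just say and set; the split on the word ‘to’ is an assumption made for the sketch, not the eMerge grammar.

```python
# Illustrative sketch only: distinguishing keyword-target-data and keyword-data-target.
def parse_command(text):
    keyword, rest = text.split(" ", 1)
    if keyword == "say":                       # keyword-data-target
        data, target = rest.rsplit(" to ", 1)
        return {"action": "say", "target": target, "data": data.strip('"')}
    if keyword == "set":                       # keyword-target-data
        target, data = rest.rsplit(" to ", 1)
        return {"action": "set", "target": target, "data": data}
    raise ValueError("unknown keyword: " + keyword)

print(parse_command('say "hello" to kevin'))
# {'action': 'say', 'target': 'kevin', 'data': 'hello'}
print(parse_command("set zone upstage to populated"))
# {'action': 'set', 'target': 'zone upstage', 'data': 'populated'}
```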

Commands can be divided into 2 main categories based on the type of action they cause to happen: cuing and data management commands.

Cuing commands
These cause messages or orders to be sent to performer clients, usually accompanied by one or more pieces of data, and request the client to display it to the performer in an appropriate manner.

Examples

say “hello” to daniel (speaks)
play “D# 4” to daniel (plays a tone)
show “start section 7” to daniel (displays text)
show image score1.gif to daniel (displays image)

Data management commands
These direct the system to carry out actions on its own internal data representation. This could cause a change in the system operation, for example starting and stopping performances, or they might alter the database of stored rules.

Examples

start section intro
set rule 2 to INACTIVE
set zone camera1 position to (1.2,2.0,0.0)


Identifying targets
Performers and rules are identified by name. If the first word of a target is not a keyword, it is assumed to be the name of a performer or rule.

The EVERYONE keyword is also allowed for cuing commands (those whose target is a ‘performer’).

Examples

say “hello” to joseph
show image image1.jpg to EVERYONE

Cuing Commands

say
say data to target
e.g. say “hello” to michelle
The say command sends a text string to a performer client and asks it to speak it to the performer. The text to be spoken should be enclosed by speech marks.

The character sequences .txt and .html are not allowed in text strings as they are used to distinguish strings from filepaths (future performance clients will offer the capability to read from text files).

If the performer client does not have speech capability, the text will be displayed on screen instead.

give
give a data eventType signal to target
e.g. give a “F# 3” noteOn midi signal to mixer
The give command causes the named client to generate a single MIDI event. It has 2 required additional properties: eventType and note.

The noteOn and noteOff signals require a pitch; other MIDI signal types require an integer.

eventType   the type of MIDI event to be generated. Possible types are noteOn, noteOff, controlChange, programChange and pitchBend.
            e.g. give a “F 5” noteOn midi signal to sarah
note        a short piece of text which identifies the note or integer value that is associated with the MIDI event.


show
show data to target
e.g. show “hello” to monica
The show command sends a text string or an image to a performer client and asks it to display it on the computer’s monitor.

Note that the current version of the performer client cannot display both text and images at the same time. However other modes of feedback (speech, MIDI etc.) do not affect the visual display.

When a performer client displays a new text or image item, it displaces whatever was previously being displayed.

Displaying text from a file is not yet implemented.

text   For text, the data expression may be either an absolute expression or a file reference (only the former is currently implemented). For absolute expressions, speech marks should enclose the text.

e.g. show “start playing now” to Jane

show polemic.txt to Tony

The character sequences .txt and .html are not allowed in absolute expressions as they are used to distinguish strings from filepaths. File names may not include spaces.

For files, the text should be formatted as plain text or as (simple) html.

images   For images, only a file reference is allowed, which must be preceded by the keyword image. File names may not include spaces. Images should be stored in jpeg or gif format.

If the image is larger than the display area (950 pixels wide, 600 or 485 pixels high depending on the keyboard input mode), it will be scaled to the display area’s size. The proportions of the image are preserved.

e.g. show image xmas_card.jpg to EVERYONE

The image will first be downloaded to the performer client’s computer, which may cause a delay if files are large and networks are slow. However, the client caches all downloaded assets locally so that they are available instantly for subsequent use.


play
play data to target
e.g. play “C# 6” to Django
The play command sends a text string identifying a MIDI note, or a file reference, to a performer client and asks the client to play it through the appropriate playback system. The file may be either a MIDI file or a sound file; the type is indicated using the midi or sound keywords. File playback is not yet implemented.

Note that playing back a sound file may interfere with spoken feedback.

When a performer client receives a request to play a new piece of sound or MIDI information, it interrupts any playback that is already taking place.

There is currently no way to set instruments for MIDI playback using rules, although this can be done manually in the performance client.

Playback is currently restricted to a single MIDI channel – which is selectable in the performance client.

Only MIDI note playback is currently implemented.

MIDI note   Notes are specified by letter name (in upper case) and the optional modifiers ‘#’ for sharp and ‘b’ for flat. There should then be a space and a number indicating which octave is wanted. The whole note specification should be contained in speech marks.

e.g. play “Bb 4” to Jane

MIDI file   For MIDI files, a file reference must be given, preceded by the keyword midi. File names may not include spaces.

e.g. play midi fanfare1.mid to performer 5

e.g. play midi 10.0.1.1/midi/fragment6.mid to performer 5

sound file   For sound files, a file reference must be given, preceded by the keyword sound. File names may not include spaces.

Permissible sound file formats are .aif, .wav and .mp3

e.g. play sound stretched.aif to performer 5


Data and system management commands

get
get me rule rule_name
get me rule-list
e.g. get me rule SysName_RULE_14
The get command causes a data object in the system to be retrieved and returned to the target (which has to be me, the requesting client, at present).

rule        For rules, the events and commands making up a rule are returned in an XML format. At present, a rule’s state and labels are not returned.
rule-list   Returns an XML listing of all current rules’ (unique) system names. These are in the form SysName_RULE_n, where n is an ID number.

set
set target to data
e.g. set rule 2 to INACTIVE
set zone origin position to (2.5, 2.5, 5.0)
The set command causes a data object in the system to be loaded with a new value. There are a number of allowable targets, each of which affects the permissible values for data: performance, rule, section (not implemented).

rule   For rules, data can have two values: ACTIVE and INACTIVE. If a rule is active, its conditions are considered by the system when it judges whether any significant events have taken place.

e.g. set rule 2 to ACTIVE

A rule can make itself inactive as one of its commands, meaning that it will only be triggered once.

section   For sections, data must be a text string.

e.g. set section “transition 2” to active

Assigning section a new name (even to the same name) causes the system to reset the section time. Section can thus be used in event definitions to make commands conditional on stages of a performance. Progress through a section is then given by section time.

e.g. (if) section time > 1 minute

performance   A performance can be either ACTIVE or INACTIVE. When the system starts, all performances are set to inactive with a performance time of 0.


Setting performance to active can also be done using the start command (see below).

e.g. set performance to active

If a performance is active, its performance time will be continually increased. When it is stopped (by setting it to inactive), the performance time is reset to zero.

zone   A zone can be set both to a position and to ‘populated’ or ‘empty’. Zones are referred to by name.

e.g. set zone upstage to populated
set zone avater1 position to (0.0, 3.1)

If the zone name has not been previously encountered by the system in the project you are working in, it is created.

Positions can be 2-dimensional or 3-dimensional and are specified by 2 or 3 numbers within parentheses, separated by commas: -

e.g. (3.2,1.0,-2.3)

start
start performance
start section name
e.g. start section introduction
The start command is a synonym for set formalUnit to active (see set section above).

It can be used with performance or section, when a name for the section must be supplied.

When the command is executed, the performance or section clock is started and is updated until the section or performance is stopped, reset or set to inactive.

performance   For the performance, no further information is needed.

e.g. start performance

The performance is always set to stopped (or inactive) when it is created or loaded from the database.

section   For sections, if the section has already been involved in the performance as either an event source or as the target of a command, it is set to active and its internal elapsed-time clock is started.


If it has not been involved in the performance, it is created and its clock started.

e.g. start section introduction

Sections are not saved by the system – they are created as needed during a performance (this process should be transparent to the user – if you mention a section and it does not already exist, it will be created)

stop
stop performance
stop section name
e.g. stop section introduction
The stop command is a synonym for set formalUnit to inactive (see set section above).

It can be used with performance or section, when a name for the section must be supplied.

When the command is executed, the performance or section clock is stopped and set to 0.0.

performance   For the performance, no further information is needed.

e.g. stop performance

The performance is always set to stopped (or inactive) when it is created or loaded from the database.

section   For sections, if the section has already been involved in the performance as either an event source or as the target of a command, it is set to inactive and its internal elapsed-time clock is stopped and set to 0.0.

If it has not been involved in the performance, it is created.

e.g. stop section introduction

Sections are not deleted by stop so that their activity may still be used in event descriptions

e.g. (if) section introduction is inactive

However, note that there is currently no way to test if a section has ever been active.


Keyword listing

ACTIVE, ANYONE, ANYTHING, controlChange, delete, double, event, EVERYONE, get, give a, gives a, image, INACTIVE, is, is more than, is less than, key, long, make, midi, minutes, mouse, movie, NOONE, noteOff, noteOn, NOTHING, osc, Performer, performance, performance time, pitchBend, plays, position, programChange, RETURN, rule, rule-list, says, seconds, section, section time, short, show, signal, sound, SPACE, time, to, trigger, types, x, x-position, y, y-position, z, z-position


Appendix 4
Live Interactive Multimedia Performance Toolbox (LIMPT) System Software

Please see the file Read-Me.rtf on the DVD for a manifest of the software submitted as a part of this thesis.


Appendix 5 LIMPT Client ActionScript Library Documentation


Software library – rationale
As part of the multimedia server system development process, a software library containing a number of packages has been produced. These are intended to formalise and focus the development along best-practice software production guidelines, and also to simplify the production of third-party specialised clients for different platforms and purposes, since they can be used with a limited or customised GUI.

The library is currently implemented in ActionScript 2.0, although migration to AS3.0 is planned.

The latest versions of all packages are available from the project repository, http://bartleby.ioct.dmu.ac.uk/repos/mm_server/client/

Software packages - documentation

app

GlobalHooks AppManager AppPreferences GUIManager StatusManager

The app package contains classes that are highly customised for a particular client implementation; it is anticipated that they will be largely rewritten for each new client implementation.

GlobalHooks      Provides a single-instance, global point of access to application components.
AppManager       Controls the startup process of the Flash data structures and managers. Hands control to the GUI manager when loading and initialisation etc. are complete.
AppPreferences   A CDT for global application preferences.
GUIManager       Controls the frame shown by a Flash GUI timeline; it allows different system components to reliably change the GUI display without interference.
StatusManager    Keeps track of client status and notifies any objects that have registered to receive util.IntDataObj messages.


component

CapabilityDB CapObject CompData CompLoader ComponentLoadManager ComponentManager Preferable

The component package contains classes that handle Flash GUI and input/output components for a client. These are loaded dynamically, since different client implementations may well require different collections of settings and opportunities for user customisation.

CapabilityDB           Maintains a database of a client’s input and output media handling capabilities and of which components provide them. When a message arrives from the central controller requesting that information is output in a particular way, this class returns the appropriate component reference to the message.Despatcher class.
CapObject              The custom data type for the CapabilityDB class.
CompData               The custom data type for the ComponentManager.
CompLoader             A simple class that loads a single Flash .swf component and reports the result. Used by the ComponentLoadManager.
ComponentLoadManager   A helper class for the ComponentManager that is responsible for loading all components.
ComponentManager       The main class that handles Flash GUI components. It maintains a database of loaded components and routes system calls to them. It also deals with getting and setting user preferences for client components.
Preferable             An interface which components must implement if they are to use the user preferences services provided by the client (through the PreferenceManager).
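The routing role of CapabilityDB can be illustrated with a short sketch. The class below is Python and purely illustrative; the real class is ActionScript 2.0 and its exact interface is not reproduced here.

```python
# Illustrative Python sketch only; not the ActionScript 2.0 CapabilityDB interface.
class CapabilityDB:
    def __init__(self):
        self._capabilities = {}            # media type -> component reference

    def register(self, media_type, component):
        self._capabilities[media_type] = component

    def component_for(self, media_type):
        """Return the component that handles this kind of output, if any."""
        return self._capabilities.get(media_type)

class TextDisplay:                         # stand-in for a loaded output component
    def show(self, data):
        print("DISPLAY:", data)

db = CapabilityDB()
db.register("text", TextDisplay())
handler = db.component_for("text")         # what a Despatcher-like class might do
handler.show("start section 7")            # DISPLAY: start section 7
```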


lexical

Command Rule RuleEditor RuleEditorServerProxy

The lexical package contains classes that support editing and entry of rules on the central controller.

Command                 An abstract data type that represents a stand-alone command in the rule editor.
Rule                    An abstract data type that represents a single rule in the RuleEditor.
RuleEditor              A simple database class that maintains a local list of rules in the current performance. It also parses XML representations of rules (received from the central controller) into entries in its database so they can be displayed or edited.
RuleEditorServerProxy   A class to handle communication with the server for the RuleEditor.

user

UserData UserManager PreferenceManager

The user package is a simple database system which allows a single client to be operated under different user names. This is optional and will not be required for all implementations. Although support for a user password is provided, passwords are not encrypted and are stored as plain text (they are hidden from other users, but vulnerable).

UserData            The custom data type for the UserManager.
UserManager         A class providing a database of users and passwords. The username is attached to all events passed to the central controller and determines which part of its state variables is updated.
PreferenceManager   A class that loads and saves preferences for a user for all software components that implement the Preferable interface.

network

Connection ConnectionsDB NetworkManager SocketController

The network package contains classes that maintain a database of available connections and also provide a single, global point of access to control and use the network connection.

Connection         A custom data type for the ConnectionsDB class.
ConnectionsDB      A simple database of network connections.
NetworkManager     A class that is responsible for making and ending network connections and for providing a global point of access for sending and receiving information.
SocketController   A low-level socket class which provides an interface between a TCP/IP socket and the NetworkManager instance. It is based on code by Colin Moock. Other classes could be used to provide similar connectivity using different protocols (e.g. OSC).

xml

XMLBuilder XMLParser

The xml package provides helper classes that make manipulating XML data easier. They also ensure that changes in the base XML API (especially those in AS 3.0) will not require extensive refactoring of classes that use XML data.

XMLBuilder   Provides a number of static functions which allow the user to create XML document elements.
XMLParser    Defines an object that loads and parses XML specified as a URL, or parses XML passed as a string. It also provides static methods for accessing data in an XML document.
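The kind of convenience the two classes provide can be illustrated with Python’s standard XML library; the sketch below is illustrative only, and the element and attribute names it uses are hypothetical rather than the LIMPT message format.

```python
# Illustrative Python sketch of XMLBuilder/XMLParser-style helpers; element and
# attribute names here are hypothetical, not the LIMPT message format.
import xml.etree.ElementTree as ET

def build_element(tag, attributes=None, text=None):
    """Create a single document element (roughly what XMLBuilder offers)."""
    element = ET.Element(tag, attributes or {})
    element.text = text
    return element

def parse_string(xml_text):
    """Parse XML passed as a string (roughly what XMLParser offers)."""
    return ET.fromstring(xml_text)

msg = build_element("event", {"source": "jane"}, "boo")
print(ET.tostring(msg, encoding="unicode"))   # <event source="jane">boo</event>
root = parse_string('<rules><rule name="SysName_RULE_1"/></rules>')
print(root.find("rule").get("name"))          # SysName_RULE_1
```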

util

AppProperties ConsoleLogger DetailedError HostInterface IntDataObject NetInterface Observable Properties

PropertyLoader Timer

The util package provides a number of (mostly low-level) utility classes that are used by other classes or by applications.

AppProperties    Wraps the Properties class in an application-specific way. It provides a single, global point of access to a range of data types in a component’s loaded property file(s).
ConsoleLogger    A simple class providing a default logger for the app.GlobalHooks class that routes all output to the output window.
DetailedError    An extension of the built-in Error class which also records the name of the component generating an error.
HostInterface    A complex class that provides a uniform, buffered, synchronous data and command interface to a host (Director in the current implementation). It prevents a host error cascading into all subsequent function calls (although those to the same Director component are disrupted). It requires a corresponding FlashCommunication object in the host.
IntDataObject    A simple custom message class which allows application components to receive notifications from subclasses of the Observable class.
NetInterface     A class that provides buffered, synchronous communication with the central controller via the NetworkManager.
Observable       A simple implementation of the Subject component of the Observer design pattern (although implemented as a superclass rather than an interface). The class provides a simple database of observers and a notify method; state is left to subclasses.
Properties       A low-level class which simulates the Java Properties class.
PropertyLoader   Loads individual properties files for the Properties class.
Timer            A very simple timer object which can be used for both one-off and repeating tasks.
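The Observable class corresponds to the Subject half of the Observer pattern; the following Python sketch shows the general shape of such a class. It is illustrative only; the real class is ActionScript 2.0 and notifies observers with IntDataObject messages, which the sketch simplifies to plain values.

```python
# Illustrative Python sketch of a Subject/Observable; not the AS2 implementation.
class Observable:
    def __init__(self):
        self._observers = []

    def add_observer(self, observer):
        self._observers.append(observer)

    def remove_observer(self, observer):
        self._observers.remove(observer)

    def notify(self, data):
        for observer in self._observers:
            observer(data)               # state handling is left to subclasses

class StatusManager(Observable):         # e.g. how a status manager might use it
    def set_status(self, status):
        self.notify(status)

manager = StatusManager()
manager.add_observer(lambda status: print("status is now:", status))
manager.set_status("connected")          # status is now: connected
```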

message

DataEdit_Com_MessageBuilder DataEdit_Rule_MessageBuilder Despatcher Display_MessageProcessor Error_MessageProcessor MessageBuilder MessageBuilderFactory MessageProcessor MessageProcessorFactory PerfData_MessageBuilder XmlMessage

The message package provides classes that allow a client to send sensor information to the central controller and to process information received. It contains a number of specialist subclasses which are likely to increase in number as functionality is implemented.

DataEdit_Com_MessageBuilder    A specialised subclass of the MessageBuilder class which builds dataEdit type messages carrying commands to the central controller.
DataEdit_Rule_MessageBuilder   A specialised subclass of the MessageBuilder class which builds dataEdit type messages carrying rules to the central controller.
Despatcher                     A class that takes received data and routes it to the appropriate output module based on information in the component.CapabilityDB (accessed through the single component.ComponentManager instance).
Display_MessageProcessor       A specialised subclass of the MessageProcessor class which unpacks messages from the central controller that contain information to be sent to one of the available output modules by the Despatcher.
Error_MessageProcessor         A specialised subclass of the MessageProcessor class which unpacks error messages from the central controller. The message data is sent to the current error output module by the Despatcher.
MessageBuilder                 A base class for packing data into the XML formats required for transmission to the central controller.
MessageBuilderFactory          A class that returns the appropriate MessageBuilder subclass for a particular message type.
MessageProcessor               A base class for unpacking data from the XML formats used for transmission from the central controller.
MessageProcessorFactory        A class that returns the appropriate MessageProcessor subclass for a particular message type.
PerfData_MessageBuilder        A specialised subclass of the MessageBuilder class which packs sensor data into the XML message format used for transmission to the central controller.
XmlMessage                     A lightweight custom data object used for sending unpacked messages between system components.
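The factory arrangement (a MessageBuilderFactory returning the right MessageBuilder subclass for a message type) can be sketched as follows. This Python is illustrative only: the class names follow the documentation above, but the interfaces and the XML shapes are assumptions, not the LIMPT formats.

```python
# Illustrative Python sketch of the factory idea; interfaces and XML are assumed.
class MessageBuilder:
    def build(self, payload):
        raise NotImplementedError

class PerfData_MessageBuilder(MessageBuilder):
    def build(self, payload):
        return "<perfData>%s</perfData>" % payload           # hypothetical XML shape

class DataEdit_Rule_MessageBuilder(MessageBuilder):
    def build(self, payload):
        return "<dataEdit type='rule'>%s</dataEdit>" % payload

class MessageBuilderFactory:
    _builders = {
        "perfData": PerfData_MessageBuilder,
        "dataEdit_rule": DataEdit_Rule_MessageBuilder,
    }

    @classmethod
    def builder_for(cls, message_type):
        return cls._builders[message_type]()   # the right subclass for this type

builder = MessageBuilderFactory.builder_for("perfData")
print(builder.build("key signal z"))           # <perfData>key signal z</perfData>
```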

project

Project ProjectsPrefs ProjectsDB ProjectsDBServerProxy

The project package provides classes for managing projects on the central controller.

Project                 A CDT representing an individual project.
ProjectPrefs            The preferences for a specific project.
ProjectsDB              A lightweight database which holds a local list of all available projects on the Central Controller.
ProjectsDBServerProxy   A class which manages communication between the ProjectsDB and the Central Controller.


Appendix 6 LIMPT Workshop Guide

This guide will help you get to know the LIMPT client – this is the piece of software that communicates between a performer (or other element of a performance) and the system.

The client collects and filters activity and tells the system when something significant has happened (this is known as an event). It also presents messages from the system to the performer; these are usually cues of one sort or another: a spoken command, a piece of text or an image.

The LIMPT client - overview
There are three parts to the client programme:

• Client Configuration – where you set up the input and output options.
• Prepare – where you add or edit the rules that will shape a performance.
• Perform – where the client collects various types of input from you while you are performing and passes on cues to you from the system.

You move between these 3 sections using the buttons at the top-left-hand side.


Getting started - configuring the client

1. Double click the perfClient_dist.osx icon

2. If you are using some input plug-ins (sound or MIDI), you may be warned that you are using a trial version – just click OK.

3. After the client has started up, if this is your first use, you’ll see the configuration interface: -

4. Make sure userManagement is selected in the left-hand column, and then click on the + button at the bottom of the list of users on the right.


5. Enter your name (do not worry about a password) and click the store button. You will now be remembered by the client.

6. You may want to configure input or output capabilities – we will turn on the speech option as an example. Click on the voiceOut option in the left-hand menu (you can always come back and change any of these options later): -

7. Click on the check-box in the options panel to turn speech output on. To check the voice, click on the blue button. If you don’t hear anything, check that the check box has a blue cross in it and that your computer’s audio is turned up.


8. When you are ready, select the network item in the configuration panel:

9. This panel is where you select the server which will store and run your project. The chosen server will be highlighted in green. In most cases, you will choose Bartleby (this is the name of the LIMPT server at De Montfort University).

10. Click the connect button – you should see the connection indicator in the top right-hand corner turn green and the activity indicator flash red briefly. If the connection indicator does not turn green, check your computer’s Internet connection.

11. A LIMPT server can store (and run) many performance projects; you can now choose which one you will work with: select the project item in the left-hand list.

12. You’ll see a list of projects stored on the server you are connected to: -


13. The project you are working with is highlighted in green. To start with, select introduction_to_LIMPT; you will see it highlighted. The client will remember which project you are working on, but you can change it at any time by returning to this configuration panel. Introduction to LIMPT is a special project which will help you learn about the LIMPT system and how to use it.

14. You have now finished the initial client set-up and are now ready to work with the LIMPT system. To start with, you will explore how the client works in a performance. Select the Perform button in the top left-hand corner: -

15. Press the „1‟ key on your computer keyboard.

You can set the client (see the clientSettings configuration panel) to automatically use your last settings, connect and then go straight to the Perform screen (this is the normal way to use it as a performer).

You will not have to go through the configuration process again unless you want to change something (e.g. to work on another project). To return to the client configuration section, choose the Client button in the top left-hand corner.


The LIMPT system and performance
The LIMPT system is based on a very simple model of how a live performance unfolds: -

The passing of cues to performers in response to significant actions (events) is governed by rules. A Rule can simply be a way of saying that, if a certain performer does something, then send a cue to another performer.

So, a simple rule might be:

(if) ian plays a C# 3 (then) say “start section three” to ruth

A LIMPT project stores all the rules that govern the generation of cues to performers (or other performance elements such as computers, lighting desks, projection systems etc) for a performance.

These rules are written in a fairly simple language called eMerge.


Creating a LIMPT project 1. Click on the Client button in the top left-hand corner.

2. Select the network entry in the list on the left and check you are connected to Bartleby (the name of the server you are connected to is highlighted in green).

3. Select the project entry in the list on the left. You will now make a new project.

4. Click on the + button at the bottom of the project list; the new project panel appears: -

5. Give your new project a name that includes your user name (so you’ll remember which it is) and then press the Store button. You should see your project name added to the list of projects. You do not need to put anything in the other text fields.

6. Click on your new project name to select it (it will be highlighted in green). This makes it the project you are now working on:-

7. Click on the Prepare button in the top left-hand corner, the rule entry and editing screen will show (it will be blank as you have just made a new project):


Controlling performances
In the LIMPT system, the flow of a performance is shaped through a set of rules. These rules are written using the eMerge live performance scripting language. They have two parts: one (or more) events to look out for, and one (or more) commands – things to do when the event(s) happen.

There is a detailed language reference which lists every word, gives examples of how to use them and explains the language’s grammar – for now, we will just use a few features to get started.

In rules, performers are referred to by name (e.g. ian, jane etc.). Performers can give signals (e.g. a “1” key signal, a mouse signal) or they can do things (e.g. plays “F# 2”, types “start the show”). In the LIMPT system, a ‘performer’ may be a person, another computer programme or a piece of equipment.

Cues can be given to performers in a number of ways; for now we will limit ourselves to showing things (e.g. show “Gradually speed up” to ian, show “score_page3.jpg” to jane) and saying things (e.g. say “start copying the performer to your right” to eric).

Adding a Rule to a Project
A project may contain many rules, but they are all entered in the same way in the Prepare screen.

For this example, we will write rules that provide cues when a button is pressed on the computer keyboard – but this is only so that you can test your rules easily; the events that trigger rules could also be MIDI signals, sound levels, elapsed time, distance or signals provided by sensors such as floor pads. The basic principle is: if you can work out a way to sense a quality, you will probably be able to use it as an event in the LIMPT system.

1. To add a rule, you click on the + button at the bottom of the rule-list.


2. The rule edit panel will open:

3. Click on the + button underneath the list of IF… events. Type your user name followed by gives a “z” key signal – here is my version: -

4. Click on the store button, you will see your text appear in the list of IF… events.

5. Click on the + button underneath the THEN… list; the entry window opens again.

6. Type show “start circling the stage” to EVERYONE and then click the Store button. Your rule entry window should look something like this: -


7. Click on the store button; your rule is sent to the LIMPT server and stored there. You should see your rule appear in the list of rules and a message from the server saying INFO - Rule Stored.

8. You can now test your rule. Click on the Perform button in the top left-hand corner; you will see the perform screen. Press the ‘z’ key – you should get a message from the server telling you to circle the stage!


Editing a Rule
Often you will want to change rules; the process is very similar to entering a rule. We will change the event that triggers our rule and make it give a spoken as well as a visual cue to performers.

1. Click on the Prepare button in the top left-hand corner. The rule(s) will be reloaded from the server.

2. Select the rule you wish to edit in the rule list; it will be highlighted in green: -

3. Click on the edit button at the bottom of the list:-

4. The rule entry panel opens with the rule you wish to edit. We will first change the event the rule is triggered by. Select the event by clicking on it; it will be highlighted in green: -

5. Click on the edit button just underneath the event listing (see above).

6. The text entry panel will open; edit the text so that it is your user name followed by gives an “x” key signal. Click the Store button. You will see the change in the Rule panel.


7. We will now add another item to the commands. Click the + button underneath the THEN… listing (the third list in the rule panel).

8. In the text entry box, type say “move on to the next section” to EVERYONE. Click the store button. You should now have 2 items in the command list; one giving a visual cue, the other a spoken cue: -

9. Click on the Store button at the bottom of the rule panel. You should return to the rule list and get a message from the server confirming the update: info Rule updates stored.

10. Click on the Perform button in the top left-hand corner.

11. Press the „x‟ key and you should both see and hear a cue.


Appendix 7 Introduction to LIMPT (Online Materials)

The following images are part of the online introduction to the system used in workshops (there is also a text-to-speech commentary on each slide); it is also permanently available in the Introduction to LIMPT project (press ‘1’ to start).

Screen 1


Screen 2


Screen 3


Screen 4

Screen 5


Screen 6


Screen 7

Screen 8

Screen 9


Appendix 8 DRHA 10 Workshop Participant Survey
