AI Song Contest: Human-AI Co-Creation in Songwriting

Cheng-Zhi Anna Huang (Google Brain), Hendrik Vincent Koops (RTL Netherlands), Ed Newton-Rex (ByteDance), Monica Dinculescu (Google Brain), Carrie J. Cai (Google Brain)

arXiv:2010.05388v1 [cs.SD] 12 Oct 2020

© Cheng-Zhi Anna Huang, Hendrik Vincent Koops, Ed Newton-Rex, Monica Dinculescu, Carrie J. Cai. Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Attribution: Cheng-Zhi Anna Huang, Hendrik Vincent Koops, Ed Newton-Rex, Monica Dinculescu, Carrie J. Cai. "AI Song Contest: Human-AI Co-Creation in Songwriting", 21st International Society for Music Information Retrieval Conference, Montréal, Canada, 2020.

ABSTRACT

Machine learning is challenging the way we make music. Although research in deep generative models has dramatically improved the capability and fluency of music models, recent work has shown that it can be challenging for humans to partner with this new class of algorithms. In this paper, we present findings on what 13 musician/developer teams, a total of 61 users, needed when co-creating a song with AI, the challenges they faced, and how they leveraged and repurposed existing characteristics of AI to overcome some of these challenges. Many teams adopted modular approaches, such as independently running multiple smaller models that align with the musical building blocks of a song, before re-combining their results. As ML models are not easily steerable, teams also generated massive numbers of samples and curated them post-hoc, used a range of strategies to direct the generation, or algorithmically ranked the samples. Ultimately, teams not only had to manage the "flare and focus" aspects of the creative process, but also had to juggle them with a parallel process of exploring and curating multiple ML models and outputs. These findings reflect a need to design machine learning-powered music interfaces that are more decomposable, steerable, interpretable, and adaptive, which in turn will enable artists to more effectively explore how AI can extend their personal expression.

1. INTRODUCTION

Songwriters are increasingly experimenting with machine learning as a way to extend their personal expression [12]. For example, in the symbolic domain, the band Yacht used MusicVAE [60], a variational-autoencoder type of neural network, to find melodies hidden in between songs by interpolating between the musical lines in their back catalog, commenting that "it took risks maybe we aren't willing to" [47]. In the audio domain, Holly Herndon uses neural networks trained on her own voice to produce a polyphonic choir [15, 46]. In large part, these human-AI experiences were enabled by major advances in machine learning and deep generative models [18, 66, 68], many of which can now generate coherent pieces of long-form music [17, 30, 39, 55].
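For readers unfamiliar with the interpolation workflow mentioned above, the sketch below shows roughly how a back-catalog interpolation like Yacht's can be set up with a pre-trained MusicVAE checkpoint. It is a minimal illustration, not the band's actual pipeline; the Magenta music_vae API calls, checkpoint name, sequence length, and file paths are assumptions that may differ across library versions.

```python
# Minimal sketch of latent-space melody interpolation with a pre-trained
# MusicVAE model. Checkpoint name, paths, and API details are assumptions.
import note_seq
from magenta.models.music_vae import TrainedModel, configs

model = TrainedModel(
    configs.CONFIG_MAP['cat-mel_2bar_big'],   # 2-bar melody config (assumed available)
    batch_size=4,
    checkpoint_dir_or_path='/path/to/cat-mel_2bar_big.ckpt')

# Two melodies from the back catalog, loaded as NoteSequence protos.
melody_a = note_seq.midi_file_to_note_sequence('riff_from_song_a.mid')
melody_b = note_seq.midi_file_to_note_sequence('riff_from_song_b.mid')

# Decode evenly spaced points between the two latent codes; the intermediate
# sequences are the "melodies hidden in between" the two songs.
in_between = model.interpolate(
    melody_a, melody_b,
    num_steps=7,
    length=32)  # assumed: 2 bars at 16 steps per bar

for i, seq in enumerate(in_between):
    note_seq.sequence_proto_to_midi_file(seq, f'interpolation_{i}.mid')
```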
Although substantial research has focused on improving the algorithmic performance of these models, much less is known about what musicians actually need when songwriting with these sophisticated models. Even when composing short, two-bar counterpoints, it can be challenging for novice musicians to partner with a deep generative model: users desire greater agency, control, and sense of authorship vis-a-vis the AI during co-creation [45].

Recently, the dramatic diversification and proliferation of these models have opened up the possibility of leveraging a much wider range of model options, for the potential creation of more complex, multi-domain, and longer-form musical pieces. Beyond using a single model trained on music within a single genre, how might humans co-create with an open-ended set of deep generative models, in a complex task setting such as songwriting?

In this paper, we conduct an in-depth study of what people need when partnering with AI to make a song. Aside from the broad appeal and universal nature of songs, songwriting is a particularly compelling lens through which to study human-AI co-creation, because it typically involves creating and interleaving music in multiple mediums, including text (lyrics), music (melody, harmony, etc.), and audio. This unique conglomeration of moving parts introduces unique challenges surrounding human-AI co-creation that are worthy of deeper investigation.

As a probe to understand and identify human-AI co-creation needs, we conducted a survey during a large-scale human-AI songwriting contest, in which 13 teams (61 participants) with mixed (musician-developer) skill sets were invited to create a 3-minute song, using whatever AI algorithms they preferred. Through an in-depth analysis of survey results, we present findings on what users needed when co-creating a song using AI, what challenges they faced when songwriting with AI, and what strategies they leveraged to overcome some of these challenges.

We discovered that, rather than using large, end-to-end models, teams almost always resorted to breaking down their musical goals into smaller components, leveraging a wide combination of smaller generative models and re-combining them in complex ways to achieve their creative goals. Although some teams engaged in active co-creation with the model, many leveraged a more extreme, multi-stage approach of first generating a voluminous quantity of musical snippets from models, before painstakingly curating them manually post-hoc. Ultimately, use of AI substantially changed how users iterate during the creative process, imposing a myriad of additional model-centric iteration loops and side tasks that needed to be executed alongside the creative process. Finally, we contribute recommendations for future AI music techniques to better place them in the music co-creativity context.
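As a concrete illustration of the multi-stage pattern described above, the sketch below generates a large batch of candidate clips, ranks them with a simple heuristic, and keeps a short list for human curation. The model interface and the scoring heuristic are hypothetical stand-ins, not the workflow of any particular team.

```python
# Sketch of the "generate a lot, curate post-hoc" workflow. `model.sample()`
# and `clip.pitches` are hypothetical stand-ins for whatever generative model
# and symbolic representation a team actually uses.
from typing import List, Sequence

MAJOR_SCALE = (0, 2, 4, 5, 7, 9, 11)

def in_key_ratio(pitches: Sequence[int], scale=MAJOR_SCALE) -> float:
    """Toy ranking heuristic: fraction of notes that stay in a C-major scale."""
    if not pitches:
        return 0.0
    return sum(1 for p in pitches if p % 12 in scale) / len(pitches)

def generate_and_curate(model, num_samples: int = 500, keep: int = 10) -> List:
    # "Flare": sample far more material than the song will ever need.
    candidates = [model.sample() for _ in range(num_samples)]
    # "Focus": rank algorithmically, then hand the short list to the musicians.
    ranked = sorted(candidates, key=lambda c: in_key_ratio(c.pitches), reverse=True)
    return ranked[:keep]
```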
In sum, this paper makes the following contributions:
• A description of common patterns these teams used when songwriting with a diverse, open-ended set of deep generative models.
• An analysis of the key challenges people faced when attempting to express their songwriting goals through AI, and the strategies they used in an attempt to circumvent these AI limitations.
• Implications and recommendations for how to better design human-AI systems to empower users when songwriting with AI.

2. RELATED WORK

Recent advances in AI, especially in deep generative models, have renewed interest in how AI can support mixed-initiative creative interfaces [16] to fuel human-AI co-creation [28]. Mixed initiative [33] means designing interfaces where a human and an AI system can each "take the initiative" in making decisions. Co-creative [44] in this context means humans and AI working in partnership to produce novel content. For example, an AI might add to a user's drawing [21, 50], alternate writing sentences of a story with a human [14], or auto-complete missing parts of a user's music composition [4, 34, 38, 45].

Within the music domain, there has been a long history of using AI techniques to model music composition [23, 31, 53, 54], by assisting in composing counterpoint [22, 32], harmonizing melodies [13, 43, 51], more general infilling [29, 35, 41, 61], exploring more adventurous chord progressions [27, 37, 49], adding semantic controls to music and sound generation [11, 24, 36, 63], building new instruments through custom mappings or unsupervised learning [20, 25], and enabling experimentation in all facets of symbolic and acoustic manipulation of the musical […] to steer the AI and express their creative goals. Recent work has also found that users desire to retain a certain level of creative freedom when composing music with AI [26, 45, 65], and that semantically meaningful controls can significantly increase human sense of agency and creative authorship when co-creating with AI [45]. While much of prior work examines human needs in the context of co-creating with a single tool, we expand on this emerging body of literature by investigating how people assemble a broad and open-ended set of real-world models, data sets, and technology when songwriting with AI.

3. METHOD AND PARTICIPANTS

The research was conducted during the first three months of 2020, at the AI Song Contest organized by VPRO [67]. The contest was announced at ISMIR in November 2019, with an open call for participation. Teams were invited to create songs using any artificial intelligence technology of their choice. The songs were required to be under 3 minutes long, with the final output being an audio file of the song. At the end, participants reflected on their experience by completing a survey. Researchers obtained consent from teams to use the survey data in publications. The survey consisted of questions to probe how teams used AI in their creative process:
• How did teams decide which aspects of the song to use AI for, and which to have musicians compose by hand? What were the trade-offs?
• How did teams develop their AI system? How did teams incorporate their AI system into their workflow, and its generated material into their song?

In total, 13 teams (61 people) participated in the study. The teams ranged in size from 2 to 15 (median = 4). Nearly three-fourths of the teams had 1 to 2 experienced musicians. A majority of teams had members with a dual background in music and technology: 5 teams had 3 to 6 such members each, and 3 teams had 1 to 2. We conducted an inductive thematic analysis [2, 5, 6] on the survey results to identify and better understand patterns and themes in the teams' survey responses. One researcher reviewed the survey data, identifying important sections of text, and two researchers collaboratively examined relationships between these sections, iteratively converging on a set of themes.