Informing Artificial Intelligence Generative Techniques Using Cognitive Theories of Human Creativity
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Combinational Creativity and Computational Creativity
Combinational Creativity and Computational Creativity. By Ji HAN, Dyson School of Design Engineering, Imperial College London. A thesis submitted for the degree of Doctor of Philosophy, April 2018.

Originality Declaration: The coding of the Combinator and the Retriever was accomplished in collaboration with Feng SHI, a PhD colleague in the Creativity Design and Innovation Lab at the Dyson School of Design Engineering, Imperial College London, and co-author with me of relevant publications. Except where otherwise stated, this thesis is the result of my own research. This research was conducted in the Dyson School of Design Engineering at Imperial College London, between October 2014 and December 2017. I certify that this thesis has not been submitted in whole or in parts as consideration for any other degree or qualification at this or any other institute of learning. Ji HAN, April 2018.

Copyright Declaration: The copyright of this thesis rests with the author and is made available under a Creative Commons Attribution Non-Commercial No Derivatives licence. Researchers are free to copy, distribute or transmit the thesis on the condition that they attribute it, that they do not use it for commercial purposes and that they do not alter, transform or build upon it. For any reuse or redistribution, researchers must make clear to others the licence terms of this work. Ji HAN, April 2018.

List of Publications: The following journal and conference papers were published during this PhD research. Journal Publications: 1. Han J., Shi F., Chen L., Childs P. R. N., 2018. The Combinator – A computer-based tool for creative idea generation based on a simulation approach.
Low-Rank Tucker Approximation of a Tensor from Streaming Data, by Yiming Sun, Yang Guo, Charlene Luo, Joel Tropp, and Madeleine Udell
SIAM J. MATH. DATA SCI., Vol. 2, No. 4, pp. 1123–1150. © 2020 Society for Industrial and Applied Mathematics. Low-Rank Tucker Approximation of a Tensor from Streaming Data. Yiming Sun, Yang Guo, Charlene Luo, Joel Tropp, and Madeleine Udell.

Abstract. This paper describes a new algorithm for computing a low-Tucker-rank approximation of a tensor. The method applies a randomized linear map to the tensor to obtain a sketch that captures the important directions within each mode, as well as the interactions among the modes. The sketch can be extracted from streaming or distributed data or with a single pass over the tensor, and it uses storage proportional to the degrees of freedom in the output Tucker approximation. The algorithm does not require a second pass over the tensor, although it can exploit another view to compute a superior approximation. The paper provides a rigorous theoretical guarantee on the approximation error. Extensive numerical experiments show that the algorithm produces useful results that improve on the state-of-the-art for streaming Tucker decomposition.

Key words. Tucker decomposition, tensor compression, dimension reduction, sketching method, randomized algorithm, streaming algorithm. AMS subject classifications. 68Q25, 68R10, 68U05. DOI. 10.1137/19M1257718.

1. Introduction. Large-scale datasets with natural tensor (multidimensional array) structure arise in a wide variety of applications, including computer vision [39], neuroscience [9], scientific simulation [3], sensor networks [31], and data mining [21]. In many cases, these tensors are too large to manipulate, to transmit, or even to store in a single machine. Luckily, tensors often exhibit a low-rank structure and can be approximated by a low-rank tensor factorization, such as CANDECOMP/PARAFAC (CP), tensor train, or Tucker factorization [20].
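The basic idea (sketch each mode's unfolding with a random map, orthogonalize to obtain factor matrices, then form a core) can be illustrated with a simplified two-pass randomized HOSVD; the cited paper additionally sketches the core so only one pass over streaming data is needed. A minimal NumPy sketch under those assumptions (function names, toy sizes and the exact-rank sanity check are illustrative, not the paper's implementation):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def randomized_tucker(T, ranks, seed=0):
    """Simplified randomized Tucker approximation: sketch each mode unfolding
    with a Gaussian map, orthogonalize it to get a factor matrix, then form
    the core with a second pass over T. (The paper's algorithm also sketches
    the core, so it needs only one pass and works on streaming data.)"""
    rng = np.random.default_rng(seed)
    factors = []
    for mode, r in enumerate(ranks):
        Tn = unfold(T, mode)                              # (I_n, prod of other dims)
        Y = Tn @ rng.standard_normal((Tn.shape[1], r))    # range sketch of mode n
        Q, _ = np.linalg.qr(Y)                            # orthonormal factor estimate
        factors.append(Q)
    core = np.einsum('ijk,ia,jb,kc->abc', T, *factors)    # core = T x_n Q_n^T (second pass)
    return core, factors

# toy check on a tensor with exact Tucker rank (3, 4, 5)
rng = np.random.default_rng(1)
G = rng.standard_normal((3, 4, 5))
U = [rng.standard_normal((40, 3)), rng.standard_normal((50, 4)), rng.standard_normal((60, 5))]
T = np.einsum('abc,ia,jb,kc->ijk', G, *U)
core, F = randomized_tucker(T, ranks=(3, 4, 5))
T_hat = np.einsum('abc,ia,jb,kc->ijk', core, *F)
print(np.linalg.norm(T - T_hat) / np.linalg.norm(T))      # tiny for this exact-rank toy
```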
Computational Creativity
Computational Creativity. Hannu Toivonen, University of Helsinki. www.cs.helsinki.fi/hannu.toivonen. ECSS, Gothenburg, 9 Oct 2018.

The following video clips, pictures and audio files have been removed from this file to save space: Video: Poemcatcher tests "Brain Poetry" machine at Frankfurt Book Fair, https://www.youtube.com/watch?v=cNnbTQL8jB4; Images produced by Deep Dream, see e.g. https://en.wikipedia.org/wiki/DeepDream; Audio clip: music produced by a programme by Turing and his colleagues.

Remote Associates Test (RAT). What word relates to all of these three words? coin (silver coin), quick (quick silver), spoon (silver spoon). The test measures the ability to discover relationships between remotely associated concepts. It is a (controversial) psychometrical test of creativity and correlates with IQ and originality in brainstorming.

Modeling RATs computationally. A single RAT question is a quadruple of three cue words a, b, c and the target word. A probabilistic approach: find the word w that maximizes p(w | a, b, c), i.e., maximize p(w) p(a | w) p(b | w) p(c | w) (cf. naïve Bayes). Learn word frequencies from a large corpus; use Google 1- and 2-grams to estimate the probabilities (Google n-grams: a large, publicly available collection of word sequences and their probabilities). A lot more could be done, but we want to keep things as simple as possible: creative behavior without an explicit semantic resource (such as WordNet).
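A minimal sketch of this naive-Bayes RAT solver. The unigram and bigram counts below are made up stand-ins for the Google 1- and 2-gram statistics, and only the (candidate, cue) word order is counted, which is a simplification of the slides' setup:

```python
import math
from collections import Counter

# Toy counts standing in for the Google 1- and 2-gram corpus (illustrative only).
unigrams = Counter({"silver": 500, "gold": 400, "plastic": 300})
bigrams = Counter({
    ("silver", "coin"): 50, ("silver", "quick"): 20, ("silver", "spoon"): 30,
    ("gold", "coin"): 60, ("gold", "spoon"): 5,
    ("plastic", "spoon"): 40,
})
total_words = sum(unigrams.values())

def log_p_cue_given_w(cue, w):
    """log p(cue | w), estimated from bigram counts with add-one smoothing."""
    return math.log((bigrams[(w, cue)] + 1) / (unigrams[w] + len(unigrams)))

def solve_rat(cues):
    """Return the candidate word w maximizing p(w) * prod_i p(cue_i | w)."""
    def log_score(w):
        return math.log(unigrams[w] / total_words) + sum(log_p_cue_given_w(c, w) for c in cues)
    return max(unigrams, key=log_score)

print(solve_rat(["coin", "quick", "spoon"]))   # -> silver on this toy data
```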
Polynomial Tensor Sketch for Element-Wise Function of Low-Rank Matrix
Polynomial Tensor Sketch for Element-wise Function of Low-Rank Matrix. Insu Han, Haim Avron, Jinwoo Shin.

Abstract. This paper studies how to sketch element-wise functions of low-rank matrices. Formally, given low-rank matrix A = [A_ij] and scalar non-linear function f, we aim for finding an approximated low-rank representation of the (possibly high-rank) matrix [f(A_ij)]. To this end, we propose an efficient sketching-based algorithm whose complexity is significantly lower than the number of entries of A, i.e., it runs without accessing all entries of [f(A_ij)] explicitly. The main idea underlying our method is to combine a polynomial approximation of f with the existing tensor sketch scheme for approximating monomials of entries of A.

... side, we obtain an o(n^2)-time approximation scheme of f(A)x ≈ T_U T_V^T x for an arbitrary vector x ∈ R^n due to the associative property of matrix multiplication, where the exact computation of f(A)x requires complexity Θ(n^2). The matrix-vector multiplication f(A)x or the low-rank decomposition f(A) ≈ T_U T_V^T is useful in many machine learning algorithms. For example, the Gram matrices of certain kernel functions, e.g., polynomial and radial basis function (RBF), are element-wise matrix functions where the rank is the dimension of the underlying data. Such matrices are the cornerstone of so-called kernel methods, and the ability to multiply the Gram matrix to a vector suffices for most kernel learning. The Sinkhorn-Knopp ...
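The key observation is that the j-th element-wise power of a rank-r matrix U V^T factors through j-fold Khatri-Rao (row-wise Kronecker) powers of U and V, so a polynomial approximation of f gives matrix-vector products with f applied element-wise without ever forming the n x n matrix. A minimal NumPy sketch of that idea, using an explicit Khatri-Rao expansion where the paper compresses those factors with tensor sketches; the function names and the exp/Taylor example are illustrative assumptions:

```python
import math
import numpy as np

def row_khatri_rao(X, Y):
    """Row-wise Kronecker product: row i of the result is kron(X[i], Y[i])."""
    return np.einsum('ip,iq->ipq', X, Y).reshape(X.shape[0], -1)

def elementwise_f_matvec(U, V, x, coeffs):
    """Approximate f(U @ V.T) @ x, with f applied element-wise and
    f(t) ~ sum_j coeffs[j] * t**j, without forming the n x n matrix.
    Uses the fact that the j-th element-wise power of U @ V.T equals
    (j-fold Khatri-Rao power of U) @ (j-fold Khatri-Rao power of V).T."""
    Uj = np.ones((U.shape[0], 1))          # 0-th power factors (all-ones matrix)
    Vj = np.ones((V.shape[0], 1))
    y = np.zeros(U.shape[0])
    for c in coeffs:
        y += c * (Uj @ (Vj.T @ x))         # low-rank matvec for this monomial
        Uj = row_khatri_rao(Uj, U)         # advance to the next power
        Vj = row_khatri_rao(Vj, V)
    return y

# toy check with f = exp approximated by a degree-6 Taylor polynomial
rng = np.random.default_rng(0)
n, r = 500, 3
U = rng.standard_normal((n, r)) / r
V = rng.standard_normal((n, r)) / r
x = rng.standard_normal(n)
coeffs = [1.0 / math.factorial(j) for j in range(7)]
approx = elementwise_f_matvec(U, V, x, coeffs)
exact = np.exp(U @ V.T) @ x                # Theta(n^2) reference computation
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```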
Collaboratively Designing Metrics to Evaluate Creative Machines
ISEA 2020. Measuring Computational Creativity: Collaboratively Designing Metrics to Evaluate Creative Machines. Eunsu Kang, Jean Oh, Robert Twomey.

Welcome! :D We understand we cannot measure every aspect of creativity, but we want to figure out the measurable aspects of creativity. We are here for:
• Collectively brainstorming evaluation metrics and producing a set of metrics
• Creating a map of metrics to help us have a better view on the problem
• Building a community that contributes to this difficult task

Our schedule today is:
• Introduction (20 min)
• Small group discussion: Elements of creative AI (30 min)
• Sharing the first discussion results (15 min)
• Invited speaker panel 1 (35 min)
• Small group discussion 2: Evaluating selected projects (30 min)
• Sharing the second discussion results (15 min)
• Invited speaker panel 2 (35 min)
• Small group discussion 3: Revising metrics, evaluating the second project (30 min)
• Presentation of results and Q&A (30 min)
• 5 min breaks as needed

Panel discussions by invited speakers. Panel 1: Graham Wakefield, Haru Ji, Fabrizio Poltronieri, Allison Parrish, Aaron Hertzmann.
• What are your methods and metrics for evaluating the creative AI system that you and your colleagues have developed?
• How does the study of creative AI systems inform human creative practice?
• How do we attribute (who is responsible for) the creativity in collaborative creative systems?
• Do you think there is a difference between creative AI and human creativity? What would be the difference? If not, why not.
Low-Rank Pairwise Alignment Bilinear Network for Few-Shot Fine-Grained Image Classification
JOURNAL OF LATEX CLASS FILES, VOL. XX, NO. X, JUNE 2019. Low-Rank Pairwise Alignment Bilinear Network For Few-Shot Fine-Grained Image Classification. Huaxi Huang, Student Member, IEEE, Junjie Zhang, Jian Zhang, Senior Member, IEEE, Jingsong Xu, Qiang Wu, Senior Member, IEEE.

Abstract—Deep neural networks have demonstrated advanced abilities on various visual classification tasks, which heavily rely on large-scale training samples with annotated ground-truth. However, it is unrealistic always to require such annotation in real-world applications. Recently, Few-Shot learning (FS), as an attempt to address the shortage of training samples, has made significant progress in generic classification tasks. Nonetheless, it is still challenging for current FS models to distinguish the subtle differences between fine-grained categories given limited training data. To fill the classification gap, in this paper, we address the Few-Shot Fine-Grained (FSFG) classification problem, which focuses on tackling the fine-grained classification under the challenging few-shot learning setting. A novel low-rank pairwise bilinear pooling operation is proposed to capture the nuanced differences between the support and query images for learning an effective distance metric. Moreover, a feature alignment layer is designed to match the support image features with query ones before the comparison.

Fig. 1. An example of general one-shot learning (Left) and fine-grained one-shot learning (Right). For general one-shot learning, it is easy to learn the ... [The figure contrasts general object recognition with a single sample ("easy to recognise") against fine-grained object recognition with a single sample ("a little hard to classify"); example labels omitted.]
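A minimal sketch of the core comparison step, a low-rank pairwise bilinear form between a query and a support feature map. This shows only the pooling idea, not the authors' full network: the alignment layer and classifier are omitted, and all shapes, scales and function names below are illustrative assumptions:

```python
import numpy as np

def low_rank_pairwise_bilinear(query_feat, support_feat, U, V):
    """Compare query and support feature maps with a low-rank bilinear form.
    query_feat, support_feat: (c, m) arrays (c channels, m spatial positions).
    U, V: (c, r) projections, so the bilinear weight W = U @ V.T is never formed.
    Returns an r-dim comparison vector, average-pooled over spatial positions;
    component k is the k-th rank-1 term of q_p^T (U V^T) s_p, averaged over p."""
    zq = U.T @ query_feat            # (r, m)
    zs = V.T @ support_feat          # (r, m)
    return (zq * zs).mean(axis=1)    # (r,)

rng = np.random.default_rng(0)
c, m, r = 64, 36, 8                  # e.g. a 6x6 feature map with 64 channels
U, V = rng.standard_normal((2, c, r)) * 0.1
query = rng.standard_normal((c, m))
support = rng.standard_normal((c, m))
print(low_rank_pairwise_bilinear(query, support, U, V).shape)   # (8,)
```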
Explainable Computational Creativity
Explainable Computational Creativity. Maria Teresa Llano (SensiLab, Monash University, Melbourne, Australia), Mark d'Inverno and Matthew Yee-King (Goldsmiths, University of London, United Kingdom), Jon McCormack and Alon Ilsar (SensiLab, Monash University, Melbourne, Australia), Alison Pease (Dundee University, Dundee, United Kingdom), Simon Colton (SensiLab, Monash University and Queen Mary, University of London).

Abstract. Human collaboration with systems within the Computational Creativity (CC) field is often restricted to shallow interactions, where the creative processes, of systems and humans alike, are carried out in isolation, without any (or little) intervention from the user, and without any discussion about how the unfolding decisions are taking place. Fruitful co-creation requires a sustained ongoing interaction that can include discussions of ideas, comparisons to previous/other works, incremental improvements and revisions, etc. For these interac... for intermediate explanations as the process progresses, let alone place for exchanges of information that can exploit human-machine co-creation.

In this paper we propose Explainable Computational Creativity (XCC) as a subfield of XAI. The focus of this subfield is the study of bidirectional explainable models in the context of computational creativity – where the term explainable is used with a broader sense to cover not only one-shot style explanations, but also for co-creative interventions ...
Fast and Guaranteed Tensor Decomposition Via Sketching
Fast and Guaranteed Tensor Decomposition via Sketching. Yining Wang, Hsiao-Yu Tung, Alex Smola (Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213) and Anima Anandkumar (Department of EECS, University of California Irvine, Irvine, CA 92697).

Abstract. Tensor CANDECOMP/PARAFAC (CP) decomposition has wide applications in statistical learning of latent variable models and in data mining. In this paper, we propose fast and randomized tensor CP decomposition algorithms based on sketching. We build on the idea of count sketches, but introduce many novel ideas which are unique to tensors. We develop novel methods for randomized computation of tensor contractions via FFTs, without explicitly forming the tensors. Such tensor contractions are encountered in decomposition methods such as tensor power iterations and alternating least squares. We also design novel colliding hashes for symmetric tensors to further save time in computing the sketches. We then combine these sketching ideas with existing whitening and tensor power iterative techniques to obtain the fastest algorithm on both sparse and dense tensors. The quality of approximation under our method does not depend on properties such as sparsity, uniformity of elements, etc. We apply the method for topic modeling and obtain competitive results.

Keywords: Tensor CP decomposition, count sketch, randomized methods, spectral methods, topic modeling.

1 Introduction. In many data-rich domains such as computer vision, neuroscience and social networks consisting of multi-modal and multi-relational data, tensors have emerged as a powerful paradigm for handling the data deluge. An important operation with tensor data is its decomposition, where the input tensor is decomposed into a succinct form.
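The central trick of sketching a tensor without forming it can be shown on the simplest case: the count sketch of a rank-1 tensor u ⊗ v ⊗ w equals the circular convolution of the factors' count sketches, which is computable with FFTs. A minimal NumPy sketch of that identity; the hash sizes and the brute-force check are illustrative, not the paper's implementation:

```python
import numpy as np

def count_sketch(x, h, s, m):
    """Count sketch of vector x into m buckets with hash indices h and signs s."""
    out = np.zeros(m)
    np.add.at(out, h, s * x)
    return out

def sketch_rank1_tensor(u, v, w, hashes, signs, m):
    """Count sketch of the rank-1 tensor u ⊗ v ⊗ w without forming it:
    the sketch of an outer product is the circular convolution of the
    factors' sketches, computed here in the Fourier domain."""
    fs = [np.fft.rfft(count_sketch(x, h, s, m))
          for x, h, s in zip((u, v, w), hashes, signs)]
    return np.fft.irfft(fs[0] * fs[1] * fs[2], n=m)

rng = np.random.default_rng(0)
n, m = 30, 4096
u, v, w = rng.standard_normal((3, n))
hashes = [rng.integers(0, m, size=n) for _ in range(3)]
signs = [rng.choice([-1.0, 1.0], size=n) for _ in range(3)]

s_fast = sketch_rank1_tensor(u, v, w, hashes, signs, m)

# brute-force check: sketch the explicit n^3 tensor entry by entry
T = np.einsum('i,j,k->ijk', u, v, w)
s_slow = np.zeros(m)
for i in range(n):
    for j in range(n):
        for k in range(n):
            bucket = (hashes[0][i] + hashes[1][j] + hashes[2][k]) % m
            s_slow[bucket] += signs[0][i] * signs[1][j] * signs[2][k] * T[i, j, k]
print(np.allclose(s_fast, s_slow))   # True
```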
Computational Understanding, Generation and Evaluation of Creative Expressions
Department of Computer Science, Series of Publications A, Report A-2021-4. Computational Understanding, Generation and Evaluation of Creative Expressions. Khalid Alnajjar. Doctoral dissertation, to be presented for public discussion with the permission of the Faculty of Science of the University of Helsinki, in Auditorium B123, Exactum, Pietari Kalmin katu 5, on the 22nd of March, 2021 at 12:00 o'clock. The defence is also open for the audience through remote access. University of Helsinki, Finland.

Supervisor: Hannu Toivonen, University of Helsinki, Finland. Pre-examiners: Josep Blat, Universitat Pompeu Fabra, Spain; Tapio Salakoski, University of Turku, Finland. Opponent: Pablo Gervás, Universidad Complutense de Madrid, Spain. Custos: Hannu Toivonen, University of Helsinki, Finland. Faculty Representative: Teemu Roos, University of Helsinki, Finland.

Contact information: Department of Computer Science, P.O. Box 68 (Pietari Kalmin katu 5), FI-00014 University of Helsinki, Finland. URL: https://cs.helsinki.fi/. Telephone: +358 2941 911.

Copyright © 2021 Khalid Alnajjar. ISSN 1238-8645. ISBN 978-951-51-7145-0 (paperback). ISBN 978-951-51-7146-7 (PDF). Helsinki 2021, Unigrafia.

Computational Understanding, Generation and Evaluation of Creative Expressions. Khalid Alnajjar, Department of Computer Science, P.O. Box 68, FI-00014 University of Helsinki, Finland. khalid.alnajjar@helsinki.fi, https://www.khalidalnajjar.com/, https://www.rootroo.com/. PhD Thesis, Series of Publications A, Report A-2021-4. Helsinki, March 2021, 56 pages. ISSN 1238-8645. ISBN 978-951-51-7145-0 (paperback). ISBN 978-951-51-7146-7 (PDF).

Abstract. Computational creativity has received a good amount of research interest in generating creative artefacts programmatically.
Teaching Computational Creativity Margareta Ackerman1, Ashok Goel2, Colin G
Teaching Computational Creativity. Margareta Ackerman (Computer Engineering Department, Santa Clara University), Ashok Goel (School of Interactive Computing, Georgia Tech), Colin G. Johnson and Anna Jordanous (School of Computing, University of Kent), Carlos León (Department of Software Engineering and Artificial Intelligence, Complutense University of Madrid), Rafael Pérez y Pérez (Depto. de Tecnologías de la Información, Universidad Autónoma Metropolitana), Hannu Toivonen (Department of Computer Science, University of Helsinki), and Dan Ventura (Computer Science Department, Brigham Young University).

Abstract. The increasing popularity of computational creativity (CC) in recent years gives rise to the need for educational resources. This paper presents several modules that together act as a guide for developing new CC courses as well as improving existing curricula. As well as introducing core CC concepts, we address pedagogical approaches to this interdisciplinary subject. An accessible overview of the field lets this paper double as an introductory tutorial to computational creativity.

Introduction. As computational creativity (CC) grows in popularity, it is increasingly being taught in some form as a university-level course.
• Master theoretical and conceptual tools for analysing and discussing creative systems
• Develop and understand one's own creative processes
• Become familiar with the latest advances in CC, and be able to critically discuss current state-of-the-art
• Become able to create CC systems and/or techniques
• Gain the ability to describe, employ and debate methods for evaluation of computational creativity
• Be able to identify appropriate contexts for using CC

In what follows, we offer examples of extant CC systems that can be used as archetypes for introducing key questions in the field; discuss different modeling ...
High-Order Attention Models for Visual Question Answering
High-Order Attention Models for Visual Question Answering. Idan Schwartz (Department of Computer Science, Technion), Alexander G. Schwing (Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign), Tamir Hazan (Department of Industrial Engineering & Management, Technion).

Abstract. The quest for algorithms that enable cognitive abilities is an important part of machine learning. A common trait in many recently investigated cognitive-like tasks is that they take into account different data modalities, such as visual and textual input. In this paper we propose a novel and generally applicable form of attention mechanism that learns high-order correlations between various data modalities. We show that high-order correlations effectively direct the appropriate attention to the relevant elements in the different data modalities that are required to solve the joint task. We demonstrate the effectiveness of our high-order attention mechanism on the task of visual question answering (VQA), where we achieve state-of-the-art performance on the standard VQA dataset.

1 Introduction. The quest for algorithms which enable cognitive abilities is an important part of machine learning and appears in many facets, e.g., in visual question answering tasks [6], image captioning [26], visual question generation [18, 10] and machine comprehension [8]. A common trait in these recent cognitive-like tasks is that they take into account different data modalities, for example, visual and textual data. To address these tasks, recently, attention mechanisms have emerged as a powerful common theme, which provides not only some form of interpretability if applied to deep net models, but also often improves performance [8].
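The notion of pairwise (second-order) attention potentials between two modalities can be illustrated in miniature: couple every image region with every question word, then marginalize the coupling to get per-modality attention. The paper's model additionally uses unary and ternary potentials and learned combinations, so the sketch below, with its shapes, projections and mean-marginalization, is an illustrative assumption rather than the published architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pairwise_attention(img_feats, q_feats, Wi, Wq):
    """Second-order (pairwise) attention between two modalities.
    img_feats: (k, d) region features; q_feats: (n, d) word features.
    Wi, Wq: (d, h) projections. The pairwise potential couples every region
    with every word; marginalizing it gives attention for each modality."""
    corr = (img_feats @ Wi) @ (q_feats @ Wq).T     # (k, n) pairwise potentials
    img_attn = softmax(corr.mean(axis=1))          # attention over regions
    q_attn = softmax(corr.mean(axis=0))            # attention over words
    attended_img = img_attn @ img_feats            # (d,) attended image summary
    attended_q = q_attn @ q_feats                  # (d,) attended question summary
    return attended_img, attended_q

rng = np.random.default_rng(0)
k, n, d, h = 14, 9, 32, 16
out = pairwise_attention(rng.standard_normal((k, d)), rng.standard_normal((n, d)),
                         rng.standard_normal((d, h)), rng.standard_normal((d, h)))
print(out[0].shape, out[1].shape)   # (32,) (32,)
```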
Visual Attribute Transfer Through Deep Image Analogy
Visual Attribute Transfer through Deep Image Analogy. Jing Liao (Microsoft Research), Yuan Yao (Shanghai Jiao Tong University), Lu Yuan, Gang Hua, and Sing Bing Kang (Microsoft Research).

Figure 1: Our technique allows us to establish semantically-meaningful dense correspondences between two input images A and B′. A′ and B are the reconstructed results subsequent to transfer of visual attributes. [Panels: A (input), A′ (output), B (output), B′ (input).]

Abstract. We propose a new technique for visual attribute transfer across images that may have very different appearance but have perceptually similar semantic structure. By visual attribute transfer, we mean transfer of visual information (such as color, tone, texture, and style) from one image to another. For example, one image could be that of a painting or a sketch while the other is a photo of a real scene, and both depict the same type of scene. Our technique finds semantically-meaningful dense correspondences between two input images. To accomplish this, it adapts the notion of "image analogy" [Hertzmann et al. ...]

The applications of color transfer, texture transfer, and style transfer share a common theme, which we characterize as visual attribute transfer. In other words, visual attribute transfer refers to copying visual attributes of one image, such as color, texture, and style, to another image. In this paper, we describe a new technique for visual attribute transfer for a pair of images that may be very different in appearance but semantically similar. In the context of images, by "semantic", we refer to high-level visual content involving identifiable objects. We deem two different images to be semantically similar if both are of the same type of scene containing objects with the same names or ...
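The dense-correspondence step can be illustrated in miniature with brute-force nearest-neighbor matching between two feature maps. The actual method uses a coarse-to-fine PatchMatch over deep CNN features with bidirectional constraints, so everything below (shapes, cosine similarity, function name, random toy inputs) is an illustrative assumption rather than the authors' algorithm:

```python
import numpy as np

def nearest_neighbor_field(feat_a, feat_b):
    """Brute-force dense correspondence between two feature maps.
    feat_a: (h, w, c), feat_b: (h2, w2, c) feature maps (e.g. from a CNN layer).
    For every position in A, return the (row, col) of the most similar
    position in B under cosine similarity of the feature vectors."""
    h, w, c = feat_a.shape
    A = feat_a.reshape(-1, c)
    B = feat_b.reshape(-1, c)
    A = A / (np.linalg.norm(A, axis=1, keepdims=True) + 1e-8)
    B = B / (np.linalg.norm(B, axis=1, keepdims=True) + 1e-8)
    idx = (A @ B.T).argmax(axis=1)                      # best match in B per position in A
    rows, cols = np.unravel_index(idx, feat_b.shape[:2])
    return np.stack([rows, cols], axis=-1).reshape(h, w, 2)

# toy usage: map each position of a random 8x8x64 "feature map" to a 10x10x64 one
rng = np.random.default_rng(0)
nnf = nearest_neighbor_field(rng.standard_normal((8, 8, 64)),
                             rng.standard_normal((10, 10, 64)))
print(nnf.shape)   # (8, 8, 2)
```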