ICML-2021-Paper-Digests.pdf (PDF, 16 pages, 1,020 KB)
Recommended publications
-
Is Shuma the Chinese Analog of Soma/Haoma? A Study of Early Contacts Between Indo-Iranians and Chinese
SINO-PLATONIC PAPERS, Number 216, October 2011: "Is Shuma the Chinese Analog of Soma/Haoma? A Study of Early Contacts between Indo-Iranians and Chinese," by ZHANG He. Victor H. Mair, Editor. Sino-Platonic Papers, Department of East Asian Languages and Civilizations, University of Pennsylvania, Philadelphia, PA 19104-6305 USA. [email protected]; www.sino-platonic.org. Founded 1986. Editor-in-Chief: Victor H. Mair. Associate Editors: Paula Roberts, Mark Swofford. ISSN 2157-9679 (print), 2157-9687 (online).

SINO-PLATONIC PAPERS is an occasional series dedicated to making available to specialists and the interested public the results of research that, because of its unconventional or controversial nature, might otherwise go unpublished. The editor-in-chief actively encourages younger, not yet well established, scholars and independent authors to submit manuscripts for consideration. Contributions in any of the major scholarly languages of the world, including romanized modern standard Mandarin (MSM) and Japanese, are acceptable. In special circumstances, papers written in one of the Sinitic topolects (fangyan) may be considered for publication.

Although the chief focus of Sino-Platonic Papers is on the intercultural relations of China with other peoples, challenging and creative studies on a wide variety of philological subjects will be entertained. This series is not the place for safe, sober, and stodgy presentations. Sino-Platonic Papers prefers lively work that, while taking reasonable risks to advance the field, capitalizes on brilliant new insights into the development of civilization. Submissions are regularly sent out to be refereed, and extensive editorial suggestions for revision may be offered. Sino-Platonic Papers emphasizes substance over form. -
Final Program of CCC2020
The 39th Chinese Control Conference (CCC2020): Final Program. Sponsors: Technical Committee on Control Theory, Chinese Association of Automation; Chinese Association of Automation; Systems Engineering Society of China. Host: Northeastern University, China. July 27-29, 2020, Shenyang, China. Proceedings of CCC2020: IEEE Catalog Number CFP2040A-USB, ISBN 978-988-15639-9-6. Copyright and reprint permission: this material is permitted for personal use; for any other copying, reprint, republication, or redistribution permission, please contact the TCCT Secretariat, No. 55 Zhongguancun East Road, Beijing 100190, P. R. China. All rights reserved. Copyright 2020 by TCCT.

Contents: Welcome Address; Conference Committees; Important Information; Instruction for Oral and Poster Presentations; Plenary Lectures; … -
Adaptive Sparse Representation of Continuous Input for Tsetlin Machines Based on Stochastic Searching on the Line
Electronics (journal article). Adaptive Sparse Representation of Continuous Input for Tsetlin Machines Based on Stochastic Searching on the Line. Kuruge Darshana Abeyrathna *, Ole-Christoffer Granmo and Morten Goodwin. Centre for Artificial Intelligence Research, University of Agder, 4870 Grimstad, Norway; [email protected] (O.-C.G.); [email protected] (M.G.). * Correspondence: [email protected]

Abstract: This paper introduces a novel approach to representing continuous inputs in Tsetlin Machines (TMs). Instead of using one Tsetlin Automaton (TA) for every unique threshold found when Booleanizing continuous input, we employ two Stochastic Searching on the Line (SSL) automata to learn discriminative lower and upper bounds. The two resulting Boolean features are adapted to the rest of the clause by equipping each clause with its own team of SSLs, which update the bounds during the learning process. Two standard TAs finally decide whether to include the resulting features as part of the clause. In this way, only four automata altogether represent one continuous feature (instead of potentially hundreds of them). We evaluate the performance of the new scheme empirically using five datasets, along with a study of interpretability. On average, TMs with SSL feature representation use 4.3 times fewer literals than the TM with static threshold-based features. Furthermore, in terms of average memory usage and F1-score, our approach outperforms simple Multi-Layered Artificial Neural Networks, Decision Trees, Support Vector Machines, K-Nearest Neighbor, Random Forest, Gradient Boosted Trees (XGBoost), and Explainable Boosting Machines (EBMs), as well as the standard and real-value weighted TMs. Our approach further outperforms Neural Additive Models on Fraud Detection and StructureBoost on CA-58 in terms of the Area Under … -
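The bound-learning idea in the abstract above lends itself to a compact sketch. The following is a minimal illustration, not the authors' implementation: two Stochastic Searching on the Line automata track a lower and an upper bound on a [0, 1] feature, and the two Boolean features are derived from those bounds. The class and function names are invented, and the feedback here is deterministic rather than the stochastic reward scheme a real TM uses.

```python
class SSLAutomaton:
    """Stochastic Searching on the Line, simplified: a pointer on a
    discretized [0, 1] line that moves one step in the direction the
    environment suggests, converging toward an unknown target value."""

    def __init__(self, resolution=100):
        self.resolution = resolution
        self.location = resolution // 2  # start mid-line

    def update(self, suggest_up: bool):
        # Move one discrete step in the suggested direction, clamped to the line.
        if suggest_up:
            self.location = min(self.location + 1, self.resolution)
        else:
            self.location = max(self.location - 1, 0)

    @property
    def value(self) -> float:
        return self.location / self.resolution


def booleanize(x: float, lower: SSLAutomaton, upper: SSLAutomaton):
    """Two Boolean features per continuous input, as in the abstract:
    'x is above the learned lower bound' and 'x is below the upper bound'."""
    return (x >= lower.value, x <= upper.value)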
A Complete Bibliography of Publications in the Proceedings of the VLDB Endowment
A Complete Bibliography of Publications in the Proceedings of the VLDB Endowment. Nelson H. F. Beebe, University of Utah, Department of Mathematics, 110 LCB, 155 S 1400 E RM 233, Salt Lake City, UT 84112-0090, USA. Tel: +1 801 581 5254; FAX: +1 801 581 4148; E-mail: [email protected], [email protected], [email protected] (Internet); WWW URL: http://www.math.utah.edu/~beebe/. 13 April 2021, Version 1.59.

[Title-word cross-reference index: the keyword-to-entry listing did not survive extraction.] -
Coalesced Multi-Output Tsetlin Machines with Clause Sharing∗
Coalesced Multi-Output Tsetlin Machines with Clause Sharing∗

Sondre Glimsdal† and Ole-Christoffer Granmo‡

August 18, 2021

Abstract: Using finite-state machines to learn patterns, Tsetlin machines (TMs) have obtained competitive accuracy and learning speed across several benchmarks, with a frugal memory and energy footprint. A TM represents patterns as conjunctive clauses in propositional logic (AND-rules), each clause voting for or against a particular output. While efficient for single-output problems, one needs a separate TM per output for multi-output problems. Employing multiple TMs hinders pattern reuse because each TM then operates in a silo. In this paper, we introduce clause sharing, merging multiple TMs into a single one. Each clause is related to each output by using a weight. A positive weight makes the clause vote for output 1, while a negative weight makes the clause vote for output 0. The clauses thus coalesce to produce multiple outputs. The resulting coalesced Tsetlin machine (CoTM) simultaneously learns both the weights and the composition of each clause by employing interacting Stochastic Searching on the Line (SSL) and Tsetlin automata (TA) teams. Our empirical results on MNIST, Fashion-MNIST, and Kuzushiji-MNIST show that CoTM obtains significantly higher accuracy than TM on 50- to 1K-clause configurations, indicating an ability to repurpose clauses. E.g., accuracy goes from 71.99% to 89.66% on Fashion-MNIST when employing 50 clauses per class (22 Kb memory). While TM and CoTM accuracy is similar when using more than 1K clauses per class, CoTM reaches peak accuracy 3× faster on MNIST with 8K clauses. -
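The weighted clause-sharing scheme described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: the clause outputs are taken as given, and a signed weight matrix turns them into one vote sum per output, with a simple "sum >= 0" rule standing in for the TM's thresholded voting.

```python
def coalesced_vote(clause_outputs, weights):
    """Multi-output voting with shared clauses.

    clause_outputs: 0/1 list, one entry per clause (1 = clause fires).
    weights: list of per-clause rows, one signed integer per output.
    A positive weight makes a firing clause vote for output 1,
    a negative weight makes it vote for output 0."""
    n_outputs = len(weights[0])
    votes = [0] * n_outputs
    for fired, row in zip(clause_outputs, weights):
        if fired:
            for j, w in enumerate(row):
                votes[j] += w
    # Each output is decided independently from its signed vote sum.
    return [1 if v >= 0 else 0 for v in votes]


# Three shared clauses, two outputs: clauses 0 and 2 fire.
weights = [[3, -2],
           [5, 4],
           [-1, 2]]
decision = coalesced_vote([1, 0, 1], weights)  # vote sums: [2, 0]
```

Because every clause carries a weight for every output, one clause can simultaneously support one output and oppose another, which is the pattern-reuse effect the abstract attributes to CoTM.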
Explainable Tsetlin Machine framework for fake news detection with credibility score assessment

Bimal Bhattarai, Ole-Christoffer Granmo, Lei Jiao (University of Agder); [email protected], [email protected], [email protected]

arXiv:2105.09114v1 [cs.CL] 19 May 2021

Abstract: The proliferation of fake news, i.e., news intentionally spread for misinformation, poses a threat to individuals and society. Despite various fact-checking websites such as PolitiFact, robust detection techniques are required to deal with the increase in fake news. Several deep learning models show promising results for fake news classification; however, their black-box nature makes it difficult to explain their classification decisions and quality-assure the models. We here address this problem by proposing a novel interpretable fake news detection framework based on the recently introduced Tsetlin Machine (TM). In brief, we utilize the conjunctive clauses of the TM to capture lexical and semantic properties […] decomposes into meaningful words and their negations.

[…] particularly problematic as they seek to deceive people for political and financial gain (Gottfried and Shearer, 2016). In recent years, we have witnessed extensive growth of fake news in social media, spread across news blogs, Twitter, and other social platforms. At present, most online misinformation is manually written (Vargo et al., 2018). However, natural language models like GPT-3 enable automatic generation of realistic-looking fake news, which may accelerate future growth. Such growth is problematic as most people nowadays digest news stories from social media and news blogs (Allcott and Gentzkow, 2017). Indeed, the spread of fake news poses a severe threat to journalism, individuals, and society. […] difficult to detect fake news based on linguistic con- […] -
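To see why conjunctive clauses are interpretable, consider a toy version of a TM clause over word presence and absence. The rule and vocabulary below are invented for illustration and are not learned clauses from the paper.

```python
def clause_matches(text, must_contain, must_not_contain):
    """A Tsetlin machine clause is an AND of literals: selected words
    that must be present, and selected words that must be absent
    (negated literals). The whole clause reads as a plain rule."""
    words = set(text.lower().split())
    return (all(w in words for w in must_contain)
            and all(w not in words for w in must_not_contain))


# Hypothetical clause voting for the "fake" class:
# "shocking" AND "miracle" AND NOT "reuters"
rule = (["shocking", "miracle"], ["reuters"])
```

Unlike a neural network's weights, such a clause can be shown directly to a fact-checker as the reason for a classification, which is the explainability argument the snippet makes.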
DETAILED PROGRAM Last Updated 27-Nov-2020
DETAILED PROGRAM (last updated 27-Nov-2020). ALL TIMES in AEDT (UTC+11). Please note in the conversion table below that some of the listed times fall on the previous day.

AEDT          | 7 AM | 8 AM  | 9 AM  | 10 AM | 11 AM | 7 PM | 8 PM  | 9 PM  | 10 PM | 11 PM
GMT           | 9 PM | 10 PM | 11 PM | 12 AM | 1 AM  | 9 AM | 10 AM | 11 AM | 12 PM | 1 PM
Beijing       | 4 AM | 5 AM  | 6 AM  | 7 AM  | 8 AM  | 4 PM | 5 PM  | 6 PM  | 7 PM  | 8 PM
Paris         | 9 PM | 10 PM | 11 PM | 12 AM | 1 AM  | 9 AM | 10 AM | 11 AM | 12 PM | 1 PM
London        | 8 PM | 9 PM  | 10 PM | 11 PM | 12 AM | 8 AM | 9 AM  | 10 AM | 11 AM | 12 PM
Mexico        | 3 PM | 4 PM  | 5 PM  | 6 PM  | 7 PM  | 3 AM | 4 AM  | 5 AM  | 6 AM  | 7 AM
New York      | 4 PM | 5 PM  | 6 PM  | 7 PM  | 8 PM  | 4 AM | 5 AM  | 6 AM  | 7 AM  | 8 AM
San Francisco | 1 PM | 2 PM  | 3 PM  | 4 PM  | 5 PM  | 1 AM | 2 AM  | 3 AM  | 4 AM  | 5 AM

Tuesday, December 1, 7:00 AM-10:06 AM:
Tutorial: Deep Learning 1.0 and Beyond. Instructor: Truyen Tran. Room 1.
Tutorial: Artificial Intelligence-based Uncertainty Quantification: Importance, Challenges and Solutions. Instructors: Abbas Khosravi and Saeid Nahavandi. Room 2.
Tutorial: Handling Data Streams in Continual and Rapidly Changing Environments. Instructor: Mahardhika Pratama. Room 3.
Special Session: IEEE-CIS 2nd Technical Challenge on Energy Prediction from Smart Meter Data.
7:01 AM: Welcome (Luis) – 5 minutes.
7:05 AM: Introduction to the 2nd Challenge and summary of results (Isaac) – 15 minutes.
7:20 AM: Presentations – 12 min presentation + questions (5 min) + 3 min switch to next speaker. -
ICM Problem F Contest Results
2020 Interdisciplinary Contest in Modeling® Press Release, April 24, 2020

COMAP is pleased to announce the results of the 22nd annual Interdisciplinary Contest in Modeling (ICM). This year 7203 teams representing institutions from sixteen countries/regions participated in the contest. Eighteen teams were designated as OUTSTANDING, representing the following schools:

Beijing Forestry University, China
Beijing Normal University, China
Brown University, RI, USA (Rachel Carson Award)
Capital University of Economics and Business, China
Dongbei University of Finance and Economics, China
Donghua University, China
Nanjing University of Posts & Telecommunications, China
New York University, NY, USA
Shanghai Jiao Tong University, China (INFORMS Award)
Shanghai University of Finance and Economics, China
South China University of Technology, China
University of Electronic Science and Technology of China, China

[…] that limit the ability to reach this safe level and ways to more equitably distribute causes and effects. The F Problem asked students to create an international policy that would address the resettlement needs of environmentally displaced persons (those whose homelands are lost to climate change) while also preserving the cultural heritage of those peoples. This problem required students to evaluate cultural significance as well as understand geopolitical issues surrounding refugees. For all three problems, teams used pertinent data and grappled with how phenomena internal and external to the system needed to be considered and measured. The student teams produced creative and relevant solutions to these complex questions and built models to handle the tasks assigned in the problems. The problems also required data analysis, creative modeling, and scientific methodology, along with effective writing and visualization to communicate their teams' results in a 20-page report.

A selection of the Outstanding solution papers will be featured in The UMAP Journal, along with commentaries from the problem authors and judges. -
Text Classification with Noisy Class Labels
Text Classification with Noisy Class Labels

By Andrea Pagotto. A thesis proposal submitted to the Faculty of Graduate and Postdoctoral Affairs in partial fulfilment of the requirements for the degree of Master of Computer Science. Ottawa-Carleton Institute for Computer Science, School of Computer Science, Carleton University, Ottawa, Ontario, June 2020. © Copyright 2020, Andrea Pagotto.

The undersigned hereby recommend to the Faculty of Graduate and Postdoctoral Affairs acceptance of the thesis, "Text Classification with Noisy Class Labels," submitted by Andrea Pagotto. Dr. Michel Barbeau (Director, School of Computer Science); Dr. B. John Oommen (Thesis Supervisor). Carleton University, June 2020.

ABSTRACT

Text classification is a sub-field of Natural Language Processing (NLP) that involves mapping an input text document to an output topic or label. There are numerous practical applications for text classification, including email spam detection, identifying sentiments in social media, news topic detection, and many more. Due to the importance of the field, there has been much related research in recent years. Our work in this thesis specifically focuses on the problem of text classification in the setting of a Random Environment. The Random Environment in this application would be noise in the labels of the training data. Label noise is an important issue in classification, with many potential negative consequences, such as decreasing the accuracy of the predictions and increasing the complexity of the trained models. Designing learning algorithms that help maximize a desired performance measure in such noisy settings is very valuable for achieving success on real-world data. This thesis also investigates a recently proposed classification method that involves the use of Learning Automata (LA) for text classification. -
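Label noise of the kind the thesis studies is easy to simulate. The sketch below is illustrative only (the function name is our own): it corrupts a training-label vector by replacing each label, with a given probability, by a uniformly chosen different class, which is the standard uniform label-flip noise model.

```python
import random

def add_label_noise(labels, flip_prob, n_classes=2, seed=0):
    """Simulate a noisy annotator: each label is replaced, with
    probability flip_prob, by a different class chosen uniformly."""
    rng = random.Random(seed)  # seeded for reproducible experiments
    noisy = []
    for y in labels:
        if rng.random() < flip_prob:
            y = rng.choice([c for c in range(n_classes) if c != y])
        noisy.append(y)
    return noisy
```

Training a classifier on `add_label_noise(labels, p)` for increasing `p`, and plotting test accuracy against `p`, is the usual way to measure how robust a learning algorithm is to the noisy settings the abstract describes.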
Question Classification Using Interpretable Tsetlin Machine
Question Classification using Interpretable Tsetlin Machine

Dragoș Constantin Nicolae, [email protected], Research Institute for Artificial Intelligence "Mihai Drăgănescu", Bucharest, Romania

ABSTRACT: Question Answering (QA) is one of the hottest research topics in Natural Language Processing (NLP) as well as Information Retrieval (IR). Among the various domains of QA, Question Classification (QC) is a very important system that classifies a question based on the type of answer expected from it. Generalization is a very important factor in NLP, specifically in QA and subsequently in QC. There are numerous models for the classification of types of questions. Despite their good performance, they lack the interpretability that shows us how the model classifies the output. Hence, in this paper, we propose a Tsetlin Machine-based QC system that exhibits interpretability while retaining state-of-the-art performance. Our model is validated by comparing it with other interpretable machine learning […]

In machine learning approaches [9], feature extraction plays a vital role in accomplishing the target accuracy. Feature extraction is done using various lexical and syntactic features and parts of speech. Most machine learning algorithms are powerful enough to obtain good accuracy on QC data [17]. However, there always exists a limitation of interpretation in the model. Decision Trees, despite being somewhat interpretable, become difficult for a human to extract meaning from once the tree is complex. Similarly, deep neural networks, a very powerful tool with impressive performance, are still criticized for being black-box in nature [3]. Interpretability is a huge topic of interest in the recent machine learning domain. -
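For a feel of what an interpretable question classifier looks like, here is a toy keyword-rule baseline over TREC-style coarse answer types. The rules are hand-written for illustration and are not the paper's learned Tsetlin Machine clauses.

```python
def classify_question(question):
    """Toy interpretable question classifier: each rule is a readable
    condition on the question's wh-word, mimicking the kind of
    transparent clauses the paper argues for (TREC-style labels)."""
    q = question.lower()
    rules = [
        ("who", "HUMAN"),
        ("where", "LOCATION"),
        ("when", "NUMERIC"),      # dates and times count as numeric
        ("how many", "NUMERIC"),
        ("what", "ENTITY"),
    ]
    for trigger, label in rules:
        if trigger in q:          # first matching rule wins
            return label
    return "DESCRIPTION"          # fallback class
```

Every prediction here can be justified by pointing at the single rule that fired; a learned Tsetlin Machine classifier offers the same kind of justification, but with clauses discovered from data rather than written by hand.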
2019 IEEE PES Innovative Smart Grid Technologies Asia ISGT ASIA 2019
2019 IEEE PES Innovative Smart Grid Technologies Asia (ISGT ASIA 2019): Conference Program. May 21-24, 2019, Chengdu, China.

INTERNATIONAL ADVISORY BOARD / ISGT 2019 ORGANIZING COMMITTEE / ISGT 2019 TECHNICAL COMMITTEE (alphabetical order of the last name):

Abhisek Ukil, The University of Auckland
Ahmad Zahedi, James Cook University
Ali Alouani, Tennessee Technology University
Amit Kumar, B T K I T DWARAHAT
Anan Zhang, Southwest Petroleum University
Arsalan Habib Khawaja, National University of Science and Technology
Ashkan Yousefi, University of California, Berkeley
Babar Hussain, PIEAS
Baorong Zhou, China Southern Power Grid
Binbin Li, Harbin Institute of Technology
Biyun Chen, Guangxi Key Laboratory of Power System Optimization and Energy Technology (Guangxi University)
Bo Hu, Chongqing University
Can Hu, State Grid Sichuan Company
Can Huang, Lawrence Livermore National Laboratory
Hui Ma, The University of Queensland
Huifen Zhang, University of Jinan
Jaesung Jung, Ajou University
Jiabing Hu, Huazhong University of Science and Technology
Jiajun Duan, GEIRI North America
Jian-Tang Liao, National Cheng Kung University
Jianxue Wang, Xi'an Jiaotong University
Jie Wu, Sichuan Electric Power Research Institute
Jinghua Li, Guangxi Key Laboratory of Power System Optimization and Energy Technology
Jingru Li, State Grid Economic and Technological Research Institute
Jinrui Tang, Wuhan University of Technology
Jun Liang, Cardiff University
Junbo Zhao, Virginia Tech
Junjie -
Friday, May 15, 2020: The University of Arizona
FRIDAY, MAY 15, 2020. THE UNIVERSITY OF ARIZONA.

It was in 1885 that the Thirteenth Territorial Legislature founded the University of Arizona with an appropriation of $25,000, but no land. This appropriation was not welcomed by many residents of Tucson and Pima County, as they were looking for either the state Capitol building, a prison, or even an asylum for the insane, but definitely not a university. The money would be available on the condition that the community provided a suitable site. Just before the $25,000 was to be returned to the Legislature, two gamblers and a saloon-keeper donated 40 acres of land "way out east of town," and thus the University could become a reality. Classes began in 1891 with 32 students and six teachers, all accommodated in one building. The first class graduated in 1895, when three students received their degrees.

Today, the University of Arizona is one of the nation's top public universities, according to U.S. News & World Report. It has grown to more than 45,000 students and 15,000 faculty and staff members across the University, which includes the main campus, the College of Applied Science & Technology in Sierra Vista, the College of Medicine – Phoenix and Arizona Online. The University is organized into 21 colleges and 23 schools. It is one of the top 10 employers in Arizona, with an economic impact of more than $1 billion in fiscal year 2017 as a result of the University's research expenditures.

156th Annual Arizona Commencement. Table of Contents: Streamed Program; …