aiohttp Documentation, Release 3.7.4


aiohttp contributors
Feb 25, 2021

CONTENTS

1. Key Features
2. Library Installation
   2.1 Installing speedups altogether
3. Getting Started
   3.1 Client example
   3.2 Server example
4. What's new in aiohttp 3?
5. Tutorial
6. Source code
7. Dependencies
8. Communication channels
9. Contributing
10. Authors and License
11. Policy for Backward Incompatible Changes
12. Table Of Contents
    12.1 Client
    12.2 Server
    12.3 Utilities
    12.4 FAQ
    12.5 Miscellaneous
    12.6 Who uses aiohttp?
    12.7 Contributing

Asynchronous HTTP Client/Server for asyncio and Python. Current version is 3.7.4.

CHAPTER ONE: KEY FEATURES

• Supports both HTTP Client and HTTP Server.
• Supports both Server WebSockets and Client WebSockets out of the box, without callback hell.
• The web server has middlewares, signals and pluggable routing.

CHAPTER TWO: LIBRARY INSTALLATION

```shell
$ pip install aiohttp
```

You may want to install the optional cchardet library as a faster replacement for chardet:

```shell
$ pip install cchardet
```

For speeding up DNS resolving by the client API you may install aiodns as well.
This option is highly recommended:

```shell
$ pip install aiodns
```

2.1 Installing speedups altogether

The following will get you aiohttp along with chardet, aiodns and brotlipy in one bundle. No need to type separate commands anymore!

```shell
$ pip install aiohttp[speedups]
```

CHAPTER THREE: GETTING STARTED

3.1 Client example

```python
import aiohttp
import asyncio

async def main():
    async with aiohttp.ClientSession() as session:
        async with session.get('http://python.org') as response:
            print("Status:", response.status)
            print("Content-type:", response.headers['content-type'])
            html = await response.text()
            print("Body:", html[:15], "...")

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
```

This prints:

```
Status: 200
Content-type: text/html; charset=utf-8
Body: <!doctype html> ...
```

Coming from requests? Read why we need so many lines.

3.2 Server example

```python
from aiohttp import web

async def handle(request):
    name = request.match_info.get('name', "Anonymous")
    text = "Hello, " + name
    return web.Response(text=text)

app = web.Application()
app.add_routes([web.get('/', handle),
                web.get('/{name}', handle)])

if __name__ == '__main__':
    web.run_app(app)
```

For more information please visit the Client and Server pages.

CHAPTER FOUR: WHAT'S NEW IN AIOHTTP 3?

Go to the What's new in aiohttp 3.0 page for the aiohttp 3.0 major release changes.

CHAPTER FIVE: TUTORIAL

Polls tutorial

CHAPTER SIX: SOURCE CODE

The project is hosted on GitHub. Please feel free to file an issue on the bug tracker if you have found a bug or have a suggestion for improving the library.

The library uses Azure Pipelines for Continuous Integration.

CHAPTER SEVEN: DEPENDENCIES

• Python 3.6+
• async_timeout
• attrs
• chardet
• multidict
• yarl
• Optional cchardet as a faster replacement for chardet. Install it explicitly via:

```shell
$ pip install cchardet
```

• Optional aiodns for fast DNS resolving. This library is highly recommended:

```shell
$ pip install aiodns
```

CHAPTER EIGHT: COMMUNICATION CHANNELS

aio-libs discourse group: https://aio-libs.discourse.group. Feel free to post your questions and ideas there.

gitter chat: https://gitter.im/aio-libs/Lobby

We support Stack Overflow. Please add the aiohttp tag to your question there.

CHAPTER NINE: CONTRIBUTING

Please read the instructions for contributors before making a Pull Request.

CHAPTER TEN: AUTHORS AND LICENSE

The aiohttp package is written mostly by Nikolay Kim and Andrew Svetlov. It's Apache 2 licensed and freely available. Feel free to improve this package and send a pull request to GitHub.

CHAPTER ELEVEN: POLICY FOR BACKWARD INCOMPATIBLE CHANGES

aiohttp keeps backward compatibility. After deprecating some public API (method, class, function argument, etc.) the library guarantees that use of the deprecated API remains allowed for at least a year and a half after publishing a new release with the deprecation. All deprecations are reflected in the documentation and raise DeprecationWarning. Sometimes we are forced to break our own rule for the sake of a very strong reason, most likely a critical bug which cannot be solved without a major API change, but we work hard to keep such changes as rare as possible.
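As a sketch of how a deprecation of this kind surfaces to users, the standard warnings module can demonstrate the mechanics (the names old_api and new_api are hypothetical, not aiohttp APIs):

```python
import warnings

def old_api():
    # A deprecated public API keeps working during the grace period,
    # but emits a DeprecationWarning pointing at the replacement.
    warnings.warn("old_api() is deprecated, use new_api() instead",
                  DeprecationWarning, stacklevel=2)
    return 42

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_api()
```

The call still returns its normal result; the warning is delivered out of band, so existing code keeps running while the migration window is open.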
CHAPTER TWELVE: TABLE OF CONTENTS

12.1 Client

This page contains all information about the aiohttp Client API.

12.1.1 Client Quickstart

Eager to get started? This page gives a good introduction to the aiohttp client API. First, make sure that aiohttp is installed and up-to-date. Let's get started with some simple examples.

Make a Request

Begin by importing the aiohttp module and asyncio:

```python
import aiohttp
import asyncio
```

Now, let's try to get a web page. For example, let's query http://httpbin.org/get:

```python
async def main():
    async with aiohttp.ClientSession() as session:
        async with session.get('http://httpbin.org/get') as resp:
            print(resp.status)
            print(await resp.text())

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
```

Now we have a ClientSession called session and a ClientResponse object called resp. We can get all the information we need from the response. The mandatory parameter of the ClientSession.get() coroutine is an HTTP URL (a str or a yarl.URL instance).

In order to make an HTTP POST request, use the ClientSession.post() coroutine:

```python
session.post('http://httpbin.org/post', data=b'data')
```

Other HTTP methods are available as well:

```python
session.put('http://httpbin.org/put', data=b'data')
session.delete('http://httpbin.org/delete')
session.head('http://httpbin.org/get')
session.options('http://httpbin.org/get')
session.patch('http://httpbin.org/patch', data=b'data')
```

Note: Don't create a session per request. Most likely you need a session per application, performing all requests together. More complex cases may require a session per site, e.g. one for GitHub and another for the Facebook APIs. In any case, making a session for every request is a very bad idea. A session contains a connection pool; connection reuse and keep-alives (both on by default) may speed up total performance.
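The session-per-application advice can be sketched as follows. This minimal illustration (assuming aiohttp is installed, per the Library Installation chapter) opens one shared session, where all of an application's requests would go, and closes it exactly once; no requests are actually sent here:

```python
import asyncio

import aiohttp  # third-party; assumed installed

async def main():
    # One ClientSession for the whole application; every request made
    # through it shares the same internal connection pool.
    session = aiohttp.ClientSession()
    try:
        # ... issue all of the application's requests via `session` here ...
        pass
    finally:
        # Outside an `async with` block the session must be closed explicitly.
        await session.close()
    return session.closed

closed = asyncio.run(main())
```

Handing the single session to helper coroutines as a parameter, rather than creating one per call, is what lets keep-alive connections be reused.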
Using the session as a context manager is not mandatory, but in that case await session.close() must be called explicitly, e.g.:

```python
session = aiohttp.ClientSession()
async with session.get('...'):
    # ...
await session.close()
```

Passing Parameters In URLs

You often want to send some sort of data in the URL's query string. If you were constructing the URL by hand, this data would be given as key/value pairs in the URL after a question mark, e.g. httpbin.org/get?key=val. aiohttp allows you to provide these arguments as a dict, using the params keyword argument. As an example, if you wanted to pass key1=value1 and key2=value2 to httpbin.org/get, you would use the following code:

```python
params = {'key1': 'value1', 'key2': 'value2'}
async with session.get('http://httpbin.org/get', params=params) as resp:
    expect = 'http://httpbin.org/get?key1=value1&key2=value2'
    assert str(resp.url) == expect
```

You can see that the URL has been correctly encoded by printing the URL.

For sending data with multiple values for the same key, MultiDict may be used; the library supports the nested-list alternative ({'key': ['value1', 'value2']}) as well.

It is also possible to pass a list of 2-item tuples as parameters; in that case you can specify multiple values for each key:

```python
params = [('key', 'value1'), ('key', 'value2')]
async with session.get('http://httpbin.org/get', params=params) as r:
    expect = 'http://httpbin.org/get?key=value2&key=value1'
    assert str(r.url) == expect
```

You can also pass str content as a param, but beware: the content is not encoded by the library. Note that + is not encoded:
```python
async with session.get('http://httpbin.org/get',
                       params='key=value+1') as r:
    assert str(r.url) == 'http://httpbin.org/get?key=value+1'
```

Note: aiohttp internally performs URL canonicalization before sending the request. Canonicalization encodes the host part with the IDNA codec and applies requoting to the path and query parts. For example, URL('http://example.com/путь/%30?a=%31') is converted to URL('http://example.com/%D0%BF%D1%83%D1%82%D1%8C/0?a=1').

Sometimes canonicalization is not desirable, if the server accepts the exact representation and does not requote the URL itself. To disable canonicalization, use the encoded=True parameter for URL construction:

```python
await session.get(
    URL('http://example.com/%30', encoded=True))
```

Warning: Passing params overrides encoded=True; never use both options.

Response Content and Status Code

We can read the content of the server's response and its status code. Consider the GitHub time-line again:

```python
async with session.get('https://api.github.com/events') as resp:
    print(resp.status)
    print(await resp.text())
```

This prints out something like:

```
200
'[{"created_at":"2015-06-12T14:06:22Z","public":true,"actor":{..
```
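The query-string encoding and canonicalization behavior described above can be sketched with the standard library alone; here urllib.parse and the idna codec stand in for what aiohttp and yarl do internally, and yarl's exact output may differ in edge cases:

```python
from urllib.parse import urlencode, parse_qsl, unquote

# Repeated keys as a list of 2-item tuples, mirroring
# params=[('key', 'value1'), ('key', 'value2')].
pairs = [('key', 'value1'), ('key', 'value2')]
query = urlencode(pairs)
assert query == 'key=value1&key=value2'
# Decoding recovers both values for the same key.
assert parse_qsl(query) == pairs

# Requoting during canonicalization decodes unnecessary percent
# escapes: '%30' is just '0', '%31' is just '1'.
assert unquote('/%30?a=%31') == '/0?a=1'

# The host part is encoded with the IDNA codec.
assert 'bücher.example'.encode('idna') == b'xn--bcher-kva.example'
```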