Digital Image Processing Lectures 25 & 26

Image Encoding & Compression: Information Theory, Pixel-Based Encoding, Predictive Encoding, Transform-Based Encoding
M.R. Azimi, Professor
Department of Electrical and Computer Engineering, Colorado State University

Area 4: Image Encoding and Compression

Goal: To exploit the redundancies in the image in order to reduce the number of bits needed to represent an image or a sequence of images (e.g., video).

Applications:
Image transmission: e.g., HDTV, 3DTV, satellite/military communication, and teleconferencing.
Image storage: e.g., document storage and retrieval, medical image archives, weather maps, and geological surveys.

Categories of techniques:
1 Pixel encoding: PCM, run-length encoding, bit-plane encoding, Huffman encoding, entropy encoding.
2 Predictive encoding: delta modulation, 2-D DPCM, inter-frame methods.
3 Transform-based encoding: DCT-based, WT-based, zonal encoding.
4 Others: vector quantization (clustering), neural network-based, hybrid encoding.

Encoding System

There are three steps involved in any encoding system (Fig. 1).
a. Mapping: removes redundancies in the image. Should be invertible.
b. Quantization: mapped values are quantized using uniform or Lloyd-Max quantizers.
c. Coding: optimal codewords are assigned to the quantized values.

Figure 1: A typical image encoding system.

Before discussing specific encoding systems, we review some basic results from information theory.

Measure of Information & Entropy

Assume there is a source (e.g., an image) that generates a discrete set of independent messages (e.g., grey levels) $r_k$ with probability $P_k$, $k \in [1, L]$, where $L$ is the number of messages (or number of levels).

Figure 2: Source and message.

The information associated with $r_k$ is
$$I_k = -\log_2 P_k \;\text{bits},$$
and clearly $\sum_{k=1}^{L} P_k = 1$. For equally likely levels (messages), the information can be transmitted as an $n$-bit binary number:
$$P_k = \frac{1}{L} = \frac{1}{2^n} \;\Rightarrow\; I_k = n \;\text{bits}.$$
For images, the $P_k$'s are obtained from the histogram.

As an example, consider a binary image with $r_0 = \text{Black}$, $P_0 = 1$ and $r_1 = \text{White}$, $P_1 = 0$; then $I_k = 0$, i.e., no information.

Entropy: the average information generated by the source,
$$H = \sum_{k=1}^{L} P_k I_k = -\sum_{k=1}^{L} P_k \log_2 P_k \quad \text{(average bits/pixel)}.$$
Entropy also represents a measure of redundancy. Let $L = 4$, $P_1 = P_2 = P_3 = 0$ and $P_4 = 1$; then $H = 0$, i.e., the most certain case and thus maximum redundancy. Now let $L = 4$, $P_1 = P_2 = P_3 = P_4 = 1/4$; then $H = 2$, i.e., the most uncertain case and hence the least redundant. Maximum entropy occurs when the levels are equally likely, $P_k = \frac{1}{L}$, $k \in [1, L]$; then
$$H_{\max} = -\sum_{k=1}^{L} \frac{1}{L} \log_2 \frac{1}{L} = \log_2 L.$$
Thus $0 \le H \le H_{\max}$.
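As a concrete illustration of estimating the $P_k$'s from the histogram and computing $H = -\sum_k P_k \log_2 P_k$, here is a minimal Python sketch (the function name and the 4-level test image are assumptions made for this illustration, not part of the lecture):

```python
import numpy as np

def image_entropy(img, levels=256):
    """Estimate H = -sum_k P_k log2 P_k of a grey-level image,
    with the P_k's taken from the image histogram."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()      # empirical probabilities P_k
    p = p[p > 0]               # convention: 0 * log2(0) = 0
    return -np.sum(p * np.log2(p))

# Equiprobable 4-level image: P_1 = ... = P_4 = 1/4, so H should be 2 bits/pixel.
img = np.repeat(np.arange(4), 16).reshape(8, 8)
print(image_entropy(img, levels=4))   # -> 2.0
```

For the equiprobable 4-level image this returns $H = 2$ bits/pixel, matching the example above; a constant image would return $H = 0$.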
Entropy and Coding

Entropy represents the lower bound on the number of bits required to code the coder inputs: for a set of coder inputs $v_k$, $k \in [1, L]$, with probabilities $P_k$, it is guaranteed that it is not possible to code them using fewer than $H$ bits on the average. If we design a code with codewords $C_k$, $k \in [1, L]$, and corresponding word lengths $\beta_k$, the average number of bits required by the coder is
$$R(L) = \sum_{k=1}^{L} \beta_k P_k.$$

Figure 3: Coder producing codewords $C_k$ with lengths $\beta_k$.

Shannon's Entropy Coding Theorem (1949): The average length $R(L)$ is bounded by
$$H \le R(L) \le H + \epsilon, \qquad \epsilon = 1/L,$$
i.e., it is possible to encode a source with entropy $H$ without distortion using an average of $H + \epsilon$ bits/message, or to encode it with distortion using $H$ bits/message. Optimality of the coder depends on how close $R(L)$ is to $H$.

Example: Let $L = 2$, $P_1 = p$ and $P_2 = 1 - p$, $0 \le p \le 1$. The entropy is
$$H = -p \log_2 p - (1 - p) \log_2 (1 - p).$$
Plotting $H$ as a function of $p$ shows that, since the source is binary, we can always use 1 bit/pixel; this corresponds to $H_{\max} = 1$ at $p = 1/2$. However, if $p = 1/8$, then $H \approx 0.54$, i.e., there is more redundancy, and it is possible to find a coding scheme that uses only about 0.54 bits/pixel.

Remark: The maximum achievable compression is
$$C = \frac{\text{average bit rate of the original raw data } (B)}{\text{average bit rate of the encoded data } (R(L))}.$$
Thus
$$\frac{B}{H + \epsilon} \le C \le \frac{B}{H}, \qquad \epsilon = 1/L.$$
Since a certain amount of distortion is inevitable in any image transmission, it is necessary to find the minimum number of bits needed to encode the image while allowing a certain level of distortion.

Rate Distortion Function

Let $D$ be a fixed distortion between the actual values $x$ and the reproduced values $\hat{x}$. The question is: allowing distortion $D$, what is the minimum number of bits required to encode the data? If we consider $x$ to be a Gaussian r.v. with variance $\sigma_x^2$, the distortion is
$$D = E[(x - \hat{x})^2].$$
The rate distortion function is defined by
$$R_D = \begin{cases} \frac{1}{2} \log_2 \frac{\sigma_x^2}{D}, & 0 \le D \le \sigma_x^2 \\ 0, & D > \sigma_x^2 \end{cases} \;=\; \max\!\left[0, \; \frac{1}{2} \log_2 \frac{\sigma_x^2}{D}\right].$$
At maximum distortion, $D \ge \sigma_x^2$, we have $R_D = 0$, i.e., no information needs to be transmitted.

Figure 4: Rate distortion function $R_D$ versus $D$.

$R_D$ gives the number of bits required for distortion $D$. Since $R_D$ represents the number of bits/pixel, $N = 2^{R_D} = \left(\sigma_x^2 / D\right)^{1/2}$, where $D$ is interpreted as the quantization noise variance. This variance can be minimized using the Lloyd-Max quantizer. In the transform domain we can assume that $x$ is white (e.g., due to the KL transform).
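A minimal numerical sketch of the Gaussian rate distortion function above (Python; the source variance $\sigma_x^2 = 4$ and the distortion values are assumptions chosen only to exercise the formula):

```python
import numpy as np

def rate_distortion_gaussian(sigma2, D):
    """Rate distortion function of a Gaussian source:
    R_D = max(0, 0.5 * log2(sigma_x^2 / D))  [bits/pixel]."""
    return max(0.0, 0.5 * np.log2(sigma2 / D))

sigma2 = 4.0                       # assumed source variance sigma_x^2
for D in [0.25, 1.0, 4.0, 8.0]:    # allowed quantization noise variances
    R = rate_distortion_gaussian(sigma2, D)
    N = 2 ** R                     # equivalent number of quantizer levels
    print(f"D = {D:5.2f}  ->  R_D = {R:4.2f} bits/pixel, N = {N:4.2f}")
```

As expected, $R_D$ drops to 0 once $D \ge \sigma_x^2$, and $N = 2^{R_D} = (\sigma_x^2/D)^{1/2}$ gives the equivalent number of quantizer levels.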
Pixel-Based Encoding

Encode each pixel ignoring the inter-pixel dependencies. Among the methods are:

1. Entropy Coding
Every block of the image is entropy encoded based upon the $P_k$'s within that block. This produces a variable-length code for each block, depending on the spatial activity within the blocks.

2. Run-Length Encoding
Scan the image horizontally or vertically and, while scanning, assign each group of pixels with the same intensity to a pair $(g_i, l_i)$, where $g_i$ is the intensity and $l_i$ is the length of the "run". This method can also be used for detecting edges and boundaries of an object. It is mostly used for images with a small number of grey levels and is not effective for highly textured images.

Example 1: Consider the following 8 × 8 image.

4 4 4 4 4 4 4 0
4 5 5 5 5 5 4 0
4 5 6 6 6 5 4 0
4 5 6 7 6 5 4 0
4 5 6 6 6 5 4 0
4 5 5 5 5 5 4 0
4 4 4 4 4 4 4 0
4 4 4 4 4 4 4 0

The run-length codes using the vertical (continuous top-down) scanning mode are:
(4,9) (5,5) (4,3) (5,1) (6,3) (5,1) (4,3) (5,1) (6,1) (7,1) (6,1) (5,1) (4,3) (5,1) (6,3) (5,1) (4,3) (5,5) (4,10) (0,8)
i.e., a total of 20 pairs = 40 numbers. Horizontal scanning would lead to 34 pairs = 68 numbers, which is more than the actual number of pixels (i.e., 64).

Example 2: Let the transition probabilities for run-length encoding of a binary image (0: black, 1: white) be $p_0 = P(0|1)$ and $p_1 = P(1|0)$. Assuming all runs are independent, find (a) the average run lengths, (b) the entropies of the white and black runs, and (c) the compression ratio.

Solution: A run of length $l \ge 1$ can be represented by a geometric r.v. $X_i$, $i = 0, 1$, with PMF $P(X_i = l) = p_i (1 - p_i)^{l-1}$, which corresponds to the first occurrence of a 0 or a 1 after $l$ independent trials. (Note that $1 - P(0|1) = P(1|1)$ and $1 - P(1|0) = P(0|0)$.) Thus, the average run length is
$$\mu_{X_i} = \sum_{l=1}^{\infty} l \, P(X_i = l) = \sum_{l=1}^{\infty} l \, p_i (1 - p_i)^{l-1},$$
which, using the series $\sum_{n=1}^{\infty} n a^{n-1} = \frac{1}{(1-a)^2}$, reduces to $\mu_{X_i} = \frac{1}{p_i}$.
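The continuous run-length scan of Example 1 can be reproduced with a short Python sketch (the helper name and array layout are assumptions made for this illustration; the lecture does not prescribe an implementation):

```python
import numpy as np

def run_length_encode(img, mode="vertical"):
    """Continuous run-length encoding of a grey-level image.
    Returns a list of (intensity, run-length) pairs (g_i, l_i)."""
    # Column-major order gives the continuous top-down vertical scan,
    # row-major order gives the horizontal scan.
    seq = img.flatten(order="F" if mode == "vertical" else "C")
    pairs = []
    for v in seq:
        if pairs and pairs[-1][0] == v:
            pairs[-1][1] += 1         # extend the current run
        else:
            pairs.append([int(v), 1])  # start a new run (g_i, l_i = 1)
    return [tuple(p) for p in pairs]

img = np.array([[4, 4, 4, 4, 4, 4, 4, 0],
                [4, 5, 5, 5, 5, 5, 4, 0],
                [4, 5, 6, 6, 6, 5, 4, 0],
                [4, 5, 6, 7, 6, 5, 4, 0],
                [4, 5, 6, 6, 6, 5, 4, 0],
                [4, 5, 5, 5, 5, 5, 4, 0],
                [4, 4, 4, 4, 4, 4, 4, 0],
                [4, 4, 4, 4, 4, 4, 4, 0]])

codes = run_length_encode(img, mode="vertical")
print(codes)        # starts with (4, 9), (5, 5), (4, 3), ...
print(len(codes))   # -> 20 pairs, as in Example 1
```

Switching to mode="horizontal" yields 34 pairs for the same image, confirming the counts quoted above.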