
Cinematography, Lighting, Set Designing & Makeup (204)

UNIT - 1 Technology

Video is an electronic medium for the recording, copying, playback, broadcasting, and display of moving visual media.

Video systems vary greatly in display resolution, how they are refreshed, and the refresh rate, and 3D video systems exist. Video can also be carried on a variety of media, including radio broadcast, tapes, and computer files.

History

Video technology was first developed for mechanical television systems, which were quickly replaced by cathode ray tube (CRT) television systems, but several new technologies for video display devices have since been invented. Charles Ginsburg led an Ampex research team that developed one of the first practical video tape recorders (VTR). In 1951 the first video tape recorder captured live images from television cameras by converting the camera's electrical impulses and saving the information onto magnetic video tape.

Video tape recorders were sold for US$50,000 in 1956, and videotapes cost US$300 per one-hour reel.[1] However, prices gradually dropped over the years; in 1971, Sony began selling videocassette recorder (VCR) decks and tapes to the public.[2]

The use of digital techniques in video created digital video, which allowed higher quality and, eventually, much lower cost than earlier analog technology. After the invention of the DVD in 1997 and the Blu-ray Disc in 2006, sales of videotape and tape recording equipment plummeted. Advances in computer technology allowed even inexpensive personal computers to capture, store, edit and transmit digital video, further reducing the cost of video production and allowing program-makers and broadcasters to move to tapeless production. The advent of digital broadcasting and the subsequent digital television transition are in the process of relegating analog video to the status of a legacy technology in most parts of the world. As of 2015, with the increasing use of high-resolution video cameras with improved dynamic range and color gamuts, and high-dynamic-range digital intermediate data formats with improved color depth, modern digital video technology is slowly converging with digital film technology.

Characteristics of video streams

Number of frames per second

Frame rate, the number of still pictures per unit of time of video, ranges from six or eight frames per second (frame/s) for old mechanical cameras to 120 or more frames per second for new professional cameras. PAL standards (Europe, Asia, Australia, etc.) and SECAM (France, Russia, parts of Africa, etc.) specify 25 frame/s, while NTSC standards (USA, Canada, Japan, etc.) specify 29.97 frame/s.[citation needed] Film is shot at the slower frame rate of 24 frames per second, which slightly complicates the process of transferring a cinematic motion picture to video. The minimum frame rate to achieve a comfortable illusion of a moving image is about sixteen frames per second.[3]
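As a rough illustration of these rates, the following Python sketch (illustrative only, not part of the original text) computes how many frames each standard produces in one minute; the mismatch between 24 frame/s film and 29.97 frame/s NTSC is what makes film-to-video transfer slightly awkward.

# Minimal sketch: frame counts for the standards mentioned above.
# The one-minute duration is just an illustrative value.

FRAME_RATES = {
    "film": 24.0,         # cinema
    "PAL/SECAM": 25.0,    # Europe, Asia, Australia, France, Russia, ...
    "NTSC": 30000 / 1001, # ~29.97; USA, Canada, Japan
}

duration_s = 60.0  # one minute of material

for name, fps in FRAME_RATES.items():
    frames = duration_s * fps
    print(f"{name:10s}: {fps:6.3f} frame/s -> {frames:7.2f} frames per minute")

# Transferring 24 frame/s film to ~29.97 frame/s NTSC video therefore needs
# extra repeated fields (the "3:2 pulldown"), because the frame counts do
# not line up.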

Interlaced vs progressive

Video can be interlaced or progressive. Interlacing was invented as a way to reduce flicker in early mechanical and CRT video displays without increasing the number of complete frames per second, which would have sacrificed image detail to remain within the limitations of a narrow bandwidth. The horizontal scan lines of each complete frame are treated as if numbered consecutively, and captured as two fields: an odd field (upper field) consisting of the odd-numbered lines and an even field (lower field) consisting of the even-numbered lines.

Analog display devices reproduce each frame in the same way, effectively doubling the frame rate as far as perceptible overall flicker is concerned. When the image capture device acquires the fields one at a time, rather than dividing up a complete frame after it is captured, the frame rate for motion is effectively doubled as well, resulting in smoother, more lifelike reproduction (although with halved detail) of rapidly moving parts of the image when viewed on an interlaced CRT display, but the display of such a signal on a progressive scan device is problematic.

NTSC, PAL and SECAM are interlaced formats. Abbreviated video resolution specifications often include an i to indicate interlacing. For example, PAL video format is often specified as 576i50, where 576 indicates the total number of horizontal scan lines, i indicates interlacing, and 50 indicates 50 fields (half-frames) per second.

In progressive scan systems, each refresh period updates all scan lines in each frame in sequence. When displaying a natively progressive broadcast or recorded signal, the result is optimum spatial resolution of both the stationary and moving parts of the image. When displaying a natively interlaced signal, however, overall spatial resolution is degraded by simple line doubling—artifacts such as flickering or "comb" effects in moving parts of the image appear unless special signal processing eliminates them. A procedure known as deinterlacing can optimize the display of an interlaced video signal from an analog, DVD or satellite source on a progressive scan device such as an LCD television, digital video projector or plasma panel. Deinterlacing cannot, however, produce video quality that is equivalent to true progressive scan source material.
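The field structure described above can be sketched in a few lines of Python. This toy example (an illustrative simplification in which a frame is just a list of scan lines) splits a progressive frame into odd and even fields and shows the naive line-doubling deinterlace whose artifacts are mentioned above.

# Minimal sketch: interlaced fields and naive line-doubling deinterlace.

def split_into_fields(frame):
    """Return (upper_field, lower_field) from a full progressive frame."""
    upper = frame[0::2]  # odd-numbered lines (1st, 3rd, ... counting from 1)
    lower = frame[1::2]  # even-numbered lines
    return upper, lower

def line_double(field):
    """Naive deinterlace: repeat each field line to rebuild a full frame."""
    rebuilt = []
    for line in field:
        rebuilt.append(line)
        rebuilt.append(line)  # duplicated line -- source of softness/"comb"
    return rebuilt

frame = [f"line {n}" for n in range(1, 7)]   # toy 6-line frame
upper, lower = split_into_fields(frame)
print(upper)               # ['line 1', 'line 3', 'line 5']
print(line_double(upper))  # each field line repeated -> halved detail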

Aspect ratio

Comparison of common cinematography and traditional television aspect ratios

Aspect ratio describes the dimensions of video screens and video picture elements. All popular video formats are rectilinear, and so can be described by a ratio between width and height. The screen aspect ratio of a traditional television screen is 4:3, or about 1.33:1. High definition televisions use an aspect ratio of 16:9, or about 1.78:1. The aspect ratio of a full 35 mm film frame with soundtrack (also known as the Academy ratio) is 1.375:1.

Pixels on computer monitors are usually square, but pixels used in digital video often have non-square aspect ratios, such as those used in the PAL and NTSC variants of the CCIR 601 digital video standard, and the corresponding anamorphic widescreen formats. Therefore, a 720 by 480 pixel NTSC DV image displays with the 4:3 aspect ratio (the traditional television standard) if the pixels are thin, and displays at the 16:9 aspect ratio (the anamorphic widescreen format) if the pixels are fat.
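The arithmetic behind that 720×480 example can be sketched as follows. The two pixel aspect ratios used here (10/11 for "thin" and 40/33 for "fat") are the commonly quoted NTSC DV values and are assumptions for illustration, not figures taken from the text above.

# Minimal sketch: display aspect ratio from pixel dimensions and pixel aspect ratio.
# display aspect = (pixel columns x pixel aspect ratio) / pixel rows

def display_aspect(width_px, height_px, pixel_aspect):
    return (width_px * pixel_aspect) / height_px

w, h = 720, 480
print(round(display_aspect(w, h, 10 / 11), 3))  # "thin" pixels -> ~1.364 (close to 4:3)
print(round(display_aspect(w, h, 40 / 33), 3))  # "fat" pixels  -> ~1.818 (close to 16:9)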

The popularity of viewing video on mobile phones has led to the growth of vertical video. Mary Meeker, a partner at Silicon Valley venture capital firm Kleiner Perkins Caufield & Byers, highlighted the growth of vertical video viewing in her 2015 Internet Trends Report: it grew from 5% of video viewing in 2010 to 29% in 2015. Vertical video ads like Snapchat's are watched in their entirety nine times more often than landscape video ads.[4] The format was rapidly taken up by leading social platforms and media publishers such as Mashable.[5] In October 2015, video platform Grabyo launched technology to help video publishers adapt horizontal 16:9 video into mobile formats such as vertical and square.[6]

Color space and bits per pixel

Example of the U-V color plane, Y value = 0.5

The color model name describes the video color representation. YIQ was used in NTSC television. It corresponds closely to the YUV scheme used in NTSC and PAL television and the YDbDr scheme used by SECAM television.

The number of distinct colors a pixel can represent depends on the number of bits per pixel (bpp). A common way to reduce the amount of data required in digital video is by chroma subsampling (e.g., 4:4:4, 4:2:2, 4:2:0/4:1:1). Because the human eye is less sensitive to details in color than in brightness, the luminance data for all pixels is maintained, while the chrominance data is averaged for a number of pixels in a block and that same value is used for all of them. For example, this results in a 50% reduction in chrominance data using 2-pixel blocks (4:2:2) or 75% using 4-pixel blocks (4:2:0). This process does not reduce the number of possible color values that can be displayed; it reduces the number of distinct points at which the color changes.
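The percentages quoted above follow directly from counting samples in a J×2 block of pixels. Below is a minimal Python sketch, assuming the usual J:a:b reading of the subsampling notation; it is an illustration of the arithmetic, not part of any codec.

# Minimal sketch: fraction of chroma samples kept in a J x 2 pixel block.

def chroma_fraction_kept(j, a, b):
    """a = chroma samples in the first row, b = chroma samples in the second row."""
    chroma_samples = 2 * (a + b)   # two chroma channels, subsampled
    full_chroma = 2 * 2 * j        # two chroma channels, full resolution
    return chroma_samples / full_chroma

for scheme in [(4, 4, 4), (4, 2, 2), (4, 2, 0)]:
    kept = chroma_fraction_kept(*scheme)
    print(f"{scheme}: {kept:.0%} of chroma kept -> {1 - kept:.0%} chroma reduction")
# (4, 4, 4): 0% reduction; (4, 2, 2): 50% reduction; (4, 2, 0): 75% reduction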

Video quality

Video quality can be measured with formal metrics like peak signal-to-noise ratio (PSNR) or with subjective video quality assessment using expert observation.
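As an illustration of the formal-metric approach, here is a minimal Python sketch of PSNR between a reference frame and an impaired frame, assuming 8-bit grayscale pixel values held in plain lists; it is a toy example rather than a production implementation.

# Minimal sketch: peak signal-to-noise ratio (PSNR) in decibels.

import math

def psnr(reference, impaired, max_value=255):
    """PSNR between two equal-length lists of pixel values."""
    mse = sum((r - i) ** 2 for r, i in zip(reference, impaired)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_value ** 2 / mse)

ref = [100, 120, 140, 160]
bad = [101, 118, 143, 158]
print(round(psnr(ref, bad), 2))  # higher dB = closer to the reference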

The subjective video quality of a system is evaluated as follows:

• Choose the video sequences (the SRC) to use for testing.
• Choose the settings of the system to evaluate (the HRC).
• Choose a test method for how to present video sequences to experts and to collect their ratings.
• Invite a sufficient number of experts, preferably not fewer than 15.
• Carry out testing.
• Calculate the average marks for each HRC based on the experts' ratings (see the sketch after this list).
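The final averaging step can be illustrated with a short Python sketch; the expert ratings below are invented values on an assumed 1-5 scale, purely for demonstration.

# Minimal sketch: mean opinion score (MOS) per HRC from expert ratings.

ratings = {
    "HRC-A (low bitrate)":  [3, 4, 3, 2, 4, 3, 3, 4, 3, 3, 2, 4, 3, 3, 4],
    "HRC-B (high bitrate)": [5, 4, 5, 5, 4, 4, 5, 5, 4, 5, 5, 4, 4, 5, 5],
}

for hrc, scores in ratings.items():
    mos = sum(scores) / len(scores)  # average mark for this HRC
    print(f"{hrc}: MOS = {mos:.2f} from {len(scores)} experts")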

Many subjective video quality methods are described in the ITU-R recommendation BT.500. One of the standardized methods is the Double Stimulus Impairment Scale (DSIS). In DSIS, each expert views an unimpaired reference video followed by an impaired version of the same video. The expert then rates the impaired video using a scale ranging from "impairments are imperceptible" to "impairments are very annoying".

Video compression method (digital only)

Main article: Video compression

Uncompressed video delivers maximum quality, but with a very high data rate. A variety of methods are used to compress video streams, with the most effective ones using a group of pictures (GOP) to reduce spatial and temporal redundancy. Broadly speaking, spatial redundancy is reduced by registering differences between parts of a single frame; this task is known as intraframe compression and is closely related to image compression. Likewise, temporal redundancy can be reduced by registering differences between frames; this task is known as interframe compression, and includes motion compensation and other techniques. The most common modern standards are MPEG-2, used for DVD, Blu-ray and satellite television, and MPEG-4, used for AVCHD, mobile phones (3GP) and Internet video.
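The idea of interframe compression can be illustrated with a toy Python sketch that stores only the differences between consecutive frames. Real codecs add motion compensation, block transforms and entropy coding, none of which are shown here; the flat lists of pixel values are an assumption for brevity.

# Minimal sketch: temporal redundancy reduced by frame differencing.

def frame_delta(previous, current):
    """Differences between two frames (the 'P-frame' idea, greatly simplified)."""
    return [c - p for p, c in zip(previous, current)]

def apply_delta(previous, delta):
    """Reconstruct the current frame from the previous frame plus the delta."""
    return [p + d for p, d in zip(previous, delta)]

frame1 = [10, 10, 10, 200, 200, 10]
frame2 = [10, 10, 10, 201, 199, 10]   # only small changes between frames

delta = frame_delta(frame1, frame2)
print(delta)                           # mostly zeros -> compresses well
assert apply_delta(frame1, delta) == frame2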

Stereoscopic

Stereoscopic video can be created using several different methods:

• Two channels: a right channel for the right eye and a left channel for the left eye. Both channels may be viewed simultaneously by using light-polarizing filters 90 degrees off-axis from each other on two video projectors. These separately polarized channels are viewed wearing eyeglasses with matching polarization filters.
• One channel with two overlaid color-coded layers. This left and right layer technique is occasionally used for network broadcast, or recent "anaglyph" releases of 3D movies on DVD. Simple red/cyan plastic glasses provide the means to view the images discretely to form a stereoscopic view of the content (see the sketch after this list).
• One channel with alternating left and right frames for the corresponding eye, using LCD glasses that read the frame sync from the VGA Display Data Channel to alternately block the image to each eye, so the appropriate eye sees the correct frame. This method is most common in computer virtual reality applications such as in a Cave Automatic Virtual Environment, but reduces effective video framerate to one-half of normal (for example, from 120 Hz to 60 Hz).
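The color-coded (anaglyph) method in the second item can be sketched in a few lines of Python; images are assumed here to be simple lists of (R, G, B) tuples, which is an illustrative simplification rather than a real image format.

# Minimal sketch: red/cyan anaglyph from a left-eye and a right-eye view.

def anaglyph(left_pixels, right_pixels):
    """Left-eye view supplies the red channel; right-eye view supplies green and blue."""
    return [(lr, rg, rb)
            for (lr, _, _), (_, rg, rb) in zip(left_pixels, right_pixels)]

left  = [(200, 180, 170), (90, 80, 70)]    # left-eye view
right = [(198, 182, 168), (92, 78, 72)]    # right-eye view (slightly offset)
print(anaglyph(left, right))
# Red/cyan glasses route the red channel to one eye and the cyan channels to
# the other, so each eye sees (approximately) its own view.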

Blu-ray Discs greatly improve the sharpness and detail of the two-color 3D effect in color-coded stereo programs. See the articles Stereoscopy and 3-D film.

Video formats

Different layers of video transmission and storage each provide their own set of formats to choose from.

For transmission, there is a physical connector and signal protocol ("video connection standard" below). A given physical link can carry certain "display standards" that specify a particular refresh rate, display resolution, and color space.

Many analog and digital recording formats are in use, and digital video clips can also be stored on a computer file system as files, which have their own formats. In addition to the physical format used by the device or transmission medium, the stream of ones and zeros that is sent must be in a particular digital video compression format, of which a number are available.

Analog video

Analog video is a video signal transferred by an analog signal. An analog color video signal contains luminance (brightness, Y) and chrominance (color, C) information for an image. When combined into one channel, it is called composite video, as is the case, among others, with NTSC, PAL and SECAM. Analog video may also be carried in separate channels, as in two-channel S-Video (YC) and multi-channel component video formats.

Analog video is used in both consumer and professional television production applications.

Composite video (single channel RCA)

S-Video (2-channel YC)

Component video (3-channel RGB)

SCART

VGA

TRRC

D-Terminal

Digital video

Digital video signal formats with higher quality have been adopted, including serial digital interface (SDI), Digital Visual Interface (DVI), High-Definition Multimedia Interface (HDMI) and DisplayPort Interface, though analog video interfaces are still used and widely available. There exist different adaptors and variants.

Serial digital interface (SDI)

Digital Visual Interface (DVI)

HDMI

DisplayPort

Transport medium

Video can be transmitted or transported in a variety of ways. It can be broadcast wirelessly as an analog or digital signal. Over coaxial cable in a closed-circuit system, it can be sent as an analog interlaced signal of 1 volt peak-to-peak with a maximum horizontal line resolution of up to 480. Broadcast or studio cameras use a single or dual coaxial cable system with a progressive scan format known as SDI (serial digital interface), or HD-SDI for high-definition video. Transmission distances are somewhat limited, and depending on the manufacturer the format may be proprietary. SDI has negligible lag and is uncompressed. There are initiatives to use the SDI standards in closed-circuit surveillance systems for higher-definition images over longer distances on coax or twisted pair cable. Due to the higher bandwidth needed, the distance the signal can effectively be sent is a half to a third of what the older interlaced analog systems supported.[7]

Video connectors, cables, and signal standards

• See List of video connectors for information about physical connectors and related signal standards.

Video display standards

Further information: Display technology

Digital television

Further information: Broadcast television systems

New formats for digital television broadcasts use the MPEG-2 video codec and include:

• ATSC - USA, Canada, Mexico
• Digital Video Broadcasting (DVB) - Europe
• ISDB - Japan
  o ISDB-Tb - uses the MPEG-4 video codec; Brazil, Argentina
• Digital Multimedia Broadcasting (DMB) - Korea

Analog television

Further information: Broadcast television systems

Analog television broadcast standards include:

• FCS - USA, Russia; obsolete
• MAC - Europe; obsolete
• MUSE - Japan
• NTSC - USA, Canada, Japan
• PAL - Europe, Asia, Australia, etc.
  o PAL-M - PAL variation; Brazil, Argentina
  o PALplus - PAL extension; Europe
• RS-343 (military)
• SECAM - France, former Soviet Union, Central Africa

An analog video format consists of more information than the visible content of the frame. Preceding and following the image are lines and pixels containing synchronization information or a time delay. This surrounding margin is known as a blanking interval or blanking region; the horizontal and vertical front porch and back porch are the building blocks of the blanking interval.

Computer displays

See Computer display standard for a list of standards used for computer monitors and comparison with those used for television.

Recording formats before video tape

Analog tape formats

In approximate chronological order. All formats listed were sold to and used by broadcasters, video producers or consumers; or were important historically (VERA).

• 2" (Ampex 1956) • VERA (BBC experimental format ca. 1958) • 1" Type A videotape (Ampex) • 1/2 " EI AJ (1969) • U-ma tic 3/4" (Sony) • 1/2" (Avco) • VCR, VCR-LP, SVR • 1" Type B videotape (Robert Bosch GmbH) • 1" Type C videotape (Ampex, Marconi and Sony) • (Sony) • VHS (JVC) • (Philips) • 2" Videotape (IVC) • 1/4 " CVC (F unai) • (Sony) [8] • HDVS (Sony) • Betacam SP (Sony) • Video8 (Sony) (1986) • S-VHS (JVC) (1987) • VHS-C (JVC) • Pixelvision (F isher-Price) [8] • UniHi 1/2" HD (Sony) • Hi8 (Sony) (mid-1990s) • W-VHS (JVC) (1994)

Digital tape formats

• Betacam IMX (Sony)
• D-VHS (JVC)
• D-Theater
• D1 (Sony)
• D2 (Sony)
• D3
• D5 HD
• D6 (Philips)
• Digital-S D9 (JVC)
• Digital Betacam (Sony)
• Digital8 (Sony)
• DV (including DVC-Pro)
• HDCAM (Sony)
• HDV
• ProHD (JVC)
• MicroMV
• MiniDV

Optical disc storage formats

• Blu-ray Disc (Sony)
• China Blue High-definition Disc (CBHD)
• DVD (was Super Density Disc, DVD Forum)
• Universal Media Disc (UMD) (Sony)

Discontinued

• Enhanced Versatile Disc (EVD, Chinese government-sponsored)
• HD DVD (NEC and Toshiba)
• HD-VMD
• Capacitance Electronic Disc
• Laserdisc (MCA and Philips)
• Television Electronic Disc (TeD) (Teldec and Telefunken)
• VHD (JVC)

Digital encoding formats

See also: Video codec and List of codecs

• CCIR 601 (ITU-T)
• H.261 (ITU-T)
• H.263 (ITU-T)
• H.264/MPEG-4 AVC (ITU-T + ISO)
• H.265
• M-JPEG (ISO)
• MPEG-1 (ISO)
• MPEG-2 (ITU-T + ISO)
• MPEG-4 (ISO)
• Ogg-Theora
• VP8-WebM
• VC-1 (SMPTE)

Standards

• System A
• System B
• System G
• System H
• System I
• System M

Parts of Camera

There are 10 basic camera parts to identify in today’s digital world. Whether you have a digital compact or a digital SLR, these parts will inevitably be found on most cameras.

1. Lens

The lens is one of the most vital parts of a camera. The light enters through the lens, and this is where the photo process begins. Lenses can be either fixed permanently to the body or interchangeable. They can also vary in focal length, aperture, and other details.

2. Viewfinder

The viewfinder can be found on all DSLRs and some models of digital compacts. On DSLRs, it will be the main visual source for image-taking, but many of today's digital compacts have replaced the typical viewfinder with an LCD screen.

3. Body

The body is the main portion of the camera, and bodies can be a number of different shapes and sizes. DSLRs tend to be larger bodied and a bit heavier, while there are other consumer cameras that are a conveniently smaller size and even able to fit into a pocket.

4. Shutter Release

The shutter release button is the mechanism that “releases” the shutter and therefore enables the ability to capture the image. The length of time the shutter is left open or “exposed” is determined by the shutter speed.

5. Aperture

The aperture affects the image's exposure by changing the diameter of the lens opening, which controls the amount of light reaching the image sensor. Some digital compacts will have a fixed aperture lens, but most of today's compact cameras have at least a small aperture range. This range will be expressed in f/stops. For DSLRs, the lens will vary on f/stop limits, but it is usually easily defined by reading the side of the lens. There will be a set of numbers stating the f/stop or f/stop range, e.g. f/2.8 or f/3.5-5.6. These will be the lowest f/stop settings available with that lens.
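To make the f/stop arithmetic concrete, here is a minimal Python sketch (not from the original text) showing how relative light admission scales with the inverse square of the f-number, using the f/2.8, f/3.5 and f/5.6 values quoted above as illustrative inputs.

# Minimal sketch: relative light admitted at different f-numbers.
# Each full stop (a factor of sqrt(2) in the f-number) halves the light.

def relative_light(f_number, reference=2.8):
    """Light admitted relative to the reference f-number."""
    return (reference / f_number) ** 2

for n in (2.8, 3.5, 5.6):
    print(f"f/{n}: {relative_light(n):.2f}x the light of f/2.8")
# f/2.8 -> 1.00x, f/3.5 -> ~0.64x, f/5.6 -> 0.25x (two stops less)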

6. Image Sensor

The image sensor converts the optical image to an electronic signal, which is then sent to your memory card. There are two main types of image sensors that are used in most digital cameras: CMOS and CCD. Both forms of the sensor accomplish the same task, but each has a different method of performance.

7. Memory Card

The memory card stores all of the image information, and they range in size and speed capacity. The main types of memory cards available are CF and SD cards, and cameras vary on which type that they require.

8. LCD Screen

The LCD screen is found on the back of the body and can vary in size. On digital compact cameras, the LCD has typically begun to replace the viewfinder completely. On DSLRs, the LCD is mainly for viewing photos after shooting, but some cameras do have a "live mode" as well.

9. Flash

The on-board flash will be available on all cameras except some professional grade DSLRs. It can sometimes be useful to provide a bit of extra light during dim, low-light situations.

10. User Controls

The controls on each camera will vary depending on the model and type. Your basic digital compacts may only have auto settings that can be used for different environments, while a DSLR will have numerous controls for auto and manual shooting along with custom settings.

Types of Camera

Below are the camera types

Autofocus camera • Backup camera • Banquet camera • Bridge camera • Camcorder • Camera phone • Closed-circuit television camera • Compact camera • Compact system camera • Dashcam • Digital camera • Disposable camera • Document camera • Field camera • FireWire camera • Folding camera • Gun camera • Helmet camera • High-speed camera • Hidden camera • Instant camera • IP camera • Keychain camera • Light-field camera • Live-preview digital camera • Medium format camera • Mirrorless interchangeable-lens camera • Monorail camera • Movie camera • Multiplane camera • Omnidirectional camera • Onboard camera • Pinhole camera • Pinspeck camera • Plate camera • Pocket camera • Pocket video camera • Point-and-shoot camera • Pool safety camera • Press camera • Process camera • Professional video camera • Rapatronic camera • Rangefinder camera • Red light camera • Reflex camera • Remote camera • Rostrum camera • Schmidt camera • Single-lens reflex camera • Spy cam • Spy camera • Stat camera • Stereo camera • Still camera • Still video camera • Subminiature camera • System camera • Thermal imaging camera (firefighting) • Thermographic camera • Toy camera • Traffic camera • Traffic enforcement camera • Twin-lens reflex camera • Video camera • Webcam • Wright camera • Zenith camera • Zoom-lens reflex camera

• The term camera is also used for devices producing images or image sequences from measurements of the physical world, or when the image formation cannot be described as photographic.
  o Magnetic resonance imaging, which produces images showing the internal structure of different parts of a patient's body.
  o Rangefinder cameras, which produce images of the distance to each point in the scene.
  o Ultrasonography, which uses ultrasonic cameras to produce images of the absorption of ultrasonic energy.
  o Virtual camera, in computing and gaming.

1. The Hand-held Camera

Hand-held camera or hand-held shooting is a filmmaking and video production technique in which a camera is held in the camera operator's hands as opposed to being mounted on a tripod or other base. Hand-held cameras are used because they are conveniently sized for travel and because they allow greater freedom of motion during filming. Newsreel camera operators frequently gathered images using a hand-held camera. Virtually all modern video cameras are small enough for hand-held use, but many professional video cameras are designed specifically for hand-held use such as for electronic news-gathering (ENG), and electronic field production (EFP).

Hand-held camera shots often result in an image that is perceptibly shakier than that of a tripod-mounted camera. Purposeful use of this technique is called shaky camera and can be heightened by the camera operator during filming, or artificially simulated in post-production. To prevent shaky shots, a number of image stabilization technologies have been used on hand-held cameras including optical, digital and mechanical methods. The Steadicam, which is not considered to be a "hand-held" camera, uses a stabilizing mount to make smoother shots.

Early usage

History of film

Silent film

The first silent film era movie cameras that could be carried by the cameraman were bulky and not very practical to simultaneously support, aim, and crank by hand, yet they were sometimes used in that way by pioneering filmmakers. In the 1890s, brothers Auguste and Louis Lumière developed the fairly compact Cinematograph which could be mounted on a tripod or carried by the cameraman, and it also served as the film projector.[1] In 1908 with a hand-held Lumière camera, Wilbur Wright was filmed flying his aircraft on the outskirts of Paris. Thomas Edison developed a portable film camera in 1896.[2] Polish inventor Kazimierz Prószyński first demonstrated a hand-held film camera in 1898 but it was not reliable.[3]

From 1909 to 1911, directors Francesco Bertolini and Adolfo Padavan, with assistant director Giuseppe de Liguoro, shot scenes for L'Inferno, based on Dante's The Divine Comedy. The film was first shown in 1911 and it included hand-held camera shots as well as innovative camera angles and special film effects.[4] In 1914, Thomas H. Ince's The Italian, directed by Reginald Barker, included two hand-held shots, at least one of which represented the viewpoint of a character. The camera swerved suddenly to match what was happening to the character in the story.[5]

A Parvo Model L camera

The compact hand-cranked Parvo camera was first made in Paris by André Debrie in 1908. Though expensive, it slowly built in popularity from about 1915. By the mid-1920s it was, in sheer numbers, the most-used film camera of any kind.[6]

The hand-held Aeroscope camera ran on compressed air.

The problem of hand-cranking and supporting the camera, and simultaneously aiming and focusing it, was difficult to solve. A variety of automatic cranking systems were developed to free one of the cameraman's hands. Various cameras were invented which replaced the hand crank with an electric motor, or with a mainspring and gears, or with gears driven by compressed air. The Aeroscope was a compressed air camera designed by Prószyński, one that proved reliable and popular. Hundreds of Aeroscopes were used during World War I by British war journalists. Sales continued into the 1920s.[7]

In January 1925, Abel Gance began shooting Napoléon using a wide variety of innovative techniques, including strapping a camera to a man's chest, a snow sled, a horse's saddle, a pendulum swing, and wrapping a large sponge around a hand-held camera so that it could be punched by actors during a fight scene. For the Debrie Parvo camera strapped to the horse's saddle, Gance's engineer, Simon Feldman, devised a reversed steam engine for cranking it, powered by two compressed air tanks. Wearing a costume to fit the scene, cameraman Jules Kruger rode another horse to tend the mechanism between shots. Rather than including one or two hand-held scenes for an unusual effect amid an otherwise static film, Gance strove to make his entire film appear as dynamic as possible. It premiered in early 1927.[8]

Bolex 16 mm camera

In the 1920s, more cameras such as the Newman-Sinclair, Eyemo, and De Vry were beginning to be created with hand-held ergonomics in mind. The Bolex camera was introduced using half-width 16 mm film stock. These smaller cameras satisfied the demand from both the growing newsreel and documentary fields, as well as the emerging amateur market. They were specifically designed to hold shorter lengths of film—usually 100 to 200 feet (30 to 60 m)—and were driven by hand-wound mainspring clockworks which could last continuously through most or even all of a film roll on one winding. These cameras saw limited use in professional filmmaking.[9] Further examples of limited hand-held work in the late 1920s include J. Stuart Blackton's The Passionate Quest (1926), Sidney Franklin's Quality Street (1927), and Cecil B. DeMille's The King of Kings (1927).[9]

Sound film

The emergence of the sound film had an immediate dampening effect on the use of hand-held shots because the film camera motors were too loud to be able to record synchronized sound on set, and thus early sound productions were forced to install the camera within a soundproof booth. By 1929, camera manufacturers and studios had devised shells, called blimps, to encase the camera and dampen the mechanical noise sufficiently to allow the cameras to be free of the booths. However, this came at a cost: the blimped, motorized cameras were considerably heavier. When the soon-to-be ubiquitous Mitchell Camera BNC (Blimped Newsreel Camera) emerged in 1934, it weighed in at 135 lb; this clearly precluded any hand-held usage. The aesthetic style of films from this period thus reflected their available technology, and hand-held shots were for the most part avoided.

Hand-held shots required use of the smaller hand-wound spring-work cameras, which were too loud to be practical for any shots requiring synchronized (sync) sound, and held less footage than studio cameras. The spring-wound cameras were also not accurate enough speed-wise to guarantee perfect sync speed, which led to many of them having motors installed (the additional sound being negligible). Thus, these cameras could not be used for much in the way of dialogue.

They were joined by the revolutionary new Arriflex 35 camera, introduced in 1937, which was the first reflex camera for motion pictures. This camera also facilitated hand-held usage by integrating its motor into a handle below the camera body, allowing easy hand-held support, and weighing a mere 12 lb. Most of these cameras saw steady usage during World War II by both sides for documentary purposes, and the Eyemos and Arriflexes in particular were mass manufactured for the Allied and Axis militaries, respectively. This allowed these cameras to be exposed to a much greater number of individuals than would have normally familiarized themselves with them; many wartime cameramen would eventually bring them back into the film industry where they are used to this day. With the Allied capture of Arriflexes, along with the release of the new Arriflex II in 1946, many curious non-German cameramen finally had access to the advanced camera. Eclair followed this up with the Cameflex the following year. It was a lightweight (13 lb) camera specifically designed for hand-held shots and could be switched between shooting 35 mm and 16 mm. In 1952, Arri subsequently released the Arriflex 16ST, the first reflex camera designed specifically for 16 mm.

New Wave revival

A balanced Steadicam avoids shakiness. It is not directly hand-held.

Despite these technological developments, the aesthetic consequences of these smaller cameras weren't fully realized until the late 1950s and early 1960s. A hand-held camera was used in 1958 on the documentary film Les Raquetteurs, shot on 35 mm by Michel Brault. When Jean Rouch met Brault and saw his work, he asked him to come to France and show his technique.

Filmmaker Curtis Choy using an Eclair NPR in 1976.

This trend, led by Michel Brault, was followed by work in the French New Wave and the cinéma vérité, "fly-on-the-wall" documentary film aesthetic. In the case of the latter, Richard Leacock and D.A. Pennebaker actually had to force the 16 mm technology forward themselves through a number of extensive camera and audio recording equipment modifications in order to achieve longer-take, observational films, beginning with Primary (1960).

A cameraman shoots a street scene from a low perspective

In the realm of 16 mm cameras, André Coutant at Éclair was working with Brault and Rouch's input to create prototypes that eventually led to the self-blimped Eclair 16 (also known as the Eclair NPR or Eclair Coutant), the first successful lightweight sync-sound movie camera. The design included a camera magazine which not only was back-mounted specifically to distribute a more balanced camera weight across the shoulder for hand-holding, but also included a built-in pressure plate and sprocket drive, which allowed cameras to be reloaded in seconds — a crucial feature for vérité documentaries.

Rouch's 1961 film Chronique d'un été was shot by Coutard and Brault on a prototype that led to the Eclair 16. Arri took many years to catch up, debuting the popular Arriflex 16BL in 1965, but not including quick-change magazines until the Arriflex 16SR ten years later. In the meantime, Eclair had also introduced a much smaller and more ergonomic hand-held 16 mm camera, the Eclair ACL (1971), an improvement also spurred by Rouch's drive for equipment that matched his vision of cinema.

Filmmakers known for hand-held camera style

• Kenneth Anger • Noah Baumbach in his recent work • Peter Berg • Stan Brakhage • Andrew Bujalski • Alfonso Cuarón • Maya Deren • Morris Engel • Abel Gance • Jean-Luc Godard during the early part of his career • Paul Greengrass • Catherine Hardwicke • Spike Lee • Alejandro González Iñárritu • Stanley Kubrick • Albert and David Maysles • Jonas Mekas • Rauni Mollberg • D.A. Pennebaker • Jacques Rivette • Jean Rouch • Uwe Boll • David O. Russell in his recent work • Steven Spielberg • Lars von Trier • André Weinfeld • Wong Kar-wai • David Yates • Rob Zombie • Christopher Hampton • Thomas Vinterberg • Michael Mann in his recent work

Convertible Camera

Wherever they are used, our convertible HD cameras ensure the very best picture quality from various angles, even in difficult shooting conditions. Choose the AK-HC1800, with an Intelligent Function that reduces the need for adjustment during remote video acquisition; the compact AK-HC1500, which can be combined with our pan/tilt models for remote, multi-angle shooting in a wide range of applications; or the HE870, whose built-in down converter makes it a cost-effective choice for rental and other mixed HD/SD applications.

- See more at: http://business.panasonic.co.uk/professional-camera/convertible-cameras/convertible-hd-cameras

A convertible or combinable lens is a lens consisting of two or more separable groups of glass elements, some of which can form an acceptable image when used on their own.

100 years ago these lenses were a common option for photo amateurs to give a double or triple extension folding camera more versatility. Some common lenses of that era consisted of a pair of lens tubes, one to screw into the back of an aperture/shutter-unit, the other to be screwed into its front side. Mounted that way the lens elements of both groups combined to make up the complete standard lens, usable with single extension of the camera bellows. These lenses are often, but not always, near-symmetrical double-anastigmats, and one half of the lens can be used alone (usually the rear half). This has a focal length roughly twice that of the two halves used together (and therefore a relative aperture about half as great).[1] It also requires more bellows extension to achieve any focus distance, so is particularly suitable for double- or triple-extension cameras. It is possible to separate the parts of lenses as simple as a Rapid Rectilinear, in which each half is just two glass elements cemented together. Some non-symmetrical lenses can also be used this way: the front (cemented) group of a Petzval lens can be used alone, if fitted behind the diaphragm.[2]

At the end of the 19th century, lens makers sought to exploit this ability of lenses more fully. Some makers designed symmetrical lenses in which each half was well-designed to be used alone (i.e. aberrations and other faults are well corrected in one half of the lens; symmetrical lenses often depend on the use of both symmetrical halves to correct chromatism etc.). Examples are the Zeiss Protar Series VII (the two half-lenses together are a Series VIIa), the Ross Combinable, and the Goerz Pantar.[3]

Because such groups are well-corrected on their own, it is possible to combine front and rear groups of different focal lengths: with these sophisticated convertibles it is possible to exchange the front group while leaving the same rear group mounted, giving a greater choice of focal lengths. Such a system was later used in a few 35 mm SLR cameras; the Contaflex (beginning with the Contaflex III) in the 1950s, and the Canon EX EE and EX Auto of the 1970s. Because these are rigid-bodied cameras, it is not possible to use the rear group on its own as in a bellows camera, though it is capable of forming an image, because it is fixed too close to the film plane.

Combinable lenses often comprise fully cemented groups of three, four or even more elements. This required them to be ground to more than usually close tolerance, and they were therefore expensive to make,[3] and rarely used for popular cameras.

Some modern compact cameras use a similar idea; they have a double-focus lens instead of a single-focus or zoom lens. In most cases the focus switch of such cameras shifts one lens element into or out of the lens tube.

Electronic Cinematography

Digital cinematography is the process of capturing (recording) motion pictures as digital video images rather than traditional analog film frames. Digital capture may occur on video tape, hard disks, flash memory, or other media which can record digital data through the use of a digital movie camera or other digital video camera. As digital technology has improved in recent years, this practice has become dominant. Since the mid-2010s, most movies around the world have been captured as well as distributed digitally.[1][2][3]

Many vendors have brought products to market, including traditional film camera vendors like Arri and Panavision, as well as new vendors like RED, Blackmagic, Silicon Imaging, Vision Research, and companies which have traditionally focused on consumer and broadcast video equipment, like Sony, GoPro, and Panasonic.

History

Beginning in the late 1980s, Sony began marketing the concept of "electronic cinematography," utilizing its analog Sony HDVS professional video cameras. The effort met with very little success. However, this led to one of the earliest digitally shot feature movies, Julia and Julia, produced in 1987.[4] In 1998, with the introduction of HDCAM recorders and 1920 × 1080 pixel digital professional video cameras based on CCD technology, the idea, now re-branded as "digital cinematography," began to gain traction in the market.[citation needed] Shot and released in 1998, The Last Broadcast is believed by some to be the first feature-length video shot and edited entirely on consumer-level digital equipment.[5]

In May 1999, George Lucas challenged the supremacy of the movie-making medium of film for the first time by including footage filmed with high-definition digital cameras in Star Wars Episode I: The Phantom Menace. The digital footage blended seamlessly with the footage shot on film, and he announced later that year that he would film its sequels entirely on high-definition digital video. Also in 1999, digital projectors were installed in four theaters for the showing of The Phantom Menace. In June 2000, Star Wars Episode II: Attack of the Clones began principal photography, shot entirely using a Sony HDW-F900 camera as Lucas had previously stated. The film was released in May 2002. In May 2001, Once Upon a Time in Mexico was also shot in 24 frame-per-second high-definition digital video, partially developed by George Lucas, using a Sony HDW-F900 camera,[6] following Robert Rodriguez's introduction to the camera at Lucas' Skywalker Ranch facility whilst editing the sound for Spy Kids. Two lesser-known movies, Vidocq (2001) and Russian Ark (2002), had also been shot with the same camera, the latter notably consisting of a single long take.

Today, cameras from companies like Sony, Panasonic, JVC and Canon offer a variety of choices for shooting high-definition video. At the high end of the market, there has been an emergence of cameras aimed specifically at the digital cinema market. These cameras from Sony, Vision Research, Arri, Silicon Imaging, Panavision, Grass Valley and Red offer resolution and dynamic range that exceeds that of traditional video cameras, which are designed for the limited needs of broadcast television.

In 2009, Slumdog Millionaire became the first movie shot mainly in digital to be awarded the Academy Award for Best Cinematography,[7] and the highest-grossing movie in the history of cinema, Avatar, was not only shot on digital cameras but also earned most of its box-office revenue from digital rather than film projection.

In late 2013, Paramount became the first major studio to distribute movies to theaters in digital format, eliminating 35mm film entirely.[8] Anchorman 2 was the last Paramount production to include a 35mm film version, while The Wolf of Wall Street was the first major movie distributed entirely digitally.[8]

Technology

Digital cinematography captures motion pictures digitally in a process analogous to digital photography. While there is no clear technical distinction that separates the images captured in digital cinematography from video, the term "digital cinematography" is usually applied only in cases where digital acquisition is substituted for film acquisition, such as when shooting a feature film. The term is seldom applied when digital acquisition is substituted for video acquisition, as with live broadcast television programs.

Recording

Ca me ras

Digital movie camera Arriflex D-21

Professional cameras include the Sony CineAlta (F) series, Blackmagic Cinema Camera, RED ONE, Arriflex D-20, D-21 and Alexa, Panavision's Genesis, Silicon Imaging SI-2K, Thomson Viper, Vision Research Phantom, the IMAX 3D camera based on two Vision Research Phantom cores, Weisscam HS-1 and HS-2, GS Vitec noX, and the Fusion Camera System. Independent filmmakers have also pressed low-cost consumer and prosumer cameras into service for digital filmmaking.

Sensors

Digital cinematography cameras capture images using CMOS or CCD sensors, usually in one of two arrangements.

Single chip cameras designed specifically for the digital cinematography market often use a single sensor (much like digital photo cameras), with dimensions similar in size to a 16 or 35 mm film frame or even (as with the Vision 65) a 65 mm film frame. An image can be projected onto a single large sensor exactly the same way it can be projected onto a film frame, so cameras with this design can be made with PL, PV and similar mounts, in order to use the wide range of existing high-end cinematography lenses available. Their large sensors also let these cameras achieve the same shallow depth of field as 35 or 65 mm motion picture film cameras, which many cinematographers consider an essential visual tool.[9]

Video formats

Unlike other video formats, which are specified in terms of vertical resolution (for example, 1080p, which is 1920×1080 pixels), digital cinema formats are usually specified in terms of horizontal resolution. As a shorthand, these resolutions are often given in "nK" notation, where n is the multiplier of 1024 such that the horizontal resolution of a corresponding full-aperture, digitized film frame is exactly 1024n pixels. Here the "K" has a customary meaning corresponding to the binary prefix "kibi" (ki).

For instance, a 2K image is 2048 pixels wide, and a 4K image is 4096 pixels wide. Vertical resolutions vary with aspect ratios, though; so a 2K image with an HDTV (16:9) aspect ratio is 2048×1152 pixels, while a 2K image with an SDTV or Academy ratio (4:3) is 2048×1536 pixels, and one with a Panavision ratio (2.39:1) would be 2048×856 pixels, and so on. Because the "nK" notation does not correspond to specific horizontal resolutions per format, a 2K image lacking, for example, the typical 35mm film soundtrack space is only 1828 pixels wide, with vertical resolutions rescaling accordingly. This has led to a plethora of motion-picture-related video resolutions, which can be confusing and often redundant given the few projection standards in use today.
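The resolutions quoted above follow from simple arithmetic: the width is 1024 × n and the height is the width divided by the chosen aspect ratio, truncated to whole scan lines. Here is a minimal Python sketch of that calculation (illustrative only, not a DCI specification).

# Minimal sketch: "nK" frame sizes for a few aspect ratios.

def nk_size(n, ratio_w, ratio_h):
    width = 1024 * n
    height = (width * ratio_h) // ratio_w   # truncated to whole scan lines
    return width, height

for name, (rw, rh) in [("HDTV 16:9", (16, 9)),
                       ("Academy 4:3", (4, 3)),
                       ("Panavision 2.39:1", (239, 100))]:
    print("2K", name, nk_size(2, rw, rh))
# 2K 16:9 -> (2048, 1152); 2K 4:3 -> (2048, 1536); 2K 2.39:1 -> (2048, 856)

print("4K", "Panavision 2.39:1", nk_size(4, 239, 100))  # (4096, 1713)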

All formats designed for digital cinematography are progressive scan, and capture usually occurs at the same 24 frame per second rate established as the standard for 35mm film. Some films, such as The Hobbit: An Unexpected Journey, have a high frame rate of 48 fps, although in some cinemas it was also released in a 24 fps version, which many fans of traditional film prefer.

The DCI standard for cinema usually relies on a 1.89:1 aspect ratio, thus defining the maximum container size for 4K as 4096×2160 pixels and for 2K as 2048×1080 pixels. When distributed in the form of a Digital Cinema Package (DCP), content is letterboxed or pillarboxed as appropriate to fit within one of these container formats.

Studio Camera

Photography needs light. In the early days of photography, the rule "photography needs time and light" was even more obvious, since the exposure times for portraits were at least 5 seconds. To achieve such "short" exposure times, a well-lit scene had to be chosen to make a picture of a person. Thus early professional photo studios had large glass windows facing towards the midday sun. Optimal studio rooms were under the roof, with additional big glass windows in the roof bringing the full advantage of bright daylight.

But photographers were not happy: with sunny weather outside they could make a Daguerreotype exposure within five seconds; with dark clouds over the studio at least 20 seconds were necessary, which was especially a problem when portraying children. "Head-holders" and other means of "torture" were not the solution. The invention of new, faster photographic processes, together with the invention of the flashgun, solved the time-and-light problem. Electric light, electric flashbulbs, and later electronic flashguns dramatically enhanced the advantages of photographing in a studio, since the light there became completely controllable by the photographer, and of course always bright enough to forget about the old exposure time troubles.

What about the cameras?

The cameras needed in the old daylight-dependent studios had to have fast lenses. The fast Petzval lens was a standard for portrait photography. With the fast lenses used at wide aperture, the classic portrait style was found: a plane of sharpness on the face and a blurred foreground and background. Big cameras were not necessary.

But the photographers wanted to show all their art, thus giving their studios the style of a stage with changing settings. The photo subjects could feel like actors on a stage, playing a scene with statuary gesture for the moment of exposure, of course made at a smaller aperture and needing a longer exposure time unless made with flash light. If people wanted a big framed image for the living room instead of a picture for the album, large format cameras were preferred to the costly enlarging of small images. Thus big view cameras became common in photo studios. If such cameras reach dimensions at which they must be seen as a piece of studio furniture, they are pure studio cameras. The camera stand of such a camera is not just a tripod. It is a sturdy, massive wooden construction including mechanics to regulate the height and tilt of the camera.

Studio cameras might give exposures of 8×10 inches or similar dimensions, thus giving the highest possible image resolution if the image is taken perfectly sharp.

With the advent of high-end professional digital cameras, large format studio cameras are rarely seen in use anymore. However, a large format camera still can achieve greater quality than present studio digital cameras can hope for.

What else is a studio camera?

Since the early days, smaller cameras have played a leading role in studio photography, for example the Daguerreotyp-Apparat zum Portraitiren from 1840. Carte de visite cameras (later passport cameras) and smaller view cameras were always common in studios. In studios for advertising photos, the interaction between photographer and model is more important, making hand cameras, mainly SLRs or modern professional digital cameras, the instrument of choice, combined with modern studio flashes. Other advertising studios don't photograph people but items of any kind for catalogues, brochures and so on, thus needing cameras with all kinds of macro equipment and probably an additional monorail camera. But all these other camera types are not the kind of studio furniture that the above-mentioned super-large format view camera is.

Conclusion

The kind of super-large view camera mentioned above is the only one which is made for the studio and for nothing but the studio, thus being the only one to be classified definitely as a studio camera. Other studio cameras are smaller and more or less portable for applications outside of a studio; however, they are classifiable otherwise.

UNIT – 2 Camera Movements

Below are the standard types of camera movement in film and video. In the real world, many camera moves use a combination of these techniques simultaneously.

Crab A less-common term for tracking or trucking.

Dolly The camera is mounted on a cart which travels along tracks for a very smooth movement. Also known as a tracking shot or trucking shot.

Dolly Zoom A technique in which the camera moves closer or further from the subject while simultaneously adjusting the zoom angle to keep the subject the same size in the frame.

Follow The camera physically follows the subject at a more or less constant distance.

Pan Horizontal movement, left and right.

Pedestal (Ped) Moving the camera position vertically with respect to the subject.

Tilt Vertical movement of the camera angle, i.e. pointing the camera up and down (as opposed to moving the whole camera up and down).

Track Roughly synonymous with the dolly shot, but often defined more specifically as movement which stays a constant distance from the action, especially side-to-side movement.

Truck Another term for tracking or dollying.

Zoom Technically this isn't a camera move, but a change in the lens focal length which gives the illusion of moving the camera closer or further away.

Zoom

Without a doubt, zooming is the most used (and therefore most overused) camera movement there is. It is often used as a crutch when the videographer is not sure what else to do to add interest to a shot. If you are going to use zoom, try to use it creatively. Zoom in or out from an unexpected, yet important, object or person in your shot. Use a quick zoom to add energy to a fast-paced piece. Don't get stuck with zoom as your default move!

Pan

Panning is when you move your camera horizontally; either left to right or right to left, while its base is fixated on a certain point. You are not moving the position of the camera itself, just the direction it faces. These types of shots are great for establishing a sense of location within your story.

Tilt

Tilting is when you move the camera vertically, up to down or down to up, while its base is fixated to a certain point. Again, like panning, this move typically involves the use of a tripod where the camera is stationary but you move the angle it points to. These shots are popular when introducing a character, especially one of grandeur, in a movie.

Dolly

A dolly is when you move the entire camera forwards and backwards, typically on some sort of track or motorized vehicle. This type of movement can create beautiful, flowing effects when done correctly. If you want to attempt a dolly, make sure your track is stable and will allow for fluid movement.

Truck

Trucking is the same as dollying, only you are moving the camera from left to right instead of in and out. Again, it is best to do this using a fluid motion track that will eliminate any jerking or friction.

Pedestal

A pedestal is when you move the camera vertically up or down while it is fixated in one location. This term came from the use of studio cameras when the operators would have to adjust the pedestal the camera sat on to compensate for the height of the subject. A pedestal move is easy to do when the camera is fixated to an adjustable tripod.

Rack Focus

Rack focus is not as much of a camera move as it is a technique, but many amateurs overlook this essential skill. You adjust the lens to start an image blurry and then slowly make it crisper, or vice versa. It is an extremely effective way for you to change your audience's focus from one subject to another.

Although camera movements are often implemented to add excitement to shots, their best use is when new information is revealed. At the beginning level, budding filmmakers sometimes tilt and pan without the proper motivation. Camera movements can be distracting and even annoying when overused or used without a reason. Don't use a camera movement to show that you can. Use it when you know you need it.

Pan

During a pan, the camera is aimed sideways along a straight line. Note that the camera itself is not moving. It is often fixed on a tripod, with the operator turning it either left or right. Panning is commonly utilized to capture images of moving objects like cars speeding or people walking, or to show sweeping vistas like an ocean or a cliff.

One of the earliest and best appearances of panning was in Edwin S. Porter's 1903 movie Life of An American Fireman. While the camera follows the fire brigade approaching their destination, the operator pans to reveal it – a house burning. Remember: the best pans are used to reveal information.

A smooth pan will be slow enough to allow the audience to observe the scenery. A fast pan will create blur. If it's too fast, it will be called a swish pan. Newsgathering etiquette demands panning from left to right, so as to allow the viewer to read any text that may be captured on camera, like headlines or marquees.

Tilt

Tilts refer to the up or down movement of the camera while the camera itself does not move. Tilts are often employed to reveal vertical objects like a building or a person.

Dolly

When the entire camera moves forward or backward, this move is called a dolly. If the camera is on a tripod, the tripod will also be moved. Dollies are often used when recording a subject that moves away from or toward the camera, in which case the goal would probably be keeping the subject at the same distance from the camera. For an optimal dolly, the camera should be mounted on a wheeled platform, such as an actual dolly or a shopping cart, depending on the budget. Moving the camera forward is called dollying in. Moving the camera backward is called dollying out.

Track

Tracking is similar to dollying. The main difference is that in dollies the camera is moved toward or away from the subject, whereas in a track shot the camera is moved sideways, parallel to an object.

Pedestal

In a pedestal move, the camera body is physically lowered or elevated. The difference between tilts and pedestals is that in the former the camera is just being aimed up or down, whereas in the latter the camera is being moved vertically.

Zoom

Despite a common misconception, the terms "zoom" and "dolly" are not interchangeable. With dollies, the camera is being moved in physical space. With zooms, the camera remains at a constant position, but the lens magnifies or reduces the size of the subject. Zooms happen at the push of a button. Zoom in refers to seemingly "approaching" the subject, thus making it look bigger in the frame. Zoom out refers to seemingly "distancing" the subject, thus making it look smaller.

Note that zooms change focal length, thus affecting depth of field. Zooming in transforms the lens into a telephoto, while zooming out changes it into a wide-angle. Zooming is considered amateurish and is not preferred by professional cinematographers, but there are many exceptions, like the opening shot of The Conversation (1974), in which a zoom is used to articulate the film's themes of espionage and voyeurism.

Note: Zooms are not really a move, for the camera doesn't change position. But in film studies and filmmaking courses they have traditionally been grouped with real camera moves.

Dolly Counter Zoom

A dolly counter zoom is a rare type of shot of great stylistic effect. To accomplish it, the camera has to dolly (move) closer to or further away from the subject while the zoom is adjusted so the subject's size remains about the same. Notably, Hitchcock's Vertigo (1958), Spielberg's Jaws (1975), and Scorsese's Goodfellas (1990) used the dolly counter zoom to demonstrate a character's uneasiness.
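The geometry behind the effect can be sketched with a little Python: apparent subject size is roughly proportional to focal length divided by subject distance, so the zoom must track the dolly. The starting focal length and distances below are illustrative assumptions, not figures from any of the films mentioned.

# Minimal sketch: focal length that keeps the subject the same size as the camera dollies.

def matching_focal_length(start_focal_mm, start_distance_m, new_distance_m):
    """Focal length that keeps the subject the same apparent size in frame."""
    return start_focal_mm * (new_distance_m / start_distance_m)

start_focal, start_dist = 50.0, 4.0         # 50 mm lens, subject 4 m away
for new_dist in (4.0, 3.0, 2.0):            # dolly in toward the subject
    f = matching_focal_length(start_focal, start_dist, new_dist)
    print(f"distance {new_dist} m -> zoom to ~{f:.0f} mm")
# The subject stays the same size while the background perspective stretches,
# producing the unsettling effect seen in Vertigo and Jaws.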

Here's the scene from Jaws. Notice how the background distorts as the camera zooms in on Brody. Again, this happens because the zoom is countering the camera movement.

Single Camera Techniques

The single-camera setup, or single-camera mode of production, also known as Portable Single Camera, is a method of filmmaking and video production.

A single camera — either a motion picture camera or a professional video camera — is employed on the set, and each shot that makes up a scene is taken independently. An alternative "single camera" method that actually uses two cameras is more widely used. The latter method is intended to save time by using one camera to capture a medium shot of the scene while the other captures a close-up during the same take.

Analysis

As its name suggests, a production using the single-camera setup generally employs just one camera. Each of the various shots and camera angles is taken using the same camera, which is moved and reset to get each shot or new angle. The lighting setup is typically reconfigured for each camera setup.

In contrast, a multiple-camera setup consists of multiple cameras arranged to capture all of the different shots (camera angles) of the scene simultaneously, and the set must be lit to accommodate all camera setups concurrently. Multi-camera production generally results in faster but less versatile photography.

In single-camera production, if a scene cuts back and forth between actor A and actor B, the director will first point the camera towards A and run part or all of the scene from this angle, then move the camera to point at B, relight, and run the scene through again from that angle. Choices can then be made during the post-production editing process about when in the scene to use each shot and when to cut back and forth between the two (or, usually, more than two) angles. This also allows parts of the scene to be removed if it is felt that the scene runs too long.

The single-camera setup gives the director more control over each shot, but is more time consuming and expensive than multiple-camera. The choice of single-camera or multiple-camera setups is made separately from the choice of film or video (that is, either setup can be shot in either film or video). Multiple-camera setups shot on video can be switched "live-to-tape" during the performance, while setups shot on film still require that the various camera angles be edited together later.

The single-camera setup originally developed during the birth of the classical Hollywood cinema in the 1910s and has remained the standard mode of production in the cinema. In television, both single-camera and multiple-camera productions are common.

Single-camera television

Television producers make a distinct decision to shoot in single-camera or multiple-camera mode, unlike film producers, who almost always opt for single-camera shooting. In television, single-camera is mostly reserved for prime-time dramas, made-for-TV movies, music videos and commercial advertisements. Soap operas, talk shows, game shows and the like more frequently use the multiple-camera setup. Multiple-camera shooting is the only way that an ensemble of actors presenting a single performance before a live audience can be recorded from multiple perspectives. In the case of situation comedies, which may potentially be shot in either multiple- or single-camera mode, the single-camera technique may be deemed preferable, especially if specific camera angles and camera movements for a feature-film-like visual style are considered crucial to the success of the production, or if certain elements are to be used frequently. For more standard, dialogue-driven domestic situation comedies, the multi-camera technique, which is cheaper and takes less production time, may be deemed more feasible.

Though multi-camera was the norm for sitcoms during the 1950s (beginning with I Love Lucy), the 1960s saw rising technical standards in situation comedies, which came to have larger casts and to use a greater number of locations per episode. Several comedy series of the era also made use of feature-film techniques. To this end, many comedies of this period, including Leave It to Beaver, Mister Ed, The Andy Griffith Show, My Three Sons, The Addams Family, The Munsters, Bewitched, I Dream of Jeannie, Gilligan's Island, Get Smart, Hogan's Heroes, Family Affair, and The Brady Bunch, used the single-camera technique. Apart from giving the shows a feature-film style, this technique was better suited to the visual effects frequently used in these shows, such as magical appearances and disappearances and lookalike doubles in which the regular actors played a dual role. These effects were created using editing and optical techniques, and would not have been possible had the shows been shot with a multi-camera setup.

In the case of Get Smart, the single-camera technique also allowed the series to present fast-paced and tightly edited fight and action sequences reminiscent of the spy dramas that it parodied. Single-camera comedies remained prevalent into the early 1970s. With its large cast, varied locations, and seriocomic tone, the TV series M*A*S*H was shot in the single-camera style. Happy Days began in 1974 as a single-camera series, before switching to the multi-camera setup in its second season. However, the success of All in the Family (which was taped with multiple cameras live in front of a studio audience, very much like a stage play) and Norman Lear's subsequent productions led to a renewed interest by sitcom producers in the multi-camera technique; by the latter part of the 1970s, most sitcoms again employed the multi-camera format.

By the mid-1970s, with domestic situation comedies in vogue, the multi-camera shooting style for sitcoms came to dominate and would continue to do so through the 1980s and 1990s, although the single-camera format was still seen in television series classified as comedy-drama or "dramedy".

In the 2000s, television saw a resurgence in the use of the single-camera setup in sitcoms, such as Spaced (1999–2001), Malcolm in the Middle (2000–06), Curb Your Enthusiasm (2000–11), The Office (UK) (2001–03), Scrubs (2001–10), Peep Show (2003–), Arrested Development (2003–06, 2013–), Corner Gas (2004–09), Zoey 101 (2005–08), The Office (US) (2005–13), My Name is Earl (2005–09), It's Always Sunny in Philadelphia (2005–), Extras (2005–07), 30 Rock (2006–13), Samantha Who? (2007–09), Community (2009–15), The Middle (2009–), Modern Family (2009–), Parks and Recreation (2009–15), Cougar Town (2009–15), Louie (2010–), Happy Endings (2011–13), New Girl (2011–), The Mindy Project (2012–), Veep (2012–), Brooklyn Nine-Nine (2013–), and About a Boy (2014–15). Unlike single-camera sitcoms of the past, nearly all contemporary comedies shot in this manner are produced without a laugh track.

Multi Camera Techniques

The multiple-camera setup, multiple-camera mode of production, multi-camera or simply multicam is a method of filmmaking and video production. Several cameras (either film or professional video cameras) are employed on the set and simultaneously record or broadcast a scene. It is often contrasted with the single-camera setup, which uses one camera.

Generally, the two outer cameras shoot close-up shots or "crosses" of the two most active characters on the set at any given time, while the central camera or cameras shoot a wider master shot to capture the overall action and establish the geography of the room. In this way, multiple shots are obtained in a single take without having to start and stop the action. This is more efficient for programs that are to be shown a short time after being shot, as it reduces the time spent editing the footage. It is also a virtual necessity for regular, high-output shows like daily soap operas. Apart from saving editing time, scenes may be shot far more quickly, as there is no need for re-lighting and the set-up of alternative camera angles for the scene to be shot again from a different angle. It also reduces the complexity of tracking the continuity issues that crop up when a scene is reshot from different angles. It is an essential part of live television.

Drawbacks include less optimal lighting, which must be a compromise for all camera angles, and less flexibility in placing the necessary equipment on the scene, such as microphone booms and lighting rigs. These can be efficiently hidden from just one camera, but in a multiple-camera setup they can be more complicated to set up and their placement may be inferior. Another drawback is film usage: a four-camera setup will use (depending on the cameras involved) up to four times as much film (or storage space for digital) per take as a single-camera setup.

While shooting, the director and assistant director create a line cut by instructing the technical director (or vision mixer, in UK usage) to switch between the feeds from the individual cameras. In the case of sitcoms with studio audiences, this line cut is typically displayed to them on studio monitors. The line cut may be refined later in editing, since the output from all cameras is usually recorded, both separately and as a combined reference display called the quad split. The camera currently being recorded to the line cut is indicated by a tally light, controlled by a camera control unit (CCU) on the camera, as a reference for both the actors and the camera operators.
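As a toy illustration of what a line cut is, the sketch below (plain Python, with invented timecodes and camera labels) records the technical director's switching decisions as a list of (time, camera) events and answers which feed is "live" at any moment. In a real studio the isolated camera recordings and the quad split let these choices be revised later in editing, as described above.

```python
# Toy model of a multi-camera line cut: the technical director's switch
# decisions, stored as (timecode in seconds, camera) events.
# Times and camera names are invented for illustration only.
line_cut = [
    (0.0,  "CAM 2 (master)"),
    (4.5,  "CAM 1 (close-up, actor A)"),
    (9.0,  "CAM 3 (close-up, actor B)"),
    (13.0, "CAM 2 (master)"),
]

def camera_on_line(t):
    """Return the camera feeding the line cut at time t (seconds)."""
    current = line_cut[0][1]
    for start, cam in line_cut:
        if start <= t:
            current = cam      # the most recent switch wins
        else:
            break
    return current

for t in (2.0, 6.0, 10.0, 14.0):
    print(f"t = {t:>4} s  live: {camera_on_line(t)}")
```

Because every camera's isolated feed is also recorded, an editor can later replace any of these decisions without reshooting the scene.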

History and use

The Honeymooners was filmed using three Electronicams.

The use of multiple film cameras dates back to the development of narrative silent films, with the earliest (or at least earliest known) example being the first Russian feature film, "Defence of Sevastopol" (1911), written and directed by Vasily Goncharov and Aleksandr Khanzhonkov.[1] When sound came into the picture, multiple cameras were used to film multiple sets at a single time. Early sound was recorded onto wax discs that could not be edited.

The use of multiple video cameras to cover a scene goes back to the earliest days of television; three cameras were used to broadcast The Queen's Messenger in 1928, the first drama performed for television.[2] The BBC routinely used multiple cameras for their live television shows from 1936 onward.[3]

Although it is often claimed that the multiple-camera setup was pioneered for television by Desi Arnaz and cinematographer Karl Freund on I Love Lucy in 1951, other filmed television shows had already used it, including the CBS comedy The Amos 'n Andy Show, which was filmed at the Hal Roach Studios and was on the air four months earlier. The technique was developed for television by Hollywood short-subject veteran Jerry Fairbanks, assisted by producer-director Frank Telford, and was first seen on The Silver Theater, another CBS program, in February 1950.[4] Desilu's innovation was to use 35mm film instead of 16mm, and to film with a multiple-camera setup before a live studio audience.

In the late 1970s Garry Marshall was credited with adding the fourth camera (known then as the "X" camera, and occasionally today known as the "D" camera) to the multi-camera set-up for his series Mork & Mindy. Actor Robin Williams could not stay on his marks due to his physically active improvisations during shooting,[citation needed] so Marshall had the crew add the fourth camera just to stay on Williams, so they would have more than just the master shot of the actor.[citation needed] Soon after, many productions followed suit, and having four cameras (A, B, C and X or D) is now the norm for multi-camera situation comedies.[citation needed]

The multiple-camera method gives the director less control over each shot, but is faster and less expensive than a single-camera setup. In television, multiple-camera is commonly used for sports programs, news programs, soap operas, talk shows, game shows, and some sitcoms. Before the pre-filmed continuing series became the dominant dramatic form on American television, the earliest anthology programs (see the Golden Age of Television) utilized multiple-camera methods.[citation needed]

Multiple cameras can take different shots of a live situation as the action unfolds chronologically, which makes the approach suitable for shows that require a live audience. For this reason, multiple-camera productions can be filmed or taped much faster than single-camera ones. Single-camera productions are shot in takes and various setups, with components of the action repeated several times and out of sequence; because the action is not enacted chronologically, it is unsuitable for viewing by a live audience.

Sitcoms shot with the multiple-camera setup include nearly all of Lucille Ball's TV series, as well as Mary Kay and Johnny, The Mary Tyler Moore Show, All in the Family, Three's Company, and The Cosby Show, among others. Many American sitcoms from the 1950s to the 1970s were shot using the single-camera method, including The Adventures of Ozzie and Harriet, Leave It to Beaver, The Addams Family, The Munsters, Get Smart, Bewitched, I Dream of Jeannie, Gilligan's Island, Hogan's Heroes, and The Brady Bunch. These did not have a live studio audience, and by being shot single-camera, tightly edited sequences could be created, along with multiple locations and visual effects such as magical appearances and disappearances. Multiple-camera sitcoms were more simplified, but have been compared to theatre work due to their similar set-up and use of theatre-experienced actors and crew members.

The majority of British sitcoms and dramas from the 1950s to the early 1990s were made using four cameras and were initially broadcast live.[citation needed] Unlike the United States, the development of completed filmed programming, using the single-camera method, was limited for several decades.[citation needed] Instead, a "hybrid" form emerged using so-called outside broadcasts: (single-camera) filmed inserts, generally location work, were mixed with interior scenes shot in the multi-camera electronic studio. This was the most common type of domestic production screened by the BBC and ITV. However, as technology developed, some drama productions were mounted on location using multiple electronic cameras. Meanwhile, the most prestigious productions, like Brideshead Revisited (1981), began to use film alone. By the late 1990s, soap operas were left as the only TV drama being made in the UK using multiple cameras.[citation needed] Television prime-time dramas are usually shot using a single-camera setup.

While the multiple-camera format dominated US sitcom production in the 1970s and 1980s,[citation needed] there has been a recent revival of the single-camera format with programs such as Malcolm in the Middle (2000–2006), Scrubs (2001–2010), Entourage (2004–2011), My Name Is Earl (2005–2009), Everybody Hates Chris (2005–2009), It's Always Sunny in Philadelphia (2005–present), 30 Rock (2006–2013), The Bad Girls Club (2006–present), Glee (2009–2015), Modern Family (2009–present), Community (2009–2015), Parks and Recreation (2009–2015), Louie (2010–present) and Every Witch Way (2014–present).

Most films also use the single-camera setup.[citation needed] In recent decades, larger Hollywood films have begun to use more than one camera on set, usually with two cameras simultaneously filming the same setup; however, this is not a true multi-camera setup in the television sense. Sometimes feature films will run multiple cameras, perhaps four or five, for large, expensive and difficult-to-repeat special effects shots, such as large explosions. Again, this is not a true multi-camera setup in the television sense, as the resultant footage will not always be arranged sequentially in editing, and multiple shots of the same explosion may be repeated in the final film, either for artistic effect or because the different shots, taken from different angles, can appear to the audience to be different explosions.

Single and Multi-Camera Production

Single camera production explanation

The single camera production method is sometimes used by film producers when filming a single scene, a whole TV programme, or a movie. In single camera production only one camera is used throughout the shoot: each shot and angle for every scene is filmed out of order, and once shooting is completed the director, producers and editors assemble all the scenes in order during the post-production (editing) process.

Examples of formats of production that typically make use of single camera editing techniques

Film producers use a wide range of filming conventions with the single camera production method. They often begin with the cover shot (also referred to as the master shot), which is essentially a wide shot showing the whole scene or the area of the scene. Producers use the cover/master shot for four main purposes: to establish the geography of each scene (re-establishing shots) for the audience; to cover the action continuously so that it can be used in post-production when a good medium shot or close-up is not available; to show changes in the basic elements of the scene; and to cover the actors' entrances and exits. Working from the cover/master shot, producers and directors plan to film a scene from beginning to end; once that shot is filmed, the camera is repositioned for medium shots and close-ups of particular actors, who must repeat their dialogue each time the camera changes position, distance and angle. Because the camera is repositioned for each take, the setup of the studio or location must also change: lighting positions, microphone positions, and details such as the actors' make-up and the condition of their costumes and props all have to be matched so the scene looks professional, realistic and dramatic to the audience without revealing any mistakes. For instance, when filming a shoot-out, characters' clothes must stay consistently torn and blood-stained from take to take. Some directors and producers, however, plan to film their scenes in the opposite sequence, beginning with the close-ups and medium shots and concluding with the cover/master shot.

Advantages and disadvantages of single camera production

The advantage of single camera production is flexibility: with only one camera to transport to a location and position on set, producers can create a programme of good quality on a very low budget, using a small kit consisting of a tripod, lavalier microphones and lighting gear, with a camera operator who is often the producer. Used well, these tactics save time and money, as the approach is considerably cheaper than multi-camera production. There is, however, quite a long list of disadvantages. With only one camera on set throughout the shoot, the camera has to be repositioned for each take to do the work of several cameras and to get the correct shot, angle and distance, which is time-consuming and costly, especially if the camera operator misses a great moment; such a mistake can disrupt the post-production process and ultimately lead to negative ratings from the audience watching the programme. Post-production can be affected in another way: single camera production can quickly drain the filming budget if the shoot was badly planned, because production companies then spend most of their money and time editing a programme filmed on one camera. The final disadvantage for the company is the risk of recruiting camera operators with few skills or little experience of filming with only one camera, who may make mistakes such as rough pans and tilts, over-use of the zoom, or failure to focus on the subject properly; the producers and directors are then left with poor footage, which harms the post-production process because the problems cannot be fixed in editing.

Comparisons between single camera and multi camera techniques

There are a number of comparisons to draw between single camera and multi camera production, and producers and directors weigh the advantages and disadvantages of both before deciding which is preferable for their programme or film. A single camera production is easy to transport and position because of the single camera and the small filming kit, which makes the process cheap to run; a multi-camera production, by contrast, can be difficult to transport and position because of the large filming kit and the additional cameras, and it is expensive to operate. A multi-camera production is much quicker and easier to film, because the group of cameras captures a scene all at once, which saves time and money during the shoot and means fewer takes for the crew and actors; with single camera production, filming even one scene can take much longer, as the crew must reposition the camera for each take in order to get exactly the right angle, distance and shot, which can be extremely time-consuming and costly.

Special Effects

The illusions or tricks of the eye used in the film, television, theatre, video game, and simulator industries to simulate the imagined events in a story or virtual world are traditionally called special effects (often abbreviated as SFX, SPFX, or simply FX).

Special effects are traditionally divided into the categories of optical effects and mechanical effects. With the emergence of digital filmmaking, a distinction between special effects and visual effects has grown, with the latter referring to digital post-production while "special effects" refers to mechanical and optical effects.

Mechanical effects (also called practical or physical effects) are usually accomplished during the live-action shooting. This includes the use of mechanized props, scenery, scale models, animatronics, pyrotechnics and atmospheric effects: creating physical wind, rain, fog, snow, clouds, and so on. Making a car appear to drive by itself and blowing up a building are examples of mechanical effects. Mechanical effects are often incorporated into set design and makeup. For example, a set may be built with break-away doors or walls to enhance a fight scene, or prosthetic makeup can be used to make an actor look like a non-human creature.

Optical effects (also called photographic effects) are techniques in which images or film frames are created photographically, either "in-camera" using multiple exposure, mattes, or the Schüfftan process, or in post-production using an optical printer. An optical effect might be used to place actors or sets against a different background.

Since the 1990s, computer-generated imagery (CGI) has come to the forefront of special effects technologies. It gives filmmakers greater control, and allows many effects to be accomplished more safely and convincingly and, as technology improves, at lower costs. As a result, many optical and mechanical effects techniques have been superseded by CGI.

Developmental history

Early development

In 1856, Oscar Rejlander created the world's first "trick photograph" by combining different sections of 30 negatives into a single image. In 1895, Alfred Clark created what is commonly accepted as the first-ever motion picture special effect. While filming a reenactment of the beheading of Mary, Queen of Scots, Clark instructed an actor to step up to the block in Mary's costume. As the executioner brought the axe above his head, Clark stopped the camera, had all of the actors freeze, and had the person playing Mary step off the set. He placed a dummy of Mary in the actor's place, restarted filming, and allowed the executioner to bring the axe down, severing the dummy's head. "Such… techniques would remain at the heart of special effects production for the next century."[1]

This was not only the first use of trickery in the cinema, it was also the first type of photographic trickery possible only in a motion picture: the "stop trick".

In 1896, French magician Georges Méliès accidentally discovered the same "stop trick." According to Méliès, his camera jammed while filming a street scene in Paris. When he screened the film, he found that the "stop trick" had caused a truck to turn into a hearse, pedestrians to change direction, and men to turn into women. Méliès, the stage manager at the Theatre Robert-Houdin, was inspired to develop a series of more than 500 short films between 1896 and 1913, in the process developing or inventing such techniques as multiple exposures, time-lapse photography, dissolves, and hand-painted colour. Because of his ability to seemingly manipulate and transform reality with the cinematograph, the prolific Méliès is sometimes referred to as the "Cinemagician." His most famous film, Le Voyage dans la lune (1902), a whimsical parody of Jules Verne's From the Earth to the Moon, featured a combination of live action and animation, and also incorporated extensive miniature and matte-painting work.

From 1910 to 1920, the main innovations in special effects were Norman Dawn's improvements to the matte shot. With the original matte shot, pieces of cardboard were placed to block the exposure of part of the film, which would be exposed later. Dawn combined this technique with the "glass shot": rather than using cardboard to block certain areas of the film exposure, Dawn simply painted certain areas black to prevent any light from exposing the film. From the partially exposed film, a single frame is then projected onto an easel, where the matte is drawn. By creating the matte from an image taken directly from the film, it became incredibly easy to paint an image with proper respect to scale and perspective (the main flaw of the glass shot). Dawn's technique became the textbook method for matte shots because of the natural images it created. (Baker, 101–4)

During the 1920s and 1930s, special effects techniques were improved and refined by the motion picture industry. Many techniques, such as the Schüfftan process, were modifications of illusions from the theatre (such as Pepper's ghost) and from still photography (such as double exposure and matte compositing). Rear projection was a refinement of the use of painted backgrounds in the theatre, substituting moving pictures to create moving backgrounds. Lifecasting of faces was imported from traditional maskmaking. Along with advances in makeup, fantastic masks could be created which fit the actor perfectly. As materials science advanced, horror film maskmaking followed closely.

Several techniques wholly original to motion pictures, such as the "stop trick", soon developed. Animation, creating the illusion of motion, was accomplished with drawings (most notably by Winsor McCay in Gertie the Dinosaur) and with three-dimensional models (most notably by Willis O'Brien in The Lost World and King Kong). Many studios established in-house "special effects" departments, which were responsible for nearly all optical and mechanical aspects of motion-picture trickery.

Also, the challenge of simulating spectacle in motion encouraged the development of the use of miniatures. Naval battles could be depicted with models in studio. Tanks and airplanes could be flown (and crashed) without risk of life and limb. Most impressively, miniatures and matte paintings could be used to depict worlds that never existed. Fritz Lang's film Metropolis was an early special effects spectacular, with innovative use of miniatures, matte paintings, the Schüfftan process, and complex compositing.

An important innovation in special-effects photography was the development of the optical printer. Essentially, an optical printer is a projector aiming into a camera lens, and it was developed to make copies of films for distribution. Until Linwood G. Dunn refined the design and use of the optical printer, effects shots were accomplished as in-camera effects. Dunn demonstrated that it could be used to combine images in novel ways and create new illusions. One early showcase for Dunn was Orson Welles' Citizen Kane, where such locations as Xanadu (and some of Gregg Toland's famous 'deep focus' shots) were essentially created by Dunn's optical printer.

Color Era

The development of color photography required greater refinement of effects techniques. Color enabled the development of such travelling matte techniques as bluescreen and the sodium vapour process. Many films became landmarks in special-effects accomplishments: Forbidden Planet used matte paintings, animation, and miniature work to create spectacular alien environments. In The Ten Commandments, Paramount's John P. Fulton, A.S.C., multiplied the crowds of extras in the Exodus scenes with careful compositing, depicted the massive constructions of Rameses with models, and split the Red Sea in a still-impressive combination of travelling mattes and water tanks. Ray Harryhausen extended the art of stop-motion animation with his special techniques of compositing to create spectacular fantasy adventures such as Jason and the Argonauts (whose climax, a sword battle with seven animated skeletons, is considered a landmark in special effects).

The science fiction boom

Through the 1950s and 60s numerous new special effects were developed which would dramatically increase the level of realism achievable in science fiction films. The pioneering work of directors such as Pavel Klushantsev would be used by major motion pictures for decades to come.[citation needed]

If one film could be said to have established a new high benchmark for special effects, it would be 1968's 2001: A Space Odyssey, directed by Stanley Kubrick, who assembled his own effects team (Douglas Trumbull, Tom Howard, Con Pedersen and Wally Veevers) rather than use an in-house effects unit. In this film, the spaceship miniatures were highly detailed and carefully photographed for a realistic depth of field. The shots of spaceships were combined through hand-drawn rotoscopes and careful motion-control work, ensuring that the elements were precisely combined in the camera – a surprising throwback to the silent era, but with spectacular results. Backgrounds of the African vistas in the "Dawn of Man" sequence were combined with soundstage photography via the then-new front projection technique. Scenes set in zero-gravity environments were staged with hidden wires, mirror shots, and large-scale rotating sets. The finale, a voyage through hallucinogenic scenery, was created by Douglas Trumbull using a new technique termed slit-scan.

The 1970s brought two profound changes in the special effects trade. The first was economic: during the industry's recession in the late 1960s and early 1970s, many studios closed down their in-house effects departments. Many technicians became freelancers or founded their own effects companies, sometimes specializing in particular techniques (opticals, animation, etc.).

The second was precipitated by the blockbuster success of two science fiction and fantasy films in 1977. George Lucas's Star Wars ushered in an era of science-fiction films with expensive and impressive special effects. Effects supervisor John Dykstra, A.S.C., and his crew developed many improvements to existing effects technology. They developed a computer-controlled camera rig called the "Dykstraflex" that allowed precise repeatability of camera motion, greatly facilitating travelling-matte compositing. Degradation of film images during compositing was minimized by other innovations: the Dykstraflex used VistaVision cameras that photographed widescreen images horizontally along the filmstock, using far more of the film per frame, and thinner-emulsion filmstocks were used in the compositing process. The effects crew assembled by Lucas and Dykstra was dubbed Industrial Light & Magic, and since 1977 it has spearheaded most effects innovations.
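A hedged sketch of the idea behind a motion-control rig of the kind described above: the camera move is stored as data and replayed identically for each photographic pass (for example, once for a model and once for a matte element), so the separately filmed pieces line up when composited. The frame numbers, axis values and element names below are invented for illustration, not taken from any real rig.

```python
# Minimal sketch of motion-control repeatability: the same recorded move
# is replayed for every pass, so separately photographed elements share
# identical camera motion and can be composited precisely.
recorded_move = [
    # (frame, pan_deg, tilt_deg, dolly_m) -- illustrative values only
    (1, 0.0, 0.0, 0.00),
    (2, 0.5, 0.1, 0.02),
    (3, 1.0, 0.2, 0.04),
    (4, 1.5, 0.3, 0.06),
]

def shoot_pass(element_name, move):
    """Replay the stored move while photographing one element."""
    return [(element_name, frame, pan, tilt, dolly)
            for frame, pan, tilt, dolly in move]

model_pass = shoot_pass("spaceship model", recorded_move)
matte_pass = shoot_pass("star-field matte", recorded_move)

# Because both passes used the same move data, frame N of each pass has
# exactly the same camera position, so the elements register when combined.
assert [f[1:] for f in model_pass] == [f[1:] for f in matte_pass]
print("both passes share identical camera motion")
```

The point of the sketch is simply that storing the move as data is what makes repeatability possible; everything about how a real rig drives its motors is outside its scope.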

That same year, Steven Spielberg's film Close Encounters of the Third Kind boasted a finale with impressive special effects by 2001 veteran Douglas Trumbull. In addition to developing his own motion-control system, Trumbull also developed techniques for creating intentional "lens flare" (the shapes created by light reflecting in camera lenses) to provide the film's undefinable shapes of flying saucers.

The success of these films, and others since, has prompted massive studio investment in effects- heavy science-fiction films. This has fueled the establishment of many independent effects houses, a tremendous degree of refinement of existing techniques, and the development of new techniques such as CGI. It has also encouraged within the industry a greater distinction between special effects and visual effects; the latter is used to characterize post-production and optical work, while special effects refers more often to on-set and mechanical effects.

Introduction of computer generated imagery (CGI)

A recent and profound innovation in special effects has been the development of computer generated imagery, or CGI, which has changed nearly every aspect of motion picture special effects. Digital compositing allows far more control and creative freedom than optical compositing, and does not degrade the image like analog (optical) processes. Digital imagery has enabled technicians to create detailed models, matte "paintings," and even fully realized characters with the malleability of computer software. Arguably the biggest and most "spectacular" use of CGI is in the creation of photo-realistic images of science-fiction and fantasy characters, settings, and objects. Images can be created in a computer using the techniques of animated cartoons and model animation. In 1993, stop-motion animators working on the realistic dinosaurs of Steven Spielberg's Jurassic Park were retrained in the use of computer input devices. By 1995, films such as Toy Story underscored that the distinction between live-action films and animated films was no longer clear. Other landmark examples include a character made up of broken pieces of a stained-glass window in Young Sherlock Holmes, a shapeshifting character in Willow, a tentacle of water in The Abyss, the T-1000 Terminator in Terminator 2: Judgment Day, hordes of armies of robots and fantastic creatures in the Star Wars prequel trilogy and The Lord of the Rings trilogy, and the planet Pandora in Avatar.

Planning and use

Although most special effects work is completed during post-production, it must be carefully planned and choreographed in pre-production and production. A visual effects supervisor is usually involved with the production from an early stage, working closely with the director and all related personnel to achieve the desired effects.

Live special effects

Live special effects are effects that are used in front of a live audience, mostly during sporting events, concerts and corporate shows. Types of effects that are commonly used include: flying effects, lighting, theatrical smoke and fog, CO2 effects, pyrotechnics, confetti and other atmospheric effects such as bubbles and snow.

Visual special effects techniques

• Bullet time
• Computer-generated imagery (often using shaders)
• Digital compositing
• Dolly zoom
• In-camera effects
• Match moving
• Matte (filmmaking) and matte painting
• Miniature effects
• Morphing
• Motion control photography
• Optical effects
• Optical printing
• Practical effects
• Prosthetic makeup effects
• Rotoscoping
• Stop motion
• Go motion
• Schüfftan process
• Travelling matte
• Virtual cinematography
• Wire removal

Notable special effects companies

• Adobe Systems Incorporated (San Jose, CA)
• Animal Logic (Sydney, AU and Venice, CA)
• Bird Studios (London, UK)
• BUF Compagnie (Paris, FR)
• CA Scanline (München, DE)
• Cinesite (London/Hollywood)
• Creature Effects, Inc. (LA, CA, US)
• Double Negative (VFX) (London, UK)
• DreamWorks (LA, CA, US)
• Flash Film Works (LA, CA, US)
• Giantsteps (Venice, CA)
• Hydraulx (Santa Monica, LA, US)
• Industrial Light & Magic, founded by George Lucas
• Intelligent Creatures (Toronto, ON, CA)
• Intrigue FX (Canada)
• Legacy Effects (CA, US)
• Look Effects (Culver City, CA, USA)
• M5 Industries (San Francisco, i.e. Mythbusters)
• Machine Shop (London, UK)
• Makuta VFX (Universal City, CA)
• Matte World Digital (Novato, CA)
• The Mill (London, UK; NY and LA, US)
• Modus FX (QC, CA)
• Pixomondo (Frankfurt, DE; Munich, DE; Stuttgart, DE; Los Angeles, CA, USA; Beijing, CH; Toronto, ON, CA; Baton Rouge, LA, USA)
• Rhythm and Hues Studios (LA, CA, US)
• Rising Sun Pictures (Adelaide, AU)
• Snowmasters (Lexington, AL, USA)
• Imageworks (Culver City, CA, USA)
• Strictly FX, live special effects company
• Surreal World (Melbourne, AU)
• Super FX, special effects company, Italy
• Tsuburaya Productions (Hachimanyama, Setagaya, Tokyo, Japan)
• Vision Crew Unlimited
• Zoic Studios (Culver City, CA, USA)
• ZFX Inc., a flying effects company
• Method Studios

UNIT-3 History & Development of Printing

The history of printing goes back to the duplication of images by means of stamps in very early times. The use of round seals for rolling an impression into clay tablets goes back to early Mesopotamian civilization before 3000 BCE, where they are the most common works of art to survive, and feature complex and beautiful images. In both China and Egypt, the use of small stamps for seals preceded the use of larger blocks. In China, India and Europe, the printing of cloth certainly preceded the printing of paper or papyrus. The process is essentially the same: in Europe, special presentation impressions of prints were often printed on silk until the seventeenth century. The development of printing has made it possible for books, newspapers, magazines, and other reading materials to be produced in great numbers, and it plays an important role in promoting literacy among the masses.

Woodblock printing (200)

Woodblock printing

Yuan Dynasty woodblock edition of a Chinese play

Block printing is a technique for printing text, images or patterns that was used widely throughout East Asia, both as a method of printing on textiles and later, under the influence of Buddhism, on paper. As a method of printing on cloth, the earliest surviving examples from China date to about 220. Ukiyo-e is the best known type of Japanese woodblock art print. Most European uses of the technique on paper are covered by the art term woodcut, except for the block-books produced mainly in the fifteenth century.[1]

In China

History of printing in East Asia

The world's earliest surviving printed fragments are from China and are of silk printed with flowers in three colours, dating from the Han Dynasty (before AD 220).[2][page needed] The technology of printing on cloth in China was adapted to paper under the influence of Buddhism, which mandated the circulation of standard translations over a wide area, as well as the production of multiple copies of key texts for religious reasons. It reached Europe, via the Islamic world, and by around 1400 was being used on paper for old master prints and playing cards.[3][page needed][4][page needed] The oldest surviving wood-block printed book is the Diamond Sutra. It carries a date of 'the 13th day of the fourth moon of the ninth year of the Xiantong era' (i.e. 11 May 868).[5] A number of printed dhāraṇī-s, however, predate the Diamond Sūtra by about two hundred years.

In India

In Buddhism, great merit is thought to accrue from copying and preserving texts. A fourth-century master listed the copying of scripture as the first of ten essential religious practices. The importance of perpetuating texts is set out with special force in the longer Sukhāvatīvyūha Sūtra, which not only urges the devout to hear, learn, remember and study the text but to obtain a good copy and to preserve it. This 'cult of the book' led to techniques for reproducing texts in great numbers, especially the short prayers or charms known as dhāraṇī-s. Stamps were carved for printing these prayers on clay tablets from at least the seventh century, the date of the oldest surviving examples.[6] Especially popular was the Pratītyasamutpāda Gāthā, a short verse text summing up Nāgārjuna's philosophy of causal genesis or dependent origination. Nāgārjuna lived in the early centuries of the current era, and the Buddhist Creed, as the Gāthā is frequently called, was printed on clay tablets in huge numbers from the sixth century. This tradition was transmitted to China and Tibet with Buddhism. Printing text from woodblocks does not, however, seem to have been developed in India.

In Europe

Block printing was practised in Christian Europe as a method for printing on cloth, where it was common by 1300. Images printed on cloth for religious purposes could be quite large and elaborate, and when paper became relatively easily available, around 1400, the medium transferred very quickly to small woodcut religious images and playing cards printed on paper. These prints were produced in very large numbers from about 1425 onwards.[7][page needed]

Around the middle of the century, block-books (woodcut books with both text and images, usually carved in the same block) emerged as a cheaper alternative to manuscripts and books printed with movable type. These were all short, heavily illustrated works, the bestsellers of the day, repeated in many different block-book versions: the Ars moriendi and the Biblia pauperum were the most common. There is still some controversy among scholars as to whether their introduction preceded or, in the majority view, followed the introduction of movable type, with the range of estimated dates being between about 1440 and 1460.[8]

In Australia

In Australia, printing was introduced with the arrival of the First Fleet in 1788. The colony had a hand press, but it was not operational at first since no one knew how to print. In 1795, the convict George Hughes[9] taught himself how to operate the press and worked as a government printer for a while. The first newspaper was the Sydney Gazette, first issued on March 5, 1803.[10]

Stencil

Stencils may have been used to colour cloth for a very long time; the technique probably reached its peak of sophistication in katazome and other techniques used on cloth for clothes during the Edo period in Japan. In Europe, from about 1450 they were very commonly used to colour old master prints printed in black and white, usually woodcuts. This was especially the case with playing cards, which continued to be coloured by stencil long after most other subjects for prints were left in black and white. Stenciling as far back as the 27th century BCE was different: colours extracted from plants and flowers, such as indigo (which yields blue), were used. Stencils were used for mass publications, as the type did not have to be hand-written.

Movable type (1040)

Movable type

Movable type is the system of printing and typography using movable pieces of metal type, made by casting from matrices struck by letterpunches.

Around 1040, the world's first known movable type system was created in China by Bi Sheng out of porcelain. He also developed wooden movable type, but it was abandoned in favour of clay movable type because of the presence of wood grains and the unevenness of the wooden type after being soaked in ink.[12][13] Neither movable type system was widely used, one reason being the enormous Chinese character set. Metal movable type began to be used in Korea during the Goryeo Dynasty (around 1230).[14] Jikji, printed during the Goryeo Dynasty in 1377, is the world's oldest extant book printed with movable metal type.[15] This form of metal movable type was described by the French scholar Henri-Jean Martin as "extremely similar to Gutenberg's".[16] East Asian metal movable type may have spread to Europe between the late 14th century and the early 15th century.[17][18][19][20][21]

It is traditionally said that Johannes Gutenberg, of the German city of Mainz, developed European movable type printing technology together with the printing press around 1439,[17] and that in just over a decade the European age of printing began. However, the details show a more complex evolutionary process spread over multiple locations.[22] Also, Johann Fust and Peter Schöffer experimented with Gutenberg in Mainz.

Compared to woodblock printing, movable type page-setting was quicker and more durable. The metal type pieces were sturdier and the lettering more uniform, leading to typography and fonts. The high quality and relatively low price of the Gutenberg Bible (1455) established the superiority of movable type, and printing presses rapidly spread across Europe, leading up to the Renaissance, and later all around the world. Today, practically all movable type printing ultimately derives from Gutenberg's movable type printing, which is often regarded as the most important invention of the second millennium.[23]

Gutenberg is also credited with the introduction of an oil-based ink, which was more durable than the previously used water-based inks. Having worked as a professional goldsmith, Gutenberg made skillful use of the knowledge of metals he had learned as a craftsman. He was also the first to make his type from an alloy of lead, tin, and antimony, known as type metal, printer's lead, or printer's metal, which was critical for producing durable type capable of high-quality printed books, and which proved to be more suitable for printing than the clay, wooden or bronze types used in East Asia. To create these lead types, Gutenberg used what some consider his most ingenious invention: a special matrix enabling the moulding of new movable type with unprecedented precision at short notice. Within a year of printing the Gutenberg Bible, Gutenberg also published the first coloured prints.

The invention of the printing press revolutionized communication and book production, leading to the spread of knowledge. Printing spread rapidly from Germany, carried by emigrating German printers and by foreign apprentices returning home. A printing press was built in Venice in 1469, and by 1500 the city had 417 printers. In 1470 Johann Heynlin set up a printing press in Paris. In 1473 Kasper Straube published the Almanach cracoviense ad annum 1474 in Kraków. Dirk Martens set up a printing press in Aalst (Flanders) in 1473; he printed a book about the two lovers of Enea Piccolomini, who became Pope Pius II. In 1476 a printing press was set up in England by William Caxton. The Italian Juan Pablos set up an imported press in Mexico City in 1539. The first printing press in Southeast Asia was set up in the Philippines by the Spanish in 1593. The Rev. Jose Glover intended to bring the first printing press to England's American colonies in 1638, but died on the voyage, so his widow, Elizabeth Harris Glover, established the printing house, which was run by Stephen Day and became The Cambridge Press.[24]

The Gutenberg press was much more efficient than manual copying, and it remained largely unchanged in the eras of John Baskerville and Giambattista Bodoni, over 300 years later.[25] By 1800, Lord Stanhope had constructed a press completely from cast iron, reducing the force required by 90% while doubling the size of the printed area.[25] While Stanhope's "mechanical theory" had improved the efficiency of the press, it was still capable of only about 250 sheets per hour.[25] The German printer Friedrich Koenig would be the first to design a non-manpowered machine, using steam.[25] Having moved to London in 1804, Koenig met Thomas Bensley and secured financial support for his project in 1807.[25] With a patent in 1810, Koenig designed a steam press "much like a hand press connected to a steam engine."[25] The first production trial of this model occurred in April 1811.

Flat-bed printing press

Printing press and Spread of the printing press

A printing press is a mechanical device for applying pressure to an inked surface resting upon a medium (such as paper or cloth), thereby transferring an image. The systems involved were first assembled in Germany by the goldsmith Johannes Gutenberg in the mid-15th century.[17] Printing methods based on Gutenberg's printing press spread rapidly throughout first Europe and then the rest of the world, replacing most block printing and making the press the sole progenitor of modern movable type printing. As a method of creating reproductions for mass consumption, the printing press has since been superseded by the advent of offset printing.

Johannes Gutenberg's work on the printing press began in approximately 1436, when he partnered with Andreas Dritzehen (a man he had previously instructed in gem-cutting) and Andreas Heilmann, owner of a paper mill.[17] It was not until a 1439 lawsuit against Gutenberg that an official record exists; the witnesses' testimony discussed type, an inventory of metals (including lead) and his type mold.[17]

Others in Europe were developing movable type at this time, including the goldsmith Procopius Waldfoghel of France and Laurens Janszoon Coster of the Netherlands.[17] They are not known to have contributed specific advances to the printing press.[17] While the Encyclopædia Britannica Eleventh Edition had attributed the invention of the printing press to Coster, the company now states that this is incorrect.[26]

In this woodblock from 1568, the printer at left is removing a page from the press while the one at right inks the text-blocks.

Printing houses

Early printing houses (near the time of Gutenberg) were run by "master printers." These printers owned shops, selected and edited manuscripts, determined the sizes of print runs, sold the works they produced, raised capital and organized distribution. Some master printing houses, like that of Aldus Manutius, became the cultural center for literati such as Erasmus.

• Print shop apprentices: Apprentices, usually between the ages of 15 and 20, worked for master printers. Apprentices were not required to be literate, and literacy rates at the time were very low in comparison to today. Apprentices prepared ink, dampened sheets of paper, and assisted at the press. An apprentice who wished to become a compositor had to learn Latin and spend time under the supervision of a journeyman.
• Journeyman printers: After completing their apprenticeships, journeyman printers were free to move between employers. This facilitated the spread of printing to areas that were less print-centred.
• Compositors: Those who set the type for printing.
• Pressmen: Those who worked the press. This was physically labour-intensive work.

The earliest-known image of a European, Gutenberg-style print shop is the Dance of Death by Matthias Huss, at Lyon, 1499. This image depicts a compositor standing at a compositor's case being grabbed by a skeleton. The case is raised to facilitate his work. At the right of the printing house a bookshop is shown.

Financial aspects

Court records from the city of Mainz document that Johannes Fust was, for some time, Gutenberg's financial backer.

By the sixteenth century, jobs associated with printing were becoming increasingly specialized. The structures supporting publishers grew more and more complex, leading to this division of labour. In Europe between 1500 and 1700, the role of the master printer was dying out, giving way to the bookseller-publisher. Printing during this period had a stronger commercial imperative than previously. The risks associated with the industry were nonetheless substantial, although they depended on the nature of the publication. Bookseller-publishers negotiated at trade fairs and at print shops. Jobbing work appeared, in which printers did menial tasks at the beginning of their careers to support themselves.

1500–1700: Publishers developed several new methods of funding projects.

1. Cooperative associations/publication syndicates: a number of individuals shared the risks associated with printing and shared in the profit. This was pioneered by the French.[citation needed]
2. Subscription publishing: pioneered by the English in the early 17th century.[citation needed] A prospectus for a publication was drawn up by a publisher to raise funding. The prospectus was given to potential buyers, who signed up for a copy. If there were not enough subscriptions, the publication did not go ahead. Lists of subscribers were included in the books as endorsements. If enough people subscribed, a reprint might occur. Some authors used subscription publication to bypass the publisher entirely.
3. Installment publishing: books were issued in parts until a complete book had been issued. This was not necessarily done within a fixed time period. It was an effective method of spreading cost over a period of time, and it also allowed earlier returns on investment to help cover the production costs of subsequent installments.

The Mechanick Exercises, by Joseph Moxon, in London, 1683, was said to be the first publication done in installments.[citation needed]

Publishing trade organizations allowed publishers to organize business concerns collectively. Systems of self-regulation occurred in these arrangements. For example, if one publisher did something to irritate other publishers he would be controlled by peer pressure. Such systems are known as cartels, and are in most countries now considered to be in restraint of trade. These arrangements helped deal with labour unrest among journeymen, who faced difficult working conditions. Brotherhoods predated unions, without the formal regulations now associated with unions.

In most cases, publishers bought the copyright in a work from the author and made some arrangement about the possible profits. This required a substantial amount of capital, in addition to the capital needed for the physical equipment and staff. Alternatively, an author who had sufficient money would sometimes keep the copyright himself and simply pay the printer for the production of the book.

Rotary printing press

Rotary printing press

A rotary printing press is a printing press in which the impressions are curved around a cylinder so that the printing can be done on long continuous rolls of paper, cardboard, plastic, or a large number of other substrates. Rotary drum printing was invented by Richard March Hoe in 1843, patented in 1847, and then significantly improved by William Bullock in 1863.

Intaglio

Intaglio (printmaking)

Intaglio /ɪnˈtæli.oʊ/ is a family of printmaking techniques in which the image is incised into a surface, known as the matrix or plate. Normally, copper or zinc plates are used as the surface, and the incisions are created by etching, engraving, drypoint, aquatint or mezzotint. Collographs may also be printed as intaglio plates. To print an intaglio plate, the surface is covered in thick ink and then rubbed with tarlatan cloth to remove most of the excess. The final smooth wipe is usually done by hand, sometimes with the aid of newspaper or old public phone book pages, leaving ink only in the incisions. A damp piece of paper is placed on top, and the plate and paper are run through a printing press that, through pressure, transfers the ink from the recesses of the plate to the paper.

Lithography (1796)

Lithography

Invented by the Bavarian author Aloys Senefelder in 1796,[27] lithography is a method for printing on a smooth surface. It is a printing process that uses chemical processes to create an image. For instance, the positive part of an image would be a hydrophobic chemical, while the negative image would be water. Thus, when the plate is introduced to a compatible ink and water mixture, the ink adheres to the positive image and the water cleans the negative image. This allows for a print plate that supports much longer runs than the older physical methods of imaging (e.g., embossing or engraving). High-volume lithography is used today to produce posters, maps, books, newspapers, and packaging: just about any smooth, mass-produced item with print and graphics on it. Most books, indeed all types of high-volume text, are now printed using offset lithography.

In offset lithography, which depends on photographic processes, flexible aluminum, polyester, mylar or paper printing plates are used in place of stone tablets. Modern printing plates have a brushed or roughened texture and are covered with a photosensitive emulsion. A photographic negative of the desired image is placed in contact with the emulsion and the plate is exposed to ultraviolet light. After development, the emulsion shows a reverse of the negative image, which is thus a duplicate of the original (positive) image. The image on the plate emulsion can also be created through direct laser imaging in a computer-to-plate (CTP) device called a platesetter. The positive image is the emulsion that remains after imaging. For many years, chemicals were used to remove the non-image emulsion, but plates are now available that do not require chemical processing.

Color printing

Chromolithography became the most successful of several methods of colour printing developed by the 19th century; other methods were developed by printers such as Jacob Christoph Le Blon, George Baxter and Edmund Evans, and mostly relied on using several woodblocks with the colors. Hand-coloring also remained important; elements of the official British Ordnance Survey maps were colored by hand by boys until 1875. Chromolithography developed from lithography and the term covers various types of lithography that are printed in color.[28] The initial technique involved the use of multiple lithographic stones, one for each color, and was still extremely expensive when done for the best quality results. Depending on the number of colors present, a chromolithograph could take months to produce, by very skilled workers. However, much cheaper prints could be produced by simplifying both the number of colors used and the refinement of the detail in the image. Cheaper images, like the advertisement illustrated, relied heavily on an initial black print (not always a lithograph), on which colors were then overprinted. To make an expensive reproduction print, once referred to as a "chromo", a lithographer with a finished painting in front of him gradually created and corrected the many stones, using proofs to look as much as possible like the painting, sometimes using dozens of layers.[29]

Aloys Senefelder, the inventor of lithography, introduced the subject of colored lithography in his 1818 Vollstaendiges Lehrbuch der Steindruckerey (A Complete Course of Lithography), where he told of his plans to print using color and explained the colors he wished to be able to print someday.[30] Although Senefelder recorded plans for chromolithography, printers in other countries, such as France and England, were also trying to find a new way to print in color. Godefroy Engelmann of Mulhouse in France was awarded a patent on chromolithography in July 1837,[30] but there are disputes over whether chromolithography was already in use before this date, as some sources say, pointing to areas of printing such as the production of playing cards.[30] Offset press (1870s)

Offset press

Offset printing is a widely used printing technique where the inked image is transferred (or "offset") from a plate to a rubber blanket, then to the printing surface. When used in combination with the lithographic process, which is based on the repulsion of oil and water, the offset technique employs a flat (planographic) image carrier on which the image to be printed obtains ink from ink rollers, while the non-printing area attracts a film of water, keeping the non-printing areas ink-free. Screenprinting (1907)

Screenprinting

Screenprinting has its origins in simple stencilling, most notably the Japanese form (katazome), in which stencils were cut and ink was forced through the openings of the design onto textiles, mostly for clothing. This was taken up in France. The modern screenprinting process originated from patents taken out by Samuel Simon in 1907 in England. This idea was then adopted in San Francisco, California, by John Pilsworth in 1914, who used screenprinting to form multicolor prints in a subtractive mode, differing from screenprinting as it is done today. Flexography

Flexography

Flexography (also called "surface printing"), often abbreviated to "flexo", is a method of printing most commonly used for packaging (labels, tape, bags, banners, and so on).

A flexo print is achieved by creating a mirrored master of the required image as a 3D relief in a rubber or polymer material. A measured amount of ink is deposited upon the surface of the printing plate (or printing cylinder) using an anilox roll. The print surface then rotates, contacting the print material which transfers the ink.

Originally, flexo printing was basic in quality. Labels requiring high quality have generally been printed by offset until recently. In the last few years great advances have been made in the quality of flexo printing presses.

The greatest advances, though, have been in the area of photopolymer printing plates, including improvements to the plate material and the method of plate creation, usually photographic exposure followed by a chemical etch, though also by direct laser engraving. Photocopier (1960s)

Photocopier

Xerographic office photocopying was introduced by Xerox in the 1960s, and over the following 20 years it gradually replaced copies made by Verifax, Photostat, mimeograph machines, and other duplicating machines. The prevalence of its use is one of the factors that prevented the development of the paperless office heralded early in the digital revolution. Thermal printer

Thermal printer

A thermal printer (or direct thermal printer) produces a printed image by selectively heating coated thermochromic paper, commonly known as thermal paper, when the paper passes over the thermal print head. The coating turns black in the areas where it is heated, producing an image. Laser printer (1969)

The laser printer, based on a modified xerographic copier, was invented at Xerox in 1969 by researcher Gary Starkweather, who had a fully functional networked printer system working by 1971.[31][32] Laser printing eventually became a multibillion-dollar business for Xerox.

The first commercial implementation of a laser printer was the IBM model 3800 in 1976, used for high-volume printing of documents such as invoices and mailing labels. It is often cited as "taking up a whole room," implying that it was a primitive version of the later familiar device used with a personal computer. While large, it was designed for an entirely different purpose. Many 3800s are still in use.

The first laser printer designed for use with an individual computer was released with the Xerox Star 8010 in 1981. Although it was innovative, the Star was an expensive ($17,000) system that was only purchased by a small number of laboratories and institutions. After personal computers became more widespread, the first laser printer intended for a mass market was the HP LaserJet 8ppm, released in 1984, using a Canon engine controlled by HP software. The HP LaserJet printer was quickly followed by other laser printers from Brother Industries, IBM, and others.

Most noteworthy was the role the laser printer played in popularizing desktop publishing with the introduction of the Apple LaserWriter for the Apple Macintosh, along with Aldus PageMaker software, in 1985. With these products, users could create documents that would previously have required professional typesetting. Dot matrix printer (1970)

Dot matrix printing

A dot matrix printer or impact matrix printer refers to a type of computer printer with a print head that runs back and forth on the page and prints by impact, striking an ink-soaked cloth ribbon against the paper, much like a typewriter. Unlike a typewriter or daisy wheel printer, letters are drawn out of a dot matrix, and thus, varied fonts and arbitrary graphics can be produced. Because the printing involves mechanical pressure, these printers can create carbon copies and carbonless copies.

Each dot is produced by a tiny metal rod, also called a "wire" or "pin", which is driven forward by the power of a tiny electromagnet or solenoid, either directly or through small levers (pawls). Facing the ribbon and the paper is a small guide plate (often made of an artificial jewel such as sapphire or ruby [2]) pierced with holes to serve as guides for the pins. The moving portion of the printer is called the print head, and when running the printer as a generic text device generally prints one line of text at a time. Most dot matrix printers have a single vertical line of dot-making equipment on their print heads; others have a few interleaved rows in order to improve dot density. Inkjet printer

Inkjet printers are a type of computer printer that operates by propelling tiny droplets of liquid ink onto paper.

Dye-sublimation printer

Dye-sublimation printer

A dye-sublimation printer (or dye-sub printer) is a computer printer which employs a printing process that uses heat to transfer dye to a medium such as a plastic card, printer paper or poster paper. The process is usually to lay one color at a time using a ribbon that has color panels. Most dye-sublimation printers use CMYO colors, which differ from the more recognised CMYK colors in that the black dye is eliminated in favour of a clear overcoating. This overcoating (which has numerous names depending on the manufacturer) is effectively a thin laminate which protects the print from discoloration from UV light and the air while also rendering the print water-resistant. Many consumer and professional dye-sublimation printers are designed and used for producing photographic prints. Digital press (1993)

Digital printing

Digital printing is the reproduction of digital images on a physical surface, such as common paper, cover stock, film, cloth, plastic, vinyl, magnets, labels, etc.

It can be differentiated from litho, flexography, gravure or letterpress printing in many ways, some of which are:

• Every impression made onto the paper can be different, as opposed to making several hundred or thousand impressions of the same image from one set of printing plates, as in traditional methods.
• The ink or toner does not absorb into the substrate, as does conventional ink, but forms a layer on the surface and may be fused to the substrate by using an inline fuser fluid with heat process (toner) or UV curing process (ink).
• It generally requires less waste in terms of chemicals used and paper wasted in set up or makeready (bringing the image "up to color" and checking position).
• It is excellent for rapid prototyping, or small print runs, which means that it is more accessible to a wider range of designers and more cost effective in short runs.

Frescography (1998)

Frescography

Frescography is a method for the reproduction/creation of murals using digital printing methods, invented in 1998 by Rainer Maria Latzke and patented in 2000. Frescography is based on digitally cut-out motifs which are stored in a database. CAM software programs then allow the user to enter the measurements of a wall or ceiling to create a mural design with low-resolution motifs. Since architectural elements such as beams, windows or doors can be integrated, the design results in an accurately tailored wall mural. Once a design is finished, the low-resolution motifs are converted into the original high-resolution images and are printed on canvas by wide-format printers. The canvas can then be applied to the wall in a wallpaper-hanging-like procedure and will then look like an on-site created mural.

3D printing

Three-dimensional printing is a method of converting a virtual 3D model into a physical object. 3D printing is a category of rapid prototyping technology. 3D printers typically work by 'printing' successive layers on top of the previous one to build up a three-dimensional object. 3D printers are generally faster, more affordable and easier to use than other additive fabrication technologies.[33] Technological developments

Woodcut

Woodcut

Woodcut is a relief printing artistic technique in printmaking in which an image is carved into the surface of a block of wood, with the printing parts remaining level with the surface while the non-printing parts are removed, typically with gouges. The areas to show 'white' are cut away with a knife or chisel, leaving the characters or image to show in 'black' at the original surface level. The block is cut along the grain of the wood (unlike wood engraving where the block is cut in the end-grain). In Europe beechwood was most commonly used; in Japan, a special type of cherry wood was popular.

Woodcut first appeared in ancient China. From the 6th century onward, woodcut icons became popular and especially flourished in Buddhist texts. From the 10th century, woodcut pictures appeared within works of Chinese literature and in some other printed matter, such as Jiaozi (currency). Woodcut New Year pictures are also very popular with the Chinese.

In China and Tibet printed images mostly remained tied as illustrations to accompanying text until the modern period. The earliest woodblock printed book, the Diamond Sutra, contains a large image as frontispiece, and many Buddhist texts contain some images. Later some notable Chinese artists designed woodcuts for books, but the individual print developed in China mainly in the form of the New Year picture rather than as an art form in the way it did in Europe and Japan. In Europe, woodcut is the oldest technique used for old master prints, developing about 1400, by applying to paper existing techniques for printing on cloth. The explosion of sales of cheap woodcuts in the middle of the fifteenth century led to a fall in standards, and many popular prints were very crude. The development of hatching followed on rather later than in engraving. Michael Wolgemut was significant in making German woodcut more sophisticated from about 1475, and Erhard Reuwich was the first to use cross-hatching (far harder to do than in engraving or etching). Both of these produced mainly book-illustrations, as did various Italian artists who were also raising standards there at the same period. At the end of the century Albrecht Dürer brought the woodcut to a level that has never been surpassed, and greatly increased the status of the single-leaf (i.e. an image sold separately) woodcut.

Engraving

Engraving

Engraving is the practice of incising a design onto a hard, flat surface, by cutting grooves into it. The result may be a decorated object in itself, as when silver, gold or steel are engraved, or may provide an intaglio printing plate, of copper or another metal, for printing images on paper, which are called engravings. Engraving was a historically important method of producing images on paper, both in artistic printmaking, and also for commercial reproductions and illustrations for books and magazines. It has long been replaced by photography in its commercial applications and, partly because of the difficulty of learning the technique, is much less common in printmaking, where it has been largely replaced by etching and other techniques. Other terms often used for engravings are copper-plate engraving and line engraving. These should all mean exactly the same thing, but especially in the past were often used very loosely to cover several printmaking techniques, so that many so-called engravings were in fact produced by totally different techniques, such as etching.

In antiquity, the only engraving that could be carried out is evident in the shallow grooves found in some jewellery after the beginning of the 1st Millennium B.C. The majority of so-called engraved designs on ancient gold rings or other items were produced by chasing or sometimes a combination of lost-wax casting and chasing.

In the European Middle Ages goldsmiths used engraving to decorate and inscribe metalwork. It is thought that they began to print impressions of their designs to record them. From this grew the engraving of copper printing plates to produce artistic images on paper, known as old master prints, in Germany in the 1430s. Italy soon followed. Many early engravers came from a goldsmithing background. The first and greatest period of the engraving was from about 1470 to 1530, with such masters as Martin Schongauer, Albrecht Dürer, and Lucas van Leyden.

Etching

Etching

Etching is the process of using strong acid or mordant to cut into the unprotected parts of a metal surface to create a design in intaglio in the metal (the original process—in modern manufacturing other chemicals may be used on other types of material). As an intaglio method of printmaking it is, along with engraving, the most important technique for old master prints, and remains widely used today.

Halftoning

Halftone

Halftone is the reprographic technique that simulates continuous-tone imagery through the use of equally spaced dots of varying size.[34] 'Halftone' can also be used to refer specifically to the image that is produced by this process.[34]
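
As an informal illustration of the principle (not any particular printer's or platemaker's method), the following Python sketch converts a small grid of grayscale values into equally spaced dots whose size grows with darkness; the sample values and cell dimensions are invented for the example.

import math

def halftone_svg(gray, cell=10, max_radius=4.5):
    """Render a 2D list of 0-255 grayscale values as equally spaced
    halftone dots: darker cells get larger dots."""
    rows, cols = len(gray), len(gray[0])
    parts = ['<svg xmlns="http://www.w3.org/2000/svg" width="%d" height="%d">'
             % (cols * cell, rows * cell)]
    for r in range(rows):
        for c in range(cols):
            darkness = 1.0 - gray[r][c] / 255.0        # 0 = white, 1 = black
            radius = max_radius * math.sqrt(darkness)  # dot area grows with darkness
            if radius > 0.05:
                parts.append('<circle cx="%.1f" cy="%.1f" r="%.2f" fill="black"/>'
                             % (c * cell + cell / 2, r * cell + cell / 2, radius))
    parts.append('</svg>')
    return "\n".join(parts)

# Hypothetical 3x4 grayscale sample: a simple left-to-right gradient.
sample = [[0, 85, 170, 255]] * 3
print(halftone_svg(sample))

The square-root mapping keeps the inked area, rather than the dot diameter, roughly proportional to the darkness of each cell, which is the effect a halftone screen is meant to produce.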

The idea of halftone printing originates from William Fox Talbot. In the early 1850s he suggested using "photographic screens or veils" in connection with a photographic intaglio process.[35]

Several different kinds of screens were proposed during the following decades, but the first half-tone photo-engraving process was invented by Canadians George-Édouard Desbarats and William Leggo Jr.[3] On October 30, 1869, Desbarats published the Canadian Illustrated News, which became the world's first periodical to successfully employ this photo-mechanical technique, featuring a full page half-tone image of His Royal Highness Prince Arthur, from a photograph by Notman.[4] Ambitious to exploit a much larger circulation, Desbarats and Leggo went to New York and launched the New York Daily Graphic in March 1873, which became the world's first illustrated daily.

The first truly successful commercial method was patented by Frederic Ives of Philadelphia in 1881.[35][36] But although he found a way of breaking up the image into dots of varying sizes he did not make use of a screen. In 1882 the German George Meisenbach patented a halftone process in England. His invention was based on the previous ideas of Berchtold and Swan. He used single lined screens which were turned during exposure to produce cross-lined effects. He was the first to achieve any commercial success with relief halftones.[35]

Xerography

Xerography

Xerography (or electrophotography) is a photocopying technique developed by Chester Carlson in 1938 and patented on October 6, 1942. He received U.S. Patent 2,297,691 for his invention. The name xerography came from the Greek radicals xeros (dry) and graphos (writing), because there are no liquid chemicals involved in the process, unlike earlier reproduction techniques like cyanotype.

In 1938 Bulgarian physicist Georgi Nadjakov found that when placed into electric field and exposed to light, some dielectrics acquire permanent electric polarization in the exposed areas.[5] That polarization persists in the dark and is destroyed in light. Chester Carlson, the inventor of photocopying, was originally a patent attorney and part-time researcher and inventor. His job at the patent office in New York required him to make a large number of copies of important documents. Carlson, who was arthritic, found this a painful and tedious process. This prompted him to conduct experiments with photoconductivity. Carlson experimented with "electrophotography" in his kitchen and in 1938, applied for a patent for the process. He made the first "photocopy" using a zinc plate covered with sulfur. The words "10-22-38 Astoria" were written on a microscope slide, which was placed on top of more sulfur and under a bright light. After the slide was removed, a mirror image of the words remained. Carlson tried to sell his invention to some companies, but because the process was still underdeveloped he failed. At the time multiple copies were made using carbon paper or duplicating machines and people did not feel the need for an electronic machine. Between 1939 and 1944, Carlson was turned down by over 20 companies, including IBM and GE, neither of which believed there was a significant market for copiers. Type & Typography

Typography is the art and technique of arranging type to make written language legible, readable, and appealing when displayed. The arrangement of type involves selecting typefaces, point size, line length, line-spacing (leading), letter-spacing (tracking), and adjusting the space between letter pairs (kerning[1]). The term typography is also applied to the style, arrangement, and appearance of the letters, numbers, and symbols created by the process. Type design is a closely related craft, sometimes considered part of typography; most typographers do not design typefaces, and some type designers do not consider themselves typographers.[2][3] Typography also may be used as a decorative device, unrelated to communication of information.

Typography is the work of typesetters, compositors, typographers, graphic designers, art directors, manga artists, comic book artists, graffiti artists, and now—anyone who arranges words, letters, numbers, and symbols for publication, display, or distribution—from clerical workers and newsletter writers to anyone self-publishing materials. Until the Digital Age, typography was a specialized occupation. Digitization opened up typography to new generations of previously unrelated designers and lay users, and David Jury, head of graphic design at Colchester Institute in England, states that "typography is now something everybody does."[4] As the capability to create typography has become ubiquitous, the application of principles and best practices developed over generations of skilled workers and professionals has diminished. So at a time when scientific techniques can support the proven traditions (e.g. greater legibility with the use of serifs, upper and lower case, contrast, etc.) through understanding the limitations of human vision, typography as commonly encountered may fail to achieve its principal objective: effective communication. History

History of Western typography, History of typography in East Asia and Movable type

A revolving type case for wooden type in China, an illustration shown in a book published in 1313 by Wang Zhen

Korean movable type from 1377 used for the Jikji

Although typically applied to printed, published, broadcast, and reproduced materials in contemporary times, all words, letters, symbols, and numbers written alongside the earliest naturalistic drawings by humans may be called typography. The word typography is derived from the Greek words τύπος typos "form" or "impression" and γράφειν graphein "to write", and traces its origins to the first punches and dies used to make seals and currency in ancient times, which ties the concept to printing. The uneven spacing of the impressions on brick stamps found in the Mesopotamian cities of Uruk and Larsa, dating from the second millennium B.C., may be evidence of type, wherein the reuse of identical characters was applied to create cuneiform text.[5] Babylonian cylinder seals were used to create an impression on a surface by rolling the seal on wet clay.[6] Typography also was implemented in the Phaistos Disc, an enigmatic Minoan printed item from Crete, which dates to between 1850 and 1600 B.C.[7][8][9] It has been proposed that Roman lead pipe inscriptions were created with movable type printing,[10][11][12] but German typographer Herbert Brekle recently dismissed this view.[13]

The essential criterion of type identity was met by medieval print artifacts such as the Latin Pruefening Abbey inscription of 1119 that was created by the same technique as the Phaistos disc.[7][14][15][16] The silver altarpiece of patriarch Pellegrinus II (1195−1204) in the cathedral of Cividale was printed with individual letter punches.[17][18][19] Apparently, the same printing technique may be found in tenth to twelfth century Byzantine reliquaries.[17][18] Other early examples include individual letter tiles where the words are formed by assembling single letter tiles in the desired order, which were reasonably widespread in medieval Northern Europe.[7][15]

Typography with movable type was invented during the eleventh century in China by Bi Sheng (990–1051).[20] His movable type system was manufactured from ceramic materials, and clay type printing continued to be practiced in China until the Qing Dynasty.

Wang Zhen was one of the pioneers of wooden movable type. Although the wooden type was more durable under the mechanical rigors of handling, repeated printing wore the character faces down and the types could be replaced only by carving new pieces.[21]

Metal movable type was first invented in Korea during the Goryeo Dynasty, approximately 1230. Hua Sui introduced bronze type printing to China in 1490 AD. The diffusion of both movable-type systems was limited, however, and the technology did not spread beyond East and Central Asia.[22]

A sixteenth century workshop in Germany showing a printing press and many of the activities involved in the process of printing

Modern lead-based movable type, along with the mechanical printing press, is most often attributed to the goldsmith Johannes Gutenberg in 1439.[23][24][25][26] His type pieces, made from a lead-based alloy, suited printing purposes so well that the alloy is still used today.[27] Gutenberg developed specialized techniques for casting and combining cheap copies of letter punches in the vast quantities required to print multiple copies of texts.[28] This technical breakthrough was instrumental in starting the Printing Revolution and the first book printed with lead-based movable type was the Gutenberg Bible.

Rapidly advancing technology revolutionized typography in the latter twentieth century. During the 1960s some camera-ready typesetting could be produced in any office or workshop with stand-alone machines such as those introduced by IBM. During the mid-1980s personal computers such as the Macintosh allowed type designers to create typefaces digitally using commercial graphic design software. Digital technology also enabled designers to create more experimental typefaces as well as the practical typefaces of traditional typography. Designs for typefaces could be created faster with the new technology, and for more specific functions.[6] The cost for developing typefaces was drastically lowered, becoming widely available to the masses. The change has been called the "democratization of type" and has given new designers more opportunities to enter the field.[29]

Evolution

The design of typefaces has developed alongside the development of typesetting systems.[30] Although typography has evolved significantly from its origins, it is a largely conservative art that tends to cleave closely to tradition.[31] This is because legibility is paramount, and so the typefaces that are the most readable usually are retained. In addition, the evolution of typography is inextricably intertwined with lettering by hand and related art forms, especially formal styles, which thrived for centuries preceding typography,[31] and so the evolution of typography must be discussed with reference to this relationship.

In the nascent stages of European printing, the typeface (blackletter, or Gothic) was designed in imitation of the popular hand-lettering styles of scribes.[32] Initially, this typeface was difficult to read, because each letter was set in place individually and made to fit tightly into the allocated space.[33] The art of manuscript writing, whose origin was during Hellenistic and Roman bookmaking, reached its zenith in the illuminated manuscripts of the Middle Ages. Metal typefaces notably altered the style, making it "crisp and uncompromising", and also brought about "new standards of composition".[31] During the Renaissance period in France, Claude Garamond was partially responsible for the adoption of the Roman typeface that eventually supplanted the more commonly-used Gothic (blackletter).[34] Roman typeface also was based on hand-lettering styles.[35]

The development of Roman typeface may be traced back to Greek lapidary letters. Greek lapidary letters were carved into stone and "one of the first formal uses of Western letterforms"; after that, Roman lapidary letterforms evolved into the monumental capitals, which laid the foundation for Western typographical design, especially serif typefaces.[34] There are two styles of Roman typefaces: the old style, and the modern. The former is characterized by its similarly-weighted lines, while the latter is distinguished by its contrast of light and heavy lines.[32] Often, these styles are combined.

By the late twentieth century, computers had turned typeface design into a rather simplified process. This has allowed the number of typefaces and styles to proliferate exponentially, as there now are thousands available.[32] Unfortunately, confusion between typeface and font (the various styles of a single typeface) occurred in 1984 when Steve Jobs mislabeled typefaces as fonts for Apple computers and his error has been perpetuated throughout the computer industry, leading to common misuse by the public of the term "font" when typeface is the proper term.

Experimental typeface uses

"Experimental typography" is defined as the unconventional and more artistic approach to typeface selection. Fra nc is P icab ia was a Dada pioneer of this practice in the early twentieth Century. David Carson is often associated with this movement, particularly for his work in Ray Gun magazine in the 1990s. His work caused an uproar in the design community due to his abandonment of standard practices in typeface selection, layout, and design. Experimental typography is said to place emphasis on expressing emotion, rather than ha ving a concern for legibility while communicating ideas, hence considered bordering on being art.

Techniques

There are many facets to the expressive use of typography, and with those come many different techniques of visual presentation and graphic design. Spacing and kerning, size-specific spacing, x-height and vertical proportions, character variation, width, weight, and contrast[36] are several techniques that need to be taken into consideration when judging the appropriateness of specific typefaces or creating them. When placing two or more differing and/or contrasting fonts together, these techniques come into play in organizing the design and creating attractive results. For example, if the bulk of a title has a more unfamiliar or unusual font, simpler sans-serif fonts will help complement the title while attracting more attention to the piece as a whole.[37]
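
As a minimal sketch of how spacing, kerning and tracking combine when measuring a line of set type, the following Python fragment uses invented glyph advance widths and kerning pairs; real values would come from a font's metrics tables.

# Hypothetical glyph advance widths in points for a 14 pt face.
ADVANCE = {"A": 9.5, "V": 9.5, "T": 8.5, "o": 7.0, " ": 4.0}
# Hypothetical kerning pairs: negative values pull the pair closer together.
KERNING = {("A", "V"): -1.2, ("T", "o"): -0.8}

def line_width(text, tracking=0.0):
    """Total set width = sum of advances + kerning adjustments
    + uniform tracking added between every pair of characters."""
    width = sum(ADVANCE[ch] for ch in text)
    for a, b in zip(text, text[1:]):
        width += KERNING.get((a, b), 0.0) + tracking
    return width

print(line_width("AV To"))          # kerned, no extra tracking
print(line_width("AV To", 0.2))     # same text with 0.2 pt extra tracking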

Scope

In contemporary use, the practice and study of typography is very broad, covering all aspects of letter design and application, both mechanical (typesetting, type design, and typefaces) and manual (handwriting and calligraphy). Typographical elements may appear in a wide variety of situations, including:

• Documents
• Presentations
• Display typography (described below)
• Clothing
• Maps and labels
• Vehicle instrument panels
• As a component of industrial design—type on household appliances, pens, and wristwatches, for example
• As a component in modern poetry (see, for example, the poetry of e. e. cummings)

Since digitization, typographical uses have spread to a wider range of applications, appearing on web pages, LCD mobile phone screens, and hand-held video games. Text typefaces

Traditionally, text is composed to create a readable, coherent, and visually satisfying typeface that works invisibly, without the awareness of the reader. Even distribution of typeset material, with a minimum of distractions and anomalies, is aimed at producing clarity and transparency.

Choice of typeface(s) is the primary aspect of text typography—prose fiction, non-fiction, editorial, educational, religious, scientific, spiritual, and commercial writing all have differing characteristics and requirements of appropriate typefaces (and their fonts or styles). For historic material, established text typefaces frequently are chosen according to a scheme of historical genre acquired by a long process of accretion, with considerable overlap among historical periods.

Contemporary books are more likely to be set with state-of-the-art "text romans" or "book romans" typefaces with serifs and design values echoing present-day design arts, which are closely based on traditional models such as those of Nicolas Jenson, Francesco Griffo (a punchcutter who created the model for Aldine typefaces), and Claude Garamond. With their more specialized requirements, newspapers and magazines rely on compact, tightly-fitted styles of text typefaces with serifs specially designed for the task, which offer maximum flexibility, readability, legibility, and efficient use of page space. Sans serif text typefaces (without serifs) often are used for introductory paragraphs, incidental text, and whole short articles. A current fashion is to pair a sans-serif typeface for headings with a high-performance serif typeface of matching style for the text of an article.

Typesetting conventions are modulated by orthography and linguistics, word structures, word frequencies, morphology, phonetic constructs and linguistic syntax. Typesetting conventions also are subject to specific cultural conventions. For example, in French it is customary to insert a non-breaking space before a colon (:) or semicolon (;) in a sentence, while in English it is not.
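
A small, simplified Python sketch of how such a convention can be applied mechanically; it handles only the colon and semicolon case mentioned above, whereas fine French typography also distinguishes narrow no-break spaces and further punctuation.

import re

def french_punctuation_spacing(text):
    """Replace any ordinary space before ':' or ';' with a no-break space
    (U+00A0), and insert one if it is missing, per French convention."""
    return re.sub(r"\s*([;:])", "\u00a0\\1", text)

print(french_punctuation_spacing("Attention : les horaires changent ; merci."))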

Color

Type color

In typesetting, color is the overall density of the ink on the page, determined mainly by the typeface, but also by the word spacing, leading, and depth of the margins.[38] Text layout, tone, or color of the set text, and the interplay of text with the white space of the page in combination with other graphic elements impart a "feel" or "resonance" to the subject matter. With printed media, typographers also are concerned with binding margins, paper selection, and printing methods when determining the correct color of the page.

Principles of the craft

Two fundamental aspects of typography are legibility and readability. Though in a non-technical sense "legible" and "readable" are often used synonymously, typographically they are separate but related concepts.[39] Legibility describes how easily individual characters can be distinguished from one another. It is described by Walter Tracy as "the quality of being decipherable and recognisable".[40] For instance if a "b" and an "h", or a "3" and an "8", are difficult to distinguish at small sizes, this is a problem of legibility.[39] Typographers are concerned with legibility insofar as it is their job to select the correct font to use. Brush Script is an example of a font containing many characters which might be difficult to distinguish. Selection of case influences the legibility of typography because using only upper-case letters (all-caps) reduces legibility.

Readability refers to how easy it is to read the text as a whole, as opposed to the individual character recognition described by legibility. Use of margins, word- and line-spacing, and clear document structure all impact on readability. Some fonts or font styles, for instance sans-seriffed fonts, are considered to have low readability, and so be unsuited for large quantities of prose.[39]

Text typeset example in Iowan Old Style roman, italics, and small caps, optimized at approximately ten words per line, typeface sized at 14 points on 1.4 × leading, with 0.2 points extra tracking using an extract of an essay by Oscar Wilde The English Renaissance of Art c. 1882

Legibility ‘refers to perception’ (being able to see as determined by physical limitations of the eye) and readability ‘refers to comprehension’ (understanding the meaning).[39] Good typographers and graphic designers aim to achieve excellence in both.

"The typeface chosen should be legible. That is, it should be read without effort. Sometimes legibility is simply a matter of type size; more often, however, it is a matter of typeface design. Case selection always influences legibility. In general, typefaces that are true to the basic letterforms are more legible than typefaces that have been condensed, expanded, embellished, or abstracted.

However, even a legible typeface can become unreadable through poor setting and placement, just as a less legible typeface can be made more readable through good design."[41]

Studies of both legibility and readability have examined a wide range of factors, including type size and type design: for example, serif vs. sans-serif type, roman vs. oblique and italic type, line length, line spacing, color contrast, the design of the right-hand edge (justified vs. ragged right), and whether text is hyphenated. Justified copy must be adjusted tightly during typesetting to prevent loss of readability, something beyond the capabilities of typical personal computers.

Legibility research has been published since the late nineteenth century. Although there often are commonalities and agreement on many topics, others often create poignant areas of conflict and variation of opinion. For example, Alex Poole asserts that no one has provided a conclusive answer as to which typeface style, serif or sans serif, provides the most legibility,[42][unreliable source?] although differences of opinion exist regarding such debates. Other topics such as justified vs unjustified type, use of hyphens, and proper typefaces for people with reading difficulties such as dyslexia, have continued to be subjects of debate. Legibility is usually measured through speed of reading, with comprehension scores used to check for effectiveness (that is, not a rushed or careless read). For example, Miles Tinker, who published numerous studies from the 1930s to the 1960s, used a speed of reading test that required participants to spot incongruous words as an effectiveness filter.

The Readability of Print Unit at the Royal College of Art under Professor Herbert Spencer with Brian Coe and Linda Reynolds[43] did important work in this area and was one of the centres that revealed the importance of the saccadic rhythm of eye movement for readability—in particular, the ability to take in (i.e., recognise the meaning of groups of) about three words at once and the physiognomy of the eye, which means the eye tires if the line requires more than 3 or 4 of these saccadic jumps. More than this is found to introduce strain and errors in reading (e.g. doubling). The use of all-caps renders words indistinguishable as groups, all letters presenting a uniform line to the eye, requiring special effort for separation and understanding.

These days, legibility research tends to be limited to critical issues, or the testing of specific design solutions (for example, when new typefaces are developed). Examples of critical issues include typefaces for people with visual impairment, typefaces and case selection for highway and street signs, or for other conditions where legibility may make a key difference.

Much of the legibility research literature is somewhat atheoretical—various factors were tested individually or in combination (inevitably so, as the different factors are interdependent), but many tests were carried out in the absence of a model of reading or visual perception. Some typographers believe that the overall word shape (Bouma) is very important in readability, and that the theory of parallel letterwise recognition is either wrong, less important, or not the entire picture. Word shape differs by outline, influenced by the ascending and descending elements of lower-case letters, and enables reading the entire word without having to parse out each letter (for example, dog is easily distinguished from cat); this becomes more influential in the ability to read groups of words at a time.

Studies distinguishing between Bouma recognition and parallel letterwise recognition with regard to how people recognize words when they read, have favored parallel letterwise recognition, which is widely accepted by cognitive psychologists.[citation needed]

Some commonly agreed findings of legibility research include:[citation needed]

• Text set in lower case is more legible than text set all in upper case (capitals, all-caps), presumably because lower case letter structures and word shapes are more distinctive.
• Extenders (ascenders, descenders, and other projecting parts) increase salience (prominence).
• Regular upright type (roman type) is found to be more legible than italic type.
• Contrast, without dazzling brightness, also has been found to be important, with black on yellow/cream being most effective along with white on blue.
• Positive images (e.g. black on white) make handheld material easier to read than negative or reversed (e.g. white on black). Even this commonly accepted practice has some exceptions, however, for example in some cases of disability,[44][unreliable source?] and designing the most effective signs for drivers.
• The upper portions of letters (ascenders) play a stronger part in the recognition process than the lower portions.

Text typeset using LaTeX digital typesetting software showing unconventional double spacing after a period, a frequent carryover among novices to typesetting who mistakenly use the spacing convention of typewriters

Readability also may be compromised by letter-spacing, word spacing, or leading that is too tight or too loose. It may be improved when generous vertical space separates lines of text, making it easier for the eye to distinguish one line from the next, or previous line. Poorly designed typefaces and those that are too tightly or loosely fitted also may result in poor legibility. Underlining also may reduce readability by eliminating the recognition effect contributed by the descending elements of letters.

Periodical publications, especially newspapers and magazines, use typographical elements to achieve an attractive, distinctive appearance, to aid readers in navigating the publication, and in some cases for dramatic effect. By formulating a style guide, a publication or periodical standardizes with a relatively small collection of typefaces, each used for specific elements within the publication, and makes consistent use of typefaces, case, type sizes, italic, boldface, colors, and other typographic features such as combining large and small capital letters together. Some publications, such as The Guardian and The Economist, go so far as to commission a type designer to create customized typefaces for their exclusive use.

Different periodical publications design their publications, including their typography, to achieve a particular tone or style. For example, USA Today uses a bold, colorful, and comparatively modern style through their use of a variety of typefaces and colors; type sizes vary widely, and the newspaper's name is placed on a colored background. In contrast, The New York Times uses a more traditional approach, with fewer colors, less typeface variation, and more columns.

Especially on the front page of newspapers and on magazine covers, headlines often are set in larger display typefaces to attract attention, and are placed near the masthead. Display graphics

Type may be combined with negative space and images, forming relationships and dialog between the words and images for special effects. Display designs are a potent element in graphic design. Some sign designers exhibit less concern for readability, sacrificing it for an artistic manner. Color and size of type elements may be much more prevalent than in solely text designs. Most display items exploit type at larger sizes, where the details of letter design are magnified. Color is used for its emotional effect in conveying the tone and nature of subject matter.

Display typography encompasses:

• Advertisements in publications, such as newspapers and magazines
• Signs and other large-scale letter designs, such as information signs and billboards
• Posters
• Brochures and flyers
• Packaging and labeling
• Business communications and advertising
• Book covers
• Typographic logos, trademarks, and word marks
• Graffiti
• Inscriptions
• Architectural lettering
• Kinetic typography in motion pictures, television, vending machine displays, online, and computer screen displays

Advertising

Typography has long been a vital part of promotional material and advertising. Designers often use typefaces to set a theme and mood in an advertisement; for example using bold, large text to convey a particular message to the reader.[45] Choice of typeface is often used to draw attention to a particular advertisement, combined with efficient use of color, shapes, and images.[46] Today, typography in advertising often reflects a company's brand. Typefaces used in advertisements convey different messages to the reader: classical typefaces suggest a strong personality, while more modern ones may convey a clean, neutral look. Bold typefaces are used for making statements and attracting attention. In any design, a balance has to be achieved between the visual impact and communication aspects.[47] Digital technology in the twentieth and twenty-first centuries has enabled the creation of typefaces for advertising that are more experimental than traditional typefaces.[29]

Inscriptional and architectural lettering

The history of inscriptional lettering is intimately tied to the history of writing, the evolution of letterforms and the craft of the hand. The widespread use of the computer and various etching and sandblasting techniques today has made the hand carved monument a rarity, and the number of letter-carvers left in the US continues to dwindle.

For monumental lettering to be effective it must be considered carefully in its context. Proportions of letters need to be altered as their size and distance from the viewer increases. An expert monument designer gains understanding of these nuances through much practice and observation of the craft. Letters drawn by hand and for a specific project have the possibility of being richly specific and profoundly beautiful in the hand of a master. Each also may take up to an hour to carve,[citation needed] so it is no wonder that the automated sandblasting process has become the industry standard.

To create a sandblasted letter, a rubber mat is laser-cut from a computer file and glued to the stone. The blasted sand then bites a coarse groove or channel into the exposed surface. Unfortunately, many of the computer applications that create these files and interface with the laser cutter do not have a wide selection of many typefaces, and often have inferior versions of those typefaces that are available.[citation needed] What now can be done in minutes, however, lacks the striking architecture and geometry of the chisel-cut letter that allows light to play across its distinct interior planes. Typesetting

Typesetting is the composition of text by means of arranging physical types[1] or the digital equivalents. Stored letters and other symbols (called sorts in mechanical systems and glyphs in digital systems) are retrieved and ordered according to a language's orthography for visual display. Typesetting requires the prior process of designing a font (which is widely but erroneously confused with and substituted for typeface). One significant effect of typesetting was that authorship of works could be spotted more easily, making things more difficult for copiers who had not gained permission.

Pre-Digital era

Manual typesetting

Movable type

During much of the letterpress era, movable type was composed by hand for each page. Cast metal sorts were composed into words, then lines, then paragraphs, then pages of text and tightly bound together to make up a form, with all letter faces exactly the same “height to paper”, creating an even surface of type. The form was placed in a press, inked, and an impression made on paper.

During typesetting, individual sorts are picked from a type case with the right hand, and set into a composing stick held in the left hand from left to right, and as viewed by the setter upside down. As seen in the photo of the composing stick, a lower case 'q' looks like a 'd', a lower case 'b' looks like a 'p', a lower case 'p' looks like a 'b' and a lower case 'd' looks like a 'q'. This is reputed to be the origin of the expression "mind your p's and q's". It might just as easily have been “mind your b's and d's”.

The diagram at right illustrates a cast metal sort: a face, b body or shank, c point size, 1 shoulder, 2 nick, 3 groove, 4 foot. Wooden printing sorts were in use for centuries in combination with metal type. Not shown, and more the concern of the casterman, is the “set”, or width of each sort. Set width, like body size, is measured in points.

In order to extend the working life of type, and to account for the finite sorts in a case of type, copies of forms were cast when anticipating subsequent printings of a text, freeing the costly type for other work. This was particularly prevalent in book and newspaper work where rotary presses required type forms to wrap an impression cylinder rather than set in the bed of a press. In this process, called stereotyping, the entire form is pressed into a fine matrix such as plaster of Paris or papier mâché called a flong to create a positive, from which the stereotype form was electrotyped, cast of type metal. Advances such as the typewriter and computer would push the state of the art even farther ahead. Still, hand composition and letterpress printing have not fallen completely out of use, and since the introduction of digital typesetting, it has seen a revival as an artisanal pursuit. However, it is a very small niche within the larger typesetting market.

Hot metal typesetting

Hot metal typesetting

The time and effort required to manually compose the text led to several efforts in the 19th century to produce mechanical typesetting. While some, such as the Paige compositor, met with limited success, by the end of the 19th century, several methods had been devised whereby an operator working a keyboard or other devices could produce the desired text. Most of the successful systems involved the in-house casting of the type to be used, hence are termed "hot metal" typesetting. The Linotype machine, invented in 1884, used a keyboard to assemble the casting matrices, and cast an entire line of type at a time (hence its name). In the Monotype System, a keyboard was used to punch a paper tape, which was then fed to control a casting machine. The Ludlow Typograph involved hand-set matrices, but otherwise used hot metal. By the early 20th century, the various systems were nearly universal in large newspapers and publishing houses.

Phototypesetting

Phototypesetting

Linotype CRTronic 360 phototypesetting terminal, 2006

Phototypesetting or "cold type" systems first appeared in the early 1960s and rapidly displaced continuous casting machines. These devices consisted of glass disks (one per font) that spun in front of a light source to selectively expose characters onto light-sensitive paper. Originally they were driven by pre-punched paper tapes. Later they were hooked up to computer front ends.

One of the earliest electronic photocomposition systems was introduced by Fairchild Semiconductor. The typesetter typed a line of text on a Fairchild keyboard that had no display. To verify correct content of the line it was typed a second time. If the two lines were identical a bell rang and the machine produced a punched paper tape corresponding to the text. With the completion of a block of lines the typesetter fed the corresponding paper tapes into a phototypesetting device that mechanically set type outlines printed on glass sheets into place for exposure onto a negative film. Photosensitive paper was exposed to light through the negative film, resulting in a column of black type on white paper, or a galley. The galley was then cut up and used to create a mechanical or paste up of a whole page. A large film negative of the page is shot and used to make plates for offset printing. Digital era

Dutch newsreel from 1977 about the transition to computer typesetting

The next generation of phototypesetting machines to emerge were those that generated characters on a cathode ray tube. Typical of the type were the Alphanumeric APS2 (1963),[3] IBM 2680 (1967), I.I.I. VideoComp (1973?), Autologic APS5 (1975),[4] and Linotron 202 (1978).[5] These machines were the mainstay of phototypesetting for much of the 1970s and 1980s. Such machines could be "driven online" by a computer front-end system or took their data from magnetic tape. Type fonts were stored digitally on conventional magnetic disk drives.

Computers excel at automatically typesetting and correcting documents.[6] Character-by-character, computer-aided phototypesetting was, in turn, rapidly rendered obsolete in the 1980s by fully digital systems employing a raster image processor to render an entire page to a single high-resolution digital image, now known as imagesetting.
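
As a toy illustration of the imagesetting idea (rendering a whole page to one high-resolution bitmap), the Python sketch below rasterizes a few lines of type with the Pillow imaging library; the page size, resolution and font file are assumptions, and a real raster image processor interprets a full page-description language rather than plain strings.

from PIL import Image, ImageDraw, ImageFont   # Pillow is assumed to be installed

DPI = 300                                     # assumed output resolution
PAGE_W, PAGE_H = int(8.27 * DPI), int(11.69 * DPI)   # roughly A4 at 300 dpi, in pixels

page = Image.new("1", (PAGE_W, PAGE_H), 1)    # 1-bit page, white background
draw = ImageDraw.Draw(page)
# 14 pt type converted to pixels; the font file name is an assumption
font = ImageFont.truetype("DejaVuSans.ttf", int(14 / 72 * DPI))

lines = ["Imagesetting renders the whole page",
         "to a single high-resolution bitmap."]
y = DPI                                       # start one inch from the top
for line in lines:
    draw.text((DPI, y), line, font=font, fill=0)   # black type
    y += int(14 * 1.4 / 72 * DPI)                  # 1.4x leading

page.save("page.png")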

The first commercially successful laser imagesetter, able to make use of a raster image processor, was the Monotype Lasercomp. ECRM, Compugraphic (later purchased by Agfa) and others rapidly followed suit with machines of their own.

Early minicomputer-based typesetting software introduced in the 1970s and early 1980s, such as Datalogics Pager, Penta, Atex, Miles 33, Xyvision, troff from Bell Labs, and IBM's Script product with CRT terminals, were better able to drive these electromechanical devices, and used text markup languages to describe type and other page formatting information. The descendants of these text markup languages include SGML, XML and HTML.

The minicomputer systems output columns of text on film for paste-up and eventually produced entire pages and signatures of 4, 8, 16 or more pages using imposition software on devices such as the Israeli-made Scitex Dolev. The data stream used by these systems to drive page layout on printers and imagesetters, often proprietary or specific to a manufacturer or device, drove development of generalized printer control languages, such as Adobe Systems' PostScript and Hewlett-Packard's PCL.

Before the 1980s, practically all typesetting for publishers and advertisers was performed by specialist typesetting companies. These companies performed keyboarding, editing and production of paper or film output, and formed a large component of the graphic arts industry. In the United States, these companies were located in rural Pennsylvania, New England or the Midwest, where labor was cheap and paper was produced nearby, but still within a few hours' travel time of the major publishing centers.

In 1985, desktop publishing became available, starting with the Apple Macintosh, Aldus PageMaker (and later QuarkXPress) and PostScript. Improvements in software and hardware, and rapidly lowering costs, popularized desktop publishing and enabled very fine control of typeset results much less expensively than the minicomputer dedicated systems. At the same time, word processing systems, such as Wang and WordPerfect, revolutionized office documents. They did not, however, have the typographic ability or flexibility required for complicated book layout, graphics, mathematics, or advanced hyphenation and justification rules (H and J).

By the year 2000, this industry segment had shrunk because publishers were now capable of integrating typesetting and graphic design on their own in-house computers. Many found the cost of maintaining high standards of typographic design and technical skill made it more economical to outsource to freelancers and graphic design specialists.

The availability of cheap or free fonts made the conversion to do-it-yourself easier, but also opened up a gap between skilled designers and amateurs. The advent of PostScript, supplemented by the PDF file format, provided a universal method of proofing designs and layouts, readable on major computers and operating systems.

SCRIPT variants

SCRIPT (markup)

Mural mosaic "Typesetter" at John A. Prior Health Sciences Library in Ohio

IBM created and inspired a family of typesetting languages with names that were derivatives of the word "SCRIPT". Later versions of SCRIPT included advanced features, such as automatic generation of a table of contents and index, multicolumn page layout, footnotes, boxes, automatic hyphenation and spelling verification.[7]

NSCRIPT was a port of SCRIPT to OS and TSO from CP-67/CMS SCRIPT.[8]

Waterloo Script was created later at the University of Waterloo.[9] One version of SCRIPT was created at MIT and the AA/CS at UW took over project development in 1974. The program was first used at UW in 1975. In the 1970s, SCRIPT was the only practical way to word process and format documents using a computer. By the late 1980s, the SCRIPT system had been extended to incorporate various upgrades.[10]

The initial implementation of SCRIPT at UW was documented in the May 1975 issue of the Computing Centre Newsletter, which noted some of the advantages of using SCRIPT:
a) It easily handles footnotes.
b) Page numbers can be in Arabic or Roman numerals, and can appear at the top or bottom of the page, in the centre, on the left or on the right, or on the left for even-numbered pages and on the right for odd-numbered pages.
c) Underscoring or overstriking can be made a function of SCRIPT, thus uncomplicating editor functions.
d) SCRIPT files are regular OS datasets or CMS files.
e) Output can be obtained on the printer, or at the terminal.
The article also pointed out that SCRIPT had over 100 commands to assist in formatting documents, though 8 to 10 of these commands were sufficient to complete most formatting jobs. Thus, SCRIPT had many of the capabilities computer users generally associate with contemporary word processors.[11]

SCRIPT/VS was a SCRIPT variant developed at IBM in the 1980s.

DWScript is a version of SCRIPT for MS-DOS, named after its author, D. D. Williams.[12] It was never released to the public and was only used internally by IBM.

Script is still available from IBM as part of the Document Composition Facility for the z/OS operating system.[13]

SGML and XML systems

The standard generalized markup language (SGML) was based upon IBM Generalized Markup Language (GML). GML was a set of macros on top of IBM Script.

The arrival of SGML/XML as the document model made other typesetting engines popular. Such engines include RenderX's XEP, Datalogics Pager, Penta, Miles 33's OASYS, Xyvision's XML Professional Publisher (XPP), FrameMaker, Arbortext, YesLogic's Prince, QuarkXPress and Adobe InDesign. These products allow users to program their typesetting process around the SGML/XML content with the help of scripting languages. Some of them, such as Arbortext Editor and XMetaL Author, provide attractive WYSIWYG-ish interfaces with support for XML standards and Unicode to attract a wider spectrum of users.
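As a minimal, product-neutral illustration of the idea of programming typesetting around structured markup (the element names, style map, and point sizes below are invented for the example, not taken from any of the systems above), a few lines of Python can walk a small XML document and assign each element a typographic treatment:

import xml.etree.ElementTree as ET

# A tiny, hypothetical XML document: structure only, no formatting.
DOC = """
<article>
  <title>Typesetting from XML</title>
  <section>
    <head>Background</head>
    <para>Markup describes structure; a stylesheet decides appearance.</para>
  </section>
</article>
"""

# Invented style map: element name -> (font, size in points).
STYLES = {
    "title": ("Helvetica-Bold", 18),
    "head":  ("Helvetica-Bold", 12),
    "para":  ("Times-Roman",    10),
}

def render(elem, depth=0):
    """Walk the tree and print a crude 'typeset' view of each element."""
    font, size = STYLES.get(elem.tag, ("Times-Roman", 10))
    text = (elem.text or "").strip()
    if text:
        print(f"{'  ' * depth}[{font} {size}pt] {text}")
    for child in elem:
        render(child, depth + 1)

render(ET.fromstring(DOC))

The point of the sketch is simply that the markup carries structure and a separate mapping carries appearance; the commercial engines listed above apply the same separation at far greater sophistication.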

Troff and successors

Main article: Troff

During the mid-1970s, Joseph Ossanna, working at Bell Laboratories, wrote the troff typesetting program to drive a Wang C/A/T phototypesetter owned by the Labs; it was later enhanced by Brian Kernighan to support output to different equipment, such as laser printers. While its use has fallen off, it is still included with a number of Unix and Unix-like systems, and has been used to typeset a number of high-profile technical and computer books. Some versions, as well as a GNU work-alike called groff, are now open source.
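As a small sketch of how troff is typically driven today (assuming a system with groff installed; the file names and text are invented, and the widely used ms macro package is shown rather than anything specific to the historical Bell Labs setup):

import subprocess
from pathlib import Path

# A minimal document using the ms macro package:
#   .TL = title, .AU = author, .PP = start a paragraph, .B = set a word in bold.
MS_DOC = """\
.TL
A Short troff Example
.AU
A. N. Author
.PP
This paragraph is formatted by troff using the ms macros.
The next word is set in
.B bold
type.
"""

src = Path("example.ms")
src.write_text(MS_DOC)

# Format to PostScript (groff's default output device).
with open("example.ps", "wb") as out:
    subprocess.run(["groff", "-ms", str(src)], stdout=out, check=True)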

TeX and LaTeX

Mathematical text typeset using TeX and the AMS Euler font

Main article: TeX

The TeX system, developed by Donald E. Knuth at the end of the 1970s, is another widespread and powerful automated typesetting system that has set high standards, especially for typesetting mathematics. TeX is considered fairly difficult to learn on its own, and deals more with appearance than structure. The LaTeX macro package, written by Leslie Lamport at the beginning of the 1980s, offered a simpler interface and an easier way to systematically encode the structure of a document. LaTeX markup is very widely used in academic circles for published papers and even books. Although standard TeX does not provide an interface of any sort, there are programs that do. These programs include Scientific Workplace, TeXmacs and LyX, which are graphical/interactive editors.
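As a minimal, self-contained illustration of the kind of markup involved (the formula chosen is arbitrary), a complete LaTeX document with one displayed, numbered equation looks like this:

\documentclass{article}
\begin{document}

\section{A Displayed Formula}

The roots of $ax^2 + bx + c = 0$ are given by
\begin{equation}
  x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}.
\end{equation}

\end{document}

The author writes only the structure and the mathematics; line breaking, spacing within the formula, and the equation number are decided by the system.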

Phototypesetting

Phototypesetting was a method of setting type, rendered obsolete with the popularity of the personal computer and desktop publishing software, that used a photographic process to generate columns of type on a scroll of photographic paper.[1] Typesetters used a machine called a phototypesetter, which would quickly project light through a film negative image of an individual character in a font, then through a lens that magnified or reduced the size of the character onto photographic paper, which collected on a spool in a light-tight canister. The photographic paper or film would then be fed into a processor, a machine that pulled the paper or film strip through two or three baths of chemicals, from which it would emerge ready for paste-up or film make-up.

History

Berthold photosetting units tps 6300 and tpu 6308

Linotype CRTronic 360

1950s and 60s

Initial phototypesetting machines

Phototypesetting machines projected characters onto film for offset printing. In 1949, the Photon Corporation in Cambridge, Mass. developed equipment based on the Lumitype of Rene Higonnet and Louis Moyroud.[2] The Lumitype-Photon was first used to set a complete published book in 1953, and for newspaper work in 1954.[3] Mergenthaler produced the Linofilm using a different design and Monotype produced Monophoto. Other companies followed with products that included Alphatype and Varityper.

The major advance of phototypesetting machines over "hot type" linecasting machines such as the Linotype was the elimination of metal type, an intermediate step no longer required once offset printing became the norm. This "cold type" technology could also be used in office environments where "hot metal" machines (the Mergenthaler Linotype, the Harris Intertype and the Monotype) could not. The use of phototypesetting grew rapidly in the 1960s when software was developed to convert marked-up copy, usually typed on paper tape, to the codes that controlled the phototypesetters.

To provide much greater speeds, the Photon Corporation produced the ZIP 200 machine for the MEDLARS project of the National Library of Medicine and Mergenthaler produced the Linotron. The ZIP 200 could produce text at 600 characters per second using high speed flashes behind plates with images of the characters to be printed. Each character had a separate xenon flash constantly ready to fire. A separate system of optics positioned the image on the page.[4]

Use of CRT screens for phototypesetting

An enormous advance was made by the mid-1960s with the development of equipment that projected the characters onto CRT screens. Alphanumeric Corporation (later Autologic) produced the APS series. Rudolf Hell developed the Digiset machine in Germany. The RCA Graphic Systems Division manufactured this in the U.S. as the Videocomp, later marketed by Information International Inc. Software for operator-controlled hyphenation was a major component of electronic typesetting. Early work on this topic produced paper tape to control hot metal machines; C. J. Duncan, at the University of Durham in England, was a pioneer. The earliest applications of computer-controlled phototypesetting machines included the output of the Russian translation programs of Gilbert King at the IBM Research Laboratories, and the built-up mathematical formulas and other material produced in the Cooperative Computing Laboratory of Michael Barnett at MIT.

There are extensive accounts of the early applications,[5] the equipment,[6][7] and the PAGE I algorithmic typesetting language for the Videocomp, which introduced elaborate formatting.[8]

In Europe, the company of Berthold had no experience in developing hot-metal typesetting equipment, but being one of the largest German type foundries, it applied itself to the transition. Berthold successfully developed its Diatype (1960), Diatronic (1967), and ADS (1977) machines, which led the European high-end typesetting market for decades.

1970s

Expansion of technology to small users

A Berthold Diatronic master plate, showing Futura

Compugraphic produced phototypesetting machines in the 1970s that made it economically feasible for small publications to set their own type with professional quality. One model, the Compugraphic Compuwriter, used a filmstrip wrapped around a drum that rotated at several hundred revolutions per minute. The filmstrip contained two fonts (a Roman and a bold, or a Roman and an italic) in one point size. To get different sized fonts, the typesetter loaded a different font strip or used a 2x magnifying lens built into the machine, which doubled the size of the font. The CompuWriter II automated the lens switch and let the operator use multiple settings. Other manufacturers of photocomposition machines included Alphatype, Varityper, Mergenthaler, Autologic, Berthold, Dymo, Harris (formerly Linotype's competitor "Intertype"), Monotype, Star/Photon, Graphic Systems Inc., Hell AG, MGD Graphic Systems, and American Type Founders.

Released in 1975, the Compuwriter IV held two filmstrips, each holding four fonts (usually Roman, italic, bold, and bold italic). It also had a lens turret with eight lenses, giving different point sizes from the font, generally 8 or 12 sizes, depending on the model. Low-end models offered sizes from 6 to 36 point, while the high-end models went to 72 point. The Compugraphic EditWriter series took the Compuwriter IV configuration and added storage on an 8-inch, 320K disk. This allowed the typesetter to make changes and corrections without rekeying. A CRT screen let the user view typesetting codes and text.

Because early generations of phototypesetters couldn't change text size and font easily, many composing rooms and print shops had special machines designed to set display type or headlines. One such model was the PhotoTypositor, manufactured by Visual Graphics Corporation, which let the user position each letter visually and thus retain complete control over kerning. Compugraphic's model 7200 used the "strobe-through-a-filmstrip-through-a-lens" technology to expose letters and characters onto a thin strip of phototypesetting paper that was then developed by a photo-processor.

Some later phototypesetters utilized a CRT to project the image of letters onto the photographic paper. This created a sharper image, added some flexibility in manipulating the type, and created the ability to offer a continuous range of point sizes by eliminating film media and lenses. The Compugraphic MCS (Modular Composition System) with the 8400 typesetter is an example of a CRT phototypesetter. This machine loaded digital fonts into memory from an 8-inch floppy. Additionally, the 8400 was able to set type in point sizes between 5 and 120 point in 1/2-point increments. It was extremely fast and was one of the first output systems (the other was also a Compugraphic machine, the 8600) that was able to create camera-ready output with a maximum width of 12 inches.

As phototypesetting machines matured as a technology in the 1970s, more efficient methods were found for creating and subsequently editing text intended for the printed page. Previously, "hot metal" typesetting equipment had incorporated a built-in keyboard, such that the machine operator would create both the original text and the medium (lead type slugs) that would create the printed page. Subsequent editing of this copy required that the entire process be repeated: the operator would re-keyboard some or all of the original text, incorporating the corrections and new material into the original draft.

CRT based editing terminals, which could work compatibly with a variety of phototypesetting machines, were a major technical innovation in this regard. Keyboarding the original text on a CRT screen, with easy-to-use editing commands, was faster than keyboarding on a Linotype machine. Storing the text magnetically for easy retrieval and subsequent editing also saved time.

An early developer of CRT-based editing terminals for photocomposition machines was Omnitext of Ann Arbor, Michigan. These CRT phototypesetting terminals were sold under the Singer brand name during the 1970s.[9]

1980s

Transition to computers

Early machines had no text storage capability; some machines only displayed 32 characters in uppercase on a small LED screen and spellchecking was not available.

Proofing typeset galleys was an important step after developing the photo paper. Corrections could be made by typesetting a word or line of type, waxing the back of the galley, cutting the correction out with an X-Acto knife, and pasting it on top of the mistake.

Since most early phototypesetting machines could only create one column of type, long galleys of type were pasted onto layout boards in order to create a full page of text for magazines and newsletters. Paste-up artists played an important role in creating production art. Later phototypesetters had multiple column features that allowed the typesetter to save paste-up time.

Early electronic typesetting programs were designed to drive phototypesetters, most notably the Graphic Systems CAT phototypesetter that troff was designed to provide input for.[10] Though such programs still exist, their output is no longer targeted at any specific form of hardware. Some companies, such as TeleTypesetting Co., created software and hardware interfaces that allowed personal computers like the Apple II and IBM PS/2 to connect to phototypesetting machines.[11] With the start of desktop publishing software, Trout Computing introduced VepSet, which allowed Xerox Ventura Publisher to be used as a front end and wrote a Compugraphic MCS disk with typesetting codes to reproduce the page layout.

In retrospect, cold type paved the way for the vast range of modern digital fonts, with the lighter weight of equipment allowing far larger families than had been possible with metal type. However, modern designers have noted that the compromises of cold type, such as altered letterforms, were carried over into digital type, when a better path might have been to return to the traditions of metal type. Adrian Frutiger, who in his early career redesigned many fonts for phototype, noted that "the fonts [I redrew] don't have any historical worth...to think of the sort of aberrations I had to produce in order to see a good result on Lumitype! V and W needed huge crotches in order to stay open. I nearly had to introduce serifs in order to prevent rounded-off corners – instead of a sans serif the drafts were a bunch of misshapen sausages!"

Computerised Typesetting

Computerized typesetting is a method of typesetting in which characters are generated by computer and transferred to light-sensitive paper or film by means of either pulses from a laser beam or moving rays of light from a stroboscopic source or a cathode-ray tube (CRT). The system includes a keyboard that produces magnetic tape—or, formerly, punched paper—for input, a computer for making hyphenation and other end-of-line and page-makeup decisions, and a typesetting unit for output. The keyboard may be a counting keyboard, which allows the operator to decide placement and spacing and is suitable for tables, formulas, and equations, or a noncounting keyboard, which is faster and cheaper to operate and suitable for solid text. The computer must be programmed carefully for optimal word spacing and correct hyphenation. Older typesetters have a photounit with an optical type font carried as a negative image or image master. It may be a grid, disk, drum, or film strip. Light flashed through the characters projects them through a lens onto light-sensitive paper or film. The optical systems are supplanted in newer equipment by laser beams that form the various parts of each character in response to computer-generated electric pulses.
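The end-of-line decisions mentioned above, deciding where to break lines and how to distribute word spacing, can be illustrated with a deliberately simple greedy sketch; this is not how any particular commercial system worked, and the 30-character measure is arbitrary:

def break_lines(words, measure):
    """Greedy line breaking: fill each line until the next word won't fit."""
    lines, current = [], []
    for word in words:
        trial = " ".join(current + [word])
        if current and len(trial) > measure:
            lines.append(current)
            current = [word]
        else:
            current.append(word)
    if current:
        lines.append(current)
    return lines

def justify(line, measure):
    """Pad the gaps between words so the line exactly fills the measure."""
    if len(line) == 1:
        return line[0]
    gaps = len(line) - 1
    padding = measure - sum(len(w) for w in line)
    base, extra = divmod(padding, gaps)
    out = ""
    for i, word in enumerate(line[:-1]):
        out += word + " " * (base + (1 if i < extra else 0))
    return out + line[-1]

text = ("The computer must be programmed carefully for optimal "
        "word spacing and correct hyphenation").split()
lines = break_lines(text, 30)
for line in lines[:-1]:
    print(justify(line, 30))
print(" ".join(lines[-1]))  # the last line is set ragged, as usual

Real systems add hyphenation and far better break-point selection (TeX, for instance, optimizes breaks over whole paragraphs), but the division of labour is the same: measure the words, choose break points, then distribute the spacing.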

Some systems have a video display terminal (VDT), consisting of a keyboard and a CRT viewing screen, that enables the operator to see and correct the words as they are being typed. If a system has a line printer, it can produce printouts of “hard copy.”

An optical character recognition (OCR) system “reads” typed copy and records the characters on a machine-readable tape. It converts the tape into electronic signals that enter the recognition unit and are converted into copy without an operator at the keyboard.

Photocomposition is perhaps the most important innovation in typesetting since the development of movable type; it and other forms of computerized typesetting eliminate metal casting and produce a page that is virtually indistinguishable from one produced by metal type. Its main advantage is speed. A linecasting machine produces 5 characters per second; an early phototypesetting system can set between 30 and 100. A fully computerized typesetter with sophisticated electronics can set up to 10,000 characters per second, the actual speed being limited by the speed of the film transport mechanism.

UNIT – 4 Art & Copy Preparation

In this day and age of computerised page layout and DTP production, it seems so easy to just type all of a job's body text into PageMaker, Quark or Ventura etc. and change the layout as you go, maybe even the design.

Yet a little discipline, or coordination with workmates, can make life, and the production process, so much simpler. These Text & Layout tutorials offer clues about how to achieve this. One of the worst habits we develop if poorly trained is entering body text on the finished page, or typing text into a word processor (best move) and laying out tabs and text weight etc. as we go (worst move)!

So, force yourself or associates that do the setting to use a word processor and simply type and type until the end. ONLY USE the "Enter" key at the end of paragraphs and headings. Leave the layout alone. PRINT, CHECK, EDIT: THEN the raw text can be simply imported into the page layout, adjusting the spacing to fit etc. Don't do it twice - and you will if this method is not adopted.

How the Pros do it! Copy preparation is a skilled job which, if done properly, assists the smooth flow of work through later stages of the production cycle. All personnel, especially those involved in the composition areas, have seen the results of ineffective copy preparation. Basically, however, poorly prepared copy results in extra costs to the job at either the expense of the client (bad, non-competitive) or the expense of the production process (bad, maybe the loss of your employment!). Plus it creates problems for those further down the production line.

The following criteria will greatly assist in eliminating unnecessary corrections resulting from bad copy: Not all items listed are applicable if you work for yourself, but even then the job will be quicker overall and more accurate. Many before you have proved it, so try it.

1. Allow a good column width on the left of about 15 cm for written instructions.
2. Typewritten, double or triple line spaced.
3. Never type on both sides of the paper.
4. Use white, A4 size, matt-finish, good quality paper (Bond).
5. Number each page in the top right-hand corner.
6. Ensure there are a minimum number of corrections per page. Heard of a spell check?
7. Type according to layout, i.e. allow a line for each type face change or size of type face.
8. Maintain the same number of typewritten lines per page.
9. Use typewriter pica face (10 characters per inch); type 55–60 characters per line.
10. Never provide page attachments, e.g. Page 2a, Page 4a.
11. Allow a left-hand margin of 4 cm for mark-up instructions.
12. Use black carbon electronic typewriter ribbon rather than cloth.
13. Matter not to be typeset should be circled.
14. Use accepted copy preparation/proof readers' marks.
15. Clearly indicate the last page of the copy.
16. Ensure that all instructions and copy are together and that they correctly relate to one another.
17. Write your instructions to the left of the typing. Write all instructions on the copy; not half on copy and the rest on layout.
18. All instructions should be neat, clear, concise and correct.
19. Sectionalise your copy — if you require Lettering modifications and Phototypesetting, divide copy for each system, thus giving each department the facility to work simultaneously on your job.

In marking up specifications and instructions on the copy, remember that the copy may pass through many hands and unless the copy instructions are plainly and precisely written, there is a possibility that certain instructions, particularly verbal instructions, can be overlooked.

Uniformity of Style

When marking up copy, ensure that the typographic styles for headings, subheadings, captions and text point sizes, spacing and indentation are maintained. Punctuation style within the text should also be maintained. Abbreviations and capitalisation throughout the text must be uniform to maintain the style of the job, especially if the copy is likely to be received from different sources.

Typesetting Instructions

Typesetting is expensive and time-consuming. Fewer errors and less resetting mean more efficient, cheaper jobs. The easiest way to reduce typesetting errors is to provide clear and complete instructions.

Shown below is a list of all the information a typesetter/page-layout person could need. It is a long list and you will not need to use all of it for every job, but each item should be considered:

• Size: in points
• Leading: in points
• Line length: in picas
• Column structure: justified, or flush left/ragged right (fl l/rag r)
• Type family: Palatino, Helvetica
• Weight: light, med, bold
• Posture: roman, italic
• Capitalisation: CAPS, U/lc, lc
• Indentation: in ems
• Letter/word spacing (Let/Wrd): from loose to touching
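As a simple way to keep such a specification in one reusable place (a sketch only; the field names and default values below are invented for the example, not part of any standard), one might record it as a small data structure:

from dataclasses import dataclass

@dataclass
class TypeSpec:
    """One job's typesetting specification, mirroring the checklist above."""
    size_pt: float = 10.0          # type size in points
    leading_pt: float = 12.0       # leading in points
    measure_picas: float = 24.0    # line length in picas
    columns: str = "justified"     # or "flush left/ragged right"
    family: str = "Palatino"
    weight: str = "medium"         # light, medium, bold
    posture: str = "roman"         # roman or italic
    capitalisation: str = "U/lc"   # CAPS, U/lc, lc
    indent_ems: float = 1.0
    tracking: str = "normal"       # letter/word spacing, loose to touching

body = TypeSpec()
caption = TypeSpec(size_pt=8.0, leading_pt=9.5, posture="italic")
print(body)
print(caption)

Writing the specification down once, in whatever form, is the point: every job then carries its complete instructions rather than relying on memory or verbal notes.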

Layouts

Layout may refer to:

• Page layout
  o Comprehensive layout
  o In computer graphics, another name for a scene used to render 2D/3D graphics or animation
• Layout (computing), software that automatically calculates the layout of objects
  o Web browser engine, the core software that does layout of objects in a web browser
• Automobile layout
• Integrated circuit layout
• Keyboard layout
• Model railroad layout
• Layout (fabrication), the transfer of a design onto a workpiece
• Layout, an alternative name for the off-side rule in programming language syntax
• In an industrial plant, the way facilities are placed according to a plant layout study
• Split (gymnastics), a position in which the gymnast's body is completely stretched
• In Ultimate (sport), an attempt to catch the flying disc involving a jump that results in a horizontal landing
• Process layout, the floor plan of a plant, where the machines are grouped according to their functions
• Product layout, the floor plan of a plant, where the machines are ordered by the assembly sequence

Page layout is the part of graphic design that deals in the arrangement of visual elements on a page. It generally involves organizational principles of composition to achieve specific communication objectives.[1]

The high-level page layout involves deciding on the overall arrangement of text and images, and possibly on the size or shape of the medium. It requires intelligence, sentience, and creativity, and is informed by culture, psychology, and what the document authors and editors wish to communicate and emphasize. Low-level pagination and typesetting are more mechanical processes: given certain parameters (boundaries of text areas, the typeface, font size, and justification preference), they can be done in a straightforward way. Until desktop publishing became dominant, these processes were still done by people, but in modern publishing they are almost always automated. The result might be published as-is (as for a phone book interior) or might be tweaked by a graphic designer (as for a highly polished, expensive publication).

Beginning from early illuminated pages in hand-copied books of the Middle Ages and proceeding down to intricate modern magazine and catalog layouts, proper page design has long been a consideration in printed material. With print media, elements usually consist of type (text), images (pictures), and occasionally place-holder graphics for elements that are not printed with ink such as die/laser cutting, foil stamping or blind embossing.

History and layout technologies

Direct physical layout

With manuscripts, all of the elements are added by hand, so the creator can determine the layout directly as they create the work, perhaps with an advance sketch as a guide.

With ancient woodblock printing, all elements of the page were carved directly into wood, though later layout decisions might need to be made if the printing was transferred onto a larger work, such as a large piece of fabric, potentially with multiple block impressions.

With the Renaissance invention of letterpress printing and cold-metal moveable type, typesetting was accomplished by physically assembling characters using a composing stick into a galley - a long tray. Any images would be created by engraving.

The original document would be a hand-written manuscript; if the typesetting was performed by someone other than the layout artist, markup would be added to the manuscript with instructions as to typeface, font size, and so on. (Even after authors began to use typewriters in the 1860s, originals are still called "manuscripts", and the markup process was the same.) After the first round of typesetting, a galley proof might be printed in order for proofreading to be performed, either to correct errors in the original, or to make sure that the typesetter had copied the manuscript properly, and correctly interpreted the markup. The final layout would be constructed in a "form" or "forme" using pieces of wood or metal ("furniture") to space out the text and images as desired, a frame known as a chase, and objects which lock down the frame known as quoins. This process is called imposition, and potentially includes arranging multiple pages to be printed on the same sheet of paper which will later be folded and possibly trimmed. An "imposition proof" (essentially a short run of the press) might be created to check the final placement.

The invention of hot metal typesetting in 1884 sped up the typesetting process by allowing workers to produce slugs - entire lines of text - using a keyboard. The slugs were the result of molten metal being poured into molds temporarily assembled by the typesetting machine. The layout process remained the same as with cold metal type, however - assembly into physical galleys.

Paste-up era

Offset lithography allows the bright and dark areas of an image (at first captured on film) to control ink placement on the printing press. This means that if a single copy of the page can be created on paper and photographed, then any number of copies could be printed. Type could be set with a typewriter, or to achieve professional results comparable to letterpress, a specialized typesetting machine. The IBM Selectric Composer, for example, could produce type of different size, different fonts (including proportional fonts), and with text justification. With photoengraving and halftone, physical photographs could be transferred into print directly, rather than relying on hand-made engravings.

The layout process then became the task of creating the paste-up, so named because rubber cement or other adhesive would be used to physically paste images and columns of text onto a rigid sheet of paper. Completed pages became known as camera-ready, "mechanicals" or "mechanical art".

Phototypesetting was invented in 1945; after keyboard input, characters were shot one-by-one onto a photographic negative, which could then be sent to the print shop directly, or shot onto photographic paper for paste-up. These machines became increasingly sophisticated, with computer-driven models able to store text on magnetic tape.

Computer-aided publishing

Editors work on producing an issue of Bild, 1977 in West Berlin. Previous front pages are affixed to the wall behind them.

As the graphics capabilities of computers matured, they began to be used to render characters, columns, pages, and even multi-page signatures directly, rather than simply summoning a photographic template from a pre-supplied set. In addition to being used as display devices for computer operators, cathode ray tubes can be used to render text for phototypesetting.

Printers attached directly to computers allowed them to print documents directly, in multiple copies or as an original which could be copied on a ditto machine or photocopier. WYSIWYG word processors made it possible for general office users and consumers to make more sophisticated page layouts, use text justification, and use more fonts than were possible with typewriters. Early dot matrix printing was sufficient for office documents, but was of too low a quality for professional typesetting. Inkjet printing and laser printing did produce sufficient quality type, and so computers with these types of printers quickly replaced phototypesetting machines.

With modern desktop publishing software, the layout process can occur entirely on-screen. (Similar layout options that would be available to a professional print shop making a paste-up are supported by desktop publishing software; in contrast, "word processing" software usually has a much more limited set of layout and typography choices available, trading off flexibility for ease of use for more common applications.) A finished document can be directly printed as the camera-ready version, with no physical assembly required (given a big enough printer). Greyscale images must be either halftoned digitally if being sent to an offset press, or sent separately for the print shop to insert into marked areas. Completed works can also be transmitted digitally to the print shop, which may print it, shoot it directly to film, or use computer-to-plate technology to skip the physical original entirely. PostScript and Portable Document Format (PDF) have become standard file formats for digital transmission.

Digital media (non-paper)

Since the advent of personal computing, page layout skills have expanded to electronic media as well as print media. E-books, PDF documents, and static web pages mirror paper documents relatively closely, but computers can also add multimedia, animation, and interactivity. Page layout for interactive media overlaps with interface design and user experience design; an interactive "page" is better known as a graphical user interface (GUI).

Modern web pages are typically produced using HTML for content and general structure, cascading style sheets to control presentation details such as typography and spacing, and JavaScript for interactivity. Since these languages are all text-based, this work can be done in a text editor, or a special HTML editor which may have WYSIWYG features or other aids. Additional technologies such as Macromedia Flash may be used for multimedia content. Web developers are responsible for actually creating a finished document using these technologies, but a separate web designer may be responsible for establishing the layout. A given web designer might be a fluent web developer as well, or may simply be familiar with the general capabilities of the technologies and visualize the desired result for the development team.
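As a minimal sketch of the content/presentation split described above (the file name, styles, and text are invented, and this is an illustration rather than a recommended production workflow), a few lines of Python can emit a page whose structure is HTML and whose typography lives in a small CSS block:

from pathlib import Path

# Hypothetical page: the HTML carries structure, the <style> block carries presentation.
PAGE = """<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Layout demo</title>
  <style>
    body  { font-family: Georgia, serif; line-height: 1.4; margin: 2em; }
    h1    { font-family: Helvetica, sans-serif; }
    .note { color: #555; font-size: 0.9em; }
  </style>
</head>
<body>
  <h1>Content and presentation</h1>
  <p>The markup describes what this is; the stylesheet decides how it looks.</p>
  <p class="note">Changing only the CSS re-lays-out the page without touching the text.</p>
</body>
</html>
"""

Path("demo.html").write_text(PAGE, encoding="utf-8")
print("Wrote demo.html; open it in a browser.")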

Projected pages

Projected slides used in presentations or entertainment often have similar layout considerations to printed pages.

The magic lantern and opaque projector were used during lectures in the 1800s, using printed, typed, photographed, or hand-drawn originals. Two sets of photographic film (one negative and one positive) or one reversal film can be used to create positive images that can be projected with light passing through. Intertitles were used extensively in the earliest motion pictures when sound was not available; they are still used occasionally in addition to the ubiquitous vanity cards and credits.

It became popular to use transparent film for presentations (with opaque text and images) using overhead projectors in the 1940s, and slide projectors in the 1950s. Transparencies for overhead projectors could be printed by some photocopiers. Computer presentation programs became available in the 1980s, making it possible to lay out a presentation digitally. Computer-developed presentations could be printed to a transparency with some laser printers, transferred to slides, or projected directly using LCD overhead projectors. Modern presentations are often displayed digitally using a video projector, computer monitor, or large-screen television.

Laying out a presentation presents slightly different challenges than a print document, especially because a person will typically be speaking and referring to the projected pages. Consideration might be given to:

• Sizing text and graphics so they can be seen from the back of the room, which limits the amount of information that can be presented on a single slide
• Using headers, footers, or repeated elements to make all pages similar so they feel cohesive, or indicate progress
• Using titles to introduce new topics or segments
• Pacing, so slides are changed at comfortable intervals, fit the length of the talk, and content order matches the speaker's expectation
• Providing a way for the speaker to refer to specific items on the page, such as with color, verbal labels, or a laser pointer
• Editing the information presented so it either repeats what the speaker is saying (so the audience can pay attention to either) or only presents information that cannot be conveyed verbally (to avoid dividing audience attention or simply reading slides directly)
• Use of animation to add emphasis, introduce information slowly, or be entertaining
• Making the slides useful for later reference if printed as handouts or posted online

Grids versus templates

Grids and templates are page layout design patterns used in advertising campaigns and multiple-page publications, including websites.

A grid is a set of guidelines, able to be seen in the design process and invisible to the end-user/audience, for aligning and repeating elements on a page. A page layout may or may not stay within those guidelines, depending on how much repetition or variety the design style in the series calls for. Grids are meant to be flexible. Using a grid to lay out elements on the page may require just as much or more graphic design skill than that which was required to design the grid.

In contrast, a template is more rigid. A template involves repeated elements mostly visible to the end-user/audience. Using a template to lay out elements usually involves less graphic design skill than that which was required to design the template. Templates are used for minimal modification of background elements and frequent modification (or swapping) of foreground content.

Most desktop publishing software allows for grids in the form of a page filled with coloured lines or dots placed at a specified equal horizontal and vertical distance apart. Automatic margins and booklet spine (gutter) lines may be specified for global use throughout the document. Multiple additional horizontal and vertical lines may be placed at any point on the page. Shapes invisible to the end-user/audience may also be placed on the page as guidelines for page layout and print processing. Software templates are achieved by duplicating a template data file, or with master page features in a multiple-page document. Master pages may include both grid elements and template elements such as header and footer elements, automatic page numbering, and automatic table of contents features.
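As a small arithmetic sketch of how such column guides can be derived (the page width, margin, column count, and gutter below are arbitrary example values, not defaults of any particular program):

def column_guides(page_width, margin, columns, gutter):
    """Return (left, right) x-positions of each column on the page.

    All measurements share one unit (e.g. millimetres or points).
    """
    printable = page_width - 2 * margin
    col_width = (printable - gutter * (columns - 1)) / columns
    guides = []
    x = margin
    for _ in range(columns):
        guides.append((round(x, 2), round(x + col_width, 2)))
        x += col_width + gutter
    return guides

# Example: a 210 mm wide (A4) page, 20 mm margins, 3 columns, 5 mm gutters.
for i, (left, right) in enumerate(column_guides(210, 20, 3, 5), start=1):
    print(f"column {i}: {left} mm to {right} mm")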

Static versus dynamic layouts

A dynamic layout lays out all text and images into rectangular areas of rows and columns. Because these areas' widths and heights are defined as percentages of the available screen, they respond to varying screen dimensions, automatically making maximal use of the available space while adapting to screen resizes and hardware-imposed restrictions. Fonts may be freely resized to meet users' individual legibility needs without disturbing a given layout's proportions. In this way, the content's overall arrangement on screen can always remain as it was originally designed.
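A toy sketch of the idea, resolving percentage-based regions to pixel rectangles for whatever screen is available (the region names and percentages are invented for the example):

def resolve(regions, screen_w, screen_h):
    """Convert percentage-based regions to pixel rectangles (x, y, w, h)."""
    resolved = {}
    for name, (x_pct, y_pct, w_pct, h_pct) in regions.items():
        resolved[name] = (
            round(screen_w * x_pct / 100),
            round(screen_h * y_pct / 100),
            round(screen_w * w_pct / 100),
            round(screen_h * h_pct / 100),
        )
    return resolved

# Hypothetical layout: header across the top, sidebar on the left, main content.
LAYOUT = {
    "header":  (0,  0, 100, 10),
    "sidebar": (0, 10,  25, 90),
    "main":    (25, 10, 75, 90),
}

for screen in [(1920, 1080), (800, 600)]:
    print(screen, resolve(LAYOUT, *screen))

The proportions between the regions stay the same on both screens; only the pixel values change, which is exactly what keeps the designed arrangement intact.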

Electronic pages allow for dynamic layouts with swapping content, personalization of styles, text scaling, image scaling, or reflowable content with variable page sizes often referred to as fluid or liquid layout.

Motion graphics don't fit neatly into either category, but may involve layout skills or careful consideration of how the motion may affect the layout. In either case, the element of motion makes it a dynamic layout, but one that warrants motion graphic design more than static graphic design or interactive design.

Electronic pages may utilize both static and dynamic layout features by dividing the pages or by combining the effects. For example, a section of the page such as a web banner may contain static or motion graphics contained within a swapping content area. Dynamic or live text may be wrapped around irregularly shaped images by using invisible spacers to push the text away from the edges. Some computer algorithms can detect the edges of an object that contain transparency and flow content around contours.

Front-end versus back-end

With modern media content retrieval and output technology, there is much overlap between visual communications (front-end) and information technology (back-end). Large print publications (thick books, especially instructional in nature) and electronic pages (web pages) require metadata for automatic indexing, automatic reformatting, database publishing, dynamic page display and end-user interactivity. Much of the metadata (meta tags) must be hand coded or specified during the page layout process. This divides the task of page layout between artists and engineers, or tasks the artist/engineer to do both.

More complex projects may require two separate designs: page layout design as the front-end, and function coding as the back-end. In this case, the front-end may be designed using an alternative page layout technology such as image editing software or on paper with hand rendering methods. Most image editing software includes features for converting a page layout for use in a "What You See Is What You Get" (WYSIWYG) editor or features to export graphics for desktop publishing software. WYSIWYG editors and desktop publishing software allow front-end design prior to back-end coding in most cases. Interface design and database publishing may involve more technical knowledge or collaboration with information technology engineering in the front-end.

Design elements and choices

Page layout might be prescribed to a greater or lesser degree by a house style which might be implemented in a specific desktop publishing template. There might also be relatively little layout to do in comparison to the amount of pagination (as in novels and other books with no figures). Typical page layout decisions include:

• Size of page margins
• Size and position of images and figures
• Deciding on the number and size of columns and gutters (gaps between columns)
• Placement of intentional whitespace
• Use of special effects like overlaying text on an image, runaround and intrusions, or bleeding an image over the page margin
• Use of color printing or spot color for emphasis

Specific elements to be laid out might include:

• Chapter or section titles, or headlines and subheads
• Image captions
• Pull quotes and nut graphs, which might be added out of course or to make a short story fit the layout
• Boxouts and sidebars, which present information as asides from the main text flow
• Page headers and page footers, the contents of which are usually uniform across content pages and thus automatically duplicated by layout software. The page number is usually included in the header or footer, and software automatically increments it for each page.
• Table of contents
• Notes like footnotes and end notes; bibliography, for example in academic journals or textbooks

In newspaper production, final selection and cropping of photographs accompanying stories might be left to the layout editor (since the choice of photo could affect the shape of the area needed, and thus the rest of the layout), or there might be a separate photo editor. Likewise, headlines might be written by the layout editor, a copy editor, or the original author.

To make stories fit the final layout, relatively inconsequential copy tweaks might be made (for example, rephrasing for brevity), or the layout editor might make slight adjustments to typography elements like font size or leading.

Floating block

A floating block in writing and publishing is any graphic, text, table, or other representation that is unaligned from the main flow of text. Use of floating blocks to present pictures and tables is a typical feature of academic writing and technical writing, including scientific articles and books. Floating blocks are normally labeled with a caption or title that describes their contents and a number that is used to refer to the block from the main text. A common system divides floating blocks into two separately numbered series, labeled figure (for pictures, diagrams, plots, etc.) and table. An alternative name for figure is image or graphic.

Floating blocks are said to be floating because they are not fixed in position on the page at the point where they are referenced, but rather drift to the side of the page. By placing pictures or other large items on the sides of pages[2] rather than embedding them in the middle of the main flow of text, typesetting is more flexible and interruption to the flow of the narrative is avoided.

For example, an article on geography might have "Figure 1: Map of the world", "Figure 2: Map of Europe", "Table 1: Population of continents", "Table 2: Population of European countries", and so on. Some books will have a table of figures—in addition to the table of contents—that lists centrally all the figures appearing in the work.
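In LaTeX, for instance, this numbering and cross-referencing of floats is handled by environments such as figure and equation; the fragment below is only an illustrative sketch (the caption, labels, and placeholder box are invented):

\documentclass{article}
\usepackage{amsmath}
\begin{document}

As Figure~\ref{fig:world} and equation~\eqref{eq:area} show, floats and
numbered equations are referred to from the text by label, not by position.

\begin{figure}[t]% `t' asks LaTeX to float this block to the top of a page
  \centering
  \rule{0.6\textwidth}{3cm}% placeholder box standing in for a real image
  \caption{Map of the world}
  \label{fig:world}
\end{figure}

\begin{equation}
  A = 4\pi r^{2}
  \label{eq:area}
\end{equation}

\end{document}

Because the text refers to the float only by its number, the typesetting system is free to move the block to wherever it fits best on the page.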

Other kinds of floating blocks may be differentiated as well, for example:

Sidebar:[3] For digressions from the main narrative. For example, a technical manual on usage of a product might include, in sidebars, examples of how various people have employed the product in their work. Also called an intermezzo. See sidebar (publishing).
Program: Articles and books on computer programming often place code and algorithms in a figure.
Equation: Writing on mathematics may place large blocks of mathematical notation in numbered blocks set apart from the main text.

Presenting layouts under development

A mock-up of a layout might be created to get early feedback, usually before all the content is actually ready. Whether for paper or electronic media, the first draft of a layout might be simply a rough paper and pencil sketch. A comprehensive layout for a new magazine might show placeholders for text and images, but demonstrate placement, typographic style, and other idioms intended to set the pattern for actual issues, or a particular unfinished issue. A website wireframe is a low-cost way to show layout without doing all the work of creating the final HTML and CSS, and without writing the copy or creating any images.

Lorem ipsum text is often used to avoid the embarrassment any improvised sample copy might cause if accidentally published. Likewise, placeholder images are often labelled "for position only".

Art Works

Art is a diverse range of human activities in creating visual, auditory or performing artifacts – artworks, expressing the author's imaginative or technical skill, intended to be appreciated for their beauty or emotional power.[1][2] In their most general form these activities include the production of works of art, the criticism of art, the study of the history of art, and the aesthetic dissemination of art.

The oldest forms of art are the visual arts, which include the creation of images or objects in fields including painting, sculpture, printmaking, photography, and other visual media. Architecture is often included as one of the visual arts; however, like the decorative arts, it involves the creation of objects where the practical considerations of use are essential—in a way that they usually are not in a painting, for example. Music, theatre, film, dance, and other performing arts, as well as literature and other media such as interactive media, are included in a broader definition of art or the arts.[1][3] Until the 17th century, art referred to any skill or mastery and was not differentiated from crafts or sciences. In modern usage after the 17th century, where aesthetic considerations are paramount, the fine arts are separated and distinguished from acquired skills in general, such as the decorative or applied arts.

Art may be characterized in terms of mimesis (its representation of reality), expression, communication of emotion, or other qualities. During the Romantic period, art came to be seen as "a special faculty of the human mind to be classified with religion and science".[4] Though the definition of what constitutes art is disputed[5][6][7] and has changed over time, general descriptions mention an idea of imaginative or technical skill stemming from human agency[8] and creation.[9]

The nature of art, and related concepts such as creativity and interpretation, are explored in a branch of philosophy known as aesthetics.

In the perspective of the history of art,[9] artistic works have existed for almost as long as humankind: from early pre-historic art to contemporary art; however, some theories restrict the concept of "artistic works" to modern Western societies.[11] One early sense of the definition of art is closely related to the older Latin meaning, which roughly translates to "skill" or "craft," as associated with words such as "artisan." English words derived from this meaning include artifact, artificial, artifice, medical arts, and military arts. However, there are many other colloquial uses of the word, all with some relation to its etymology.

20th-century Rwandan bottle. Artistic works may serve practical functions, in addition to their decorative value.

Few scholars have been more divided than Plato and Aristotle on the question concerning the importance of art, with Aristotle strongly supporting art in general and Plato generally being opposed to its relative importance. Several dialogues in Plato tackle questions about art: Socrates says that poetry is inspired by the muses, and is not rational. He speaks approvingly of this, and other forms of divine madness (drunkenness, eroticism, and dreaming) in the Phaedrus (265a–c), and yet in the Republic wants to outlaw Homer's great poetic art, and laughter as well. In Ion, Socrates gives no hint of the disapproval of Homer that he expresses in the Republic. The dialogue Ion suggests that Homer's Iliad functioned in the ancient Greek world as the Bible does today in the modern Christian world: as divinely inspired literary art that can provide moral guidance, if only it can be properly interpreted. With regard to the literary and musical arts, Aristotle considered epic poetry, tragedy, comedy, dithyrambic poetry and music to be mimetic or imitative art, each varying in imitation by medium, object, and manner.[12] For example, music imitates with the media of rhythm and harmony, whereas dance imitates with rhythm alone, and poetry with language. The forms also differ in their object of imitation. Comedy, for instance, is a dramatic imitation of men worse than average; whereas tragedy imitates men slightly better than average. Lastly, the forms differ in their manner of imitation – through narrative or character, through change or no change, and through drama or no drama.[13] Aristotle believed that imitation is natural to mankind and constitutes one of mankind's advantages over animals.[14]

The second, and more recent, sense of the word art as an abbreviation for creative art or fine art emerged in the early 17th century.[15] Fine art refers to a skill used to express the artist's creativity, or to engage the audience's aesthetic sensibilities, or to draw the audience towards consideration of more refined or finer work of art.

Within this latter sense, the word art may refer to several things: (i) a study of a creative skill, (ii) a process of using the creative skill, (iii) a product of the creative skill, or (iv) the audience's experience with the creative skill. The creative arts (art as discipline) are a collection of disciplines which produce artworks (art as objects) that are compelled by a personal drive (art as activity) and convey a message, mood, or symbolism for the perceiver to interpret (art as experience). Art is something that stimulates an individual's thoughts, emotions, beliefs, or ideas through the senses. Works of art can be explicitly made for this purpose or interpreted on the basis of images or objects. For some scholars, such as Kant, the sciences and the arts could be distinguished by taking science as representing the domain of knowledge and the arts as representing the domain of the freedom of artistic expression.

Often, if the skill is being used in a common or practical way, people will consider it a craft instead of art. Likewise, if the skill is being used in a commercial or industrial way, it may be considered commercial art instead of fine art. On the other hand, crafts and design are sometimes considered applied art. Some art followers have argued that the difference between fine art and applied art has more to do with value judgments made about the art than any clear definitional difference.[16] However, even fine art often has goals beyond pure creativity and self-expression. The purpose of works of art may be to communicate ideas, such as in politically, spiritually, or philosophically motivated art; to create a sense of beauty (see aesthetics); to explore the nature of perception; for pleasure; or to generate strong emotions. The purpose may also be seemingly nonexistent.

The nature of art has been described by philosopher Richard Wollheim as "one of the most elusive of the traditional problems of human culture".[17] Art has been defined as a vehicle for the expression or communication of emotions and ideas, a means for exploring and appreciating formal elements for their own sake, and as mimesis or representation. Art as mimesis has deep roots in the philosophy of Aristotle.[18] Leo Tolstoy identified art as a use of indirect means to communicate from one person to another.[18] Benedetto Croce and R.G. Collingwood advanced the idealist view that art expresses emotions, and that the work of art therefore essentially exists in the mind of the creator.[19][20] The theory of art as form has its roots in the philosophy of Immanuel Kant, and was developed in the early twentieth century by Roger Fry and Clive Bell. More recently, thinkers influenced by Martin Heidegger have interpreted art as the means by which a community develops for itself a medium for self-expression and interpretation.[21] George Dickie has offered an institutional theory of art that defines a work of art as any artifact upon which a qualified person or persons acting on behalf of the social institution commonly referred to as "the art world" has conferred "the status of candidate for appreciation".[22] Larry Shiner has described fine art as "not an essence or a fate but something we have made. Art as we have generally understood it is a European invention barely two hundred years old."[23]

History

History of art

Venus of Willendorf, circa 24,000–22,000 BP

Sculptures, cave paintings, rock paintings and petroglyphs from the Upper Paleolithic dating to roughly 40,000 years ago have been found,[24] but the precise meaning of such art is often disputed because so little is known about the cultures that produced them. The oldest art objects in the world—a series of tiny, drilled snail shells about 75,000 years old—were discovered in a South African cave.[25] Containers that may have been used to hold paints have been found dating as far back as 100,000 years.[26] Shells etched by Homo erectus, dating from between 430,000 and 540,000 years ago, were discovered in 2014.[27]

Cave painting of a horse from the Lascaux caves, circa 16,000 BP

Many great traditions in art have a foundation in the art of one of the great ancient civilizations: Ancient Egypt, Mesopotamia, Persia, India, China, Ancient Greece, Rome, as well as Inca, Maya, and Olmec. Each of these centers of early civilization developed a unique and characteristic style in its art. Because of the size and duration of these civilizations, more of their art works have survived and more of their influence has been transmitted to other cultures and later times. Some also have provided the first records of how artists worked. For example, ancient Greek art saw a veneration of the human physical form and the development of equivalent skills to show musculature, poise, beauty, and anatomically correct proportions.

In Byzantine and Medieval art of the Western Middle Ages, much art focused on the expression of subjects about Biblical and religious culture, and used styles that showed the higher glory of a heavenly world, such as the use of gold in the background of paintings, or glass in mosaics or windows, which also presented figures in idealized, patterned (flat) forms. Nevertheless, a classical realist tradition persisted in small Byzantine works, and realism steadily grew in the art of Catholic Europe. Renaissance art had a greatly increased emphasis on the realistic depiction of the material world, and the place of humans in it, reflected in the corporeality of the human body, and development of a systematic method of graphical perspective to depict recession in a three-dimensional picture space.

The stylized signature of Sultan Mahmud II of the Ottoman Empire was written in Islamic calligraphy. It reads Mahmud Khan son of Abdulhamid is forever victorious.

The Great Mosque of Kairouan in Tunisia, also called the Mosque of Uqba, is one of the finest, most significant and best preserved artistic and architectural examples of early great mosques. Dated in its present state from the 9th century, it is the ancestor and model of all the mosques in the western Islamic lands.[28]

In the east, Islamic art's rejection of iconography led to emphasis on geometric patterns, calligraphy, and architecture. Further east, religion dominated artistic styles and forms too. India and Tibet saw emphasis on painted sculptures and dance, while religious painting borrowed many conventions from sculpture and tended to bright contrasting colors with emphasis on outlines. China saw the flourishing of many art forms: jade carving, bronzework, pottery (including the stunning terracotta army of Emperor Qin), poetry, calligraphy, music, painting, drama, fiction, etc. Chinese styles vary greatly from era to era and each one is traditionally named after the ruling dynasty. So, for example, Tang dynasty paintings are monochromatic and sparse, emphasizing idealized landscapes, while later dynastic paintings are busy and colorful, and focus on telling stories via setting and composition. Japan names its styles after imperial dynasties too, and also saw much interplay between the styles of calligraphy and painting. Woodblock printing became important in Japan after the 17th century.

The western Age of Enlightenment in the 18th century saw artistic depictions of physical and rational certainties of the clockwork universe, as well as politically revolutionary visions of a post-monarchist world, such as Blake's portrayal of Newton as a divine geometer, or David's propagandistic paintings. This led to Romantic rejections of this in favor of pictures of the emotional side and individuality of humans, exemplified in the novels of Goethe. The late 19th century then saw a host of artistic movements, such as academic art, Symbolism, impressionism and fauvism among others.

The history of twentieth-century art is a narrative of endless possibilities and the search for new standards, each being torn down in succession by the next. Thus the parameters of Impressionism, Expressionism, Fauvism, Cubism, Dadaism, Surrealism, etc. cannot be maintained very much beyond the time of their invention. Increasing global interaction during this time saw an equivalent influence of other cultures into Western art. Thus, Japanese woodblock prints (themselves influenced by Western Renaissance draftsmanship) had an immense influence on Impressionism and subsequent development. Later, African sculptures were taken up by Picasso and to some extent by Matisse. Similarly, in the 19th and 20th centuries the West has had huge impacts on Eastern art with originally western ideas like Communism and Post-Modernism exerting a powerful influence.

Modernism, the idealistic search for truth, gave way in the latter half of the 20th century to a realization of its unattainability. Theodor W. Adorno said in 1970, "It is now taken for granted that nothing which concerns art can be taken for granted any more: neither art itself, nor art in relationship to the whole, nor even the right of art to exist."[29] Relativism was accepted as an unavoidable truth, which led to the period of contemporary art and postmodern criticism, where cultures of the world and of history are seen as changing forms, which can be appreciated and drawn from only with skepticism and irony. Furthermore, the separation of cultures is increasingly blurred and some argue it is now more appropriate to think in terms of a global culture, rather than of regional ones.

Forms, genres, media, and styles

The arts

Detail of Leonardo da Vinci's Mona Lisa, showing the painting technique of sfumato

The creative arts are often divided into more specific categories, each related to its technique, or medium, such as decorative arts, plastic arts, performing arts, or literature. Unlike scientific fields, art is one of the few subjects that are academically organized according to technique. An artistic medium is the substance or material the artistic work is made from, and may also refer to the technique used. For example, paint is a medium used in painting, and paper is a medium used in drawing.

An art form is the specific shape, or quality, an artistic expression takes. The media used often influence the form. For example, the form of a sculpture must exist in space in three dimensions, and respond to gravity. The constraints and limitations of a particular medium are thus called its formal qualities. To give another example, the formal qualities of painting are the canvas texture, color, and brush texture. The formal qualities of video games are non-linearity, interactivity and virtual presence. The form of a particular work of art is determined by the formal qualities of the media, and is not related to the intentions of the artist or the reactions of the audience in any way whatsoever, as these properties are related to content rather than form.[30]

A genre is a set of conventions and styles within a particular medium. For instance, well recognized genres in film are western, horror and romantic comedy. Genres in music include death metal and trip hop. Genres in painting include still life and pastoral landscape. A particular work of art may bend or combine genres but each genre has a recognizable group of conventions, clichés and tropes. (One note: the word genre has a second older meaning within painting; genre painting was a phrase used in the 17th to 19th centuries to refer specifically to paintings of scenes of everyday life and is still used in this way.)

The Great Wave off Kanagawa by Hokusai (Japanese, 1760–1849), colored woodcut print

The style of an artwork, artist, or movement is the distinctive method and form followed by the respective art. Any loose, brushy, dripped or poured abstract painting is called expressionistic. Often a style is linked with a particular historical period, set of ideas, and particular artistic movement. So Jackson Pollock is called an Abstract Expressionist.

A particular style may have specific cultural meanings. For example, Roy Lichtenstein—a painter associated with the American Pop art movement of the 1960s—was not a pointillist, despite his use of dots. Lichtenstein used evenly spaced Ben-Day dots (the type used to reproduce color in comic strips) as a style to question the "high" art of painting with the "low" art of comics, thus commenting on class distinctions in culture. Pointillism, a technique in late Impressionism (1880s) developed especially by the artist Georges Seurat, employs dots to create variation in color and depth in an attempt to approximate the way people really see color. Both artists use dots, but the particular style and technique relate to the artistic movement adopted by each artist.

These are all ways of beginning to define a work of art, to narrow it down.

Imagine you are an art critic whose mission is to compare the meanings you find in a wide range of individual artworks. How would you proceed with your task? One way to begin is to examine the materials each artist selected in making an object, image, video, or event. The decision to cast a sculpture in bronze, for instance, inevitably affects its meaning; the work becomes something different from how it might be if it had been cast in gold or plastic or chocolate, even if everything else about the artwork remains the same. Next, you might examine how the materials in each artwork have become an arrangement of shapes, colors, textures, and lines. These, in turn, are organized into various patterns and compositional structures. In your interpretation, you would comment on how salient features of the form contribute to the overall meaning of the finished artwork. [But in the end] the meaning of most artworks... is not exhausted by a discussion of materials, techniques, and form. Most interpretations also include a discussion of the ideas and feelings the artwork engenders.[31]

Skill and craft

Art can connote a sense of trained ability or mastery of a medium. Art can also simply refer to the developed and efficient use of a language to convey meaning with immediacy and/or depth. Art is an act of expressing feelings, thoughts, and observations.[32] There is an understanding that is reached with the material as a result of handling it, which facilitates one's thought processes. A common view is that the epithet "art", particularly in its elevated sense, requires a certain level of creative expertise by the artist, whether this be a demonstration of technical ability, an originality in stylistic approach, or a combination of these two. Traditionally skill of execution was viewed as a quality inseparable from art and thus necessary for its success; for Leonardo da Vinci, art, neither more nor less than his other endeavors, was a manifestation of skill. Rembrandt's work, now praised for its ephemeral virtues, was most admired by his contemporaries for its virtuosity. At the turn of the 20th century, the adroit performances of John Singer Sargent were alternately admired and viewed with skepticism for their manual fluency, yet at nearly the same time the artist who would become the era's most recognized and peripatetic iconoclast, Pablo Picasso, was completing a traditional academic training at which he excelled. A common contemporary criticism of some modern art occurs along the lines of objecting to the apparent lack of skill or ability required in the production of the artistic object. In conceptual art, Marcel Duchamp's "Fountain" is among the first examples of pieces wherein the artist used found objects ("ready-made") and exercised no traditionally recognised set of skills. Tracey Emin's My Bed, or Damien Hirst's The Physical Impossibility of Death in the Mind of Someone Living follow this example and also manipulate the mass media. Emin slept (and engaged in other activities) in her bed before placing the result in a gallery as a work of art. Hirst came up with the conceptual design for the artwork but has left most of the eventual creation of many works to employed artisans. Hirst's celebrity is founded entirely on his ability to produce shocking concepts. The actual production in many conceptual and contemporary works of art is a matter of assembly of found objects. However, there are many modernist and contemporary artists who continue to excel in the skills of drawing and painting and in creating hands-on works of art.

Purpose of art

Art has had a great number of different functions throughout its history, making its purpose difficult to abstract or quantify to any single concept. This does not imply that the purpose of Art is "vague", but that it has had many unique, different reasons for being created. Some of these functions of Art are provided in the following outline. The different purposes of art may be grouped according to those that are non-motivated, and those that are motivated (Lévi-Strauss).

Non-motivated functions of art

The non-motivated purposes of art are those that are integral to being human, transcend the individual, or do not fulfill a specific external purpose. In this sense, Art, as creativity, is something humans must do by their very nature (i.e., no other species creates art), and is therefore beyond utility.

1. Basic human instinct for harmony, balance, rhythm. Art at this level is not an action or an object, but an internal appreciation of balance and harmony (beauty), and therefore an aspect of being human beyond utility.

"Imitation, then, is one instinct of our nature. Next, there is the instinct for 'harmony' and rhythm, meters being manifestly sections of rhythm. Persons, therefore, starting with this natural gift developed by degrees their special aptitudes, till their rude improvisations gave birth to Poetry." -Aristotle[33]

2. Experience of the mysterious. Art provides a way to experience one's self in relation to the universe. This experience may often come unmotivated, as one appreciates art, music or poetry.

"The most beautiful thing we can experience is the mysterious. It is the source of all true art and science." -[34] 3. Expression of the imagination. Art provides a means to express the imagination in non- grammatic ways that are not tied to the formality of spoken or written language. Unlike words, which come in sequences and each of which ha ve a definite meaning, art provides a range of forms, symbols and ideas with meanings that are malleable.

"Jupiter's eagle [as an example of art] is not, like logical (aesthetic) attributes of an object, the concept of the sublimity and majesty of creation, but rather something else – something that gives the imagination an incentive to spread its flight over a whole host of kindred representations that provoke more thought than admits of expression in a concept determined by words. They furnish an aesthetic idea, which serves the above rational idea as a substitute for logical presentation, but with the proper function, however, of animating the mind by opening out for it a prospect into a field of kindred representations stretching beyond its ken." -Immanuel Kant[35]

4. Ritualistic and symbolic functions. In many cultures, art is used in rituals, performances and dances as a decoration or symbol. While these often have no specific utilitarian (motivated) purpose, anthropologists know that they often serve a purpose at the level of meaning within a particular culture. This meaning is not furnished by any one individual, but is often the result of many generations of change, and of a cosmological relationship within the culture.

"Most scholars who deal with rock paintings or objects recovered from prehistoric contexts that cannot be explained in utilitarian terms and are thus categorized as decorative, ritual or symbolic, are aware of the trap posed by the term 'art'." -Silva Tomaskova[36]

Motivated functions of art

Motivated purposes of art refer to intentional, conscious actions on the part of the artists or creator. These may be to bring about political change, to comment on an aspect of society, to convey a specific emotion or mood, to address personal psychology, to illustrate another discipline, (with commercial arts) to sell a product, or simply as a form of communication.

1. Communication. Art, at its simplest, is a form of communication. As most forms of communication have an intent or goal directed toward another individual, this is a motivated purpose. Illustrative arts, such as scientific illustration, are a form of art as communication. Maps are another example. However, the content need not be scientific. Emotions, moods and feelings are also communicated through art.

"[Art is a set of] artefacts or images with symbolic meanings as a means of communication." -Steve Mithen[37]

2. Art as entertainment. Art may seek to bring about a particular emotion or mood, for the purpose of relaxing or entertaining the viewer. This is often the function of the art industries of Motion Pictures and Video Games.[citation needed]

3. The Avant-Garde. Art for political change. One of the defining functions of early twentieth-century art has been to use visual images to bring about political change. Art movements that had this goal—Dadaism, Surrealism, Russian constructivism, and Abstract Expressionism, among others—are collectively referred to as the avant-garde arts.

"By contrast, the realistic attitude, inspired by positivism, from Saint Thomas Aquinas to Anatole France, clearly seems to me to be hostile to any intellectual or moral advancement. I loathe it, for it is made up of mediocrity, hate, and dull conceit. It is this attitude which today gives birth to these ridiculous books, these insulting plays. It constantly feeds on and derives strength from the newspapers and stultifies both science and art by assiduously flattering the lowest of tastes; clarity bordering on stupidity, a dog's life." -André Breton (Surrealism)[38]

4. Art as a "free zone", removed from the action of social censure. Unlike the avant-garde movements, which wanted to erase cultural differences in order to produce new universal values, contemporary art has enhanced its tolerance towards cultural differences as well as its critical and liberating functions (social inquiry, activism, subversion, deconstruction...), becoming a more open place for research and experimentation.[39]

5. Art for social inquiry, subversion and/or anarchy. While similar to art for political change, subversive or deconstructivist art may seek to question aspects of society without any specific political goal. In this case, the function of art may be simply to criticize some aspect of society.

Spray-paint graffiti on a wall in Rome

Graffiti art and other types of street art are graphics and images that are spray-painted or stencilled on publicly viewable walls, buildings, buses, trains, and bridges, usually without permission. Certain art forms, such as graffiti, may also be illegal when they break laws (in this case vandalism).

6. Art for social causes. Art can be used to raise awareness for a large variety of causes. A number of art activities were aimed at raising awareness of autism,[40][41][42] cancer,[43][44][45] human trafficking,[46][47] and a variety of other topics, such as ocean conservation,[48] human rights in Darfur,[49] murdered and missing Aboriginal women,[50] elder abuse,[51] and pollution.[52] Trashion, using trash to make fashion, practiced by artists such as Marina DeBris, is one example of using art to raise awareness about pollution.

7. Art for psychological and healing purposes. Art is also used by art therapists, psychotherapists and clinical psychologists as art therapy. The Diagnostic Drawing Series, for example, is used to determine the personality and emotional functioning of a patient. The end product is not the principal goal in this case, but rather a process of healing, through creative acts, is sought. The resultant piece of artwork may also offer insight into the troubles experienced by the subject and may suggest suitable approaches to be used in more conventional forms of psychiatric therapy.

8. Art for propaganda, or commercialism. Art is often utilized as a form of propaganda, and thus can be used to subtly influence popular conceptions or mood. In a similar way, art that tries to sell a product also influences mood and emotion. In both cases, the purpose of art here is to subtly manipulate the viewer into a particular emotional or psychological response toward a particular idea or object.[53]

9. Art as a fitness indicator. It has been argued that the ability of the human brain by far exceeds what was needed for survival in the ancestral environment. One evolutionary psychology explanation for this is that the human brain and associated traits (such as artistic ability and creativity) are the human equivalent of the peacock's tail. The purpose of the male peacock's extravagant tail has been argued to be to attract females (see also Fisherian runaway and handicap principle). According to this theory, superior execution of art was evolutionarily important because it attracted mates.[54]

The functions of art described above are not mutually exclusive, as many of them may overlap. For example, art for the purpose of entertainment may also seek to sell a product, i.e. the movie or video game.

Public access

Since ancient times, much of the finest art has represented a deliberate display of wealth or power, often achieved by using massive scale and expensive materials. Much art has been commissioned by rulers or religious establishments, with more modest versions only available to the most wealthy in society. Nevertheless, there have been many periods where art of very high quality was available, in terms of ownership, across large parts of society, above all in cheap media such as pottery, which persists in the ground, and perishable media such as textiles and wood. In many different cultures, the ceramics of indigenous peoples of the Americas are found in such a wide range of graves that they were clearly not restricted to a social elite, though other forms of art may have been. Reproductive methods such as moulds made mass-production easier, and were used to bring high-quality Ancient Roman pottery and Greek Tanagra figurines to a very wide market. Cylinder seals were both artistic and practical, and very widely used by what can be loosely called the middle class in the Ancient Near East. Once coins were widely used these also became an art form that reached the widest range of society. Another important innovation came in the 15th century in Europe, when printmaking began with small woodcuts, mostly religious, that were often very small and hand-colored, and affordable even by peasants who glued them to the walls of their homes. Printed books were initially very expensive, but fell steadily in price until by the 19th century even the poorest could afford some with printed illustrations. Popular prints of many different sorts have decorated homes and other places for centuries.

Public buildings and monuments, secular and religious, by their nature normally address the whole of society, and visitors as viewers, and display to the general public has long been an important factor in their design. Egyptian temples are typical in that the largest and most lavish decoration was placed on the parts that could be seen by the general public, rather than the areas seen only by the priests. Many areas of royal palaces, castles and the houses of the social elite were often generally accessible, and large parts of the art collections of such people could often be seen, either by anybody, or by those able to pay a small price, or those wearing the correct clothes, regardless of who they were, as at the Palace of Versailles, where the appropriate extra accessories (silver shoe buckles and a sword) could be hired from shops outside.

Special arrangements were made to allow the public to see many royal or private collections placed in galleries, as with the Orleans Collection mostly housed in a wing of the Palais Royal in Paris, which could be visited for most of the 18th century. In Italy the art tourism of the Grand Tour became a major industry from the Renaissance onwards, and governments and cities made efforts to make their key works accessible. The British Royal Collection remains distinct, but large donations such as the Old Royal Library were made from it to the British Museum, established in 1753. The Uffizi in Florence opened entirely as a gallery in 1765, though this function had been gradually taking the building over from the original civil servants' offices for a long time before. The building now occupied by the Prado in Madrid was built before the French Revolution for the public display of parts of the royal art collection, and similar royal galleries open to the public existed in Vienna, Munich and other capitals. The opening of the Musée du Louvre during the French Revolution (in 1793) as a public museum for much of the former French royal collection certainly marked an important stage in the development of public access to art, transferring ownership to a republican state, but was a continuation of trends already well established.

Most modern public museums and art education programs for children in schools can be traced back to this impulse to have art available to everyone. Museums in the United States tend to be gifts from the very rich to the masses (The Metropolitan Museum of Art in New York City, for example, was created by John Taylor Johnston, a railroad executive whose personal art collection seeded the museum.) But despite all this, at least one of the important functions of art in the 21st century remains as a marker of wealth and social status.

There have been attempts by artists to create art that cannot be bought by the wealthy as a status object. One of the prime original motivators of much of the art of the late 1960s and 1970s was to create art that could not be bought and sold. It is "necessary to present something more than mere objects"[55] said the major post-war German artist Joseph Beuys. This time period saw the rise of such things as performance art, video art, and conceptual art. The idea was that if the artwork was a performance that would leave nothing behind, or was simply an idea, it could not be bought and sold. "Democratic precepts revolving around the idea that a work of art is a commodity impelled the aesthetic innovation which germinated in the mid-1960s and was reaped throughout the 1970s. Artists broadly identified under the heading of Conceptual art... substituting performance and publishing activities for engagement with both the material and materialistic concerns of painted or sculptural form... [have] endeavored to undermine the art object qua object."[56]

In the decades since, these ideas have been somewhat lost as the art market has learned to sell limited edition DVDs of video works,[57] invitations to exclusive performance art pieces, and the objects left over from conceptual pieces. Many of these performances create works that are only understood by the elite who have been educated as to why an idea or video or piece of apparent garbage may be considered art. The marker of status becomes understanding the work instead of necessarily owning it, and the artwork remains an upper-class activity. "With the widespread use of DVD recording technology in the early 2000s, artists, and the gallery system that derives its profits from the sale of artworks, gained an important means of controlling the sale of video and computer artworks in limited editions to collectors."[58]

Controversies

Théodore Géricault's Raft of the Medusa, circa 1820

Art has long been controversial, that is to say disliked by some viewers, for a wide variety of reasons, though most pre-modern controversies are dimly recorded, or completely lost to a modern view. Iconoclasm is the destruction of art that is disliked for a variety of reasons, including religious ones. Aniconism is a general dislike of either all figurative images, or often just religious ones, and has been a thread in many major religions. It has been a crucial factor in the history of Islamic art, where depictions of Muhammad remain especially controversial. Much art has been disliked purely because it depicted or otherwise stood for unpopular rulers, parties or other groups. Artistic conventions have often been conservative and taken very seriously by art critics, though often much less so by a wider public. The iconographic content of art could cause controversy, as with late medieval depictions of the new motif of the Swoon of the Virgin in scenes of the Crucifixion of Jesus. The Last Judgment by Michelangelo was controversial for various reasons, including breaches of decorum through nudity and the Apollo-like pose of Christ.

The content of much formal art through history was dictated by the patron or commissioner rather than just the artist, but with the advent of Romanticism, and economic changes in the production of art, the artist's vision became the usual determinant of the content of his art, increasing the incidence of controversies, though often reducing their significance. Strong incentives for perceived originality and publicity also encouraged artists to court controversy. Théodore Géricault's Raft of the Medusa (c. 1820) was in part a political commentary on a recent event. Édouard Manet's Le Déjeuner sur l'Herbe (1863) was considered scandalous not because of the nude woman, but because she is seated next to men fully dressed in the clothing of the time, rather than in robes of the antique world. John Singer Sargent's Madame Pierre Gautreau (Madame X) (1884) caused a controversy over the reddish pink used to color the woman's ear lobe, considered far too suggestive and supposedly ruining the high-society model's reputation.

The gradual abandonment of naturalism and the depiction of realistic representations of the visual appearance of subjects in the 19th and 20th centuries led to a rolling controversy lasting for over a century. In the twentieth century, Pablo Picasso's Guernica (1937) used arresting cubist techniques and stark monochromatic oils to depict the harrowing consequences of a contemporary bombing of a small, ancient Basque town. Leon Golub's Interrogation III (1981) depicts a female nude, hooded detainee strapped to a chair, her legs open to reveal her sexual organs, surrounded by two tormentors dressed in everyday clothing. Andres Serrano's Piss Christ (1989) is a photograph of a crucifix, sacred to the Christian religion and representing Christ's sacrifice and final suffering, submerged in a glass of the artist's own urine. The resulting uproar led to comments in the United States Senate about public funding of the arts.

Theory

Aesthetics

In the nineteenth century, artists were primarily concerned with ideas of truth and beauty. The aesthetic theorist John Ruskin, who championed what he saw as the naturalism of J. M. W. Turner, saw art's role as the communication by artifice of an essential truth that could only be found in nature.[59]

The definition and evaluation of art has become especially problematic since the 20th century. Richard Wollheim distinguishes three approaches to assessing the aesthetic value of art: the Realist, whereby aesthetic quality is an absolute value independent of any human view; the Objectivist, whereby it is also an absolute value, but is dependent on general human experience; and the Relativist position, whereby it is not an absolute value, but depends on, and varies with, the human experience of different humans.[60]

Arrival of Modernism

The arrival of Modernism in the late nineteenth century led to a radical break in the conception of the function of art,[61] and then again in the late twentieth century with the advent of postmodernism. Clement Greenberg's 1960 article "Modernist Painting" defines modern art as "the use of characteristic methods of a discipline to criticize the discipline itself".[62] Greenberg originally applied this idea to the Abstract Expressionist movement and used it as a way to understand and justify flat (non-illusionistic) abstract painting:

Realistic, naturalistic art had dissembled the medium, using art to conceal art; modernism used art to call attention to art. The limitations that constitute the medium of painting—the flat surface, the shape of the support, the properties of the pigment—were treated by the Old Masters as negative factors that could be acknowledged only implicitly or indirectly. Under Modernism these same limitations came to be regarded as positive factors, and were acknowledged openly.[62]

After Greenberg, several important art theorists emerged, such as Michael Fried, T. J. Clark, Rosalind Krauss, Linda Nochlin and Griselda Pollock among others. Though only originally intended as a way of understanding a specific set of artists, Greenberg's definition of modern art is important to many of the ideas of art within the various art movements of the 20th century and early 21st century.

Pop artists like Andy Warhol became both noteworthy and influential through work including and possibly critiquing popular culture, as well as the art world. Artists of the 1980s, 1990s, and 2000s expanded this technique of self-criticism beyond high art to all cultural image-making, including fashion images, comics, billboards and pornography.

Duchamp once proposed that art is any activity of any kind: everything. However, the way that only certain activities are classified today as art is a social construction.[63] There is evidence that there may be an element of truth to this. The Invention of Art: A Cultural History, an art history book by Larry Shiner, examines the construction of the modern system of the arts, i.e. Fine Art. Shiner finds evidence that the older system of the arts, before our modern system of fine art, held art to be any skilled human activity; Ancient Greek society, for example, did not possess the term art but rather techne. Techne can be understood as neither art nor craft, the reason being that the distinctions of art and craft are historical products that came later in history. Techne included painting, sculpting and music, but also cooking, medicine, horsemanship, geometry, carpentry, prophecy and farming.

New Criticism and the "Intentional Fallacy"

Following Duchamp during the first half of the twentieth century, a significant shift toward general aesthetic theory took place, which attempted to apply aesthetic theory across the various forms of art, including the literary arts and the visual arts. This resulted in the rise of the New Criticism school and debate concerning the intentional fallacy. At issue was the question of whether the aesthetic intentions of the artist in creating the work of art, whatever its specific form, should be associated with the criticism and evaluation of the final product of the work of art, or whether the work of art should be evaluated on its own merits independent of the intentions of the artist.

In 1946, William K. Wimsatt and Monroe Beardsley published a classic and controversial New Critical essay entitled "The Intentional Fallacy", in which they argued strongly against the relevance of an author's intention, or "intended meaning" in the analysis of a literary work. For Wimsatt and Beardsley, the words on the page were all that mattered; importation of meanings from outside the text was considered irrelevant, and potentially distracting.

In another essay, "The Affective Fallacy," which served as a kind of sister essay to "The Intentional Fallacy," Wimsatt and Beardsley also discounted the reader's personal/emotional reaction to a literary work as a valid means of analyzing a text. This fallacy would later be repudiated by theorists from the reader-response school of literary theory. Ironically, one of the leading theorists from this school, Stanley Fish, was himself trained by New Critics. Fish criticizes Wimsatt and Beardsley in his essay "Literature in the Reader" (1970).[64]

As summarized by Gaut and Livingston in their essay "The Creation of Art": "Structuralist and post-structuralist theorists and critics were sharply critical of many aspects of New Criticism, beginning with the emphasis on aesthetic appreciation and the so-called autonomy of art, but they reiterated the attack on biographical criticism's assumption that the artist's activities and experience were a privileged critical topic."[65] These authors contend that: "Anti-intentionalists, such as formalists, hold that the intentions involved in the making of art are irrelevant or peripheral to correctly interpreting art. So details of the act of creating a work, though possibly of interest in themselves, have no bearing on the correct interpretation of the work."[66]

Gaut and Livingston define the intentionalists as distinct from formalists stating that: "Intentionalists, unlike formalists, hold that reference to intentions is essential in fixing the correct interpretation of works." They quote Richard Wollheim as stating that, "The task of criticism is the reconstruction of the creative process, where the creative process must in turn be thought of as something not stopping short of, but terminating on, the work of art itself."[66]

"Linguistic turn" and its debate

The end of the 20th century fostered an extensive debate known as the linguistic turn controversy, or the "innocent eye debate", and generally referred to as the structuralism–poststructuralism debate in the philosophy of art. This debate discussed the encounter with the work of art as being determined by the relative extent to which the conceptual encounter with the work of art dominates over the perceptual encounter with it.[67] Decisive for the linguistic turn debate in art history and the humanities were the works of yet another tradition, namely the structuralism of Ferdinand de Saussure and the ensuing movement of poststructuralism. In 1981, the artist Mark Tansey created a work of art titled "The Innocent Eye" as a criticism of the prevailing climate of disagreement in the philosophy of art during the closing decades of the 20th century. Influential theorists include Judith Butler, Luce Irigaray, Julia Kristeva, Michel Foucault and Jacques Derrida. The power of language, more specifically of certain rhetorical tropes, in art history and historical discourse was explored by Hayden White. The fact that language is not a transparent medium of thought had been stressed by a very different form of philosophy of language which originated in the works of Johann Georg Hamann and Wilhelm von Humboldt.[68] Ernst Gombrich and Nelson Goodman, in his book Languages of Art: An Approach to a Theory of Symbols, came to hold during the 1960s and 1970s that the conceptual encounter with the work of art predominated exclusively over the perceptual and visual encounter with it.[69] This view was challenged on the basis of research done by the Nobel prize winning psychologist Roger Sperry, who maintained that the human visual encounter was not limited to concepts represented in language alone (the linguistic turn) and that other forms of psychological representations of the work of art were equally defensible and demonstrable. Sperry's view eventually prevailed by the end of the 20th century, with aesthetic philosophers such as Nick Zangwill strongly defending a return to moderate aesthetic formalism among other alternatives.[70]

Classification disputes:

Classificatory disputes about art

Disputes as to whether or not to classify something as a work of art are referred to as classificatory disputes about art.

Classificatory disputes in the 20th century have included cubist and impressionist paintings, Duchamp's Fountain, the movies, superlative imitations of banknotes, conceptual art, and video games.[72]

Philosopher David Novitz has argued that disagreements about the definition of art are rarely at the heart of the problem. Rather, "the passionate concerns and interests that humans vest in their social life" are "so much a part of all classificatory disputes about art" (Novitz, 1996). According to Novitz, classificatory disputes are more often disputes about societal values and where society is trying to go than they are about theory proper. For example, when the Daily Mail criticized Hirst's and Emin's work by arguing "For 1,000 years art has been one of our great civilising forces. Today, pickled sheep and soiled beds threaten to make barbarians of us all", it was not advancing a definition or theory about art, but questioning the value of Hirst's and Emin's work.[73] In 1998, Arthur Danto suggested a thought experiment showing that "the status of an artifact as work of art results from the ideas a culture applies to it, rather than its inherent physical or perceptible qualities. Cultural interpretation (an art theory of some kind) is therefore constitutive of an object's arthood."[74][75]

Anti-art is a label for art that intentionally challenges the established parameters and values of art;[76] it is a term associated with Dadaism and attributed to Marcel Duchamp just before World War I,[76] when he was making art from found objects.[76] One of these, Fountain (1917), an ordinary urinal, has achieved considerable prominence and influence on art.[76] Anti-art is a feature of work by the Situationist International,[77] the lo-fi Mail art movement, and the Young British Artists,[76] though it is a form still rejected by the Stuckists,[76] who describe themselves as anti-anti-art.[78][79]

Value judgment

Somewhat in relation to the above, the word art is also used to apply judgments of value, as in such expressions as "that meal was a work of art" (the cook is an artist), or "the art of deception" (the highly attained level of skill of the deceiver is praised). It is this use of the word as a measure of high quality and high value that gives the term its flavor of subjectivity.

Making judgments of value requires a basis for criticism. At the simplest level, a way to determine whether the impact of the object on the senses meets the criteria to be considered art is whether it is perceived to be attractive or repulsive. Though perception is always colored by experience, and is necessarily subjective, it is commonly understood that what is not somehow aesthetically satisfying cannot be art. However, "good" art is not always or even regularly aesthetically appealing to a majority of viewers. In other words, an artist's prime motivation need not be the pursuit of the aesthetic. Also, art often depicts terrible images made for social, moral, or thought-provoking reasons. For example, Francisco Goya's painting depicting the Spanish shootings of the 3rd of May 1808 is a graphic depiction of a firing squad executing several pleading civilians. Yet at the same time, the horrific imagery demonstrates Goya's keen artistic ability in composition and execution and produces fitting social and political outrage. Thus, the debate continues as to what mode of aesthetic satisfaction, if any, is required to define 'art'.

The assumption of new values or the rebellion against accepted notions of what is aesthetically superior need not occur concurrently with a complete abandonment of the pursuit of what is aesthetically appealing. Indeed, the reverse is often true: the revision of what is popularly conceived of as being aesthetically appealing allows for a re-invigoration of aesthetic sensibility, and a new appreciation for the standards of art itself. Countless schools have proposed their own ways to define quality, yet they all seem to agree on at least one point: once their aesthetic choices are accepted, the value of the work of art is determined by its capacity to transcend the limits of its chosen medium in order to strike some universal chord, by the rarity of the skill of the artist, or by its accurate reflection of what is termed the zeitgeist.

Art is often intended to appeal to and connect with human emotion. It can arouse aesthetic or moral feelings, and can be understood as a way of communicating these feelings. Artists express something so that their audience is aroused to some extent, but they do not have to do so consciously. Art may be considered an exploration of the human condition; that is, what it is to be human.

Printing processes

Letterpress

Miehle press printing Le Samedi journal. Montreal, 1939.

Letterpress printing

Letterpress printing is a technique of relief printing. A worker composes and locks movable type into the bed of a press, inks it, and presses paper against it to transfer the ink from the type, which creates an impression on the paper.

Letterpress printing was the normal form of printing text from its invention by Johannes Gutenberg in the mid-15th century and remained in wide use for books and other uses until the second half of the 20th century, when offset printing was developed. More recently, letterpress printing has seen a revival in an artisanal form.

Offset

Offset press

Offset printing is a widely used printing technique in which the inked image is transferred (or "offset") from a plate to a rubber blanket. An offset transfer moves the image to the printing surface. When used in combination with the lithographic process, which is based on the repulsion of oil and water, the offset technique employs a flat (planographic) image carrier. The image to be printed obtains ink from ink rollers, while the non-printing area attracts a film of water, keeping the non-printing areas ink-free.

Currently, most books and newspapers are printed using the technique of offset lithography.

Gravure

Rotogravure

Gravure printing is an intaglio printing technique, where the image being printed is made up of small depressions in the surface of the printing plate. The cells are filled with ink, and the excess is scraped off the surface with a doctor blade. Then a rubber-covered roller presses paper onto the surface of the plate and into contact with the ink in the cells. The printing cylinders are usually made from copper-plated steel, which is subsequently chromed, and may be produced by diamond engraving, etching, or laser ablation. Gravure printing is used for long, high-quality print runs such as magazines, mail-order catalogues, packaging and printing onto fabric and wallpaper. It is also used for printing postage stamps and decorative plastic laminates, such as kitchen worktops.

Other printing techniques

The other significant printing techniques include:

• Flexography, used for packaging, labels, newspapers
• Dye-sublimation printer
• Inkjet, used typically to print a small number of books or packaging, and also to print a variety of materials: from high-quality papers simulating offset printing, to floor tiles. Inkjet is also used to apply mailing addresses to direct mail pieces
• Laser printing (toner printing), mainly used in offices and for transactional printing (bills, bank documents). Laser printing is commonly used by direct mail companies to create variable-data letters
• Pad printing, popular for its unusual ability to print on complex three-dimensional surfaces
• Relief print, mainly used for catalogues
• Screen-printing, for a variety of applications ranging from T-shirts to floor tiles, and on uneven surfaces
• Intaglio, used mainly for high value documents such as currencies
• Thermal printing, popular in the 1990s for fax printing. Used today for printing labels such as airline baggage tags and individual price labels in supermarket deli counters.

UNIT – 5 Flexography

Flexography (often abbreviated to flexo) is a form of printing process which utilizes a flexible relief plate. It is essentially a modern version of letterpress which can be used for printing on almost any type of substrate, including plastic, metallic films, cellophane, and paper. It is widely used for printing on the non-porous substrates required for various types of food packaging (it is also well suited for printing large areas of solid colour).

History

In 1890, the first such patented press was built in Liverpool, England by Bibby, Baron and Sons. The water-based ink smeared easily, leading the device to be known as "Bibby's Folly". In the early 1900s, other European presses using rubber printing plates and aniline oil-based ink were developed. This led to the process being called "aniline printing". By the 1920s, most presses were made in Germany, where the process was called "gummidruck," or rubber printing. In modern-day Germany, they continue to call the process "gummidruck." During the early part of the 20th century, the technique was used extensively in food packaging in the United States. However, in the 1940s, the Food and Drug Administration classified aniline dyes as unsuitable for food packaging. Printing sales plummeted. Individual firms tried using new names for the process, such as "Lustro Printing" and "Transglo Printing," but met with limited success. Even after the Food and Drug Administration approved the aniline process in 1949 using new, safe inks, sales continued to decline as some food manufacturers still refused to consider aniline printing. Worried about the image of the industry, packaging representatives decided the process needed to be renamed.

In 1951 Franklin Moss, then the president of the Mosstype Corporation, conducted a poll among the readers of his journal The Mosstyper to submit new names for the printing process. Over 200 names were submitted, and a subcommittee of the Packaging Institute's Printed Packaging Committee narrowed the selection to three possibilities: "permatone process", "rotopake process", and "flexographic process". Postal ballots from readers of The Mosstyper overwhelmingly chose the last of these, and "flexographic process" was chosen.[1]

Evolution

Originally, flexographic printing was rudimentary in quality. Labels requiring high quality have generally been printed using the offset process until recently. Since 1990,[2] great advances have been made to the quality of flexographic printing presses, printing plates and printing inks.

The greatest advances in flexographic printing have been in the area of photopolymer printing plates, including improvements to the plate material and the method of plate creation.

Digital direct-to-plate systems have recently been a significant improvement in the industry. Companies like Asahi Photoproducts, AV Flexologic, Dupont, MacDermid, Kodak and Esko have pioneered the latest technologies, with advances in fast washout and the latest screening technology.

Laser-etched ceramic anilox rolls also play a part in the improvement of print quality. Full-color picture printing is now possible, and some of the finer presses available today, in combination with a skilled operator, allow quality that rivals the lithographic process. One ongoing improvement has been the increasing ability to reproduce highlight tonal values, thereby providing a workaround for the very high dot gain associated with flexographic printing.

Process overview

1. Platemaking[3] The first method of plate development uses light-sensitive polymer. A film negative is placed over the plate, which is exposed to ultra-violet light. The polymer hardens where light passes through the film. The remaining polymer has the consistency of chewed gum. It is washed away in a tank of either water or solvent. Brushes scrub the plate to facilitate the "washout" process. The process can differ depending on whether solid sheets of photopolymer or liquid photopolymer are used, but the principle is still the same. The plate to be washed out is fixed in the orbital washout unit on a sticky base plate. The plate is washed out in a mixture of water and 1% dishwasher soap, at a temperature of approximately 40°C. The unit is equipped with a dual membrane filter, which keeps the environmental burden to a minimum. The membrane unit separates photopolymer from the washout water. After the addition of an absorbent (gelatine, for example), the photopolymer residue can be disposed of as standard solid waste together with household refuse. The recycled water is re-used without adding any detergent.[4]

Flexographic printing press

The second method uses a computer-guided laser to etch the image onto the printing plate. Such a direct laser engraving process is called digital platemaking. Companies such as AV Flexologic, Esko, Kodak, Polymount and Screen from The Netherlands are market leaders in manufacturing this type of equipment.

The third method is to go through a molding process. The first step is to create a metal plate from the negative of the initial image through an exposure process (followed by an acid bath). In the early days the metal used was zinc, leading to the name 'zincos'; later, magnesium was used. This metal plate in relief is then used in the second step to create the mold, which could be made of bakelite board, or even glass or plastic, through a first molding process. Once cooled, this master mold will press the rubber or plastic compound (under both controlled temperature and pressure) through a second molding process to create the printing plate.

2. Mounting For every colour to be printed, a plate is made and eventually put on a cylinder which is placed in the printing press. To make a complete picture, regardless of printing on flexible film or corrugated paper, the image transferred from each plate has to register exactly with the images transferred from the other colors. To ensure an accurate picture is made, mounting marks are made on the flexographic plates. These mounting marks can be microdots (down to 0.3 mm) and/or crosses. Special machinery is made for mounting these plates on the printing cylinders to maintain registration.

3. Printing A flexographic print is made by creating a positive mirrored master of the required image as a 3D relief in a rubber or polymer material. Flexographic plates can be created with analog and digital platemaking processes. The image areas are raised above the non-image areas on the rubber or polymer plate. The ink is transferred from the ink roll, which is partially immersed in the ink tank. Then it transfers to the anilox or ceramic roll (or meter roll), whose texture holds a specific amount of ink, since it is covered with thousands of small wells or cups that enable it to meter ink to the printing plate in a uniform thickness, evenly and quickly (the number of cells per linear inch can vary according to the type of print job and the quality required).[5] To avoid getting a final product with a smudgy or lumpy look, it must be ensured that the amount of ink on the printing plate is not excessive. This is achieved by using a scraper, called a doctor blade. The doctor blade removes excess ink from the anilox roller before inking the printing plate. The substrate is finally sandwiched between the plate and the impression cylinder to transfer the image.[6] The sheet is then fed through a dryer, which allows the inks to dry before the surface is touched again. If a UV-curing ink is used, the sheet does not have to be dried, but the ink is cured by UV rays instead.
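The metering capacity of an anilox roll is usually specified as a cell volume per unit area, commonly quoted in BCM (billion cubic microns per square inch). As a rough, hedged illustration of how that volume relates to the wet ink film metered onto the plate, the sketch below converts an assumed anilox volume and transfer efficiency into an approximate film thickness; the volumes and the 50% transfer efficiency are illustrative assumptions, not figures from this text.

```python
# Rough estimate of the wet ink film delivered by an anilox roller.
# Assumptions (illustrative only): anilox volume quoted in BCM
# (billion cubic microns per square inch) and a simple fractional
# transfer efficiency from anilox cells to printing plate.

SQ_INCH_IN_SQ_MICRONS = 25_400 ** 2  # 1 inch = 25,400 microns

def wet_film_thickness_microns(anilox_volume_bcm: float,
                               transfer_efficiency: float = 0.5) -> float:
    """Approximate wet ink film thickness (microns) laid onto the plate."""
    volume_cubic_microns = anilox_volume_bcm * 1e9           # per square inch
    full_film = volume_cubic_microns / SQ_INCH_IN_SQ_MICRONS
    return full_film * transfer_efficiency

if __name__ == "__main__":
    for bcm in (2.0, 4.0, 8.0):  # hypothetical anilox volumes
        print(f"{bcm:.1f} BCM -> ~{wet_film_thickness_microns(bcm):.2f} um wet film")
```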

Basic parts of the press

• Unwind and infeed section – The roll of stock must be held under control so the web can unwind as needed.

• Printing section – Single color station including the fountain, anilox, plate and impression rolls.

• Drying station – High velocity heated air, specially formulated inks and an after-dryer can be used.

• Outfeed and rewind section – Similar to the unwind segment, keeps web tension controlled.

Operation

Operational overview

1. Fountain roller The fountain roller transfers the ink that is located in the ink pan to the second roller, which is the anilox roller. In modern flexo printing this is called a meter or "metering" roller.

2. Anilox roller This is what makes flexography unique. The anilox roller meters the predetermined amount of ink that is transferred, for uniform thickness. It has engraved cells that carry a certain capacity of ink and that can only be seen with a microscope. These rollers are responsible for transferring the ink to the flexible plates that are already mounted on the plate cylinders.

3. Doctor Blade (optional) The doctor blade scrapes the anilox roll to ensure that the predetermined ink amount delivered is only what is contained within the engraved cells. Doctor blades have predominantly been made of steel but advanced doctor blades are now made of polymer materials, with several different types of beveled edges.

4. Plate cylinder The plate cylinder holds the printing plate, which is a soft, flexible, rubber-like material. Tape, magnets, tension straps and/or ratchets hold the printing plate against the cylinder.

5. Impression cylinder The impression cylinder applies pressure to the plate cylinder, where the image is transferred to the substrate. This impression cylinder, or "print anvil", is required to apply pressure to the plate cylinder.

Flexographic printing inks

The nature and demands of the printing process and the application of the printed product determine the fundamental properties required of flexographic inks. Measuring the physical properties of inks and understanding how these are affected by the choice of ingredients is a large part of ink technology. Formulation of inks requires a detailed knowledge of the physical and chemical properties of the raw materials composing the inks, and how these ingredients affect or react with each other as well as with the environment. Flexographic printing inks are primarily formulated to remain compatible with the wide variety of substrates used in the process. Each formulation component individually fulfills a special function and the proportion and composition will vary according to the substrate.

There are five types of inks that can be used in flexography: solvent-based inks, water-based inks, electron beam (EB) curing inks, ultraviolet (UV) curing inks and two-part chemically curing inks (usually based on polyurethane isocyanate reactions), although these are uncommon at the moment.[7] Water-based flexo inks with particle sizes below 5 µm may cause problems when the printed paper is recycled.

Ink controls

The ink is controlled in the flexographic printing process by the inking unit. The inking unit can be either a fountain roll system or a doctor blade system. The fountain roll system is a simple, older system, but it tends to control the ink poorly if there is too much or too little of it. The doctor blade system uses a blade set against the anilox/ceramic roller; the blade's geometry and placement ensure that the cells are filled with a consistent amount of ink.[8]

Presses

Stack press Color stations are stacked vertically, which makes them easy to access. This press is able to print on both sides of the substrate.

Central impression press All color stations are located in a circle around the impression cylinder. This press can only print on one side. Advantage: excellent registration.

In-line press Color stations are placed horizontally. This press prints on both sides, via a turnbar. Advantage: can print on heavier substrates, such as corrugated boards.

Applications

Flexo has an advantage over lithography in that it can use a wider range of inks, water-based rather than oil-based inks, and is good at printing on a variety of different materials like plastic, foil, acetate film, brown paper, and other materials used in packaging. Typical products printed using flexography include brown corrugated boxes, flexible packaging including retail and shopping bags, food and hygiene bags and sacks, milk and beverage cartons, flexible plastics, self-adhesive labels, disposable cups and containers, and wallpaper. In recent years there has also been a move towards laminates, where two or more materials are bonded together to produce a new material with different properties than either of the originals. A number of newspapers now eschew the more common offset lithography process in favour of flexo. Flexographic inks, like those used in gravure and unlike those used in lithography, generally have a low viscosity. This enables faster drying and, as a result, faster production, which results in lower costs.
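To make the "faster production, lower cost" point concrete, here is a minimal sketch that divides an assumed hourly press cost by the area printed per hour at different web speeds. The hourly cost, web width and the lower speeds are hypothetical values chosen only for illustration; 600 m/min matches the upper bound quoted in the next paragraph.

```python
# Illustration of how higher web speed lowers the cost per printed area.
# Hourly press cost, web width and the lower speeds are hypothetical values.

def cost_per_square_metre(hourly_cost: float,
                          web_speed_m_per_min: float,
                          web_width_m: float) -> float:
    """Running cost divided by the area printed in one hour."""
    area_per_hour_m2 = web_speed_m_per_min * 60 * web_width_m
    return hourly_cost / area_per_hour_m2

if __name__ == "__main__":
    HOURLY_COST = 300.0   # currency units per press-hour (assumed)
    WEB_WIDTH = 1.2       # metres (assumed)
    for speed in (150, 300, 600):  # m/min; 600 is the upper bound cited below
        cost = cost_per_square_metre(HOURLY_COST, speed, WEB_WIDTH)
        print(f"{speed:>3} m/min -> {cost:.4f} per square metre")
```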

Printing press speeds of up to 600 meters per minute (about 2,000 feet per minute) are now achievable with modern high-end presses. Flexo printing is widely used in the converting industry for printing plastic materials for packaging and other end uses. For maximum efficiency, flexo presses produce large rolls of material that are then slit down to their finished size on slitting machines.

Screen Printing

Screen printing is a printing technique whereby a mesh is used to transfer ink onto a substrate, except in areas made impermeable to the ink by a stencil. A blade or squeegee is moved across the screen to fill the open mesh apertures with ink, and a reverse stroke then causes the screen to touch the substrate momentarily along a line of contact. This causes the ink to wet the substrate and be pulled out of the mesh apertures as the screen springs back after the blade has passed.

Screen printing is also a stencil method of print making in which a design is imposed on a screen of polyester or other fine mesh, with blank areas coated with an impermeable substance. Ink is forced into the mesh openings by the fill blade or squeegee and, by wetting the substrate, is transferred onto the printing surface during the squeegee stroke. As the screen rebounds away from the substrate, the ink remains on the substrate. The process is also known as silk-screen, screen, serigraphy, and serigraph printing. One color is printed at a time, so several screens can be used to produce a multicoloured image or design.
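How much ink a screen can deposit is governed largely by the mesh geometry: the thread count, thread diameter and fabric thickness. The sketch below estimates the open area of a mesh and the theoretical wet ink volume it can hold under the standard geometric approximation; the specific mesh figures in the example are hypothetical, not specifications from this text.

```python
# Rough screen-mesh geometry calculations (standard geometric approximation).
# mesh_count is threads per centimetre; thread_d and fabric_thickness are in microns.
# The numeric inputs in the example are hypothetical.

def open_area_fraction(mesh_count_per_cm: float, thread_d_um: float) -> float:
    """Fraction of the screen area that is open mesh."""
    pitch_um = 10_000 / mesh_count_per_cm      # distance between thread centres
    opening_um = pitch_um - thread_d_um        # width of a single opening
    return (opening_um / pitch_um) ** 2

def theoretical_ink_volume_cm3_per_m2(mesh_count_per_cm: float,
                                      thread_d_um: float,
                                      fabric_thickness_um: float) -> float:
    """Approximate wet ink volume the mesh can hold, in cm^3 per m^2
    (numerically equal to the wet ink film thickness in microns)."""
    return open_area_fraction(mesh_count_per_cm, thread_d_um) * fabric_thickness_um

if __name__ == "__main__":
    # Hypothetical mid-range polyester mesh: 43 threads/cm, 80 um thread, 110 um fabric.
    print(f"Open area: {open_area_fraction(43, 80):.1%}")
    print(f"Theoretical ink volume: "
          f"{theoretical_ink_volume_cm3_per_m2(43, 80, 110):.1f} cm3/m2")
```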

There are various terms used for what is essentially the same technique. Traditionally the process was called screen printing or silkscreen printing because silk was used in the process prior to the invention of polyester mesh. Currently, synthetic threads are commonly used in the screen printing process. The most popular mesh in general use is made of polyester. There are special-use mesh materials of nylon and stainless steel available to the screen printer. There are also different types of mesh size which will determine the outcome and look of the finished design on the material.

History

Screen printing is a form of stencilling that first appeared in a recognizable form in China during the Song Dynasty (960–1279 AD).[1][2] It was then adapted by other Asian countries like Japan, and was furthered there by the creation of newer methods.

Screen printing was largely introduced to Western Europe from Asia sometime in the late 18th century, but did not gain large acceptance or use in Europe until silk mesh was more available for trade from the east and a profitable outlet for the medium was discovered.

Early in the 1910s, several printers experimenting with photo-reactive chemicals used the well-known actinic light–activated cross linking or hardening traits of potassium, sodium or ammonium chromate and dichromate chemicals with glues and gelatin compounds. Roy Beck, Charles Peter and Edward Owens studied and experimented with chromic acid salt sensitized emulsions for photo-reactive stencils. This trio of developers would prove to revolutionize the commercial screen printing industry by introducing photo-imaged stencils to the industry, though the acceptance of this method would take many years. Commercial screen printing now uses sensitizers far safer and less toxic than bichromates. Currently there are large selections of pre-sensitized and "user mixed" sensitized emulsion chemicals for creating photo-reactive stencils.[3]

A group of artists who later formed the National Serigraphic Society coined the word Serigraphy in the 1930s to differentiate the artistic application of screen printing from the industrial use of the process.[4] "Serigraphy" is a compound word formed from Latin "sēricum" (silk) and Greek "graphein" (to write or draw).[5]

The Printers' National Environmental Assistance Center says "Screenprinting is arguably the most versatile of all printing processes."[6] Since rudimentary screenprinting materials are so affordable and readily available, it has been used frequently in underground settings and subcultures, and the non-professional look of such DIY-culture screenprints has become a significant cultural aesthetic seen on movie posters, record album covers, flyers, shirts, commercial fonts in advertising, in artwork and elsewhere.

1960s to present

Credit is generally given to the artist Andy Warhol for popularising screen printing as an artistic technique, identified as serigraphy, in the United States. Warhol was supported in his production by master screen printer Michel Caza,[7] a founding member of Fespa, and is particularly identified with his 1962 depiction of actress Marilyn Monroe, known as the Marilyn Diptych, screen printed in garish colours.

Sister Mary Corita Kent gained international fame for her vibrant serigraphs during the 1960s and 1970s. Her works were rainbow-coloured and contained words that were both political and fostered peace, love and caring.

American entrepreneur, artist and inventor Michael Vasilantone started to use, develop, and sell a rotary multicolour garment screen printing machine in 1960.[citation needed] Vasilantone later filed for a patent on his invention in 1967, which was granted as number 3,427,964 on February 18, 1969.[8] The original rotary machine was manufactured to print logos and team information on bowling garments but was soon directed to the new fad of printing on T-shirts. The Vasilantone patent was licensed by multiple manufacturers; the resulting production boom in printed T-shirts made the rotary garment screen printing machine the most popular device for screen printing in the industry. Screen printing on garments currently accounts for over half of the screen printing activity in the United States.[9]

In June 1986, Marc Tartaglia, Marc Tartaglia Jr. and Michael Tartaglia created a silk screening device which is defined in its US Patent Document as, "Multi-coloured designs are applied on a plurality of fabric or sheet materials with a silk screen printer having seven platens arranged in two horizontal rows below a longitudinal heater which is movable across either row." This invention received patent number 4,671,174 on June 9, 1987; however, the patent no longer exists.[10]

Graphic screenprinting is widely used today to create many mass or large batch produced graphics, such as posters or display stands. Full colour prints can be created by printing in CMYK (cyan, magenta, yellow and black ('key')).
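To make the CMYK idea above concrete, the following short Python sketch shows the common naive conversion from RGB values to the four process colours; it is illustrative only, since real prepress separation relies on ICC colour profiles and press-specific adjustments rather than this simple formula.

# Naive RGB -> CMYK separation, illustrating how four screens (cyan, magenta,
# yellow and black/'key') can approximate a full-colour image.
def rgb_to_cmyk(r, g, b):
    """r, g, b are in the range 0..1; returns (c, m, y, k) in the range 0..1."""
    k = 1.0 - max(r, g, b)            # black removes the component common to all inks
    if k >= 1.0:                      # pure black: avoid division by zero
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

# A mid-orange pixel needs mostly magenta and yellow ink, little cyan, some black.
print(rgb_to_cmyk(0.9, 0.4, 0.1))     # approximately (0.0, 0.56, 0.89, 0.10)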

Screen printing lends itself well to printing on canvas. Andy Warhol, Rob Ryan, Blexbolex, Arthur Okamura, Robert Rauschenberg, Roy Lichtenstein, Harry Gottlieb, and many other artists have used screen printing as an expression of creativity and artistic vision. Printing technique

Screen printers use a silkscreen, a squeegee, and hinge clamps to print their designs: the ink is forced through the mesh with the rubber squeegee, while the hinge clamps hold the screen in place for easy registration. A typical diagram of the process labels A. Ink, B. Squeegee, C. Image, D. Photo-emulsion, E. Screen, and F. Printed image. Accompanying illustrations show how to print a single image, how to print multiple layers using CMYK, a step-by-step guide to printing with emulsion on a screen, a hand bench used to hold screens, a trolley containing a wooden squeegee and acrylic ink, screen-printed fabric on a heat press, and a wash-out booth for cleaning screens.

A screen is made of a piece of mesh stretched over a frame. A stencil is formed by blocking off parts of the screen in the negative image of the design to be printed; that is, the open spaces are where the ink will appear on the substrate. Before printing occurs, the frame and screen must undergo the pre-press process, in which an emulsion is 'scooped' across the mesh and the 'exposure unit' burns away the unnecessary emulsion, leaving behind a clean area in the mesh with the identical shape as the desired image. The surface to be printed (commonly referred to as a pallet) is coated with a wide 'pallet tape'. This serves to protect the 'pallet' from any unwanted ink leaking through the screen and potentially staining the 'pallet' or transferring unwanted ink onto the next substrate. Next, the screen and frame are lined with a tape. The type of tape used for this purpose often depends upon the ink that is to be printed onto the substrate; aggressive tapes are generally used for UV and water-based inks due to the inks' lower viscosities. The last process in the 'pre-press' is blocking out any unwanted 'pin-holes' in the emulsion. If these holes are left in the emulsion, the ink will continue through and leave unwanted marks. To block out these holes, materials such as tapes, speciality emulsions and 'block-out pens' may be used effectively.

The screen is placed atop a substrate. Ink is placed on top of the screen, and a floodbar is used to push the ink through the holes in the mesh. The operator begins with the fill bar at the rear of the screen and behind a reservoir of ink. The operator lifts the screen to prevent contact with the substrate and then, using a slight amount of downward force, pulls the fill bar to the front of the screen. This effectively fills the mesh openings with ink and moves the ink reservoir to the front of the screen. The operator then uses a squeegee (rubber blade) to move the mesh down to the substrate and pushes the squeegee to the rear of the screen. The ink that is in the mesh opening is pumped or squeezed by capillary action to the substrate in a controlled and prescribed amount, i.e. the wet ink deposit is proportional to the thickness of the mesh and/or stencil. As the squeegee moves toward the rear of the screen the tension of the mesh pulls the mesh up away from the substrate (called snap-off), leaving the ink upon the substrate surface.

There are three common types of screen printing presses: the 'flat-bed', 'cylinder', and the most widely used type, the 'rotary'.[6]

Textile items printed with multicoloured designs often use a wet-on-wet technique, or have the colours dried while still on the press, while graphic items are allowed to dry between colours, which are then printed with another screen, often in a different colour, after the product is re-aligned on the press.

Most screens are ready for re-coating once the emulsion has been removed, but sometimes screens must undergo a further step in the reclaiming process called dehazing. This additional step removes haze or "ghost images" left behind in the screen after the emulsion has been removed. Ghost images tend to faintly outline the open areas of previous stencils, hence the name. They are the result of ink residue trapped in the mesh, often in the knuckles of the mesh (the points where threads cross).[11]

While the public thinks of garments in conjunction with screen printing, the technique is used on tens of thousands of items, including decals, clock and watch faces, balloons, and many other products. The technique has even been adapted for more advanced uses, such as laying down conductors and resistors in multi-layer circuits using thin ceramic layers as the substrate.

Stencilling techniques

A macro photo of a screen print with a photographically produced stencil. The ink will be printed where the stencil does not cover the substrate.

A method of stencilling that has increased in popularity over the past years is the photo emulsion technique:

Hand-painted colour separation on transparent overlay by serigraph printer Csaba Markus

1. The original image is created on a transparent overlay; the image may be drawn or painted directly on the overlay, photocopied, or printed with a computer printer, making sure that the areas to be inked are not transparent. A black-and-white positive may also be used (projected onto the screen). However, unlike traditional plate-making, these screens are normally exposed by using film positives.
2. A screen must then be selected. There are several different mesh counts that can be used depending on the detail of the design being printed. Once a screen is selected, it must be coated with emulsion and left to dry in a dark room. Once dry, it is then possible to burn/expose the print.
3. The overlay is placed over the screen, and then exposed with a light source containing ultraviolet light in the 350–420 nanometre spectrum.
4. The screen is washed off thoroughly. The areas of emulsion that were not exposed to light dissolve and wash away, leaving a negative stencil of the image on the mesh.

Another advantage of screen printing is that large quantities can be produced rapidly with new automatic presses, up to 1800 shirts in 1 hour.[12] The current speed loading record is 1805 shirts printed in one hour, documented on 18 February 2005. Maddie Sikorski of the New Buffalo Shirt Factory in Clarence, New York (USA) set this record at the Image Wear Expo in Orlando, Florida, USA, using a 12-colour M&R Formula Press and an M&R Passport Automatic Textile Unloader. The world speed record represents a speed that is over four times the typical average speed for manual loading of shirts for automated screen printing.[9] Screen printing materials

• Caviar beads – A caviar bead is a glue that is printed in the shape of the design, to which small plastic beads are then applied; this works well with solid block areas, creating an interesting tactile surface.
• Cracking ink – Cracking ink produces an intentionally cracked surface after drying.
• Discharge inks – Discharge ink is used to print lighter colours onto dark background fabrics; it works by removing the dye of the garment, which leaves a much softer texture. The downsides of this process are that the results are less graphic in nature than plastisol inks, and exact colours are difficult to control. One of the advantages is that discharge inks are especially good for distressed prints and for under-basing on dark garments that are to be printed with additional layers of plastisol; they add variety to the design or give it a natural soft feel.
• Expanding ink (puff) – Expanding ink, or puff, is an additive to plastisol inks which raises the print off the garment, creating a 3D feel and look to the design. It is mostly used when printing on apparel.
• Flocking – Flocking consists of a glue printed onto the fabric, to which flock material is then applied for a velvet touch.
• Foil – Foil is much like flock, but instead of a velvet touch and look it has a reflective, mirror-like look. Although foil is finished with a heat press process, the screen printing process is still needed to apply the adhesive glue to the material in the desired logo or design.
• Four colour process (CMYK colour model) – Four colour process is when the artwork is created and then separated into four colours (CMYK) which combine to create the full spectrum of colours needed for photographic prints. This means a large number of colours can be simulated using only four screens, reducing costs, time, and set-up. The inks are required to blend and are more translucent, meaning a compromise with vibrancy of colour.
• Glitter/Shimmer – Glitter or shimmer ink has metallic flakes added to the ink base to create a sparkle effect. It is usually available in gold or silver but can be mixed to make most colours.
• Gloss – Gloss ink is a clear base laid over previously printed inks to create a shiny finish.
• Metallic – Metallic ink is similar to glitter, but with smaller particles suspended in the ink. A glue is printed onto the fabric, then nano-scale fibres are applied to it. This is often purchased already made.
• Mirrored silver – Mirrored silver is a highly reflective, solvent-based ink.
• Nylobond – Nylobond is a special ink additive for printing onto technical or waterproof fabrics.
• Plastisol – Plastisol is the most common ink used in commercial garment decoration. It gives good colour opacity onto dark garments and clear graphic detail with, as the name suggests, a more plasticized texture. The print can be made softer with special additives or heavier by adding extra layers of ink. Plastisol inks require heat (approx. 150 °C (300 °F) for many inks) to cure the print.
• PVC- and phthalate-free – A relatively new breed of ink with the benefits of plastisol but without its two main toxic components. It also has a soft texture.
• Suede ink – Suede ink is a milky coloured additive that is added to plastisol. With suede additive you can make any colour of plastisol have a suede feel. It is actually a puff blowing agent that does not bubble as much as regular puff ink. The directions vary from manufacturer to manufacturer, but generally up to 50% suede can be added to normal plastisol.
• Water-based inks – These penetrate the fabric more than plastisol inks and create a much softer feel. They are ideal for printing darker inks onto lighter coloured garments, and are also useful for larger area prints where texture is important. Some inks require heat or an added catalyst to make the print permanent.
• High build – High build is a process which uses a type of varnish against a lower mesh count with many coats of emulsion, or a thicker grade of emulsion (e.g., Capillex®). After the varnish passes through to the substrate, an embossed-appearing, 'raised' area of varnish is created. When cured at the end of the process, the varnish yields a Braille effect, hence the term 'high build'.

Versatility

Screen with exposed image ready to be printed.

Screen printing is more versatile than traditional printing techniques. The surface does not have to be printed under pressure, unlike etching or lithography, and it does not have to be planar. Different inks can be used to work with a variety of materials, such as textiles, ceramics,[13] wood, paper, glass, metal, and plastic. As a result, screen printing is used in many different industries, including:

• Balloons
• Clothing
• Decals
• Medical devices[14]
• Printed electronics, including circuit board printing
• Product labels
• Signs and displays
• Snowboard graphics
• Textile fabric
• Thick film technology

Semiconducting material

In screen printing on wafer-based solar photovoltaic (PV) cells, the mesh and buses of silver are printed on the front, and buses of silver are also printed on the back. Aluminium paste is then dispensed over the whole surface of the back for passivation and surface reflection.[15] One of the parameters that can be varied and controlled in screen printing is the thickness of the print. This makes it useful for some of the techniques of printing solar cells, electronics etc.

Solar wafers are becoming thinner and larger, so careful printing is required to maintain a lower breakage rate, though high throughput at the printing stage improves the throughput of the whole cell production line.[15]

Screen printing press

To print multiple copies of the screen design on garments in an efficient manner, amateur and professional printers usually use a screen printing press. Many companies offer simple to sophisticated printing presses. Most of these presses are manual; a few industrial-grade automatic presses require minimal manual labour and increase production significantly. Rotary screen printing

A development of screen printing with flat screens from 1963 was to wrap the screen around to form a tube, with the ink supply and squeegee inside the tube. The resulting roller rotates at the same speed as the web in a roll-to-roll machine. The benefits are high output rates and long rolls of product. This is the only way to make high-build, fully patterned printing/coating as a continuous process, and it has been widely used for manufacturing textured products. Printing Paper & ink

Printing and writing papers are paper grades used for newspapers, magazines, catalogs, books, commercial printing, business forms, stationery, copying and digital printing. About one third of the total pulp and paper market (in 2000) was printing and writing papers.[1] The pulp or fibres used in printing and writing papers are extracted from wood using a chemical or mechanical process.[2]

In the United States printing and writing papers are separated into four main categories:[3]

1. Uncoated Freesheet Paper 2. Uncoated Mechanical Paper 3. Coated Freesheet Paper 4. Coated Mechanical Paper

Paper is a thin material produced by pressing together moist fibres of pulp derived from wood, rags or grasses, and drying them into flexible sheets. It is a versatile material with many uses, including writing, printing, packaging, cleaning, and a number of industrial and construction processes.

Paper and the pulp papermaking process are said to have been developed in China during the early 2nd century AD, possibly as early as the year 105 AD,[1] by the Han court eunuch Cai Lun, although the earliest archaeological fragments of paper derive from the 2nd century BC in China.[2] The modern pulp and paper industry is global, with China leading its production and the United States right behind it. History

History of paper

Hemp wrapping paper, China, circa 100 BC.

The oldest known archaeological fragments of the immediate precursor to modern paper date to the 2nd century BC in China. The pulp papermaking process is ascribed to Cai Lun, a 2nd-century AD Han court eunuch.[2] With paper as an effective substitute for silk in many applications, China could export silk in greater quantity.

Its knowledge and uses spread from China through the Middle East to medieval Europe in the 13th century, where the first water-powered paper mills were built.[3] Because of paper's introduction to the West through the city of Baghdad, it was first called bagdatikos.[4] In the 19th century, industrial manufacture greatly lowered its cost, enabling mass exchange of information and contributing to significant cultural shifts. In 1844, the Canadian inventor Charles Fenerty and the German inventor F. G. Keller independently developed processes for pulping wood fibres.[5] Etymology

Papyrus

The word "paper" is etymologically derived from Latin papyrus, which comes from the Greek πάπυρος (papuros), the word for the Cyperus papyrus p la nt. [6][7] Papyrus is a thick, paper-like material produced from the pith of the Cyperus papyrus plant, which was used in ancient Egypt and other Mediterranean cultures for wr iting before the introduction of paper into the and Europe.[8] Although the word paper is etymologically derived from papyrus, the two are produced very differently and the development of the first is distinct from the development of the second. Papyrus is a lamination of natural plant fibres, while paper is manufactured from fibres whose properties have been changed by maceration.[2] Papermaking

Papermaking

Chemical pulping


To make pulp from wood, a chemical pulping process separates lignin from cellulose fibres. This is accomplished by dissolving lignin in a cooking liquor, so that it may be washed from the cellulose; this preserves the length of the cellulose fibres. Paper made from chemical pulps is also known as wood-free paper – not to be confused with tree-free paper; this is because it does not contain lignin, which deteriorates over time. The pulp can also be bleached to produce white paper, but this consumes 5% of the fibres; chemical pulping processes are not used to make paper made from cotton, which is already 90% cellulose.

The microscopic structure of paper: micrograph of paper autofluorescing under ultraviolet light. The individual fibres in this sample are around 10 µm in diameter.

There are three main chemical pulping processes: the sulfite process dates back to the 1840s and was the dominant method before the Second World War. The kraft process, invented in the 1870s and first used in the 1890s, is now the most commonly practised strategy; one of its advantages is that the chemical reaction with lignin produces heat, which can be used to run a generator. Most pulping operations using the kraft process are net contributors to the electricity grid or use the electricity to run an adjacent paper mill. Another advantage is that this process recovers and reuses all inorganic chemical reagents. Soda pulping is another specialty process used to pulp straws and hardwoods with high silicate content.

Mechanical pulping There are two major mechanical pulps: thermomechanical pulp (TMP) and groundwood pulp (GW). In the TMP process, wood is chipped and then fed into large steam-heated refiners, where the chips are squeezed and converted to fibres between two steel discs. In the groundwood process, debarked logs are fed into grinders where they are pressed against rotating stones to be made into fibres. Mechanical pulping does not remove the lignin, so the yield is very high, >95%; however, it causes the paper thus produced to turn yellow and become brittle over time. Mechanical pulps have rather short fibres, thus producing weak paper. Although large amounts of electrical energy are required to produce mechanical pulp, it costs less than the chemical kind.

De-inked pulp

Main article: de-inking

Paper recycling processes can use either chemically or mechanically produced pulp; by mixing it with water and applying mechanical action the hydrogen bonds in the paper can be broken and fibres separated again. Most recycled paper contains a proportion of virgin fibre for the sake of quality; generally speaking, de-inked pulp is of the same quality or lower than the collected paper it was made from.

There are three main classifications of recycled fibre:

• Mill broke or internal mill waste – This incorporates any substandard or grade-change paper made within the paper mill itself, which then goes back into the manufacturing system to be re-pulped back into paper. Such out-of-specification paper is not sold and is therefore often not classified as genuine reclaimed recycled fibre; however, most paper mills have been reusing their own waste fibre for many years, long before recycling became popular.
• Preconsumer waste – This is offcut and processing waste, such as guillotine trims and blank waste; it is generated outside the paper mill and could potentially go to landfill, and is a genuine recycled fibre source; it includes de-inked preconsumer fibre (recycled material that has been printed but did not reach its intended end use, such as waste from printers and unsold publications).[9]
• Postconsumer waste – This is fibre from paper that has been used for its intended end use and includes office waste, magazine papers and newsprint. As the vast majority of this material has been printed – either digitally or by more conventional means such as lithography or rotogravure – it will either be recycled as printed paper or go through a de-inking process first.

Recycled papers can be made from 100% recycled materials or blended with virgin pulp, although they are (generally) not as strong nor as bright as papers made from the latter.

Additives

Besides the fibres, pulps may contain fillers such as chalk or china clay, which improve the paper's characteristics for printing or writing. Additives for sizing purposes may be mixed with the pulp and/or applied to the paper web later in the manufacturing process; the purpose of such sizing is to establish the correct level of surface absorbency to suit ink or paint.

Producing paper

Main articles: paper machine and papermaking

The pulp is fed to a paper machine, where it is formed as a paper web and the water is removed from it by pressing and drying.

Pressing the sheet removes the water by force; once the water is forced from the sheet, a special kind of felt, which is not to be confused with the traditional one, is used to collect the water; whereas when making paper by hand, a blotter sheet is used instead.

Drying involves using air and/or heat to remove water from the paper sheets; in the earliest days of paper making this was done by hanging the sheets like laundry; in more modern times, various forms of heated drying mechanisms are used. On the paper machine the most common is the steam-heated can dryer. These can reach temperatures above 200 °F (93 °C) and are used in long sequences of more than 40 cans, whose heat can easily dry the paper to less than 6% moisture.

Finishing

The paper may then undergo sizing to alter its physical properties for use in various applications.

Paper at this point is uncoated. Coated paper has a thin layer of material such as calcium carbonate or china clay applied to one or both sides in order to create a surface more suitable for high-resolution halftone screens. (Uncoated papers are rarely suitable for screens above 150 lpi.) Coated or uncoated papers may have their surfaces polished by calendering. Coated papers are divided into matte, semi-matte or silk, and gloss. Gloss papers give the highest optical density in the printed image.

The paper is then fed onto reels if it is to be used on web printing presses, or cut into sheets for other printing processes or other purposes. The fibres in the paper basically run in the machine direction. Sheets are usually cut "long-grain", i.e. with the grain parallel to the longer dimension of the sheet.

All paper produced by paper machines such as the Fourdrinier machine is wove paper, i.e. the wire mesh that transports the web leaves a pattern that has the same density along the paper grain and across the grain. Textured finishes and wire patterns imitating hand-made laid paper can be created by the use of appropriate rollers in the later stages of the machine.

Wove paper does not exhibit "laidlines", which are small regular lines left behind on paper when it was handmade in a mould made from rows of metal wires or bamboo. Laidlines are very close together. They run perpendicular to the "chainlines", which are further apart. Handmade paper similarly exhibits "deckle edges", or rough and feathery borders.[10] Applications

Paper can be produced with a wide variety of properties, depending on its intended use.

• For representing value: paper money, bank note, cheque, security, voucher and ticket
• For storing information: book, magazine, newspaper, art, zine, letter
• For personal use: diary, note to remind oneself, etc.; for temporary personal use: scratch paper
• For communication: between individuals and/or groups of people
• For packaging: corrugated box, envelope, packing and wrapping paper, paper string, charta emporetica and wallpaper
• For cleaning: handkerchiefs, paper towels, and cat litter
• For construction: papier-mâché, paper planes, paper honeycomb (used as a core material in composite materials), and others
• For other uses: emery paper, litmus paper, universal indicator paper, paper chromatography, and electrical insulation paper (see also dielectric and permittivity)

Types, thickness and weight

Paper size

Card and paper stock for crafts use comes in a wide variety of textures and colors.

The thickness of paper is often measured by caliper, which is typically given in thousandths of an inch in the United States and in thousandths of a millimetre in the rest of the world.[11] Paper may be between 0.07 and 0.18 millimetres (0.0028 and 0.0071 in) thick.[12]

Paper is often characterized by weight. In the United States, the weight assigned to a paper is the weight of a ream, 500 sheets, of varying "basic sizes", before the paper is cut into the size it is sold to end customers. For example, a ream of 20 lb, 8.5 in × 11 in (216 mm × 279 mm) paper weighs 5 pounds, because it has been cut from a larger sheet into four pieces.[13] In the United States, printing paper is generally 20 lb, 24 lb, or 32 lb at most. Cover stock is generally 68 lb, and 110 lb or more is considered card stock.

In Europe, and other regions using the ISO 216 paper sizing system, the weight is expressed in grammes per square metre (g/m² or gsm). Printing paper is generally between 60 gsm and 120 gsm. Anything heavier than 160 gsm is considered card. The weight of a ream therefore depends on the dimensions of the paper and its thickness.
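The two conventions above can be reduced to simple arithmetic. The Python sketch below reproduces the 20 lb example and the metric grammage calculation; the 17 in × 22 in "basic size" used for bond/writing paper is a commonly cited convention and is an assumption here rather than something stated in the text.

# Per-ream weight under the US basis-weight and ISO grammage conventions.
def us_ream_weight_lb(basis_weight_lb, sheet_w_in, sheet_h_in,
                      basic_w_in=17.0, basic_h_in=22.0, sheets=500):
    # The cut sheet weighs the same fraction of the basis weight as its share
    # of the uncut basic-size sheet's area.
    fraction = (sheet_w_in * sheet_h_in) / (basic_w_in * basic_h_in)
    return basis_weight_lb * fraction * sheets / 500.0

def iso_ream_weight_kg(grammage_gsm, sheet_w_mm, sheet_h_mm, sheets=500):
    # Grammage (g/m^2) times sheet area times sheet count, converted to kilograms.
    area_m2 = (sheet_w_mm / 1000.0) * (sheet_h_mm / 1000.0)
    return grammage_gsm * area_m2 * sheets / 1000.0

print(us_ream_weight_lb(20, 8.5, 11))    # 5.0 lb, matching the example above
print(iso_ream_weight_kg(80, 210, 297))  # about 2.49 kg for a ream of 80 gsm A4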

Most commercial paper sold in North America is cut to standard paper sizes based on customary units and is defined by the length and width of a sheet of paper. The ISO 216 system used in most other countries is based on the surface area of a sheet of paper, not on a sheet's width and length. It was first adopted in Germany in 1922 and generally spread as nations adopted the metric system. The largest standard size paper is A0 (A zero), measuring one square metre (approx. 1189 × 841 mm). Two sheets of A1, placed upright side by side, fit exactly into one sheet of A0 laid on its side. Similarly, two sheets of A2 fit into one sheet of A1, and so forth. Common sizes used in the office and the home are A4 and A3 (A3 is the size of two A4 sheets).

The density of paper ranges from 250 kg/m3 (16 lb/cu ft) for tissue paper to 1,500 kg/m3 (94 lb/cu ft) for some speciality paper. Printing paper is about 800 kg/m3 (50 lb/cu ft).[14]
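Caliper, grammage and density are tied together by the simple relation thickness = grammage / density, so the figures quoted above can be cross-checked with a few lines of Python; the numbers fed in below are illustrative values, not measurements of a specific paper.

# Estimate sheet thickness (caliper) from grammage and density.
def caliper_mm(grammage_gsm, density_kg_per_m3):
    thickness_m = (grammage_gsm / 1000.0) / density_kg_per_m3  # kg/m^2 divided by kg/m^3
    return thickness_m * 1000.0                                # metres to millimetres

print(caliper_mm(80, 800))   # 0.1 mm for a typical 80 gsm printing paper
print(caliper_mm(120, 800))  # 0.15 mm -- both fall inside the 0.07-0.18 mm range above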

Paper may be classified into seven categories:[15]

• Printing papers of wide variety.
• Wrapping papers for the protection of goods and merchandise. This includes wax and kraft papers.
• Writing paper suitable for stationery requirements. This includes ledger, bank, and bond paper.
• Blotting papers containing little or no size.
• Drawing papers, usually with rough surfaces, used by artists and designers.
• Handmade papers including most decorative papers, Ingres papers, Japanese paper and tissues, all characterized by lack of grain direction.
• Specialty papers including cigarette paper, toilet tissue, and other industrial papers.

Some paper types include:

• Bond paper
• Coated paper: glossy and matte surface
• Construction paper/sugar paper
• Fish paper (vulcanized fibres for electrical insulation)
• Laid paper
• Leather paper
• Mummy paper
• Oak tag paper
• Sandpaper
• Tyvek paper
• Wallpaper
• Waterproof paper
• Wove paper

Paper stability

Much of the early paper made from wood pulp contained significant amounts of alum, a variety of aluminium sulfate salts that is significantly acidic. Alum was added to paper to assist in sizing,[16] making it somewhat water resistant so that inks did not "run" or spread uncontrollably. Early papermakers did not realize that the alum they added liberally to cure almost every problem encountered in making their product would eventually be detrimental.[17] The cellulose fibres that make up paper are hydrolyzed by acid, and the presence of alum would eventually degrade the fibres until the paper disintegrated in a process that has come to be known as "slow fire". Documents written on rag paper were significantly more stable. The use of non-acidic additives to make paper is becoming more prevalent, and the stability of these papers is less of an issue.

Paper made from mechanical pulp contains significant amounts of lignin, a major component in wood. In the presence of light and oxygen, lignin reacts to give yellow materials,[18] which is why newsprint and other mechanical paper yellows with age. Paper made from bleached kraft or sulfite pulps does not contain significant amounts of lignin and is therefore better suited for books, documents and other applications where whiteness of the paper is essential.

Paper made from wood pulp is not necessarily less durable than a rag paper. The ageing behavior of a paper is determined by its manufacture, not the original source of the fibres.[19] Furthermore, tests sponsored by the Library of Congress prove that all paper is at risk of acid decay, because cellulose itself produces formic, acetic, lactic and oxalic acids.[20]

Mechanical pulping yields almost a tonne of pulp per tonne of dry wood used, which is why mechanical pulps are sometimes referred to as "high yield" pulps. With almost twice the yield of chemical pulping, mechanical pulp is often cheaper. Mass-market paperback books and newspapers tend to use mechanical papers. Book publishers tend to use acid-free paper, made from fully bleached chemical pulps, for hardback and trade paperback books. Environmental impact of paper

Main articles: Environmental impact of paper, Paper pollution and Deforestation

The production and use of paper has a number of adverse effects on the environment.

Worldwide consumption of paper has risen by 400% in the past 40 years, leading to an increase in deforestation, with 35% of harvested trees being used for paper manufacture. Most paper companies also plant trees to help regrow forests. Logging of old growth forests accounts for less than 10% of wood pulp,[21] but is one of the most controversial issues.

Paper waste accounts for up to 40% of total waste produced in the United States each year, which adds up to 71.6 million tons of paper waste per year in the United States alone.[22] The average office worker in the US prints 31 pages every day.[23] Americans also use on the order of 16 billion paper cups per year.

Conventional bleaching of wood pulp using elemental chlorine produces and releases into the environment large amounts of chlorinated organic compounds, including chlorinated dioxins.[24] Dioxins are recognized as a persistent environmental pollutant, regulated internationally by the Stockholm Convention on Persistent Organic Pollutants. Dioxins are highly toxic, and health effects on humans include reproductive, developmental, immune and hormonal problems. They are known to be carcinogenic. Over 90% of human exposure is through food, primarily meat, dairy, fish and shellfish, as dioxins accumulate in the food chain in the fatty tissue of animals.[25] Future of paper

Some manufacturers have started using a new, significantly more environmentally friendly alternative to expanded plastic packaging. Made out of paper, and known commercially as paperfoam, the new packaging has very similar mechanical properties to some expanded plastic packaging, but is biodegradable and can also be recycled with ordinary paper.[26]

With increasing environmental concerns about synthetic coatings (such as PFOA) and the higher prices of hydrocarbon based petrochemicals, there is a focus on zein (corn protein) as a coating for paper in high grease applications such as popcorn bags.[27]

Also, synthetics such as Tyvek and Teslin have been introduced as printing media as a more durable material than paper. Papermaking

Papermaking is the process of making paper, a material which is used universally today for writing and packaging.

In papermaking, a dilute suspension of fibres in water is drained through a screen, so that a mat of randomly interwoven fibres is laid down. Water is removed from this mat of fibres by pressing and drying to make paper. Since the invention of the Fourdrinier machine in the 19th century, most paper has been made from wood pulp because of cost. But other fibre sources such as cotton and textiles are used for high-quality papers. One common measure of a paper's quality is its non-wood-pulp content, e.g., 25% cotton, 50% rag, etc. Previously, paper was made up of rags and hemp as well as other materials. History

History of paper

Hemp wrapping paper, China, circa 100 BCE.

Papermaking is known to have been traced back to China about 105 CE, when Cai Lun, an official attached to the Imperial court during the Han Dynasty (202 BCE–220 CE), created a sheet of paper using mulberry and other bast fibres along with fishnets, old rags, and hemp waste.[1] However, a recent archaeological discovery has been reported from Gansu province of paper with legible Chinese writings on it dating from 8 BCE,[2] while paper had been used in China for wrapping and padding since the 2nd century BCE.[3] Paper used as a writing medium became widespread by the 3rd century[4] and, by the 6th century, toilet paper was starting to be used in China as well.[5] During the Tang Dynasty (618–907 CE) paper was folded and sewn into square bags to preserve the flavour of tea,[3] while the later Song Dynasty (960–1279 CE) was the first government to issue paper-printed money. In the 8th century, paper spread to the Islamic world, where the rudimentary and laborious process of papermaking was refined and machinery was designed for bulk manufacturing of paper. Production began in Samarkand, Baghdad, Damascus, Cairo, Morocco and then Muslim Spain. In Baghdad, this was under the supervision of the Grand Vizier Ja'far ibn Yahya. In general, Muslims invented a method to make a thicker sheet of paper. This helped transform papermaking from an art into a major industry.[6][7] The earliest use of water-powered mills in paper production, specifically the use of pulp mills for preparing the pulp for papermaking, dates back to Samarkand in the 8th century.[8] The earliest references to paper mills also come from the medieval Islamic world, where they were first noted in the 9th century by Arabic geographers in Damascus.[9] Papermaking was diffused across the Islamic world, from where it was diffused further west into Europe.[10]

Modern papermaking began in the early 19th century in Europe with the development of the Fourdrinier machine, which produces a continuous roll of paper rather than individual sheets. These machines are considerably large, up to 150 metres in length, produce a sheet up to 10 metres wide, and run at around 100 km/h. In 1844, Canadian inventor Charles Fenerty and German inventor F. G. Keller independently invented the machine and associated process to make use of wood pulp in papermaking.[11] This would end the nearly 2,000-year use of pulped rags and start a new era for the production of newsprint; eventually almost all paper was made out of pulped wood.

Manual papermaking

Papermaking, regardless of the scale on which it is done, involves making a dilute suspension of fibres in water and allowing this suspension to drain through a screen, so that a mat of randomly interwoven fibres is laid down. Water is then removed from this mat of fibres using a press.

The method of manual papermaking changed very little over time, despite advances in technologies. The process of manufacturing handmade paper can be generalized into five steps:

1. Separating the useful fibre from the rest of raw materials (e.g. cellulose from wood, cotton, etc.)
2. Beating down the fibre into pulp
3. Adjusting the colour, mechanical, chemical, biological, and other properties of the paper by adding special chemical premixes
4. Screening the resulting solution
5. Pressing and drying to get the actual paper

An illustration from 105 AD depicts the papermaking process as designed by Cai Lun.

Screening the fibre involves using a mesh made from non-corroding and inert material, such as aluminium, which is stretched in a wooden frame similar to that of a window. The size of the paper is governed by the size of the frame. This tool is then completely submerged in the solution vertically and drawn out horizontally to ensure a uniform coating of the wire mesh. Excess water is then removed and the wet mat of fibre laid on top of a damp cloth. The process is repeated for the required number of sheets. This stack of wet mats is then pressed in a hydraulic press very gently to ensure the fibre does not squeeze out. The fairly damp fibre is then dried using a variety of methods, such as vacuum drying or simply air drying. Sometimes, the individual sheet is rolled to flatten, harden, and refine the surface. Finally, the paper is then cut to the desired shape or the standard shape (A4, letter, legal, etc.) and packed.[12]

The wooden frame is called a "deckle". The deckle leaves the edges of the paper slightly irregular and wavy, called "deckle edges", one of the indications that the paper was made by hand. Deckle-edged paper is occasionally mechanically imitated today to create the impression of old-fashioned luxury. The impressions in paper caused by the wires in the screen that run sideways are called "laid lines" and the impressions made, usually from top to bottom, by the wires holding the sideways wires together are called "chain lines". Watermarks are created by weaving a design into the wires in the mould. This is essentially true of Oriental moulds made of other substances, such as bamboo. Handmade paper generally folds and tears more evenly along the laid lines.

Handmade paper is also prepared in laboratories to study papermaking and in paper mills to check the quality of the production process. The "handsheets" made according to TAPPI Standard T 205[13] are circular sheets 15.9 cm (6.25 in) in diameter and are tested for paper characteristics such as brightness, strength and degree of sizing.[14] Industrial papermaking

Paper machine

A modern paper mill is divided into several sections, roughly corresponding to the processes involved in making handmade paper. Pulp is refined and mixed in water with other additives to make a pulp slurry. The head-box of the paper machine (Fourdrinier machine) distributes the slurry onto a moving continuous screen, water drains from the slurry (by gravity or under vacuum), the wet paper sheet goes through presses and dries, and finally rolls into large rolls. The outcome often weighs several tons.

Another type of paper machine makes use of a cylinder mould that rotates while partially immersed in a vat of dilute pulp. The pulp is picked up by the wire and covers the mould as it rises out of the vat. A couch roller is pressed against the mould to smooth out the pulp, and picks the wet sheet off the mould.

Key inventors include Heinrich Voelter and Carl Daniel Ekman, among others. History of paper sheet production

Paper sizes

Folio

Folio (printing)

In the beginning of Western papermaking, the size of a sheet of paper was fairly standard. A page of paper is referred to as a leaf. When a leaf was printed on without being folded, the size was referred to as folio (meaning leaf). It was roughly equal to the size of a small newspaper sheet. ("Folio" can also refer to other sizes – see paper sizes.)

Quarto

Quarto

A Folio folded once produces two leaves (or four pages), and the size of these leaves was referred to as quarto (4to) (about 230 x 280 mm).

Octavo

Octavo

If the original sheet was folded in half again, the result was eight pages, referred to as octavo (8vo), which is roughly the size of an average modern novel. An octavo folding produces four leaves; the first two and the second two will be joined at the top by the first fold. The top edge is usually trimmed to make it possible to look freely at each side of the leaf. Sometimes books are found that have not been trimmed on the top, and these pages are referred to as unopened.

An octavo book produces a printing puzzle. The paper was first printed before folding, and thus pages 8 and 1 are printed right-side-up on the bottom of the sheet, and pages 4 and 5 are printed upside-down on the top of the same side of the paper. On the opposite side, pages 2 and 7 are printed right-side-up on the bottom of the sheet, and pages 6 and 3 are printed upside-down on the top of the sheet. When the paper is folded twice and the folds trimmed, the pages fall into proper order.
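The layout described above can be written down as data and checked mechanically. The Python sketch below encodes the stated positions and verifies the classic imposition property that the two pages printed side by side in any row of either side of the sheet sum to the page count plus one (here 9); the position names are simply labels chosen for this illustration.

# The imposition described above: (side, row) -> [left page, right page],
# with the top row printed upside-down.
imposition = {
    ("front", "bottom"): [8, 1],   # right-side-up
    ("front", "top"):    [4, 5],   # upside-down
    ("back",  "bottom"): [2, 7],   # right-side-up
    ("back",  "top"):    [6, 3],   # upside-down
}

# Sanity check: side-by-side pages in each row sum to pages + 1, i.e. 8 + 1 = 9.
for (side, row), (left, right) in imposition.items():
    assert left + right == 9, (side, row)
print("layout is consistent with an eight-page folded section")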

Sixteen-mo

Smaller books are produced by folding the leaves again to produce 16 pages, known as a sixteen-mo (16mo) (originally sextodecimo). Other folding arrangements produce yet smaller books such as the thirty-two-mo (32mo) (duo et tricensimo).

Octavo

When a standard-sized octavo book is produced by twice folding a large leaf, two leaves joined at the top will be contained in the resulting fold (which ends up in the gulley between the pages). This group of eight numberable pages is called a signature or a gathering. Traditionally, printed signatures were stacked on top of each other in a sewing frame and each signature was sewn through the inner fold to the signature on top of it. The sewing ran around leather bands or fabric tapes along the backs of the signatures to stabilize the pile of signatures. The leather bands originally used in the West to stabilize the backs of sewn books appear as a number of ridges under the leather on the spine of leather books. The ends of the leather strips or fabric bands were sewn or glued onto the cover boards and reinforced the hinging of the book in its covers.

Standardisation ISO sizes

While opinions and speculation abound on the exact reasons for standardized paper sizes, the most revealing feature of popular sizes (such as Letter and ISO 216 sizes) is that they conform not to some arbitrary device dimension, but that the length of the page is chosen to be the width of the page times the square root of two (≈1.414). This feature allows a large page to be cut in half so that the resulting two pages have the same aspect ratio as the original piece (just with half the area). Repeated cuts can be made to reduce the entire sheet to one size of pages without wasted paper. This format was formalized by ISO 216; however, such logic dictated efficient paper sizes long before the ISO was created. For example, traditional 8.5" × 11" Letter paper is within a few millimetres of A4 paper (ISO 216) dimensions. While paper sizes may have been chosen based on the size of original frames, the frames themselves were chosen to make page reduction efficient without distorting the aspect ratio of the pages regardless of the final size chosen. However, there are paper sizes that do not conform to this idea when specific applications are needed.
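The square-root-of-two rule can be turned into a few lines of Python that derive the familiar A-series dimensions from the two constraints described above: A0 has an area of one square metre and every sheet's sides are in the ratio 1:√2, so each smaller size is obtained by cutting the previous one across its long side. Rounding the halved dimension down to a whole millimetre, the convention assumed here, reproduces the published values.

import math

# A0 is defined by two constraints: an area of one square metre and sides in the
# ratio 1 : sqrt(2). Subsequent sizes are made by halving the long side, which
# preserves the aspect ratio; the halved dimension is rounded down to a whole
# millimetre (assumed here to match the published standard).
short0 = math.sqrt(1.0 / math.sqrt(2))  # short side of A0 in metres, about 0.8409 m
a_sizes = [(round(short0 * 1000), round(short0 * math.sqrt(2) * 1000))]  # A0 = (841, 1189)
for _ in range(4):
    short, long = a_sizes[-1]
    a_sizes.append((long // 2, short))  # cut across the long side

for n, size in enumerate(a_sizes):
    print(f"A{n}: {size} mm")
# A0: (841, 1189), A1: (594, 841), A2: (420, 594), A3: (297, 420), A4: (210, 297)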

Vatmen paper

Vatmen paper was a type of paper made in the Netherlands that was 17 inches (430 mm) wide and 44 inches (1,100 mm) long. The length of 44 inches was (reputedly) chosen because that is how far the papermaker could stretch his arm.[citation needed] A single vatman can generally handle a mould and deckle which produce a sheet up to 25 inches (640 mm) wide.[citation needed]

Handmade paper

Kalpi is one of the important hubs of the handmade paper industry in India. A layman could easily mistake the paper sheet for leather when it is dried in the sun.

Paper types


Paper can be separated into two main categories: uncoated and coated stocks. Uncoated stocks:

Uncoated stock is paper that has no coated pigment applied to reduce the absorbency or increase the smoothness. The uncoated finishes can be described as vellum, antique, wove, or smooth. Coated stocks:

A coated stock has a surface coating that has been applied to make the surface more receptive for the reproduction of text and images, in order to achieve sharper detail and improved colour density. The objective of coating the stock with a clay pigment is to improve the smoothness and reduce the absorbency. Coated paper finishes can be categorized as matte, dull, cast, gloss, and high gloss. The coating can be on both sides of the stock (coated two sides, "C2S") or on one side only (coated one side, "C1S").

From these subcategories, paper stocks are then separated into types such as offset, bond, cover, index, and vellum bristol. The following tables show these types, along with their common colors, weights, and uses.

There are two main types of paper used in commercial printing: coated and uncoated

Coated Paper - a coated paper stock has a surface sealant and often contains clay. Coated papers reduce dot gain by restricting ink from absorbing into the surface of the paper. This sealant allows for crisper printing, particularly of photos, gradients and fine detailed images. Coated stocks have numerous sheen options: gloss, matte, dull and satin finishes.

Gloss - a gloss coated paper has a high sheen, such as those in a typical magazine. Gloss papers have less bulk and opacity and are less expensive than a dull or matte paper of equal thickness. A cast coated paper (such as "Kromekote") has a very high gloss sheen made by pressing the paper against a polished hot metal drum while the coating is still wet. Dull - a dull finish coated paper has a smooth surface that is low in gloss. Dull coated paper falls between matte and glossy paper. Matte - a matte coated paper is a non-glossy, flat-looking paper with very little sheen. Matte papers are more opaque, contain greater bulk, and are higher in cost.

Uncoated Paper - an uncoated paper stock has not been coated with clay or other surface sealants. Inks dry by absorbing into the paper. Uncoated papers comprise a vast number of paper types and are available in a variety of surfaces, both smooth and textured (such as "laid" and "linen"). Some fine quality uncoated sheets contain a watermark.

Weight - the weight of a paper refers to its thickness and is typically measured in pounds (such as 20#) and points (such as 10 PT). The higher the number, the thicker the paper for that "type" of paper. Paper weights in commercial printing can be very confusing. For example, a sheet of 20# bond (probably what you use on your copy machine) is about the same thickness as a sheet of 50# offset. A more meaningful measurement to pay attention to is a paper's caliper. See the paper caliper chart. Stationery Papers

Three general paper thickness categories used to describe the basis weight of matching stationery papers are writing, text and cover weight papers. They are commonly used for a company's matching letterhead, envelope, business cards and other collateral items.

Writing Paper - a letterhead-weight stock, typically 24# or 28# writing, which often has a watermark. Text Paper - thicker than a writing paper, but not as thick as a cover paper (card stock). Text-weight stationery paper is usually a 70# or 80# text. Cover Paper - a card stock paper, such as those used for a report cover. They are usually an 80# cover weight, although some brands of paper offer cover weight paper that is as thin as a 65# cover or as thick as a 100# cover or heavier.
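The earlier statement that 20# bond is about the same thickness as 50# offset becomes obvious once basis weights are converted to grammage, and the same conversion can be applied to the writing, text and cover weights just listed. The Python sketch below does the conversion; the basic sizes used (17 in × 22 in for bond/writing paper, 25 in × 38 in for offset/text paper) are commonly cited conventions and are an assumption here.

# Convert a US basis weight to grammage (g/m^2) using the grade's basic size.
GRAMS_PER_POUND = 453.59237
SQ_M_PER_SQ_IN = 0.00064516

def basis_weight_to_gsm(basis_weight_lb, basic_w_in, basic_h_in, sheets_per_ream=500):
    ream_grams = basis_weight_lb * GRAMS_PER_POUND
    ream_area_m2 = basic_w_in * basic_h_in * SQ_M_PER_SQ_IN * sheets_per_ream
    return ream_grams / ream_area_m2

print(basis_weight_to_gsm(20, 17, 22))   # about 75 gsm for 20# bond
print(basis_weight_to_gsm(50, 25, 38))   # about 74 gsm for 50# offset -- nearly the same sheet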

Opacity - a paper's opacity is determined by its weight, ingredients and absorbency. A paper's opacity determines how much printing will show through on the reverse side of a sheet. Opacity is expressed in terms of its percentage of reflection. Complete opacity is 100% and complete transparency is 0%.

Brightness - the brightness of a sheet of paper measures the percentage of blue light it reflects. The brightness of a piece of paper is typically expressed on a scale of 1 to 100, with 100 being the brightest. Most papers reflect 60-90% of light. The brightness of a paper affects readability, the perception of ink color and the contrast between light and dark hues.

Kinds of Ink

Ink is a liquid or paste that contains pigments or dyes and is used to color a surface to produce an image, text, or design. Ink is used for drawing or writing with a pen, brush, or quill. Thicker inks, in paste form, are used extensively in letterpress and lithographic printing.

Ink can be a complex medium, composed of solvents, pigments, dyes, resins, lubricants, solubilizers, surfactants, particulate matter, fluorescents, and other materials. The components of inks serve many purposes; the ink's carrier, colorants, and other additives affect the flow and thickness of the ink and its appearance when dry. Types

Magnified line drawn by a fountain pen.

Ink formulas vary, but commonly involve two components:

• Colorants • Vehicles (binders)

Inks generally fall into four classes:[1]

• Aqueous • Liquid • Paste • Powder

Colorants

Pigment inks are used more frequently than dyes because they are more color-fast, but they are also more expensive, less consistent in color, and have less of a color range than dyes.[1]

Pigments

Pigment

Pigments are solid, opaque particles suspended in ink to provide color.[1] Pigment molecules typically link together in crystalline structures that are 0.1–2 µm in size and comprise 5–30 percent of the ink volume.[1] Qualities such as hue, saturation, and lightness vary depending on the source and type of pigment.

Dyes

Dye

Dye-based inks are generally much stronger than pigment-based inks and can produce much more color of a given density per unit of mass. However, because dyes are dissolved in the liquid phase, they have a tendency to soak into paper, making the ink less efficient and potentially allowing the ink to bleed at the edges of an image.

To circumvent this problem, dye-based inks are made with solvents that dry rapidly or are used with q uick-drying methods of printing, such as blowing hot air on the fresh print. Other methods include harder paper sizing and more specialized paper coatings. The latter is particularly suited to inks used in non-industrial settings (which must conform to tighter toxicity and emission controls), such as inkjet printer inks. Another technique involves coating the paper with a charged coating. If the dye has the opposite charge, it is attracted to and retained by this coating, while the solvent soaks into the paper. Cellulose, the wood-derived material most paper is made of, is naturally charged, and so a compound that complexes with both the dye and the paper's surface aids retention at the surface. Such a compound is commonly used in ink-jet printing inks.

An additional advantage of dye-based ink systems is that the dye molecules can interact with other ink ingredients, potentially allowing greater benefit as compared to pigmented inks from optical brighteners and color-enhancing agents designed to increase the intensity and appearance of dyes.

A more recent development in dye-based inks are dyes that react with cellulose to permanently color the paper. Such inks are not affected by water, alcohol, and other solvents.[citation needed] As such, their use is recommended to prevent frauds that involve removing signatures, such as check washing. This kind of ink is most commonly found in gel inks and in certain fountain pen inks.[citation needed] History

Ink drawing of Ganesha under an umbrella (early 19th century). Ink, called masi, an admixture of several chemical components, has been used in India since at least the 4th century BC.[2] The practice of writing with ink and a sharp pointed needle was common in early South India.[3] Several Jain sutras in India were compiled in ink.[4]

Many ancient cultures around the world have independently discovered and formulated inks for the purposes of writing and drawing. The knowledge of the inks, their recipes and the techniques for their production comes from archaeological analysis or from written text itself.

The history of Chinese inks can be traced back to the 23rd century BC, with the utilization of natural plant dyes and animal and mineral inks based on such materials as graphite, which were ground with water and applied with ink brushes. Evidence for the earliest Chinese inks, similar to modern inksticks, dates to around 256 BC at the end of the Warring States period; these inks were produced from soot and animal glue.[5] The best inks for drawing or painting on paper or silk are produced from pine trees, which must be between 50 and 100 years old. The Chinese inkstick is produced with a fish glue, whereas Japanese glue (膠 "nikawa") is from cow or stag.[6]

The process of making India ink was known in China as early as the middle of the 3rd millennium BC, during Neolithic China.[7] India ink was first invented in China,[8][9] although the source of materials to make the carbon pigment in India ink was later often traded from India, thus the term India ink was coined.[8][9] The traditional Chinese method of making the ink was to grind a mixture of hide glue, carbon black, lampblack, and bone black pigment with a pestle and mortar, then pour it into a ceramic dish to dry.[8] To use the dry mixture, a wet brush would be applied until it reliquified.[8] The manufacture of India ink was well-established by the Cao Wei Dynasty (220–265 AD).[10] Indian documents written in Kharosthi with ink have been unearthed in Chinese Turkestan.[11] The practice of writing with ink and a sharp pointed needle was common in early South India.[3] Several Buddhist and Jain sutras in India were compiled in ink.[4]

In ancient Rome, atramentum was used. In an article for the Christian Science Monitor, Sharon J. Huntington describes these other historical inks:

About 1,600 years ago, a popular ink recipe was created. The recipe was used for centuries. Iron salts, such as ferrous sulfate (made by treating iron with sulfuric acid), were mixed with tannin from gallnuts (they grow on trees) and a thickener. When first put to paper, this ink is bluish- black. Over time it fades to a dull brown.

Scribes in medieval Europe (about AD 800 to 1500) wrote principally on parchment or vellum. One 12th century ink recipe called for hawthorn branches to be cut in the spring and left to dry. Then the bark was pounded from the branches and soaked in water for eight days. The water was boiled until it thickened and turned black. Wine was added during boiling. The ink was poured into special bags and hung in the sun. Once dried, the mixture was mixed with wine and iron salt over a fire to make the final ink.[12]

The reservoir pen, which may have been the first fountain pen, dates back to 953, when Ma'ād al-Mu'izz, the caliph of Egypt, demanded a pen that would not stain his hands or clothes, and was provided with a pen that held ink in a reservoir.[13]

In the 15th century, a new type of ink had to be developed in Europe for the printing press by Johannes Gutenberg. According to Martyn Lyons in his book Books: A Living History, Gutenberg’s dye was indelible, oil-based, and made from the soot of lamps (lamp-black) mixed with varnish and egg white.[14] Two types of ink were prevalent at the time: the Greek and Roman writing ink (soot, glue, and water) and the 12th century variety composed of ferrous sulfate, gall, gum, and water.[15] Neither of these handwriting inks could adhere to printing surfaces without creating blurs. Eventually an oily, varnish-like ink made of soot, turpentine, and walnut oil was created specifically for the printing press.

In 2011 worldwide consumption of printing inks generated revenues of more than 20 billion US dollars. Demand from traditional print media is shrinking; on the other hand, more and more printing ink is consumed for packaging.[16]

Health and environmental aspects

There is a misconception that ink is non-toxic even if swallowed. Once ingested, ink can be hazardous to one's health. Certain inks, such as those used in digital printers, and even those found in a common pen can be harmful. Though ink does not easily cause death, inappropriate contact can cause effects such as severe headaches, skin irritation, or nervous system damage. These effects can be caused by solvents, or by pigment ingredients such as p-Anisidine, which helps create some inks' color and shine.

Three main environmental issues with ink are:

• Heavy metals
• Non-renewable oils
• Volatile organic compounds

Some regulatory bodies have set standards for the amount of heavy metals in ink.[17] In recent years there has been a trend toward vegetable oils rather than petroleum oils, in response to demand for better environmental sustainability.

Writing and preservation

The two most used black writing inks in history are carbon inks and iron gall inks. Both types create problems for preservationists.

Carbon

Chinese inkstick; carbon-based and made from soot and animal glue.

Carbon inks were commonly made from lampblack or soot and a binding agent such as gum arabic or animal glue. The binding agent keeps the carbon particles in suspension and adhered to paper. The carbon particles do not fade over time even when in sunlight or when bleached. One benefit of carbon ink is that it is not harmful to the paper. Over time, the ink is chemically stable and therefore does not threaten the strength of the paper. Despite these benefits, carbon ink is not ideal for permanence and ease of preservation. Carbon ink has a tendency to smudge in humid environments and can be washed off a surface. The best method of preserving a document written in carbon ink is to ensure it is stored in a dry environment (Barrow 1972).

Recently, carbon inks made from carbon nanotubes have been successfully created. They are similar in composition to the traditional inks in that they use a polymer to suspend the carbon nanotubes. These inks can be used in inkjet printers and produce electrically conductive patterns.[18]

Iron gall

Iron gall inks became prominent in the early 12th century; they were used for centuries and were widely thought to be the best type of ink. However, iron gall ink is corrosive and damages the paper it is on (Waters 1940). Items containing this ink can become brittle and the writing fades to brown. The original scores of Johann Sebastian Bach are threatened by the destructive properties of iron gall ink. The majority of his works are held by the German State Library, and about 25% of those are in advanced stages of decay (American Libraries 2000). The rate at which the writing fades is based on several factors, such as proportions of ink ingredients, amount deposited on the paper, and paper composition (Barrow 1972:16). Corrosion is caused by acid catalysed hydrolysis and iron(II)-catalysed oxidation of cellulose (Rouchon-Quillet 2004:389).

Treatment is a controversial subject. No treatment undoes damage already caused by acidic ink. Deterioration can only be stopped or slowed. Some[who?] think it best not to treat the item at all for fear of the consequences. Others believe that non-aqueous procedures are the best solution. Yet others think an aqueous procedure may preserve items written with iron gall ink. Aqueous treatments include distilled water at different temperatures, calcium hydroxide, calcium bicarbonate, magnesium carbonate, magnesium bicarbonate, and calcium phytate. There are many possible side effects from these treatments. There can be mechanical damage, which further weakens the paper. Paper color or ink color may change, and ink may bleed. Other consequences of aqueous treatment are a change of ink texture or formation of plaque on the surface of the ink (Reibland & de Groot 1999). Iron gall inks require storage in a stable environment, because fluctuating relative humidity increases the rate at which formic acid, acetic acid, and furan derivatives form in the material the ink was used on. Sulfuric acid acts as a catalyst to cellulose hydrolysis, and iron(II) sulfate acts as a catalyst to cellulose oxidation. These chemical reactions physically weaken the paper, causing brittleness.[19]

Indelible ink

A voter's thumb stained with indelible ink (election ink).

Indelible means "un-removable". Some types of indelible ink have a very short shelf life because of the quickly evaporating solvents used. India, Mexico, Indonesia, and other developing countries have used indelible ink in the form of electoral stain to prevent electoral fraud. The Indian scientist Dr. M. L. Goel is the founding father of indelible ink in India and gave the secret formula to the National Physical Laboratory (NPL) of India. The Election Commission of India has used indelible ink for many elections. Indonesia used it in its last election in Aceh. In Mali, the ink is applied to the fingernail. Indelible ink itself is not infallible, as it can be used to commit electoral fraud by marking opponent party members before they have a chance to cast their votes. There are also reports of "indelible" ink washing off voters' fingers.

Quality for printing processes

Offset printing is a commonly used technique in which the inked image is transferred (or “offset”) from a plate to a rubber blanket, then to the printing surface. When used in combination with the lithographic process, which is based on the repulsion of oil and water, the offset technique employs a flat (planographic) image carrier on which the image to be printed obtains ink from ink rollers, while the non-printing area attracts a water-based film (called “fountain solution”), keeping the non-printing areas ink-free. The modern “web” process feeds a large reel of paper through a large press machine in several parts, typically for several metres, which then prints continuously as the paper is fed through.

Development of the offset press came in two versions: in 1875 by Robert Barclay of England for printing on tin, and in 1904 by Ira Washington Rubel of the United States for printing on paper.

History

Lithography was initially created to be an inexpensive method of reproducing artwork.[2][3] This printing process was limited to use on flat, porous surfaces because the printing plates were produced from limestone.[2] In fact, the word “lithograph” historically means “an image from stone” or “printed from stone”. Tin cans were popular packaging materials in the 19th century, but transfer technologies were required before the lithographic process could be used to print on the tin.[2]

The first rotary offset lithographic printing press was created in England and patented in 1875 by Robert Barclay.[2] This development combined mid-19th century transfer printing technologies and Richard March Hoe’s 1843 rotary printing press—a press that used a metal cylinder instead of a flat stone.[2] The offset cylinder was covered with specially treated cardboard that transferred the printed image from the stone to the surface of the metal. Later, the cardboard covering of the offset cylinder was changed to rubber,[2] which is still the most commonly used material.

As the 19th century closed and photography became popular, many lithographic firms went out of business.[2] Photoengraving, a process that used halftone technology instead of illustration, became the primary aesthetic of the era. Many printers, including Ira Washington Rubel of New Jersey, were using the low-cost lithograph process to produce copies of photographs and books.[4] Rubel discovered in 1901—by forgetting to load a sheet—that when printing from the rubber roller, instead of the metal, the printed page was clearer and sharper.[4] After further refinement, the Potter Press printing Company in New York produced a press in 1903.[4] By 1907 the Rubel offset press was in use in San Francisco.[5]

The Harris Automatic Press Company also created a similar press around the same time. Charles and Albert Harris modeled their press “on a rotary letter press machine”.[6]

Modern offset printing

One of the most important functions in the printing process is prepress production. This stage makes sure that all files are correctly processed in preparation for printing. This includes converting to the proper CMYK color model, finalizing the files, and creating plates for each color of the job to be run on the press.
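
As a small illustration of the CMYK conversion step mentioned above, the sketch below uses the common textbook RGB-to-CMYK formula. It is an assumption for illustration only: real prepress workflows rely on ICC color profiles and device-specific separation tables, not this simple arithmetic.

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB (0-255) to CMYK (0-1) conversion.

    Illustrative only; production prepress uses ICC color profiles
    and press-specific separation curves instead of this formula.
    """
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0           # pure black: key ink only
    r_, g_, b_ = r / 255, g / 255, b / 255
    k = 1 - max(r_, g_, b_)                 # black (key) component
    c = (1 - r_ - k) / (1 - k)              # cyan
    m = (1 - g_ - k) / (1 - k)              # magenta
    y = (1 - b_ - k) / (1 - k)              # yellow
    return c, m, y, k

# Example: an orange pixel separates into mostly magenta and yellow
print(rgb_to_cmyk(230, 120, 30))   # -> approx. (0.0, 0.48, 0.87, 0.10)
```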

Offset lithography is one of the most common ways of creating printed materials. A few of its common applications include: newspapers, magazines, brochures, stationery, and books. Compared to other printing methods, offset printing is best suited for economically producing large volumes of high quality prints in a manner that requires little maintenance.[7] Many modern offset presses use computer-to-plate systems as opposed to the older computer-to-film workflows, which further increases their quality.

Advantages of offset printing compared to other printing methods include:

• consistent high image quality. Offset printing produces sharp and clean images and type more easily than, for example, letterpress printing; this is because the rubber blanket conforms to the texture of the printing surface;
• quick and easy production of printing plates;
• longer printing plate life than on direct litho presses because there is no direct contact between the plate and the printing surface. Properly developed plates used with optimized inks and fountain solution may achieve run lengths of more than a million impressions;
• cost. Offset printing is the cheapest method for producing high quality prints in commercial printing quantities;
• ability to adjust the amount of ink on the fountain roller with screw keys. Most commonly, a metal blade controls the amount of ink transferred from the ink trough to the fountain roller. By adjusting the screws, the operator alters the gap between the blade and the fountain roller, increasing or decreasing the amount of ink applied to the roller in certain areas. This consequently modifies the density of the colour in the respective area of the image. On older machines one adjusts the screws manually, but on modern machines the screw keys are operated electronically by the printer controlling the machine, enabling a much more precise result.[8]

Disadvantages of offset printing compared to other printing methods include:

• slightly inferior image quality compared to rotogravure or photogravure printing;
• propensity for anodized aluminum printing plates to become sensitive (due to chemical oxidation) and print in non-image background areas when developed plates are not cared for properly;
• time and cost associated with producing plates and printing press setup. As a result, very small quantity printing jobs may now use digital offset machines.

Every printing technology has its own identifying marks, as does offset printing. In text reproduction, the type edges are sharp and have clear outlines. The paper surrounding the ink dots is usually unprinted. The halftone dots are always hexagonal, though there are different screening methods.[9]

Offset printing process

The most common kind of offset printing is derived from the photo offset process, which involves using light-sensitive chemicals and photographic techniques to transfer images and type from original materials to printing plates. In current use, original materials may be an actual photographic print and typeset text. However, it is more common (with the prevalence of computers and digital images) that the source material exists only as data in a digital publishing system.

Offset printing process consists of several parts:

• the inking system (ink fountain and ink rollers);
• the dampening system (water fountain and water rollers);
• the plate cylinder;
• the offset cylinder (or blanket cylinder);
• the impression cylinder.

In this process, ink is transferred from the ink fountain to the paper in several steps:

1. The inking and dampening systems deliver ink and water onto the offset plate covering the plate cylinder.
2. The plate cylinder transfers the ink onto the blanket covering the offset cylinder.
3. The paper is then pressed against the offset cylinder by the impression cylinder, transferring the ink onto the paper to form the printed image.
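
The three transfer steps above can be pictured as successive ink-film splits: at each nip only part of the film carries over to the next surface. The sketch below is a toy model of that chain; the 3 µm starting film and the 50% split at each nip are assumed figures for illustration, not values given in this text.

```python
def ink_delivered(film_on_plate_um, split_ratios):
    """Follow an ink film through successive transfer nips.

    film_on_plate_um : ink film thickness on the plate (micrometres)
    split_ratios     : fraction of the film transferred at each nip,
                       e.g. plate->blanket, then blanket->paper
    """
    film = film_on_plate_um
    for ratio in split_ratios:
        film *= ratio            # only part of the film carries over at each nip
    return film

# Assumed example: 3 um film on the plate, ~50% transfer at each of the two nips
print(ink_delivered(3.0, [0.5, 0.5]))   # -> 0.75 um reaching the paper
```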

Inking system

The goal of any inking system is to place a uniform layer of ink across every dimension of the printing plate. The lithographic process is unique in that it requires the ink form rollers to pass in contact with the nonimage areas of the plate without transferring ink to them.

Inking systems are made up of several elements:

• the ink fountain;
• the ink fountain roller (or ink feed roller);
• the ink ductor roller;
• the ink distribution rollers;
• the ink form rollers.

The ink fountain stores a quantity of ink in a reservoir and feeds small quantities of ink to the distribution rollers from the ink fountain roller and the ink ductor roller. The ink ductor roller is a movable roller that moves back and forth between the ink fountain roller and an ink distribution roller. As the ductor contacts the ink fountain roller, both turn and the ductor is inked. The ductor then swings forward to contact an ink distribution roller and transfers ink to it. There are generally two types of ink distribution rollers: the ink rotating rollers (or ink transfer rollers), which rotate in one direction, and the ink oscillating rollers (or ink vibrating rollers), which rotate and move from side to side. The ink distribution rollers receive ink and work it into a semiliquid state that is uniformly delivered to the ink form rollers. A thin layer of ink is then transferred to the image portions of the lithographic plate by the ink form rollers.

The ink fountain holds a pool of ink and controls the amount of ink that enters the inking system. The most common type of fountain consists of a metal blade that is held in place near the fountain roller. The gap between the blade and the ink fountain roller can be controlled by adjusting screw keys to vary the amount of ink on the fountain roller. The printer adjusts the keys in or out as the ink fountain roller turns to obtain the desired quantity of ink. In simple presses, the printer must turn these screws by hand. In modern presses, the adjusting screws are moved by servomotors which are controlled by the printer at a press console. Thus the printer can make ink adjustments electronically. If the printer needs to increase or decrease ink in an area of the plate (print), he need only adjust the needed keys to allow more or less ink flow through the blade. The ink flow can also be controlled by the rotation velocity of the ink fountain roller.
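
The effect of the screw keys described above can be approximated with simple proportional arithmetic: the ink fed into a zone grows with the key gap and with the fountain-roller surface speed. The linear model and all numbers below are assumptions for illustration, not manufacturer data.

```python
def zone_feed_rate(gap_mm, zone_width_mm, roller_speed_mm_s):
    """Rough ink feed for one key zone: volume swept under the blade per second.

    Assumes the ink film leaving the fountain is as thick as the key gap,
    a simplification used only for illustration.
    """
    return gap_mm * zone_width_mm * roller_speed_mm_s   # mm^3 per second

# Opening one key from 0.10 mm to 0.15 mm (30 mm zone, 50 mm/s roller surface speed)
before = zone_feed_rate(0.10, 30, 50)    # 150 mm^3/s
after = zone_feed_rate(0.15, 30, 50)     # 225 mm^3/s
print(before, after, after / before)     # -> 1.5x more ink in that zone
```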

A simple indication of the quality of a printing press is the number of distribution and form rollers. The greater the number of distribution rollers, the more accurate the control of ink uniformity. It is difficult to ink large solid areas on a plate with only one ink form roller. With three (generally the maximum), it is relatively easy to maintain consistent ink coverage of almost any image area on the plate. Business forms presses, which print very little coverage, usually have only one or two ink form rollers. Because of this, they cannot print large solid or screen images. Smaller, less sophisticated presses have the same problem; however, many newer presses are being equipped with larger, better inking systems to meet the growing print demands of the consumer.

Dampening system

Most lithographic plates function on the principle of water and ink receptive areas. In order for ink to adhere only to the image areas on the plate, a layer of moisture must be placed over the nonimage areas. The dampening system accomplishes this by moistening the plate consistently throughout the press run.

Dampening systems are made up of several elements:

• the water fountain;
• the water fountain roller (or water feed roller);
• the water ductor roller in intermittent-flow dampening systems and the water slip roller in continuous-flow dampening systems;
• the water distribution rollers;
• the water form rollers.

Direct dampening systems employ a water fountain roller which picks up the water from the water fountain. The water is then passed to a water distribution roller. From here the water is transferred to the offset plate via one or two water form rollers.

Indirect dampening systems (or integrated dampening systems) feed the water directly into one of the ink form rollers (ink rollers that touch the offset plate) via a water form roller in contact with it. These systems are known as “indirect” since the water travels to the offset plate by passing through the inking system rather than going directly to the offset plate as direct systems do. Some indirect systems can feed the water into the inking system as well as to the offset plate. A fine emulsion of ink and water is then developed on the ink form roller. This is one reason printers need to know about “water pickup”, or what percentage of water can be taken up by the ink. These systems are also known as “integrated” dampening systems because they are integrated into the inking system. One of the benefits of these systems is that they do not use covers, so they react more quickly when dampening changes are made. This type of dampening system is generally found on newer and faster press equipment today.

Intermittent-flow dampening systems (direct or indirect) use a water ductor roller to pick up the water and transfer it to a water distribution roller. A drawback of these systems is the slow reaction time in making adjustments due to the back and forth action of the ductor.

Continuous-flow dampening systems (direct or indirect) are used by most newer presses today because they do not have the slow reaction time of intermittent-flow dampening systems. They do not employ the water ductor roller but use the water slip roller (a roller in contact with both the water fountain roller and a distribution roller, unlike the water ductor roller, which moves back and forth between the two) for a continuous flow. The speed of the water slip roller controls the supply. The use of alcohol on these types of dampeners was standard for years. Alcohol (isopropyl alcohol) was used because it increased the water viscosity and made it “more wettable”, so that transfer was easier from one roller to the other. However, alcohol substitutes such as glycol ethers and butyl cellosolve are now used to accomplish the same task, because alcohol is a volatile organic compound. Roller hardness is also being changed to help accomplish the same job: easy transfer of the water.

Variations

Several variations of the printing process exist:

• blanket-to-blanket, a printing method in which there are two blanket cylinders through which a sheet of paper is passed and printed on both sides.[11] Blanket-to-blanket presses are considered a perfecting press because they print on both sides of the sheet at the same time. Since the blanket-to-blanket press has two blanket cylinders, making it possible to print on both sides of a sheet, there is no impression cylinder. The opposite blanket cylinders act as an impression cylinder to each other when print production occurs. This method is most utilized on offset presses designed for envelope printing. There are also two plate cylinders on the press;
• blanket-to-steel, a printing method similar to a sheet offset press, except that the plate and cylinder pressures are quite precise. Actual squeeze between plate and blanket cylinder is optimal at 0.005″, as is the squeeze or pressure between the blanket cylinder and the substrate.[12] Blanket-to-steel presses are considered one-color presses. In order to print the reverse side, the web is turned over between printing units by means of turning bars.[12] The method can be used to print business forms, computer letters and direct mail advertising;
• variable-size printing, a printing process that uses removable printing units, inserts, or cassettes for one-sided and blanket-to-blanket two-sided printing;[12]
• keyless offset, a printing process that is based on the concept of using fresh ink for each revolution by removing residual inks on the inking drum after each revolution.[12] It is suitable for printing newspapers;
• dry offset printing, a printing process which uses a metal-backed photopolymer relief plate, similar to a letterpress plate, but, unlike letterpress printing where the ink is transferred directly from the plate to the substrate, in dry offset printing the ink is transferred to a rubber blanket before being transferred to the substrate. This method is used for printing on injection moulded rigid plastic buckets, tubs, cups and flowerpots.

Plates

Materials

The plates used in offset printing are thin, flexible, and usually larger than the paper size to be printed. Two main materials are used:

• metal plates, usually aluminum, although sometimes they are made of multimetal, paper, or plastic;[13]
• polyester plates, which are much cheaper and can be used in place of aluminum plates for smaller formats or medium-quality jobs, as their dimensional stability is lower.[13]

Computer-to-plate

Main article: Computer-to-plate

Computer-to-plate (CTP) is a newer technology that replaced computer-to-film (CTF) technology and allows the imaging of metal or polyester plates without the use of film. By eliminating the stripping, compositing, and traditional plate-making processes, CTP altered the printing industry, leading to reduced prepress times, lower labor costs, and improved print quality.

Most CTP systems use thermal or violet laser technologies. Both technologies have the same characteristics in terms of quality and plate durability (longer runs). However, violet CTP systems are often cheaper than thermal ones, and thermal CTP systems do not need to be operated under yellow light.

Thermal CTP involves the use of thermal lasers to expose and/or remove areas of coating while the plate is being imaged. This depends on whether the plate is negative- or positive-working. These lasers generally operate at a wavelength of 830 nm, but vary in their energy usage depending on whether they are used to expose or ablate material. Violet CTP lasers have a much lower wavelength, 405 nm–410 nm. Violet CTP is “based on emulsion tuned to visible light exposure”.[14]
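
As a side note, the difference between the two laser types can be made concrete with the photon-energy relation E = hc/λ: a violet photon at about 405 nm carries roughly twice the energy of an infrared photon at 830 nm, which helps explain why violet systems can work with light-sensitive emulsions while thermal systems rely on bulk heating of the coating. The snippet below is only this arithmetic, not a description of any particular platesetter.

```python
# Photon energy E = h*c / wavelength, expressed in electronvolts.
H = 6.626e-34    # Planck constant, J*s
C = 3.0e8        # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    return H * C / (wavelength_nm * 1e-9) / EV

print(photon_energy_ev(830))   # thermal CTP laser: ~1.5 eV per photon
print(photon_energy_ev(405))   # violet CTP laser:  ~3.1 eV per photon
```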

Another process is the computer-to-conventional-plate (CTCP) system, in which conventional offset plates can be exposed, making it an economical option.

Sheet-fed offset

Sheet-fed offset press

Sheet-fed refers to individual sheets of paper being fed into a press via a suction bar that lifts and drops each sheet into place. A lithographic (“litho” for short) press uses principles of lithography to apply ink to a printing plate, as explained previously. Sheet-fed litho is commonly used for printing short-run magazines, brochures, letterheads, and general commercial (jobbing) printing. In sheet-fed offset, “the printing is carried out on single sheets of paper as they are fed to the press one at a time”. Sheet-fed presses use mechanical registration to relate each sheet to one another to ensure that they are reproduced with the same imagery in the same position on every sheet running through the press.[15]

Perfecting press

A perfecting press, also known as a duplex press, is one that can print on both sides of the paper at the same time.[16] Web and sheet-fed offset presses are similar in that many of them can also print on both sides of the paper in one pass, making it easier and faster to print duplex.

Offset duplicators

Offset duplicators are small offset lithographic presses that are used for fast, good-quality reproduction of one-color and two-color copies in sizes up to 12″ by 18″.[12] Popular models were made by A. B. Dick Company, Multilith, and the Chief and Davidson lines made by A.T.F.-Davidson. Offset duplicators are made for fast and quick printing jobs, printing up to 12,000 impressions per hour. They are able to print business forms, letterheads, labels, bulletins, envelopes, folders, reports, and sales literature.

Feeder system

The feeder system is responsible for making sure paper runs through the press correctly. This is where the substrate is loaded and where the system is set up to the specifications of the particular substrate being run on the press.[17]

Printing–inking system

The Printing Unit consists of many different systems. The dampening system is used to apply dampening solution to the plates with water rollers. The inking system uses rollers to deliver ink to the plate and blanket cylinders to be transferred to the substrate. The plate cylinder is where the plates containing all of the imaging are mounted. Finally the blanket and impression cylinders are used to transfer the image to the substrate running through the press.[18]

Delivery system

The delivery system is the final destination in the printing process while the paper runs through the press. Once the paper reaches delivery, it is stacked for the ink to cure in a proper manner. This is the step in which sheets are inspected to make sure they have proper ink density and registration.

Slur

The production of a double image in printing is known as slur.[19]

Web-fed offset

Web-fed refers to the use of rolls (or “webs”) of paper supplied to the printing press.[20] Offset web printing is generally used for runs in excess of five or ten thousand impressions. Typical examples of web printing include newspapers, newspaper inserts or ads, magazines, direct mail, catalogs, and books. Web-fed presses are divided into two general classes: coldset (or non-heatset) and heatset offset web presses; the difference being how the inks that are used dry. Cold web offset printing dries through absorption into the paper, while heatset utilizes drying lamps or heaters to cure or “set” the inks. Heatset presses can print on both coated (slick) and uncoated papers, while coldset presses are restricted to uncoated paper stock, such as newsprint. Some coldset web presses can be fitted with heat dryers, or ultraviolet lamps (for use with UV-curing inks). This can enable a newspaper press to print color pages heatset and black & white pages coldset.

Web offset presses are beneficial in long run printing jobs, typically press runs that exceed ten or twenty thousand impressions. Speed is a determining factor when considering the completion time for press production; some web presses print at speeds of 3,000 feet per minute or faster. In addition to the benefits of speed and quick completion, some web presses have the inline ability to cut, perforate, and fold.

Heatset web offset

This subset of web offset printing uses inks which dry by evaporation in a dryer typically positioned just after the printing units. This is typically done on coated papers, where the ink stays largely on the surface, and gives a glossy high contrast print image after the drying. As the paper leaves the dryer too hot for the folding and cutting that are typically downstream procedures, a set of “chill rolls” positioned after the dryer lowers the paper temperature and sets the ink. The speed at which the ink dries is a function of dryer temperature and length of time the paper is exposed to this temperature. This type of printing is typically used for magazines, catalogs, inserts and other medium-to-high volume, medium-to-high quality production runs.

Coldset web offset

This is also a subset of web offset printing, typically used for lower quality print output. It is typical of newspaper production. In this process, the ink dries by absorption into the underlying paper. A typical coldset configuration is often a series of vertically arranged print units and peripherals. As newspapers seek new markets, which often imply higher quality (more gloss, more contrast), they may add a heatset tower (with a dryer) or use UV (ultraviolet) based inks which “cure” on the surface by polymerisation rather than by evaporation or absorption.

Sheet-fed vs. web-fed

Sheet-fed presses offer several advantages. Because individual sheets are fed through, a large number of sheet sizes and format sizes can be run through the same press. In addition, waste sheets can be used for make-ready (the testing process that ensures a quality print run). This allows for lower-cost preparation, so that good paper is not wasted while setting up the press for plates and inks. Waste sheets do bring some disadvantages, as there are often dust and offset powder particles that transfer onto the blankets and plate cylinders, creating imperfections on the printed sheet. This method produces the highest quality images.

Web-fed presses, on the other hand, are much faster than sheet-fed presses, with speeds up to 80,000 cut-offs per hour (a cut-off is the paper that has been cut off a reel or web on the press; the length of each sheet is equal to the cylinder's circumference). The speed of web-fed presses makes them ideal for large runs such as newspapers, magazines, and comic books. However, web-fed presses have a fixed cut-off, unlike rotogravure or flexographic presses, which are variable. Inks
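
The throughput figures quoted above can be cross-checked with simple arithmetic: cut-offs per hour equal the web speed divided by the cut-off length. The 27-inch cut-off below is an assumed example value (roughly a typical cylinder circumference), not a figure taken from this text.

```python
def cutoffs_per_hour(web_speed_ft_per_min, cutoff_in):
    """Cut-offs produced per hour for a given web speed and cut-off length."""
    inches_per_hour = web_speed_ft_per_min * 12 * 60
    return inches_per_hour / cutoff_in

# Assumed example: a 3,000 ft/min web with a 27-inch cut-off
print(cutoffs_per_hour(3000, 27))   # -> 80,000.0 cut-offs per hour
```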

Offset printing uses inks that, compared to other printing methods, are highly viscous. Typical inks have a dynamic viscosity of 40–100 Pa·s.[21]

There are many types of paste inks available for use in offset lithographic printing, and each has its own advantages and disadvantages. These include heat-set, cold-set, and energy-curable (or EC) inks, such as ultraviolet- (or UV-) curable and electron beam- (or EB-) curable inks. Heat-set inks are the most common variety and are “set” by applying heat and then rapid cooling to catalyze the curing process. They are used in magazines, catalogs, and inserts. Cold-set inks are set simply by absorption into non-coated stocks and are generally used for newspapers and books but are also found in insert printing and are the most economical option. Energy-curable inks are the highest-quality offset litho inks and are set by application of light energy. They require specialized equipment such as inter-station curing lamps, and are usually the most expensive type of offset litho ink.

• Letterset inks are mainly used with offset presses that do not have dampening systems and use imaging plates that have a raised image.[22]
• Waterless inks are heat-resistant and are used to keep silicone-based plates from showing toning in non-image areas. These inks are typically used on waterless direct imaging presses.[22]
• Single fluid inks are a newer type of ink that allows printing from lithographic plates on a lithographic press without using a dampening system during the process.[22]

Ink–water balance

Ink and water balance is an extremely important part of offset printing. If ink and water are not properly balanced, the press operator may end up with many different problems affecting the quality of the finished product, such as emulsification (the water overpowering and mixing with the ink). This leads to scumming, catch-up, trapping problems, ink density issues and, in extreme cases, the ink not properly drying on the paper, resulting in the job being unfit for delivery to the client. With the proper balance, the job will have the correct ink density and should need only minor further adjustment. An example would be when the press heats up during normal operation, evaporating water at a faster rate; in this case the machinist will gradually increase the water as the press heats up to compensate for the increased evaporation. Printing machinists generally try to use as little water as possible to avoid these problems.
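
The compensation described above, where the machinist gradually adds water as the press warms up, can be pictured as a simple proportional adjustment. The linear model, the 2%-per-degree gain, and the temperatures below are all assumptions purely for illustration; on a real press the correction is made by the operator's judgment or by the dampening controls.

```python
def water_feed_setting(base_feed, base_temp_c, current_temp_c, gain=0.02):
    """Toy model: raise the water feed in step with press warming.

    base_feed : water feed setting at the reference temperature (arbitrary units)
    gain      : assumed fractional increase in feed per degree C of warming
    """
    warming = max(0.0, current_temp_c - base_temp_c)
    return base_feed * (1.0 + gain * warming)

# Assumed numbers: press set up at 25 C, now running at 33 C
print(water_feed_setting(100, 25, 33))   # -> 116.0 (16% more water fed)
```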

Fountain solution

Fountain solution is the water-based (or “aqueous”) component in the lithographic process that moistens the non-image area of the plate in order to keep ink from depositing (and thus printing). Historically, fountain solutions were acid-based and made with gum arabic, chromates and/or phosphates, and magnesium nitrate. Alcohol is added to the water to lower the surface tension and help cool the press a bit so the ink stays stable and can set and dry fast. While the acid fountain solution has improved in the last several decades, neutral and alkaline fountain solutions have also been developed. Both of these chemistries rely heavily on surfactants (emulsifiers) and phosphates and/or silicates to provide adequate cleaning and desensitizing, respectively. Since about 2000, alkaline-based fountain solutions have become less common due to the inherent health hazards of high pH and the objectionable odor of the necessary microbiological additives.

Acid-based fountain solutions are still the most common variety and yield the best quality results by means of superior protection of the printing plate, lower dot gains, and longer plate life. Acids are also the most versatile, capable of running with all types of offset litho inks. However, because these products require more active ingredients to run well than do neutrals and alkalines, they are also the most expensive to produce. Nevertheless, neutrals and, to a lesser degree, alkalines are still an industry staple and will continue to be used for most newspapers and many lower-quality inserts. In recent years alternatives have been developed which do not use fountain solutions at all (waterless printing).

In industry

Offset lithography became the most popular form of commercial printing from the 1950s (“offset printing”). The substantial investment in the larger presses required for offset lithography affected the shape of the printing industry, leading to fewer, larger printers. The change made greatly increased use of colour printing possible, as this had previously been much more expensive. Subsequent improvements in plates, inks, and paper have further refined the technology, improving its production speed and plate durability. Today, lithography is the primary printing technology used in the U.S., most often as offset lithography.

Today, offset lithography is “responsible for over half of all printing using printing plates”.[16] The consistent high quality of the prints and the volume of prints created for their respective cost makes commercial offset lithography very efficient for businesses, especially when many prints must be created.

Odor-free offset printing is the newest technology.