
64-BIT TECHNOLOGY

64-bit
From Wikipedia, the free encyclopedia.

N-bit processors: 4-, 8-, 16-, 24-, 31-, 32-, 48-, 64-, 128-bit
N-bit applications: 16-, 31-, 32-, 64-bit
N-bit data sizes: 4-, 8-, 16-, 32-, 64-, 128-bit; nibble (4 bits), byte/octet (8 bits), word (16 bits), dword (32 bits), qword (64 bits)
(These definitions are relevant to the world of x86 processors; see linked articles for discussion of the meaning in other architectures. The 31-bit and 48-bit sizes relate to IBM mainframes and AS/400s, respectively.)

In computer architecture, 64-bit is an adjective used to describe integers, memory addresses or other data units that are at most 64 bits (8 octets) wide, or to describe CPU and ALU architectures based on registers, address buses, or data buses of that size.

As of 2004, 64-bit CPUs are common in servers, and have recently been introduced to the (previously 32-bit) mainstream personal computer arena in the form of the AMD64, EM64T, and PowerPC 970 (or "G5") processor architectures.

Although a CPU may be 64-bit internally, its external data bus or address bus may have a different size, either larger or smaller, and the term is often used to describe the size of these buses as well. For instance, many current machines with 32-bit processors use 64-bit buses, and may occasionally be referred to as "64-bit" for this reason. The term may also refer to the size of an instruction in the computer's instruction set or to any other item of data. Without further qualification, however, a computer architecture described as "64-bit" generally has integer registers that are 64 bits wide and thus directly supports dealing both internally and externally with 64-bit "chunks" of data.

Architectural implications

Registers in a processor are generally divided into three groups: integer, floating point, and other. In all common general-purpose processors, only the integer registers are capable of storing pointer values (that is, an address of some data in memory).
The non-integer registers cannot be used to store pointers for the purpose of reading or writing to memory, and therefore cannot be used to bypass any memory restrictions imposed by the size of the integer registers.

Nearly all common general-purpose processors (with the notable exception of the ARM and most 32-bit MIPS implementations) have integrated floating-point hardware, which may or may not use 64-bit registers to hold data for processing. For example, the AMD64 architecture defines an SSE unit which includes 16 128-bit-wide registers, and the traditional x87 floating-point unit defines 8 80-bit registers in a stack configuration. By contrast, the 64-bit Alpha family of processors defines 32 64-bit-wide floating-point registers in addition to its 32 64-bit-wide integer registers.

Memory limitations

Most CPUs are currently (c. 2005) designed so that the contents of a single integer register can store the address (location) of any datum in the computer's virtual memory. Therefore, the total number of addresses in the virtual memory (the total amount of data the computer can keep in its working area) is determined by the width of these registers.

Beginning in the 1960s with the IBM System/360, then (amongst many others) the DEC VAX minicomputer in the 1970s, and then with the Intel 80386 in the mid-1980s, a de facto consensus developed that 32 bits was a convenient register size. A 32-bit register meant that 2^32 addresses, or 4 gigabytes of RAM, could be referenced. At the time these architectures were devised, 4 gigabytes of memory was so far beyond the typical quantities available in installations that this was considered to be enough "headroom" for addressing. 4-gigabyte address spaces were considered an appropriate size to work with for another important reason: 4 billion integers are enough to assign unique references to most physically countable things in applications like databases.
However, with the march of time and the continual reductions in the cost of memory (see Moore's Law), by the early 1990s installations with quantities of RAM approaching 4 gigabytes began to appear, and the use of virtual memory spaces exceeding the 4-gigabyte ceiling became desirable for handling certain types of problems. In response, a number of companies began releasing new families of chips with 64-bit architectures, initially for supercomputers and high-end workstation and server machines. 64-bit computing has gradually drifted down to the personal computer desktop, with Apple Computer's Power Mac desktop line as of 2003 and its iMac home computer line as of 2004 both using 64-bit processors (the G5 chip from IBM), and AMD's "AMD64" architecture (cloned by Intel as "EM64T") becoming common in high-end PCs.

Timeline

• 1991: MIPS Technologies produced the first 64-bit CPU, the R4000, as the third revision of their MIPS RISC architecture. The CPU was commercially available in 1991 and used in SGI graphics workstations starting with the Indigo series, running the 64-bit version of the IRIX operating system.
• 1994: Intel announced plans for the 64-bit IA-64 architecture (jointly developed with HP) as a successor to its 32-bit IA-32 processors. A 1998-1999 launch date was targeted.
• 1995: Fujitsu-owned HAL Computer Systems launched workstations based on a 64-bit CPU, HAL's independently designed first-generation SPARC64. IBM released 64-bit AS/400 systems, with the upgrade able to convert the operating system, database, and applications.
• 1996: Sun and HP released their 64-bit processors, the UltraSPARC and the PA-8000. Sun Solaris, IRIX, and other variants of UNIX continued to be common 64-bit operating systems.
• 1999: Intel released the instruction set for the IA-64 architecture. First public disclosure of AMD's set of 64-bit extensions to IA-32, called x86-64.
• 2000: IBM shipped its first 64-bit mainframe, the zSeries z900, and its new z/OS operating system, culminating the largest 64-bit processor development investment in history and instantly wiping out 31-bit plug-compatible competitors Fujitsu/Amdahl and Hitachi. 64-bit Linux on zSeries followed almost immediately.
• 2001: Intel finally shipped its 64-bit processor line, now branded Itanium, targeting high-end servers. It failed to meet expectations, due in part to the repeated delays in getting IA-64 to market, and became a flop. Linux was the first operating system to run on the processor at its release.
• 2002: Intel introduced the Itanium 2 as a successor to the Itanium.
• 2003: AMD brought out its 64-bit Opteron and Athlon 64 processor lines. Apple also shipped 64-bit PowerPC chips courtesy of IBM and Motorola, along with an update to its Mac OS X operating system. Several Linux distributions were released with support for x86-64. Microsoft announced that it would create a version of its Windows operating system for the AMD chips. Intel maintained that its Itanium chips would remain its only 64-bit processors.
• 2004: Intel, reacting to the market success of AMD, admitted it had been developing a clone of the x86-64 extensions, which it called EM64T. Updated versions of its Xeon and Pentium 4 processor families supporting the new instructions were shipped.
• 2005: In March, Intel announced that its first dual-core processors would ship in the second quarter of 2005 with the release of the Pentium Extreme Edition 840 and the new Pentium D chips, with dual-core Itanium 2 processors to follow in the fourth quarter.
• 2005: On April 18, Beijing Longxin rolled out its first 64-bit CPU, named Longxin II (a MIPS-compatible design). The thumbnail-sized chip contains 13.5 million transistors and has a peak throughput of 2 billion single-precision operations per second, or 1 billion double-precision operations per second.
It runs at a maximum frequency of 500 MHz with a power consumption of 3 to 5 watts.
• 2005: On April 30, Microsoft publicly released Windows XP x64 Edition for x86-64 processors.
• 2005: In May, AMD pre-released its dual-core desktop processor family, the Athlon 64 X2. Athlon 64 X2 (Toledo) processors feature two cores with 1 MB of L2 cache per core, consist of about 233.2 million transistors, and measure 199 mm².
• 2005: In July, IBM announced its new dual-core 64-bit PowerPC 970MP (codenamed Antares).

32 vs 64 bit

A change from a 32-bit to a 64-bit architecture is a fundamental alteration, as most operating systems must be extensively modified to take advantage of the new architecture. Other software must also be ported to use the new capabilities; older software is usually supported either through a hardware compatibility mode (in which the new processors support the older 32-bit instruction set as well as the new modes), through software emulation, or through the actual implementation of a 32-bit processor core within the 64-bit processor die (as with the Itanium 2 processors from Intel). One significant exception to this is the AS/400, whose software runs on a virtual ISA, called TIMI (Technology Independent Machine Interface), implemented in low-level software. Only that low-level layer has to be rewritten to move the entire OS and all software to a new platform, as when IBM transitioned the AS/400 line from its earlier processors to 64-bit PowerPC. While 64-bit architectures indisputably make working with huge data sets in applications such as digital video, scientific computing, and large databases easier, there has been considerable debate as to whether they or their 32-bit compatibility modes will be faster than comparably priced 32-bit systems for other tasks.