Large Numbers from Wikipedia, the Free Encyclopedia
This article is about large numbers in the sense of numbers that are significantly larger than those ordinarily used in everyday life, for instance in simple counting or in monetary transactions. The term typically refers to large positive integers, or more generally, large positive real numbers, but it may also be used in other contexts. Very large numbers often occur in fields such as mathematics, cosmology, cryptography, and statistical mechanics. Sometimes people refer to numbers as being "astronomically large". However, it is easy to mathematically define numbers that are much larger even than those used in astronomy.

Contents
1 Using scientific notation to handle large and small numbers
2 Large numbers in the everyday world
3 Astronomically large numbers
4 Computers and computational complexity
5 Examples
6 Systematically creating ever faster increasing sequences
7 Standardized system of writing very large numbers
7.1 Examples of numbers in numerical order
8 Comparison of base values
9 Accuracy
9.1 Accuracy for very large numbers
9.2 Approximate arithmetic for very large numbers
10 Large numbers in some noncomputable sequences
11 Infinite numbers
12 Notations
13 See also
14 Notes and references

Using scientific notation to handle large and small numbers

See also: scientific notation, logarithmic scale and orders of magnitude

Scientific notation was created to handle the wide range of values that occur in scientific study. 1.0 × 10^9, for example, means one billion, a 1 followed by nine zeros: 1 000 000 000, and 1.0 × 10^−9 means one billionth, or 0.000 000 001. Writing 10^9 instead of nine zeros saves readers the effort and hazard of counting a long series of zeros to see how large the number is.

Large numbers in the everyday world

Examples of large numbers describing everyday real-world objects are:
- The number of bits on a computer hard disk (as of 2010, typically about 10^13, for 500-1000 GB)
- The estimated number of atoms in the observable Universe (10^80)
- The number of cells in the human body (more than 10^14)
- The number of Jews killed during the Holocaust (estimated at 6 million)
- The number of neuronal connections in the human brain (estimated at 10^14)
- The lower bound on the game-tree complexity of chess, also known as the "Shannon number" (estimated at around 10^43)
- The Avogadro constant, the number of "elementary entities" (usually atoms or molecules) in one mole, i.e. the number of atoms in 12 grams of carbon-12 (approximately 6.022 × 10^23)

Astronomically large numbers

Other large numbers, as regards length and time, are found in astronomy and cosmology. For example, the current Big Bang model suggests that the Universe is 13.8 billion years (4.355 × 10^17 seconds) old, and that the observable universe is 93 billion light years across (8.8 × 10^26 metres), and contains about 5 × 10^22 stars, organized into around 125 billion (1.25 × 10^11) galaxies, according to Hubble Space Telescope observations. There are about 10^80 atoms in the observable universe, by rough estimation.[1]
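As a quick illustration of how scientific notation keeps such figures manageable, the short Python sketch below reproduces the two conversions quoted above. The input figures are the article's estimates; the Julian-year and light-year conversion constants are the standard ones.

```python
# Scientific-notation sketch of the cosmological figures quoted above.
# Inputs are the article's estimates; conversion constants are standard.

SECONDS_PER_YEAR = 365.25 * 24 * 3600      # Julian year, about 3.156e7 s
METRES_PER_LIGHT_YEAR = 9.4607e15          # about 9.46e15 m

age_years = 13.8e9                         # age of the Universe, in years
diameter_light_years = 93e9                # diameter of the observable universe

print(f"{age_years * SECONDS_PER_YEAR:.3e} s")                   # ~4.355e+17
print(f"{diameter_light_years * METRES_PER_LIGHT_YEAR:.1e} m")   # ~8.8e+26
```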
According to Don Page, physicist at the University of Alberta, Canada, the longest finite time that has so far been explicitly calculated by any physicist is about 10^(10^(10^(10^(10^1.1)))) years, which corresponds to the scale of an estimated Poincaré recurrence time for the quantum state of a hypothetical box containing a black hole with the estimated mass of the entire universe, observable or not, assuming a certain inflationary model with an inflaton whose mass is 10^−6 Planck masses.[2][3] This time assumes a statistical model subject to Poincaré recurrence. A much simplified way of thinking about this time is in a model where our universe's history repeats itself arbitrarily many times due to properties of statistical mechanics; this is the time scale when it will first be somewhat similar (for a reasonable choice of "similar") to its current state again.

Combinatorial processes rapidly generate even larger numbers. The factorial function, which defines the number of permutations of a set of fixed objects, grows very rapidly with the number of objects. Stirling's formula, n! ≈ √(2πn) (n/e)^n, gives a precise asymptotic expression for this rate of growth. Combinatorial processes generate very large numbers in statistical mechanics. These numbers are so large that they are typically only referred to using their logarithms.

Gödel numbers, and similar numbers used to represent bit-strings in algorithmic information theory, are very large, even for mathematical statements of reasonable length. However, some pathological numbers are even larger than the Gödel numbers of typical mathematical propositions. Logician Harvey Friedman has done work related to very large numbers, such as with Kruskal's tree theorem and the Robertson–Seymour theorem.

Computers and computational complexity

Between 1980 and 2000, personal computer hard disk sizes increased from about 10 megabytes (10^7 bytes) to over 100 gigabytes (10^11 bytes).[4] A 100 gigabyte disk could store the favorite color of all of Earth's seven billion inhabitants without using data compression (storing 14 bytes for each of 7 billion inhabitants would use 98 GB). But what about a dictionary-on-disk storing all possible passwords containing up to 40 characters? Assuming each character equals one byte, there are about 2^320 such passwords, which is about 2 × 10^96. In his paper Computational capacity of the universe,[5] Seth Lloyd points out that if every particle in the universe could be used as part of a huge computer, it could store only about 10^90 bits, less than one millionth of the size such a dictionary would require.

However, storing information on a hard disk and computing with it are very different functions. On the one hand storage currently has the limitations just stated, but computational speed is a different matter. It is quite conceivable that the stated limitations regarding storage have no bearing on the limitations of actual computational capacity, especially if the current research into quantum computers results in a "quantum leap" (but see the holographic principle).

Still, computers can easily be programmed to start creating and displaying all possible 40-character passwords one at a time. Such a program could be left to run indefinitely. Assuming a modern PC could output 1 billion strings per second, it would take one billionth of 2 × 10^96 seconds, or 2 × 10^87 seconds, to complete its task, which is about 6 × 10^79 years. By contrast, the universe is estimated to be 13.8 billion (1.38 × 10^10) years old.
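The arithmetic behind these figures is easy to reproduce. The sketch below assumes, as the text does, 40 one-byte characters and a PC that prints one billion strings per second; Python's arbitrary-precision integers handle the exact count directly.

```python
# Password-dictionary arithmetic from the passage above.
# Assumptions (from the text): 40 characters of one byte (8 bits) each,
# and a PC that outputs one billion strings per second.

passwords = 2 ** (40 * 8)              # 2**320, about 2.1e96 possible strings
rate = 10 ** 9                         # strings displayed per second
seconds = passwords / rate             # about 2e87 seconds
years = seconds / (365.25 * 24 * 3600)

print(f"passwords: {passwords:.1e}")   # ~2.1e+96
print(f"time     : {years:.0e} years") # ~7e+79 years (the text rounds to ~6e79)

# Lloyd's bound of 1e90 bits is less than one millionth of this count:
print(f"ratio    : {10**90 / passwords:.0e}")   # ~5e-07
```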
Computers will presumably continue to get faster, but the same paper mentioned before estimates that the entire universe functioning as a giant computer could have performed no more than 10^120 operations since the Big Bang. This is trillions of times more computation than is required for displaying all 40-character passwords, but computing all 50-character passwords would outstrip the estimated computational potential of the entire universe. Problems like this grow exponentially in the number of computations they require, and they are one reason why exponentially difficult problems are called "intractable" in computer science: for even small numbers like the 40 or 50 characters described earlier, the number of computations required exceeds even theoretical limits on mankind's computing power. The traditional division between "easy" and "hard" problems is thus drawn between programs that do and do not require exponentially increasing resources to execute.

Such limits are an advantage in cryptography, since any cipher-breaking technique that requires more than, say, the 10^120 operations mentioned before will never be feasible. Such ciphers must be broken by finding efficient techniques unknown to the cipher's designer. Likewise, much of the research throughout all branches of computer science focuses on finding efficient solutions to problems that work with far fewer resources than are required by a naïve solution. For example, one way of finding the greatest common divisor between two 1000-digit numbers is to compute all their factors by trial division. This will take up to 2 × 10^500 division operations, far too many to contemplate. But the Euclidean algorithm, using a much more efficient technique, takes only a fraction of a second to compute the GCD for even huge numbers such as these (see the sketch at the end of this section).

As a general rule, then, PCs in 2005 can perform 2^40 calculations in a few minutes. A few thousand PCs working for a few years could solve a problem requiring 2^64 calculations, but no amount of traditional computing power will solve a problem requiring 2^128 operations (which is about what would be required to brute-force the encryption keys in 128-bit SSL commonly used in web browsers, assuming the underlying ciphers remain secure). Limits on computer storage are comparable. Quantum computing might allow certain problems that require an exponential amount of calculations to become feasible, but it has practical and theoretical challenges that may never be overcome, such as the mass production of qubits, the fundamental building block of quantum computing.

See also: computation, computational complexity theory, algorithmic information theory, computability theory and big O notation

Examples

See also: Examples of numbers, in numerical order

- 10^10 (10,000,000,000), called "ten billion" in the short scale or "ten milliard" in the long scale.
- Sexdecilliard = 10^99, otherwise known as a duotrigintillion.
- googol = 10^100
- centillion = 10^303 or 10^600, depending on the number naming system
- The largest known Mersenne prime
- googolplex = 10^(10^100)
- Skewes' numbers: the first is approximately 10^(10^(10^34)), the second 10^(10^(10^964))
- Graham's number, larger than what can be represented even using power towers. However, it can be represented using Knuth's up-arrow notation.
- The total amount of printed material in the world is roughly 1.6 × 10^18 bits; therefore the contents can be represented by a number somewhere in the range 0 to roughly 2^(1.6 × 10^18) ≈ 10^(4.8 × 10^17).

Compare: the first number is much larger than the second, due to the larger height of the power tower, and in spite of the small numbers 1.1.
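Knuth's up-arrow notation mentioned above has a very short recursive definition. The sketch below is workable only for tiny arguments (the values explode far too quickly for anything approaching Graham's number), but it shows how the notation builds power towers.

```python
# Minimal sketch of Knuth's up-arrow notation: a ↑^n b.
# n = 1 is ordinary exponentiation; each higher level iterates the one below.
# Only tiny arguments are feasible; Graham's number is hopelessly out of reach.

def up_arrow(a: int, n: int, b: int) -> int:
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))   # 3^3 = 27
print(up_arrow(3, 2, 3))   # 3↑↑3 = 3^(3^3) = 7625597484987, a tower of height 3
print(up_arrow(2, 3, 3))   # 2↑↑↑3 = 2↑↑4 = 65536
```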
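Finally, as noted in the discussion of computational complexity above, here is a minimal sketch of the Euclidean algorithm applied to two 1000-digit numbers. The random inputs are illustrative only; the loop finishes in a fraction of a second because the number of division steps grows only in proportion to the number of digits, rather than with the astronomically many candidate factors.

```python
# Euclidean algorithm GCD sketch for two 1000-digit numbers, as discussed
# in the computational-complexity section above. Random inputs are
# illustrative only.

import random

def gcd(a: int, b: int) -> int:
    # Repeatedly replace (a, b) with (b, a mod b) until the remainder is 0.
    while b:
        a, b = b, a % b
    return a

a = random.randrange(10**999, 10**1000)   # a random 1000-digit number
b = random.randrange(10**999, 10**1000)   # another random 1000-digit number
print(gcd(a, b))                          # completes in a fraction of a second
```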