Ivars Peterson's MathTrek
October 11, 2004
The known digits of such irrational numbers as pi, e, the square root of 2, and the square root of 3 appear patternless. According to one novel method of assessing the randomness of a sequence of numbers, however, the digits of pi turn out to be somewhat more irregular than the digits of the other three.
The measure used to determine the irregularity or degree of disorder (entropy) of these sequences is called the approximate entropy. Invented by Steve Pincus of Guilford, Conn., and developed in cooperation with Burton H. Singer of Princeton University, this measure characterizes the randomness of a sequence of numbers.
Suppose the data are expressed as a string of binary digits. The idea is to determine how often each of the eight possible blocks of three consecutive digits (000, 001, 010, 011, 100, 101, 110, and 111) comes up in a given string.
Given the first 280,000 binary digits of pi, the most frequently occurring block is 000, which appears 35,035 times, and the least common block is 111, which appears 34,944 times. The maximum possible irregularity occurs when all eight blocks appear equally often.
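To make the block-counting idea concrete, here is a minimal Python sketch. The 16-digit string below is just an illustration, not actual binary digits of pi, and the function name is mine:

```python
from collections import Counter

def block_counts(bits: str, m: int = 3) -> Counter:
    """Count overlapping m-digit blocks in a binary string."""
    return Counter(bits[i:i + m] for i in range(len(bits) - m + 1))

# Toy stand-in for a long binary expansion.
bits = "0110100110010110"
counts = block_counts(bits, 3)
for block in sorted(format(b, "03b") for b in range(8)):
    print(block, counts[block])
```

Applied to the first 280,000 binary digits of pi, this tally would reproduce the figures quoted above (35,035 occurrences of 000, 34,944 of 111, and so on).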
For the square root of 3, the block 000 occurs most often (35,374 times) and 010 least often (34,615 times). The greater divergence from exactly 35,000 occurrences means that the first 280,000 binary digits of root 3 are farther from maximum irregularity than those of pi.
The formula for approximate entropy developed by Pincus takes such data about a sequence of numbers, whatever its source, and assigns a single number to the sequence. Larger values correspond to greater apparent serial randomness or irregularity, and smaller values correspond to more instances of recognizable features in the data. Overall, approximate entropy grades a continuum that ranges from totally ordered to maximally irregular (or completely random).
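The article doesn't spell out the formula, but for a digit string with exact block matching, approximate entropy can be computed as a difference of block-frequency entropies, Phi(m) − Phi(m+1). The sketch below is my own simplified rendering for binary strings (function names are mine, and the tolerance parameter Pincus uses for real-valued data is omitted):

```python
import math
import random
from collections import Counter

def phi(bits: str, m: int) -> float:
    """Phi(m): sum of p*log(p) over the overlapping m-blocks of the string."""
    n = len(bits) - m + 1
    counts = Counter(bits[i:i + m] for i in range(n))
    return sum((c / n) * math.log(c / n) for c in counts.values())

def approx_entropy(bits: str, m: int = 2) -> float:
    """ApEn(m) = Phi(m) - Phi(m+1), using exact block matching."""
    return phi(bits, m) - phi(bits, m + 1)

# A perfectly periodic string scores near zero...
print(approx_entropy("01" * 100, 2))

# ...while a pseudorandom string scores near the maximum, log 2.
random.seed(1)
rand_bits = "".join(random.choice("01") for _ in range(200))
print(approx_entropy(rand_bits, 2))
```

Larger values mean the (m+1)-blocks are harder to predict from the m-blocks, which matches the article's reading: bigger numbers, greater apparent serial randomness.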
Putting the four irrationals in order, starting with the most irregular, gives pi, root 2, e, and root 3. That's a curious, unexpected result. Irrational numbers such as root 2 and root 3 are known as algebraic numbers because each is a root of a polynomial equation with integer coefficients. Others, such as pi and e, are known as nonalgebraic, or transcendental, numbers. Mathematicians had regarded algebraic numbers as, in some sense, simpler than transcendental numbers. But, according to approximate entropy, this distinction doesn't show up in the irregularity of the digits. Whether such quirks in the irregularity of irrationals have any implications for number theory remains an open question for mathematicians.
Because the approximate entropy method does not depend on any assumptions about the process involved in generating a sequence of numbers, it can be applied to biological, medical, or financial data and to physical measurements, such as the number of alpha particles emitted by a radioactive element in specified time intervals, as readily as to the digits of irrational numbers.
For example, Pincus has looked at stock market performance, as measured by Standard and Poor's index of 500 stocks. His calculations show that fluctuations in the index's value are generally quite far from being completely irregular, or random.
One striking exception occurred during the 2-week period immediately preceding the stock market crash of 1987, when the approximate entropy indicated nearly complete irregularity. That change flagged the incipient collapse.
Now, Pincus and Rudolf E. Kalman of the Swiss Federal Institute of Technology in Zurich have applied approximate entropy to the analysis of a wide range of other financial data. They describe their findings in the Sept. 21 Proceedings of the National Academy of Sciences.
Approximate entropy "appears to be a potentially useful marker of system stability, with rapid increases possibly foreshadowing significant changes in a financial variable," Pincus and Kalman contend.
To provide another example of such foreshadowing, Pincus and Kalman examined fluctuations in Hong Kong's Hang Seng index from 1992 to 1998. In this case, the approximate entropy value rose sharply to its highest observed level immediately before that market crashed in November 1997.
Pincus and Kalman also show the usefulness of approximate entropy in characterizing volatility. Volatility is normally understood as the size of asset price fluctuations. A market with large swings in price is generally considered highly volatile and, hence, unpredictable. Pincus and Kalman argue that large fluctuations are not necessarily the same thing as unpredictability.
"The point is that the extent of variation is generally not feared; rather, unpredictability is the concern," Pincus and Kalman say. "Recast, if an investor were assured that future prices would follow a precise sinusoidal pattern, even with large amplitude, this perfectly smooth roller coaster ride would not be frightening."
Standard deviation remains the appropriate tool for characterizing deviations from centrality, the researchers say, and approximate entropy might well be the appropriate tool for grading the extent of irregularity (and unpredictability).
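The sinusoid remark can be illustrated numerically: a large-amplitude sine wave has a big standard deviation yet is perfectly predictable one period ahead, while small Gaussian noise shows the reverse. The toy series and the `prediction_error` helper below are my own illustration, not taken from the paper:

```python
import math
import random
import statistics

random.seed(0)
period = 50
sine = [10.0 * math.sin(2 * math.pi * t / period) for t in range(1000)]
noise = [random.gauss(0.0, 0.1) for _ in range(1000)]

# By standard deviation, the sinusoid looks far more "volatile".
print(statistics.stdev(sine))   # large swings (roughly 10 / sqrt(2))
print(statistics.stdev(noise))  # small swings (roughly 0.1)

def prediction_error(xs, lag):
    """Mean absolute error of predicting x[t] by x[t - lag]."""
    return sum(abs(xs[t] - xs[t - lag]) for t in range(lag, len(xs))) / (len(xs) - lag)

# Yet the sinusoid is predictable almost exactly, the noise is not.
print(prediction_error(sine, period))   # essentially zero: large but regular
print(prediction_error(noise, period))  # nonzero: small but irregular
```

This is the distinction the researchers draw: standard deviation measures the size of swings, while irregularity measures their predictability, and the two can point in opposite directions.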
Use of approximate entropy to characterize disorder in time series data also suggests that random walks and related models don't generally fit the actual behavior of markets. There's often more order or structure in the data than such models assume.
"Independent of whether one chooses technical analysis, fundamental analysis, or model building, a technology to directly quantify subtle changes in serial structure has considerable real-world utility, allowing an edge to be gained," Pincus and Kalman conclude. "And this applies whether the market is driven by earnings or by perceptions, for both short- and long-term investments."
Copyright © 2004 by Ivars Peterson
Peterson, I. 2001. Pi à la mode. Science News 160(Sept. 1):136-137. Available at http://www.sciencenews.org/articles/20010901/bob9.asp.
______. 1997. The Jungles of Randomness: A Mathematical Safari. New York: Wiley.
______. 1997. Assessing irrational irregularity. Science News 151(May 31):340.
Pincus, S.M. 2001. Assessing serial irregularity and its implications for health. Annals of the New York Academy of Sciences 954(No. 1):245-267. Abstract available at http://www.annalsnyas.org/cgi/content/abstract/954/1/245.
______. 1991. Approximate entropy as a measure of system complexity. Proceedings of the National Academy of Sciences 88(March 15):2297-2301. Abstract available at http://www.pnas.org/cgi/content/abstract/88/6/2297.
Pincus, S., and R.E. Kalman. 2004. Irregularity, volatility, risk, and financial market time series. Proceedings of the National Academy of Sciences 101(Sept. 21):13709-13714. Abstract available at http://www.pnas.org/cgi/content/abstract/101/38/13709.
______. 1997. Not all (possibly) “random” sequences are created equal. Proceedings of the National Academy of Sciences 94(April 15):3513-3518. Abstract available at http://www.pnas.org/cgi/content/abstract/94/8/3513.
Pincus, S., and B.H. Singer. 1998. A recipe for randomness. Proceedings of the National Academy of Sciences 95(Sept. 1):10367-10372. Abstract available at http://www.pnas.org/cgi/content/abstract/95/18/10367.
______. 1996. Randomness and degrees of irregularity. Proceedings of the National Academy of Sciences 93(March 5):2083-2088. Abstract available at http://www.pnas.org/cgi/content/abstract/93/5/2083.
Singer, B.H., and S. Pincus. 1998. Irregular arrays and randomization. Proceedings of the National Academy of Sciences 95(Feb. 17):1363-1368. Abstract available at http://www.pnas.org/cgi/content/abstract/95/4/1363.