UCSD scientists explain and improve upon ‘enigmatic’ probability formula

Findings could have implications for speech recognition, machine learning, information retrieval

Scientists at the University of California, San Diego (UCSD) have developed new insight into a formula that helped British cryptanalysts crack the German Enigma code in World War II. Writing in the Oct. 17 edition of the journal Science, UCSD Jacobs School of Engineering professor Alon Orlitsky and graduate students Narayana P. Santhanam and Junan Zhang shed light on a lingering mathematical mystery and propose a new solution that could help improve automatic speech recognition, natural language processing, and other machine learning software.

In the article, Orlitsky and his colleagues unlock some of the secrets of the “Good-Turing estimator,” a formula for estimating the probability of elements based on observed data. The formula is named after famed mathematicians I.J. Good and Alan Turing who, during WWII, were among a group of cryptanalysts charged with breaking the Enigma cipher, the code used to encrypt German military communications. The group, which worked at Bletchley Park outside London, has been credited by some with shortening the war by several years. (Its work also led to the development of the first modern computer, and has been documented in a number of books and movies.)

The cryptanalysts were greatly aided by their possession of the Kenngruppenbuch, the German cipher book containing all possible secret keys to Enigma, which British intelligence had captured earlier. They documented the keys used by various U-boat commanders in previously decrypted messages and used this information to estimate the distribution of pages from which commanders picked their secret keys.

The prevailing technique at the time estimated the likelihood of each page by simply using its empirical frequency, the fraction of the time it had been picked in the past. But Good and Turing developed an unintuitive formula that bore little resemblance to conventional estimators. Surprisingly, this Good-Turing estimator outperformed the more intuitive approaches. Following the war, Good published the formula, mentioning that Turing had an “intuitive demonstration” for its power, but not describing what that demonstration entailed.
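The article does not reproduce the formula itself, but the classical Good-Turing estimate is well known: an element observed r times in a sample of size n receives the adjusted count r* = (r + 1) N_{r+1} / N_r, where N_r is the number of distinct elements observed exactly r times, and the mass N_1 / n is reserved for elements never seen at all. Below is a minimal Python sketch of this textbook form, not the paper's refined estimator, and omitting the smoothing of the N_r values that practical systems need:

```python
from collections import Counter

def good_turing(sample):
    """Classical Good-Turing estimates (a minimal, unsmoothed sketch)."""
    n = len(sample)
    counts = Counter(sample)             # element -> r, its observed count
    n_r = Counter(counts.values())       # r -> N_r, number of elements seen r times

    probs = {}
    for element, r in counts.items():
        # Adjusted count r* = (r + 1) * N_{r+1} / N_r; probability is r* / n.
        # N_{r+1} is zero for the most frequent element, which is why real
        # implementations smooth the N_r values before applying the formula.
        probs[element] = (r + 1) * n_r.get(r + 1, 0) / (n_r[r] * n)
    unseen_mass = n_r.get(1, 0) / n      # total probability of unseen elements
    return probs, unseen_mass

probs, unseen = good_turing(list("aaabbc"))
print(probs)    # {'a': 0.0, 'b': 0.5, 'c': 0.333...}
print(unseen)   # 0.1666... (one-sixth of the mass goes to unseen elements)
```

By comparison, the empirical-frequency estimate on the same sample would be 1/2 for a, 1/3 for b, and 1/6 for c, with zero probability for anything not yet seen; Good-Turing shifts mass toward rare and unseen outcomes, at the cost of anomalies (such as the zero for a) that smoothing corrects.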

Since then, Good-Turing has been incorporated into a variety of applications such as information retrieval, spell-checking, and speech recognition software, where it is used to automatically learn the underlying structure of a language. But despite its usefulness, “its performance has remained something of an enigma itself,” said Orlitsky, a professor in the Electrical and Computer Engineering department. While partial explanations have been offered for why Good-Turing may work well, no objective evaluation of its optimality had been established. Scientists also observed that while it worked well under many circumstances, at times its performance was lacking.

Now, Orlitsky, Santhanam, and Zhang believe they have unraveled some of the mystery surrounding Good-Turing, and have constructed a new estimator that, unlike the historic formula, is reliable under all conditions. Motivated by information-theoretic and machine-learning considerations, they propose a natural measure of an estimator's performance. Called attenuation, it measures the highest possible ratio between the probability that any distribution assigns to each symbol in a sequence and the corresponding probability assigned by the estimator.
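The article describes attenuation only in words. One way to formalize that description (our reading, not a quotation from the paper) is to write \hat{p}(x^n) for the largest probability any fixed distribution assigns to a length-n sequence x^n, and q(x^n) for the probability the estimator assigns to it:

```latex
% A plausible formalization of "attenuation" based on the verbal
% description above; the paper's exact definition may differ in detail.
\[
  \operatorname{atn}(q)
    \;=\;
  \limsup_{n \to \infty}\;
  \max_{x^n}
  \left( \frac{\hat{p}(x^n)}{q(x^n)} \right)^{\!1/n}
\]
```

Taking the n-th root turns a ratio over whole sequences into a per-symbol ratio, which is what makes an attenuation of 1 equivalent to the per-symbol guarantee described below.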

The UCSD researchers show that intuitive estimators, such as empirical frequency, can attenuate the probability of a symbol by an arbitrarily large factor. They also prove that Good-Turing performs well in general: its attenuation can be as high as 1.39, but it never exceeds 2. Motivated by these observations, they derived an estimator whose attenuation is 1. This means that as the length of any sequence increases, the probability the new estimator assigns to each symbol becomes essentially as high as that assigned by any distribution.

“While there is a considerable amount of work to be done in simplifying and further improving the new estimator,” concluded Orlitsky, “we hope that this new framework will eventually improve language modeling and hence lead to better speech recognition and data mining software.”

* Orlitsky, A., Santhanam, N.P., and Zhang, J., “Always Good-Turing: Asymptotically Optimal Probability Estimation,” Science, Oct. 17, 2003. http://www.sciencemag.org/

Media Contact: Doug Ramsey 858-822-5825 dramsey@ucsd.edu

More Information:

http://www.ucsd.edu/
