Machine translation for the most part targets Internet and technical texts

11.06.2008
Like all technology, machine translation (MT) has its limits, says Mike Dillinger, President of the Association for Machine Translation in the Americas and Adjunct Professor, Department of Psychology, San José State University.

At the invitation of the Department of Artificial Intelligence, Mike Dillinger has been giving a course on paraphrasing and text mining at the School of Computing. Dillinger believes that machine translation will not work well without clean, clear texts and that, despite technological advances, there will always be a demand for human translators to render legal or literary texts.

Machine translation, he adds, for the most part targets Internet and technical texts. Because of this Internet focus, web content creators need to be trained to ensure that their documents are machine translatable.

- As an acclaimed expert in machine translation, how would you define the state of the art in this discipline?

The state of the art is rapidly changing. A far-reaching new approach was introduced fifteen to twenty years ago. At the time we faced a two-sided problem. First, it took a long time and a lot of money to develop the grammatical rules required to analyse the original sentence and the “transfer” or translation rules. Second, it seemed impossible to account manually for the vast array of words and sentence types found in documents.

The new approach uses statistical techniques to identify qualitatively simpler rules. It does this swiftly, automatically and on a massive scale, covering much more of the language. Similar techniques are used to identify terms and their possible translations. These are huge advances! Before, system development was very much a cottage industry; now systems are mass-produced. Today’s research aims to increase the qualitative complexity of the rules to better reflect syntactic structures and aspects of meaning. We are now exploiting the qualitative advances of this approach.
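To make the statistical idea concrete, here is a minimal editorial sketch, not a description of any production system, of how co-occurrence counts over a sentence-aligned parallel corpus can suggest word-translation candidates. The toy corpus and scoring are assumptions for illustration only.

```python
from collections import defaultdict

# Hypothetical, hand-made English-Spanish sentence pairs (illustration only).
parallel_corpus = [
    ("the house", "la casa"),
    ("the green house", "la casa verde"),
    ("the book", "el libro"),
]

# Count how often each source word co-occurs with each target word
# in aligned sentence pairs.
cooc = defaultdict(lambda: defaultdict(int))   # cooc[source][target] = count
totals = defaultdict(int)                      # total co-occurrences per source word

for src, tgt in parallel_corpus:
    tgt_words = tgt.split()
    for s in src.split():
        for t in tgt_words:
            cooc[s][t] += 1
        totals[s] += len(tgt_words)

def translation_candidates(source_word, top_n=2):
    """Rank target words by relative co-occurrence frequency (a crude translation guess)."""
    ranked = sorted(cooc[source_word].items(), key=lambda kv: kv[1], reverse=True)
    return [(t, count / totals[source_word]) for t, count in ranked[:top_n]]

print(translation_candidates("house"))   # e.g. [('la', 0.4), ('casa', 0.4)]
print(translation_candidates("book"))    # e.g. [('el', 0.5), ('libro', 0.5)]
```

Real statistical systems go much further, learning alignments and phrase tables from millions of sentence pairs, but the underlying move is the same: let counts over data, rather than hand-written rules, propose the translations.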

- Machine translation systems have been in use since the 1970s. Is this technology now mature?

If maturity means use in industrial applications, the answer is definitely yes. MT has been widely used by top-ranking industrial and military institutions for 30 years. The European Community, Ford, SAP, Symantec, the US Armed Forces and many other organizations use MT every day. If maturity means use by the general public to enter a random sentence for translation, I would say, just as definitely, no. Like all technology, machine translation has its limits. You don’t expect a Mercedes to run well in the snow or sand: where it performs best is on a dry, surfaced road. Neither do you expect a Formula 1 car to win a race on ordinary petrol or alcohol; it needs special fuel. Unfortunately, people very often expect a perfect translation of an unclear or error-riddled text. For the time being at least, without clean and correct texts, machine translation will not work properly.

- Do you think society understands MT?

Not at all! It’s something I come across all the time. A lot of people think that “translation” is being able to tell what the author means, even if he or she has not expressed himself or herself clearly and correctly. Therefore, many have great expectations about what a translation system will be able to do. This is why they are always disappointed. On the other hand, those of us working in MT have to make a big effort to get society to better understand what use it is and when it works well: this is the mandate of the association I chair.

- What is MT about? Developing programs, translation systems, computerized translation, manufacturing electronic dictionaries? How exactly would you define this discipline?

MT is concerned with building computerized translation systems. Of course, this includes building electronic dictionaries, grammars, databases of word co-occurrences and other linguistic resources. But it also involves developing automatic translation evaluation processes, input text “cleaning” and analysis processes, and processes for guaranteeing that everything will run smoothly when a 300,000 page translation order arrives. As these are all very different processes and components, MT requires the cooperation of linguists, programmers and engineers.

- What are the stages of the machine translation process?

1. Document preparation. This is arguably the most important stage, because you have to ensure that the sentences of each document are understandable and correct.
2. Adaptation of the translation system. Just like a human translator, the machine translation system needs information about all the words it will come across in the documents. It can be taught new words through a process known as customization.
3. Document translation. Each document format, like Word, PDF or HTML, has many different features apart from the sentences that actually have to be translated. This stage separates the content from the wrapping, as it were.
4. Translation verification. Quality control is very important for human and machine translators alike. Neither words nor sentences have just one meaning, and they are very easy to misinterpret.
5. Document distribution. This stage is more complex than is generally thought. When you receive 10,000 documents to be translated into 10 different languages, checking that they have all been translated, putting them all in the right order without mixing up languages, and so on, takes a lot of organizing.
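As a purely editorial illustration of how these five stages fit together, the following hypothetical pipeline skeleton strings them into one workflow; the function names and placeholder logic are assumptions, not any vendor's actual API.

```python
# Hypothetical skeleton of the five-stage MT workflow described above.
# Each step is a stub; real systems plug in format filters, terminology
# customization, an MT engine, QA checks and delivery logic here.

def prepare(documents):
    """Stage 1: check that sentences are understandable and correct."""
    return [doc for doc in documents if doc.strip()]          # placeholder check

def customize(engine, new_terms):
    """Stage 2: teach the engine customer-specific terminology."""
    engine.setdefault("glossary", {}).update(new_terms)
    return engine

def translate(engine, documents, target_lang):
    """Stage 3: separate content from formatting and translate the sentences."""
    return [f"[{target_lang}] {doc}" for doc in documents]    # stand-in for a real engine

def verify(translations):
    """Stage 4: quality control on the machine output."""
    return [t for t in translations if t]                     # placeholder QA

def distribute(translations, target_lang):
    """Stage 5: route every translated document to the right place."""
    return {target_lang: translations}

engine = customize({}, {"power button": "botón de encendido"})   # invented glossary entry
docs = ["Press the power button.", "Hold for three seconds."]
delivery = distribute(verify(translate(engine, prepare(docs), "es")), "es")
print(delivery)
```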

- Is this technology a threat to human translators? Do you really think it creates jobs?

It is not a threat at all! MT takes the most routine work out of translators’ hands so that they can apply their expertise to more difficult tasks. We will always need human translators for more complex texts, such as legal and literary texts. MT today is mostly applied to situations where there is no human participation at all. It would be cruel to have people translate e-mails, chats, SMS messages and random web pages: the required text throughput and translation speed are so high that it would be excruciating for a human being. It is a question of scale: an average human translator translates 8 to 10 pages per day, whereas, on the web scale, 8 to 10 pages per second would be an extremely low rate. The adoption of new technologies, especially in a global economy, seldom boosts job creation. What it does do is open up an increasingly clear divide between low-skilled routine jobs and specialized occupations.
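The scale gap he describes can be made explicit with a quick back-of-the-envelope calculation (an editorial illustration, not a figure from the interview):

```python
# Rough scale comparison implied above (illustrative arithmetic only).
human_pages_per_day = 9                           # midpoint of 8-10 pages/day
web_pages_per_second = 9                          # midpoint of 8-10 pages/second
web_pages_per_day = web_pages_per_second * 60 * 60 * 24
print(web_pages_per_day)                          # 777600 pages/day
print(web_pages_per_day / human_pages_per_day)    # 86400.0 -> tens of thousands of translators' output
```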

- Is the deployment of this technology a technical or social problem?

First and foremost it is a social engineering problem because people have to change their behaviour and the way they see things. The MT process reproduces exactly the same stages as human translation, except for two key differences:
a) In translation systems, you have to be very, very careful about the wording. Human translators apply their technical knowledge (if any) to make up for incorrect wording, but machine translation systems have no such knowledge: they reproduce all too faithfully the mistakes in the source text. It is hard to get them to translate more accurately, but there are now extremely helpful automatic checking tools. Symantec is a recent example: it uses an automatic checker together with a translation system to achieve extremely fast and very good results.

b) Translation systems have to handle a lot of translated documents. What happens if an organization receives 5,000 instead of the customary 50 translated documents per week? Automating the translation process ends up uncovering problems with other parts of document handling.
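Point (a) mentions automatic checking tools that catch problematic source text before it reaches the MT engine. The sketch below is a hypothetical, editorial example of such a pre-translation check; the limits and word lists are invented for illustration and do not describe Symantec's or any other vendor's tool.

```python
# Illustrative pre-translation source check (an editorial sketch): flag
# sentences that are likely to translate badly so a writer can fix them
# before the MT system sees them.
import re

MAX_WORDS = 25                                  # assumed style-guide limit
VAGUE_OPENERS = {"it", "this", "which"}         # vague references MT handles poorly

def check_sentence(sentence):
    issues = []
    words = re.findall(r"[A-Za-z']+", sentence)
    if len(words) > MAX_WORDS:
        issues.append(f"too long ({len(words)} words)")
    if words and words[0].lower() in VAGUE_OPENERS:
        issues.append(f"starts with a vague reference: '{words[0]}'")
    if sentence.count(",") >= 3:
        issues.append("heavily nested clauses")
    return issues

for s in ["This causes it to fail, sometimes, under load, in rare cases.",
          "Press the power button to start the device."]:
    print(s, "->", check_sentence(s) or "OK")
```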

- You mentioned the British National Corpus, which includes a cross-section of texts representative of the English language. It contains 15 million different terms, whereas machine translation dictionaries only contain 300,000 entries. How can this barrier to building an MT system acceptable to society be overcome?

This collection of over 100 million English words is a good mirror of the language’s large-scale features. One is that we use a great many words. However, word frequency is extremely variable: of the 15 million terms, 70% are seldom used! To overcome this variability in vocabulary usage, we now use the most common words to create a core system to which 5,000 to 10,000 customer-specific words are added. This is reasonably successful. For web applications, however, it simply does not work. Even the best systems are missing literally millions of words, and new words are invented every day. At least three remedies are applied at present: ask the user to “try again”, ask the user to enter a synonym, and automatically or semi-automatically build synonym databases. As I see it, we will have to develop guidance systems for web content authors, such as have already been developed for technical documents. There are strong economic arguments for moving in that direction.
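As an editorial illustration of the coverage problem and the synonym-database remedy described above, the following sketch measures how much of an input falls outside a core lexicon and substitutes from a hypothetical synonym table; all word lists here are made up.

```python
# Editorial sketch of vocabulary coverage: count out-of-vocabulary words
# against a tiny core lexicon and fall back to an assumed synonym database.
core_lexicon = {"the", "report", "shows", "sales", "grew", "fast"}
synonyms = {"skyrocketed": "grew", "q3": "quarter"}     # hypothetical synonym database

def coverage_report(text):
    words = text.lower().split()
    unknown = [w for w in words if w not in core_lexicon]
    substitutable = {w: synonyms[w] for w in unknown if w in synonyms}
    oov_rate = len(unknown) / len(words) if words else 0.0
    return oov_rate, unknown, substitutable

rate, unknown, subs = coverage_report("The report shows sales skyrocketed")
print(f"OOV rate: {rate:.0%}, unknown: {unknown}, synonym fallback: {subs}")
# -> OOV rate: 20%, unknown: ['skyrocketed'], synonym fallback: {'skyrocketed': 'grew'}
```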

- The Association for Machine Translation in the Americas that you chair is organizing the AMTA 2008 conference, to be held in Hawaii next October. What innovations does the conference have in store?

There is always something! Come and see! One difference this year is that several groups are holding their conferences together. AMTA, the International Workshop on Spoken Language Translation (IWSLT), a workshop by the US government agency NIST on how to evaluate translation evaluation methods, a Localization Industry Standards Association meeting attracting representatives from large corporations, and a group of Empirical Methods in Natural Language Processing (EMNLP) researchers will all be at the same hotel in the same week. Finally, as the conference is being held in Hawaii, our colleagues from Asia will be there to add an even more international edge. For more information, see the conference web site.

Eduardo Martínez | alfa
Further information:
http://www.mikedillinger.com/
http://www.amtaweb.org/
http://www.fi.upm.es/?pagina=653&idioma=english
