Machine translation (MT) describes a range of computer-based activities involving translation. This article reviews sixty years of MT research and development, concentrating on the essential difficulties and limitations of the task and on how the various approaches have attempted to solve, or more usually to work around, them. The history of MT dates from the period just after the Second World War, when the earliest computers had been used for code-breaking. In the late 1980s the field underwent a major change of direction with the emergence of a radically new way of doing MT. Two main approaches have since dominated: rule-based and statistics-based. The statistical approach owes little to conventional linguistic methods and ideas, but its much faster development cycle has made functional MT systems covering new language pairs available far more quickly.
Many large companies control the input language to minimize problems of disambiguation and so improve the quality of MT output. In large-scale enterprise systems, MT is used to produce drafts, which are then edited by bilingual personnel. A significant development has been the introduction of specialized systems designed for Internet service providers and for large corporations, which use them to supply and edit translations of their own webpages, localized to their domain, and for cross-language communication with customers. MT also finds application in healthcare communication, in the military, and in translation for foreign tourists. A specific application of MT for immigrants and minorities has been the translation of subtitles for television programmes, and other disadvantaged members of society are now beginning to be helped by MT-related systems. The future of MT lies in hybrid systems combining the best of the statistical and rule-based approaches.