Post-editing is the process in which a qualified translator edits a machine-translated text. Machine translation is often abbreviated as MT. Depending on the approach, the machine, also known as the “engine”, produces a translation according to its underlying algorithm; this output can be considered a raw version. Popular engines include Google Translate, DeepL and Microsoft Translator. Depending on the engine, different issues can be expected in the target text during post-editing. The post-editor must therefore work intensively on the raw machine translation to ensure that the result is a linguistically high-quality translation that conveys the content of the source text correctly.
For a “normal” translator, linguistic and cultural knowledge, research competence and translation strategy are essential. In addition to these skills, a post-editor must also know which problems and sources of error are typical of the respective MT system, so that they can eliminate them and ensure a high-quality translation through their post-editing. There are several strategic approaches to this, which can, and in some cases must, vary depending on the language combination and various other factors.
Light post-editing is a quick edit of the machine translation with only minimal changes to the output, improving only comprehensibility and terminology. The aim is to produce a text that is correct in content and comprehensible, but that does not meet the quality standards of a translation produced by a human translator.
In full post-editing, the output undergoes thorough, time-consuming editing, so that the quality of the result is equivalent to a “traditional” translation by a human translator.
The first rule-based MT systems appeared in the 1950s. Such engines are based on language algorithms that follow a specified grammar and use a specific dictionary. Rule-based MT reads mechanically, but it is always complete and provides consistency in terminology and style, making the output predictable.
The first statistical MT systems appeared in the 1980s. They analyse large amounts of bilingual text data according to statistical criteria and produce a translation based on the resulting language patterns. Statistical MT reads more smoothly than rule-based MT, but the translation may be incomplete. Because different sources are analysed, the translation can be inconsistent in style and terminology – the output is unpredictable.
Neural MT systems have been available since about 2016. They are characterised by engines that are trained on large amounts of bilingual data and learn from this data to translate within a large neural network. The translations usually read very fluently, but as with statistical MT, the output is unpredictable and can be inconsistent in terminology and style. Here, too, incomplete translations are possible.
There is also the risk that the target text may look like a good translation at first glance, even though its content does not correspond to the original source text. This is because the machine does not recognise the context of the text and translates each sentence individually, i.e. it does not relate preceding and following sentences to one another.
Hybrid MT systems combine the advantages of the different systems.
Adaptive MT systems are a combination of statistical and neural MT.
We will give you a free quotation, with no obligation on your part. Simply send us the text, or a description of the service required.
Are your files too large to send by email? Then we will give you access to the secure in-house Kocarek Cloud.
We are certified under the international standard for translation services ISO 17100.
We are certified under the international quality management standard ISO 9001.