Tommy Tomolonis, a Project Manager at CETRA, describes his experience at the 53rd Annual American Translators Association Conference in San Diego.
From the outdoor opening reception to the closing dance party, this year’s ATA conference was one for the books. The conference was held at the Hilton San Diego Bayfront, and the weather was perfect (especially for those of us who have already seen snow this year).
This was my third consecutive year attending the conference, and my focus this year was on technology, specifically machine translation (MT). I arrived on Wednesday for the SDL LSP Partners meeting, where I learned how I could use SDL’s automated translation in my normal project workflows with the technology I already own. This was great news, but MT is far more complicated than simply buying a system and dropping it into a workflow. Fortunately, the Association for Machine Translation in the Americas (AMTA) held its biennial conference in San Diego immediately after the ATA conference and, to segue from one event to the other, many of ATA’s Saturday sessions were related to MT.
I started my MT cramming with a session called “Teaching Translation Studies Students How Machine Translation Works,” which featured a panel of MT specialists covering different aspects of the technology. Leonardo Gianossa discussed the types of errors MT systems tend to make and how they can be prevented by controlling the source content. Mike Dillinger then showed how a good rule-based MT system isn’t much different from a good TM. In fact, MT has an advantage over TM because it can produce a candidate translation even when the TM offers no match. And since MT systems can be trained and built from existing TMs, the output isn’t necessarily the Frankenstein monster usually associated with MT. The goal of the session was to explain that we shouldn’t fear MT, especially when it comes to training future translators. Instead, students and translators in general should be better educated in what MT is and how it can actually help our work. This was a really positive start to my MT training day.
I later attended Rubén de la Fuente’s “Tips and Tricks for Full Post-Editing” and “Building Your Own Statistical Machine Translation Systems and Integrating Them with Your Translation Memory Tools” and Mike Dillinger’s “Machine Translation in Practice.” The most important lesson I took away from these sessions is “garbage in, garbage out”: if you build an MT system from low-quality input, all you can hope for is low-quality output. For MT to really benefit translators, LSPs, and clients, MT systems must be built with care. For statistical systems, this means focusing on the quality of the segments added, not just the quantity; for rule-based systems, it also means customization to accommodate specific linguistic nuances. A well-built system reduces post-editing time, which ultimately lowers costs. This could make MT a valid solution for low-budget projects that do not require publication quality and, in turn, could open up new business opportunities for LSPs and add to their lists of services.
Overall, MT should not be feared as the enemy of human translation. Instead, it should be seen as an emerging tool for translators and a unique business opportunity for LSPs. MT will never create language the way humans do, and it will always require linguists to build and maintain the systems. Just as CAT tools have become an essential part of the translation process, MT and post-editing may prove to be the next big step, and those who refuse to adapt may be left behind.
Click here to watch CETRA’s video from the ATA Conference.