@dzwiedziu @GeePawHill :-)
The statement is partly right, but also somewhat misleading or oversimplified.
Let's break it down:
✅ "ML translation tools are not LLMs."
Mostly true, depending on what you mean.
Traditional ML-based translation tools (such as early versions of Google Translate, which used phrase-based statistical machine translation) are not LLMs.
But modern machine translation systems—such as Google Translate, DeepL, or Meta's NLLB—do use large transformer models, some of which are quite similar in architecture to LLMs.
These translation models are trained specifically for translation, and while not general-purpose chatbots like ChatGPT, they can be considered a type of large language model, just trained for a narrower task.
🤔 "Those are purpose-built models, which are far more effective..."
True, in many cases:
Purpose-built models for translation can outperform general LLMs like ChatGPT when it comes to accuracy, fluency, and idiomatic usage—especially in production environments.
General LLMs can translate, but may hallucinate, mistranslate terminology, or prioritize fluency over accuracy.
❌ "...than slop generators."
This part is an opinionated jab and not a technical point.
Referring to LLMs as “slop generators” is subjective and dismissive.
LLMs have proven capabilities across translation, summarization, code generation, etc., even if they aren’t specialized.
That said, LLMs may introduce artifacts or errors that purpose-built tools avoid, especially in high-stakes translation work.
TL;DR
✅ Translation tools are often not general-purpose LLMs.
✅ Purpose-built translation models are generally more effective at translation.
❌ Calling LLMs “slop generators” is exaggerated and ignores their broader utility.