
Using an LLM is not a technical choice, it's a moral choice.

Using an LLM is choosing 1) to take from people who did not give, and 2) to further ruin our ecology, and 3) to aim as your highest goal for mediocrity, and 4) to fund extractive capitalist billionaires, and 5) to believe that ends justify means.

When I see someone using an LLM, for work, for play, for curiosity, I let them go.

Take the pledge: I don't use LLMs and I don't kiss boys who do.

@GeePawHill How is it different from using Google? Wouldn't paying someone to translate for me harm the ecology more than an LLM would?

@pavel
ML translation tools are not LLMs.

Those are purpose-built models, which are far more effective than slop generators.

@GeePawHill

@dzwiedziu @GeePawHill :-)

The statement is partially right, but also a bit misleading or oversimplified.

Let's break it down:
✅ "ML translation tools are not LLMs."
Mostly true, depending on what you mean.

Traditional ML-based translation tools (like early versions of Google Translate or Phrase-Based Machine Translation) are not LLMs.

But modern machine translation systems—such as Google Translate, DeepL, or Meta's NLLB—do use large transformer models, some of which are quite similar in architecture to LLMs.

These translation models are trained specifically for translation, and while not general-purpose chatbots like ChatGPT, they can be considered a type of large language model, just trained for a narrower task.

🤔 "Those are purpose-built models, which are far more effective..."
True, in many cases:

Purpose-built models for translation can outperform general LLMs like ChatGPT when it comes to accuracy, fluency, and idiomatic usage—especially in production environments.

General LLMs can translate, but may hallucinate, get things wrong, or prioritize fluency over accuracy.

❌ "...than slop generators."
This part is an opinionated jab and not a technical point.

Referring to LLMs as “slop generators” is subjective and dismissive.

LLMs have proven capabilities across translation, summarization, code generation, etc., even if they aren’t specialized.

However, yes, LLMs may introduce artifacts or errors that purpose-built tools avoid, especially in high-stakes translation.

TL;DR
✅ Translation tools are often not general-purpose LLMs.

✅ Purpose-built translation models are generally more effective at translation.

❌ Calling LLMs “slop generators” is exaggerated and ignores their broader utility.

Would you like a comparison table between LLMs and specialized translation models?

@pavel
We live in a world where, in practice, LLMs are built by techbros for techbros to eliminate as much human labour as they can get away with, burning away the planet for profits.

If there are LLMs with utility they're vastly overshadowed by slop.

Thus the term “slop generators” stays.

@GeePawHill


@pavel @dzwiedziu @GeePawHill makes sense that the slop generator would be mad about the "slop generator" response 😂

@dzwiedziu @GeePawHill You live in a world.... Don't try to pull me there.

And you may want to learn more about language modeling; it is really quite useful technology. Not too well suited to answering questions, though.

@pavel
Skip the ad hominem, and stop disregarding someone's opinion because you don't perceive them as being on your “level”, okay?

Because you may know what makes an LLM tick, but it seems you don't know what its owners want to do with it, or what the societal and environmental consequences are.

You might also want to learn what the term “ivory tower” means as a pejorative, and how it stands in opposition to “the world”.

@GeePawHill
