LLM is unfair


Well… At least, that’s how I feel.

As a non-native English speaker, I run into inefficiency before I even start writing instructions or prompts. Why? Obviously, the data corpus fed into AI models was mostly written in English. So what? That seems like a simple problem these days. Why not just use one of the many AI translators and voilà!

Actually, it isn’t just about translation. It’s more of a contextual problem, and in the LLM world, context matters. The more I get my hands on various AI products, the more I feel that they (the AIs) and I don’t get along very well. Sometimes we’re simply not on the same page. I don’t know where we lost each other, but somewhere in the middle of the conversation we passed each other by without noticing at all.

This happens simply because, at their essence, cultures differ. Expressions that seem to mean the same thing on the surface sometimes don’t. For an admittedly imperfect example, “what’s up?” can be read as a casual greeting in English, but that is not the case in my country: in my language, the phrase literally asks what is actually going on. The comparison may not satisfy everyone, but it should convey what I’m trying to say.

Nuance. This subtle difference is what creates the inefficiency I mentioned. If you’re not aware of it, you’ll forever receive answers that are slightly off yet good enough that you never recognize what’s wrong. And what if you are aware of it? I don’t know why Ouroboros comes to mind. I can’t escape the questions.

Is this what I want to ask? Am I sure?
Will this be enough to convey what I actually intended to ask the LLM?

I’m just whining about how hard it is to coax better responses out of an LLM. So I figured I’d vent a bit.
