• Re: Limitations of GenAI and LLM

    From Nuno Silva@3:633/10 to All on Fri Apr 3 10:50:26 2026

    On 2026-04-02, Stefan Ram wrote:

    Nuno Silva <nunojsilva@invalid.invalid> wrote or quoted:
    And it really poses a philosophical question: if GenAI systems really
    cannot grasp the concept of not knowing something, and perhaps also
    concepts of varying certainty of accuracy &c., how can these systems be
    expected to produce something remotely useful?

    Everyone always works to the best of their knowledge, and
    experts can even apply a certain kind of closed world assumption
    ("I have not heard of it and it comes from a layman, so it
    must be wrong or must not exist.")
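
    The closed world assumption mentioned above can be sketched in a few
    lines (my own illustrative example, not from the post; the facts and
    claim strings are invented). The contrast with an open-world "unknown"
    answer is exactly the hedge the thread argues GenAI lacks:

    ```python
    # A toy knowledge base; anything else is simply not recorded.
    KNOWN_FACTS = {
        "the Earth orbits the Sun",
        "water boils at 100 C at sea level",
    }

    def closed_world(claim: str) -> bool:
        # Closed world assumption: whatever is not in the knowledge
        # base is treated as false ("it must not exist").
        return claim in KNOWN_FACTS

    def open_world(claim: str) -> str:
        # Open world: absence from the knowledge base only licenses
        # "unknown", not "false" -- the hedged stance discussed here.
        return "true" if claim in KNOWN_FACTS else "unknown"

    claim = "moving clocks tick slower"
    print(closed_world(claim))  # the expert's confident "no"
    print(open_world(claim))    # the honest "I don't know"
    ```

    Under the closed world reading the system can never say "I don't
    know"; every gap in its knowledge silently becomes a negative claim.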

    For example, ask Isaac Newton about how the time coordinate
    differs for a moving observer, and he'll tell you: "There is
    no such difference! Time passes in the same manner everywhere
    and for everyone, moving or not."

    Today, we know that Newton was wrong.

    We could append a disclaimer to every claim of ours:
    ". . . to the best of my knowledge". But this would be
    redundant, as it goes without saying.

    My point here is that the human ability to grasp such concepts is a
    key tool in the thinking process, one that needs to be replicated for
    GenAI to reach a more useful state.

    Humans won't always make use of it, but its presence significantly
    widens what is possible in thought. Conversely, its absence limits
    the possible outcomes.

    --
    Nuno Silva

    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)