D simulated by H cannot possibly reach its own
simulated final halt state.
I am not going to talk about any nonsense of
resuming a simulation after we already have this
final answer.
We just proved that the input to H(D) specifies
non-halting. Anything beyond this is flogging a
dead horse.
news://news.eternal-september.org/20251104183329.967@kylheku.com
On 11/4/2025 8:43 PM, Kaz Kylheku wrote:
On 2025-11-05, olcott <polcott333@gmail.com> wrote:
The whole point is that D simulated by H
cannot possibly reach its own simulated
"return" statement no matter what H does.
Yes; this doesn't happen while H is running.
So while H does /something/, no matter what H does,
that D simulation won't reach the return statement.
On 06.11.2025 at 21:48, olcott wrote:
What you do is like thinking in circles before falling asleep.
It never ends. You're gonna die with that for sure sooner or later.
On 11/25/2025 9:20 AM, Bonita Montero wrote:
What you do is like thinking in circles before falling asleep.
It never ends. You're gonna die with that for sure sooner or later.
I now have four different LLM AI models that prove
I am correct, on the basis that they derive the
proof steps themselves.
Even Kimi that was dead set against me now fully
understands my new formal foundation for correct
reasoning.
On 25.11.2025 at 16:47, olcott wrote:
On 11/25/2025 9:20 AM, Bonita Montero wrote:
What you do is like thinking in circles before falling asleep.
It never ends. You're gonna die with that for sure sooner or later.
I now have four different LLM AI models that prove
that I am correct on the basis that they derive the
proof steps that prove that I am correct.
It doesn't matter whether you're correct. There's no benefit
in discussing such a theoretical topic for years. You won't
even stop if everyone tells you you're right.
Even Kimi that was dead set against me now fully
understands my new formal foundation for correct
reasoning.
On 11/25/2025 9:50 AM, Bonita Montero wrote:
It doesn't matter whether you're correct. There's no benefit
in discussing such a theoretical topic for years. You won't
even stop if everyone tells you you're right.
My whole purpose of this has been to establish a
new foundation for correct reasoning that gets rid
The timing for such a system is perfect because it
could solve the LLM AI reliability issues.
On 11/12/2025 8:25 PM, Kaz Kylheku wrote:
If those two are in any way whatsoever different, the entire
castle you built in the sand is washed away.
*This is a FOREVER thing until someone admits the truth*
int D()
{
  int Halt_Status = H(D);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
Everyone here rejects that the execution trace
of 5 statements of D simulated by H according to
the semantics of C is this:
(1) H simulates D that calls H(D)
(2) that simulates D that calls H(D)
(3) that simulates D that calls H(D)
(4) that simulates D that calls H(D)
(5) that simulates D that calls H(D)