• Re: 8 bit cpu

    From BGB@3:633/10 to All on Wed Dec 24 03:19:48 2025
    On 12/23/2025 9:35 PM, Keith Thompson wrote:
    kalevi@kolttonen.fi (Kalevi Kolttonen) writes:
    [...]
    I do not know whether C standards permit 8-bit ints,

    It does not. C (up to C17) requires INT_MIN to be -32767 or lower,
    and INT_MAX to be +32767 or higher. (C23 changes the requirement
    for INT_MIN from -32767 to -32768, and mandates 2's-complement for
    signed integer types.)

    but
    cc65 is a real-world 6502 C cross-compiler available straight
    from the standard repositories on Fedora Linux 43 and FreeBSD 15.

    We can install cc65 and VICE emulator to do a simple test:
    [...]

    I have cc65 on my Ubuntu system. Here's how I demonstrated the same
    thing (that sizeof (int) is 2):


    <snip>
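
    (As an aside, the guarantee Keith cites can be checked at compile
    time; a minimal sketch, assuming any C11-or-later compiler:)

      #include <limits.h>

      /* A conforming implementation must provide at least these
         ranges, so an 8-bit 'int' can't satisfy them. */
      _Static_assert(INT_MAX >=  32767, "INT_MAX too small");
      _Static_assert(INT_MIN <= -32767, "INT_MIN too large");

      int main(void) { return 0; }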

    In effect, Keith's demonstration points out part of what I had
    meant by there being no true 8-bit systems in the sense mentioned
    in the OP.

    Seems like maybe people misread my point as implying that there
    were no 8-bit systems in general.

    Like, even the most thoroughly 8-bit CPUs around (such as the 6502
    and similar) still had 16-bit 'int' and pointer types in C. And,
    at the same time, this is basically the minimum at which one has a
    system that is still capable of usefully running C.

    Well, at least in part because it wouldn't be terribly useful to try to
    write C code on something that is likely to run out of address space
    before you could even provide an implementation of "printf()" or similar
    (*1).
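
    (For reference, a sketch of the sort of test being discussed,
    assuming cc65 and VICE are installed; the exact invocations may
    differ from whatever Keith used:)

      /* sizes.c: print basic type sizes on a 6502 target. */
      #include <stdio.h>

      int main(void) {
          printf("sizeof(int)    = %u\n", (unsigned)sizeof(int));
          printf("sizeof(void *) = %u\n", (unsigned)sizeof(void *));
          return 0;
      }

    Compiled and run along the lines of:

      cl65 -t c64 -o sizes.prg sizes.c
      x64sc sizes.prg

    which should report 2 for both under cc65.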

    But, yeah, it is annoyingly difficult for me to respond in a way that
    isn't either a 1-liner or going off down a rabbit hole.



    *1: Though, admittedly, I had sometimes used these sorts of tiny
    address spaces mostly for things like genetic programming
    experiments, where, say, one use-case is to implement something
    sorta like a tiny RISC machine or similar with a simplistic ISA,
    and then mutate the programs and see if by chance they start doing
    anything useful or interesting.
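
    (A minimal sketch of the general shape; the 4-instruction ISA here
    is made up for illustration, simpler than anything real:)

      #include <stdio.h>
      #include <stdlib.h>

      #define PROG_SZ 64

      /* Toy ISA, one byte per instruction: op:2 | reg:2 | imm:4. */
      static int run(const unsigned char *prog) {
          int r[4] = {0};
          for (int pc = 0; pc < PROG_SZ; pc++) {
              unsigned op  = (prog[pc] >> 6) & 3;
              unsigned rd  = (prog[pc] >> 4) & 3;
              unsigned imm =  prog[pc]       & 15;
              switch (op) {
              case 0: r[rd]  = (int)imm;    break;  /* LI  rd, imm */
              case 1: r[rd] += r[imm & 3];  break;  /* ADD rd, rs  */
              case 2: r[rd] ^= r[imm & 3];  break;  /* XOR rd, rs  */
              case 3: return r[rd];                 /* RET rd      */
              }
          }
          return r[0];
      }

      int main(void) {
          unsigned char prog[PROG_SZ];
          for (int i = 0; i < PROG_SZ; i++)
              prog[i] = (unsigned char)(rand() & 255);
          /* Flip one bit per step, watch for interesting output. */
          for (int step = 0; step < 100000; step++) {
              prog[rand() % PROG_SZ] ^= (unsigned char)(1 << (rand() % 8));
              if (run(prog) == 42)  /* arbitrary target output */
                  printf("step %d produced 42\n", step);
          }
          return 0;
      }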

    Though, this is sort of its own topic, and there is also the irony
    that in many of these sorts of experiments one may use 32 or 64
    bits (in actual memory) to represent each byte (it tends to work
    better if there is a certain level of "nuance", where each bit is
    more a probability of being 1 or 0 rather than being 1 or 0
    directly). Well, that and gray-coding, etc.
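
    (Roughly along these lines, as a sketch; packing a 4-bit
    probability per bit is one way to arrive at the 32-bits-per-byte
    figure:)

      #include <stdio.h>
      #include <stdlib.h>
      #include <stdint.h>

      /* Each logical byte stored as 32 bits: a 4-bit weight per bit,
         where 0 = almost never 1 and 15 = nearly always 1. */
      static uint8_t sample_byte(uint32_t weights) {
          uint8_t b = 0;
          for (int i = 0; i < 8; i++) {
              unsigned w = (weights >> (i * 4)) & 15;
              if ((unsigned)(rand() & 15) < w)
                  b |= (uint8_t)(1u << i);
          }
          return b;
      }

      /* Gray decode: adjacent codes differ in one bit, so single-bit
         mutations in the genome give small changes in the value. */
      static uint8_t gray_decode(uint8_t g) {
          g ^= g >> 1;
          g ^= g >> 2;
          g ^= g >> 4;
          return g;
      }

      int main(void) {
          uint32_t genome = 0xF0F0A5A5u;  /* arbitrary example genome */
          for (int i = 0; i < 4; i++)
              printf("%3u\n", gray_decode(sample_byte(genome)));
          return 0;
      }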

    Well, and also things like putting some parameters (such as the
    mutation rate and the specific strategies used for mutating bits)
    themselves under the control of the genetic algorithm.
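
    (As a sketch of the self-adaptive part; the struct layout here is
    invented for the example:)

      #include <stdlib.h>

      /* A genome that carries its own mutation parameters, which are
         themselves subject to mutation. */
      struct genome {
          unsigned char code[64];  /* the evolved bits */
          unsigned char mut_rate;  /* controls bit-flips per step */
      };

      static void mutate(struct genome *g) {
          /* Mutate the payload under the genome's own rate... */
          for (int i = 0; i <= (g->mut_rate & 7); i++)
              g->code[rand() % 64] ^= (unsigned char)(1 << (rand() % 8));
          /* ...and occasionally mutate the rate itself. */
          if ((rand() & 15) == 0)
              g->mut_rate ^= (unsigned char)(1 << (rand() % 8));
      }

      int main(void) {
          struct genome g = { {0}, 3 };
          for (int step = 0; step < 1000; step++)
              mutate(&g);
          return 0;
      }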

    Could potentially also "evolve" something like a C64 or NES
    program, so long as one has the test logic in place (an emulator)
    and some way to evaluate the "goodness" of the answers (this is
    often the harder part).
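
    (The harness side is mostly a loop of this shape; the "emulator"
    and scoring function below are trivial placeholders standing in
    for a real emulator core and a real goodness metric:)

      #include <stdlib.h>
      #include <string.h>

      #define ROM_SZ 4096

      /* Placeholder: a real version would run the ROM in an emulator
         and capture its output (screen, audio, etc.). */
      static int run_emulator(const unsigned char *rom,
                              unsigned char *out, int out_max) {
          int n = out_max < ROM_SZ ? out_max : ROM_SZ;
          memcpy(out, rom, (size_t)n);
          return n;
      }

      /* Placeholder goodness metric: count nonzero output bytes. */
      static double score_output(const unsigned char *out, int n) {
          int s = 0;
          for (int i = 0; i < n; i++)
              s += (out[i] != 0);
          return (double)s;
      }

      int main(void) {
          static unsigned char best[ROM_SZ], cand[ROM_SZ], out[256];
          double best_score = -1.0;
          for (int gen = 0; gen < 10000; gen++) {
              memcpy(cand, best, ROM_SZ);
              cand[rand() % ROM_SZ] ^= (unsigned char)(1 << (rand() % 8));
              int n = run_emulator(cand, out, (int)sizeof out);
              double s = score_output(out, n);
              if (s > best_score) {  /* keep improvements only */
                  best_score = s;
                  memcpy(best, cand, ROM_SZ);
              }
          }
          return 0;
      }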

    Well, same basic strategies also work for things like neural nets and
    state machines and whatever else as well.


    This works better mostly for small things; at a certain complexity
    level the strategy stops scaling very well (trying to use genetic
    algorithms to evolve a font or a word-predicting neural net or
    similar was not particularly effective).

    Well, and in a few cases I realized that using a genetic algorithm
    was in fact slower than using a brute-force search (as in the font
    scenario).

    ...


    --- PyGate Linux v1.5.2
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)