• simple compression method to code yourself?

    From fir@3:633/280.2 to All on Sat Apr 6 05:24:51 2024
    i haven't coded at all in recent years
    (recently i coded half of my compiler, with a plan to write the second
    half, but it's too ambitious for the lazy coding mood i've got recently)
    but recently i started this small coding mood and found
    it pleasurable

    searching for lazy side things i could code in such a mood,
    i thought maybe i would like to have a compression routine,
    but i would like to write it myself, sorta like i use quicksort
    or so, but i also want to write it myself

    so is there maybe some method i could use, i mean some simple
    routine that compresses and uncompresses an array of bytes, but
    maybe something a bit more complex than RLE (run-length encoding)
    - the only thing i know from this domain

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: i2pn2 (i2pn.org) (3:633/280.2@fidonet)
  • From David LaRue@3:633/280.2 to All on Sat Apr 6 10:28:12 2024
    fir <fir@grunge.pl> wrote in news:uupflr$5t1n$1@i2pn2.org:

    <snip>


    Hello fir,

    A method a bit more complex that you might try builds a table of byte
    strings as you scan them from the input. When a repeated byte string is
    found, the compressed output is a reference to the table you just built;
    otherwise, feed the input bytes through to the compressed image so that
    the expansion method can build the same dynamic table the encoder built.
    The table generally has a limit on the number of entries (usually a good
    size) and allows the table of byte strings to change dynamically as new
    patterns are read from the input.

    This is a well-known and documented compression/expansion algorithm.
    PKZIP and other engines use this as one of their compression methods.
    Look up its description if you need more details to figure out what you
    need to write.

    Expansion is the reverse. Read the source (now the compressed image) and
    build the compression table from its bytes. As encoded references to the
    compression table are read from the compressed image, output the source
    byte sequences. The output should be the same as what your encoder
    originally read.

    A good check on the final code is to compare the original input with the
    eventual output and make sure they agree exactly.
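    The post doesn't name the algorithm, but the scheme it describes is
    essentially the LZW family (used, for example, by early PKZIP's Shrink
    method and by Unix compress). A minimal sketch in C, assuming 16-bit
    output codes, a 4096-entry table, and a linear dictionary search chosen
    for clarity rather than speed:

```c
#include <string.h>
#include <assert.h>

#define MAXCODES 4096   /* bounded table, as described above */

/* each code >= 256 names an (earlier code + one byte) pair */
static int enc_prefix[MAXCODES], enc_byte[MAXCODES];
static int dec_prefix[MAXCODES], dec_byte[MAXCODES];

/* compress in[0..n) into 16-bit codes; returns the number of codes */
static int lzw_compress(const unsigned char *in, int n, unsigned short *out)
{
    int ncodes = 256, w = -1, nout = 0;   /* codes 0..255 = raw bytes */
    for (int i = 0; i < n; i++) {
        int c = in[i], j;
        if (w < 0) { w = c; continue; }
        for (j = 256; j < ncodes; j++)    /* is (w,c) already in the table? */
            if (enc_prefix[j] == w && enc_byte[j] == c) break;
        if (j < ncodes) { w = j; continue; }      /* extend current match */
        out[nout++] = (unsigned short)w;  /* emit the match, learn (w,c) */
        if (ncodes < MAXCODES) {
            enc_prefix[ncodes] = w; enc_byte[ncodes] = c; ncodes++;
        }
        w = c;
    }
    if (w >= 0) out[nout++] = (unsigned short)w;
    return nout;
}

/* write the byte string named by code into buf in REVERSE; return length */
static int expand(int code, unsigned char *buf)
{
    int len = 0;
    while (code >= 256) {
        buf[len++] = (unsigned char)dec_byte[code];
        code = dec_prefix[code];
    }
    buf[len++] = (unsigned char)code;
    return len;
}

/* rebuild the same table while decoding; returns bytes written to out */
static int lzw_decompress(const unsigned short *codes, int ncodes_in,
                          unsigned char *out)
{
    int ncodes = 256, nout = 0, prev = -1;
    unsigned char tmp[MAXCODES + 1];
    for (int i = 0; i < ncodes_in; i++) {
        int code = codes[i], len;
        if (code < ncodes) {
            len = expand(code, tmp);
        } else {               /* code not in table yet: the "KwKwK" case */
            len = expand(prev, tmp + 1);
            tmp[0] = tmp[len]; /* string = prev's string + its first byte */
            len++;
        }
        for (int k = len - 1; k >= 0; k--) out[nout++] = tmp[k];
        if (prev >= 0 && ncodes < MAXCODES) {  /* mirror the encoder */
            dec_prefix[ncodes] = prev;
            dec_byte[ncodes] = tmp[len - 1];   /* first byte of this string */
            ncodes++;
        }
        prev = code;
    }
    return nout;
}
```

    Compressing the classic "TOBEORNOTTOBEORTOBEORNOT" example turns 24
    bytes into 16 codes, and decompressing gives the original back, which is
    exactly the round-trip check suggested above.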

    Have fun,

    David

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Paul@3:633/280.2 to All on Sat Apr 6 12:18:01 2024
    On 4/5/2024 7:28 PM, David LaRue wrote:
    fir <fir@grunge.pl> wrote in news:uupflr$5t1n$1@i2pn2.org:

    <snip>


    Some people have written compression codes, purely for educational purposes. That's why I got a copy of this, some years ago. For fun.

    https://github.com/grtamayo/RLE

    gtrle35.c # Run Length Encoding, one of the simpler compressors
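    For a sense of how small such a compressor can be, here is a
    hypothetical minimal RLE in C (a sketch of the general technique, not
    the code in the linked repository): every run becomes a (count, byte)
    pair, so incompressible data can double in size, which is why practical
    RLE variants such as PackBits add a literal mode.

```c
#include <string.h>
#include <assert.h>

/* encode in[0..n) as (count, byte) pairs, count in 1..255;
   returns the number of bytes written to out (worst case 2*n) */
static int rle_encode(const unsigned char *in, int n, unsigned char *out)
{
    int o = 0;
    for (int i = 0; i < n; ) {
        int run = 1;
        while (i + run < n && in[i + run] == in[i] && run < 255)
            run++;
        out[o++] = (unsigned char)run;
        out[o++] = in[i];
        i += run;
    }
    return o;
}

/* expand the (count, byte) pairs back; returns bytes written */
static int rle_decode(const unsigned char *in, int n, unsigned char *out)
{
    int o = 0;
    for (int i = 0; i + 1 < n; i += 2)
        for (int k = 0; k < in[i]; k++)
            out[o++] = in[i + 1];
    return o;
}
```

    Round-tripping any input and comparing against the original, as
    suggested elsewhere in the thread, is the easiest correctness check.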

    Paul

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From fir@3:633/280.2 to All on Sat Apr 6 22:48:38 2024
    Paul wrote:
    On 4/5/2024 7:28 PM, David LaRue wrote:
    fir <fir@grunge.pl> wrote in news:uupflr$5t1n$1@i2pn2.org:

    <snip>


    Some people have written compression codes, purely for educational purposes. That's why I got a copy of this, some years ago. For fun.

    https://github.com/grtamayo/RLE

    gtrle35.c # Run Length Encoding, one of the simpler compressors

    Paul

    rle i think i could write by hand, but i would like something a bit more
    elaborate than this - not much, but somewhat

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: i2pn2 (i2pn.org) (3:633/280.2@fidonet)
  • From fir@3:633/280.2 to All on Sat Apr 6 22:53:14 2024
    David LaRue wrote:
    fir <fir@grunge.pl> wrote in news:uupflr$5t1n$1@i2pn2.org:

    <snip>

    this could be good but i don't quite understand it yet .. but eventually
    it could be good...

    i thought something about that: if RLE searches for repetitions of 1
    byte, then maybe after that search for repetitions of 2 bytes, then 3
    bytes, 4 bytes and so on.. then do some "report" of how many were found
    and then find a way to encode that

    need to think a bit, because if RLE only stores repetitions that are
    one after another, then this method should store repetitions that have
    various distances among them


    i also do not want to spend much time on this, 1-2 days eventually
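    The "repetitions at various distances" idea is close to what LZ77-style
    coders do: at each position, search the bytes already seen for the
    longest earlier occurrence, and emit a (distance, length) pair instead
    of a per-length report. A brute-force sketch of that search (a
    hypothetical helper, O(n^2), fine for small buffers):

```c
#include <assert.h>

/* find the longest earlier occurrence of the bytes starting at pos;
   returns the match length and sets *distance to how far back it starts.
   Matches may overlap the current position, as is usual for LZ77. */
static int longest_match(const unsigned char *buf, int pos, int n,
                         int *distance)
{
    int best_len = 0;
    *distance = 0;
    for (int start = 0; start < pos; start++) {
        int len = 0;
        while (pos + len < n && buf[start + len] == buf[pos + len])
            len++;
        if (len > best_len) {
            best_len = len;
            *distance = pos - start;
        }
    }
    return best_len;
}
```

    On "abcabcabc" at position 3 this finds a match of length 6 at distance
    3 - the overlapping match is what lets this scheme subsume RLE as the
    distance-1 special case.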

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: i2pn2 (i2pn.org) (3:633/280.2@fidonet)
  • From fir@3:633/280.2 to All on Sat Apr 6 23:16:51 2024
    fir wrote:
    David LaRue wrote:
    fir <fir@grunge.pl> wrote in news:uupflr$5t1n$1@i2pn2.org:

    <snip>


    though maybe i should start from RLE indeed.. in fact this shouldn't be
    so bad for some of my eventual funny needs - also it seems ok as a
    first step until something more elaborate

    if someone wants to talk about compression and how to code it, i would
    like to read it (as reading net articles on this may seem too hard for
    my old sick head, and posts are much easier to get into)

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: i2pn2 (i2pn.org) (3:633/280.2@fidonet)
  • From bart@3:633/280.2 to All on Sun Apr 7 00:28:01 2024
    On 06/04/2024 13:16, fir wrote:
    fir wrote:
    David LaRue wrote:
    fir <fir@grunge.pl> wrote in news:uupflr$5t1n$1@i2pn2.org:

    <snip>

    What sort of data are you compressing?

    If it is computer generated imagery with no noise or artefacts, then RLE
    will probably work well.

    If it is a noisy image captured from a camera then it'll be rubbish.

    Stuff like text files will likely be mildly compressed, but probably not enough to be worth the trouble.

    Decent compression is hard; you're not going to come up with anything in
    1-2 days that will give worthwhile results across a range of inputs.



    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From fir@3:633/280.2 to All on Sun Apr 7 01:02:12 2024
    bart wrote:
    On 06/04/2024 13:16, fir wrote:
    fir wrote:
    David LaRue wrote:
    fir <fir@grunge.pl> wrote in news:uupflr$5t1n$1@i2pn2.org:

    <snip>

    What sort of data are you compressing?

    <snip>

    the initial idea is that i want to add this as a method to my "bytes"
    microcontainer (where you can put or load anything)

    i just tried to think what usable methods i could add, and some
    pack/unpack method could be handy

    though i don't even know whether to do it today/tomorrow or maybe more
    in the future.. discussing something about it would be interesting imo




    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: i2pn2 (i2pn.org) (3:633/280.2@fidonet)
  • From David LaRue@3:633/280.2 to All on Sun Apr 7 01:39:11 2024
    fir <fir@grunge.pl> wrote in news:uurd34$8ga0$1@i2pn2.org:

    David LaRue wrote:
    fir <fir@grunge.pl> wrote in news:uupflr$5t1n$1@i2pn2.org:

    <snip>


    Hello fir,

    The method above can do the same thing for each repetition of a single
    byte. Alone, it has the advantage that compression and decompression are
    single-pass operations. It is similar to RLE but allows for more
    variations to be used along the way. The encoder also needs some table
    searching to determine best matches, new byte sequences, and improved
    length sequences. An ordered table of byte strings is typically used.
    I've also seen more advanced search methods that use branch tables to
    represent the encoded sequences, with the final entry being the value to
    put in the compressed output. They are very easy to search and give some
    improvements over the basic search design to make performance better.
    This can be done in a few days or less, depending on how much time you
    spend on deciding the table size and so on. I've also seen the search
    part as a coding requirement for a one-hour test, to see how the coder
    breaks the idea down.
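    One way to read the "branch tables" above is as a 256-way trie: each
    stored byte sequence is a path through per-byte branch tables, with the
    code to emit kept at the node where the sequence ends. A hypothetical
    minimal version in C (the names and layout are this sketch's, not from
    the post):

```c
#include <stdlib.h>
#include <assert.h>

/* one node: a 256-way branch table plus the code stored at this prefix
   (-1 if none). Lookup costs one table index per input byte. */
struct node {
    struct node *next[256];
    int code;
};

static struct node *node_new(void)
{
    struct node *n = calloc(1, sizeof *n);  /* no OOM handling: a sketch */
    n->code = -1;
    return n;
}

/* store a byte sequence and the code that will represent it */
static void trie_put(struct node *root, const unsigned char *s, int len,
                     int code)
{
    for (int i = 0; i < len; i++) {
        if (!root->next[s[i]])
            root->next[s[i]] = node_new();
        root = root->next[s[i]];
    }
    root->code = code;
}

/* walk as far as the input matches; return the code of the longest
   stored prefix (or -1) and set *matched to its length */
static int trie_longest(const struct node *root, const unsigned char *s,
                        int len, int *matched)
{
    int best = -1;
    *matched = 0;
    for (int i = 0; i < len && root->next[s[i]]; i++) {
        root = root->next[s[i]];
        if (root->code >= 0) { best = root->code; *matched = i + 1; }
    }
    return best;
}
```

    An encoder would call trie_longest at each input position, emit the
    returned code, and trie_put the matched sequence extended by one byte.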

    I like all the ideas you have.

    David

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From David LaRue@3:633/280.2 to All on Sun Apr 7 02:09:55 2024
    fir <fir@grunge.pl> wrote in news:uurefe$8i04$1@i2pn2.org:

    fir wrote:
    <snip>

    if someone want to talk on compression adn how to code it i could like
    to read it (as reading net articles on this may be seem to hard for my
    old sick head and posts are much are easier to get into it)

    There are several good books on search and compression methods that
    provide examples of the algorithms and discussions of their complexity
    and performance. I have a book on just algorithms that I bought years
    ago. One of the first chapters discussed the absurdity of an OS/app
    requiring separate account and password entries when only one is needed:
    the result is the same and takes one less entry. I found that book in a
    Barnes and Noble years ago and loved reading it and trying to understand
    the suggestions it made. A great place to look for such books is a
    college book store - the one stocking books used by second-year or
    higher students. I've not found much online EXCEPT for the papers and
    examples published by Niklaus Wirth or by Knuth. Knuth's publications
    are best if you don't mind reading for a while and then deciding what is
    best to code/do. Wirth had books on algorithms and for specific
    languages - much easier to read for a beginner.

    The Knuth discussions are well organized and usually available for free
    online. He covered an enormous variety of topics in his many
    books/papers. They have complete descriptions of why something was done
    and then discuss how to improve it. Again, this is deep material but
    well worth the effort once you've mastered a language or two. C gives
    the ideas needed to read and understand his comments about algorithms
    and language design. Very good stuff, IMHO.

    David

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From fir@3:633/280.2 to All on Sun Apr 7 05:57:13 2024
    David LaRue wrote:
    fir <fir@grunge.pl> wrote in news:uurd34$8ga0$1@i2pn2.org:

    David LaRue wrote:
    fir <fir@grunge.pl> wrote in news:uupflr$5t1n$1@i2pn2.org:

    <snip>

    okay though i will answer it later, as i decided to do it but not
    today yet

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: i2pn2 (i2pn.org) (3:633/280.2@fidonet)
  • From BGB@3:633/280.2 to All on Mon Apr 8 06:08:57 2024
    On 4/6/2024 11:41 AM, Scott Lurndal wrote:
    David LaRue <huey.dll@tampabay.rr.com> writes:
    fir <fir@grunge.pl> wrote in news:uurefe$8i04$1@i2pn2.org:

    fir wrote:
    <snip>

    if someone wants to talk about compression and how to code it i would
    like to read it (as reading net articles on this may seem too hard for
    my old sick head and posts are much easier to get into)

    There are several good books on search and compression methods that
    provide examples of complexity and discussions about the complexity and
    performance.

    https://en.wikipedia.org/wiki/Lossless_compression

    One of the earliest:

    https://en.wikipedia.org/wiki/Huffman_coding

    Former colleague wrote this one:

    https://en.wikipedia.org/wiki/Gzip

    This was once under patent to a former employer:

    https://en.wikipedia.org/wiki/LZ77_and_LZ78

    Also LZ4 is a moderately simple format, and works reasonably OK.

    I have another (similar category but different design) format that gets
    slightly better compression than LZ4 at a similar decode speed.



    I guess, if I were to try to quickly design another LZ format that was
    simple to decode, say:
    Tag is a 16 bit word (may fetch more bits though);
    Bit 0 = 0: Short Run (16-bit tag)
      (15:8): Match Distance (if non-zero)
      ( 7:5): Raw Length (0..7)
      ( 4:1): Match Length (4..19)
    Bits (1:0) = 01: Longer Run (32-bit tag)
      (31:16): Match Distance (up to 64K)
      (15:11): Raw Length (0..31)
      (10: 2): Match Length (4..515)

    With a few special cases:
      tag=0x0000: EOB
      if md==0:
        if(ml!=4)
          rl+=(ml-3)*X;  //longer run of literal bytes
                         //(X=8 for short tags, 32 for longer tags)

    So, main decoder might look something like (pseudo C):
    byte *cs, *ct;
    u32 tag;
    int rl, ml, md;

    cs=src;
    ct=dst;
    while(1)
    {
        tag=*(u32 *)cs;  //assume misaligned-safe and little endian
        rl=0; ml=0; md=0;
        if(!(tag&1))
        {
            //short run, 16-bit tag
            rl=(tag>>5)&7;
            ml=((tag>>1)&15)+4;
            md=(tag>>8)&255;
            cs+=2;
            if(!md)
            {
                if(!(tag&65535))  //check only the 16-bit tag, since
                    break;        //the u32 fetch may grab trailing bytes (EOB)
                if(ml!=4)
                    rl+=(ml-3)*8;  //longer run of literal bytes
            }
        }else if(!(tag&2))
        {
            //longer run, 32-bit tag
            rl=(tag>>11)&31;
            ml=((tag>>2)&511)+4;
            md=(tag>>16)&65535;
            cs+=4;
            if(!md)
            {
                if(ml!=4)
                    rl+=(ml-3)*32;
            }
        }else
        {
            //maybe more cases
        }

        if(rl)
        {
            memcpy(ct, cs, rl);   //copy raw literal bytes
            cs+=rl; ct+=rl;
        }

        if(md)
        {
            matchcopy(ct, ct-md, ml);   //copy match from earlier output
            ct+=ml;
        }
    }

    The matchcopy function would resemble memcpy, but have different
    semantics in the case of self-overlap.

    Say, a simple version (not tuned for performance):
    void matchcopy(byte *dst, byte *src, int len)
    {
        if((src+len)<dst)
        {
            //no overlap, plain memcpy is safe
            memcpy(dst, src, len);
            return;
        }
        //overlapping ranges: byte-at-a-time copy replicates the run
        while(len--)
            *dst++=*src++;
    }

    The match-copy operation tends to get bigger and more complicated if one
    tries to make it faster (byte-for-byte copies are rather slow, and the
    case of long RLE-like repeating runs isn't particularly rare).


    Designing an LZ encoder is more of a challenge though.
    Where typically, speed, compression, and code simplicity are all
    mutually opposed (a simple LZ encoder will be either slow or give
    crappy compression, though speed is a matter of perspective).
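    As a concrete starting point on the "simple but slow" corner of that
    trade-off, here's a sketch of a greedy encoder for just the 16-bit short-run
    tags of the format above (match distance 1..255, match length 4..19, at most
    7 literals per tag; long literal stretches flushed as md=0, ml=4 tags). The
    brute-force match search and the function name are mine:

```c
#include <stddef.h>
#include <string.h>

static size_t lz_encode_short(const unsigned char *src, size_t n,
                              unsigned char *dst)
{
    size_t ip = 0, op = 0, lit = 0;   /* lit = start of pending literals */
    while (ip < n) {
        /* find the longest match (>=4 bytes) at back-distance 1..255 */
        size_t best_len = 0, best_dist = 0;
        size_t max_back = ip < 255 ? ip : 255;
        for (size_t d = 1; d <= max_back; d++) {
            size_t len = 0;
            while (ip + len < n && len < 19 &&
                   src[ip + len - d] == src[ip + len])
                len++;
            if (len > best_len) { best_len = len; best_dist = d; }
        }
        if (best_len >= 4) {
            size_t rl = ip - lit;     /* 0..6 pending literals here */
            unsigned tag = (unsigned)((best_dist << 8) | (rl << 5) |
                                      ((best_len - 4) << 1));
            dst[op++] = (unsigned char)tag;         /* little endian tag */
            dst[op++] = (unsigned char)(tag >> 8);
            memcpy(dst + op, src + lit, rl); op += rl;
            ip += best_len; lit = ip;
        } else {
            ip++;
            if (ip - lit == 7) {      /* flush full literal run: md=0, ml=4 */
                unsigned tag = 7u << 5;
                dst[op++] = (unsigned char)tag;
                dst[op++] = (unsigned char)(tag >> 8);
                memcpy(dst + op, src + lit, 7); op += 7;
                lit = ip;
            }
        }
    }
    if (ip > lit) {                   /* flush any remaining literals */
        unsigned tag = (unsigned)((ip - lit) << 5);
        dst[op++] = (unsigned char)tag;
        dst[op++] = (unsigned char)(tag >> 8);
        memcpy(dst + op, src + lit, ip - lit); op += ip - lit;
    }
    dst[op++] = 0; dst[op++] = 0;     /* tag 0x0000 = EOB */
    return op;
}
```

    a real encoder would replace the inner loop with a hash-chain search and add
    lazy matching, but this produces tags the decoder above can consume.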


    Note that comparably, Deflate is a fairly complicated format.





    Huffman coding is effective, but as noted, comes at a cost in terms of
    both speed and code complexity (but is still much faster than something
    like range coding).

    Though, partial ways to make Huffman decoding a little faster (in the
    design of a compressor):
    Limit the maximum symbol length to around 12 or 13 bits, which allows
    lookup tables to fit in the L1 cache of typical CPUs;
    Decode blocks of symbols in advance, then fetch symbols as-needed from
    these blocks (this allows overlapping the Huffman symbol decoding,
    making more effective use of the CPU pipeline).
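    The length-limit trick works because, with codes capped at 12 bits, a single
    (1<<12)-entry table maps every possible 12-bit window to (symbol, length), so
    each symbol costs one lookup instead of a bit-by-bit tree walk. A minimal
    sketch with a tiny made-up code (names and struct layout are mine; a real
    table would be built from canonical code lengths):

```c
#include <stdint.h>

#define MAXLEN 12

typedef struct { uint16_t sym; uint8_t len; } HuffEntry;

/* Fill every table slot whose top 'len' bits equal 'code'; the slots
 * below those bits don't matter, so all windows starting with the
 * code map to the same entry. */
static void huff_fill(HuffEntry *tab, unsigned code, unsigned len, unsigned sym)
{
    unsigned lo = code << (MAXLEN - len);
    unsigned hi = lo + (1u << (MAXLEN - len));
    for (unsigned i = lo; i < hi; i++) {
        tab[i].sym = (uint16_t)sym;
        tab[i].len = (uint8_t)len;
    }
}

/* Decode one symbol from an MSB-first 64-bit window; adds the consumed
 * bit count to *pos (valid while *pos <= 64 - MAXLEN). */
static unsigned huff_decode(const HuffEntry *tab, uint64_t bits, unsigned *pos)
{
    unsigned win = (unsigned)((bits >> (64 - MAXLEN - *pos)) &
                              ((1u << MAXLEN) - 1));
    *pos += tab[win].len;
    return tab[win].sym;
}
```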

    These designs had sort of ended up resembling an entropy-coded version
    of LZ4 (usually with Huffman-coded blocks for tags, literal bytes, and
    length/distance prefixes; using a scheme similar to Deflate's distance
    coding for longer match or raw lengths, as well as for match distance).
    Note that after decoding the symbol blocks, the bitstream would only
    contain "extra bits" for the distance coding.

    But that said, on a modern PC style CPU, it is rather difficult to get
    Huffman coded designs much over about 600 or 700 MB/sec for decode
    speed, even with these tricks (if one skips an entropy-coding stage,
    typically several GB/sec is possible for LZ decoding).



    Though, one possibility could be a pseudo-entropy scheme, say:
    Sort all the symbols into descending frequency order;
    Store the 128 most common directly as a table of 128 bytes (skip
    uncommon bytes).
    Encode the symbol blocks with bytes encoding table indices, say:
    00..78: Symbol Pair (two indices in 0..10, packed as i1*11+i0)
    7F: Escaped Symbol (encoded as a byte)
    80..FF: Direct Index (0..127)

    With the block being encoded as a blob of raw bytes if the
    pseudo-entropy scheme didn't save enough space (say, if there isn't a
    significant probability skew). The idea being that this would be faster
    to decode than blobs of Huffman-coded symbols.


    Then, say, block is encoded as a 16-bit prefix, say:
    0000..3FFF: Up to 16K, raw byte symbols
    4000..7FFF: Up to 16K, pseudo-entropy.
    8000..FFFF: Escape other block types.


    Though, efficiently dealing with the dual-symbol case would be harder,
    most obvious answers being one of:
    Lookup index pair in a lookup table;
    itmp=indexpairlut[relidx];
    i0=itmp&255;
    i1=itmp>>8;
    Or, try to split it up directly.
    Naive:
    i1=relidx/11; //can be turned into multiply by reciprocal
    i1=(relidx*0x175)>>12; //alternate
    i0=relidx-(i1*11);
    Though, unclear if this would be fast enough to be worthwhile.
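    Whether it is fast enough or not, the reciprocal form can at least be
    checked exhaustively, since only the 121 valid pair indices matter; a tiny
    verification (the function name is mine):

```c
/* Check the multiply-by-reciprocal trick from the text: for the 121
 * symbol-pair indices 0..120 over an 11-entry table,
 * (relidx * 0x175) >> 12 must equal relidx / 11, and the derived
 * i0 = relidx - i1*11 must equal relidx % 11. */
static int reciprocal_div11_ok(void)
{
    for (int relidx = 0; relidx <= 120; relidx++) {
        int i1 = (relidx * 0x175) >> 12;   /* approximate divide by 11 */
        int i0 = relidx - (i1 * 11);
        if (i1 != relidx / 11 || i0 != relidx % 11)
            return 0;
    }
    return 1;
}
```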


    Could maybe also abandon the use of a bitstream for the extra bits and
    use nybbles or bytes instead (where a nybble or byte stream is faster
    than a bitstream). Decided not to go into the specifics of
    length/distance coding here.
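    A nybble stream in that style might look like this minimal sketch (names
    are mine); low nybble of each byte first, multi-nybble values assembled
    little-endian. Part of the speed win is that there is no bit-position
    shifting state beyond a simple counter:

```c
#include <stddef.h>

typedef struct {
    const unsigned char *buf;
    size_t pos;   /* nybble index, not byte index */
} NybStream;

/* Read one 4-bit unit. */
static unsigned nyb_read(NybStream *s)
{
    unsigned char b = s->buf[s->pos >> 1];
    unsigned v = (s->pos & 1) ? (b >> 4) : (b & 15);
    s->pos++;
    return v;
}

/* Read an n-nybble little-endian value (e.g. extra distance bits). */
static unsigned nyb_read_n(NybStream *s, int n)
{
    unsigned v = 0;
    for (int i = 0; i < n; i++)
        v |= nyb_read(s) << (4 * i);
    return v;
}
```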


    But, say, in this case, each LZ coded block would consist of 4 symbol
    blocks:
    Tag Symbols
    Literal Symbols
    Distance Symbols
    Distance Extra (probably always raw bytes)


    TBD if it could be performance-competitive with the purely
    byte-oriented alternatives though, which would still have the advantage
    of being simpler in any case.


    Well, among any number of possibilities.
    Could probably fiddle with it...



    But, all this is getting a bit more advanced...



    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)