• "White House to Developers: Using C or C++ Invites Cybersecurity Risks

    From Lynn McGuire@3:633/280.2 to All on Sun Mar 3 10:13:56 2024
    Subject: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    "White House to Developers: Using C or C++ Invites Cybersecurity Risks"

    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe programming languages. The tech industry sees their point, but it won't be easy."

    No. The feddies want to regulate software development very much. They
    have been talking about it for at least 20 years now. This is a very
    bad thing.

    Lynn

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Sun Mar 3 11:05:28 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Sat, 2 Mar 2024 17:13:56 -0600, Lynn McGuire wrote:

    The feddies want to regulate software development very much.

    Given the high occurrence of embarrassing mistakes companies have been
    making with their code, and continue to make, it’s quite clear they’re not capable of regulating this issue themselves.

    I wouldn’t worry about companies tripping over and hurting themselves, but when the consequences are security leaks, not of information belonging to those companies, but to their innocent customers/users who are often
    unaware that those companies even had that information, then it’s quite clear that Government has to step in.

    Because if they don’t, then who will?

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From John McCue@3:633/280.2 to All on Sun Mar 3 13:10:03 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity Risks"
    Reply-To: jmclnx@SPAMisBADgmail.com

    trimmed followups to comp.lang.c

    In comp.lang.c Lynn McGuire <lynnmcguire5@gmail.com> wrote:
    <snip>

    "The Biden administration backs a switch to more memory-safe programming languages. The tech industry sees their point, but it won't be easy."

    No. The feddies want to regulate software development very much. They
    have been talking about it for at least 20 years now. This is a very
    bad thing.

    Well, to be fair, the feds' regulations in the 60s made COBOL and
    FORTRAN very popular, plus POSIX later on. All they did was
    say "we will not buy anything unless ... rules".

    From "The C Programming Language Quotes by Brian W. Kernighan".

    Nevertheless, C retains the basic philosophy that
    programmers know what they are doing; it only requires
    that they state their intentions explicitly.

    If programmers were given time to test and develop, many
    issues would not exist. Anyone who has ever worked for a
    large company knows the pressure that exists to get things
    done quickly instead of right. So all these issues I blame
    on management.

    How many times have we heard "ship it now, you can fix later"
    and "later" never comes. :)

    Rust will never fix policy issues, just different and maybe worse
    issues will happen.

    Lynn

    --
    [t]csh(1) - "An elegant shell, for a more... civilized age."
    - Paraphrasing Star Wars

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Sun Mar 3 14:30:17 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Sun, 3 Mar 2024 02:10:03 -0000 (UTC), John McCue wrote:

    Well to be fair, the feds regulations in the 60s made COBOL and FORTRAN
    very popular, plus POSIX later on.

    The US Government purchasing rules on POSIX were sufficiently sketchy that Microsoft was able to satisfy them easily with Windows NT, while supplying
    a “POSIX” subsystem that was essentially unusable.

    And then Microsoft went on to render POSIX largely irrelevant by eating
    all the proprietary “Unix” vendors alive.

    Nowadays, POSIX (and *nix generally) is undergoing a resurgence because of Linux and Open Source. Developers are discovering that the Linux ecosystem offers a much more productive development environment for a code-sharing, code-reusing, Web-centric world than anything Microsoft can offer.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Blue-Maned_Hawk@3:633/280.2 to All on Sun Mar 3 19:52:03 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    Any attempt to displace C will require total replacement of the modern computing ecosystem. Frankly, i'd be fine with that if pulled off well,
    but i wouldn't be fine with a half-baked solution nor trying to force out
    C without thinking about the whole rest of everything.



    --
    Blue-Maned_Hawk│shortens to Hawk│/ blu.mɛin.dʰak/ │he/him/his/himself/Mr. blue-maned_hawk.srht.site
    Mac and Cheese, Horrifying Quality, Prepared by Barack Obama

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Blue-Maned_Hawk@3:633/280.2 to All on Sun Mar 3 19:54:36 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    Lawrence D'Oliveiro wrote:

    Nowadays, POSIX (and *nix generally) is undergoing a resurgence because
    of Linux and Open Source. Developers are discovering that the Linux
    ecosystem offers a much more productive development environment for a code-sharing, code-reusing, Web-centric world than anything Microsoft
    can offer.

    I do not want to live in a web-centric world. I would much rather see
    other, better uses of the internet become widespread.



    --
    Blue-Maned_Hawk│shortens to Hawk│/ blu.mɛin.dʰak/ │he/him/his/himself/Mr. blue-maned_hawk.srht.site
    Special thanks to misinformed hipsters!

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Michael S@3:633/280.2 to All on Sun Mar 3 20:10:22 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites
    Cybersecurity Risks"

    On Sat, 2 Mar 2024 17:13:56 -0600
    Lynn McGuire <lynnmcguire5@gmail.com> wrote:

    They have been talking about it for at least 20 years now.

    More like 48-49 years. https://en.wikipedia.org/wiki/High_Order_Language_Working_Group


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From David Brown@3:633/280.2 to All on Sun Mar 3 22:01:57 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 03/03/2024 00:13, Lynn McGuire wrote:
    "White House to Developers: Using C or C++ Invites Cybersecurity Risks"

    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe programming languages. The tech industry sees their point, but it won't be easy."

    No. The feddies want to regulate software development very much. They
    have been talking about it for at least 20 years now. This is a very
    bad thing.

    Lynn

    It's the wrong solution to the wrong problem.

    It is not languages like C and C++ that are "unsafe". It is the
    programmers that write the code for them. As long as the people
    programming in Rust or other modern languages are the more capable and qualified developers - the ones who think about memory safety, correct
    code, testing, and quality software development - then code written in
    Rust will be better quality and safer than the average C, C++, Java and
    C# code.

    But if it gets popular enough for schools and colleges to teach Rust programming courses to the masses, and it gets used by developers who are
    paid per KLoC, given responsibilities well beyond their abilities and experience, led by incompetent managers, untrained in good development practices and pushed to impossible deadlines, then the average quality
    of programs in Rust will drop to that of average C and C++ code.

    Good languages and good tools help, but they are not the root cause of
    poor quality software in the world.


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Janis Papanagnou@3:633/280.2 to All on Mon Mar 4 02:03:10 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 03.03.2024 12:01, David Brown wrote:
    On 03/03/2024 00:13, Lynn McGuire wrote:
    "White House to Developers: Using C or C++ Invites Cybersecurity Risks"

    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe
    programming languages. [...]"
    [...]

    It's the wrong solution to the wrong problem.

    It is not languages like C and C++ that are "unsafe". It is the
    programmers that write the code for them. [...]

    [...]

    Good languages and good tools help, but they are not the root cause of
    poor quality software in the world.

    I agree about the necessity of having good programmers. But a lot more
    factors are important, and there are factors that influence programmers. Languages may have a design that makes it possible to produce safer
    software, or to be error prone and require a lot more attention from
    the programmers (and also from management). Tools may help a bit to
    work around the problems that languages inherently add. Good project
    management may also help to increase software quality. But it's much
    more costly when using inferior (or unsuited) languages.

    Janis


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Mon Mar 4 02:31:15 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity Risks"
    Reply-To: slp53@pacbell.net

    Lynn McGuire <lynnmcguire5@gmail.com> writes:
    "White House to Developers: Using C or C++ Invites Cybersecurity Risks"

    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe programming >languages. The tech industry sees their point, but it won't be easy."

    No. The feddies want to regulate software development very much.

    You've been reading far too much apocalyptic fiction and seeing the
    world through trump-colored glasses. Neither reflects reality.


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Kaz Kylheku@3:633/280.2 to All on Mon Mar 4 05:18:26 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites
    Cybersecurity Risks"

    On 2024-03-03, David Brown <david.brown@hesbynett.no> wrote:
    On 03/03/2024 00:13, Lynn McGuire wrote:
    "White House to Developers: Using C or C++ Invites Cybersecurity Risks"

    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe programming
    languages. The tech industry sees their point, but it won't be easy."

    No. The feddies want to regulate software development very much. They
    have been talking about it for at least 20 years now. This is a very
    bad thing.

    Lynn

    It's the wrong solution to the wrong problem.

    It is not languages like C and C++ that are "unsafe". It is the
    programmers that write the code for them. As long as the people
    programming in Rust or other modern languages are the more capable and qualified developers - the ones who think about memory safety, correct
    code, testing, and quality software development - then code written in
    Rust will be better quality and safer than the average C, C++, Java and
    C# code.

    Programmers who think about safety, correctness and quality and all that
    have way fewer diagnostics and more footguns if they are coding in C
    compared to Rust.

    I think you can't just wave away the characteristics of Rust as making
    no difference in this regard.

    But if it gets popular enough for schools and colleges to teach Rust programming course to the masses, and it gets used by developers who are paid per KLoC, given responsibilities well beyond their abilities and experience, lead by incompetent managers, untrained in good development practices and pushed to impossible deadlines, then the average quality
    of programs in Rust will drop to that of average C and C++ code.

    The rhetoric you hear from Rust people about this is that coders taking
    a safety shortcut to make something work have to explicitly ask for that
    in Rust. It leaves a visible trace. If something goes wrong because of
    an unsafe block, you can trace that to the commit which added it.

    The rhetoric all sounds good.

    However, like you, I also believe it boils down to people, in a
    somewhat different way. To use Rust productively, you have to be one of
    the rare idiot savants who are smart enough to use it *and* numb to all
    the inconveniences.

    The reason the average programmer won't make any safety
    boo-boos using Rust is that the average programmer either isn't smart
    enough to use it at all, or else doesn't want to put up with the fuss:
    they will opt for some safe language which is easy to use.

    Rust's problem is that we have safe languages in which you can almost
    crank out working code with your eyes closed. (Or if not working,
    then at least code in which the only uncaught bugs are your logic bugs,
    not some undefined behavior from integer overflow or array out of
    bounds.)

    This is why Rust people are desperately pitching Rust as an alternative
    for C and whatnot, and showcasing it being used in the kernel and
    whatnot.

    Trying to be both safe and efficient to be able to serve as a "C
    replacement" is a clumsy hedge that makes Rust an awkward language.

    You know the parable about the fox that tries to chase two rabbits.

    The alternative to Rust in application development is pretty much any convenient, "easy" high level language, plus a little bit of C.
    You can get a small quantity of C right far more easily than a large
    quantity of C. It's almost immaterial.

    An important aspect of Rust is the ownership-based memory management.

    The problem is, the "garbage collection is bad" era is /long/ behind us.

    Scoped ownership is a half-baked solution to the object lifetime
    problem, that gets in the way of the programmer and isn't appropriate
    for the vast majority of software tasks.

    Embedded systems often need custom memory management, not something that
    the language imposes. C has malloc, yet even that gets disused in favor
    of something else.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Mon Mar 4 07:10:26 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Sun, 3 Mar 2024 12:01:57 +0100, David Brown wrote:

    It is not languages like C and C++ that are "unsafe".

    Some empirical evidence from Google <https://security.googleblog.com/2022/12/memory-safe-languages-in-android-13.html>
    shows a reduction in memory-safety errors in switching from C/C++ to Rust.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Mon Mar 4 07:11:14 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Sun, 3 Mar 2024 08:54:36 -0000 (UTC), Blue-Maned_Hawk wrote:

    Lawrence D'Oliveiro wrote:

    Nowadays, POSIX (and *nix generally) is undergoing a resurgence because
    of Linux and Open Source. Developers are discovering that the Linux
    ecosystem offers a much more productive development environment for a
    code-sharing, code-reusing, Web-centric world than anything Microsoft
    can offer.

    I do not want to live in a web-centric world.

    You already do.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From David Brown@3:633/280.2 to All on Mon Mar 4 07:23:56 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 03/03/2024 19:18, Kaz Kylheku wrote:
    On 2024-03-03, David Brown <david.brown@hesbynett.no> wrote:
    On 03/03/2024 00:13, Lynn McGuire wrote:
    "White House to Developers: Using C or C++ Invites Cybersecurity Risks"

    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe programming >>> languages. The tech industry sees their point, but it won't be easy."

    No. The feddies want to regulate software development very much. They
    have been talking about it for at least 20 years now. This is a very
    bad thing.

    Lynn

    It's the wrong solution to the wrong problem.

    It is not languages like C and C++ that are "unsafe". It is the
    programmers that write the code for them. As long as the people
    programming in Rust or other modern languages are the more capable and
    qualified developers - the ones who think about memory safety, correct
    code, testing, and quality software development - then code written in
    Rust will be better quality and safer than the average C, C++, Java and
    C# code.

    Programmers who think about safety, correctness and quality and all that
    have way fewer diagnostics and more footguns if they are coding in C
    compared to Rust.

    I think, you can't just wave away the characteristics of Rust as making
    no difference in this regard.

    I did not.

    I said that the /root/ problem is not the language, but the programmers
    and the way they work.

    Of course some languages make some things harder and other things
    easier. And even the most careful programmers will occasionally make mistakes. So having a language that helps reduce the risk of some kinds
    of errors is a helpful thing.

    But consider this. When programming in modern C++, you can be risk-free
    from buffer overruns and most kinds of memory leak - use container
    classes, string classes, and the like, rather than C-style arrays and malloc/free or new/delete. You can use the C++ coding guideline
    libraries to mark ownership of pointers. You can use compiler
    sanitizers to catch many kinds of undefined behaviour. You can use all
    sorts of static analysis tools, from free to very costly, to help find problems. And yet there are armies of programmers writing bad C++ code.
    PHP and Javascript have automatic memory management and garbage
    collection eliminating many of the possible problems seen in C and C++
    code, yet armies of programmers write PHP and Javascript code full of
    bugs and security faults.
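
    A minimal sketch of the container-versus-raw-array point above (the
    function names are invented for illustration, and a C++17 compiler is
    assumed; this is not anyone's production code): the raw buffer silently
    overruns, while the std::string/std::vector version owns its storage and
    .at() throws on an out-of-range index. Building with
    -fsanitize=address,undefined catches many of the remaining mistakes at
    run time.

        #include <cstddef>
        #include <iostream>
        #include <string>
        #include <vector>

        // C-style: nothing stops this walking past the end of buf.
        void risky_copy(const char *src) {
            char buf[8];
            std::size_t i = 0;
            while (src[i] != '\0') {     // no bounds check: overrun if src is long
                buf[i] = src[i];
                ++i;
            }
            buf[i] = '\0';
            std::cout << buf << '\n';
        }

        // Modern C++: the containers own their storage and know their size.
        void safer_copy(const std::string &src) {
            std::string buf = src;           // grows as needed, no overrun
            std::vector<int> values(4, 0);   // four ints, zero-initialised
            values.at(2) = 42;               // .at() throws std::out_of_range if misused
            std::cout << buf << ' ' << values.at(2) << '\n';
        }

        int main() {
            safer_copy("hello, world");
            // risky_copy("this string is far too long for buf");  // would overrun
            return 0;
        }

        // Build (GCC/Clang): g++ -std=c++17 -fsanitize=address,undefined example.cpp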

    Better languages, better libraries, and better tools certainly help.
    There are not many tasks for which C is the best choice of language.
    But none of that will deal with the root of the problem. Good
    programmers, with good training, in good development departments with
    good managers and good resources, will write correct code more
    efficiently in a better language, but they can write correct code in
    pretty much /any/ language. Similarly, the bulk of programmers will
    write bad code in any language.


    But if it gets popular enough for schools and colleges to teach Rust
    programming course to the masses, and it gets used by developers who are
    paid per KLoC, given responsibilities well beyond their abilities and
    experience, lead by incompetent managers, untrained in good development
    practices and pushed to impossible deadlines, then the average quality
    of programs in Rust will drop to that of average C and C++ code.

    The rhetoric you hear from Rust people about this is that coders taking
    a safety shortcut to make something work have to explicitly ask for that
    in Rust. It leaves a visible trace. If something goes wrong because of
    an unsafe block, you can trace that to the commit which added it.

    The rhetoric all sounds good.

    You can't trace the commit for programmers who don't use version control software - and that is a /lot/ of them. Leaving visible traces does not
    help when no one else looks at the code. Shortcuts are taken because
    the sales people need the code by tomorrow morning, and there are only
    so many hours in the night to get it working.

    Rust makes it possible to have some safety checks for a few things that
    are much harder to do in C++. It does not stop people writing bad code
    using bad development practices.


    However, like you, I also believe it boils down to people, in a
    somewhat different way. To use Rust productively, you have to be one of
    the rare idiot savants who are smart enough to use it *and* numb to all
    the inconveniences.

    And you have to have managers who are smart enough to believe it when
    their programmers say they need to train in a new language, re-write
    lots of existing code, and accept longer development times as a tradeoff
    for fewer bugs in shipped code.

    (I personally have a very good manager, but I know a great many
    programmers do not.)


    The reason the average programmer won't make any safety
    boo-boos using Rust is that the average programmer either isn't smart
    enough to use it at all, or else doesn't want to put up with the fuss:
    they will opt for some safe language which is easy to use.

    Rust's problem is that we have safe languages in which you can almost
    crank out working code with your eyes closed. (Or if not working,
    then at least code in which the only uncaught bugs are your logic bugs,
    not some undefined behavior from integer overflow or array out of
    bounds.)

    This is why Rust people are desperately pitching Rust as an alternative
    for C and whatnot, and showcasing it being used in the kernel and
    whatnot.


    I personally think it is madness to have Rust in a project like the
    Linux kernel. I used to see C++ as a rapidly changing language with its
    3 year cycle - Rust seems to have a 3 week cycle for updates, with no
    formal standardisation and "work in progress" attitude. That's fine for
    a new language under development, but /not/ something you want for a
    project that spans decades.

    Trying to be both safe and efficient to be able to serve as a "C
    replacement" is a clumsy hedge that makes Rust an awkward language.

    You know the parable about the fox that tries to chase two rabbits.

    The alternative to Rust in application development is pretty much any convenient, "easy" high level language, plus a little bit of C.
    You can get a small quantity of C right far more easily than a large
    quantity of C. It's almost immaterial.


    There are lots of alternatives to Rust for application development. But
    in general, higher level languages mean you do less manual work, and
    write fewer lines of code for the same amount of functionality. And
    that means a lower risk of errors.

    An important aspect of Rust is the ownership-based memory management.

    The problem is, the "garbage collection is bad" era is /long/ behind us.

    Scoped ownership is a half-baked solution to the object lifetime
    problem, that gets in the way of the programmer and isn't appropriate
    for the vast majority of software tasks.

    Embedded systems often need custom memory management, not something that
    the language imposes. C has malloc, yet even that gets disused in favor
    of something else.


    For safe embedded systems, you don't want memory management at all.
    Avoiding dynamic memory is an important aspect of safety-critical
    embedded development.


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Andreas Kempe@3:633/280.2 to All on Mon Mar 4 07:57:54 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites
    Cybersecurity Risks"

    Den 2024-03-03 skrev Lawrence D'Oliveiro <ldo@nz.invalid>:
    On Sun, 3 Mar 2024 12:01:57 +0100, David Brown wrote:

    It is not languages like C and C++ that are "unsafe".

    Some empirical evidence from Google
    <https://security.googleblog.com/2022/12/memory-safe-languages-in-android-13.html>
    shows a reduction in memory-safety errors in switching from C/C++ to Rust.

    I'm not surprised. I think it is pretty self-evident that a language
    that is designed to reduce memory errors, if correctly designed,
    will do just that.

    It is very easy to go on about good and bad programmers, but that
    really doesn't matter since statistics from the real world show that
    memory errors are common and cause serious vulnerabilities.

    Considering how hostile today's interconnected world has become with
    security getting a higher and higher priority, I think we are bound to
    see a decline of memory unsafe languages. C++ can be written to be
    memory safe, but it is also very easy to write C++ that is not memory
    safe. If C++ is to stay competitive, I think the C++ committee needs
    to have a good and long think about what can be done to remedy these
    issues.

    If nothing is done, I could see initiatives like CHERI, which introduce
    hardware-based memory safety, being a saviour. If the languages that enforce
    memory safety through their type system are more difficult to use, C++
    might be preferable if the memory safety is provided more or less
    transparently through the hardware. Although a software solution is
    probably seen as easier and cheaper than a hardware one. As the sales department at my last job informed me: "We can sell hardware, everyone
    likes a shiny box! Software is supposed to be included for free!"

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: Lysator ACS (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Mon Mar 4 08:42:07 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/2/2024 4:05 PM, Lawrence D'Oliveiro wrote:
    On Sat, 2 Mar 2024 17:13:56 -0600, Lynn McGuire wrote:

    The feddies want to regulate software development very much.

    Given the high occurrence of embarrassing mistakes companies have been
    making with their code, and continue to make, it’s quite clear they’re not
    capable of regulating this issue themselves.

    Oh my. C/C++ compilers are banned world wide. They even have reeducation
    camps that they will confine you to. You know, to learn the one true
    way... If you make a bug using the one true way, you risk a firing
    squad? lol. ;^)



    I wouldn’t worry about companies tripping over and hurting themselves, but when the consequences are security leaks, not of information belonging to those companies, but to their innocent customers/users who are often
    unaware that those companies even had that information, then it’s quite clear that Government has to step in.

    Because if they don’t, then who will?

    lol.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Mon Mar 4 08:48:37 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/3/2024 3:01 AM, David Brown wrote:
    On 03/03/2024 00:13, Lynn McGuire wrote:
    "White House to Developers: Using C or C++ Invites Cybersecurity Risks"

    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe
    programming languages. The tech industry sees their point, but it
    won't be easy."

    No. The feddies want to regulate software development very much.
    They have been talking about it for at least 20 years now. This is a
    very bad thing.

    Lynn

    It's the wrong solution to the wrong problem.

    It is not languages like C and C++ that are "unsafe". It is the
    programmers that write the code for them. As long as the people
    programming in Rust or other modern languages are the more capable and qualified developers - the ones who think about memory safety, correct
    code, testing, and quality software development - then code written in
    Rust will be better quality and safer than the average C, C++, Java and
    C# code.

    Then we will hear about how human programmers cannot be trusted... AI is there. No programmers needed now. Jesting, of course, but I have heard
    some people starting to think that way.



    But if it gets popular enough for schools and colleges to teach Rust programming course to the masses, and it gets used by developers who are paid per KLoC, given responsibilities well beyond their abilities and experience, lead by incompetent managers, untrained in good development practices and pushed to impossible deadlines, then the average quality
    of programs in Rust will drop to that of average C and C++ code.

    Good languages and good tools help, but they are not the root cause of
    poor quality software in the world.




    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Mon Mar 4 08:49:15 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/3/2024 12:11 PM, Lawrence D'Oliveiro wrote:
    On Sun, 3 Mar 2024 08:54:36 -0000 (UTC), Blue-Maned_Hawk wrote:

    Lawrence D'Oliveiro wrote:

    Nowadays, POSIX (and *nix generally) is undergoing a resurgence because
    of Linux and Open Source. Developers are discovering that the Linux
    ecosystem offers a much more productive development environment for a
    code-sharing, code-reusing, Web-centric world than anything Microsoft
    can offer.

    I do not want to live in a web-centric world.

    You already do.

    You are not an ai, right? ;^)

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Mon Mar 4 09:01:54 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/3/2024 12:23 PM, David Brown wrote:
    On 03/03/2024 19:18, Kaz Kylheku wrote:
    On 2024-03-03, David Brown <david.brown@hesbynett.no> wrote:
    On 03/03/2024 00:13, Lynn McGuire wrote:
    "White House to Developers: Using C or C++ Invites Cybersecurity Risks" >>>>
    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe
    programming
    languages. The tech industry sees their point, but it won't be easy."

    No. The feddies want to regulate software development very much. They have been talking about it for at least 20 years now. This is a very
    bad thing.

    Lynn

    It's the wrong solution to the wrong problem.

    It is not languages like C and C++ that are "unsafe". It is the
    programmers that write the code for them. As long as the people
    programming in Rust or other modern languages are the more capable and
    qualified developers - the ones who think about memory safety, correct
    code, testing, and quality software development - then code written in
    Rust will be better quality and safer than the average C, C++, Java and
    C# code.

    Programmers who think about safety, correctness and quality and all that
    have way fewer diagnostics and more footguns if they are coding in C
    compared to Rust.

    I think, you can't just wave away the characteristics of Rust as making
    no difference in this regard.

    I did not.

    I said that the /root/ problem is not the language, but the programmers
    and the way they work.

    Of course some languages make some things harder and other things
    easier. And even the most careful programmers will occasionally make mistakes. So having a language that helps reduce the risk of some kinds
    of errors is a helpful thing.

    But consider this. When programming in modern C++, you can be risk-free from buffer overruns and most kinds of memory leak - use container
    classes, string classes, and the like, rather than C-style arrays and malloc/free or new/delete. You can use the C++ coding guideline
    libraries to mark ownership of pointers. You can use compiler
    sanitizers to catch many kinds undefined behaviour. You can use all
    sorts of static analysis tools, from free to very costly, to help find problems. And yet there are armies of programmers writing bad C++ code.
    PHP and Javascript have automatic memory management and garbage
    collection eliminating many of the possible problems seen in C and C++
    code, yet armies of programmers write PHP and Javascript code full of
    bugs and security faults.

    Better languages, better libraries, and better tools certainly help.
    There are not many tasks for which C is the best choice of language. But none of that will deal with the root of the problem. Good programmers,
    with good training, in good development departments with good managers
    and good resources, will write correct code more efficiently in a better language, but they can write correct code in pretty much /any/
    language. Similarly, the bulk of programmers will write bad code in any language.


    But if it gets popular enough for schools and colleges to teach Rust
    programming course to the masses, and it gets used by developers who are paid per KLoC, given responsibilities well beyond their abilities and
    experience, lead by incompetent managers, untrained in good development
    practices and pushed to impossible deadlines, then the average quality
    of programs in Rust will drop to that of average C and C++ code.

    The rhetoric you hear from Rust people about this is that coders taking
    a safety shortcut to make something work have to explicitly ask for that
    in Rust. It leaves a visible trace. If something goes wrong because of
    an unsafe block, you can trace that to the commit which added it.

    The rhetoric all sounds good.

    You can't trace the commit for programmers who don't use version control software - and that is a /lot/ of them. Leaving visible traces does not help when no one else looks at the code. Shortcuts are taken because
    the sales people need the code by tomorrow morning, and there are only
    so many hours in the night to get it working.

    Rust makes it possible to have some safety checks for a few things that
    are much harder to do in C++. It does not stop people writing bad code using bad development practices.


    However, like you, I also believe it boils down to people, in a
    somewhat different way. To use Rust productively, you have to be one of
    the rare idiot savants who are smart enough to use it *and* numb to all
    the inconveniences.

    And you have to have managers who are smart enough to believe it when
    their programmers say they need to train in a new language, re-write
    lots of existing code, and accept longer development times as a tradeoff
    for fewer bugs in shipped code.

    (I personally have a very good manager, but I know a great many
    programmers do not.)


    The reason the average programmer won't make any safety
    boo-boos using Rust is that the average programmer either isn't smart
    enough to use it at all, or else doesn't want to put up with the fuss:
    they will opt for some safe language which is easy to use.

    Rust's problem is that we have safe languages in which you can almost
    crank out working code with your eyes closed. (Or if not working,
    then at least code in which the only uncaught bugs are your logic bugs,
    not some undefined behavior from integer overflow or array out of
    bounds.)

    This is why Rust people are desperately pitching Rust as an alternative
    for C and whatnot, and showcasing it being used in the kernel and
    whatnot.


    I personally think it is madness to have Rust in a project like the
    Linux kernel. I used to see C++ as a rapidly changing language with its
    3 year cycle - Rust seems to have a 3 week cycle for updates, with no
    formal standardisation and "work in progress" attitude. That's fine for
    a new language under development, but /not/ something you want for a
    project that spans decades.

    Trying to be both safe and efficient to be able to serve as a "C
    replacement" is a clumsy hedge that makes Rust an awkward language.

    You know the parable about the fox that tries to chase two rabbits.

    The alternative to Rust in application development is pretty much any
    convenient, "easy" high level language, plus a little bit of C.
    You can get a small quantity of C right far more easily than a large
    quantity of C. It's almost immaterial.


    There are lots of alternatives to Rust for application development. But
    in general, higher level languages mean you do less manual work, and
    write fewer lines of code for the same amount of functionality. And
    that means a lower risk of errors.

    An important aspect of Rust is the ownership-based memory management.

    The problem is, the "garbage collection is bad" era is /long/ behind us.

    Scoped ownership is a half-baked solution to the object lifetime
    problem, that gets in the way of the programmer and isn't appropriate
    for the vast majority of software tasks.

    Embedded systems often need custom memory management, not something that
    the language imposes. C has malloc, yet even that gets disused in favor
    of something else.


    For safe embedded systems, you don't want memory management at all.
    Avoiding dynamic memory is an important aspect of safety-critical
    embedded development.


    You still have to think about memory management even if you avoid any
    dynamic memory? How are you going to manage this memory wrt your various
    data structure needs....

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Mon Mar 4 09:06:31 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/3/2024 12:10 PM, Lawrence D'Oliveiro wrote:
    On Sun, 3 Mar 2024 12:01:57 +0100, David Brown wrote:

    It is not languages like C and C++ that are "unsafe".

    Some empirical evidence from Google <https://security.googleblog.com/2022/12/memory-safe-languages-in-android-13.html>
    shows a reduction in memory-safety errors in switching from C/C++ to Rust.

    Sure. Putting corks on the forks reduces the chance of eye injuries.
    Fwiw, a YouTube link to a scene in the movie Dirty Rotten Scoundrels:
    Funny to me:


    https://youtu.be/eF8QAeQm3ZM?t=332

    Putting the cork on the fork is akin to saying nobody should be using C
    and/or C++ in this "modern" age? :^)

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Blue-Maned_Hawk@3:633/280.2 to All on Mon Mar 4 09:11:14 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    Lawrence D'Oliveiro wrote:

    On Sun, 3 Mar 2024 08:54:36 -0000 (UTC), Blue-Maned_Hawk wrote:

    Lawrence D'Oliveiro wrote:

    Nowadays, POSIX (and *nix generally) is undergoing a resurgence
    because of Linux and Open Source. Developers are discovering that the
    Linux ecosystem offers a much more productive development environment
    for a code-sharing, code-reusing, Web-centric world than anything
    Microsoft can offer.

    I do not want to live in a web-centric world.

    You already do.

    That does not change the veracity of my statement.



    --
    Blue-Maned_Hawk│shortens to Hawk│/ blu.mɛin.dʰak/
    │he/him/his/himself/Mr. blue-maned_hawk.srht.site
    Every time!

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Blue-Maned_Hawk@3:633/280.2 to All on Mon Mar 4 09:14:31 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    Frankly, i think we should all be programming in macros over assembly
    anyway.



    --
    Blue-Maned_Hawk│shortens to Hawk│/ blu.mɛin.dʰak/
    │he/him/his/himself/Mr. blue-maned_hawk.srht.site
    You have a disease!

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Mon Mar 4 09:15:09 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/3/2024 2:14 PM, Blue-Maned_Hawk wrote:
    Frankly, i think we should all be programming in macros over assembly
    anyway.




    lol! :^D

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Mon Mar 4 10:27:54 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Sun, 3 Mar 2024 22:11:14 -0000 (UTC), Blue-Maned_Hawk wrote:

    Lawrence D'Oliveiro wrote:

    On Sun, 3 Mar 2024 08:54:36 -0000 (UTC), Blue-Maned_Hawk wrote:

    I do not want to live in a web-centric world.

    You already do.

    That does not change the veracity of my statement.

    That doesn’t change the veracity of mine.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Mon Mar 4 10:29:42 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Sun, 3 Mar 2024 14:06:31 -0800, Chris M. Thomasson wrote:

    On 3/3/2024 12:10 PM, Lawrence D'Oliveiro wrote:

    On Sun, 3 Mar 2024 12:01:57 +0100, David Brown wrote:

    It is not languages like C and C++ that are "unsafe".

    Some empirical evidence from Google
    <https://security.googleblog.com/2022/12/memory-safe-languages-in-android-13.html>
    shows a reduction in memory-safety errors in switching from C/C++ to
    Rust.

    Sure. Putting corks on the forks reduces the chance of eye injuries.

    Except this is Google, and they’re doing it in real-world production
    code, namely Android. And showing some positive benefits from doing
    so, without impairing the functionality of Android in any way.

    Not like “putting corks on the forks”, whatever that might be about
    ....

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Mon Mar 4 10:31:35 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Sun, 3 Mar 2024 21:23:56 +0100, David Brown wrote:

    But consider this. When programming in modern C++, you can be risk-free from buffer overruns and most kinds of memory leak - use container
    classes, string classes, and the like, rather than C-style arrays and malloc/free or new/delete.

    Or, going further, how about Google's "Carbon" project <https://github.com/carbon-language/carbon-lang>, which tries to keep
    the good bits from C++ while chucking out the bad?

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Mon Mar 4 10:53:22 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/3/2024 3:29 PM, Lawrence D'Oliveiro wrote:
    On Sun, 3 Mar 2024 14:06:31 -0800, Chris M. Thomasson wrote:

    On 3/3/2024 12:10 PM, Lawrence D'Oliveiro wrote:

    On Sun, 3 Mar 2024 12:01:57 +0100, David Brown wrote:

    It is not languages like C and C++ that are "unsafe".

    Some empirical evidence from Google
    <https://security.googleblog.com/2022/12/memory-safe-languages-in-android-13.html>
    shows a reduction in memory-safety errors in switching from C/C++ to
    Rust.

    Sure. Putting corks on the forks reduces the chance of eye injuries.

    Except this is Google, and they’re doing it in real-world production
    code, namely Android. And showing some positive benefits from doing
    so, without impairing the functionality of Android in any way.

    Not like “putting corks on the forks”, whatever that might be about
    ...

    Putting corks on the forks is necessary to prevent the programmer from
    hurting itself or others... ;^)

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From David LaRue@3:633/280.2 to All on Mon Mar 4 10:59:33 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity Risks"

    Lynn McGuire <lynnmcguire5@gmail.com> wrote in news:us0brl$246bf$1@dont-email.me:

    "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe
    programming languages. The tech industry sees their point, but it
    won't be easy."

    No. The feddies want to regulate software development very much.
    They have been talking about it for at least 20 years now. This is a
    very bad thing.

    Lynn

    I was thinking about this wrt other allegedly more secure languages. They can be hacked just as easily as C and C++ and many other languages. The government should worry about things they really need to control, which is less, not more, IMHO. They obviously know very little about computer development.

    David
    Professional developer for nearly 45 years

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Mon Mar 4 11:06:24 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/3/2024 3:59 PM, David LaRue wrote:
    Lynn McGuire <lynnmcguire5@gmail.com> wrote in news:us0brl$246bf$1@dont-email.me:

    "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe
    programming languages. The tech industry sees their point, but it
    won't be easy."

    No. The feddies want to regulate software development very much.
    They have been talking about it for at least 20 years now. This is a
    very bad thing.

    Lynn

    I was thinking about this wrt other alledgedly more secure languages. They can be hacked just as easily as C and C++ and many other languages. The government should worry about things they really need to control, which is less not more, IMHO. They obviously know very little about computer development.
    [...]

    I remember a while back when some people would try to tell me that ADA
    solves all issues...


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Kenny McCormack@3:633/280.2 to All on Mon Mar 4 11:44:41 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    In article <us2s96$2n6h3$6@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    ....
    Sure. Putting corks on the forks reduces the chance of eye injuries.
    Fwiw, a YouTube link to a scene in the movie Dirty Rotten Scoundrels:
    Funny to me:


    https://youtu.be/eF8QAeQm3ZM?t=332

    Leader Keith gets mad when you post YouTube URLs here.

    I'd be more careful, if I were you.

    Putting the cork on the fork is akin to saying nobody should be using C and/or C++ in this "modern" age? :^)
    --
    The randomly chosen signature file that would have appeared here is more than 4 lines long. As such, it violates one or more Usenet RFCs. In order to remain in compliance with said RFCs, the actual sig can be found at the following URL:
    http://user.xmission.com/~gazelle/Sigs/ModernXtian

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: The official candy of the new Millennium (3:633/280.2@fidonet)
  • From bart@3:633/280.2 to All on Mon Mar 4 12:00:24 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 03/03/2024 23:29, Lawrence D'Oliveiro wrote:
    On Sun, 3 Mar 2024 14:06:31 -0800, Chris M. Thomasson wrote:

    On 3/3/2024 12:10 PM, Lawrence D'Oliveiro wrote:

    On Sun, 3 Mar 2024 12:01:57 +0100, David Brown wrote:

    It is not languages like C and C++ that are "unsafe".

    Some empirical evidence from Google
    <https://security.googleblog.com/2022/12/memory-safe-languages-in-android-13.html>
    shows a reduction in memory-safety errors in switching from C/C++ to
    Rust.

    Sure. Putting corks on the forks reduces the chance of eye injuries.

    Except this is Google, and they’re doing it in real-world production
    code, namely Android. And showing some positive benefits from doing
    so, without impairing the functionality of Android in any way.

    That's great. So long as it is somebody else who is programming in one of
    those languages where you have one hand tied behind your back. That used
    to be Ada. Now apparently it is Rust (so more like both hands tied).


    In the pie chart in your link, however, new code in C/C++ still looks to
    be nearly 3 times as much as Rust.

    Personally I think there must be an easier language which is considered
    to be safer without also making coding a nightmare.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Mon Mar 4 16:43:40 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Sun, 3 Mar 2024 16:06:24 -0800, Chris M. Thomasson wrote:

    I remember a while back when some people would try to tell me that [Ada] solves all issues...

    It did make a difference. Did you know the life-support system on the International Space Station was written in Ada? Not something you
    would trust C++ code to, let’s face it.

    And here <https://devclass.com/2022/11/08/spark-as-good-as-rust-for-safer-coding-adacore-cites-nvidia-case-study/>
    is a project to make it even safer.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From David Brown@3:633/280.2 to All on Mon Mar 4 19:44:04 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 03/03/2024 23:01, Chris M. Thomasson wrote:
    On 3/3/2024 12:23 PM, David Brown wrote:
    On 03/03/2024 19:18, Kaz Kylheku wrote:

    Embedded systems often need custom memory management, not something that the language imposes. C has malloc, yet even that gets disused in favor
    of something else.


    For safe embedded systems, you don't want memory management at all.
    Avoiding dynamic memory is an important aspect of safety-critical
    embedded development.


    You still have to think about memory management even if you avoid any dynamic memory? How are you going to mange this memory wrt your various
    data structures needs....

    To be clear here - sometimes you can't avoid all use of dynamic memory
    and therefore memory management. And as Kaz says, you will often use
    custom solutions such as resource pools rather than generic malloc/free.
    Flexible network communication (such as Ethernet or other IP
    networking) is hard to do without dynamic memory.

    But for things that are safety or reliability critical, you aim to have everything statically allocated. (Sometimes you use dynamic memory at
    startup for convenience, but you never free that memory.) This, of
    course, means you simply don't use certain kinds of data structures. std::array<> is fine - it's just a nicer type wrapper around a fixed
    size C-style array. But you don't use std::vector<>, or other growable structures. You figure out in advance the maximum size you need for
    your structures, and nail them to that size at compile time.
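
    A minimal sketch of that "fixed size, decided at compile time" idea (the
    names and the capacity of 32 are invented for illustration, assuming
    C++17): a small queue whose only storage is a std::array member, so the
    worst-case footprint shows up in the map file and nothing is allocated or
    freed after startup.

        #include <array>
        #include <cstddef>
        #include <optional>

        // Fixed-capacity FIFO: all storage is a compile-time-sized std::array,
        // so there is no heap use, and the only way to "run out" is to exceed
        // the capacity chosen at design time.
        template <typename T, std::size_t Capacity>
        class StaticQueue {
        public:
            bool push(const T &value) {
                if (count == Capacity)
                    return false;                       // full: caller handles it
                buffer[(head + count) % Capacity] = value;
                ++count;
                return true;
            }
            std::optional<T> pop() {
                if (count == 0)
                    return std::nullopt;                // empty
                T value = buffer[head];
                head = (head + 1) % Capacity;
                --count;
                return value;
            }
        private:
            std::array<T, Capacity> buffer{};
            std::size_t head = 0;
            std::size_t count = 0;
        };

        // Worst case decided up front, e.g. at most 32 pending sensor samples.
        StaticQueue<int, 32> sample_queue;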

    There are three big run-time dangers and one big build-time limitation
    when you have dynamic memory:

    1. You can run out. PC's can often be assumed to have "limitless"
    memory, and it is also often fine for a PC program to say it can't load
    that big file until you close other programs and free up memory. In a safety-critical embedded system, you have limited ram, and your code
    never does things it does not have to do - consequently, it is not
    acceptable to say it can't run a task at the moment due to lack of memory.

    2. You get fragmentation from malloc/free, leading to allocation
    failures even when there is enough total free memory. Small embedded
    systems don't have virtual memory, paging, MMUs, and other ways to
    re-arrange the appearance of memory. If you free your memory in a
    different order from allocation, your heap gets fragmented, and you end
    up with your "free" memory consisting of lots of discontinuous bits.

    3. Your timing is hard to predict or constrain. Walking heaps to find
    free memory for malloc, or coalescing free segments on deallocation,
    often has very unpredictable timing. This is a big no-no for real time systems.

    And at design/build time, dynamic memory requirements are extremely
    difficult to analyse. In comparison, if everything is allocated
    statically, it's simple - it's all there in your map files, and you have
    a pass/fail result from trying to link it all within the available
    memory of the target.
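
    Where some dynamic allocation is unavoidable (the network-buffer case
    mentioned above), the "resource pool" compromise usually means a
    fixed-size block pool: every block is the same size, so freeing in any
    order can never fragment the pool, and both allocation and release take
    constant, predictable time, which addresses points 2 and 3 above. A rough
    sketch with invented names and sizes, assuming C++17 - not a drop-in for
    any particular RTOS:

        #include <array>
        #include <cstddef>

        // Pool of NumBlocks equal-sized blocks carved from one static array.
        // Because every block is interchangeable, the pool cannot fragment,
        // and alloc()/release() are simple O(1) operations.
        template <std::size_t BlockSize, std::size_t NumBlocks>
        class BlockPool {
        public:
            BlockPool() {
                for (std::size_t i = 0; i < NumBlocks; ++i)
                    free_list[i] = i;                   // all blocks start free
                free_count = NumBlocks;
            }
            void *alloc() {
                if (free_count == 0)
                    return nullptr;                     // pool exhausted: caller decides
                std::size_t index = free_list[--free_count];
                return &storage[index * BlockSize];
            }
            void release(void *p) {
                auto offset = static_cast<std::byte *>(p) - storage.data();
                free_list[free_count++] =
                    static_cast<std::size_t>(offset) / BlockSize;
            }
        private:
            alignas(std::max_align_t)
                std::array<std::byte, BlockSize * NumBlocks> storage{};
            std::array<std::size_t, NumBlocks> free_list{};
            std::size_t free_count = 0;
        };

        // e.g. 16 buffers of 256 bytes each for incoming Ethernet frames.
        BlockPool<256, 16> rx_pool;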



    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Malcolm McLean@3:633/280.2 to All on Mon Mar 4 22:38:51 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 04/03/2024 08:44, David Brown wrote:
    On 03/03/2024 23:01, Chris M. Thomasson wrote:
    On 3/3/2024 12:23 PM, David Brown wrote:
    On 03/03/2024 19:18, Kaz Kylheku wrote:

    Embedded systems often need custom memory management, not something
    that
    the language imposes. C has malloc, yet even that gets disused in favor >>>> of something else.


    For safe embedded systems, you don't want memory management at all.
    Avoiding dynamic memory is an important aspect of safety-critical
    embedded development.


    You still have to think about memory management even if you avoid any
    dynamic memory? How are you going to mange this memory wrt your
    various data structures needs....

    To be clear here - sometimes you can't avoid all use of dynamic memory
    and therefore memory management. And as Kaz says, you will often use
    custom solutions such as resource pools rather than generic malloc/free.
    Flexible network communication (such as Ethernet or other IP
    networking) is hard to do without dynamic memory.

    But for things that are safety or reliability critical, you aim to have everything statically allocated. (Sometimes you use dynamic memory at startup for convenience, but you never free that memory.) This, of
    course, means you simply don't use certain kinds of data structures. std::array<> is fine - it's just a nicer type wrapper around a fixed
    size C-style array. But you don't use std::vector<>, or other growable structures. You figure out in advance the maximum size you need for
    your structures, and nail them to that size at compile time.

    And if it's embedded, it's unlikely to have an unbounded dataset thrown
    at it, because embedded systems aren't used for those types of problems.

    --
    Check out Basic Algorithms and my other books: https://www.lulu.com/spotlight/bgy1mm


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Malcolm McLean@3:633/280.2 to All on Mon Mar 4 22:44:06 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 03/03/2024 23:29, Lawrence D'Oliveiro wrote:
    On Sun, 3 Mar 2024 14:06:31 -0800, Chris M. Thomasson wrote:

    On 3/3/2024 12:10 PM, Lawrence D'Oliveiro wrote:

    On Sun, 3 Mar 2024 12:01:57 +0100, David Brown wrote:

    It is not languages like C and C++ that are "unsafe".

    Some empirical evidence from Google
    <https://security.googleblog.com/2022/12/memory-safe-languages-in-android-13.html>
    shows a reduction in memory-safety errors in switching from C/C++ to
    Rust.

    Sure. Putting corks on the forks reduces the chance of eye injuries.

    Except this is Google, and they’re doing it in real-world production
    code, namely Android. And showing some positive benefits from doing
    so, without impairing the functionality of Android in any way.

    Not like “putting corks on the forks”, whatever that might be about
    ...

    And it's a case of pumping money at it until something which would not be
    a goer for anyone else starts to be a goer, and is now made to work. And of
    course Google can solve a problem by inventing a new language and
    putting up all the infrastructure that that would need around it.

    --
    Check out Basic Algorithms and my other books: https://www.lulu.com/spotlight/bgy1mm


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Malcolm McLean@3:633/280.2 to All on Mon Mar 4 22:54:29 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 04/03/2024 00:06, Chris M. Thomasson wrote:
    On 3/3/2024 3:59 PM, David LaRue wrote:
    Lynn McGuire <lynnmcguire5@gmail.com> wrote in
    news:us0brl$246bf$1@dont-email.me:

    "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"
    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-
    invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe
    programming languages. The tech industry sees their point, but it
    won't be easy."

    No. The feddies want to regulate software development very much.
    They have been talking about it for at least 20 years now. This is a
    very bad thing.

    Lynn

    I was thinking about this wrt other alledgedly more secure languages.
    They
    can be hacked just as easily as C and C++ and many other languages. The
    government should worry about things they really need to control,
    which is
    less not more, IMHO. They obviously know very little about computer
    development.
    [...]

    I remember a while back when some people would try to tell me that ADA solves all issues...

    And there's ADA, and there's Ada, the lady.

    And she wrote.

    "The Analytical Engine has no pretensions whatever to originate
    anything. It can do whatever we know how to order it to perform. It can
    follow analysis; but it has no power of anticipating any analytical
    relations or truths."

    And so she knew what the capabilites of the Analytical Engine were,
    exactly what programming was, what and what it could not achieve, and
    how set out making it achieve what it could achieved. And so she had it,
    and in a sense, ADA solved all issues.

    And no formal computer science education. Of course.
    --
    Check out Basic Algorithms and my other books: https://www.lulu.com/spotlight/bgy1mm


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Derek@3:633/280.2 to All on Mon Mar 4 23:18:25 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    All,

    "White House to Developers: Using C or C++ Invites Cybersecurity Risks"

    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe programming languages. The tech industry sees their point,
    but it won't be easy."

    They make the mistake of blaming the tools rather than
    how the tools are used https://shape-of-code.com/2024/03/03/the-whitehouse-report-on-adopting-memory-safety/


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From David Brown@3:633/280.2 to All on Tue Mar 5 01:41:43 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 04/03/2024 12:54, Malcolm McLean wrote:
    On 04/03/2024 00:06, Chris M. Thomasson wrote:
    On 3/3/2024 3:59 PM, David LaRue wrote:
    Lynn McGuire <lynnmcguire5@gmail.com> wrote in
    news:us0brl$246bf$1@dont-email.me:

    "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"
    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus- >>>> invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe
    programming languages. The tech industry sees their point, but it
    won't be easy."

    No. The feddies want to regulate software development very much.
    They have been talking about it for at least 20 years now. This is a
    very bad thing.

    Lynn

    I was thinking about this wrt other alledgedly more secure languages.
    They
    can be hacked just as easily as C and C++ and many other languages. The >>> government should worry about things they really need to control,
    which is
    less not more, IMHO. They obviously know very little about computer
    development.
    [...]

    I remember a while back when some people would try to tell me that ADA
    solves all issues...

    And there's ADA, and there's Ada, the lady.

    No, there's Ada the programming language, named after Lady Ada Lovelace.

    For those that perhaps don't understand these things, all-caps names are usually used for acronyms, such as BASIC, or languages from before small letters were universal in computer systems, such as early FORTRAN.
    Programming languages named after people are generally capitalised the
    same way people's names are - thus Ada and Pascal.


    And she wrote.

    "The Analytical Engine has no pretensions whatever to originate
    anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths."

    And so she knew what the capabilites of the Analytical Engine were,
    exactly what programming was, what and what it could not achieve, and
    how set out making it achieve what it could achieved. And so she had it,
    and in a sense, ADA solved all issues.


    What I think you are trying to say, but got completely lost in the last sentence, is that Lady Ada Lovelace is often regarded (perhaps
    incorrectly) as the first computer programmer.

    And no formal computer science education. Of course.

    She had a great deal of education in mathematics - just like most
    computer science pioneers.



    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Tue Mar 5 02:28:35 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity Risks"
    Reply-To: slp53@pacbell.net

    David Brown <david.brown@hesbynett.no> writes:
    On 04/03/2024 12:54, Malcolm McLean wrote:
    On 04/03/2024 00:06, Chris M. Thomasson wrote:
    On 3/3/2024 3:59 PM, David LaRue wrote:
    Lynn McGuire <lynnmcguire5@gmail.com> wrote in
    news:us0brl$246bf$1@dont-email.me:

    "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"
    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus- >>>>> invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe
    programming languages. The tech industry sees their point, but it
    won't be easy."

    No.  The feddies want to regulate software development very much.
    They have been talking about it for at least 20 years now.  This is a >>>>> very bad thing.

    Lynn

    I was thinking about this wrt other alledgedly more secure languages. >>>> They
    can be hacked just as easily as C and C++ and many other languages.  The >>>> government should worry about things they really need to control,
    which is
    less not more, IMHO.  They obviously know very little about computer
    development.
    [...]

    I remember a while back when some people would try to tell me that ADA
    solves all issues...

    And there's ADA, and there's Ada, the lady.

    No, there's Ada the programming language, named after Lady Ada Lovelace.

    Indeed. And ADA has a very different meaning stateside.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Janis Papanagnou@3:633/280.2 to All on Tue Mar 5 03:05:54 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 03.03.2024 21:23, David Brown wrote:

    [...] Shortcuts are taken because
    the sales people need the code by tomorrow morning, and there are only
    so many hours in the night to get it working.

    An indication of bad project management (or none at all) to control
    development according to a realistic plan.

    Janis


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From David Brown@3:633/280.2 to All on Tue Mar 5 04:24:58 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 04/03/2024 17:05, Janis Papanagnou wrote:
    On 03.03.2024 21:23, David Brown wrote:

    [...] Shortcuts are taken because
    the sales people need the code by tomorrow morning, and there are only
    so many hours in the night to get it working.

    An indication of bad project management (or none at all) to control development according to a realistic plan.


    Now you are beginning to understand!



    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Malcolm McLean@3:633/280.2 to All on Tue Mar 5 05:51:03 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 04/03/2024 14:41, David Brown wrote:
    On 04/03/2024 12:54, Malcolm McLean wrote:
    On 04/03/2024 00:06, Chris M. Thomasson wrote:
    On 3/3/2024 3:59 PM, David LaRue wrote:
    Lynn McGuire <lynnmcguire5@gmail.com> wrote in
    news:us0brl$246bf$1@dont-email.me:

    "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"
    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus- >>>>> invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe
    programming languages. The tech industry sees their point, but it
    won't be easy."

    No. The feddies want to regulate software development very much.
    They have been talking about it for at least 20 years now. This is a >>>>> very bad thing.

    Lynn

    I was thinking about this wrt other alledgedly more secure
    languages. They
    can be hacked just as easily as C and C++ and many other languages.
    The
    government should worry about things they really need to control,
    which is
    less not more, IMHO. They obviously know very little about computer
    development.
    [...]

    I remember a while back when some people would try to tell me that
    ADA solves all issues...

    And there's ADA, and there's Ada, the lady.

    No, there's Ada the programming language, named after Lady Ada Lovelace.

    For those that perhaps don't understand these things, all-caps names are usually used for acronyms, such as BASIC, or languages from before small letters were universal in computer systems, such as early FORTRAN. Programming languages named after people are generally capitalised the
    same way people's names are - thus Ada and Pascal.


    And she wrote.

    "The Analytical Engine has no pretensions whatever to originate
    anything. It can do whatever we know how to order it to perform. It
    can follow analysis; but it has no power of anticipating any
    analytical relations or truths."

    And so she knew what the capabilites of the Analytical Engine were,
    exactly what programming was, what and what it could not achieve, and
    how set out making it achieve what it could achieved. And so she had
    it, and in a sense, ADA solved all issues.


    What I think you are trying to say, but got completely lost in the last sentence, is that Lady Ada Lovelace is often regarded (perhaps
    incorrectly) as the first computer programmer.

    So what I'm trying to say is that she did it, and everyone else just
    knocked out the code. Once you understand what you are doing in this
    way, it's wrapped up. She solved it. So early on.

    Look at sentence two. She knew what that machine was.

    --
    Check out Basic Algorithms and my other books: https://www.lulu.com/spotlight/bgy1mm


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Tue Mar 5 07:36:57 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/4/2024 12:44 AM, David Brown wrote:
    On 03/03/2024 23:01, Chris M. Thomasson wrote:
    On 3/3/2024 12:23 PM, David Brown wrote:
    On 03/03/2024 19:18, Kaz Kylheku wrote:

    Embedded systems often need custom memory management, not something
    that
    the language imposes. C has malloc, yet even that gets disused in favor >>>> of something else.


    For safe embedded systems, you don't want memory management at all.
    Avoiding dynamic memory is an important aspect of safety-critical
    embedded development.


    You still have to think about memory management even if you avoid any
    dynamic memory? How are you going to mange this memory wrt your
    various data structures needs....

    To be clear here - sometimes you can't avoid all use of dynamic memory
    and therefore memory management. And as Kaz says, you will often use
    custom solutions such as resource pools rather than generic malloc/free.
    Flexible network communication (such as Ethernet or other IP
    networking) is hard to do without dynamic memory.
    [...]

    Think of using a big chunk of memory that never needs to be freed and is
    just there per process. Now, you carve it up and store it in a cache
    that has push and pop functions. So, you still have to manage memory
    even when you are using no dynamic memory at all... Fair enough, in a
    sense? The push and the pop are your malloc and free, in a strange sense...
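
    For illustration, a minimal C sketch of that idea - one statically
    reserved chunk carved into fixed-size nodes at startup, with push/pop
    standing in for free/malloc (the sizes and names here are invented):

    #include <stddef.h>

    #define NODE_COUNT 64           /* invented capacity */

    typedef struct node {
        struct node *next;
        unsigned char payload[60];
    } node;

    static node pool[NODE_COUNT];   /* the one big chunk, never freed */
    static node *top;               /* head of the LIFO free cache */

    /* Carve the chunk up once at startup. */
    static void cache_init(void)
    {
        top = NULL;
        for (size_t i = 0; i < NODE_COUNT; i++) {
            pool[i].next = top;
            top = &pool[i];
        }
    }

    /* The "malloc" of this scheme. */
    static node *cache_pop(void)
    {
        node *n = top;
        if (n)
            top = n->next;
        return n;               /* NULL when the fixed pool is exhausted */
    }

    /* The "free" of this scheme. */
    static void cache_push(node *n)
    {
        n->next = top;
        top = n;
    }

    cache_pop() returning NULL is the "out of memory" case, which the
    static sizing is supposed to make impossible in practice.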


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Tue Mar 5 07:41:26 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/4/2024 12:36 PM, Chris M. Thomasson wrote:
    On 3/4/2024 12:44 AM, David Brown wrote:
    On 03/03/2024 23:01, Chris M. Thomasson wrote:
    On 3/3/2024 12:23 PM, David Brown wrote:
    On 03/03/2024 19:18, Kaz Kylheku wrote:

    Embedded systems often need custom memory management, not something >>>>> that
    the language imposes. C has malloc, yet even that gets disused in
    favor
    of something else.


    For safe embedded systems, you don't want memory management at all.
    Avoiding dynamic memory is an important aspect of safety-critical
    embedded development.


    You still have to think about memory management even if you avoid any
    dynamic memory? How are you going to mange this memory wrt your
    various data structures needs....

    To be clear here - sometimes you can't avoid all use of dynamic memory
    and therefore memory management. And as Kaz says, you will often use
    custom solutions such as resource pools rather than generic
    malloc/free. Flexible network communication (such as Ethernet or
    other IP networking) is hard to do without dynamic memory.
    [...]

    Think of using a big chunk of memory,

    Say your program gains a special pointer from the system that contains
    all of the memory it can use for its lifetime. It's there, and there is
    no way to allocate any more...


    never needed to be freed and is
    just there per process. Now, you carve it up and store it in a cache
    that has functions push and pop. So, you still have to manage memory
    even when you are using no dynamic memory at all... Fair enough, in a
    sense? The push and the pop are your malloc and free in a strange sense...



    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Tue Mar 5 07:46:54 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/4/2024 3:38 AM, Malcolm McLean wrote:
    On 04/03/2024 08:44, David Brown wrote:
    On 03/03/2024 23:01, Chris M. Thomasson wrote:
    On 3/3/2024 12:23 PM, David Brown wrote:
    On 03/03/2024 19:18, Kaz Kylheku wrote:

    Embedded systems often need custom memory management, not something >>>>> that
    the language imposes. C has malloc, yet even that gets disused in
    favor
    of something else.


    For safe embedded systems, you don't want memory management at all.
    Avoiding dynamic memory is an important aspect of safety-critical
    embedded development.


    You still have to think about memory management even if you avoid any
    dynamic memory? How are you going to mange this memory wrt your
    various data structures needs....

    To be clear here - sometimes you can't avoid all use of dynamic memory
    and therefore memory management. And as Kaz says, you will often use
    custom solutions such as resource pools rather than generic
    malloc/free. Flexible network communication (such as Ethernet or
    other IP networking) is hard to do without dynamic memory.

    But for things that are safety or reliability critical, you aim to
    have everything statically allocated. (Sometimes you use dynamic
    memory at startup for convenience, but you never free that memory.)
    This, of course, means you simply don't use certain kinds of data
    structures. std::array<> is fine - it's just a nicer type wrapper
    around a fixed size C-style array. But you don't use std::vector<>,
    or other growable structures. You figure out in advance the maximum
    size you need for your structures, and nail them to that size at
    compile time.

    And if it's embedded, it's unlikely to have an unbounded dataset thrown
    at it, because embedded systems aren't used for those types of problems.


    Fwiw, this older experimental allocator (2009) works on restricted
    memory systems. Please forgive the alignment hacks... ;^)

    https://pastebin.com/raw/f37a23918
    (to raw text, no ads wrt pastebin)

    https://groups.google.com/g/comp.lang.c/c/7oaJFWKVCTw/m/sSWYU9BUS_QJ


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Tue Mar 5 07:52:27 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/4/2024 4:18 AM, Derek wrote:
    All,

    "White House to Developers: Using C or C++ Invites Cybersecurity Risks"

    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe
    programming languages. The tech industry sees their point, but it
    won't be easy."

    They make the mistake of blaming the tools rather than
    how the tools are used https://shape-of-code.com/2024/03/03/the-whitehouse-report-on-adopting-memory-safety/


    Akin to giving somebody a hammer and they proceed to smash their own
    hand with it. Then they say, well, that hammer is dangerous and the
    person that gave it to me should be sued for negligence... Wow, let's
    think about writing up 1000 pages on why hammers should be banned?
    Hyper sarcastic, I know, but if the key fits... ;^)

    Sorry for the sarcasm.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Tue Mar 5 07:57:21 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/3/2024 4:44 PM, Kenny McCormack wrote:
    In article <us2s96$2n6h3$6@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    ...
    Sure. Putting corks on the forks reduces the chance of eye injuries.
    Fwiw, a YouTube link to a scene in the movie Dirty Rotten Scoundrels:
    Funny to me:


    https://youtu.be/eF8QAeQm3ZM?t=332

    Leader Keith gets mad when you post YouTube URLs here.

    I'd be more careful, if I were you.

    Well, at least I added in a description... ;^)


    Putting the cork on the fork is akin to saying nobody should be using C
    and/or C++ in this "modern" age? :^)


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Mar 5 08:07:27 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Mon, 4 Mar 2024 11:44:06 +0000, Malcolm McLean wrote:

    And of course Google can solve a problem by inventing a new language and putting up all the infrastructure that that would need around it.

    Google has invented quite a lot of languages: Dart and Go come to mind,
    and also this “Carbon” effort.

    I suppose nowadays a language can find a niche outside the mainstream, and still be viable. Proprietary products need mass-market success to stay
    afloat, but with open-source ones, what’s important is the contributor
    base, not the user base.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Mar 5 08:11:08 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Mon, 4 Mar 2024 15:41:43 +0100, David Brown wrote:

    ... Lady Ada Lovelace is often regarded (perhaps
    incorrectly) as the first computer programmer.

    She was the first, in written records, to appreciate some of the not-so-obvious issues in computer programming.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Tue Mar 5 08:15:20 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/3/2024 9:43 PM, Lawrence D'Oliveiro wrote:
    On Sun, 3 Mar 2024 16:06:24 -0800, Chris M. Thomasson wrote:

    I remember a while back when some people would try to tell me that [Ada]
    solves all issues...

    It did make a difference. Did you know the life-support system on the International Space Station was written in Ada? Not something you
    would trust C++ code to, let’s face it.

    Would you trust a "safe" language that had some critical libraries that
    were written in say, C?



    And here <https://devclass.com/2022/11/08/spark-as-good-as-rust-for-safer-coding-adacore-cites-nvidia-case-study/>
    is a project to make it even safer.


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Mar 5 08:26:51 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Mon, 4 Mar 2024 13:15:20 -0800, Chris M. Thomasson wrote:

    Would you trust a "safe" language that had some critical libraries that
    were written in say, C?

    The less C code you write, the easier it is to keep it under control.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Tue Mar 5 08:28:46 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/4/2024 1:26 PM, Lawrence D'Oliveiro wrote:
    On Mon, 4 Mar 2024 13:15:20 -0800, Chris M. Thomasson wrote:

    Would you trust a "safe" language that had some critical libraries that
    were written in say, C?

    The less C code you write, the easier it is to keep it under control.

    Excellent comment in a C group. Well, you should move to another group?

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Tue Mar 5 08:29:52 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/4/2024 1:28 PM, Chris M. Thomasson wrote:
    On 3/4/2024 1:26 PM, Lawrence D'Oliveiro wrote:
    On Mon, 4 Mar 2024 13:15:20 -0800, Chris M. Thomasson wrote:

    Would you trust a "safe" language that had some critical libraries that
    were written in say, C?

    The less C code you write, the easier it is to keep it under control.

    Excellent comment in a C group. Well, you should move to another group?

    http://fractallife247.com/test/hmac_cipher/ver_0_0_0_1?ct_hmac_cipher=7e7e1c663477d02a3adbf99372cfa1e0e719dcdabd20b50c27000dba3eb5dc342e3e0403607bb40f00b999b6bc24559ca0858b445c097a3848b457b1028ab0d78aa57934cd00b99dd080f80bf7791a11d5df6435fb0e

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Michael S@3:633/280.2 to All on Tue Mar 5 09:59:48 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites
    Cybersecurity Risks"

    On Mon, 4 Mar 2024 21:07:27 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Mon, 4 Mar 2024 11:44:06 +0000, Malcolm McLean wrote:

    And of course Google can solve a problem by inventing a new
    language and putting up all the infrastructure that that would need
    around it.

    Google has invented quite a lot of languages: Dart and Go come to
    mind, and also this “Carbon” effort.

    I suppose nowadays a language can find a niche outside the
    mainstream, and still be viable. Proprietary products need
    mass-market success to stay afloat, but with open-source ones, what’s
    important is the contributor base, not the user base.

    Go *is* mainstream, more so than Rust.
    Dart is not mainstream and is not even niche.
    For Carbon it's too early to call, but so far prospects look bleak.



    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Janis Papanagnou@3:633/280.2 to All on Tue Mar 5 12:46:51 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 04.03.2024 18:24, David Brown wrote:
    On 04/03/2024 17:05, Janis Papanagnou wrote:
    On 03.03.2024 21:23, David Brown wrote:

    [...] Shortcuts are taken because
    the sales people need the code by tomorrow morning, and there are only
    so many hours in the night to get it working.

    An indication of bad project management (or none at all) to control
    development according to a realistic plan.

    Now you are beginning to understand!

    Huh? - I posted about various factors (beyond the programmers'
    proficiency and tools) in an earlier reply to you; it was including
    the management factor that you missed to note and that you adopted
    as factor just in a later post. - So there's neither need nor reason
    for such an arrogant, wrong, and disrespectful statement.

    Janis


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Mar 5 12:54:46 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Tue, 5 Mar 2024 00:59:48 +0200, Michael S wrote:

    Go *is* mainstream, more so than Rust.

    Google looked at what language to use for its proprietary “Fuchsia” OS, and decided Rust was a better choice than Go.

    Discord did some benchmarking of its back-end servers, which had been
    using Go, and decided that switching to Rust offered better performance.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Janis Papanagnou@3:633/280.2 to All on Tue Mar 5 13:32:23 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 04.03.2024 22:15, Chris M. Thomasson wrote:
    On 3/3/2024 9:43 PM, Lawrence D'Oliveiro wrote:
    On Sun, 3 Mar 2024 16:06:24 -0800, Chris M. Thomasson wrote:

    I remember a while back when some people would try to tell me that [Ada] >>> solves all issues...

    It did make a difference. Did you know the life-support system on the
    International Space Station was written in Ada? Not something you
    would trust C++ code to, let’s face it.

    Would you trust a "safe" language that had some critical libraries that
    were written in say, C?

    You named them as "critical libraries", which (as a project manager)
    I'd handle as such; be sure about their quality, about certificates,
    write own test cases if necessary, or demand source code for reviews
    for own verification.

    As already said, there's more factors than the language. An external
    library is also an externality to consider, and to not consider it
    (per se) as okay.

    Janis


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Malcolm McLean@3:633/280.2 to All on Tue Mar 5 13:46:33 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 04/03/2024 21:28, Chris M. Thomasson wrote:
    On 3/4/2024 1:26 PM, Lawrence D'Oliveiro wrote:
    On Mon, 4 Mar 2024 13:15:20 -0800, Chris M. Thomasson wrote:

    Would you trust a "safe" language that had some critical libraries that
    were written in say, C?

    The less C code you write, the easier it is to keep it under control.

    Excellent comment in a C group. Well, you should move to another group?

    There's an underlying reality there. The less code you have, the less
    that can go wrong. So don't just knock out code, but think a bit about
    what you do and do not really need.
    --
    Check out Basic Algorithms and my other books: https://www.lulu.com/spotlight/bgy1mm


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Tue Mar 5 14:40:37 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/4/2024 6:46 PM, Malcolm McLean wrote:
    On 04/03/2024 21:28, Chris M. Thomasson wrote:
    On 3/4/2024 1:26 PM, Lawrence D'Oliveiro wrote:
    On Mon, 4 Mar 2024 13:15:20 -0800, Chris M. Thomasson wrote:

    Would you trust a "safe" language that had some critical libraries that >>>> were written in say, C?

    The less C code you write, the easier it is to keep it under control.

    Excellent comment in a C group. Well, you should move to another group?

    There's an underlying reality there. The less code you have, the less
    that can go wrong.

    Well, hard to disagree with that. :^D


    So don;t just knock out code, but think a bit about
    what you do and do not really need.

    Indeed.

    [...]

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Tue Mar 5 14:42:54 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/4/2024 6:32 PM, Janis Papanagnou wrote:
    On 04.03.2024 22:15, Chris M. Thomasson wrote:
    On 3/3/2024 9:43 PM, Lawrence D'Oliveiro wrote:
    On Sun, 3 Mar 2024 16:06:24 -0800, Chris M. Thomasson wrote:

    I remember a while back when some people would try to tell me that [Ada] >>>> solves all issues...

    It did make a difference. Did you know the life-support system on the
    International Space Station was written in Ada? Not something you
    would trust C++ code to, let’s face it.

    Would you trust a "safe" language that had some critical libraries that
    were written in say, C?

    You named them as "critical libraries", which (as a project manager)
    I'd handle as such; be sure about their quality, about certificates,
    write own test cases if necessary, or demand source code for reviews
    for own verification.

    As already said, there's more factors than the language. An external
    library is also an externality to consider, and to not consider it
    (per se) as okay.

    Think of a critical library as an essential part of a runtime for a
    language, perhaps? Say you create a new language that depends on certain things that are coded in C and/or ASM. Fair enough?

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Mar 5 15:43:21 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Tue, 5 Mar 2024 02:46:33 +0000, Malcolm McLean wrote:

    The less code you have, the less that can go wrong.

    This can also mean using the build system to automatically generate some repetitive things, to avoid having to write them manually.
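
    As an illustration of that idea (the table, values and file names are
    made up for the example), a tiny generator program run from a build
    rule can emit a header instead of anyone maintaining the table by hand:

    #include <stdio.h>
    #include <math.h>

    /* Hypothetical build-time generator: writes a C header containing a
       256-entry sine lookup table to stdout, so the table is never typed
       or edited manually. */
    int main(void)
    {
        puts("/* generated file - do not edit */");
        puts("static const unsigned char sine_table[256] = {");
        for (int i = 0; i < 256; i++)
            printf("    %3d,\n",
                   (int)(127.5 + 127.5 * sin(i * 2.0 * 3.14159265358979 / 256)));
        puts("};");
        return 0;
    }

    A makefile rule along the lines of "sine_table.h: gen_sine" can then
    redirect the generator's output into the header the rest of the code
    includes.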

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Tue Mar 5 16:23:49 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/4/2024 8:43 PM, Lawrence D'Oliveiro wrote:
    On Tue, 5 Mar 2024 02:46:33 +0000, Malcolm McLean wrote:

    The less code you have, the less that can go wrong.

    This can also mean using the build system to automatically generate some repetitive things, to avoid having to write them manually.

    Does the build system depend on anything coded in C?

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lynn McGuire@3:633/280.2 to All on Tue Mar 5 17:02:01 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/3/2024 4:14 PM, Blue-Maned_Hawk wrote:
    Frankly, i think we should all be programming in macros over assembly
    anyway.

    Been there, done that. No more.

    Lynn



    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lynn McGuire@3:633/280.2 to All on Tue Mar 5 17:03:54 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/3/2024 11:43 PM, Lawrence D'Oliveiro wrote:
    On Sun, 3 Mar 2024 16:06:24 -0800, Chris M. Thomasson wrote:

    I remember a while back when some people would try to tell me that [Ada]
    solves all issues...

    It did make a difference. Did you know the life-support system on the International Space Station was written in Ada? Not something you
    would trust C++ code to, let’s face it.

    And here <https://devclass.com/2022/11/08/spark-as-good-as-rust-for-safer-coding-adacore-cites-nvidia-case-study/>
    is a project to make it even safer.

    Most of the Ada code was written in C or C++ and converted to Ada for delivery.

    Lynn


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lynn McGuire@3:633/280.2 to All on Tue Mar 5 17:09:35 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/3/2024 9:31 AM, Scott Lurndal wrote:
    Lynn McGuire <lynnmcguire5@gmail.com> writes:
    "White House to Developers: Using C or C++ Invites Cybersecurity Risks"

    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe programming
    languages. The tech industry sees their point, but it won't be easy."

    No. The feddies want to regulate software development very much.

    You've been reading far to much apocalyptic fiction and seeing the
    world through trump-colored glasses. Neither reflect reality.

    Nope, I actually have had a Professional Engineer's License in Texas for
    34 years now and can tell you all about what it takes to get one and
    what it takes to keep one.

    This bunch of crazies in the White House wants to do the same thing to software development.

    Lynn



    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Tue Mar 5 17:18:47 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/4/2024 5:54 PM, Lawrence D'Oliveiro wrote:
    On Tue, 5 Mar 2024 00:59:48 +0200, Michael S wrote:

    Go *is* mainstream, more so than Rust.

    Google looked at what language to use for its proprietary “Fuchsia” OS, and decided Rust was a better choice than Go.

    Discord did some benchmarking of its back-end servers, which had been
    using Go, and decided that switching to Rust offered better performance.

    Why do you mention performance? I thought it was all about safety...

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Mar 5 18:06:38 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Mon, 4 Mar 2024 22:18:47 -0800, Chris M. Thomasson wrote:

    On 3/4/2024 5:54 PM, Lawrence D'Oliveiro wrote:

    On Tue, 5 Mar 2024 00:59:48 +0200, Michael S wrote:

    Go *is* mainstream, more so than Rust.

    Google looked at what language to use for its proprietary “Fuchsia” OS, >> and decided Rust was a better choice than Go.

    Discord did some benchmarking of its back-end servers, which had been
    using Go, and decided that switching to Rust offered better
    performance.

    Why do you mention performance? I thought is was all about safety...

    Safety’s a given. Plus you get performance as well.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Mar 5 18:07:24 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Tue, 5 Mar 2024 00:09:35 -0600, Lynn McGuire wrote:

    ... I actually have had a Professional Engineer's License in Texas for
    34 years now and can tell you all about what it takes to get one and
    what it takes to keep one.

    Does that include any qualification in safety-critical or security-
    critical systems?

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Mar 5 18:07:48 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Mon, 4 Mar 2024 21:23:49 -0800, Chris M. Thomasson wrote:

    On 3/4/2024 8:43 PM, Lawrence D'Oliveiro wrote:

    On Tue, 5 Mar 2024 02:46:33 +0000, Malcolm McLean wrote:

    The less code you have, the less that can go wrong.

    This can also mean using the build system to automatically generate
    some repetitive things, to avoid having to write them manually.

    Does the build system depend on anything coded in C?

    These days, it might be Rust.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Mar 5 18:08:54 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Tue, 5 Mar 2024 00:03:54 -0600, Lynn McGuire wrote:

    On 3/3/2024 11:43 PM, Lawrence D'Oliveiro wrote:

    Did you know the life-support system on the
    International Space Station was written in Ada? Not something you would
    trust C++ code to, let’s face it.

    Most of the Ada code was written in C or C++ and converted to Ada for delivery.

    Was it debugged again? Or was it assumed that the translation was bug-
    free?

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Tue Mar 5 18:10:47 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/4/2024 11:06 PM, Lawrence D'Oliveiro wrote:
    On Mon, 4 Mar 2024 22:18:47 -0800, Chris M. Thomasson wrote:

    On 3/4/2024 5:54 PM, Lawrence D'Oliveiro wrote:

    On Tue, 5 Mar 2024 00:59:48 +0200, Michael S wrote:

    Go *is* mainstream, more so than Rust.

    Google looked at what language to use for its proprietary “Fuchsia” OS, >>> and decided Rust was a better choice than Go.

    Discord did some benchmarking of its back-end servers, which had been
    using Go, and decided that switching to Rust offered better
    performance.

    Why do you mention performance? I thought is was all about safety...

    Safety’s a given. Plus you get performance as well.

    For sure? There is no way a programmer can f it up, so to speak?

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From David Brown@3:633/280.2 to All on Tue Mar 5 20:01:53 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 04/03/2024 21:36, Chris M. Thomasson wrote:
    On 3/4/2024 12:44 AM, David Brown wrote:
    On 03/03/2024 23:01, Chris M. Thomasson wrote:
    On 3/3/2024 12:23 PM, David Brown wrote:
    On 03/03/2024 19:18, Kaz Kylheku wrote:

    Embedded systems often need custom memory management, not something >>>>> that
    the language imposes. C has malloc, yet even that gets disused in
    favor
    of something else.


    For safe embedded systems, you don't want memory management at all.
    Avoiding dynamic memory is an important aspect of safety-critical
    embedded development.


    You still have to think about memory management even if you avoid any
    dynamic memory? How are you going to mange this memory wrt your
    various data structures needs....

    To be clear here - sometimes you can't avoid all use of dynamic memory
    and therefore memory management. And as Kaz says, you will often use
    custom solutions such as resource pools rather than generic
    malloc/free. Flexible network communication (such as Ethernet or
    other IP networking) is hard to do without dynamic memory.
    [...]

    Think of using a big chunk of memory, never needed to be freed and is
    just there per process. Now, you carve it up and store it in a cache
    that has functions push and pop. So, you still have to manage memory
    even when you are using no dynamic memory at all... Fair enough, in a
    sense? The push and the pop are your malloc and free in a strange sense...


    I believe I mentioned that. You do not, in general, "push and pop" -
    you malloc and never free. Excluding debugging code and other parts
    useful in testing and developing, you have something like :

    #include <stdint.h>
    #include <stddef.h>
    #include <stdalign.h>

    enum { heap_size = 16384 };
    alignas(max_align_t) static uint8_t heap[heap_size];
    uint8_t *next_free = heap;

    /* free() is a no-op - allocations are never released. */
    void free(void *ptr) {
        (void) ptr;
    }

    /* Bump allocator: round the request up to the maximum alignment and
       hand out the next slice of the static heap. (Checks that the heap
       is not exhausted belong in the debugging code excluded above.) */
    void *malloc(size_t size) {
        const size_t align = alignof(max_align_t);
        const size_t real_size = size ? (size + (align - 1)) & ~(align - 1)
                                      : align;
        void *p = next_free;
        next_free += real_size;
        return p;
    }


    Allowing for pops requires storing the size of the allocations (unless
    you change the API from that of malloc/free), and is only rarely useful.
    Generally if you want memory that temporary, you use a VLA or alloca
    to put it on the stack.
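
    As a usage illustration of that never-free pattern (the driver names
    and structure contents below are invented), everything is allocated
    once during initialisation and then kept for the rest of the run:

    #include <stdlib.h>
    #include <stdint.h>

    /* Hypothetical startup-only allocation: the ring size may depend on
       configuration read at boot, which is the only reason this is
       dynamic at all - nothing is ever free()d, so the trivial bump
       malloc above is sufficient. */
    struct rx_desc { uint32_t addr; uint32_t len; };   /* stand-in contents */

    static struct rx_desc *rx_ring;

    void net_init(size_t slots)
    {
        rx_ring = malloc(slots * sizeof *rx_ring);
        /* rx_ring lives for the lifetime of the firmware - never freed */
    }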


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Michael S@3:633/280.2 to All on Tue Mar 5 20:11:03 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites
    Cybersecurity Risks"

    On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 5 Mar 2024 00:59:48 +0200, Michael S wrote:

    Go *is* mainstream, more so than Rust.

    Google looked at what language to use for its proprietary “Fuchsia”
    OS, and decided Rust was a better choice than Go.

    Go is (1) garbage-collected, (2) mostly statically linked.
    (1) means it is not suitable for a kernel.
    (2) means it is suitable for big user-mode utilities, but probably
    impractical for smaller utilities, because you don't want your tiny
    utility to occupy 2-3 MB on permanent storage.
    But both (1) and (2) are advantages for typical application programming,
    esp. for back-end processing.

    Discord did some benchmarking of its back-end servers, which had been
    using Go, and decided that switching to Rust offered better
    performance.

    I have no idea who Discord is.
    However, I fully expect that for micro- or mini-benchmarks they are
    correct.
    I also expect that
    - even for a micro- or mini-benchmark the difference in speed is less
    than a factor of 3
    - for big and complex real-world back-end processing, writing a working
    solution in Go will take five times fewer man-hours than writing it in
    Rust
    - for more complex processing, just making it work in Rust, regardless of
    execution speed, will require an uncommon level of programming skill
    - even if the Rust solution works initially, it would be more costly (than
    the Go solution) to maintain and especially to adapt to changing
    requirements.




    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From David Brown@3:633/280.2 to All on Tue Mar 5 21:23:41 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 05/03/2024 02:46, Janis Papanagnou wrote:
    On 04.03.2024 18:24, David Brown wrote:
    On 04/03/2024 17:05, Janis Papanagnou wrote:
    On 03.03.2024 21:23, David Brown wrote:

    [...] Shortcuts are taken because
    the sales people need the code by tomorrow morning, and there are only >>>> so many hours in the night to get it working.

    An indication of bad project management (or none at all) to control
    development according to a realistic plan.

    Now you are beginning to understand!

    Huh? - I posted about various factors (beyond the programmers'
    proficiency and tools) in an earlier reply to you; it was including
    the management factor that you missed to note and that you adopted
    as factor just in a later post. - So there's neither need nor reason
    for such an arrogant, wrong, and disrespectful statement.


    It was not intended that way at all - I'm sorry if that is how it came
    across.



    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From David Brown@3:633/280.2 to All on Tue Mar 5 21:27:01 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 05/03/2024 08:08, Lawrence D'Oliveiro wrote:
    On Tue, 5 Mar 2024 00:03:54 -0600, Lynn McGuire wrote:

    On 3/3/2024 11:43 PM, Lawrence D'Oliveiro wrote:

    Did you know the life-support system on the
    International Space Station was written in Ada? Not something you would
    trust C++ code to, let’s face it.

    Most of the Ada code was written in C or C++ and converted to Ada for
    delivery.

    Was it debugged again? Or was it assumed that the translation was bug-
    free?

    With Ada, if you can get it to compile, it's ready to ship :-)



    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From David Brown@3:633/280.2 to All on Tue Mar 5 21:31:11 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 04/03/2024 22:11, Lawrence D'Oliveiro wrote:
    On Mon, 4 Mar 2024 15:41:43 +0100, David Brown wrote:

    ... Lady Ada Lovelace is often regarded (perhaps
    incorrectly) as the first computer programmer.

    She was the first, in written records, to appreciate some of the not-so-obvious issues in computer programming.

    Yes. That includes realising that computers could do more than number
    crunching. She was also involved in checking, correcting and commenting
    some of Babbage's programs, and also was the first to publish an
    algorithm (for Bernoulli numbers) designed specifically for executing on
    a computer. And she did all this without a working computer.

    So while calling her "the first computer programmer" is inaccurate, she
    was definitely a key computer science pioneer.


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Mr. Man-wai Chang@3:633/280.2 to All on Wed Mar 6 00:51:26 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/3/2024 7:13 am, Lynn McGuire wrote:

    "The Biden administration backs a switch to more memory-safe programming languages. The tech industry sees their point, but it won't be easy."

    No. The feddies want to regulate software development very much. They
    have been talking about it for at least 20 years now. This is a very
    bad thing.

    A responsible, good programmer or a better C/C++ pre-processor can
    avoid a lot of problems!!

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Wed Mar 6 01:56:58 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity Risks"
    Reply-To: slp53@pacbell.net

    Lynn McGuire <lynnmcguire5@gmail.com> writes:
    On 3/3/2024 9:31 AM, Scott Lurndal wrote:
    Lynn McGuire <lynnmcguire5@gmail.com> writes:
    "White House to Developers: Using C or C++ Invites Cybersecurity Risks"

    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe programming >>> languages. The tech industry sees their point, but it won't be easy."

    No. The feddies want to regulate software development very much.

    You've been reading far to much apocalyptic fiction and seeing the
    world through trump-colored glasses. Neither reflect reality.

    Nope, I actually have had a Professional Engineer's License in Texas for
    34 years now and can tell you all about what it takes to get one and
    what it takes to keep one.

    This bunch of crazies in the White House wants to do the same thing to software development.


    Nothing in the quoted article supports your ridiculous assertion.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Wed Mar 6 07:51:21 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/5/2024 1:01 AM, David Brown wrote:
    On 04/03/2024 21:36, Chris M. Thomasson wrote:
    On 3/4/2024 12:44 AM, David Brown wrote:
    On 03/03/2024 23:01, Chris M. Thomasson wrote:
    On 3/3/2024 12:23 PM, David Brown wrote:
    On 03/03/2024 19:18, Kaz Kylheku wrote:

    Embedded systems often need custom memory management, not
    something that
    the language imposes. C has malloc, yet even that gets disused in >>>>>> favor
    of something else.


    For safe embedded systems, you don't want memory management at all. >>>>> Avoiding dynamic memory is an important aspect of safety-critical
    embedded development.


    You still have to think about memory management even if you avoid
    any dynamic memory? How are you going to mange this memory wrt your
    various data structures needs....

    To be clear here - sometimes you can't avoid all use of dynamic
    memory and therefore memory management. And as Kaz says, you will
    often use custom solutions such as resource pools rather than generic
    malloc/free. Flexible network communication (such as Ethernet or
    other IP networking) is hard to do without dynamic memory.
    [...]

    Think of using a big chunk of memory, never needed to be freed and is
    just there per process. Now, you carve it up and store it in a cache
    that has functions push and pop. So, you still have to manage memory
    even when you are using no dynamic memory at all... Fair enough, in a
    sense? The push and the pop are your malloc and free in a strange
    sense...


    I believe I mentioned that. You do not, in general, "push and pop" -
    you malloc and never free. Excluding debugging code and other parts
    useful in testing and developing, you have something like :

    #include <stdint.h>
    #include <stddef.h>
    #include <stdalign.h>

    enum { heap_size = 16384 };
    alignas(max_align_t) static uint8_t heap[heap_size];
    uint8_t *next_free = heap;

    /* free() is a no-op - allocations are never released. */
    void free(void *ptr) {
        (void) ptr;
    }

    /* Bump allocator: round the request up to the maximum alignment and
       hand out the next slice of the static heap. (Checks that the heap
       is not exhausted belong in the debugging code excluded above.) */
    void *malloc(size_t size) {
        const size_t align = alignof(max_align_t);
        const size_t real_size = size ? (size + (align - 1)) & ~(align - 1)
                                      : align;
        void *p = next_free;
        next_free += real_size;
        return p;
    }


    Allowing for pops requires storing the size of the allocations (unless
    you change the API from that of malloc/free), and is only rarely useful.
    Generally if you want memory that temporary, you use a VLA or alloca
    to put it on the stack.


    Wrt systems with no malloc/free, I am thinking more along the lines of a
    region allocator mixed with a LIFO for a cache, so a node-based thing.
    The region allocator gets fed with a large buffer. Depending on specific
    needs, it can work out nicely for systems that do not have malloc/free.
    The pattern I used, iirc, was something like:

    // pseudo code...
    _______________________
    node*
    node_pop()
    {
        // try the lifo first...

        node* n = lifo_pop();

        if (! n)
        {
            // resort to the region allocator...

            n = region_allocate_node();

            // note, n can be null here.
            // if it is, we are out of memory.

            // note, out of memory on a system
            // with no malloc/free...
        }

        return n;
    }

    void
    node_push(
        node* n
    ) {
        lifo_push(n);
    }
    _______________________


    make any sense to you?
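
    For what it's worth, the pseudo code above can be fleshed out into
    compilable C along these lines - the region size, node layout and
    helper names are all invented for the sketch:

    #include <stddef.h>

    typedef struct node {
        struct node *next;
        unsigned char payload[28];
    } node;

    /* The large buffer handed to the region allocator up front. */
    static node region[256];
    static size_t region_used;

    /* LIFO cache of nodes that have been pushed back. */
    static node *lifo_top;

    static node *lifo_pop(void)
    {
        node *n = lifo_top;
        if (n)
            lifo_top = n->next;
        return n;
    }

    static void lifo_push(node *n)
    {
        n->next = lifo_top;
        lifo_top = n;
    }

    /* Hand out the next unused slot of the region, or NULL when it is
       exhausted - there is no malloc to fall back on. */
    static node *region_allocate_node(void)
    {
        if (region_used == sizeof region / sizeof region[0])
            return NULL;
        return &region[region_used++];
    }

    node *node_pop(void)
    {
        node *n = lifo_pop();            /* try the recycled nodes first */
        if (!n)
            n = region_allocate_node();  /* may be NULL: out of memory */
        return n;
    }

    void node_push(node *n)
    {
        lifo_push(n);   /* recycled via the LIFO, never returned to the region */
    }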


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Wed Mar 6 08:01:52 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/5/2024 2:27 AM, David Brown wrote:
    On 05/03/2024 08:08, Lawrence D'Oliveiro wrote:
    On Tue, 5 Mar 2024 00:03:54 -0600, Lynn McGuire wrote:

    On 3/3/2024 11:43 PM, Lawrence D'Oliveiro wrote:

    Did you know the life-support system on the
    International Space Station was written in Ada? Not something you would >>>> trust C++ code to, let’s face it.

    Most of the Ada code was written in C or C++ and converted to Ada for
    delivery.

    Was it debugged again? Or was it assumed that the translation was bug-
    free?

    With Ada, if you can get it to compile, it's ready to ship :-)



    Really? Any logic errors in the program itself?

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Kaz Kylheku@3:633/280.2 to All on Wed Mar 6 08:24:26 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites
    Cybersecurity Risks"

    On 2024-03-05, Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 3/5/2024 2:27 AM, David Brown wrote:
    On 05/03/2024 08:08, Lawrence D'Oliveiro wrote:
    On Tue, 5 Mar 2024 00:03:54 -0600, Lynn McGuire wrote:

    On 3/3/2024 11:43 PM, Lawrence D'Oliveiro wrote:

    Did you know the life-support system on the
    International Space Station was written in Ada? Not something you would
    trust C++ code to, let’s face it.

    Most of the Ada code was written in C or C++ and converted to Ada for
    delivery.

    Was it debugged again? Or was it assumed that the translation was bug-
    free?

    With Ada, if you can get it to compile, it's ready to ship :-)

    Really? Any logic errors in the program itself?

    Ariane 5 rocket incident of 1996: The Ada code didn't catch the hardware overflow exception from forcing a 64 bit floating-point value into a 16
    bit integer. The situation was not expected by the code which was
    developed for the Ariane 4, or something like that.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Wed Mar 6 08:44:30 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/5/2024 1:24 PM, Kaz Kylheku wrote:
    On 2024-03-05, Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 3/5/2024 2:27 AM, David Brown wrote:
    On 05/03/2024 08:08, Lawrence D'Oliveiro wrote:
    On Tue, 5 Mar 2024 00:03:54 -0600, Lynn McGuire wrote:

    On 3/3/2024 11:43 PM, Lawrence D'Oliveiro wrote:

    Did you know the life-support system on the
    International Space Station was written in Ada? Not something you would
    trust C++ code to, let’s face it.

    Most of the Ada code was written in C or C++ and converted to Ada for
    delivery.

    Was it debugged again? Or was it assumed that the translation was bug-
    free?

    With Ada, if you can get it to compile, it's ready to ship :-)

    Really? Any logic errors in the program itself?

    Ariane 5 rocket incident of 1996: The Ada code didn't catch the hardware overflow exception from forcing a 64 bit floating-point value into a 16
    bit integer. The situation was not expected by the code which was
    developed for the Ariane 4, or something like that.


    I need to study up on that one; Thanks. Fwiw, the joint strike fighter
    C++ rules are interesting to me as well. Can a little bugger make one of
    its air-to-air missiles fire? Now I can hear one of my friends saying,
    see, I told you that human programmers cannot be trusted... ;^) lol.

    ADA is bullet proof... Until its not... ;^)

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Wed Mar 6 08:48:25 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/4/2024 11:07 PM, Lawrence D'Oliveiro wrote:
    On Mon, 4 Mar 2024 21:23:49 -0800, Chris M. Thomasson wrote:

    On 3/4/2024 8:43 PM, Lawrence D'Oliveiro wrote:

    On Tue, 5 Mar 2024 02:46:33 +0000, Malcolm McLean wrote:

    The less code you have, the less that can go wrong.

    This can also mean using the build system to automatically generate
    some repetitive things, to avoid having to write them manually.

    Does the build system depend on anything coded in C?

    These days, it might be Rust.

    The keyword is might... Right?

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Keith Thompson@3:633/280.2 to All on Wed Mar 6 08:58:10 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity Risks"

    Kaz Kylheku <433-929-6894@kylheku.com> writes:
    On 2024-03-05, Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 3/5/2024 2:27 AM, David Brown wrote:
    On 05/03/2024 08:08, Lawrence D'Oliveiro wrote:
    On Tue, 5 Mar 2024 00:03:54 -0600, Lynn McGuire wrote:

    On 3/3/2024 11:43 PM, Lawrence D'Oliveiro wrote:

    Did you know the life-support system on the
    International Space Station was written in Ada? Not something you would
    trust C++ code to, let’s face it.

    Most of the Ada code was written in C or C++ and converted to Ada for
    delivery.

    Was it debugged again? Or was it assumed that the translation was bug-
    free?

    With Ada, if you can get it to compile, it's ready to ship :-)

    Really? Any logic errors in the program itself?

    Ariane 5 rocket incident of 1996: The Ada code didn't catch the hardware overflow exception from forcing a 64 bit floating-point value into a 16
    bit integer. The situation was not expected by the code which was
    developed for the Ariane 4, or something like that.

    A numeric overflow occurred during the Ariane 5's initial flight -- and
    the software *did* catch the overflow. The same overflow didn't occur
    on Ariane 4 because of its different flight profile. There was a
    management decision to reuse the Ariane 4 flight software for Ariane 5
    without sufficient review.

    The code (which had been thoroughly tested on Ariane 4 and was known not
    to overflow) emitted an error message describing the overflow exception.
    That error message was then processed as data. Another problem was that systems were designed to shut down on any error; as a result, healthy
    and necessary equipment was shut down prematurely.

    This is from my vague memory, and may not be entirely accurate.

    *Of course* logic errors are possible in Ada programs, but in my
    experience and that of many other programmers, if you get an Ada program
    to compile (and run without raising unhandled exceptions), you're likely
    to be much closer to a working program than if you get a C program to
    compile. A typo in a C program is more likely to result in a valid
    program with different semantics.
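
    As a concrete illustration of that last point - the example is mine, not
    from the post - here is the classic sort of typo that still compiles as
    valid C but silently changes the meaning:

    #include <stdio.h>

    int main(void) {
        int err = 0;
        if (err = 0) {        /* typo: "=" where "==" was intended */
            puts("no error"); /* with "==" this branch would be taken */
        } else {
            puts("error");    /* with "=" the condition assigns 0 and is
                                 always false, so this is what runs */
        }
        return 0;
    }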

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Medtronic
    void Void(void) { Void(); } /* The recursive call of the void */

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: None to speak of (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Wed Mar 6 09:02:43 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/5/2024 1:58 PM, Keith Thompson wrote:
    Kaz Kylheku <433-929-6894@kylheku.com> writes:
    On 2024-03-05, Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 3/5/2024 2:27 AM, David Brown wrote:
    On 05/03/2024 08:08, Lawrence D'Oliveiro wrote:
    On Tue, 5 Mar 2024 00:03:54 -0600, Lynn McGuire wrote:

    On 3/3/2024 11:43 PM, Lawrence D'Oliveiro wrote:

    Did you know the life-support system on the
    International Space Station was written in Ada? Not something you would
    trust C++ code to, let’s face it.

    Most of the Ada code was written in C or C++ and converted to Ada for
    delivery.

    Was it debugged again? Or was it assumed that the translation was bug-
    free?

    With Ada, if you can get it to compile, it's ready to ship :-)

    Really? Any logic errors in the program itself?

    Ariane 5 rocket incident of 1996: The Ada code didn't catch the hardware
    overflow exception from forcing a 64 bit floating-point value into a 16
    bit integer. The situation was not expected by the code which was
    developed for the Ariane 4, or something like that.

    A numeric overflow occurred during the Ariane 5's initial flight -- and
    the software *did* catch the overflow. The same overflow didn't occur
    on Ariane 4 because of its different flight profile. There was a
    management decision to reuse the Ariane 4 flight software for Ariane 5 without sufficient review.

    The code (which had been thoroughly tested on Ariane 4 and was known not
    to overflow) emitted an error message describing the overflow exception.
    That error message was then processed as data. Another problem was that systems were designed to shut down on any error; as a result, healthy
    and necessary equipment was shut down prematurely.

    This is from my vague memory, and may not be entirely accurate.

    *Of course* logic errors are possible in Ada programs, but in my
    experience and that of many other programmers, if you get an Ada program
    to compile (and run without raising unhandled exceptions), you're likely
    to be much closer to a working program than if you get a C program to compile. A typo in a C program is more likely to result in a valid
    program with different semantics.


    So close you can just feel it's a 100% correct and working program?

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Keith Thompson@3:633/280.2 to All on Wed Mar 6 09:11:51 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity Risks"

    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
    [...]
    ADA is bullet proof... Until its not... ;^)

    The language is called Ada, not ADA.

    Of course no language that can be used for real work can be completely bulletproof. Ada is designed to be relatively safe (and neither of
    these newsgroups is the place to discuss the details.)

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Medtronic
    void Void(void) { Void(); } /* The recursive call of the void */

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: None to speak of (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Wed Mar 6 09:34:03 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/5/2024 2:11 PM, Keith Thompson wrote:
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
    [...]
    ADA is bullet proof... Until its not... ;^)

    The language is called Ada, not ADA.

    I wonder how many people got confused?


    Of course no language that can be used for real work can be completely bulletproof. Ada is designed to be relatively safe (and neither of
    these newsgroups is the place to discuss the details.)


    That's fine.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Wed Mar 6 09:58:10 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Tue, 5 Mar 2024 11:11:03 +0200, Michael S wrote:

    On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    Discord did some benchmarking of its back-end servers, which had been
    using Go, and decided that switching to Rust offered better
    performance.

    - for big and complex real-world back-end processing, writing working
    solution in go will take 5 time less man hours than writing it in Rust

    Nevertheless, they found the switch to Rust worthwhile.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Wed Mar 6 11:25:08 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Tue, 5 Mar 2024 11:31:11 +0100, David Brown wrote:

    That includes realising that computers could do more than number
    crunching.

    Or, conversely, realizing that all forms of computation (including symbol manipulation) can be expressed as arithmetic? Maybe that came later, cf “Gödel numbering”.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Wed Mar 6 11:25:49 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Tue, 5 Mar 2024 13:48:25 -0800, Chris M. Thomasson wrote:

    On 3/4/2024 11:07 PM, Lawrence D'Oliveiro wrote:

    On Mon, 4 Mar 2024 21:23:49 -0800, Chris M. Thomasson wrote:

    Does the build system depend on anything coded in C?

    These days, it might be Rust.

    The keyword is might... Right?

    Might does not make right.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Wed Mar 6 17:01:01 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/5/2024 4:25 PM, Lawrence D'Oliveiro wrote:
    On Tue, 5 Mar 2024 13:48:25 -0800, Chris M. Thomasson wrote:

    On 3/4/2024 11:07 PM, Lawrence D'Oliveiro wrote:

    On Mon, 4 Mar 2024 21:23:49 -0800, Chris M. Thomasson wrote:

    Does the build system depend on anything coded in C?

    These days, it might be Rust.

    The keyword is might... Right?

    Might does not make right.

    So, what is the right language to use?

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Mr. Man-wai Chang@3:633/280.2 to All on Wed Mar 6 18:43:46 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 5/3/2024 9:51 pm, Mr. Man-wai Chang wrote:
    On 3/3/2024 7:13 am, Lynn McGuire wrote:

    "The Biden administration backs a switch to more memory-safe programming
    languages. The tech industry sees their point, but it won't be easy."

    No. The feddies want to regulate software development very much. They
    have been talking about it for at least 20 years now. This is a very
    bad thing.

    A responsible, good programmer or a better C/C++ pre-processor can
    avoid a lot of problems!!

    Or maybe A.I.-assisted code analyzer?? But there are still blind spots...

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From David Brown@3:633/280.2 to All on Wed Mar 6 21:43:21 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 05/03/2024 21:51, Chris M. Thomasson wrote:
    On 3/5/2024 1:01 AM, David Brown wrote:
    On 04/03/2024 21:36, Chris M. Thomasson wrote:
    On 3/4/2024 12:44 AM, David Brown wrote:
    On 03/03/2024 23:01, Chris M. Thomasson wrote:
    On 3/3/2024 12:23 PM, David Brown wrote:
    On 03/03/2024 19:18, Kaz Kylheku wrote:

    Embedded systems often need custom memory management, not
    something that
    the language imposes. C has malloc, yet even that gets disused in favor
    of something else.


    For safe embedded systems, you don't want memory management at
    all. Avoiding dynamic memory is an important aspect of
    safety-critical embedded development.


    You still have to think about memory management even if you avoid
    any dynamic memory? How are you going to manage this memory wrt your
    various data structures needs....

    To be clear here - sometimes you can't avoid all use of dynamic
    memory and therefore memory management. And as Kaz says, you will
    often use custom solutions such as resource pools rather than
    generic malloc/free. Flexible network communication (such as
    Ethernet or other IP networking) is hard to do without dynamic memory.
    [...]

    Think of using a big chunk of memory, never needed to be freed and is
    just there per process. Now, you carve it up and store it in a cache
    that has functions push and pop. So, you still have to manage memory
    even when you are using no dynamic memory at all... Fair enough, in a
    sense? The push and the pop are your malloc and free in a strange
    sense...


    I believe I mentioned that. You do not, in general, "push and pop" -
    you malloc and never free. Excluding debugging code and other parts
    useful in testing and developing, you have something like :

    enum { heap_size = 16384 };
    alignas(max_align_t) static uint8_t heap[heap_size];
    uint8_t * next_free = heap;

    void free(void * ptr) {
    (void) ptr;
    }

    void * malloc(size_t size) {
    const size_t align = alignof(max_align_t);
    const size_t real_size = size ? (size + (align - 1)) & ~(align - 1)
    : align;
    void * p = next_free;
    next_free += real_size;
    return p;
    }


    Allowing for pops requires storing the size of the allocations (unless
    you change the API from that of malloc/free), and is only rarely
    useful. Generally if you want memory that temporary, you use a VLA
    or alloca to put it on the stack.


    wrt systems with no malloc/free I am thinking more along the lines of a region allocator mixed with a LIFO for a cache, so a node based thing.
    The region allocator gets fed with a large buffer. Depending on specific needs, it can work out nicely for systems that do not have malloc/free.
    The pattern I used iirc, was something like:

    // pseudo code...
    _______________________
    node*
    node_pop()
    {
    // try the lifo first...

    node* n = lifo_pop();

    if (! n)
    {
    // resort to the region allocator...

    n = region_allocate_node();

    // note, n can be null here.
    // if it is, we are out of memory.

    // note, out of memory on a system
    // with no malloc/free...
    }

    return n;
    }

    void
    node_push(
    node* n
    ) {
    lifo_push(n);
    }
    _______________________


    make any sense to you?


    I know what you are trying to suggest, and I understand how it can sound reasonable. In some cases, this can be a useful kind of allocator, and
    when it is suitable, it is very fast. But it has two big issues for
    small embedded systems.

    One problem is the "region_allocate_node()" - getting a lump of space
    from the underlying OS. That is fine on "big systems", and it is normal
    that malloc/free systems only ask for memory from the OS in big lumps,
    then handle local allocation within the process space for efficiency.
    (This can work particularly well if each thread gets dedicated lumps, so
    that no locking is needed for most malloc/free calls.)

    But in a small embedded system, there is no OS (an RTOS is generally
    part of the same binary as the application), and providing such "lumps"
    would be dynamic memory management. So if you are using a system like
    you describe, then you would have a single statically allocated block of memory for your lifo stack.

    Then there is the question of how often such a stack-like allocator is
    useful, independent of the normal stack. I can imagine it is
    /sometimes/ helpful, but rarely. I can't think off-hand of any cases
    where I would have found it useful in anything I have written.

    As I (and others) have said elsewhere, in small embedded systems and
    safety or reliability critical systems, you want to avoid dynamic memory
    and memory management whenever possible, for a variety of reasons. If
    you do need something, then specialised allocators are more common -
    possibly including lifos like this.

    But it's more likely to have fixed-size pools with fixed-size elements, dedicated to particular memory tasks. For example, if you need to track multiple in-flight messages on a wireless mesh network, where messages
    might take different amounts of time to be delivered and acknowledged,
    or retried, you define a structure that holds all the data you need for
    a message. Then you decide how many in-flight messages you will support
    as a maximum. This gives you a statically allocated array of N structs.
    Block usage is then done by a bitmap, typically within a single 32-bit
    word. Finding a free slot is just finding the first zero bit, and
    freeing it is clearing the correct bit.
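
    For concreteness, a minimal sketch of such a bitmap-managed pool. The
    message struct, the pool size and the function names below are invented
    for the example - only the technique (a statically allocated array of N
    structs with one usage bit per slot in a 32-bit word) is from the
    description above.

    #include <stddef.h>
    #include <stdint.h>

    #define MAX_IN_FLIGHT 16                 /* maximum in-flight messages supported */

    typedef struct {
        uint8_t  dest_addr[8];
        uint16_t length;
        uint32_t deadline_ms;
        uint8_t  payload[64];
    } msg_slot;

    static msg_slot slots[MAX_IN_FLIGHT];    /* statically allocated array of N structs */
    static uint32_t slots_used = 0;          /* bit i set => slots[i] is in use */

    /* Allocation: find the first zero bit, set it, return the matching slot. */
    msg_slot * msg_slot_alloc(void) {
        for (unsigned i = 0; i < MAX_IN_FLIGHT; i++) {
            if (!(slots_used & (UINT32_C(1) << i))) {
                slots_used |= UINT32_C(1) << i;
                return &slots[i];
            }
        }
        return NULL;                         /* all slots in use */
    }

    /* Freeing a slot is just clearing its bit. */
    void msg_slot_free(msg_slot * m) {
        size_t i = (size_t)(m - slots);
        slots_used &= ~(UINT32_C(1) << i);
    }

    On targets with a find-first-set or count-trailing-zeros instruction the
    search loop collapses to a single builtin, but a linear scan over at most
    32 bits is already cheap.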

    There are, of course, many other kinds of dedicated allocators that can
    be used in other circumstances.


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Michael S@3:633/280.2 to All on Wed Mar 6 23:02:14 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites
    Cybersecurity Risks"

    On Tue, 5 Mar 2024 22:58:10 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 5 Mar 2024 11:11:03 +0200, Michael S wrote:

    On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    Discord did some benchmarking of its back-end servers, which had
    been using Go, and decided that switching to Rust offered better
    performance.

    - for big and complex real-world back-end processing, writing
    working solution in go will take 5 time less man hours than writing
    it in Rust

    Nevertheless, they found the switch to Rust worthwhile.

    I read a little more about it. https://discord.com/blog/why-discord-is-switching-from-go-to-rust

    Summary: performance of one of Discord's most heavy-duty servers
    suffered from a weakness in the implementation of the Go garbage collector.
    On average the performance was satisfactory, but every two minutes there
    was a spike in latency. The latency during the spike was not that big
    (300 msec), but they still felt they wanted better.
    They tried to tune the GC, but the problem appeared to be fundamental.
    So they just rewrote this particular server in Rust. Naturally, Rust
    does not collect garbage, so this particular problem disappeared.

    The key phrase of the story is "This service was a great candidate to
    port to Rust since it was small and self-contained".
    I'd add to this that even more important for the eventual success of the
    migration was the fact that, at the time of the rewrite, the server had
    already been running for several years, so requirements were stable and
    well-understood.
    Another factor is that their service does not create/free that many
    objects. The delay was caused by the mere fact of GC scanning rather than
    by frequent compacting of memory pools. So, from the beginning it was
    obvious that potential fragmentation of the heap, which is the main
    weakness of "plain" C/C++/Rust based solutions for Web back-ends, does
    not apply in their case.

    There is also a non-technical angle involved: Discord is fueled by
    investors' money. It's not that they have no revenues at all, but their
    revenues at this stage are not supposed to cover their expenses.
    Companies that operate in such a mode have a different
    perspective on just about everything. I mean, different from the
    perspective of people like myself, working in a company that fights hard
    to stay profitable and succeeds more often than not.

    I have a few questions about the story. The most important one is whether
    a weakness of this sort is specific to Go's GC, due to its relative
    immaturity, or is more general and applies equally to the most mature GCs
    on the market, i.e. J2EE and .NET.
    Another question is whether the problem is specific to GC-style
    automatic memory management (AMM) or applies, at least to some degree,
    to other forms of AMM, most importantly to AMM based on reference
    counting, as used by Swift and also popular in C++.
    Of course, I don't expect that my questions will be answered fully on
    comp.lang.c, but if some knowledgeable posters try to answer, I would
    appreciate it.


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From bart@3:633/280.2 to All on Wed Mar 6 23:28:59 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 06/03/2024 12:02, Michael S wrote:
    On Tue, 5 Mar 2024 22:58:10 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 5 Mar 2024 11:11:03 +0200, Michael S wrote:

    On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    Discord did some benchmarking of its back-end servers, which had
    been using Go, and decided that switching to Rust offered better
    performance.

    - for big and complex real-world back-end processing, writing
    working solution in go will take 5 time less man hours than writing
    it in Rust

    Nevertheless, they found the switch to Rust worthwhile.

    I read a little more about it. https://discord.com/blog/why-discord-is-switching-from-go-to-rust

    Summary: performance of one of Discord's most heavy-duty servers
    suffered from weakness in implementation of Go garbage collector. On
    average the performance was satisfactory, but every two minutes there
    was spike in latency. The latency during the spike was not that big
    (300 msec), but they stilled were feeling that they want better.
    They tried to tune GC, but the problem appeared to be fundamental.
    So they just rewrote this particular server in Rust. Naturally, Rust
    does not collect garbage, so this particular problem disappeared.

    The key phrase of the story is "This service was a great candidate to
    port to Rust since it was small and self-contained".
    I'd add to this that even more important for eventual success of
    migration was the fact that at time of rewrite server was already
    running for several years, so requirements were stable and
    well-understood.
    Another factor is that their service does not create/free that many
    objects. The delay was caused by mere fact of GC scanning rather than
    by frequent compacting of memory pools. So, from the beginning it was
    obvious that potential fragmentation of the heap, which is the main
    weakness of "plain" C/C++/Rust based solutions for Web back-ends, does
    not apply in their case.

    From the same link:

    "Rust uses a relatively unique memory management approach that
    incorporates the idea of memory “ownership”. Basically, Rust keeps track of who can read and write to memory. It knows when the program is using
    memory and immediately frees the memory once it is no longer needed. It enforces memory rules at compile time, making it virtually impossible to
    have runtime memory bugs.⁴ You do not need to manually keep track of
    memory. The compiler takes care of it."

    This suggests the language automatically takes care of this. But you
    have to write your programs in a certain way to make it possible. The programmer has to help the language keep track of what owns what.

    So you will probably be able to do the same thing in another language.
    But Rust will do more compile-time enforcement by restricting how you
    share objects in memory.



    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From David Brown@3:633/280.2 to All on Thu Mar 7 00:31:50 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 05/03/2024 23:34, Chris M. Thomasson wrote:
    On 3/5/2024 2:11 PM, Keith Thompson wrote:
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
    [...]
    ADA is bullet proof... Until its not... ;^)

    The language is called Ada, not ADA.

    I wonder how many people got confused?


    Apparently you and Malcolm got confused.

    Others who mentioned the language know it is called "Ada". I not only corrected you, but gave an explanation of it, in the hope that with that clarity, you'd learn.


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From David Brown@3:633/280.2 to All on Thu Mar 7 00:34:50 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 05/03/2024 23:02, Chris M. Thomasson wrote:
    On 3/5/2024 1:58 PM, Keith Thompson wrote:
    Kaz Kylheku <433-929-6894@kylheku.com> writes:
    On 2024-03-05, Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 3/5/2024 2:27 AM, David Brown wrote:
    On 05/03/2024 08:08, Lawrence D'Oliveiro wrote:
    On Tue, 5 Mar 2024 00:03:54 -0600, Lynn McGuire wrote:

    On 3/3/2024 11:43 PM, Lawrence D'Oliveiro wrote:

    Did you know the life-support system on the
    International Space Station was written in Ada? Not something you would
    trust C++ code to, let’s face it.

    Most of the Ada code was written in C or C++ and converted to Ada for
    delivery.

    Was it debugged again? Or was it assumed that the translation was bug-
    free?

    With Ada, if you can get it to compile, it's ready to ship :-)

    Really? Any logic errors in the program itself?

    Ariane 5 rocket incident of 1996: The Ada code didn't catch the hardware
    overflow exception from forcing a 64 bit floating-point value into a 16
    bit integer. The situation was not expected by the code which was
    developed for the Ariane 4, or something like that.

    A numeric overflow occurred during the Ariane 5's initial flight -- and
    the software *did* catch the overflow. The same overflow didn't occur
    on Ariane 4 because of its different flight profile. There was a
    management decision to reuse the Ariane 4 flight software for Ariane 5
    without sufficient review.

    The code (which had been thoroughly tested on Ariane 4 and was known not
    to overflow) emitted an error message describing the overflow exception.
    That error message was then processed as data. Another problem was that
    systems were designed to shut down on any error; as a result, healthy
    and necessary equipment was shut down prematurely.

    This is from my vague memory, and may not be entirely accurate.

    That matches my recollection too.


    *Of course* logic errors are possible in Ada programs, but in my
    experience and that of many other programmers, if you get an Ada program
    to compile (and run without raising unhandled exceptions), you're likely
    to be much closer to a working program than if you get a C program to
    compile. A typo in a C program is more likely to result in a valid
    program with different semantics.


    So close you can just feel its a 100% correct and working program?

    Didn't you notice the smiley in my comment? It used to be a running
    joke that if you managed to get your Ada code to compile, it was ready
    to ship. The emphasis is on the word "joke".


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From David Brown@3:633/280.2 to All on Thu Mar 7 00:40:46 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 06/03/2024 01:25, Lawrence D'Oliveiro wrote:
    On Tue, 5 Mar 2024 11:31:11 +0100, David Brown wrote:

    That includes realising that computers could do more than number
    crunching.

    Or, conversely, realizing that all forms of computation (including symbol manipulation) can be expressed as arithmetic?

    That's also a reasonable way to put it. I have not read any of her
    writings, so I don't know exactly how she described things.

    Maybe that came later, cf
    “Gödel numbering”.

    That's getting a few steps further on - it is treating programs as data,
    and I don't think there's any reason to suspect that was something Ada Lovelace thought about. It's also very theoretical, while Ada was more interested in the practical applications of computers.


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From bart@3:633/280.2 to All on Thu Mar 7 00:50:16 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 06/03/2024 13:31, David Brown wrote:
    On 05/03/2024 23:34, Chris M. Thomasson wrote:
    On 3/5/2024 2:11 PM, Keith Thompson wrote:
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
    [...]
    ADA is bullet proof... Until its not... ;^)

    The language is called Ada, not ADA.

    I wonder how many people got confused?


    Apparently you and Malcolm got confused.

    Others who mentioned the language know it is called "Ada". I not only corrected you, but gave an explanation of it, in the hope that with that clarity, you'd learn.


    Whoever wrote this short Wikipedia article on it got confused too as it
    uses both Ada and ADA:

    https://simple.wikipedia.org/wiki/Ada_(programming_language)

    (The example program also includes 'Ada' as some package name. Since it
    is case-insensitive, 'ADA' would also work.)

    Here's also a paper that uses 'ADA' (I assume it is the same language):

    https://www.sciencedirect.com/science/article/abs/pii/0166361582900136

    Personally I'm not bothered whether anyone uses Ada or ADA. Is 'C'
    written in all-caps or only capitalised? You can't tell!


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Michael S@3:633/280.2 to All on Thu Mar 7 01:18:42 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites
    Cybersecurity Risks"

    On Wed, 6 Mar 2024 13:50:16 +0000
    bart <bc@freeuk.com> wrote:

    On 06/03/2024 13:31, David Brown wrote:
    On 05/03/2024 23:34, Chris M. Thomasson wrote:
    On 3/5/2024 2:11 PM, Keith Thompson wrote:
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
    [...]
    ADA is bullet proof... Until its not... ;^)

    The language is called Ada, not ADA.

    I wonder how many people got confused?


    Apparently you and Malcolm got confused.

    Others who mentioned the language know it is called "Ada". I not
    only corrected you, but gave an explanation of it, in the hope that
    with that clarity, you'd learn.


    Whoever wrote this short Wikipedia article on it got confused too as
    it uses both Ada and ADA:

    https://simple.wikipedia.org/wiki/Ada_(programming_language)

    (The example program also includes 'Ada' as some package name. Since
    it is case-insensitive, 'ADA' would also work.)


    Your link is to "simple Wikipedia". I don't know what it is
    exactly, but it does not appear as authoritative as real Wikipedia

    https://en.wikipedia.org/wiki/Ada_(programming_language)

    Here's also a paper that uses 'ADA' (I assume it is the same
    language):

    https://www.sciencedirect.com/science/article/abs/pii/0166361582900136

    The article was published in 1982. The language became official in 1983.
    Possibly, in 1982 there was still some confusion w.r.t. its name.

    Personally I'm not bothered whether anyone uses Ada or ADA. Is 'C'
    written in all-caps or only capitalised? You can't tell!

    If only ADA, written in upper case, was not widely used for something
    else...




    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From aph@littlepinkcloud.invalid@3:633/280.2 to All on Thu Mar 7 01:30:58 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity Risks"

    In comp.lang.c Michael S <already5chosen@yahoo.com> wrote:
    On Tue, 5 Mar 2024 22:58:10 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 5 Mar 2024 11:11:03 +0200, Michael S wrote:

    On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    Discord did some benchmarking of its back-end servers, which had
    been using Go, and decided that switching to Rust offered better
    performance.

    - for big and complex real-world back-end processing, writing
    working solution in go will take 5 time less man hours than writing
    it in Rust

    Nevertheless, they found the switch to Rust worthwhile.

    I read a little more about it. https://discord.com/blog/why-discord-is-switching-from-go-to-rust

    Summary: performance of one of Discord's most heavy-duty servers
    suffered from weakness in implementation of Go garbage collector. On
    average the performance was satisfactory, but every two minutes there
    was spike in latency. The latency during the spike was not that big
    (300 msec), but they stilled were feeling that they want better.

    ....

    I have few questions about the story, most important one is whether the weakness of this sort is specific to GC of Go, due to its relative
    immaturity

    I'm sure it is. 300ms is terrible.

    or more general and applies equally to most mature GCs on the
    market, i.e. J2EE and .NET.

    Continuously-compacting concurrent collectors like those available for
    Java aim for less than 10ms, and often hit 1ms. You have to stop each
    thread briefly to scan its stack and do a few other things, but that's
    all.

    Andrew.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From bart@3:633/280.2 to All on Thu Mar 7 01:38:25 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 06/03/2024 14:18, Michael S wrote:
    On Wed, 6 Mar 2024 13:50:16 +0000
    bart <bc@freeuk.com> wrote:

    Whoever wrote this short Wikipedia article on it got confused too as
    it uses both Ada and ADA:

    https://simple.wikipedia.org/wiki/Ada_(programming_language)

    (The example program also includes 'Ada' as some package name. Since
    it is case-insensitive, 'ADA' would also work.)


    Your link is to "simple Wikipedia". I don't know what it is
    exactly, but it does not appear as authoritative as real Wikipedia

    https://en.wikipedia.org/wiki/Ada_(programming_language)

    Here's also a paper that uses 'ADA' (I assume it is the same
    language):

    https://www.sciencedirect.com/science/article/abs/pii/0166361582900136


    The article published 1982. The language became official in 1983.
    Possibly, in 1982 there still was a confusion w.r.t. its name.

    It would have been known that it was named after a person. (I think Lovelace
    would have been better though.)

    Personally I'm not bothered whether anyone uses Ada or ADA. Is 'C'
    written in all-caps or only capitalised? You can't tell!


    If only ADA, written in upper case, was not widely used for something
    else...

    I don't know what that is without looking it up. In a programming
    newsgroup I expect ADA to be the language.

    BTW it's a good thing that C, written in upper case, can never be
    confused with anything else...


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Keith Thompson@3:633/280.2 to All on Thu Mar 7 02:42:37 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity Risks"

    bart <bc@freeuk.com> writes:
    [...]
    Whoever wrote this short Wikipedia article on it got confused too as
    it uses both Ada and ADA:

    https://simple.wikipedia.org/wiki/Ada_(programming_language)

    Fixed.

    [...]

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Medtronic
    void Void(void) { Void(); } /* The recursive call of the void */

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: None to speak of (3:633/280.2@fidonet)
  • From James Kuyper@3:633/280.2 to All on Thu Mar 7 06:14:42 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/6/24 09:18, Michael S wrote:
    On Wed, 6 Mar 2024 13:50:16 +0000
    bart <bc@freeuk.com> wrote:
    ....
    Whoever wrote this short Wikipedia article on it got confused too as
    it uses both Ada and ADA:

    https://simple.wikipedia.org/wiki/Ada_(programming_language)

    (The example program also includes 'Ada' as some package name. Since
    it is case-insensitive, 'ADA' would also work.)


    Your link is to "simple Wikipedia". I don't know what it is
    exactly, but it does not appear as authoritative as real Wikipedia

    Notice that in your following link, "en" appears at the beginning to
    indicate the use of English. "simple" at the beginning of the above link
    serves the same purpose. "Simple English" is its own language, closely
    related to standard English. Read the corresponding Wikipedia article
    for more details.


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From bart@3:633/280.2 to All on Thu Mar 7 06:46:49 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 06/03/2024 14:38, bart wrote:
    On 06/03/2024 14:18, Michael S wrote:

    If only ADA, written in upper case, was not widely used for something
    else...

    I don't know what that is without looking it up. In a programming
    newsgroup I expect ADA to be the language.

    Here's an interesting pic:

    https://upload.wikimedia.org/wikipedia/commons/5/50/AdaLovelaceplaque.JPG

    Notice the upper-case name.



    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Kaz Kylheku@3:633/280.2 to All on Thu Mar 7 06:50:45 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites
    Cybersecurity Risks"

    On 2024-03-06, James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
    On 3/6/24 09:18, Michael S wrote:
    On Wed, 6 Mar 2024 13:50:16 +0000
    bart <bc@freeuk.com> wrote:
    ...
    Whoever wrote this short Wikipedia article on it got confused too as
    it uses both Ada and ADA:

    https://simple.wikipedia.org/wiki/Ada_(programming_language)

    (The example program also includes 'Ada' as some package name. Since
    it is case-insensitive, 'ADA' would also work.)


    Your link is to "simple Wikipedia". I don't know what it is
    exactly, but it does not appear as authoritative as real Wikipedia

    Notice that in your following link, "en" appears at the beginning to
    indicate the use of English. "simple" at the beginning of the above link serves the same purpose. "Simple English" is it's own language, closely related to standard English.

    Where is Simple English spoken? Is there some geographic area where
    native speakers concentrate?

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From David Brown@3:633/280.2 to All on Thu Mar 7 07:13:40 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 06/03/2024 20:50, Kaz Kylheku wrote:
    On 2024-03-06, James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
    On 3/6/24 09:18, Michael S wrote:
    On Wed, 6 Mar 2024 13:50:16 +0000
    bart <bc@freeuk.com> wrote:
    ...
    Whoever wrote this short Wikipedia article on it got confused too as
    it uses both Ada and ADA:

    https://simple.wikipedia.org/wiki/Ada_(programming_language)

    (The example program also includes 'Ada' as some package name. Since
    it is case-insensitive, 'ADA' would also work.)


    Your link is to "simple Wikipedia". I don't know what it is
    exactly, but it does not appear as authoritative as real Wikipedia

    Notice that in your following link, "en" appears at the beginning to
    indicate the use of English. "simple" at the beginning of the above link
    serves the same purpose. "Simple English" is it's own language, closely
    related to standard English.

    Where is Simple English spoken? Is there some geographic area where
    native speakers concentrate?


    It is meant to be simpler text, written in simpler language. The target audience will include younger people, people with dyslexia or other
    reading difficulties, learners of English, people with lower levels of education, people with limited intelligence or learning impediments, or
    simply people whose eyes glaze over when faced with long texts on the
    main Wikipedia pages.



    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Michael S@3:633/280.2 to All on Thu Mar 7 09:00:08 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites
    Cybersecurity Risks"

    On Wed, 6 Mar 2024 12:28:59 +0000
    bart <bc@freeuk.com> wrote:

    On 06/03/2024 12:02, Michael S wrote:
    On Tue, 5 Mar 2024 22:58:10 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 5 Mar 2024 11:11:03 +0200, Michael S wrote:

    On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    Discord did some benchmarking of its back-end servers, which had
    been using Go, and decided that switching to Rust offered better
    performance.

    - for big and complex real-world back-end processing, writing
    working solution in go will take 5 time less man hours than
    writing it in Rust

    Nevertheless, they found the switch to Rust worthwhile.

    I read a little more about it. https://discord.com/blog/why-discord-is-switching-from-go-to-rust

    Summary: performance of one of Discord's most heavy-duty servers
    suffered from weakness in implementation of Go garbage collector. On
    average the performance was satisfactory, but every two minutes
    there was spike in latency. The latency during the spike was not
    that big (300 msec), but they stilled were feeling that they want
    better. They tried to tune GC, but the problem appeared to be
    fundamental. So they just rewrote this particular server in Rust.
    Naturally, Rust does not collect garbage, so this particular
    problem disappeared.

    The key phrase of the story is "This service was a great candidate
    to port to Rust since it was small and self-contained".
    I'd add to this that even more important for eventual success of
    migration was the fact that at time of rewrite server was already
    running for several years, so requirements were stable and
    well-understood.
    Another factor is that their service does not create/free that many
    objects. The delay was caused by mere fact of GC scanning rather
    than by frequent compacting of memory pools. So, from the beginning
    it was obvious that potential fragmentation of the heap, which is
    the main weakness of "plain" C/C++/Rust based solutions for Web
    back-ends, does not apply in their case.

    From the same link:

    "Rust uses a relatively unique memory management approach that
    incorporates the idea of memory “ownership”. Basically, Rust keeps
    track of who can read and write to memory. It knows when the program
    is using memory and immediately frees the memory once it is no longer
    needed. It enforces memory rules at compile time, making it virtually
    impossible to have runtime memory bugs.⁴ You do not need to manually
    keep track of memory. The compiler takes care of it."

    This suggests the language automatically takes care of this.

    Takes care of what?
    AFAIK, heap fragmentation is as bad problem in Rust as it is in
    C/Pascal/Ada etc... In this aspect Rust is clearly inferior to GC-based
    languages like Java, C# or Go.

    But you
    have to write your programs in a certain way to make it possible. The
    programmer has to help the language keep track of what owns what.

    So you will probably be able to do the same thing in another
    language. But Rust will do more compile-time enforcement by
    restricting how you share objects in memory.



    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Thu Mar 7 09:13:58 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/6/2024 5:34 AM, David Brown wrote:
    On 05/03/2024 23:02, Chris M. Thomasson wrote:
    On 3/5/2024 1:58 PM, Keith Thompson wrote:
    Kaz Kylheku <433-929-6894@kylheku.com> writes:
    On 2024-03-05, Chris M. Thomasson <chris.m.thomasson.1@gmail.com>
    wrote:
    On 3/5/2024 2:27 AM, David Brown wrote:
    On 05/03/2024 08:08, Lawrence D'Oliveiro wrote:
    On Tue, 5 Mar 2024 00:03:54 -0600, Lynn McGuire wrote:

    On 3/3/2024 11:43 PM, Lawrence D'Oliveiro wrote:

    Did you know the life-support system on the
    International Space Station was written in Ada? Not something you would
    trust C++ code to, let’s face it.

    Most of the Ada code was written in C or C++ and converted to Ada for
    delivery.

    Was it debugged again? Or was it assumed that the translation was bug-
    free?

    With Ada, if you can get it to compile, it's ready to ship :-)

    Really? Any logic errors in the program itself?

    Ariane 5 rocket incident of 1996: The Ada code didn't catch the
    hardware
    overflow exception from forcing a 64 bit floating-point value into a 16
    bit integer. The situation was not expected by the code which was
    developed for the Ariane 4, or something like that.

    A numeric overflow occurred during the Ariane 5's initial flight -- and
    the software *did* catch the overflow. The same overflow didn't occur
    on Ariane 4 because of its different flight profile. There was a
    management decision to reuse the Ariane 4 flight software for Ariane 5
    without sufficient review.

    The code (which had been thoroughly tested on Ariane 4 and was known not
    to overflow) emitted an error message describing the overflow exception.
    That error message was then processed as data. Another problem was that
    systems were designed to shut down on any error; as a result, healthy
    and necessary equipment was shut down prematurely.

    This is from my vague memory, and may not be entirely accurate.

    That matches my recollection too.


    *Of course* logic errors are possible in Ada programs, but in my
    experience and that of many other programmers, if you get an Ada program
    to compile (and run without raising unhandled exceptions), you're likely
    to be much closer to a working program than if you get a C program to
    compile. A typo in a C program is more likely to result in a valid
    program with different semantics.


    So close you can just feel its a 100% correct and working program?

    Didn't you notice the smiley in my comment? It used to be a running
    joke that if you managed to get your Ada code to compile, it was ready
    to ship. The emphasis is on the word "joke".


    You jest whooshed over my head. Sorry! Humm, well, shit. ;^o

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Thu Mar 7 09:14:51 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/6/2024 5:31 AM, David Brown wrote:
    On 05/03/2024 23:34, Chris M. Thomasson wrote:
    On 3/5/2024 2:11 PM, Keith Thompson wrote:
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
    [...]
    ADA is bullet proof... Until its not... ;^)

    The language is called Ada, not ADA.

    I wonder how many people got confused?


    Apparently you and Malcolm got confused.

    Others who mentioned the language know it is called "Ada". I not only corrected you, but gave an explanation of it, in the hope that with that clarity, you'd learn.


    ADA = nothing
    Ada = the language of Ada

    Got it.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Thu Mar 7 09:18:39 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/6/2024 2:43 AM, David Brown wrote:
    On 05/03/2024 21:51, Chris M. Thomasson wrote:
    On 3/5/2024 1:01 AM, David Brown wrote:
    On 04/03/2024 21:36, Chris M. Thomasson wrote:
    On 3/4/2024 12:44 AM, David Brown wrote:
    On 03/03/2024 23:01, Chris M. Thomasson wrote:
    On 3/3/2024 12:23 PM, David Brown wrote:
    On 03/03/2024 19:18, Kaz Kylheku wrote:

    Embedded systems often need custom memory management, not
    something that
    the language imposes. C has malloc, yet even that gets disused in favor
    of something else.


    For safe embedded systems, you don't want memory management at
    all. Avoiding dynamic memory is an important aspect of
    safety-critical embedded development.


    You still have to think about memory management even if you avoid
    any dynamic memory? How are you going to manage this memory wrt
    your various data structures needs....

    To be clear here - sometimes you can't avoid all use of dynamic
    memory and therefore memory management. And as Kaz says, you will
    often use custom solutions such as resource pools rather than
    generic malloc/free. Flexible network communication (such as
    Ethernet or other IP networking) is hard to do without dynamic memory.
    [...]

    Think of using a big chunk of memory, never needed to be freed and
    is just there per process. Now, you carve it up and store it in a
    cache that has functions push and pop. So, you still have to manage
    memory even when you are using no dynamic memory at all... Fair
    enough, in a sense? The push and the pop are your malloc and free in
    a strange sense...


    I believe I mentioned that. You do not, in general, "push and pop" -
    you malloc and never free. Excluding debugging code and other parts
    useful in testing and developing, you have something like :

    enum { heap_size = 16384 };
    alignas(max_align_t) static uint8_t heap[heap_size];
    uint8_t * next_free = heap;

    void free(void * ptr) {
    (void) ptr;
    }

    void * malloc(size_t size) {
    const size_t align = alignof(max_align_t);
    const size_t real_size = size ? (size + (align - 1)) & ~(align - 1)
    : align;
    void * p = next_free;
    next_free += real_size;
    return p;
    }


    Allowing for pops requires storing the size of the allocations
    (unless you change the API from that of malloc/free), and is only
    rarely useful. Generally if you want memory that temporary, you use
    a VLA or alloca to put it on the stack.


    wrt systems with no malloc/free I am thinking more along the lines of
    a region allocator mixed with a LIFO for a cache, so a node based
    thing. The region allocator gets fed with a large buffer. Depending on
    specific needs, it can work out nicely for systems that do not have
    malloc/free. The pattern I used iirc, was something like:

    // pseudo code...
    _______________________
    node*
    node_pop()
    {
    // try the lifo first...

    node* n = lifo_pop();

    if (! n)
    {
    // resort to the region allocator...

    n = region_allocate_node();

    // note, n can be null here.
    // if it is, we are out of memory.

    // note, out of memory on a system
    // with no malloc/free...
    }

    return n;
    }

    void
    node_push(
    node* n
    ) {
    lifo_push(n);
    }
    _______________________


    make any sense to you?


    I know what you are trying to suggest, and I understand how it can sound reasonable. In some cases, this can be a useful kind of allocator, and
    when it is suitable, it is very fast. But it is has two big issues for small embedded systems.

    One problem is the "region_allocate_node()" - getting a lump of space
    from the underlying OS. That is fine on "big systems", and it is normal that malloc/free systems only ask for memory from the OS in big lumps,
    then handle local allocation within the process space for efficiency.
    (This can work particularly well if each thread gets dedicated lumps, so that no locking is needed for most malloc/free calls.)

    But in a small embedded system, there is no OS (an RTOS is generally
    part of the same binary as the application), and providing such "lumps" would be dynamic memory management. So if you are using a system like
    you describe, then you would have a single statically allocated block of memory for your lifo stack.

    Then there is the question of how often such a stack-like allocator is useful, independent of the normal stack. I can imagine it is
    /sometimes/ helpful, but rarely. I can't think off-hand of any cases
    where I would have found it useful in anything I have written.

    As I (and others) have said elsewhere, in small embedded systems and
    safety or reliability critical systems, you want to avoid dynamic memory
    and memory management whenever possible, for a variety of reasons. If
you do need something, then specialised allocators are more common -
    possibly including lifos like this.

    But it's more likely to have fixed-size pools with fixed-size elements, dedicated to particular memory tasks. For example, if you need to track multiple in-flight messages on a wireless mesh network, where messages
    might take different amounts of time to be delivered and acknowledged,
    or retried, you define a structure that holds all the data you need for
    a message. Then you decide how many in-flight messages you will support
    as a maximum. This gives you a statically allocated array of N structs.
Block usage is then tracked by a bitmap, typically within a single 32-bit
word. Finding a free slot is just finding the first zero bit, and freeing
a slot is clearing the correct bit.
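
To make that concrete, here is a minimal sketch (the msg_slot fields, the
limit of 32 and the names slot_alloc/slot_free are invented purely for
illustration, not taken from any real system):

#include <stdint.h>

// Hypothetical per-message bookkeeping - whatever the protocol needs.
struct msg_slot {
    uint8_t  dest[8];
    uint16_t length;
    uint32_t deadline_ms;
    uint8_t  retries;
};

enum { max_in_flight = 32 };             // design-time maximum

static msg_slot slots[max_in_flight];    // statically allocated pool
static uint32_t used_bitmap = 0;         // bit i set => slots[i] in use

// Find the first clear bit, mark it used, hand out that slot.
static msg_slot * slot_alloc(void) {
    for (unsigned i = 0; i < max_in_flight; i++) {
        if (!(used_bitmap & (UINT32_C(1) << i))) {
            used_bitmap |= (UINT32_C(1) << i);
            return &slots[i];
        }
    }
    return nullptr;                      // pool exhausted - a design error
}

// Freeing a slot is just clearing its bit.
static void slot_free(msg_slot * m) {
    used_bitmap &= ~(UINT32_C(1) << (unsigned)(m - slots));
}

In an interrupt-driven or multi-threaded system the bitmap update would
of course need to be atomic or otherwise protected.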

    There are, of course, many other kinds of dedicated allocators that can
    be used in other circumstances.


    Fair enough. :^)

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From James Kuyper@3:633/280.2 to All on Thu Mar 7 11:27:24 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/6/24 14:50, Kaz Kylheku wrote:
    On 2024-03-06, James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
    ....
    Notice that in your following link, "en" appears at the beginning to
    indicate the use of English. "simple" at the beginning of the above link
serves the same purpose. "Simple English" is its own language, closely
    related to standard English.

    Where is Simple English spoken? Is there some geographic area where
    native speakers concentrate?

    It's a constructed language, which probably has no native speakers. See <https://en.wikipedia.org/wiki/Constructed_language>. Wikipedia has
    articles in several constructed languages. The two biggest such
languages are Esperanto, with 350,598 articles, and Simple English with 248,540.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Thu Mar 7 12:44:25 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Wed, 6 Mar 2024 14:02:14 +0200, Michael S wrote:

    Another factor is that their service does not create/free that many
    objects. The delay was caused by mere fact of GC scanning rather than
    by frequent compacting of memory pools.

    In other words, a GC language could not even cope reasonably with a light memory-management load.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Thu Mar 7 12:45:05 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Wed, 6 Mar 2024 12:28:59 +0000, bart wrote:

    This suggests the language automatically takes care of this. But you
    have to write your programs in a certain way to make it possible.

    You are forced to by default, because if you don’t follow the rules, that’s a compile-time error.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Thu Mar 7 12:46:22 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Wed, 06 Mar 2024 14:30:58 +0000, aph wrote:

    Continuously-compacting concurrent collectors like those available for
    Java aim for less than 10ms, and often hit 1ms.

    What ... a 1ms potential delay every time you want to allocate a new
    object??

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Thu Mar 7 13:00:47 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/6/2024 5:46 PM, Lawrence D'Oliveiro wrote:
    On Wed, 06 Mar 2024 14:30:58 +0000, aph wrote:

    Continuously-compacting concurrent collectors like those available for
    Java aim for less than 10ms, and often hit 1ms.

    What ... a 1ms potential delay every time you want to allocate a new
    object??

    GC can be a no go for certain schemes. GC can be fine and it has its place.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Kaz Kylheku@3:633/280.2 to All on Thu Mar 7 13:37:11 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites
    Cybersecurity Risks"

    On 2024-03-07, Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 3/6/2024 5:46 PM, Lawrence D'Oliveiro wrote:
    On Wed, 06 Mar 2024 14:30:58 +0000, aph wrote:

    Continuously-compacting concurrent collectors like those available for
    Java aim for less than 10ms, and often hit 1ms.

    What ... a 1ms potential delay every time you want to allocate a new
    object??

    GC can be a no go for certain schemes. GC can be fine and it has its place.

    It is the situations where GC cannot be used that are niches that have
    their place. Everywhere else, you can use GC.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Thu Mar 7 14:06:41 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Wed, 6 Mar 2024 19:27:24 -0500, James Kuyper wrote:

    It's a constructed language, which probably has no native speakers.

    Not to be confused with Basic English, which was created, and copyrighted
    by, C K Ogden.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Thu Mar 7 15:36:01 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/6/2024 6:37 PM, Kaz Kylheku wrote:
    On 2024-03-07, Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 3/6/2024 5:46 PM, Lawrence D'Oliveiro wrote:
    On Wed, 06 Mar 2024 14:30:58 +0000, aph wrote:

Continuously-compacting concurrent collectors like those available for
Java aim for less than 10ms, and often hit 1ms.

    What ... a 1ms potential delay every time you want to allocate a new
    object??

    GC can be a no go for certain schemes. GC can be fine and it has its place.

    It is the situations where GC cannot be used that are niches that have
    their place. Everywhere else, you can use GC.


    Touche! :^)

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Blue-Maned_Hawk@3:633/280.2 to All on Thu Mar 7 17:46:46 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    Lawrence D'Oliveiro wrote:

    On Sun, 3 Mar 2024 22:11:14 -0000 (UTC), Blue-Maned_Hawk wrote:

    Lawrence D'Oliveiro wrote:

    On Sun, 3 Mar 2024 08:54:36 -0000 (UTC), Blue-Maned_Hawk wrote:

    I do not want to live in a web-centric world.

    You already do.

    That does not change the veracity of my statement.

    That doesn’t change the veracity of mine.



    Then our collective fingertips have done nothing in their plasticsmacking.

    --
Blue-Maned_Hawk│shortens to Hawk│/blu.mɛin.dʰak/
    │he/him/his/himself/Mr. blue-maned_hawk.srht.site
    FORE!

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From David Brown@3:633/280.2 to All on Thu Mar 7 21:35:08 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 06/03/2024 23:00, Michael S wrote:
    On Wed, 6 Mar 2024 12:28:59 +0000
    bart <bc@freeuk.com> wrote:


    "Rust uses a relatively unique memory management approach that
    incorporates the idea of memory “ownership”. Basically, Rust keeps
    track of who can read and write to memory. It knows when the program
    is using memory and immediately frees the memory once it is no longer
    needed. It enforces memory rules at compile time, making it virtually
    impossible to have runtime memory bugs.⁴ You do not need to manually
    keep track of memory. The compiler takes care of it."

    This suggests the language automatically takes care of this.

    Takes care of what?
    AFAIK, heap fragmentation is as bad problem in Rust as it is in
    C/Pascal/Ada etc... In this aspect Rust is clearly inferior to GC-based languages like Java, C# or Go.

    Garbage collection does not stop heap fragmentation. GC does, I
    suppose, mean that you need much more memory and bigger heaps in
    proportion to the amount of memory you actually need in the program at
    any given time, and having larger heaps reduces fragmentation (or at
    least reduces the consequences of it).


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Michael S@3:633/280.2 to All on Thu Mar 7 22:44:01 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites
    Cybersecurity Risks"

    On Thu, 7 Mar 2024 11:35:08 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    On 06/03/2024 23:00, Michael S wrote:
    On Wed, 6 Mar 2024 12:28:59 +0000
    bart <bc@freeuk.com> wrote:

"Rust uses a relatively unique memory management approach that
incorporates the idea of memory “ownership”. Basically, Rust keeps
track of who can read and write to memory. It knows when the
program is using memory and immediately frees the memory once it
is no longer needed. It enforces memory rules at compile time,
making it virtually impossible to have runtime memory bugs.⁴ You
do not need to manually keep track of memory. The compiler takes
care of it."

This suggests the language automatically takes care of this.

Takes care of what?
AFAIK, heap fragmentation is as bad problem in Rust as it is in
C/Pascal/Ada etc... In this aspect Rust is clearly inferior to
GC-based languages like Java, C# or Go.

Garbage collection does not stop heap fragmentation. GC does, I
suppose, mean that you need much more memory and bigger heaps in
proportion to the amount of memory you actually need in the program
at any given time, and having larger heaps reduces fragmentation (or
at least reduces the consequences of it).


GC does not stop fragmentation, but it allow heap compaction to be
built-in part of environment. So, it turns heap fragmentation
from denial of service type of problem to mere slowdown, hopefully
insignificant slowdown.
I don't say that heap compaction is impossible in other environments,
but it is much harder, esp. in environments where pointers are visible
to programmer. The famous David Wheeler's quote applies here at full
force.
Also when non-GC environments chooses to implement heap compaction they
suffer the same or bigger impact to real-time responsiveness as GC.
So, although I don't know it for sure, my impression is that generic
heap compaction extremely rarely implemented in performance-aware
non-GC environments.
Performance-neglecting non-GC environments, first and foremost CPython,
can, of course, have heap compaction, although my googling didn't give
me a definite answer whether it's done or not.



    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From David Brown@3:633/280.2 to All on Fri Mar 8 02:36:43 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 07/03/2024 12:44, Michael S wrote:
    On Thu, 7 Mar 2024 11:35:08 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    On 06/03/2024 23:00, Michael S wrote:
    On Wed, 6 Mar 2024 12:28:59 +0000
    bart <bc@freeuk.com> wrote:


    "Rust uses a relatively unique memory management approach that
incorporates the idea of memory “ownership”. Basically, Rust keeps
track of who can read and write to memory. It knows when the
    program is using memory and immediately frees the memory once it
    is no longer needed. It enforces memory rules at compile time,
    making it virtually impossible to have runtime memory bugs.⁴ You
    do not need to manually keep track of memory. The compiler takes
    care of it."

    This suggests the language automatically takes care of this.

    Takes care of what?
    AFAIK, heap fragmentation is as bad problem in Rust as it is in
    C/Pascal/Ada etc... In this aspect Rust is clearly inferior to
    GC-based languages like Java, C# or Go.

    Garbage collection does not stop heap fragmentation. GC does, I
    suppose, mean that you need much more memory and bigger heaps in
    proportion to the amount of memory you actually need in the program
    at any given time, and having larger heaps reduces fragmentation (or
    at least reduces the consequences of it).


    GC does not stop fragmentation, but it allow heap compaction to be
    built-in part of environment.

    No, GC alone does not do that. But heap compaction is generally done as
    part of a GC cycle.

    Heap compaction requires indirect pointers. That is to say, if you have
    a struct "node" on your heap, your code does not use a "node *" pointer
    that points to it. It has a "node_proxy *" pointer, and the
    "node_proxy" struct points to the actual node. Heap compaction moves
    the real node in memory, and updates the proxy with the new real
    address, while the main program uses the same "node_proxy" address.
    (These proxies, or indirect pointers, do not move during heap
    compaction.) And the main program needs to be careful to access the
    data via the proxy, and re-read the proxy after every heap compaction cycle.
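
Roughly, that indirection looks like this (node, node_proxy and the
compact() loop are an illustrative sketch only, not any particular
collector):

#include <cstddef>
#include <cstring>

struct node { int value; /* ... */ };

// The proxy itself never moves; only the object behind it does.
struct node_proxy { node * real; };

// The mutator always goes through the proxy - one extra load per access.
int read_value(node_proxy * p) {
    return p->real->value;
}

// Conceptually, compaction copies each live node into a packed area and
// repoints its proxy; code holding node_proxy* addresses is unaffected.
void compact(node_proxy * live[], std::size_t n, node * packed_area) {
    for (std::size_t i = 0; i < n; i++) {
        node * moved = &packed_area[i];
        std::memcpy(moved, live[i]->real, sizeof(node));
        live[i]->real = moved;
    }
}

The overhead mentioned above is exactly that extra load in read_value()
on every access.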

    This is not going to work well with a low-level and efficient language -
    the extra accesses can be a significant burden for a language like C and
    C++. But it can be fine for VM-based high-level languages, where the
    overhead is lost in the noise, and where the VM knows when the heap
    compaction has run and it needs to re-read the proxies.

    So, it turns heap fragmentation
    from denial of service type of problem to mere slowdown, hopefully insignificant slowdown.

    For high-level VM based languages, that could be correct. But low-level compiled and optimised languages are dependent on addresses remaining
    valid, so heap compaction is not an option.

    (An OS on a "big" system with an MMU can move memory pages around and
    change the virtual to physical memory mapping to get more efficient use
    of hierarchical virtual memory or to free up contiguous large page
    areas. That is transparent to the user application code.)

    I don't say that heap compaction is impossible in other environments,
    but it is much harder, esp. in environments where pointers are visible
    to programmer. The famous David Wheeler's quote applies here at full
    force.
    Also when non-GC environments chooses to implement heap compaction they suffer the same or bigger impact to real-time responsiveness as GC.

    Agreed.

    So, although I don't know it for sure, my impression is that generic
    heap compaction extremely rarely implemented in performance-aware
    non-GC environments.

    I think that is likely.

    Performance-neglecting non-GC environments, first and foremost CPython,
    can, of course, have heap compaction, although my googling didn't give
    me a definite answer whether it's done or not.


    CPython does use garbage collection, as far as I know.



    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Kaz Kylheku@3:633/280.2 to All on Fri Mar 8 03:35:48 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites
    Cybersecurity Risks"

    On 2024-03-07, David Brown <david.brown@hesbynett.no> wrote:
    On 06/03/2024 23:00, Michael S wrote:
    On Wed, 6 Mar 2024 12:28:59 +0000
    bart <bc@freeuk.com> wrote:


    "Rust uses a relatively unique memory management approach that
    incorporates the idea of memory “ownership”. Basically, Rust keeps
    track of who can read and write to memory. It knows when the program
    is using memory and immediately frees the memory once it is no longer
    needed. It enforces memory rules at compile time, making it virtually
    impossible to have runtime memory bugs.⁴ You do not need to manually
    keep track of memory. The compiler takes care of it."

    This suggests the language automatically takes care of this.

    Takes care of what?
    AFAIK, heap fragmentation is as bad problem in Rust as it is in
    C/Pascal/Ada etc... In this aspect Rust is clearly inferior to GC-based
    languages like Java, C# or Go.

    Garbage collection does not stop heap fragmentation. GC does, I
    suppose, mean that you need much more memory and bigger heaps in
    proportion to the amount of memory you actually need in the program at
    any given time, and having larger heaps reduces fragmentation (or at
    least reduces the consequences of it).

    Copying garbage collectors literally stop fragmentation. Reachable
    objects are identified and moved to a memory partition where they
    are now adjacent. The vacated memory partition is then efficiently used
    to bump-allocate new objects.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Kaz Kylheku@3:633/280.2 to All on Fri Mar 8 04:18:14 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites
    Cybersecurity Risks"

    On 2024-03-07, David Brown <david.brown@hesbynett.no> wrote:
    On 07/03/2024 12:44, Michael S wrote:
    GC does not stop fragmentation, but it allow heap compaction to be
    built-in part of environment.

    No, GC alone does not do that. But heap compaction is generally done as part of a GC cycle.

    Heap compaction requires indirect pointers.

    I believe, it doesn't, or doesn't have to. The garbage collector fixes
    all the pointers contained in the reachable graph to point to the new
    locations of objects.

    If some foreign code held pointers to GC objects, that would be a
    problem. That can usually be avoided. Or else, the proxy handles
    can be used just for those outside references.

    A simple copying garbage collector moves each object on the first
    traversal and rewrites the parent pointer which it just chased
    to point to the new location. Subsequent visits to the same object
    then recognize that it has already been moved and just adjust the
    pointer that had been traversed to reach that object. The forwarding
    pointer to the new location can be stored in the old object;
    most of its fields are no longer needed for anything.
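
Roughly, and only as a toy sketch (a made-up cell type holding a single
child pointer - this is not TXR's actual collector):

#include <cstddef>
#include <cstring>

struct obj {
    obj * child;       // the one reference this toy cell holds
    obj * forwarded;   // non-null once the object has been moved
    int   payload;
};

static obj *        to_space;   // destination partition, set up per cycle
static std::size_t  to_used;    // bump index into to_space

// Copy on first visit and leave a forwarding pointer in the old copy;
// later visits just return the forwarded address.
static obj * forward(obj * o) {
    if (o == nullptr) return nullptr;
    if (o->forwarded) return o->forwarded;
    obj * copy = &to_space[to_used++];     // bump allocation in to-space
    std::memcpy(copy, o, sizeof *copy);
    o->forwarded = copy;                   // old copy now only forwards
    copy->forwarded = nullptr;
    copy->child = forward(o->child);       // rewrite the chased pointer
    return copy;
}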

    The space required for the scheme can be regarded as equivalent
    to fragmentation, but it's controlled.

    The worst case exhibited by fragmentation (where the wasted space is proportional to the size ratio of the largest to smallest object) is
    avoided.

    Now, copying collection is almost certainly inapplicable to C programs;
    it's not something you "slide under" C, like Boehm. We have to think
    outside of the C box. Outside of the C box, interesting things are
    possible, like precisely knowing all the places that point at an
    object.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From James Kuyper@3:633/280.2 to All on Fri Mar 8 06:28:11 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/6/24 22:06, Lawrence D'Oliveiro wrote:
    On Wed, 6 Mar 2024 19:27:24 -0500, James Kuyper wrote:

    It's a constructed language, which probably has no native speakers.

    Not to be confused with Basic English, which was created, and copyrighted by, C K Ogden.

Simple English is the term used by Wikipedia for one of its
language-specific subsets. One of its requirements is that the articles
    be written in Basic English as much as possible. See <https://simple.wikipedia.org/wiki/Wikipedia:How_to_write_Simple_English_pages#Basic_English_and_VOA_Special_English>
    for details.


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Fri Mar 8 10:42:10 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Tue, 5 Mar 2024 22:01:01 -0800, Chris M. Thomasson wrote:

    On 3/5/2024 4:25 PM, Lawrence D'Oliveiro wrote:

    So, what is the right language to use?

    Learn to use more than one.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Fri Mar 8 10:43:09 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Wed, 6 Mar 2024 14:34:50 +0100, David Brown wrote:

    It used to be a running joke that if you managed to get your Ada code to compile, it was ready to ship.

    That joke actually originated with Pascal. Though I suppose Ada took it to
    the next level ...

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Fri Mar 8 10:44:20 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Thu, 7 Mar 2024 14:28:11 -0500, James Kuyper wrote:

One of its requirements is that the articles be written in Basic
    English as much as possible.

    Interesting, because it was Ogden’s protectiveness of his copyright that killed off any initial chance of Basic English taking off, back in the
    day.

    I guess that’s expired now.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Fri Mar 8 11:21:05 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/7/2024 3:42 PM, Lawrence D'Oliveiro wrote:
    On Tue, 5 Mar 2024 22:01:01 -0800, Chris M. Thomasson wrote:

    On 3/5/2024 4:25 PM, Lawrence D'Oliveiro wrote:

    So, what is the right language to use?

    Learn to use more than one.

    Indeed, I do. Btw, are you an AI? Still not exactly sure why I think that.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From David Brown@3:633/280.2 to All on Fri Mar 8 18:25:13 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 07/03/2024 17:35, Kaz Kylheku wrote:
    On 2024-03-07, David Brown <david.brown@hesbynett.no> wrote:
    On 06/03/2024 23:00, Michael S wrote:
    On Wed, 6 Mar 2024 12:28:59 +0000
    bart <bc@freeuk.com> wrote:


    "Rust uses a relatively unique memory management approach that
incorporates the idea of memory “ownership”. Basically, Rust keeps
track of who can read and write to memory. It knows when the program
is using memory and immediately frees the memory once it is no longer
needed. It enforces memory rules at compile time, making it virtually
impossible to have runtime memory bugs.⁴ You do not need to manually
keep track of memory. The compiler takes care of it."

    This suggests the language automatically takes care of this.

    Takes care of what?
    AFAIK, heap fragmentation is as bad problem in Rust as it is in
    C/Pascal/Ada etc... In this aspect Rust is clearly inferior to GC-based
    languages like Java, C# or Go.

    Garbage collection does not stop heap fragmentation. GC does, I
    suppose, mean that you need much more memory and bigger heaps in
    proportion to the amount of memory you actually need in the program at
    any given time, and having larger heaps reduces fragmentation (or at
    least reduces the consequences of it).

    Copying garbage collectors literally stop fragmentation.

    Yes, but garbage collectors that could be useable for C, C++, or other efficient compiled languages are not "copying" garbage collectors.

    Reachable
    objects are identified and moved to a memory partition where they
    are now adjacent. The vacated memory partition is then efficiently used
    to bump-allocate new objects.


    I think if you have a system with enough memory that copying garbage collection (or other kinds of heap compaction during GC) is a reasonable option, then it's unlikely that heap fragmentation is a big problem in
    the first place. And you won't be running on a small embedded system.


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From David Brown@3:633/280.2 to All on Fri Mar 8 19:01:21 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 08/03/2024 00:43, Lawrence D'Oliveiro wrote:
    On Wed, 6 Mar 2024 14:34:50 +0100, David Brown wrote:

    It used to be a running joke that if you managed to get your Ada code to
    compile, it was ready to ship.

    That joke actually originated with Pascal.

    I didn't know that.

    Though I suppose Ada took it to
    the next level ...

    It seems much more appropriate for Ada (though Pascal also had stricter checking and stronger types than most other popular languages had when
    Pascal was developed).


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Michael S@3:633/280.2 to All on Fri Mar 8 21:57:46 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites
    Cybersecurity Risks"

    On Fri, 8 Mar 2024 08:25:13 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    On 07/03/2024 17:35, Kaz Kylheku wrote:
On 2024-03-07, David Brown <david.brown@hesbynett.no> wrote:
On 06/03/2024 23:00, Michael S wrote:
On Wed, 6 Mar 2024 12:28:59 +0000
bart <bc@freeuk.com> wrote:


"Rust uses a relatively unique memory management approach that
incorporates the idea of memory “ownership”. Basically, Rust
keeps track of who can read and write to memory. It knows when
the program is using memory and immediately frees the memory
once it is no longer needed. It enforces memory rules at compile
time, making it virtually impossible to have runtime memory
bugs.⁴ You do not need to manually keep track of memory. The
compiler takes care of it."

This suggests the language automatically takes care of this.

Takes care of what?
AFAIK, heap fragmentation is as bad problem in Rust as it is in
C/Pascal/Ada etc... In this aspect Rust is clearly inferior to
GC-based languages like Java, C# or Go.

Garbage collection does not stop heap fragmentation. GC does, I
suppose, mean that you need much more memory and bigger heaps in
proportion to the amount of memory you actually need in the
program at any given time, and having larger heaps reduces
fragmentation (or at least reduces the consequences of it).

Copying garbage collectors literally stop fragmentation.

Yes, but garbage collectors that could be useable for C, C++, or
other efficient compiled languages are not "copying" garbage
collectors.


Go, C# and Java are all efficient compiled languages. For Go it was
actually a major goal.

Reachable
objects are identified and moved to a memory partition where they
are now adjacent. The vacated memory partition is then efficiently
used to bump-allocate new objects.


I think if you have a system with enough memory that copying garbage
collection (or other kinds of heap compaction during GC) is a
reasonable option, then it's unlikely that heap fragmentation is a
big problem in the first place. And you won't be running on a small
embedded system.


    You sound like arguing for sake of arguing.
    Of course, heap fragmentation is relatively rare problem. But when you
    process 100s of 1000s of requests of significantly varying sizes for
    weeks without interruption then rare things happen with high
    probability :(
    In case of this particular Discord service, they appear to
    have a benefit of size of requests not varying significantly, so
    absence of heap compaction is not a major defect.
    BTW, I'd like to know if 3 years later they still have their Rust
    solution running.



    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Paavo Helde@3:633/280.2 to All on Fri Mar 8 23:41:16 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

07.03.2024 17:36 David Brown wrote:

    CPython does use garbage collection, as far as I know.


    AFAIK CPython uses reference counting, i.e. basically the same as C++ std::shared_ptr (except that it does not need to be thread-safe).

    With reference counting one only knows how many pointers there are to a
    given heap block, but not where they are, so heap compaction would not
    be straightforward.

    Python also has zillions of extensions written in C or C++ (all of AI
    related work for example), so having e.g. heap compaction of Python
objects only might not be worth it.


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From David Brown@3:633/280.2 to All on Sat Mar 9 01:07:47 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 08/03/2024 13:41, Paavo Helde wrote:
07.03.2024 17:36 David Brown wrote:

    CPython does use garbage collection, as far as I know.


    AFAIK CPython uses reference counting, i.e. basically the same as C++ std::shared_ptr (except that it does not need to be thread-safe).

    Yes, that is my understanding too. (I could be wrong here, so don't
    rely on anything I write!) But the way it is used is still a type of
    garbage collection. When an object no longer has any "live" references,
    it is put in a list, and on the next GC it will get cleared up (and call
    the asynchronous destructor, __del__, for the object).

    A similar method is sometimes used in C++ for objects that are
    time-consuming to destruct. You have a "tidy up later" container that
    holds shared pointers. Each time you make a new object that will have asynchronous destruction, you use a shared_ptr for the access and put a
    copy of that pointer in the tidy-up container. A low priority
    background thread checks this list on occasion - any pointers with only
    one reference can be cleared up in the context of this separate thread.
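
A minimal sketch of that pattern (the class name, the void-pointer type
erasure and the locking policy here are illustrative assumptions, not any
particular codebase):

#include <algorithm>
#include <memory>
#include <mutex>
#include <vector>

// Objects with expensive destructors get registered here at creation time.
class tidy_up_later {
    std::vector<std::shared_ptr<void>> pending_;
    std::mutex mtx_;
public:
    void track(std::shared_ptr<void> p) {
        std::lock_guard<std::mutex> lock(mtx_);
        pending_.push_back(std::move(p));
    }
    // Called on occasion from a low-priority background thread: any entry
    // whose only remaining reference is ours is destroyed here, in this
    // thread's context, instead of in the time-critical path.
    void sweep() {
        std::lock_guard<std::mutex> lock(mtx_);
        pending_.erase(
            std::remove_if(pending_.begin(), pending_.end(),
                           [](const std::shared_ptr<void> & p) {
                               return p.use_count() == 1;
                           }),
            pending_.end());
    }
};

The time-critical code just drops its own shared_ptr when it is finished
with the object; the actual destructor then runs later, inside sweep().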


    With reference counting one only knows how many pointers there are to a given heap block, but not where they are, so heap compaction would not
    be straightforward.

    Python also has zillions of extensions written in C or C++ (all of AI related work for example), so having e.g. heap compaction of Python
    objects only might not be worth of it.



    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From David Brown@3:633/280.2 to All on Sat Mar 9 01:32:22 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 08/03/2024 11:57, Michael S wrote:
    On Fri, 8 Mar 2024 08:25:13 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    On 07/03/2024 17:35, Kaz Kylheku wrote:
    On 2024-03-07, David Brown <david.brown@hesbynett.no> wrote:
    On 06/03/2024 23:00, Michael S wrote:
    On Wed, 6 Mar 2024 12:28:59 +0000
    bart <bc@freeuk.com> wrote:


    "Rust uses a relatively unique memory management approach that
    incorporates the idea of memory “ownership”. Basically, Rust
    keeps track of who can read and write to memory. It knows when
    the program is using memory and immediately frees the memory
    once it is no longer needed. It enforces memory rules at compile
    time, making it virtually impossible to have runtime memory
    bugs.⁴ You do not need to manually keep track of memory. The
    compiler takes care of it."

    This suggests the language automatically takes care of this.

    Takes care of what?
    AFAIK, heap fragmentation is as bad problem in Rust as it is in
    C/Pascal/Ada etc... In this aspect Rust is clearly inferior to
    GC-based languages like Java, C# or Go.

    Garbage collection does not stop heap fragmentation. GC does, I
    suppose, mean that you need much more memory and bigger heaps in
    proportion to the amount of memory you actually need in the
    program at any given time, and having larger heaps reduces
    fragmentation (or at least reduces the consequences of it).

    Copying garbage collectors literally stop fragmentation.

    Yes, but garbage collectors that could be useable for C, C++, or
    other efficient compiled languages are not "copying" garbage
    collectors.


    Go, C# and Java are all efficient compiled languages. For Go it was
    actually a major goal.

    C# and Java are, AFAIUI, managed languages - they are byte-compiled and
    run on a VM. (JIT compilation to machine code can be used for
    acceleration, but that does not change the principles.) I don't know
    about Go.


    Reachable
    objects are identified and moved to a memory partition where they
    are now adjacent. The vacated memory partition is then efficiently
    used to bump-allocate new objects.


    I think if you have a system with enough memory that copying garbage
    collection (or other kinds of heap compaction during GC) is a
    reasonable option, then it's unlikely that heap fragmentation is a
    big problem in the first place. And you won't be running on a small
    embedded system.


    You sound like arguing for sake of arguing.

    I am just trying to be clear about things. Different types of system,
    and different types of task, have different challenges and different solutions. (This seems obvious, but people often think they have "the" solution to a particular issue.) In particular, in small embedded
    systems with limited ram and no MMU, if you use dynamic memory of any
    kind, then heap fragmentation is a serious risk. And a heap-compacting garbage collection will not mitigate that risk.

    There are a lot of GC algorithms, each with their pros and cons, and the
    kind of languages and tasks for which they are suitable. If you have a
    GC algorithm that works by copying all live data (then scraping
    everything left over), then heap compaction is a natural byproduct.

    But I think it is rare that heap compaction is an appropriate goal in
    itself - it is a costly operation. It invalidates all pointers, which
    means a lot of overhead and extra care in languages where pointers are
    likely to be cached in registers or local variables on the stack. And
    it will be tough on the cache as everything has to be copied and moved.
    That pretty much rules it out for efficient compiled languages, at least
    for the majority of their objects, and leaves it in the domain of
    languages that can accept the performance hit.


    Of course, heap fragmentation is relatively rare problem. But when you process 100s of 1000s of requests of significantly varying sizes for
    weeks without interruption then rare things happen with high
    probability :(

    There are all sorts of techniques usable to optimise such systems.
    Allocation pools for different sized blocks would be a typical strategy.

    In case of this particular Discord service, they appear to
    have a benefit of size of requests not varying significantly, so
    absence of heap compaction is not a major defect.
    BTW, I'd like to know if 3 years later they still have their Rust
    solution running.




    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Michael S@3:633/280.2 to All on Sat Mar 9 01:57:09 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites
    Cybersecurity Risks"

    On Fri, 8 Mar 2024 15:32:22 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    On 08/03/2024 11:57, Michael S wrote:
    On Fri, 8 Mar 2024 08:25:13 +0100
    David Brown <david.brown@hesbynett.no> wrote:

On 07/03/2024 17:35, Kaz Kylheku wrote:
On 2024-03-07, David Brown <david.brown@hesbynett.no> wrote:
On 06/03/2024 23:00, Michael S wrote:
On Wed, 6 Mar 2024 12:28:59 +0000
bart <bc@freeuk.com> wrote:


"Rust uses a relatively unique memory management approach that
incorporates the idea of memory “ownership”. Basically, Rust
keeps track of who can read and write to memory. It knows when
the program is using memory and immediately frees the memory
once it is no longer needed. It enforces memory rules at
compile time, making it virtually impossible to have runtime
memory bugs.⁴ You do not need to manually keep track of
memory. The compiler takes care of it."

This suggests the language automatically takes care of this.

Takes care of what?
AFAIK, heap fragmentation is as bad problem in Rust as it is in
C/Pascal/Ada etc... In this aspect Rust is clearly inferior to
GC-based languages like Java, C# or Go.

Garbage collection does not stop heap fragmentation. GC does, I
suppose, mean that you need much more memory and bigger heaps in
proportion to the amount of memory you actually need in the
program at any given time, and having larger heaps reduces
fragmentation (or at least reduces the consequences of it).

Copying garbage collectors literally stop fragmentation.

Yes, but garbage collectors that could be useable for C, C++, or
other efficient compiled languages are not "copying" garbage
collectors.


Go, C# and Java are all efficient compiled languages. For Go it was
actually a major goal.

C# and Java are, AFAIUI, managed languages - they are byte-compiled
and run on a VM. (JIT compilation to machine code can be used for
acceleration, but that does not change the principles.) I don't know
about Go.


C# was JITted originally, and was even interpreted on one very small
implementation that doesn't seem to be supported any longer. Today it is
mostly AoTed, which in simpler language means "compiled". There are
options in dev tools whether to compile to native code or to
platform-independent code. I would think that most people compile to native.

Java-on-Android, which I would guess is the majority of Java written in
the world, is like 95% AoTed + 5% JITted. It used to be 100% AoTed in a
few versions of Android, but by now JIT is reintroduced as an option,
not for portability, but for the profile-guided optimization
opportunities it allows. If I am not mistaken, direct interpretation of
Dalvik byte-code was never supported on Android.

Java-outside-Android? I don't know what the current state is. I would
think that Oracle's JVMs intended for desktop/laptop/server are also
either JITted or AoTed, not interpreted.

Go is compiled to native, most often via LLVM, but there exists a gcc
option as well.







    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From bart@3:633/280.2 to All on Sat Mar 9 02:15:36 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 08/03/2024 14:07, David Brown wrote:
    On 08/03/2024 13:41, Paavo Helde wrote:
07.03.2024 17:36 David Brown wrote:

    CPython does use garbage collection, as far as I know.


    AFAIK CPython uses reference counting, i.e. basically the same as C++
    std::shared_ptr (except that it does not need to be thread-safe).

    Yes, that is my understanding too. (I could be wrong here, so don't
    rely on anything I write!) But the way it is used is still a type of garbage collection. When an object no longer has any "live" references,
    it is put in a list, and on the next GC it will get cleared up (and call
    the asynchronous destructor, __del__, for the object).

    Is that how CPython works? I can't quite see the point of saving up all
    the deallocations so that they are all done as a batch. It's extra
overhead, and will cause those latency spikes that were the problem here.

    In my own reference count scheme, when the count reaches zero, the
    memory is freed immediately.
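
In its simplest form, such a synchronous scheme is just (a generic sketch
rather than the actual implementation):

#include <cstdlib>

// Synchronous reference counting: the object is freed on the very
// statement where its count reaches zero - nothing is batched up.
struct refcounted {
    int refs;
    // ... payload ...
};

static refcounted * create(void) {
    refcounted * o = (refcounted *) std::malloc(sizeof *o);
    if (o) o->refs = 1;        // the creating reference
    return o;
}

static void retain(refcounted * o)  { o->refs++; }

static void release(refcounted * o) {
    if (--o->refs == 0)
        std::free(o);          // immediate deallocation, no GC pass
}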

    I also tend to have most allocations being of either 16 or 32 bytes, so
    reuse is easy. It is only individual data items (a long string or long
    array) that might have an arbitrary length that needs to be in
    contiguous memory.

    Most strings however have an average length of well below 16 characters
    in my programs, so use a 16-byte allocation.

I don't know the allocation pattern in that Discord app, but Michael S suggested there might not be lots of arbitrary-size objects.


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From David Brown@3:633/280.2 to All on Sat Mar 9 03:55:48 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 08/03/2024 16:15, bart wrote:
    On 08/03/2024 14:07, David Brown wrote:
    On 08/03/2024 13:41, Paavo Helde wrote:
07.03.2024 17:36 David Brown wrote:

    CPython does use garbage collection, as far as I know.


    AFAIK CPython uses reference counting, i.e. basically the same as C++
    std::shared_ptr (except that it does not need to be thread-safe).

    Yes, that is my understanding too. (I could be wrong here, so don't
    rely on anything I write!) But the way it is used is still a type of
    garbage collection. When an object no longer has any "live"
    references, it is put in a list, and on the next GC it will get
    cleared up (and call the asynchronous destructor, __del__, for the
    object).

    Is that how CPython works? I can't quite see the point of saving up all
    the deallocations so that they are all done as a batch. It's extra
    overhead, and will cause those latency spikes that was the problem here.

    I believe the GC runs are done very regularly (if there is something in
    the clean-up list), so there is not much build-up and not much extra
    latency.


    In my own reference count scheme, when the count reaches zero, the
    memory is freed immediately.

    That's synchronous deallocation. It's a perfectly good strategy, of
    course. There are pros and cons of both methods.


    I also tend to have most allocations being of either 16 or 32 bytes, so reuse is easy. It is only individual data items (a long string or long array) that might have an arbitrary length that needs to be in
    contiguous memory.

    Most strings however have an average length of well below 16 characters
    in my programs, so use a 16-byte allocation.

    I don't know the allocation pattern in that Discard app, but Michael S suggested they might not be lots of arbitrary-size objects.



    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Ross Finlayson@3:633/280.2 to All on Sat Mar 9 05:08:44 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 03/08/2024 06:07 AM, David Brown wrote:
    On 08/03/2024 13:41, Paavo Helde wrote:
07.03.2024 17:36 David Brown wrote:

    CPython does use garbage collection, as far as I know.


    AFAIK CPython uses reference counting, i.e. basically the same as C++
    std::shared_ptr (except that it does not need to be thread-safe).

    Yes, that is my understanding too. (I could be wrong here, so don't
    rely on anything I write!) But the way it is used is still a type of
    garbage collection. When an object no longer has any "live" references,
    it is put in a list, and on the next GC it will get cleared up (and call
    the asynchronous destructor, __del__, for the object).

    A similar method is sometimes used in C++ for objects that are
    time-consuming to destruct. You have a "tidy up later" container that
    holds shared pointers. Each time you make a new object that will have asynchronous destruction, you use a shared_ptr for the access and put a
    copy of that pointer in the tidy-up container. A low priority
    background thread checks this list on occasion - any pointers with only
    one reference can be cleared up in the context of this separate thread.


    With reference counting one only knows how many pointers there are to
    a given heap block, but not where they are, so heap compaction would
    not be straightforward.

    Python also has zillions of extensions written in C or C++ (all of AI
    related work for example), so having e.g. heap compaction of Python
    objects only might not be worth of it.





    Wondering about mark-and-sweep or abstractly
    whatever means that detects references, vis-a-vis,
    reference counting and reference registration and
    this kind of thing, sort of is for making the automatic
    cleanup along the lines of stack-unwinding.

    Like how C++ works on stack objects, ....

    Then, that makes de-allocation part of the routine,
    and adds reference-counts to objects, but it would
    be pretty safe, ..., and the GC would never interrupt
    the entire runtime.

One might figure that any time an lvalue is assigned
an rvalue, the rvalue's refcount increments and any
previously assigned rvalue's refcount decrements,
then anything that goes out of scope has its rvalue
assigned null, its un-assigned rvalue's refcount decrements,
and any refcount decremented to zero results in deletion.

    Isn't that smart-pointers?

    https://en.wikipedia.org/wiki/Smart_pointer

    Maybe the big code cop should say "you should use smart pointers".

    I think smart pointers should usually be the way of things,
    any kind of pointer, then with, you know, detach() or what,
    manual management.

    I suppose it's nice that syntactic sugar just does that,
    or, that the runtime makes a best effort sort of
    inference, while, it would be nice if when an object's
    purpose is fulfilled, that it can be canned and it results
    freeing itself.

    Static analysis and "safe programming"
    is an option in any deterministic language, ...,
    given "defined behavior" of the runtime, of course.

    How about "ban USB and PXE" and
    "proxy-defeating DNS", "read-only
    runtime", "computer literacy suprise quiz".

    The idea of memory pools and freelists and
    arenas and slabs and dedicated allocations
    for objects of types and the declaration at
    definition-time of the expected lifetime and
    ownership of objects, gets into a lot of ways
    to have both efficiency and dedication by design.


    Shadow stack, NX bit, shared register protection,
    Orange Book, journaling, link-layer?

    A usual behavior of Java crashing is leaving
    the entire heap in a heap-dump file, ....

    These days a usual sort of approach is, like,
    the old "trust but verify", static analysis and
    all, figuring that type-safety first is the greatest
    possible boon to correctness, then that
    memory-management would better be a
    sort of "if you could just explicitly close your
    resources when you're done then maybe
    have a mark-and-sweep on the side, and
    mark lifetime resources as so, then anything
    left would be waste".




    I'm a big fan of C/C++ coders and it's nice
    to know about Java which I think is great
    and I mostly think in it, vis-a-vis,
    Go and JavaScript and similar event loops,
    like Windows, or the Pythonic or something like
    that, there's something to be said for that
    Haskell is probably cooler than me, these
    days I'm looking at the language specs and
    the opcode instructions as from assembler
    languages with regards to "modular modules
    with well-defined modules and modularity".

    Figuring "modular modules" and "scope the globals".



    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Sat Mar 9 08:23:57 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/6/2024 2:18 PM, Chris M. Thomasson wrote:
    On 3/6/2024 2:43 AM, David Brown wrote:
    [...]

    This is a fun one:

// pseudo code...
_______________________
node*
node_pop()
{
    // try per-thread lifo

    // try shared distributed lifo

    // try global region

    // if all of those failed, return nullptr
}

void
node_push(
    node* n
) {
    // if n came from our per-thread, try to push it into it...

    // if n came from another thread, try to push it into its thread...

    // if all of those failed, push into shared distributed lifo
}
_______________________


    The fun part is this scheme can be realized as long as a node is at
    least the size of a pointer. That is the required overhead wrt the size
    of a node.
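
Concretely, the trick is that a free node's own storage holds the lifo
link (single-threaded sketch only, with a made-up node type; the
per-thread and distributed parts above are left out):

_______________________
#include <new>

struct node { alignas(void*) unsigned char payload[32]; };

// Intrusive lifo: the link lives inside the freed node itself, so the
// only requirement is sizeof(node) >= sizeof(void*).
struct lifo {
    struct link { link * next; };
    static_assert(sizeof(node) >= sizeof(link), "node too small");

    link * head = nullptr;

    void push(node * n) {
        link * l = ::new (static_cast<void*>(n)) link;  // reuse node storage
        l->next = head;
        head = l;
    }

    node * pop() {
        link * l = head;
        if (! l) return nullptr;    // empty: fall back to the region
        head = l->next;
        return reinterpret_cast<node*>(l);
    }
};
_______________________

The same idea extends to the per-thread and shared cases, just with
different heads (and atomics for the shared one).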

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Ross Finlayson@3:633/280.2 to All on Sat Mar 9 16:36:14 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 03/06/2024 12:13 PM, David Brown wrote:
    On 06/03/2024 20:50, Kaz Kylheku wrote:
    On 2024-03-06, James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
    On 3/6/24 09:18, Michael S wrote:
    On Wed, 6 Mar 2024 13:50:16 +0000
    bart <bc@freeuk.com> wrote:
    ...
Whoever wrote this short Wikipedia article on it got confused too as
it uses both Ada and ADA:

    https://simple.wikipedia.org/wiki/Ada_(programming_language)

(The example program also includes 'Ada' as some package name. Since
it is case-insensitive, 'ADA' would also work.)


    Your link is to "simple Wikipedia". I don't know what it is
    exactly, but it does not appear as authoritative as real Wikipedia

    Notice that in your following link, "en" appears at the beginning to
indicate the use of English. "simple" at the beginning of the above link
serves the same purpose. "Simple English" is its own language, closely
    related to standard English.

    Where is Simple English spoken? Is there some geographic area where
    native speakers concentrate?


    It is meant to be simpler text, written in simpler language. The target audience will include younger people, people with dyslexia or other
    reading difficulties, learners of English, people with lower levels of education, people with limited intelligence or learning impediments, or simply people whose eyes glaze over when faced with long texts on the
    main Wikipedia pages.



    Yet, why?

    There's "Simplified Technical English", which is a same
    sort of idea, with the idea that manuals and instructions
    be clear and unambiguous.

    https://en.wikipedia.org/wiki/Simplified_Technical_English

    Heh, it's like in the old days, when people would get
    manuals, and be amused as it were by the expression.


    What I'd like to know about is who keeps dialing
    the "harmonization" efforts, which really must
    give grouse to the "harmonisation" spellers,
    when good old-fashioned words "spelt" their own way,
    which of course is archaic "spelled".

    It reminds me of "Math Blaster" and "Typing Games",
    vis-a-vis, "the spelling bee", and for that matter,
    of course, weekly spelling quizzes all through
    elementary school.

    I'm so old the only games we had were how to
    compute and how to spell.

    And Tooth Invaders. Just kidding I had 50+ floppies
    for my Commodore64. Like GI Joe and Beachhead II.

    But we didn't get promoted in school if we
    didn't pass our spelling tests.

    (We couldn't even have dangling prepositions
    or sentence fragments like the above.)

    We had a class in school we couldn't even pass
    until we could type thirty words a minute.


    The Simplified Technical English though is a good idea,
    it's used in technical manuals and instructions, widely.


Really, whenever harmonization dials away a word,
    I'm like, hey, I'm using that word.


    There's something to be said for a, "source parser",
    the idea being a, multi-pass parser of sorts, with
    any number of, forms, so that it results, parsing
    languages sort of opportunistically, and results,
    sort of lifting, sections, of source, into regions
    of syntax, so that as syntaxes get all commingled,
    that all the syntax and grammar definitions get piled
    together, where it sort of results then for comments
    and quoting, and, usual ideas of brackets, and comma,
    for joiners and separators and groupers and splitters,
    observing mostly usually the parenthetical and indentation,
    for all sorts of languages, into, a pretty common sort of
    form.

    So, what is there, "Simplified Compilation Source",
    basically reflecting, "if it's source somehow it
    parses, if being ambiguous among languages then
    in editions of each or according to the source
    locale", these kinds of things....

    For a long time I've been thinking about "modular
    and composable parsers", with mostly the usual
    goal of relating productions in grammar to source
    locations, that one figures it would be a most usual
    sort of study, to result, all the proliferation of
    little languages, get all parsed, then for the great
    facility of "term re-write rules" and "term-graph
    re-write rules", or "re-write systems", or for
    extracting signatures, identifiers, and logic,
    for any kind of language.

    I think everybody reading this has a most usual
    sort of exposure to the theory of parsing as after
Backus-Naur form, vis-a-vis syntax diagrams or
    railroad diagrams, and Chomsky hierarchy, and lexers
    and parsers and the interpreted and all these kinds
    of things, but I don't know a sort of wide-open
    framework that parses any kinds of sources and
    happens to also re-write itself to any sort of target,
    parsing any source language in any source language.

    Did I miss the memo?

    What I got into was defining languages in terms of
    comments and quoting, brackets and commas, and space
    and line, in terms of sequence and alternation.
    Basically all the source is loaded or mapped into
    memory; then, instead of an abstract syntax tree of
    sorts, the result is an abstract syntax sequence of
    sorts, "lifted" over the source text at its locations.
    Then any sort of lexicalizing and syntax and grammar
    all get put together as modules, and any one just
    enumerates or makes equivalent whatever kind of source
    it is; then, according to the language, result the
    usual sorts of constructs and productions, for functional
    and procedural languages, and data, and, you know, language.
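
    A rough sketch of putting such lexical and syntax modules
    side by side; the module interface here, a name plus a
    "claim" function that reports how many leading bytes it
    recognizes, is invented for the illustration:

    _______________________
    // Sketch of composable "parser modules": each module reports how many
    // leading bytes of the remaining input it can claim; the driver takes
    // the first non-zero claim, and bytes nobody claims stay as raw text.
    #include <cstddef>
    #include <functional>
    #include <string>
    #include <string_view>
    #include <vector>

    struct ParserModule {
        std::string name;                                   // e.g. "sql", "c"
        std::function<std::size_t(std::string_view)> claim; // bytes taken, 0 = pass
    };

    struct Piece {
        std::string module;        // which module claimed it, or "text"
        std::string_view text;     // the source under it
    };

    std::vector<Piece> drive(std::string_view src,
                             const std::vector<ParserModule>& mods) {
        std::vector<Piece> out;
        while (!src.empty()) {
            std::size_t taken = 0;
            std::string who = "text";
            for (const auto& m : mods)
                if ((taken = m.claim(src)) != 0) { who = m.name; break; }
            if (taken == 0) taken = 1;     // unclaimed byte stays raw text
            out.push_back({who, src.substr(0, taken)});
            src.remove_prefix(taken);
        }
        return out;
    }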

    Tesniere, Tesniere is the great complement to Chomsky.
    After Chomsky it's like, "this finite state machine
    builds models of productions in minimal resources";
    for something like a "Simplified Compilation Source"
    parser it's more like, "this algorithm works in fixed
    or linear space, in up to factorial time, and parses
    anything; unparsed sections remain their source text,
    and iterating the data structure, or any segment,
    iterates the source under it that it's lifted over".


    See, look at that, "lifted over", I would get a bad
    mark for that. Of course that's since been relaxed,
    figuring it's natural to dangle and OK to continue.
    And so on.


    So anyways as long as we're talking about all the usual
    languages, uh, is that all "Common Source Language"?

    CS language?

    So, for something like "Common Compilation Components":
    figuring that all the usual sorts of functional and
    procedural productions have a usual form, and thus can
    be a great fabric of re-write rules, or of targeting,
    the idea is basically to make common-enough productions,
    with the algorithm multi-pass as necessary, to result
    in a usual sort of workbench for the languages of the source.






    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From David Brown@3:633/280.2 to All on Sat Mar 9 23:25:26 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 08/03/2024 22:23, Chris M. Thomasson wrote:
    On 3/6/2024 2:18 PM, Chris M. Thomasson wrote:
    On 3/6/2024 2:43 AM, David Brown wrote:
    [...]

    This is a fun one:

    // pseudo code...
    _______________________
    node*
    node_pop()
    {
    // try per-thread lifo

    // try shared distributed lifo

    // try global region

    // if all of those failed, return nullptr
    }


    Just to be clear here - if this is in a safety-critical system, and your allocation system returns nullptr, people die. That is why you don't
    use this kind of thing for important tasks.



    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Sun Mar 10 09:16:18 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/9/2024 4:25 AM, David Brown wrote:
    On 08/03/2024 22:23, Chris M. Thomasson wrote:
    On 3/6/2024 2:18 PM, Chris M. Thomasson wrote:
    On 3/6/2024 2:43 AM, David Brown wrote:
    [...]

    This is a fun one:

    // pseudo code...
    _______________________
    node*
    node_pop()
    {
    // try per-thread lifo

    // try shared distributed lifo

    // try global region

    // if all of those failed, return nullptr
    }


    Just to be clear here - if this is in a safety-critical system, and your allocation system returns nullptr, people die. That is why you don't
    use this kind of thing for important tasks.



    In this scenario, nullptr returned means the main region allocator is
    out of memory. So, pool things up where this never occurs.
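
    A minimal sketch of that arrangement, with the pooling done
    up front so the nullptr case does not occur in normal
    operation; the structures are simplified stand-ins (the
    shared lifo is mutex-guarded here rather than lock-free,
    and region_reserve must run before any threads start):

    _______________________
    // Illustrative only: per-thread LIFO, then a shared LIFO, then a global
    // region that was reserved up front for the worst case, so node_pop()
    // does not return nullptr in normal operation.
    #include <atomic>
    #include <cstddef>
    #include <mutex>
    #include <vector>

    struct node { node* next = nullptr; };

    static thread_local node*       tl_lifo = nullptr;  // per-thread lifo
    static node*                    sh_lifo = nullptr;  // shared lifo
    static std::mutex               sh_mtx;
    static std::vector<node>        region;             // global region
    static std::atomic<std::size_t> region_used{0};

    void region_reserve(std::size_t worst_case) {   // call before threads start
        region.resize(worst_case);
    }

    node* node_pop() {
        if (tl_lifo) {                                   // try per-thread lifo
            node* n = tl_lifo; tl_lifo = n->next; return n;
        }
        {                                                // try shared lifo
            std::lock_guard<std::mutex> g(sh_mtx);
            if (sh_lifo) { node* n = sh_lifo; sh_lifo = n->next; return n; }
        }
        std::size_t i = region_used.fetch_add(1);        // try global region
        if (i < region.size()) return &region[i];
        return nullptr;                                  // worst case mis-sized
    }

    void node_push(node* n) {                            // give it back locally
        n->next = tl_lifo; tl_lifo = n;
    }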


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Sun Mar 10 09:18:14 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 3/9/2024 2:16 PM, Chris M. Thomasson wrote:
    On 3/9/2024 4:25 AM, David Brown wrote:
    On 08/03/2024 22:23, Chris M. Thomasson wrote:
    On 3/6/2024 2:18 PM, Chris M. Thomasson wrote:
    On 3/6/2024 2:43 AM, David Brown wrote:
    [...]

    This is a fun one:

    // pseudo code...
    _______________________
    node*
    node_pop()
    {
    // try per-thread lifo

    // try shared distributed lifo

    // try global region

    // if all of those failed, return nullptr
    }


    Just to be clear here - if this is in a safety-critical system, and
    your allocation system returns nullptr, people die. That is why you
    don't use this kind of thing for important tasks.



    In this scenario, nullptr returned means the main region allocator is
    out of memory. So, pool things up where this never occurs.



    You know how to do it! I know you do.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Mar 12 11:03:31 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Fri, 8 Mar 2024 09:01:21 +0100, David Brown wrote:

    It seems much more appropriate for Ada (though Pascal also had stricter checking and stronger types than most other popular languages had when
    Pascal was developed).

    That’s why Ada was built on Pascal: if you want something intended for high-reliability, safety-critical applications, why not build it on a foundation that was already the most, shall we say, anal-retentive, among well-known languages of the time?

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Mar 12 11:07:23 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On Fri, 8 Mar 2024 21:36:14 -0800, Ross Finlayson wrote:

    What I'd like to know about is who keeps dialing the "harmonization"
    efforts, which really must give grouse to the "harmonisation"
    spellers ...

    Some words came from French and had “-ize”, others did not and had “-ise”.
    Some folks in Britain decided to change the former to the latter.

    “Televise”, “merchandise”, “advertise” -- never any “-ize” form.

    “Synchronize”, “harmonize”, “apologize” -- “-ize” originally.

    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Ross Finlayson@3:633/280.2 to All on Tue Mar 12 14:05:19 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity Risks"

    On 03/11/2024 05:07 PM, Lawrence D'Oliveiro wrote:
    On Fri, 8 Mar 2024 21:36:14 -0800, Ross Finlayson wrote:

    What I'd like to know about is who keeps dialing the "harmonization"
    efforts, which really must give grouse to the "harmonisation"
    spellers ...

    Some words came from French and had “-ize”, others did not and had “-ise”.
    Some folks in Britain decided to change the former to the latter.

    “Televise”, “merchandise”, “advertise” -- never any “-ize” form.

    “Synchronize”, “harmonize”, “apologize” -- “-ize” originally.




    Hey, thanks, that's something I hadn't thought of: that
    the harmonization was coming from this side of the pond
    besides vice-versa.  "Harmonization" here is an effort in
    controlled languages, in terms of natural languages, which
    are organic, though of course subject to their extended
    memory, the written corpi, which I write "corpi", not
    "corpora".

    It's like when the dictionary adds new words: the old
    words are still words, in the "Wortbuch", an abstract
    dictionary of all the words, that I read about in Curme.
    (I'm a fan of Tesniere and Curme.)

    About parsing and re-writing systems, I'm really wondering
    a lot about compilation units, lines, spacing and indentation,
    blocks, comments, quoting, punctuation, identifiers,
    brackets, commas, and stops: how to write grammars for
    all the usual sorts of source language in those, and get
    as a result a novel sort of linear data structure above
    those, in whatever languages it recognizes, with any
    sections it doesn't remaining as the source text.


    I looked around a bit, and after re-writing on the Wiki
    and "multi-pass parser" there are some sorts of ideas,
    usually in terms of fungible intermediate languages for
    targeting those to whatever languages.  Here though it's
    mostly to deal with a gamut of existing code.  There are
    lots of syntax recognizers and highlighters and this kind
    of thing, "auto-detect" in the static analysis toolkit,
    for the languages.  Then, given that a given compilation
    unit is only going to have one or a few languages in it,
    with regards for example to "code in text" or "text in
    code", about comments, sections, blocks, or "language
    integrated code" or "convenience code", "sugar modes",
    you know, the question is what the _grammar_ specifications
    would be, and the lexical and the syntax specifications,
    to arrive at a multi-pass parser: one that compiles a
    whole bunch of language specs, finds which ones apply
    where to the compilation unit, then starts building them
    up, "lifting" them above the character sequence, building
    an "abstract syntax sequence" (yeah I know) above that,
    then building a model of the productions directly above
    that, one that happens to be exactly derived from the
    grammar productions, with the same sort of structure as
    the grammar productions.

    (Order, loop, optional, a superset of eBNF, to support
    syntaxes with bracket blocks like C-style and syntaxes
    with indent blocks (though I'm not into that); the various
    inversions of comments and code; the various interpolations
    of quoting; brackets and grouping and precedence; commas
    and joining and separating; and, because SQL doesn't really
    comport itself to BNF, these kinds of things.)
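
    As a sketch of what holding those productions as data could
    look like (sequence, alternation, loop, option), with the
    names invented for the illustration:

    _______________________
    // Illustrative sketch: grammar productions as data, so a parse result
    // can mirror the shape of the production that produced it.
    #include <memory>
    #include <string>
    #include <vector>

    struct Production;
    using Prod = std::shared_ptr<Production>;

    struct Production {
        enum Kind { Terminal, Seq, Alt, Loop, Opt } kind;
        std::string text;          // only meaningful for Terminal
        std::vector<Prod> parts;   // children for Seq / Alt / Loop / Opt
    };

    Prod term(std::string t) {
        return std::make_shared<Production>(
            Production{Production::Terminal, std::move(t), {}});
    }
    Prod seq(std::vector<Prod> p) {
        return std::make_shared<Production>(
            Production{Production::Seq, "", std::move(p)});
    }
    Prod alt(std::vector<Prod> p) {
        return std::make_shared<Production>(
            Production{Production::Alt, "", std::move(p)});
    }
    Prod loop(Prod p) {
        return std::make_shared<Production>(
            Production{Production::Loop, "", {std::move(p)}});
    }
    Prod opt(Prod p) {
        return std::make_shared<Production>(
            Production{Production::Opt, "", {std::move(p)}});
    }

    // e.g. a comma-joined list:  list := item ("," item)*
    Prod comma_list(Prod item) {
        return seq({item, loop(seq({term(","), item}))});
    }

    int main() {
        Prod ident = term("identifier");
        Prod list  = comma_list(ident);   // carries the production's shape
        return list->parts.size() == 2 ? 0 : 1;
    }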

    Of course it's obligatory that this would be about C/C++,
    and with regards to Java, which of course is in the same
    style, or is its derivative: M4/C/C++ code already amounts
    to a multi-pass parser, and Java at some point added
    language features which fundamentally require a multi-pass
    parser.  So it's not as if the entire resources of the
    mainframe have to fit a finite-state-machine on the
    read-head; in fact, at compile-time specifically, it's
    fair to consider a concatenation of the compilation units
    as a linear input in space.  Then, figuring the "liftings"
    are linear in that, in space, and that the productions
    whence derived are as concise as the productions of a
    minimal model (thus the intermediate bit is discardable),
    the point is to introduce a sort of common model of
    language representation, source language, for reference
    implementations of the grammars, and then to make the act
    of ingestion of sources in languages a first-class kind
    of thing.  I'm looking for one of those, and that's about
    as much as I've figured out it is.

    It's such a usual idea I must imagine that it's
    commonplace, as it's just the very simplest act of
    the model of iterating these things and reading
    them out.

    I might not much care about it, but it gets to where it
    takes a parser that can parse SQL, for example, or, you
    know, where there are lots of source formats but it's
    just data and definitions.  Yeah, if you know of a very
    active open project in that, I'd be real interested: a
    sort of "source/object/relational mapping", ..., as it
    were, a "source/grammatical-production mapping", where
    you identify grammars and pick sources and it prints
    out the things.

    I'm familiar with the traditional approaches,
    and intend to employ them. I figure this
    must be a very traditional approach if
    nobody's heard of it.


    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From Thiago Adams@3:633/280.2 to All on Wed Mar 13 05:54:21 2024
    Subject: Re: "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    On 06/03/2024 04:43, Mr. Man-wai Chang wrote:
    On 5/3/2024 9:51 pm, Mr. Man-wai Chang wrote:
    On 3/3/2024 7:13 am, Lynn McGuire wrote:

    "The Biden administration backs a switch to more memory-safe programming languages. The tech industry sees their point, but it won't be easy."

    No. The feddies want to regulate software development very much. They
    have been talking about it for at least 20 years now. This is a very
    bad thing.

    A responsible, good programmer or a better C/C++ pre-processor can
    avoid a lot of problems!!

    Or maybe A.I.-assisted code analyzer?? But there are still blind spots...

    I think AI could be used and give good results, but it is not ideal.
    The advantage of AI is that it could understand patterns; for example,
    the names init and destroy could work as tips or patterns.

    However, I think programming needs a formal language for contracts,
    and the static analysis needs to check them.
    Also, ideally the contracts are on the interface, rather than
    having to look at the body of the functions.
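
    A small sketch of what contracts on the interface could look
    like today, with CONTRACT_EXPECTS invented for the
    illustration as a stand-in for a real contracts facility;
    existing narrower mechanisms in the same spirit include
    GCC/Clang's nonnull attribute and analyzer annotations.

    _______________________
    // Sketch only: state the contract next to the declaration, where a
    // static analyser (or a caller) can see it without reading the body.
    // CONTRACT_EXPECTS is invented for this illustration.
    #include <cassert>
    #include <cstddef>

    #define CONTRACT_EXPECTS(cond) assert(cond)   // checked in debug builds

    // Interface: requires dst != nullptr, src != nullptr, n > 0.
    void copy_samples(double* dst, const double* src, std::size_t n);

    // Body: the same contract, restated as an explicit check.
    void copy_samples(double* dst, const double* src, std::size_t n) {
        CONTRACT_EXPECTS(dst != nullptr && src != nullptr && n > 0);
        for (std::size_t i = 0; i < n; ++i)
            dst[i] = src[i];
    }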




    --- MBSE BBS v1.0.8.4 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)