"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe programming
languages. The tech industry sees their point, but it won't be easy."
No. The feddies want to regulate software development very much. They
have been talking about it for at least 20 years now. This is a very
bad thing.
Nevertheless, C retains the basic philosophy that
programmers know what they are doing; it only requires
that they state their intentions explicitly.
Lynn
Well, to be fair, the feds' regulations in the '60s made COBOL and FORTRAN
very popular, plus POSIX later on.
Nowadays, POSIX (and *nix generally) is undergoing a resurgence because
of Linux and Open Source. Developers are discovering that the Linux
ecosystem offers a much more productive development environment for a
code-sharing, code-reusing, Web-centric world than anything Microsoft
can offer.
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe programming >languages. The tech industry sees their point, but it won't be easy."
No. The feddies want to regulate software development very much.
On 03/03/2024 00:13, Lynn McGuire wrote:
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe programming
languages. The tech industry sees their point, but it won't be easy."
No. The feddies want to regulate software development very much. They
have been talking about it for at least 20 years now. This is a very
bad thing.
Lynn
It's the wrong solution to the wrong problem.
It is not languages like C and C++ that are "unsafe". It is the
programmers that write the code for them. As long as the people
programming in Rust or other modern languages are the more capable and
qualified developers - the ones who think about memory safety, correct
code, testing, and quality software development - then code written in
Rust will be better quality and safer than the average C, C++, Java and
C# code.
But if it gets popular enough for schools and colleges to teach Rust
programming courses to the masses, and it gets used by developers who are
paid per KLoC, given responsibilities well beyond their abilities and
experience, led by incompetent managers, untrained in good development
practices and pushed to impossible deadlines, then the average quality
of programs in Rust will drop to that of average C and C++ code.
Good languages and good tools help, but they are not the root cause of
poor quality software in the world.
Lawrence D'Oliveiro wrote:
Nowadays, POSIX (and *nix generally) is undergoing a resurgence because
of Linux and Open Source. Developers are discovering that the Linux
ecosystem offers a much more productive development environment for a
code-sharing, code-reusing, Web-centric world than anything Microsoft
can offer.
I do not want to live in a web-centric world.
On 2024-03-03, David Brown <david.brown@hesbynett.no> wrote:
On 03/03/2024 00:13, Lynn McGuire wrote:
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe programming >>> languages. The tech industry sees their point, but it won't be easy."
No. The feddies want to regulate software development very much. They
have been talking about it for at least 20 years now. This is a very
bad thing.
Lynn
It's the wrong solution to the wrong problem.
It is not languages like C and C++ that are "unsafe". It is the
programmers that write the code for them. As long as the people
programming in Rust or other modern languages are the more capable and
qualified developers - the ones who think about memory safety, correct
code, testing, and quality software development - then code written in
Rust will be better quality and safer than the average C, C++, Java and
C# code.
Programmers who think about safety, correctness and quality and all that
have way fewer diagnostics and more footguns if they are coding in C
compared to Rust.
I think you can't just wave away the characteristics of Rust as making
no difference in this regard.
But if it gets popular enough for schools and colleges to teach Rust
programming courses to the masses, and it gets used by developers who are
paid per KLoC, given responsibilities well beyond their abilities and
experience, led by incompetent managers, untrained in good development
practices and pushed to impossible deadlines, then the average quality
of programs in Rust will drop to that of average C and C++ code.
The rhetoric you hear from Rust people about this is that coders taking
a safety shortcut to make something work have to explicitly ask for that
in Rust. It leaves a visible trace. If something goes wrong because of
an unsafe block, you can trace that to the commit which added it.
The rhetoric all sounds good.
However, like you, I also believe it boils down to people, in a
somewhat different way. To use Rust productively, you have to be one of
the rare idiot savants who are smart enough to use it *and* numb to all
the inconveniences.
The reason the average programmer won't make any safety
boo-boos using Rust is that the average programmer either isn't smart
enough to use it at all, or else doesn't want to put up with the fuss:
they will opt for some safe language which is easy to use.
Rust's problem is that we have safe languages in which you can almost
crank out working code with your eyes closed. (Or if not working,
then at least code in which the only uncaught bugs are your logic bugs,
not some undefined behavior from integer overflow or array out of
bounds.)
This is why Rust people are desperately pitching Rust as an alternative
for C and whatnot, and showcasing it being used in the kernel and
whatnot.
Trying to be both safe and efficient to be able to serve as a "C
replacement" is a clumsy hedge that makes Rust an awkward language.
You know the parable about the fox that tries to chase two rabbits.
The alternative to Rust in application development is pretty much any convenient, "easy" high level language, plus a little bit of C.
You can get a small quantity of C right far more easily than a large
quantity of C. It's almost immaterial.
An important aspect of Rust is the ownership-based memory management.
The problem is, the "garbage collection is bad" era is /long/ behind us.
Scoped ownership is a half-baked solution to the object lifetime
problem, that gets in the way of the programmer and isn't appropriate
for the vast majority of software tasks.
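For concreteness, "scoped ownership" here means object lifetime tied to
lexical scope - the nearest C++ analogue is RAII. A minimal sketch, with
an invented FileHandle type:

#include <cstdio>

// RAII: the resource is acquired in the constructor and released in the
// destructor, so its lifetime is exactly the enclosing scope.
class FileHandle {
    std::FILE *f_;
public:
    explicit FileHandle(const char *path) : f_(std::fopen(path, "r")) {}
    ~FileHandle() { if (f_) std::fclose(f_); }
    FileHandle(const FileHandle &) = delete;             // exactly one owner
    FileHandle &operator=(const FileHandle &) = delete;
    std::FILE *get() const { return f_; }
};

void process()
{
    FileHandle log("data.txt");      // acquired here
    // ... use log.get() ...
}                                    // released here, on every exit path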
Embedded systems often need custom memory management, not something that
the language imposes. C has malloc, yet even that gets disused in favor
of something else.
On Sun, 3 Mar 2024 12:01:57 +0100, David Brown wrote:
It is not languages like C and C++ that are "unsafe".
Some empirical evidence from Google
<https://security.googleblog.com/2022/12/memory-safe-languages-in-android-13.html>
shows a reduction in memory-safety errors in switching from C/C++ to Rust.
On Sat, 2 Mar 2024 17:13:56 -0600, Lynn McGuire wrote:
The feddies want to regulate software development very much.
Given the high occurrence of embarrassing mistakes companies have been
making with their code, and continue to make, it’s quite clear they’re not
capable of regulating this issue themselves.
I wouldn’t worry about companies tripping over and hurting themselves, but when the consequences are security leaks, not of information belonging to those companies, but to their innocent customers/users who are often
unaware that those companies even had that information, then it’s quite clear that Government has to step in.
Because if they don’t, then who will?
On Sun, 3 Mar 2024 08:54:36 -0000 (UTC), Blue-Maned_Hawk wrote:
Lawrence D'Oliveiro wrote:
Nowadays, POSIX (and *nix generally) is undergoing a resurgence because
of Linux and Open Source. Developers are discovering that the Linux
ecosystem offers a much more productive development environment for a
code-sharing, code-reusing, Web-centric world than anything Microsoft
can offer.
I do not want to live in a web-centric world.
You already do.
On 03/03/2024 19:18, Kaz Kylheku wrote:
On 2024-03-03, David Brown <david.brown@hesbynett.no> wrote:
On 03/03/2024 00:13, Lynn McGuire wrote:
"White House to Developers: Using C or C++ Invites Cybersecurity Risks" >>>>
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe
programming
languages. The tech industry sees their point, but it won't be easy."
No. The feddies want to regulate software development very much. They >>>> have been talking about it for at least 20 years now. This is a very
bad thing.
Lynn
It's the wrong solution to the wrong problem.
It is not languages like C and C++ that are "unsafe". It is the
programmers that write the code for them. As long as the people
programming in Rust or other modern languages are the more capable and
qualified developers - the ones who think about memory safety, correct
code, testing, and quality software development - then code written in
Rust will be better quality and safer than the average C, C++, Java and
C# code.
Programmers who think about safety, correctness and quality and all that
have way fewer diagnostics and more footguns if they are coding in C
compared to Rust.
I think you can't just wave away the characteristics of Rust as making
no difference in this regard.
I did not.
I said that the /root/ problem is not the language, but the programmers
and the way they work.
Of course some languages make some things harder and other things
easier. And even the most careful programmers will occasionally make mistakes. So having a language that helps reduce the risk of some kinds
of errors is a helpful thing.
But consider this. When programming in modern C++, you can be largely
free of buffer overruns and most kinds of memory leak - use container
classes, string classes, and the like, rather than C-style arrays and
malloc/free or new/delete. You can use the C++ Core Guidelines support
libraries to mark ownership of pointers. You can use compiler
sanitizers to catch many kinds of undefined behaviour. You can use all
sorts of static analysis tools, from free to very costly, to help find
problems. And yet there are armies of programmers writing bad C++ code.
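For concreteness, a minimal sketch of that style (function names
invented for illustration):

#include <cstddef>
#include <string>
#include <vector>

// The container tracks its own size, grows as needed, and frees itself -
// no C-style array, no malloc/free or new/delete, no manual bounds checks.
std::string join_lines(const std::vector<std::string> &lines)
{
    std::string result;
    for (const auto &line : lines) {
        result += line;      // grows automatically; cannot overrun
        result += '\n';
    }
    return result;
}

// Element access can be checked rather than undefined:
int checked_read(const std::vector<int> &v, std::size_t i)
{
    return v.at(i);          // throws std::out_of_range instead of overrunning
}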
PHP and JavaScript have automatic memory management and garbage
collection, eliminating many of the possible problems seen in C and C++
code, yet armies of programmers write PHP and JavaScript code full of
bugs and security faults.
Better languages, better libraries, and better tools certainly help.
There are not many tasks for which C is the best choice of language. But none of that will deal with the root of the problem. Good programmers,
with good training, in good development departments with good managers
and good resources, will write correct code more efficiently in a better language, but they can write correct code in pretty much /any/
language. Similarly, the bulk of programmers will write bad code in any language.
But if it gets popular enough for schools and colleges to teach Rust
programming courses to the masses, and it gets used by developers who are
paid per KLoC, given responsibilities well beyond their abilities and
experience, led by incompetent managers, untrained in good development
practices and pushed to impossible deadlines, then the average quality
of programs in Rust will drop to that of average C and C++ code.
The rhetoric you hear from Rust people about this is that coders taking
a safety shortcut to make something work have to explicitly ask for that
in Rust. It leaves a visible trace. If something goes wrong because of
an unsafe block, you can trace that to the commit which added it.
The rhetoric all sounds good.
You can't trace the commit for programmers who don't use version control software - and that is a /lot/ of them. Leaving visible traces does not help when no one else looks at the code. Shortcuts are taken because
the sales people need the code by tomorrow morning, and there are only
so many hours in the night to get it working.
Rust makes it possible to have some safety checks for a few things that
are much harder to do in C++. It does not stop people writing bad code using bad development practices.
However, like you, I also believe it boils down to people, in a
somewhat different way. To use Rust productively, you have to be one of
the rare idiot savants who are smart enough to use it *and* numb to all
the inconveniences.
And you have to have managers who are smart enough to believe it when
their programmers say they need to train in a new language, re-write
lots of existing code, and accept longer development times as a tradeoff
for fewer bugs in shipped code.
(I personally have a very good manager, but I know a great many
programmers do not.)
The reason the average programmer won't make any safety
boo-boos using Rust is that the average programmer either isn't smart
enough to use it at all, or else doesn't want to put up with the fuss:
they will opt for some safe language which is easy to use.
Rust's problem is that we have safe languages in which you can almost
crank out working code with your eyes closed. (Or if not working,
then at least code in which the only uncaught bugs are your logic bugs,
not some undefined behavior from integer overflow or array out of
bounds.)
This is why Rust people are desperately pitching Rust as an alternative
for C and whatnot, and showcasing it being used in the kernel and
whatnot.
I personally think it is madness to have Rust in a project like the
Linux kernel. I used to see C++ as a rapidly changing language with its
3 year cycle - Rust seems to have a 3 week cycle for updates, with no
formal standardisation and "work in progress" attitude. That's fine for
a new language under development, but /not/ something you want for a
project that spans decades.
Trying to be both safe and efficient to be able to serve as a "C
replacement" is a clumsy hedge that makes Rust an awkward language.
You know the parable about the fox that tries to chase two rabbits.
The alternative to Rust in application development is pretty much any
convenient, "easy" high level language, plus a little bit of C.
You can get a small quantity of C right far more easily than a large
quantity of C. It's almost immaterial.
There are lots of alternatives to Rust for application development. But
in general, higher level languages mean you do less manual work, and
write fewer lines of code for the same amount of functionality. And
that means a lower risk of errors.
An important aspect of Rust is the ownership-based memory management.
The problem is, the "garbage collection is bad" era is /long/ behind us.
Scoped ownership is a half-baked solution to the object lifetime
problem, that gets in the way of the programmer and isn't appropriate
for the vast majority of software tasks.
Embedded systems often need custom memory management, not something that
the language imposes. C has malloc, yet even that gets disused in favor
of something else.
For safe embedded systems, you don't want memory management at all.
Avoiding dynamic memory is an important aspect of safety-critical
embedded development.
On Sun, 3 Mar 2024 08:54:36 -0000 (UTC), Blue-Maned_Hawk wrote:
Lawrence D'Oliveiro wrote:
Nowadays, POSIX (and *nix generally) is undergoing a resurgence
because of Linux and Open Source. Developers are discovering that the
Linux ecosystem offers a much more productive development environment
for a code-sharing, code-reusing, Web-centric world than anything
Microsoft can offer.
I do not want to live in a web-centric world.
You already do.
Frankly, I think we should all be programming in macros over assembly
anyway.
Lawrence D'Oliveiro wrote:
On Sun, 3 Mar 2024 08:54:36 -0000 (UTC), Blue-Maned_Hawk wrote:
I do not want to live in a web-centric world.
You already do.
That does not change the veracity of my statement.
On 3/3/2024 12:10 PM, Lawrence D'Oliveiro wrote:
On Sun, 3 Mar 2024 12:01:57 +0100, David Brown wrote:
It is not languages like C and C++ that are "unsafe".
Some empirical evidence from Google
<https://security.googleblog.com/2022/12/memory-safe-languages-in-android-13.html>
shows a reduction in memory-safety errors in switching from C/C++ to
Rust.
Sure. Putting corks on the forks reduces the chance of eye injuries.
On Sun, 3 Mar 2024 14:06:31 -0800, Chris M. Thomasson wrote:
On 3/3/2024 12:10 PM, Lawrence D'Oliveiro wrote:
On Sun, 3 Mar 2024 12:01:57 +0100, David Brown wrote:
It is not languages like C and C++ that are "unsafe".
Some empirical evidence from Google
<https://security.googleblog.com/2022/12/memory-safe-languages-in-android-13.html>
shows a reduction in memory-safety errors in switching from C/C++ to
Rust.
Sure. Putting corks on the forks reduces the chance of eye injuries.
Except this is Google, and they’re doing it in real-world production
code, namely Android. And showing some positive benefits from doing
so, without impairing the functionality of Android in any way.
Not like “putting corks on the forks”, whatever that might be about
...
"White House to Developers: Using C or C++ Invites Cybersecurity
Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus- invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe
programming languages. The tech industry sees their point, but it
won't be easy."
No. The feddies want to regulate software development very much.
They have been talking about it for at least 20 years now. This is a
very bad thing.
Lynn
Lynn McGuire <lynnmcguire5@gmail.com> wrote in news:us0brl$246bf$1@dont-email.me:[...]
"White House to Developers: Using C or C++ Invites Cybersecurity
Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe
programming languages. The tech industry sees their point, but it
won't be easy."
No. The feddies want to regulate software development very much.
They have been talking about it for at least 20 years now. This is a
very bad thing.
Lynn
I was thinking about this wrt other allegedly more secure languages.
They can be hacked just as easily as C and C++ and many other languages.
The government should worry about things they really need to control,
which is less not more, IMHO. They obviously know very little about
computer development.
Sure. Putting corks on the forks reduces the chance of eye injuries.
Fwiw, a YouTube link to a scene in the movie Dirty Rotten Scoundrels:
Funny to me:
https://youtu.be/eF8QAeQm3ZM?t=332
Putting the cork on the fork is akin to saying nobody should be using C
and/or C++ in this "modern" age? :^)
On 3/3/2024 12:23 PM, David Brown wrote:
On 03/03/2024 19:18, Kaz Kylheku wrote:
Embedded systems often need custom memory management, not something that
the language imposes. C has malloc, yet even that gets disused in favor
of something else.
For safe embedded systems, you don't want memory management at all.
Avoiding dynamic memory is an important aspect of safety-critical
embedded development.
You still have to think about memory management even if you avoid any
dynamic memory? How are you going to manage this memory wrt your various
data structure needs....
On 03/03/2024 23:01, Chris M. Thomasson wrote:
On 3/3/2024 12:23 PM, David Brown wrote:
On 03/03/2024 19:18, Kaz Kylheku wrote:
Embedded systems often need custom memory management, not something
that
the language imposes. C has malloc, yet even that gets disused in favor
of something else.
For safe embedded systems, you don't want memory management at all.
Avoiding dynamic memory is an important aspect of safety-critical
embedded development.
You still have to think about memory management even if you avoid any
dynamic memory? How are you going to manage this memory wrt your
various data structure needs....
To be clear here - sometimes you can't avoid all use of dynamic memory
and therefore memory management. And as Kaz says, you will often use
custom solutions such as resource pools rather than generic malloc/free.
Flexible network communication (such as Ethernet or other IP
networking) is hard to do without dynamic memory.
But for things that are safety or reliability critical, you aim to have everything statically allocated. (Sometimes you use dynamic memory at startup for convenience, but you never free that memory.) This, of
course, means you simply don't use certain kinds of data structures. std::array<> is fine - it's just a nicer type wrapper around a fixed
size C-style array. But you don't use std::vector<>, or other growable structures. You figure out in advance the maximum size you need for
your structures, and nail them to that size at compile time.
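A minimal sketch of that approach (the type and sizes are invented for
illustration):

#include <array>
#include <cstddef>

// Worst-case capacity decided in advance and fixed at compile time -
// no heap, no reallocation, no fragmentation.
template <typename T, std::size_t N>
class FixedVector {
    std::array<T, N> items_{};
    std::size_t count_ = 0;
public:
    // Returns false when full; the caller must handle that case explicitly.
    bool push_back(const T &value) {
        if (count_ == N)
            return false;
        items_[count_++] = value;
        return true;
    }
    std::size_t size() const { return count_; }
    const T &operator[](std::size_t i) const { return items_[i]; }
};

// Statically allocated, with its maximum size nailed down at compile time.
static FixedVector<int, 32> pending_requests;

The capacity is a compile-time constant, so exhaustion becomes an
explicit, testable condition rather than a runtime heap failure.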
On 3/3/2024 3:59 PM, David LaRue wrote:
Lynn McGuire <lynnmcguire5@gmail.com> wrote in[...]
news:us0brl$246bf$1@dont-email.me:
"White House to Developers: Using C or C++ Invites Cybersecurity
Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe
programming languages. The tech industry sees their point, but it
won't be easy."
No. The feddies want to regulate software development very much.
They have been talking about it for at least 20 years now. This is a
very bad thing.
Lynn
I was thinking about this wrt other allegedly more secure languages.
They
can be hacked just as easily as C and C++ and many other languages. The
government should worry about things they really need to control,
which is
less not more, IMHO. They obviously know very little about computer
development.
I remember a while back when some people would try to tell me that ADA solves all issues...
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe programming languages. The tech industry sees their point,
but it won't be easy."
On 04/03/2024 00:06, Chris M. Thomasson wrote:
On 3/3/2024 3:59 PM, David LaRue wrote:
Lynn McGuire <lynnmcguire5@gmail.com> wrote in[...]
news:us0brl$246bf$1@dont-email.me:
"White House to Developers: Using C or C++ Invites Cybersecurity
Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe
programming languages. The tech industry sees their point, but it
won't be easy."
No. The feddies want to regulate software development very much.
They have been talking about it for at least 20 years now. This is a
very bad thing.
Lynn
I was thinking about this wrt other allegedly more secure languages.
They can be hacked just as easily as C and C++ and many other languages.
The government should worry about things they really need to control,
which is less not more, IMHO. They obviously know very little about
computer development.
I remember a while back when some people would try to tell me that ADA
solves all issues...
And there's ADA, and there's Ada, the lady. And she wrote:
"The Analytical Engine has no pretensions whatever to originate
anything. It can do whatever we know how to order it to perform. It can
follow analysis; but it has no power of anticipating any analytical
relations or truths."
And so she knew what the capabilities of the Analytical Engine were,
exactly what programming was, what it could and could not achieve, and
she set out making it achieve what it could achieve. And so she had it,
and in a sense, ADA solved all issues.
And no formal computer science education. Of course.
On 03.03.2024 21:23, David Brown wrote:
[...] Shortcuts are taken because
the sales people need the code by tomorrow morning, and there are only
so many hours in the night to get it working.
An indication of bad project management (or none at all) to control development according to a realistic plan.
On 04/03/2024 12:54, Malcolm McLean wrote:
On 04/03/2024 00:06, Chris M. Thomasson wrote:
On 3/3/2024 3:59 PM, David LaRue wrote:
Lynn McGuire <lynnmcguire5@gmail.com> wrote in[...]
news:us0brl$246bf$1@dont-email.me:
"White House to Developers: Using C or C++ Invites Cybersecurity
Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe
programming languages. The tech industry sees their point, but it
won't be easy."
No. The feddies want to regulate software development very much.
They have been talking about it for at least 20 years now. This is a
very bad thing.
Lynn
I was thinking about this wrt other allegedly more secure languages.
They can be hacked just as easily as C and C++ and many other languages.
The government should worry about things they really need to control,
which is less not more, IMHO. They obviously know very little about
computer development.
I remember a while back when some people would try to tell me that
ADA solves all issues...
And there's ADA, and there's Ada, the lady.
No, there's Ada the programming language, named after Lady Ada Lovelace.
For those that perhaps don't understand these things, all-caps names are
usually used for acronyms, such as BASIC, or languages from before small
letters were universal in computer systems, such as early FORTRAN.
Programming languages named after people are generally capitalised the
same way people's names are - thus Ada and Pascal.
And she wrote:
"The Analytical Engine has no pretensions whatever to originate
anything. It can do whatever we know how to order it to perform. It
can follow analysis; but it has no power of anticipating any
analytical relations or truths."
And so she knew what the capabilities of the Analytical Engine were,
exactly what programming was, what it could and could not achieve, and
she set out making it achieve what it could achieve. And so she had
it, and in a sense, ADA solved all issues.
What I think you are trying to say, but got completely lost in the last
sentence, is that Lady Ada Lovelace is often regarded (perhaps
incorrectly) as the first computer programmer.
On 3/4/2024 12:44 AM, David Brown wrote:
On 03/03/2024 23:01, Chris M. Thomasson wrote:[...]
On 3/3/2024 12:23 PM, David Brown wrote:
On 03/03/2024 19:18, Kaz Kylheku wrote:
Embedded systems often need custom memory management, not something that
the language imposes. C has malloc, yet even that gets disused in
favor
of something else.
For safe embedded systems, you don't want memory management at all.
Avoiding dynamic memory is an important aspect of safety-critical
embedded development.
You still have to think about memory management even if you avoid any
dynamic memory? How are you going to manage this memory wrt your
various data structure needs....
To be clear here - sometimes you can't avoid all use of dynamic memory
and therefore memory management. And as Kaz says, you will often use
custom solutions such as resource pools rather than generic
malloc/free. Flexible network communication (such as Ethernet or
other IP networking) is hard to do without dynamic memory.
Think of using a big chunk of memory that never needs to be freed and is
just there per process. Now, you carve it up and store it in a cache
that has push and pop functions. So, you still have to manage memory
even when you are using no dynamic memory at all... Fair enough, in a
sense? The push and the pop are your malloc and free, in a strange sense...
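A toy sketch of that push/pop idea (block and pool sizes invented for
illustration):

#include <array>
#include <cstddef>

constexpr std::size_t block_size  = 256;
constexpr std::size_t block_count = 64;

// The big chunk: present for the life of the process, never freed.
alignas(std::max_align_t)
static std::array<unsigned char, block_size * block_count> chunk;

// A stack of pointers to free blocks carved out of the chunk.
static std::array<void *, block_count> cache;
static std::size_t top = 0;

void cache_init() {
    for (std::size_t i = 0; i < block_count; ++i)
        cache[top++] = &chunk[i * block_size];
}

void *cache_pop() {                  // plays the role of malloc
    return top > 0 ? cache[--top] : nullptr;
}

void cache_push(void *block) {       // plays the role of free
    cache[top++] = block;
}

Pop hands out a block and push returns it; the chunk itself is never
allocated or freed at runtime, which is the point being made.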
On 04/03/2024 08:44, David Brown wrote:
On 03/03/2024 23:01, Chris M. Thomasson wrote:
On 3/3/2024 12:23 PM, David Brown wrote:
On 03/03/2024 19:18, Kaz Kylheku wrote:
Embedded systems often need custom memory management, not something that
the language imposes. C has malloc, yet even that gets disused in
favor
of something else.
For safe embedded systems, you don't want memory management at all.
Avoiding dynamic memory is an important aspect of safety-critical
embedded development.
You still have to think about memory management even if you avoid any
dynamic memory? How are you going to manage this memory wrt your
various data structure needs....
To be clear here - sometimes you can't avoid all use of dynamic memory
and therefore memory management. And as Kaz says, you will often use
custom solutions such as resource pools rather than generic
malloc/free. Flexible network communication (such as Ethernet or
other IP networking) is hard to do without dynamic memory.
But for things that are safety or reliability critical, you aim to
have everything statically allocated. (Sometimes you use dynamic
memory at startup for convenience, but you never free that memory.)
This, of course, means you simply don't use certain kinds of data
structures. std::array<> is fine - it's just a nicer type wrapper
around a fixed size C-style array. But you don't use std::vector<>,
or other growable structures. You figure out in advance the maximum
size you need for your structures, and nail them to that size at
compile time.
And if it's embedded, it's unlikely to have an unbounded dataset thrown
at it, because embedded systems aren't used for those types of problems.
All,
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe
programming languages. The tech industry sees their point, but it
won't be easy."
They make the mistake of blaming the tools rather than how the tools
are used:
https://shape-of-code.com/2024/03/03/the-whitehouse-report-on-adopting-memory-safety/
In article <us2s96$2n6h3$6@dont-email.me>,
Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
...
Sure. Putting corks on the forks reduces the chance of eye injuries.
Fwiw, a YouTube link to a scene in the movie Dirty Rotten Scoundrels:
Funny to me:
https://youtu.be/eF8QAeQm3ZM?t=332
Leader Keith gets mad when you post YouTube URLs here.
I'd be more careful, if I were you.
Putting the cork on the fork is akin to saying nobody should be using C
and/or C++ in this "modern" age? :^)
And of course Google can solve a problem by inventing a new language and putting up all the infrastructure that that would need around it.
On Sun, 3 Mar 2024 16:06:24 -0800, Chris M. Thomasson wrote:
I remember a while back when some people would try to tell me that [Ada]
solves all issues...
It did make a difference. Did you know the life-support system on the International Space Station was written in Ada? Not something you
would trust C++ code to, let’s face it.
And here <https://devclass.com/2022/11/08/spark-as-good-as-rust-for-safer-coding-adacore-cites-nvidia-case-study/>
is a project to make it even safer.
On Mon, 4 Mar 2024 13:15:20 -0800, Chris M. Thomasson wrote:
Would you trust a "safe" language that had some critical libraries that
were written in say, C?
The less C code you write, the easier it is to keep it under control.
On 3/4/2024 1:26 PM, Lawrence D'Oliveiro wrote:
On Mon, 4 Mar 2024 13:15:20 -0800, Chris M. Thomasson wrote:
Would you trust a "safe" language that had some critical libraries that
were written in say, C?
The less C code you write, the easier it is to keep it under control.
Excellent comment in a C group. Well, you should move to another group?
On Mon, 4 Mar 2024 11:44:06 +0000, Malcolm McLean wrote:
And of course Google can solve a problem by inventing a new
language and putting up all the infrastructure that that would need
around it.
Google has invented quite a lot of languages: Dart and Go come to
mind, and also this “Carbon” effort.
I suppose nowadays a language can find a niche outside the
mainstream, and still be viable. Proprietary products need
mass-market success to stay afloat, but with open-source ones, what’s
important is the contributor base, not the user base.
On 04/03/2024 17:05, Janis Papanagnou wrote:
On 03.03.2024 21:23, David Brown wrote:
[...] Shortcuts are taken because
the sales people need the code by tomorrow morning, and there are only
so many hours in the night to get it working.
An indication of bad project management (or none at all) to control
development according to a realistic plan.
Now you are beginning to understand!
Go *is* mainstream, more so than Rust.
On 3/3/2024 9:43 PM, Lawrence D'Oliveiro wrote:
On Sun, 3 Mar 2024 16:06:24 -0800, Chris M. Thomasson wrote:
I remember a while back when some people would try to tell me that [Ada]
solves all issues...
It did make a difference. Did you know the life-support system on the
International Space Station was written in Ada? Not something you
would trust C++ code to, let’s face it.
Would you trust a "safe" language that had some critical libraries that
were written in say, C?
On 04/03/2024 21:28, Chris M. Thomasson wrote:
On 3/4/2024 1:26 PM, Lawrence D'Oliveiro wrote:
On Mon, 4 Mar 2024 13:15:20 -0800, Chris M. Thomasson wrote:
Would you trust a "safe" language that had some critical libraries that
were written in say, C?
The less C code you write, the easier it is to keep it under control.
Excellent comment in a C group. Well, you should move to another group?
There's an underlying reality there. The less code you have, the less
that can go wrong.
So don't just knock out code, but think a bit about
what you do and do not really need.
On 04.03.2024 22:15, Chris M. Thomasson wrote:
On 3/3/2024 9:43 PM, Lawrence D'Oliveiro wrote:
On Sun, 3 Mar 2024 16:06:24 -0800, Chris M. Thomasson wrote:
I remember a while back when some people would try to tell me that [Ada]
solves all issues...
It did make a difference. Did you know the life-support system on the
International Space Station was written in Ada? Not something you
would trust C++ code to, let’s face it.
Would you trust a "safe" language that had some critical libraries that
were written in say, C?
You named them as "critical libraries", which (as a project manager)
I'd handle as such: be sure about their quality and certificates,
write my own test cases if necessary, or demand source code for review
and verification.
As already said, there are more factors than the language. An external
library is also an externality to consider, not something to accept
(per se) as okay.
On Tue, 5 Mar 2024 02:46:33 +0000, Malcolm McLean wrote:
The less code you have, the less that can go wrong.
This can also mean using the build system to automatically generate some repetitive things, to avoid having to write them manually.
Lynn McGuire <lynnmcguire5@gmail.com> writes:
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe programming
languages. The tech industry sees their point, but it won't be easy."
No. The feddies want to regulate software development very much.
You've been reading far too much apocalyptic fiction and seeing the
world through trump-colored glasses. Neither reflects reality.
On Tue, 5 Mar 2024 00:59:48 +0200, Michael S wrote:
Go *is* mainstream, more so than Rust.
Google looked at what language to use for its proprietary “Fuchsia” OS, and decided Rust was a better choice than Go.
Discord did some benchmarking of its back-end servers, which had been
using Go, and decided that switching to Rust offered better performance.
On 3/4/2024 5:54 PM, Lawrence D'Oliveiro wrote:
On Tue, 5 Mar 2024 00:59:48 +0200, Michael S wrote:
Go *is* mainstream, more so than Rust.
Google looked at what language to use for its proprietary “Fuchsia” OS,
and decided Rust was a better choice than Go.
Discord did some benchmarking of its back-end servers, which had been
using Go, and decided that switching to Rust offered better
performance.
Why do you mention performance? I thought it was all about safety...
On 3/4/2024 8:43 PM, Lawrence D'Oliveiro wrote:
On Tue, 5 Mar 2024 02:46:33 +0000, Malcolm McLean wrote:
The less code you have, the less that can go wrong.
This can also mean using the build system to automatically generate
some repetitive things, to avoid having to write them manually.
Does the build system depend on anything coded in C?
On 3/3/2024 11:43 PM, Lawrence D'Oliveiro wrote:
Did you know the life-support system on the
International Space Station was written in Ada? Not something you would
trust C++ code to, let’s face it.
Most of the Ada code was written in C or C++ and converted to Ada for delivery.
On Mon, 4 Mar 2024 22:18:47 -0800, Chris M. Thomasson wrote:
On 3/4/2024 5:54 PM, Lawrence D'Oliveiro wrote:
On Tue, 5 Mar 2024 00:59:48 +0200, Michael S wrote:
Go *is* mainstream, more so than Rust.
Google looked at what language to use for its proprietary “Fuchsia” OS,
and decided Rust was a better choice than Go.
Discord did some benchmarking of its back-end servers, which had been
using Go, and decided that switching to Rust offered better
performance.
Why do you mention performance? I thought is was all about safety...
Safety’s a given. Plus you get performance as well.
On 04.03.2024 18:24, David Brown wrote:
On 04/03/2024 17:05, Janis Papanagnou wrote:
On 03.03.2024 21:23, David Brown wrote:
[...] Shortcuts are taken because
the sales people need the code by tomorrow morning, and there are only
so many hours in the night to get it working.
An indication of bad project management (or none at all) to control
development according to a realistic plan.
Now you are beginning to understand!
Huh? - I posted about various factors (beyond the programmers'
proficiency and tools) in an earlier reply to you; that included
the management factor, which you failed to note and which you adopted
as a factor only in a later post. - So there's neither need nor reason
for such an arrogant, wrong, and disrespectful statement.
On Tue, 5 Mar 2024 00:03:54 -0600, Lynn McGuire wrote:
On 3/3/2024 11:43 PM, Lawrence D'Oliveiro wrote:
Did you know the life-support system on the
International Space Station was written in Ada? Not something you would
trust C++ code to, let’s face it.
Most of the Ada code was written in C or C++ and converted to Ada for
delivery.
Was it debugged again? Or was it assumed that the translation was bug-free?
On Mon, 4 Mar 2024 15:41:43 +0100, David Brown wrote:
... Lady Ada Lovelace is often regarded (perhaps
incorrectly) as the first computer programmer.
She was the first, in written records, to appreciate some of the
not-so-obvious issues in computer programming.
"The Biden administration backs a switch to more memory-safe programming languages. The tech industry sees their point, but it won't be easy."
No. The feddies want to regulate software development very much. They
have been talking about it for at least 20 years now. This is a very
bad thing.
On 3/3/2024 9:31 AM, Scott Lurndal wrote:
Lynn McGuire <lynnmcguire5@gmail.com> writes:
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe programming >>> languages. The tech industry sees their point, but it won't be easy."
No. The feddies want to regulate software development very much.
You've been reading far to much apocalyptic fiction and seeing the
world through trump-colored glasses. Neither reflect reality.
Nope, I actually have had a Professional Engineer's License in Texas for
34 years now and can tell you all about what it takes to get one and
what it takes to keep one.
This bunch of crazies in the White House wants to do the same thing to
software development.
On 04/03/2024 21:36, Chris M. Thomasson wrote:
On 3/4/2024 12:44 AM, David Brown wrote:
On 03/03/2024 23:01, Chris M. Thomasson wrote:[...]
On 3/3/2024 12:23 PM, David Brown wrote:
On 03/03/2024 19:18, Kaz Kylheku wrote:
Embedded systems often need custom memory management, not
something that
the language imposes. C has malloc, yet even that gets disused in favor
of something else.
For safe embedded systems, you don't want memory management at all.
Avoiding dynamic memory is an important aspect of safety-critical
embedded development.
You still have to think about memory management even if you avoid
any dynamic memory? How are you going to manage this memory wrt your
various data structure needs....
To be clear here - sometimes you can't avoid all use of dynamic
memory and therefore memory management. And as Kaz says, you will
often use custom solutions such as resource pools rather than generic
malloc/free. Flexible network communication (such as Ethernet or
other IP networking) is hard to do without dynamic memory.
Think of using a big chunk of memory, never needed to be freed and is
just there per process. Now, you carve it up and store it in a cache
that has functions push and pop. So, you still have to manage memory
even when you are using no dynamic memory at all... Fair enough, in a
sense? The push and the pop are your malloc and free in a strange
sense...
I believe I mentioned that. You do not, in general, "push and pop" -
you malloc and never free. Excluding debugging code and other parts
useful in testing and developing, you have something like:

#include <stdalign.h>
#include <stddef.h>
#include <stdint.h>

enum { heap_size = 16384 };

alignas(max_align_t) static uint8_t heap[heap_size];
static uint8_t * next_free = heap;

/* free() is a no-op - every allocation lives for the life of the program.
   (These replace the library malloc/free in a freestanding build.) */
void free(void * ptr) {
    (void) ptr;
}

/* Bump allocator: hand out the next maximally-aligned slice of the heap.
   No out-of-memory check - that is part of the testing and debugging
   code excluded above. */
void * malloc(size_t size) {
    const size_t align = alignof(max_align_t);
    const size_t real_size = size ? (size + (align - 1)) & ~(align - 1)
                                  : align;
    void * p = next_free;
    next_free += real_size;
    return p;
}
Allowing for pops requires storing the size of the allocations (unless
you change the API from that of malloc/free), and is only rarely useful.
Generally, if you want memory that is that temporary, you use a VLA or
alloca to put it on the stack.
On 05/03/2024 08:08, Lawrence D'Oliveiro wrote:
On Tue, 5 Mar 2024 00:03:54 -0600, Lynn McGuire wrote:
On 3/3/2024 11:43 PM, Lawrence D'Oliveiro wrote:
Did you know the life-support system on the
International Space Station was written in Ada? Not something you would
trust C++ code to, let’s face it.
Most of the Ada code was written in C or C++ and converted to Ada for
delivery.
Was it debugged again? Or was it assumed that the translation was bug-free?
With Ada, if you can get it to compile, it's ready to ship :-)
On 3/5/2024 2:27 AM, David Brown wrote:
On 05/03/2024 08:08, Lawrence D'Oliveiro wrote:
On Tue, 5 Mar 2024 00:03:54 -0600, Lynn McGuire wrote:
On 3/3/2024 11:43 PM, Lawrence D'Oliveiro wrote:
Did you know the life-support system on the
International Space Station was written in Ada? Not something you would
trust C++ code to, let’s face it.
Most of the Ada code was written in C or C++ and converted to Ada for
delivery.
Was it debugged again? Or was it assumed that the translation was bug-free?
With Ada, if you can get it to compile, it's ready to ship :-)
Really? Any logic errors in the program itself?
On 2024-03-05, Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
On 3/5/2024 2:27 AM, David Brown wrote:
On 05/03/2024 08:08, Lawrence D'Oliveiro wrote:
On Tue, 5 Mar 2024 00:03:54 -0600, Lynn McGuire wrote:
On 3/3/2024 11:43 PM, Lawrence D'Oliveiro wrote:
Did you know the life-support system on the
International Space Station was written in Ada? Not something you would
trust C++ code to, let’s face it.
Most of the Ada code was written in C or C++ and converted to Ada for
delivery.
Was it debugged again? Or was it assumed that the translation was bug-free?
With Ada, if you can get it to compile, it's ready to ship :-)
Really? Any logic errors in the program itself?
Ariane 5 rocket incident of 1996: the Ada code didn't catch the hardware
overflow exception from forcing a 64-bit floating-point value into a
16-bit integer. The situation was not expected by the code, which was
developed for the Ariane 4, or something like that.
On Mon, 4 Mar 2024 21:23:49 -0800, Chris M. Thomasson wrote:
On 3/4/2024 8:43 PM, Lawrence D'Oliveiro wrote:
On Tue, 5 Mar 2024 02:46:33 +0000, Malcolm McLean wrote:
The less code you have, the less that can go wrong.
This can also mean using the build system to automatically generate
some repetitive things, to avoid having to write them manually.
Does the build system depend on anything coded in C?
These days, it might be Rust.
Kaz Kylheku <433-929-6894@kylheku.com> writes:
On 2024-03-05, Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
On 3/5/2024 2:27 AM, David Brown wrote:
On 05/03/2024 08:08, Lawrence D'Oliveiro wrote:
On Tue, 5 Mar 2024 00:03:54 -0600, Lynn McGuire wrote:
On 3/3/2024 11:43 PM, Lawrence D'Oliveiro wrote:
Did you know the life-support system on the
International Space Station was written in Ada? Not something you would
trust C++ code to, let’s face it.
Most of the Ada code was written in C or C++ and converted to Ada for
delivery.
Was it debugged again? Or was it assumed that the translation was bug-free?
With Ada, if you can get it to compile, it's ready to ship :-)
Really? Any logic errors in the program itself?
Ariane 5 rocket incident of 1996: The Ada code didn't catch the hardware
overflow exception from forcing a 64 bit floating-point value into a 16
bit integer. The situation was not expected by the code which was
developed for the Ariane 4, or something like that.
A numeric overflow occurred during the Ariane 5's initial flight -- and
the software *did* catch the overflow. The same overflow didn't occur
on Ariane 4 because of its different flight profile. There was a
management decision to reuse the Ariane 4 flight software for Ariane 5 without sufficient review.
The code (which had been thoroughly tested on Ariane 4 and was known not
to overflow) emitted an error message describing the overflow exception.
That error message was then processed as data. Another problem was that systems were designed to shut down on any error; as a result, healthy
and necessary equipment was shut down prematurely.
This is from my vague memory, and may not be entirely accurate.
*Of course* logic errors are possible in Ada programs, but in my
experience and that of many other programmers, if you get an Ada program
to compile (and run without raising unhandled exceptions), you're likely
to be much closer to a working program than if you get a C program to compile. A typo in a C program is more likely to result in a valid
program with different semantics.
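To make Keith's last point concrete, here is an illustrative sketch of my own (not code from the thread): each of these one-character slips is still a valid C program, so the compiler happily accepts a different meaning instead of reporting an error. In Ada, assignment is a statement rather than an expression, so the first slip would simply not compile.

#include <stdio.h>

int main(void) {
    int x = 5;

    if (x = 0)    /* typo for "x == 0": assigns 0, condition is false */
        puts("unreachable - and x has been clobbered");

    for (int i = 0; i < 3; i++);   /* stray ';': the loop body is empty */
        puts("printed once, not three times");

    return 0;
}

Most modern compilers will warn about both patterns when warnings are enabled, but the programs remain legal, which is exactly the difference being described.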
ADA is bullet proof... Until its not... ;^)
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
[...]
ADA is bullet proof... Until its not... ;^)
The language is called Ada, not ADA.
Of course no language that can be used for real work can be completely bulletproof. Ada is designed to be relatively safe (and neither of
these newsgroups is the place to discuss the details.)
On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Discord did some benchmarking of its back-end servers, which had been
using Go, and decided that switching to Rust offered better
performance.
- for big and complex real-world back-end processing, writing a working
solution in Go will take five times fewer man-hours than writing it in Rust
On 3/4/2024 11:07 PM, Lawrence D'Oliveiro wrote:
On Mon, 4 Mar 2024 21:23:49 -0800, Chris M. Thomasson wrote:
Does the build system depend on anything coded in C?
These days, it might be Rust.
The keyword is might... Right?
On Tue, 5 Mar 2024 13:48:25 -0800, Chris M. Thomasson wrote:
On 3/4/2024 11:07 PM, Lawrence D'Oliveiro wrote:
On Mon, 4 Mar 2024 21:23:49 -0800, Chris M. Thomasson wrote:
Does the build system depend on anything coded in C?
These days, it might be Rust.
The keyword is might... Right?
Might does not make right.
On 3/3/2024 7:13 am, Lynn McGuire wrote:
"The Biden administration backs a switch to more memory-safe programming
languages. The tech industry sees their point, but it won't be easy."
No. The feddies want to regulate software development very much. They
have been talking about it for at least 20 years now. This is a very
bad thing.
A responsible, good programmer or a better C/C++ pre-processor can
avoid a lot of problems!!
On 3/5/2024 1:01 AM, David Brown wrote:
On 04/03/2024 21:36, Chris M. Thomasson wrote:
On 3/4/2024 12:44 AM, David Brown wrote:
On 03/03/2024 23:01, Chris M. Thomasson wrote:
[...]
On 3/3/2024 12:23 PM, David Brown wrote:
On 03/03/2024 19:18, Kaz Kylheku wrote:
Embedded systems often need custom memory management, not
something that
the language imposes. C has malloc, yet even that gets disused in
favor
of something else.
For safe embedded systems, you don't want memory management at
all. Avoiding dynamic memory is an important aspect of
safety-critical embedded development.
You still have to think about memory management even if you avoid
any dynamic memory? How are you going to manage this memory wrt your
various data structures' needs....
To be clear here - sometimes you can't avoid all use of dynamic
memory and therefore memory management. And as Kaz says, you will
often use custom solutions such as resource pools rather than
generic malloc/free. Flexible network communication (such as
Ethernet or other IP networking) is hard to do without dynamic memory.
Think of using a big chunk of memory that never needs to be freed and
is just there per process. Now, you carve it up and store it in a cache
that has push and pop functions. So, you still have to manage memory
even when you are using no dynamic memory at all... Fair enough, in a
sense? The push and the pop are your malloc and free in a strange
sense...
I believe I mentioned that. You do not, in general, "push and pop" -
you malloc and never free. Excluding debugging code and other parts
useful in testing and developing, you have something like:

#include <stddef.h>    /* size_t, max_align_t */
#include <stdint.h>    /* uint8_t */
#include <stdalign.h>  /* alignas, alignof */

enum { heap_size = 16384 };

alignas(max_align_t) static uint8_t heap[heap_size];
static uint8_t * next_free = heap;

/* Never frees: the "allocate and never release" strategy. */
void free(void * ptr) {
    (void) ptr;
}

/* Bump allocator: round the request up to max alignment and advance. */
void * malloc(size_t size) {
    const size_t align = alignof(max_align_t);
    const size_t real_size = size ? (size + (align - 1)) & ~(align - 1)
                                  : align;
    void * p = next_free;
    next_free += real_size;    /* note: no check for heap exhaustion */
    return p;
}
Allowing for pops requires storing the size of the allocations (unless
you change the API from that of malloc/free), and is only rarely
useful. Generally, if you want memory that short-lived, you use a VLA
or alloca to put it on the stack.
wrt systems with no malloc/free I am thinking more along the lines of a region allocator mixed with a LIFO for a cache, so a node based thing.
The region allocator gets fed with a large buffer. Depending on specific needs, it can work out nicely for systems that do not have malloc/free.
The pattern I used iirc, was something like:
// pseudo code...
_______________________
node*
node_pop()
{
// try the lifo first...
node* n = lifo_pop();
if (! n)
{
// resort to the region allocator...
n = region_allocate_node();
// note, n can be null here.
// if it is, we are out of memory.
// note, out of memory on a system
// with no malloc/free...
}
return n;
}
void
node_push(
node* n
) {
lifo_push(n);
}
_______________________
make any sense to you?
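For what it's worth, here is one self-contained reading of that pattern in plain C (an illustrative sketch: the node type, the region size, and the bodies behind the poster's pseudocode names lifo_pop, lifo_push and region_allocate_node are all invented):

#include <stddef.h>

typedef struct node { struct node *next; /* payload would follow */ } node;

/* Statically provided region - no malloc/free anywhere. */
enum { region_cap = 1024 };
static node region[region_cap];
static size_t region_used = 0;

/* LIFO free-list of recycled nodes. */
static node *lifo_head = NULL;

static node *lifo_pop(void) {
    node *n = lifo_head;
    if (n) lifo_head = n->next;
    return n;
}

static void lifo_push(node *n) {
    n->next = lifo_head;
    lifo_head = n;
}

static node *region_allocate_node(void) {
    /* Bump allocation; NULL means the region is exhausted. */
    return (region_used < region_cap) ? &region[region_used++] : NULL;
}

node *node_pop(void) {
    node *n = lifo_pop();           /* try recycled nodes first */
    if (!n)
        n = region_allocate_node(); /* fall back to the region */
    return n;                       /* may be NULL: out of memory */
}

void node_push(node *n) {
    lifo_push(n);                   /* recycle instead of freeing */
}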
On Tue, 5 Mar 2024 11:11:03 +0200, Michael S wrote:
On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Discord did some benchmarking of its back-end servers, which had
been using Go, and decided that switching to Rust offered better
performance.
- for big and complex real-world back-end processing, writing a
working solution in Go will take five times fewer man-hours than
writing it in Rust
Nevertheless, they found the switch to Rust worthwhile.
On Tue, 5 Mar 2024 22:58:10 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 5 Mar 2024 11:11:03 +0200, Michael S wrote:
On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Discord did some benchmarking of its back-end servers, which had
been using Go, and decided that switching to Rust offered better
performance.
- for big and complex real-world back-end processing, writing a
working solution in Go will take five times fewer man-hours than
writing it in Rust
Nevertheless, they found the switch to Rust worthwhile.
I read a little more about it. https://discord.com/blog/why-discord-is-switching-from-go-to-rust
Summary: performance of one of Discord's most heavy-duty servers
suffered from a weakness in the implementation of the Go garbage
collector. On average the performance was satisfactory, but every two
minutes there was a spike in latency. The latency during the spikes was
not that big (300 msec), but they still felt they wanted better.
They tried to tune the GC, but the problem appeared to be fundamental.
So they just rewrote this particular server in Rust. Naturally, Rust
does not collect garbage, so this particular problem disappeared.
The key phrase of the story is "This service was a great candidate to
port to Rust since it was small and self-contained".
I'd add to this that even more important for the eventual success of the
migration was the fact that at the time of the rewrite the server had
already been running for several years, so the requirements were stable
and well-understood.
Another factor is that their service does not create/free that many
objects. The delay was caused by the mere fact of GC scanning rather
than by frequent compacting of memory pools. So, from the beginning it
was obvious that potential fragmentation of the heap, which is the main
weakness of "plain" C/C++/Rust based solutions for Web back-ends, does
not apply in their case.
On 3/5/2024 2:11 PM, Keith Thompson wrote:
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
[...]
ADA is bullet proof... Until its not... ;^)
The language is called Ada, not ADA.
I wonder how many people got confused?
On 3/5/2024 1:58 PM, Keith Thompson wrote:
Kaz Kylheku <433-929-6894@kylheku.com> writes:
On 2024-03-05, Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
On 3/5/2024 2:27 AM, David Brown wrote:
On 05/03/2024 08:08, Lawrence D'Oliveiro wrote:
On Tue, 5 Mar 2024 00:03:54 -0600, Lynn McGuire wrote:
On 3/3/2024 11:43 PM, Lawrence D'Oliveiro wrote:
Did you know the life-support system on the
International Space Station was written in Ada? Not something
you would trust C++ code to, let’s face it.
Most of the Ada code was written in C or C++ and converted to Ada
for delivery.
Was it debugged again? Or was it assumed that the translation was
bug-free?
With Ada, if you can get it to compile, it's ready to ship :-)
Really? Any logic errors in the program itself?
Ariane 5 rocket incident of 1996: The Ada code didn't catch the hardware
overflow exception from forcing a 64 bit floating-point value into a 16
bit integer. The situation was not expected by the code, which was
developed for the Ariane 4, or something like that.
A numeric overflow occurred during the Ariane 5's initial flight -- and
the software *did* catch the overflow. The same overflow didn't occur
on Ariane 4 because of its different flight profile. There was a
management decision to reuse the Ariane 4 flight software for Ariane 5
without sufficient review.
The code (which had been thoroughly tested on Ariane 4 and was known not
to overflow) emitted an error message describing the overflow exception.
That error message was then processed as data. Another problem was that
systems were designed to shut down on any error; as a result, healthy
and necessary equipment was shut down prematurely.
This is from my vague memory, and may not be entirely accurate.
*Of course* logic errors are possible in Ada programs, but in my
experience and that of many other programmers, if you get an Ada program
to compile (and run without raising unhandled exceptions), you're likely
to be much closer to a working program than if you get a C program to
compile. A typo in a C program is more likely to result in a valid
program with different semantics.
So close you can just feel it's a 100% correct and working program?
On Tue, 5 Mar 2024 11:31:11 +0100, David Brown wrote:
That includes realising that computers could do more than number
crunching.
Or, conversely, realizing that all forms of computation (including symbol manipulation) can be expressed as arithmetic?
Maybe that came later, cf. “Gödel numbering”.
On 05/03/2024 23:34, Chris M. Thomasson wrote:
On 3/5/2024 2:11 PM, Keith Thompson wrote:
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
[...]
ADA is bullet proof... Until its not... ;^)
The language is called Ada, not ADA.
I wonder how many people got confused?
Apparently you and Malcolm got confused.
Others who mentioned the language know it is called "Ada". I not only corrected you, but gave an explanation of it, in the hope that with that clarity, you'd learn.
On 06/03/2024 13:31, David Brown wrote:
On 05/03/2024 23:34, Chris M. Thomasson wrote:
On 3/5/2024 2:11 PM, Keith Thompson wrote:
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
[...]
ADA is bullet proof... Until its not... ;^)
The language is called Ada, not ADA.
I wonder how many people got confused?
Apparently you and Malcolm got confused.
Others who mentioned the language know it is called "Ada". I not
only corrected you, but gave an explanation of it, in the hope that
with that clarity, you'd learn.
Whoever wrote this short Wikipedia article on it got confused too, as
it uses both Ada and ADA:
https://simple.wikipedia.org/wiki/Ada_(programming_language)
(The example program also includes 'Ada' as some package name. Since
it is case-insensitive, 'ADA' would also work.)
Here's also a paper that uses 'ADA' (I assume it is the same
language):
https://www.sciencedirect.com/science/article/abs/pii/0166361582900136
Personally I'm not bothered whether anyone uses Ada or ADA. Is 'C'
written in all-caps or only capitalised? You can't tell!
On Tue, 5 Mar 2024 22:58:10 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 5 Mar 2024 11:11:03 +0200, Michael S wrote:
On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Discord did some benchmarking of its back-end servers, which had
been using Go, and decided that switching to Rust offered better
performance.
- for big and complex real-world back-end processing, writing a
working solution in Go will take five times fewer man-hours than
writing it in Rust
Nevertheless, they found the switch to Rust worthwhile.
I read a little more about it. https://discord.com/blog/why-discord-is-switching-from-go-to-rust
Summary: performance of one of Discord's most heavy-duty servers
suffered from a weakness in the implementation of the Go garbage
collector. On average the performance was satisfactory, but every two
minutes there was a spike in latency. The latency during the spikes was
not that big (300 msec), but they still felt they wanted better.
I have a few questions about the story, the most important one being
whether a weakness of this sort is specific to the GC of Go, due to its
relative immaturity, or is more general and applies equally to the most
mature GCs on the market, i.e. J2EE and .NET.
On Wed, 6 Mar 2024 13:50:16 +0000
bart <bc@freeuk.com> wrote:
Whoever wrote this short Wikipedia article on it got confused too as
it uses both Ada and ADA:
https://simple.wikipedia.org/wiki/Ada_(programming_language)
(The example program also includes 'Ada' as some package name. Since
it is case-insensitive, 'ADA' would also work.)
Your link is to "simple Wikipedia". I don't know what it is
exactly, but it does not appear as authoritative as real Wikipedia
https://en.wikipedia.org/wiki/Ada_(programming_language)
Here's also a paper that uses 'ADA' (I assume it is the same
language):
https://www.sciencedirect.com/science/article/abs/pii/0166361582900136
The article was published in 1982. The language became official in 1983.
Possibly, in 1982 there was still some confusion w.r.t. its name.
Personally I'm not bothered whether anyone uses Ada or ADA. Is 'C'
written in all-caps or only capitalised? You can't tell!
If only ADA, written in upper case, was not widely used for something
else...
On 06/03/2024 14:18, Michael S wrote:
If only ADA, written in upper case, was not widely used for something
else...
I don't know what that is without looking it up. In a programming
newsgroup I expect ADA to be the language.
On 3/6/24 09:18, Michael S wrote:
On Wed, 6 Mar 2024 13:50:16 +0000...
bart <bc@freeuk.com> wrote:
Whoever wrote this short Wikipedia article on it got confused too as
it uses both Ada and ADA:
https://simple.wikipedia.org/wiki/Ada_(programming_language)
(The example program also includes 'Ada' as some package name. Since
it is case-insensitive, 'ADA' would also work.)
Your link is to "simple Wikipedia". I don't know what it is
exactly, but it does not appear as authoritative as real Wikipedia
Notice that in your following link, "en" appears at the beginning to
indicate the use of English. "simple" at the beginning of the above
link serves the same purpose. "Simple English" is its own language,
closely related to standard English.
On 2024-03-06, James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 3/6/24 09:18, Michael S wrote:
On Wed, 6 Mar 2024 13:50:16 +0000...
bart <bc@freeuk.com> wrote:
Whoever wrote this short Wikipedia article on it got confused too as
it uses both Ada and ADA:
https://simple.wikipedia.org/wiki/Ada_(programming_language)
(The example program also includes 'Ada' as some package name. Since
it is case-insensitive, 'ADA' would also work.)
Your link is to "simple Wikipedia". I don't know what it is
exactly, but it does not appear as authoritative as real Wikipedia
Notice that in your following link, "en" appears at the beginning to
indicate the use of English. "simple" at the beginning of the above link
serves the same purpose. "Simple English" is its own language, closely
related to standard English.
Where is Simple English spoken? Is there some geographic area where
native speakers concentrate?
On 06/03/2024 12:02, Michael S wrote:
On Tue, 5 Mar 2024 22:58:10 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 5 Mar 2024 11:11:03 +0200, Michael S wrote:
On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Discord did some benchmarking of its back-end servers, which had
been using Go, and decided that switching to Rust offered better
performance.
- for big and complex real-world back-end processing, writing a
working solution in Go will take five times fewer man-hours than
writing it in Rust
Nevertheless, they found the switch to Rust worthwhile.
I read a little more about it.
https://discord.com/blog/why-discord-is-switching-from-go-to-rust
Summary: performance of one of Discord's most heavy-duty servers
suffered from a weakness in the implementation of the Go garbage
collector. On average the performance was satisfactory, but every two
minutes there was a spike in latency. The latency during the spikes
was not that big (300 msec), but they still felt they wanted better.
They tried to tune the GC, but the problem appeared to be
fundamental. So they just rewrote this particular server in Rust.
Naturally, Rust does not collect garbage, so this particular problem
disappeared.
The key phrase of the story is "This service was a great candidate
to port to Rust since it was small and self-contained".
I'd add to this that even more important for the eventual success of
the migration was the fact that at the time of the rewrite the server
had already been running for several years, so the requirements were
stable and well-understood.
Another factor is that their service does not create/free that many
objects. The delay was caused by the mere fact of GC scanning rather
than by frequent compacting of memory pools. So, from the beginning
it was obvious that potential fragmentation of the heap, which is
the main weakness of "plain" C/C++/Rust based solutions for Web
back-ends, does not apply in their case.
From the same link:
"Rust uses a relatively unique memory management approach that
incorporates the idea of memory “ownership”. Basically, Rust keeps
track of who can read and write to memory. It knows when the program
is using memory and immediately frees the memory once it is no longer
needed. It enforces memory rules at compile time, making it virtually
impossible to have runtime memory bugs.⁴ You do not need to manually
keep track of memory. The compiler takes care of it."
This suggests the language automatically takes care of this. But you
have to write your programs in a certain way to make it possible. The
programmer has to help the language keep track of what owns what.
So you will probably be able to do the same thing in another
language. But Rust will do more compile-time enforcement by
restricting how you share objects in memory.
On 05/03/2024 23:02, Chris M. Thomasson wrote:
On 3/5/2024 1:58 PM, Keith Thompson wrote:
Kaz Kylheku <433-929-6894@kylheku.com> writes:
On 2024-03-05, Chris M. Thomasson <chris.m.thomasson.1@gmail.com>
wrote:
On 3/5/2024 2:27 AM, David Brown wrote:
On 05/03/2024 08:08, Lawrence D'Oliveiro wrote:
On Tue, 5 Mar 2024 00:03:54 -0600, Lynn McGuire wrote:
On 3/3/2024 11:43 PM, Lawrence D'Oliveiro wrote:
Did you know the life-support system on the
International Space Station was written in Ada? Not something
you would trust C++ code to, let’s face it.
Most of the Ada code was written in C or C++ and converted to
Ada for delivery.
Was it debugged again? Or was it assumed that the translation was
bug-free?
With Ada, if you can get it to compile, it's ready to ship :-)
Really? Any logic errors in the program itself?
Ariane 5 rocket incident of 1996: The Ada code didn't catch the
hardware overflow exception from forcing a 64 bit floating-point
value into a 16 bit integer. The situation was not expected by the
code, which was developed for the Ariane 4, or something like that.
A numeric overflow occurred during the Ariane 5's initial flight -- and
the software *did* catch the overflow. The same overflow didn't occur
on Ariane 4 because of its different flight profile. There was a
management decision to reuse the Ariane 4 flight software for Ariane 5
without sufficient review.
The code (which had been thoroughly tested on Ariane 4 and was known not
to overflow) emitted an error message describing the overflow exception.
That error message was then processed as data. Another problem was that
systems were designed to shut down on any error; as a result, healthy
and necessary equipment was shut down prematurely.
This is from my vague memory, and may not be entirely accurate.
That matches my recollection too.
*Of course* logic errors are possible in Ada programs, but in my
experience and that of many other programmers, if you get an Ada program
to compile (and run without raising unhandled exceptions), you're likely
to be much closer to a working program than if you get a C program to
compile. A typo in a C program is more likely to result in a valid
program with different semantics.
So close you can just feel it's a 100% correct and working program?
Didn't you notice the smiley in my comment? It used to be a running
joke that if you managed to get your Ada code to compile, it was ready
to ship. The emphasis is on the word "joke".
On 05/03/2024 21:51, Chris M. Thomasson wrote:
On 3/5/2024 1:01 AM, David Brown wrote:
On 04/03/2024 21:36, Chris M. Thomasson wrote:
On 3/4/2024 12:44 AM, David Brown wrote:
On 03/03/2024 23:01, Chris M. Thomasson wrote:
On 3/3/2024 12:23 PM, David Brown wrote:
On 03/03/2024 19:18, Kaz Kylheku wrote:
Embedded systems often need custom memory management, not
something that
the language imposes. C has malloc, yet even that gets disused
in favor
of something else.
For safe embedded systems, you don't want memory management at
all. Avoiding dynamic memory is an important aspect of
safety-critical embedded development.
You still have to think about memory management even if you avoid
any dynamic memory? How are you going to manage this memory wrt
your various data structures' needs....
To be clear here - sometimes you can't avoid all use of dynamic
memory and therefore memory management. And as Kaz says, you will
often use custom solutions such as resource pools rather than
generic malloc/free. Flexible network communication (such as
Ethernet or other IP networking) is hard to do without dynamic memory.
[...]
Think of using a big chunk of memory that never needs to be freed and
is just there per process. Now, you carve it up and store it in a
cache that has push and pop functions. So, you still have to manage
memory even when you are using no dynamic memory at all... Fair
enough, in a sense? The push and the pop are your malloc and free in
a strange sense...
I believe I mentioned that. You do not, in general, "push and pop" -
you malloc and never free. Excluding debugging code and other parts
useful in testing and developing, you have something like:

#include <stddef.h>    /* size_t, max_align_t */
#include <stdint.h>    /* uint8_t */
#include <stdalign.h>  /* alignas, alignof */

enum { heap_size = 16384 };

alignas(max_align_t) static uint8_t heap[heap_size];
static uint8_t * next_free = heap;

/* Never frees: the "allocate and never release" strategy. */
void free(void * ptr) {
    (void) ptr;
}

/* Bump allocator: round the request up to max alignment and advance. */
void * malloc(size_t size) {
    const size_t align = alignof(max_align_t);
    const size_t real_size = size ? (size + (align - 1)) & ~(align - 1)
                                  : align;
    void * p = next_free;
    next_free += real_size;    /* note: no check for heap exhaustion */
    return p;
}
Allowing for pops requires storing the size of the allocations
(unless you change the API from that of malloc/free), and is only
rarely useful. Generally, if you want memory that short-lived, you
use a VLA or alloca to put it on the stack.
wrt systems with no malloc/free I am thinking more along the lines of
a region allocator mixed with a LIFO for a cache, so a node based
thing. The region allocator gets fed with a large buffer. Depending on
specific needs, it can work out nicely for systems that do not have
malloc/free. The pattern I used iirc, was something like:
// pseudo code...
_______________________
node*
node_pop()
{
// try the lifo first...
node* n = lifo_pop();
if (! n)
{
// resort to the region allocator...
n = region_allocate_node();
// note, n can be null here.
// if it is, we are out of memory.
// note, out of memory on a system
// with no malloc/free...
}
return n;
}
void
node_push(
node* n
) {
lifo_push(n);
}
_______________________
make any sense to you?
I know what you are trying to suggest, and I understand how it can sound
reasonable. In some cases, this can be a useful kind of allocator, and
when it is suitable, it is very fast. But it has two big issues for
small embedded systems.
One problem is the "region_allocate_node()" - getting a lump of space
from the underlying OS. That is fine on "big systems", and it is normal that malloc/free systems only ask for memory from the OS in big lumps,
then handle local allocation within the process space for efficiency.
(This can work particularly well if each thread gets dedicated lumps, so that no locking is needed for most malloc/free calls.)
But in a small embedded system, there is no OS (an RTOS is generally
part of the same binary as the application), and providing such "lumps" would be dynamic memory management. So if you are using a system like
you describe, then you would have a single statically allocated block of memory for your lifo stack.
Then there is the question of how often such a stack-like allocator is useful, independent of the normal stack. I can imagine it is
/sometimes/ helpful, but rarely. I can't think off-hand of any cases
where I would have found it useful in anything I have written.
As I (and others) have said elsewhere, in small embedded systems and
safety or reliability critical systems, you want to avoid dynamic memory
and memory management whenever possible, for a variety of reasons. If
you do need something, then specialised allocators are more common -
possibly including lifos like this.
But it's more likely to have fixed-size pools with fixed-size elements,
dedicated to particular memory tasks. For example, if you need to track
multiple in-flight messages on a wireless mesh network, where messages
might take different amounts of time to be delivered and acknowledged,
or retried, you define a structure that holds all the data you need for
a message. Then you decide how many in-flight messages you will support
as a maximum. This gives you a statically allocated array of N structs.
Block usage is then done by a bitmap, typically within a single 32-bit
word. Finding a free slot is just finding the first zero bit, and
freeing a slot is clearing the correct bit.
There are, of course, many other kinds of dedicated allocators that can
be used in other circumstances.
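As a concrete reading of that bitmap scheme, here is an illustrative sketch (the message fields and the pool size are invented; the point is the static array plus the 32-bit usage word):

#include <stddef.h>
#include <stdint.h>

enum { max_messages = 32 };   /* must fit in the 32-bit usage bitmap */

typedef struct {
    uint16_t dest;
    uint8_t  retries;
    uint8_t  len;
    uint8_t  payload[64];
} message_slot;

static message_slot slots[max_messages];
static uint32_t slots_in_use;   /* bit i set => slots[i] is taken */

message_slot *alloc_slot(void) {
    for (int i = 0; i < max_messages; i++) {
        if (!(slots_in_use & (UINT32_C(1) << i))) {
            slots_in_use |= UINT32_C(1) << i;
            return &slots[i];
        }
    }
    return NULL;   /* every in-flight slot is busy - a designed-for
                      limit, not an out-of-memory surprise */
}

void free_slot(message_slot *m) {
    slots_in_use &= ~(UINT32_C(1) << (m - slots));
}

On many targets the find-first-zero loop can be replaced by a single count-trailing-zeros intrinsic, and the whole allocator is trivially analysable for worst-case behaviour.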
Another factor is that their service does not create/free that many
objects. The delay was caused by mere fact of GC scanning rather than
by frequent compacting of memory pools.
This suggests the language automatically takes care of this. But you
have to write your programs in a certain way to make it possible.
Continuously-compacting concurrent collectors like those available for
Java aim for less than 10ms, and often hit 1ms.
On Wed, 06 Mar 2024 14:30:58 +0000, aph wrote:
Continuously-compacting concurrent collectors like those available for
Java aim for less than 10ms, and often hit 1ms.
What ... a 1ms potential delay every time you want to allocate a new
object??
On 3/6/2024 5:46 PM, Lawrence D'Oliveiro wrote:
On Wed, 06 Mar 2024 14:30:58 +0000, aph wrote:
Continuously-compacting concurrent collectors like those available for
Java aim for less than 10ms, and often hit 1ms.
What ... a 1ms potential delay every time you want to allocate a new
object??
GC can be a no go for certain schemes. GC can be fine and it has its place.
It's a constructed language, which probably has no native speakers.
On 2024-03-07, Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
On 3/6/2024 5:46 PM, Lawrence D'Oliveiro wrote:
On Wed, 06 Mar 2024 14:30:58 +0000, aph wrote:
Continuously-compacting concurrent collectors like those available for
Java aim for less than 10ms, and often hit 1ms.
What ... a 1ms potential delay every time you want to allocate a new
object??
GC can be a no go for certain schemes. GC can be fine and it has its place.
It is the situations where GC cannot be used that are niches that have
their place. Everywhere else, you can use GC.
On Sun, 3 Mar 2024 22:11:14 -0000 (UTC), Blue-Maned_Hawk wrote:
Lawrence D'Oliveiro wrote:
On Sun, 3 Mar 2024 08:54:36 -0000 (UTC), Blue-Maned_Hawk wrote:
I do not want to live in a web-centric world.
You already do.
That does not change the veracity of my statement.
That doesn’t change the veracity of mine.
On Wed, 6 Mar 2024 12:28:59 +0000
bart <bc@freeuk.com> wrote:
"Rust uses a relatively unique memory management approach that
incorporates the idea of memory “ownership”. Basically, Rust keeps
track of who can read and write to memory. It knows when the program
is using memory and immediately frees the memory once it is no longer
needed. It enforces memory rules at compile time, making it virtually
impossible to have runtime memory bugs.⁴ You do not need to manually
keep track of memory. The compiler takes care of it."
This suggests the language automatically takes care of this.
Takes care of what?
AFAIK, heap fragmentation is as bad a problem in Rust as it is in
C/Pascal/Ada etc... In this aspect Rust is clearly inferior to GC-based
languages like Java, C# or Go.
On Thu, 7 Mar 2024 11:35:08 +0100
David Brown <david.brown@hesbynett.no> wrote:
On 06/03/2024 23:00, Michael S wrote:
On Wed, 6 Mar 2024 12:28:59 +0000
bart <bc@freeuk.com> wrote:
"Rust uses a relatively unique memory management approach that
incorporates the idea of memory “ownership”. [...]"
This suggests the language automatically takes care of this.
Takes care of what?
AFAIK, heap fragmentation is as bad a problem in Rust as it is in
C/Pascal/Ada etc... In this aspect Rust is clearly inferior to
GC-based languages like Java, C# or Go.
Garbage collection does not stop heap fragmentation. GC does, I
suppose, mean that you need much more memory and bigger heaps in
proportion to the amount of memory you actually need in the program
at any given time, and having larger heaps reduces fragmentation (or
at least reduces the consequences of it).
GC does not stop fragmentation, but it allows heap compaction to be a
built-in part of the environment. So, it turns heap fragmentation from
a denial-of-service type of problem into a mere slowdown, hopefully an
insignificant slowdown.
I don't say that heap compaction is impossible in other environments,
but it is much harder, esp. in environments where pointers are visible
to the programmer. The famous David Wheeler quote applies here at full
force.
Also, when non-GC environments choose to implement heap compaction
they suffer the same or bigger impact to real-time responsiveness as
GC. So, although I don't know it for sure, my impression is that
generic heap compaction is extremely rarely implemented in
performance-aware non-GC environments.
Performance-neglecting non-GC environments, first and foremost CPython,
can, of course, have heap compaction, although my googling didn't give
me a definite answer whether it's done or not.
On 06/03/2024 23:00, Michael S wrote:
On Wed, 6 Mar 2024 12:28:59 +0000
bart <bc@freeuk.com> wrote:
"Rust uses a relatively unique memory management approach that
incorporates the idea of memory “ownership”. Basically, Rust keeps
track of who can read and write to memory. It knows when the program
is using memory and immediately frees the memory once it is no longer
needed. It enforces memory rules at compile time, making it virtually
impossible to have runtime memory bugs.⁴ You do not need to manually
keep track of memory. The compiler takes care of it."
This suggests the language automatically takes care of this.
Takes care of what?
AFAIK, heap fragmentation is as bad a problem in Rust as it is in
C/Pascal/Ada etc... In this aspect Rust is clearly inferior to GC-based
languages like Java, C# or Go.
Garbage collection does not stop heap fragmentation. GC does, I
suppose, mean that you need much more memory and bigger heaps in
proportion to the amount of memory you actually need in the program at
any given time, and having larger heaps reduces fragmentation (or at
least reduces the consequences of it).
On 07/03/2024 12:44, Michael S wrote:
GC does not stop fragmentation, but it allow heap compaction to be
built-in part of environment.
No, GC alone does not do that. But heap compaction is generally done as part of a GC cycle.
Heap compaction requires indirect pointers.
On Wed, 6 Mar 2024 19:27:24 -0500, James Kuyper wrote:
It's a constructed language, which probably has no native speakers.
Not to be confused with Basic English, which was created, and copyrighted by, C K Ogden.
On 3/5/2024 4:25 PM, Lawrence D'Oliveiro wrote:
So, what is the right language to use?
It used to be a running joke that if you managed to get your Ada code to compile, it was ready to ship.
One of its requirements is that the articles be written in Basic
English as much as possible.
On Tue, 5 Mar 2024 22:01:01 -0800, Chris M. Thomasson wrote:
On 3/5/2024 4:25 PM, Lawrence D'Oliveiro wrote:
So, what is the right language to use?
Learn to use more than one.
On 2024-03-07, David Brown <david.brown@hesbynett.no> wrote:
On 06/03/2024 23:00, Michael S wrote:
On Wed, 6 Mar 2024 12:28:59 +0000
bart <bc@freeuk.com> wrote:
"Rust uses a relatively unique memory management approach that
incorporates the idea of memory “ownership”. Basically, Rust keeps
track of who can read and write to memory. It knows when the program
is using memory and immediately frees the memory once it is no longer
needed. It enforces memory rules at compile time, making it virtually
impossible to have runtime memory bugs.⁴ You do not need to manually
keep track of memory. The compiler takes care of it."
This suggests the language automatically takes care of this.
Takes care of what?
AFAIK, heap fragmentation is as bad a problem in Rust as it is in
C/Pascal/Ada etc... In this aspect Rust is clearly inferior to GC-based
languages like Java, C# or Go.
Garbage collection does not stop heap fragmentation. GC does, I
suppose, mean that you need much more memory and bigger heaps in
proportion to the amount of memory you actually need in the program at
any given time, and having larger heaps reduces fragmentation (or at
least reduces the consequences of it).
Copying garbage collectors literally stop fragmentation.
Reachable
objects are identified and moved to a memory partition where they
are now adjacent. The vacated memory partition is then efficiently used
to bump-allocate new objects.
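A compact sketch of what Kaz describes, for fixed-size cells with two references each (entirely illustrative: a real collector must also handle variable-sized objects, stack scanning and interior pointers):

#include <stddef.h>

enum { heap_cells = 256 };

typedef struct cell {
    struct cell *fwd;     /* forwarding pointer; NULL until copied */
    struct cell *ref[2];  /* references to other cells, or NULL */
    int value;            /* example payload */
} cell;

static cell space_a[heap_cells], space_b[heap_cells];
static cell *from_space = space_a, *to_space = space_b;
static size_t top;        /* bump pointer into from_space */

static cell *evacuate(cell *c) {
    if (!c) return NULL;
    if (c->fwd) return c->fwd;      /* already moved this cycle */
    cell *dst = &to_space[top++];   /* bump-allocate in to-space */
    *dst = *c;
    c->fwd = dst;                   /* leave a forwarding pointer */
    return dst;
}

/* Cheney's algorithm: copy everything reachable from the roots into
   to-space, rewriting the roots, then flip the two spaces. Any cell
   pointer not reachable from the roots is invalid afterwards. */
void collect(cell **roots, size_t nroots) {
    size_t scan = 0;
    top = 0;
    for (size_t i = 0; i < nroots; i++)
        roots[i] = evacuate(roots[i]);
    while (scan < top) {            /* breadth-first scan of the copies */
        cell *c = &to_space[scan++];
        c->ref[0] = evacuate(c->ref[0]);
        c->ref[1] = evacuate(c->ref[1]);
    }
    cell *t = from_space;
    from_space = to_space;          /* survivors now sit contiguously */
    to_space = t;                   /* at the bottom of the new space */
}

cell *new_cell(cell **roots, size_t nroots) {
    if (top == heap_cells) collect(roots, nroots);
    if (top == heap_cells) return NULL;   /* heap genuinely full */
    cell *c = &from_space[top++];
    c->fwd = NULL;
    c->ref[0] = c->ref[1] = NULL;
    c->value = 0;
    return c;
}

The allocation path is just a pointer bump, and because survivors are copied next to each other, fragmentation is removed on every cycle, at the cost of keeping half the heap in reserve.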
On Wed, 6 Mar 2024 14:34:50 +0100, David Brown wrote:
It used to be a running joke that if you managed to get your Ada code to
compile, it was ready to ship.
That joke actually originated with Pascal.
Though I suppose Ada took it to
the next level ...
On 07/03/2024 17:35, Kaz Kylheku wrote:
On 2024-03-07, David Brown <david.brown@hesbynett.no> wrote:
On 06/03/2024 23:00, Michael S wrote:
On Wed, 6 Mar 2024 12:28:59 +0000
bart <bc@freeuk.com> wrote:
"Rust uses a relatively unique memory management approach that
incorporates the idea of memory “ownership”. [...]"
This suggests the language automatically takes care of this.
Takes care of what?
AFAIK, heap fragmentation is as bad a problem in Rust as it is in
C/Pascal/Ada etc... In this aspect Rust is clearly inferior to
GC-based languages like Java, C# or Go.
Garbage collection does not stop heap fragmentation. GC does, I
suppose, mean that you need much more memory and bigger heaps in
proportion to the amount of memory you actually need in the
program at any given time, and having larger heaps reduces
fragmentation (or at least reduces the consequences of it).
Copying garbage collectors literally stop fragmentation.
Yes, but garbage collectors that could be usable for C, C++, or
other efficient compiled languages are not "copying" garbage
collectors.
Reachable
objects are identified and moved to a memory partition where they
are now adjacent. The vacated memory partition is then efficiently
used to bump-allocate new objects.
I think if you have a system with enough memory that copying garbage
collection (or other kinds of heap compaction during GC) is a
reasonable option, then it's unlikely that heap fragmentation is a
big problem in the first place. And you won't be running on a small
embedded system.
CPython does use garbage collection, as far as I know.
07.03.2024 17:36 David Brown wrote:
CPython does use garbage collection, as far as I know.
AFAIK CPython uses reference counting, i.e. basically the same as C++
std::shared_ptr (except that it does not need to be thread-safe).
With reference counting one only knows how many pointers there are to a
given heap block, but not where they are, so heap compaction would not
be straightforward.
Python also has zillions of extensions written in C or C++ (all of the
AI-related work, for example), so heap compaction of Python objects
alone might not be worth it.
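The counting half of that scheme, in C terms, might look like the sketch below (illustrative only; CPython's real Py_INCREF/Py_DECREF machinery is more involved and is supplemented by a separate cycle detector). The comment marks exactly the limitation Paavo points out: the count says how many references exist, not where they are.

#include <stdlib.h>

typedef struct refobj {
    size_t refcount;
    void (*destroy)(struct refobj *);  /* type-specific cleanup */
} refobj;

void retain(refobj *o) {
    o->refcount++;
}

void release(refobj *o) {
    /* We know *how many* references exist, but not *where* they are,
       so compaction could never rewrite them - the object must stay put. */
    if (--o->refcount == 0) {
        o->destroy(o);     /* immediate, deterministic cleanup */
        free(o);           /* freed right away - no collection pause */
    }
}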
On Fri, 8 Mar 2024 08:25:13 +0100
David Brown <david.brown@hesbynett.no> wrote:
On 07/03/2024 17:35, Kaz Kylheku wrote:
On 2024-03-07, David Brown <david.brown@hesbynett.no> wrote:
On 06/03/2024 23:00, Michael S wrote:
On Wed, 6 Mar 2024 12:28:59 +0000
bart <bc@freeuk.com> wrote:
"Rust uses a relatively unique memory management approach that
incorporates the idea of memory “ownership”. Basically, Rust
keeps track of who can read and write to memory. It knows when
the program is using memory and immediately frees the memory
once it is no longer needed. It enforces memory rules at compile
time, making it virtually impossible to have runtime memory
bugs.⁴ You do not need to manually keep track of memory. The
compiler takes care of it."
This suggests the language automatically takes care of this.
Takes care of what?
AFAIK, heap fragmentation is as bad a problem in Rust as it is in
C/Pascal/Ada etc... In this aspect Rust is clearly inferior to
GC-based languages like Java, C# or Go.
Garbage collection does not stop heap fragmentation. GC does, I
suppose, mean that you need much more memory and bigger heaps in
proportion to the amount of memory you actually need in the
program at any given time, and having larger heaps reduces
fragmentation (or at least reduces the consequences of it).
Copying garbage collectors literally stop fragmentation.
Yes, but garbage collectors that could be useable for C, C++, or
other efficient compiled languages are not "copying" garbage
collectors.
Go, C# and Java are all efficient compiled languages. For Go it was
actually a major goal.
Reachable
objects are identified and moved to a memory partition where they
are now adjacent. The vacated memory partition is then efficiently
used to bump-allocate new objects.
I think if you have a system with enough memory that copying garbage
collection (or other kinds of heap compaction during GC) is a
reasonable option, then it's unlikely that heap fragmentation is a
big problem in the first place. And you won't be running on a small
embedded system.
You sound like you are arguing for the sake of arguing.
Of course, heap fragmentation is a relatively rare problem. But when
you process 100s of 1000s of requests of significantly varying sizes
for weeks without interruption, then rare things happen with high
probability :(
In the case of this particular Discord service, they appear to
have the benefit of the size of requests not varying significantly, so
the absence of heap compaction is not a major defect.
BTW, I'd like to know if 3 years later they still have their Rust
solution running.
On 08/03/2024 11:57, Michael S wrote:
On Fri, 8 Mar 2024 08:25:13 +0100
David Brown <david.brown@hesbynett.no> wrote:
On 07/03/2024 17:35, Kaz Kylheku wrote:
On 2024-03-07, David Brown <david.brown@hesbynett.no> wrote:
On 06/03/2024 23:00, Michael S wrote:
On Wed, 6 Mar 2024 12:28:59 +0000
bart <bc@freeuk.com> wrote:
"Rust uses a relatively unique memory management approach that
incorporates the idea of memory “ownership”. [...]"
This suggests the language automatically takes care of this.
Takes care of what?
AFAIK, heap fragmentation is as bad a problem in Rust as it is in
C/Pascal/Ada etc... In this aspect Rust is clearly inferior to
GC-based languages like Java, C# or Go.
Garbage collection does not stop heap fragmentation. GC does, I
suppose, mean that you need much more memory and bigger heaps in
proportion to the amount of memory you actually need in the
program at any given time, and having larger heaps reduces
fragmentation (or at least reduces the consequences of it).
Copying garbage collectors literally stop fragmentation.
Yes, but garbage collectors that could be usable for C, C++, or
other efficient compiled languages are not "copying" garbage
collectors.
Go, C# and Java are all efficient compiled languages. For Go it was
actually a major goal.
C# and Java are, AFAIUI, managed languages - they are byte-compiled
and run on a VM. (JIT compilation to machine code can be used for
acceleration, but that does not change the principles.) I don't know
about Go.
On 08/03/2024 13:41, Paavo Helde wrote:
07.03.2024 17:36 David Brown wrote:
CPython does use garbage collection, as far as I know.
AFAIK CPython uses reference counting, i.e. basically the same as C++
std::shared_ptr (except that it does not need to be thread-safe).
Yes, that is my understanding too. (I could be wrong here, so don't
rely on anything I write!) But the way it is used is still a type of garbage collection. When an object no longer has any "live" references,
it is put in a list, and on the next GC it will get cleared up (and call
the asynchronous destructor, __del__, for the object).
On 08/03/2024 14:07, David Brown wrote:
On 08/03/2024 13:41, Paavo Helde wrote:
07.03.2024 17:36 David Brown kirjutas:
CPython does use garbage collection, as far as I know.
AFAIK CPython uses reference counting, i.e. basically the same as C++
std::shared_ptr (except that it does not need to be thread-safe).
Yes, that is my understanding too. (I could be wrong here, so don't
rely on anything I write!) But the way it is used is still a type of
garbage collection. When an object no longer has any "live"
references, it is put in a list, and on the next GC it will get
cleared up (and call the asynchronous destructor, __del__, for the
object).
Is that how CPython works? I can't quite see the point of saving up all
the deallocations so that they are all done as a batch. It's extra
overhead, and will cause those latency spikes that were the problem here.
In my own reference count scheme, when the count reaches zero, the
memory is freed immediately.
I also tend to have most allocations being of either 16 or 32 bytes, so
reuse is easy. It is only individual data items (a long string or long
array) that might have an arbitrary length that needs to be in
contiguous memory.
Most strings however have an average length of well below 16 characters
in my programs, so use a 16-byte allocation.
I don't know the allocation pattern in that Discord app, but Michael S
suggested there might not be lots of arbitrary-size objects.
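One hedged sketch of that kind of small-block recycling (the sizes and names are invented for illustration, not bart's actual allocator): freed 16- and 32-byte blocks go onto per-size free lists, chained through their own storage, and are reused before the general heap is touched.

#include <stddef.h>
#include <stdlib.h>

/* A recycled block is reused as its own free-list link. */
typedef union block { union block *next; char raw[1]; } block;

static block *free16, *free32;   /* free lists for the two size classes */

void *alloc_small(size_t n) {
    if (n <= 16) {
        if (free16) { block *b = free16; free16 = b->next; return b; }
        return malloc(16);
    }
    if (n <= 32) {
        if (free32) { block *b = free32; free32 = b->next; return b; }
        return malloc(32);
    }
    return malloc(n);   /* arbitrary sizes go to the real heap */
}

void free_small(void *p, size_t n) {
    block *b = p;
    if (n <= 16)      { b->next = free16; free16 = b; }
    else if (n <= 32) { b->next = free32; free32 = b; }
    else              free(p);
}

Because every block in a class is the same size, a freed block is always an exact fit for the next request in that class, which is why this layout barely fragments.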
On 08/03/2024 13:41, Paavo Helde wrote:
07.03.2024 17:36 David Brown wrote:
CPython does use garbage collection, as far as I know.
AFAIK CPython uses reference counting, i.e. basically the same as C++
std::shared_ptr (except that it does not need to be thread-safe).
Yes, that is my understanding too. (I could be wrong here, so don't
rely on anything I write!) But the way it is used is still a type of
garbage collection. When an object no longer has any "live" references,
it is put in a list, and on the next GC it will get cleared up (and call
the asynchronous destructor, __del__, for the object).
A similar method is sometimes used in C++ for objects that are
time-consuming to destruct. You have a "tidy up later" container that
holds shared pointers. Each time you make a new object that will have asynchronous destruction, you use a shared_ptr for the access and put a
copy of that pointer in the tidy-up container. A low priority
background thread checks this list on occasion - any pointers with only
one reference can be cleared up in the context of this separate thread.
With reference counting one only knows how many pointers there are to
a given heap block, but not where they are, so heap compaction would
not be straightforward.
Python also has zillions of extensions written in C or C++ (all of the
AI-related work, for example), so heap compaction of Python objects
alone might not be worth it.
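In plain C, the shape of that "tidy up later" container might look like the sketch below (illustrative; David describes it with C++ shared_ptr, and the graveyard name and the use of pthreads are my assumptions). Objects with expensive cleanup are pushed onto a list and destroyed later from a low-priority thread.

#include <pthread.h>
#include <stdlib.h>

typedef struct corpse {
    struct corpse *next;
    void (*destroy)(void *);   /* the expensive cleanup */
    void *obj;
} corpse;

static corpse *graveyard;
static pthread_mutex_t grave_lock = PTHREAD_MUTEX_INITIALIZER;

/* Called instead of destroying the object inline on the hot path. */
void defer_destruction(void *obj, void (*destroy)(void *)) {
    corpse *c = malloc(sizeof *c);
    if (!c) { destroy(obj); return; }   /* fall back to inline cleanup */
    c->destroy = destroy;
    c->obj = obj;
    pthread_mutex_lock(&grave_lock);
    c->next = graveyard;
    graveyard = c;
    pthread_mutex_unlock(&grave_lock);
}

/* Run occasionally from a low-priority background thread. */
void reap_graveyard(void) {
    pthread_mutex_lock(&grave_lock);
    corpse *c = graveyard;
    graveyard = NULL;
    pthread_mutex_unlock(&grave_lock);
    while (c) {
        corpse *next = c->next;
        c->destroy(c->obj);   /* expensive work, off the hot path */
        free(c);
        c = next;
    }
}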
On 3/6/2024 2:43 AM, David Brown wrote:
On 06/03/2024 20:50, Kaz Kylheku wrote:
On 2024-03-06, James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 3/6/24 09:18, Michael S wrote:
On Wed, 6 Mar 2024 13:50:16 +0000...
bart <bc@freeuk.com> wrote:
Whoever wrote this short Wikipedia article on it got confused too, as
it uses both Ada and ADA:
https://simple.wikipedia.org/wiki/Ada_(programming_language)
(The example program also includes 'Ada' as some package name. Since
it is case-insensitive, 'ADA' would also work.)
Your link is to "simple Wikipedia". I don't know what it is
exactly, but it does not appear as authoritative as real Wikipedia
Notice that in your following link, "en" appears at the beginning to
indicate the use of English. "simple" at the beginning of the above
link serves the same purpose. "Simple English" is its own language,
closely related to standard English.
Where is Simple English spoken? Is there some geographic area where
native speakers concentrate?
It is meant to be simpler text, written in simpler language. The target audience will include younger people, people with dyslexia or other
reading difficulties, learners of English, people with lower levels of education, people with limited intelligence or learning impediments, or simply people whose eyes glaze over when faced with long texts on the
main Wikipedia pages.
On 3/6/2024 2:18 PM, Chris M. Thomasson wrote:
On 3/6/2024 2:43 AM, David Brown wrote:[...]
This is a fun one:
// pseudo code...
_______________________
node*
node_pop()
{
// try per-thread lifo
// try shared distributed lifo
// try global region
// if all of those failed, return nullptr
}
On 08/03/2024 22:23, Chris M. Thomasson wrote:
On 3/6/2024 2:18 PM, Chris M. Thomasson wrote:
On 3/6/2024 2:43 AM, David Brown wrote:[...]
This is a fun one:
// pseudo code...
_______________________
node*
node_pop()
{
// try per-thread lifo
// try shared distributed lifo
// try global region
// if all of those failed, return nullptr
}
Just to be clear here - if this is in a safety-critical system, and your allocation system returns nullptr, people die. That is why you don't
use this kind of thing for important tasks.
On 3/9/2024 4:25 AM, David Brown wrote:
On 08/03/2024 22:23, Chris M. Thomasson wrote:
On 3/6/2024 2:18 PM, Chris M. Thomasson wrote:
On 3/6/2024 2:43 AM, David Brown wrote:[...]
This is a fun one:
// pseudo code...
_______________________
node*
node_pop()
{
// try per-thread lifo
// try shared distributed lifo
// try global region
// if all of those failed, return nullptr
}
Just to be clear here - if this is in a safety-critical system, and
your allocation system returns nullptr, people die. That is why you
don't use this kind of thing for important tasks.
In this scenario, a returned nullptr means the main region allocator is
out of memory. So, pool things up so that this never occurs.
It seems much more appropriate for Ada (though Pascal also had stricter checking and stronger types than most other popular languages had when
Pascal was developed).
What I'd like to know about is who keeps dialing the "harmonization"
efforts, which really must give grouse to the "harmonisation"
spellers ...
On Fri, 8 Mar 2024 21:36:14 -0800, Ross Finlayson wrote:
What I'd like to know about is who keeps dialing the "harmonization"
efforts, which really must give grouse to the "harmonisation"
spellers ...
Some words came from French and had “-ize”, others did not and had “-ise”.
Some folks in Britain decided to change the former to the latter.
“Televise”, “merchandise”, “advertise” -- never any “-ize” form.
“Synchronize”, “harmonize”, “apologize” -- “-ize” originally.
On 5/3/2024 9:51 pm, Mr. Man-wai Chang wrote:
On 3/3/2024 7:13 am, Lynn McGuire wrote:
"The Biden administration backs a switch to more memory-safe programming >>> languages. The tech industry sees their point, but it won't be easy."
No. The feddies want to regulate software development very much. They
have been talking about it for at least 20 years now. This is a very
bad thing.
A responsible, good programmer or a better C/C++ pre-processor can
avoid a lot of problems!!
Or maybe A.I.-assisted code analyzer?? But there are still blind spots...