Allocating memory with 'new', new(std::nothrow), testing -fcheck-new

Status
Not open for further replies.

luni

Well-known member
Triggered by this https://forum.pjrc.com/threads/6820...ill-a-bad-idea?p=288694&viewfull=1#post288694 I was doing a few experiments with detecting out-of-memory conditions with operator 'new' and stumbled over some very weird behavior. The code below tries to allocate chunks of 150 kB using new, new(std::nothrow), and malloc. It checks the returned pointer and stops if it gets a nullptr.

Might be some strange cache effect?

Code:
#include "Arduino.h"
#include <memory>

void setup()
{
    while (!Serial) {}

    if (CrashReport)
        Serial.println(CrashReport);
}

void loop()
{
    uint8_t* ptr = new uint8_t[1024 * 150];            // CRASHES without -fcheck-new
    //uint8_t* ptr = new (std::nothrow) uint8_t[1024 * 150];  // OK
    //uint8_t* ptr = (uint8_t*) malloc(1024 * 150);      // OK

    Serial.printf("Addr: %p\n", ptr);

    if (ptr)
    {
        Serial.print("  try write...");
        ptr[0] = 42;
        Serial.println("  ok, still alive\n");
    }
    else
    {
        Serial.printf("  OUT OF MEMORY  %p\n", ptr);
        while (true) yield();
    }
}

This prints:

Code:
Addr: 0x20203068
  try write...  ok, still alive

Addr: 0x20228870
  try write...  ok, still alive

Addr: 0x2024e078
  try write...  ok, still alive

Addr: 0x0
  try write...

So, 'new' seems to return a nullptr; Serial.printf prints it correctly, but the if clause is not able to detect it?? Thus the code writes to the null pointer and crashes, and CrashReport correctly complains about a write to a null pointer.
I've never seen such a thing. As mentioned in the linked thread above, new tries to throw an exception, so I'd expect a crash, but not this weird behavior. If you don't write to the null pointer, the program doesn't even crash, but the if clause still doesn't detect the null pointer.

If you use the new(std::nothrow) version or malloc, everything works as expected. In those cases the if clause detects the null pointer as it should.

Using the nothrow version of new is a bit tedious, and I assume that most libraries don't do this. The gcc compiler flag -fcheck-new seems to fix this: with it, all three versions return a checkable nullptr when out of memory and everything works as expected. The description of this flag is a bit confusing, but it might be worth testing in the next beta version.

-fcheck-new
Check that the pointer returned by operator new is non-null before attempting to modify the storage allocated. This check is normally unnecessary because the C++ standard specifies that operator new only returns 0 if it is declared throw(), in which case the compiler always checks the return value even without this option. In all other cases, when operator new has a non-empty exception specification, memory exhaustion is signalled by throwing std::bad_alloc. See also ‘new (nothrow)’.
 
I wouldn't call it a bug. Actually, I expected the crash, since new is supposed to throw a bad_alloc exception, which clashes with the fact that the sketch is compiled with -fno-exceptions (which is good!).
The strange thing is that the pointer returned by 'new' prints as 0x0, but a simple if clause doesn't interpret it as 0. Probably this is just an example of what can happen if one provokes undefined behavior (happy that it didn't make my computer explode :) )

Best to use new(std::nothrow), since it is guaranteed not to throw. The -fcheck-new compiler flag also seems to work, but its description doesn't state that it prevents new from throwing, so that might be just another pitfall.
 
Might be some strange cache effect?
It's just the optimizer doing its job. The standard requires that `new` always returns a non-null pointer, so the optimizer can just throw away the entire `else` branch.
[expr.new.16]
Note: Unless an allocation function has a non-throwing exception specification (18.4), it indicates failure to allocate storage by throwing a std::bad_alloc exception (6.6.4.4.1, Clause 18, 21.6.3.1); it returns a non-null pointer otherwise.
See it for yourself on https://godbolt.org/z/aKoYqPKjr:
Code:
#include <cstdint>

void still_alive();
void out_of_memory();

void loop() {
  uint8_t *ptr = new uint8_t[1024 * 150];
  if (ptr)
    still_alive();
  else
    out_of_memory();
}
Code:
loop():
        push    {r4, lr}
        bl      still_alive()
        pop     {r4, lr}
        bx      lr

I'm not sure how useful adding the -fcheck-new flag will be for existing libraries, since no one ever checks the return value of new anyway, not even the standard library. At that point you're probably better off just enabling exceptions.
With -fno-exceptions, it might be a good idea to add -fcheck-new as well, even if it's just for (non-standard) user code, so the option to check the result is there if anyone needs it.
 
He, he, that's cool indeed. Thanks for checking.
So, still the same conclusion, right? If you want to check for an out-of-memory condition, use new(std::nothrow).
 
He, he, that's cool indeed. Thanks for checking.
So, still the same conclusion, right? If you want to check for an out-of-memory condition, use new(std::nothrow).

I had seen the same thing in a test and couldn't understand it. Thanks for investigating.
 
> no one ever checks the return value of new anyway, not even the standard library ... probably better off just enabling exceptions

So if one uses a Teensy and the standard library, is this the best alternative to (I assume) the default who-knows-what behavior in out-of-heap and excessively-fragmented-heap scenarios?

Is modifying cores/teensy*/new.cpp to check for a null return from malloc() a viable alternative?
 
Is modifying cores/teensy*/new.cpp to check for a null return from malloc() a viable alternative?

I really don't know which way is best, but I would like to improve the C++ new situation for 1.56. Until today I didn't even know the compiler offered a -fcheck-new flag. Very interesting....

Another option which is on the table for future Teensyduino would be recompiling newlib. The copy we're using today (when not optimizing for smallest code size) appears to have exception support compiled in.
 
So, still the same conclusion, right? If you want to check for an out-of-memory condition, use new(std::nothrow).
Indeed, if you need to handle out-of-memory conditions on a microcontroller with exceptions disabled, use new(std::nothrow).

Is modifying cores/teensy*/new.cpp to check for a null return from malloc() a viable alternative?
I think that's a good idea. Returning null is not allowed and does indeed seem to cause problems, as shown above, so maybe calling __throw_bad_alloc() (or abort()) is the best way to terminate “gracefully”, even with exceptions disabled.
If the user doesn't handle it correctly (by either enabling exceptions or using new(std::nothrow)), what else can you do in an out-of-memory condition?

The copy we're using today (when not optimizing for smallest code size) appears to have exception support compiled in.
Is that a good thing or a bad thing? :)
Do you plan to keep exceptions disabled on the Teensy 4.x as well? With such a powerful CPU and megabytes of memory, the overhead caused by exceptions might be acceptable.

Is there a specific reason for defining new/delete with the Teensy core itself? Doesn't the toolchain include these? Or are they left out by the -nostdlib option?
 