GCC vs. CERT

CERT recently issued an unfortunate advisory about gcc.

The advisory is about an optimization which gcc introduced in gcc 4.2: given a pointer p, then a comparison of the form p + C1 < p + C2 can be reduced to C1 < C2. This is always valid in standard C and C++, because those languages say that if you have a pointer to an object, and you add an integer to that pointer, then the result must be a pointer to that same object, or must point just past the end of the object. Any other result is undefined. This means in particular that p + C can not wrap around the end of memory, so the compiler can ignore wrapping issues.
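
To make that concrete, here is a minimal sketch (hypothetical names, not code from the advisory) of the kind of test the optimization removes. The check is written in terms of pointer wrap-around: "buf + len < buf" is just "p + C1 < p + C2" with C2 equal to zero, so the compiler may fold it to "len < 0", which is never true for an unsigned length.

    #include <stddef.h>

    /* Hypothetical sketch: try to reject an oversized, untrusted length
       by hoping that buf + len wraps around the end of memory.  Because
       pointer overflow is undefined, gcc (and other compilers) may fold
       the comparison to "len < 0" and delete the check entirely. */
    int validate_len(char *buf, size_t len)
    {
      if (buf + len < buf)
        return 0;               /* intended overflow rejection */
      return 1;                 /* after folding, always reached */
    }

After the folding, validate_len always returns 1, no matter how large len is.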

Now, I don’t know who would have thought otherwise. Checking for overflow is always a good idea when validating user input. But checking for pointer overflow using wrapping arithmetic does not make sense. What matters is whether the offset is within the bounds of the object the pointer points at. A test which relies on wrapping overflow for pointer bounds seems rather strange.
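
The defined way to write that kind of validation (again only a sketch, with hypothetical names) compares the length against the space available in the object, so no pointer ever has to wrap:

    #include <stddef.h>

    /* Hypothetical sketch: "p" points into a buffer that ends at "end",
       and "len" is the untrusted count.  "end - p" is well defined when
       both pointers refer to the same object, so the test does not
       depend on wrapping behavior at all. */
    int fits_in_buffer(const char *p, const char *end, size_t len)
    {
      return len <= (size_t) (end - p);
    }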

However, if you do write such a test, you are indeed in trouble. It is perfectly reasonable for CERT to warn against doing so. gcc implements this optimization because it is very useful for determining how often a loop will iterate when the loop is written using a pointer. So people should be aware of it.
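
For example, in a pointer-based loop like the sketch below (hypothetical code, not taken from gcc), the rule that p + n cannot wrap is what lets the compiler conclude that the loop executes exactly n times, which in turn enables unrolling, vectorization, and similar transformations.

    /* Hypothetical sketch: since "p + n" cannot wrap around the end of
       memory, the compiler knows the loop runs exactly n iterations. */
    void clear_bytes(char *p, unsigned long n)
    {
      for (char *q = p; q < p + n; q++)
        *q = 0;
    }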

Where CERT went wrong is in only mentioning gcc in their advisory. In fact all popular optimizing compilers implement the same optimization. The CERT advisory suggests that you can avoid this problem by avoiding new versions of gcc. That is just plain incorrect. If you switch to a different compiler, you will have the same problem.

As the CERT advisory is written today, they warn that gcc 4.2 and later are a potential hazard, without warning about any other compiler. That seems very unfair and wrong. I certainly hope that CERT will correct or withdraw the advisory.

Also, CERT describes the problem in a rather vague manner, which has led several people to misunderstand it and think that the potential problem is even worse than it really is, i.e., that it might apply to code that people might actually write.

All that said, I am working on adding a warning to gcc for this case. I doubt it will ever be useful to any real programmer, but I hope that it will pacify CERT.


13 responses to “GCC vs. CERT”

  1. ncm

    My experience is that “security professionals” don’t generally write code, and often have opinions on software development practices that a thoughtful engineer would consider idiotic. Promotion of strlcpy() is an example.

  2. Pádraig Brady

    This sounds very similar to the signed int overflow optimization
    introduced recently, which, although I agree with it, caught me out.
    You explain that issue very well here: http://www.airs.com/blog/archives/120
    and mention that -Wstrict-overflow, which is included in -Wall, was added retroactively.

    In this case the user code is even more clearly invalid, I think, but again,
    since the code is being ignored, a warning should be printed.

  3. Ian Lance Taylor

    ncm: I dunno, strlcpy doesn’t seem so bad. It’s true that strncpy doesn’t reliably null terminate the string, so strncpy must always be followed by setting the null byte. Obviously strlcpy is not some sort of panacea, it’s just a tiny bit nicer to use. And obviously std::string is a much better choice.
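
    For instance, the usual idiom (just a sketch, with hypothetical names) is to follow strncpy with an explicit terminator:

    #include <string.h>

    /* Hypothetical sketch: strncpy does not write a terminating null
       byte when the source fills the whole limit, so the caller must
       set it explicitly. */
    void copy_string(char *dst, size_t dstsize, const char *src)
    {
      if (dstsize == 0)
        return;
      strncpy(dst, src, dstsize - 1);
      dst[dstsize - 1] = '\0';
    }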

    Pádraig: I agree that it’s reasonable for the compiler to issue a warning here. In practice it will be very rare for this warning to find a true positive. But since it’s clear that there is widespread misunderstanding of the standard notion of “undefined behaviour,” I think it’s a good idea to optionally warn about cases where the compiler relies on it.

  4. Christoph Bartoschek

    I wonder why the following program does not give a warning for the first if statement similar to the warning for the second statement:

    #include <stdio.h>
    #include <limits.h>

    int main(void) {
      char buf[1];
      unsigned pos = 1;
      int len = INT_MAX;

      if (buf+len= 0) printf("Bigger\n");
      return 0;
    }

    I used version 4.3 and:
    gcc -O2 -W -Wall -pedantic -Wstrict-overflow=5 fehler.c

    Generally I would like to have a warning for each code block that is optimized out and a possibility to disable the warning for specific code blocks.

  5. Ian Lance Taylor

    I haven’t actually tried your program, but it is yet another case. In standard C there is nothing you can add to a non-null pointer which will give you a null pointer.

    It doesn’t make sense to me to have a warning for every code block that is optimized out. It will happen too often; there will be too many false positives. Think about inline functions, for example, when called with constant arguments.

    I also don’t think a warning for the type of code in your example is useful. What real code would it help?

    There is an easy way to ensure that nothing gets optimized out: compile with -O0.

  6. Danilo Piazzalunga

    ncm: I never understood why strlcpy() is considered that bad. I agree with Ian in finding strlcpy() just a nicer strncpy().

  7. Christoph Bartoschek

    Somehow my program example got mangled through the posting process. I did not notice it earlier.

    Unfortunately you are right. Deciding which removed code deserves a warning is very hard.

  8. ncm

    strlcpy, in isolation, wouldn’t be so bad. What makes it actionably bad is “security professionals” insisting that every strcpy call be replaced by strlcpy.

    That said, it’s a bad design because it doesn’t pull its weight: it’s just as much work (and code) to check the result as it would have been to pass verified-safe arguments to memcpy. Furthermore, there’s nothing meaningful to do if it comes out wrong. It’s not so bad as strtok, which represents the extremum of standardized interface design stupidity. (I don’t doubt MS has surpassed it privately, but who could find out without home trepanation?)

  9. Knieper

    >The advisory is about an optimization which gcc
    >introduced in gcc 4.2
    No, it’s an old bug: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=27180

    >This means in particular that p + C can not wrap
    >around the end of memory, so the compiler can
    >ignore wrapping issues.
    >Now, I don’t know who would have thought otherwise.

    >What matters is whether the offset is within the
    >bounds of the object the pointer points at. A test
    >which relies on wrapping overflow for pointer bounds
    >seems rather strange.

    http://www.fefe.de/openldap-mail.txt

  10. Ian Lance Taylor

    ncm: strlcpy lets you write sloppy code faster without introducing buffer overflow. I guess I do have to agree that any fully correct use of strlcpy can be replaced by memcpy. String truncation at a relatively arbitrary point is only rarely the right thing to do.

  11. Ian Lance Taylor

    Knieper: thanks for the example. It’s almost plausible. Of course it’s better to rewrite the incorrect test as

    if (len > max - ptr)
    return EINVAL;

    rather than to rely on wrapping overflow. (Let’s not forget that the C language standard specifically says that pointer overflow is undefined, and that it will not work as you expect on a segmented memory architecture).

    You’re right, gcc PR 27180 was an earlier iteration of this. It was made to work correctly, that is, to optimize those comparisons only when appropriate, with gcc PR 27039.

  12. dbremner

    Russ Cox reported this as a vulnerability to CERT. The original broken code is here.
    I’m disappointed by CERT’s response.

  13. Ian Lance Taylor

    Thanks very much for the pointer.
