susam
a day ago
The code in the post seems very similar to the one in my own post from 2010: https://susam.net/sequence-points.html
int a = 5;
a += a++ + a++;
I do remember that this particular code snippet (with a = 5, even) used to be popular as an interview question. I found such questions quite annoying because most interviewers who posed them seemed to believe that whatever output they saw with their compiler version was the correct answer. If you tried explaining that the code has undefined behaviour, the reactions generally ranged from mild disagreement to serious confusion. Most of them neither cared about nor understood 'undefined behaviour' or 'sequence points'.

I remember one particular interviewer who, after I explained that this was undefined behaviour and why, listened patiently to me and then explained to me that the correct answer was 17, because the two post-increments leave the variable as 6, so adding 6 twice to the original 5 gives 17.
I am very glad these types of interview questions have become less prevalent these days. They have, right? Right?
kentm
20 hours ago
IMO, the only reasonable answer if asked this in an interview is “I would not write code where I have to know the answer to this question”
These sorts of things are neat trivia to learn about things like sequence points but 99.9% of the time if it matters in your codebase you're writing something unmaintainable.
tzs
18 hours ago
> IMO, The only reasonable answer if asked this in an interview is “I would not write code where I have to know the answer to this question”
That's half of a reasonable answer. The other half is "but I do know the answer so if I see it when reviewing or working on someone else's code I can flag it or rewrite it, and explain to them why it is bad".
godelski
14 hours ago
> The other half is "but I do know the answer
Except you don't! If you claim to know the answer, you've made a grave mistake and fooled yourself.
If you ran the code through a compiler and used that to conclude "this is the answer" rather than "this is an answer", then now is a great time to learn how easy it is to fool yourself. You just need to ask yourself what assumptions you made. I'll wager you assumed all compilers process this line in the same way.
Or just RTFA, or Susam's, as that's exactly what they are about. They explain why this is undefined behavior.
| The first principle is that you must not fool yourself — and you are the easiest person to fool.
- Feynman
tzs
8 hours ago
> I'll wager you assumed all compilers process this line in the same way
You would lose that wager.
What I mean by "I do know the answer" is that I know that this is undefined behavior and why it is undefined behavior and that different compilers can give different results and also that even if I test the compiler I use to see what it does I can't count on that not changing any time the compiler gets updated.
1718627440
2 hours ago
> Except you don't!
Except you can, because "The answer is that this isn't a valid C program." is a sentence you can know.
benj111
6 hours ago
I think you're misinterpreting "I know the answer". The GP is suggesting rewriting it, so they know the issue.
IshKebab
18 hours ago
No it isn't. You don't need to know the answer to know that it is bad code. The very fact that it isn't clear shows that.
tialaramex
15 hours ago
Right, the feedback I'd expect in a code review interview is something like "This is unclear or wrong, write what you actually meant".
That's the feedback I would want, and it's the feedback I give to my colleagues in reviews. Actually I tend to be too verbose, so you might get a full paragraph explaining what the ISO document says and that you shouldn't assume it does whatever it is your compiler says.
My actual feelings for this specific case are that the language is defective, but if we're wedded to a defective language then the reviews need to call out such usage.
godelski
13 hours ago
> Actually I tend to be too verbose, so you might get a full paragraph explaining what the ISO document
I'm verbose too, but I love it when others are. Honestly, it's usually easy to triage (and I write to try to make it easy). I like verbosity because learning why means I not only won't make that mistake again but won't make any similar mistakes again. Verbosity isn't bad. Not everything needs to be a fucking tweet
1718627440
2 hours ago
If you know this code is bad but don't know that it is UB, I think you are rating code on feelings and cargo-culting.
atoav
6 hours ago
I mean the good answer is:
I am not sure this code would be interpreted the same way by different programmers and compilers alike, so I would never write it.
amelius
18 hours ago
You might still make a mistake, even if you think you know the answer. It's much better to instrument the code to figure it out, or write a short test program.
bestouff
17 hours ago
It's Undefined Behavior. So you can instrument all you want, the answer will still be wrong. You'll capture what your particular compiler does under some particular conditions (opt flags, surrounding code, etc.) but that will not be representative of what can happen in the general case (hint : anything can happen with UB).
tengwar2
17 hours ago
Not nasal demons in this case (https://groups.google.com/g/comp.std.c/c/ycpVKxTZkgw/m/S2hHd...): thaumasiotes shows that we can expect a numeric answer.
_kst_
16 hours ago
I don't see the name "thaumasiotes" at that link, nor do I see anything relevant to the code in the title.
The behavior of "int a = 5; a = a++ + ++a;" is undefined. There is no guarantee of a numeric result, because there is no guarantee of anything.
susam
16 hours ago
I believe they were referring to thaumasiotes's thread here: https://news.ycombinator.com/item?id=48141294
I think the objection thaumasiotes has raised there is valid and I have made an attempt to answer it as well in the same thread.
HarHarVeryFunny
14 hours ago
It's only the order of evaluation that is undefined.
_kst_
13 hours ago
No, the behavior is undefined. That means, quoting the ISO C standard, "behavior, upon use of a nonportable or erroneous program construct or of erroneous data, for which this document imposes no requirements".
A conforming implementation could reject it at compile time, or generate code that traps, or generate code that sets a to 137, or, in principle, generate code that reformats your hard drive. Some of these behaviors are unlikely, but none are forbidden by the language standard.
HarHarVeryFunny
an hour ago
I was wrong.
I was looking at this:
https://en.cppreference.com/cpp/language/eval_order
I'm not sure where precisely this sequencing exception to the default "eval order undefined" rule is given, but after the 24(!) sequencing rules they do give this "++i + i++" as an explicit example of undefined behavior.
Interestingly that page says that since C++17 f(++i, ++i) is "unspecified" rather than "undefined", whatever that means, and presumably plus(++i, i++) would be too, which seems a bit inconsistent.
afdbcreid
14 hours ago
Nope, there is no sequence point in the middle and modifying an object more than once between sequence points is undefined behavior.
amelius
17 hours ago
It doesn't matter if the answer is wrong. You run the test program and then replace the code by the answer. This basically weeds out the UB.
1718627440
2 hours ago
That's a valid approach, if you only use high-level language to generate assembly faster, and the assembly is your source of truth.
yongjik
17 hours ago
But since it is a UB, there's no guarantee that your test program produces the same result as the same code running on production, even if you have the same compiler.
amelius
17 hours ago
That's very unlikely, and in the worst case you've reduced a difficult bug into an easier to understand bug.
thaumasiotes
17 hours ago
> It's Undefined Behavior.
Susam's post doesn't make this clear. The quotes from K&R say that the modifications to the variable may take place in any order, but they don't directly say that doing this is Undefined Behavior, which would make it permissible to do anything, including e.g. interpreting the increments as decrements.
The C99 standard is quoted saying this:
>> Between the previous and next sequence point an object shall have its stored value modified at most once by the evaluation of an expression.
It's possible that something else in the standard defines noncompliance with this clause as Undefined Behavior. But that's not the most intuitive interpretation; what this seems to say, to me, is that the line of code `a = a++ + ++a` should fail to compile, because it's not in compliance with a requirement of the language. Compilers that produce any result at all are suffering from a bug.
(It seems more likely that the actual intent is to specify that, given the line of code `b = a++ + ++a`, with a initially equal to 5, the compiler is required to ensure that the value stored at the address of a is never equal to 6 - that it begins at 5, and at some indefinite point it becomes 7, but that there is no intermediate stage between them. But I find the 'compiler failure on attempt to put multiple modifications between two sequence points' interpretation preferable.)
vilhelm_s
15 hours ago
The "shall" in the standard means it's undefined behavior. This is explained in the "Conformance" section,
> 2. If a "shall" or "shall not" requirement that appears outside of a constraint is violated, the behavior is undefined. Undefined behavior is otherwise indicated in this International Standard by the words "undefined behavior" or by the omission of any explicit definition of behavior. There is no difference in emphasis among these three; they all describe "behavior that is undefined".
Compilers will not refuse to compile the code, indeed the blog post we are all commenting on reports the results from a bunch of different compilers. Historically the reason the C standard specified a lot of undefined behavior is that the actually existing C compilers at the time compiled the code but disagreed about the output.
thaumasiotes
14 hours ago
> Compilers will not refuse to compile the code, indeed the blog post we are all commenting on reports the results from a bunch of different compilers.
Yes, I see that. I just said they should refuse.
masklinn
9 hours ago
Because this specific UB is statically detectable (not usually the case), both gcc and clang will flag it if -Wsequence-point is enabled (and it is part of -Wall). (Technically the clang warning is -Wunsequenced, but it is aliased to the GCC name.)
edit: apparently -Wunsequenced is enabled by default, so clang should warn you out of the box.
Dylan16807
17 hours ago
Compilers are not able to prevent you from violating must/shall in the general case. So they're not held to that bar. Unless the standard says not to compile it, it's not a compiler bug.
Also, imagine a situation where the line of code actually lists three different variables, but all three of them are passed in by address. It quickly becomes impossible for the compiler to know you violated the spec by reusing the same variable. And even optimizations that make sense here could corrupt the value pretty badly and possibly lead to worse errors.
thaumasiotes
17 hours ago
> Also, imagine a situation where the line of code actually lists three different variables, but all three of them are passed in by address. It quickly becomes impossible for the compiler to know you violated the spec by reusing the same variable.
OK. What is the value of a spec to which compliance is impossible?
aidenn0
16 hours ago
The compiler does comply to the spec. It's the program that fails to comply with the spec. It's definitely possible to write programs that have no undefined behavior.
thaumasiotes
8 hours ago
The compiler is supposed to compile programs that comply with the spec, and not compile programs that don't.
The concept of "compiling a program that doesn't comply with the spec" doesn't even exist! A text file that doesn't comply with the C spec isn't a C program. That's what it means to be "the spec".
1718627440
2 hours ago
> The compiler is supposed to compile programs that comply with the spec
Yes.
> and not compile programs that don't.
No, there is no limitation on what a compiler does in this case.
> The concept of "compiling a program that doesn't comply with the spec" doesn't even exist!
It does, it is called "undefined behaviour".
> A text file that doesn't comply with the C spec isn't a C program.
That's the point. A program that contains UB is not a valid C program. That's what UB means.
Dylan16807
7 hours ago
> The concept of "compiling a program that doesn't comply with the spec" doesn't even exist!
Wrong. Lots of spec violations only happen at runtime and can't be predicted at compile time.
Here's an easy example. You're the compiler. I hand you what appears to be valid C code that allocates an array and then asks the user which slot to use. It doesn't verify the slot is in bounds, just puts a number in array[slot], does some math with it, and then prints the result. Does my program comply with the spec? Do you compile it?
tialaramex
6 hours ago
This is "Wrong" in the sense that C++ does really work like this, but it's not wrong in the sense that this is somehow unavoidably the case.
For example if you attempt an equivalent mistake in WUFFS that will be rejected.
Your WUFFS compiler will say this variable named slot must be a non-negative integer smaller than the length of the array, but as far as it can tell you didn't ensure that was true, therefore this code is nonsense, do better.
As I explained in my sister reply, in a broader context some of these are semantic properties and so there's a dilemma and C++ chooses to resolve that dilemma by accepting nonsense programs, but that wasn't the only available resolution and I am confident it's the wrong choice.
tialaramex
6 hours ago
No, there's a fun C++ talk - by I want to say Chandler Carruth - in which the speaker points out that C++ is a language defined to have false positives for the question "Is this a valid program?"
The mechanism in the ISO document is phrases of the form "Ill-formed, No Diagnostic Required", often shortened to IFNDR. Let's break that down. "Ill-formed" means this is not a valid C++ program. On its own that means the compiler should provide a diagnostic (an error message) explaining that your program isn't valid. For example, if the program text were to just consist of the word "fuck", that's ill-formed and will be diagnosed. "No Diagnostic Required" says that in this case, though, we don't require the compiler to report this problem.
Why do that? So originally there's a purely practical reason, but ultimately there's a philosophical one. C++ like C before it wants to translate many individual program files and then somehow cobble the resulting output into a single executable. So this means function A over here, using type T from a different file cannot know for sure about type T, instead C++ has a thing called the "One Definition Rule" which says you must somewhat define T each time it's needed, but all the definitions must be the same. What if you don't (by mistake or on purpose)? Well that will cause chaos, so, IFNDR.
Philosophically IFNDR is a way to resolve the dilemma from Rice's Theorem. Back in about 1950 this guy named Henry Rice got his PhD for proving that any non-trivial semantic property of a program is Undecidable. This isn't "Oh no, it's quite hard to do this" it's a straight up mathematical proof that it can't be done. Deciding reliably whether a program has any† semantic property isn't possible. Sometimes we're sure, and that's fine, but the dilemma is for the tricky cases: What do we do when we're not sure?
IFNDR is C++ choosing "Fuck it, it's fine" for this case. Maybe your program is nonsense, it might do absolutely anything, but you don't get even a warning from the compiler. This is Chandler's "false positive".
Rust chooses the opposite. When the compiler can't see why your program makes sense, it will be rejected; even if you and a room full of compiler experts agree it should work, too bad, it doesn't compile. You get a diagnostic explaining why your program was rejected.
† Trivial means either all programs have the property or none do and so isn't interesting. As a result the restriction to "non-trivial" properties isn't much help.
lmm
15 hours ago
> OK. What is the value of a spec to which compliance is impossible?
It lets you tell people you have a spec? It makes it easy for compiler developers to dismiss bug reports with "your code violated the spec"?
Dylan16807
16 hours ago
Welcome to C.
But more seriously it's the job of the program to not do undefined things.
lupire
11 hours ago
It's the job of a language designer to define everything.
Dylan16807
8 hours ago
C should do better about the things that could be readily defined, but there's no way to have arbitrary pointers and define everything.
thaumasiotes
8 hours ago
> but there's no way to have arbitrary pointers and define everything.
What's the undefined behavior in assembly?
Dylan16807
7 hours ago
Assembly is kind of at the crossroads of everything being defined and nothing being defined, when you consider things like writing random data to memory and executing it... But anyway here's the first thing I found to answer that: https://news.ycombinator.com/item?id=9578178
Probably more important, way too many things in assembly vary by exact model. Can you name a portable language that fits those criteria?
alterom
10 hours ago
>What is the value of a spec to which compliance is impossible?
Are you saying, what's the value of a language spec that allows undefined behavior, as C does?
Well, it's that it allows for compiler implementations that aren't too hard to implement and maintain.
It allows for a language that's close enough to hardware (and allows you to do programming on a low level), while still offering a reasonable amount of abstraction to be useful (and usable).
It's also difficult to define a formal system that won't have undefined expressions. Mathematics itself is full of them (in logic, "this sentence is a lie" has no truth value; you can't define the set of all sets, or the set of all sets that don't contain themselves; etc.).
That said, I think we've settled on a rather silly choice here with the "++" operator.
Personally, I'd do away with the ++ operator in either pre- or post- increment forms, or at least disallow it in arithmetic expressions.
The only thing having it realistically accomplishes is saving a few characters when writing a for-loop in C.
Even for that it's not necessary.
The problem with it is that, unlike normal arithmetic operators, it both returns a value and assigns one, which means that you can assign values to several variables in a single arithmetic expression, as in
a = b++;
...which C, in general, allows, as in: a = (b = b + 1);
The result of these two expressions, of course, is different. Now, I have the following religious belief: arithmetic operators shouldn't have side effects. That's to say, assignment and evaluation should be separate.
So that when I write
x = (arithmetic);
...I could be sure that the only outcome of this computation is changing the value of x. Perhaps calling the function sqrt(x) would summon Cthulhu; I'll read the documentation for it to be sure. But in general, I'd hope that calling abs(x) wouldn't change the value of x to |x| in addition to returning it.
But K&R decided to have fun by saying that "x = 5;" is both an assignment and an expression with a value. Which allows one to write:
x = y = z = 5;
as a parlor trick. That's it, that's the only utility.
Instead of defining this as a special initialization syntax and otherwise disallowing it (as Python does), they went YOLO and made assignment an expression rather than a mere statement.
Which means that the very useful statement "increase the value of this variable by one" became two expressions with different values.
In an ideal world, the following would be equivalent, and would not evaluate to anything you can assign to a variable:
++x;
x += 1;
x = x + 1;
...while "x++" would not exist at all (or would be equivalent to ++x). And that's how it is in Go. Thompson fixed the design mistake after 4-5 decades of it giving everyone headaches.
Sadly, C++, Java, C# all wanted to be "like C" in basic syntax, so we're stuck with puzzles like this to this day.
TL;DR: if you're asking "what's the value of the spec that makes assignment an expression", i.e. why is making "a = (b = c + d);" valid syntax a good idea, the answer is:
It isn't. It's a bad decision made in 1970s that modern languages like Go no longer support.
reichstein
6 hours ago
Assigning to multiple variables in a single expression is fine and useful. Take
    target[i++] = source1[j++] + source2[k++];
That's idiomatic; it shows the intent to read and consume the value in a single expression. You can write it longer, but not more clearly.
It's only when you assign to the same variable multiple times, or read it after it was assigned, that it introduces ordering issues.
A single `i++` or `++i`/`i += 1` is safe and useful.
alterom
4 hours ago
>A single `i++` or `++i`/`i += 1` is safe and useful
Sure, and you don't need the assignment to be an expression with a value for it to be useful.
>target[i++] = source1[j++] + source2[k++]; That's idiomatic
That's idiomatic to C for sure.
Also idiomatically horrible. Why are you using three index variables here?
>You can write it longer, but not more clearly.
target[i] = source1[i] + source2[i];
i++;
This is absolutely more clear to any sane person, and less prone to error. You can't forget to increment one of the indices when all three are meant to go in lockstep.
It's longer by one semicolon, and requires far less cognitive overload to parse.
There's a reason why they did away with it in Go. What do you think that reason was if it's so useful?
1718627440
2 hours ago
> Why are you using three index variables here?
> You can't forget to increase one if the indices when all three are meant to go in lockstep.
Obviously they are not in this example.
The next line might contain:
i++; j *= 42; k = srandom(k), random();
thaumasiotes
8 hours ago
> Well, it's that it allows for compiler implementations that aren't too hard to implement and maintain.
> It allows for a language that's close enough to hardware (and allows you to do programming on a low level), while still offering a reasonable amount of abstraction to be useful (and usable).
I can see the first of these. The second appears to be untrue; if you removed the concept of undefined behavior from C, it wouldn't get farther away from the hardware.
Is that first point actually something that somebody wants? Who benefits from the idea that it's easy to write a "standards-compliant" compiler, because you are technically "standards-compliant" whether you comply with the standard or not?
At that point, you've given up on having a standard, and the interviewers Susam calls out, who say that the correct answer is whatever their compiler says it is, are correct in fact. Susam is the one who's wrong, for reading the standard.
You can run a language that way just fine. I had the impression that Perl was defined by a reference implementation. But it's the opposite of having a standard.
alterom
4 hours ago
>The second appears to be untrue; if you removed the concept of undefined behavior from C, it wouldn't get farther away from the hardware
My understanding is that even common CPU instruction sets can have undefined behavior[1].
When C was written, the CPU architectures were more of a Wild West. It might have made sense to leave some parts up to the compiler authors on a particular architecture.
>Is that first point actually something that somebody wants?
When C was written — absolutely.
Portability of C code is almost taken for granted these days.
Things were different then. Portability was a big challenge.
All that said, this is my non-authoritative understanding of the reasons why it's a thing. Take it with a grain of salt.
>At that point, you've given up on having a standard
Sure. Just treat C as a family of languages which have a common standardized part.
Proprietary compiler extensions are/were common anyway, so that's not an unusual situation.
[1] https://www.os2museum.com/wp/undefined-isnt-unpredictable/
susam
16 hours ago
I searched K&R to see if there is any language that implies a += a++ + a++ to be undefined. I couldn't find anything. I found the following excerpt which is closest to what I claim, in spirit. But still, it does not explicitly spell out that an object must not be modified more than once between sequence points. From § A.7 Expressions:
> The precedence and associativity of operators is fully specified, but the order of evaluation of expressions is, with certain exceptions, undefined, even if the subexpressions involve side effects. That is, unless the definition of the operator guarantees that its operands are evaluated in a particular order, the implementation is free to evaluate operands in any order, or even to interleave their evaluation. However, each operator combines the values produced by its operands in a way compatible with the parsing of the expression in which it appears. This rule revokes the previous freedom to reorder expressions with operators that are mathematically commutative and associative, but can fail to be computationally associative. The change affects only floating-point computations near the limits of their accuracy, and situations where overflow is possible.
So I think, the text in K&R serves as warning against writing such code, at best. The C99 draft has more relevant language. From § 4. Conformance:
> If a "shall" or "shall not" requirement that appears outside of a constraint is violated, the behavior is undefined. Undefined behavior is otherwise indicated in this International Standard by the words "undefined behavior" or by the omission of any explicit definition of behavior. There is no difference in emphasis among these three; they all describe "behavior that is undefined".
This along with the § 6.5 excerpt already mentioned in my post implies a += a++ + a++ to be undefined. When I get some more time later, I'll make an update to my post to include the § 4. Conformance language too for completeness.
Thank you for the nice comment!
adrian_b
8 hours ago
The answers to some questions must be known in order to be able to write a correct program.
In the vast majority of the programming languages, the order of evaluation for the actual parameters passed to a function is undefined. In the few programming languages where the order of evaluation is defined, that is actually a mistake in the design of that programming language.
This is something about which any programmer must be well aware, because when composing function invocations it is very easy to write a function invocation where the result would depend on the order of evaluation of the expressions passed as actual parameters. The arithmetic operators are also function invocations, so that applies to them too.
chii
7 hours ago
> when composing function invocations it is very easy to write a function invocation where the result would depend on the order of evaluation of the expressions passed as actual parameters
this simply means your functions aren't pure functions and have side effects. If you rewrite those functions to not have side effects (including the ones being used to generate the parameters), there would be zero issues of this nature.
zeroq
13 hours ago
On one hand, I used almost this exact statement 25 years ago in my Flash (ECMAScript) tutorials to drive home the point of operator precedence.
I still believe it's a good slide if you want to teach. It's easy to fall for, easy to grasp, and easy to unroll all the rules - that is, if the rules are actually set in stone.
On the other hand, I've been through a couple of FAANG interviews, and twice I was presented with something similar; after I glanced at it for half a minute, the interviewer quickly proceeded to "a ha! you don't know! the interview is over, but I'm happy to tell you the right answer".
That part is not cool.
LPisGood
20 hours ago
In some sense, and without the interviewer knowing, that is actually a great scenario for an interview.
If you can convince someone in a position of authority that they’re wrong about something technical without upsetting them then you’re probably a good culture fit and someone who can raise the average effectiveness of your team.
rcxdude
20 hours ago
Or, also, in the reverse direction, if the interviewer is wrong about it and can't be convinced otherwise, it's probably not a great place to work.
bluGill
20 hours ago
I know I did recommend someone after the interview because I looked it up and they were right. Great person to work with. Though I fully understand why most would hesitate.
wat10000
18 hours ago
The best interview questions spawn discussions. This is a pretty good one for that. We could dive into what makes it UB, why a particular compiler might do it a certain way, what results we'd likely see from other compilers, and why the standard might say that this sort of thing is UB.
"What does this produce?" and expecting an answer of "17" is a bad question even if UB didn't mean the expected answer is wrong.
LPisGood
18 hours ago
I don’t work a ton with C, but I wonder how C programmers keep track of what behavior is and is not defined. It seems like there are many possible edge cases.
1718627440
2 hours ago
Personally, whenever I write a modifying statement, I think about the domains of the inputs and ensure that the condition necessary to stay in the valid range is evaluated. If it is not, I either write the condition, reduce the input domain, or increase the output domain.
lmm
14 hours ago
They don't. In the culture some kinds of undefined behaviour are taken seriously and some aren't. If you want to write code that "works", you emulate what popular performance benchmarks do (whether their code is undefined according to the standard or not), since those are the thing that C compiler developers actually care about.
wat10000
18 hours ago
We get by on a combination of matching patterns (any pointer cast gets a lot of scrutiny, for example), compiler warnings, tools like UBSan, debugging when things go wrong, and sheer dumb luck.
Having an understanding of how the code gets transformed into machine code helps. For this case, there's the basic idea that `a++` will boil down to three basic conceptual operations: fetch, add, and store, and those can be potentially interleaved with other parts of the statement. In something like `a++ + ++b` the interleaving doesn't affect the outcome no matter how it's done. In `a++ + ++a` the interleaving can affect the outcome, and that's your sign that something might be wrong.
Any memory safety issue in C code had to involve UB at some point. And you can see how prevalent those are, and deduce how not-particularly-great we are at keeping track of UB.
MaxBarraclough
17 hours ago
> Having an understanding of how the code gets transformed into machine code helps
I'm not sure about that. Knowing assembly is not a substitute for knowing how the language is defined. Sometimes C/C++ programmers with some assembly knowledge reason themselves into thinking that what they're asking of the language must have well-defined behaviour, when in fact it's undefined behaviour. It doesn't really matter whether interleaving order can change the output. (++i)++ is, apparently [0], undefined behaviour in C but has well defined behaviour in C++.
wat10000
15 hours ago
I don't mean assembly in this case, but something more like the compiler's view of the code. a++ can be broken down into more primitive operations, and might actually be, depending on how the compiler is implemented. The fact that the ordering of those more primitive operations with respect to other operations isn't very tightly constrained is something you'd just have to know about the language, I suppose.
IshKebab
18 hours ago
They don't really. In fact there are many things that are technically UB but are so common that compilers can't really treat them as UB. E.g. type punning via unions.
el_pollo_diablo
17 hours ago
Type punning via unions is not UB in C in general, but it is in C++ IIRC.
I write "in general" because, as with other forms of memory reinterpretation (memcpy or copy through a character type), evaluating a trap representation triggers UB.
Chaosvex
11 hours ago
The short version is that it's fine in C++ as long as you only read the member that was last written to or a char type.
1718627440
2 hours ago
And a slightly longer version is that there are three types involved: the type of access, the effective type of the object[0], and the type of the variable. The type of the variable matters only for compiler warnings; as long as the effective type and the type of access agree, it isn't UB.
[0] the C meaning of an object, not the C++ one
IcyWindows
15 hours ago
Yeah, undefined behavior just means not defined in the specification.
I would argue that most languages only have one compiler so it doesn't matter what is in the specification.
mike_hock
19 hours ago
Do you want a job at a place where someone who doesn't understand UB makes the hiring decisions?
grahamburger
18 hours ago
Sometimes, even in tech, you just need a job.
angry_octet
3 hours ago
In the land of the blind, the one eyed man is King.
ketzu
18 hours ago
I think your options are very limited if you look for places that have people that truly understand UB, even less so the hiring people.
chasd00
17 hours ago
Genuinely curious, so this is undefined behavior and depends on the compiler. I get that. Java, and other languages, can do these same operations but their compilers produce bytecode that runs on a virtual machine (JVM) compiled to machine code just-in-time. Would this same code in Java possibly yield different results based on the platform the JVM was running on because of the platform specific JIT compiler? Maybe that's part of the origin of the phrase "write once, test everywhere".
Karliss
17 hours ago
The UB comes from how the C++ standard defines expression sequencing, which is not relevant for Java. Languages other than C++ typically define such details more strictly, so there is no UB, or even a concept of UB. JIT compilers don't change this: any non-toy JIT generates native instructions directly or through an intermediate representation (instead of generating C++ text and passing it through a regular C++ compiler), both of which have much stricter semantics than what C++ guarantees.
dmoy
17 hours ago
> Would this same code in Java possibly yield different results based on the platform the JVM was running on because of the platform specific JIT compiler?
No, and it's also well defined in languages like C#.
If we're talking about this specific example at least. No sequence point issues like that in Java.
Crespyl
17 hours ago
It's been quite a while, but IIRC, in Java these statements actually do have a defined behavior.
The ++x is a "pre-increment", meaning the value of the variable is incremented prior to evaluating the expression, while the "post-increment" "x++" is the other way around: the expression evaluates to x, then x is incremented afterwards.
All expressions are left-to-right.
tredre3
17 hours ago
That behavior is inherited from C. The pre/post increment behavior is actually the same in every language that uses them. The priority of operation is also usually the same as well.
The reason the question is tricky is because those operators change the value of a as the full expression is progressively executed.
It's not immediately clear to me what the answer in Java would be.
Just take a++ + ++a for example:
If the value of `a` is hoisted by the JVM then it could be 5++ + ++5, so 5 + 6.
But if it's executed left to right and `a` is looked up every time, then it becomes 5++ + ++6, so 5 + 7.
reichstein
7 hours ago
The value of the variable is not hoisted by the Java compiler. (It's not the JVM that does this; the JVM only executes the byte code, which doesn't have that kind of ambiguity.)
The semantics of Java is not undefined on multiple assignments to the same variable in an expression, so it can't hoist something if it would change the outcome.
Now, I don't actually know what the outcome is, because I don't remember whether `a += e` reads the value of `a` before or after evaluating `e`. The code is still confusing and unreadable to humans, so you shouldn't write it, but the compiler behavior is not undefined.
And if your variable is accessed from multiple threads, it may be undefined which intermediate values might be seen.
1718627440
an hour ago
$ cat a.java
class a {
    public static void main (String[] args) {
        int a = 4;
        int b = a++ + ++a;
        System.out.println(b);
        System.out.println(a);
    }
}
$ javac a.java
$ java a
10
6
froh
17 hours ago
I'd be badly surprised if the JVM JIT went through C, so if this monstrosity is well defined in Java, it's well defined once, well defined everywhere.
but still, even if it is well defined, it was and remains, as GP points out, bad practice...
tete
18 hours ago
> I found such questions quite annoying because most interviewers who posed them seemed to believe that whatever output they saw with their compiler version was the correct answer.
Quite apart from the fact that, for most programmers, the job has nothing to do with knowing the outcome, because hopefully they'd never write something like it, or they'd clean it up. And IF they found it, they'd hopefully test it, given that it appears to be compiler dependent anyway.
jcalvinowens
16 hours ago
Both major compilers yell at you for this nowadays... it's pretty unforgivable IMHO for somebody to be asking it as an exam or interview question if the right answer isn't "undefined":
<source>:5:10: warning: multiple unsequenced modifications to 'a' [-Wunsequenced]
5 | a = a++ + ++a;
|
<source>:5:7: warning: operation on 'a' may be undefined [-Wsequence-point]
5 | a = a++ + ++a;
    |       ~~^~~~~~~~~~
p0w3n3d
19 hours ago
How many tennis balls can fit in a bus?
everyone
16 hours ago
The interviewer asking stuff like that is a good sign to leave immediately.
nine_k
12 hours ago
Maybe the interviewer seeks to hear something like "This is UB; this code needs to be rewritten and should not pass code review. What prevents you from using -Wall when compiling?"
thaumasiotes
17 hours ago
> I am very glad these types of interview questions have become less prevalent these days. They have, right? Right?
Are you referring to the type of interview questions where the question is ill-defined and no one should know the answer, or the type where the question is reasonable and well-defined, but the interviewer doesn't know the answer?
I had a phone screen with Google once where they asked how to determine the length of a stretch of contiguous 1s within an infinite array of 0s. I suggested that, given the starting index i, you can check the index i+2 and then repeatedly square it until you find yourself among the zeroes, after which you can do binary search to find the transition from ones to zeroes.
The interviewer objected that this will grow the candidate end index too quickly, and the correct thing to do is to check index i+1 and then successively double it until you find the zeroes. We moved on.
I passed that phone screen. But I still resent it, because I checked the math later and "successive squaring followed by binary search" and "successive doubling followed by binary search" take exactly the same amount of time.
susam
15 hours ago
I meant the latter. I think the question is fine. It can lead to a good discussion, similar to what we are having in this thread. It has been a long time (almost 20 years), but I remember that most interviewers who asked this seemed to be convinced that the output they had seen with their compiler version was the correct answer. What could be a nice and relevant discussion, especially considering that some classes of bugs and security issues result from it, was seen only as a trivia quiz by the interviewers, with the expectation of an answer that was incorrect, no less.
Your phone screen story is quite nice. When I read your question, I would have answered with successive doubling as well. In fact, I faced the same question at an AWS interview a long time ago. The question was mathematically the same question but formulated differently. I answered with the doubling solution too, which leads to an O(log n) time solution, asymptotically. Your interviewer's immediate objection to your squaring solution seems like a major failure in their intuition. When I read your solution, purely by intuition, that is, without resorting to any rigorous reasoning, I felt: wow, that's interesting, your solution would land on the zero region in merely O(log log n) time. Why didn't I think of it? I think your solution should spark interest rather than dismissal in a curious person. Of course, the binary search after that to find the exact transition point blows up the time consumed back to O(log n).
Once again, thanks for these really interesting comments!
thaumasiotes
8 hours ago
From first principles, it seems unlikely that interviewers selecting their own questions would be able to eliminate this class of question, since by definition they cannot know whether the answer they believe is correct really is correct or not.
I would be 100% behind a movement to replace interviewer freedom with externally-set, vetted questions.
SilasX
18 hours ago
Heh, one time when I got this style of question[1] (but for JavaScript), I took a glance at it and said "Um ... you really shouldn't write code like that." The interviewer replied, "Oh. Yeah. Fair point." And then went on to another question.
[1] By which I mean predicting the behavior of error-prone code that requires good knowledge of all the quirks of the language to correctly answer.
colechristensen
20 hours ago
>I am very glad these types of interview questions have become less prevalent these days. They have, right? Right?
I just refuse to do interviews like that any more.
vcdk
17 hours ago
Well... tried it on macOS using vanilla gcc, the results surprised me:
$ /bin/cat x.c; gcc -w -o x x.c; ./x
#include <stdio.h>
int main()
{
int a = 5;
a += a++ + a++;
printf("a = %d\n", a);
}
a = 18
Not what I expected.
This must be how it works:
- The first a++ expression results in 5, after it a = 6
- The second a++ expression results in 6, after it a = 7
- Only then the LHS a is evaluated for the addition-assignment, so we get: a = 7 + 5 + 6 = 18