>No argument
You know you won the argument when the other guy starts using ad hominem.
You morons do realize that this is simply UB if it overflows and the compiler optimizes it out (as the standard dictates) because it assumes UB never happens, right?
Yes, C chose to make this UB because it allows for better optimisation in the 99% of cases where you want your integers to act like integers and don't care about exactly what happens when they overflow
1 month ago
Anonymous
x + 1 < x has no other use case than checking for overflow wrapping.
1 month ago
Anonymous
So you want overflow on addition to be well defined only if the result of the addition is later used in a comparison with the original value? Do you really think that's the best idea instead of, idk, checking i == INT_MAX?
1 month ago
Anonymous
>So you want overflow on addition to be well defined only if the result of the addition is later used in a comparison with the original value?
Yes, as it incurs no additional check. Your hardware already does modular arithmetic using 2's complement.
1 month ago
Anonymous
Checking i==INT_MAX is better because it can be vectorized.
1 month ago
Anonymous
Nobody's talking about vectorization. It's not general enough; wrapping_add() takes any value, not just 1.
1 month ago
Anonymous
>Nobody's talking about vectorization
I am because I care about optimal code.
Checking overflow flags is garbage long dependency chain serial bloat.
1 month ago
Anonymous
It is already optimal. And wrapping_add calls can be also vectorized. Not sure why you are trying to change the topic here.
1 month ago
Anonymous
Did you watch the video I linked? Just because your cpu is doing 2's complement arithmetic, doesn't mean that it's doing it on a register of the correct width to get 32 bit (or less) wrapping behavior. In this specific case, it is possible to get the "desired" behavior, but in general defining overflow will result in worse assembly.
1 month ago
Anonymous
That would affect modular arithmetic in general, not just overflow checks such as this.
It actually is "UB" in x86.
Assembly has no types, so there is no distinction between unsigned and signed integers.
There are instructions like imul and idiv for signed multiplication and division respectively, but there is no "iadd" or "isub" instruction.
INT_MAX+1: you get back 32 bits and the carry flag is not tripped (the overflow flag, which tracks signed overflow, is). The add instruction itself is sign-agnostic.
From the perspective of C, you can either:
1. Convert the result bits to the integer type leading to wrap around
2. Decide the result is unsigned, then upcast all the arguments in the less than comparison, and then downcast to the function signature.
As you can see, if you do (2), the function always returns false.
Important to note: cmp also doesn't care about signed vs. unsigned; you have to use jb/ja (for unsigned) or jl/jg (for signed).
Thus you have two instructions (add, cmp) that don't correspond to types in assembly. Hence UB.
Unsigned integer overflow is well defined in C.
1 month ago
Anonymous
>It actually is "UB" in x86.
Hardware has no undefined behavior. RTFM.
1 month ago
Anonymous
That's why I put it in quotes.
Sure, everything is defined in hardware, however hardware has no negative numbers.
>closer to the hardware
let this thread prove once and for all that rust trannies have absolutely no idea how compilers work.
>screenshot from cringe rust troony that keeps being btfo in every thread
my sides. rust trannies are an absolute joke. have a nice day, thanks.
The C version doesn't know that your numbers are 32-bit long, are represented using two's complement and wrap around on overflow. If you want guaranteed modulo arithmetic, use unsigned integers.
The Rust code does specify this, hence the different compiler behavior.
>The C version doesn't know that your numbers are 32-bit long, are represented using two's complement and wrap around on overflow.
No you tard, it's simply UB if it were ever true, so the compiler discards this code entirely
>writes UB
>gets btfod by compiler opti
>doesnt understand why, even if explained
so this is the power of rust, huh?
compilers are very smartly written, but they cannot account for raw, powerful moronation.
for these cases, use "volatile", you mong. it tells the compiler to leave the variable accesses unoptimized
"Closer to the hardware" is imprecise and unhelpful.
Rust is more willing to pin down language behavior, which makes it easier to take advantage of hardware details if your hardware actually matches the language (which to be fair it definitely does in the case of two's complement).
This is both because of Rust's safety obsession and because hardware was more varied back when C was standardized.
C makes signed integer overflow undefined behavior, which means that strictly speaking a program where such overflow happens is incorrect and you can't rely on any behavior. Many compilers are more lenient and others offer workarounds, but there's no uniformity. The standards committee is only now getting around to providing a standardized way to deal with this (see pic related).
In general it's more efficient to check for overflow after performing an operation, not before (especially on hardware with a dedicated overflow flag like x86, but I believe this is true regardless). So with signed overflow being UB and without stdckdint.h it's basically impossible to check for overflow in a way that's both portable and efficient, you'd have to write platform-specific code.
That's not what volatile is for. It's still UB even if it happens to work on your system.
>volatile is for
unless im mistaken, its exactly to tell the compiler to NOT try anything too smart for its own good, concerning a specific variable.
which boils down to modifying variable accesses.
but yeah, volatile is not the equivalent of "unsafe" in rust if thats what you mean.
also not all ub are equal.
some, like bit-level operations on floats, are UB only within C. just bc floats are defined elsewhere (IEEE 754).
and thats when an UB works on a computer.
in a way UB only means "implementation-defined"
It doesn't just boil down to modifying variable accesses, it's only about variable accesses and nothing else.
(Rust implements read_volatile() and write_volatile() functions for pointers and doesn't have it as a type qualifier. This is enough to get something equivalent.)
In the OP's case it can't assume that the first read of x matches the second read of x. But that's only because there are two accesses.
Look at pic related, with x + 1 == INT_MIN:
- By default it'll still break by assuming that overflow can't happen.
- Even with -fwrapv it skips the arithmetic and checks the value of x directly, so it didn't disable all optimizations.
The only thing volatile accomplishes for this program is touching memory, making it less efficient for no benefit.
Basically the only valid use case for volatile is when accessing memory is how you talk to your hardware. (Some people use it for multithreading but that isn't actually correct either.)
>but yeah, volatile is not the equivalent of "unsafe" in rust if thats what you mean.
I'm not sure how it would be, unsafe doesn't directly involve the optimizer very much.
1 month ago
Anonymous
lel autism is strong with you (not in a good sense)
youre wrong about what volatile does to the code.
try it out.
and it is used with signal handlers (bc mutexes arent the only way to deal with threading)
>by default it assumes you wont overflow
yes, because op uses int instead of unsigned ints. which yields UB instead of a controllable overflow.
its in the standard
> -fwrapv
the what?
before this discussion i didnt even know that option existed.
also wtf for? you got modern idiomatic ways to deal with overflow, why do it by hand?
and why provide a global option that goes against the standard?
it sounds like a broken feature, left broken bc the only two people who know it exists are you and the guy who wrote it.
and wth do you mean by "touching the memory making it less efficient"?
1 month ago
Anonymous
>youre wrong about what volatile does to the code.
>try it out.
Try what? I already showed you an example. Did you look at the picture?
>and it is used with signal handlers
That might be reasonable yeah.
>(bc mutexes arent the only way to deal with threading)
You really want proper atomics for that, volatile isn't enough.
>>by default it assumes you wont overflow
>yes, because op uses int instead of unsigned ints. which yields UB instead of a controllable overflow.
>its in the standard
That's my point yeah, which is why
>writes UB >gets btfod by compiler opti >doesnt undertand why, even if explained
so this is the power of rust, huh?
compilers are very smartly written, but they cannot account for raw, powerful moronation.
for these cases, use "volatile", you mong. it tells the compiler to leave the variable accesses unoptimized
is bad advice.
>> -fwrapv
>the what?
>before this discussion i didnt even know that option existed.
It tells the compiler to use wrapping signed arithmetic, effectively taking away the UB. There's also -ftrapv which generates traps.
>also wtf for? you got modern idiomatic ways to deal with overflow, why do it by hand?
I'm just demonstrating how the compiler handles things, I posted better solutions in
"Closer to the hardware" is imprecise and unhelpful.
Rust is more willing to pin down language behavior, which makes it easier to take advantage of hardware details if your hardware actually matches the language (which to be fair it definitely does in the case of two's complement).
This is both because of Rust's safety obsession and because hardware was more varied back when C was standardized.
C makes signed integer overflow undefined behavior, which means that strictly speaking a program where such overflow happens is incorrect and you can't rely on any behavior. Many compilers are more lenient and others offer workarounds, but there's no uniformity. The standards committee is only now getting around to providing a standardized way to deal with this (see pic related).
In general it's more efficient to check for overflow after performing an operation, not before (especially on hardware with a dedicated overflow flag like x86, but I believe this is true regardless). So with signed overflow being UB and without stdckdint.h it's basically impossible to check for overflow in a way that's both portable and efficient, you'd have to write platform-specific code.
That's not what volatile is for. It's still UB even if it happens to work on your system.
.
>and why provide a global option that goes against the standard?
Because it's the behavior many people expect. Some people were already relying on it before standardization. Many compilers do it by default, including MSVC.
The standard permits it of course.
-fno-strict-aliasing is a similar flag that's pretty popular, used by e.g. Linux.
>it sounds like a broken feature, left broken bc the only two people who know it exists are you and the guy who wrote it.
No.
>and wth do you mean by "touching the memory making it less efficient"?
Look at the assembly. It writes the register to memory and then reads from memory and immediately discards the result, wasting time accomplishing nothing at all. It has to do that because that's what volatile means but it's useless.
1 month ago
Anonymous
im not fluent in asm at all, but i made an effort and now i see
picrel.
its your way of writing the algo thats the problem.
the compiler tries to do INT_MIN - 1 inside an int and shits itself.
if you write it in an alternative manner you get correct code.
but you gotta keep the volatile keyword or the compiler shits itself again.
and i have no idea why this time.
the xor op must be there to zero the register i figure
>You really want proper atomics for that
i think you dont need em.
you can do the mutual exclusion with your signaling.
your signal handler normally stops the execution of your program at once.
execution which resumes once your signal handler returns
1 month ago
Anonymous
>dumb moron thinks volatile makes signed integer overflow not UB
dumb moron
1 month ago
Anonymous
>anal autist btfo by reading comprehension
[...]
is that way
1 month ago
Anonymous
Not who you're responding to, but you're missing the point. Doing an addition and THEN checking for overflow is always undefined behavior for signed integers in C, and your workarounds don't change that. You've found something that happens to work on your compiler, but it is still UB.
To do this right, you have to check if the addition will overflow with comparisons first and only then do the addition. In this case, you just have to check if num == INT_MAX like in
OP is fricking moronic.
/thread
.
1 month ago
Anonymous
>you're missing the point
nah, the point was to dissect compiler behaviour
but now that you reminded me its fundamentally UB, i might have been too optimistic with the tickets and such
1 month ago
Anonymous
INT_MIN is -2147483648
dumb moron
[...]
geg we found a bug in gcc and the solution for it.
who said shitposting is a waste of time?
anyhoo, someone make a ticket about this issue
im unhireable anyways, i have no use for the accolade
>we found a bug in gcc
dumb moron
1 month ago
Anonymous
>INT_MIN is -2147483648
>dumb moron
now thats missing the point.
gorilla Black person is lower on the scale than a dumb moron, right?
1 month ago
Anonymous
>the compiler tries to do INT_MIN - 1 inside an int and shits itself.
No, that's not the reason. The compiler knows what it's doing.
The problem is that (according to the standard) there's no x for which x + 1 == INT_MIN. Doesn't matter whether x is volatile. Go read what the standard has to say about volatile, it doesn't have anything to do with arithmetic, only memory accesses and nothing else.
>if you write it in an alternative manner you get correct code.
>but you gotta keep the volatile keyword or the compiler shits itself again.
>and i have no idea why this time.
Like I said, that only happens because you're loading num twice. It can't assume that its first load of num matches the second load of num. This happens to prevent the optimization but it has nothing to do with arithmetic.
Like, the compiler assumes that its first load of num might be 3 while its second load of num might be 8. And since 3 + 1 < 8 it should return true. So it decides it should actually perform the arithmetic and the check. But this is just a side effect, you're not attacking the root cause.
>i think you dont need em.
>you can do the mutual exclusion with your signaling.
For threading without mutexes I mean. Volatile doesn't take memory ordering into account so you can still get impossible results depending on the whims of your hardware and your compiler. (I know less about signal handlers.)
1 month ago
Anonymous
>it doesn't have anything to do with arithmetic, only memory accesses and nothing else.
yeah i know
but the memory accesses of a variable have a bearing on code dependent on said accesses.
if you do computations only to ignore the end value, optimization will drop the whole code associated with the variable youre working on.
so yes, its all about accessing variables, but that accessing or lack thereof has consequences too.
>side effect
yeah, of course
>signal handlers
you might want to look into em. they have a couple interesting properties which can come in handy in thread synchronization.
like pausing the execution of the process youre signalling.
im not sure how that would translate to its child processes tho
testing needed i guess...
also signalling sucks on macos, but to notice you really have to push them to the limit (i played around with signals at school, where we had to create a client and a server that were supposed to communicate only using two signals. macos had the bad habit of not doing what is described in the manpages when it comes to signals)
1 month ago
Anonymous
macos has xpc and mach message
1 month ago
Anonymous
myeah...
neither actually matches signal specificities.
xpc appears too archaic, mach appears too complex.
and neither is an excuse to provide a broken feature (bc thats what signalling on macos is. broken)
especially when you market your shit like apple does.
1 month ago
Anonymous
>you might want to look into em. they have a couple interesting properties which can come in handy in thread synchronization.
>like pausing the execution of the process youre signalling.
actually its the only thing i can think of for thread synchronization purposes
like: process a watches a database, process b occasionally writes into it.
to tell the watcher process to have a look, process b can just send a signal to it once its done writing
no mutexes, no semaphores
1 month ago
Anonymous
im not fluent in asm at all, but i made an effort and now i see
picrel.
its your way of writing the algo thats the problem.
the compiler tries to do INT_MIN - 1 inside an int and shits itself.
if you write it in an alternative manner you get correct code.
but you gotta keep the volatile keyword or the coompiler shits itself again.
and i have no idea why this time.
the xor op must be there to zero the register i figure
>You really want proper atomics for that
i think you dont need em.
you can do the mutual exclusion with your signaling.
your signal handler normally stops the execution of your program at once.
execution which resumes once your signal handler returns
geg we found a bug in gcc and the solution for it.
who said shitposting is a waste of time?
anyhoo, someone make a ticket about this issue
im unhireable anyways, i have no use for the accolade
1 month ago
Anonymous
This is not a bug and you will be ridiculed on the GCC mailing list if you say it is. Undefined behavior is undefined, news at 11.
gcc's __builtin_xxx 'non portable' extensions run on more platforms than 'standard' rust, so I'm not sure it really matters whether it's standard C or not.
If you want maximum portability use C + gcc extensions because gcc supports the most platforms.
It's not even supported on MSVC, that seems pretty major. When I write a library in any language I make a basic effort to keep it compatible with many implementations and language versions. If I were writing a C library I'd at least think about it before using those builtins.
A lot of the time it's fine of course. But it's not irrelevant.
Is it really true that Rust is unable to use hardware specific features because it enforces wide contracts for integer operations?
Very interesting case. Is this because x86's address arithmetic thing only works on pointer-sized integers?
What if you cast i1 and i2 to usize at the start of the function? I'd probably do that regardless of this issue just to avoid the later casts.
Yeah, you need to put #![feature(unchecked_math)] and build on nightly.
Rust doesn't panic on overflow in optimized build
RTFM
Doesn't affect the point here though.
1 month ago
Anonymous
https://godbolt.org/z/8f5zPEs9f
>What if you cast i1 and i2 to usize at the start of the function?
Yeah, this does the trick.
I didn't know about this one though, thanks for pointing it out.
1 month ago
Anonymous
>It's not even supported on MSVC, that seems pretty major.
not even a freetard but i chuckled reading this
Thank you, idiomatic Rust would be this, no need for those unsafe blocks
https://godbolt.org/z/e9fn86o7P
1 month ago
Anonymous
And yet rusty nails wonder why people think the syntax is unreadable. Kek.
1 month ago
Anonymous
The unreadable syntax is intentional, it's a core goal of computational marxism to replace industry veterans with a lower-paid army of cheap disposable intellectual elites from academia
1 month ago
Anonymous
And yet rusty nails wonder why people think the syntax is unreadable. Kek.
Rust syntax is strictly superior to C, especially when it comes to higher order function and lambdas. C function pointer is a write-only pointer vomit.
1 month ago
Anonymous
[...]
You're telling me that
block.iter().skip(i1).zip(block.iter().skip(i2)).any(|(&c1, &c2)| c1 != c2)
is perfectly readable and you'd be able to quickly understand it?
1 month ago
Anonymous
WDYM, this is far more readable than the headless chicken procedural code that I replied to.
1 month ago
Anonymous
[...]
Rust syntax is strictly superior to C, especially when it comes to higher order function and lambdas. C function pointer is a write-only pointer vomit.
You're telling me that
block.iter().skip(i1).zip(block.iter().skip(i2)).any(|(&c1, &c2)| c1 != c2)
is perfectly readable and you'd be able to quickly understand it?
1 month ago
Anonymous
In a single glance, yes
>create two iterators on block starting from i1 and i2
>zip them (iterate together)
>find if there's any instance where c1 and c2 are unequal
I'm not the original poster of the code btw, the procedural code is friendly for the compiler maybe but the functional code is far more readable to me as I can interpret the intent of the author perfectly.
Whereas I cannot envision the end goal of the procedural code
1 month ago
Anonymous
That doesn't do the c1 > c2 checks. This is closer:
pub fn test(i1: i32, i2: i32, block: &[u8]) -> bool {
let i1 = i1 as usize;
let i2 = i2 as usize;
block[i1..]
.iter()
.zip(&block[i2..])
.find(|(&c1, &c2)| c1 != c2)
.map(|(&c1, &c2)| c1 > c2)
.unwrap_or(false)
}
For a still better match it could check exactly three items and panic if out of bounds but without context I can't be bothered to go that far
This is because the C compiler is lying to you about what it's doing and the Rust compiler is honest. The C compiler turns block[i1]; ++i1; ++i1 into block[i1+2] because it thinks you're stupid and doesn't respect you.
>The unreadable syntax is intentional, it's a core goal of computational marxism to replace industry veterans with a lower-paid army of cheap disposable intellectual elites from academia
This isn't an implicit conversion problem. The _Bool and bool types did not come out until C99. All boolean expressions in C are integer expressions. There's no conversion from bool to int. It's just int.
>Tested it
It works on typical modern implementations, but I saw in the standard the other day that trapping or other results are permitted. Not UB though IIRC.
Are there C/C++ compilers that simply error out on UB instead of "optimizing" the code?
Also, what's the state in compiler circles? Do they jerk off or something when they find another UB optimization that removes code, but "increases performance"?
>xor eax, eax
lol
lmao
What hardware specific features?
bolt on breasts
You morons do realize that this is simply UB if it overflows and the compiler optimizes it out (as the standard dictates) because it assumes UB never happens, right?
This is a bait and flame war thread.
I'm just going to skip the technical details and call you a 'Black person homosexual.'
>Black person homosexual!
There, we're done. Have a nice day, Anon!
I may be a Black person homosexual, but I'm not wrong you frickhead
It's not UB in x86. That's UB in C only. Rust doesn't have this problem because it's closer to the hardware.
>I don't know how to store zero on x86.
Yes, we know fricking moron.
Can't you come up with new material?
Cope
I use net bsd I care more about portsbility
It reslly is very portsble, isn't it?
go back to IQfy with your rust bulshit.
>use different function
>surprised when you get different results
you can write a wrapping add function in c, too, you know?
>The C version doesn't know that your numbers are 32-bit long
On -march=x86-64, it is.
bump
>volatile
>top kek
same happens if you declare x as volatile in OP's example, you're fricking moronic or deliberately trolling
>same happens if you declare x as volatile in OP's example
yes i've just done it
volatile?
>this thread again
frick off troll, your code isn't even legal
>writes UB
>gets btfod by compiler opti
>doesnt undertand why, even if explained
so this is the power of rust, huh?
compilers are very smartly written, but they cannot account for raw, powerful moronation.
for these cases, use "volatile", you mong. it tells the compiler to leave the variable accesses unoptimized
"Closer to the hardware" is imprecise and unhelpful.
Rust is more willing to pin down language behavior, which makes it easier to take advantage of hardware details if your hardware actually matches the language (which to be fair it definitely does in the case of two's complement).
This is both because of Rust's safety obsession and because hardware was more varied back when C was standardized.
C makes signed integer overflow undefined behavior, which means that strictly speaking a program where such overflow happens is incorrect and you can't rely on any behavior. Many compilers are more lenient and others offer workarounds, but there's no uniformity. The standards committee is only now getting around to providing a standardized way to deal with this (see pic related).
In general it's more efficient to check for overflow after performing an operation, not before (especially on hardware with a dedicated overflow flag like x86, but I believe this is true regardless). So with signed overflow being UB and without stdckdint.h it's basically impossible to check for overflow in a way that's both portable and efficient, you'd have to write platform-specific code.
That's not what volatile is for. It's still UB even if it happens to work on your system.
>volatile is for
unless im mistaken, its exactly to tell the compiler to NOT try anything too smart for its own good, concerning a specific variable.
which boils down to modifying variable accesses.
but yeah, volatile is not the equivalent of "unsafe" in rust if thats what you mean.
also not all ub are equal.
some like binary on floats are UB, but only within C. just bc floats are defined elsewhere.
and thats when an UB works on a computer.
in a way UB only means "implementation-defined"
It doesn't just boil down to modifying variable accesses, it's only about variable accesses and nothing else.
(Rust implements read_volatile() and write_volatile() functions for pointers and doesn't have it as a type qualifier. This is enough to get something equivalent.)
In the OP's case it can't assume that the first read of x matches the second read of x. But that's only because there are two accesses.
Look at pic related, with x + 1 == INT_MIN:
- By default it'll still break by assuming that overflow can't happen.
- Even with -fwrapv it skips the arithmetic and checks the value of x directly, so it didn't disable all optimizations.
The only thing volatile accomplishes for this program is touching memory, making it less efficient for no benefit.
Basically the only valid use case for volatile is when accessing memory is how you talk to your hardware. (Some people use it for multithreading but that isn't actually correct either.)
>but yeah, volatile is not the equivalent of "unsafe" in rust if thats what you mean.
I'm not sure how it would be, unsafe doesn't directly involve the optimizer very much.
lel autism is strong with you (not in a good sense)
youre wrong about what volatile does to the code.
try it out.
and it is used with signal handlers (bc mutexes arent the only way to deal with threading)
>by default it assumes you wont overflow
yes, because op uses int instead of unsigned ints. which yields UB insteadof a controllable overflow.
its in the standard
> -fwrapv
the what?
before this discussion i didnt even knew that option existed.
also wtf for? you got modern idomatic ways to deal with overflow, why do it by hand?
and why provide a global option that goes against the standard?
it sounds like a broken feature, left broken bc the only two people who know it exists are you and the guy who wrote it.
and wth do you mean by "touching the memory making it less efficient"?
>youre wrong about what volatile does to the code.
>try it out.
Try what? I already showed you an example. Did you look at the picture?
>and it is used with signal handlers
That might be reasonable yeah.
>(bc mutexes arent the only way to deal with threading)
You really want proper atomics for that, volatile isn't enough.
>>by default it assumes you wont overflow
>yes, because op uses int instead of unsigned ints. which yields UB insteadof a controllable overflow.
>its in the standard
That's my point yeah, which is why
is bad advice.
>> -fwrapv
>the what?
>before this discussion i didnt even knew that option existed.
It tells the compiler to use wrapping signed arithmetic, effectively taking away the UB. There's also -ftrapv which generates traps.
>also wtf for? you got modern idomatic ways to deal with overflow, why do it by hand?
I'm just demonstrating how the compiler handles things, I posted better solutions in
.
>and why provide a global option that goes against the standard?
Because it's the behavior many people expect. Some people were already relying on it before standardization. Many compilers do it by default, including MSVC.
The standard permits it of course.
-fno-strict-aliasing is a similar flag that's pretty popular, used by e.g. Linux.
>it sounds like a broken feature, left broken bc the only two people who know it exists are you and the guy who wrote it.
No.
>and wth do you mean by "touching the memory making it less efficient"?
Look at the assembly. It writes the register to memory, reads it back, and immediately discards the result, wasting time accomplishing nothing at all. It has to do that because that's what volatile means, but here it's useless.
im not fluent in asm at all, but i made an effort and now i see
picrel.
its your way of writing the algo thats the problem.
the compiler tries to do INT_MIN - 1 inside an int and shits itself.
if you write it in an alternative manner you get correct code.
but you gotta keep the volatile keyword or the compiler shits itself again.
and i have no idea why this time.
the xor op must be there to zero the register i figure
>You really want proper atomics for that
i think you dont need em.
you can do the mutual exclusion with your signaling.
your signal handler normally stops the execution of your program at once, and execution resumes once your signal handler returns
>dumb moron thinks volatile makes signed integer overflow not UB
dumb moron
>anal autist btfo by reading comprehension
is that way
Not who you're responding to, but you're missing the point. Doing an addition and THEN checking for overflow is always undefined behavior for signed integers in C, and your workarounds don't change that. You've found something that happens to work on your compiler, but it is still UB.
To do this right, you have to check if the addition will overflow with comparisons first and only then do the addition. In this case, you just have to check if num == INT_MAX like in
.
>you're missing the point
nah, the point was to dissect compiler behaviour
but now that you reminded me its fundamentally UB, i might have been too optimistic with the tickets and such
INT_MIN is -2147483648
dumb moron
>we found a bug in gcc
dumb moron
>INT_MIN is -2147483648
>dumb moron
now thats missing the point.
gorilla Black person is lower on the scale than a dumb moron, right?
>the compiler tries to do INT_MIN - 1 inside an int and shits itself.
No, that's not the reason. The compiler knows what it's doing.
The problem is that (according to the standard) there's no x for which x + 1 == INT_MIN. Doesn't matter whether x is volatile. Go read what the standard has to say about volatile, it doesn't have anything to do with arithmetic, only memory accesses and nothing else.
>if you write it in an alternative manner you get correct code.
>but you gotta keep the volatile keyword or the coompiler shits itself again.
>and i have no idea why this time.
Like I said, that only happens because you're loading num twice. It can't assume that its first load of num matches the second load of num. This happens to prevent the optimization but it has nothing to do with arithmetic.
Like, the compiler assumes that its first load of num might be 3 while its second load of num might be 8. And since 3 + 1 < 8 it should return true. So it decides it should actually perform the arithmetic and the check. But this is just a side effect, you're not attacking the root cause.
>i think you dont need em.
>you can do the mutual exclusion with your signaling.
For threading without mutexes I mean. Volatile doesn't take memory ordering into account, so you can still get impossible results depending on the whims of your hardware and your compiler. (I know less about signal handlers.)
>it doesn't have anything to do with arithmetic, only memory accesses and nothing else.
yeah i know
but the memory accesses of a variable have a bearing on code dependent on said accesses.
if you do computations only to ignore the end value, optimization will drop the whole code associated with the variable youre working on.
so yes, its all about accessing variables, but that accessing or lack thereof has consequences too.
>side effect
yeah, of course
>signal handlers
you might want to look into em. they have a couple interesting properties which can come in handy in thread synchronization.
like pausing the execution of the process youre signalling.
im not sure how that would translate to its child processes tho
testing needed i guess...
also signalling sucks on macos, but to notice you really have to push them to the limit (i played around with signals at school, where we had to create a client and a server that were supposed to communicate only using two signals. macos had the bad habit of not doing what is described in the manpages when it comes to signals)
macos has xpc and mach message
myeah...
neither actually matches signal specificities.
xpc appears too archaic, mach appears too complex.
and neither is an excuse to provide a broken feature (bc thats what signalling on macos is. broken)
especially when you market your shit like apple does.
>you might want to look into em. they have a couple interesting properties which can come in handy in thread synchronization.
>like pausing the execution of the process youre signalling.
actually its the only thing i can think of for thread synchronization purposes
like process a watches a database, process b occasionally writes into it.
to tell the watcher process to have a look, process b can just send a signal to it once its done writing
no mutexes, no semaphores
geg we found a bug in gcc and the solution for it.
who said shitposting is a waste of time?
anyhoo, someone make a ticket about this issue
im unhireable anyways, i have no use for the accolade
This is not a bug and you will be ridiculed on the GCC mailing list if you say it is. Undefined behavior is undefined, news at 11.
dumb moron
gcc's __builtin_xxx 'non portable' extensions run on more platforms than 'standard' rust, so I'm not sure it really matters whether it's standard C or not.
If you want maximum portability use C + gcc extensions because gcc supports the most platforms.
It's not even supported on MSVC, that seems pretty major. When I write a library in any language I make a basic effort to keep it compatible with many implementations and language versions. If I were writing a C library I'd at least think about it before using those builtins.
A lot of the time it's fine of course. But it's not irrelevant.
Very interesting case. Is this because x86's address arithmetic thing only works on pointer-sized integers?
What if you cast i1 and i2 to usize at the start of the function? I'd probably do that regardless of this issue just to avoid the later casts.
Yeah, you need to put #![feature(unchecked_math)] and build on nightly.
Doesn't affect the point here though.
>What if you cast i1 and i2 to usize at the start of the function?
Yeah, this does the trick.
I didn't know about this one though, thanks for pointing it out.
>It's not even supported on MSVC, that seems pretty major.
not even a freetard but i chuckled reading this
pls no bulli
>#![feature(unchecked_math)]
?
Good post. I think it's just laziness on the part of the standards committee.
>What is UB: the thread
OP is a moronic chimp homosexual as usual
>What is UB:
This is not UB in hardware; Rust and x86 don't have this problem.
OP is fricking moronic.
/thread
>compiles without optimization
dumb moron
Code that only works in a specific optimization level is FRICKING BROKEN! Only a rusty nail could possibly think otherwise.
>coping cnile gets mad after being called out for lying
>t. hypocritical rusty nail that attacks c chads for using -fwrapv
If you disable optimization in C, it will also work as intended like Rust. Cnile cult has corroded your brain.
What's your opinion on num > num.wrapping_add(1)? Works on any optimization level
Ugly af but technically works, just like the code in my pic does.
gcc and clang generate faster code.
use unsigned int and it will work as unsigned overflow isn't UB
Is it really true that Rust is unable to use hardware specific features because it enforces wide contracts for integer operations?
Bonus rust error:
error[E0658]: use of unstable library feature 'unchecked_math': niche optimization path
--> <source>:8:17
|
8 | i1 = i1.unchecked_add(1); i2 = i2.unchecked_add(1);
| ^^^^^^^^^^^^^
|
Rust doesn't panic on overflow in optimized build
RTFM
my bad. still produces bad asm, same as with wrapping add
(forgot to put target cpu, doesn't change anything)
post godbolt, I want to check something
https://godbolt.org/z/8f5zPEs9f
Thank you, idiomatic Rust would be this, no need for those unsafe blocks
https://godbolt.org/z/e9fn86o7P
And yet rusty nails wonder why people think the syntax is unreadable. Kek.
The unreadable syntax is intentional, it's a core goal of computational marxism to replace industry veterans with a lower-paid army of cheap disposable intellectual elites from academia
Rust syntax is strictly superior to C, especially when it comes to higher order function and lambdas. C function pointer is a write-only pointer vomit.
WDYM, this is far more readable than the headless chicken procedural code that I replied to.
You're telling me that
block.iter().skip(i1).zip(block.iter().skip(i2)).any(|(&c1, &c2)| c1 != c2)
is perfectly readable and you'd be able to quickly understand it?
In a single glance, yes
>create two iterators on block starting from i1 and i2
>zip them (iterate together)
>find if there is any instance where c1 and c2 are unequal
I'm not the original poster of the code btw, the procedural code is friendly for the compiler maybe but the functional code is far more readable to me as I can interpret the intent of the author perfectly.
Whereas I cannot envision the end goal of the procedural code
That doesn't do the c1 > c2 checks. This is closer:
pub fn test(i1: i32, i2: i32, block: &[u8]) -> bool {
let i1 = i1 as usize;
let i2 = i2 as usize;
block[i1..]
.iter()
.zip(&block[i2..])
.find(|(&c1, &c2)| c1 != c2)
.map(|(&c1, &c2)| c1 > c2)
.unwrap_or(false)
}
For a still better match it could check exactly three items and panic if out of bounds but without context I can't be bothered to go that far
This is because the C compiler is lying to you about what it's doing and the Rust compiler is honest. The C compiler turns block[i1]; ++i1; ++i1 into block[i1+2] because it thinks you're stupid and doesn't respect you.
Still not using Rust
Cope + Seethe + Cry + Weep + Not my problem
This isn't even a bug. The compiler did what you told it to.
Use inline asm or do the same check Rust is doing (i.e checking against INT_MAX)
>The unreadable syntax is intentional, it's a core goal of computational marxism to replace industry veterans with a lower-paid army of cheap disposable intellectual elites from academia
>int
>returns bool
God OP is a moron.
In C, the type of the expression (a < b) is int.
Does that affect the point whatsoever?
Blame C for shitty implicit conversion cancer
This isn't an implicit conversion problem. The _Bool and bool types did not come out until C99. All boolean expressions in C are integer expressions. There's no conversion from bool to int. It's just int.
(int)((unsigned int)x + 1) < x
IIRC implementations are allowed to do weird shit if x is negative though I can't name any that do
That sort of conversion works fine for negative.
Tested it
>Tested it
It works on typical modern implementations, but I saw in the standard the other day that trapping or other results are permitted. Not UB though IIRC.
Hello fellow citizens of the United States of America! Please see the attached subprogram which clearly expresses the intent of the programmer.
Are there C/C++ compilers that simply error out on UB instead of "optimizing" the code?
Also, what's the state in compiler circles? Do they jerk off or something when they find another UB optimization that removes code, but "increases performance"?
You sound like you don't understand what undefined behavior is, please watch this video:
Also, -fsanitize=undefined
This talk should always have this link attached to it https://twitter.com/chandlerc1024/status/1519784724624908288
CompCert and MSVC do not have this problem
MSVC has it to some extent what with that pointer-integer cast arithmetic thing