[...]
This thread again. Hopefully IQfy will be smart and not fall for the bait this time.
[math] f(x)=O(g(x))[/math] if and only if there exists some constant [math]c[/math] such that [math]f(x)\leq xg(x)[/math] for all [math]x[/math]
Wrong.
Double wrong.
x^2 <= x * (2x) for all x but x^2 is not in O(2x).
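The counterexample can be checked numerically (a minimal sketch; the helper name `ratio` is made up for illustration): x^2 <= x * (2x) does hold pointwise, but the multiplier needed is x itself, not a constant, so no single c makes x^2 <= c * (2x) hold for all large x.

```python
# x^2 <= x * (2x) holds for every x > 0, but x^2 is NOT O(2x):
# the ratio x^2 / (2x) = x/2 is unbounded, so no fixed constant c
# satisfies x^2 <= c * (2x) for all sufficiently large x.
def ratio(x):
    return (x * x) / (2 * x)

for x in [10, 100, 1000]:
    print(x, ratio(x))  # ratios 5.0, 50.0, 500.0 -- the "constant" keeps growing
```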
Wrong.
Wrong.
x is not a constant you dumb Black person
Wrong.
Wrong. Downvoted.
Correct.
>f(x)\leq xg(x)
you mean [math]f(x)\leq cg(x)[/math]
Wrong.
>math notation
That's not a hint and you're a Black person
Hard mode: Explain it without using the letter e
Wrong.
bump
it's a big meme made out to be something when it really is nothing, and as for computational complexity, if you don't just conceptualize it in your head immediately then lol
big o notation? more like big cope notation
>Does IQfy understand big O notation?
No, but I do understand
>everything the redditor said is wrong
The vast majority of posts on Reddit contain inaccurate or false information.
In fact, the majority of the internet is inaccurate or false information.
Which is why LLMs more often than not produce inaccurate or false information.
Wrong.
Is this OC?
>Hint: everything the redditor said is wrong
Wrong.
>this thread
kys op
Wrong.
big O is a good guideline when it comes to performance but its value is overrated since code runs on real hardware with cache-lines and not fairy magic
That previous thread made me realize IQfy is full of dunning-kruger moronic autistic homosexuals that wanted to sound smart
Wrong.
Wrong
Op is wrong, and my roll will be glorious
shut up cia spook
big O has no use case
Literally everybody in this thread is wrong
Wrong.
>reddit screencap thread
>OP says "everything in it is LE WRONG!" (it's not)
>30+ posts of useless vomit
what an absolutely embarrassing thread. the fricking state of this board, i don't even recognize it anymore
>>OP says "everything in it is LE WRONG!" (it's not)
Name one correct thing that the redditor said.
All of them. Go ahead and disprove them. You won't.
> O(1) is constant time, which means it doesn't take longer as the input size increases.
Wrong. It only means that it's bounded by a constant. It could always take longer with respect to the input size and still be O(1). For example, f(n) = 1 - 1/n always gets bigger with respect to n but is O(1).
All other statements he made are similarly false.
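The 1 - 1/n example is easy to verify directly (a minimal sketch; the function name `f` follows the post):

```python
# f(n) = 1 - 1/n strictly increases with n, yet is bounded above by
# the constant 1, so it is O(1): a constant bound on growth does not
# forbid the function from getting bigger as n grows.
def f(n):
    return 1 - 1 / n

values = [f(n) for n in range(1, 1001)]
assert all(a < b for a, b in zip(values, values[1:]))  # strictly increasing
assert all(v <= 1 for v in values)                     # but bounded by 1
```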
Wrong.
>but is O(1).
No it's not, it's O(1 - 1/n)
Utterly wrong. Read a fricking book.
Big O is about how much the time taken by an algo can grow as input increases, not about how much time it'll take as input increases. the time will obv increase as input increases even in O(1)
>print(timetocalculate(thing))
What does big O even tell you? How does it even come up outside a paper exam?
Big rO(ll)
NO WAY YOU GOT HER. NO WAY.
FRICK OFF. AAAAAA LUCKY LUCKY LUCKY.
>tfw now married to a girl I've never even seen outside of memes
Time to finally start the series, it's a sign
all of this is masturbatory crap for academics anyway, in the real world there are too many factors to consider to be able to simplify the performance of an algorithm to big O. for small problem sizes, there are often algorithms with a worse complexity that run in less time, simply because they are able to use the hardware more efficiently (mainly the cache). there is no substitute for testing your algorithms with real world data, N is not infinity in the real world.
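The "test with real data" point can be sketched with a minimal timing harness using Python's stdlib `timeit` (the two search functions and the problem size are placeholders, not anything from the thread):

```python
import timeit

# Asymptotics drop constants and ignore cache effects; only measuring
# on real data at your actual problem size settles which wins.
def linear_search(items, target):      # O(n), but cache-friendly scan
    return target in items

def set_search(items_as_set, target):  # O(1) expected, but must hash
    return target in items_as_set

small = list(range(8))
small_set = set(small)

t_list = timeit.timeit(lambda: linear_search(small, 7), number=100_000)
t_set = timeit.timeit(lambda: set_search(small_set, 7), number=100_000)
print(f"list scan: {t_list:.4f}s  set lookup: {t_set:.4f}s")
```

Which one prints a smaller number depends on the interpreter, the hardware, and n, which is exactly the point.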
in the real world you can google or (in case Putler drops a few nuclear bombs on your ISP's buildings) look for the proper algorithm in a book and copy the most efficient one from there. You never need to do this masturbatory academicuck wankery because you can just copy paste what the academigolem npcs came up with. Just steal it and use it in your code lmao.
spoken like a true webshit dev soon to be replaced by AI
sounds like you're poor
sure, the definition might be a little different, but I'd say it's close enough for anyone who isn't doing theoretical computer science
It's wrong, simple as.
How so?
Gave an example here
it's wrong in that what's measured is not necessarily time
it can also be memory usage or some other factor
That's not what's wrong with it.
I don't see what's wrong with what the redditor said. Technically, when talking about O it means the worst case, and technically it doesn't have to mean that that worst case exists, but the claims he makes are correct. Furthermore, proofs usually give an O bound and don't bother proving anything further (like that there is a worst case), even though an algorithm may actually be in Theta
Oh I see now after reading this
His mistake is claiming something about how it scales with input without first saying that he is talking about large input. For sufficiently large n, his claims are correct
1 - 1/n is bounded by a constant. That constant is 1
>For sufficiently large n, his claims are correct
Wrong.
>No it's not
Wrong
>it's O(1 - 1/n)
Correct.
They are correct. For instance, if f(n) = Theta(g(n)), then for any epsilon > 0 there is a sufficiently large m s.t. c*g(n) - epsilon < f(n) < c*g(n) + epsilon for all n > m. Additionally, the number of steps an algorithm makes is an integer, so by picking an epsilon < 1/2 you get an m s.t. f(n) = c*g(n) for all n > m
Wrong.
Wrong.
Revise calculus
Wrong.
If anything was wrong and you knew calculus you would be able to point it out
Not correct.
What is the asymptotic time complexity of waifu acquisition?
O(wrong).
Rerolling because bees
Yes, it describes the order of efficiency of an algorithm, which basically equates to the number of steps the algorithm needs to perform to solve a problem. O(n) would be linear, i.e. roughly as many steps as the number of items in the array; O(n^2) would be quadratic, i.e. n x n steps to complete the task for an array of n items.
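The step-count reading can be sketched directly (a minimal illustration; the counter functions are made up, not from the thread):

```python
# Counting "steps" literally: one pass over the array is linear in n,
# a nested pass over all pairs is quadratic -- matching O(n) and O(n^2).
def linear_steps(arr):
    steps = 0
    for _ in arr:        # one step per item
        steps += 1
    return steps

def quadratic_steps(arr):
    steps = 0
    for _ in arr:        # n iterations...
        for _ in arr:    # ...each doing n inner steps
            steps += 1
    return steps

arr = list(range(10))
assert linear_steps(arr) == 10       # n
assert quadratic_steps(arr) == 100   # n * n
```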
also rolling for my waifu.
>Yes, it describes the order of efficiency of an algorithm, which basically equates to the number of steps the algorithm needs to perform to solve a problem. O(n) would be linear, i.e. roughly as many steps as the number of items in the array; O(n^2) would be quadratic, i.e. n x n steps to complete the task for an array of n items
Wrong.
Wrong.
Wrong.
Wrong.