its not exhaustive but the kinds of operations he described are O(n)
now youre going to say that O(log n) is a subset of O(n) which is bounds autism I was describing
In computer science, "O(n)" refers to Big O notation, which is used to describe the efficiency or complexity of an algorithm. Specifically, "O(n)" represents linear time complexity, where the execution time of the algorithm increases linearly with the size of the input data set. The "n" represents the size of the input.
When we say an algorithm is "O(n)," it means that the time (or sometimes the space) it takes to complete the algorithm is directly proportional to "n". As "n" (the number of elements being processed) increases, the resources required by the algorithm increase linearly.
For example, consider an algorithm that checks whether a certain item is present in a list by looking through each item one at a time. In the worst case, where the item is not present or is at the end of the list, the algorithm would need to check every item once. Therefore, if the list contains "n" items, the algorithm will make "n" checks, making this algorithm O(n) in terms of time complexity.
>Are you asking for the formal math definition?
What the frick does that mean? As opposed to an informal philosophical definition? No, I'm asking for an actual definition/explanation of what it means, whether formal or informal. >Because if not then his summary is not too far off
His summary is 100% wrong on almost every point.
[...] > now youre going to say that O(log n) is a subset of O(n)
It is. > which is bounds autism I was describing
What the frick does autism have to do with anything here?
>computer science board >nobody knows what O(n) means
>O(n) is just the upper bound
Wdym? > On that note, nothing "is" O(n). That's not how "is" works.
Wrong.
[...] > Oh, so you want the formal definition of O(n)?
Never asked for a formal definition, nor do I want one. Formal definitions are a pain in the ass to read. All I'm asking is a correct explanation of how it works and what it means. > O(n) is a set. Specifically, big O, big theta, and all those associated complexity classes are a way to describe a set. Formally speaking, O(g(n)) is the set of all functions f(n) such that for some positive constants c and N, for all n > N, f(n) > c * g(n). Or at least that's the definition I can remember from my undergrad, which I haven't had to touch upon in a fricking decade, so I might be missing something in that inequality definition.
Your explanation is very wordy and also very incorrect. The fact that you cannot give an explanation in plain english, nor a correct formal explanation of what it means suggests you don't understand it.
OP here. I am going to sleep. Will continue this thread tomorrow if it's still up. Anyone pretending to be OP after this post before 7 hours has passed is merely pretending to be me.
OP here. This person is pretending to be me.
[...]
Nobody asked me to provide my explanation. Would you like to see one?
If so, the explanation given in the CLRS book I posted here [...] is a good start.
Here [...] I agreed with an explanation another anon gave. Seems like you haven't actually read the thread before replying.
As far as I know, O(n) notation simply takes the fastest growing term in the complexity equation. E.g. if you had n^2 it's obviously O(n^2), but 50n^2+10000000 is also O(n^2), completely disregarding the massive constant term and the multiplier. Which means big O notation is not telling you how complex it is, but how quickly it grows in complexity, meaning a O(1) could be slower than a O(n!) (the latter is just guaranteed to GROW faster).
Am I correct? t. self-taught tard who learned programming by hitting the keyboard in an IDE until a problem showed up then googling how to fix it.
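That reading can be sketched numerically; the cost functions below are made up for illustration (the specific constants are arbitrary):

```python
def cost_big_constant(n):
    # f(n) = 50*n**2 + 10_000_000 is still O(n^2): the multiplier and the
    # constant term are both dropped by the notation.
    return 50 * n**2 + 10_000_000

def cost_constant(n):
    # g(n) = 5_000_000 is O(1), yet larger than many growing costs for small n.
    return 5_000_000

def cost_linear(n):
    # h(n) = n is O(n).
    return n

# For small n the O(1) cost exceeds the O(n) cost...
assert cost_constant(10) > cost_linear(10)
# ...but any growing cost eventually overtakes a constant one.
assert cost_linear(10_000_000) > cost_constant(10_000_000)
```
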
Are you asking for the formal math definition? Because if not then his summary is not too far off, it's only ever mentioned as a rough indicator of how time complexity grows with the input nowadays, so you can stop acting high and mighty and go back now.
>Are you asking for the formal math definition?
What the frick does that mean? As opposed to an informal philosophical definition? No, I'm asking for an actual definition/explanation of what it means, whether formal or informal. >Because if not then his summary is not too far off
His summary is 100% wrong on almost every point.
its not exhaustive but the kinds of operations he described are O(n)
now youre going to say that O(log n) is a subset of O(n) which is bounds autism I was describing
> now youre going to say that O(log n) is a subset of O(n)
It is. > which is bounds autism I was describing
What the frick does autism have to do with anything here?
WRONG WRONG WRONG you homosexual Black person
shit the frick up
>Wrong.
replying to a statement with "Wrong" and nothing else is the most reddit shit this 4cheddit place ever came up with. 100% guaranteed you were making a smugsoyjak face while posting this reddit shit
I let people think about why they're wrong. If people ask me to explain I always do. This is a cultured way of interacting. Spoilers are not fun.
2 weeks ago
Anonymous
Wrong.
You will never be a teacher, you will never be respected, you will never make the world smarter, you will never be a woman.
2 weeks ago
Anonymous
Replying "wrong" does not achieve what you think it does, because it leaves nothing of value to dispel our belief that you're a bad-faith homosexualposter. Such is life on fore chance.
2 weeks ago
Anonymous
I thought this thread was fun and didnt interpret his "wrong" as smug or believe it was bad faith
Could have been though
2 weeks ago
Anonymous
Replying "wrong" does not achieve what you think it does, because it leaves nothing of value to dispel our belief that you're a bad-faith homosexualposter. Such is life on fore chance.
When people ask I always provide detailed explanations ITT. You're just being bad faith right now.
2 weeks ago
Anonymous
Great, what's the answer you're after?
2 weeks ago
Anonymous
To the question of what O(n) means? I showed which answers were correct here
[...]
2 weeks ago
Anonymous
Thanks!
Seems quite anal. Are you a math gay perchance? The interesting idea of big-O is understood by all good devs. It's irrelevant if they can give you a satisfactory definition or not.
2 weeks ago
Anonymous
>Are you a math gay perchance?
All autists think they are, in reality he was just being a pedant. Or baiting.
>O(n) is just the upper bound
Wdym? > On that note, nothing "is" O(n). That's not how "is" works.
Wrong.
>Does IQfy know what O(n) means? >Hint: everything the redditor said is incorrect
Oh, so you want the formal definition of O(n)? What the redditor provided is actually pretty reasonable for most people's use cases, but as for the proper meaning...
O(n) is a set. Specifically, big O, big theta, and all those associated complexity classes are a way to describe a set. Formally speaking, O(g(n)) is the set of all functions f(n) such that for some positive constants c and N, for all n > N, f(n) > c * g(n). Or at least that's the definition I can remember from my undergrad, which I haven't had to touch upon in a fricking decade, so I might be missing something in that inequality definition.
In effect though, when a function f(n) is in the set O(g(n)), we say that g(n) is providing an upper bound on f(n) when we ignore the effects of constants. Thus, O(n) is the set of all functions that do not grow faster than the fastest growing linear function.
> Oh, so you want the formal definition of O(n)?
Never asked for a formal definition, nor do I want one. Formal definitions are a pain in the ass to read. All I'm asking is a correct explanation of how it works and what it means. > O(n) is a set. Specifically, big O, big theta, and all those associated complexity classes are a way to describe a set. Formally speaking, O(g(n)) is the set of all functions f(n) such that for some positive constants c and N, for all n > N, f(n) > c * g(n). Or at least that's the definition I can remember from my undergrad, which I haven't had to touch upon in a fricking decade, so I might be missing something in that inequality definition.
Your explanation is very wordy and also very incorrect. The fact that you cannot give an explanation in plain english, nor a correct formal explanation of what it means suggests you don't understand it.
By "just the upper bound" I suggest that it is not Theta(n), contrary to how it's informally used. Theta(n) is a tighter bound for when a function is in both Omega(n) and O(n). It is an upper asymptotic bound. It is not strictly "worst case".
O(n) is a set, so we use set language like "in" to talk about membership. You would not say {0, 1, 2} is {0, 1, 2, 3}, or {} is {0, 1, 2, 3}.
Everything you said is surprisingly correct. Although it's not necessary to use the set membership notation to indicate the complexity class of a function.
Since sets are just properties that objects have, it's correct to say x^2 *is* O(x^2). In fact, one can go even further and freely substitute terms in expressions and equations with O(f(x)). For example, expressions such as f(x) = f(0) + O(x) are commonly used in mathematical textbooks and research, and are unambiguous. A formal decoding of it would say that f(x) - f(0) is O(x). One sometimes even sees multiple big O terms in an equation. Pic related is an example.
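A standard instance of that substitution convention (my own example, not the missing pic) is a truncated Taylor expansion:

```latex
e^x = 1 + x + O(x^2) \quad (x \to 0)
% Formal decoding: there exist constants c, \delta > 0 such that
% |e^x - 1 - x| \le c\,x^2 \quad \text{for all } |x| < \delta.
```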
2 weeks ago
Anonymous
Incorrect, the groups are defined as exclusionary.
Wrong. I'm not schizophrenic. I do not believe in any conspiracy theories and I in fact hate schizophrenics, since they make no sense, constantly use illogical leaps in their arguments and have an undeserved sense of superiority over others.
2 weeks ago
Anonymous
>since they make no sense, constantly use illogical leaps in their arguments and have an undeserved sense of superiority over others.
You just described yourself
[...] >um ACKshyually, electron is le good because O(6502398528n) = O(n)
You're exactly what I thought you were. Frick off and die.
ITT: thinly veiled IQfy bait
If it's bait it's hilarious but I've known too many delusionally confident morons to believe that it is.
2 weeks ago
Anonymous
> >since they make no sense, constantly use illogical leaps in their arguments and have an undeserved sense of superiority over others. >You just described yourself
How so? I am able to justify my claims. Nowhere did I claim superiority over others. I recognize there are likely many people smarter than me on this board, including possibly in this thread.
Ask me to justify any point I made. I will do so.
2 weeks ago
Anonymous
OK you're trolling, bravo
2 weeks ago
Anonymous
>What the frick does autism have to do with anything here?
Read your posts itt. You are severely autistic and thus in dire need of professional help.
Wrong. I'm just pointing out that nobody else on this board seems to understand such a fundamental concept. You have proven my case.
2 weeks ago
Anonymous
OP here. This person is pretending to be me.
>Wrong >Wrong >Wrong >Just because everyone else thinks Big O notation means what it means, my extra super special awesome definition is the REAL definition >No, I won't tell you what it is
Nobody asked me to provide my explanation. Would you like to see one?
If so, the explanation given in the CLRS book I posted here
How about page 50 from the 4th edition of CLRS? The explanation there agrees with mine and directly conflicts with what the redditor said. If the redditor gave that explanation in a computer science class, he should get 0 points since he is plain wrong.
[...] >You are severely autistic and thus in dire need of professional help.
What makes you say that?
[...] >OK you're trolling, bravo
What makes you say that?
is a good start.
Here
>In effect though, when a function f(n) is in the set O(g(n)), we say that g(n) is providing an upper bound on f(n) when we ignore the effects of constants
This is closest to a correct explanation given ITT. Although it's important that the effects we're ignoring are multiplicative effects, not any other kind of effects. For example, x^2 is not O(x), even though x^1 and x^2 differ by an effect of a constant (the effect of it being exponentiation). >Thus, O(n) is the set of all functions that do not grow faster than the fastest growing linear function.
Wrong, there is no fastest growing linear function.
I agreed with an explanation another anon gave. Seems like you haven't actually read the thread before replying.
2 weeks ago
Anonymous
Wrong. OP here. This person is pretending to be me, you're one of those schizos I hate. With your undeserved sense of superiority you're too blind to see.
2 weeks ago
Anonymous
If it wasn't obvious already this person is trolling. He is not me.
2 weeks ago
Anonymous
I see you pseuds are resorting to trolling, pathetic. I should have known my ideas were too advanced for this board.
lmao, guess your own medicine tastes a bit bitter huh? Keep digging, you'll find it eventually
I don't understand what you are referring to as "my own medicine". I have not yet fully explained the position you're refuting.
2 weeks ago
Anonymous
>my ideas
OMFG LOL
2 weeks ago
Anonymous
OP here. The person you're replying to is not me. None of the ideas I've expressed are mine. They have been known to mathematicians for well over a century.
2 weeks ago
Anonymous
>I don't believe in conspiracy theories. Whatever the hegemony says is the unquestioned truth!!!!
2 weeks ago
Anonymous
You're not justifying shit, you're just spamming "wrong".
2 weeks ago
Anonymous
Wrong. I already agreed with one anon who gave a correct (though wordy) explanation of what big O notation means. See
>In effect though, when a function f(n) is in the set O(g(n)), we say that g(n) is providing an upper bound on f(n) when we ignore the effects of constants
This is closest to a correct explanation given ITT. Although it's important that the effects we're ignoring are multiplicative effects, not any other kind of effects. For example, x^2 is not O(x), even though x^1 and x^2 differ by an effect of a constant (the effect of it being exponentiation). >Thus, O(n) is the set of all functions that do not grow faster than the fastest growing linear function.
Wrong, there is no fastest growing linear function.
>very wordy
You didn't say I had to be concise. I don't know your level of understanding of the topic, so I might as well elaborate. >and also very incorrect
So I just broke your hard mode rules, and aside from my accidentally mixing up < and >, I do appear to have the correct definition. Unless you believe it is something else?
>In effect though, when a function f(n) is in the set O(g(n)), we say that g(n) is providing an upper bound on f(n) when we ignore the effects of constants
This is closest to a correct explanation given ITT. Although it's important that the effects we're ignoring are multiplicative effects, not any other kind of effects. For example, x^2 is not O(x), even though x^1 and x^2 differ by an effect of a constant (the effect of it being exponentiation). >Thus, O(n) is the set of all functions that do not grow faster than the fastest growing linear function.
Wrong, there is no fastest growing linear function.
>Although it's important that the effects we're ignoring are multiplicative effects, not any other kind of effects
Constant coefficient. My bad. >there is no fastest growing linear function
Yeah. Could have phrased that better. "Faster than any linear function" would be more apt, I think?
> I don't know your level of understanding of the topic
What about my making a thread and correctly noting that many other people misunderstand what big O notation means indicates to you that I do not know what it means? Unless you thought the redditor was correct?
2 weeks ago
Anonymous
Yep, the redditor was correct and you are wrong. You misunderstand the definition.
2 weeks ago
Anonymous
What definition?
2 weeks ago
Anonymous
I see you don't actually have any idea what you're talking about, very cool.
Here, let me spoonfeed you: How about modern big-O notation as adapted from Knuth's work and available in any intro to cs book published in the last 40 years?
2 weeks ago
Anonymous
How about page 50 from the 4th edition of CLRS? The explanation there agrees with mine and directly conflicts with what the redditor said. If the redditor gave that explanation in a computer science class, he should get 0 points since he is plain wrong.
>What the frick does autism have to do with anything here?
Read your posts itt. You are severely autistic and thus in dire need of professional help.
>You are severely autistic and thus in dire need of professional help.
What makes you say that?
OK you're trolling, bravo
>OK you're trolling, bravo
What makes you say that?
2 weeks ago
Anonymous
ctrl+f your own textbook and type Knuth
2 weeks ago
Anonymous
I did that and got 72 results. Why did you ask me to do that?
2 weeks ago
Anonymous
You'll find it, I believe in you
2 weeks ago
Anonymous
I'm not going to spend time going through the results because I don't know what I'm supposed to be looking for.
2 weeks ago
Anonymous
lmao, guess your own medicine tastes a bit bitter huh? Keep digging, you'll find it eventually
2 weeks ago
Anonymous
> lmao, guess your own medicine tastes a bit bitter huh?
What are you talking about? What medicine?
Arguing about convention is about as productive as arguing about whether 0 is a natural.
Different authors are going to have different definitions of it, the essence is the same but the exact details of the definition depend on the context, get over it.
Whether it's defined as an average or as an upper bound is an arbitrary convention, not a property of the world which is what computer science is actually about. Science studies the natural world, not man made definitions. Those are merely auxiliary.
I don't think it's arguing about convention. I've never seen a definition of O(n) that would make what the redditor said correct.
Even if you incorrectly use O(n) to mean Theta(n), what the redditor wrote is still wrong on almost every point.
Do you disagree? Have you seen a different definition of O(n) which makes it correct?
2 weeks ago
Anonymous
>Even if you incorrectly use O(n) to mean Theta(n), what the redditor wrote is still wrong on almost every point.
What else is wrong apart from that?
2 weeks ago
Anonymous
>constant time, which means it doesn't take longer as the input size increases
Wrong. >logarithmic time, which means as input size increases it takes a logarithmically small amount more time
Wrong. > linear time, which means it takes a constant factor of time proportional to the size of input size
Wrong.
Should I keep going or did you get the idea?
2 weeks ago
Anonymous
>constant time, which means it doesn't take longer as the input size increases >Wrong.
Then what does O(1) mean to you?
2 weeks ago
Anonymous
The runtime is bounded by a constant as the input size increases.
2 weeks ago
Anonymous
No, we were assuming every time he says O he means theta.
2 weeks ago
Anonymous
We were not, and this is wrong even if it did mean Theta(1). Theta(1) simply means bounded above by a constant and eventually bounded below by a positive constant.
2 weeks ago
Anonymous
The definition provided in your textbook does not in any way contradict the definition I supply in my post here:
>Does IQfy know what O(n) means? >Hint: everything the redditor said is incorrect
Oh, so you want the formal definition of O(n)? What the redditor provided is actually pretty reasonable for most people's use cases, but as for the proper meaning...
O(n) is a set. Specifically, big O, big theta, and all those associated complexity classes are a way to describe a set. Formally speaking, O(g(n)) is the set of all functions f(n) such that for some positive constants c and N, for all n > N, f(n) > c * g(n). Or at least that's the definition I can remember from my undergrad, which I haven't had to touch upon in a fricking decade, so I might be missing something in that inequality definition.
In effect though, when a function f(n) is in the set O(g(n)), we say that g(n) is providing an upper bound on f(n) when we ignore the effects of constants. Thus, O(n) is the set of all functions that do not grow faster than the fastest growing linear function.
With the exception of course that > needed to be replaced with <.
2 weeks ago
Anonymous
I didn't say they were in contradiction. You gave a correct explanation (the second, and the first one after the caveat you note).
2 weeks ago
Anonymous
You said the definition I provided was "very wordy and also very incorrect".
2 weeks ago
Anonymous
I said so about the first definition and wrote the reply before reading the second explanation you gave, which was correct, as I noted here
>In effect though, when a function f(n) is in the set O(g(n)), we say that g(n) is providing an upper bound on f(n) when we ignore the effects of constants
This is closest to a correct explanation given ITT. Although it's important that the effects we're ignoring are multiplicative effects, not any other kind of effects. For example, x^2 is not O(x), even though x^1 and x^2 differ by an effect of a constant (the effect of it being exponentiation). >Thus, O(n) is the set of all functions that do not grow faster than the fastest growing linear function.
Wrong, there is no fastest growing linear function.
Your first definition was incorrect, as you noted yourself multiple times in this thread. It's also needlessly wordy/formal.
2 weeks ago
Anonymous
How else could I describe such a feeble attempt to match my own understanding?
OP here. The person you're replying to is not me. None of the ideas I've expressed are mine. They have been known to mathematicians for well over a century.
Wrong. OP here, this discussion has been derailed by stupidity. I don't blame you, we can't all be 110 IQ. Come back when you're serious about learning. In the meantime I'll be meditating on P = NP
2 weeks ago
Anonymous
Arguing about convention is about as productive as arguing about whether 0 is a natural.
Different authors are going to have different definitions of it, the essence is the same but the exact details of the definition depend on the context, get over it.
Whether it's defined as an average or as an upper bound is an arbitrary convention, not a property of the world which is what computer science is actually about. Science studies the natural world, not man made definitions. Those are merely auxiliary.
2 weeks ago
Anonymous
0 isn't a natural number
if it was natural everyone would have had it
they don't because it's not natural
stupid fricking moron
2 weeks ago
Anonymous
>rates of change
youre a fricking Black person. Where does it say this is a definition? That text just builds intuition, if you didnt stop reading to bait on IQfy you would find a proper definition later in the text
OP is being a homosexual. The only potential mistake was the "iterate 5 times" comment.
https://en.wikipedia.org/wiki/Big_O_notation#Orders_of_common_functions
>REEEEEEEEEEEEE
For OP, Big O notation is how he measures his dilation success or failure. O(1) means he can only take one dick in a night. O(n) means he can only take one dick at a time. O(n^2) means he can take two dicks at once, O(n^3) three dicks, etc. His goal is to take O(n!) dicks simultaneously, all of them traveling salesmen. However, some trans surgeons say this is an impossible goal.
>which is a known shortcoming of O notation.
That's by design you fricking incompetent Black person. If you wanted more specificity then just count the number of operations and leave all the constants and lesser terms in like we all did in school. You DO have a cs degree right?
2 weeks ago
Anonymous
The fact that you think it's a shortcoming of O shows you don't understand what O notation means.
>um ACKshyually, electron is le good because O(6502398528n) = O(n)
2 weeks ago
Anonymous
You’re an actual ducking moron if you think this is what time complexity is applicable to lmao
Time complexity is used to describe how algorithms scale with input size. It’s not used to describe how gay a particular webshit is.
The fact that you think it's a shortcoming of O shows you don't understand what O notation means.
2 weeks ago
Anonymous
One could think it's a shortcoming and know the actual definition of it. What this would demonstrate, however, is that one lacks an understanding of why Big O exists in the first place.
> I don't know your level of understanding of the topic
What about my making a thread and correctly noting that many other people misunderstand what big O notation means indicates to you that I do not know what it means? Unless you thought the redditor was correct?
The redditor is not providing the formal definition of Big O, but is providing a definition that is useful in the context of how Big O is typically used. For all I know, you could just be some undergrad who is struggling with his studies and cannot use the redditor's definition for an assignment that requires you to consider the formal definition. However, given that you are describing actual correct definitions as wrong without describing how they are wrong, it appears you don't actually understand anything about the topic, and are just trolling.
its not a shortcoming its description of asymptotic behavior you stupid Black person
I havent even studied compsci or algorithms formally and even I know this
It’s mostly wrong.
It starts off all wrong with >O(1) is constant time, which means it doesn't take longer as the input size increases
It can take longer locally for quite some time. For example, take f(x) = { x^2 if x <= 1,000,000, f(1,000,000) otherwise }. f(x) is O(1), but it grows quadratically first.
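The capped function from that post, written out as a sketch:

```python
CAP = 1_000_000

def f(x):
    """Grows quadratically up to CAP, then stays flat.

    The output never exceeds CAP**2, so f is O(1) even though it grows
    quadratically at first.
    """
    return x**2 if x <= CAP else CAP**2

# Quadratic growth locally: doubling the input quadruples the output...
assert f(200) == 4 * f(100)
# ...but bounded above by a constant for all inputs, which is all O(1) requires.
assert f(10**9) == f(CAP)
```
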
The phrase "at the limit" is assumed when talking about big O.
2 weeks ago
Anonymous
But the post OP is roasting appears intended for someone who knows nothing about Big O. So the context you and I have that “at the limit” is assumed is missing.
2 weeks ago
Anonymous
To be honest it's also missing some edge cases, like galactic algorithms, which grow slowly but have absurdly high constants that make them infeasible (for example, a sorting algorithm that is O(1) but takes the age of the universe to complete no matter the input size), but it gets the idea across well enough for a layman. Especially since computer algorithms in general are straightforward in that aspect.
2 weeks ago
Anonymous
Wrong.
He's right except he forgot to mention that big O is asymptotic complexity, so an upper bound for infinite input. So each one in that list is contained in the next. Technically calling an O(1) operation O(2^n) is correct (but useless).
Infinite inputs are not considered in computer science.
do you?
Yes.
O(n) basically means as n -> infinity (or, if you prefer, "as n gets arbitrarily large"), f(n)/n approaches a constant, non-zero number
Completely wrong.
There are some things wrong in it, but it's mostly correct and gets the general idea across.
>O(1) is constant time
Is the constant time complexity class*, if you want to be picky. >which means it doesnt take longer as the input size increases
Technically true in theory, but in actuality on computers, you will have different factors for constant operations. Though this doesn't matter due to the nature and definition of Big-O, even if that factor were large. >For example, referencing an item in an array takes O(1) time
Bounded*, assuming you have the index location. >O(logn) is logarithmic time
True, but the notation is messed up, which confuses people. O(log_2(n)). >which means as the input size increases it takes a logarithmically small amount more time
It means it's bounded above by that growth order function towards some limit, which is often positive infinity (and there should be some lower bound to make this a useful statement). The function's growth over a long period of time will grow at about that rate. >For example, binary searching a sorted list is O(logn).
If it's sorted, yes. The best growth order for sorting is O(n*log_2(n)). If we consider just searching, we can say it's bounded by Θ(log_2(n)). If we consider the pre-sort operation, then it's bounded by Θ(n*log_2(n)). >O(n) is linear time, which means it takes a constant factor of time proportional to the size of the input size.
Again, bounded above (and should be below, with Theta) by that function. Generally, functions in that growth class will take some time proportional to their input size, yes. >For example, iterating over every element of an array 5 times is O(n).
Correct, but constant can matter. >O(nlogn) is linear times logarithmic time
Linearithmic time*. Wikipedia actually has a good table on the most common time orders. Though there are an infinite number of them; those are just the most commonly used.
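For the binary-search point above, a minimal sketch assuming the input list is already sorted:

```python
def binary_search(sorted_items, target):
    """Halves the search range each step: at most about log2(n) + 1 iterations."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1            # target can only be in the upper half
        else:
            hi = mid - 1            # target can only be in the lower half
    return -1                       # not found
```
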
Wrong.
2 weeks ago
Anonymous
Are you just working on homework and fishing for information?
2 weeks ago
Anonymous
This is computer science 101 and can be found in any cs book or wikipedia. There's nothing to fish for. It's merely to test IQfy's understanding of the basic concept. Evidently IQfy failed the test.
2 weeks ago
Anonymous
but you're not making any direct claims or counterarguments. You're just saying everyone is "wrong".
>Does IQfy know what O(n) means? >Hint: everything the redditor said is incorrect
Oh, so you want the formal definition of O(n)? What the redditor provided is actually pretty reasonable for most people's use cases, but as for the proper meaning...
O(n) is a set. Specifically, big O, big theta, and all those associated complexity classes are a way to describe a set. Formally speaking, O(g(n)) is the set of all functions f(n) such that for some positive constants c and N, for all n > N, f(n) > c * g(n). Or at least that's the definition I can remember from my undergrad, which I haven't had to touch upon in a fricking decade, so I might be missing something in that inequality definition.
In effect though, when a function f(n) is in the set O(g(n)), we say that g(n) is providing an upper bound on f(n) when we ignore the effects of constants. Thus, O(n) is the set of all functions that do not grow faster than the fastest growing linear function.
And I'm moronic and mixed up f(n) and g(n).
Big O is f(n) < c * g(n). Because g(n) is the upper bound.
Big Omega is f(n) > c * g(n). Because g(n) is the lower bound.
Big Theta is when something's in both Big O and Big Omega, IIRC.
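Those corrected inequalities (written with the usual non-strict ≤) can be spot-checked numerically with hand-picked witnesses c and N; the helper below is a finite sanity check, not a proof:

```python
def in_big_o(f, g, c, N, upto=10_000):
    """Checks f(n) <= c * g(n) for all sampled n with N < n < upto.

    A finite spot check of the Big O inequality for given witnesses c and N.
    """
    return all(f(n) <= c * g(n) for n in range(N + 1, upto))

linear_with_constants = lambda n: 5 * n + 100   # a linear function, constants and all
identity = lambda n: n

# 5n + 100 is in O(n): with c = 6, 5n + 100 <= 6n holds whenever n > 100.
assert in_big_o(linear_with_constants, identity, c=6, N=100)
# With c = 5 the inequality fails for every n, so the choice of c matters.
assert not in_big_o(linear_with_constants, identity, c=5, N=100)
```
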
>In effect though, when a function f(n) is in the set O(g(n)), we say that g(n) is providing an upper bound on f(n) when we ignore the effects of constants
This is closest to a correct explanation given ITT. Although it's important that the effects we're ignoring are multiplicative effects, not any other kind of effects. For example, x^2 is not O(x), even though x^1 and x^2 differ by an effect of a constant (the effect of it being exponentiation). >Thus, O(n) is the set of all functions that do not grow faster than the fastest growing linear function.
Wrong, there is no fastest growing linear function.
>Does IQfy know what O(n) means? >Hint: everything the redditor said is incorrect
Oh, so you want the formal definition of O(n)? What the redditor provided is actually pretty reasonable for most people's use cases, but as for the proper meaning...
O(n) is a set. Specifically, big O, big theta, and all those associated complexity classes are a way to describe a set. Formally speaking, O(g(n)) is the set of all functions f(n) such that for some positive constants c and N, for all n > N, f(n) > c * g(n). Or at least that's the definition I can remember from my undergrad, which I haven't had to touch upon in a fricking decade, so I might be missing something in that inequality definition.
In effect though, when a function f(n) is in the set O(g(n)), we say that g(n) is providing an upper bound on f(n) when we ignore the effects of constants. Thus, O(n) is the set of all functions that do not grow faster than the fastest growing linear function.
Actually the fact that there is no fastest growing linear function is exactly why the constant factors are needed in the definition.
It means that when the size of a problem tends towards infinity, the time/memory/whatever tends towards increasing linearly with the size of the problem
O(n) is a notation describing the asymptotic number of operations an algorithm takes (be it average, best case or worst case) to process an input of length n.
Asymptotic meaning that it's not meant to be an exact amount for a given input size, but a limit that is approached as the number of input elements increases up to infinity.
You clearly don't understand the question at all so I won't waste my time.
2 weeks ago
Anonymous
>wrong >wrong >wrong
why? >hehe I'm not gonna waste my time
Actually autistic.
I see, so you are not asking what does O(...) mean, you are asking specifically about O(n). My bad.
O(n) means that for a certain algorithm, the number of operations undertaken by that algorithm tends to the number of elements in the input as the number of elements processed grows.
The trap here is how "operation" and "element" are defined. Operation is a function whose domain is a finite set and its co-domain is also a finite set. Element is a member of a finite set. That's why an algorithm that runs in linear time runs in O(n) time for a certain definition of "operation" and "element". If the size of each operation and input element were defined instead of left open to interpretation, then we would have to add a multiplicative constant to that n, and O(2n) wouldn't be the same as O(n).
There is also the other detail of which input elements are you considering. If you are considering all possible input elements then the number you are giving is the average running time for all inputs of length n. If you are considering the worst case of length n then the number is the worst case running time for any given input length, and if you are considering the best case of length n then the number is the best case running time for any given input length.
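The worst/best case distinction can be made concrete with a linear search that counts its own comparisons (a sketch; the naming is mine):

```javascript
// Linear search that reports how many comparisons it made, to illustrate
// best case vs worst case for the same O(n) algorithm.
function linearSearch(arr, target) {
  let comparisons = 0;
  for (let i = 0; i < arr.length; i++) {
    comparisons++;
    if (arr[i] === target) return { index: i, comparisons };
  }
  return { index: -1, comparisons }; // not found: looked at all n elements
}

const xs = [4, 8, 15, 16, 23, 42];
linearSearch(xs, 4);  // best case: found immediately, 1 comparison
linearSearch(xs, 99); // worst case: absent, 6 comparisons (= n)
```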
>Wrong >Wrong >Wrong >Just because everyone else thinks Big O notation means what it means, my extra super special awesome definition is the REAL definition >No, I won't tell you what it is
it just means that the range of a function's upper-bound is determined by the number of elements.
in the OP, the function happens to be the number of operations as a function of input size, but we also commonly talk about space (memory) and we can apply this to other stuff if we want.
maybe we have a silly algorithm where we generate a subnet for each pair of nodes in a network and say it's O(n^2) addresses, or we open a routine for each element in a list and say we have O(n) threads. these would be unusual, but valid.
OP here. I am going to sleep. Will continue this thread tomorrow if it's still up. Anyone pretending to be OP after this post before 7 hours has passed is merely pretending to be me.
Real OP here. I am going to sleep in my hyperbaric chamber. It's not something you would understand as your brain only works at O(n) speed while mine calculates in O(nlogn) speed. Heh. Nothing personnel, kiddo.
>bait people by saying a redditor failed so surely IQfy must be able to get it right >call every single person regardless of their answer, wrong
It's honestly impressive how well this trolling strategy works
He's right except he forgot to mention that big O is asymptotic complexity, so an upper bound for infinite input. So each one in that list is contained in the next. Technically calling an O(1) operation O(2^n) is correct (but useless).
O(f(n)) is for a given algorithm, if there exists a real number k so that (number of operations in algorithm for input n) <= k * f(n)
function inc(arr) {
var i;
var z = arr.length; // 1
for (i = 0; i < z; i++) { // 1, n, n
arr[i] = arr[i] + 1; // 3
}
}
// for a given input of size n, (1 + 1 + n + n + 3n) ops
// total: 5n + 2
// so for f(n) = n, n > 0
// 5n + 2 <= kn
// if 2 <= (k-5)n
// if 2/n <= (k-5)
// so for n = 1, 7 <= k
// for n = 2, 6 <= k
// for n = 3, 5.666 <= k
// ... etc
// therefore there exists a real k (for example, 7) so that (ops done for input of size n) <= 7n
The omega is for the flipped inequality (>=, for a lower bound) and the slashed O (big Theta) is for both bounds at once.
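The k = 7 claim from that worked example is easy to machine-check over a range (not a proof for all n, just a smoke test):

```javascript
// Smoke test for the worked example: ops(n) = 5n + 2 and k = 7,
// i.e. 5n + 2 <= 7n should hold for every n >= 1.
function holdsUpTo(limit) {
  for (let n = 1; n <= limit; n++) {
    if (5 * n + 2 > 7 * n) return false; // would disprove k = 7
  }
  return true;
}
// Note n = 0 would fail (2 > 0), which is why the derivation restricts to n > 0.
holdsUpTo(100000); // true
```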
An infinitist category theory professor and transsexual was teaching a class on David Hilbert, known nonconstructivist.
"Before the class begins, you must get on your knees and worship Georg Cantor and accept that he was the most highly-rigorous being the world has ever known, even greater than Archimedes!"
At this moment, a brave, intuitionist, wildbergian euclidean geometer who had produced 1500 constructive proofs and understood the necessity of algorithmic thinking stood up and held up a 0.999... foot ball.
"How wide is this ball?"
The arrogant professor smirked quite Infinitistly and smugly replied "The equivalence class of the sequence (1,1,1...)"
"Wrong. Only three nines were written down. It there were infinity and real numbers, as you say, are real... then i would need infinite paper."
The professor was visibly shaken, and dropped his japanese chalk and copy of Rudin. He stormed out of the room crying those infinist crocodile tears. The same tears infinitists cry for the “non-measurable sets” (who today live in such luxury that they need not be constructed) when they jealously try to take up space in textbooks from the deserving theorems. There is no doubt that at this point our professor, Cardinal troonystein, wished he had pulled himself up by his bootstraps and become more than a sophist infinity schizo. He wished so much that he had a gun to shoot himself from embarrassment, but the bullet would take infinite steps to reach his head!
The students applauded and all studied Wittgenstein that day and accepted Kroenecker as their lord and savior. An eagle named “Induction” flew into the room and perched atop the number theory book and shed a tear on the chalk. Wildberger's videos were watched several times, and God himself showed up after descending a finite amount from heaven.
The professor lost his tenure and was fired the next day. He died of gay plague AIDS and was expelled from the paradise Wildberger had created for all eternity.
There are some things wrong in it, but it's mostly correct and gets the general idea across.
>O(1) is constant time
Is the constant time complexity class*, if you want to be picky. >which means it doesnt take longer as the input size increases
Technically true in theory, but in actuality, on real computers you will have different constant factors for different constant-time operations. Though this doesn't matter due to the nature and definition of Big-O, even if that factor were large. >For example, referencing an item in an array takes O(1) time
Bounded*, assuming you have the index location. >O(logn) is logarithmic time
True, but the notation is messed up, which confuses people: it should be O(log2(n)), i.e. log base 2. >which means as the input size increases it takes a logarithmically small amount more time
It means it's bounded above by that growth-order function towards some limit, which is often positive infinity (and there should be some lower bound to make this a useful statement). Over large inputs the function grows at roughly that rate. >For example, binary searching a sorted list is O(logn).
If it's sorted, yes. The best growth order for comparison-based sorting is O(n*log2(n)). If we consider just the search, it's bounded by Θ(log2(n)); if we include the pre-sort, it's bounded by Θ(n*log2(n)). >O(n) is linear time, which means it takes a constant factor of time proportional to the size of the input size.
Again, bounded above (and should be below, with Theta) by that function. Generally, functions in that growth class will take some time proportional to their input size, yes. >For example, iterating over every element of an array 5 times is O(n).
Correct, but constant can matter. >O(nlogn) is linear times logarithmic time
Linearithmic time*. Wikipedia actually has a good table on the most common time orders. Though, there are an infinite number of them, those are just the most commonly used.
>which is between linear and quadratic,
True. >and is a common goal for optimizing many algorithms.
True, but a bad explanation. The growth order of quadratic time is awful; it's generally the threshold where most people consider an algorithm bad, because squaring makes the operation count blow up very quickly at large input sizes. So O(n*log2(n)) is the last hurdle before, and noticeably better than, quadratic time. >O(n^2) is quadratic time, which means as the input size grows, you have to do square the number of things.
No, it means that the function grows no larger than that (up to a constant factor). Many algorithms in the O(n^2) class don't perform exactly n^2 operations. See n(n+1)/2, for example. >iterating over every element of an array, but each time you have to do the entire array again. An example is brute force sorting an array.
There is no single "brute force" method of sorting a set of numbers, though Bubble Sort and Selection Sort are the crudest sorting algorithms and often what people imagine, and even they have some cleverness to them. Additionally, only their worst cases perform in Θ(n^2); other variants of them are, for all intents and purposes, distinct functions. >O(n^3), O(n^4), etc... are like O(n^2) but each gets more cumbersome to work with. All of these are "polynomial" times.
Cubic time*. Afterwards, the n^k are just classified as "polynomial time", yes. >Then theres exponential times like O(2n), which grow very fast.
2^n. >Then weve got O(n!) and similar, which is super exponential time.
That's factorial time. >This means you have to do a factorial number of things for the input size.
Same as above: no, not technically.
Yes, the brute force algorithm for TSP is bounded by factorial time.
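The factorial blowup is visible just by counting the routes a brute-force TSP search would enumerate (a sketch: with the start city fixed, that's every permutation of the remaining cities):

```javascript
// Count the routes brute-force TSP visits: with the start city fixed, that's
// every permutation of the remaining n-1 cities, i.e. (n-1)! routes.
// Grows factorially, hence the factorial-time bound.
function routeCount(nCities) {
  let routes = 1;
  for (let k = 2; k < nCities; k++) routes *= k; // computes (nCities - 1)!
  return routes;
}

routeCount(5);  // 24 routes
routeCount(11); // 3628800 routes -- already millions for 11 cities
```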
Bookmarking this thread on an archive so I have a point of reference for the next time someone asks why techcels got their asses beaten in high school.
You said, as I quoted, "the upper bound", indicating you had a specific upper bound in mind, when there can be many. I asked you which upper bound you had in mind and you started seething for some reason.
>Anon, what's a "growth rate"? A-and is that a d-derivative or whatever it's called? >I thought functions had multiple upper bounds??!
You're not fooling anyone, 3rd year. Your degree will be worthless btw.
What then did you mean by the growth rate of a function if not its derivative?
>indicating you had a specific upper bound in mind,
Yes, you illiterate. Specifically, the one that you are dealing with, which is an upper bound.
It should've been function value instead of growth rate. Growth rate generally means derivative of the function. But you're correct if you meant the function value.
Is 2x in O(x)?
I'll say it again but more formally this time
O(f(x)) describes the set of all g(x) such that the value of g(x) <= c.f(x) for some n >= n0 and C > 0
On your example
2x <= c.x for all c >= 2
When someone uses = with O(f(x)) it implies the element-of notation
> I'll say it again but more formally this time
I don't care about formality, all that matters is being correct. It's a common misconception that being formal means being more correct. You can be correct while being informal and wrong while being formal, as this thread has demonstrated multiple times.
Your explanation in this post is correct, although needlessly wordy and strictly formally nonsense (you try to be formal yet cannot get the variables to agree).
It also disagrees with the previous less formal explanation that you gave.
Care to explain how it disagrees with my post before
Sure. >The O(f(x)) notation just describes the set of functions that are upper bounded by the function f(x)
Disagrees with the second explanation that you gave, because being upper bounded by f(x) means being not more than f(x). Your first explanation made no mention of the multiplicative scalar factor invariance that makes the definition useful.
According to your first explanation, 2x is not in O(x), but according to the second explanation, 2x is in O(x).
Hmm you're right it was an oversight from me with the first post.
Most responses ITT are mostly correct, even your literal reddit screencap is fine. Just some technicalities that could be more accurate, which you personally haven't even pointed out.
They are dead wrong though. Clearly you haven't actually studied the concept. Perhaps you heard about it on reddit?
you literally went to reddit and screencapped a thread from there
To point out how wrong it was. I assure you my visit to that place was brief.
Oh come on, really? You just happened to go there for that one thing, which you already knew was there.. somehow?
I wasn't born yesterday, bro.
If you want the full story, here it goes. I saw someone on X say that every thread on reddit computer science subreddit is full of indian hate. Went to the computer science subreddit to briefly check for myself. Didn't find indian hate, found this thread instead. Posted it here. Never went there since.
>admits he uses X >admits he uses reddit >thinks by picking at technicalities rightfully lost in application, in a casual answer that is intended to be dumbed down, that he knows what he's talking about (In one of the easiest fields of math)
this just keeps getting better and better. Did you even graduate college, OP?
If someone said 2+2=5, would you autistically go into the technicalities of addition to explain why she is technically wrong, or simply accept that it's close enough and move on?
>If someone said 2+2=5
Woah, a classic strawman. Point me to where in the thread people said 2+2=5 or anything of that severity.
Examples are given here:
>constant time, which means it doesn't take longer as the input size increases
Wrong. >logarithmic time, which means as input size increases it takes a logarithmically small amount more time
Wrong. > linear time, which means it takes a constant factor of time proportional to the size of input size
Wrong.
Should I keep going or did you get the idea?
>They are dead wrong though.
Refute this one:
It establishes that the upper bound of the growth rate of some function of n is linear.
Also the predditor is wrong because if a function is in O(n) it doesn't necessarily mean that it grows linearly wrt n. A constant function is also in O(n)
The O(f(x)) notation just describes the set of functions that are upper bounded by the function f(x)
The reason he's wrong is the same reason I said "growth rate". Refute
It establishes that the upper bound of the growth rate of some function of n is linear.
. You literally can't. You lost.
Big O notation doesn't talk about the growth rate of a function. Growth rate is irrelevant. What you said is wrong.
>Big O notation doesn't talk about the growth rate of a function.
Wrong.
It should've been function value instead of growth rate. Growth rate generally means derivative of the function. But you're correct if you meant the function value.
[...]
I'll say it again but more formally this time
O(f(x)) describes the set of all g(x) such that the value of g(x) <= c.f(x) for some n >= n0 and C > 0
On your example
2x <= c.x for all c >= 2
When some one uses = with O(f(x)) it implies the Element of notation
>It should've been function value instead of growth rate
Wrong.
O(n) is just a set of functions. >IQfy gijitsu
baka my head SENPAI tbh
Plus, even if something has running time complexity in O(n), it doesn't necessarily mean that it would be faster than an O(n^2) algorithm for n smaller than a particular value. You need to be extremely specific when it comes to asymptotic notations
I write scientific code and no one cares or knows about this.
If the hardware can't do maths fast enough, we go for the next higher spec microcontroller
"For sufficiently large n" is redundant. If we're considering f(x) in O(g(x)), and given N such that for all n>N, f(n) <= Cg(n), we can take C' = max( f(0)/g(0), f(1)/g(1),..., f(N)/g(N), C ), disregarding the terms where g(n) is zero. Then for ALL n, f(n) <= C'g(n). Hence the N/sufficiently large n condition can be dropped from the definition.
Note that this only works because the domain is the natural numbers, so finite sets are cofinal in the order.
If the domain were real numbers, the "x big enough" condition would be necessary, as removing it would present a different much less useful concept.
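The C' construction, played out on toy functions I picked (g is chosen to never be zero, as required):

```javascript
// Dropping "for sufficiently large n" over the naturals: start from
// f(n) <= C*g(n) for n > N, take C' = max(f(0)/g(0), ..., f(N)/g(N), C)
// (g is never zero here), and then f(n) <= C'*g(n) holds for ALL n.
// f, g, C, N below are my own toy choices, not from the thread.
const f = n => n + 100;
const g = n => n + 1;
const C = 2, N = 98; // f(n) <= 2*g(n) holds for n > N (in fact from n = 98 on)

let Cprime = C;
for (let n = 0; n <= N; n++) Cprime = Math.max(Cprime, f(n) / g(n));
// Cprime is 100, driven by n = 0 where f(0)/g(0) = 100/1.

// Now the bound needs no N at all:
const allGood = Array.from({ length: 1000 }, (_, n) => f(n) <= Cprime * g(n))
  .every(Boolean); // true
```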
Wrong. Go back to school.
How am I wrong?
I'll let you think about it.
I already thought about it and would prefer you to give me an explanation.
Spoilers are not fun.
Wrong.
I explained this before, pay attention
I let people think about why they're wrong. If people ask me to explain I always do. This is a cultured way of interacting. Spoilers are not fun.
Redundant and still wrong.
Very much wrong.
The whole point of spoilers is that the interlocutor doesn't want to hear them because they want to experience the thing on their own. When the interlocutor wants to hear it and explicitly asks to hear it, it's no longer a spoiler. You're just being an butthole/ socially unaware.
Spoiler: you're wrong.
Categorically wrong.
Ah yes that makes sense (unless you consider 0 to be natural and g(0) = 0 while f(0) = 1, which is bigger than C'g(0) = 0)
Ah yes, the cases where g(n) is zero and f(n) is not cause trouble. For us to discard the assumption of n large enough, we have to assume that g(n) is never zero.
My guy C is not supposed to depend on n. You are wrong.
He is saying C depends on N not n, N being the number such that n>N satisfies the thing
Ah, I see. Then it's correct. I wouldn't want to use a different definition of O(n) when dealing with natural and when dealing with reals though.
Yes, members of the `/g/` imageboard (known as "Technology") are generally knowledgeable about computer science concepts, including big O notation (O(n)). Big O notation is a mathematical way to describe the performance or complexity of an algorithm, particularly its growth rate when the input size increases. It provides an upper bound on the time or space requirements of an algorithm in terms of the input size. For example, O(n) represents an algorithm whose running time grows linearly with the input size.
The redditor's statements were incorrect because they claimed that `/g/` does not understand what O(n) means.
What the frick is going in this thread?
Anyways, for f,g: R -> R, g =/= 0 for x sufficiently large, then f is O(g) iff lim x->infty (f/g)(x) < M for some constant M. >inb4 wrong
picrel and then we'll talk
I dont think so its just a weird formulation
You could test some examples maybe that would convince you
E.g if f(n) = 10000n + 1000000 then that limit is 10000 which is less than infinity, so f is in O(n)
E.g if f(n) = n^2 the limit of n^2/n is not less than infinity, so f is not in O(n)
Do you have an example f where that definition breaks down?
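Those two examples, eyeballed numerically (this ratio check is a heuristic for the limit formulation above, not the definition itself):

```javascript
// Eyeball the limit-of-ratio formulation: compute f(n)/n for large n.
const ratio = (fn, n) => fn(n) / n;

const f1 = n => 10000 * n + 1000000; // ratio tends to 10000 => in O(n)
const f2 = n => n * n;               // ratio grows without bound => not in O(n)

ratio(f1, 1e6); // 10001, settling toward 10000
ratio(f2, 1e6); // 1000000, and still climbing as n grows
```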
f in O(n) implies lim n->inf of f(n)/n < infinity
different formulation similar bloat though
However, if you take your definition and replace lim with lim sup, which always exists, the definition becomes correct.
https://en.wikipedia.org/wiki/Limit_inferior_and_limit_superior
No. O(x) is not as useful when generalized to limsup, since now you have to special case the definition back to regular lim anyways when using it for some derivative and integral theorems.
t. knower
Wrong. Lim sup of a sequence of real numbers always exists. The ordinary limit exists very rarely.
Not what I said illiterate moron
When lim exists, it equals both lim sup and lim inf. There are integral theorems like Fatou lemma where lim sup is used, and it can be applied to sequences of functions where regular limit doesn't exist. > since now you have to special case the definition back to regular lim anyways
This sounds like nonsense. What do you mean? Of course if the limit exists you can compute lim sup using the limit but often it doesn't.
If you use the limsup definition, then you have to special case f = O(x) as "also it's regular lim", in cases where you require the existence of limit of the O(x).
Primary place where this notation is used is for derivatives, which require a limit to exist.
I don't do analysis, but when I did my real analysis classes, O(x) was explicitly the limit, not limsup, for that reason.
>If you use the limsup definition, then you have to special case f = O(x) as "also it's regular lim", in cases where you require the existence of limit of the O(x).
Example? > Primary place where this notation is used is for derivatives, which require a limit to exist.
It's used all the time in algorithm analysis, combinatorics and analytic number theory, in places that have nothing to do with derivatives.
Do you have in mind the usage of big O in the taylor expansion of a function?
Wrong, you don't know what a lim sup is or does.
Wrong. Lim sup n-> infty of f(n) is defined as the infimum (greatest lower bound) of the sequence g(n) = sup(f(k) | k > n) in the extended real numbers [-infty, infty]. It is often useful to describe the asymptotic behavior of functions which are not well behaved or do not have a limit. For example, the twin prime conjecture can be stated as asserting that lim inf f(n+1) - f(n) = 2, where f(n) is the n'th prime number. lim f(n+1) - f(n) evidently doesn't exist.
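For a bounded sequence you can watch the lim sup emerge by taking suprema of truncated tails (a numeric illustration only, and the sequence is my own pick):

```javascript
// Approximate lim sup of a_n = (-1)^n + 1/(n+1): the ordinary limit does not
// exist (terms hop near -1 and +1 forever), but the lim sup is 1.
const a = n => ((n % 2 === 0) ? 1 : -1) + 1 / (n + 1);

// sup of the tail {a_k : k >= n}, truncated at some large cutoff index
function tailSup(n, cutoff) {
  let s = -Infinity;
  for (let k = n; k <= cutoff; k++) s = Math.max(s, a(k));
  return s;
}

// The tail sups decrease toward the lim sup, 1:
tailSup(10, 100000);   // 1 + 1/11
tailSup(1000, 100000); // 1 + 1/1001, much closer to 1
```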
Wrong, flat out wrong. You don't even know what you're talking about. Go back to school, if you ever went to one.
This thread is an insane asylum where every post is one of the following: > Define O(n) > Someone who gives a definition > "Haha you are wrong"(proceeds to give 0 evidence to how the definitions are wrong) >General trollage
IQfy used to be funny
I'm not autistic enough to care, but he could have mentioned that it describes the "order of" the growth in time complexity, and specifically an upper bound to said growth. Those are the defining features it's occasionally worth getting autistic over. It also means that you should be very careful when interpreting the meaning of O(k) or O(k*f(n)) where k is a constant.
Not a programmer but I'm happy to bite. >O(1) means it doesnt take longer as the input size increases
This doesn't seem right to me, the processing time can increase, it just has to have a constant asymptote. >logarithmically small amount more time
This is vague but probably in the right direction. >it takes a constant factor of time proportional to the input size
Asymptotically, yes. >between linear and quadratic, and is a common goal for optimizing
Seems good. >square the number of things
Vague as well.
His examples seem fine, insofar as they have the correct asymptotic order.
unlike me, code monkeys on IQfy have never actually studied growth of functions, which is why they will object to the truth of the following true statement:
merge sort is O(n log n)
merge sort is O(n^2)
merge sort is O(2^n)
merge sort is O(n!)
merge sort is O(n!^n!)
the proof is trivial and thus is left as an exercise for the reader.
It means whatever I say it means. Otherwise people usually mean that the number of computation steps does not increase faster than some linear function of the input size.
looks like a man on four legs
or maybe a guy whos ready to get fricked in the ass
that's pretty gay, i'd say that it looks like a dog or any four legged animal of your choice
It's the opposite of off.
You mean O(ff).
he is right tho, unless youre talking about bounds autism
I'm talking about the meaning of O(n).
its not exhaustive but the kinds of operations he described is O(n)
now youre going to say that O(log n) is a subset of O(n) which is bounds autism I was describing
It's linear time, you make n iterations over an array of n size.
That's wrong.
bounds are irrelevant
no loops
In computer science, "O(n)" refers to Big O notation, which is used to describe the efficiency or complexity of an algorithm. Specifically, "O(n)" represents linear time complexity, where the execution time of the algorithm increases linearly with the size of the input data set. The "n" represents the size of the input.
When we say an algorithm is "O(n)," it means that the time (or sometimes the space) it takes to complete the algorithm is directly proportional to "n". As "n" (the number of elements being processed) increases, the resources required by the algorithm increase linearly.
For example, consider an algorithm that checks whether a certain item is present in a list by looking through each item one at a time. In the worst case, where the item is not present or is at the end of the list, the algorithm would need to check every item once. Therefore, if the list contains "n" items, the algorithm will make "n" checks, making this algorithm O(n) in terms of time complexity.
>Big O
giving Dorothy my big O
>LLM spacing
He said "iterating over every element 5 times" instead of "iterating over every element"?
Constants are dropped, so both are O(n).
He's technically right, he just said it in a gay and annoying way.
He is absolutely wrong, both technically and non-technically. Seems like you don't know what O(n) means either.
You are the worst type of autistic.
Why are you calling me autistic?
As far as I know, O(n) notation simply takes the fastest growing term in the complexity equation. E.g. if you had n^2 it's obviously O(n^2), but 50n^2+10000000 is also O(n^2), completely disregarding the massive constant term and the multiplier. Which means big O notation is not telling you how complex it is, but how quickly it grows in complexity, meaning an O(1) algorithm could be slower than an O(n!) one (the latter is just guaranteed to GROW faster).
Am I correct? t. self-taught tard who learned programming by hitting the keyboard in an IDE until a problem showed up then googling how to fix it.
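The "O(1) can be slower than O(n!)" point checks out with a quick count (the constant cost here is made up purely for illustration):

```javascript
// A "constant time" algorithm doing 10,000,000 ops every call vs an O(n!)
// algorithm doing n! ops: the factorial one is cheaper until n! catches up.
const CONSTANT_COST = 10000000;
const factorial = n => (n <= 1 ? 1 : n * factorial(n - 1));

// Find the first n where the factorial algorithm becomes more expensive.
let crossover = 1;
while (factorial(crossover) <= CONSTANT_COST) crossover++;
// crossover is 11: 10! = 3,628,800 < 1e7, but 11! = 39,916,800 > 1e7.
```

So for inputs up to n = 10 the "slower-growing" O(1) algorithm actually loses; growth order only wins eventually.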
Iterating over every element of an array is O(n) idk why is he saying 5 times lol.
because 5n is still classed as O(n)
Are you asking for the formal math definition? Because if not then his summary is not too far off, it's only ever mentioned as a rough indicator of how time complexity grows with the input nowadays, so you can stop acting high and mighty and go back now.
>Are you asking for the formal math definition?
What the frick does that mean? As opposed to informal philosophical definition? No, I'm asking for an actual definition/explanation of what it means, whether formal or informal.
>Because if not then his summary is not too far off
His summary is 100% wrong on almost every point.
> now youre going to say that O(log n) is a subset of O(n)
It is.
> which is bounds autism I was describing
What the frick does autism have to do with anything here?
>What the frick does autism have to do with anything here?
Read your posts itt. You are severely autistic and thus in dire need of professional help.
my program turned to be O(6870n^3)
I blame react
OP is just gayging about how O(n) means worse case time while redditgay is using it as average time.
Obviously OP should stop posting and go back to sucking wieners. saged.
>O(n) means worse case time
Wrong.
> while redditgay is using it as average time.
Also wrong.
>Wrong.
replying to a statement with "Wrong" and nothing else is the most reddit shit this 4cheddit place ever came up with. 100% guaranteed you were making a smugsoyjak face while posting this reddit shit
I let people think about why they're wrong. If people ask me to explain I always do. This is a cultured way of interacting. Spoilers are not fun.
Wrong.
You will never be a teacher, you will never be respected, you will never make the world smarter, you will never be a woman.
Replying "wrong" does not achieve what you think it does, because it leaves nothing of value to dispel our belief that you're a bad-faith homosexualposter. Such is life on fore chance.
I thought this thread was fun and didnt interpret his "wrong" as smug or believe it was bad faith
Could have been though
When people ask I always provide detailed explanations ITT. You're just being bad faith right now.
Great, what's the answer you're after?
To the question of what O(n) means? I showed which answers were correct here
Thanks!
Seems quite anal. Are you a math gay perchance? The interesting idea of big-O is understood by all good devs. It's irrelevant if they can give you a satisfactory definition or not.
>Are you a math gay perchance?
All autists think they are, in reality he was just being a pedant. Or baiting.
Does IQfy know why OP is a gay?
Hint: it's unrelated to all the wieners he sucks
Hard mode: no replying to this post
>computer science board
>nobody knows what O(n) means
O(n) is just the upper bound. O(1) is in O(n). O(logn) is in O(n). On that note, nothing "is" O(n). That's not how "is" works.
>O(n) is just the upper bound
Wdym?
> On that note, nothing "is" O(n). That's not how "is" works.
Wrong.
> Oh, so you want the formal definition of O(n)?
Never asked for a formal definition, nor do I want one. Formal definitions are a pain in the ass to read. All I'm asking is a correct explanation of how it works and what it means.
> O(n) is a set. Specifically, big O, big theta, and all those associated complexity classes are a way to describe a set. Formally speaking, O(g(n)) is the set of all functions f(n) such that for some positive constants c and N, for all n > N, f(n) > c * g(n). Or at least that's the definition I can remember from my undergrad, which I haven't had to touch upon in a fricking decade, so I might be missing something in that inequality definition.
Your explanation is very wordy and also very incorrect. The fact that you cannot give an explanation in plain english, nor a correct formal explanation of what it means suggests you don't understand it.
By "just the upper bound" I suggest that it is not Theta(n), contrary to how it's informally used. Theta(n) is a tighter bound for when a function is in both Omega(n) and O(n). It is an upper asymptotic bound. It is not strictly "worst case".
O(n) is a set, so we use set notation like "in" to refer to its subsets. You would not say [0, 1, 2] is [0, 1, 2, 3] or [] is [0, 1, 2, 3].
Everything you said is surprisingly correct. Although it's not necessary to use the set membership notation to indicate the complexity class of a function.
Since sets are just properties that objects have, it's correct to say x^2 *is* O(x^2). In fact, one can go even further and freely substitute terms in expressions and equations with O(f(x)). For example, expressions such as f(x) = f(0) + O(x) are commonly used in mathematical textbooks and research, and are unambiguous. A formal decoding of it would say that f(x) - f(0) is O(x). One sometimes even sees multiple big O terms in an equation. Pic related is an example.
Incorrect, the groups are defined as exclusionary.
Schizophrenic moron
Wrong. I'm not schizophrenic. I do not believe in any conspiracy theories and I in fact hate schizophrenics, since they make no sense, constantly use illogical leaps in their arguments and have an undeserved sense of superiority over others.
>since they make no sense, constantly use illogical leaps in their arguments and have an undeserved sense of superiority over others.
You just described yourself
You're exactly what I thought you were. Frick off and die.
If it's bait it's hilarious but I've known too many delusionally confident morons to believe that it is.
> >since they make no sense, constantly use illogical leaps in their arguments and have an undeserved sense of superiority over others.
>You just described yourself
How so? I am able to justify my claims. Nowhere did I claim superiority over others. I recognize there are likely many people smarter than me on this board, including possibly in this thread.
Ask me to justify any point I made. I will do so.
OK you're trolling, bravo
Wrong. I'm just pointing out that nobody else on this board seems to understand such a fundamental concept. You have proven my case.
OP here. This person is pretending to be me.
Nobody asked me to provide my explanation. Would you like to see one?
If so, the explanation given in the CLRS book I posted here is a good start.
Here
I agreed with an explanation another anon gave. Seems like you haven't actually read the thread before replying.
Wrong. OP here. This person is pretending to be me, you're one of those schizos I hate. With your undeserved sense of superiority you're too blind to see.
If it wasn't obvious already this person is trolling. He is not me.
I see you pseuds are resorting to trolling, pathetic. I should have known my ideas were too advanced for this board.
I don't understand what you are referring to as "my own medicine". I have not yet fully explained the position you're refuting.
>my ideas
OMFG LOL
OP here. The person you're replying is not me. None of the ideas I've expressed are mine. They have been known by mathematicians for well over a century.
>I don't believe in conspiracy theories. Whatever the hegemony says is the unquestioned truth!!!!
You're not justifying shit, you're just spamming "wrong".
Wrong. I already agreed with one anon who gave a correct (though wordy) explanation of what big O notation means. See
>very wordy
You didn't say I had to be concise. I don't know your level of understanding of the topic, so I might as well elaborate.
>and also very incorrect
So I just broke your hard mode rules, and aside from my accidentally mixing up < and >, I do appear to have the correct definition. Unless you believe it is something else?
>Although it's important that the effects we're ignoring are multiplicative effects, not any other kind of effects
Constant coefficient. My bad.
>there is no fastest growing linear function
Yeah. Could have phrased that better. "Faster than any linear function" would be more apt, I think?
> I don't know your level of understanding of the topic
How does my making a thread, and correctly noting that many other people misunderstand what big O notation means, indicate to you that I do not know what big O notation means? Unless you thought the redditor was correct?
Yep, the redditor was correct and you are wrong. You misunderstand the definition.
What definition?
I see you don't actually have any idea what you're talking about, very cool.
Here, let me spoonfeed you: How about modern big-O notation as adapted from Knuth's work and available in any intro to cs book published in the last 40 years?
How about page 50 from the 4th edition of CLRS? The explanation there agrees with mine and is in direct conflict with what the redditor said. If the redditor gave that explanation in a computer science class, he should get 0 points since he is plain wrong.
>You are severely autistic and thus in dire need of professional help.
What makes you say that?
>OK you're trolling, bravo
What makes you say that?
ctrl+f your own textbook and type Knuth
I did that and got 72 results. Why did you ask me to do that?
You'll find it, I believe in you
I'm not going to spend time going through the results because I don't know what I'm supposed to be looking for.
lmao, guess your own medicine tastes a bit bitter huh? Keep digging, you'll find it eventually
> lmao, guess your own medicine tastes a bit bitter huh?
What are you talking about? What medicine?
I don't think it's arguing about convention. I've never seen a definition of O(n) that would make what the redditor said correct.
Even if you incorrectly use O(n) to mean Theta(n), what the redditor wrote is still wrong on almost every point.
Do you disagree? Have you seen a different definition of O(n) which makes it correct?
>Even if you incorrectly use O(n) to mean Theta(n), what the redditor wrote is still wrong on almost every point.
What else is wrong apart from that?
>constant time, which means it doesn't take longer as the input size increases
Wrong.
>logarithmic time, which means as input size increases it takes a logarithmically small amount more time
Wrong.
> linear time, which means it takes a constant factor of time proportional to the size of input size
Wrong.
Should I keep going or did you get the idea?
>constant time, which means it doesn't take longer as the input size increases
>Wrong.
Then what does O(1) mean to you?
The runtime is bounded by a constant as the input size increases.
No, we were assuming every time he says O he means theta.
We were not, and this is wrong even if it did mean Theta(1). Theta(1) simply means bounded above by a constant and eventually bounded below by a positive constant.
The definition provided in your textbook does not in any way contradict the definition I supply in my post here:
With the exception of course that > needed to be replaced with <.
I didn't say they were in contradiction. You gave a correct explanation (the second, and the first one after the caveat you note).
You said the definition I provided was "very wordy and also very incorrect".
I said so about the first definition and wrote the reply before reading the second explanation you gave, which was correct, as I noted here
Your first definition was incorrect as you noted yourself multiple times in this thread. It's also needlessly wordy/formal.
How else could I describe such a feeble attempt to match my own understanding?
Wrong. OP here, this discussion has been derailed by stupidity. I don't blame you, we can't all be 110 IQ. Come back when you're serious about learning. In the meantime I'll be meditating on P = NP
Arguing about convention is about as productive as arguing about whether 0 is a natural.
Different authors are going to have different definitions of it, the essence is the same but the exact details of the definition depend on the context, get over it.
Whether it's defined as an average or as an upper bound is an arbitrary convention, not a property of the world which is what computer science is actually about. Science studies the natural world, not man made definitions. Those are merely auxiliary.
0 isn't a natural number
if it was natural everyone would have had it
they don't because it's not natural
stupid fricking moron
>rates of change
you're a fricking Black person. Where does it say this is a definition? That text just builds intuition; if you didn't stop reading to bait on IQfy you would find a proper definition later in the text
OP is being a homosexual. The only potential mistake was the "iterate 5 times" comment.
https://en.wikipedia.org/wiki/Big_O_notation#Orders_of_common_functions
See above.
WRONG WRONG WRONG you homosexual Black person
shut the frick up
>REEEEEEEEEEEEE
For OP, Big O notation is how he measures his dilation success or failure. O(1) means he can only take one dick in a night. O(n) means he can only take one dick at a time. O(n^2) means he can take two dicks at once, O(n^3) three dicks, etc. His goal is to take O(n!) dicks simultaneously, all of them traveling salesmen. However, some trans surgeons say this is an impossible goal.
>The only potential mistake was the "iterate 5 times" comment.
Not really, O(5n) = O(n), which is a known shortcoming of O notation.
>which is a known shortcoming of O notation.
That's by design you fricking incompetent Black person. If you wanted more specificity then just count the number of operations and leave all the constants and lesser terms in like we all did in school. You DO have a cs degree right?
>um ACKshyually, electron is le good because O(6502398528n) = O(n)
You’re an actual fricking moron if you think this is what time complexity is applicable to lmao
Time complexity is used to describe how algorithms scale with input size. It’s not used to describe how gay a particular webshit is.
The fact that you think it's a shortcoming of O shows you don't understand what O notation means.
One could think it's a shortcoming and know the actual definition of it. What this would demonstrate, however, is that one lacks an understanding of why Big O exists in the first place.
The redditor is not providing the formal definition of Big O, but is providing a definition that is useful in the context of how Big O is typically used. For all I know, you could just be some undergrad who is struggling with his studies and cannot use the redditor's definition for an assignment that requires you to consider the formal definition. However, given that you are describing actual correct definitions as wrong without describing how they are wrong, it appears you don't actually understand anything about the topic, and are just trolling.
its not a shortcoming its description of asymptotic behavior you stupid Black person
I havent even studied compsci or algorithms formally and even I know this
It’s mostly wrong.
It starts off all wrong with
>O(1) is constant time, which means it doesn’t take longer as the input size increases
It can take longer locally for quite some time. For example, take f(x) = { x^2 if x <= 1,000,000; 1,000,000^2 otherwise }. f(x) is O(1), but it grows quadratically at first.
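A quick sketch of that piecewise example (the 1,000,000 cutoff is scaled down to 100 here just so it's cheap to check; the names are mine):

```javascript
// f grows quadratically up to a cutoff, then stays flat.
// Scaled-down version of the example above: cutoff 100 instead of 1,000,000.
const CUTOFF = 100;
function f(x) {
  return x <= CUTOFF ? x * x : CUTOFF * CUTOFF;
}

// f is O(1): for all x >= 0, f(x) <= c with c = CUTOFF^2 = 10000.
// Yet it does "take longer as the input grows" for every x below the cutoff.
console.log(f(10));  // 100
console.log(f(100)); // 10000
console.log(f(1e9)); // still 10000 -- bounded by a constant
```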
The phrase "at the limit" is assumed when talking about big O.
But the post OP is roasting appears intended for someone who knows nothing about Big O. So the context you and I have that “at the limit” is assumed is missing.
To be honest it's also missing some edge cases, like galactic algorithms, which grow slowly but have absurdly high constants that make them infeasible (for example, an algorithm that is O(1) but takes the age of the universe to complete no matter the input size). But it gets the idea across well enough for a layman, especially since computer algorithms are generally straightforward in that respect.
Wrong.
Infinite inputs are not considered in computer science.
Yes.
Completely wrong.
Wrong.
Are you just working on homework and fishing for information?
This is computer science 101 and can be found in any cs book or Wikipedia. There's nothing to fish for. It's merely to test IQfy's understanding of the basic concept. Evidently IQfy failed the test.
but you're not making any direct claims or counterarguments. You're just saying everyone is "wrong".
I did to people who asked me to. Read the thread.
>Does IQfy know what O(n) means?
>Hint: everything the redditor said is incorrect
Oh, so you want the formal definition of O(n)? What the redditor provided is actually pretty reasonable for most people's use cases, but as for the proper meaning...
O(n) is a set. Specifically, big O, big theta, and all those associated complexity classes are a way to describe a set. Formally speaking, O(g(n)) is the set of all functions f(n) such that for some positive constants c and N, for all n > N, f(n) > c * g(n). Or at least that's the definition I can remember from my undergrad, which I haven't had to touch upon in a fricking decade, so I might be missing something in that inequality definition.
In effect though, when a function f(n) is in the set O(g(n)), we say that g(n) is providing an upper bound on f(n) when we ignore the effects of constants. Thus, O(n) is the set of all functions that do not grow faster than the fastest growing linear function.
And I'm moronic and mixed up f(n) and g(n).
Big O is f(n) < c * g(n). Because g(n) is the upper bound.
Big Omega is f(n) > c * g(n). Because g(n) is the lower bound.
Big Theta is when something's in both Big O and Big Omega, IIRC.
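For what it's worth, those inequalities can be sanity-checked numerically. A sketch for f(n) = 2n + 3 against g(n) = n; the witnesses c = 3, N = 3 are my own picks, not canonical constants, and a finite scan only illustrates membership, it doesn't prove it:

```javascript
// f is in O(g) with witnesses c, N if f(n) <= c * g(n) for all n > N.
const f = n => 2 * n + 3;
const g = n => n;

// Scan a finite range for violations of the big-O inequality.
function checkBigO(f, g, c, N, limit = 10000) {
  for (let n = N + 1; n <= limit; n++) {
    if (f(n) > c * g(n)) return false;
  }
  return true;
}

console.log(checkBigO(f, g, 3, 3));          // true: 2n + 3 <= 3n once n >= 3
console.log(checkBigO(n => n * n, g, 3, 3)); // false: n^2 outgrows 3n immediately
```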
>In effect though, when a function f(n) is in the set O(g(n)), we say that g(n) is providing an upper bound on f(n) when we ignore the effects of constants
This is closest to a correct explanation given ITT. Although it's important that the effects we're ignoring are multiplicative effects, not any other kind of effects. For example, x^2 is not O(x), even though x^1 and x^2 differ by an effect of a constant (the effect of it being exponentiation).
>Thus, O(n) is the set of all functions that do not grow faster than the fastest growing linear function.
Wrong, there is no fastest growing linear function.
Lol, wrong
I'm not wrong, moron. I'm literally the only correct person here. You are clearly jealous.
Actually the fact that there is no fastest growing linear function is exactly why the constant factors are needed in the definition.
didn't you Black folk go to college and calculate the complexity by hand in an algorithms course?
It means that when the size of a problem tends towards infinity, the time/memory/whatever tends towards increasing linearly with the size of the problem
O(n) is a notation describing the asymptotic number of operations an algorithm takes (be it average, best case or worst case) to process an input of length n.
Asymptotic meaning that it's not meant to be an exact amount for a given input size, but a limit that is approached as the number of input elements increases up to infinity.
Wrong.
Why?
You clearly don't understand the question at all so I won't waste my time.
>wrong
>wrong
>wrong
why?
>hehe I'm not gonna waste my time
Actually autistic.
I see, so you are not asking what does O(...) mean, you are asking specifically about O(n). My bad.
O(n) means that for a certain algorithm, the number of operations undertaken by that algorithm tends to the number of elements in the input as the number of elements processed grows.
The trap here is how "operation" and "element" are defined. Operation is a function whose domain is a finite set and it's co-domain is also a finite set. Element is a member of a finite set. That's why an algorithm that runs in linear time runs in O(n) time for a certain definition of "operation" and "element". If the size of each operation and input element were defined instead of left open to interpretation, then we would have to add a multiplicative constant to that n, and O(2n) wouldn't be the same as O(n).
There is also the other detail of which input elements are you considering. If you are considering all possible input elements then the number you are giving is the average running time for all inputs of length n. If you are considering the worst case of length n then the number is the worst case running time for any given input length, and if you are considering the best case of length n then the number is the best case running time for any given input length.
It's a safe bet OP is unemployed
You would lose that bet.
Big O is a fundamental concept in computer science. The fact that almost nobody ITT understands it speaks volumes.
ITT: thinly veiled IQfy bait
hes right though, go back to school, if you even went in the first place
How do I gitgud on O(n) challenges?
>Wrong
>Wrong
>Wrong
>Just because everyone else thinks Big O notation means what it means, my extra super special awesome definition is the REAL definition
>No, I won't tell you what it is
it just means that the range of a function's upper-bound is determined by the number of elements.
in the OP, the function happens to be the number of operations as a function of input size, but we also commonly talk about space (memory) and we can apply this to other stuff if we want.
maybe we have a silly algorithm where we generate a subnet for each pair of nodes in a network and say it's O(n^2) addresses, or we open a routine for each element in a list and say we have O(n) threads. these would be unusual, but valid.
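Counting the pairs in that subnet example works out like this (pairCount is a throwaway name for this sketch, not anything standard):

```javascript
// "One subnet per pair of nodes": n nodes yield n * (n - 1) / 2 unordered
// pairs, which is in O(n^2).
function pairCount(n) {
  let count = 0;
  for (let i = 0; i < n; i++) {
    for (let j = i + 1; j < n; j++) count++; // one subnet per unordered pair
  }
  return count;
}

console.log(pairCount(10)); // 45 = 10 * 9 / 2
// pairCount(n) <= n^2 for all n, so c = 1 witnesses the O(n^2) claim.
```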
Who the frick invited IQfy here? Frick off with your pedantic jargon, no one cares about your shit
This is incredibly basic computer science. Literally computer science 101.
nobody gives a shit about computer science, we're only interested in computer technology here
OP here. I am going to sleep. Will continue this thread tomorrow if it's still up. Anyone pretending to be OP after this post before 7 hours has passed is merely pretending to be me.
Real OP here. I am going to sleep in my hyperbaric chamber. It's not something you would understand as your brain only works at O(n) speed while mine calculates in O(nlogn) speed. Heh. Nothing personnel, kiddo.
Wrong.
OP here, you're wrong. I would never sleep.
>bait people by saying a redditor failed so surely IQfy must be able to get it right
>call every single person regardless of their answer, wrong
It's honestly impressive how well this trolling strategy works
do your own homework
he is right You autistic moron
Seems like you don't know what O(n) means either
do you?
He's right except he forgot to mention that big O is asymptotic complexity, i.e. an upper bound as the input size grows without bound. So each one in that list is contained in the next. Technically calling an O(1) operation O(2^n) is correct (but useless).
O(f(n)) means: for a given algorithm, there exists a real number k so that (number of operations the algorithm performs on an input of size n) <= k * f(n)
function inc(arr) {
var i;
var z = arr.length; // 1
for (i = 0; i < z; i++) { // 1, n, n
arr[i] = arr[i] + 1; // 3
}
}
// for a given input of size n, (1 + 1 + n + n + 3n) ops
// total: 5n + 2
// so for f(n) = n, n > 0
// 5n + 2 <= kn
// if 2 <= (k-5)n
// if 2/n <= (k-5)
// so for n = 1, 7 <= k
// for n = 2, 6 <= k
// for n = 3, 5.666 <= k
// ... etc
// therefore there exists a real k (for example, 7) so that (ops done for input of size n) <= 7n
Big Omega is for the flipped inequality (>=, a lower bound) and the slashed O (Theta) is for a tight bound in both directions.
>Does IQfy know what O(n) means?
who doesn't?
An infinitist category theory professor and transsexual was teaching a class on David Hilbert, known nonconstructivist.
"Before the class begins, you must get on your knees and worship Georg Cantor and accept that he was the most highly-rigorous being the world has ever known, even greater than Archimedes!"
At this moment, a brave, intuitionist, wildbergian euclidean geometer who had produced 1500 constructive proofs and understood the necessity of algorithmic thinking stood up and held up a 0.999... foot ball.
"How wide is this ball?"
The arrogant professor smirked quite Infinitistly and smugly replied "The equivalence class of the sequence (1,1,1...)"
"Wrong. Only three nines were written down. If there were infinity and real numbers, as you say, are real... then I would need infinite paper."
The professor was visibly shaken, and dropped his japanese chalk and copy of Rudin. He stormed out of the room crying those infinist crocodile tears. The same tears infinitists cry for the “non-measurable sets” (who today live in such luxury that they need not be constructed) when they jealously try to take up space in textbooks from the deserving theorems. There is no doubt that at this point our professor, Cardinal troonystein, wished he had pulled himself up by his bootstraps and become more than a sophist infinity schizo. He wished so much that he had a gun to shoot himself from embarrassment, but the bullet would take infinite steps to reach his head!
The students applauded and all studied Wittgenstein that day and accepted Kroenecker as their lord and savior. An eagle named “Induction” flew into the room and perched atop the number theory book and shed a tear on the chalk. Wildberger's videos were watched several times, and God himself showed up after descending a finite amount from heaven.
The professor lost his tenure and was fired the next day. He died of gay plague AIDS and was expelled from the paradise Wildberger had created for all eternity.
his channel would probably be a lot more popular if he didn't spend half the time whining about this shit
Best post
O(n) basically means as n -> infinity (or, if you prefer, "as n gets arbitrarily large"), f(n)/n approaches a constant, non-zero number
There are some things wrong in it, but it's mostly correct and gets the general idea across.
>O(1) is constant time
Is the constant time complexity class*, if you want to be picky.
>which means it doesnt take longer as the input size increases
Technically true in theory, but in practice on real computers you will have different constant factors for operations. Though this doesn't matter due to the nature and definition of Big-O, even if that factor were large.
>For example, referencing an item in an array takes O(1) time
Bounded*, assuming you have the index location.
>O(logn) is logarithmic time
True, but the notation is messed up, which confuses people. It's O(log_2(n)).
>which means as the input size increases it takes a logarithmically small amount more time
It means it's bounded above by that growth order function towards some limit, which is often positive infinity (and there should be some lower bound to make this a useful statement). The function's growth over a long period of time will grow at about that rate.
>For example, binary searching a sorted list is O(logn).
If it's sorted, yes. The best growth order for comparison-based sorting is O(n*log_2(n)). If we consider just the search, we can say it's bounded by Θ(log_2(n)). If we include the pre-sort, then it's bounded by Θ(n*log_2(n)).
>O(n) is linear time, which means it takes a constant factor of time proportional to the size of the input size.
Again, bounded above (and should be below, with Theta) by that function. Generally, functions in that growth class will take some time proportional to their input size, yes.
>For example, iterating over every element of an array 5 times is O(n).
Correct, but the constant can matter.
>O(nlogn) is linear times logarithmic time
Linearithmic time*. Wikipedia actually has a good table on the most common time orders. Though, there are an infinite number of them, those are just the most commonly used.
>which is between linear and quadratic,
True.
>and is a common goal for optimizing many algorithms.
True, but a bad explanation. The growth order of quadratic time is awful. The gap between it and everything below it is generally where most people draw the "bad" line, because squaring makes the operation count blow up very quickly at large input sizes. So O(n*log_2(n)) is the last comfortable hurdle before, and noticeably better than, quadratic time.
>O(n2) is quadratic time, which means as the input size grows, you have to do square the number of things.
No, it means that the function grows no larger than that. Many algorithms in the O(n2) class don't perform exactly n2 operations. See n(n+1)/2, for example.
>iterating over every element of an array, but each time you have to do the entire array again. An example is brute force sorting an array.
There is no "brute force" method of sorting a set of numbers. Though Bubble Sort and Selection Sort are the most crude sorting algorithms and often what people think when they imagine that. But even they have some cleverness to them. Additionally, only their worst-case implementations perform in θ(n2). Other iterations of them are considered unique functions, for all intents and purposes.
>O(n3), O(n4), etc... are like O(n2) but each gets more cumbersome to work with. All of these are "polynomial" times.
Cubic time*. Afterwards, the n^k are just classified as "polynomial time", yes.
>Then theres exponential times like O(2n), which grow very fast.
2^n.
>Then weve got O(n!) and similar, which is super exponential time.
That's factorial time.
>This means you have to do a factorial number of things for the input size.
Same as above: no, not technically.
>An example is brute forcing the travelling salesman problem.
Yes, the brute force algorithm for TSP is bounded by factorial time.
>(n2)
formatting got messed up, I'm moronic. (n^2).
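On the binary search point above, here's a sketch that counts probes to show the logarithmic bound empirically (the array contents and sizes are arbitrary choices of mine):

```javascript
// Binary search over a sorted array, counting how many probes it makes.
function binarySearch(arr, target) {
  let lo = 0, hi = arr.length - 1, probes = 0;
  while (lo <= hi) {
    probes++;
    const mid = (lo + hi) >> 1;
    if (arr[mid] === target) return { index: mid, probes };
    if (arr[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return { index: -1, probes };
}

// The worst-case probe count stays within floor(log2(n)) + 1
// even as n grows by a factor of 1000.
for (const n of [1000, 1000000]) {
  const arr = Array.from({ length: n }, (_, i) => i);
  const worst = binarySearch(arr, -1).probes; // missing element: worst case
  console.log(n, worst, Math.floor(Math.log2(n)) + 1);
}
```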
>>Then weve got O(n!) and similar, which is super exponential time.
>That's factorial time.
it's also super exponential time
holy cow, I can't believe you insufferable homosexuals can't explain such a basic job interview question. go cry about not getting a job >>twg
Bookmarking this thread on an archive so I have a point of reference for the next time someone asks why techcels got their asses beaten in high school.
didn't read the pic but as i understand it, it means the amount of repetitions an algorithm takes increases linearly with the input size
That's wrong.
It establishes that the upper bound of the growth rate of some function of n is linear.
Growth rate? Are you talking about the derivative of the function?
>the upper bound
Which upper bound? Functions have many upper bounds.
>Growth rate? Are you talking about the derivative of the function?
No.
>Which upper bound?
AN upper bound, you illiterate.
You said, as I quoted, "the upper bound", indicating you had a specific upper bound in mind, when there can be many. I asked you which upper bound you had in mind and you started seething for some reason.
What then did you mean by the growth rate of a function if not its derivative?
>indicating you had a specific upper bound in mind,
Yes, you illiterate. Specifically, the one that you are dealing with, which is an upper bound.
>Anon, what's a "growth rate"? A-and is that a d-derivative or whatever it's called?
>I thought functions had multiple upper bounds??!
You're not fooling anyone, 3rd year. Your degree will be worthless btw.
It should've been function value instead of growth rate. Growth rate generally means derivative of the function. But you're correct if you meant the function value.
I'll say it again but more formally this time
O(f(x)) describes the set of all g(x) such that the value of g(x) <= c.f(x) for some n >= n0 and C > 0
On your example
2x <= c.x for all c >= 2
When someone uses = with O(f(x)), it implies the element-of notation
> I'll say it again but more formally this time
I don't care about formality, all that matters is being correct. It's a common misconception that being formal means being more correct. You can be correct while being informal and wrong while being formal, as this thread has demonstrated multiple times.
Your explanation in this post is correct, although needlessly wordy and, strictly speaking, formally nonsense (you try to be formal yet cannot get the variables to agree).
It also disagrees with the previous less formal explanation that you gave.
Care to explain how it disagrees with my post before
Sure.
>The O(f(x)) notation just describes the set of functions that are upper bounded by the function f(x)
Disagrees with the second explanation that you gave because being upper bounded by f(x) means being not more than f(x). Your first explanation gave no mention to the multiplicative scalar factor invariance that makes the definition useful.
According to your first explanation, 2x is not in O(x), but according to the second explanation, 2x is in O(x).
Hmm, you're right, it was an oversight on my part in the first post.
>Hmm you're right
I always am.
>You can be correct while being informal and wrong while being formal
Can you point out the correct and wrong posts from this thread
>post bait
>comment "WRONG" on every correct post
cool troll can you go back now
I already explicitly recognized some responses as correct, if you cared to read the thread. Most responses are wrong though and I pointed it out.
Most responses ITT are mostly correct, even your literal reddit screencap is fine. Just some technicalities that could be more accurate, which you personally haven't even pointed out.
They are dead wrong though. Clearly you haven't actually studied the concept. Perhaps you heard about it on reddit?
you literally go to reddit and screencapped a thread from there
To point out how wrong it was. I assure you my visit to that place was brief.
Oh come on, really? You just happened to go there for that one thing, which you already knew was there.. somehow?
I wasn't born yesterday, bro.
If you want the full story, here it goes. I saw someone on X say that every thread on reddit computer science subreddit is full of indian hate. Went to the computer science subreddit to briefly check for myself. Didn't find indian hate, found this thread instead. Posted it here. Never went there since.
>admits he uses X
>admits he uses reddit
>thinks by picking at technicalities rightfully lost in application, in a casual answer that is intended to be dumbed down, that he knows what he's talking about (In one of the easiest fields of math)
this just keeps getting better and better. Did you even graduate college, OP?
If someone said 2+2=5, would you autistically go into the technicalities of addition to explain why she is technically wrong, or simply accept that it's close enough and move on?
>If someone said 2+2=5
Woah, a classic strawman. Point me to where in the thread people said 2+2=5, or to anything of that severity.
Examples are given here:
>They are dead wrong though.
Refute this one:
You literally can't. lol
everything he said is roughly correct, why are you such a POS OP?
O(n) is just a set of functions.
>IQfy gijitsu
baka my head SENPAI tbh
Also the predditor is wrong because if a function is in O(n) it doesn't necessarily mean that it grows linearly wrt n. A constant function is also in O(n)
The O(f(x)) notation just describes the set of functions that are upper bounded by the function f(x)
>The O(f(x)) notation just describes the set of functions that are upper bounded by the function f(x)
Wrong.
The reason he's wrong is the same reason I said "growth rate". Refute
. You literally can't. You lost.
Big O notation doesn't talk about the growth rate of a function. Growth rate is irrelevant. What you said is wrong.
>Big O notation doesn't talk about the growth rate of a function.
Wrong.
>It should've been function value instead of growth rate
Wrong.
Good morning sir but I'm not wrong.
Is 2x in O(x)?
Yes it is.
And is 2x upper bounded by x? For example, take x = 1. Is 2 bigger or smaller than 1?
Plus, even if something has running-time complexity in O(n), it doesn't necessarily mean it will be faster than an O(n^2) algorithm for n smaller than a particular value. You need to be extremely specific when it comes to asymptotic notation
It means the space usage/runtime can be upper-bounded by a function linear in the input size.
Best answer ITT so far. Simple and correct.
What does a function "linear in the input size" mean?
No, I'm not a nerd.
O(n) is the set of all functions that behave asymptotically the same as f(n) = n.
Wrong.
No.
I write scientific code and no one cares or knows about this.
If the hardware can't do maths fast enough, we go for the next higher spec microcontroller
f(n) is O(n) iff there exists some real number c and natural number m such that for every natural number n >= m, |f(n)| <= c*n is true.
Correct but redundant definition. Can you remove the bloat?
What?
Correct but redundant definition. Can you remove the bloat?
You are moronic. How is that bloat?
O(n) is the set of all functions f: N->N such that there exists some k and some n' such that n > n' implies that f(n) < kn
The redditor is correct and OP just wants (you)'s, try harder.
I could use more concise words/symbols but can't spot the real bloat - it looks to me like all the elements there are essential
f in O(n) => k exists s.t f(n) < kn for sufficiently large n
Same thing though
f in O(n) implies lim n->inf of f(n)/n < infinity
different formulation similar bloat though
>f in O(n) implies lim n->inf of f(n)/n < infinity
That's wrong.
I dont think so its just a weird formulation
You could test some examples maybe that would convince you
E.g. if f(n) = 10000n + 1000000, then that limit is 10000, which is less than infinity, so f is in O(n)
E.g. if f(n) = n^2, the limit of n^2/n is not less than infinity, so f is not in O(n)
Do you have an example f where that definition breaks down?
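Those two examples can be checked by just printing the ratio f(n)/n for growing n (function names here are made up):

```python
# Watch f(n)/n as n grows: for the linear example it settles near 10000,
# for the quadratic one it grows without bound.
def f_linear(n):
    return 10_000 * n + 1_000_000

def f_quad(n):
    return n ** 2

for n in (10, 1_000, 100_000):
    print(n, f_linear(n) / n, f_quad(n) / n)
# f_linear(n)/n approaches 10000 (bounded), f_quad(n)/n = n (unbounded)
```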
It implies that lim n-> inf of f(n)/n exists, which is not true.
I think that limit does exist for any O(n) function. Can you describe an O(n) function where it doesn't exist?
Sure. Take f(n) = max(0, n * sin(n)).
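A quick numeric look at that counterexample: |f(n)| <= n always holds, so f is in O(n), but f(n)/n = max(0, sin(n)) keeps oscillating between 0 and roughly 1, so the ratio has no limit. A sketch:

```python
import math

def f(n):
    # In O(n), since |f(n)| <= 1*n for all n >= 0.
    return max(0.0, n * math.sin(n))

ratios = [f(n) / n for n in range(1, 200)]
print(min(ratios), max(ratios))  # hits 0 and gets close to 1: no limit
assert all(abs(f(n)) <= n for n in range(1, 200))  # the O(n) bound holds
```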
Would you like me to explain the bloat or do you want to think about it harder?
Yeah explain it please i am stuck
"For sufficiently large n" is redundant. If we're considering f(x) in O(g(x)), and given N such that for all n>N, f(n) <= Cg(n), we can take C' = max( f(0)/g(0), f(1)/g(1), ..., f(N)/g(N), C ), disregarding the terms where g(n) is zero. Then for ALL n, f(n) <= C'g(n). Hence the N/sufficiently-large-n condition can be dropped from the definition.
Note that this only works because the domain is the natural numbers, so finite sets are cofinal in the order.
If the domain were real numbers, the "x big enough" condition would be necessary, as removing it would present a different much less useful concept.
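Here's the C' construction on a concrete (made-up) pair f, g over the naturals; the only wrinkle is skipping the points where g(n) = 0:

```python
# Fold the finitely many "early" n into the constant:
# given f(n) <= C*g(n) for n > N, build a C' that works for all n
# where g(n) != 0. The example values below are arbitrary.
def f(n):
    return 2 * n + 5

def g(n):
    return n

C, N = 3, 5
early = [f(n) / g(n) for n in range(N + 1) if g(n) != 0]
C_prime = max(early + [C])
print(C_prime)  # 7.0, dominated by f(1)/g(1)

# Holds for every n >= 1; n = 0 is exactly the g(0) = 0 caveat.
assert all(f(n) <= C_prime * g(n) for n in range(1, 1000))
```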
Wrong. Go back to school.
How am I wrong?
I'll let you think about it.
I already thought about it and would prefer you to give me an explanation.
Spoilers are not fun.
Wrong.
I explained this before, pay attention
Redundant and still wrong.
Very much wrong.
The whole point of spoilers is that the interlocutor doesn't want to hear them because they want to experience the thing on their own. When the interlocutor wants to hear it and explicitly asks to hear it, it's no longer a spoiler. You're just being a butthole / socially unaware.
Spoiler: you're wrong.
Categorically wrong.
Ah yes that makes sense (unless you consider 0 to be natural and let f(0) = 1 which is bigger than C'n)
Ah yes, the cases where g(n) is zero and f(n) is not cause trouble. For us to discard the assumption of n large enough, we have to assume that g(n) is never zero.
My guy C is not supposed to depend on n. You are wrong.
He is saying C depends on N not n, N being the number such that n>N satisfies the thing
Ah, I see. Then it's correct. I wouldn't want to use a different definition of O(n) when dealing with natural and when dealing with reals though.
Order N
I can smell OP from here.
Yes, members of the `/g/` imageboard (known as "Technology") are generally knowledgeable about computer science concepts, including big O notation (O(n)). Big O notation is a mathematical way to describe the performance or complexity of an algorithm, particularly its growth rate when the input size increases. It provides an upper bound on the time or space requirements of an algorithm in terms of the input size. For example, O(n) represents an algorithm whose running time grows linearly with the input size.
The redditor's statements were incorrect because they claimed that `/g/` does not understand what O(n) means.
What the frick is going in this thread?
Anyways, for f, g: R -> R with g =/= 0 for x sufficiently large, f is O(g) iff lim x->infty (f/g)(x) < M for some constant M.
>inb4 wrong
picrel and then we'll talk
Your post is not in the set of correct posts so it's wrong.
Start with Rudin or Fichtenholz if you're not american.
What if I am American.
Wrong for the same reason other people ITT have been wrong. I suggest you read the thread before replying.
However, if you take your definition and replace lim with lim sup, which always exists, the definition becomes correct.
https://en.wikipedia.org/wiki/Limit_inferior_and_limit_superior
No. O(x) is not as useful when generalized to limsup, since now you have to special case the definition back to regular lim anyways when using it for some derivative and integral theorems.
t. knower
Wrong. Lim sup of a sequence of real numbers always exists. The ordinary limit exists very rarely.
Not what I said illiterate moron
When lim exists, it equals both lim sup and lim inf. There are integral theorems like Fatou lemma where lim sup is used, and it can be applied to sequences of functions where regular limit doesn't exist.
> since now you have to special case the definition back to regular lim anyways
This sounds like nonsense. What do you mean? Of course if the limit exists you can compute lim sup using the limit but often it doesn't.
If you use the limsup definition, then you have to special case f = O(x) as "also it's regular lim", in cases where you require the existence of limit of the O(x).
Primary place where this notation is used is for derivatives, which require a limit to exist.
I don't do analysis, but when I did my real analysis classes, O(x) was explicitly the limit, not limsup, for that reason.
>If you use the limsup definition, then you have to special case f = O(x) as "also it's regular lim", in cases where you require the existence of limit of the O(x).
Example?
> Primary place where this notation is used is for derivatives, which require a limit to exist.
It's used all the time in algorithm analysis, combinatorics and analytic number theory, in places that have nothing to do with derivatives.
Do you have in mind the usage of big O in the taylor expansion of a function?
Wrong, you don't know what a lim sup is or does.
Wrong. Lim sup n-> infty of f(n) is defined as the infimum (greatest lower bound) of the sequence g(n) = sup(f(k) | k > n) in the extended real numbers [-infty, infty]. It is often useful for describing the asymptotic behavior of functions which are not well behaved or do not have a limit. For example, the twin prime conjecture can be stated as asserting that lim inf (f(n+1) - f(n)) = 2, where f(n) is the n'th prime number. lim (f(n+1) - f(n)) evidently doesn't exist.
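To see the lim sup machinery numerically: take a(n) = max(0, sin(n)), which has no limit, and compute tail suprema g(n) = sup{a(k) : k > n} over a finite horizon (the horizon and sample points here are arbitrary):

```python
import math

# g(n) = sup of the tail {a(k) : k > n} is non-increasing in n;
# its infimum is the lim sup. Here a(n) = max(0, sin(n)).
values = [max(0.0, math.sin(k)) for k in range(100_000)]

for n in (0, 1_000, 50_000):
    tail = values[n + 1:n + 10_000]  # finite stand-in for the infinite tail
    print(n, max(tail))  # every tail sup is ~1, so lim sup a(n) = 1
# meanwhile lim a(n) itself doesn't exist: a(n) keeps hitting 0 too
```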
Wrong, flat out wrong. You don't even know what you're talking about. Go back to school, if you ever went to one.
the set of all functions f for which there exist real numbers m and a such that |f(n)| <= mn for every n larger than a
the dunning kruger feels pretty strong here
Yeah, I can't believe how many people ITT confidently think they understand big O notation when in reality they don't understand it at all.
limsup deez nuts, Black folk
>super exponential
wtf am I reading? is this the power of cs math?
you'll never be employed
I currently am employed.
gaygiest thread on IQfy rn
imagine trying to do maths and being too midwit to make it in the stock market so now your future is teaching
>bait thread gets 300 replies
well done idiots
>everything I don't like is bait
no one, and I mean NO ONE on this thread knows what Big O notation means. quite sad really.
Wrong.
This thread is an insane asylum where every post is one of the following:
> Define O(n)
> Someone who gives a definition
> "Haha you are wrong"(proceeds to give 0 evidence to how the definitions are wrong)
>General trollage
IQfy used to be funny
I gave a detailed explanation and evidence every time I was asked to.
I'm not autistic enough to care, but he could have mentioned that it describes the "order of" the growth in time complexity, and specifically an upper bound to said growth. Those are the defining features it's occasionally worth getting autistic over. It also means that you should be very careful when interpreting the meaning of O(k) or O(k*f(n)) where k is a constant.
Not a programmer but I'm happy to bite.
>O(1) means it doesnt take longer as the input size increases
This doesn't seem right to me, the processing time can still increase, it just has to stay bounded above by some constant.
>logarithmically small amount more time
This is vague but probably in the right direction.
>it takes a constant factor of time proportional to the input size
Asymptotically, yes.
>between linear and quadratic, and is a common goal for optimizing
Seems good.
>square the number of things
Vague as well.
His examples seem fine, insofar as they have the correct asymptotic order.
f(n) = O(g(n)) can be read as
>f(n) grows asymptotically no faster than g(n)
There, I hacked the math notation for you.
Nothing I said was wrong.
You're just baiting for someone to say "wrong" at this point
Who are you? Said where? We have no way to determine if you're wrong or not if we don't know which posts you're referring to.
>We
It's implied since you replied to all of my comments. Yes, every single post you responded to was me the entire time.
Wrong.
>Does IQfy know what O(n) means?
Yeah
unlike me, code monkeys on IQfy have never actually studied growth of functions, which is why they will object to the truth of the following true statements:
merge sort is O(n log n)
merge sort is O(n^2)
merge sort is O(2^n)
merge sort is O(n!)
merge sort is O(n!^n!)
the proof is trivial and thus is left as an exercise for the reader.
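The joke lands because big O only bounds from above: anything that eventually dominates n*log2(n) is also a valid bound. A sketch that counts merge sort's comparisons and checks them against n*log2(n) (the comparison-counting harness is made up for illustration):

```python
import math
import random

def merge_sort(xs, counter):
    # Standard top-down merge sort; counter[0] tallies comparisons.
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid], counter)
    right = merge_sort(xs[mid:], counter)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

for n in (100, 1_000, 10_000):
    counter = [0]
    data = [random.random() for _ in range(n)]
    assert merge_sort(data, counter) == sorted(data)
    # comparisons <= n*log2(n), hence also under n^2, 2^n, n!, ...
    assert counter[0] <= n * math.log2(n)
```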
Wait what? How can I be wrong
big theta notation is better anyways, i always use it in my documentation, the stupid codemonkeys think it's a stylized O, the rest agree with me.
>always
hardly any common algorithm has bounds theta requires
THEN MAKE SURE IT DOES!!!!!!
I was right. You were wrong.
It means whatever I say it means. Otherwise people usually mean that the number of computation steps does not increase faster than some linear function of the input size.
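That last reading is easy to make concrete: a membership scan over n items takes at most n comparison steps, so the step count is bounded by the linear function 1*n (the step-counting wrapper is made up for illustration):

```python
def contains(items, target):
    # Linear scan; also reports how many comparisons it performed.
    steps = 0
    for x in items:
        steps += 1
        if x == target:
            return True, steps
    return False, steps

for n in (10, 100, 1_000):
    found, steps = contains(list(range(n)), -1)  # worst case: absent item
    assert not found and steps == n  # exactly n steps, certainly <= 1*n
```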