Job interviewer hit me with the two sum problem and I was finished. Yeah I'm moronic, so what?
const twoSum = function(nums, target) {
dear god
I would put a "strong no" if I interviewed someone who wrote this shit lol
the good thing is it's right at the top, you can just skim through and reject it immediately when you see it
Explain the const
read-only variable
The function returns a read-only value?
Or is the function itself "const," unchanging? I'm not familiar with the language he is using...
no, it's the name of the function which is like a variable
in which if it wasn't const, the variable could be overwritten
var x = 1;
x = 2; // the value of x is overwritten here
with const, it will give you error
const x = function() { return 0; };
x = 2; // gives error
But why even do that? Ever? It's computationally dishonest as all data in memory is able to be altered.
It's like saying "I can't do this thing, but any hacker in the world in my system is allowed to."
It is an inaccurate representation of how the computer actually works.
imagine pi and e constants in maths
they're not meant to be redefined
pi and e are just labels given to unchanging values
>But why even do that? Ever? It's computationally dishonest
It's to signal to other devs that they shouldn't overwrite the variable and that something will go wrong if they do. The same goes for private: it doesn't mean anything for the memory to tag something as private, but it's there to stop other devs from making mistakes or wondering about something they shouldn't bother with
I agree, though I'd argue we shouldn't be hiring devs who wouldn't know not to alter it in almost all circumstances.
That only holds true for primitives. In general const and immutability are two different concepts.
const yourStupidAss = {inpenetrable:true,penetrated:false,penetrationCount:0};
yourStupidAss.inpenetrable=false;
yourStupidAss.penetrated=true;
yourStupidAss.penetrationCount=30;
This is why TypeScript is superior.
it still works the same in typescript.
readonly
object itself is modifiable
but you still can't overwrite what yourStupidAss points to to different object, you moron
learn what a reference is before replying to me
now do
const yourStupidAss = {inpenetrable:true,penetrated:false,penetrationCount:0};
yourStupidAss = { youAreAFrickingmoron: true, killYourself: true};
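Since you two are talking past each other, here's a minimal TypeScript sketch of the distinction being argued (property names are made up): const only protects the binding, while readonly / Object.freeze protect the contents.
const point = { x: 1, y: 2 };
point.x = 5;              // fine: const does not stop mutation
// point = { x: 0, y: 0 }; // error: cannot reassign a const binding

interface FrozenPoint {
  readonly x: number;
  readonly y: number;
}
const p: FrozenPoint = { x: 1, y: 2 };
// p.x = 5;               // compile error: x is a read-only property

const frozen = Object.freeze({ x: 1, y: 2 });
// frozen.x = 5;          // at runtime: throws in strict mode, silently ignored otherwise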
It's just an identifier that points to a callable object (the function) - the const makes it impossible to reassign a value to the identifier which is stored in the global name space object.
The language is PajeeScript
what's wrong with that? I also do
const func = () => {...
it's fun!
Yeah but you could redefine func. It's not really const.
>Its return value is const
Why? What benefit is there in a return value restricted from being altered by the receiver?
>Why? What benefit is there in a return value restricted from being altered by the receiver?
I need a rustard to explain this to me. It's also computationally dishonest as all data in memory is able to be altered.
>Safety
Is just a concession to any system intruder that they have more control over your system than you.
I mostly don't use that syntax but I don't mind it. What I do mind is mixing and matching that with adding "function" in there. That's just crazy
Pseuds be pseudocoding.
Question.. how do y’all embed code like that on this site? Can’t seem to find the option..
read the fricking sticky.
then go back to wherever you came from
[ code ] const gay = ""; [ /code ]
without the whitespace inside the brackets
this post made me feel old
it made me feel young cus he types like a boomer
ANSWER ME
[(remove this space to rant)code]ANSWER ME[(remove this one as well)/code]
https://github.com/IQfy/IQfy-JS/issues/77
Answer me
gooooood morning sir..
elegant solution saar
def twoSum(nums: Array[Int], targetSum: Int): Option[(Int, Int)] =
  val uniques = nums.toSet
  uniques
    .find(v => uniques.contains(targetSum - v))
    .map(v => (v, targetSum - v))
>fails for targetSum/2
oh no no no
embarrassing
Exceptionally inefficient. Fails when two entries of the same value add to the target.
based finishAfterHeatDeathOfTheUniverse(n) solution
it's just this, right?
function twoSum(nums, target) {
for (let i in nums)
for (let j in nums)
if (i != j && nums[i] + nums[j] == target)
return [i, j];
}
>Only finds one matching pair
>Senselessly iterates over already-checked pairs, so it would give multiples of the same pair if it accumulated them
When the constraints are only to find a solution it doesn't matter what time it runs in. I'd pass anon for using cosmic-ray-sort if they could point out that I didn't specify the boundaries
Nocoder moron here. What's the two sum problem?
Some butthole gives you a list of numbers, and then asks you to tell them all pairs of numbers in the list that add up to some other number.
for instance
what equals 20 out of this list [1, 2, 6, 18, 19]
then you would tell them
[1, 19] [2, 18] to make them go away.
I've never seen this problem before.
I would make an algorithm which iterates once over the list.
Each entry would be iteratively compared (added) with all subsequent entries in the list to find the matching pairs.
Additionally:
An easy optimisation would exist if one could be certain that the list was sorted.
I do not know, without further analysis, whether pre-sorting the list would be faster than the naive algorithm.
Alright, without prior knowledge of other algorithms I would loop once and then inside I would loop starting from i+1 so that 1 is compared to 2,6,18,19; 2 is compared to 6,18,19 and so on. What's the efficient way to achieve this?
"Efficient" is relative to the person answering. It has many complicated facets, and with the constraints missing it's impossible to answer.
Thanks, I got an out of bounds error
Let me see your code
I don't know algorithms either but I think I've seen this one before.
For each number you come across, you store the complement (target - num) in a separate list (usually it's a key/value pair like a hashmap)
For example:
[ 1, 2, 6, 18, 19 ] and target = 20
Starting with the first number 1, you compute 20 - 1 = 19 and store that complement as the key of a key-value pair: { 19: 1 }
So if you ever come across 19 in the original list, congrats you've just found two numbers that add up to 20.
The benefit is that you only have to iterate the list once, rather than using two loops.
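Rough TypeScript sketch of what that anon is describing (names are mine, and it returns indices like the LeetCode version wants):
function twoSum(nums: number[], target: number): [number, number] | null {
  const waiting = new Map<number, number>(); // complement we still need -> index that produced it
  for (let i = 0; i < nums.length; i++) {
    if (waiting.has(nums[i])) {
      return [waiting.get(nums[i])!, i]; // earlier index first
    }
    waiting.set(target - nums[i], i);
  }
  return null; // no pair sums to target
}
// twoSum([1, 2, 6, 18, 19], 20) -> [1, 3], i.e. 2 + 18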
assuming they were sorted i would traverse twice simultaneously, step from both left and right sum and move pointers depending on above or below target
>yet another factorization problem
I have been working as a software dev for 6 years now and have no idea why shit like this exists or what purpose it serves. For my internship and both jobs I worked at, I was interviewed and hired on my skillset without having to waste time on some moronic shit like this.
what a software dev test should be (From junior to mid level):
>can you build a simple contact form?
>can you debug this javascript function?
>can you pull data from a MySQL database and display it as a list? Can you add filters to sort that data?
>can you create a page based off of this mockup? Can you make the page responsive across different screens?
all developers need to know javascript and web development
got it
give some examples of proper interview tests then.
write a short script that calculates how many oven are needed to bake 6 million cookies
Only 1. You didn't specify a time frame.
>Some butthole gives you a list of numbers
>to make them go away
nice
function takes a list of numbers and a target
returns the indices of two numbers that sum up to the target
Thanks. Do the people who administer this test ding you for using a brute force solution?
If they would after not specifying it, I wouldn't want to work there so win-win
Never waste time optimizing if it's not explicitly necessary. If they expect your code to be optimized right off the bat, don't work there. That company will crumble under the stupidity of management or weight on the shoulders of the developers to carry it through hell.
Okay. Now generalize it to k-sums.
Ok did you get the job doe?
could it be lesser than O(n^2), IQfy?
Could be very quick, around O(1) if the list was already sorted.
How could it be O(1) if the list was sorted? Don't you still have to traverse the list once at that point, either using two pointers or binary search right of the current index, making it O(N) after disregarding the N log(N) sort?
No. You only need to traverse a very restricted range.
I have a list: [1,1,2,3,4,4,16,16,16,16,18,19,20] and want the indices that sum to 20. What is my restricted range?
Your small example is not applicable to large datasets.
You haven't explained how to do it in O(1). I was curious, but now I give up and sleep.
if you're on the 4th number in your example, 3, then as soon as you spot the 18 you don't have to look further in the list, since 3 plus anything from 18 onward will be bigger than 20
I don't think O(1) is possible, but there are definitely optimizations possible if the list is sorted
>still has to call sort() which is not constant
>O(1)
You don't know anything about the compiler being used, so you can't make many assumptions.
By the way, people are way too obsessed with O(1) claims, to the point that the way to make our magic black box look O(1) is to wait until elapsed time plus runtime equals some fixed X. There you go, always the same time. Enjoy waiting 382x longer.
if your list came presorted then it should be n log n, shouldn't it?
-For each element (n)
-Binary search to find its complement (log n)
The hashmap solution would be O(n)
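For what it's worth, a rough TypeScript sketch of that sorted + binary search idea (assuming the input really is pre-sorted; otherwise add the sort and it's still O(n log n)):
function twoSumSorted(sorted: number[], target: number): [number, number] | null {
  for (let i = 0; i < sorted.length; i++) {
    const want = target - sorted[i];
    // binary search for the complement in sorted[i+1..]
    let lo = i + 1, hi = sorted.length - 1;
    while (lo <= hi) {
      const mid = (lo + hi) >> 1;
      if (sorted[mid] === want) return [i, mid];
      if (sorted[mid] < want) lo = mid + 1;
      else hi = mid - 1;
    }
  }
  return null;
}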
What is the upper bound for the number of iterations your algorithm does as n grows to infinity?
ezpz
class Solution {
public:
vector<int> twoSum(vector<int>& nums, int target) {
vector<int> output{0,1};
if (nums[0] + nums[1] == target)
{
return output;
}
if (nums[nums.size() - 2] + nums[nums.size() - 1] == target)
{
output[0] = nums.size() - 2;
output[1] = nums.size() - 1;
return output;
}
for (auto idx = 0; idx < nums.size() - 1; ++idx)
{
for (auto jIdx = idx + 1; jIdx < nums.size(); ++jIdx)
{
if ((nums[idx] + nums[jIdx]) == target)
{
output[0] = idx;
output[1] = jIdx;
return output;
}
if (nums[nums.size() - (idx + 1)] + nums[nums.size() - (jIdx + 1)] == target)
{
output[0] = nums.size() - jIdx - 1;
output[1] = nums.size() - idx - 1;
return output;
}
}
}
return output;
}
};
Protip: the check for first and last cases actually makes a huge difference. Their test cases really fricking suck.
Based meta-programmer. That cracked me up.
You serious? That's the most moronic solution ITT.
no, it's correct unlike a number of moronic answers
No, it's not.
Yes and it maximally returns one pair. Incorrect.
Find one test case it fails that complies with the bounds specified here: https://leetcode.com/problems/two-sum/
Doesn't even work for [].
Black person, the formulation of the question on leetcode is as follows:
Given an array of integers nums and an integer target, return indices of the two numbers such that they add up to target.
You may assume that each input would have exactly one solution, and you may not use the same element twice.
You can return the answer in any order.
It's not moronic to start by testing against edge cases that are likely going to be thrown at your code. That's gaming the system, and it's funny and literally "meta".
int* twoSum(int* nums, int numsSize, int target, int* returnSize) {
for (int i = numsSize - 1; i; --i) {
int remainder = target - nums[i];
for (int ii = 0; ii < i; ++ii) {
if (nums[ii] == remainder) {
int* lReturn = malloc(sizeof(int) << 1);
lReturn[0] = ii;
lReturn[1] = i;
*returnSize = 2; // 1 pair or 2 ints? homosexual
return lReturn;
}
}
}
*returnSize = 0;
return NULL; // are you going to check this before free'ing, you wienersucker?
}
As suggested above as an optimization, these loops automatically also check for the edge cases.
A remainder is created to save the addition step, and while a good compiler optimization may do this automatically, it can't hurt to make it verbose, though that could still, for some reason, have worse performance, so one should bench both.
This version ensures it reads memory mostly sequentially, but a loop iterator that runs while not zero is faster than an integer comparison, so it's an open question whether the inner or outer loop should be in reverse for better performance.
I'm the off-by-one programmer and the code above is untested, may neither compile nor work, and I can't be arsed to find out, because I don't want to register.
Suck my dick.
This is O(n^2), the solution needs to be O(n) to be accepted in an interview.
I don't think O(n) is possible, whatever a language is doing that may make it *seem* like O(n) is just the language abstracting O(n^2) away from you.
No, I don't think so, see
, etc.
It just implements some tricks to reduce the general overhead to hopefully make it faster, but at its core, it too is O(n^2).
But I really have no clue, self-taught, voluntarily dropped out of college and was studying maths anyways, so I never had those complexity courses.
It is possible through using a Hash Map (maybe it's a Hash Table in C?). Since you know that b = t - a, for every element a you encounter you check if it already exists in the HashMap, and if it doesn't you store its complement b. This requires only 1 for loop and would have a runtime complexity of O(n).
I'm not sure why so many people on IQfy are getting it wrong. This is perhaps the first or second question you learn to prepare for interviews.
Don't you need O(n) alone just to initialize the hashmap itself? That's the first for-loop.
Then you iterate looking for matches in a second for-loop.
I suppose this is the O(nlogn) LeetCode alludes to.
The added complexity of hashmaps makes it only viable when the datasets get large, not at 1K, earliest at 10K, my gut feeling tells me.
There's also the random memory lookup latency issue using hashmaps instead of sequentially scanning when using naive, shit like this is not reflected in Big O, the former requires access to L2/L3 at best, a sequential iteration of memory reads will easily fit in L1.
Can somebody finally bench this shit, is there nobody here with a LeetCode account?
There is no second loop, you are checking whether an element exists in the Hashmap as you go along. If you reach the end of the array there is no match.
Hm, I think you are right, so I guess O(n) is indeed the lower theoretical bound.
Yep, I remember studying data structures very hard so I could get a job before I graduate. Then the market collapsed and I am now a NEET on IQfy.
Hm... I was thinking about this a bit more, and the most efficient hashmap (essentially an identity lookup table) needs to be initialized, at least in C, because the memory you allocate for it holds whatever bits were there when you allocated it, and you cannot know for certain whether an entry is an index you've already hashed or just residual bits.
Jeez, frick this shit, the naive version is done in 10 minutes, this hashmap wankery is definitely not worth it for such a trivial problem, unless your language already provides it out of the box.
And that language is defeated by C in speed anyways.
Ergo, back to square one.
>and you cannot for certain know if it's an index you've already hashed or just residual bits.
Though I suppose you could just reverse check again whether it checks out.
Yeah, whatever, I'm just going to write naive and call it a day, C is so blazingly fast anyways, I should be able to afford to not care.
I mean, some dude has an inefficient naive Sepples implementation here that beats 94% of others, that's fine enough.
>Though I suppose you could just reverse check again whether it checks out.
Just don't go out of bounds, oh, wait, another comparison instruction.
And this is for the easiest (naive, lol) hashmap implementation!
On a sidenote, I fricking love dictionaries in Python, because some homosexual has to deal with this on a code level and I can just not care and use it and it's fine and fast enough most of the time.
>and you cannot for certain know if it's an index you've already hashed
Why do you think this?
99% of the time you do not use C in an interview. Speed of finishing problems is very important and using C handicaps you for no reason.
The purpose of problems like these is to determine whether or not you know algorithms and data structures.
which is completely useless because any midwit can memorize a problem like this, while being too moronic to implement it efficiently on scales that matter in real world settings
>O(n log n) solution
Nice.
amusing because overhead for the linear solution is much larger than the runtime for the bruteforce for such small problems.
tbh it's generally been my experience that the constants involved in 'optimized' stuff don't matter unless you go gigantic. Big O notation is fricking moronic when it comes to the real world, where it's more important to roll out an implementation, then measure your hot spots. Yes, you don't pessimize the design, but you don't fricking agonize over dumb implementation details that can be fixed up later.
Well, the real world does only seek optimizations when the product is lacking sufficient performance, however it is very valuable as a develop to understand how to represent the performance of a particular algorithm and how to possibly improve it.
>as a developer*
pub fn two_sum(nums: &[i32], target: i32) -> Option<(i32, i32)> {
let i_v_pairs = nums.iter().enumerate();
for (i1, v1) in i_v_pairs.clone() {
for (i2, v2) in i_v_pairs.clone() {
if i1 != i2 && *v1 + *v2 == target {
return Some((*v1, *v2));
}
}
}
None
}
Isn't this O(n^2)?
The OP does not specify any limitation, it’s not incorrect.
I'm a fricking idiot. That's what I would do:
def find_pairs(nums, target):
    pairs = []
    seen = set()
    for num in nums:
        diff = target - num
        if diff in seen:  # a matching partner appeared earlier
            pairs.append([diff, num])
        seen.add(num)
    return pairs
I think you have to return the indices and not the values, so a hashmap with the number as the key and its index as the value is typical.
Is rust not popular on leetcode?
impl Solution {
pub fn two_sum(nums: Vec<i32>, target: i32) -> Vec<i32> {
let mut output = vec!{0, 1};
if (nums[0] + nums[1] == target)
{
return output;
}
if (nums[nums.len() - 2] + nums[nums.len() - 1] == target)
{
output[0] = (nums.len() as i32) - 2;
output[1] = (nums.len() as i32) - 1;
return output;
}
let mut idx: i32 = 0;
loop {
if (idx >= (nums.len() as i32) - 1) {
break;
}
let mut jIdx = idx + 1;
loop {
if (jIdx >= (nums.len() as i32))
{
break;
}
if ((nums[idx as usize] + nums[jIdx as usize]) == target)
{
output[0] = idx;
output[1] = jIdx;
return output;
}
if (nums[nums.len() - (idx + 1) as usize] + nums[nums.len() - (jIdx + 1) as usize] == target)
{
output[0] = (nums.len() as i32) - jIdx - 1;
output[1] = (nums.len() as i32) - idx - 1;
return output;
}
jIdx += 1;
}
idx += 1;
}
return output;
}
}
Rustchad here, these timings in leetcode are fake.
Figures, I'm pretty sure there's some autismo out there with way more optimizations than my approach. Not very fun to write, didn't feel like dipping into the fricking index key zip bullshit of iterators when it's just two indices. For as much as I miss pattern matching from rust in C++, the lack of for loops is really annoying.
If you use hash maps, doesn't the hashing algorithm as well as heap allocation negate many of the speed advantages you would otherwise get from just doing a nested iteration?
It could, or it could not. OP didn't specify the platform, nor does the question give any circumstances. For a test question like that it'd be imperceptible to the person running it either way
Cleanest, O (n log n), semi-optimal, pseudocode, readable solution ITT:
f(list[...], target_sum) {
found_pairs = []
for(iterator in list)
for(index = iterator + 1, index < list.length, index++)
if (list[iterator] + list[index] == target_sum)
found_pairs.add([iterator, index])
return found_pairs
}
if you're fine with a lot of memory, can't this be done in O(n) by constructing a lookup array of all values less than the target sum, and then looking up the difference for the current integer, and if not found, placing that index in the lookup table?
since I poorly described it, the way I see it is that (t-n) + n = t, so you store the index that contained t-n in a lookup array if the index containing n didn't already have one. I guess you could also construct a binary search tree while iterating, which would bring it to O(n log n) with on average half the memory usage of a lookup table. Still, unless we're dealing with a very large input, brute forcing it is going to be the fastest anyways.
If the array is unsorted yes, you use a Hashmap with your described logic. If it is sorted you use the 2 pointer method to reduce space complexity to O(1)
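For reference, a minimal sketch of that 2 pointer method in TypeScript (indices are into the already-sorted array; if the original order matters you'd have to carry the original indices along):
function twoSumTwoPointers(sorted: number[], target: number): [number, number] | null {
  let left = 0;
  let right = sorted.length - 1;
  while (left < right) {
    const sum = sorted[left] + sorted[right];
    if (sum === target) return [left, right];
    if (sum < target) left++; // need a bigger sum
    else right--;             // need a smaller sum
  }
  return null;
}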
Very easy with itertools in rust.
use itertools::Itertools;
fn two_sum(nums: &[i64], target: i64) -> impl Iterator<Item = (usize, usize)> + '_ {
nums.iter()
.copied()
.enumerate()
.tuple_combinations()
.filter_map(move |((i, x), (j, y))| (x + y == target).then_some((i, j)))
}
I probably fricked something up.
# Assume arr is sorted.
pr findTwoSumIn'Arr'ay:(Array{I64}) for'Target:(I64)
is Array{@<I64, I64>}?
if arr.length < 2, return nil;
var sums := @'[]
for i1 := 0 -> arr.length - 1,
for i2 := i1 + 1 -> arr.length,
let a, b := [arr itemAtIndex:i1], [arr itemAtIndex:i2]
if a + b...,
= target:
[sums append:@<i1, i2>];
> target:
jump skip;
end
end
mark skip:
end
return sums;
So many unreadable solutions here.
def twoSum(self, nums: List[int], target: int) -> List[int]:
    my_dict = {}
    for p, n in enumerate(nums):
        complement = target - n
        if complement in my_dict:
            return [p, my_dict[complement]]
        my_dict[n] = p
whats wrong with the neetcode solution?
I did the same thing independently, seems to work just fine
function twoSum(nums: number[], target: number): number[] {
const winningLotteryNumbers = new Map<number, number>();
for (const [i, num] of nums.entries()) {
if (winningLotteryNumbers.has(num)) {
return [winningLotteryNumbers.get(num)!, i];
}
winningLotteryNumbers.set(target - num, i);
}
};
I did this according to the specification at https://leetcode.com/problems/two-sum/description/ which states that there will always be exactly one solution.
Okay, not bad. Pretty good, really.
>didn't start with tests
into the trash all of you go
quick solution. pretty Black personlicious, but should be 2*n*log(n). I think it has a chance to have better cache locality than maps. definitely less allocations. probably could be even better if nums and indices were stored in separate arrays, but I am too lazy. it already took me longer than I expected or care to admit.
#include <algorithm>
struct Num
{
int idx;
int val;
bool operator<(const Num& other) const
{
return val < other.val;
}
};
class Solution {
public:
vector<int> twoSum(vector<int>& nums, int target) {
vector<Num> numsSorted;
numsSorted.reserve(nums.size());
for (int idx = 0; idx < nums.size(); ++idx)
numsSorted.push_back(Num{idx, nums[idx]});
std::sort(numsSorted.begin(), numsSorted.end());
for (auto iterNum = numsSorted.cbegin(); iterNum != numsSorted.cend(); ++iterNum)
{
auto iterRem = std::lower_bound(iterNum + 1, numsSorted.cend(), Num{0, target - iterNum->val});
if (iterRem != numsSorted.cend() && iterRem->val == target - iterNum->val)
return { iterNum->idx, iterRem->idx };
}
return {};
}
};
O(n^2)
for(int i = 0; i < SIZE - 1; i++) {
for(int j = i + 1; j < SIZE; j++) {
if(nums[i] + nums[j] == target) {
printf("Indices: %d, %dn", i, j);
break;
}
}
}
Not sure how other anons have O(n logn)
>Gives n log n solution
>Calls it n ^ 2
>thinks he's smart
>is a moron
Explain how it's n ^ 2 when your double iteration has a mathematically diminishing magnitude equivalent to log n?
this guy gets it
I think you're wrong anon.
Assume there are n elements in an array.
First pass of outer loop is:
1,2; 1,3; 1,4; ... ; 1,n = n-1 steps
Second pass of outer loop is:
2,3; 2,4; 2,5; ... ; 2,n = n-2 steps
and so on.
Total steps = (n-1) + (n-2) + ... + 1
This reduces to ((n(n-1))/2) = ((n^2) - n)/2 (arithmetic progression)
Discarding constant and lower order term to get big O expression = O(n^2)
I don't see how this is O(n log(n))
n ^ 2 would be n * n.
n * n is each element iterating over every other element.
If you use the optimisation of only exploring all subsequent elements, you reduce the time substantially, down to the order of log n.
log (base 2) n *
(When using big-O notation base 2 is always assumed for logs).
But there is no logarithmically growing term in the number of worst case steps as shown above. Go back and look at it again. Also, big O doesn't specify base. The whole point of big O is to get a general idea of how an algorithm's time grows. The base can be e, 2, 10 or anything else to fit your particular problem's scale.
>n ^ 2 would be n * n
Yes n^2 is n*n... we know this
>n * n is each element iterating over every other element
I don't know what you mean by this statement. Not clear.
>If you use the optimisation of only exploring all subsequent elements, you reduce the time substantially, down to the order of log n
I don't think this is possible. We have to check all pairs to identify the correct pair. Checking all pairs involves ((n(n-1))/2) steps at worst.
>>n * n is each element iterating over every other element
>I don't know what you mean by this statement. Not clear.
for (i = n; i > 0; i--)
for (j = n; j > 0; j--)
if (values[i] + values[j] == target)
found_value(i, j)
It is n log n if you just explore subsequent elements. I don't know how you don't get this. Very simple algorithmically. You are the one struggling to understand time-complexity.
ok, at this point it's straight up moron-baiting. well played, anon.
In time complexity, O(n) indicates that the algorithms time to completion increases linearly as n increases. For example, 2,000 elements will take twice as long as 1,000 elements.
n ^ 2 (or n * n) would mean that if there were 1,000 elements, the algorithm would take around 1k * 1k = 1 million units of time, and 2,000 elements would take around 2k * 2k = 4 million units of time.
n log n would mean that if there were 1,000 elements, the algorithm would consume 1k * log_2(1k) = 10k units of time, and if there were 2,000 elements, the algorithm would consume 2k * log_2(2k) ~= 22k units of time.
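If anyone wants to settle it empirically instead of arguing, here's a tiny TypeScript sketch that just counts how many pairs the start-at-i+1 version actually checks; doubling n quadruples the count, which is the n^2 signature, not n log n:
function countComparisons(n: number): number {
  let count = 0;
  for (let i = 0; i < n - 1; i++) {
    for (let j = i + 1; j < n; j++) {
      count++; // one pair checked
    }
  }
  return count; // always n*(n-1)/2
}
console.log(countComparisons(1000)); // 499500   (~n^2/2, nowhere near 1000*log2(1000) ~ 9966)
console.log(countComparisons(2000)); // 1999000  (4x the previous count: quadratic growth)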
Demonstrate that that particular solution (
) is not O(n log n).
Demonstrated here
. The algorithm's runtime grows in proportion to the expression n^2.
Your turn to MATHEMATICALLY demonstrate that the algorithm is O(n log(n)). Don't post another essay. Show it mathematically.
>This reduces to ((n^2) - n)/2
lol
"Merge sort" is a perfect example of an n log n algorithm. Study why.
You don't know basic mathematics then.
Your understanding is limited to surface level parroting and appeals to your (obviously wrong) intuition. Merge sort is a different algorithm. How is that related to this one? Don't waste any more time and really study basic algorithms again. The fact that you totally ignored the mathematically correct derivation of the big O expression (here
) but instead post smug retorts shows that you need to go back to square 1. To add, don't take my word for it. Post my proof and my algorithm (here
) to any other website or forum and get their opinion on whether it's correct. The worst case run time of this algorithm is O(n^2).
last ditch attempt to explain it to you. after this, you're on your own.
Forget it anon. He brought up merge sort as a counter argument instead of a mathematical derivation. Showing him that the number of steps grows like a triangle is useless. He needs to go back to square one. He can let his ego get hurt now, or he can move and study.
Okay, I concede you are right. I was wondering if I was wrong and somehow thought of exactly what you were going to post just before you did...
You're right. Show us your O(n log(n)) algorithm where you 'just explore subsequent elements'.
you can't be helped. you probably think insertion sort is logN too.
def two_sum(nums, target) do
Enum.reduce_while(nums, {%{}, 0}, fn num, {acc, index} ->
case Map.get(acc, target - num) do
nil -> {:cont, {Map.put(acc, num, index), index + 1}}
i -> {:halt, [i, index]}
end
end)
end
should be O(N) one pass
>Map.put
>O(N)
how?
nta, assuming all map does is do map MEM + hash(key) for the memory address isn't that an O(1) lookup? then at most we'll end up doing O(N) * 2 + O(N) which is O(3N) which is O(N)?
hello? respond
after thinking about it some more, I think you may be right. I don't really know this language.
that's ok i was just assuming how their map works anyway
how can it be O(N^2) though? we're doing only one pass and saving the most recently found dupes as an index?
>how can it be O(N^2) though
This
is why, unless that is wrong and is O(n log n), whichever it is, it doesn't matter, because what will ultimately count is not some theoretical wankery but the actual real world performance.
Even doing some things like going in reverse may slow down performance what with branching and predictions and all that black magic modern CPUs do, the entire field of O() is basically just academic masturbation.
I remember an anon being somewhat dumbfounded on another LeetCode thread some weeks ago, where the bruteforce simplistic C code turned out to be faster than 99.9% of all other submissions, which doesn't surprise me anymore.
Yes, I do have an axe to grind with academia.
>Even doing some things like going in reverse
For example, in O3, the compiler might be able to vectorize and do eight iterations per cycle, but might not be able to catch that if you go in reverse, and then the code will be slower.
I usually have a general idea of what might be faster, but I would never claim to know, unless I benched it.
There is no other *reasonable* way to know, at least not on x86, unless you're an expert at predicting the assembly output to a tee.
>the entire field of O() is basically just academic masturbation
At least when it comes to systems programming languages.
In higher-level languages, where you're dragging the overhead with you at all times, those O() calculations might still hold true.
O() really only comes into play with large datasets. Small amounts of data can fully fit within the cache, so it's faster to do simple algorithms with many comparisons than to use more complex algorithms which take longer compute time. When the data gets too big for the cache, O()-optimized algorithms play a substantial role. And yes, you'd get a better result with C than with some language that runs on top of a virtual machine or execution environment.
>the entire field of O() is basically just academic masturbation
t. the guy that wrote an O(n^4) string replacement function at my workplace
So glad I’m done with this shit. You think a plumber has to unclog the office shitter to prove he knows the job. Does he come home and work on his shitter unclogging portfolio
Frick everyone that supported this in the industry can’t wait for more of you getting fired because of AI tools it’s going to happen
Here’s the real world answer: importing a method from an existing library
Eat a dick.
I think I should just interview somewhere for fun and every question the ask for me to solve on a whiteboard I tell them it’s not how the job ever works in practice and nobody gives a shit anyway
Just hire more coolies a fricking jeet can do it for me
How about that I just rent the guy from 711 for the interview and I tell everyone Gagdeep does the coding I am his employer and contract him and the company I am applying to needs to not be racist and respect saars pronouns
Fricking queers, go get stuffed in a locker
Clean the toilet, pajeet.
No... NO!
Don't shit on the floor!!!
lol @ this.
I am an Indian. I wrote these posts
. I get it. All of you hate us. I'll still drive by here for the occasional laugh and genuine coding related stuff.
Clean the toilet, pajeet.
Wat
No I think he's a genuine moron who went through a basic compsci course and thinks he knows it all
Listen, I know you homosexuals think you're smart for saying
>You may assume that each input would have exactly one solution, and you may not use the same element twice
but it would've been better to say it less ambiguously like so
>Return the first valid solution consisting of unique indexes (indices)
because I was starting to create caching arrays, since "only one valid solution" can still mean any number of pairs (only one unique combination of pairs will exist), and the fact that an element may only be used once means keeping track of what was already taken, either by iterating over the output or by caching available indexes on each testing iteration.
SO FRICK YOU GUYS for thinking you're smart, when you're not.
homosexuals, all of you, including and especially the trannies at LeetCode.
https://leetcode.com/problems/two-sum/description/
Nowhere did the OP specify which exact version of the problem was to be solved.
There may be one pair or numerous depending upon how the question is asked.
Read the LeetCode.
I was writing code and then thought "wait, this is more complex than just two for loops and return" and then I checked other anons and it was indeed just two for loops and return and not as complex, and those homosexuals at LeetCode try to be not ambiguous, but ended up being ambiguous and I wasted 20 mins of my time.
Sigh.
Also what's with registering, Godbolt doesn't require that, frick you, LeetCode.
Also, it didn't particularly help that the C prototype returns a malloc'ed int* array and takes an output pointer returnSize, because if you just want one pair, just give me an array of two ints to write to and a non-zero return on match found.
Jeez, malloc'ing FOR TWO INTS, who the frick does that, I thought you want all solutions in such a case, for frick's sake!
>malloc'ing FOR TWO INTS
Wait, let me specify that a little, malloc'ing WHEN YOU KNOW THE OUTPUT SIZE AND IT IS SMALL ENOUGH TO FIT IN STACK, WHO THE FRICK DOES THAT?
I have neither enough face nor palm to express my dismay.
>only one valid solution
they could have accepted [0,1] and [1,0] by sorting your answer and theirs at the end
I don't know what abstract ad hoc solutions are supposed to prove.
I think having a project that actually runs proves more than a rehearsed algorithm implementation.
But I am a junior, so I don't know; take my advice with a grain of salt
do you guys have some kinda javascript amnesia
the language is 40 years old
yes consts aren't really consts
get over it
Nah I don't believe you got close to the answer but failed in such a moronic way, fixed your code with paint
if you insist
twoSum := (nums: Array<number>, target: number): [number, number] ->
seen: Map<number, number> := new Map
for num, i of nums
diff := target - num
if seen.has diff then return [seen.get(diff), i]
seen.set num, i
twosum(List, Target) ->
SortedList = lists:sort(List),
ReversedSortedList = lists:reverse(SortedList),
twosum_loop(SortedList, ReversedSortedList, Target).
twosum_loop([], _, _) -> [];
twosum_loop(_, [], _) -> [];
twosum_loop([Low|_], [High|_], _) when Low > High -> [];
twosum_loop([Low|LowRest], [High|HighRest], Target) when Low + High == Target ->
[{Low,High}|twosum_loop(LowRest, HighRest, Target)];
twosum_loop([Low|LowRest], [High|HighRest], Target) when Low + High > Target ->
twosum_loop([Low|LowRest], HighRest, Target);
twosum_loop([Low|LowRest], [High|HighRest], Target) when Low + High < Target ->
twosum_loop(LowRest, [High|HighRest], Target).
>this O(nlogn) in time solution is the best!
>it only takes O(n) memory (it takes c1 + c2 * n space, where c2 is a big constant)
>time complexity is actually c1 * n * c2 * log(n) + c3 * n + c4, and the constants are fricking huge
>starts to beat O(n^2) implementation around n = 30 million which you'll never see in actual use
Stop jacking off with Big O.
>which you'll never see in actual use
t. pajeet
Embedded actually, where heap is disallowed, and resiliency is desirable.
Yeah well many of us do have to deal with large datasets.
Good, and I hope you've profiled your algorithm for them. You do measure and profile, don't you?
Larper
import "slices"
type Pair struct {
left int
right int
}
func twoSum(nums []int, target int) *Pair {
values := nums
slices.Sort(values)
for i := 0; i < len(nums); i += 1 {
idx, found := slices.BinarySearch(values, target-values[i])
if found && idx != i {
return &Pair{
left: values[i],
right: target - values[i],
}
}
}
return nil
}
Thanks everyone. I have been doing a lot of leetcode to prepare for an interview, and it fricks with my self esteem sometimes. This thread made me feel a lot better.
x.combination(2).select {|n| n.sum == y}
idk i'm a cable monkey that crawls around datacenter floors
class Program
{
public static void Main(string[] args)
{
Console.WriteLine("Hello, World!");
twoSum([2,10,7,15], 9);
}
private static void twoSum(int[] arr, int target)
{
for (int i = 0; i < arr.Length; i++)
{
for (int j = 0; j < arr.Length; j++)
{
if (arr[i] + arr[j] == target)
{
Console.WriteLine($"Found: {arr[i]}, {arr[j]}");
}
}
}
}
}
Hello, World!
Found: 2, 7
Found: 7, 2
Whats wrong with that? took me 30 seconds to code
>java
>O(N^2) and the worst kind
good morning sir
it's c# homosexual
same shit you pitch black Black personhomosexual
oh god big o returns!
People still need "technical" interviews? All my jobs came from referrals and showing them my portfolio. Never once touched "leetcode" or whatever the plebs must use to prove they can program.
You don't have to write a single line of code, moron. The cheapest option is to give a plain text file with the numbers to 10 pajeets and let them sort it and find the sum you want.
class Solution:
    def twoSum(self, nums: List[int], target: int) -> List[int]:
        known = {}
        for i, n in enumerate(nums):
            diff = target - n
            if diff in known:
                return [known[diff], i]
            known[n] = i
        return []
returns the indices but whatever, potato potato
>camelcase in python
you only ever return one pair, moron
cant you just have ChatGPT or gemini write it for you? This normalhomosexual nonsense is getting fricking ludicrous nowadays, they're laying people off left and right and pretending you need to learn shit that bots already know, frick this bullshit man as a QA i say frick em
It's not only about having code that works. You have to show your thought process. Being able to explain how your solution works, what the space and time complexity are, and knowing what could be improved have a much bigger impact on being hired.
If you can do that well under pressure and time constraints, chances are higher that you'll be a good coder. I'm not saying that everyone good at Leetcode is a good software developer or vice versa.
I see what you're getting at but it's completely obsolete right now, that's just what it is, these people are living in the past, who's gon tell em?
I'm glad I'm not a gay or a moron so when my job goes away i will be retiring with dignity, my days as a good goy are numbered the moment some c**t figures out how to make AI bots prompt themselves which they already did for sure they just wont release the updates for the wider audience, cuz that's going to buckbreak the entire system at once, they're not done with the layoffs. Im pretty sure they have been done with this shit 17 years ago and i have proof of it.
Think of it like a form of hazing. Leetcode is how we determine if you care enough about joining the fraternity of software development to be willing to undergo ritual humiliation. Now that an entire generation of devs had to go through it, it won't be going away. Just be glad there's no sodomy involved like at a college frat full of actual chads.
The only frat members I've ever met have been the most beta midwits I've ever met. They do work out though.
numbers = [1, 2, 4, 5, 6, 8, 9, 5]
target = 10
winners = []
for idx, i in enumerate(numbers[:-1]):
    for j in numbers[idx+1:]:  # only compare with later entries
        if (i + j) == target:
            winners.append([i, j])
print(numbers)
print(winners)
void main() {
List<int> nums = [1, 1, 2, 3, 4, 4, 5, 7, 8, 10, 13, 14, 15];
int target = 12;
List<int> result = twoSum(nums, target);
if (result.isNotEmpty) {
print("Indices: ${result[0]} and ${result[1]}");
} else {
print("No two sum homosexual");
}
}
List<int> twoSum(List<int> nums, int target) {
Map<int, int> numMap = {};
for (int i = 0; i < nums.length; i++) {
int complement = target - nums[i];
if (numMap.containsKey(complement)) {
return [numMap[complement]!, i];
}
numMap[nums[i]] = i;
}
return [];
}
Indices: 6 and 7
frick leetcode
>Resume roasties screen for having a correctly formatted resume, a skill software developers don't use
>Dev interviews screen for leetcode problem drilling, a skill software developers don't use
>Nobody screens for the actual force multiplier in software development, the guy who doesn't take 3 hour lunch breaks or jerk off during work hours
Why
somehow i can never live down the shame of the fact that i solved this problem naively the first time
twoSum (x : xs) t = any (\y -> y + x == t) xs || twoSum xs t
twoSum [] _ = False
What theme is that??