functional fetishism, nothing more. They're all UNEMPLOYED SPERG LOSERS. Posers, bro. You can't even use git with haskell. Trust me, I tried. I can't get the toolchain to work with this shit. I'd take C, Java, Python over that crap any day.
> can't even use git with haskell.
Average Cnile sanity. You will never be competent.
functional programming is based on abstract mathematical concepts. traditional imperative languages are based on how computers work. of course they optimize better
Correct
>produces 1MB of garbage
>gets cleaned right away
>never accumulates to 1GB
>cherry picked one algorithm that haskell isn't really good at
>cnile and sepples also use -O2, but it's not okay when haskell does it
make a better bait
>t. nocoder
Aside from other problems, immutability/no side effects will always destroy FP performance. No computer works that way, nor could any computer work that way efficiently. Computers are literally built to load small pieces of data from RAM, modify them, and write them back. Immutability is an expensive abstraction which will always get its ass kicked by other languages which do not impose it, especially C.
>but muh embarrassingly parallel map function!
>WHAT ABOUT MUH MAP FUNCTION???
Cherry picking. Any language can dispatch embarrassingly parallel problems to multiple cores or GPUs with ease. That says nothing about most code which is serial in execution as most problems are by nature.
>but muh embarrassingly parallel!!!!
So they found a compiler difference...which I'm sure no longer exists...related to vectorization, and based on this declared that "muh Haskell beats C!" This is the kind of cherry picking shit that makes other programmers laugh at you. Grasping for any "evidence" to avoid the truth only makes you a fool. Better to say "yeah it's slow but I like XYZ features" than to pretend otherwise.
I like haskell, but I agree the ecosystem lacks a lot, and the fact that the language isn't popular slows down the development of more libraries; and the bad ecosystem helps to make the language even less popular.
the ecosystem is so fricking shit it makes the language practically useless. It's fun if you're a functional fetishist. I used to be one for a while but then I discovered money and pussy bro
>having a job
>wanting a job
Wagie get back in the cagie! Israel and Ukraine need your tax money. Those Iron Dome interceptors are pretty pricey so you'll need to put in overtime. No extra pay for you though. You're salaried.
>Oh gee wow I can make a functional Fibonacci method in java I guess java is functional now
The fact that a computer has memory btfo any argument that a computer is functional. State exists and running any function with the same arguments is not guaranteed to lead to having the same results.
>It never does
it always does though
since pure and impure code are separated, the compiler can optimize the pure code
in an impure language like pOOP, the compiler has no way of telling which code is pure
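A minimal sketch of that separation (hypothetical names, not from the thread): the type signature is what tells the compiler which side a function is on.

```haskell
import Data.IORef

-- pure: the compiler is free to inline, reorder, or share calls like this
square :: Int -> Int
square x = x * x

-- impure: the IO type forces evaluation order to be preserved
bump :: IORef Int -> IO Int
bump r = do
  modifyIORef r (+ 1)
  readIORef r

main :: IO ()
main = do
  print (square 5 + square 5)  -- same pure expression twice: always 50
  r <- newIORef 0
  a <- bump r
  b <- bump r                  -- same impure expression twice: 1, then 2
  print (a, b)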
>cherry picked one algorithm that haskell isn't really good at
It's a pure algorithm that is innately recursive, my first thought while learning haskell was "oh this'll be easy to write in a totally pure way"
As it turns out, it can't do it at all. It's a pure function that causes haskell to freeze and eat ram until it's killed.
Great language, moron. You don't even know what Ackermann is.
2 weeks ago
Anonymous
Did you use tail recursion?
2 weeks ago
Anonymous
Here you go, https://en.wikipedia.org/wiki/Ackermann_function
implement it in haskell and try to run it with ghci
Here's a working C function you can translate, if you want.
int ack(int m, int n)
{
    if (m == 0) {
        return n + 1;
    }
    else if (n == 0) {
        return ack(m - 1, 1);
    }
    else {
        return ack(m - 1, ack(m, n - 1));
    }
}
2 weeks ago
Anonymous
I am not the previous poster and I am aware of Ackermann's function. It is not tail recursive
ack 0 n = n + 1
ack m 0 = ack (m - 1) 1
ack m n = ack (m - 1) (ack m (n - 1))
In haskell it's easier to express it as you can see
What's the point of not optimizing code? And C segfaults both optimized and unoptimized at ack(4, 1) and doesn't end at ack(2, 1), whereas haskell outputs immediately unoptimised. Are you trying to deceive people? Also Ackermann's function is notoriously slow to compute
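For reference, the three equations above compile as-is once you add a signature and a driver (small inputs like ack 2 3 return instantly; the m = 4 row is the explosive one):

```haskell
-- the thread's definition, unchanged, plus a main so ghc/ghci can run it
ack :: Integer -> Integer -> Integer
ack 0 n = n + 1
ack m 0 = ack (m - 1) 1
ack m n = ack (m - 1) (ack m (n - 1))

main :: IO ()
main = print (ack 2 3)  -- 9
```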
2 weeks ago
Anonymous
Even if it's slow, it shouldn't crash haskell when C -O0 can compute it. That's my point.
2 weeks ago
Anonymous
Again, my point is not that you shouldn't optimize, it's that at a base level the output it produces is garbage. In some cases it may optimize well, but in many it will remain slower than a real language.
Well haskell requires much more runtime support than C, and C maps better to hardware compared to haskell, which compiles to something like a graph I think. Also Ackermann is kinda fricked in terms of laziness because it always requires the other argument, meaning you can't skip computation. One could come up with something like Ackermann where under lazy evaluation a lot of computation can be skipped.
f 0 n = 1
f m n = f (m - 1) (f m (n - 1))
main = print $ f 10 10
#include <stdio.h>

int f(int m, int n) {
    if (m == 0)
        return 1;
    return f(m - 1, f(m, n - 1));
}

int main() {
    printf("%d\n", f(10, 10));
}
Toy example but it could be made more complicated (and less like Ackermann's function). I didn't run it so I don't know if f(10, 10) is slow. There should be values where C takes forever while haskell outputs immediately. The runtime of haskell gives an edge here, besides of course enabling functional programming
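For what it's worth, the Haskell side can be checked directly. Since f 0 ignores its second argument, laziness means none of the inner calls are ever forced and it prints at once; as far as I can tell the strict C translation has to evaluate f(m, n-1) first, and with no base case on n that inner recursion never bottoms out.

```haskell
f :: Int -> Int -> Int
f 0 _ = 1                        -- the second argument is never inspected
f m n = f (m - 1) (f m (n - 1))  -- so the inner call stays an unevaluated thunk

main :: IO ()
main = print (f 10 10)  -- prints 1 immediately
```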
2 weeks ago
Anonymous
The reason I bring up this example in particular was because even as a noob I became disappointed in haskell from this.
I thought it was *the* problem for haskell, it's recursive, it's iterative, it's easy to write in 3 lines.
It failed, and that soured me. I kept learning, but I never truly used the language, because I felt like I could do so much more for less work in other languages.
2 weeks ago
Anonymous
Let me show you one example where the inverse is true
fix f = f (fix f)
I think this is really *the* problem for functional languages. It's a higher order function that accepts another (usually) higher order function and computes its fixed point, i.e. the value which when input to f gives back itself. You can express the factorial using it as
fact = fix $ \f n -> if n == 0 then 1 else n * f (n - 1)
To implement this in other languages you need to implement haskell's runtime system
Well I will show you another example, an infinite list of Fibonacci numbers
fibs = 0:1:zipWith (+) fibs (tail fibs)
This one uses lazy evaluation to progressively compute as many numbers as are needed
print $ take 10 fibs
These are in my opinion *the* problems for haskell. Also combinators and tacit programming. To a lesser extent (because other functional languages have this) using user defined types (which are more powerful than C) to easily create parsers, do symbolic computation etc. I'll add in monadic error handling but it's kinda niche when it's encountered and I don't want to drag this too long
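The fibs one-liner and the take line above, assembled into a file you can actually run:

```haskell
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)  -- each element reuses the two before it

main :: IO ()
main = print (take 10 fibs)  -- only the first 10 elements are ever computed
```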
2 weeks ago
Anonymous
For infinite series, I can see why haskell or a lazy language would be nice, but for something like factorial why would it be better to represent it with fixed point over a naive implementation?
Why would a naive/fixed haskell implementation be better than a C implementation?
2 weeks ago
Anonymous
It isn't about the factorial, it's about fix itself. I mean on its own it's just a toy function like Ackermann's but it demonstrates what you can do. It repeatedly applies f to itself until it reaches the base case. In strict languages it uses currying to be able to do that. And the runtime support it requires doesn't exist in non functional languages (because the curried f outlives the activation of the instance of fix that creates it). I think it uses every feature of functional programming, albeit covertly. On top of that it looks just like the definition of a fixed point in math: x0 such that x0 = f(x0)
Oh and I'm pretty sure that in haskell using fix results in the same graph as just using recursion
More info: https://en.m.wikipedia.org/wiki/Fixed-point_combinator
2 weeks ago
Anonymous
>It repeatedly applies f to itself until it reaches the base case.
You didn't specify that there has to be a base case.
2 weeks ago
Anonymous
It doesn't. It depends. If there isn't a base case it might loop forever like recursion. However sometimes it works
fix (const 1) correctly gives 1 (This might only work in lazy languages)
fix (\x -> x + 1) correctly loops forever because x + 1 doesn't have a fixed point
fix (\x -> x^2) incorrectly fails to find a fixed point, even though 0 and 1 are fixed points of x^2
In a lazy language it doesn't evaluate anything until fact 5 is needed. Then in the first step of evaluating, it expands to fix (\f n -> if n == 0 ...) 5. Then it evaluates fix: (\f n -> if n == 0 ...) (fix (...)) 5. Then it evaluates the lambda. First it needs to check if n is 0, where n is 5. It is false so it turns into: 5 * (fix (...)) 4 (notice function application to 4 has precedence over multiplication here). We have to evaluate (fix (...)) 4. This is the same as before. I skip to 5 * 4 * 3 * 2 * 1 * (fix (...)) 0. Expanding fix we have (\f n -> if n == 0 then 1 else ...) (fix (...)) 0. Since n is 0 this evaluates to 1 and we have 5 * 4 * 3 * 2 * 1 * 1. Evaluate that to 120. If the base case did not exist it would have just kept going. The interesting thing is that one can code this mechanism in a few characters
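The expansion traced above can be checked directly (fix is also exported by Data.Function; it's defined inline here to keep the file self-contained):

```haskell
-- fix computes the fixed point of f by feeding f to itself
fix :: (a -> a) -> a
fix f = f (fix f)

-- factorial via fix: f is "the rest of the recursion"
fact :: Integer -> Integer
fact = fix $ \f n -> if n == 0 then 1 else n * f (n - 1)

main :: IO ()
main = print (fact 5)  -- 120, exactly the 5*4*3*2*1*1 chain above
```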
the functional paradigm does not accurately describe computing, it does not make for a good model for computation and thus it's innately problematic to make it optimal. GHC is already an insanely bloated compiler and Haskell is still not considered a very fast language.
>does not accurately describe computing
It has one to one correspondence with while loops. Also this is specific to lazy evaluation, but let's say you want to compute the following function:
f(x, y) = if x = 5 then True else y
What should f(5, loop_forever()) evaluate to? Should it loop forever, or since x is 5, should it return True? Looping forever is the value of y. In C it loops forever, in haskell it evaluates to True
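That f can be written down and fed a diverging second argument; undefined here stands in for loop_forever() (a real infinite loop would behave the same, it would just never let a failing run terminate):

```haskell
f :: Int -> Bool -> Bool
f x y = if x == 5 then True else y

main :: IO ()
main = print (f 5 undefined)  -- y is never forced, so this prints True
```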
2 weeks ago
Anonymous
> Also this is specific to lazy evaluation but ...
isn't that kind of the problem, though? every example I have seen of lazy eval either isn't something that would ever happen (like this), or is something where unidiomatic C code is used, and there's still an easy way to avoid unnecessary computation. it seems like a great feature for Haskell, but not really relevant to saving processing power when you're comparing to C.
2 weeks ago
Anonymous
The argument was that one could argue lazy evaluation more accurately describes computing, because it computes that function even in the case when its result is defined but some of its arguments loop forever. Also I think you can get the same effect as lazy evaluation in C by inlining and possibly dead code elimination, unless this counts as changing the semantics and the compiler can't do it (I'm pretty sure the standard says something along the lines of arguments being evaluated before the function call, which would make the above optimizations impossible, or else compiler writers would be free to make C lazily evaluated)
>isn't that kind of the problem, though?
What kind of problem? The performance difference? Functional languages require a lot more runtime support including a garbage collector. Additionally processors have not been optimised to run functional code and in particular the runtime system (because map etc seem to fit well for vector operations). In contrast they do offer many perks to imperative languages like the stack yet they offer no speculation for things like function pointer calls (this also affects object oriented languages). Computation isn't assembly tho
2 weeks ago
Anonymous
maybe I should have been more explicit, my apologies. I am talking about how the CPU computes things, so I am more or less talking about assembly. the argument I made somewhere earlier in this thread is that the functional paradigm does not closely reflect how CPUs work when you compare it to imperative languages, and thus you have to jump through hoops to optimize it. someone said it's not a problem with functional languages, but rather the compilers for said languages, but I think it's innate to the functional paradigm that it's difficult to optimize. and GHC is an example of that.
2 weeks ago
Anonymous
I think it's more like CPUs haven't adapted to functional programming. They could add speculation in the jump register instructions, they could add hardware support for garbage collection as well as some structure that can efficiently store anonymous functions that get passed around (rn they are forced to be on the heap). These would also yield improvements in non functional languages. Under the hood they don't implement the assembly programming model anyway. Imperative languages are simpler and require less work to run either way so they always have an edge. The trade off is functional gives you more power
fact n = foldl (*) 1 [1..n]
quicksort [] = []
quicksort (x:xs) = quicksort lesser ++ [x] ++ quicksort greater
  where lesser = filter (< x) xs
        greater = filter (x >=) xs
primes = sieve [2..]
where sieve (p:xs) = [x | x <- xs, x `mod` p > 0]
2 weeks ago
Anonymous
I fricked up the last one lol
primes = sieve [2..]
  where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p > 0]
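The corrected sieve runs fine once given a driver; take forces only as much of the infinite list as you ask for:

```haskell
primes :: [Integer]
primes = sieve [2..]
  where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p > 0]

main :: IO ()
main = print (take 10 primes)  -- first ten primes
```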
2 weeks ago
Anonymous
> The trade off is functional gives you more power
maybe. Haskell can be elegant, but I wonder whether that's inherent to functional programming or not. if you want to be terse, you can also use an array programming language like BQN, but I doubt that's actually desirable. with some high-level imperative languages you can still solve these problems in an elegant way, even if the syntax has to be a bit more explicit than Haskell.
>Under the hood they don't implement the assembly programming model anyway
I feel like C does map closely to machine code, though. sure, it is still a high-level language, but if you can often predict what the compiler output is going to look like, I think it maps pretty closely to assembly. but FP could work great for higher-level languages.
2 weeks ago
Anonymous
>whether that's inherent to functional programming or no.
The first two are, except other functional languages might not curry operators like that and may require a lambda. The third requires lazy evaluation because of the infinite list (it returns an infinite list of all the primes). If it used [2..n] instead it works in strict languages, but if the language doesn't support list comprehensions then it has to be expressed using filter
It's not supposed to be terse. To quicksort a list: pick an element, quicksort the lesser elements, quicksort the greater elements, and place it in the middle
>with some high-level imperative languages you can still solve these problems in an elegant way
Examples?
>I feel like C does map closely to machine code, though
Maybe. C is simple. The simple parts of functional languages map equally well, except they typically use bignums. And FP is only high-level unless you start dropping features (which makes it not FP)
2 weeks ago
Anonymous
There's this project: https://antelang.org/
It's functional (higher-level) by default, but it allows you to lower when you want.
2 weeks ago
Anonymous
>I think it's innate to the functional paradigm that it's difficult to optimize.
Correct.
>I think it's more like CPUs haven't adapted to functional programming.
How would you adapt to immutability? You're constantly copying bytes. If you try to use indirection to minimize the copies you're copying pointers which introduce another hop to get the actual data, defeating the cache prefetch.
>They could add speculation in the jump register instructions
What are they speculating on? What's a pointer to prefetch scattered data? Apple tried that and now has a nasty exploit in their M chips.
>they could add hardware support for garbage collection
GC isn't the problem. GC languages are not, by nature, slow. There's some overhead on the tightest code vs. C/C++, but it's not the end of the world. Massive data throughput/copying is FP's #1 problem. Computers are designed to work on a shared, mutable space.
>as well as some structure that can efficently store anonymous functions that get passed around (rn they are forced to be on the heap).
The heap is just RAM. The stack is in RAM too. The only place that's faster is cache, but cache is purpose-built hardware. It is not independently addressable. You can't just tell the CPU to give you some cache and keep stuff there.
>Under the hood they don't implement the assembly programming model anyway.
Yes they do.
2 weeks ago
Anonymous
>How would you adapt to immutability?
Never said that. Reread my suggestions
>You're constantly copying bytes.
No, because the compiler does liveness analysis (in every language) so variables can be placed in the same registers or memory addresses
>What are they speculating on?
The jump destination >Apple tried that and now has a nasty exploit in their M chips.
Skill issue? Speculating about whether a jump occurs is okay but this isn't? Anyway OOP would benefit from that too because it usually is implemented with dynamic dispatch (even in C++)
>Massive data throughput/copying is FP's #1 problem
Wtf are you talking about? Do you even have any benchmarks to show this? The main problem for performance is everything about the runtime system
>Computers are designed to work on a shared, mutable space.
You are implying parallelism here. This has nothing to do with FP, but a shared mutable space has insane overhead which drags down performance. This is the reason why supercomputers use message passing instead. This is irrelevant to FP tho
>The heap is just RAM. The stack is in RAM too
The stack has locality which means it gets cached, and it's a simpler structure anyway. Since every language uses a stack, CPUs have been optimised with regards to stack operations. For functional programming tho a stack is just not enough. Generalising the idea of a stack (required if functions can outlive the scope that created them) you end up having to use something like a graph, which necessarily needs garbage collection (as you can't know at compile time when certain scopes will die) and so all of this needs to be in the heap
>Yes they do.
My statement wasn't accurate. They provide assembly as an interface but under the hood they do different things which have the same end result (except when they don't and you get exploits)
You are aware that when you write a program you tell the system what you want to do, not how to do it, right? This is true even in assembly
2 weeks ago
Anonymous
>You are aware that when you write a program you tell the system what you want to do, not how to do it right? This is true even in assembly
this doesn't seem like a very meaningful statement, though. yes, if you consider your scope low level enough, even opcodes instruct the CPU to do something, not how they should do it. that much is obvious, computers don't exist so humans can do their work. but when you consider the scope at which we're trying to solve a problem, and have computers actually perform a meaningful task, then there's a world of difference between imperative and declarative languages. with one, we instruct how to solve a task, and with the other, there's an intermediate runtime that has to come up with algorithms for declarative code.
2 weeks ago
Anonymous
Sure, but imperative languages also get optimised by the compiler, which means variables may be eliminated etc and the system might end up doing the same thing but not how you described it. The difference is in how you describe it. Furthermore if you wanna be able to do more advanced things like higher order functions or object orientation you need more runtime support
2 weeks ago
Anonymous
So basically you're moronic?
2 weeks ago
Anonymous
>Are you trying to deceive people?
He and the other cnile cultists absolutely are. They're lying snakes and beyond moronic.
2 weeks ago
Anonymous
I forgot to add that what likely saves haskell at ack(2, 1) is lazy evaluation which avoids needless computation C can't avoid because it's strict
2 weeks ago
Anonymous
Ignore the C part. I had accidentally swapped the arguments lol. C computes both relatively fast. What's the point of unoptimised code tho?
2 weeks ago
Anonymous
>use haskell as main language
>call C within haskell for cases it's badly optimized for (like 1000x slower than C) or where very performant code is necessary.
2 weeks ago
Anonymous
How painful is the interop?
Can you give an example running the ackermann I posted earlier from within haskell?
2 weeks ago
Anonymous
Depends on how you choose to do it.
https://hackage.haskell.org/package/inline-c
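The interop itself is mostly painless via the standard FFI; inline-c just automates the binding. A minimal sketch (binding libc's abs rather than the ackermann above, since ack would need its own .c file compiled and linked alongside):

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.C.Types (CInt)

-- bind a C symbol directly; for the earlier ackermann you'd compile its .c
-- file and declare: foreign import ccall "ack" c_ack :: CInt -> CInt -> CInt
foreign import ccall unsafe "abs"
  c_abs :: CInt -> CInt

main :: IO ()
main = print (c_abs (-7))  -- calls straight into libc, prints 7
```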
depends on the implementation of the ackermann algorithm
e.g. the following is faster than C
{-# LANGUAGE BangPatterns #-}
module Main (main) where
import qualified Data.Map as M
import Control.Monad.State.Strict
import Control.Monad
type Table = M.Map (Integer,Integer) Integer
ack :: Integer -> Integer -> State Table Integer
ack 0 n = return (n+1)
ack 1 n = return (n+2)
ack m 0 = ack (m-1) 1
ack m 1 = do
    !n <- ack (m-1) 1
    ack (m-1) n
ack m n = do
    mb <- gets (M.lookup (m,n))
    case mb of
        Just v -> return v
        Nothing -> do
            !s <- ack m (n-2)
            !t <- ack m (n-1)
            let foo a b = do
                    c <- ack (m-1) b
                    let d = max a c
                    return $! d
            !v <- foldM foo 0 [s .. t]
            mp <- get
            put $! M.insert (m,n) v mp
            return v
main :: IO ()
main = print $ evalState (ack 4 1) M.empty
the functional paradigm does not accurately describe computing, it does not make for a good model for computation and thus it's innately problematic to make it optimal. GHC is already an insanely bloated compiler and Haskell is still not considered a very fast language.
>write pure
>compiles to state machine anyway
What's the point? You could just write it as a state machine and it'll compile faster AND be faster in more cases.
I couldn't tell you.
But I guess I can explain it as a pretty lofty aspirational goal. If you really did manage to achieve a functional language and compiler that actually was able to compile things optimally, that would be a pretty big achievement. It would probably mean we figured out something about programming that we didn't quite know before, and that would probably be useful in all kinds of ways.
This could also be a pipedream too, idk, I don't use functional languages.
What's the next big programming paradigm? how do we transcend the limits of object oriented programming and functional programming? (they're both shit) We need to think up something new here. Perhaps we can get some inspiration from Alfred North Whitehead and his process philosophy
it's good marketing, no-coders love to parrot things like that
>rust is safe
You're a moron
fpbp
Correct
>t. nocoder
>muh libraries
>but I agree the ecosystem lacks a lot
Such as?
if you're not a minority then tell your mother I'm so sorry for her burden
Still mad about how you wasted your 20s wagecucking every day? Boohoohoo, baby.
What did you do during your 20s?
No, seething here won't turn back time and I don't care about you.
functional programming despite being simple on paper is so needlessly complex that even cnile shit optimizes better lol
computers were originally pure until that girl Ada Lovelace found out you can do impure shits on it
>computers aren't functional
>what's OR(AND(XOR(0,1),1),0)
please go back
Learn erlang
>Currying
useful aspects of this have already been absorbed into modern languages
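What currying looks like on the haskell side, for comparison with the absorbed versions (toy names, nothing from the thread):

```haskell
add :: Int -> Int -> Int
add x y = x + y

-- partial application: supplying one argument yields a new function,
-- no lambda needed
addTen :: Int -> Int
addTen = add 10

main :: IO ()
main = print (map addTen [1, 2, 3])  -- [11,12,13]
```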
>1 GB/s of garbage data
>it's ok though, totally optimized!
>can't even solve Ackermann(4,1) without compiling with -O2
It's shit.
>produces 1MB of garbage
>gets cleaned right away
>never accumulates to 1GB
>cherry picked one algorithm that haskell isn't really good at
>cnile and sepples also use -O2, but it's not okay when haskell does it
make a better bait
picked one algorithm that haskell isn't really good at
It's a pure algorithm that is innately recursive, my first thought while learning haskell was "oh this'll be easy to write in a totally pure way"
As it turns out, it can't do it at all. It's a pure function that causes haskell to freeze and eat ram until it's killed.
Great language, moron. You don't even know what Ackermann is.
Did you use tail recursion?
Here you go, https://en.wikipedia.org/wiki/Ackermann_function
impliment it in haskell and try to run it with ghci
Here's a working C function you can translate, if you want.
int ack(int m, int n)
{
if (m == 0){
return n+1;
}
else if((m > 0) && (n == 0)){
return ack(m-1, 1);
}
else if((m > 0) && (n > 0)){
return ack(m-1, ack(m, n-1));
}
}
I am not the previous poster and I am aware of ackermans function. It is not tail recursive
ack 0 n = n + 1
ack m 0 = ack (m - 1) 1
ack m n = ack (m - 1) (ack m (n - 1))
In haskell it's easier to express it as you can see
What's the point of not optimizing code? And C segfaults both optimized and unoptimized at ack(4, 1) and doesn't end at ack(2, 1) wheras haskell outputs immediately at unoptimised. Are you trying to deceive people? Also ackermans function is notoriously slow to compute
Even if it's slow, it shouldn't crash haskell when C -O0 can compute it. That's my point.
Well haskell requires much more runtime support than C and C maps better to hardware compared to haskell which compiles to something like a graph I think. Also ackerman is kinda fricked in terms of lazyness because it always requires the other argument meaning you can't skip computation. One could come up with something like ackerman but under lazy evaluation a lot of computation can be skipped.
f 0 n = 1
f m n = f (m - 1) (f m (n - 1))
main = print $ f 10 10
int f(int m, int n) {
if (m == 0)
return 1;
return f(m - 1, f(m, n - 1));
}
int main() {
printf("%dn", f(10, 10));
}
Toy example but it could be made more complicated (and less like ackermans function). I didn't run it so I don't know if f(10, 10) is slow. There should be values where C takes forever while haskell outputs immediately. The runtime of haskell gives an edge here, besides of course enabling functional programing
The reason I bring up this example in particular was because even as a noob I became disappointed in haskell from this.
I thought it was *the* problem for haskell, it's recursive, it's iterative, it's easy to write in 3 lines.
It failed, and that soured me. I kept learning, but I never truly used the language, because I feel like I could do so much more for less work in other language.
Let me show you one example where the inverse is true
fix f = f (fix f)
I think this is really *the* problem for functional languages. It's a higher order function that accepts another (usually) higher order function and computes it's fixed point, i.e. the function which when inputed to f gives back itself. You can express the factorial using it as
fact = fix $ \f n -> if n == 0 then 1 else n * f (n - 1)
To implement this in other languages you'd need to implement Haskell's runtime system
Well I will show you another example, an infinite list of Fibonacci numbers
fibs = 0:1:zipWith (+) fibs (tail fibs)
This one uses lazy evaluation to progressively compute as many numbers as are needed
print $ take 10 fibs
These are in my opinion *the* problems for haskell. Also combinators and tacit programming. To a lesser extent (because other functional languages have this) using user defined types (which are more powerful than C) to easily create parsers, do symbolic computation etc. I'll add in monadic error handling but it's kinda niche when it's encountered and I don't want to drag this too long
For infinite series, I can see why haskell or a lazy language would be nice, but for something like factorial why would it be better to represent it with fixed point over a naive implementation?
Why would a naive/fixed haskell implementation be better than a C implementation?
It isn't about the factorial, it's about fix itself. I mean on its own it's just a toy function like Ackermann's, but it demonstrates what you can do. It repeatedly applies f to itself until it reaches the base case. In strict languages it uses currying to be able to do that. And the runtime support it requires doesn't exist in non-functional languages (because the curried f outlives the activation of the instance of fix that creates it). I think it uses every feature of functional programming, albeit covertly. On top of that it looks just like the definition of a fixed point in math: x0 such that x0 = f(x0)
Oh and I'm pretty sure that in haskell using fix results in the same graph as just using recursion
More info: https://en.m.wikipedia.org/wiki/Fixed-point_combinator
>It repeatedly applies f to itself until it reaches the base case.
You didn't specify that there has to be a base case.
It doesn't. It depends. If there isn't a base case it might loop forever like recursion. However sometimes it works
fix (const 1) correctly gives 1 (this might only work in lazy languages)
fix (\x -> x + 1) correctly loops forever because x + 1 has no fixed point
fix (\x -> x^2) loops forever even though 0 and 1 are fixed points, so it incorrectly doesn't find one
In a lazy language it doesn't evaluate anything until fact 5 is needed. Then in the first step of evaluating it expands to fix (\f n -> if n == 0 ...) 5. Then it evaluates fix: (\f n -> if n == 0 ...) (fix (...)) 5. Then it evaluates the lambda. First it needs to check if n is 0, where n is 5. It is false, so it turns into: 5 * (fix (...)) 4 (notice function application to 4 has precedence over multiplication here). We have to evaluate (fix (...)) 4. This is the same as before. I skip to 5 * 4 * 3 * 2 * 1 * (fix (...)) 0. Expanding fix we have (\f n -> if n == 0 then 1 else ...) (fix (...)) 0. Since n is 0 this evaluates to 1 and we have 5 * 4 * 3 * 2 * 1 * 1. Evaluate that to 120. If the base case did not exist it would have just kept going. The interesting thing is that one can code this mechanism in a few characters
>does not accurately describe computing
It has one-to-one correspondence with while loops. Also this is specific to lazy evaluation, but let's say you want to compute the following function:
f(x, y) = if x = 5 then True else y
What should f(5, loop_forever()) evaluate to? Should it loop forever, or since x is 5, should it return True? Looping forever is the value of y. In C it loops forever; in haskell it evaluates to True
> Also this is specific to lazy evaluation but ...
isn't that kind of the problem, though? every example I have seen of lazy eval either isn't something that would ever happen (like this), or is something where unidiomatic C code is used, and there's still an easy way to avoid unnecessary computation. it seems like a great feature for Haskell, but not really relevant to saving processing power when you're comparing to C.
The argument was that one could argue lazy evaluation more accurately describes computing because it computes that function even in the case when its result is defined but some of its arguments loop forever. Also I think you can get the same effect as lazy evaluation in C by inlining and possibly dead code elimination, unless this counts as changing the semantics and the compiler can't do it (I'm pretty sure the standard says something along the lines of arguments being evaluated before the function call, which would make the above optimizations impossible, or else compiler writers would be free to make C lazily evaluated)
>isn't that kind of the problem, though?
What kind of problem? The performance difference? Functional languages require a lot more runtime support, including a garbage collector. Additionally, processors have not been optimised to run functional code, and in particular the runtime system (even though things like map seem to fit vector operations well). In contrast they do offer many perks to imperative languages, like the stack, yet they offer no speculation for things like function pointer calls (this also affects object oriented languages). Computation isn't assembly tho
maybe I should have been more explicit, my apologies. I am talking about how the CPU computes things, so I am more or less talking about assembly. the argument I made somewhere earlier in this thread is that the functional paradigm does not closely reflect how CPUs work when you compare it to imperative languages, and thus you have to jump through hoops to optimize it. someone said it's not a problem with functional languages, but rather the compilers for said languages, but I think it's innate to the functional paradigm that it's difficult to optimize. and GHC is an example of that.
I think it's more like CPUs haven't adapted to functional programming. They could add speculation in the jump-register instructions, they could add hardware support for garbage collection, as well as some structure that can efficiently store anonymous functions that get passed around (right now they are forced onto the heap). These would also yield improvements in non-functional languages. Under the hood they don't implement the assembly programming model anyway. Imperative languages are simpler and require less work to run either way, so they always have an edge. The trade off is functional gives you more power
fact n = foldl (*) 1 [1..n]
quicksort [] = []
quicksort (x:xs) = quicksort lesser ++ [x] ++ quicksort greater
    where lesser = filter (< x) xs
          greater = filter (>= x) xs
primes = sieve [2..]
where sieve (p:xs) = [x | x <- xs, x `mod` p > 0]
I fricked up the last one lol
primes = sieve [2..]
where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p > 0]
> The trade off is functional gives you more power
maybe. Haskell can be elegant, but I wonder whether that's inherent to functional programming or not. if you want to be terse, you can also use an array programming language like BQN, but I doubt that's actually desirable. with some high-level imperative languages you can still solve these problems in an elegant way, even if the syntax has to be a bit more explicit than Haskell.
> Under the hood they don't implement the assembly programming model anyway
I feel like C does map closely to machine code, though. sure, it is still a high-level language, but if you can often predict what the compiler output is going to look like, I think it maps pretty closely to assembly. but FP could work great for higher-level languages.
>whether that's inherent to functional programming or no.
The first two are, except other functional languages might not curry operators like that and may require a lambda. The third requires lazy evaluation because of the infinite list (it returns an infinite list of all the primes). If it used [2..n] instead it works in a strict language, but if the language doesn't support list comprehensions then it has to be expressed using filter
It's not supposed to be terse. To quick sort a list pick an element, quick sort the lesser elements, quicksort the greater elements and place it in the middle
>with some high-level imperative languages you can still solve these problems in an elegant way
Examples?
>I feel like C does map closely to machine code, though
Maybe. C is simple. The simple parts of functional languages map equally well, except they typically use bignums. And FP is only high-level: start dropping features to get lower and it stops being FP
There's this project: https://antelang.org/
It's functional (higher-level) by default, but it allows you to lower when you want.
>I think it's innate to the functional paradigm that it's difficult to optimize.
Correct.
>I think it's more like CPUs haven't adapted to functional programming.
How would you adapt to immutability? You're constantly copying bytes. If you try to use indirection to minimize the copies you're copying pointers which introduce another hop to get the actual data, defeating the cache prefetch.
>They could add speculation in the jump register instructions
What are they speculating on? How do you prefetch scattered data behind a pointer? Apple tried that and now has a nasty exploit in their M chips.
>they could add hardware support for garbage collection
GC isn't the problem. GC languages are not, by nature, slow. There's some overhead on the tightest code vs. C/C++, but it's not the end of the world. Massive data throughput/copying is FP's #1 problem. Computers are designed to work on a shared, mutable space.
>as well as some structure that can efficently store anonymous functions that get passed around (rn they are forced to be on the heap).
The heap is just RAM. The stack is in RAM too. The only place that's faster is cache, but cache is purpose-built hardware. It is not independently addressable. You can't just tell the CPU to give you some cache and keep stuff there.
>Under the hood they don't implement the assembly programming model anyway.
Yes they do.
>How would you adapt to immutability?
Never said that. Reread my suggestions
>You're constantly copying bytes.
No, because the compiler does liveness analysis (in every language) so variables can be placed in the same registers or memory addresses
>What are they speculating on?
The jump destination
>Apple tried that and now has a nasty exploit in their M chips.
Skill issue? Speculating about whether a jump occurs is okay, but now this isn't? Anyway OOP would benefit from that too because it usually is implemented with dynamic dispatch (even in C++)
>Massive data throughput/copying is FP's #1 problem
Wtf are you talking about? Do you even have any benchmarks to show this? The main problem for performance is everything about the runtime system
>Computers are designed to work on a shared, mutable space.
You are implying parallelism here. This has nothing to do with FP, but a shared mutable space has insane overhead which drags down performance. This is the reason why supercomputers use message passing instead. This is irrelevant to FP tho
>The heap is just RAM. The stack is in RAM to
The stack has locality, which means it gets cached, and it's a simpler structure anyway. Since every language uses a stack, CPUs have been optimised with regards to stack operations. For functional programming tho a stack is just not enough. Generalising the idea of a stack (required if functions can outlive the scope that created them), you end up having to use something like a graph, which necessarily needs garbage collection (as you can't know at compile time when certain scopes will die), and so all of this needs to be in the heap
>Yes they do.
My statement wasn't accurate. They provide assembly as an interface but under the hood they do different things which have the same end result (except when they don't and you get exploits)
You are aware that when you write a program you tell the system what you want to do, not how to do it right? This is true even in assembly
>You are aware that when you write a program you tell the system what you want to do, not how to do it right? This is true even in assembly
this doesn't seem like a very meaningful statement, though. yes, if you consider your scope low level enough, even opcodes instruct the CPU to do something, not how they should do it. that much is obvious, computers don't exist so humans can do their work. but when you consider the scope at which we're trying to solve a problem, and have computers actually perform a meaningful task, then there's a world of difference between imperative and declarative languages. with one, we instruct how to solve a task, and with the other, there's an intermediate runtime that has to come up with algorithms for declarative code.
Sure, but imperative languages also get optimised by the compiler, which means variables may be eliminated etc. and the system might end up doing the same thing but not how you described it. The difference is in how you describe it. Furthermore, if you wanna be able to do more advanced things like higher order functions or object orientation you need more runtime support
So basically you're moronic?
>Are you trying to deceive people?
He and the other cnile cultists absolutely are. They're lying snakes and beyond moronic.
I forgot to add that what likely saves haskell at ack(2, 1) is lazy evaluation which avoids needless computation C can't avoid because it's strict
Ignore the C part. I had accidentally swapped the arguments lol. C computes both relatively fast. What's the point of unoptimised code tho?
Again, my point is not that you shouldn't optimize, it's that at a base level the output it produces is garbage. In some cases it may optimize well, but in many it will remain slower than a real language.
>use haskell as main language
>call C within haskell for cases where it's badly optimized (like 1000x slower than C) or where very performant code is necessary.
How painful is the interop?
Can you give an example running the ackermann I posted earlier from within haskell?
Depends on how you choose to do it.
https://hackage.haskell.org/package/inline-c
>ack
and now I realized I'm ruined
depends on the implementation of ackermann algorithm
e.g. the following is faster than C
{-# LANGUAGE BangPatterns #-}
module Main (main) where

import qualified Data.Map as M
import Control.Monad.State.Strict
import Control.Monad

type Table = M.Map (Integer,Integer) Integer

ack :: Integer -> Integer -> State Table Integer
ack 0 n = return (n+1)
ack 1 n = return (n+2)
ack m 0 = ack (m-1) 1
ack m 1 = do
    !n <- ack (m-1) 1
    ack (m-1) n
ack m n = do
    mb <- gets (M.lookup (m,n))
    case mb of
        Just v  -> return v
        Nothing -> do
            !s <- ack m (n-2)
            !t <- ack m (n-1)
            let foo a b = do
                    c <- ack (m-1) b
                    let d = max a c
                    return $! d
            !v <- foldM foo 0 [s .. t]
            mp <- get
            put $! M.insert (m,n) v mp
            return v

main :: IO ()
main = print $ evalState (ack 4 1) M.empty
>literally beats cnile
>not optimized
https://research.microsoft.com/en-us/um/people/simonpj/papers/ndp/haskell-beats-C.pdf
then simply write a better compiler for said functional language
the functional paradigm has no fault here
the functional paradigm does not accurately describe computing, it does not make for a good model for computation and thus it's innately problematic to make it optimal. GHC is already an insanely bloated compiler and Haskell is still not considered a very fast language.
Beyond moronic. Go do your homework and don't come back until (if) you get a job.
He accurately stated the problem with FP. I would tell you to go do your homework, but systems architecture is way above your head.
In theory it could.
compilers just aren't good enough yet
>write pure
>compiles to state machine anyway
What's the point? You could just write it as a state machine and it'll compile faster AND be faster in more cases.
I couldn't tell you.
But I guess I can explain it as a pretty lofty aspirational goal. If you really did manage to achieve a functional language and compiler that actually was able to compile things optimally, that would be a pretty big achievement. It would probably mean we figured out something about programming that we didn't quite know before, and that would probably be useful in all kinds of ways.
This could also be a pipedream too, idk, I don't use functional languages.
What's the next big programming paradigm? how do we transcend the limits of object oriented programming and functional programming? (they're both shit) We need to think up something new here. Perhaps we can get some inspiration from Alfred North Whitehead and his process philosophy
functional programming arguments get way funnier when you realize morons actually believe every FP language is like haskell
>Currying
Hello, sir