Modern PHP is bad but not that bad. They patched it up as much as they could. Avoid these twenty features, remember these thirty caveats, always read the docs carefully, get a framework or set up some basics yourself, and you can write pretty normal well-structured code. (But why would you?)
Laravel is ass.
The majority of the bad things PHP is famous for are from the pre-5.6-ish era. Modern PHP code that follows PSR standards is fine, Composer isn't worse than any other package manager, and the inconsistent stuff that's always in every PHP hate thread is pretty much deprecated and unused. But if you have to support legacy code or develop for an old CMS like Joomla or Modx, it will make your eyes bleed. Also,
>you might as well use aws/gcp free tiers
I have to send them my loicence and ID. Plus they have a bandwidth limit of something like 500MB per month. So if I visit the site too much I have to wait for the end of the month to use it again.
2 months ago
Anonymous
>500mb
it's 100gb (and you have 1tb transfer in cloudfront for assets so it's like 100gb of text)
>loicense and id
fair i guess
still if your project goes anywhere free hosting wont do it, but i guess its fine to use php if you dont really give a frick
>unlimited web space for 6.95 a month
Does that mean storage? Ain't no way blud. What's stopping me from using these morons as a CDN and recreating Youtube but hosting everything on their servers?
>oh no, my language is readable by default and doesn't need me to jump through 20 hoops and have a gold medal in mental gymnastics to make it execute lines one after the other instead of in literally whatever order
>filtered by async
you literally add one await before stuff, it isnt rocket science
also why would you care about the order things run as long as you get the desired result
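to be fair the "one await" thing really is that small; a throwaway JS sketch (fetchUser here is a made-up stand-in for any async io call):

```javascript
// fetchUser is a hypothetical stand-in for any async io (db query, http call, ...)
async function fetchUser(id) {
  return { id, name: "anon" };
}

async function handler() {
  // the single await: without it you'd be holding a pending Promise, not the value
  const user = await fetchUser(1);
  return user.name;
}

handler().then(name => console.log(name)); // logs "anon"
```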
enjoy your colored functions moron
and the solution to that is to have no way to do concurrent io in a language that is made to do concurrent io?
async functions can only be called from async functions, and so sooner or later everything has to be async in your project.
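for anyone unfamiliar, the propagation being described looks like this in JS (all names invented); the caveat from the replies also holds: a plain sync function *can* call an async one, it just gets a Promise back instead of the value:

```javascript
async function query() { return 42; }          // pretend this does io

// to use the awaited value, the caller must itself be async...
async function service() { return (await query()) + 1; }

// ...and so must ITS caller, and so on up the stack
async function controller() { return await service(); }

// a plain sync function can still call it, but only receives a Promise
function entry() { return controller(); }

entry().then(v => console.log(v)); // logs 43
```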
2 months ago
Anonymous
>and so sooner or later everything has to be async in your project.
Okay but what are the ACTUAL consequences of this? Worse performance? Also no, I don't know JS, but in C# you can just call an async function by creating a new Task().
2 months ago
Anonymous
>async functions can only be called from async functions
that's not even remotely true. what the absolute frick?
there isn't one. people who quote the colored functions blog post are moronic and don't realize everything is ultimately colored by things they don't understand; mainly: side effects.
just throwing an N:M scheduler doesn't solve every problem either since you still need to make decisions about task yielding or preemption. Also look at CGo for the consequences of such a runtime.
>Process Control should not be enabled within a web server environment and unexpected results may happen if any Process Control functions are used within a web server environment.
Also just not what async is, generally it involves intra-process and even intra-thread concurrency. There exists async for PHP but only as third-party libraries AFAIK. I'm seeing Swoole and ReactPHP. A coworker mentioned one of these once but I've never met anyone who uses them
Concurrency in php is mostly handled by the OS juggling multiple instances of php-cgi for multiple separate requests. The language wasn't built for its own concurrency.
2 months ago
Anonymous
what are the actual benefits from running async/concurrency in the context of a web server? in benchmarks php performs just as well as "async" solutions so I suspect there's some myths spread by some people.
user requests are never really concurrent, they come in a sequential way, and can be answered each as they come or in bulk depending on what it is.
as for server side processing, I've never encountered a situation where it can be a bottleneck, and I've been paid to create server side scripts for over 10 years.
tasks that require really concurrent processing usually aren't server side scripts but actual programs that are written in a lower level language, and they can feed off data sent by php.
you can create an infinite loop in php that sends and receives data instantly if needed, that is lightweight, simple, and doesn't require any architecture change. I've built a trading bot with this logic and it's extremely fast, no async inside it.
concurrent processes are important though, but that is managed by the OS, not by php.
also if you really want to, you can manipulate subprocesses and probably achieve some sort of async.
so can someone explain the point of this async/concurrency hype in the language, with any valid example?
2 months ago
Anonymous
>so can someone explain the point of this async/concurrency hype in the language, with any valid example?
apache2 (threaded/forking) vs. nginx (event loop)
That said it only matters if you want to serve *a lot* of requests. Which is why I'm super not into Rust: People love their async shit there too and it just gets in the way most of the time.
2 months ago
Anonymous
those are web servers, I'm afraid I didn't get your point. I like nginx and concurrency makes sense in the context of a webserver obviously.
I guess my point is that if you need to process a lot of data or concurrency you use a language like c++, not a higher level language like php.
I'm also not a fan of rust. I think it mostly attracts people who want to feel smart because of its unreadable syntax, and people justify it by saying it's memory safe even though you can write memory-safe code in any language and dangerous code in rust. that's detrimental overall, but it kind of filters the people you'd want to work or not work with, like for php/js.
2 months ago
Anonymous
Apache and nginx handle the concurrency part for php. You could imagine all of php being async/await with nginx.
With full-featured languages, it's the same process that controls the web server as well. It's the whole nginx + php in one. So to support concurrent processing, the language itself has to support concurrency.
2 months ago
Anonymous
yes but if nginx handles concurrency, and you need nginx anyway, what is the point to support it in the language itself and write async code yourself?
2 months ago
Anonymous
There are still some places where it's useful and would speed things up, just not as many as in others.
Like one poster said, concurrent sql queries. Fetching user info and page info at same time for example rather than sequentially.
2 months ago
Anonymous
>Fetching user info and page info at same time for example rather than sequentially.
that is 2 items
it makes virtually no difference to gather those 2 items sequentially as opposed to concurrently, and the cost is unneeded complexity in the code.
if the source of the query is the user, requests will come sequentially anyway.
if it's a backend task, we'd need more info on what you really want to do with those 2 results. and the tables, the structure, the architecture of the database are made by developers / business logic, so it's always a small number, it's never millions of "items" to fetch. the quantity comes in the number of rows gathered from the sql database. you can do whatever you want with those. if you want to be efficient on ram and cpu, you don't process everything at once.
still doesn't make sense to me.
2 months ago
Anonymous
Most of the time it's not needed yeah but there's times you do want it. There can be slow queries or apis that take 1 sec to receive results. Concurrently you can send 2 requests, combine them to what you want and be finished in 1sec. Sequentially you'd be waiting 2 seconds. The longer and more you need the worse sequential is.
There was syncing data between db and elastic at one of my jobs. Sequentially would take hours, concurrently minutes.
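the 1s-vs-2s arithmetic above is easy to demo in JS (sleep fakes the two slow upstream calls; names and numbers are made up, scaled down to ms):

```javascript
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

// two fake slow calls, stand-ins for a slow query and a slow api
async function userInfo() { await sleep(100); return "user"; }
async function pageInfo() { await sleep(100); return "page"; }

async function concurrentFetch() {
  const t0 = Date.now();
  // both waits overlap, so the total is ~100ms instead of ~200ms sequentially
  const result = await Promise.all([userInfo(), pageInfo()]);
  return { result, ms: Date.now() - t0 };
}

concurrentFetch().then(({ result, ms }) => console.log(result, ms));
```

same idea scales: the more independent slow calls you have, the worse the sequential version gets relative to this.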
2 months ago
Anonymous
if you notice that the query time exceeds 1 sec for some reason, and it's a problem for sequential processing, you split the script and execute both scripts in 2 processes; total time will be equal to the total time of the async version. 2 processes is reasonable.
I would be surprised if it wasn't possible to make one big query in your case. I'd need to know the details to tell, but I've never encountered your kind of issue, and if it was truly impossible to bulk the queries or something like that, then it means the solution picked is a bad one, and that's what a dev needs to fix. php + mysql can do mostly anything web related.
if you don't do that it's an endless negative cycle: something is badly implemented or made, you create tools to handle this bad implementation, then people create other tools to make that more efficient. that's nonsense: you should fix the initial issue.
2 months ago
Anonymous
Many times you don't control the db or what you have to call. It's mostly in small freelance work that never hits production that you can redesign everything every time.
Many times you're told to call these few apis and combine data. Sometimes they are slow, sometimes fast. You can't change that. You can help it with concurrency though.
Opening multiple processes to solve an easy problem is the epitome of an unmaintainable mess. It's the php cancer that's hated and why it's called a shitty language. With a proper language it's one extra line of code; with php you start thinking of multiple processes.
2 months ago
Anonymous
Why would you pull in data from multiple slow APIs and spit out a static page? No matter what language you use, it's going to be a shitty experience. Why do morons keep bringing up this unrealistic scenario? Are they bots?
2 months ago
Anonymous
Why does it have to be a static page? At current job our service calls to 10 other services. One request might call to 4 other services to find all related data. And we're just a small part with a single instance service.
It's a scenario that happens every day when you're not a pajeet doing freelancing for small marketing sites.
2 months ago
Anonymous
>calls dozens of live services with each page load
>what is caching?
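for reference, the kind of caching being snarked about is a few lines in any language; a naive TTL cache sketch in JS (all names invented):

```javascript
// naive in-memory ttl cache: enough to stop hammering slow upstreams on every page load
function makeCache(ttlMs) {
  const store = new Map();
  return async (key, fetchFn) => {
    const hit = store.get(key);
    if (hit && Date.now() - hit.at < ttlMs) return hit.value; // warm hit
    const value = await fetchFn(); // slow upstream call only on miss/expiry
    store.set(key, { value, at: Date.now() });
    return value;
  };
}

const cached = makeCache(60_000);
let upstreamCalls = 0;
const slowService = async () => { upstreamCalls++; return "payload"; };

(async () => {
  await cached("page:1", slowService);
  await cached("page:1", slowService); // served from cache
  console.log(upstreamCalls); // logs 1
})();
```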
2 months ago
Anonymous
>and you need nginx anyway
[citation needed]
2 months ago
Anonymous
what are you using to handle http requests?
2 months ago
Anonymous
php itself should be able to handle http. for some reason, people really hated http (justifiably so) and had a hard-on for fcgi since it guaranteed less ambiguous header reprs since it was binary encoded, but ya, no one talks fcgi over the internet sadly. what we got for a proper binary http was http/2 and http/2 is shit beyond belief.
2 months ago
Anonymous
most languages can handle http by themselves, they dont need anything in front of them
for php, swoole and roadrunner can handle requests directly without nginx, and frankenphp uses caddy iirc
2 months ago
Anonymous
I didn't even know. but I doubt this is a better solution than nginx + php or even apache for real world scenarios.
2 months ago
Anonymous
>so if someone can explain the point of that async/concurrent hype into the language, any valid example.
>Be you
>Be tasked with maintaining a web dashboard in PHP that's in charge of rendering millions of paginated records, calculating different metrics ($$$) and counting users
>On top of that, in order to feed other information, you need to query another three external APIs because your architecture sucks and now you're the code janny
>You try refreshing the page. 30 seconds later it times out
>Profiling and benchmarking shows you it actually takes more than 180 seconds now to render this page sometimes
>Turns out the php file in charge of doing this is doing ALL of this sequentially
>There's no proper async/concurrent support for performing all of this
>You try to optimize everything as much as you can, but the final result is a page that still loads extremely slowly (5+ seconds) and sometimes times out on the user
>There's no partial rendering or pre-loading and you're stuck with a mediocre solution that's bound to piss off customers and bleed you money.
>You cannot make concurrent requests so everything has to awkwardly sit statically for 20 seconds in the best case before you start showing shit. Worst case it times out
>You now realize why async/concurrency is helpful, but PHP offers deficient support for it
2 months ago
Anonymous
>write shitty code
>i-it's the language's fault!
Massive cope. So what's your proposed solution? Write some shitty async garbage in Python that does the exact same thing? If you're going to rewrite it anyway, why not just use AJAX like everyone else?
2 months ago
Anonymous
>It's the language's fault
Yes.
>So what's your proposed solution?
Limit yourself to using php for small projects. When you need to deal with more complex use cases, use the appropriate tool for it, instead of having to hack around a templating scripting language. Not even israelitebook could handle php, they had to fork it and rewrite it to make it usable for their use cases.
>Python
No lol. Just stop overcomplicating everything and use Go for backend, jfc.
2 months ago
Anonymous
>use Go for backend
>just take 20 times more time to build an app than it would take in Laravel, Rails, ...
2 months ago
Anonymous
What? That's literally Go's forte. It takes far less time building than any of those frameworks you mentioned.
2 months ago
Anonymous
>t. never had a job
2 months ago
Anonymous
I literally work as a PHP janny. Our builds take a frickton of time. I worked in the past for an enterprise that used rails and guess what, our build took far more than the slowest go build. Again, are you baiting? What the frick are you talking about?
2 months ago
Anonymous
go always takes more lines of code and time to make something compared to php. which is why I never believed in it and think it will go the way of ruby. unless it somehow manages to replace java and c#...
>there's so many reasons why an application could have performance issues, blaming it on not using async isn't a proof or convincing at all
i wasnt even talking about async there, i was talking about fpm's model.
and i surely am not talking out of my ass, i worked on migrating a largeish monolith application (700m+ requests per month, 1m lines of php) to roadrunner, and we saw significant gains (up to 20% in user requests, ~40% in high rpm api calls) from just not having to run framework code. we got additional gains after changing the application code to use in-memory caches, which _is_ impossible without the worker model (you can use apcu but you'll still have a lot of overhead deserializing stuff)
>that's why php performs well
it doesnt, fastcgi fricking sucks, every benchmark will tell you that
>they don't make concurrent requests
did you ever think about having multiple users? if they access your site at the same time, it will be concurrent, moron
>it doesn't matter if it's not a bottleneck, and it usually isn't
source: your ass
it isnt the bottleneck because you spawn a trillion processes to deal with the fact the language _cant do async_
you will pay more in infrastructure due to that, it's a fact
[...] >if the antifraud service belongs to you
it doesnt, i meant "our" as in "the ones we use"
>if you have to deal with client's garbage servers [...] I would reject the fault to the person responsible for the remote server, and notify the user that his query is being processed
you realize sometimes you have no control about it? your users will yell at you because your endpoints are slow, your upstream services will take forever fixing their shit or just go "yea this is the best we can do tough luck", and you'll be left wondering "why the frick i work with php still"
face it, scenarios where async is useful happen a lot of the time on backend stuff, and if the language doesnt support it its simply a bad choice for backend dev
>gain from just not having to run framework code
if you're telling me you ran a large application and relied on a framework, instead of actually using php for a proper tailor-made solution, I'm not surprised you ran into issues.
>every benchmark will tell you that
which ones?
>did you ever think about having multiple users? if they access your site at the same time, it will be concurrent, moron
the initial discussion was about user requests themselves needing to be concurrent for that one user, not server-side http request processing; you're talking about something else
>source: your ass
can you explain why something that isn't a bottleneck would matter? If I don't need to use an extra server, I won't pay more. and if I don't need an extra server that means there are no bottlenecks. I'm sorry but again, I think you're imagining a situation where process management is an issue because of php. I've had countless discussions like this here and elsewhere. not a single time was php the bottleneck in the context of a web backend.
>you realize sometimes you have no control about it?
yes, maybe. but maybe the issue is the people you work with; if you work with people with shit infrastructure where nothing ever gets fixed, that will obviously have impacts on the end user. php is not the problem here.
you can do async with php with third party libraries or frameworks I guess, but I still consider it as bloat.
2 months ago
Anonymous
>go always takes more lines of code
we are talking about build time, moron
2 months ago
Anonymous
Why? That's not even what the original post (
>use Go for backend
>just take 20 times more time to build an app than it would take in Laravel, Rails, ...
) was about.
2 months ago
Anonymous
build = compilation
You meant development time then. Which is still wrong because go is as simple as it gets.
2 months ago
Anonymous
>go always takes more lines of code and time to make something compared to php.
What the frick are you on about? Go was made for you to be productive. It's literally as simple as it gets. I'll grant more LOC, but more time?
2 months ago
Anonymous
And also I was talking about compilation time so nice job shifting those goalposts
>if you're telling me you ran a large application and relied on a framework
yeah because in the real world people love reinventing the wheel and not actually shipping anything
>If I don't need to use an extra server, I won't pay more
the point is you will need extra servers, beefier servers, to handle all those processes taking extra memory and just being able to handle less requests/server overall
if one server is all you need php is fine, but for large applications it isnt enough
>not a single time php was the bottleneck in the context of a web backend
ive shown you many examples where php and its ecosystem were literally the bottleneck
>maybe the issue is the person you work with, if you work with people with shit infrastructure and nothing is ever fixed, that will obviously have impacts on the end user
in the end, you're paid to solve stuff. as i said, many times you have no control, and you need to solve it. you can use subprocesses, you can spawn jobs, but it's a way more complex solution than if the language just had actual concurrency support
it can be worked around, that's not the point. the point is whether it's a good language for backend dev, and it isnt; it lacks features that are extremely useful for backend dev
2 months ago
Anonymous
alright, swoole makes php faster, maybe 3x faster. but outside of benchmarks, 0.5% scenarios, or badly written applications, I still couldn't see a real benefit over simple php.
in your links I saw some buzzwords, microservices, frameworks, laravel, so I'm not confident in the person's ability to produce an efficient php application. those benchmarks are still not representative, like everything I've seen.
>yeah because in the real world people love reinventing the wheel and not actually shipping anything
to me, using a framework with code I didn't write is precisely that. libraries exist and they're enough to avoid reinventing the wheel.
>the point is you will need extra servers
I'm disappointed that I couldn't find any situations where pure php is a bottleneck in a properly written nginx/php/mysql application. I guess I need to see it with my own eyes.
> it's a way more complex solution than if the language just had actual concurrency support
in my opinion no... async code is a mess to look at and more complex to write, and can only be potentially useful in niche scenarios.
if you really need performance, to the point where you need to cut the number of servers to save money, you use a lower-level language. probably also true for go/java/c#. php is for fast, reliable, maintainable programming, which is the opposite of lower-level languages. it's all about tradeoffs.
2 months ago
Anonymous
>produce an efficient php application though
lmao
there's a reason why big companies that used to run on php moved on to things like hacklang (which, unsurprisingly, has async constructs)
>I'm disappointed that I couldn't find any situations where pure php is a bottleneck in a properly written nginx/php/mysql application
write something popular and pay the compute bills, im sure you'll find one then
2 months ago
Anonymous
not the guy you are replying to but you're obviously delusional and moronic lmao nobody uses hack
2 months ago
Anonymous
the only huge corpos that used php that i remember were slack and facebook
both moved on, and i dont recall anyone else using it (i think lyft used php too but they went the microservice route)
no one uses hack, i agree, but its the only way out of php. the right thing would be not to use it in the first place
2 months ago
Anonymous
modern PHP is based
it's fast like no other interpreted language, it's quite literally the only dynamic lang that has opt-in static typing, the ecosystem is top tier and the only major feature it's missing at this point are generics
you just have a hateboner for it because it's cool to hate on PHP
2 months ago
Anonymous
how is pointing out objective flaws on the language/ecosystem "hating it"
i've been mostly pointing out how moronic the fastcgi model is at scale and the inability to do concurrency. both are facts, no matter how you look at it
modern php is okay, if all your code is up to standards. this wont be the case in 99% of the codebases, which will still reek of phpisms: no typing, array abuse, superglobal usage, magic code abuse (dont get me started on laravel's facade bullshit), no annotations which is the fricking only way to get decent typing support, using empty() for everything
i dont hate it because its cool to, i hate it because i use it daily. i dont really care about its flaws, in the end of the day i just stopped caring because it pays the bills. i just wont say its a good lang, because it isnt
>the ecosystem is top tier
the orms are okay, the remaining stuff is subpar or just the same as everything found in other langs
>the only major feature it's missing at this point are generics
which it will never get, by nikita's own words. it also misses some kind of sum types, be it sealed classes or tagged unions
also, the way the language committee works is really moronic. anything even slightly controversial gets turned down because "php isnt fit for that" or something. no, you cant just not use this feature if you dont like it, you need to keep it out of the lang. so fricking annoying
2 months ago
Anonymous
>it will go the way of ruby
lol you picked the worst example possible
Ruby mogs literally every language even in comfiness and code beauty and speed of development
2 months ago
Anonymous
Not to defend anything or whatever, but:
>one of the biggest online platforms to date
>~~*not even*~~
2 months ago
Anonymous
That is fine. Concurrency will just hurt your brain.
>in benchmarks php performs just as well as async
it doesnt
php-fpm performs way below almost every language/application server due to the 1 request per process model (every request "creates" a new process). this model means you need to rerun everything in order to serve a new request: framework bootstrapping code, loading any in-memory caches, etc.
in larger applications this causes a significant performance hit.
alternative application servers like swoole or frankenphp avoid this by running in worker mode: each process is a loop that accepts a request, runs it, and then waits for another one. this solves the need to run everything every request, but you still need to run multiple processes, which implies higher memory requirements as every process needs its own instance of things
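in case the worker model is unclear, it's just this shape (a toy sketch in JS, which already works this way; the request objects are fake):

```javascript
// worker model: bootstrap runs ONCE per process, then a loop handles requests
const bootedAt = Date.now();  // "framework bootstrap" cost, paid a single time
const cache = new Map();      // in-memory cache that survives across requests

function handle(request) {
  if (cache.has(request.path)) return cache.get(request.path); // warm hit
  const response = `rendered ${request.path} (booted ${bootedAt})`;
  cache.set(request.path, response); // expensive render only on the first hit
  return response;
}

// under fpm's model, every one of these would re-pay the bootstrap cost
const first = handle({ path: "/home" });
const second = handle({ path: "/home" }); // no re-bootstrap, no re-render
console.log(first === second, cache.size); // logs: true 1
```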
>user requests are never really concurrent
objectively wrong
requests are served concurrently even in php, by using multiple processes
by using async you can serve another request while waiting for io in one request, which means you need less instances of your application to serve the same amount of users, which means you spend less on infra/higher performance for the same price
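the "serve another request while waiting for io" part, sketched in JS (sleep stands in for any io wait; names invented): one single-threaded process overlapping two requests instead of queuing them.

```javascript
const sleep = ms => new Promise(r => setTimeout(r, ms));
const log = [];

// each "request" spends most of its time waiting on io
async function serve(id) {
  log.push(`${id} start`);
  await sleep(50); // while this request waits, the process serves others
  log.push(`${id} end`);
}

// one single-threaded process, two overlapping requests:
// standalone this logs [ 'a start', 'b start', 'a end', 'b end' ],
// i.e. both started before either finished
Promise.all([serve("a"), serve("b")]).then(() => console.log(log));
```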
>any valid example
aside from the obvious example of serving concurrent requests, you can do things like "call both of our antifraud services at the same time" or "read this data from redis and query the db for this other data", which reduces the time the end user waits
"i dont need it" isnt a valid justification
there are many valid reasons, and if you're going to say "oh you just need to rework your architecture/table structure/queries" or whatever to make up for a language deficiency, its just pure cope
>the cost is unneeded complexity in the code
when the alternative is creating subprocess or jobs, id argue async is the lesser evil
Red functions are the devil.
Use green threads instead.
>use green threads instead
i would, if fibers wasnt fricking vaporware
its been like 3 years, no support
>every request "creates" a new process
no, it's not, fricking moron, it has a pool of sub-processes that's gets reused.
2 months ago
Anonymous
and how does that change anything i've said?
i know it uses a process pool, but you still need to rerun all your code, and it's still a huge performance hit. saying it "creates" a new process is just an easier way to explain it. you also need ugly extension-level workarounds for connection pooling/persistent connections and so on
you can literally google any benchmark and see it for yourself, fpm _always_ lands dead last compared to servers using a worker model. it sucks even for php standards
concurrency exists, its called fibers
[...]
>async functions can only be called from async functions, and so sooner or later everything has to be async in your project.
>concurrency exists, its called fibers
as i said, fibers are vaporware
show me an example using fibers in any production application
>and as sooner or later everything has to be async in your project.
>Okay but what are the ACTUAL consequences of this? Worse performance? Also no, I don't know JS, but in C# you can just call an async function by creating a new Task().
none
the colored functions post is like 10 years old, the main issue was callback hell as promises werent really a thing
now people just add async/await/Promise<x> (or Task<x>) to everything and call it a day
2 months ago
Anonymous
>the colored functions post is like 10 years old, the main issue was callback hell as promises werent really a thing
isn't the callback mechanism an ALTERNATIVE to the promise/async model?
2 months ago
Anonymous
it depends. this goes into the whole literature about "nodebacks" and whether they're invoked immediately or put into some mechanism to defer their work. a callback itself is just code you can invoke at any time.
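concretely, in node terms (readish is invented; the err-first signature is the "nodeback" convention):

```javascript
// a "nodeback" is just the err-first callback convention: cb(err, result)
function readish(ok, cb) {
  if (!ok) return cb(new Error("bad input"), null); // anti-pattern: sync invoke
  // well-behaved nodebacks defer, so callers always observe async behavior
  process.nextTick(() => cb(null, "data"));
}

// wrapping a nodeback in a promise is mechanical, which is why promises and
// async/await made callback hell mostly a non-issue
const readishP = ok =>
  new Promise((res, rej) => readish(ok, (err, d) => (err ? rej(err) : res(d))));

readishP(true).then(d => console.log(d)); // logs "data"
```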
2 months ago
Anonymous
why though? the process basically needs to be nuked from orbit (exec()) unless php does fricky shit to re-execute the entrypoint of the process.
2 months ago
Anonymous
you are a moron without any idea how it works
2 months ago
Anonymous
I don't because I'm not a PHP tard.
what you said sounds like how I'd do it, an accept() loop in the same process in perpetuity.
so how does this FPM mechanism work then? do you keep the interpreter and just feed it the same script somehow? is it like I said? you take the same process and invoke exec with the same fricking interpreter and arguments?
2 months ago
Anonymous
fpm does fricky shit to replace the process' image
in practice this saves the kernel-level overhead of having to create a new process, but that's all
>you are a moron without any idea how it works
he's right though? it essentially reruns the process, whether it creates a new process or not changes very little fundamentally
>the colored functions post is like 10 years old, the main issue was callback hell as promises werent really a thing
>isn't the callback mechanism an ALTERNATIVE to the promise/async model?
in JS they're kind of the same, both use the continuation-passing style implementation
but promises and language-level async/await constructs make it way less painful to use, which was one of the issues (and to me the one that matters the most) the colored functions post raised
>in larger applications this causes a significant performance hit.
every time people claim this. there's so many reasons why an application could have performance issues, blaming it on not using async isn't a proof or convincing at all.
by the way, fastcgi doesn't create a new process for each request as far as I know. php fastcgi keeps the process alive and waits for another request. maybe that's why php performs well.
>objectively wrong
what are we even talking about? I'm mostly talking about a backend for web services. a website. users click on a link. they have one mouse cursor. they don't make concurrent requests. unless maybe the front end is some garbage js code, which is bad design anyway. I can't think of the last time I had to use js in a webpage.
if it's a backend for some online games, alright maybe there's concurrency. and I've never worked on those kinds of projects. I would assume that it would need a lot of resources and you would maybe need to use a lower-level language than php? if you claim php couldn't work here, I can accept that's possible.
>by using async you can serve another request
it doesn't matter if it's not a bottleneck, and it usually isn't.
>everytime people claim this. there's so many reasons why an application could have performance issues, blaming it on not using async isn't a proof or convincing at all.
reading for you:
thundering herd
slowloris
async solves these because it doesn't constrain your task execution to allocating (pseudo) stacks and deferring to a scheduler that makes a likely incorrect decision about interrupting and rescheduling tasks.
the people who keep shouting "fibers" are likely referring to Google's kernel fork, which offers some kernel mechanisms letting users build N:M schedulers in userspace, sort of like futex for (cross-process) userspace synchronization primitives.
>thundering herd
sounds like an OS issue
anyway, to run into that issue you need a lot of users, and if you have a lot of users you're supposed to run extensive benchmarks before production
>slowloris
sounds like a job for firewalls, not scripts
>so if someone can explain the point of that async/concurrent hype into the language, any valid example.
>Be you
>Be tasked with maintaining a web dashboard in PHP that's in charge of rendering millions of paginated records, calculate different metrics ($$$) and count users
>On top of that in order to feed other information you need to query another three external APIs because your architecture sucks and now you're the code janny
>You try refreshing the page. 30 seconds later it times out
>Profiling and benchmarking shows you it actually takes more than 180 seconds now to render this page sometimes
>Turns out the php file in charge of doing this is doing ALL of this sequentially
>There's no proper async/concurrent support for performing all of this
>You try to optimize everything as much as you can, but the final result is a page that still loads extremely slow (+5 seconds) and sometimes times out to the user
>There's no partial rendering or pre-loading and you're stuck with a mediocre solution that's bound to piss off customers and bleed you money.
>You cannot make concurrent requests so everything has to awkwardly sit statically for 20 seconds in the best case before you start showing shit. Worst case it times out
>You now realize why async/concurrency is helpful, but PHP offers deficient support for it
>30 seconds later it times out
I would investigate the database, structure, indexes, etc. millions of records is not a lot. the other day I was working on a server I pay less than $10 a month for; it had millions of financial records (mariadb), zero performance issues, shared hosting. lol
also in your example you have shit code, and you should have multiple scripts to handle multiple tasks. fix your code.
too bad, bad example.
>would investigate the database, structure, indexes, etc. millions of records is not a lot.
True, and then you would perhaps shave off another 15 seconds, after battling for weeks with SRE, DBRE and everyone else, because indices, partitioning and other non-DB stuff are expensive. Congrats, everything still runs sequentially.
>also in your example you have shit code and you shall have multiple script to handle multiple tasks.
And all of that PHP runs sequentially. In order to render the page, you need to do all those tasks (i.e. the page is synchronous). PHP doesn't have a decent solution to do it async. Meanwhile other languages (even fricking node.js) are capable of doing it without so many hassles. The moron above kept having a tantrum over other languages "riding the async/concurrent hype train" and I showed him why it's important if you want to create big, enterprise apps.
>And all of that PHP runs sequentially. In order to render the page, you need to do all those tasks (i.e. the page is synchronous). PHP doesn't have a decent solution to do it async.
If you have a query that takes 10 seconds to run, the backend won't serve the page until those 10 seconds are up, regardless of whether it's PHP or a language with async features. Instead of making the user wait, why not just serve the page as quickly as possible, then offload the 10 second query to an API and pull it via AJAX? It's a better user experience than watching their browser spin while a huge static page is loading. That's why this whole conversation is pretty stupid.
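A sketch of that split, with made-up file and function names: a separate endpoint serves the slow data as JSON, and the already-rendered page pulls it via fetch()/AJAX after first paint instead of blocking the whole page on the query.

```php
<?php
// metrics_api.php (hypothetical name): the endpoint the page polls
// after it has already rendered its shell.
header('Content-Type: application/json');

function compute_metrics(): array {
    // stand-in for the 10-second query / expensive DB work
    return ['total' => 42];
}

$payload = json_encode(['metrics' => compute_metrics()]);
echo $payload;
```

Client-side, the page just does something like `fetch('/metrics_api.php').then(r => r.json())` and fills the widget in when the data arrives.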
>why not just serve the page as quickly as possible, then offload the 10 second query to an API and pull it via AJAX?
That's what I've been proposing the entire time. I was explicitly telling that moron who said concurrency / async isn't necessary that they can just build a decent backend, do AJAX in the front end, preload the page and not block the user experience. That's an irl usecase where async actions are necessary. If you want to keep using php then just use it for the backend, but don't force the site to use PHP to echo the DOM when it can't do async; that's when PHP isn't the right tool.
>there's so many reasons why an application could have performance issues, blaming it on not using async isn't a proof or convincing at all
i wasnt even talking about async there, i was talking about fpm's model.
and i surely am not talking out of my ass, i worked on migrating a largeish monolith application (700m+ requests per month, 1m lines of php) to roadrunner, and we saw significant gains (up to 20% in user requests, ~40% in high rpm api calls) from just not having to run framework code. we got additional gains after changing the application code to use in-memory caches, which _is_ impossible without the worker model (you can use apcu but you'll still have a lot of overhead deserializing stuff)
>that's why php performs well
it doesnt, fastcgi fricking sucks, every benchmark will tell you that
>they don't make concurrent requests
did you ever think about having multiple users? if they access your site at the same time, it will be concurrent, moron
>it doesn't matter if it's not a bottleneck, and it usually isn't
source: your ass
it isnt the bottleneck because you spawn a trillion processes to deal with the fact the language _cant do async_
you will pay more in infrastructure due to that, it's a fact
>things like "call both of our antifraud services at the same time" or "read this data from redis and query the db for this other data", which reduces the time the end user waits
if the antifraud service belongs to you, the response time should be a few ms at most. if it's seconds, fix that before adding complexity.
same for the redis + other db query. in which world do you need to wait 4 seconds for one request, then some more for the other, before echoing the result to a user? damn. fix your servers.
if you have to deal with a client's garbage servers, and a user ends up having to wait, personally I would put the blame on the person responsible for the remote server and notify the user that his query is being processed. it could then take a few seconds or more. then he refreshes his page or gets notified when the remote server decides to answer. I'm not downgrading my code because of someone else's shitty code.
>lesser evil
if the need for async code is a consequence of bad design decisions, it's something to correct, not to waste time on.
using extra processes is a simple solution that any OS can handle. implementing async in the code itself seems like unneeded complexity.
>if the antifraud service belongs to you
it doesnt, i meant "our" as in "the ones we use"
>if you have to deal with a client's garbage servers [...] I would put the blame on the person responsible for the remote server, and notify the user that his query is being processed
you realize sometimes you have no control over it? your users will yell at you because your endpoints are slow, your upstream services will take forever fixing their shit or just go "yea this is the best we can do, tough luck", and you'll be left wondering "why the frick do i still work with php"
face it, scenarios where async is useful happen a lot of the time in backend stuff, and if the language doesnt support it, it's simply a bad choice for backend dev
i used to ask myself the same thing in middle school more than a decade ago when i saw the claim that 95% of the internet runs on php
>where's the .php huh
but of course i was moronic and didnt know any better
please explain it to me, as you would to yourself
PHP websites typically do have the .php extension for their PHP files. However, in some cases, developers might choose to use URL rewriting or server configurations to hide the file extensions for aesthetic or security reasons. This technique is often referred to as "URL rewriting" or "pretty URLs".
Instead of having URLs like example.com/page.php, they might appear as example.com/page. This can make the URLs cleaner and more user-friendly. It's achieved by configuring the web server to internally rewrite the URL to include the .php extension on the server side, but present it to the user without it. This doesn't mean that the PHP files themselves don't have the .php extension; it's just hidden in the URL.
>t. chatgpt
if you can't keep up with technology you will become illiterate and have a hard time in life
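For the curious, a sketch of how that rewrite is commonly done with Apache's mod_rewrite in an .htaccess file (nginx has analogous location/try_files rules):

```apache
# .htaccess sketch: serve /page via page.php without exposing the extension.
RewriteEngine On
# only rewrite if no real file or directory matches the request
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^([^/.]+)/?$ $1.php [L]
```

The rewrite happens server-side, so the browser only ever sees the extensionless URL.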
It's hidden now, servers can do that brainlet.
Notice how if you go to IQfy/banned.php it hides the php? But if you go to html that's another story.
>Where's the .php in the url?
Look at the URL of the popup when you report a post :^)
interesting.
why doesn't the page we're on right now have a ".html" or a ".php" at the end of its url? I have a personal website I use for documents and stuff and all the pages are .html, and this is reflected in the url.
are you mentally moronic?
read
PHP websites typically do have the .php extension for their PHP files. However, in some cases, developers might choose to use URL rewriting or server configurations to hide the file extensions for aesthetic or security reasons. This technique is often referred to as "URL rewriting" or "pretty URLs".
Instead of having URLs like example.com/page.php, they might appear as example.com/page. This can make the URLs cleaner and more user-friendly. It's achieved by configuring the web server to internally rewrite the URL to include the .php extension on the server side, but present it to the user without it. This doesn't mean that the PHP files themselves don't have the .php extension; it's just hidden in the URL.
>t. chatgpt
if you can't keep up with technology you will become illiterate and have a hard time in life
stupid bots GET BANNED BANNED MOOODS
oh gee anon I'm so sorry I didn't read the fricking gpt post. Is that actually how it works for IQfy?
I mean I also stop reading a post when it's slop.
Yes, that's how it works for all websites that don't have default path-like urls. Url rewriting is simple to do and doesn't impact performance.
If you check the html of IQfy pages, you will find some urls with .php.
Facebook also used to have .php extensions in urls, I don't know how it is now but I wouldn't be surprised if it still had them.
if these are .php pages, why are they .html when I save them?
Everything is a .html page when you save it. Try it with a .jsp, .cfm, .asp, or any other extension.
nah, php ain't the dumpster fire ppl make it out to be. sure, it's got its quirks and can get messy, but it's all 'bout how u use it. it's like any tool, ya know? use it right, and it gets the job done. plus, the new versions are pretty solid, got some neat features that make it way less painful than the old days. so yeah, don't sweat it too much.
Depends. If you want to be autistic/anal about it, then sure, it's a passable language. Realistically it doesn't matter how good it is or isn't; if you put that on a resume or portfolio you're going to get passed up (and rightfully so) for being old.
>if you put that on a resume or portfolio you're going to get passed up (and rightfully so) for being old.
Unemployed zoomer detected, PHP is still fricking everywhere
I use Python and php almost exclusively because it makes Cniles and the crabfrickers seethe and I get fricking paid to use Pythong and php lolololololololololol life is good. Go make some libraries so I can profit off your work Ctards (I would say the same to rustBlack folk but nothing of use has ever came from the rust programming language)!
No, in fact it's the best thing
People who dislike it are tech illiterate bootcampers who think they're smart because they've learned ruby on rails because the name sounded cool
it was a badly designed abomination of a language
inconsistent function names and signatures
it's a disgusting mess and it makes sense it got popular
because the world is made of mediocre moronic people
Some of the early function names are the way they are because, at the beginning, the hash function used to store function names in the engine was literally just the name's length and nothing more, so the original developer chose names of varied length to balance the table.
Php is crap still. It caught up a bit but still not good.
Nowadays the only reason to use php is WordPress, since that's the best full-featured site builder. The dynamic way php allows plugins and editing makes it shine.
Php is also populated by jeets. So either find locals who don't know better and you can trick into hiring you or compete with million pajeets online.
it's worse than bad. It's made by literal brainlets. the fact it works at all is a concerted effort of decades and billions.
I still see sites using PHP that serve content double-escaped, because PHP makes it too easy to do everything wrong, invariably at an "interpreter configuration" level.
The amount of seething these obvious bait threads tend to accumulate is the only proof anyone needs that the lang in question is still relevant af regardless of how flawed it might be
actually it's quite great for a career now.
so many people jumped off or never started; they're all in C/C++ or python or Javascript.
If you're an expert in php you're set
PHP+nginx/Apache is as simple as you can get. It is enough for most applications, even if your code is moronic. I'll take this over forced async garbage like node, where you still have to run multiple instances of your app, except one moronic line in your app means the entire shit is blocked.
only at first, with a good color scheme it looks more than ok and the combination of the sigils + a special color for variables makes it a bit easier to read the code imo
you can make any website in any language in tor, in fact, you can even put a fricking ssh server in tor, tor will simply give you a tor address and act as a proxy for any service, are you moronic?
>this homosexual doesn't know he can self host
HHAHAHAHAHAHAHAHAHAH, dude shut the frick up, I ran services on the tor network, I know how to run that shit, you can literally run ANYTHING on it
No. You can't self host Black person, you have to go through the Tor company, it is their product, their private property
>you can literally run ANYTHING on it
just like the TRAIN me and my boys ran on YOUR MOM
Modern PHP is bad but not that bad. They patched it up as much as they could. Avoid these twenty features, remember these thirty caveats, always read the docs carefully, get a framework or set up some basics yourself, and you can write pretty normal well-structured code. (But why would you?)
Laravel is ass.
When did IQfy stop running on php?
The majority of the bad things PHP is famous for are from the pre-5.6-ish era. Modern PHP code that follows PSR standards is fine, Composer isn't worse than any other package manager, and the inconsistent stuff that's in every PHP hate thread is pretty much deprecated and unused. But if you have to support legacy code or develop for an old CMS like Joomla or Modx, it will make your eyes bleed.
Some of the worst code I saw was from PHP 3 and 4.
php 8 is fine but I like python better
>Is it really that bad?
Depends on what is between the keyboard and chair.
>web server language
>no async
lmao
yeah it sucks big time, there's no reason to use it on new projects
>there's no reason to use it on new projects
infinityfree
biz.nf
you might as well use aws/gcp free tiers and not have to use fricking php (lambda and dynamo have free tiers on aws, i think cloud run also is idk)
show me an example running 2 (two) concurrent db queries in laravel (the dominant php framework)
no subprocesses allowed
>you might as well use aws/gcp free tiers
I have to send them my loicence and ID. Plus they have a bandwidth limit of something like 500MB per month. So if I visit the site too much I have to wait for the end of the month to use it again.
>500mb
it's 100gb (and you have 1tb transfer in cloudfront for assets so its like 100gb of text)
>loicense and id
fair i guess
still if your project goes anywhere free hosting wont do it, but i guess its fine to use php if you dont really give a frick
It's mostly for my personal use anyway
>using a framework instead of simple php
ngmi
>unlimited web space for 6.95 a month
Does that mean storage? Ain't no way blud. What's stopping me from using these morons as a CDN and recreating Youtube but hosting everything on their servers?
wtf are you talking about you dumb Black person? all popular languages and frameworks have async support these days.
>oh no, my language is readable by default and doesn't need me to jump through 20 hoops and have a gold medal in mental gymnastics to make it execute lines one after the other instead of in literally whatever order
>filtered by async
you literally add one await before stuff, it isnt rocket science
also why would you care about the order things run as long as you get the desired result
and the solution to that is to have no way to do concurrent io on a language that is made to do concurrent io?
concurrency exists, its called fibers
async functions can only be called from async functions, and so sooner or later everything has to be async in your project.
>and as sooner or later everything has to be async in your project.
Okay but what are the ACTUAL consequences of this? Worse performance? Also no, don't know JS, but C# you can just call async function by creating a new Task();
>async functions can only be called from async functions
that's not even remotely true. what the absolute frick?
enjoy your colored functions moron
why is this an issue again?
there isn't one. people who quote the colored functions blog post are moronic and don't realize everything is ultimately colored by things jeets don't understand, mainly: side effects.
just throwing an N:M scheduler at it doesn't solve every problem either, since you still need to make decisions about task yielding or preemption. also look at CGo for the consequences of such a runtime.
https://www.php.net/manual/en/intro.pcntl.php
moron.
It didn't.
>Process Control should not be enabled within a web server environment and unexpected results may happen if any Process Control functions are used within a web server environment.
Also just not what async is, generally it involves intra-process and even intra-thread concurrency. There exists async for PHP but only as third-party libraries AFAIK. I'm seeing Swoole and ReactPHP. A coworker mentioned one of these once but I've never met anyone who uses them
Concurrency in php is mostly handled by the os juggling multiple instances of php-cgi for separate requests. It wasn't built for its own concurrency.
what are the actual benefits of running async/concurrency in the context of a web server? in benchmarks php performs just as well as "async" solutions, so I suspect there are some myths spread by some people.
user requests are never really concurrent; they come in sequentially, and can each be answered as they come or in bulk depending on what it is.
as for server side processing, I've never encountered a situation where it can be a bottleneck, and I've been paid to create server side scripts for over 10 years.
tasks that require really concurrent processing usually aren't server side scripts but actual programs that are written in a lower level language, and they can feed off data sent by php.
you can create an infinite loop in php that sends and receives data instantly if needed, that is lightweight, simple, and doesn't require any architecture change. I've built a trading bot with this logic and it's extremely fast, no async inside it.
concurrent processes are important though, but that is managed by the OS, not by php.
also if you really want to, you can manipulate subprocesses and probably achieve some sort of async.
so if someone can explain the point of that async/concurrent hype into the language, any valid example.
>so if someone can explain the point of that async/concurrent hype into the language, any valid example.
apache2 (threaded/forking) vs. nginx (event loop)
That said it only matters if you want to serve *a lot* of requests. Which is why I'm super not into Rust: People love their async shit there too and it just gets in the way most of the time.
those are web servers, I'm afraid I didn't get your point. I like nginx and concurrency makes sense in the context of a webserver obviously.
I guess my point is that if you need to process a lot of data or concurrency you use a language like c++, not a higher level language like php.
I'm also not a fan of rust; I think it mostly attracts people who want to feel smart because of its unreadable syntax, and people justify it by saying it is memory safe even though you can write memory-safe code in any language and dangerous code in rust. that is detrimental overall, but it kind of filters the people you'd want to work or not work with, like for php/js.
Apache and nginx handle the concurrency part for php. You could imagine all of php being async/await with nginx.
With full featured languages it's the same process that controls the web server as well. It's the whole nginx + php in one. So to support concurrent processing the language itself has to support concurrency.
yes, but if nginx handles concurrency, and you need nginx anyway, what is the point of supporting it in the language itself and writing async code yourself?
There are still some places where it's useful and would speed things up, but not that many.
Like one poster said, concurrent sql queries. Fetching user info and page info at the same time, for example, rather than sequentially.
>Fetching user info and page info at same time for example rather than sequentially.
that is 2 items
it makes virtually no difference to gather those 2 items sequentially as opposed to concurrently, and the cost is unneeded complexity in the code.
if the source of the query is the user, requests will come sequentially anyway.
if it's a backend task, I need more info on what you really need to do with those 2 results. and tables, the structure, the architecture of the database are made by developers / business logic, so it's always a small number; it's never millions of "items" to fetch. the quantity comes in the number of rows gathered from the sql database. you can do whatever you want with those. if you want to be efficient on ram and cpu, you don't process everything at once.
still doesn't make sense to me.
Most of the time it's not needed yeah but there's times you do want it. There can be slow queries or apis that take 1 sec to receive results. Concurrently you can send 2 requests, combine them to what you want and be finished in 1sec. Sequentially you'd be waiting 2 seconds. The longer and more you need the worse sequential is.
There was syncing data between a db and elastic at one of my jobs. Sequentially it would take hours, concurrently minutes.
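For the specific "two slow calls" case, stock PHP can already overlap HTTP requests with the bundled curl extension's curl_multi API, no framework required. A sketch with placeholder URLs:

```php
<?php
// Sketch: overlap two slow HTTP calls with curl_multi so the wall time
// is roughly max(t1, t2) instead of t1 + t2. URLs are placeholders.
$urls = ['https://example.com/users', 'https://example.com/orders'];

$mh = curl_multi_init();
$handles = [];
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}

// drive both transfers until every handle is finished
do {
    curl_multi_exec($mh, $running);
    if ($running) {
        curl_multi_select($mh); // wait for activity on either handle
    }
} while ($running > 0);

$results = [];
foreach ($handles as $ch) {
    $results[] = (string) curl_multi_getcontent($ch); // '' on failure
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);
```

It is clunkier than an await, which is sort of the thread's whole argument, but it does exist in core.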
if you notice that the query time exceeds 1 second for some reason, and it's a problem for sequential processing, you split the script and execute both scripts in 2 processes; total time will equal the total time of the async version. 2 processes is reasonable.
I would be surprised if it wasn't possible to make one big query in your case. I'd need to know the details to tell, but I've never encountered your kind of issue, and if it were truly impossible to bulk the queries or something like that, then the solution picked is a bad one, and that's what a dev needs to fix. php + mysql can do mostly anything web related.
if you don't do that it's an endless negative circle: something is badly implemented or made. you create tools to handle this bad implementation. then people create other tools to make that more efficient. that's nonsense: you should fix the initial issue.
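The two-process approach really does work for this case. A sketch using popen(), with `sleep` standing in for the slow query and assuming a POSIX shell:

```php
<?php
// Sketch: run two "slow queries" (simulated with sleep) in parallel
// subprocesses. popen() starts each process immediately, so reading
// the first pipe overlaps with the second process still running.
$start = microtime(true);

$p1 = popen('sleep 1; echo users', 'r');
$p2 = popen('sleep 1; echo orders', 'r');

$r1 = trim(stream_get_contents($p1));
pclose($p1);
$r2 = trim(stream_get_contents($p2));
pclose($p2);

$elapsed = microtime(true) - $start;
// total wall time is ~1s, not ~2s
printf("%s + %s in %.1fs\n", $r1, $r2, $elapsed);
```

The counterargument in this thread is that serializing results over pipes and managing process lifetimes is exactly the kind of complexity async syntax is meant to remove.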
Many times you don't control the db or the services you have to call. It's only in small freelancing gigs that never reach production that you can redesign everything every time.
Many times you're told to call these few apis and combine data. Sometimes they are slow, sometimes fast. You can't change that. You can help it with concurrency though.
Opening multiple processes to solve an easy problem is the epitome of an unmaintainable mess. It's the php cancer that's hated and why it gets called a shitty language. With a proper language it's one extra line of code; with php you start thinking about multiple processes.
Why would you pull in data from multiple slow APIs and spit out a static page? No matter what language you use, it's going to be a shitty experience. Why do morons keep bringing up this unrealistic scenario? Are they bots?
Why does it have to be a static page? At current job our service calls to 10 other services. One request might call to 4 other services to find all related data. And we're just a small part with a single instance service.
It's a scenario that happens every day when you're not a pajeet doing freelancing for small marketing sites.
>calls dozens of live services with each page load
>what is caching?
>and you need nginx anyway
[citation needed]
what are you using to handle http requests?
php itself should be able to handle http. for some reason, people really hated http (justifiably so) and had a hard-on for fcgi, since it guaranteed less ambiguous header reprs by being binary encoded, but ya, no one talks fcgi over the internet sadly. what we got for a proper binary http was http/2, and http/2 is shit beyond belief.
most languages can handle http by themselves, they dont need anything in front of them
for php, swoole and roadrunner can handle requests directly without nginx, and frankenphp uses caddy iirc
I didn't even know. but I doubt this is a better solution than nginx + php or even apache for real world scenarios.
>so if someone can explain the point of that async/concurrent hype into the language, any valid example.
>Be you
>Be tasked with maintaining a web dashboard in PHP that's in charge of rendering millions of paginated records, calculate different metrics ($$$) and count users
>On top of that in order to feed other information you need to query another three external APIs because your architecture sucks and now you're the code janny
>You try refreshing the page. 30 seconds later it times out
>Profiling and benchmarking shows you it actually takes more than 180 seconds now to render this page sometimes
>Turns out the php file in charge of doing this is doing ALL of this sequentially
>There's no proper async/concurrent support for performing all of this
>You try to optimize everything as much as you can, but the final result is a page that still loads extremely slow (+5 seconds) and sometimes times out to the user
>There's no partial rendering or pre-loading and you're stuck with a mediocre solution that's bound to piss off customers and bleed you money.
>You cannot make concurrent requests so everything has to awkwardly sit statically for 20 seconds in the best case before you start showing shit. Worst case it times out
>You now realize why async/concurrency is helpful, but PHP offers deficient support for it
>write shitty code
>i-it's the language's fault!
Massive cope. So what's your proposed solution? Write some shitty async garbage in Python that does the exact same thing? If you're going to rewrite it anyway, why not just use AJAX like everyone else?
>It's the language's fault
Yes.
>So what's your proposed solution?
Limit yourself to using php for small projects. When you need to deal with more complex usecases, use the appropriate tool for it, instead of having to hack around a templating scripting language. Not even israelitebook could handle php, they had to fork it and rewrite it to make it usable for their usecases.
>Python
No lol. Just stop overcomplicating everything and use Go for backend, jfc.
>use Go for backend
>just take 20 times more time to build an app than it would take in Laravel, Rails, ...
What? That's literally Go's forte. It takes far less time building than any of those frameworks you mentioned.
>t. never had a job
I literally work as a PHP janny. Our builds take a frickton of time. I worked in the past for an enterprise that used rails and guess what, our build took far more than the slowest go build. Again, are you baiting? What the frick are you talking about?
go always takes more lines of code and time to make something compared to php. which is why I never believed in it and think it will go the way of ruby. unless it somehow manages to replace java and c#...
>gain from just not having to run framework code
if you're telling me you ran a large application and relied on a framework, instead of actually using php for a proper tailor-made solution, I'm not surprised you ran into issues.
>every benchmark will tell you that
which ones?
>did you ever think about having multiple users? if they access your site at the same time, it will be concurrent, moron
the initial discussion was about user requests themselves needing to be concurrent for that one user, not server side http requests processing, you're talking about something else
>source: your ass
can you explain why something that isn't a bottleneck would matter? If I don't need an extra server, I won't pay more, and if I don't need an extra server, there's no bottleneck. I'm sorry, but again, I think you're imagining a situation where process management is an issue because of php. I've had countless discussions like this here and elsewhere, and not a single time was php the bottleneck in the context of a web backend.
>you realize sometimes you have no control about it?
yes, maybe. but maybe the issue is the person you work with, if you work with people with shit infrastructure and nothing is ever fixed, that will obviously have impacts on the end user. php, not the problem here.
you can do async with php with third party libraries or frameworks I guess, but I still consider it as bloat.
>go always takes more lines of code
we are talking about build time, moron
Why? That's not even what the original post (
) was about.
build = compilation
You meant development time then. Which is still wrong because go is as simple as it gets.
>go always takes more lines of code and time to make something compared to php.
What the frick are you on about? Go was made for you to be productive. It's literally as simple as it gets. More LOC, sure, but more time?
And also I was talking about compilation time so nice job shifting those goalposts
>which ones?
https://eldadfux.medium.com/moving-from-nginx-fpm-to-swoole-has-increased-our-php-api-performance-by-91-40f62e51a064
https://medium.com/@victorgazotti/how-did-we-increased-our-php-app-performance-by-80-with-laravel-and-swoole-6b53d1092cab
https://habr.com/en/articles/646397/
just google it
>if you're telling me you ran a large application and relied on a framework
yeah because in the real world people love reinventing the wheel and not actually shipping anything
>If I don't need to use an extra server, I won't pay more
the point is you will need extra servers, beefier servers, to handle all those processes taking extra memory and just being able to handle less requests/server overall
if one server is all you need php is fine, but for large application it isnt enough
>not a single time php was the bottleneck in the context of a web backend
ive shown you many examples where php and its ecosystem were literally the bottleneck
>maybe the issue is the person you work with, if you work with people with shit infrastructure and nothing is ever fixed, that will obviously have impacts on the end user
in the end, you're paid to solve stuff. as i said, many times you have no control, and you need to solve it. you can use subprocesses, you can spawn jobs, but it's a way more complex solution than if the language just had actual concurrency support
it can be worked around, that's not the point, the point is if it's a good language for backend dev, and it isnt, it lacks features that are extremely useful for backend dev
alright, swoole makes php faster, maybe 3x faster. but outside benchmarks, 0.5% scenarios, or badly written applications, I still can't see a real benefit over simple php.
in your links, I saw some buzzwords — microservices, frameworks, laravel — so I'm not confident in that person's ability to produce an efficient php application. so those benchmarks are still not representative, like everything I've seen.
>yeah because in the real world people love reinventing the wheel and not actually shipping anything
to me, using a framework with code I didn't write is precisely that. libraries exist and they're enough to avoid reinventing the wheel.
>the point is you will need extra servers
I'm disappointed that I couldn't find any situations where pure php is a bottleneck in a properly written nginx/php/mysql application. I guess I need to see it with my own eyes.
> it's a way more complex solution than if the language just had actual concurrency support
in my opinion, no... async code is a mess to look at, more complex to write, and only potentially useful in niche scenarios.
if you really need performance, to the point where you need to divide the number of servers to save money, you use a lower-level language. probably also true for go/java/c#. php is for fast, reliable, maintainable programming, which is the opposite of lower-level languages. it's all about tradeoffs.
>produce an efficient php application though
lmao
there's a reason why big companies that used to run on php moved on to things like hacklang (which, unsurprisingly, has async constructs)
>I'm disappointed that I couldn't find any situations where pure php is a bottleneck in a properly written nginx/php/mysql application
write something popular and pay the compute bills, im sure you'll find one then
not the guy you are replying to but you're obviously delusional and moronic lmao nobody uses hack
the only huge corpos that used php that i remember were slack and facebook
both moved on, and i dont recall anyone else using it (i think lyft used php too but they went the microservice route)
no one uses hack, i agree, but its the only way out of php. the right thing would be not to use it in the first place
modern PHP is based
it's fast like no other interpreted language, it's quite literally the only dynamic lang that has opt-in static typing, the ecosystem is top tier and the only major feature it's missing at this point are generics
you just have a hateboner for it because it's cool to hate on PHP
how is pointing out objective flaws on the language/ecosystem "hating it"
i've been mostly pointing out how moronic the fastcgi model is at scale and the inability to do concurrency. both are facts, no matter how you look at it
modern php is okay, if all your code is up to standards. this wont be the case in 99% of the codebases, which will still reek of phpisms: no typing, array abuse, superglobal usage, magic code abuse (dont get me started on laravel's facade bullshit), no annotations which is the fricking only way to get decent typing support, using empty() for everything
i dont hate it because its cool to, i hate it because i use it daily. i dont really care about its flaws, in the end of the day i just stopped caring because it pays the bills. i just wont say its a good lang, because it isnt
>the ecosystem is top tier
the orms are okay, the remaining stuff is subpar or just the same as everything found in other langs
>the only major feature it's missing at this point are generics
which it will never get, by nikita's own words. it also misses some kind of sum types, be it sealed classes or tagged unions
also, the way the language's committee works is really moronic. anything even slightly controversial gets turned down because "php isnt fit for that" or something. no, you cant just not use this feature if you dont like it, you need to keep it out of the lang. so fricking annoying
>it will go the way of ruby
lol you picked the worst example possible
Ruby mogs literally every language in comfiness, code beauty and speed of development
Not to defend anything or whatever, but:
>one of the biggest online platforms to date
>~~*not even*~~
That is fine. Concurrency will just hurt your brain.
>subprocesses
moron
>in benchmarks php performs just as well as async
it doesnt
php-fpm performs way below almost every language/application server due to the 1 request per process model (every request "creates" a new process). this model means you need to rerun everything in order to serve a new request: framework bootstrapping code, loading any in-memory caches, etc.
in larger applications this causes a significant performance hit.
alternative application servers like swoole or frankenphp avoid this by running in worker mode: each process is a loop that accepts a request, runs it, and then waits for another one. this removes the need to rerun everything on every request, but you still need multiple processes, which implies higher memory requirements, as every process needs its own instance of things
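the difference is easy to sketch. a toy node-style example (all names made up — this is an illustration of the worker model, not actual swoole/frankenphp code) of why a long-lived worker keeps caches warm while the per-request model can't:

```javascript
// Illustrative sketch only: a long-lived worker bootstraps once and keeps
// in-memory state across requests, while the fpm-style model would redo
// the bootstrap for every single request.
function bootstrap() {
  // stand-in for framework init, config parsing, container building, etc.
  return { config: "loaded once", cache: new Map() };
}

const app = bootstrap(); // runs once per worker process, not once per request

function handleRequest(userId) {
  // the cache survives between requests because the process does
  if (!app.cache.has(userId)) {
    app.cache.set(userId, `profile:${userId}`); // stand-in for a DB fetch
  }
  return app.cache.get(userId);
}

handleRequest("42"); // first request: cache miss, "fetches" from the DB
console.log(handleRequest("42")); // second request: pure in-memory hit, "profile:42"
```

under fpm, `app` would be rebuilt on every request, so the second call could never be a memory hit without something like apcu.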
>users requests are never really concurrent
objectively wrong
requests are served concurrently even in php, by using multiple processes
by using async you can serve another request while waiting for io in one request, which means you need less instances of your application to serve the same amount of users, which means you spend less on infra/higher performance for the same price
>any valid example
aside from the obvious example of serving concurrent requests, you can do things like "call both of our antifraud services at the same time" or "read this data from redis and query the db for this other data", which reduces the time the end user waits
"i dont need it" isnt a valid justification
there are many valid reasons, and if you're going to say "oh you just need to rework your architecture/table structure/queries" or whatever to make for a language deficiency its just pure cope
>the cost is unneeded complexity in the code
when the alternative is creating subprocess or jobs, id argue async is the lesser evil
>use green threads instead
i would, if fibers wasnt fricking vaporware
its been like 3 years, no support
>every request "creates" a new process
no, it doesn't, fricking moron, it has a pool of sub-processes that gets reused.
and how does that change anything i've said?
i know it uses a process pool, but you still need to rerun all your code, and it's still a huge performance hit. saying it "creates" a new process is just an easier way to explain it. you also need ugly extension-level workarounds for connection pooling/persistent connections and so on
you can literally google any benchmark and see it for yourself, fpm _always_ lands dead last compared to servers using a worker model. it sucks even for php standards
>concurrency exists, its called fibers
as i said, fibers are vaporware
show me an example using fibers in any production application
none
the colored functions post is like 10 years old, the main issue was callback hell as promises werent really a thing
now people just add async/await/Promise<x> (or Task<x>) to everything and call it a day
>the colored functions post is like 10 years old, the main issue was callback hell as promises werent really a thing
isn't the callback mechanism an ALTERNATIVE to the promise/async model?
it depends. this goes into the whole literature about "nodebacks" and whether they're invoked immediately or put into some mechanism to defer their work. a callback itself is just code you can invoke at any time.
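quick sketch of the three styles being discussed — `readConfig` is a made-up nodeback-style function for illustration, not a real API:

```javascript
// The same operation in the three styles: nodeback, promise, async/await.
// readConfig is hypothetical; it follows the node callback(error, result)
// convention and defers its work to a later tick.
function readConfig(cb) {
  setTimeout(() => cb(null, { debug: true }), 10);
}

// 1. plain nodeback: caller passes a continuation
readConfig((err, cfg) => {
  if (err) throw err;
  console.log("callback:", cfg.debug);
});

// 2. wrapped in a promise: same continuation-passing underneath
const readConfigP = () =>
  new Promise((resolve, reject) =>
    readConfig((err, cfg) => (err ? reject(err) : resolve(cfg)))
  );

// 3. async/await: sugar over the promise, reads like sequential code
(async () => {
  const cfg = await readConfigP();
  console.log("await:", cfg.debug);
})();
```

all three defer the same work; 2 and 3 just move the error handling and sequencing out of nested callbacks.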
why though? the process basically needs to be nuked from orbit (exec()) unless php does fricky shit to re-execute the entrypoint of the process.
you are a moron without any idea how it works
I don't because I'm not a PHP tard.
what
said sounds like how I'd do it, accept() loop in same process in perpetuity.
so how does this FPM mechanism work then? do you keep the interpreter and just feed it the same script somehow? is it like I said? you take the same process and invoke exec with the same fricking interpreter and arguments?
fpm does fricky shit to replace the process' image
in practice this saves the kernel-level overhead of having to create a new process, but that's all
he's right though? it essentially reruns the process; whether or not it creates a new process changes very little fundamentally
in JS they're kind of the same, both use the continuation-passing style implementation
but promises and language-level async/await constructs makes it way less painful to use, which was one of the issues (and to me the one that matters the most) the colored functions post raised
>in larger applications this causes a significant performance hit.
every time, people claim this. there are so many reasons why an application could have performance issues; blaming it on not using async isn't proof, or convincing at all.
by the way, fastcgi doesn't create a new process for each request as far as I know. php fastcgi keeps the process alive and waits for another request. maybe that's why php performs well.
>objectively wrong
what are we even talking about? I'm mostly talking about a backend for web services. a website. users click on a link. they have one mouse cursor. they don't make concurrent requests. unless maybe the front end is some garbage js code, which is bad design anyway. I can't think of the last time I had to use js in a webpage.
if it's a backend for some online games, alright, maybe there's concurrency. and I've never worked on those kinds of projects. I would assume it would need a lot of resources and you would maybe need to use a lower-level language than php? if you claim php couldn't work here, I can accept that's possible.
>by using async you can serve another request
it doesn't matter if it's not a bottleneck, and it usually isn't.
>everytime people claim this. there's so many reasons why an application could have performance issues, blaming it on not using async isn't a proof or convincing at all.
reading for you:
thundering herd
slowloris
async solves these because it doesn't constrain your task execution to allocating (pseudo) stacks and deferring to a scheduler that makes likely-incorrect decisions about interrupting and rescheduling tasks.
the people who keep shouting "fibers" are likely referring to Google's kernel fork, which offers kernel mechanisms letting users build N:M schedulers in userspace, sort of like futex for (cross-process) userspace synchronization primitives.
>thundering herd
sounds like an OS issue
anyway, to run into that issue you need a lot of users, and if you have a lot of users you're supposed to run extensive benchmarks before production
>slowloris
sounds like a job for firewalls, not scripts
>30 seconds later it times out
I would investigate the database, structure, indexes, etc. millions of records is not a lot. the other day I was working on a server that I pay less than $10 a month, it had millions of financial records (mariadb), zero performance issue, shared hosting. lol
also in your example you have shit code and you shall have multiple script to handle multiple tasks. fix your code.
too bad, bad example.
>would investigate the database, structure, indexes, etc. millions of records is not a lot.
True, and then you would perhaps shed another 15 seconds, after battling for weeks with SRE, DBRE and everyone else, because indices, partitioning and other non-DB stuff are expensive. Congrats, everything still runs sequentially.
>also in your example you have shit code and you shall have multiple script to handle multiple tasks.
And all of that PHP runs sequentially. In order to render the page, you need to do all those tasks (i.e. the page is synchronous). PHP doesn't have a decent solution to do it async. Meanwhile other languages (even fricking node.js) are capable of doing it without so many hassles. The moron above kept having a tantrum over other languages "riding the async/concurrent hype train" and I showed him why it's important if you want to create big, enterprise apps.
>And all of that PHP runs sequentially. In order to render the page, you need to do all those tasks (i.e. the page is synchronous). PHP doesn't have a decent solution to do it async.
If you have a query that takes 10 seconds to run, the backend won't serve the page until those 10 seconds are up, regardless of whether it's PHP or a language with async features. Instead of making the user wait, why not just serve the page as quickly as possible, then offload the 10 second query to an API and pull it via AJAX? It's a better user experience than watching their browser spin while a huge static page is loading. That's why this whole conversation is pretty stupid.
>why not just serve the page as quickly as possible, then offload the 10 second query to an API and pull it via AJAX?
That's what I've been proposing the entire time. I was explicitly telling that moron that said concurrency / async isn't necessary that they can just build a decent backend and do AJAX in the front end, preload the page and not block the user experience. That's an irl usecase where async actions are necessary. If you want to keep using php then just use it for the backend, but don't force the site to use PHP in order to echo the DOM when it can't do async, that's when PHP isn't the right tool.
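the pattern being proposed, sketched with timers standing in for the backend and the AJAX call (everything here is illustrative — the names and delays are invented):

```javascript
// Sketch of "serve the shell fast, pull the slow part later".
// `page` stands in for the DOM; delays stand in for the backend and AJAX.
const delay = (ms, v) => new Promise((r) => setTimeout(() => r(v), ms));

const page = { shell: null, widget: "loading..." };

async function serveShell() {
  // the fast path: render the static page immediately
  page.shell = await delay(5, "<html>fast static page</html>");
}

async function fetchSlowWidget() {
  // the offloaded slow query, pulled via AJAX after the page is up
  page.widget = await delay(50, "10-second report, now async");
}

(async () => {
  await serveShell(); // user sees the page right away...
  console.log(page.shell, "| widget:", page.widget); // ...widget still "loading..."
  await fetchSlowWidget(); // the AJAX call fills it in afterwards
  console.log("widget:", page.widget);
})();
```

the user gets a responsive page immediately instead of staring at a spinner while the backend grinds through the slow query.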
>there's so many reasons why an application could have performance issues, blaming it on not using async isn't a proof or convincing at all
i wasnt even talking about async there, i was talking about fpm's model.
and i surely am not talking out of my ass, i worked on migrating a largeish monolith application (700m+ requests per month, 1m lines of php) to roadrunner, and we saw significant gains (up to 20% in user requests, ~40% in high rpm api calls) from just not having to run framework code. we got additional gains after changing the application code to use in-memory caches, which _is_ impossible without the worker model (you can use apcu but you'll still have a lot of overhead deserializing stuff)
>that's why php performs well
it doesnt, fastcgi fricking sucks, every benchmark will tell you that
>they don't make concurrent requests
did you ever think about having multiple users? if they access your site at the same time, it will be concurrent, moron
>it doesn't matter if it's not a bottleneck, and it usually isn't
source: your ass
it isnt the bottleneck because you spawn a trillion processes to deal with the fact the language _cant do async_
you will pay more in infrastructure due to that, it's a fact
>if the antifraud service belongs to you
it doesnt, i meant "our" as in "the ones we use"
>if you have to deal with client's garbage servers [...] I would reject the fault to the person responsible for the remote server, and notify the user that his query is being processed
you realize sometimes you have no control about it? your users will yell at you because your endpoints are slow, your upstream services will take forever fixing their shit or just go "yea this is the best we can do tough luck", and you'll be left wondering "why the frick i work with php still"
face it, scenarios where async is useful happen a lot of the time in backend stuff, and if the language doesnt support it its simply a bad choice for backend dev
>things like "call both of our antifraud services at the same time" or "read this data from redis and query the db for this other data", which reduces the time the end user waits
if the antifraud service belongs to you, the response time should be a few ms at most. if it's seconds, fix that before adding complexity.
same for the redis + other db query. in which world do you need to wait 4 seconds for one request, then some more for the other request, before echoing the result to a user? damn. fix your servers.
if you have to deal with clients' garbage servers, and a user has to wait, personally I would put the blame on the person responsible for the remote server, and notify the user that his query is being processed. it could then be a few seconds or more. then he refreshes his page or gets notified when the remote server decides to answer. I'm not downgrading my code because of someone else's shitty code.
>lesser evil
if the need for async code is a consequence of bad design decisions, it's something to correct and not waste time with.
using extra processes is a simple solution that any OS can handle. implementing async in the code itself seems like unneeded complexity.
>It didn't.
It still uses php? Where's the .php in the url?
i used to ask myself the same thing in middle school, more than a decade ago, when i saw the claim that 95% of the internet runs on php
>where .php huh
but ofcourse i was moronic and didnt know any better
please explain it to me, as you would to yourself
PHP websites typically do have the .php extension for their PHP files. However, in some cases, developers might choose to use URL rewriting or server configurations to hide the file extensions for aesthetic or security reasons. This technique is often referred to as "URL rewriting" or "pretty URLs".
Instead of having URLs like example.com/page.php, they might appear as example.com/page. This can make the URLs cleaner and more user-friendly. It's achieved by configuring the web server to internally rewrite the URL to include the .php extension on the server side, but present it to the user without it. This doesn't mean that the PHP files themselves don't have the .php extension; it's just hidden in the URL.
>t. chatgpt
if you can't keep up with technology you will become illiterate and have a hard time in life
It's hidden now, servers can do that brainlet.
Notice how if you go to IQfy/banned.php it hides the php? But if you go to html that's another story.
It's still there.
>Where's the .php in the url?
Look at the URL of the popup when you report a post :^)
interesting.
why doesn't the page we're on right now have a ".html" or a ".php" at the end of its url? I have a personal website I use for documents and stuff and all the pages are .html, and this is reflected in the url.
are you mentally moronic?
read
stupid bots GET BANNED BANNED MOOODS
oh gee anon I'm so sorry I didn't read the fricking gpt post. Is that actually how it works for IQfy?
I mean I also stop reading a post when it's slop.
Yes, that's how it works for all websites whose urls don't map directly to file paths. Url rewriting is simple to do and doesn't impact performance.
If you check the html of IQfy pages, you will find some urls with .php.
Facebook also used to have .php extensions in urls, I don't know how it is now but I wouldn't be surprised if it still had them.
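for reference, hiding the extension usually looks something like this in nginx (illustrative snippet only; the fpm socket path is an assumption and varies by distro):

```nginx
# Serve /page by internally rewriting it to /page.php, so the extension
# never appears in the browser's URL bar.
location / {
    # try the literal path first, then fall back to the extensionless handler
    try_files $uri $uri/ @extensionless;
}

location @extensionless {
    rewrite ^(.*)$ $1.php last;
}

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/run/php/php-fpm.sock;  # socket path is an assumption
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```

the .php file still exists on disk; only the public URL is cleaned up.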
if these are .php pages, why are they .html when I save them?
Everything is a .html page when you save it. Try it with a .jsp, .cfm, .asp, or any other extension.
Every web page, that is.
Red functions are the devil.
Use green threads instead.
nah, php ain't the dumpster fire ppl make it out to be. sure, it's got its quirks and can get messy, but it's all 'bout how u use it. it's like any tool, ya know? use it right, and it gets the job done. plus, the new versions are pretty solid, got some neat features that make it way less painful than the old days. so yeah, don't sweat it too much.
Good enough for Lockbit good enough for me
Depends. If you want to be autistic/anal about it, then sure, it's a passable language. Realistically it doesn't matter how good it is or isn't; if you put that on a resume or portfolio you're going to get passed up (and rightfully so) for being old.
>if you put that on a resume or portfolio you're going to get passed up (and rightfully so) for being old.
Unemployed zoomer detected, PHP is still fricking everywhere
Is it "FIP" or "PEH-HUP"?
Pee hip
I use Python and php almost exclusively because it makes Cniles and the crabfrickers seethe and I get fricking paid to use Pythong and php lolololololololololol life is good. Go make some libraries so I can profit off your work Ctards (I would say the same to rustBlack folk but nothing of use has ever came from the rust programming language)!
No, in fact it's the best thing
People who dislike it are tech illiterate bootcampers who think they're smart because they've learned ruby on rails because the name sounded cool
Some of the worst code I've ever seen is written in PHP. I don't know what it is about PHP which attracts awful developers.
it was a badly designed abomination of a language
inconsistent function names and signatures
its a disgusting mess and it makes sense it got popular
because the world is made of mediocre moronic people
Some of the early function names are the way they are because, at the beginning, the hash used to store function names in the engine was just the function name's length and nothing more, so the original developer chose names of varied length to balance the table.
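toy sketch of why that forces varied-length names: if the "hash" is just the length, every same-length name collides into one bucket. (the hash table below is a simplification for illustration, not the actual engine code)

```javascript
// Hashing by name length alone: same-length function names all collide.
const lengthHash = (name) => name.length;

const buckets = new Map();
for (const fn of ["strlen", "strstr", "substr", "htmlspecialchars"]) {
  const h = lengthHash(fn);
  if (!buckets.has(h)) buckets.set(h, []);
  buckets.get(h).push(fn);
}

// the three 6-letter names pile into a single bucket
console.log(buckets.get(6)); // [ 'strlen', 'strstr', 'substr' ]
console.log(buckets.get(16)); // [ 'htmlspecialchars' ]
```

with a length-only hash, picking names of varied length is the only way to spread entries across buckets — hence the famously inconsistent naming.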
Php is crap still. It caught up a bit but still not good.
Nowadays the only reason to use php is WordPress, since that's the best full-featured site builder. The dynamic way php allows plugins and editing makes it shine.
Php is also populated by jeets. So either find locals who don't know better, whom you can trick into hiring you, or compete with a million pajeets online.
>Laravel is ass.
Laravel is the only thing keeping it alive
Maintained by ~~*Zend*~~
it's worse than bad. It's made by literal brainlets. the fact it works at all is a concerted effort of decades and billions.
I still see PHP-using sites serving double-escaped content, because PHP makes it too easy to do everything wrong, without fail, at an "interpreter configuration" level.
The amount of seething these obvious bait threads tend to accumulate is the only proof anyone needs that the lang in question is still relevant af regardless of how flawed it might be
t. PHP on-and-off dev of 20 years
php is shit. cope.
It's not as bad as it used to be. PHP versions before 7 are pretty ass though. Use a framework like CakePHP or Laravel
it's actually quite great for a career now.
so many people jumped ship or never started; they're all in C/C++ or python or Javascript.
If you're an expert in php you're set
Laravel is for tiny projects and black people
symfony is the white man's framework
>muh PHP slow
learn to queue
no
it's worse
PHP+nginx/Apache is as simple as you can get. It is enough for most applications, even if your code is moronic. I'll take this over forced async garbage like node, where you still have to run multiple instances of your app, except one moronic blocking line means the entire shit is blocked.
I like it. It works well.
a
yes it is. make anything other than a webpage and see how you like it?
I make my CLI scripts in PHP
have you ever tried Perl?
Same
Perl looks disgusting
only at first, with a good color scheme it looks more than ok and the combination of the sigils + a special color for variables makes it a bit easier to read the code imo
>is hammer bad?
>yes it is. use it for anything other than pounding and see how you like it?
yes
How else would you make a website for Tor?
you can make any website in any language in tor, in fact, you can even put a fricking ssh server in tor, tor will simply give you a tor address and act as a proxy for any service, are you moronic?
You are moronic. torhosting.net only works with PHP, tried and tested
>this homosexual doesn't know he can self host
HHAHAHAHAHAHAHAHAHAH, dude shut the frick up, I ran services on the tor network, I know how to run that shit, you can literally run ANYTHING on it
No. You can't self host Black person, you have to go through the Tor company, it is their product, their private property
>you can literally run ANYTHING on it
just like the TRAIN me and my boys ran on YOUR MOM