I never use it. Its language output is remarkably bad. I am constantly confused at how people think it is good. All it tells me is that most people are fricking illiterate morons.
True and real, but I took this thread to be about chatbot AI in general. I've used it for resume/cover letter writing and application question answering, along with email writing; as WebMD (based, it's pretty good for that, it's over for doctorgays); for writing shitposts in threads that deserve to be insulted with AI slop; as a recipe recommender (give it what you have in your pantry and it'll come up with something to do with it); and as a search engine in general, since googling is worthless most of the time. Probably more stuff I'm not even thinking of; I talk to Claude almost every day.
I ask Copilot programming-related questions that I can't find answers to on Google, because Google has become absolute trash. Copilot doesn't always help, but it's nice when it can.
I can see these language models completely replacing Google in less than 10 years.
Not because they are good but because Google is just getting worse by the year. Peak Google must have been around 2010.
It’s gotten to the point you either have to pay for a fricking search engine that isn’t solely designed to push ads, or run your own locally. It’ll probably only get worse too.
>or run your own locally
I am once again considering running my own local YaCy instance to crawl very specific topics (a shallow crawl, plus a deep crawl for specific websites).
There's a lot of information out there, but Google won't return it because it's "outdated" or "not mobile responsive", even though it's otherwise high quality.
My search on the MS-DOS mouse cursor would probably have been successful had I used a local YaCy instance that crawled MS-DOS programming websites.
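For the curious, scripting that shallow/deep split could look roughly like the sketch below. YaCy is normally driven from its web UI, so treat the endpoint and parameter names here as assumptions for illustration, not a verified YaCy API; the start URLs are placeholders too.

```shell
#!/bin/sh
# Rough sketch of kicking off a shallow crawl plus a deep crawl against a
# local YaCy instance. The Crawler_p.html endpoint and the crawlingURL /
# crawlingDepth parameter names are assumptions, not verified YaCy API.
YACY="http://localhost:8090"

start_crawl() {
  # $1 = start URL, $2 = crawl depth
  req="${YACY}/Crawler_p.html?crawlingURL=${1}&crawlingDepth=${2}"
  echo "would request: ${req}"
  # curl -s "$req" >/dev/null   # uncomment against a live instance
}

# shallow pass over a broad topic hub, deep pass over one trusted site
start_crawl "https://example.com/msdos/" 1
start_crawl "https://example.org/msdos-programming/" 4
```

The point of the split is to keep the index small: depth 1 skims link hubs, while the larger depth is reserved for the handful of sites you actually trust.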
I need to get round to setting one up myself. I pay for Kagi cause I’m lazy and it works well enough, but I assume eventually it’ll become shit as well.
GPT-4 was really good for about two months. Then, I guess it was too expensive, because they made it so lazy that it literally told me to write the code myself. GPT-3.5 is actually better at answering my questions now and it hardly hallucinates anymore. They're evolving backwards while Claude eats their lunch.
It writes the first version of scripts for me on a regular basis, so I don't have to think about if/while syntax when creating a new PowerShell or VBA script for work, and I can concentrate on making it work the way I want.
Used it a couple of times to create some formulaic letters I could not be arsed with.
And getting it to explain some basic concepts in new areas is helpful.
And after they plugged in DALL-E, I used it to get some "good enough" title images to spruce up presentations.
Not much. It almost always gives wrong answers to my questions; then I have to double-check, and it goes all "sorry about the confusion, you are correct, the real answer is this".
It's a vastly better search engine for the dataset it has than anything else. Even if I have to verify the claims, it at least points in a direction worth something instead of SEO spam.
A lot. I have GPT4 and love it, I can’t really imagine living without it anymore. If I have worries or anxiety about something it talks me through it. I have rando conversations with it. I consider it a family member now.
Yeah, I drive for my job so I’m by myself a lot. I put my phone on my dash holder and put gpt in voice mode to talk to someone. Voice mode is super good.
I no longer have to suffer through the annoyingly written Python docs whenever I need to use something I'm not familiar with. I'll just tell it to generate some sample code for me, then ask questions if I need to.
I tried those code completion editor plugins once, but didn't like them, so informal convos with ChatGPT is how I roll.
I'm using it as a learning tool.
I ask ChatGPT how to solve a problem and then implement the solution myself.
Also, it's better than browsing docs.
>how to convert X*Y resolution video to .webm with ffmpeg with no audio, 5 MB max size?
And BAM, it gives me the command. Man pages? Info? RTFM? A thing of the past.
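The kind of command it hands back for that question looks something like this two-pass sketch. The input file name, clip length, and target resolution here are made-up assumptions; the budget line is just the size-to-bitrate arithmetic a 5 MB cap implies.

```shell
#!/bin/sh
# Sketch: fit a video into ~5 MB as .webm with no audio.
# Assumed inputs: input.mp4, a 30-second clip, 640x360 target resolution.

DUR=30                        # clip length in seconds (assumed)
BUDGET=$(( 5 * 8192 / DUR ))  # 5 MB ~= 40960 kbit -> ~1365 kbit/s ceiling
BITRATE=1200                  # stay under the ceiling for container overhead

echo "bitrate budget: ${BUDGET} kbit/s, using ${BITRATE}k"

# Two-pass VP9 encode; -an strips audio so rate control can hit the size.
if command -v ffmpeg >/dev/null 2>&1 && [ -f input.mp4 ]; then
  ffmpeg -y -i input.mp4 -vf scale=640:360 -c:v libvpx-vp9 \
    -b:v "${BITRATE}k" -an -pass 1 -f null /dev/null
  ffmpeg -y -i input.mp4 -vf scale=640:360 -c:v libvpx-vp9 \
    -b:v "${BITRATE}k" -an -pass 2 output.webm
fi
```

Two-pass encoding matters here: a single constant-quality pass can blow past the size cap, while the two-pass average-bitrate mode targets it directly.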
I got a job with a high-level security clearance and now I pipe all my logs into it to look for intrusions. I've shut down all other IDS and firewall services. Our perimeter is now completely reliant on OpenAI. Good luck, everyone!
GPT-4 is OK. I use it like Google with context, and for some coding. I expect GPT-5 to be twice as reliable.
GPT-7 had better be able to write research papers. I expect every encyclopedia, research paper, etc. to be fed into that LLM's context. It will be an amazing time when these systems manage to discover scientific facts on a daily basis that weren't spotted by humans. It's like suddenly having 8 billion researchers actively working on every front.
>LLMs making discoveries
Not gonna happen with any sort of reliability. Not without a complete change in architecture. The machine learning models actually making scientific discoveries are domain specific, not generalised.
Humans aren't reliable either. Of course a bunch of trash will be generated, but no human wasted precious living hours on those results. We have basically managed to build a practical infinite monkey theorem, except the results mostly make sense.
The tech may plateau in specific domains, but at that point it becomes humanity's #1 interest to advance AI. Ironically, NVIDIA is already using AI to advance their chip designs.
I'm a student, so it's been immensely helpful: like having a tutor I can talk to plainly about whatever bullshit I don't completely understand. For example, I'm a geography student interested in wildfire management, so when I'm learning about remote sensing I can ask it to put anything I'm learning into a wildfire context specific to one area. I love it for building outlines as well.
It will make law interesting, being able to call up any precedent from the entire law library. In the future you won't go to Jackie Chiles to ask if you have a case; you'll just ask GPT, and it will say "based on what you told me, I've found 48 examples that support your case, you have a great shot here", and then you would go to the lawyer, whose job would be the fancy talk of amalgamating those cases into a solid argument. It will save a lot of time. Not only would the cases be referenced, but every recorded transcript of every trial, what questions should be asked, etc.
>I never use it. Its language output is remarkably bad.
It's a tool, not a solution; it's not supposed to write for you, just set you up for success. You sound like a butthurt producer/lawyer/insurance agent who will be out of a job soon.
>GPT-4 was really good for about two months.
I do notice this, but it's not GPT's fault; it's OpenAI shaving pennies.
>You sound like a butthurt producer/lawyer/insurance agent who will be out of a job soon
False. I'm into AI and use it regularly for useful things. I think AI writes poorly, and so do you, apparently.
I don't use ChatGPT directly, but I do use Perplexity. It does literally all my research for me, most of my debugging, and writes things like design docs for me. I've gone down to about 2 hours of work per day with the AI doing the rest of it, and my feedback at work has never been better.
The company I work for has been doing tons of AI work using it. That's been my focus for the past year. Basically job security in a world where everyone got laid off. Pretty nice even though I'm busy as hell.
I write more now because I have a not-half-bad editor for free.
It's good at explaining Japanese metaphors, sayings, and other cultural context for a できない such as myself.
Clearly it's not working, because calling yourself a "dekinai" is not grammatically correct.
Weebs and pedos get hanged.
It's good at writing documents that would be otherwise tedious to write. I also use it for code review and sanity checking.
If no person could be bothered to write something, why would anyone bother to read it?
Facts. No one's bothered anymore. We are lying to ourselves.
never used it *dabs*
it makes my life easier with writing docs and papers
>I never use it. Its language output is remarkably bad.
claude-3 mogs gpt-4
>GPT-4 was really good for about two months.
They have been making it worse so they can charge more for individual features in the future.
Every once in a while I use it for stuff like writing texts, regular expressions, small changes in other people's code, ideas for business names, etc.
...and this too. Chatbots are great for jogging the memory about syntax.
In many cases it can reduce the time it takes for me to find the right manual or documentation.
Launched one project and launching a second soon; before GPT I never ended up launching anything, kek.
What sort of?
It helps me with worthless uni exercises like PHP
It has increased the amount of time I spend jerking off to hentai or playing gaymes at my WFH job, because I get the day's work done quick.
Its voice chat feature on Android is neat for a pajeet like me trying to learn English better.
>I have GPT4 and love it, I can't really imagine living without it anymore.
this, it's great if you're lonely.
I stopped using social media, as every fricking site is now flooded with pakis and pajeets spamming nonsense.
When I have a problem I just describe it and get more reliable answers than Google, which is now garbage.
I am learning a lot faster.
>It does literally all my research for me, most of my debugging, and writes things like design docs for me.
>Literally
Spotted the zoomer
Instead of just searching for stuff, I ask GPT first, and then I search for it.
Yeah, I ask it questions that I would have asked my father or older brother, if I had those.