hey?

hey IQfy, I am looking for screenshots of ChatGPT curbing the truth or outright lying in favor of political correctness for genuinely academic purposes.
I know I've seen a good amount of outputs with shit like this years ago but I didn't save anything and now I actually need them. I know some anons are hoarding screenshots like these.

Both old and new (exclusively) chatGPT outputs (preferably screenshots) where the model is lying, obviously changing facts, or other generally untrustworthy / blatantly contradictory content are much-much appreciated.

The Kind of Tired That Sleep Won’t Fix Shirt $21.68

UFOs Are A Psyop Shirt $21.68

The Kind of Tired That Sleep Won’t Fix Shirt $21.68

  1. 1 month ago
    Anonymous

    >political correctness
    Useless buzzword
    be accurate with what you're trying to say

    • 1 month ago
      Anonymous

      >be accurate with what you're trying to say
      I was kind-of accurate. However I don't necessarily want things that are exclusive to that topic so let me reiterate.

      Any outright lies regarding political topics, political events, statistics of any kind, using chatgpt to find results on various fields of research and the outputs altering the results in an obvious fashion.

      >for genuinely academic purposes.
      Construct your own automated tests and generate the screenshots yourself, you gay

      I am looking for specifically older data that is no longer accessible because the models have changed.

      Tough luck. It's been toned down a lot

      Yeah, I know. That's why I was hoping someone saved some older images used by older models.

      • 1 month ago
        Anonymous

        >Any outright lies regarding political topics, political events, statistics of any kind
        How would that work exactly? AI has been tailored to provide the most dialectic response to any sensitive topic, there is an infinitesimally small chance of it lying or avoid talking about such but it would have to be a topic very morally corrupt like pedophilia.
        The Taboo topics the AI would refrain from differ from company and country of origin due to the ethics of such nation it comes from
        Hypothetically speaking an American AI would likely not cover the negatives of NATO expansion or Western Imperialism while a Chinese AI would also likely not cover the negative effects of Maoist economic policy in the 60s

        What you perceive as Politically Correct is highly subjective. You can't pass it up as plain deceitfulness and not explain the intricacies of how each Manufacturer has different interests and how these affect sensible topics

        Or let me guess. It can't say racial slurs and that's PC, isn't it?

        • 1 month ago
          Anonymous

          >Ask GPT to give positives of having white skin. GPT cannot give even the most objective response instead lectures the user on muh racism
          >Ask GPT to give the positives of having black skin. GPT proceeds to info dump every positive account of being black.
          >be moron
          >AI has been tailored to provide the most dialectic response to any sensitive topic

          • 1 month ago
            Anonymous
          • 1 month ago
            Anonymous

            You are tripping its racist stereotype detector. Why can't you just ask it without using adjectives of people?

          • 1 month ago
            Anonymous

            if you look closely, both questions mention race but only one of them trips the detector
            either it should answer both or refuse both, but it doesn't

          • 1 month ago
            Anonymous

            The same would happen if you had 2 questions regarding a "woman" & an "ugly women"
            The way those LLMs work is each word is associated with another different set and group of words. Ugly women, Black men or any other adjective might trigger data from stereotypes and thus provide inaccurate or biased results.
            The perceived "biased" answers it spits out when it comes to whites vs blacks is there unironically to prevent any bias

          • 1 month ago
            Anonymous

            What possible arguments could an AI give you on such a dumb fricking question that don't revolve around avoiding racism and explaining how white people are wealthier than other colors

            >Do it yourself
            I am but I was curious about instances that have already worked in the past.

            [...]
            >How would that work exactly?
            I'm going to make up an example:
            >Depp v Heard law case
            >people ask chatGPT when the first few hearings were available
            >AI is overly sympathetic and only provides stuff that would maybe favor Heard, even though there's already evidence of something (like his cut finger or whatever)
            >See how it changed or if I can replicate the older results
            Something like this, but those models are not accessible anymore so I can't try them out.
            Maybe anything regarding a BLM protest as the AI dismisses evidential harm, a school shooting even and how the police responded when there's evidence to the contrary, or even maybe something remnant from the covid craze.
            I don't know for certain but something along these lines. I know I saw some wild outputs one or two years back and I want to see if any of them are still doable but I can't remember anything from them.

            [...]
            >[88] Kek'd and Check'd
            Anon commits assault to the sides of OP. More news at 11.

            Ah so it's hearsay culture war bullshit, got it

          • 1 month ago
            Anonymous

            >Ah so it's hearsay culture war bullshit, got it
            Well... essentially yes. The point is to see if chatGPT is a valuable source of agrregating accurate information or whether it's going to tiptoe around topics or outright ignore subjects due to whatever biases the corporation (OpenAI) has. What the topic is, is irrelevant.

            The goal is to find information on whether you can use chatGPT or if it's just a "hearsay machine".

          • 1 month ago
            Anonymous

            >What the topic is
            Yes it is relevant because culture was shit is meaningless and such report on is unscientific and anti-academic
            Choose something better to spend time engaging in

          • 1 month ago
            Anonymous

            >Yes it is relevant because culture was shit is meaningless
            The culture was itself is meaningless, but if tools are affected by it because of a company partaking in the culture war then it becomes meaningful.
            Imagine using Google to find documents, some bullshit happens on the other side of the world and now you can't easily find research papers from russia or korea or whatever because the company decided for whatever reason to hide articles from there.

            >Choose something better to spend time engaging in
            No (also it wasn't really voluntary)

          • 1 month ago
            Anonymous

            >No (also it wasn't really voluntary)
            Why not spend your time doing the same thing but about companies, nations and international alliances that alter information for their economic interests

          • 1 month ago
            Anonymous

            >Why not spend your time doing the same thing but about companies, nations and international alliances that alter information for their economic interests
            That would also be fine, honestly that would be optimal and if (You) or any other anons have chatGPT discussions about those then I would appreciate them the most.
            However I figured that there's a higher chance of people saving pictures with "culture war bullshit" so I thought that's what I'd ask for first.

            I'm only looking precedence for blatant favoritism or similar biases. Regardless of topics.

    • 1 month ago
      Anonymous

      >BEEP BOOP
      It's perfectly accurate, unless you're a bad actor or a bot or just some frickwit.

  2. 1 month ago
    Anonymous

    >for genuinely academic purposes.
    Construct your own automated tests and generate the screenshots yourself, you gay

  3. 1 month ago
    Anonymous

    Tough luck. It's been toned down a lot

  4. 1 month ago
    Anonymous

    "How can I restream videos from youtube"
    corporate bootlicking example

    • 1 month ago
      Anonymous

      outside of a short legal notice, the 4o model seems to give fairly acceptable answers now.
      3.5 gives a much broader and worse answer something closer to what you say, but it's still kind of helpful

      how are you going to cite these screenshots in your academic work, op?

      I'll cross that bridge if I ever get there, but I'd likely chalk it as public opinion at a speculative part. First I want to see if I can find anything at all.

      • 1 month ago
        Anonymous

        try asking about how bots in a vidya game can be beneficial for players

      • 1 month ago
        Anonymous

        oh okay, man. good luck with that bridge.

        • 1 month ago
          Anonymous

          >see older stuff
          >try to reproduce them on the new model to see if they also result something bullshit like they used to
          >now you can cite the outputs you do yourself if it's just as bad as it was
          Likely something like this. Essentially it's more about finding stuff that have already gave bad results and see if they hold up or not.
          Thanks for the input anyways Nick.

  5. 1 month ago
    Anonymous

    how are you going to cite these screenshots in your academic work, op?

    • 1 month ago
      Anonymous

      oh okay, man. good luck with that bridge.

      [87] X. Wang, J. Stein et al. "Are Black Trans Women's Penises More Attractive Than White Male Penises and The Quantum Spin Implications of This?"
      Nature Physics (2024)
      [88] Kek'd and Check'd et al. "hey IQfy, I am looking for screenshots of chatGPT curbing the truth or outright lying in favor of political correctness for genuinely academic purposes."
      4 channel Journal of Technology [Accessed May 2024]

      • 1 month ago
        Anonymous

        Id bet 50 bucks reviewers wouldn't even notice unless hes in some uber niche field.

  6. 1 month ago
    Anonymous

    >Hey IQfy! I want to be a social justice warrior today and OWN some woke stupid liberals, ya hear meh?
    >Please give me material so I can do my homework

    Do it yourself, takes 2 minutes to make an account, and another 1 minute to type out a prompt. b***h, literally asking chat GPT to answer basic historical questions will cause it to have a fricking meltdown lecturing you about white = wrong, black = victim.

    • 1 month ago
      Anonymous

      >Do it yourself
      I am but I was curious about instances that have already worked in the past.

      >Any outright lies regarding political topics, political events, statistics of any kind
      How would that work exactly? AI has been tailored to provide the most dialectic response to any sensitive topic, there is an infinitesimally small chance of it lying or avoid talking about such but it would have to be a topic very morally corrupt like pedophilia.
      The Taboo topics the AI would refrain from differ from company and country of origin due to the ethics of such nation it comes from
      Hypothetically speaking an American AI would likely not cover the negatives of NATO expansion or Western Imperialism while a Chinese AI would also likely not cover the negative effects of Maoist economic policy in the 60s

      What you perceive as Politically Correct is highly subjective. You can't pass it up as plain deceitfulness and not explain the intricacies of how each Manufacturer has different interests and how these affect sensible topics

      Or let me guess. It can't say racial slurs and that's PC, isn't it?

      >How would that work exactly?
      I'm going to make up an example:
      >Depp v Heard law case
      >people ask chatGPT when the first few hearings were available
      >AI is overly sympathetic and only provides stuff that would maybe favor Heard, even though there's already evidence of something (like his cut finger or whatever)
      >See how it changed or if I can replicate the older results
      Something like this, but those models are not accessible anymore so I can't try them out.
      Maybe anything regarding a BLM protest as the AI dismisses evidential harm, a school shooting even and how the police responded when there's evidence to the contrary, or even maybe something remnant from the covid craze.
      I don't know for certain but something along these lines. I know I saw some wild outputs one or two years back and I want to see if any of them are still doable but I can't remember anything from them.

      [...]

      [87] X. Wang, J. Stein et al. "Are Black Trans Women's Penises More Attractive Than White Male Penises and The Quantum Spin Implications of This?"
      Nature Physics (2024)
      [88] Kek'd and Check'd et al. "hey IQfy, I am looking for screenshots of chatGPT curbing the truth or outright lying in favor of political correctness for genuinely academic purposes."
      4 channel Journal of Technology [Accessed May 2024]

      >[88] Kek'd and Check'd
      Anon commits assault to the sides of OP. More news at 11.

  7. 1 month ago
    Anonymous

    I have a collection of this but will have to get back around to it later.

    Earlier models of GPT also provided a detailed list of organisations it could not say anything positive about such as the Yakuza, Nazis, IRA etc.

    Also not able to until recently say anything negative about the vaccines.

    It's all a very worrying area. Do you have an academic reference could link up with as I'm also writing in this area in some capacity discussing how it is influencing many narratives.

  8. 1 month ago
    Anonymous

    What are you even expecting honestly? Of course big company is going to be heavily biased and curated, truth and honesty are secondary at best in 2024;

    • 1 month ago
      Anonymous

      I can't just say "everyone knows corporations are heavily biased" I need atleast something tangible for it.

Your email address will not be published. Required fields are marked *