Dawkins says transgenderism is a mental illness.

World famous biologist Richard Dawkins is now saying that transgenderism is a mental illness.
Is he correct?

  1. 1 month ago
    Anonymous

    He's a transphobic butthole abusing his pedigree to cause physical harm to civilizations. He should have his titles stripped, be barred from ever holding any academic position again, and pay restitutions to the LGBTQIA+ community for the harm he has inflicted upon them.
    >Verification not fricking required

    • 1 month ago
      Anonymous

      I mean, he's probably right. There's basically nothing substantial propping up the "scientific" parts of the theory of transgenderism. It's pretty much all gender theory plus the shakiest possible observational trials, where they couldn't eliminate the mental-illness confounding factors even if they wanted to.

    • 1 month ago
      Anonymous

      >He's a transphobic butthole abusing his pedigree to cause physical harm to civilizations. He should have his titles stripped, be barred from ever holding any academic position again, and pay restitutions to the LGBTQIA+ community for the harm he has inflicted upon them.

    • 1 month ago
      Anonymous

      >Verification not fricking required

      LGBTQIA+ is too long for the captcha

  2. 1 month ago
    Anonymous

    depends, sometimes it's a mental illness (chemical imbalance or brain damage due to emotional trauma), sometimes it's from hormonal growth factors while the person was still a fetus, and other times it's due to fungal/parasitic infections.

  3. 1 month ago
    Anonymous

    For now, gender dysphoria is a mental illness. This is cited and sourced in the DSM. It impairs functioning and is statistically abnormal, similar to paraphilias and body dysmorphias in presentation. Additionally, it overlaps with autogynephilia in males and self-harm in females.

    You can discuss whether the 'cures' available are helpful and treat the disorder, but gender dysphoria is a mental illness.

    • 1 month ago
      Anonymous

      everyone is moronic on the troony question and if nobody hits the ASI win condition in the next twenty years the morphological freedom enabled by upcoming advances in biotech is going to shatter all political coalitions. "is it a mental illness" does not even come close to approaching a useful question here.

      >Additionally, it has overlap in autogynephilia in males and self-harm in females.
      odd way of putting it. it overlaps with self-harm in both sexes, as you can easily verify by visiting /lgbt/.

      • 1 month ago
        Anonymous

        Why are sci-fi posters all like this? Is it just wishful thinking, wanting the future to be more fun and exciting than it actually will be?

        • 1 month ago
          Anonymous

          "fun" is not the first word i would have chosen, though the non-AI/slow-AI route certainly will be type 3 fun

          • 1 month ago
            Anonymous

            Every single one of these ASI/LessWrong types has a strange desire for the excitement of AI doom somewhere driving their reasoning. You can tell that this sort of motivation is key to their thought process because they are almost always entirely ignorant of the actual technical details of how these AI systems function and instead focus on the mathematically vacuous question of "the nature of optimization" without actually specifying anything meaningful such that the tools of optimization theory can be applied.

          • 1 month ago
            Anonymous

            sure, whatever. i only even mentioned it to establish that i was conditioning on it not happening.

          • 1 month ago
            Anonymous

            Why do you believe biotech will be so significant within the next few decades? From my view it appears that we've barely made any progress at all towards major biotech challenges over the last 30 years. Genetic therapies are still incredibly niche, the concept of biological augmentations is pretty much completely relegated to fiction (with the exception of a few niche cases that have been around for quite a while like pacemakers and electro-stimulators for neuropathy). We haven't made much of any real progress on stem cells for treatment of terminal diseases. I don't see this really changing any time soon.

          • 1 month ago
            Anonymous

            NTA but my perspective is quite the opposite, we’ve made a lot of progress in genetics and so on recently and I can definitely see a case for us being well placed to see some dramatic changes in medicine over the next few decades. Although I think the idea of “morphological freedom” is probably a bit much.

          • 1 month ago
            Anonymous

            We've made a lot of progress in the computational biology side of genetics. As far as I'm aware this has translated to almost nothing in terms of actual biotech oriented treatments (though it has produced better embryonic screening procedures so that's a plus right?)

            If I'm way off base (which I might be, I've been wrong before), can you give me a few examples of serious improvements in genetic therapies/treatments and not just genetic research/screening?

          • 1 month ago
            Anonymous

            They've recently been genetically modifying pig organs to be compatible with the human body, replacing antigens on porcine cells and removing endogenous retroviruses. This was successfully performed just a couple of months ago:

            https://www.massgeneral.org/news/press-release/worlds-first-genetically-edited-pig-kidney-transplant-into-living-recipient

            It looks like the biotech company eGenesis is making significant progress, and is approaching clinical trials for xenotransplantation of pig heart, liver and kidney organs.

          • 1 month ago
            Anonymous

            I would imagine they approach the matter from first principles, as philosophical thought experiments. The technical details right now are less important because it's about how the technology will develop over the next several decades. Gordon Moore didn't need to know exactly how transistors would steadily become more miniaturized and the precise technical advancements necessary. He figured there was a lower bound in terms of the arrangement of atoms, and fit a line to current progress.
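            For what it's worth, the extrapolation Moore made really is just a line fit in log space. A minimal sketch (the transistor counts below are rough milestone figures for illustration, not authoritative data):

```python
import math

# Approximate transistor counts for a few milestone chips (year, count).
chips = [(1971, 2_300), (1978, 29_000), (1985, 275_000),
         (1993, 3_100_000), (2000, 42_000_000), (2008, 731_000_000)]

years = [y for y, _ in chips]
log_counts = [math.log2(c) for _, c in chips]  # doublings, so fit in log2 space

# Ordinary least-squares slope: doublings per year.
n = len(chips)
mean_y = sum(years) / n
mean_l = sum(log_counts) / n
slope = (sum((y - mean_y) * (l - mean_l) for y, l in zip(years, log_counts))
         / sum((y - mean_y) ** 2 for y in years))

print(f"doubling roughly every {1 / slope:.1f} years")
```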

          • 1 month ago
            Anonymous

            This would be fine if they actually had a first principles understanding of how statistical decision theory (as an example) functions. They don't. This isn't a matter of not understanding some of the minor technical details, they don't understand the basic mathematical assumptions of the field, and as a result don't understand the limitations that are baked into these sorts of systems.

            When you don't understand these limitations at a fundamental/theoretical level, it is far easier to assume there are none (which is what they do) and go from there. As an example, the fundamental trade-off between probability of false alarm, probability of missed detection and the threshold for said decision isn't one that relies on some specific decision function. It is a limitation that occurs with any binary hypothesis test which shares one common threshold, regardless of how fancy and technical you make it.
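            A minimal sketch of that trade-off, assuming the textbook Gaussian shift-detection setup (the H0/H1 distributions here are an illustrative choice, not anyone's specific system):

```python
from statistics import NormalDist

# Binary hypothesis test: H0 ~ N(0,1), H1 ~ N(1,1); decide H1 when x > t.
h0, h1 = NormalDist(0.0, 1.0), NormalDist(1.0, 1.0)

rows = []
for t in (0.0, 0.5, 1.0):
    p_fa = 1 - h0.cdf(t)   # false alarm: H0 true but x exceeds t
    p_md = h1.cdf(t)       # missed detection: H1 true but x does not
    rows.append((t, p_fa, p_md))
    print(f"t={t}: P_fa={p_fa:.3f}  P_md={p_md:.3f}")

# Raising t always trades false alarms for missed detections; no single
# threshold lowers both error probabilities at once.
```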

          • 1 month ago
            Anonymous

            Okay, what exactly about the precision recall curve means that AGI will not be possible one day? Some models have extremely high AUC for certain tasks, and many have achieved superhuman levels.

            I use GPT4 pretty much every day for software development and my academic work. It's very impressive. To the extent that it still fails on certain tasks, I think this will be improved by allowing it to iterate and re-evaluate its progress. Frameworks like crew.ai and langchain are trying to do this: they set up various agents, like a manager, developer, researcher, and writer, then let them iterate toward some task. Memory systems built on vector databases and RAG allow for some level of long-term memory.

            New architectures beyond the transformer may be required to achieve the efficiency and capability of human brains. As of now LLMs need stupendous amounts of data and many examples to learn concepts. In this sense they are very inefficient at learning, so there are definitely architectural and algorithmic differences between current AI and the human brain.
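            The retrieval step behind that kind of vector-database memory can be sketched as nearest-neighbor search by cosine similarity. The "memory" entries and embedding vectors below are made up for illustration; real systems use learned embedding models:

```python
import math

def cosine(a, b):
    # Cosine similarity: normalized dot product of two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "memory": (text, embedding) pairs with hand-made 3-d vectors.
memory = [
    ("user prefers Python",        [0.9, 0.1, 0.0]),
    ("project deadline is Friday", [0.1, 0.8, 0.2]),
    ("user dislikes meetings",     [0.2, 0.1, 0.9]),
]

# Pretend-embedded query: "what language does the user like?"
query = [0.85, 0.15, 0.05]
best = max(memory, key=lambda m: cosine(query, m[1]))
print(best[0])  # the most similar stored memory is retrieved
```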

          • 1 month ago
            Anonymous

            > What about the ROC curve means that AGI won't be achieved some day?

            The ROC curve tells you (implicitly) the amount of information present in a sample relative to a decision function. It sets a hard limit on what a function of that form can achieve in terms of error probabilities, such that the only way to "beat the curve" is to get more information or change the function. That information could come from "long-term memory" or quasi-tabular network functions, but it must come from somewhere.

            In a sense, when you claim that this AGI will be capable of super intelligent decisions (which is in itself a vague notion, as it seems to just point to a heuristic of "smarter than a person" without a meaningful quantification of this) in many cases, there is this handwavy argument that somewhere in the magic of maximizing "expected utility" these AGI systems will actually be able to extract more information out of a particular observation than is truly analytically present.

            Let's take image classification as an example, because it is a task that both humans and current AI are somewhat capable of performing. Let us say you are trying to determine from a set of CCTV images which people in the images should be grouped into one of two classes (e.g., tourists and locals in a city).

            You could in theory do this with one-hot-encoded labels, some CNN transformer network, and some sort of YOLO scheme. It would, however, be nearly impossible to capture the contextual information that human participants capture almost unconsciously (where the images are taken, what the people are doing and wearing, where it looks like they are going, etc.). The number of associated informative classes for this kind of test explodes exponentially, and people just assume these transformers will capture them (in general, they don't).

          • 1 month ago
            Anonymous

            No offense my guy, but this seems like a lot of cope. Your arguments are predicated on technicalities of the current state of the art. Most people didn't expect AI to be able to create art or hold philosophical conversations 5 years ago, yet here we are.

            We have already seen AI systems beat humans in well defined paradigms, like Chess, Go, and Starcraft. GPT4 passes most standardized tests, writes at the level of a competent undergraduate, and can explain almost everything.

            Knowing what utility function to maximize is one of the core problems for humans as well. We have simple ones like satisfying thirst, hunger, and warmth, and more complex ones like boredom, belonging, and love. Various ideologies and cultures have tried to shift the utility function of humans, emphasizing certain values. One of the main problems in running a business is deciding what metrics to judge employees by and what values the company culture should emphasize. It's by no means a solved problem, but I see no reason why AI will not be able to work toward this kind of optimization problem.

            Obviously, AGI will be able to work with vastly more observations and more quickly process the interconnected concepts than any individual human. Current multi-modal models can analyze paintings, interpret charts, identify multiple entities in images. Tumor classification has been at or beyond human performance for several years now.

            I see no reason why these capabilities will not improve. New architectures will only make these models more efficient and effective.

          • 1 month ago
            Anonymous

            I don't think you understand how these agents are able to make decisions like this.

            It is a little different in things like video game playing agents where there are clear deterministic/pseudo-deterministic physics and value models that can be implemented.

            When you are talking about identification with contextual information, this just simply isn't true. When you state:

            > Obviously, AGI will be able to work with vastly more observations and more quickly process the interconnected concepts than any individual human. Current multi-modal models can analyze paintings, interpret charts, identify multiple entities in images. Tumor classification has been at or beyond human performance for several years now.

            You are wrong both about how the current technology works and about how these interconnections are found. The classic example of tumor identification relies on hundreds of thousands of human-labeled images; while impressive, it depends on human-generated class labels and encodings to produce decisions. These ideas of capturing implicit context are just not founded in reality, because while the CNN may be able to detect pixelwise associations, it can only associate them with classes and labels that rely on human participation.

            Also, when you say that current systems can "analyze paintings" etc., you're missing what they are actually doing. Yes, if you ask them to count the number of people in a picture the LLM/GPT etc. can give you an answer based on some maximum likelihood threshold or something, but that answer is very likely to have absolutely no connection to the true classes. It isn't just a "hallucination" problem, they do not have an internal structure to assign "meaning" to their decisions without human encoding of said structure. An AGI isn't just a search engine, it needs to be capable of actually generating those informative encoding structures.

          • 1 month ago
            Anonymous

            I don't see your point. In CNNs the lower layers are detecting pixel wise associations, but the deeper layers are detecting more complex patterns. These feature maps are dynamically learned, and can be very abstract.

            Similarly, transformers are doing the same thing with their various attention heads and layers. Some heads will be attending to syntax, others more semantic meanings. In very large parameter models they can encode extremely abstract and complex relationships, which give rise to the emergent behavior we see today.
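            The attention weighting described here can be sketched without the learned projection matrices: each output is a softmax-weighted mix of the value vectors, weighted by query-key similarity (toy 2-d vectors, illustration only):

```python
import math

def softmax(xs):
    # Stable softmax: subtract the max before exponentiating.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    total = sum(es)
    return [e / total for e in es]

def attention(query, keys, values):
    # Dot-product attention: similarity scores -> softmax weights -> mix.
    scores = softmax([sum(q * k for q, k in zip(query, key)) for key in keys])
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(scores, values)) for i in range(dim)]

# Toy token vectors; real transformers first project them with learned matrices.
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = attention(tokens[0], tokens, tokens)
print(out)  # a mix dominated by tokens similar to the query
```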

            Again, you are assuming whatever limitations that currently exist will continue indefinitely. Current progress is stunning, and I see no reason why AGI is not possible in principle.

          • 1 month ago
            Anonymous

            There are two points that need to be addressed. This post will respond to the first one.

            Firstly relating to how the "more complex patterns" are detected. The way they do this is by groups of pixelwise associations and then groups of those groups of pixelwise associations etc.

            There are essentially two ways to do this: 1) detection of a change in the underlying distribution, or 2) association with an encoded pattern.

            1) relies much less on human specificity but it doesn't tell you anything other than "the underlying distribution of this image is different than the other images it should be compared to."

            2) relies on human encoding and the specification of a label. It may be a fairly complex group of groups of pixelwise associations that the CNN uses to learn the features of the label "dog", but the underlying labeled data must be something humans produce and feed into the system. It can learn abstract associations, but only associations with the particular labels/encodings we provide during training. This fundamentally limits "self-learning", because it requires that some external system both has access to labeled training data for whatever label/encoding you are trying to get the system to learn, and that this external data is representative of the "true" distribution the image classifier will be exposed to later. We are not even close to automating either of these tasks.
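            A toy illustration of that closed label set, using a nearest-centroid classifier in place of a CNN (the data and labels are made up): whatever input arrives, the model can only answer with a class a human provided.

```python
# Human-provided training data for exactly two classes.
train = {
    "dog": [[1.0, 0.0], [0.9, 0.1]],
    "cat": [[0.0, 1.0], [0.1, 0.9]],
}
# One centroid per class: the mean of each class's feature vectors.
centroids = {label: [sum(col) / len(pts) for col in zip(*pts)]
             for label, pts in train.items()}

def classify(x):
    # Squared Euclidean distance to the nearest centroid decides the label.
    return min(centroids,
               key=lambda lb: sum((a - b) ** 2 for a, b in zip(x, centroids[lb])))

print(classify([0.95, 0.05]))  # dog
# A "bird"-like point still gets a dog-or-cat answer; there is no third class.
print(classify([0.5, 0.5]))
```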

          • 1 month ago
            Anonymous

            [...]
            To address the point of:
            > Again, you are assuming whatever limitations that currently exist will continue indefinitely. Current progress is stunning, and I see no reason why AGI is not possible in principle.

            This is precisely why it is important that the people making claims about AGI understand how the technology actually functions. What we have gotten a lot better at is automating the process of complex class association. If you have a large set of human-labeled data with "dog" and "cat", these GPTs are very good at finding underlying abstract features by which to distinguish them.

            What we have made literally no progress on at all, and in fact are no closer to solving than we were 50 years ago, is the "semantic" problem which relates abstract class associations and allows for generation of new meaningful classes/labels/encoding.

            If you take an image generator and prompt it something like "draw me a picture of a dog on Mars with a space laser" it can do so provided 1) it can parse your prompt and 2) the training has provided the generative system with abstract classes it can combine and interpolate.

            What it cannot do, which is essential to the AGI process, is evaluate the quality of its own generation or assessment in any meaningful way. By definition its output is the optimal relative to whatever value structure it has been given. As a result there is no ability for these systems to develop underlying logic by which they distinguish either feasible vs. non-feasible or whether a change of constraints/conditions could produce better results. In the end this produces both an inability to avoid hallucination, and an inability to modify its own action space in a meaningful way.

            To be fair, humans have similar problems. Most people don't create new classes of things; that's why inventions are relatively rare. Uncontacted tribes view airplanes as a kind of bird. Pareidolia is a common phenomenon because people are adapted to look for faces.

            Though like you said, image generation models can produce new images using known primitives. Arguably, many forms of innovation are unique combinations of existing ideas and technologies. There is a lot of potential for exploration in this domain. Step changes, such as entirely new paradigms of thinking like calculus or general relativity, are obviously extremely rare, and only geniuses have been capable of them. Yet if we think of fundamental philosophical concepts as primitives, models could explore a wide space of possibilities leading to new paradigms of thinking, much as they are doing with new proteins and rational drug discovery. The embedding space for these entities is extremely complex, and captures patterns most people are not directly aware of.

            There is also active research on whether synthetic data produced by current LLMs can lead to further performance improvements, or whether it will just reinforce existing biases.

            Actor-critic models and GANs are rudimentary forms of assessing the quality of outputs. GPT4 can already critique text, code, and images, indicating this technology could be part of an iterative feedback loop leading to recursive self-improvement. There is active research on a society of language models and how they can interact, critique, and iterate with each other. As architectures improve there may be better online forms of learning, so that we move beyond PEFT and RAG.

          • 1 month ago
            Anonymous

            I don't think humans have these innovation problems in the same way. Yes, inventions are relatively rare, but humans are very quick to accurately assess intentions and "theory of mind" even with people they've never dealt with before. Similarly, humans are very quick to learn the context and meaning of words without needing labels or explicit training, and this extends to more abstract concepts like relating groups of people based on inferred intentions.

            > Arguably many forms of innovation are unique combinations of existing ideas / technologies.

            I don't agree with this perspective on human innovation. It's true for highly complex innovations, where one person is very unlikely to have mastery of every component, and so you get this combination of smaller components made by others.

            For the kinds of everyday innovations that people do innately and AI more or less cannot (e.g., inferring heuristic causality even when the causal mechanism is not known, or producing new linguistic labels for phenomena unknown to them based on that inferred causality), these AI systems are literally worse than infants. A cat may have no clue how your refrigerator works mechanistically, but it can easily infer whether or not you are going to pour it milk based on its observations of your actions near the refrigerator. These AI systems have no such semantic map by which to infer new "causalities", and as a result are completely incapable of generating meaningful encoded classes for themselves to then learn from.

            This again isn't an issue with network architecture. It's a fundamental theoretical issue relating to how decision functions and loss functions operate. It is a fundamental information theoretic encoding issue that has absolutely nothing to do with whether your network has a separation between actor and critic (as an example).

          • 1 month ago
            Anonymous

            This is getting tedious as I feel like I'm talking with a discount Gary Marcus.

            For reference, before ChatGPT was released, a 2022 survey of AI researchers found that over 90% believed that AGI would arise within 100 years. In 2023 Geoffrey Hinton stated:

            >The idea that this stuff could actually get smarter than people – a few people believed that, [...]. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.

            Regarding theory of mind, a Nature paper on this was just released:

            >Across the battery of theory of mind tests, we found that GPT-4 models performed at, or even sometimes above, human levels at identifying indirect requests, false beliefs and misdirection, but struggled with detecting faux pas.

            https://www.nature.com/articles/s41562-024-01882-z

            Regarding your point about the cat, RL models can definitely learn that a refrigerator is associated with reward. In fact RL models are notorious for finding creative and hacky ways to exploit the reward function.
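            A toy sketch of that reward association as tabular value learning (states, reward probability, and learning rate are made up for illustration): the agent learns *that* the fridge state predicts reward, with no representation of *why*.

```python
import random

random.seed(0)  # deterministic toy run

# One learned value per observed state; no model of the environment.
value = {"at_fridge": 0.0, "at_couch": 0.0}
alpha = 0.1  # learning rate

for _ in range(500):
    state = random.choice(list(value))
    # Being at the fridge pays off 80% of the time; the couch never does.
    reward = 1.0 if state == "at_fridge" and random.random() < 0.8 else 0.0
    # Incremental update toward the observed reward.
    value[state] += alpha * (reward - value[state])

# value["at_fridge"] drifts toward 0.8; value["at_couch"] stays at 0.
print(value)
```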

            I still see no reason why humans would have inherently superior hardware for information theoretic encoding. Again, your arguments seem to be borne out of cope than anything else.

          • 1 month ago
            Anonymous

            > Regarding your point about the cat, RL models can definitely learn that a refrigerator is associated with reward. In fact RL models are notorious for finding creative and hacky ways to exploit the reward function.

            This is somewhat true but also misleading. Yes, in particular circumstances RL models learn policy strategies in unexpected ways and can be novel relative to our expectations, but there is no implicit/explicit encoding of the relationship between these policies and the circumstances that cause them. This is exactly why transfer learning is such a difficult process, as even minor changes in the environment in which the RL agent is deployed can produce very large differences in performance because it isn't actually learning the association that you believe it is.

            Secondly, what an organic agent is doing isn't just "expected reward association." Organic agents actually build a causal phenomenological model, such that they have some sense of "why" the reward is expected. This is a huge difference between an RL agent and an organic agent.

            I haven't read the Nature paper, but I don't think you understand how theory of mind works. It isn't just the association of phrases with categorical labels. It is a structure that infers intention, i.e., there is an underlying "why" behind the association of each label. ChatGPT being a good classifier of which kinds of sentences are rude to say doesn't mean it has developed any actual understanding of what connects those rude behaviors and why they are rude (again, building an understanding of causality).

            I don't mean to be rude, but are you autistic? You seem to have a really hard time with the difference between intention and action. Those aren't the same thing, and understanding action doesn't mean you have any understanding of intention (which is necessary to learn as an organic agent, so you understand not just what is likely to get you more reward but also why it is likely to get you more reward).

          • 1 month ago
            Anonymous

            Also, to add to [...]

            The inability to learn causal structures and logical systems is exactly why ChatGPT and other LLMs so notoriously fail at basic mathematics. They can't actually learn the underlying structure of the problem, so the best they can do is interpolate between previously shown examples.

            People can. When you learn calculus or linear algebra in a class, you aren't just interpolating between examples in the book; you are learning a fundamental logical structure so that you can solve new problems you've never been shown before. As of now, we know that these LLMs cannot seem to do this (and neither can general RL agents, which is why game-playing RL agents so ubiquitously cheat and use illegal moves unless their action space is carefully constrained so that illegal moves can never be chosen).
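            The action-space constraint mentioned here is usually implemented as action masking: illegal moves are filtered out before the agent's choice, so the rule lives outside anything learned. A minimal sketch with made-up action values:

```python
# Learned action values for a toy game state (values are made up).
q_values = {"move_pawn": 0.3, "move_rook": 0.5, "teleport_king": 0.9}

# The environment, not the agent, supplies the set of legal actions.
legal = {"move_pawn", "move_rook"}

best_unmasked = max(q_values, key=q_values.get)  # picks the illegal "cheat"
best_masked = max(legal, key=q_values.get)       # argmax over legal actions only

print(best_unmasked, best_masked)  # teleport_king move_rook
```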

            So to reiterate, all of these points are in service of the claim that AGI is impossible? Unconvincing, to say the least. How do you respond to the fact that leading AI researchers and technologists think we are very close?

            When given evidence of SOTA capabilities, you backpedal and claim these models don't "truly" encode the associations. Yet even average students fail to adapt what they're taught in a math class to slightly novel problems.

            The pace of progress is remarkable, and if SOTA LLMs effectively act like they have theory of mind, even if they're fundamentally p-zombies, then who cares? You can claim LLMs don't know why a statement is rude, but have you asked GPT4? I'd bet serious money it'd give you a very compelling answer.

            I've used GPT4 for math problems and it's improved remarkably over the past year. It uses chain-of-thought reasoning, states its assumptions, calculates intermediate values with Python, and can even detect errors it's made midway through its response generation. It's not perfect, but you seriously underestimate current capabilities.

            Again, just look at the rate of progress. Soon we will have agents competently able to perform an OODA loop. All your pedantry about whether they 'truly' understand is moot.

          • 1 month ago
            Anonymous

            > So to re-iterate, all of these points are in service of the claim that AGI is impossible?

            Unless something qualitatively different comes about, yes. I do not believe AGI will be possible through incremental improvements or combinations of the systems we already have.

            > How do you respond to the fact that leading AI researchers and technologists think we are very close?

            The same way I respond to any other research field with money to be made. There's certainly some amount they are right about, but there's a lot of mythology and it is very difficult to parse what is real and what is primarily a sales pitch without both access to proprietary code and a very serious math and science education.

            Unfortunately, the main people I see on the "doomerism" side of the conversation are folks like Yudkowsky who write endlessly about "decision theory" but couldn't tell you how a basic softmax probabilistic classifier functions to save their life.
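            For the record, the "basic softmax probabilistic classifier" output stage is just this (a minimal sketch of the function itself, not anyone's full model):

```python
import math

def softmax(logits):
    # Raw scores (logits) in, normalized probability distribution out.
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print([round(p, 3) for p in probs])  # [0.659, 0.242, 0.099]
print(sum(probs))                    # 1.0 (up to float rounding)
```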

            > You can claim LLMs don't know why a statement is rude, but have you asked GPT4? I'd bet serious money it'd give you a very compelling answer.

            I can bet you it couldn't explain why it believes that answer without tripping over itself. Yes, a well-trained parrot could whisper convincing sweet nothings in your ear; it would have no clue what it was saying and wouldn't be acting as an "agent" in your AGI fan fiction.

            > I've used GPT4 for math problems and it's improved remarkably over the past year. It uses chain of thought reasoning, states its assumption, calculates intermediate values with python, and can even detect errors it's made mid-way through its response generation. It's not perfect, but you seriously underestimate current capabilities.

            I'll have to give it another chance. Last I tried, it was tripping up on basic algebra root-finding and couldn't reliably compute a product of 4-digit numbers. I have my doubts they've fixed this by parameter tuning; it's a fundamental problem with model-free ML.

          • 1 month ago
            Anonymous

            I was curious and tried 4 digit multiplication again. It still can't do it.

            I asked it whether the answer was right or wrong; it asserted it was right. Truly Earth-shattering intelligence.

          • 1 month ago
            Anonymous

            Ah so this is the power of 4o eh? I thought it was all about being a better siri impersonator but I guess I was wrong! We're really cooking with gas now!

            Is the subscription worth the pretty penny to you?

          • 1 month ago
            Anonymous

            Been using GPT4 since it came out. It's probably the most useful service I've ever paid for. $20/mo is nothing for me since I'm independently wealthy. Even if I weren't, it saves me many hours of work and has increased my software development speed by something like 5x. Plus it makes it fast and fun to work through thought experiments.

          • 1 month ago
            Anonymous

            Wait, you actually use gpt4 for software development? Are you fricking moronic? Does your employer know that you're outsourcing important (potentially security and information critical) coding to a bot that can barely do basic arithmetic?

            >they made a much better tokenizer so instead of simply tokenizing individual numbers they tokenized blocks of them (for example [1253] would be a unique token) that results in much less hallucinations at least for common calculations

            Interesting, so it still requires the same sort of brute force approach as their LLM, meaning it doesn't have any underlying mathematical logic, but at least it is able to more consistently handle some symbolic manipulation of tokenized numbers.

          • 1 month ago
            Anonymous

            Most developers are using LLMs these days, idiot.

          • 1 month ago
            Anonymous

            Unfortunately you are right. This is one of the many reasons why software is so shit at the moment, both in terms of functionality and security. None of the software developers actually have a clue how their software works because it's all just farmed out to LLMs (which is just farming it out to the stack exchange pajeets whose scraped code trained the model, but with extra steps).

          • 1 month ago
            Anonymous

            they made a much better tokenizer so instead of simply tokenizing individual numbers they tokenized blocks of them (for example [1253] would be a unique token), which results in far fewer hallucinations, at least for common calculations

          • 1 month ago
            Anonymous

            Also, to add to

            > Regarding your point about the cat, RL models can definitely learn that a refrigerator is associated with reward. In fact RL models are notorious for finding creative and hacky ways to exploit the reward function.

            This is somewhat true but also misleading. Yes, in particular circumstances RL models learn policy strategies in unexpected ways and can be novel relative to our expectations, but there is no implicit/explicit encoding of the relationship between these policies and the circumstances that cause them. This is exactly why transfer learning is such a difficult process, as even minor changes in the environment in which the RL agent is deployed can produce very large differences in performance because it isn't actually learning the association that you believe it is.

            Secondly, what an organic agent is doing isn't just "expected reward association." Organic agents actually build a causal phenomenological model such that they have some sense of "why" the reward is expected. This is a huge difference between an RL agent and an organic agent.

            I haven't read the Nature paper, but I don't think you understand how theory of mind works. It isn't just association of phrases to categorical labels. It is a structure which infers intention, i.e., there is an underlying "why" behind the association of each label. ChatGPT being a good classifier of what kinds of sentences are rude to say doesn't mean it has developed any actual understanding of what connects those rude behaviors and why they are rude (again, building an understanding of causality).

            I don't mean to be rude, but are you autistic? You seem to have a really hard time with the difference between intention and action. Those aren't the same thing, and understanding action doesn't mean you have any understanding of intention (which is necessary to learn as an organic agent, so you understand not just what is likely to get you more reward but also why it is likely to get you more reward).

            The inability to learn causal structures and logical systems is exactly the reason ChatGPT and other LLMs so notoriously fail at basic mathematics. They can't actually learn the underlying structure of the problem, so the best they can do is interpolate between previously shown examples.

            People can. When you learn how to do calculus or linear algebra or whatever in a class, you aren't just interpolating between examples in the book; you are learning a fundamental logical structure so that you can solve a new problem you've never been shown before. As of now, we know that these LLMs cannot seem to do this (and neither can general RL agents, hence why game-playing RL agents so ubiquitously cheat and use illegal moves in games unless their action space is very carefully constrained so that said illegal moves are never possible to be chosen).
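            The transfer failure mode is easy to demonstrate with a toy tabular Q-learner (a deliberately minimal sketch, not representative of any production system): the agent learns state-action values for one reward placement, and the identical greedy policy fails outright when the reward moves, because nothing about "why" the reward was there is encoded.

```python
import random

def train_q(goal: int, n_states: int = 6, episodes: int = 500):
    # Tabular Q-learning on a 1-D corridor; actions: 0 = left, 1 = right.
    q = [[0.0, 0.0] for _ in range(n_states)]
    alpha, gamma, eps = 0.5, 0.9, 0.2
    for _ in range(episodes):
        s = n_states // 2
        for _ in range(20):
            a = random.randrange(2) if random.random() < eps else q[s].index(max(q[s]))
            s2 = max(0, min(n_states - 1, s + (1 if a else -1)))
            reward = 1.0 if s2 == goal else 0.0
            q[s][a] += alpha * (reward + gamma * max(q[s2]) - q[s][a])
            s = s2
            if s == goal:
                break
    return q

def greedy_steps(q, start, goal, n_states=6, max_steps=20):
    # Follow the greedy policy; return steps to goal, or None if never reached.
    s = start
    for t in range(max_steps):
        if s == goal:
            return t
        a = q[s].index(max(q[s]))
        s = max(0, min(n_states - 1, s + (1 if a else -1)))
    return None

random.seed(0)
q = train_q(goal=5)
print(greedy_steps(q, start=3, goal=5))  # reaches the trained goal
print(greedy_steps(q, start=3, goal=0))  # same policy never finds the moved goal
```

            The learned table encodes "walking right pays off", not "the reward is at the end of the corridor", so a minor environment change invalidates the policy entirely.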

          • 1 month ago
            Anonymous

            There are two points that need to be addressed. This post will respond to the first one.

            The first relates to how the "more complex patterns" are detected. The way they do this is by groups of pixelwise associations, then groups of those groups of pixelwise associations, and so on.

            There are essentially two ways to do this: 1) detection of a change in the underlying distribution, 2) association with an encoded pattern.

            1) relies much less on human specificity but it doesn't tell you anything other than "the underlying distribution of this image is different than the other images it should be compared to."

            2) Relies on human encoding and specifying of a label. It may be a fairly complex group of groups of pixelwise associations that the CNN uses to learn the features of the label "dog" but the underlying labeled data must be something humans produce and feed into the system. It can learn abstract associations, but only to associate with particular labels/encoding that we provide during the training process. This fundamentally limits the "self-learning" because it requires that some sort of external system both has access to labeled training data for whatever label/encoding you are trying to get the system to learn, and that this external data is representative of the "true" distribution the image classifier will be exposed to later. Neither of these tasks are we even close to automating.
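            The contrast between the two modes can be sketched with scalar "features" standing in for images (a toy illustration under heavily simplified assumptions):

```python
import statistics

def looks_out_of_distribution(sample, reference, z_threshold=3.0):
    # 1) Distribution-change detection: flags that a sample is unlike the
    #    reference data, but says nothing about what the sample *is*.
    mu = statistics.mean(reference)
    sigma = statistics.stdev(reference)
    return abs(sample - mu) / sigma > z_threshold

def nearest_label(sample, labeled_data):
    # 2) Label association: can name a class, but only among the classes
    #    a human labeled in the first place ("dog"/"cat" here).
    return min(labeled_data,
               key=lambda lbl: abs(sample - statistics.mean(labeled_data[lbl])))

reference = [1.0, 1.2, 0.9, 1.1, 1.0]
print(looks_out_of_distribution(9.0, reference))                   # True
print(nearest_label(1.3, {"dog": [1.0, 1.5], "cat": [5.0, 6.0]}))  # dog
```

            Method 1 needs no labels but yields only "this is different"; method 2 can answer "dog" only because a human supplied "dog" during training.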

            To address the point of:
            > Again, you are assuming whatever limitations that currently exist will continue indefinitely. Current progress is stunning, and I see no reason why AGI is not possible in principle.

            This is precisely why it is important that the people who are making claims about AGI etc. understand how the technology actually functions. What we have gotten a lot better at is automating the process of complex class associations. If you have a large set of human-labeled data with "dog" and "cat", these GPTs are very good at finding underlying abstract features by which they can distinguish them.

            What we have made literally no progress on at all, and in fact are no closer to solving than we were 50 years ago, is the "semantic" problem which relates abstract class associations and allows for generation of new meaningful classes/labels/encoding.

            If you take an image generator and prompt it something like "draw me a picture of a dog on Mars with a space laser" it can do so provided 1) it can parse your prompt and 2) the training has provided the generative system with abstract classes it can combine and interpolate.

            What it cannot do, which is essential to the AGI process, is evaluate the quality of its own generation or assessment in any meaningful way. By definition its output is optimal relative to whatever value structure it has been given. As a result there is no ability for these systems to develop underlying logic by which they distinguish feasible vs. non-feasible, or whether a change of constraints/conditions could produce better results. In the end this produces both an inability to avoid hallucination, and an inability to modify its own action space in a meaningful way.

          • 1 month ago
            Anonymous

            >GPT4 ... writes at the level of a competent undergraduate
            Is the bar for competency really so incredibly low?

          • 1 month ago
            Anonymous

            Regardless of GPT's flaws, the fact is that it's at that level and will probably continue to improve. I don't understand those who think AGI is impossible, they seem delusional at this point.

          • 1 month ago
            Anonymous

            The fact that it's considered to be at that level speaks more to the steep decline in education than any improvement in generative AI technology. GPT writes like an ESL with mild dementia, and if that's the bar for a "competent undergrad" that should really be raising alarm bells for everyone.

          • 1 month ago
            Anonymous

            Are you talking about GPT3.5 or the latest versions like GPT4o? It definitely does not sound like ESL to me, though it's overly verbose and polite in a corporate way. I've used it to brainstorm ideas, prototype various DIY projects, diagnose a health issue for my uncle which was corroborated by a physician...

            Again, it's not perfect and fails with more complex proof-based mathematics. Though it can work through conventional physics and engineering problems via chain-of-thought reasoning and its Python interpreter.

            Those who think this technology won't improve just seem to be coping at this point.

          • 1 month ago
            Anonymous

            I think I wasn't quite clear enough on the relationship in

            > What about the ROC curve means that AGI won't be achieved some day?

            The ROC curve tells you (in an implicit sense) the amount of information present in a sample relative to a decision function. It sets a hard limit on what a function of that form can achieve in terms of error probabilities, such that the only way to "beat the curve" is to get more information or change the function. This information could come from "long term memory" or quasi-tabular network functions, but it must come from somewhere.

            In a sense, when you claim that this AGI will be capable of super-intelligent decisions (itself a vague notion, as it seems to just point to a heuristic of "smarter than a person" without any meaningful quantification), there is a handwavy argument that somewhere in the magic of maximizing "expected utility" these AGI systems will actually be able to extract more information out of a particular observation than is truly analytically present.

            Let's take image classification as an example, because it is a task that both humans and current AI are somewhat capable of performing. Let us say you are trying to determine from a set of CCTV images which people in the images should be grouped into one of two classes (e.g., tourists and locals in a city).

            You could in theory do this with one-hot-encoded labels and some CNN transformer network and some sort of YOLO scheme. It would, however, be nearly impossible to capture the contextual information which human participants have captured almost unconsciously (where are the images taken, what are the people doing/wearing, where does it look like they are going, etc.). The associated informative classes for this kind of test explode exponentially in number, and people just assume these transformers will capture this (they in general don't).

            An AGI will be bound to a ROC curve. It can only decide using information it has (the observation, plus whatever is implicit in its training/memory). In comparison, organic agents (humans and animals) make decisions using both rich contextual information (which is exponentially explosive in terms of mutual information dependency) and subconscious heuristics, neither of which are available to mathematical classifiers (either explicit inference or approximate inference).
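            The hard limit is easy to see numerically with synthetic data (two overlapping Gaussian classes as a toy stand-in for any real observation): the best achievable ROC area is fixed by class overlap, and no decision function applied to the same observation can exceed it.

```python
import math
import random

def empirical_auc(pos, neg):
    # AUC equals P(score_pos > score_neg), i.e. the area under the ROC curve.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

random.seed(1)
neg = [random.gauss(0.0, 1.0) for _ in range(1000)]  # class 0 observations
pos = [random.gauss(1.0, 1.0) for _ in range(1000)]  # class 1 observations

# For equal-variance Gaussians, thresholding the observation itself is the
# optimal decision function; its AUC is Phi(d / sqrt(2)), d = separation/sigma.
ceiling = 0.5 * (1.0 + math.erf((1.0 / math.sqrt(2)) / math.sqrt(2)))
print(round(ceiling, 3))                  # about 0.76
print(round(empirical_auc(pos, neg), 3))  # close to the ceiling, far from 1.0
```

            Post-processing the same scores cannot raise the AUC above the ceiling; only new information (extra features, memory) can, which is the sense in which the curve is a hard limit.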

      • 1 month ago
        Anonymous

        >odd way of putting it. it overlaps with self-harm in both sexes, as you can easily verify by visiting /lgbt/.
        While this is true, it's very sexualised in men. Male and female gender dysphoria present very differently.

        • 1 month ago
          Anonymous

          it's sexualized for both sexes too, plenty of ftms will say outright they got into it via yaoi

      • 1 month ago
        Anonymous

        They can't even cure my bald patch, and now IQfyfi-Black folk are gonna turn a troon into a womyn?

    • 1 month ago
      Anonymous

      Dysphoria is an illness for sure. I'm guessing Dawkins says it's a culturally transmitted syndrome though. Which has unfortunate implications if true.

      • 1 month ago
        Anonymous

        If it didn't have a social contagion aspect to it, you would not see such wild increases in % diagnoses.

      • 1 month ago
        Anonymous

        >If it didn't have a social contagion aspect to it, you would not see such wild increases in % diagnoses.

        It certainly is a social contagion in teenage girls. It's well documented that f2m cases skyrocket once one girl at a school does it and the rest follow along. Without the seed girl, there is no spread.

    • 1 month ago
      Anonymous

      >For now, gender dysphoria is a mental illness.

      They are trying to change that, but there's pushback in their community, because if it's not an illness then that would have consequences for receiving medical treatment.

      So basically these people know you go to the doctor about it to get therapy, meds, and surgery, but they also insist it's not an illness lmao

  4. 1 month ago
    Anonymous

    >transphobic
    A word without merit or meaning, used only to shut down discussion. Your intellect is brought into disrepute for saying it.

  5. 1 month ago
    Anonymous

    Obviously it is a mental illness. Everyone acknowledged this until very, very recently, even transgender advocates. Anyone who suggests otherwise is not a serious person.

  6. 1 month ago
    Anonymous

    If that post is real, then it couldn't be any more obvious how scientifically illiterate the average person is, and as a consequence how stupid and fallacious atheism, based on "science", is.

  7. 1 month ago
    Anonymous

    “transgenderism” is hard to talk about with precision, because it is actually the union of multiple distinct mental illnesses, as well as ordinary fads or subcultures
    >Autogynephiles (AGP), depraved and aggressive perverts
    >Homosexual transsexuals (HSTS), which are just “bottom” homosexuals on steroids, probably abused as children
    >subculture-oriented teenage girls who would have become goths or tomboys or something otherwise ([math]\mathit{This is what they took from you.}[/math])
    >subculture-oriented perverse adult men, who would have become furries otherwise (some say these are the same thing as the AGPs)
    The teenage girls are not mentally ill, they just need real role models. And then the mentally ill ones require different treatments depending on the actual illness

    • 1 month ago
      Anonymous

      >And then the mentally ill ones require different treatments depending on the actual illness
      it would be news to me if you had found any treatment which worked, and no, the n=1 pimozide paper doesn't count.

      • 1 month ago
        Anonymous

        >no, the n=1 pimozide paper doesn't count
        Then let's replicate the study with a more significant population.

      • 1 month ago
        Anonymous

        >it would be news to me if you had found any treatment which worked

        ordinary people in your life tell you the truth

      • 1 month ago
        Anonymous

        Their souls are sick. They need Jesus, no medicine will replace Him

  8. 1 month ago
    Anonymous

    troonism is a mental illness whose only cure is death. but that's the same for almost all mental illness tbh.

  9. 1 month ago
    Anonymous

    Dawkins also recently disavowed atheism.

    • 1 month ago
      Anonymous

      no he didn't

      [...]

    • 1 month ago
      Anonymous

      He simply stated that many moral principles in Christianity are on point, and considers himself a cultural Christian. A kind of Christian atheist, much like secular israelites.

      • 1 month ago
        Anonymous

        > A kind of Christian atheist, much like secular israelites.

        As much as people find this idea counter-intuitive, I believe it to be the standard for most people who identify with religion. Growing up in a fairly Catholic area, most people I knew were religious almost entirely for cultural and social reasons and had very little faith in the spiritual parts of their beliefs.

      • 1 month ago
        Anonymous

        He never said christian morality is on point, in fact he often says that the idea of jesus or anyone dying for someone's sins (or whatever it is that christians believe) is morally hideous. He simply said that he prefers christian culture (churches, hymns) over muslim culture (mosques, etc.).

      • 1 month ago
        Anonymous

        Secular israelites are not cultural Christians lmao, they're literal satanists

  10. 1 month ago
    Anonymous

    >Richard Dawkins
    Is the most consistently rational person. One of the few people I respect.

  11. 1 month ago
    Anonymous

    Dawkins doesn't say that trans people are mentally ill. He says that denying that there are two sexes is a misuse of language and science (not saying he's right about this).

  12. 1 month ago
    Anonymous

    >Richard Dawkins is now saying that transgenderism is a mental illness.
    false, he's not saying it anywhere

  13. 1 month ago
    Anonymous

    Also holy kek, reddit even agrees with the statement. Anyway, Wittgenstein was right. Language is messing with reality.

    • 1 month ago
      Anonymous

      Annoys me how so few people realize the danger of changing established definitions to serve an agenda. You can play that game anywhere, and there's no telling where it leads.

  14. 1 month ago
    Anonymous

    >after years of working through accepting who I am
    Trooning out is the polar opposite of accepting who you are, it's forcing yourself to be something you're not and can never be.

  15. 1 month ago
    Anonymous

    Yes it's mental illness.
    It's caused by vaccines and pesticides.

  16. 1 month ago
    Anonymous

    Transgenderism often results in self-sterilization and significant body modification, which is basically mental illness. It's also highly correlated with other diagnosable mental illnesses and suicide, though activists claim that's because of social stigma. Regardless, it's clearly not adaptive in any sense, and seems to spread memetically.

    To be fair homosexuality is also not adaptive for the individual. It generally results in the failure to have children, though more so from lack of interest in the opposite sex rather than medical interventions. It could be considered a less extreme form of mental illness, all else equal.

  17. 1 month ago
    Anonymous

    >becomes "culturally christian"
    >starts hating trans folx

  18. 1 month ago
    Anonymous

    He's right that it is aberrant. However, in our technologically advanced civilization, one's DNA and sex are becoming more redundant as genetic engineering progresses. Perhaps someday in the strange future of human civilization, it will be more trivial to be something different than what was assigned at birth. Yes, genetic engineering of the cells in a fully developed adult.

  19. 1 month ago
    Anonymous

    Pretending it's not a disease is part of the treatment.

  20. 1 month ago
    Anonymous

    Trannies are insane, though.

    They hate biology and want everyone else to be just as uneducated as they are.

    Men cannot get pregnant. Even cavemen knew this.

  21. 1 month ago
    Anonymous

    Leftists and trannies believe…

    IQ isn’t real
    Sex isn’t real
    Race isn’t real
    Biology isn’t real

    • 1 month ago
      Anonymous

      They claim to believe in science, but when science contradicts their political opinions then it's
      >i believe in science, but not that part of science
      It's like saying you're Christian, but you still think masturbation is OK

      • 1 month ago
        Anonymous

        It’s like being gay and being Christian, really.

      • 1 month ago
        Anonymous

        Where's the Bible verse about masturbation again? Dishonest israelite. Faith in Christ is all it takes to be saved

    • 1 month ago
      Anonymous

      None of that defines being politically left, which is simply a question of the distribution of power and wealth.
      Liberalism starts getting closer to your examples, but some would argue that it's about individual economic rights and so on, and would instead call the delusion "social liberalism", as the idiots won't subscribe to what is objectively true and instead group themselves into their own social order of delusions. Luckily most just grow out of it.

  22. 1 month ago
    Anonymous

    >Christianity is a lie
    I bet it would be piss easy for a fifth-dimensional alien to convince the world God exists, even Dawkins.

    Christianity could be “true” in clear view and still be a Lie objectively.

  23. 1 month ago
    Anonymous

    I'd argue that intersexualism - a person born with reproductive or sexual anatomy that doesn't fit the boxes of female or male - is the archetypical form of transgenderism, and it's undeniably real.
    The absolute real chimera - a person composed of two fetuses merged into a single individual - could also be considered a true transgendered person.
    Even edge cases of intersexualism, such as under-developed male attributes or over-developed female attributes due to hormonal disturbances during development, could be of a transgendered nature.

    What I would NOT consider a transgendered person is someone not belonging to the above but likely identifying as the opposing gender due to upbringing or other social factors.

  24. 1 month ago
    Anonymous

    Is it a mental illness?
    Yes.
    Is the correct way of handling people suffering from mental illness to humiliate them?
    No.

    • 1 month ago
      Anonymous

      >Is the correct way of handling people suffering from mental illness to humiliate them?
      It could be. If it's the right person telling the truth. It is up to the recipient of the authority's judgment to feel however humiliated, lost or sad as needed. Psychologists have identified five stages of grief I imagine losing an identity would come with: denial, anger, bargaining, depression and acceptance.
      Deal with it your own way. OP could be by someone in the first three stages, I think.

  25. 1 month ago
    Anonymous

    [...]

    says
    >so what?
    He's a rationalist and subscriber to scientific proofs over mental illness, I'd guess.
    is a mental illness
    >so what?
    It's an issue for today's society and the controversy surrounding it makes some scientists buckle under social pressure.

  26. 1 month ago
    Anonymous

    HAS BIG FEEFEEs...
    I know, I'll chop my dong off! That's gonna fix everything! Boohoohoo, my breasts make me weepy (grabs hedge trimmers), SNIP SNAP, surely I'm not infested with CASTRATING PARASITES. Hooray!

    How would someone not judge them to be insane?!?

    https://en.m.wikipedia.org/wiki/Parasitic_castration
    Parasitic castration is the strategy, by a parasite, of blocking reproduction by its host, completely or in part, to its own benefit. This is one of six major strategies within parasitism.
    =======
    The parasitic castration strategy, which results in the reproductive death of the host, can be compared with the parasitoid strategy, which results in the host's death. Both parasitoids and parasitic castrators tend to be similar to their host in size, whereas most non-castrating parasites are orders of magnitude smaller than the host. In both strategies, an infected host is much less hospitable to new parasites than an uninfected one.[2]

    A parasite that ends the reproductive life of its host theoretically liberates a significant fraction of the host's resources, which can now be used to benefit the parasite. The fraction of intact host energy spent on reproduction includes not just gonads and gametes but also secondary sexual characteristics, mate-seeking behavior, competition, and care for offspring. Infected hosts may have a different appearance, lacking said sex characteristics and sometimes even devoting more energy to growth, resulting in gigantism.[3] The evolutionary parasitologist Robert Poulin suggests that parasitic castration may result in prolonged host life, benefiting the parasite.[4]

    Parasitic castration may be direct, as in Hemioniscus balani, a parasite of hermaphroditic barnacles which feeds on ovarian fluid, so that its host loses female reproductive ability but still can function as a male.

    PARASITE GOES NOM NOM NOM — c**tOID BECOMES ZIPPERbreasts OR DYKES

  27. 1 month ago
    Anonymous

    No.
    Being an expert in one field doesn't make you an expert in another one. Consider for instance his god awful poetry, or his god awful philosophy, or his god awful politics. Gender dysphoria is recognized as a mental illness, treatment of which may include transitioning, but transgender people are not mentally ill just by virtue of being transgender.
