Is there an algorithm that always converges to a global optimum if it exists (rather than getting stuck)?

For my problem (evolving cellular automata rulesets) a genetic algorithm does better than RMHC or HCT, but it still sometimes fails to find the optimal solution even when I know one exists.


  1. 2 weeks ago
    Anonymous

    Yes. Set every value at random and take the best result.

    • 2 weeks ago
      Anonymous

      >Exhaustive search
      >Bogosort
      Within the same compute budget this gives worse results than RMHC, HCT, or a GA. I'm looking for an algorithm that performs better than those.

      • 2 weeks ago
        Anonymous

        >ask a question
        >get an answer
        >no not that answer, a real answer

        • 2 weeks ago
          Anonymous

          I have a better question than OP: why don't genetic algorithms make use of sexual dimorphism, if it was beneficial in nature?

          • 2 weeks ago
            Anonymous

            Or diploidy, for that matter. Currently the best-known explanation of why genetic algorithms work is the building-block hypothesis; anything else is just speculation. Feel free to come up with something better.

          • 2 weeks ago
            Anonymous

            because it's a faithful imitation of part of God's design

          • 2 weeks ago
            Anonymous

            >building block hypothesis
            what the frick is that?
            genetic algorithms work if a partial solution gets partial credit
            if a partial solution gets no credit (e.g. guessing a hashed password), genetic algorithms don't work
            if mixing partial solutions can't produce a better partial solution, genetic algorithms don't work
            killing off worse partial solutions speeds up the search
            mixing partial solutions solves multiple-constraint satisfaction, which requires that some attractor states not be mutually exclusive
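
            To see the difference concretely, here's a toy sketch in Python (the target string and both fitness functions are made up for illustration):

            ```python
            TARGET = "hello world"  # hypothetical answer the search must reconstruct

            def partial_credit(candidate: str) -> int:
                # One point per matching character: a partially correct guess
                # scores partially, so selection and crossover have a gradient.
                return sum(c == t for c, t in zip(candidate, TARGET))

            def no_credit(candidate: str) -> int:
                # All-or-nothing, like comparing password hashes: zero signal
                # until you hit the exact answer, so a GA degrades to random search.
                return 1 if candidate == TARGET else 0
            ```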

          • 2 weeks ago
            Anonymous

            The building-block hypothesis says that short, high-fitness "genes" chain together into longer ones, kind of like syllables form a word or words form a sentence.

          • 2 weeks ago
            Anonymous

            The end result of natural genetic evolution is much more than which genes are selected at fertilization

          • 2 weeks ago
            Anonymous

            >you might create an entire mini digital world in the process of solving your problem
            Beautiful, no wonder scientists hate it

          • 2 weeks ago
            Anonymous

            Shut the frick up, morons!

      • 2 weeks ago
        Anonymous

        >This gives worse results than RMHC, HCT or GA.
        No, it doesn't. It gives you the guaranteed global maximum.

        And *you cannot do better* for an arbitrary problem: if the objective is a black box, any input you skip could hide the maximum, so nothing beats trying all of them. Finding the global maximum is at least as hard as SAT (take the formula itself as the fitness function).

  2. 2 weeks ago
    Anonymous

    >Is there an algorithm that always converges to a global optimum

    Yes, of course: exhaustive search is guaranteed to find the global optimum.

    For an arbitrary problem it is also the only way to *guarantee* that you have found it. (Obviously.)

    If you want anything better you need to exploit the specific structure of your problem, e.g. for a continuous analytic function you can bound the global optimum from above through interval arithmetic.
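
    If the search space is small enough, a minimal brute-force sketch in Python might look like this (`fitness` and the bitstring encoding are placeholders for your actual problem):

    ```python
    from itertools import product

    def exhaustive_search(fitness, n_bits):
        # Enumerate all 2**n_bits candidate rulesets and keep the best.
        # Guaranteed to return the global optimum, but only feasible
        # when n_bits is small.
        best, best_score = None, float("-inf")
        for bits in product((0, 1), repeat=n_bits):
            score = fitness(bits)
            if score > best_score:
                best, best_score = bits, score
        return best, best_score
    ```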

  3. 2 weeks ago
    Anonymous

    Suppose we have a large array of integers where every element at an odd index is 5 and every element at an even index is 10, except that exactly one element is 0. Find the 0 in fewer than size / 2 steps on average.
    This is probably the simplest problem that shows why finding global minima without brute force is impossible in the general case: no element you inspect tells you anything about where the 0 is.
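
    To make the point concrete, here's that landscape sketched in Python (names are illustrative):

    ```python
    def make_landscape(size, zero_index):
        # 10 at even indices, 5 at odd indices, with a single 0 hidden somewhere.
        arr = [10 if i % 2 == 0 else 5 for i in range(size)]
        arr[zero_index] = 0
        return arr

    # Every non-zero element looks exactly like every other element of the
    # same parity, so no value you read tells you which way the 0 lies.
    # Any search, deterministic or randomized, must inspect size / 2
    # elements on average before finding it.
    ```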

    • 2 weeks ago
      Anonymous

      I know what you mean, and I was able to find the global minimum using a genetic algorithm. The problem is that it doesn't always find it, so I was wondering whether there is a way to guarantee that it always performs at its best, in other words without having to restart the algorithm over and over.

      • 2 weeks ago
        Anonymous

        So your algorithm is getting stuck in local minima.
        If you already know what the global minimum is, you can force random mutations once the algorithm gets stuck in a local minimum, which amounts to a soft restart.
        How much DNA you keep and how much you randomise is just the exploration vs. exploitation dilemma.
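
        A minimal sketch of that idea on a toy bitstring problem (the fitness function, rates, and patience threshold are all placeholders; crossover is elided for brevity):

        ```python
        import random

        def fitness(bits):  # stand-in objective: maximize the number of 1s
            return sum(bits)

        def mutate(bits, rate):
            return [b ^ (random.random() < rate) for b in bits]

        def evolve(n_bits=32, pop_size=50, generations=500,
                   patience=25, cataclysm_rate=0.5):
            pop = [[random.randint(0, 1) for _ in range(n_bits)]
                   for _ in range(pop_size)]
            best, stalled = max(pop, key=fitness), 0
            for _ in range(generations):
                # Tournament selection plus light mutation.
                pop = [mutate(max(random.sample(pop, 3), key=fitness), 0.02)
                       for _ in range(pop_size)]
                champ = max(pop, key=fitness)
                if fitness(champ) > fitness(best):
                    best, stalled = champ, 0
                else:
                    stalled += 1
                if stalled >= patience:
                    # Stuck: heavily mutate everyone except the champion,
                    # i.e. a soft restart that keeps the best DNA.
                    pop = [best] + [mutate(p, cataclysm_rate) for p in pop[1:]]
                    stalled = 0
            return best
        ```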

      • 2 weeks ago
        Anonymous

        Side note: the same problem exists in RL; it's usually "solved" by making some percentage of the actions random during training (epsilon-greedy):
        https://www.google.com/amp/s/www.geeksforgeeks.org/epsilon-greedy-algorithm-in-reinforcement-learning/amp/

        In practice you generally don't know what the global minimum is, so you insert a bit of randomness every couple of steps, which guarantees asymptotic convergence given infinite time.
        Then you checkpoint the best performers.
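
        The epsilon-greedy rule itself is tiny; a sketch, assuming you keep a table of estimated action values:

        ```python
        import random

        def epsilon_greedy(q_values, epsilon=0.1):
            # With probability epsilon take a random action (explore),
            # otherwise take the current best estimate (exploit).
            if random.random() < epsilon:
                return random.randrange(len(q_values))
            return max(range(len(q_values)), key=lambda a: q_values[a])
        ```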

  4. 2 weeks ago
    Anonymous

    maybe try looking into deep reinforcement learning? depending on what you're trying to solve, you might get results closer to optimal

  5. 2 weeks ago
    Anonymous

    all algorithms will find the global maximum of a fitness function given enough random restarts. A less computationally intense alternative is simulated annealing with a decaying temperature schedule.
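
    A minimal simulated-annealing sketch (maximizing; `neighbour`, `x0`, and the cooling schedule are placeholders you'd tune for your problem):

    ```python
    import math
    import random

    def anneal(fitness, neighbour, x0, t0=1.0, cooling=0.995, steps=10_000):
        # Always accept improvements; accept worse moves with probability
        # exp(delta / T). Early on (high T) the search roams freely,
        # escaping local optima; as T decays it settles into the best basin.
        x, fx = x0, fitness(x0)
        best, fbest = x, fx
        t = t0
        for _ in range(steps):
            y = neighbour(x)
            fy = fitness(y)
            if fy >= fx or random.random() < math.exp((fy - fx) / t):
                x, fx = y, fy
                if fx > fbest:
                    best, fbest = x, fx
            t *= cooling
        return best, fbest
    ```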
