## Kolmogorov complexity, optimization and hardness (2006)


Citations: 2 (2 self)

### BibTeX

    @TECHREPORT{Borenstein06kolmogorovcomplexity,
      author      = {Yossi Borenstein and Riccardo Poli},
      title       = {Kolmogorov complexity, optimization and hardness},
      institution = {},
      year        = {2006}
    }


### Abstract

The Kolmogorov complexity (KC) of a string is defined as the length of the shortest program that prints that string and halts. This measure of complexity is often used in optimization as an indicator of expected function difficulty, yet known counterexamples exist. This paper investigates the applicability of KC as an estimator of problem difficulty for optimization in the black-box scenario. In particular, we address the known counterexamples (e.g., pseudorandom functions and the needle-in-a-haystack, NIAH) and explore the connection between KC and the no-free-lunch theorems (NFLTs). We conclude that high KC implies hardness; however, while easy fitness functions have low KC, the reverse is not necessarily true.
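KC itself is uncomputable, but the abstract's idea can be illustrated with a standard proxy: the output length of any lossless compressor is an upper bound on KC (up to an additive constant). The sketch below is our own illustration, not from the paper; the function names are made up.

```python
import os
import zlib

# Any lossless compressor gives a computable upper bound on KC:
# K(x) <= len(compress(x)) + c.  Illustrative proxy only.
def kc_upper_bound(data: bytes) -> int:
    return len(zlib.compress(data, 9))

# Fitness table of a highly regular, "easy" function (popcounts of
# all 8-bit inputs, OneMax-style) vs. an incompressible random table.
structured = bytes(bin(x).count("1") for x in range(256))
random_table = os.urandom(256)

print(kc_upper_bound(structured))    # noticeably below 256: regular
print(kc_upper_bound(random_table))  # near 256: no short description
```

The regular table compresses well, matching the paper's claim that easy fitness functions have low KC; the random table does not, which is the sense in which high KC indicates hardness.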

### Citations

652 | No Free Lunch Theorems for Optimization - Wolpert, Macready - 1997 |

172 | Metaheuristics in combinatorial optimization: Overview and conceptual comparison - Blum |

Citation Context: ...pected to capture the inherent difficulty of the function. This approach to complexity seems to be particularly interesting to the analysis of general purpose algorithms (also known as metaheuristics [1], black-box algorithms or randomized search heuristics [20]). Metaheuristics are often used when either the problem is not well defined or when there is no sufficient knowledge (or resources) to const... |

59 | A tutorial introduction to the minimum description length principle. In: Advances in Minimum Description Length: Theory and Applications - Grünwald - 2005 |

41 | Upper and lower bounds for randomized search heuristics in black-box optimization. Theory of Computing Systems, 2005 (accepted for publication) - Droste, Jansen, et al. |

Citation Context: ...knowledge. They sample possible solutions, compute their objective values and accordingly sample additional solutions. This continues until some stopping criterion (e.g., an optimum was sampled) is met [5], [20], [21]. Even though it is not exactly the case, it is useful to think of a black-box algorithm in the following way: 1) Sample random points 2) (Estimate accordingly a model for the function f) ... |
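The black-box loop quoted in this context can be sketched as follows. The function and parameter names are our own illustration, not from the cited paper.

```python
import random

# Minimal sketch of a black-box search: sample points, compute their
# objective values, keep the best, and stop once an optimum is sampled
# or the budget is exhausted.  All names here are illustrative.
def black_box_search(f, domain, optimum, budget=10_000, seed=0):
    rng = random.Random(seed)
    best_x, best_val = None, float("-inf")
    for _ in range(budget):
        x = rng.choice(domain)        # 1) sample a point
        val = f(x)                    # compute its objective value
        if val > best_val:            # 2) update the running best/"model"
            best_x, best_val = x, val
        if best_val >= optimum:       # stopping criterion: optimum sampled
            break
    return best_x, best_val

# OneMax over 8-bit strings: maximize the number of one-bits (optimum 8).
x, v = black_box_search(lambda n: bin(n).count("1"),
                        list(range(256)), optimum=8)
```

The algorithm never inspects `f`'s definition, only sampled values, which is exactly the black-box scenario in which the paper evaluates KC as a difficulty measure.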

39 | The no free lunch and problem description length - Schumacher, Vose, et al. - 2001 |

32 | Kolmogorov’s structure function and model selection - Vereshchagin, Vitányi |

30 | Optimization with randomized search heuristics: The (A)NFL theorem, realistic scenarios, and difficult functions - Droste, Jansen, et al. - 2000 |

Citation Context: ...yzing them. Indeed, KC was extensively used in this scenario, particularly in the context of the no-free-lunch theorems (NFLTs). In their proof for an almost no-free-lunch theorem, Droste, Jansen and Wegener [4] used the fact that the KC of a repetitive sequence is minimal in order to modify an existing function without (almost) changing its KC, and therefore its expected difficulty. Schumacher, Vose and W... |
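The property this context relies on, that a repetitive sequence has minimal KC, is easy to observe with a compressor as a rough proxy (a true universal machine would need only O(log n) bits for such a sequence). This sketch is our own, with made-up names.

```python
import os
import zlib

# A repetitive sequence compresses to a tiny fraction of its length,
# so extending or slightly modifying it barely changes its KC upper
# bound; a random sequence of the same length stays incompressible.
def kc_proxy(data: bytes) -> int:
    return len(zlib.compress(data, 9))

n = 10_000
repetitive = b"01" * (n // 2)
random_seq = os.urandom(n)

print(kc_proxy(repetitive))  # tiny compared to n
print(kc_proxy(random_seq))  # close to n
```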

30 | Instance complexity - Orponen, Ko, et al. - 1994 |

23 | Handbook of Applied Optimization - Pardalos, Resende (editors) - 2002 |

17 | Simple explanation of the no-free-lunch theorem and its implications - Ho, Pepyne - 2002 |

13 | Optimization is easy and learning is hard in the typical function - English - 2000 |

9 | Information Landscapes - Borenstein, Poli - 2005 |

Citation Context: ...rence. The MDL or KC is meaningless in this case. (We refer to any statistical estimation which does not use a priori knowledge regarding the position of the global optimum.) Interestingly, in [2] we explicitly associated flat landscapes with the random decisions an algorithm makes. We suggested a different way to measure the KC of a fitness function based on the way an algorithm interprets it... |

7 | Practical implications of new results in conservation of optimizer performance - English |

6 | Algorithmic Information Theory - Grünwald, Vitányi - 2008 |

6 | GA-hardness revisited - Guo, Hsu - 2003 |

6 | Two broad classes of functions for which a no free lunch result does not hold - Streeter - 2003 |

5 | Shannon information and Kolmogorov complexity - Grünwald, Vitányi |

4 | Kolmogorov complexity and computational complexity. In Complexity of computations and proofs - Fortnow - 2004 |

4 | Towards a theory of randomized search heuristics - Wegener - 2003 |

3 | Foundations of cryptography: a primer. Foundations and Trends - Goldreich - 2005 |