Deterministic optimization methods such as the Nelder-Mead simplex, Interior Point Filter Line Search, the Method of Moving Asymptotes, or Sequential Quadratic Programming rely on the objective function’s convexity, separability, or derivative information to guarantee convergence to a local optimum. The functions I usually deal with, however, are piecewise non-differentiable, non-convex, and non-linear. These properties limit the applicability of such methods. Moreover, simulated observability functions can be expensive to evaluate and quantized.
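To illustrate how a non-differentiable, non-convex landscape undermines local convergence guarantees, here is a small sketch using SciPy’s Nelder-Mead on a made-up test function (the function and starting points are my own assumptions, not taken from the project):

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Piecewise non-differentiable (kink at 0) and non-convex (sinusoidal term)
    return abs(x[0]) + np.sin(3.0 * x[0])

# The result depends on the starting point: each run only converges
# to a nearby local optimum of the non-convex landscape.
r1 = minimize(f, x0=[0.5], method="Nelder-Mead")
r2 = minimize(f, x0=[4.0], method="Nelder-Mead")
print(r1.x, r1.fun)
print(r2.x, r2.fun)
```

Run from the two starting points, the method settles in two different valleys with different objective values, so without convexity nothing certifies that either result is globally optimal.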
However, what really increases the complexity of the optimization is black-box behavior: we call an objective function or a constraint function a black box if no closed-form expression is available. Properties such as differentiability or convexity therefore cannot be taken for granted, and operations other than function evaluations are hardly possible or too expensive. In this project I use a surrogate of the objective function, as shown in the next video, to compensate for the black-box behavior.
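A minimal sketch of the surrogate idea, under my own assumptions (a made-up quantized black box and a simple quadratic least-squares surrogate; the project’s actual surrogate model may differ):

```python
import numpy as np
from scipy.optimize import minimize

def expensive_black_box(x):
    # Stand-in for an expensive simulation: the optimizer sees no closed
    # form, and the output is quantized (rounded to two decimals).
    return round((x - 0.7) ** 2 + 0.1 * np.sin(20.0 * x), 2)

# Evaluate the black box at a small number of design points
X = np.linspace(0.0, 2.0, 8)
y = np.array([expensive_black_box(x) for x in X])

# Fit a cheap, smooth quadratic surrogate by least squares
surrogate = np.poly1d(np.polyfit(X, y, deg=2))

# Optimize the surrogate instead of the black box itself
res = minimize(lambda z: surrogate(z[0]), x0=[1.0], method="Nelder-Mead")
x_star = res.x[0]
print(x_star)
```

The surrogate is smooth and cheap, so standard methods apply to it directly; in practice one would re-sample the black box near the surrogate optimum and refit iteratively.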