Information-based complexity (IBC) is the branch of computational complexity that studies problems for which the information is *partial*, *contaminated*, and *priced*.

To motivate these assumptions about information, consider the problem of numerically computing an integral. Here, the integrands are functions defined over the d-dimensional unit cube. Since a digital computer can store only a finite set of numbers, each function must be replaced by such a finite set — for example, by evaluating the function at a finite number of points. Therefore, we have only *partial* information about the function. Furthermore, the function values may be *contaminated* by round-off error. Finally, evaluating the function can be expensive, and so computing these values has a *price*.
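The point about partial information can be made concrete with a small sketch (an illustrative example, not from the source): the algorithm below sees only n evaluations of the integrand, never the function itself.

```python
def midpoint_rule(f, n):
    """Approximate the integral of f over [0, 1] using only n point
    evaluations of f -- the algorithm's *partial* information about f."""
    return sum(f((i + 0.5) / n) for i in range(n)) / n

# Example: integrate f(x) = x**2 over [0, 1]; the exact value is 1/3.
# Any other integrand agreeing with f at these n points would produce
# the same answer, which is exactly what "partial information" means.
approx = midpoint_rule(lambda x: x * x, n=1000)
```

In an IBC setting one would additionally charge a price per evaluation and allow each returned value to be contaminated by round-off noise.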

The complexity depends on what *setting* we use. A setting depends on

- how the sample points are chosen, i.e., either
  - deterministically, or
  - randomly
- how the error and cost are measured, typical choices being
  - worst case, and
  - average case with respect to a probability measure
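The first distinction above can be sketched in code (an assumed example integrand, cos(x) on [0, 1], not taken from the source): a deterministic rule fixes its sample points in advance, while a randomized method draws them at random, so its error is a random variable assessed on average rather than in the worst case.

```python
import math
import random

def deterministic_quadrature(f, n):
    # Sample points chosen deterministically: midpoints of a uniform grid.
    # The error can be bounded in the worst case over a class of integrands.
    return sum(f((i + 0.5) / n) for i in range(n)) / n

def monte_carlo(f, n, seed=0):
    # Sample points chosen randomly: the error is a random variable,
    # so it is judged on average (or in probability), not worst case.
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

exact = math.sin(1.0)  # the true value of the integral of cos over [0, 1]
det = deterministic_quadrature(math.cos, 1000)
mc = monte_carlo(math.cos, 1000)
```

Both methods consume the same kind of information (n function values); the setting differs only in how the points are chosen and how the error is measured.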

For any given setting, we seek to answer the following questions:

- What is the complexity of computing an approximation satisfying a given error threshold?
- What is the optimal (or nearly-optimal) algorithm for computing such an approximation?

The integration problem is only an example, although it is highly illustrative. Generally, IBC studies continuous mathematical problems. The theory is developed over abstract linear spaces. Applications have included differential and integral equations, continuous optimization, path integrals, high-dimensional integration and approximation, and low-discrepancy sequences.

Many of the problems that IBC has studied suffer from the "curse of dimensionality" in the worst-case deterministic setting: the complexity grows exponentially in the dimension of the problem. A major theme of IBC is trying to vanquish this curse by switching to an average case or randomized setting.
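The contrast behind the curse can be sketched as follows (a hedged illustration with a hypothetical integrand, not a result from the source): a tensor-product grid rule needs m**d points in dimension d, while plain Monte Carlo uses a sample budget independent of d, converging at the dimension-free rate n**(-1/2).

```python
import random

def grid_points_needed(points_per_axis, d):
    # Cost of a tensor-product grid rule: exponential in the dimension d.
    return points_per_axis ** d

def monte_carlo_integral(f, d, n, seed=0):
    # Plain Monte Carlo over the d-dimensional unit cube: the number of
    # samples n is chosen independently of d.
    rng = random.Random(seed)
    return sum(f([rng.random() for _ in range(d)]) for _ in range(n)) / n

d = 10
grid_cost = grid_points_needed(10, d)  # 10**10 points for 10 per axis

# Hypothetical integrand: f(x) = x_1 + ... + x_d; its exact integral
# over the unit cube is d / 2.
approx = monte_carlo_integral(sum, d, n=20000)
```

The grid cost explodes with d, while the Monte Carlo estimate above stays close to the exact value d / 2 with a modest, dimension-independent sample size, which is the sense in which randomization can break the curse.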