opt

Generic implementations of optimization routines.

Generic implementations of optimization algorithms, such as conjugate gradient and line search, that can be reused between domain-specific modules. In the future, this module may be replaced by the Operator Discretization Library (ODL) solvers library.

tike.opt.batch_indicies(n, m=1, use_random=False)[source]

Return list of indices [0…n) as m groups.

>>> batch_indicies(10, 3)
[array([2, 4, 7, 3]), array([1, 8, 9]), array([6, 5, 0])]
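A minimal sketch of how such index batching can be implemented (not necessarily the library's exact code; here the `use_random` flag shuffles the indices before splitting, which would explain the unordered groups in the example above):

```python
import numpy as np

def batch_indicies(n, m=1, use_random=False):
    """Return the indices [0...n) split into m groups."""
    order = np.arange(n)
    if use_random:
        # Shuffle so each batch draws indices from across the whole range
        order = np.random.permutation(n)
    # array_split allows groups of unequal size when m does not divide n
    return np.array_split(order, m)

print(batch_indicies(10, 3))
# [array([0, 1, 2, 3]), array([4, 5, 6]), array([7, 8, 9])]
```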
tike.opt.conjugate_gradient(array_module, x, cost_function, grad, dir_multi=<function dir_single>, update_multi=<function update_single>, num_iter=1, step_length=1, num_search=None, cost=None)[source]

Use conjugate gradient to estimate x.

Parameters
  • array_module (module) – The Python module that will provide array operations.

  • x (array_like) – The object to be recovered.

  • cost_function (func(x) -> float) – The function being minimized to recover x.

  • grad (func(x) -> array_like) – The gradient of cost_function.

  • dir_multi (func(x) -> list_of_array) – The search direction on all GPUs.

  • update_multi (func(x) -> list_of_array) – The updated subimages on all GPUs.

  • num_iter (int) – The number of steps to take.

  • num_search (int) – The number of iterations during which to perform line search.

  • step_length (float) – The initial multiplier of the search direction.

  • cost (float) – The current loss function estimate.
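The overall loop can be sketched as follows. This is a simplified, single-device version that omits the `dir_multi`/`update_multi` multi-GPU hooks and the backtracking line search, and the function name `conjugate_gradient_sketch` is illustrative, not the library's implementation:

```python
import numpy as np

def conjugate_gradient_sketch(x, cost_function, grad, num_iter=1, step_length=1):
    """Simplified single-device nonlinear conjugate gradient loop."""
    dir_ = None
    grad1 = None
    for _ in range(num_iter):
        grad0, grad1 = grad1, grad(x)
        if dir_ is None:
            dir_ = -grad1  # first step: plain steepest descent
        else:
            # Dai-Yuan beta combines the new gradient with the old direction
            beta = np.sum(grad1 * grad1) / np.sum(dir_ * (grad1 - grad0))
            dir_ = -grad1 + beta * dir_
        # The real routine uses a backtracking line search to pick step_length
        x = x + step_length * dir_
    return x
```

For a simple quadratic cost this sketch converges in a few iterations; the library version additionally distributes `x` across GPUs and tracks the cost between calls via the `cost` argument.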

tike.opt.dir_single(x)[source]
tike.opt.direction_dy(xp, grad0, grad1, dir_)[source]

Return the Dai-Yuan search direction.

Parameters
  • grad0 (array_like) – The gradient from the previous step.

  • grad1 (array_like) – The gradient from this step.

  • dir_ (array_like) – The previous search direction.
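The Dai-Yuan update computes beta = ||grad1||^2 / (dir_ . (grad1 - grad0)) and returns -grad1 + beta * dir_. A sketch (the function name is illustrative; the library version also handles complex-valued arrays via `xp`):

```python
import numpy as np

def direction_dy_sketch(grad0, grad1, dir_):
    """Dai-Yuan conjugate direction: -grad1 + beta * previous direction."""
    # beta_DY = ||g1||^2 / (d . (g1 - g0))
    beta = np.linalg.norm(grad1) ** 2 / np.sum(dir_ * (grad1 - grad0))
    return -grad1 + beta * dir_
```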

tike.opt.get_batch(x, b, n)[source]

Returns x[:, b[n]]; for use with map().

tike.opt.line_search(f, x, d, step_length, step_shrink, cost=None)[source]

Return a new step_length using a backtracking line search.

Parameters
  • f (function(x)) – The function being optimized.

  • x (vector) – The current position.

  • d (vector) – The search direction.

  • step_length (float) – The initial step_length.

  • step_shrink (float) – Decrease the step_length by this fraction at each iteration.

  • cost (float) – f(x) if it is already known.

Returns

  • step_length (float) – The optimal step length along d.

  • cost (float) – The new value of the cost function after stepping along d.

  • x (vector) – The new value of x after stepping along d.

References

https://en.wikipedia.org/wiki/Backtracking_line_search
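The backtracking idea can be sketched as follows: repeatedly shrink the step until stepping along `d` actually decreases the cost. This is a simplified illustration (the `max_iter` guard and the exact acceptance test are assumptions, not the library's implementation):

```python
import numpy as np

def line_search_sketch(f, x, d, step_length=1.0, step_shrink=0.5, cost=None, max_iter=32):
    """Shrink step_length until stepping along d decreases the cost."""
    if cost is None:
        cost = f(x)
    for _ in range(max_iter):
        x_new = x + step_length * d
        cost_new = f(x_new)
        if cost_new < cost:
            # Found a step that decreases the cost function
            return step_length, cost_new, x_new
        step_length *= step_shrink  # backtrack: try a smaller step
    # No decrease found; stay at the current point
    return step_length, cost, x
```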

tike.opt.put_batch(y, x, b, n)[source]

Assigns y into x[:, b[n]]; for use with map().
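These two helpers slice a batch of columns out of `x` and write updated values back; their one-batch signatures let them be applied across batches with map(). A sketch based on the docstrings above:

```python
import numpy as np

def get_batch(x, b, n):
    """Return the columns of x selected by the n-th batch of indices."""
    return x[:, b[n]]

def put_batch(y, x, b, n):
    """Write the updated batch y back into the matching columns of x."""
    x[:, b[n]] = y

x = np.arange(12).reshape(2, 6)
batches = [np.array([0, 2, 4]), np.array([1, 3, 5])]
# Extract batch 0, modify it, and write it back in place
y = get_batch(x, batches, 0)
put_batch(y * 10, x, batches, 0)
```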

tike.opt.update_single(x, step_length, d)[source]