skqulacs.qnn.solver module#
- class skqulacs.qnn.solver.Adam(callback: Optional[Callable[[List[float]], NoneType]] = None, tolerance: float = 0.0001, n_iter_no_change: Optional[int] = None)[source]#
Bases: Solver
- callback: Optional[Callable[[List[float]], None]] = None#
- n_iter_no_change: Optional[int] = None#
- run(cost_func: Callable[[List[float], ndarray[Any, dtype[float64]], ndarray[Any, dtype[float64]]], float], jac: Callable[[List[float], ndarray[Any, dtype[float64]], ndarray[Any, dtype[float64]]], ndarray[Any, dtype[float64]]], theta: List[float], x: ndarray[Any, dtype[float64]], y: ndarray[Any, dtype[float64]], maxiter: Optional[int]) Tuple[float, List[float]] [source]#
Run optimizer for given initial parameters and data.
- Parameters:
cost_func – Cost function to minimize; called as cost_func(theta, x, y) and returns a scalar loss.
jac – Gradient of cost_func with respect to theta; takes the same arguments and returns an array.
theta – Initial values of the parameters to optimize.
x – Input data used in optimization.
y – Target data used in optimization.
maxiter – Maximum number of iterations in optimization.
- Returns:
Loss and parameters after optimization.
- Return type:
(loss, theta_opt)
- tolerance: float = 0.0001#
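As a rough illustration of how a solver with these attributes might behave, here is a plain-NumPy Adam loop over a toy least-squares cost that matches the documented run signature. The update rule is the standard Adam algorithm; the early-stopping reading of tolerance and n_iter_no_change (stop after n_iter_no_change iterations without an improvement larger than tolerance) is an assumption about the library's semantics, and cost_func, jac, and the data are toy stand-ins, not library code.

```python
import numpy as np

# Toy cost matching the documented signature: cost_func(theta, x, y) -> float.
# The "model" is a linear map x @ theta and the loss is mean squared error.
def cost_func(theta, x, y):
    residual = x @ np.asarray(theta) - y
    return float(np.mean(residual ** 2))

def jac(theta, x, y):
    residual = x @ np.asarray(theta) - y
    return 2.0 * x.T @ residual / len(y)

def adam_run(cost_func, jac, theta, x, y, maxiter=200,
             lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8,
             tolerance=1e-4, n_iter_no_change=5):
    theta = np.asarray(theta, dtype=float)
    m = np.zeros_like(theta)  # first-moment (mean) estimate
    v = np.zeros_like(theta)  # second-moment (uncentered variance) estimate
    best, stall = np.inf, 0
    for t in range(1, maxiter + 1):
        g = jac(theta, x, y)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat = m / (1 - beta1 ** t)  # bias-corrected moments
        v_hat = v / (1 - beta2 ** t)
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
        loss = cost_func(theta, x, y)
        # Hypothetical early stopping: quit after n_iter_no_change
        # iterations without an improvement larger than tolerance.
        if best - loss > tolerance:
            best, stall = loss, 0
        else:
            stall += 1
            if stall >= n_iter_no_change:
                break
    return cost_func(theta, x, y), list(theta)

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 2))
y = x @ np.array([1.0, -2.0])
loss, theta_opt = adam_run(cost_func, jac, [0.0, 0.0], x, y)
```

The returned pair matches the documented (loss, theta_opt) convention.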
- class skqulacs.qnn.solver.Bfgs[source]#
Bases: Solver
- run(cost_func: Callable[[List[float], ndarray[Any, dtype[float64]], ndarray[Any, dtype[float64]]], float], jac: Callable[[List[float], ndarray[Any, dtype[float64]], ndarray[Any, dtype[float64]]], ndarray[Any, dtype[float64]]], theta: List[float], x: ndarray[Any, dtype[float64]], y: ndarray[Any, dtype[float64]], maxiter: Optional[int]) Tuple[float, List[float]] [source]#
Run optimizer for given initial parameters and data.
- Parameters:
cost_func – Cost function to minimize; called as cost_func(theta, x, y) and returns a scalar loss.
jac – Gradient of cost_func with respect to theta; takes the same arguments and returns an array.
theta – Initial values of the parameters to optimize.
x – Input data used in optimization.
y – Target data used in optimization.
maxiter – Maximum number of iterations in optimization.
- Returns:
Loss and parameters after optimization.
- Return type:
(loss, theta_opt)
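The run signature above can be exercised with a toy least-squares problem. The sketch below assumes Bfgs delegates to a quasi-Newton routine such as scipy.optimize.minimize(method="BFGS"); the cost function, gradient, and data are illustrative stand-ins, not library code.

```python
import numpy as np
from scipy.optimize import minimize

# Cost and gradient matching the documented (theta, x, y) calling convention.
def cost_func(theta, x, y):
    residual = x @ np.asarray(theta) - y
    return float(np.mean(residual ** 2))

def jac(theta, x, y):
    residual = x @ np.asarray(theta) - y
    return 2.0 * x.T @ residual / len(y)

def bfgs_run(cost_func, jac, theta, x, y, maxiter=100):
    # scipy passes the extra (x, y) arguments through `args`.
    result = minimize(
        cost_func, np.asarray(theta, dtype=float),
        args=(x, y), jac=jac, method="BFGS",
        options={"maxiter": maxiter},
    )
    return float(result.fun), list(result.x)

rng = np.random.default_rng(1)
x = rng.normal(size=(40, 3))
true_theta = np.array([0.5, -1.0, 2.0])
y = x @ true_theta
loss, theta_opt = bfgs_run(cost_func, jac, [0.0, 0.0, 0.0], x, y)
```

On this noiseless quadratic problem BFGS converges in a handful of iterations.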
- class skqulacs.qnn.solver.GradientDescent[source]#
Bases: Solver
- run(cost_func: Callable[[List[float], ndarray[Any, dtype[float64]], ndarray[Any, dtype[float64]]], float], jac: Callable[[List[float], ndarray[Any, dtype[float64]], ndarray[Any, dtype[float64]]], ndarray[Any, dtype[float64]]], theta: List[float], x: ndarray[Any, dtype[float64]], y: ndarray[Any, dtype[float64]], lr: float = 0.1) Tuple[float, List[float]] [source]#
Run optimizer for given initial parameters and data.
- Parameters:
cost_func – Cost function to minimize; called as cost_func(theta, x, y) and returns a scalar loss.
jac – Gradient of cost_func with respect to theta; takes the same arguments and returns an array.
theta – Initial values of the parameters to optimize.
x – Input data used in optimization.
y – Target data used in optimization.
lr – Learning rate (step size) for each gradient step; defaults to 0.1.
- Returns:
Loss and parameters after optimization.
- Return type:
(loss, theta_opt)
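A minimal sketch of a fixed-step gradient-descent run under this signature. The n_steps count is hypothetical (the documented signature exposes only lr), and the cost function and data are toy stand-ins, not library code.

```python
import numpy as np

# Least-squares cost and gradient in the documented (theta, x, y) convention.
def cost_func(theta, x, y):
    residual = x @ np.asarray(theta) - y
    return float(np.mean(residual ** 2))

def jac(theta, x, y):
    residual = x @ np.asarray(theta) - y
    return 2.0 * x.T @ residual / len(y)

def gradient_descent_run(cost_func, jac, theta, x, y, lr=0.1, n_steps=500):
    theta = np.asarray(theta, dtype=float)
    for _ in range(n_steps):
        theta = theta - lr * jac(theta, x, y)  # step against the gradient
    return cost_func(theta, x, y), list(theta)

rng = np.random.default_rng(2)
x = rng.normal(size=(60, 2))
true_theta = np.array([3.0, -1.0])
y = x @ true_theta
loss, theta_opt = gradient_descent_run(cost_func, jac, [0.0, 0.0], x, y, lr=0.1)
```

With this well-conditioned quadratic and lr=0.1, the iterates contract toward the optimum at a fixed rate per step.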
- class skqulacs.qnn.solver.NelderMead[source]#
Bases: Solver
- run(cost_func: Callable[[List[float], ndarray[Any, dtype[float64]], ndarray[Any, dtype[float64]]], float], jac: Callable[[List[float], ndarray[Any, dtype[float64]], ndarray[Any, dtype[float64]]], ndarray[Any, dtype[float64]]], theta: List[float], x: ndarray[Any, dtype[float64]], y: ndarray[Any, dtype[float64]], maxiter: Optional[int]) Tuple[float, List[float]] [source]#
Run optimizer for given initial parameters and data.
- Parameters:
cost_func – Cost function to minimize; called as cost_func(theta, x, y) and returns a scalar loss.
jac – Gradient of cost_func with respect to theta; takes the same arguments and returns an array.
theta – Initial values of the parameters to optimize.
x – Input data used in optimization.
y – Target data used in optimization.
maxiter – Maximum number of iterations in optimization.
- Returns:
Loss and parameters after optimization.
- Return type:
(loss, theta_opt)
- class skqulacs.qnn.solver.Solver[source]#
Bases: ABC
- abstract run(cost_func: Callable[[List[float], ndarray[Any, dtype[float64]], ndarray[Any, dtype[float64]]], float], jac: Callable[[List[float], ndarray[Any, dtype[float64]], ndarray[Any, dtype[float64]]], ndarray[Any, dtype[float64]]], theta: List[float], x: ndarray[Any, dtype[float64]], y: ndarray[Any, dtype[float64]], maxiter: Optional[int]) Tuple[float, List[float]] [source]#
Run optimizer for given initial parameters and data.
- Parameters:
cost_func – Cost function to minimize; called as cost_func(theta, x, y) and returns a scalar loss.
jac – Gradient of cost_func with respect to theta; takes the same arguments and returns an array.
theta – Initial values of the parameters to optimize.
x – Input data used in optimization.
y – Target data used in optimization.
maxiter – Maximum number of iterations in optimization.
- Returns:
Loss and parameters after optimization.
- Return type:
(loss, theta_opt)
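To make the contract concrete, here is an illustrative re-statement of the interface with Python's abc module, together with a toy subclass that takes a single gradient step. It mirrors the documented signature but is not the library source; OneStepSolver and the helper cost/gradient functions are hypothetical.

```python
from abc import ABC, abstractmethod
from typing import Callable, List, Optional, Tuple

import numpy as np

class Solver(ABC):
    """The interface every solver implements, as documented above."""

    @abstractmethod
    def run(
        self,
        cost_func: Callable[[List[float], np.ndarray, np.ndarray], float],
        jac: Callable[[List[float], np.ndarray, np.ndarray], np.ndarray],
        theta: List[float],
        x: np.ndarray,
        y: np.ndarray,
        maxiter: Optional[int],
    ) -> Tuple[float, List[float]]:
        """Return (loss, theta_opt) after optimization."""

class OneStepSolver(Solver):
    """Toy implementation: take a single fixed-size gradient step."""

    def run(self, cost_func, jac, theta, x, y, maxiter=None):
        theta_opt = np.asarray(theta, dtype=float) - 0.1 * jac(theta, x, y)
        return cost_func(list(theta_opt), x, y), list(theta_opt)

# Exercise the interface on a tiny least-squares problem.
def mse(theta, x, y):
    residual = x @ np.asarray(theta) - y
    return float(np.mean(residual ** 2))

def mse_jac(theta, x, y):
    residual = x @ np.asarray(theta) - y
    return 2.0 * x.T @ residual / len(y)

x = np.eye(2)
y = np.array([1.0, -1.0])
loss, theta_opt = OneStepSolver().run(mse, mse_jac, [0.0, 0.0], x, y)
```

Any subclass only has to honor this (loss, theta_opt) return convention; the concrete solvers above differ solely in how they drive theta between calls to cost_func and jac.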