ForceBalance API 1.3
Automated optimization of force fields and empirical potentials

Public Member Functions
def __init__(self, options, Objective, FF)
    Create an Optimizer object.
def recover(self)
def set_goodstep(self, val)
    Mark in each target that the previous optimization step was good or bad.
def save_mvals_to_input(self, vals, priors=None, jobtype=None)
    Write a new input file (s_save.in) containing the current mathematical parameters.
def Run(self)
    Call the appropriate optimizer.
def adjh(self, trust)
def MainOptimizer(self, b_BFGS=0)
    The main ForceBalance adaptive trust-radius pseudo-Newton optimizer.
def step(self, xk, data, trust)
    Computes the next step in the parameter space.
def NewtonRaphson(self)
    Optimize the force field parameters using the Newton-Raphson method.
def BFGS(self)
    Optimize the force field parameters using the BFGS method; currently the recommended choice.
def ScipyOptimizer(self, Algorithm="None")
    Driver for SciPy optimizations.
def GeneticAlgorithm(self)
    Genetic algorithm, under development.
def Simplex(self)
    Use SciPy's built-in simplex algorithm to optimize the parameters.
def Powell(self)
    Use SciPy's built-in Powell direction-set algorithm to optimize the parameters.
def Anneal(self)
    Use SciPy's built-in simulated annealing algorithm to optimize the parameters.
def ConjugateGradient(self)
    Use SciPy's built-in conjugate gradient algorithm to optimize the parameters.
def Scipy_BFGS(self)
    Use SciPy's built-in BFGS algorithm to optimize the parameters.
def BasinHopping(self)
    Use SciPy's built-in basin hopping algorithm to optimize the parameters.
def TruncatedNewton(self)
    Use SciPy's built-in truncated Newton (fmin_tnc) algorithm to optimize the parameters.
def NewtonCG(self)
    Use SciPy's built-in Newton-CG (fmin_ncg) algorithm to optimize the parameters.
def Scan_Values(self, MathPhys=1)
    Scan through parameter values.
def ScanMVals(self)
    Scan through the mathematical parameter space.
def ScanPVals(self)
    Scan through the physical parameter space.
def SinglePoint(self)
    A single-point objective function computation.
def Gradient(self)
    A single-point gradient computation.
def Hessian(self)
    A single-point Hessian computation.
def Precondition(self)
    An experimental method to determine the parameter scale factors that result in the best-conditioned Hessian.
def FDCheckG(self)
    Finite-difference checker for the objective function gradient.
def FDCheckH(self)
    Finite-difference checker for the objective function Hessian.
def readchk(self)
    Read the checkpoint file for the main optimizer.
def writechk(self)
    Write the checkpoint file for the main optimizer.
Public Attributes

OptTab
    A list of all the things we can ask the optimizer to do.
mvals_bak
    The root directory.
failmsg
    Print a special message on failure.
goodstep
    Specify whether the previous optimization step was good or bad.
iterinit
    The initial iteration number (nonzero if we restart a previous run).
iteration
    The current iteration number.
Objective
    The objective function; also resets the global variable.
bhyp
    Whether the penalty function is hyperbolic.
FF
    The force field itself.
uncert
    Target types which introduce uncertainty into the objective function.
bakdir
resdir
excision
    The indices to be excluded from the Hessian update.
np
    Number of parameters.
mvals0
    The original parameter values.
read_mvals
h
chk
H
dx
Val
Grad
Hess
Penalty
iter
prev_bad
xk_prev
x_prev
x_best
Optimizer class.
Contains several methods for numerical optimization.
For various reasons, the optimizer depends on the force field and fitting targets (i.e., we cannot treat it as a fully independent numerical optimizer). The dependency is rather weak, which suggests that I can remove it someday.
Definition at line 46 of file optimizer.py.
def src.optimizer.Optimizer.__init__(self, options, Objective, FF)
Create an Optimizer object.
The optimizer depends on both the FF and the fitting targets, so there is a chain of dependencies: FF --> FitSim --> Optimizer, and FF --> Optimizer.
Here's what we do:
Definition at line 59 of file optimizer.py.
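For orientation, here is a minimal sketch of that dependency chain in action, assuming the standard forcebalance package layout; the input file name is a placeholder.

    from forcebalance.parser import parse_inputs
    from forcebalance.forcefield import FF
    from forcebalance.objective import Objective
    from forcebalance.optimizer import Optimizer

    # Parse the input file into general options and per-target options.
    options, tgt_opts = parse_inputs('input.in')           # placeholder file name
    forcefield = FF(options)                               # the force field
    objective = Objective(options, tgt_opts, forcefield)   # FF --> FitSim
    optimizer = Optimizer(options, objective, forcefield)  # FF --> Optimizer
    optimizer.Run()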
def src.optimizer.Optimizer.adjh(self, trust)
Definition at line 382 of file optimizer.py.
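No docstring is given here. Judging from the optimizer options listed under mvals_bak ("When the trust radius gets smaller, the finite difference step might need to get smaller"), adjh plausibly couples the finite-difference step size to the trust radius. A minimal sketch of that coupling, with illustrative names h0 and fd_factor (not necessarily the attributes used in optimizer.py):

    def adjust_fd_step(trust, h0=0.01, fd_factor=0.1):
        """Hypothetical analogue of adjh: cap the finite-difference step at h0,
        and shrink it in proportion to the trust radius when steps get small."""
        return min(fd_factor * trust, h0)

    print(adjust_fd_step(0.5))    # 0.01   (capped by h0)
    print(adjust_fd_step(0.001))  # 0.0001 (shrinks with the trust radius)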
def src.optimizer.Optimizer.Anneal(self)
Use SciPy's built-in simulated annealing algorithm to optimize the parameters.
Definition at line 1208 of file optimizer.py.
def src.optimizer.Optimizer.BasinHopping(self)
Use SciPy's built-in basin hopping algorithm to optimize the parameters.
Definition at line 1223 of file optimizer.py.
def src.optimizer.Optimizer.BFGS(self)
Optimize the force field parameters using the BFGS method; currently the recommended choice.
Definition at line 956 of file optimizer.py.
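For reference, the textbook BFGS update of a Hessian approximation is sketched below; this illustrates the general technique, not the exact code in optimizer.py.

    import numpy as np

    def bfgs_update(B, s, y):
        """One BFGS update of the Hessian approximation B, given the parameter
        displacement s = x_new - x_old and gradient change y = g_new - g_old."""
        Bs = B @ s
        return B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)

    # For a quadratic with true Hessian A, the gradient change is y = A @ s,
    # and the updated B satisfies the secant condition B @ s == y.
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    s = np.array([1.0, 0.5])
    y = A @ s
    print(np.allclose(bfgs_update(np.eye(2), s, y) @ s, y))   # True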
def src.optimizer.Optimizer.ConjugateGradient(self)
Use SciPy's built-in conjugate gradient algorithm to optimize the parameters.
Definition at line 1213 of file optimizer.py.
def src.optimizer.Optimizer.FDCheckG(self)
Finite-difference checker for the objective function gradient.
For each element in the gradient, use a five-point finite difference stencil to compute a finite-difference derivative, and compare it to the analytic result.
Definition at line 1530 of file optimizer.py.
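A minimal sketch of such a check, using the standard five-point central-difference stencil; the helper name and test function are illustrative.

    import numpy as np

    def fd_gradient_5pt(f, x, i, h=1e-4):
        """Five-point central-difference estimate of df/dx_i at x."""
        e = np.zeros_like(x)
        e[i] = 1.0
        return (f(x - 2*h*e) - 8*f(x - h*e) + 8*f(x + h*e) - f(x + 2*h*e)) / (12*h)

    # Compare against a known analytic gradient: f(x) = x0**2 * x1, df/dx0 = 2*x0*x1.
    f = lambda x: x[0]**2 * x[1]
    x = np.array([1.5, -2.0])
    print(fd_gradient_5pt(f, x, 0), 2*x[0]*x[1])   # both ~ -6.0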
def src.optimizer.Optimizer.FDCheckH(self)
Finite-difference checker for the objective function Hessian.
For each element in the Hessian, use a five-point stencil in both parameter indices to compute a finite-difference derivative, and compare it to the analytic result.
This is meant to be a foolproof checker, so it is pretty slow. We could write a faster checker if we assumed we had accurate first derivatives, but it's better to not make that assumption.
The second derivative is computed by double-wrapping the objective function via the 'wrap2' function.
Definition at line 1562 of file optimizer.py.
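A compact sketch of the double-wrapping idea: wrap a finite-difference first derivative inside another one. For brevity this uses three-point central differences, whereas the actual checker uses five-point stencils; 'fd_second_derivative' is a hypothetical helper.

    import numpy as np

    def fd_second_derivative(f, x, i, j, h=1e-3):
        """Estimate d2f/(dx_i dx_j) by differencing a differenced function."""
        def d_i(xp):                       # inner wrap: df/dx_i by central difference
            e = np.zeros_like(xp)
            e[i] = 1.0
            return (f(xp + h*e) - f(xp - h*e)) / (2*h)
        e = np.zeros_like(x)               # outer wrap: d/dx_j of the inner result
        e[j] = 1.0
        return (d_i(x + h*e) - d_i(x - h*e)) / (2*h)

    f = lambda x: x[0]**2 * x[1]           # d2f/(dx0 dx1) = 2*x0
    x = np.array([1.5, -2.0])
    print(fd_second_derivative(f, x, 0, 1))   # ~ 3.0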
def src.optimizer.Optimizer.GeneticAlgorithm(self)
Genetic algorithm, under development.
It currently works, but a genetic algorithm is more of a concept than a fixed recipe; i.e., there is no single way to implement it.
Definition at line 1105 of file optimizer.py.
def src.optimizer.Optimizer.Gradient(self)
A single-point gradient computation.
Definition at line 1322 of file optimizer.py.
def src.optimizer.Optimizer.Hessian(self)
A single-point Hessian computation.
Definition at line 1330 of file optimizer.py.
def src.optimizer.Optimizer.MainOptimizer(self, b_BFGS=0)
The main ForceBalance adaptive trust-radius pseudo-Newton optimizer.
Tried and true in many situations. :)
Usually this function is called with the BFGS or NewtonRaphson method. The NewtonRaphson method is consistently the best method I have, because I always provide at least an approximate Hessian to the objective function. The BFGS method works well, but if gradients are cheap the Scipy_BFGS method also works nicely.

The method adaptively changes the step size. If the step is sufficiently good (i.e., the objective function goes down by a large fraction of the predicted decrease), then the step size is increased; if the step is bad, then it rejects the step and tries again. The optimization is terminated after either a function value or step size tolerance is reached.

Parameters:
    [in] b_BFGS  Switch to use BFGS (True) or Newton-Raphson (False)
Definition at line 416 of file optimizer.py.
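A stripped-down sketch of such an adaptive trust-radius pseudo-Newton loop; the adjustment rules, thresholds, and names (adapt_fac, the quality cutoff) are illustrative, not the exact logic of MainOptimizer.

    import numpy as np

    def trust_newton(obj_grad_hess, x0, trust=0.1, adapt_fac=0.25,
                     maxiter=50, tol=1e-6):
        x = np.array(x0, dtype=float)
        f, g, H = obj_grad_hess(x)
        for _ in range(maxiter):
            dx = np.linalg.solve(H, -g)              # (pseudo-)Newton direction
            if np.linalg.norm(dx) > trust:           # clip the step to the trust radius
                dx *= trust / np.linalg.norm(dx)
            if np.linalg.norm(dx) < tol:             # step size convergence
                break
            f_new, g_new, H_new = obj_grad_hess(x + dx)
            pred = g @ dx + 0.5 * dx @ H @ dx        # predicted change in f
            quality = (f_new - f) / pred if pred != 0 else 0.0
            if f_new < f:                            # good step: accept, maybe grow radius
                x, f, g, H = x + dx, f_new, g_new, H_new
                if quality > 0.75:
                    trust *= 1.0 + adapt_fac
            else:                                    # bad step: reject and shrink radius
                trust *= 1.0 - adapt_fac
        return x

    # Minimize a simple quadratic; converges to [0, 0].
    A = np.diag([2.0, 10.0])
    print(trust_newton(lambda x: (0.5 * x @ A @ x, A @ x, A), [3.0, -1.0]))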
def src.optimizer.Optimizer.NewtonCG(self)
Use SciPy's built-in Newton-CG (fmin_ncg) algorithm to optimize the parameters.
Definition at line 1233 of file optimizer.py.
def src.optimizer.Optimizer.NewtonRaphson(self)
Optimize the force field parameters using the Newton-Raphson method.
Definition at line 951 of file optimizer.py.
def src.optimizer.Optimizer.Powell(self)
Use SciPy's built-in Powell direction-set algorithm to optimize the parameters.
Definition at line 1203 of file optimizer.py.
def src.optimizer.Optimizer.Precondition(self)
An experimental method to determine the parameter scale factors that result in the best-conditioned Hessian.

Condition number function to be optimized.

Parameters
----------
logrskeys : np.ndarray
    Logarithms of the rescaling factor of each parameter type. The optimization is done in the log space.
multiply : bool
    If set to True, then the exponentiated logrskeys will multiply the existing rescaling factors defined in the force field.

Returns
-------
float
    Condition number of the Hessian matrix.
Definition at line 1342 of file optimizer.py.
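A sketch of the quantity described above: the condition number of the Hessian after each parameter is rescaled by the exponential of its log rescaling factor. The diagonal-transform assumption and the example matrices are illustrative.

    import numpy as np

    def hessian_condition(logrskeys, H):
        """Condition number of H after rescaling parameter i by exp(logrskeys[i]),
        i.e. H_ij -> s_i * H_ij * s_j with s = exp(logrskeys)."""
        s = np.exp(logrskeys)
        return np.linalg.cond(H * np.outer(s, s))

    H = np.array([[1.0e4, 10.0], [10.0, 1.0]])
    print(hessian_condition(np.zeros(2), H))           # ~1e4: badly conditioned
    print(hessian_condition(np.log([0.01, 1.0]), H))   # ~1.2: rescaling x0 fixes it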
def src.optimizer.Optimizer.readchk(self)
Read the checkpoint file for the main optimizer.
Definition at line 1590 of file optimizer.py.
def src.optimizer.Optimizer.recover(self)
def src.optimizer.Optimizer.Run(self)
Call the appropriate optimizer.
This is the method we might want to call from an executable.
Definition at line 321 of file optimizer.py.
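Run() dispatches on the job type via the OptTab attribute (see Public Attributes). A toy sketch of that dispatch pattern; MiniOptimizer and its job-type keys are hypothetical.

    class MiniOptimizer:
        """Toy stand-in showing the OptTab-style dispatch used by Run()."""
        def __init__(self, jobtype):
            self.jobtype = jobtype
            # Map job-type names to bound methods (keys are illustrative).
            self.OptTab = {'NEWTON': self.NewtonRaphson, 'SINGLE': self.SinglePoint}

        def Run(self):
            return self.OptTab[self.jobtype]()   # call the appropriate optimizer

        def NewtonRaphson(self):
            return 'optimizing'

        def SinglePoint(self):
            return 'single point'

    print(MiniOptimizer('SINGLE').Run())   # -> single point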
def src.optimizer.Optimizer.save_mvals_to_input(self, vals, priors=None, jobtype=None)
Write a new input file (s_save.in) containing the current mathematical parameters.
Definition at line 254 of file optimizer.py.
def src.optimizer.Optimizer.Scan_Values(self, MathPhys=1)
Scan through parameter values.
This option is activated through the scan-related inputs: the parameter index (or name) to scan over, and the values fed into the scanner.
This method goes to the specified parameter indices and scans through the supplied values, evaluating the objective function at every step.
I hope this method will be useful for people who just want to look at changing one or two parameters and seeing how it affects the force field performance.
Parameters:
    [in] MathPhys  Switch to use mathematical (True) or physical (False) parameters.
Definition at line 1259 of file optimizer.py.
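A minimal sketch of the scan: hold the other parameters fixed, sweep one index through the supplied values, and evaluate the objective at each point. The helper and toy objective are illustrative.

    import numpy as np

    def scan_parameter(objective, mvals, idx, values):
        """Evaluate the objective with mvals[idx] swept through 'values'."""
        results = []
        for v in values:
            x = np.array(mvals, dtype=float)
            x[idx] = v
            results.append((v, objective(x)))
        return results

    obj = lambda x: np.sum(x**2)   # stand-in objective
    for v, f in scan_parameter(obj, [0.0, 0.0], 0, np.linspace(-1, 1, 5)):
        print(f"x[0] = {v:+.2f}   objective = {f:.3f}")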
def src.optimizer.Optimizer.ScanMVals(self)
Scan through the mathematical parameter space.
Definition at line 1306 of file optimizer.py.
def src.optimizer.Optimizer.ScanPVals(self)
Scan through the physical parameter space.
Definition at line 1311 of file optimizer.py.
def src.optimizer.Optimizer.Scipy_BFGS(self)
Use SciPy's built-in BFGS algorithm to optimize the parameters.
Definition at line 1218 of file optimizer.py.
def src.optimizer.Optimizer.ScipyOptimizer(self, Algorithm="None")
Driver for SciPy optimizations.
Using any of the SciPy optimizers requires that SciPy is installed. This method first defines several wrappers around the objective function that the SciPy optimizers can use. Then it calls the algorithm itself.
Parameters:
    [in] Algorithm  The optimization algorithm to use, for example 'powell', 'simplex' or 'anneal'.
Definition at line 969 of file optimizer.py.
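A sketch of the wrapper pattern: expose the objective (and optionally its gradient) as plain functions of the parameter vector, then hand them to the chosen SciPy routine. The quadratic objective here is a stand-in for the ForceBalance objective; the legacy fmin/fmin_bfgs interfaces match the routine names used elsewhere in this class.

    import numpy as np
    from scipy import optimize

    def my_objective(x):                  # stand-in for the ForceBalance objective
        return (x[0] - 1.0)**2 + 10.0 * (x[1] + 2.0)**2

    def my_gradient(x):                   # analytic gradient of the stand-in
        return np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])

    x0 = np.zeros(2)
    x_simplex = optimize.fmin(my_objective, x0, disp=False)   # gradient-free simplex
    x_bfgs = optimize.fmin_bfgs(my_objective, x0, fprime=my_gradient, disp=False)
    print(x_simplex, x_bfgs)              # both near [1, -2]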
def src.optimizer.Optimizer.set_goodstep(self, val)
Mark in each target that the previous optimization step was good or bad.
Definition at line 247 of file optimizer.py.
def src.optimizer.Optimizer.Simplex(self)
Use SciPy's built-in simplex algorithm to optimize the parameters.
Definition at line 1198 of file optimizer.py.
def src.optimizer.Optimizer.SinglePoint(self)
A single-point objective function computation.
Definition at line 1316 of file optimizer.py.
def src.optimizer.Optimizer.step(self, xk, data, trust)
Computes the next step in the parameter space.
There are lots of tricks here that I will document later.
Parameters:
    [in] G      The gradient
    [in] H      The Hessian
    [in] trust  The trust radius
Definition at line 745 of file optimizer.py.
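One standard realization of such a step is a Levenberg-style diagonal shift that shrinks the Newton step until it fits inside the trust radius. The sketch below shows only that idea; the actual step() includes further tricks (e.g. the Brent guess value mentioned among the options).

    import numpy as np

    def trust_step(G, H, trust):
        """Newton step constrained to the trust radius via a growing diagonal shift."""
        dx = np.linalg.solve(H, -G)                       # unconstrained Newton step
        lam = 0.0
        while np.linalg.norm(dx) > trust:                 # too long: regularize harder
            lam = max(2.0 * lam, 1e-6)
            dx = np.linalg.solve(H + lam * np.eye(len(G)), -G)
        return dx

    G = np.array([4.0, -2.0])
    H = np.array([[2.0, 0.0], [0.0, 1.0]])
    dx = trust_step(G, H, trust=0.5)
    print(dx, np.linalg.norm(dx))   # step length <= 0.5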
def src.optimizer.Optimizer.TruncatedNewton(self)
Use SciPy's built-in truncated Newton (fmin_tnc) algorithm to optimize the parameters.
Definition at line 1228 of file optimizer.py.
def src.optimizer.Optimizer.writechk(self)
Write the checkpoint file for the main optimizer.
Definition at line 1602 of file optimizer.py.
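A sketch of the checkpoint round trip, using Python's pickle as a stand-in for whatever format writechk actually emits; the file name and state dictionary are illustrative.

    import pickle

    state = {'xk_prev': [0.1, -0.3], 'trust': 0.05, 'iteration': 7}

    with open('checkpoint.p', 'wb') as fout:   # writechk analogue
        pickle.dump(state, fout)

    with open('checkpoint.p', 'rb') as fin:    # readchk analogue
        restored = pickle.load(fin)
    print(restored['iteration'])               # -> 7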
src.optimizer.Optimizer.bakdir |
Definition at line 182 of file optimizer.py.
src.optimizer.Optimizer.bhyp |
Whether the penalty function is hyperbolic.
Definition at line 176 of file optimizer.py.
src.optimizer.Optimizer.chk |
Definition at line 541 of file optimizer.py.
src.optimizer.Optimizer.dx |
Definition at line 774 of file optimizer.py.
src.optimizer.Optimizer.excision |
The indices to be excluded from the Hessian update.
Definition at line 189 of file optimizer.py.
src.optimizer.Optimizer.failmsg |
Print a special message on failure.
Definition at line 160 of file optimizer.py.
src.optimizer.Optimizer.FF |
The force field itself.
Definition at line 178 of file optimizer.py.
src.optimizer.Optimizer.goodstep |
Specify whether the previous optimization step was good or bad.
Definition at line 162 of file optimizer.py.
src.optimizer.Optimizer.Grad |
Definition at line 776 of file optimizer.py.
src.optimizer.Optimizer.h |
Definition at line 387 of file optimizer.py.
src.optimizer.Optimizer.H |
Definition at line 773 of file optimizer.py.
src.optimizer.Optimizer.Hess |
Definition at line 777 of file optimizer.py.
src.optimizer.Optimizer.iter |
Definition at line 779 of file optimizer.py.
src.optimizer.Optimizer.iteration |
The current iteration number.
Definition at line 166 of file optimizer.py.
src.optimizer.Optimizer.iterinit |
The initial iteration number (nonzero if we restart a previous run).
This will be invoked if we quit RIGHT at the start of an iteration (i.e., jobs were launched but none were finished). If data exists in the temp directory corresponding to the highest iteration number, read the data.
Definition at line 164 of file optimizer.py.
src.optimizer.Optimizer.mvals0 |
The original parameter values.
- Check derivatives by finite difference after the optimization is over (for good measure).
- Don't print a "result" force field if it's the same as the input.
- Determine the save file name.
- Parse the save file for mvals, if it exists.
- The "precondition" job type takes care of its own output files.
Definition at line 196 of file optimizer.py.
src.optimizer.Optimizer.mvals_bak |
The root directory.
Determine the output file name.
- The job type.
- Initial step size / trust radius.
- Minimum trust radius (for noisy objective functions).
- Lower bound on Hessian eigenvalue (below this, we add in steepest descent).
- Lower bound on step size (will fail below this).
- Guess value for Brent.
- Step size for numerical finite difference. When the trust radius gets smaller, the finite difference step might need to get smaller.
- Number of steps to average over.
- Function value convergence threshold.
- Step size convergence threshold.
- Gradient convergence threshold.
- Allow convergence on low-quality steps.
- Maximum number of optimization steps.
- For scan[mp]vals: the parameter index to scan over.
- For scan[mp]vals: the parameter name to scan over (it just looks up an index).
- For scan[mp]vals: the values that are fed into the scanner.
- Name of the checkpoint file that we're reading in.
- Name of the checkpoint file that we're writing out.
- Whether to write the checkpoint file at every step.
- Adaptive trust radius adjustment factor.
- Adaptive trust radius adjustment damping.
- Whether to print the gradient during each step of the optimization.
- Whether to print the Hessian during each step of the optimization.
- Whether to print parameters during each step of the optimization.
- Error tolerance (if the objective function rises by less than this, the optimizer will forge ahead!).
- Search tolerance (the Hessian diagonal search will stop if the change is below this threshold).
- Whether to make backup files.
- Name of the original input file.
- Number of convergence criteria that must be met.
- Only back up the "mvals" input file once per calculation.
- Clone the input file to the output.
Definition at line 158 of file optimizer.py.
src.optimizer.Optimizer.np |
Number of parameters.
Definition at line 192 of file optimizer.py.
src.optimizer.Optimizer.Objective |
The objective function (needs to be passed in when I instantiate). Also resets the global variable.
Definition at line 174 of file optimizer.py.
src.optimizer.Optimizer.OptTab |
A list of all the things we can ask the optimizer to do.
Definition at line 63 of file optimizer.py.
src.optimizer.Optimizer.Penalty |
Definition at line 778 of file optimizer.py.
src.optimizer.Optimizer.prev_bad |
Definition at line 972 of file optimizer.py.
src.optimizer.Optimizer.read_mvals |
Definition at line 223 of file optimizer.py.
src.optimizer.Optimizer.resdir |
Definition at line 183 of file optimizer.py.
src.optimizer.Optimizer.uncert |
Target types which introduce uncertainty into the objective function.
Will re-evaluate the objective function when an optimization step is rejected.
Definition at line 181 of file optimizer.py.
src.optimizer.Optimizer.Val |
Definition at line 775 of file optimizer.py.
src.optimizer.Optimizer.x_best |
Definition at line 975 of file optimizer.py.
src.optimizer.Optimizer.x_prev |
Definition at line 974 of file optimizer.py.
src.optimizer.Optimizer.xk_prev |
Definition at line 973 of file optimizer.py.