ForceBalance API 1.3
Automated optimization of force fields and empirical potentials

src.objective.Penalty Class Reference

Penalty functions for regularizing the force field optimizer. More...
Public Member Functions

def __init__ (self, User_Option, ForceField, Factor_Add=0.0, Factor_Mult=0.0, Factor_B=0.1, Alpha=1.0, Power=2.0)
def compute (self, mvals, Objective)
def L2_norm (self, mvals)
    Harmonic L2-norm constraints. More...
def BOX (self, mvals)
    Box-style constraints. More...
def HYP (self, mvals)
    Hyperbolic constraints. More...
def FUSE (self, mvals)
def FUSE_BARRIER (self, mvals)
def FUSE_L0 (self, mvals)
Public Attributes

fadd
fmul
a
b
p
FF
ptyp
Pen_Tab
spacings
    Find exponential spacings. More...

Static Public Attributes

dictionary Pen_Names
Penalty functions for regularizing the force field optimizer.
The purpose of this module is to improve the behavior of our optimizer. Essentially, our problem is fraught with 'linear dependencies', i.e. directions in the parameter space to which the objective function does not respond. This happens if a parameter is simply useless, or if two or more parameters describe the same thing.
To accomplish this, a penalty function is added to the objective function. Generally, the more the parameters change (i.e. the greater the norm of the parameter vector), the greater the penalty. Note that the penalty is added after all of the other contributions have been computed. This ordering only matters if the penalty 'multiplies' the objective function (Obj + Obj*Penalty), but we also have the option of an additive penalty (Obj + Penalty).
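The additive and multiplicative combinations described above can be sketched as follows. The `combine` helper below is hypothetical (not part of the ForceBalance API), but its `fadd`/`fmul` factors mirror the class attributes of the same names listed in this reference:

```python
def combine(obj, penalty, fadd=0.0, fmul=0.0):
    """Combine an objective value with a penalty term (illustrative sketch).

    fadd scales an additive penalty:        Obj + Penalty
    fmul scales a multiplicative penalty:   Obj + Obj*Penalty
    """
    return obj + fadd * penalty + fmul * obj * penalty

# Additive:       combine(10.0, 2.0, fadd=1.0) -> 10.0 + 2.0           = 12.0
# Multiplicative: combine(10.0, 2.0, fmul=0.1) -> 10.0 + 10.0*0.1*2.0  = 12.0
```

With a multiplicative penalty, a large objective value is penalized proportionally harder, which is one reason both options exist.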
Statistically, this is called regularization. If the penalty function is the squared norm of the parameter vector, it is called ridge regression. There is also the option of using simply the norm, which is called the lasso, but I think it presents problems for the optimizer that I need to work out.
Note that the penalty functions can be considered as part of a 'maximum likelihood' framework in which we assume a PRIOR PROBABILITY of the force field parameters around their initial values. The penalty function is related to the prior by an exponential. Ridge regression corresponds to a Gaussian prior and lasso corresponds to an exponential prior. There is also 'elastic net regression' which interpolates between Gaussian and exponential using a tuning parameter.
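The penalty-to-prior relationship described above can be made concrete: the prior probability is proportional to the exponential of the negative penalty. A minimal sketch, where the `unnormalized_prior` helper and the role of `a` as an inverse distribution width are assumptions for illustration:

```python
import numpy as np

def unnormalized_prior(mvals, penalty):
    """Prior(m) is proportional to exp(-penalty(m)); normalization omitted."""
    m = np.asarray(mvals, dtype=float)
    return np.exp(-penalty(m))

# Ridge regression <-> Gaussian prior:     penalty = ||m||^2
ridge = lambda m: np.dot(m, m)
# Lasso <-> exponential (Laplace) prior:   penalty = ||m||_1
lasso = lambda m: np.sum(np.abs(m))
```

Both priors peak at the initial parameter values (m = 0 in the mathematical-parameter coordinates) and decay as the parameters move away, which is exactly the regularizing behavior described in this module.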
Our priors are adjustable too - there is one parameter, which is the width of the distribution. We can even use a noninformative prior for the distribution widths (hyperprior!). These are all important things to consider later.
Importantly, note that there is no code here that treats the distribution width. That is because the distribution width is wrapped up in the rescaling factors, which are essentially a coordinate transformation on the parameter space. More documentation on this will follow, perhaps in the 'rsmake' method.
Definition at line 368 of file objective.py.
def src.objective.Penalty.__init__ (self, User_Option, ForceField, Factor_Add=0.0, Factor_Mult=0.0, Factor_B=0.1, Alpha=1.0, Power=2.0)
Definition at line 373 of file objective.py.
def src.objective.Penalty.BOX (self, mvals)
Box-style constraints.
A penalty term of mvals[i]^Power is added for each parameter.

If Power = 2.0 (the default value of penalty_power), this is the same as L2 regularization. If set to a larger number such as 12.0, it corresponds to adding a flat-bottomed restraint to each parameter separately.

Parameters
    [in] mvals The parameter vector

Returns
    DC0 The norm squared of the vector
    DC1 The gradient of DC0
    DC2 The Hessian (just a constant)
Definition at line 479 of file objective.py.
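The BOX penalty and its derivatives can be sketched as below. This is a hypothetical re-implementation for illustration, not the actual ForceBalance code; it follows the mvals[i]^Power form and the DC0/DC1/DC2 return convention documented above:

```python
import numpy as np

def box_penalty(mvals, power=2.0):
    """Sketch of a box-style penalty: sum_i |mvals[i]|^power.

    Returns the penalty value, its gradient, and its (diagonal) Hessian.
    For power = 2.0 this reduces to L2 regularization, where the Hessian
    is just the constant 2*I.
    """
    x = np.asarray(mvals, dtype=float)
    dc0 = np.sum(np.abs(x) ** power)
    dc1 = power * np.sign(x) * np.abs(x) ** (power - 1)
    dc2 = np.diag(power * (power - 1) * np.abs(x) ** (power - 2))
    return dc0, dc1, dc2
```

With a large exponent such as 12.0, the penalty is nearly flat for |mvals[i]| < 1 and rises steeply beyond, which is what "flat-bottomed restraint" means here.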
def src.objective.Penalty.compute (self, mvals, Objective)
Definition at line 417 of file objective.py.
def src.objective.Penalty.FUSE (self, mvals)

def src.objective.Penalty.FUSE_BARRIER (self, mvals)

def src.objective.Penalty.FUSE_L0 (self, mvals)

def src.objective.Penalty.HYP (self, mvals)
Hyperbolic constraints.
The smaller the 'b' parameter, the closer we are to an L1-norm constraint. If we use these, we expect a properly behaving optimizer to make several of the parameters very nearly zero (which would be cool).

Parameters
    [in] mvals The parameter vector
Definition at line 504 of file objective.py.
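A common smooth approximation to the L1 norm, consistent with the role of 'b' described above, is sum_i (sqrt(x_i^2 + b^2) - b). The sketch below uses that form; it is an assumption for illustration, not the actual ForceBalance implementation:

```python
import numpy as np

def hyp_penalty(mvals, b=0.1):
    """Sketch of a hyperbolic penalty: sum_i (sqrt(x_i^2 + b^2) - b).

    As b -> 0 this approaches the L1 norm sum_i |x_i|, but unlike the
    L1 norm it stays smooth (differentiable) at x_i = 0, which is
    friendlier to gradient-based optimizers.
    """
    x = np.asarray(mvals, dtype=float)
    r = np.sqrt(x ** 2 + b ** 2)
    val = np.sum(r - b)
    grad = x / r                    # -> sign(x) as b -> 0
    hess = np.diag(b ** 2 / r ** 3) # sharp peak near x = 0 for small b
    return val, grad, hess
```

L1-like penalties drive weak parameters toward exactly zero rather than merely shrinking them, which is why the documentation expects "several of the parameters very nearly zero".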
def src.objective.Penalty.L2_norm (self, mvals)
Harmonic L2-norm constraints.
These are the ones that I use the most often to regularize my optimization.
Parameters
    [in] mvals The parameter vector
Definition at line 450 of file objective.py.
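The harmonic L2-norm penalty is the squared norm of the parameter vector, as described in the module overview (ridge regression). A minimal sketch of that quantity and its derivatives, again a hypothetical re-implementation rather than the actual code:

```python
import numpy as np

def l2_penalty(mvals):
    """Sketch of the harmonic L2 penalty: ||mvals||^2.

    Gradient is 2*mvals and the Hessian is the constant 2*I,
    so the penalty adds a well-conditioned quadratic bowl to
    the objective, centered on the initial parameter values.
    """
    x = np.asarray(mvals, dtype=float)
    dc0 = np.dot(x, x)
    dc1 = 2.0 * x
    dc2 = 2.0 * np.eye(len(x))
    return dc0, dc1, dc2
```

Because the Hessian is constant and positive definite, this penalty lifts the 'linear dependencies' mentioned in the module description: directions the objective ignores still cost something through the penalty.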
src.objective.Penalty.a |
Definition at line 376 of file objective.py.
src.objective.Penalty.b |
Definition at line 377 of file objective.py.
src.objective.Penalty.fadd |
Definition at line 374 of file objective.py.
src.objective.Penalty.FF |
Definition at line 379 of file objective.py.
src.objective.Penalty.fmul |
Definition at line 375 of file objective.py.
src.objective.Penalty.p |
Definition at line 378 of file objective.py.
dictionary src.objective.Penalty.Pen_Names [static]
Definition at line 369 of file objective.py.
src.objective.Penalty.Pen_Tab |
Definition at line 381 of file objective.py.
src.objective.Penalty.ptyp |
Definition at line 380 of file objective.py.
src.objective.Penalty.spacings |
Find exponential spacings.
Definition at line 414 of file objective.py.