larq.constraints
Functions from the constraints module allow setting constraints (e.g. weight clipping) on network parameters during optimization.
The constraints are applied on a per-layer basis. The exact API will depend on the layer, but the layers `QuantDense`, `QuantConv1D`, `QuantConv2D` and `QuantConv3D` have a unified API.
These layers expose 2 keyword arguments:
- `kernel_constraint` for the main weights matrix
- `bias_constraint` for the bias.
```python
import larq as lq

lq.layers.QuantDense(64, kernel_constraint="weight_clip")
lq.layers.QuantDense(64, kernel_constraint=lq.constraints.WeightClip(2.0))
```
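A slightly fuller sketch showing both keyword arguments on the same layer; the layer size, clip values and variable name are illustrative only, not taken from the documentation above:

```python
import larq as lq

# Illustrative only: clip both the kernel and the bias of a QuantDense layer
# to [-1, 1] during training.
layer = lq.layers.QuantDense(
    64,
    kernel_constraint=lq.constraints.WeightClip(clip_value=1.0),
    bias_constraint=lq.constraints.WeightClip(clip_value=1.0),
)
```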
WeightClip
`larq.constraints.WeightClip(clip_value=1)`
Weight Clip constraint
Constrains the weights incident to each hidden unit to lie within the interval `[-clip_value, clip_value]`.
Arguments
- `clip_value` *(float)*: The value at which to clip incoming weights.
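Since Keras-style constraints are callables that map a weight tensor to its constrained value, the clipping can be inspected directly on a tensor. A minimal sketch, assuming TensorFlow is installed and eager execution is enabled; the example values are illustrative and the comment states the expected elementwise clipping described above:

```python
import tensorflow as tf
import larq as lq

constraint = lq.constraints.WeightClip(clip_value=1)
weights = tf.constant([-3.0, -0.5, 0.0, 0.7, 2.5])

# Expected: values outside [-1, 1] are clipped to the interval boundaries,
# i.e. [-1.0, -0.5, 0.0, 0.7, 1.0].
print(constraint(weights).numpy())
```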
weight_clip
`larq.constraints.weight_clip(clip_value=1)`
Weight Clip constraint
Constrains the weights incident to each hidden unit to lie within the interval `[-clip_value, clip_value]`.
Arguments
- `clip_value` *(float)*: The value at which to clip incoming weights.
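Given the identical signatures above, `weight_clip` appears to be a lowercase alias for `WeightClip`, and the string shorthand `"weight_clip"` used earlier resolves to the same constraint with the default `clip_value=1`. A minimal sketch of the three spellings, under that assumption:

```python
import larq as lq

# Assumption: all three layers end up with the same weight-clipping
# constraint (clip_value=1).
layer_a = lq.layers.QuantDense(64, kernel_constraint="weight_clip")
layer_b = lq.layers.QuantDense(64, kernel_constraint=lq.constraints.weight_clip(clip_value=1))
layer_c = lq.layers.QuantDense(64, kernel_constraint=lq.constraints.WeightClip(clip_value=1))
```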