Param_mapping  
Updated Aug 22, 2012 by jean.dau...@gmail.com

Parameter mappings

By default, the model assumes all unknown variables (except precision hyperparameters) are continuous variables that can vary between -Inf and Inf. However, one may want to constrain the range of values of some of these variables. For example, in the context of a Q-learning model:

  • one might want to ensure that the learning rate in a Q-learning model lies within the [0,1] interval.
  • one might want to ensure that the softmax temperature is strictly positive.
These sorts of constraints are easily handled by re-defining model variables as dummy variables that are passed through an appropriate mapping, i.e.:
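In generic form (writing θ for the unconstrained dummy variable and x for the effective parameter; this notation is assumed here, since the original equation image is missing), such a mapping reads:

```latex
x = s(\theta), \qquad \theta \in \mathbb{R},
```

where s is a one-to-one mapping from the real line onto the desired constrained range.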

This mapping can now be inserted into the evolution and observation functions, so that although the VB algorithm derives an approximate posterior on the unbounded dummy variable, the effective parameter fulfills the desired constraints.
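As an illustration, here is a minimal sketch (in Python rather than the toolbox's native MATLAB; all function and argument names are hypothetical) of how such mappings can be applied inside the evolution and observation functions of a Q-learning model: the learning rate is obtained by passing a dummy variable through a sigmoid, and the softmax inverse temperature through an exponential.

```python
import math

def sigmoid(t):
    """Map an unbounded dummy variable to the (0, 1) interval."""
    return 1.0 / (1.0 + math.exp(-t))

def q_update(q, action, reward, theta):
    """Rescorla-Wagner (evolution) step; theta[0] is the unbounded
    dummy variable behind the learning rate."""
    alpha = sigmoid(theta[0])          # learning rate constrained to (0, 1)
    q = list(q)
    q[action] += alpha * (reward - q[action])
    return q

def softmax_choice_prob(q, action, theta):
    """Softmax (observation) rule; theta[1] is the dummy variable
    behind the strictly positive inverse temperature."""
    beta = math.exp(theta[1])          # inverse temperature > 0
    z = [beta * v for v in q]
    m = max(z)                         # subtract max for numerical stability
    expz = [math.exp(v - m) for v in z]
    return expz[action] / sum(expz)
```

The VB algorithm only ever sees the unbounded `theta`; the constraints are enforced inside the model functions themselves.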

Note: The distribution on the effective parameter can be approximated using a first-order Taylor expansion of the mapping, as follows:

The quality of this approximation strongly depends on how linear the mapping is. See here for better approximations in the context of the exponential and sigmoidal mappings.
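Concretely, assuming the approximate posterior on the (scalar) dummy variable is Gaussian, q(θ) = N(μ, Σ), a first-order Taylor expansion of the mapping s around μ gives (notation assumed, since the original equation image is missing):

```latex
x = s(\theta) \approx s(\mu) + s'(\mu)\,(\theta - \mu)
\quad\Rightarrow\quad
x \;\dot\sim\; \mathcal{N}\!\bigl(s(\mu),\; s'(\mu)^2\,\Sigma\bigr).
```

The approximation is exact when s is affine, which is why strongly nonlinear mappings degrade it.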

Below are some examples of useful one-to-one mappings:

Sigmoidal mapping

This mapping is relevant when restricting variables to the unit interval [0,1].
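The original equation image is missing; a common choice (assumed here) is the standard logistic sigmoid, sketched below:

```python
import math

def sigmoid(theta):
    """Standard logistic sigmoid: maps (-inf, inf) one-to-one onto (0, 1),
    with sigmoid(0) = 0.5."""
    return 1.0 / (1.0 + math.exp(-theta))
```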

Note: by rescaling and translating the above sigmoid mapping, one can construct variables that are bounded from above and below by arbitrary values:

where the effective parameter is constrained to lie within the [a,b] interval.
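The rescaled and translated sigmoid can be sketched as follows (the equation image is missing from the original page; `a` and `b` denote the lower and upper bounds):

```python
import math

def scaled_sigmoid(theta, a, b):
    """Rescaled/translated sigmoid: maps (-inf, inf) one-to-one onto the
    (a, b) interval, with scaled_sigmoid(0, a, b) = (a + b) / 2."""
    return a + (b - a) / (1.0 + math.exp(-theta))
```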

Exponential mapping

This mapping is relevant when restricting the variables to positive numbers:

Note: the lower bound of the constraint can be changed by translating the above mapping...
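Both the plain and the translated exponential mapping can be sketched as follows (the original equation images are missing; the `lower` argument is an illustrative addition implementing the translation mentioned in the note above):

```python
import math

def exp_map(theta, lower=0.0):
    """Exponential mapping: maps (-inf, inf) one-to-one onto (lower, inf).
    With lower = 0 (the default, assumed here), the effective parameter
    is constrained to be strictly positive."""
    return lower + math.exp(theta)
```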
