READ-ONLY: This project has been archived. For more information see this post.
Optim_intro  
Updated Apr 12, 2012 by vincent....@gmail.com

General introduction

Given an experimental paradigm and a model m of the subject, the only thing left to manipulate is the experimenter's input u (the design).

When designing an experiment with the aim of inverting a model (that is, of computing a posterior distribution over its parameters), we try to maximize the amount of information gained about those parameters.

Different criteria can be used to quantify the achieved information gain. Before choosing one, let us write C(u_{1:t}, y_{1:t}) for the information gain achieved by selecting inputs u_{1:t} and observing behavior y_{1:t}.

Prior to any observation, one could choose u_{1:t} so as to maximize the expected gain under the joint distribution over parameters and behavioral data.

This ideal choice is computationally intractable. However, a greedier (and therefore less optimal) optimization is tractable.

Instead of maximizing the information gain for the whole experiment (choosing all inputs at once), let us optimize one step ahead (choosing only the input for the next trial). Imagine you have already selected inputs u_{1:t} and observed responses y_{1:t}. You now want to maximize the information gained from the next observation, i.e. to choose u_{t+1}.

This is much easier to compute and is what we will do here. We propose to minimize the sum of the expected variances of the posterior over parameters after the next trial (the greedy criterion):

u*_{t+1} = argmin_u  E_{y | u, u_{1:t}, y_{1:t}} [ Σ_i V(θ_i | u_{1:t}, y_{1:t}, u, y) ]

where the expectation is taken over the not-yet-observed response y.
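As a minimal sketch of this greedy criterion, the function below scores each candidate input by the trace of the posterior covariance it would yield, and returns the candidate with the lowest score. It assumes a linear-Gaussian observation model (a stand-in, not the toolbox's model), in which the covariance update does not depend on the unseen response, so the expectation over y is exact; all names (`greedy_next_input`, `features`) are illustrative.

```python
import numpy as np

def greedy_next_input(Sigma_post, candidates, features, noise_var):
    """Pick the next input u minimising the trace of the updated
    posterior covariance (the sum of expected posterior variances).
    Linear-Gaussian assumption: the update is independent of the
    as-yet-unobserved response y, so no sampling over y is needed."""
    best_u, best_score = None, np.inf
    prec = np.linalg.inv(Sigma_post)  # current posterior precision
    for u in candidates:
        x = features(u)               # design row produced by input u
        new_prec = prec + np.outer(x, x) / noise_var
        score = np.trace(np.linalg.inv(new_prec))
        if score < best_score:
            best_u, best_score = u, score
    return best_u, best_score

# Illustrative usage: quadratic features over a small candidate grid.
feats = lambda u: np.array([1.0, u, u ** 2])
u_next, _ = greedy_next_input(np.eye(3) * 10.0,
                              [-1.0, 0.0, 1.0, 2.0], feats, 1.0)
```

With a broad isotropic prior, the criterion favors the candidate whose feature vector is most informative (largest under the prior metric), which is the intuitive behavior one would expect.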

Intuitively, leaving aside correlations between parameter estimates, a model is best identified when its parameters are inferred with low variance, that is, when the posterior distribution is concentrated around its expected value.

We will use the approximate covariance matrix returned by the inversion method described earlier to optimize the design.
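Putting the pieces together, the loop below sketches a full sequential experiment: at each trial it scores candidates with the current posterior covariance, runs the chosen trial, and updates the posterior for the next step. It again assumes a linear-Gaussian model so the update is available in closed form; the true parameters, feature map, prior, and candidate grid are all illustrative stand-ins for whatever the actual inversion method provides.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the subject's model (all values illustrative).
theta_true = np.array([0.5, -1.0, 0.25])
features = lambda u: np.array([1.0, u, u ** 2])
noise_var = 0.5
candidates = np.linspace(-2.0, 2.0, 41)

mu = np.zeros(3)            # posterior mean
prec = np.eye(3) / 10.0     # posterior precision (inverse covariance)

for t in range(10):
    # Greedy step: score each candidate by the trace of the
    # covariance it would yield, and pick the minimiser.
    scores = [np.trace(np.linalg.inv(
                  prec + np.outer(features(u), features(u)) / noise_var))
              for u in candidates]
    u = candidates[int(np.argmin(scores))]

    # Run the "trial": observe a noisy response at the chosen input.
    x = features(u)
    y = x @ theta_true + rng.normal(scale=np.sqrt(noise_var))

    # Conjugate posterior update; this covariance is exactly what the
    # next greedy step reuses to score candidates.
    new_prec = prec + np.outer(x, x) / noise_var
    mu = np.linalg.solve(new_prec, prec @ mu + x * y / noise_var)
    prec = new_prec
```

After the loop, the trace of the posterior covariance has shrunk well below its prior value, which is precisely the quantity the greedy criterion drives down trial by trial.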
