My research focuses on two levels of computation. Models of high-level computation aim to
understand the representations that humans use to learn about their environment.
The way information is represented constrains both how new information can be
acquired and how learned information can be exploited to achieve various (e.g., behavioral) goals.
Since learning has to be performed on high-dimensional, noisy, and ambiguous stimuli,
probabilistic models are well-suited tools, as they can handle all of these challenges.
Furthermore, Bayesian probabilistic models provide a normative theory of learning,
which enables us to compare model performance with human data. We test theories by analyzing
the behavior of human participants in experiments: by tracking participants' eye movements, we analyze
how learning shapes the design of efficient movement strategies.
My investigations of low-level computations address how neurons cope with the problems
posed by extremely rich stimuli. Optimal inference and learning require that neurons
represent not only the inferred values of environmental features
but also the uncertainty associated with them.
My focus is on how such a representation can be built and how these principles shape neural
responses. I use probabilistic models to describe evoked and spontaneous activity in the visual system.