A short viewpoint piece that Keith and I wrote has just come out in Perception. Go check it out; it’s open access. May, K. A., & Vincent, B. T. (2016). Fewer Statistical Tests … or Better Ones? Perception. http://doi.org/10.1177/0301006616677909
Posterior predictive checks The toolbox now calculates two measures of “goodness of fit” of the models. These provide useful quantitative reassurance that the models describe participants’ discounting behaviour better than chance. This, in turn, is important when deciding which (if any) data files we should exclude. You can go and […]
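The excerpt doesn’t say which two measures the toolbox computes, so here is a hedged sketch (in Python, for illustration only — the toolbox itself is MATLAB) of two common goodness-of-fit checks for binary choice models: the proportion of choices the model predicts correctly, and the log score relative to a chance model. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binary discounting choices (1 = chose the delayed reward)
# and model-predicted choice probabilities; both invented for illustration
choices = rng.integers(0, 2, size=100)
p_delayed = np.clip(np.where(choices == 1, 0.8, 0.2)
                    + rng.normal(0.0, 0.05, size=100), 0.01, 0.99)

# Measure 1: proportion of trials where the model predicts the observed choice
percent_predicted = np.mean((p_delayed > 0.5) == (choices == 1))

# Measure 2: log score of the model versus a chance model (p = 0.5 everywhere)
log_lik_model = np.sum(np.where(choices == 1,
                                np.log(p_delayed),
                                np.log(1.0 - p_delayed)))
log_lik_chance = 100 * np.log(0.5)
```

A model that beats chance on both measures gives some reassurance that a participant’s data are worth keeping; a model at or below chance flags a candidate for exclusion.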
I’ve just released Version 1.2 of the toolbox ‘Bayesian analysis toolbox for delay discounting.’ The main feature of this release is the addition of new models. For example, you can now estimate the discount rate k while ignoring the magnitude effect, which is very useful if your primary […]
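To make the idea of estimating a discount rate k concrete, here is a minimal sketch — not the toolbox’s actual (Bayesian, MATLAB) implementation — of fitting the standard hyperbolic discounting model V = A / (1 + kD) to simulated choices by maximum likelihood. The choice set, softmax choice rule, and all numbers are invented for illustration.

```python
import numpy as np

def hyperbolic_value(amount, delay, k):
    # Present value of a delayed reward under the hyperbolic model V = A / (1 + k*D)
    return amount / (1.0 + k * delay)

rng = np.random.default_rng(1)

# Hypothetical choices between 50 now and 100 after a delay of D days
A_now, A_later = 50.0, 100.0
delays = np.array([1.0, 7.0, 30.0, 90.0, 180.0, 365.0])

# Simulate a participant with a known discount rate
true_k = 0.05
v_diff = hyperbolic_value(A_later, delays, true_k) - A_now
p_later = 1.0 / (1.0 + np.exp(-v_diff))          # logistic choice rule
chose_later = rng.random(delays.size) < p_later  # True = chose delayed reward

def log_lik(k):
    # Log likelihood of the observed choices for a candidate k
    v = hyperbolic_value(A_later, delays, k) - A_now
    p = np.clip(1.0 / (1.0 + np.exp(-v)), 1e-9, 1 - 1e-9)
    return np.sum(np.where(chose_later, np.log(p), np.log(1.0 - p)))

# Maximum-likelihood estimate of k via a simple grid search
k_grid = np.logspace(-4, 1, 500)
k_hat = k_grid[np.argmax([log_lik(k) for k in k_grid])]
```

The recovered k_hat lands near the simulated true_k; the toolbox does the analogous job with full posterior distributions rather than a point estimate.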
While I have some experience with probabilistic programming in the form of Bayesian networks, and have published papers using them, I am interested in the broader class of generic probabilistic programs. That is, right now I am happy conducting inference on Bayesian networks, but I want to learn how to conduct inference on generic programs. As […]
Importance sampling is related to rejection sampling, which I looked at in the last post. Here is a short demo.

%% true probability distribution
true_func = @(x) betapdf(x,1+1,1+10);

%% Do importance sampling
N = 10^6;

% uniform proposal distribution
x_samples = rand(N,1);
proposal = 1/N;

% evaluate for each sample
target = true_func(x_samples);

% calculate importance […]
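The MATLAB snippet in the excerpt cuts off before the importance weights are computed, so here is a self-contained sketch of the full calculation in Python — a translation of the idea, not the post’s actual code. It estimates the mean of the Beta(2, 11) target using samples from a uniform proposal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: Beta(2, 11), i.e. betapdf(x, 1+1, 1+10) in the excerpt.
# An unnormalised density is fine here: the normalising constant
# cancels when the weights are self-normalised below.
target = lambda x: x * (1.0 - x) ** 10

N = 10**6
# Uniform proposal on [0, 1]; its density is 1 everywhere
x_samples = rng.random(N)

# Importance weights = target density / proposal density, then normalise
weights = target(x_samples) / 1.0
weights /= weights.sum()

# Self-normalised importance-sampling estimate of E[x];
# the true mean of Beta(2, 11) is 2/13 ≈ 0.1538
mean_estimate = np.sum(weights * x_samples)
```

With a million samples the estimate sits within a fraction of a percent of the true mean, despite the proposal being a poor match for the peaked target.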
I am happy to announce my third paper of the year, accepted for publication in Behavior Research Methods. Following my initial foray into writing review papers (two earlier this year), this is my first methods paper, and also my first contribution to higher-level decision making.
Vincent, B. T. (2015) A tutorial on Bayesian models of perception, Journal of Mathematical Psychology, 66:103–114.
I am very happy to announce my new tutorial review paper. Vincent, B. T. (2015) Bayesian accounts of covert selective attention: a tutorial review, Attention, Perception, & Psychophysics, 77(4), 1013–1032. If you do not have an institutional subscription to Attention, Perception, & Psychophysics, Springer allow me to self-archive my author-accepted manuscript (legally). Get the preprints here: [manuscript pdf], [supplementary pdf]. The final publication […]
I’ve just released a small bit of Matlab code on GitHub which helps automate the job of plotting posterior predictive distributions. If you are inferring posterior distributions of parameters of a 1D function (e.g. y=mx+c) then this code will plot the posterior predictive distribution for you. This should be handy for eyeballing how well a model […]
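The core computation behind such a plot can be sketched as follows — in Python for illustration, not the GitHub code itself. Given posterior samples of the parameters of y = mx + c (invented here; in practice they would come from an MCMC sampler), we evaluate the function for every sample over a grid of x values and summarise with percentiles, which is exactly what gets drawn as a predictive band.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical posterior samples of slope m and intercept c
# (stand-ins for what an MCMC sampler would return)
n_samples = 5000
m = rng.normal(2.0, 0.1, n_samples)
c = rng.normal(1.0, 0.3, n_samples)

# Evaluate y = m*x + c for every posterior sample at each x on a grid
x = np.linspace(0.0, 10.0, 50)
y = m[:, None] * x[None, :] + c[:, None]   # shape (n_samples, 50)

# 95% posterior predictive band and median at each x, ready for plotting
lower, median, upper = np.percentile(y, [2.5, 50, 97.5], axis=0)
```

Overlaying the band on the raw data points makes it easy to eyeball whether the model captures the data or systematically misses it.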
Earlier this week I had a nice opportunity to talk about epistemology and inference to the Dundee Skeptics in The Pub. The talk seems to have been well received and a lively discussion followed. I took the chance to correct the misunderstanding that Sherlock Holmes is a master of deductive inference, using this amusing video.
Until recently, many texts on Bayesian inference assumed the reader had a strong background in mathematics or statistics. I found that really frustrating, and it got in the way of my understanding this stuff. But this concise book (~160 pages) is a really great introduction. If I’d had this book when I was learning, […]
If you’ve decided to join the increasing number of people using MCMC methods to conduct Bayesian inference, then one important decision is which software to use. This decision will be influenced by your programming language of choice; see the figure below. If you use Matlab, then really your best choice at the moment is JAGS. You use it […]
A lot of people want to understand more about Bayesian methods. Some people might get traction by directly comparing something they already understand under the traditional approach (such as a t-test) to a Bayesian way of tackling the same problem. John Kruschke comes to the rescue. He has published a nice paper on the Bayesian […]
If we want to try to locate a target of interest given a brief glimpse of a visual scene, then we can use at least two sources of information. Firstly, we can use any visual cues that give away the target’s location. However, in many cases the visual cues are insufficient to work out precisely […]
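The combination of the two information sources follows Bayes’ rule, which can be sketched over a discrete set of candidate locations — a toy illustration with invented numbers, not the model from the paper. Prior knowledge about where targets tend to appear is multiplied by the likelihood from the noisy visual glimpse.

```python
import numpy as np

# Discrete candidate target locations
n_locations = 10

# Prior from scene knowledge: targets tend to appear at central locations
# (numbers invented for illustration)
prior = np.exp(-0.5 * ((np.arange(n_locations) - 4.5) / 2.0) ** 2)
prior /= prior.sum()

# Likelihood from a noisy visual glimpse: weak evidence favouring location 7
likelihood = np.full(n_locations, 1.0)
likelihood[7] = 3.0

# Bayes' rule: posterior over target location ∝ prior × likelihood
posterior = prior * likelihood
posterior /= posterior.sum()
```

Even weak visual evidence can shift the posterior away from the prior’s favourite locations, which is why both sources matter when the glimpse alone is insufficient.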
The yes/no detection task is a classic method used to probe the inner workings of how humans process information. In this paper I was interested in one quite specific experimental phenomenon of visual information processing: that of search asymmetries. If you search for an item A amongst distracters B, then you will have some level […]
Abstract When attempting to understand where people look during scene perception, researchers typically focus on the relative contributions of low- and high-level cues. Computational models of the contribution of low-level features to fixation selection, with modifications to incorporate top-down sources of information, have been abundant in recent research. However, we are still some way from […]
One of the broad aims of my work is to apply the approach of Bayesian Decision Theory to attentional phenomena. In this particular paper, published back in 2009, I examined one specific aspect of the approach: the decision rule. Described very succinctly, Bayesian Decision Theory consists of two main steps. Firstly, we make an inference about […]
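The two steps can be sketched in a few lines — a toy illustration with invented numbers, not the 2009 paper’s model. Step one produces a posterior over world states; step two applies a decision rule, here choosing the action that maximises expected gain under a given gain function.

```python
import numpy as np

# Step 1: inference — posterior over world states, as delivered by
# Bayes' rule (numbers invented for illustration)
posterior = np.array([0.1, 0.2, 0.6, 0.1])   # P(state | data)

# Step 2: decision — pick the action maximising expected gain,
# given a gain function over (action, state) pairs
gain = np.eye(4)                  # gain 1 for a correct guess, 0 otherwise
expected_gain = gain @ posterior  # expected gain of each action
best_action = np.argmax(expected_gain)
# With a 0/1 gain function this reduces to the MAP decision rule
```

Changing the gain function changes the optimal decision rule even when the posterior stays fixed, which is what makes the decision rule worth examining in its own right.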