Sunday, September 20, 2009

Computing a confidence interval for ρ

Curiously, neither R nor SPSS seems to offer a simple way to compute a confidence interval for Pearson's correlation coefficient from r and the sample size alone. The R base package includes the cor.test function, which does provide a confidence interval based on the Fisher z transformation, but it takes the full data set as input. Even then, the confidence interval depends only on the sample correlation and the sample size, so the extra information is not really needed, except to compute the sample correlation coefficient in the first place. The confidence interval can therefore just as well be computed from published correlation coefficients, without going back to the original data set.

The formula is relatively simple and can be found in any statistics textbook but tracking it down and computing it by hand every time can be somewhat cumbersome. Here is a short R function to do it easily.

r.cint <- function(r, n, level = .95) {
 # Fisher z transformation of the sample correlation
 z <- 0.5 * log((1 + r) / (1 - r))
 # Standard error of z: it depends only on the sample size
 zse <- 1 / sqrt(n - 3)
 # Confidence limits on the z scale
 zmin <- z - zse * qnorm((1 - level) / 2, lower.tail = FALSE)
 zmax <- z + zse * qnorm((1 - level) / 2, lower.tail = FALSE)
 # Back-transform the limits to the correlation scale
 return(c((exp(2 * zmin) - 1) / (exp(2 * zmin) + 1),
          (exp(2 * zmax) - 1) / (exp(2 * zmax) + 1)))
}
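
For example, for a (hypothetical) published correlation of r = .50 based on a sample of 30 participants, the 95% confidence interval comes out at roughly .17 to .73:

r.cint(.5, 30)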

The result can also be used as a hypothesis test, by checking whether the confidence interval includes 0 (or any other constant of interest). The conclusion will be very similar, but not identical, to that of the tests reported by the SPSS CORRELATIONS procedure or R's cor.test, because their p values are based on a different test statistic (and on the t distribution).

What's the point? As is plain to see from the formulas, the standard error of the z-transformed correlation depends only on the sample size (that is the point of the transformation), which means that you need no information other than the correlation coefficient and the sample size to perform a test.
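
As a quick sanity check (just a sketch using simulated data, not results from any actual study), the interval computed by r.cint from r and n alone should match the one reported by cor.test on the full data set, since both rely on the Fisher z transformation:

set.seed(1)                    # arbitrary seed, only for reproducibility
x <- rnorm(30)
y <- .5 * x + rnorm(30)        # two correlated variables
cor.test(x, y)$conf.int        # interval computed from the full data set
r.cint(cor(x, y), length(x))   # same interval from r and n alone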

Correlations are often reported without any discussion of sampling variability, but with a very small sample size the point estimate is going to be very imprecise, and even an impressive r can be compatible with a modest population correlation. Similarly, a moderate observed correlation could reflect anything from a small correlation in the opposite direction to a strong correlation in the same direction. If nothing else, the confidence interval makes this imprecision visible and helps to interpret results based on experiments with a very small number of participants.
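
To make this concrete (the numbers are purely illustrative, derived from the formula above and not from any actual study): an apparently comfortable r = .50 observed with only 10 participants comes with a 95% confidence interval stretching roughly from -.19 to .86, i.e. from a small correlation in the opposite direction to a very strong one:

r.cint(.5, 10)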

Bootstrapping techniques can also be used to construct a confidence interval for a correlation coefficient, but they require access to the original data set and cannot be computed from the summary statistics found in typical research reports.
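
For completeness, here is a minimal sketch of what a percentile bootstrap interval could look like when the raw data are available (the function name and the number of resamples are my own choices, not taken from any particular package):

boot.r.cint <- function(x, y, level = .95, B = 2000) {
 n <- length(x)
 # Resample cases with replacement and recompute the correlation each time
 rs <- replicate(B, {
  i <- sample(n, replace = TRUE)
  cor(x[i], y[i])
 })
 # Percentile interval from the bootstrap distribution
 quantile(rs, c((1 - level) / 2, (1 + level) / 2))
}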

PS: This post was updated in 2013 to fix a layout problem and add some clarifications.

Thursday, September 17, 2009

Salmon and voodoo

If you follow any neuroscience/psychology blog, chances are you have already heard of the « puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition » paper (the former « voodoo correlations » paper). If you haven't, start with this summary of the debate by the neurocritic and follow the links. On one level, the discussion revolves around the statistical analyses used in some fMRI studies in social neuroscience. It's pretty technical, but there is a lot to learn even if you are not into neuroimaging. On another level, the diffusion of the paper on the web and the ensuing discussion on various blogs sparked a debate about peer review and the way scientific discussion takes place. Plenty of food for thought there as well.

Anyway, I just discovered (via the neuroskeptic) that Craig Bennett (of Prefrontal.org) recently presented a poster illustrating how improper statistical analysis can lead to the spurious detection of BOLD changes in a dead salmon. This cunningly drives the point home.

Tuesday, September 15, 2009

Emotion slider

Aside from some rather unsuccessful attempts at psychophysiological measurement, the best part of the last two years has been occupied by a series of experiments with a device (the “emotion slider”) I developed together with Pieter Desmet (one of my PhD advisors and also the person responsible for the sketches in the paper), Rob Luxen (who never stopped improving the electronics) and Hannah Ottens (who actually built the thing).

I presented a first article about it at the Design Research Society conference in Sheffield last year. I never uploaded it to my website, but I recently noticed that it is now available online. The paper is titled Designing a research tool and describes the development of the device from a design angle.

Since then, I have also written a more classical experimental-psychology paper, which I finally presented at ACII last week. Apparently it's not online yet, but you can always contact me for more info. In the meantime, you can also download the slides from my presentation.

In a few words, the conclusion of all this is that there is evidence of compatibility effects between movements on the slider and affective state (for example how good or bad you feel, or how you evaluate a picture). Participants in the experiment were quicker to push to evaluate positive pictures than negative pictures, and also quicker to pull to evaluate negative pictures than positive pictures. This difference in response time shows that one set of movements is more intuitive or easier than the other.

However, unlike what some earlier reports suggested, this effect is very sensitive to the context (how the slider is positioned, what the instructions are, whether there is some form of feedback, etc.). With « neutral » instructions (in my case I asked the participants to « push » and « pull » without any further specification) and a slider positioned between the screen and the user, the more natural mapping seems to be pushing for « positive » and pulling for « negative ».

Using the « wrong » movements also seems to have a small but noticeable effect on the number of errors people make, but the evidence is not very strong.

Sunday, September 13, 2009

Back from ACII 2009

Since Thursday, I have been attending the second « Affective Computing & Intelligent Interaction » conference in Amsterdam. Generally speaking, I was quite impressed by the quality of the research, even if the conclusions often went along the lines of « things are complex, it's difficult ». On a lighter note, the venue was great and everything went smoothly, even if the food and the timing were less impressive (keynotes at 8-something in the morning and a conference finishing late on Saturday are tough!).

As far as I am concerned, the last session (« Guidelines for Affective Signal Processing: From Lab to Life ») was the most interesting, but many other papers are well worth checking out. A few things that caught my attention were Rana el Kaliouby's emotion recognition system for children with autism spectrum disorders, Dimitry Tseterukou's affective haptics, Elisabeth Eichhorn's Recording Inner Life prototype, and Jennifer Robison's paper on the consequences of affective feedback (she received a well-deserved best paper award).

A few of my colleagues from Delft also presented their work: Valentijn Visch had a paper on the attribution of emotion by observers based on basic movement parameters, and Miguel Bruns Alonso presented the latest prototype that came out of his work on tangible interaction and stress reduction.

I don't know if the proceedings are online yet, but in the meantime you can check out the conference website and contact the authors directly; most of them are really happy to send out copies of their articles when asked.

My brand new blog!

It's been some time since I first considered opening a blog, and I have been thinking from time to time about potential content and cool gadgets, even drawing a few wireframes of the layout I wanted it to have. It never got past that stage, but while attending the ACII conference in Amsterdam last week, I really felt it would be great to have a place to post comments or present the many interesting things I saw. That's why I put all the plans for a carefully designed website aside and finally decided to simply sign up on blogger, pick a template and click on “create a blog”. It does not even have a real title, but that's the way blogging was supposed to be, right?