An interview with Professor Geraint Rees



 

Dr David Lewis – Mindlab International Chairman – The Chairman’s Blog

 


As professionals we should constantly be learning about, and frequently learning from, the knowledge and experience of others in the field. For my next few blogs I will be presenting edited interviews with some of the leading players in the fields of neuroscience, marketing and retailing. In the first of these, Professor Geraint Rees, Director of the Institute of Cognitive Neuroscience, University College London, discusses the two main technologies used by neuromarketing companies – EEG and fMRI. How, in his opinion, have these advanced over the last decade?



Professor Geraint Rees: “EEG is a stable, mature technology that has been around – [he pauses to think] – well, for a very long time. There have not been many technical developments within EEG in the last 10 years. I would have thought that the main developments have been in the way data can be analysed and, in particular, putting the data into frameworks where it can be combined with data from other imaging techniques – such as functional MRI. The software packages now used to analyse functional imaging data, like SPM (Statistical Parametric Mapping), can now also be used to analyse EEG or MEG (1) data. That facilitates data fusion, that is, looking at how a mental process in the human brain operates by combining data from two different techniques. That was quite fashionable five years ago.

With functional MRI, in the last decade, I think it’s a technology that’s rapidly matured. Twenty years ago it was brand new – it was the new-new thing and it was super hot. And that meant a lot of exploratory studies got done, and a lot of things got found out very rapidly. At the same time, there was a lot more heterogeneity in how people analysed their data and how people reached the conclusions they were going to reach. The second phase of that has been, as with any technology, that not only has the hardware matured, but the way in which data are analysed has converged internationally into widely agreed standards that are – not universally – but near-universally applied. So the field feels a lot more mature in that respect. The hardware has continued to evolve, but I would describe it as incremental rather than revolutionary. The revolution was inventing BOLD (2) contrast functional MRI in the first place.
Since then, there have been repeated advances in the field strength of the magnets at which data can be acquired. That, in turn, either increases your sensitivity to detect more subtle effects in the brain, or lets you keep the same sensitivity but get higher spatial resolution. So you can see more bits of the brain, and in more detail.



That’s probably the biggest driver, together with something called parallel imaging, where you can acquire more data, faster. That again allows the same improvement, so the end user is seeing data that are either more sensitive to small changes in the brain, or more spatially fine-grained, than previously. Those have been the main technological developments.




In EEG, the intrinsic problem is that you can never fully determine the pattern of sources in the brain that produced a pattern of EEG waves. For a mathematical reason that is unfixable: the problem is not solvable, or not uniquely solvable. That doesn’t mean one can’t guess at what pattern of sources, or patterns of activity in the brain, made the particular pattern of activity you observe at the scalp. It just means it’s not uniquely determined, which means you can’t ever be precisely certain. A second intrinsic barrier is the nature of the EEG signal: it’s relatively insensitive to deep brain structures. So the nuclei that live in the middle of our brains that, for example, are affected in Parkinson’s (disease) are not easily accessible to any technique that records the scalp EEG – [which is] much more sensitive to stuff on the surface of the brain. Those limits are intrinsic and are probably not going to be overcome anytime soon.

With functional MRI, the most widely used form, BOLD or Blood-Oxygen Level Dependent, is contingent on the level of deoxyhaemoglobin in the blood. The intrinsic limitation there is that whatever the spatial scale at which blood flow is regulated is the ultimate spatial resolution of functional MRI, because you are not going to get below that – or you’re going to get below it only with difficulty, and inferences, and assumptions. We’ve known for a long time, since experiments in the 19th century, that blood flow changes in the brain are very closely localised to where neural activity changes, to within a millimetre or two. Now a millimetre or two is great – right? I mean, absolutely fantastic for the kind of science I do. But on the other hand, to some people a millimetre or two is hundreds of thousands of nerve cells, and so that’s a huge distance. It reflects the fact that the brain has these multiple levels of organisation at different spatial scales.
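The non-uniqueness Professor Rees describes is the classic EEG inverse problem: with far fewer scalp sensors than brain sources, the linear mapping from sources to sensors has a null space, so different source patterns can produce identical scalp readings. A toy numerical sketch (the matrix values here are arbitrary stand-ins, not a real lead field):

```python
import numpy as np
from scipy.linalg import null_space

# Toy "lead field": 3 scalp sensors observing 5 brain sources.
# Rows = sensors, columns = sources (values arbitrary for illustration).
rng = np.random.default_rng(0)
L = rng.normal(size=(3, 5))

# One candidate pattern of source activity.
s1 = np.array([1.0, 0.0, 2.0, 0.0, 1.0])

# Any vector in the null space of L is "silent" at the scalp,
# so adding it gives a genuinely different source pattern.
N = null_space(L)          # 5x2 basis of scalp-invisible patterns
s2 = s1 + 3.0 * N[:, 0]

# Distinct sources, identical sensor readings: the inverse
# problem cannot tell s1 and s2 apart.
assert not np.allclose(s1, s2)
assert np.allclose(L @ s1, L @ s2)
print("distinct sources, identical scalp EEG")
```

This is why source localisation always rests on extra assumptions (priors, constraints) rather than on the measurements alone.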



For a technique at one spatial scale to talk to, or inform, another spatial scale is not always possible, or is more challenging if there are intrinsic limits like this… there have been repeated attempts to get round that; some people have tried to devise MR sequences that, for example, are sensitive to electrical currents. But these generally have not entered wide usage and are, generally speaking, highly technical and not as powerful. BOLD contrast fMRI has become the de facto standard not because it’s the best signal, but because it’s the signal that’s easiest to produce and is jolly reliable, as these things go. There’s also an intrinsic temporal limit, of course, that many people appreciate, because it’s a blood flow response: the timing of the blood flow response lags neural activity by several seconds. But… we don’t know for sure, because people haven’t done a kind of mapping experiment to map out that latency across the whole brain. The reason they haven’t is that you’d need to be able to precisely activate every single portion of the brain, and you don’t know precisely what to do to achieve that, to actually make the measurements in the first place. There’s no particular reason to expect wild differences, given that brains, like the rest of us, have grown from very small numbers of cells in the embryo. That said, there are some regions of the brain, like, say, the cerebellum, where the architecture of the nerve cells is quite different. So you might expect, or anticipate, or hypothesise that the blood flow response is different. Usually that isn’t a problem for experimenting, because you’re comparing an experimental manipulation in that brain region – you’re not normally comparing what’s happening in one brain region to another.
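The several-second lag Professor Rees mentions is conventionally modelled with a haemodynamic response function (HRF); analysis packages such as SPM use a double-gamma shape. A rough sketch of that idea (the shape parameters below are common textbook defaults, not figures from the interview):

```python
import numpy as np
from scipy.stats import gamma

# A rough double-gamma haemodynamic response function: a peak a few
# seconds after neural activity, followed by a small undershoot.
t = np.arange(0, 30, 0.1)                      # seconds
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)
hrf /= hrf.max()

# A brief burst of neural activity at t = 0...
neural = np.zeros_like(t)
neural[0] = 1.0

# ...produces a BOLD response whose peak arrives seconds later.
bold = np.convolve(neural, hrf)[: len(t)]
peak_time = t[np.argmax(bold)]
print(f"BOLD peak ~{peak_time:.1f} s after the neural event")
```

The convolution makes the point concrete: the BOLD signal is a slow, blurred echo of neural events, which is the temporal limit he describes.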



You’re looking at what’s happening in a brain region under two different circumstances. So there are ways round that particular problem – it’s recognised in the field, basically.

Within marketing generally, with honourable exceptions, there’s often a surprising lack of knowledge about whether it works. So for example, I went to a conference where one company gave a very interesting presentation about how difficult they found it working out whether advertising made any difference to sales. They went to the trouble of essentially randomising half the customers in one town to receive adverts for one store, or not, then working with that store to determine whether that made a difference to sales. So a really good experiment – well controlled. The challenge they had was that they did observe a positive effect, but it was so small, and the variability in what people spent at this store – because it was, I presume, a general department store – was so big, that even with tens of thousands of customers they found it very difficult to say this was definitively a statistically significant difference. But that said… that difference, multiplied by the number of consumers, was a big difference to their bottom line. So very relevant. That’s not neuromarketing – they were just doing classic marketing. But I had a lot of sympathy for the challenge they faced in determining whether a large-scale intervention has a very small effect on a very heterogeneous population. It’s not easy.

Of course, recording EEG in the wild is always going to be worse than recording EEG in controlled laboratory conditions. Not so much because of the uncontrolled nature of the environment, but because of the electrical interference from everything around us that is electrical… as long as the signal is reflecting the EEG signals, fine.
It’s then just a pragmatic issue of, is the signal enough to provide something useful that can correlate, or provide insight, into some behaviour? If it is, it doesn’t matter that it’s not the best in the world.
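A quick power calculation shows why the store experiment Professor Rees recounts needed tens of thousands of customers. The numbers below are hypothetical, chosen only to illustrate a tiny effect against large spend variability (a standardised effect size of 0.02):

```python
from scipy.stats import norm

def required_n(effect_size, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided, two-sample test
    (standard normal-approximation formula)."""
    z_a = norm.ppf(1 - alpha / 2)   # critical value for alpha
    z_b = norm.ppf(power)           # value for desired power
    return 2 * ((z_a + z_b) / effect_size) ** 2

# Hypothetical: a tiny average lift relative to very variable
# customer spend, i.e. Cohen's d of about 0.02.
print(f"customers per group: {required_n(0.02):,.0f}")
```

With an effect that small, the formula demands groups in the tens of thousands, which matches the difficulty the company reported even at that scale.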



I was reading a Psychological Science paper showing that fMRI responses in a focus group predicted the effectiveness of a marketing campaign. I thought that was a good paper – and it was really interesting, because it was conceptually doing a really interesting thing: saying you can do a high-quality, laboratory-based fMRI study with a focus group, and it generalises to a large-scale phone survey. Again, on the conceptual issues, that is interesting because it says you don’t have to collect lots of data, badly, from lots of people and draw a conclusion from noisy data. You can actually do this, but under these kinds of circumstances. So if you could understand, for example, the circumstances under which that generalisation works, then that becomes a very useful tool. In the same way that presumably focus groups are useful tools in marketing generally, because the focus group can be generalised to the population.”





[1] Magnetoencephalography (MEG): a technique for mapping brain activity by recording the magnetic fields produced by naturally occurring electrical currents.

[2] BOLD – Blood Oxygen Level Dependent: a technique for mapping brain activity by measuring changes in blood flow to different regions.

The full version of this interview, conducted by Mindlab’s Tom Dixon, can be found at www.the-brainsell.com

