Update 2013
http://rintintin.colorado.edu/~vancecd/phil1000/Nagel.pdf (“What it is like to be a bat”)
CLAIRE CHESKIN used to live in a murky world of grey, her damaged eyes only seeing large objects if they were right next to her. She could detect the outlines of people but not their expressions, and could just about make out the silhouettes of buildings, but no details. Looking into the distance? Forget it.
Nowadays things are looking distinctly brighter for Cheskin. Using a device called vOICe, which translates visual images into "soundscapes", she has trained her brain to "see through her ears". When travelling, the device helps her identify points of interest; at home she uses it to find things she has put down, like coffee cups. "I've sailed across the English Channel and across the North Sea, sometimes using the vOICe to spot landmarks," she says. "The lights on the land were faint but the vOICe could pick them up."
As if the signposting of objects wasn't impressive and useful enough, some long-term users of the device like Cheskin eventually report complete images somewhat akin to normal sight, thanks to a long-term rewiring of their brains. Sometimes these changes are so profound that they alter users' perceptions even when they aren't using the device. As such, the vOICe (the "OIC" standing for "Oh, I See") is now proving invaluable as a research tool, providing insights into the brain's mind-boggling capacity for adaptation.
The idea of hijacking another sense to replace lost vision has a long history. One of the first "sensory substitution" devices was developed in 1969 by neuroscientist Paul Bach-y-Rita. He rigged up a television camera to a dentist's chair, on which was a 20-by-20 array of stimulators that translated images into tactile signals by vibrating against the participant's back. Despite the crudeness of the set-up, it allowed blind participants to detect the presence of horizontal, vertical and diagonal lines, while skilled users could even associate the physical sensations with faces and common objects.
By the time he died in 2006, Bach-y-Rita had developed more sophisticated devices which translated the camera's images into electrical pulses delivered by a postage-stamp-sized array of electrodes sitting on the tongue. Users found, after some practice, that these pulses gave them a sense of depth and "openness", a feeling that there was "something out there".
This vague feeling of space, which we experience as part of normal sight, suggests the brain may be handling the information as if it had originated from the eyes. Would it be possible to get even closer to normal vision - perhaps even producing vivid and detailed images - by feeding in information using something other than tactile stimulation? To find out, physicist and inventor Peter Meijer, based in Eindhoven, the Netherlands, turned to hearing. The ears do not detect as much information as the eyes, but their capacity is nevertheless much greater than the skin's.
Meijer thought up the vOICe in 1982, though it took until 1991 for him to design and build a desktop prototype that would translate video into audio. By 1998 he had developed a portable, if still bulky, version using a webcam, notebook PC and stereo headphones, which allowed users to experiment with the device in daily life. The device is now more discreet, consisting of "spy" sunglasses which conceal a tiny camera connected to a netbook PC, and a pair of headphones. Alternatively, some users download the software to their smartphone, and its built-in camera acts as their eyes.
Every second the camera scans a scene from left to right. Software then converts the images into soundscapes transmitted to the headphones at a rate of roughly one per second. Visual information from objects to the wearer's left and right is fed into the left and right ear respectively. Bright objects are louder, and frequency denotes whether an object is high up or low down in the visual field.
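That left-to-right, pitch-for-height, loudness-for-brightness mapping can be sketched in a few lines of Python. This is only an illustrative toy, not the vOICe's actual algorithm: the function name, frequency range and sample rate are all assumptions, and a real implementation would also pan each column between the left and right ear.

```python
import math

def image_to_soundscape(image, duration=1.0, f_min=500.0, f_max=5000.0,
                        sample_rate=16000):
    """Toy vOICe-style encoder: scan the image's columns left to right
    over `duration` seconds; each row contributes a sine tone whose
    frequency rises with height in the image, and each pixel's
    brightness (0..1) sets that tone's loudness."""
    rows, cols = len(image), len(image[0])
    samples_per_col = int(sample_rate * duration / cols)
    out = []
    for c in range(cols):                 # left-to-right scan
        for n in range(samples_per_col):
            t = n / sample_rate
            s = 0.0
            for r in range(rows):
                # top row (r == 0) gets the highest pitch
                freq = f_max - (f_max - f_min) * r / max(rows - 1, 1)
                s += image[r][c] * math.sin(2 * math.pi * freq * t)
            out.append(s / rows)          # keep amplitude within [-1, 1]
    return out
```

A 2-by-2 "image" then yields one second of mono audio; stereo panning by column position is left out for brevity.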
At first the soundscapes are reminiscent of the whirring, bleeping and hooting sound effects that would accompany an alien melting the brain of a human in a 1960s science-fiction movie. But by feeling objects first, to learn to associate the accompanying sounds with their shapes, and by discovering how an object's soundscape varies as they move, users find the experience becomes distinctly "vision-like".
Pat Fletcher of Buffalo, New York, lost her sight at the age of 21 and had just a pinpoint of perception in her left eye, through which she could sometimes see red or green, before she started using the vOICe system in 2000. In the early stages, the pictures in her mind's eye were like "line drawings" and "simple holographic images", but after a decade of practice, she now sees complete images with depth and texture. "It is like looking at an old black-and-white movie from the early 30s or 40s. I can see the tree from top to bottom, and the cracked sidewalk that runs alongside the tree," she says.
"What's exciting to me," says Michael Proulx, a cognitive psychologist at Queen Mary, University of London, who has been using the vOICe for his own research, "is that not only can you use this device in a very deliberate fashion where you can think, 'okay, this sound corresponds with this object', but it is also possible, through extensive use, to go beyond that and actually have some sort of direct, qualitative experience that is similar to the vision they used to experience."
The US is now funding the first controlled study to look at the benefits of the vOICe system while trying to find the optimal training protocol. "Some of the participants in the current trial have learned more in months than [Fletcher] learned in years of using the vOICe," says Meijer. The study, which will involve around 10 participants, may even answer the long-standing question of whether congenitally blind adults can benefit in the same way as Cheskin and Fletcher.
Intended to last about a year, the trial is being run by Luis Goncalves and Enrico Di Bernardo of MetaModal, a company that tests sensory substitution devices. The first two participants are a 66-year-old who has been blind from birth but has slight light perception, and a 40-year-old who lost his sight due to diabetes. Twice a week they attend two-hour training sessions, including tasks such as finding a target in a large room and making their way around an obstacle course. "They are empowered by this," says Goncalves, adding that the 66-year-old "can now go to a restaurant and seat himself without asking for assistance and is teaching his wife, who is also blind, how to use the vOICe".
Not everyone is quite so impressed. For example, Kevin O'Regan, a psychologist at Descartes University in Paris, France, points out that the system needs time to scan an image and so lacks the immediacy of vision. "I think it's possible with resources and time to make something much better than the vOICe," he says.
Nevertheless, vOICe is still of great interest to O'Regan and other researchers, who want to know what these people are experiencing. Are they really seeing? And if so, how?
The traditional view is that the brain takes data from the different sensory organs - in the case of sight, the retina - and, for each sense, processes it in separate regions to create a picture of the outside world. But that cannot explain how someone can have a visual experience from purely auditory information.
As such, O'Regan says our definition of what it means to see needs to change. Our senses, he argues, are defined by the way the incoming information changes as we interact with the environment. If the information obeys the laws of perspective as you move forward and backward, we will experience it as "seeing" - no matter how the information is being delivered. If you have a device that preserves these laws, then you should be able to see through your ears or your skin, he says.
If O'Regan is on the right track, we will have to reconsider long-held ideas of how the brain is organised to deal with incoming information. Traditionally, the brain is considered to be highly modular, with the occipital, temporal and parietal cortices handling inputs from the eyes, ears and from the skin and deep tissues, respectively. According to O'Regan, however, these regions may actually deal with certain types of information - shape or texture, for example - irrespective of which sense it comes from.
There is some evidence to support this view. In 2002, neuroscientist Amir Amedi, now at the Hebrew University of Jerusalem, Israel, published research showing that a specific part of the occipital cortex was activated by touch as well as visual information. He named it the lateral occipital tactile-visual (LOtv) region. Amedi and colleagues hypothesised that the area lit up because the occipital cortex is oriented around particular tasks - in this case, 3D-object recognition - rather than a single sense.
How does this tally with the vOICe experience? Amedi recently collaborated with Alvaro Pascual-Leone, director of the Berenson-Allen Center for Noninvasive Brain Stimulation in Boston, Massachusetts, to find out whether the vOICe system activates the LOtv when users perceive objects through soundscapes. They asked 12 people, including Fletcher, to examine certain objects such as a seashell, a bottle and a rubber spider using touch and the vOICe system. They were then asked to recognise the same objects using only the soundscapes delivered by vOICe. For comparison, they were also asked to identify objects based on a characteristic sound, such as the jingling of a set of keys.
During the trials, fMRI brain scans showed that the LOtv region was active when expert users like Fletcher were decoding the vOICe soundscapes, but significantly less active when they just heard characteristic sounds. For those using the vOICe for the first time, the LOtv region remained inactive, again suggesting that this area is important for the recognition of 3D objects regardless of which sense produces the information.
Further evidence that this region is vital for decoding soundscapes came two years later, in 2009, from a study using repetitive transcranial magnetic stimulation (rTMS) - short bursts of a magnetic field that temporarily shut down the LOtv of subjects, including Fletcher. "It felt like someone tapping on the back of my head," she says. As the rTMS progressed, her vision with the vOICe deteriorated, and the "world started getting darker, like someone slowly turning down the lights".
When Fletcher attempted to use the vOICe after undergoing rTMS, the various tests no longer made sense. "It was total confusion in my brain... I couldn't see anything." The result was terrifying: "I wanted to cry because I thought they broke my sight - it was like a hood over my head." The rTMS had a similar impact on other vOICe users.
"It turns upside down the way we think about the brain," says Pascual-Leone. Most of us think of our eyes as being like cameras that capture whatever is in front of them and transmit it directly to the brain, he says. But perhaps the brain is just looking for certain kinds of information and will sift through the inputs to find the best match, regardless of which sense it comes from.
The question remains of how the vOICe users' brains reconfigured the LOtv region to deal with the new source of information. Amedi's preliminary fMRI scans show that in the early stages of training with vOICe, the auditory cortex works hard to decode the soundscape, but after about 10 to 15 hours of training the information finds its way to the primary visual cortex, and then to the LOtv region, which becomes active. Around this time the individuals also become more adept at recognising objects with vOICe. "The brain is doing a quick transition and using connections that are already there," says Amedi. With further practice, the brain probably builds new connections too, he adds.
Eventually, such neural changes may mean that everyday sounds spontaneously trigger visual sensations, as Cheskin has experienced for herself. "The shape depends on the noise," she says. "There was kind of a spiky shape this morning when my pet cockatiel was shrieking, and [the warning beeps of] a reversing lorry produce little rectangles." Only loud noises trigger the sensations and, intriguingly, she perceives the shape before the sound that sparked it.
This phenomenon can be considered a type of synaesthesia, in which one sensation automatically triggers another, unrelated feeling. Some individuals, for example, associate numbers or letters with a particular colour: "R" may be seen as red while "P" is yellow. For others, certain sounds trigger the perception of shapes and colours, much as Cheskin has experienced.
Most synaesthetes first report such experiences in early childhood, and it is very rare for an adult to spontaneously develop synaesthesia, says Jamie Ward, a psychologist at the University of Sussex in Brighton, UK. He recently published a chronological log of Cheskin's and Fletcher's experiences, including the synaesthetic ones.
This capacity to rewire our sensory processing may even boost the learning abilities of sighted users, suggests Pascual-Leone. It might be possible to extract supplementary information by feeding a lot of different sensory inputs to the same brain areas. Art connoisseurs could learn to associate the style of a master's hand with a characteristic sound, and this may help them distinguish genuine work from a fake. Alternatively, such rewiring could compensate for low light levels by delivering visual information through our ears. "That's science fiction. But it's interesting science fiction," says Pascual-Leone.
For neuroscientists like Pascual-Leone and Amedi, the research is proof that the ability to learn as we grow old does not disappear. Pascual-Leone says the notion of a critical period during which the brain must be exposed to particular knowledge or never learn it appears "not universally true". "It gives us a reason for hope," he says, "and implies that we should be able to help people adjust to sensory losses with this type of substitution. The capacity to recover function could be much greater than realised."
Three-card-monte and Bats
11 May 2008
Will Bats go for Three-card-monte?
And how!
The poor suckers will be hooked from moment go.
The better the sensory equipment, the bigger the sucker. (AW)
Their sensorium can see not only the position of each card, but also the direction it is moved in. But, being of a mammalian order, they have a left-hand and right-hand brain memory stack with a limited capacity. The oldest item gets pushed down and out.
The brain also moves data about the left-hand stack to the right-hand stack to keep track of the object before it is painted on the sensorium. And vice versa, of course.
In humans, it has been shown by fMRI that if the switch is done faster than the left-right transfer rate, the memory at the bottom of the stack is lost. This is not an illusion. The hand is not faster than the eye. The hand is faster than a rather bureaucratic transfer of data in the brain.
The eyes see perfectly well, but the memory system loses track.
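The limited-capacity stack described above behaves like a bounded double-ended queue: pushing a new item beyond capacity silently drops the oldest one. A minimal sketch in Python (the capacity of 4 is an arbitrary illustration, not a measured figure):

```python
from collections import deque

class TrackingStack:
    """Toy model of a limited-capacity memory stack: the newest item
    goes on top, and pushing beyond capacity drops the oldest item."""
    def __init__(self, capacity=4):
        self.items = deque(maxlen=capacity)

    def push(self, obj):
        self.items.append(obj)   # oldest item falls off the bottom

    def tracked(self):
        return list(self.items)  # what the system still keeps track of

stack = TrackingStack(capacity=4)
for card in ["A", "B", "C", "D", "E"]:
    stack.push(card)
# after five pushes, the first card has been pushed down and out
```

Once items arrive faster than the stack can be serviced, the earliest one is simply gone: not an illusion, just bounded memory.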
What is amazing here is the low frequency. The visual system keeps track at about beta-band frequency (i.e. about 16 hertz). Hand switching is about 4 hertz. It seems to take at least 4 cycles to move data from the left hand of the brain to the right hand. With error-correction included, this seems about right.
(Systems without error-correction: see amygdala systems.)
So, for something not deemed critical to survival, the mammalian system gives a whole whopping quarter of a second of leeway.
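The arithmetic behind these figures is easy to check: at roughly 16 hertz of visual tracking and 4 hertz of hand switching, one switch spans four visual cycles, and each switch period gives a quarter-second window. The rates below are the post's own round numbers, not measured values.

```python
visual_rate_hz = 16.0  # approximate beta-band visual tracking rate (from the text)
switch_rate_hz = 4.0   # approximate hand-switching rate (from the text)

# how many visual cycles fit into a single hand switch
cycles_per_switch = visual_rate_hz / switch_rate_hz

# the leeway per switch: one full period at the switching rate
leeway_seconds = 1.0 / switch_rate_hz
```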
Like humans, bats, whales, dolphins, etc. simply will not believe that they can lose track of an object moved faster than 4 times a second. They will try over and over again. It does not matter how good their sensory system is.
Another definition of being a mammalian: susceptibility to Three-card-monte.
You can hook humans, bats, dolphins and whales on the internal endorphins.
Then lead them into some serious gambling.
Remember, the rewards are internal endorphin releases. Creatures in virtual isolation are extremely vulnerable. Suckers.
Then, you can teach them poker or mah-jong.
Dolphins and whales should be pretty good at mah-jong or go, but bats should be whizzes at poker.
The problem in training humans or mammalians is usually what reward makes sense to them. The endorphin reward of gambling is mammalian-specific and general.
You can hook any mammal on gambling.
Then uplift them. For their own good, of course.
The unending route of effort for relative advantages.
Do birds gamble?
We know that all mammals have a primitive neural knot that releases endorphins on gambling. This is a major reason for their success and ties in with boredom.
For dinosaurs and birds we can look to the ripple effects after the KT boundary.
Dinosaur descendants quickly repopulated (large bird-like raptors), but speciation into the empty ecological niches (the small ones) was faster in mammals because of their random "gambling" nerve-complexes. Taking a chance paid off when the pay-off was biased in the positive direction.
This has been the general experience of mammalians since then.
From this, we can infer that birds (and the dinosaurs before them) did not gamble.
If the bias does not favour you, you are doomed anyway. You have to bet as if the bias favours you.
You have to bet.
And try to figure out a way to change the bias in your favour.
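The betting argument can be put in expected-value terms: with an even-money bet, a win probability above one half gives a positive expected gain per bet, and below one half a negative one. The probabilities and stakes below are arbitrary illustrations of the bias, not figures from the text.

```python
def expected_gain(p_win, payoff=1.0, stake=1.0):
    """Expected change in resources per bet: win `payoff` with
    probability p_win, otherwise lose `stake`."""
    return p_win * payoff - (1.0 - p_win) * stake

favourable = expected_gain(0.6)    # bias in your favour: edge per bet
unfavourable = expected_gain(0.4)  # bias against you: loss per bet
```

If the bias is against you the expectation is negative and, over enough bets, ruin follows; if it favours you, the edge compounds. Hence the post's point: you have to bet as if the bias favours you, and work on changing the bias itself.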
This is why you are a mammal and not a bird-dinosaur.
You can go bankrupt betting this way.