I move objects on a computer screen by thought alone. An object rotates when I want it to. I don’t even have to look at it directly. In life, sometimes the Force can be within you. You can be the Jedi Knight capable of astonishing feats. Your mind can move things on a screen, your brain waves will follow your command, and for a few seconds you can feel like the master of the universe. The trick is to pay attention to the task at hand—an impulse that drives the brain in ways scientists still don’t fully understand.

Seated on a chair in an unlit room at the Centre for Neuroscience (CNS) at the 109-year-old Indian Institute of Science in Bengaluru, I’m taking part in an experiment that studies attention, using real-time electroencephalography (EEG).

Attention is a word thrown around casually in conversation. But what scientists mean by attention is the brain’s ability to block everything else out and focus on what it deems important.

Understanding how attention works in the brain is important. It brings us one step closer to unravelling the mysteries of the mind, one of the frontiers of biological sciences. Of the human brain and its whimsies we still know precious little. Of attention, one of the many vital tasks that the brain performs, we are just starting to get an idea of its mechanisms.

Knowing how attention works can lead to better learning outcomes for students. With driver inattention one of the top causes of road accidents, it could save lives. With the Internet as the top productivity killer in offices, it might save money. And it can go beyond that. By learning how our brains function better, we can devise treatments for those with ADHD or Alzheimer’s disease.



Dutta, a research assistant (RA), measures my head with a tape to find an EEG cap that fits me. He shouts out for a large one. Abhijit M. Chinchani, another RA, hands me a consent form. He explains the experiment as I go through it. He cracks a joke, reassuring me that there’s nothing to worry about. I sign it.

Meanwhile, two RAs, Ankita Sengupta and Guruprasath, come back with the tape. They are trying to find the midpoint of my skull so that the EEG cap sits correctly.

They get a blue EEG cap, place it on my head, and ask me to strap it on. Both of them inject EEG gel onto my scalp through the many holes in the cap. Human skin impedes the electrical signal, so a conducting gel is used to improve the reading from the brain. I feel a sudden rush of cold from the gel. The research assistants begin to connect the electrodes to my scalp.

In the meantime, Guruprasath begins to explain the machine sitting on the desk before me. It has a chin rest, and green tinted glasses through which I can see the computer screen. This is an eye tracking machine. It’s equipped with an infrared camera that can record eye movements 500 to 1,250 times a second.

“It’s used in experiments with monkeys and other animals,” Guruprasath tells me. “They’ve designed it for that.”

Everyone in the room laughs.

“That’s not helping him,” Dutta quips.  

In the next room, Chinchani is checking my brain wave readings on a computer. “Can you close your eyes?” he shouts. “When you close your eyes, usually, activity in a certain frequency increases.”

He asks me to open my eyes next. Once they’re open, a different wave pattern should appear.

The test is successful.

“We are good to go,” he announces.



They begin the experiment. The first step is to capture the baseline signals from my brain. In the middle of the screen in front of me lies a “fixation crosshair”. Next to it, on both sides, are two discs, each flickering at a slightly different rate. Each disc is actually made of two concentric circles, each textured with a grating. The inner grating and outer grating are unaligned. In fact, they are orthogonal to each other.

Chinchani explains the task to me. First, I have to always look at the fixation cross. After a few seconds, an arrow will appear above the cross, pointing either to the left or right. This is the crucial bit. With my eyes fixed on the cross, and without moving them, I have to pay attention to the disc that the arrow points to.

Two research assistants fill EEG fluid on the scalp to improve signal reading from the brain. Photo: Sriram Vittalamurthy

Attention. Not look. “You should pay attention without looking at it,” he says.

An arrow suddenly appears above the central crosshair. It’s pointing to the right. Staring only at the fixation crosshair, without moving my eyes, I take the cue and pay attention to the right disc. The inner disc begins to rotate clockwise. It is moving solely because I am paying attention to it. It feels a little like telekinesis. The entire time, my hands are resting on a table, not touching the mouse or keyboard, or any sort of input device.

When I lose concentration, the disc rotates anticlockwise. It stays stationary for a second or two, and then shuffles around a bit. Then the discs disappear, leaving only the crosshair. Within two seconds, a message appears on the screen: 29.86 per cent completed.

This is my score: a measure of how well I paid attention to the task. The entire process takes 12 seconds.

How did I turn the disc by paying attention to it? A software program measures the strength of my brain waves associated with attention using the EEG. It then rotates the disc proportionally.

As the spikes of my brain waves increase, the experiment gives me feedback by rotating the inner disc. When I don’t attend to it, the disc rotates back. The goal is to get the grating on the inner disc to align with the outer one. At the end of the task, it gives me a score of how successful my attempt was. I can score 100 per cent if the gratings on the inner disc align perfectly with the outer one.

To determine which disc I’ve paid attention to, the two discs flicker at different frequencies. The strength of the brain wave reading at the corresponding frequency reveals the amount of attention paid.
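The mechanics of this frequency tagging and feedback loop can be sketched in a few lines of code. This is an illustrative toy, not the lab’s actual software: the sampling rate, the flicker frequencies (12 Hz and 15 Hz) and the gain are assumptions, and real EEG processing involves artefact rejection and far more careful spectral estimation.

```python
import numpy as np

FS = 500                       # assumed EEG sampling rate, in Hz
F_LEFT, F_RIGHT = 12.0, 15.0   # hypothetical flicker frequencies of the discs

def band_power(eeg, fs, freq, width=0.5):
    """Mean spectral power of the EEG in a narrow band around `freq` Hz."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    return spectrum[np.abs(freqs - freq) <= width].mean()

def feedback_step(eeg_window, cued_freq, other_freq, angle, gain=2.0):
    """Rotate the inner grating toward alignment in proportion to attention.

    `angle` is the misalignment of the gratings in degrees; 0 means aligned.
    """
    attended = band_power(eeg_window, FS, cued_freq)
    ignored = band_power(eeg_window, FS, other_freq)
    # Positive when the cued disc's flicker frequency dominates the EEG.
    drive = (attended - ignored) / (attended + ignored)
    return angle - gain * drive

def score(angle):
    """Per cent completed: 100 when aligned, 0 when fully orthogonal."""
    return 100.0 * (1.0 - min(abs(angle), 90.0) / 90.0)
```

In each cycle such a program would read a fresh window of EEG, call `feedback_step`, and redraw the inner grating at the new angle; the final `score` is what appears on the screen.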

On the next trial, the arrow points again to the right, so I pay attention to the disc on the right. The inner grating begins to rotate, first clockwise and then anticlockwise. This time I score 43.37 per cent.

The third time, I get a cue to the left. This time, I’m screwing up my eyes a little and clenching my jaws as I will the disc to rotate. The gratings almost align. Score: 92.39 per cent.

Sridharan Devarajan, an assistant professor at CNS who runs the cognition lab, says the subject may not be able to pay attention very well to the two gratings, initially. “But over time, based on neuro-feedback, they may learn how to control their attentional focus in a manner that allows them to attend well.”

His team is conducting these experiments because the neurofeedback technique can help them understand attention better. He says that they are trying to see how subjects perform attention tasks as the amplitude of the brain waves linked to attention changes.

Devarajan also wants to see whether a long training session, of at least several hours, makes a difference in the subject’s ability to attend. “There are obvious clinical implications for people with attention deficit, for instance,” he says.

My trials continue for a few minutes, with the scores dipping and rising. After the experiment, the research assistants remove the electrodes and the EEG cap. Chinchani hands me a towel and something yellow to remove the gel from my scalp.

It’s a bottle of Johnson’s baby shampoo.



The cognition lab at IISc provides a glimpse of the revolution that neuroscience has enabled in understanding the human brain. In the Sixties and Seventies, attention research consisted of simple experiments. Researchers gave subjects certain tasks and constructed theories around their responses. But there was no way to examine the underlying neural processes in the brain itself.

Since the Eighties, neuroscience has played an increasingly important role in attention research. Electrophysiology, which measures electrical signals from cells and tissues, benefited attention studies. “These early studies saw a clear difference in the brain when paying attention,” says Devarajan.

Centre for Neuroscience at the Indian Institute of Science, Bengaluru. Photo: Sriram Vittalamurthy

Scientists began using non-invasive EEG to record electrical signals from the human brain. They performed brain imaging using diffusion Magnetic Resonance Imaging (MRI) and functional MRI. With animals and birds, the experiments could be more invasive. They inserted electrodes deep inside the brain, and monitored signals at the level of neurons. Neurons, also called nerve cells, carry electrical impulses and are basic units of the nervous system.

Through electrophysiology, scientists have studied gamma oscillations, one of the components of brain waves. These oscillations have been shown to play a significant role in higher cognitive functions like attention, thinking, and abstraction. Many have speculated that gamma oscillations are linked to consciousness.

Another area of innovation in recent years is neurostimulation. This can be done in two ways: either by applying an electric current on the scalp, or by using magnetic fields. By altering the electrical activity of the brain, scientists can enhance or suppress parts of the brain. This helps in probing the role of different parts of the brain involved in attention.

Labs have also increasingly adopted a multi-disciplinary approach. Take the IISc’s cognition lab. It employs 20 people, including PhD students and research assistants. They come from diverse backgrounds like biology, physics, engineering, and computer science.

“The split is almost 50-50. Half of them are from a neuroscience or biology background, and the other half from an engineering background,” says Devarajan. “Those with a neuroscience or biology background are able to interact very richly with those who have more of an engineering background. And together they design experiments, analyse the data, and interpret the results.”

Sridharan Devarajan, assistant professor and Ramalingaswamy Fellow, Centre for Neuroscience, IISc. Photo: Sriram Vittalamurthy

In the cognition lab, programmers write the code that controls what subjects see on the screen, with the precision of a few milliseconds. Electrical engineers well versed in signal detection theory analyse the data from the experiments. Computer science students use deep neural networks to analyse very large datasets. Those with maths skills model and analyse data. Physicists bring applied mathematical techniques. And the neuroscience researchers put it all together to understand how the brain does these things. They are developing and refining hypotheses using model-based analyses and large-scale computer simulations.

And so, today, we have a picture of the brain that was not possible a few years back.



The origins of research in attention can be traced to the 19th-century psychologist and philosopher William James and his 1890 work The Principles of Psychology. He writes: “Attention… is the taking possession by the mind, in clear and vivid form, of one out of what may seem several simultaneously possible objects or trains of thought. …It implies withdrawal from some things in order to deal effectively with others”.

Attention allows you to select what is most important among many stimuli. It then processes that information in an efficient way so that you can make decisions.

Take the familiar driving example. There are many vehicles on the road, but you tend to pay attention only to the vehicles immediately around you. Attention is what allows you to ignore some vehicles and react to others. If a driver in the adjacent lane brakes, your brain ignores it. But if the driver in front of you brakes, you react immediately.

We cannot pay attention to all the information we receive from our senses all the time. If our brains processed every single car on the road, it would get overloaded with information. To prevent this, there exists a bottleneck in processing.

Attention comes in many varieties. Spatial attention is a form of attention that makes you focus on an area in your field of vision. For instance, the specific line or word that you are reading on this page makes sense only if you pay attention. Michael Posner, an eminent researcher, wrote in a paper he co-authored that spatial attention is like a “spotlight that enhances the efficiency of the detection of events within its beam”.

Against this is feature-based attention, which casts a wider net while directing attention to a particular feature. For instance, in a room full of red balls, you can spot the solitary blue ball immediately without directing your attention across the room. It pops out. This is feature-based attention in action.

Another way to classify attention is on the basis of how it is captured. Top-down attention is the voluntary deployment of attention based on some goal that the person has set: for instance, typing on a keyboard. Bottom-up attention is a process by which an event captures your attention automatically, like a loud noise in your surroundings.

Donald Broadbent, a British experimental psychologist, provided one of the first models for understanding attention in the 1950s. He performed an experiment based on his observations of air traffic controllers during World War II. He made subjects do a “dichotic listening task”.

In the experiment, the subject would be asked to pay attention to one ear, say, the left ear. A message would be played: “The horses galloped across the field.” At the same time, in his right ear, another message would be played: “President Lincoln often read by the light of the fire.”

He was trying to understand how the brain chooses from many sensory inputs. Most people were able to recall the message played in the attended ear. But they were unable to recall the message from the unattended ear.

To explain this, Broadbent proposed a filter model of attention. At any given time, all sensory input, including both messages, enters a buffer. Only one of these inputs is allowed to pass through a filter; only what passes through is processed further, for meaning. He hypothesised that the filter rejected the unattended message to prevent the brain from overloading.

One of the biggest criticisms of the filter model was that it could not account for the “cocktail party phenomenon”. If you are at a party, there are several conversations going on around you. Even though you are not paying attention to them, the mention of your name in those conversations catches your attention. It, somehow, gets through that filter.

To get around this, Anne Treisman, another English psychologist, proposed an attenuation model of attention, which retains Broadbent’s filter. But instead of eliminating unattended messages, it dials them down, like the volume control on a TV.

So, at the cocktail party, the conversations in the room are processed, but at a lower level. If someone speaks your name, it can still grab your attention, since a weakened input is still processed.
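The difference between the two models can be seen in a toy simulation. The salience numbers, the attenuation factor, and the awareness threshold below are invented purely for illustration; the point is only that an all-or-nothing filter blocks your name entirely, while attenuation lets a sufficiently salient input through.

```python
ATTENUATION = 0.2   # assumed volume knob for unattended channels (Treisman)
THRESHOLD = 0.5     # assumed strength needed to reach awareness

def broadbent_filter(channels, attended):
    """Broadbent: unattended channels are blocked entirely."""
    return {ch: s if ch == attended else 0.0 for ch, s in channels.items()}

def treisman_attenuator(channels, attended):
    """Treisman: unattended channels are turned down, not off."""
    return {ch: s if ch == attended else s * ATTENUATION
            for ch, s in channels.items()}

def reaches_awareness(processed):
    return sorted(ch for ch, s in processed.items() if s >= THRESHOLD)

# Your own name is a highly salient stimulus on the unattended channel.
channels = {"attended_ear": 0.6, "unattended_ear_your_name": 3.0}
print(reaches_awareness(broadbent_filter(channels, "attended_ear")))
# -> ['attended_ear']
print(reaches_awareness(treisman_attenuator(channels, "attended_ear")))
# -> ['attended_ear', 'unattended_ear_your_name']
```

Under Broadbent’s filter your name never reaches awareness; under Treisman’s attenuation, its high salience survives the dialling-down and crosses the threshold.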

Posner, an American psychologist and emeritus professor at the University of Oregon, also contributed significantly to understanding attention through innovative experiments. He formulated the Posner cueing task to test how well subjects can shift their attention. Posner also studied the role of attention in tasks like reading, number processing, and visual search. Variations of the Posner cueing task are used even today in neuroscience experiments.
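The structure of a Posner cueing trial is simple enough to sketch. In a typical version the cue is valid on most trials, say 80 per cent; that proportion, and the other details below, are illustrative assumptions rather than Posner’s exact parameters.

```python
import random

def posner_trials(n, validity=0.8, seed=42):
    """Generate cue/target pairs for a Posner cueing task.

    On a `validity` fraction of trials the target appears on the cued
    side (a valid trial); otherwise on the opposite side (invalid).
    Subjects respond faster on valid trials -- the cueing effect.
    """
    rng = random.Random(seed)
    opposite = {"left": "right", "right": "left"}
    trials = []
    for _ in range(n):
        cue = rng.choice(["left", "right"])
        valid = rng.random() < validity
        target = cue if valid else opposite[cue]
        trials.append({"cue": cue, "target": target, "valid": valid})
    return trials

trials = posner_trials(1000)
valid_share = sum(t["valid"] for t in trials) / len(trials)
```

Comparing reaction times on valid versus invalid trials is what reveals how quickly attention shifts toward the cued location.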

Over the years, several modifications to these theories have appeared. But since the days of Broadbent and Treisman, the biggest change in attention research has been the growing role of neuroscience.

Recent work has been successful in teasing apart different aspects of attention and in understanding the contributions of different parts of the brain, leading to a better picture of the neural basis of attention.

“That sort of structuralist approach to attention has been successful,” says Behrad Noudoost, an associate professor at the University of Utah. He has been studying neural mechanisms of visual attention since 2006.

“We have a much better idea about different parts of the brain that are involved in generating attention and various neural changes happening during attention. We are also studying attention in interaction with different behavioural aspects: its relationship with working memory, stimulus salience, eye movements, reward and motivation, etc.,” Noudoost says.



To get a taste of these new techniques, I meet Sanjna Banerjee, a PhD student, and Suhas Ganesh, a research assistant at CNS. They walk me through an experiment in which small regions of the brain are stimulated.

At one end of the room is a computer screen on a table, and a chair before it. The chair has two arms protruding from behind. Seated in the chair is Banerjee, who has agreed to play the subject. The arm protruding from the right is fixed to her chin, helping her keep her head stationary. The arm protruding on the left side holds a coil shaped in a figure of eight. This coil can be moved around and positioned on Banerjee’s head.

“This experiment is looking at the role of the fronto-parietal network in attention,” Ganesh explains.

The fronto-parietal network comprises the frontal and parietal cortex. The cortex is the outer surface of the brain; it contains the gray matter that gives us many of our higher cognitive functions. Many studies have established that the fronto-parietal network is important in attention. But we still do not know its specific role.

“Basically, we are trying to tease apart the components of attention like sensitivity and bias,” says Banerjee.

Sanjna Banerjee demonstrates the working of a Transcranial Magnetic Stimulation (TMS) experiment where specific areas of the brain are stimulated with a magnetic field. Photo: Sriram Vittalamurthy

In models explaining attention, sensitivity is how well you can perceive a stimulus. For instance, how well does the brain perceive a car in front of you? When a car moves into your lane, you observe it in greater detail.

Bias is how you weigh competing stimuli. Take the driving example again. While driving, you will be biased towards a car in your own lane, rather than one that is two lanes away.

Ganesh explains that they use a technique called Transcranial Magnetic Stimulation (TMS). It’s a non-invasive neurostimulation procedure that can enhance or suppress parts of the brain. It does this by producing an alternating magnetic field in a figure-of-eight coil placed over the subject’s skull, which induces a current in the brain tissue by electromagnetic induction.

Before TMS, electric currents were applied directly to the scalp. Besides the discomfort, the disadvantage was that this was not targeted. With TMS, stimulation can be highly localised. It can suppress or stimulate the brain to a depth of 2 to 4 centimetres under the skull.

To position the TMS coil correctly, an MRI of the subject’s head is fed into the software. An infrared camera tracks the subject’s head, allowing the coil to be placed precisely.

Ganesh demonstrates this on the computer screen, which is loaded with Banerjee’s MRI. As he moves the stimulating coil, a crosshair moves on the screen. Ganesh says that the important thing is the coil should be tangential to the skull. “Even if it misses slightly, the pulse will not go inside the skull — it will skip the skull and go out.”

In the experiment, a part of the brain is either suppressed or excited, and the subject has to perform an attention task. “And based on the behavioural response, we try to correlate it to changes in sensitivity or bias,” says Ganesh.

Ganesh and Banerjee tell me that the actual stimulation will not be done today. That’s because neurostimulation is only permitted in sanctioned experiments. Even the dosage of stimulation is regulated. “Ethics regulations stipulate that since we are directly interfering with brain activity, however transiently, we have to use the least intensity at which the experiment will work,” says Banerjee.



Devarajan exemplifies the multi-disciplinary approach that neuroscience and attention research have adopted. He enrolled at IIT Madras for a joint Masters programme in 1999. “I was very interested in physics, and people said I could do fluid physics. So I went into aerospace engineering,” he says.

But as he started working on projects early in his course, he realised that this was not what he wanted to do. He spoke to his guide, who suggested that he work with someone in biochemistry or biology. “At that time, it was a sort of a radical idea,” he says.

He did some work on molecular simulations: “RNA folding and things like that”. But even there, he says, he wasn’t particularly happy. “I, then, took a computational neuroscience course on artificial neural networks. It was offered by a professor in the electrical engineering department,” he says. “I began reading up on the brain and found it was quite interesting.”

Devarajan went on to do his BTech project with the professor and a clinician in Chennai, where he did a bit of MRI work. “That spurred my interest.”

After completing his course at IIT Madras, he applied for PhD programmes in neuroscience abroad. He got into Carnegie Mellon and Stanford universities. He chose the latter.

While doing his PhD, he began looking at human brain activity while listening to music, using functional MRI. “We had participants lying inside the scanner, passively listening to unfamiliar musical symphonies. We were looking to figure out something about what the brain does while listening to music,” he says.

One of the studies that came out of these experiments, co-authored by Devarajan with Vinod Menon and Daniel Levitin, received a lot of press attention. This study looked at how networks in the brain engaged and disengaged during transitions in the music. The transition was causing these networks to become active, like a cue to the listener to pay attention.

But they could not make further progress. “It was a little difficult to take that further because it (listening to music) is a very uncontrolled stimulus. Music can evoke many emotions; it can evoke memory; it’s very complex,” says Devarajan. At that moment, he thought he should narrow his field of view a little bit. “Where I have a little bit more control. That is when I, sort of, moved into studying attention itself,” he adds.



The second half of the twentieth century saw great progress in neuroscience, driven by advances in molecular biology, neuroimaging, and computational neuroscience. The brain is often called the most complex structure in the universe, and some headway has been made in understanding it. We know which parts of the brain are involved in certain tasks. We’ve identified which parts process vision and hearing, and the connections between the different regions.

But neuroscience is still grappling with many big questions. How do neurons encode information? How do we know what we know? Why do we sleep and dream? What makes us intelligent? How does the brain represent abstract ideas? The biggest questions relate to consciousness: why does the firing of neurons in different parts of the brain give rise to consciousness?

Within all this, attention plays a key role. Edward B. Titchener, a psychologist of the early 20th century, wrote that attention lies at the centre of the psychological enterprise. In 2007, Posner and Mary Rothbart wrote: “Attention has been one of the fastest growing of all fields within cognitive psychology and cognitive neuroscience.”

So what are the big questions driving attention research today?

One class of questions is about how the brain chooses a particular stimulus to attend to or to ignore. While driving, how do you decide to ignore the car in the adjacent lane, but pay attention to the car in front of you?

How does attention switching happen? If the car in the adjacent lane turns on its indicator, your attention switches to it.

Another class of questions concerns what happens after the brain has chosen a stimulus for processing. Research has shown that once attention kicks in, the attended information is enhanced.

“That means the brain is able to encode that information with more fidelity,” says Devarajan. Greater fidelity helps to process the information faster.

So while driving, you may be peripherally aware of what is going on in the adjacent lane. “But the moment a car in the adjacent lane turns on the indicator and starts moving into your lane, your system begins to represent this information—this new automobile—with a lot of fidelity, and with much clearer sensory representations,” Devarajan explains.

Experiments have verified this increase in fidelity in human brains, as well as in non-human primates, birds, rodents, etc.

Devarajan explains that if you look at the brain activity recordings associated with an unattended object, it will be suppressed or highly variable.

“On the other hand, when you analyse an object which is being highly attended to, neurons are found to be firing more, and they are also found to be firing in synchrony,” he adds.

The other key component to understanding attention models is gating.

While driving, only some of the information has to be routed to your decision circuits. You would give little weight to a car two lanes away; if you did weigh it heavily, and it suddenly braked, you might inadvertently brake too, and that might lead to an accident. But you would weigh heavily a car immediately ahead of you.

The exact sequence in which attention processing works is still not well understood.

Devarajan says that all these are open questions. “We are still grappling with very simple things about attention,” he says.

One difficulty is that there is still a lot of disagreement in the field about the operational definition of attention.

Behrad Noudoost says, “Attention, like any other aspect of the brain, is multifaceted, and has different aspects. Researchers have tended to approach the study of attention from various aspects based on what they think is most important.

“Based on the way that experiment emphasises each aspect, they will be answering them differently. And they will be studying attention differently. Someone might study saliency of stimuli. Another person is looking at how priorities are shaped. Another person is defining it in terms of how goals are affecting the processing of sensory information.”

This divide was visible at a meeting at Carnegie Mellon University in 2017, attended by many big names in the attention field. “There was a lot of disagreement on how to define attention,” says Devarajan. “It is still a hotly debated area because intuitively everybody understands what it is. But once you put it down on paper, it becomes a very difficult problem.”

Devarajan talks about a paper that he co-authored and is currently under review. “The reviewer loves our paper, he loves our model, our data,” he says. “But one of the griefs we are facing now is how you distinguish between attention and expectation—the reason why I’m paying attention to someone in front of me is because I expect that whatever he does is going to influence my activity,” he explains. “So do you club expectation into attention or do you treat it separately from attention? And that’s a big question.”

Other questions, relating both to his paper and to the field of attention research, include: Is the process of selecting a stimulus part of attention? Is the process that gates information also part of attention?

“That’s partly why we are in business”, says Devarajan.



Modern neuroscience has enabled the mapping of brain networks to specific roles in the attention process. There are two distinct attention networks of interest in the brain. The first is the forebrain network, comprising the prefrontal cortex and the posterior parietal cortex. This network has been implicated in attention mechanisms since the 1970s, but its exact roles are still to be worked out.

There is evidence that this network is responsible for sensitivity control — the mechanism by which objects become clearer when attended to.

Neuroimaging studies have identified the regions of the brain that activate during different types of attention tasks. Active regions in the brain are those that see an increase in neural activity—these regions are either sending or receiving information, or processing internally. Functional MRI can visualise these regions by looking at blood flow in the brain—as a part of the brain is in use, blood flow increases to that region. You can see the activated region graphically by colour-coding the strength of these blood flows.

Regions for spatial attention, feature-based attention, top-down attention and bottom-up attention have been identified. But the specific roles of these networks are still not well understood. For some, like top-down and bottom-up attention, it is not yet clear whether they are really two separate networks. “There are several things that are known, but the unknowns exceed the knowns”, says Devarajan. “And the unknown unknowns are even more”.

The second network related to attention is the midbrain network. Scientists hypothesise that this network is involved in the decisional component of attention. A major component of the midbrain network is the superior colliculus (SC), a layered structure deep in the midbrain. Its depth makes it harder to study in humans, as the signals are weak that far from the skull.

Fortunately, a similar structure is also found in birds, fishes, and other species. In these species, it is known as the optic tectum (OT).

“This is speculative, but the idea is that all species, at least at the level of fishes, must have some mechanism to filter out irrelevant stimuli. Like when they are following prey, or tracking a predator,” says Devarajan.

Important advances in studying the brain have come via invasive experiments in animals. Electrodes can be inserted beneath the skull to get electrical readings at the neuron level. And this allows scientists to do some sophisticated experiments to investigate the role of various parts of the brain.

As part of his post-doc at Stanford, Devarajan and his team performed a series of experiments on the optic tectum in birds to understand attention. They inserted electrodes in the optic tectum and recorded neural oscillations in barn owls. “To record these oscillations, we needed to open up our frequency band to lower frequencies. When we opened it up, we saw beautiful oscillations in the gamma band,” he says.

Gamma oscillations are associated with attention, but they are generally recorded on the outer surface of the brain, in the cortex, not in the midbrain. So the team was surprised to see them in a species so different from primates.

They also found that these gamma waves modulated with attention. In the first study, the barn owls were tranquilised and made to stare at a computer screen. A series of images was presented—a faint dark dot, a bright dot, and a looming dot that starts small but grows bigger. Depending on the image, the gamma oscillations from the optic tectum would get stronger or weaker. For the faint dark dot, the oscillations would be weak, growing stronger as the dots became brighter.
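“Opening up the frequency band” is, at its core, a filtering operation. A minimal numpy-only sketch of isolating the gamma band (taken here as 30–80 Hz, a common convention) from a recording might look like this; real pipelines use proper filters rather than this naive FFT masking, and the sampling rate and signal below are invented.

```python
import numpy as np

FS = 1000.0  # assumed sampling rate of the recording, in Hz

def bandpass_fft(signal, low, high, fs):
    """Naive band-pass: zero all Fourier components outside [low, high] Hz."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# Synthetic recording: a 4 Hz slow wave, a 50 Hz gamma-band component, noise.
rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / FS)
recording = (np.sin(2 * np.pi * 4 * t)
             + 0.5 * np.sin(2 * np.pi * 50 * t)
             + 0.1 * rng.standard_normal(t.size))

gamma = bandpass_fft(recording, 30.0, 80.0, FS)  # only the 50 Hz part survives
```

Plotting `gamma` against the raw `recording` shows the slow wave stripped away, leaving the gamma-band oscillation whose strength can then be tracked against the stimulus.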

“The oscillations become very prominent during the looming dot. It’s a very threatening stimulus,” he explains.

The team wanted to study these oscillations and the associated neural circuits even more closely. So, they decapitated the birds and removed their brains. Then, they made transverse slices of the optic tectum in a dish.

“What we discovered was that if you electrically stimulate that slice in a dish, even that shows gamma oscillations.” They were able to map the circuit details of these oscillations, and identify which neurons produced them. They could predict how the oscillations changed as they stimulated fibres in the slice. As these tests progressed, they narrowed in on neurons sitting next to the tectum. The tegmental nuclei appeared to be a key component in generating these oscillations.

So they took these neurons out, and repeated the experiment. There was a dramatic drop in the gamma oscillations. “There is a vestige of the oscillations in the tectum. But the very large scale oscillations that you saw went away,” says Devarajan. This nucleus was crucial for broadcasting these oscillations across the tectum.

The next stage was to see how attention processing in a live bird was affected without these neurons. The experiment involved training a bird on an attention task, then removing the neurons and measuring the bird’s performance on the task again.

For the experiment, five adult female white leghorn chickens were used. The chickens were around a year old when the experiments began, and the work continued until they turned four.

For the training, the birds were placed in an isolation chamber with a booth containing a touch-sensitive computer screen. The bird was kept inside a box with its head protruding, a few inches from the screen.

“They learned to peck on the touch-sensitive screen,” says Devarajan. The team used motion capture to track the birds’ movements. Once this was done, the training began.

The birds were trained to do an orientation discrimination task. Essentially, the birds had to respond based on the stripes of a grating shown on the screen. The stripes were either horizontal or vertical.

A fixation cross appears first on the screen, quickly followed by the grating. A few seconds later, a box appears at the bottom of the screen.

The task for the bird was as follows: if the stripes are horizontal, peck the box at the bottom of the screen; if they are vertical, peck the cross in the middle of the screen.
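For clarity, the task’s decision rule can be sketched in a few lines of code. This is a hypothetical illustration: the function and the labels are mine, not from the study’s software.

```python
# Illustrative sketch of the orientation-discrimination rule.
# The labels "box" and "cross" are assumptions, not the lab's code.

def correct_response(orientation):
    """Where the bird must peck for a given grating orientation."""
    if orientation == "horizontal":
        return "box"    # peck the box at the bottom of the screen
    if orientation == "vertical":
        return "cross"  # peck the cross in the middle of the screen
    raise ValueError("unknown orientation: " + orientation)
```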

How do you train a bird to perform such a sophisticated task?

“The process took several months,” says Devarajan. Essentially, the training worked by giving the bird a simple version of the task and rewarding it when it performed the task correctly. At each stage, the task was repeated with added complexity.

Initially, the bird learns to peck on the centre, and then to peck on the horizontal grating. If it pecks on the grating, it is rewarded with access to feed for a few seconds.

In the next phase, the box appears at the bottom of the screen. The bird would continue pecking on the grating. But this time, the bird isn’t rewarded. The reward is offered only when the bird pecks on the box. “So it learns over a few trials that now it has to actually peck on the box,” says Devarajan.

In the final stage, the bird learns the intended task: peck on the box if the stripes are horizontal, or peck on the cross if they are vertical. “It’s almost comparable to what humans perform,” he says.
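The staged training described above is a form of behavioural shaping, and its reward logic can be sketched as follows. This is a minimal illustration; the stage structure and names are assumptions, not the lab’s actual protocol.

```python
# Shaping: each stage rewards a slightly more complex behaviour.
# Stage numbers and response labels are illustrative assumptions.

def reward(stage, response, orientation=None):
    """Return True if the response earns a few seconds at the feeder."""
    if stage == 0:
        # Early training: pecking the grating is rewarded.
        return response == "grating"
    if stage == 1:
        # The box has appeared: only pecking the box is rewarded.
        return response == "box"
    # Final stage: the correct peck depends on the grating's orientation.
    expected = "box" if orientation == "horizontal" else "cross"
    return response == expected
```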

After the training, the neurons associated with the optic tectum were removed, and the tests were performed again. As the team had theorised, this affected the birds’ performance on the task.

“The effects were far more devastating than anything associated with attention,” he says. What they saw was that the birds were unable to discriminate the orientation. They could not process which one was horizontal, and which vertical.

Devarajan explains that another part of the brain processes orientation and this is not affected by removing the neurons. “Even after removing the neurons, the orientation is known. But this information is unable to reach the bird’s decision circuits.”

“Due to the lack of neurons associated with the optic tectum, there is a gating deficit,” he says. “The bird is unable to make decisions about this object. The orientation information is not getting access to those decision circuits.”

All species have to let important information in, and filter out irrelevant information. This is as true of fish as it is of humans. “One hypothesis, given that the superior colliculus-like structure is preserved across species, is that this is what helps them keep track of some things while ignoring others.”

The role of the optic tectum and the superior colliculus is now being investigated across species, including humans. Devarajan and others have looked at data from four earlier studies on non-human primates, and have proposed a new model showing how the superior colliculus performs the same gating function as the OT.

In humans, studying it is a bit more challenging, since the midbrain is hard to access. A few experiments are in progress to nail down the role of the superior colliculus in the brain.


“We are doing psychophysics tasks, we are doing imaging, neurostimulation. And we have got some results. But they are still preliminary, and too early to share,” he says.



The deep investigations of attention mechanisms have other benefits. There is an effort across the globe to analyse datasets of diseased brains and pick out patterns to classify them.

In collaboration with the National Institute of Mental Health and Neurosciences (NIMHANS), CNS is conducting a long-term study looking at brain scans over several years. “We check to see whether there is a possibility for predicting when someone will suddenly develop steep cognitive decline which may be associated with diseases like Alzheimer’s,” says Devarajan. The study aims to see if it’s possible to predict, three to four years in advance, whether someone is susceptible to Alzheimer’s.

Neurofeedback is being investigated by many labs as a way to improve attentional capabilities. One possibility is that people with attention disorders, such as high-functioning autism or ADHD, can use neurofeedback to learn to modulate their brain activity.

As a consequence, this might create changes in the brain that result in long-lasting improvements in attention. “Neurofeedback could allow you to partly compensate using these neural strategies,” Devarajan says.

Another exciting area of interest is implementing attention-like algorithms in machine learning and neural networks. In many tasks, like visual search, our brains are highly efficient at discarding irrelevant information and completing tasks in a short time.

In 2017, at the Neural Information Processing Systems conference in California, a team of researchers from Google presented a paper titled “Attention Is All You Need”. The paper showed how deep neural networks implementing an attention-like mechanism can perform certain tasks remarkably well.

The team designed a new network architecture, called the Transformer, based solely on attention mechanisms. Essentially, these networks possess an attention-like mechanism that allows them to pay attention only to relevant features.

The Transformer was trained on English-to-German and English-to-French translation. On the former, it was the best-performing model; on the latter, it also scored very highly.

“We are excited about the future of attention-based models and plan to apply them to other tasks,” the authors write. “We plan to ... investigate local, restricted attention mechanisms to efficiently handle large inputs and outputs such as images, audio and video.”
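At the heart of the Transformer is scaled dot-product attention: each query is compared against a set of keys, and the resulting match scores become weights over the corresponding values. A minimal NumPy sketch of that mechanism, as an illustration of the published equations rather than Google’s code:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Weight each value by how well its key matches the query,
    as in "Attention Is All You Need" (softmax(QK^T / sqrt(d_k)) V)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # query-key similarity
    # Numerically stable softmax over the keys.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V               # weighted mix of the values

# Three items; the query closely matches the first key, so the
# output is dominated by the first value.
Q = np.array([[1.0, 0.0]])
K = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
V = np.array([[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]])
out = scaled_dot_product_attention(Q, K, V)
```

The “relevant features” the article mentions are exactly the keys that score highly against the query; irrelevant ones receive near-zero weight and barely influence the output.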

One area of attention research that could directly benefit all of us is understanding why the bottleneck exists in the first place. Devarajan asks: “Why, despite all this massive neural machinery, do we have this bottleneck that allows us to process only some amount of information at any time?”

If we understand the bottleneck, maybe we can tweak it, improve it, or even eliminate it. There may be reasons why that might not be possible, like energy costs, or the need for a larger brain. “Undoing the bottleneck will obviously require some kind of interventional approach,” says Devarajan.

That intervention could be something as simple as transcranial magnetic stimulation (TMS), where “if you stimulate a particular area just at the right time, maybe you would allow that gate to open, and allow more information to come.”