Transcranial Magnetic Stimulation – from origin to treatment tool

Where it all started, from magnetism to electricity.
Let’s start with a brief story. Michael Faraday was raised in a poor family; his father worked as a blacksmith. At fourteen, Faraday acquired an apprenticeship with a bookbinder and bookseller. During those years he read many of the books that passed through the shop and developed an interest in science, with a focus on electricity. He soon began taking notes on what he read and conducting scientific experiments of his own. We now regard Faraday as one of the key figures in the science of electricity; Albert Einstein reportedly kept a picture of Faraday on his study wall.

Faraday discovered that we can create electricity through magnetism; this is where Faraday’s law of induction comes in. The principle of this law is that a changing current running through loops of metal wire (such as a magnetic coil) creates a changing magnetic field, which in turn induces an electric field in nearby conductors. This principle is the working mechanism behind, among other things, the induction cooker, the pickup elements in electric guitars, and Transcranial Magnetic Stimulation (TMS). For TMS, the coil is placed over the scalp, so that the generated electromagnetic pulse travels through the skull into the brain. There, the electric field created by the pulse can trigger action potentials in neurons. To put this into context: say you want to move your index finger. A specific area in your motor cortex is responsible for this movement when you make it voluntarily. We can position the TMS coil over this area and, by firing a strong enough pulse, create an action potential that makes your index finger tap once.

From cool finger tapping toy to treatment
Alright, very neat: we can make you tap your finger by activating the coil. But why is TMS now seen as such a promising treatment tool for various disorders? Because research has shown that applying TMS repetitively with set parameters can induce beneficial, long-lasting effects on the brain. First, though, back to the finger movement, since it gives the TMS user direct feedback on each pulse. If the pulse intensity is too low, there is no finger movement. The higher the intensity, the bigger the movement, and the more likely that neighbouring areas are activated as well, causing movements of, for example, other fingers, the wrist, or the elbow. What we want to find is the motor threshold: the intensity at which a pulse causes a finger movement 50% of the time. From this threshold we can then calculate the intensity to use for treatment. This matters because in treatment we often stimulate a brain area that does not give a directly observable response (such as more frontally located areas related to cognition).
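The threshold hunt just described is usually done adaptively: lower the intensity after a pulse that evokes a movement, raise it after one that doesn’t, and the intensity settles around the value that works about 50% of the time. The treatment intensity is then expressed relative to this motor threshold. A minimal sketch in Python; the starting intensity, step size, and 120% treatment factor are illustrative assumptions, not clinical parameters:

```python
def estimate_motor_threshold(evokes_movement, start=50, step=2, reversals=10):
    """Simple up-down staircase: converges on the intensity (% of maximum
    stimulator output) that evokes a finger movement ~50% of the time.

    evokes_movement(intensity) -> bool is the observed response to one pulse.
    """
    intensity = start
    last_response = None
    reversal_intensities = []
    while len(reversal_intensities) < reversals:
        response = evokes_movement(intensity)
        if last_response is not None and response != last_response:
            reversal_intensities.append(intensity)  # direction flipped here
        # lower the intensity after a hit, raise it after a miss
        intensity += -step if response else step
        last_response = response
    # average the intensities at which the staircase reversed direction
    return sum(reversal_intensities) / len(reversal_intensities)


def treatment_intensity(motor_threshold, factor=1.2):
    """Treatment intensity as a fraction of the motor threshold,
    e.g. 120% of threshold (an illustrative value)."""
    return motor_threshold * factor
```

With a deterministic test subject whose true threshold is 55% of stimulator output (`lambda i: i >= 55`), the staircase oscillates around 55 and returns it as the estimate.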

While single pulses above threshold cause neuronal activation in the brain, delivering many pulses in a short time has an effect that depends on the parameters. For example, 10 Hertz (Hz) stimulation (meaning 10 pulses per second) has an excitatory effect on the targeted area, while 1 Hz stimulation has an inhibitory effect. Delivering multiple pulses like this is called repetitive TMS (rTMS). The effects generally increase with more pulses delivered and more sessions conducted. Repeated excitatory or inhibitory rTMS can change the plasticity of the brain and create a lasting effect.
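In practice an rTMS session is specified by a handful of parameters: pulse frequency, train duration, the pause between trains, and the number of trains. A small sketch of the arithmetic, using numbers that merely resemble a commonly cited 10 Hz depression protocol (treat them as illustrative, not prescriptive):

```python
def session_summary(freq_hz, train_s, iti_s, n_trains):
    """Total pulse count and duration of one rTMS session.

    freq_hz  : pulses per second within a train
    train_s  : duration of one pulse train, in seconds
    iti_s    : inter-train interval (pause between trains), in seconds
    n_trains : number of trains in the session
    """
    pulses = round(freq_hz * train_s) * n_trains
    # the last train is not followed by a pause
    duration_s = n_trains * train_s + (n_trains - 1) * iti_s
    return pulses, duration_s / 60.0  # total pulses, session length in minutes


# Illustrative parameters: 10 Hz, 4 s trains, 26 s pauses, 75 trains
pulses, minutes = session_summary(10, 4, 26, 75)  # 3000 pulses, ~37 minutes
```

This also makes clear why sessions take a while: most of the chair time is the pauses between trains, not the stimulation itself.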

In the Netherlands, rTMS is covered by health insurance as a treatment for depression if two other treatments (therapy or medication) haven’t had enough effect. To give an indication of the effectiveness of rTMS for depression: a study by Carpenter and colleagues (2012) looked at patients with Major Depressive Disorder whose symptoms persisted despite a pharmacological antidepressant intervention. Of these patients, 41.5-56.4% responded to rTMS (responding means a significant reduction in depression symptoms) and 26.5-28.7% reached remission (meaning they no longer met the criteria for the diagnosis). These response and remission rates were based on patient reports; the rates reported by their clinicians were higher. The effects of treatment were relatively stable over a 12-month period. I think this study gives a clear example of how impactful rTMS can be: achieving these results in a group that did not respond to antidepressant treatment is more than encouraging.

While depression is currently the only disorder for which rTMS treatment is covered by health insurance, its potential for treating other disorders is considerable. There is ongoing research on using it to treat obsessive-compulsive disorder, craving symptoms in addiction, and PTSD, and on using it after stroke to improve recovery; many of the findings are promising. I think it’s a matter of finding the right parameters and predictors of treatment efficacy, after which it can be applied to a much wider variety of symptoms. If you’re thinking “this all sounds wonderful, but electricity reminds me of electroconvulsive therapy (ECT)”, please do read on!

The safety & side effects of rTMS
I often hear people who are not familiar with TMS mention ECT or compare the two. I think it’s important to emphasize that TMS and ECT are nothing alike. While ECT is administered under anesthesia, rTMS is not painful and sessions are generally well tolerated; I often compare the feeling of a pulse on the scalp to a soft tap of a finger or pen. rTMS is a non-invasive treatment method, it’s safe, and its side effects are relatively small compared to those of medication. The most severe side effect is the occurrence of a seizure, which is extremely unlikely with the right screening before treatment; one of the factors that might argue against the use of rTMS is a (family) history of epilepsy.

Finishing note
I work with TMS myself and see it as a promising tool for treating a variety of disorders. There is still much to be discovered, but many researchers and institutions are working on the road ahead. The specificity TMS can achieve makes it a strong (potential) alternative to medication, which often affects the whole body to some degree. While I think it can be an alternative in many disorders, I also think the combination of rTMS with cognitive behavioral therapy or pharmacotherapy shouldn’t be underrated. This covers the basics of TMS, but a visual impression or real-life experience can really help to get a feel for it. I want to finish with a list of videos I recommend:

The Potential Side Effects of rTMS | Transcranial Magnetic Stimulation (TMS) - Dr. Martijn Arns: 

An interesting video to showcase how strong the effects of TMS can be (more research oriented). Michael Mosley has areas of his brain turned off - The Brain: A Secret History - BBC Four:

Simple and visual explanation video of TMS: 

Author: Kobus Lampe

Mapping of brain activity with fMRI, EEG, and PET imaging; how does it work?

If you are reading this article right now, I would take a wild guess and suppose you have at least some interest in how the brain works. This desire to understand the mind has been shared by many philosophers over the last few centuries. However, they encountered some obstacles along the way. Among them, the size of neurons (0.01-0.05 mm in diameter) made it impossible to observe them prior to the invention of the microscope at the end of the 17th century, and even then, those observations could only be made on post-mortem brains. Another crucial invention, and the focus of this article, was that of the various neuroimaging techniques needed to observe a living brain through the skull and the tissues surrounding it. A few such techniques were proposed in the late 19th and early 20th centuries, such as the “human circulation balance” of Angelo Mosso, which more closely resembled a medieval torture device than a brain scanner, and pneumoencephalography, which required the cerebrospinal fluid to be replaced by air. These techniques are now outdated. Here, we will focus on the strengths and weaknesses of three more recent, important, and well-established techniques: fMRI, EEG, and PET imaging.


Functional magnetic resonance imaging, or functional MRI (fMRI), not to be confused with structural MRI (simply referred to as MRI), is a painless, non-invasive neuroimaging technique that produces detailed 3D images of the brain and is used to measure brain activity. This activity is inferred indirectly from the ratio between oxy- and deoxyhemoglobin. What does that mean? Our circulatory system is responsible for oxygenating the cells and tissues of our body and for removing waste products and carbon dioxide from these cells. This is done by hemoglobin, a protein in red blood cells to which oxygen and carbon dioxide can bind and thus be transported through the body. Hemoglobin carrying oxygen is called oxyhemoglobin. In the brain, when a particular area is active, more oxygen-rich blood is delivered to it. The combination of the large electromagnet that constitutes the MRI scanner and a radiofrequency current can detect the different magnetic properties of oxy- and deoxygenated blood. The signal the scanner picks up is therefore called the BOLD (blood-oxygen-level-dependent) response. Simply put, fMRI makes images of brain activity based on the amount of oxygenated blood that rushes to the brain areas that are active. More details about the mechanism of this technique can be found here: fMRI - Brain Matters

fMRI is mostly used for its excellent spatial resolution. That is, it can determine very precisely which part of the brain is active (relative to others) and provide high-quality images of the brain. It is therefore used to study the structure of the brain and assign functions to specific regions, but also to examine the effects of cerebrovascular accidents, trauma, or neurodegeneration on brain function, and to track the growth of brain tumors. On the other hand, an fMRI scan is expensive, and participants must stay still to capture clear images, which is often harder than it seems over a sustained period of time, sometimes resulting in poor image quality. The main limitation of fMRI, however, is its poor temporal resolution, as blood takes time to flow from one part of the brain to another (the BOLD response takes approximately 4 seconds, a very long time in the neuroscience world). This means we cannot tell precisely when the changes observed in the brain are happening. To compensate for this, fMRI studies are often combined with EEG.


As you probably know, the brain communicates via electrical impulses, in the form of action potentials and postsynaptic potentials. Electroencephalography (EEG) is a method for measuring this electrical activity by placing electrodes on the scalp with a special conductive gel. The precise mechanism of EEG is a little complex and is well explained here: EEG - Brain Matters. In summary, when neurons communicate, positive and negative ions such as Na+, K+, and Cl- continually flow in and out across the neurons’ membranes, giving rise to dynamic separations of positive and negative charge, called dipoles. These differences in electric charge are picked up by the electrodes and sent to a device that records and displays the brain activity as wave patterns. These can then be analyzed by specialists, who so far have classified them into five basic patterns: delta, theta, alpha, beta, and gamma waves. Each pattern is associated with different states of alertness and different functions.
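Those five wave patterns are defined by frequency ranges. The exact cut-offs differ slightly between labs, so the boundaries below are one common convention rather than a fixed standard:

```python
# Conventional EEG frequency bands (Hz); boundaries vary somewhat
# between labs, so treat these cut-offs as one common convention.
BANDS = [
    ("delta", 0.5, 4),    # deep sleep
    ("theta", 4, 8),      # drowsiness, light sleep
    ("alpha", 8, 13),     # relaxed wakefulness, eyes closed
    ("beta", 13, 30),     # alert, focused
    ("gamma", 30, 100),   # higher cognitive processing
]


def classify(freq_hz):
    """Return the name of the band a given frequency falls into."""
    for name, low, high in BANDS:
        if low <= freq_hz < high:
            return name
    return "outside conventional bands"
```

For example, `classify(10)` returns `"alpha"`, the rhythm you would expect over the back of the head when someone relaxes with their eyes closed.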

Unlike fMRI, EEG offers high temporal resolution: it can record brain activity as it unfolds in real time, at the level of milliseconds (thousandths of a second). However, EEG cannot provide precise information about the origin of the activity (low spatial resolution), nor can it reliably detect signals from subcortical structures. This is why EEG and fMRI are often combined, as they complement each other well.


As we saw previously with fMRI, neurons need oxygen to function, and measuring the changes in the magnetic properties of oxygenated vs deoxygenated blood lets us track changes correlated with brain activity. Positron emission tomography (PET) is a nuclear imaging technique that also measures brain activity, by a similar yet distinct route. Neurons need oxygen, yes, but also glucose to operate properly: the brain consumes approximately 5.6 mg of glucose per 100 g of brain tissue per minute. The rationale behind PET imaging is therefore to inject into the bloodstream a radioactive tracer bound to glucose. When certain brain areas activate, the tracer attached to the glucose is carried there. The tracer emits a positron, which collides with an electron; this in turn produces two photons, which are detected by the PET scanner. More details about PET imaging can be found here: PET - Brain Matters.
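To get a feel for that 5.6 mg figure, we can do the arithmetic for a whole brain. The 1.4 kg brain mass below is an illustrative average, not a measurement:

```python
def brain_glucose_g_per_day(brain_mass_g=1400, rate_mg_per_100g_min=5.6):
    """Whole-brain glucose use per day, from the per-100-g-per-minute
    rate quoted above. The 1.4 kg brain mass is an illustrative average."""
    mg_per_min = rate_mg_per_100g_min * (brain_mass_g / 100)
    return mg_per_min * 60 * 24 / 1000  # convert mg/min -> g/day
```

For a 1.4 kg brain this works out to roughly 113 g of glucose per day, a surprisingly large share of the body’s energy budget for an organ of that size.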

One of the reasons to use a PET scan is that it can reveal how parts of your body are functioning at the cellular level. This is especially helpful for identifying and investigating cancers and infections, and for seeing how the body reacts to diseases and their treatments. PET is also increasingly used to investigate neurotransmission. Nonetheless, PET imaging has its limitations. The most obvious one is that the subject must be injected with radioactive tracers. This procedure isn’t harmful if done once or twice, but it becomes problematic if done more often, as the effects of radiation add up over a lifetime. You therefore cannot scan the same individual many times. This is a limitation for research, since it rules out within-subject studies, in which you compare the same individual(’s brain) under different conditions. Such studies are useful because brains differ a lot between people, and the averaging or standardization of different brains is complicated and often affects the data.


To sum up, there is no magical imaging technique enabling us to have complete access to brain structure and function on a level of milliseconds and millimeters. Different techniques permit different analyses of the brain, all containing some advantages and limitations or costs. Therefore, imaging techniques should be chosen depending on what you want to know about the brain. If the timing of brain activity is important, you should probably go for EEG, but if you are interested in where the activity takes place, fMRI or PET would be better options. Currently, scientists are working on combining multiple techniques to get the best of all worlds.

Author: Pablo de Chambrier

An "electronic ear" - cochlear implants

Many people benefit from technological development. Being able to use a phone or personal computer makes your life a lot easier. And while it sometimes seems some people cannot function without their device, as if it has become a part of their body, other people genuinely rely on devices to take over bodily or sensory functions. One such example is the cochlear implant (CI), a medical device that provides the brain with input; more specifically, it can restore some access to sound for people with severe to profound hearing loss. In this article, I will explain how a CI works and some of the benefits and challenges that come with this technology.

From pressure changes to electricity

To understand how a CI works, it helps to know a bit more about the auditory system and what sound actually is. Physically, sound consists of pressure changes in the air (or another medium) caused by an object's movements or vibrations. Usually we can perceive this sound because the ear transforms these pressure changes into electrical signals, and subsequently into meaningful sounds and speech. The auditory signal takes the following path: first, the pinna picks up the sound, which then travels through the ear canal until it arrives at the eardrum. The eardrum is set into vibration by the air pressure changes of the sound. This, you could say, causes a chain reaction: the vibrating eardrum makes the three smallest bones in your body, the ossicles (hammer, anvil, and stirrup), vibrate in succession, amplifying the vibration and transmitting it to the oval window. The oval window is part of the cochlea, a snail-like structure filled with liquid that consequently vibrates too. Within the cochlea lie various structures (the basilar membrane, the organ of Corti, and the tectorial membrane), which are all set into motion. Most importantly, the hair cells start to move, transducing the environmental sound waves into electrical signals: the bending of the hair cells triggers a sequence of chemical reactions that lead to action potentials. Interestingly, these hair cells do not vibrate in a random fashion; both the place and the intensity of the vibration convey properties of the sound signal. The place of the vibration within the cochlea conveys the pitch of the sound; this is called tonotopy. The resulting action potentials are sent through the auditory nerve to the brainstem, and from there to higher-order structures of the brain, such as the auditory cortex.
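Tonotopy, the place-to-pitch mapping just mentioned, is often approximated with the Greenwood function, which relates position along the basilar membrane to the frequency encoded there. A small sketch; the constants are a commonly quoted human fit and vary slightly across sources:

```python
def greenwood_frequency(x, A=165.4, a=2.1, k=0.88):
    """Characteristic frequency (Hz) at relative position x along the
    basilar membrane, with x = 0 at the apex (low pitch) and x = 1 at
    the base (high pitch). Constants are a commonly quoted human fit."""
    return A * (10 ** (a * x) - k)


low = greenwood_frequency(0.0)   # apex: roughly 20 Hz
high = greenwood_frequency(1.0)  # base: roughly 20 kHz
```

The two endpoints land close to the limits of human hearing (about 20 Hz to 20 kHz), which is exactly what a place-to-pitch map should do.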

Technology to the rescue

However, some parts of this auditory pathway might not function properly. For every step up to the brainstem there is a device that can be used to restore hearing: from a hearing aid that amplifies incoming sound, to bone-conduction devices, a cochlear implant, and an auditory brainstem implant. Of these devices the CI is, after the conventional hearing aid, the most widely used. CIs can help people with severe to profound hearing loss to receive a sensation of sound by directly stimulating the auditory nerve fibers in the inner ear. Technically, a CI consists of an external microphone that picks up the sound, a speech processor that processes it, a transmitter that sends the signal to the implanted receiver (coil), and an electrode array placed inside the cochlea. That sounds amazing, right? A device that lets nearly or completely deaf people hear again. And not only that: indirectly it also leads to less depression, social isolation, and unemployment, and to more independence.

Do not forget the brain

Although a CI might lead to great results, it is not “plug and play”. The sound produced by a CI is less nuanced than natural hearing because it is processed by a small computer; it is often described as sounding similar to a robot voice. It therefore takes some time before speech can be understood from this distorted signal. This shows that a CI really is a BCI (Brain-Computer Interface): it is not only important to develop a good device, but the device interacts with the brain, which in turn has to adjust to the device and vice versa. Luckily there are professionals, called audiologists, who fit (tune) the device to the CI user's needs and guide them during rehabilitation. There are many different settings, from the way that sound (and background noise) is processed to the way that the auditory nerve is stimulated. Researchers are also trying to understand how the brain actually deals with the signal and how effortful listening with a CI is, because even when someone understands speech perfectly with a CI, they may still need to put a lot of effort into achieving this.

If you are interested in learning more about this topic

For Dutch speakers, the podcast “Met Hertz & Ziel – de rol van cochleaire implantaten” features three Flemish audiologists discussing cochlear implants. They describe, for example, how a CI works, but also how CIs changed the lives of some users, why they cannot predict how well someone will perform with a CI, how audiologists fit the device when someone cannot communicate how well they perceive sounds, and what drives the decision to get or not get a CI, from both a clinical and a user perspective.

If you would like to learn more about hearing loss in general:

Or about what it might cost to not address hearing loss (which ranges from 6-30 billion for several European countries):

Author & Illustrations : Loes Beckers


Goldstein, E. B. (1999). Sensation and perception (5th ed.). Brooks/Cole Pub.

Kochkin, S., & Rogin, C. (2000). Quantifying the obvious: The impact of hearing instruments on quality of life. Hearing Review, 7(1).

Shield, B. (2006). Evaluation of the social and economic costs of hearing impairment. A report for Hear-It AISBL.

When your thoughts are able to control a robot!

A 57-year-old woman took a sip of her coffee. Not impressed? Then consider that she is almost completely paralyzed and was using a robotic arm connected to her brain.

Picking up a cup, bringing it to your mouth and taking a sip: this action is probably so commonplace for you that you've never thought about how you actually do it. The woman in this study emphatically has. She became paralyzed after a stroke and has been unable to speak or use her limbs ever since. That happened about 15 years before she was able to participate in a promising study in the US, in which she managed to control a robotic arm placed on a table in front of her by merely thinking of movements of her own hand.

Motor cortex

This thought-based control of the robotic arm is not telekinesis, but the result of decades of research into Brain-Computer Interfaces (BCIs): technology that enables direct communication between the brain and a computer. In this case, the relevant brain region is the motor cortex, which contains cells that almost directly control the muscles in your body. For years, scientists have been conducting experiments to capture the activity of those cells with implanted electrodes. From these recordings, they can approximately deduce what movement someone wants to perform.
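One classic way to “approximately deduce” a movement from recorded cells is population-vector decoding: each motor cortex cell fires most strongly for one preferred movement direction, so letting every cell vote for its preferred direction, weighted by how hard it is firing, recovers the intended direction. A toy sketch in Python; the firing rates and preferred directions are invented for illustration:

```python
import math


def decode_direction(firing_rates, preferred_dirs):
    """Population-vector decoding: each cell votes for its preferred
    movement direction (an angle in radians), weighted by its firing
    rate (spikes/s). Returns the decoded movement angle in radians."""
    x = sum(r * math.cos(d) for r, d in zip(firing_rates, preferred_dirs))
    y = sum(r * math.sin(d) for r, d in zip(firing_rates, preferred_dirs))
    return math.atan2(y, x)


# Four invented cells tuned to right, up, left, down; the cell tuned
# to "right" fires hardest, so the decoded direction is ~0 radians.
angle = decode_direction([30, 12, 4, 12],
                         [0.0, math.pi / 2, math.pi, 3 * math.pi / 2])
```

Real decoders for robotic arms are far more elaborate (they track velocities in 3D and are retrained during calibration), but the core idea of reading intent from a weighted population of cells is the same.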


Before the patient in this article was able to control the robotic arm to serve herself a drink, researchers had already succeeded in using signals from the motor cortex to control a mouse pointer on a computer screen. That a patient can now control a robotic arm is a new milestone. It offers the prospect of greater independence for people who currently rely entirely on care. However, the technology is not yet very practical. Taking a sip of coffee with the robotic arm requires a technician who spends over half an hour calibrating the equipment. In addition, the arm must be connected by cables to the patient's head. The researchers hope that one day BCI technology will allow paralyzed people to regain control of their own limbs.

Watch a video about this study here

Original Author: Daan Schetselaar

Translated into English and adapted: Sophie Ruppert 

Elon Musk’s Neuralink: Promising future tech or marketing scam?

Whether it’s producing fancy electric cars, potentially offering commercial space travel, or ‘freeing’ Twitter, Elon Musk is eager to invest in futuristic endeavours! But did you know that he also founded a company that wants to insert implants into people’s heads so they can control devices with just their brain signals? Elon even called this a “FitBit in your skull” that may “solve” autism or schizophrenia… so, what’s behind all this?

Neuralink Corporation, founded by Elon Musk in 2016, is aiming to produce a brain-computer interface, or BCI for short. Usually, movement is what connects brain signals to devices in the outside world, e.g., when you type something on a keyboard with your hands. A BCI enables direct real-time communication between the brain and outside devices. For example, with a BCI you could move a robot arm by thinking about moving your own arm or you could text your friend on your phone by just thinking about it. Seems cool, right? Of course, this technology is still in a relatively early stage, and it will probably be a long time before you can buy any product like that as a consumer.

What is also very exciting about BCIs is that some may enable people who cannot move their bodies very much, e.g., quadriplegics, to interact with the outside world through technology. Imagine if we could give someone new limbs that they could control as if they were made of their own flesh and blood!

Neuralink’s specific idea boils down to this: tiny, tiny threads with many electrodes on them, implanted into the brain by a surgical robot. They are connected to a chip that collects and combines the neural signals from the electrode tips and sends them wirelessly to a device to control, e.g., your mouse cursor or computer keyboard. Neuralink has already released a few videos of animal test subjects using their device; in 2021, for example, a monkey was shown playing computer games using only its decoded brain signals, no hand movements needed! Pretty neat, right? Well, yes. But as researchers far and wide have remarked: this kind of technology has been available since about 2002…

So, what is new about Neuralink then? 

Firstly, invasiveness: typically, research on BCIs in humans has used non-invasive methods such as EEG or fMRI to connect brains with computers. Neuralink is different in that regard. They insert technology into people’s heads, and implanting something into the skull is obviously far more dangerous than just putting on an EEG cap! Implantation is used because recording closer to individual neurons yields a much clearer signal than placing electrodes on top of the head. The big issue is that the implant site may become infected after surgery, which can get very dangerous. While in certain cases, such as for people with severe epilepsy, implantation of electrodes is already being done, doing this in healthy people deserves far more critical consideration, since it does not serve the purpose of treating an ailment.

Secondly, wirelessness: typically, EEG or fMRI systems are connected via cables to the computer that controls them, which restricts mobility and general practicality. So wirelessness, if it functions properly, could be a huge advantage of Neuralink over many current BCIs.

To sum it up, Neuralink has some promising features that may really add something useful to future BCI research. However, you should not expect this product on the shelves anytime soon since human safety is still a massive issue at the moment. And most importantly, you should generally be critical of a businessman hyping his own company on social media!

Author: Melanie Smekal

Image created using DALL-E-2 open AI software


Screen time and social media, a curse for adolescents’ developing brains?

As a millennial, I remember that when I was in high school, our mobile phones were not much of a distraction. During breaks we would have conversations or play cards to kill time. However, my sister, a typical Gen Z’er and six years younger than I am, really struggles to concentrate on homework and is easily distracted by her phone as soon as one of her friends sends her a Snapchat. She is not the only one: practically all of her friends share the same struggle, and apps like “Forest” have become necessary for youngsters to keep them focused on their schoolwork. I started wondering: is this really a difference between generations? And if so, is it really corrupting adolescents’ minds?

For most adolescents, their phone functions as a lifeline for staying in touch with friends and family. Opening that Snapchat with a funny picture of your friend, whose face is morphed into that of a monkey, delivers, for a short moment, the dopamine shot you were craving. In the long run, however, this brief gratification is not enough, and you end up checking your phone every five minutes. Concerns among researchers about the influence of screen time on developing brains are therefore rising. This article will focus on the effects of screen time and social media use on the development of the adolescent brain.

Screen time and its role in adolescence

Adolescence is the developmental period between childhood and adulthood. During this period, the brain undergoes massive changes influenced by several biological and environmental factors, screen time and social media among them. These changes can best be explained by the “dual-systems model”. The emotional-motivational system, the one behind your craving to open your friend’s Snapchat, matures at the beginning of adolescence. The cognitive control system (mostly found in frontoparietal circuits), which you need to stop yourself from checking your phone, matures only at the end of adolescence. This gap in maturation means that in adolescents, emotions are less dampened by the top-down cognitive control system. This is where the Internet and social media come into play: adolescents seek short-term over long-term gratification, because they cannot resist the urge to pick up their phone. Moreover, adolescents are less influenced by their parents and more drawn to spending time with friends. The Internet provides an accessible way for adolescents to connect with peers as well as to engage in highly gratifying activities such as watching YouTube videos or online gaming.

Effects of approval and appreciation via Instagram on the adolescent brain

As you have just read, adolescents are really vulnerable to the opinions of their peers. In 2018, a study by Sherman and colleagues used functional magnetic resonance imaging (fMRI) to investigate the brain areas involved in giving “likes” to other people’s pictures on Instagram, in 58 eighteen-year-olds. Participants were asked to choose whether or not to “like” each picture. The fMRI showed increased activation of the brain’s reward circuitry (e.g., the striatum and the ventral tegmental area, VTA) when a participant decided to “like” a picture. The same brain areas are engaged when a participant’s own pictures are “liked” by others. The researchers therefore concluded that giving and receiving “likes” on Instagram correlates with higher activity in brain areas involved in reward processing as well as prosocial behavior.

Effects of social media on concentration and impulse control in adolescents

As already described in the introduction, this “digital generation” seems to struggle more with concentrating and focusing compared to older generations. This is confirmed in a survey among high school teachers from the X and Y generation* who indicated that generation Z students had poorer time management, unplanned study behavior and disrupted class often. When focusing on changes in the functional connectivity, when comparing adolescent’s addicted to the internet to healthy adolescents, Lin and colleagues showed a positive correlation between the level of Internet addiction and a decrease in connectivity in various white matter tracts (orbitofrontal tracts, anterior cingulate cortex, corpus callosum, front-occipital fasciculus etc.). These white matter tracts form the important highways of the brain, and a decrease in the connectivity between brain areas via these tracts results in a reduction in among others concentration, impulse inhibition etc. Let me take the Anterior Cingulate Cortex (ACC) as an example. This brain region and its connections are important for you to keep your eyes on the price. However in the case of decreased functional connectivity between the ACC and its connected hubs, a reduction in one’s cognitive control is the consequence which results in you opening that snapchat even when you know you have to study. To illustrate this, Li and colleagues gathered fMRI data from 18 interned-addicted and 23 control participants all aged 15 years old who had to perform a Go-stop task. An example of such a task would be that the participants lying in the MRI scanner have to press the button showing an arrow pointing in the same direction as the arrow presented on a screen. In the case of a “stop sign”, they have to refrain themselves from pressing any button. 
Comparison of the fMRI results showed that the Internet-addicted adolescents had weaker connections between frontal and basal ganglia pathways, which are known to be involved in response inhibition. These individuals seem to fail to inhibit their response because they cannot fully activate the right connections between the necessary brain areas, and therefore cannot suppress their automatic response. I would like to point out that in the described studies, the diagnosis of Internet addiction was based only on self-reports rather than a clinical interview. Moreover, these results appear only in Internet-addicted adolescents, not in healthy ones, so it remains to be seen in future studies whether healthy generation Z adolescents really suffer from poorer time management skills due to functional connectivity changes in the brain.
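The trial logic of a go-stop task like the one described above can be sketched in a few lines of Python. This is a hypothetical illustration only (the function names and stop-trial proportion are my own assumptions, not the actual task code used by Li and colleagues):

```python
import random

def generate_trials(n_trials, stop_prob=0.25, seed=0):
    """Build a list of go/stop trials: each trial shows an arrow
    pointing 'left' or 'right'; a fraction of trials are stop trials."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        trials.append({
            "arrow": rng.choice(["left", "right"]),
            "stop": rng.random() < stop_prob,
        })
    return trials

def correct_response(trial):
    """On go trials the correct response is the button matching the
    arrow; on stop trials the correct response is to withhold (None)."""
    return None if trial["stop"] else trial["arrow"]

trials = generate_trials(8)
expected = [correct_response(t) for t in trials]
```

Response inhibition is then measured by how often a participant manages to withhold the button press on the stop trials.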

Effects of social media on social skills among adolescents

One last interesting finding regarding the effect of social media on today's youngsters is the idea among researchers that face-to-face social skills are declining within this generation. This concern has gained attention since several talks and books have touched upon the problem. Trends observed in the literature suggest that because teenagers socialize more online, they risk placing less value on their 'real world' selves, which makes them more vulnerable to impulsive and suicidal behavior. An additional concern is that the time adolescents spend in front of screens replaces time for face-to-face interactions, and therefore time spent learning these skills. Researchers indeed found that the more time an adolescent spent on the Internet, the lower their sociability and communication with family members, and the higher their depression and loneliness. So although social media has made it easier to stay in touch with each other, the downside of this form of contact is that face-to-face skills among adolescents decline, with all the consequences that entails. However, the evidence is limited. Research that specifically compares children's social skills before and after the internet boom has never been conducted. Moreover, social skills are hard to measure, and there is no consensus among researchers on how to do so.

To end on a positive note

All in all, based on these results of neuroimaging studies, we can observe a trend suggesting that screen time affects our brain. However, we should not forget that we are currently not able to draw definite conclusions about this matter, since imaging techniques like fMRI are not refined enough yet, and there is heterogeneity in how social skills and internet addiction are measured. More importantly, the Internet and social media have brought us positive things as well. In 2013, a study already showed that children aged 0 to 8 also experience many positive effects from using the Internet. Several studies showed that the more these children used the internet, the better their verbal abilities and academic performance became. Moreover, the more time they spent on the Internet, the earlier these children became acquainted with using it to broaden their knowledge, which in their adolescent years resulted in higher political engagement and a lower risk of experiencing the negative effects described above.

Finally, while future research should take the points above into account, positive aspects of screen use among adolescents, such as building and maintaining social connections, learning, and social support, should not be overlooked!

*People belonging to generation X are born between 1965 and 1980; they are the children of the baby boomers. Their own children belong to generation Y (also called "Millennials") and are born between 1980 and 1999.

Author: Joyce Burger

Image: Joyce Burger


BCI & Neurotechnology Spring School 2023

With this month's theme being "Brain and Technology", we are delighted to inform you about this year's BCI & Neurotechnology Spring School. This 10-day virtual event invites everyone: students, researchers, and anyone interested in BCI-related robotics, AR, VR, machine learning, computing, human-machine interface systems, control, signal processing, big data, rehabilitation, and similar areas. The Spring School 2023 offers a series of educational talks and insights from universities and companies about their cutting-edge brain-computer interface projects. As if this isn't already enough, there is a BR41N.IO Designers' Hackathon too, to top off this unique and highly valuable learning experience.

The spring school takes place online from April 17-23 and is free of charge.

Check out the program and register here:

Artificial Superintelligence: can we keep a super-intelligent genie locked up in its bottle?

The human race is without any doubt a very successful species. We have managed to populate the entire world, from arid deserts to icy tundras. We have tamed wild animals, and with our weapons we can overcome the biggest and strongest creatures. We don't have a natural enemy to fear, and we have learned to survive in extreme circumstances. All in all, you could say that we are in some sense the superior species on planet Earth. But our superior position could be in danger…

Before I can explain what is threatening our position, we should take a look at the main strength that brought us to where we are today: our intelligence. The human race is, unlike other species, able to think in a very sophisticated way. For example, we are able to remember and look ahead, so we can learn from the past and prepare for the future. Another important aspect of intelligence is language. By using language we can share information, so others can build on knowledge that may have been acquired generations ago. These skills helped us survive and thrive; in short, we should thank our brains for our top position in the animal kingdom. Thanks to our intelligence we have developed tools of increasing complexity, from a primitive axe to a smartphone. At the moment we are on the edge of a huge technological revolution. Since the first computer was developed, many more innovations have followed, at an ever-increasing pace. Technological progress is growing exponentially, which means that advances will soon skyrocket. Even so, we tend to underestimate the rate at which this progress is made. I'll illustrate the reason for this underestimation with a metaphor:

“If you were to place grains on the squares of a chessboard such that the first square gets 1 grain, the second 2 grains, and the third 4 grains, etc. How many grains will be on the chessboard when you finish?”

The last square alone holds 9 223 372 036 854 775 808 grains (2^63), and the whole board holds 18 446 744 073 709 551 615 grains (2^64 - 1). This is way more than you expected, right? We see the same effect when we look at our view of technological progress; we don't expect it to grow at the rate that it does. This means that science fiction themes such as super-intelligent robots that can think independently might be closer than we think. There are already robots out there that are able to 'reproduce' themselves without any humans involved, using genetic algorithms. But what if these fun technological developments get out of hand? What if the progress goes too fast and we lose our grip? What if artificial intelligence (AI) becomes more intelligent than we are? Will robots then take over our superior spot in the animal kingdom?
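The doubling in the riddle is easy to verify with a few lines of Python (a quick arithmetic check added for illustration):

```python
# Grains of rice on a 64-square chessboard, doubling on each square.
grains_per_square = [2 ** i for i in range(64)]  # 1, 2, 4, ... on squares 1..64

last_square = grains_per_square[-1]  # 2^63 grains on the final square alone
total = sum(grains_per_square)       # 2^64 - 1 grains on the whole board

print(last_square)  # 9223372036854775808
print(total)        # 18446744073709551615
```

Note how the final square by itself holds more grain than all the previous 63 squares combined; that is exactly why exponential growth fools our intuition.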

According to many influential scientists, such as Nick Bostrom, a scientist and philosopher of artificial intelligence, this scenario is very realistic. Computers will likely someday be more intelligent than we are. An advantage of computers is that they have, in theory, unlimited storage capacity: whole buildings can be built to store computer data, while our brains have limited storage space, since they need to fit inside our skulls. One reason that computers haven't caught up with us yet is that a computer needs much more energy to function than our brain does. The brain runs on only about 20 watts (less energy than is required to keep a dim light bulb lit), whereas the fastest supercomputer (the USA's Frontier supercomputer) needs about 21 million watts. Nonetheless, it's only a matter of time before science finds a solution to the energy problem. Nick Bostrom therefore encourages fellow scientists to develop a way to keep artificial intelligence under control before it's too late; he recommends developing a safety mechanism before we create more advanced AI. You might think: "Well, why don't we just put an on-and-off switch on super-intelligent AI?". It's just not that simple, and I will explain why. Technically, humans have multiple off switches: hindering the supply of oxygen by closing the trachea, stopping the blood flow by destroying the heart, or damaging the body's control center (our brain). Although we have those off switches, it's not easy to deactivate us, because we find ways to avoid being deactivated. Super-intelligent AI could do the same thing, and in some sense this has already started happening. For example, do you know where the off switch to the internet is? The point I am trying to make is that we might not be able to control something superior to us.
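To put the energy gap in perspective, here is a quick back-of-the-envelope calculation using the wattage figures mentioned above (both are rough approximations):

```python
brain_watts = 20              # approximate power draw of the human brain
frontier_watts = 21_000_000   # approximate power draw of the Frontier supercomputer

ratio = frontier_watts / brain_watts
print(f"Frontier draws roughly {ratio:,.0f} times the power of a human brain")
# roughly 1,050,000 times
```

In other words, today's fastest machine needs about a million brains' worth of electricity, which is exactly the bottleneck the paragraph above describes.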
As Nick Bostrom says: ‘’We should not be overestimating our ability to keep a super-intelligent genie locked up in his bottle forever”.

Although many technological advancements are very valuable, it's also important to be aware of the possible dangers. Soon there will be an explosion of technological developments. When exactly this will happen, and what its consequences will be, no one can know for sure. The least we can do is think about the possible consequences of super-intelligent AI and be prepared before it's too late. We should make sure that we keep controlling AI and prevent AI from controlling us. Before we make a super-intelligent genie, we should develop a lamp that is strong enough to detain it.

Author: Pauline van Gils