The interaction between artificial intelligence and cognitive neuroscience to better understand cognition, psychological disorders, and social behaviour
Artificial Intelligence (AI) has progressed rapidly in recent years across domains such as smartphones, medicine, and robotics, and this progress has fostered a budding interaction between neuroscience, psychology, and AI (Hassabis et al., 2017). AI and cognitive neuroscience share common ground and develop mutually through each other's insights. One benefit of this increasingly intertwined relationship is our expanding ability to understand and recreate biological computations, such as connectomes of the brain. Furthermore, scientists and psychologists have used AI's computational models to offer explanations of the causes and treatments of psychological disorders such as Alzheimer's disease (White et al., 1986) and epilepsy (Gleichgerrcht et al., 2018). Other areas of cognitive neuroscience involving social developmental disorders such as autism have also been investigated extensively with AI, leading to effective treatments. Finally, scientists have very recently succeeded in creating AI that integrates visual perception with decision making, giving AI an adaptive working memory (Yang et al., 2018). Despite offering rich insights, AI proves problematic in the domain of neuroscience in several ways. Computational models are not clear cut and, as with many imaging problems, poor spatial and temporal resolution is often a challenge. More controversially, from a moral and ethical standpoint, the progression of AI may have detrimental impacts on society and raises philosophical questions such as "what are the requirements of basic human rights?" This rapidly developing approach to neuroscience is evaluated in the discussion below, highlighting the effectiveness and reliability of budding computational systems.
One way in which AI can help neuroscientists better understand neurological disorders is through whole- or part-brain connectome mapping. A 'connectome' refers to a virtual map of the spatial connectivity of the brain's neural pathways (Toga, 2012). These virtual neural maps have been created to predict the outcomes of neurological disorders. For instance, the connectome mapping of the invertebrate C. elegans (White et al., 1986) has provided neuroscientists with a better understanding of Alzheimer's disease. Work on C. elegans since the 1980s has enabled scientists to characterise this small worm-like organism in great detail: it has 302 neurons and around 7,000 connections. This single organism has offered scientists a microcosmic blueprint of what the human brain may look like. In humans, Alzheimer's disease is characterised by neurofibrillary tangles and senile plaques in the brain, formed by the accumulation of amyloid-beta fragments of the amyloid precursor protein (APP); in other words, Alzheimer's disease is characterised by APP mutations (Minati et al., 2009). In humans, this protein exists in several cell types throughout the body. C. elegans contains the APP ortholog apl-1 (Daigle & Li, 1993). Expression of this gene in the worm is prevalent in neurons, supporting cells, and head muscles, and its absence results in larval lethality: worms without apl-1 die prematurely (Hornsten et al., 2007). From this, it is clear that the apl-1 gene is essential for survival and development. Scientists can use this information to create a C. elegans connectome carrying the homologue of human Alzheimer's disease. Manipulating this within the artificial agent may provide ideas of how to find causes of and treat the disease, and further simulation of the worm may give psychologists the opportunity to understand how APP can be used positively.
Sebastian Seung has even proposed that the ability to create connectomes of the human brain may lie in the very near future. Connectomes of this kind have offered us the ability to monitor, predict, and treat certain disorders.
Whilst a connectome-based approach to Alzheimer's is still theoretical, scientists have recently created whole-brain connectome maps that teach AI systems, through deep learning, to aid with post-surgery complications of epilepsy (Gleichgerrcht et al., 2018). Presently, epilepsy surgery is often recommended to individuals who respond poorly to medication. However, many patients are reluctant to go through with the surgery because of its high risks. On top of this, surgical decisions depend on clinical variables such as visual interpretation and radiologic studies, which leaves predictions of surgical success only around 50% accurate. To tackle this debilitating problem, researchers at the Medical University of South Carolina implemented deep learning techniques on a whole-brain connectome to predict the outcome of surgery for specific individuals (Gleichgerrcht et al., 2018). In their research, a deep network was trained on pre-surgical connectome data from fifty patients. One main challenge the authors tried to tackle was reducing the large number of input connections. They did this by implementing a learned binary task, feeding the network only the connections that proved crucial during the training stage; this binary task learning reduced the search space and simplified the neural network architecture. The map itself was created from in-depth analysis of diffusion magnetic resonance imaging (dMRI). This new method has increased post-surgery outcome predictability to nearly 90%. A similar white-matter connectome created by Taylor et al. (2018) found that measurements of the network could account for outcomes with 79% accuracy. With whole-brain connectomes of this kind, scientists have the tools to make surgery safer and more reliable, and we can infer that artificial neural networks have clinical benefits in predicting the success of epilepsy treatment.
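The pipeline described above can be sketched in miniature: prune the connectome's many input connections to an informative subset, then train a classifier on the reduced input to predict outcome. This is a toy version on synthetic data, using correlation screening and logistic regression rather than the authors' actual deep architecture or dMRI features.

```python
# Toy sketch: connection selection + outcome prediction.
# Synthetic data; NOT the architecture of Gleichgerrcht et al. (2018).
import numpy as np

rng = np.random.default_rng(0)

n_patients, n_edges = 50, 1000           # 50 pre-surgical "connectomes"
X = rng.normal(size=(n_patients, n_edges))
# In this toy example, outcome depends only on the first 10 edges.
w_true = np.zeros(n_edges)
w_true[:10] = 1.0
y = (X @ w_true + rng.normal(scale=0.5, size=n_patients) > 0).astype(float)

# "Binary task": keep only the edges most associated with outcome,
# shrinking the input layer and the search space.
corr = np.abs(np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_edges)]))
mask = corr >= np.sort(corr)[-20]        # keep the top 20 edges
Xs = X[:, mask]

# Logistic regression by gradient descent on the reduced input.
w = np.zeros(Xs.shape[1])
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Xs @ w + b)))  # predicted outcome probability
    w -= 0.5 * Xs.T @ (p - y) / n_patients
    b -= 0.5 * np.mean(p - y)

acc = np.mean((p > 0.5) == y)
print(f"training accuracy on the reduced connectome: {acc:.2f}")
```

The feature mask plays the role of the learned binary task: everything downstream only ever sees the reduced connectome, which is what keeps the model tractable with only fifty training subjects.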
Thanks to this research, we are one step closer to using AI in cognitive neuroscience to ease the surgical treatment of neurological disorders.
Although this looks like an incredibly sophisticated and safe technique, creating such an AI agent poses many challenges to its practicality and effectiveness. dMRI scans and the physical connections modelled by the researchers are individually specific, meaning they replicate and predict the outcomes of specific brains. Applying the same map or analysis to several patients is therefore almost impossible, which makes the approach lengthy and hard to generalise. On top of this, the high variability across types of diffusion imaging makes it a rather ambiguous tool, and dMRI also suffers from long scan times and a poor signal-to-noise ratio (Shi, 2017). Nevertheless, the deep learning method used in this case was able to find patterns and proved statistically reliable in predicting post-surgery outcomes. A further problem often found in computational modelling is overfitting the deep learning paradigm during training due to a limited number of training samples; because of this limitation, the neural network may be biased and overfit certain outcomes (Gleichgerrcht et al., 2018). The approach is also limited by the study's retrospective design and small sample size: the researchers analysed previously collected data, meaning no current patients benefitted.
Moving away from connectome mapping, another way neuroscience can enhance AI is in understanding social interaction. From a neuroscientific perspective, learning by imitation plays an essential role in human-computer interaction and has consequently been pivotal in the development of AI (Schaal, 1999). In fact, developments in this domain provide scientists and psychologists with insight into social and developmental disorders. In humans, imitation is an essential component of learning behaviours (Bandura, 1977). Therefore, for AI to approach the level of cognitive computation in humans, imitation is a prerequisite, computed through a connection between sensory and motor systems. From a neurophysiological perspective, it would be helpful to identify brain areas specialised for imitation. Rizzolatti et al. (1988) found specialised neurons in area F5 of monkeys which activated both when viewing and when imitating the actions of another; these neurons have been coined 'mirror neurons'. Whilst specialised neurons, or their robotic homologue, are hard to replicate, especially as the evidence behind them is not solid, an alternative system recently implemented in AI is the inductive approach to imitation (Schaal, 1999). Work by Breazeal et al. (2003) applied the idea of an infant's imitation learning from a caregiver to enhance the sociability of robots. Nowadays robots are placed in clinical settings, caring for the elderly, the mentally ill, and children, and the authors argue such robots should have a caring and empathetic communicative nature. They should be sympathetic and sensitive in assistance, for example when reminding people to take medication. Overall, robots should possess a high level of social competence and appropriateness.
In order to reach these stages, scientists have tried to implement a Theory of Mind (ToM) in AI (Breazeal et al., 2003). ToM refers to our ability to understand our own perspective as well as that of others (Meltzoff & Moore, 1997). To enhance a robot's social interaction and imitation, a version of ToM must be reached that connects seeing with doing (Breazeal et al., 2003). Simulation theory posits that by simulating another's actions, and the object or person they are interacting with, humans can make predictions about the perspectives and behaviours of others. This is more easily implemented in robots than in deep learning, for example, because the predictions made about another's behaviour are adopted by the robot, allowing the AI to imitate through prediction. Breazeal et al. (2003) built a robot to act accordingly: in an interaction between a robot named Leonardo and a human, the robot began to imitate the human's facial expressions in a similar fashion to infants imitating their caregivers. Not only this, but Leonardo's imitation computation allowed him to successfully imitate the human's poses. According to the authors, this is pivotal for social interaction between AI and humans because of the robot's ability to find a structural correspondence between the human and himself. Overall, the progression of imitation in robots allows for autonomous learning rather than AI having to learn through demonstration and trial and error (Hussein et al., 2017). It is therefore important for the development of AI to draw not only on physical neural networks, as seen in the development of connectomes, but also on more theoretical cognitive constructs such as ToM and mirror neurons. Advancements in computer-human imitation and interaction have also allowed psychologists to predict autism spectrum disorders (ASD). Autism is a heterogeneous developmental social disorder affecting less than 1% of the population.
It is characterised by impairments in social behaviour and communication and is typically more prevalent in males than females. Ranatunga et al. (2013) combined AI and data analysis techniques to detect ASD. A humanoid robot named Zeno performed upper-body poses which the human subject had to imitate, and clinical results found this tool to be a consistent qualitative measure for predicting autism spectrum disorders. Expanding on this, other AI tools such as contemporary computer-aided systems and virtual reality systems have been effective in AI-assisted autism interventions (Jaliawala and Khan, 2019).
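The structural-correspondence idea above can be illustrated with a toy pose-mapping routine: observed human joint angles are translated onto the robot's own joints, and joints with no robotic counterpart are simply ignored. The joint names, mapping, and limits below are hypothetical, not the implementation used by Breazeal or Ranatunga.

```python
# Toy "structural correspondence" for imitation.
# Joint names, servo names, and limits are hypothetical.
HUMAN_TO_ROBOT = {
    "left_shoulder": "servo_L1",
    "left_elbow": "servo_L2",
    "right_shoulder": "servo_R1",
    "right_elbow": "servo_R2",
}

def imitate(observed_pose, joint_limits=(-90.0, 90.0)):
    """Map an observed human pose (joint -> angle in degrees) onto the
    robot's body, clamping angles to the robot's mechanical limits."""
    lo, hi = joint_limits
    robot_pose = {}
    for human_joint, angle in observed_pose.items():
        servo = HUMAN_TO_ROBOT.get(human_joint)
        if servo is None:
            continue  # no corresponding body part, so the joint is ignored
        robot_pose[servo] = max(lo, min(hi, angle))
    return robot_pose

pose = {"left_shoulder": 45.0, "left_elbow": 120.0, "hip": 10.0}
print(imitate(pose))  # {'servo_L1': 45.0, 'servo_L2': 90.0}
```

However simple, this captures the core requirement: before a robot can imitate, it must translate what it sees on another body into commands for its own.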
Whilst it is clear that AI augments neuroscience in the domain of social interaction and social disorders, a moral and ethical standpoint highlights the dangers of closing in on robotic consciousness. Theorists suggest that the social and ethical consequences of developing human-computer interaction can have detrimental effects on society. One problem which will inevitably become a discussion point is the rights of AI. For example, the social robot Sophia is a communicative AI system; she famously joked about destroying humans, causing a very brief but real moral scare. When it comes to the rights and moral standing of AI like her, there are some sociological cautions to take on board. First, in terms of procedural justification, it is unfair to programme an AI with the moral standpoints of its developer or designer (Yudkowsky, 2004); this calls into question the AI's individuality and its susceptibility to the poisonous views some designers may hold. Another interesting area is the legal rights AI should or should not possess. Hubbard (2011) argues that AI possesses attributes which can be considered essential for legal standing. On the contrary, Yampolskiy (2013) proposes that machines should be inferior by design, should have no rights, and should be expendable.
This debate poses the sociological question "should robots have human rights?" (or the equivalent). One theoretical standpoint on personhood defines people as those who can understand, interpret, and internalise someone else's feelings and respond rationally. If scientists go as far as computing human-like consciousness, and a responsive ToM, within robots, will robots thereby obtain 'personhood', and should it be acceptable to offer them the same rights as their human creators?
Another domain of cognitive neuroscience that is explored and prevalent within AI is working memory. Recently, scientists have created AI systems that perform working memory computations in order to reason logically about spatial relationships. One factor that remains problematic for working memory in AI is reasoning about complex tasks and events that occur within changing visual stimuli (Yang et al., 2018); with the development of working memory, AI agents should be able to perform such reasoning tasks. Working memory is responsible for learning tasks by decision making, focussing on and filtering essential information. In neuroscientific terms, the working memory system has been located in the prefrontal cortex (PFC) of the primate brain (Skubic et al., 2004). Skubic and co-workers (2004) aimed to create AI with working memory properties similar to a human's. So far, the existence of reinforcement and spatial learning in the PFC has only been validated in laboratory studies, so the success of this research is pivotal in providing evidence for the biological mechanisms of working memory. In humans, adaptive working memory is executed as follows: the PFC receives dopaminergic inputs from the midbrain, leading to a rewarding feeling upon executing behaviours. Computational neuroscientists suggest that these dopaminergic inputs are involved in a reinforcement learning mechanism called temporal difference (TD) learning. Simulated PFC models of TD learning have been successful in explaining non-human primate performance on a plethora of working memory tasks (Braver & Cohen, 2000). Skubic et al. (2004) suggest that for AI to exhibit adaptive working memory, it must integrate the interacting components of a central executive, a spatial reasoning system, and visual object recognition.
Yang et al. (2018) used the neuroscience concept of the visuospatial sketchpad to build an effective dataset storing dynamic visual representations of complex tasks, which they called the configurable visual question-and-answer dataset (COG). After training on the dataset, the AI was able to generalise between similar tasks, something that had not been done before; indeed, analyses found the system's performance comparable to human computation. Critically, this system may be subject to response bias (Yang et al., 2018): a neural network may seem to compute large inputs effectively when this is in fact due to an uninformative response strategy it has learned. The authors did try to amend this by detecting biased responses and counteracting them; nevertheless, it is difficult to tell whether responses are biased or not. Another problem with this approach to cognitive neuroscience is that there is as yet no standard program or algorithm for working memory in AI.
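The TD learning mechanism described above can be sketched in a few lines: the value of a state is nudged toward the immediate reward plus the discounted value of the next state, with the TD error playing the role of the dopaminergic teaching signal. This is a generic tabular TD(0) toy on a three-state chain, not the cited PFC models.

```python
# Minimal tabular TD(0): the TD error is the "dopamine-like" signal.
# Toy task for illustration, not the PFC models of Braver & Cohen.
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """Move V[s] toward r + gamma * V[s_next] by the TD error."""
    td_error = r + gamma * V[s_next] - V[s]   # prediction error
    V[s] += alpha * td_error                  # learning step
    return td_error

# Chain: state 0 -> state 1 -> state 2 (terminal; reward 1 on entry).
V = {0: 0.0, 1: 0.0, 2: 0.0}
for _ in range(200):            # repeated episodes of the same chain
    td0_update(V, 0, 0.0, 1)    # no reward yet
    td0_update(V, 1, 1.0, 2)    # reward on reaching the terminal state

print(round(V[1], 2), round(V[0], 2))  # V[1] -> 1.0, V[0] -> gamma * V[1]
```

Note how the reward propagates backwards: state 0 never receives a reward directly, yet its value rises toward the discounted value of state 1, which is exactly the anticipatory signal the dopaminergic account describes.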
Interestingly, artificial models of long short-term memory (LSTM) have different inputs to working memory models in AI (Hassabis et al., 2017). Working memory models separate the functions of sequence control and memory storage. In LSTM, however, these two components are closely intertwined, which allows data to be placed in a fixed active state and be activated at an appropriate time (Hochreiter and Schmidhuber, 1997). These original accounts of different memory stores could arguably provide evidence for the different memory stores in the human brain, e.g. the Atkinson-Shiffrin multi-store model's separate long-term and short-term memory compared with Baddeley's working memory model. Nevertheless, newer computed networks seem to present an overlap between the two, weakening this viewpoint.
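A minimal forward pass of an LSTM cell makes the intertwining concrete: the gates (sequence control) and the cell state (memory storage) are computed jointly at every step rather than as separate modules. The weights below are random and purely illustrative.

```python
# Minimal LSTM cell forward pass, illustrating how gating (control)
# and the cell state (storage) are computed together at each step.
# Random weights for illustration only; no training is performed.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid = 4, 8
# One weight matrix per gate: input (i), forget (f), output (o), candidate (g).
W = {k: rng.normal(scale=0.1, size=(n_hid, n_in + n_hid)) for k in "ifog"}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c):
    z = np.concatenate([x, h])
    i = sigmoid(W["i"] @ z)      # how much new content to write
    f = sigmoid(W["f"] @ z)      # how much old cell state to keep
    o = sigmoid(W["o"] @ z)      # how much of the cell state to expose
    g = np.tanh(W["g"] @ z)      # candidate content
    c_new = f * c + i * g        # data held "in a fixed active state"
    h_new = o * np.tanh(c_new)   # released "at an appropriate time"
    return h_new, c_new

h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(5):               # a short input sequence
    x = rng.normal(size=n_in)
    h, c = lstm_step(x, h, c)

print(h.shape, c.shape)  # (8,) (8,)
```

The forget gate `f` is what lets the cell hold content across many steps, while the output gate `o` decides when that content influences behaviour: control and storage in one update.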
The interaction between cognitive neuroscience and AI is a remarkably expanding field. Positivist standpoints see AI as having pro-social impacts on society, such as the creation of jobs, aid and insight into psychological disorders, a revolution in the medical sector, and a general increase in our standard of living. Huge advancements are evident in the application of AI to connectome models of epilepsy, helping patients and predicting treatment outcomes (Gleichgerrcht et al., 2018; Taylor et al., 2018). Nevertheless, barriers such as poor imaging resolution hinder developments and applications in the field. In terms of the neurobiology of social behaviour, AI has been able to incorporate both physical biological models and theoretical constructs (ToM, mirror neurons) into social computations (Ranatunga et al., 2013). However, the expanding development of AI within society raises sociological problems, such as the ethical debate over human-robotic rights. Not only this, but could AI perhaps develop a consciousness which surpasses the cerebral capacity of humans? Finally, computational networks modelling the working memory system have allowed AI to bridge the gap between visual perception and motor decision making; nevertheless, the superfluous number of designs reduces the inter-rater reliability and validity of these networks. It is clear that cognitive neuroscience plays a crucial role in the development of AI, just as AI enhances the understanding of certain aspects of cognitive neuroscience. The expanding interaction between the two provides an exciting and futuristic domain, both for the individual and for the greater macrocosm of society. Nevertheless, we should take caution in how far the development of AI progresses, especially where AI plays a role in weaponry and war.