🧠Psychoanalysis of Artificial Neural Networks (aka AI).

Reeshabh Choudhary
9 min read · Oct 14, 2024
Psycho-analysis of AI

👷‍♂️ Software Architecture Series — Part 31

There are no inventions in the virtual realm, only adaptations or manifestations of ideas already existent in the physical realm. Be it web encryption mechanisms or software architectural designs, all are inspired by something which has already been implemented and used in the physical world. And Artificial Intelligence is no exception.

The term artificial intelligence, aka AI, is no stranger to us anymore; on the contrary, it is the talk of the town! At the time of writing, AI systems have slowly started to creep into our daily lives via integration into phones, laptops, the web, etc., and intend to make our decision making and jobs easier. The sole agenda is to have a virtual companion to guide us through complex tasks, or even perform them end to end whenever possible.

This virtual companion (or AI agent) was designed to mimic human brain functions such as memory and learning. I have earlier written a short article about “How did ‘The Natural Selection of Brain Wiring’ inspire neural networks in Machine Learning?”. Today we will dissect how it impacts our lives through its decision making. But before diving straight into the psycho-realm, let us build some context about how a neural network functions.

🥅Design of a Neural Network

An artificial neural network takes its inspiration from the Hopfield network, designed by John Hopfield in 1982, who was recently awarded the 2024 Nobel Prize in Physics for constructing methods that helped lay the foundation for today’s powerful machine learning. The Hopfield network draws its strength from mimicking the associative memory function of the brain. To put it in simple terms, imagine recalling a fairly unusual word which we rarely use, such as the one for the supportive structure for the rider of an animal. We scan the words stored in memory: what was that word? Syll...able?? Umm... no... it is ‘saddle’. This is what the associative memory function of our brain looks like. The Hopfield network can store patterns and has a method for recreating them: when the network is given an incomplete or slightly distorted pattern, the method finds the stored pattern that is most similar.

Say we feed the network two simple 3×3 images, Pattern 1 and Pattern 2, and then present it with an incomplete version of Pattern 1. The network will use the stored patterns to recreate/complete the image. Let us understand how this works.
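For concreteness, the sketches that follow use a little Python. Each 3×3 image is flattened into a vector of nine values, +1 for an active pixel and −1 for an inactive one. The two patterns below are illustrative stand-ins of my own choosing; any pair of sufficiently distinct patterns would do:

```python
import numpy as np

# Two illustrative 3x3 patterns, flattened row by row.
# +1 = active pixel, -1 = inactive pixel.
pattern1 = np.array([+1, -1, +1,
                     -1, +1, -1,
                     +1, -1, +1])   # an "X"-like checkerboard

pattern2 = np.array([+1, +1, +1,
                     -1, -1, -1,
                     +1, +1, +1])   # horizontal stripes
```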

A node (also called a neuron) in the Hopfield network is a binary unit, which means each neuron can only be in one of two states, active (+1) or inactive (−1). Each pair of neurons in the fully connected network is connected by a weight w_ij (the weight between the i-th and j-th neurons), which represents how their states are correlated across different patterns. These correlations are captured through a learning rule (like Hebbian learning) and stored in the weight matrix.

To put it simply, the weights are adjusted according to the correlation between the states of neurons in the patterns. So, if the i-th and j-th neurons tend to be in the same state (both +1 or both −1) in many patterns, the weight w_ij between them will be positive, indicating that these neurons should support each other’s states. However, if they tend to be in opposite states, the weight will be negative, an indication to adopt opposite states.
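A minimal sketch of this idea, assuming the common Hebbian formulation in which w_ij is the average over the stored patterns of the product of the i-th and j-th states:

```python
def hebbian_weights(patterns):
    """Build the weight matrix: w_ij is positive when neurons i and j
    tend to agree across patterns, negative when they tend to disagree."""
    n = patterns[0].size
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)    # +1 where states match, -1 where they differ
    np.fill_diagonal(W, 0)     # no self-connections
    return W / len(patterns)   # symmetric by construction

W = hebbian_weights([pattern1, pattern2])
```

The outer product makes the matrix symmetric by construction, and zeroing the diagonal removes self-connections, which are exactly the two constraints described next.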

The weights across the network are symmetric and there are no self-connections. The idea here is to use the values of these weights to determine how the neurons collectively behave, and by adjusting these weights, we can store patterns in the network. Each pattern (e.g., text, image, etc.) is encoded by adjusting the weights in such a way that the neurons in that pattern have strong correlations, making the pattern an attractor state of the network, a state that the network naturally falls into when given similar or incomplete information. The weights are adjusted so that the pattern represents a low-energy configuration.

When the network is given an input that is close to a stored pattern (even if some neurons are missing or incorrect), it will naturally gravitate towards the full pattern during recall. The dynamics of the system push it toward this low-energy state, effectively reconstructing the stored pattern.

In a nutshell, when we present a partial or noisy version of a stored pattern, the weights (which store the correlations between neurons) guide the neurons back to the correct configuration. Neurons that are supposed to be positive in the full pattern will get positive inputs from their connected neighbours, and those that are supposed to be negative will get negative inputs. The stored weights act like a memory of the patterns.

The system is designed in such a way that each pattern corresponds to a minimum in the network’s energy landscape. A high energy state corresponds to a random, noisy, or incomplete pattern where the neurons’ activations are far from one of the stored patterns. On the contrary, a low energy state corresponds to a stable state, or the attractor state discussed earlier.
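For reference, the textbook form of this energy is E(s) = −(1/2) Σᵢⱼ wᵢⱼ sᵢ sⱼ: the more the neuron states agree with the stored correlations, the lower the energy. As a one-function sketch:

```python
def energy(W, s):
    """Hopfield energy E(s) = -1/2 * s^T W s.
    Stored patterns sit at (local) minima of this function."""
    return -0.5 * s @ W @ s
```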

When an incomplete pattern is presented, the dynamics of the network pull the neurons into one of these minima, restoring the original pattern. A helpful analogy is a ball rolling down into a valley in a landscape: as the neurons update their states, the network slides down towards the nearest valley, and once it reaches the bottom, it stays there. The bottom corresponds to a low energy state (or one of the stored patterns).
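Continuing the sketch, recall can be implemented with asynchronous updates: each neuron in turn aligns itself with the sign of its weighted input, a step which never increases the energy. (The update order and the number of sweeps below are arbitrary choices of mine, not part of the model.)

```python
def recall(W, s, sweeps=10):
    """Let the state 'roll downhill': repeatedly align each neuron
    with the sign of its weighted input until the network settles."""
    s = s.copy()
    rng = np.random.default_rng(0)
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt pattern1 by flipping its top row, then let the network settle.
noisy = pattern1.copy()
noisy[:3] *= -1
print(energy(W, noisy))                    # higher-energy, noisy state
restored = recall(W, noisy)
print(energy(W, restored))                 # lower energy: a valley
print(np.array_equal(restored, pattern1))  # True: Pattern 1 recovered
```

With only two patterns stored across nine neurons, the flipped top row is well within the basin of attraction of Pattern 1, so the network slides back into that valley.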

Now that we understand how an AI agent (or, more specifically, an artificial neural network) makes a decision when completing patterns, let us try to understand how we (real human beings) make decisions in real life.

⚔️Humans: Intuition vs Logic

Daniel Kahneman and Amos Tversky dedicated their lives to studying human behavioral patterns, especially the decision-making ability of individuals in different environments. The legendary researchers identified two systems (System 1 and System 2) in the human brain, which play their part in decision making in crucial situations. System 1 is intuition based and System 2 is logic based. Their research, which is now accepted across the globe, points out that while we think we are making a decision logically (activating System 2), most of the time we are dealing with the biases of intuition (System 1).

Let us do a quick exercise, as suggested by Mr. Kahneman. Below is a personality sketch of an imaginary individual, Tom W.:

Tom W. is of high intelligence, although lacking in true creativity. He has a need for order and clarity, and for neat and tidy systems in which every detail finds its appropriate place. His writing is rather dull and mechanical, occasionally enlivened by somewhat corny puns and by flashes of imagination of the sci-fi type. He has a strong drive for competence. He seems to feel little sympathy for other people and does not enjoy interacting with others. Self-centred, he nonetheless has a deep moral sense.

Please take a sheet of paper and rank the nine fields of specialization listed below in order of the likelihood that Tom W is now a graduate student in each of these fields:

1. Business administration

2. Computer Science

3. Engineering

4. Humanities and education

5. Law

6. Medicine

7. Library Science

8. Physical and Life Sciences

9. Social science and social work

When Mr Kahneman administered this task to a group of graduate students in psychology, most of them ranked computer science and engineering as the most probable streams (you can cross-check your answers honestly). Ironically enough, this group of students was aware of intuition biases and familiar with the base rates of the different fields. Yet they did not engage that knowledge; they were lured by representativeness, ignored the base rates, and did not even doubt the authenticity of the description on which the judgement rested. Had the description not been presented, and had they simply been asked to gauge the probability of Tom joining each stream, they would have taken into account the base rates of the different fields, i.e. the average number of students joining each field in a given set.

The point is that, more often than not, we are lured by the intuitive system, which makes decisions on our behalf and makes System 2 believe we are being logical. The intuitive system works on associative coherence: we weave our stories based on the familiarity of the environment, yielding a self-reinforcing pattern of cognitive, emotional, and physical responses. A trained and conscious mind could instead engage System 2 and use base rates to gauge probability rather than being deviated by representativeness, but that requires effort.

You can try a simple exercise to validate the point made above:

Chocolate Nausea

Although these two words are unrelated, your brain intuitively ends up concatenating a story linking the two: probably, eating chocolate caused nausea. Recent events and the current context often have the most weight in determining an interpretation, and when no recent event comes to mind, more distant memories govern. We often fail to consider the possibility that evidence crucial to making a decision is missing from the context.

When faced with a difficult question, our brain is wired to look for the answer to a related, easier question. The final answer ultimately depends on the experiences we have had in the past as well as the current environment. It is an intuitive process which happens in a flash; it does not take nearly as much effort as you have put into reading this rather lengthy and slightly boring article.

Now that I have made you read two different contexts, one about the working of a neural network and the other about the judgment biases of the human mind, you can probably start seeing the association between the two.

🧠Analysis: When our decision making is governed by AI, not the brain

Neural networks recreate/reconstruct responses based upon the experience they have been trained on. They are designed to look for routes from a high energy state to a low energy state: the quicker they settle into a pattern known to them, the sooner they attain a stable state. The design of neural networks is rigged to mimic System 1.

When we look for the answer to a complex problem and seek the help of AI agents, the agent answers from the experience fed to it (read: its training data set). It cannot think beyond that realm; it cannot create something which does not exist. The opportunity to look for a unique solution is already lost, as we as readers get casually biased towards the context presented to us. More often than not, AI agents are not trained by individuals but by corporate giants, who feed them data so that the agent behaves in line with their interests. We not only lose our capability to think and engage System 2, but we also fall into the trap of a higher agenda which we do not even see. While making critical decisions, we fall into the trap of induced biases. Isn’t the same true of being manipulated by the media and our surroundings? Overspending is one good example of this pattern.

💡As Mr Kahneman advises: while it is true that System 1 is intuitive and fast to respond, we can always consciously engage our System 2. Yes, it takes effort, a lot of effort, but the benefit we reap at the end will definitely be worth it.

🎯Point in a nutshell: It is okay to be intuitive and make use of AI agents to automate tasks. But when it comes to decision making, it is better to opt out of the rat race and take a deep breath!
