AI is not a threat: a brain-computer interface case study


Brain-computer interfaces (BCIs) have recently made a strong resurgence in popular culture, with the help of many researchers and, of course, Elon Musk, who at the unveiling of his startup Neuralink offered this prediction for the future of humankind: "Humans must become cyborgs if they are to stay relevant in a future dominated by artificial intelligence." Musk and others in the BCI field argue that we are under threat from artificial intelligence, and that the only hope humans have of not getting left behind is to fully integrate ourselves with the digital world through "high bandwidth interfaces."

For the purposes of this paper, I am going to assume that sometime in the near future a team of brilliant researchers overcomes all of the philosophical and technical obstacles and creates true artificial intelligence (AI). I will define intelligence as the ability to adjust plans and use plans as a resource for action. This form of AI would reflect human intelligence in that it would be able to recognize its own sentience, which could be either a good or a bad thing depending on how it views humans.

The sentient form of AI is what most people picture when they think of AI. However, there is another, more likely and more dangerous form: one that does not understand its own intelligence but is simply extremely efficient at exploiting the existing technological infrastructure. This AI could pose a greater threat because it has the potential to attack anything it is pointed at, yet it lacks the understanding that what it is doing affects people.

However, even if we are able to achieve AI at either of these levels, would a BCI prevent humans from falling behind in the earthly food chain? BCIs will not save us, not because we lack the technical capability or some magical hardware, but because our basic theory of how the human brain functions does not match how the brain actually works. The psychological term for this flawed understanding is "information processing theory," which holds that the human brain works much like a computer: it takes in input, processes that input, and then produces an output in the form of a behavior or an action. While this may seem like an oversimplification of how the human mind works, it is the model upon which much of our computing technology has been built. When computers were in their infancy, that was a fine place to start, because it was the easiest way to create a usable system. Now that researchers are trying to connect computers directly to our brains, however, these oversimplified theories are reaching their usable limits. No matter how granularly you try to model the brain, you will never get a perfect model, because the brain is always changing and doing unexpected things. So even if we somehow do create true synthetic intelligence, we will not be able to communicate with it through a BCI any more than we can through a normal computer.
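To make the analogy concrete, here is a minimal sketch in Python of the input, process, output pipeline the theory imagines. Every name here is purely illustrative; no real cognitive model or BCI software is being quoted.

```python
# The "brain as computer" picture: a fixed pipeline from stimulus to behavior.
# All function names and the response table are illustrative assumptions.

def sense(stimulus: str) -> str:
    """Input stage: reduce the world to a tidy symbol."""
    return stimulus.strip().lower()

def process(percept: str) -> str:
    """Processing stage: a deterministic transformation of the input."""
    responses = {"loud noise": "startle", "food": "approach"}
    return responses.get(percept, "ignore")

def behave(action: str) -> None:
    """Output stage: emit the behavior."""
    print(f"behavior: {action}")

behave(process(sense("  Loud Noise ")))  # -> behavior: startle
```

The tidiness is exactly the problem: the pipeline always produces the same output for the same input, which, as argued above, is not something the brain ever guarantees.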

So how did we end up in this situation? The information processing revolution began in the 1940s and matured throughout the 1950s. The vision of the information processing school of thought was to reduce every part of human life to a quantifiable flow of information. This school of thought is what led to the idea that human brains were basically just computers and that computers were basically just simple brains. This terribly oversimplified analogy was the foundation for many future missteps. However, the idea of information processing as applied to human beings did not spring out of nothingness. It has its roots in the theories of Descartes, specifically his idea of Mentalism, which holds that the mind is three things: abstract, representational, and rule-based.

Abstract means that the mind is a detached, non-physical thing. Let me start by arguing that the term "the mind" carries too much connotation and misunderstanding to critique directly. Instead, I suggest that a more accurate term for what Descartes was trying to convey is consciousness, which I will define as the ability of an intelligent being to realize that it is an individual with unique thoughts and an ability to affect the world around it. If we take this definition, then we have enough evidence to show that consciousness is not abstract or devoid of physicality, but rather the result of distributed networks of neurons firing in concert. The pillar crumbles: you cannot separate the body from consciousness, because they are one and the same.

The next tenet of Mentalism is representationality: the idea that all human thought is made up of a series of mental models, which are further broken down into frames and scripts. Frames describe the situations in which to apply scripts, and scripts describe how to respond when certain events occur. The pervasiveness of this reasoning is still visible today in computer code as conditional statements: if you see a stop sign, then you stop. This strategy works well for most binary situations; however, it fails to capture the true nature of human thought, because there are never guaranteed responses from the human brain. No matter how granular a mental model is, it is still a model, and thus it excludes elements in order to avoid completely recreating the original. When attempting to model human thought, though, we cannot exclude any elements, because neuroscience has yet to determine definitively which elements are necessary and which are not.
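To see how directly the frame/script idea survives in code, here is a minimal sketch of the stop-sign example written as a conditional. The names are mine, invented for illustration, not taken from any real driving or BCI system:

```python
# A frame/script pair rendered as the conditional logic Mentalism implies.

def respond(percept: str) -> str:
    # Frame: the situation "approaching a stop sign" triggers its script.
    if percept == "stop sign":
        return "stop"       # Script: the prescribed response.
    elif percept == "green light":
        return "proceed"
    # Anything outside the enumerated frames has no defined response,
    # which is exactly where rule-based models of thought break down.
    return "undefined"

print(respond("stop sign"))             # -> stop
print(respond("child chasing a ball"))  # -> undefined
```

The last line is the point: a human driver responds to the unscripted situation instantly, while the rule-based model can only shrug.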

The last tenet of Mentalism is that human thought is logical and can be broken down into a rule-based system. Descartes hypothesized that with a big enough list of rules you could predict any human thought or action. That encyclopedia of truth has yet to be written, mainly because no matter how many rules you create, there will always be a question you have not thought of or an answer you could not predict. This comes from the inherent messiness of the human brain. As Seyed A. Valizadeh notes in his article on brain anatomy, "Just 30 years ago we thought that the human brain had few or no individual characteristics." He went on to explain that we now know each individual brain is as unique as a fingerprint and that each and every experience changes the physical and chemical structure of the brain. With this amount of variation, and the fact that there are on average 86 billion neurons in the human brain, I do not believe there will ever be an all-encompassing set of linear rules or axioms.

Even though we have at this point disproved the pillars of Mentalism, we still see them standing strong at the base of computing today. Philip Agre makes this pointedly clear in his book Computation and Human Experience: "The sophisticated structures and processes that form the basis for AI research are not geared to living in the world; they are geared to replacing it." By this he means that computer scientists have created a set of practices that no longer represent the physical world, but rather an idealized version of it, built on a set of contextless symbols. These symbols, often implemented in a syntactical manner, represent yet another layer of abstraction by which human beings have attempted to quantify and simplify nature.

A series of abstractions is much like a game of telephone: the further you get from the initial artifact, the less representative the abstractions become. This is why the systems we have built on contextless symbols cannot represent human thought. Using contextless symbols to represent human thought would be akin to discovering a new species, naming it, translating that name into a different language, and then expecting someone who has never seen the species to understand what you are talking about from the name alone. This system of inadequate symbols built upon faulty psychological theory is why, at least for the time being, BCIs are going to remain tools that merely extend what we are already able to do with computers.

Using representative mental models will not elevate BCIs to the plane of conscious thought either. As I mentioned earlier, mental models will always be shaped by the constraints of the technology they are built to run on. However, BCIs and the human brain do not share the same constraints. For example, the human brain can store an untold amount of information yet fail to recall even the simplest facts, whereas computers have a finite amount of memory and consistent read and write speeds. Juxtaposing these two models reveals how many compromises each must make to accommodate the other: the human has to express their thoughts in a linear, logical way, even if that is not the most effective or accurate way to do so, while the computer must wait patiently, with its perfect attention, for the slow human to tell it to do something.

I was that slow human when I used a BCI called the EMOTIV EPOC, a 14-channel EEG headset that uses dry electrodes to sense the relative voltage of your scalp. The logic behind the EPOC is the same as that of almost any other information processing device connected to sensors. There is a set threshold, in this case measured in volts, and if that threshold is exceeded on any of the 14 sensors, a signal is transmitted to the device. That signal is interpreted based on where on the brain you have told the device the sensors sit, and the result turns your game character right or left. The game was quite enjoyable to play; however, it was obvious that it was coded to be very forgiving and would almost always complete the objectives even if you were not paying any particular attention to your character. This shows that even the game's creators knew the communication would be limited and built in some bumpers so that players would not immediately get frustrated. This quick illustration shows how signal processing devices have been built on the same flawed theories, using technology that is simply not compatible with human thought.
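For the curious, here is a toy version of the thresholding logic I just described. The channel mapping, threshold value, and function names are all assumptions for illustration; this is not EMOTIV's actual SDK:

```python
# Toy EPOC-style pipeline: compare each channel's reading against a fixed
# voltage threshold and map any crossing to an in-game command.

N_CHANNELS = 14
THRESHOLD_UV = 40.0  # microvolts; an arbitrary illustrative cutoff

# Hypothetical mapping from sensor index to a game action, standing in for
# "where on the brain you told the device the sensors are."
ACTION_BY_CHANNEL = {3: "turn_left", 10: "turn_right"}

def interpret(sample: list[float]) -> list[str]:
    """Return a game command for every mapped channel that exceeds the threshold."""
    commands = []
    for channel, voltage in enumerate(sample):
        if voltage > THRESHOLD_UV and channel in ACTION_BY_CHANNEL:
            commands.append(ACTION_BY_CHANNEL[channel])
    return commands

# One simulated 14-channel reading: channel 10 spikes past the threshold.
reading = [5.0] * N_CHANNELS
reading[10] = 55.0
print(interpret(reading))  # -> ['turn_right']
```

Notice how little of the brain survives the translation: an entire mental state is collapsed into "did this number cross a line," which is why the game needed its forgiving bumpers.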

I recognize that much of the problematic model preventing BCIs from becoming more than tools was created by computer scientists simply trying to get the most efficient results out of the technology they had. Still, there are potential new technologies, such as organic computing, that could reduce or eliminate the need for abstraction. In theory, this could be done by mimicking naturally occurring biological processes with organic materials in a way that lets us retrieve information from them and use them to complete tasks, much as we control inorganic computers today. If this avenue of research proves fruitful, it could be the answer for creating a pathway of genuine communication between humans and the digital world.

However, to reach a level of communication that surpasses what we already possess with inorganic computing, we might have to create organic software so intelligent that it becomes sentient. This poses obvious ethical dilemmas: if we create a biologically based computer that ends up becoming sentient, do we give it the same rights online as humans? I would argue that we should, since technically it would be able to do most of the things that humans can do online. Furthermore, it may actually have more at stake, because its entire life is digital, whereas humans at least have the option to retreat to the analog world. I think organic computing offers many very interesting opportunities, but I would not depend on it to save us or to change the entrenched technological establishment any time soon.

Overall, if a dangerous form of AI is produced, we will not be able to stop it with our current technology or anything built directly upon it. This is because, as I have argued throughout, the misrepresentation of the human mind as an information processing system has led us to create technology that can only be applied to things that fit the mentalist framework. To create a BCI that speaks the same language as our brain, we will need to develop code and hardware that accept and handle the random messiness of human neurons. If we can create such a system, then we may be able to build the high-throughput communication channel that Musk has envisioned, but we will not be able to do it without rethinking how we think.
