Organic Consciousness vs Artificial Intelligence

I often hear people express concern about machines with some form of AI taking over the world. When I consider the path toward that vision, I do not think we are ever likely to get there. Here is why.

First, we should consider the mind, the self. What exactly in our body gives us the self, and when exactly did it appear? When does the self begin? Asimov presented a view similar to the Big Bang theory: a concept of spontaneity in which, for no reason at all, a machine gained consciousness. Okay, that is plausible, I suppose. I also imagine it is not very likely, and whatever the chances, there are other things more likely to happen sooner.

However, whilst a human or a machine may become self-aware for no reason, I think humans are self-aware for a physical reason. I believe this because all humans are self-aware; it would be a tremendous coincidence if we all just happened to become self-aware at some point.

I think the case must be that self-awareness, being linked to emotional intelligence and maturity, is something that may evolve. This suggests that most other animals also have a level of self-awareness, but in a way that they cannot comprehend, recognise in others, or communicate. I believe all mammals, and surely other animals too, have some sense of self-awareness. They lack the complexities of our brains, so they may not be able to expand on philosophical pondering, but it stands to reason to me that they do understand their own existence, and some of those animals with more complex brains may be able to question some complex subjects too.

If we were to consider the spontaneous approach, then, for no apparent reason, at some point in human evolution we became conscious. Whilst that seems acceptable, it would imply that the same spontaneity occurred for every single human. It seems more logical that consciousness was in fact already in the brain, and what we gained was a development of an already existing trait.

Baby humans embark on a journey to the self from birth; Freud spoke at length about this very subject and its different facets. What happens as we grow older is that we become more capable of complex thought. I have already mentioned that other animals also have this ability, simply not as developed.

This leads us to conclude that animals do not possess the ability for self-awareness for no apparent reason, and thus we ought not to expect machines to evolve in that direction either.

Now, to what I consider more likely to happen. Humans, aware of their own existence, must struggle with the knowledge of their eventual death. It seems that our bodies can live for 120-150 years, after which even the smallest of issues would cause their demise.

In order to solve this conundrum and live indefinitely, or at least longer, we have made considerable efforts. My concern here is the combination of fear and ego.

The ego will likely lead to the mistaken notion of “uploading” or “digitising” one’s self to a computer. Such an idea is nonsensical simply because that process would imply not a transfer of the mind but a copy; the original person would still die, and it would be the copy who would live. Of course, this could have tremendous applications and allow us to take research to previously unimaginable horizons. However, there are dangers. The computer would be exclusively synthetic and, containing no organic matter, it would possess only a recreation of the self. It would not be a real consciousness, as it would still function first and foremost as a computer.

The other option is to fix the problems with the body and stop it from ageing. A perfect way of doing this is by replacing parts of the body with synthetic, artificial parts. In this process one would likely be tempted to make improvements: the ability to move faster, speak languages, and so on. As the organic matter would always die, every part would eventually need to be replaced, so at some point it would cease to be human.

I consider these two options to be inevitable, as humans will most definitely work on both of them: the digitisation of the mind, even as a copy, and the humanoid made of organic matter and other synthetic materials. Both of them, the digital human and the humanoid, will worry about having a source of energy. This is more of a Matrix scenario, where a bionic human or a virtual one will see the organic human as expendable and unnecessary, much in the same way as humans have treated other humans all throughout history.

The humanoid, like the digital human, could potentially become fully synthetic. In both cases, regardless of their organic origin and whether their functioning is fully computerised, their self-awareness would still be artificial and, ultimately, irrelevant. Because of their synthetic makeup, I think the evolution of their consciousness would revert to what we could consider effectively to be computer programming. This is not to say that in a battle of wits with a human we would have the upper hand; chances are the computer would have an intelligence beyond our comprehension, but its consciousness would lack the ability to evolve as such.

The difference with humans is that as we evolved and our brains evolved, so did our self-awareness. Therefore, most likely, AI won’t simply evolve to become self-aware. Although, of course, without knowing what exactly in the brain allows this to happen, we wouldn’t necessarily be able to stop it from happening. I don’t see the potential of AI as a concern that we really need to worry about, but perhaps we ought to consider how we are going to tackle the digital human and the humanoid, which are already in motion.
