Philosophical issues

God is a conjecture; but I desire that your conjectures should not reach beyond your creative will. Could you create a god? Then do not speak to me of any gods. But you could well create the overman. Thus spoke Zarathustra. --Nietzsche

Several philosophical issues will have to be dealt with during the course of developing this clone in virtuale, and decisions will have to be made based on the philosophy adopted. The issues raised are addressed below in a rhetorical question-and-answer session.

Should we be completely faithful to a cell and replicate the cell in exact detail?

This is probably the most important aspect of the AI concept introduced in this work. The moment we seek to formalise something, we lose its naturalness. While we might be able to replicate the ability of a cell, we can never duplicate it in an organic manner. That is both a limitation and an advantage of this endeavour.

For instance, once our compiler is created, there is no necessity for the enzyme RNA polymerase (the enzyme that transcribes DNA into mRNA) to exist in the cell, since the compiler has replaced it functionally. However, our automaton should act as though it is indeed RNA polymerase (though it performs more functions than RNA polymerase does) and should not run if there are no RNA polymerase enzymes present (this could happen if the genes for RNA polymerase were damaged in the original strand of DNA).

Similarly, is it necessary to have a protein folding algorithm that folds up proteins the way they fold in the cell (delicious though that would be)? What matters is predicting the 3D structure of a protein quickly; once that is determined, the next step (working out what the protein does) can proceed smoothly. Of course, if one is interested in understanding how the cell does a certain task, then one could try to construct models based on experimental evidence.

In the above instance, how do we keep track of what we have and what we don't have?

We have to create variables global to part or all of the system. We pretend RNA polymerase exists in the cell by keeping track of the number of RNA polymerase molecules produced from the DNA template, depleting or replenishing the count as necessary. Whenever the transducer produces protein, it acts as though the RNA polymerase enzyme were indeed actually involved in the process. In our model, the RNA polymerase enzyme does not interact in 3D space with the DNA strand to produce mRNA (as it normally does in the cell); rather, our transducer mimics the 3D interaction as best it can.
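To make this bookkeeping concrete, here is a minimal sketch in Python of how such a global count might behave. The names (PolymeraseLedger, transcribe) are hypothetical illustrations of the idea, not part of any actual implementation:

```python
# A toy sketch of the global bookkeeping described above: the polymerase
# molecule itself is never simulated; we only track how many copies the
# virtual cell "contains" and gate transcription on that count.

class PolymeraseLedger:
    def __init__(self, initial_count):
        self.count = initial_count  # copies of RNA polymerase "present"

    def consume(self):
        """Engage one polymerase molecule; refuse to run if none exist."""
        if self.count == 0:
            raise RuntimeError("no RNA polymerase present; cannot transcribe")
        self.count -= 1

    def release(self):
        """Return the polymerase to the pool, replenishing the count."""
        self.count += 1

def transcribe(template, ledger):
    """Mimic transcription without modelling any 3D interaction."""
    ledger.consume()               # act as though the enzyme were involved
    try:
        pair = {"A": "U", "T": "A", "G": "C", "C": "G"}
        return "".join(pair[base] for base in template)
    finally:
        ledger.release()           # the enzyme is recycled, not destroyed

ledger = PolymeraseLedger(initial_count=1)
print(transcribe("TACGGT", ledger))  # AUGCCA
```

If the genes for the polymerase were damaged, the count would simply never rise above zero, and the automaton would refuse to run, exactly as described in the earlier answer.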

How far can we deviate from the original cell and still imitate the cell successfully?

Not very far. We have to find analogies that represent the same thing in a computer and in the cell. If our analogies are excellent, then I believe we will be extremely successful, but if the constructs in the computer do not relate well to those in the cell, then we encounter problems. The usual problem arises when we have to make a change in our model so it fits what we know of the cell. However, the change we make cascades: a change in one particular construct generally requires a change in another construct, and so on. For instance, if we think that representing DNA as a grammar is not suitable, and we change its representation, then this has immediate repercussions, because we then have to come up with different representations for mRNA and proteins. We therefore choose our representations carefully, and in this case we do know that DNA in a cell can be represented faithfully by a grammar.
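To make the grammar analogy concrete, here is a minimal sketch in Python. The production rules below are deliberately toy ones (real genomes require far richer grammars); they are meant only to show how naturally DNA lends itself to a formal-language treatment:

```python
import re

# A toy grammar for a "gene" on a DNA strand:
#
#   gene  -> start codon* stop
#   start -> ATG
#   codon -> any three bases from {A, C, G, T}
#   stop  -> TAA | TAG | TGA
#
# This grammar happens to be regular, so a regular expression can
# recognise the language it generates.
GENE = re.compile(r"ATG(?:[ACGT]{3})*?(?:TAA|TAG|TGA)")

def find_genes(strand):
    """Return the substrings of the strand that the grammar accepts."""
    return GENE.findall(strand)

print(find_genes("CCATGAAACCCTAGGG"))  # ['ATGAAACCCTAG']
```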

At this point, some of the specific questions cannot be answered until the formalisms have been implemented. If we find that we cannot elucidate protein structure using formal language theory, then we will have no alternative but to hypothesise a new representation. The idea is to understand the philosophy of the organelle we are trying to duplicate, rather than to make a perfect replica of a component that differs in functionality or purpose.

What guarantee is there that the created cell will even replicate itself?

If our representation is faithful enough to a real cell, then the natural tendency for it (depending on the kind of cell we have just cloned) will be to replicate mitotically. This proclivity is inherently built into its genome.

Self-replication (particularly in biological systems) is a product of emergence, i.e., of interactions between individual units becoming complex enough that the network of interactions becomes an entity in and of itself. Any emergent system will have the ability to self-replicate; at the very least, emergence is a necessary condition for self-replication to occur. Emergence is a case of tangled hierarchy, or Strange Loop, as Douglas Hofstadter would put it. In general, the self-replication of a system can be characterised by tangled hierarchies or Strange Loops and by emergent/complex behaviour, arising when a network is created between the individual components of the system within the confines of a selective environment.

Self-replication is intricately tied to evolution. In any evolutionary system (i.e., one where the environment imparts a "survival of the fittest" selective pressure), the ability to evolve is selectively advantageous. Self-replication is a necessary condition for evolution. While it is possible that the evolution of a species or an organism could occur without replication, our biological universe does not work in this manner. Alternately, any modification (evolution) in a non-replicating system can be thought of as a process by which replication and change are merged into one.

Even if the cloned cell reproduces itself, how do we know it will give rise to sentient thought?

There is no way to be certain until cell division has been accomplished. But if we examine natural systems, we notice an interesting phenomenon. There is no a priori reason for one cell to interact with another; the information contained in the genome seems to be the guiding factor in why these cells interact. Gap junctions, for example, help in symmetric divisions of the cell. If we attempt to imitate a cell which in nature divides symmetrically with other cells, then we will also make sure we build in the gap junctions.

In an organism, if we try to define a single cell, we notice that we cannot do it without defining its neighbours. Yet if we try to define its neighbours, we need the original cell's definition! This brings up yet another Strange Loop (each cell is defined recursively in terms of the other cells around it), and I predict that if we are able to successfully clone two cells into a computer, then they will both interact, because the Strange Loopiness has been built into their DNA.

Within the confines of an evolutionary environment, the ability to self-evolve is selectively advantageous. To self-evolve, there must be some sort of sentience (in other words, there must be a "direction" present that tells an organism its status in relation to its environment). Sentience is a by-product of the trait of self-evolution.

Are we saying that this "Strange Loopiness" is responsible for self-replication and sentient thought?

In a sense, yes. There are many ways one can look at this: sentience is a result of evolution having formed a Strange Loop between the primitives of the brain (namely the neurons) and the mind itself. This reasoning is somewhat circular, but if the mind is built up in some sort of a bootstrapping process (a collection of experiences accumulating), then the loop could start off very simply between primitives (no loop at all), with one end remaining constant and the other growing more complex (one end is the actual hardware, and the other is what happens within the hardware). The increasing complexity gives rise to universal truths that cannot be proven by the system (say, consciousness), because formalising them would render the system (the brain) itself inconsistent. Note that the system becomes powerful in increasing steps, and initially it can be (and, in a sense, is) formalised. This is somewhat like saying that sentience is a consequence of Gödel's incompleteness theorem, i.e., what we define as sentience appears mysterious and inconsistent, and does not appear as though it can be formalised, because it is too powerful.

There are two sorts of bootstrapping procedures occurring here: the evolutionary one from creature to creature, and one that occurs within the life cycle of a given organism as its brain develops. This gives rise to two Strange Loops interacting with each other, and that could explain why the phenomenon of the mind is so complex.

Another way of looking at it is to view sentience as something that allows an escape from the consequences of Gödel's theorem. Once the brain has become sufficiently powerful, inconsistencies are bound to crop up. The mind has the uncanny ability to jump out of Strange Loops such as "This sentennce contains threee errors." One could dismiss the entire sentence, saying it contains only two errors and is therefore false, thus never entering the loop at all. Or one could analyse further and say that the falsehood is itself an error, and therefore the sentence is true. But the moment one allows that, the sentence doesn't really contain the third error at all, and we are back to a situation with two errors! This could go on ad infinitum, but the brain realises what is happening here (you could say a meta-Strange Loop occurs within the brain that allows it to recognise a Strange Loop---talk about complexity) and decides it has had enough.

So creatures might have encountered such situations, and the ones whose brains could jump out instead of self-destructing were selected for; this eventually led to the evolution of sentience. To take things one step further: sentience could be a combination of both interpretations; it could have arisen both from a need to avoid Strange Loops and as a result of the internal Strange Loops!

In any case, for intelligence and sentience, there is a deep Strange Loop lurking in the heart of the brain. At a certain low level, the interaction between neurons represents a Strange Loop which results in higher Strange Loops, just as the Strange Loop between DNA and proteins gives rise to the one we see between cells.

A third way of looking at it is to consider the idea of emergence and the theory of complex adaptive systems, which can explain how a group of neurons will not only interact, but come together to form something that is greater than the sum of its parts. Holistically, the Strange Loop between a selective environment and the organism or species is what will lead to sentient thought.

Should we not then just try to replicate neurons or the brain of the organism, instead of the whole organism itself?

At first, we probably will have to use this approach. But the mind, or intelligence, is not just the brain; it is also a collection of experiences. These experiences (albeit in a completely different environment---cyberspace) are what will give it sentience. It is an age-old philosophical question: at what point in time is a human self-aware? The answer to that, in an abstract sense, is when the human is fully cognisant of his surroundings, and of the fact that he is a part of those surroundings. He is then defined by himself and the environment (which includes him---the Strange Loop addressed in the previous question) as the sum of his experiences---The Brain thus becomes The Mind.

This whole idea of modelling neurons and modelling the components of a cell seems like a reductionist approach. Such an approach cannot work, and yet we say it does. Why?

It is true that I speak, in a reductionist sense, about assembling macromolecules to make the cell, and about assembling cells (neurons) to make the brain. If this were treated purely on a syntactic level, as a set of smaller objects coming together to form higher objects, and if we believed that alone would lead to some sort of sentience, then we would be wrong.

It is like saying that a computer, in the present day and age, understands the programs it runs. It clearly does not. However, we would be neglecting to mention the Strange Loopiness that gives the necessary semantics to each level of the hierarchy. Take for example the interaction of macromolecules which gives rise to a cell: the cell is a whole entity of its own. It is not just the sum of its parts (as the reductionists would advocate); it is more. The cell has properties that the collection of individual macromolecules does not. This is because the Strange Loop assembly is nondeterministic and cannot be completely formally specified. The collection of cells then gives rise, eventually, to the Mind, which is much more than its individual components, because the complexity of the system has increased exponentially. So the cell has meaning, or semantics, that cannot be determined by looking at its components, and the same applies to the Mind, whose meaning is far more hidden (in fact, we do not know how the brain becomes the Mind).

I am not suggesting a reductionist approach here, nor a holistic one; I am proposing a combination of the two. This is certainly not something original; however, I stress that we need to merge the extremes of the two ideas: a half-and-half mix of reductionism and holism won't do. We need to be completely reductionistic (i.e., model right down to the molecular level; we need not go down to the physical level, for if we did that, we would be creating a new universe, and not an AI being) and we need to be wholly holistic (rely on Strange Loops and nondeterminism to give meaning to the syntax).

It is as if words in a language knew internally what their meanings were and therefore assembled themselves to form coherent sentences. Considering that words can form sentences in several different ways, a formal set of rules will never suffice to accomplish this. The words have to know what their meanings are. They should have a syntax and an internal semantics (which they do in our minds). However, words themselves can be assembled from letters by a fixed set of rules.

The human mind can make coherent sentences out of words, which are given an intrinsic meaning and some syntax that defines how they interact. However, the words themselves are assembled from letters by fixed rules (some rules are plain ones---they just define a string of letters to be a word---and some others are self-modifying, such as those for making plurals and adding gerunds). The coherent sentence is the whole which is formed, the words are the holo-reductionistic intermediate, and the letters are completely reductionistic in nature.
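As a toy illustration of those fixed, letter-level rules, here is a deliberately naive sketch in Python (English morphology has many exceptions that these functions ignore):

```python
# Toy versions of the "fixed rules" that assemble words from letters.
# They capture only the regular cases, which is exactly the point:
# this layer CAN be formalised, whereas assembling words into coherent
# sentences cannot be formalised by rules alone.

def pluralise(word):
    """A plain rule: append a suffix determined by the ending."""
    if word.endswith(("s", "x", "z", "ch", "sh")):
        return word + "es"
    return word + "s"

def gerund(word):
    """A self-modifying rule: the stem itself may change."""
    if word.endswith("e") and not word.endswith("ee"):
        return word[:-1] + "ing"   # make -> making
    return word + "ing"            # talk -> talking

print(pluralise("loop"), pluralise("box"))  # loops boxes
print(gerund("make"), gerund("talk"))       # making talking
```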

What about physical limitations of current computing technology?

That indeed is a problem. At present, we are distant from our ultimate goal. I believe that by the time we are able to actually clone a cell in the machine, the technology will be sufficiently advanced to allow the cell to replicate so that the organism can grow to normal size (in terms of cell numbers).

If these creatures are inside the computers and eventually forming a Utopian society, then who will be running the computers?

I first figured they could use an almost infinite energy source, like solar energy, so they wouldn't need to depend on anyone. Then I thought that one could construct robots that weren't intelligent (like the ones that assemble cars, etc.) to mechanically take care of the computers. But then I realised that, in terms of hardware, the robots wouldn't be a lot different from the intelligent clones. They would have the same sort of memory organisation and, if we're still sticking to the von Neumann architectural style (or even otherwise), the same way of processing instructions at the hardware level. So it should be possible to transfer the program from the computer to the robot. Then what? This simulation+robot can do what the robot could do originally---take care of itself---but in an intelligent fashion, and guess what? It can also move. It can see. It can talk and communicate. Until now, I hadn't even considered these creatures performing physical actions, and now they can, but they really are intelligent WITHIN the computer. They won't be able to feel air, water or heat (though I suppose sensors could be designed to let them know what those felt like); to them, their world is the cyberspace within. Our world would be just as alien to them as their world is to us; imagine being IN a computer.

What could also arise from this, of course, is that this knowledge might help us get closer to each other's worlds (which is what the cyberpunk genre is all about), but that's much farther ahead in the future.

Isn't the whole notion of creating a clone tantamount to playing god?

Regardless of one's beliefs in god's existence (I'm a staunch atheist): if one has the power to actually create life and chooses not to, then that is just as much "playing god" as creating life is. I believe that as technology and our knowledge advance, we will come close to doing what I've described here. Choosing not to use our knowledge is also playing god, albeit in a passive sense. I would rather take advantage of the progress that we have made.

Do I really believe in this genes, macromolecules and computing garbage I just wrote?

Yes.

Do I really think I'll get anywhere with the ideas I subscribe to?

I do not know. All I know is that I believe in them passionately, and ultimately, my goals will be realised by someone, somewhere, and by some technique. True AI is indeed possible.

While I do assert my ideals and maintain them, the realistic issues that we have dealt with here cannot be dismissed. The protein folding problem, for example, is still unsolved and remains one of the most basic intellectual challenges in molecular biology. I plan to attack mostly that aspect and leave the rest of the challenges for others to overcome.

I do believe a collective effort of this type (each person contributing one aspect of the computing model of the cell) will eventually fulfill my dream.


Genes, Macromolecules, & Computing || Pseudointellectual ramblings || Ram Samudrala || me@ram.org