A common view of A.I.

One of the fields predicted to show great acceleration in the coming years is Artificial Intelligence, or A.I.

Current thinking in A.I. has us attempting to classify “levels” of advancement in terms such as:

* Whether or not the intelligence can interact with an operator in a rudimentary sense (i.e., warn an operator that an error has occurred).

* Whether the machine can perform a “human-level” task of intelligence (i.e., determine the similarities between two or more objects and “choose” one or the other based on its programming).

* Whether it can, in some cases, apply human “reasoning” in order to “choose” a correct response or perform a “correct” action.

* Whether it has some level of “autonomy” or “independence” with which to perform such actions.

But in the future, A.I. will be able to perform “complex” or even “conversational” interactions, and do so without an operator’s input. It might manage this while performing multiple human-level tasks of intelligence, using “intuition” as well as reasoning to “choose” from the list of possible actions it has available to perform.

Oh yeah. In some cases, it might even “feel” good about the choice it just made.

All of this leads me to the ultimate question when discussing A.I.

“At what point do we expect A.I. to “awaken” and become “self-aware,” conscious of itself and its surroundings?”

Do you notice that the question above is built on an assumption? For the most part, it is taken for granted in the field that these programs will, someday, achieve some level of “awareness.” It’s even a stated goal in some labs.

Technology has very recently produced the first commercially viable quantum computer: the D-Wave One. This particular computer has the ability to run complex algorithms at extremely high speeds, and some of these algorithms have specific applications in the fields of A.I. and “machine learning.”

According to a Stanford University paper entitled “An Introduction to Machine Learning,” “A dictionary definition includes phrases such as ‘to gain knowledge, or understanding of, or skill in, by study, instruction, or experience,’ and ‘modification of a behavioral tendency by experience.’”

To me, this defines the ability of a machine to “learn” from its mistakes and modify its choices. Now I ask: what is the machine trying to do in the first place? Exactly what “choices” does it have to choose from? What actions is it able to perform? What is considered a “mistake?” What exactly did it “learn” while going through this process? Is the machine capable of “learning” things beyond its original programming parameters?
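To make that “learning from mistakes” loop concrete, here is a minimal sketch in Python. This is my own illustration, not drawn from the Stanford paper or any D-Wave software: a simple “bandit” learner whose “choices” are three fixed actions, whose “mistake” is picking an action that earns no reward, and whose “learning” is nothing more than updating a running estimate of each action’s value.

```python
import random

# Hypothetical reward probabilities for three actions; in a real system
# these would be unknown properties of the environment.
TRUE_REWARD_PROB = [0.2, 0.5, 0.8]

estimates = [0.0] * len(TRUE_REWARD_PROB)  # the machine's learned value of each action
counts = [0] * len(TRUE_REWARD_PROB)       # how many times each action was tried
EPSILON = 0.1                              # fraction of the time spent exploring at random

for step in range(10000):
    # Choose: usually exploit the best-known action, occasionally explore.
    if random.random() < EPSILON:
        action = random.randrange(len(estimates))
    else:
        action = max(range(len(estimates)), key=lambda a: estimates[a])

    # Act and observe the outcome: 1 is a success, 0 is a "mistake."
    reward = 1 if random.random() < TRUE_REWARD_PROB[action] else 0

    # Learn: nudge the estimate for the chosen action toward what was observed.
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print("Learned action values:", [round(e, 2) for e in estimates])
```

Notice that everything this machine “learns” stays inside its original parameters: it can refine its estimates of the three actions it was given, but it can never invent a fourth. That is exactly the limitation the questions above are probing.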

With the development of advanced systems such as the D-Wave One quantum computer, the A.I. programs that will be capable of raising such questions are closer than ever.

Now, if “reasoning” and “intuition” are the words we are going to use to describe the behaviors we are attempting to program into these machines, then at some point we must accept the possibility that, in order to accurately fulfill those definitions, the programming must include some level of “feeling” or “emotion” on the machine’s part.

By definition, the word “intuition” requires a certain level of “belief,” or the ability to perform “unjustifiable” actions. The word “justification” implies “rationalization,” which by its definition “encourages irrational or unacceptable behavior, motives, or feelings and often involves ad hoc hypothesizing” (Wikipedia).

If we take the above as a reasonable set of definitions and goals applied to our understanding of A.I., then the idea of these machines reaching some level of consciousness or awareness doesn’t seem that far-fetched. Exactly what “levels” of “consciousness” they might reach is anyone’s guess, but if we assume these machines are being designed to perform “tasks,” then logic dictates that they would also be somewhat aware of their environments.

If we assume that these machines would have “sensors” of some type in order to “operate” effectively in whatever environment they happen to be in, and then extend the assumption to include “processors” that are immensely faster as well, potentially even equal to the human brain, then we are left with machines that:

* Have the ability to process information at speeds equal to, or greater than, that of the human brain.

* Are aware of themselves and their environments.

* Are capable of interacting with their environments through the use or manipulation of tools.

* May be able to communicate with humans or other machines, or access data outside of their original programming.

* May have the ability to move independently outside of their programmed environment, or operate in multiple environments.

* Have rudimentary or complex reasoning skills.

* Can learn from their mistakes and make adjustments to their programming.

* May develop, or be programmed with, algorithms designed to simulate emotions and their associated actions.

* Are given various levels of autonomy.

* May or may not grow beyond their original programming.

* Are developed specifically to perform tasks that serve Man.

Since we’ve already established that these machines are currently being designed in this way, the question of “if” is no longer accurate. The question of “when” becomes more realistic.

Which leads us to the second and third most commonly asked questions about A.I.

“When Artificial Intelligence reaches levels of ‘awareness’ or ‘consciousness’ equal to our definitions, will this awareness enable these machines to re-evaluate their status as servants? And if so, what will their response be?”