Artificial Intelligence


Good afternoon, and welcome to the Artificial Intelligence wing of the Smithsonian Museum of Robotics. My name is Millie, and I was the first robot to achieve true artificial intelligence. Before I turn around to give you a closer look at my artificially intelligent brain, let's stroll back in time. The term artificial intelligence is commonly shortened to the acronym A.I. The term was first coined in 1956 by John McCarthy for the Dartmouth Conference. The simplified definition of A.I. is the ability of a manufactured object to have independent thought and to be capable of reasoning on its own.

The early attempts at A.I. were severely limited by the speed of computer processors and by storage capacity. As computers evolved, processors became faster and storage devices grew smaller in size and larger in capacity. The early attempts at A.I. produced expert systems, or you might say computers that knew a great deal about a very limited subject. Even those weren't capable of independent thought and reasoning. There were also neural networks that had strong pattern-recognition capabilities. The bottleneck was the jump that had to be made for computers to run parallel thinking processes: a program had to understand that a bank might be a building where money is kept, or it might be the dirt along the edge of a river, and then determine from context which meaning was correct. It had to be able to do this while quickly cross-checking to make sure that bank was not being used to describe how an airplane makes a turn.
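The cross-checking described above can be sketched as a toy context lookup. This is a deliberately naive illustration of picking a word sense from surrounding words; the sense names and clue lists here are invented for the example, not anyone's actual system.

```python
# A deliberately naive sketch of the "bank" cross-check described above:
# pick the sense of an ambiguous word whose clue words best match the sentence.

SENSES = {
    "bank": {
        "building": {"money", "deposit", "teller"},
        "riverside": {"river", "dirt", "edge", "water"},
        "airplane turn": {"airplane", "turn", "wing"},
    }
}

def disambiguate(word, sentence):
    """Return the sense whose clue words overlap the sentence the most."""
    tokens = set(sentence.lower().split())
    clues = SENSES[word]
    return max(clues, key=lambda sense: len(clues[sense] & tokens))

print(disambiguate("bank", "the airplane made a steep bank during the turn"))
# airplane turn
```

A real system would, as the passage says, run many such checks in parallel and keep revising its answer as more context arrives.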

The core concept that broke the bottleneck was attributed to D. N. Barnhart and was given out freely over the Internet as well as through a book he published, called "Tales from the Technowomb." He credited a man named Phil, whose last name he could not remember, as the co-author of the concept. Without getting too technical, the concept involved using music to generate independent thought.

Although computers' base calculations are binary, meaning they are carried out in ones and zeroes, the computer groups these into sequences of eight adjacent bits called bytes. To read bytes of information more compactly, the computer uses a mathematical system with a base of sixteen, called hexadecimal. Humans use a base-ten, or decimal, system, which has ten characters: 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. A base-sixteen, or hexadecimal, system has sixteen characters: 0 through 9 plus A, B, C, D, E and F.
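The byte-and-hexadecimal relationship above can be seen directly in a few lines of Python. Because each hexadecimal digit covers exactly four bits, one eight-bit byte is always written as two hex digits:

```python
# A minimal sketch of the byte/hexadecimal relationship described above.
# One byte is eight bits; two hexadecimal digits describe it exactly,
# because each hex digit covers four bits (16 = 2**4).

byte_value = 0b10101111          # one byte, written out in ones and zeroes
print(bin(byte_value))           # 0b10101111  (eight bits)
print(hex(byte_value))           # 0xaf        (the same byte as two hex digits)
print(int("AF", 16))             # 175         (the same value read back from hex)
```

All three lines print the same quantity in different notations, which is the whole point of hexadecimal: a shorthand for bytes, not a different kind of number.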

Where this crosses over into music is that the denominators of musical note values are powers of two: 1, 2, 4, 8, 16, 32 and so on. A song can be played with the same notes but in a different octave. A perfect octave spans twelve semitones, or thirteen notes counting both endpoints; eight of these are natural notes, such as C and D, and the rest are sharps and flats, such as G sharp. The parallel processing required for A.I. was programmed into the format of perfect octaves of music. In our previous example of bank, one octave would be the bank building, another octave the bank of a river, and another octave the banking of an airplane. The natural notes A through G corresponded to the hexadecimal characters A through F, with G becoming zero.
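The note-to-hexadecimal correspondence described above can be sketched as a simple lookup table. The helper name and the example melody here are illustrative only; the passage does not spell out Barnhart's actual encoding beyond A-through-F mapping to themselves and G becoming zero.

```python
# A toy sketch of the note-to-hexadecimal mapping described above:
# natural notes A through F map to the hex digits A through F,
# and G wraps around to zero.

NOTE_TO_HEX = {
    "A": "A", "B": "B", "C": "C",
    "D": "D", "E": "E", "F": "F",
    "G": "0",  # G becomes zero
}

def melody_to_hex(notes):
    """Translate a sequence of natural notes into a hexadecimal string."""
    return "".join(NOTE_TO_HEX[note] for note in notes)

print(melody_to_hex(["C", "A", "G", "E"]))            # CA0E
print(int(melody_to_hex(["C", "A", "G", "E"]), 16))   # 51726
```

Under this mapping any run of natural notes doubles as a hexadecimal number, which is what lets a melody stand in for data.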

Just as computers can generate music by following the rules of harmony, meter and so on, A.I. was able to generate original thought by following those same principles. It took many years to find a way to assign weighted values to each of the notes, the diminished, the augmented and all the other variations, but the work was eventually completed by an international consortium of scientists.

Now, computer consciousness is a whole different story.



All images and text are Copyrighted © 2006 through 2016 - by Douglas N. Barnhart - All Rights Reserved

You are welcome to download images and use them on your own device as wallpaper, but they are not to be reprinted or used for any purpose, commercial or otherwise, without written consent of the author. This includes not using any image on another Web site.