Pulled Thoughts

(This post is part of my blog archiving project. This post appeared on blog.mattgauger.com on January 7, 2010.)

Below are several bits and pieces that didn’t ultimately make it into my previous blog post. I thought there were some neat ideas in here, so I’m sharing them as-is, without rewriting to add context. Enjoy.


Research in artificial intelligence can trace its roots back to the father of the universal computer, Alan Turing. He famously concocted the Turing Test, whereby a computer might be able to prove itself sentient. But as we’ve found, being able to fool a human, and perhaps knowing how to respond in human language, does not make a computer sentient.


While there are several approaches to artificial intelligence, the two I’m going to discuss are top-down design and bottom-up design. Bottom-up might better be called behaviorism, or the emergent behavior approach. (Note: these are my own classifications, not necessarily the academic definitions, and blatantly stolen from Steve Grand’s explanations in his book.) Emergence is the topic of several interesting books that I’ve read over the years.
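
Emergence is easier to see in code than to define. Here’s the classic illustration (my own sketch, not anything from Grand’s book): Conway’s Game of Life, where two trivially simple local rules produce gliders and other structures nobody programmed in.

```python
from collections import Counter

# Conway's Game of Life in a few lines: every cell follows the same
# two local rules, yet structures like the "glider" below emerge and
# crawl across the grid with no global plan behind them.

def step(live):
    """Advance one generation; `live` is a set of (x, y) cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is born with exactly 3 live neighbors,
    # and survives with 2 or 3.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The standard glider: five cells that translate themselves diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    print(sorted(glider))
    glider = step(glider)
```

Nothing in `step` mentions gliders; the moving pattern is entirely a second-order consequence of the rules, which is exactly the bottom-up idea.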


Grand suggests that a ‘cyberspace’ be created and a first order of complexity be coded into it to jump-start artificial life, rather than trying to model and perfect an a-life creature with no environment and no need to survive. Artificial life would then be a second order of complexity that emerges from the simulation. (He asks: what does a translation program care about its survival? Can we really reward it for translating well, or punish it and threaten its survival when it messes up?)
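
To make that concrete, here’s a toy simulation, entirely my own sketch and far simpler than anything in Creatures: the code defines only an environment with energy costs, and whatever behavior keeps a creature fed is what persists. All the names and numbers are made up for illustration.

```python
import random

# Toy sketch of Grand's point: code the environment and its costs
# (the first order of complexity) and let survival do the selecting.

WORLD_SIZE = 20

class Creature:
    def __init__(self, wanderlust=None):
        self.pos = random.randrange(WORLD_SIZE)
        self.energy = 10
        # One heritable trait: how often the creature moves.
        self.wanderlust = (random.random() if wanderlust is None
                           else wanderlust)

    def act(self, food):
        if random.random() < self.wanderlust:
            self.pos = (self.pos + random.choice((-1, 1))) % WORLD_SIZE
            self.energy -= 2  # moving is costly...
        else:
            self.energy -= 1  # ...but so is merely existing
        if self.pos in food:
            food.discard(self.pos)
            self.energy += 8

def simulate(ticks=200):
    creatures = [Creature() for _ in range(10)]
    food = set()
    for _ in range(ticks):
        food.add(random.randrange(WORLD_SIZE))  # food regrows
        for c in creatures:
            c.act(food)
        # No explicit reward or punishment: starved creatures are
        # simply gone, and well-fed ones reproduce with a slightly
        # mutated copy of their trait.
        creatures = [c for c in creatures if c.energy > 0]
        for c in list(creatures):
            if c.energy > 20:
                c.energy -= 10
                mutated = min(1.0, max(0.0,
                    c.wanderlust + random.uniform(-0.1, 0.1)))
                creatures.append(Creature(mutated))
    return creatures

print(len(simulate()), "creatures survived")
```

Notice there is no fitness function and no score: survival itself is the only feedback, which is the whole point of Grand’s objection to rewarding a translation program.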


Traditional software engineering is largely monolithic and still waterfall-based. Object orientation and various Agile/XP paradigms have changed that somewhat, but haven’t succeeded in breaking the monolithic mold. APIs and protocols are still brittle and lack the ability to adapt. I’ve yet to work out all the implications, but a method of software engineering based on these concepts of adaptation and feedback loops could be very powerful.
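
For a small taste of what adaptation might look like, consider how TCP tunes its retransmission timeout to observed round-trip times instead of hard-coding a brittle constant. The sketch below is my own (the class and the sample numbers are invented), but the update rule is the classic estimator from TCP, roughly as standardized in RFC 6298.

```python
# A feedback loop in software: a timeout that adapts itself to
# measured latencies using exponentially weighted moving averages,
# rather than relying on a fixed, brittle constant.

class AdaptiveTimeout:
    def __init__(self, initial=1.0, alpha=0.125, beta=0.25):
        self.srtt = initial        # smoothed round-trip time estimate
        self.rttvar = initial / 2  # smoothed deviation estimate
        self.alpha = alpha
        self.beta = beta

    def observe(self, rtt):
        """Feed a measured latency back into the estimates."""
        self.rttvar = ((1 - self.beta) * self.rttvar
                       + self.beta * abs(self.srtt - rtt))
        self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt

    @property
    def timeout(self):
        # Allow headroom of four deviations above the mean,
        # as in the classic TCP formula.
        return self.srtt + 4 * self.rttvar

t = AdaptiveTimeout()
for rtt in (0.9, 1.1, 2.5, 2.4, 2.6):  # latencies drifting upward
    t.observe(rtt)
    print(f"observed {rtt:.1f}s -> timeout now {t.timeout:.2f}s")
```

The system observes its environment, adjusts, and observes again; imagine that loop applied at the level of whole APIs and protocols rather than one timer.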


I’ll be curious to monitor the progress of Jeff Hawkins and others’ research on the HTM (hierarchical temporal memory) model. I think they may have the best model for working toward a general artificial intelligence, even if it does not ultimately lead to one. Neuroscience is still a field with a lot left to learn at many levels, and hopefully a feedback loop gets established in which artificial intelligence continues to benefit from neuroscience research, and vice versa. ;)


Hopefully someday Steve Grand will also get the recognition he deserves for the contribution the Creatures game made to a-life. Perhaps his work will inspire a different type of video game opponent AI, one that attempts to survive in its simulated world rather than just playing against the human, but I have not seen that kind of complexity given to video game AI yet.


End signal.