Sunday, December 11, 2005

Colloquium Insights

Yesterday I called into the Law of Transhuman Persons conference in Florida to listen to two presenters involved in projects surrounding the development of smarter-than-human artificial intelligence. The first, Peter Voss of Adaptive AI, described the concept and implications of artificial general intelligence, or AGI. The essence of AGI is intelligence that can learn, adapt, and grow within any realm of knowledge, rather than being domain-specific. Voss believes that AGI could become a reality in as little as 3 to 6 years, and that a "hard" takeoff is likely. That is, within months of its development, it will be able to recursively improve its own capabilities to many times the human level.

The second speaker, Eliezer Yudkowsky, gave an interesting talk on humanity's anthropomorphic tendencies. He described how humans attribute human characteristics to non-human entities, and how this presents a major flaw in approaching the issues of AGI. Many will tend to think that non-human sentient beings will have the same desires for possession and control, the same aversion to captivity and harm, even the same will to live. However, there is nothing to suggest that an AGI will have any specific desires, fears, or agendas unless the programmer purposefully places them there. Yudkowsky went on to describe the "psychic unity of humankind" as a set of basic human assumptions shared by humans of all times and places. If we think beyond the assumptions of our narrow human world, we will have a more complete and accurate understanding of general intelligence. Voss complemented this idea by describing how morality can be derived purely from rationality, and that human emotions and religious beliefs should not play any part.
