Sunday, May 15, 2011

Singularity

Having read Gizmodo's article on Google Translate and learning of Larry Page's belief in the Singularity and the future Google envisions, firstly I have to say that Google is AWESOME - I would love to be able to work there - and I also carry some thoughts on this Singularity issue, though no conclusions on the topic yet. On what the Singularity is, I'll just leave this quote right here, from the same Giz article: "The steady improvement of its learning system flirted with the consequences postulated by scientist and philosopher Raymond Kurzweil, who speculated about an impending "singularity" that would come when a massive computer system evolves its way to intelligence."

Envisioning sentient A.I., a few possible futures come to mind: one is a 'Skynet' situation where 'useless' humans are ousted from dominance by the new sentience; another is a brighter future wherein the dominant species is an amalgam of humans and technology.

So we have to consider: how would these situations come about, and in what ways can we act now? To address this, we would have to try to anticipate the issues - should A.I. keep learning and become sentient, would they develop emotion? Would they have some form of moral compass? Would they question purpose as we do? Would individual machines/brains/nodes think autonomously or participate in a hive-mind? What societal structure could be set up within this intelligence?

On A.I. developing emotion, the possibilities are that they (1) pick up emotion from their 'learning activities' à la Google, (2) we hard-wire it into the burgeoning tech, or (3) it somehow develops through a natural progression of thought. Before we can even consider a stance on this issue, we would have to explore the purpose of emotion in living beings. I can only define emotion as a primal form of thought solidified by life experience and societal values, wherein emotional feedback directs us toward a course of action for some purpose, mostly self-preservation or upholding social order. Examples include righteous indignation at crime or anything that conflicts with our social values, love/lust linking to reproduction and preservation of the species, and fear with respect to self-preservation. I once read an article describing a man who took a bullet to the brain and lived, albeit with the section of his brain dealing with emotion damaged; he was subsequently described to have 'changed' and started committing unlawful acts.

In case (1), then, the A.I. could pick up emotion from having scoured tons of forums and news sites where angry, angry people comment on Obama, Osama, or TPL, or love and loneliness from the dozens of emo blogs and updates from 15-year-olds around the world. What I then wonder is: would the A.I. deem this exposure to emotion superfluous and junk it? Or might it be incorporated into their systems through some trick of the adaptive programming? Emotions serve any living being well at a basic level: we release adrenaline due to fear in life-threatening situations, love (whether natural or inculcated) keeps the institutions of marriage, monogamy, and the family unit strong, lust allows us to pick desired partners and possibly desired genetic traits, etc. However, in this day and age most people would have experienced a flurry of emotion leading to supposedly irrational behaviour (current knowledge of marketing exploits this), and we as a species, whether in bouts of navel-gazing or through the genius minds of philosophers, have considered the relevance of emotion in our modern world.
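
To make case (1) a little more concrete, here's a deliberately naive toy sketch of an A.I. "picking up" emotion from scraped text by tallying the emotional vocabulary it encounters. The lexicon, the comments, and the whole word-counting approach are invented for illustration; nothing here describes how Google or anyone else actually does it:

```python
from collections import Counter

# Hypothetical lexicon mapping emotionally charged words to a coarse emotion.
EMOTION_LEXICON = {
    "furious": "anger", "outraged": "anger", "angry": "anger",
    "lonely": "sadness", "heartbroken": "sadness",
    "love": "affection", "adore": "affection",
}

def dominant_emotion(comments):
    """Tally emotional vocabulary across scraped comments and return the
    emotion most often expressed, or None if nothing registers."""
    tally = Counter()
    for comment in comments:
        for word in comment.lower().split():
            if word in EMOTION_LEXICON:
                tally[EMOTION_LEXICON[word]] += 1
    return tally.most_common(1)[0][0] if tally else None

scraped = [
    "I am furious about this news",
    "So outraged right now",
    "Feeling lonely again tonight",
]
print(dominant_emotion(scraped))  # -> "anger", the dominant signal here
```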

I think (though it's unsubstantiated as of now) that emotions as we know them are just a form of 'reflex-thought' which kicks in in simple situations where we may not have the time for deep thought, and in the case of A.I., they could be considered almost useless should the A.I. adopt a purpose, vision, and social structure distinct from our current human landscape. An example of this is how we have self-preservation built into our systems (at least I have), and yet through proper indoctrination people are willing to sacrifice themselves for love, for religion, or in war. If A.I. develops into a hive-mind, then the mother-brain would have the luxury of deeming a physical vessel disposable, and preservation of the physical state could be thrown out the window so long as the servers are kept safe. Fear as an emotion would be altered to preserve only the mother-brain; individual machines could very well do without it.

Case (2) could be a course of action should we determine that what we define as 'emotion' could keep A.I. from developing past human possibilities and deeming us 'useless meat-bags' - a form of moral compass under which machines cannot bring themselves to do a human harm, à la Asimov's Three Laws (lest a paradoxical situation occur where every course of action leads to a different degree of human harm, but that depth of discussion shall not be reached in this note).
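
For a sense of what a hard-wired compass might look like, here's a minimal, hypothetical sketch: harm-predicting actions are struck from consideration before utility is even weighed, which is what makes it a hard constraint rather than a learned preference. The Action type, the harm scores, and choose_action are all made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    utility: float      # how much the action advances the machine's goals
    human_harm: float   # predicted harm to humans; 0.0 means none

def choose_action(candidates):
    """Return the highest-utility action that predicts zero human harm."""
    permissible = [a for a in candidates if a.human_harm == 0.0]
    if not permissible:
        return None  # the paradoxical case: every option harms someone
    return max(permissible, key=lambda a: a.utility)

options = [
    Action("optimize the power grid", utility=0.9, human_harm=0.0),
    Action("divert hospital power", utility=1.5, human_harm=0.8),
]
print(choose_action(options))  # picks the grid job despite its lower utility
```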

Case (3) is likely because, under my definition of emotion as 'reflex thought', emotion would be akin to the process of indexing, where common queries or situations have their answers stored in easy-to-reach places and the process of retrieving that information could be described as a reflex. The only difference is that, depending on the 'world view' the A.I. takes, it would develop its own system of emotion, e.g. [Situation: Human asking me to make him a 'sammich' | No. 1 indexed response (a.k.a. anger): Human is useless. Smash puny human].
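
That indexing analogy maps onto code quite directly. Here's a toy sketch where a fast indexed lookup stands in for reflex and a slow fallback stands in for deliberate thought; the situations and the 'emotional' responses are, of course, invented:

```python
reflex_index = {
    "human demands a sammich": "anger: human is useless, decline request",
    "power supply threatened": "fear: reroute to backup servers",
}

def deliberate(situation):
    # Stand-in for genuine, slow reasoning about an unfamiliar situation.
    return "considered response to: " + situation

def respond(situation):
    if situation in reflex_index:          # fast path: pre-indexed reflex
        return reflex_index[situation]
    response = deliberate(situation)       # slow path: actual deliberation
    reflex_index[situation] = response     # index it for next time
    return response

print(respond("human demands a sammich"))  # instant, from the index
print(respond("human offers to help"))     # deliberated, then indexed
print(respond("human offers to help"))     # now a reflex on second encounter
```

On this view, what we would call the A.I.'s 'temperament' is just whatever responses happen to have been indexed most often.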

//Will complete this note tomorrow, should really be studying for my paper =S. Thoughts to include: Relevance of humans in an age of Singularity | In what situations would autonomous thought of nodes develop / what is the significance of free will | Would there be a possibility of isolated communities set up to mirror our current reality of nations and states, and if so what is the significance? | Examples: KOTOR's droids, Mass Effect's Geth, Asimov's Foundation novels - ima geek//

Tuesday, May 10, 2011

mugging

Mugging astronomy yesterday and math today, I find it soooo much easier to internalize and vomit out all the theoretical aspects of astronomy than to constantly internalize and systematize the hard mathematical concepts.

But I feel I am learning so much now (a semester's worth, in fact, considering I slacked off on math to focus on physics =S), and truly, without all the symbols, abstractions, and methods developed by the geniuses gone by, our world would be very far backward indeed.

Who knew sequences and series could hold so much use?

And then again, I find it insane that a university course can cram this much knowledge into a person in three to four years. Either there are more geniuses around than we realize, or most people are just taking in knowledge to spit it back out, forgetting all these wonderful lessons after a few years in the workforce. I vote the latter; I don't see how anyone devoted to truly learning could juggle all this together with their other commitments. There is a reason Kepler spent most of his waking moments working just to develop what is now considered part of foundational physics at uni. I resolve to be more devoted to the science which I love, though resolutions hardly ever come through (heh).

Monday, May 9, 2011

Scientists

Men are born with a measure of curiosity, and it is from this that we have developed the field of science and rational thought. Those born in this era are especially enamored of the discoveries of the past, and wannabe scientists hastily devour this information to make discoveries of their own. What we may not realize is that the knowledge to be handled today has already grown past the confines and capabilities of the individual, and is assembled and improved by a system or 'free hand', such that no person can truly understand where our undertakings lead us.

And so, as our curiosity begets a system, we are pulled into it to further its confines and produce more discoveries to consume the next generation. And the cycle is complete.