Some AGI Rambling

??/??/2018



I'm pretty confident that AGI is the single most important issue of our time, and perhaps of all time. I wrote this as a blog post a long time ago to outline some frustrations I had on the subject. I wanted to leave it here so I can come back to it and reflect on how my views change over time.



The internet provides us with an immense opportunity to channel our vast capacity for communication into work structures that allow for closer, more efficient cooperation. I study machine learning because I am confident that AI will destroy our civilization one way or another, and I'd like to marginally decrease the probability of this happening. I believe the only way we can avert such a crisis is through a shift in human incentive structures, although I doubt we'll be able to achieve this at a societal level before something reminiscent of an existential singularity takes place.

I've spoken with numerous computer scientists and researchers about this during my time at Carnegie Mellon, and most of the conversations have followed a similar pattern: I ask for their opinion on AGI, and they respond with arguments focused on some combination of honeypots and off switches.

To me, this argument seems full of hubris. It assumes that an AGI would be bound by instincts similar to our own. The majority of people I've spoken to about this issue rest their arguments on the premise that there's some unique human spark that can't be replicated artificially.

The methods used to train AlphaZero, combined with a processing speed substantially faster than that of any human, seem to disprove this outright. As humans, we developed our thought patterns because they were the path of least resistance for our optimization engine (evolution). Our brains are not infallible. A blank slate running at clock speeds (Hz) far above a human's could certainly overtake our entire civilization in any specific task, and recursive optimization of task management seems like a straightforward way to extend this argument to any "human" process.
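To make the self-play point concrete, here is a toy sketch of the underlying idea: an agent that starts as a blank slate and improves purely by playing against itself, with no human knowledge baked in. This is nothing like AlphaZero's actual machinery (no neural network, no Monte Carlo tree search); it's just a tabular value learner for tic-tac-toe, and the ALPHA and EPSILON constants are arbitrary illustrative settings.

    import random

    # The eight winning lines on a 3x3 board.
    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

    def winner(board):
        """Return 'X' or 'O' if someone has three in a row, else None."""
        for a, b, c in LINES:
            if board[a] != ' ' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    values = {}            # board state -> estimated value for the player who just moved
    ALPHA, EPSILON = 0.3, 0.1  # learning rate and exploration rate (arbitrary choices)

    def play_one_game():
        """Play one self-play game and back up the outcome into the value table."""
        board, player = [' '] * 9, 'X'
        history = {'X': [], 'O': []}  # states reached after each player's own moves
        while True:
            moves = [i for i, c in enumerate(board) if c == ' ']
            if random.random() < EPSILON:
                move = random.choice(moves)  # explore
            else:
                def after_value(m):
                    board[m] = player
                    v = values.get(tuple(board), 0.5)  # unknown states start neutral
                    board[m] = ' '
                    return v
                move = max(moves, key=after_value)   # exploit current estimates
            board[move] = player
            history[player].append(tuple(board))
            w = winner(board)
            if w or ' ' not in board:
                # Back up the final result (win=1, loss=0, draw=0.5) through
                # each player's own afterstates, most recent first.
                for p in 'XO':
                    target = 1.0 if w == p else (0.0 if w else 0.5)
                    for state in reversed(history[p]):
                        v = values.get(state, 0.5)
                        values[state] = v + ALPHA * (target - v)
                        target = values[state]
                return w
            player = 'O' if player == 'X' else 'X'

    for _ in range(20000):
        play_one_game()

After a few thousand games against itself, the table encodes a competent tic-tac-toe policy that no human ever taught it. The point of the sketch is only the shape of the loop: play yourself, score the outcome, update, repeat, with nothing but compute limiting how many iterations you get through.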

I wanted to leave these views in a note here because I want someone to find them, argue with me, and change my mind. If you feel that I'm missing something, please reach out to me. I don't want to live a life where AI alignment is the only thing I think about, but until I am presented with compelling evidence to the contrary, I will continue on my current trajectory.