[ACT-R-users] Questions regarding possible risks from artificial intelligence
Alexander Kruel
xixidu at gmail.com
Wed Jan 11 08:31:48 EST 2012
Ladies and Gentlemen,
I am currently trying to learn more about the academic perception of
artificial general intelligence and the possible risks associated with
it. Consequently, I am curious about the opinions of the ACT-R
community.
I would like to ask you a few questions and request your permission to
publish your answers, in order to gauge academic awareness and
perception of risks from AI. I am not a journalist and do not
represent any publication, nor am I formally affiliated with any
academic institution. I am conducting an informal interview for a
community blog: lesswrong.com
Please let me know if you have any questions, or if you are interested
in third-party material that expands on various aspects of my
questions.
Q1: Assuming beneficial political and economic development and that no
global catastrophe halts progress, by what year would you assign a
10%/50%/90% chance of the development of artificial intelligence that
is roughly as good as humans at science, mathematics, engineering and
programming?
Q2: Once we build AI that is roughly as good as humans at science,
mathematics, engineering and programming, how much more difficult will
it be for humans and/or AIs to build an AI which is substantially
better at those activities than humans?
Q3: Do you ever expect artificial intelligence to overwhelmingly
outperform humans at typical academic research, in the way that it
may soon overwhelmingly outperform humans at trivia contests, or do
you expect that humans will always play an important role in
scientific progress?
Q4: What probability do you assign to the possibility that an AI with
initially (professional) human-level competence at general reasoning
(including science, mathematics, engineering and programming) will
self-modify its way up to vastly superhuman capabilities within a
matter of hours/days/< 5 years?
Q5: How important is it to figure out how to make AI provably friendly
to us and our values (non-dangerous), before attempting to build AI
that is good enough at general reasoning (including science,
mathematics, engineering and programming) to undergo radical
self-modification?
Q6: What probability do you assign to the possibility of human
extinction as a result of AI capable of self-modification (that is
not provably non-dangerous, if that is even possible)? In other words:
P(human extinction by AI | AI capable of self-modification and not
provably non-dangerous is created)
Sincerely yours,
Alexander Kruel
Münstermannsweg 18
33332 Gütersloh
Germany