Survival of the Weakest...A.I.

#artificialintelligence 

Recently I read a great book that had been on my "to read" list for quite some time: "Superintelligence: Paths, Dangers, Strategies" by Professor Nick Bostrom. In the book (a tough but great read) Prof. Bostrom shares his views on the opportunities and risks to humankind associated with the ongoing development of Artificial Intelligence (A.I.). Drawing on his broad knowledge of mathematics, engineering, medicine, social science, and philosophy, Prof. Bostrom explains the possible dangers of A.I. reaching the level of superintelligence: a level of artificial intelligence that far surpasses the intelligence of the brightest human minds, today or in the future.

After reading the book and doing some further post-reading "research", a question kept popping up in my mind. Before offering my view on a possible answer to that question (and eventually relating it back to the title of this article), I think it is good to first go over some commonly used A.I. terms and concepts, without going into too much detail...

Artificial Intelligence, or A.I., as a term and even as a discipline, was introduced by one of the "founding fathers of A.I.", John McCarthy (Lisp programming, anyone?), during the famous Dartmouth conference in the mid-fifties of the previous century.