Suggested extra curriculum reading

This collection of links is in no particular order. Read, watch, think, and enjoy! Note that you are not required to read the documents linked from this page, but you are welcome to do so just for fun. Perhaps you will find here a research direction that you will wish to explore in the future.

What is Artificial Intelligence (HTML version), John McCarthy, Stanford University. An important introductory paper for undergraduate students. This Web page has links to other versions of John McCarthy's paper.
If the above link is not operational, then you can read a local copy:
(Local Copy of the March 29, 2003 version):   "ps" (PostScript) and "pdf" (Acrobat Reader) formats.
(Local Copy of the November 12, 2007 version):  
Part 1: Basic Questions   Part 2: Branches of AI   Part 3: Applications   Part 4: More Questions   Part 5: Bibliography  

Gary Marcus (a professor of cognitive science at New York University, USA): Why can't my computer understand me? (August 16, 2013). This article was published in the New Yorker magazine to mark the talk presented by Professor Hector Levesque on the occasion of the Research Excellence Award that he received in August 2013 at the premier international conference on artificial intelligence. Subsequent developments: Winograd Schema Challenge: Can computers reason like humans? Posted by Charles Ortiz on July 13, 2016 in the What's Next? blog on the Nuance Communications Web site. Charlie Ortiz is the director of the Laboratory for AI and Natural Language Processing at Nuance Communications and one of the organizers of this competition. The paper about the Winograd Schema Challenge, written by Hector Levesque, Ernest Davis, and Leora Morgenstern, was published in the proceedings of the 14th international conference on Principles of Knowledge Representation and Reasoning, Vienna, Austria, July 20-24, 2014.

The Turing Test: Computing Machinery and Intelligence by Alan Turing, published in "Mind", vol. LIX, N 236, pages 433-460, October 1950. Because of advances in software agent technologies, nowadays this test has commercial applications, discussed in this New York Times article (NY Times, December 10, 2002) and in several other articles. The recent CAPTCHA project has the goal of developing electronic tests that can tell humans and computers apart.

Patrick Hayes and Kenneth Ford
Turing Test Considered Harmful. This paper was published in the proceedings of the International Joint Conference on AI (IJCAI-1995), Montreal Canada, August 20-25, 1995.

Kenneth M. Ford, Patrick J. Hayes, Clark Glymour, James Allen (from the Florida Institute for Human and Machine Cognition, IHMC).
Cognitive Orthoses: Toward Human-Centered AI. Published in AI MAGAZINE, Winter 2015, Vol 36, No 4, pages 5-8.

Building Watson (a computer that won the Jeopardy! competition against human champions): "An overview of the DeepQA project" by David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A. Kalyanpur, Adam Lally, J. William Murdock, Eric Nyberg, John Prager, Nico Schlaefer, and Chris Welty from IBM Research. Published in "AI Magazine", vol 31, N3, 2010, pp. 59-79. A related Watson's Jeopardy! Challenge Web site at IBM Research. These papers discuss some of the technical issues (linked from the IBM Research Web site). The paper Natural Language Processing With Prolog in the IBM Watson System, written by Adam Lally (IBM) and Paul Fodor (Stony Brook University), was published on March 31, 2011 (here is the PDF version of this article).

Prakash M Nadkarni, Lucila Ohno-Machado, and Wendy W Chapman. Natural language processing: an introduction. This paper includes a discussion of IBM "Watson" technology. Published in: Journal of the American Medical Informatics Association, volume 18, issue 5, Sep-Oct 2011, pages 544-551.

Will Knight: Tougher Turing Test Exposes Chatbots’ Stupidity. We have a long way to go if we want virtual assistants to understand us. Published in MIT Technology Review, July 14, 2016.

Gideon Lewis-Kraus wrote an article, The Great A.I. Awakening, about recent improvements in Google Translate and the so-called "neural networks" technology that helped make translation better. Published in the New York Times Magazine on December 14, 2016. In a more recent article, also published in the New York Times, Gary Marcus claims that Artificial Intelligence Is Stuck and then proposes what needs to be done to move it forward (New York Times, July 29, 2017).

Geoffrey Hinton (Canadian Institute for Advanced Research, University of Toronto, and Google) received his Research Excellence Award at IJCAI-2005. This is the highest honor for research in artificial intelligence. The very gentle after-dinner version of his lecture "Can computer simulations of the brain allow us to see into the mind?" is available as PPT slides, but you also need to download six .avi movies into the same directory as the PowerPoint file, keeping the names they currently have: moviebuildup.avi   movierecon2.avi   movierecon3.avi   mov2.avi   mov4.avi   mov8.avi

Jürgen Schmidhuber published in June 2015 an online Critique of Paper by "Deep Learning Conspiracy", in which he critically discussed the article "Deep learning" written by Yann LeCun, Yoshua Bengio, and Geoffrey Hinton (published in the journal Nature, v521, p436-444, on 28 May 2015). Moreover, he published his own deep learning overview, which has been widely discussed and verified by the machine learning community. His overview provides an unbiased historical review of the research that led to the success of deep learning in neural networks.

Some researchers believe deep learning with its back-propagation still has a core role in AI's future. But on September 15, 2017, Professor Geoff Hinton said that, to push materially ahead, entirely new methods will probably have to be invented. Hinton quoted the great German physicist Max Planck, who said that "science advances one funeral at a time", and then added: "The future depends on some graduate student who is deeply suspicious of everything I have said". Gary Marcus (New York University) published a related paper: Deep Learning: A Critical Appraisal, arXiv:1801.00631 (submitted on 2 Jan 2018). The following PC Magazine article, written for the general public, discusses why Deep Learning Could Get Overhyped. Published on April 4, 2018.

John Launchbury, the Director of DARPA's Information Innovation Office (I2O), discusses the "three waves of AI" and the capabilities required for AI to reach its full potential. He outlines the three waves of AI research and explains what AI can do, what it can't do, and where it is headed. Published on Feb 15, 2017.

Defense Advanced Research Projects Agency (DARPA) Announces $2 Billion Campaign to Develop Next Wave of AI Technologies, September 7, 2018. DARPA’s multi-year strategy seeks contextual reasoning in AI systems to create more trusting, collaborative partnerships between humans and machines.

Thomas Nield published an article, Is Deep Learning Already Hitting its Limitations?, in the "Towards Data Science" blog on January 5, 2019, with the sub-title "And Is Another AI Winter Coming?".

Hector Geffner, a professor from the Universitat Pompeu Fabra (Barcelona, Spain) talks about General Solvers for General AI. Published on June 17, 2016.

For those of you who want to learn more about games: you can read about the General Game Playing Project and also about Artificial Intelligence and Interactive Entertainment.

1975 ACM Turing Award Lecture "Computer science as empirical inquiry: symbols and search" by Allen Newell and Herbert A. Simon, Carnegie-Mellon Univ., Pittsburgh, PA. Published in Communications of the ACM, Volume 19 Issue 3, March 1976, Pages 113-126. Provided by the ACM Digital Library.

Knowledge-based model of mind and its contribution to sciences. An Interview with Ed Feigenbaum, a professor from Stanford University. Published in ``Communications of the ACM'', Vol. 53 No. 6, Pages 41-45. DOI 10.1145/1743546.1743564 (Full text PDF)

What is a Systematic Method of Scientific Discovery? by Herbert A. Simon, Carnegie Mellon University. Published in Systematic Methods of Scientific Discovery: Papers from the 1995 Spring Symposium, ed. Raul Valdes-Perez, pages 1-2. Technical Report SS-95-03. Association for the Advancement of Artificial Intelligence, Menlo Park, California.

Where is AI Heading? "Eye on the Prize" by Nils Nilsson, Stanford University. Published in "AI Magazine", vol 16, N2, 1995, pp. 9-17.

How do you teach a computer common sense? Researchers at a company called Cycorp in Austin, Texas, are trying to find out. Since 1984, they have been incorporating a huge collection of everyday knowledge in an AI project named Cyc. The Cyc project aims to develop a comprehensive common sense knowledge base, and associated reasoning systems. They are now being used to enable the development of knowledge-intensive applications for industry and government.

Why people think computers can't, written by Marvin Minsky, Massachusetts Institute of Technology. Published in "AI Magazine", vol. 3, N4, Fall 1982, p. 3-15.

SHRDLU, a program for understanding natural language, written by Terry Winograd at the M.I.T. Artificial Intelligence Laboratory in 1968-70. SHRDLU carried on a simple dialog with a user about a small world of objects (the BLOCKS world). Terry Winograd is a professor of computer science at Stanford University. SHRDLU resurrection: this Web site collects information about subsequent versions and updates.

Thinking machines: Can there be? Are we?, Terry Winograd, Stanford University.

Programs with Common Sense, John McCarthy, Stanford University.

How Intelligent is Deep Blue?, by Drew McDermott, Yale University.
[This is the original, long version of an article that appeared in the May 14, 1997 New York Times with a more flamboyant title.]
If the link above fails, download a local copy.

A Gamut of Games. This article reviews the past successes, current projects, and future research directions for AI using computer games as a research test bed. Written by Jonathan Schaeffer, University of Alberta, Canada. Published in ``AI Magazine'', volume 22, number 3, pp. 29-46, 2001.

Allen Newell: The Scientific Relevance of Robotics. Remarks at the Dedication of the CMU Robotics Institute. Published in the AI Magazine, Vol 2, No 1, Spring 1981.

When Robots Meet People: Research Directions In Mobile Robotics, written by Sebastian Thrun, Stanford University. He is the head of the team that built Stanley, the robotic car. Stanley was judged to be the "Best Robot Of All Time" by Wired Magazine, and NOVA shot a great documentary about Stanley and the race, which is available online.

Natural Born Robots, Scientific American Frontiers.

Robots, Re-Evolving Mind written by Hans Moravec, Carnegie Mellon University. He also provides a photo of Shakey, the robot.

The next generation of the WWW can benefit from AI-inspired technologies: "Semantic Web Services" by McIlraith, S., Son, T.C. and Zeng, H. Published in IEEE Intelligent Systems, Special Issue on the Semantic Web, 16(2):46-53, March/April 2001 (Copyright IEEE, 2001). This paper is available from Sheila McIlraith's web page at Stanford University. Additional information about the Web Services Activity is provided by the Semantic Web Services Interest Group.

Tim Berners-Lee, the inventor of the WWW, thinks about the evolution of the Web in the 21st century. Here is Tim Berners-Lee's Semantic Web Road-map, written in September 1998. Additional information: Scientific American: The Semantic Web (a new form of Web content that is meaningful to computers). This paper was published in Scientific American (May 2001) and was written by Tim Berners-Lee, James Hendler, and Ora Lassila. The Future of the Web: Tim Berners-Lee's testimony before the United States House of Representatives Committee (on 2007-03-01).
Sir Tim Berners-Lee has received the 2016 ACM A.M. Turing Award for inventing the WWW. This is the highest honour awarded in computer science by the Association for Computing Machinery (ACM), a professional CS association. The article Weaving the Web explains the history of how the Web was invented.

"A logical framework for depiction and image interpretation", R. Reiter (Univ. of Toronto), and A. Mackworth (Univ. of British Columbia). Published in: Artificial Intelligence, vol 41, N 2, 1989, pp. 125-155.

Logical vs. Analogical or Symbolic vs. Connectionist or Neat vs. Scruffy, written by Marvin Minsky, Massachusetts Institute of Technology. Published in "AI Magazine", vol 12, N 2, 1991, pp. 34-51.

Reasoning with Cause and Effect, a research excellence lecture by Judea Pearl, Univ. of California, Los Angeles.

Theoretical Impediments to Machine Learning With Seven Sparks from the Causal Revolution, a note written by Judea Pearl, professor from the University of California, Los Angeles. TECHNICAL REPORT R-475, September 2017.

From here to human-level AI, John McCarthy, Stanford University.

Oliver Sacks (1933–2015) was a physician and the author of over ten books.
Speak, Memory published in the New York Review of Books, in the February 21, 2013 issue.
The Mental Life of Plants and Worms, Among Others, published in the New York Review of Books, in the April 24, 2014 issue.
In the River of Consciousness, published in the New York Review of Books, in the January 15, 2004 issue.

Christof Koch (Professor of Cognitive and Behavioral Biology, California Institute of Technology): The Quest for Consciousness: A Neurobiological Approach. March 22, 2006, UC Berkeley Campus.

Dr Alex Taylor sets a difficult problem-solving task: will the crow defeat the puzzle? Are crows the ultimate problem solvers? -- Inside the Animal Mind: Episode 2. BBC Two Programme website.

Pieces of mind, Scientific American Frontiers.

Computer programs as empirical models in cognitive psychology: Herbert Simon, the Psychology Department at Carnegie Mellon University. Human beings use symbolic processes to solve problems, reason, speak and write, learn and invent. Over the past 45 years, cognitive psychology has built and tested empirical models of these processes. The models take the form of computer programs that simulate human behavior.

What has AI in Common with Philosophy?, John McCarthy, Stanford University.

Mathematical Intuition vs. Mathematical Monsters, Synthese, 2000, pp. 317-332, written by Solomon Feferman, Stanford University. See also his paper The Logic of Mathematical Discovery vs. the Logical Structure of Mathematics, reprinted as Chapter 3 in the book "In the Light of Logic" by Solomon Feferman (Oxford University Press, 1998, ISBN 0-19-508030-0, Logic and Computation in Philosophy series).

"Where Mathematics Comes From", written by George Lakoff and Rafael Nunez, published by Basic Books. Book review: "Where Mathematics Comes From", reviewed by James J. Madden, Department of Mathematics, Louisiana State University. Professor Ernest Davis published his review, Mathematics as Metaphor, in the Journal of Experimental and Theoretical AI, vol. 17, no. 3, 2005, pp. 305-315.

The robot and the baby, an amusing story written by John McCarthy (4th Sep 1927 - 24th Oct 2011), the person who coined the term "Artificial Intelligence". He was a professor at Stanford University.

Asimov, Isaac: "Robot Visions" and "Robot Dreams"; there are several paperback editions.

Raymond Smullyan (1919–2017) was a mathematician, logician, magician, creator of extraordinary puzzles, philosopher, and pianist. One of his best-known collections of recreational logic puzzles is "What is the name of this book?". There are several paperback editions; e.g., recent editions are published by Dover.

cps721 (Artificial Intelligence).