In memoriam: Roger C. Schank, theorist of A.I., dies at 76
By Steve Lohr
Roger C. Schank, a scientist who made influential contributions to the field of artificial intelligence and then, as an academic, author and entrepreneur, focused on how people learn, died on Jan. 29 in Shelburne, Vt. He was 76. Dr. Schank’s research combined linguistics, cognitive science and computing. In a 1995 essay, he described the common theme of his varied projects in academics and business as “trying to understand the nature of the human mind” and “building models of the human mind on the computer.”
In the late 1960s and ’70s, Dr. Schank developed ideas for representing, in symbols a computer could process, the simple concepts that humans describe with words: people and places, objects and events, cause-and-effect relationships. His model was called “conceptual dependency theory.” Dr. Schank later devised ways to assemble this raw material of knowledge into the equivalent of human memories of past experience. He called these larger building blocks of knowledge “scripts” and regarded them as ingredients for learning from examples, an approach known as “case-based reasoning.”
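Dr. Schank famously illustrated scripts with a restaurant: a stereotyped sequence of scenes (entering, ordering, eating, paying) with slots for roles and props, which lets a reader fill in events a story never states. A minimal sketch of that idea in Python follows; the class and function names here are hypothetical illustrations, not Schank’s own notation:

```python
from dataclasses import dataclass

# Illustrative only: the structure below is a loose sketch of a
# Schank-style script, not his actual conceptual dependency notation.
@dataclass
class Script:
    """A stereotyped event sequence with slots for roles and props."""
    name: str
    roles: list
    props: list
    scenes: list  # ordered scene names

RESTAURANT = Script(
    name="restaurant",
    roles=["customer", "waiter", "cook"],
    props=["table", "menu", "food", "check"],
    scenes=["entering", "ordering", "eating", "paying", "leaving"],
)

def infer_unstated_scenes(script, mentioned):
    """Infer the default scenes a story leaves out.

    This mimics how a script lets a reader fill gaps: a story that
    mentions only 'ordering' and 'paying' implies 'eating' happened.
    """
    idx = [script.scenes.index(s) for s in mentioned]
    lo, hi = min(idx), max(idx)
    return [s for s in script.scenes[lo:hi + 1] if s not in mentioned]

# A story mentions ordering and paying; the script supplies 'eating'.
print(infer_unstated_scenes(RESTAURANT, ["ordering", "paying"]))
```

The point of the sketch is the inference step: once a situation matches a known script, the unmentioned scenes come for free, which is the kind of stored, experience-shaped knowledge Dr. Schank argued reasoning requires.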
“When I was a graduate student in the late 1970s, Roger Schank was required reading,” Steven Pinker, a cognitive psychologist at Harvard University, wrote on a memorial website. “He was regarded as one of the major researchers and theoreticians in artificial intelligence and cognitive science.”
But Dr. Schank’s ideas were introduced in the early days of A.I., when computers were big, slow and expensive. Trying to program a computer to execute his ideas proved impractical. And eventually, progress in A.I. came from statistical pattern-matching instead of from seeking to teach computers to reason as people do.
Especially over the past decade, the statistical pattern-matching path — fueled by vast stores of data and lightning-fast computers — has delivered striking gains.
The newly famous ChatGPT, a giant software program that digests digital text from websites, books, news articles and Wikipedia entries, is a good example. When someone types in a question or request, ChatGPT’s powerful pattern-matching algorithms can generate poems, speeches and homework papers with remarkable, human-seeming fluency. But an A.I. program like ChatGPT has no semblance of common sense or real-world understanding, so it can also produce bizarre mistakes, racist and sexist screeds, and weird rants.
Those shortcomings, computer scientists say, could open the door to a revival of the ideas Dr. Schank advocated years ago. Adding facts about the physical world and structured reasoning, they say, could overcome the weaknesses of the new programs, which are called large language models. “These models can do amazing things, but they need to be steered,” Kristian Hammond, an A.I. researcher at Northwestern University and a former student of Dr. Schank’s, said by phone. “Roger Schank’s work now has the partner technology, in large language models, to become real.”
“I think that’s going to end up being part of his legacy,” Dr. Hammond said.
Steve Lohr is a journalist with The New York Times. © 2023 The New York Times