Universal problem solver: Relying on a computer to devise a theory of everything
Once upon a time, Albert Einstein described scientific theories as “free inventions of the human mind.” But in 1980, Stephen Hawking, the renowned Cambridge University cosmologist, had another thought.
In a lecture that year, he argued that the so-called Theory of Everything might be achievable, but that the final touches on it were likely to be done by computers. “The end might not be in sight for theoretical physics,” he said. “But it might be in sight for theoretical physicists.”
The Theory of Everything is still not in sight, but with computers taking over many of the chores in life — translating languages, recognising faces, driving cars, recommending whom to date — it is not so crazy to imagine them taking over from the Hawkings and the Einsteins of the world. Computer programs like DeepMind’s AlphaGo keep discovering new ways to beat humans at games like Go and chess, which have been studied and played for centuries. Why couldn’t one of these marvellous learning machines, let loose on an enormous astronomical catalogue or the petabytes of data compiled by the Large Hadron Collider, discern a set of new fundamental particles or discover a wormhole to another galaxy in the outer solar system, like the one in the movie Interstellar?
At least that’s the dream. To think otherwise is to engage in what the physicist Max Tegmark calls “carbon chauvinism.” In November, the Massachusetts Institute of Technology, where Dr Tegmark is a professor, cashed a check from the National Science Foundation, and opened the metaphorical doors of the new Institute for AI and Fundamental Interactions.
The institute is one of seven set up by the foundation and the US Department of Agriculture as part of a nationwide effort to galvanise work in artificial intelligence. Each receives $20 mn over five years. The MIT-based institute, directed by Jesse Thaler, a particle physicist, is the only one specifically devoted to physics. It includes more than two dozen scientists from across physics, drawn from MIT, Harvard, Northeastern University and Tufts.
“What I’m hoping to do is create a venue where researchers from a variety of different fields of physics, as well as researchers who work on computer science, machine-learning or AI, can come together and have dialogue and teach each other things,” Dr. Thaler said. “Ultimately, I want to have machines that can think like a physicist.”
Their tool in this endeavour is a brand of artificial intelligence known as neural networking. Unlike so-called expert systems such as IBM’s Watson, which are loaded with human and scientific knowledge, neural networks are designed to learn as they go, much as human brains do. By analysing vast amounts of data for hidden patterns, they swiftly learn to distinguish dogs from cats, recognise faces, replicate human speech, flag financial misbehaviour and more. “We’re hoping to discover all kinds of new laws of physics,” Dr. Tegmark said. “We’ve already shown that it can rediscover laws of physics.”
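To make that idea concrete, here is a minimal sketch, not code from the MIT institute, of a small neural network learning a hidden rule purely from labelled examples; it assumes the widely used scikit-learn library, and the rule (is a point inside a circle?) is invented for illustration.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(2000, 2))        # random points in the plane
y = (X[:, 0]**2 + X[:, 1]**2 < 0.5).astype(int)   # hidden rule: is the point inside a circle?

clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
clf.fit(X[:1500], y[:1500])                        # learn the rule from labelled examples only
print("accuracy on unseen points:", clf.score(X[1500:], y[1500:]))

The network is never told the rule; it infers it from the data, which is the sense in which such systems "learn as they go."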
Last year, in what amounted to a sort of proof of principle, Dr. Tegmark and a student, Silviu-Marian Udrescu, took 100 physics equations from a famous textbook — “The Feynman Lectures on Physics” by Richard Feynman, Robert Leighton and Matthew Sands — and used them to generate data that was then fed to a neural network. The system sifted the data for patterns and regularities — and recovered all 100 formulas.
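The data-generation step of that experiment can be sketched in a few lines. The code below is a hedged illustration, not the authors' code: it samples inputs for one textbook formula (Newton's law of gravitation, in units where G = 1, chosen here as an example) and fits a small neural network to the generated data with scikit-learn. Dr. Tegmark's actual system goes further, adding a symbolic-regression step that recovers the formula itself; that step is omitted here.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
m1, m2, r = rng.uniform(1.0, 10.0, size=(3, 5000))   # random masses and separations
F = m1 * m2 / r**2                                    # Newton's gravitation, in units with G = 1

X = np.column_stack([m1, m2, r])
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
net.fit(X[:4000], F[:4000])                           # fit the network to the generated data
err = np.mean(np.abs(net.predict(X[4000:]) - F[4000:]) / F[4000:])
print("mean relative error on held-out data:", err)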
Dennis Overbye is a reporter with The New York Times. ©2020 The New York Times