AI + Science
AIVO caught up with Ziming Liu, who recently received his PhD from the Massachusetts Institute of Technology (MIT). Ziming's research sits at the intersection of AI and physics, an area he calls AI + Science.
He works with the NSF AI Institute for Artificial Intelligence and Fundamental Interactions (NSF IAIFI). Positioned at the intersection of physics and AI, this NSF-funded AI Institute is enabling physics discoveries and furthering foundational AI through the development of novel AI approaches that incorporate first principles and best practices from fundamental physics.
We discussed Ziming’s work in these focus areas:
- Science of AI: Understanding AI using science
- Science for AI: Advancing AI using science
- AI for Science: Advancing science using AI
AIVO: What prompted you to delve into your current research area(s)?
Ziming: It started when I was an undergraduate at Peking University, where I studied physics. Back then, I was trying to find a research lab, because everyone else around me was doing the same.
I figured that I was not smart enough to do theory in physics, and I also felt that my hands were clumsy, so I wasn't good at experiments. But I felt I was very good at, and also interested in, computational work.
That was 2017, and AI was just starting to get very popular. I noticed the work on the Generative Adversarial Network (GAN), a model able to generate very realistic-looking images. I was fascinated by its results, and it sounded like something I could do.
AI is a very interesting field, but at the same time I did not want to completely give up on physics, because I think physics is fun. I simply felt that maybe there wasn't much left for me to discover in physics. So that's when I started to explore the intersection of AI and physics.
So at Peking University I found a research lab mainly focused on AI for physics – to be more specific, AI for high-energy nuclear physics. But I had a broad interest in science, in AI for science in general. Then, after I got here [to the United States] and was admitted to MIT, my advisor, Max Tegmark, and I first explored some AI for physics work.
Then we switched our focus to physics for AI and physics of AI, because we felt there were many more things to be done in AI than in physics. But physics is a very useful tool – it provides very useful methodologies. That's why we stick to using physics as a tool to understand and improve AI. That's how I started researching at the intersection of AI and physics.
AIVO: That’s great – you have good timing. Can you give us some details about your research?
Ziming: The big picture of my research is, as I said, at the intersection of AI and physics, or more generally, science.
It can have different directions or interactions. This includes:
- Physics of AI, which is trying to understand AI using physics
- Physics for AI, which is building new AI models using physical insights
- AI for physics
I will zoom in a little bit to the second category, which is physics for AI – building AI models using physical insights. I want to give you two examples.
- The first example is one I’m most proud of during my PhD work. It’s the so-called Kolmogorov-Arnold Network or KAN. It’s actually inspired by a very simple motivation in physics.
So if we take the AI models we have today, they are mostly just black box models. It’s very hard to interpret what’s being learned inside those models.
But in our physical world, you see a lot of structure. If you grab a random physics textbook, you see symbolic equations like Einstein's energy-mass relation, E = mc². These formulas have very nice symbolic structure.
Physicists have been trying to apply AI to study physics problems. But one problem is that, even once the model is able to perform the task and fit the function, it's very hard for us human scientists to extract what's been learned inside those models.
So we tried to build symbolic structure into neural networks, and we ended up with Kolmogorov-Arnold Networks (KANs). Once these networks fit the data, they are no longer black boxes. We're able to decode what's being learned inside the models, and we're able to recover the symbolic formula, like E = mc².
That’s the first example.
- The second example is the so-called Poisson flow generative model. Generative AI is a very popular, very hot field right now. One of the mainstream approaches is the so-called diffusion model, which borrows the idea of the diffusion process from physics. The basic idea is that you take an arbitrary distribution, interpreted as some probability density, and apply a diffusion process that spreads the distribution out into a simple Gaussian distribution; the diffusion model then learns to reverse this process.
As physicists, we wondered: what's so special about the diffusion process? It turned out that the diffusion process is actually not that special, because we managed to propose another physical process for AI – the dynamics of a charged particle in an electric field. We found that we could build a new type of generative model just by simulating those dynamics, and that gives us what we call the Poisson flow generative model.
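The forward (noising) half of the diffusion process Ziming describes can be sketched in a few lines: data samples are repeatedly mixed with small amounts of Gaussian noise until the original structure is destroyed, leaving approximately standard normal samples. This is a toy illustration of the generic diffusion idea, not the Poisson flow model; the step schedule and scaling below are illustrative assumptions.

```python
import numpy as np

def forward_diffusion(x0, num_steps=1000, beta=0.01, seed=0):
    """Toy forward diffusion: repeatedly shrink the signal and add
    Gaussian noise. After many steps the samples are approximately
    standard normal regardless of the starting distribution. A
    diffusion model is trained to reverse this process step by step.
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(num_steps):
        noise = rng.normal(size=x.shape)
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise
    return x

data = np.array([5.0, -3.0, 2.0])  # toy "data distribution" samples
noised = forward_diffusion(data)
# the original values are now essentially erased; only noise remains
print(noised)
```

Generating new data then amounts to sampling Gaussian noise and running a learned reverse process; Poisson flow replaces this diffusion dynamic with the trajectory of a charged particle in an electric field.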
Technically speaking, the two cases are quite different. But they share one idea at the meta level: taking something we know from physics and using it to inspire a new architecture design in AI.
AIVO: You touched on this next question, at least for your research focus area. What is the goal of your project and/or its practical or real-world use?
Ziming: In the end we want to make scientific discoveries easier and more approachable. Current AI models, as I said, are mostly black boxes. Even if the model fits a task perfectly well, you don't know what's happening inside.
This is totally fine for some tasks, but not acceptable for others, where we actually care about what's happening behind the scenes.
The KAN model I just described has attracted a lot of attention in the community, and people have applied it to discover symbolic equations in their own fields – including, to name a few, biology, fluid mechanics, and dynamical systems.
So I think this is a very promising thing, because in science we care a lot about interpretability, and the goal of my research is to bring interpretability into AI models.
AIVO: I’m going to shift gears a little bit here to talk about IAIFI. What’s the role of IAIFI in your project?
Ziming: That’s an interesting question. Because from day one of my PhD, I’ve been affiliated with IAIFI. If IAIFI didn’t exist, I would not have gotten admitted to MIT.
When I started researching the intersection of AI and physics back in my undergrad studies, there were not so many people who cared about this direction.
So I was very confused and also a little discouraged from pursuing it. There were not many programs that accepted people with a background or plans like this. But IAIFI was a perfect place for me to be. I actually applied to six physics programs, and the other five all rejected me. Only MIT accepted me, because of IAIFI.
AIVO: How do you work with their team to move your project forward?
Ziming: My personal experience is that I find this Institute facilitates collaboration.
Of course, there are many events, like seminars, colloquia, internal discussions, journal clubs, and workshops. I met most of my collaborators during these events.
And I'd say IAIFI gives the high-level, overarching direction, basically encompassing everything – the physics of AI, physics for AI, and AI for physics. So my understanding is that the Institute does not tell you what to do. It provides the environment for people who share similar interests to meet and become collaborators.
For the KAN project I mentioned, the author list we have includes four groups of people. Max and I are from Max’s group, and the project also includes Sachin Vaidya and Marin Soljačić, from Marin’s group. Additionally, the KAN project includes Fabian Ruehle and James Halverson from Northeastern University, who are also affiliated with IAIFI. I met them because we go to the same IAIFI events. That’s how we know each other.
Other than that, I think the process is very much bottom-up. We just naturally form collaborations without IAIFI actually telling us what to do.
AIVO: That’s great to hear. That’s one of the goals of all of the Institutes [fostering collaboration].
Going back to you. What are some ways you plan to use your PhD?
Ziming: That's a good question. Right now, I'm most interested in two directions. One direction is AI for science.
Many people have said this, but I will say it one more time: AlphaFold is a great achievement, and its authors have won a Nobel Prize. The interesting question is: what's the next AlphaFold moment for science? My bet would be on materials science and neuroscience.
For materials science, it would be groundbreaking to discover, say, a room-temperature superconductor. That would totally change the energy landscape. And for neuroscience, what I think is promising is the origin of intelligence – not biological versus artificial intelligence, but the origin of both. I think that is very interesting.
And AI itself is a promising direction to pursue. So that’s AI for science.
During my PhD, I received training in both AI and physics – or more generally, science – because I also collaborated with neuroscientists, chemical engineers, and so on. That definitely placed me in a good position for that goal.
But of course, there are many people in very good positions who are trying to find and tackle the next big problem. So that's one thing.
Another thing I personally feel is very promising is what people call embodied intelligence, or robotics. Current AI systems are like brains – like minds – but they are unable to interact with the outside world. For AI to really make an impact on the world, we need it to have a body, and that requires embodied intelligence.
To control, say, a robot dog, we need some classical mechanics, and to do good planning we need AI. My PhD gave me knowledge and skills in both AI and science. That's very useful for building a robot dog!
Many thanks to Ziming for talking with us and articulating his research! We're delighted to learn about his work and wish him boundless success as he explores his intellectual passions!
Learn more about Ziming’s work
- Science of AI (Grokking): How Do Machines ‘Grok’ Data?
- Science for AI (Poisson Flow): The Physical Process That Powers a New Type of Generative AI
- AI for Science (KAN): Novel Architecture Makes Neural Networks More Understandable
- KAN: Kolmogorov-Arnold Networks – the code for which has 1.5K forks on GitHub
About NSF IAIFI
NSF IAIFI promotes training, education, and outreach at the intersection of physics and AI, which advances physics knowledge – from the smallest building blocks of nature to the largest structures in the universe – and galvanizes AI research innovation.
Based at MIT, the Institute is a collaboration with researchers from three other universities:
- Harvard
- Northeastern
- Tufts


