Question: Why is an understanding of AI so important for business leaders of the future?
I don’t think anyone who has been using these tools and following their development over the past year has any doubt about it: these are going to be a big part of every aspect of our futures. I have ChatGPT and Claude open almost all the time when I’m working now. The progress we’ve seen in LLMs and other neural network applications in the past year is just astonishing. And we are just in the first inning.
We already have an array of apps built on top of GPT-4, and we’re going to see fundamental changes in its capabilities, with mixed media—text and image combined—in the next couple of weeks. And these models are just going to get better as we make progress on interpretability, understanding their architecture, and training even bigger models on even bigger data. The particular applications, user interfaces, and companies will all change.
However, some things are not going to change. You’re still going to be interacting with a neural network that’s trained on the vast ocean of human-generated text. So I think the fundamental habits and skills for working with LLMs through good prompts are going to carry over to future applications.
The way I say this to my students is that, to get a good answer, you need to get the language model “in the right mindset”—the right region of its training distribution of human-generated text. Ultimately these models are just doing text prediction, based on what they’ve seen—from Plato to Reddit comment sections. So, if you want a calibrated, smart, and thoughtful answer, you have to give it the prompts that get it there. And that takes effort and thoughtfulness about the nature of the artificial intelligence we are interacting with.
In the years to come, there will be plenty of new applications and models and companies. But that fundamental element is not going to change. So it’s worth training that ability, those habits, and that thoughtfulness now.
Question: What are your favorite examples of AI being incorporated into the classroom?
Last spring, when GPT-4 came out, I used it to transcribe and clean up all of my lectures. Then I gave my students those cleaned-up transcripts, along with a set of prompts for getting good answers and explanations for my practice problems from them. I walked them through my process: Give ChatGPT the chapter lecture transcript as its context, have it summarize in detail to confirm it has absorbed the material, and then paste in the practice problem with a prompt asking for a detailed, thorough, analytical answer, with multiple approaches and levels of explanation.
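To make that concrete, here is roughly what the workflow looks like in code. This is a minimal sketch assuming the OpenAI Python client; the file names and prompt wording are illustrative placeholders, not the exact prompts I gave my students.

```python
# Sketch of the three-step study workflow described above, using the
# OpenAI Python client (openai>=1.0). File names and prompt wording
# are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: give the model the chapter lecture transcript as context.
transcript = open("lecture_ch3_transcript.txt").read()
messages = [{
    "role": "user",
    "content": "Here is the lecture transcript for this chapter. "
               "Summarize it in detail so I can confirm you have the "
               "full context:\n\n" + transcript,
}]

# Step 2: have it summarize in detail, to confirm the context took hold.
summary = client.chat.completions.create(model="gpt-4", messages=messages)
messages.append({"role": "assistant",
                 "content": summary.choices[0].message.content})

# Step 3: paste in the practice problem with a prompt asking for a
# detailed, thorough, analytical answer with multiple approaches.
problem = open("practice_problem_3_2.txt").read()
messages.append({
    "role": "user",
    "content": "Using the lecture above as context, give a detailed, "
               "thorough, analytical answer to this practice problem. "
               "Show multiple solution approaches and explain each "
               "step:\n\n" + problem,
})
answer = client.chat.completions.create(model="gpt-4", messages=messages)
print(answer.choices[0].message.content)
```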
And the answers that ChatGPT gave, using that strategy, were astonishingly good. A bit unnerving at times, to be honest.
Some of my students told me it was the most valuable part of my class. Which, to be honest, hurt my feelings a bit and made me feel obsolete—that I had trained my younger and better replacement. But at the end of the day, what matters more is their learning, not my job security.
Question: What are the biggest learning benefits of incorporating AI into our classrooms?
In my time teaching, I’ve found that the most important characteristic for effective pedagogy is not expertise, it’s actually empathy. And by empathy, I don’t mean the therapy kind—although I do a bit of that from time to time. I mean, putting myself in the students’ shoes, inhabiting their minds, anticipating the questions they have. Figuring out where they’re at, so I can bring them to where they need to be.
Now, here’s the fundamental issue: No matter how hard I try at that, I have 60 students in each section, and I have to give one lecture to all of them as a group. And one text and one set of slides and practice problems. But each of those students may have different questions. They’re all in a different place. And I will never know what question each individual student has as well as that student does.
So, the thing I’m most excited about is having my own course-specific chatbot.
I wrote my own open-source course text years ago (the hard way), along with my own course slides, practice problems, and solution explanations, all that. And then, last year, when GPT-4 came out, I used it to transcribe and clean up all of my lectures for the year—30-plus hours of me explaining the course content and responding to student questions, etc. So I have a full text repository of all the course content. Next time I teach, I expect the applications will have a wide enough context window that I can feed in all of that content, and students can ask their questions directly to it, the moment their curiosity about a question strikes them. So it will be driven by student curiosity, individualized, but still rigorous.
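In practice, the assembly could be as simple as the following. This is a rough sketch assuming the OpenAI Python client and a model whose context window fits the whole repository; the directory layout, model name, and prompts are placeholders, not a finished product.

```python
# Rough sketch of a course-specific chatbot: load the full course
# repository (course text, cleaned lecture transcripts, practice
# problems, solution explanations) into a long-context model and
# answer student questions against it. Names are placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Concatenate every text file in the course repository.
corpus = "\n\n".join(p.read_text()
                     for p in sorted(Path("course_repo").glob("*.txt")))

def ask(question: str) -> str:
    """Answer one student question, grounded in the course content."""
    resp = client.chat.completions.create(
        # Placeholder: any model with a context window large enough
        # to hold the whole corpus.
        model="gpt-4-turbo",
        messages=[
            {"role": "system",
             "content": "You are a tutor for this course. Answer "
                        "rigorously and analytically, using only the "
                        "course materials below.\n\n" + corpus},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(ask("Why does the supply curve shift in the Lecture 12 example?"))
```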
With the right prompts, they’ll get back an answer that is immediate (like Googling) but as rigorous, thorough, and analytical as a PhD professor teaching a university-level course. So, it will be the best of both worlds. I think that will be an amazing way for students to learn. And my students will get that. It might take a little longer to arrive for our pals at UCLA.
I’m excited to see what my students can learn with a chatbot version of me, that has all my expertise about the course—but that, unlike me, never makes any typos and never tires.
We have this unprecedented opportunity for individualized, interactive, rigorous, inquiry-driven learning—at scale. For the first time in human history, we can all be Alexanders with our own Aristotles.
Question: Will common AI use give Marshall students an advantage in the future?
My answer is no: common AI use will not give Marshall students an advantage in the future. Econ 101 tells us that in a competitive marketplace, the price or value of anything gets competed down to marginal cost. What’s the marginal cost of putting a low-effort request into ChatGPT and copying the output? Zero. Since everyone has this technology, there won’t be any competitive advantage, any value, in doing just that.
So, really, the value is going to come from uncommon use of AI, from pairing with this superintelligence in a unique way. The integrity and distinctiveness of our own minds are always going to be the advantage, our source of value.
I think that using these tools the ‘common way,’ as a crutch or substitute for effort, outsourcing our own minds, is going to be a big problem, for all of us. The reason we write essays in college is not because we need more piles of them in our libraries. It’s because writing is the best way to get ourselves to really think and structure our thoughts. Great thinkers and businesspeople like Paul Graham, Warren Buffett, and Jeff Bezos—who famously banned PowerPoint and has people write full-paragraph memos instead—will all tell you that writing is the best way of developing and refining your reasoning.
What students need to do—and we as their professors have a responsibility to help make this happen—is to use it in an uncommon, challenging way, as a tool to sharpen and enhance their own reasoning. That’s what I use it for, and that’s what I try to pass on to my students with the way I design my course.
Question: What do most people misunderstand about AI and education?
As someone who spends a lot of time on the front lines in the classroom and using these tools, I have a lot to say on this. I see two common fallacies in what I read online: There are some people, the boosters, who think this is all just wonderful, and we should get our students to use AI for everything, since that’s how we’ll work in the future. And then there are the cynics, who think that this could be the death of education.
I think the truth is a blend of these two. We have both a potential disaster and a historic opportunity. And my view is that we need to blend old-school, centuries-old practices, with using this new technology as a learning tool.
Whatever happens with AI in the future, it’s still going to be as important as ever for students to have mastered and internalized the concepts in their own minds. AI has not changed that. The value you add in your career, or even just as a human being, does not come from retrieving facts. It comes from intellectual coherence, a mental framework to make sense of the world and make good decisions. And we still need that, even more than before. These technologies are going to take over some of our busy work and edit our emails, etcetera. But there is still going to have to be a person who makes and stands behind decisions and is accountable for them. So the integrity and coherence of your reasoning and judgment will always be what matters and makes a difference.
The fact is, learning is hard, it’s challenging. And, I don’t know about you, but I don’t do challenging things without some motivation. So one of the most important parts of my work as an educator is to provide that, to say, “you need to buckle down and do the work.”
So here’s where I’m going with this. Let’s acknowledge the elephant in the room: AI poses some serious problems for that. It’s tempting to use it as a substitute for effort, to outsource our brainwork, to just have AI do the homework or write for us. Back in the day, I buckled down and wrote my own essays and did my own homework because I had no choice, there was no alternative. Now, that’s changed: There is an alternative, and, yes, that is a big issue for education.
So, here’s my philosophy: When it comes to teaching and learning (the tools I give students in the classroom and for self-study), I’m very pro-AI. I want to use that full capacity, every advancement, that alien superintelligence, to sharpen and enhance the learning process.
But when it comes to test day, I’m old school. It’s an in-person, closed book, pencil and paper exam. And if you want to pass, you need to face it, one-on-one—just you, your pencil, and the power of your own mind.
So that’s my philosophy on that—ride the wave of every development of AI as a learning tool, but, on exam day, help students build their intellects and their courage in their own minds the old-school way. And I think that time will prove me right.
Question: How can Marshall continue to incorporate AI into education?
My opinion is pretty simple. I think the best thing we’ll have in the near-to-medium term is the ability to build course-specific chatbots. I’m not sure what the big textbook publishers are planning to do. But for professors like me, if you are willing to put in the effort to have your own course text, clean up all the transcripts from your past lectures, and write up your practice-problem solution explanations, you’ll be able to make it happen by yourself pretty soon.
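For the transcript-cleanup step, the pattern is simple. Here is a minimal sketch, again assuming the OpenAI Python client; the file names and prompt wording are illustrative.

```python
# Minimal sketch of cleaning up a raw lecture transcript with GPT-4.
# File names and prompt wording are illustrative assumptions; a real
# pipeline would split long transcripts into chunks that fit the
# model's context window.
from openai import OpenAI

client = OpenAI()

raw = open("lecture_12_raw.txt").read()

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Clean up this lecture transcript: remove filler "
                   "words and false starts, fix punctuation and "
                   "paragraphing, and keep the content and wording "
                   "otherwise intact:\n\n" + raw,
    }],
)

with open("lecture_12_clean.txt", "w") as f:
    f.write(resp.choices[0].message.content)
```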
Instead of having to slog through a big publisher textbook (and let’s be honest, we don’t always have the attention span for that these days), students can just, wherever they’re at, whatever question they’re curious about, get an explanation that’s rigorous and analytical, in an engaging, two-way conversation.
I’m a little skeptical of other approaches, though. I’m skeptical, for instance, of the value of giving students assignments that instruct them to use ChatGPT to produce the project. Some other educators may disagree. This is new territory, so we will see how it unfolds; nobody can be sure. But my view is that the dominant apps and UIs are going to be completely different in two or three years. So the idea that students must “learn how to use this to produce your course project as you will use it in the workplace” is, in my opinion, a fallacy.
I think it should be about the more fundamental habits and mindsets for how we can mesh with this alien superintelligence that has just arrived on earth.