Artificial intelligence isn’t the threat to business education. Surrender is.
AI is already integrated into the business world my students are entering. It is used in analytics, marketing, operations, planning, and decision making. Business schools cannot afford to pretend this is optional. Ignoring AI does not protect students; it prepares them poorly. However, treating AI as a substitute for thought is even worse. That’s the real danger: borrowed judgment.
This spring was the first semester I intentionally integrated AI into my teaching at Grove City College. I did this because businesses are using it, and my students need to know how to use it if they’re going to compete for jobs. In fact, they may need it even more than previous graduates did. Some of the easier entry-level tasks that once helped young professionals learn how things work are exactly the kinds of tasks AI is starting to handle. That means new grads may be asked to contribute at a higher level, sooner, and they need to be prepared for that. None of this means they should become dependent on AI. It means they need to learn how to use it without giving up the parts that matter most.
The clearest example so far has been in my Retail Management class. I put students into a supply-chain crisis and asked them to respond like executives. A port shutdown had trapped a major portion of holiday inventory offshore. They had to decide what to rescue, what to sacrifice, and how to respond under time, cost, and service pressure. That is a far better use of AI than the way students tend to think of it: as a tool for polishing writing. It forces them to generate options. It forces them to weigh tradeoffs. It forces them to decide.
AI was part of the exercise, but only as a limited tool, and I gave students structure for using it. I taught them the RCTC prompting framework: role, context, task, and clarity. I also applied what I call the 10-80-10 rule. Spend the first 10 percent framing the problem properly. Let AI handle the middle 80 percent, the heavy lifting. Then spend the final 10 percent evaluating the output, determining what is useful, and making it your own. That is how accountability remains with the person: the individual still owns the judgment.
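To make that concrete, a prompt built on the RCTC framework might read something like this (a hypothetical illustration, not a student’s actual submission): “You are the vice president of supply chain for a national retailer (role). A port shutdown has stranded a major portion of our holiday inventory offshore, and we face time, cost, and service pressure (context). Propose three options for what to rescue, what to sacrifice, and how to sequence the response (task). For each option, state your assumptions, tradeoffs, and risks in plain terms (clarity).” The student frames the problem; the tool works within that frame.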
What happened in class clarified the problem. The stronger students tended to use AI as an accelerator. They pushed on it, questioned it, and compared its suggestions with course concepts. Sometimes they learned something new, because they had enough background to interrogate the output rather than simply accept it. The weaker students often did the opposite: they accepted polished answers too quickly, overlooked obvious issues, and confused plausible language with sound reasoning. That is what educators should notice. AI does not eliminate the gap between stronger and weaker students; it reveals it. In some cases, it even widens it.
That is because AI is very good at making an idea seem finished before it has been tested against reality. In the supply-chain exercise, some recommendations moved too quickly from “this sounds strategic” to “therefore this can be done.” A logistics move can sound smart at a high level while overlooking the difficult realities beneath it: dock capacity, routing changes, trucking availability, timing, coordination, and execution risk. In retail, physical constraints often expose the flaws in abstract confidence.
This is one reason my own use of AI differs from my students’. I push back more, iterate more, and state my expectations more precisely. I know how to challenge the output because I have more experience. Students are not there yet; they are still learning the field. They are not subject-matter experts, and AI does not automatically turn them into experts. If anything, it makes the gap more obvious. That is also why AI has not diminished the importance of traditional teaching. My students still need lectures, explanations, guided discussions, and someone to help them distinguish what sounds right from what actually holds up. AI has not replaced the teacher; it has made the need for expertise more visible.
That’s one reason I ask students to present without screens. It’s not about style or nostalgia; it’s a test of their understanding. Sometimes I allow them to bring a few handwritten notes, but I don’t let them hide behind a laptop. I want to see if they can explain the recommendation in their own words. If they can, they probably understand it. If they rely on AI-generated phrasing, they usually don’t. That’s the real issue. I’m not mainly concerned about whether students used AI; I’m worried about students trying to explain things they don’t yet understand.
That is dangerous in any classroom, but it is especially risky in business education. Students are preparing for roles where they must make recommendations, defend tradeoffs, and justify decisions in front of others. They cannot do this effectively if they rely on borrowed language and borrowed judgment. The screen can hide that weakness temporarily, but oral explanations usually reveal it quickly. This is why the AI conversation in higher education remains too superficial. The problem isn’t just cheating; the deeper concern is whether students are developing judgment or simply becoming more fluent in generated language.
Christian higher education, in particular, should be able to state this clearly. We should affirm the usefulness of AI without suggesting that usefulness is the highest good. Christian colleges should send graduates into the workforce who can use powerful tools effectively, but who still own their recommendations, tradeoffs, and consequences. Students are not just processors of information; they are moral agents responsible for what they claim, recommend, and defend. Education should cultivate that sense of responsibility, not quietly train students to outsource it.
I am not anti-AI; quite the opposite. I want my students to use it, and to learn how to prompt better, iterate more effectively, and guide the tool more intelligently. I believe this will help them compete in a job market where they will need every advantage they can get. But they must also understand its limits. AI can extend their thinking, but it cannot replace the need for it. It can help them work faster, but it cannot make decisions for them.
Business schools should teach students to use AI aggressively and wisely. They should also be clear about what must stay human: framing the problem, assessing the results, making the final recommendation, and explaining it in their own words. Lose that distinction, and schools may produce graduates who are quicker, more fluent, and more polished, but not necessarily more prepared. Because being polished isn’t the same as being ready.
AI has a place in business education. Borrowed judgment does not.
