I often ask myself whether AI is actually a good research tool, especially for someone in my position. As a PhD student, my main objective is to publish high-quality papers and prove to my committee, to my peers, and honestly to myself that I am a competent scientist. At the same time, I have another immediate goal: I want to deepen my knowledge of my field. I do not want to leave my PhD with publications but without understanding.
Before AI coding tools existed, those two goals were tightly connected. If I wanted to publish, I usually had to understand my own research deeply. In computer science, that often meant implementing my idea and building a proof of concept. The implementation process was not just a chore I had to get through; it was part of how I learned. When I coded something myself, I had to face the details. I had to discover what I did not understand. I had to debug. I had to make choices and defend them. In a very real sense, understanding was enforced by the work.
Now, with the rise of AI coding tools, I feel that connection weakening. It is becoming possible to get results without truly understanding why something works, because I do not have to implement the idea myself. I can ask an AI model to do it. That ability is powerful, but combined with the “publish or perish” pressure of a PhD program, it pulls me into a dilemma that I do not think has an easy answer. Should I use AI so I can publish faster, even if I lose out on the learning that would normally come from doing the work myself? Or should I avoid using AI so that I keep the learning, but risk spending a huge amount of time on ideas that lead nowhere? These pressures feel like two trains heading toward each other, and I am standing near the tracks.
I think the typical university professor would tell students they should spend time learning the concepts themselves instead of relying on AI. But I also think that advice does not match the incentive structure PhD students actually live under. In my program, what gets rewarded is the quality and number of publications. I cannot expect to graduate simply by knowing a lot about my field. I have to publish. And that creates a situation where “doing things the slow way” can feel like a luxury I am not sure I can afford.
This leads to a worry that I find hard to shake. I am afraid that when I graduate, my only real skill will be prompting an AI model with my ideas. If that becomes true, then I am basically just an idea generator: I throw things at the wall and see what sticks. And then I start asking myself questions that feel almost existential. What differentiates me from anyone else in that case? What value do I bring to an employer? What is the value of doing a PhD at all if the core work is outsourced to a model?
This worry feels even more real because of what I see around me. For example, I have a friend who is now the biggest advocate for what I call “vibe research.” His workflow is simple: he comes up with ideas, throws them into Claude Code, and it gives him an implementation. He tests whether the idea works. If it does not, he moves on. When I asked him why he works this way, he told me it is a “fail fast” approach. Without AI, he said, he might spend months learning all the tools he needs for a minimum viable implementation, only to discover at the end that the idea does not work, and then those months are wasted. With AI, he can get an implementation in a matter of hours. If the idea fails, he can drop it immediately and lose very little time.
On a practical level, I understand why that argument is appealing. It is hard to justify spending months building something if the payoff might be nothing. But when I asked him how I am supposed to learn and grow if I just have AI do the work, his response was basically that the learning can come later, once I have a publishable idea. That is the point where I start to disagree strongly.
In my undergraduate years, I heard many of my math professors say, “Mathematics is not a spectator sport.” What they meant was clear: the only way to learn math is to do the math yourself. I had to do the homework myself. I had to struggle through the exercises in the textbook. Attending lectures, asking questions, talking with classmates, or even watching animated math videos on YouTube, like 3Blue1Brown, was never a substitute for actually doing problems. I also noticed how easy it is for students to fall into the trap of thinking they are learning, when they are really just watching someone else do the thinking. All of those passive activities are easier than facing the psychological and emotional battle of tackling a hard problem on my own.
Because of that experience, I believe computer science is also not a spectator sport. I cannot just talk about ideas at a high level. If I want real understanding, I have to take the time to learn the details and implement things myself. From that perspective, feeding my idea into an AI model, letting it generate the implementation, and then only “learning” after the idea works feels like being a spectator. I was never an active participant in the thinking process. It is like looking at the solutions in the back of the book and telling myself, “Ah, that makes sense.” But that is not learning. That is recognition, not understanding.
I also think the discomfort matters. The acquisition of knowledge is not gentle. It is a form of masochism, in the sense that it requires willingly entering an unpleasant mental state of confusion, frustration, and uncertainty, and staying there long enough for something to change. My undergraduate real analysis professor once said something along the lines of: being challenged is critical for personal growth. If I want to deeply understand a concept, I have to brave the psychological and emotional battle inside me, the part that hates not understanding immediately and the part that is afraid of the struggle. That transition, from struggling to understanding, is where the learning occurs. If I remove that transition by outsourcing the hard parts to AI, I may still produce outputs, but I am not sure I am producing myself as a scientist.
So when I ask whether AI is a good research tool, my answer is complicated. AI can help me move faster, test ideas more efficiently, and reduce wasted effort. But it can also tempt me into a version of research where I trade genuine understanding for speed and short-term productivity. And as a PhD student, I feel trapped between the demand to publish and my need to actually learn. That tension is not abstract for me; it shapes how I work, how I think about my future, and what I fear I might become if I am not careful.