Bison Trailblazers | AI in Doctoral Education: A Tool, Not a Shortcut


By Anna De Cheke Qualls

Artificial intelligence is no longer a futuristic concept in academia—it’s here, and it’s transforming the way doctoral students learn, research, and write. But as AI becomes embedded in PhD programs, educators are grappling with a critical question: How do we harness its benefits without compromising the intellectual rigor and authenticity that define doctoral work?

For Dr. Desta Haileselassie Hagos, a Lecturer of Computer Science and AI/ML Technical Lead Manager at Howard University, the answer lies in balance. “AI should be treated as an intellectual partner, not a substitute for thinking,” he says. “Its role is to improve how our graduate students search, analyze, simulate, and communicate. But the core intellectual work—framing problems, exercising judgment, making original contributions—must remain the responsibility of the researcher.”

AI’s potential in doctoral education is undeniable. It isn’t like using Google or other digital tools; its capabilities extend beyond discrete tasks.

“Modern AI systems are different because they are generative and conversational. They can suggest ideas, question assumptions, and adjust to a graduate student’s level in real time. This makes them powerful, but also riskier, because they shape how students reason, not just how they execute tasks,” says Desta.

Gone are the days when graduate students spent hours in the science library doing literature searches. AI has created efficiencies around many of these processes: searching, summarizing long papers, mapping research themes across many articles, and pointing to related work that a graduate student might overlook.

The Sway sat down with Desta* to take a deeper dive into AI and graduate education.

(The Sway) How can AI tools assist in literature review and data analysis for PhD students?

(Desta Haileselassie Hagos) For literature review, AI can help students with searching, summarizing long papers, mapping research themes across many articles, and pointing to related work that a student might overlook. For data analysis, it can assist with cleaning data, proposing appropriate models, generating sample code, and explaining statistical results. But students still need to understand and verify every step themselves instead of completely relying on AI to do the job for them.

(TS) What impact do you think AI will have on the originality and rigor of doctoral research?

(Desta) I think AI can have two very different effects on originality and rigor. When students use it responsibly, it can actually raise the standard, because they can spend less time on routine work and more time refining their ideas and checking the strength of their methods. But if it is used carelessly, it can lead to work that looks polished on the surface but is not grounded in solid reasoning or evidence. The impact ultimately depends on how clearly programs define expectations and how well they teach students to use AI in a disciplined way.

(TS) Can AI help in hypothesis generation or experimental design? If so, how?

(Desta) Yes. AI can scan large bodies of literature, highlight gaps, propose testable ideas, and even suggest alternative experimental setups or variables to explore. But these suggestions are only starting points. Students still need strong domain knowledge and methodological skills to judge which hypotheses make sense and how to design a proper way to test them.

(TS) What ethical challenges arise when PhD students use AI for writing or research?

(Desta) Some of the main ethical challenges, in my opinion, involve undisclosed ghostwriting and the risk of presenting AI-generated text as original work. There is also the danger of carrying biases or mistakes from AI outputs into research without proper checking. Another concern is the use of AI models trained on data that may not have been collected with consent. And on top of that, students can unintentionally expose confidential data or unpublished ideas when they upload materials to third-party systems.

(TS) How should universities ensure academic integrity when AI tools are widely available? (Originality of work and writing included)

(Desta) Universities need clear and practical policies that explain what kinds of AI use are acceptable and what crosses the line. These policies should be paired with strong mentoring so students understand why integrity matters, not just which rules to follow. Assessment methods may also need to change. More oral defenses, in-class reasoning tasks, code walk-throughs, written quizzes and exams, and regular supervision can make it much harder to completely rely on AI in place of genuine thinking. For dissertations in particular, guidelines should spell out when AI assistance is allowed, how it must be disclosed, and which parts of the work must be entirely the student’s own.

(TS) Should universities develop guidelines for acceptable AI use in dissertations? What should these include?

(Desta) Yes. Every dissertation should clearly state how AI tools were used, just as students already report the software, datasets, and methods behind their work. Guidelines should outline which tasks AI may assist with, such as copy-editing rather than drafting whole sections. They should also explain how to document AI involvement, set limits on using AI with sensitive or unpublished data, and make it explicit that the core contributions (ideas, analyses, and interpretations) must come from the student.

(TS) Do you believe AI could compromise the authenticity of doctoral work? Why or why not?

(Desta) Yes, I believe it can. Authenticity can be at risk when students use AI to produce arguments, derivations, or results they do not fully understand, or when the writing becomes so polished by a model that it no longer reflects their own voice. But I also believe that AI does not undermine authenticity by default. With clear oversight and honest disclosure, it can support the work while the ideas, reasoning, and ownership remain genuinely the work of the student.

(TS) What new skills should PhD students develop to effectively leverage AI in their research?

(Desta) They need strong AI literacy: an understanding of how models work, what they can and cannot do, and the kinds of mistakes they commonly make. They also need skills in prompt design, verification, and data governance, especially the habit of checking AI outputs against primary sources or real data. Just as important is the ability to separate what the AI suggested from what they have personally examined, validated, and decided.

(TS) How can AI change the expectations for methodological expertise in doctoral programs?

(Desta) I believe that AI will take over some routine coding and analysis tasks, so expectations will shift from asking whether a student can write everything from scratch to asking whether they can design, critique, and validate an analysis from beginning to end. Methodological expertise will place more weight on conceptual design, underlying assumptions, robustness checks, and interpretation, with AI serving as an assistant rather than a crutch.

(TS) Should AI literacy become a mandatory part of PhD curricula?

(Desta) I think yes. AI has become a standard part of the research toolkit, similar to statistics or programming. Therefore, PhD students should know how to use it effectively, and they should also be able to recognize issues such as hallucinations, bias, privacy risks, and the dangers of over-reliance. I personally don’t want my students to be completely AI-dependent.

(TS) How do you see AI shaping the future of doctoral education in the next 10 years?

(Desta) We will likely see more personalized mentoring, with AI tools helping students plan milestones, track their progress, and troubleshoot technical issues. Doctoral research may involve larger and more complex datasets, and AI will be woven into nearly every part of the research process. At the same time, programs will need to strengthen the human side of doctoral training, including deep reading, critical thinking, ethics, and a strong sense of scholarly community.

(TS) Will AI reduce the time required to complete a PhD? Why or why not? 

(Desta) For some students, yes. In fields where coding, data cleaning, or simulation slow progress, AI can remove a lot of the technical bottlenecks. But many delays in PhD programs have nothing to do with those tasks. They come from choosing the right research problem/topic, navigating institutional hurdles, securing funding, or managing personal circumstances. AI can speed up parts of the work, but it does not automatically solve the structural or human challenges that often extend the PhD timeline across many universities.

(TS) Could AI democratize access to PhD-level research opportunities globally?

(Desta) Potentially, yes, it could. AI can provide high-quality tutoring, advanced research tools, and writing support in English, which can help students who lack local resources or mentorship. But true democratization requires more than that. We also need broader access to computing, data, and reliable connectivity, and we need AI systems that represent diverse languages and perspectives rather than only those of the Global North.

(TS) What policies should universities adopt to ensure responsible use of AI in PhD research and writing?

(Desta) I believe universities should require students to disclose how they use AI in their theses and publications, and they should provide tools that protect privacy. Training in ethics, data governance, and responsible use should be part of the support system. Promotion and evaluation criteria also need to reward good practices rather than chastise students who are transparent. In addition to this, policies should be firm about integrity but flexible enough to adapt as the technology evolves.

(TS) How can AI reduce administrative burdens for faculty and doctoral students?

(Desta) AI can help draft routine emails, generate forms, summarize long documents or meeting notes, and support scheduling, reporting, and compliance tasks. If universities integrate these tools thoughtfully, they can take much of the repetitive administrative work off the plates of faculty and doctoral students, giving them more time to focus on important things like mentoring, research, and teaching.

(TS) In terms of the job market as our students exit, are there specific AI competencies that employers might expect future PhD graduates to have?

(Desta) Yes. In my opinion, employers will expect graduates to know how to use AI tools for data analysis, automating parts of workflows, and communicating results more effectively. They will also look for people who can evaluate AI systems with a critical eye, including understanding issues like bias, reliability, and knowing when a model should not be trusted.

(TS) How do you see AI shaping the skill sets required for PhD graduates entering the job market?

(Desta) Technical depth will always continue to matter, but the most valued skills will include adaptability, cross-disciplinary collaboration, and the ability to work effectively with AI tools. I believe that graduates who can combine strong domain expertise with data literacy, ethical judgment, and comfort with AI-assisted workflows will be especially well-positioned in the job market.

(TS) How do you envision the ideal PhD graduate in an AI-driven world?

(Desta) In my opinion, the ideal graduate is someone who can think clearly without AI, work productively with AI, and explain their decisions to non-experts in very simple terms. They have deep knowledge of their field, they know how to design and evaluate useful methods with rigor, and they have the humility to question their own assumptions as well as the outputs of any underlying model.

(TS) Could AI help democratize access to PhD education in developing countries (overseas in general)?

(Desta) Yes, especially if it is paired with real investment in infrastructure and funding. AI tutors, translation tools, and open educational resources can make high-quality instruction more accessible to students who cannot relocate or afford high tuition fees. But we should also be careful not to let automated systems replace genuine mentorship and the institutional support that students need to succeed. Students should not be left entirely dependent on AI.

(TS) What role should universities like Howard play in shaping AI literacy for future researchers?

(Desta) Universities like Howard should help students understand both the technical side of AI and its real-world impact. That means teaching future researchers how to use AI responsibly, how to think about its social and practical consequences, and how to reflect on the groups and communities their work may influence. Our role is to prepare students who can engage thoughtfully with the technology and guide its use in meaningful and responsible ways.

(TS) Anything else we should know?

(Desta) I am very optimistic about what AI can do, but at the same time, I also want to make sure it never becomes a shortcut around real thinking and learning. At the end of the day, the core of doctoral education has not changed: curiosity, discipline, integrity, and a sense of responsibility toward the people our work affects. AI should support those values, not replace them.

----------

Dr. Desta Haileselassie Hagos is a Lecturer of Computer Science and the AI/ML Technical Lead Manager at Howard University. His research focuses on advancing fundamental machine learning and Transformer-based algorithms, with applications spanning healthcare, multimodal data analysis, and efficient AI systems. Trained as a computer scientist with a specialization in artificial intelligence, he works at the intersection of algorithms, real-world data, and social impact, with a broader interest in building AI systems that are scalable and interpretable. He is also interested in large language models and natural language processing.

*Disclosure by The Sway | We recognize that global differences exist in cultures, kinship systems, and naming conventions. Dr. Desta Haileselassie Hagos has requested to be referred to as 'Desta' throughout this publication.
