Higher Education Adopted AI. Do We Know If It's Working?
Higher education has adopted AI at scale, but adoption alone does not prove impact. As institutions invest time and funding into these tools, leaders must ask a harder question: how do we measure whether AI is truly improving outcomes for students, faculty, and the institution?

In this article
Potential Isn’t Proof: The Evidence Gap in Higher Ed AI
From Access to Measurable Impact
The Funding Problem: When AI Spending Competes with Institutional Priorities
Why Institutions Struggle to Measure AI Impact
Structured Learning, Not Strategic Perfection
Higher education has moved more quickly on AI than it gets credit for. Over the past two years, institutions have built AI training programs for faculty, staff, and students, developed acceptable use policies, signed enterprise agreements with providers like OpenAI and Anthropic, and made tools available institution-wide. Behind the scenes, countless individual faculty and staff have been experimenting on their own, redesigning assessments and figuring out where AI fits into their work. All of this represents thousands of hours of committee meetings, policy drafting, curriculum development, and difficult conversations about academic integrity, pedagogy, and the purpose of higher education in an age of AI.
Given the significant effort invested, it is time to ask: is this work genuinely impacting the outcomes that are most important to students, educators, and institutional leaders?
Potential Isn’t Proof: The Evidence Gap in Higher Ed AI
At this moment, more than three years after the arrival of ChatGPT, I don't think the answer is obviously yes. There is a lot of optimism about AI’s potential in higher education. While much of it is hyperbole, some of it is warranted. But potential is not evidence. When we look honestly at the current landscape, we have widespread access to AI tools, growing comfort with using them, and very little understanding of whether these tools are improving student learning, reducing the unsustainable workloads facing staff and faculty, or making institutional operations meaningfully better.
I am not suggesting that universities abandon AI and let the skeptics claim victory. I do think, however, that it is worth taking a moment to reflect and assess what we have learned thus far.
From Access to Measurable Impact
The first phase of AI adoption in higher education was necessarily about foundation-building: Can people access these tools? Do they know how to use them responsibly? Are there guardrails in place? Institutions have largely answered those questions, or are well on their way.
Now that people have access and some degree of working knowledge, what is AI being directed at? And is it helping?
The areas where AI is most often discussed as promising include student advising and support, enrollment management, retention interventions, and faculty workload. Each of these represents a genuine opportunity for improvement: Student support offices are chronically understaffed. Enrollment teams face increasing pressure to do more with less. Faculty report spending more and more of their work time on administrative and compliance tasks.
But the honest assessment is that we don't yet have strong evidence that AI is delivering those solutions at scale in higher education. We have pilot programs and vendor claims. We have individual faculty and staff who report saving time on particular tasks. However, what we generally do not have is the kind of evidence that would let a provost or a dean say with confidence, “This investment is producing measurable improvements in the outcomes we care about.”
I think that gap between promise and evidence deserves more attention.
The Funding Problem: When AI Spending Competes with Institutional Priorities
There is a practical reason this matters. For most institutions, the money being spent on AI is not new funding. It was redirected from other priorities: carved out of IT budgets, absorbed into department operating funds, or drawn from already-thin professional development allocations. The 2025 EDUCAUSE AI Landscape Study found that only 2% of respondents reported new funding sources for AI projects, and 30% reported that no plans are in place for accommodating AI-related costs.
This means that AI spending is, in effect, competing with other institutional needs. When planning and budget renewal conversations arrive, leaders will need more than anecdotes and enthusiasm. They will need a coherent account of what the investment has produced and a credible thesis about where future funding should go.
Lacking evidence, institutions face an unpleasant choice: continue spending without knowing what is working, or pull back funding and potentially lose ground. In a rapidly evolving economy and competitive educational landscape, neither option is desirable.
The path forward requires building the capacity to learn from what institutions are doing, not to prove every choice was right, but to find out what's working and what isn't.
Why Institutions Struggle to Measure AI Impact
In my conversations with higher education leaders, I see two patterns that tend to stall progress on this front.
The first is the impulse to resolve institutional strategy before measuring anything. This is understandable. To assess whether AI is advancing the university’s mission, it helps to have clarity on what that mission is. But higher education has wrestled with questions of institutional purpose for decades (e.g., workforce readiness, intellectual growth, civic development, or research), and most institutions manage these purposes in productive tension rather than resolving them definitively. AI’s arrival has intensified this tension at a moment when public confidence in higher education was already eroding and stakeholders were already demanding clearer answers about what universities are for.
The temptation is to resolve the purpose question first and articulate a clear institutional position on AI before investing in measurement. But in the meantime, institutions lose the opportunity to learn what’s actually working.
The second pattern is the opposite: widespread experimentation without coordination. Individual departments adopt tools, try things out, and develop their own sense of what’s working. Innovation often starts organically and experimentation is deeply embedded in the culture of many universities. But when no one is tracking what is happening across units, and when each pocket of activity uses different definitions of success (or no definition at all), the institution can’t assemble a coherent picture. When the university president or board member asks what the institution has learned from its AI investments, the answer is a spreadsheet detailing AI training completion rates and a collection of anecdotes that fail to produce a coherent narrative.
Structured Learning, Not Strategic Perfection
There is an approach that doesn’t require perfect strategic alignment before you begin, but also doesn’t leave every department to figure things out in isolation. The core idea is straightforward: agree at the institutional level on the categories of value you care about, while giving individual units the autonomy to define specific goals and metrics within those categories. This creates enough coherence to produce an institutional picture without requiring the kind of top-down uniformity that rarely works in a university setting.
This approach treats measurement not as a performance review but as a learning process. We are early enough in the experience with AI that the right question isn’t “did this work?” but rather “what are we learning about where AI helps and where it doesn’t?” Universities are uniquely suited to this kind of inquiry. Systematic investigation, evidence-based reasoning, and iteration on what is learned are what higher education does better than any other sector.
The question is whether institutions will apply that capacity to themselves.
In Part 2 of this series, I'll describe what this framework looks like in practice: how to structure it, how to get started even without comprehensive baseline data, and what institutions can do now to build the capacity for this kind of learning.
The institutions that thrive won't be the ones with the most AI pilots or the biggest enterprise agreements. They will be the ones that built the learning infrastructure to tell the difference between what’s working and what just feels like progress.
