University of California
UCnet

Onramps to AI for higher ed


An industrial server at UC Berkeley (Credit: Elena Zhukova)

By Camille Crittenden, Ph.D.

Tools and platforms endowed with artificial intelligence (AI) have seeped into many aspects of daily life. From the recommendation engines that drive Amazon and Netflix to the latest large language models underpinning platforms like OpenAI's ChatGPT and Google's Gemini, AI has brought efficiencies to the home and workplace, along with considerable anxiety about its implications for the future.

AI meets higher ed

Camille Crittenden, Ph.D., is the executive director of CITRIS and the Banatao Institute; co-founder of the CITRIS Policy Lab; and co-founder of Expanding Diversity and Gender Equity in Tech at UC.

Institutions of higher education are facing the same exciting but nerve-wracking questions that have emerged in other sectors. AI can help teaching faculty create better lesson plans and assessment instruments, researchers accelerate their investigations in a range of disciplines and administrators improve operations. But concerns have emerged about its impact on academic integrity, data security and upskilling in the workforce, among other areas. Institutional leaders must design flexible policies to ensure applications are beneficial, effective and safe.

The opportunities for creating operational efficiencies and entirely new capabilities can be seen across a variety of university offices. In enrollment management, AI tools can predict student yield, optimize recruitment strategies and personalize outreach. AI can support students by offering automated guidance on financial aid, course registration, library resources and IT concerns, and it can flag students who need extra academic help or who may be struggling to adjust to campus life. It can help finance offices build better predictive models for managing energy consumption, campus maintenance and procurement. In HR, AI can assist with performance management and workforce planning. Across all of these applications, however, those implementing such tools must be attuned to the possibility of bias in the training data or algorithmic models, bias that could affect service delivery or have real consequences for those subject to the results.

Shaping AI for the higher-ed world

To identify and mitigate these risks, universities have employed a variety of strategies, including creating responsible AI principles, establishing AI governance committees and expanding the roles of existing staff in ethics and compliance, technology policy, privacy, procurement and legal counsel. The job scope for CIOs and CTOs must now include attention to the responsible implementation of AI tools and new concerns for risk management. Faculty also have a voice in the areas where AI affects research, teaching and learning. While some have embraced the new pedagogical opportunities of AI, others are more wary about its effect on student learning and on the identity of the institution of higher education overall.

Key among the concerns for administrators and faculty alike is how best to ensure fairness and reduce bias when using these tools. Online programs that claim to detect AI-enabled cheating are notoriously flawed, returning many false positives while likely producing false negatives as well. Some automated proctoring services that use facial recognition are poorly trained on non-Caucasian faces. Accusations of cheating can erode trust in the classroom and in the academic enterprise more broadly. Third-party audits of AI tools can sometimes verify their accuracy, but auditing is an emerging field as well, with many new and questionably qualified entrants.

Balancing transparency and privacy

As nonprofit organizations, many funded by public tax dollars, universities have a responsibility to make their decisions transparent and explainable — characteristics that many AI programs lack. Universities can build trust with their constituencies, whether students, faculty or staff, by establishing criteria for vendors providing products or services that require the highest level of explainability and reduce reliance on “black box” systems.

At the same time, universities must uphold the highest levels of privacy protection for the vast and varied data sets they maintain. Student data is protected by law under the Family Educational Rights and Privacy Act (FERPA); other kinds of personally identifiable information (PII) are protected under the European Union's General Data Protection Regulation (GDPR) and under U.S. state-level privacy laws that apply to university records and research.

Universities are ripe targets for cybersecurity attacks because of their troves of data. CIOs and CTOs must establish appropriate data governance policies to protect these important digital assets that have been made more vulnerable by a proliferation of AI-driven phishing attempts and other incursions.

Educating higher-ed audiences on AI

Of course, the primary mission of institutions of higher education is to educate. Many universities are establishing foundational courses to improve AI literacy, not only among students but also across their current workforce. Higher ed leaders outside the IT organization should understand how AI could improve the efficiency and effectiveness of their work, in everything from writing letters of recommendation or marketing copy to creating data visualizations for research findings.

Concerns about AI replacing human jobs are not entirely unfounded; still, those employees who know how to harness the power of AI will become more productive and competitive in the marketplace for talent.

Many organizations are considering their approaches and priorities with regard to enterprise-wide AI implementation. Universities and their IT and academic leaders have an opportunity to lead the way by foregrounding questions of safety, ethics, fairness and equitable access. Together with corporate, government and nonprofit partners, higher ed institutions can foster a thriving and creative environment for AI and humans to flourish together.

Editor’s note: This article was originally published in CXOTech Magazine.
