PHILADELPHIA, PA—Researchers at the University of Pennsylvania developed artificial intelligence models that can identify seven personal qualities, including leadership, perseverance, and prosocial purpose, from short college application essays. The study analyzed over 309,000 college applications, training AI models to replicate human judgments about personal characteristics. The models, fine-tuned versions of RoBERTa, a widely used natural language processing model, matched human evaluations consistently across different demographic groups.

“Can AI ‘fix’ college admissions? No… But used responsibly by (human) admissions officers, AI may provide a more equitable alternative to the current ‘black box’ of so-called holistic admissions,” explained lead researcher Angela Duckworth, a MacArthur Fellow and University of Pennsylvania psychology professor known for her research on grit and perseverance. The AI system analyzes 150-word essays about extracurricular activities, identifying qualities like teamwork, intrinsic motivation, and goal pursuit that predict college success.

The models demonstrated what researchers call “convergent validity,” meaning they consistently identified the same qualities that human evaluators found. Computer-generated scores correlated with human ratings at coefficients ranging from 0.59 to 0.86, showing the AI could reliably detect personal characteristics that admissions officers value. This technical achievement represents a significant step forward in automating subjective evaluation processes.
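The convergent-validity figures above are Pearson correlation coefficients between model scores and human ratings. A minimal sketch, using made-up 1–5 ratings for ten hypothetical essays (not the study's data), shows how such a coefficient is computed:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical 1-5 ratings of the same ten essays (illustrative only)
human_ratings = [3, 4, 2, 5, 4, 3, 1, 4, 5, 2]
model_scores  = [3, 4, 3, 5, 4, 2, 1, 4, 4, 2]

r = pearson_r(human_ratings, model_scores)
print(round(r, 2))
```

A coefficient of 1.0 would mean the model's rankings perfectly track the human raters'; the study's reported range of 0.59 to 0.86 indicates strong but imperfect agreement, comparable to the agreement often observed between two human raters.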

Why These Findings Matter for Students

Currently, 50% of higher education admissions offices already use AI in their review process, with 82% expected to adopt AI assistance by 2024. This widespread adoption means AI evaluation of personal qualities could soon affect millions of college applicants. For students, this technology promises faster, more consistent evaluation of their essays and personal statements.

The research shows AI assessment could level the playing field by reducing human bias in admissions decisions. Unlike standardized test scores, which correlate strongly with socioeconomic status, the AI-generated personal quality scores showed minimal correlation with demographic characteristics like race, income, or parental education. This finding suggests AI might identify merit more fairly than current human-dominated processes.

Significant Limitations Raise Concerns

Despite promising technical results, the research reveals troubling limitations that question whether AI truly improves admissions fairness. The personal qualities showed only modest predictive power for college graduation, with effect sizes much smaller than traditional academic measures like GPA and standardized test scores. This raises fundamental questions about whether the added complexity of AI assessment provides meaningful benefits.

Separate research has found that AI algorithms incorrectly predict academic failure for Black students 19% of the time, compared to 12% for White students and 6% for Asian students. These disparities highlight ongoing concerns about algorithmic bias, even in systems designed to promote fairness. The University of Pennsylvania study, while showing demographic neutrality in its specific implementation, cannot guarantee similar results across different institutions or time periods.

Expert Analysis Reveals Mixed Reactions

USC Rossier associate professor Royel Johnson warns that “technological innovations like Google search engines are often baked with biases that can reproduce inequities,” noting that “AI is no different”. Education experts emphasize that human biases inevitably influence AI systems through the data used to train them and the decisions about how to apply results.

Harvard economist David Deming argues that holistic admissions processes, which AI seeks to automate, are fundamentally flawed because “you are probably biasing your process in favor of the rich” who understand how to “stand out” in competitive applicant pools. This critique suggests that automating existing admissions practices may simply make unfair processes more efficient rather than more equitable.

Gaming and Manipulation Risks

The researchers acknowledge a critical vulnerability: applicants could manipulate their essays to target what AI systems are designed to detect. The study’s own example demonstrates this risk—the phrase “I donated heroin to the children’s shelter” received an extremely high score for prosocial purpose, showing how AI can be fooled by keyword matching without understanding context.
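The failure mode behind that example can be made concrete with a deliberately simplified toy scorer. This bag-of-words sketch is hypothetical and is not the study's model (the actual system is a fine-tuned RoBERTa that uses context, though adversarial phrasing can still exploit learned surface cues):

```python
# Toy "prosocial purpose" scorer based on keyword counts (illustration only).
PROSOCIAL_KEYWORDS = {"donated", "volunteer", "shelter", "helped", "community"}

def prosocial_score(essay: str) -> float:
    """Fraction of words in the essay that are prosocial keywords."""
    words = essay.lower().replace(".", "").split()
    hits = sum(1 for w in words if w in PROSOCIAL_KEYWORDS)
    return hits / max(len(words), 1)

# A context-blind scorer rewards prosocial keywords even in a nonsensical sentence,
# scoring it above a genuine but keyword-free description of service.
gamed   = prosocial_score("I donated heroin to the children's shelter")
genuine = prosocial_score("I spent weekends reading to kids at the library")
print(gamed, genuine)
```

Contextual models are harder to fool than this toy, but the study's heroin example shows they can still latch onto surface patterns an applicant might deliberately target.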

Recent research from Cornell, Stanford, and the University of Pennsylvania found that AI-generated college admissions essays are most similar to essays authored by students who are males with higher socioeconomic status. This finding suggests that as AI tools become more accessible, they may inadvertently advantage students from privileged backgrounds who have better access to gaming strategies.

What Experts Recommend Going Forward

Education leaders recommend that “colleges and universities should invest in training admissions professionals to work with AI tools and carefully assess the recommendations provided by these systems”. The consensus among experts is that AI should augment rather than replace human judgment in admissions decisions.

The researchers themselves conclude that “future research and practice should focus on clarifying the goals of holistic review before automating parts of the process”. This recommendation highlights a fundamental challenge: institutions must first decide what they’re trying to achieve before deploying AI to achieve it more efficiently.

FAQs

How accurate are AI systems at evaluating personal qualities compared to human admissions officers?

The University of Pennsylvania study found AI model scores correlated with human evaluations at coefficients between 0.59 and 0.86. However, human admissions officers often disagreed with each other, raising questions about whether consistency equals quality. The AI performed similarly to human evaluators but couldn’t exceed the limitations of the human judgment it was trained to replicate.

Will AI make college admissions more or less fair for underrepresented students?

The evidence is mixed. While this specific study showed demographic neutrality, other research reveals AI bias against minority students in academic predictions. The technology’s impact will depend heavily on implementation, oversight, and whether institutions address underlying biases in their admissions criteria before automating them.

Should students change how they write college essays knowing AI might evaluate them?

Students should focus on authentic self-expression rather than trying to game AI systems. The research shows manipulation attempts can backfire, and colleges increasingly use multiple evaluation methods. Genuine personal experiences and reflections remain more valuable than keyword optimization strategies that may appear inauthentic to human reviewers.

Reference

Benjamin Lira et al., Using artificial intelligence to assess personal qualities in college admissions. Sci. Adv. 9, eadg9405 (2023). DOI: 10.1126/sciadv.adg9405
