AI Is Already Screening Your Residency Application
Here’s Why That Should Make You Pause
If you’re applying to residency soon, there’s a good chance AI will be involved in reviewing your application.
A recent JAMA viewpoint highlights how quickly AI has entered the residency selection process through its integration into ERAS. Programs are using AI tools to scan transcripts, summarize performance, and even assess “fit” based on personal statements.
On the surface, this sounds efficient. We all use AI, and it definitely can be beneficial for sifting through large datasets. Programs are overwhelmed with applicants and are searching for ways to standardize their interview processes.
But here’s the problem: efficiency often comes at a cost, and early evidence suggests that fairness is what’s being sacrificed.
And right now, we don’t have strong evidence that these tools are either accurate or equitable.
What AI Is Actually Doing to Your Application
These tools are not just organizing your application. They are interpreting it.
Bachina et al. report that the AI software used by residency programs can:
- Convert your grades into percentiles and visual summaries
- Assign “academic career interest” badges based on your personal statement
- Score subjective traits like leadership or program “fit”
All of this happens before a human ever looks at your file. Even more concerning, one study found only a 7% overlap between applicants selected by AI and those selected by human reviewers. That’s not optimization; that’s a completely different selection process.
The Part That Makes Me Nervous
Bias without intention is still bias
AI does not need to “know” your race or background to disadvantage you. If the model performs worse on certain types of transcripts, language patterns, or writing styles, it can create what’s called disparate impact.
Examples already flagged:
- Lower accuracy with international transcripts
- Issues with low quality or scanned documents
- Potential differences in how “soft traits” are interpreted across groups
AI isn’t trying to discriminate. But the software inherently cannot take nuance into account. And legally, that matters.
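To make "disparate impact" concrete: in U.S. employment law it is commonly screened for with the EEOC's "four-fifths rule," which flags a selection procedure when one group's selection rate falls below 80% of the highest group's rate. The sketch below applies that rule to hypothetical numbers (none of these figures come from the article; they are illustrative only):

```python
# Illustrative sketch of the EEOC "four-fifths rule" for disparate impact.
# All counts below are hypothetical, invented for illustration -- they are
# NOT data from the JAMA viewpoint or any real screening tool.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who pass the screen."""
    return selected / applicants

# Hypothetical screening outcomes for two transcript types
rate_domestic = selection_rate(selected=120, applicants=400)       # 0.30
rate_international = selection_rate(selected=45, applicants=250)   # 0.18

# Adverse impact ratio: lower selection rate divided by higher one
impact_ratio = min(rate_domestic, rate_international) / max(
    rate_domestic, rate_international
)

print(f"Impact ratio: {impact_ratio:.2f}")  # 0.60, below the 0.80 threshold
if impact_ratio < 0.80:
    print("Potential disparate impact flagged")
```

The key point the rule captures is the one above: no one has to intend discrimination, and the model never has to "see" a protected attribute. If an AI parser simply performs worse on international transcripts or scanned documents, the selection rates diverge and the ratio drops on its own.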
Errors are happening and you may never know
These AI tools are not systematically or independently evaluated yet. And errors are already happening. The article describes cases where AI incorrectly extracted grades from transcripts. Not slightly off. Just wrong. The bigger issue is that you don’t see what the AI sees. So if it misreads your transcript or mislabels your interests, that error can quietly follow your application across programs without you ever knowing.
At the same time, there’s very little transparency around how these systems work. You don’t know how decisions are being made. Programs may not fully understand them either. And there is limited independent validation. Even accountability is unclear. These tools are framed as “decision support,” not decision-makers. So when something goes wrong, responsibility gets blurry fast. And who suffers the most? The applicant who isn’t even aware of AI screening their application.
What This Means for You as an Applicant
This is the part that hit me the most reading this, especially as someone fresh off Match Day. We spend so much time optimizing our applications: grades, research, personal statements, letters of recommendation. But now there’s another layer, one we don’t fully see or control, with consequences we may never be aware of.
A few practical takeaways:
Be mindful of how your application is presented
Clear, organized, and easy-to-read materials matter. Not just for human reviewers, but because automated tools may parse your documents before any human reads them.
Recognize that parts of your application may be interpreted, not just read
Concepts like “fit” or “interests” are not always taken at face value. This makes it even more important that your application tells a clear, consistent story about who you are and what you care about.
Stay aware of how the process is evolving
There is growing discussion around giving applicants access to AI-generated outputs. If that happens, it could change how transparent the process feels and how much insight you have into how your application is being reviewed. In my opinion, that would be extremely helpful, since applicants have the strongest incentive to catch AI errors in their own files.
How I’m Thinking About This Going Into Residency
Reading this honestly changed how I think about the process and how I guide students I tutor. I used to think the challenge was getting the application in front of the right people. Now I think part of the challenge is making sure the application is interpreted correctly before it even gets there.
It also reinforces something I’ve been learning all year. You can do everything “right” and still not have full control over the outcome. And that’s uncomfortable. But it’s also part of the system we’re in.
Where This Is Headed
The authors make a strong point. AI in residency selection is not going away. If anything, it’s going to expand.
They recommend:
- More transparency from AI vendors
- Applicant access to AI-generated data
- Formal standards from the AAMC
- Avoiding high-risk tools like AI-scored video interviews
Because once these systems scale, small errors don’t stay small. They propagate across programs, across cycles, and across careers.
Final Thoughts
AI has the potential to help a broken system. But right now, it’s being layered onto an already high-stakes, high-pressure process without enough safeguards. As applicants, we don’t control the system, but understanding how it works is becoming essential, especially as it continues to shape outcomes in ways we’re just starting to see. And if you’re navigating this process yourself, having the right guidance can make a real difference in how your application is presented and interpreted.
If you want expert support with your residency application, explore our 1:1 Residency Advising and get personalized guidance tailored to your goals.