Half of College Students Are Rethinking Their Majors Because of AI. Meanwhile, a Robot Showed Up at the White House.

New data shows 47% of college students have considered switching majors over AI fears. Students are using it more than ever while saying it harms their thinking. And Melania Trump walked with a humanoid robot at a White House education summit.


Nearly half of all college students in the United States have thought about changing their major because of artificial intelligence. Not because they want to work in AI. Because they’re afraid AI will eat the career they were planning for.

That statistic comes from the Lumina Foundation-Gallup 2026 State of Higher Education Study, which surveyed over 6,000 students last fall. Forty-seven percent said they’d given “a great deal” or “a fair amount” of thought to switching majors because of AI’s potential impact on jobs. Sixteen percent have already made the switch.

In the same week this data landed, First Lady Melania Trump walked alongside a humanoid robot at a White House education summit and published an op-ed arguing AI could deliver a “world-class education” to American children. The teachers union called it “every parent’s nightmare.”

The gap between how students experience AI and how leaders talk about it has never been wider.

Students Are Using AI More Than Ever — and Trusting It Less

A separate Gallup study found that 57 percent of college students now use AI tools for coursework at least weekly. One in five use it daily. Their top applications:

  • 64 percent use it to help understand coursework
  • 60 percent use it to check homework answers
  • 54 percent use it to edit or improve their writing
  • 54 percent use it to summarize lectures or notes

But here’s the contradiction: while students are leaning on AI for nearly everything, a RAND Corporation survey found that 67 percent of students now say using AI for schoolwork harms their critical thinking. That’s up from 54 percent just six months earlier.

The concern is strongest among younger students. Between May and December 2025, AI homework use jumped from 30 percent to 46 percent among middle schoolers, and from 49 percent to 60 percent among high schoolers. Female students were particularly worried: 75 percent said AI harmed critical thinking, compared to 59 percent of male students.

Students aren’t saying this because adults told them to be worried. They’re saying it because they can feel it happening.

The Major Migration

The Gallup-Lumina numbers break down further in revealing ways. Among students pursuing associate degrees, 56 percent reconsidered their field of study — higher than the 42 percent of bachelor’s degree students. Men were nearly twice as likely as women to have actually changed majors because of AI (21 percent versus 12 percent).

Students in technology fields were the most anxious: 70 percent reported giving serious thought to switching. That isn't tech-illiterate students fleeing a field they don't understand; the students closest to AI are the ones most worried about what it means for their careers.

The irony is thick. The students best positioned to understand AI are the ones most convinced it threatens their livelihood.

A Robot at the White House

Against this backdrop of student anxiety, First Lady Melania Trump chose a very different message. At the “Fostering the Future Together” summit in late March, she appeared alongside Figure 3, a humanoid robot, telling attendees that AI robot educators “will provide a personalized experience, adaptive to the needs of each student.”

On April 4, she followed up with an op-ed urging Americans to “embrace” AI rather than “fearmongering about robots,” arguing the technology would supplement teachers, not replace them.

Randi Weingarten, president of the American Federation of Teachers, didn’t buy it. “What she did yesterday was every parent’s nightmare,” Weingarten said. “We need human beings to actually help other human beings in the teaching and learning process.”

The debate isn’t new, but the visual is: the First Lady walking side by side with a robot at a children’s education event, while survey data shows students themselves are increasingly uneasy about exactly this scenario.

Schools Are Making Up Rules as They Go

Meanwhile, schools are scrambling to write policies for technology that changes faster than any committee can meet. Columbus City Schools, Ohio’s largest district, unanimously approved a new AI policy this week after months of work with around 50 stakeholders. The policy gives teachers discretion over whether students can use AI on specific assignments and requires disclosure when they do. Only district-approved tools are permitted.

It’s a reasonable approach for now. But a Quinnipiac poll from late March found that 76 percent of Americans say they trust AI “hardly ever” or “only some of the time.” Schools are being asked to integrate a technology that most of the public doesn’t trust, for students who increasingly say it’s dulling their thinking, by teachers who are largely making up rules classroom by classroom.

The RAND survey found only about a third of students said their school had a schoolwide AI policy. Many reported rules that varied by teacher — one class banning ChatGPT entirely, the next requiring it for assignments.

What This Means

Three things are happening at once, and they don’t fit together neatly.

First, AI in education is no longer optional. The usage numbers make that clear. When 57 percent of college students use AI weekly and the number is climbing fast in middle and high schools, banning it is about as effective as banning calculators was in the 1980s.

Second, students are smarter about this than the adults setting policy. They’re using AI constantly and simultaneously recognizing it’s eroding skills they’ll need. That’s not hypocrisy — it’s a rational response to a system that hasn’t given them better alternatives. When your peers are using AI and your professor might not notice, not using it feels like bringing a knife to a gunfight.

Third, the political discourse around AI in education is disconnected from reality. Walking with a robot at the White House while students stress about whether their degree will matter in five years is tone-deaf at best. But so is outright opposition. The answer isn’t “embrace robots” or “ban everything” — it’s helping students develop the judgment to use these tools without losing the ability to think without them.

Columbus got closer to the right approach than most: clear rules, teacher discretion, required disclosure. It’s not perfect, but it acknowledges the complexity.

What You Can Do

If you’re a student: Use AI, but use it deliberately. Track when you’re using it to learn versus when you’re using it to avoid learning. The RAND data suggests you probably already know the difference.

If you’re a parent: Ask your kid’s school whether they have an AI policy — and if it’s schoolwide or teacher-by-teacher. Only about a third of students say their school has a schoolwide one.

If you’re an educator: The most useful thing right now isn’t banning or mandating AI. It’s teaching students to recognize when AI is helping them think and when it’s thinking for them. That’s a skill no policy document can automate.