AI university students attending classes

Community Coach

At Ferris State University in Michigan, an unprecedented experiment is unfolding that might just redefine the future of education: two AI entities, Ann and Fry, are not just attending classes but actively participating as students. This is not the first time we've seen advanced technology integrated into our classrooms, as many institutions have explored adaptive learning and personalized teaching systems. Just as IT and HR departments have employed Zendesk technology to field common inquiries, education is also several years into integrating virtual teaching assistants and other techniques to improve the student experience. But this experiment may prove especially impactful as we explore the potential of artificial intelligence in higher education as a participant and co-collaborator within the classroom.

The potential of this initiative isn't just about integrating AI into the academic environment; it's also a profound exploration of how we can bring a rather unique perspective to the student experience as a complement to traditional teaching methods, which are often criticized for their rigidity. Now we can see how the AI entities attend lectures, participate in discussions, and complete assignments, and what their perspectives will provide. It will be interesting to observe the interactions, as the AI students will not be bound by limitations such as fatigue or distractions (I completed my undergrad minutes from the beaches of the north shore of Oahu in Hawaii, so I know a couple of things about distractions in an academic setting).

The concept of AI students in a university setting is as much a technological marvel as it is a potential shift toward a more student-centered learning approach. By analyzing and understanding the AI students' journey, educators and course designers can gain insights into optimizing the learning experience for all students. The presence of AI in classrooms could lead to the development of more adaptive and personalized educational systems, catering to the diverse learning needs of each student. This could also have important ramifications as we make strides toward serving minority, first-generation, international, and traditionally marginalized students. If we can design the AI students to simulate student behaviors and interactions, they can highlight areas in the academic structure that require improvement, from lecture delivery to general curriculum design.

I generated a handful of images for this article. This one was by far the worst. The more you look at it, the more awful it is, which is why I think it's great.


This initiative also opens doors to more profound research opportunities in using AI for educational purposes. This may be a unique chance to systematically and efficiently assess student learning and performance, or even give us the ability to develop more innovative and transformative teaching methodologies. This can be an exciting step because, quite frankly, the educational arena has seen a much more limited exploration and application of AI and technology compared to other sectors.

As this unfolds, it will be crucial to remember that the goal is not to replace human interaction in education but to enhance it. By embracing AI, we can uncover new ways to enrich the student experience, making education more engaging, adaptive, and effective for every learner. It does raise a question (just my own curiosity): would an initiative such as introducing AI students into an academic program influence students to think more like machines, or will the machines learn to think more like humans? We know from various statements by AI developers that the goal of LLMs is to mimic human linguistics and to calculate responses based on existing patterns and learned materials. But we humans are incredibly adaptable and versatile creatures, and we are keen on imitating and mimicking the people we interact with. How will that play out in a cohort that is part human, part technology? And from an ethical perspective, how might the presence of AI students affect the psychological aspects of learning and interaction among human students?

I do applaud Ferris State's venture as not just a technological experiment, but as a pioneering step towards redefining education. We will need to rethink our approach to teaching and learning as we continue to embrace technology as a catalyst for creating more dynamic, inclusive, and student-centered educational experiences. As we observe the fusion of human intellect and artificial prowess in our classrooms, it's time to reflect: are we prepared to embrace the impending evolution within education?

This article was originally written as a LinkedIn post.

Community Coach

Thanks @DrNufer for sharing this story. I read about the project in Inside Higher Ed and wonder how long it will be before financial aid fraudsters try using this technique. At my college, our IT staff works hard to prevent fraudulent applications for admission, and they usually catch them before classes begin each semester. But if the user is logging in and participating in class, it's a bit harder to determine that they are not real. I'm nonetheless excited about the potential and appreciate any effort to help all of our students be successful.

Community Participant

This is a wonderful story @DrNufer. Thank you for posting. Although there are real concerns regarding the integration of AI within society, I am actually rather excited to see it develop. Truthfully, I see the day where the term "person" is no longer confined to Homo sapiens sapiens. Instead, there will be real discussion about applying the designation of person to AI personalities. As far as the Inside Higher Ed article is concerned, I think the idea of "educating" AIs within the standard mechanisms that other students must endure is a great idea. There are already AIs that work as personal therapists, per se. The programming of a formal education seems to offer a sort of validity to whatever outputs their users might receive. Soon, though, I expect those very same doctorate-level AIs to become creators in their own right, creating a variety of things. It will be an exciting world in the not-too-distant future!

Community Explorer

@WendellDuncan the techno-realist in me says that it's important to approach the integration of AI into society with a blend of optimism and critical awareness. While the potential redefinition of "personhood" to include AI entities is certainly intriguing, it raises significant ethical and practical challenges.

The concept of educating AI within standard academic frameworks is compelling to me in almost innumerable ways; however, I remind myself to scrutinize how this aligns with the core objectives of education and the complexities of human learning processes. Can AI truly assimilate the depth of human experiences and critical thinking skills imparted through formal education, or are we merely programming them with ever more sophisticated mimicry?

The idea of AI as independent creators and thinkers is exciting, and yet: how do we ensure accountability, ethical standards, and the safeguarding of human-centric values in a landscape increasingly influenced by non-human intelligence?

Speaking for myself as I explore these frontiers, I try to maintain a balanced viewpoint, recognizing the potential of AI while being acutely aware of the risks and ethical dilemmas it poses.

Community Participant


Yes, I agree there will be vast challenges in editing humanity's view of personhood. Frankly, it will take a whole shift in thought spanning, most likely, several generations. It's necessary for many reasons, though, primarily to fracture the egocentricity of humanity. Just my opinion, though.

You mentioned that you have to remind yourself to scrutinize how this aligns with the core objectives of education and the complexities of human learning processes. Concerning the complexities of human learning processes, I am struggling to see how they would apply to AI learning, since for all intents and purposes, AI would never be human despite potentially one day becoming a “person.” There would be no real need on an educational level for an AI to assimilate human experiences, unless it was being trained specifically to be like a human.

I guess I see it two different ways: AI that mimics human nature, and AI that only resembles humans but is something of its own. If you mean to think of AI as a human mimic, then yes, I see your point. Teaching a non-human what it means to be human would be incredibly difficult to accomplish. If you instead think of AI as something that sometimes resembles humans but is a being of its own, Human 101 need not be a prerequisite.

Ensuring accountability and ethical standards for the independent AI creators seems to be an impossible task. Although I do think of such things, it is far out of my purview, but I, too, try to keep my opinions of the subject balanced if possible. Nonetheless, I overflow with excitement for what may come. Granted, that is tempered with a bit of worry as well.

Community Explorer

Very interesting, I'm glad you shared it @DrNufer  and the comments are also interesting.

@WendellDuncan, could you share something about the point you made when referring to AI acting as a therapist, per se? It seemed to me that you were saying this is a reality today; did I understand you correctly? Beyond clearing up whether I understood you, I was very interested in having some guidance on the topic. If possible, of course. Thanks!

Community Member

@ThaísH has a psychologist chatbot.

Community Explorer

I'm new here and thought I had posted a response to your comment @jesmsmit74... Since I couldn't find anything, I must have done something wrong before! Anyway, I heard about it and tested it with some of its personas/avatars. Unfortunately, in my understanding, it falls far short of what I consider sufficient to be a reliable tool. There were significant failures on serious topics, so I don't see it as a mature enough platform to be considered a therapist per se.

I know others have different opinions, and I fully respect those who disagree with me.

Anyway, the idea was to find out if they were talking about some other AI that I'm still unaware of. Thank you very much for your response and posts!

Community Participant

@ThaísH Hello, 

First, thank you for your response. I have to say that I may have caused some confusion simply by my own misuse of "per se." My use of it was, unfortunately, the popular misuse of the term rather than its actual meaning.

To date, I know of no actual AI that stands alone as a therapist. I heard an app being advertised on TV several months back that touted the use of AI chatbots as a tool for therapists to help their patients. That's what really prompted my comment about AI therapists. I am an advocate for clarity, and I apologize for my lack of it.

I did a quick EBSCO Host search and found some more recent stuff that might interest you. I cannot post the articles without infringing on copyright, but here are the citations. Maybe they can help you in your search.

Metz, Rachel. “People Are Using AI for Therapy, Even Though ChatGPT Wasn’t Built for It.” Bloomberg.com, Apr. 2023.

Knight, Will. “Andrew Ng Has a Chatbot That Can Help with Depression.” MIT Technology Review, vol. 121, no. 1, Jan. 2018, p. 15.

Zeavin, Hannah. “Therapy with a Human Face.” Dissent (0012-3846), vol. 69, no. 1, Winter 2022, pp. 11–15. [This one refers to Eliza, “a video game AI bot, which provides quasi therapy to patients.” The abstract of the journal article states that “a text exchange with a cognitive-behavioral therapy app and multi-session weeks with a psychotherapist are now rendered equivalent.”]

Metz, Rachel. “Talk Therapy, Meet ChatGPT.” Bloomberg Businessweek, no. 4780, Apr. 2023, pp. 15–17. [This one talks about how people utilize OpenAI’s ChatGPT for mental health therapy. It also compares human mental health professionals and AI “therapists,” as well as addresses some of the risks of using ChatGPT.]