The Questions We Ask About AI in Education

AI-generated image; 100% human-authored text

Artificial Intelligence has entered the world of education like an explosion, leaving us reeling in its wake. Everyone has questions: How could AI reshape education? What risks does it create? These questions are natural, coming as they do from our common experiences. But what if they’re the wrong questions to ask? What if we should be questioning what we’re doing in education at all?

What AI Could Be

Generative AI has been around for almost three years now, and we’ve watched it grow from its first tentative steps into an all-encompassing technology that is reshaping every aspect of society, including education.

Despite what the critics say, artificial intelligence does much more than merely parrot what it has already seen. That criticism can be made only of the most basic language models trained on nothing more than the words we use online in articles and social media. Artificial intelligence can be trained on any sort of data at all - words, pictures, music, x-rays, traffic flows, you name it.

Multi-modal AI, as it’s called, can be trained not only on any kind of data we can think of, but on far more of that data than any human could absorb, and, even more importantly, it can see that data in new ways. At their core, the things we call ‘stochastic parrots’ work by detecting patterns in the data, and unless we constrain them with predefined labels and categories, they can detect previously unseen trends, relations and categories.

Well - what could we do with artificial intelligence in education?

  • Could we use AI to grade student work (and maybe even detect whether they were cheating)?
  • How can AI be used to create new learning resources that are personalized according to the student’s interests, background, and work thus far?
  • Can AI be used to create and orchestrate in-class activities such as group work, labs and hands-on practice, games and other forms of interactive learning?
  • Is there a method whereby we can use AI to integrate work and learning, so that our tools train us as we use them?
  • Can AI replace teachers? Is the ‘robot tutor’ even a possibility?

However, along with the enthusiasm for a new set of tools that holds the promise of increasing access, quality and effectiveness in online learning comes an increasing number of questions that need to be asked before we start rolling it out in the classroom.

The Questions We Ask

The questions are by now familiar to everyone. They include:

  • Has generative AI been trained with copyrighted content, and are we responsible for compensating authors and creators? What other legal risks does AI create?
  • What do we do about students cheating by using AI instead of their own brain to research and write their assignments? How can we even detect it?
  • How can we be sure that AI is not misleading students with incorrect answers and even misinformation? What guardrails are there around harmful information?
  • What about the bias in AI training data? How can we be sure AI is not perpetuating harmful stereotypes and creating further disadvantages for at-risk students?
  • Is the use of AI creating a cognitive deficit in students, making it harder for them to learn critical thinking and creative skills?
  • Can we use AI in education without violating students’ security and privacy?
  • What risks are created by depending on a few large companies like OpenAI, Microsoft and Meta to create and manage the content and data used to educate the next generation?
  • As we depend on more and more sophisticated tools, are we increasing the digital divide and inequality in society?
  • Where will we get the energy and resources we need to sustain development and growth in AI across society?

These questions are based on our concerns about the risks created by using this new technology, and reasonably so. We’ve learned from experience with new technologies - everything from internal combustion engines to social networks - that even the most useful tools can have undesirable side-effects.

Risks and Opportunities

So yes, the questions about risks need to be asked. And they have been asked, and there is a small army of scholars, developers and policy-makers working today to ensure that the AI systems of the future are as safe as they can be. It’s not that we need to ignore these questions, but it is probably fair to say that they’re being handled.

That said, while we should always be aware of risks, questions about risks are never really the right questions to ask when we’re looking at something exciting and new. Dwelling on risks may keep us safe, but it also leaves us stymied and frozen, unable to take that next new step into something challenging and interesting.

As individuals and as a society we are at our best when we are focused on new opportunities, new discoveries and new answers to old questions. The rise of artificial intelligence has already forced some of this on us: what do we mean by creativity? How do we define intelligence? What is it that makes us essentially human?

Depending on how we answer these questions, we’re not just looking at education as usual any more. We already know that we may be training people for jobs that won’t exist in ten years. We know that we’re teaching skills that could be performed by a $49 piece of silicon and plastic. None of the answers to any of the questions above will prevent this from happening. But absent good answers to our questions, we just keep moving on, doing what we’ve always done.

We need better questions than we have been asking to this point.

Why We Educate

To begin at the beginning: what is it we are even trying to do in education? Why exactly do we want to impart new knowledge and new skills to students?

There are three traditional answers to this question:

  • The economic - to train people to be producing and contributing members of society, to provide a skilled workforce for industry, and to foster innovation and development
  • The social - to give people the background knowledge and critical thinking skills they need to participate in a democratic society
  • The personal - to empower people, to help them be able to grow, to maximize their potential and to give them the chance to enjoy the most fulfilling life possible

The questions we listed above all have to do with at least one of these three objectives. Some are rooted in the fear that we will no longer be able to satisfy the objective. Others are rooted in the hope that new technology will increase our ability to satisfy it.

What we are not asking, though, is whether we will need to do any of these things in the future at all.  

Economic and Social Benefit

Think about it. How far are we from a state of affairs where machines can innovate and develop new technologies that can manage themselves and produce anything we need, from new houses to food and drink to games that amuse us and give us meaning? Why would we continue to want to educate humans to do this, when it is already being done?

People argue that there are some things machines will never be able to do, involving skills like care and compassion, that will remain the domain of humans. Perhaps this is true. Even if so, though, we won’t need everyone to be doing this. And it is certainly conceivable, and perhaps even possible, that machines could take care of us as well as any human.

While it is admittedly a stretch, we can imagine a world in which there is no longer a real economic need for education and development. What about the other two, the social and the personal? In a world filled with artificial intelligence, do we even need critical thinking, democratic governance, or even to develop to our maximum potential?

Why do we even have society? What function does the government actually fulfill? Historically it has been to protect us from harm and to manage the governance of scarce resources. Why do we need knowledge and critical thinking to manage a government if machines can do it for us?

To be sure, we want instinctively to preserve our freedom, and therefore recoil at the possibility of these decisions being made for us. But most such decisions are already being made for us, and often inefficiently and poorly, and we mostly feel no loss of freedom. Imagine, for example, justice being swift and accurate, taxes being unnecessary, and resources no longer being scarce. What is there to decide?

Personal Benefit

If artificial intelligence fulfills its promise, the real questions will come down to the third of our educational objectives, the personal. This naturally has two parts:

  • being able to grow and to maximize our potential  
  • enjoying the most fulfilling life possible

These two things raise the hard questions.

First, what is it to maximize our potential? What actually is the maximum potential of a human being? One way we have historically defined this is as health and physical fitness, which our robot doctors and trainers may well provide for everyone. Another way we can define it is in terms of knowledge and skills, though most of these will be archaic and no longer necessary. Perhaps our maximum potential may be found in the ethical or spiritual domains. Perhaps it is in bravery, or curiosity, or camaraderie. The problem is, we just don’t know.

Second, what does a fulfilling life look like? We’ve seen that people can have fulfilling lives even in a world of work, of scarcity, and of conflict. But what does a fulfilling life look like when these challenges have been removed? Do we require adversity in order to be fulfilled? Or can we find it in compassion and joy? Is it found in serving something bigger than ourselves? Once we remove individual wants and hardships, will a fulfilling life look more or less the same for everyone, or will different people find fulfillment in different ways? Can we have a fulfilling life even if we don’t come close to maximizing our potential?

How do we even begin to answer these questions?

Most people, if given the chance, and after some thought, can arrive at some form of answer to at least parts of these questions. They express this answer any time they think about their dreams and aspirations.

But often they are not given the chance. Through intent or circumstance, through the management of need or the manipulation of desires, their path to a genuinely fulfilling life may be blocked or misdirected by people or organizations who view people as nothing more than a resource, to be used as a means to an end, but never to be seen as an end in themselves.

We’re Already There

It doesn’t matter whether or not artificial intelligence fulfills all the promises the pundits make for it, or whether it’s a disappointment and performs no better than it does today. Humanity is already being pushed toward a world in which the economic, social and personal objectives of education are being challenged, making these last two questions more important to answer than ever.

So where are we now?

Though schools and universities are still trying to answer the old questions, we’re already at the point where we need to be asking the new questions. The changes that have already happened will not go away. Who do we want to be? What do we want our lives to be about? Artificial intelligence poses opportunities and risks, but perhaps most of all, it gives us the opportunity to shift the question from what we must do in order to prosper, to what we want to do in order to make our prosperity worthwhile.
