Artificial Intelligence in Education: Context, Rules, and Limits

 

Good morning, ladies and gentlemen, greetings from Canada. Thank you for inviting me to speak at this important forum on artificial intelligence and education. Here is a QR code that you can scan to access these slides, with a transcript, just in case my French is not clear.

Context, Rules, and Limits

We will talk about artificial intelligence in education in general terms. I will start by explaining what AI is and how it works, very briefly, and then I will discuss the three main themes of the conference: context, rules, and limits.

I interpret context as “use,” and I will look at the many ways in which AI can be used in education.

I interpret rules as “principles,” and I will look at what are the fundamental principles that govern how and why we use AI in education.

And I interpret limits as “practical guidelines,” and I will look at appropriate mechanisms for teaching and learning, governance, and security.

What is AI

We have two types of AI. Weak AI, also called narrow AI, is capable of performing a specific task for which it was designed. Strong AI, on the other hand, is capable of learning, thinking, and adapting as humans do. That said, strong AI systems do not yet exist. In this sense, the term “AI” is an aspiration, not a description of what has actually been created.

Weak AI

Then we have two types of weak AI. An expert system is based on rules, facts, goals, and domain-specific knowledge provided by human experts. Expert systems are based on ontologies, which define what kinds of objects have what kinds of properties. Knowledge, rules, and ontologies are described using a system of formal symbols, i.e., a language.
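The rule-based approach can be sketched in a few lines of code. This is a minimal toy, assuming an invented course-placement domain; the rules, facts, and names here are illustrative, not taken from any real expert system.

```python
# A minimal sketch of a rule-based expert system using forward
# chaining: apply rules to known facts until no new facts appear.
# The domain (course placement) and all rule names are invented.

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # A rule fires when all of its conditions are known facts
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Each rule: (set of required facts, derived fact)
rules = [
    ({"passed_algebra", "passed_geometry"}, "ready_for_calculus"),
    ({"ready_for_calculus", "wants_stem"}, "recommend_calculus_1"),
]

result = forward_chain({"passed_algebra", "passed_geometry", "wants_stem"}, rules)
print(sorted(result))
```

Notice that the system's "knowledge" lives entirely in the human-written rules, which is exactly what distinguishes an expert system from a data-driven model.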

Machine learning models, on the other hand, are entirely data-driven. Input data is processed by mathematical and statistical algorithms, resulting in what are called “models.” Data can come in any form and are represented as mathematical “vectors.”

Machine learning

Deep learning is a type of machine learning. Traditional machine learning is typically “supervised,” meaning that it requires human-labeled data to train the model. Deep learning, on the other hand, uses artificial neural networks to mimic the learning process of the human brain and is capable of unsupervised learning.

Deep learning

Almost all recent advances in artificial intelligence rely on deep learning. Deep learning is a method that allows computers to learn by example, much like humans do. It uses a layered structure of algorithms called neural networks to process data, recognize patterns, and make decisions.

Neural Networks


What do we mean when we say that a neural network learns? Neurons receive inputs, process them, and produce an output. They are organized into distinct layers: an input layer that receives the data, several hidden layers that process that data, and an output layer that provides the final decision or prediction. 

There are adjustable parameters within these neurons called weights and biases. As the network learns, these weights and biases are adjusted. So when we say that a neural network has “learned,” we mean that its weights and biases have been successfully adjusted.

Neurons

So what are weights and biases? Weight is the strength of the connection between one neuron and another. Not all connections are created equal. Some are stronger than others.

Bias is the sensitivity of the receiving neuron. How likely is it to respond to input signals? This is one way to understand the importance of that particular neuron. The result is sent through an “activation function” that determines whether the neuron sends a signal to the next layer of connected neurons.

There are many types of neural network algorithms. They are all based on different ways of adjusting the parameters presented here.
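The computation of a single neuron, as described above, can be sketched as follows. The input values, weights, and bias are arbitrary numbers chosen for illustration; a real network has many neurons per layer and learns these parameters from data.

```python
import math

# A minimal sketch of one neuron's computation: inputs weighted by
# connection strength, plus a bias, passed through an activation
# function. All numbers here are arbitrary illustrations.

def sigmoid(x):
    # Activation function: squashes any value into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Weight each input by its connection strength, then add the bias
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# "Learning" means adjusting these weights and this bias
out = neuron([0.5, 0.8], weights=[0.4, -0.2], bias=0.1)
print(out)  # a value between 0 and 1
```

When we say the network has “learned,” we mean a training algorithm has nudged the weights and biases so that outputs like this one match the desired answers.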

The AI Workflow


Creating and using artificial intelligence is the result of a complete workflow that starts with collecting data and ends with applying an AI model at scale.

When we think, for example, of ChatGPT, we’re talking about how it collected data from places like Twitter and Reddit and other sources on the web, processed the data, and applied it at scale in the form of an application that people can use on the web.

Because there are many different types and methods of designing AI algorithms, it becomes more relevant at this point to talk about the different types of artificial intelligence by considering their context of use.

How do we think?

Also, the different types of artificial intelligence raise an important fundamental question for educators: How do humans think? What are we actually doing when we see, plan, and decide?

Many people think that human knowledge and learning are based on rules and categories. This belief often goes hand in hand with the belief that all thought takes place in language. Language is the basis of logic and reason. But then, how do babies and animals think? Or maybe we think they don’t think at all.

Other people think that human reasoning is based on probability and statistics. Economists sometimes seem to think this way. Are there laws of thought, as there might be laws of nature?

I think that knowledge and learning are based on pattern recognition. Everything we experience, everything we think, forms a deep and complex neural network that works by recognizing patterns in data. Most of this happens automatically, unconsciously.

But if that’s true, how do we explain why we did what we did, or why we believed what we believed? Do we really have reasons? Or are the reasons we give just phrases that sound good in context?

We should think about these questions as we move forward.

Context

As I said earlier, I interpret context as “use” and will examine the many ways AI can be used in education.

In the literature, four traditional applications of AI are distinguished: descriptive, diagnostic, predictive, and prescriptive. Recently, generative AI has been added. And I predict a sixth type of application: deontic AI.

Descriptive AI – “What happened?”


Descriptive analytics includes analyses focused on description, detection, and reporting, including mechanisms to extract data from multiple sources, filter it, and combine it. The output of descriptive analytics includes visualizations such as pie charts, tables, bar charts, or line graphs. Descriptive analytics can be used to define key indicators, identify data needs, define data management practices, prepare data for analysis, and present the data to a viewer (Vesset, 2018).

Typical applications include:

  • Tracking
  • Systems analysis
  • Institutional compliance
  • Student profiles
  • Dashboards

It often relies on human labeling or categorization.

There is a significant overlap here between descriptive analytics and data science and data literacy.
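A descriptive analysis can be as simple as aggregating raw event data into the counts a dashboard would display. The event log below is invented for illustration; real systems would pull these records from a learning management system.

```python
from collections import Counter

# A minimal sketch of descriptive analytics ("what happened?"):
# summarize a raw event log into counts for a dashboard.
# The events and field names here are invented examples.

events = [
    {"student": "A", "action": "login"},
    {"student": "B", "action": "login"},
    {"student": "A", "action": "submit"},
    {"student": "A", "action": "login"},
]

# Count how often each action occurred
by_action = Counter(e["action"] for e in events)
print(by_action["login"], by_action["submit"])  # 3 1
```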

Diagnostic AI – “Why did this happen?”


Diagnostic analytics looks deeper into the data to detect patterns and trends. Such a system could be thought of as being used to draw an inference about a piece of data based on patterns detected in the sample or training data, for example, to perform recognition, classification, or categorization tasks.

Applications include:

Anomaly Detection

Anomaly detection involves tasks that target the identification of outliers or extreme data points in a large data set.

  • For example, spam detection, plagiarism detection, counterfeit detection
  • Security
  • Access control
  • Surveillance
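A common way to detect anomalies is to flag data points that lie far from the mean. The sketch below uses a two-standard-deviation threshold; both the threshold and the scores are arbitrary choices for illustration.

```python
import statistics

# A minimal sketch of anomaly detection: flag values more than
# two standard deviations from the mean. Threshold and data are
# invented for illustration.

def outliers(values, threshold=2.0):
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    # Keep only the points that deviate strongly from the mean
    return [v for v in values if abs(v - mean) > threshold * stdev]

scores = [71, 69, 73, 70, 72, 68, 74, 12]  # 12 is the anomaly
print(outliers(scores))  # [12]
```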

Dependency modeling and regression

Dependency modeling targets the identification of specific associations between data points that might otherwise go unnoticed.

  • For example, sentiment analysis
  • Opinion sampling
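Sentiment analysis can be sketched with a word lexicon: associate words with positive or negative feeling and count them. The lexicon and example sentence are invented; real systems learn these associations from data rather than using a hand-made list.

```python
# A minimal sketch of sentiment analysis via a hand-made lexicon.
# Real systems learn word-sentiment associations from training data;
# the word lists here are invented for illustration.

POSITIVE = {"great", "helpful", "clear"}
NEGATIVE = {"confusing", "boring", "slow"}

def sentiment(text):
    words = text.lower().split()
    # Net score: positive word count minus negative word count
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("the lecture was clear and helpful"))  # positive
```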

Clustering and classification

These tasks segment data into similar clusters based on the degree of similarity between data points.

  • For example, automated scoring
  • Skill assessment
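Clustering groups data points by similarity. The sketch below shows only the assignment step of an algorithm like k-means, with two fixed centroids; a full algorithm would also update the centroids iteratively. The scores and centroid values are invented.

```python
# A minimal sketch of clustering: assign each score to the nearest
# of two fixed centroids. This is only the assignment step of
# k-means; real clustering also updates the centroids. Data invented.

def assign(points, centroids):
    clusters = {c: [] for c in centroids}
    for p in points:
        # Nearest centroid by absolute distance
        nearest = min(centroids, key=lambda c: abs(p - c))
        clusters[nearest].append(p)
    return clusters

scores = [91, 88, 95, 42, 47, 39]
print(assign(scores, centroids=[90, 45]))
# {90: [91, 88, 95], 45: [42, 47, 39]}
```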

Summary

Condensing data for easier reporting and consumption, without losing the valuable, detailed information that supports clearer decision-making.

For example:

  • Content Summarization
  • Audio and Video Transcription
  • Supporting Special Needs
  • Peer Feedback Analysis


Predictive AI – “What will happen?”


Predictive analytics answers the question: what will (likely) happen, based on identifying patterns and trends in existing data, and extrapolating that pattern or trend to likely future states. For example, such an analytics system might look at a student’s participation in a course and then predict whether the student will pass or fail.

Typical applications include:

  • Resource planning
  • Learning design
  • User testing
  • Identifying students at risk of failure
  • Instructional consulting
  • Precision education
  • Student recruitment
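The pass/fail example above can be sketched as a simple linear prediction. The features, weights, and decision threshold here are hand-set inventions for illustration; a real predictive system would learn them from historical course data.

```python
# A minimal sketch of predictive analytics ("what will happen?"):
# estimate pass/fail from participation with a hand-set linear
# score. Weights and threshold are invented, not learned.

def predict_pass(logins_per_week, assignments_done):
    # Weighted combination of participation signals
    score = 0.3 * logins_per_week + 0.7 * assignments_done
    return score >= 3.0  # decision threshold, chosen arbitrarily

print(predict_pass(logins_per_week=5, assignments_done=4))  # True
print(predict_pass(logins_per_week=1, assignments_done=1))  # False
```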

Prescriptive AI – “How do we get there?”

Prescriptive analytics recommends solutions. Using prescriptive analytics can inform a human user or a computer system of a need that must be satisfied. Such needs can be generated from rules or principles, operating limits or boundaries, balancing equations or mechanisms, or user input. The requirement for a solution can be based on the existence of a need combined with a prediction that the need has not been or will not be met. For example, the analysis can predict increasing pressure levels that exceed the tolerance of a pipeline.

Some typical applications:

  • Learning recommendations
  • Adaptive learning
  • Adaptive group formation
  • Placement matching
  • Hiring
  • Pricing
  • Decision making
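Following the pipeline example above, prescriptive analytics combines a prediction with a requirement and recommends an action. This sketch extrapolates the next pressure reading from the recent trend; the readings, tolerance, and action names are invented for illustration.

```python
# A minimal sketch of prescriptive analytics ("how do we get
# there?"): predict the next reading from the recent trend, and
# recommend an action if it would exceed tolerance. All numbers
# and the action names are invented.

def recommend(readings, tolerance):
    # Extrapolate one step ahead from the last observed change
    predicted = readings[-1] + (readings[-1] - readings[-2])
    if predicted > tolerance:
        return "reduce pressure"
    return "no action"

print(recommend([100, 110, 120], tolerance=125))  # reduce pressure
```

The same pattern applies in education: predict that a need will not be met (a student falling behind), then prescribe an intervention.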

Generative AI – “Creating something new”

Generative analytics uses past data analyses to generate original content based on parameters or properties of the data being studied, combined with predictions or requirements for future data. For example, generative analytics could use a library of Picasso’s paintings as data, and then generate new Picasso-style paintings based on photographs or drawings.

Some examples of generative AI:

  • Chatbots
  • AI-generated content
  • Automatically generated animation
  • Coaching and tutoring
  • Artificial teachers
  • Content curation
  • Robotics
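The core generative idea, learning patterns from data and producing new content from those patterns, can be sketched with a simple word-pair (Markov chain) model. Modern generative AI uses deep neural networks rather than this toy; the sample text is invented.

```python
import random
from collections import defaultdict

# A minimal sketch of the generative idea: learn which word follows
# which in sample text, then generate new text from those patterns.
# This toy word-pair model stands in for the deep neural networks
# modern systems actually use; the sample text is invented.

def build_model(text):
    model = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)  # record every observed successor
    return model

def generate(model, start, length=5):
    out = [start]
    for _ in range(length):
        successors = model.get(out[-1])
        if not successors:
            break  # dead end: no word ever followed this one
        out.append(random.choice(successors))
    return " ".join(out)

model = build_model("the cat sat on the mat and the cat ran")
print(generate(model, "the"))
```

Every generated sentence is “new,” yet every word transition in it was learned from the training text, which is the essence of generative AI.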

Today we have many applications of generative AI that have made headlines, such as ChatGPT and Anthropic’s Claude, which can generate new content such as articles and essays, software, audio recordings, artworks, and more. To many people, this seems like the whole of AI, but as we can see, it is only a small part of it.

Deontic AI – “What should happen?”

Deontic analysis answers the question “What should happen?” This is a class of analytics that examines expressions of feelings, needs, desires, and other such factors to determine what kind of outcome would be best, and then works to achieve that outcome. In this sense, it is the use of analytics to inject ethical, political, or cultural order into an environment, whether that is a discussion list, resource allocation, or personnel management.

Applications include:

  • Community norms
  • Influencing behavior
  • Identifying harm
  • Amplifying good
  • Defining what is right
  • Changing the law
  • Moderating discourse
  • Alleviating distress

Learn more

You can read more about all of these applications in the online MOOC I wrote on ethics and learning analytics.

Rules

As I said earlier, I interpret the rules as “principles” that govern the ethical and responsible use of AI in education.

Current approaches

There have been many studies conducted by institutions around the world on the appropriate use of AI. They have looked at the use of AI in general and at its use in specific contexts, such as education.

Links to all of these studies can be found in the notes on this page in the PowerPoint slides.

  • Montreal Declaration for the Responsible Development of Artificial Intelligence - 2018
  • OECD Principles for the Responsible Management of Trustworthy AI - 2024
  • Hiroshima Process - International Guiding Principles for Organizations Developing Advanced AI Systems - 2023
  • UNESCO - Artificial Intelligence in Education - 2024
  • Alberta School Boards Association - Strategic Directions on Artificial Intelligence - 2024

I also conducted a study of these and other studies a few years ago; you can find the reference in my ethics course.

Here I present a subset of the ethical principles that many of these reports found and that the authors believe should govern our use of artificial intelligence in education.

These findings are often presented as a consensus, but in my own analysis, they are interpreted differently by professions and societies. While I describe them here, it is important that you consider them from your own perspective.

Transparency

Transparency is the idea of saying when AI is used, how it is used, what recommendations or decisions it has made, and why it has made them.

The idea of “explainable AI” is related to transparency, that is, understanding why an AI worked the way it did, so that we know that it worked in accordance with facts, values, and expectations.

Justice and fairness


Justice is often associated with fairness, although these two concepts are of course distinct. Justice is related to respect for and enforcement of laws, as well as the idea of a “just society” or a “fair society.” As a result, in most literature describing laws or rules governing the use of AI, justice is often cited alongside concepts such as consistency, inclusion, equality, and access.

Non-maleficence

This stems from the principle of medical ethics: do no harm. In education, this is particularly relevant to the safety of students and children.

Keywords: Non-maleficence, safety, security, harm, protection, precaution, prevention, integrity (bodily or mental), non-subversion

Privacy

This principle includes the concepts of security and confidentiality. Throughout the entire AI workflow, this condition comes into play in many places.

It is important to establish data protection laws that make the collection and analysis of educational data visible, traceable, and verifiable by teachers, students, and parents. In Europe, the collection of personal data is governed by the GDPR, which includes the “right to be forgotten.”

Many people argue that the effectiveness of AI will depend on surveillance, which will impact the privacy of students and teachers.

There is also the question of how the data will be used. Will it be limited to educational purposes or will it be used for advertising?

Beneficence

The idea of beneficence is to do good to individuals and society. There are many aspects to this, but the purpose of each is a beneficial intention or goal in using AI.

In education, this includes things like improving learning outcomes, increasing access, and reducing costs.

How should these goals be accomplished?

Some of the elements considered include the idea that we should “emphasize the autonomy and social well-being of students in the process of integrating AI-based tools” (UNESCO 2021) and also the principle of “duty of care,” which is the idea that you should consider the well-being of anyone to whom you provide a service.

This is based on an ethic of care, which is the idea that ethics is based on a relationship between the service provider and the recipient based on open and trusting communication. It is the idea of listening to the person and accepting what they say about their needs as honest and factual.

Freedom and autonomy

In education, the concepts of freedom and autonomy are linked to the idea of student agency. This does not mean that students can do whatever they want, but it does recognize that in some settings it is best to allow and empower them to make their own choices and decisions.

To support agency, educators are encouraged to cultivate a learner-centered use of AI, that is, to reinforce and reiterate the authority and autonomy of humans over their own learning and the tools they use to support their learning.

This includes the principle of “human-centered AI,” that is, a definition of AI based on human needs and values, with human oversight and decision-making.

For example, UNESCO’s mandate inherently requires a human-centered approach to AI. “The aim is to refocus the debate on the role of AI in addressing current inequalities in access to knowledge, research and diversity of cultural expressions, and to ensure that AI does not exacerbate technological divides within and between countries.”

The principle of “informed consent” plays a key role here. It is the idea of transparency combined with the idea that individuals have a choice about whether or not to use AI.

Fit for purpose

This goal is one that I created to bring together a number of different and related goals from various sources. It includes, for example:

  • The quest for knowledge
  • The five objectives of maqasid (protection of religion, life, intellect, progeny and property) (Talal Agil Attas Alkhiri, p. 743)
  • Value and benefit – we need to ask who benefits from the use of AI, as opposed to who pays the cost
  • Educational relevance – appropriate use, grounded in educational research, evidence-based, relevance to learners’ needs, child-centered AI, developmentally appropriate
  • Reliability and accuracy – for example, people use AI to detect cheating and plagiarism, but AI cannot detect these cases accurately

There is another issue, outside our domain but worth noting: the question of the use of AI in policing and warfare.

Some observations

It is often assumed, or even explicitly stated, that the values contained in these ethical codes, and in ethics in general, are common, fundamental, and universal. But that is not the case.

For example, some discussions of ethics in artificial intelligence and analytics simply assume that privacy is a right and should be respected. But when that assumption is challenged, as it can be, we have to ask ourselves what the basis for such an assertion is.

After all, privacy protects criminals and innocents alike.

And perhaps we will feel that we should simply balance the two options, but what makes such a consequentialist approach, a technical approach to balancing, the right approach? You wouldn’t balance killing and not killing. Would you?

Every society approaches these questions from a different angle, drawing its own conclusions on different grounds. Some societies are based on individual rights. Others take an approach that balances risks and benefits. Still others prioritize the social good or social order.

No one can determine what the rules are for your own society except you. How you decide, what you decide, how you implement AI in learning and in society at large, is up to you.

Limits

As I said earlier, I interpret limits as practical guidelines. Now I will look at appropriate mechanisms for teaching and learning, governance, and security.

Practical guidelines

We talk about limits, and that’s fine, but what we need in the classroom and on campus are practical guidelines for action. In this part of my talk, I will draw on the Alberta guidelines because they provide a solid foundation for structuring the conversation, although I caution you again that it is up to you to decide what they look like in your own schools and whether they are sufficient for your needs.

I also recommend to teachers and students the UNESCO AI Competency Frameworks (links are in the slide notes).

And to come back to the previous topic about how we think – while I know it is tempting, especially for administrators, to frame an approach to AI in terms of rules and principles, this may not be how we think about complex topics like this, because as we know, rules tend to help us only in the easy cases, but there are many cases where we have to rely on our own judgment and intuition.

Learning

The first practical principle is the need to support the learning of AI. We cannot use it in a practical way if we do not understand it. This means, for example, that we:

  • Build AI literacy skills for teachers and students – what AI is, how to use it, where it is appropriate to use it, and where it is not
  • Talk about AI – create a community to talk about where we see AI around us and to reflect on what we think about it
  • Rethink tests and homework. Can we really stop students from using AI? How can we structure education so that they benefit from the use of AI?
  • Access. Not all students will have access to AI. This can create inequalities in the classroom.

Structure

The second principle is about creating institutional structures:

  • Support is important, so there needs to be a structure or organization that considers the responsible and ethical use of AI
  • When AI is used, informed people should be involved; when AI policy is discussed, it should include people who know about AI
  • There should be a list or inventory of AI tools, and these should be vetted, so that there is a list of “safe” AI tools
  • Not all tools are suitable for children

These things can be done by the national government, but they can also be done in individual schools and institutions to meet specific cases and local requirements

Safety

Safety is a top priority.

  • Ensure cybersecurity protocols take AI into account
  • Pay special attention to whether these protocols extend to data security

AI literacy must include a data literacy foundation

Governance

Governance is about how decisions will be made about the use of AI in education and what those decisions should take into account.

For example:

  • Update or establish policies to describe prohibited uses of AI
  • Determine the legal status of data used by AI and content created by AI (these issues are still being decided in North America and Europe)

Thank you

As I said at the beginning of this talk, it is important to note that this is just one topic in the much larger field of technology in education.

AI will be integrated into learning management systems, social media, virtual reality, security, and cryptography, including blockchain. There will be a synergistic effect as these technologies combine.

What we have talked about today is just the beginning of what I think will be a very exciting time for teaching and learning, here in Morocco and around the world.

Thank you, and again, here is a QR code that you can scan to access these slides, a transcript, the audio, and hopefully the video of this presentation.
