Assessment of Barriers to Educational Technology Acceptance

 

Automated transcription, lightly edited for paragraph breaks.
 
Welcome to the Sunday morning session, where the most important papers are presented.

So, a lot of this presentation wasn't actually in the paper.

It's part of a project that we did at the National Research Council with an organization called DRDC.

Much of the material in this presentation wasn't available to be shown at the time the paper was submitted, but it is available now.
 
It was part of an overall training modernization process for Defence Research and Development Canada. They were looking to conduct a study on factors that inhibited their instructors from using learning technology.

Our publications are available. I will show a QR code at the end of the presentation where the slides are available, and you'll be able to access them. This presentation is focused on the first of two publications: the analysis of barriers to technology adoption.
 
The second publication is an ontology framework for instructional strategies, and that has a lot of interesting material as well. Both of these were authored by myself and my colleague, Bruno Emond. The paper that was created for this conference was written by myself and, because we have page limitations, is a short summary of the results.
 
My motivations for this presentation: it was kind of slammed by the critics, also known as the reviewers, so I thought I'd put it into a bit of context.
 
First of all, I wanted to provide a description of the factors involved in my review of the study they were doing at DRDC. Now, this will be review for a lot of people. I recognize that, and so it's pretty basic and not original. That's okay; I can live with that. But when I talked about these results earlier, I found that some people in our community, the educational technology community, knew nothing about things like the technology acceptance model. I was a bit surprised. But you know, we assume that we're all from the same discipline, some kind of social sciences background. I'm not, actually; I'm from a philosophical background, and a lot of people don't have this scope of knowledge. So I wanted to present that.
 
I also always wonder how useful the studies in the literature about so-and-so's perception of such-and-such really are. The literature is filled with those, and I've always been a bit skeptical. So this informs my understanding of the thinking behind those studies.
 
And then finally, and this is kind of meta: to think about the limits of a cognitive approach to understanding things like technology adoption. I think of it as a folk psychology approach (that's a technical term, not a disparagement) to studying technology, and I would contrast it with a more eliminative approach, as offered for example by people like Paul Churchland and others, and that we see today in the development of neural networks, or large-database, probability-based, statistics-based complex systems, which are quite a contrast to this approach. I can't deal with that in detail in this presentation, but it's in my thinking in the background.
 
So let's look at acceptance models to begin with. There's a history here, and again, this may be familiar to many of you, but it might not be, and that's the whole point. There are two types of models that have been important over the years. One is adoption theory, which is focused on the choices that individuals make as to whether or not to use a particular technology. By contrast, we also have diffusion theory, which considers the spread of a technology across an organization. A lot of the time these two concepts are just blended as though they're one, but in fact they are two very different phenomena. We have to be careful not to confuse them.
 
The first of these, of course, is Rogers' innovation diffusion theory. He describes five stages of diffusion, and in retrospect they seem pretty obvious: awareness of the innovation, persuasion of its benefits, decision to adopt the innovation, implementation of that decision, and confirmation of the innovation process. But now, more than 60 years later, it seems very naive.
 
Similarly, we have Ajzen's theory of planned behavior; now we're only 40 years ago. It looks at what would cause changes of intention, and again, it's very folk-psychological: changes in the salience (the significance or relevance) of beliefs, new information, obviously, changes in confidence or commitments, personal development factors, individual decisions, their skills, their willpower if you will, their emotions, and external factors: time, opportunity, the influence of others. So this is a very individual-centered, very psychologically centered kind of approach. Does it explain technology adoption or diffusion? No, it does not.
 
So we get the widely used and well-known technology acceptance model (TAM). Now we're only 30 years ago. It considers attitudes rather than behavioral intentions; previously it was thought that adopting technology is a conscious decision we make, but now we're thinking of it as a matter of attitudes, or ways of looking at the technology. It identifies two main predictors of adoption behavior: the usefulness of the technology, and the ease of use of the technology. If you reflect on your own decisions with respect to adopting technology like PowerPoint, or the Internet, or ChatGPT, or whatever, these are two major factors, but not a complete explanation, right?
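
If you want to picture the model, TAM is usually drawn as a small path model. Here is a toy numerical sketch in Python; the chain of relations follows the usual TAM diagram, but the coefficients are invented purely for illustration, not estimates from any study:

```python
# A toy, linear rendering of the TAM chain:
# perceived ease of use -> perceived usefulness -> attitude -> intention.
# All coefficients below are invented for illustration only.
def tam_intention(usefulness: float, ease_of_use: float) -> float:
    attitude = 0.5 * usefulness + 0.3 * ease_of_use
    # In TAM, usefulness also feeds intention directly, not just via attitude.
    intention = 0.6 * attitude + 0.4 * usefulness
    return intention

# Two technologies rated on 1-7 scales: one useful but clunky,
# one easy but pointless. Usefulness dominates in this toy setup.
print(tam_intention(usefulness=6, ease_of_use=2))  # 4.56
print(tam_intention(usefulness=2, ease_of_use=6))  # 2.48
```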
 
Following from this, we also get the decomposed theory of planned behavior (I just love some of these titles), which combines this model and the TPB that I described before, depicting specific beliefs as decomposed into belief constructs. As an aside: are our beliefs really constructs? I don't really think so.
 
Another way of looking at it is Straub's (2009) concerns-based adoption model; now we're only 15 years ago. It includes three of what they call diagnostic components, three tests or surveys that are applied: the Stages of Concern, the Levels of Use, and the Innovation Configuration Maps. All this goes to show that technology adoption is a complex, inherently social, developmental process. Here we're just beginning to see the importance of social factors, rather than individual decision-making or psychological factors, in technology adoption and therefore diffusion.
 
Finally, we get what's called UTAUT, the unified theory of acceptance and use of technology (and there's also a UTAUT2). It gives us a whole list of factors, basically taking all of the previous studies, mashing them together, and saying that this great big complex explains, or purports to explain, why people adopt technology.
 
Here's UTAUT2. It adds three new constructs to the original UTAUT model: hedonic motivation, price value (which kind of makes sense), and habit, feeding into use behavior. So, if we think about this, we've got a large range of different factors influencing decisions.
 
But what prevents decisions? This takes us to the discussion of barriers to technology adoption. The technology is not available: that's pretty obvious. The reliability and complexity of the technology: nobody wants to use Outlook, it doesn't work. Faculty with poor self-efficacy may be reluctant to try technologies: that also makes sense. And there are those turning away from technology and influencing other people to turn away from technologies: the skeptic-in-the-room kind of phenomenon.
 
So, let's think about this overall. Technology acceptance is individual, yes, but it's influenced by social conditions: desires, beliefs, fears, etc., the social factors surrounding the individual. But it's also organizational. The most obvious factor is the technology available, but there is a whole range of organizational factors. All of this shows the difficulty of obtaining a structured description of technology acceptance.
 
Now, I've put on the side here the DRDC model. This is the model that my colleagues at DRDC came up with, and that I was evaluating for this project. They basically combined much of what we saw here to come up with a model of five sets of factors: technology, process, administration, environment, and faculty. We'll return to this.
 
So that leads us to risk management, because DRDC wanted to think of technology adoption from the perspective of risk. Now, risk management is a phenomenon that doesn't come up so much for individuals, although of course individuals may be risk-takers or risk-averse; it's more a corporate kind of thing. When they do corporate planning, they come up with risk management profiles. There's a history of this as well. Risk assessment models generally measure exposure to negative consequences. They may be qualitative or quantitative or a combination thereof, and they're usually subjective: they measure perceived risk or expected outcomes. Sometimes they can be objective, but that's hard, right? Because we're talking about the future.
 
Risk management begins with the Fine-Kinney method; Fine's paper comes out in '71, Kinney's in '76. It calculates a risk score based on scores for the probability of the event happening, the exposure to that event, and the consequences, that is, how bad the consequences would be. Each is weighted equally, and then you have all kinds of different theories after that which vary this weighting somewhat.
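
A minimal sketch of the Fine-Kinney calculation in Python; the equal-weighting product is the method itself, while the band thresholds follow commonly cited Fine-Kinney tables and vary slightly between sources:

```python
def fine_kinney_score(probability: float, exposure: float, consequence: float) -> float:
    """Fine-Kinney risk score: the plain product of the three ratings,
    each weighted equally (R = P * E * C)."""
    return probability * exposure * consequence

def risk_band(score: float) -> str:
    """Map a score to an action band. Thresholds here are the commonly
    cited ones; exact cutoffs differ between sources."""
    if score > 400: return "very high: stop the activity"
    if score > 200: return "high: immediate correction required"
    if score > 70:  return "substantial: correction needed"
    if score > 20:  return "possible: attention indicated"
    return "acceptable"

# Example: a fairly likely event (P=6), frequent exposure (E=6),
# moderate consequences (C=7) gives 252, a high risk.
score = fine_kinney_score(6, 6, 7)
print(score, risk_band(score))
```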
 
Later on, you have the analytic hierarchy process model, by Harker in '89. This is a method for weighing and combining multiple goals or outcomes, multiple criteria, together with a classification scheme for types of risk. So you see, on the one hand we have types of risks, and on the other hand we have a weighting of probabilities of risk, and again, things are getting complex.
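
As a rough illustration of how the analytic hierarchy process derives its weights (a minimal sketch, not tied to the DRDC study): you fill in a pairwise comparison matrix saying how much more important each criterion is than each other criterion, and the principal eigenvector of that matrix gives the weights. The comparison values below are hypothetical:

```python
import numpy as np

# Hypothetical pairwise comparisons for three risk criteria
# (probability, exposure, consequence). Entry [i, j] says how much
# more important criterion i is than criterion j on a 1-9 scale.
A = np.array([
    [1,   3,   5],
    [1/3, 1,   2],
    [1/5, 1/2, 1],
])

# Weights are the principal eigenvector, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()
print(weights)  # roughly [0.65, 0.23, 0.12] for this matrix

# The consistency index checks whether the comparisons cohere:
# values near 0 mean the judgments are nearly consistent.
n = A.shape[0]
ci = (eigvals.real.max() - n) / (n - 1)
print(ci)
```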
 
Ultimately we come up with what is widely used in the field today; we use it at NRC, and every organization I've ever seen uses it: the risk matrix. It basically combines the likelihood of a bad thing happening with how bad that thing is, to give you a rating on a green-yellow-red kind of scale, where red is the really serious kind of risk that you might be looking at, and green is, you know, kind of a risk, kind of not.
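
A minimal sketch of a five-by-five green-yellow-red matrix of the kind just described; the cell assignments here are my own assumption, since organizations tune these cutoffs to their appetite for risk:

```python
# Likelihood and severity are each rated 1 (low) to 5 (high);
# the cell colour is a coarse function of their product.
def risk_colour(likelihood: int, severity: int) -> str:
    score = likelihood * severity
    if score >= 15:
        return "red"      # serious: act now
    if score >= 6:
        return "yellow"   # monitor and mitigate
    return "green"        # accept and review periodically

print(risk_colour(2, 2))  # green
print(risk_colour(3, 3))  # yellow
print(risk_colour(4, 5))  # red
```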
 
So, what I found interesting when thinking about risk and risk management is that risk is self-referential. Just as an aside here: the DRDC study involved focus groups of people talking about the technology, why they would use it, why they wouldn't, and what they would consider the risks to be. And when you discuss what's the worst thing that could happen, your assessment of the risk actually begins to change. You might realize, for example, that it's not as bad as you thought it was, or conversely, you might think it's much worse than you thought it was; you didn't realize that until you talked about it.
 
The perception of risk varies a lot depending on point of view. And interestingly, because this is happening in an organizational context, risk as an individual construct is very different from risk as an organizational construct. You may not care about a risk that might be fatal for the organization; so what, you can get another job. Similarly, the organization might not care about the risk to you, just the risk to itself.

They're using surveys. How do we know these surveys actually measure anything of value? Well, that leads us to the subject of validity and reliability.
 
The definitive discussion of this is the AERA (American Educational Research Association) standards from 2014. But basically, reliability refers to the consistency of the measure: do you get the same results every time you do it? Validity refers to the accuracy of the measure: are you measuring the right thing? The picture is clearer than my description, as is usually the case.
 
So how do we test for these? One way of testing is content validity, which involves assessing whether the questions in the survey cover the entire range of issues or concepts being studied. Very important, obviously: it's easy to study one small factor out of a whole range of factors. There are different ways of testing for content validity: testing it against external criteria, for example, or looking at elements of content representativeness. You would talk to experts in the field and ask them, basically, are we covering everything that we need to cover?
 
There's also construct validity; this is important. The phrase that really jumps out at me is "unidimensionality and local independence" of the response options. Now, in our keynote yesterday, one of the surveys the speaker gave us (obviously not a scientific one, because he had us clap) was: how often do you use AI? Once a day, once a week, or once a month? Now, what's wrong with that? Well, what if you use it like I do, four times a week? That's not once a day, and it's not once a week, right? His survey failed the test of construct validity: it measured three options out of a much wider set of possibilities. It should have been something like: once a day or more, up to once a week, up to once a month, etc. So, those sorts of questions.
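
To make the point concrete, here is a hypothetical sketch that checks whether a set of frequency options actually covers every possible usage rate; the bins and numbers are mine, not the keynote's:

```python
# Usage expressed as approximate times per month. The keynote's three
# options, read literally, are single points on this scale and leave
# gaps: using AI four times a week (~17 times/month) matches none.
literal_options = {"once a day": 30, "once a week": 4, "once a month": 1}

# Exhaustive, mutually exclusive bins (lower bound inclusive,
# upper bound exclusive) close the gaps.
binned_options = {
    "less than once a month":      (0, 1),
    "once a month to once a week": (1, 4),
    "once a week to once a day":   (4, 30),
    "once a day or more":          (30, float("inf")),
}

def match(times_per_month, options):
    return [label for label, (lo, hi) in options.items()
            if lo <= times_per_month < hi]

print([k for k, v in literal_options.items() if v == 17])  # [] -- no fit
print(match(17, binned_options))  # exactly one bin fits
```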
 
Criterion validity: the extent to which the operationalization (got it!) of a construct, such as a test, relates to or predicts a theoretical representation of the construct. That's a bunch of fancy words for saying: does the thing you're measuring actually relate in some way to real-world conditions? The way we establish this is through a mechanism known as a logic model. The logic model takes the elements of your survey, what you're asking about, and maps them to the elements of reality. Again, if you don't have that, how do you know that what you're testing for is actually measuring anything in reality? Of course, the philosopher in me asks, how do you know what reality is? But it's a short talk.
 
Test-retest reliability: simply, can you get the same results over and over again if you do the same study again? Ideally, you shouldn't do it with the same people; sometimes you can't avoid that, so you do it with a similar group of people. But in some way you need to determine that your measure will work more than once.
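
A minimal sketch of how test-retest reliability is often quantified, as the correlation between two administrations of the same measure; the scores below are made up:

```python
import numpy as np

# Hypothetical scores from the same survey administered twice,
# some weeks apart, to the same (or a closely matched) group.
first_run  = np.array([4.0, 3.5, 5.0, 2.0, 4.5, 3.0])
second_run = np.array([4.2, 3.4, 4.8, 2.5, 4.4, 3.1])

# Pearson correlation between runs; values near 1.0 suggest the
# measure behaves consistently across administrations.
r = np.corrcoef(first_run, second_run)[0, 1]
print(round(r, 3))
```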
 
Finally, internal consistency: the extent to which the questions in the survey are measuring the same construct. Now, we can get fancy with that, but if your survey is asking on the one hand about perceptions and then in the next question about what's out there in the real world, or asking you to give your point of view while at the same time asking you to take a neutral point of view, you do not have internal consistency, right? The frame of representation, if you will, needs to be the same throughout the survey. You can construct a more complex frame, but you need to know what that frame is.
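
The usual "fancy" version of this is a statistic like Cronbach's alpha; a minimal sketch with made-up item responses:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point responses: 6 respondents x 4 items meant to
# measure the same construct. Values near 1 suggest the items hang
# together; low or negative values suggest they do not.
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])
print(round(cronbach_alpha(scores), 3))
```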
 
So, if we think about it, and here I'm stealing from Robert John Ackermann's book Data, Instruments, and Theory, there are three major elements of study validity, which may be the main elements of scientific investigation generally: data, instrument, and theory. You see, I'm stealing from this book.
 
With data, we have issues of provenance, completeness, etc., and data cleaning, which comes up a lot in AI. With the instrument: is a survey actually an instrument, or is it a tool of creativity? But also, what are the limitations on what we're studying, and what are the limitations on our instrument, whatever it is? We could talk about that a lot, but again, short talk.
 
And then there's the relation between theory and data. I'm sure you're all familiar with the concept of theory-laden data, from Larry Laudan originally: just how much is your theory influencing the data that you are representing? So, again, this is not in the actual paper that appears in the volume, but we now take all of these considerations and assess the study that DRDC was doing. DRDC came up with their model, which I referenced earlier, with these five major factors and, as you can see, many sub-factors. I'm not going to get into the details of those, because they're not that relevant, but that was the model.
 
So, they did an initial run of their study and came away with these takeaways: technology access and complexity were important; process management, support, and learner needs were important; the environment, the instructor's role, and, interestingly, legal concerns were important; and the training of stakeholders was important. These were the things they found were important.
 
So, I was asked to evaluate their method with respect to their results. One of the things they tried to do is, instead of looking at barriers as barriers, they looked at barriers as pathways. Why? Because they're an organization: you don't want to be negative and say "barriers", you want to be positive and say "pathways". It was an interesting thought, but it resulted in a lot of double negatives, and in one case, triple negatives.
 
So it was less focused on individual acceptance and more focused on acceptance as an institution. And I questioned whether that was a concept that was even coherent, right? Can you talk about institutional acceptance? I suppose you can, but not if you're asking about individual acceptance on the survey. Similarly, there were questions about the organizational versus individual perception of risk. What happened is that it was never clear in this survey whether respondents were talking about other people,
 
that is, do other people accept technology, or whether they were talking about themselves: do I accept technology? You get a lot of that kind of confusion in a lot of these studies. Some individual perceptions were influenced by respondents' own experience, and often influenced by behavior as modeled by others; but in the organizational context, the influence of others is basically abstracted out of existence. It doesn't exist anymore, because you can't talk about modeling by others in an organizational context. It doesn't make sense.
 
Similarly, the emphasis on role, which is how they approached this pathways model, basically tried to force individuals responding about technology acceptance to adopt what might be called a "view from nowhere", an objective stance. So, these are the sorts of confusions that can happen when you're not clear about all of these categories.
 
So, that's basically where it ends. For those of you who are familiar with these concepts, I hope you enjoyed the romp down memory lane, and for those of you who found them new, I hope you found them interesting. There, as promised, is the QR code for the slides. Thank you.
