Considerations on the Framework for Ethical Learning Technology
As readers may know, I've been looking a lot recently at ethics related to online learning. In particular, I've studied a number of ethical codes and frameworks, publishing a paper summarizing my work. Assuming we need to draft such documents at all, my preference is for something more informal, rather than a set of principles and rules; I've offered one such here, which forms a part of my contribution to a community of practice in my workplace.
So that brings me to the Association for Learning Technology (ALT) Framework for Ethical Learning Technology (FELT), discussed in this white paper. Similarly, ALT opts "to articulate a framework for ethical professional practice in Learning Technology, not a set of rigid rules that apply to particular tools, technologies or contexts." Its focus is "the broad range of communication, information and related technologies that are used to support learning, teaching and assessment."
The document is divided into four major areas: awareness, professionalism, care and community, and values. I'm going to look at each area in turn, considering each point as I go.
Awareness
Respect the autonomy and interests of different stakeholders
This recommendation combines two points I think are important. First, it recognizes that different people have different interests. It avoids the trap of supposing that 'we all want the same thing'. The pull of different interests is especially clear in the field of learning technology.
It also places the principle of autonomy front and centre. Now I don't believe that autonomy is the only principle worth defending; there are other considerations at play. But without autonomy, there's no real sense of society and community, and so I think this should be highlighted.
This principle, however, equivocates to the point of harm in two key ways.
First, it urges readers to 'respect' autonomy and these different interests. The word 'respect' here is vague, perhaps with intent. It is possible to 'respect' someone's autonomy while doing nothing to protect it. The word 'respect' is also, by contrast, too strong. It places the reader into the role of an arbiter who determines what will be valued and what not. I think therefore that the tone of the word sets the wrong context, placing the writer (and presumably the reader) above others.
I understand that the main point is to underline to readers that there are other people in the world and that they will have interests different from one's own. But the main point here should be to say that we don't have power over these other people, as opposed to saying (as the FELT principle does) that we should use this power judiciously.
Be mindful, reflective and reflexive
The term 'mindful' typically requires an object, for example, as in "be mindful of other people" or "be mindful of the ice on the sidewalk". I feel like the authors have a particular object in mind (presumably, the interests of stakeholders?) but it's hard to say.
The term 'reflective' connotes advice to 'think before you act', which is typically good advice. Being 'reflective' is also of value in learning, the idea being that in reflection we can draw lessons from our experiences.
The term 'reflexive' does the most work in this section. To be 'reflexive' is to treat oneself as the object. In the current context, I think, the advice means to consider how you would feel were you the subject of your own actions or words. In a sense, then, it's a concise statement of the Golden Rule.
I think that the three items taken together mean something like "be considerate of other people", which is probably what it should say. Being mindful, reflective and reflexive are processes, but what really matters here, I think, is the result.
Recognise the limits of one's own knowledge and the possibility of unconscious bias.
There are many ways to be wrong - I once wrote the book on logical fallacies - and bias, unconscious or otherwise, is just one of them. So while I recognize that unconscious bias is the fallacy du jour, I think that a much wider scope is needed here.
And indeed, it's not so much about recognizing the limits of one's own knowledge. This in itself does almost nothing. Rather, the real concern here is to avoid error through incorrect or fallacious reasoning. There could be - nay, even should be - a lot packed into this. It should ultimately be a recommendation to adhere to the precepts of science and reason in your work.
We could talk about what that means (there's a not-so-small set of issues around the imperialism of reason and various ways of knowing) but there needs to be a recognition that we do not have carte blanche to say and do whatever we want and to offer whatever advice we fancy.
Overall, I think the intent here is good; readers are being advised to pay attention to the world and especially to other people, to understand that what we say and do has an impact on them, to treat them with respect and consideration, and to use experience and reason to guide our thoughts and actions.
Professionalism
The idea of professionalism generally is that professionals are held to higher standards of integrity and competence and are held to account for that by other people in the profession. But there is also a sense of elitism in the professions (as compared to, say, the trades) and in the idea of having an association (as compared to, say, a 'union'). And yet also, there's the sense that 'professionalism' is a privilege that has historically been awarded to upper-class white men, and that professionalism in other professions should be seen as a leveling of status.
Demonstrate accountable, evidence-led practice.
This point contains another one of the many odd word choices. Why is the word 'demonstrate' used here, as though that's the main thing? It points to the difference between 'professionalism' as a sort of status as opposed to 'professionalism' as a standard of practice, and it emphasizes the former, while I would emphasize the latter.
Thus, for example, I would tell people simply: be accountable.
There's also the odd phrasing of 'evidence-led practice'. It's a bit of jargon, I think, and it suggests that learning technologists should be guided by research. Now I am the last to disagree with the importance of research. Without evidence, what we say amounts to no more than mere speculation. But there are two major issues with this recommendation.
First, evidence doesn't lead. As David Hume would say, 'you cannot derive an ought from an is'. True, he didn't say that exactly, and it's not completely true. But letting the evidence 'lead' is to abdicate our role as ethical agents. What evidence tells us is what's possible and what's not possible, not what's better and what's worse. We make that all-important judgement - or, more accurately, each one of us does, acting autonomously.
Second, not all evidence is equal. You can see this in a lot of educational research. Consider, for example, the observation that practice A increases PISA test scores on average. No small number of researchers say that the evidence should lead us to favour practice A on that basis. But if what we value in education is (say) self-reliance and autonomy, then PISA is a very poor measure to use.
There's a saying, σῴζειν τὰ φαινόμενα - to save the phenomena. I used to have it on my door in the philosophy department at the University of Alberta. What it means, essentially, is that we are bound by the limits of experience. We are responsible to the evidence, and we can't just say things without evidence. That's good practice.
Commit to ongoing professional development and enhancing your skills.
I think we should do more than merely 'commit' to doing it. We should actually do it (again, I have this sense here that how we are perceived seems to matter more in this document than what we do).
And I think that learning ought to mean much more than 'ongoing professional development and enhancing your skills'. There's an inherent conservatism in this phrasing, as though the question of 'what ought to be learned' has been answered, and that we as professionals ought to continue along that previously established path.
I think people - professionals especially, who don't get out much - should seek to expand their skills. Take up a trade. Do hard things. The best things I've ever learned to do - ranging from cycling to photography to learning French - have nothing to do with being a technologist, but they have made me a better one.
I think that this is especially the case if we are learning technologists. It's important to understand what it's like to learn (that's why I've been doing an ongoing video series, Stephen Follows Instructions). There's a phrase in technology development: "eating your own dogfood". Too few people in learning technology use the tools they develop in order to learn.
Act with integrity and honesty.
I don't know what it means to 'act with honesty'. I would just say: be honest. I would probably say more than that, because usually being honest isn't enough.
Acting with integrity is important, but as a professional principle, it ought to say more. What does it mean to act with integrity as a learning technologist? Most professional codes speak of things like conflict of interests and putting the client's interests above one's own.
Ensure practice complies with relevant laws and institutional policies.
This principle reads like the university administration wrote it. It certainly works in their favour. But it's wrong.
So here's the question: if 'relevant laws and institutional policies' conflict with ethical learning technology, which one wins?
One of the things that defines a professional is that they would prefer to suffer the sanction of their government or their employer rather than violate their ethics. All else being equal, sure, you follow the law and institutional policies. Everybody is supposed to do that.
In learning technology, we encounter unethical demands from governments and employers a lot. Should we comply if asked to develop a student surveillance system? Should we comply if asked to provide a 'back door' into private student records?
When I've been asked (and I have been asked) to provide lists of course registrations, or to refrain from saying something important in a talk, my response has simply been: no.
Apply knowledge and research to advocate for and enhance ethical approaches.
Once again, the wording falls into jargon, which distorts the meaning. Rather than 'enhancing' ethical approaches, we should be advancing ethical approaches. That is, we should be correcting the unethical, rather than merely continuing the ethical (the word 'enhance' is one of those words that allows you to pretend that nothing is going wrong; even when the walls are falling down around you, you can say "we should enhance support for the roof").
Having said that, many statements of ethics include a provision for advocacy, to advance the interests and ethics of the profession. It has a lot to do with the self-policing nature of professions, and it has a lot to do with extending the values of a profession in the wider community.
But I'm not a fan. I think that people overstate the value of advocacy, and understate the value of practice. To me, advocacy has near zero value, while practice is everything. To me, it doesn't matter what the medical code of ethics says if Dr. So-and-so performs unsanctioned medical experiments. To me, the actions of organizations like Médecins Sans Frontières speak much more loudly than the Hippocratic Oath.
Care and Community
The concept of 'care' has recently become popular in the field of ethics. The term has a specific meaning, and I think it would have been more helpful had this meaning been stated more clearly in FELT. I've been wrestling with the concept of care for the last few years, and I'm still not sure I have a handle on it. That makes an undefined reference to 'care' a bit tendentious, in my view.
For example, here's one view: "In tort law, a duty of care is a legal obligation which is imposed on an individual requiring adherence to a standard of reasonable care while performing any acts that could foreseeably harm others." For example, in business, "This duty—which is both ethical and legal—requires them to make decisions in good faith and in a reasonably prudent manner."
But that's not the sense being used here, I think. Rather, the concept of care is being derived from the ethics of care, which "involves maintaining the world of, and meeting the needs of, ourself and others. It builds on the motivation to care for those who are dependent and vulnerable, and it is inspired by both memories of being cared for and the idealizations of self."
There's a lot more that could be written here, but it is sufficient for now to be clear about the ambiguity.
Practice care of oneself and others
Nel Noddings would say that relationships are foundational to morality (as opposed to, say, sterile (and masculine) statements of moral principles). "The one-caring acts in response to a perceived need on the part of the cared-for. The act is motivated by an apprehension of the cared-for’s reality, where the one-caring feels and senses what the cared-for is experiencing and initiates a commitment to help."
The question I would ask, as a learning technologist, is whether that is the relationship I want to have with the people I work for and the people I work with. Indeed, it is relevant in this context to ask whether I'm even capable of such a relationship. To me it's not clear.
Where I agree with the philosophy of care is in its scepticism about rules and principles. But where I disagree is with the sense of innateness and inevitability it proposes, as though the morality of caregivers is the sole and universal ethical standard. I simply don't experience ethics that way. Maybe I could (though some would argue it's impossible for me).
But there's a long discussion to be had here.
Promote collegiality and mutual understanding
Again, we have a recommendation here that focuses on appearance and advocacy more than on action and being. If I were to make such a recommendation, I would say something like 'be collegial', not 'promote collegiality'.
But I'm not a fan of 'collegiality', at least, in some senses of it. Wikipedia informs us "Colleagues are those explicitly united in a common purpose and respecting each other's abilities to work toward that purpose. A colleague is an associate in a profession or in a civil or ecclesiastical office. Collegiality can connote respect for another's commitment to the common purpose and ability to work toward it."
I don't think we're working for a common purpose. If we were, fewer people would disagree with me about things. So when I hear recommendations to 'be collegial' what I read is an urging to subsume my own interest under someone else's (usually the department admin or Director), or even worse, to overlook and refrain from criticism of colleagues in order to further the interests of the department.
Now there's a sense in which being 'collegial' means treating other people with respect and courtesy, and I would certainly agree with that. But I don't think that's the sense intended here. I think the authors are trying to subsume under considerations of courtesy and respect some sense of commonality and shared purpose, which I oppose.
Ethics isn't about everyone believing and working toward the same thing, and when it becomes that, it has been subverted.
Minimise the risk of harms.
Also (and perhaps more directly), minimize harms. And even more importantly (and more directly), don't cause harm.
Recognise responsibilities and influence beyond your institution.
What's good about this recommendation is that it unambiguously speaks against what might be called 'institutional relativism', that is, the idea that what's good for the institution is what's good. Too often people working in institutions - including, but not limited to, colleges and universities - speak as though what really matters in any situation is the ongoing health of the institution.
But of course that's not true. We all have lives outside our institution, and our institution operates within the context of a wider society.
And I think that the institution, much more than any person working in it, has responsibilities and influence in wider society. Institutions that present themselves in society as only being interested in their own welfare are pathological, in my view. They seek ultimately to weaken the society that gives them life. The institution may not be (directly) answerable to wider society, but it certainly has responsibilities toward it.
I've spoken about this a lot over the years. For example, I've talked about how institutions should be looking toward supporting the needs of people who are not students, rather than focusing on that elite minority that makes their way inside the walls.
I think this is less clearly the case for individuals (though the phrasing of this recommendation suggests that it is the individual who must recognize responsibilities and influence beyond the institution). And the way it's phrased makes me feel as though what they're saying is that we need to be thinking about how the institution is reflected or perceived in wider society. This doesn't strike me as a principle of ethical value at all.
Share and disseminate best practices.
It is arguable - and I've heard it argued over the years - that there are no 'best practices'. What works well in one context might be a disaster in other contexts. That's why schools of management rely on things like case studies. There is perhaps an ineffable set of principles that an experienced practitioner might embody after considering enough cases, but this is nothing that could be written down or shared.
I'm inclined to agree with that view.
And as I think about it, if we're going to have a principle that talks about what could be shared, there are so many other things that we should be thinking about, such as stories and experiences, resources and services, facilities, standards, technologies, and more. Things that are practical, can be used as tools, and are more about providing support and less about creating conformity.
And credit. We should share credit. Most of what happens in learning technology happens as a result of a large number of people creating a large number of things, talking about them, and exchanging ideas. We too often seek to assign credit to one individual or another (or, worse, claim credit on behalf of some institution or another).
Support the agency and development of learners.
Believe it or not, I'm actually pretty happy with this one. It assigns the active role to learners, and rather than position us as doing something for them (as though they were dependent) we offer our services and expertise in support. It isn't about 'respecting the autonomy and interests' of students, it is instead about taking that as a fact over which we have, and should have, no control.
Promote fair and equitable treatment, enhancing access to learning.
This principle manages to combine the faults I've described above in several previous principles.
It allows us to 'promote' fair and equitable treatment, and thus gain the reputation, while all the while never actually doing anything that is actually fair and equitable. We (and our institutions) should be fair and equitable. Our processes, standards and practices should be fair and equitable. (And we should have some sense of what 'fair and equitable' means in this context; some people think it means giving private companies as many rights as governments).
The recommendation also treats learners as objects to which we apply 'treatment', as though they were patients or dependents. It creates an 'us-and-them', with 'us' in the superior position.
And it lets us pretend there are no issues with access to learning, because we're focused on enhancing rather than 'creating' or 'increasing' or some other thing that would require our institutions to take a hard look at who they serve.
It also treats 'learning' as an object, when it should really talk about opportunities, resources, supports, and other tangibles; the learning is created by the learners themselves, and isn't in some way 'acquired'.
Develop learning environments that are inclusive and supportive.
I would question whether it is the task of learning technologists to 'develop learning environments'. Certainly, I spend some of my time doing that, but I am far more interested in developing the aforementioned processes, standards and practices, opportunities, resources and supports.
Why not just say, "Develop inclusive technology?" And perhaps talk a bit about what 'inclusive' means.
Celebrate diversity as a route to innovation.
This is terrible phrasing. It allows us to infer that, if it weren't a route to innovation, there would be no reason to celebrate diversity. Moreover, it celebrates 'innovation' as somehow an even higher value than many of the other values discussed in this document.
In my own work, diversity is one of the four 'semantic conditions'; autonomy, openness and interactivity are the others. I call them the 'semantic conditions' because they are the locus of meaning and value in a network. Without them, the network is not only incapable of learning, it is incapable of life itself. This is as true in individuals as it is in society.
Now perhaps not everyone agrees with me on this point. But surely we can think of a greater reason to embrace diversity than as 'a route to innovation'.
Reading a recommendation phrased this way makes me feel terribly, terribly disappointed.
Design services, technologies to be widely accessible.
I would have simply said: design accessible services and technologies.
I would have said it this way because there is a commonly understood concept of 'accessible' in our domain that does not require elaboration. Indeed, by saying 'widely accessible', the connotation is that the statement doesn't actually mean 'accessible' as understood by the community at large. The statement may as well have said 'mostly accessible'. Or perhaps 'somewhat accessible'. It would have had the same force and intent.
If more specificity were required, it could elaborate on the ways learning can be more accessible, so the reader is clear that we don't just mean technically accessible. In a presentation today I saw a slide discussing universal design for learning (UDL) that did exactly this.
Be accountable and prepared to explain decision-making.
This is better than 'demonstrating' accountability, as suggested above. But the recommendation that we 'be prepared to explain decision-making' is narrow, incomplete, and in some respects incorrect.
It's narrow in the sense that there's much more to accountability than decision-making. Sure, there are choices we make - from which text to use to which students to fail - where we very clearly make decisions. But our work consists of much more than selecting choices from a menu. And being accountable should apply to all of our work.
It's incomplete in the sense that it depicts our work as exercising authority over learners and therefore needing to justify how we manage that authority. It's not clear to me that we should position ourselves as authorities at all, and certainly, we should see ourselves as coauthors of whatever a student does, and not merely arbiters.
And it's incorrect in the sense that not everything can be explained. We see this phenomenon in artificial intelligence, where a system 'recognizes' a certain state of affairs, but where this recognition is not reducible to a set of facts that can be stated in language (or, rather, such a statement would be so long as to be meaningless). Sometimes, what people need, rather than an explanation per se, is a statement of what could have been done instead to produce a different outcome.
It's like a car crash on a busy highway at a poorly designed intersection: saying "if you had better brakes you could have avoided the accident" is useful, where the full explanation (the roads, the other drivers, the weather, etc.) is not.
Be as open and transparent as is appropriate.
Of course, I would have said 'as is possible', but that is the phrasing I would choose only if I felt that openness and transparency are values in their own right.
But as this statement clearly demonstrates, there are some other priorities the authors have in mind, unnamed priorities, so we can't even consider balancing, say, the need for privacy (which goes completely unmentioned in this document) with openness. Usually, when I read 'as is appropriate' what I infer is "as is approved by administration, such that the reputation of the institution is unaffected" or some such thing.
I think that it's fair to say that openness and transparency are not absolutes. Neither are autonomy, interactivity and diversity. But understanding what values are at play here is key.