Ethical Codes and Learning Analytics
Presented at EDEN 2020.
Abstract
The growth and development of learning
analytics has placed a range of new capacities into the hands of educational
institutions. At the same time, this increased capacity has raised a range of
ethical issues. A common approach to address these issues is to develop an
ethical code of conduct for practitioners. Such codes of conduct are drawn from
similar codes in other disciplines. Some authors assert that there are
fundamental tenets common to all such codes. This paper consists of an analysis
of ethical codes from other disciplines. It argues that while there is some
overlap, there is no set of principles common to all disciplines. The ethics of
learning analytics will therefore need to be developed on criteria specific to
education. We conclude with some ideas about how this ethic will be determined
and what it may look like.
Keywords:
Learning analytics, ethical codes, learning
technology, education, research ethics
Introduction
What distinguishes ethical codes from
other forms of ethics generally is that while they may assign duties and
responsibilities, these are assumed voluntarily by virtue of being a member of
the profession. To become a nurse is, for example, to adopt as a personal code
the ethical norms and values that define that particular profession.
The purpose of this chapter is to
showcase the wide range of ethical codes that are employed in different
professions, some of which are directly related to the use of analytics in that
profession, and others which describe ethics in the profession generally. This
diversity is not widely recognized; there is often a presumption, if not an
explicit assertion, that the values in these ethical codes, and in ethics
generally, are common, core, and universal.
This statement from Metcalf (2014) is
typical: “There are several principles that can be found at the core of
contemporary ethics codes across many domains:
· respect for persons (autonomy, privacy, informed consent),
· balancing of risk to individuals with benefit to society,
· careful selection of participants,
· independent review of research proposals,
· self-regulating communities of professionals,
· funding dependent on adherence to ethical standards.”
Whether or not one actually believes
these principles are foundational, it remains a matter of empirical fact that
they are not universal and not core. The same can be said for similar
assertions of universality made elsewhere (for example: Pitofsky (1998:7),
Singer & Vinson (2002), CPA (2017)).
This paper is a substantial survey of
dozens of ethical codes. Though every attempt has been made to keep this treatment
brief, it is nonetheless not brief. By laying out the evidence I endeavour to
show, rather than argue, that there is no common foundation to the ethical codes
that govern different professions.
We’ll begin with a quick overview of what
we mean by ethical codes, discussing the purpose and operation of ethical
codes, some of the components of ethical codes, and the ways in which these
codes differ from each other. Then we’ll take an extended look at the issues
raised by the codes. First we look at what problems the codes are trying to
solve, or in other words, what the purpose was for writing the codes. Then we
look at a lengthy list of values and priorities revealed in the codes. After
this examination, we consider the question, to whom are the professionals
described in the codes obligated? Finally, we ask what bases and foundations
underlie the recommendations in the codes.
The full set of ethical codes is displayed,
with readers invited to notice the ways in which they differ from each other,
in Appendix 1: An Ethical Codes Reader, with references linking back to the
full code in question, for further study as desired by the reader.
Standards of Conduct
Why Ethical Codes?
The need for professional ethics
encompasses a number of factors. There is the need to be able to trust a person
in a position of trust. There is the need to make good decisions and to do the
right thing. And then there are various intangibles. The Project Management
Institute (PMI, 2020) states, “Ethics is about making the best possible
decisions concerning people, resources and the environment. Ethical choices
diminish risk, advance positive results, increase trust, determine long term
success and build reputations. Leadership is absolutely dependent on ethical
choices.”
But these are not the only reasons
advanced to justify professional ethics. There is the concern that without a
statement of ethics, unethical conduct will abound. “The absence of a formal
code could be seen almost as a guarantee that if such cases did exist they
would be swept under the carpet, left to others (probably the law) to sort
out,” writes Sturges (2003).
Others are less concerned about good
behaviour per se than they are about the bottom line. Alankar Karpe (2015), for
example, writes in ‘Being Ethical is Profitable’ that “Shortcuts and sleazy
behavior sometimes pay handsomely, but only for the short term. Organizations
must remember that any benefits from lying, cheating, and stealing usually come
at the expense of their reputation, brand image, and shareholders.” And, as he
notes, “There is one and only one social responsibility of business – to use
it[s] resources and engage in activities designed to increase its profits so
long as it stays within the rules of the game, which is to say, engages in open
and free competition.”
Additionally, there are services and
institutions that require professional ethics in order to function. For
example, the CFA Institute (2017) states, “ethical conduct is vital to the
ongoing viability of the capital markets.” It notes, “compliance with
regulation alone is insufficient to fully earn investor trust. Individuals and
firms must develop a ‘culture of integrity’ that permeates all levels of
operations.” Indeed, it is arguable that society as a whole could not function
without professional ethics. Thus, the “CFA Institute recently added the
concept ‘for the ultimate benefit of society’ to its mission.”
Certain disciplines see ethical codes as
essential to being recognized as a profession. Hence, for example, for
librarians, “Keith Lawry set the idea of a code in a particularly positive view
of the professionalization process in British librarianship. He linked the
Library Association’s possession of a code of professional conduct with the
potential for statutory recognition of the association’s control of who might
and who might not practise librarianship” (Sturges, 2003).
Finally, practitioners need them. As
Rumman Chowdhury, Accenture’s Responsible AI Lead, said, “I’ve
seen many ‘ethics codes’ focused on AI, and while many of them are very good
they’re more directional than prescriptive – more in the spirit of the
Hippocratic Oath that doctors are expected to live by. Meanwhile, many data
scientists are hungry for something more specific and technical. That’s what we
need to be moving toward” (De Bruijn, et.al., 2019).
Ethical Codes As Standards of Conduct
While ethics commonly applies to people
in general, there is a specific class of ethics that applies to people by
virtue of their membership in a professional group. There are different
approaches, but in general, “professional ethics are principles that govern the
behaviour of a person or group in a business environment. Like values,
professional ethics provide rules on how a person should act towards other
people and institutions in such an environment” (Government of New Zealand,
2018).
Professional ethics can be characterized
as imposing a higher standard of conduct. The reasons for this vary, but (as we
discuss below) a higher standard is demanded because professionals are in
positions of power, they have people in their care, and they are expected to
have special competencies and responsibilities. Additionally, professional
ethics may require that practitioners put the interests of others ahead of
their own. This may include duties not only to those in one’s care, but also to
clients, organizations, or even intangibles like ‘the Constitution’ or ‘the
public good’.
As such, professional ethics are often
expressed in terms of codes of conduct (indeed, it is hard to find a sense of
professional ethics where such a code is not employed). Though the code is
normative (“breaches of a code of conduct usually do carry a professional disciplinary
consequence” (Ibid.)) usually the intent of the code is to remind professionals
of their duty and prompt them regarding specific obligations.
Ethical Codes as Requirements
In the world of software engineering, in
addition to ethical standards as codes of conduct, ethical codes can be seen as
defining requirements. This is proposed, for example, by Guizzardi, et.al.
(2020). They write, “Ethical requirements are requirements for AI systems
derived from ethical principles or ethical codes (norms). They are akin to
Legal Requirements, i.e., requirements derived from laws and regulations.”
Ethical requirements are drawn from stakeholders in the form of principles and
codes. From these, specific requirement statements are derived. “For example,
from the Principle of Autonomy one may derive “Respect for a person’s privacy”,
and from that an ethical requirement “Take a photo of someone only after her
consent” (Ibid: 252).
An important distinction between the idea
of ethical codes as standards of conduct and ethical codes as requirements is
that in the former case, the AI is treated as an ethical agent that can reason and
act on the basis of ethical principle, while in the latter case, the AI is not.
“Rather, they are software systems that have the functionality and qualities to
meet ethical requirements, in addition to other requirements they are meant to
fulfill” (Ibid: 252).
As Opposed to Legal Requirements
We stated above that ‘ethics is not the
same as the law’. This is a case where that principle applies. What we are
interested in here is the sense of an ethical code as a principle of ethics,
not as a legal document. It reflects the fact that a person chooses a
profession for themselves, and thereby voluntarily enters into a set of
obligations characterized by that profession. “Professions must be ‘professed’
(that is, declared or claimed)” (Davis, 2010:232).
Thus we may say that ethics may be
influenced by, but are distinct from, the following (all from Government of New
Zealand, 2018):
· Fiduciary duties - fiduciary duties are “special obligations between one party, often with power or the ability to exercise discretion that impacts on the other party, who may be vulnerable” (Wagner Sidlofsky, 2020). Examples of fiduciary relations include those between lawyer and client, trustee and beneficiary, director and company, power of attorney and beneficiary, and accountant and client.
· Contractual obligations - these require the professional to perform the terms of the contract, and “includes a duty to act with diligence, due care and skill, and also implies obligations such as confidentiality and honesty” (New Zealand, 2018).
· Other laws - for example, in New Zealand this could include the Consumer Guarantees Act 1993.
What distinguishes legal requirements,
arguably, from ethical principles is the element of choice. In the case of legal requirements, the law
compels you to behave in a certain way, with increasing penalties for
non-compliance. In an important sense, it doesn’t matter whether the law or the
principle in question is ethical or not. You are penalized if you do not
comply.
It may be argued that the relation
between ethics and law is such that in a treatment of the ethics of learning
analytics we ought also to be concerned with the law in relation to learning
analytics. We will see this come up in two ways: first, in the argument that
‘obeying the law’ is part of the ethical responsibility of a practitioner, and
second, in the argument that the law regarding learning analytics is or ought
to be informed by ethical principles.
Principles and Values
“Values are general moral obligations
while principles are the ethical conditions or behaviors we expect” (Gilman,
2005: 10). Values and principles are connected. As Terry Cooper (1998:12)
explains, “An ethical principle is a statement concerning the conduct or state
of being that is required for the fulfillment of a value; it explicitly links a
value with a general mode of action.” For example, we may state that we value
‘justice’, but we would need a principle like “treat equals equally and
unequals unequally” to explain what we mean by ‘justice’.
All ethics codes encompass both
principles and values, though (as we shall see below) usually more implicitly
than explicitly. Values (such as honesty and trustworthiness) are often assumed
tacitly, as not needing to be stated. Sometimes they are expressed in a
preamble to the code, not as an explicit list, but rather in the sense of
establishing a context. For example, the Canadian Code of Public Service
ethical code has a preamble describing the role of the public service, as well
as a listing of the fundamental values (TBS, 2011).
The Value of Professional Codes
Codes of professional ethics or conduct
are widely used. They bring a utilitarian value to the conversation. They
provide a framework for professionals carrying out their responsibilities. They
clearly articulate unacceptable conduct. And they provide a vision toward which
a professional may be striving (Gilman, 2005: 5). Having a code, it is argued,
is key to the prevention of unacceptable conduct. That’s why, for example, the
United Nations Convention Against Corruption included a public service code of conduct
as an essential element in corruption prevention, says Gilman (Ibid). Yet the
convention is an interesting example: there is no code of conduct for the
private sector. Why?
At the same time, it is argued that
“Codes are not designed for ‘bad’ people, but for the persons who want to act
ethically” (Ibid: 7). That is, they provide guidance for a person who wants to
act ethically, but who may not know what is right. Therefore, codes are
preventative only in the sense that they prevent conduct that is accidentally
unacceptable. They may seem to be unnecessary in the case of a well-developed
profession and body of professionals, but in a new environment, such as data
analytics in education, there is much that is not yet clearly and widely
understood.
Moreover, argues Gilman, a code of ethics
will change the behaviour of bad actors, even if it does not incline them
toward good. “When everyone clearly knows the ethical standards of an
organization they are more likely to recognize wrongdoing; and do something
about it. Second, miscreants are often
hesitant to commit an unethical act if they believe that everyone else around
them knows it is wrong. And, finally
corrupt individuals believe that they are more likely to get caught in
environments that emphasize ethical behavior.” (Ibid: 8)
Study of Ethical Codes
More than 70 ethical codes were studied
as a part of this review. The selection methodology undertaken was designed to
encourage as wide a range of ethical codes as possible. To begin, ethical codes
referenced in relevant metastudies (such as ) were evaluated. Codes referenced
by these ethical codes were studied, to establish a history of code development
within a discipline. Documents from relevant disciplinary associations were
studied, to find more ethical codes. The selection of ethical codes includes
the following major disciplinary groups (and the number of individual codes
studied).
· Professional ethics – broad-based ethical codes (4)
· Academic ethics – codes of conduct for professors and staff in traditional academic institutions (3)
· Teacher ethics – codes governing teachers and the teaching profession (7)
· Ethics for librarians and information workers – ethics of information management (2)
· Public service ethics – codes of conduct for government employees (2)
· Research ethics – includes international declarations and government policy (6)
· Health care ethics – including codes for doctors and nurses (6)
· Ethics in social science research – research ethics (1)
· Data ethics – government and industry declarations on the use of study and survey data (7)
· Market research ethics – codes describing the ethical use of data in advertising and market studies (2)
· Journalism ethics – codes of conduct governing the use of public information by journalists (3)
· Ethics for IT professionals – system administration and software development ethics (3)
· Data research ethics – related specifically to the use of data in research (1)
· Ethics for artificial intelligence – government, industry and academic codes (15)
· Information and privacy – principles specifically addressing individual rights (1)
· Ethics in educational research – policies governing educational researchers specifically (3)
· Ethics in learning analytics – government, academic and industry guidelines and codes (7)
How the Codes Differ
Metcalf (2014) identifies a number of the
reasons ethical codes vary across professions, and even within professions
(quotes in the list below are all from Metcalf):
· Motivation: The events that prompt the development of ethical codes; for example, “in biomedicine, ethics codes and policies have tended to follow scandals” while by contrast “major policies in computing ethics have presaged many of the issues that are now experienced as more urgent in the context of big data.”
· Purpose: “Analyses of ethics codes note a wide range of purposes for ethics codes (Frankel, 1998; Gaumintz and Lere, 2002; Kaptein and Wempe, 1998).”
· Interests: “Frankel (1989) notes that all ethics codes serve multiple interests and therefore have multiple, sometimes conflicting, dimensions. He offers a taxonomy of aspirational, educational, and regulatory codes.”
· Burden: who does the ethical code apply to? Metcalf notes that “greater burdens are placed on individual members to carry out the profession’s ethical agenda,” but different burdens may fall on different groups of people.
· Enforcement: “Organizations, institutions and communities tend to develop methods of enforcement that reflect their mission.”
Each code of ethics was subjected to an
analysis that includes the following criteria:
· What ethical issues is it attempting to address (for example, is it focused on malpractice, on conflict of interest, on violation of individual rights, etc.)?
· What are its core values or highest priorities (as opposed to the detailed specification of ethical principles described, as defined by Cooper (1998:12), Gilman (2005: 10))?
· Which ethical issues from the learning analytics literature do they address?
· Who is governed, and to whom are they obligated? (e.g., AITP (2017) lists six separate groups to which information professionals have obligations).
· What is the basis (if any) for the statement of ethical values and principles? (For example, the Royal Society’s recommendations are based in a “public consultation” (Drew, 2018), while numerous other statements are based in principles such as ‘fairness’ and ‘do no harm’.)
Applications of Learning Analytics
Analytics is thought
of generally as “the science of examining data to draw conclusions and, when
used in decision making, to present paths or courses of action.” (Picciano,
2012). This includes not only the collection of the data but also the methods
of preparation and examination employed, and the application of the data in
decision-making. Thus the term ‘analytics’ can be thought of as the overall
process of “developing actionable insights through problem definition and the
application of statistical models and analysis against existing and/or
simulated future data” (Cooper, 2012).
The focus of this
paper is the use of analytics as applied to learning and education (typically
called ‘learning analytics’). Learning analytics is typically defined in terms
of its objective, which is to improve the chance of student success (Gasevic,
Dawson & Siemens, 2015). Accordingly, when founding the Society for
Learning Analytics (SoLAR) George Siemens defined learning analytics as “the
measurement, collection, analysis and reporting of data about learners and
their contexts, for purposes of understanding and optimizing learning and the
environments in which it occurs”
(Siemens, 2012).
We apply a broad
definition of learning analytics. A wider definition not only avoids the
difficulties of establishing a more narrow definition, but also ensures we do
not disregard potential ethical implications simply because the practice is
‘outside the scope of learning analytics’. Arguing for a broader definition of
analytics necessarily leads us to consider including artificial intelligence
(AI) in the conversation. However you define the terms, artificial intelligence
plays a significant role in analytics, and vice versa, so we will treat them
together as one thing (Adobe Experience Cloud Team, 2018). If a distinction is
necessary during the course of the discussion, we will apply it.
Potential applications of learning
analytics are based on what analytics can do and how they work. Modern
analytics is based mostly in machine learning and neural networks, and these in
turn provide algorithms for pattern recognition, regression, and clustering.
Built on these basic capabilities are four widely-used categories (Brodsky,
et.al., 2015; Boyer and Bonnin, 2017), to which we add a fifth and a sixth
category, generative analytics and deontic analytics:
· descriptive analytics, answering the question “what happened?”;
· diagnostic analytics, answering the question “why did it happen?”;
· predictive analytics, answering the question “what will happen?”;
· prescriptive analytics, answering the question “how can we make it happen?”;
· generative analytics, which use data to create new things; and
· deontic analytics, answering the question “what should happen?”.
Within each of these categories we can
locate the various applications that fall under the heading ‘learning
analytics’.
Descriptive Analytics
Descriptive
analytics include analytics focused on description, detection and reporting,
including mechanisms to pull data from multiple sources, filter it, and combine
it. The output of descriptive analytics includes visualizations such as pie
charts, tables, bar charts or line graphs. Descriptive analytics can be used to
define key metrics, identify data needs, define data management practices,
prepare data for analysis, and present data to a viewer. (Vesset, 2018). Tracking
is an important part of descriptive analytics. The purpose of tracking is to
measure system performance and institutional compliance. Relative costs and
benefits are compared to find the most cost-effective solution (Ware, et.al.,
1973: 9).
Higher education
institutions also use descriptive analytics to construct student profiles. A
person’s learning activities, for example, can be graphed and displayed in
comparison with other learners. This analysis can contain fine-grained detail, for
example, attention metadata (Duval, 2011). Today, a standardized format, the
Experience API, is used to collect and store activity data in a Learning Record
Store (LRS) (Corbí and Solans, 2014; Kevan and Ryan, 2016). These support
dashboards such as LAViEW (Learning Analytics Visualizations & Evidence
Widgets) that helps learners analyze learning logs and provide evidence of
learning. (Majumdar, et.al., 2019) Similar functionality is also provided by
IMS Global’s Caliper learning analytics framework (Oakleaf, et.al., 2017).
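As a minimal illustration of this descriptive layer, the sketch below computes simple activity metrics from a handful of xAPI-style statements. The records, field values, and metric functions are invented for illustration; real Learning Record Store output is far richer.

```python
from collections import Counter

# Simplified xAPI-style activity statements, as might be pulled from a
# Learning Record Store (LRS). The field names follow the xAPI pattern
# (actor, verb, object) but these records are invented for illustration.
statements = [
    {"actor": "alice", "verb": "completed", "object": "module-1",
     "timestamp": "2020-03-02T10:15:00"},
    {"actor": "alice", "verb": "attempted", "object": "quiz-1",
     "timestamp": "2020-03-02T10:40:00"},
    {"actor": "bob", "verb": "viewed", "object": "module-1",
     "timestamp": "2020-03-03T09:05:00"},
    {"actor": "bob", "verb": "completed", "object": "module-1",
     "timestamp": "2020-03-04T11:20:00"},
]

def activity_counts(statements):
    """Descriptive metric: number of recorded activities per learner."""
    return Counter(s["actor"] for s in statements)

def verb_breakdown(statements, actor):
    """Descriptive metric: what kinds of activity one learner performed."""
    return Counter(s["verb"] for s in statements if s["actor"] == actor)

# These aggregates are the raw material for dashboard visualizations
# (bar charts of activity per learner, tables of activity types, etc.).
print(activity_counts(statements))
print(verb_breakdown(statements, "bob"))
```

Counts of this kind, compared across a cohort, are what a dashboard such as LAViEW renders as charts for the learner or instructor.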
Diagnostic Analytics
Diagnostic analytics
look more deeply into data in order to detect patterns and trends. Such a
system could be thought of as being used to draw an inference about a piece of
data based on the patterns detected in sample or training data, for example, to
perform recognition, classification or categorization tasks. Diagnostic
analytics are applied in a wide range of applications.
Security
applications are common. To support physical security, facial and object
recognition technology is being used in schools and institutions. For example,
a New York school district is using an application called AEGIS to identify
potential threats (Klein, 2020). For digital security, analytics applications
that help filter unwanted messages (whether sent by humans or bots) are
generally available and widely used. Users can learn to train their own machine
learning to filter spam (Gan, 2018) or use commercial systems such as Akismet
(Barron, 2018). Automated fakes detection systems are becoming more widely used
(Li and Lyu, 2019).
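To make the kind of pattern recognition involved concrete, here is a minimal naive Bayes text classifier of the sort that underlies simple spam filtering. The training messages and labels are invented, and production filters such as Akismet use far larger models and additional signals; this is a sketch of the statistical idea only.

```python
import math
from collections import Counter, defaultdict

# Tiny invented training set: (message, label) pairs.
train = [
    ("win free money now", "spam"),
    ("free prize claim now", "spam"),
    ("meeting notes for the course", "ham"),
    ("assignment feedback attached", "ham"),
]

word_counts = defaultdict(Counter)   # label -> word frequency table
label_counts = Counter()             # label -> number of messages
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    """Return the label with the highest (log) posterior probability."""
    best_label, best_score = None, -math.inf
    total_docs = sum(label_counts.values())
    for label in label_counts:
        # log prior + sum of log likelihoods, with add-one smoothing
        score = math.log(label_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for w in text.split():
            score += math.log(
                (word_counts[label][w] + 1) / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify("claim your free money"))  # → spam
```

The classifier infers a category for new data from patterns in the training data, which is exactly the inference structure described above for diagnostic analytics.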
Diagnostic analytics
is also employed to ensure academic discipline. Pattern recognition, for
example, is used for plagiarism detection (Amigud, et.al., 2017). Video
recognition and biometrics are also used for security purposes and exam
proctoring (Rodchua, 2017). “For instance, Examity also uses AI to verify
students’ identities, analyze their keystrokes, and, of course, ensure they’re
not cheating. Proctorio uses artificial intelligence to conduct gaze detection,
which tracks whether a student is looking away from their screens” (Heilweil,
2020).
There is a large
literature devoted to automated grading, beginning with Page (1966), continuing
through the Hewlett competition (Kaggle, 2012), and today the technology has at
least “developed to the point where the systems provide meaningful feedback on
students’ writing and represent a useful complement (not replacement) to human
scoring” (Kaja and Bosnic, 2015).
Ultimately, AI could replace grading altogether. Rose Luckin argues that with systems
“logging every keystroke, knowledge point and facial twitch, then the perfect record of
their abilities on file could make such testing obsolete” (Beard, 2020). This
creates the possibility of assessing competencies from actual performance data
outside educational environments, for example, using technologies like
analytics-based assessment of personal portfolios (van der Schaaf, et.al.,
2017) or using data-driven skills assessment in the workplace (Lin, et.al.,
2018).
Predictive analytics
Numerous products and
studies are based on the idea that “analytics tools can identify factors
statistically correlated with students at risk of failing or dropping out.”
(Scholes, 2016; Gasevic, Dawson & Siemens, 2015). For example, a Jisc report describes several
such projects, including one at New York Institute of Technology (NYIT) that
used four data sources: “admission application data, registration / placement
test data, a survey completed by all students, and financial data” (Sclater,
Peasgood and Mullan, 2016). Student retention is also supported by predictive
analytics. Predictive analytics is also used to assist in learning design,
including adaptive learning design. “Findings indicated that the primary
predictor of academic retention was how teachers designed their modules, in
particular the relative amount of so-called ‘communication activities’.”
(Rientes & Jones, 2019: 116)
Analytics can also draw
from campus information sources to support student advising. For example, the
Berkeley Online Advising (BOA, 2020) project at the University of California at
Berkeley “integrates analytical insights with relationship and planning tools
for advisors of large cohorts and the students they support” (Heyer &
Kaskiris, 2020). Additionally, the Comprehensive Analytics for Student Success
(COMPASS) project at the University of California, Irvine, “focuses on bringing
relevant student data to campus advisors, faculty, and administrators… to
improve undergraduate student outcomes” (UCI Compass, 2020). As O’Brien (2020)
writes, “These tools provide advisors with information that allows for
proactive outreach and intervention when critical student outcomes are not
met.” Combining these approaches is an initiative called ‘precision education’.
Yang and Ogata (2020) suggest that, analogous to precision medicine, precision
education systems consider a wider array of variables than learning analytics,
such as “students’ IQ, learning styles, learning environments, and learning strategies.”
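A minimal sketch of how such an at-risk predictor might work: logistic regression trained by gradient descent on invented features (logins per week, average quiz score), with label 1 marking a student who dropped out. Real systems such as the NYIT project described above combine many more data sources and far larger samples.

```python
import math

# Invented training data: ((logins_per_week, avg_quiz_score), dropped_out)
data = [
    ((0.5, 0.2), 1), ((1.0, 0.3), 1), ((0.8, 0.1), 1),
    ((5.0, 0.8), 0), ((4.0, 0.9), 0), ((6.0, 0.7), 0),
]

w = [0.0, 0.0]   # feature weights
b = 0.0          # bias term
lr = 0.5         # learning rate

def predict(x):
    """Estimated probability that a student is at risk."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))

# Plain stochastic gradient descent on the logistic loss.
for _ in range(2000):
    for x, y in data:
        err = predict(x) - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

# A student with few logins and low quiz scores should score as high risk,
# which is the signal an advisor dashboard would surface for intervention.
print(round(predict((0.7, 0.2)), 2))
```

The statistical correlation the model learns is exactly what “factors statistically correlated with students at risk” refers to; the ethical questions arise in how such scores are then used.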
Prescriptive Analytics
An oft-cited
application is the potential of learning analytics to make content
recommendations, either as a starting point, or as part of a wider learning
analytics-supported learning path. For
example, the Personalised Adaptive Study Success (PASS) system supports
personalisation for students at Open Universities Australia (OUA) (Sclater,
Peasgood and Mullan, 2016). Students report desiring recommendations regarding
potential learning activities, and suggestions for potential learning partners.
(Schumacher, 2018) Content and learning path recommendations are based not only
on the discipline being studied but also on the individual learning profile,
academic history, and a variety of contextual factors. (Ifenthaler and
Widanapathirana, 2014)
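A toy version of such a content recommender, assuming learner profiles and resources are represented as vectors of topic-tag weights (all resource names, tags, and scores here are invented): resources are ranked by cosine similarity to the learner's profile.

```python
import math

# Invented catalogue: resource -> topic-tag weights.
resources = {
    "intro-to-statistics": {"math": 0.9, "data": 0.6},
    "essay-writing":       {"writing": 0.9},
    "data-visualisation":  {"data": 0.8, "design": 0.5},
}

def cosine(a, b):
    """Cosine similarity between two sparse tag-weight vectors."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(profile, n=2):
    """Rank resources by similarity to the learner's interest profile."""
    ranked = sorted(resources,
                    key=lambda r: cosine(profile, resources[r]),
                    reverse=True)
    return ranked[:n]

# A learner whose history suggests a strong interest in data analysis:
print(recommend({"data": 1.0, "math": 0.4}))
```

In a deployed system the profile vector would itself be derived from the learner's academic history and contextual factors, as Ifenthaler and Widanapathirana describe.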
Adaptive learning is
a step beyond learning recommendations in the sense that the learning
environment itself changes (or ‘adapts’) to events in the learning experience
(Sonwalkar, 2007). For example, “Adaptive learning systems — like IBM Watson
and Microsoft Power BI — have the advantage of continually assessing college
students’ skill and confidence levels.” (Neelakantan, 2019). Early adaptive
learning applications were expert systems based on explicit knowledge
representations and user models, that is, they were based on statements and
rules (Garrett & Roberts, 2004). More recently, the ‘black box’ methods
characteristic of contemporary analytics, such as neural networks, have been
employed (Almohammadi, et.al., 2017).
Generative Analytics
Generative analytics
is different from the previous four categories in the sense that it is not
limited to answering questions like “what happened” or “how can we make it
happen”, but instead uses the data to create something that is genuinely new.
In a sense, it is like predictive and prescriptive analytics in that it extrapolates
beyond the data provided, but while in the former two we rely on human agency
to act on the analytics, in the case of generative analytics the analytics
engine takes this action on its own.
In addition to
emulating human conversation, chatbots can generate other human-like
responses, such as gestures and emotions. For example, there’s Magic Leap’s
Mica, an AI-driven being “that comes across as very human” (Craig, 2018). “What
is remarkable about Mica is not the AI, but the human gestures and reactions
(even if they are driven by AI).” Meanwhile, though “fictionalized and
simulated for illustrative purposes only”, products like Samsung’s Neon are
being called ‘artificial humans’, “a computationally created virtual being that
looks and behaves like a real human, with the ability to show emotions and
intelligence.” (Craig, 2020)
Analytics engines,
provided with data, can generate content. The Washington Post uses an AI called
Heliograf to write news and sports articles; in its first year it wrote around
850 items. “That included 500 articles around the election that generated more
than 500,000 clicks.” (Moses, 2017) Analytics and AI have self-generated
computer science papers (Stribling, et.al., 2005), music (Galeon, 2016), art
(Shepherd, 2016), books (Springer Nature, 2019) and inventions (Fleming, 2018).
There are now commercial AI-based applications that generate educational
resources, including articles (e.g., AiWriter), textbooks, test questions (e.g.,
WeBuildLearning), and more.
Such technology can
make educational content more interesting and engaging. For example, in 2015,
an algorithm called DeepStereo developed for Google Maps was able to generate a
video from a series of still photographs (Flynn, et.al., 2015). Also, “With
deep fakes, it will be possible to manufacture videos of historical figures
speaking directly to students, giving an otherwise unappealing lecture a new
lease on life” (Chesney and Citron, 2018:1769). Chesney and Citron write, “The
educational value of deep fakes will extend beyond the classroom. In the spring
of 2018, Buzzfeed provided an apt example when it circulated a video that
appeared to feature Barack Obama warning of the dangers of deep-fake technology
itself. One can imagine deep fakes deployed to support educational campaigns by
public-interest organizations such as Mothers Against Drunk Driving (Chesney
and Citron, 2018:1770).
It may seem
far-fetched, but some pundits are already predicting the development of
artificial intelligences and robots teaching in the classroom. In a recent
celebrated case, a professor fooled his students with ‘Jill Watson’, an
artificial tutor (Miller, 2016). “‘Yuki’, the first robot lecturer, was
introduced in Germany in 2019 and has already started delivering lectures to
university students at The Philipps University of Marburg.” (Ameen, 2019).
While most observers still expect AI and analytics to be limited to a support
role, these examples suggest that the role of artificial teachers might be
wider than expected.
Deontic Analytics
There is an
additional question that needs to be answered, and has been increasingly
entrusted to analytics: “what ought to happen?” Recently the question has been
asked with respect to self-driving vehicles in the context of Philippa Foot’s
‘trolley problem’ (Foot, 1967). In a nutshell, this problem forces the reader
to decide whether to act, killing one person to save five, or to refrain from
acting, allowing (by inaction) the five to be killed. It is argued that
automated vehicles will face similar problems.
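The ethical difficulty can be made concrete: any automated vehicle must ultimately encode some decision rule. The following sketch (hypothetical Python, not any real vehicle’s control software; the function and numbers are invented for illustration) shows how starkly a purely utilitarian rule resolves the dilemma:

```python
# A deliberately simplistic, purely utilitarian decision rule for the
# trolley problem. All names and numbers are hypothetical illustrations,
# not any real autonomous-vehicle API; the point is that someone must
# choose and encode the rule.

def choose_action(lives_lost_if_act: int, lives_lost_if_refrain: int) -> str:
    """Return 'act' or 'refrain' by minimizing lives lost."""
    if lives_lost_if_act < lives_lost_if_refrain:
        return "act"       # e.g. divert the trolley, killing one to save five
    return "refrain"       # otherwise, do not intervene

print(choose_action(1, 5))  # a utilitarian rule says 'act'
```

The discomfort many feel with this one-line resolution is precisely what the deontic question “what ought to happen?” captures.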
It may be argued
that these outcomes are defined ahead of time by human programmers. But automated
systems have an impact on what content is acceptable (and what is not) in a
society. We see this in effect on online video services. “On both YouTube and
YouTube Kids, machine learning algorithms are used to both recommend and
mediate the appropriateness of content” (UC Berkeley Human Rights Center
Research, 2019). Though such algorithms are influenced by input parameters,
their decisions are more nuanced than any explicit design, leading people to adapt
to the algorithm and thereby redefining what is acceptable.
What counts as
‘appropriate’ behaviour may be shaped by analytics and AI. These and additional
implications are being investigated by HUMAINT, “an interdisciplinary JRC
project aiming to understand the impact of machine intelligence on human
behaviour, with a focus on cognitive and socio-emotional capabilities and
decision making” (Tuomi, 2018; HUMAINT, 2020). An AI can select between what might
be called ‘good’ content and ‘bad’ content, displaying a preference for the
former. For example, in response to violence in conflict zones, researchers
“argue the importance of automatic identification of user-generated web content
that can diffuse hostility and address this prediction task, dubbed
‘hope-speech detection’” (Palakodety, et.al., 2019).
There is another
line of research that proposes that AI can define what’s fair. An early example
of this is software designed to optimize the design of congressional voting
districts in such a way that minimizes gerrymandering (Cohen-Addad, Klein &
Young, 2018). In another study, research suggested that “an AI can simulate an
economy millions of times to create fairer tax policy” (Heaven, 2020). A tool
developed by researchers at Salesforce “uses reinforcement learning to identify
optimal tax policies for a simulated economy.” The idea in this case was to
find tax policy that maximized productivity and income equality in a model
economy.
It should be noted that
discussion of generative and deontic analytics lies outside most traditional
accounts of analytics and ethics. And it is precisely in these wider accounts
of analytics that our relatively narrow statements of ethical principles are
lacking. It is possible to apply analytics correctly and yet still reach a
conclusion that would violate our moral sense. And it is possible to use
analytics correctly and still do social and cultural harm. An understanding of
ethics and analytics may begin with ethical principles, but it is far from
ending there.
Ethical Issues in Learning Analytics
We will follow Narayan (2019), who
classifies the ethical issues in learning analytics under three headings:
issues that arise when analytics works, issues that arise because analytics are
not yet reliable, and issues that arise in cases where the use of analytics
seems fundamentally wrong. To these three sets of issues we will add a fourth,
describing wider social and cultural issues that arise with the use of
analytics and AI, and a fifth related specifically to bad actors.
Many of these issues
will be familiar to readers, for example, the potential misuse of facial
recognition, surveillance and tracking, AI-based assessment, misrepresentation
and prejudice, explainability, filter bubbles and feedback effects. Others are
less frequently discussed but raise equally serious ethical issues, for
example, the mechanisms for appealing AI-based evaluations, system consistency and
reliability, stalking, alienation, network effects (i.e., winner takes all), and
environmental impact.
When Analytics Works
Modern AI and
analytics work. As Mark Liberman (2019) observes, "Modern AI (almost)
works because of machine learning techniques that find patterns in training
data, rather than relying on human programming of explicit rules.” This is in
sharp contrast to earlier rule-based approaches that “generally never even got
off the ground at all.”
Analytics and AI
require data above all, and so in order to support this need institutions and
industries often depend on surveillance. However, “when in the wrong hands, these
systems can violate civil liberties” (UC Berkeley, 2019). Once surveillance
becomes normal, its use expands (Marx, 2020). Private actors, as well, employ
surveillance for their own purposes. For example, Amazon-owned Whole Foods is
tracking its employees with a heat map tool that ranks stores most at risk of
unionizing (Peterson, 2020). Analytics makes tracking accessible to everyone.
“Miniature surveillance drones, unseen digital recognition systems, and
surreptitious geolocational monitoring are readily available, making long-term
surveillance relatively easy and cheap” (Cavoukian, 2013:23).
Analytics also erodes
our ability to be anonymous. This is partially because of spying and tracking,
and partially because data about individuals can be cross-referenced. “When
Facebook acts as a third-party tracker, they can know your identity as long as
you’ve created a Facebook account and are logged in — and perhaps even if you
aren’t logged in” (Princiya, 2018). And analytics arguably creates a social
need to eliminate anonymity. As Bodle argues, “A consensus is growing among
governments and entertainment companies about the mutual benefits of tracking
people online.” Hence, provisions against anonymity, he argues, are being built
into things like trade agreements and contracts.
Recent debate has
focused on the use of facial recognition technologies, with IBM, Microsoft and
Amazon all announcing they will cease such efforts. A startup called Clearview AI
makes the risk clear: “What if a stranger could snap your picture on the
sidewalk then use an app to quickly discover your name, address and other
details?” Clearview AI has made that possible (Moyer, 2020). Mark Andrejevic & Neil
Selwyn (2019) outline a number of additional ethical concerns involving facial
recognition technology in schools: the dehumanising nature of facially focused
schooling, the foregrounding of students’ gender and race, the increased
authoritarian nature of schooling, and more.
The previous
sections each raise their own issues, but all touch on the issue of privacy
generally. While it may be argued that privacy protects the powerful at the
expense of the weaker (Shelton, 2017), “Personal privacy is about more than
secrecy and confidentiality. Privacy is about being left alone by the state and
not being liable to be called to account for anything and everything one does,
says or thinks” (Cavoukian, 2013:18). We might say people should be able to
live their lives in ‘quiet enjoyment’ of their possessions, property and
relationships (Andresi, 2019).
In education,
learning analytics used for assessment can score student work with accuracy and
precision. Students recognize this, but they have mixed feelings about such
systems, preferring “comments from teachers or peers rather than computers”
(Roscoe, et.al., 2017). Students may prefer human assessment
because a human is more likely to see them as individuals with individual
flair, rather than as erroneous deviations from the expectations of the analytics
engine. As one college official says, “Everyone makes snap judgments on
students, on applicants, when first meeting them. But what worries me about AI
is AI can't tell the heart of a person and the drive a person has.”
Humans often use
discretion when applying the rules. “Organizational actors establish and
re-negotiate trust under messy and uncertain analytic conditions” (Passi and
Jackson, 2018). In the case of learning analytics, Zeide (2019) writes that a human instructor
might overlook a student’s error “if she notices, for example, that the student
clearly has a bad cold.” By contrast, “Tools that collect information,
particularly based on online interactions, don't always grasp the nuances.” The
impact of a lack of discretion is magnified by uncertainties in the data that
might be recognized by a human but overlooked by the machine (Passi and Jackson,
2018; Malouff & Thorsteinsson, 2016). There is a need for a principle of
“remedy for automated decision” that is “fundamentally a recognition that as AI
technology is deployed in increasingly critical contexts, its decisions will
have real consequences, and that remedies should be available just as they are
for the consequences of human actions” (Fjeld, et.al., 2020:33).
Analytics can also
be used to create misleading images and videos. Chesney and Citron (2018:1760)
write, “To take a prominent example, researchers at the University of Washington
have created a neural network tool that alters videos so speakers say something
different from what they originally said.” There are numerous unethical uses of
content manipulation, including exploitation, sabotage, harm to society,
distortion of discourse, manipulation of elections, erosion of trust,
exacerbation of divisions, undermining of public safety, and undermining
journalism. (Ibid:1772-1786).
A number of recent
high-profile cases have raised the possibility of analytics being used to
(illegitimately?) manipulate the thoughts, feelings and emotions of users. For
example, one study experimented on Facebook users (without their knowledge or
consent) to show that “emotional states can be transferred to others via
emotional contagion, leading people to experience the same emotions without
their awareness” (Kramer, Guillory & Hancock, 2014). An article from RAND
suggests, “Whoever is first to develop and employ such systems could easily
prey on wide swaths of the public for years to come” (Paul and Posard, 2020).
Manipulation of the
user can serve beneficial purposes, as described above. However, it becomes
ethically problematic when the institution, rather than the user,
benefits. As Kleber (2018) writes, “Casual applications like Microsoft’s
XiaoIce, Google Assistant, or Amazon’s Alexa use social and emotional cues for
a less altruistic purpose — their aim is to secure users’ loyalty by acting
like new AI BFFs.” Futurist Richard van Hooijdonk quips: “If a marketer can get
you to cry, he can get you to buy.” Moreover, Kleber continues, “The discussion
around addictive technology is starting to examine the intentions behind voice
assistants. What does it mean for users if personal assistants are hooked up to
advertisers? In a leaked Facebook memo, for example, the social media company
boasted to advertisers that it could detect, and subsequently target, teens’
feelings of ‘worthlessness’ and ‘insecurity,’ among other emotions” (Levin,
2017).
Schneier (2020)
writes, “The point is that it doesn’t matter which technology is used to
identify people… The whole purpose of this process is for companies — and
governments — to treat individuals differently.” In many cases, differential
treatment is acceptable. However, in many other cases, it becomes subject to
ethical concerns. The accuracy of analytics creates an advantage for companies
in a way that is arguably unfair to consumers. For example, the use of
analytics data to adjust health insurance rates (Davenport & Harris, 2007)
works in favour of insurance companies, and thereby, arguably, to the
disadvantage of their customers. Analytics are used similarly in academics,
sometimes before the fact, and sometimes after. For example, in one case the
prediction itself determined the outcome: the “Mount St. Mary’s
University... president used a survey tool to predict which freshman wouldn’t
be successful in college and kicked them out to improve retention rates”
(Foresman, 2020).
When it Doesn’t
Artificial
Intelligence and analytics often work and as we’ve seen above can produce
significant benefits. On the other hand, as Liberman comments (2019), AI is
brittle. When the data are limited or unrepresentative, it can fail to respond
to contextual factors or outlier events. It can contain and replicate errors,
be unreliable, be misrepresented, or even be defrauded. In the case of learning
analytics, the results can include poor performance, bad pedagogy,
untrustworthy recommendations, or (perhaps worst of all) nothing at all.
Analytics can fail
because of error, and this raises ethical concerns. “Analytics results are
always based on the data available and the outputs and predictions obtained may
be imperfect or incorrect. Questions arise about who is responsible for the
consequences of an error, which may include ineffective or misdirected
educational interventions” (Griffiths, et.al., 2016:4).
Analytics requires
reliable data, “as distinguished from suspicion, rumor, gossip, or other
unreliable evidence” (Emory University Libraries, 2019). Meanwhile, a
‘reliable’ system of analytics is one without error and which can be predicted
to perform consistently, or in other words, “an AI experiment ought to ‘exhibit
the same behavior when repeated under the same conditions’ and provide
sufficient detail about its operations that it may be validated” (Fjeld, et.al., 2020:29; Slade and Tait, 2019).
Both amount to a requirement of “verifiability and replicability” of both data
and process.
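In software terms, this requirement is often met by controlling sources of randomness. A minimal sketch (hypothetical Python; the scoring function and its numbers are invented purely for illustration):

```python
# Minimal sketch of 'the same behavior when repeated under the same
# conditions': seeding an isolated random generator makes a stochastic
# scoring procedure repeatable, and hence verifiable by others.
import random

def noisy_score(seed: int) -> float:
    """A toy stochastic 'prediction' whose randomness is fully seeded."""
    rng = random.Random(seed)             # isolated, seeded generator
    return round(50 + rng.gauss(0, 5), 4)

run1 = noisy_score(seed=42)
run2 = noisy_score(seed=42)
print(run1 == run2)  # True: identical conditions yield identical behavior
```

Without the fixed seed, the two runs would in general differ, and the experiment would fail the replicability test described above.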
Additionally, the
reliability of models and algorithms used in analytics “concerns the capacity
of the models to avoid failures or malfunction, either because of edge cases or
because of malicious intentions. The main vulnerabilities of AI models have to
be identified, and technical solutions have to be implemented to make sure that
autonomous systems will not fail or be manipulated by an adversary” (Hamon,
Junklewitz & Sanchez, 2020:2). But it is not yet clear that learning
analytics are reliable (Contact North, 2018). For example, inconsistency can
magnify ethical issues, especially in real-time analytics. “‘When the facts
change, I change my mind’ can be a reasonable defence: but in order to avoid
less defensible forms of inconsistency, changing your mind about one thing may
require changing it about others also” (Boyd, 2019).
Additionally, there
are widespread concerns about bias in analytics. In one sense, it is merely a
specific way analytics can be in error or unreliable. But more broadly, the
problem of bias pervades analytics: it may be in the data, in the collection of
the data, in the management of the data, in the analysis, and in the application
of the analysis. The outcome of bias is
reflected in misrepresentation and prejudice.
For example, “the AI system was more likely to associate European
American names with pleasant words such as ‘gift’ or ‘happy’, while African
American names were more commonly associated with unpleasant words” (Devlin,
2017). “The tales of bias are legion: online ads that show men higher-paying
jobs; delivery services that skip poor neighborhoods; facial recognition
systems that fail people of color; recruitment tools that invisibly filter out
women” (Powles and Nissenbaum, 2018).
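The name-word associations described above are typically measured as similarities between word vectors, as in WEAT-style tests. A toy sketch (the vectors and names are invented for illustration; real studies use learned embeddings with hundreds of dimensions):

```python
# Toy WEAT-style association test: measure how close a name's vector
# sits to a set of 'pleasant' word vectors. All vectors are invented.
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

emb = {                              # hypothetical 3-d "embeddings"
    "gift":   [0.90, 0.10, 0.00],
    "happy":  [0.80, 0.20, 0.10],
    "name_a": [0.85, 0.15, 0.05],    # vector near the pleasant words
    "name_b": [0.10, 0.90, 0.30],    # vector far from the pleasant words
}
pleasant = ["gift", "happy"]

def association(name):
    """Mean cosine similarity between a name and the pleasant-word set."""
    return sum(cosine(emb[name], emb[w]) for w in pleasant) / len(pleasant)

print(association("name_a") > association("name_b"))  # True
```

The ethical point is that nothing in this arithmetic is malicious; the bias lives in the data the vectors were learned from.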
Another source of
error is misinterpretation. Because analytical engines don’t actually know what
they are watching, they may see one thing and interpret it as something else.
For example, looking someone in the eyes is taken as a sign that they are
paying attention. And so that’s how an AI interprets someone looking straight
at it. But it might just be the result of a student fooling the system. For
example, students being interviewed by AI are told to “raise their laptop to be
eye level with the camera so it appears they're maintaining eye contact, even
though there isn't a human on the other side of the lens” (Metz, 2020). The
result is that the AI misinterprets laptop placement as ‘paying attention’.
There is a risk,
writes Ilkka Tuomi (2018), “that AI might be used to scale up bad pedagogical
practices. If AI is the new electricity, it will have a broad impact in
society, economy, and education, but it needs to be treated with care.” For
example, badly constructed analytics may lead to evaluation errors. “Evaluation
can be ineffective and even harmful if naively done ‘by rule’ rather than ‘by
thought’” (Dringus, 2012). Even more concerning is how poorly designed
analytics could result in poorly defined pedagogy. Citing Bowker and Star
(1999), Buckingham Shum and Deakin Crick (2012) argue that “a marker of the
health of the learning analytics field will be the quality of debate around
what the technology renders visible and leaves invisible, and the pedagogical
implications of design decisions.”
Social and Cultural Issues
This is a class of
issues that addresses the social and cultural infrastructure that builds up
around analytics. These are not issues with analytics itself, but with the way
analytics changes our society, our culture, and the way we learn.
Analytics is
ethically problematic in society when it is not transparent. When a
decision-making system is opaque, it is not possible to evaluate whether it is
making the right decision. You might not even know the decision was made by a
machine. Analytics requires a ‘principle of notification’ (Fjeld, et.al.,
2020:45). Additionally, transparency applies to the model or algorithm applied
in analytics. “Transparency of models: it relates to the documentation of the
AI processing chain, including the technical principles of the model, and the
description of the data used for the conception of the model. This also
encompasses elements that provide a good understanding of the model, and related
to the interpretability and explainability of models” (Hamon, Junklewitz &
Sanchez, 2020:2).
Explainability is
closely related to transparency. In the case of analytics, explainability seems
to be inherently difficult. We’re not sure whether we’ll be able to provide
explanations. Zeide (2019) writes, “Unpacking what is occurring within AI
systems is very difficult because they are dealing with so many variables at
such a complex level. The whole point is to have computers do things that are
not possible for human cognition.” As Eckersley, et.al. (2017) say, “Providing
good explanations of what machine learning systems are doing is an open
research question; in cases where those systems are complex neural networks, we
don’t yet know what the trade-offs between accurate prediction and accurate
explanation of predictions will look like.”
Numerous agencies
have announced efforts to ensure that automated decisions are ‘accountable’ (Rieke,
Bogen & Robinson, 2018). But the nature of AI might make accountability impossible.
“Suppose every single mortgage applicant of a given race is denied their loan,
but the Machine Learning engine driving that decision is structured in such a
way that the relevant engineers know exactly which features are driving such
classifications. Further suppose that none of these are race-related. What is
the company to do at this point?” (Danzig, 2020).
What we don’t know
might hurt us. The UK House of Lords Select Committee notes that “The use of
sophisticated data analytics for
increasingly targeted political campaigns has attracted considerable attention
in recent years, and a number of our witnesses were particularly concerned
about the possible use of AI for turbo-charging this approach” (Clement-Jones,
et.al, 2018:para 260). One example is the use of bot Twitter accounts to sow
division during the Covid-19 pandemic. “More than 100 types of inaccurate
COVID-19 stories have been identified, such as those about potential cures. But
bots are also dominating conversations about ending stay-at-home orders and
‘reopening America’,” according to a report from Carnegie Mellon (Young, 2020).
An ethical issue
here arises because “information is filtered before reaching the user, and this
occurs silently. The criteria on which filtering occurs are unknown; the
personalization algorithms are not transparent” (Bozdag & Timmermans,
2011). Additionally, “We have different identities, depending on the context,
which is ignored by the current personalization algorithms” (Ibid). Moreover,
algorithms that drive filter bubbles may be influenced by ideological or
commercial considerations (Introna & Nissenbaum, 2000:177). The eventual
consequence may be disengagement and alienation. “Will Hayter, Project Director
of the Competition and Markets Authority, agreed: ‘ ... the pessimistic
scenario is that the technology makes things difficult to navigate and makes
the market more opaque, and perhaps consumers lose trust and disengage from
markets’” (Clement-Jones, et.al, 2018:para 52).
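The feedback dynamic behind such filtering is easy to sketch: a recommender that ranks purely by past clicks will amplify an early preference. (Toy Python; the items, topics, and click data are invented for illustration.)

```python
# Toy filter-bubble feedback loop: ranking purely by past clicks lets a
# single early click dominate what the user sees next. Data is invented.
from collections import Counter

items = [("politics", "p1"), ("politics", "p2"),
         ("sports", "s1"), ("arts", "a1")]

def recommend(click_history):
    """Rank items by how often the user clicked on that topic before."""
    topic_counts = Counter(topic for topic, _ in click_history)
    # Counter returns 0 for unseen topics, so unclicked topics sink.
    return sorted(items, key=lambda item: -topic_counts[item[0]])

history = [("politics", "p1")]     # one early click...
print(recommend(history)[0][0])    # 'politics' now tops the ranking
```

Each click on a top-ranked item then reinforces the same topic, silently narrowing the user’s view in exactly the way Bozdag and Timmermans describe.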
Artificial
intelligence and analytics impose themselves as a barrier between one person
and another, or between one person and necessary access to jobs, services, and
other social, economic and cultural needs. Consider the case of a person
applying for work where analytics-enabled job applicant screening is being
used. However, “La difficulté, pour les candidats pris dans les rets de ces
systèmes de tris automatisés, est d’en sortir, c’est-à-dire se battre contre
les bots, ces gardiens algorithmiques, pour atteindre une personne réelle capable
de décider de son sort (The difficulty for candidates caught in the nets of
these automated sorting systems is to get out of them, that is, to fight
against bots, those algorithmic guardians, to reach a real person capable of
deciding their fate)” (Guillaud, 2020).
There are ethical
issues around the question of inclusion and exclusion in analytics. Most often,
these are put in the form of concerns about biased algorithms. But arguably,
the question of inclusion in analytics ought to be posed more broadly. For
example, Emily Ackerman (2019) reports having been in a wheelchair and
blocked from exiting an intersection by a delivery robot waiting on the ramp.
This isn’t algorithmic bias per se, but clearly the use of the robot excluded
Ackerman from an equal use of the sidewalk.
New types of
artificial intelligence lead to new types of interaction. In such cases, it is
of particular importance to look at the impact on traditionally disadvantaged
groups. “There is increasing recognition that harnessing technologies such as
AI to address problems identified by working with a minority group is an
important means to create mainstream innovations. Rather than considering these
outcomes as incidental, we can argue that inclusive research and innovation should
be the norm” (Coughlan, et.al., 2019a: 88).
Above, we discussed
the ethics of surveillance itself. Here, we address the wider question of the
surveillance culture. This refers not only to specific technologies, but the
creation of a new social reality. “Focusing on one particular identification
method misconstrues the nature of the surveillance society we’re in the process
of building. Ubiquitous mass surveillance is increasingly the norm” (Schneier,
2020). Whether in China, where the infrastructure is being built by the
government, or the west, where it’s being built by corporations, the outcome is
the same.
What we are finding
with surveillance culture is the ‘elasticity’ of analytics ethics (Hamel, 2016):
each step of surveillance stretches what we are willing to accept a bit and
makes the next step seem inevitable. The uses of streetlight surveillance are
allowed to grow (Marx, 2020). Surveillance becomes so pervasive it becomes
impossible to escape its reach (Malik, 2019). And nowhere is this more true
than in schools and learning. PowerSchool, for example, aims “to connect assessment, enrollment,
gradebook, professional learning and special education data services to its
flagship student information system” (Wan, 2019). Or, as Peter Greene
(2019) says, "PowerSchool is working on micromanagement and data mining in
order to make things easier for the bosses. Big brother just keeps getting
bigger, but mostly what that does is make a world in which the people who
actually do the work just look smaller and smaller."
Audrey Watters
captures the issue of surveillance culture quite well. It’s not just that we
are being watched, it’s that everything we do is being turned into data for
someone else’s use - often against us. She says “These products — plagiarism
detection, automated essay grading, and writing assistance software — are built
using algorithms that are in turn built on students’ work (and often too the
writing we all stick up somewhere on the Internet). It is taken without our
consent. Scholarship — both the content and the structure — is reduced to data,
to a raw material used to produce a product sold back to the very institutions
where scholars teach and learn.” (Watters, 2019). As Watters writes, “In her
book The Age of Surveillance Capitalism, Shoshana Zuboff calls this
‘rendition,’ the dispossession of human thoughts, emotions, and experiences by
software companies, the reduction of the complexities and richness of human
life to data, and the use of this data to build algorithms that shape and
predict human behavior.”
The products that
depend on analytics engines — plagiarism detection, automated essay grading,
and writing assistance software — are, as Watters notes, built using algorithms
that are in turn built on students’ work. That work is often taken without
consent, or (as in the lawsuit affirming TurnItIn’s right to use student
essays) consent is demanded as an educational requirement (Masnick, 2008).
Scholarship becomes raw material for products sold back to the very
institutions where scholars teach and learn. And in a wider sense, everything
is reduced to data, and the value of everything becomes the value of that data.
People no longer simply create videos, they are “influencers”. Courses are no
longer locations for discussion and learning, they produce “outcomes”.
There is the sense
that analytics and AI cannot reason, cannot understand, and therefore cannot
know the weight of their decisions. This, somehow, must be determined. But as
Brown (2017) asks, “Who gets to decide what is the right or wrong behaviour for
a machine? What would AI with a conscience look like?” On the other hand,
perhaps AI can learn the difference between right and wrong for itself.
Ambarish Mitra (2018) asks, “What if we could collect data on what each and
every person thinks is the right thing to do? …
With enough inputs, we could utilize AI to analyze these massive data
sets—a monumental, if not Herculean, task—and drive ourselves toward a better
system of morality… We can train AI to identify good and evil, and then use it
to teach us morality.” The danger in this is that people may lose the sense of
right and wrong, and there are suggestions that this is already happening.
Graham Brown-Martin argues, for example, “At the moment within social media
platforms we are seeing the results of not having ethics, which is potentially
very damaging” (Clement-Jones, et.al, 2018:para 247). Do right and wrong
become whatever the machine allows them to be? This is perhaps the intuition being
captured by people who are concerned that AI results in a loss of humanity. And
when we depend on analytics to decide on right and wrong, what does that do to
our sense of morality?
While it may be
intuitive to argue that human designers and owners ought to take responsibility
for the actions of an AI, arguments have been advanced suggesting that
autonomous agents are responsible in their own right, thereby possibly
absolving humans of blame. “Emerging AI technologies can place further distance
between the result of an action and the actor who caused it, raising questions
about who should be held liable and under what circumstances.” (Fjeld, et.al.,
2020:34)
The argument from AI
autonomy has a variety of forms. In one, advanced (tentatively) by the IEEE, a
distinction is drawn between ‘moral agents’ and ‘moral patients’ (or ‘moral
subjects’) to suggest that we ought to distinguish between how an outcome
occurred and the consequence of that outcome, and that autonomous
self-organizing systems may operate independently of the intent of the designer
(IEEE, 2016, p. 196). As Bostrom and Yudkowsky (2014) write, “The local,
specific behavior of the AI may not be predictable apart from its safety, even
if the programmers do everything right.” It may seem unjust to hold designers
responsible in such cases.
Focus on Ethical Issues
In this section we examine the ethical
issues being addressed by codes of conduct. Most often these are not stated
explicitly, but must be inferred from the sorts of behaviours or outcomes being
expressly discussed.
The Good that Can Be Done
While
ethical codes are typically thought of as identifying wrongs, in the sense of
“thou shalt not”, it should be noted that many codes reference first the good that can be accomplished by the
discipline or profession being discussed. This is especially the case in
relation to data management and data research, which are new fields, and where
the benefits may not be immediately obvious.
For
example, while the United Kingdom Data Ethics Framework
“sets out clear principles for how data should be used in the public sector,”
it is with the intention to “maximise the value of data whilst also setting the
highest standards for transparency and accountability when building or buying
new data technology” (Gov.UK, 2018), advising researchers to “start with clear
user need and public benefit.” Also in the U.K., the principles
outlined by the House
of Lords Select Committee on AI reflect a purpose “for the common
good and benefit of humanity”, including privacy rights, the right to be
educated, and the right “to flourish mentally, emotionally and economically alongside
artificial intelligence” (Clement-Jones, et.al, 2018, para 417).
Similarly, the Sorbonne Declaration (2020)
points to “the benefit of society and economic development” that accrues as a
result of data research. It is motivated by the good that can be done and
“recognises the importance of sharing data in solving global concerns – for
example, curing diseases, creating renewable energy sources, or understanding
climate change” (Merett,
2020). In some cases, the emphasis is on being able to be more ethical.
According to the Society of Actuaries, “AI provides many new
opportunities for ethical issues in practice beyond current practices,” for
example, ‘black box’ decision models, masked bias, and unregulated data
(Raden, 2019:9), all issues that received much less attention in the days
before analytics.
In the
field of learning analytics, there is often an explicit linkage drawn between
the use of data and benefits for students, and thereby, of helping society
benefit from education generally. The Open University, for example, asserts
that the purpose of collecting data should be “to identify ways of effectively
supporting students to achieve their declared study goals” (OU, 2014:4.2.2). The
Asilomar Convention for Learning Research in Higher Education principles were based
on “the promise of education to improve the human condition”, as expressed by
two tenets of educational research: to “advance the science of learning for the
improvement of higher education”, and to share “data, discovery, and technology
among a community of researchers and educational organizations” (Stevens &
Silbey, 2014).
Academic or Professional Freedom
Ethical
codes frequently point to the need for freedom or autonomy for the profession.
Not surprisingly, the concept of academic freedom surfaces frequently in
academic codes of ethics. It is seen as something that needs to be nurtured and
protected. Thus, for example, one university’s code of ethics asserts that the
defense of academic freedom is an “obligation” on faculty members, stating, “it
is unethical for faculty members to enter into any agreement that infringes
their freedom to publish the results of research conducted within the
University precincts or under University auspices… they have the obligation to
defend the right of their colleagues to academic freedom. It is unethical to
act so as deliberately to infringe that freedom” (SFU, 1992). Similarly, good practices are described as those that defend academic freedom (EUI, 2019).
But
university professors are not alone in asserting professional independence.
Researchers generally, and especially early-career researchers (ECRs), “are being
pressured into publishing against their ethics because of threats relating to
job security” (Folan, 2020). Librarians declare that they are “explicitly
committed to intellectual freedom and the freedom of access to information. We
have a special obligation to ensure the free flow of information and ideas to
present and future generations” (ALA, 2008). Doctors and nurses also declare the caregiver’s right to “be free to choose whom to
serve, with whom to associate, and the environment in which to provide medical
care” (AMA, 2001). The same assertions of independence and autonomy can be
found in journalists’ code of ethics (NUJ, 2011).
Conflict of interest
The idea that a person would use their position of privilege or responsibility for personal benefit, whether directly or through the offer of gifts or benefits, is expressly prohibited by many (but by no means all) codes of ethics (CFA, 2019; IEEE, 2020: 7.8; SFU, 1992; CPA,
2017). Different sorts of conflict of interest are mentioned by different codes
of ethics.
Some codes focus on material benefits.
For example, codes of ethics in the financial sector often express prohibitions against insider trading (specifically, members that “possess material nonpublic information that could affect the value of an investment must not act or cause others to act on the information”) and against “practices that distort prices or artificially inflate trading volume with the intent to mislead market participants” (CFA, 2019). Public service ethics, meanwhile, address conflict
of interest as a matter of trust where the principles include “taking all
possible steps to prevent and resolve any real, apparent or potential conflicts
of interest,” as well as “effectively and efficiently using the public money,
property and resources managed by them” (TBS 2011).
Other codes focus on integrity. We see
this in professions like journalism, where “professional integrity is the
cornerstone of a journalist’s credibility” (SPJ, 1996) and journalists are
urged “to remain independent (and therefore avoid conflict of interest), and to
be accountable” (SPJ, 2014). The primary focus of the New York Times Ethical
Journalism Guidebook is avoidance of conflict of interest, and it addresses exhaustively
the ways in which a journalist could be in a real or perceived conflict of
interest, and counsels against them, while allowing for certain exceptions (NYT,
2018).
In education and the helping professions
the codes focus on exploitation (IUPSYS, 2008; CPA, 2017; NEA, 1975; BACB,
2014:6; SFU, 1992; EUI, 2019, etc.). The British Columbia Teachers’ Federation, for example, states that “a privileged relationship exists between members and students” and stresses the importance of refraining from exploiting that relationship (BCTF, 2020).
Harm
The prevention of harm is a theme that
arises in numerous codes of ethics. Many codes trace their origins to the
written principles for ethical research, published in 1949, that emerged from the Nuremberg trials in which leading Nazi medics were convicted for their atrocities during the Second World War (Kay et al., 2012). In general, research should not
risk “even remote possibilities of injury, disability, or death,” nor should
the harm exceed the potential benefits of the research (USHM, 2020). What
counts as harm, however, varies from code to code.
Often, the nature of harm is loosely
defined. Accenture’s Universal Principles for Data Ethics (Accenture, 2016:5)
states that practitioners need to be aware of the harm the data could cause,
both directly, and through the “downstream use” of data. The principles also
acknowledge that data is not neutral. “There is no such thing as raw data.” The
Information Technology Industry Council urges researchers to “Recognize
potentials for use and misuse, the implications of such actions, and the
responsibility and opportunity to take steps to avoid the reasonably
predictable misuse of this technology by committing to ethics by design” (UC Berkeley, 2019).
Discrimination and human rights
violations are often cited as sources of harm (IEEE, 2020: 9.26; NEA, 1975;
IFLA, 2012; NUJ, 2011; UC Berkeley, 2019; etc.). For example, the Amnesty
International and Access Now ‘Toronto Declaration’ calls for a right to redress for human rights violations caused by analytics and AI. “This may include,
for example, creating clear, independent, and visible processes for redress
following adverse individual or societal effects,” the declaration suggests,
“[and making decisions] subject to accessible and effective appeal and judicial
review” (Brandom, 2018).
Several codes, by contrast, identify
exemptions and cases that will not be
considered harm. For example, the U.S. ‘Common Rule’ states that research is
exempt from restrictions if it involves only “benign behavioral interventions”, that is, interventions that are “brief in duration, harmless,
painless, not physically invasive, not likely to have a significant adverse
lasting impact on the subjects, and the investigator has no reason to think the
subjects will find the interventions offensive or embarrassing” (HHS, 2018:§46.104.2.C.ii).
Quality and Standards
Ethical codes – especially professional
ethical codes – also address issues related to quality and standards. Sometimes
competence is defined simply as “stewardship and excellence” (TBS, 2011) or
professionalism (CFA, 2019; BACB, 2014:6). Or a profession may seek to restrict
practice to competent practitioners, for example, preventing assistance to a
“noneducator in the unauthorized practice of teaching” and preventing “any
entry into the profession of a person known to be unqualified in respect to
character, education, or other relevant attribute” (NEA, 1975).
The code may also seek to define and
reinforce exemplary behaviours such as research integrity, scientific rigor and
recognition of sources. The ethical code for behavioural analysts, for example,
states that researchers must not fabricate data or falsify results in their
publications, must correct errors in their publications, and not omit findings
that might alter interpretations of their work (BACB, 2014:9.0). Similarly, “The IEEE acknowledges the idea of scientific rigor in its call for creators of AI systems to define metrics, make them accessible, and measure systems” (Fjeld, et.al., 2020:59). The major sources of academic misconduct are related to the
misuse of intellectual property, for example, through plagiarism, piracy,
misrepresentation of authorship (“personation”), and fabrication of data or qualifications (EUI, 2019; BACB, 2014:9.0).
What are the Limits?
Finally, some ethical codes seek to
address the limits of what can be done ethically. It’s not always easy to
recognize these limits; it was only after years of effort that IBM announced it
would cease work on general facial recognition technology, for example (Krishna,
2020). Sometimes the need for limits is stated explicitly. The purpose of the
U.K. Government Data Ethics Framework, for example, is to help data scientists
identify the limits of what is allowed, to help practitioners consider policy
when designing data science initiatives, and to identify core ethical expectations
from such projects (Gov.UK, 2018).
Some discussions (e.g. Floridi, et.al., 2018, note 5) omit consideration of the research issues (arguing that “they are related specifically to the practicalities of AI development”); however, these issues set an important ethical standard, specifically, “to create not undirected
intelligence, but beneficial intelligence” (Asilomar, 2017). In other cases,
specific outcomes are undesired, for example, “We should not build a society where humans are overly dependent on AI or where AI is used to control human behavior through the excessive pursuit of efficiency and convenience” (Japan, 2019:4). Many individual researchers, meanwhile,
refuse to work on military or intelligence applications (Shane & Wakabayashi,
2018).
Otherwise, the limits are related to the benefits. For example, the Information and Privacy Commissioner of Ontario, Canada, states that “data-gathering by the state should be restricted to that which is reasonably necessary to meet legitimate social objectives, and subjected to controls over its retention, subsequent use, and disclosure” (Cavoukian, 2013). Similarly, Research Ethics Boards (REBs) often require that submissions for ethics approval be accompanied by statements of scientific merit and research need.
Core Values and Priorities
The previous section surveyed the ethical issues addressed by codes of conduct. It considered, in a sense, the purpose of the code qua code of ethics; that is, it did not look at the social, political or economic need for codes of ethics, but rather sought to identify the questions for which a ‘code of ethics’ is the answer. No code of those surveyed was designed to meet all of the purposes identified, and none of the purposes identified was specifically addressed by all of the codes surveyed. We use different ethical codes to do different things.
In this section, we will focus on the values and priorities that can be found
in the codes. These are things that might be found in the ethical principles described by the code, if the
code is structured that way, or the things that are explicitly described as
good or desirable by the code. When people state that there is a ‘universal’ or ‘general’ agreement on values, it is usually to a subset of the items listed here that they refer. Below we have not attempted to create a table of values mapped to codes, as some researchers (e.g. Fjeld, et.al., 2020) have done, but rather to list the values with references to relevant examples where they are asserted.
Pursuit of Knowledge
The pursuit of knowledge is identified as
a core value by many academic and professional codes. For example, the SFU code
of ethics addresses faculty members first as teachers, and then as scholars.
“The first responsibility of university teachers is the pursuit and
dissemination of knowledge and understanding through teaching and research.
They must devote their energies conscientiously to develop their scholarly
competence and effectiveness as teachers” (SFU, 1992).
Similarly, the National Education Association statement (NEA, 1975) “recognizes the supreme importance of the pursuit of truth, devotion to excellence, and the nurture of the democratic principles.” Nor is the pursuit of knowledge limited to
academics. The Society for Professional Journalists (SPJ) code of ethics, originally
derived from Sigma Delta Chi’s ‘New Code of Ethics’ in 1926 (SPJ, 2014),
asserts that the primary function of journalism is to inform the public and to serve the truth.
Autonomy and Individual Value
Many codes, like the National Education Association code (NEA, 1975), are based on “believing in the worth and dignity of each human being.” This, though, is expressed in different ways by different codes. For example, in one code, individual development is the objective, to promote the “acquisition of autonomous attitudes and behavior” (Soleil, 1923). The AI4People framework (Floridi, et.al., 2018:16) adopts a similar stance.
By contrast Tom Beauchamp and James
Childress’s Principles of Biomedical
Ethics contains an extended discussion of autonomy embracing the idea of
‘informed consent’, which requires disclosure of information, respect for
decision-making, and provision of advice where requested. A similar respect for
human autonomy is demanded by the High-Level Expert Group on Artificial
Intelligence (AI HLEG, 2019).
Similarly, the Belmont Report begins by
identifying ‘respect for persons’, as a core principle which “incorporates at
least two basic ethical convictions: first, that individuals should be treated
as autonomous agents, and second, that persons with diminished autonomy are
entitled to protection.” (DHEW, 1978:4)
Consent
Whether or not based in the principle of
autonomy or the inherent worth of people, the principle of consent is itself
often cited as a fundamental value by many ethical codes (BACB, 2014; DHEW,
1978; HHS, 2018; Drachsler & Greller, 2016, etc.). However there may be
variations in what counts as consent and what consent allows.
For example, the type of consent defined
by the Nuremberg Code “requires that before the acceptance of an affirmative
decision by the experimental subject there should be made known to him the
nature, duration, and purpose of the experiment; the method and means by which
it is to be conducted; all inconveniences and hazards reasonably to be
expected; and the effects upon his health or person which may possibly come
from his participation in the experiment” (USHM, 2020).
Several codes are more explicit about
what counts as informed consent. For example, one code requires that “researchers
be transparent about the research and give research subjects the choice not to
participate. This includes passive data collection, such as collection of data
by observing, measuring, or recording a data subject’s actions or behaviour”
(IA, 2019). The same code, however, contains provisions that allow data to be
collected without consent. If consent is not possible, it states, “Researchers
must have legally permissible grounds to collect the data and must remove or
obscure any identifying characteristics as soon as operationally possible.”
There are also stipulations designed to ensure research quality and to ensure
that communications about the research are accurate and not misleading (Ibid).
Meanwhile, the same code, the IA Code of Standards and Ethics for Marketing Research and Data Analytics (IA, 2019), allows the scope of consent to be extended beyond research. Consent is required for research
purposes, but in addition “such consent can enable non-research activities to
utilize research techniques for certain types of customer satisfaction, user,
employee and other experience activities.” The Nuremberg Code and marketing research may stand at opposite poles of an ethical question; however, they reflect a society that holds consent as sacrosanct on one hand and gives legal force to End User Licensing Agreements (EULAs) on the other.
Integrity
Integrity is often required of
professionals (CFA, 2019; CSPL, 1995; IA, 2019; etc.), but different codes
stress different aspects of integrity. The Canadian Psychological Association section
on integrity speaks to accuracy, honesty, objectivity, openness, disclosure,
and avoidance of conflict of interest (CPA, 2017). The European University
Institute defines integrity as including such values as honesty, trust,
fairness and respect. (EUI, 2019). The Ontario College of Teachers focuses on trust,
which includes “fairness, openness and honesty” and integrity, which includes
honesty and reliability (OCT, 2020). In Guyana, integrity includes “honest
representation of one’s own credentials, fulfilment of contracts, and
accountability for expenses” (Guyana, 2017). The Nolan Principles state “Holders
of public office should act solely in terms of the public interest” (CSPL,
1995) while Raden (2019: 9) defines it as “incorruptibility”.
Confidentiality
While sometimes breaches of
confidentiality are depicted as ‘harm’, confidentiality is often presented as a
virtue in and of itself, perhaps constitutive of integrity. Thus, for example,
librarians “protect each library user's right to privacy and confidentiality
with respect to information sought or received and resources consulted, borrowed,
acquired or transmitted” (ALA, 2008). Similarly, the Declaration of Helsinki states
that “every precaution must be taken to protect the privacy of research
subjects and the confidentiality of their personal information” (WMA, 2013).
The need for confidentiality increases
with the use of electronic data. The authors of a 1973 report for the U.S.
Department of Health, Education and Welfare addressing the then nascent
practice of electronic data management noted that “under current law, a
person's privacy is poorly protected against arbitrary or abusive
record-keeping practices” (Ware, et.al., 1973:xx). Government policy, they argued,
should be designed to limit intrusiveness, to maximize fairness, and to create
legitimate and enforceable expectations of confidentiality (Linowes,
et.al.,1977: 14-15).
Confidentiality, expressed as privacy, is
a core principle for data and information services and codes regulating those.
For example, the Federal Trade Commission promotes principles that “are widely
accepted as essential to ensuring that the collection, use, and dissemination
of personal information are conducted fairly and in a manner consistent with
consumer privacy interests.” (Pitofsky, et.al., 1998:ii).
It should be noted that exceptions to
confidentiality may be allowed, especially where required by law. For example,
the British Columbia Teachers’ Federation code states explicitly that “It shall
not be considered a breach of the Code of Ethics for a member to follow the
legal requirements for reporting child protection issues” (BCTF, 2020). Similarly,
in medical informatics, confidentiality can be compromised “by the legitimate,
appropriate and relevant data-needs of a free, responsible and democratic
society, and by the equal and competing rights of others” (IMIA, 2015).
Care
Care, which includes “compassion,
acceptance, interest and insight for developing students' potential” (OCT,
2020) is found in numerous ethical codes (CNA, 2017; CFA, 2019; IUPSYS, 2008; CPA,
2017; etc.) but is manifest differently in each code in which it appears. Contrasting with the OCT definition, for example, the Canadian Nurses Association discussion of the “provision of care” references speech and body language, building relationships, learning from “near misses”, adjusting priorities and minimizing harm, safeguarding care during job actions, and more. It is worth noting that the promotion of dignity means to “take into account their values, customs and spiritual beliefs, as well as their social and economic circumstances without judgment or bias” (CNA, 2017:12).
The National Council of Educational Research and Training is almost unique in asserting, in its explanatory notes, that “the demonstration of genuine love and affection by teachers
for their students is essential for learning to happen. Treating all children
with love and affection irrespective of their school performance and achievement
level is the core of the teaching learning process” (NCERT, 2010).
Other codes (e.g. CFA, 2019) adopt a more legalistic interpretation of ‘duty of care’, for example, that researchers must “prioritize data subject privacy above business objectives, be honest, transparent, and straightforward in all interactions, and respect the rights and well-being of data subjects” (IA, 2019). Meanwhile there is a sense of
‘care’ that means ‘diligence and rigor’; this is the sense intended in the
Nuremberg Code (USHM, 2020) and the American Medical Association (Riddick,
2003).
Competence and Authority
Many of the codes identify competence or
authority to practice in the profession as core values or principles (CFA, 2019;
IEEE, 2020: 7.8; IUPSYS, 2008; etc.). This is expressed in several ways:
members of the profession may be expected to perform in a competent manner, or
they may be required to remain within their domain of competence, or they may
be obligated to ensure that unqualified people do not practice the profession (NEA,
1975, as cited above).
For example, behaviour analysts are
expected to rely on scientific evidence and remain within the domain of their
competence (BACB, 2014:6). Similarly, the Nuremberg Code determines that
the researcher should be a qualified scientist and that the research ought to
have scientific merit and be based on sound theory and previous testing (USHM,
2020). And the CPA code (2017) requires that the practitioner be competent.
Sometimes what counts as competence is
spelled out in the code. For example, the Royal Society data science ethics in
government report (Drew, 2016) advises the use of robust data models in data
research. Provisions in the Open University code similarly state that the
modeling based on the data should be sound and free from bias, and that it
requires “development of appropriate skills across the organisation” (OU,
2014:4.4).
Codes sometimes require that only
authorized professionals perform the work. Accenture’s Universal Principles for
Data Ethics (Accenture, 2016:5) states “practitioners should accurately
represent their qualifications (and limits to their expertise).” This is
especially the case where expertise is more difficult to establish or where the
stakes are higher. The Guyana code of ethics for teachers, for example,
requires “honest representation of one’s own credentials” (Guyana, 2017) while
the Ontario Information and Privacy Commissioner Ontario states that “the
authority to employ intrusive surveillance powers should generally be
restricted to limited classes of individuals such as police officers” (Cavoukian,
2013).
Value and Benefit
While above we represented ‘the good that can be done’ as aspirational, that is, something ethical codes seek to accomplish, in the present case we view the same principle as a limit: the research or practice must produce a benefit in order to be ethical.
In some
cases, this benefit may be immediate and practical. For example, the Behavior
Analyst Certification Board requires that practitioners provide “effective
treatment” (BACB, 2014:6). It is arguable, as well,
that “health-care professionals, especially, have an obligation to distinguish
between remedies that represent the careful consensus of highly trained experts
and snake oil” (Kennedy, et.al., 2002).
In other cases the requirements are more
general (and more widely distributed). The Royal Society requires that
researchers “show clear user need and public benefit” (Drew, 2016). Similarly,
the Asilomar principles state that “AI technologies should benefit and empower
as many people as possible” and “the economic prosperity created by AI should
be shared broadly, to benefit all of humanity” (Asilomar, 2017). Fjeld (2020)
finds a principle of “promotion of human values,” and specifically, that “the
ends to which AI is devoted and the means by which it is implemented should
promote humanity's well being.”
In other cases, the requirement that a
benefit be shown is limited to requiring that practitioners demonstrate a
purpose for their work. The Barcelona Principles (2010) for example require
that researchers “specify purposes of data gathering in advance, and seek
approval for any new uses,” while the DELICATE principles require that universities
“Decide on the purpose of learning analytics for your institution” and “E-xplain:
Define the scope of data collection and usage” (Drachsler & Greller, 2016).
Non-Maleficence
The
principle of non-maleficence is an adaptation of the principle of “do no harm”
in the Hippocratic oath. This adaptation is necessary because harm is
unavoidable in many circumstances; the surgeon must sometimes harm in order to
heal, for example. Harm may occur in other professions as well; a teacher might
punish, a researcher might violate privacy, a defence contractor might develop
weapons.
So the
principle of non-maleficence, as developed for example by Beauchamp &
Childress (1992) means “avoiding anything which is unnecessarily or
unjustifiably harmful… (and) whether the level of harm is proportionate to the
good it might achieve and whether there are other procedures that might achieve
the same result without causing as much harm” (Ethics Centre, 2017). The
principle arguably also requires consideration of what the subject considers to be harm because, as Englehardt (1993) says, we
engage one another as moral strangers who need to negotiate moral arrangements
(Erlanger, 2002).
The definition of maleficence to be
avoided can be variably broad. For example, the AMA (2001) addresses not only
the nature and priority of patient care, but also “respect for law, respect of
a patient’s rights, including confidences and privacy.” The AMA’s Declaration
of Professional Responsibility also advocates “a commitment to respect human
life” which includes a provision to “refrain from crimes against humanity” (Riddick, 2003).
The
principle of non-maleficence is found in numerous ethical codes, and not only
medical ethics. For example, the Association for Computing Machinery states that “an essential aim of computing professionals is to minimize negative consequences of computing, including threats to health, safety, personal security, and privacy,” and notes that “examples of harm include unjustified physical or mental injury, unjustified destruction or disclosure of information, and unjustified damage to property, reputation, and the environment” (ACM, 2018).
Non-maleficence in research and data science includes being minimally intrusive (Drew, 2016), keeping data secure (ibid; also Raden, 2019: 9), and promoting “resilience to attack and security, fall back plan and general safety, accuracy, reliability and reproducibility… including respect for privacy, quality and integrity of data, and access to data” (AI HLEG, 2019). AI systems, says Fjeld (2020), should perform as intended and be secure from compromise (also Drachsler & Greller, 2016).
Beneficence
Another
of the principles defined by Beauchamp & Childress (1992), beneficence
should be understood as more than non-maleficence and distinct from value and
benefit. A professional demonstrates beneficence toward their client “not only by respecting their decisions and protecting them from
harm, but also by making efforts to secure their well-being.” Moreover,
“beneficence is understood in a stronger sense, as an obligation.” It’s
intended as a combination of “do no harm” and “maximize benefits and minimize
harm”, with the recognition that even the determination of what is harmful
might create a risk of harm (DHEW, 1978:6-7).
In a number of ethical codes, beneficence
can be thought of as “the principle of acting with the best interest of the
other in mind” (Aldcroft, 2012). This is more than merely the idea of doing good for someone; it is the idea that the role of the professional is to prioritize the best interest of their
client (BACB, 2015; AMA, 2001; CPA, 2017). The principle of beneficence is also
raised with respect to AI (Floridi,
et.al, 2018:16; Stevens & Silbey, 2014), however, in
the precise statement of these principles it is unclear how they should be applied. For example, should ‘the common good’ be included in the principle of beneficence? Should AI promote social justice, or merely be developed consistently with the principles of social justice?
Respect
The principle of respect is cited in
numerous ethical codes (AMA, 2001; IUPSYS, 2008; CPA, 2017; Dingwell, et.al., 2017;
etc.), for example, acting toward students with respect and dignity (BCTF,
2020), “respect for people” (TBS, 2011), “mutual respect” (Folan, 2020), “respect
for the composite culture of India among students” (NCERT, 2010), or “respect
for the rights and dignity of learners” (Stevens & Silbey, 2014). Though sometimes
paired with autonomy (DHEW, 1978:4, cited above) it is often presented quite
differently. The Ontario College of Teachers code states that respect includes
trust, fairness, social justice, freedom, and democracy (OCT, 2020).
Respect can also be thought of as promoting
“human dignity and flourishing”, which AI4People summarizes as “who we can become
(autonomous self-realisation); what we can do (human agency); what we can
achieve (individual and societal capabilities); and how we can interact with
each other and the world (societal cohesion)” (Floridi, et.al., 2018:7). The
last two ‘commandments’ of the Computer Ethics Institute’s Ten Commandments of
Computer Ethics recommend computer professionals “think about the social
consequences” and to “ensure consideration and respect for other humans” (CEI,
1992).
Democracy
Several ethical codes include ‘respect
for democracy’ among their values and principles; this can mean, variously,
respect for the idea of rule by the people, respect for the results of democratic choice (as, say,
found in public service ethics; TBS, 2011:1.1-1.2), and respect for democratic
values, such as justice and non-discrimination.
Democracy is also identified as both an
input and output of ethical codes; the NEA code (1975) is based on “the nurture
of the democratic principles,” while the Code of Professional Ethics for School
Teachers in India states that “every child has a fundamental right to receive
education of good quality,” where this education develops the individual
personality, faith in democracy and social justice, cultural heritage and
national consciousness (NCERT, 2010).
Justice and Fairness
Almost all the ethical codes consulted
refer to justice in one form or another. Here it is listed alongside
‘fairness’, as ever since John Rawls’s influential A Theory of Justice (Revised, 1999) the two concepts have been
linked in popular discourse, according to the principle ‘justice as fairness’.
As fairness, justice is cited frequently,
for example, in academic codes, as fairness to students, including especially
refraining from exploiting free academic labour, and ensuring credit is given
for any academic work they may have depended on (SFU, 1992) and viewing
academics “as role models (who) must follow a professional code of ethics” to
ensure “students receive a fair, honest and uncompromising education” from
teachers who “demonstrate integrity, impartiality and ethical behavior”
(Guyana, 2017).
Even viewed as ‘fairness’, however,
ambiguities remain. As the Belmont Report notes, the idea of justice, “in the
sense of ‘fairness in distribution’ or ‘what is deserved’” can be viewed from
numerous perspectives, each of which needs to be considered, specifically, “(1)
to each person an equal share, (2) to each person according to individual need,
(3) to each person according to individual effort, (4) to each person according
to societal contribution, and (5) to each person according to merit.” The
authors also note that exposing a disadvantaged group to risk is an injustice
(DHEW, 1978:6-7).
Fairness is also viewed as impartiality,
an avoidance of bias or arbitrary ruling. In journalism, for example, the
primary value is to describe the news impartially, “without fear or favour”,
as stated by New York Times “patriarch” Adolph Ochs (NYT, 2018). Similarly, the
High-Level Expert Group on Artificial Intelligence (AI HLEG, 2019) endorses
“diversity, non-discrimination and fairness - including the avoidance of unfair
bias, accessibility and universal design, and stakeholder participation.” And
the European University Institute opposes acts that are arbitrary, biased or
exploitative (EUI, 2019).
Justice, sometimes termed ‘natural
justice’ (CPA, 2017:11), can also be depicted in terms of rights (Stevens & Silbey, 2014; Asilomar, 2017; Access Now, 2018). That is how it appears in the Asilomar
declaration. The principles themselves reflect a broadly progressive social agenda,
“compatible with ideals of human dignity, rights, freedoms, and cultural
diversity,” recognizing the need for personal privacy, individual liberty, and
also the idea that “AI technologies should benefit and empower as many people
as possible” and “the economic prosperity created by AI should be shared
broadly, to benefit all of humanity.”
This interpretation of justice is also
expressed as an endorsement of diversity and prohibition of discrimination (Sullivan-Marx,
2020; Brandom, 2018; CPA, 2017:11; BACB, 2014; etc.) based on various social,
economic, cultural and other factors (this list varies from code to code). The
National Union of Journalists code, for example, states explicitly that
journalists should produce “no material likely to lead to hatred or
discrimination on the grounds of a person’s age, gender, race, colour, creed,
legal status, disability, marital status, or sexual orientation” (NUJ, 2011).
Justice, viewed from either the
perspective of fairness or rights, can be expanded to include redress for
current or past wrongs, or to prevent future wrongs. As early as 1973, the U.S.
Department of Health, Education and Welfare, observing abuses in data
collection, proposed a ‘Code of Fair Information Practice’. The intent of the
code was to redress this imbalance and provide some leverage for individuals
about whom data is being collected. The Toronto Declaration similarly calls for
“clear, independent, and visible processes for redress following adverse
individual or societal effects” (Brandom, 2018).
Depending on one’s perspective, the
principle of justice may be listed together with, or apart from, any number of
other principles, including fairness, rights, non-discrimination, and redress.
That we have listed them here in one section does not presuppose that we are
describing a single coherent core value or principle; rather, what we have here
is a family of related and sometimes inconsistent principles that are often
listed in the popular discourse as a single word, such as ‘justice’, as though
there is some shared understanding of this.
Accountability and Explicability
The principles of accountability and explicability arise differently in
computing and AI codes than they do in other ethical codes. In the case of
academic and medical research, accountability is typically delegated to a
process undertaken by a research ethics board (REB). Similarly, the Information
and Privacy Commissioner of Ontario asserts that compliance with privacy rules
and restrictions should be subject to independent scrutiny and that “the state
must remain transparent and accountable for its use of intrusive powers through
subsequent, timely, and independent scrutiny of their use” (Cavoukian, 2013).
In other disciplines, a range of additional processes support practices such as
predictability, auditing and review (Raden, 2019: 9). As the U.S. Department of
Health, Education and Welfare argued, data should only be used for the
purposes for which it was collected. And this information, however used, should
be accurate; there needs to be a way for individuals to correct or amend a
record of identifiable information about themselves, and organizations must
assure the reliability of the data and prevent misuse of the data. These, write
the authors, “define minimum standards of fair information
practice” (Ware, et.al., 1973:xxi).
In
digital technology, accountability also raises unique challenges. The AI4People code, for example, adds a fifth principle to the four
described by Beauchamp
& Childress (1992), “explicability, understood as
incorporating both intelligibility and accountability” where we should be able
to obtain “a factual, direct, and clear explanation of the decision-making
process” (Floridi et al. 2018). As Fjeld (2020) summarizes, “mechanisms must
be in place to ensure AI systems are accountable, and remedies must be in place
to fix problems when they're not.” Also, “AI systems should be designed and
implemented to allow oversight.”
Finally, says Fjeld, “important decisions
should remain under human review.” Or as Robbins (2019) says, “‘Meaningful human
control’ is now being used to describe an ideal that all AI should achieve if
it is going to operate in morally sensitive contexts.” As Robbins argues, “we
must ensure that the decisions are not based on inappropriate considerations.
If a predictive policing algorithm labels people as criminals and uses their
skin color as an important consideration, then we should not be using that
algorithm.”
Openness
Many of the codes of ethics, especially
those dedicated to research, express openness as a core value, though often
with conditions attached. The Sorbonne Declaration, for example, states “research
data should, as much as possible be shared openly and reused, without
compromising national security, institutional autonomy, privacy, indigenous
rights and the protection of intellectual property” (Sorbonne Declaration,
2020). Similarly, the Declaration of Helsinki states “researchers have a duty
to make publicly available the results of their research on human subjects and
are accountable for the completeness and accuracy of their reports” (WMA,
2013).
Another project, FAIRsFAIR, is based on
the FAIR Guiding Principles (GoFAIR, 2020) for scientific data management
and stewardship (Wilkinson, et.al., 2016). The principles (and the acronym
derived from them) are “Findability, Accessibility, Interoperability, and
Reusability”, which “serve to guide data producers and publishers as they
navigate around these obstacles, thereby helping to maximize the added-value
gained by contemporary, formal scholarly digital publishing.”
In many cases, openness is described in
terms of access serving the public good. The Asilomar Convention includes a
principle of openness representing learning and scientific inquiry as “public
goods essential for well-functioning democracies” (Stevens & Silbey, 2014).
Citing the Research Data Alliance’s 2014 “The Data Harvest Report”, the
Concordat Working Group (2016) authors write “the storing, sharing and re-use
of scientific data on a massive scale will stimulate great new sources of
wealth” (Genova, et.al., 2014).
Openness is also described in some
principles as openness of access to services. The IFLA (2019), for example,
expresses “support for the principles of open access, open source, and open
licenses” and “provision of services free of cost to the user.” The Canadian
Nurses Association code includes “advocating for publicly administered health
systems that ensure accessibility, universality, portability and
comprehensiveness in necessary health-care services” (CNA, 2017).
Openness is also described in some
principles as ‘transparency’ of methods and processes (IA, 2019; Raden, 2019: 9;
Cavoukian, 2013; CSPL, 1995) in a way that often references accountability (as
referenced above). The Accenture code, for example, urges professionals to foster
transparency and accountability (Accenture, 2016:5). The High-Level Expert
Group on Artificial Intelligence (AI HLEG) also advocates transparency, which
includes traceability, explainability and communication.
Finally, openness can be thought of as
the opposite of secrecy, as mentioned in the Department of Health, Education
and Welfare report, stating that individuals should have a way to find out what
information about them is in a record and how it is used (Ware, et.al., 1973).
It is also the opposite of censorship (IFLA, 2019; ALA, 2008).
Common Cause / Solidarity
Many codes of ethics also explicitly
endorse an advocacy role for professionals to promote the values stated in the
code. The AMA Declaration of Professional Responsibility, for example, asserts
a commitment to “advocate for social, economic, educational, and political
changes that ameliorate suffering and contribute to human well-being” (Riddick,
2003).
The codes vary from advice to “teach what
uplifts and unites people and refuse to be, in any way whatsoever, the
propagandists of a partisan conception” (Soleil, 1923) to establishing a shared
vision of teaching and to “to identify the values, knowledge and skills that
are distinctive to the teaching profession” (OCT, 2016) to expressing
solidarity with other members of the profession, for example, stating that
criticism of other members will be conducted in private (BCTF, 2020).
Learning Analytics Issues Addressed
Obligations and Duties
As Feffer (2017) observes, our duties
often conflict. For example, we may read, “As a representative of the company,
you have one set of responsibilities. As a concerned private citizen, you have
other responsibilities. It's nice when those converge, but that's not always
the case.”
We might think, for example, that a
practitioner always has a primary duty to their client. Thus a doctor, lawyer,
or other professional tends to the interests of the client first. A look at
practice, however, makes it clear this is not the case. A doctor may (in some
countries) refuse to perform a service if a patient cannot pay. An educator may
be required to report on a student’s substance abuse problem or immigration
status.
And often, the locus of duty is not
clear. For example, if a company is skewing the data used in order to sway a
model toward a particular set of outcomes, does an employee have a duty to
disclose this fact to the media? There may be some cases where a company is
legally liable for the quality of its analytics, while in other cases (such as
marketing and promotion) the requirement is less clear.
If we widen our consideration beyond
simple transactions, the scope of our duties widens as well. Our duty to travel
to Africa to support a learning program may not conflict with a duty to preserve
the environment for people who have not yet been born. (Saugstad, 1994;
Wilkinson & Doolabh, 2017) Or our desire to eat meat may conflict with what
activists like Peter Singer might consider a duty to animals (Singer, 1979).
In this section we look briefly at the
different entities to which different codes argue that we owe allegiance,
loyalty, or some other sort of obligation or duty.
Self
Most ethical codes disavow serving or
benefitting oneself, and where the self is concerned, it is typically in the
service of the wider ethic, for example, our obligations as role models
(Guyana, 2017). The Nolan principles, for example, make clear that the ethic
of a member of the public service is selflessness (CSPL, 1995), though there is
occasional acknowledgement of a duty to self (AMA, 2001).
And yet, many of the ethical principles
described in the codes could be construed as the cultivation of a better self,
for example, one who is honest, trustworthy, objective and open, and who acts
with integrity (this list varies from code to code) (IMIA, 2015; CSPL, 1995;
CPA, 2017; IA, 2017; AITP, 2017; etc.), as well as “self-knowledge
regarding how their own values, attitudes, experiences, and social contexts
influence their actions, interpretations, choices, and recommendations” (IMIA,
2015).
And some principles might be thought of
as promoting some desirable attributes of self, even if referring to these in
others: autonomous self-realisation, human agency, and individual capabilities,
for example (Floridi, et.al., 2018:7), or to “participate in programmes of
professional growth like in-service education and training, seminars, symposia,
workshops, conferences, self-study etc.” (Mizoram, 2020).
Less Fortunate
We included a place-holder for duties or
obligations to the less fortunate because of an earlier reference to Peter
Singer’s (2009) The Life You Can Save.
Statements of any obligation toward the poor or less fortunate are impossible
to find in any of the ethical codes, however, with the exception of references
to specific clients of a profession (as discussed below).
That is not to say that the less
fortunate are completely omitted from ethical codes. As far back as Hammurabi’s
Code is the edict, “the strong may not oppress the weak" (Gilman,
2005:4n3). At the same time, the resistance to considering such matters is
telling, as summarized here: “Advocates have urged that considerations for the
poor, illegal immigrants, rain forests, tribal rights, circumcision of women,
water quality, air quality and the right to sanitary facilities be put into
codes for administrators. As important
as these issues might be they distort the purpose of ethics codes to the point
that they are confusing and put political leadership in the position of quietly
undermining them” (Ibid:47).
Student
Ethical codes for teachers or academics
often specify obligations or duties to students, though in different ways. For
example, Le code Soleil assigns a
three-fold responsibility to teachers: to train the individual, the worker, and
the citizen. Education, according to the code, “is the means to enable all
children, whatever their diversity, to reach their maximum potential” (Soleil,
1923). The National Education Association code urges teachers to “strive to
help each student realize his or her potential as a worthy and effective member
of society” (NEA, 1975). Further, the Open University code asserts that “students
should be engaged as active agents in the implementation of learning analytics
(e.g. informed consent, personalised learning paths, interventions)” (OU,
2014:4.3.2).
Parent or Guardian, Children
Parents
stand in two roles in codes of ethics. The first is to act as a proxy for
children with respect to matters of consent (Kay et al. 2012). The second is as
a special interest that needs to be protected; for example, an Indian code of
ethics advises teachers to “refrain from doing any
thing which may undermine students’ confidence in their parents or guardians” (NCERT,
2010; Mizoram, 2020) and with whom teachers need to maintain an open and
trusting relationship (OCT, 2020).
Data collection from children began early in the field of digital media, with
the FTC noting that “the practice is widespread and includes the collection of
personal information from even very young children without any parental
involvement or awareness” (Pitofsky, et.al., 1998:5). It is worth noting that
the principles are designed specifically to protect consumers, and that they
are addressed specifically toward industry (Ibid:ii).
In the IEEE code there is a detailed
section on ‘working with children’ that contains provisions on safety and
security, confidentiality, and whistle-blowing, noting specifically that
“Adults have a responsibility to ensure that this unequal balance of power is
not used for their personal advantage” (IEEE, 2017). Finally, “the Information
Technology Industry Council has joined the conversation around children’s
rights with a focus on emerging technologies, publishing a list of principles
to guide the ethical development of artificial intelligence (AI) systems” (UC
Berkeley, 2019).
Client
In many ethical codes the first and often only duty is to the client. This is
especially the case for service professions such as finance, accounting and
legal representation, where this is expressed as fiduciary duties, which are
“special obligations between one party, often with power or the ability to
exercise discretion that impacts on the other party, who may be vulnerable”
(Wagner Sidlofsky, 2020).
In health care the needs of the client are often paramount. For example, the
Declaration of Helsinki (WMA, 2013) states, “The health of my patient will be
my first consideration,” and cites the International Code of
Medical Ethics in saying, “A physician shall act in the patient's best interest
when providing medical care.” It is thus “the duty of the physician to promote
and safeguard the health, well-being and rights of patients, including those
who are involved in medical research” (Ibid). In cases where multiple duties
are owed, the client may be assigned priority, as in the case of medical research
codes. “When research and clinical needs conflict, prioritize the welfare of
the client” (BACB, 2014).
There is ambiguity in the concept of
client, particularly with respect to the idea that the duty is to the client
because the client is the one paying the bills. When care is paid for by
insurance, government programs, or corporate employers, the service recipient
and the payer may be two distinct parties. Similarly, in digital media, costs
may be paid by advertisers or publishers, who may then assert moral priority
(Done, 2010). However, as Luban (2018:187)
argues, “’who pays the whistler calls the tune’ is not a defensible moral
principle.”
Research Subject
Research
ethics codes commonly describe a duty of the researcher to the research subject,
beginning with the Nuremberg Principles and established throughout the practice
thereafter. The responsibilities to research participants
include informed consent, transparency, right to withdraw, reasonableness of incentives,
avoidance and mitigation of harm arising from participation in research, and privacy
(BERA, 2018).
In the field of data research and
analytics this principle is often retained. Accenture’s universal principles
for data ethics, for example, state that the highest priority is “the person
behind the data” (Accenture, 2016:5). Similarly, the Insights Association code
(2019) states “respect the data subjects and their rights.” In journalism, as
well, “ethical journalism treats sources, subjects, colleagues and members of
the public as human beings deserving of respect” (SPJ, 2014).
Employer or Funder
Public service employees are, not surprisingly, obligated to their employer.
Members of the public service are tasked with “loyally carrying out
the lawful decisions of their leaders and supporting ministers in their
accountability to Parliament and Canadians” (TBS, 2011:1.1-1.2).
The same sometimes holds true in the case
of ethical codes for teachers. A teacher may be required to “cooperate with the
head of the institution and colleagues in and outside the institution in both
curricular and co-curricular activities”, to “recognize the management as the
prime source of his sustainable development” (Mizoram, 2020), or to “abide by
the rules and regulations established for the orderly conduct of the affairs
of the University” (SFU, 1992).
The same may apply for employees in the
private sector. Information technology professionals, for example, may be asked
“to guard my employer's interests, and to advise him or her wisely and
honestly” (AITP, 2017). Journalists, as well, are subject to obligations to the
newspaper (NUJ, 1936). Even funders may make a claim on the duties of the
researcher (Dingwall, et.al., 2017).
Colleagues, Union or Profession
Professional
associations and unions frequently include loyalty to the professional
association or union as a part of the code of ethics, either explicitly, or
expressed as an obligation owed to colleagues (NUJ, 1936; AITP, 2017; SFU,
1992; NEA, 1975; etc.). This is related to the idea that members are forming a
voluntary association. “If a member freely declares (or professes) herself to
be part of a profession, she is voluntarily implying that she will follow these
special moral codes. If the majority of members of a profession follow the
standards, the profession will have a good reputation and members will
generally benefit” (Weil, 2008).
Stakeholders
The term ‘stakeholders’ is sometimes used
without elaboration to indicate the presence of a general duty or obligation
(BERA, 2018). Fjeld (2020) asserts
for example that “developers of AI systems should make sure to consult all
stakeholders in the system and plan for long-term effects.” The Open University policy is based on
“significant consultation with key stakeholders and review of existing practice
in other higher education institutions and detailed in the literature” (OU,
2014:1.2.6). Similarly, one of the DELICATE principles (Drachsler
& Greller, 2016) requires researchers “talk to stakeholders and give
assurances about the data distribution and use.”
What is a stakeholder? The term expands on the
concept of ‘stockholder’ and is intended to represent a wider body of interests
to which a company’s management ought to be obligated (SRI, 1963). Freeman (1984:25)
defines a stakeholder as “any group or individual who can affect, or is affected
by, the achievement of a corporation’s… or organization’s purpose… or performance”.
He bases the concept on “the interconnected relationships between a business and
its customers, suppliers, employees, investors, communities and others who have a
stake in the organization” (Ledecky, 2020). There are many definitions of
‘stakeholder’ (Miles, 2017:29) and no principled way to choose between them.
Publishers and Content Producers
Librarians
are subject to special obligations to publishers, according to some codes. For
example, “Librarians and other information workers'
interest is to provide the best possible access for library users to
information and ideas in any media or format, whilst recognising that they are
partners of authors, publishers and other creators of copyright protected
works” (IFLA, 2012).
This responsibility is extended in other
fields as a prohibition against plagiarism (EUI, 2019; BACB, 2014; SPJ, 2014;
NUJ, 2011; NYT, 2017; etc.) and taking credit for the work of others (AITP,
2017; IEEE, 2020; BACB, 2014; etc.).
Society
References
to a responsibility to society are scarce, but they do exist. BERA (2018)
argues for a responsibility to serve the public
interest, and in particular, responsibilities for publication and dissemination.
The ‘Nolan principles’, (CSPL, 1995) state “Holders of public office are
accountable to the public for their decisions and actions and must submit
themselves to the scrutiny necessary to ensure this.”
In the field of data analytics, the last
two of the Computer Ethics Institute’s ‘Ten Commandments’ recommend computer
professionals “think about the social consequences” and “ensure
consideration and respect for other humans”
(CEI, 1992), though as Metcalf (2014) notes, “it appears to be the only
computing ethics code that requires members to proactively consider the broad
societal consequences of their programming activities” (my italics).
Subsequently, the Royal Society (Drew, 2016) recommended data scientists “be
alert to public perceptions.”
Law and Country
Although it has been established that there is not an ethical duty to obey an
unethical law, a number of ethical codes nonetheless include respect for the
law in one way or another, for example, in reporting child protection issues
(BCTF, 2020), compliance with law as an ‘overarching principle’ (IA, 2019), or
the requirement to “operate within the legal frameworks (and) refer to the
essential legislation” (Drachsler & Greller, 2016).
Meanwhile, the Association of Information
Technology Professionals Code of Ethics asserts “I shall uphold my nation and
shall honor the chosen way of life of my fellow citizens,” though it is no
longer extant and as Metcalf (2016) comments, “it is decades old and has some
anachronisms that clash with globalized ethos of computing today.” Despite
this, it was cited (in EDUCAUSE Review) as recently as 2017 (Woo, 2017).
Environment
The
environment is rarely mentioned in ethical codes, though it appears in a
statement of obligations to “society, its members, and
the environment surrounding them” (ACM, 2018) and as “societal and
environmental wellbeing - including
sustainability and environmental friendliness, social impact, society and
democracy” (AI HLEG, 2019).
Bases for Values and Principles
What grounds these codes of ethics? On
what basis do their authors assert that this
code of ethics, as opposed to some hypothetical alternative, is the code of
ethics to follow? A typical explanation might be that “An individual’s
professional obligations are derived from the profession and its code,
tradition, society's expectations, contracts, laws, and rules of ordinary
morality” (Weil, 2008), but a closer examination raises as many questions as it
answers.
Universality
Many codes simply
assert that the principles embodied in the code are universal principles.
Universality may be seen as a justification for moral and ethical principles; if the
principle is believed by everyone, then arguably it should be believed here.
For example, the
Universal Declaration of Ethical Principles for Psychologists asserts, “The
Universal Declaration describes those ethical principles that are based on
shared human values” (IUPSYS, 2008). It later asserts “Respect for the dignity
of persons is the most fundamental and universally found ethical principle
across geographical and cultural boundaries, and across professional disciplines”
(Ibid). So we see here universality being asserted as a foundation underlying a
set of ethical principles. Similarly, the Asilomar Convention states that “Virtually
all modern societies have strong traditions for protecting individuals in their
interactions with large organizations… Norms of individual consent, privacy,
and autonomy, for example, must be more vigilantly protected as the
environments in which their holders reside are transformed by technology” (Stevens
& Silbey, 2014).
Additional studies, such as Fjeld, et.al. (2020), suggest that we have reached
a consensus on ethics and analytics. We argue that this is far from the case;
the appearance of ‘consensus’ is misleading. For example, in the Fjeld, et.al.
survey, though 97% of the studies cite ‘privacy’ as a principle, consensus is
much smaller if we look at it in detail (Ibid:21), and the same holds if we
look at the others, e.g. accountability (Ibid:28). Assertions of universality
made elsewhere (for example: Pitofsky, 1998:7; Singer & Vinson, 2002; CPA,
2017; Raden, 2019: 11) can be subject to similar criticisms.
In their examination
of teacher codes of ethics, Maxwell and Schwimmer (2016) found “analysis did
not reveal an overlapping consensus on teachers' ethical obligations.” Nor are
they alone in their findings; citing Campbell (2008:358) they observe that
“despite extensive research on the ethical dimensions of teaching, scholars in
the field do not appear to be any closer to agreement on ‘the moral essence of
teacher professionalism’.” Similarly, Wilkinson (2007:382) “argues that the
teaching profession has failed ‘to unite around any agreed set of
transcendental values which it might serve’.” And van Nuland & Khandelwal
(2006:18) report “The model used for the codes varies greatly from country to
country.” The selection below is a sample; many more codes may be viewed on the
ETICO website (IIEP, 2020).
Fundamental Rights
The High-Level Expert Group on Artificial Intelligence cites four ethical
principles, “rooted in fundamental rights, which must be respected in order to
ensure that AI systems are developed, deployed and used in a trustworthy
manner” (AI HLEG, 2019).
As noted above, the Access Now report specifically adopts a human rights
framework: “The use of international human rights law and its well-developed
standards and institutions to examine artificial intelligence systems can
contribute to the conversations already happening, and provide a universal
vocabulary and forums established to address power differentials” (Access Now,
2018:6).
The Toronto
Declaration “focuses on the obligation to prevent machine learning systems from
discriminating, and in some cases violating, existing human rights law. The
declaration was announced as part of the RightsCon conference, an annual
gathering of digital and human rights groups” (Brandom, 2018).
Nonetheless, it is
not clear what these fundamental rights are. Their statement in documents such
as the U.S. Bill of Rights, the Canadian Charter of Rights and Freedoms, or the
Universal Declaration of Human Rights, is very different. Is the right to bear
arms a fundamental right? Is the right to an education a fundamental right?
Fact
Arguments drawing
from statements of fact about the world are sometimes used to support ethical
principles. For example, the Universal Declaration of Ethical Principles for
Psychologists asserts, “All human beings, as well as being individuals, are
interdependent social beings that are born into, live in, and are a part of the
history and ongoing evolution of their peoples... as such, respect for the
dignity of persons includes moral consideration of and respect for the dignity
of peoples” (IUPSYS, 2008).
Against such
assertions of fact the “is-ought” problem may be raised. As David Hume (1739)
argued, moral arguments frequently infer from what ‘is’ the case to what
‘ought’ to be the case, but “as this ought, or ought not, expresses some new
relation or affirmation, 'tis necessary that it should be observed and
explained; and at the same time that a reason should be given” (Hume,
1888:469). Such ‘oughts’ may be supported with reference to goals or
requirements (see below), or with reference to institutional facts, such as
laws (Searle, 1964).
Balancing Risks and Benefits
The AI4People declaration starts from the premise that “An ethical framework
for AI must be designed to maximise these opportunities and minimise the
related risks” (Floridi, et.al., 2018:7). Similarly, the Concordat Working
Group (2016) document balances the openness of data with the need to manage
access “in order to maintain confidentiality, protect individuals’ privacy,
respect consent terms, as well as managing security or other risks.”
The balancing of
risks and benefits is a broadly consequentialist approach to ethics and
therefore results in a different calculation in each application. For example,
the balancing of risk and benefit found in the Common Rule is focused more
specifically on biomedical research, and it has to be asked, is biomedicine the
ethical baseline? “Not all research has the same risks and norms as
biomedicine… there has remained a low-simmering conflict between social
scientists and IRBs. This sets the stage for debates over regulating research
involving big data” (Metcalf, 2016).
It also requires an
understanding of what the consequences actually are. Four of the five principles recommended by the House of Lords
Select Committee on AI represent a consequentialist approach (Clement-Jones, et.al, 2018: para 417). But
what are those consequences? The
Committee quotes the Information Commissioner’s Office (ICO) as stating that
there was a “need to be realistic about the public’s ability to understand in
detail how the technology works”, and it would be better to focus on “the
consequences of AI, rather than on the way it works”, in a way that empowers individuals
to exercise their rights (Ibid: para 51), but this may be unrealistic.
And perhaps ethics isn’t really a case of balancing competing interests. The
Information and Privacy Commissioner in Ontario (Cavoukian, 2013) asserts that
“a positive-sum approach to designing a regulatory framework governing state
surveillance can avoid false dichotomies and unnecessary trade-offs,
demonstrating that it is indeed possible to have both public safety and
personal privacy. We can and must have both effective law enforcement and
rigorous privacy protections.”
Requirements of the Profession
A requirement is a
statement about what a person must believe, be or do in order to accomplish a
certain objective or goal. For example, the Universal Declaration of Ethical
Principles for Psychologists asserts, “competent caring for the well-being of
persons and peoples involves working for their benefit and, above all, doing no
harm… (it) requires the application of knowledge and skills that are
appropriate for the nature of a situation as well as the social and cultural
context” (IUPSYS, 2008). Similarly, the American Library Association sees its
role as requiring “a special obligation to ensure the free flow of information
and ideas to present and future generations” (ALA, 2008). The IFLA similarly
argues that “librarianship is, in its very essence, an ethical activity
embodying a value-rich approach to professional work with information” (IFLA,
2012).
The same document
also later asserts that “Integrity is vital to the advancement of scientific
knowledge and to the maintenance of public confidence in the discipline of
psychology,” which is the same type of argument; however, the objectives are
much less clearly moral principles: the “advancement of scientific knowledge”
and “the maintenance of public confidence.” Such arguments often proceed
through a chain of requirements; IUPSYS (2008) continues, for example, to argue
that “Integrity is based on honesty, and on truthful, open and accurate
communications.”
Such principles may
be expressed in two ways: either derived or conditional. The principle is
derived if the antecedent is already an ethical principle. In the first IUPSYS
example above, “competent caring for the well-being of persons and
peoples” may have been previously established as an ethical principle, from
which the derived principle ‘working for their benefit’ is also established.
The principle may be expressed as a conditional that describes what is entailed
on (say) joining a profession: if one is engaged in competent caring for the
well-being of persons and peoples then this requires working for their benefit.
Against such
assertions of requirements, several objections may be brought forward. The
first is to argue that the requirement does not actually follow from the
antecedent; one might argue, for example, that competent caring does not entail
working for the person’s benefit; it may only involve following proper
procedures without regard to the person’s benefit. Additionally, one might
argue that the antecedent has not in fact been established; for example, one
might argue that being a psychologist doesn’t involve caring at all, and might
only involve addressing certain disruptions in human behaviour. A criminal
psychologist might take this stance, for example.
Social Good or Social Order
Social good, however
defined, may be the basis of some ethical principles. The preamble to the
Society for Professional Journalists (SPJ) code of ethics states that the
primary function of journalism is to inform the
public and to serve the truth, because “public enlightenment is the forerunner
of justice and the foundation of democracy” (SPJ, 2014).
A basis in social order,
however, invites relativism. People’s ethical judgements are relative (Drew,
2016). “People’s support is highly
context driven. People consider acceptability on a case-by-case basis, first
thinking about the overall policy goals and likely intended outcome, and then
weighing up privacy and unintended consequences” (Ibid). This relativism is
clear in a statement from a participant: “Better that a few innocent people are
a bit cross at being stopped, than a terrorist incident - because lives are at
risk.” And this relativism often reflects people’s own interests: “a direct
personal benefit (e.g. giving personalized employment advice), benefit to a local
community, or public protection” (Ibid).
‘Social order’ can
be construed to mean national interest. We see this in ethics statements
guiding public service agencies and professionals. For example, Russell T.
Vought issued a memo asserting that “Office of Management and Budget (OMB)
guidance on these matters seeks to support the U.S. approach to free markets,
federalism, and good regulatory practices (GRPs), which has led to a robust
innovation ecosystem” (Vought, 2020). The resulting ‘Principles for the
Stewardship of AI Applications’ included such things as public participation,
public trust, and scientific integrity, but also included risk assessment and
management along with benefits and costs. The document also urged a
non-regulatory approach to ethics in AI. A different society might describe
ethics in government very differently.
Fairness
A principle of
‘fairness’ is frequently cited with no additional support or justification.
Often, fairness is
defined as essential to the ethics of the profession. The New York Times, for
example, “treats its readers as fairly and openly as possible” and also “treats
news sources just as fairly and openly as it treats readers” (NYT, 2018).
Fairness may be
equated with objectivity. For example, a journalist may say, “it is essential
that we preserve a professional detachment, free of any whiff of bias” (NYT,
2018).
While acknowledging
that “there is nothing inherently unfair in trading some measure of privacy for
a benefit,” the authors of a 1973 report for the U.S. Department of Health,
Education and Welfare addressing the then nascent practice of electronic data
management noted that “under current law, a person's privacy is poorly
protected against arbitrary or abusive record-keeping practices” (Ware, et al.,
1973). Hence they proposed what they called a ‘Code of Fair Information
Practice’.
Epistemology
The advancement of
knowledge and learning is often considered to be in and of itself a moral good.
For example, it is used in the Universal Declaration of Ethical Principles for
Psychologists to justify the principle of integrity: “Integrity is vital to the
advancement of scientific knowledge and to the maintenance of public confidence
in the discipline of psychology” (IUPSYS, 2008). Epistemological justification
is also found in journalistic ethics: “relationships with sources require the
utmost in sound judgment and self discipline to prevent the fact or appearance
of partiality” (NYT, 2018). And in the case of AI ethics, it may be simply
pragmatic: “our ‘decision about who should decide’ must be informed by
knowledge of how AI would act instead of us” (Floridi, et al., 2018:21).
Against this
argument, one may simply deny that knowledge and learning are moral goods,
holding instead that they are simply things that people do, and can often be
harmful (as in “curiosity killed the cat”). More often, we see such responses
couched in specific terms,
asserting that seeking some particular knowledge is not inherently good, for
example, knowledge related to advanced weapons research, violations of personal
confidentiality, and a host of other real or imagined harms. Seneca, for example, argued “This desire to
know more than is sufficient is a sort of intemperance” (Letter 88:36).
Trust
In order to do any
number of things, you need trust, or some of the components of trust. As a
result, the elements of trust in themselves can be cited as justification for
moral principles. For example, the Universal Declaration of Ethical Principles
for Psychologists states, “Integrity is vital... to the maintenance of public
confidence in the discipline of psychology” (IUPSYS, 2008). Chartered Financial
Analysts seek to “promote the integrity and viability of the global capital
markets for the ultimate benefit of society” (CFA, 2019).
Similar principles
underlie ethics in journalism; “integrity is the cornerstone of a journalist’s
credibility” (SPJ, 1996). Similarly, the New York Times asserts, “The
reputation of The Times rests upon such perceptions, and so do the professional
reputations of its staff members.” If we here interpret ‘public confidence’ as
an aspect of trust, we see how the authors are appealing to the principle of
trust to support the assertion that integrity is a moral principle.
Against this it may
be argued that trust is neither good nor bad in and of itself, and indeed, that
trust may be abused in certain cases, which could make measures that promote
trust also bad. Moreover, it could be argued that trust is too fragile a
foundation for moral principles, as it may be broken even without ill
intent. Further, it may be argued that trustless systems are in fact morally
superior, because they do not create the possibility that trust may be
breached, thus preserving the integrity of whatever it was that trust was
intended to support.
Defensibility
Another way to
define an ‘ethical principle’ is to say that it is descriptive of ‘conduct that
you (or your organization) would be willing to defend’. For example, the
National Union of Journalists code of conduct (NUJ, 2011) offers “guidance and
financial support of members who may suffer loss of work for conforming to
union principles.”
“Through years of
courageous struggle for better wages and working conditions its pioneers and
their successors have kept these aims in mind, and have made provision in union
rules not only for penalties on offenders, but for the guidance and financial
support of members who may suffer loss of work for conforming to union
principles” (NUJ, 1936).
Defensibility may also include a notion of burden or onus. Responding to the
U.S. White House ‘Guidance for Regulation of Artificial
Intelligence Applications’, the American
Academy of Nursing argued for a less business-focused assessment of the risks
and benefits of AI, saying “federal agencies should broaden the concept around
use of AI related social goals when considering fairness and non-discrimination
in healthcare.” They also urged that “federal agencies consider patient,
provider, and system burden in the evaluation of AI benefits and costs” and
“include data accuracy, validity, and reliability” in this assessment
(Sullivan-Marx, 2020).
Results of the Study
Having studied a number of codes of ethics, in the light of the
applications of analytics and the ethical issues considered above, we can
assert the following.
1. None of the statements address all of the issues in learning analytics
extant in the literature, and arguably, all of these statements, taken
together, still fail to address all these issues.
2. Those issues that they do address, they often fail to actually resolve.
Often the principles state what should be considered, but leave open what
the resolution of that consideration should be.
3. There are legal aspects to analytics, and there are ethical aspects, and
there is a distinction between the two, though this distinction is not
always clear.
4. Although there is convergence around some topics of interest, there is no
consensus with respect to the ethics involved.
5. In fact, there are conflicts, both between the different statements of
principles and often between the principles themselves (often described as
a need to ‘balance’ competing principles).
6. Even were there consensus, it is clear that this would be a minimal
consensus, and that important areas of concern addressed in one domain
might be entirely overlooked in another domain.
7. Ethical principles and their application vary from discipline to
discipline, and from culture to culture.
8. There is no common shared foundation for the ethical principles described.
As we will see below, these statements of principles select on an ad hoc
basis from different ethical ideas and traditions.
9. Often these principles include elements of monitoring and enforcement, thus
raising the question of why or for what reason an individual would adhere
to the ethical principle stated.
Concluding Remarks
It is premature (if it is possible at
all) to talk about “the ethics of such and such” as though we have solved
ethics. There are multiple perspectives on ethics, and these are represented in
the very different ethical codes from various disciplines. Approaches based on
simple principles, whether appeals to consequences or statements of rights and
duties, fail to address the complexity of ethics, especially as regards
learning and analytics. The assertion of a universal nature of ethics takes
into account neither context and particular situations nor the larger
interconnected environment in which all this takes place.
Additionally, approaches based on simple principles
don’t take into account how analytics themselves work. Analytics systems are
not based on rules or principles; they are statistical, using techniques such
as clustering and regression. As such, their input is going to be complex, and they will produce unexpected
consequences in a way that reflects the complexity of humans and human society.
There is an argument, with which we are
sympathetic, that when we ask ethical questions, such as “what makes so-and-so
think it would be appropriate to post such-and-such?” we are not looking for a
single answer, but a complex of factors based on individual identity, society,
circumstances and perspective. This suggests an ethics based on different
objectives - not ‘rights’ or ‘fairness’ but rather things like a sense of
compassion, or a philosophical perspective that takes a relational and
context-bound approach toward morality and decision making, as found, for
example, in work based in conviviality or the ethics of care.
References
Academy
of Social Sciences. (2015). Academy Adopts Five Ethical Principles for
Social Science Research.
https://www.acss.org.uk/developing-generic-ethics-principles-social-science/academy-adopts-five-ethical-principles-for-social-science-research/
Accenture.
(2016). Building digital trust: The role of data ethics in the digital age
(p. 12). Accenture.
https://www.accenture.com/_acnmedia/PDF-22/Accenture-Data-Ethics-POV-WEB.pdf#zoom=50
Access
Now. (2018). Human Rights in the Age of Artificial Intelligence.
https://www.accessnow.org/cms/assets/uploads/2018/11/AI-and-Human-Rights.pdf
Ackerman,
E. (2019, November 10). My Fight With a Sidewalk Robot. CityLab.
https://www.citylab.com/perspective/2019/11/autonomous-technology-ai-robot-delivery-disability-rights/602209/
Adobe
Experience Cloud Team. (2020, January 23). Artificial Intelligence Unlocks The
True Power Of Analytics.
https://cmo.adobe.com/articles/2018/8/ai-unlocks-the-true-power-of-analytics.html
Alan
Faisandier, Ray Madachy, & Rick Adcock. (2019, October 28). System
Analysis—SEBoK. Guide to the Systems Engineering Body of Knowledge [SEBoK].
https://www.sebokwiki.org/wiki/System_Analysis
Alankar
Karpe. (2015, August 3). Being Ethical is Profitable. Project Management
Institute [PMI].
https://www.projectmanagement.com/articles/300210/Being-Ethical-is-Profitable
Aldcroft,
A. (2012, July 13). Measuring the Four Principles of Beauchamp and Childress. BMC
Series Blog.
https://blogs.biomedcentral.com/bmcseriesblog/2012/07/13/measuring-the-four-principles-of-beauchamp-and-childress/
Allstate.
(n.d.). Drivewise Overview | Allstate Insurance Canada. Allstate.
Retrieved April 19, 2020, from
https://www.allstate.ca/webpages/auto-insurance/drivewise-app.aspx
Almohammadi,
K., Hagras, H., Alghazzawi, D., & Aldabbagh, G. (2012). A Survey of
Artificial Intelligence Techniques Employed For Adaptive Educational Systems
Within E-Learning Platforms. Journal of Artificial Intelligence and Soft
Computing Research, 7(1), 47–64.
Altexsoft.
(2019, April 24). Dynamic Pricing Explained: Machine Learning in Revenue
Management and Pricing Optimization. Altexsoft Weblog. https://www.altexsoft.com/blog/datascience/dynamic-pricing-explained-use-in-revenue-management-and-pricing-optimization/
Ameen,
N. (2019, April 15). What robots and AI may mean for university lecturers and
students. The Conversation. https://theconversation.com/what-robots-and-ai-may-mean-for-university-lecturers-and-students-114383
American
Library Association [ALA]. (2008). Code of Ethics of the American Library
Association.
http://www.ala.org/advocacy/sites/ala.org.advocacy/files/content/proethics/codeofethics/Code%20of%20Ethics%20of%20the%20American%20Library%20Association.pdf
American
Medical Association [AMA]. (2001). Principles of Medical Ethics.
https://www.ama-assn.org/sites/ama-assn.org/files/corp/media-browser/principles-of-medical-ethics.pdf
American
Medical Association [AMA]. (2002). Current Opinions of the Council on
Ethical and Judicial Affairs | Encyclopedia.com.
https://www.encyclopedia.com/science/encyclopedias-almanacs-transcripts-and-maps/current-opinions-council-ethical-and-judicial-affairs
Amigud,
A., Arnedo-Moreno, J., Daradoumis, T., & Guerrero-Roldan, A.-E. (2019). Using Learning Analytics
for Preserving Academic Integrity. The International Review of Research in
Open and Distributed Learning, 18(5).
https://doi.org/10.19173/irrodl.v18i5.3103
Anagnostopoulou,
P., Alexandropoulou, V., Lorentzou, G., Lykothanasi, A., Ntaountaki, P., &
Drigas, A. (2020). Artificial Intelligence in Autism Assessment. International
Journal of Emerging Technologies in Learning (IJET), 15(06), 95–107.
Anderson,
R. E., Johnson, D. G., Donald Gotterbarn, & Judith Perrolle. (1993). Using
the New ACM Code of Ethics in Decision Making. Communications of the ACM,
36(2), 98–107.
Andrejevic,
M., & Selwyn, N. (2019). Facial recognition technology in schools: Critical
questions and concerns. Learning, Media and Technology, 1–14.
https://doi.org/10.1080/17439884.2020.1686014
Andresi,
M. (2019, February 25). Expert Interview: The Covenant of Quiet Enjoyment
Explained. RENTCafé.Com. https://www.rentcafe.com/blog/expert-interviews/quiet-enjoyment/
Ansolabehere, S. D., & Iyengar, S. (1994).
Of Horseshoes and Horse Races: Experimental Studies of the Impact of
Poll Results on Electoral Behavior. Political Communication, 4(11),
413–430.
Aristotle.
(2014). Nicomachean Ethics.
Armstrong,
K., & Sheckler, C. (2019, December 7). Why Are Cops Around the World Using
This Outlandish Mind-Reading Tool? Pro Publica.
https://www.propublica.org/article/why-are-cops-around-the-world-using-this-outlandish-mindreading-tool
Arthur,
R. (2016, March 9). We Now Have Algorithms To Predict Police Misconduct. FiveThirtyEight.
http://fivethirtyeight.com/features/we-now-have-algorithms-to-predict-police-misconduct/
Arvind
Krishna. (2020, June 8). IBM CEO’s Letter to Congress on Racial Justice
Reform. THINKPolicy Blog.
https://www.ibm.com/blogs/policy/facial-recognition-susset-racial-justice-reforms/
Asilomar conference 2017 [Asilomar]. (2017). Asilomar
AI Principles. Future of Life Institute.
https://futureoflife.org/ai-principles/
Association
for Computing Machinery [ACM]. (n.d.). ACM Code of Ethics and Professional
Conduct. Association for Computing Machinery. Retrieved April 21, 2020,
from https://www.acm.org/code-of-ethics
Association
of Information Technology Professionals [AITP]. (2017, July 11). Ethics,
Code of Conduct & Conflict of Interest. Wayback Machine.
https://web.archive.org/web/20170711191254/https://www.aitp.org/?page=EthicsConduct
Athabasca
University [AU]. (2020). Computer Science (COMP) 361—Systems Analysis and
Design (Revision 8). Courses, Athabasca University.
https://www.athabascau.ca/syllabi/comp/comp361.php
Attwell,
G. (2020, May 11). Pontydysgu – Bridge to Learning—Educational Research.
https://www.pontydysgu.org/2020/05/careerchat-bot/
Azad-Manjiri,
M. (2014). A New Architecture for Making Moral Agents Based on C4.5 Decision
Tree Algorithm. International Journal of Information Technology and Computer
Science, 6, 60–67.
Bagley,
C. E. (2003, February). The Ethical Leader’s Decision Tree.
https://hbr.org/2003/02/the-ethical-leaders-decision-tree
Bannan,
K. J. (2019, April). Georgia State Tackles Racial Disparities with Data-Driven
Academic Support. Ed Tech Magazine.
https://edtechmagazine.com/higher/article/2019/04/georgia-state-tackles-racial-disparities-data-driven-academic-support
Barneveld,
A. van, Arnold, K., & Campbell, J. (2012). Analytics in Higher
Education: Establishing a Common Language (EDUCAUSE Learning Initiative
(ELI) Collection(s): ELI Papers and Reports). EDUCAUSE.
https://library.educause.edu/resources/2012/1/analytics-in-higher-education-establishing-a-common-language
Baron
Cohen, S. (2019, November 21). Keynote Address at ADL’s 2019 Never Is Now
Summit on Anti-Semitism and Hate.
https://www.adl.org/news/article/sacha-baron-cohens-keynote-address-at-adls-2019-never-is-now-summit-on-anti-semitism
Barron,
B. (2018, April 10). AI and WordPress: How Artificial Intelligence Can Help
Your Website. WPExplorer. https://www.wpexplorer.com/ai-and-wordpress/
Barshay,
J., & Aslanian, S. (2019, August 6). Colleges are using big data to track
students in an effort to boost graduation rates, but it comes at a cost. The
Hechinger Report.
https://hechingerreport.org/predictive-analytics-boosting-college-graduation-rates-also-invade-privacy-and-reinforce-racial-inequities/
Beard,
A. (2020, March 19). Can computers ever replace the classroom? The Guardian.
https://www.theguardian.com/technology/2020/mar/19/can-computers-ever-replace-the-classroom
Beauchamp,
T. L., & Childress, J. F. (2012). Principles of Biomedical Ethics
(Seventh edition). Oxford University Press.
Beer,
J. M., Fisk, A. D., & Rogers, W. A. (2019). Toward a framework for levels
of robot autonomy in human-robot interaction. Journal of Human-Robot
Interaction, 3(2), 74–99.
Behavior
Analyst Certification Board [BACB]. (2014). Professional and Ethical
Compliance Code for Behavior Analysts (p. 24). Behavior Analyst
Certification Board [BACB].
https://www.bacb.com/wp-content/uploads/BACB-Compliance-Code-english_190318.pdf
Bell,
E. (2020, January 12). Facebook’s refusal to fact-check political ads is
reckless. The Guardian, January 12.
https://www.theguardian.com/media/commentisfree/2020/jan/12/facebook-us-election-2020-news-lies-campaigns-fact-check
Benkler,
Y. (2006). The Wealth of Networks: How Social Production Transforms Markets
and Freedom. https://archive.org/details/wealthofnetworks00benk
Berkeley
Online Advising [BOA]: A Cohort-Based Student Success & Learning Analytics
Platform | Research, Teaching, and Learning. (n.d.). Retrieved
March 4, 2020, from https://rtl.berkeley.edu/berkeley-online-advising-boa-cohort-based-student-success-learning-analytics-platform
Beschizza,
R. (2019, November 11). Ethnicity detection camera. BoingBoing.
https://boingboing.net/2019/11/11/ethnicity-detection-camera.html
Betsy
Foresman. (2020, May 4). AI can help higher ed, but biased data can harm,
warns data scientist. EdScoop.
https://edscoop.com/ai-can-help-higher-ed-but-biased-data-can-harm-warns-data-scientist/
Bodle,
R. (2013). The Ethics of Online Anonymity or Zuckerberg vs “Moot”“. Computers
and Society, 43(1), 23–35.
Bossmann,
J. (2016, October 21). Top 9 ethical issues in artificial intelligence.
World Economic Forum.
https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/
Bostrom,
N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. In K.
Frankish & W. M. Ramsey (Eds.), The Cambridge Handbook of Artificial
Intelligence (pp. 316–334). Cambridge University Press; Cambridge Core.
https://doi.org/10.1017/CBO9781139046855.020
Boulton,
C. (2019, August 29). 6 data analytics success stories: An inside look. CIO Magazine.
https://www.cio.com/article/3221621/6-data-analytics-success-stories-an-inside-look.html
Bowker,
G. C., & Star, S. L. (2019). Sorting Things Out: Classification and Its
Consequences. MIT Press. https://mitpress.mit.edu/books/sorting-things-out
Boyd,
K. (2019). The concise argument: Consistency and moral uncertainty. Journal
of Medical Ethics, 45(7), 423–424.
https://doi.org/10.1136/medethics-2019-105645
Boyer,
A., & Bonnin, G. (2019). Higher Education and the Revolution of Learning
Analytics (International Council For Open And Distance Education).
https://static1.squarespace.com/static/5b99664675f9eea7a3ecee82/t/5beb449703ce644d00213dc1/1542145198920/anne_la_report+cc+licence.pdf
Bozdag,
E., & Timmermans, J. (2011). Values in the filter bubble: Ethics of
Personalization Algorithms in Cloud Computing. 6–15.
https://www.researchgate.net/profile/Martijn_Warnier/publication/238369914_Requirements_for_Reconfigurable_Technology_a_challenge_to_Design_for_Values/links/53f6e8250cf2888a7497561c.pdf#page=7
Brandom,
R. (2018, May 16). New Toronto Declaration calls on algorithms to respect human
rights. The Verge, 2018.
https://www.theverge.com/2018/5/16/17361356/toronto-declaration-machine-learning-algorithmic-discrimination-rightscon
Brandon,
S., Arthur, J., Ray, D., Meissner, C., Kleinman, S., Russano, M., & Wells,
S. (2019). The High-Value Detainee Interrogation Group (HIG): Inception,
evolution, and impact. In S. C. Harvey & M. A. Staal (Eds.), Operational
psychology: A new field to support national security and public safety (pp.
263–285). Praeger.
Bresnick,
J. (2017, June 7). Artificial Intelligence Could Take Over Surgical Jobs by
2053. Health IT Analytics.
https://healthitanalytics.com/news/artificial-intelligence-could-take-over-surgical-jobs-by-2053
Bresnick,
J. (2018a, June 28). Artificial Intelligence Tool Passes UK Medical Diagnostics
Exam. Health IT Analytics.
https://healthitanalytics.com/news/artificial-intelligence-tool-passes-uk-medical-diagnostics-exam
Bresnick,
J. (2018b, September 17). Arguing the Pros and Cons of Artificial Intelligence
in Healthcare. Health IT Analytics. https://healthitanalytics.com/news/arguing-the-pros-and-cons-of-artificial-intelligence-in-healthcare
British
Columbia Teachers’ Federation [BCTF]. (2020). BCTF Code of Ethics.
https://bctf.ca/ProfessionalResponsibility.aspx?id=4292
Brodsky,
A., Shao, G., Krishnamoorthy, M., Narayanan, A., Menascé, D., & Ak, R.
(2015). Analysis and Optimization in Smart Manufacturing based on a Reusable
Knowledge Base for Process Performance Models. 1418–1427.
https://ieeexplore.ieee.org/abstract/document/7363902
Brown,
R. (2017, November 1). People program AI, so what happens when they get hacked?
Create. https://www.createdigital.org.au/ai-people-hacked/
Cadwalladr,
C., & Graham-Harrison, E. (2018, March 17). Revealed: 50 million Facebook
profiles harvested for Cambridge Analytica in major data breach. The
Guardian.
https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election
Campbell,
E. (2008). The Ethics of Teaching as a Moral Profession. Curriculum Inquiry,
38(4), 357–385. https://doi.org/10.1111/j.1467-873X.2008.00414.x
Campbell,
J., DeBlois, P., & Oblinger, D. (2007, August). Academic Analytics: A New
Tool for a New Era. EDUCAUSE Review, 42, 40–57.
Canadian
Broadcasting Corporation [CBC]. (1967, December 21). Trudeau: ‘There’s no place
for the state in the bedrooms of the nation.’ CBC News.
https://www.cbc.ca/archives/entry/omnibus-bill-theres-no-place-for-the-state-in-the-bedrooms-of-the-nation
Canadian
Nurses Association [CNA]. (2017). Code of Ethics for Registered Nurses 2017
Edition. https://www.cna-aiic.ca/~/media/cna/page-content/pdf-en/code-of-ethics-2017-edition-secure-interactive
Canadian
Psychological Association [CPA]. (2017). Canadian Code of Ethics for
Psychologists. Miscellaneous Agency. https://deslibris.ca/ID/10095637
Canellis,
D. (2019, May 8). Who are the creators of AI-generated art—Programmers or
machines? The Next Web.
https://thenextweb.com/tnw2019/2019/05/08/who-are-the-creators-of-ai-generated-art-programmers-or-machines/
Carpenter,
T. A. (2020, February 12). If My AI Wrote this Post, Could I Own the Copyright?
The Scholarly Kitchen.
https://scholarlykitchen.sspnet.org/2020/02/12/if-my-ai-wrote-this-post-could-i-own-the-copyright/
Castillo,
M. del. (2015, August 26). Knewton launches ‘robot tutor in the sky’ that
learns how students learn. New York Business Journal.
https://www.bizjournals.com/newyork/news/2015/08/26/knewton-launches-robot-tutor-in-the-sky-that.html
Cavoukian,
A. (2013). Information and Privacy Commissioner, Ontario, Canada.
https://www.ipc.on.ca/wp-content/uploads/Resources/pbd-surveillance.pdf
CFA
Institute. (2017, October). Ethics and the Investment Industry. CFA
Institute.
https://www.cfainstitute.org/en/ethics-standards/codes/standards-of-practice-guidance/ethics-and-investement-industry
Cha,
S. (2020, January 12). “Smile with your eyes”: How to beat South Korea’s AI
hiring bots and land a job. Reuters.
https://www.reuters.com/article/us-southkorea-artificial-intelligence-jo/smile-with-your-eyes-how-to-beat-south-koreas-ai-hiring-bots-and-land-a-job-idUSKBN1ZC022
Chakrabarti,
S. (2009). Data mining: Know it all. Morgan Kaufmann.
https://www.elsevier.com/books/data-mining-know-it-all/chakrabarti/978-0-12-374629-0
Chesney,
R., & Citron, D. K. (2018). Deep Fakes: A Looming Challenge for Privacy,
Democracy, and National Security. 107 California Law Review,1753-1819 (2019).
https://ssrn.com/abstract=3213954
Clement-Jones,
T. F., & House of Lords Select Committee. (2018). AI in the UK: ready,
willing and able? House of Lords Select Committee on Artificial Intelligence:
Report of Session 2017–19 (House of Lords, Vol. 36).
https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf
Code
of Ethics & Professional Conduct. (2020). Project Management Institute [PMI].
https://www.pmi.org/about/ethics/code
Code
of Ethics and Standards of Professional Conduct. (2019). CFA Institute.
http://www.cfainstitute.org/en/ethics-standards/ethics/code-of-ethics-standards-of-conduct-guidance
Code Soleil. (2016). In Wikipédia.
https://fr.wikipedia.org/w/index.php?title=Code_Soleil&oldid=131091483
Cohen-Addad,
V., Klein, P. N., & Young, N. E. (2018). Balanced power diagrams for
redistricting. ArXiv:1710.03358 [Cs]. http://arxiv.org/abs/1710.03358
Coldicutt,
R. (2018, November 21). Ethics won’t make software engineering better. Doteveryone.
https://medium.com/doteveryone/ethics-wont-make-software-engineering-better-f3ffeca11c2c
Committee
on Standards in Public Life [CSPL]. (1995, May 31). The 7 principles of
public life—GOV.UK. GOV.UK.
https://www.gov.uk/government/publications/the-7-principles-of-public-life
Computer
Ethics Institute [CEI]. (1992). Ten Commandments of Computer Ethics.
http://computerethicsinstitute.org/publications/tencommandments.html
Conger,
B. K., Fausset, R., & Kovaleski, S. F. (2019, May 14). San Francisco Bans
Facial Recognition Technology. New York Times.
https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html
Contact
North. (2018). Ten Facts About Learning Analytics.
https://teachonline.ca/tools-trends/ten-facts-about-learning-analytics
Cooper,
A. (2012). What is Analytics? Definition and Essential Characteristics
(No. 2051–9214; CETIS Analytics Series). Jisc.
https://pdfs.semanticscholar.org/98ab/3fbde3c583d30adf8e660a30e840ebaf2bf0.pdf
Cooper,
T. L. (1998). The Responsible Administrator: An Approach to Ethics in the
Administrative Role (4th ed.). Jossey-Bass Publishers.
https://www.textbooks.com/Responsible-Administrator-An-Approach-to-Ethics-for-the-Administrative-Role-4th-Edition/9780787941338/Terry-L-Cooper.php?CSID=AJ22KKDSOJMKUDOUQ2COTUSOB
Coppins,
M. (2020, March). The Billion-Dollar Disinformation Campaign to Reelect the
President: How new technologies and techniques pioneered by dictators will
shape the 2020 election. The Atlantic.
https://www.theatlantic.com/magazine/archive/2020/03/the-2020-disinformation-war/605530/
Corbí, A., & Solans, D. B. (2014). Review
of Current Student-Monitoring Techniques used in eLearning-Focused recommender
Systems and Learning analytics: The Experience API & LIME model Case Study.
International Journal of Interactive Multimedia and Artificial Intelligence,
2(7), 44–52.
Coughlan,
T., Lister, K., Seale, J., Scanlon, E., & Weller, M. (2019). Accessible
Inclusive Learning: Futures. In R. Ferguson, A. Jones, & E. Scanlon (Eds.),
Educational visions: The lessons from 40 years of innovation.
https://doi.org/10.5334/bcg.e
Couros,
A., & Hildebrandt, K. (2016, June 30). Are you being catfished? Open
Thinking. http://educationaltechnology.ca/2758
Courtney,
H., Lovallo, D., & Clarke, C. (2013, November). Deciding How to Decide. Harvard
Business Review Website. https://hbr.org/2013/11/deciding-how-to-decide
Craig,
E. (2018, October 13). Say hello to Mica – Magic Leap’s Mixed Reality AI. Digital
Bodies. https://www.digitalbodies.net/mixed-reality/say-hello-to-mica-magic-leaps-mixed-reality-ai/
Craig,
E. (2020, January 15). Samsung’s Neon Project – Artificial Humans or Chatbots? Digital
Bodies.
https://www.digitalbodies.net/ai/samsungs-neon-project-artificial-humans-or-chatbots/
Cullinane,
S. (2018, April 24). Monkey does not own selfie copyright, appeals court rules.
CNN.
https://www.cnn.com/2018/04/24/us/monkey-selfie-peta-appeal/index.html
Danzig,
L. (2020, January 6). The Road to Artificial Intelligence: An Ethical
Minefield. InfoQ. https://www.infoq.com/articles/algorithmic-integrity-ethics/
Das,
A. (2020, January 20). Clearview AI Can Identify You With The Help Of Your
Photo. Ubergizmo. https://www.ubergizmo.com/2020/01/clearview-ai/
Data
Ethics Framework. (n.d.). GOV.UK. Retrieved March 5, 2020, from
https://www.gov.uk/government/publications/data-ethics-framework/data-ethics-framework
Davenport,
T. H., & Harris, J. (2007, May). The Dark Side of Customer Analytics. Harvard
Business Review.
https://hbr.org/2007/05/the-dark-side-of-customer-analytics
David
F. Linowes, et al. (1977). Personal Privacy in an Information Society:
The Report of The Privacy Protection Study Commission. Department of
Health, Education, and Welfare, United States.
De
Bruijn, B., Désillets, A., Fraser, K., Kiritchenko, S., Mohammad, S., Vinson,
N., Bloomfield, P., Brace, H., Brzoska, K., Elhalal, A., Ho, K., Kinsey, L.,
McWhirter, R., Nazare, M., & Ofori-Kuragu, E. (2019). Applied AI ethics:
Report 2019 (p. 38). National Research Council Canada.
https://nrc-publications.canada.ca/eng/view/fulltext/?id=a9064070-feb7-4c97-ba87-1347e41ec06a
Demiaux, V., & Abdallah, Y. S. (2017). How
Can Humans Keep the Upper hand? The ethical matters raised by algorithms and
artificial intelligence (French Data Protection Authority). Commission Nationale Informatique &
Libertés [CNIL].
https://www.cnil.fr/sites/default/files/atoms/files/cnil_rapport_ai_gb_web.pdf
Department
of Defense [DoD]. (2012). Directive Number 3000.09: Autonomy in Weapon
Systems. United States. https://www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodd/300009p.pdf
Department
of Health, Education, and Welfare [DHEW]. (1978). The Belmont Report: Ethical Principles
and Guidelines for the Protection of Human Subjects of Research, Report of the
National Commission for the Protection of Human Subjects of Biomedical and
Behavioral Research. http://videocast.nih.gov/pdf/ohrp_belmont_report.pdf
Desrochers, D. M., & Staisloff, R. L.
(2017). Technology-enabled Advising and the Creation of
Sustainable Innovation: Early Learnings from iPASS
(RPK Group). http://rpkgroup.com/wp-content/uploads/2015/12/rpkgroup_iPASS_whitepaper-Final.pdf
Dickens,
B. M. (2009). Unethical Protection of Conscience: Defending the Powerful
against the Weak. AMA Journal of Ethics.
https://journalofethics.ama-assn.org/article/unethical-protection-conscience-defending-powerful-against-weak/2009-09
Dingwall,
R., Iphofen, R., Lewis, J., Oates, J., & Emmerich, N. (2017). Towards
Common Principles for Social Science Research Ethics: A Discussion Document for
the Academy of Social Sciences. In R. Iphofen (Ed.), Advances in Research
Ethics and Integrity (Vol. 1, pp. 111–123). Emerald Publishing Limited.
https://doi.org/10.1108/S2398-601820170000001010
Dittrich,
T., & Star, S. (2018). Introducing Voice Recognition into Higher
Education. 4th International Conference on Higher Education Advances
(HEAd’18). http://dx.doi.org/10.4995/HEAd18.2018.8080
D’Mello,
S. K. (2017). Chapter 10: Emotional Learning Analytics. In C. Lang, G. Siemens,
A. Wise, & D. Gašević (Eds.), The Handbook of Learning Analytics
(pp. 115–127). The Society for Learning Analytics Research [SoLAR].
https://www.solaresearch.org/hla-17/
Dolianiti,
F. S., Iakovakis, D., Dias, S. B., Hadjileontiadou, S., Diniz, J. A., &
Hadjileontiadis, L. (2019). Sentiment Analysis Techniques and Applications in
Education: A Survey. In M. Tsitouridou, J. A. Diniz, & T. A. Mikropoulos
(Eds.), Technology and Innovation in Learning, Teaching and Education
(pp. 412–427). Springer International Publishing.
Donald
J. Trump. (2019, February 14). Maintaining American Leadership in Artificial
Intelligence. Federal Register.
https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-artificial-intelligence
Done,
P. (2010, October 13). Facebook is “deliberately killing privacy”, says
Schneier. Information Age.
https://www.information-age.com/facebook-is-deliberately-killing-privacy-says-schneier-1290603/
Dooley,
J. (2019, July 26). Guide to call tracking and the power of AI for analyzing
phone data. Search Engine Watch. https://www.searchenginewatch.com/2019/07/26/guide-call-tracking/
Dressel,
J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting
recidivism. Science Advances, 4(1), eaao5580.
https://advances.sciencemag.org/content/advances/4/1/eaao5580.full.pdf
Drew, C. (2018). Design
for data ethics: Using service design approaches to operationalize ethical
principles on four projects. Philosophical Transactions of the Royal Society
A: Mathematical, Physical and Engineering Sciences, 376(2128),
20170353. https://doi.org/10.1098/rsta.2017.0353
Drigas,
A. S., & Ioannidou, R.-E. (2012). Artificial Intelligence in Special
Education: A Decade Review. International Journal of Engineering Education,
28(6), 1366–1372.
Dringus,
L. P. (2012). Learning Analytics Considered Harmful. Journal of Asynchronous
Learning Networks, 16(2), 87–100.
Duong,
V., Pham, P., Bose, R., & Luo, J. (2020). #MeToo on Campus: Studying
College Sexual Assault at Scale Using Data Reported on Social Media. ArXiv:2001.05970
[Cs]. http://arxiv.org/abs/2001.05970
Duval,
E. (2011). Attention Please! Learning Analytics for Visualization and
Recommendation. Proceedings of the 1st International Conference on Learning
Analytics and Knowledge (LAK ’11), 9–17. https://doi.org/10.1145/2090116.2090118
Eckersley,
P., Gillula, J., & Williams, J. (2017). Electronic Frontier Foundation –
Written evidence (AIC0199) (House of Lords Select Committee on Artificial
Intelligence).
http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/artificial-intelligence-committee/artificial-intelligence/written/69720.html
Eddie
Yan. (n.d.). r/MachineLearning—[NSFW] [P][For Fun][ArtosisNet] Training a
Neural Network to Identify Rage on a Twitch Stream. Reddit. Retrieved May
11, 2020, from
https://www.reddit.com/r/MachineLearning/comments/gh9vvf/pfor_funartosisnet_training_a_neural_network_to/
Eicher,
B., Polepeddi, L., & Goel, A. (2018). Jill Watson Doesn’t Care if You’re
Pregnant: Grounding AI Ethics in Empirical Studies. Proceedings of the 2018
AAAI/ACM Conference on AI, Ethics, and Society (AIES ’18), 88–94.
https://doi.org/10.1145/3278721.3278760
Eisenbeiß,
S. A., & Brodbeck, F. (2014). Ethical and Unethical Leadership: A
Cross-Cultural and Cross-Sectoral Analysis. Journal of Business Ethics, 122(2),
343–359.
Ekbia,
H., Mattioli, M., Kouper, I., Arave, G., Ghazinejad, A., Bowman, T., Suri, V.
R., Tsou, A., Weingart, S., & Sugimoto, C. R. (2015). Big data, bigger
dilemmas: A critical review. Journal of the Association for Information
Science and Technology, 66(8), 1523–1545.
https://doi.org/10.1002/asi.23294
Ekowo,
M., & Palmer, I. (2016). The Promise and Peril of Predictive Analytics
in Higher Education: A Landscape Analysis.
http://www.lonestar.edu/multimedia/The%20Promise%20and%20Peril%20of%20Predictive%20Analytics%20in%20Higher%20Education.pdf
Emory
University Libraries. (2019). Policy on the Collection, Use, and Disclosure
of Personal Information. http://web.library.emory.edu/privacy-policy/personal-information.html
Engelhardt,
H. T. (1993). Personhood, Moral Strangers, and the Evil of Abortion: The
Painful Experience of Post-Modernity. The Journal of Medicine and
Philosophy: A Forum for Bioethics and Philosophy of Medicine, 18(4),
419–421. https://doi.org/10.1093/jmp/18.4.419
Eric
Niiler. (n.d.). An AI Epidemiologist Sent the First Alerts of the Coronavirus. Wired.
Retrieved April 17, 2020, from
https://www.wired.com/story/ai-epidemiologist-wuhan-public-health-warnings/
Erlanger
Hospital. (2000). Erlanger Medical Ethics Orientation Manual.
https://www.utcomchatt.org/docs/biomedethics.pdf
Ethics
Centre, The. (2017, November 30). Big Thinkers: Thomas Beauchamp & James
Childress. The Ethics Centre. https://ethics.org.au/big-thinkers-thomas-beauchamp-james-childress/
European
Commission’s High-Level Expert Group on Artificial Intelligence. (2019). Ethics
Guidelines for Trustworthy AI.
https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.
European
Union. (2016). General Data Protection Regulation.
https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32016R0679
European
University Institute [EUI]. (2019). Code of Ethics in Academic Research.
https://www.eui.eu/Documents/ServicesAdmin/DeanOfStudies/CodeofEthicsinAcademicResearch.pdf
Evans,
L. J. (n.d.). 25 Years of the Committee on Standards in Public
Life. GOV.UK. Retrieved March 9,
2020, from https://cspl.blog.gov.uk/2020/03/03/25-years-of-the-committee-on-standards-in-public-life/
Facebook.
(2019). Deepfake Detection Challenge [DFDC]. Retrieved January 19, 2020, from
https://deepfakedetectionchallenge.ai/
FAIRsFAIR.
(2019, November 18). Outcomes from FAIRsFAIR Focus Group: Universidad Carlos
III de Madrid. FAIRsFAIR.
https://www.fairsfair.eu/articles-publications/outcomes-fairsfair-focus-group-universidad-carlos-iii-de-madrid
FAIRsFAIR.
(2020). FAIRsFAIR - Fostering Fair Data Practices in Europe.
https://fairsfair.eu/
Farr,
M. (2020, January 24). U of T’s Citizen Lab reaches out to academics targeted
by spyware. University Affairs.
https://www.universityaffairs.ca/news/news-article/u-of-ts-citizen-lab-reaches-out-to-academics-targeted-by-spyware/
Federal Reserve
System [FRS]. (2011). SR Letter 11-7: Supervisory Guidance on Model
Risk Management. Board of Governors of the Federal Reserve System, Office
of the Comptroller of the Currency.
https://www.federalreserve.gov/supervisionreg/srletters/sr1107a1.pdf
Feffer,
M. (2017). Ethical vs. Legal Responsibilities for HR Professionals
(SHRM).
https://www.shrm.org/resourcesandtools/hr-topics/behavioral-competencies/ethical-practice/pages/ethical-and-legal-responsibilities-for-hr-professionals.aspx
Ferguson,
R., Brasher, A., Clow, D., Cooper, A., Hillaire, G., Mittelmeier, J.,
Rienties, B., Ullmann, T., Vuorikari, R.,
& Castaño Muñoz, J. (2016). Research Evidence on the Use of
Learning Analytics: Implications for Education Policy (No.
978-92-79-64441–2). Publications Office of the European Union.
https://doi.org/10.2791/955210
Field,
H., & Lapowsky, I. (2020, March 19). Coronavirus is AI moderation’s big
test. Don’t expect flying colors. Protocol.
https://www.protocol.com/ai-moderation-facebook-twitter-youtube
Fjeld,
J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled Artificial
Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to
Principles for AI (Berkman Klein Center for Internet & Society).
https://dash.harvard.edu/bitstream/handle/1/42160420/HLS%20White%20Paper%20Final_v3.pdf?sequence=1&isAllowed=y
Fleming,
N. (2018). How artificial intelligence is changing drug discovery. Nature,
557, S55–S57.
Fleming,
R. (2020, January 20). QBot is here – Creating learning communities supporting
inclusion and social learning in Teams for Education! Microsoft Education.
https://educationblog.microsoft.com/en-au/2020/01/qbot-is-here-creating-learning-communities-supporting-inclusion-and-social-learning-in-teams-for-education/
Floridi,
L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge,
C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena,
E. (2018). AI4People—An Ethical Framework for a Good AI Society: Opportunities,
Risks, Principles, and Recommendations. Minds and Machines, 28(4),
689–707. https://doi.org/10.1007/s11023-018-9482-5
Floridi,
L., & Taddeo, M. (2016). What is data ethics? Philosophical Transactions
of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083),
20160360. https://doi.org/10.1098/rsta.2016.0360
Flynn,
J., Neulander, I., Philbin, J., & Snavely, N. (2015). DeepStereo: Learning
to Predict New Views from the World’s Imagery. Computer Vision and Pattern
Recognition. https://arxiv.org/abs/1506.06825
Folan,
B. (2020, January 20). OASPA Webinar: PhD students take on openness and
academic culture – webinar key takeaways and answers to attendee questions.
Open Access Scholarly Publishers Association.
https://oaspa.org/oaspa-webinar-phd-students-take-on-openness-and-academic-culture-webinar-key-takeaways/
Foot,
P. (1967). The Problem of Abortion and the Doctrine of the Double Effect. Oxford
Review, 5. Reprinted in Virtues and Vices (Basil Blackwell, 1978).
http://www2.econ.iastate.edu/classes/econ362/hallam/Readings/FootDoubleEffect.pdf
Forth,
S. (2019, November 18). Getting Past Competency Model PTSD. TeamFit.
http://hq.teamfit.co/getting-past-competency-model-ptsd/
Framework_e.pdf
[Ontario College of Teachers, Professional Learning Framework]. (n.d.).
Retrieved June 6, 2020, from
https://www.oct.ca/-/media/PDF/Professional%20Learning%20Framework/framework_e.pdf
Freeman,
R. E. (1984). Strategic Management: A
Stakeholder Approach. Pitman Publishing.
Friedberg,
B., & Donovan, J. (2019). The Platform Is the Problem (Center for
International Governance Innovation).
https://www.cigionline.org/articles/platform-problem
Fruhlinger,
J. (2019, October 14). Equifax data breach FAQ: What happened, who was
affected, what was the impact? CSO; International Data Group [IDG].
https://www.csoonline.com/article/3444488/equifax-data-breach-faq-what-happened-who-was-affected-what-was-the-impact.html
Fussey,
P., & Murray, D. (2019). Independent Report on the London Metropolitan
Police Service’s Trial of Live Facial Recognition Technology (Human Rights
Centre, University of Essex).
https://48ba3m4eh2bf2sksp43rq8kk-wpengine.netdna-ssl.com/wp-content/uploads/2019/07/London-Met-Police-Trial-of-Facial-Recognition-Tech-Report.pdf
Gadde,
V., & Derella, M. (2020, March 16). An update on our continuity strategy
during COVID-19. Twitter Blog.
https://blog.twitter.com/en_us/topics/company/2020/An-update-on-our-continuity-strategy-during-COVID-19.html
Galeon,
D. (2016, December 16). Human or AI: Can You Tell Who Composed This Music? Futurism.
https://futurism.com/human-or-ai-can-you-tell-who-composed-this-music
Gan,
S. H. (2018, December 16). How To Design A Spam Filtering System with Machine
Learning Algorithm. Towards Data Science.
https://towardsdatascience.com/email-spam-detection-1-2-b0e06a5c0472
Garrett,
B. M., & Roberts, G. (2004). Employing Intelligent and Adaptive Methods for
Online Learning. In C. Ghaoui (Ed.), E-Education Applications: Human Factors
and Innovative Approaches, Chapter XII (pp. 208–219).
https://www.researchgate.net/publication/282662099_Employing_Intelligent_and_Adaptive_Methods_for_Online_Learning
Gašević,
D., Dawson, S., & Siemens, G. (2015, February). Let’s not forget: Learning
analytics are about learning. TechTrends, 59.
https://link.springer.com/content/pdf/10.1007%2Fs11528-014-0822-x.pdf
Gauthier, J. (2008). The
Universal Declaration of Ethical Principles for Psychologists: Third Draft.
11. CiteSeerX 10.1.1.518.3698.
Gellman,
B., & Adler-Bell, S. (2017). The Disparate Impact of Surveillance.
The Century Foundation.
https://tcf.org/content/report/disparate-impact-surveillance/
Genova,
F., Hudson, R. L., & Moran, N. (2014). The Data Harvest: How sharing
research data can yield knowledge, jobs and growth (p. 40). Research Data
Alliance [RDA] Europe.
https://rd-alliance.org/sites/default/files/attachment/The%20Data%20Harvest%20Final.pdf
Germany.
(2018). Artificial Intelligence Strategy.
https://www.ki-strategie-deutschland.de/home.html?file=files/downloads/Nationale_KI-Strategie_engl.pdf
Gilbert,
S., & Lynch, N. (2002). Brewer’s conjecture and the feasibility of
consistent, available, partition-tolerant web services. ACM SIGACT News,
33(2), 51–59. https://doi.org/10.1145/564585.564601
Gilman,
S. C. (2005a). Comparative Successes and Lessons (p. 76). PREM, the
World Bank. https://www.oecd.org/mena/governance/35521418.pdf
Gilman,
S. C. (2005b). Ethics Codes and Codes of Conduct as Tools for Promoting an
Ethical and Professional Public Service: Comparative Successes and Lessons
(p. 76). World Bank. https://www.oecd.org/mena/governance/35521418.pdf
GoFAIR.
(2020). FAIR Principles—GO FAIR.
https://www.go-fair.org/fair-principles/
Goldstein,
P. J. (2005, December). Academic Analytics: The Uses of Management Information and Technology in Higher
Education. ECAR Key Findings.
https://er.educause.edu/-/media/files/articles/2007/7/ekf0508.pdf?la=en&hash=72921740F4D3C3E7F45B5989EBF86FD19F3EA2D7
Google
Jigsaw. (2020). Assembler. https://jigsaw.google.com/assembler/
Government
of Japan [Japan]. (2019). Social Principles of Human-centric AI. Cabinet
Office, Government of Japan.
https://www8.cao.go.jp/cstp/english/humancentricai.pdf
Government
of New Zealand. (2018, April 24). Professional ethics and codes of conduct.
Immigration Advisers Authority.
https://www.iaa.govt.nz/for-advisers/adviser-tools/ethics-toolkit/professional-ethics-and-codes-of-conduct/
Greene,
P. (2019, December 9). Ed Tech Giant Powerschool Keeps Eating the World. Curmudgucation.
https://nepc.colorado.edu/blog/ed-tech-giant
Griffiths,
D., Drachsler, H., Kickmeier-Rust, M., Steiner, C., Hoel, T., & Greller, W.
(2016). Is Privacy A Show-Stopper For Learning Analytics? A Review Of Current
Issues And Solutions. Learning Analytics Review, 6(15).
http://www.laceproject.eu/learning-analytics-review/files/2016/04/LACE-review-6_privacy-show-stopper.pdf
Griggs,
M. B. (2019, November 14). Google reveals ‘Project Nightingale’ after being accused
of secretly gathering personal health records. The Verge.
https://www.theverge.com/2019/11/11/20959771/google-health-records-project-nightingale-privacy-ascension
Gruzd,
A. (2020, April 24). Hospitals Around the World are Being Targeted by
Conspiracy Theorists. Social Media Lab.
https://socialmedialab.ca/2020/04/24/hospitals-around-the-world-are-being-targeted-by-conspiracy-theorists/
Guillaud, H. (2020, February 28). Des
limites du recrutement automatisé [The limits of automated recruitment]. InternetActu.net. http://www.internetactu.net/a-lire-ailleurs/des-limites-du-recrutement-automatise/
Guyana
Ministry of Education. (2017). Professional Code of Ethics for Teachers.
https://www.education.gov.gy/web/index.php/teachers/tips-for-teaching/item/2738-professional-code-of-ethics-for-teachers
Hamel,
S. (2016, June 28). The Elasticity of Analytics Ethics. Radical Analytics.
https://radical-analytics.com/the-elasticity-of-analytics-ethics-7d8ac253a3b9
Hamon,
R., Junklewitz, H., & Sanchez, I. (2020). Robustness and Explainability
of Artificial Intelligence—From technical to policy solutions (No.
978-92-79-14660–5). Publications Office of the European Union.
https://doi.org/10.2760/57493
Hattie,
J. (2008). Visible learning: A synthesis of over 800 Meta-analyses relating
to achievement. Routledge.
Hayley
Peterson. (2020, April 20). Amazon-owned Whole Foods is quietly tracking its
employees with a heat map tool that ranks which stores are most at risk of
unionizing. Business Insider.
https://www.msn.com/en-au/news/world/amazon-owned-whole-foods-is-quietly-tracking-its-employees-with-a-heat-map-tool-that-ranks-which-stores-are-most-at-risk-of-unionizing/ar-BB12VDFf
Heaton,
R. (2017, November 24). Identity Graphs: How online trackers follow you across
devices. Weblog. https://robertheaton.com/2017/11/24/identity-graphs-how-online-trackers-follow-you-across-devices/
Heilweil,
R. (2020, May 4). Paranoia about cheating is making online education
terrible for everyone. Vox.
https://www.vox.com/recode/2020/5/4/21241062/schools-cheating-proctorio-artificial-intelligence
Hern,
A. (2019, February 14). New AI fake text generator may be too dangerous to
release, say creators. The Guardian.
https://www.theguardian.com/technology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fiction
Heyer,
O., & Kaskiris, V. (n.d.). BOA Platform Overview. University of
California at Berkeley. Retrieved March 4, 2020, from
https://rtl.berkeley.edu/sites/default/files/general/boa_platform_overview.pdf
Hoffman,
J. (2019, September 23). How Anti-Vaccine Sentiment Took Hold in the United
States. The New York Times.
https://www.nytimes.com/2019/09/23/health/anti-vaccination-movement-us.html
Hoffstetter, M. (2019, September 27). La
pépite suisse qui démêle le fake du vrai [The Swiss gem that sorts fake from true]. Bilan.
https://www.bilan.ch/techno/la-pepite-suisse-qui-demele-le-fake-du-vrai
Horan,
R., & DePetro, J. (2019). Public Health and Personal Choice: The Ethics
of Vaccine Mandates and Parental Refusal in the United States. 2019 Awards
for Excellence in Student Research and Creative Activity – Documents.
https://thekeep.eiu.edu/lib_awards_2019_docs/7
Hotchkiss,
K. (2019, April 6). With Great Power Comes Great (Eco) Responsibility – How
Blockchain is Bad for the Environment. Georgetown Law.
https://www.law.georgetown.edu/environmental-law-review/blog/with-great-power-comes-great-eco-responsibility-how-blockchain-is-bad-for-the-environment/
Health
and Human Services [HHS]. (2018). Code of Federal Regulations.
https://www.ecfr.gov/cgi-bin/text-idx?m=02&d=26&y=2020&cd=20200224&submit=GO&SID=83cd09e1c0f5c6937cd9d7513160fc3f&node=pt45.1.46&pd=20180719
Hudson,
W. D. (1969). The Is/Ought Question: A Collection of Papers on the Central
Problem in Moral Philosophy. Macmillan.
Human
Behaviour and Machine Intelligence [HUMAINT]. (2020, January 23). European
Commission JRC Science Hub. Retrieved January 23, 2020, from
https://ec.europa.eu/jrc/communities/en/community/1162/about
Hume,
D. (1888). PART I: Of virtue and vice in general. Section 1: Moral distinctions
not deriv’d from reason. In Treatise of Human Nature/Book 3: Of morals.
Clarendon Press; Wikisource.
https://en.wikisource.org/wiki/Treatise_of_Human_Nature/Book_3:_Of_morals/Part_1/Section_1
IEEE
Global Initiative on Ethics of Autonomous and Intelligent Systems. (2016). Ethically
Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and
Intelligent Systems. IEEE.
https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_v2.pdf
Ifenthaler,
D., & Widanapathirana, C. (2014). Development and Validation of a Learning
Analytics Framework: Two Case Studies Using Support Vector Machines. Tech
Know Learn, 19, 221–240. https://doi.org/10.1007/s10758-014-9226-4
Iglesias,
M., Shamuilia, S., & Anderberg, A. (2019). Intellectual Property and
Artificial Intelligence – A literature review. EUR 30017 EN.
https://ec.europa.eu/jrc/en/publication/intellectual-property-and-artificial-intelligence-literature-review
Insights
Association [IA]. (2017, January 5). Historic CASRO and MRA Merger Complete.
Insights Association.
https://www.insightsassociation.org/article/historic-casro-and-mra-merger-complete
Insights
Association [IA]. (2019, April). IA Code of Standards and Ethics for
Marketing Research and Data Analytics. Insights
Association. https://www.insightsassociation.org/issues-policies/insights-association-code-standards-and-ethics-market-research-and-data-analytics-0
Institute
of Electrical and Electronics Engineers [IEEE]. (2014). Code of Conduct.
https://www.ieee.org/content/dam/ieee-org/ieee/web/org/about/ieee_code_of_conduct.pdf
Institute
of Electrical and Electronics Engineers [IEEE]. (2017). IEEE Guidelines for
Working With Children.
https://www.ieee.org/content/dam/ieee-org/ieee/web/org/voluntr/risk-insurance/ieee-guidelines-for-working-with-children-nov-201753018.pdf
Institute
of Electrical and Electronics Engineers [IEEE]. (2020). IEEE Policies. In IEEE
Governing Documents.
https://www.ieee.org/about/corporate/governance/index.html
International
Institute for Educational Planning [IIEP]. (2020). Teacher codes of conduct.
ETICO - Unesco IIEP. https://etico.iiep.unesco.org/en/teacher-codes-conduct
International
Medical Informatics Association [IMIA]. (2003). IMIA Code of Ethics—Archived
2003 edition. International Medical Informatics Association.
https://imia-medinfo.org/wp/imia-code-ethics-2003-archive/
International
Medical Informatics Association [IMIA]. (2015). International Medical
Informatics Association Code of Ethics. International Medical Informatics
Association. https://doi.org/10.1017/CBO9781139600330
International
Union of Psychological Science [IUPSYS]. (2008). Universal Declaration of
Ethical Principles for Psychologists—International Union of Psychological
Science. https://www.iupsys.net/about/governance/universal-declaration-of-ethical-principles-for-psychologists.html
Introna, L. D., & Nissenbaum, H. (2000). Shaping
the Web: Why the Politics of Search Engines Matters. The Information Society,
16, 169–185.
I.V.
Aided Dreams [Song by Caroline Polachek]. (n.d.). Google Play Music.
Retrieved April 21, 2020, from
https://play.google.com/music/listen#/album/Bq2eyp2flgoon5lete35mqb6uxy/Caroline+Polachek/Gloss+Coma+-+001
Jamal,
K., & Bowie, N. E. (1995). Theoretical considerations for a meaningful code
of professional ethics. Journal of Business Ethics, 14(9),
703–714.
James,
J. E. (2020). Pirate open access as electronic civil disobedience: Is it
ethical to breach the paywalls of monetized academic publishing? Journal of
the Association for Information Science and Technology, n/a(n/a),
1–5. https://doi.org/10.1002/asi.24351
Jarre,
J.-M., & Snowden, E. (2016, April 28). Exit [Video]. YouTube.
https://www.youtube.com/watch?v=YNESMafb5ZI
Jaschik,
S. (2016, January 20). Are At-Risk Students Bunnies to Be Drowned? Inside
Higher Ed. https://www.insidehighered.com/news/2016/01/20/furor-mount-st-marys-over-presidents-alleged-plan-cull-students
Johri,
A., Han, E.-H. (Sam), & Mehta, D. (2016). Domain Specific Newsbots: Live
Automated Reporting Systems involving Natural Language Communication. 2016
Computation+Journalism Symposium, Stanford University (CJ2016).
https://journalism.stanford.edu/cj2016/files/Newsbots.pdf
Jones,
H. (2011). Working Paper 330: Taking responsibility for complexity: How
implementation can achieve results in the face of complex problems.
Overseas Development Institute.
https://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/6485.pdf
Joseph Soleil. (1923). Le code Soleil.
http://crpe.free.fr/LecodeSoleil.htm
Julia
Angwin, Jeff Larson, Surya Mattu, & Lauren Kirchner. (2016, May 23). Machine
Bias. ProPublica.
https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Julie
Ireton. (2017, October 10). An inside look at the chaos at the Phoenix pay
centres—Is anyone’s pay right? CBC News.
https://www.cbc.ca/news/canada/ottawa/phoenix-compensation-advisors-investigation-broken-pay-system-1.4313389
Jun,
L. (2018, May 19). Facial Recognition Used to Analyze Students’ Classroom Behaviors.
People’s Daily Online.
http://en.people.cn/n3/2018/0519/c90000-9461918.html
Kaggle.
(2012). The Hewlett Foundation: Automated Essay Scoring.
https://www.kaggle.com/c/asap-aes
Kay,
D., Korn, N., & Oppenheim, C. (2012). Legal, Risk and Ethical Aspects of
Analytics in Higher Education. JISC CETIS Analytics Series, 1(6).
http://publications.cetis.org.uk/2012/500
Kaye,
J. (2020, May 7). Mozilla research shows some machine voices score higher
than humans. The Mozilla Blog. https://blog.mozilla.org/blog/2020/05/07/mozilla-research-shows-some-machine-voices-score-higher-than-humans
Kelly,
H. (2019, October 29). School apps track students from classroom to bathroom,
and parents are struggling to keep up. Washington Post.
https://www.washingtonpost.com/technology/2019/10/29/school-apps-track-students-classroom-bathroom-parents-are-struggling-keep-up/?arc404=true
Kemp,
D. S. (2013, February 11). When Conscience and Duty Conflict: A Health Care
Provider’s Moral Dilemma. Verdict. https://verdict.justia.com/2013/02/11/when-conscience-and-duty-conflict
Keppler,
N. (2020, February 12). Cost Cutting Algorithms Are Making Your Job Search a
Living Hell. Vice.
https://www.vice.com/en_us/article/pkekvb/cost-cutting-algorithms-are-making-your-job-search-a-living-hell
Kevan,
J. M., & Ryan, P. R. (2016, April). Experience API, Flexible, Decentralized
and Activity-Centric Data Collection. Technology, Knowledge and Learning,
21, 143–149.
Khalil,
M., & Ebner, M. (2015). Learning Analytics: Principles and Constraints.
Proceedings of EdMedia 2015, 1789–1799. https://pure.tugraz.at/ws/portalfiles/portal/3217534/edmedia2015.pdf
Kharif,
O. (2014, May 2). Privacy Fears Over Student Data Tracking Lead to InBloom’s
Shutdown. Bloomberg.
https://www.bloomberg.com/news/articles/2014-05-01/inbloom-shuts-down-amid-privacy-fears-over-student-data-tracking
Kiciman,
E., Counts, S., & Gasser, M. (2018, June 15). Using Longitudinal Social
Media Analysis to Understand the Effects of Early College Alcohol Use. Twelfth
International AAAI Conference on Web and Social Media.
https://www.aaai.org/ocs/index.php/ICWSM/ICWSM18/paper/view/17844
Kleber,
S. (2018, July 31). 3 Ways AI Is Getting More Emotional. Harvard Business
Review. https://hbr.org/2018/07/3-ways-ai-is-getting-more-emotional
Klein,
A. (2020, January 3). N.Y. District Will Use Facial Recognition Software,
Despite Big Privacy Concerns. Education Week.
http://blogs.edweek.org/edweek/DigitalEducation/2020/01/facial-recognition-new-york-software.html
Kottke,
J. (2020). Recording All the Melodies. Kottke.Org. https://kottke.org/20/02/recording-all-the-melodies
Kramer,
A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental
evidence of massive-scale emotional contagion through social networks. Proceedings
of the National Academy of Sciences, 111(24), 8788–8790. https://www.pnas.org/content/111/24/8788
Krawitz,
M., Law, J., & Litman, S. (2018). How higher-education institutions can
transform themselves using advanced analytics. McKinsey.
https://www.mckinsey.com/industries/social-sector/our-insights/how-higher-education-institutions-can-transform-themselves-using-advanced-analytics
Kristof,
N., & Thompson, S. A. (2020, March 13). Opinion | How Much Worse the
Coronavirus Could Get, in Charts. The New York Times.
https://www.nytimes.com/interactive/2020/03/13/opinion/coronavirus-trump-response.html
Kymlicka,
W. (2020, March 5). Why human rights are not enough. New Statesman.
https://www.newstatesman.com/2020/03/why-human-rights-are-not-enough
Levin,
S. (2017, May 1). Facebook told advertisers it can identify teens feeling ‘insecure’ and ‘worthless’. The Guardian. https://www.theguardian.com/technology/2017/may/01/facebook-advertising-data-insecure-teens
Li,
Y., & Lyu, S. (2019). Exposing DeepFake Videos By Detecting Face Warping
Artifacts. IEEE Conference on Computer Vision and Pattern Recognition
(CVPR) Workshops 2019. https://github.com/danmohaha/CVPRW2019_Face_Artifacts
Liberman,
M. (2019). AI is brittle. Language Log (Weblog).
https://languagelog.ldc.upenn.edu/nll/?p=45317
Lin,
J., Yu, H., Pan, Z., Shen, Z., & Cui, L. (2018). Towards data-driven
software engineering skills assessment. International Journal of Crowd
Science, 2(2), 123–135.
Linda
Kinstler. (2020, March 8). Researcher danah boyd on how to protect the
census and fix tech. Protocol. https://www.protocol.com/danah-boyd-q-a
Liu,
J., Magrino, T., Arden, O., George, M. D., & Myers, A. C. (2014, July
27). Warranties for Faster Strong
Consistency. SlideServe.
https://www.slideserve.com/tirza/warranties-for-faster-strong-consistency
Lodge,
J. M., Panadero, E., Broadbent, J., & de Barba, P. G. (2018). Supporting
Self-Regulated Learning With Learning Analytics. In J. M. Lodge, J. C.
Horvath, & L. Corrin (Eds.), Learning Analytics in the
Classroom: Translating Learning Analytics Research for Teachers. Routledge.
https://books.google.ca/books?id=XiBtDwAAQBAJ&printsec=frontcover&source=gbs_ge_summary_r&cad=0#v=onepage&q&f=false
Long,
P., & Siemens, G. (2011). Penetrating the fog: Analytics in learning and
education. Educause Review, 46(5), 31–40.
Loutfi,
E. (2019, November 25). What does the future hold for AI-enabled coaching? Chief
Learning Officer.
https://www.chieflearningofficer.com/2019/11/25/ai-enabled-coaching/
Lu,
X. (2019). An Empirical Study on the Artificial Intelligence Writing Evaluation
System in China CET. Big Data, 7(2).
https://doi.org/10.1089/big.2018.0151
Luban,
D. (2018). Lawyers and Justice: An Ethical Study. Princeton University
Press.
Floridi,
L. (2013). The Ethics of Information. Oxford University Press.
https://global.oup.com/academic/product/the-ethics-of-information-9780199641321?cc=ca&lang=en&
Luckin,
R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence
unleashed—An argument for AI in education. Pearson.
http://discovery.ucl.ac.uk/1475756/
Lunden,
I. (2018, October 4). ZipRecruiter picks up $156M, now at a $1B valuation, for
its AI-based job-finding marketplace. TechCrunch.
https://techcrunch.com/2018/10/04/ziprecruiter-picks-up-156m-now-at-a-1b-valuation-for-its-ai-based-job-finding-marketplace/
Lyon,
D. (2017). Surveillance Culture: Engagement, Exposure, and Ethics in Digital
Modernity. International Journal of Communication, 11, 824–842.
Mackie,
J. L. (1983). Ethics: Inventing right and wrong. Penguin
Books.
Majumdar,
R., Akçapınar, A., Akçapınar, G., Ogata, H., & Flanagan, B. (2019). LAView:
Learning Analytics Dashboard Towards Evidence-based Education. Companion
Proceedings of the 9th International Conference on Learning Analytics and
Knowledge (2019).
https://repository.kulib.kyoto-u.ac.jp/dspace/handle/2433/244127
Alberola, J. M., del Val, E., Sanchez-Anguix,
V., Palomares, A., & Teruel, M. D. (2016). An
artificial intelligence tool for heterogeneous team formation in the classroom.
Knowledge-Based Systems, 101, 1–14.
Malik,
K. (2019, May 19). As surveillance culture grows, can we even hope to escape
its reach? The Guardian.
https://www.theguardian.com/commentisfree/2019/may/19/as-surveillance-culture-grows-can-we-even-hope-to-escape-its-reach
Malouff,
J., & Thorsteinsson, E. (2016). Bias in grading: A meta-analysis of
experimental research findings. Australian Journal of Education, 60,
1–12.
Manjunath,
T. N., Hegadi, R. S., Umesh, I. M., & Ravikumar, G. (2011). Design and
Analysis of DWH and BI in Education Domain. International Journal of
Computer Science, 545–551.
Manulife.
(2020). Manulife Vitality. Manulife.
https://www.manulife.ca/personal/vitality.html
Marczak,
B., Scott-Railton, J., McKune, S., Razzak, B. A., & Deibert, R. (2018). Hide and Seek: Tracking NSO
Group’s Pegasus Spyware to Operations in 45 Countries. Citizen Lab.
https://citizenlab.ca/2018/09/hide-and-seek-tracking-nso-groups-pegasus-spyware-to-operations-in-45-countries/
Markham,
A. (2016, May 18). OKCupid data release fiasco. Medium. https://points.datasociety.net/okcupid-data-release-fiasco-ba0388348cd
Martineau,
P. (2019, May 2). Facebook Bans Alex Jones, Other Extremists—But Not as
Planned. Wired.
https://www.wired.com/story/facebook-bans-alex-jones-extremists/
Marx,
J. (2020, February 3). The Mission Creep of Smart Streetlights. Voice of San
Diego.
https://www.voiceofsandiego.org/topics/public-safety/the-mission-creep-of-smart-streetlights/
Masnick,
M. (2008, March 25). Turnitin Found Not To Violate Student Copyrights. TechDirt.
https://www.techdirt.com/articles/20080325/005954642.shtml
Maxwell,
B., & Schwimmer, M. (2016). Seeking the elusive ethical base of teacher
professionalism in Canadian codes of ethics. Teaching and Teacher Education,
59, 468–480. https://doi.org/10.1016/j.tate.2016.07.015
McLaughlin,
T. (2018, December 12). How WhatsApp Fuels Fake News and Violence in India. Wired.
https://www.wired.com/story/how-whatsapp-fuels-fake-news-and-violence-in-india/
McMurtree,
B. (2000, May 12). A Christian Fellowship’s Ban on Gay Leaders Splits 2
Campuses. Chronicle of Higher Education.
https://dfkpq46c1l9o7.cloudfront.net/pdfs/4842_2898.pdf
Medhat,
W., Hassan, A., & Korashy, H. (2014). Sentiment analysis algorithms and
applications: A survey. Ain Shams Engineering Journal, 5(4),
1093–1113.
Meinecke,
S. (2018, July 16). AI could help us protect the environment—Or destroy it. DW.
https://p.dw.com/p/31X4Z
Woo,
M. (2017, March 27). Ethics and the IT Professional. EDUCAUSE Review.
https://er.educause.edu/articles/2017/3/ethics-and-the-it-professional
Mercier,
H., & Sperber, D. (2017). The Enigma of Reason. Harvard University
Press.
Merett,
K. (2020, February 3). Russell Group sign the Sorbonne Declaration on research
data rights – Open Research at Bristol. Open Research at Bristol.
https://openresearchbristol.blogs.bristol.ac.uk/2020/02/03/russell-group-sign-the-sorbonne-declaration-on-research-data-rights/
Metcalf,
J. (2016a). Big Data Analytics and Revision of the Common Rule. Communications
of the ACM, 59(7), 31–33. https://doi.org/10.1145/2935882
Metcalf,
J. (2016b). Ethics Codes: History, Context, and Challenges (p. 15).
Council for Big Data, Ethics, and Society.
https://bdes.datasociety.net/wp-content/uploads/2016/10/EthicsCodes.pdf
Metcalf,
J. (2020). Letter on Proposed Changes to the Common Rule. Council for
Big Data, Ethics, and Society.
https://bdes.datasociety.net/council-output/letter-on-proposed-changes-to-the-common-rule/
Metz,
C., & Blumenthal, S. (2019, June 7). How A.I. Could be Weaponized to Spread
Disinformation. The New York Times.
https://www.nytimes.com/interactive/2019/06/07/technology/ai-text-disinformation.html
Metz,
R. (2020, January 15). There’s a new obstacle to landing a job after college:
Getting approved by AI. CNN Business.
https://www.cnn.com/2020/01/15/tech/ai-job-interview/index.html
Meyer,
R. (2018, March 8). The Grim Conclusions of the Largest-Ever Study of Fake
News. The Atlantic.
https://www.theatlantic.com/technology/archive/2018/03/largest-study-ever-fake-news-mit-twitter/555104
Davis,
M. (2010). Licensing, Philosophical Counselors, and Barbers: A New Look at
an Old Debate about Professions. The International Journal of Applied
Philosophy, 24(2), 225–236. http://dx.doi.org/10.5840/ijap201024220
Ledecky,
M. (2020, April 16). Looking Beyond Shareholders: What Is Stakeholder
Theory? EVERFI.
https://everfi.com/insights/blog/what-is-stakeholder-theory/
Michaux, B. (2018). Singularité technologique,
singularité humaine et droit d’auteur. In Laws, Norms and Freedoms in
Cyberspace/Droits, normes et libertés dans le cybermonde (pp. 401–416). Larcier.
http://www.crid.be/pdf/crid5978-/8244.pdf
Miles,
K. (2019, December 27). Should colleges really be putting smart speakers in
dorms? Technology Review. Technology Review.
https://www.technologyreview.com/s/614937/colleges-smart-speakers-in-dorms-privacy/
Miles,
S. (2017). Stakeholder Theory Classification, Definitions and Essential
Contestability. In D. M. Wasieleski & J. Weber (Eds.), Business and
Society 360 (Vol. 1, pp. 21–47). Emerald Publishing Limited. https://doi.org/10.1108/S2514-175920170000002
Millar,
J., Barron, B., Hori, K., Finlay, R., Kotsuki, K., & Kerr, I. (2018,
December 6). Discussion Paper for Breakout Session Theme 3: Accountability
in AI, Promoting Greater Societal Trust. G7 Multistakeholder Conference on
Artificial Intelligence December 6, 2018, Montreal.
https://www.ic.gc.ca/eic/site/133.nsf/425f69a205e4a9f48525742e00703d75/7301ef85db6957538525835a0016b5a4/$FILE/3_Discussion_Paper_-_Accountability_in_AI_EN.pdf
Miller,
P. (2016, May 6). Professor Pranksman fools his students with a TA powered by
IBM’s Watson. The Verge.
https://www.theverge.com/2016/5/6/11612520/ta-powered-by-ibm-watson
Stevens,
M. L., & Silbey, S. S. (2014). The Asilomar Convention for
Learning Research in Higher Education.
http://asilomar-highered.info/index.html
Mitra,
A. (2018, April 5). We can train AI to identify good and evil, and then use it
to teach us morality. Quartz.
https://qz.com/1244055/we-can-train-ai-to-identify-good-and-evil-and-then-use-it-to-teach-us-morality/
Mittelstadt,
B. (2019). Editorial Notes: Introduction: The Ethics of Biomedical Data
Analytics. Philosophy and Technology, 32(1), 17–21.
Mizoram,
Department of School Education. (2020). Code of Professional Ethics for Teachers. Government
of Mizoram. https://schooleducation.mizoram.gov.in/uploads/attachments/b800d1de2cb6ee87c08e100993f2d8dd/posts-10-code-of-professional-ethics-for-teachers.pdf
Shah,
M. (n.d.). Ethical governance in Covid times: The value of the Nolan
principles. Wonkhe. Retrieved May 20, 2020, from
https://wonkhe.com/ethical-governance-in-covid-times-the-value-of-the-nolan-principles/
Morris,
D. Z. (2016, October 15). Mercedes-Benz’s Self-Driving Cars Would Choose
Passenger Lives Over Bystanders. Fortune.
https://fortune.com/2016/10/15/mercedes-self-driving-car-ethics/
Moses,
L. (2017, September 14). The Washington Post’s robot reporter has published 850
articles in the past year. Digiday.
https://digiday.com/media/washington-posts-robot-reporter-published-500-articles-last-year/
Moyer,
E. (2020, January 18). Clearview app lets strangers find your name, info with
snap of a photo, report says. CNet.
https://www.cnet.com/news/clearview-app-lets-strangers-find-your-name-info-with-snap-of-a-photo-report-says/
Mozilla.
(2020). Promise. MDN Web Docs. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise
Mujkanovic,
A., Lowe, D., & Willey, K.
(2012). Adaptive group formation to promote desired behaviours. AAEE
2012 Conference, Melbourne, Australia. https://pdfs.semanticscholar.org/339a/7bd10ff3ca3afa1a0200889d71a37230b577.pdf
Muniasamy,
A., & Alasiry, A. (2020). Deep Learning: The Impact on Future eLearning. International
Journal of Emerging Technologies in Learning (IJET), 15(1), 188–199.
Fairweather,
N. B. (2012, July 22). CCSR: Commentary on the “Ten Commandments
for Computer Ethics.” Archive.Is. http://archive.is/kWmz
Habib,
N. (2020, March 11). Research Grant Awarded to Ryerson And Royal Roads
Researchers to Study Misinformation Related to Coronavirus. Ryerson University.
https://socialmedialab.ca/2020/03/11/research-grant-study-misinformation-coronavirus/
Narayan,
A. (n.d.). How to Recognize AI Snake Oil. Promises and Perils of AI Mar
Hicks, Arvind Narayan, Sherry Turkle, Eden Medina. Retrieved November 27, 2019,
from
https://docs.google.com/document/d/1s_AgoeL2y_4iuedGuQNH6Fl1744twhe8Kj2qSfTqyHg/edit?fbclid=IwAR0QSuS-QXJB8rxgni_zGm5KU0oQPa9AJPFv-NpKcBlOkKIJZ0J4uefhg0o#heading=h.ypt4v4y21eo5
National Council
of Educational Research
and Training [NCERT]. (2010). Draft
Code of Professional Ethics for School Teachers. UNESCO ETICO.
https://etico.iiep.unesco.org/sites/default/files/india_2010_code_of_professional_ethics_for_school_teachers.pdf
National
Transportation Safety Board [NTSB] Office of Public Affairs. (2019). ‘Inadequate
Safety Culture’ Contributed to Uber Automated Test Vehicle Crash—NTSB Calls for
Federal Review Process for Automated Vehicle Testing on Public Roads. Press
Release. https://www.ntsb.gov/news/press-releases/Pages/NR20191119c.aspx
National
Union of Journalists [NUJ]. (1936). First NUJ code of conduct 1936.
National Union of Journalists.
https://www.nuj.org.uk/about/nuj-code/first-nuj-code--1936/
National
Union of Journalists [NUJ]. (2011). NUJ code of conduct. National Union
of Journalists. https://www.nuj.org.uk/about/nuj-code/
National
Urban Security Technology Laboratory [NUSTL]. (2016). Asset Tracking and
Inventory Systems Market Survey Report.
Department of Homeland Security [DHS].
https://www.dhs.gov/sites/default/files/publications/Asset%20Tracking%20and%20Inventory%20Systems%20Market%20Survey%20Report%20December%202016.pdf
Neal,
A. (2020). Desmond Cole. In All in a Day with Alan Neal. CBC Radio.
https://www.youtube.com/watch?v=YNESMafb5ZI
Neelakantan,
S. (2019a, November). ‘Data Analytics Can Save Higher Education’, Say Top
College Bodies. Ed Tech Magazine.
https://edtechmagazine.com/higher/article/2019/11/data-analytics-can-save-higher-education-say-top-college-bodies
Neelakantan,
S. (2019b, November 25). Colleges See Equity Success With Adaptive Learning
Systems. Ed Tech (Online Magazine).
https://edtechmagazine.com/higher/article/2019/11/colleges-see-equity-success-adaptive-learning-systems
New
report on Ethics in Learning Analytics. (n.d.). ICDE. Retrieved March 4, 2020, from
https://www.icde.org/icde-news/new-report-on-ethics-in-learning-analytics
New
York Times [NYT]. (2008, September 25). Guidelines on Integrity. The New
York Times.
https://www.nytimes.com/editorial-standards/guidelines-on-integrity.html
New
York Times [NYT]. (2017, October 13). Social Media Guidelines for the Newsroom.
The New York Times.
https://www.nytimes.com/editorial-standards/social-media-guidelines.html
New
York Times [NYT]. (2018, January 5). Ethical Journalism. The New York Times.
https://www.nytimes.com/editorial-standards/ethical-journalism.html
Diakopoulos,
N. (2020, April 15). The Ethics of Predictive Journalism. Columbia
Journalism Review.
https://www.cjr.org/tow_center/predictive-journalism-artificial-intelligence-ethics.php
Nichols,
G. (2018, December 20). RFID tag arrays can be used to track a person’s
movement. ZDNet.
https://www.zdnet.com/article/rfid-tag-arrays-can-be-used-to-track-a-persons-movement/
Nielsen,
K. (1973). Ethics Without God. Pemberton Publishers.
Nilsen,
G. S. (2019). Digital Learning Arena. BI Norwegian Business School in
collaboration with EdTech Foundry 2015-2019.
https://drive.google.com/file/d/1zWqNz2n3AaKOdhHFxJHnFQdxdeUnr754/view
Nolan,
J., & Nolan, C. (2008). The Dark Knight [Screenplay]. Warner Bros.
https://www.youtube.com/watch?v=efHCdKb5UWc
Oakleaf,
M., Whyte, A., Lynema, E., & Brown, M. (2017). The Journal of Academic
Librarianship, 5, 454–461.
O’Brien,
J. (2020). 2020 EDUCAUSE Horizon Report | Teaching and Learning Edition
(p. 58). EDUCAUSE. https://library.educause.edu/-/media/files/library/2020/3/2020horizonreport.pdf?la=en&hash=DE6D8A3EA38054FDEB33C8E28A5588EBB913270C
OECD.
(2018). Case study: Free agents and GC Talent Cloud – Canada.
http://www.oecd.org/gov/innovative-government/Canada-case-study-UAE-report-2018.pdf
Office
of the Privacy Commissioner of Canada. (2008). Radio Frequency
Identification (RFID) in the Workplace: Recommendations for Good Practices: A
Consultation Paper. https://www.priv.gc.ca/media/1956/rfid_e.pdf
Ohm,
P. (2010). Broken Promises of Privacy: Responding to the Surprising Failure of
Anonymization. UCLA Law Review, 57, 1701–1777.
O’Leary,
K., & Murphy, S. (2019, July 11). Anonymous apps risk fuelling
cyberbullying but they also fill a vital role. The Conversation.
http://theconversation.com/anonymous-apps-risk-fuelling-cyberbullying-but-they-also-fill-a-vital-role-119836
Ontario
College of Teachers [OCT]. (2016). Professional Learning Framework for the
Teaching Profession. Ontario College of Teachers.
https://www.oct.ca/-/media/PDF/Professional%20Learning%20Framework/framework_e.pdf
Ontario
College of Teachers [OCT]. (2020). Ethical Standards. Ontario College of
Teachers. https://www.oct.ca/public/professional-standards/ethical-standards
Open
Research Data Task Force. (2018). Realising the potential: Final report of
the Open Research Data Task Force (p. 64). Jisc.
https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/775006/Realising-the-potential-ORDTF-July-2018.pdf
Open
University [OU], The. (2014). Policy on Ethical use of Student Data for
Learning Analytics (p. 11).
Page,
E. B. (1966). The Imminence of... Grading Essays by Computer. The Phi Delta
Kappan, 47(5), 238–243.
Palakodety,
S., KhudaBukhsh, A. R., & Carbonell, J. G. (2019). Kashmir: A Computational
Analysis of the Voice of Peace. ArXiv:1909.12940v2.
Panopoly.
(2020). Data Mart vs Data Warehouse.
https://web.archive.org/web/20200111184222/
Pariser,
E. (2012). The Filter Bubble: How the New Personalized Web Is Changing What
We Read and How We Think. Penguin.
Parkes,
D. (2019, December 2). A responsibility to judge carefully in the era of
prediction decision machines. Harvard Business School Digital Initiative.
https://digital.hbs.edu/managing-in-the-digital-economy/a-responsibility-to-judge-carefully-in-the-era-of-prediction-strikethrough-decision-machines/
Parkes,
D. C., & Vohra. (2019). Algorithmic and Economic Perspectives on
Fairness. Computing Community Consortium.
https://cra.org/ccc/wp-content/uploads/sites/2/2019/01/Algorithmic-and-Economic-Perspectives-on-Fairness.pdf
Paul,
C., & Posard, M. N. (2020, January 20). Artificial Intelligence and the
Manufacturing of Reality. The RAND Blog.
https://www.rand.org/blog/2020/01/artificial-intelligence-and-the-manufacturing-of-reality.html
Sawers,
P. (2019, April 30). Examity raises $90 million for online proctoring
platform that thwarts exam cheats. VentureBeat.
https://venturebeat.com/2019/04/30/examity-raises-90-million-for-online-proctoring-platform-that-thwarts-exam-cheats/
Picciano,
A. G. (2012). The Evolution of Big Data and Learning Analytics in American
Higher Education. Journal of Asynchronous Learning Networks, 16,
9–20.
Pink,
T. (2020). Self-Determination and Ethics. Interview by Richard Marshall.
https://316am.site123.me/articles/self-determination-and-ethics?c=end-times-series
Pinker,
S. (2008, January 13). The Moral Instinct.
https://www.nytimes.com/2008/01/13/magazine/13Psychology-t.html
Pitofsky,
R., Anthony, S. F., Thompson, M. W., Swindle, O., & Leary, T. B. (2000). Privacy
Online: Fair Information Practices in the Electronic Marketplace: A Federal
Trade Commission Report to Congress (p. 208). Federal Trade Commission.
https://www.ftc.gov/reports/privacy-online-fair-information-practices-electronic-marketplace-federal-trade-commission
Pitofsky,
R., Azcuenaga, M. L., Anthony, S. F., Thompson, M. W., & Swindle, O.
(1998). Privacy Online: A Report to Congress (p. 71). Federal Trade
Commission.
Plecháč,
P. (2019). Relative contributions of Shakespeare and Fletcher in Henry VIII: An
Analysis Based on Most Frequent Words and Most Frequent Rhythmic Patterns. ArXiv:1911.05652.
https://ui.adsabs.harvard.edu/abs/2019arXiv191105652P/abstract
Pojman,
L. P. (1990). Ethics: Discovering Right and Wrong. Wadsworth.
Powers,
A. (2018, March 23). Applying Machine Learning To User Research: 6 Machine
Learning Methods To Yield User Experience…. Medium.
https://medium.com/athenahealth-design/machine-learning-for-user-experience-research-347e4855d2a8
Powles,
J., & Nissenbaum, H. (2018, December 7). The Seductive Diversion of
‘Solving’ Bias in Artificial Intelligence. OneZero.
https://onezero.medium.com/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53
Prakasha, G. S., & Jayamma, H. S. (2012). Professional
Ethics of Teachers in Educational Institutions. Artha - Journal of Social
Sciences, 11(4), 25. https://doi.org/10.12724/ajss.23.2
Price,
M. (2019, August 8). Detecting and Defending Against Deepfakes. ZeroFOX. https://www.zerofox.com/blog/detecting-defending-against-deepfakes/
Princiya.
(2018, April 23). Web Tracking: What You Should Know About Your Privacy
Online. FreeCodeCamp.
https://www.freecodecamp.org/news/what-you-should-know-about-web-tracking-and-how-it-affects-your-online-privacy-42935355525/
Privacy
International. (2020). The SyRI case: A landmark ruling for benefits
claimants around the world.
https://www.privacyinternational.org/news-analysis/3363/syri-case-landmark-ruling-benefits-claimants-around-world
Projects
in Artificial Intelligence Registry [PAIR]. (n.d.). University of
Oklahoma. Retrieved March 4, 2020, from https://pair.libraries.ou.edu/
Raden,
N. (2019). Ethical Use of Artificial Intelligence for Actuaries (p. 38).
Society of Actuaries. https://www.soa.org/globalassets/assets/files/resources/research-report/2019/ethics-ai.pdf
Raghu,
M., & Schmidt, E. (2020). A Survey of Deep Learning for Scientific
Discovery. ArXiv:2003.11755 [Cs, Stat]. http://arxiv.org/abs/2003.11755
Rainie,
B. L., Kiesler, S., Kang, R., & Madden, M. (2013). Anonymity, Privacy,
and Security Online. Pew Research Center.
https://www.pewresearch.org/internet/2013/09/05/anonymity-privacy-and-security-online/
Raja,
D. S. (2016). Bridging the disability divide through digital technologies:
Background Paper for the 2016 World Development Report: Digital Dividends.
World Bank.
http://pubdocs.worldbank.org/en/123481461249337484/WDR16-BP-Bridging-the-Disability-Divide-through-Digital-Technology-RAJA.pdf
Barquin,
R. C. (1992). In Pursuit of a “Ten Commandments” for Computer Ethics.
Computer Ethics Institute.
http://computerethicsinstitute.org/barquinpursuit1992.html
Rawls,
J. (1999). A Theory of Justice (Rev. ed.). Belknap
Press of Harvard University Press.
Guizzardi,
R., Amaral, G., & Guizzardi, G. (n.d.). Ethical
Requirements for AI Systems. In C. Goutte & X. Zhu (Eds.), Advances in
Artificial Intelligence, 251–256.
https://doi.org/10.1007/978-3-030-47358-7
Renz,
A., Krishnaraja, S., & Gronau, E. (2020). Demystification of Artificial
Intelligence in Education – How much AI is really in the Educational
Technology? International Journal of Learning Analytics and Artificial
Intelligence for Education (IJAI), 2(1), Article 1. https://www.online-journals.org/index.php/i-jai/article/view/12675
Resnik,
D. B. (2005). The Patient’s Duty to Adhere to Prescribed Treatment: An Ethical
Analysis. The Journal of Medicine and Philosophy, A Forum for Bioethics and
Philosophy of Medicine, 30(2), 167–188.
Richardson,
K., & Mahnič, N. (2017). Written evidence (AIC0200) Submission
(House of Lords Select Committee on Artificial Intelligence).
http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/artificial-intelligence-committee/artificial-intelligence/written/70489.html
Riddick,
F. A. (2003). The Code of Medical Ethics of the American Medical Association. The
Ochsner Journal, 5(2), 6–10.
Rieke,
A., Bogen, M., & Robinson, D. G. (2018). Public Scrutiny of Automated
Decisions: Early Lessons and Emerging Methods. Upturn and Omidyar Network.
https://www.omidyar.com/insights/public-scrutiny-automated-decisions-early-lessons-and-emerging-methods
Rienties,
B., & Jones, A. (2019). Evidence-Based Learning: Futures. In Rebecca
Ferguson & A. Jones (Eds.), Educational Visions: Lessons from 40 years
of innovation. https://doi.org/10.5334/bcg.g
Robbins,
S. (2019). A Misdirected Principle with a Catch: Explicability for AI. Minds
and Machines, 29(4), 495–514. https://doi.org/10.1007/s11023-019-09509-3
Roberts,
C. (2009, July 14). Hey kids, Facebook is forever. New York Daily News.
https://www.nydailynews.com/news/money/hey-kids-facebook-article-1.404500
Rodchua,
S. (2017). Effective Tools and Strategies to Promote Academic Integrity in
e-Learning. International Journal of E-Education, e-Business, e-Management
and e-Learning, 7(3), 168–179.
Roscoe,
R. D., Wilson, J., & Johnson, A. C. (2017). Presentation, expectations, and
experience: Sources of student perceptions of automated writing evaluation. Computers
in Human Behavior, 70, 207–221.
https://doi.org/10.1016/j.chb.2016.12.076
Rushkoff,
D. (1994). Cyberia: Life in the Trenches of Hyperspace. Harper.
https://medium.com/@xephangraves/cyberia-life-in-the-trenches-of-hyperspace-by-douglas-rushkoff-2b3c6125f9f3
Rushkoff,
D. (2019, February 5). Team Human vs. Team AI. Strategy+business.
https://www.strategy-business.com/article/Team-Human-vs-Team-AI?gko=f1c4c
Saint
Joseph’s University [SJU]. (2020). How the Four Principles of Health Care
Ethics Improve Patient Care.
https://online.sju.edu/graduate/masters-health-administration/resources/articles/four-principles-of-health-care-ethics-improve-patient-care
Samuel,
S. (2019, November 15). Activists want Congress to ban facial recognition. So
they scanned lawmakers’ faces. Vox, Vox. https://www.vox.com/future-perfect/2019/11/15/20965325/facial-recognition-ban-congress-activism
Saqr, M., Fors, U., & Nouri, J. (2018). Using
social network analysis to understand online Problem-Based Learning and predict
performance. PLOS ONE, 13(9), e0203590.
Saugstad,
J. (1994, June 17). Moral Responsibility towards Future Generations of
People: Utilitarian and Kantian Ethics Compared. Lecture at University of
Oslo. http://folk.uio.no/jenssa/Future%20Generations.htm
Schaffhauser,
D. (2020, March 5). McGraw-Hill Adds AI to Writing Software
for High-Enrollment Courses. Campus Technology.
https://campustechnology.com/articles/2020/03/05/mcgraw-hill-adds-ai-to-writing-software-for-high-enrollment-courses.aspx
Schneier,
B. (2020, January 20). We’re Banning Facial Recognition. We’re Missing the
Point. New York Times.
https://www.nytimes.com/2020/01/20/opinion/facial-recognition-ban-privacy.html
Scholes,
V. (2016, October). The ethics of using learning analytics to categorize
students on risk. Educational Technology Research and Development, 64,
939–955.
Schumacher,
C., & Ifenthaler, D. (2018). Features students really expect from learning
analytics. Computers in Human Behavior, 78(January 2018),
397–407.
Sclater,
N., Peasgood, A., & Mullan, J. (2016). Learning Analytics in Higher
Education A review of UK and international practice. Jisc.
https://www.jisc.ac.uk/sites/default/files/learning-analytics-in-he-v3.pdf
Searle,
J. R. (1964). How to Derive “Ought” From “Is.” The Philosophical Review,
73(1), 43–58. JSTOR. https://doi.org/10.2307/2183201
Searle,
J. R. (1995). The construction of social reality. New York : Free Press.
http://archive.org/details/constructionofso00sear
Self,
J. (1998). The defining characteristics of intelligent tutoring systems
research: ITSs care, precisely. International Journal of Artificial Intelligence
in Education (IJAIED), 10, 350–364.
https://telearn.archives-ouvertes.fr/hal-00197346/document
Selwyn,
N. (2019). What’s the Problem with Learning Analytics? Journal of Learning
Analytics, 6(3), 11–19.
Seneca.
(2019). Moral letters to Lucilius / Letter 88. Wikisource.
Sennaar,
K. (2019, April 26). The Artificial Intelligence Tutor – The Current
Possibilities of Smart Virtual Learning. Emerj.
https://emerj.com/ai-sector-overviews/artificial-intelligence-tutor-current-possibilities-smart-virtual-learning/
Serdyukov,
P. (2017). Innovation in education: What works, what doesn’t, and what to do
about it? Journal of Research in Innovative Teaching & Learning, 10(1),
4–33. https://doi.org/10.1108/JRIT-10-2016-0007
Seufert, S., Meier, C., Soellner, M., &
Rietsche, R. (2019). A Pedagogical Perspective on Big Data and
Learning Analytics: A Conceptual Model for Digital Learning Support. Technology,
Knowledge and Learning, 24,
599–619. https://doi.org/10.1007/s10758-019-09399-5
Shane,
S., & Wakabayashi, D. (2018, April 4). ‘The Business of War’: Google
Employees Protest Work for the Pentagon. The New York Times.
https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html
Sharpe,
E. (2020, January 15). Web Foundation.
https://webfoundation.org/2020/01/for-a-healthy-democracy-facebook-must-halt-micro-targeted-political-ads/?mc_cid=39f740c4f4&mc_eid=d973949cbf
Shaw,
D. (2020, January 24). Met Police to deploy facial recognition cameras. BBC
News. https://www.bbc.com/news/uk-51237665
Shaw,
J. (2017, February). The Watchers: Assaults on privacy in America. Harvard
Magazine, January-February 2017.
https://www.harvardmagazine.com/2017/01/the-watchers
Kennedy,
S., Mercer, J., Mohr, W., & Huffine, C. W. (2002). Snake oil,
ethics and the first amendment: What’s a profession to do? American Journal
of Orthopsychiatry, 72(1), 5–15.
Shelton,
T. (2017). Re-politicizing data. In J. Shaw & M. Graham (Eds.), Our
digital rights to the city. Meatspace Press.
https://www.academia.edu/31473915/Re-politicizing_Data?auto=download
Shepherd,
J. (2016, April 5). The Next Rembrandt: Data analysts “bring artist back to
life” to create one last painting. Independent.
https://www.independent.co.uk/arts-entertainment/art/news/the-next-rembrandt-data-analysts-bring-artist-back-to-life-to-create-one-last-painting-a6969371.html
van
Nuland, S., & Khandelwal, B. P. (2006). Ethics in education: The role of
teacher codes, Canada and South Asia. UNESCO Digital Library. International
Institute for Educational Planning.
https://unesdoc.unesco.org/ark:/48223/pf0000149079
Shum,
S. B., & Crick, R. D. (2012, April 29). Learning dispositions and
transferable competencies:pedagogy, modelling and learning analytics.
Proceedings LAK’12: 2nd International Conference on Learning Analytics &
Knowledge, 29 April - 2 May 2012. http://oro.open.ac.uk/32823/1/SBS-RDC-LAK12-ORO.pdf
Siemens,
G. (2012). Learning analytics: Envisioning a research discipline and a
domain of practice. Proceedings of the 2nd International Conference on
Learning Analytics and Knowledge. https://dl.acm.org/citation.cfm?id=2330605
Simon
Fraser University. (1992). Code of Faculty Ethics and Responsibilities.
http://www.sfu.ca/policies/gazette/academic/a30-01.html
Singer,
J., & Vinson, N. G. (2002). Ethical issues in empirical studies of software
engineering. IEEE Transactions on Software Engineering, 28(12),
1171–1180. https://doi.org/10.1109/TSE.2002.1158289
Singer,
N. (2017, May 13). How Google Took Over the Classroom. New York Times.
https://www.nytimes.com/2017/05/13/technology/google-education-chromebooks-schools.html
Singer,
P. (1979). Equality for Animals? In Practical Ethics. Cambridge.
https://www.utilitarian.net/singer/by/1979----.htm
Singer,
P. (2009). The Life You Can Save.
https://www.goodreads.com/work/best_book/4787382-the-life-you-can-save-acting-now-to-end-world-poverty
Slade, S., & Tait, A. (2019a). Global
guidelines: Ethics in Learning Analytics (p. 16). International
Council for Open and Distance Education.
https://www.icde.org/s/Global-guidelines-for-Ethics-in-Learning-Analytics-Web-ready-March-2019.pdf
Slade, S., & Tait, A. (2019b). Global
guidelines:Ethics in Learning Analytics. ICDE.
https://static1.squarespace.com/static/5b99664675f9eea7a3ecee82/t/5ca37c2a24a694a94e0e515c/1554218087775/Global+guidelines+for+Ethics+in+Learning+Analytics+Web+ready+March+2019.pdf
SmartRecruiters.
(2019, February 5). Recruitment Analytics: Using Data-Driven Hiring.
SmartRecruiters.
https://www.smartrecruiters.com/recruiting-software/recruiting-analytics-reporting/
Smith,
S. M., & Khovratovich, D. (2016, March 29). Identity System Essentials. Evernym,
16.
Snowden,
E. (2015, October 12). Ask yourself... Twitter.
https://twitter.com/Snowden/status/653705501381947393
Society
of Professional Journalists [SPJ]. (1996). SPJ Code of Ethics—Society of
Professional Journalists.
http://spjnetwork.org/quill2/codedcontroversey/ethics-code-2009.pdf
Society
of Professional Journalists [SPJ]. (2014). SPJ Code of Ethics—Society of
Professional Journalists. https://www.spj.org/ethicscode.asp
Sonwalkar,
N. (2007). Adaptive Learning: A Dynamic Methodology for Effective Online
Learning. Distance Learning, 4(1), 43–46.
Sorbonne
Declaration on Research Data Rights. (2020, January). LERU.
https://www.leru.org/files/Sorbonne-declaration.pdf
Spice,
B. (2020). A.I. amplifies ‘help speech’ to fight hate speech online. Futurity.
https://www.futurity.org/artificial-intelligence-social-media-comments-2256682/
Springer
Nature. (2019). Springer Nature publishes its first machine-generated book.
https://group.springernature.com/gp/group/media/press-releases/springer-nature-machine-generated-book/16590134
Stanford Research
Institute [SRI]. (1963). Internal
memo (unpublished).
Stanford
University. (2020, March 11). “Neuroforecasting” predicts which videos will be
popular. Futurity.
https://www.futurity.org/neuroforecasting-viral-videos-2303212/
Stange,
K. M. (2013). Differential Pricing in Undergraduate Education: Effects on
Degree Production by Field (Working Paper Series Number 19183). National
Bureau of Economic Research. http://www.nber.org/papers/w19183
Strandberg,
T., Olson, J. A., Hall, L., Woods, A., & Johansson, P. (2020). Depolarizing
American voters: Democrats and Republicans are equally susceptible to false
attitude feedback. PLoS ONE, 15(2), e0226799.
https://doi.org/10.1371/journal.pone.0226799
Stribling,
J., Krohn, M., & Aguayo, D. (2005). SciGen. MIT.
https://pdos.csail.mit.edu/archive/scigen/
Sturges,
P. (2001). Doing the right thing: Professional ethics for information workers
in Britain. New Library World, 104(3), 94–102.
https://doi.org/10.1108/03074800310698146
Suler,
J. (2004). The Online Disinhibition Effect. CyberPsychology and Behavior,
7, 321–326.
Sullivan,
R., & Suri, M. (2019, November 22). Indian cafe chain customers upset by
use of facial recognition to bill them. CNN Business.
https://www.cnn.com/2019/11/22/tech/indian-cafe-chain-facial-recognition-scli-intl/index.html
Sullivan-Marx,
E. (2020). RE: Request for Comments on a Draft Memorandum to the Heads of
Executive Departments and agencies, “Guidance for Regulation of Artificial
Intelligence Applications.” American Academy of Nursing.
https://higherlogicdownload.s3.amazonaws.com/AANNET/c8a8da9e-918c-4dae-b0c6-6d630c46007f/UploadedImages/FINAL_White_House_AI_Principles_RFI.pdf
The
Center for Internet and Society [CIS]. (n.d.). Redesigning
Notice and Consent. Stanford Law School. Retrieved March 7, 2020, from
/events/redesigning-notice-and-consent
The
Concordat Working Group. (2016). Concordat on Open Research Data.
https://www.ukri.org/files/legacy/documents/concordatonopenresearchdata-pdf/
The
Ethics Centre. (2017, December 1). Big Thinkers: Thomas Beauchamp &
James Childress.
https://ethics.org.au/big-thinkers-thomas-beauchamp-james-childress/
The
Guardian. (2019, December 7). Samoa measles outbreak: 100 new cases as
anti-vaccination activist charged. The Guardian.
https://www.theguardian.com/world/2019/dec/07/samoa-measles-crisis-100-new-cases-as-anti-vaccination-activist-charged
The
International Federation of Library Associations and Institutions [IFLA].
(2019, December 10). IFLA Code of Ethics for Librarians and other
Information Workers (full version).
https://www.ifla.org/publications/node/11092
Tully,
T. (2020, March 31). Introducing Splunk Remote Work Insights: Our
Solution for the New Work-from-Home Reality. Splunk-Blogs.
Solution for the New Work-from-Home Reality. Splunk-Blogs.
https://www.splunk.com/en_us/blog/leadership/introducing-splunk-remote-work-insights-our-solution-for-the-new-work-from-home-reality.html
Tong,
L. C., Acikalin, M. Y., Genevsky, A., Shiv, B., & Knutson, B. (2020). Brain
activity forecasts video engagement in an internet attention market. Proceedings
of the National Academy of Sciences.
https://doi.org/10.1073/pnas.1905178117
Treasury
Board of Canada [TBS] Secretariat. (2011, December 15). Values and Ethics
Code for the Public Sector.
https://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=25049
Tsai,
Y.-S., Gašević, D., Whitelock-Wainwright, A., Muñoz-Merino, P. J.,
Moreno-Marcos, P. M., Fernández, A. R., Kloos, C. D., Scheffel, M., Jivet, I.,
Drachsler, H., Tammets, K., Calleja, A. R., & Kollom, K. (2018).
Supporting Higher Education to Integrate Learning Analytics (SHEILA)
Research Report, 2018. Erasmus+ Project, European Union.
https://sheilaproject.eu/wp-content/uploads/2018/11/SHEILA-research-report.pdf
Tufekci,
Z. (2018, March 10). YouTube, the Great Radicalizer. New York Times.
https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html
Tuomi,
I. (2018). The Impact of Artificial Intelligence on Learning, Teaching, and
Education. In M. Cabrera, R. Vuorikari, & Y. Punie (Eds.), JRC Science
for Policy Report. European Union. https://doi.org/10.2760/12297
UC
Berkeley Human Rights Center Research Team. (2019). Memorandum on Artificial
Intelligence and Child Rights. UC Berkeley.
https://www.unicef.org/innovation/media/10501/file/Memorandum%20on%20Artificial%20Intelligence%20and%20Child%20Rights.pdf
UCI
Compass – Comprehensive Analytics for Student Success.
(n.d.). University of California, Irvine. Retrieved March 4, 2020, from
https://compass.uci.edu/
UNICEF.
(2019). AI and child rights policy (Workshop Towards Global Guidance on
AI and Child Rights 26 – 27 June 2019).
https://ec.europa.eu/jrc/communities/en/file/5652/download?token=KpOgpkxC
United
States Holocaust Museum [USHM]. (2020). Nuremberg Code.
https://www.ushmm.org/information/exhibitions/online-exhibitions/special-focus/doctors-trial/nuremberg-code
University
of Iowa. (n.d.). Student Success Using Learning Analytics. University of
Iowa. Retrieved March 4, 2020, from
https://teach.uiowa.edu/student-success-using-learning-analytics
University
of Leicester. (2019). Ethics Checklist.
https://www2.le.ac.uk/institution/ethics/resources/checklist
University
of Montreal. (2018). Montreal Declaration for a Responsible Development of
Artificial Intelligence.
https://www.montrealdeclaration-responsibleai.com/the-declaration
UserTesting.
(2013, November 14). Reading the Matrix: How to See Testing Opportunities in
Analytics Data. UserTesting. https://www.usertesting.com/blog/reading-the-matrix-how-to-see-testing-opportunities-in-analytics-data
van
der Schaaf, M., Donkers, J., Slof, B., Moonen-van Loon, J., van Tartwijk, J.,
Driessen, E., Badii, A., Serban, O., & Ten Cate, O. (2017). Improving workplace-based
assessment and feedback by an E-portfolio enhanced with learning analytics. Educational
Technology Research and Development, 65(2), 359–380.
https://doi.org/10.1007/s11423-016-9496-8
Van
Horne, S., Curran, M., Smith, A., VanBuren, J., Zahrieh, D., Larsen, R., &
Miller, R. (2018). Facilitating Student Success in Introductory Chemistry with
Feedback in an Online Platform. Technology, Knowledge and Learning, 23(1),
21–40. https://doi.org/10.1007/s10758-017-9341-0
Velasquez,
M. (2009). A Framework for Ethical Decision making. Markkula Center for
Applied Ethics at Santa Clara University.
https://www.scu.edu/media/ethics-center/ethical-decision-making/A-Framework-for-Ethical-Decision-Making.pdf
Vesset,
D. (2018, May 10). Descriptive analytics 101: What happened? Data Analytics
Blog. IBM.
https://www.ibm.com/blogs/business-analytics/descriptive-analytics-101-what-happened/
Vincent,
J. (2019, May 13). Use this cutting-edge AI text generator to write stories,
poems, news articles, and more. The Verge. https://www.theverge.com/tldr/2019/5/13/18617449/ai-text-generator-openai-gpt-2-small-model-talktotransformer
Young,
V. A. (2020, May 20). Nearly Half Of The Twitter Accounts Discussing
‘Reopening America’ May Be Bots. Carnegie Mellon School of Computer
Science.
https://www.scs.cmu.edu/news/nearly-half-twitter-accounts-discussing-%E2%80%98reopening-america%E2%80%99-may-be-bots
Weil,
V. (2008). Professional Ethics | Center For The Study Of Ethics In The
Professions. Illinois Institute of Technology.
http://ethics.iit.edu/teaching/professional-ethics#4
Vought,
R. T. (2020, January). Guidance for Regulation of Artificial Intelligence
Applications.
https://www.whitehouse.gov/wp-content/uploads/2020/01/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf
Wagner
Sidlofsky. (n.d.). Toronto Litigation Lawyers—Fiduciary Duties & Abuses of
Trust. Wagner Sidlofsky - Toronto Law Firm. Retrieved March 6, 2020,
from https://www.wagnersidlofsky.com/fiduciary-duties
Waltz,
E. (2020, January 13). Are Your Students Bored? This AI Could Tell You. IEEE
Spectrum.
https://spectrum.ieee.org/the-human-os/biomedical/devices/ai-tracks-emotions-in-the-classroom
Wan,
T. (2019, November 27). PowerSchool completes Schoology purchase in march
toward unified K-12 data ecosystem. EdSurge.
https://www.edsurge.com/news/2019-11-27-powerschool-completes-schoology-purchase-in-march-toward-unified-k-12-data-ecosystem
Wasieleski,
D. M., & Weber, J. (2017). Stakeholder Management. Emerald Group
Publishing.
Watters,
A. (2019, August 17). HEWN 317. Hack Education. https://hewn.substack.com/p/hewn-no-317
Westwood,
S. J., Messing, S., & Lelkes, Y. (2020). Projecting confidence: How the
probabilistic horse race confuses and demobilizes the public. The Journal of
Politics. https://doi.org/10.1086/708682
Wilkinson,
D., & Doolabh, K. (2017, June 12). Which lives matter most? Aeon.
https://aeon.co/essays/should-we-take-ethical-account-of-people-who-do-not-yet-exist
Wilkinson,
G. (2007). Civic professionalism: Teacher education and professional ideals and
values in a commercialised education world. Journal of Education for
Teaching, 33(3), 379–395. https://doi.org/10.1080/02607470701450593
Wilkinson,
M. D., Dumontier, M., Aalbersberg, Ij. J., Appleton, G., Axton, M., Baak, A.,
Blomberg, N., Boiten, J.-W., Santos, L. B. da S., Bourne, P. E., Bouwman, J.,
Brookes, A. J., Clark, T., Crosas, M., Dillo, I., Dumon, O., Edmunds, S.,
Evelo, C. T., Finkers, R., … Mons, B. (2016). The FAIR Guiding Principles for
scientific data management and stewardship. Scientific Data, 3(1),
1–9. https://doi.org/10.1038/sdata.2016.18
Heaven,
W. D. (2020, May 5). An AI can simulate an economy millions of
times to create fairer tax policy. MIT Technology Review.
https://www.technologyreview.com/2020/05/05/1001142/ai-reinforcement-learning-simulate-economy-fairer-tax-policy-income-inequality-recession-pandemic/
Ware,
W. H., et al. (1973). Records, Computers and the Rights of Citizens.
Department of Health, Education and Welfare, United States.
https://www.justice.gov/opcl/docs/rec-com-rights.pdf
World
Medical Association [WMA]. (2013). Declaration of Helsinki: Ethical Principles
for Medical Research Involving Human Subjects. JAMA, 310(20),
2191–2194.
Yang,
S. J. H., & Ogata, H. (2020). Call for papers for a Special Issue on
“Precision Education—A New Challenge for AI in Education.” Journal of
Educational Technology and Society.
https://web.archive.org/web/20200110200426/
Young,
J., Shaxson, L., Jones, H., Hearn, S., Datta, A., & Cassidy, C. (2014). Rapid
Outcome Mapping Approach (ROMA): A guide to policy engagement and influence.
YouTube.
(n.d.). Protecting our extended workforce and the community. YouTube Creator
Blog. Retrieved May 11, 2020, from
https://youtube-creators.googleblog.com/2020/03/protecting-our-extended-workforce-and.html
Zawacki-Richter,
O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of
research on artificial intelligence applications in higher education – where
are the educators? International Journal of Technology in Higher Education,
16(39).
https://educationaltechnologyjournal.springeropen.com/track/pdf/10.1186/s41239-019-0171-0
Zeide,
E. (2019, August 26). Artificial Intelligence in Higher Education:
Applications, Promise and Perils, and Ethical Questions. EDUCAUSE Review.
https://er.educause.edu/articles/2019/8/artificial-intelligence-in-higher-education-applications-promise-and-perils-and-ethical-questions
Zimmermann,
A., Rosa, E. D., & Kim, H. (2020, January 9). Technology Can’t Fix
Algorithmic Injustice. Boston Review.
https://bostonreview.net/science-nature-politics/annette-zimmermann-elena-di-rosa-hochan-kim-technology-cant-fix-algorithmic
Zook,
M., Barocas, S., Boyd, D., Crawford, K., Keller, E., Gangadharan, S. P.,
Goodman, A., Hollander, R., Koenig, B. A., & Metcalf, J. (2017). Ten simple
rules for responsible big data research. PLoS Comput Biol, 13(3),
e1005399. https://doi.org/10.1371/journal.pcbi.1005399
Zotero
| Your personal research assistant. (n.d.). Retrieved March 2, 2020, from
https://www.zotero.org/start
Zupanc,
K., & Bosnic, Z. (2015). Advances in the Field of Automated Essay
Evaluation. Informatica, 39(4), 383–395.