AI Drift


I've set up a Cloudron installation - it's currently costing me about $25 per month ($15 for the Cloudron pro license and another $10 for the web server space). I don't know whether it will replace everything I'm using and paying for elsewhere but it will certainly replace some things. It's all a big experiment.

Cloudron allows me to install and test open source cloud applications really quickly. One of the things I've been playing with is FreshRSS. On a day-to-day basis, I've been using Feedly as my RSS reader. I use a cloud RSS reader rather than a desktop reader because it stores the feed results in a central place, so no matter which computer I'm using, it remembers what I've read and what I haven't.

What has been interesting about using Feedly over the years has been the way they've been extending its functionality to make it a more useful tool. For example, in addition to RSS feeds, it allows you to aggregate Reddit threads, Substack and other newsletters, Google News, and more. It used to let you follow Twitter accounts before Elon closed the door. It will also create feeds out of websites that don't support RSS.

Feedly also has an AI service it calls Leo. Basically, you train Leo by indicating whether a feed or an article belongs to a topic or category. It will then find more feeds or posts from that same category. It's a bit like a combination content filter and recommendation system. I've been using it and training it for a while.

This brings me to the title of this article - AI Drift.

There's a phenomenon labeled 'AI drift' which describes what happens "when an AI system’s performance and behavior change over time, often due to the evolving nature of the data it interacts with and learns from."  It's of particular concern to designers because "This can result in the Artificial intelligence system making predictions or decisions that deviate from its original design and intended purpose."

The full and proper name for this phenomenon is 'AI model drift'.  "In essence, AI model drift is a form of algorithmic bias that can lead to unintended consequences and potentially harmful outcomes." Specifically, "Meaning that from day 1, the data that our models utilize to make predictions is already different from the data on which they trained.... our models may suffer from model drift and model decay, unwanted bias or even just being suboptimal given the type of drift we are faced with."
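The quoted point - that the data a model sees in production is already different from the data it was trained on - can be illustrated with a toy drift check. This is just a minimal sketch, not any particular library's method: it flags drift when the live data's mean for some feature has shifted away from the training mean, measured in training standard deviations. The function name and threshold are my own illustrative choices.

```python
import random
import statistics

def drift_score(train_sample, live_sample):
    """Crude drift signal: how far the live data's mean has moved
    from the training mean, in units of the training stdev."""
    mu = statistics.mean(train_sample)
    sigma = statistics.stdev(train_sample)
    return abs(statistics.mean(live_sample) - mu) / sigma

random.seed(42)
# Data the model was trained on: a feature centred at 0.
train = [random.gauss(0.0, 1.0) for _ in range(5000)]
# Fresh data drawn from the same distribution: no drift.
fresh = [random.gauss(0.0, 1.0) for _ in range(5000)]
# Live data a year later: the world has moved on; the feature
# now centres at 0.8, so the model's assumptions no longer hold.
live = [random.gauss(0.8, 1.0) for _ in range(5000)]

print(f"no drift: {drift_score(train, fresh):.2f}")  # near 0
print(f"drifted:  {drift_score(train, live):.2f}")   # near 0.8
```

Real monitoring systems use richer tests than a mean comparison, but the principle is the same: keep comparing what the model sees now against what it saw at training time, and retrain when the gap grows.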

That's what happens to AI models. But what's important to note is that the same phenomenon - AI drift - is also happening to us.

For example, when I started using FreshRSS, I had suddenly turned the AI off. I was still using the same list of feeds - I exported my OPML file from Feedly and used it to set up FreshRSS. But I went from having my feeds massaged for me to getting the raw chronological feed of new stories. That changed my experience completely.

What struck me wasn't what I was now finding. I knew I'd find a slew of off-topic Reddit posts, objectionable policy writing from Education Next, closed access journal articles, and more. In such a case, the ease of the user interface is really important; FreshRSS is fast and responsive and I can zip through the chaff and get to the good stuff.

No, what struck me was what I had missed. Beside the categories were little 'alert!' icons. When I looked into the subscription manager I'd find that a feed was unreachable or discontinued or whatever. I had never even noticed this when I was using Feedly. Back in the early days my list of feeds was something I had to prune and care for, because the environment isn't static. People come and people go. But in Feedly, I stopped worrying about that.

What's important is to notice what's happening. When I use AI to select the posts I read in my RSS reader, I'm finding more from the categories I've defined, but I'm missing the new stuff from categories that might not exist yet - the oft-referenced filter bubble. I'm also missing the ebb and flow of the undercurrent, of the comings and goings, of the stuff that seems off topic and doesn't matter - and yet, to someone who dwells in the debris like me, it does.

This is what I'm calling 'AI drift' in humans. It's the phenomenon whereby you sort of 'drift' into new patterns and habits when you're in an AI environment. It's not the filter bubble; that's just one part of it. It's the influence the AI has over all our behaviour. One of those patterns, obviously, is that you start relying on the AI more to do things. But you also stop doing some of the things you used to do - not necessarily because the AI is handling them for you (as in this case, it might not be helping at all) but because you just start doing other things.

The same sort of phenomenon has afflicted my presentation pages. I first started making and posting audio recordings of my talks in 2004 or so (I am still amazed that so few people record their talks almost 20 years later). Since I mostly don't write my talks in advance, if I want a transcript I have to work from the recording. I used to do it by hand. Then for a while I hired human transcribers to do it for me (it would cost about $100 for an hour of audio). Finally I bought a Google Pixel specifically for the transcription function. It worked pretty well, but over time I noticed that I stopped editing the transcriptions and just left them as they were. AI drift.

I'm sure most people have a similar experience. Maybe like me they use AI to despeckle their photos, so they start pushing into higher ISO settings to get more speed in the dark. Perhaps they stopped looking for official translations and simply use the built-in browser translator. Perhaps they just rely on Maps to produce the best route without looking for something more scenic. Maybe they turn to chatGPT to write a programming function before even trying to write it manually. All of these are examples of AI drift - changes in the way we do things as AI gradually inserts itself into our lives.

At this point I suppose I'm supposed to say how bad this is and that I'm swearing off AI altogether. But that's not going to happen. I do think I'll stop using Feedly Pro (which costs $144/year). And I did buy a better camera so I can take those higher ISO shots. But I'm not going to stop using AI transcriptions. I'll continue to trust the translators. And though it hasn't happened yet, I can easily imagine using chatGPT every time I write software (after all, I pay $20/month for it) rather than just once in a while.

AI drift isn't inherently good, and it isn't inherently bad. It just is. It's like that quote often attributed to McLuhan: "We become what we behold. We shape our tools and then our tools shape us." Recognizing AI drift is simply recognizing how we're changing as we use new tools. We then decide whether we like that change or not. In my own case, it comes with some mixed feelings. But that's OK. I wouldn't expect anything else.

