Designing E-Learning 3.0 in gRSShopper - 9
E-Learning 3.0 - Part 1 - Part 2 - Part 3 - Part 4 - Part 5 - Part 6 - Part 7 - Part 8 - Part 9 - Part 10 - Part 11 - Part 12 - Part 13
The RSS Feed
The course isn't truly distributed until it has an RSS feed. We'll use the course newsletter as our RSS feed, except we'll publish the output in rss2 format instead of HTML.
So, exactly as before, we'll build the RSS page in the Page Editor.
|Figure 117 - RSS Newsletter page in the Page Editor|
- I've made the heading pretty generic (I may actually make it a template in the future) by using page variables (such as [*page_crdate*]) and site variables (like st_url).
- Dates in RSS need to be RFC 822, so I use date format=rfc822
- I've included keywords for both posts and media, so I can include the videos as links in the RSS feed. As with the JSON feed for Feeds in the previous post, I decided to use a 'datatype' element to make this clear (which is a major reason to build the RSS this way, and not with the standard Perl RSS modules - you simply can't add custom elements on the fly with those modules).
- In the keyword command for media (the first one in the list) notice I used url~youtube\.com. I added the .com because someone had an image with 'youtube' in the URL and I didn't want to link to it as a video. And I added the \ before the . because when you use ~ (as in url~youtube) it executes a regular expression search, which means (a) regular expression characters can be used (yay!) but (b) if you use them as part of a plain search string they have to be escaped, which is what the \ does.
- I've added a post type 'announcement', which I'll use for the announcements at the start of each new newsletter.
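Putting those pieces together, a single post item in the resulting feed would look roughly like this (the title, link, and date are placeholder values, not actual course content; note the RFC 822 pubDate and the custom datatype element):

```xml
<item>
  <title>Example post title</title>
  <link>https://example.com/post/12345</link>
  <description>A short summary of the post.</description>
  <pubDate>Fri, 02 Nov 2018 09:00:00 -0500</pubDate>
  <datatype>post</datatype>
</item>
```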
All the views for posts (post_link_rss2, post_article_rss2, post_resource_rss2 and post_announcement_rss2) are basically the same (except that the 'link' for articles is the site url plus /post/[*post_id*] instead of just [*post_link*]).
|Figure 118: post_article_rss view in the View Editor|
I also created an rss2 view for media:
|Figure 119: media_youtuberss2 view in the View Editor|
Notice the special 'datatype' element in this view. Strictly speaking, this is not valid RSS, but it validates just fine because RSS readers ignore elements they don't understand.
It's worth noting that I could put anything into an RSS feed like this - courses, modules, whatever. RSS readers will treat them all the same. We'll do somewhat more with them though. :)
Finally, let's validate at https://validator.w3.org/
|Figure 120 - Validated XML|
There's a warning, but we can ignore it because the feed isn't an HTML file and doesn't need to declare a doctype.
Interlude - The Cron Task
After my conversation with Tony Hirst I attended to some lingering problems with the cron jobs. Like - they weren't working. Why not?
It took me a while to figure it out but I eventually did. Specifically: when I defined the script directory for the gRSShopper scripts, I defined it relative to whichever script started first. So when admin.cgi tries to launch the harvester (using the system command) it basically tries to launch it relative to itself. As in: system("./harvest.cgi")
But cron jobs don't start where CGI scripts start. A cron job is an automatically executed set of scripts that typically launch from the user's home directory. But the CGI scripts are nowhere near the user's home directory. So the relative path fails (with an unhelpful 'permission denied' error).
I added some code to admin.cgi to fix it:
use Cwd 'abs_path';                          # core module: resolve a path to its absolute form
my $harvester = abs_path($0);                # absolute path of the running script (admin.cgi)
$harvester =~ s/admin\.cgi/harvest\.cgi/i;   # point at harvest.cgi in the same directory
*sigh* Three hours. Three lines of code.
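For completeness, the cron side of this looks something like the following (the path and schedule here are placeholders, not the actual course setup). Because cron supplies no useful working directory, the job itself is named by its absolute path, and the abs_path fix above takes care of the scripts it launches in turn:

```shell
# Hypothetical crontab entry: run the gRSShopper admin script every 30 minutes.
# Cron starts jobs from the user's home directory, so a relative path like
# ./harvest.cgi would not resolve here.
*/30 * * * * /usr/lib/cgi-bin/grsshopper/admin.cgi
```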
Comment to Jenny Mackness
Here's her post.
– the purpose of the exercise is to give you a feed for using a cloud technology (which is what Feedly is), to get some experience using linked data (which is what RSS is), and to enable you to continue to follow the course without having to rely on a central course website. And it was kind of a warm-up for some of the more challenging things ahead.
– it is worth noting that you followed the model of software people everywhere – first you tried it without instructions, then you asked someone, and finally, as a last resort, you followed the easy-to-follow video. There’s a lesson there (one you probably already knew, but still).
– the course site isn’t 100% automated yet (I know, after ten years you would think the software would be finished, but…) so I’ve been adding links by hand. Next week I’ll show how I’m using the feed rules to filter your posts for the el30 tag. And that’s all that will be available through the course feed. Your Feedly collection, however, allows you to have a wider conversation with the people in the course – backchannels, sidebars, whatever. And also, once the course stops, the course feed comes to an end. But you should have the option of continuing on with the course community. Having said all that, only you can judge whether you’ve spent your time well.
– I thought quite a bit about the ordering of content in the sidebar (so often these are created with no thought whatsoever). The stuff at the top is the stuff you will likely click frequently – returning to the course outline, going to the activity centre, reading the daily newsletter. The stuff at the bottom is stuff you will probably use only one or a few times.
> I’m still not sure of the purpose or what I stand to gain.
Even more so than in the case of the connectivist courses of 10 years ago, courses of the future (which is what we’re basically describing in this course) will consist mostly of plumbing (and where most of the plumbing is behind the scenes, in the cloud). Through the first two modules, we’ve been looking at the core elements of that plumbing. Next week, Graph, we’ll look at how that plumbing is connected and at some of the uses to which it’s put.
This is very different from previous models of online education, where a great deal of attention is paid to design. I’ve deliberately kept the design of this course super-simple, to focus on the pieces, and the connections between them. What happens in the future is that individual learning systems (including gRSShopper) will access this plumbing on an as-needed basis. There isn’t a ‘course’ per se but a learning environment.
This model is not being talked about in the course so much as it is being demonstrated via the course. Sure, we talk about the elements as they arise – as you quite correctly point out, people don’t know what RSS and OPML are, and find the idea of working with feeds counterintuitive. In the short 10 weeks that we have, I am more than anything trying to give you a _feel_ for it, because it’s otherwise very easy to become overwhelmed in a sea of details.
(But if you _want_ the details then you can follow my ‘Making of EL 3.0’ series over on Half an Hour. 🙂 )
The JSON Feed
Next I want to create a JSON feed for the newsletter page. This reflects the fact that RSS is gradually being phased out in favour of JSON. A JSON feed standard has been proposed and I've been using it for about a year on my own website.
I'm also planning for a more comprehensive data exchange between gRSShopper instances. Why should we be limited to posts? We've already seen that we can exchange information about feeds in JSON. One of the advantages of JSON is that you don't have to adhere to a specific vocabulary, which means you can basically describe whatever you want in JSON.
Setting up the JSON feed is exactly the same as setting up the RSS feed. First, we create the JSON page in the Page Editor.
|Figure 121 - JSON page in the Page Editor|
|Figure 122 - post_link_json view in the View Editor|
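As a rough sketch, a feed built along these lines comes out looking something like this (titles, URLs, and dates are placeholders, not actual course output; the datatype field mirrors the RSS setup, and note that JSON Feed dates use RFC 3339 rather than RFC 822):

```json
{
  "version": "https://jsonfeed.org/version/1",
  "title": "Course Newsletter",
  "home_page_url": "https://example.com/",
  "items": [
    {
      "id": "12345",
      "url": "https://example.com/post/12345",
      "title": "Example post title",
      "content_html": "<p>A short summary of the post.</p>",
      "date_published": "2018-11-02T09:00:00-05:00",
      "datatype": "post"
    }
  ]
}
```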
Again, it's important to validate the JSON feed. Unlike other systems, gRSShopper does not prevent you from making mistakes in the coding. Maybe one day I'll integrate it with a JSON validator, but for now, it's up to you.
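Until then, a quick local check does the job; for instance (assuming the generated feed has been saved to a file - el30.json here is a hypothetical name - and that Python 3 is installed):

```shell
# Exits non-zero and prints an error if the feed is not well-formed JSON.
python3 -m json.tool el30.json > /dev/null && echo "feed is well-formed JSON"
```

Note this only checks that the file is syntactically valid JSON, not that it conforms to the JSON Feed spec.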
That's it for this post, which has actually been several days in the writing.