Digital dust?

I recently listened to a two-part BBC radio documentary which was, in turn, inspiring, frightening, and anger-provoking. Before considering the documentary let's start with a bit of judicious hyperbole … at least I hope it's hyperbole. Question: Where can I find the history of the 20th century? Answer: What history of the 20th century? In the UK, way back in the early 1980s, you could feel at the cutting edge of the computing revolution if you were the proud owner of a BBC Model B computer. Our North American cousins should substitute Tandy TRS-80, Commodore PET, or whatever was their favourite electronic box at the time:)

Owners of the BBC Model B would have stored their data either on audio cassette tape (which they would later painfully try to reload back into the computer) or, if they were really cutting edge, they may have owned an external five-and-a-quarter-inch floppy disk drive.

Now fast forward to today and imagine you suddenly need the data that was stored on that BBC-format cassette tape or floppy disk. Still have access to that BBC computer? Have the computer-compatible tape deck? Is the tape still usable? Got access to a five-and-a-quarter-inch floppy disk drive? Is the disk still usable? Fast forward another 50 years and what, honestly, are the chances of being able to recover any data?

The data stored by your vintage computer may not have been of profound cultural or national significance, but it's lost, and our example problem is actually one facing the custodians of the world's knowledge bases today.

The digital world is giving us incredible storage and search potential right at our fingertips or computer desktops but it's an ephemeral world which can easily be built, according to Jeff Rothenberg, on technological quicksand. Note that Rothenberg's paper was written in 1998 before the many digital options that exist today. Today's leading edge delivery technology is, oh so quickly, tomorrow's junk, but, as we are now finding out, even junk matters.

A book written in the 16th century can still be read today whereas a floppy disk full of data from the mid 1980s requires a major investment in data recovery. So it's reassuring to think that the world's libraries at least are 'on the case'.

But the libraries' track record isn't actually so good.

Indeed, according to Nicholson Baker's 2001 book Double Fold: Libraries and the Assault on Paper, libraries contributed significantly to the problem. Baker described how libraries, faced with major storage problems, became enraptured by the archival potential of new technologies like microfilm and so embarked on a “slum clearance” process which ran from the pre-war era until the early 1990s. Major libraries like the US Library of Congress, according to Baker, led the way in the destruction or 'sell-off' of original prints. Why? They assumed that the new technology (microfilm) was new, shiny, and better for researchers, a debatable point because a reel of film is sequential and therefore discourages browsing. Baker asserts that a second wave of destruction has now begun with the digitisation of books; he argues passionately for print due to its inherent durability and longevity.

Baker was probably a little too hard on modern libraries but, nevertheless, he made a valuable contribution to raising our awareness of a very serious issue. However, I'm not convinced about the longevity of modern print media, which may use poor-quality paper and ink that can turn information to dust. Richard J Cox's Vandals in the Stacks?: A Response to Nicholson Baker's Assault on Libraries takes issue with Baker's key assertions and his methodology.

New technology brings progress and advantages on the one hand but creates new problems on the other. Think, for instance, of the paperback book you want to lend to a friend. At the moment you hand over the artefact and your friend duly reads said book at their convenience.

Now think of an ebook equivalent in 10 years' time. What do you mean you want to share a 'book' with a friend?

Digital rights backed by legislation will do their best to make such sharing difficult. Why? Because of course the ebook text is only being licensed to you and no one else; and digital rights management systems aim to ensure that remains the case. The technology is not the problem here but, arguably, the attempt to establish the conditions for further economic exploitation of the information encapsulated by the technology is becoming so.

Despite technological advances, the ebook equivalents of the iPod (irony deliberate) are unlikely to take off until publishers finally learn that the ebook has to have advantages (to the users) greater than a paper equivalent. Restricting use and compromising data persistence may temporarily help publishers sleep easily in their beds at night, but to the detriment of users. In a recent Auricle article, Come in book number 3! … your time is up, I described how Sony's Librié could have been an object of desire. Who wants a 'book', however, which sits there like a timebomb waiting to self-detonate when its allotted time is up? Ok … more hyperbole, you just won't be able to read the 'book' any more when its allotted time is up.

But what of the Web?

If anything that's even more ephemeral.

All users of the Web will have experienced that moment when they find a link is dead – a site that was available only last week is no longer available and its information has disappeared, perhaps forever, into the ether. On the one hand the Internet can provide access to what was a previously unimaginable quantity of data and information. The transient nature of much of that information (or the sites on which it is hosted) can, however, create intense feelings of insecurity for those whose occupation or interests rely on the persistence of such resources. It's not unusual now for the Web to become the primary source of reference, and unlike a book or periodical there's no ISBN or ISSN, no back catalogue to refer to. Or is there?

First, consider Lots of Copies Keep Stuff Safe (LOCKSS), a collaboration involving Stanford University, the National Science Foundation and Sun Microsystems. LOCKSS maintains an infinite cache (one which is never flushed) of electronic journal articles, with the cache being replicated by participating libraries. LOCKSS also has a self-repair mechanism which maintains the integrity of the data at each site, ensuring that digital content is preserved and that, if one site fails, the others can continue to provide access and repair the compromised site. For more information on LOCKSS visit http://lockss.stanford.edu, but the article Preserving today's scientific record for tomorrow on the British Medical Journal web site also provides a good overview, from which I take this quote:

“For librarians whose mission is to transmit today's intellectual, cultural, and historical output to the future, it's fast becoming a nightmare.”

The LOCKSS initiative is also interesting because it recognizes that only by having many copies of a work in circulation is there a hope of preserving the data and information within. Common sense you may think, but compare this to the extreme measures taken to preserve 'rare' knowledge artefacts, e.g. restricted access, environmental control of light, temperature, and humidity. Arguably, the 'lots of copies' approach is the only one which makes sense in the digital world, but of course the challenge is in deciding who has the right to make, archive and disseminate such copies. One view could see data, information and knowledge as belonging to all and dissemination as the right of all, whereas another view perceives data, information and knowledge as commercially valuable and would seek to restrict copying and dissemination.
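To make the 'lots of copies' principle concrete, here's a deliberately simplified sketch in Python of the kind of audit-and-repair cycle such a scheme implies: each participating site compares a fingerprint of its copy with its peers and, where it disagrees with the majority, replaces its copy from one that agrees. This is not the actual LOCKSS polling protocol (which is far more subtle about trust and tampering); it just illustrates the principle that many copies plus mutual checking beats a single pampered original.

import hashlib
from collections import Counter

def digest(content: bytes) -> str:
    # Fingerprint a stored copy so that replicas can be compared cheaply.
    return hashlib.sha256(content).hexdigest()

def audit_and_repair(replicas: dict) -> dict:
    # replicas maps site name -> the bytes that site currently holds.
    votes = Counter(digest(copy) for copy in replicas.values())
    majority_hash = votes.most_common(1)[0][0]
    good_copy = next(c for c in replicas.values() if digest(c) == majority_hash)
    # Any copy that disagrees with the majority is repaired from a good peer.
    return {site: (copy if digest(copy) == majority_hash else good_copy)
            for site, copy in replicas.items()}

# Three libraries hold the same article; one copy has quietly rotted.
replicas = {
    "library-a": b"Vol 12, Issue 3: ...",
    "library-b": b"Vol 12, Issue 3: ...",
    "library-c": b"Vol 12, Issue 3: ... [bit rot]",
}
repaired = audit_and_repair(replicas)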

Second, consider the Internet Archive which, to quote:

“… is working to prevent the Internet — a new medium with major historical significance — and other 'born-digital' materials from disappearing into the past. Collaborating with institutions including the Library of Congress and the Smithsonian, we are working to preserve a record for generations to come … with the purpose of offering permanent access for researchers, historians, and scholars to historical collections that exist in digital format.”

The Internet Archive has created The Wayback Machine which provides a tool for scholars and researchers to view Web sites as they were, not just how they are now. For example, entering “http://www.bath.ac.uk” into the Wayback Machine will enable comparison of the University of Bath's changing Web site from April 1997 onwards. Now think of a researcher doing the same in, say, 50 years. The Wayback Machine, or any similar initiative, could undoubtedly make some governments, institutions or even individuals nervous, since a permanent publicly accessible record or information archive is sometimes not desired, e.g. ongoing access to weapons-building information post September 11, 2001. In reality it's easy for web sites to exclude themselves from the attentions of The Wayback Machine or to have their entries removed. Of course, it's perhaps dangerous to assume that The Wayback Machine, or its equivalent, will itself exist in 50 years' time.

What's the relevance to e-learning? Well, assuming that the juggernaut of digitization is now unstoppable, we have to assume that, in one form or another, the intranet/internet is going to become the primary disseminator of learning artefacts/material/resources … you can even use the 'o' word if you like:) So are these artefacts to be considered ephemeral, disposable, of no historical significance? What about version control and quality assurance? As knowledge marches forward are we to digitally pulp what was once believed to be true but has now been disproved?

And what about those online discussions, real time chats, instant messaging, weblogs (such as Auricle)? What happens when all of these database-driven web sites are declared obsolete, 'off message', or whatever?

And what about distributed learning artefacts/materials/resources where there is no one centralized repository or system and which depend on syndicated 'feeds' via RSS/Atom etc? And what about aggregations of learning resources that are formed from the outputs of multiple Web services? Undoubtedly, some will see the only solution as being managed centralized repositories with guaranteed backup. The counter argument will be that only via diversity and widespread dissemination can we optimize the opportunities for artefact survival.

Even should the digital artefacts survive there is no guarantee we, or future generations, will be able to make sense of them. Back to our BBC computer example. Let's say you've copied the data on your five-and-a-quarter-inch disc to a modern USB memory stick. Load it into, say, Windows XP or any Linux distribution and what use can you make of it? Absolutely zilch! You need the original hardware and software which made use of this data. Haven't got a BBC computer? Haven't got the software? … Oh dear!

Just such a quandary faced what was once a BBC flagship project called the Domesday Project. In the UK in 1986 the Domesday Project was a relatively big event. It was a project which mobilized communities and nearly a million people across the UK to gather local data. All of course archived in that leading edge technology of the time, a pair of interactive video discs. A couple of years ago the BBC found it could no longer access these discs and so GBP 2.5 million and a whole lot of community effort was about to go to waste. For an account of some of the heroic efforts needed to save it have a look at the wonderfully named CAMiLEON site. The Domesday interactive video discs weren't digital artefacts, but the issues are still highly relevant to this Auricle article. As well as what the CAMiLEON site has to say about the importance of emulation to preservation, Stewart Granger's D-Lib Magazine article (October 2000) Emulation as a Digital Preservation Strategy is still worth a read. Stewart was project co-ordinator of the CAMiLEON Project.

So let's finish off where I started, with those two excellent BBC radio programmes called Losing the Past. The two programmes pose a possible future in which the 20th century becomes a new dark age, with future generations denied access to the knowledge artefacts that would enable them to make sense of what we did.

To quote from the BBC Losing the Past site:

“In our headlong rush to go digital much of our past is becoming just meaningless code of 0s and 1s. A substantial amount of material stored on computers, magnetic tape and even CDs is no longer accessible due to rapid deterioration and obsolescence. The average life of a tape is fifteen years, a CD twenty, computer systems and software far less.”

There's also some really frightening stuff about how governments, as custodians of, for instance, irreplaceable census data, haven't been doing a very good job. The programme also raises concerns about the care (or lack of it) which digital data with military/political embarrassment potential may or may not be getting. It becomes possible, therefore, for the past to be erased and thus history effectively changed. Those lost emails suddenly take on a new significance. And what about that Freedom of Information Act, recently implemented in the UK? It could so easily become the freedom of only that information we still have available, or are prepared to give you, because business imperatives or confidentiality reasons can always be declared. And what about the stuff which will now be passed on orally with no written audit trail?

We are entering an era in which the issues related to freedom of information, preservation of information and access to information will not be the province of just a few. The growth of the Internet as a medium for communication and information dissemination now brings the debate out from being the province of just a few special interests or specialists, to being one which will affect us all.

Further reading:
Electronic Trail Goes Cold, Mark Tran, The Guardian, March 7, 2002

Learning Networks versus the Behemoth?

Of the many talks Stephen Downes, Senior Research Officer for the National Research Council of Canada, gave during his recent tour of Australia, his Learning Networks paper presented at the Australian College of Educators and the Australian Council of Educational Leaders Conference in Perth, Australia (9 Oct 2004) should give us all pause for thought. Stephen Downes is one of the few people in the global e-learning community who seem to be asking the questions that need to be asked. In my opinion, Downes' paper highlights how we are in danger of creating worlds where so-called learning technologies are becoming more about administration, management and control than powerful tools in the service of learning and learners. OK, some may see Downes as idealistic and argue that the 'real' world is about big student numbers and constraints on physical estate and finance, which translates into the need to employ technologies in the 'optimization' of teaching/learning. But when this 'reality' translates into yet more mechanisms for delivering prescriptions of pre-packaged content and process, maybe Stephen's idealism is no bad thing. Otherwise, the concept of student-centredness becomes mere rhetoric.

What his paper isn't promoting is anarchy and chaos; instead he is highlighting that there are serious limitations to the ultra-rationalistic world of centralized e-learning specifications/standards, learning objects, content management, and repositories.

“Learning resources would be authored by instructors or (more likely) publishing companies, organized using sequencing or learning design, assigned digital rights and licenses, packaged, compressed, encrypted and stored in an institutional repository … they would then be unpacked and displayed to the student, a student who, using a learning management system, would follow the directions set out by the learning designer, work his or her way through the material, maybe do a quiz, maybe participate in a course-based online discussion … That’s the picture. That’s the brave new world of online learning. And honestly, it seems to me that at every point where they could have got it wrong, they did.”

Ouch!

And on the standardization of e-learning content packages and the 'lock-in' to proprietary VLEs he says:

“… this model is about as far from the model of the internet as one could get and still be in the realm of digital content. It resembles much more a school library or a CD collection than it does the world wide web. It also resembles the way publishing companies view the sale of digital journal subscriptions and e-books, as prepackaged content, the use of which is tightly controlled, or of software, complete with encryption and registration, that has to be licensed in order to be used, and requires an installation process and close interaction with an operating system, in this case the LMS. And, of course, without an LMS, the learning content is effectively useless. You can’t just view it on your web browser … if online learning held the promise of reducing the cost of learning materials and opening access to all, this model effectively took it away.”

Ouch!

And on attempts at standardization of Learning Design (capitalization deliberate) he focuses right in on that high teacher-control ethos:

“Learning Design is, in my opinion, very much a dead end. A dead end not because it results in e-learning that is linear, predictable and boring, though it is that. A dead end not because it reduces interaction to a state of semi-literate yes-no, true-false multiple choice questions, though it is that. It is a dead end because it is no advantage over the old system – it doesn’t take advantage of the online environment at all; it just becomes an electronic way to standardize traditional class planning. It’s not cheaper, it’s not more flexible, and it’s not more accessible.”

Mega ouch!

He then goes on to suggest that useful learning objects are more like a multimedia blog entry than the highly processed artefacts of the e-learning industry (my terminology).

“Learning objects may be constrained, learning design preordered, their authoring cumbersome and their distribution controlled. Blogs are the opposite of all this, and that’s what makes them work.”

But how are we to find learning objects/resources? In Downes' view, for which I have some sympathy, the federated searches implicit in centrally controlled systems are not what works, and not always what's required. At this point his well known advocacy of simple syndication technologies, like RSS, comes to the fore.

“What makes RSS work is that it approaches search a lot more like Google and a lot less like the Federated search … Metadata moves freely about the internet, is aggregated not by one but by many sources, is recombined, and fed forward.”

So if Downes isn't arguing for anarchy and chaos where does the order come from? His answer is learning networks which make sense of, and thrive within, apparent disorganization and diversity.

“It should not be surprising that order emerges from a network of disorganized and disparate sources … Order emerges out of networks because networks are not static and organized but instead are dynamic and growing … Connections come, connections go. A connection may be used a lot, and grow stronger. It may be unused, and grow weaker … Like attracts like. Clusters form, concepts emerge, and small worlds are created.”

He perceives the learning networks to be:

“… the ecosystem, a collection of different entities related in a single environment that interact with each other in a complex network of affordances and dependencies, an environment where the individual entities are not joined or sequenced or packaged in any way, but rather, live, if you will, free, their nature defined as much by their interactions with each other as by any inherent property in themselves.”

There are many more interesting quotes but I'm in danger of republishing his complete paper so you'll have to read the original for yourselves. In his conclusion he provides some timely advice for would be purchasers of e-learning systems which makes the paper worth a read in itself.

Is he right?

Probably.

Will contributions like this stop us in our tracks?

Probably not … at least not for a long time.

The global e-learning business is, as Downes himself alludes to, just that, a business. There's an awful lot of investment in time, money, and reputations gone into, and going into, the creation, support, and maintenance of systems which are as much about, if not more about, managing, controlling, and limiting as they are about learning and education. The irony of course could be that once we've created this chocolate box the consumers don't actually want the chocolates inside at all, but find some use for the box:)

What's important, however, is that we find space for Downes' vision of e-learning and not have it crushed underfoot by the putative Behemoth. The difficulty here is of course the seductive myth of the e-learning solution which we all know is just a matter of selecting the best off-the-shelf proprietary product … right?

For readers interested in pursuing the Learning Networks concept a bit further, Rob Koper's recent presentation at ALT-C 2004, entitled Moving towards learning networks for lifelong learning, provides a 'systems' view of learning networks. The irony here, in the context of this Auricle article, is of course that Rob Koper's academic home is the Open University of the Netherlands, which some would say is the crucible of IMS Learning Design.

Clark Kent solutions have super-powers - well sort of!

It's getting harder to categorize software solutions, what with CourseGenie enhancing Microsoft Word so that it becomes a SCORM/IMS/Blackboard/WebCT authoring tool, and now content management solutions or portals crossing over into learning management system (VLE) territory. Like meek and mild Clark Kent's transformation to Superman, some of these latter solutions can metamorphose, with apparent ease, into something looking awfully like a learning management system or VLE … Earlier articles in Auricle considered how simple content management systems like those driving weblogs had potential as 'alternative' virtual learning environments. I've kept looking and touching, and the more I look at what's currently available in the open-source landscape the more convinced I become that the swiss-army-knives of functionality that are quasi-monolithic VLEs are cul-de-sacs which we will want to reverse out of … eventually. But of course other, non-educational, imperatives will then inhibit us from doing so:(

Our earlier taste of PostNuke, for instance, showed how easy it could be to plug in a new package of functionality, thus substantially enhancing the core environment. We also found it a breeze to have one application, i.e. the open-source VLE Moodle, authenticate through another, i.e. PostNuke, thus having one login serve for both.

Again, with Moodle we found it easy to have it authenticate against our University's LDAP directory thus eschewing multiple logins (but we didn't have time to investigate authorisation for specific Moodle courses:)
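For readers who haven't met it, the underlying idea of LDAP authentication is simply to attempt a directory 'bind' with the credentials the user supplies and treat a successful bind as proof of identity. The sketch below shows that general pattern in Python using the ldap3 library; this is emphatically not Moodle's own code (which is PHP), and the server address and DN template are invented for illustration.

from ldap3 import Server, Connection, ALL

LDAP_HOST = "ldap.example.ac.uk"                                 # invented host
DN_TEMPLATE = "uid={username},ou=people,dc=example,dc=ac,dc=uk"  # invented DN layout

def ldap_authenticate(username: str, password: str) -> bool:
    # Try to bind to the directory as this user; the directory does the
    # password checking, so the application never holds a password store itself.
    server = Server(LDAP_HOST, get_info=ALL)
    conn = Connection(server, user=DN_TEMPLATE.format(username=username),
                      password=password)
    ok = conn.bind()
    conn.unbind()
    return ok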

All of these black arts are usually the province of high priests and priestesses in information and computing services, but the availability of open-source code and information certainly provides us with experiences which make us better informed and help us make better decisions … hopefully:)

So to help me warm to my theme first let me assert that there's a lot of e-learning activity in the world … right?

There must be! There are thousands of Blackboard, WebCT et al 'courses' out there.

Now let me pose a really difficult question.

How much so called e-learning is really using a proprietary VLE as a content repository with perhaps a smidgen of noticeboard?

Go on do the audit!

How often is that institutional VLE being used primarily for purely administrative purposes, standing in for functionality that should exist within the institution's core IT infrastructure, or, even worse, replicating functionality that already exists?

How often do faculty actually do more than post a few links to a few resources or upload some content?

How often is it a faculty or departmental administrator who's actually making more use of the environment than academic staff?

Go on do the audit!

The knowledge transfer model (or is that the information/content model?) is pretty ingrained in the culture of particularly undergraduate higher education. It's little wonder, therefore, that technology is first perceived as a wonderfully efficient way of disseminating content. Which it is. Diana Laurillard's 2002 Educause Review article Rethinking Teaching for the Knowledge Society on the inadequacy of the knowledge transfer model puts the case far more cogently than I could ever do. However, like an antibiotic-resistant disease (or is it a comfort blanket:) the content transfer model still predominates.

So what should we do?

We could become a cabal of fundamentalist visionaries who expend a lot of energy trying to bring the 'contentcentrics' around to the 'one true faith':) Instead, it may be more productive to adopt an embrace and extend strategy, i.e. go with what they feel comfortable with and provide them with opportunities, tools and support which extend what they want to do.

So we give them access to an enterprise class proprietary VLE, right?

Well … not quite.

What I'm suggesting here is that in many cases a VLE (as we currently know them) may be expensive overkill (licensing, training etc) for what are, after all, relatively simple initial requirements, e.g. post up some course content and make a few announcements. Before the hit squad arrives note the emphasis on initial:)

So what's brought me to propose this heresy?

I've been looking at content management systems, more specifically open-source ones, and again find myself increasingly impressed with what I see. One of the beneficial side effects of open source development, with potentially multiple developers, is that the technical architectures tend to be modular, which means that the core functionality of whatever software artefact is produced can be enhanced by additional functional modules. Add a vibrant user and support community and you get an explosion of optional additional functionality. Note the emphasis on optional. As a result, a barebones content management system will do just that. But want a weblog, instant messaging, a discussion board, a Wiki et al? Then, sir/madam, select from this list and voila! you've got something that begins to look awfully like a virtual learning environment. Which poses a very interesting question: if that's the case, why not, instead of a proprietary VLE, use such an extensible content management system over which you have some control?

Let's focus on one example of such a system.

There's an open source content management system called Plone which, in its basic configuration, is a perfectly reasonable content management system and portal. There's a vanilla Plone demonstration site available for those so inclined.

However, because Plone's architecture is modular, developers in continental Europe (Austria, Germany, and the Netherlands) are currently enriching and adapting basic Plone to increase its usefulness in an educational context. This initiative is now called EduPlone.

At the same time there is another initiative arising from Italy called Plone Campus. There's an interesting set of slides about Plone Campus from this year's Plone Conference.

To reiterate, what I'm describing here is a core product which, albeit useful in its own right, is designed in such a way that it can be adapted, extended and enriched so that Clark Kent can become Superman.

Have a brief look at some of the extensions on offer for Plone. Not enough? Then try more extensions here. Or for some of the latest work there's always the Plone collective offerings.

For those so inclined it could be worth a visit and registration at the EduPlone demonstration site. It looks just like the plain vanilla Plone demo site but enables users to add EduPlone functionality to their structures.

We've installed and used the vanilla Plone without apparent problems but we've had less success in our local install of EduPlone. EduPlone installs fine but only some things work well, e.g. the discussion board; other aspects appear less robust. To be fair, EduPlone is still very much a work-in-progress and the hosted demo site suggests it is possible to do better than we've managed to. I was concerned by the apparent lack of activity on the support sites for EduPlone, but a visit to the CVS code repository suggests that the project is still very active. With EduPlone, it's important to grasp that this is not a different product from Plone but is instead using Plone's extensible architecture to add a suite of extensions (products in Plone speak) which the developers believe are most relevant to educational contexts.

For those interested in the history of the EduPlone initiative it's worth looking at the Plone-Educational archives in SourceForge.

There's a growing number of Plone sites available, although you might not recognize them as such. Plone can be the hidden engine of content managed Web sites. For a helicopter view Plone.org's Plone Sites is a good place to start.

For a more focused example of Plone being used in an educational context, The Harvey Project may be of interest. The Harvey Project describes itself as:

“An international collaboration of educators, researchers, physicians, students, programmers, instructional designers and graphic artists working together to build interactive, dynamic human physiology course materials on the Web”. The site offers us an interesting example of what they call a 'reusable learning asset' … they just couldn't bring themselves to use the 'O' word:)

Another example of a Plone site, this time for educational community support, is Opencourse.org.

But nothing can be that good, so what are the potential gotchas?

Plone depends on the Zope framework. Zope.org describes their system as:

“an open source application server for building content management systems, intranets, portals, and custom applications. The Zope community consists of hundreds of companies and thousands of developers all over the world, working on building the platform and Zope applications. ”

Now some may see this as a good thing and I don't feel qualified to comment, but Zope is undoubtedly different. To me, it had become a highly respected but perhaps little understood initiative, only exploited by a high priesthood of 'opensorcedom'. And then along comes Plone which some would argue is the first major application of Zope.

So what are the issues? Well, Plone depends on the underlying Zope and, to some extent, attempts to shield users from the complexities of the parent system. But use Plone for any length of time and you are going to have to understand and interact with the underlying Zope system. So basically, sure, start reading the Plone documentation, but system administrators would benefit mightily from moving quickly on to the Zope documentation as well.

Next up: component dependencies. Adding a new function to Plone is usually as easy as it gets. Each new function is presented as a folder which contains the necessary code and resource elements. This folder is transferred into a designated part of the Plone directory tree. Plone is stopped and restarted and the new function and its interface become available in the 'Add item' control. So far so good. But sometimes the new function depends on the existence of another functional extension which, if missing, breaks the new addition. However, readers of manuals and documentation should have little problem with this concept:)
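Mechanically, the installation step really does amount to little more than copying the product's folder into place and restarting Zope; a tiny Python sketch is below (both paths are invented, and your instance layout will certainly differ).

import shutil

# Copy the downloaded product folder into the instance's Products directory.
# Both paths here are made up for illustration.
shutil.copytree("downloads/SomePloneProduct",
                "/opt/zope/instance/Products/SomePloneProduct")

# Now restart Zope/Plone (via its control script or service manager) and the
# new function appears in the 'Add item' control -- provided any products it
# depends on are already installed, otherwise the addition breaks.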

The next issue is where exactly should enhancements (products) be added? In Zope? In Plone? As described above the process is pretty easy, but now let me add some complexity. Some products are Zope compatible only and must be installed only in the Zope system and will not show up in Plone. Some products are Zope and Plone compatible and must be installed in the Zope system only. Some products appear to be only Plone compatible and must be installed in the Plone system only. Once you adjust to this tortuous way of thinking it's not too bad, but initially confusing or what? The problem perhaps arises because Plone is gaining traction and therefore the demand for Plone products is where it's at … but I could be wrong about this:)

Now we get on to what I consider the really serious issue, which will apply to all systems which place themselves in the content/document management/repository space. Ok, we can put our content/material/resources into systems such as Plone/Zope but just how easily can we get them out again?

How do we prevent ourselves exchanging proprietary system 'lock-in' for open-source system 'lock-in'? Plone, EduPlone, Plone Campus et al are based on the underlying Zope framework, which is undoubtedly powerful but has ploughed its own furrow. In essence, how easily can we get our content out of a system like Plone and its variants once it's in there? This is a topic that has obviously been exercising some members of the Zope community. Reassuringly, the EduPlone community appear to be addressing the issue by working on the export of Plone folders as IMS compliant Content Packages via their GoZip Plone product. I couldn't get this to work on my install but, as I suggested earlier, this is obviously a work-in-progress.
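To make the export target concrete: an IMS Content Package is, in rough terms, just a zip archive with an imsmanifest.xml at its root describing the resources inside. A hand-rolled Python sketch follows; the file names and identifiers are invented and the manifest is stripped to the bare minimum, so treat it as an illustration of the shape of the thing rather than a conformant exporter.

import zipfile

# A skeletal manifest; a real one carries schema metadata, organization and
# item structure, and more. Identifiers and file names here are invented.
MANIFEST = """<?xml version="1.0" encoding="UTF-8"?>
<manifest identifier="MANIFEST-1"
          xmlns="http://www.imsglobal.org/xsd/imscp_v1p1">
  <organizations/>
  <resources>
    <resource identifier="RES-1" type="webcontent" href="lecture1.html">
      <file href="lecture1.html"/>
    </resource>
  </resources>
</manifest>
"""

def export_package(files: dict, out_path: str) -> None:
    # files maps archive paths (e.g. 'lecture1.html') to their byte content.
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("imsmanifest.xml", MANIFEST)
        for name, content in files.items():
            zf.writestr(name, content)

export_package({"lecture1.html": b"<html><body>Lecture one</body></html>"},
               "course-export.zip")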

As with all open source projects a major weakness can be the lack of documentation. Documentation seems to be something that is done after the event, or viewed as a supplementary activity instead of an absolutely critical component of a successful initiative, particularly a complex one. Forums can be useful, but as we found in our recent explorations of Moodle, the open source VLE, a vibrant multi-layer developer and user community can, ironically, make it very hard work building a coherent knowledge base. But hey! … great opportunity for wannabe authors of the definitive guide to whatever.

Having said the above, Plone's documentation isn't too bad, with Plone.org offering the online Plone Book. The 'book' is based on Plone 1.x so is getting a little dated, but it's still useful. For something more up-to-date there's always Andy McKay's Definitive Guide to Plone which is available both in print (Apress) and online, although the online version appears to be formatted in order to make producing a hard copy difficult.

EduPlone documentation is, again, best described as a work-in-progress. I found it particularly frustrating trying to build an accurate picture of this project. However, due credit to the EduPlone developers who are running a multi-lingual site (German, Dutch and English). Also, the mailing list archives and other documentation for EduPlone appear to be a bit sparsely populated, which kind of limits their usefulness for those, like me, who are trying to build a picture of where the project's going. But pop in to the EduPlone CVS (Concurrent Versions System) archives and a different picture emerges … here is where all of the interaction and communication is happening.

So let's try and make sense of all of this.

What's started to emerge from the open-source community are solutions like Plone, Moodle et al which can be extended and enriched by communities of users and practice. What I find of particular interest in the likes of Plone is the concept of 'plugging in' discrete elements of functionality, so that if all I want is, say, content upload and a weblog, that's all I need to have.

To be fair to proprietary vendors some of them see the advantages of allowing users to extend the core system, e.g. Blackboard's Building Blocks. It's just that you only get such privileges when you've already bought into their whole enterprise package. The open source alternatives are building extensibility into the system architecture from the ground up and provide no barriers to entry based on licensing fees paid.

It seems to me that we are fast moving into a situation where, instead of having a paucity of choice and near monopoly proprietary provision, there is considerable scope for institutions to build powerful, customizable solutions based on open source foundations. The key question, as I have addressed in previous Auricle articles, is whether HEIs which have already embedded proprietary systems into their infrastructure will actually be in a position to participate. I suspect not, and that's a pity because the whole educational community loses much from their non-participation.

Beyond content

When it was announced in April 2001, MIT's OpenCourseWare initiative certainly caught the attention of the world. The project delivered its first material on 30 September 2002. At the time of writing, the learning materials for 900 MIT courses are available online with learning material for 2000 courses slated for 2008. But we all know (don't we?) that a load of content online, no matter how prestigious the source, does not e-learning make; a fact recognized by David Wiley's Open Learning Support (OLS) project, a pilot research project launched last April in collaboration with MIT. I thought I would drop in on OLS (virtually speaking) to see what appears to be going on. David Wiley, the OLS Executive Director, based at Utah State University, describes the OLS mission as:

“Libraries evolved into universities for a specific reason. High-quality content is essential to facilitating learning, but so are the social activities of asking, answering, debating, arguing, and negotiating. The mission of Open Learning Support is to give additional educational value to existing open content projects.”

Seven of MIT's OpenCourseWare subjects are being supported by OLS at this time with a user voting system being used to prioritize support for other MIT subjects. Support translates into:

“… learning communities where individuals around the world can connect with each other, collaborate, form study groups, and receive support for their use of MIT OCW materials in formal and informal educational settings.”

There is no access to faculty either at MIT or Utah State University and no award provision.

At the time of writing the OLS site shows 757 registered users and 185 postings over the seven subject areas. Linear Algebra seems to be the most active forum (78 postings) with Introduction to Optimization being the least active (8 postings). Even if you are not a mathematician the content of the Linear Algebra forum is worth a look. Why? Because it certainly gives the impression of having the beginnings of a supportive 'on topic' self-organizing community. Introduction to Optimization, however, has its 8 postings spread over April, June and September and gives the distinct impression of being a plant in need of water. Mind you, this is fascinating research material: why do some communities thrive whereas others wither and die?

Gilly Salmon's 5 stage model comes to mind here. Auricle readers following this link should be aware that Gilly has now left the UK OU to take up a new Chair as Professor of E-Learning and Learning Technologies at the University of Leicester in the UK.

Web services, what Web services? - 2

The University of Edinburgh's Discovery+ apparently eschews a SOAP/WSDL Web services model in favour of the much simpler REST approach. Interestingly, they are also piloting an implementation of the relatively new IMS Resource List Interoperability specification. For those interested in the REST versus SOAP view of Web services, Paul Prescod's 2002 article Second Generation Web Services is still a worthy read. The quick and dirty explanation of REST is that the service request is embedded within the URI.

But let's see how Discovery+ puts the REST approach to work.

To have the results of a service request returned in IMS RLI format, the normal Discovery+ REST URI would change from:

“http://tweed.lib.ed.ac.uk:8080/elf/search/edinburgh?operation=searchRetrieve&query="e-learning"&maximumRecords=5”

to:

“http://tweed.lib.ed.ac.uk:8080/elf/search/edinburghrli?operation=searchRetrieve&query="e-learning"&maximumRecords=5” (note the 'rli' appended to the base URL).

As with all Web services, a mass of human-unfriendly XML data is returned which the client application has to make sense of.
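For anyone who hasn't played with this style of service, the client side really is as simple as fetching the URI and walking the XML that comes back. A minimal Python sketch is below; the query is hard-coded, there's no error handling, and no assumption is made about the exact element names Discovery+ returns (the loop just dumps whatever arrives).

import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

# Build the request straight into the URI, REST-style.
base = "http://tweed.lib.ed.ac.uk:8080/elf/search/edinburgh"
params = {"operation": "searchRetrieve", "query": '"e-learning"',
          "maximumRecords": "5"}
uri = base + "?" + urllib.parse.urlencode(params)

with urllib.request.urlopen(uri) as response:
    root = ET.fromstring(response.read())

# Dump every element and its text so a human can see what came back.
for element in root.iter():
    print(element.tag, (element.text or "").strip())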

Driving Higher Ed Institutions to an Enterprise Approach

Just to provide a bit of balance to the concerns about proprietary enterprise approaches, here we have a Learning Circuits article by Barbara Ross, Chief Operations Officer for WebCT, who is putting the case for the prosecution (or is that the defence?) in Driving Higher Ed Institutions to an Enterprise Approach. Just to balance the balance a little I'll let Stephen Downes' comment sign off.

“Me, I've never seen an enterprise system that I've liked, and while the author touts service and standardized processes, these seem to me to be the major weak points, not the benefits, of an enterprise system.”

BlogBuilder highlights

In a couple of other articles this week I've referred to the University of Warwick's BlogBuilder system as a first rate example of how one institution makes potentially powerful eTools available to staff and students. I thought it might be helpful to show a few of the features of BlogBuilder from which we can all learn. This article is only possible due to the generous assistance of Warwick's John Dale.

Shown below is a pretty standard blog automatically generated by Warwick's systems. So far so normal. But readers of my previous article will recognize the importance of automating the blog creation process once tools like this become enterprise wide. So Warwick has adopted a solution similar to that of the 'commercial' blog hosters like Blogger.com et al.

[Image: a standard blog generated by Warwick's BlogBuilder]

Having clicked on the article's 'Edit' control I'm presented with the interface shown below. The key features of the edit interface include the all-important 'Who can view this entry' menu. The ability of the author to restrict or open up their posting to a variable range of readers is a real 'killer' feature. Other notable aspects of the edit interface include: the provision for users to specify their own categories for blog postings as well as select from preset categories, e.g. General, Personal Development Planning (PDP); or the easy formatting controls which remove the need for the user to remember HTML tags for, say, lists.

[Image: BlogBuilder's entry editing interface, including the 'Who can view this entry' menu]

Shown below is the blog owner's administration interface which provides them with a range of customization options. To me, the most important feature of this interface is the People section which allows the user to specify new groups of potential readers or article commentators as well as designate administrators, moderators, and other authors. What a potentially powerful tool for collaborative work this could become! You get a sense of the blog owner always having control. Nevertheless, I assume that the BlogBuilder system has the capability of imposing administrators and moderators should this be desired/required.

[Image: the blog owner's administration interface, with its People section]

Online Help information is always context appropriate. Shown below is the guidance for adding a new user to a blog owner's group.

[Image: context-sensitive help for adding a new user to a blog owner's group]

The potential application of tools such as BlogBuilder to Personal Development Planning (PDP) must be considerable, a fact obviously recognized by the Warwick eLab team because they've built it in as a preset category. It will be interesting to hear the feedback on this front later in the academic year (the first cohorts will be exposed to BlogBuilder in the next few weeks).

BlogBuilder is an excellent piece of work by the eLab team at the University of Warwick. The only pity is its tight coupling to the underlying technical infrastructure. Nevertheless, it has been inspiring to view something as well designed as this.

It would be good to think that the Further and Higher Education community could have access to tools like BlogBuilder either through some open source initiative or, alternatively, as a JISC supported service. Of course, if the latter, the thorny questions of authentication and authorization immediately raise their ugly heads but perhaps the Shibboleth Project has something to contribute here.

Reflections on 'e' services, tools and strategies

Whilst waxing lyrical about the issues related to scaling up weblog use within higher education institutions yesterday I put forward a couple of refreshing alternatives to the 'e-learning=proprietary VLE' furrow now being ploughed by many institutions. I think we have a lot to learn from institutions that have opted for an eTools and services model as an important part of delivering on their learning and teaching strategies, so I'm revisiting this theme today. In yesterday's Weblog scalability and automation in referring to the University of Warwick's BlogBuilder tool I said:

“On a more general front I really like Warwick's approach. Whilst many HEIs seem to be convinced that the future of e-learning in their institution is inextricably linked to binding themselves to a single enterprise level proprietary VLE, Warwick's ELab Tools and Services IMHO shows us there is another more flexible and adaptable way. That doesn't mean they don't use VLEs, it's just that they have deliberately eschewed the emerging 'monoculture' I've waxed lyrical about elsewhere.”

IMHO Warwick has one of the most clearly articulated and visionary e-learning strategies in the UK, one which has informed decision making and progress since the publication of their An e-strategy for the University of Warwick in January 2001 (available both as a PDF and as a menu-driven HTML version).

Within this 2001 document we find statements like:

“The University of Warwick is a diverse organisation and it would be inappropriate to impose a standard VLE for all departments and purposes. Such a policy would likely inhibit adoption of innovative new approaches as individual academics discovered the limitations of the particular package selected. Rather we suggest that the e-learning Development Unit provides generic tools and capabilities for e-learning such as Web publishing, Collaborative authoring, Web conferencing, On-line assessment and audio / video production.” (p33)

The progressive nature of Warwick's 2001 proposals seems to have been reinforced in their An E-Learning Strategy for the University of Warwick (May 2002).

“E-learning resources should be an amenity for staff and students, but not imposed on staff.”

“E-learning is not simply an aspect of IT provision.”

“… there will be relatively few staff who will not find something of use among the array of possibilities now opening up.”

and in its Virtually a VLE but a lot more we find a restatement of the belief that:

“… the acquisition of a one size fits all Virtual Learning Environment causes difficulties”

When I see tools like Warwick's BlogBuilder or the University of Washington's Portfolio, Virtual Case, or Peer Review tools I can't help but feel these institutions have got it right. Their staff and students can select from a growing catalogue of learning and teaching tools which enables them to focus on what they want/have to do without the dangers of cognitive overload and irrelevant VLE furniture.

Other examples of this 'tools' model were considered earlier this year in our series of articles about weblogs and content management systems as alternatives to the mainstream VLEs. Many of the open source content management systems like PostNuke have modular architectures which seem to stimulate a lot of creative activity in their developer communities and so, for example, we find a personal journal or blog supplement being published by one of the members. Other examples include our recent contribution to the Moodle community where we enhanced the functionality of an existing RSS 'tool' so that each course could receive an infinite number of syndicated information/resource feeds.
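As an aside, the nuts and bolts of that kind of feed consumption are refreshingly simple. Here's a sketch of pulling several syndicated feeds together and ordering the items by date, using Python and the feedparser library; that's my choice for illustration only (Moodle's RSS handling is PHP and works differently), and the feed URLs are invented.

import time
import feedparser

FEEDS = [
    "http://example.ac.uk/news/rss.xml",      # invented course news feed
    "http://example.org/journal/atom.xml",    # invented journal feed
]

items = []
for url in FEEDS:
    parsed = feedparser.parse(url)
    for entry in parsed.entries:
        stamp = entry.get("published_parsed") or entry.get("updated_parsed")
        items.append((stamp or time.gmtime(0),
                      entry.get("title", "untitled"),
                      entry.get("link", "")))

# Newest first, ready to drop into a course page sidebar.
for stamp, title, link in sorted(items, key=lambda item: item[0], reverse=True):
    print(time.strftime("%Y-%m-%d", stamp), title, link)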

But the informed readers of Auricle will have spotted the gotcha! All of these tools, including Warwick's and Washington's, run from within some sort of platform and invariably have some dependencies upon that platform. For example, let's say Warwick wanted to put BlogBuilder into the public domain so the whole educational community could benefit from their work. Not so easy! BlogBuilder is a Java application which uses Oracle as a back end and has a custom authentication dependency. The costs of decoupling and licensing Oracle would prove a major disincentive. Of course if BlogBuilder was running as a service then where it was hosted wouldn't matter so much … are you listening JISC:)

It's also easy to forget that the proprietary VLE WebCT started life as Web Course Tools, suggesting a toolkit model informed the original design. The 'toolkit' vision has now metamorphosed into the 'complete enterprise solution'. But who knows, maybe the Web services model will find discrete tools coming back on to the agenda of the proprietary vendors' marketing departments?

So we need to accept that tools and services are generated on, or hosted in, a platform of some type. Of course, from a vendor's perspective, if they can transform their host application into a platform then recurrent $$$$$ await. What Warwick, Washington and the open source initiatives have recognized is that ownership and control of the platform is all important; it is from that point all other decisions are made.

Warwick or Washington wants to add a new tool or enhancement? They just do it. They have control of the platform. The majority of institutions, however, don't own their platform and so are reliant on their vendor's interpretation of their needs and the vendor's prioritization of the features and 'tools'. In the latter case such decisions are as much informed by business imperatives such as competitor analysis, the need to keep shareholders happy, and published release dates, as by client satisfaction.

Theoretically, the Sakai Project should fit right in with the tools-oriented model I've been describing. I'm holding back from installing the first release for the moment, until I've done some more homework, but I found the Sakai illustrated documentation provides a useful overview. It's a little disappointing to find that in the user documentation Sakai is being described just as:

” … an enhanced version of the original [University of Michigan] CourseTools. It is a set of software tools designed to help instructors, researchers and students create websites on the World Wide Web.”

In another part of the Sakai site we find a rather grander:

“Michigan, Indiana, MIT, Stanford, and uPortal will all license their considerable intellectual property and/or experiences with large scale application software (e.g., Course Tools, Work Tools, Navigo Assessment, Oncourse, Stellar, uPortal, OneStart, Eden Workflow, CourseWorks, etc.) into a re-factoring of best features. This will include an enterprise-scale course management system, distributed research collaboration tools, and an enterprise services portal … ”

The apparent downgrading of Sakai to an 'enhancement of CourseTools' in the user documentation is perhaps because it's better to get something out there quickly than promise perfection later.

One of the contributors to Sakai is uPortal, which is gaining a lot of traction in the higher education community. The concept of the enterprise portal into which you can plug and unplug services and tools is attractive, but the granularity of the services and tools fronted by the portal concerns me. On the one hand the portal could offer me access to my weblog (small grained); on the other hand, if all the portal is doing is acting as a gateway to a VLE (arguably another portal, and another interface to navigate to get to the tool I want), then that becomes more questionable.

Of course if current VLEs became invisible and transformed themselves into discrete services accessed via a portal then ….. ?

Weblog scalability and automation

As editor of Auricle you would expect me to be quite a fan of blogs, and I am; but their personal publishing heritage tends to get in the way of scaling up their widespread use in education. For example, what if an article has multiple authors? What if we want every student and faculty member to have a blog (x,000s) automatically? What if the author wants to choose who can view his/her articles, e.g. friends, project groups, tutors? Using these criteria most blog engines fall at the first hurdle. Why? Let's consider one scenario.

We've been looking for a Weblog engine which can cope with multiple authors and manage multiple blogs. We like the open source WordPress and want to use it, but it looks like we are going to have to jump through multiple installation hoops to support more than one weblog. So how can we give each student and faculty member a weblog on this basis? … it just won't scale. Due credit to the WordPress developers who are working on the issue, but even if a future version of WordPress allows for multiple blogs, when we start dealing with large numbers of blogs we've got to look at the approach taken by the blog hosting companies, e.g. Blogger.com. They automate the process. Type in a few basic details and voila! you have a blog. It seems to me that we need some scripting capabilities, APIs exposed, or perhaps even Web services???
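To make the 'automate it' point concrete, here's the sort of provisioning script I have in mind, written in Python against a purely hypothetical HTTP API; no blog engine I know of exposes exactly this endpoint, which is rather the point.

import csv
import requests

CREATE_URL = "https://blogs.example.ac.uk/api/create"   # hypothetical endpoint
ADMIN_TOKEN = "secret-admin-token"                       # hypothetical credential

def provision_blogs(roster_csv: str) -> None:
    # roster_csv has columns: username, display_name.
    with open(roster_csv, newline="") as f:
        for row in csv.DictReader(f):
            response = requests.post(CREATE_URL, data={
                "token": ADMIN_TOKEN,
                "owner": row["username"],
                "title": row["display_name"] + "'s blog",
            })
            response.raise_for_status()   # fail loudly if creation is refused

provision_blogs("students.csv")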

Blogger et al do a sterling job but I wouldn't want to build a dependency on their free service into, say, a higher education programme where there has to be a guaranteed quality of service, i.e. a free service can be withdrawn at any time. Ok, Blogger.com probably wouldn't do it but they could do it.

The University of Warwick's eLab team have got it! They are showing the rest of us the way with their BlogBuilder tool to drive their Warwick Blogs service.

Warwick's eLab has also tackled the thorny question of who decides who can view a weblog posting.

“When you write an entry on your blog, you can control who can see it and who can comment on it. This control is achieved through creating groups of people who you can then allow to see or comment on various entries. There are several preset groups such as 'Anyone', 'Staff', 'Students', 'Staff/Students' and 'Just me', and one group ready and waiting for you to add people to it which is called 'Friends', but you can also create your own groups.”
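Conceptually the mechanism is a simple one: each entry carries a set of groups and a reader sees the entry only if they belong to at least one of them. A toy Python model is below; this is emphatically not how BlogBuilder is implemented, just the idea.

# Map each entry to the groups allowed to view it (toy data, invented names).
ENTRY_GROUPS = {
    "week-1-reflection": {"Just me"},
    "project-notes": {"Friends", "Project group"},
    "public-rant": {"Anyone"},
}

def can_view(entry_id: str, reader_groups: set) -> bool:
    allowed = ENTRY_GROUPS.get(entry_id, set())
    return "Anyone" in allowed or bool(allowed & reader_groups)

assert can_view("public-rant", set())                  # open to anyone
assert can_view("project-notes", {"Project group"})    # group member
assert not can_view("week-1-reflection", {"Friends"})  # owner only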

On a more general front I really like Warwick's approach. Whilst many HEIs seem to be convinced that the future of e-learning in their institution is inextricably linked to binding themselves to a single enterprise level proprietary VLE, Warwick's ELab Tools and Services IMHO shows us there is another more flexible and adaptable way. That doesn't mean they don't use VLEs, it's just that they have deliberately eschewed the emerging 'monoculture' I've waxed lyrical about elsewhere.

Other notable examples of such an approach include the University of Washington's Catalyst Tools.

Now the 'Catch 22' in the Warwick and perhaps the Washington examples is that the tools were designed with a single institutional perspective in mind. Consequently, even if the developing institutions were willing to do so, decoupling them from the underlying technical infrastructures may not be so easy or economic. Nevertheless, if such tools were made generally available to the wider community the benefits would be immense. We can but hope that, ultimately, the Technical Framework and Tools Strand of the JISC's wider E-Learning Programme and associated eTools activities leads to the development of libraries of higher level tools and applications similar to the Warwick and Washington examples.

But I digress. Back to blogs!

Another interesting example of mass blog provision appears to be the University of Minnesota's UThink Blogs at the University Libraries. They appear to use Movable Type as their underlying blog engine and have several hundred blogs in existence. I wonder, however, how easy generating multiple blogs is using this system, how much code/script replication there is, and how much manual intervention is required?

We've considered how we could automate the creation of WordPress blogs but we still come back to the issue of multiple installs and replicated files because the basic architecture is still single blog oriented.

Our impression is that there's a lot of blogging engines out there which do a sterling job, but try and find a customizable engine which can auto-generate blogs and the options appear seriously limited. Of course we might be looking in the wrong place:)

The only possible candidate I could find was ScriptMe.com's SmE Blog Host so if anyone else knows of any we would be happy to know about them.

Big is beautiful … or perhaps not!

Be warned: this makes for uncomfortable reading for those considering migration to enterprise class proprietary VLEs, in this case WebCT Vista. In OLDaily (25 Sep 2004) Stephen Downes, senior research officer with the National Research Council of Canada, describes a key theme for the Australian leg of his world tour.

“I have been touting the benefits of 'small' e-learning and questioning the value of large learning management systems.”

Downes then goes on to highlight a DEOS-L (The Distance Education Online Symposium) listserv contribution by Christopher Sessums, Director of Distance Learning at the University of Florida. Now Christopher obviously isn't happy … Sessums' response makes me wince. He highlights:

  • Expense for licensing the core product.
  • Expense for licensing Oracle.
  • Bugs.
  • Feature and interface overload.
  • The need for skilled and expensive high priests for support.

He certainly doesn't pull his punches when he states:

Kapowww!!! …. “If you don't have an adequate support staff, this system is a bear.”

Kapowww!!! …. “VISTA is hardly intuitive”

Kapowww!!! …. “IMHO, depending on your needs, you are better off getting a cheaper system, Moodle, Angel, or an open source CMS, that has a simple interface and provides basic functionality.”

Kapowww!!! …. “Spend your money on web designers, artists, simulation experts, people who can assist you in making your online vision possible.”

So what do I think? If you haven't already done so, read my recent ALT-C paper E-Learning Frameworks and Tools: Is it too late? - The Director's Cut.
