And the ADL wonders why people find SCORM difficult…

I’ll let the URLs do the talking:

URL for the ADL’s “SCORM Documents” page:
http://www.adlnet.gov/Technologies/scorm/SCORMSDocuments/Forms/AllItems.aspx?View={4D6DFFDE-3CFC-4DD9-A21A-4B687728824A}

URL for the ADL’s SCORM 1.2 page:
http://www.adlnet.gov/Technologies/scorm/SCORMSDocuments/Forms/AllItems.aspx?RootFolder=%2fTechnologies%2fscorm%2fSCORMSDocuments%2fPrevious%20Versions%2fSCORM%201.2&FolderCTID=0x0120007F801FCD5325044C89D91240519482D7&View={4D6DFFDE-3CFC-4DD9-A21A-4B687728824A}

‘Nuff said.

(In fairness, this can also be construed as a critique of Microsoft SharePoint.)

Dear Apple and Adobe

 

Update: Steve Jobs Responds! Well, not to my letter directly, but it hits on the major points and is a well-written explanation of Apple’s position.

Dear Apple and Adobe

I’m a long-time customer and have spent more money on your products than I have on just about any other aspect of my life. I’ve spent more money on your products than I’ve spent on my healthcare, vacations, kitchen appliances, children’s school supplies, or home entertainment system.

In return, you’ve increasingly shown a disregard for my needs and concerns, and have acted in ways that demonstrate all you want from me is my money.

For example, both of you have constantly forced me (or at a minimum pressured me) to buy updates to products I already paid for. For years I went along with it because I bought into the sales hype and assumed these updates would somehow make my life better.  In most cases, they did not.

Adobe, your constant tinkering with the Creative Suite has brought a few nifty tools to the world, but these new tools will not get me to overlook the incredible bloat you’ve unleashed on my computers — almost 6GB of program files on my Windows PC at work, and over 7GB of app files on my Mac at home. Your applications feel more unstable with every release, and your UI feels slow and unresponsive despite the extra RAM and other hardware upgrades on my machines. Some of the biggest security holes on my computers are due to your Acrobat software — the very same Acrobat software I’ve learned to hate because of how bloated, complicated, and unfriendly it has become. It feels like it gets worse with each release.

Apple, your innovation is refreshing. Adobe could learn a thing or two by examining your software: increased productivity through reduced feature sets and cleaner UI. Simple is usually best. However, despite your continued excellence in design, your behavior is repulsive. You’ve consistently screwed your early adopters via your pricing schemes and forced millions of Americans to use a phone network they detest. (Why? Because AT&T was willing to give you a bigger cut of the revenue?) Worst of all, the totalitarianism displayed in your latest iPhone developer agreement is breathtaking. It appears your goal is to piss off everyone, even your staunchest allies… like Adobe.

Apple and Adobe, you used to play well together. You both benefited from your long-term relationship and grew into very large, very successful companies. I sincerely doubt either of you would have survived the 1990s intact if it weren’t for your partnership. Desktop publishing was the Mac’s forte and the one thing that kept it afloat when the buzzards were circling. And who provided the most popular DTP software? Adobe (and the companies Adobe acquired, like Aldus and Macromedia).

Adobe, I know you’re mad because Apple won’t let you put your Flash technology on the new iPhone platform (iPhone, iPod, iPad). Honestly, if I were controlling a platform, I would have major concerns, too. As I mentioned earlier, your track record for software quality seems to be in a steady decline. Your products have become infamous for security holes, bloat, and crashing. It didn’t use to be that way. Somewhere along the line you dropped the ball, and now it’s coming back to bite you. The good news is that it isn’t too late for you to rein things in and regain control of your software. Stop trying to please everyone by adding every conceivable feature under the sun, and really focus on the most important elements. Drop the cruft. Clean the cupboards. Get that lint out of your bellybutton. Once your software is respectable again, you’ll be in a much stronger position to complain about Apple.

Apple, I don’t know what happened to you. You went from being a popular underdog to being the class bully. You’re in danger of becoming as popular as Microsoft in the European court system. From where I sit, your biggest mistake has been the idea that you can take over the world, one industry at a time. Of course, many companies are aggressive and set big goals for themselves, but they don’t stab their partners in the back as quickly and viciously as you seem to do. Your hubris and eagerness to expand into your partners’ markets are going to be your downfall. People have liked you because of your design sensibilities and because you were the hip underdog. You are no longer the hip underdog, and with time, other companies will create products that will be (almost) as stylish but also cheaper and with equivalent or greater capabilities.

The bottom line is that neither of you is a choir boy, and I’m fed up with your bickering.

Adobe, stop playing the sympathy card. It’s a complete turn-off because I know how crappy your software can be. Granted, it’s unfortunate that so many people depend on Flash and Flash doesn’t work on the iPhone platform, but Flash is not a web standard. For all its shortcomings, the iPhone platform has one excellent quality: a top-notch HTML5 browser. Standardistas have been warning people not to go all-in with Flash for years, and now we see why. If a technology isn’t part of a standard, some vendors won’t incorporate it into their products. It’s the vendor’s choice. Simple as that.

Apple, stop trying to take over the world. We’ve seen what happens to other companies who try it, and it never looks pretty. Focus on your core values and let your partners do their thing without stepping on their toes.

Oh, and ditch AT&T already, will ya?

Respectfully,

Philip

Fear of sharing, fear of failing

Janet Clarey posted a link to a great blog post by Rajesh Setty entitled Why some smart people are reluctant to share? Setty’s insights resonate with me, not because I think I’m smart — quite the opposite, actually — but because the more I learn, the more I’m aware of my limitations.

Setty tried to determine why “smart people” are often reluctant to share their knowledge with others. His conclusion was:

Smart people want to give their best and as they learn more, they learn that they need to learn a lot more before they start sharing. They learn some more and they learn they need to learn some more. What they forget is that most of the expertise that they already have is either becoming “obvious” to them or better yet, going into their “background thinking.”

I agree with the points in Setty’s post completely, except that I’d substitute the word “experienced” for “smart.”

Setty’s line “Smart people want to give their best” seems to be something of a passing thought in his post. I’d like to give it more attention, because I believe conscientious experienced folks have a fear of giving bad advice.

Zeldman had it right when he said “If your old work doesn’t shame you, you’re not growing.” Experienced people look back at their younger, inexperienced self and either chuckle at their own gumption or get rosy-cheeked from embarrassment. I’m more of the rosy-cheeked guy. My old work shames me all the time, and it often causes me to hesitate when sharing my work or giving advice to others.

This may sound funny to some of you… anyone who follows this blog probably knows I spend a lot of time doling out technical advice via places like the SWFObject and eLearning Technology and Development Google Groups. The truth is, I question myself in almost every post I write. Why? Because I’m experienced enough to know that maybe I shouldn’t be so quick and cocky with an answer. Maybe there’s a different solution I haven’t heard of. To paraphrase Setty, maybe I need to learn more first.

So why do I even try to give advice if I have a fear of giving bad advice? Because I want to learn more. I often find that helping others is the best way to teach myself. I’ve learned a ton through helping others, and find it rewarding in many ways. Except when I’m wrong, and then it plain sucks.

Best Practices in E-Learning

Someone recently posted a blog entry ranting about the use of the term “best practices” in our industry. I understand the frustration with thoughtless pronouncements about best practices, especially coming from people who may not know any better; it often sounds a lot like mom saying “eat this, it’s good for you” without really knowing whether it’s true. However, there is a big difference between best practices in terms of learning theory — something that’s difficult to quantify/prove — and best practices in technology.

A friend of mine, upon completing his MA in psychology, joked that his degree is the only one you can get where you can’t prove a thing that you’re taught. Learning theory is a form of psychology, and as such, you are guaranteed to run into a gazillion different opinions on how learning occurs: behaviorism, constructivism, cognitivism, yadda yadda yadda. Likewise, you will hear many opinions on what development models to follow (ADDIE vs agile vs something-or-other), evaluation methodology, and perhaps edge-case debates such as centralized learning structures versus de-centralized learning structures (social media peeps, I’m looking at you).

I guarantee these conversations will involve lots of name-dropping and liberal use of the term “best practice.” In these situations, I agree that there is no single answer to ANY of these issues, and context will be king.

Most technical issues, on the other hand, most certainly DO have best practices, and for good reason.

For starters, accessibility is a best practice. Why? Well, because it’s the right thing to do. No one should be denied an opportunity to learn simply because their ears or eyes or arms don’t work like yours do. Establishing a baseline level of accessibility is fairly easy to do, regardless of the size of your budget or your time constraints. For example:

  • For the hearing impaired: Text transcriptions and/or closed captioning for videos and Flash animations are as easy to set up as ever. Free/cheap video players like the Flash video component, the JW Player, and Flowplayer all support multiple captioning standards and make it easy to add captioning to a video. Rapid e-learning development tools such as Articulate Presenter and Adobe Captivate allow you to add captions or notes to your SWFs. (Side note: the text transcriptions for TED talks are an excellent example of what can be accomplished with just a little extra effort.)
  • For the visually impaired: If the content of your course is provided in a text format such as HTML, screen readers can read the text to the end user. What does this require of you? Well, if you use standard HTML, not much… just a little extra care in your layout and alternate text. If you embed an image, video, or animation, provide fallback text that describes the image or what happens in the video/animation. SWFObject (a free JavaScript library for embedding SWF files in HTML documents) makes this easy to do; see the sketch just after this list.
    Similarly, Adobe has been working hard to make Flash Player and Adobe Reader more accessible to major screen readers. What do you have to configure to make it work? Nothing so far.
  • For those who can’t use a computer mouse: Thanks to initiatives like WAI-ARIA and companies like Adobe who are actively building keyboard support into their products, many script-based interactions (such as course navigation, quiz questions, and other activities) can be scripted to work without a mouse. Alternate input devices are often mapped to the keyboard input; if your course can be completed using a keyboard, you’re golden.
  • For the color blind: Accessibility can often be improved simply by adding text labels to color-coded objects and not relying on color alone.
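
As an example of how little work the SWF fallback takes, here is a minimal SWFObject 2.x sketch. The file names, the div id, and the transcript link are placeholders, not anything from a real course:

```html
<!-- The div's contents are the fallback: screen readers (and anyone without
     Flash) get the description and transcript link instead of the SWF. -->
<div id="courseVideo">
  <p>Video: how the claims process works.
     <a href="transcript.html">Read the full text transcript</a>.</p>
</div>

<script type="text/javascript" src="swfobject.js"></script>
<script type="text/javascript">
  // Replace the div with the SWF only when Flash Player 9+ is detected;
  // otherwise the fallback markup above stays in the page.
  swfobject.embedSWF("course_video.swf", "courseVideo", "640", "480", "9.0.0");
</script>
```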

I could go on for a while, but the point is that accessibility is definitely a best practice. It isn’t hard, and it certainly isn’t expensive to make a course accessible. It’s also the law if you receive any Federal funding.

There are definitely other technical best practices for e-learning:

  • SCORM: Technically not a standard but rather a collection of standards, SCORM is a best practice because it ensures your course will work on pretty much every major LMS (if you don’t like SCORM, AICC is equally valid). How can I say with confidence that SCORM is a best practice? Because in the bad old days before SCORM, developers had to spend weeks re-coding courses to work with each LMS’s proprietary code base. Once SCORM was widely adopted, the issue largely went away. No one wants to go back to the bad old days.
  • Valid HTML and CSS: If you write HTML and CSS, ensuring they validate means you know your pages will work in every major browser. We learned this lesson in the Netscape/Internet Explorer wars. Best practices on the web are still evolving; for example, sometimes it’s ok to write CSS that won’t validate if you know the repercussions and your code fails gracefully in older browsers. The best practice is simply that your pages work in most, if not all, browsers.
  • Don’t use proprietary code: See above. If your course uses ActiveX, which is only supported in Internet Explorer, your course won’t work in any other browsers. Almost anything implemented with ActiveX can be implemented using other non-proprietary methods. Again, the best practice is to ensure your pages work in most, if not all, browsers.
  • Follow sensible coding conventions: Well-written code that follows documented — and very well-reasoned — code conventions means your code will likely contain fewer errors, will be easier to update if the need arises, and will be more future-proof, avoiding expensive bugs like the Y2K bug, which could have been prevented with a bit of foresight. A great example of this type of code convention is Douglas Crockford’s JavaScript: The Good Parts; a short sketch in that spirit follows this list.
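
To illustrate (this is my own toy example, not something lifted from Crockford’s book), a few of those conventions in JavaScript look like this:

```javascript
// A toy module in the spirit of those conventions: no accidental globals,
// variables declared up front, and strict (===) comparisons throughout.
var courseUtils = (function () {
    var PASSING_SCORE = 80;

    function isPassing(score) {
        // typeof + === means the string "80" won't accidentally pass
        return typeof score === "number" && score >= PASSING_SCORE;
    }

    return {
        isPassing: isPassing
    };
}());

// Example: courseUtils.isPassing(85) returns true; courseUtils.isPassing("85") returns false.
```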

There are definitely times when people throw around the phrase “best practice” and are simply talking out of their butts. “Never use yellow.” “Never use clip art.” “Never hire a penguin.” “Never let the learner do X.” “Always make the user do Y.” “Always use ___ format.” “Always use ___ pedagogy.”

Whatever.

Just remember that best practices DO exist, but not in every circumstance. And unless you want the evil eye from me and my compadres, remember to never use the phrase “best practice” unless you can back it up with evidence and sound reasoning.

Post script: I’ve noticed this bandying of best practices usually occurs when someone is trying to establish their expertise or exert control on a project, frequently in front of management-types. This is the same sort of thing Machiavelli did when he wrote The Prince, so why not treat them as Machiavelli would and … well, I guess another best practice is to know when to shut up, so I’ll stop here.

SCORM security (two kinds of SCORM people)

I’ve had a flurry of emails and messages regarding my SCORM cheat over the past few days, and have received feedback from a number of well-regarded SCORM aficionados, some of whom contributed to the standard and helped make SCORM what it is today. This is wonderful; I’m very happy to hear from everyone, especially regarding such an engaging topic.

But as I hear more from these seasoned SCORM pros, I’ve made (what I believe to be) an interesting observation: there is a sharp division between die-hard SCORM developers and casual users. I suppose I’ve felt this way for a long time, but it’s really coming into focus this week. Let me try to define the camps.

  • Die-hard SCORM developers (aka scormmies). The scormmie is a person who understands what SCO roll-up means, and can hand-code an entire manifest. A scormmie thinks the word metadata is sexy. This person believes a course should be designed to use SCORM from the start, complete with sequencing and interaction tracking; if the course isn’t running in an LMS, it won’t function without being loaded into some kind of SCORM player or test suite. Scormmies get angry if their LMS hasn’t implemented the entire SCORM spec.
  • Casual users (aka shruggies). The shruggie is a person who doesn’t care about multi-SCO courses. Shruggies don’t want to be bothered by the technical details, and use rapid e-learning development tools to build courses, freeing them from needing to know any of the technical mumbo-jumbo. Metawhat? “SCORM… yeah, that’s one of the publishing options in [insert product name here], right? So it will work with my LMS?”

The e-learning market has changed significantly

Over the last week I’ve mostly heard from scormmies who make comments such as ‘well, if a developer knows what they’re doing, they’d never make their course that vulnerable to begin with!’ and ‘a developer should never design a course to only require a completion and score… that’s asking for trouble.’

The problem with this line of reasoning is that the e-learning landscape has changed dramatically since SCORM was first conceived; the scormmie used to be the majority. Now, with the proliferation of e-learning development tools and LMSs, the scormmie is a minority. Most “e-learning developers” are not programmers by trade, and are not familiar with the very complicated and intimidating SCORM spec. They use tools that do the heavy lifting for them.

If you survey most e-learning development tools (which is a booming market), the courses they publish are almost exclusively single-SCO courses that only use the simplest core SCORM functionality: completion status, lesson location (bookmarking), score, and suspend_data. These products are designed to create courses that work without SCORM, which means they only add the minimal SCORM code needed to get the course running on an LMS; all other logic is generally handled internally. They certainly don’t use sequencing and navigation or cmi.interactions.
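
To put that in concrete terms, the SCORM 1.2 traffic generated by these tools usually boils down to a handful of calls like the following. This is a rough sketch, not any particular vendor’s code; the api variable stands in for the LMS-provided API object and the values are made up:

```javascript
// "api" stands in for the LMS-provided SCORM 1.2 API object that the
// player locates at startup (window.API somewhere in the frame hierarchy).
api.LMSInitialize("");

// Restore the learner's place and any saved internal state.
var bookmark = api.LMSGetValue("cmi.core.lesson_location");
var savedState = api.LMSGetValue("cmi.suspend_data");

// ... the learner works through the slides ...

// Report the results: status, score, bookmark, and serialized state.
api.LMSSetValue("cmi.core.lesson_status", "completed");
api.LMSSetValue("cmi.core.score.raw", "90");
api.LMSSetValue("cmi.core.lesson_location", "slide_24");
api.LMSSetValue("cmi.suspend_data", savedState);
api.LMSCommit("");
api.LMSFinish("");
```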

LMS vendors generally advise customers to buy these off-the-shelf tools to build their courses. E-learning conferences are packed with tool vendors and advertisements selling the virtues of a ‘no technical expertise required’ tool. At work I sometimes get calls from vendors trying to sell me the latest and greatest tool.

The majority of courses are no longer developed by scormmies

All of this leads to one point: I think some of the SCORM guys have lost touch with the current market and don’t realize just how much of a problem a simple SCORM cheat like mine could be. Sure, it probably wouldn’t work on courses developed by seasoned scormmies because multi-SCO courses that utilize interactions are much too complicated for my itty-bitty script to tackle… but courses developed by mainstream development tools are easy targets. Fish in a barrel. So long as the API is JavaScript and unprotected, a script like mine can bypass the SCO completely and set the course to complete before the learner even gets past the table of contents. The only way to figure out if someone cheated is to run a completion report and look for unusual patterns, something that’s highly unlikely to happen in most corporate environments. As a friend noted the other day, there are many more script kiddies who can write cheats like mine now than there were when SCORM was first proposed.

Who gets the blame for the vulnerability?

Can the tool makers be blamed? Maybe, but hey, their #1 priority is satisfying the needs of the community, and the community wants quick, easy, and ‘can run on a CD-ROM’. Could the vendors have implemented more sophisticated SCORM mechanisms? Yes. However, everyone chooses the path of least resistance (and least development dollars), and we all know SCORM development is not a walk in the park. I’ve been using SCORM for five years and still avoid most of the complicated stuff because it’s … well … complicated.

The community at large (aka the shruggies) has bought into the notion that SCORM is the standard for e-learning. This is what the scormmies wanted, and it made the most sense for everyone involved, even the tool vendors. But how many people knew about the security vulnerabilities in the JavaScript-based API? A lot: the SCORM authors, the ADL, LMS vendors, tool vendors, and a number of prominent SCORM developers. Did any of these people warn the end clients of the risks? Maybe, but I personally have never been warned of any SCORM security issues in my five-odd years of SCORM work. I’ve never been told “don’t use SCORM for that because it isn’t secure.”

Why didn’t anyone act?

I wasn’t privy to the early conversations, but I’ve been told that SCORM developers have said “don’t use SCORM for high-stakes assessments” from the very beginning, circa 2000. If this is the case, why has nothing been done to improve SCORM’s security? It’s only been about nine years. Did convenience beat out security in the race to implement the standard?

I get the impression that the scormmies (and remember, my term scormmie just means a person that works with SCORM, not necessarily an official representative) felt no one would bother trying to hack the system, and that a well-built course would be so difficult to cheat that it would be easier to simply take the course. With today’s simplistic single-SCO courseware tools, I don’t think this is a valid argument anymore.

I’ve also heard from scormmies that we’re still fine, because everyone knows SCORM shouldn’t be used for high-stakes training. I think a significant number of corporate, military and government trainers would disagree with that assessment, because the LMS salesperson never mentioned it. Neither did the e-learning development tool vendor. Oh, and that instructional designer we hired out of college? She’s heard of SCORM but has no clue how it works. Isn’t it safe since you have to log into the LMS with a password? There’s a padlock icon and an https protocol… that means it’s secure, right?

Nope.

Simple-SCO courses are used for all kinds of sensitive training nowadays. Compliance training alone is huge these days and shows up in the sample courses of almost every simple-SCO tool vendor. As a colleague recently remarked, “it’s all low stakes until someone’s attorney gets involved”.

No hard feelings!

I would like to point out that I am not targeting anyone in particular, have no animosity towards anyone, and have the utmost respect for the scormmies and what they do (I’m half-scormmie myself). I’m an optimist with a very critical eye, and this post is intended as constructive criticism… criticism intended to cause positive change.

It simply became apparent to me that at some point the scormmie community dropped the ball and got complacent; it seems as though the whole community assumed no one would bother to hack a course. Well, I did. And I used public documentation to do it. It took two hours while I was flying on an airplane, and I’m not the sharpest tack in the box. I’m sorry if my cheat script caused a stir (and if this blog post makes some people uncomfortable) but we need to talk about this issue. Now.

What’s the solution?

OK, we’ve covered enough of the criticisms and the importance of working towards a solution… I’m ready to let it rest. Let’s finish on a positive note: SCORM uses existing technology and standards, and if multinational banks can protect billions of dollars from cyber-criminals using standard web technology, we should be able to secure our courseware, too. I personally think we should be able to figure something out in the next couple of months and that it ideally shouldn’t require much work to implement — no need to wait until SCORM 2.0 comes out!

Here are some suggestions I’ve heard:

  • using a secure web service to handle important duties such as processing completions and scores
  • rolling up SCOs in a way that forces the LMS to analyze multiple SCOs before setting pass/fail (a second ‘dummy’ SCO could be used if the course is a single-SCO course)
  • using form posts to submit the completions (the form post would contain a unique encrypted key that must match a key on the LMS)

Personally, I’m especially interested in ideas that don’t require modifications to LMS implementations and might only involve a strategic re-organizing of a SCO’s manifest or SCORM code. Perhaps using a SCO roll-up can become a security best practice, even if the course only uses one SCO? That type of simple solution would be ideal since it wouldn’t require modifications to an LMS or SCORM spec — it would only require a broad marketing effort to get the word out to all SCORM developers and toolmakers.
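
To make the form-post suggestion a bit more concrete, here is one simplified reading of it. This is a hypothetical LMS-side sketch (Node-style JavaScript, with function and variable names of my own invention), not part of any existing SCORM spec:

```javascript
// At launch the LMS generates a one-time token, remembers it, and injects it
// into the course; the completion post must echo the token back, and the
// token is thrown away once it has been used.
var crypto = require("crypto");

var pendingTokens = {}; // registrationId -> token (a real LMS would use its database)

function issueLaunchToken(registrationId) {
    var token = crypto.randomBytes(16).toString("hex");
    pendingTokens[registrationId] = token;
    return token; // embedded in the course launch page by the LMS
}

function recordCompletion(registrationId, postedToken, score) {
    if (pendingTokens[registrationId] !== postedToken) {
        return false; // missing, wrong, or already-used token: reject the post
    }
    delete pendingTokens[registrationId];
    // ...write the score and completion status to the LMS database here...
    return true;
}
```

A one-time token raises the bar considerably (a generic bookmarklet can no longer just call the API and walk away), but anything visible to the browser is visible to a determined script, so treat this as mitigation rather than a cure.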

I would love to hear other ideas, as I feel we can probably come up with any number of workable solutions.   Please add to the discussion! Remember, these need to be solutions that can be implemented easily and by the single-SCO type of courseware tools flooding the e-learning market.

By the way, while we’re at it, can we improve accessibility in our e-learning, too? 😉

Cheating in SCORM

I’m always surprised how little people talk about cheating in e-learning; maybe it’s a fear of revealing just how easy it can be. The fact is, SCORM — the most common communication standard in e-learning — is fairly easy to hack. It uses a public JavaScript-based API that is easy to tap into and feed false data, and because it’s a standard, you know exactly what methods and properties are available in the API. It doesn’t matter what vendor or product produced the course (Articulate, Adobe, etc.)… if it uses SCORM, it’s vulnerable.

I’ve whipped up a proof-of-concept bookmarklet that, when clicked, will set your SCORM course to complete with a score of 100 (it works with both SCORM 1.2 and 2004).

This bookmarklet isn’t guaranteed to work with all courses… it’s just a demonstration of what’s possible, and could be made much more sophisticated by someone highly motivated to cheat.
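
For readers wondering what “tapping into the API” actually involves, the whole trick boils down to something like the sketch below. This is not the bookmarklet itself, just a stripped-down SCORM 1.2 illustration built from the publicly documented API, and it is not guaranteed to work on any particular LMS:

```javascript
// Find the LMS-provided SCORM 1.2 API object, then write a passing status
// and a perfect score. (A SCORM 2004 version would look for API_1484_11 and
// use cmi.completion_status / cmi.success_status instead.)
(function () {
    function findAPI(win) {
        // Simplified version of the standard API discovery walk: climb the
        // parent/opener chain until a window exposing "API" is found.
        var tries = 0;
        while (win && !win.API && tries < 10) {
            win = (win.parent === win) ? win.opener : win.parent;
            tries += 1;
        }
        return win ? win.API : null;
    }

    var api = findAPI(window);
    if (api) {
        api.LMSSetValue("cmi.core.lesson_status", "passed");
        api.LMSSetValue("cmi.core.score.raw", "100");
        api.LMSCommit("");
    }
}());
```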

As e-learning continues to boom, we should be looking into ways of making courses more secure and more difficult to hack. I believe higher security should be achievable with current web technologies. For instance, how about requiring any score or completion data to be accompanied by a unique encrypted security key? Then no external script could inject false data because it wouldn’t have the required security key.

I don’t think cheating is a problem at the moment, but we should be proactive and implement better security before it becomes a problem.

Update #1: For those who are curious, the bookmarklet has been successfully tested in a few LMSs and test environments, but I won’t be revealing which ones. For those interested in the tech specs, the bookmarklet is an anonymous JavaScript function with no global variables. It was error-checked in JSLint and then compressed with the ‘shrink variables’ option enabled, which means it’s pretty hard to decipher. If you’re interested in seeing the uncompressed code, post a comment below with your email and I’ll consider sending a copy.

Update #2: The bookmarklet has been taken down. I am no longer distributing the code, though you’re welcome to write your own.

SCORM 2.0: high-level solutions or low-level tools?

Matt Wilcox posted an interesting argument about the development of the CSS3 standard; I think the central points of the argument can be applied to SCORM and where we’re potentially headed with SCORM 2.0.

After explaining some of the shortcomings of the current approach taken by the CSS3 Working Group, Matt writes:

What does CSS need to overcome these problems? First let me say what I think it really does not need. It does not need more ill thought out modules that provide half-baked solutions to explicitly defined problems and take a full decade to mature. We do not need the Working Group to study individual problem cases and propose a pre-packaged "solution" that either misses the point, is fundamentally wrong, or is inflexible. […]

The crux of the issue is that W3C seem to try providing high-level "solutions" instead of low-level tools. It’s a limiting ideology. With the CSS3 WG strategy as it’s been over the last decade, they would have to look at all of the problem points I proposed above, and come up with a module to provide a solution to each. But by giving [designers low-level tool functionality], we designers can build our own solutions to all of them, and innumerable other issues we have right now, and in the future.

As with the discussion regarding ECMAScript “Harmony”, I think LETSI should take a look at the meat of this argument and apply it to SCORM 2.0. Test cases are important for SCORM moving forward, but we can’t try to predict every issue a course developer might encounter — the possibilities are too numerous, and as we learned with Web 2.0, we can’t predict what technology (esp. browser capabilities) will be dominant in 5 years. What we can do is provide a loose framework or toolset that gives developers the flexibility to build their courses the way they want. This system would ensure interoperability by standardizing the tools and data management across LMSs.

You can read Matt’s post here: The Fundamental Problems of CSS3 by Matt Wilcox (via Dion @ Ajaxian)

Does SCORM need a little brother?

For many involved in the SCORM 2.0 project, the hope is that we can transform the SCORM 1.x caterpillar into a SCORM 2.x butterfly; that the gestation period of SCORM 1.x has ended, and soon SCORM 2.0 will emerge from its cocoon and dazzle the world with its beauty and agility.

What’s worrisome is that there appears to be a divide in the SCORM 2.0 community between the pro-aggregators and the non-aggregators, and from the looks of it, this divide will ultimately leave one side of the aisle unhappy.

Pro-aggregators

If there’s one thing I’m learning by reviewing the SCORM 2.0 white papers submitted to LETSI, it’s that the SCORM community is full of hope and ambition when it comes to truly reusable course objects. There are some really interesting ideas being proposed for SCORM 2.0 — especially as it relates to social networking and Web 2.0 technology — and some folks are swinging for the fences. The biggest fear I sense from this group is that SCORM 2.0 won’t be aggressive enough when it comes to incorporating new ideas or fixing the aggregation issues in SCORM 1.x, most notably sequencing and navigation.

Non-aggregators

The SCORM community is also full of e-learning developers who do not use aggregation, and are really in no hurry to try it. Yes, I’m one of them.

Hi, my name is Philip, and I don’t use content aggregation for my e-learning courses.

My SCORM 2.0 white paper [link no longer available] focused on the notion of keeping things simple, easy to use, and accessible. Content aggregation and the related issues of sequencing and navigation are by nature complicated and difficult to manage.

From my vantage point, it appears the vast majority of e-learning developers producing SCORM-enabled courses do not use aggregation, and choose to package their courses as all-in-one SCOs. I believe this is largely for two reasons:

1. Most commercial e-learning development software (Adobe Captivate, Articulate Presenter, Rapid Intake ProForm, Trivantis Lectora, etc.) does not natively support multi-SCO courses; these tools publish courses as self-contained, non-shareable packages.

2. Most e-learning developers do not have the infrastructure, time, funding, management support, etc. to develop courses that use/reuse shareable content objects. This has no bearing on whether they think SCOs are a good idea — almost everyone sees the value in a true SCO world — it just means it isn’t practical for them at this point in time.

SCO what?

So if non-aggregators don’t use SCOs, what’s the point in using SCORM in their courseware? Simple: it’s the easiest way to guarantee basic course functionality and interoperability. They want to know their course will work in any LMS and has a standardized data repository for course data, including quiz results and bookmarking.

He’s not heavy, he’s my brother

Judging by the topics in the SCORM 2.0 white paper submissions, the pro-aggregators are currently the most vocal members of the fledgling SCORM 2.0 community. Non-aggregators are the (somewhat) silent majority. This is bound to cause some angst. I think there’s a way to keep both sides happy: spin off the communication element of SCORM (the SCORM RTE API) as a standalone standard.

Again, we must remember that SCO stands for shareable content object. If a course is not built to be shareable, it isn’t really a SCO, even if it uses SCORM for packaging. Spinning the communication element off into its own standard — without the name SCORM — would free SCORM to truly be a Shareable Content Object Reference Model, and would free non-aggregators from having to deal with the complexities of SCORM.

This would also allow the group responsible for the new communication standard to really dig into the communication methodologies (ECMAScript API versus web service, etc.) without having to spend resources on other topics, such as content sequencing.

The biggest question would be: What should we call the new communication standard? I think we should use the name Course Content Communication Protocol (CCCP). That way the original sponsors of SCORM — the American government and military-industrial complex — would have to officially endorse the CCCP!

[Link for those who don’t get the joke.]

All kidding aside, I do believe we should consider removing the burden of the communication protocols from the SCORM 2.0 workgroup and let the discussions about SCORM 2.0 focus on aggregation. I’m sure I’m not the only one who feels this is a viable option; if anything, I imagine the companies that produce rapid e-learning development tools would be very interested in freeing themselves from SCORM while maintaining a high level of course interoperability.

Standards don’t foster innovation, they codify it

After all my ruminating on SCORM 2.0 the last couple of weeks, it was interesting to read about the latest news regarding the ECMAScript standard.

It was even more interesting to read some of the reactions, including those from Adobe’s Dave McAllister [link no longer available] and Yahoo’s Douglas Crockford. I swear this passage from Crockford could have come directly from one of the SCORM 2.0 discussions:

The success of this project will depend on the ability of TC39 to do a better job of managing the tradeoffs between innovation and stability, and adopting a discipline for managing complexity. Simplicity should be highly valued in a standard. Simplicity cannot be added. Instead, complexity must be removed.

It turns out that standard bodies are not good places to innovate. That’s what laboratories and startups are for. Standards must be drafted by consensus. Standards must be free of controversy. If a feature is too murky to produce a consensus, then it should not be a candidate for standardization. It is for a good reason that “design by committee” is a pejorative. Standards bodies should not be in the business of design. They should stick to careful specification, which is important and difficult work.

Just substitute LETSI for “TC39” and you have a very relevant, poignant piece of advice for the SCORM 2.0 workgroup as it moves forward.

A common thread in many of the posts I’ve been reading is that standards do not lead to innovation, but rather that innovation leads to standardization. Adobe’s Mike Chambers has a great compilation of comments on this topic. I especially found this quote on standards from Dojo’s Alex Russell to be very insightful:

we need to applaud and use the hell out of “non-standard” features until such time as there’s a standard to cover equivalent functionality.

LETSI has promised to foster innovation. Will LETSI attempt to codify innovative and non-standardized technology or practices into a standard? Will it only accept existing standards (royalty-free standards, at that)?

The Raising of SCORM 2.0 will be interesting to watch as it plays out.

SCORM 2.0 white paper submission

I submitted my white paper for SCORM 2.0 today. I don’t see it listed on the LETSI site yet, but I’m hoping they received it and that it will be added soon. 🙂

I wanted to share a few things about my experience writing that paper. First of all, it wound up being a very informal paper, and doesn’t really follow the structure the LETSI folks had requested (sorry, guys). I went through a number of false starts (about seven, I think) trying to nail down what I wanted to say; even now that I’m finished, I still feel like I didn’t quite express everything I wanted to say. SCORM is just so… nebulous, and the potential for scope creep with SCORM 2.0 is so strong that I had a hard time keeping my comments focused and concise.

In the end, I wound up writing something of a ‘stream of consciousness’ paper that mostly served to get some of my concerns and notions on the official record. When I see the scholarly work some of the other participants submitted, I feel rather embarrassed, but hey, such is life!

Speaking of other papers, I took the liberty of reviewing many of the submitted documents, and I must say, I’m impressed! There’s quite a range of material and opinions, and it looks like there are some very interesting ideas in there. From what I can see, SCORM 2.0 might wind up being a simplified and cleaned-up version of SCORM 2004, or it may be a completely different animal consisting of all kinds of new features supporting the latest trends in e-learning and web technology.

I especially agreed with one of Mike Rustici‘s points:

The ‘burden of complexity’ [should be] on the LMS developers. […] It only makes sense that the LMSs should have to do the hard work and that SCORM should be nearly transparent to content developers.

This goes along with my primary motivation: make SCORM easier for content developers! Otherwise no one will want to use it, and it will be abandoned in favor of proprietary workarounds. As Rustici says, “Sequencing needs to be simplified so that it is easier than creating your own alternative.” I say this is true for all of SCORM, not just sequencing.

White Paper Addendum

Here are a few odds and ends I left out of the paper that I think are worth putting on record. These are in addition to my previous suggestions for SCORM 2.0.

A lite API. If content aggregation remains part of the core spec, can we implement a ‘lite’ API that simplifies SCORM use for single-SCO courses? (The reasoning being that single-SCO courses shouldn’t be forced to use the full manifests and other heavy configuration items that aggregated courses will require.)

SCORM security via a Web service. One potential benefit of using a web service for SCORM in lieu of an ECMAScript API is security; ECMAScript is inherently insecure, and a web service (depending on the communication protocol being used) may be easier to encrypt and protect.

Frames and session states. In my paper, I only mentioned this topic in passing, but it’s worth fleshing out a little bit.

SCORM’s current ECMAScript API requires an active JavaScript connection with the course; if the course’s connection to the SCORM API is lost, the course can’t exchange SCORM data with the LMS, effectively killing the course’s tracking functionality. Any time a webpage is unloaded in the browser, that page’s JavaScript is terminated. This is a simplistic explanation, but the point is that the browser won’t allow JavaScript to maintain a session across pages. (A session is data that persists from page to page as a user navigates through a website.)

To get around this limitation, websites often use some trickery. The most common workarounds are:

  1. Traditional framesets
  2. iframes
  3. Flash

The frameset workaround uses the parent frame to maintain session data; the parent frame never reloads, and thus never ends the session. All page loading/unloading occurs in a child frame.
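
For anyone who hasn’t built one of these in a while, the skeleton looks something like this (file names are placeholders):

```html
<!-- player.html: the outer frameset loads once and never reloads, so the
     JavaScript living in the hidden frame keeps its session for the life
     of the course. -->
<html>
  <head><title>Course Player</title></head>
  <frameset rows="0,*" frameborder="0">
    <!-- Hidden frame: finds the LMS API and holds all SCORM state -->
    <frame src="scorm_driver.html" name="apiFrame" noresize>
    <!-- Visible frame: individual content pages load and unload here -->
    <frame src="page_01.html" name="contentFrame">
  </frameset>
</html>
```

Each content page then reaches over to its sibling frame (for example, parent.frames["apiFrame"]) to get at the SCORM functions, so the connection to the LMS is never lost when a content page unloads.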

The iframe workaround is much like the frameset workaround, except it uses a single inline frame instead of a full frameset.

The Flash workaround is simple: the HTML page contains a Flash movie that handles all content; the HTML file itself never reloads, allowing it to maintain a JavaScript session, while the Flash content does all the dynamic loading and unloading of content without impacting the HTML file.

Since SCORM requires an active JavaScript session to maintain communication with the API, most SCORM courses are forced to use one of these three workarounds. The problem is that all three of these methods have significant (and well-documented) usability and accessibility issues.

If SCORM 2.0 can implement some kind of Web service that eliminates the need for a JavaScript/ECMAScript API, we can also eliminate the reliance on these three workarounds. This will serve to make e-learning courses much more accessible, and reduce developers’ reliance on JavaScript.

Have you read any of the submissions?

If you’ve read any of the white paper submissions, I’d be interested to know which ones caught your eye or made the hair on the back of your neck stand up. And if you have opinions on the papers, definitely submit your comments to the LETSI site… they need all the feedback they can get. After all, SCORM 2.0 is for us; make sure your voice is heard!