Before we start building functional SCORM courses, let’s take a moment to discuss how you can test your SCORM courses.
One of the biggest obstacles to hand-built courseware is the lack of a built-in preview feature. This also impacts testing. When you use a commercial e-learning development tool, it typically includes a preview feature, enabling you to launch and interact with your course without loading it into an LMS. You can test things out and make small edits quickly. If you see an issue in your course, you close the preview window, make your edit, then relaunch the preview window, sometimes in a matter of seconds. The ability to quickly jump between the source material and the preview mode is a very alluring feature.
How can we do that if we’re building courses by hand? Sure, you can launch the course in your local web browser, but the browser won’t be able to provide the SCORM API, which means you can’t test the SCORM functionality. When you’re ready to test your SCORM tracking code, you’ll need to publish it to an LMS, which means setting up your SCORM manifest, packaging the course as a ZIP, uploading the ZIP to an LMS, configuring the course settings in the LMS, then launching the course in the LMS. If you discover a typo, or just want to make a small tweak to your code, you’ll (usually) need to go through the same steps again. It becomes very tedious very quickly.
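To see why a plain browser launch can't exercise tracking code: a SCORM course locates its API by walking up the frame hierarchy looking for an object the LMS exposes (named `API` in SCORM 1.2, `API_1484_11` in SCORM 2004). A minimal sketch of that discovery walk, written as a pure function over a window-like object so it can run anywhere; everything except the two standard API property names is illustrative:

```javascript
// Sketch of the standard SCORM API discovery walk.
// In a real course you would call findAPI(window); here "win" is any
// object with optional `parent`, `API`, and `API_1484_11` properties.
function findAPI(win) {
  let attempts = 0;
  // Climb the frame hierarchy until an API turns up or we hit the top.
  while (win && !win.API && !win.API_1484_11 && attempts < 10) {
    if (!win.parent || win.parent === win) break; // reached the top frame
    win = win.parent;
    attempts++;
  }
  return win ? (win.API_1484_11 || win.API || null) : null;
}

// Opened in a plain browser tab, there is no API anywhere up the chain:
const bareWindow = { parent: null };
console.log(findAPI(bareWindow)); // null, so SCORM calls cannot be tested

// Launched by an LMS, the API lives on an ancestor frame:
const lmsTop = { API: { LMSInitialize: () => "true" } };
lmsTop.parent = lmsTop; // the top window is its own parent
const courseFrame = { parent: lmsTop };
console.log(findAPI(courseFrame) !== null); // true
```

When you open the course straight from your filesystem, the walk comes up empty, which is exactly why the tracking code can't be exercised until a later phase.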
This is why I break my testing into three phases:
- Local testing using a web browser
- Local testing using a faux LMS
- Remote testing on a real LMS
Local testing using a web browser
In this first stage, I’m not worried about testing the tracking code. I’m more concerned about the basics: ensuring my course navigation works and that the user experience is as good as it can be. This includes ensuring the controls are accessible and that the pages are responsive (flexible, resizing to fit different viewports). The first draft of a course might not even have any SCORM code in it, and should work in a web browser just like any other website: you can open it, navigate, and interact with it.
This first stage is the best time to confirm the content, look, feel, and general functionality of the course before getting caught up in the LMS side of things. Ensure your content is accurate, that you don’t have any typos, that you have all of your final artwork in place, etc.
It’s critical that you test your course in different web browsers at this early stage of development. There are few things worse than developing a course which functions beautifully in one browser only to find it horribly broken in others. (Though, thankfully, this is less of an issue with modern browsers than it used to be.)
I generally do most of my initial development and testing in a single browser (typically Mozilla Firefox, simply because it’s my default browser), and jump to other browsers whenever I add new layouts or functionality to the course, such as a new interaction. Always ensure the new items look and behave as expected in all major browsers.
Don’t get hung up on making the courses look identical and pixel-perfect in every browser; each browser brings its own unique style and quirks to webpages, especially when using form elements such as drop-down menus. The focus should be on the content and functionality. Ask yourself if the user is receiving the same experience across all browsers, even if a few items (scrollbars, form fields, etc.) look different.
This is also a good time to ensure the courseware is keyboard accessible. In my opinion, being able to navigate the entire course using a keyboard (no mouse or trackpad) is a requirement, and should not be an afterthought. Many screen readers and assistive technologies use keyboard mappings to help their users navigate the course. Of course, accessibility runs much deeper than keyboard access, but this is an excellent way to train yourself to remember accessibility while you develop your courseware.
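As one concrete pattern (my own sketch, not taken from any particular toolkit): keep the key-to-action mapping in a pure function so it can be tested without a DOM, then wire it to the document's keydown event. The key choices and action names here are illustrative:

```javascript
// Map key presses to course navigation actions.
// Pure function, so it is testable outside a browser; the key names
// follow the standard KeyboardEvent.key values.
function keyToAction(key) {
  switch (key) {
    case "ArrowRight":
    case "PageDown":
      return "nextPage";
    case "ArrowLeft":
    case "PageUp":
      return "previousPage";
    case "Home":
      return "firstPage";
    default:
      return null; // leave Tab, Enter, etc. to the browser's defaults
  }
}

// In the browser you would wire it up roughly like this
// (navigate() is a hypothetical course function):
// document.addEventListener("keydown", (e) => {
//   const action = keyToAction(e.key);
//   if (action) { navigate(action); e.preventDefault(); }
// });

console.log(keyToAction("ArrowRight")); // "nextPage"
console.log(keyToAction("Tab")); // null
```

Shortcuts like these should supplement, not replace, proper semantics: building controls from native focusable elements (buttons and links) gives you a sensible Tab order for free, which is what screen readers rely on most.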
Finally, if you have a subject-matter expert, instructional designer, or client who needs to review and sign off on the course, do it at this stage if possible. It will save you from hassles down the road, since you can quickly update content without jumping through LMS hoops.
Notes
- Tools like BrowserStack and LambdaTest are an easy way to test your course across browsers and platforms.
- See sites such as https://www.w3counter.com/trends for the latest trends in browser market share.
- I like to test courses across the following browsers and platforms:
- Google Chrome (macOS and Windows)
- Firefox (macOS and Windows)
- Microsoft Edge (Windows only)
- Safari (macOS only)
- I sometimes test in Opera, but it has significantly smaller marketshare, and uses the same Blink rendering engine as Chrome. I don’t consider Opera a hard requirement. (This may not be the case for some European course developers, as Opera is much more widely used in Europe than the United States.)
- Microsoft Edge also uses the same rendering engine as Chrome, so if the course works in Google Chrome, it typically also works in Opera and Edge.
- If you plan to support mobile devices, you should test Safari for iOS as well as Chrome for Android devices.
Local testing using a faux LMS
In the second phase of testing, I test locally using a home-built faux LMS. The faux LMS is basically a simple one- or two-page website running on my local machine. It reads files from my local filesystem, enabling it to display a list of local courses, which launch in a popup window when clicked.
The system uses a stripped-down SCORM API and saves the SCORM data to the web browser’s LocalStorage. All SCORM calls are properly recorded, including bookmarks, so I can exit a course and resume it just as I would on a production LMS.
This sounds very complicated, but in reality the experience is almost the same as simply viewing the course in a web browser. The only differences are that the course can be launched in a constrained pop-up window (mimicking an LMS experience) and the course will have SCORM tracking support.
Using a faux LMS is a real time-saver compared to using a true LMS: it lets me test files locally without uploading the entire course, doesn’t require creating a ZIP package, and doesn’t require any fiddling with course settings or versioning in the LMS user interface. If I were using a real LMS, a task as simple as fixing a typo would require creating a new ZIP, uploading it to the LMS, then dealing with the LMS’s versioning system. And many times, when you launch the edited course, you’ll run into caching issues where the LMS continues serving the old version of the file. In the faux LMS environment, if I spot a typo on a course page, I can quickly edit that file locally without having to repackage and re-upload anything.
Unfortunately, I’m not ready to share my faux-LMS systems yet (sorry for being a tease!), but if you’d like to try something similar, look into open-source systems like Jonathan Putney’s SCORM Again or Tim St Clair’s SCORM Debug.
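To give a flavor of what such a shim involves, here is a minimal sketch of a faux SCORM 1.2 API backed by a key-value store. The function names (`LMSInitialize`, `LMSGetValue`, and so on) are the real SCORM 1.2 API calls, but everything else is my own illustration, and only a fraction of the real API surface is shown. In a browser you would pass `window.localStorage`; here a plain object with the same `getItem`/`setItem` shape stands in so the sketch runs anywhere:

```javascript
// Minimal sketch of a faux SCORM 1.2 API backed by a key-value store.
function createFauxAPI(storage, courseId) {
  const prefix = `scorm:${courseId}:`; // namespace keys per course
  return {
    LMSInitialize: () => "true",
    LMSFinish: () => "true",
    LMSGetValue: (key) => storage.getItem(prefix + key) || "",
    LMSSetValue: (key, value) => {
      storage.setItem(prefix + key, String(value));
      return "true";
    },
    LMSCommit: () => "true", // data is already persisted on every set
    LMSGetLastError: () => "0",
    LMSGetErrorString: () => "No error",
    LMSGetDiagnostic: () => "",
  };
}

// Stand-in for localStorage (same getItem/setItem contract):
const store = new Map();
const storage = {
  getItem: (k) => (store.has(k) ? store.get(k) : null),
  setItem: (k, v) => store.set(k, v),
};

// Simulate exiting and resuming: the bookmark survives because it
// lives in the store, not in the API object.
let API = createFauxAPI(storage, "course-101");
API.LMSInitialize("");
API.LMSSetValue("cmi.core.lesson_location", "page-7");
API.LMSFinish("");

API = createFauxAPI(storage, "course-101"); // "relaunch" the course
console.log(API.LMSGetValue("cmi.core.lesson_location")); // "page-7"
```

Expose an object like this as `API` on the window that opens the course popup, and the discovery walk in the course will find it just as it would a real LMS.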
Remote testing on a real LMS
For the third phase of testing, I upload the course to a staging LMS to confirm it works as expected. This staging system is typically a clone of a production LMS, specifically provided to developers for testing courses.
If you don’t have a staging LMS, you may want to consider installing an open-source LMS locally. It takes some effort, but will give you decent results. Some LMSs to consider are Moodle, Chamilo, Forma, and Ilias. Alternatively, you can use a third-party service such as Rustici Software’s SCORM Cloud. To be honest, SCORM Cloud is one of the best ways to test a course, as it provides very useful SCORM logs and error messages, which can help pinpoint any SCORM-related issue you may be encountering. SCORM Cloud provides free accounts for testing smaller courses, but requires payment for larger courses. It also requires uploading your files to their servers, which may present security and/or confidentiality issues for some clients.
Regardless, testing on a staging system is crucial for verifying the functionality of the course in a production environment before releasing it to your learners. Always check the course in a true LMS environment before assigning the course to learners.
Since LMS platforms vary, they sometimes introduce cross-platform issues. Even though you may have tested your course across browsers and platforms locally, you should do it again once your course has been loaded into the LMS. It’s always better to catch these issues yourself than to have your boss, customer, or end users (learners) find them for you.
In short, test, test, and test again.