LearnSWFObject.com domain being retired

It’s hard to believe that I have run learnswfobject.com for nearly ten years.

It’s easy to overlook now, but back when Flash was booming, before iOS and Android changed everything, SWFObject was a very important piece of web technology. According to BuiltWith.com, SWFObject’s usage peaked at about 3.5 million sites in late 2013. As of Dec 2018, there are still over 1.1 million sites using SWFObject. That’s a lot of sites. Accordingly, there were also a lot of web developers trying to learn how to use SWFObject.

I didn’t create SWFObject — it was Geoff Stearns’ brainchild — but I used to frequent the support forums and help people fix their broken Flash and SWFObject implementations. After writing the same advice over and over in forums and emails, I decided to create a tutorial site. LearnSWFObject.com was built and released as a self-hosted WordPress blog in 2009.

I didn’t start tracking visits for learnswfobject.com until mid-2010, but according to my Google Analytics metrics, the site has had approximately 138,000 unique visitors since April 2010.

  • 2010 (partial): 12.3k
  • 2011: 15.5k
  • 2012: 12.8k
  • 2013: 11.3k
  • 2014: 16.5k
  • 2015: 25.7k
  • 2016: 20.7k
  • 2017: 15.2k
  • 2018: 8.1k

In 2014 I migrated the site from my self-hosted WordPress installation to GitHub Pages. I was tired of dealing with WordPress updates for what was essentially a defunct site, and was also tired of paying the hosting fees out of my own pocket. Migrating to GitHub Pages enabled me to set and forget, except for the annual domain name renewal.

Aside: It’s interesting to note the site had an unexpected upturn in visits after migrating to GitHub Pages. I guess being on GitHub gave it some SEO juice. 

As you’ve likely read by now, Flash is dying, and will be officially unsupported by Chrome, Edge, Firefox and Safari no later than 2020. This is not speculation: cutoff dates for Flash Player support have been announced by all of the major browser vendors (Google, Apple, Microsoft, Mozilla), and by Adobe itself. SWFObject’s code base has not been updated since 2013 — over five years! It’s time to let it go. The learnswfobject.com domain is due to expire in February 2019, and for the first time in a decade, I do not plan to renew it. The site will remain up and running on GitHub Pages for historical/archival purposes, reachable at https://pipwerks.github.io/learnswfobject/.


Eight Years of Running a Mac Mini Server

In late 2009 I migrated all of my websites to my personal Mac Mini server, which is hosted in MacMiniColo’s data center (now part of MacStadium). You can read about my reasons for moving from hosted services to my own server here.

I’ve never looked back, and have mostly enjoyed having my own server because of the freedom it gives me to experiment and customize my environment.

Mostly.

When I first got the server, I was new to Linux and was really happy Apple provided Server.app, which is a GUI for the standard fare of services, including Apache, mail, FTP, VPN, and certificate management. I had previously dabbled in Linux server administration via hosted services and Microsoft IIS at my workplace, but it’s safe to say I was still a n00b. Server.app handled the heavy lifting and made it easy for a lightweight like me to get a simple site up and running.

Almost exactly eight years later, I’ve replaced the hardware once (a newer, faster Mini), have updated macOS seven times, and replaced Server.app six or seven times. Through it all, the Mini (and MacMiniColo’s hosting) has been solid. The software? Not so much.

Apple’s Server.app is a compilation of open-source software, which sounds great — plenty of people use the same software and there are literally thousands of how-to guides on the interwebs. Except… Apple in their wisdom decided to customize pretty much everything, which meant the aforementioned guides were often useless, causing endless headaches. (On the bright side, my Google-Fu has grown immensely.)

Over the past few years, HTTPS has become an increasingly important part of web hosting. Before the advent of Let’s Encrypt, I had purchased a couple of commercial SSL certificates (WOW they’re expensive) and installed them via Server.app. This was not very difficult. But as I started adding more and more sites and SSL certs to my server, I started running into really weird Apache errors, which often caused ALL of my sites to become unavailable. Remember, Server.app was doing the Apache config, not me, so it should have been as easy as drag-and-drop. Finding solutions to these errors proved to be incredibly painful, as there are very few resources for Server.app, and even fewer that are up-to-date. Every Apache troubleshooting guide I’d find referred to the standard Apache installation, not the Apple-flavored installation, which stored files in completely different locations and included many modifications.

But I soldiered on, eventually sorting out each issue and hoping it would be fixed in the next version of Server.app.

Last month I finally reached a tipping point. I purchased a domain name for my wife and created a placeholder site on my server. When I added an SSL cert for the new domain, all of my sites went down (again), and I kept getting cryptic Apache errors (again).

I seriously considered switching to a hosted service and giving up the Mini, but my prior experience with hosted services was horrible, and it would likely cost even more than what I pay for the Mini.

I decided to focus on getting out from under Server.app’s grip. Two of the most appealing paths:

  1. Go the Homebrew route and install all the key software (Apache, SQL, PHP, etc.) via Homebrew.
  2. Run a Linux server in a VM.

I love Homebrew and use it frequently on my MacBook, and figured it would work well on a server. However, when I gave it a try, I had the darndest time getting Server.app to let go of resources. I was running into conflicts left and right, even after uninstalling Server.app and running cleanup scripts. I put Homebrew on hold, thinking maybe I’d need a clean install of macOS to build on, but I wasn’t ready to nuke my server just yet.
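For the curious, the Homebrew route boils down to something like this (a sketch; formula names are today’s Homebrew names and may have differed at the time):

brew install httpd php mysql
sudo brew services start httpd    # Apache needs root to bind to ports 80/443
brew services start php           # runs php-fpm in the background
brew services start mysql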

I started looking into virtualization. Having worked with server virtualization (Proxmox) at my day job, I was excited to give a virtualized environment a try on my Mini. The Mini is not a powerhouse and only has one network card, but I figured that since I run VMs on my MacBook Pro all the time, the Mini should be able to handle it as well. Worst case scenario, it would be a learning experience and I would go back to macOS or maybe a commercial hosting service. Best case scenario, I’d have new life for my server.

I downloaded VirtualBox and used my MacBook as a testing ground to see if I could get a proof of concept up and running. I managed to get a simple but powerful LAN going in just a few hours — pfSense handled all NAT and port forwarding, and an Ubuntu server VM provided the LAMP stack. It was working very well for a proof of concept, but I still had reservations about macOS running underneath, and those pesky conflicts caused by Server.app on my Mini.

I decided it was time for a clean install of macOS on my Mini. I got in touch with MacStadium’s (formerly MacMiniColo) very helpful support staff, and they mentioned VMware’s ESXi was available for their customers, and that they’d handle the ESXi installation, free of charge.

If you’re not familiar with ESXi, it’s VMware’s free hypervisor offering. It’s similar in concept to VirtualBox, but designed to run “bare metal”, as an operating system on the hardware, not on top of macOS. Since ESXi runs as an OS, it’s notoriously tricky to install on a Mac, especially if your server is hundreds of miles away in a data center. I jumped at the chance to get it installed by folks who know what they’re doing.

I spent the last three weeks sorting out the architecture and am pleased to announce it’s all up and running. My sites, including this one, are now being served via an Ubuntu VM on ESXi, running on my Mac Mini in Las Vegas. Finding documentation for Ubuntu has been super easy, and tasks that were previously time consuming and manual, such as obtaining and updating Let’s Encrypt certs, are now completed in a few minutes.
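For example, obtaining and renewing a cert with Certbot on Ubuntu is roughly this (a sketch; package names vary by Ubuntu release, and example.com is a placeholder):

sudo apt install certbot python3-certbot-apache   # Certbot and its Apache plugin
sudo certbot --apache -d example.com              # obtain a cert and update the Apache config
sudo certbot renew --dry-run                      # confirm automated renewal works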

It was a time-consuming transition, which explains why my sites were down for so long (sorry), but I’m really glad I made the switch. A few weeks in, and I don’t miss Server.app or macOS at all. If all goes well, this server setup should last for years (with security updates, of course).

I hope to write a more detailed account of the architecture in a future post.


Using Scraper on RetroPie

RetroPie is a fun little arcade system that runs on Raspberry Pi. It includes Emulation Station, which allows the user to select games using a USB game pad or joystick instead of a keyboard.

One of Emulation Station’s features is a scraper, which analyzes your library of game ROMs and tries to download the appropriate artwork and game metadata from online databases. If successful, when you browse your library you will be presented with nice art and game descriptions.

I was excited to try it, but ultimately found Emulation Station’s scraper to be very hit-or-miss.

Dozens of online forums and articles laud Steven Selph’s Scraper as being faster and more thorough, so I decided to give it a try. I was able to get Scraper installed rather quickly using the official RetroPie instructions for installing Scraper, but they unfortunately don’t give you much guidance beyond installation.

I rolled up my sleeves and spent a few hours tinkering. Here are my notes.

My first obstacle was how to access Scraper after installation. Seems silly in retrospect, but this took me quite some time to figure out.

Quit Emulation Station (F4 on your keyboard). You will be taken to the RetroPie command line (shell).

Note for advanced users: You can also run commands from an external computer if you have enabled SSH in RetroPie. I used SSH for most of the tasks detailed below; it was especially handy to have SFTP enabled for managing files. (video demonstration).
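For example, assuming RetroPie’s default hostname and user:

ssh pi@retropie   # default password is "raspberry" unless you've changed it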

Using the command line, launch the RetroPie setup script:

sudo ./RetroPie-Setup/retropie_setup.sh

If you’re unfamiliar with the command line, an .sh file is a shell script. In this scenario, RetroPie-Setup is the folder containing the script, and retropie_setup.sh is the name of the script. Placing ./ in front of the path to the script tells the system to run the script. To run the script as an administrator, begin the line with sudo (“superuser do”).

The line above is equivalent to:

cd RetroPie-Setup
sudo ./retropie_setup.sh

This will bring you to the RetroPie setup menu.

RetroPie setup menu

Go to “Configuration / tools”. You will be presented with a menu of options. Scraper will be listed near the bottom:

RetroPie configuration menu

Select Scraper and hit OK. You will be presented with the Scraper menu.

Scraper menu

I changed a few options and then let it “Scrape all systems”. It worked pretty well, but there was one thing that bugged me: the artwork Scraper grabbed usually consisted of old posters or cabinet art, which often looked nothing like the game itself. I just wanted to see snapshots from within the game.

Turns out if you use Scraper on non-RetroPie systems, you have the option to specify a preference via command line flags. For example, you can specify an order of preference, with the options of snapshots, boxart, fanart, banner, and logo.
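On those platforms a direct invocation might look something like this (a sketch; the -console_img flag and its image-type codes are the same ones used later in this post):

scraper -console_img "s,b,f"   # prefer snapshots, then box art, then fan art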

I tried for quite some time to run Scraper via command line in RetroPie, using the flags specified on the Scraper site, but I often encountered errors about specific flags not being supported. The only method that worked consistently was launching Scraper as I described above. But the option to specify a preference for image type is not built into the menu. Turns out this is a known limitation.

But where there is a will, there is a way. I looked at the contents of the scraper.sh file, and it was pretty trivial to add the missing flags directly to the file using a text editor.

The Scraper script, scraper.sh, is located in

/opt/retropie/supplementary/scraper

When accessing the folder using an SFTP app, it looks like this:

Scraper folder in SFTP app


I right-clicked scraper.sh in my SFTP app of choice and opened it using a text editor.

Line 82 had params+=(-skip_check), so I added my own line directly underneath it:

params+=(-skip_check)
params+=(-console_img "s,b,3b,l,f")

I am telling Scraper to get images for console games in this order of priority: snapshots, box art, 3D box art, logos, and fan art.

But my primary focus is arcade games, not console games; arcade games use a different flag for artwork. Looking further down the file, I noticed the line I wanted was 114. It already specified an order of priority for arcade games. I edited it to use my preferred order of priority: snapshots, marquees, then title.

[[ "$system" =~ ^mame-|arcade|fba|neogeo ]] && params+=(-mame -mame_img s,m,t)

I saved the file, closed it, then re-ran Scraper using the steps listed above. To my surprise, I didn’t see any significant changes to my artwork. Turns out the old artwork was still there, and Scraper only looks for art if no art exists. I needed to delete the old art first!

According to the scraper.sh file, the images are stored in $home/.emulationstation/downloaded_images/$system. For arcade games, this translates to .emulationstation/downloaded_images/arcade.

Using the command line, I navigated to the parent folder:

cd .emulationstation/downloaded_images/

Then I removed the entire arcade folder:

sudo rm -r arcade

Warning: the sudo rm command is dangerous; it will delete whatever you specify. Be careful not to enter any typos.
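If you want a safety net, rm’s interactive flag prompts before each deletion:

sudo rm -ri arcade   # -i asks for confirmation on every file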

When I re-ran Scraper, the snapshots downloaded as intended, and the game menu in Emulation Station now displays the snapshots instead of old posters.

Hat tip to all of the open-source developers and others who created the tools and documentation that helped me sort this out.

PDFObject 2.0 released

After almost eight years in the making (and nearly seven years of procrastinating), PDFObject 2.0 has arrived.

PDFObject is an open-source standards-friendly JavaScript utility for embedding PDF files into HTML documents. It’s like SWFObject, but for PDFs.

Version 1.0 was released in 2008 and has enjoyed modest success. Based on stats from PDFObject.com (including devious hot-linkers) and integration with 3rd-party products, I’m guesstimating it has been used on well over a million web pages. (If I had a nickel for every time it was used…)

I updated it a few times over the years, but generally only if someone reported a compatibility issue. Like an old beat-up car, it was a bit crusty, but still ran like a champ. That is, it ran like a champ until the rules of the game were changed — when Microsoft changed their ActiveX strategy in Internet Explorer 10-11 and Microsoft Edge, PDFObject’s checks for ActiveX began to fail, rendering PDFObject useless in those browsers. This marked the beginning of the end for PDFObject 1.x.

An update was overdue, yet I let it sit for a couple of years – I fully admit that kids, my job, and life tend to take precedence over an unfunded open-source project. But I never stopped thinking about PDFObject. I intentionally kept it at arm’s length for a while; I was fascinated by changes in the front-end development world, and waited to see how things would shake out.

It’s incredible how much has changed since 2008. For starters, the browser landscape has completely changed. Chrome, which didn’t exist when PDFObject was first released, now rules the land. It also happens to include built-in PDF support. PDF.js was invented, and eventually became Firefox’s default PDF rendering engine. Safari renders PDFs natively using Preview. iOS and Android exploded onto the scene, as did Node.js and NPM. Conversely, Adobe Reader’s market share took a nosedive thanks to browser vendors making Adobe Reader less relevant, not to mention disdain for Adobe Reader’s bloat and security holes. And, of course, HTML5 is now official, which means the <embed> element is officially sanctioned.

PDFObject 2.0 is a complete rewrite that tries to take all of this into consideration. It supports PDF.js. It’s packaged for NPM. It uses the <embed> element instead of the <object> element (not going to rename it to PDFEmbed though). It doesn’t pollute the global space and uses modern JavaScript conventions. It supports all CSS selectors, not just IDs. If you’re feeling frisky, you can even pass a jQuery element instead of a CSS selector (note: PDFObject does not require jQuery). Lots of little changes, which I hope add up to a better experience, wider compatibility, and lots of flexibility.
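For example, embedding a PDF is a one-liner; the file path and container ID below are placeholders:

<div id="pdf-container"></div>
<script src="pdfobject.min.js"></script>
<script>
    // Embed the PDF into the target element (any CSS selector works)
    PDFObject.embed("/pdf/sample.pdf", "#pdf-container");
</script>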

If you’d like to learn more about PDFObject 2.0, please visit the official site (completely redesigned as well), with examples, documentation and a code generator: http://pdfobject.com

The code is up on GitHub, and has been posted to npm.

Demos for LearnSWFObject have been moved

For the record, all demos for LearnSWFObject.com have been relocated from my personal server to GitHub. The root URL has changed from demos.learnswfobject.com to learnswfobject.com/demos.

This enables me to keep all of the demos in the same repo as the primary LearnSWFObject site. The site and demos are old, and have not been updated for years, but are still useful to some members of the Flash community. Moving the files to GitHub is a nice way to keep the tutorials and demos online while reducing my personal burden for hosting the sites.

SCORM on Google Trends

Interesting stats from Google: SCORM is clearly on the decline, as is AICC, but both are still much stronger than xAPI (aka Tin Can), which is barely registering.

2004-present (10 years)

2009-present (5 years)

2012-2014 (2 years)

“experience api, tin can api weren’t searched for often enough to appear on the chart. Try selecting a longer time period.”

Does this mean anything? I dunno. But it’s interesting to see SCORM’s steady decline over the last 10 years. Also, please forgive the unresponsiveness of the graphs; Google hard-codes the width in px.

Convert “localhost” to your Mac’s current IP address

When developing web pages, I use MAMP.app or my Mac’s built-in Apache. Viewing the page means using an address such as http://localhost/mypage.html. If you use custom host names (especially easy with the excellent VirtualHostX.app), you may wind up with a localhost address such as http://projectname/mypage.html.
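Under the hood, those custom host names are just loopback entries in /etc/hosts, along the lines of:

127.0.0.1    projectname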

This works great when you’re just testing the pages on your Mac’s browsers. However, once you cross boundaries into Windows testing (via VMs or separate laptops), localhost will no longer resolve. Why? Because localhost is local to your machine.

If you want to view the page in a VM or on another machine, just swap the domain name with your machine’s IP address. For example http://localhost/mypage.html becomes http://10.0.1.14/mypage.html. (Note: you must be on the same network or have a public IP address.)

This works very well, but it’s tiresome to manually grab the IP address anytime you want to use a VM or share the page with coworkers, especially if you’re on DHCP and don’t have a static IP address.
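For reference, you can grab your Mac’s current IP address from Terminal (en0 is usually the primary network interface; yours may differ):

ipconfig getifaddr en0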

I decided to make my life a little easier by writing an AppleScript that looks at the open tabs in Chrome and Safari then replaces “localhost” (or custom domain) with my current IP address. Saving this as a service enables me to go to Chrome > Services to run the script.

Chrome > Services

If you’d like to give it a try, the AppleScript is available as a Gist on GitHub.

AppleScript for generating SCORM manifest nodes

SCORM requires all of the course assets to be listed as <file> items in the <resource> node. This is not evenly enforced — some LMSs don’t care whether you do it or not — but it is still a good practice.
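For example, a resource node listing its assets might look like this (the file names are placeholders):

<resource identifier="resourceID" adlcp:scormType="sco" href="index.html" type="webcontent">
   <file href="index.html" />
   <file href="style.css" />
   <file href="script.js" />
</resource>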

If you’re anything like me, you find it to be a major pain and try to avoid it.

Today I decided to whip up an AppleScript that automates the generation of the <file> nodes to make my life a little easier. If you’re on a Mac, you may find it useful, too. I’ve posted it on GitHub as a gist:

https://gist.github.com/pipwerks/9179518

Note that it doesn’t include the name of the root folder. Let’s say you have a root folder named content. If needed, you can simply specify the root using the “xml:base” attribute of the resource node, like so:


<resource identifier="resourceID" adlcp:scormType="sco" href="index.html" type="webcontent" xml:base="content/">
   <file href="index.html" />
   <file href="Lesson01/index.html" />
</resource>

Clean out the root of your SCORM 2004 package

Anyone who works with SCORM 2004 has seen something like this:

Image of file directory with all schema files at root of directory

With just a little effort, you can make it look like this, and still be perfectly valid:

Image of file directory with all schema files placed in subfolder

SCORM manifests are required to specify a slew of schema files via the schemaLocation attribute. Here’s what you’d typically see:


<manifest identifier="pipwerks-schema-example" version="1.0"
          xmlns="http://www.imsglobal.org/xsd/imscp_v1p1" 
          xmlns:adlcp="http://www.adlnet.org/xsd/adlcp_v1p3" 
          xmlns:adlseq="http://www.adlnet.org/xsd/adlseq_v1p3" 
          xmlns:adlnav="http://www.adlnet.org/xsd/adlnav_v1p3" 
          xmlns:imsss="http://www.imsglobal.org/xsd/imsss" 
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"  
          xsi:schemaLocation="http://www.imsglobal.org/xsd/imscp_v1p1 imscp_v1p1.xsd 
                              http://www.adlnet.org/xsd/adlcp_v1p3 adlcp_v1p3.xsd 
                              http://www.adlnet.org/xsd/adlseq_v1p3 adlseq_v1p3.xsd 
                              http://www.adlnet.org/xsd/adlnav_v1p3 adlnav_v1p3.xsd 
                              http://www.imsglobal.org/xsd/imsss imsss_v1p0.xsd">

Notice the structure of the data in the schemaLocation attribute: external URL followed by a space then the local (relative) URL. For example:


http://www.imsglobal.org/xsd/imscp_v1p1 imscp_v1p1.xsd

In this example, imscp_v1p1.xsd is at the root of the package, in the same folder as the imsmanifest.xml file. The trick is to create a subfolder in the root of the package, then update schemaLocation to point to the subfolder. I created a subfolder named SCORM-schemas, which you can see in the following code excerpt:


<manifest identifier="pipwerks-schema-example" version="1.0"
          xmlns="http://www.imsglobal.org/xsd/imscp_v1p1" 
          xmlns:adlcp="http://www.adlnet.org/xsd/adlcp_v1p3" 
          xmlns:adlseq="http://www.adlnet.org/xsd/adlseq_v1p3" 
          xmlns:adlnav="http://www.adlnet.org/xsd/adlnav_v1p3" 
          xmlns:imsss="http://www.imsglobal.org/xsd/imsss" 
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"  
          xsi:schemaLocation="http://www.imsglobal.org/xsd/imscp_v1p1 SCORM-schemas/imscp_v1p1.xsd 
                              http://www.adlnet.org/xsd/adlcp_v1p3 SCORM-schemas/adlcp_v1p3.xsd 
                              http://www.adlnet.org/xsd/adlseq_v1p3 SCORM-schemas/adlseq_v1p3.xsd 
                              http://www.adlnet.org/xsd/adlnav_v1p3 SCORM-schemas/adlnav_v1p3.xsd 
                              http://www.imsglobal.org/xsd/imsss SCORM-schemas/imsss_v1p0.xsd">

Test, test, test! I’ve tested this in SCORM Cloud as well as a couple of real-world LMSs and haven’t encountered any issues. Your mileage may vary depending on your LMS’s SCORM implementation, but this is perfectly valid XML and shouldn’t break in any LMSs — unless the LMS is poorly coded, but that’s a rarity, right? (LOL)