Anyone who is actively involved in instructional design has at some point used the ADDIE model (Analyze, Design, Develop, Implement, and Evaluate) for course development. It is one of the most popular structures (if not the most popular) used by training designers today. As you might expect, it has received a lot of attention from the community: some criticize it, others praise it. Personally, I feel that ADDIE works just fine, and I have used a variation of it for years on my own projects.
It’s actually quite interesting how passionately people feel one way or the other about this model. If you approach any training design and implementation with the understanding that each project will have its own unique qualities, you allow for flexibility within the model’s road map. There’s nothing wrong with using ADDIE as a foundation, a starting place to build your own “model” of sorts. Structures like this help drive consistency across projects.
What I find missing from the method is a TESTING component: a dry run after development. This step is traditionally lumped into the Development phase, but I prefer to see it called out on its own. There also isn’t much reference to what happens after implementation. Whenever I deploy a training program, I make sure there is a “Post-Deploy” component for quality assurance purposes. As you can see, adding items to the model is not against the law; do what you feel is most effective.
For those of you who are new to the field, or who just want a refresher, the infographic below (provided by Nicole Legault) offers a nice overview. There is certainly more to each phase, but this is always a good place to start.