Gaps in the ADDIE Instructional Design Model
I have often written in the past about the strengths of using an elearning model, such as ADDIE, for course design, development, and delivery.
I still believe that ADDIE (or derivatives of this framework) captures the most under the instructional design umbrella, but that’s not to say it is without flaws.
As with any model, it should be modified to fit the context of the training material, audience, and client. It is during this adaptation that some of the gaps are exposed.
This doesn’t make the model any less effective, especially if you recognize the shortcomings; you just need to be aware of them. Putting 100% stock into any one approach is never a good idea. The better strategy is to diversify, and the same holds when you go through the process of creating a training program.
ADDIE is a strong basis for any training event. There are even other models that have emerged with roots back to ADDIE – it certainly has its place. Still, there are weaknesses. Some of the most common faults, as originally shared by InstructionalDesign.org, include:
- Typical processes require unrealistically comprehensive up-front analysis. Most teams respond by doing very little analysis at all and fail to assess critical elements.
- Ignores some political realities. Opportunities are missed, vital resources aren’t made available, support is lacking, and targets shift.
- Storyboards are ineffective tools for creating, communicating and evaluating design alternatives. Poor designs aren’t recognized as such until too late.
- Detailed processes become so set that creativity becomes a nuisance.
- No accommodation for dealing with faults or good ideas throughout the process.
- Learning programs are designed to meet criteria that are measured (schedule, cost, throughput) and fail to focus on identifying behavioral changes.
- Post-tests provide little useful information to assist in improving instruction.
Some of these might not be as apparent in your current elearning development projects as others. For example, I have never had an issue with the last item listed here, especially when using Kirkpatrick’s four levels of evaluation.
I also disagree with the fourth identified weakness. In my view, it arises only when the model isn’t adjusted properly to fit the situation. Following the model as a rigid script can certainly limit creativity that falls outside of it.
In the end, you don’t need to deliberately pick any one model when designing a training program. In my experience, most clients will require you to demonstrate your knowledge of ADDIE, but that doesn’t mean you have to use it.
Nonetheless, I would argue that even a process built without a traditional model will still come to resemble one. There is nothing wrong with leveraging one of these frameworks as a wireframe for your implementation; in fact, I would encourage it.