In this post, I’m going to wander a bit afield from my usual focus on how we talk about behavior from a conceptual or theoretical perspective. This discussion will still focus on how we talk about behavior, but it’s how we talk about our own professional behavior that is at issue.
The rules I’m talking about concern some of the details of our ABA technology. These rules focus on established procedures, technicalities apparently so widely agreed upon that they are often reduced to fixed protocols that do not vary much from one case to another. Here are three examples, although you can probably identify others as well.
Trials to criterion: In acquisition programs involving a series of steps on the way to mastery of a skill, the practitioner must decide when to either raise performance criteria or conclude that the skill has been mastered. This decision comes down to specifying how well the learner must perform under an existing standard before it is replaced by the next step in the series. The final step defines mastery of the behavior. The most common criterion specified is that 80% of the trials must be completed correctly in order to move to the next step or, if there are no more steps, to consider the behavior mastered. More often than not, this figure seems to serve as the default level of performance presumed to promise clinical success.
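The 80% convention described above amounts to a simple decision rule, which can be sketched as follows. This is only an illustration of the rule as stated, not a recommended protocol; the function name, the idea of scoring a single block of trials, and the default threshold are assumptions for the sake of the example.

```python
# Minimal sketch of the conventional 80%-correct advancement rule.
# The threshold and the notion of scoring one block of trials are
# illustrative assumptions, not a prescribed clinical procedure.

def meets_criterion(trial_outcomes, threshold=0.80):
    """Return True if the proportion of correct trials meets the threshold.

    trial_outcomes: list of booleans, one per trial (True = correct).
    """
    if not trial_outcomes:
        return False
    proportion_correct = sum(trial_outcomes) / len(trial_outcomes)
    return proportion_correct >= threshold

# e.g., 8 of 10 trials correct meets an 80% criterion; 7 of 10 does not
print(meets_criterion([True] * 8 + [False] * 2))  # True
print(meets_criterion([True] * 7 + [False] * 3))  # False
```

Note how much the rule leaves unspecified: how many trials make up the block, whether the block is a moving window, and whether the criterion should vary with the behavior being taught, which is precisely the kind of complexity the rest of this post takes up.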
Number of days per phase: Within-subject experimental designs are based on a series of successive phases involving control or baseline conditions and experimental or treatment conditions. Both researchers and practitioners must make decisions about when to end one phase and start the next. Looking at session-by-session data, this decision comes down to how many sessions (or days, assuming one session per day) to accumulate before switching to the next phase. The modal value in the ABA literature based on the findings of Huitema (1986) and on my own research is three, a rule “codified in many textbooks on single-case designs” (Kennedy, 2005, p. 38).
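The three-sessions-per-phase convention can likewise be rendered as a mechanical check. The sketch below is hypothetical: the minimum-session count follows the modal value reported above, while the crude stability heuristic (range of recent values within some proportion of their mean) and its tolerance are illustrative assumptions, not an established method for judging steady-state responding.

```python
# Hedged sketch of the "three sessions per phase" convention: allow a
# phase change only after a minimum number of sessions, with a crude
# stability check. The tolerance value is an illustrative assumption.

def ready_to_switch(session_values, min_sessions=3, tolerance=0.20):
    """Return True if at least min_sessions have accrued and the most
    recent min_sessions values vary by no more than tolerance * mean."""
    if len(session_values) < min_sessions:
        return False
    recent = session_values[-min_sessions:]
    mean = sum(recent) / len(recent)
    if mean == 0:
        return max(recent) == min(recent)
    return (max(recent) - min(recent)) <= tolerance * mean

print(ready_to_switch([12, 10, 11]))  # True: three relatively stable sessions
print(ready_to_switch([12, 4, 20]))   # False: highly variable responding
```

Even this toy version shows why the rule is suspect: three data points can satisfy any mechanical check while telling us almost nothing about the stability of responding.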
Acquisition targets and reduction targets: There appears to be widespread agreement that targeting a behavior for reduction requires concomitantly establishing or at least strengthening one or more other behaviors, although there are reasons to reflect on this assumption (Johnston, 2006). This relationship seems to have acquired the status of a rule. I even worked with a doctoral-level practitioner in state government some years ago who mandated a two-for-one rule for the intellectual disabilities system – two acquisition targets for every reduction target.
Some may debate whether these procedural specifications should be characterized as rules, but the more important concern should be their shortcomings as broadly accepted guidance for professional decision-making. I would hope that decisions about the number of trials to criterion would take into account a number of complexities, such as the nature of the behavior, the effects of different levels of competence, and more. Certainly decisions about the length of phases should be guided by a long list of important considerations, not the least of which is that three days/sessions is rarely enough to describe a stable state of responding. And decisions about the role of acquisition training as a component of reduction objectives raise a number of issues that could suggest that acquisition targets may sometimes not even be necessary (Johnston, 2006).
The good old days
There was a time, early in the history of behavior analysis, when there were no rules. What we now refer to as ABA was still aborning – even the name wasn’t well established. Practitioners made decisions about what to do based on their own training – such as it was – and the unique clinical circumstances before them. Did early behavior analysts realize they didn’t know enough to establish rules or did they instead look for other forms of guidance? How did we get to the point that textbooks and other training experiences encourage general rules for selected components of our technology?
Having been around during this period, I can attest that there wasn’t a noticeable longing for procedural rules. Of course, we didn’t know what we didn’t know, but any shortfall in our expertise was compensated for by adherence to the strategic and tactical underpinnings of the emerging field. That is, we behaved like analysts of behavior. Finding ways of being less analytical wasn’t on the agenda.
Times change, however, and the field has since grown in every dimension. It accumulated more research and clinical literature. It attracted attention, more often than not laudatory, and more people wanted to be applied behavior analysts. Training programs at undergraduate and graduate levels blossomed like spring flowers, along with a demand for workshops, short courses, and field training for paraprofessionals.
The field’s educational industry rose to the occasion with textbooks and less formal training materials that documented our growing understanding of operant behavior and behavior change technology. Core textbooks could only summarize the increasingly large ABA literature, however, because the details were simply too numerous to cover. In the interest of “giving it away,” we also tried to make our technology understandable and appealing to all. This effort often resulted in simplified summaries of what were actually increasingly complex ABA procedures.
The establishment of a professional credentialing industry, including not just its certifying body but state-by-state licensing programs and an increasing array of companies offering study materials, mock exams, and more, has only encouraged this tendency toward simplification. Perhaps it shouldn’t be surprising that a thoroughly analytical approach to decision-making gradually gave way over the years to summaries of best practices and simple rules for some of the more common details.
The thesis underlying the title of this post is that summaries of best practices and simple rules are not the best we have to offer. ABA technology is complicated in general and more complicated still when fit to particular circumstances. To give the best that ABA has to offer requires professional expertise and lots of it. The Behavior Analyst Certification Board (BACB) recognizes this with three levels of certification and a registered technician program. The minimum credential for independent practice is the Board Certified Behavior Analyst (BCBA) certificate, which requires a sequence of increasingly rigorous master’s-level courses and experience training, topped off with an examination many applicants find too demanding. Although the BACB task standards and associated credentialing exam test items are necessarily superficial, the expectation is that each standard or test item represents a significant body of knowledge that helps define the level of expertise the discipline says the BCBA credential should represent.
That level of expertise goes way beyond summaries and rules. As a field, we expect practitioners to approach their responsibilities with a deep reservoir of knowledge and skills and to draw upon this background in making highly specific and well-reasoned decisions about how the technology should be applied in each unique case. In other words, well-trained BCBAs should be able to go far beyond the simplicity of procedural rules. They should be able to explain where a rule came from and why it is an imperfect guideline. They should be able to explain why a particular situation requires more or less than a rule might suggest and how to evaluate the resulting outcome. For well-trained BCBAs, procedural rules merely get in the way; they are meant to be broken. If such rules have a place in the practice of ABA, it may be as an aid to guiding the contribution of those with a more superficial understanding of the technology. In other words, rules are for amateurs – no pejorative connotations intended – not BCBA professionals, who should bring rule-breaking expertise to treatment decisions.
Huitema, B. E. (1986). Autocorrelation in behavioral research: Wherefore art thou? In A. Poling & W. Fuqua (Eds.), Research methods in applied behavior analysis (pp. 187–208). New York: Plenum Press.
Johnston, J. M. (2006). “Replacing” problem behavior: An analysis of tactical alternatives. The Behavior Analyst, 29(1), 1–11.
Kennedy, C. H. (2005). Single-case designs for educational research. Boston: Pearson Education.