The third installment in the Bar Exam Research Query Blog Series discusses research questions pertaining to the impact of bar exam success interventions. A few decades ago, bar performance was largely seen as outside the purview of law schools. Bar preparation, in its most direct form, occurred beyond the gaze of law school faculty and administrators, after students had already become graduates. Performance on the bar exam was attributed to students' aptitude, effort, and preparation, not to law school quality. But shifts in thinking about the role of law schools in ensuring that students are prepared for the profession have prompted unprecedented involvement of law schools in bar preparation.
Law schools are investing heavily in efforts to increase bar passage rates. There are bar review courses for 3Ls; tailored and mandatory curricula for students identified as “at risk”; even 1Ls are being encouraged to undertake their studies with the bar exam in mind. Specific interventions take many forms, from skills-based courses to one-on-one counseling to workshops on managing stress and anxiety. Professors are increasingly incorporating bar preparation into their courses, and students are often encouraged (and sometimes required) to take doctrinal courses covering subjects tested on the exam. Schools are also expanding their academic support staff to focus more squarely on bar passage. These trends have noticeably altered the law school curriculum and the overall law school experience.
But as law schools dedicate extensive human and financial resources to helping prepare students for the bar exam, the question of impact comes to the fore. Attendees at the Bar Exam Research Forum that AccessLex hosted back in April posed many questions pertaining to impact. Which efforts are working? To what extent are they working? Are there better ways to achieve bar passage goals? These are complicated questions, but they are very important.
The most difficult aspect of measuring the impact of bar success initiatives is designing the assessment. At schools undertaking multiple interventions, how can each one be isolated and assessed? Does it even make sense to do so, or are the programs so integrated that they should be assessed collectively? Would it make sense to design true experiments, with control and treatment groups? Would that even be ethical, given that it means withholding a potentially beneficial intervention from some students? Or can post-intervention cohorts be compared to pre-intervention cohorts in ways that are not clouded by unrelated differences between the cohorts, such as shifts in entering credentials? How often should assessments be conducted? The sketch below illustrates one way a school might begin to untangle a pre/post comparison.
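To make the pre/post idea concrete, here is a minimal sketch, in Python, of a cohort comparison that adjusts for entering credentials so that changes in a school's admissions profile are not mistaken for program effects. Everything here is hypothetical: the data are simulated, and the column names (`post`, `lsat`, `lgpa`, `passed`) and effect sizes are illustrative assumptions, not anyone's actual design.

```python
# A minimal, hypothetical sketch of a pre/post cohort comparison that
# adjusts for entering credentials. All data, column names, and effect
# sizes are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400  # hypothetical: 200 graduates before the program began, 200 after

df = pd.DataFrame({
    "post": np.repeat([0, 1], n // 2),              # 1 = graduated after the intervention began
    "lsat": rng.normal(152, 5, n).round(),          # hypothetical entering credentials
    "lgpa": rng.normal(3.1, 0.4, n).clip(2.0, 4.0),
})

# Simulate bar passage with a modest hypothetical program effect (+0.4 log-odds).
log_odds = -20 + 0.11 * df["lsat"] + 1.2 * df["lgpa"] + 0.4 * df["post"]
df["passed"] = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

# Logistic regression: the coefficient on `post` estimates the cohort shift
# in passage odds after adjusting for LSAT and law school GPA.
model = smf.logit("passed ~ post + lsat + lgpa", data=df).fit(disp=False)
print(model.summary().tables[1])
```

The caveat worth flagging is that this adjustment only handles the confounders a school can measure. A change in exam difficulty or cut scores between cohorts would still bias the estimate on `post`, which is precisely the "unrelated differences" concern raised above.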
Other questions from the forum pertained to the extent to which law schools could learn from each other. Should law schools share data pertaining to interventions and impacts with each other? Would that sacrifice some competitive advantage among schools whose students excel, or even outperform expectations, on the bar exam? Would the data even be helpful across different institutions?
And then what about the student herself? We can measure the impact of quantifiable factors such as LSAT score and law school GPA, but what about student engagement, motivation, self-efficacy, and the other intangible factors that surely influence bar performance? The sketch that follows shows one way a school might test whether such a measure, once operationalized, adds anything beyond the usual predictors.
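Here is a hedged sketch of how an "intangible" factor might enter a passage model, assuming a school has operationalized engagement as, say, a standardized survey score. Again, the variable name (`engagement`), the data, and the effect sizes are hypothetical illustrations, not a validated instrument.

```python
# A hypothetical sketch: does an operationalized "engagement" measure add
# explanatory power beyond LSAT and law school GPA? All data simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "lsat": rng.normal(152, 5, n),
    "lgpa": rng.normal(3.1, 0.4, n),
    "engagement": rng.normal(0, 1, n),  # hypothetical standardized survey score
})
log_odds = -20 + 0.11 * df["lsat"] + 1.1 * df["lgpa"] + 0.5 * df["engagement"]
df["passed"] = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

# Fit nested models and compare fit: the likelihood-ratio statistic (1 df)
# gauges whether engagement improves prediction beyond the usual predictors.
base = smf.logit("passed ~ lsat + lgpa", data=df).fit(disp=False)
full = smf.logit("passed ~ lsat + lgpa + engagement", data=df).fit(disp=False)
lr_stat = 2 * (full.llf - base.llf)
print(f"LR statistic: {lr_stat:.2f}  "
      f"(pseudo R2: {base.prsquared:.3f} -> {full.prsquared:.3f})")
```

Of course, the hard part is not the regression; it is measuring engagement, motivation, and self-efficacy credibly in the first place.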
And because the fundamental purpose of these assessments would be to improve bar success programming and interventions, attendees also asked about the extent to which assessments can be designed to inform action.
Any ideas on how impact assessments could be designed? Any additional research questions relating to impact? Please post your thoughts in the Comments section of this blog or email us at email@example.com.