July 23, 2021

Learning from the July 2020 Bar Exam Results: Creating a Laboratory for Experimentation and Study

By Jason Scott and Paige Wilson | Research and Data


This is the second in a pair of blog posts exploring lessons learned from the July 2020 bar exam. In our previous post, we noted the difficulty of studying the July 2020 results given the data currently available to the public, which largely consist of counts of examinees and passers. Without knowing more about who sat for the exam, researchers' ability to draw comparisons, both to previous (and future) administrations and across bar exam formats, is limited. What we learned in 2020 is that, with more detailed data, the environment is ripe for experimentation to see which innovations work best and for whom.

Please visit our supplementary data dashboards for jurisdiction-level July 2020 bar results and detailed historical bar results.

COVID-19 necessitated sudden changes in bar exam administration in July 2020 and February 2021, and the NCBE has laid the groundwork for a significant overhaul of the UBE in the next four to five years. These groundbreaking shifts have created the perfect laboratory for testing various bar exam formats to learn what works best, for whom, and how these approaches might lead to greater equity of access for those who have been historically marginalized.

For example, given the upcoming shift to a virtual UBE, researchers could study whether bar candidates do better on virtual or traditional paper exams, giving consideration to whether the effect is similar across racial/ethnic and socioeconomic groups. And, if the results show an inequitable effect, further follow-up through interviews, focus groups, surveys, or other methods could help to understand the root cause of the discrepancies, leading the way for more innovation and experimentation.

But the research need not be limited to virtual versus pen-and-paper exams. Jurisdictions could test myriad changes, from small modifications, such as providing a larger clock for in-person exams, to large format shifts, such as reducing the bar exam to one day.

Generally, this could be done in one of two ways, studying the changes over the short or long term:

Experiment: Researcher-instigated, where bar examinees are assigned (preferably randomly) to one bar exam format or another, yielding two distinct groups: a treatment and a control (a minimal sketch of such an assignment follows this list). Alternatively, assignment could produce more than two groups so as to compare more than one change at once.

Natural Experiment or Observational Study: Conducted ex post, where bar examinees are "placed" after the fact in a treatment or comparison group based on the exam format they received. Several quasi-experimental methods can, when applied correctly, produce rigorous and actionable findings.
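As a minimal sketch of the experimental approach, random assignment could be as simple as the following Python snippet; the applicant IDs, format labels, and function name are hypothetical, not part of any jurisdiction's actual process.

```python
import random

def assign_exam_format(examinee_ids, formats=("virtual", "paper"), seed=2021):
    """Randomly assign each examinee to one exam format.

    Shuffling and then dealing examinees out round-robin yields roughly
    equal-sized treatment and control groups; passing more than two
    formats compares more than one change at once.
    """
    rng = random.Random(seed)  # fixed seed keeps the assignment auditable
    ids = list(examinee_ids)
    rng.shuffle(ids)
    return {eid: formats[i % len(formats)] for i, eid in enumerate(ids)}

# Example with six hypothetical applicant IDs
print(assign_exam_format(["A101", "A102", "A103", "A104", "A105", "A106"]))
```

In practice, a jurisdiction would likely stratify the randomization (by law school or region, for example) rather than assign purely at random, but the principle is the same.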

There are benefits to using either of these approaches but, irrespective of method, the strength of the findings depends on having: (1) reliable, detailed data on all examinees, including at least their race/ethnicity, gender, law school attended, and whether attendance was full- or part-time; and (2) experts in program evaluation and/or quantitative methods to guide the research.

This combination of data and expertise would open the door to answering many of the questions asked across law school campuses, state bar associations, and supreme courts, among others. Take, for example, a jurisdiction that is weighing whether to offer the bar exam virtually in a testing facility or remotely. In an experiment, applicants could be randomly assigned to one of these two testing options when they apply to take the bar exam. When the exams are scored, the jurisdiction could compare the pass rates of the two groups.
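To illustrate, that pass-rate comparison could use a standard two-proportion z-test; the counts below are invented for the example.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: passers and total examinees in each randomized group
passed = [412, 389]  # [testing facility, remote]
took = [520, 515]

# Two-proportion z-test: is the gap in pass rates larger than chance alone
# would plausibly produce?
stat, p_value = proportions_ztest(count=passed, nobs=took)
print(f"Pass rates: {passed[0] / took[0]:.3f} vs. {passed[1] / took[1]:.3f} (p = {p_value:.3f})")
```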

Or suppose a jurisdiction wonders how a cut-score change set to take effect at the next administration will affect pass rates. In an observational study, the jurisdiction could use one of many quantitative methodologies to create, post hoc, two groups: one subject to the previous cut score and the other to the new one. The difference in pass rates between these groups could then be calculated to estimate the effect of the change.
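A minimal sketch of that post hoc grouping, with invented scores and cut scores, might look like this; a real study would also have to account for other differences between the two cohorts.

```python
import pandas as pd

# Hypothetical examinee records from administrations before and after
# the cut-score change (all values invented for illustration)
df = pd.DataFrame({
    "administration": ["before"] * 4 + ["after"] * 4,
    "score": [266, 271, 258, 280, 262, 275, 259, 284],
})
OLD_CUT, NEW_CUT = 272, 266  # hypothetical cut scores

# Place each examinee, after the fact, under the cut score in force
# when they sat for the exam
df["cut"] = df["administration"].map({"before": OLD_CUT, "after": NEW_CUT})
df["passed"] = df["score"] >= df["cut"]

rates = df.groupby("cut")["passed"].mean()
print(f"Estimated effect of the lower cut score: {rates[NEW_CUT] - rates[OLD_CUT]:+.3f}")
```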

In each case, detailed data disaggregated by candidates' demographic characteristics are needed to ensure that the groups are similar or, if not, to correct for dissimilarities (usually through regression analysis). These data are also necessary to determine whether there is an inequitable, disparate impact among examinees, especially those who have been historically underrepresented in the legal profession.
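As a sketch of what that regression adjustment could look like, the snippet below simulates a hypothetical examinee file (every field name and coefficient is invented) and estimates an exam-format effect while controlling for demographic covariates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400

# Simulate a hypothetical examinee file (all fields illustrative)
df = pd.DataFrame({
    "virtual": rng.integers(0, 2, n),    # 1 = took the virtual format
    "race_eth": rng.choice(["A", "B", "C"], n),
    "part_time": rng.integers(0, 2, n),  # 1 = part-time attendance
})
# Simulate pass/fail with a built-in format effect, for demonstration only
true_logit = 0.6 + 0.4 * df["virtual"] - 0.3 * df["part_time"]
df["passed"] = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(int)

# The coefficient on `virtual` estimates the format effect after adjusting
# for the demographic covariates
model = smf.logit("passed ~ virtual + C(race_eth) + part_time", data=df).fit()
print(model.summary())
```

Adding an interaction term such as `virtual * C(race_eth)` would test directly whether the format effect differs across groups, which is the disparate-impact question raised above.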

Moreover, the research could (and arguably should) extend temporally beyond just bar exam scores and pass rates. Understanding how these changes play out long term is important since improvements or declines in pass rates and scores do not necessarily indicate a benefit or harm. Some jurisdictions could be concerned that lower cut scores might lead to decreases in overall levels of lawyer competence and a concomitant increase in lawyer discipline. These concerns could be investigated with the right data.

However the experimenting is done, the resulting research and findings from this laboratory have the potential to be historic and long-lasting, making access to the legal profession more equitable for those who have been historically underrepresented and resulting in a new, more diverse generation of lawyers. Institutions interested in developing their own bar passage studies should consider submitting a Letter of Inquiry for the AccessLex Bar Success Research Grant, the deadline for which is May 31, 2021. Jurisdictions are also encouraged to contact us at research@accesslex.org about ways in which we might be able to assist.