This descriptive database is composed of known empirical studies on the impacts of higher education in prison (HEP) in US correctional facilities since Federal Pell Grants began funding them in 1965. We identified, reviewed, and coded eligible studies to help us better understand the HEP research landscape and are delighted to now present it here as a resource for the field. Inclusion is not an endorsement of a study, its content, or its findings, as they have not been vetted for quality or any other indicator beyond basic qualifying information; see Methodology for more information. We welcome your general feedback on this working database, as well as submissions of any additional HEP studies.
Below are links to the interactive database, visual presentations of database data, and information on our methodology.
These descriptive visualizations are designed to introduce users to the database and help them explore some of the trends we discovered through our coding of empirical studies on the impacts of HEP. The first set of visuals provides a sense of the scope of the studies we have collected, from the number of unique authors to the number of institutions included in our database.
The pie charts included in the second section are fully interactive: Users may click on a section of any given chart and view the studies included in that section. When available, clicking into individual study entries also provides a direct link for accessing that specific study (though many are behind paywalls). Clicking ‘Show data’ in the upper right corner will return you to the full list of studies, and clicking on the bottom right icon will provide a full screen version of the chart. Note that settings are disabled in this particular view.
We invite users to interact with this database’s advanced search, sort, and filtering functions here on our digital space. Users can also download the dataset as a static CSV file.
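For users who download the CSV export, the same filtering available in the interactive database can be reproduced with a few lines of pandas. This is a minimal sketch: the column names and values shown here ("Methodology", "Peer-reviewed") are illustrative assumptions, so check the actual header row of the exported file before adapting it.

```python
import pandas as pd

# Illustrative rows; real data comes from the downloaded CSV export.
# Column names and values here are assumptions, not the database's
# actual headers.
df = pd.DataFrame({
    "Methodology": ["Quantitative", "Qualitative", "Mixed-methods"],
    "Peer-reviewed": ["Yes", "Yes", "No"],
})
# To load the real export instead:
# df = pd.read_csv("hep_studies.csv")

# Keep only peer-reviewed quantitative studies.
subset = df[(df["Methodology"] == "Quantitative") & (df["Peer-reviewed"] == "Yes")]
print(len(subset))
```

The same boolean-mask pattern extends to any of the tag columns described in the Methodology below.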
The database was compiled primarily by searching for relevant articles in a variety of online databases such as Google Scholar, ProQuest Dissertations, and SAGE Journals Online. The database was supplemented with publications that were either referenced or included in literature reviews and/or meta-analyses that surfaced during the initial search process. Studies were included in our database if they met the following criteria:
- are empirical studies published after 1965 (when Pell funding first became available to incarcerated students);
- focus on institutions and/or programs in the United States;
- report specifically on the effects of academic programming offered to incarcerated learners (as opposed to CTE/vocational or other treatment programs);
- include post-secondary education programming in the analysis; and
- study the relationships between prison education programming (or its particular features/components) and incarcerated learners’ outcomes (academic or other).
Our search through September 2020 led us to identify 100 studies containing evaluations of correctional education programs offered in US prisons, with approximately 75 percent isolating the effect of post-secondary education in particular. Due to the relative scarcity of publications that focus solely on higher education in prison programs, we did not include that as one of our criteria for studies in this database. We did, however, tag studies that isolated the effect of post-secondary programs, as well as studies that focused on credit-bearing post-secondary education in prison, to examine the relative frequency of both features.
For each study in the database, we included the following information: study abstract, sample description, research questions, outcome metrics, data collection and analysis methods, findings, and year and place of publication. If any of this information was missing, we noted that in the database as well. Certain categories, such as the correctional facilities involved and the specific sponsor of the study, are not available for every entry in our database; these optional categories are included towards the end of the database and any missing information was left blank.
Once the information was entered into the database in long form, we then used a coding rubric to assign tags to specific categories. For example, methodologies were tagged based on whether they were quantitative, qualitative, or mixed-methods and based on which data collection and/or data manipulation techniques were applied in the study (e.g., interviews, focus groups, bias correction, quasi-experimental methods).¹
A number of tags were used to elicit useful information on the composition of studies in our database. Study scope was used to differentiate studies that evaluated a single educational program, multiple programs, a statewide population, or a national population of students. This, combined with the study’s sample size, gave us a sense of the scale of each study. Sample type was also coded to determine the proportion of studies that focused on male vs. female participants, as well as studies that involved other key populations such as faculty members and students’ families.
We also coded studies for the type of data they drew on, to examine the proportion of studies that collected original vs. pre-existing (i.e. secondary) data. We also differentiated between various sources of secondary data, noting whether the data belong to the federal government or to state or local entities. This is meant to provide users of the database with a sense of where data exist and which entities future researchers may need to interact with to procure data for their studies. We did not include specific sources of data as these can be found in the studies themselves.
Since the studies in our database used a wide variety of metrics to measure program or student outcomes, we developed a coding hierarchy based on our review of literature from the HEP space. We coded metrics by overall type (see the Metric: Type column) and subsequently by specific type (see the Metric: Specific column). This taxonomy allowed us to differentiate between student, program, and correctional-centered metrics to see where the field had focused historically. Note that studies focused on prison climate and/or discipline were coded into “Student behavior/conduct” and studies that focused on post-parole success were coded into “Recidivism” since they were concerned with students’ ability to remain out of prison. Studies that focused on gathering viewpoints of faculty or correctional staff regarding the quality of programming offered were coded into “Perceived quality of program,” while student perceptions of their own learning experience were coded into “Student satisfaction” as we believe this to be a more student-centered metric.
Student-Centered Metrics
- Post-college outcomes
- Academic achievement
- Psychosocial development
- Student satisfaction
Academic Program-Centered Metrics
- Persistence/retention rates
- Graduation rates
- Program/institution relations
- Cost-benefit analysis
- Perceived quality of program
Correctional-Centered Metrics
- Recidivism
- Student behavior/conduct
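The two-level hierarchy described above can be sketched as a simple lookup from a specific metric (the Metric: Specific column) to its overall type (the Metric: Type column). The category and metric labels below mirror the list in the text, but the exact spellings used in the database's columns may differ.

```python
# Two-level metric taxonomy, mirroring the list above. Label
# spellings are illustrative and may differ from the database's
# "Metric: Type" / "Metric: Specific" columns.
METRIC_TAXONOMY = {
    "Student-Centered Metrics": [
        "Post-college outcomes",
        "Academic achievement",
        "Psychosocial development",
        "Student satisfaction",
    ],
    "Academic Program-Centered Metrics": [
        "Persistence/retention rates",
        "Graduation rates",
        "Program/institution relations",
        "Cost-benefit analysis",
        "Perceived quality of program",
    ],
    "Correctional-Centered Metrics": [
        "Recidivism",
        "Student behavior/conduct",
    ],
}

def overall_type(specific_metric):
    """Return the overall metric type for a specific metric label."""
    for overall, specifics in METRIC_TAXONOMY.items():
        if specific_metric in specifics:
            return overall
    return None

print(overall_type("Recidivism"))  # Correctional-Centered Metrics
```

This kind of mapping is what allows, for example, all studies tagged with "Recidivism" or "Student behavior/conduct" to be rolled up under correctional-centered metrics when examining where the field has focused historically.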
Historically, one of the main critiques of this field has been its lack of empirical rigor. While this database is presently only descriptive in nature, we did add tags to provide users with a sense of the relative rigor of each of the studies. In the Data Analysis column, we coded studies based on whether they used descriptive or inferential statistics and added a separate tag to distinguish studies that used regression analysis. We also included columns indicating whether publications were peer-reviewed and whether they were open source. The first of these measures of rigor applies only to quantitative studies; we have made no attempt to date to classify qualitative studies based on rigor, something that has proved challenging for even the most established of fields.
Finally, we coded studies based on whether they found a positive effect for the intervention at hand, no effect, or a negative effect. Since most of the studies focused on correctional outcomes such as recidivism and disciplinary measures, a positive relationship here means the intervention reduced recidivism for incarcerated learners. This differentiation proved useful predominantly for quantitative studies, as nearly all of the qualitative studies in our database found some positive impact of the intervention they were studying.
¹ Due to the wide variety of methods used in meta-analyses, these were all coded separately and given a ‘Meta-analysis’ tag for methodology, data analysis, and study scope. Meta-analyses were also coded differently in that their sample size reflects the number of studies included, not the number of participants within those studies. This was done because sample information was nearly impossible to ascertain for meta-analyses, and we felt that providing the number of studies offered a glimpse into the range of potential subjects considered for these reports.