RESEARCH PAPER MAY 2026 CursoVivo CV-1

The Completion Crisis: Why 85–90% of Online Course Students Never Finish and What Structural Changes Reverse It


Abstract

The dominant narrative in online education attributes the persistent failure of learners to complete self-paced courses to deficits in student discipline, motivation, or commitment. This paper challenges that narrative by synthesizing peer-reviewed evidence on completion rates across massive open online courses (MOOCs), structured cohort programs, and personalized adaptive learning environments. Drawing on Jordan's (2014, 2015) syntheses of MOOC completion data, Reich and Ruipérez-Valiente's (2019) longitudinal analysis of 5.63 million HarvardX and MITx learners, and institutional outcomes from Harvard Business School Online and CampusMVP, we document a consistent gap: open self-paced courses report completion rates of 3–15%, while structured programs delivering the same content domains report 85% or higher. The variance is concentrated not in learner characteristics but in design features the literature has long identified as critical — externalized scheduling, scaffolded progression, deadlines, social integration, and active feedback (Wood et al., 1976; Bloom, 1984; Tinto, 1975; Broadbent & Poon, 2015). The mainstream "lazy student" framing is consistent with the fundamental attribution error (Ross, 1977), in which observers overweight dispositional and underweight situational causes. We propose the CursoVivo implementation model — an artificial-intelligence-mediated framework that embeds personalized weekly plans, memory-bearing check-ins, and concrete deliverables inside an existing course — as a scalable mechanism for delivering the structural ingredients that distinguish high-completion programs from low-completion ones. Limitations regarding selection effects, intent-versus-behavior measurement, and the moderation of Bloom's two-sigma estimate are discussed.


Keywords: online course completion rates, student dropout, e-learning retention, course structure, student engagement, online education, course completion strategies, implementation gap, CursoVivo, fundamental attribution error, scaffolding, self-regulated learning

1. Introduction

1.1 The Prevailing Narrative

Across consumer-facing online education — from independent course creators to large platforms such as Hotmart, Udemy, Teachable, and Kajabi — the most widely shared explanation for low completion rates centers on the learner. The dominant framing holds that students who purchase but do not finish a course do so because they lack discipline, fail to commit, or are insufficiently motivated. This explanation is intuitive, requires no design changes from instructors, and is compatible with the consumer-product framing of online courses, in which the buyer is presumed responsible for extracting value from a fixed asset.

The narrative is reinforced by the architecture of self-paced course platforms, which were optimized during the 2012–2017 MOOC era for content delivery at scale rather than for completion as an outcome (Reich & Ruipérez-Valiente, 2019). Within that architecture, when a learner stops engaging, the platform records a dropout event without registering the structural conditions surrounding it, and the absence of those conditions becomes invisible in subsequent analyses.

1.2 The Problem

The cost of the prevailing diagnosis is twofold. First, it directs corrective attention away from modifiable design variables and toward learner characteristics that are neither easily measured nor easily changed by the institution. Second, it produces a stable equilibrium in which the same baseline outcomes persist for more than a decade despite substantial investment in content production, marketing, and platform features.

Reich and Ruipérez-Valiente (2019) provide perhaps the clearest empirical illustration. Analyzing the full population of HarvardX and MITx open online courses from October 2012 through May 2018 — 5.63 million unique learners across 12.67 million course registrations — they found that completion rates among all participants declined rather than improved over the period, falling to 3.13% in the 2017–2018 academic year. Even in the verified track, in which learners pay and explicitly commit to certification, completion was approximately 46%. Notably, 52% of registrants never started the course at all. After six years of iteration on platforms with substantial institutional resources, the underlying completion floor did not shift.

If completion failures were primarily attributable to learner discipline, one would expect substantial variability in outcomes between platforms with comparable populations. Instead, the literature documents a consistent floor — and a separate, much higher ceiling that emerges only when course design changes.

1.3 Research Question and Thesis

This paper examines a single research question: when comparable populations of learners encounter the same content domain under different implementation structures, how much of the variance in completion outcomes is attributable to learner characteristics, and how much to course design?

We propose that the popular framing significantly overweights dispositional factors and underweights structural ones — a pattern consistent with what Ross (1977) termed the fundamental attribution error. We further propose that the CursoVivo implementation model, which delivers personalized weekly plans, memory-bearing check-ins, and structured deliverables through an artificial-intelligence layer embedded in an existing course, addresses the structural variables that the literature identifies as causally implicated in completion. The paper does not claim that learner characteristics are irrelevant. It claims, more narrowly, that the variance attributable to modifiable design features has been systematically underexamined.


2. Literature Review

2.0 Methodological Note

This review synthesizes peer-reviewed empirical studies, institutional case data, and high-quality industry reporting published between 1967 and 2025, sourced from Web of Science, Scopus, ERIC, PubMed, Google Scholar, and SSRN. Inclusion criteria prioritized studies measuring completion, persistence, or analogous behavioral outcomes in online or distance learning environments, supplemented by foundational works in educational psychology and organizational behavior that anchor the theoretical argument. Institutional and vendor-published case data (Harvard Business School Online; CampusMVP) are explicitly identified as such and are treated as illustrative rather than as independently verified evidence.

2.1 The Documented Completion Crisis in Open Online Courses

Quantitative documentation of MOOC completion began with Jordan’s (2014) synthesis of public-domain data across 91 MOOCs, which reported a typical completion rate of approximately 6.5% relative to total enrollment. In a subsequent expanded analysis of 221 courses, Jordan (2015) reported a median completion rate of 12.6%, with a range from 0.7% to 52.1%. The wide variance prompted closer examination of design factors associated with the upper end of the distribution.

The most comprehensive longitudinal data set available is the HarvardX and MITx series (Ho et al., 2014; Chuang & Ho, 2016; Reich & Ruipérez-Valiente, 2019). Across four years of operation, only a small fraction of registered learners earned certificates: approximately 2–4% of all participants in any given year, rising to roughly 22–46% among learners who paid for verified-track enrollment. The most recent analysis, covering six full years, found that the trajectory had not improved despite design iteration, course portfolio expansion, and growing user familiarity with the format (Reich & Ruipérez-Valiente, 2019).

A complementary line of research has questioned whether completion rate is the most appropriate metric for MOOCs (Henderikx, Kreijns, & Kalz, 2017; Reich, 2014). Henderikx et al. demonstrated that when completion is measured relative to learner intent rather than to enrollment, rates rise substantially, reaching 59–70% among learners who explicitly intended to finish. This refinement is methodologically important; however, it does not eliminate the underlying gap. Even among the subset of learners who paid, intended to certify, and began the course, completion in open-platform environments remained substantially below that observed in cohort-based programs operated within the same institutions.
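Stated formally, the distinction between the two metrics can be written as follows; the notation is introduced here for exposition and is not drawn from Henderikx et al.:

    R_{\text{enrolment}} = \frac{|C|}{|E|}, \qquad
    R_{\text{intent}} = \frac{|C \cap I|}{|I|}

where E is the set of enrolled learners, I ⊆ E the subset who explicitly stated an intention to complete, and C the set of completers. Because I is far smaller than E while most completers are intenders, R_intent can reach 59–70% even when R_enrolment remains below 15%.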

2.2 The Fundamental Attribution Error in Educational Contexts

The attribution of poor completion outcomes to learner character represents a textbook instance of what Ross (1977) labeled the fundamental attribution error: the systematic tendency of observers to overweight dispositional and underweight situational explanations for the behavior of others. Jones and Harris (1967) provided the first empirical demonstration of this pattern, showing that observers attributed expressed attitudes to a writer’s underlying disposition even when they knew the writer had been instructed to take a specific position.

Applied to online education, the error operates as follows. An instructor or platform observes that a learner has stopped progressing. The most cognitively available explanation is one located within the learner — insufficient motivation, weak discipline, lack of commitment. Less available, and requiring more deliberate analysis, is the explanation located within the design environment — absence of deadlines, absence of social presence, absence of personalized feedback, or accumulated cognitive friction at a specific point in the curriculum. Because the dispositional explanation is more readily generated and requires no corrective action from the institution, it tends to prevail in informal industry discourse.

2.3 Help-Seeking Avoidance and Silent Attrition

A substantial body of educational psychology literature documents that learners who would benefit most from instructional support are systematically the least likely to request it. Ryan and Pintrich (1997), in a study of 203 adolescents, identified perceived threat to competence as a primary predictor of help-avoidance behavior. Subsequent work by Ryan, Pintrich, and Midgley (2001) extended these findings, showing that students with lower self-efficacy actively avoid help-seeking even when help is available and would improve outcomes. The edited volume by Karabenick and Newman (2006) consolidates more than two decades of research demonstrating that help-avoidance is a robust phenomenon across age groups and academic domains.

Almeda, Baker, and Corbett (2017) extended this literature into intelligent tutoring environments, demonstrating that help-avoidance directly predicts worse achievement in online problem-solving contexts. More recently, Hwang and colleagues (2024), in a study of 213 online STEM learners, found that help-avoidance negatively predicts retention intention while sense of belonging predicts adaptive help-seeking. These findings have a direct structural implication: instructional environments that depend on the learner to initiate help-seeking will systematically fail the learners most at risk of dropping out, because those learners are precisely the ones whose perceived competence threat will inhibit the request. A reactive support system — one that responds when contacted — addresses only the population already least likely to fail.

2.4 Self-Regulated Learning Limits in Self-Paced Environments

Broadbent and Poon (2015) conducted a systematic review of 12 studies examining the relationship between self-regulated learning strategies and academic achievement in online higher education. Time management, metacognition, effort regulation, and critical thinking were positively associated with achievement; rehearsal, elaboration, and organization showed weaker effects. The practical implication of this finding for online course design is that the strategies most predictive of success are precisely those most difficult to sustain in unstructured, self-paced environments — and most amenable to externalization through scheduled deadlines, planned weekly tasks, and active follow-up.

Tinto’s (1975) model of student departure, although developed in residential higher education, has been adapted to distance contexts (Rovai, 2003) with consistent results: persistence depends on academic and social integration, both of which are minimal in default self-paced course architectures. Recent systematic mapping of the dropout phenomenon in distance learning (Elibol & Bozkurt, 2023) confirms that integration variables, not personal characteristics alone, explain a substantial share of attrition.

2.5 Research Gap

The literature documents (i) consistently low completion in open self-paced environments, (ii) the cognitive bias that locates the problem in learner character, (iii) the systematic failure of help-avoidant learners to access reactive support, and (iv) the dependence of completion on self-regulatory strategies that structured environments externalize. What has been less developed is an integrated framework specifying how these structural ingredients can be delivered at the per-learner level outside of high-cost cohort programs. The model proposed in Section 3.3 addresses this gap.


3. Analysis and Discussion

3.1 Cross-Program Comparison: Structure as the Independent Variable

The most informative comparisons are not between platforms with different content but between programs with comparable content delivered under different structural conditions. Two such comparisons are available.

The first is internal to Harvard. Reich and Ruipérez-Valiente (2019) report 3.13% overall and approximately 46% verified-track completion across HarvardX open courses in 2017–2018. Within the same institution, Harvard Business School Online publicly reports an 85% completion rate across all of its courses, attributed by the institution to a cohort-based design that incorporates scheduled start dates, weekly deadlines, peer cohorts, structured assessment, and case-method engagement (Harvard Business School Online, 2024). Although the populations are not identical — Harvard Business School Online learners pay substantially more and may differ in motivation — both populations are voluntarily enrolled, paying participants in Harvard-branded online education. The 27-fold difference in completion is large enough that it cannot plausibly be attributed entirely to baseline learner characteristics.

The second comparison is documented in the Spanish-language e-learning sector. CampusMVP, a professional training provider based in Spain, reports a self-published completion rate of 87% among its 2018 cohorts (CampusMVP, 2018). The intervention described is methodologically modest: a personalized weekly study plan visible to each learner, explicit deadlines, and a tutor and student-support function that proactively contacts underperforming learners. The provider explicitly contrasts this outcome with the contemporaneous MOOC baseline reported by Reich and Ruipérez-Valiente. As a single-vendor case, the figure cannot be treated as independently verified evidence; nevertheless, the intervention components match the structural variables identified in Section 2 as causally relevant.

The pattern across these and other cohort-based programs is consistent. Programs that externalize scheduling, embed deadlines, maintain social presence, and replace reactive support with proactive outreach report completion rates clustered in the 80–98% range. Programs that omit these features report completion rates clustered in the 5–15% range. The variance is concentrated in design.

3.2 Theoretical Foundations for the Structure Effect

Three classic findings illuminate why structure produces such large effects on completion outcomes.

Bloom (1984) demonstrated, in what he termed the two-sigma problem, that learners receiving one-to-one tutoring combined with mastery-learning thresholds performed approximately two standard deviations above peers in conventional classroom instruction. Approximately 90% of tutored students reached the level achieved only by the top 20% of conventionally taught students. Subsequent replication and meta-analytic work has moderated the magnitude of the effect — placing it closer to one-half to one standard deviation under typical conditions — and has identified mastery thresholds as accounting for a substantial portion of the original estimate (e.g., VanLehn, 2011). The moderated effect remains educationally substantial, and its mechanism is structural rather than dispositional: the tutor performs functions that the conventional environment leaves to the learner.
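The percentile claim follows from normal-distribution arithmetic. Assuming equal variances and a +2σ shift in the tutored group's mean (a back-of-envelope reconstruction, not a calculation reported by Bloom):

    \Pr\big(X_{\text{tutored}} > q^{\text{conv}}_{0.80}\big)
      = 1 - \Phi(z_{0.80} - 2)
      = \Phi(2 - 0.84)
      = \Phi(1.16) \approx 0.88

where Φ is the standard normal distribution function and z_{0.80} ≈ 0.84 is the z-score of the conventional class's 80th percentile. The resulting ≈88% matches Bloom's "approximately 90%"; under the moderated 0.5–1σ estimate, the same calculation yields Φ(0.5 − 0.84) ≈ 0.37 to Φ(1 − 0.84) ≈ 0.56, which remains educationally substantial.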

Wood, Bruner, and Ross (1976) formalized those functions in the concept of scaffolding, identifying six processes the more-knowledgeable other performs for the learner: recruitment of attention, reduction in degrees of freedom, direction maintenance, marking of critical features, frustration control, and demonstration. Each of these functions is absent or substantially reduced in default self-paced course environments and present in cohort-based or tutored environments.

Pfeffer and Sutton (2000) extended a parallel argument outside education, in their analysis of what they termed the knowing-doing gap in organizations. Organizations frequently possess the knowledge required to act yet fail to translate it into action, not because knowledge is incomplete but because no implementation structure converts it. The course-completion problem is structurally analogous. Consumer online courses typically deliver the content the learner needs; they less often deliver the implementation structure that converts that content into completed action.

3.3 Proposed Framework: The CursoVivo Implementation Model

The CursoVivo implementation model proposes that the structural ingredients identified in the cohort and tutoring literature can be delivered at the per-learner level through an artificial-intelligence layer embedded inside an existing course. The model is composed of six functional components, each mapped to a structural mechanism documented in the research literature reviewed above:

  1. Personalized weekly plan. A weekly action plan generated for each learner based on their context, with concrete tasks, dates, and contingency steps. This component externalizes the time-management self-regulatory strategy identified by Broadbent and Poon (2015) as predictive of achievement.

  2. Memory-bearing check-ins. Periodic structured outreach that registers what the learner committed to do, what was completed, and what blocked progress. This component implements the proactive support function identified in the help-avoidance literature (Almeda et al., 2017; Hwang et al., 2024) and counters the silent-attrition pattern.

  3. Concrete deliverables. Module-level production of drafts, checklists, schedules, and applied artifacts adapted to each learner’s case, rather than passive content consumption alone. This component operationalizes the marking of critical features and the demonstration functions described by Wood et al. (1976).

  4. Progress dashboard. A visible representation of current module, completed work, pending commitments, and accumulated production. This component supports the metacognitive self-regulatory strategy identified by Broadbent and Poon (2015).

  5. Daily focal task. A single concrete task per day to reduce decisional friction at the moment of engagement. This component implements the reduction-in-degrees-of-freedom function described by Wood et al. (1976).

  6. Course-specific tutor agent. A retrieval-bound agent trained on the specific course content that responds to learner questions using the instructor’s method, sequence, and exceptions, rather than general-purpose information from the open web. This component preserves the methodological specificity that distinguishes a particular instructor’s approach from generic content.

The model is not a content-generation system. The course content is provided by the instructor and is not modified. The model layers structural delivery on top of existing content, addressing the implementation-level variables the literature identifies as causally relevant.
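To make the model concrete, the sketch below implements components 1 and 2 as minimal data structures with a proactive, memory-bearing outreach routine. It is an illustrative sketch of the logic, written in Python; all class, field, and function names are hypothetical and do not describe CursoVivo's actual implementation.

    from dataclasses import dataclass, field
    from datetime import date
    from typing import Optional

    @dataclass
    class Task:
        description: str   # concrete action, e.g. "Draft the module 3 checklist"
        due: date          # explicit deadline (externalized scheduling)
        fallback: str      # contingency step if the task is blocked

    @dataclass
    class WeeklyPlan:
        """Component 1: a per-learner plan with tasks, dates, and contingencies."""
        learner_id: str
        week_start: date
        tasks: list = field(default_factory=list)

    @dataclass
    class CheckIn:
        """Component 2: each contact records commitment, outcome, and blocker,
        so the next contact can reference them instead of starting cold."""
        week_start: date
        committed: list
        completed: list
        blocker: Optional[str] = None

    def next_outreach(history: list) -> str:
        """Compose a proactive message grounded in the learner's own prior
        commitments rather than a generic reminder."""
        if not history:
            return "Welcome! Your first weekly plan is ready."
        last = history[-1]
        pending = [c for c in last.committed if c not in last.completed]
        if last.blocker:
            return (f"Last week you were blocked by: {last.blocker}. Has that "
                    f"cleared? {len(pending)} committed task(s) are still open.")
        if pending:
            return f"You committed to {pending[0]!r}; want to pick it up today?"
        return "You completed everything you committed to. This week's plan is ready."

The design choice worth noting is that next_outreach is initiated by the system on a schedule, not by the learner; that inversion is what distinguishes the check-in component from the reactive support pattern discussed in Section 3.4.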

3.4 Practical Implications

If the variance in online course completion is concentrated in design rather than in learner character, two practical implications follow for instructors and platform operators.

First, diagnostic priority should shift from learner-side variables to design-side variables. The most informative single measurement an instructor can perform on an existing course is not a survey of learner satisfaction but an examination of the module-level dropout distribution. Concentrated dropout at a specific module typically indicates a structural friction point — accumulated cognitive load, insufficient marking of critical features, or absence of an applied transition between content and practice — that can be addressed without redesigning the course.
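As a sketch of that diagnostic, assuming only a per-learner record of the furthest module reached (the function and variable names below are illustrative):

    from collections import Counter

    def module_dropout_distribution(furthest_module, n_modules):
        """Share of enrolled learners whose progress stops at each module.
        `furthest_module` holds one integer per learner: the last module
        reached (0 = never started; completers are coded as n_modules)."""
        n = len(furthest_module)
        stops = Counter(furthest_module)
        return {m: stops.get(m, 0) / n for m in range(n_modules + 1)}

    # Example: an 8-module course where stops cluster at module 4.
    # A spike at one module signals a structural friction point; diffuse
    # "low discipline" would instead produce a roughly uniform decay.
    dist = module_dropout_distribution([0, 0, 4, 4, 4, 2, 8, 4], n_modules=8)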

Second, support architecture should be inverted. Reactive support, in which learners receive help on request, systematically excludes the help-avoidant population that most needs intervention (Ryan & Pintrich, 1997; Karabenick & Newman, 2006; Almeda et al., 2017). Proactive support, in which the system initiates structured contact regardless of learner request, addresses the help-avoidance constraint and is consistent with the proactive function described in cohort-based programs.
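A minimal sketch of the inverted architecture follows, assuming an inactivity threshold that a real deployment would need to calibrate; the five-day constant is illustrative, not an evidence-based value.

    from datetime import date, timedelta

    INACTIVITY_THRESHOLD = timedelta(days=5)  # illustrative constant

    def needs_proactive_contact(last_activity: date, last_outreach: date,
                                today: date) -> bool:
        """Reactive support waits for a request; this trigger initiates
        contact once inactivity crosses a threshold, reaching the
        help-avoidant learners who would never ask (Section 2.3)."""
        idle = today - last_activity
        since_outreach = today - last_outreach
        # Require both conditions so learners who are merely between
        # sessions, or were contacted recently, are not nagged.
        return idle >= INACTIVITY_THRESHOLD and since_outreach >= INACTIVITY_THRESHOLD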

The general empirical regularity that emerges from the literature reviewed is straightforward and warrants explicit statement: across comparable populations, online courses that externalize scheduling, scaffold weekly progression, and replace reactive with proactive support report completion rates several times higher than the open self-paced baseline. The variable that distinguishes the two outcomes is not the learner. It is the implementation structure of the course.


4. Conclusions

4.1 Summary of Findings

This paper examined whether the prevailing attribution of low online course completion to learner discipline is consistent with the available empirical record. The evidence reviewed indicates that it is not. Open self-paced environments report consistent baseline completion rates of 3–15% across providers, populations, and platform iterations (Jordan, 2014, 2015; Reich & Ruipérez-Valiente, 2019). Structured cohort programs delivered by the same institutions and operating in the same content domains report completion rates clustered between 80% and 98% (Harvard Business School Online, 2024; CampusMVP, 2018). The variance between these two distributions is substantially larger than can plausibly be attributed to differences in learner character alone, and is concentrated in design variables independently identified as causally relevant in the educational psychology literature: scheduled deadlines, scaffolded progression, externalized self-regulation, and proactive rather than reactive support (Wood et al., 1976; Bloom, 1984; Tinto, 1975; Broadbent & Poon, 2015).

The mainstream framing of low completion as a discipline problem is consistent with the fundamental attribution error (Ross, 1977; Jones & Harris, 1967). It is a more parsimonious explanation than the structural one and requires no design change from the institution, but it is not the explanation best supported by the evidence. A reasonable formulation of the empirical regularity is that the apparent shortfall in learner discipline is, in substantial part, a shortfall in course structure that has been misattributed.

4.2 Limitations

This review is subject to several limitations. First, it is a narrative literature review rather than a controlled experiment, and is therefore subject to selection bias in source identification. Second, the institutional case data drawn from Harvard Business School Online and CampusMVP are vendor-published and have not been independently audited; they are presented as illustrative rather than as evidence that meets the standard of peer-reviewed research. Third, learners enrolled in cohort-based programs typically pay more and self-select more strongly than learners in open MOOCs, so a portion of the observed completion difference reflects motivational and commitment-device effects in addition to structural effects. Fourth, Bloom’s (1984) original two-sigma estimate has been subsequently moderated, and the effect of one-to-one tutoring under typical conditions is closer to one-half to one standard deviation. Fifth, the proposed CursoVivo framework has not yet been validated through controlled experimental comparison; its components are individually consistent with the literature, but their combined effect in field deployment requires direct measurement.

4.3 Future Research Directions

Three lines of further investigation are particularly warranted. First, controlled within-population experiments that hold content constant while varying implementation structure would substantially strengthen causal inference about the relative contribution of design variables. Second, longitudinal field studies of artificial-intelligence-mediated personalization in Spanish-language and Latin American consumer-education markets — where the cultural preference for personalized support has been documented (Major, Francis, & Tsapali, 2021) — would address an underrepresented population in the existing literature. Third, comparative analysis of the relative contribution of each structural component (scheduling, scaffolding, proactive contact, mastery thresholds) would refine implementation guidance. The intersection of these three lines of inquiry is the subject of subsequent work in this research program.


References

Almeda, V., Baker, R. S. J. d., & Corbett, A. (2017). Help avoidance: When students should seek help, and the consequences of failing to do so. Teachers College Record, 119(3), 1–24. https://doi.org/10.1177/016146811711900303

Bloom, B. S. (1984). The 2 sigma problem: The search for methods of group instruction as effective as one-to-one tutoring. Educational Researcher, 13(6), 4–16. https://doi.org/10.3102/0013189X013006004

Broadbent, J., & Poon, W. L. (2015). Self-regulated learning strategies & academic achievement in online higher education learning environments: A systematic review. The Internet and Higher Education, 27, 1–13. https://doi.org/10.1016/j.iheduc.2015.04.007

CampusMVP. (2018). La trampa de la formación online sin fecha de fin [The trap of online training with no end date]. Krasis SL. https://www.campusmvp.es/recursos/post/la-trampa-de-la-formacion-online-sin-fecha-de-fin.aspx

Chuang, I., & Ho, A. D. (2016). HarvardX and MITx: Four years of open online courses, Fall 2012–Summer 2016 (HarvardX Working Paper). https://doi.org/10.2139/ssrn.2889436

Elibol, S., & Bozkurt, A. (2023). Student dropout as a never-ending evergreen phenomenon of online distance education. European Journal of Investigation in Health, Psychology and Education, 13(5), 906–924. https://doi.org/10.3390/ejihpe13050069

Harvard Business School Online. (2024). 5 benefits of corporate cohort-based learning. Harvard Business School Online Blog. https://online.hbs.edu/blog/post/corporate-cohort-learning

Henderikx, M. A., Kreijns, K., & Kalz, M. (2017). Refining success and dropout in massive open online courses based on the intention–behavior gap. Distance Education, 38(3), 353–368. https://doi.org/10.1080/01587919.2017.1369006

Ho, A. D., Reich, J., Nesterko, S., Seaton, D. T., Mullaney, T., Waldo, J., & Chuang, I. (2014). HarvardX and MITx: The first year of open online courses, Fall 2012–Summer 2013 (HarvardX/MITx Working Paper No. 1). https://doi.org/10.2139/ssrn.2381263

Hwang, S., Flavin, E., & Lee, J.-E. (2024). Antecedents and consequences of academic help-seeking in online STEM learning. International Journal of STEM Education, 11(57). https://doi.org/10.1186/s40594-024-00514-2

Jones, E. E., & Harris, V. A. (1967). The attribution of attitudes. Journal of Experimental Social Psychology, 3(1), 1–24. https://doi.org/10.1016/0022-1031(67)90034-0

Jordan, K. (2014). Initial trends in enrolment and completion of massive open online courses. International Review of Research in Open and Distributed Learning, 15(1), 133–160. https://doi.org/10.19173/irrodl.v15i1.1651

Jordan, K. (2015). Massive open online course completion rates revisited: Assessment, length and attrition. International Review of Research in Open and Distributed Learning, 16(3), 341–358. https://doi.org/10.19173/irrodl.v16i3.2112

Karabenick, S. A., & Newman, R. S. (Eds.). (2006). Help seeking in academic settings: Goals, groups, and contexts. Routledge.

Major, L., Francis, G. A., & Tsapali, M. (2021). The effectiveness of technology-supported personalised learning in low- and middle-income countries: A meta-analysis. British Journal of Educational Technology, 52(5), 1935–1964. https://doi.org/10.1111/bjet.13116

Pfeffer, J., & Sutton, R. I. (2000). The knowing–doing gap: How smart companies turn knowledge into action. Harvard Business School Press.

Reich, J. (2014, December 8). MOOC completion and retention in the context of student intent. EDUCAUSE Review.

Reich, J., & Ruipérez-Valiente, J. A. (2019). The MOOC pivot. Science, 363(6423), 130–131. https://doi.org/10.1126/science.aav7958

Ross, L. (1977). The intuitive psychologist and his shortcomings: Distortions in the attribution process. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 10, pp. 173–220). Academic Press. https://doi.org/10.1016/S0065-2601(08)60357-3

Rovai, A. P. (2003). In search of higher persistence rates in distance education online programs. The Internet and Higher Education, 6(1), 1–16. https://doi.org/10.1016/S1096-7516(02)00158-6

Ryan, A. M., & Pintrich, P. R. (1997). “Should I ask for help?” The role of motivation and attitudes in adolescents’ help seeking in math class. Journal of Educational Psychology, 89(2), 329–341. https://doi.org/10.1037/0022-0663.89.2.329

Ryan, A. M., Pintrich, P. R., & Midgley, C. (2001). Avoiding seeking help in the classroom: Who and why? Educational Psychology Review, 13(2), 93–114. https://doi.org/10.1023/A:1009013420053

Tinto, V. (1975). Dropout from higher education: A theoretical synthesis of recent research. Review of Educational Research, 45(1), 89–125. https://doi.org/10.3102/00346543045001089

VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational Psychologist, 46(4), 197–221. https://doi.org/10.1080/00461520.2011.611369

Wood, D., Bruner, J. S., & Ross, G. (1976). The role of tutoring in problem solving. Journal of Child Psychology and Psychiatry, 17(2), 89–100. https://doi.org/10.1111/j.1469-7610.1976.tb00381.x