Generating impact
In Part 2, we outline the ways in which EdTech organizations can generate or gather evidence.
KEY TERMS
To understand the type of evidence you can generate from or with your solution, it's essential to identify what stage your product is in, how the evidence aligns with that stage, and which type of research is most suitable. There are various types of research, and all of them should contribute to building a comprehensive evidence portfolio. This portfolio should include evidence from multiple areas: conceptual research, efficacy studies, and effectiveness research.
Conceptual evidence
Conceptual evidence involves the foundational understanding of your product's underlying theories and frameworks. It explores how your solution aligns with educational needs and goals, offering a rationale for why it should work in theory. This type of evidence helps articulate the purpose and intended impact of your solution, setting the stage for more rigorous, outcome-based research later on.
Efficacy
Efficacy refers to the ability of a product, intervention, or program to produce the desired result under ideal and controlled conditions. It is often evaluated through rigorous testing and trials to ensure the intervention works as intended. A randomized controlled trial is a typical efficacy study, often considered the most rigorous form of demonstrating efficacy.
Effectiveness
Effectiveness is the degree to which something is successful in producing a desired result under typical, real-world conditions. It measures how well an intervention works in practice, beyond controlled experimental settings. Teacher-centered interventions, inclusive research and development, and participatory studies are good examples of effectiveness studies.
These different categories of evidence map onto three main types of research: foundational, formative, and summative research, each with its own set of specific methodologies.
Foundational research focuses on the core principles and theoretical basis of an EdTech solution, often using methodologies like literature reviews, theoretical frameworks, and exploratory studies to understand the educational needs and gaps the product aims to address. This stage helps ensure the solution is grounded in solid educational theory and the learning sciences. It can provide EdTech companies with reports that show how their features relate to existing research, for example. Foundational research can be undertaken at any stage of an EdTech company's growth, but it is most impactful when undertaken right at the start and when the insights generated during the research review are followed through during product build and scale.
Formative research occurs during the product development and testing phases, so it is most typically introduced once companies have a working prototype. Its goal is to refine and improve the solution based on real-world feedback from users, such as teachers, learners, parents, and employers. Common methodologies here include pilot studies, user testing, design-based research, and focus groups. These methods help iterate the product by identifying areas for improvement before broader implementation. Companies can undertake formative research by partnering with schools and research groups at universities. Typically, a sandbox approach in which participatory and co-design techniques are used leads to informative, actionable, and ethical insights.
Summative research takes place once the product is fully developed and focuses on evaluating its overall effectiveness in achieving desired outcomes. Companies typically wait until they have a solid user base before undertaking summative research, because of the cost and time this type of research requires. Summative research often uses randomized controlled trials (RCTs), quasi-experimental designs, and longitudinal studies to assess impact on learning, behavior, or attitudes. In most evaluation frameworks used by national governments, summative research is valued the highest, as it provides the strongest evidence needed to demonstrate efficacy at scale.
CALCULATING IMPACT
Each impact investor tends to calculate impact in their own way, which can create inconsistencies across the field. Ideally, a standardized approach to pooling impact metrics would ensure greater consistency and comparability. While progress is being made in areas like fintech, social impact, and healthcare technologies, EdTech is still relatively new to this practice.
At Owl, you are expected to report on three metrics: Scale and Access, Outcomes, and Diversity.
Scale and access
This focuses on how many users a product serves and who they are. Our portfolio now reaches over 500 million learners, 16.7 million educators, and 1.2 million employers worldwide. In a significant milestone, 40% of learners served by our portfolio companies are English language learners, and 74% are learners of color. Within U.S. K-12, our companies serve 59% students of color, 49% free or reduced-price lunch learners, and 55% Title I schools.
Outcomes
Outcomes are measured on a spectrum, from customer testimonials to quasi-experimental studies and randomized controlled trials (RCTs). Given the early nature of some of the recent investments in Gen AI companies, we're seeing much more early-stage outcomes work, such as pilots, case studies, and logic models.
Diversity
Diverse teams consistently demonstrate superior decision-making capabilities, highlighting the importance of inclusivity. It is vital that companies conduct annual diversity surveys capturing metrics such as gender, race, pay equity, and employee retention across all leadership levels, so that the diversity of the populations served is reflected in the organization building solutions for them.
RECOMMENDED RESOURCES
The basics of Logic Model and Theory of Change
The different kinds of research in edtech impact evaluation
The Other EdTech Evidence