
The 2 Sigma Problem

On creating a scalable education system

Evidence Based Learning

In 1984, Benjamin Bloom published a paper demonstrating that one-to-one learning (tutoring) improved student performance by two standard deviations. In other words, the average one-to-one learner performed better than 98% of the students in a traditionally instructed group. One-to-one instruction was used in conjunction with a mastery-based curriculum, where students were required to demonstrate mastery before moving to the next topic.
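The "98%" figure follows from the shape of the normal distribution: a score two standard deviations above the mean sits at roughly the 97.7th percentile. A quick sanity check, assuming normally distributed outcomes:

```python
from statistics import NormalDist

# Percentile rank of a score 2 standard deviations above the mean,
# assuming student outcomes are roughly normally distributed.
percentile = NormalDist(mu=0, sigma=1).cdf(2)
print(f"{percentile:.1%}")  # → 97.7%
```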

These results underscore a profound point about the potential for student achievement: there exists an evidence-based methodology that can completely redefine student outcomes, regardless of socio-economic background. The conundrum is that one-to-one instruction is expensive to implement due to high labor costs. This is the 2 sigma problem.

One to One Instruction

According to the paper, one-to-one instruction (or what Bloom calls tutoring) is defined as 'students learning the subject matter with a tutor for each student (or for two or three students simultaneously). This tutoring instruction is followed periodically by a formative test, feedback corrective procedures and parallel formative tests as in the mastery learning classes.' It's important to note that one-to-one instruction in and of itself doesn't appear to produce the 2 sigma effect and must be paired with mastery learning. Bloom defined mastery learning as 'students learn the subject matter in a class with about 30 students per teacher. The instruction is the same as in the conventional class (usually with the same teacher). Formative tests (the same tests used with the conventional group) are given for feedback followed by corrective procedures and parallel formative tests to determine the extent to which the students have mastered the subject matter.' He further explains that students do not progress through the curriculum unless mastery is explicitly demonstrated.

It's this potent combination of one-to-one instruction and mastery learning that leads to unheard-of student outcomes. As mentioned above, this method is extremely expensive. Let's dig into how we might solve (and transcend) that problem.

Mastery Based Learning

The first piece of the puzzle is fairly easy to solve with a digital learning system (see Khan Academy, Coursera, Beast Math, etc.). These systems require students to demonstrate mastery via digital assessments. At Dexter, we've embedded similar assessments throughout our curriculum cards to ensure that students gain mastery before moving on. This might seem like a mundane innovation, but within the context of a traditional school it's revolutionary. Another way of thinking about mastery learning is self-paced learning. In a traditional environment, as Sal Khan described, time is constant and mastery is variable: all students progress through the curriculum as a function of time, regardless of their individual level of mastery. The structures of individual class periods, age segregation and uniform, fixed assessment dates make this an absolute in the traditional schooling environment.

At this point, it's worth asking: how do you truly measure mastery? Countless researchers have pointed to the dangers of standardized assessments, i.e. 'teaching to the test'. At Dexter, we're not so interested in whether you can pass an arbitrary multiple-choice quiz; we're much more interested in helping our students develop powerful cognitive frameworks. Language fluency is a good analog for this point. It would be a failure if our students could pass a language exam but couldn't successfully navigate an environment where the application of that language was required. A blended approach is required, one that involves traditional forms of assessment along with application and communication of the knowledge. What we're after is not just assessing the memorization of isolated nuggets of knowledge, but assessing whether the student has successfully developed the underlying general cognitive tool/framework. Doing this in a scalable way is not easy (think thesis defense).

We implement this by introducing traditional assessments on the micro level and application- and communication-based assessment on the macro level. To be precise, students are given questions with explicitly correct answers throughout the curriculum in order to progress through the unit of study. To complete the unit of study and move on to a new unit, students demonstrate knowledge by completing a long-form project or a narrative description of their activities. The final step of demonstrating mastery involves communicating the knowledge to another student (see the Feynman technique). This leads us to how we might resolve (transcend) the 2 sigma problem in a scalable and affordable way.
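As a sketch, the gating logic described above might look like the following. The class and field names are hypothetical (not Dexter's actual schema), and the 90% passing bar is an assumption; the three-peer requirement mirrors the mentoring rule described later in this post.

```python
from dataclasses import dataclass, field

MASTERY_THRESHOLD = 0.9         # assumed passing bar for micro-assessments
REQUIRED_TEACHING_SESSIONS = 3  # students mentor three peers to finish a unit

@dataclass
class UnitProgress:
    """One student's progress through a single unit of study."""
    micro_scores: list = field(default_factory=list)  # objective micro-assessment scores (0-1)
    project_complete: bool = False                    # long-form project or narrative write-up
    peers_taught: int = 0                             # successful Feynman-style teaching sessions

    def micro_mastery(self) -> bool:
        # Every micro-assessment must be passed to progress within the unit.
        return bool(self.micro_scores) and all(
            s >= MASTERY_THRESHOLD for s in self.micro_scores
        )

    def unit_complete(self) -> bool:
        # Micro mastery, the macro-level project, and peer teaching
        # must all be demonstrated before moving to a new unit.
        return (
            self.micro_mastery()
            and self.project_complete
            and self.peers_taught >= REQUIRED_TEACHING_SESSIONS
        )
```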

Scalable Tutoring via Learning Webs

In his landmark book, Deschooling Society, Ivan Illich described the creation of learning webs and outlined four primary areas needed to create them:

1. Access to educational objects and facilities that augment formal learning. He describes these as machine shops (maker spaces), studios, laboratories, libraries, game rooms, etc.

2. Skill Exchanges - permitting persons to list their skills and conditions under which they are willing to serve as models for others who want to learn these skills.

3. Peer-Matching - a communications network which permits persons to describe the learning activity in which they wish to engage, in the hope of finding a partner for the inquiry.

4. Reference services to professional educators - think world-class educators like Veritasium or Physics Girl.

We run physical spaces to address his first requirement (it's an interesting aside to consider the utility of providing more fluid access to the means of production/learning). The fourth requirement is solved through the democratization of world-class educational materials via platforms like YouTube, i.e. the rise of celebrity educators. It's the second and third areas that are relevant to creating a model of scalable tutoring. Illich's book was written in 1971 and lacked the vantage point of modern-day software. He assumed a mechanical approach where the process took more explicit action on the user's part, versus a more fluid experience powered by software. Let's start there.

Instead of using an off-the-shelf learning management system, we've developed our own. One reason is that computers in education aren't being used as a transformative tool, but merely as a substitute (see the SAMR model). The other primary reason is the need for tight integration between the physical space and the software. This tight integration is what allows us to realize Illich's dream of a learning web. For example, all our students check into the space with an RFID badge, meaning that our software system knows who is in the space. Our software also allows us to develop a comprehensive model of each student's mastery on an incredibly granular level (e.g. we continuously track vocabulary level as students create notebooks), which opens up transformative capabilities.

Our journey in peer-to-peer learning began as a natural consequence of aiming for scalability. As students complete curriculum on our platform, they produce an enormous amount of original work in the form of notebooks. The benefits of multiple cycles of feedback are well known (see Austin's Butterfly for a useful anecdote), but providing them proved impossible given our limited staff. We ultimately developed a peer-to-peer feedback system where students give and receive critical feedback on their notebooks.

To our surprise, students very much enjoy exchanging feedback with their peers. The quality of their work also improved noticeably (see the Hierarchy of Audience). In combination with a basic literacy curriculum, the proper contextualization of writing (as a means of communication) drastically improved student writing.

Digital peer-to-peer feedback is great, but it lacks the social texture that accounts for some of the benefits of tutoring. We're now developing a matching system that connects students (based on mastery data) in a peer-to-peer mentor relationship. Up until this point, we've been doing this informally by verbally connecting students. We don't have the data yet, but this method might actually exceed the 2 sigma effect because the learners themselves are now involved in the act of communicating/teaching the knowledge. This leads to reinforcement and a level of fluency that can't easily be measured in traditional ways. Traditional peer-to-peer matching shows a limited effect (Bloom points to an effect size of 0.4), but existing approaches lack a software system that optimizes pairings on the backend - we think this is key to improving efficacy.

In practice, suppose student A is working on a curriculum card about slope. Student B has demonstrated technical mastery of the subject by passing the micro-level assessments and is midway through a long-form project/notebook focused on application (e.g. creating a Scratch AI using a linear equation). Student A performs poorly on a micro-assessment, and the matching algorithm determines that:

1. Both students are on campus.

2. Student B has demonstrated mastery of the subject student A is struggling with.

3. Student B has previously proved effective with other students who share a similar learner profile to student A.
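A minimal sketch of that matching heuristic might look like this. The data shapes and the effectiveness score are illustrative assumptions, not our production code:

```python
def find_mentor(struggler, topic, candidates):
    """Pick the best available peer mentor for `topic`.

    Each candidate is a dict with `on_campus`, `mastered_topics`,
    and `effectiveness` (a map of learner profile -> 0-1 score).
    """
    best, best_score = None, -1.0
    for c in candidates:
        if not c["on_campus"]:                 # 1. both students must be in the space
            continue
        if topic not in c["mastered_topics"]:  # 2. candidate has demonstrated mastery
            continue
        # 3. prior effectiveness with students sharing this learner profile
        score = c["effectiveness"].get(struggler["profile"], 0.0)
        if score > best_score:
            best, best_score = c, score
    return best
```

Starting `best_score` at -1.0 means a candidate who satisfies the first two criteria is still matched even without a prior effectiveness record.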

Student B receives a direct message via the Dex chatbot (another core technology we'll leave for another post) letting her know that she has been matched with student A for a mentoring session. Student B knows she needs to successfully mentor at least three other students in order to demonstrate true mastery and progress to the next unit, so she's excited about the pairing. After their session, student A provides feedback about the experience and is given another mastery assessment to measure the effectiveness of the mentor session. To maximize the value of these sessions, all students complete onboarding curriculum designed to help them develop meta-skills around learning how to learn and being a helpful mentor/tutor.
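One way to close that loop is to fold the mentee's pre/post assessment delta back into the mentor's effectiveness score, which the matcher can then use for future pairings. A sketch, where the moving-average update and field names are assumptions:

```python
def update_effectiveness(mentor, mentee_profile, pre_score, post_score, alpha=0.3):
    """Blend the mentee's observed gain into the mentor's running
    effectiveness for this learner profile (scores normalized to 0-1)."""
    gain = max(0.0, post_score - pre_score)  # ignore regressions in this simple sketch
    prev = mentor["effectiveness"].get(mentee_profile, 0.0)
    # Exponential moving average: recent sessions weigh more than old ones.
    mentor["effectiveness"][mentee_profile] = (1 - alpha) * prev + alpha * gain
```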

This system removes the need for a paid staff member to provide the majority of one-to-one tutoring (though we still use software to empower staff to tutor efficiently), while also improving learning outcomes for both parties. On a macro level, each student meets with a Dexter staff member on a weekly basis to receive more general guidance, and we intervene if their peer-to-peer mentor sessions haven't been effective.

On reflection, it's fitting that leveraging our deeply social nature as humans might solve the scalability problem - the answer is not isolation behind a screen, but instead using software to develop community.

Evidence Based Learning

At Dexter, our environment appears less structured than a traditional school, but in reality we've created a video-game-like environment where everything is meticulously tracked and monitored via software (and hardware - a discussion of Seeing Spaces is for another post). Our structured curriculum follows from the results of Direct Instruction research and is constantly being improved based on student performance data. This means that Dexter facilitators are free to mentor students as they progress through the curriculum, rather than being overwhelmed with the additional responsibilities of content creation and administrative work (grading, assessment, etc.).

Asking teachers to design instruction is like asking the pilot of a 747 to design the plane, or the conductor of a symphony to compose the score, or the lead in Hamlet to write the play.

- Shepard Barbash.

In this post, we've focused on a small part of the Dexter stack that directly addresses the 2 sigma problem and haven't discussed the equally important areas of powerful content (see Alan Kay, Kieran Egan, Seymour Papert), reimagining the physical learning environment (see Bret Victor), transformative use of computers (see Explorable Explanations) or processes for optimizing parent involvement.

What should be underscored is the imperative to develop a true education system that relies on data versus anecdote. As an organization, we're making a strong commitment to prove efficacy through rigorous studies and outside measures. Real human lives are on the line, and a student experiencing anything less than best practices (backed by data) is a travesty. The high variability of experience (largely based on the quality of the teacher) that is currently the norm is unacceptable. It's our hope that introducing more evidence-based learning models will transform education into an enterprise more like healthcare, where best practices are defined, systematized and spread.

Do evidence-based learning and the idea of making education more of a science sound interesting to you? We're hiring an educational statistician and would love to chat!

Are you a parent interested in sending your child to Dexter? Our full-time school launches in August - sign up for an interview today!
