Recent weeks have seen a great deal of excitement and concern about advances in machine learning, especially crystallized by the release of OpenAI's ChatGPT. Breathless commentators have proclaimed that it might spell the "end of programming", while educators have fretted about its consequences for their work and for student learning.

At Bootstrap, we take a much more sober perspective than either of these extremes. It helps that our team includes several practicing computer scientists, who have been aware of these trends for many years now, and who even use these tools in some of their own work. As a consequence, we have a good understanding of both the possibilities and weaknesses of these technologies, and think hard about how to incorporate them into our pedagogy. In fact, we have been incorporating material that anticipates these tools - without fanfare - into Bootstrap for a while now. In this blog post we want to tell you more about that.

First, we have to understand something essential about the way many of these technologies work. People unfamiliar with them reach for analogies, which can be helpful in some ways but dangerously misleading in others. For instance, people have compared them to calculators, essentially recapitulating an age-old debate in mathematics education. However, this analogy is pernicious.

A calculator takes a precise, well-defined problem and applies a precise, *correct* algorithm to produce a precise, *correct* answer. Human language, by contrast, is filled with idioms, imprecision, and subtleties. Calculators demand that their users translate the richness of their thinking into a precise sequence of steps, essentially learning to speak to the machine in the language it understands. If we get a wrong answer, it can be because we translated a problem into calculator instructions *incorrectly*. This could be because we misunderstood the problem ourselves, or misunderstood the way the calculator itself worked. It could even be that we simply made a typo! But it is never because the calculator did not "understand" what we meant, or computed an operation incorrectly.

Technologies like ChatGPT and their siblings take a completely different approach, designed to spare us the translation. They use statistical tools to analyze vast quantities of text, applying massive computing power to look for patterns in language. This lets them take imprecise, poorly-defined prompts, apply algorithms that have **no understanding of the question**, and produce rough answers that happen to resemble text seen before. Nothing could be farther from how a calculator works. The well-known computer scientist Eugene Spafford once described Usenet (a predecessor of modern message boards and social media) thus: "Usenet is like a herd of performing elephants with diarrhea - massive, difficult to redirect, awe-inspiring, entertaining, and a source of mind-boggling amounts of excrement when you least expect it." He could as well have been explaining systems like ChatGPT.

So imagine, now, that you instead have a calculator that lets you input word problems in English prose and produces answers. Say you write "Sally sells lemonade for $1.50/glass, and each glass costs $0.30 in ice, lemons and sugar. How many glasses does she need to sell to earn $50?" This calculator sometimes produces 41.67 ($50 divided by $1.20, the actual profit per glass), which is mathematically correct but contextually wrong, because lemonade isn't sold in fractions of a glass. Sometimes it produces 42, which fixes this problem. But sometimes it produces a totally off-the-wall answer, like 34 ($50/$1.50, ignoring the cost of each glass), 167 ($50/$0.30, ignoring the price), or perhaps even 15 ($50*0.30), giving different responses each time.
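For the curious, the unambiguous version of this word problem really is only a few lines of arithmetic. Here is a sketch in Python (the variable names are ours, purely for illustration):

```python
import math

price_per_glass = 1.50   # Sally's selling price
cost_per_glass = 0.30    # ice, lemons, and sugar
goal = 50.00             # target earnings

# Each glass nets $1.20; round up, since Sally can't sell a fraction of a glass.
profit_per_glass = price_per_glass - cost_per_glass
glasses_needed = math.ceil(goal / profit_per_glass)
print(glasses_needed)  # 42
```

Precisely translating the problem into these steps is exactly the work the imagined English-prose calculator promises to spare us.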

When confronted with such a calculator, if you already knew the answer or had thought it through far enough to estimate the ballpark figure, you would be able to check. But if you already knew the answer, you probably wouldn't be asking it in the first place! Notice what has happened: the problem has shifted from "translate the problem into a precise, unambiguous question" to "figure out which of the answers is plausible". **It's the very opposite of a calculator.**

At Bootstrap, we've been preparing students for this world since 2007. Bootstrap's curricula (and the older curricula it is based on) have always approached problem solving as a structured endeavor that includes multiple, mutually-verifiable steps. This has been a cause of friction in the K-12 CS movement at times, as many of the curricula in this space derive from a strongly constructivist, "just keep trying things until you solve the problem!" mentality.

Bootstrap teaches a structured process for writing programs, in which students work through concrete examples of a function's behavior *before* writing the function itself. Those examples become tests, so every solution arrives together with the evidence needed to check it.

As our curricular offerings have grown, this approach has expanded throughout our materials. We regularly present students with incorrect solutions, whether they are studying math, programming, or data science. Our Data Science curriculum even includes examples of manipulative statistics and writing, asking students to think critically to understand the flaw in what is presented.

Every educator is familiar with the adage "you don't understand something until you teach it". We believe a corollary to this is "you don't understand a problem until you've verified someone else's solution." ChatGPT merely replaces the fictional "error-prone programmer" in our curriculum with an "error-prone programmer who happens to be a machine."

Teaching isn't about having kids memorize formulas - it's about teaching students how to think critically, how to bring all of their intuition, estimation, and sense-making to bear when interacting with the world. GPT doesn't replace *any* of that. In fact, it only reinforces why that kind of education is essential!

So don't fall for the clickbait that says GPT is a threat to education. It's only a threat to the kind of education that we all knew wasn't good to begin with, and a needed reminder of what lies at the heart of teaching itself.

Posted March 17th, 2023