The post Equations, Identities, and #WhoSaysMath appeared first on Matt Salomone.
From a Twitter thread on telling the difference between an equation and an identity in the algebra classroom came this tweetstorm. For me, the question raises issues of our own expert blind spots, mathematical communication and tacit knowledge, and a way in which category theory can help us be more humble about students’ common “errors.”
Unrolled at ThreadReader (better for dark mode): https://threadreaderapp.com/thread/1214965543788261377.html
On equations, identities, and #WhoSaysMath. Come for the pedagogy, stay for basic category theory. Thread here inspired the below.
TLDR: We can say "it's wrong" less and "it depends" more. (1/25)#MTBoS @CmonMattTHINK @j_lanier @katemath @mrdardy https://t.co/Bt5AyBTqTD
— Matt Salomone (@matthematician) January 8, 2020
The post Of Cats and Combinatorics (Help!) appeared first on Matt Salomone.
Suppose you’re a pet photographer. You have two cats, and two cat sweaters. Not every cat must wear a sweater, nor must every sweater be used. But no cat may wear more than one sweater, nor may we squeeze multiple cats into the same sweater. How many different photos are possible?
The Answer: Seven.
The Balls-and-Urns Version: We have two urns, each with two balls. All are distinguishable. We may choose no balls at all and be done. Or, if we choose a ball from the first urn (a cat) we then must choose a ball from the second urn (a sweater) with which to match it. Then, we can either quit, or continue without replacement. So if our urns are
\(\mathscr{C} = \{ {\rm Amber}, {\rm Bosco} \} \quad \mathscr{S}=\{ {\rm Red}, {\rm Green}\},\) the seven possible photos are: neither cat in a sweater; Amber in Red; Amber in Green; Bosco in Red; Bosco in Green; Amber in Red with Bosco in Green; and Amber in Green with Bosco in Red.
The Grammatical Version: Represent the cats by the symbols \(\mathscr{C}=\{A,B\}\) and the sweaters by the numbers \(\mathscr{S}=\{1,2\}\). Define a word as an element of the Cartesian product \(\mathscr{C}\times\mathscr{S}\), and a sentence as an element of the power set \(2^{\mathscr{C}\times\mathscr{S}}\), that is, a subset of \(\mathscr{C}\times\mathscr{S}\), in which no element of \(\mathscr{C}\) nor \(\mathscr{S}\) appears more than once.
Then among the \(16=2^{2\times 2}\) elements in this power set, exactly seven are sentences: \(\varnothing\), \(\{(A,1)\}\), \(\{(A,2)\}\), \(\{(B,1)\}\), \(\{(B,2)\}\), \(\{(A,1),(B,2)\}\), and \(\{(A,2),(B,1)\}\).
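The grammatical version is easy to check by brute force. A short Python sketch (the helper names are mine) that builds the power set of \(\mathscr{C}\times\mathscr{S}\) and filters for sentences:

```python
from itertools import chain, combinations, product

words = list(product("AB", "12"))  # the four words in C x S

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def is_sentence(word_set):
    cats = [c for c, _ in word_set]
    sweaters = [s for _, s in word_set]
    # no cat and no sweater may appear more than once
    return len(set(cats)) == len(cats) and len(set(sweaters)) == len(sweaters)

sentences = [w for w in powerset(words) if is_sentence(w)]
print(sum(1 for _ in powerset(words)), len(sentences))  # 16 7
```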
The Graph Theory Version: Consider the complete bipartite graph \(K_{2,2}\). If the first partite part represents the cats and the second the sweaters, then each edge in this graph represents a possible cat-to-sweater fitting.
In this formalism, a “photograph” corresponds to a matching on this graph, i.e., a subset of its set of edges in which no two edges share a common vertex (because one cat can’t wear multiple sweaters, nor vice versa). The graph \(K_{2,2}\) has a total of seven matchings, shown below.
The total number of distinct matchings possible in a graph is known as its Hosoya index, or Z-index. This index has been computed for several families of graphs; for the complete bipartite graphs we have
\(Z(K_{m,n}) = n!\, L_n^{m-n}(-1),\) where \(L_n^{m-n}(x)\) is the associated Laguerre polynomial. That this special function pokes its head into our problem ought to be surprising! It arises because matchings in this family of graphs satisfy the same recurrence relations as the solutions of the associated Laguerre differential equations (Godsil & Gutman 1981).
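Concretely, the index can be computed by counting matchings by size: a \(k\)-edge matching in \(K_{m,n}\) chooses \(k\) vertices from each side and pairs them up in one of \(k!\) ways. A quick sketch (the function name is mine) of this standard sum, which agrees with the Laguerre expression:

```python
from math import comb, factorial

def hosoya_complete_bipartite(m, n):
    # Z(K_{m,n}) = sum over k of C(m,k) * C(n,k) * k!:
    # choose k vertices on each side, then pair them up.
    return sum(comb(m, k) * comb(n, k) * factorial(k)
               for k in range(min(m, n) + 1))

print(hosoya_complete_bipartite(2, 2))  # 7  (the seven cat photos)
print(hosoya_complete_bipartite(2, 3))  # 13
```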
What I’m working on right now is a more general case which:
For example, suppose I add a third sweater and also a set of two fabric bows to my photography example:
\(\mathscr{C} = \{ {\rm Amber}, {\rm Bosco} \} \quad \mathscr{S}=\{ {\rm Red}, {\rm Green}, {\rm Yellow} \} \quad \mathscr{B}=\{ {\rm Satin}, {\rm Tuille}\}\) All seven of these items need to be in our photo. In how many ways can they be combined? Now, we have a complete tripartite graph \(K_{2,3,2}\), and each photograph is represented by a vertex-disjoint set of paths:
In how many different ways can we draw (any number of) paths on this graph such that no two paths share a vertex?
The answer appears to be 91. Thinking recursively, we could begin with the sum of the bipartite Hosoya indices we’d get by either dressing no cats, using no sweaters, or using no bows. (In these cases, we’re restricting to paths of length 1, i.e., edges.) Since the empty matching belongs in all three of these, we also need to subtract 2 to avoid counting it thrice.
\(Z(K_{3,2}) + Z(K_{2,2}) + Z(K_{2,3}) - 2 = 13 + 7 + 13 - 2 = 31.\) But this number misses the two-edge paths, the dapper kitties wearing a sweater and bow. If we want Amber alone to wear both sweater and bow, there are six ways to do that. And once we have made that choice, then we can independently either dress Bosco not at all (1 choice), in sweater alone (2 choices now), or in bow alone (1 choice). This provides for \(6\times 4 = 24\) possibilities, and reversing the roles of Amber and Bosco gives 24 more. Then, if we want both cats to wear sweater and bow, we have six choices of how to dress the first cat and two independent choices for the second — giving 12 more possibilities. This accounts for the remaining 60 “pathwise matchings” to complete our set of 91.
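The 91 can be sanity-checked by brute force under one particular reading of the problem: each cat wears at most one sweater and at most one bow, and unworn items are not paired with each other. (Whether this reading matches the intended path-matching model is exactly the open question; the function and variable names below are mine.)

```python
from itertools import product

def partial_injections(dom, cod):
    """All assignments giving each element of dom at most one element of cod,
    with no element of cod used twice."""
    options = [[None] + list(cod) for _ in dom]
    for choice in product(*options):
        used = [c for c in choice if c is not None]
        if len(set(used)) == len(used):
            yield dict(zip(dom, choice))

cats     = ["Amber", "Bosco"]
sweaters = ["Red", "Green", "Yellow"]
bows     = ["Satin", "Tuille"]

# Under this reading, a photo pairs a sweater assignment with an
# independent bow assignment.
photos = [(sw, bw)
          for sw in partial_injections(cats, sweaters)
          for bw in partial_injections(cats, bows)]
print(len(photos))  # 91
```

Notice that under this reading the count factors as \(Z(K_{2,3})\cdot Z(K_{2,2}) = 13 \cdot 7 = 91\), since the sweater and bow assignments are independent.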
This recursive thinking seems like it has potential to bootstrap from the general bipartite case to the tripartite case, and if so, perhaps also beyond to the k-partite case. I don’t have those details worked out just yet, though. Maybe there’s a more clever approach that I’m missing?
Even more interesting to me than the question about how to add new articles of clothing to this photoshoot is the question of how to incorporate some indistinguishability. For example, suppose in the original two-cats, two-sweaters problem I have two identical red sweaters. Then there are only four photographs possible: Both cats undressed, Amber wearing red, Bosco wearing red, and both cats wearing red.
Worse yet, what if my two cats were also identical twins? Then, there are three possible photographs, enumerated by how many cats are wearing a sweater in them: zero, one, or two.
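Both of those smaller counts can be checked by enumerating the seven distinguishable photos and then collapsing photos that differ only by swapping identical items. A sketch (all names are mine):

```python
from itertools import product

cats = ["Amber", "Bosco"]
sweaters = ["red1", "red2"]  # two physically identical red sweaters

# The 7 photos when everything is treated as distinguishable:
photos = [
    dict(zip(cats, outfit))
    for outfit in product([None] + sweaters, repeat=len(cats))
    if len({s for s in outfit if s}) == sum(1 for s in outfit if s)
]
print(len(photos))  # 7

# Quotient by the sweater swap: only each sweater's colour survives.
def up_to_sweater_swap(photo):
    return tuple("red" if s else None for s in photo.values())

print(len({up_to_sweater_swap(p) for p in photos}))  # 4

# Quotient by swapping the twin cats too: only how many cats are dressed survives.
print(len({sum(s is not None for s in p.values()) for p in photos}))  # 3
```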
If it weren’t for the no-replacement feature of this counting problem, there’d be no difference between having one red sweater or having two. Alas, the effect doesn’t lend itself to a simple quotient.
In the fancier cats/sweaters/bows example above, consider the effect of making two of the sweaters identical – say, two red sweaters and one green. Of the 91 path-matchings we counted previously, some had both the red and yellow sweaters sitting unmatched. In fact, this number is 21: the \(7 = Z(K_{2,2})\) photos in which no sweater is worn, another 7 in which Amber wears the green sweater, and another 7 in which Bosco wears the green sweater. These 21 photographs are insensitive to whether the other two sweaters are the same color or different — they’re not being matched up with a cat or a bow anyway. So they are in bijection with the photographs in the red/red/green case in which both red sweaters are unmatched.
But the remaining 70 original path-matchings either involve the red sweater, the yellow sweater, or both. After the distinction between red and yellow is removed, there is now a \(2 = |S_2|\)-to-1 correspondence from these red/yellow/green photos to the red/red/green photos. The action of the transposition exchanges the class of red-only photos with the class of yellow-only photos, and since the cats and bows remain distinguishable, also acts freely on the class of photos that matched both red and yellow sweaters. So adding this indistinguishability reduces the original 70 photos in this group to only 35.
In total, then, we arrive at \(21+35=56\) possible photos when two of my sweaters have the same color.
How exactly to generalize this counting process, I have no idea. It feels like the worst kind of balls-and-urns type of problem: distinguishable urns, but partially distinguishable balls. What happens to my approach when multiple urns have some indistinguishable balls in them? What about multiple kinds of indistinguishability in the same urn (three identical red sweaters, three identical green sweaters, and two identical yellow ones)?
I’ve stared at this problem for long enough on my sabbatical to convince myself that it’s probably not trivial. But beyond the results on Hosoya indices and Laguerre polynomials that were valid in the (fully distinguishable) bipartite case, I haven’t found anyone who’s either made the leap from bipartite edge-counting to k-partite path counting, or looked at the question of making some vertices in a partite part interchangeable.
But I’m also not a specialist in graph theory or combinatorics. Am I even approaching this question in a helpful manner? Are there better ways of reframing this type of problem that could be susceptible to a smarter attack?
The post Grow Up, Branch Out (Transcript) appeared first on Matt Salomone.
Quantitative literacy – also known as quantitative reasoning or simply “numeracy” – is said to be an essential skill in the workplace and in the world of the 21st century. But just what IS quantitative literacy, anyway? In this video I hope to help you answer that question well enough to locate these skills in your teaching, whatever your discipline, and to connect you with resources that can take your teaching and support of these skills to the next level. You can use the links as they appear in the video to interact.
In 2009, Massachusetts legislators proposed to raise the state sales tax from a rate of 5 percent to 6.25 percent, just the second increase in the tax’s history. Needless to say, the proposal was divisive. In an April article, the Boston Globe called it a “1.25 percent increase.” Later in June, when the proposal was on the verge of passing, the Globe ran an Associated Press article that called it a “25 percent increase.” Both articles are referring to the same tax proposal. So: which one of them is correct?
The April article is correct! Here the author compared the new tax rate to the old tax rate by subtracting, measuring what we call the “absolute difference.” The tax rate was indeed set to increase by 1.25 percentage points.
The June article is correct! Here the author compared the new tax rate to the old tax rate by dividing, measuring what we call the “relative difference,” reported as a percent. The amount of the proposed rate increase is exactly one-quarter, or 25 percent, of the existing rate.
In reality, BOTH of the articles are correct, each in its own way, because their authors just made different choices. The absolute difference does a good job of communicating the total amount of the increase itself. The relative difference compares that increase to the previous amount. Each provides one important form of context while obscuring another. And, you can imagine how supporters of the new tax might prefer one framing and opponents favor the other – even though they are both talking about exactly the same thing.
So, we could subtract. Or, we could divide. That choice is meaningful. But wait a minute. Both of those math problems have exactly the same answer on my calculator. Why are they so different when they make it onto the page? What’s happening to MEANING on its trip out of and back into context?
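The calculator coincidence is easy to reproduce in a few lines: the subtraction and the bare division both produce the digits 1.25, and only the surrounding choices turn that number into “1.25 points” or “25 percent.”

```python
old_rate, new_rate = 5.0, 6.25  # sales tax rates, in percent

absolute_change = new_rate - old_rate               # subtract
relative_change = (new_rate - old_rate) / old_rate  # divide the change by the old rate

print(absolute_change)        # 1.25 -> "a 1.25 percentage-point increase"
print(relative_change * 100)  # 25.0 -> "a 25 percent increase"
print(new_rate / old_rate)    # 1.25 -> the bare division: same digits as the subtraction
```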
Quantitative literacy is what happens when instead of asking “What math could we do?” we ask “What math should we do?” “Why do we think so?” and “What are the impacts of that choice?” Quantitative literacy therefore is a process of “sophisticated reasoning,” using mathematics that may itself be sophisticated, or as in this example, fairly elementary.
Quantitative literacy is more than mere mathematics. Both of them have their roots in simple number skills. But while the job of mathematics is to grow upward, building conceptual and technical sophistication as it goes, quantitative literacy is about growing outward, not just upward, finding more – and more various – opportunities to understand the world through a quantitative lens, at any level of our mathematical development. Just as a tree’s trunk carries water to its leaves, and in turn the leaves catch the sunlight to nourish the tree’s growth, in the same way, math skills are vital for quantitative literacy AND the practice of QL reinforces our math skills. The crucial difference is context.
We grow upward in a math class because we reason abstractly, thriving independently of context. But we grow outward in quantitative literacy by reasoning concretely, attending to number and quantity that are steeped in context that is authentic, meaningful, and yes, even messy.
We learn to read and write by reading and writing ABOUT things, and the more things we read and write about, the more our language skills can develop. We call QL a literacy for the same reason: it is not a skill that we learn but rather acquire through repeated use across many and various contexts. So math plays an essential role in building quantitative literacy but hardly a sufficient one. Mathematics alone cannot create a more numerate world. Mathematics needs partnership – conspiracy, even – from every discipline to do that.
And we no longer have the luxury of opting out of numbers. In the information age, data pervades every aspect of our personal, professional, and public lives. Some 2.5 quintillion bytes of new data are being stored every single day. We end every year with more than triple the data as when we started it. Data girds our decisions, suffuses our rhetoric. It drives our cars. And with that power comes risk.
In mathematics, after all, numbers are perfect constructs – but quantitative reasoning is the HUMAN expression of a numerical thought. Numbers are NOT truth; they are tools. And while numbers cannot lie, humans? They can. In order to give voice to numbers, we have to make a lot of choices. Those choices can reflect our biases. They can disguise our agendas. They can perpetuate inequality. And yet, we are all too content to uncritically assign credibility to arguments that use numbers. That concentrates disproportionate power into the hands of number-users — the more skilled among whom are actually MORE likely to use numbers to confirm their biases.
To handle the burning numerical questions of our age, our students need more than a fire extinguisher on the wall. They need a smoke alarm. A vigilant sensibility, a habit of mind. We don’t just want our students to use numbers; we want them to WANT to use numbers, to see the world through a quantitative lens. Because the stakes for numeracy have never been higher.
And that’s a challenge we are not yet prepared to meet. While there have been recent gains in school mathematics skills in the United States, American adults’ basic numeracy skills lag behind most of the developed world. And among young adults, we’re actually dead last. If the U.S. workforce is to remain globally competitive – if our citizens are to remain informed in an increasingly data-driven political climate – and if our college graduates are to have every opportunity to advance in their skilled careers – we have a lot of work to do to create a more numerate world.
As so often is the case, with that crisis comes opportunity. Our data economy needs everyone to be more conversant with numbers, yes, but it also creates demand for new kinds of experts. On the one hand, employers of all types agree that candidates with data skills make more attractive hires. (And, higher education leaders seem to agree that too many of our graduates don’t yet have those skills.) Meanwhile, the number of data professionals – statisticians, data scientists, and so forth – in the workforce is projected to increase dramatically in the next few years, and the hundreds of thousands of college graduates who fill these positions are going to command quite a salary to do so.
Higher education can answer that call. We can answer it by designing authentic quantitative experiences for students in every program of our universities. Those experiences look different in different programs. Math, science, and engineering majors have to grow taller trees, supported by deeper roots. Meanwhile, humanities and fine arts majors might be better served by focusing on broadening their foliage. Defining the right tree shape for each program, and ensuring each student’s roots in basic skills are strong enough to support it: those are the goals of Massachusetts’ initiatives on Math Pathways and Developmental Mathematics, work that as of this recording is quite active both within and among our public institutions.
Higher education can also answer this call by building quantitative literacy programs that complement our successful writing programs. A QL program brings together both an early, focused opportunity for students to develop quantitative expertise, what I call a “Big-Q” experience, and also myriad opportunities for students to use that expertise across the full spectrum of the curriculum: what I call “small q” experiences. Establish a strong trunk in the first year, and students will bear fruit and flowers all the way through graduation.
Faculty in mathematics and statistics are the agents of Big-Q; but faculty in every discipline are needed to support all the small q’s. So yes – if you’re watching this video, that includes you. Welcome to the conspiracy.
So, what does it look like to support both Big-Q and the small q’s? How do we both nourish the trunk of the tree and encourage it to grow limbs, branches, leaves, and flowers? And, since these tasks have different tools and different casts of characters, how do we do both at the same time in a complex university?
Well, remember first that Big-Q and small-q — our math and statistics departments on the one hand, and faculty across the curriculum on the other — are serving a common goal. Whether it occurs in the quantitative hothouse of an introductory statistics class, or on the other side of campus in an art studio, quantitative literacy tends to always involve the same elements: retrieving information from an authentic context, processing it, and then returning it back into its context. That includes an opportunity to calculate, yes, using tools that are appropriate, from pen and paper to supercomputer and everything in between. It also includes the process of choosing which calculation is appropriate. Showing an awareness of the inherent assumptions behind, and the limitations of, that choice. An interpretation – in context – of the results of that calculation. And, the ability to communicate all of the above in multiple forms of expression, formulas, data tables, visuals and graphs, and yes, in written and spoken communication.
In a “Big Q” setting, what we find are focused experiences designed to establish a foundation of transferable quantitative skills. In these settings, numbers are the main course. The expectation that students reason quantitatively is pervasive and consistent. We can assess students’ skills analytically, based on that expectation. It is also here that we find the highest degrees of student anxiety and avoidance, and higher rates of D, F, and W grades that impair student success and drive attrition. So we have to get this part right. Among Big-Q examples, we see institutions revising their approaches to developmental and first-year mathematics; rethinking the curriculum and delivery of research-methods courses in the social sciences; and developing entire undergraduate and graduate programs in data science.
One of the most widely-adopted Big-Q strategies represents a convergence of two needs: on the one hand, the need to build students’ foundational quantitative literacies in their first year; and on the other, the need to reform ineffective developmental math courses that did not appropriately match all students’ roots with their trees. This is the story of a two-semester sequence of courses that build strong developmental foundations for quantitative literacy, in both root and trunk – as well as a standardized assessment of those skills.
The Carnegie Foundation’s Quantway project is an example of a widely-adopted curriculum for freshman-level, general-education quantitative reasoning. It includes both college-level modules teaching numeracy, algebraic modeling, and elementary statistical literacy in context, and developmental skills modules that support these outcomes on either a prerequisite or a corequisite basis. The Charles A. Dana Center at the University of Texas has developed a similar curriculum and, through its New Mathways Project, also provides support for institutions and for state systems of higher education to bring reform to scale.
Since these Big-Q experiences are aimed at a general audience, the skills themselves can be assessed analytically. One widely-used instrument for measuring broad-based quantitative reasoning skills is the Quantitative Literacy and Reasoning Assessment (QLRA), a multiple-choice test that asks students to use numbers in meaningful communications, to interpret graphical and tabular data, and to reason critically about uses of quantitative evidence. The QLRA has been used to assess the effectiveness of Big-Q courses, and also as a placement exam for those courses. It has also been found to correlate with important student success outcomes in the first two years, adding predictive power independent of students’ math and verbal SAT scores.
From the very beginning, social science faculty have been key drivers in the movement for quantitative literacy. Recent enrollment booms in programs like psychology, criminal justice, and political science have created new populations of students who must be prepared for success in both consuming and producing quantitative research and data analysis. This is a story of social science programs that have begun to re-scaffold those skills within their curriculum to provide their majors with foundational Big-Q experience early in their program. It is also the story of an analytic rubric for assessing those Big-Q skills using projects their students are already doing in their discipline.
Required courses in quantitative and qualitative research methods are common in social science programs. But in a lot of those programs, students do not encounter these courses until their senior year, and sometimes find – on the doorstep of graduation! – that their quantitative and statistical preparation is not up to the task.
In 2004, for example, the American Sociological Association recommended that programs require statistics and quantitative methods courses as appropriate “earlier rather than later in the major, so that advanced courses can be taught at a level that assumes students have had a foundation.” In other words, the longer students wait before Big-Q, the less time they have to USE those skills in their discipline. So by moving the introductory statistics course in the major to the freshman or sophomore level, and/or linking it with a statistical or quantitative course taken to fulfill an early math requirement, faculty teaching upper-division courses can be more successful engaging their students with quantitative research. And what social science faculty member wouldn’t want to do that?
As part of their LEAP project, the Association of American Colleges and Universities developed what is probably the most influential rubric for assessing “Big-Q” quantitative literacy, in whichever discipline it is taught. The VALUE rubric sets an analytic scale for each of the components of quantitative literacy: the interpretations and the assumptions needed to retrieve a quantitative idea from its context; the calculations and choices needed to gain an insight; and the analysis and communication needed to place that insight back into its context.
While it’s true that every college student’s future career will benefit from more quantitative fluency, the information economy also needs experts to inhabit new career paths in which Big-Q skills are central: careers in business intelligence, predictive analytics, machine learning, and information management. New academic programs have arisen to meet this opportunity, many of which organize under the new umbrella called “data science.” Their rapid development has led to incredible variety in these programs, and consensus on a curriculum is still emerging. But typically it involves foundational coursework in mathematics, statistics, and computer science, as well as partnerships with key client disciplines such as business, biology, and engineering.
Some of these programs arise as subsets of existing majors in mathematics or statistics, such as the data science specialization in Bowling Green State University’s math major. Other programs are essentially multidisciplinary, such as those at Iowa State, which offers data science credentials at the certificate, minor, and major levels. The highly marketable nature of the data science skill set has also driven undergraduate capstone experiences and industry-partnered internships, as well as pathways to advanced credentials and accelerated bachelor’s/master’s programs such as the one offered at the University of Massachusetts at Dartmouth.
One key design criterion for these programs is how they provide access for students at the introductory level. An emerging practice is to leverage existing general-education courses, such as statistics, as on-ramps to the major, integrating some basic elements of computing to prepare interested students to continue to a second “bridge” course into data science. As one math chair put it, data science is not a math degree. But it’s not NOT a math degree either. And so, some programs even eschew the traditional series of calculus-based mathematics prerequisites in favor of a deeper dive into linear and matrix algebra.
Once a student’s Big-Q foundation is laid, there’s no limit to the breadth of “small q” expressions of quantitative literacy possible in their college experience. In a “small q” setting, we find programs in general education, faculty development, and student support that help to instill students’ quantitative habits of mind through repeated encounters across the curriculum. In these settings, the numbers are not always the main course; they can be side dishes. Even desserts. Students should discover that the opportunity to wrestle with numbers presents itself far more often than does the expectation. So the tools we use to assess students’ success are more holistic; they adjust for the choices that students make to seize those opportunities, or not.
Here, we find students more at ease in their own discipline, where the choice to attend to numbers may not always be necessary for their task but can elevate a project from good to great. Among small-q examples, we see general education programs that integrate quantitative reasoning opportunities within writing courses; instructors who make a point of ensuring data is always incorporated among primary sources; and “infusions” and overlays that provide modularized quantitative experiences faculty across the disciplines can borrow for their teaching.
Students’ writing can shine a bright light on any of their critical thinking faculties. In contrast to mathematical skill, which can be evident in a symbolic manipulation, quantitative reasoning is inextricably wrapped up in the ways that we communicate. In vernacular and language. Culturally informed, socially constructed. And nothing draws those out quite like writing. Owing to the recent successes of Writing Across the Curriculum programs, college faculty are now providing more opportunities than ever for students to evince their thinking in written form. So, wisely, some quantitative literacy programs use these opportunities to also prompt students to write about quantitative information, and can use student writing to assess numeracy.
So-called “quantitative writing,” in which students are expected and supported to include analyses of quantitative information in any writing assignment, has been shown to enhance student learning by bringing them face-to-face with the messy questions and mindful choices necessary to navigate numbers in their authentic contexts.
Assessing quantitative skills in written work is correspondingly messy. But an exemplar approach is Carleton College’s QuIRK project. Before this rubric assesses the success with which students incorporated and drew valid inferences from quantitative information in their writing, it first controls for the opportunity that the assignment presented for them to do so, and then controls for the importance of the quantitative reasoning for their argument. Was it a central argument (a main course)? Or a peripheral argument (a tasty side dish)? At Carleton, this rubric was first used to assess quantitative reasoning in longitudinal portfolios of student work; but it has been adapted to many contexts and is useful anywhere that sourced, rhetorical writing incorporating quantitative evidence is assigned.
Achieving quantitative literacy across the curriculum means resisting the urge to sort disciplines, courses, and worst of all, people into “math” and “not math” categories. That’s a tendency to which math-anxious students and, let’s be honest, even some faculty and advisers can be prone. But we cannot build a more numerate world from within our math and science classrooms; we must learn to assign value to using numbers in “not math” spaces as well. Such as in the humanities.
In an early workshop on my campus, I worked with a history professor who realized that in all the years that she’d scrutinized and assigned her students a particular source text, she’d never looked closely at the many data tables that were there. What stories, she realized, were those data telling that she, the author, and most importantly, her students, might have been missing out on? Just adding an explicit prompt to her existing assignment was all she needed to bring a data conversation into her classroom.
Likewise, two of the most math-anxious student populations on campus are also two populations that will have an outsized influence on the health of our democracy: future teachers, and future journalists. Because numbers can be used to obscure a story as easily as to reveal it, journalists especially have to cultivate skeptical habits around inferences drawn from quantitative information, and there are several high-profile initiatives to support quantitative literacies in journalism and mass communication programs.
In fact, news articles can be valuable conversation-starters for exercising quantitative reasoning with any audience – as the beginning of this same video illustrated. Including news articles that have data and charts among source material, without explicitly asking students to attend to the numbers, can be a valuable way to assess their numeracy habit of mind, to address that all-important question: are students really going to use these skills when there’s not a “math person” looking over their shoulder asking them to do so?
The most successful conspiracies are those that reach out into every corner. And several programs for quantitative literacy across the curriculum have made themselves indispensable on their campus through an “infusion” into general education. In a typical infusion model, a small-q experience is incorporated into each of a wide variety of gen-ed courses that students typically take throughout their college experience. This can come as just a module, something larger than a single class period but smaller than an entire course. Students then must take a minimum number of courses that include one of those modules in fulfillment of their general education requirement. Infusions succeed in general education in part because they help faculty “add value” to the existing content they’re already teaching, rather than displacing it.
One exciting way to weave quantitative skills into the fabric of general education is to lead students into and then out of cognitive illusions. The Numeracy Infusion Course for Higher Education (NICHE), at CUNY, supports faculty to pose and to help students wrestle with what happens when your gut reaction tells you one thing, but then a careful analysis of the numbers reveals something else. This approach recognizes that the biggest challenge in getting students to think critically about the numbers that are in a text, sometimes, is just getting them to actually slow down and to read the numbers, considering not just the emotional, heuristic response they create but also the precise values and interrelationships those numbers represent. Building the habit of treating written numbers as questions rather than answers; treating numbers as rhetoric rather than authority; treating numbers as being inseparable from communication skills and information literacy. That's well worth the time spent in any general-education course. And when it comes to cognitive illusions, nothing engages students like surprise.
As the need for more quantitative literacy has blossomed, so too has the organizational infrastructure to support it. On a national level, the National Numeracy Network draws together faculty and administrators across all disciplines working on quantitative literacy, hosting an annual conference in the fall and compiling essential resources for this work through their website and through the open-access journal Numeracy. The Mathematical Association of America likewise supports math and statistics faculty with its quantitative literacy special interest group. Regional networks and conferences, some of them loosely affiliated with NNN, have also arisen to build community. Around Massachusetts these include the annual conferences of the Northeast Consortium for Quantitative Literacy (NECQL) as well as the Southeastern Massachusetts Quantitative Engagement and Literacy meeting (SEQuEL).
As important as this work is, though, progress has come more quickly on some campuses than others – because it truly does take a campus-wide commitment to meet this challenge. So let’s wrap up this video by thinking about what quantitative literacy looks like at your own college or university right now. Which of these sounds more like your institutional design and culture?
Do you see your quantitative curriculum through the keyhole of traditional school mathematics? Or do you engage faculty across disciplines to review the curriculum for these skills, and incorporate institutional research and assessment in doing so?
Do you rely solely on a math course for students’ quantitative skill development? Or do you intentionally and unavoidably incorporate that skill development across your curriculum?
Do faculty in your disciplines feel like they have to lower their expectations when they teach quantitative skills? Or are your faculty able to transparently articulate minimum standards for these skills from the beginning of their course to the end?
Do you assume that students’ quantitative skills can be adequately supported by your math tutoring center? Or do you cultivate and train academic support staff to support this specific skill set in whatever contexts it arises?
And, do your internal programs and assessments themselves model effective quantitative reasoning in their processes of continuous improvement? Or, do you have your own struggles with data-driven decision-making?
Choose the closest overall rating and don’t worry. I’m just a video recording; your secret is safe with me.
At Level 1, you are where I believe the majority of QL programs are right now: just getting started. So if you haven’t yet, try to get out among a wide variety of faculty, and ask the question, and listen. Where do students have the opportunity to use numbers in your classes? What else could you bring into your teaching if they were better equipped to do so? When we began this process at my institution, we were surprised at how much and how varied students’ opportunities to use quantitative reasoning really were in their senior-level projects in all the disciplines. But, faculty themselves often hadn’t been aware of the opportunity, and those that were either didn’t make that explicit for their students, or didn’t know how to connect students with the support needed to be successful. Creating space for those grassroots faculty conversations, for us, was an essential first step at getting out of that Level 1.
If you're at Level 2, maybe you've built some awareness on your campus of what quantitative literacy is and why it's important, but maybe not everybody is at the table yet. You might be waiting on a catalyst for change. At a lot of institutions, this comes in the form of a general education revision, or a reaccreditation. These are the moments to take a really good look at your program-level learning outcomes for quantitative literacy. Put them, and your assessment data on them, in front of stakeholders: faculty, administrators, support staff, partners from industry and in the community. Are those outcomes and results describing what we really want from our students when they graduate? Are our students still going to be able to do these things a decade from now? How will those outcomes need to be different a decade from now? Are those skills what our partner universities, our employers, and our community really need from us? Having those conversations can get you to the next level of your program.
At Level 3, you've got a mature quantitative literacy program with broad faculty ownership, strong co-curricular support, and regular assessment for improvement. Great job! But if you're like a lot of mature programs, you might feel short on institutional priority – meaning, short on resources. Be sure that you can demonstrate the causal relationship between your program's work and your students' quantitative skills, and therefore their success in college and in careers more generally. You might marry your learning outcomes assessment data to workforce trends; you might seek funding opportunities to create new programs or centers on your campus that can create and sustain pathways into skilled quantitative careers; you might support student interest and visibility through participation in high-profile competitions like DataFest. Consistently tying your work on campus to students' opportunities off campus can create the well-deserved buzz you need to take your already-established program to the next level.
And if you rated yourself at Level 4, congratulations on your robust program for quantitative literacy, and let me know when I can come and visit your campus to see how you did it. (Call me!)
Roots. Trunk. Leaves. Basic skills, a Big-Q experience, and small-q’s galore. However mature your quantitative literacy tree is, I hope that the resources in this video can help you to water it. Whatever your next steps are, I wish you the best in taking them. A more numerate world awaits.
The post Grow Up, Branch Out (Transcript) appeared first on Matt Salomone.
The post The MPG Illusion, Revisited appeared first on Matt Salomone.
In their paper on using cognitive illusions to improve quantitative literacy:
Wang, F., & Wilder, E. I. (2015). Numeracy Infusion Course for Higher Education (NICHE), 1: Teaching Faculty How to Improve Students' Quantitative Reasoning Skills through Cognitive Illusions. Numeracy, 8(2), 6.
the authors describe the following scenario, originally presented in a Science magazine article and summarized in the above slide from my Grow Up, Branch Out interactive video.
Consider a family that has an SUV that gets 10 MPG and a sedan that gets 25 MPG. Both are driven equal distances in a year. Is the family better off replacing the SUV with a minivan that gets 20 MPG or replacing the sedan with a hybrid that gets 50 MPG?
This problem is described as a cognitive illusion because many of us, thinking heuristically (that is, without stepping back and working out the details), are drawn to conclude either that the two replacements save equal amounts of fuel (since each doubles the MPG rating), or that replacing the sedan with the hybrid saves more (since a 25 MPG improvement beats a 10 MPG improvement).
However, neither is true. Doubling the miles-per-gallon economy of the less efficient vehicle results in greater fuel savings. Why is this the case? We'll look at how arithmetic exposes this to be an issue of denominator neglect, and how the way Americans think about fuel economy is essentially shaped by cultural choices.
The key observation here is about unit conversion. Converting a quantity from one set of units into another is always a matter of multiplying by the number 1, represented as a clever fraction whose numerator and denominator are equal measurements expressed in unequal units.
Fuel economy, in this view, can be thought of as a conversion factor representing an equality between an amount of fuel consumed (in gallons) and a distance driven (in miles). The SUV making 10 miles per gallon, for instance, can be represented as the “equation”
\( 10 {\rm \; miles} = 1 {\rm \; gallon} \)

This can be made into a conversion factor, then, by dividing this equation by either of its sides:
\( \frac{10 {\rm \; miles}}{1 {\rm\; gallon}} = 1 = \frac{1 {\rm \;gallon}}{10 {\rm \; miles}} \)

Where we get tripped up, when we're thinking heuristically, is in figuring out which of these two conversion factors is appropriate to the question. That is, are we more worried about multiplying something by 10? Or dividing it by 10?
Asking a crucial quantitative reasoning question — what’s changing and what’s not? — reveals that the number of miles the vehicles are being driven each year is remaining constant. So we will have the same number of miles for the SUV as for the minivan; the question is, what happens to the amount of fuel consumed? We therefore need to convert from miles into gallons, meaning we must “cancel the miles and introduce the gallons.” This means using the \(\frac{1 {\rm \;gallon}}{10 {\rm \; miles}}\) factor, or dividing by the MPG rating.
A little additional thought should persuade us that the actual number of miles driven per year will not affect our answer, so we’ll choose a convenient, round number: say, 10,000 miles. Here, then, are the results of dividing that number by each of the MPG figures to determine the amount of fuel each of these four vehicles would need to drive that far.
SUV (10 mpg): \(10\,000 {\rm \; miles} \cdot \frac{1 {\rm \; gallon}}{10 {\rm \; miles}} = 1\,000 {\rm \; gallons}\)

Minivan (20 mpg): \(10\,000 {\rm \; miles} \cdot \frac{1 {\rm \; gallon}}{20 {\rm \; miles}} = 500 {\rm \; gallons}\)

Replacing the SUV with the minivan saves \(1\,000 - 500 = 500\) gallons.

Sedan (25 mpg): \(10\,000 {\rm \; miles} \cdot \frac{1 {\rm \; gallon}}{25 {\rm \; miles}} = 400 {\rm \; gallons}\)

Hybrid (50 mpg): \(10\,000 {\rm \; miles} \cdot \frac{1 {\rm \; gallon}}{50 {\rm \; miles}} = 200 {\rm \; gallons}\)

Replacing the sedan with the hybrid saves \(400 - 200 = 200\) gallons.
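A few lines of code make the comparison easy to check. This is a minimal sketch; the function and variable names are mine, not from the original article:

```python
# Fuel consumed over a fixed annual distance: divide miles by the MPG rating.
# 10,000 miles is an arbitrary round number; the comparison doesn't depend on it.
ANNUAL_MILES = 10_000

def gallons_used(mpg, miles=ANNUAL_MILES):
    """Convert a distance in miles to gallons of fuel by dividing by MPG."""
    return miles / mpg

# Savings from each proposed replacement, in gallons per year
suv_to_minivan = gallons_used(10) - gallons_used(20)   # 1000 - 500 = 500
sedan_to_hybrid = gallons_used(25) - gallons_used(50)  # 400 - 200 = 200

print(suv_to_minivan, sedan_to_hybrid)  # 500.0 200.0
```

The division by MPG, rather than multiplication, is exactly the denominator effect described above: the bigger number on the window sticker produces the smaller effect on fuel consumed.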
So, the mere fact that the numbers we’re seeing on the screen (the MPG ratings) belong in the denominator of the conversion not only thwarts our proportional reasoning, it inverts it. Denominators, as I often tell my students, are Bizarro World: up is down, bigger is smaller, and “nothing could mean anything.”
Quantitative reasoning is a human expression of mathematical thought. As such, QR is inescapably socially constructed and culturally informed.
In this example, the cultural information that obscures our reasoning is the distinctly American habit of reporting fuel economy with the fuel in the denominator (miles per gallon). Cognitively, this has several effects.
It’s just like the classic advertising story about A&W’s failed third-pound burger:
In the 1980s, A&W attempted to compete with McDonald's "Quarter Pounder" by introducing a third-pound burger.
However, it didn't sell because Americans thought 1/4th of a pound was larger.
— UberFacts (@UberFacts) February 7, 2019
Denominator neglect is the source of all manner of cognitive illusions. So, why do I describe this as a cultural phenomenon in the U.S.?
Because European regulatory agencies have chosen to avoid it: they report fuel economy with the fuel in the numerator instead of the denominator, for instance turning the dimensional “equation”
\(100\; {\rm kilometers} = 24 \; {\rm liters}\)

into the conversion factor
\( \frac{24\; {\rm liters}}{100\; {\rm kilometers}} \)

instead of the reciprocal. This approach sacrifices the “more is better” cognition, because a smaller figure — using less fuel over an equal distance — represents greater efficiency. But it avoids denominator neglect, it shines the spotlight on the amount of fuel consumed as the driver’s main independent variable, and it re-energizes our proportional reasoning.
In the EU, for example, this exercise would not create a cognitive illusion.
Consider a family that has an SUV that uses 24 L per 100 km and a sedan that uses 10 L per 100 km. Both are driven equal distances in a year. Is the family better off replacing the SUV with a minivan that uses 12 L per 100 km, or replacing the sedan with a hybrid that uses 5 L per 100 km?
Now, because the fuel usage is in the numerator instead of the denominator, and we are multiplying by these numbers in our conversion rather than dividing, our proportional reasoning should give us a more sensible insight. Let’s imagine 16,000 km of annual driving, which is approximately the same as 10,000 miles (though again, that specific amount will not affect the conclusion):
SUV (24 L / 100 km): \(16\,000 {\rm \; km} \cdot \frac{24 {\rm \; L}}{100 {\rm \; km}} = 3\,840 {\rm \; L}\)

Minivan (12 L / 100 km): \(16\,000 {\rm \; km} \cdot \frac{12 {\rm \; L}}{100 {\rm \; km}} = 1\,920 {\rm \; L}\)

Replacing the SUV with the minivan saves \(3\,840 - 1\,920 = 1\,920\) L.

Sedan (10 L / 100 km): \(16\,000 {\rm \; km} \cdot \frac{10 {\rm \; L}}{100{\rm \; km}} = 1\,600{\rm \; L}\)

Hybrid (5 L / 100 km): \(16\,000 {\rm \; km} \cdot \frac{5 {\rm \; L}}{100 {\rm \; km}} = 800 {\rm \; L}\)

Replacing the sedan with the hybrid saves \(1\,600 - 800 = 800\) L.
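The same check in code, now with fuel in the numerator, turns the conversion into a multiplication. Again a sketch, with names of my own choosing:

```python
# EU-style economy figures put fuel in the numerator (liters per 100 km),
# so converting a distance to fuel consumed is a multiplication.
ANNUAL_KM = 16_000

def liters_used(l_per_100km, km=ANNUAL_KM):
    """Convert a distance in km to liters of fuel via an L/100km rating."""
    return km * l_per_100km / 100

suv_to_minivan = liters_used(24) - liters_used(12)  # 3840 - 1920 = 1920
sedan_to_hybrid = liters_used(10) - liters_used(5)  # 1600 - 800 = 800

print(suv_to_minivan, sedan_to_hybrid)  # 1920.0 800.0
```

Because the rating is now multiplied rather than divided, halving the rating visibly halves the fuel used: the proportional reasoning the L/100km convention preserves.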
Shockingly, the answer is still the same as our MPG example! (In part, this is because the fuel economy figures I chose are approximately equivalent to the originals.) But the proportional reasoning "works" in that we can clearly see that halving the fuel consumption rates has indeed halved the amount of fuel consumed in either case. It's just that the SUV uses so much more fuel in a year than the sedan, since both are driven equal distances, that half of its total consumption is still significantly greater than half of the sedan's. We've traded denominator neglect for a form of base rate neglect instead. But at least there's a glimmer of valid proportional reasoning here.
Because of the precise nature with which numbers carry meaning, correct quantitative insights almost always require engaging our slower, analytical cognitive machinery — rather than relying on the quick, reflexive answers our relational minds provide by default.
But our “System 2” thinking requires significant effort to activate, and that means that the lazy, heuristic System 1 can occasionally catch out even more numerate people. Trying to power quantitative skills with heuristic thinking is like trying to build a campfire with only newspaper and lighter fluid: bound to generate more light than heat, and leave everyone out in the cold.
The post Grow Up, Branch Out: Quantitative Literacy for the 21st Century appeared first on Matt Salomone.
This H5P interactive video is best experienced full screen.
Click here to view the non-interactive version (33 mins.) via YouTube. The full transcript of this video is also available.
Linked Resources
The post Anti-Numeracy: Valid, But Not Okay appeared first on Matt Salomone.
Here’s an example, with author omitted. (These gags are ubiquitous and I’m not trying to “cancel” anyone!)
I am a trained mathematician right up until I have to calculate a restaurant tip. (In reply to the below)
I am a trained microbiogist right up until a cookie falls on the ground.
— Susanna L Harris (@SusannaLHarris) September 30, 2019
And while I know these are tongue-in-cheek jokes (so please don’t @ me), I have to ask: Who laughs? Who’s supposed to laugh? And what happens when they do?
I suggest three things.
You might only have gotten one of these three, but #MathTwitter felt strongly otherwise:
Resisting the urge to rant RE: the "I'm a mathematician who can't do simple arithmetic" meme. Do I:
— Matt Salomone (@matthematician) October 1, 2019
So let’s dive in, shall we?
The anti-intellectual tradition in American popular culture is an old one. Richard Hofstadter in the linked book, written in the 1960s, sees the phenomenon as a consequence of the U.S. educational system’s successful democratization of knowledge. The more available education was, the more devalued its currency may have become — and the average American had fewer excuses to remain uneducated. In this way, anti-intellectualism became a projection of one’s own shame onto elites and experts:
[T]hey found it easier to reject what they could not have than to admit the lack of it as a deficiency in themselves.
The result is a uniquely American strain of Know-Nothingism for which it is easy to find parallels in today’s public discourse that are so eerie, I suspect Hofstadter may have had a time machine. He wrote in 1963! that:
[t]he citizen cannot cease to need or to be at the mercy of experts, but he can achieve a kind of revenge by ridiculing the wild-eyed professor, the irresponsible brain truster, or the mad scientist, and by applauding the politicians as they pursue the subversive teacher, the suspect scientist, or the allegedly treacherous foreign-policy adviser. There has always been in our national experience a type of mind which elevates hatred to a kind of creed; for this mind, group hatreds take a place in politics similar to the class struggle in some other modern societies.
English literacy in the U.S. is all but universal now, and you will find few holdouts who take pride in making light of their own lack of basic language, grammar, and spelling skills. Apart from this joker, of course:
Me: Spelling bee champion in fifth grade
Also me: Can't use a UNIX shell without 'alias claer clear' https://t.co/5E0WDiSUGc
— Matt Salomone (@matthematician) September 10, 2019
But mathematics and numeracy skills are a whole different story. Far from universal, basic numeracy skills in the U.S. are increasingly concentrated in fewer and fewer hands. The OECD’s Survey of Adult Skills included a test of basic numeracy for the first time in 2013, and among the twenty-five developed countries in which the survey was administered, U.S. adults ranked 23rd — ahead of only Italy and Spain.
And, as the labor market and the Big Data economy have created a wealth of new, well-compensated jobs for data professionals working in fields like business intelligence, predictive analytics, and data science, the net effect is that numeracy is in danger of becoming an elite skill. The one percent of the most numerate now wield more economic and political power over the ninety-nine percent who are less numerate than ever before. That’s a problem, especially when numberphiles are no less likely — in fact, more likely! — to deploy numbers to mislead, misinform, and motivate their biases.
Is it any wonder, then, that I can’t do math has become the new shibboleth of the populist anti-intellectual? The phrase is almost invariably deployed to either
It is that second usage that is, I suspect, most common – and most pernicious. As Hofstadter points out in the 1960s, we are invariably at the mercy of experts. And as Cathy O’Neil has memorably pointed out in the 2010s, the experts about whose ways and whose reach we have the least understanding are the “quants,” the professionals who collect, design, and analyze vast quantities of data on us that impact every facet of our lives (and, impact less privileged folks disproportionately more). In O’Neil’s view, being one of You, opting out of one’s obligation to be numerate at minimum as a defense against exploitation, comes at a steep social cost that perpetuates historical inequities.
So why do mathematicians – who are often the first to decry the culture’s antipathy toward all things mathematical – join the chorus, disclaiming their own basic arithmetic skills?
One reason is exactly because it is a soft bid for solidarity with their audience. “It’s okay; I might be a mathematician but I’m not like that mean Mr. Horton who called you out in third grade for not knowing your times tables.” And I’m sure they don’t mean in those moments to normalize anti-intellectual or anti-numerate sentiments for their audience. But there’s a second reason that mathematicians as a group might want to diminish the importance of basic arithmetic and mental computation skills.
"Many persons who have not studied mathematics confuse it with arithmetic and consider it a dry and arid science. Actually, however, this science requires great fantasy."
— Sophia Kovalevsky
— Brendan W. Sullivan (@professorbrenda) October 2, 2019
Every social movement has its radical elements. (I’m barely restraining a good square-root pun here.)
Only extreme positions and extreme methods, they reason, are capable of shifting the Overton window in the direction of the change their larger movement wants. I’m thinking here about the contributions of people like Huey Newton and Malcolm X to the movement for civil rights, or the ways that some people justify disruptive protests and property damage in movements for environmental or social justice. There will always be those who will reach for a mile to move the public an inch.
Mathematicians who tout their lack of mental arithmetic skills, likewise, serve as radical exemplars of a cause: the cause of educating the wider culture about the nature of mathematics itself. Namely, that mathematics is vastly more than arithmetic and its kin subjects; it is an expansive way of thinking and reasoning, analytically and logically, with rigorous self-consistency and fastidious attention to detail. It is a discipline in which many of its practitioners find enormous beauty, taste seemingly transcendent truths, and playfully explore the limits of their imaginations. For them, mathematics is a true expression of human flourishing. And they don’t understand why the rest of the world doesn’t see it that way.
Or maybe they do understand, and that is the very source of their frustration.
Many students, after all, do experience school mathematics as the “dry and arid science” Kovalevskaya describes. Hemmed in as it is by a narrow arithmetic-geometry-algebra-calculus curriculum, and taught (especially in the early grades) by teachers who themselves harbor deep anxiety, trepidation, and skepticism of the value of the subject, it’s no surprise that the mathematics encountered in a great many school classrooms leaves little room for the kinds of playful, appreciative, beauty-seeking practices that professional mathematicians enjoy. The few students who are fortunate enough to have teachers and/or parents who nurture those practices with them — such as by playing with logical games or puzzles that often have very little to do with numbers or arithmetic at all — are more likely to persist and be successful in mathematics and science later in their educational life.
Worse yet, professional mathematicians do not generally find their colleagues in the academy to hold any less blinkered a view of the nature of our discipline. It is becoming increasingly possible in modern higher education for students to complete bachelor’s degrees having never had an experience of what “real” mathematical thought and practice looks like. There are probably several structural reasons why this is more the case now than it was fifty years ago; in my view they are, from most to least culpable:
There is a kind of codependent relationship between mathematicians and the university when it comes to teaching. We may want to make authentic mathematical discovery a part of every student’s education, but in the face of an academy that (rightly) sees innumeracy as a crisis and (wrongly) mathematics as its solution, we are at times too content to accept whatever role we are permitted to have in general education. And if we cannot introduce the wider student population to the true meaning of mathematics in our college curriculum, we can at least push back on this injustice by pointing out that our own deficits in basic arithmetic skill do not impair our ability to practice “true” mathematics. It may be passive-aggressive, but it keeps food on our tables.
Yet, while we as highly-educated people with (what were at some point in history) prestigious and stable careers can seemingly “get away” with deficient number skills, we ought to keep in mind that that is not true outside our bubbles. That brings me to my last point.
Quantitative literacy intersects with equity and social justice in more ways than ever. With numerical data being collected and stored at a breakneck pace, our lives are at the mercy of a tidal wave of numbers on a daily basis.
But, not all of our lives are affected equally. Cathy O’Neil’s book Weapons of Math Destruction has given perhaps the most popular oxygen to public skepticism about automation, algorithms, and Big Data. She argues that many data-powered algorithms, from credit scores to college admissions, far from being objective forces for equity, offer no more transparency or accountability than human decisions do, and in fact replicate social biases.
Algorithms don’t make things fair if you just blithely, blindly apply algorithms. They repeat our past practices, our patterns. They automate the status quo. That would be great if we had a perfect world, but we don’t.
To be innumerate in today’s world is to forsake an important defense mechanism against commercial, political, and financial exploitation. More privileged people, who enjoy more structural defenses against this kind of exploitation, can afford to lay this shield aside. But to normalize that behavior is to uphold an unequal status quo.
It takes a certain basic arithmetic skill, for example, to comprehend the scope of income inequality in our economy well enough to organize against it. It takes a certain basic arithmetic skill to grasp the corrosive effects of ever-more-regressive tax rates. To understand why usurious loans are never a good idea. To advance from wage employment to salary. These might not be critical issues for people in higher social strata, but to pretend they are unimportant is to narrow the pipeline into the middle class.
So, innumeracy exacts a high social cost in the United States, and that’s a bill that is coming due more and more each day. But does that mean all of us need to be able to calculate a tip on that bill off the tops of our heads? Is mental arithmetic really that important? Do we all need to go back to our times tables and speed tests?
No. Of course everyone will stumble over simple arithmetic computations done on the fly, because there are always moments when our rapid-thinking, relational cognition makes mistakes. The purposeful, effortful thinking required to do accurate mental arithmetic is always less likely to be used. And, as Daniel Kahneman writes in Thinking, Fast and Slow, that means we are less likely to assign importance to it.
People tend to assess the relative importance of issues by the ease with which they are retrieved from memory.
And mental arithmetic per se is admittedly less important nowadays, given the sheer ubiquity of computers and the instant ability nearly all of us have to use that technology at any given moment.
But for a public whose tendency is to conflate mental arithmetic, numeracy, and mathematical skill, throwing out the bathwater casts out the baby. When we devalue mental arithmetic, we risk — in their eyes — devaluing both numeracy writ large and the discipline of mathematics we hold so dear.
That’s why, despite my both-sides-iness in this post, I contend that projecting an image of innumeracy, even in jest, does more social harm than good. As long as a culture persists that is fearful, suspicious, and dismissive of all things numerical and mathematical, gags like these only add fuel to that fire.
We cannot stand on the deck of a ship and brag to a drowning person that we never learned to swim.
The post Boom, Bust, Hockey Stick: Unanimity in the U.S. Supreme Court since 1945 appeared first on Matt Salomone.
But, the data suggest two interesting trends in Supreme Court unanimity over the past 75 years: a steady boom-and-bust cycle about every decade, and a significant Roberts Court uptick in the second derivative suggesting that year-over-year, the consensus about consensus may be disappearing.
The recent acceleration in U.S. political and ideological polarization has heightened voters’ sensitivity to consensus — or lack thereof — in our political discourse. So it’s only natural to ask whether the U.S. Supreme Court, whose perception as apolitical is rapidly evaporating, is becoming increasingly polarized in its decision-making. Have the Roberts Court’s four conservative-bloc justices (Alito, Gorsuch, Kavanaugh, Thomas) and four liberal-bloc justices (Ginsburg, Sotomayor, Breyer, Kagan) been likely to retreat to their ideological corners in the Court’s decisions? Has there been, as Torrez suspects, a significant increase in split-decision opinions (such as 5-4) and decrease in unanimity? Are the justices as unlikely to find agreement on the politically charged issues of the day as is the rest of our political class?
Torrez can be forgiven for suspecting so – and I wonder if one reason for it is an availability heuristic bias. The Court, on average, makes between 80 and 90 decisions on merits each year, and many of these do not make national news. Those that do make news, though, are those most hotly contested along ideological lines. Whole Woman’s Health. Obergefell. Citizens United. Of course Bush v. Gore. Besides having hot political and public controversy in common, these decisions also have in common that only five justices voted with the majority. In other words, it might feel like there are more split decisions because so many of the first decisions that come to our mind were split.
To test whether there is more to this suspicion than an availability effect, I wanted to look at a data set of Court decisions as far back into history as I could find (without straining my fall sabbatical). The indispensable SCOTUSBlog has compiled “stat pack” reports on each Supreme Court term since 1995, and my preliminary analysis used data from their reports, having roughly ten years’ each of data from the Rehnquist and Roberts courts.
Then, as I will now, I operationalized “polarization” using the number of dissenting votes as a proxy. Naturally, with a maximum of nine justices voting, this number of dissents ranges from 0 to 4, with zero serving as my marker of “unanimity” or consensus and four as the maximum level of “polarization” or dissent. Most Court decisions have nine voting justices, though a few have fewer, for various reasons — I chose not to compensate for this effect since the number of cases with fewer than nine votes is relatively small.
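As a concrete sketch of that operationalization, a tally of zero-dissent decisions per term could look like the following. The sample rows are invented for illustration only; they are not drawn from the actual database.

```python
from collections import Counter

# Each decision is (term_year, n_dissents), where n_dissents runs from
# 0 (unanimous) to 4 (maximally split). Invented sample rows:
decisions = [
    (2016, 0), (2016, 0), (2016, 4),
    (2017, 0), (2017, 2), (2017, 4), (2017, 4),
]

def unanimity_share(rows):
    """Fraction of each term's decisions that drew zero dissents."""
    total = Counter(term for term, _ in rows)
    unanimous = Counter(term for term, dissents in rows if dissents == 0)
    return {term: unanimous[term] / total[term] for term in sorted(total)}

shares = unanimity_share(decisions)  # 2016: 2 of 3 unanimous; 2017: 1 of 4
```

Tracking this share term by term is what makes boom-and-bust cycles in unanimity visible at all.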
That preliminary analysis did not turn up evidence that polarization had increased in a significant way in the ten years after Roberts became chief justice relative to the ten years before.
But I wasn’t satisfied with only ten years on either side. I want to know how the Roberts Court stacks up against the rest of the 20th century. So I reached for a larger data set, clued in by a commenter on my previous post. (Thanks to you, commenter!)
Well – to be honest I didn’t follow his advice at first. Instead I used my institutional membership in the wonderful ICPSR database to locate what, in the years before the internet, was probably the definitive data set on the Supreme Court:
Spaeth, Harold J. UNITED STATES SUPREME COURT JUDICIAL DATABASE, 1953-1997 TERMS [Computer file]. 9th ICPSR version. East Lansing, MI: Michigan State University, Dept. of Political Science [producer], 1998. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 1999.
Spaeth’s database had everything I would need and much, much more: you could even use it to investigate questions like, which justices tended to vote in concert with which other justices? But, its data only ran up to 1997, so at first I attempted to use SCOTUSBlog’s data to fill in the years 1998–2018. That didn’t work; there were inconsistencies in the numbers of cases recorded in the years in which the two data sources overlapped. (Spaeth records many more decisions each year, even on its coarsest unit of analysis.) So I scrapped the idea of merging the two data sources and slunk back to the clue provided by my blog commenter, leading me to the Washington University Law Library.
The Washington University Supreme Court Database, it turns out, appears to have been initially seeded by Spaeth’s data (indeed, it makes explicit references to Spaeth to assist anyone comparing the two). But, it spans the years from 1945 through 2018. I’d found my source. The smallest such data set, which counts each Court citation exactly once (multiple memoranda or decisions on the same matter are excluded), includes 8,966 decisions since 1945. Its numbers matched most closely to SCOTUSBlog’s summaries in the years the latter analyzed, but still there were a few more cases in the WU database each year than appeared in the blog’s stat packs.
While I did use SPSS to recode its variables and extract only those I needed, I used Excel for analysis. (Being on sabbatical, my remote connection to university SPSS software is cumbersome. Please poke fun at me in the comments below.)
After coding a variable to register the number of dissenting votes in each case, I focused on the average number of dissents in merit cases each year. Because this average “jitters” frequently on a year-to-year basis, the blue line displayed below is a smoothed four-year moving average taken “backward” (so, e.g., the data point for 2016 is an average of cases in the years 2013 through 2016).
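A trailing moving average of this kind can be sketched as follows (pure Python, with made-up yearly means; for the earliest years, where a full four-year window isn’t yet available, this version averages whatever years exist — the original analysis may have handled the start differently):

```python
def trailing_mean(series, window=4):
    """Backward moving average: the point for index i averages
    series[i-window+1 .. i], shrinking the window at the start."""
    smoothed = []
    for i in range(len(series)):
        start = max(0, i - window + 1)
        chunk = series[start:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

yearly_avg = [1.9, 1.5, 1.6, 1.8, 1.2]  # hypothetical per-year dissent means
smoothed = trailing_mean(yearly_avg)
# smoothed[3] averages the four values for "years" 0-3
```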
The orange line, plotted on the same scale, depicts the five-point, centered, discrete second derivative of the number of dissents. I say more about this quantity in my preliminary post.
So, what do I make of sixty-plus years’ worth of data?
On the question of whether the Roberts court has seen a precedent-breaking increase in dissension, my “Andrew is wrong” diagnosis stands. If anything, the past several years have seen more consensus (a lower average number of dissents per case) than any other period since World War II. This is juiced in part by the surprisingly unanimous term of 2016 in which the Washington University database recorded only three 4-dissent opinions and forty-one unanimous decisions.
That 4-dissent count reflects a difference in how SCOTUSBlog and the WU database record vote tallies: SCOTUSBlog, by contrast, records seven 4-dissent decisions in 2016. (I’ve not looked closely enough at the cases themselves to uncover why the sources disagree.) Presumably, the WU database’s methodology has remained consistent each year since 1945, so since I’ve not used SCOTUSBlog’s data here, this discrepancy ought not affect the analysis.
Far from a sudden increase in dissent, placed in historical context, the Roberts court has so far been marked by a dissension downswing – but one that, if trends continue, is ready to reverse.
What struck me about the historical record is how consistent the levels of dissent are across these six decades: hovering around an average of 1.6 dissents per case and scarcely wandering further than 0.4 votes away from this mean before reverting back. This reversion has come in a steady cadence, taking about 15 years to oscillate from maximum to minimum and then another 15 back to maximum. Consensus on the Supreme Court appears to ebb and flow in a predictable thirty-year-long oscillation.
I used spectral analysis to quantify this behavior. Since this data appears to begin in 1945 at the top of a crest and end in 2018 at the bottom of a trough, I used the odd-half-integer exponentials \(\varphi_k(t)=e^{i(k+\frac12)\pi t / 73}\) as a basis for a discrete Fourier transform on the data. This found a modest peak at \(k=5\), and taking only this and the constant term (\(k=0\)) we obtain the oscillation shown as the dashed cosine curve in the plot:
\(d(t) = 1.614 + 0.180 \cos \left( \frac{5.5\pi}{73}t - 1.126 \right)\),
with \(t=0\) marking the year 1946. The wave number \(k=5\) gives an oscillation with a period of \(2 \cdot 73 / 5.5 \approx 27\) years.
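With the frequency fixed in advance, the amplitude and phase of a fit like this can be recovered by ordinary linear least squares on cosine and sine components. Here is a sketch with synthetic data standing in for the yearly dissent series (NumPy’s `lstsq` is assumed available; the seed, noise level, and variable names are all illustrative):

```python
import numpy as np

w = 5.5 * np.pi / 73                     # the fixed k = 5 frequency
t = np.arange(73, dtype=float)

# Synthetic yearly series: the fitted curve from the text plus noise.
rng = np.random.default_rng(0)
d = 1.614 + 0.180 * np.cos(w * t - 1.126) + rng.normal(0, 0.05, t.size)

# Fit d(t) = c + a*cos(wt) + b*sin(wt); then a*cos + b*sin = A*cos(wt - phi).
X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
(c, a, b), *_ = np.linalg.lstsq(X, d, rcond=None)

A = np.hypot(a, b)        # amplitude estimate
phi = np.arctan2(b, a)    # phase estimate, so d ~ c + A*cos(w*t - phi)
```

The identity \(a\cos\omega t + b\sin\omega t = A\cos(\omega t - \varphi)\), with \(A=\sqrt{a^2+b^2}\) and \(\varphi = \operatorname{atan2}(b, a)\), converts the linear fit back into the amplitude-phase form quoted above.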
If this trend continues, then we would expect to see polarization increase over the next decade as dissension swings back upward from the relative trough in which the Roberts court now finds itself. Thinking about the political moment we appear to be in these days, this feels like a man-bites-dog proposition to me; but, of course, one we would then expect to see begin to reverse sometime in the 2030s.
The one Roberts court trend that was notable in my preliminary post — a sudden uptick in the dissent-over-time second derivative — remains notable even in the longer context of history.
The first derivative (the slope of the blue curve in my chart) measures the direction and rate of change in unanimity: it’s positive when dissent is increasing year-over-year and negative when it’s decreasing. Since the blue curve jitters up and down frequently over the decades, you can convince yourself that this first derivative bounces between positive and negative frequently. There’s not much to note in the trend of the first derivative over time; I don’t see, for example, a sustained period of positive first derivative that would suggest a steady increase in polarization in the Roberts court.
The second derivative, meanwhile, measures what I call the “whiplash effect”: how rapidly are the Supremes changing course on unanimity? To use an airplane metaphor, the second derivative measures the position of their flight stick: pulled way up when it’s positive or pushed way down when it’s negative.
And in the past five years they have yanked that stick farther back than at any point in the history of this data. Over the last five years the Court has been changing its mind on consensus, turning toward polarization, at a quicker rate than at any point since World War II.
One look at the last five years of data shows what I mean. 2014 was about as polarized (average 1.81 dissents per case) as was 2018 (1.78 dissents per case), but in between there was a precipitous decrease toward consensus (to a historic low point in 2016) and then a rapid rebound. All the passengers on this polarization airplane have spilled their drinks.
Because I used a centered five-point second derivative estimate,
\( f^{\prime\prime}(x_i) \approx \frac{1}{12} \bigl( -f(x_{i-2}) + 16f(x_{i-1}) - 30f(x_i) + 16f(x_{i+1}) - f(x_{i+2}) \bigr)\),
the course change over this five-year period becomes a single data point in the second derivative, attached to the year 2016, the last point on the orange second derivative graph. Note also that the orange curve is the magnitude of the second derivative only; negative signs have been discarded so that only the rate of acceleration is shown and not the direction.
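As a sanity check on the stencil itself, here is a direct implementation with a unit step of one year. On an exact parabola \(f(t)=t^2\) the five-point estimate recovers the constant second derivative \(f^{\prime\prime}=2\) exactly:

```python
def second_deriv_magnitude(f):
    """|f''| by the centered five-point stencil (step h = 1), defined
    only where two neighbors exist on each side of the point."""
    out = []
    for i in range(2, len(f) - 2):
        d2 = (-f[i-2] + 16*f[i-1] - 30*f[i] + 16*f[i+1] - f[i+2]) / 12
        out.append(abs(d2))
    return out

parabola = [t * t for t in range(7)]  # f(t) = t^2, so f'' = 2 everywhere
estimates = second_deriv_magnitude(parabola)
```

Each interior year of the smoothed series gets one such magnitude; two years at either end are lost to the stencil’s width.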
Note that this data has been smoothed in three ways: the four-year backward moving average, the centered five-point second-derivative stencil, and the truncation of the Fourier fit to two terms.
Andrew was wrong. (There. I just had to say it one last time.) The Roberts Court has not been an era of precedent-breaking polarization in how the Supreme Court justices vote on merit cases. But, if this analysis is to be believed, we are right now at the nadir of a downswing toward unanimity and have begun to rebound toward dissension – snapping back perhaps more rapidly than at any other point in this historical record.
I put more stock in the cyclical oscillation found here – the roughly thirty-year-long wave back and forth toward more consensus, then more polarization, and back again – than in the Roberts Court “hockey stick.” For one, the boom-and-bust cycle calculation takes all sixty-plus years of data into account, with each year’s average used on its own. It’s global behavior in this data. The “hockey stick” is a local phenomenon, created by only a small handful of data points (namely, the five four-year moving averages ending in 2014 through 2018). Because it originated from the moving averages, the year-to-year jitter has been muted — but whether the increase is statistically significant in the time series is a question I didn’t address. (Nor indeed do I know how to address it offhand.)
And it is entirely possible, maybe even likely, that the hockey stick is explained not by polarized behavior on the Court but rather by the unusually unanimous years of 2015-2016, two of the most consensual Court terms in this data. Indeed, the levels of dissent in 2014 and 2018 (roughly 1.8 dissents per case) that bracket this five-year period were only slightly above the historical average (about 1.6). So it’s not that the polarization got anomalously high in this period; it’s that, for a brief time, it fell anomalously low.
But this hockey stick does partially scratch my – and maybe Mr. Torrez’s – confirmation bias itch. While the Roberts Court is not significantly more polarized, it appears to be headed in that direction. To mix one last metaphor, if this hockey stick is to be believed, the wheel on the S.S. Dissent has been decisively turned toward polarization in the past several years, and the ship is rapidly listing back away from the consensus of 2015-2016 and toward more dissension.
That “feels right” in today’s hyper-partisan world. But it will take a few more years to know for sure if this change is real or ephemeral.
My appreciation is due to the Opening Arguments podcast team – Andrew Torrez, Thomas Smith, and their production colleagues – for putting out a terrific show. It’s frequently hilarious, always intellectually honest, and inescapably informative in these tumultuous legal and political times. I’ve always been a math aficionado, but only recently have I caught the law bug, and it was hanging around their podcast that did that for me. Check it out and support their show on Patreon if you are able.
Thanks also to Tom Williams whose comment on my preliminary analysis pointed me to the excellent Washington University database. Not only was it perfect for this work, I’m sure it’d be a great data source for teaching projects suitable for statistics and quantitative literacy courses at any level — particularly for pre-law students, political science majors, or any groups of students interested in trends in the judiciary.
Lastly, it will be evident to any data scientist or statistician who reads these posts that I am neither. So if you’re mortally wounded by my analysis or by any of the choices I made, let me offer my hobbyist’s apology. The great thing about an open, public database like WU’s is that you can download it and do your own analysis too. I hope you’ll share your results with me when you do, either in the comments below or via social media.
The post Boom, Bust, Hockey Stick: Unanimity in the U.S. Supreme Court since 1945 appeared first on Matt Salomone.
The post Is Supreme Court Unanimity Vanishing? appeared first on Matt Salomone.
My thesis is this, and the first half is pretty well established in the law review literature: that Supreme Court cases throughout our history have had a reverse-normal distribution. That is, they have predominantly been 9-0 decisions and tailing off toward 5-4. […] The mode of Supreme Court decisions has been 9-0. My thesis is that the distribution of Supreme Court decisions can be mathematically shown to have changed demonstrably beginning in the 1980s, and then beginning sharply with the Roberts court in 2007.
Using the “stat pack” reports compiled on SCOTUSBlog, which go back to 1995, I claim that the answer is no. While it remains true that the unanimous 9-0 decision (counting both those with and without dissenting opinions) has been the modal outcome in cases decided on the merits each year since 1995, I find that the mean number of dissenting votes on merit cases has not (yet) markedly increased in the Roberts court.
According to these data, the most polarized year in the past twenty was 2008. That year had the narrowest gap between the number of unanimous cases (26) and the number of 5-4 decisions (23) and was the only year in this data in which the average number of dissenting votes exceeded two (2.04). By contrast, lost in the political Sturm und Drang of October 2016 was the fact that the Supreme Court term was the most unanimous of these past twenty, with 41 unanimous decisions and only seven 5-4 decisions contributing to an average of less than one dissenting vote per decision (0.97).
Beyond that, the trend since 2007 has not hockey-sticked either up or down, neither toward unanimity nor toward polarization. So if there is a story in the second half of this data that is different than the first, it could be found in the “zig-zagging.”
Prior to 2007 in this data, the levels of unanimity in the court appeared fairly stable year over year. There were periods of gradual and probably statistically insignificant increases in polarization (1998-2001, 2002-2004, and 2006-2008) interrupted only by two sudden drops toward unanimity in 2002 and 2005. After 2008, however, the steadiness seems to evaporate and we see larger up-down-up-down swings in unanimity (2012-2013-2014-2015 being the most notable).
So perhaps the notion of unanimity in the Court is not disappearing but is becoming more fragile. One way to measure these “changes of direction” is to estimate the magnitude of the second derivative of the average number of dissenting votes. This captures the zig-zagginess of the data: it is zero when the data increases or decreases at a constant rate (i.e., in a straight line) and is largest at moments of sharpest reversal in direction, such as when the sharp increase from 2013-2014 was followed by a sharp decrease from 2014-2015.
I used a five-point estimate of the second derivative, \(f^{\prime\prime}(x) \approx \displaystyle\frac{-f_{n-2} + 16f_{n-1} - 30f_n + 16f_{n+1} - f_{n+2}}{12}\), to measure this volatility. (Since the data extended back to 1995, there was enough information for a centered five-point estimate for each year 1998 through 2016 shown on the chart.)
The dashed line on the chart shows the magnitude of this second derivative, and to my eye there is the beginning of an increase in this quantity under the Roberts court. The swings in unanimity from year to year appear to have become larger and more frequent. It is now less likely that recent levels of unanimity or polarization are predictive of future levels.
To use a metaphor, imagine driving a car whose gas pedal accelerates the court toward polarization and whose brake pedal slows it toward unanimity. The 2000s, when this second derivative was smaller, were a time of light feet on these pedals, neither accelerating rapidly toward polarization nor away from it. Since 2008, by contrast, there has been a heavy foot on these pedals, and the direction of movement toward or away from unanimity has changed frequently and swiftly. Time will tell in the next several years whether this general increase continues. Court watchers and fans of either unanimous or partisan courts are advised, if this trend continues, to invest in protective neck collars against whiplash.
This was a fun short exploration. I’d be interested in whether the longer historical perspective that Torrez suggests is borne out by data extending further back. It’d also be nice to go the next step toward his distributional hypothesis: for example, by computing the skewness of these distributions over time to assess for a shift from right-tailed (unanimity bias) to left-tailed (polarized bias).
The variance and skewness of the dissenting vote distributions don’t tell a story dissimilar to the above:
The variance is a measurement of the dispersion or “spread” of each distribution, and is lowest when the data is most clustered around its mean. Here it is lowest in 2016 when the unanimous 9-0 decisions dominated the court (41 decisions had no dissenting votes; only 28 decisions were not unanimous this year). There doesn’t appear to be much to learn from where variance was highest here — these would be the years in which there was the most variety in court splits ranging from 9-0 to 5-4.
There is perhaps a more interesting story in the skewness here, however. Skewness is a statistical quantity that most closely addresses Torrez’s hypothesis about bias-reversal: it is zero for distributions that are symmetrical about their mean, positive for distributions with long right-tails (i.e. which are “heavier” to the left of their mean), and negative for distributions with long left-tails (“heavier” to the right of their mean).
In our distributions, unanimity (zero dissenting votes) is the leftmost outcome on the scale while maximal polarization (four dissenting votes) is the rightmost. So, when our distributions have positive skew they are biased more toward unanimity; when they have negative skew they are biased more toward polarization. Put another way, positive skew means the bulk of the data sits below the mean with a longer tail stretching above it; negative skew means the reverse.
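The sign convention is easy to check on a toy dissent distribution. (The simple moment-based estimator below is an assumption on my part; the original analysis doesn’t say which skewness formula it used, and corrected sample versions differ by a factor depending on \(n\).)

```python
def skewness(xs):
    """Population (biased) skewness, g1 = m3 / m2^(3/2)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

# A unanimity-heavy term: mass piled at 0 dissents, long right tail.
consensual = [0]*41 + [1]*10 + [2]*8 + [3]*3 + [4]*7
# Its mirror image: mass piled at 4 dissents, long left tail.
polarized = [4 - d for d in consensual]
```

Mirroring the distribution about the midpoint of the 0-4 scale flips the sign of the skewness exactly, which is the bias-reversal Torrez’s hypothesis is about.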
By this measure, the only year in the past twenty with a bias toward polarization (negative skew) was 2008 — also the year of the highest average dissents per case in this data. But 2018 was a very close second! This might get at the feeling that court watchers have about last year’s decisions: 2018’s vote-split distribution came closer to favoring split decisions over unanimity than that of any other year except 2008.
Of course, the caveat with all of this analysis is that it is statistically somewhat simplistic, and any one year on its own is not evidence of a trend. But much as the recent inversion of the bond yield curve may portend recession on the horizon, if last year’s drop in skewness is a hint of things to come, it’s possible that we may be on the cusp of an era of Court decisions biased more toward splits than toward unanimity. Let’s see what we can learn from this fall’s season.
Any suggestions or tips? Give me a shout on Twitter.
The post The Visual Syllabus (2019 National IBL Conference Poster) appeared first on Matt Salomone.
About three years ago, concomitant with my wholesale switch to standards-based grading, I also set aside the well-worn course syllabus template that I’d used for all my courses and set out, from a blank page, to design a syllabus my students would find worth reading. The result is a colorful, four-page visual syllabus that is now the key artifact of my teaching.
Few college teachers are trained in how to create a syllabus as a pedagogical tool. I, like most, first understood the syllabus to be a matter of legal, not educational, obligation. List your contact information. Required textbooks and chapters. Dates, especially of exams. A breakdown of grade percentages. And every policy — both your own and your institution’s — that will come between your students and their learning process. And so my education in syllabus design produced an 8- to 10-page document that, however organized and well-intentioned I made it over the years, most of my students just did not find valuable.
What a missed opportunity.
What if, instead, my syllabus could be an actual learning tool? Could I find a way to communicate not only what my course was supposedly “about,” but also how I designed its learning tasks to support higher-order thinking? And why I think students will find the experience enjoyable, even useful? And, in as little space as possible, also convey the features of my standards-based grading system, with which they are not likely to have had much if any prior experience?
Can I convince students that, and how, my course is “different,” and that they might actually enjoy that difference, in the space of four pages?
Show me your syllabus, and I will show you what you value as a teacher.
Clearly, moving to a shorter syllabus would mean giving up on some things. But how important are those things really?
My old syllabus was so clearly framed around the notion that the syllabus must be a contract that its final page was actually a contract, detachable and with space for students to sign, stipulating that they understood the information therein and would abide by its policies. Especially with my lower-division and developmental courses, I made a point to collect these signature pages every semester.
I suppose my thinking was that, if a student were to run afoul of a policy, I could triumphantly produce their signature on a document that could “win” my side of the argument. The adversarial view of teaching – which counterposes instructors and students on opposite sides of a pitched battle between “rigorous learning” and “easy grades” – permeates many conversations about teaching in general, and syllabi in particular. Most of us know faculty who will describe the evolution of their syllabi as an iterative closing of loopholes, designed to prevent clever students from getting one over on their professor.
The clear assumption: that students are not deserving of trust.
That was not the relationship I wanted with my students. But my syllabus, as a crucial first impression of my course, was setting me up to have that relationship. It had to go.
My syllabus did not, in fact, have to look like a contract.
So, I summarized only the most important course “policies” – really, the ones that relate to ensuring all students have access to the necessary information and resources for success, from tutoring services to basic needs security – then put on my graphic design hat, opened the unlikeliest possible tool for graphic design (Microsoft Word), and got started.
(See the bottom of this post for a downloadable template.)
Visual, or “graphic,” syllabi, can take many forms. Mine was designed starting from a newsletter template in Microsoft Word – definitely not my favorite tool, but one that lends itself well to re-use and sharing. (You can find editable templates on this page below.) It inhabits four letter-sized pages; I print it as a bifold ledger-size page, exactly like a newsletter. This also makes it a convenient jacket for holding other first-day handouts.
On the outer pages of the syllabus are the things I want my students to see and know first:
In its ledger-print size, the inside of my syllabus becomes a double-page-wide spread that, for students, is both the most memorable and most instrumentally valuable part of the document. It includes:
It gets bemused looks from our campus copy center staff every semester when I get it printed, but for me there’s no question that my visual syllabus is a far better representation of how I teach than was my old, wordy, paranoiac contract syllabus. I also like that, because its pieces all interconnect so tightly, there aren’t many places where I can make isolated changes from one semester to the next unless I’m totally redesigning a course.
My next goal for these syllabi might be to rethink the “What You’ll Learn” and “How You’ll Learn” boxes on the front page. Do they really add value to the writing on the front page and the matrix on the inside, respectively? Or could I use this space for another purpose, such as a better introduction to my course tools and how students will interact with them (such as Slack and Twitch)?
The post Numbers That I Used to Know appeared first on Matt Salomone.
Student:
Now and then I think about my calculator
Bought it for a hundred bucks in junior high
That TI-83 was my best friend
I thought I’d be with it until the end
But now in college I’m a mathematics major
Nowadays I don’t have use for computation
And every question asks me why — I don’t know why
I used to think that one and one was two
But now I’m not sure even that is true
It at the very least depends on your definitions
Chorus:
My teachers only showed me how
To plug in numbers to a formula and get an answer
Calculation’s great and all
But what good is a decimal without a point?
It’s such a mindless plug-and-chug
Calculator did my work without me even thinking
That’s not mathematics, though
Those were just the numbers that I used to know
Teacher:
Now and then I think of all my amazing students
So good at memorizing just how every problem must be done
Got the answers right on their first try
They didn’t ever ask me why
So just take your A-plus and go
There’s no more to mathematics than the numbers that you used to know
Student:
You didn’t ever justify
Told me everything was true because the textbook said so
I don’t have to take your word
‘Cause everything’s conjecture till you have a proof
First principles and axioms
Implications, propositions, theorems, corollaries
That’s how mathematics goes
It’s more than just the numbers that you used to know
More than just the numbers that you used to know