A Philosopher's Definitive (And Slightly Maddening) Case Against Replay Review

The motivation for using video review in sports is obvious: to get more calls right. This seems like an easy enough mission to fulfill, but anyone who has spent even a little time watching sports on TV can attest to the fact that the application of video review is not so simple. In most sports where it is applied, video review has actually created more confusion and less clarity. Why is this the case? Follow me into an examination of thousands of years of philosophical discourse, and we will find the answer together, my friends.

The root problem with video review is that it is so often used to make decisions based on rules that contain an inherent level of vagueness. For example, according to the NFL’s current catch rule (Rule 8, Section 1, Article 3a) an inbounds player must secure “control of the ball in his hands or arms prior to the ball touching the ground” in order to complete a catch. The term “control” in that rule is vague. There are borderline cases of controlling a football, which means the boundaries for when the term “control” can and can’t be applied are fuzzy ones.

Philosophers have been dealing with the problems posed by vagueness since at least the 4th Century BC, because the problems that vagueness causes aren’t limited to the NFL’s struggle with the catch rule. Vagueness also has important implications for metaphysics, the philosophy of language, and our understanding of the nature of truth and the foundations of logic.

There are a number of different philosophical approaches to vagueness that can be applied to the case of video review. The bad news is that once we understand these different approaches to dealing with vagueness, we are left with only one conclusion: there is no sensible way to use video review to adjudicate rules that contain vagueness.

Vagueness poses a serious philosophical problem, because it leads to paradox. For example, someone with no hair at all is, obviously, bald. The term “bald” applies to someone with no hair. And adding a single strand of hair to their head isn’t enough to make them not bald. That is, if they added a single strand of hair to their head, then they’d still be bald.

But if one strand of hair isn’t enough to make someone not bald, then adding a second strand isn’t, either. And that’s because one strand of hair isn’t enough to make a difference between bald and not bald. So, going from one strand of hair to two strands won’t make someone not bald. And if two strands aren’t enough to make someone not bald, then adding a third strand isn’t enough either. Because again, one strand isn’t enough to make the difference between being bald and not bald.

You probably see where this is going. If you add enough hairs, this person is eventually going to have a full head of hair, but by parity of reasoning, each additional strand of hair isn’t enough to make them not bald. Since it’s been established that one strand isn’t enough to make the difference, this person would remain bald even though they have a full head of hair. But that’s false! Because someone with a full head of hair isn’t bald.
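Spelled out, the reasoning above has the classic Sorites shape: an obviously true base case, a tolerance premise, and a plainly false conclusion. Here Bald(n) reads “someone with n strands of hair is bald,” and N is large enough for a full head of hair:

```latex
% The Sorites argument in schematic form:
% P1 (base case): someone with zero hairs is bald.
% P2 (tolerance): one more strand never turns a bald person not-bald.
% C: absurdly, someone with a full head of hair (N strands) is bald.
\begin{align*}
\text{P1: } & \mathrm{Bald}(0) \\
\text{P2: } & \forall n \, \big( \mathrm{Bald}(n) \rightarrow \mathrm{Bald}(n+1) \big) \\
\text{C: } & \mathrm{Bald}(N)
\end{align*}
```

Both premises look undeniable, and the conclusion follows from them by N applications of modus ponens. That is what makes it a paradox rather than a mere mistake.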

This is the Sorites Paradox, and it has been central to discussions of vagueness for well over two thousand years. The name is derived from soros, the Greek word for heap, because early versions of the paradox were often constructed using the term “heap.”

Any vague term that admits borderline cases is what philosophers call Sorites-susceptible. There is a lot of Sorites-susceptibility in sports. Think of the NFL’s decades-long struggle to define a catch. Each small rule change was meant to make the notion more and more precise, but only succeeded in generating more controversy. That’s because minuscule differences in, say, the amount of time a player held the ball, or in hand placement relative to the ground, aren’t enough to make the difference between a catch and a non-catch. Those rule changes couldn’t eliminate the vagueness.

Nowhere is this Sorites-susceptibility clearer than with the most famous borderline case of all: the Dez Bryant “Catch.” Bryant, if you don’t remember, made a spectacular fourth down play that seemed to put the trailing Cowboys on the half-yard line. But after a video review, officials determined that Bryant had failed to maintain full control of the football. A few years later the NFL changed the catch rule (to the one described at the beginning of this article), and decided that Bryant had caught the football after all. The Bryant “Catch” is definitely a borderline case.

https://youtube.com/watch?v=uQxp-A5uvkA

With all this in mind, we can explore the three main approaches to dealing with the Sorites and borderline cases. Each has some weird implications, and none offer any truly satisfying solutions.

The most popular approach to vagueness is a semantic one. Vagueness, the thinking goes, is the result of semantic indecision, and so avoiding the Sorites Paradox is simply a matter of developing a non-standard semantics. The most popular (but not only!) version of this is supervaluationism.

The supervaluationist accepts that in borderline cases certain terms neither definitely apply nor definitely do not apply. But the supervaluationist avoids the Sorites Paradox by appealing to the notion of a precisification. Vague terms, the thought is, are semantically deficient only because they could be made more precise. The term “control” is vague because we could make that term more precise in any number of different ways. We could define, for instance, lots of specific values for when that predicate would apply (e.g. “holding the ball for 0.5 seconds,” “holding the ball for 0.75 seconds”).

The supervaluationist draws on this notion of precisification to define two notions of truth and falsity. Take a vague term like “heap” and a value: a million grains of sand. The supervaluationist says that the proposition, “A million grains of sand is a heap” is super-true. And that’s because no matter how we might make the predicate “heap” more precise, a million grains is definitely going to count as a heap. On the other hand, the proposition, “One grain of sand is a heap” is super-false. And that’s because no matter how we might make the term “heap” more precise, one grain of sand definitely won’t count as a heap. The driving thought is that vague terms admit multiple precisifications. And the non-borderline cases, in which we can determinately apply a term, are those in which no matter how we might make that term more precise, it will definitely apply or not apply.
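As a toy sketch of this idea (the grain thresholds below are made-up illustrations, not anything a supervaluationist is committed to), super-truth is truth on every admissible precisification, and super-falsity is falsity on every one:

```python
# A toy model of supervaluationism: a vague predicate "heap" is
# represented by many admissible precisifications (sharp thresholds).
# The thresholds are invented for illustration only.
PRECISIFICATIONS = [100, 500, 1_000, 5_000, 10_000]  # grains needed to count as a heap

def evaluate(grains: int) -> str:
    """Classify 'this many grains make a heap' across all precisifications."""
    verdicts = [grains >= cutoff for cutoff in PRECISIFICATIONS]
    if all(verdicts):
        return "super-true"      # a heap on every precisification
    if not any(verdicts):
        return "super-false"     # a heap on no precisification
    return "indeterminate"       # a borderline case: the precisifications disagree

print(evaluate(1_000_000))  # super-true
print(evaluate(1))          # super-false
print(evaluate(2_000))      # indeterminate
```

The borderline cases are exactly the inputs on which the admissible sharpenings disagree; everything else gets a determinate super-truth value.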

How does this avoid the Sorites Paradox? Well, it’s super-true that one strand of hair doesn’t make someone not bald. And it’s super-true that two strands of hair don’t make someone not bald. And so on. But eventually we’ll get to the borderline cases, in which the term “bald” does not definitively apply or fail to apply. So, it isn’t super-true or super-false whether the term applies.

And that means we avoid the paradox. Because it is only a paradox if we have seemingly true premises that lead to a false conclusion. But some of the premises, those involving borderline cases, will be neither super-true nor super-false. And the general claim that one strand never makes the difference comes out super-false, since every precisification draws a sharp line somewhere. And so, there is no paradox.

This is all a bit esoteric and technical and probably making your brain hurt, but it’s about to get weird, too. This is because the supervaluationist has a problem with bivalence, which is an intuitive assumption about truth that claims every proposition is either true or false (or at least every proposition of the declarative sort that we’re concerned with here). The supervaluationist has to deny bivalence. This is because the supervaluationist accepts that some propositions, namely those involving borderline cases, are neither super-true nor super-false.

Here’s why that’s weird: the supervaluationist still accepts the Law of Excluded Middle. This is the claim, going back to Aristotle, that for any proposition, either it or its negation is true. But if we accept the Law of Excluded Middle and deny bivalence, then we are left in a truly knotty situation.

Turn your mind back to Lambeau Field in January 2015, during the review of Bryant’s “catch.” Here’s a claim: “Dez Bryant controlled the ball or Dez Bryant did not control the ball.” The supervaluationist says that claim is super-true. No matter how we make the term “control” more precise, that proposition comes out as true. It’s just an instance of the Law of Excluded Middle.

But that above proposition isn’t super-true because “Dez Bryant controlled the ball” is super-true. The proposition “Dez Bryant controlled the ball” is neither super-true nor super-false. It’s a borderline case of control, after all. And that above proposition isn’t super-true because “Dez Bryant did not control the ball” is super-true. The proposition “Dez Bryant did not control the ball” is neither super-true nor super-false.

So where does that leave us? It’s super-true that Dez Bryant either did or did not control the ball. But it isn’t true or false whether that particular instance was an instance of control or not. For the supervaluationist, in other words, it’s true that Dez Bryant either did or didn’t catch the ball. But it isn’t true or false whether that particular play was a catch.
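The same point can be made mechanical. In this toy sketch of the Bryant play, the possession times are hypothetical precisifications of “control” (nothing like them appears in the NFL rulebook): the Excluded Middle disjunction comes out super-true even though neither disjunct does.

```python
# A toy supervaluationist treatment of a borderline catch. The cutoffs
# are invented stand-ins for ways of precisifying "control".
PRECISIFICATIONS = [0.3, 0.5, 0.7]  # seconds of possession required for "control"

def super_eval(prop) -> str:
    """Quantify a proposition's truth over all precisifications."""
    verdicts = [prop(cutoff) for cutoff in PRECISIFICATIONS]
    if all(verdicts):
        return "super-true"
    if not any(verdicts):
        return "super-false"
    return "indeterminate"

held = 0.5  # a borderline (hypothetical) amount of possession

controlled = lambda cutoff: held >= cutoff
not_controlled = lambda cutoff: not (held >= cutoff)
excluded_middle = lambda cutoff: (held >= cutoff) or not (held >= cutoff)

print(super_eval(controlled))       # indeterminate
print(super_eval(not_controlled))   # indeterminate
print(super_eval(excluded_middle))  # super-true: LEM holds on every precisification
```

The trick is that the disjunction is evaluated whole on each precisification, where one disjunct or the other is always true, so the disjunction is true everywhere even though neither disjunct is.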

That’s an aggravating result! It’s rooted in the supervaluationist commitment to propositions about borderline cases being neither definitely true nor definitely false. But this also shows why there is no satisfactory use of video review on this approach. Because if vagueness is semantic indecision, then in the case of Dez Bryant’s catch, there is no determinate application of the term “control.” There is no determinate answer to whether we can say Bryant controlled the football or not, and so video review is irrelevant. Taking a closer look doesn’t help. Since it’s a borderline case, it isn’t super-true or super-false whether that was a catch, because on some precisifications it comes out as a catch. On others, it doesn’t.

A second approach to vagueness is an epistemic one, which proposes that vagueness is a result of ignorance rather than any semantic deficiency. The epistemicist thinks that there is some sharp boundary between, for instance, being bald and not bald. There is one strand of hair that makes all the difference, but the moment at which that sharp cut-off occurs is simply unknowable.

And here’s where the epistemic solution bears some resemblance to the semantic one. Because the standard epistemic view takes this ignorance to be rooted in language. One thought is that the meaning of words supervenes on their use. And so, any sharp boundaries for a term would be a function of our dispositions and patterns regarding the use of that term. But we can’t know the totality of those dispositions and patterns. So, we can’t figure out the sharp boundary determined by the totality of those dispositions and patterns. This ignorance prevents our knowledge of any term’s sharp boundaries.

All this provides an easy solution to the Sorites Paradox. Because the epistemicist just accepts that one hair, or grain of sand, or whatever, makes all the difference. So, the argument has a false premise at some point. Because at some unknowable point one strand of hair makes someone not bald. And notice that, unlike the supervaluationist, they can keep bivalence. And so, every proposition involving a vague term is still true or false.
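A toy model of the epistemicist picture (the cutoff range below is invented for illustration): bivalence holds because a sharp boundary exists, but the program hides it from us, much as the totality of our linguistic dispositions is hidden from any individual speaker.

```python
import random

# A toy model of epistemicism. There IS a sharp boundary between bald
# and not bald, but it is "hidden": no observer can inspect it, only
# query clear cases against it. The range 1..100,000 is an assumption.
_HIDDEN_CUTOFF = random.randint(1, 100_000)  # the unknowable sharp boundary

def bald(hairs: int) -> bool:
    """Bivalence holds: every case gets a determinate True or False."""
    return hairs < _HIDDEN_CUTOFF

# Clear cases come out the same no matter where the boundary falls...
print(bald(0))           # True
print(bald(10_000_000))  # False
# ...but for a borderline number of hairs, there is a fact of the
# matter, and we simply cannot know which way it goes.
```

The Sorites tolerance premise is flatly false on this view: exactly one strand crosses `_HIDDEN_CUTOFF`. We just can never say which one.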

But this is exactly why epistemicism is so weird. Because the epistemicist thinks, for instance, that there is some exact moment when a player has adequate control of a football. There is some sharp boundary between control and lack of control. So, in the case of Dez Bryant, either he passed that sharp cut-off or he didn’t.

That in itself is pretty wild, but what’s wilder is that we can’t know when that cut-off happens or doesn’t happen. The dividing line between control and lack of control is literally unknowable. There is some precise set of circumstances that completes the process of controlling a football, but there is no way to know what those circumstances are.

So it should be obvious why there is no satisfying use of video review on this approach. Because in a borderline case like Dez Bryant’s, it is unknowable whether that set of circumstances was an instance of controlling the ball. There is no point in going frame by frame to take a closer look, because while there is some fact of the matter as to whether Bryant controlled the football, we can’t know what it is. If vagueness is ignorance, then there is no satisfying use of video review.

But wait, things can get even stranger. Those who defend semantic and epistemic views locate vagueness in us. For the supervaluationist, vagueness is a feature of language. For the epistemicist, vagueness results from our ignorance.

But a third approach to vagueness ventures into much headier territory. This approach denies that vagueness is a result of how we represent the world, and argues that it comes from the world itself. This is known as ontic or metaphysical vagueness. The Sorites Paradox, then, isn’t so much a problem to be solved, but an illustration of the world’s indeterminacy.

Here it’s worth emphasizing the weirdness of this approach up front. Because it seems reasonable to think that the world is fully determinate. While it might be unclear whether a certain term should apply to some state of affairs, that isn’t because of the state of affairs itself. When Dez Bryant made that acrobatic play in 2015, it was unclear whether we could apply certain terms to that state of affairs. Terms like “catch” or “control.” But what happened was fully determinate. There was nothing vague about the world or action itself. Or at least, that’s the intuitive thought.

The defender of ontic vagueness denies all this. They deny that the world itself is fully determinate. Instead, the world itself is vague. It can be vague whether someone is bald, for instance, because the world itself can be indeterminate with respect to baldness. And if Dez Bryant’s play was a genuine borderline case of control, then the indeterminacy with respect to whether it was a catch comes from the world itself. That’s pretty wild!

You might think that the very notion of ontic vagueness is unintelligible, but it’s not. Here’s a good way to get a handle on it: Take the notion of a precisification. The defender of ontic vagueness will claim that for some terms, even if we made them maximally precise, it would still be indeterminate whether that term applies.

Say for instance (borrowing an example from the philosopher Elizabeth Barnes) that the predicate “bald” is made maximally precise. So, someone is bald only if they have “less than 1,000 hairs.” How can it still be vague whether “bald” applies if the term is maximally precise? Well, say someone has exactly 1,000 hairs on his scalp, but one hair is super loose and about to fall out. In that case, it’s suddenly indeterminate how many hairs this person has: 1,000 (not bald) or 999 (bald). So, it’s indeterminate whether the term “bald” applies, even though we’ve made the term maximally precise. And we know all about that precisification! Thus, it isn’t semantic indecision or ignorance that’s causing trouble. The vagueness is a matter of the world itself being vague.

The implications for video review should be obvious here as well. Because if vagueness is metaphysical, then some state of affairs under review might be fully indeterminate. There is no fact of the matter as to whether someone controlled the ball or not. And so looking frame by frame won’t help. There is nothing there for a referee to discover.

Let the current haze you find yourself in after reading all of this act as evidence that there can be no satisfactory use of video review in sports, so long as the rules it’s tasked with adjudicating are vague. Either there is no fact of the matter as to which call is the right one, or that fact of the matter is unknowable. The motivation for widespread use of video review is to get calls right, but if the rules are vague, every one of these approaches undermines that motivation.

This doesn’t mean we should get rid of video review entirely, because some rules don’t involve vagueness at all. Line calls in tennis are a good example. Hawk-Eye can stay! And maybe there are other ways to take vagueness into account. One might be to have some set amount of time to look over a replay in real time. Officials can look quickly over a few real-time angles, and then move on. No need to look any closer, as there isn’t anything there to look for.

But here’s an objection: You might think these problems can be avoided if video review is used only to correct “clear and obvious” errors. This is the supposed standard for use of VAR in soccer. Things aren’t so simple, though, because there aren’t just borderline cases. There are borderline borderline cases. That is, it can be vague when the borderline cases start. It isn’t just that there is unclarity, but that there is unclarity about when the unclarity even begins.

This is the phenomenon of higher-order vagueness. And it shows why falling back on “clear and obvious error” won’t help. Because the notion of clear and obvious error is supposed to eliminate borderline cases of error from consideration, but it’s unclear when those toss-up cases even start.
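One way to picture higher-order vagueness (with invented numbers again): if it is vague which precisifications of “control” are admissible, then it is also vague which cases count as borderline in the first place.

```python
# Higher-order vagueness, sketched: the set of admissible cutoffs for
# "control" is itself vague, so WHICH cases are borderline is vague too.
# Both candidate sets of cutoffs (seconds held) are assumptions.
CANDIDATE_PRECISIFICATION_SETS = [
    [0.4, 0.5, 0.6],   # one admissible view of the cutoffs
    [0.3, 0.5, 0.7],   # another, slightly wider view
]

def borderline(held: float, cutoffs) -> bool:
    """First-order borderline: the cutoffs in one set disagree."""
    verdicts = [held >= c for c in cutoffs]
    return any(verdicts) and not all(verdicts)

def borderline_status(held: float) -> str:
    """Second-order question: do the candidate sets agree on borderlineness?"""
    flags = [borderline(held, s) for s in CANDIDATE_PRECISIFICATION_SETS]
    if all(flags):
        return "clearly borderline"
    if not any(flags):
        return "clearly not borderline"
    return "borderline borderline"  # it is vague whether this case is borderline

print(borderline_status(0.5))   # clearly borderline
print(borderline_status(0.35))  # borderline borderline
print(borderline_status(0.1))   # clearly not borderline
```

The middle case is the troublemaker for a “clear and obvious error” standard: officials who restrict review to non-borderline cases still have to decide where the borderline starts, and that decision is itself a judgment call about a vague boundary.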

And this phenomenon actually gets at the heart of some of the current dissatisfaction with VAR. For instance, this summer Argentina lost to Brazil in the Copa America semifinals. The game turned in part on a few close calls that went Brazil’s way and weren’t reviewed by VAR. Afterward, Lionel Messi said, “They [the officials] had called a lot of bullshit… But they didn’t even check the VAR [tonight], that’s unbelievable.” Messi thought those cases were borderline enough to be reviewed. The officials did not. Neither is talking about higher-order vagueness (as far as I know!), but that is the phenomenon at work here. It is vague when the borderline even begins. This means the only way to avoid that vagueness would be to review everything, which is something that no one wants.

The Stoics were one of the great Greek philosophical schools to emerge after Plato. They were system builders, and had intricate and intertwined logical, metaphysical, physical, and ethical theories. And part of their theory of logic was a deep commitment to bivalence, the claim that every proposition had to be true or false.

Their great opponents were the Skeptics, from Plato’s Academy (which had been turned to a form of skepticism in about 273 BC by Arcesilaus). And one of the primary tools in the Skeptics’ arsenal was a series of Sorites questions. Since the Stoics were committed to bivalence, answering any such series of questions would naturally force them into an embarrassing and paradoxical situation.

The great Stoic logician Chrysippus advocated a particular response to this line of Sorites-type questioning. Here’s the philosopher Timothy Williamson: “Chrysippus recommended that at some point in the sorites interrogation one should fall silent and withhold assent. The wise man would and the ordinary Stoic should suspend judgment.”

Perhaps this offers the clearest solution to how video review should be applied to rules with inherent vagueness. When confronted with something like the question of whether or not Dez caught the ball, the Stoic would fall silent and withhold judgment. Referees cannot withhold all judgment when it comes to close cases on the field. But when faced with such close calls, they would be well served to act as a Stoic, and refuse to interrogate the play any further.

James Darcy is a PhD candidate in philosophy at the University of Virginia. His interests include contemporary metaphysics and the New York Islanders.
