Blur and Knowledge from Falsehood: Neural Network Science and Neurophysiology Meet Epistemology by Jody Azzouni
Convolutional neural networks (CNNs) trained to recognize objects using clear images of them fail catastrophically at recognizing those objects when faced with degraded or blurry imagery. New results by Jang and Tong, and by Pramod, Katti, and Arun, show that visual object recognition is optimized by blurring the training images in a way that corresponds to the peripheral blur induced by the human eye. That recognition is optimized this way empirically supports the functional significance of the retina devoting roughly a hundred times fewer photoreceptors to peripheral vision than to central vision.
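To make the training manipulation concrete, here is a minimal sketch of eccentricity-dependent ("peripheral") blur applied as an image transform. This is an illustrative reconstruction, not the actual pipeline of Jang and Tong or of Pramod, Katti, and Arun: the linear blur fall-off, the max_radius value, and the choice of the image center as the fixation point are assumptions made for the example.

```python
# Illustrative sketch only: blend a sharp and a blurred copy of an image so
# that blur increases with distance from a central "fixation point", crudely
# mimicking the fovea-to-periphery fall-off in human vision.
import numpy as np
from PIL import Image, ImageFilter

def peripheral_blur(img: Image.Image, max_radius: float = 4.0) -> Image.Image:
    """Return img with a sharp center and an increasingly blurry periphery."""
    sharp = np.asarray(img, dtype=np.float32)
    blurred = np.asarray(img.filter(ImageFilter.GaussianBlur(max_radius)),
                         dtype=np.float32)
    h, w = sharp.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Normalized eccentricity: 0 at the image center, 1 at the farthest corner.
    ecc = np.hypot(ys - h / 2, xs - w / 2)
    ecc /= ecc.max()
    weight = ecc[..., None] if sharp.ndim == 3 else ecc  # broadcast over channels
    out = (1 - weight) * sharp + weight * blurred
    return Image.fromarray(out.astype(np.uint8))
```

At training time a transform like this could simply be inserted into a standard augmentation pipeline (for instance, via torchvision.transforms.Lambda(peripheral_blur) before conversion to a tensor), so that the network never sees uniformly clear images.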
These results refute a longstanding epistemic slogan: knowledge of truths arises only from knowledge of truths, where these knowledge-originating truths are either perceptually available or derived from earlier-established truths by impeccable inferences of some sort. This view has only recently come under attack in the epistemological literature: examples show that despite calculational or inferential missteps, one can nevertheless know one's conclusion. A case due to Ted Warfield describes Warfield as bringing a hundred handouts to a talk and recognizing that he has enough handouts when he counts 51 audience members. Unfortunately, he has miscounted: there are 52 audience members. Nevertheless, he still knows he has enough handouts. This has come to be called "knowledge from falsehood" or "knowledge from non-knowledge."
The philosophical literature on knowledge from non-knowledge has until now failed to recognize that there are "knowledge from non-knowledge" cases that arise directly from perception. For example, if my allergies make what I see very blurry, I can nevertheless recognize (know) that a rabbit is hopping by, despite there being nothing in the scene that corresponds to the blurry smear I actually see. As the many examples like this show, knowledge from non-knowledge is ubiquitous.
But the results described in the first paragraph, about training CNNs on peripherally blurred images, reveal something more dramatic: knowledge only because of falsehood. For the catastrophic recognition failures of CNNs trained only on clear images show that peripheral blur (misleading perceptual information) is necessary for recognizing objects: necessary for knowing, for example, that something is a person and not a fire hydrant.
This is a striking phenomenon. Biologically built into our sensory apparatus are mechanisms for generating misleading information. Why? Because object recognition proves more reliable on the basis of misleading information than on the basis of perfectly veridical information. There should be no doubt that this phenomenon is widespread and occurs with all the senses.
Some epistemic processes are more resilient than others, where resilience is a matter of how much an epistemic process can be fumbled by its user while the resulting process (fumbles included) remains highly reliable: reliable enough to be regarded as yielding knowledge when it results in something right.
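The Warfield handout case lets resilience be illustrated in toy quantitative form. The following Monte Carlo sketch uses an invented error model (a 30 percent chance of miscounting by one or two, audiences of 30 to 60, and 100 handouts); none of these numbers come from the literature, but they show how a fumble-prone counting procedure can nevertheless be nearly perfectly reliable about the conclusion that matters.

```python
# Toy simulation of "resilience": a counting procedure that often fumbles,
# yet almost never gets the conclusion ("I have enough handouts") wrong.
# All parameters here are invented for illustration.
import random

def fumbled_count(true_n: int) -> int:
    """Count an audience, miscounting by 1 or 2 about 30% of the time."""
    if random.random() < 0.70:
        return true_n
    return true_n + random.choice([-2, -1, 1, 2])

def trial(handouts: int = 100) -> bool:
    true_audience = random.randint(30, 60)
    counted = fumbled_count(true_audience)       # possibly a falsehood
    concluded_enough = counted <= handouts       # the inference drawn from it
    actually_enough = true_audience <= handouts  # the fact of the matter
    return concluded_enough == actually_enough   # did the process get it right?

trials = 100_000
accuracy = sum(trial() for _ in range(trials)) / trials
print(f"Reliability of the fumble-prone process: {accuracy:.4f}")  # 1.0000 here
```

Because the margin between audience size and handout count dwarfs any plausible miscount, the fumbles never flip the verdict; this insensitivity of the verdict to user error is what resilience comes to.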
This distinction between more and less resilient epistemic processes illuminates a striking difference between informal rigorous mathematical practice and the properties of formal derivations. Notoriously, professional mathematical work is often flawed: the proofs in question don't establish the results advertised. Nevertheless, the mathematicians who originally publish these proofs are credited with discovering the results in question, and not the textbook writers who clean up the proofs and provide versions that are impeccable by the standards of informal rigor.
This can be explained by resilience. The mathematical methods used by professional mathematicians are resilient in the sense that they can withstand user errors and still constitute highly reliable informal rigorous processes for yielding new results. This is notoriously not the case with pure deductive methods, such as formal methods of proof.
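The contrast can be made vivid with a proof assistant. Below is a minimal Lean 4 example (the theorem name is invented for illustration). The proof checks as written, but the formal method has essentially zero resilience: mistype a single token and the result is not a slightly flawed proof that still secures its theorem, as in informal mathematical practice, but no proof at all.

```lean
-- A checked Lean 4 proof using the core lemma Nat.add_comm.
-- Replace `Nat.add_comm` with a wrong lemma, or misspell any token,
-- and the derivation does not degrade gracefully; it fails outright.
theorem order_of_counting_irrelevant (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```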
As the foregoing indicates, the appropriate epistemic model for framing these results is a reliabilist one: we take knowledge to result from epistemic methods that are sufficiently reliable, regardless of whether those using the methods realize they've made mistakes and regardless of whether the methods themselves are flawed in various ways.
Philosophers take note. The epistemic internalist thinks that someone knows something only if they have a justification in hand that they can use to justify their knowledge claim. We see now that this perspective is far less comfortable with knowledge from non-knowledge than the externalist perspective is. We think Warfield, in the handout example, knows he has enough handouts. Thus, the methods must be sufficiently reliable to yield knowledge (when they provide the correct answers), regardless of whether the method-user realizes this. Together with the blur results, this is further evidence that our ordinary concept of knowledge, the one we all share, scientists included, is an externalist one.