Comments
divitch t1_jc9nyr5 wrote
I deeply agree with this position. It reminds me of Jamesian pragmatism. For me, absolute truths do not exist, so I seek the most suitable beliefs. As it is written: "These patterns of belief also reflect our day-to-day epistemological processes."
N0_IDEA5 t1_jca5cwc wrote
I'm sympathetic to this view, as I feel knowledge should be decoupled from certainty. But I still think truth has a value. While I say we can't be certain about the truth of any of our beliefs, I still think that there is something that is a fact of the matter about the world. And if someone were to fall onto a belief that lines up with the true fact of the matter, then I think, if we were to put ourselves into the boots of an all-knowing deity, we'd be more inclined to say the person who stumbled onto the truth has something more than the person whose beliefs are completely mistaken, even if equally justified. The article seems to more or less be making an argument for coherentism (we know something based on how well it coheres with our other beliefs), and I feel this notion of knowledge has practical problems, as you can just create another fictitious belief to justify your belief in anything. It's where conspiracy theories come from, and it's why they are often so hard to argue against. Furthermore, if we were to settle on this as knowledge (the best justified belief), then when it comes to education it will become incredibly hard to choose what we should and shouldn't learn in schools, whereas for now we value truth and try to teach what we are most likely to know.
Midrya t1_jcakwmi wrote
Honestly, this just feels like word games. It doesn't really change anything, and it looks like the only motivation to make the change is to get one step ahead of "skeptics" who are criticizing your ideas when you are presenting them as fact.
Base_Six OP t1_jcaymh9 wrote
I think truth is the objective of reasonable belief. We can't state that our beliefs are true, but we desire them to be true, and that desire is the primary motivation for improving our epistemological mechanisms.
I think the problem with coherentism is that it lacks that tie. If I can construct a coherent set of beliefs that I am a brain in a vat, why is that belief set worse than one which believes in my perceived reality? I don't think we can state from a position of pure coherentism why that would be the case. However, I think the argument that it's less reasonable to take a position that denies available evidence than one which accepts available evidence is a reasonable one. Even without being able to describe the likelihood that those foundational beliefs are true, the singularity of evidence in the form of senses and memories gives us only a single point to build off of if we'd like to construct beliefs about the outside world. This also addresses conspiracy theories in a less direct manner: in order to believe a conspiracy theory we generally need to be extremely epistemologically sloppy and disbelieve a lot of available evidence. If we anchor ourselves with the belief that evidence should be taken as reasonable in absence of counter-evidence, denying that evidence to believe a conspiracy is much more difficult.
Base_Six OP t1_jcb0qgs wrote
I wrote this more aimed at the skepticism of "you are a brain in a vat" than with criticism of more grounded ideas. Neither internalist nor externalist constructions of knowledge feel, to me, like they really have teeth when confronting strong skepticism for fundamental beliefs.
For me, I think skepticism is ultimately correct when it says "you don't know that your senses describe reality", but I think I can also make the claim that it would be unreasonable for me to disbelieve my perception of reality, which is what I'm trying to argue here.
Beyond that, 90% of epistemology feels like word games that don't really change anything, including my work, so I agree with you there. Russell probably has the right approach in just brushing skepticism aside and getting on with more interesting work. There's a substratum of beliefs that we need to accept to meaningfully engage with the world, and whether we just accept them axiomatically or struggle to construct a framework for accepting them on other grounds is generally irrelevant. I personally find the axiomatic approach unsatisfying, and think that a framework that can provide those foundational answers can also be useful more generally.
N0_IDEA5 t1_jcblh9n wrote
We agree on the issues with coherentism, and I think the idea of taking the position that accepts the apparent evidence is a good one. But even then, I still think the "fact of the matter" holds importance in us having knowledge. Let's take a modified case of Norman the clairvoyant. Suppose Norman has, for the first time, the clairvoyant feeling that the president is in New York, and that turns out to be true (the fact of the matter), but he has more evidence pointing to the president being in Florida: say there were news reports and others' testimony to the president being in Florida. I feel it would be better to say Norman knows the president is in New York, rather than to say Norman knows the president is in Florida.
arjuna66671 t1_jcbmx1o wrote
Your example of Norman the clairvoyant provides an interesting insight into the relationship between evidence, belief, and truth. While I agree that the "fact of the matter" holds importance in having knowledge, it is also essential to consider the epistemic responsibilities of individuals when forming beliefs. In the case of Norman, even though the clairvoyant feeling turned out to be true, it would be epistemically irresponsible for him to base his beliefs solely on that feeling, given the more robust evidence pointing to the president being in Florida.
Our epistemic responsibility lies in cultivating beliefs based on the most reliable and available evidence. If we anchor ourselves with the principle that evidence should be taken as reasonable in the absence of counter-evidence, then we strive to form beliefs that are more likely to be true, even though we can't guarantee their truth.
In the context of education, the "fact of the matter" still holds value, but the process of refining our beliefs and knowledge should be emphasized. We should teach students to evaluate evidence critically and engage in open-minded inquiry, which can lead them to a better understanding of the world.
While the "fact of the matter" is important for knowledge, focusing on the epistemic responsibility of individuals and the refinement of our belief-forming processes can help address the challenges posed by coherentism and other epistemological theories. This focus on evidence and critical thinking also provides a practical framework for addressing issues in education and countering conspiracy theories.
FrozenDelta3 t1_jcbob91 wrote
We either know something or we don't. A third state is erroneous knowledge: something we think we know is true but that is actually provably or demonstrably false. When encountering evidence that is contrary to currently accepted knowledge, it's very reasonable to correct erroneous knowledge.
While belief and knowledge may seem interchangeable or synonymous, and to some extent they can be, they have different applications. One can't know the answers to unanswerable questions, but one can definitely believe any answer to unanswerable questions.
It would seem a difference between knowledge and belief is provability. That which cannot be proven is considered unprovable. Cannot prove that one is or isn’t a brain in a jar, but definitely can believe it either way. I say either way because a seemingly opposing reaction to proposed belief is disbelief. “I believe we are brains in jars” is then countered with “I do not believe we are brains in jars”, then I say “both are unprovable and I neither believe nor disbelieve.”
Other uses and examples come to mind, but I’ve run out of time for now. Having said all of that, I am neither against nor for belief, I just see potential issues with rooting all information in belief.
Base_Six OP t1_jcbsx7m wrote
The problem is that if we accept the possibility that we're brains in jars, the vast majority of our information becomes unprovable. I can't disprove the strong skeptical hypothesis, therefore I can't know anything that would be disproven by the strong skeptical hypothesis. Either we have to accept that we have minimal knowledge or we need a conceptualization of "knowledge" that gets around strong skepticism. If we accept the former, we need some other epistemological basis to describe the majority of what we would like to say we "know".
I don't think it's an either/or between belief and knowledge. After all, anything I know is also something I believe. When I say "I possess knowledge" about a topic, I'm describing my belief in some manner. Definitions for knowledge vary, but generally they contain some element of "I have justification for my belief", as well as other things.
What I'm proposing here is that we can have solid justification for holding a belief even in absence of knowledge or proof that the belief is true. On the brain in a jar scenario, I'd say that I can't disprove the hypothesis but that I don't have justification for believing that hypothesis. Between the positions of belief and disbelief, I think that the reasonable position here is disbelief.
If I premise other beliefs on this non-knowledge disbelief of strong skepticism, I'd similarly say those beliefs are not knowledge, but nor are they just things that I happen to believe. They're "reasonable beliefs": the most reasonable positions I can take given the evidence I have, even if I don't possess knowledge.
Midrya t1_jcbur78 wrote
> I wrote this more aimed at the skepticism of "you are a brain in a vat" than with criticism of more grounded ideas.
That is all well and good, but I would be hesitant to call such a person a skeptic given that it requires assuming quite a few unverifiable premises. To use the language of your article, I don't believe a Strong Skeptical Hypothesis (SSH) can even exist, because such an argument requires that the one presenting it be inherently lacking in skepticism. A person who posits the idea that "you are a brain in a vat" is either a believer in some form of the simulation hypothesis, or is just being a contrarian for the sake of contrarianism.
For what it's worth, I can agree to the notion that we can't ever truly know something in the sense that we can verify that the information we possess regarding some thing is accurate; I just don't perceive any benefit in using the phrase "reasonable belief" in place of "knowledge".
Base_Six OP t1_jcbw6yg wrote
The question I would ask is: "Can Norman describe his belief as knowledge?" We can do so in this scenario, but only because of our position as an omniscient outsider. Norman does not have that sort of privileged information.
The relevant question for Norman is what he ought to believe on the basis of the evidence he has. He's got his clairvoyant feelings and some other conflicting external evidence. He can give credence to one or both of those and construct a belief appropriately, which he'll likely do on the basis of other beliefs. In this case, the most reasonable belief in absence of other supporting beliefs (supposing Norman values his clairvoyance) is perhaps a middle position: the president is either in New York or in Florida.
On the other hand, if Norman possesses an evidence-based belief that clairvoyance is impossible, he might dismiss his clairvoyant feelings and conclude that the president is in Florida. Norman would possess a reasonable belief in this case, even if his manifest clairvoyance was in fact accurate. If Norman were to gain additional evidence that the president was in fact in New York (such as a first-hand sighting), he'd be reasonable in revising that belief and in giving more credence to further clairvoyant experiences.
We can categorize Norman's belief as knowledge or non-knowledge in all of these scenarios based on privileged information, but Norman cannot, and Norman's case represents the baseline we should consider when assessing our own beliefs. We can't say if our beliefs amount to knowledge since we aren't omniscient, but we can say if they're reasonable.
Base_Six OP t1_jcc1lx9 wrote
Do you think it's possible to state that some of our beliefs are logical and well founded, even in absence of true knowing?
When you state that you "agree to the notion that we can't ever truly know something", that's the crux of why I want to discuss reasonable belief. I agree with that statement, but think we can nonetheless offer a solid basis for belief. If we can't truly know things, then using the term "knowledge" for those beliefs feels hollow.
As a less skeptical example, suppose I have a clock that says it's 2:00 PM, but unbeknownst to me my clock stopped fifteen minutes ago and the time is inaccurate. It doesn't make any sense for me to say that I know that it's 2:00, given that it is not in fact 2:00, but I can state that I have a reasonable belief since I don't have any evidence that my belief is inaccurate. Now, I also have a reasonable belief that clocks in general can be wrong; if it's absolutely critical that I know what time it is I should therefore make sure I'm not reliant on that clock as my sole source of information. I can discuss all of this under the umbrella of "reasonable belief" without issue, but can't really do the same from a position of knowledge.
N0_IDEA5 t1_jcc3094 wrote
Sure, we are omniscient in this scenario, but I feel there are ways to put us into the hypothetical. Perhaps later reports come out to show the president actually was in New York. I feel it irksome to say Norman knew the president was in Florida until the evidence pointing to him being in New York outweighed it, when Norman also had that clairvoyant feeling. But I do think the notion of reasonability is getting somewhere; I just still feel the pull of truth being necessary.
FrozenDelta3 t1_jccaqh3 wrote
> The problem is that if we accept the possibility that we're brains in jars, the vast majority of our information becomes unprovable.
Yes, this can occur from believing the answers to unanswerable questions. This isn’t unique to the philosophical brain in a jar scenario, it’s applicable to practically any question that is unanswerable. Basing logic on the answer to an unanswerable question leads to rabbit holes.
> I can't disprove the strong skeptical hypothesis, therefore I can't know anything that would be disproven by the strong skeptical hypothesis.
What happens when you try to prove a proposed answer to an unanswerable question? Why try to prove or disprove the “brains in a jar” scenario when it’s unprovable? Do you accept that some questions are unanswerable and that the answers to unanswerable questions are unprovable?
What we know is multi-factorial and begins on a subjective level with sound parameters and practices (like repeatability and other scientific methods) and is confirmed or verified on a shared level. Unprovable scenarios like "brains in jars" can be suggested, and they reveal more about unprovables than about a commonly accepted truth in a commonly accepted shared reality.
> I don't think it's an either/or between belief and knowledge. After all, anything I know is also something I believe.
If it is your agenda to say that you believe all that you know, then this is just your perspective. I know that I have 5 fingers on my right hand. If you understand and accept the meanings of the words "I have 5 fingers on my right hand", we occupy the same space in commonly shared reality, and you exist on a human wavelength, then upon proving to yourself that I have five fingers on my right hand, this information would become knowledge to you without requiring belief. And yes, even then, if your agenda is to base all you know on belief then you can do this, and I cannot disprove what you believe nor your ability to believe. But then this just says more about you as a person than it does about me or commonly shared reality.
> What I'm proposing here is that we can have solid justification for holding a belief even in absence of knowledge or proof that the belief is true. On the brain in a jar scenario, I'd say that I can't disprove the hypothesis but that I don't have justification for believing that hypothesis. Between the positions of belief and disbelief, I think that the reasonable position here is disbelief.
People can and do believe whatever they want to, and what people do believe is usually aligned with their bias and agenda.
> If I premise other beliefs on this non-knowledge disbelief of strong skepticism, I'd similarly say those beliefs are not knowledge, but nor are they just things that I happen to believe. They're "reasonable beliefs": the most reasonable positions I can take given the evidence I have, even if I don't possess knowledge.
They are unprovable regardless of reasonability.
Base_Six OP t1_jccc12a wrote
I think the pull of truth is what motivates Norman's introspection: what he ultimately desires is a true belief, not one which is coherent. At each step, he assesses the evidence in light of that goal, and constructs the belief that is the most reasonable approximation he can make of the truth.
Irrespective of knowledge, it feels correct to say that Norman had a reasonable belief that the president was in Florida until he got more evidence. It also makes sense for Norman to characterize his beliefs as reasonable without the need to invoke an outside observer. That belief is grounded in truth as a goal, but ultimately independent of the actual facts of the matter.
Suppose we say that Norman is actually a brain in a vat, and that the president was a figment constructed by alien epistemologists experimenting on his perception. This doesn't and can't change his beliefs since it doesn't alter his evidence: his beliefs are still reasonable since they're the best approximation of truth he's capable of constructing. Norman can never say for sure if any evidence he gets is actually indicative of the truth, but he's still capable of engaging rationally with his evidence in an attempt to seek it out.
Midrya t1_jcccwc8 wrote
Logical sure, but not well-founded. We can absolutely arrive at beliefs which logically follow from more fundamental premises that we hold, but to be well founded we would need to demonstrate that those more fundamental premises are themselves true.
I also don't really think we can offer a solid basis for belief. I can explain to you why I believe what I do, and I could go down all the way to the fundamental premises I hold to be true, but the one thing I cannot do is prove to you that those axiomatic assumptions are "more reasonable" than some other set of axiomatic assumptions, especially if your experiences are not compatible with my axiomatic assumptions.
And the clock thing is more highlighting an issue with the article itself: you claim that a common definition of knowledge is "justified true belief", but there is no evidence provided that such a definition is in fact common. Saying you know it's 2:00 after looking at a clock which says 2:00 but is actually not accurate is only an issue if you require knowledge to be justified and true. Since the entire issue is predicated on the definition of the word knowledge, I feel it would be kind of important to establish that the problematic definition is both an accepted and common definition, and a quick polling of dictionary.com, Merriam-Webster, and Cambridge shows that they don't really list a definition that is fully compatible with the one you are using. I normally don't like pulling out dictionary definitions in discussions because it feels pedantic in all the wrong ways, but since the entire issue is contingent on the common definition of the word knowledge, we need to reasonably establish that the definition being used is the common definition of the word knowledge.
Base_Six OP t1_jccz75h wrote
My central tenet for justified belief is basically this: that the evidence we have (e.g. sensory or memory evidence) is a reasonable basis for belief.
This isn't because I think we can argue that the evidence is true, but because we don't have an alternate basis for logically interacting with the world. Our evidence is singular, and we can either accept it with some degree of doubt or we have no basis whatsoever. If we were to accept a skeptical hypothesis instead, then we would have to logically conclude that we have no evidence of the external world and no means of logically interacting with it. I don't know that my evidence is true or accurate (and in fact have good reason to think that at least some of it is untrue), but it's more reasonable for me to accept it ceteris paribus than it is to reject it.
For definitions of knowledge, I would recommend looking up knowledge in a philosophical source, such as the Stanford Encyclopedia of Philosophy. JTB is far from the only definition of knowledge, but it's the core of externalist conceptions of knowledge, which are generally more popular than internalist ones (which have their own issues, such as lack of grounding to reality or the possibility of false knowledge). I stuck with JTB because it's the simplest version and I didn't want to devote 50 pages to different forms of knowledge in this paper.
The clock problem itself comes from Russell's "Human Knowledge" and has been discussed fairly extensively as a proto-Gettier problem, largely as a criticism of JTB.
FrozenDelta3 t1_jcedeex wrote
> Either we have to accept that we have minimal knowledge…
I’ve already accepted this. I would rather accept something is unprovable rather than make stuff up and then believe or disbelieve it’s true. This doesn’t mean I won’t entertain far out thoughts, rather my basis or starting point is one of knowing that we may never truly know the answer to unanswerable questions.
> If we accept the former, we need some other epistemological basis to describe the majority of what we would like to say we "know".
I think what I described in my last comment, about what I know and how you can know it too, meets my criteria. It works for me. It's basically what currently exists, ideally where everyone agrees to leave others to their own beliefs as long as it doesn't harm others. If one wants to drink the koolaid then that's on them; if they want to convince others to do that, then I have an issue.
Edit: Having said that, I understand the creative process behind discovery of the unknown, and how technology and what is commonly accepted as known get revealed and illuminated. If we limited ourselves to what is known then there may be little to no progress and advancement. I am mainly focused on pointing out unprovable philosophical scenarios and how they may prove to be good mental exercise; anything beyond working to understand them, anything that skews toward belief, and I'll pass.
Edit 2: I know things and am open to being wrong. I understand now that I'd rather write something off as unprovable than participate in choosing either belief or disbelief.
HamiltonBrae t1_jcgawei wrote
What do you mean by beliefs here? If a belief is "a subjective attitude that something or proposition is true", then I feel like a reasonable/justified belief that something is true isn't really that different from knowledge here. Obviously, the thing you believe has to be true to count as knowledge, but then you believe it is true by the definition of belief. If your evidence is strong enough or reasonable enough that you subjectively have no doubt, then to me that says you would logically believe that you have knowledge of it, so is there much practical difference? In cases where you have less confidence or certainty in the evidence then yes, you may not believe you have knowledge because you are obviously not sure; but then again, I don't think someone who is engaging in the "folly of knowledge" which you are arguing against would say they have knowledge either, because they are unsure: the stances are hard to distinguish. So, even if knowledge here is defined by JTB, I may not practically be able to get rid of the belief in knowledge; I believe I have knowledge in certain circumstances where subjective uncertainty approaches zero (e.g. where my house is). Your article's view ends up with something like a Moorean paradox: claiming to be "discarding knowledge" but still logically ending up believing in it in the same cases someone normally would. Surely then the problems of skepticism about knowledge remain when using the term belief as defined above, if you believe that you have knowledge (regardless of whether you actually do under JTB)?
Regarding your skeptical hypothesis: you say we shouldn't believe the strongest skeptical hypotheses because they are "unactionable". I will give you that one, though I think it's conceivable for someone to have weird/incoherent beliefs like that and still function. The unactionable thing doesn't really seem to affect most of the weaker skeptical hypotheses at all, though; believing (or even just being unsure whether) you live in a simulation, or that an evil demon is deceiving your senses, seems like something that doesn't contradict "actionable" beliefs at all; it's still possible to have a normal life in a simulation.
Also, it seems that what counts as reasonable evidence is subjective. Your examples kind of preach to the choir of someone with relatively normal beliefs, but could you actually convince someone who holds some of these skeptical hypotheses to change their beliefs? Probably not, if their beliefs seem reasonable to them. Their beliefs and what counts as evidence may seem arbitrary and weird, but so might yours to them. They might ask about your "falsifiable hypotheses": why you can be so sure that there are no bees in the suitcase, or how you know your test to check the broken watch is reliable. I feel like ultimately you would end up resorting to things like "because it happened before" or "because I remember these things tend to happen"; then they might ask how you can show that this memory or knowledge is reliable, and that opens the door for them to say that your beliefs are just coming out of nowhere, or that you haven't shown or justified that they are definitely true and that the skeptic should believe them. I think if you cannot convince the skeptic then you haven't truly solved the problem, unless you are implying in the article that the skeptic should believe in their skeptical hypotheses based on their "reasonable beliefs". I guess that's fine, but it's unintuitive to me to pit these different hypotheses against each other if the message is just, essentially, believe whatever you think is reasonable. Neither would there seem to be much consequence of someone simply entertaining their uncertainty about an evil demon, or even crossing the threshold to belief, if doing so didn't have any effect on their "actionable" living.
I think an interesting point also is that these types of skeptical hypotheses are held by real people in some sense. Some people genuinely believe we are in a simulation, some people believe that the universe is purely mental (or physical), and many, many people believe in some kind of God. Is God that much different from a (non)evil demon? Especially something like a creationist God, where all of the evidence for evolution and for the universe being billions of years old is just wrong.
Edit: Following from the last paragraph, it's also interesting to think how a Christian crisis of faith is kind of analogous to the skeptical problems raised by Descartes, but inverted. Christians are faced with the problem that it is conceivable that their world could have been created without the existence of a (non)evil demon, and so everything that follows in their beliefs could also be false.
Base_Six OP t1_jcgtaf8 wrote
I think there's space for an everyday sort of knowledge if we define it as "beliefs in which I'm highly certain, and for which I'm highly certain I will not encounter contrary evidence." That feels like it falls far short of the general philosophical constructions of knowledge, though. For instance, under that sort of construct I can "know" things that are false, or know things that are contrary to my other beliefs. It's a useful shorthand, but not the same thing as the Knowledge of Descartes, Russell, or Goldman. It's a far cry removed from JTB, in any case.
There are people who subjectively have no doubt that the world is flat. Does that mean they have knowledge that the world is flat? Similarly, I have had dreams in which I've had zero subjective doubt that what I was experiencing was reality. Does that mean I know my dreams to be reality? I don't think these sorts of edge cases are a problem for a colloquial knowledge-as-strong-belief sort of construction, but I think they speak to its frailty as a philosophical construct.
I would define "reasonable" as the conclusions you come to that you subjectively feel to be most logical. These may not actually be logically sound, but we have to make do with the best we're capable of. If there's better logic out there that I don't have access to, it's irrelevant to me when I ask the question of what I ought to believe.
The caveat here is that I'm premising that statement on the notion that said logic is inaccessible. If I gain access to new logic, it would be unreasonable for me to discard it out of hand because it disagrees with my conclusions. This applies to most conspiracy theorists: they aren't unreasonable because they've come to false conclusions, they're unreasonable because they've supported their false conclusions on the basis of cherrypicked and/or fabricated evidence that's extensively contradicted. Ignoring those contradictions and ignoring the baseless construction of those beliefs is what renders them unreasonable.
If someone believes the Earth is flat because they're a child in an isolated community that's been told by trusted teachers and parents that the Earth is flat, they're reasonable in holding that belief. If someone is insistent in believing the Earth is flat when confronted with the mountains of counter-evidence and thousand year old proofs of its roundness, those same beliefs are no longer reasonable.
FrozenDelta3 t1_jch4jg0 wrote
Allow me an opportunity at a different approach.
The incompleteness theorem states that in any reasonable mathematical system there will always be true statements that cannot be proved. Responses to this theorem have been varied. Some people have proposed that if we demand that the standard of proof in the sciences be mathematical certainty, and math is not 100% entirely provable, then absolutely nothing is certain. While the incompleteness theorem presents a problem for those who want math to be entirely provable, the theorem only applies to negative self-reference. So, as of now, math is provable except in this specific paradoxical self-referencing scenario, yet people still claim that all math is now suspect despite its accepted provability.
I would rather judge a situation's provability first, before participating in judgments of its likelihood of occurring or being real. It's unprovable whether we are or are not brains in jars, and that is my ultimate position, but if I were forced to choose I would lean unlikely. Do I believe it's unlikely? No, I think it's unlikely. First and foremost, it's unprovable either way.
Would you happen to have access to this journal?
While many philosophers may agree that knowledge depends on true belief, I see that not everyone does. It seems to be a semantics game, each side clamoring for their specific word choice to become primary.
> A philosophy professor of mine once asked me if I knew that George Washington crossed the Delaware.
My response would be "it's been mentioned in history books, so there may be truth to this story." If the professor pushed me to choose belief or disbelief in the story, I would push back against participating in belief or disbelief. I would much rather report on the origin or state of communicated beliefs than participate in choosing to believe or disbelieve.
In your second example you speak as if you may have read about Washington's crossing being propaganda or intentional misinformation, and then how you could believe what you read. Again, I would state what is written, or even accepted by others, without taking the next step of believing (or disbelieving) it myself.
Edit: I can speak about things without participating in believing or disbelieving.
Base_Six OP t1_jchl9sn wrote
I think it's perfectly reasonable to abstain from forming a belief, but I think there's plenty of situations in which it's reasonable to form beliefs even in absence of proof.
This is the case in many ordinary situations. Suppose I meet a couple and they tell me they're married. They wear wedding rings and act like a couple. I can't prove that they're married, but I have a substantial amount of evidence suggesting it's the case and no counter evidence. There are plenty of scenarios I could concoct which could be unprovable, such as that they're foreign spies or visiting aliens with a sham marriage as part of their cover story.
I don't encounter these scenarios and abstain from drawing conclusions on the basis of their unprovability: I construct beliefs on the basis of a preponderance of evidence. Colloquially, I might even say "I know they're married", even if I can't prove true belief.
I think a major difference between math and everyday epistemology is that the vast majority of math I encounter is provable, while the vast majority of everyday "knowledge" is premised on things that are not.
Brian t1_jck9f8l wrote
I feel this article is just misdefining or misunderstanding knowledge, and conflating it with certainty. It makes claims like:
>In contrast with general conceptions of knowledge, whether or not a belief is reasonable is not contingent on truth.
But what on earth about the general conception of knowledge says reasonableness is contingent on truth? It says that knowledge is contingent on truth, but the reasonableness of a belief is a separate matter. I think what is going on is that the author is confusing the map and the territory wrt the "true" criterion of knowledge.
>If knowledge is justified true belief, then surely we can have justified belief regardless of whether we can state authoritatively that our belief is true
Ie. this seems to mistake the "true" criterion of JTB for "can state authoritatively that it is true", but one is a statement about the thing itself, and one is a statement about our own mind. True means the claim is true, not anything about our beliefs or what we can state about the claim (that's what the "J" and "B" parts are for).
>again without need to add “truth” to our definition
No - truth is a vital part of the definition, but it simply isn't what the author thinks here. You need truth to make the claim refer to something out in the world, rather than mainly in our heads. And that truth criterion matters. No matter how confidently we assert a claim, the world in which the claim is actually true differs from one where it is false.
Considering something to be knowledge doesn't require certainty, only the same regular degree of confidence belief requires. The truth criterion only distinguishes between what someone thinks is knowledge and what actually is, in the same way that there's a distinction between something someone thinks is round and the shape it actually is. Ie. if we find that something we justifiably believed turned out to be false, we say "I thought I knew it", not "I used to know it, but now I don't" - our claim of knowledge was simply mistaken, just like any other claim could be. That doesn't mean we need to transfer that "true in fact" criterion somehow onto our beliefs about that, and indeed to do so would be to destroy the whole point of that criterion.
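For reference, the JTB analysis being debated here can be stated schematically (a textbook gloss, not a quotation from the article):

$$ S \text{ knows that } p \iff p \text{ is true} \;\land\; S \text{ believes that } p \;\land\; S \text{ is justified in believing that } p $$

Only the first conjunct is about the world; the other two are about S's mental states, which is the distinction the comment above is drawing.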
Base_Six OP t1_jclbumb wrote
For strong skeptical confrontations of knowledge, it's not a question of whether we can be certain but of whether we can actually be justified in stating that many of our core beliefs are true. Either we run into infinite regress or into things for which we don't have a solid basis for stating truthfulness, such as that our senses are a reasonable basis for knowledge or that our memories have a degree of reliability. Moving from a position of knowledge to a position of reasonability allows for forms of justification that are rooted neither in truth nor in cohesion, which is what I'm trying to present in this paper. (Admittedly, I may not be doing a very good job of it!)
Beyond that, we can believe anything is knowledge, regardless of whether it actually is. I can believe I possess knowledge even if my premises are not knowledge, but at least for externalist conceptions of knowledge like JTB this is not generally considered to constitute "knowledge" in the philosophical sense.
Brian t1_jcldoui wrote
>but of whether we can actually be justified
Sure, but that's not an objection to the conception of knowledge; it's an objection about what criteria we consider to constitute justification. And pretty much no standard model of knowledge states that the justification criterion should be certainty, so I think you're targeting the wrong thing in your definitions here. In general, knowledge is held to be defeasible - you can be wrong about something you believe you know, and change your mind as to whether it was really knowledge. Certainty is not a requirement.
>Moving from a position of knowledge to a position of reasonability
As such, this is misunderstanding knowledge: those two are not at cross-purposes: "reasonability" is generally part of the justification part of knowledge - and there's certainly room to debate on exactly what makes something reasonable or constitutes a justified reason to believe something - but framing this as arguing against knowledge is to misunderstand what knowledge is about.
>allows for forms of justification that are rooted neither in truth nor in cohesion
Justification isn't rooted in truth for JTB - the truth criterion is entirely separate. Certainly we believe it to be true (though not with certainty), since that's basically what belief is, but the truth criterion is considered entirely separate from justification.
>but at least for externalist conceptions of knowledge like JTB this is not generally considered to constitute "knowledge" in the philosophical sense.
Certainly. If something is false, it's not knowledge. But I think that's absolutely something we should definitely be concerned about in our epistemology: If you believe, and are absolutely certain about something, I think there's a rather important difference depending on whether that belief is actually true. If Alice reasonably believes X, and Bob reasonably believes ¬X, I think there's something more to be said than "Well, both these people hold beliefs" - the question of which is right seems important.
Base_Six OP t1_jclim6g wrote
The last paragraph is what I disagree with. Suppose that there's a truth value to X, but that truth value is inaccessible to Alice and Bob. We as outside observers can state whether Alice or Bob has knowledge, but that observation isn't relevant to the mental processes of Alice and Bob. We can say "they both believe they have knowledge", but that isn't particularly interesting.
Suppose that Alice has access to evidence {A, B} and Bob has access to evidence {B, C}. If {A, B} ⇒ X and {B, C} ⇒ ¬X, stating that both of their beliefs are reasonable says more than "both of these people hold beliefs": it says that both people hold the best beliefs that can be constructed on the basis of their evidence. It would be unreasonable for Alice to believe ¬X or for Bob to believe X.
Suppose, furthermore, that {A, B, C} ⇒ X, but ¬X is ultimately true. If Bob gains access to A, he ought to believe X, and X would be the reasonable belief for the premises he holds. Saying that Bob knew ¬X and lost that knowledge when he changed his belief centers our assessment of Bob's mental processes on the wrong thing: what Bob should be concerned with is what conclusions he can draw on the basis of his available evidence, and we should concern ourselves similarly with the best conclusions that can be drawn from Bob's reference point, not from the reference point of an outside observer.
Perhaps another way of stating this would be to say that Bob ought to believe he knows X as a result of {A, B, C}, and that whether he knows X (or that he cannot know X because ¬X is true) is irrelevant.
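To make the structure of that example concrete, here is a minimal toy sketch in Python (the entailment table and every name in it are invented for illustration; nothing here comes from the article itself):

```python
# Toy model of the evidence sets above: an agent's "reasonable belief"
# is whatever conclusion their current body of evidence entails.
# Hypothetical entailment rules, mirroring the scenario in the comment:
# {A, B} => X, {B, C} => not-X, {A, B, C} => X.
ENTAILMENT = {
    frozenset({"A", "B"}): "X",
    frozenset({"B", "C"}): "not-X",
    frozenset({"A", "B", "C"}): "X",
}

def reasonable_belief(evidence):
    """Return the conclusion entailed by this exact body of evidence."""
    return ENTAILMENT.get(frozenset(evidence), "suspend judgment")

alice = {"A", "B"}
bob = {"B", "C"}
print(reasonable_belief(alice))  # X
print(reasonable_belief(bob))    # not-X

# Bob gains access to A: his reasonable belief flips to X,
# whether or not X is in fact true.
bob.add("A")
print(reasonable_belief(bob))    # X
```

The sketch mirrors the point of the comment: reasonableness is a function of the evidence the agent actually holds, while the truth of X is fixed independently of anything in the table.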
Brian t1_jclldxh wrote
>but that observation isn't relevant to the mental processes of Alice and Bob
Sure. From their context, they may not be able to distinguish which is correct. But that doesn't mean it doesn't matter. When that context changes (eg. they learn something new), it matters a lot, and it's pretty relevant to their future actions and predictions. Thus it's an important aspect we want to distinguish in our epistemology. Ie. if Bob learns he was wrong, he shouldn't think his new situation is just as good as his old one just because in both cases he held "the best beliefs that can be constructed on the basis of his evidence". Something meaningful has been said, and that difference is important to encapsulate when discussing epistemology.
This distinction is independent of the internal belief states, but that's exactly why it's so important: we don't just want to talk about internal states, we want to relate these to the external reality we're talking about. The world where my belief is wrong is different in a very important way from a world where it's correct - a way I care very much about.
>Saying that Bob knew ¬X and lost that knowledge when he changed his belief centers our assessment of Bob's mental processes on the wrong thing
This is the wrong framing though. Bob never knew ¬X: ¬X was false. He thought he knew ¬X, but was mistaken. And that mistakenness is something Bob would care about, and should consider meaningful to his epistemology. If he didn't care about truth, it'd be just as good for him to avoid learning anything, because he'd be in the same state either way wrt holding a reasonable belief. But in reality, if he's wrong, he'll want to find this out, and consider his situation improved when that happens.
>what Bob should be concerned with is what conclusions he can draw on the basis of his available evidence
The reason Bob should be concerned with this is solely because Bob wants his beliefs to be true ones. You can't discard that aspect.
>Perhaps another way of stating this would be to say that Bob ought to believe he knows X as a result of {A, B, C}, and that whether he knows X (or that he cannot know X because ¬X is true) is irrelevant.
Bob ought to believe X, but that certainly doesn't mean whether he knows it is irrelevant - epistemic contexts can change, and ones where our beliefs reflect truth are more valuable to us than ones that don't.
HamiltonBrae t1_jcpqfxt wrote
Sorry, late reply;
>It's a far cry removed from JTB, in any case.
Maybe I wasn't clear enough, but my point was that, using that definition of belief, someone should logically believe that they have justified true beliefs if they believe that some fact is true and think that that belief is justified. If you believe in justified true beliefs then surely it undermines the paradigm which wants to get rid of knowledge. The knowledge and non-knowledge views would be indistinguishable from a person's perspective from a practical viewpoint. My point is then not so much about whether knowledge actually exists in the JTB sense but whether someone should logically believe they have knowledge in the JTB sense under your scheme. I see you have specified your definition of reasonable, though. I assumed that reasonable was more or less synonymous with justification, since at face value when I think of someone having a reasonable belief I think they are justified in it, but maybe I should have anticipated some difference. Thinking more deeply, though, I guess justification is complicated and I don't think I can define too well where it starts and ends.
At the same time, I don't think this affects my argument too much; but again, the more I think about this, the more complicated it seems to get. We can talk about someone believing something is true when they have no uncertainty; we can also talk about someone believing their belief is reasonable or justified. Presumably they wouldn't assent to a belief that they didn't think was reasonable, but if they were open to believing that some of those reasonable beliefs were justified then I think they would again be forced to believe that they had knowledge. Neither do I think that it differs from the knowledge position you argue against, since someone working under the assumption that knowledge was possible would also not believe they have knowledge if they didn't believe their belief was totally justified. So as long as a person believes that beliefs can be justified, they should logically believe that they have knowledge.
>This applies to most conspiracy theorists: they aren't unreasonable because they've come to false conclusions, they're unreasonable because they've supported their false conclusions on the basis of cherrypicked and/or fabricated evidence that's extensively contradicted. Ignoring those contradictions and ignoring the baseless construction of those beliefs is what renders them unreasonable.
>If someone believes the Earth is flat because they're a child in an isolated community that's been told by trusted teachers and parents that the Earth is flat, they're reasonable in holding that belief.
If someone holds a belief reasonably because they have been taught it and don't know better, then why can't someone have a reasonable belief from cherrypicked/fabricated evidence? I think these two sources of knowledge are blurry because, on the one hand, the taught knowledge in the isolated community is going to be due to error/fabrication/cherrypicking/deception, while on the other hand someone who holds their views despite counter-evidence is going to subjectively feel that they are being reasonable, and they cannot help that. They feel that the counter-evidence they are shown is inadequate, just as the non-conspiratorial person would feel about the evidence they are given by the conspiracy theorist; if the evidence doesn't seem reasonable to them, how can they help that? In their logic, what they have been shown just doesn't count as counter-evidence. In your words, they come to conclusions about the counter-evidence that they feel subjectively to be most logical. These may not actually be logically sound, but they have to make do with the best they're capable of.
Now, I do think that some beliefs seem more unreasonable to me than others (like conspiratorial ones), but it doesn't seem straightforward to defeat a skeptic purely with reason. Neither does there seem to be a straightforward divide between reasonable and unreasonable. For instance, some Christians may think their views are totally reasonable and conspiracy theorists' views are totally unreasonable; but then again, I might think believing in God is totally unreasonable. It doesn't seem sufficient to resolve the problem of skeptical hypotheses purely by "reasonable beliefs" if a person, specifically a skeptic, thinks the skeptical hypothesis is reasonable.
Base_Six OP t1_jd49j0w wrote
You can believe that you have JTB knowledge, but at that point what we're talking about is no different than any justified belief we possess. After all, we don't hold beliefs that we consider false. I think you could even reasonably describe a "Reasonable Belief" as one which we ought to believe is justified and true, or, to say it differently, one that we believe is JTB knowledge.
The difference comes in terms of how we view a belief that is false. Under a JTB conception of knowledge, we usually say that someone can't actually know something that is false. While you can believe that you know that the Earth is flat, you can't actually know it because it's round. Under a Reasonable Belief paradigm, you can have a reasonable but incorrect belief. If someone believes something that's incorrect because they've got deficient evidence, that doesn't make their belief unreasonable.
What makes something unreasonable is if the justification we use to construct that belief isn't logically sound. For instance, cherrypicking evidence to support a belief is logically fallacious, so any belief that's supported based on cherrypicked evidence is unreasonable. This is the case even if the belief is true: coming to the correct conclusion doesn't mean we used logically sound methods to arrive at that conclusion. The difference between being taught something that's based on cherrypicked evidence and doing the cherrypicking yourself is that in the former case, you don't have the evidence necessary to tell that there's cherrypicking happening. That said, if we're aware that evidence and teaching can be flawed then we logically ought to check our sources. We should understand how our sources constructed their beliefs, as much as possible, and grant credence or disbelief to those sources appropriately.
Different people ought to come to different conclusions about a belief if they start with different evidence or different premises. Conspiratorial thinking is what renders a belief unreasonable, not the conclusions it generates.
HamiltonBrae t1_jddu4gn wrote
>You can believe that you have JTB knowledge
Yes, I just think that under the reasonable belief paradigm this is a contradiction. I think the idea of believing certain things are true has to be given up, or surrogated with something else, like the belief that something is empirically adequate. The contradiction could just be ignored, I guess, but arguably that also undermines the point of doing this kind of thinking, which I think is to reduce things like that; after all, why was the reasonable belief paradigm asserted in the first place? I think everyone probably inevitably tolerates some level of contradiction or paradox in their views, though.
>The difference between being taught something that's based on cherrypicked evidence and doing the cherrypicking yourself is that in the former case, you don't have the evidence necessary to tell that there's cherrypicking happening.
I don't think you have the evidence to tell there is cherrypicking happening when you do it yourself either though. You think your picking of evidence is completely reasonable and isn't cherry picked at all. On the contrary, you will think the opposition are cherry picking evidence and ignoring your evidence.
>That said, if we're aware that evidence and teaching can be flawed then we logically ought to check our sources.
Yes, but we have more confidence in some sources or evidence than others, to the point that we don't think we need to check. We would consider this reasonable, yet it's possible the confidence is misplaced (and often is).
>and grant credence or disbelief to those sources appropriately.
And what is appropriate will seem different to different people.
>Different people ought to come to different conclusions about a belief if they start with different evidence or different premises. Conspiratorial thinking is what renders a belief unreasonable, not the conclusions it generates.
It's hard to see what separates conspiratorial from reasonable here, because they are just coming from different evidence and premises too.
Base_Six OP t1_jc8uiu3 wrote
This is a topic I've been thinking about for a while: the notion that if we can identify beliefs that we ought to hold, we can utilize them in a knowledge-like manner even if they fall short of being knowledge themselves. Furthermore, if we can state with some degree of certainty that our beliefs are the ones that we ought to hold, whether or not they are knowledge is far less relevant.
This is also my first go at writing and sharing philosophy; I'm looking forward to hearing your thoughts!