Philosophies of Philosophy: Abstracts

Beyond the ‘Analytic-Continental’ Gap: The Converging Methods in the History of Philosophy ANDRIJA SOC, Belgrade University

The questions of philosophical methodology have, for a long time, been set along the lines of the ‘Analytic-Continental’ divide. However, if we look closer at various philosophers, we will recognize that there are two possible divisions that are explanatorily more fruitful: 1) Historical/Conceptual and 2) Systematic/Particularist. Each member of each pair can then be further associated with different epochs and different philosophical disciplines. For example, the majority of philosophers today, whether they come from the Continent, the English-speaking world or elsewhere, favor answering particular questions and formulating answers to different particular dilemmas by way of conceptual analysis. On the other hand, one might say that pre-20th century philosophers favored systematic approaches, with historical analysis being as relevant then as it is today. In this paper, I will try to show that these two dichotomies, far from dividing or even evaluating philosophical approaches in the manner the ‘Analytic-Continental’ divide is sometimes thought to do, merely situate various approaches to dealing with philosophical problems. Furthermore, my aim will be to show how even those philosophers we regard as purely systematic, like the German idealists, still use conceptual analysis and an ‘epistemology first’ approach as their stepping stone. The Critique of Pure Reason is perhaps an all too familiar instance of that. In the paper, I will thus examine two further instances we would not prima facie expect to contain a similar approach. The first is Hegel’s Phenomenology of Spirit, a necessary epistemological introduction to Hegel’s absolute idealism. The second is Schelling’s philosophy.
My final goal is to show that the basic tenet of analytic philosophers – the proper way to deal with philosophical problems is conceptual and linguistic analysis – is fundamentally true, but that it was implicitly also recognized in systems the reception of which, perhaps paradoxically, led to the so-called ‘analytic turn’.

Later Wittgenstein as Literature DAVID EGAN, Oxford

The methodology of Wittgenstein’s later philosophy has perplexed commentators since the first publication of the Philosophical Investigations in 1953. Wittgenstein disavows anything that could pass as a philosophical thesis and claims that philosophy should consist of description rather than explanation. His own work foregoes logical argumentation, and his writings are instead full of strange parables, striking images and gnomic aphorisms. This paper reads Wittgenstein’s method as exemplifying a distinctively literary approach to philosophy. Wittgenstein employs tools more familiar to literary fiction than to traditional philosophy, but he puts these tools to work in a philosophical context.

Wittgenstein deploys what he sometimes calls “pictures” or “objects of comparison” where other philosophers might be inclined to use arguments. These objects of comparison often involve fictional scenarios, where Wittgenstein imagines a tribe with unusual practices or an unusual encounter between a teacher and a pupil. These fictional scenarios are unlike traditional thought-experiments in that they do not serve as data that contribute to an argument. His aim is not to persuade his readers, but rather to prompt them “to regard a given case differently: that is, to compare it with this rather than that set of pictures. I have changed his way of looking at things” (Philosophical Investigations, §144). Wittgenstein’s pictures do not tell us how to see matters, but aim instead at destabilizing our habitual ways of looking at them. Like literary fiction, his text does not dictate its own interpretation, but rather offers open-ended prompts to re-evaluate what we ordinarily take for granted.

Wittgenstein’s later philosophy offers a searching critique of philosophy as it is traditionally understood, and so he needs tools that are not themselves a part of this philosophical tradition. In adopting the techniques of literary fiction, he challenges traditional philosophical methodology from outside that methodology. 

The twin track strategy CHRIS DALY, Manchester

Many philosophers (including David Lewis circa 1973 and the neo-Fregeans) employ an intellectual division of labour.  Philosophy tells us what the truth conditions of various philosophically interesting sentences are. E.g. that sentences containing numerals are sentences containing singular terms putatively referring to numbers; sentences about what could be are sentences quantifying over possible worlds; and so on. Some discipline outside of philosophy (such as mathematics or common sense) tells us that many of these sentences are true. The purported result is that such philosophically controversial entities as numbers and possible worlds have been shown to exist. I criticise the twin track strategy and try to show that the two components conflict, rather than complement each other.

Intuitions and Why They Matter. A Response to Cappelen EUGEN FISHER, East Anglia

The view that philosophers rely on intuitions as evidence for philosophical theories has become a central tenet of meta-philosophical debate, shared by proponents of such radically different approaches as methodological rationalism, virtue epistemology, naturalised conceptual analysis, naturalised epistemology, and experimental philosophy. Herman Cappelen (2012) has sought to expose this central tenet as a philosophical self-misunderstanding. Through a series of case studies on generally accepted paradigms of intuition-driven philosophising, Cappelen develops an empirical argument to show that philosophers do not rely on judgments vel sim. with any of the features frequently attributed to intuitions in philosophical debate. The present paper follows Cappelen’s call to engage in such ‘empirical meta-philosophy’, but argues that the relevant intuitions are not what philosophers commonly think they are, and shows, pace Cappelen, that they are even more relevant than generally assumed.

The paper presents an aetiological conception of intuitions that is standard in cognitive and social psychology but largely neglected in meta-philosophical debate. It re-analyses one of Cappelen’s main exhibits, viz. Foot’s (1967) and Thomson’s (1985) discussion of trolley cases, in the light of experimental work on moral intuitions (incl. Haidt 2007, Schnall et al. 2008, Greene et al. 2008, Suter and Hertwig 2011). On this basis, the paper argues, first, that the authors at issue crucially rely on what psychologists call ‘intuitions’, second, that the authors accord an important normative role to these intuitions (though this is not happily described as ‘reliance on evidence for theories’) and, third, that the clash of intuitions with reflective judgments and beliefs generates a characteristic kind of problems, illustrated by the trolley problem. Whether or not philosophers rely on intuitions as evidence for their theories, intuitions generate characteristically philosophical problems and constrain solutions to them.

The philosopher and the grapes: on descriptive metaphysics and why it is not “sour” metaphysics GIUSEPPINA D’ORO, Keele University

In Aesop’s famous fable a fox covets some high-hanging grapes. The fox jumps and jumps trying to reach them, but after several unsuccessful attempts it gives up, tail between its legs, muttering to itself “they are not ripe anyway”. The expression “sour grapes” has since come to signify the psychological self-defence mechanism that enables one to accept defeat without explicitly acknowledging it. Admitting failure is painful, so it is better, like the fox, to persuade oneself that it was not worth having that which one was trying to achieve. Are descriptive metaphysicians like the fox in Aesop’s fable? Have they decided, after several unsuccessful attempts to grasp being qua being, that the mind-independent reality metaphysics is ordinarily supposed to pursue is sour, and thus not worth having, rather than admitting cognitive failure? In other words, are descriptive metaphysicians being somewhat dishonest in arguing that metaphysics ought to concern itself with being as it is experienced rather than being qua being? This is how revisionary metaphysicians portray their descriptive counterparts when they accuse them of being engaged in “mere” conceptual analysis or “just” ordinary language philosophy. On this view, there is a certain kind of dishonesty involved in self-deception, and this is precisely what descriptive metaphysicians are guilty of when they recommend epistemic humility. This paper argues against this view of descriptive metaphysics. By dismissing descriptive metaphysics as a psychological response to cognitive failure, revisionary metaphysicians fail to address the metaphilosophical challenge descriptive metaphysics presents them with. There is a metaphilosophical debate to be had, but in order to have it one must first undermine the view that descriptive metaphysics is a case of “sour grapes”.

The philosophy of language’s impact on the language of philosophy LARS INDERELST, Universität Düsseldorf

The question of how philosophy should be conducted has been of continued interest to philosophers since antiquity. This paper focuses on an aspect of philosophical methodology that has been (with a few exceptions such as Eucken (1879), Adorno (1973) and Barbaric (2011)) strongly neglected in philosophy and philosophical research: philosophical terminology. I argue that the influence between philosophical thought and the linguistic means of expressing it runs in both directions. The first part of the talk offers a general sketch of the ways the philosophy of language and the language of philosophy interact. Different levels have to be distinguished. Philosophies of language can primarily be characterized by their attitude towards a) natural and b) formal language as means of representing reality. The role of language can either be discussed explicitly or implied indirectly. The way language is actually used can be congruent or incongruent with one’s own philosophy. Apart from that, other issues, such as the question of to whom philosophy should be accessible, are important. In the second part this sketch is filled in with reference to major philosophers. For example, Plato’s criticism of language prompts him to use dialogue, to stay away from a fixed terminology and to introduce common words with new philosophical meanings. Aristotle’s belief in language as a sufficient means of representing reality, on the other hand, allows him to introduce plenty of new terms and even rules for their systematic formation. Descartes, even though he declares his philosophy to be a new start, uses the same terminology as his predecessors; Heidegger combines the same claim with an innovative terminology. The third part of the paper reflects on the importance these observations might have for historians of philosophy. I argue that terminology research can supplement and revise the account we give of particular thinkers.

Naturalised metaphysics or metaphysics as metaphor JACK RITCHIE, Cape Town

Metaphysics, like science, is a descriptive enterprise. It aims in some broad sense to tell us how things are. Science is lauded by many (perhaps most) as having unveiled deep and interesting facts about our universe. Metaphysics, in contrast, is not. Scientists agree about a great deal. Metaphysicians do not. What should a contemporary metaphysician make of the apparent failings of her discipline when compared to the sciences? One sort of response has been to try to bring metaphysics in some way closer to the sciences: to undertake something often called naturalised metaphysics. Two sorts of (entirely compatible) strategies are offered. First, metaphysicians claim to have learned a methodological lesson from science: the best way to settle theoretical disputes is to use the same abductive methods as are used in the sciences. Second, good metaphysics should be guided by the content of our best scientific theories. The best metaphysical view is the one which offers a unified account of our successful scientific theories. Views like physicalism and ontic structural realism (OSR) fall into this class. I argue that the abductive methods advocated by metaphysicians are nothing like those employed in the sciences and that physicalism and OSR are empty doctrines. In short, all these attempts to naturalise metaphysics fail. I suggest a better way to think of the relation between science and metaphysics is to think of metaphysical statements as akin to metaphors whose role is inspirational rather than descriptive. I illustrate the idea by considering some of Kepler's metaphysical ideas.

Hybrid Virtue Epistemology and the A Priori BENJAMIN JARVIS, Queen’s Belfast

How should we understand good philosophical inquiry?  Ernest Sosa has argued that the key to answering this question lies with virtue epistemology.  According to virtue epistemology, competencies are prior to epistemic justification.  More precisely, a subject is justified in having some type of belief only because she could have a belief of that type by exercising her competencies.  For this reason, virtue epistemology is well positioned to explain why, in forming false philosophical beliefs, agents are often less rational than it is possible to be.  These false philosophical beliefs are unjustified---and the agent is thereby less rational for having them---precisely because these beliefs could not be formed by exercising competencies.  But, virtue epistemology is not well positioned to explain why, in failing to form some true philosophical beliefs, agents are less rational than it is possible to be.  In cases where agents fall short by failing to believe philosophical truths, the problem is not that they have unjustified beliefs, but that they lack justified ones.

I contend that what makes philosophy an a priori discipline is precisely that philosophical truths are often such that in failing to believe them, agents (at least typically) do fall short as rational inquirers.  The consequence is that virtue epistemology also cannot explain why philosophy is an a priori discipline. 

In response to these related shortcomings, I recommend a hybrid virtue epistemology where competences play an important role in understanding the epistemic assessment of belief tokens, but not the epistemic assessment of belief types. I close by briefly considering how a hybrid virtue epistemology should understand the nature of competencies in light of the nature of the a priori.

Which turn to take in the philosophy of understanding? JONATHAN LEWIS, Royal Holloway University

What do we mean by a ‘theory of x’? If, as Michael Dummett maintained, a ‘full-blooded theory of meaning’ ‘must give an explicit account, not only of what anyone must know in order to know the meaning of any given expression, but of what constitutes having such knowledge’, then can we ever talk about meaning in terms that render it truly comprehensible? For example, Richard Rorty considered the ideas of literal language and metaphorical language to be inherently distinct and largely unable to communicate with each other. However, even for his schematic distinction to work, we already require some means of drawing a clear line between literal language and language as metaphor. Is the language with which we draw the line itself literal or metaphorical, and what are the criteria for deciding this without invalidly presupposing the very distinction one is supposed to be establishing? This decision cannot, as Rorty wished to make it, always be made pragmatically, in terms of the problem-solving aspect of literal use based on established language games as against the 'meaningless' use of metaphorical language in expressive innovation. Is it not possible to solve problems via metaphors, when, for example, a new metaphor provides the solution to a cognitive dilemma? Such metaphors can become literalised and (metaphorically) ‘die’ as they are incorporated into an existing conception, but the moment of literalisation will itself depend upon shifting boundaries between the literal and the metaphorical. Clearly the validity of these contentions depends upon one's conception of truth, which, ultimately, falls back on questions of understanding – how is it that assertions become literalised in the first place? As this paper seeks to explore, to explain the truth of assertions we need to hermeneutically problematise a number of the presuppositions both of pragmatism and of those modes of thought that appeal to the idea of a ‘theory of x’.

How to Do Things with ‘Anglobalisation’ OISÍN KEOHANE, Postdoctoral Fellow at the University of Edinburgh & Senior Research Associate at the University of Johannesburg

The English language has become, in the current era, the dominant language of philosophical communication – not only are philosophers publishing more in English than ever before, but it has become the dominant ‘pivot’ language in translations (i.e. to translate between any pair of languages A and B, one translates A to the pivot language P, then from P to B). We can thus speak of the ‘Anglobalisation’ of philosophy as an event akin to the Latinisation of philosophical discourse in the first millennium. Indeed, concerns with the globalisation of English have become so acute in the last fifty years that various attempts at government-led linguistic protectionism have been triggered. But this topic also raises vital methodological questions both in and for philosophy. Central to this is the problem that English linguistic hegemony can serve, on the one hand, as a potentially useful vehicle for the universal diffusion of philosophical discourse, and on the other hand, that in the absence of a genuine ‘universal characteristic,’ the dominance of one ‘natural’ tongue in philosophy can threaten to produce unjust and particularistic (‘national’) forms of dogmatism that philosophy is supposed to challenge, and from which it is meant to liberate us.

The problem described above is highlighted in fascinating ways by Barbara Cassin’s edited 2004 volume, Vocabulaire européen des philosophies: Dictionnaire des intraduisibles [European Vocabulary of Philosophies: Dictionary of Untranslatables], which is due to appear in English in 2014 (Princeton University Press). My paper will question the (national) framework that governs Cassin’s project, and her distinction between ‘Globish’ and English, but it will also seek to identify what, methodologically speaking, is of value in the Vocabulaire européen des philosophies, especially when we discuss philosophy as a discipline in the modern university and the concept of translation on which the university is built. It will thus attempt to discuss matters that are salient to both ‘analytical’ and ‘continental’ philosophical traditions, befitting the ethos of the International Journal of Philosophical Studies.

Wittgenstein on idealization in philosophy/logic OSKARI KUUSELA, East Anglia

Arguably, at the heart of Wittgenstein’s later method is an account of idealization in philosophical/logical clarification. This account explains how it is possible to clarify language use in simple and exact terms without committing to claims/theses about the simplicity and exactness of the uses of language targeted for clarification. In this capacity, the account constitutes the basis for a philosophical method that enables one to acknowledge the complexity of language use, without giving up the ideal of rigour in philosophy/logic. Wittgenstein’s account of the use of idealized models also explains how philosophy/logic can be understood as a non-empirical discipline, without conceiving language as an abstract entity in the manner of the Tractatus, thus avoiding ‘the sublimation of logic’—essentially a problematic mode of idealization. Wittgensteinian idealization differs also from idealization in natural science in that a philosophical idealization (such as the account of meaning as rule-governed use) is not something to be overcome later by means of a more complete account that includes factors that the idealized account leaves out. Rather, completeness in philosophy/logic is relative to particular philosophical problems in that an account is complete, if it can clarify those aspects of language use that are relevant for resolving the particular problem(s) at hand. This problem-relativity is also a crucial element of Wittgenstein’s conception of philosophy without theses, it being characteristic of philosophical theses that they aspire for completeness in a more abstract but arguably problematic sense. Having outlined Wittgenstein’s account of idealization, my paper concludes by indicating how his later method enables one to overcome the dispute between so-called ideal and ordinary language approaches in analytic philosophy, satisfying their different requirements for an adequate method. 
Along the way I point out similarities and differences with Horwich’s interpretation of Wittgenstein, which is similar to mine in general aspiration but different in detail.

The methodological and metaphilosophical assumptions of the Gettier tradition in epistemology BOB LOCKIE, U of West London

This paper offers a summary, and a summary criticism, of the metaphilosophical and methodological assumptions of the Gettier (/neo-Gettier) tradition that has dominated epistemology over the last half century. It is suggested that these assumptions centrally include the following:

  • That belief provenance is of core importance for epistemology: that, in particular, the normative aspect of the theory of knowledge should be substantially (in fact, almost exclusively) concerned with questions concerning the route by which we come by our beliefs.
  • The analytic (classical) conception of concepts.
  • That the presence or absence of luck (at least, certain kinds of luck) in the assembly of our beliefs is of major importance in deciding whether that belief is knowledge.
  • That knowledge is bottom-up, not top-down. (Witness the numerous analyses of perceptual and memory knowledge in terms of ‘transmission faculties’, ‘preservation theories’, ‘connection requirements’ – and the concomitant assumption that top-down contributions to memory or perception would represent contaminants).
  • That a theory of knowledge should be tested against ‘basic’, ‘prior’, ‘pre-theoretical’ intuitions.
  • That a theory of knowledge must answer to regulative as well as theoretical desiderata.
  • That a philosophical theory of knowledge should be concerned with our concept of knowledge.
  • That theories of knowledge may be well developed by the practice of testing these to destruction with thought experiments involving often tortuous and highly counterfactual cases.
  • That a philosophical theory of knowledge is best developed by investigating first order, concrete, self-standing, neo-propositional (perceptual, testimonial, etc.) knowledge – as opposed to theoretically situated, abstract, higher-order and values-based knowledge.

None of these assumptions is embraced by the psychologists who investigate knowledge. This, it is argued, accounts for the striking contrast between the progress which followed and follows the cognitive revolution in psychology and the sterility of the neo-Gettier approaches within philosophy over precisely the same time-scale. 

Conditional qua Conceptual Analysis RAAMY MAJEED, The University of Otago

Consider the following: (i) we humans in the actual world are phenomenally conscious, and so (ii) if our world contains nonphysical states, which instantiate our phenomenally conscious experiences, these experiences must be nonphysical, and if our world lacks such states, these experiences must be physical. The so-called conditional analysis of phenomenal concepts, as proposed independently by Hawthorne (2002), Stalnaker (2002) and Braddon-Mitchell (2003), extends these two uncontroversial claims to more controversial claims about the reference, and sometimes content, of our phenomenal concepts. With regards to reference, the analysis states that (iii) if the actual world has nonphysical states of the relevant type, our phenomenal concepts refer to these states, and if the actual world lacks such states, these concepts refer to physical states. Moreover, with regards to content, the analysis states that (iv) the content of our phenomenal concepts is the conjunction of the two conditionals mentioned in (iii). Whether this is a correct analysis of our phenomenal concepts is controversial. However, putting this controversy aside, there still remains the question: what exactly is it to analyze a concept conditionally? The proponents of the analysis, as well as its detractors, remain surprisingly silent on this matter. Nonetheless, there is a need for an answer. If phenomenal concepts are conditional in some sui generis sense, then so might some other concepts. Moreover, if the conditional analysis of phenomenal concepts undermines Chalmers’s (1996) conceivability argument, as its proponents argue, then similar analyses, e.g. of ‘freewill’, ‘good’, ‘time’, etc., might enable us to undermine arguments against reductive explanations in other domains of philosophy as well. With that in mind, this paper aims to provide an analysis of the conditional analysis itself. 
In particular, I shall compare the conditional analysis of phenomenal concepts with more standard accounts of conceptual analysis in terms of logical structure, reference-fixing and conceptual content. I conclude that the most salient differences are to be drawn along the lines of reference-fixing. 

The Importance of Existence Questions NIALL CONNOLLY, TCD/UCD

Jonathan Schaffer argues that allegedly controversial existence questions, like ‘do properties exist,’ ‘do meanings exist,’ ‘does God exist’ are trivial: the answer to each of these questions is ‘yes obviously’.  Schaffer’s arguments serve the cause of the promotion of a ‘neo-Aristotelian’ tradition, according to which the important metaphysical questions are not ‘shallow’ existence questions, but questions about the ontological dependence of some entities on others: about ‘what grounds what’.     

This paper will counter Schaffer’s arguments, and demonstrate with examples that many existence questions are non-trivial, and at least as important as questions about dependence/grounding.  The examples are questions of the form: ‘do Fs exist,’ where Fs are posited to explain a phenomenon, but it is controversial whether ‘F’ applies to anything.  ‘Do Universals (that is, repeatable items) exist?’ is one such question. 

My examples aren’t questions about dependence, dressed up as existence questions.  Even if Fs are supposed to be the ontological ground of a phenomenon P, if Fs are to explain P the content of ‘F’ must include more than ‘the ground of P’: it must specify what makes Fs the ground of P.  Universals for instance are posited to explain facts of sameness of type.  If it is supposed that universals are the grounds of these facts, it is their repeatability which equips them for this role.  But it is controversial whether anything (fundamental or dependent) is repeatable.

I will illustrate the plight of metaphysics without existence questions with the Mellitron Theory.  Mellitrons, according to this theory, depend on Mel Gibson.  Is the Mellitron Theory correct?  How can we know if we don’t know what Mellitrons are supposed to be and whether such items exist?

How to Do Philosophical Things with Words ANDY BLITZER (co-authored), Georgetown

In How to Do Things with Words (HDW), J.L. Austin addresses a blind-spot in traditional philosophy of language. “It was for too long,” he writes, “the assumption of philosophers that the business of a [speech act] can only be to ‘describe’, or to ‘state some fact’ which it must do either truly or falsely.” According to Austin, this blindness—this failure to appreciate the “richness, subtlety, and ingenuity” of ordinary language—accounts for a variety of philosophical conundrums and confusions. His ultimate purpose in HDW is to counteract this conventional myopia by cataloging the full spectrum of language-related actions. While Austin's own catalog of such actions is currently out of date, his basic project is widely regarded as salutary. In illuminating “the ability of language to do other things than describe reality,” writes Mitchell Green in the SEP, speech act theory facilitates progress “not only within philosophy, but also in linguistics, psychology, legal theory, artificial intelligence, literary theory and many other scholarly disciplines.” In a word, then, it would appear that Austin's attempt to rectify philosophy's monomaniacal focus on description was successful. In this paper, we address a vestige of this monomania that has somehow escaped attention. More specifically, we argue that philosophers generally assume that the business of all distinctively philosophical speech acts can only be to ‘describe’, or to ‘state facts’, which they must do either truly or falsely. After examining two instances of this assumption (§1), we argue that it generates a variety of conundrums and confusions (§2), and attempt to rectify it along Austinean lines (§3). Our arguments in §§2 and 3, it should be said, draw from a potentially surprising mix of sources. In addition to mainstream philosophy of language—e.g., Searle (1979) and Lance and Kukla (2009)—we take cues from Heidegger, Wittgenstein, and Frege.
While the latter three are certainly strange bedfellows, they all squint at the philosophical importance of speech acts other than description.

Why believe the Dictionary? – Ordinary Language Philosophy: Between Austin and Wittgenstein SEBASTIAN GREVE, Birkbeck

Austin, Ryle, Strawson: those are the names most commonly associated with so-called ‘Oxford Philosophy’, also known as Ordinary Language Philosophy. However, it is an open question whether Wittgenstein’s (later) philosophising should be subsumed under this label or not. I do not propose to answer this question. In fact, I shall prefer to keep it open. Instead I want to bring into focus some discontinuities between (later) Wittgenstein’s philosophical methods and those of Ordinary Language Philosophy (or rather, what the latter are generally taken to be). In conclusion, then, I intend to show how the mentioned discontinuities between Wittgenstein and “Ordinary Language Philosophy” can be seen to be continuities after all, and, furthermore, why they should be seen thus.

My discussion consists of three connected parts. In the first step, the interest in ordinary ways of speaking that Austin and Wittgenstein share, as well as the philosophical significance that both philosophers ascribe to it, is illustrated by a comparison of selected passages from their respective writings. I argue that there is good reason both for supposing that Austin, often taken as a representative of a philosophical method that proceeds by citing the Oxford English Dictionary, is indeed much less dogmatic than thus (mis)presented, and for calling Wittgenstein “a member of the school”, at least in spirit.

In the second step, I intend to exhibit one significant difference between the two, deriving from what at first sight appears to be merely Wittgenstein’s unique style of writing, as it is most clearly employed in his Philosophical Investigations. I argue for a methodological understanding of this “style”. I further argue that this is where Wittgenstein’s philosophising has to be distinguished from “Ordinary Language Philosophy” as traditionally conceived.

In the third step, finally, drawing on the aforementioned observations about Austin’s non-dogmatic attitude towards ordinary language, I conclude by pointing out the potentially fruitful connection that can be seen between Wittgenstein’s (rhetorical, or hyper-stylistic) methods and “Ordinary Language Philosophy”. For the former can be shown to successfully address one fundamental problem burdening most philosophy that emphasises the significance of our ordinary ways of speaking: the fact that most people have long been reluctant to accept a look in the dictionary (so to speak) as a genuine step towards greater philosophical clarity. Wittgenstein’s plurality of methods, by virtue of a complete reduction of dogmatic force, is designed, inter alia, to get around exactly this problem, and can thus be regarded as an attempt to open the way to a wider acceptance of the basic assumptions of “Ordinary Language Philosophy”, or, perhaps, to a change in perspective on what Ordinary Language Philosophy essentially consists in.

The Textuality of Philosophy MARTIN GRÜNFELD, UCD

In Of Grammatology, Derrida characterizes the writing of philosophy as a double movement: it simultaneously attempts to become something more than mere text, namely truth, and yet needs writing in order to acquire its status as a discourse of truth. This tension lies at the heart of the production of knowledge within the philosophical field, and this paper sets off from Derrida’s initial formulation. However, as I shall show, there are significant differences between the ways in which philosophical texts use writing when attempting to repress textuality and acquire the status of a discourse of truth.

In the paper, close readings of three very different texts within the philosophical tradition are presented, namely Plato’s Phaedrus, Leibniz’s Monadology and Nietzsche’s Twilight of the Idols, exploring how these texts are composed of radically different poetic devices that work either to repress or to emphasize the textuality of philosophy. In this way, Plato’s use of irony and simile is compared with Leibniz’s apparent ‘zero style’ writing and Nietzsche’s reinforcement of language. Through this brief comparison, it is shown, firstly, that there are important differences between how philosophical texts attempt to repress (or, in the case of Nietzsche, emphasize) textuality, and, secondly, that these differences have important implications not merely for the way in which philosophical truths are presented; more importantly, they imply inherent conceptions of truth, language and knowledge that are significant for the constitution of philosophy as a discourse of truth. Hence, the aim of this paper is to stress the importance of writing in philosophy: writing cannot be seen as a secondary medium for truth, but must rather be understood as a constitutive feature of philosophy as a discourse of truth. Thus, philosophy must be understood as a being-in-the-text.

Heidegger’s hermeneutics of suspicion NAOMI VAN STEENBERGEN, Essex

In the contemporary debate on phenomenology as a method, many scholars hold that the point of phenomenology is to follow and describe the way in which things directly and immediately appear to us. On this understanding, the immediate availability (or “givenness”) of phenomena determines the scope of what phenomenology can and ought to show. In this paper, I argue that Heidegger’s phenomenological method starts from the exact opposite standpoint: for Heidegger, phenomenology comes into its own precisely vis-à-vis a subject matter that is hidden, or indeed, fundamentally self-concealing.

In other words: Heidegger’s phenomenological method can be described as a hermeneutics of suspicion – a method that presumes that how things really are is not how they initially and mostly appear to us. I sketch one important way in which Heidegger takes the subject matter of phenomenology to be self-concealing: he describes self-concealment as a tendency that fundamentally belongs to us. In Heidegger’s terms: we are ruinous. If this is the case, however, phenomenology is presented with a profound methodological challenge. How is phenomenology to disclose its subject matter when this subject matter has a fundamental tendency to frustrate such disclosure? And what is the methodological impact of the fact that the phenomenologist herself is fundamentally ruinous? I shall provide an exposition of what ruinance entails, examine the precise nature of the corresponding set of methodological problems, and indicate what provisions Heidegger himself offers for navigating them.

A Dialectical Account of Intuitions MARTHA CASSIDY-BRINN, University of Vienna

In this talk I defend a dialectical account of the philosophical practice of relying on intuitions as evidence. On this account, the reliability of intuitive judgments is constituted by common agreement that the judgment is true. I demonstrate the superiority of my account to the standard mental state account, which locates the reliability of intuitions in properties of mental states.

I proceed as follows:

(1) First, I argue against Weinberg, Nichols, and Stich’s claim that there is no demonstrated correlation between intuitions and the truth of normative judgments. I object that this claim is misguided because it adopts the mental state account.
(2) I show that we should instead adopt a dialectical account, on which intuitions are characterized by their dialectical status. Namely, in the dialectical context in which it is deployed as evidence, all or most people already believe the intuitive judgment; they agree that it is true. This dialectical agreement constitutes the acceptability of intuitions as evidence. I show that this account not only describes practice more accurately than the mental state account, but it also explains why intuitive judgments are good evidence for normative judgments.
(3) Williamson objects to the dialectical account, arguing that intuitive evidence is constituted by facts, not judgments. To assume otherwise is, he argues, egregiously skeptical. 
(4) I respond to Williamson’s objection in two steps: First, I show that the dialectical account proves that analytic philosophers aim to analyze pre-theoretic “folk” concepts. Second, I show that the dialectical account thereby blocks intuition skepticism, because commonly shared belief in an intuitive judgment implies that the judgment is a correct application of the folk concept.

(Conclusion) The dialectical account offers an accurate description of actual philosophical practice and thereby avoids pseudo-problems engendered by the mental state account.

Practical Knowledge in Mathematical Practice: A suggestion from Wittgenstein's philosophy MONICA SOLOMON, Notre Dame

The element of surprise in mathematical practice, and the value placed on finding a proof unexpected, constitute a puzzle for the philosophy of mathematical practice. The Hardyan picture and adepts of the axiomatic method (who hold a standard view of mathematical proof, according to which finding a proof is a matter of deductive reasoning or the application of an axiomatic method to some sort of independent reality) generally find it easy to account for. On this view, the source of surprise is the acquaintance with a new item of knowledge or a discovered inconsistency between new and previous knowledge. By contrast, in this paper I suggest that the element of surprise is a constitutive and necessary part of practical knowledge and, in particular, of mathematical exercise. I develop my account based on Wittgenstein’s examples in the Philosophical Investigations and his Lectures on the Foundations of Mathematics. The heart of the account is to regard surprise as a key element of the training required to follow a rule, one of the most basic abilities involved in constructing a proof. The positive claim is that surprise marks extensions of rules to new domains and that it is fruitful when embedded in practice. It is, I argue, an important epistemological element. Against the canonical views, I argue that surprise should not be regarded as a purely subjective experience or a description of a mental state, and that the content of a mathematical proof is not independent of the potential for surprise that it holds.

Hegel and the De-familiarization of the Familiar HAMMAM ALDOURI, CRMEP Kingston University

This paper will take as its point of departure a well-known sentence from the preface of Hegel’s Phenomenology of Spirit: ‘Quite generally, the familiar, just because it is familiar, is not cognitively understood (Das Bekannte überhaupt ist darum, weil es bekannt ist, nicht erkannt).’ Although Hegel is highly suspicious of the employment of the ‘familiar’ as the articulation of philosophical truth, his philosophical exposition begins with the affirmation of the experience of the familiar in its immediate and abstract givenness as the locus in which philosophical truth emerges. More simply put, philosophical thinking for Hegel begins with the experience of what appears as most familiar. Crucially, the experience of the familiar is the experience of philosophy itself in so far as the latter emerges from the negation of the former and, according to Hegel, retroactively determines the truth content of the familiar. Accordingly, it is the familiar that provides philosophical thinking with its preliminary methodological approach; that is to say, dialectical negation is contained within the familiar. With this in mind, I will ask the following: what is the philosophical content of this preliminary experience of the familiar that mobilizes philosophical thought itself as the de-familiarization of the familiar? And, more importantly, if the familiar is the basis on which philosophical thinking is mobilized, what is the possibility of escaping the confines of its dialectical reconstruction (that is, of nevertheless ending with the positive immediacy of the familiar)? ‘Escape’ here is understood as the moment in which philosophy itself speaks through its differentiation from the familiar as the untruth of philosophy. Yet the category of ‘untruth (Unwahre)’ is not an external opposition to philosophical truth, but constitutes the immanent formation of the latter, since it contains within it a truth (un-truth).
By re-formulating Hegel’s dialectical process in terms of the model of the ‘de-familiarization of the familiar’ in the realm of truth, this paper will attempt to bring into relief the structure and meaning of the paradox of the dialectical experience of the familiar as a ‘meta-philosophical’ operation.

Metatheory and the Evidential Weight of Intuitions AMANDA MACASKILL, NYU

It is common to appeal to intuitions as evidential support for a philosophical theory. But how much support does fit with ‘folk’ or ‘reflective’ intuitions provide when it comes to assessing the plausibility of different philosophical theories? In this paper I will argue that the extent to which fit with the intuitive data supports first-order philosophical theories depends largely on the plausibility of different metatheories within the relevant domain. I will begin by considering an example within ethics, where the relevant issues are arguably the clearest. I construct a case involving two metaethical theories – cognitivist subjectivism and robust moral realism – and two first-order moral theories – act utilitarianism and a form of patient-centered deontology. Act utilitarianism is argued to be a simpler theory that fits less well with the intuitive data, while patient-centered deontology is argued to be a less simple theory that more closely fits the intuitive data. I argue that under robust moral realism, ethical intuitions will carry less evidential weight relative to other theoretical virtues than they will under cognitivist subjectivism, because the truth-conditions under cognitivist subjectivism are more closely tied to the mental states of agents than they are under robust moral realism. I argue that the relationship between metatheory and the weight given to intuitions does not merely affect ethical views. To demonstrate this, I consider two possible metatheories within epistemology and show that the degree to which philosophers should rely on intuitions in defense of epistemological theories is similarly dependent on their credences across metaepistemological theories. I conclude that the answer one gives to the question of how much evidential weight one ought to give to intuitions in philosophy will largely depend on which metatheoretical positions one has the highest degrees of belief in.

Davidsonian-Heideggerian account of truth, a middle ground position MEHDI NASSAJI, University of Hull

Davidson is recognized as an analytic philosopher; Heidegger, on the other hand, is a well-known continental philosopher. Both have done a considerable amount of work on the notion of truth.

Davidson rejects all traditional accounts of truth, such as the correspondence theory, coherentism and pragmatism. His main problem with these theories is that they take an incorrect approach to truth. According to Davidson, truth is a primitive concept for which we cannot provide an analytic or synthetic definition. However, that does not mean we cannot say anything philosophically significant about truth: Davidson maintains that we can discuss the notion philosophically by identifying its role in the emergence of thought, belief and other propositional attitudes.

Heidegger also rejects all traditional accounts of truth. Like Davidson, his major problem with those accounts is their approach. Instead of giving a definition of truth, Heidegger intends to tell us what truth does for us in understanding the world.

I believe that, although Davidson and Heidegger represent two different philosophical traditions, their methods of investigating the notion of truth are similar. Because their approaches are similar, their accounts of truth are consistent, and bringing them together suggests a more comprehensive account of truth.

In this presentation, after briefly presenting both the Davidsonian and Heideggerian accounts of truth, I will explore how these two philosophers converge when they discuss the notion of truth. The lesson we can draw from this discussion is that in some cases, such as the discussion of truth, the best philosophical position is one that does not draw a clear line between analytic and continental philosophy, but rather takes a more fruitful middle-ground position.

Is Chinese Thought Philosophy? --An Investigation from the Perspective of Song and Ming Neo-Confucianism. YANGXIAO OU, UNIVERSITY COLLEGE CORK

The classification of Chinese Thought as “philosophy” remains a controversial topic, and one of the main concerns regards the methodological techniques applied by traditional Chinese thinkers.  Until recently, Chinese Philosophy has been struggling to justify itself as a legitimate branch of ‘philosophy.’ Even today, occasional suspicions are voiced by philosophers both from within China and from Western countries. Some scholars have suggested speaking of ‘Chinese thought’ rather than of ‘Chinese philosophy.’ Since the early 20th century, and in response to the significant Western cultural influence of the time, a reconstruction of so-called ‘Chinese philosophy’ has been undertaken by major Chinese philosophers such as Feng Youlan and Mou Zongsan. This trend continues to dominate the academic study of the history of Chinese thought. In this context, a whole range of Western concepts and methodological frameworks were imposed on the interpretation of ancient Chinese philosophical texts.

In this paper, I try to recover some indigenous approaches to Chinese philosophy by outlining in detail some of the methodologies used in Song and Ming Neo-Confucianism, namely ‘ge wu zhi zhi’ 格物致知, or ‘to gain knowledge by investigating objects,’ and ‘ming xin jian xing’ 明心见性, or ‘to see one’s nature by clarifying the heart-mind.’ I will try to evaluate to what extent Western philosophical methods such as introspection and theoretical speculation correspond to these Chinese approaches. On the basis of this evaluation, I will finally suggest an answer to the question of whether China has philosophy.

The American Voice in Philosophy project is supported by:

  • The IRCHSS (“New Ideas” Award in the Humanities and Social Sciences)
  • UCD School of Philosophy
  • The International Journal of Philosophical Studies
  • UCD Clinton Institute for American Studies
  • Society for the Advancement of American Philosophy
  • UCD Seed Funding
