Wednesday, January 31, 2018

J33 - Manifesto on Humility

*This journal came out much less elegantly than it appeared in my head, but hopefully the delicate point I'm trying to make will still find its way through the verbosity.*


In my last journal, I discussed (somewhat briefly) the importance of humility in the learning process. I explained that to learn anything new requires an implicit admission that one had previously been wrong in his or her understanding of a phenomenon. We all have our theory of the universe and how it works, and learning is a series of revisions to this theory, rendering it more complete and more correct through exposure of our beliefs to the external world and our interactions with the theories of others. This process of learning, involving as it does a low level of humility, comes rather easily and naturally to many people, as we are frequently exposed to new information and desire to gain more knowledge such that we may apply a more correct understanding of the world in the pursuit of our chosen ends. However, a pattern that I’ve noticed is that the less frequently one revises one’s theory, the harder it is for one to do so. That is, someone who is constantly revising their understanding of the world is obviously quite aware of their ignorance and is regularly practicing the process of learning. Others, who have reached a level of preeminence in certain fields of knowledge and who have accumulated vast amounts of experience under their belts, are less likely to admit that they are wrong and revise their understandings. Children, for example, are constantly learning, because everything is new to them, and they are eager to do so. Older college professors, on the other hand, are rarely learning, and have, in fact, developed a level of hubris that makes learning far more difficult for them. They are almost certain not to admit their mistakes in the face of a new, better theory, and throughout history such figures have tried to suppress many of the innovations in their fields because they believed that they couldn’t possibly be wrong.

I also discussed in my last journal a particular way of viewing the various theories of the world that different people hold. There is a tendency in many people to view the world in black and white, in absolutes. Assuming that there is truth in the world, that X does mean X and that the universe is subject to certain laws which are comprehensible, then there is, in theory, one correct and complete theory of everything. Many people believe this much. But then many people go on to assume that there is this one theory (often subconsciously their own), and that all the other theories are simply wrong. This is true, to some extent, but it misses something important. Not all wrong theories are equally wrong. There is a scale, a spectrum, upon which different theories may be arranged and judged more correct or less correct, more complete or less complete. It’s not so much that all of these theories are wrong, but that they are all right to different degrees. Viewing the state of knowledge in this way does two things: First, it helps one see that their theory, while seemingly the most correct and complete, may not be totally correct and complete. Being closest to the destination does not at all mean that you’ve reached the destination. There is, therefore, a possibility that one’s understanding might be improved. Second, it allows us to see value in other people’s theories. Just because someone is wrong about something doesn’t mean that they’re wrong about everything. Indeed, they could be less right than you in their explanation of one phenomenon and more right than you in their explanation of another. Therefore, their arguments should not be discounted outright.

[Note that no one thinks that they’re wrong in their beliefs. Being wrong in one’s understanding of the world is extremely unhelpful, and anyone who consciously found themselves in such a position would seek to rectify the situation with haste. And yet, despite this confidence, everyone holds a slightly different theory. This is because everyone has different knowledge and different experiences and different minds. My point, however, is that all of these people live in the world of reality and yet still somehow believe what they believe, even though what they believe is probably wrong, so it seems that there must be something about each of their theories that makes some sort of sense. The ideas of primitive peoples, such as the idea that dancing to the rain god would help the crops grow, may not be reasonable in light of our present state of knowledge, but they were certainly rational in light of the primitive world-view. Similarly, today, there is a basis for every theory, an argument that can be made in support of every proposition. This doesn’t mean that everyone is right or that there is no objective truth, merely that disregarding another person’s conclusions without due consideration seems like a dangerous practice when that person is just as certain in their conclusions as you are in yours. If you think the resolution of some issue upon which many great minds have debated is obvious or “clear,” then you haven’t thought enough about the issue. If there is any disagreement on a subject, then it is not clear, and a conscientious scholar should be mindful of that fact.]

Finally, I also explained in my last journal the process by which knowledge and understanding is advanced, through argumentation and analysis among a community of intellectuals. I define an argument as a defense of a proposition (a truth-claim), using accepted premises to support a more controversial conclusion, and I define analysis as an attack on a proposition, breaking down an argument to examine the efficacy of its defense. I also distinguish between argument, the purpose of which is merely to support a truth-claim, and rhetoric, the purpose of which is to convince another to accept a truth-claim. Rhetoric is a tool which can be successful even if the proposition it advances is wrong, as humans are easily swayed by half-truths told with confidence. Argument, though, is where the real thinking and convincing happens, because it is argument which will determine whether a proposition is true or not. The goal of a research project or scholarly disquisition, therefore, is to build and present an argument, not necessarily to convince others of the correctness of one’s proposition. This is okay because in the scholarly community, at least theoretically, others will be looking for new propositions to contribute to their learning and will, as scholars, be more convinced by argument than by rhetoric. That’s how academia works: everyone advances their own theory, and one walks among them, analyzing each to see which are the most correct. [There is also the consideration of willingness to learn. Rhetoric can certainly be employed to manipulate a crowd, but true learning requires an internal motivation which implies that the student will have come searching for answers, and thus be better served by a strong argument rather than by empty rhetoric.] The task of an intellectual, therefore, in his or her effort to improve the state of knowledge in his or her field, is merely to offer argument and analysis in support of or in criticism of various propositions and theories.

Something that I’d like to discuss here that was not mentioned in my previous journal is the Hayekian idea of the Pretense of Knowledge. “The Pretense of Knowledge” is the title of the lecture Hayek delivered upon receiving the Nobel Prize, in which he presented and elaborated on part of his theory known as the Knowledge Problem. The Knowledge Problem is Hayek’s main argument against socialism. He asserts that central planners would fail utterly at managing the economy because they would be incapable of possessing all of the necessary information. They wouldn’t know what every individual needed or wanted, they wouldn’t know the best processes for producing all the goods and services, and they wouldn’t know how to innovate at all or adopt new practices developed by others. The market, on the other hand, through market prices, gives everyone in the economy an incentive to contribute their own individual knowledge to the nexus of social organization, and those prices provide a means of communication by which the essentials of this distributed knowledge are made known to everyone else in the economy. Hayek’s argument, therefore, was essentially that one group of people couldn’t possibly know everything, so we need a market economy to provide a means for all knowledge to be utilized effectively.

[This idea of distributed knowledge employed harmoniously by a free market is best illustrated by the short essay “I, Pencil” by Leonard Read, where a pencil traces his genealogy as a demonstration that not a single person in the world knows how to make a pencil from scratch.]

But the Pretense of Knowledge, I think, questions even more than this. This speech attacked the intellectuals who thought that they knew enough to fix the problems of the world. “What do we fix?” asks Hayek. “And how?” “And who are you to decide?” He mentions the truth about statistics that escapes almost everyone: statistics tell us nothing about the particular case. All that we know is that this case is a member of a class, and that we have some knowledge of the outcomes of that class as a whole. Furthermore, the only constant in human affairs is the presence of change; therefore, statistics about past human phenomena really can’t tell us anything certain about the future, even if the statistic in question is 100%. Two recent examples are the Patriots’ Super Bowl win over the Falcons (2017) after going into the third quarter down by 18 points, and the victory of Donald Trump in the 2016 presidential election after some pollsters put his chances of winning at 2%.
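A minimal sketch of this point about class-level statistics, in Python (my own illustration, not anything from Hayek; the 2% figure and the number of simulated cases are assumptions chosen only for the example): a probability that describes a whole class of cases does not decide the outcome of any particular case, and even a 2% event keeps happening when there are enough cases.

```python
import random

# Hypothetical illustration: a class of cases, each with an assumed 2% chance
# of the "unlikely" outcome (e.g., a heavy underdog winning).
random.seed(0)          # fixed seed so the sketch is reproducible
trials = 10_000         # assumed number of cases in the class
p_upset = 0.02          # assumed class-level statistic

# Count how many particular cases end in the unlikely outcome anyway.
upsets = sum(1 for _ in range(trials) if random.random() < p_upset)

print(f"Class-level statistic: {p_upset:.0%} of cases end in an upset")
print(f"Upsets observed across {trials} simulated cases: {upsets}")
# The statistic describes the class; it does not settle any single case.
```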

Anyway, Hayek’s main point was to caution his fellow intellectuals about believing that they knew best, not just for themselves but for everyone else. “The curious task of economics,” he said, “is to demonstrate to men how little they really know about what they imagine they can design.” He stressed throughout his life that human beings are not and cannot be omniscient gods, capable of designing a new world out of nothing but their own ideas. Progress emerges, Hayek stressed, out of the cooperation of millions in the division of labor, not through the planning of government bureaucrats, as smart as they may be, because these bureaucrats don’t know everything and there is incredible danger in pretending that they do. 

[In his last book, The Fatal Conceit, Hayek even argues that due to the inherent limitations of the human mind, we cannot even be sure that our rational conception of certain systems (like ethics) is correct, because the human mind cannot know what it cannot know and cannot see what it has not developed to be able to see (like complex social phenomena).]

I said in my last journal that people learn by doing, which is a way of saying that people improve through practice, which is a way of saying that people fail before they get better. People learn from failure. Learning is a series of corrections, which implies a series of errors needing correcting. This process is the same for everyone; there is no shame in it. Indeed, the more one fails, the more one knows, and the better one becomes. In addiction-recovery circles, they say that one must admit that he or she has a problem before he or she can be helped. In the same way, one must admit that they might not know something in order to learn. One has to want to improve their state of knowledge before they will be able to, and this requires admitting that their current state of knowledge is incomplete or incorrect. This is a struggle for some, especially those who think they know (and often do know) a lot already. It’s a struggle that I encounter with some of my students, who could potentially learn so much and yet are not willing to genuinely admit that they have anything to learn. But there is so much more value in admitting that you’re wrong and growing through correction than there is in clinging stubbornly to a falsehood. That is the true failure.

Without humility, there is no learning. Without learning, there is no progress. It’s okay to not know everything; it means you’re the same as everyone else. And it’s okay to not be the best in everything. You may be right about economics, and wrong about religion. As Hayek demonstrates, we don’t need to know everything. But we should always be aware of the fact that we don’t know everything, and remember the limitations of our knowledge. We should strive every day to stay humble, however increasingly difficult that becomes, because we always have more to learn. We should approach every controversy with questions rather than declarations, and put forward arguments only when we feel that our own truth-claim is better than the others or when we have something new to contribute in support of an existing proposition. [I have developed the rare practice of only taking a stance on an issue when I’m able to argue for multiple sides of the debate. If I can’t argue for a position contrary to mine, then I don’t understand that position, which disqualifies me from declaring that it is wrong. Therefore, there are many areas of knowledge and issues upon which I often decline to comment because I haven’t thought about them enough to understand all of the sides and thus properly choose one of them.]

We are, all of us, probably wrong about 90% of the things that we believe. And the areas in which we believe we are most correct (because the truth seems so “clear”) are probably the ones in which we are most likely to be wrong. But the marvelous thing about being human is that we can always change and improve our understanding of the world. This learning, however, requires humility. Learning is a natural process, so long as we are willing to participate. We should strive, therefore, to remain humble, no matter how much we come to know, and to always seek improvement of our theories. Let us question ourselves no less rigorously than we question others. This is how we improve ourselves, and enable ourselves to improve the world around us.

What Is The Best Way To Learn?

*Originally written 03/25/2016*

It is becoming increasingly apparent that, in our current age, the role of the lecturer is a superfluous one. Indeed, one might think it absurd that a society with as much access to the Internet as ours would spend considerable resources on “teachers” who stand in barren rooms, day after day, simply reciting information. It is now possible to easily and cheaply learn any high school or even college content from one’s computer on platforms like Khan Academy, Udemy, and Wikipedia. Additionally, archives and institutes are making scholarly material available at an unbelievable rate (the Mises Institute alone has published more material on economics than could be consumed in a single human lifetime). Agencies and universities regularly collect and publish comprehensive sets of raw data on every aspect of life. There is an incomprehensible amount of knowledge out there, accessible to anyone. The knowledge of professional lecturers is insignificant in comparison; as lecturers, they offer nothing that the Internet cannot.

It seems clear, then, that the educational model of the United States public school system requires severe revisions. Almost everyone agrees that the intense focus on testing is unnecessary and detrimental, but there are even bigger problems to confront. The very method of educating used by public schools has become archaic and stifling. Still, the question remains: Given that the current system is inefficient and ineffective, what should we replace it with? What is the best way to learn?

The answer, in a word, is naturally. The clearest manifestation of this, however, is intellectual conversation. Yes, I think that conversation is perhaps the best method of education available; conversation on many topics and with many people. Through conversation, students can be exposed to a wide range of ideas and arguments but, since all participants offer something to a real conversation, everyone feels a sense of equality and fraternity as they explore a topic together. Everything that is said and heard is just something to analyze and consider, not a dogma to memorize and regurgitate on a test. Conversation stimulates the intellect in ways that memorization cannot. 

A friend once said to me that “In terms of retention, we are a culture of binge and purge learning. We consume as much as possible for a test, throw it up onto an exam, and flush it from our brains once the need for it wanes.” I’ve rarely heard a more accurate description of what goes on in modern-day schools. Again, the solution to this “binge and purge” learning is a conversation-centered education. Exposure to topics will occur naturally (even obscure topics can come up in everyday conversation), and knowledge of the topic will be gained organically, piece by piece, source by source. Within a very short time period the student will gain a very thorough understanding of the issue. Say Ben mentions the concept of economic calculation when dismissing socialism, and Andrew brings up Oskar Lange’s response in defense of socialism. The student then googles all the terms he heard in the course of these conversations, goes to Bott to get his thoughts on the matter, and Bott recommends a book that addresses the issue in full. The next thing he knows, the student is knowledgeable about a crucial yet obscure debate in economics and, because he pursued the information himself, because he wanted to know about it, he will be more likely to remember what he has learned. He will have gained a greater understanding of the topic than he would have from a normal lecture, and he will have heard the actual arguments presented, rather than just summaries of those arguments written by disinterested third parties.

This conversation-centered method of education also solves the inter-subject-knowledge-crossover problem because, in conversation, you tend to bring up whatever comes to mind, regardless of where you learned the information. Through conversation you engage with the material, you actually use it, and it thereby comes to have value to you. Through conversation you begin to understand how seemingly random knowledge can be connected and arranged to support other thoughts and ideas. Through conversation you learn how to create, how to build something new with the knowledge that you have. Through conversation you learn how to think, reason, argue, and communicate, all in a natural, non-coercive manner. 

There are two other components of a good education that are implied in the conversation-centered method but deserve further elucidation. First, students need to be in control of their own education. Moreover, they need to take control of their own education of their own volition. They must have the necessary freedom to make this component more than rhetoric; they must have the freedom to not learn before we can say that they’re actually in control of their own education. Second, the students must have an interest in the subject matter, or at least a belief that it’s important. This condition is important for the students’ happiness, but it also aids in retention. Ask anyone: the things they remember from school are the things that they found interesting. Together, these two concepts, control and interest, fundamentally change the nature of one’s education. When one decides what he wants to learn, he’s also deciding that he wants to learn.

Now, the public school system seems perfectly designed to eliminate these elements from education. Conversation is positively discouraged as schools further embrace the lecture-and-test method of education. What few choices students have are just selections from small pools of sanctioned options. And it hardly needs saying that there is very little passion for or interest in schoolwork emanating from high school students. 

We conclude, then, with the proposition that schools are not good places for students to learn, at least so long as schools continue to provide the exact opposite of the best way to learn.

J32 - A Human Education

[Note at the outset that the terms “education” and “school” are not synonymous, although they both mean something similar and are used seemingly interchangeably in this journal. Education is a more general term that embodies all valuable learning that children and young adults experience, whereas school is the formal institution that seeks to effectively facilitate this learning experience. I note this because, while I do not go into this in the journal, schools serve a slightly different purpose than education. I believe that, like everything else, schooling should be provided by the market. However, if school is provided by the market, then it is serving present consumers. At the beginning of this piece I delve briefly into the purpose of education, which is generally to serve the future interests of children. But schools do not serve these future interests; their purpose is not to prepare children for the future, but to serve their present desires. This may well be preparation for the future, but it also might not be. There may, therefore, be “schools” where there is no systematic training or instruction in set curriculums and instead the goal is just to make kids happy while they’re there. So, schools serve the present. This is interesting to think about, because that’s certainly not how they are thought of today, and even within my own theory as presented here it creates an interesting tension.]



I think that my project, and really all of my intellectual contributions, can be reduced to the explication and application of the idea of purpose. That is, most of my academic work in life has been the application of the idea of human purpose, which is a foundational concept in economics, to other fields and philosophies. My theory of being human, I think, is really just a series of ruminations on what a purposive existence looks like. This makes a great deal of sense, really, since I have long argued here and elsewhere that what makes man different is the fact that he acts, or behaves purposefully, and that this sense of purpose is unique to man. I’ve made a number of incursions into other, related, subjects, like reason and imagination, but I see these concepts as all wrapped up in each other, essential elements of that one concept of humanity. My project, then, has come down to reminding the world that humans act with purpose, and questioning what that looks like.

With that said, let us begin this rather long piece on education with an examination of education’s purpose. I want to move quickly through this portion because I recently spent pages working through the purpose of education in a longer, more comprehensive piece which I’ve been working on, and I want to be sure to not take up as much space here. So, what is the purpose of education? There have been many different answers offered to this question, throughout history and even today, and, to be honest, many of them have merit. Which one is correct will depend on many different factors, not the least of which is one’s definition of education. Here, at this early point in our discussion, and given the ambiguity of our still-broad concepts, I do not wish to discount any of these answers but, rather, suggest a more broad answer that encompasses many of the more specific answers offered. The purpose of education, I contend, is to prepare students to be successful in life. This answer, as ambiguous as the question it answers, gives rise to a host of other questions. By briefly moving through these questions, I believe that we can develop a more precise definition of education’s purpose that will hopefully remain broad enough to retain consensus. 

Beginning with the assumption that the purpose of education is, broadly, to prepare students to be successful in life, the next question that must be considered is what it means to be successful in life. To be successful in life, I contend, is to be happy. I use the word happy in the same sense as the economists and philosophers of antiquity: satisfaction, utility, welfare. In this sense, the end of all human action is to increase the actor’s happiness, whether or not those actions are successful or the actor’s ideas are sound. If a human life is defined by its pursuit of happiness, then the successful life would be a life where happiness is attained. It seems apparent that human beings never reach perfect happiness, but it seems just as apparent that some are more successful than others. No one can fault an institution for failing to do the impossible; if perfect happiness is out of reach, then education can fulfill its purpose by enabling students to be happier, or more successful.

The next question, of course, is how we can be happy. This is where the individualist in me comes roaring to the surface. I don’t think that there’s one right way to be happy. Or, rather, I believe that everyone has their own right way which works for them and not for others, because everyone is different, and I think that this is a magnificent thing. Because value is subjective, and preferences are personal, only the individual can decide what he wants most, what will provide him with the greatest amount of happiness, and any philosopher pontificating on a mountaintop about the good life can be disregarded. So, there is no one way to be happy; each person will need their own path. But this implies that we will need tools with which to search for our own individual happiness. What tools are these? Well, in the most general of terms, we need material means that can sustain us during our search, and mental means with which to search. The material means, of course, will need to be produced before they can be consumed (if only through being recognized and classified as means by the human mind). Producing these material means, therefore, requires its own mental means. These mental means, like other human abilities, must be (or may be) developed to be more effective. In sum, then, the key to finding and attaining happiness lies in the development of our human minds.

The development of the human mind is the task of education. Or, to be more precise, the purpose of education is to develop students’ abilities to produce and use material and mental means in the pursuit of their individual happiness which is the definition of a successful life.

Before we turn to a discussion of what this education should look like and what schools should be teaching and how, I’d like to briefly digress and explain the learning process, as I see it. Every individual has his own theory of how the world works. And everyone’s theory, insofar as it addresses every phenomenon, is complete. That is, even our hunter-gatherer ancestors, who knew next to nothing about the physical and chemical and even biological processes at work in their environments, had a complete theory about the nature of that environment and how it functioned. They were ignorant and wrong, of course. But they did have an understanding of the world, just as we all do now, as ignorant and wrong as our own understanding may be. Learning, then, is not just the acquisition of new information, but a revision to this theory of ours. To discover the idea of gravitational force, for instance, requires not just learning that objects fall to the ground because the Earth has tremendous mass, but also abandoning one’s previous theory for why objects fall to the ground (because the ground is lower than us, for instance, and things fall downward). 

Learning, then, is more than just the presentation or discovery of new information and the remembrance of this information. Learning is a process of revision to our theories of the universe. Therefore, learning requires two things from the learner. Curiosity, to seek new information, is obviously needed. But also needed, I would suggest, is humility. I plan on dedicating a whole journal to the subject of humility in the near future, so I won’t delve into it too deeply here, but I wanted to raise the point. For true learning to occur, the learner has to acknowledge that they have something to learn. Part of this, I think, is accomplished just by exposure to the world and information and other people with their own theories. Our understandings of the world are under constant revision; it’s a natural process. But, at the same time, there is, at some level, a need for this attitude of humility, and the internal motivation it provides to better oneself. It is entirely possible, as we see all too often, for a person to block off their desire to learn and hold fast to their own beliefs in the face of overwhelming evidence to the contrary. Political discussions are excellent examples of this; no matter what one says, some of the supporters of each side will not be swayed. Anyway, I make this point because it’s not easy to admit that you’re wrong, and therefore learning is not an effortless process. But, more relevant to this particular piece, this means that attempts to teach people who don’t want to learn will be in vain, or at least grossly ineffective. Education requires, on the part of the student, a willingness to learn, which can be approximated through a coercive system of rewards and punishments but will, ultimately, have to come from within the student. [Note that the reason this low-key humility and concomitant constant learning is natural to human beings is that we are trying to attain our ends, and the acquisition of knowledge, which we need to effectuate our designs, is a means toward that attainment. However, the search for knowledge is always purposeful. We seek information when we think that we need it. It is entirely possible, and, in fact, almost certain, then, that one may be curious and willing to revise their understanding in one area or on one subject but not in or on another.]

So, what should education, with a purpose of developing our minds, look like? Well, first of all, given the digression above, I think that education should be voluntary, and that by making it voluntary we could immediately and naturally resolve many of the issues that we face in schools today. But, I don’t know that I want to go into that right now. Rather, I’d like to examine the question of what schools should be teaching to the students who are in them, regardless of how the students came to be there.

There are, as I see it, three essential bodies of knowledge that must be imparted to the young. First are the basic pieces of knowledge and social norms that are necessary for functioning in society. This includes reading, writing, basic mathematical calculations, and how to interact with people peaceably. Now, this knowledge is necessary because it would be next to impossible to get through life without it. It is therefore of critical importance. However, I would question whether this knowledge needs to be formally taught, or whether the provision of opportunities for its acquisition would be enough. Much like my argument against Kant’s a priori categories of knowledge, I believe that the reason this knowledge is so important is that it is a constant presence in life and human affairs. But if it’s a constant in human experience, then the easiest, and perhaps most effective, way of teaching this knowledge might be to just expose children to human experiences. I find it strange that we take children out of the real world and put them in artificial environments (schools) to teach them skills and knowledge that they need to operate in the real world, when exposure to the real world might have been enough to allow the children to learn what they needed (and it seems that the real world would provide a quite accurate litmus test for determining what information was actually necessary for each child’s specific environment) through observation, imitation, and practice.

The second body of knowledge is job-specific. This is the advanced knowledge of science or math or history that one will need to be successful in the job that they’ve chosen. Again, as a nod to my belief in individuality, I think that this subject-specific knowledge should only be taught to students who believe it will be useful to them, because of the career that they’ve chosen. Now, I don’t expect ten-year-olds to be deciding what job they want for life, but I do think that the narrowing of possibilities can begin much earlier than it does currently. Children may not quite know what they’d like to do in the future, but they certainly know what they don’t like doing now, and this is enough, I think, to begin specializing relatively early (and to much advantage). By specializing, students will be able to accomplish much more in their field than they would if their attention was diverted by other subjects. Furthermore, the idea that all children should have the same (specialized) knowledge is an attack on the division of labor, in my mind, because progress is not built on the backs of more smart people, but on the backs of more different people, and to force everyone to learn the same things for well over a decade is an affront to this principle, an attack on the idea that being different is okay and, indeed, necessary for a colorful and flourishing society. So, specialized knowledge for specializing individuals would be an essential part of any education, once a student begins making those choices about his own education.

Finally, there is the third body of knowledge, which is the key piece to this education, I think. These are the skills that apply the unique abilities of the human mind to the living of the uniquely human life. I am speaking, of course, of reasoning and imagination and critical thinking. Education in this area would consist mostly of the cultivation of these inherent human traits such that they could develop into more effective and powerful tools. This body of knowledge is the most important because it makes possible the other two. To learn anything, you have to understand it. To truly understand new information and arguments, one must analyze and think critically about them. And to make any advances in knowledge, one must question the accepted body of knowledge and imagine new and different possibilities for doing and explaining things. These skills, too, are necessary at some level for basic functioning in the world, but a cursory view of the people around us reveals that the level to which these skills are developed is subject to much variation. Therefore, here, as opposed to the first body of knowledge, there seems to be a need for focused training.

Another digression now, this time on the nature of progressing knowledge. Knowledge advances through the process of argumentation. Recall that every individual has their own theory about the world. Some are more informed than others, some are better reasoned than others, and some are more cohesive than others. It’s important to realize, I think, that a proper view of all these theories is not that one (one’s own, of course) is right and all the rest are simply wrong, but that there is one complete and correct understanding of the universe and then [insert current world population here] other theories which are more or less accurate. That is, one theory might be better than another, but that doesn’t mean that the theory is correct, or that it is better than the other in every way. Every theory could potentially have something to learn from every other theory, because it’s quite likely that even the most complete and correct theory of the universe could still be hopelessly ignorant and muddled from the point of view of a supernatural intelligence. So, theories are more or less correct, in different areas, rather than simply right or wrong.

Progress in science and intellectual life generally occurs through the examination of others’ theories and the presentation of arguments in support of your own. Every proposition, every statement that one makes about the world, is a part of one's theory of the universe and therefore backed by an understanding of the world that is unique to that person. Argumentation is the process of defending one’s propositions, one’s truth-claims, against the rest of the world. By exposing our ideas to the light of reality and the analysis of other thinkers, we can ensure that our own understanding of the world is correct. Now, every time one makes a truth-claim about the world, there is an argument in defense of that proposition, whether implicit or explicit. However, it is the task of every responsible person to try to make their defense, their argument, as strong and as public as possible so that the rest of the world can analyze their truth-claim for its veracity and perhaps learn from it. In this world of competing theories of the universe, based on different experiences and observations and chains of reasoning, argumentation and analysis is the process through which bad theories are discounted and new, better theories are created through the synthesis of other good theories. [For more on this, talk to me about my classes on argument theory and method (maybe one day I’ll distill it into a post here) and see my post from October, “On Questioning.”] But, here I would just like to say that the process of argumentation and analysis, which yields advances in both the theoretical and the practical sciences, is dependent on the quality of the participants’ reasoning, critical thinking, and questioning skills. These essential human traits, capable of cultivation to remarkable levels, are the source of progress in human life and society, as evidenced by their central role in even the advanced areas of academic knowledge.

Now that we’ve determined, roughly, what is to be learned through education, we can ask how these things should be learned. And, to skip entirely a seemingly useless chain of reasoning, I assert that these things should be learned in the way which is most effective for people to learn. So this question changes slightly, for now, to become “How do people learn?” Now, as I have said before, knowledge is for a purpose. That is, people have to want to learn in order for them to truly learn something, and they want to learn things for a purpose, perhaps to merely demonstrate their knowledge, or because they find the material interesting, or, more likely, because they have a goal in mind, the attainment of which requires the utilization of certain knowledge. So, people learn most effectively when they have a need for what they’re learning.

Many researchers in the education reform movement, most notably John Holt and Maria Montessori, have said similar things about the learning patterns of children. According to these researchers, children learn by doing. Children do not take classes in walking and reading; they just start doing it. They begin badly, but over time, through revisions to their understanding of their objective, they improve. They say that this is true across the board. Children learn by doing. And this lines up with my theory above, that people learn when they have a reason to learn. Children have a reason to learn how to do something when they want to and begin doing that thing, and so they learn with the help of their magnificent human minds.

[Interestingly, Mises hypothesized that the reason we can’t remember our earliest years is because we were passive observers of the world, with no sense of purpose. That is, when we were only a year old, we weren’t thinking yet, in that uniquely human sense, and therefore didn’t really exist yet. We can’t remember anything because we weren’t thinking anything to be remembered.]

An understandable reaction to the fact that children learn through doing and when they have a reason to learn is to then force kids to do more of that doing and to introduce more reasons for doing so. This is the idea behind, I think (rather, this is an argument that could be used in support of), much of modern schooling. Students are given repetitive tasks in the areas where they should be learning, and a grading system is established to provide motivation for doing the tasks and doing them well. 

But, as I discussed on our way through these areas of knowledge, all of this knowledge which it is education’s purpose to provide is naturally sought and acquired through the living of life. Living in the world, and wanting even a minimum level of flexibility within it, provides both the opportunity and the incentive to learn the basics of reading, writing, and arithmetic, as well as basic social norms. When one decides how they wish to integrate themselves into the division of labor, they begin training for their job by learning the specialized information that they will need (oftentimes provided by the employer, who doesn’t wish to hire untrained workers), and the purpose for acquiring this knowledge, obviously, is to be successful in one’s work. And the essential functions of critical thinking and arguing and questioning are also present in daily life and useful in nearly every application, and therefore capable of much cultivation through practice. All this to say that students will probably be doing all of these things naturally, without a coercive school system forcing them to do so, and with true purpose, rather than in fear of an artificial apparatus.

[As an educator who has developed and runs an independent research program that is designed to develop students’ critical thinking and questioning abilities, I believe that the most effective way to cultivate these skills is simply through careful conversation. Demonstrate what you want the students to be learning, and then seize upon every opportunity to help them practice these skills. Ask questions, and teach them to question others, themselves, and even me (the teacher). When they want to talk about an interesting topic, even if it’s unrelated to their topic of study, engage with them, subtly help them build an argument and defend it. Throw out an idea or a news story every once in a while for them to analyze and think critically about. These skills certainly can be developed in a number of ways, but still all naturally, through careful attention and skillful enticement and appropriate amounts of respect.]

[I acknowledge, however, that there are multiple ways of teaching the same thing, and other methods or theories of education which are about as good as mine that exist and are employed successfully. Which will prove most effective and best for the students will depend on the individual and his circumstances and should therefore, I believe, be left to the market to discover.]

Now, all of this has just been a few of my thoughts on education generally. This is an area of particular interest for me (I design and run a high school program), and I have thought and written extensively on the topic. But now I’d like to, briefly, connect these general thoughts back to my project a little more explicitly. It should be somewhat implied in what I say above, but education should be developing children into adults with developed human faculties capable of leading purposeful lives in the pursuit of their individual happiness. As a voluntaryist, I do not believe that any child should be forced to learn anything specific, but in light of the rest of my project, there are certainly some things that it would be beneficial for everyone to know. To further develop one’s human abilities and become fully human, it may be necessary to become self-aware, or knowledgeable about what makes human beings different and special and how this state of affairs comes about. The basics of economics and philosophy, enough to understand man’s role in the universe and method of living, the necessity of society, how best to preserve it, and the function of other social institutions like law and morality and family, would undoubtedly be an asset to every person who cared enough to learn them (and everyone else in society). How, exactly, this should be taught should, again, be left to the market. But, in general, it requires educators to live the principles that can be derived from such knowledge: Respect everyone, even small children, as potential creators and shapers of their own destinies; create communities of learners where students can develop relationships and experience the benefits of working with others; and demonstrate a moral and just life to be observed, imitated, and practiced by the watchful little ones. 

There are many flaws in our current educational system. I have, for more than my entire adult life, been on the front lines trying to fix some of them. My colleagues help in my efforts and engage in their own campaigns. Some of my own students have picked up the standard themselves and are marching off to war. I have no doubt that education in this country will be radically transformed in my lifetime. As we make our changes, however, we should always keep in mind what the real purpose of education is. [For example, arguments for reform that begin by presenting evidence of decreasing student test scores could be regarded as implicitly arguing that the job of schools is to raise scores on standardized tests.] We should consider what we want students to be learning. Are we training future employees, or are we creating informed citizens, or are we trying to develop creative, thinking human beings? And finally, we should think about how we want to deliver this education to the students. For, as the great Gatto said, “The method of schooling is its only real content.” Will we continue to lock children into artificial environments where information they did not ask to learn is taught to them, or will we respect children as the growing human beings that they are and allow them the freedom to grow peacefully?

Tuesday, January 30, 2018

J31 - Morality as a Rational, Social Phenomenon

A topic like morality is one which cannot be done justice in a single journal, or a single essay, or a single thesis, or even a single book. Indeed, it is a subject which many of the great minds in our history have dedicated their lives to studying and discussing. My student, Alex Gugie, has himself written over 75,000 words on the subject this year, and I would encourage any reader to visit his website and read his thoughts on the matter. Having exerted no small amount of influence on his project, I can say that I endorse much of what he says there. However, I feel that I must comment, extensively, on morality myself, as it connects so many of the topics in EMC2 this year, and is, in fact, a critically important aspect of being human. And, as always, I have a somewhat unique perspective on the issue, in light of my wider theory of being human. 


With a topic as vast as this, I’m uncertain where I should begin. Therefore, I’ll quite arbitrarily start by examining the idea that human morality is a product of evolution. Charles Darwin, the father of the theory of evolution, was a proponent of the idea that morality was a byproduct of evolution, and that our ability to be moral creatures was what really separated us from our closest evolutionary relatives. There is much support for this idea, that morality has evolved. Indeed, we can justify many of the behaviors that we label as moral as behavior which would have aided in our struggle for survival throughout our species’ history, and we see many of these same behaviors in other animals, too. Altruism, doing good for others, is not a trait unique to human beings; many creatures exchange favors with each other. And there is an evolutionary advantage to doing good for others: for one thing, it can be good to be owed a favor, and for another it can be a sign of your own fitness that you’re capable of helping another. Similarly, evolution can explain why human beings, and other creatures, are more concerned with the fate of their close kin than with strangers, since close kin are more likely to carry the acting individual’s genes, and therefore their survival will more likely lead to the survival of those genes. Indeed, there are many ways in which altruistic and “moral” behavior would have been an evolutionary advantage throughout our history. Finally, there are many altruistic and “moral” behaviors that appear among all human cultures, seemingly unexplained. The Trolley Problem in philosophy, for example, is often seen as being easily answered, but ethicists have debated for generations the question of why that natural answer is or is not the right one. On the basis of the existence of these instinctual, advantageous behaviors, many modern theorists, Alex among them, therefore believe that evolution has made us moral.

There is much to be said, however, against that conclusion. The whole argument, the application of the evidence in support of the premise, is abysmally weak. First, it assumes, without justification, what behavior is moral. To say that humans are naturally moral because we are naturally altruistic is to beg the question of whether altruistic behavior is truly moral. It is true that altruism is generally regarded as moral behavior in our society. Why this is so, however, requires an explicated and defensible theory of morality by which to label altruistic behavior as moral. It may be that these theorists believe that evolution has designed us to instinctively know what is moral, and indeed has designed us to be moral, and therefore we can know that altruism is good and moral because, after all, evolution has driven us to be moral and to act altruistically. But surely the circularity in this argument is apparent; to escape it, one is reduced to saying that we act how we act, and that nature itself has stamped natural human behavior with the label of “moral.” Just because most people choose the same course of action when confronted with the Trolley Problem does not mean that their response is the correct or moral one.

But this theory is even more problematic because there are many behaviors which we consider to be moral which are not natural to us, and many behaviors that come naturally to us which we believe would be immoral. Doing “the right thing” often requires an internal struggle against our natural impulses. As Diderot said, “There is no moral precept that does not have something inconvenient about it.” So even if some moral behaviors are made easier by our evolution-shaped biology, we cannot attribute our entire conception of morality to evolution. Indeed, if evolution had truly made us moral, if how we acted naturally was how we should act morally, then there would be no need to consider the problem of morality, or for all those great thinkers mentioned above to dedicate their lives to the problem. In fact, that we’ve had all of these thinkers, and that each of us personally attempts to solve these problems, and that we all often come to different answers, is evidence that morality is not a product of evolution, but of the human mind. If we were all naturally moral, as human beings, there would be no need for debate over which behaviors were moral, even if perhaps we didn’t always want to conform to them.

[I am aware of the literature speaking to the evolution of culture and its influence on morality. Given the forum here, I will not address it properly, but meme theory is just as, if not more, susceptible to my criticisms, especially my last.]

Finally, the theory that evolution has shaped us into moral creatures rests upon a fundamental misunderstanding of evolution. Evolution is a process, a passive process. It has no agency. It cannot shape us, make us, design us, mold us, push us, do anything to us. Evolution just happens. The confusion here is understandable, as the language used to teach evolution, and indeed to talk about the process generally, suggests its agency. The most fit creatures are “selected,” we are told, to pass on their genes and shape the next generation of the species. The phrase “natural selection” suggests that nature is selecting some advantageous trait for the species to possess. But, like the market, nature has no agency of its own. It does not select anything. Evolution, in a nutshell, is what happens when the individual members of the species who are not sufficiently adapted to their environment die before they reproduce. That’s all. Every single living organism is engaged in an epic struggle, from the moment of its birth, to survive the conditions of its environment. Some of them do not survive, and their genes do not get passed on to the next generation. The fittest do survive, and thus pass on their genes. This process repeats itself endlessly, and the process as a whole, when looked at in retrospect a couple hundred million years down the road, reveals a seemingly systematic series of changes in a species, which we call evolution. 

There are two important aspects to understand about this more realistic presentation of the theory of evolution. First, the trait being passed on must precede the passing on. That is, nature must have something to select. The fittest survive because they already have the trait which is advantageous in their environment. They may have gained this trait from their ancestors, but their ancestors had to acquire the trait (through mutation or specialized expression of certain genes) before it could be passed on. Therefore, evolution has not made us anything, has not truly chosen any trait for us to have. All that happened is that some creature had some trait, and this trait got passed on. Morality, therefore, even if it is evolutionary, could not have been produced by the evolutionary process. And since morality proper is a uniquely human phenomenon, and humans act with purpose, there must have been a reason for humans to begin acting morally before natural selection could find the moral members of the species to be the fittest for the environment. The idea of morality, therefore, is not a product of evolution, and therefore evolution did not make us moral. Morality had to be invented before it could be passed on. Second, there should be no conflation between traits which aided in our survival in certain environments and traits which are good and moral. In fact, there’s no guarantee that a certain trait we possess is even advantageous; perhaps the members of the species which possessed the trait were independently fittest and this other trait just happened to be passed along. Furthermore, we should not fall prey to the Whig theory of history and presume that every evolutionary change has been an improvement. We are always just trying to survive in our environment. No one acts a certain way because they believe that the action is “evolutionarily advantageous.” No, people act as they do because they believe that the action will help them survive. This means that a species isn’t building towards something greater. The species that exist today are no better or worse than the species that existed millions of years ago. We’re all just trying to survive in, or adapt to, the environment we currently find ourselves in. And that environment could change, gradually or rapidly, and the fittest members of a species could suddenly lose that status.

So evolution did not make us what we are. The fact that our ancestors managed to survive their environments and reproduced is what resulted in us being like they were. But there is nothing inherently good about that; it just is. We are what we are because our ancestors were what they were. That doesn’t make any of us moral. So, I reject the proposition that morality is somehow an evolutionary concept. 

As mentioned above, morality had to exist before it could be passed on. And, also mentioned above, morality is unique to human beings. But what is it that makes man different? Ah, yes. Reason. Man is the rational animal. Morality, therefore, must be a product of reason, and therefore discernible through reasoning. [I will admit that evolution has left us moral, but only insofar as evolution has left us rational.]

Reason, which is man’s great tool in his struggle against scarcity, allows man to always act purposefully, in what he believes to be his best interests. If reason has generated a system of morality, therefore, it seems sensible to conclude that the purpose of morality is to aid man in his quest to survive and thrive in a world of scarcity, to always act purposefully to attain what he believes to be in his best interests. As we have discussed in a great many journals, man’s faculty of reason is fully developed only in society. Additionally, it is society which has allowed man to do as much as he has in reshaping the world to fit his own image. Man, therefore, is a social creature just as much as he is a rational animal. They are one and the same. Therefore, the existence of society, which enables each individual man to fully develop his great tool of reason and also gives rise to an extensive division of labor which results in increased productivity for all, is in man’s best interests. It is society which has allowed man to not just survive, but to conquer nature and transform the world. However, as discussed in my journal on Society and Ideology, society is a rational phenomenon but not a designed one. Just like individuals don’t act as they do because they want evolution to pick their genes, individuals don’t act as they do because they want to sustain society. The concepts of evolutionary heritage and human society are just too big and alien for them to enter regularly into the calculus of acting man’s decision-making. Ignorance of the long-term consequences of one’s individual actions on the social fabric can lead men to inadvertently damage society by disregarding seemingly meaningless social norms. To combat this, I explained, society gives rise to other institutions, beyond the basic market economy at its foundation, to encourage social life. Law, education, and family all develop alongside society and serve to strengthen and sustain the social structure of humanity. Morality, I contend, is one of these institutions. It is an idea of how people should act, and it exists as a formal discipline so that human actions can be considered in light of their long-term consequences on the social structure and judged as either good or bad. By going through this process academically, general principles can be distilled which serve as a basic check on the behavior of individuals living in society. These individuals may not understand exactly why they should refrain from killing and stealing and lying, having not personally followed the long chains of reasoning behind the prohibition, but they know that the prohibited actions are “immoral,” and that they should seek to be moral. Therefore, they align their actions with the interests of society.

Moral action, therefore, is action which sustains society. Immoral action is action which endangers society. Note that I am not necessarily talking about actions affecting other people. We certainly associate the idea of morality with our relationships with other people, but this is a result of the social nature of morality, not necessarily because morality is about other people. Moral action may be action that hurts other people, so long as it helps sustain society. Closing a factory, for example, is seen by many as an immoral action, as it suddenly thrusts a great number of workers into joblessness. But if the factory was suffering severe losses, this indicates that the factory’s operation was diverting resources that were more needed in other lines of production, and this factory’s closure means that society is better serving its purpose of maximizing wealth for all. The pursuit of profit, therefore, which is a hallmark of a capitalist society and is ruthlessly condemned by the critics of capitalism, is actually a moral principle which furthers the development and flourishing of human society. 

Society is what sets us apart from other creatures. Yes, reason sets us apart, but we are rational because we are social. What has allowed us to advance and create as no other species in the history of the world is our ability to cooperate with one another in a complex division of labor. Indeed, in his book A Natural History of Human Morality, Michael Tomasello says that while the great apes are about as intelligent as homo sapiens in physical tasks, they are vastly behind us when it comes to social tasks. That is, chimpanzees can build tools and understand language and mimic behavior and remember which cup hides food, but they do not cooperate with each other to solve tasks like humans do. This is what distinguishes human beings, our social nature, our "social intelligence." Tomasello actually says that the idea of “social intelligence” is something of an understatement, that we are actually “ultra social” and tend to cooperate with each other even on tasks where cooperation is unnecessary. This is because, Tomasello explains, we have come to think of ourselves as members of a larger group working towards one task, a phenomenon he terms “shared intentionality.” This shared intentionality, this sense that we belong to a group, a society, is the true cognitive difference between us and apes, he says.

I bring up Tomasello’s book because he raises a very interesting point in it. He compared the cognitive abilities of apes to the cognitive abilities of children in a great many studies as he developed his understanding of the differences between homo sapiens and our closest relatives. In one of the chapters, he notes that the idea of morality is often broken down into two parts: the idea of sympathy, or concern for others, and fairness, or a concern that people get what they deserve. Tomasello notes that chimps have sympathy for each other, but that they lack a concern for fairness. And then, almost as a side note, he says that in these experiments, when the children worked together to obtain food, they would split the food evenly between them, but if the children were randomly given different amounts of food, they generally did not spread the food evenly between them. The experiments suggest that children believe it is fair to share a reward when they have worked together to obtain it, but only in that context of collaboration. If a child did not contribute to the acquisition of the reward, then he or she did not deserve to share in it. I think this throwaway observation is extremely significant because it suggests that morality is directly tied to the idea of cooperation (implying shared effort in the completion of a task). Morality involves considerations of fairness, not just the arousal of sympathies.

Now, what generates society among human beings and not among lesser species? And what do I mean by society? Because the great apes and other species of humans throughout history have had families and bands and tribes that they lived in. But I’m talking about a more extensive society, one based on trade, rather than kinship. What leads to the development of society is the higher productivity of the division of labor, and the ability to recognize this fact. This recognition of the benefits of the division of labor, this is unique to homo sapiens and it is what has allowed us to create a society where no other species has. Many creatures engage in the swapping of favors and understand reciprocity (“scratch my back now and I’ll scratch yours later”). But to simultaneously swap two different objects for each other is a uniquely human phenomenon. And this idea of trade leads to the idea of specialization, which is the realization of the division of labor and its concomitant benefits in terms of productivity.

Exchange is the fundamental social relation, and the market is the foundation of society. Again, society exists because it is man’s tool in his quest to survive and thrive in the world of scarcity in which he lives. Recognition of the benefits of society makes society possible. As Mises has said, “The greater productivity of work under the division of labor is a unifying influence. It leads men to regard each other as comrades in a joint struggle for welfare, rather than as competitors in a struggle for existence. It makes friends out of enemies, peace out of war, society out of individuals.” Peace. The idea of working together to create more for everyone, rather than fighting each other to get more for yourself, is the heart of society. Recognition of the higher productivity of the division of labor leads to the idea of peaceful cooperation, and this peaceful cooperation gives rise to human society with all its glory. This idea of peace is what allowed individual bands of homo sapiens to work together and build something greater. No other ape, and no other species of human, ever developed a society that extended beyond their family or core group. And this is because no other group of apes or other species of human could interact peacefully with other groups. And this is because none of these other groups could recognize the benefit of working with others. As Matt Ridley remarks in The Rational Optimist, “Famously, no other species of ape can encounter strangers without trying to kill them, and the instinct still lurks in the human breast. But by 82,000 years ago, human beings had overcome this problem sufficiently to be able to pass Nassarius shells hand to hand 125 miles inland. This is in striking contrast to the Neanderthals, whose stone tools were virtually always made from raw material available within an hour’s walk of where the tool was used.” Neanderthals were bigger than us, stronger than us, and probably had bigger brains than us. But the idea of trade was foreign to them, and without trade they were doomed to economic, technological, and cultural stagnation.
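(As a purely illustrative aside, and not anything Mises or Ridley themselves provide: the arithmetic behind “the greater productivity of work under the division of labor” can be sketched with invented numbers. In the toy calculation below, the two producers, the two goods, and the hours-per-unit figures are all hypothetical assumptions of mine; the point is only that specializing and trading yields more of both goods than isolated production.)

```python
# Toy illustration of the higher productivity of the division of labor.
# Everything here (the producers, the goods, the hours-per-unit figures)
# is hypothetical, chosen only to make the arithmetic easy to follow.

hours_per_unit = {
    "Alice": {"bread": 1, "cloth": 3},  # hours Alice needs per unit
    "Bob":   {"bread": 2, "cloth": 2},  # hours Bob needs per unit
}

HOURS_AVAILABLE = 12  # hours each person works


def output_in_isolation():
    """Each person splits their time evenly between the two goods and
    consumes only what they themselves produce."""
    totals = {"bread": 0.0, "cloth": 0.0}
    for costs in hours_per_unit.values():
        for good, cost in costs.items():
            totals[good] += (HOURS_AVAILABLE / 2) / cost
    return totals


def output_with_specialization():
    """Each person specializes in the good with the lower opportunity cost
    for them (Alice in bread, Bob in cloth), and the two then trade."""
    return {
        "bread": HOURS_AVAILABLE / hours_per_unit["Alice"]["bread"],
        "cloth": HOURS_AVAILABLE / hours_per_unit["Bob"]["cloth"],
    }


print("isolation:      ", output_in_isolation())         # {'bread': 9.0, 'cloth': 5.0}
print("specialization: ", output_with_specialization())  # {'bread': 12.0, 'cloth': 6.0}
```

With these made-up numbers, the pair ends up with twelve loaves and six bolts of cloth instead of nine and five, which is Mises’ point in miniature: cooperation through exchange leaves both parties with more than isolated struggle could.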

Society defines human beings. It allows us to be fully human, and it allows us to transform the world into a version that suits us better. Human action tends naturally towards the creation of society, in that we can recognize the benefits of the division of labor. However, the maintenance and growth of society is no sure thing, product of human will and action as it is, and therefore there need to be guidelines for how a member of society should act within one, such that the society can be maintained and grow. These guidelines are what we call moral truths, or moral codes, and moral action is action which is in conformity with these codes. I will not attempt to elucidate what a proper moral code would fully look like here, for that is not the purpose of this journal. But I do want to set forth two fundamental principles that must be embodied by such a code. First, the moral code must embody the peace principle. Society exists because men can cooperate with one another, and men can cooperate with one another only where there is a level of peace among them and each can trust that he will not be killed or otherwise harmed by his association with others. Almost all moral codes do hold relatively fast to this idea of peace among men. Again, morality is recognized as a social, or relational, concept. Many moral codes, therefore, include prohibitions against murder, theft, adultery, rape, lying, etc. These are all aggressive actions that disturb the peace and therefore lead to the decline of society. But there is a second principle that often goes missed in moral codes, and that is the market principle. In fact, even many of the greatest champions of the market economy have felt the need to make more philosophical appeals to justify their insistence that anti-market behavior is coincidentally immoral (see, for example, Rothbard's The Ethics of Liberty). But the truth is radically simple: society exists because of the higher productivity of the division of labor. If the benefits of the division of labor cannot be realized because of interference with the market system, then there is no reason for society to continue existing, as burdensome as it is for the individual, and therefore it will disintegrate. Therefore, hindering the operation of the market economy may be viewed as immoral behavior, as it tends to lead to the destruction of society.

Of course, this raises the question of whether or not society should be preserved. I can imagine, for instance, presenting this argument to some socialist-type and receiving the response that a society that creates such injustices as capitalism shouldn’t exist anyway. And this is a fair enough point, I suppose. But there is no real alternative, as I’ve demonstrated here and elsewhere, notably my first SDA. Society is based on the market. No market, no society. And a life without society would lead not only to the decline of man’s rational faculties, but to a decline in living standards for all and death for most. Society can sustain the population it does at the living standard it does because of the higher productivity of the division of labor and the innovation generated by competition and trade within a market economy. Society gives us the resources we need to live the lives we want to live. That’s why we created it. Society enables us to attain so many more ends, ends that wouldn’t even be conceivable to isolated man, than we could attain without it. So, even though moral, social behavior doesn’t always seem to come naturally to us, and often seems to be an inconvenience, we all try our best to comply because we believe it is the right thing to do, for whatever reason works for us, but ultimately because this behavior, while requiring some short-term sacrifices, serves to sustain the society which has generated so much wealth and pleasure and so many long-term benefits for us to enjoy. There is no real standard for saying that (capitalist) society is better than isolated struggle, except the idea that human life and welfare is a good thing. So living in a society, and living morally in a society, does involve sacrifice, and doesn’t result in utopia. But every choice between ends involves sacrifice, and the economic science is unmistakably clear that the ends secured by the existence of a prosperous society are vastly greater than the ends served by short-term, anti-social behavior to avoid the discomforts of living in society. Therefore, we should all strive to be moral. It’s not about sacrificing for the good of society. The choice before each of us is not a choice of doing what’s good for us or doing what’s good for others. The choice is always a choice of doing what we want to do now or of living the type of life we want to live tomorrow. And to avoid having to work through that cost-benefit analysis at every moment, we have developed principles, bolstered by whatever belief system proves most effective, to assist in making those decisions.

Given the length of this journal, I think it necessary to briefly recap before concluding. The title of this journal is “Morality as a Rational, Social Phenomenon.” As I’ve shown here, what sets humans apart, what allows for the exercise of our reason and makes our fantastical lives possible, is the existence of human society. This society is based on the higher productivity of the division of labor and man’s recognition of this fact. Because man acts for himself, and not with an eye towards how his actions affect others or even always how his actions will affect himself in the long run, it is possible that even individuals who genuinely recognize the benefits of society and wish to continue enjoying them may act in ways which tend to hurt society (the structure, not the other people in it). Therefore, man’s reason, which created society, also creates moral codes for the members of society to follow, so that man can act without having to trace the consequences of his short-term actions on his long-term well-being and the well-being of others. Morality is therefore a rational phenomenon, and its raison d’etre is society. Isolated man has no need for a moral code; he may act as he pleases with no thought to the consequences beyond his view. This moral code, in order for it to serve its true function, must embody both the principle of peace and the principle of the market, for these are the foundational principles of society. Peaceful cooperation in man’s struggle to survive and thrive in a world of scarcity. It is important that individuals are moral because society provides individuals with incalculable benefits that would be denied to them in isolation.

It is critically important to recognize morality as a social and as a rational phenomenon. Otherwise, otherworldly ideologies are appealed to, or other methods and philosophies are devised to yield moral codes. If morality is merely a product of evolution, then it need not be critically examined and sought to be improved, as either evolution will see to its improvement, or it needs no improvement, or it’s not important. If morality is merely a product of God’s will, then it need not be critically examined and sought to be improved, as God is good and God knows all and God says act this way, so we should. It is the same for other sources of morality. But when morality is acknowledged as a product of human reason, and is recognized as meant to serve a specific purpose (encouraging social action), then it can be critically examined and subjected to revision and improvement. [An effective delivery system will still be required for delivering these moral truths to the masses.] And I think this is so important because the state of morality for many people in our society remains a vague, fuzzy conception. Most people believe that morality roughly overlaps with altruism, and that there are a few prohibited activities related to that altruism. But this instinctual grasping of morality is not enough, for it entirely misses, and indeed substitutes altruism for, what should be the central principles of a strong moral system. This misunderstanding of morality leads to widespread action which is not truly moral, and this action can have deleterious effects on society. We could call morality a concern for our fellow man, then, in addition to ourselves, because getting morality right is of the utmost importance for everyone. Nothing less than the fate of humanity is at stake.

Sunday, January 14, 2018

History of Reason in Western Thought

The first group of thinkers to really recognize reason as an independent concept worthy of thoughtful consideration and development, as something more than a method of efficiently applying means to ends, were the Greeks. As Plato explains in The Republic, underlying the Greek idea of reason was the conception of form. Form is an identity of structure, a pattern of commonality, that connected diverse and changing iterations of the same essential thing. The Greek word for reason meant order or relation, and it was the Greek idea that the order and relation of things was not sensed, but apprehended through intelligence, which could see the universal trait in the particular iteration. This idea of form was broken down into four sub-ideas: form as essence, which dealt with the particular by reducing it to a general kind, classifying this particular structure as a type of tree; form as end, which connected objects based on their common end, by recognizing that although the sprout and the tree did not appear to have any common characteristics, they were both stages on the way to the realization of one goal; form as law, which examined what is required by the characteristics of the object, recognizing that the implication of the object being a tree is that it will burn; and form as system, which would create a unity between the many forms, allowing man to think without sensory crutches and to see all things as connected necessarily.

The idea of form as law was really important. It held that the laws it derived were certain, or necessarily true; new, not merely analytic; independent of sense, seen with intelligence not sight; universal, with no exceptions and the same for all men; objective, existing outside of ourselves; and unchanging, providing an underlying structure for sensory changes to occur upon. The dominant conception of rational law in Western thought is based on this Greek idea of the form as law.

The idea of form as system is really interesting to me. Plato would say that in a fully rational system, there would be nothing arbitrary or isolated. That is, everything would be necessarily implied by everything else, so that all the parts would give each other support. He admits that this is a difficult level of comprehension to reach, since we are cursed, through weakness of focus, to see things piecemeal, moving step by step from premise to conclusion. But, he says, we can sometimes glimpse a better way of thinking in a piece of knowledge that we are quite familiar with. “The man who knows his subject never thinks in syllogisms.” [I find this rather compelling, because I have often been visited by startling flashes of insight after long consideration of a subject. That is, I know exactly what experience he’s talking about.] Plato thought it would be possible to see things as a whole, and to not have to move one’s attention from one part to the next because one would be seeing each thing in the light of its relation to everything else. That is, one could see the whole in the part and the parts in the whole.

Plato thought that the best and happiest life was the life of contemplation. He believed that practical reason would allow us to shape ourselves into our true forms, consistent with our true natures (ascertainable through reason). Thus, reason for him was more than a power of framing or following an argument; it was a crucial element that we needed to form an accurate idea of the ends of our life and to plan that life well.

For the next major thinker in the history of reason, we have to fast-forward a couple thousand years, to Rene Descartes. [Strictly speaking, the idea of reason continued to develop during the intervening period, but it was wedded to the Catholic Church; Aquinas was the major rationalist in this time period, and I am a fan of his. But Descartes made reason autonomous once again.] Descartes believed that reason was a “natural light” that all normal men possessed and was our one source of clear and distinct (and therefore certain) knowledge. By distinct he meant absolute knowledge, by means of which we can grasp what things are in themselves with no distorting conditions. To get at this knowledge, he believed that we had to apply the method of mathematics to all knowledge: first, deal only with “simple natures,” abstractions of such simplicity that there was no ambiguity; second, deduce their relations logically, remaining unsatisfied with any connection that is less than necessary and self-evidencing; and third, proceed from the logically prior to the logically posterior, beginning with self-evident axioms. Unfortunately, Descartes failed to apply the first two steps successfully to the real world.

Descartes believed that the difficulties of being rational were difficulties of character rather than ability, and could be overcome through discipline. Reason never errs, so if it ever seems to, it must be because of a non-rational influence, and the rational man must anticipate and avoid these influences if he wants to think successfully.

Descartes’ task was taken up after his death by the great saint of rationalism, Baruch Spinoza. He was magnificent. His main work was on ethics, but he decided that in order to know what was good for man, he had to understand man and his place in nature, and in order to do this he had to create a system of philosophy. But the only tool he used to do this was his own mind: he never appealed to authority or revelation or common consent. And he never shrank from where his conclusions led him, becoming perhaps the first philosopher to discard the existence of God and free will. As a result, his brilliant work was virtually unknown for more than a century after his death.

Spinoza believed that knowledge itself, or rational understanding of things, was the ultimate end of human action. We are animated by a drive to maintain and expand our mental beings, and our minds are always in a process of evolving towards their natural ends. Since the natural end of imperfect ideas is to ultimately become wholly perfect ideas, we are constantly striving to achieve a more complete and perfect understanding. For Descartes, reason was an all-or-nothing process; either you understood something or you didn't. For Spinoza, there was a scale of reason with infinite degrees of success. There were several levels of advancement that our thought moved through, although all were means of grasping connections. The first level was contingent connections, which might have been otherwise. Here we are not truly thinking, but following lines of association, loosely connecting sense-data and concepts. The second level picks out the threads of necessity running between different things and connecting unanalyzed wholes. In other words, this level was an understanding of relationships between phenomena, cause and effect. Spinoza believed that when he grasped a causal law, he was seeing a connection that was necessary and therefore intelligible. This second level is where Descartes would have stopped, but Spinoza believed that there was one higher level, which would be an understanding of a whole, similar to Plato’s conception of form as system. Spinoza was dissatisfied with the step-by-step progression of reasoning, and the abstraction that was necessary for logical, mathematical thinking. He believed that a higher understanding could be achieved where a whole succession of steps could be instantly grasped and everything would be seen in its own context of necessary relations. A concrete thing would be a focal point for infinite lines of necessity converging upon it from the rest of the universe. Once man achieved this knowledge, Spinoza believed, his thought would be fused with God’s own divine thought. In his view, God was the universe considered as a single system, fully comprehensive and fully comprehensible.

Spinoza did get back around to ethics, of course. He believed that morals were a matter of intelligence. To live well is to live reasonably, and people go wrong when they misconceive their own good and the good of others, usually under the influence of emotion. This led him to the conclusion that impulsive behavior was animal and thus determined, while true freedom was found in rational thought and action. Growth in rationality, he said, means being increasingly restrained by rational law, but this restraint was the secret to true freedom and human happiness.

The end of rationalism, in my opinion, began with Gottfried Leibniz, although I must applaud him for his realism. For Spinoza, the world was a single whole; for Leibniz, the world was an assembly of different substances, each engaged in its own struggle towards a level of clear understanding. Leibniz believed that there were two different kinds of rational insights, “eternal truths,” true for all worlds, and “contingent necessities,” which might have been different than they are. Eternal truths were the propositions of logic and mathematics, analytic statements that he failed to justify, a failure which fueled the positivist criticisms of rationalism in later years. Leibniz also believed that all true statements about the real world would turn out to be necessary. That is, there were no statements that were merely empirical (except that things existed); everything was determined in the nature of things. If we really understood Caesar, with all of his features and all of his context, we would be able to see that the circumstances of his death were completely necessary and determined by his nature and the nature of the world around him. “Every true predication has its foundation in the nature of things.”

However, whereas Spinoza believed that the main principles of even the physical sciences were self-evident truths, Leibniz realized that they were not. Nature seemed to be governed by laws that rendered its course inevitable, but it would be unreasonable to argue that these laws were themselves inevitable, in the sense that they couldn’t have been otherwise. Unfortunately, since these principles were not self-evident, Leibniz could think of no other explanation for them but that they were divinely created and chosen. Interestingly, while he believed that God had chosen the causal laws that governed the world, he also believed that God himself was bound to what logically followed from them, since even He could not do what was logically impossible. Once these laws are in place, the course of all things is logically necessary.

Next comes Immanuel Kant, who was quite a radical rationalist, and yet somehow is most closely associated with the modern sense of rationalism. Kant believed that reason could give necessary and universal knowledge, but also acknowledged that sense data could only ever yield probability, not certainty. Therefore, our understanding of causal laws must be a priori. More radically, we supply the causal laws. In other words, nature adjusts itself to our reason. Kant held that our reason was an ordering faculty that imposed its own structure and design on the sense data it was given, and therefore our conception of the world was a product of our mind. However, we could not control this process, and we could not understand it because our minds were not up to the task. That is, our minds were incapable of understanding the world as it really is, since all we could perceive was a world created by our minds. 

By reason, Kant meant “the faculty which supplies the principles of a priori knowledge.” He held that there were three levels of a priori knowledge. The lowest level he called “pure intuition,” and it gave us the categories of space and time. These are not sense data, but orders in which sense data is arranged. The second level was called “understanding,” which operated through concepts and judgments. This is the ability to see how something is embedded in a series of necessary relations. Kant believed that there were four categories that could be derived a priori about things: quantity (everything exists in a whole-part relation with something else), quality (every feature will be present to some definite degree), relation (every event will have some cause and some effect), and modality (conclusions that are not necessary or possible (both a priori), but merely actual [this is the weakest category; I think Kant just got carried away]). The highest level was composed of the ideals which serve to regulate our attempts to order our knowledge. Kant believed that our experience of the world could be broken down into outward nature and inward feelings. Reason would deal with these spheres as distinct disciplines, but then, through rational theology, the fields would have to be reunited to show how the inner life existed in relation to the rest of the rational order. In a disappointing argument, Kant said that the full development of our rational faculties could not be achieved in a single lifetime; therefore, there must be an afterlife in which we could continue to make progress. If this development is to be realized, it must be through a power that governed both nature and human nature. Thus, Kant “proved” the existence of God. Interestingly, Kant provided a strong refutation of his own theological theory in earlier versions of his Critique.

Hegel conceived of reason as apprehension a priori, which may be synthetic, and recognized that rational insight is of more than one type. Indeed, Hegel distinguished about 80 categories of a priori thought (Kant only identified [a probably ambitious] 17). Hegel further recognized that these forms of knowledge existed on different levels and must be arranged hierarchically. However, Hegel did not believe that these categories could be broken up into sharply distinct levels; for him, everything was a matter of degree. He replaced Kant’s three levels with a ladder with ascending orders from abstraction to concreteness. Hegel regarded necessity, truth, and reality themselves as matters of degree. The necessity became firmer, and one’s understanding more complete, the further one moved up the ladder and approximated the comprehensive vision at the top. We ascend this ladder through the dialectic, which is a zig-zag pattern of thought whereby we define concepts, thus making them more concrete, through their relation to other concepts. Thesis moves to antithesis and then culminates in a synthesis, which begins the process over again. Hegel believed that this was the necessary method because he believed that concepts changed in different contexts, and that as our scope of understanding widened, we could not assume that the isolated connections we had made earlier would hold true in the larger picture we could now see. It was better to understand things as part of each other from the first, rather than see how each thing was connected to other things, step by step.

The essentials of Hegel’s theory were pretty much wholly adopted by the British rational idealists like F. H. Bradley. These Brits believed that reason was an impetus towards “wholeness,” that drive in human beings to understand a thing completely and to understand its interdependence with other things. In other words, we see a fragment of reality (and this comes from Hegel), and “the opposition between the real, in the fragmentary character in which the mind possesses it, and the true reality felt within the mind, is the moving cause of that unrest which sets up the dialectical process.” Reason is our drive to complete the picture. The function of reason as it works in each of us is to construct the rational whole. Integrated knowledge demanded consistency, both in itself and through interdependence with other facts, so that every fact was connected necessarily with others and ultimately with all other facts. Only once we reached this comprehensive consistency would reason be satisfied. 

Empiricism developed alongside rationalism, though it started later, and at the turn of the century rationalism fell from grace as the preeminent theory of knowledge in the philosophical and scientific communities [for a number of reasons, which we can discuss if you’d like]. In recent decades it has made a comeback, since other philosophies have been found to be intellectually wanting in the areas of the social sciences, even as positivism continues to be the official creed of the universities and postmodernism continues to eat away at anything resembling a rational order [or common sense].