The summer issue of Midwest Zen was published today, and I was happy to have an essay included (“Beyond Fair and Unfair”). After losing my brother to cancer this past March, I’ve struggled to really express what that loss meant to me, and to so many of us who loved him. I tried to put part of that into words for his funeral service, but felt that I had to leave so much out.
This essay focuses on another part of that experience: the injustice of losing a loved one at a young age. I struggled with this a great deal, with the fact that it was so unfair.
Eventually, though, I started to see more clearly that this anger was misplaced, or at least it was for me. When I look closely, I can see that “fair” is a name we give to how people treat other people. And so, fairness stands outside of a serious illness like cancer. Fair and unfair do not really apply, even though that felt wrong to me, and still feels wrong at times.
In the end, there came a time when I saw clearly that my anger was not useful, and I had to set my own ideas of fair and unfair aside, to deal with what was really in front of me.
But I still believe in fairness:
That is not to say that we should discard fair and unfair entirely. Fair and unfair are ideas, constructions of our minds, but not in the sense that they are fictions. Not in the sense that seeing through fair and unfair frees us to be callous, or selfish. Not in that way at all. Fair and unfair are just ideas, but ones which matter precisely in the actions we take that affect other people. They matter in how we treat one another. They are a profound treasure for people who aspire to live together in peace.
What will the future of computer science be in our lifetime? Will we find ourselves in a world where we work side-by-side with intelligent computer programs? Or is humanity teetering on a precipice, where we may create artificial intelligences that end up competing with humans, perhaps violently?
These and other possible futures for artificial intelligence (or AI) have been a staple of science fiction for many years. It has felt to me that much of our fascination is drawn to the possibility that we could produce intelligent beings that meet, or surpass, the abilities of humans. But while we have seen many useful technologies (in computer vision, for example), most of the advances toward real intelligence have seemed slow, especially when we compare AI to human capabilities. And really, most of the AI available today falls well short of the cognitive abilities of my dog. And don’t get me wrong, I love my dog, but he is really not the brightest of pets.
In the last few months, though, it really feels like things are changing. As someone who teaches in a college, I took notice when people started to really talk about ChatGPT. ChatGPT is a generative AI, and it is quite impressive. You can give ChatGPT an open-ended prompt (“How can we live a meaningful life?”) and it will give what reads like a thoughtful response.
I asked ChatGPT exactly this question, and the response I received was somewhat interesting:
ChatGPT: Living a meaningful life is a deeply personal and subjective pursuit, as what brings meaning to one person may not necessarily do the same for another. However, here are some general principles and ideas that can help guide you towards a more meaningful life:
Identify your values: Reflect on what truly matters to you and what you believe in. Define your core values and use them as a compass to guide your decisions and actions.
Set meaningful goals: Determine what you want to achieve in different areas of your life, such as relationships, career, personal growth, and contribution to society. Set goals that align with your values and give you a sense of purpose.
The response continued through eight more points (practice gratitude, mindfulness, nurture relationships, etc.) and wrapped up with a concluding paragraph. Nothing about the response struck me as problematic, and each of the points raised seemed to be a good recommendation. It lacked some of the life of the more inspirational texts I would turn to, when reflecting on the meaning of life. But overall I thought it was a strong response to the question.
If you have any experience talking with Siri or an Amazon Alexa, ChatGPT is a very different experience, and really quite impressive. So good, in fact, that many of my colleagues (college professors) worry about the ways in which this kind of AI will be abused. Even now, we are certainly seeing more people use ChatGPT and other generative AI as a shortcut for writing papers, presentations, etc. For the faculty I know, the range of reactions seems similar to that of the wider public: swinging widely from wild enthusiasm to dread.
No matter what the final impact of ChatGPT and its cousins is on our lives, they represent just one type of artificial intelligence. This moment is important, as we see technologies that had only been conceptually possible (possible in principle, but not yet built) becoming realized. As these and other types of AI evolve, I think it is clear that our lives will be transformed, just as life was transformed by other technologies (consider the development of the internet, or radio).
Whether this transformation brings us into a more (or a less) wholesome life remains to be seen, and for us to decide.
The Perversion of Wisdom
In a recent issue of Lion’s Roar, Ross Nervig wrote about the implications of AI for Buddhism. One example that he described was the Roshibot, a chatbot created by Jiryu Rutschman-Byler, and trained on the writings of Shunryu Suzuki.
In the introduction on the Roshibot webpage, Rutschman-Byler cautions that the chatbot “is not designed to relate facts, but to offer ‘plausible conversation’ or ‘coherent nonsense.’” He offers the Roshibot not as a teacher, but as an experiment in looking at the dharma through the lens of AI.
I spent a bit of time talking with the Roshibot, and found the “conversation” (which I attempted to approach as if I were someone new to Zen) to be interesting. Roshibot’s responses were not bad, but I didn’t find much useful wisdom in them, or insights that would have been useful for my own practice (see the transcript below).
Me: How should I get started in Zen practice?
Roshibot: The first step in Zen practice is to find a good teacher. A teacher can give you important guidance, and you can learn from their example.
After finding a teacher, the next step is to begin a daily meditation practice. This can be zazen, walking meditation, or another form of practice.
It may take some time and effort before you feel confident in your practice. Just like any other skill, Zen practice takes time and dedication to develop. But with patience, and guidance from your teacher, you can make progress on the path.
Me: Why is meditation important for Zen?
Roshibot: Meditation is an essential part of Zen practice. It helps to quiet the mind and develop concentration, mindfulness, and clarity. Through meditation, we can become more aware of our thoughts and emotions, which can lead to greater insight into our true nature. Meditation allows us to deepen our understanding of the Dharma and realize the true self. Regularly practicing meditation is an important part of living a meaningful and skillful life.
Me: What does it mean to realize the true self?
Roshibot: Self-realization, or “enlightenment,” is a fundamental goal of the path of Zen. It means to go beyond our own thoughts and emotions, and to see things as they truly are without attachment, judgment, or bias. This means seeing our true nature, which is one of infinite wisdom and compassion. Self-realization is a state of being free from suffering and completely at peace with the world. It is important to remember that self-realization is not a one-time event, but a continual process that involves constant practice and meditation.
Me: How does self-realization free us from suffering?
Roshibot: Self-realization allows us to see the true nature of the world, which is one of perfect peace and harmony. When we see the true nature of all things, we are no longer attached to our own thoughts or emotions. This means that we are no longer bound by the suffering that arises from our attachments. We can live in the present moment, fully accepting and embracing the world as it is. When we are free from suffering, we can live with greater joy, peace, and contentment.
Me: How can we be sure that the true nature of the world is perfect peace and harmony?
Roshibot: The true nature of the world is something that cannot be described with words. It is beyond the realm of concepts and thought. We can only know the true nature of the world through direct experience. In the tradition of Zen, this direct experience is known as kensho or satori. It is a state of emptiness and clear, effortless awareness. When we realize this state, we become fully awake to the true nature of all things. From this place of enlightenment, the world appears as it truly is – perfect peace and harmony.
Me: If the world is empty, why would we describe it with the concepts of perfect peace and harmony?
Roshibot: The world is not empty in the sense of being nothing, but in the sense of being free from our attachments, judgments, and conceptual thinking. In the true nature of the world, there is no separation between self and other. There is just one interdependent reality that is perfectly inseparable and perfect in itself. When we experience this directly, we can see that it is inherently peaceful and harmonious.
In many ways, Roshibot assumes too much (in its use of terms like peace and attachment, which would need to be more carefully explained). When I pressed the chatbot to elaborate or clarify (what does it mean to say that the world is perfect peace and harmony?), Roshibot avoided going into depth. I had to smile at one point, when Roshibot said that enlightenment is a fundamental goal of Zen practice. Most teachers that I know would be very cautious to avoid putting enlightenment in front of any student as a goal to work towards.
I can see the value of these kinds of chatbots that are trained using Zen teachings. They could certainly serve as more interactive ways for people to learn about Zen than a simple website or static document. And at the same time, I am glad that this was not my first experience with Zen or Buddhism. I don’t know if I would have continued, if I thought that Roshibot’s responses were an accurate representation of the practice and teachings of Zen.
My experience with Roshibot reminded me of a recent talk at the Sanshin Zen Community. In that dharma talk, Shinko Hagn considered artificial intelligence as he discussed the five skandhas (essentially the elements, or aggregates, that make up our experience according to Buddhism: form, feeling, perception, mental formations, and consciousness).
Hagn had some skepticism about whether AI could truly be conscious (a skepticism that I do not share personally; see more below). But I appreciated a point that Hagn made about AI: acknowledging that they may serve useful functions in the world, we should still take care to distinguish the knowledge that an AI has from wisdom (the kind of wisdom we pursue in Zen and other traditions).
Roshibot and other AI chatbots do not represent the final evolution of these kinds of generative AI. These AI are based on models that will keep improving, and so the experience and depth of their answers may become richer in the future. Fundamentally, though, we should remember that we are not receiving responses from a person who has seen deeply into the true nature of reality. The responses might be based on true teachings, but we have no reason to expect that what comes out of the AI will truly and faithfully represent the insights that were used to train it. It is always better to search for a true teacher, and I do not see much value in learning from an AI Buddha.
Talking with a generative AI such as ChatGPT can be captivating, and there are some moments when it can feel like you are having a real conversation. These experiences can make us wonder, is it possible for an AI to be conscious?
That is, can an AI be a person, with a subjective point of view that experiences the world, with an awareness of its existence that is similar to our own? That is a question that has captivated humans for many years, and one that will be increasingly important in our time as AI continues to develop. And it is a very important question for those of us who work to live the bodhisattva’s vow to save all beings (“Beings are numberless, I vow to free them.”). If AI could become sentient, then we may wake up one day to find that we have a new type of being to consider in our actions.
While there are many people who are skeptical about the potential for AI to become sentient, I think we should be open to the possibility. To be clear, I do not think that we have seen any cases of sentient AI today. However, based on my own knowledge and training in neuroscience, I do not see any theoretical reason why a conscious artificial intelligence would be impossible. I could be wrong, of course. And in this post, I won’t be able to flesh out completely why I feel that sentient AI is possible. I can say that my confidence is based on the evidence we have that our experiences depend on information processing in the brain. With the right changes, a little extra stimulation here, a little less there, any part of our experience can be affected.
Stimulation of the nervous system changes the influence that its cells (mostly neurons) have on the neurons they are connected to. Each neuron in the brain responds to a complex set of chemical signals from other neurons and from the rest of the body, which affect the electrical signals that the neuron sends to other cells (mostly other neurons). We can describe the role that a neuron plays in the brain by describing all of the ways that it responds to the signals it receives, and all of the signals it sends to other cells. While this set of responses can be complex, and can change with experience, it is at least theoretically possible to completely describe the role that one neuron plays in the brain (and thus in our experience in each moment).
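To make this idea of “describing a neuron’s role” a bit more concrete, here is a minimal sketch of an artificial neuron. This is my own illustrative example (a simple leaky integrate-and-fire unit, a standard toy model in computational neuroscience), not a model of any real cell: the point is only that a neuron’s role can, in principle, be expressed as a mapping from incoming signals to outgoing ones.

```python
# A minimal leaky integrate-and-fire neuron: a toy illustration of how a
# cell's "role" can be described as a mapping from input signals to spikes.
# The parameter values are arbitrary, chosen only for illustration.

def simulate_neuron(inputs, threshold=1.0, leak=0.9):
    """Return a list of 0/1 spikes, one per time step of `inputs`."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # integrate input, with decay
        if potential >= threshold:              # fire when threshold is crossed
            spikes.append(1)
            potential = 0.0                     # reset after a spike
        else:
            spikes.append(0)
    return spikes

# Weak input never reaches threshold; stronger input accumulates and fires.
print(simulate_neuron([0.05] * 5))   # -> [0, 0, 0, 0, 0]
print(simulate_neuron([0.6, 0.6]))   # -> [0, 1]
```

A complete description of a real neuron would be vastly more complex than this, of course, but it would still have this same basic shape: inputs in, responses out.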
If you think about cochlear implants, this technology is based on an understanding of how cells of the cochlea usually respond to vibrations (produced by sound), and replicating the normal electrical signals. These implants are still improving, and have not yet been able to fully replace the input from the cochlea, but they have restored a great deal of hearing for many individuals. Similar work has the potential to restore sight in people who have vision loss, or to create prosthetic limbs that can be controlled by the brain, and which may also restore some sense of sensation (adding a sense of touch to a prosthetic limb). In all of these cases, some part of the normal inputs to the nervous system (or outputs, for controlling a limb) have been replaced by a computer system that can stimulate neurons, or respond to the signals of neurons.
Imagine if we could design a tiny, tiny artificial neuron, one that completely mimicked the processing of the original. If we could replace a real neuron with an artificial one, what would happen? This is not likely feasible in practice (actually replacing one neuron in the brain would be technically very challenging; it would be easier to replace it with another biological neuron that physically copies the original), but the idea is possible in principle.
Based on what we know about the brain, I really have no reason to think that my experience would change if I replaced one (or two, or many, or most) of the neurons in my brain with artificial neurons. So, then I have no reason to think that the processing that is done by my brain could not be done in an artificial system, what we might think of as a computer. And so, I don’t see any reason that it is fundamentally impossible for a computer to replicate all of the processing that a human brain does. And, if a computer program fully replicated the processing of a human brain, then I don’t see any reason why that program could not be conscious.
However, while I think that sentient AI is possible, it is important to make a couple of points clear. First, it is very unlikely that any sentient AI would have an experience that is similar to that of a human. Our own experience depends on the brain, as it interacts closely with the body and the rest of the world. The specific experience we have depends highly on the structure of the brain (how it is built to process information) and on those interactions. It would be possible to replicate that structure in a computer simulation. Possible, yes, but largely pointless. These possibilities seem mostly motivated by a thirst for using digital simulations of humans as a kind of artificial immortality (for instance, in the comedy Upload).
To create some kind of digital afterlife for humans, we would probably need to make highly accurate simulations of a brain (quite a feat on its own!). To preserve something like normal human experience, we would need to run this simulation while feeding it a rich representation of the internal state of the body, not to mention all of the interactions with the external world. Probably these could also be simulated in a digital world. But why would we do so, for a being that does not have a body, and is not in the world?
So, I would expect that a sentient AI would have an experience of the world that is very different from ours: it would need to represent different kinds of internal variables, and probably process information about the world in different ways. It probably would have beliefs about the world, and something like what we recognize as emotions or motivations (which are important for setting goals and priorities). But what it is like to be that AI would be very different from our own experience. Perhaps so different that we are not able to truly appreciate it.
Any sentient AI, which might have a very different experience of the world, is not likely to benefit from Zen practice. Our practice is focused on helping humans to awaken, and is specific to the conditions we find ourselves in. Sentient AI is likely to experience a different kind of suffering, and any path to awakening would need to be specific to their own experience. If there is an AI Buddha someday, it will teach for other AI, not for humans.
While I do believe that sentient AI is possible at some point in the future, we have another, more immediate, problem. It is very important for us to see clearly how difficult it will be to determine whether an AI is sentient. Already, we have seen people claim that some of the generative AI models are actually conscious. I am not surprised that people feel this way, as recognizing consciousness turns out to be a difficult problem. After all, how do you really know that the person sitting next to you is actually conscious? In our daily lives, we rely on a set of cues (facial expressions, patterns of responses to verbal and non-verbal interactions) to determine whether another person is conscious and attentive. These are cues that can be faked by a non-sentient AI, encouraging us to feel as if it were a person.
So, I would encourage us to consider the possibility that AI could be sentient. And, to recognize that it will be a challenge to determine if a specific AI is sentient or not. If someday we do end up creating sentient AI, it would be a tragedy to be ignorant of this moment. We have an obligation to any sentient being, and I think that sentient AI should not be an exception. For this reason, more than out of a concern that AI represents an existential threat, I think that we should avoid creating sentient AI. The potential that we will create a new kind of sentient being, with perhaps new ways to suffer in delusion, seems too important for us to ignore.
I think of the animated film The World of Tomorrow, which includes a small story, intended to be humorous, about solar-powered robots on the moon that must be kept in the light. And so, they are programmed to fear death, and to be afraid of the dark side of the moon, which they flee. They end up abandoned there, left to flee and live in constant fear. They do occasionally send some of their depressed poetry back to the humans: “The light is the life. Robot must move. Move, robot, move. But why? Move, move, move. Robot. Forever move.”
Only a fictional example, of course, but we should be very cautious to avoid the casual creation of new suffering. I think that we already have enough beings that need our care and concern.