Implementation

AI and inquiry

My approach to meditation has always had a flavour of inquiry about it. However, discovering Liberation Unleashed (LU) showed me a more concrete and deliberate approach to inquiry.

(Being honest, I actually found LU very hard, because it is geared towards helping people see through the illusion, and less suited to identifying whether or not they already have. My mind also seems to work somewhat differently from most, which made it harder to engage in the specific LU style of dialogue.)

Whatever my personal experience of engaging with LU, its style of dialogue is significant. The simple idea of one person asking questions of another, in order to guide them towards their unconscious assumptions, is powerful, and has had many positive impacts on me and my friends. Kevin Schanilec built upon the work of LU to add further inquiries, drawn from the Buddhist tradition. Where I struggled with LU, these inquiries were earth-shattering and life-changing.

Yet these inquiries are time-consuming. They require a guide to stay with you for the duration of the inquiry, in (most likely electronic) conversation. That could take days, months, or years. So, whilst extremely powerful, the approach does not appear to scale well: it is far too labour-intensive.

I’ve often wondered how, and whether, technology could help here. AI is an obvious choice, but building an AI that can guide is well beyond my technical abilities, and a decision-tree style of inquiry, whilst interesting, would lack the nuance required to guide someone into their assumptions (a sketch of what I mean follows below).
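
To make that limitation concrete, here is a minimal, purely hypothetical sketch of a decision-tree inquiry in Python. The questions and branches are my own invention, not LU material; the point is that every path is scripted in advance, so the tree can never follow up on the particular assumption hiding in a given answer.

```python
# A purely hypothetical decision-tree inquiry. The questions and branches
# are my own invention for illustration, not LU material. Each node holds
# one scripted question; a yes/no answer selects a fixed follow-up.
INQUIRY_TREE = {
    "question": "Can you find a self in direct experience right now?",
    "yes": {
        "question": "Is what you found anything other than a thought or a sensation?",
        "yes": None,  # dead end: the script has no tailored follow-up
        "no": None,
    },
    "no": {
        "question": "Do the thoughts that are appearing need a thinker?",
        "yes": None,
        "no": None,
    },
}

def run(node):
    """Walk the tree, asking each scripted question in turn."""
    while node is not None:
        answer = input(node["question"] + " (yes/no) ").strip().lower()
        node = node["yes"] if answer.startswith("y") else node["no"]
    print("End of script - a human guide would keep probing here.")

if __name__ == "__main__":
    run(INQUIRY_TREE)
```

Two questions in, the script is exhausted; a human guide would respond to the texture of the answer itself.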

An AI Dialogue

We’ve probably all heard of ChatGPT by now.

This morning, I thought I’d attempt to get it to guide me. (Note: I assume I am not allowed to reproduce ChatGPT’s answers here. However, I absolutely can share the questions I asked of it.) I started by asking:

“Please guide me in a liberation unleashed style inquiry.”

To which it answered in the affirmative, then described how we would pick a topic and explore preconceived ideas about that topic. All very convincing. However, it didn’t suggest “the self” as such a topic.

I then asked:

“So, where to start?”

Again, it suggested picking a belief or experience that is important to me, and asking whether it is truly valid: to keep looking at it from different perspectives. It played the imperative and supportive card: do this, and do keep going.

But, as my mock guidee, I still didn’t really know what to do, so I asked:

“Can you actually guide me, like guides on liberation unleashed do?”

It replied that it could guide me, then offered an eight-stage interaction between “me” and “you”, demonstrating how a guiding session might look.

I complained:

“That is an example of guiding. You didn’t guide me.”

It apologised! It then asked me for a specific experience. So I picked “the self”, seeing as it hadn’t twigged that this is the core belief LU explores.

Again, it gave me an example of a dialogue, this time around “the self”.

I told it:

“Don’t answer for me!”

Again, it apologised, and offered to guide without answering. It urged me to take a moment to reflect on the belief that the self is a fixed and permanent entity, and to try to find evidence for it in direct experience.

Finally, I had guided ChatGPT in how to guide me!

At this point, I had to attempt to give the sort of answers an LU client might give, and its responses were generally appropriate to the conversation. It ended most of its comments with an imperative (“keep questioning and exploring until you arrive at a deeper understanding or the thought dissolves”). This repetition was mildly annoying, but not sufficient to turn me away from the conversation.


Conclusion

So what do I make of this? Firstly, that it can even attempt such a conversation is simply astounding. It took some coaching to get it there, but it may well be possible to seed the model with that guidance up front: I was able to guide it in how to guide, and perhaps that could be “pre-done” for actual clients, as sketched below. And presumably, the more people actually used it, the better it would get.
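
As a rough illustration of what that pre-seeding might look like, here is a minimal sketch using OpenAI’s Python SDK, with the guiding instructions supplied as a system prompt. The prompt wording and the model name are my own assumptions, not anything LU or ChatGPT provided.

```python
# A minimal sketch of pre-seeding a guiding style via a system prompt,
# using OpenAI's Python SDK (openai>=1.0). The prompt wording and model
# name are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a guide in the style of Liberation Unleashed. "
    "Ask one short question at a time about the guidee's direct experience. "
    "Never answer on their behalf and never give example dialogues; "
    "wait for their reply before asking the next question."
)

def guide_turn(history):
    """Send the conversation so far and return the guide's next question."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model would do
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    history = [{"role": "user", "content": "Please guide me in an inquiry into the self."}]
    print(guide_turn(history))
```

The point of the system prompt is precisely to bake in the corrections I had to make by hand: guide, don’t demonstrate, and don’t answer for the guidee.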

The questions that this raises for me are:

  • To what degree can this be relied upon to guide people meaningfully? Will it keep pushing them towards their hidden assumptions (given its unemotional nature, this is possible), or will it just end up being repetitive and somewhat annoying?
  • Will people accept this as a meaningful way to engage? In truth, this is clearly something we as a society are going to be exploring in the coming years, quite apart from this application, so perhaps we will find answers to this question relatively soon.
  • If we assume that this technology can cover part of the need, can it meaningfully interface with human guides? Can it be trained to identify specific humans it can recommend?

Suffice it to say: I didn’t expect to find myself saying, for at least another ten years, that I could see an AI model conducting these inquiries!
