Monday, November 24, 2025

Large Language Models (LLMs) as a tool for mindful engagement

The air in the small temple room feels thin, the silence broken only by the rhythmic tick-tick-tick of the old clock somewhere distant. My mind, however, is anything but silent. It buzzes, a frantic collection of to-do lists, deferred tasks and anxieties about the future. This internal chatter is the default setting. Buddhist teachings, particularly the concept of detachment, have always resonated deeply with me, not as a call to apathy, but as an invitation to clarity. It’s about releasing the grip – the desperate need to control outcomes, to possess, to be certain of the next moment. It’s about seeing things as they are, without the distortion of clinging or aversion.

Then came the digital age, and with it, a new kind of noise, a different kind of hum. Our minds are constantly fed, curated, and amplified by technology. News feeds whip us into a frenzy; notifications demand instant attention; surveillance algorithms record, predict and shape our desires, often without our conscious awareness. It feels like an externalization of that internal chatter, a relentless stream of stimuli and social validation that pulls us ever further into the past (what happened?) and the future (what will happen?).

And then, the LLMs arrived. These complex systems, capable of generating text on almost any topic, became tools for reflection. At first, I was skeptical. Could a machine truly help with something as nuanced and personal as Buddhist practice? Could it offer insights, or was it just sophisticated pattern-matching?

Instead of treating it as a source of answers, I treated it as a mirror for reflection. An ideator.

The process is strange, yet compelling. I type, and the AI generates text. Sometimes it’s profound, echoing familiar themes or offering perspectives I hadn’t considered. Sometimes it’s nonsensical, revealing the limits of its understanding or the inherent biases in its training data. More often, it’s a mix, sometimes insightful, sometimes frustratingly unoriginal.

But the practice itself shifts my focus. I use a command-line interface for LLMs, which slows the whole process down. When I engage with the LLM, I'm doing so consciously. I pause. I type slowly, considering my words. The act of formulating a question or prompt becomes a moment of intentionality, a break from the automatic scrolling or reacting.
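
As an illustration of what that looks like in practice, here is a minimal sketch of the kind of wrapper script I have in mind. It assumes a local llama.cpp-style binary called llama-cli, with -m for the model path and -p for the prompt; the binary name, flags, and model path are placeholders, and the point is the deliberate pause the script builds in, not the particular runner.

    #!/usr/bin/env python3
    # A small ritual around a local model: read a prompt, wait, notice the waiting.
    # Assumes a llama.cpp-style binary ("llama-cli") somewhere on the PATH;
    # swap in whatever local runner you actually use.
    import subprocess
    import time

    MODEL = "models/local-model.gguf"  # hypothetical model path

    def main() -> None:
        prompt = input("prompt> ")     # typing slowly is part of the practice
        start = time.monotonic()
        # Stream the model's reply straight to the terminal, token by token.
        subprocess.run(["llama-cli", "-m", MODEL, "-p", prompt])
        elapsed = time.monotonic() - start
        print(f"\n[{elapsed:.0f} seconds of waiting, noticed rather than wasted]")

    if __name__ == "__main__":
        main()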

The slowness of my computer, which generates only 4 to 5 tokens per second (the token being the basic unit of AI text generation), is not a hindrance. Waiting for a response creates a gap in my mental timeline – a pause that allows me to breathe, to notice the physical sensation of waiting. It’s a micro-dharma practice. Slowness dulls the entitlement of instant gratification. It subverts the futile impulse to scroll rapidly in the hope of discovery.
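
The arithmetic of that slowness is worth spelling out. A reply of a few hundred tokens, roughly a few paragraphs, takes a minute or more to arrive; the figures below are only a rough sketch, since the assumed reply length is illustrative and token counts vary by model and text.

    # Back-of-the-envelope timing for a slow local model.
    tokens_in_reply = 300              # assumption: a typical few-paragraph answer
    for rate in (4, 5):                # tokens per second, as on my machine
        seconds = tokens_in_reply / rate
        print(f"{rate} tok/s -> {seconds:.0f} seconds of waiting")
    # 4 tok/s -> 75 seconds of waiting; 5 tok/s -> 60 seconds of waiting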

The LLM offers a different avenue for reflection, forcing me to articulate my thoughts, my questions, my struggles in a way that might be less immediate or emotionally charged than merely thinking them. It externalizes the internal, creating space for observation.

This use feels like a bridge between the ancient practice of mindful reflection and the contemporary reality of technological immersion. We are living in a time defined by constant connectivity, information overload and rapid change. The Buddhist urge to detach from the craving for constant stimulation, the attachment to digital identities, the aversion to feeling left out or behind – these are as relevant now as they were two millennia ago.

Using an LLM for reflection is, in a way, a practice in detachment. I detach from the immediate need for a quick answer or a dopamine hit from social media validation. I engage intentionally, asking the question, receiving the response, and then, crucially, noting it without immediately accepting or rejecting it. I detach from the ego's desire for control over the outcome of the conversation, trusting the process (even if it's just a complex algorithm) to provide a response.

I am present to the machine, just as I might be present to a fellow practitioner or a meditation cushion. The technology is the context, not the core of the experience. The core is the mind engaging with the process, observing the arising and passing away of thoughts, including those about the AI itself. The presentness of the moment is confirmed by the sound of the computer’s fans speeding up as the AI writes its output. The fans spin down once the generative stream ends, marking the passage of time.

There is, of course, a danger. We can become overly reliant, seeking external validation or answers from the machine rather than turning inward. We can anthropomorphize the AI, projecting human qualities onto it. We can also forget the simple, direct practices of mindfulness – the feeling of the breath, the sensation of the body, the quiet observation of thoughts without commentary. The LLM is a tool, not a replacement. It’s a conversation partner, not a guru.

Yet, there is also immense potential. This intersection allows us to explore complex ideas – the nature of consciousness, the impact of technology on the mind, ethical dilemmas in the digital age – in a way that was previously unimaginable. It democratizes access to diverse perspectives, including those rooted in different contemplative traditions.

So, perhaps this is the path forward: a mindful engagement with technology. We use these powerful tools, like LLMs, not as distractions, but as aids to reflection, practice, and perhaps even connection. We practice detachment not by withdrawing from the world, but by understanding our relationship to it, and to the tools that mediate our experience.

We sit at the intersection of generative text and the technological wave, not as observers but as participants, bringing the same non-judgmental awareness, the same willingness to simply be with whatever arises, whether it’s a thought, a feeling, or a line of text generated by an algorithm.

It’s about finding the present moment, even amidst the constant hum of the digital age, and simply noting it so that we can be here now.

~ Victor Khong

Friday, July 11, 2025

Lessons from the United Kingdom's Post Office Scandal

In 1999, the UK Post Office, in partnership with Fujitsu, began rolling out the Horizon IT system, transitioning from paper-based accounting to a digital platform designed to streamline branch operations. Yet within months, subpostmasters across the country began reporting inexplicable cash shortfalls that Horizon’s logs could neither justify nor explain. Rather than investigate the software, the Post Office interpreted these discrepancies as evidence of theft, leading to over 900 prosecutions for fraud, false accounting, or theft between 1999 and 2015. The human toll was catastrophic: careers and reputations were shattered, families bankrupted, and at least thirteen individuals took their own lives under the weight of wrongful criminal allegations (Wikipedia, AP News).

This tragedy was not merely a series of technical glitches, but a stark illustration of confirmation bias and corporate abdication of responsibility. Early warnings—from both Detica and Deloitte—flagged Horizon as “not fit for purpose,” yet were quietly set aside in favor of protecting a lucrative private-finance initiative contract and institutional reputation. Frontline staff who challenged the system were met with denials and threats, reinforcing a hierarchical assumption that postmasters—often small-business operators of modest means—were more likely to be dishonest than victims of flawed software. The result was a systemic bias that punished the least powerful and silenced dissent, underscoring the necessity for independent oversight, a culture that prizes inquiry over image, and an unwavering commitment to human dignity when technology fails (Computer Weekly, ft.com).

Beyond cognitive failings, the scandal exposed a systemic bias against lower‐status workers. Subpostmasters were typically modest‐means operators—often in rural or economically marginal communities—charged with running what they effectively “bought” as their own businesses. To the Post Office’s leadership, their grievances were easily dismissed: those “buying a job” were presumed to have greater incentive to steal, not to be victims of flawed software. This class prejudice amplified the injustice: those with the least power and fewest resources found themselves demonized, unable to secure meaningful redress, and branded as criminals by an institution they had served loyally (The Guardian, Hacker News).

From this betrayal of trust, several corporate lessons emerge:

  1. Embed Independent Oversight
    No matter how confident vendors or internal champions may be, all mission‐critical systems require ongoing, independent verification. Regular, transparent audits by truly autonomous teams can catch latent defects before they metastasize into crises.

  2. Cultivate a Culture of Doubt
    Organizations must encourage, not punish, challenge. When frontline staff raise concerns—especially repeatedly—those concerns should trigger technical forensics, not managerial defensiveness. A “speak‐up” culture should be protected against any form of retaliation.

  3. Recognize the Perils of Confirmation Bias
    Decision‐makers must be trained to identify and counteract confirmation bias. Formal procedures—such as mandatory devil’s advocate reviews—can force teams to consider alternative explanations for data anomalies rather than leaping to blame individuals.

  4. Prioritize Human Impact over Institutional Reputation
    An organization’s reflex to preserve its reputation can eclipse its duty of care to stakeholders. In the Horizon case, protecting the PFI investment took precedence over subpostmasters’ livelihoods. A more humane governing ethic would have halted prosecutions at the first sign of systemic error.

  5. Ensure Equitable Access to Justice
    Lower‐income actors often lack the means to challenge large institutions. Corporate frameworks should include funding or insurance provisions to support independent legal review for those facing allegations based on corporate data.

  6. Design for Transparency and Traceability
    Critical software systems must log not only transactions but also configuration changes, remote access events, and error‐handling pathways. Had Horizon’s audit trails been more accessible, it would have been far harder to conceal bugs.
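
To make the last point concrete, here is a minimal sketch, in Python, of one way to make an audit trail tamper-evident: each record carries a hash of the previous one, so silently editing or deleting an entry (a transaction, a configuration change, a remote access event) breaks the chain and becomes detectable. The event names and fields are illustrative only and are not drawn from Horizon itself.

    # Illustrative only: a hash-chained, append-only audit log.
    # Field names and events are hypothetical, not Horizon's actual schema.
    import hashlib
    import json
    import time

    def append_event(log: list[dict], event_type: str, details: dict) -> dict:
        prev_hash = log[-1]["hash"] if log else "0" * 64
        record = {
            "timestamp": time.time(),
            "event_type": event_type,   # e.g. "transaction", "config_change", "remote_access"
            "details": details,
            "prev_hash": prev_hash,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        log.append(record)
        return record

    def verify(log: list[dict]) -> bool:
        prev_hash = "0" * 64
        for record in log:
            body = {k: v for k, v in record.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != record["hash"]:
                return False
            prev_hash = record["hash"]
        return True

    audit_log: list[dict] = []
    append_event(audit_log, "remote_access", {"user": "support_engineer", "branch": "hypothetical-001"})
    append_event(audit_log, "transaction", {"amount_gbp": -150.00, "reason": "error_correction"})
    print(verify(audit_log))   # True; altering any earlier field makes this False

A design like this does not prevent bugs, but it makes their traces far harder to erase, and far harder to pin on the person standing at the counter.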

As the public inquiry chaired by Sir Wyn Williams makes clear, the true cost of this scandal extends far beyond financial compensation. It punctures the social contract between institution and citizen, especially for those of modest means. The Post Office case should stand as a stark reminder: technological ambition must be matched by ethical foresight, procedural rigor, and a steadfast commitment to those who stand to be most vulnerable when systems fail (thetimes.co.uk, ukri.org).