Experiences are the seeds out of which wisdom grows.
Mike Hobday

Dec 1, 2025 · 3 min read
Updated: Dec 2, 2025
“Experiences are the seeds out of which wisdom grows.”
Konrad Adenauer
What is the future value of human wisdom, experience, knowledge? How does its value stand up against your LLM of choice?
My workday is already transformed. Every day my skills are honed on a growing array of AI tools.
“Prompt engineering” and “context engineering” are new to my vocabulary. Mary Bury, my English teacher, is finally owed a debt of gratitude: refined and creative writing skills seem at last to be valuable again (after a social-media-fuelled demise). These are the skills needed to guide LLM engagement in a way that delivers useful and insightful results.
26,490 days of experience have established a “library of context” that I use (and expand daily) to help determine the best solution to a business problem, or the appropriate next steps when faced with an approaching rhino.
Contributing to this knowledge bank, I have worked in 5 business sectors with 12 employers, advised or managed more than 50 companies, travelled to as many countries, held 37 job titles and reinvented my expertise at least 15 times. I have learned about humanity, loving, mourning, hiring, firing, teaching, coaching, mentoring, leading and managing.
Human wisdom is the ability to draw on this firmament seemingly in an instant, without trawling laboriously through every record and every experience to consciously find a match and predict what comes next. That efficiency is super useful when confronted by a charging rhino!
Learning and experience are key to the exploitation of intelligence, real or artificial, in equal measure.
An LLM’s learning and experience involves training on vast data sets, augmented when needed by live data feeds. Responding to our cry of “A rhino charging my way very fast!” involves breaking my words down into small pieces, word fragments and punctuation (!), from which numerical representations are created. The context of the conversation is considered, and the response is generated one fragment at a time, then organised for human understanding… “run or stand very still!”
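To make that concrete, here is a minimal, illustrative sketch in Python. The tiny tokeniser, the vocabulary and the hard-coded continuation are all invented for illustration; a real LLM uses a learned sub-word tokeniser and a model with billions of parameters, not a lookup table.

```python
import re

# Toy "tokeniser": split text into word fragments and punctuation,
# then map each fragment to a numerical ID. Real LLMs learn sub-word
# vocabularies with tens of thousands of fragments.
VOCAB = {}

def tokenise(text):
    fragments = re.findall(r"\w+|[^\w\s]", text.lower())
    ids = [VOCAB.setdefault(f, len(VOCAB)) for f in fragments]
    return fragments, ids

# Toy "model": a real model predicts a probability for every fragment in
# its vocabulary given the context; here the continuation is hard-coded
# purely to show the one-fragment-at-a-time loop.
CONTINUATION = ["run", "or", "stand", "very", "still", "!"]

def generate(prompt):
    context_fragments, context_ids = tokenise(prompt)  # the context is considered...
    response = []
    for fragment in CONTINUATION:                      # ...then the reply is built
        response.append(fragment)                      # one fragment at a time
    return " ".join(response)

print(tokenise("A rhino charging my way very fast!")[0])
# ['a', 'rhino', 'charging', 'my', 'way', 'very', 'fast', '!']
print(generate("A rhino charging my way very fast!"))
# run or stand very still !
```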
Good and well-ordered data is as valuable today in the world of generative AI as it was when the first CRM and ERP systems came along.
Today we risk our corporate library of data being one enormous room with every “book” thrown in and data flowing in through every window.
Thomas Harris’ Hannibal Lecter, of The Silence of the Lambs fame, used a technique called the “memory palace,” described at length in the later novel Hannibal:
“Like any well-organised mind, Hannibal Lecter’s [memory palace] has its compartments, and he stores memories in them the way one stores treasures and rare things. His memory palace is vast, with high vaulted halls and sunlight pouring through high windows”
The structure and organisation of memories and experience are key.

Memory formation in the brain involves several stages and biological mechanisms: the creation of a memory trace, reinforcement through long-term potentiation, then consolidation and storage, where the sleep librarian kicks in to refine memory networks, making them clearer, sharper and more efficient. Deep sleep is good for a tidy mind!
H-MEM (Hierarchical Memory for High-Efficiency Long-Term Reasoning) is the deep sleep for LLM agents: it introduces the concept of a memory architecture.
Today, a query is typically run against the whole library: every shelf is searched and related, and the memory is retrieved. H-MEM instead uses a hierarchical memory with a position index, searching layer by layer and effectively removing the influence of irrelevant memories. In Lecter’s “memory palace,” the vault holding the knowledge, experience and derived wisdom specific to the query is located rapidly, with less energy and greater accuracy, and the reinforced context reduces hallucination.
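A rough sketch of the idea follows, not the H-MEM implementation itself: memories sit in layers, each node summarises its children, and a query descends only into the best-matching branch at each layer. The sample memories, layer contents and word-overlap scoring below are invented stand-ins (H-MEM works with learned semantic representations).

```python
import re

# Hierarchical memory sketch: a query descends the layers, following the
# best-matching branch and never searching the irrelevant "shelves".

def score(query, text):
    # Crude relevance measure: shared words (a stand-in for embeddings).
    words = lambda s: set(re.findall(r"\w+", s.lower()))
    return len(words(query) & words(text))

memory_palace = {
    "summary": "everything I have experienced",
    "children": [
        {"summary": "wildlife encounters: rhino, elephant and snake safety",
         "children": [
             {"summary": "charging rhino: stand very still, or run if it commits",
              "children": []},
             {"summary": "snake bite first aid in remote areas", "children": []},
         ]},
        {"summary": "business turnarounds, hiring and firing decisions",
         "children": [
             {"summary": "letting a senior manager go with dignity", "children": []},
         ]},
    ],
}

def retrieve(node, query):
    if not node["children"]:                       # reached a specific memory
        return node["summary"]
    best = max(node["children"], key=lambda c: score(query, c["summary"]))
    return retrieve(best, query)                   # descend one layer, ignore the rest

print(retrieve(memory_palace, "a rhino is charging my way"))
# -> charging rhino: stand very still, or run if it commits
```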
H-MEM is yet to be scaled, but it signals the direction of travel: the importance of context in eliminating the irrelevant. So how does this help with today’s challenge of scaling the use of AI in a structured manner, with risk containment and ROI in mind?
There is too much focus on prompts (which are often poorly written anyway) and not enough on the well-articulated context that guides the prompt. “Context Engineering” is a critical component of an AI business and technical architecture.
The concept of “context” is not new: as human employees we have had context guidance for over a century, codified in brand values, business conduct guidelines, data governance, treating customers fairly, risk appetite and so on. Codifying the same for AI agents provides guardrails for relevant, business- and brand-consistent generative AI outputs that support your enterprise talent, whether in a contact centre or for investment managers on the trading floor.
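As a sketch of what that codification could look like in practice: the guardrail wording, helper names and model call below are placeholders, not any particular vendor’s API.

```python
# Context engineering sketch: codified guardrails are assembled into the
# context that accompanies every prompt, so outputs stay relevant and
# consistent with brand and policy.

GUARDRAILS = {
    "Brand values": "Be plain-spoken, helpful and honest; never overpromise.",
    "Business conduct": "Do not give regulated financial advice; refer to an adviser.",
    "Data governance": "Never include customer personal data in a response.",
    "Risk appetite": "Escalate complaints or signs of vulnerability to a human.",
}

def build_context(role: str) -> str:
    """Turn codified guardrails into a context block for an AI agent."""
    lines = [f"You are an assistant supporting a {role}."]
    lines += [f"- {name}: {rule}" for name, rule in GUARDRAILS.items()]
    lines.append("If a request conflicts with these rules, decline and explain why.")
    return "\n".join(lines)

def call_llm(system: str, prompt: str) -> str:
    # Placeholder: substitute whichever model API you actually use.
    return f"[response guided by {len(system.splitlines())} lines of context]"

def answer(role: str, user_prompt: str) -> str:
    # The point: the codified context travels with every prompt.
    return call_llm(system=build_context(role), prompt=user_prompt)

print(answer("contact centre agent", "A customer says a rhino is charging their way!"))
```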
My 26,490 days of experience, of inputs and outputs, of goals achieved and missed, are still valuable. They guide my prompt and context engineering, and they support the decisions I make as the proverbial “Human in the Loop,” the final arbiter of LLM outputs.
Mike Hobday

(c) Bluzco ltd 2025


