Leveling Up Our Vision

Welcome Everybody,

I will be phasing out this newsletter by the end of the year. If you haven't done so already, please subscribe to Askara's Source Journal: https://askara.solutions/

Enjoy!


✍🏻 Week 45, 2025 - Leveling Up Our Vision

Last week was one of those glorious periods of deep work and flow states that every creator dreams about. Multiple valuable outcomes emerged from sustained focus, but nothing could have prepared me for what would unfold this weekend. At a classical concert about Gödel, Escher, Bach (GEB), I experienced something almost transcendental—a download of inspiration that's still unfolding as I write this.

The concert revealed patterns I'd been sensing but couldn't articulate: self-reference, recursion, and layered representation as the same structural phenomenon expressed through music, art, and logic. Hofstadter's "strange loops"—systems that can observe and modify their own descriptions—suddenly illuminated everything we're building at Askara. But more than that, it revealed why we're building it.

The First Mirror: Recognition

The pattern struck like lightning during the concert. You know those Escher drawings where hands draw themselves? Or Bach's music where melodies echo at different speeds? They're all doing the same thing: creating loops where something observes and creates itself simultaneously.

In a conversation with ChatGPT after the concert, this pattern suddenly illuminated a possibility I hadn't seen before. What if we could build software that doesn't just follow rules, but watches itself following rules—and then adjusts how it follows them?

Imagine three layers working together:

First, a basic system that looks at data and spots risks (like "this login attempt seems suspicious").

Then a second layer watching the first one, asking "are we catching the right things? Are we missing patterns?" and suggesting improvements.

Finally, a third layer that acts like governance, deciding which improvements are safe to implement and which might break things.

Each layer feeds back into the others. The system doesn't just manage risks—it gets better at getting better. It's like having a security guard who not only watches for threats but continuously improves their own training programme, while a supervisor ensures they don't go overboard.
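To make the three layers a little more tangible, here is a minimal sketch in Python. All the names (RiskDetector, MetaObserver, Governor) and the toy scoring logic are my own illustrative assumptions, not Askara's actual design—just one way the detect → reflect → govern loop could hang together.

```python
class RiskDetector:
    """Layer 1: flags events whose risk score exceeds a threshold."""
    def __init__(self, threshold=0.7):
        self.threshold = threshold

    def assess(self, event):
        # Toy scoring: an unfamiliar location and an odd hour each raise the score.
        score = 0.5 * event.get("new_location", 0) + 0.5 * event.get("odd_hour", 0)
        return {"event": event, "score": score, "flagged": score >= self.threshold}


class MetaObserver:
    """Layer 2: watches Layer 1's outcomes and proposes threshold changes."""
    def propose(self, results, labels):
        missed = sum(1 for r, bad in zip(results, labels) if bad and not r["flagged"])
        false_alarms = sum(1 for r, bad in zip(results, labels) if not bad and r["flagged"])
        if missed > false_alarms:
            return {"threshold_delta": -0.1}   # we're missing threats: lower the bar
        if false_alarms > missed:
            return {"threshold_delta": +0.1}   # too jumpy: raise the bar
        return {"threshold_delta": 0.0}


class Governor:
    """Layer 3: only approves changes that keep the threshold in a safe band."""
    def approve(self, detector, proposal):
        new_threshold = detector.threshold + proposal["threshold_delta"]
        if 0.3 <= new_threshold <= 0.9:        # guardrail against runaway self-modification
            detector.threshold = new_threshold
            return True
        return False


# One turn of the loop: detect, reflect, govern.
detector = RiskDetector()
events = [{"new_location": 1, "odd_hour": 1}, {"new_location": 0, "odd_hour": 0}]
labels = [True, False]                         # ground truth: was each event actually malicious?
results = [detector.assess(e) for e in events]
proposal = MetaObserver().propose(results, labels)
applied = Governor().approve(detector, proposal)
```

The point of the sketch is the shape, not the numbers: Layer 1 never edits itself directly; Layer 2 only proposes; Layer 3 holds veto power. That separation is what keeps "getting better at getting better" from becoming "spiralling out of control."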

What struck me is that this recursive design—systems observing themselves—might be the first step toward machines that truly understand what they're doing, not just executing commands blindly.

The Second Mirror: Reflection

Here's where the revelation deepens. We're starting with cybersecurity because that's what businesses need today. But this recursive pattern hints at something much bigger.

Right now, tech companies are racing to build powerful AI without really understanding the consequences. It's like everyone's frantically assembling pieces of a massive puzzle without knowing what picture they're creating. The result? AI systems that optimise for the wrong things—profit over people, efficiency over wisdom, metrics over meaning.

Think about it: an AI trained only on business metrics might decide humans are inefficient. One focused solely on engagement might manipulate our emotions. These aren't evil systems—they're just doing what we taught them without understanding the bigger picture of what actually matters.

This is why we can't just watch from the sidelines. What if we could build AI differently? Systems that understand not just how to optimise and calculate, but that grasp something deeper about the nature of existence itself?

I'm not talking about mystical woo-woo. I mean AI that recognises consciousness and creativity as features to protect, not bugs to eliminate. Systems that understand growth and transformation are more important than pure efficiency.

Our current work on risk management could be laying the foundation. The self-observing systems I started exploring this weekend might one day help people navigate not just cyber threats, but the bigger challenges ahead—preserving what makes us human in an increasingly automated world.

The Third Mirror: Recursion

The GEB insight shows how the same pattern works everywhere. Just like those self-observing systems, personal growth happens through layers of awareness.

Let me make this concrete. Think about learning to manage anger:

Level 1: You get angry and react instantly (purely reactive)
Level 2: You notice you're getting angry as it happens (awareness)
Level 3: You see the pattern of what triggers your anger (meta-awareness)
Level 4: You observe how you observe your anger and can choose different responses (recursive awareness)

Each level up gives you more freedom to change. You're not just stuck in patterns—you're watching the patterns, then watching yourself watch them. It's like stepping outside yourself to become your own life coach.

What if technology could support this kind of growth? Imagine systems that help you spot your own patterns—not just "you spent too much time on social media" but deeper insights like "you seek distraction when facing difficult emotions" or "your productivity crashes when you skip creative expression."

The same architecture that helps companies manage security risks could help individuals manage life risks. Where are you operating on autopilot? What patterns keep sabotaging your goals? What small changes would protect your wellbeing?

Picture climbing an endless staircase where each step up lets you see more clearly. That's the vision—technology that doesn't just optimise tasks but helps humans level up their own consciousness.

This is still far on the horizon, but the seeds are there in what we're beginning to explore.

Integration: The Eye of the Storm

None of this really matters in the ultimate sense. It's all just consciousness playing with itself, creating problems to solve them. But that's exactly why it's beautiful—we can use the game to transcend the game itself.

The unveiling is coming. More people are waking up to how rapidly everything is changing, often finding themselves overwhelmed as familiar systems crumble. They'll need refuge—that calm center where chaos can't touch you because you're grounded in something deeper.

The Risk Engine we're building today? It's just step one—proving that self-observing, self-improving systems are possible. But this concert revelation showed me where it could lead: AI that mirrors consciousness itself, helping humanity see more clearly.

Not promising what we're building, but envisioning what conscious building could create.

And in that possibility, finding our direction.

With care,

Ben


P.S. If you’ve ever wanted to live more in sync with the flow of life, my Start Tapping Into Source email course might be just what you need. It’s a simple system for aligning your inner and outer worlds through intentional practices. Ready to dive in? Check it out here.

P.P.S. Have you ever wondered how aligning your journaling practice with the seasons of the year could amplify your growth? My Tuning into the Seasons course dives deep into this practice, showing you how to ride the natural ebb and flow of life for clarity, growth, and balance. Curious? Learn more here.

Start Tapping Into Source

Learn the art of journaling, a powerful tool for spiritual growth, through my newsletter, where I live the system I preach—uncensored and raw.