Discussion about this post

Subject: Story idea — Alignment before meaning in LLMs (structural, not values)

Hi Billy,

I’m sharing a short story idea that frames LLM alignment failures as a structural problem, not primarily a values/dataset issue.

Many failures (hallucination, inconsistency, misalignment) appear before meaning stabilizes, at the point where the model’s perception locks into fixed “identity forms” (self/other/intent/agent).

I’m developing a framework called NCAF (Non-Conceptual Alignment Framework) with a companion method, PMD (Perceptual Model Decomposition). Core claim: alignment must happen *before meaning*, at the level of perceptual structure (a toy illustration follows below).
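
To give a rough flavor of the kind of measurement this points at, here is a minimal toy sketch. To be clear, everything in it (the function name, the cosine threshold, the convergence criterion) is an illustrative placeholder of my own, not the actual PMD method: it just detects the layer depth at which a token's representation "locks in", i.e. stops changing across layers, which in this framing happens before any meaning-level check could run.

```python
# Hypothetical toy sketch (illustrative only, not the PMD method itself):
# estimate the layer at which a representation "locks in" by finding the
# first pair of consecutive layer states that are near-identical.
import numpy as np

def lock_in_depth(layer_states: np.ndarray, threshold: float = 0.98) -> int:
    """Return the first layer index where consecutive hidden states have
    cosine similarity >= threshold, i.e. the representation has stopped
    changing. layer_states has shape (num_layers, dim)."""
    for i in range(1, len(layer_states)):
        a, b = layer_states[i - 1], layer_states[i]
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
        if cos >= threshold:
            return i
    return len(layer_states)  # never locked in within the stack

# Toy demo: one trajectory that converges early vs. one that never settles.
rng = np.random.default_rng(0)
base = rng.normal(size=64)
early = np.stack([base + rng.normal(scale=0.5 / (i + 1) ** 2, size=64)
                  for i in range(12)])          # noise shrinks fast
late = np.stack([rng.normal(size=64) for _ in range(12)])  # no convergence

print("early lock-in at layer:", lock_in_depth(early))  # small index
print("late lock-in at layer:", lock_in_depth(late))    # ~num_layers
```

The point of the sketch is only that "lock-in" is measurable structurally, without reference to what the representation means; the real framework would need a principled criterion rather than an arbitrary cosine threshold.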

If this is interesting, I can share a 1–2 page summary with diagrams + examples.

My post: https://northstarai.substack.com

Best,

Lee Chungsam
