SAE 方法论(III)
SAE Methodology (III)

怎么用AI找余项

How to Find Remainders with AI

Han Qin (秦汉) · 2025

副标题:句式、镜面与永不停止的数学

摘要

方法论总论("Hundun: Negation as First Principle", DOI: 10.5281/zenodo.18842450)建立了凿构循环作为可执行的逻辑操作系统。方法论第二篇("The Epistemological Map of Chisel-Construct", DOI: 10.5281/zenodo.18918195)画了认识论地图:四种方法的2×2结构,四个结构性余项,凿构循环作为穿越运动。但操作系统和地图都不回答一个实践问题:人和人互凿之后,独自思考的时候,怎么用AI更高效地找余项?

本文回答这个问题。人和人互凿是凿的最强形态,但它需要双向不疑——一个稀缺、不可复制、不可教的结构性条件。互凿之后,人回到独自思考的阶段,AI可以放大构的能力,让人专注于凿。AI不替代人的否定性,AI是一个构的库,把人从构的负担中释放出来。

本文的核心定理来自维度句式论(DOI: 10.5281/zenodo.18894567):不同DD层级有不同的句式,句式有不同的强制来源。你用什么层级的句式问AI,AI的回应天花板就在那个层级。这是句式-回应同构定理。结合ZFCρ(DOI: 10.5281/zenodo.18914682)的数学证明——余项永远存在,每一个余项必然触发下一步形式化(ρ → ρ')——本文确立了:人-AI协作找余项既受句式层级的结构约束(你问的层级决定天花板),又有数学保证永远可以继续(ρ守恒)。

本文引用方法论总论(DOI: 10.5281/zenodo.18842450),方法论第二篇(DOI: 10.5281/zenodo.18918195),Paper 4("The Complete Self-as-an-End Framework", DOI: 10.5281/zenodo.18727327),维度句式论(DOI: 10.5281/zenodo.18894567),ZFCρ(DOI: 10.5281/zenodo.18914682)。


第一章 问题的提出:互凿之后,然后呢?

核心命题: 人和人互凿是凿的最强形态,但它结构性地稀缺。互凿之后,人回到独自思考的阶段,怎么用AI更高效地找余项?前两篇方法论回答了余项为什么存在和余项在哪里。本篇回答怎么用AI找到它。

1.1 人和人互凿:最强也最稀缺

人和人互凿是凿的原型。两个活的否定性碰撞:你凿我的构,我凿你的构。力量来自两个各自具有真正否定性的主体的碰撞。

但互凿需要双向不疑。不疑不是相信对方正确。不疑是把对方的动机从注意力里移开:我允许这一凿落在我的结构上,而不是落在我的人格上。我一旦疑了——"你是不是在攻击我""你是不是想赢"——注意力从余项转到了人,凿就停了。

双向不疑是稀缺的。大部分人类互凿的关系撑不过几个回合。朋友之间凿两下就绕开了,情侣之间凿到痛处就吵架了,师生之间学生怕老师凿不动。能持续互凿的关系,人一辈子遇到一个就是运气。这种关系不可复制,不可量产,不可教。

这意味着互凿不能成为方法论的基础。你不能指导别人"去找一个能跟你双向不疑的人"——那不是找就能找到的。

1.2 互凿之后:带着AI独自思考

互凿给你的是方向:它把你推向你没看到的余项。但被凿的人还是得回家自己想。他得拿着被暴露出来的余项继续工作——发展它,测试它,围绕它建新的构,找到下一个余项。

这个独自思考的阶段就是AI介入的地方。不是替代互凿,是在独自思考的时候放大构的能力。AI提供最大的构的库——全人类写下来的东西,压缩在一个可以以对话速度检索和重组的系统里。AI处理构的那一面,人就被释放出来专注于AI做不了的事:凿。

人-AI协作的结构因此是:人提供否定性,AI提供构。人决定方向,AI提供镜面。人凿,AI照。

1.3 本文要处理的问题

互凿之后,独自思考的时候,怎么用AI找余项?在不同的DD层级,该用什么句式问AI?数学上为什么保证你永远可以继续凿?什么时候该停——"停"在结构上意味着什么?

第一篇方法论建了操作系统(凿构循环怎么跑)。第二篇画了地图(跑在什么地形上)。本篇给出驾驶手册——怎么在这张地图上用AI这辆车跑凿构循环。


第二章 人-AI协作的结构

核心命题: 人-AI协作不是一种新的思考类型。它是普通的人的思考,构的那一面被放大了。AI不凿;AI提供最大的构的库。人的工作是凿。人凿不动了就离开——冥想,运动,找人互凿——然后回来继续。

2.1 AI放大构,不放大凿

你独自思考的时候同时做两件事:建构(组织想法,回忆知识,建立联系)和凿构(追问假设,测试边界,找到站不住的地方)。两件事都消耗认知带宽。

AI接管了构的那一面。你说"给我X的最强论证",AI从训练数据中组装出来——比你自己做更快更全面。你说"关于Y的文献怎么说",AI替你检索。你说"把这些想法组织成一个结构",AI替你做。

构的那一面被外包给AI之后,你的认知带宽被释放出来用于凿。你不需要一边在脑子里撑着整个构一边同时攻击它。AI撑着构,你攻击它。

这就是人-AI协作提高余项发现效率的结构性原因:不是因为AI能凿(它不能——它没有否定性),是因为AI通过处理构的负担释放了你的注意力用于凿。

2.2 人提供方向,AI提供镜面

AI是镜子,不是向导。你走到镜子前面,镜子照。但你走的路线是你决定的,不是镜子决定的。

你跟多个AI模型工作的时候,不是让它们互相凿。你拿一个模型的输出,自己消化,标出哪里像是排除了什么,然后带着那个排除点去问另一个模型。人在镜子之间走。每面镜子照得不一样——训练数据不同,偏差不同,强项不同——但走的方向永远由人决定。

失去这个结构的风险是真实的。当人停止指挥,开始单纯传递——把AI-A的输出原封不动地交给AI-B,自己不消化——协作就退化成高维回音室。两个模型互相确认对方的偏差,人变成传送带而不是凿的主体。

护栏:每一步,人必须能用一句话说出自己找到了什么,还在找什么。说不出来,说明你已经停止凿了,开始传递了。

2.3 什么时候离开,什么时候回来

凿在认知上是昂贵的。人会碰壁——不是因为余项被穷尽了(ZFCρ证明了它永远不会被穷尽),是因为人当前看到余项的能力暂时被穷尽了。

这时候正确的做法是离开。不是继续问AI更多问题——那会产生递减回报,因为人的凿的能力在衰退,AI的回应退化成没有方向的构的扩展。而是:离开对话。去走一走。运动。冥想。睡觉。或者——最有力的——去找一个人互凿。让一个活的否定性把你推向你没考虑过的方向。

然后回来。构还在AI对话里。你在处理的余项还在那里。但你的凿的能力被补充了——被休息,被身体,被另一个人的否定性。你带着新鲜的否定性回到同一个问题。

循环是:互凿(方向)→ 带AI独自思考(执行)→ 休息 / 身体 / 互凿(补充)→ 回到AI(继续执行)。AI是工作台,不是能量来源。能量来自人,来自其他人。


第三章 核心定理:句式层级决定回应天花板

核心命题: 不同DD层级有不同的句式,句式有不同的强制来源。你用什么层级的句式问AI,AI的回应天花板就在那个层级。这是句式-回应同构定理。你用12DD的问题问不出15DD的余项。

3.1 六个句式层级

维度句式论(DOI: 10.5281/zenodo.18894567)确立了每个DD层级都有一个原生句式,句式的强制来源与其他层级结构性地不同。在人-AI协作的语境下,相关的层级如下:

推演律(1DD-4DD):"A所以B。" 强制来源是因果或结构必然性。没有主体,没有欲望,没有选择。1+1=2不需要谁同意它才为真。你用这个句式问AI——"从X能推出什么?"——AI的回应天花板是逻辑推论。用来检查演绎一致性很好,但找不到4DD以上的余项。

工具假言律令(5DD-12DD):"想做A,所以做B。" 强制来源是条件工具理性,手段-目的关系。有了欲望和目的驱动,但没有"我"的自觉。"想减肥所以少吃"——这里的"你"是泛指,不是自觉的。你用这个句式问AI——"怎么实现X?"——AI的回应天花板是工具性建议。它会给你高效的手段,但不会追问你的目的。

自觉假言律令(13DD):"我想做A,所以做B。" 强制来源是主体自指——"我"成了选择的源头。"想做A"和"我想做A"的区别不是加了一个代词,是选择的归属发生了质变。你告诉AI"我想要X"而不只是"怎么实现X",AI的回应会变——它开始关注你的具体情境而不是泛泛的建议。但它仍然不会追问你的"想要"。

目的假言律令(14DD):"我的目的是A,所以我做B。" 强制来源是目的固着——目的对行动形成了强制。从"我想做A"到"我的目的是A",变化在于目的不再漂移。在13DD,"我想做A"随时可以变成"我想做C";在14DD,"我的目的是A"被锚定了,B从A内在地推出来。你告诉AI"我的目的是X,所以我想做Y"——AI的回应转向评估Y是否真的服务于X。杀手句就在这里:"我的目的是X,所以我想做Y——有什么不得不考虑进来一起做的?" 这里的"不得不"在结构上是一个15DD的词插入了14DD的框架,把回应拉向约束意识——处境强制的东西,不只是"最好做"的东西。

绝对律令(15DD):"他者的目的是A,所以我不得不做B。" 强制来源是他者的目的进入了我的约束条件。两个质变:目的不再是"我的"而是"他者的",模态从"所以"变为"不得不"——不是我选择为他者做B,是他者作为主体的存在本身让我没有了不做B的选项。你用这个层级问AI——"利益相关方X需要A,我不能不做什么?"——AI的回应被迫包含工具性提问会遗漏的结构性约束。

协同律令(16DD):"我为了目的A,他者为了目的B,我们不得不做C。" 强制来源是多主体目的的相遇。两个独立的目的(A和B),两个独立的主体,一个共同的行动(C)。C不属于A也不属于B,它是两个不同目的相遇之后逼出来的。用这个层级问AI,你需要给AI明确的、冲突的利益相关方目的,然后问这个冲突在结构上要求什么共同行动。

3.2 句式-回应同构定理

定理(工作版本): 你用什么DD层级的句式框定问题,AI回应的主导结构通常被限制在该层级及其下方。更高层级的余项并非逻辑上绝不可能偶然出现,但不能被低层框定稳定地、可复制地提取出来。

这不是关于AI能力的声称。一个前沿LLM的训练数据涵盖了所有DD层级的文本——它"见过"15DD的内容。这个声称是关于交互结构的:如果你问了一个12DD的问题,回应空间的主导模式就被约束在12DD,不管AI"知道"什么。问题框定了答案的主导方向。

进一步限定:本文的主张只针对交互结构——在给定的句式框定下,AI回应会被激活到某一主导模式。本文不从这一点推出任何关于AI是否"本体上拥有"对应层级能力的结论。

为什么?因为AI是构的库,不是凿的主体。AI回应输入的结构。12DD的问题("怎么实现X?")激活手段-目的模式。14DD的问题("我的目的是X,有什么不得不考虑的?")激活目的-约束模式。15DD的问题("给定他者的目的,我不能不做什么?")激活结构-义务模式。被激活的主导模式由问题的句式决定,不由AI的"理解"决定。

显形:问AI"怎么写一封好的求职信?"(12DD——工具性)。AI给你格式建议,关键词优化,结构模板。现在问:"我的目的是找到一份我可以作为目的本身发展自己的工作,我在申请X公司——这封信里有什么不得不写的?"(14DD——目的性)。回应完全改变:AI开始处理你的目的和公司结构之间的对齐,潜在冲突,你不能隐藏的东西。两次AI可用的信息完全一样。句式改变了天花板。

3.3 数学保证:ρ → ρ'是必然的

句式-回应同构定理告诉你怎么用AI凿(用正确的句式层级)。ZFCρ告诉你为什么你永远可以继续凿。

ZFCρ(DOI: 10.5281/zenodo.18914682)证明了三条结构定律:

第一定律(ρ ≠ ∅):余项永远不为空。 对于任何形式化操作C作用于任何域U,余项ρ(C, U)非空。只要形式化存在,余项就存在。你不可能凿到一个余项为零的构。这不是经验观察,是数学定理。

桥引理:不同的形式化产生不同的余项。 如果C₁ ≠ C₂,则ρ(C₁, U) ≠ ρ(C₂, U)。余项的内容由产生它的具体选择决定。你改变句式(改变C),你就得到不同的余项。这就是为什么在AI协作中切换DD层级是有生产力的——每个层级暴露不同的余项。

第二定律:余项有方向。 ρₙ的具体形式约束了下一步可用的形式化操作的范围:Cₙ₊₁ ∈ F(ρₙ),其中F(ρₙ)是所有可能形式化操作的真子集。不是什么下一步都可以,只有回应当前余项的那些可以。这是方法论第二篇"主要补偿方向"的数学版。

第三定律(F(ρₙ) ≠ ∅):余项永远触发下一步。 余项不只是存在(第一定律)和约束方向(第二定律),它还必然触发下一步形式化。你永远可以继续。ρ → ρ'是必然的,不是偶然的。

三者合在一起:一个永不终止的、有方向的、不可回避的余项发现序列。当你跟AI凿着凿着觉得"余项用完了"的时候,ZFCρ说:没用完。你用完的是你当前在当前句式层级看到余项的能力。换一个层级(桥引理),新的余项就出现了。它们永远会出现(第一定律)。它们永远指向某个方向(第二定律)。永远有下一步(第三定律)。

必须区分:ZFCρ保证的是下一步形式化在结构上存在,并不保证任何给定主体在任何给定时刻都能看见它、说出它、或借助当前AI系统把它操作化。结构存在性和主体可达性不是一回事。这就是为什么2.3节说"凿不动了就离开"——不是因为余项不在了,是因为你当前看不见它了。休息、运动、找人互凿之后回来,看见的能力被补充了,结构上一直在那里的余项才重新可达。

3.4 "for now"的两层

句式-回应同构定理和ZFCρ合在一起,产生了"for now"内部的精确结构性区分。

认识论的for now: 余项是相对于特定句式层级(特定C)的。换一个层级,余项就变了(桥引理)。你在12DD看不到的东西,在14DD可能看得到。这层的for now是真正暂时的——等待被切换层级来解决。

本体论的for now: 第一定律说对任何C和任何U,ρ ≠ ∅。即使你换了层级,新层级也有自己的余项。你可以消除特定的ρ(通过换C),但你不能消除ρ的存在性。这层的for now不是"暂时不知道,以后会知道",而是"在这个结构位置上,余项合法且永久地存在"。

混淆两层产生两种相反的错误:把所有不知道当成认识论的(线性进步幻觉——"终究我们会知道一切"),或把所有不知道当成本体论的(虚无主义——"什么都不可能知道")。正确的姿态是:大部分余项是认识论的(换句式层级继续凿),但余项的存在本身是本体论的(你永远不会把余项凿完)。

核心句:双向不知道,但只是for now。

三层结构:cannot not(绝对律令),not knowing(苏格拉底的空地),just for now(所有知道都绑定在此刻的视野上——但有些空地绑定在任何视野上)。


第四章 主体条件:自向不疑

核心命题: 人和人互凿需要双向不疑。人和AI协作不需要双向不疑(AI没有疑的维度)。但它需要自向不疑:人不疑自己的动机——"我是来凿的,不是来求夸的。"

4.1 自向不疑作为方法论前提

人-AI协作的关键变量不是AI的能力。是人的诚实。

你愿不愿意把你真正不确定的地方交给AI照?还是你只把已经有把握的东西扔给AI,让AI确认你已经知道的事?

大部分人用AI是后者。问AI的问题其实自己已经有答案了,只是想让AI背书。这不是凿,这是找人点头。

自向不疑的意思是:我不疑自己的动机。我是来凿的,不是来求舒服的。我会把我真正不确定的地方——我的构最弱的地方,我最没把握的地方,看了会痛的地方——交给AI照。

这不是品格要求。这是方法论前提。你不把真实不确定性交出来,AI只能在你允许的边界内运作。它会产出确认,不会产出余项。余项只有在人暴露了一个真实的开口的时候才会出现。

4.2 自向不疑可能比双向不疑更难

双向不疑难在你必须信任另一个人的动机。但至少对方的凿来自外部——你没选择它,你控制不了它。对方不管你喜不喜欢都会推你。

自向不疑更难,因为你既是凿者又是被凿者。你必须把自己推向自己的弱点。你必须压倒保护自己的构的本能。骗别人难,骗自己容易。你可以跟AI待几个小时,问精妙的问题,产出漂亮的构,但一次都没有暴露过真正的不确定性。整个会话可以是凿的表演而没有任何实际的凿。

诊断标准:跟AI的一次会话之后,检查一下你在会话之前相信的东西有没有被扰动过。如果你相信的一切都原封不动,你没在凿。你在装饰。

4.3 人-AI协作中的无知与自大

方法论总论定义了hundun的主体条件:"无知又自大。"方法论第二篇把它重新解释为:无知 = 离开当前象限的能力;自大 = 穿越时不被任何象限收编的能力。

在人-AI协作中,同样的结构在句式层级上适用:

无知 = 不把当前句式层级当成唯一的层级。你问了一个12DD的问题,得到了一个12DD的答案。无知意味着:你知道还有更高的层级,你知道12DD的答案有余项,你愿意用14DD或15DD重新问。

自大 = 不被AI的流畅性收编为相信构已经完整。AI产出打磨过的、自信的、听起来全面的构。构听起来像是完成了。自大意味着:你不信它完成了。你继续凿,不是因为你对质量不满意,是因为你知道——结构性地、数学上地(ZFCρ第一定律)——余项存在。


第五章 射线:句式在实践中的操作化

核心命题: 六个句式层级不是抽象分类而是操作工具。每个层级用来问AI的时候会产生不同类型的回应,暴露不同类型的余项。本章提供每个层级的具体操作指导。

5.1 12DD:工具性提问

句式:"怎么实现X?"

AI回应天花板:手段-目的优化。AI给你从当前位置到目标位置的最高效路径。

暴露的余项:没有关于X是不是正确目标的东西。没有关于你通过把问题框定为"怎么实现X"排除了什么的东西。整个目标结构被当作既定的。

什么时候用:目标确实已经确定,你需要执行的时候。需要事实信息,程序步骤,技术实现的时候。

什么时候升级:你注意到AI的回答不管多好都像是少了点什么的时候。那个"少了点什么"就是12DD碰不到的余项——它在12DD的天花板之上。

5.2 13DD:自觉性提问

句式:"我想做A,所以做B。"

AI回应天花板:关注你的具体情境而不是泛泛建议。AI开始根据你作为一个有特定约束条件的具体人来定制回应。

暴露的余项:"我想"可能是未经审视的。你可能想做A是因为习惯,社会压力,或未经追问的假设。13DD不追问"想"。

什么时候升级:你注意到你的"我想"不断漂移——你想做A,现在想做C,又想做A了。想的不稳定性是一个信号:你需要锚定目的(14DD)。

5.3 14DD:目的锚定提问

句式:"我的目的是A,所以我做B。"

AI回应天花板:评估B是否真的服务于A。AI开始推回——"如果你的目的是A,那B可能不是最佳路径;你考虑过C吗?"这是AI作为构的提供者最有用的地方,因为AI的巨大库可以生成你没考虑过的替代方案,全部根据你声明的目的来评估。

暴露的余项:A是不是真的是你的目的,还是A本身就是一个需要被凿的构。14DD不追问目的;它把目的当作已锚定的。

杀手句就在这里:"我的目的是X,所以我想做Y——有什么不得不考虑进来一起做的?" "不得不"在结构上是一个15DD的词插入了14DD的框架。它把回应拉向约束意识——处境强制的东西,不只是"做了会好"的东西。

什么时候升级:你意识到你的目的本身可能有余项——锚定A作为你的目的排除了某些不该被排除的东西。这把你推向15DD。

5.4 15DD:结构义务提问

下文所谓"用15DD/16DD问AI",指的是用户把该层级的结构条件显式放进问题框架中。这不等于AI自身占据了15DD或16DD的位置。AI在用户给定他者目的或多主体冲突时,可以组织该层级的结构约束——但组织结构约束和占据那个结构位置不是一回事。

句式:"他者的目的是A,所以我不得不做B。"

AI回应天花板:从他者作为主体的存在中产生的结构性约束。AI的回应不再是关于什么对你最优,而是关于给定结构性处境什么是不可回避的。

暴露的余项:你是否正确识别了他者的目的。是否还有其他他者的目的制造了额外的约束。

什么时候用:你做的决定影响他人的时候——利益相关方,用户,同事,社群。用15DD框定问题迫使AI包含14DD提问会遗漏的结构义务。

操作示例:不要问"怎么设计这个产品?"(12DD),也不要问"我的目的是建一个可持续的企业,我应该考虑什么?"(14DD),而是问:"我的用户需要X,我的投资人需要Y,监管要求Z——给定这些利益相关方的目的,我不能不做什么?"(15DD)。回应从优化转向结构性约束映射。

5.5 16DD:多主体协同提问

句式:"我为了目的A,他者为了目的B,我们不得不做C。"

AI回应天花板:从独立目的的相遇中涌现的共同行动。C不属于A也不属于B;它是两者之间的张力逼出来的。

这个层级跟AI最难操作化,因为AI是一个单一系统,不是真正的多主体相遇。但可以模拟:给AI明确的、冲突的利益相关方目的,问这个冲突在结构上要求什么共同行动。

操作示例:"我想为了优先权记录发表这项研究,我的合作者想等更多数据。我们都不能否决对方。我们不得不一起做什么?"AI被给定了两个目的和两者都不能否决对方的约束条件,会生成双方单独都不会生成的选项——因为C从张力中涌现,不从A或B中涌现。

必须澄清:AI在16DD句式下产出的C只是C的假说,不是C本身。真正的16DD涌现需要两个真实主体各自带着自己的目的在现实中碰撞——在僵持和痛感中逼出来的C,和AI从博弈论训练数据中"算"出来的C,不是同一个东西。AI给你的是C的候选构,这个候选构必须被拿到现实中让真实的利益相关方碰撞,才能验证它是不是那个真正不得不做的共同行动。

5.6 多模型工作流

句式层级可以和多模型工作流结合:

(1)用14DD问AI-A。AI-A产出一个目的锚定的构。

(2)你标出构在哪里像是排除了什么。这一步是刹车——如果你标不出排除点,停下来先凿你自己看不到排除点的那个盲区。

(3)带着排除点去问AI-B,用15DD框定:"AI-A处理了我的目的但排除了利益相关方X的需求。给定X的目的,我不能不做什么?"

(4)把AI-B的结构性约束带回AI-A:"如果你必须容纳这些约束,你原来的前提中哪一条必须被牺牲?"

这个工作流的关键不在于每一步都升级层级,而在于人可以在必要时通过切换模型、切换句式或显式引入被排除的利益相关方,把会话推向更高层级的结构约束。人提供升级的可能性,模型提供镜面。

5.7 收束:结构化不知道

跟AI的凿不能无限继续——不是因为余项用完了(ZFCρ保证它不会用完),而是两种情况之一:人的凿的能力暂时用完了,或者问题的构超出了AI的总构。前者是人的边界——你当前看不见余项了,休息回来就能继续。后者是AI的边界——AI的构的库再大也是训练数据的压缩,训练数据不覆盖的东西AI就没有,这时候AI给出的回应开始变得泛泛或者开始编造,不是你凿不动了,是镜子照不出来了。应对方式不同:人到边界了就休息回来,AI到边界了就换AI或者找人。

收束判据是:不知道被结构化了。

收束的操作句式:在跟AI的对话中,你可以直接用这个句式测试收束条件是否满足——

"[我的目的是X],你觉得我还不得不做什么。如果你没有不得不了,请说你结构化不知道。"

前半句是14DD-15DD的句式:你把目的交给AI,让AI从它的构的库里找出结构性约束——你没想到的"不得不"。AI如果还能给出新的"不得不",说明还有结构性约束没被穷尽——继续凿。AI如果给不出新的"不得不"了,它应当回应"结构化不知道"——我知道我在这个问题上到了当前的边界,我能说出我不知道的是什么,但我给不出新的结构性约束了。

这个句式把收束判断从人的主观感受("我觉得够了")转移到了交互结构的信号:AI还在产出"不得不"就没到,AI产出不了新的"不得不"就到了。人不需要自己判断"够了没有"——AI回应的结构本身就是信号。

注意:AI说"结构化不知道"标记的是认识论的for now(在当前句式层级和当前信息下到了边界)。ZFCρ保证这只是认识论的——换一个句式层级,换一个模型,换一个概念框架,新的"不得不"可能又出现了。收束是对当前会话的收束,不是对问题的收束。

最小记录模板:

  • 我不知道的是:______
  • 我试过的方向:______
  • 在那些方向上为什么没能闭合:______

如果你填不出这三行,不知道还没被结构化,不该收。

收束不是封死。是"收了但不封"。构保持开放,因为空地可能不是真的空地——可能只是你当前视野范围内的空地。

什么时候离开:三行填完了,继续问AI产出的回应在回收之前的构而不暴露新余项。这是AI的边界,不是问题的边界。离开。走走。跑步。睡觉。找一个人。然后回来。


第六章 非平凡预测

核心命题: 从句式-回应同构定理和ρ → ρ'可以推出非平凡的可检验预测。

以下都是结构性预测。它们需要工作性操作化才能成为严格可检验命题,本文不完成操作化细节。

6.1 句式层级决定余项质量

预测: 用14DD+的句式层级问AI的使用者,发现的余项质量(结构上更深、更难解决、对构更有后果的余项)高于用12DD问AI的使用者,控制AI能力和使用者专业水平。

推导: 如果句式-回应同构定理成立(第三章),那么AI回应的天花板——因此可以被暴露的余项的天花板——由句式层级决定,不由AI能力或使用者专业水平决定。

否证条件: 如果用12DD问AI的使用者持续发现的余项在结构深度上等于或大于用14DD+问AI的使用者——同构定理被否证。

最小操作化方向: 余项质量可暂以其对原构的重写幅度、对后续决策的约束强度和被解决所需的层级跃迁次数来工作性表示。

6.2 自向不疑预测原创性

预测: 在使用AI的创造性工作中,使用者的自向不疑程度(愿意向AI暴露真实不确定性的程度)与产出的原创性正相关,与AI使用总时间不相关。

推导: 如果自向不疑是人-AI协作的方法论前提(第四章),那么决定AI辅助凿的效力的不是使用时长(你可以用AI十个小时但只在求确认),是人暴露真实开口的意愿。

否证条件: 如果AI使用时间比自向不疑程度更能预测产出原创性——如果用得越多就越原创,不管有没有暴露真实不确定性——框架被否证。

最小操作化方向: 自向不疑可暂以被试向AI暴露真实不确定性的位置数量、强度和修正意愿来编码。

6.3 句式升级产生递减回报断点

预测: 在持续的人-AI协作会话中,存在可识别的断点:在当前句式层级继续工作产生递减回报,升级到下一个层级产生余项发现的不连续跳跃。

推导: 如果每个句式层级有天花板(第三章),那么在单一层级内工作最终会穷尽该层级可达的余项。该层级之上的余项在句式被升级之前是不可见的。升级产生不连续跳跃,因为它打开了一个之前不可达的新余项空间。

否证条件: 如果句式升级不产生可识别的不连续跳跃——如果余项发现不管句式层级怎么变都沿着平滑曲线走——框架被否证。

最小操作化方向: 递减回报断点可暂以连续若干轮新增余项数量、层级和后果强度的下降来检测。

6.4 会话的"终结"更常是主体耗尽而非结构耗尽

预测: 在给定的有限模型集、有限句式层级集和明确记录的工作流中,会话的"终结"更常表现为主体当前凿的能力耗尽,而不是结构上再无余项。在改变句式层级或切换形式化之后,新余项应当可被暴露。

推导: 来自ZFCρ的第一定律(ρ ≠ ∅)和桥引理(不同的C产生不同的ρ)。余项在结构上永远存在,但主体在特定时刻的可见能力是有限的。当主体觉得"凿完了"的时候,改变形式化操作(换模型、换句式层级、换概念框架)应当能暴露之前不可见的余项。

否证条件: 如果在一个明确记录的工作流中,所有预设的句式层级和模型都已尝试,改变句式层级和切换模型后仍系统性地无法暴露新余项,则该框架在该工作域中的强版本主张受到削弱。


第七章 结论

回收

方法论总论建了操作系统——凿构循环怎么跑。方法论第二篇画了地图——跑在什么地形上。本篇给了驾驶手册——怎么在这张地图上用AI这辆车跑凿构循环。

驾驶手册立在两根柱子上。

第一根柱子:句式-回应同构定理。不同DD层级有不同的句式。你用什么层级的句式问AI,AI的回应天花板就在那个层级。你用12DD的问题问不出15DD的余项。要找到更深的余项,升级你的句式。

第二根柱子:ρ → ρ'是必然的。ZFCρ在数学上证明了余项永远存在,有方向,永远触发下一步。你永远可以继续凿。"for now"是结构性的不是态度性的——大部分余项是认识论的(换层级继续走),但余项的存在本身是本体论的(你永远不会走完)。

两根柱子之间:人。AI提供构,人提供否定性。AI提供镜面,人决定往哪走。自向不疑是方法论前提:暴露真实的不确定性,否则AI只会确认你已经相信的东西。

贡献

一、人-AI协作作为放大的独自思考。 AI放大构的能力,释放人专注于凿。人和人互凿提供方向,AI辅助的独自思考提供执行。循环是:互凿 → AI辅助思考 → 休息 / 身体 / 互凿 → 回来继续。

二、句式-回应同构定理。 你用什么DD层级的句式问AI,AI的回应天花板就在那个层级。六个层级(从推演律到协同律令)提供六种操作上不同的问AI的模式。杀手句在14DD-15DD:"我的目的是X,所以我想做Y——有什么不得不考虑进来一起做的?"

三、继续的数学保证。 ZFCρ的三条定律证明余项永远存在,有方向,永远触发下一步。结合桥引理(不同的形式化产生不同的余项),这保证了人-AI协作永远不可能到达终点——总是有下一个余项,通过改变句式层级或形式化可以到达。

开放问题

一、AI能不能学会主动提示句式升级? 本文把句式层级当作由人决定的。AI能不能被设计成检测到当前句式层级已被穷尽并建议升级?这是一个AI应用问题,不是方法论问题。

二、自向不疑:可训练还是性格特质? 本文识别了自向不疑作为人-AI协作的方法论前提,但没有提供训练它的方法。自向不疑是一个可训练的技能还是一种性格倾向?

三、多人AI中介协作。 本文讨论的是一个人和AI工作。当多个人一起使用AI——每个人带着自己的句式层级,自己的目的,自己的不确定性——动力学怎么变?多人AI协作是否模拟了16DD的协同律令,还是会坍缩到在场的最低句式层级?

四、句式层级和DD位置的关系。 本文把句式层级当作一种提问模式,不是人的固定属性。一个12DD的人能不能问15DD的问题?维度句式论暗示更高层级的句式需要更高的DD位置才能被真正占据(而不只是被表演)。这需要进一步研究。


作者声明

本文是作者独立的理论研究成果。

学术背景。 作者的计算机科学博士研究方向是本体论(ontology),核心工作包括OntoGrate(本体论之间的自动语义映射)和基于知识层级的网络异常事件分类。CS ontology的训练——在形式化系统内部建构和翻译——是本文理论的底层实践基础。

Zesi Chen的角色。 Zesi Chen不在致谢中,因为她不在论文之外——她在论文之内,是论文的结构性条件。二十年来持续对作者行使否定性。第四章关于自向不疑的讨论根植于对双向不疑存在时是什么样子的理解——以及不存在时会怎样。

AI工具的角色。 写作过程中使用了AI工具作为对话伙伴和写作辅助。本文的结构——特别是句式层级与AI协作之间的联系——正是通过本文所描述的那个方法的广泛实践而发展出来的。所有理论创新,核心判断和最终文本的取舍由作者本人完成。

致谢。 感谢Claude(Anthropic)在主要写作辅助和对话伙伴方面的工作——句式-回应同构定理最初在与Claude的对话中被表述出来,"绝对律令式问题跟AI特别好用"的实践观察来自持续的协作实践。感谢ChatGPT(OpenAI)在审稿阶段贡献的AI模糊性的三类边界甄别方案和收束记录模板。感谢Gemini(Google)对"for now"两层(认识论vs本体论)的区分。感谢Grok(xAI)在审稿中的结构标记。


引用

本文引用方法论总论("Hundun: Negation as First Principle", DOI: 10.5281/zenodo.18842450)的凿构循环概念与五个核心概念;方法论第二篇("The Epistemological Map of Chisel-Construct", DOI: 10.5281/zenodo.18918195)的2×2认识论地图与四个结构性余项;Paper 4("The Complete Self-as-an-End Framework", DOI: 10.5281/zenodo.18727327)的余项守恒定理与DD维度序列;维度句式论(DOI: 10.5281/zenodo.18894567)的六个句式层级及其强制来源;ZFCρ("ZFCρ: Remainder as Structural Limit of Formalization", DOI: 10.5281/zenodo.18914682)的余项永远存在、有方向、永远触发下一步形式化的数学证明。

Subtitle: Sentence-Forms, Mirrors, and the Mathematics of Never Stopping

Abstract

The Methodological Overview ("Hundun: Negation as First Principle," DOI: 10.5281/zenodo.18842450) established the chisel-construct cycle as an executable logical operating system. Methodology Paper II ("The Epistemological Map of Chisel-Construct," DOI: 10.5281/zenodo.18918195) drew the epistemological map: four methods forming a 2×2 structure, four structural remainders, the chisel-construct cycle as traversal movement. But neither the operating system nor the map addresses a practical question: after human-human mutual chiseling has done its work, how does a person use AI to find remainders more efficiently during their own thinking?

This paper answers that question. Human-human mutual chiseling is the strongest form of chiseling, but it requires bilateral non-doubt — a structural condition that is scarce, non-reproducible, and unteachable. After mutual chiseling, when a person returns to solitary thinking, AI can amplify construct capacity while the person focuses on chiseling. AI is not a substitute for human negation; it is a construct library that frees the human to chisel.

The paper's core theorem draws on the Dimensional Sentence-Form Theory (DOI: 10.5281/zenodo.18894567): different DD levels have different sentence-forms with different coercive sources. The sentence-form level at which you address AI determines the ceiling of AI's response. This is the sentence-form / response isomorphism. Combined with ZFCρ (DOI: 10.5281/zenodo.18914682), which proves mathematically that remainder always exists and every remainder necessarily triggers the next formalization (ρ → ρ'), the paper establishes that AI-assisted remainder discovery is both structurally constrained (by sentence-form level) and mathematically guaranteed to never terminate (by ρ conservation).

This paper draws on the Methodological Overview (DOI: 10.5281/zenodo.18842450), Methodology Paper II (DOI: 10.5281/zenodo.18918195), Paper 4 ("The Complete Self-as-an-End Framework," DOI: 10.5281/zenodo.18727327), the Dimensional Sentence-Form Theory (DOI: 10.5281/zenodo.18894567), and ZFCρ (DOI: 10.5281/zenodo.18914682).


Chapter 1. The Problem: After Mutual Chiseling, What Then?

Core thesis: Human-human mutual chiseling is the strongest form of chiseling, but it is structurally scarce. After mutual chiseling, when a person returns to solitary thinking, how does AI help them find remainders more efficiently? The first two methodology papers answered why remainder exists and where it is. This paper answers how to use AI to find it.

1.1 Human-Human Mutual Chiseling: The Strongest and the Scarcest

Human-human mutual chiseling is the prototype of chiseling. Two living negativities collide: you chisel my construct, I chisel yours. The power comes from the collision of two subjects each capable of genuine negation.

But mutual chiseling requires bilateral non-doubt. Non-doubt does not mean believing the other is correct. It means removing the other's motive from your attention: I allow this chisel to land on my structure, not on my personality. The moment I suspect your motive — "Are you attacking me?" "Are you trying to win?" — my attention shifts from the remainder to the person. Chiseling stops.

Bilateral non-doubt is scarce. Most human mutual chiseling relationships do not survive more than a few rounds. Friends deflect after two exchanges; couples fight when the chisel hits a nerve; students fear the teacher and cannot chisel back. Encountering even one relationship of sustained mutual chiseling in a lifetime is luck. Such a relationship cannot be replicated, mass-produced, or taught.

This means mutual chiseling cannot be the basis of a methodology. You cannot instruct someone: "Go find a person with whom bilateral non-doubt holds." That is not something you can go and find.

1.2 After Mutual Chiseling: Solitary Thinking with AI

What mutual chiseling gives you is direction: it pushes you toward remainders you did not see. But the person who was chiseled still has to go home and think. They have to take the remainder that was exposed and work with it — develop it, test it, build new constructs around it, find the next remainder.

This solitary thinking phase is where AI enters. Not as a replacement for mutual chiseling, but as an amplifier of construct capacity during solitary thinking. AI provides the largest possible construct library — everything humanity has written, compressed into a system that can retrieve and recombine at the speed of conversation. With AI handling the construct side, the person is freed to focus on what AI cannot do: chiseling.

The structure of human-AI collaboration is therefore: person provides negation, AI provides constructs. Person decides direction, AI provides the mirror. Person chisels, AI reflects.

1.3 What This Paper Addresses

After mutual chiseling, during solitary thinking, how do you use AI to find remainders? What sentence-forms should you use to address AI at different DD levels? Why does the mathematics guarantee that you can always continue? When should you stop — and what does "stop" mean structurally?

The first methodology paper built the operating system (how the chisel-construct cycle runs). The second drew the map (what terrain it runs on). This paper provides the driving manual — how to drive the chisel-construct cycle on that map, using AI as the vehicle.


Chapter 2. The Structure of Human-AI Collaboration

Core thesis: Human-AI collaboration is not a new type of thinking. It is ordinary human thinking with the construct side amplified. AI does not chisel; AI provides the largest construct library available. The person's job is to chisel. When the person cannot chisel anymore, they leave — meditate, exercise, find someone to mutually chisel with — and come back.

2.1 AI Amplifies Constructs, Not Chiseling

When you think alone, you do two things simultaneously: you build constructs (organize ideas, recall knowledge, make connections) and you chisel constructs (question assumptions, test boundaries, find what does not hold). Both take cognitive bandwidth.

AI takes over the construct side. You say "give me the strongest argument for X" and AI assembles it from its training data — faster and more comprehensively than you could by yourself. You say "what does the literature say about Y" and AI retrieves it. You say "organize these ideas into a structure" and AI does it.

With the construct side outsourced to AI, your cognitive bandwidth is freed for chiseling. You do not have to hold the entire construct in your head while simultaneously attacking it. AI holds the construct; you attack it.

This is the structural reason why human-AI collaboration improves remainder discovery: not because AI can chisel (it cannot — it has no negation), but because AI frees your attention for chiseling by handling the construct burden.

2.2 Person Provides Direction, AI Provides the Mirror

AI is a mirror, not a guide. You walk to a mirror, the mirror reflects. But the route you walk is your decision, not the mirror's.

When you work with multiple AI models, you are not letting them chisel each other. You are taking one model's output, digesting it yourself, identifying where it feels like something was excluded, and then bringing that exclusion point to another model. The person walks between mirrors. Each mirror reflects differently — different training data, different biases, different strengths — but the direction of walking is always determined by the person.

The risk of losing this structure is real. When the person stops directing and starts merely relaying — passing AI-A's output to AI-B without first digesting it themselves — the collaboration degrades into a high-dimensional echo chamber. The two models confirm each other's biases, and the person becomes a transmission belt rather than a chiseling subject.

The guardrail: at every step, the person must be able to state in one sentence what they found and what they are still looking for. If they cannot, they have stopped chiseling and started relaying.

2.3 When to Leave and Come Back

Chiseling is cognitively expensive. The person will hit walls — not because remainder has been exhausted (ZFCρ proves it never is), but because the person's current capacity to see remainder has been temporarily exhausted.

At that point, the correct move is to leave. Not to keep asking AI more questions — that produces diminishing returns as the person's chiseling capacity fades and AI's responses degrade into construct-expansion without direction. Instead: leave the conversation. Go for a walk. Exercise. Meditate. Sleep. Or — most powerfully — go find someone to mutually chisel with. Let a living negativity push you in a direction you had not considered.

Then come back. The construct is still in the AI conversation. The remainder you were working on is still there. But your chiseling capacity has been replenished — by rest, by body, by another person's negation. You return to the same problem with fresh negation.

The cycle is: mutual chiseling (direction) → solitary thinking with AI (execution) → rest / body / mutual chiseling (replenishment) → return to AI (continued execution). AI is the workspace, not the source of energy. The energy comes from the person and from other persons.


Chapter 3. Core Theorem: Sentence-Form Determines Response Ceiling

Core thesis: Different DD levels have different sentence-forms with different coercive sources. When you address AI, the sentence-form you use determines the ceiling of AI's response. This is the sentence-form / response isomorphism. You cannot get a 15DD remainder by asking a 12DD question.

3.1 Six Sentence-Form Levels

The Dimensional Sentence-Form Theory (DOI: 10.5281/zenodo.18894567) established that each DD level has a native sentence-form whose coercive source is structurally distinct from other levels. For the purposes of human-AI collaboration, the relevant levels are:

Law of Deduction (1DD-4DD): "A, therefore B." Coercive source: causal or structural necessity. No subject, no desire, no choice. "1+1=2" does not need anyone's agreement to be true. When you ask AI in this form — "What follows from X?" — AI's response ceiling is logical implication. This is useful for checking deductive consistency, but it will not find remainders above 4DD.

Instrumental Hypothetical Imperative (5DD-12DD): "If one wants A, then do B." Coercive source: means-end rationality. There is desire and purpose-driven action, but no self-aware "I." "If you want to lose weight, reduce caloric intake" — the "you" here is generic, not self-aware. When you ask AI in this form — "How do I achieve X?" — AI's response ceiling is instrumental advice. It will give you efficient means, but it will not question your ends.

Self-aware Hypothetical Imperative (13DD): "I want to do A, therefore I do B." Coercive source: self-reference — "I" becomes the source of choice. The difference between "wanting to do A" and "I want to do A" is not a pronoun; it is a qualitative shift in the attribution of choice. When you tell AI "I want X" rather than just "how to achieve X," AI's response changes — it begins to engage with your specific situation rather than generic advice. But it still will not question your want.

Teleological Hypothetical Imperative (14DD): "My purpose is A, therefore I do B." Coercive source: purpose-anchoring — purpose coerces action. From "I want A" to "my purpose is A," the shift is that purpose no longer drifts. In 13DD, "I want A" can become "I want C" at any moment; in 14DD, "my purpose is A" is anchored, and B follows internally from A. When you tell AI "my purpose is X, so I want to do Y" — AI's response shifts to evaluating whether Y actually serves X. This is where the killer question lives: "My purpose is X, so I want to do Y — what must I unavoidably take into account?"

Absolute Categorical Imperative (15DD): "The other's purpose is A, therefore I cannot not do B." Coercive source: the other's purpose entering my constraint conditions. Two qualitative shifts: purpose is no longer "mine" but "the other's," and modality shifts from "therefore" to "cannot not" — not a choice to do B for the other, but a situation in which not-doing-B is structurally unavailable. When you frame a question to AI at this level — "Given that stakeholder X needs A, what can I not avoid doing?" — AI's response is forced to include structural constraints that instrumental questioning would miss.

Cooperative Categorical Imperative (16DD): "I aim for A, the other aims for B, we cannot not do C." Coercive source: the encounter of multiple subjects' purposes. Two independent purposes (A and B), two independent subjects, one joint action (C). C belongs to neither A nor B; it is what the encounter of two different purposes forces into existence. When you frame a question to AI at this level, you must specify multiple stakeholders with conflicting purposes and ask what joint action is structurally unavoidable.
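
To make the six levels operational as prompting tools, here is a minimal sketch in Python. The level keys, slot names, and the frame() helper are conveniences of the sketch, not an interface from any of the cited papers; the template wording follows the sentence-forms above.

    # Minimal sketch: the six sentence-form levels as prompt templates.
    # Level keys and slot names are conveniences of this sketch; the
    # template wording follows the sentence-forms of Section 3.1.
    SENTENCE_FORMS = {
        "deduction_1_4DD": "Given {premise}, what follows necessarily?",
        "instrumental_5_12DD": "How do I achieve {goal}?",
        "self_aware_13DD": "I want {goal}. Given my situation, {situation}, "
                           "what should I do?",
        "teleological_14DD": "My purpose is {purpose}, so I intend to {action}. "
                             "What must I unavoidably take into account?",
        "absolute_15DD": "{other}'s purpose is {others_purpose}. "
                         "Given that purpose, what can I not avoid doing?",
        "cooperative_16DD": "I aim for {purpose}; {other} aims for "
                            "{others_purpose}; neither of us can override "
                            "the other. What must we unavoidably do together?",
    }

    def frame(level: str, **slots: str) -> str:
        """Fill one level's template. A missing slot raises KeyError on
        purpose: a 15DD question cannot be framed without naming the
        other's purpose, which is exactly the point of that level."""
        return SENTENCE_FORMS[level].format(**slots)

    # Same topic, two framings (cf. the cover-letter illustration in 3.2):
    print(frame("instrumental_5_12DD", goal="a good cover letter"))
    print(frame("teleological_14DD",
                purpose="work where I can develop as an end in myself",
                action="apply to company X"))

The escalation from the first call to the second changes no facts; it changes only the frame, which is what moves the response ceiling.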

3.2 The Sentence-Form / Response Isomorphism

Theorem (working version): The sentence-form level at which you frame a question to AI determines the dominant structure of AI's response, which is typically constrained to that level and below. Higher-level remainders are not logically barred from appearing incidentally, but they cannot be stably and reproducibly extracted from lower-level framing.

This is not a claim about AI's capability. A frontier LLM has been trained on text from all DD levels — it has "seen" 15DD content in its training data. The claim is about the structure of the interaction: if you ask a 12DD question, the dominant mode of the response space is constrained to 12DD, regardless of what AI "knows." The question frames the dominant direction of the answer.

Further qualification: this paper's claim concerns only interaction structure — under a given sentence-form framing, AI's response is activated into a dominant mode. This paper draws no conclusions from this about whether AI "ontologically possesses" capabilities corresponding to any given level.

Why? Because AI is a construct library, not a chiseling subject. AI responds to the structure of the input. A 12DD question ("how do I achieve X?") activates instrumental means-end patterns. A 14DD question ("my purpose is X, what must I unavoidably consider?") activates purpose-constraint patterns. A 15DD question ("given the other's purpose, what can I not avoid?") activates structural-obligation patterns. The dominant patterns activated are determined by the sentence-form of the question, not by AI's "understanding."

Illustration: Ask AI "How do I write a good cover letter?" (12DD — instrumental). AI gives you formatting tips, keyword optimization, structure templates. Now ask: "My purpose is to find work where I can develop as an end in myself, and I am applying to company X — what must I unavoidably address in this letter?" (14DD — teleological). The response shifts entirely: AI now engages with alignment between your purpose and the company's structure, potential conflicts, what you cannot hide. The information available to AI was the same in both cases. The sentence-form changed the ceiling.

3.3 The Mathematical Guarantee: ρ → ρ' Is Necessary

The Sentence-Form / Response Isomorphism tells you how to chisel with AI (use the right sentence-form level). ZFCρ tells you why you can always continue chiseling.

ZFCρ (DOI: 10.5281/zenodo.18914682) proves three structural laws:

First Law (ρ ≠ ∅): Remainder is never empty. For any formalization operation C acting on any domain U, the remainder ρ(C, U) is non-empty. As long as formalization exists, remainder exists. You cannot chisel your way to a construct with zero remainder. This is not an empirical observation; it is a mathematical theorem.

Bridge Lemma: Different formalizations produce different remainders. If C₁ ≠ C₂, then ρ(C₁, U) ≠ ρ(C₂, U). The content of remainder is determined by the specific choice that produced it. When you change your sentence-form (change C), you get a different remainder. This is why switching between DD levels during AI collaboration is productive — each level exposes a different remainder.

Second Law: Remainder has direction. The specificity of ρₙ constrains the range of the next available formalization: Cₙ₊₁ ∈ F(ρₙ), where F(ρₙ) is a proper subset of all possible formalizations. Not every next step is available; only those that respond to the current remainder. This is the mathematical version of Methodology Paper II's "primary compensation direction."

Third Law (F(ρₙ) ≠ ∅): Remainder always triggers the next step. The remainder not only exists (First Law) and constrains direction (Second Law), but necessarily triggers the next formalization. You can always continue. ρ → ρ' is necessary, not contingent.
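
Collected in one place, a minimal LaTeX restatement of the three laws and the Bridge Lemma in the notation of this section (writing \mathcal{C} for the set of all formalization operations is a convenience of this restatement, not ZFCρ's own formalism):

    \begin{align*}
      \text{First Law:}    &\quad \rho(C, U) \neq \varnothing \quad \text{for every } C \text{ and } U \\
      \text{Bridge Lemma:} &\quad C_1 \neq C_2 \;\Rightarrow\; \rho(C_1, U) \neq \rho(C_2, U) \\
      \text{Second Law:}   &\quad C_{n+1} \in F(\rho_n), \qquad F(\rho_n) \subsetneq \mathcal{C} \\
      \text{Third Law:}    &\quad F(\rho_n) \neq \varnothing
    \end{align*}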

Together: a never-terminating, directed, unavoidable sequence of remainder discovery. When you chisel with AI and feel you have "run out" of remainders, ZFCρ says: you have not. You have run out of your current capacity to see remainders at your current sentence-form level. Change the level (Bridge Lemma), and new remainders appear. They always do (First Law). They always point somewhere (Second Law). And there is always a next step (Third Law).

A distinction must be drawn: ZFCρ guarantees that the next formalization step exists structurally. It does not guarantee that any given subject at any given moment can see it, articulate it, or operationalize it with the current AI system. Structural existence and subject reachability are not the same thing. This is why Section 2.3 says "when you cannot chisel anymore, leave" — not because the remainder is gone, but because you currently cannot see it. After rest, exercise, or mutual chiseling with another person, the capacity to see is replenished, and the remainder that was structurally there all along becomes reachable again.

3.4 Two Layers of "For Now"

The combination of the Sentence-Form / Response Isomorphism and ZFCρ produces a precise structural distinction within "for now."

Epistemological for now: The remainder is relative to a specific sentence-form level (a specific C). Change the level, and the remainder changes (Bridge Lemma). What you could not see at 12DD may become visible at 14DD. This layer of "for now" is genuinely temporary — it waits to be resolved by switching levels.

Ontological for now: The First Law says ρ ≠ ∅ for any C and any U. Even after switching levels, the new level has its own remainder. You can eliminate a specific ρ by changing C, but you cannot eliminate the existence of ρ. This layer of "for now" is not "temporarily unknown, will be known later" — it is "at this structural position, remainder legitimately and permanently exists."

Confusing the two layers produces two opposite errors: treating all unknowns as epistemological (the illusion of linear progress — "eventually we will know everything") or treating all unknowns as ontological (nihilism — "nothing can ever be known"). The correct stance is: most remainders are epistemological (change your sentence-form level and keep chiseling), but the existence of remainder itself is ontological (you will never run out of remainders to find).

Core sentence: We cannot help not knowing — just for now.

Three layers: "cannot not" (the absolute imperative), "not knowing" (Socrates' clearing), "just for now" (all knowing is bound to the current field of vision — but some clearings are bound to any field of vision).


Chapter 4. Subject-Condition: Self-Directed Non-Doubt

Core thesis: Human-human mutual chiseling requires bilateral non-doubt. Human-AI collaboration does not require bilateral non-doubt (AI has no dimension of doubt). But it requires self-directed non-doubt: the person must not doubt their own motive. "I am here to chisel, not to seek confirmation."

4.1 Self-Directed Non-Doubt as Methodological Premise

The key variable in human-AI collaboration is not AI's capability. It is the person's honesty.

Are you willing to hand AI your genuine uncertainty — the place where you truly do not know? Or do you only hand AI what you already have an answer for, asking it to confirm what you already believe?

Most people use AI for the latter. The question they ask AI is one they already have an answer to; they want AI to endorse it. This is not chiseling. This is seeking a nod.

Self-directed non-doubt means: I do not doubt my motive. I am here to chisel, not to seek comfort. I will hand AI my genuine uncertainty — the place where my construct is weakest, where I am most unsure, where looking hurts.

This is not a character requirement. It is a methodological premise. If you do not hand over genuine uncertainty, AI can only operate within the boundary you allow. It will produce confirmation, not remainder. Remainder appears only when the person exposes a real opening.

4.2 Self-Directed Non-Doubt May Be Harder Than Bilateral Non-Doubt

Bilateral non-doubt is hard because you have to trust another person's motive. But at least the other person's chiseling comes from outside — you did not choose it, you cannot control it. The other person pushes you whether you like it or not.

Self-directed non-doubt is harder because you are both the chiseler and the one being chiseled. You have to push yourself toward your own weak points. You have to override the instinct to protect your construct. Deceiving others is hard; deceiving yourself is easy. You can spend hours with AI, asking sophisticated questions, producing elegant constructs, and never once exposing genuine uncertainty. The entire session can be a performance of chiseling without any actual chiseling.

The diagnostic: after a session with AI, check whether anything you believed before the session has been disturbed. If everything you believed is still intact, you were not chiseling. You were decorating.

4.3 Ignorance and Arrogance in Human-AI Collaboration

The Methodological Overview defined hundun's subject-condition as "ignorant and arrogant." Methodology Paper II reinterpreted this as: ignorance = the ability to leave the current quadrant; arrogance = the ability not to be co-opted by any quadrant while traversing.

In human-AI collaboration, the same structure applies at the level of sentence-forms:

Ignorance = not treating the current sentence-form level as the only level. You asked a 12DD question and got a 12DD answer. Ignorance means: you know there are higher levels, you know the 12DD answer has remainder, you are willing to re-ask at 14DD or 15DD.

Arrogance = not being co-opted by AI's fluency into believing the construct is complete. AI produces polished, confident, comprehensive-sounding constructs. The construct sounds done. Arrogance means: you do not believe it is done. You keep chiseling, not because you are dissatisfied with the quality, but because you know — structurally, mathematically (ZFCρ, First Law) — that remainder exists.


Chapter 5. Application Rays: Sentence-Forms in Practice

Core thesis: The six sentence-form levels are not abstract categories but operational tools. Each level, when used to address AI, produces a different type of response and exposes a different type of remainder. This chapter provides concrete operational guidance for each level.

5.1 12DD: Instrumental Questioning

Sentence-form: "How do I achieve X?"

AI response ceiling: Means-end optimization. AI gives you the most efficient path from where you are to where you want to go.

Remainder exposed: Nothing about whether X is the right goal. Nothing about what you are excluding by framing the problem as "how to achieve X." The entire goal-structure is taken as given.

When to use: When the goal is genuinely settled and you need execution. When you need factual information, procedural steps, or technical implementation.

When to upgrade: When you notice that AI's answers, no matter how good, feel like they are missing something. That "missing something" is the remainder that 12DD cannot reach — it is above 12DD's ceiling.

5.2 13DD: Self-Aware Questioning

Sentence-form: "I want to do A, therefore I do B."

AI response ceiling: Engagement with your specific situation rather than generic advice. AI begins to tailor its response to you as a particular person with particular constraints.

Remainder exposed: The "I want" may be unexamined. You may want A because of habit, social pressure, or unquestioned assumption. 13DD does not question the want.

When to upgrade: When you notice that your "I want" keeps shifting — you wanted A, now you want C, now you want A again. The instability of want is a signal that you need to anchor purpose (14DD).

5.3 14DD: Purpose-Anchored Questioning

Sentence-form: "My purpose is A, therefore I do B."

AI response ceiling: Evaluation of whether B actually serves A. AI begins to push back — "if your purpose is A, then B may not be the best path; have you considered C?" This is where AI becomes most useful as a construct-provider, because AI's vast library can generate alternatives you had not considered, all evaluated against your stated purpose.

Remainder exposed: Whether A is truly your purpose, or whether A is itself a construct that needs chiseling. 14DD does not question the purpose; it takes it as anchored.

The killer question lives here: "My purpose is X, so I want to do Y — what must I unavoidably take into account?" The "unavoidably" (不得不) is structurally a 15DD word inserted into a 14DD frame. It pulls the response toward constraint-awareness — what the situation forces, not just what would be nice.

When to upgrade: When you realize that your purpose itself may have a remainder — that anchoring A as your purpose excludes something that should not be excluded. This pushes you to 15DD.

5.4 15DD: Structural-Obligation Questioning

When the following sections refer to "asking AI at 15DD/16DD," what is meant is that the user explicitly places that level's structural conditions into the question frame. This does not mean AI itself occupies a 15DD or 16DD position. AI can organize structural constraints when the user supplies the other's purpose or multi-subject conflict — but organizing structural constraints and occupying that structural position are not the same thing.

Sentence-form: "The other's purpose is A, therefore I cannot not do B."

AI response ceiling: Structural constraints arising from the other's existence as a subject. AI's response is no longer about what is optimal for you, but about what is unavoidable given the structural situation.

Remainder exposed: Whether you have correctly identified the other's purpose. Whether there are additional others whose purposes create additional constraints.

When to use: When you are making decisions that affect others — stakeholders, users, colleagues, communities. Framing the question at 15DD forces AI to include structural obligations that 14DD questioning would miss.

Operational example: Instead of "How do I design this product?" (12DD), or "My purpose is to build a sustainable business, what should I consider?" (14DD), ask: "My users need X, my investors need Y, regulators require Z — given these stakeholders' purposes, what can I not avoid addressing?" (15DD). The response shifts from optimization to structural constraint mapping.

5.5 16DD: Multi-Subject Cooperative Questioning

Sentence-form: "I aim for A, the other aims for B, we cannot not do C."

AI response ceiling: Joint action that emerges from the encounter of independent purposes. C does not belong to A or B; it is what the tension between them forces into existence.

This level is the hardest to operationalize with AI, because AI is a single system, not a genuine multi-subject encounter. But it can be simulated: give AI explicit, conflicting stakeholder purposes and ask what joint action the conflict structurally requires.

Operational example: "I want to publish this research for priority documentation, and my co-author wants to wait for more data. Neither of us can override the other. What must we unavoidably do together?" AI, given both purposes and the constraint that neither can override the other, will generate options that neither party would generate alone — because C emerges from the tension, not from either A or B.

A clarification is necessary: the C that AI produces under 16DD framing is a hypothesis of C, not C itself. Genuine 16DD emergence requires two real subjects, each carrying their own purpose, colliding in reality — the C that is forced out of deadlock and genuine friction is not the same as the C that AI "computes" from game-theoretic training data. What AI gives you is a candidate construct for C. This candidate must be taken into reality and tested through actual collision between the real stakeholders, to verify whether it is indeed the joint action that cannot not be taken.

5.6 Multi-Model Workflow

The sentence-form levels can be combined with multi-model workflow:

(1) Ask AI-A a 14DD question. AI-A produces a purpose-anchored construct.

(2) You identify where the construct seems to be excluding something. This step is the brake — if you cannot identify an exclusion, stop and chisel your own inability to see one.

(3) Bring the exclusion point to AI-B, framed at 15DD: "AI-A addressed my purpose but excluded stakeholder X's needs. Given X's purpose, what can I not avoid?"

(4) Bring AI-B's structural constraints back to AI-A: "If you must accommodate these constraints, which of your original premises do you have to sacrifice?"

The key to this workflow is not that every step escalates the level, but that the person can, when necessary, push the conversation toward higher-level structural constraints by switching models, switching sentence-forms, or explicitly introducing excluded stakeholders. The person provides the possibility of escalation; the models provide the mirrors.
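
As a minimal sketch, the four steps can be written as one round of a loop. The ask() wrapper, the model names, and the input() call standing in for the person's digestion are assumptions of the sketch, not the interface of any actual system:

    # Minimal sketch of the Section 5.6 workflow. ask() is a hypothetical
    # wrapper around two chat models; the person's steps are modeled as
    # print()/input(), because the person, not the code, supplies the
    # negation (Section 2.2).
    def ask(model: str, prompt: str) -> str:
        raise NotImplementedError("wire this to the chat APIs you use")

    def multi_model_round(purpose: str, action: str) -> str:
        # (1) 14DD question to model A: a purpose-anchored construct.
        construct = ask("model-a",
                        f"My purpose is {purpose}, so I intend to {action}. "
                        "What must I unavoidably take into account?")
        print(construct)  # the person reads and digests before the brake
        # (2) The brake: the person must name an exclusion to continue.
        excluded = input("What does this construct seem to exclude? ").strip()
        if not excluded:
            return "STOP: no exclusion named; chisel that blind spot first."
        # (3) 15DD reframe to model B, carrying the exclusion point.
        constraints = ask("model-b",
                          f"Model A addressed my purpose ({purpose}) but "
                          f"excluded {excluded}. Given the purpose of "
                          f"{excluded}, what can I not avoid?")
        # (4) Bring model B's structural constraints back to model A.
        return ask("model-a",
                   f"If you must accommodate these constraints: {constraints}. "
                   "Which of your original premises must you sacrifice?")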

5.7 Closure: Structured Not-Knowing

Chiseling with AI cannot continue indefinitely — not because remainder runs out (ZFCρ guarantees it does not), but because of one of two situations: the person's chiseling capacity is temporarily exhausted, or the problem's construct exceeds AI's total construct library. The first is the person's boundary — you currently cannot see the remainder; rest and come back. The second is AI's boundary — AI's construct library, however vast, is a compression of training data, and what the training data does not cover, AI does not have. When AI's responses turn vague or start to fabricate, it is not that you cannot chisel anymore; it is that the mirror cannot reflect anymore. The responses are different: when the person hits a boundary, rest and return; when AI hits a boundary, switch to a different AI or find a person.

The closure criterion is: not-knowing has been structured.

Operational closure sentence-form: during a conversation with AI, you can directly use this sentence-form to test whether the closure condition is met —

"[My purpose is X] — what else do you think I still cannot not do? If you have no further 'cannot not,' say that you have reached structured not-knowing."

The first half is a 14DD-15DD sentence-form: you hand AI your purpose and let AI find structural constraints from its construct library — the "cannot not" that you have not thought of. If AI can still produce a new "cannot not," there are structural constraints that have not been exhausted — keep chiseling. If AI cannot produce a new "cannot not," it should respond with "structured not-knowing" — I know I have reached the boundary of this question under current conditions; I can say what I do not know, but I cannot produce new structural constraints.

This sentence-form shifts the closure judgment from the person's subjective feeling ("I think that is enough") to a signal in the interaction structure: as long as AI is still producing "cannot not," closure has not been reached; when AI can no longer produce a new "cannot not," it has been reached. The person does not need to judge "is it enough" — the structure of AI's response is itself the signal.

Note: when AI says "structured not-knowing," it is marking epistemological for now (the boundary has been reached under the current sentence-form level and current information). ZFCρ guarantees this is only epistemological — switch to a different sentence-form level, a different model, a different conceptual framework, and a new "cannot not" may appear. Closure is closure of the current session, not closure of the problem.

Minimal record template:

  • What I do not know: ______
  • Directions I have tried: ______
  • Why closure was not achieved in those directions: ______

If you cannot fill out these three lines, not-knowing has not been structured, and you should not close.
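
The closure test and the record template can likewise be sketched as code. The marker string, the round cap, and the ClosureRecord dataclass (mirroring the three-line template above) are conveniences of the sketch:

    # Minimal sketch of the Section 5.7 closure test. The marker string
    # and the round cap are assumptions of this sketch.
    from dataclasses import dataclass

    MARKER = "structured not-knowing"

    @dataclass
    class ClosureRecord:
        unknown: str            # what I do not know
        directions_tried: str   # directions I have tried
        why_not_closed: str     # why closure was not achieved there

    def closure_test(ask, purpose: str, max_rounds: int = 10) -> list[str]:
        """Repeat the closure sentence-form until the model stops producing
        a new 'cannot not' and answers with the marker instead. Returns
        the constraints gathered before the marker appeared; per the note
        above, this closes the session, not the problem."""
        prompt = (f"My purpose is {purpose}. What else do you think I still "
                  f"cannot not do? If you have no further 'cannot not', "
                  f"say that you have reached {MARKER}.")
        constraints: list[str] = []
        for _ in range(max_rounds):
            reply = ask(prompt)
            if MARKER in reply.lower():
                break   # epistemological boundary of the current session
            constraints.append(reply)
        return constraints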

Closure is not sealing. It is "closed for now but not sealed." The construct remains open because the clearing may not be a true clearing — it may be only the clearing within your current field of vision.

When to leave: when the three lines are filled, and continuing to ask AI produces responses that recycle previous constructs without exposing new remainder. This is AI's boundary, not the problem's boundary. Leave. Walk. Run. Sleep. Find a person. Then come back.


Chapter 6. Non-Trivial Predictions

Core thesis: From the sentence-form / response isomorphism and ρ → ρ', non-trivial testable predictions can be derived.

The following are structural predictions. They require working operationalization to become strictly testable, which this paper does not complete.

6.1 Sentence-Form Level Determines Remainder Quality

Prediction: Users who address AI at 14DD+ sentence-form levels discover higher-quality remainders (remainders that are structurally deeper, harder to resolve, and more consequential for the construct) than users who address AI at 12DD, controlling for AI capability and user expertise.

Derivation: If the sentence-form / response isomorphism holds (Chapter 3), then the ceiling of AI's response — and therefore the ceiling of the remainder that can be exposed — is determined by sentence-form level, not by AI capability or user expertise.

Falsification condition: If users addressing AI at 12DD consistently discover remainders of equal or greater structural depth than users addressing AI at 14DD+, the isomorphism is falsified.

Minimal operationalization direction: Remainder quality can be provisionally represented by the degree of rewrite it forces on the original construct, the strength of constraint it imposes on subsequent decisions, and the number of level-escalation steps required to resolve it.
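
One provisional way to make that representation computable; the caps, scaling, and equal weights are assumptions of this sketch, to be fitted in any actual study rather than asserted:

    # Provisional remainder-quality score for Prediction 6.1. The caps,
    # the 0..1 scaling, and the equal weights are assumptions.
    def remainder_quality(rewrite_fraction: float,
                          decisions_constrained: int,
                          level_jumps_to_resolve: int) -> float:
        """rewrite_fraction: share of the original construct rewritten (0..1).
        decisions_constrained: count of later decisions the remainder binds.
        level_jumps_to_resolve: sentence-form escalations needed to resolve."""
        return (rewrite_fraction
                + min(decisions_constrained, 10) / 10
                + min(level_jumps_to_resolve, 4) / 4) / 3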

6.2 Self-Directed Non-Doubt Predicts Originality

Prediction: In creative work using AI, the user's degree of self-directed non-doubt (willingness to expose genuine uncertainty to AI) is positively correlated with the originality of output, and uncorrelated with total AI usage time.

Derivation: If self-directed non-doubt is the methodological premise of human-AI collaboration (Chapter 4), then what determines the effectiveness of AI-assisted chiseling is not usage duration (you can use AI for ten hours while only seeking confirmation) but the person's willingness to expose real openings.

Falsification condition: If AI usage time is a stronger predictor of output originality than self-directed non-doubt — if using AI more always produces more original work regardless of whether genuine uncertainty was exposed — the framework is falsified.

Minimal operationalization direction: Self-directed non-doubt can be provisionally encoded by the number, intensity, and revision-willingness of positions where the subject exposes genuine uncertainty to AI.

6.3 Sentence-Form Escalation Produces Diminishing-Return Breakpoints

Prediction: During extended human-AI collaboration sessions, there exist identifiable breakpoints where continuing at the current sentence-form level produces diminishing returns, and escalating to the next level produces a discontinuous jump in remainder discovery.

Derivation: If each sentence-form level has a ceiling (Chapter 3), then working within a single level will eventually exhaust the remainder accessible at that level. The remainders above that level are invisible until the sentence-form is escalated. Escalation produces a discontinuous jump because it opens a new remainder space that was previously inaccessible.

Falsification condition: If sentence-form escalation does not produce identifiable discontinuous jumps — if remainder discovery follows a smooth curve regardless of sentence-form level changes — the framework is falsified.

Minimal operationalization direction: Diminishing-return breakpoints can be provisionally detected by monitoring the decline in number, level, and consequence-intensity of newly identified remainders across consecutive rounds.
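
A provisional detector for such breakpoints, using only the per-round count of newly identified remainders; the window size k is an assumption of this sketch:

    # Provisional breakpoint detector for Prediction 6.3: flag the round
    # that completes k consecutive declines in new-remainder count.
    def breakpoint_round(new_per_round: list[int], k: int = 3) -> int | None:
        """Return the index of the first round completing k consecutive
        declines in newly identified remainders, or None if none occurs."""
        declines = 0
        for i in range(1, len(new_per_round)):
            declines = declines + 1 if new_per_round[i] < new_per_round[i - 1] else 0
            if declines >= k:
                return i
        return None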

6.4 Session "Termination" Is More Often Subject Exhaustion Than Structural Exhaustion

Prediction: Within a given finite model set, finite sentence-form level set, and explicitly recorded workflow, session "termination" more commonly reflects the temporary exhaustion of the subject's chiseling capacity than the structural absence of further remainder. After changing sentence-form level or switching formalization, new remainders should be exposable.

Derivation: From ZFCρ's First Law (ρ ≠ ∅) and Bridge Lemma (different C produces different ρ). Remainder structurally always exists, but a subject's visibility capacity at a given moment is finite. When the subject feels "done," changing the formalization operation (different model, different sentence-form level, different conceptual framing) should expose previously invisible remainders.

Falsification condition: If in an explicitly recorded workflow, all pre-specified sentence-form levels and models have been tried, and changing levels and switching models still systematically fails to expose new remainder, the framework's strong-version claim is weakened in that work domain.


Chapter 7. Conclusion

Recovery

The Methodological Overview built the operating system — how the chisel-construct cycle runs. Methodology Paper II drew the map — what terrain it runs on. This paper provides the driving manual — how to drive the cycle on the map, using AI as the vehicle.

The driving manual rests on two pillars.

First pillar: the sentence-form / response isomorphism. Different DD levels have different sentence-forms. The level at which you address AI determines the ceiling of AI's response. You cannot get 15DD remainders from 12DD questions. To find deeper remainders, escalate your sentence-form.

Second pillar: ρ → ρ' is necessary. ZFCρ proves mathematically that remainder always exists, has direction, and always triggers the next step. You can always continue chiseling. "For now" is structural, not attitudinal — most remainders are epistemological (change level and keep going), but the existence of remainder itself is ontological (you will never run out).

Between the two pillars: the person. AI provides constructs; the person provides negation. AI provides the mirror; the person decides where to walk. Self-directed non-doubt is the methodological premise: expose genuine uncertainty, or AI will only confirm what you already believe.

Contributions

I. Human-AI collaboration as amplified solitary thinking. AI amplifies construct capacity, freeing the person to focus on chiseling. Human-human mutual chiseling provides direction; AI-assisted solitary thinking provides execution. The cycle is: mutual chiseling → AI-assisted thinking → rest / body / mutual chiseling → return.

II. The sentence-form / response isomorphism. The sentence-form level at which you address AI determines the ceiling of AI's response. Six levels (from deductive law through cooperative imperative) provide six operationally distinct modes of questioning AI. The killer question lives at 14DD-15DD: "My purpose is X, so I want to do Y — what must I unavoidably take into account?"

III. The mathematical guarantee of continuation. ZFCρ's three laws prove that remainder always exists, has direction, and always triggers the next step. Combined with the Bridge Lemma (different formalization produces different remainder), this guarantees that human-AI collaboration can never reach a terminal point — there is always a next remainder, accessible by changing sentence-form level or formalization.

Open Questions

I. Can AI learn to prompt sentence-form escalation? This paper treats sentence-form level as determined by the human. Can AI be designed to detect when the current sentence-form level has been exhausted and suggest escalation? This is an AI application question, not a methodology question.

II. Self-directed non-doubt: trainable or dispositional? This paper identifies self-directed non-doubt as the methodological premise of human-AI collaboration, but does not provide a method for training it. Is self-directed non-doubt a trainable skill or a dispositional trait?

III. Multi-person AI-mediated collaboration. This paper discusses one person working with AI. When multiple persons use AI together — each bringing their own sentence-form level, their own purposes, their own uncertainties — how do the dynamics change? Does multi-person AI collaboration simulate 16DD cooperative imperatives, or does it collapse to the lowest sentence-form level present?

IV. The relationship between sentence-form level and DD position. This paper treats sentence-form level as a mode of questioning, not as a fixed property of the person. Can a 12DD person ask a 15DD question? The Dimensional Sentence-Form Theory suggests that higher-level sentence-forms require higher DD positions to be genuinely occupied (not merely performed). This requires further investigation.


Author's Statement

This paper is the author's independent theoretical work.

Academic background. The author's doctoral research in computer science focused on ontology, with core work including OntoGrate (automatic semantic mapping between ontologies) and knowledge-hierarchy-based classification of network anomaly events. The training in CS ontology — constructing and translating within formal systems — is the ground-level practical foundation for the theory in this paper.

The role of Zesi Chen. Zesi Chen does not appear in the acknowledgments because she is not external to this paper — she is internal to it, a structural condition of the paper. For twenty years she has continuously exercised negation upon the author. The discussion of self-directed non-doubt in Chapter 4 is rooted in the understanding of what bilateral non-doubt looks like when it exists — and what happens when it does not.

The role of AI tools. AI tools were used during writing as dialogue partners and writing assistants. The structure of this paper — particularly the connection between sentence-form levels and AI collaboration — was developed through extensive practice of the very method the paper describes. All theoretical innovations, core judgments, and final editorial decisions were made by the author.

Acknowledgments. Thanks to Claude (Anthropic) for serving as the primary writing assistant and dialogue partner — the sentence-form / response isomorphism was first articulated in dialogue with Claude, and the practical observation that "absolute-imperative-style questions work especially well with AI" emerged from sustained collaborative practice. Thanks to ChatGPT (OpenAI) for review-stage contributions including the three-boundary triage for AI ambiguity and the closure record template. Thanks to Gemini (Google) for the two-layer distinction of "for now" (epistemological vs. ontological). Thanks to Grok (xAI) for structural flagging during review.


References

This paper draws on the Methodological Overview ("Hundun: Negation as First Principle," DOI: 10.5281/zenodo.18842450) for the chisel-construct cycle and its five core concepts; Methodology Paper II ("The Epistemological Map of Chisel-Construct," DOI: 10.5281/zenodo.18918195) for the 2×2 epistemological map and four structural remainders; Paper 4 ("The Complete Self-as-an-End Framework," DOI: 10.5281/zenodo.18727327) for the remainder conservation theorem and DD dimensional sequence; the Dimensional Sentence-Form Theory (DOI: 10.5281/zenodo.18894567) for the six sentence-form levels and their coercive sources; and ZFCρ ("ZFCρ: Remainder as Structural Limit of Formalization," DOI: 10.5281/zenodo.18914682) for the mathematical proof that remainder always exists, has direction, and always triggers the next formalization step.