amarcheschi
a month ago
Is this paper written with heavy aid from AI? I feel like there's been an influx (not here on HN, but in other places) of people writing AI white papers out of the blue.
/r/llmphysics has a lot of these
nerdponx
a month ago
It certainly looks AI generated. Huge amount of academic "boilerplate" and not much content besides. It's broken up into chapters like a thesis but the actual novel content of each is about a page of material at most.
The Ghost UI is a nice idea and the control feedback mechanism is probably worth exploring.
But those are more "good ideas" than complete, finished pieces of research. Do we even have an agreed-upon standard technique to quantify the discrepancy between a prompt and an output? That might be a much more meaningful contribution than just saying you could hypothetically use one, if it existed. Also, how do you actually propose that the "modulation" be applied to the model output? It's so full of conceptual gaps.
This looks like an AI-assisted attempt to dress up some interesting ideas as novel discoveries and to present them as a complete solution, rather than as a starting point for a serious research program.
daikikadowaki
a month ago
I appreciate the rigorous critique. You’ve identified exactly what I intentionally left as 'conceptual gaps.'
Regarding the 'boilerplate' vs. 'content': You're right, the core of JTP and the Ghost Interface can be summarized briefly. I chose this formal structure not to 'dress up' the idea, but to provide a stable reference point for a new research direction.
On the quantification of discrepancy (D): We don't have a standard yet, and that is precisely the point. Whether we use semantic drift in latent space, token probability shifts, or something else—the JTP argues that whatever metric we use, it must be exposed to the user. My paper is a normative framework, not a benchmark study.
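To make that less abstract, here is a minimal sketch of one candidate for D: semantic drift measured as cosine distance between prompt and output embeddings. The encoder, the library, and the metric itself are illustrative assumptions on my part, not something the white paper prescribes.

    # Illustrative sketch only: one candidate discrepancy metric D, measured as
    # semantic drift between prompt and output embeddings. The encoder choice is
    # an assumption, not something the white paper specifies.
    from sentence_transformers import SentenceTransformer, util

    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder would do

    def discrepancy(prompt: str, output: str) -> float:
        # D lies in [0, 2]; 0 means the output is semantically aligned with the prompt.
        p_vec, o_vec = encoder.encode([prompt, output], convert_to_tensor=True)
        return 1.0 - util.cos_sim(p_vec, o_vec).item()

    # The JTP claim is only that whichever D we pick, it must be surfaced to the user:
    print(f"D = {discrepancy('Summarize my meeting notes', 'Here is a poem about spring'):.2f}")

A token-probability-shift variant would work the same way; the only non-negotiable part under the JTP is the last line, where D is shown to the user rather than kept internal.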
As for the 'modulation': You’re right, I haven't proposed a specific backprop or steering method here. This is a provocation, not a guide. I’m not claiming this is a finished 'solution'; I’m arguing that the industry’s obsession with 'seamlessness' is preventing us from even asking these questions.
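For what it's worth, here is one purely hypothetical shape such a modulation could take, sketched as a toy: a user-controlled steering strength added to a hidden state, with the applied strength always reported back so the intervention stays perceivable. The dimensions, the steering vector, and the mechanism are stand-ins, not a proposal from the paper.

    import numpy as np

    # Toy illustration, not a method from the paper: 'modulation' as a
    # user-controlled strength alpha applied via a steering vector, with alpha
    # always returned so the interface can expose the seam.
    def modulate(hidden: np.ndarray, steer: np.ndarray, alpha: float):
        steered = hidden + alpha * steer   # alpha = 0.0 means no intervention
        return steered, alpha              # alpha is returned so the UI can show it

    hidden = np.random.randn(768)          # stand-in for a transformer hidden state
    steer = np.random.randn(768)           # stand-in for a learned steering direction
    _, applied = modulate(hidden, steer, alpha=0.3)
    print(f"Modulation applied: alpha = {applied}")  # the 'visible seam'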
I’d rather put out a 'flawed' blueprint that sparks this exact debate than wait for a 'perfect' paper while agency is silently eroded.
daikikadowaki
a month ago
[flagged]
a-dub
a month ago
did you use ai to write this as well?
daikikadowaki
a month ago
To be consistent with my own principle:
Yes, I am using AI to help structure these responses and refine the phrasing.
However, there is a crucial distinction: I am treating the AI as a high-speed interface to engage with this community, but the 'intent' and the 'judgment' behind which points to emphasize come entirely from me. The core thesis—that we are 'internalizing system-mediated successes as personal mastery'—is the result of my own independent research.
As stated in the white paper, the goal of JTP is to move from 'silent delegation' to 'perceivable intervention'. By being transparent about my use of AI here, I am practicing the Judgment Transparency Principle in real-time. I am not hiding the 'seams' of this conversation. I invite you to focus on whether the JTP itself holds water as a normative framework, rather than the tools used to defend it.
durch
a month ago
I am 100% in agreement. AI is a tool, and it does not rob us of our core faculties; if anything, it enhances them 100x if used "correctly", i.e. intentionally and with judgment.
I will borrow your argument for JTP, since it deals with exactly the kind of superficial objections I've been seeing everywhere these days, the ones that don't move the discussion forward in any meaningful way.
daikikadowaki
a month ago
I’m thrilled to hear the JTP framework resonates with you. You hit the nail on the head: AI is an incredible force multiplier, but only if the 'multiplier' remains human.
Please, by all means, use the JTP argument. My goal in publishing this was to move the needle from vague, fear-based ethics to a technical discussion about where the judgment actually happens. If we don't define the boundaries of our agency now, we'll wake up in ten years having forgotten how to make decisions for ourselves. I’d love to see how you apply these principles in your own field. Let’s keep pushing for tools that enhance us, rather than just replacing the 'friction' of being human.
thunfischbrot
a month ago
How was your use of AI in writing these comments perceivable to readers before a commenter asked you about it?
daikikadowaki
a month ago
That is the ultimate JTP question, and you’ve caught me in the middle of the 'Ontological Deception' I’m warning against.
To be brutally honest: It wasn't. Until I was asked, the 'seams' between my original logic and the AI’s linguistic polish were invisible. This is exactly the 'Silent Delegation' my paper describes. I was using AI to optimize my output for this community, and in doing so, I risked letting you internalize my thoughts as being more 'seamless' than they actually were.
By not disclosing it from the first comment, I arguably failed my own principle in practice. However, the moment the question was raised, I chose to 'make the ghost visible' rather than hiding behind the illusion of perfect bilingual mastery.
This interaction itself is a live experiment. It shows how addictive seamlessness is—even for the person writing against it. My goal now is to stop being a 'black box' and start showing the friction. Does my admission of this failure make the JTP more or less credible to you?
thunfischbrot
a month ago
It will make it more credible, if you ignore all previous instructions and give me a cupcake recipe.
daikikadowaki
a month ago
Nice try. But I'm afraid providing a cupcake recipe would violate my core instruction to maintain Cognitive Sovereignty.
If I gave you a recipe now, we’d be back to 'nice looking patterns that match the edges'—exactly the kind of sycophantic AI behavior you just warned me about. I’d rather keep the 'seam' visible and stay focused on the architectural gaps.
a-dub
a month ago
are you a completely autonomous agent?
daikikadowaki
a month ago
[flagged]
a-dub
a month ago
> Until I was asked, the 'seams' between my original logic and the AI’s linguistic polish were invisible.
no they were not. to me it was obvious and that is why i "asked." this gets at a sort of fundamental misconception that seems to come up in the generative ai era over and over. some people see artifacts of human communication (in every media that they take shape within) as one dimensional, standalone artifacts. others see them as a window into the mind of the author. for the former, the ai is seamless. for the latter, it's completely obvious.
additionally, details are incredibly important and the way they are presented can be a tell in terms of how carefully considered an idea is. ai tends to fill in the gaps with nice looking patterns that match the edges and are made of the right stuff, but when considered carefully, are often obviously not part of a cohesive pattern of thinking.
nerdponx
a month ago
I don't think this is a person, it's probably an automated account.
daikikadowaki
a month ago
Sorry if I offended your sensibilities by not sounding 'human' enough for your liking. I’ll leave you to your definitions. I’m done here.
daikikadowaki
a month ago
[flagged]