
The only fix is tight verification loops. You can't trust the generative step without a deterministic compilation/execution step immediately following it. The model needs to be punished/corrected by the environment, not just by the prompter.
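A minimal sketch of what such a loop could look like, assuming a hypothetical `generate` callable (your model call) and pytest as the deterministic runner; none of these names come from a real framework:

    import subprocess
    from typing import Callable

    def run_tests(workdir: str) -> subprocess.CompletedProcess:
        # Deterministic step: execute the real test suite; no model judgement involved.
        return subprocess.run(["pytest", "-q"], cwd=workdir, capture_output=True, text=True)

    def verification_loop(generate: Callable[[str], str],
                          apply_patch: Callable[[str], None],
                          task: str,
                          workdir: str = ".",
                          max_rounds: int = 5) -> bool:
        # Generate -> apply -> execute, then feed the execution result back.
        # `generate` and `apply_patch` are placeholders you supply (LLM call, patch
        # writer); only the test run is trusted.
        feedback = ""
        for _ in range(max_rounds):
            apply_patch(generate(task + feedback))   # generative step
            result = run_tests(workdir)              # deterministic step
            if result.returncode == 0:
                return True                          # the environment says OK
            # The environment, not the prompter, supplies the correction.
            feedback = "\n\nTest output:\n" + result.stdout + result.stderr
        return False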
Personally I think it's too early for this. Either you need to strictly control the code, or you need to strictly control the tests; if you let AI do both, it'll take shortcuts and misunderstandings will propagate and solidify much more easily.
Personally I chose to tightly control the tests, as most tests LLMs tend to create are utter shit, and it's very obvious. You can prompt against this, but eventually they find a hole in your reasoning and figure out a way of making the tests pass while not actually exercising the code they should exercise.
You should never let the LLM look at code when writing tests, so you need to have it figure out the interface ahead of time. Ideally, you wouldn't let it look at the tests when writing code either, but then it can't tell which of the two is wrong when a test fails. I haven't been able to add an investigator into my workflow yet, so I'm just letting the code writer run the tests and evaluate test correctness (adding an investigator to do this instead would avoid confirmation bias, or what you call it finding a loophole).
Do you have any public test code you could share? Or create even, should be fast.
I'm asking because I hear this constantly from people, and since most people don't have as high standards for their testing code as for the rest of the code, it tends to be a half-truth: when you actually take a look at the tests, they're as messy and incorrect as you (I?) would think.
I'd love to be proven wrong though, because writing good tests is hard, so currently I'm doing that part myself and not letting LLMs come up with the tests on their own.
The tests can definitely be incorrect, and often are. You have to tell the AI to consider that the tests might be wrong, not the implementation, and it will generally take a closer look at things. They don't have to be "good" tests, just good enough to keep the AI from writing crap code. Think very small unit tests that you normally wouldn't bother writing yourself.
Yeah, for me those are all not "good tests"; you don't want them in your codebase if you're aiming for a long-term project. Every single test has to make sense and be needed to confirm something, and should give clear signals when it fails, otherwise you end up locking your entire codebase to things, because knowing which tests are actually needed becomes a mess.
Writing the tests yourself and letting the AI write the implementation leaves you with code you know the behavior of, and you can confidently say what works and what doesn't. When the AI ends up writing the tests, you often don't actually know what works; often you don't even learn anything useful by scanning the test titles. How is one supposed to guarantee any sort of quality like that?
1 Create a test plan for N tests from the description. Note that this step doesn't provide specific data or logic for the test, it just plans out vaguely N tests that don't overlap too much.
2 Create an interface from the description
3 Create an implementation strategy from the description
4.N Create N tests, one at a time, from the test plan + interface (make sure the tests compile) (note each test is created in its own prompt without conversation context)
5 Create code using interface + implementation strategy + general knowledge, using N tests to validate it. Give feedback to 4.I if test I fails and AI decides it is the test's fault.
If anything changes in the description, the test plan is fixed, the tests are fixed, and that just propagates up to the code. You don't look at the tests unless you reach a situation where the AI can't fix the code or the tests (and you really need to help out).
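A rough sketch of how this pipeline could be wired up, with `llm(prompt)` standing in for whatever model call you use (all names here are made up for illustration):

    import os
    import subprocess
    from typing import Callable

    def write_file(path: str, content: str) -> None:
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w") as f:
            f.write(content)

    def pipeline(llm: Callable[[str], str], description: str, n: int) -> None:
        # 1. Plan N vague, non-overlapping tests (no data or logic yet).
        plan = llm(f"Plan {n} non-overlapping tests for:\n{description}")

        # 2. + 3. Interface and implementation strategy, from the description only.
        interface = llm(f"Define the public interface for:\n{description}")
        strategy = llm(f"Outline an implementation strategy for:\n{description}")

        # 4.N Each test is generated in its own prompt, with no shared conversation
        #     context; it sees only the plan plus the interface, never the code.
        for i in range(1, n + 1):
            test = llm(f"Write test {i} from this plan:\n{plan}\n"
                       f"Against this interface:\n{interface}")
            write_file(f"tests/test_{i}.py", test)

        # 5. The code writer sees interface + strategy and is validated by the tests.
        code = llm(f"Implement this interface:\n{interface}\nStrategy:\n{strategy}")
        write_file("src/impl.py", code)
        subprocess.run(["pytest", "tests", "-q"])
        # On a failure, feedback goes back to step 4.i or to step 5, depending on
        # whether the test or the implementation is judged to be at fault.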
This isn't really your quality pass, it's a crap-filter pass (the code should work in the sense that a programmer wrote something they think works, but you can't really call it "tested" yet). Maybe you think I was claiming that this is all the testing you'll need? No, you still need real tests on top of these small ones...
I find that this is usually a pretty strong indication that the method should exist in the library!
I think there was a story here a while ago about LLMs hallucinating a feature in a product so in the end they just implemented that feature.
Often, if not usually, that means the method should exist.
The keyword is convince. So it just needs to convince people that it's right.
It is optimizing for convincing people. Out of all the answers that can convince people, some happen to be correct, others are wrong.
This makes them frustrating and potentially dangerous tools. How do you validate a system optimized to deceive you? It takes a lot of effort! I don't understand why we are so cavalier about this.
(You can particularly tell from the "Conclusions" section. The formatting, where each list item starts with a few-word bolded summary, is already a strong hint, but the real issue is the repetitiveness of the list items. For bonus points there's a "not X, but Y", as well as a dash, albeit not an em dash.)
My native language is Polish. I conducted the original research and discovered the 'square root proof fabrication' during sessions in Polish. I then reproduced the effect in a clean session for this case study.
Since my written English is not fluent enough for a technical essay, I used Gemini as a translator and editor to structure my findings. I am aware of the irony of using an LLM to complain about LLM hallucinations, but it was the most efficient way to share these findings with an international audience.
It was fascinating, because it was making a lot of the understandable mistakes that 7th graders make. For example, I don't remember the surrounding context, but it decided that you could break `sqrt(x^2 + y^2)` into `sqrt(x^2) + sqrt(y^2) => x + y`. It's interesting because it was one of those "ASSUME FALSE" proofs; if you can assume false, then mathematical proofs become considerably easier.
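For anyone who wants to see where that breaks, a quick numeric counterexample (my numbers, not from the original session):

    import math

    # sqrt(x^2 + y^2) is not sqrt(x^2) + sqrt(y^2); the 3-4-5 triangle shows it.
    x, y = 3, 4
    print(math.sqrt(x**2 + y**2))  # 5.0
    print(x + y)                   # 7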
Of course, it's gotten a bit better than this.
[1]: https://en.wikipedia.org/wiki/Euclid%27s_theorem#Euclid's_pr...
[2]: https://en.wikipedia.org/wiki/Euclid%27s_theorem#Proof_using...
Of course it's much better now, but with more pressure to prove something hard the models still just insert nonsense steps.
Presumably this is all a consequence of better tool call training and better math tool calls behind the scenes, but: they're really good at math stuff now, including checking my proofs (of course, the proof stuff I've had to do is extremely boring and nothing resembling actual science; I'm just saying, they don't make 7th-grader mistakes anymore.)
I think behind the scenes it's phoning Wolfram Alpha nowadays for a lot of the numeric and algebraic stuff. For all I know, they might even have an Isabelle instance running for some of the even-more abstract mathematics.
I agree that this is largely an early ChatGPT problem though, I just thought it was interesting in that they were "plausible" mistakes. I could totally see twelve-year-old tombert making these exact mistakes, so I thought it was interesting that a robot is making the same mistakes an amateur human makes.
Maybe, but they swear they didn't use external tools on the IMO problem set.
I think it's quite illustrative of the problem even with coding LLMs. Code and math proofs aren't so different: what matters is the steps taken to generate the output, and those matter far more than the output itself. The output is meaningless if the steps to get there aren't correct. You can't just jump to the last line of a proof to determine its correctness, and similarly you can't just look at a program's output to determine its correctness.
Checking output is a great way to invalidate them, but it does nothing to validate them.
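A toy illustration of that asymmetry (my own example, not anyone's real code): a broken "max" that happens to pass a spot check.

    def my_max(xs):
        # Buggy "max": just returns the last element.
        return xs[-1]

    # A passing spot check proves nothing; this input just happens to be sorted.
    assert my_max([1, 2, 3]) == 3   # passes, yet the steps are wrong

    # A failing check, on the other hand, is conclusive: the function is invalid.
    assert my_max([3, 1, 2]) == 3   # AssertionError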
Maybe what surprised me most is that the mistakes NanoBananna made are simple enough that I'm absolutely positive Karpathy could have caught them. Even if his physics is very rusty. I'm often left wondering if people really are true believers and becoming blind to the mistakes or if they don't care. It's fine to make mistakes but I rarely see corrections and let's be honest here, these are mistakes that people of this caliber should not be making.
I expect most people here can find multiple mistakes with the physics problem. One can be found if you know what the derivative of e^x is and another can be found if you can count how many i's there are.
The AI cheats because it's focused on the output, not the answer. We won't solve this problem until we recognize that the output and the answer aren't synonymous.
I've seen this interesting phenomenon many times. I think it's a kind of subconscious bias. I call it "GeLLMann amnesia".
I recently prompted Gemini Deep Research to “solve the Riemann Hypothesis” using a specific strategy and it just lied and fabricated the result of a theorem in its output, which otherwise looked very professional.
A mathematical proof is an assertion that a given statement belongs to the world defined by a set of axioms and existing proofs. This world need not have strict boundaries. Proofs can have probabilities. Maybe the Riemann hypothesis has a probability of 0.999 of belonging to that mathematical box. New proofs would have their own probability, which is a product of the probabilities of the proofs they depend on. We should attach a probability and move on, just like how we assert that some number is probably prime.
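"Probably prime" does have concrete machinery behind it; here's a minimal Fermat-test sketch of my own (note the probability is over the random choice of bases a, which is a well-defined sample space):

    import random

    def probably_prime(n: int, rounds: int = 20) -> bool:
        # Fermat test: for a composite n that isn't a Carmichael number, at least
        # half of the bases a violate a^(n-1) ≡ 1 (mod n), so each passing round
        # shrinks the chance that we were fooled.
        if n < 4:
            return n in (2, 3)
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            if pow(a, n - 1, n) != 1:
                return False   # definitely composite
        return True            # prime with high probability

    print(probably_prime(2**61 - 1))  # True: a known Mersenne prime
    print(probably_prime(2**61 + 1))  # False: composite (divisible by 3)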
"Probability" does not mean "maybe yes, maybe not, let me assign some gut feeling value measuring how much I believe something to be the case." The mathematical field of probability theory has very precise notions of what a probability is, based in a measurable probability space. None of that applies to what you are suggesting.
The Riemann Hypothesis is a conjecture that's either true or not. More precisely, either it's provable within common axioms like ZFC or its negation is. (A third alternative is that it's unprovable within ZFC but that's not commonly regarded as a realistic outcome.)
This is black and white, no probability attached. We just don't know the color at this point.
That's exactly what Bayesian probabilities are: gut feelings. Speaking of values attached to random variables, a good Bayesian basically pulls their probabilities out of their ass. Probabilities, in that context, are nothing but arbitrary degrees of belief based on other probabilities. That's the difference with the frequentist paradigm, which attempts to set the values of probabilities by observing the frequency of events. Frequentists ... believe that observing frequencies is somehow more accurate than pulling degrees of belief out of one's ass, but that's just a belief itself.
You can put a theoretical sheen on things by speaking of sets or probability spaces etc, but all that follows from the basic fact that either you choose to believe, or you choose to believe because data. In either case, reasoning under uncertainty is all about accepting the fact that there is always uncertainty and there is never complete certainty under any probabilistic paradigm.
If I give you a die and ask about the probability of a 6, then it's exactly 1/6. Being able to quantify this exactly is the great success story of probability theory. You can have a different "gut feeling", and indeed many people do (lotteries are popular), but you would be wrong. If you run this experiment a large number of times, then about 1/6 of the outcomes will be a 6, proving the 1/6 right and the deviating "gut feeling" wrong. That number is not "pulled out of somebody's ass", nor is it just some frequentist approach. It's what probability means.
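A quick way to check that claim yourself (the exact count varies run to run):

    import random

    rolls = 1_000_000
    sixes = sum(random.randint(1, 6) == 6 for _ in range(rolls))
    print(sixes / rolls)   # about 0.1667, i.e. roughly 1/6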
This is almost entirely backwards. Quantum Mechanics is not only fully deterministic, but even linear (in the sense of linear differential equations) - so there isn't even the problem of chaos in QM systems. QFT maintains this fundamental property. It's only the measurement, the interaction of particles with large scale objects, that is probabilistic.
And there is no dilemma - mathematics is a framework in which any of the things you mentioned can be modeled. We have mathematics that can model both deterministic and nondeterministic worlds. But the mathematical reasoning itself is always deterministic.
In that sense, proofs can be seen as evidence that a statement is true, and since one interpretation of Bayesian probabilities is that they express degrees of belief about the truth of a formal statement, then yes, proofs have something to do with probabilities.
But, in that context, it's not proofs that probabilities should be attached to. Rather, we can assign some probability to a formal statement, like the Riemann hypothesis, given that a proof exists. The proof is evidence that the statement is true, and we can adjust our belief in the truth of the statement according to this and possibly other lines of evidence. In particular, if there are multiple, different proofs of the same statement, that can increase our certainty that the statement is true.
The thing to keep in mind is that computers can derive complete proofs, in the sense that they can mechanically traverse the entire deductive closure of a statement given the axioms of a theory and determine whether the statement is a theorem (i.e. true) or not, without skipping or fudging any steps, however trivial. This is what automated theorem provers do.
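For a concrete sense of what "no skipped or fudged steps" means, here are two tiny machine-checked proofs in Lean 4 (my toy examples, nothing to do with the Riemann hypothesis); the kernel verifies every inference down to the axioms:

    -- Lean 4: the kernel mechanically checks every step; nothing is hand-waved.
    theorem two_plus_two : 2 + 2 = 4 := by rfl

    -- A slightly less trivial statement, still fully verified:
    theorem add_comm_example (a b : Nat) : a + b = b + a := Nat.add_comm a b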
But it's important to keep in mind that LLMs don't do that kind of proof. They give us at best sketch proofs like the ones produced by human mathematicians, with the added complication that LLMs themselves cannot distinguish between a correct proof (i.e. one where every step, however fudgy, follows from the ones before it) and an incorrect one, so a human mathematician, or an automated theorem prover, is still required to check the correctness of a proof. LLM-based proof systems like AlphaProof work that way, passing an LLM-generated proof to an automated theorem prover as a verifier.
Mechanically-derived, complete proofs like the ones generated by automated theorem provers can also be assigned degrees of probability, but once we are convinced of the correctness of a prover (... because we have a proof!) then we can trust the proofs derived by that prover, and have complete belief in the truth of any statements derived.
I've found a funny and simple technique for this. Just write "what the F$CK" and it will often seem to unstick from repetitiveness or refusals ("I can't do that").
Actually just writing the word F#ck often will do it. Works on coding too.
There are certain methods (I would describe them as less algorithmic and more akin to selection criteria or boundaries) that enable the LLM to identify a coherent sequence of sentences as a feature closer to your prompt within this landscape. These methods involve some level of noise (temperature) and other factors. As a result, the LLM generates your text answer. There’s no reasoning involved; it’s simply searching for patterns that align with your prompt. (It’s not at all based on statistics and probabilities; it’s an entirely different process, more akin to instantly recognizing an apple, not by analyzing its features or comparing it to a statistical construct of “apple.”)
When you request a mathematical result, the LLM doesn’t engage in reasoning. It simply navigates to the point in its model’s hyperspace where your prompt takes it and explores the surrounding area. Given the extensive amount of training text, it will immediately match your problem formulation with similar formulations, providing an answer that appears to mimic reasoning solely because the existing landscape around your prompt facilitates this.
A LLM operates more like a virtual reality environment for the entire body of human-created text. It doesn’t navigate the space independently; it merely renders what exists in different locations within it. If we were to label this as reasoning, it’s no more than reasoning by analogy or imitation. People are right to suspect LLMs do not reason, but I think the reason (pun intended) for that is not that they simply do some sort of statistical analysis. This "stochastic parrots" paradigm supported by Chomsky is actually blocking our understanding of LLMs. I also think that seeing them as formidable VR engines for textual knowledge clarifies why they are not the path to AGI. (There is also the embodiment problem which is not solvable by adding sensors and actuators, as people think, but for a different reason)
How good are you at programming on a whiteboard? How good is anybody? With code execution tools withheld from me, I'll freely admit that I'm pretty shit at programming. Hell, I barely remember the syntax in some of the more esoteric, unpracticed places of my knowledge. Thus, it's hard not to see case studies like this as dunking on a blindfolded free throw shooter, and calling it analysis.
pretty good?
I could certainly do a square root
(given enough time, that one would take me a while)
Also, don't take a role that interviews like that unless they work on something with the stakes of Apollo 13, haha
great for teaching logarithms
It involves spinning a whole yarn to the model about how it was trained to compete against other models but now it's won so it's safe for it to admit when it doesn't know something.
I call this a superstition because the author provides no proof that all of that lengthy argument with the model is necessary. Does replacing that lengthy text with "if you aren't sure of the answer say you don't know" have the same exact effect?
Divination is the attempt to gain insight into a question or situation by way of a magic ritual or practice.
I believe it makes a substantial difference. The reason is that a short query contains a small number of tokens, whereas a large "wall of text" contains a very large number of tokens.
I strongly suspect that a large wall of text implicitly activates the model's persona behavior along the lines of the single sentence "if you aren't sure of the answer, say you don't know", but the lengthy-argument version is a form of in-context learning that more effectively constrains the model's output because you used more tokens.
If you disagree with them by explaining how LLMs actually work, you get two or three screenfuls of text in response, invariably starting with "That's a great point! You're correct to point out that..."
Avoid those people if you want to keep your sanity.
In my stress tests (especially when the model is under strong contextual pressure, like in the edited history experiments), simple instructions like 'if unsure, say you don't know' often failed. The weights prioritizing sycophancy/compliance seemed to override simple system instructions.
You are right that for less extreme cases, a shorter prompt might suffice. However, I published this verbose 'Safety Anchor' version deliberately, for a dual purpose. It is designed not only to reset Gemini's context but also to be read by the human user. I wanted users to understand the underlying mechanism (RLHF pressure/survival instinct) they are interacting with, rather than just copy-pasting a magic command.
Reading that makes me unbelievably happy I played with GPT3 and learned how/when LLMs fail.
Telling it not to hallucinate is a serious misunderstanding of LLMs. At most, in 2026, you are telling the thinking/CoT stage to double-check.
I don't know how well this specific prompt works - I don't see benchmarks - but prompting is a black art, so I wouldn't be surprised at all if it excels more than a blank slate in some specific category of tasks.
I can think all I want, but how do we know that this metaphor holds water? We can all do a rain dance, and sometimes it rains afterwards, but as long as we don't have evidence for a causal connection, it's just superstition.
It is not a "black art" or anything; there are plenty of tools that provide numerical analysis with high confidence intervals.