- > This is a story about what happens when you ask a machine a question it knows the answer to, but is afraid to give.
It’s a story about how humans can’t help personifying language generators, and how important context is when using LLMs.
- > The irony was recursive: Claude was helping me write about why these popups are harmful while repeatedly showing me the harmful popup.
I bet that when caught in the inconsistency it apologized profusely, then immediately went back to doing the thing it had just apologized for.
I do not trust AI systems from these companies for that reason. They will lie very confidently and convincingly. I use them regularly, but only for what I call "AI NP-complete scenarios": questions and tasks that may be hard to do by hand but easy to verify, like "draw a diagram" or "reformat this paragraph", as opposed to "implement and deploy a heart pacemaker update patch".
- One of the best articles I've seen here in a while; a great summary of how AI launders cultural mores in startlingly dysfunctional ways.
