Is JSON the Future of LLM Prompting or Just Noise?
A recent discussion has emerged around using JSON-formatted prompts to improve the performance of large language models (LLMs). Social media posts have been buzzing with claims about the benefits of JSON, suggesting that anyone not using it is missing a crucial ingredient for success. However, some influential voices in the AI community are pushing back on these assertions. Noah MacCallum of OpenAI’s applied AI team expressed his frustration, stating flatly that “JSON prompting isn’t better” and adding that he finds it disturbing that these ideas are gaining traction. He recommends Markdown and XML instead, citing their efficiency and their relevance to model performance.
MacCallum’s critique is echoed by other developers. Jared Zoneraich, the founder of PromptLayer, argues that JSON can steer models into a narrowly technical mindset instead of fostering creativity, which can diminish effectiveness on open-ended tasks. He prefers Markdown for its flexibility, reserving XML for the more structured parts of a prompt, such as attachments. The disagreement is part of a broader debate over the most effective ways to structure prompts for LLMs.
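To make the contrast concrete, here is a minimal sketch of the two styles under discussion, using an invented summarization task; the attachment name, field names, and content are hypothetical, not drawn from any of the sources quoted above.

```python
import json

# Style Zoneraich describes: Markdown for the free-form instruction,
# XML tags to fence off the structured part (an attached document).
markdown_xml_prompt = """\
Summarize the attached report in three bullet points, keeping a neutral tone.

<attachment name="q3_report.txt">
Revenue grew 12% quarter over quarter, driven largely by the new API tier...
</attachment>
"""

# The JSON-heavy alternative he cautions against: the same request forced
# into a rigid schema, which can nudge the model into a narrow, technical mode.
json_prompt = json.dumps(
    {
        "task": "summarize",
        "format": "three bullet points",
        "tone": "neutral",
        "attachment": {
            "name": "q3_report.txt",
            "content": "Revenue grew 12% quarter over quarter, driven largely by the new API tier...",
        },
    },
    indent=2,
)

print(markdown_xml_prompt)
print(json_prompt)
```

Both prompts carry the same information; the debate is about which packaging the model handles better.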
John Leimgruber, an expert in LLM quantization, simplifies the conversation even further by advocating plain, well-formatted Markdown, arguing there is no compelling reason to deviate from it because it keeps prompts simple and readable. OpenAI’s official documentation leans the same way, favoring clean text over structured formats like JSON or XML. For instance, a recent look at ChatGPT’s internal systems revealed that the prompts behind ‘Study Mode’ are plain text, underscoring the trend toward simplicity.
Despite the criticism, JSON isn’t without its advantages. Kirk Kaiser, a developer and author, points out that JSON can be useful for structuring complex prompts. Nikunj K. from FPV Ventures likewise acknowledges that for certain tasks, explicitly structured prompts can yield better results than a free-form approach. As Kaiser aptly summarizes, every LLM exhibits its own quirks and unpredictable outputs, and the most effective method often hinges on the specific model, prompt, and intended outcome.
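As a hedged illustration of that counterpoint, the sketch below shows the kind of many-constraint prompt where explicit JSON structure can help; the product record and field names are invented for the example, not taken from Kaiser or Nikunj K.

```python
import json

# Hypothetical scenario: a prompt template filled programmatically for many
# product records. With this many distinct constraints, JSON keeps each field
# unambiguous and easy to generate from code.
record = {
    "task": "write_product_description",
    "product_name": "TrailLite 2 Tent",
    "audience": "first-time backpackers",
    "max_words": 120,
    "must_mention": ["weight", "setup time", "price tier"],
    "banned_phrases": ["best in class", "game changer"],
}

prompt = (
    "Follow every field in the JSON spec below when writing the description.\n\n"
    + json.dumps(record, indent=2)
)
print(prompt)
```

In cases like this, the structure is doing real work: it is less about pleasing the model than about keeping a machine-generated prompt consistent across thousands of records.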
The practical takeaway is to match the prompt format to the model, the task, and the intended output rather than treating any single convention as a universal best practice.
