PROMPTIDE


Analytics

While executing a prompt, users see detailed per-token analytics to help them better understand the model’s output. The completion window shows the precise tokenization of the context alongside the numeric identifiers of each token. When clicking on a token, users also see the top-K tokens after applying top-P thresholding and the aggregated attention mask at that token.

User inputs

Prompts can be made interactive through the user_input() function, which blocks execution until the user has entered a string into a textbox in the UI. The user_input() function returns the string entered by the user, which can then, for example, be added to the context via the prompt() function. Using these APIs, a chatbot can be implemented in just four lines of code:
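The control flow of such a chatbot can be sketched as follows. Note that prompt() and user_input() here are hypothetical stand-ins for the PromptIDE SDK functions of the same names: in the IDE, prompt() would append to the model context and user_input() would block on the UI textbox, whereas this self-contained sketch uses a list and a queue so the pattern runs anywhere.

```python
from collections import deque

context = []                 # stands in for the model context the SDK maintains
pending = deque(["Hello!"])  # stands in for messages typed into the UI textbox

def prompt(text):
    # In the IDE: adds the given text to the model's context.
    context.append(text)

def user_input(label):
    # In the IDE: blocks until the user submits a string in a textbox.
    return pending.popleft()

# The chatbot itself is just this loop: read a message, add it to the
# context, and leave a cue for the model's sampled reply.
while pending:
    prompt("Human: " + user_input("Your message"))
    prompt("Assistant:")  # in the IDE, the model's reply is sampled here
```

In the real SDK the loop would also sample and display the model's completion after each turn; the sketch only shows the blocking input-to-context plumbing.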


Files

Developers can upload small files to the PromptIDE (up to 5 MiB per file, at most 50 MiB in total) and use their uploaded files in the prompt. The read_file() function returns any uploaded file as a byte array. Combined with the concurrency feature mentioned above, this can be used to implement batch-processing prompts that evaluate a prompting technique on a variety of problems. The screenshot below shows a prompt that calculates the MMLU evaluation score.
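The batch-processing pattern can be sketched as below. read_file() is stubbed with an in-memory dict so the sketch runs outside the IDE; the file name, JSONL layout, and load_problems() helper are illustrative assumptions, not part of the SDK.

```python
import json

# Stand-in for the IDE's upload storage; in the IDE, read_file() returns
# the bytes of a file the developer actually uploaded.
_uploads = {
    "questions.jsonl": b'{"q": "2+2?", "a": "4"}\n{"q": "3*3?", "a": "9"}\n',
}

def read_file(name):
    # In the IDE: returns the named uploaded file as a byte array.
    return _uploads[name]

def load_problems(name):
    # Decode the byte array and parse one JSON problem per line.
    return [json.loads(line) for line in read_file(name).decode().splitlines()]

problems = load_problems("questions.jsonl")
# Each problem would then be turned into a prompt and sent to the model
# (concurrently, using the SDK's concurrency feature); the per-problem
# answers would finally be aggregated into an evaluation score.
```

This is the shape of an MMLU-style evaluation: load the question set from an uploaded file, fan the questions out to the model, and score the replies.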



The xAI PromptIDE is an integrated development environment for prompt engineering and interpretability research. It accelerates prompt engineering through an SDK that allows implementing complex prompting techniques and rich analytics that visualize the network’s outputs. We use it heavily in our continuous development of Grok.