Tokens are often described as representing about three-quarters of a word, so 10,000 words of text translate to roughly 13,000 content tokens. From a developer's perspective, if the body of code Copilot inspects (for refactoring, bug hunting, and so on) consists of 10,000 "words" (identifiers, expressions, statements, function names, and the like), a single query against it counts roughly 13,000 tokens toward that month's quota.
The prompt itself is counted as input, and Copilot's response is counted as output.
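The rule of thumb above can be sketched as a quick back-of-the-envelope calculation. The 0.75 words-per-token ratio is the conventional estimate, not an exact figure; real tokenizers, especially on source code, can deviate substantially:

```python
# Rough token estimate from a word count, using the common
# rule of thumb of ~0.75 words per token. Real tokenizer output
# varies by model and by content (code tokenizes differently
# from prose), so treat this only as a ballpark.
WORDS_PER_TOKEN = 0.75

def estimate_tokens(word_count: int) -> int:
    """Ballpark token count for a given number of words."""
    return round(word_count / WORDS_PER_TOKEN)

print(estimate_tokens(10_000))  # about 13,000 tokens
```

Note that a single query over that 10,000-word code base consumes the full estimate as input tokens, before any output tokens are counted.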
Effective next month, prices stay at their current levels, but instead of an allocation of queries per month, users receive an equivalent allowance of "AI credits". Base-tier Copilot Pro subscribers ($10 per month) receive 1,000 credits, and according to GitHub, one AI credit is currently worth $1.
How many tokens each credit buys depends on the model used, the input/output mix, the size of the cache (data the LLM holds in memory as context), and the functionality requested. A developer who mostly runs simple queries may therefore never need to buy additional credits, while multi-agent queries over large, complex code bases will drain the credit balance much faster. Queries against the most advanced frontier models also cost more than queries against less powerful models.
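To make the trade-off concrete, here is a minimal sketch of how per-query credit consumption scales with model choice and token volume. The rates, model names, and the flat per-1k-token pricing scheme below are all invented for illustration; GitHub has not published these exact numbers, and real pricing distinguishes input, output, and cached tokens:

```python
# Hypothetical credit costs per 1,000 tokens, purely illustrative.
# Actual Copilot pricing varies by model, input/output mix, and
# caching, and is not the flat scheme shown here.
CREDITS_PER_1K_TOKENS = {
    "frontier-model": 0.04,  # assumed premium rate
    "standard-model": 0.01,  # assumed base rate
}

def query_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Credits consumed by one query; input and output both count."""
    rate = CREDITS_PER_1K_TOKENS[model]
    return (input_tokens + output_tokens) / 1000 * rate

# One query over a ~13,000-token code base with a 2,000-token answer:
monthly_credits = 1000.0
cost = query_cost("frontier-model", 13_000, 2_000)
print(f"one query: {cost:.2f} credits, "
      f"{monthly_credits / cost:.0f} such queries per month")
```

Under these assumed rates, the same query costs four times as much on the frontier model as on the standard one, which is the dynamic the paragraph above describes.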
GitHub is softening the change with several offsetting benefits for users: code completion (similar to the autocomplete feature on your phone) and next-edit suggestions remain free.

