Alchemy Usage: Increasing Transparency and Simplicity

Author: Alchemy Team

Reviewed by Brady Werkheiser


Published on January 11, 2021 · 2 min read

Background

Our pricing is based on the concept of “compute units”: a measure of the total computational resources your apps are using on Alchemy. You can think of this as similar to how you would pay Amazon for usage of AWS. Some queries are lightweight and fast to run (e.g. eth_blockNumber) and others can be more intense (e.g. large eth_getLogs queries). We want to give you the lowest cost possible, so instead of charging by the number of queries, we charge only for the amount of compute used.

Challenges

When compute is measured with every request, compute units (and, as a result, monthly usage) can be difficult to predict. Some methods have highly variable intensity (e.g. eth_call), so the number of compute units they consume can vary by as much as 100x. This makes things like optimizing usage or predicting monthly cost more confusing than they should be.

We want to make measuring and predicting usage on Alchemy simple and easy to understand for all of our customers. For this reason, and to create the most developer-friendly platform possible, we have decided to update the way we measure compute units.

An Update to Compute Units

With our new compute unit model, each method is assigned a fixed number of compute units, derived from its average intensity. This means you will know exactly how many compute units a particular call will consume before you make it, which massively increases predictability and transparency.

As an example of how this update improves developer experience, consider a user making 10,000 trace requests:

With the old compute units, the user would make 10,000 trace_call requests without knowing, until after they complete, how many compute units were used. Depending on the blocks, contracts, and timing of the calls, the user may end up using 750,000 or 1,000,000 compute units in total. And even then, the next time they make 10,000 trace_call requests, the outcome may be different.

Using the new model of compute units, if someone made 10,000 trace_call requests at 75 CU per call, this would consume a total of 10,000 × 75 = 750,000 CUs, each and every time.

No guessing, no variability: just a fair and deterministic measure of usage!
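To make this concrete, here is a minimal sketch in TypeScript of how a client could compute its exact CU cost before sending a single request. The trace_call cost of 75 CU comes from the example above; the other method costs here are hypothetical placeholders, so check our docs for the real per-method assignments.

```typescript
// Fixed CU cost per method under the new model. trace_call = 75 is from
// the example above; the other values are hypothetical placeholders.
const COMPUTE_UNITS: Record<string, number> = {
  eth_blockNumber: 10, // hypothetical
  eth_call: 26,        // hypothetical
  trace_call: 75,      // from the example above
};

/** Returns the exact CU cost of a batch of requests, knowable up front. */
function estimateCost(requests: { method: string; count: number }[]): number {
  return requests.reduce((total, { method, count }) => {
    const cu = COMPUTE_UNITS[method];
    if (cu === undefined) throw new Error(`Unknown method: ${method}`);
    return total + cu * count;
  }, 0);
}

// 10,000 trace_call requests always cost 10,000 * 75 = 750,000 CUs.
console.log(estimateCost([{ method: "trace_call", count: 10_000 }])); // 750000
```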

For a full list of CU assignments by method, determined from years of benchmarked usage, see our docs.

Better Rate Limits

Currently, we use the concept of queries per second, or QPS, as a primary rate limit. While this is a pretty typical way for APIs to rate limit, it’s a relatively blunt tool. This means we are often forced to err on the side of caution and be stricter than necessary regarding rate limits.

With the updated definition of compute units, we are able to replace QPS with a more favorable measure: compute units per second, or CUPS. 

CUPS rate limits are effectively a weighted version of QPS, where the weight for each method is its compute intensity. This limit ends up being more accurate because it accounts for variability in requests and the types of methods being called. As a result, we can provide an even more reliable overall service with limits that are just as generous (or, in many cases, more generous) for all of our customers.
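For intuition, here is a minimal sketch of a CUPS-style limiter as a token bucket that refills in compute units per second, where each request drains its method’s CU cost rather than a flat count of 1 (which is all a plain QPS limit measures). The limit and costs shown are illustrative assumptions, not Alchemy’s actual numbers.

```typescript
// A token bucket measured in compute units: refills at `cupsLimit` CU per
// second, and each request drains its own CU cost. Heavier methods use up
// the budget faster, so the limit tracks real load instead of raw counts.
class CupsLimiter {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(private readonly cupsLimit: number) {
    this.tokens = cupsLimit; // start with a full one-second budget
  }

  /** Try to admit a request costing `cu` compute units. */
  tryAcquire(cu: number): boolean {
    const now = Date.now();
    // Refill proportionally to elapsed time, capped at one second's budget.
    this.tokens = Math.min(
      this.cupsLimit,
      this.tokens + ((now - this.lastRefill) / 1000) * this.cupsLimit
    );
    this.lastRefill = now;
    if (this.tokens < cu) return false; // over the CUPS limit: back off
    this.tokens -= cu;
    return true;
  }
}

// Cheap calls fit many per second; heavy calls are throttled sooner.
const limiter = new CupsLimiter(330); // illustrative CUPS limit
limiter.tryAcquire(10); // an eth_blockNumber-sized request
limiter.tryAcquire(75); // a trace_call-sized request
```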

Summary

Overall, we think this change will be a huge improvement for our customers. We always take your feedback seriously, and strive to make the developer experience the best it can be so that we can all focus on creating the best possible products.

Please don’t hesitate to reach out with any questions or requests for clarification; we are happy to help.
