
All For One & One For All

Abstract: This is our fifth look at Bitcoin’s OP_Return policy limit drama. This time we look at some of the game theory involved and attempt to explain some of the recent tension in the discussion. A more relaxed OP_Return limit appears to be either beneficial or neutral, in all scenarios, to the individual node operators who choose to run the looser policy, because Compact Blocks and the caching of transaction validity checks work more effectively. However, proponents of filters argue that a stricter OP_Return limit may benefit the wider ecosystem. This creates an inherent conflict between deterring “spam” and individual user sovereignty, with proponents of filters asking other node runners to potentially degrade the performance of their own nodes, at their own expense, to apparently benefit the wider ecosystem.

Image from Disney’s 1993 movie “The Three Musketeers”

Overview

BitMEX Research is at it again: this is our fifth piece on the OP_Return relay policy drama.

  1. The OP_Return Wars of 2014 – Dapps Vs Bitcoin Transactions
  2. Removing Bitcoin’s Guardrails
  3. Unstoppable JPGs In Private Keys
  4. Ordinals – Impact On Node Runners

In this piece, we look at some of the different perspectives and why people see the situation differently. One key area of disagreement appears to be whether one runs a node purely for their own benefit or altruistically, to try and benefit the network. One can look at this problem as “One For All” or “One For Me”.

More Than Just Compact Blocks

One key reason some people want to increase the OP_Return relay limit is to keep Compact Blocks working effectively. Compact Blocks was proposed by Bitcoin developer Matt Corallo in April 2016. To work well, Compact Blocks requires that the local mempool does a reasonable job of predicting what miners will mine in the next block. Please note that this absolutely does not mean Compact Blocks requires a “unified mempool” across the network; individual nodes can of course have different mempools. However, if you want Compact Blocks to work for you, you need a reasonably good model of what miners will mine, and therefore you may need an increased OP_Return limit locally. It doesn’t matter too much what other node operators do.

Compact Blocks makes blocks propagate faster across the network, since nodes already have the blocks’ transactions in their mempools. With Compact Blocks, nodes can avoid downloading transactions twice, by calculating which transactions are likely to be in the block and reconstructing the block locally. This has several advantages, including lowering bandwidth costs and making Bitcoin’s bandwidth usage patterns harder for ISPs to detect. Another key advantage is for the competitiveness of the mining sector: faster block propagation. Slow block propagation could advantage larger miners over smaller miners. Therefore, Compact Blocks is critical in the battle against mining centralisation pressure.
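To make the reconstruction idea concrete, here is a minimal Python sketch. It is illustrative only: real BIP 152 derives 6-byte short IDs with salted SipHash-2-4 and involves more protocol machinery, whereas we stand in truncated salted SHA256, and the function names are our own.

```python
import hashlib

def short_id(txid: bytes, salt: bytes) -> bytes:
    # BIP 152 uses salted SipHash-2-4 for 6-byte short IDs;
    # truncated salted SHA256 is a stand-in for illustration.
    return hashlib.sha256(salt + txid).digest()[:6]

def reconstruct_block(compact_short_ids, mempool_txids, salt):
    """Match a compact block's short IDs against the local mempool.

    Returns (matched txids, indexes of transactions we still need to
    download). Short ID collisions are ignored for simplicity.
    """
    by_short_id = {short_id(txid, salt): txid for txid in mempool_txids}
    matched, missing = [], []
    for i, sid in enumerate(compact_short_ids):
        txid = by_short_id.get(sid)
        if txid is not None:
            matched.append(txid)   # already held locally: no re-download
        else:
            missing.append(i)      # must be requested from the peer
    return matched, missing
```

The fewer entries end up in missing, the less extra bandwidth and latency the node incurs. A mempool that filtered out transactions miners actually included forces an additional round trip for every miss.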

However, Compact Blocks is only one example. There are many reasons node operators may want to model what miners will mine. For example, one can use the mempool to predict what fee rates to pay, and other validation checks are faster or more effective when the mempool models the next block well.
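As a rough illustration of the fee rate point, the sketch below greedily fills one block’s worth of space from a local mempool snapshot to estimate the marginal feerate of the next block. The (feerate, vsize) tuple format and the function name are our own assumptions; real estimators must also handle ancestor packages and historical confirmation data.

```python
def next_block_marginal_feerate(mempool, block_vsize_limit=1_000_000):
    """Greedy estimate of the lowest feerate likely to make the next block.

    mempool: iterable of (feerate_sat_per_vb, vsize) pairs.
    Ignores package/ancestor effects for simplicity.
    """
    used, marginal = 0, None
    for feerate, vsize in sorted(mempool, key=lambda t: t[0], reverse=True):
        if used + vsize > block_vsize_limit:
            break
        used += vsize
        marginal = feerate   # feerate of the last transaction that fits
    return marginal

# e.g. next_block_marginal_feerate([(50, 400), (20, 600), (3, 900)])
```

If the local mempool rejects transactions miners will actually mine, this kind of estimate skews low and the node risks paying uncompetitive fees.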

Validating signatures as transactions enter the mempool, and caching the result, is another key feature that makes block validation faster, and it is distinct from Compact Blocks. When a node has a transaction in its mempool, checks are obviously done to ensure the transaction is valid. These validation results are then cached, and if the transaction is included in a block, the signature is not checked again, making block validation much faster. This feature was included in Bitcoin Core version 0.7.0, which was released in September 2012, well before Compact Blocks.

Cache signature verifications, to eliminate redundant signature checks

Source: https://bitcoin.org/en/release/v0.7.0

We have seen material suggesting that pre-validation signature checks were originally Satoshi’s idea and that Satoshi implemented a patch to do this in early 2011.

Pre-validation of signatures is another important feature that could be significantly degraded if user mempools do not effectively model what miners are likely to mine. There are also other checks, such as validating the entire transaction, which where possible are not done twice: they are done when the transaction enters the mempool and the results are cached. An ineffective mempool, which does not seek to model what will be mined, would mean these efficiencies may be degraded. A key point to appreciate is that a node runner does not want to run all these checks on an unconfirmed transaction that is unlikely to get mined, because this could be a waste of resources and a DoS vulnerability. This is why it’s important that a node’s mempool attempts to predict what miners are going to mine. The mempool is not really there to “nudge” miners. Therefore, there is a robust underpinning to Greg Maxwell’s assertion that the purpose of the mempool is to “model what will get mined”. This understanding seems to predate Compact Blocks by a significant number of years.
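A minimal sketch of the caching idea follows. Bitcoin Core’s actual signature and script caches are keyed per signature and per script execution; purely for illustration, we key a single cache on a transaction’s wtxid and assume hypothetical tx and block objects.

```python
class ValidationCache:
    """Cache script/signature check results at mempool acceptance so
    that block connection can skip re-verifying the same transactions."""

    def __init__(self):
        self._valid = set()  # wtxids whose scripts have already verified

    def accept_to_mempool(self, tx, verify_scripts) -> bool:
        if verify_scripts(tx):          # expensive check: run once here...
            self._valid.add(tx.wtxid)
            return True
        return False

    def connect_block(self, block, verify_scripts) -> bool:
        for tx in block.transactions:
            if tx.wtxid in self._valid:
                continue                # ...and skipped here
            if not verify_scripts(tx):  # only never-seen txs pay the cost
                return False
        return True
```

The benefit scales with how many block transactions were already in the mempool: if the mempool rejected transactions a miner included, every one of them must be fully verified during block connection instead.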

One For All And All For Me

Bitcoin Core version 30, which is due to be released any moment now, increases the OP_Return relay limit from 83 bytes to 100,000 bytes. This is an increase of over 1,200x. How can one justify such a massive increase? Wouldn’t a moderate increase to 160 bytes be more appropriate? There is no evidence anyone wants to use 100,000 bytes anyway, as putting this much data in the Taproot witness is far cheaper.

Actually, when one thinks it through logically, purely from the limited perspective of the operating performance of an individual node runner who runs Core v30, the decision is clear: an increase to 100,000 bytes is the strictly better outcome.
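For context, the relay limit is a standardness (policy) check, not a consensus rule. A simplified sketch of such a check, written in our own words rather than Bitcoin Core’s actual C++ policy code, might look like this:

```python
OP_RETURN = 0x6a

def is_standard_datacarrier(script_pubkey: bytes,
                            max_script_size: int = 83) -> bool:
    """Simplified policy check: is this OP_RETURN output within the
    relay limit? The pre-v30 default scriptPubKey budget was 83 bytes
    (80 bytes of data plus opcode/push overhead); v30 raises the
    effective limit to 100,000 bytes.
    """
    if not script_pubkey or script_pubkey[0] != OP_RETURN:
        return True   # not a datacarrier output; other checks apply
    return len(script_pubkey) <= max_script_size
```

A node failing this check simply declines to accept and relay the transaction; miners remain free to include it in a block regardless.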

How to justify a 100KB OP_Return relay limit

Scenario: Adoption of large OP_Returns is minimal
Impact: The performance of Compact Blocks is not significantly impacted. However, as usage of large OP_Returns is minimal, there are also no significant downsides. The impact is neutral.

Scenario: There is significant usage of large OP_Returns
Impact: The performance of Compact Blocks and other checks is significantly degraded, unless one accepts large OP_Return transactions into one’s mempool. The impact is positive.

Case closed! Bitcoin Core is correct! Since there are no DoS issues with larger OP_Returns, the higher OP_Return limit is strictly better than or equal to the lower limit for your node, in all possible scenarios and for any given level of adoption of large OP_Returns. It might not be much of a benefit, but it is, strictly speaking, a benefit. It is as simple as that. Or is it?
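In game theory terms, this is a weak dominance argument. The toy sketch below encodes the table above with assumed ordinal payoffs (the numbers are illustrative, not measurements) and checks that the loose policy never does worse:

```python
# Ordinal payoffs to an individual node operator (higher = better),
# assumed purely for illustration.
payoffs = {
    #  (policy, large-OP_Return adoption): operator payoff
    ("strict", "minimal"):     0,   # neutral either way
    ("loose",  "minimal"):     0,
    ("strict", "significant"): -1,  # degraded Compact Blocks and caches
    ("loose",  "significant"): 0,   # mempool still models blocks well
}

for adoption in ("minimal", "significant"):
    assert payoffs[("loose", adoption)] >= payoffs[("strict", adoption)]
# "loose" weakly dominates "strict" from the individual's perspective.
```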

All For One & One For All

It seems that the pro-filter side of this discussion does not see the filters purely from the perspective of what is best for the person running them. They seem to look beyond that, at the wider game theory. They look at it from a more collectivist perspective: what the wider impact of mempool and relay policy on the network could be, if large sections of node operators behave in certain ways.

This is where things can get a bit fuzzy and complicated. Most people run a node for their own benefit, not because they are trying to nudge the network to behave in a certain way. If one is trying to nudge the network, the impact of an individual node may be minimal anyway. And if one is trying to nudge the network, it’s possible one is running a node for altruistic reasons. Indeed, all relay can be considered largely altruistic anyway. It’s the complexity of reasoning about this altruism that can lead to subtle nuances and differences of opinion. In contrast, when considering only the benefits to individual node runners themselves, one can draw a clearer conclusion: the OP_Return relay limit should be increased.

The default setting in Bitcoin Core is not a parameter for one individual. Many users may adopt the default value, and it could therefore impact network conditions. There is a philosophical idea that Bitcoin users are self-sovereign and that Core developers should therefore write software that benefits each individual user. Mechanisms should be designed with a bottom-up approach, where the Bitcoin network functions well when each individual user acts only in their own self-interest. However, maybe this is a bit idealistic and we are not 100% there yet. Bitcoin isn’t perfect. Therefore, Bitcoin Core developers perhaps need to adopt a balanced approach and occasionally adopt default network policies that benefit the network but don’t necessarily benefit each individual running a node. Even though increasing the relay limit benefits individual node operators, Bitcoin Core could keep the old default in place, hoping it deters “spam”, which could benefit those wanting to use Bitcoin for cheap financial transactions.

Large OP_Returns Already Reliably Relay

On the other hand, proponents of a large OP_Return relay limit could argue that miners already reliably receive and mine large OP_Return outputs anyway, via data received over the open relay network. This appears to be a relatively new development; prior to 2025, large OP_Returns had to go via direct miner submission. This change in network dynamics happened before Bitcoin Core increased the OP_Return relay limit. Therefore, it can sometimes be tricky to ascertain how the opponents of raising the OP_Return limit have a rational point of view, even when the nuanced reasoning associated with altruistic node operation is considered. There seems to be a strong case in favour of increasing the OP_Return relay limit, since large OP_Returns already reliably get relayed. This may be the crucial litmus test in this case. However, the pro-filter side may want to reverse the change in network dynamics that recently occurred and somehow prevent the reliable relay of large OP_Returns. Bitcoin Core increasing the relay limit makes achieving this reversal more challenging.

Technical Solution

Part of the apparent dilemma, that the pro-filter side is asking node operators to weaken their own node performance, could actually be largely mitigated with technology. One could build a new node option, where the node doesn’t relay anything (or relays with filters), but still maintains a mempool with loose policies, conducts all the pre-block validation checks and participates in Compact Blocks reconstruction effectively.

This technology does exist in Bitcoin Core to a very limited extent, in the form of the -blockreconstructionextratxn feature. This is an extra pool of stored transactions that works for Compact Blocks, but the transactions are not relayed and are not in the mempool. This area is more relevant to Bitcoin Knots. Our understanding is that the default limit on the number of these transactions in Bitcoin Core is 100, while for Knots it is 32,768, with 10MB of space available in Knots for them. Bitcoin Knots also has a 100KB per-transaction size limit for this “second mempool”, which matches the Bitcoin Core transaction relay policy limit. Our understanding is that this feature is disabled by default for both Bitcoin Core and Bitcoin Knots. At the moment, this feature has significant weaknesses compared to the main mempool: a first-in-first-out eviction policy, no check that the transactions are even valid, and no help with pre-block signature validation. Therefore this system still imposes significant costs on node runners if they rely on it. It also has DoS vulnerabilities if the limits are set too large. This is only really a makeshift solution, not a robust system designed to work well with transaction filtering. However, this could be an area of development the pro-filter side could work on improving.
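To make the described mechanism concrete, here is a rough Python sketch of such an extra-transactions buffer, with the FIFO eviction and per-transaction size limit mentioned above. The class and method names are ours, and it deliberately reproduces the weaknesses described (no validity checking, no feerate-based eviction):

```python
from collections import OrderedDict

class ExtraTxnPool:
    """Sketch of a -blockreconstructionextratxn style buffer:
    transactions rejected by policy are kept around purely to help
    rebuild compact blocks. FIFO eviction, no validity check,
    never relayed."""

    def __init__(self, max_count: int = 100, max_tx_bytes: int = 100_000):
        self.max_count = max_count
        self.max_tx_bytes = max_tx_bytes
        self._pool = OrderedDict()   # wtxid -> raw tx, insertion order

    def add(self, wtxid: bytes, raw_tx: bytes) -> None:
        if len(raw_tx) > self.max_tx_bytes:
            return                   # over the per-transaction size limit
        self._pool[wtxid] = raw_tx   # stored without any validation
        if len(self._pool) > self.max_count:
            self._pool.popitem(last=False)   # FIFO: evict the oldest

    def get(self, wtxid: bytes):
        return self._pool.get(wtxid)
```

A more robust design might evict by feerate, validate on entry and feed the signature cache, which is roughly the improvement path suggested above.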

If this technology is implemented and widely adopted, with filters achieving mass adoption (e.g. over 98% of network relay nodes), then we have the issue that Compact Blocks and other checks could stop working if large OP_Return transactions become popular. Then those who care about mining centralisation pressure and oppose the filtering of transactions with economic demand could be the side that is dissatisfied. This is when the “anti-filter” camp may need to engage in altruistic behaviour of their own, as the filter war heats up. They could run more relay nodes with a loose relay policy. It is this anti-filter camp that would have the asymmetric cost advantage, in that for every node they spin up, the pro-filter side would need to spin up many more nodes. However, our point is that if the filter side wants to fight this war, it might be advantageous for them to fix these technical issues first, before engaging in battle. On the other hand, the pro-filter side might not actually care about the degradation of these performance metrics, metrics which are not important to them; they may just want people to join the anti-spam fight anyway, to their own potential detriment, in the form of degraded node performance. Admittedly, potentially only a small detriment.
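One way to build intuition for this asymmetry is a toy percolation model: treat relay as a random peer graph and ask whether a large OP_Return transaction can travel from a random non-filtering node to a miner through non-filtering peers only. All parameters below are illustrative assumptions, and the model ignores that anti-filter nodes could deliberately peer with one another, which would make relay more robust than random peering suggests.

```python
import random
from collections import deque

def relay_succeeds(n_nodes=2000, degree=8, filter_fraction=0.98, seed=None):
    """Toy percolation model: does a tx from a random non-filtering
    node reach a miner through non-filtering peers only?"""
    rng = random.Random(seed)
    filtering = [rng.random() < filter_fraction for _ in range(n_nodes)]
    peers = [set() for _ in range(n_nodes)]
    for a in range(n_nodes):                 # build a random peer graph
        for b in rng.sample(range(n_nodes), degree):
            if a != b:
                peers[a].add(b)
                peers[b].add(a)
    open_nodes = [i for i in range(n_nodes) if not filtering[i]]
    if len(open_nodes) < 2:
        return False
    source, miner = rng.sample(open_nodes, 2)
    seen, queue = {source}, deque([source])
    while queue:                             # BFS over non-filtering nodes
        node = queue.popleft()
        if node == miner:
            return True
        for nxt in peers[node]:
            if nxt not in seen and not filtering[nxt]:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Estimate the success rate at a given filtering fraction, e.g.:
# sum(relay_succeeds(seed=s) for s in range(100))
```

Running this across many seeds and filtering fractions gives a feel for how quickly relay recovers as open nodes are added; deliberate peering among the open nodes, or direct miner submission, would push success rates higher still.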

Conclusion

The key potential problem for the pro-filter side is that they may not be able to run the node they want and be satisfied. For the filters to work adequately, in the way they intend, they need to persuade other node operators to run the filters, filters which may benefit the network but might be strictly against the selfish interests of the people who run them. They are asking people to incur certain costs, perhaps small costs, to be altruistic: “One for All”. This can put the pro-filter side in a challenging position, requiring them to make moral arguments. It’s an uphill battle and can result in a degree of tension in the community. However, perhaps some on the pro-filter side may not be bothered by this; they may just want to engage in an anti-spam battle, for the sake of it, without a clear winning strategy.

On the other hand, even if the technical issues with Compact Blocks and pre-validation checks are fixed, those who oppose the relay filters can simply not run the filters and remain happy. The anti-filter side does not need to worry about the relay policies of other node operators. Compact Blocks and other pre-validation checks should work fine, even if the overwhelming majority (perhaps over 90%) of node operators run filters. The anti-filter side can make an argument that appeals to the individual sovereignty of users: anyone can run whatever filters they like. In our view, it is this take that is likely to prove compelling in the end.
