UNO Commitments

ryanc
Posts: 147
Joined: Wed Dec 18, 2013 8:10 pm
os: linux

Re: UNO Commitments

Post by ryanc »

This is awesome. The only comment I have is that we should consider our choice of how many branches are possible at each node. Put another way, it may be prudent to build the trie at a finer granularity than byte boundaries.

If we use byte boundaries, it is possible for an attacker to make the proof for a given name take HASHLEN*256*NAMELEN bytes of data. For d/wikileaks this works out to 90,112 bytes of overhead for 256-bit hashes, or 56,320 bytes for 160-bit hashes. The best case would be 352 or 220 bytes for 256-bit or 160-bit hashes respectively.

If we use nybble boundaries, it works out to 11,264/7,040 worst case and 704/440 best case. In other words, we reduce the worst case by a factor of 8 but only increase the best case by a factor of 2. I think this is probably about the right trade-off. We could halve the worst case while doubling the best case once more, but given that an attacker would have to buy quite a few names to pull this off, I don't think it's worth it. Using radix trees probably improves the best case further.

These numbers assume no other overhead, which is not strictly true, but they should be a good ballpark.
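
For reference, here's the arithmetic behind those numbers as a quick sketch (same back-of-the-envelope constants as above, no per-node overhead counted):

#include <cstdio>
#include <initializer_list>

// Worst case: every level along the path is fully populated, so the proof
// carries up to "branching" hashes per level. Best case: one hash per level.
int main()
{
    const int nameLen = 11;              // "d/wikileaks"
    const int hashLens[] = {32, 20};     // 256-bit and 160-bit hashes

    for (int bitsPerLevel : {8, 4})      // byte vs nybble boundaries
    {
        const int branching = 1 << bitsPerLevel;
        const int depth = nameLen * 8 / bitsPerLevel;
        for (int hashLen : hashLens)
            std::printf("bits=%d hash=%d: worst=%d best=%d\n",
                        bitsPerLevel, hashLen,
                        hashLen * branching * depth,   // fully populated levels
                        hashLen * depth);              // single child per level
    }
    return 0;
}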

Tangentially, I think it may be possible to build a thin client that does not need to remain connected to the network - it could connect to some nodes, request only block headers from its peers, and then, once it's caught up, fetch the UNO proof. It could even support requesting the UNO proof plus block headers (from the last block it has as a reference) from multiple nodes, with the most work winning. Based on the estimates here, it should be a few hundred KB of data per day that the light client has been inactive. Given that an average web page weighs in at almost 2 MB, this is not too bad.

Depth of data to trust is an issue with the block headers + UNO validation model. An attacker has a pretty good chance of occasionally mining a block with a false UNO root hash. A trust depth can be chosen, but then updates are delayed until that block depth is reached. The intervening transactions could be provided by the node being queried, but a dishonest node could simply say they don't exist. A second UNO proof from the latest block, plus all the intervening transactions, still allows a dishonest node to claim there are no updates, but only if it can mine a fake proof. We need to model this, but I think a good balance is possible.
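
As a very rough starting point for that model: an attacker with a fraction q of the total hashrate mines d blocks in a row with probability q^d (ignoring the competing honest chain, which only makes things harder for them), so the trust depth buys roughly exponential protection. Quick illustration, with q assumed to be 10%:

#include <cmath>
#include <cstdio>

int main()
{
    const double q = 0.10;               // assumed attacker hashrate share
    for (int d = 1; d <= 6; ++d)
        std::printf("trust depth %d: ~%.2g\n", d, std::pow(q, d));
    return 0;
}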

domob
Posts: 1129
Joined: Mon Jun 24, 2013 11:27 am

Re: UNO Commitments

Post by domob »

Ryan, these are good points! We should discuss this before we eventually merge the code. For now, I'll stick to byte boundaries - but it can be changed later if we think it should be.

I've now implemented dynamic updating of the trie in memory when blocks arrive, although no hashes are cached yet, so it still takes around 10 seconds to compute the root hash. This is easy to fix as the next step. Then it should be possible to compute the root hash quickly for every new block if the user is willing to keep the trie in memory all the time (with a new command-line option). Computing the root hash from LevelDB alone will be my step after that.
BTC: 1domobKsPZ5cWk2kXssD8p8ES1qffGUCm | NMC: NCdomobcmcmVdxC5yxMitojQ4tvAtv99pY
BM-GtQnWM3vcdorfqpKXsmfHQ4rVYPG5pKS
Use your Namecoin identity as OpenID: https://nameid.org/

ryanc
Posts: 147
Joined: Wed Dec 18, 2013 8:10 pm
os: linux

Re: UNO Commitments

Post by ryanc »

Byte boundaries are good for prototyping, probably not too hard to change. Perhaps there is some other way that branches can be limited to address extreme cases. I like the idea of the commitment being for the previous block's UNO set for efficiency.

domob
Posts: 1129
Joined: Mon Jun 24, 2013 11:27 am

Re: UNO Commitments

Post by domob »

Some more updates: Keeping the trie in memory and updating it when new blocks arrive works now. The hash of every subtree is cached, so that only changed branches have to be recomputed.
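
Roughly, the caching works like the following simplified sketch (this is not the actual CUnoTrie layout, and std::hash just stands in for the real serialise-and-hash step): each node keeps its subtree hash plus a dirty flag, and an update marks only the path to the changed name as dirty.

#include <cstddef>
#include <functional>
#include <map>
#include <memory>
#include <string>

struct TrieNode
{
    std::map<char, std::unique_ptr<TrieNode>> children;
    std::string value;               // serialised name data, if a name ends here
    std::size_t cachedHash = 0;
    bool dirty = true;

    void Update(const std::string& name, std::size_t pos, const std::string& data)
    {
        dirty = true;                // invalidate only along the path to the name
        if (pos == name.size()) { value = data; return; }
        auto& child = children[name[pos]];
        if (!child) child.reset(new TrieNode());
        child->Update(name, pos + 1, data);
    }

    std::size_t GetHash()
    {
        if (!dirty)
            return cachedHash;       // untouched subtrees cost nothing
        std::string blob = value;
        for (auto& c : children)
        {
            blob += c.first;
            blob += std::to_string(c.second->GetHash());
        }
        cachedHash = std::hash<std::string>()(blob);
        dirty = false;
        return cachedHash;
    }
};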

I've done a benchmark run with this (i.e., with the trie always in memory) and "-reindex", computing the root hash for every newly connected block. (This basically simulates what a full node would have been required to do if UNO hashes had been enforced for all past blocks.) All trie updates and hash computations together took around 200 seconds on my machine for the entire blockchain. This is around 3.7% of the time the -reindex took in total. To me, this seems like a totally affordable performance cost for the feature.

On startup, the trie is initially built from the name db. This takes 25 seconds for me (including a computation of the root hash so that all subtrees have their hashes cached). The process with in-memory trie uses 370 MiB instead of around 100 MiB without it (according to "top").

If anyone is interested, feel free to experiment with my "uno-trie" branch. Proper testing (unit tests and updates to the regtests) is still missing, as is the planned hash computation without in-memory trie.
BTC: 1domobKsPZ5cWk2kXssD8p8ES1qffGUCm | NMC: NCdomobcmcmVdxC5yxMitojQ4tvAtv99pY
BM-GtQnWM3vcdorfqpKXsmfHQ4rVYPG5pKS
Use your Namecoin identity as OpenID: https://nameid.org/

ryanc
Posts: 147
Joined: Wed Dec 18, 2013 8:10 pm
os: linux

Re: UNO Commitments

Post by ryanc »

Have you thought at all about wire format for proofs? Would be interesting to do some testing. Any opinions on Hash160 vs Hash256 vs SHA256? I somewhat favor Hash160 to keep sizes down, but this is not a strong opinion.

If radix trees are being used, we could do something like this for each level:

[uint16] number of child nodes, range of 1 to 257 (yes, 257, not 256)
list (each prefix) {
    [uint8] number of characters in prefix, 0 characters means the hash is for the transaction
    [characters (variable length)] prefix
    [hash]
}

then the full transaction for the name requested at the very end after all the levels.
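
In C++-ish terms, one level could look roughly like this (field names made up, just to make the proposal concrete - not anything that's implemented):

#include <cstdint>
#include <string>
#include <vector>

// One level of the proposed proof format, as sketched above.
struct ProofChild
{
    uint8_t prefixLen;           // 0 means the hash commits to the transaction itself
    std::string prefix;          // prefixLen characters
    std::vector<uint8_t> hash;   // Hash160 / Hash256 / SHA256, whichever is chosen
};

struct ProofLevel
{
    uint16_t childCount;         // 1..257: up to 256 prefixes plus the value hash
    std::vector<ProofChild> children;
};

// A full proof would then be the list of levels along the path, followed by
// the complete transaction for the requested name.
struct UnoProof
{
    std::vector<ProofLevel> levels;
    std::vector<uint8_t> nameTx;
};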

Would love it if, when you have time, you could document how the structures are serialized for hashing.

domob
Posts: 1129
Joined: Mon Jun 24, 2013 11:27 am

Re: UNO Commitments

Post by domob »

The CUnoTrie class (representing a node plus its subtree) has standard serialisation / unserialisation methods - this defines the wire protocol (although for full (sub)trees, not proofs). For hashing, the same format is used with child trees replaced by their hashes - this could also be used as the wire protocol for proofs. (But the latter is not yet implemented, since it is not used anywhere so far and not necessary for a "plain" full node.)

Hashing is done in a "non-radix way". Consequently, the extra complexity of implementing radix tries is not part of the consensus protocol. Instead, it is an (optional) implementation detail that optimises memory usage. I think this choice makes sense to ease implementation of verifiers and alternative clients, although it (slightly) increases the computations necessary for calculating the root hash.
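
Just to illustrate what I mean by "non-radix" (a rough sketch only - this is not the actual serialisation format, and std::hash stands in for the real hash): a node that stores a compressed prefix in memory is hashed as if it were a chain of single-character nodes, so verifiers never have to know about the radix optimisation.

#include <cstddef>
#include <functional>
#include <map>
#include <memory>
#include <string>

struct Node
{
    std::string prefix;                              // radix compression, in-memory only
    std::map<char, std::unique_ptr<Node>> children;
    std::string value;

    std::size_t Hash() const
    {
        std::string blob = value;
        for (const auto& c : children)
        {
            blob += c.first;
            blob += std::to_string(c.second->Hash());
        }
        std::size_t h = std::hash<std::string>()(blob);
        // Unroll the compressed prefix into one hashing step per character, as
        // if the prefix were a chain of plain single-child nodes.
        for (auto it = prefix.rbegin(); it != prefix.rend(); ++it)
            h = std::hash<std::string>()(std::string(1, *it) + std::to_string(h));
        return h;
    }
};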

So far, I've not documented any details - but if you are interested in them, you can look at the code.
BTC: 1domobKsPZ5cWk2kXssD8p8ES1qffGUCm | NMC: NCdomobcmcmVdxC5yxMitojQ4tvAtv99pY
BM-GtQnWM3vcdorfqpKXsmfHQ4rVYPG5pKS
Use your Namecoin identity as OpenID: https://nameid.org/

ryanc
Posts: 147
Joined: Wed Dec 18, 2013 8:10 pm
os: linux

Re: UNO Commitments

Post by ryanc »


domob
Posts: 1129
Joined: Mon Jun 24, 2013 11:27 am

Re: UNO Commitments

Post by domob »

I've not fully read it; I was just briefly aware of its existence. Do you know whether Bitcoin is still actively working on that? I heard that they have abandoned the original proposal maaku was tasked with as a "bad idea".

Very roughly speaking, it looks quite similar to my own implementation - except that they use bits for branching. This is what I suggest we do as well once the prototyping is done. (It simplifies the nodes' maps quite a bit if you just have left/right children.)
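
For illustration, with bit-level branching a node would only need something like this (rough sketch, not code from my branch):

#include <memory>
#include <string>

// Instead of a map over up to 256 byte values, each node just has two
// optional children, one per bit.
struct BitTrieNode
{
    std::unique_ptr<BitTrieNode> child[2];   // index 0 = bit 0, index 1 = bit 1
    std::string value;                       // name data, if a name ends here
};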
BTC: 1domobKsPZ5cWk2kXssD8p8ES1qffGUCm | NMC: NCdomobcmcmVdxC5yxMitojQ4tvAtv99pY
BM-GtQnWM3vcdorfqpKXsmfHQ4rVYPG5pKS
Use your Namecoin identity as OpenID: https://nameid.org/

ryanc
Posts: 147
Joined: Wed Dec 18, 2013 8:10 pm
os: linux

Re: UNO Commitments

Post by ryanc »

Maaku said he was still slowly working on it when I asked him about it on IRC a month ago. I could try asking again. His proposal doesn't concern itself with authenticated denial, though.

johnc
Posts: 89
Joined: Sun Dec 28, 2014 10:03 am

Re: UNO Commitments

Post by johnc »

I agree with the general idea of this.

However, you should carefully consider whether this has to be done every block or only every x blocks.

It would also be nice to investigate whether this is possible for the UTXO set too.

The idea is that a regular user can have a lightweight client GUI on their computer without having to download the whole blockchain, and can choose to have the transactions, the names, or both.

It could be a huge step forward, but it will need real pro coders to keep the memory footprint low.
