5 billion Bitcoin customers: SPV and token requirements ...

03-09 20:23 - 'Another reason why whitepaper purism is a ridiculous cult. [Here] in 2010 Satoshi did a stealth edit to calculate cumulative POW. Longest chain as a metric will cause huge problems with SPV as SPV clients would have no problem...' by /u/jakesonwu removed from /r/Bitcoin within 0-4min

Another reason why whitepaper purism is a ridiculous cult. [Here]1 in 2010 Satoshi did a stealth edit to make nodes calculate cumulative POW. Longest chain as a metric would cause huge problems for SPV, as SPV clients would have no problem accepting the longest invalid chain.
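To make the distinction concrete, here is an illustrative sketch (not Bitcoin Core's actual code) of why nodes compare cumulative proof-of-work rather than raw block count; the work formula mirrors the well-known `2**256 // (target + 1)` estimate of expected hashes per block:

```python
# Illustrative sketch: chain selection by cumulative proof-of-work,
# not by "longest chain". Targets and chains are made up for the demo.

def block_work(target: int) -> int:
    """Expected number of hashes to find a block at this target:
    roughly 2**256 // (target + 1)."""
    return 2**256 // (target + 1)

def chain_work(targets: list[int]) -> int:
    # Cumulative work is the sum of per-block work.
    return sum(block_work(t) for t in targets)

def best_chain(chains: list[list[int]]) -> list[int]:
    # Pick the chain with the most cumulative work, not the most blocks.
    return max(chains, key=chain_work)

# A short chain of hard blocks (low targets) beats a long chain of easy ones.
hard = [2**224] * 3    # 3 blocks, each ~2**32 expected hashes
easy = [2**252] * 10   # 10 blocks, each ~15 expected hashes
assert best_chain([hard, easy]) == hard
assert len(easy) > len(hard)   # "longest chain" would pick the weaker one
```

An SPV client comparing only header counts would follow `easy`; comparing cumulative work correctly rejects it.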
Author: jakesonwu
submitted by removalbot to removalbot [link] [comments]

A Glance at the Heart: Proof-of-Authority Technology in the UMI Network


Greetings from the UMI Team! Our Whitepaper describes in detail the key pros and cons of the two mechanisms which the great majority of other cryptocurrencies are based on:
Proof-of-Work (PoW) — mining technology. Used in Bitcoin, Ethereum, Litecoin, Monero, etc.
Proof-of-Stake (PoS) and its derivatives — forging technology. Used in Nxt, PeerCoin, NEO, PRIZM, etc.
As a result of a careful analysis of PoW and PoS, which are both designed to fight centralization, we concluded that each fails in this mission and, in the long run, leads to network centralization and poor performance. For this reason, we took a different approach: a Proof-of-Authority (PoA) algorithm coupled with master nodes, which ensures both decentralization and maximum speed for the UMI network.
The Whitepaper covers the essentials. This article gives a clearer, more detailed explanation of the technology implemented in the UMI network. Let's glance at the heart of the network right now.
Proof-of-Authority: How and Why It Emerged
It's been over a decade since the first transaction in the Bitcoin network. Over this time, blockchain technology has undergone qualitative changes, because the cryptocurrency world, seeing Proof-of-Work's defects emerge in the Bitcoin network year after year, has actively searched for ways to eliminate them.
The decentralization and reliability of PoW come at the cost of low capacity, and a scalability problem prevents the network from rectifying this shortcoming. Moreover, as Bitcoin grew popular, the greed of miners, who benefit from the high fees caused by low network throughput, became a serious problem, and miners began forming pools, making the network more and more centralized. This "human factor," which purposefully slowed the network down and undermined its security, could never be eliminated. All this substantially limits the potential for using PoW-based cryptocurrencies on a larger scale.
Since ideas for upgrading PoW came to nothing, crypto community activists proposed radically new solutions and started developing other protocols. This is how the Proof-of-Stake technology emerged. However, it proved excellent in theory rather than in practice: PoS-based cryptocurrencies do demonstrate higher capacity overall, but the difference is not as striking as hoped, and PoS could not fully solve the scalability issue either.
Hoping to cope with the problems plaguing all cryptocurrencies, the community came up with brand-new algorithms based on alternative operating principles. One of them is the Proof-of-Authority technology, meant as an effective alternative with high capacity and a solution to the scalability problem. The idea of using PoA in cryptocurrencies was proposed by Gavin Wood, a high-profile blockchain programmer and Ethereum co-founder.
Proof-of-Authority Major Features
PoA's major difference from PoW and PoS lies in the elimination of miner or forger races. Network users do not fight for the right to be the first to create a block and receive a reward, as happens with cryptocurrencies based on other technologies. The blockchain's operating principle is substantially different: Proof-of-Authority uses a "reputation system" and only allows trusted nodes to create blocks.
This solves the scalability problem, considerably increasing capacity and allowing transactions to be handled almost instantly, without wasting time on the unnecessary calculations made by miners and forgers. Moreover, trusted nodes must meet strict capacity requirements. This is one of the main reasons we selected PoA: it is the only technology that allows full use of super-fast nodes.
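UMI's implementation is not published in this post, but the general idea of trusted-node block production can be sketched generically. In this hypothetical example (node names, rotation rule, and block format are all illustrative), only whitelisted authorities may produce blocks, in a deterministic round-robin, so there is no mining race:

```python
# Generic Proof-of-Authority sketch (hypothetical, not UMI's actual code):
# only whitelisted authority nodes may produce blocks, in round-robin order.
import hashlib
import json

AUTHORITIES = ["node-A", "node-B", "node-C"]   # trusted (master) nodes

def expected_producer(height: int) -> str:
    # Deterministic rotation: every node knows whose turn it is.
    return AUTHORITIES[height % len(AUTHORITIES)]

def make_block(height, prev_hash, producer, txs):
    if producer != expected_producer(height):
        raise PermissionError(f"{producer} may not produce block {height}")
    body = {"height": height, "prev": prev_hash, "by": producer, "txs": txs}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

genesis = {"height": 0, "hash": "00" * 32}
b1 = make_block(1, genesis["hash"], "node-B", ["tx1"])  # node-B's turn
try:
    make_block(2, b1["hash"], "node-B", ["tx2"])        # out of turn
except PermissionError:
    pass  # rejected: block 2 belongs to node-C
```

Because no proof-of-work is computed, block production is as fast as the nodes and network allow, which is the capacity advantage the article describes.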
Due to these features, the Proof-of-Authority algorithm is seen as one of the most effective and promising options for bringing blockchain to various business sectors. For instance, its model perfectly fits the logistics and supply chain management sectors. As an outstanding example, PoA is effectively used by the Microsoft Azure cloud platform to offer various tools for bringing blockchain solutions to businesses.
How the UMI Network Gets Rid of the Defects and Incorporates the Benefits of Proof-of-Authority Method
Any system has both drawbacks and advantages — so does PoA. According to the original PoA model, each trusted node can create a block, while it is technically impossible for ordinary users to interfere with the system operation. This makes PoA-based cryptocurrencies a lot more centralized than those based on PoW or PoS. This has always been the main reason for criticizing the PoA technology.
We understood that only a completely decentralized product could translate our vision of a "hard-to-hit," secure, and transparent monetary instrument into reality. Therefore, we started by upgrading PoA's basic operating principle to create a product that incorporates all the best features while eliminating the defects. What we've got is a decentralized PoA method. In simple terms:
- We've divided the nodes in the UMI network into two types: master nodes and validator nodes.
- Only master nodes have the right to create blocks and confirm transactions. Master node holders include the UMI team and trusted partners from across the world. Moreover, we deliberately keep some of the partners who hold master nodes secret in order to protect ourselves against potential negative influence, manipulation, and threats from third parties. This way we ensure maximally coherent and reliable system operation.
- However, since the core idea behind a decentralized cryptocurrency rules out any kind of trust, the blockchain is secured to prevent master nodes from harming the network in the event of sabotage or collusion. Such harm could befall Bitcoin or other PoW- or PoS-based cryptocurrencies if, for example, several large mining pools united to perform a 51% attack, but it can't happen to UMI. The worst that bad-faith master node holders can do is negligibly slow down the network, and the UMI network will automatically respond by banning such nodes. Thus, master nodes prevent any partner from intentionally harming the network, and a partner could not do so even with the support of most other partners. Nothing, not even quantum computers, will help hackers. Read our post "UMI Blockchain Six-Level Security" for more details.
- A validator node can be launched by any participant. Validator nodes maintain the network by verifying the correctness of blocks and excluding the possibility of fakes. In doing so they increase the overall network security and help master nodes carry out their functions. More importantly, those who hold validator nodes control those who hold master nodes and confirm that the latter don't violate anything and comply with the rules. You can find more details about validator nodes in the article we mentioned above.
- Finally, the network allows all interested users to launch light nodes (SPV), which enable viewing and sending transactions without downloading the blockchain or maintaining the network. With a light node, any network user can verify that the system is operating properly without having to download the blockchain.
- In addition, we are developing protection for the case in which 100% of the master nodes (10,000 master nodes in total) are "disabled" for some reason. Even though this is virtually impossible, we've thought ahead: in the worst-case scenario the system will automatically switch to PoS and thus be able to continue processing transactions. We're going to tell you about this in upcoming publications.
Thus, the UMI network uses an upgraded version of this technology that retains all its advantages while eliminating its drawbacks. This model is truly decentralized and maximally secure.
Another major drawback of PoA-based cryptos is the lack of incentives for users. PoA involves neither forging nor mining, which let users earn cryptocurrency while generating new coins, and the absence of a reward for maintaining the network is the main reason the crypto community has shown little interest in PoA. This is, of course, unfair. With this in mind, the UMI team has found a solution: a unique staking smart contract. It allows you to increase the number of your coins by up to 40% per month with no mining or forging, meaning the human factor cannot negatively impact decentralization or network performance.
New-Generation Proof-of-Authority
The UMI network's upgraded PoA technology possesses all of the original's advantages with its drawbacks virtually eliminated. This makes UMI a decentralized, easily scalable, and yet highly secure, productive, profitable, and fair cryptocurrency, working for the sake of all people.
The widespread use of UMI can change most aspects of society in different areas, including production, commerce, logistics, and all financial arrangements. We are just beginning this journey and thrilled to have you with us. Let's change the world together!
Best regards, UMI Team!
submitted by UMITop to u/UMITop [link] [comments]

Bitcoin API to Easily Create Your Bitcoin Wallet - Tokenview

What is a Bitcoin Wallet?

Simply put, a Bitcoin wallet is a management tool for private keys, addresses, and blockchain data. The private key is generated from a random number, and the address is derived from the private key. The wallet must also list the bitcoins received and transferred by its addresses and, of course, support receiving and paying. All of this is done through tools, and these tools are collectively referred to as 'bitcoin wallets.' ViewToken can not only forecast the virtual currency market but also help users earn more digital assets through financial management, and its flash swap function eliminates the hassle of trading on exchanges. At present, digital currency wallets vary in their functions.
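As a sketch of the last step of deriving an address from a key, here is the Base58Check encoding used for legacy P2PKH addresses. The elliptic-curve step and HASH160 (RIPEMD-160 of SHA-256 of the public key) are omitted for brevity; the 20-byte hash is assumed already computed:

```python
# Sketch of Base58Check, the final step of Bitcoin P2PKH address derivation.
# Assumes the 20-byte HASH160 of the public key is already available.
import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(version: bytes, payload: bytes) -> str:
    data = version + payload
    # Checksum: first 4 bytes of double SHA-256 over version + payload.
    checksum = hashlib.sha256(hashlib.sha256(data).digest()).digest()[:4]
    n = int.from_bytes(data + checksum, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = B58[r] + out
    # Each leading zero byte is encoded as a literal '1'.
    pad = len(data + checksum) - len((data + checksum).lstrip(b"\x00"))
    return "1" * pad + out

# Mainnet P2PKH address (version 0x00) for an all-zero HASH160,
# the well-known "burn address" pattern of many leading 1s.
addr = base58check(b"\x00", bytes(20))
print(addr)
```

A full wallet would additionally generate the private key, derive the public key on secp256k1, and compute HASH160 before this encoding step.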
According to how blockchain data is maintained, we can divide wallets into full-node wallets, which store the entire blockchain, and light (SPV) wallets, which store only block headers.
According to the hardware used, we can divide wallets into cold (offline or hardware) wallets and hot (online software) wallets.

Bitcoin Wallet development and Address Management

In Bitcoin application development, how do you query the details of a Bitcoin address, query all transactions that occurred at a specified Bitcoin address, and manage Bitcoin addresses? Here are the solutions:
Due to Bitcoin's data storage structure, the original Bitcoin API cannot directly query the historical transaction data of a specified address. Therefore, the simplest solution is to store each transaction on the Bitcoin blockchain in your own database and then index the address information (such as the scriptPubKey, the pubkey, or the address itself), so that you can query the database freely and efficiently.
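The "own database" approach above can be sketched with SQLite. The schema and field names here are illustrative, not those of any real indexer; a production index would also track spends, block heights, and reorgs:

```python
# Sketch: store every transaction output, indexed by address, so historical
# activity for any address becomes a single query. Illustrative schema only.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE tx_outputs (
    txid TEXT, vout INTEGER, address TEXT, value_sat INTEGER,
    PRIMARY KEY (txid, vout))""")
db.execute("CREATE INDEX idx_addr ON tx_outputs(address)")

def index_tx(txid, outputs):
    # outputs: list of (vout, address, value_sat) parsed from the raw block
    db.executemany("INSERT INTO tx_outputs VALUES (?,?,?,?)",
                   [(txid, v, a, s) for v, a, s in outputs])

def history(address):
    return db.execute(
        "SELECT txid, vout, value_sat FROM tx_outputs WHERE address=?",
        (address,)).fetchall()

# Hypothetical data for the demo:
index_tx("aa" * 32, [(0, "1ExampleAddr", 5000)])
index_tx("bb" * 32, [(1, "1ExampleAddr", 700)])
print(history("1ExampleAddr"))
```

The `idx_addr` index is what turns "scan the whole chain" into an efficient per-address lookup.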
The previous method requires users to parse the Bitcoin blockchain data and build a database environment themselves, which is cumbersome and technically demanding, so few users take this route. Third-party API data services solve this problem: users can visit the Tokenview blockchain API data service page, choose the appropriate API endpoints for their needs, and easily develop Bitcoin wallets and manage wallet addresses.
A third solution for obtaining and managing Bitcoin address information is Bitcoin node software such as btcd, a full-node implementation written in Go. Simply start btcd with the --addrindex flag and it automatically builds a Bitcoin address index.
submitted by Doris333 to u/Doris333 [link] [comments]

The CBDC Road to Practice-The Framework of LDF 2020

March 8, 2020 By JH( Lend0X Project Architect)
The Market Structure Analysis of CBDC
I. CBDC helps GDP growth
CBDC can be used as cash by commercial banks or as a medium for (government) bonds, and the way these assets are issued will have a huge impact on GDP growth. For commercial banks, the CBDC issued by the central bank is a source of assets; for customers, the products built on CBDC are uses of funds. Blockchain-based CBDC and bank-account-based digital cash and banknotes are generally considered to differ enormously in quality, cost, and efficiency, and therefore in their contribution to GDP.
In its 2019 study of the macroeconomic effects of issuing a central bank digital currency (CBDC), the Bank of England notes that digital currency can increase interest-bearing central bank liabilities and that distributed ledgers can compete with bank deposits as a medium of exchange. In the digital currency economic model:
  1. The model in the report is calibrated to match adjusted pre-crisis US currency issuance, and it finds that CBDC issuance amounting to 30% of GDP, against government bonds, could permanently raise GDP by as much as 3%, by reducing real interest rates, distortionary taxes, and monetary transaction costs.
  2. As a second monetary policy tool, countercyclical CBDC price or quantity rules could greatly improve the central bank's ability to stabilize the business cycle.
II. The issuing system and payment structure of CBDC
A BIS research report points out that CBDC raises many open questions: should it be retail or wholesale? Issued directly or indirectly to consumers? Account-based or token-based? Based on distributed ledgers, a centralized model, or a hybrid? How does CBDC pay across borders?
Of the three issuance systems (indirect, direct, and hybrid), only the central bank can create CBDC. In the first, indirect architecture, issuance reaches consumers through intermediaries: the ICBDC held by consumers (such as the digital currency distributed through the four largest state-owned commercial banks under DCEP) represents a liability of those commercial banks.
In the second (direct) and third (hybrid) issuance structures, consumers are creditors of the central bank. In the direct CBDC model (type 2), the central bank processes all payments in real time and therefore maintains a record of all retail holdings. The hybrid CBDC model is an intermediate solution: the consumer is a creditor of the central bank, but real-time payments are handled by intermediaries, and the central bank keeps a copy of all retail CBDC holdings so it can transfer them from one payment service provider to another in the event of a technical failure.
In terms of efficiency, all three payment architectures allow account-based or token-based access. Although DCEP's digital currency is not a token on a blockchain, it resembles a blockchain token in key features such as no double-spending, anonymity, unforgeability, security, transferability, divisibility, and programmability. Therefore, DCEP still belongs to the token paradigm, not the account paradigm.
All four combinations are possible under any CBDC architecture (indirect, direct, or hybrid), whether the payment structure is centralized or decentralized and whether it uses the account mode or the token mode of blockchain smart-contract accounts. In different structures, however, central banks, commercial banks, and the private sector operate different parts of the infrastructure.
At present, the DCEP issuance structure adopts a two-tier design: in its payment system, the four major state-owned commercial banks issue four ICBDC tokens. This technical architecture matches the first, indirect distribution method. Because DCEP is positioned as digital cash (M0) and the central bank's DCEP supports offline mobile payment, and considering its huge volume of payment transactions, a centralized account system for DCEP payments is essential, and offline payment via token-based mobile wallets is likewise essential for commercial banks.

LDF Central Bank Digital Currency CBDC Project Development
At present, the technical framework and infrastructure selection for the CBDC split into domestic R&D and cooperation on DCEP application scenarios, and overseas expansion supporting the development of a "Belt and Road" digital asset ecosystem. DCEP adopts a two-tier system of commercial banks and the central bank to fit the existing currency systems of sovereign countries. China, as the issuing country, has the strong economy and basic conditions necessary for a world currency. At the same time, DCEP can economize on issuance costs, measure inflation and other macroeconomic indicators more accurately, better curb illegal activities such as money laundering and terrorist financing, and facilitate foreign exchange circulation worldwide.
1. LDF——the combination of CBDC program and token economy
Only after answering questions such as how open the CBDC itself will be can we work out how LDF's blockchain applications (the digital asset issuance platform, the asset-backed bond platform, and lending) implement basic CBDC applications such as "product traceability", "digital identity authentication", "judicial depository", and "secure communication". These LDF applications are an important direction for exploring blockchain use.
2. Select the most widely used blockchain technology as the basic platform
LDF builds its CBDC work on blockchain technology because it is the most mature foundation platform, with the advantages of decentralization, openness, autonomy, anonymity, and tamper resistance. It makes system information highly transparent and data extremely stable and reliable, which solves the point-to-point trust problem and reduces transaction and operating costs. At present, the underlying technology of mainstream digital assets such as Bitcoin, Ethereum, and USDT is blockchain. At the same time, blockchain application scenarios go beyond digital currency to fields such as "product traceability", "digital identity authentication", "judicial depository", and "secure communication".
3. Interpretation of DCEP and selection of the LDF blockchain technology architecture
·DCEP does not use a real blockchain like Libra; rather, it likely uses a centralized ledger based on the UTXO (Unspent Transaction Output) model, so it still belongs to the token paradigm. This centralized ledger is the digital currency issuance and registration system maintained by the central bank. It does not need to run consensus algorithms and is not subject to blockchain performance bottlenecks; a blockchain may be used for the definitive registration of digital currencies, in a subsidiary role.
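The distinction the passage draws can be sketched in code. This is a hypothetical toy (not DCEP's actual design): a single operator keeps the only copy of the ledger, so no consensus is needed, yet value still moves as discrete UTXO-style "coins" rather than account balances:

```python
# Toy centralized ledger using the UTXO (token) paradigm. Hypothetical.

class CentralizedUTXOLedger:
    def __init__(self):
        self._utxos = {}    # coin id -> (owner, amount); the only copy
        self._next_id = 0

    def issue(self, owner, amount):
        coin = ("issue", self._next_id)
        self._next_id += 1
        self._utxos[coin] = (owner, amount)
        return coin

    def transfer(self, coin, sender, recipient):
        owner, amount = self._utxos[coin]   # KeyError if already spent
        if owner != sender:
            raise ValueError("not the owner")
        del self._utxos[coin]               # input consumed exactly once
        new_coin = ("tx", self._next_id)
        self._next_id += 1
        self._utxos[new_coin] = (recipient, amount)
        return new_coin

ledger = CentralizedUTXOLedger()
c = ledger.issue("alice", 100)
c2 = ledger.transfer(c, "alice", "bob")
try:
    ledger.transfer(c, "alice", "carol")    # double spend: c is gone
except KeyError:
    pass
```

Double-spending is prevented by the operator's single authoritative ledger rather than by consensus, which is the efficiency argument made above.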

·Users need a DCEP wallet whose core is a public/private key pair. The public key also serves as the address where the digital certificate of RMB is stored. This certificate is not a blockchain token in the complete sense, but it shares many key features with one, and it is backed by a 100% RMB reserve. Users initiate transfers between addresses with the wallet's private key, and the central bank records the transfer directly in the centralized ledger. In this way, DCEP implements loosely coupled accounts and controlled anonymity.
·Although DCEP is a monetary instrument while third-party payment is mainly a payment instrument after "disconnecting directly", the two have much in common. If DCEP is good enough in technical efficiency and business development, it can, from the user's perspective, deliver the same experience that third-party payments provide after "disconnecting directly". Therefore, DCEP and third-party payment are mutual substitutes in applications after "disconnecting directly".
·DCEP will have a tightening effect on M2, and M2 tightening reflects the contraction of the banking system to a certain extent. Digital currency does not pay interest, and the People's Bank of China has no plan to completely replace cash with DCEP, so DCEP will not constitute a new monetary policy tool. DCEP has strong policy implications for central bank monitoring of capital flows, as well as anti-money laundering, anti-terrorist financing and anti-tax evasion. Therefore, the supervisory function of DCEP exceeds that of monetary policy.
·The impact of DCEP on RMB internationalization is mainly reflected in cross-border payments based on digital currencies. Although cross-border payments including DCEP, can promote RMB internationalization, cross-border payment is only a necessary condition for RMB internationalization, not a sufficient one. The internationalization of the RMB is inseparable from a series of institutional arrangements.
4. The effectiveness of digital currencies in the LDF framework
Under the LDF framework, CBDC is positioned as digital cash or currency, while the various tokens, cryptocurrencies, and stablecoins are treated as digital assets. LDF's application platforms (the asset-backed bond platform, the digital asset issuance platform, and lending) hold underlying assets that form part of the digital asset equity. LDF uses CBDC and stablecoins as currency because:
·The LDF framework links three financial ecosystems
·CBDC's monetary characteristics of medium of exchange, unit of account, and store of value have been verified
·Stablecoins can serve as a payment tool for token economic platforms, though not as currencies
The stablecoin selected by LDF should effectively perform the payment function of currency and meet the following requirements of the LDF framework:
·It must be universally accepted
·It must be easy to standardize in order to determine its value
Because blockchain-based DvP (delivery versus payment: payment is settlement) is built in, LDF's smart contracts can act as decentralized intermediaries: asset account contracts partially replace account settlement, the asset pool contract replaces the SPV, and the cash flow contract replaces the asset payment intermediary. Selecting a digital currency that meets the above standards is therefore very important for the effectiveness of the LDF framework; otherwise, a platform built on it cannot achieve the capabilities of distributed ledgers and DAO organizations.
LDF regulatory compliance
LDF chooses CBDC (DCEP) for building its digital asset transaction and payment platform because of its DvP characteristic (asset payment is settlement). The choice of a digital currency that supports smart-contract accounts plays a decisive role in the platform's regulatory compliance (anti-money laundering and anti-terrorist financing).
DCEP uses loosely coupled accounts to achieve controlled anonymity. Current electronic payment methods, such as bank cards and third-party payment platforms, use tightly coupled accounts: funds must move through real-name bank accounts. But as people's awareness of information security grows, such electronic payment cannot meet the demand for anonymous payment. The central bank's digital currency adopts loosely coupled accounts, enabling asset transfers without bank accounts and thereby achieving controllable anonymity.
Unlike Bitcoin's full anonymity, the central bank retains the right to obtain transaction data within the scope of the law, and the source of digital currency can be traced through big data analysis, while other commercial banks and merchants cannot obtain that information. This mechanism protects data security and citizens' privacy while still allowing illegal activities such as money laundering to be effectively supervised.
Association of LDF's DAO Autonomous Economic Model with CBDC
The CBDC (such as DCEP) or Libra directly targeted by the LDF token can, through transformation and analysis, quantify the value of the DAO/DAE and predict its future long-term growth rate, the problems its economic model must solve, the solution path adopted, and its overall structural design, technological innovation, team composition, development vision, and roadmap.
·The LDF economic model transplants the asset-valuation models of the general economy to DAO 2.0 organization and market management, so as to establish a unified system for evaluating the value generated by the distributed autonomous economy (DAE). Endogenous growth models treat important parameters such as the savings rate, population growth rate, and technological progress as endogenous variables, so the economy's long-term growth rate can be determined within the model. Analogously, the LDF economic model takes the number of tokens, the nodes, and the technical inputs of the distributed organization as its parameters.
·In response to the special needs of transactions and asset on-chaining in the blockchain field, the LDF economic model has developed a DAE (Decentralized Autonomous Economy) protocol group designed to eliminate various decentralization pain points, along with a corresponding LDF DAO DApp. These protocols include:
·Issuance and trading of tokens based on smart contracts
·Distributed order submission and matching
·Transaction interest rates and mortgage methods based on an automatic discovery mechanism
Therefore, whether it is a community member, an investor, or a blockchain project developer that develops applications on the LDF economic model, it can use the distributed rules, consensus mechanisms, infrastructure, and smart contracts provided by it to achieve the following purposes:
·Encrypted token asset transaction and circulation based on community autonomy
·Issuance of new LDF tokens
·Construction, collaboration, management, voting, and decision-making for specific encrypted token communities
·Development of a smart contract system for the dual factors of community node rights and workload
·Customized incentive standards for nodes with different interests
Welcome to discuss with the author of this article, please contact via email:[email protected]
submitted by Lend0x to u/Lend0x [link] [comments]

Best General RenVM Questions of January 2020

Best General RenVM Questions of January 2020

‌*These questions are sourced directly from Telegram
Q: When you say RenVM is Trustless, Permissionless, and Decentralized, what does that actually mean?
A: Trustless = RenVM is a virtual machine (a network of nodes, that do computations), this means if you ask RenVM to trade an asset via smart contract logic, it will. No trusted intermediary that holds assets or that you need to rely on. Because RenVM is a decentralized network and computes verified information in a secure environment, no single party can prevent users from sending funds in, withdrawing deposited funds, or computing information needed for updating outside ledgers. RenVM is an agnostic and autonomous virtual broker that holds your digital assets as they move between blockchains.
Permissionless = RenVM is an open protocol; meaning anyone can use RenVM and any project can build with RenVM. You don't need anyone's permission, just plug RenVM into your dApp and you have interoperability.
Decentralized = The nodes that power RenVM (Darknodes) are scattered throughout the world. RenVM has a peak capacity of up to 10,000 Darknodes (due to REN's token economics). Realistically, there will probably be 100-500 Darknodes running in the initial Mainnet phases, which is amply decentralized nonetheless.

Q: Okay, so how can you prove this?
A: The publication of our audit results will help prove the trustlessness piece; permissionless and decentralized can be proven today.
Permissionless = https://github.com/renproject/ren-js
Decentralized = https://chaosnet.renproject.io/

Q: How does Ren sMPC work? Sharmir's secret sharing? TSS?
A: There is some confusion here that keeps arising, so I will do my best to clarify. TL;DR: *SSS is just data. It's what you do with the data that matters. RenVM uses sMPC on SSS to create TSS for ECDSA keys.* SSS and TSS aren't fundamentally different things. It's kind of like asking: do you use numbers, or equations? Equations often (but not always) involve numbers.
SSS by itself is just a way of representing secret data (like numbers). sMPC is how to generate and work with that data (like equations). One of the things you can do with that work is produce a form of TSS (this is what RenVM does).
However, TSS is slightly different because it can also be done *without* SSS and sMPC. For example, BLS signatures don’t use SSS or sMPC but they are still a form of TSS.
So, we say that RenVM uses SSS+sMPC because this is more specific than just saying TSS (and you can also do more with SSS+sMPC than just TSS). Specifically, all viable forms of turning ECDSA (a scheme that isn't naturally threshold-based) into a TSS need SSS+sMPC.
People often get confused about RenVM and claim “SSS can’t be used to sign transactions without making the private key whole again”. That’s a strange statement and shows a fundamental misunderstanding about what SSS is.
To come back to our analogy, it’s like saying “numbers can’t be used to write a book”. That’s kind of true in a direct sense, but there are plenty of ways to encode a book as numbers and then it’s up to how you interpret (how you *use*) those numbers. This is exactly how this text I’m writing is appearing on your screen right now.
SSS is just secret data. It doesn’t make sense to say that SSS *functions*. RenVM is what does the functioning. RenVM *uses* the SSSs to represent private keys. But these are generated and used and destroyed as part of sMPC. The keys are never whole at any point.
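To ground the terminology, here is a toy Shamir secret sharing demo (illustrative only: tiny field, Python's non-cryptographic `random`, nothing like RenVM's production scheme). It shows that a share really is just data, and that any threshold-sized subset of shares determines the secret:

```python
# Toy Shamir secret sharing over a prime field. Not production crypto.
import random

P = 2**61 - 1  # a prime modulus

def split(secret, t, n):
    # Random degree-(t-1) polynomial with f(0) = secret; share i is (i, f(i)).
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers f(0), the secret.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of 5 suffice
assert reconstruct(shares[2:5]) == 123456789
```

Note the demo reconstructs the secret only to verify correctness; the whole point of the sMPC described above is that RenVM computes *with* shares like these without ever performing that reconstruction.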

Q: Thanks for the explanation. Based on my understanding of SSS, a trusted dealer does need to briefly put the key together. Is this not the case?
A: Remember, SSS is just the representation of a secret. How you get from the secret to its representation is something else, and there are many ways to do it. The simplest way is to have a "dealer" that knows the secret and gives out the shares. But there are other ways. For example: we all act as dealers, and all give each other shares of our individual secrets. If there are N of us, we now each have N shares (one from every person). Then we all individually add up the shares that we have. We now each have a share of a "global" secret that no one actually knows. We know this global secret is the sum of everyone's individual secrets, but unless you know every individual's secret you cannot know the global secret (even though we have all just collectively generated shares for it). This is an example of an sMPC generation of a random number with collusion resistance against all-but-one adversaries.
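The dealerless procedure described in this answer can be sketched directly (a toy with additive sharing and Python's non-cryptographic `random`; a real protocol would use secure channels and verifiable sharing):

```python
# Sketch of dealerless share generation: each of N parties additively
# shares a private random value; summing received shares gives each party
# a share of a global secret that no one ever held in full. Toy code.
import random

P = 2**61 - 1
N = 4

def additive_share(value, n):
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)  # shares sum to value mod P
    return shares

secrets = [random.randrange(P) for _ in range(N)]    # each party's input
dealt = [additive_share(s, N) for s in secrets]      # party i deals row i
# Party j receives dealt[i][j] from every party i and sums them:
global_shares = [sum(dealt[i][j] for i in range(N)) % P for j in range(N)]

# The shares reconstruct the sum of all inputs (the "global" secret),
# even though no single party ever saw that sum.
assert sum(global_shares) % P == sum(secrets) % P
```

Any N-1 colluding parties still miss one summand, so they cannot recover the global secret, which is the all-but-one collusion resistance mentioned above.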

Q: If you borrow Ren, you can profit from the opposite Ren gain. That means you could profit from breaking the network and from falling Ren price (because breaking the network, would cause Ren price to drop) (lower amount to be repaid, when the bond gets slashed)
A: Yes, this is why it’s important that there is a large number of Darknodes before moving to full decentralisation (large-scale borrowing becomes harder). We’re exploring a few other options too, that should help prevent these kinds of issues.

Q: What are RenVM’s Security and Liveliness parameters?
A: These are discussed in detail in our Wiki, please check it out here: https://github.com/renproject/ren/wiki/Safety-and-Liveliness#analysis

Q: What are the next blockchain under consideration for RenVM?
A: These can be found here: https://github.com/renproject/ren/wiki/Supported-Blockchains

Q: I've just read that Aztec is going to be live this month and currently tests txs with third parties. Are you going to participate in early access or you just more focused on bringing Ren to Subzero stage?
A: At this stage, our entire focus is on Mainnet SubZero. But, we will definitely be following up on integrating with AZTEC once everything is out and stable.

Q: So how does RenVM compare to tBTC, Thorchain, WBTC, etc..?
A: An easy way to think about it is..RenVM’s functionality is a combination of tBTC (+ WBTC by extension), and Thorchain’s (proposed) capabilities... All wrapped into one. Just depends on what the end-user application wants to do with it.

Q1: What are the core technical/security differences between RenVM and tBTC?
A1: The algorithm used by tBTC faults if even one node goes offline at the wrong moment (and the whole “keep” of nodes can be penalised for this). RenVM can survive 1/3rd going offline at any point at any time. Advantage for tBTC is that collusion is harder; disadvantage is obviously that availability and permissionlessness are lower.
tBTC can only mint/burn in lots of 1 BTC and requires an on-Ethereum SPV relay for Bitcoin headers (and for any other chain it adds). No real advantage trade-off IMO.
tBTC has a liquidation mechanism that means nodes can have their bond liquidated because of the ETH/BTC price ratio. Advantage means users can get 1 BTC worth of ETH. Disadvantage is it means tBTC is kind of a synthetic: needs a price feed, needs liquid markets for liquidation, users must accept exposure to ETH even if they only hold tBTC, nodes must stay collateralized or lose lots of ETH. RenVM doesn’t have this, and instead uses fees to prevent becoming under-collateralized. This requires a mature market, and assumes Darknodes will value their REN bonds fairly (based on revenue, not necessarily what they can sell it for at current —potentially manipulated—market value). That can be an advantage or disadvantage depending on how you feel.
tBTC focuses more on the idea of a tokenized version of BTC that feels like an ERC20 to the user (and is). RenVM focuses more on letting the user interact with DeFi and use real BTC and real Bitcoin transactions to do so (still an ERC20 under the hood, but the UX is more fluid and integrated). Advantage of tBTC is that it’s probably easier to understand and that might mean better overall experience, disadvantage really comes back to that 1 BTC limit and the need for a more clunky minting/burning experience that might mean worse overall experience. Too early to tell, different projects taking different bets.
tBTC supports BTC (I think they have ZEC these days too). RenVM supports BTC, BCH, and ZEC (docs discuss Matic, XRP, and LTC).
Q2: These are my assumed differences between tBTC and RenVM; are they correct? Some key comparisons:
-Both are vulnerable to oracle attacks
-REN federation failure results in loss or theft of all funds
-tBTC failures tend to result in frothy markets, but holders of tBTC are made whole
-REN quorum rotation is new crypto, and relies on honest deletion of old key shares
-tBTC rotates micro-quorums regularly without relying on honest deletion
-tBTC relies on an SPV relay
-REN relies on federation honesty to fill the relay's purpose
-Both are brittle to deep reorgs, so expanding to weaker chains like ZEC is not clearly a good idea
-REN may see total system failure as the result of a deep reorg, as it changes federation incentives significantly
-tBTC may accidentally punish some honest micro-federations as the result of a deep reorg
-REN generally has much more interaction between incentive models, as everything is mixed into the same pot.
-tBTC is a large collection of small incentive models, while REN is a single complex incentive model
A2: To correct some points:
The oracle situation is different with RenVM, because the fee model is what determines the value of REN with respect to the cross-chain asset. That asset is what is used to pay the fee, so no external price feed is needed for it (because you only care about the ratio between REN and the cross-chain asset).
RenVM does rotate quorums regularly, in fact more regularly than in tBTC (although there are micro-quorums, each deposit doesn’t get rotated as far as I know and sticks around for up to 6 months). This rotation involves rotations of the keys too, so it does not rely on honest deletion of key shares.
Federated views of blockchains are easier to expand to support deep re-orgs (just get the nodes to wait for more blocks for that chain). SPV requires longer proofs which begins to scale more poorly.
Not sure what you mean by “one big pot”, but there are multiple quorums so the failure of one is isolated from the failures of others. For example, if there are 10 shards supporting BTC and one of them fails, then this is equivalent to a sudden 10% fee being applied. Harsh, yes, but not total failure of the whole system (and doesn’t affect other assets).
Would be interesting what RenVM would look like with lots more shards that are smaller. Failure becomes much more isolated and affects the overall network less.
Further, the amount of tBTC you can mint is dependent on people who are long ETH and prefer locking it up in Keep for earning a smallish fee instead of putting it in Compound or leveraging with dydx. tBTC is competing for liquidity while RenVM isn't.

Q: Do I understand correctly that RenVM (sMPC) can get up to a 50% security threshold? Can you tell me more?
A: The best you can theoretically do with sMPC is 50-67% of the total value of REN used to bond Darknodes (RenVM will eventually work up to 50% and won’t go for 67% because we care about liveliness just as much as safety). As an example, if there’s $1M of REN currently locked up in bonded Darknodes you could have up to $500K of tokens shifted through RenVM at any one specific moment. You could do more than that in daily volume, but at any one moment this is the limit.
Beyond this limit, you can still remain secure but you cannot assume that players are going to be acting to maximize their profit. Under this limit, a colluding group of adversaries has no incentive to subvert safety/liveliness properties because the cost to attack roughly outweighs the gain. Beyond this limit, you need to assume that players are behaving out of commitment to the network (not necessarily a bad assumption, but definitely weaker than the profit-maximizing assumption).
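The arithmetic of the limit is simple enough to state as a sketch (the 50% threshold is the working target mentioned above; the function name is hypothetical):

```python
def max_shiftable_value(bonded_ren_value: float, threshold: float = 0.5) -> float:
    """Upper bound on value that can safely sit in RenVM at any one moment
    (hypothetical helper; 0.5 is the target, ~2/3 the theoretical ceiling)."""
    return bonded_ren_value * threshold

# The $1M bonded / $500K shifted example from the answer above:
assert max_shiftable_value(1_000_000) == 500_000
```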

Q: Why is using ETH as collateral for RenVM a bad idea?
A: Using ETH as collateral in this kind of system (like having to deposit say 20 ETH for a bond) would not make any sense because the collateral value would then fluctuate independently of what kind of value RenVM is providing. The REN token on the other hand directly correlates with the usage of RenVM which makes bonding with REN much more appropriate. DAI as a bond would not work as well because then you can't limit attackers with enough funds to launch as many darknodes as they want until they can attack the network. REN is limited in supply and therefore makes it harder to get enough of it without the price shooting up (making it much more expensive to attack as they would lose their bonds as well).
A major advantage of Ren's specific usage of sMPC is that security can be regulated economically. All value (that's being interopped at least) passing through RenVM has explicit value. The network can self-regulate to ensure an attack is never worth it.

Q: Given the fee model proposal/ceiling, might there be a liquidity issue with renBTC? More demand than possible supply?
A: I don’t think so. As renBTC is minted, the fees being earned by Darknodes go up, and therefore the value of REN goes up. Imagine that the demand is so great that the amount of renBTC is pushing close to 100% of the limit. This is a very loud and clear message to the Darknodes that they’re going to be earning good fees and that demand is high. Almost by definition, this means REN is worth more.
Profits of the Darknodes, and therefore security of the network, are based solely on the use of the network (this is what you want, because your network does not make or break on things outside the system’s control). In a system like tBTC there are liquidity issues because you need to convince ETH holders to bond ETH and this is an external problem. Maybe ETH is pumping irrespective of tBTC use and people begin leaving tBTC to sell their ETH. Or, ETH is dumping, and so tBTC nodes are either liquidated or all their profits are eaten by the fact that they have to be long on ETH (and tBTC holders cannot get their BTC back in this case). Feels real bad man.

Q: I’m still wondering which asset people will choose: tbtc or renBTC? I’m assuming the fact that all tbtc is backed by eth + btc might make some people more comfortable with it.
A: Maybe :) personally I’d rather know that my renBTC can always be turned back into BTC, and that my transactions will always go through. I also think there are many BTC holders that would rather not have to “believe in ETH” as an externality just to maximize use of their BTC.

Q: How does the liquidation mechanism work? Can any party, including non-nodes act as liquidators? There needs to be a price feed for liquidation and to determine the minting fee - where does this price feed come from?
A: RenVM does not have a liquidation mechanism.
Q: I don’t understand how the price feeds for minting fees make sense. You are saying that the inputs for the fee curve depend on the amount of fees derived by the system. This is circular in a sense?
A: By valuing REN based on the income you can get from bonding it and working. The only thing that drives REN value is the fact that REN can be bonded to allow work to be done to earn revenue. So any price feed (however you define it) is eventually rooted in the fees earned.

Q: Who’s doing RenVM’s Security Audit?
A: ChainSecurity | https://chainsecurity.com/

Q: Can you explain RenVM’s proposed fee model?
A: The proposed fee model can be found here: https://github.com/renproject/ren/wiki/Safety-and-Liveliness#fees

Q: Can you explain in more detail the difference between "execution" and "powering P2P Network". I think that these functions are somehow overlapping? Can you define in more detail what is "execution" and "powering P2P Network"? You also said that at later stages semi-core might still exist "as a secondary signature on everything (this can mathematically only increase security, because the fully decentralised signature is still needed)". What power will this secondary signature have?
A: By execution we specifically mean signing things with the secret ECDSA keys. The P2P network is how every node communicates with every other node. The semi-core doesn’t have any “special powers”. If it stays, it would literally just be a second signature required (as opposed to the one signature required right now).
This cannot affect safety, because the first signature is still required. Any attack you wanted to do would still have to succeed against the “normal” part of the network. This can affect liveliness, because the semi-core could decide not to sign. However, the semi-core follows the same rules as normal shards. The signature is tolerant to 1/3rd for both safety/liveliness. So, 1/3rd+ would have to decide to not sign.
Members of the semi-core would be there under governance from the rest of our ecosystem. The idea is that members would be chosen for their external value. We’ve discussed in-depth the idea of L<3. But, if RenVM is used in MakerDAO, Compound, dYdX, Kyber, etc. it would be desirable to capture the value of these ecosystems too, not just the value of REN bonded. The semi-core as a second signature is a way to do this.
Imagine the members come from those projects, because those projects want to help secure renBTC, since it’s used in their ecosystems. There is a very strong incentive for them to behave honestly. To attack RenVM you first have to attack the Darknodes “as per usual” (the current design), and then somehow convince 1/3rd of these projects to act dishonestly and collapse their own ecosystems and their own reputations. This is a very difficult thing to do.
Worth reminding: the draft for this proposal isn’t finished. It would be great for everyone to give us their thoughts on GitHub when it is proposed, so we can keep a persistent record.

Q: Which method or equation is used to calculate REN value based on fees? I'm interested in how REN value is calculated as well, to maintain the L < 3 ratio?
A: We haven’t finalized this yet. But, at this stage, the plan is to have a smart contract that is controlled by the Darknodes. We want to wait to see how SubZero and Zero go before committing to a specific formulation, as this will give us a chance to bootstrap the network and field inputs from the Darknodes owners after the earnings they can make have become more apparent.
submitted by RENProtocol to RenProject [link] [comments]

If everyone should run full nodes then why POW?

Preamble: I always post my viewpoint on a sub with an opposing standpoint for the sole reason that the best way to learn is from critique, hence my choice of posting here. Please don’t confuse rebuttals with trolling; it’s often just a misunderstanding on one or both parties’ part. Please refrain from pointing at people or altcoins and evaluate premises on their own merits. Also, please consider leaving a comment before downvoting.
So, as might be deduced, I am against the notion that everyone should run a full node; instead, miners can be ‘trusted’ (due to economic incentives) to provide an honest chain as the one with the most proof of work, and SPV is good enough for 99% of users. Hopefully the following hypothetical scenario will help to further (or weaken) my case and understanding. Note that this was a shower thought and might be crushed with a single comment (which would be good and what I’m here for).
Introducing Bitcoin with zero greenhouse gas emissions and improved security consensus rules:
Consider these hypothetical changes to Bitcoin’s consensus rules for a hypothetical upgrade to full nodes (note again these are very quick thoughts, so over time this could be improved significantly).
So here we have a new and improved Bitcoin that is environmentally friendly and significantly more secure due to the fact that you can compound security by taking a hash that is buried under sChain's POW for as far back as you wish.
Looking forward to those spotting flaws in my preliminary thoughts on this (I am expecting a lot to be honest).
So in the hypothetical scenario that this POW-leeching consensus model holds (after this initial suggestion is optimised as far as it can be), do we not have to rethink this idea that every node should be validating all transactions?
EDIT: After some discussion I want to make some revisions (mainly to remove any POS'ish incentives the initial description might have created)... 1) There will be no rewards whatsoever for creating blocks 2) The block producers are chosen randomly from UTXO set based on sChain's block hashes
submitted by fiddley2000 to Bitcoin [link] [comments]

An in-depth analysis of Bitcoin's throughput bottlenecks, potential solutions, and future prospects

I've recently spent altogether too much time putting together an analysis of the limits on block size and transactions/second on the basis of various technical bottlenecks. The methodology I use is to choose specific operating goals and then calculate estimates of throughput and maximum block size for each of various different operating requirements for Bitcoin nodes and for the Bitcoin network as a whole. The smallest bottlenecks represents the actual throughput limit for the chosen goals, and therefore solving that bottleneck should be the highest priority.
The goals I chose are supported by some research into available machine resources in the world, and to my knowledge this is the first paper that suggests any specific operating goals for Bitcoin. However, the goals I chose are very rough and very much up for debate. I strongly recommend that the Bitcoin community come to some consensus on what the goals should be and how they should evolve over time, because choosing these goals makes it possible to do unambiguous quantitative analysis that will make the blocksize debate much more clear cut and make coming to decisions about that debate much simpler. Specifically, it will make it clear whether people are disagreeing about the goals themselves or disagreeing about the solutions to improve how we achieve those goals.
There are many simplifications I made in my estimations, and I fully expect to have made plenty of mistakes. I would appreciate it if people could review the paper and point out any mistakes, insufficiently supported logic, or missing information so those issues can be addressed and corrected. Any feedback would help (especially review of my math)!
Here's the paper: https://github.com/fresheneesz/bitcoinThroughputAnalysis
Oh, I should also mention that there's a spreadsheet you can download and use to play around with the goals yourself and look closer at how the numbers were calculated. Also, there was a discussion on BitcoinDiscussion a while back.
submitted by fresheneesz to Bitcoin [link] [comments]

Can you detect block reward increase without a full node?

Some concern trolls claim miners may conspire to increase the base reward to trick SPV wallets. If there is an easy and inexpensive way to detect it, it could be a good counter.
Couldn't you just check the latest block, or a reasonable number of blocks, or the latest block in combination with the SPV block headers?
Is it really necessary to know all the UTXOs to verify the mining reward wasn't increased?
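For the subsidy portion specifically, no UTXO knowledge is needed: the allowed base reward is a deterministic function of height under standard Bitcoin consensus rules, sketched below. A client that fetches the suspect block can check the coinbase total against subsidy plus claimed fees; it is the fee total that requires knowing the values of all spent outputs.

```python
COIN = 100_000_000          # satoshis per BTC
HALVING_INTERVAL = 210_000  # blocks between reward halvings

def block_subsidy(height: int) -> int:
    """Maximum allowed base reward (in satoshis) at a given height."""
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:      # shifting past 63 bits would overflow in the C++ original
        return 0
    return (50 * COIN) >> halvings

# A lite client that fetches the suspect block can check:
#   sum(coinbase outputs) <= block_subsidy(height) + fees
# but computing `fees` needs the value of every spent input, i.e. UTXO knowledge.
```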
submitted by bch_ftw to btc [link] [comments]

PROPOSAL: Lite Client Compatible Onchain Unique Name System

What I am proposing is a system that allows a user to register his name on-chain in such a way that it is trivial for someone running solely an SPV wallet to verify the authenticity of their claim to ownership of that name. The protocol can also be easily expanded to allow transfer of ownership of the name; although that will not be covered in detail here, it is very much achievable and, importantly, it does not require any extra or specialised server in order to operate.
This could have many uses within the Bitcoin Cash ecosystem as it provides a way for users to be able to remember and identify each other by their names and not their addresses or stealth addresses which substantially improves usability and allows for Bitcoin Cash to be an even better electronic cash and widely used means of payment.
The problem with most designs for an onchain protocol for name registration is the difficulty for those who do not have the capacity to scan through the full blockchain (i.e those using an SPV wallet) to verify a claim to ownership of a name.
This is because, assuming that name ownership is awarded by the rules of the protocol on a first-in-first-serve basis, one must be sure that any given claim in a transaction on the blockchain to ownership of a name is in fact the first claim to ownership of that name. This can only be ensured, usually, by going and looking at all other transactions that have the potential to include registration information and ensuring that none previously include a claim to this name.
The reason this poses a difficulty for a lite client is that, by definition, it does not know the entire history of the blockchain and so cannot easily be sure of the authenticity of any claim to a name. This has led to the current state of onchain name registration systems: having to trade off between requiring a server that scans the blockchain to verify the authenticity of a name, or accepting non-unique names, as we see in cashaccounts, for instance.
The solution being proposed solves this problem of lite client verification of a name ownership proof by forcing, through the protocol's rules, that all claims to ownership of a particular name are made using an output directly to a specific "burner" address that is deterministically generated from the name itself; for instance it could be the name plus an agreed-upon padding until the end of the address (besides the checksum).
So what this means is that any claim to ownership of a name is a transaction that has a p2pkh output that sends to an address that is generated from the name being registered, so such an address for the name "bob" might look like: 1bobfffffffffffffffffffffffffffff[checksum]
Said output would also contain an op_return that includes whatever string, address or stealth address the user wants to associate with this name.
What is achieved by forcing the output's p2pkh target address to be this deterministically calculated address is that now all that is required in order to verify the authenticity of a claim to it is to simply ask a full node or spv server to return all transactions sent to that address and to ensure that the first transaction sent to it (that contains an op_return in the same output, as defined above) is the one given to them as the claim to ownership. It seems Reddit is limiting the size of this post so read the rest here: https://docs.google.com/document/d/1cHv-bBxnijexohF7x-i_RICIncbzpFrxBeoXMSVtuCk/edit?usp=drivesdk
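To illustrate the idea (not the exact protocol: the padding byte, version byte, and the use of the 20-byte hash160 field are assumptions layered on standard Base58Check encoding), a name-to-burner-address derivation might look like:

```python
import hashlib

BASE58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(payload: bytes) -> str:
    """Standard Base58Check: payload + 4-byte double-SHA256 checksum."""
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    data = payload + checksum
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = BASE58[r] + out
    # preserve leading zero bytes as '1' characters
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

def name_burner_address(name: str) -> str:
    """Deterministic 'burner' P2PKH address for `name` (padding scheme is an assumption)."""
    body = name.encode("ascii")
    assert len(body) <= 20, "name must fit in the 20-byte hash160 field"
    fake_hash160 = body + b"\xff" * (20 - len(body))  # name + agreed padding
    return base58check(b"\x00" + fake_hash160)         # 0x00 = mainnet P2PKH version byte

addr_bob = name_burner_address("bob")
```

Because the derivation is deterministic, every wallet computes the same address for "bob", and since no one knows a private key hashing to that padded value, outputs sent there are provably burned.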
Let me know what you think/if you have any questions :)
submitted by soundmoneyio to btc [link] [comments]

What happens if the blocksize stays at 1MB?

With the current impasse that Bitcoin finds itself in, what are the theoretical and pragmatic implications of the blockchain staying at 1MB for the long term future?
Will this simply mean that transactions will take increasingly longer confirmation times? Or will there be a point where the network will simply not be able to cope with the number of transactions (if the transaction rate continues to increase). Would there be any other implications. I have looked online but haven't been able to find any answers for the blockchain illiterate that explain this.
submitted by thewanderingsalmon to Bitcoin [link] [comments]

An explanation of 0-fee transactions in BCH

This post does not address mobile wallets or SPV wallets.

I am unsure about the ABC GUI wallet but the BitcoinUnlimited GUI wallet has the option to send a transaction with no fee. However, there is a priority requirement enforced by the BCH network that must be met for that free transaction to be included into a block by default (miners can override this but i wont get into that here).

PLEASE NOTE: The following calculation is not 100% correct and the reality is a bit more complicated but this serves as a good general guideline to follow to know if your tx will be accepted with no fee.

To be included in a block with no fee the transaction must destroy at least 1 coinday (57.600.000 "priority points"). The main reason for this is to prevent spam.

You can find how many coindays a transaction will destroy using the following formula: sum up the "priority points" of each input by multiplying the age of the input in blocks with its value in satoshi. Add the "priority points" of all inputs together, then divide by the size of the tx in bytes (example below). If the result of that equation is greater than 1 coinday (57.600.000), then a miner that supports free transactions under the default configuration will include it in the next block they mine.
The 57.600.000 is the result of COIN * 144 / 250. COIN is the number of satoshis that make up 1 whole bitcoin (100.000.000). 144 is how many blocks are mined in a day on average (6*24). 250 is a number that has to do with the transaction's size but i wont get into that right now.

Example: if your tx spends 2 inputs, the first of which is 10 blocks old and has a value of 10.000 satoshi and the second is 20 blocks old and has a value of 7.500 satoshi and the size of the tx is 212 bytes then the transactions priority is (10 * 10.000) + (20 * 7.500) = 250.000 / 212 = ~1.179,25. 1.179,25 is clearly less than 57.600.000 and will need to sit in the mempool for some time to earn enough points to be included in a block. In general the formula favors older transactions that are smaller in size.
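The guideline above can be sketched directly (variable names are mine; real node code computes dPriority slightly differently, as the text notes):

```python
COIN = 100_000_000                       # satoshis in 1 whole coin
PRIORITY_THRESHOLD = COIN * 144 // 250   # 57.600.000 "priority points" = 1 coinday

def tx_priority(inputs, tx_size_bytes):
    """inputs: list of (age_in_blocks, value_in_satoshi) pairs."""
    points = sum(age * value for age, value in inputs)
    return points / tx_size_bytes

def is_free_relay_eligible(inputs, tx_size_bytes):
    return tx_priority(inputs, tx_size_bytes) >= PRIORITY_THRESHOLD

# The worked example from the text: two inputs, 212-byte transaction.
p = tx_priority([(10, 10_000), (20, 7_500)], 212)  # ≈ 1179.25 points, far below 1 coinday
```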

I hope this explanation helps you understand the free relay system a little bit better.

Edit: "priority points" is in quotes because its a term i just made up for the value so this explanation was easier to understand. internally in the code it is typically just called dPriority.
Edit for more information: - By default, miners allocate a small percentage of the block (iirc it's 5% by default) for free transactions
- this is another less mathy description of the bitcoin days destroyed concept https://en.bitcoin.it/wiki/Bitcoin_Days_Destroyed.
submitted by GregGriffith to btc [link] [comments]

Mentor Monday, December 19, 2016: Ask all your bitcoin questions!

Ask (and answer!) away! Here are the general rules:
And don't forget to check out /BitcoinBeginners
You can sort by new to see the latest questions that may not be answered yet.
submitted by BashCoBot to Bitcoin [link] [comments]

Could the "Segregated Witness" part of the "Sidechain Elements" project be used to solve the Bitcoin tx malleability problem ?

"As transaction IDs no longer cover the signatures, they remove all forms of transaction malleability, in a much more fundamental way than BIP62 aims to do. This results in a larger class of multi-clause contract constructs that become safe to use."
Could it be in some way backported to Bitcoin ?
submitted by mably to Bitcoin [link] [comments]

Neutro Yellow Paper — “Simplified Payment Verification nodes” chapter..

This is part two of our short series, and here we'll try to explain in simple terms how I, operating a lightweight node, can verify the correctness of the chain and compare two or more seemingly valid chains given to me by my peers. Simplified Payment Verification is a term borrowed from Bitcoin. The meaning is the same; the method is different though.
In Bitcoin I can store on my device just block headers. It’s easy to calculate the PoW difficulty combined for all blocks as PoW itself is difficult and costly to perform, but easy and cheap to verify. Because in Neutro our focal point of consensus is relative Proof of Work combined for all blocks, and to know the relative Proof of Work we must know how many votes (or rather votes cast using how many validation tokens) are included in each block, we must verify a lot of votes for each block. That is a problem.
But what if in every block there was a simplified “list” of votes in the same order as they are included in the Merkle tree within that block? Let’s say I want to create a false chain. I include this list in every block of my chain and the proper Merkle tree with votes. In order to raise my blockchain’s relative Proof of Work, I must include some “artificial” (created by me in order to cheat the network) votes. I can try to cheat the SPV nodes by either:
a) including in my block’s “list” of votes false values that do not exist in the Merkle tree
b) including in the Merkle tree votes that are false or otherwise incorrect
As an SPV node I can use the following method of “random questions”. As I download a block, I see a list of votes. First thing I should do is to calculate if, assuming that list is not falsified, the relative Proof of Work for that block would result in a value that is declared within that block. If it wouldn’t — it’s an incorrect chain. If it would — I now ask for random votes from that list and check their correctness (value, signature, adherence to protocol’s rules). Let’s say there are 20 votes in the list. I ask for votes number 3,8,19,20. My peer “shows” me them by providing me with the correct Merkle path (as they are stored in the form of a Merkle tree).
What’s the motive behind that? The more someone wants to cheat the network, the more false votes he will include in his chain. The more false votes he includes, the more likely SPV nodes are to detect his lie. This method allows an SPV node to ask for and verify only a small portion of data. But, because of the logic presented above, the probability of catching a liar grows with the “size” of his lie. If an adversary produces just a small number of false votes within his chain, it won’t help him much. If he produces a lot of false votes, the probability of him cheating any SPV node decreases with every false vote included in his chain. This probability, with proper parameters, is extremely low given even a few percent of false votes. So this is the method of easy verification of the main-chain.
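The catch-a-liar intuition can be quantified with a simple sketch: if a fraction f of the listed votes is false and an SPV node samples k votes uniformly at random, the cheater goes undetected with probability (1 - f)^k per block (the concrete numbers below are illustrative, not Neutro's parameters):

```python
def prob_undetected(false_fraction: float, samples: int) -> float:
    """Chance that `samples` uniformly random vote checks all miss the false votes."""
    return (1.0 - false_fraction) ** samples

# A cheater faking 5% of a block's votes, against 4 random checks per block:
per_block = prob_undetected(0.05, 4)   # about 0.81 for a single block
over_chain = per_block ** 100          # vanishingly small over 100 blocks
```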
Once I’ve verified the main-chain, it’s easy to verify if a given transaction was performed in a given block of a given shard-chain. I just need to follow the Merkle path from a given main-chain block to a given shard-chain block header, then starting from that shard-chain block header (or Merkle root included in it, to be precise) to data within that shard-chain block that I need.
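The "follow the Merkle path" step is a standard inclusion-proof check; a sketch (the hash function and path encoding are assumptions on my part, Neutro's actual format may differ):

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def verify_merkle_path(leaf: bytes, path, root: bytes) -> bool:
    """path: list of (sibling_hash, sibling_is_left) pairs in leaf-to-root order."""
    node = h(leaf)
    for sibling, sibling_is_left in path:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

# Tiny 4-leaf tree built by hand:
leaves = [b"vote0", b"vote1", b"vote2", b"vote3"]
l = [h(x) for x in leaves]
n01, n23 = h(l[0] + l[1]), h(l[2] + l[3])
root = h(n01 + n23)
assert verify_merkle_path(b"vote2", [(l[3], False), (n01, True)], root)
```

The same check chains across trees: verify the shard-chain block header against the main-chain block's root, then the transaction against the shard-chain header's root.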
Source: https://medium.com/@neutroprotocol/explain-it-to-me-like-im-14-and-i-know-how-bitcoin-works-251343781336
submitted by ceydakaraelma to ICOAnalysis [link] [comments]

Decred Journal – August 2018

Note: you can read this on GitHub (link), Medium (link) or old Reddit (link) to see all the links.


dcrd: Version 1.3.0 RC1 (Release Candidate 1) is out! The main features of this release are significant performance improvements, including some that benefit SPV clients. Full release notes and downloads are on GitHub.
The default minimum transaction fee rate was reduced from 0.001 to 0.0001 DCR/kB. Do not try to send such small fee transactions just yet, until the majority of the network upgrades.
Release process was changed to use release branches and bump version on the master branch at the beginning of a release cycle. Discussed in this chat.
The codebase is ready for the new Go 1.11 version. Migration to vgo module system is complete and the 1.4.0 release will be built using modules. The list of versioned modules and a hierarchy diagram are available here.
The testnet was reset and bumped to version 3.
Comments are welcome for the proposal to implement smart fee estimation, which is important for Lightning Network.
@matheusd recorded a code review video for new Decred developers that explains how tickets are selected for voting.
dcrwallet: Version 1.3.0 RC1 features new SPV sync mode, new ticket buyer, new APIs for Decrediton and a host of bug fixes. On the dev side, dcrwallet also migrated to the new module system.
Decrediton: Version 1.3.0 RC1 adds the new SPV sync mode that syncs roughly 5x faster. The feature is off by default while it receives more testing from experienced users. Other notable changes include a design polish and experimental Politeia integration.
Politeia: Proposal editing is being developed and has a short demo. This will allow proposal owners to edit their proposal in response to community feedback before voting begins. The challenges associated with this feature relate to updating censorship tokens and maintaining a clear history of which version comments were made on. @fernandoabolafio produced this architecture diagram which may be of interest to developers.
@degeri joined to perform security testing of Politeia and found several issues.
dcrdata: mainnet explorer upgraded to v2.1 with several new features. For users: credit/debit tx filter on address page, showing miner fees on coinbase transaction page, estimate yearly ticket rewards on main page, cool new hamburger menu and keyboard navigation. For developers: new chain parameters page, experimental Insight API support, endpoints for coin supply and block rewards, testnet3 support. Lots of minor API changes and frontend tweaks, many bug fixes and robustness improvements.
The upcoming v3.0 entered beta and is deployed on beta.dcrdata.org. Check out the new charts page. Feedback and bug reports are appreciated. Finally, the development version v3.1.0-pre is on alpha.dcrdata.org.
Android: updated to be compatible with the latest SPV code and is syncing, several performance issues are worked on. Details were posted in chat. Alpha testing has started, to participate please join #dev and ask for the APK.
iOS: backend is mostly complete, as well as the front end. Support for devices with smaller screens was improved. What works now: creating and recovering wallets, listing of transactions, receiving DCR, displaying and scanning QR codes, browsing account information, SPV connection to peers, downloading headers. Some bugs need fixing before making testable builds.
Ticket splitting: v0.6.0 beta released with improved fee calculation and multiple bug fixes.
docs: introduced new Governance section that grouped some old articles as well as the new Politeia page.
@Richard-Red created a concept repository sandbox with policy documents, to illustrate the kind of policies that could be approved and amended by Politeia proposals.
decred.org: 8 contributors added and 4 removed, including 2 advisors (discussion here).
decredmarketcap.com is a brand new website that shows the most accurate DCR market data. Clean design, mobile friendly, no javascript required.
Dev activity stats for August: 239 active PRs, 219 commits, 25k added and 11k deleted lines spread across 8 repositories. Contributions came from 2-10 developers per repository. (chart)


Hashrate: went from 54 to 76 PH/s, the low was 50 and the new all-time high is 100 PH/s. BeePool share rose to ~50% while F2Pool shrank to 30%, followed by coinmine.pl at 5% and Luxor at 3%.
Staking: 30-day average ticket price is 95.6 DCR (+3.0) as of Sep 3. During the month, ticket price fluctuated between a low of 92.2 and high of 100.5 DCR. Locked DCR represented between 3.8 and 3.9 million or 46.3-46.9% of the supply.
Nodes: there are 217 public listening and 281 normal nodes per dcred.eu. Version distribution: 2% at v1.4.0(pre) (dev builds), 5% on v1.3.0 (RC1), 62% on v1.2.0 (-5%), 22% on v1.1.2 (-2%), 6% on v1.1.0 (-1%). Almost 69% of nodes are v1.2.0 and higher and support client filters. Data snapshot of Aug 31.


Obelisk posted 3 email updates in August. DCR1 units are reportedly shipping with 1 TH/s hashrate and will be upgraded with firmware to 1.5 TH/s. Batch 1 customers will receive compensation for missed shipment dates, but only after Batch 5 ships. Batch 2-5 customers will be receiving the updated slim design.
Innosilicon announced the new D9+ DecredMaster: 2.8 TH/s at 1,230 W priced $1,499. Specified shipping date was Aug 10-15.
FFMiner DS19 claims 3.1 TH/s for Blake256R14 at 680 W and simultaneously 1.55 TH/s for Blake2B at 410 W, the price is $1,299. Shipping Aug 20-25.
Another newly noticed miner offer is this unit that does 46 TH/s at 2,150 W at the price of $4,720. It is shipping Nov 2018 and the stats look very close to Pangolin Whatsminer DCR (which has now a page on asicminervalue).


www.d1pool.com joined the list of stakepools for a total of 16.
Australian CoinTree added DCR trading. The platform supports fiat, there are some limitations during the upgrade to a new system but also no fees in the "Early access mode". On a related note, CoinTree is working on a feature to pay household bills with cryptocurrencies it supports.
Three new OTC desks were added to exchanges page at decred.org.
Two mobile wallets integrated Decred:
Reminder: do your best to understand the security and privacy model before using any wallet software. Points to consider: who controls the seed, does the wallet talk to the nodes directly or via middlemen, is it open source or not?




Targeted advertising report for August was posted by @timhebel. Facebook appeal is pending, some Google and Twitter campaigns were paused and some updated. Read more here.
Contribution to the @decredproject Twitter account has evolved over the past few months. A #twitter_ops channel is being used on Matrix to collaboratively draft and execute project account tweets (including retweets). Anyone with an interest in contributing to the Twitter account can ask for an invitation to the channel and can start contributing content and ideas there for evaluation by the Twitter group. As a result, no minority or unilateral veto over tweets is possible. (from GitHub)


For those willing to help with the events:
BAB: Hey all, we are gearing up for conference season. I have a list of places we hope to attend but need to know who besides @joshuam and @Haon are willing to do public speaking, willing to work booths, or help out at them? You will need to be well versed on not just what is Decred, but the history of Decred etc... DM me if you are interested. (#event_planning)
The Decred project is looking for ambassadors. If you are looking for a fun cryptocurrency to get involved in send me a DM or come talk to me on Decred slack. (@marco_peereboom, longer version here)


Decred Assembly episode 21 is available. @jy-p and lead dcrwallet developer @jrick discussed SPV from Satoshi's whitepaper, how it can be improved upon and what's coming in Decred.
Decred Assembly episodes 1-21 are available in audio only format here.
New instructional articles on stakey.club: Decrediton setup, Deleting the wallet, Installing Go, Installing dcrd, dcrd as a Linux service. Available in both English and Portuguese.
Decred scored #32 in the August issue of Chinese CCID ratings. The evaluation model was explained in this interview.
Satis Group rated Decred highly in their cryptoasset valuation research report (PDF). This was featured by several large media outlets, but some did not link to or omitted Decred entirely, citing low market cap.
Featured articles:

Community Discussions

Community stats:
Comm systems news:
After another debate about chat systems more people began testing and using Matrix, leading to some gardening on that platform:
Reddit: substantive discussion about Decred cons; ecosystem fund; a thread about voter engagement, Politeia UX and trolling; idea of a social media system for Decred by @michae2xl; how profitable is the Obelisk DCR1.
Chats: cross-chain trading via LN; plans for contractor management system, lower-level decision making and contractor privacy vs transparency for stakeholders; measuring dev activity; what if the network stalls, multiple implementations of Decred for more resilience, long term vision behind those extensive tests and accurate comments in the codebase; ideas for process for policy documents, hosting them in Pi and approving with ticket voting; about SPV wallet disk size, how compact filters work; odds of a wallet fetching a wrong block in SPV; new module system in Go; security of allowing Android app backups; why PoW algo change proposal must be specified in great detail; thoughts about NIPoPoWs and SPV; prerequisites for shipping SPV by default (continued); Decred vs Dash treasury and marketing expenses, spending other people's money; why Decred should not invade a country, DAO and nation states, entangling with nation state is poor resource allocation; how winning tickets are determined and attack vectors; Politeia proposal moderation, contractor clearance, the scale of proposals and decision delegation, initial Politeia vote to approve Politeia itself; chat systems, Matrix/Slack/Discord/RocketChat/Keybase (continued); overview of Korean exchanges; no breaking changes in vgo; why project fund burn rate must keep low; asymptotic behavior of Decred and other ccs, tail emission; count of full nodes and incentives to run them; Politeia proposal translations and multilingual environment.
An unusual event was the chat about double negatives and other oddities in languages in #trading.


DCR started the month at USD 56 / BTC 0.0073 and had a two week decline. On Aug 14 the whole market took a huge drop and briefly went below USD 200 billion. Bitcoin went below USD 6,000 and top 100 cryptos lost 5-30%. The lowest point coincided with Bitcoin dominance peak at 54.5%. On that day Decred dived -17% and reached the bottom of USD 32 / BTC 0.00537. Since then it went sideways in the USD 35-45 / BTC 0.0054-0.0064 range. Around Aug 24, Huobi showed DCR trading volume above USD 5M and this coincided with a minor recovery.
@ImacallyouJawdy posted some creative analysis based on ticket data.

Relevant External

StopAndDecrypt published an extensive article "ASIC Resistance is Nothing but a Blockchain Buzzword" that is much in line with Decred's stance on ASICs.
The ongoing debates about the possible Sia fork yet again demonstrate the importance of a robust dispute resolution mechanism. Also, we are lucky to have the treasury.
Mark B Lundeberg, who found a vulnerability in atomicswap earlier, published a concept of more private peer-to-peer atomic swaps. (missed in July issue)
Medium took a cautious stance on cryptocurrencies and triggered at least one project to migrate to Ghost (that same project previously migrated away from Slack).
Regulation: Vietnam bans mining equipment imports, China halts crypto events and tightens control of crypto chat groups.
Reddit was hacked by intercepting 2FA codes sent via SMS. The announcement explains the impact. Yet another data breach suggests to think twice before sharing any data with any company and shift to more secure authentication systems.
Intel and x86 dumpsterfire keeps burning brighter. Seek more secure hardware and operating systems for your coins.
Finally, unrelated to Decred but good for a laugh: yetanotherico.com.

About This Issue

This is the 5th issue of Decred Journal. It is mirrored on GitHub, Medium and Reddit. Past issues are available here.
Most information from third parties is relayed directly from source after a minimal sanity check. The authors of Decred Journal have no ability to verify all claims. Please beware of scams and do your own research.
Feedback is appreciated: please comment on Reddit, GitHub or #writers_room on Matrix or Slack.
Contributions are welcome too. Some areas are content collection, pre-release review, and translations to other languages. Check out @Richard-Red's guide how to contribute to Decred using GitHub without writing code.
Credits (Slack names, alphabetical order): bee, Haon, jazzah, Richard-Red and thedecreddigest.
submitted by jet_user to decred [link] [comments]

My List of Crypto Tax Questions

1.) Are crypto transactions pre-2018 like-kind transactions?
2.) If you invested in an ICO through a SAFT, what is the cost basis for the token?
3.) Can you participate in a syndicate and pool money as an investment club? SPV? LP?
3a.) If someone invests through a SAFT, but pools money, is the person signing the SAFT responsible for all taxes or can they distribute the taxes to pool members? How?
3b.) Is the taxable event when you send to the pool or when the syndicate leader sends to the SAFT address?
4.) How do wash sales work for crypto?
5.) How do you report transactions from a DEX if it doesn’t store records of transactions?
6.) How do you report privacy coin transactions?
7.) If there is a theft or loss, is it based on your original cost basis or the fair market value at the time of loss?
8.) Do you have to file a FBAR for using a foreign exchange? For using a hardware wallet?
9.) Can you share a wallet with a friend or family member?
10.) Is giving a gift of 1 ETH first considered cashing out into USD and therefore a taxable event before the gift?
11.) Are you able to choose between FIFO, LIFO, HIFO, or Specific Identification?
12.) If you want to cash out a token into USD but you first have to go through ETH to cash out on Coinbase, does FIFO apply in this situation, forcing you to cash out your oldest ETH?
13.) Are unwanted airdrops taxable? What is the cost basis? What if you are spammed with unwanted airdrops?
14.) Is the Bitcoin Cash cost basis when the fork happened or when it was airdropped by Coinbase? Is the cost basis for a forked coin always $0?
15.) Can you file as a Section 475(f) MTM day trader even though crypto is considered property?
16.) Is changing a place holder token like EOS into a main net coin a taxable event?
17.) How do you treat transaction fees for just sending crypto between wallets?
18.) Are transfers considered taxable events? Coinbase seems to think so.
19.) How would using a ETH lending platform like SALT, Sweetbridge, or Maker, be looked at tax wise? Is it considered cashing out and a taxable event?
20.) How do you calculate taxes for margin or futures trading?
21.) Is it a company expense if you use utility tokens like ETH to make your dapp function?
22.) Can you write off investing in a non profit coin on your taxes?
23.) What is the cost basis for someone that was given a Bitcoin? What is the cost basis for someone who inherits a Bitcoin?
23a.) What if the donors basis was higher than the market value of the Bitcoin at the time of gift and there was a capital loss?
24.) What is the cost basis for crypto donations? If you bought BTC at 20k and now it is 8k, can you claim a deduction for the 20k cost basis?
25.) Is it possible to invest in crypto through a Self-Directed Roth IRA so you don’t have to pay any taxes on capital gains one day?
26.) Do you have to file a Section 83(b) election for all tokens received as income, including advisor tokens?
27.) Is buying a coffee with Bitcoin a taxable event?
submitted by speedyarrow415 to ethtrader [link] [comments]

I keep reading people say bitcoin development is stalled

But in practice there's more going on right now than there's ever been in the last few years. You just have to look in the right places. Here are a few days of documented github activity from the bitcoin slack, and I've a feeling there are hundreds more people working on Bitcoin projects outside of the work being done by core:
github BOT [6:28 PM] [bitcoin:master] 2 new commits by Daniel Kraft and 1 other: f93c2a1 net: Avoid duplicate getheaders requests. - Daniel Kraft 8e8bebc Merge #8054: net: Avoid duplicate getheaders requests. - Wladimir J. van der Laan
[6:28] [bitcoin/bitcoin] Pull request closed: #8054 net: Avoid duplicate getheaders requests. by laanwj
[6:31] [bitcoin:master] 6 new commits by Pieter Wuille and 1 other: d253ec4 Make ProcessNewBlock dbp const and update comment - Pieter Wuille 316623f Switch reindexing to AcceptBlock in-loop and ActivateBestChain afterwards - Pieter Wuille fb8fad1 Optimize ActivateBestChain for long chains - Pieter Wuille d3d7547 Add -reindex-chainstate that does not rebuild block index - Pieter Wuille b4d24e1 Report reindexing progress in GUI - Pieter Wuille Show more...
[6:31] [bitcoin/bitcoin] Pull request closed: #7917 Optimize reindex by laanwj
Joshua Unseth [9:55 PM] joined #commit-activity. Also, @sjors joined and left.
----- May 19th -----
github BOT [12:08 AM] [bitcoin/bitcoin] Pull request submitted by EthanHeilman

#8070 Remove non-determinism which is breaking net_tests #8069

If addrmanUncorrupted does not have the same nKey every time it will map addrs to different bucket positions and occasionally cause a collision between two addrs, breaking the test.
github BOT [1:00 AM] [bitcoin/bitcoin] Pull request closed: #7716 [0.11] Backport BIP9 and softfork for BIP's 68,112,113 by morcos
Eragmus You Should Probably Stop Modding [1:12 AM] joined #commit-activity. Also, @buttmunch joined, @icandothisallday joined, @misnomer joined, @coreneedstostop joined, @xchins joined, @jbeener joined, @jbleeks joined, @whalepanda joined, @grinny joined, @alex_may joined, @mr_e joined.
github BOT [2:46 PM] [bitcoin:master] 5 new commits by Warren Togami and 1 other: 00678bd Make failures to connect via Socks5() more informative and less unnecessarily scary. - Warren Togami 0d9af79 SOCKS5 connecting and connected messages with -debug=net. - Warren Togami 94fd1d8 Make Socks5() InterruptibleRecv() timeout/failures informative. - Warren Togami bf9266e Use Socks5ErrorString() to decode error responses from socks proxy. - Warren Togami 18436d8 Merge #8033: Fix Socks5() connect failures to be less noisy and less unnecessarily scary - Wladimir J. Show more...
[2:46] [bitcoin/bitcoin] Pull request closed: #8033 Fix Socks5() connect failures to be less noisy and less unnecessarily scary by laanwj
github BOT [3:56 PM] [bitcoin:master] 3 new commits by EthanHeilman and 2 others: f4119c6 Remove non-determinism which is breaking net_tests #8069 - EthanHeilman 2a8b358 Fix typo adddrman to addrman as requested in #8070 - Ethan Heilman 7771aa5 Merge #8070: Remove non-determinism which is breaking net_tests #8069 - Wladimir J. van der Laan
[3:56] [bitcoin/bitcoin] Pull request closed: #8070 Remove non-determinism which is breaking net_tests #8069 by laanwj
github BOT [5:18 PM] [bitcoin/bitcoin] Pull request submitted by MarcoFalke

#8072 travis: 'make check' in parallel and verbose

• 'make check' in parallel, since the log will take care of clean output • 'make check' verbose, so that test failure causes aren't hidden
Fixes: #8071
github BOT [7:56 PM] [bitcoin/bitcoin] Pull request submitted by rat4

#8073 qt: askpassphrasedialog: Clear pass fields on accept

This is usability improvement in a case if user gets re-asked passphrase. (e.g. made a typo)
Victor Broman [8:01 PM] joined #commit-activity. Also, @bb joined, @ziiip joined.
----- May 20th -----
github BOT [12:34 PM] [bitcoin/bitcoin] Pull request submitted by jsantos4you

#8075 0.12

[12:37] [bitcoin/bitcoin] Pull request closed: #8075 0.12 by sipa
github BOT [3:37 PM] [bitcoin/bitcoin] Pull request closed: #7082 Do not absolutely protect local peers and make eviction more aggressive. by gmaxwell
github BOT [3:44 PM] [bitcoin:master] 2 new commits by Cory Fields and 1 other: 401ae65 travis: 'make check' in parallel and verbose - Cory Fields 1b87e5b Merge #8072: travis: 'make check' in parallel and verbose - MarcoFalke
[3:44] [bitcoin/bitcoin] Pull request closed: #8072 travis: 'make check' in parallel and verbose by MarcoFalke
github BOT [3:58 PM] [bitcoin/bitcoin] Pull request closed: #7093 Address mempool information leak and resource wasting attacks. by gmaxwell
github BOT [6:11 PM] [bitcoin/bitcoin] Pull request submitted by sdaftuar

#8076 VerifyDB: don't check blocks that have been pruned

If a pruning node ends up in a state where it has very few blocks on disk, then a node could fail to start up in VerifyDB. This pull changes the behavior for pruning nodes, so that we will just not bother trying to check blocks that have been pruned.
I don't expect this edge case to be triggered much in practice currently; this is a preparatory commit for segwit (to deal with the case of pruning nodes that upgrade after segwit activation).
Erik Hedman [6:20 PM] joined #commit-activity
github BOT [8:46 PM] [bitcoin/bitcoin] Pull request submitted by jtimon

#8077 Consensus: Decouple from chainparams.o and timedata.o

Do it for the consensus-critical functions:
• CheckBlockHeader • CheckBlock • ContextualCheckBlockHeader Show more...
github BOT [9:26 PM] [bitcoin:master] 3 new commits by MarcoFalke: fac9349 [qa] Remove hardcoded "4 nodes" from test_framework - MarcoFalke fad68f7 [qa] Reduce node count for some tests - MarcoFalke 8844ef1 Merge #8056: [qa] Remove hardcoded "4 nodes" from test_framework - MarcoFalke
[9:27] [bitcoin/bitcoin] Pull request closed: #8056 [qa] Remove hardcoded "4 nodes" from test_framework by MarcoFalke
github BOT [9:48 PM] [bitcoin/bitcoin] Pull request submitted by petertodd

#8078 Disable the mempool P2P command when bloom filters disabled

Only useful to SPV peers, and attackers... like bloom is a DoS vector as far more data is sent than received.
null radix [10:15 PM] joined #commit-activity
github BOT [11:34 PM] [bitcoin:master] 2 new commits by MarcoFalke: fab5233 [qa] test_framework: Set wait-timeout for bitcoind procs - MarcoFalke 37f9a1f Merge #8047: [qa] test_framework: Set wait-timeout for bitcoind procs - MarcoFalke
[11:34] [bitcoin/bitcoin] Pull request closed: #8047 [qa] test_framework: Set wait-timeout for bitcoind procs by MarcoFalke
github BOT [11:48 PM] [bitcoin/bitcoin] Pull request closed: #7826 [Qt] show conflicts of unconfirmed transactions in the UI by jonasschnelli
[11:50] [bitcoin/bitcoin] Pull request re-opened: #7826 [Qt] show conflicts of unconfirmed transactions in the UI by jonasschnelli
----- May 21st ----- Rentaro Matsukata [1:56 AM] joined #commit-activity. Also, @evilone joined, @cryptop joined, @thomas5 joined.
github BOT [1:54 PM] [bitcoin/bitcoin] Pull request submitted by gmaxwell

#8080 Do not use mempool for GETDATA for tx accepted after the last mempool req.

The ability to GETDATA a transaction which has not (yet) been relayed is a privacy loss vector.
The use of the mempool for this was added as part of the mempool p2p message and is only needed to fetch transactions returned by it.
github BOT [5:48 PM] [bitcoin/bitcoin] Pull request submitted by gmaxwell

#8082 Defer inserting into maprelay until just before relaying.

Also extend the relaypool lifetime by 1 minute (6%) to 16 minutes.
This reduces the rate of not founds by better matching the far end expectations, it also improves privacy by removing the ability to use getdata to probe for a node having a txn before Show more...
Sergey Ukustov [9:17 PM] joined #commit-activity. Also, @stoicism joined.
----- Yesterday May 22nd, 2016 -----
github BOT [5:59 AM] [bitcoin/bitcoin] Pull request submitted by jonasschnelli

#8083 Add support for dnsseeds with option to filter by servicebits

Opposite part of https://github.com/sipa/bitcoin-seedepull/36. Including new testnet seed that supports filtering.
Required for SW #7910.
Junseth Sock Puppet Account [6:13 AM] joined #commit-activity
github BOT [1:59 PM] [bitcoin/bitcoin] Pull request submitted by gmaxwell

#8084 Add recently accepted blocks and txn to AttemptToEvictConnection.

This protect any not-already-protected peers who were the most recent to relay transactions and blocks to us.
This also takes increases the eviction agressiveness by making it willing to disconnect a netgroup with only one member.
github BOT [5:04 PM] [bitcoin/bitcoin] Pull request submitted by theuni

#8085 p2p: Begin encapsulation

This work creates CConnman. The idea is to begin moving data structures and functionality out of globals in net.h and into an instanced class, in order to avoid side-effects in networking code. Eventually, an (internal) api begins to emerge, and as long as the conditions of that api are met, the inner-workings may be a black box.
For now (for ease), a single global CConnman is created. Down the road, the instance could be passed around instead. Also, CConnman should be moved out of net.h/net.cpp, Show more...
github BOT [5:14 PM] [bitcoin/bitcoin] Pull request submitted by sipa

#8086 Use SipHash for node eviction

github BOT [5:50 PM] [bitcoin/bitcoin] Pull request closed: #6844 [REST] Add send raw transaction by lclc
----- Today May 23rd, 2016 ----- yannie888 [5:21 AM] joined #commit-activity. Also, @myco joined, @er_sham joined, @ethdealer joined.
github BOT [3:23 PM] [bitcoin/bitcoin] Pull request submitted by pstratem

#8087 Introduce CBlockchain and move CheckBlockHeader

[3:23] [bitcoin/bitcoin] Pull request submitted by pstratem

#8088 Avoid recalculating vchKeyedNetGroup in eviction logic.

Lazy calculate vchKeyedNetGroup in CNode::GetKeyedNetGroup.
submitted by BillyHodson to Bitcoin [link] [comments]

AMA with Wanchain VP Lini

Original article here: https://medium.com/wanchain-foundation/ama-with-wanchain-vp-lini-58ada078b4fe

“What is unique about us is that we have actually put theory into practice.”
— Lini
Wanchain’s Vice President of Business Development, Lini, sat down with blockchain media organization Neutrino for an AMA covering a wide range of topics concerning Wanchain’s development.
The following is an English translation of the original Chinese AMA which was held on December 13th, 2018:
Neutrino: Could you please first share with us a little basic background, what are the basic concepts behind cross chain technology? What are the core problems which are solved with cross-chain? In your opinion, what is the biggest challenge of implementing cross chain to achieve value transfer between different chains?
Lini: Actually, this question is quite big. Let me break it down into three smaller parts:
  1. First, what is the meaning of “cross-chain”?
In China, we like to use the word “cross-chain”, the term “interoperability” is used more frequently in foreign countries. Interoperability is also one of the important technologies identified by Vitalik for the development of a future blockchain ecosystem mentioned in the Ethereum white paper. So cross-chain is basically the concept of interoperability between chains.
  2. The core problem solved by cross-chain is that of "multi-ledger" synchronous accounting
In essence, blockchain is a distributed bookkeeping technique, also known as distributed ledger technology. Tokens are the core units of account on each chain, and there currently exist many different chains, each with its own token. Especially important is how these ledgers use tokens to interact with one another for clearing and settlement.
  3. The core purpose of cross-chain technology is to serve as one of the key infrastructures of a future economy based on digital currencies.
Cross chain technology is one of the foundational technological infrastructures that is necessary for the large scale application of blockchain technology.
Neutrino: As we all know, there are many different kinds of cross-chain technologies. Please give us a brief introduction to several popular cross-chain technologies on the market, and the characteristics of each of these technologies。
Lini: Before answering this question, it is very important to share two important concepts with our friends: heterogeneity and homogeneity, and centralization and decentralization.
These two points are especially important for understanding various cross-chain technologies, because there are many different technologies and terminologies, and these are some of the foundational concepts needed for understanding them.
There are also two core challenges which must be overcome to implement cross-chain:
Combining the above two points, we look at the exploration of some solutions in the industry and the design concepts of other cross-chain projects.
First I’d like to discuss the Relay solution.
However, the Relay solution must consume a relatively large amount of gas to read BTC block headers. Another downside is that Bitcoin's blocks are relatively slow, so the wait for verification is long: it usually takes about 10 minutes for one block to confirm, and best practice is to wait for 6 blocks.
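The header verification a relay contract performs can be sketched in a few lines. This is only an illustrative sketch of Bitcoin's proof-of-work header check (double SHA-256 compared against the nBits target), not any relay contract's actual code, which would additionally verify that each header's prev-hash links to the one before it and check Merkle proofs for transactions:

```python
import hashlib
import struct

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def bits_to_target(bits: int) -> int:
    # nBits compact encoding: the top byte is an exponent,
    # the low 3 bytes are the mantissa.
    exponent = bits >> 24
    mantissa = bits & 0xFFFFFF
    return mantissa << (8 * (exponent - 3))

def header_meets_pow(raw_header: bytes) -> bool:
    """Check an 80-byte Bitcoin block header against its own difficulty target."""
    assert len(raw_header) == 80
    (bits,) = struct.unpack_from("<I", raw_header, 72)  # nBits sits at byte offset 72
    # The header hash is interpreted as a little-endian 256-bit integer.
    return int.from_bytes(double_sha256(raw_header), "little") <= bits_to_target(bits)
```

Waiting for 6 such valid headers stacked on top of a transaction's block is the "6 confirmations" practice mentioned above; running this on-chain for every header is where the gas cost comes from.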
The next concept is the idea of Sidechains.
This solution is good, but not all chains support SPV, a simple verification method, so there are certain drawbacks. Of course, this two-way peg approach solves challenge beta, the atomicity of the transaction, very well.
These two technical concepts have already been incorporated into a number of existing cross chain projects. Let’s take a look at two of the most influential of these.
The first is Polkadot.
This is just a summary based on Polkadot’s whitepaper and most recent developments. The theoretical design is very good and can solve challenges alpha and beta. Last week, Neutrino organized a meetup with Polkadot, which we attended. In his talk, Gavin’s focus was on governance, he didn’t get into too much technical detail, but Gavin shared some very interesting ideas about chain governance mechanisms! The specific technical details of Polkadot may have to wait until after their main net is online before it can be analyzed.
Next is Cosmos.
Cosmos is a star project who’s basic concept is similar to Polkadot. Cosmos’s approach is based on using a central hub. Both projects both take into account the issue of heterogeneous cross-chain transactions, and both have also taken into account how to solve challenges alpha and beta.
To sum up, each research and project team has done a lot of exploration on the best methods for implementing cross-chain technology, but many are still in the theoretical design stage. Unfortunately, since the main net has not launched yet, it is not possible to have a more detailed understanding of each project’s implementation. A blockchain’s development can be divided into two parts: theoretical design, and engineering implementation. Therefore, we can only wait until after the launch of each project’s main network, and then analyze it in more detail.
Neutrino: As mentioned in the white paper, Wanchain is a general ledger based on Ethereum, with the goal of building a distributed digital asset financial infrastructure. There are a few questions related to this. How do you solve Ethereum’s scaling problem? How does it compare with Ripple, which is aiming to be the standard trading protocol that is common to all major banks around the world? As a basic potential fundamental financial infrastructure, what makes Wanchain stand out?
Lini: This question is actually composed of two small questions. Let me answer the first one first.
  1. Considerations about TPS.
First of all, Wanchain is not developed on Ethereum. Instead, it draws on some of Ethereum’s code and excellent smart contracts and virtual machine EVM and other mature technical solutions to build the mainnet of Wanchain.
The TPS of Ethereum is not high at this stage, limited by various factors such as the POW consensus mechanism. However, this is also in part a consequence of Ethereum's highly distributed and decentralized design. Therefore, in order to improve TPS, Wanchain stated in its whitepaper that it will launch its own POS consensus, partially solving the performance issues related to TPS. Wanchain's POS is completely different from the POS mechanism of Ethereum 2.0 Casper.
Of course, at the same time, we are also paying close attention to many good proposals from the Ethereum community, such as sharding, state channels, side chains, and the Raiden network. Since blockchain exists in the world of open source, we can of course learn from other technological breakthroughs and use our own POS to further improve TPS. If we have some time at the end, I’d love to share some points about Wanchain’s POS mechanism.
  2. Concerning Ripple: it is completely different from what Wanchain hopes to do.
Ripple is focused on exchanges between different fiat pairs, the sharing of data between banks and financial institutions, as a clearing and settlement system, and also for the application of DLT, for example the Notary agent mechanism.
Wanchain is focused on different use cases, it is to act as a bridge between different tokens and tokens, and between assets and tokens. For various cross-chain applications it is necessary to consume WAN as a gas fee to pay out to nodes.
So the purposes Ripple and Wanchain serve are quite different. Of course, there are notary witnesses in the cross-chain mechanism, that is, everyone must trust the middleman. Ripple mainly serves financial clients and banks, so essentially that trust is already there.
Neutrino: We see that Wanchain uses a multi-party computation and threshold key sharing scheme for joint anchoring, and achieves “minimum cost” for integration through cross-chain communication protocols without changing the original chain’s mechanism. What are the technical characteristics of multi-party computation and threshold key sharing? How do other chains access Wanchain, and what is the cross-chain communication protocol here? What is the cost of “minimum cost”?
Lini: The answer to this question is more technical, involving a lot of cryptography, I will try to explain it in a simple way.
  1. About sMPC -
It stands for secure multi-party computation. I will explain it using an example proposed by the scholar Andrew Yao, China’s only Turing Award winner: the scenario known as Yao’s Millionaires’ Problem. How can two millionaires know who is wealthier without revealing the details of their wealth to each other or to a trusted third party? I’m not going to explain the answer in detail here, but those who are interested can do a web search to learn more.
In sMPC multiple parties each holding their own piece of private data jointly perform a calculation (for example, calculating a maximum value) and obtain a calculation result. However, in the process, each party involved does not leak any of their respective data. Essentially sMPC calculation can allow for designing a protocol without relying on any trusted third parties, since no individual ever has access to the complete private information.
Secure multi-party computation can be abstractly understood as follows: two parties, each holding their own private data, can compute the result of a public function without leaking that private data. When the computation is complete, only the result is revealed to both parties, and neither learns the other party’s data or any intermediate values of the computation. The protocols used for secure multi-party computation are built from homomorphic encryption + secret sharing + OT (oblivious transfer), plus commitment schemes, zero-knowledge proofs, and so on.
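The “jointly compute without revealing inputs” idea can be illustrated with the simplest sMPC building block, additive secret sharing. The following toy Python sketch is my own illustration, not Wanchain’s actual protocol: two parties learn the sum of their private inputs, while neither ever sees the other’s raw value.

```python
import random

P = 2**61 - 1  # public prime modulus; all arithmetic is done mod P

def share(secret, n):
    """Split `secret` into n random additive shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

alice_in, bob_in = 42, 58          # private inputs, never exchanged directly
a_shares = share(alice_in, 2)
b_shares = share(bob_in, 2)

# Each party adds the shares it holds and publishes only that partial sum.
partial_0 = (a_shares[0] + b_shares[0]) % P
partial_1 = (a_shares[1] + b_shares[1]) % P
total = (partial_0 + partial_1) % P
print(total)  # 100 -- the sum is public, the individual inputs are not
```

Each share on its own is a uniformly random number, so holding one share reveals nothing about the input; the same blinding principle, in far more elaborate form, is what lets nodes validate without reconstructing a key.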
Wanchain’s 21 cross-chain Storeman nodes use sMPC to participate in the verification of a transaction without ever obtaining a user’s complete private key. Simply put, the user’s key is split into 21 pieces given to 21 anonymous parties; each can only get its own 1/21 share and cannot reconstruct the whole key.
  2. Shamir’s secret sharing
There are often plots in movies where a top secret document needs to be handed over to, let’s say, five secret agents. To protect against the chance of an agent being arrested or betraying the rest, each of the five agents holds only part of a secret key which will reveal the contents of the document. But there is also a hidden danger: if one of the agents really is caught, how can the rest of the agents access the information in the document? At this point, you may wonder if there is any way for the agents to still recover the original text with only a portion of the keys. In other words, is there a method that allows a majority of the five people to be present to unlock the top secret document? In that case, the enemy would have to compromise more than half of the agents to learn the information in the secret document.
Wanchain uses the threshold M<=N; N=21; M=16. That is to say, at least 16 Storeman nodes must participate in multi-party calculation to confirm a transaction. Not all 21 Storeman nodes are required to participate. This is a solution to the security problem of managing private keys.
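The 16-of-21 threshold can be sketched with textbook Shamir secret sharing. This is an illustrative toy only (Wanchain’s real Storeman scheme layers sMPC on top of threshold cryptography; the field size and variable names here are my own choices):

```python
import random

P = 2**127 - 1  # a public Mersenne prime; the field the polynomial lives in

def make_shares(secret, m, n):
    """Shamir m-of-n: sample a random degree m-1 polynomial with f(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(m - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange-interpolate f(0) from any m distinct shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

secret = 123456789
shares = make_shares(secret, m=16, n=21)  # the 16-of-21 threshold from the text
print(reconstruct(random.sample(shares, 16)) == secret)  # True
```

Any 16 of the 21 shares recover the secret exactly; 15 or fewer reveal essentially nothing, which is why a minority of compromised nodes cannot act alone.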
Cross-chain communication protocols refers to the different communication methods used by different chains. This is because heterogeneous cross-chain methods can’t change the mechanisms of the original chains: Satoshi and Vitalik are not going to modify their main chains just because we need BTC and ETH interoperability. Therefore, cross-chain project teams have to create a different protocol for each chain so that the chains can “talk”, or communicate. So the essence of a cross-chain protocol is not a single standard, but multiple sets of standards. What is shared across all of them is the sMPC and threshold design of the Storeman nodes.
The minimum cost is quite low, as can be shown with Wanchain 3.0’s cross-chain implementation. In fact it requires just two smart contracts, one each on Ethereum and Wanchain, to connect the two chains. To connect with Bitcoin, all that is needed is to write a Bitcoin script. Our implementation guarantees both security and decentralization, while at the same time remaining simple and consuming less computation. Anyone interested in learning more can check out the specific Ethereum contracts and Bitcoin scripts online.
Neutrino: What kind of consensus mechanism is currently used by Wanchain? In addition, what is the consensus and incentive mechanism for cross-chain transactions, and what is the purpose of doing so? And Wanchain will support cross-chain transactions (such as BTC, ETH) on mainstream public chains, asset cross-chain transactions between the alliance chains, and cross-chain transactions between the public and alliance chains, how can you achieve asset cross-chain security and privacy?
Lini: It is now PPOW (Permissioned Proof of Work), in order to ensure the reliability of the nodes before the cross-chain protocol design is completed, and to prepare to switch to POS (as according to the Whitepaper roadmap). The cross-chain consensus has been mentioned above, with the participation of a small consensus (at least 16 nodes) in a set of 21 Storeman nodes through sMPC and threshold secret sharing.
In addition, the incentive is achieved through two aspects: 1) 100% of the cross chain transaction fee is used to reward the Storeman node; 2) Wanchain has set aside a portion of their total token reserve as an incentive mechanism for encouraging Storeman nodes in case of small cross-chain transaction volume in the beginning.
It can be revealed that Storeman participation is opening gradually and will become completely distributed and decentralized in batches. The first phase of the Storeman node participation and rewards program is to be launched at the end of 2018. It is expected that the selection of participants will be completed within one quarter. Please pay attention to our official announcements this month.
In addition, for public chains, consortium chains, and private chains, asset transfer will also follow the cross-chain mechanism mentioned above, and generally follow the sMPC and threshold integration technology to ensure cross-chain security.
When it comes to privacy, this topic will be bigger. Going back to the Wanchain Whitepaper, we have provided privacy protection on Wanchain mainnet. Simply put, the principle is using ring signatures. The basic idea is that it mixes the original address with many other addresses to ensure privacy. We also use one-time address. In this mechanism a stamp system is used that generates a one-time address from a common address. This has been implemented since our 2.0 release.
But now only the privacy protection of native WAN transactions can be provided. The protection of cross-chain privacy and user experience will also be one of the important tasks for us in 2019.
Neutrino: At present, Wanchain uses Storeman as a cross-chain trading node. Can you introduce the Storeman mechanism and how to protect these nodes?
Lini: Let me answer this problem from two aspects.
  1. As I introduced before in my explanation of sMPC, the Storeman node never holds the user’s private key, but only calculates the transaction in an anonymous and secure state, and the technology prevents the Storeman nodes from colluding.
  2. Even after technical guarantees, we also designed a “double protection” against the risk from an economic point of view, that is, each node participating as a Storeman needs to pledge WAN in the contract as a “stake”. The pledge of WAN will be greater than the amount of any single transaction as a guarantee against loss of funds.
If a node is malicious (even if the probability is one in a billion), the community will be compensated for the loss caused by the malicious node through confiscation of the staked WAN. This is like the POS mechanism used by ETH: using staking to prevent bad behavior is a common principle.
Neutrino: On December 12th, the mainnet of Wanchain 3.0 was launched. Wanchain 3.0 opened cross-chain transactions between Bitcoin, Ethereum and ERC20 tokens (such as MakerDao’s stablecoin DAI, and MKR). What does this version mean for you and the industry? This upgrade of cross-chain with Bitcoin is the biggest bright spot. So, now that you are able to use Wanchain to make transactions between different tokens, what is the difference between a cross-chain platform like Wanchain and a cryptocurrency exchange?
Lini: The release of 3.0 makes Wanchain the industry’s first major network to bridge both ETH and BTC, and it has been very stable so far. As mentioned above, many cross-chain cryptographic designs are very distinctive in theory, but whether they can actually be implemented in engineering is a big question mark. Therefore, Wanchain is the first network launched in the world to achieve this. Users are welcome to test and attack it. This also means that Wanchain has connected the two most difficult and most challenging public networks. We are confident we will soon be connecting other well-known public chains.
At the same time as the release of 3.0, we also introduced cross-chain integration with other ERC20 tokens from the 2.X version, such as MakerDao’s DAI, MKR, LRC, etc., which also means that more tokens from excellent projects on Ethereum will gradually be integrated with Wanchain.
Some people will be curious, since Wanchain has crossed so many well-known public chains/projects; how is it different with crypto exchanges? In fact, it is very simple, one centralized; one distributed. Back to the white paper of Nakamoto, is not decentralization the original intention of blockchain? So what Wanchain has to do is essentially to solve the bottom layer of the blockchain, one of the core technical difficulties.
Anyone trying to create a DEX (decentralized exchange), digital lending, or other application scenarios can base their application on Wanchain. There is a Wanchain-based DEX prototype made by our community members Jeremiah and Harry, which is quite amazing. Take a look at the video below.
Neutrino: What are the specific application use cases after the launch of Wanchain 3.0? Most are still exploring small-scale projects. According to your experience, what are the killer blockchain applications of the future? What problems need to be solved during this period? How many years does it take?
  1. Wanchain is just a technology platform rather than positioning itself as an application provider; that is, Wanchain will continue to support the community, and the projects which use cross-chain technology to promote a wide range of use cases for Wanchain.
  2. Cross-chain applications that we anticipate include things like: decentralized exchanges, digital lending, cross chain games, social networking dAPPs, gambling, etc. We also expect to see applications using non fungible tokens, for example exchange of real assets, STOs, etc.
  3. We recently proposed the WanDAPP solution. Simply speaking, suppose a game developer has been developing on Ethereum and has issued ERC20 tokens, but hopes to expand the player base of their game. To attract more people to participate and make full use of their DAPP, they can use the WanDAPP solution to deploy the game DAPP on other common platforms, such as EOS, TRON, etc., without having to issue new tokens on those chains or abandon the previous ERC20 tokens. In this way the potential user population of the game can be increased greatly without issuing more tokens on a new chain, improving the real value of the original token. This is accomplished entirely through the cross-chain mechanism of Wanchain.
  4. For large-scale applications, the infrastructure of the blockchain is not yet complete, there are issues which must first be dealt with such as TPS, sharding, sidechains, state channels, etc. These all must be solved for the large-scale application of blockchain applications. I don’t dare to guess when it will be completed, it depends on the progress of various different technical projects. In short, industry practitioners and enthusiasts need a little faith and patience.
Neutrino community member Block Venture Capital Spring: Will Wanchain be developing any more cross chain products aimed at general users? For example will the wallet be developed to make automatic cross chain transfers with other public chains? Another issue the community is concerned about is the currency issuance. Currently there are more than 100 million WAN circulating, what about the rest, when will it be released?
Lini: As a cross-chain public chain, we are not biased towards professional developers or ordinary developers; to us they are all the same. As mentioned above, we provide a platform as infrastructure, and everyone is free to develop applications on it.
For example, if it is a decentralized exchange, it must be for ordinary users to trade on; if it is some kind of financial derivatives product, it is more likely to be used by finance professionals. As for cross-chain wallets which exchange automatically, I’m not sure if you are talking about distributed exchanges; the wallet will not be “automatic” at first, but you will be able to “automatically” redeem other tokens.
Finally, the remaining WAN tokens are strictly in accordance with the plan laid out in the whitepaper. For example, the POS node reward mentioned above will give 10% of the total amount for reward. At the same time, for the community, there are also rewards for the bounty program. The prototype of the DEX that I just saw is a masterpiece of the overseas community developers, and also received tokens from our incentive program.
Neutrino community member’s question: There are many projects in the market to solve cross-chain problems, such as: Cosmos, Polkadot, what are Wanchain’s advantages and innovations relative to these projects?
Lini: As I mentioned earlier, Cosmos and Polkadot have both proposed very good solutions in theory. Compared with them, I don’t think that we have created anything particularly unique in our theory. The theoretical basis for our work is cryptography, which derives from the academic foundation of scholars such as Andrew Yao and Silvio Micali. Our main strong point is that we have taken theory and put it into practice.
Actually, the reason why people often question whether a blockchain project can be realized is that whitepapers are often too ambitious. Then, when teams actually start developing, there are constant delays and setbacks. So for us, we focus on completing very solid and realizable engineering goals. As for other projects, we hope to continue to learn from each other in this space.
Neutrino community member Amos from Huobi Research Institute question: How did you come to decide on 21 storeman nodes?
Lini: As for the nodes, we won’t make choices based on quantity alone. The S in POS actually also includes the time the tokens are staked, so that even if a user stakes fewer tokens, the length of time they are staked for is also used to calculate the reward, which is fairer. We designed the ULS (Unique Leader Selection) algorithm in order to reduce the reliance on the assumption of corruption delay (from Cardano’s POS theory); it is used to ensure fairness, so that all participants in the system can have a share of the reward, not only a few large token holders.
Wu Di, a member of the Neutrino community: Many big exchanges have already begun to deploy decentralized exchanges, for example Binance, and it seems that progress is very fast. Will we be working with these influential exchanges in the future? Will we have the opportunity to cooperate with them and broaden our own influence?
Lini: I have also seen some other exchanges’ DEXs. Going back to the original point, distributed cross-chain nodes and centralized ones are completely different. I’m guessing that most exchanges use a centralized cross-chain solution, so it may not be the same as the 21-member Storeman group of Wanchain; I think most exchanges will likely use their own token and exchange system. This is my personal understanding. That said, if you are developing cross-chain technology, you will cooperate with many exchanges that want to build a DEX. Not only Binance, but also Huobi, Bithumb, Coinbase… and if there is anyone else who would like to cooperate, we welcome them!
Neutrino community member AnneJiang from Maker: Dai, as the first stablecoin on Wanchain, will open a direct trading channel between Dai and BTC. In relation to the Dai integration, has any new progress been made on Wanchain so far?
Lini: DAI’s stable currency has already been integrated on Wanchain. I just saw it yesterday, let me give you a picture. It’s on the current 3.0 browser, https://www.wanscan.org/, you can take a look at it yourself.
This means that users with DAI are now free to trade it for BTC, ETH or various ERC20 tokens. There is also Chainlink’s LINK, and LRC is Loopring, so basically there are quite a few excellent project tokens. You may use Wanchain to trade yourself, but since the DEX is not currently open, for now you can only trade with friends you know.

About Neutrino

Neutrino is a distributed, innovative collaborative community of blockchains. At present, we have established physical collaboration spaces in Tokyo, Singapore, Beijing, Shanghai and other places, and have plans to expand into important blockchain innovation cities such as Seoul, Thailand, New York and London. Through global community resources and partnerships, Neutrino organizes a wide range of online and offline events, seminars, etc. around the world to help developers in different regions better communicate and share their experiences and knowledge.

About Wanchain

Wanchain is a blockchain platform that enables decentralized transfer of value between blockchains. The Wanchain infrastructure enables the creation of distributed financial applications for individuals and organizations. Wanchain currently enables cross-chain transactions with Ethereum, and today’s product launch will enable the same functionalities with Bitcoin. Going forward, we will continue to bridge blockchains and bring cross-chain finance functionality to companies in the industry. Wanchain has employees globally with offices in Beijing (China), Austin (USA), and London (UK).
You can find more information about Wanchain on our website. Additionally, you can reach us through Telegram, Discord, Medium, Twitter, and Reddit. You can also sign up for our monthly email newsletter here.
submitted by maciej_wan to wanchain

Merkle Trees and Mountain Ranges - Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments

Original link: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-May/012715.html
Unedited text and originally written by:

Peter Todd pete at petertodd.org
Tue May 17 13:23:11 UTC 2016
Previous message: [bitcoin-dev] Bip44 extension for P2SH/P2WSH/...
Next message: [bitcoin-dev] Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments
Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
# Motivation

UTXO growth is a serious concern for Bitcoin's long-term decentralization. To
run a competitive mining operation potentially the entire UTXO set must be in
RAM to achieve competitive latency; your larger, more centralized, competitors
will have the UTXO set in RAM. Mining is a zero-sum game, so the extra latency
of not doing so if they do directly impacts your profit margin. Secondly,
having possession of the UTXO set is one of the minimum requirements to run a
full node; the larger the set the harder it is to run a full node.

Currently the maximum size of the UTXO set is unbounded as there is no
consensus rule that limits growth, other than the block-size limit itself; as
of writing the UTXO set is 1.3GB in the on-disk, compressed serialization,
which expands to significantly more in memory. UTXO growth is driven by a
number of factors, including the fact that there is little incentive to merge
inputs, lost coins, dust outputs that can't be economically spent, and
non-btc-value-transfer "blockchain" use-cases such as anti-replay oracles and
timestamping.

We don't have good tools to combat UTXO growth. Segregated Witness proposes to
give witness space a 75% discount, in part to make reducing the UTXO set size
by spending txouts cheaper. While this may change wallets to more often spend
dust, it's hard to imagine an incentive sufficiently strong to discourage most,
let alone all, UTXO growing behavior.

For example, timestamping applications often create unspendable outputs due to
ease of implementation, and because doing so is an easy way to make sure that
the data required to reconstruct the timestamp proof won't get lost - all
Bitcoin full nodes are forced to keep a copy of it. Similarly anti-replay
use-cases like using the UTXO set for key rotation piggyback on the uniquely
strong security and decentralization guarantee that Bitcoin provides; it's very
difficult - perhaps impossible - to provide these applications with
alternatives that are equally secure. These non-btc-value-transfer use-cases
can often afford to pay far higher fees per UTXO created than competing
btc-value-transfer use-cases; many users could afford to spend $50 to register
a new PGP key, yet would rather not spend $50 in fees to create a standard two
output transaction. Effective techniques to resist miner censorship exist, so
without resorting to whitelists blocking non-btc-value-transfer use-cases as
"spam" is not a long-term, incentive compatible, solution.

A hard upper limit on UTXO set size could create a more level playing field in
the form of fixed minimum requirements to run a performant Bitcoin node, and
make the issue of UTXO "spam" less important. However, making any coins
unspendable, regardless of age or value, is a politically untenable economic
change.

# TXO Commitments

By committing to the state of all transaction outputs, both spent and unspent,
in a merkle tree, we can provide a method of compactly proving the current state of
an output. This lets us "archive" less frequently accessed parts of the UTXO
set, allowing full nodes to discard the associated data, still providing a
mechanism to spend those archived outputs by proving to those nodes that the
outputs are in fact unspent.

Specifically TXO commitments proposes a Merkle Mountain Range¹ (MMR), a
type of deterministic, indexable, insertion ordered merkle tree, which allows
new items to be cheaply appended to the tree with minimal storage requirements,
just log2(n) "mountain tips". Once an output is added to the TXO MMR it is
never removed; if an output is spent its status is updated in place. Both the
state of a specific item in the MMR, as well the validity of changes to items
in the MMR, can be proven with log2(n) sized proofs consisting of a merkle path
to the tip of the tree.
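The append-and-merge bookkeeping can be sketched compactly. This is a peaks-only illustration of the "mountain tips" idea described above, with my own choice of sha256 node hashing, not the proposal's exact serialization:

```python
import hashlib

def h(*parts):
    return hashlib.sha256(b"".join(parts)).digest()

class MMR:
    """Peaks-only sketch: just enough state to append new leaves."""
    def __init__(self):
        self.peaks = []  # list of (height, digest), strictly decreasing height

    def append(self, leaf):
        node = (0, h(leaf))
        # Merge equal-height mountains, like binary carry propagation.
        while self.peaks and self.peaks[-1][0] == node[0]:
            ht, left = self.peaks.pop()
            node = (ht + 1, h(left, node[1]))
        self.peaks.append(node)

    def root(self):
        """'Bag' the peaks right-to-left into a single commitment digest."""
        acc = None
        for _, digest in reversed(self.peaks):
            acc = digest if acc is None else h(digest, acc)
        return acc

mmr = MMR()
for txout in [b"a", b"b", b"c", b"d", b"e"]:
    mmr.append(txout)
print(len(mmr.peaks))  # 2 peaks for 5 leaves: a 4-leaf mountain plus leaf e
```

Note that the stored state really is just log2(n) peaks: appending leaf after leaf only ever pops equal-height tips and pushes one merged tip.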

At an extreme, with TXO commitments we could even have no UTXO set at all,
entirely eliminating the UTXO growth problem. Transactions would simply be
accompanied by TXO commitment proofs showing that the outputs they wanted to
spend were still unspent; nodes could update the state of the TXO MMR purely
from TXO commitment proofs. However, the log2(n) bandwidth overhead per txin is
substantial, so a more realistic implementation is to have a UTXO cache for
recent transactions, with TXO commitments acting as an alternative for the (rare)
event that an old txout needs to be spent.

Proofs can be generated and added to transactions without the involvement of
the signers, even after the fact; there's no need for the proof itself to be
signed and the proof is not part of the transaction hash. Anyone with access to
TXO MMR data can (re)generate missing proofs, so minimal, if any, changes are
required to wallet software to make use of TXO commitments.

## Delayed Commitments

TXO commitments aren't a new idea - the author proposed them years ago in
response to UTXO commitments. However it's critical for small miners' orphan
rates that block validation be fast, and so far it has proven difficult to
create (U)TXO implementations with acceptable performance; updating and
recalculating cryptographically hashed merkelized datasets is inherently more
work than not doing so. Fortunately if we maintain a UTXO set for recent
outputs, TXO commitments are only needed when spending old, archived, outputs.
We can take advantage of this by delaying the commitment, allowing it to be
calculated well in advance of it actually being used, thus changing a
latency-critical task into a much easier average throughput problem.

Concretely each block B_i commits to the TXO set state as of block B_{i-n}, in
other words what the TXO commitment would have been n blocks ago, if not for
the n block delay. Since that commitment only depends on the contents of the
blockchain up until block B_{i-n}, the contents of any block after are
irrelevant to the calculation.
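In other words, the committed digest is a pure function of an old prefix of the chain; a minimal sketch of the rule, with `N_DELAY` as my own illustrative stand-in for the consensus-chosen n:

```python
N_DELAY = 100  # illustrative stand-in for the consensus delay parameter n

def commitment_for_block(i, txo_state_at):
    """Block B_i commits to the TXO set state as of block B_{i - n}."""
    if i < N_DELAY:
        return None  # too early in the chain for any commitment to exist
    return txo_state_at(i - N_DELAY)

# The commitment in block 250 depends only on blocks up to 150, so a node
# has a full ~n-block window after processing block 150 to compute it.
print(commitment_for_block(250, lambda j: f"state@{j}"))  # state@150
```

This is exactly the latency-to-throughput trade: the digest needed at block i was computable n blocks earlier, so it can be produced by a background task instead of on the block-validation critical path.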

## Implementation

Our proposed high-performance/low-latency delayed commitment full-node
implementation needs to store the following data:

1) UTXO set

Low-latency K:V map of txouts definitely known to be unspent. Similar to
existing UTXO implementation, but with the key difference that old,
unspent, outputs may be pruned from the UTXO set.

2) STXO set

Low-latency set of transaction outputs known to have been spent by
transactions after the most recent TXO commitment, but created prior to the
TXO commitment.

3) TXO journal

FIFO of outputs that need to be marked as spent in the TXO MMR. Appends
must be low-latency; removals can be high-latency.

4) TXO MMR list

Prunable, ordered list of TXO MMR's, mainly the highest pending commitment,
backed by a reference counted, cryptographically hashed object store
indexed by digest (similar to how git repos work). High-latency ok. We'll
cover this in more in detail later.
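As a rough Python illustration of those four stores (the shapes and names are mine, chosen to mirror the list above):

```python
from collections import deque

utxo = {}              # 1) UTXO set: outpoint -> txout; old entries may be pruned
stxo = set()           # 2) STXO set: archived outpoints spent since last commitment
txo_journal = deque()  # 3) TXO journal: FIFO of spends awaiting the MMR update
txo_mmr_states = []    # 4) TXO MMR list: pending commitment states, backed by a
                       #    hash-addressed (git-like) object store

# A spend of a recent output touches (1) and (3); a spend of an archived
# output touches (2) and (3); only the background task touches (4).
txo_journal.append(("txid:0", None))
print(len(txo_journal))  # 1
```

Only (1) and (2) sit on the latency-critical path; (3) needs fast appends but not fast removals, and (4) can live on slow storage.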

### Fast-Path: Verifying a Txout Spend In a Block

When a transaction output is spent by a transaction in a block we have two
cases:

1) Recently created output

Output created after the most recent TXO commitment, so it should be in the
UTXO set; the transaction spending it does not need a TXO commitment proof.
Remove the output from the UTXO set and append it to the TXO journal.

2) Archived output

Output created prior to the most recent TXO commitment, so there's no
guarantee it's in the UTXO set; transaction will have a TXO commitment
proof for the most recent TXO commitment showing that it was unspent.
Check that the output isn't already in the STXO set (double-spent), and if
not add it. Append the output and TXO commitment proof to the TXO journal.

In both cases recording an output as spent requires no more than two key:value
updates, and one journal append. The existing UTXO set requires one key:value
update per spend, so we can expect new block validation latency to be within 2x
of the status quo even in the worst case of 100% archived output spends.
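A hedged sketch of that fast path (store shapes and names are my own; a real node would of course verify `proof` against the latest TXO commitment rather than trusting its mere presence):

```python
from collections import deque

def spend_output(outpoint, utxo, stxo, journal, proof=None):
    """Sketch of the two fast-path cases; returns False on an invalid spend."""
    if outpoint in utxo:                    # case 1: recently created output
        del utxo[outpoint]
        journal.append((outpoint, None))
        return True
    if proof is None or outpoint in stxo:   # case 2: archived output
        return False                        # missing proof, or double-spend
    # (a real node would verify `proof` against the latest TXO commitment)
    stxo.add(outpoint)
    journal.append((outpoint, proof))
    return True

utxo, stxo, journal = {"tx1:0": b"recent"}, set(), deque()
print(spend_output("tx1:0", utxo, stxo, journal))              # True  (case 1)
print(spend_output("tx0:0", utxo, stxo, journal, proof=b"p"))  # True  (case 2)
print(spend_output("tx0:0", utxo, stxo, journal, proof=b"p"))  # False (double-spend)
```

Both branches are at most two key:value updates plus one journal append, which is where the "within 2x of the status quo" latency bound comes from.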

### Slow-Path: Calculating Pending TXO Commitments

In a low-priority background task we flush the TXO journal, recording the
outputs spent by each block in the TXO MMR, and hashing MMR data to obtain the
TXO commitment digest. Additionally this background task removes STXO's that
have been recorded in TXO commitments, and prunes TXO commitment data no longer
needed.

Throughput for the TXO commitment calculation will be worse than the existing
UTXO only scheme. This impacts bulk verification, e.g. initial block download.
That said, TXO commitments provides other possible tradeoffs that can mitigate
impact of slower validation throughput, such as skipping validation of old
history, as well as fraud proof approaches.
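The background task can be sketched as a simple drain loop (illustrative only; `mmr_mark_spent` abstracts the actual MMR update, and all names are mine):

```python
from collections import deque

def flush_journal(journal, stxo, mmr_mark_spent):
    """Drain the journal into the TXO MMR, then drop STXO entries that the
    newly calculated commitment now covers."""
    flushed = []
    while journal:
        outpoint, proof = journal.popleft()
        mmr_mark_spent(outpoint, proof)  # update-in-place in the MMR
        flushed.append(outpoint)
    stxo.difference_update(flushed)      # committed spends leave the STXO set
    return len(flushed)

journal = deque([("tx0:0", b"proof"), ("tx1:0", None)])
stxo = {"tx0:0"}
spent_log = []
n = flush_journal(journal, stxo, lambda outpoint, proof: spent_log.append(outpoint))
print(n, stxo)  # 2 set()
```

Because nothing on the block-validation path waits for this loop, it only needs adequate average throughput, not low latency.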

### TXO MMR Implementation Details

Each TXO MMR state is a modification of the previous one with most information
shared, so we can space-efficiently store a large number of TXO commitments
states, where each state is a small delta of the previous state, by sharing
unchanged data between each state; cycles are impossible in merkelized data
structures, so simple reference counting is sufficient for garbage collection.
Data no longer needed can be pruned by dropping it from the database, and
unpruned by adding it again. Since everything is committed to via cryptographic
hash, we're guaranteed that regardless of where we get the data, after
unpruning we'll have the right data.
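The reference-counted, content-addressed store described above can be sketched like a miniature git object database (my own minimal illustration, not the proposed on-disk format):

```python
import hashlib

class ObjectStore:
    """Content-addressed store with refcounts, as in the git analogy above."""
    def __init__(self):
        self.objs, self.refs = {}, {}

    def put(self, data: bytes) -> bytes:
        key = hashlib.sha256(data).digest()
        self.objs[key] = data                 # idempotent: same data, same key
        self.refs[key] = self.refs.get(key, 0) + 1
        return key

    def prune(self, key):
        self.refs[key] -= 1
        if self.refs[key] == 0:               # no MMR state needs it any more
            del self.objs[key], self.refs[key]

store = ObjectStore()
k1 = store.put(b"node-2")    # e.g. a node shared by two successive MMR states
k2 = store.put(b"node-2")
assert k1 == k2              # identical data deduplicates to one object
store.prune(k1)              # one state's reference dropped...
print(k1 in store.objs)      # True: the other state still references it
```

Refcounting suffices precisely because merkelized structures cannot contain cycles, and since objects are addressed by hash, re-adding pruned data from any source is guaranteed to restore the right bytes.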

Let's look at how the TXO MMR works in detail. Consider the following TXO MMR
with two txouts, which we'll call state #0:

  0
 / \
a   b

If we add another entry we get state #1:

    1
   / \
  0   \
 / \   \
a   b   c

Note how 100% of the state #0 data was reused in commitment #1. Let's
add two more entries to get state #2:

        2
       / \
      2   \
     / \   \
    /   \   \
   /     \   \
  0       2   \
 / \     / \   \
a   b   c   d   e

This time part of state #1 wasn't reused - it wasn't a perfect binary
tree - but we've still got a lot of re-use.

Now suppose state #2 is committed into the blockchain by the most recent block.
Future transactions attempting to spend outputs created as of state #2 are
obliged to prove that they are unspent; essentially they're forced to provide
part of the state #2 MMR data. This lets us prune that data, discarding it,
leaving us with only the bare minimum data we need to append new txouts to the
TXO MMR, the tips of the perfect binary trees ("mountains") within the MMR:

    2
   / \
  2   \
       \
        e

Note that we're glossing over some nuance here about exactly what data needs to
be kept; depending on the details of the implementation the only data we need
for nodes "2" and "e" may be their hash digest.

Adding another three more txouts results in state #3:

                3
               / \
              /   \
             /     \
            /       \
           /         \
          /           \
         /             \
        2               3
                       / \
                      /   \
                     /     \
                    3       3
                   / \     / \
                  e   f   g   h

Suppose recently created txout f is spent. We have all the data required to
update the MMR, giving us state #4. It modifies two inner nodes and one leaf
node:

                4
               / \
              /   \
             /     \
            /       \
           /         \
          /           \
         /             \
        2               4
                       / \
                      /   \
                     /     \
                    4       3
                   / \     / \
                  e  (f)  g   h

Spending an archived txout requires the transaction to provide the merkle
path to the most recently committed TXO, in our case state #2. If txout b is
spent that means the transaction must provide the following data from state #2:

        2
       /
      /
     /
    0
     \
      b

We can add that data to our local knowledge of the TXO MMR, unpruning part of
it:

                4
               / \
              /   \
             /     \
            /       \
           /         \
          /           \
         /             \
        2               4
       /               / \
      /               /   \
     /               /     \
    0               4       3
     \             / \     / \
      b           e  (f)  g   h

Remember, we haven't _modified_ state #4 yet; we just have more data about it.
When we mark txout b as spent we get state #5:

                5
               / \
              /   \
             /     \
            /       \
           /         \
          /           \
         /             \
        5               4
       /               / \
      /               /   \
     /               /     \
    5               4       3
     \             / \     / \
     (b)          e  (f)  g   h

Secondly by now state #3 has been committed into the chain, and transactions
that want to spend txouts created as of state #3 must provide a TXO proof
consisting of state #3 data. The leaf nodes for outputs g and h, and the inner
node above them, are part of state #3, so we prune them:

                5
               / \
              /   \
             /     \
            /       \
           /         \
          /           \
         /             \
        5               4
       /               /
      /               /
     /               /
    5               4
     \             / \
     (b)          e  (f)

Finally, let's put this all together, by spending txouts a, c, and g, and
creating three new txouts i, j, and k. State #3 was the most recently committed
state, so the transactions spending a and g are providing merkle paths up to
it. This includes part of the state #2 data:

                3
               / \
              /   \
             /     \
            /       \
           /         \
          /           \
         /             \
        2               3
       / \               \
      /   \               \
     /     \               \
    0       2               3
   /       /               /
  a       c               g

After unpruning we have the following data for state #5:

                5
               / \
              /   \
             /     \
            /       \
           /         \
          /           \
         /             \
        5               4
       / \             / \
      /   \           /   \
     /     \         /     \
    5       2       4       3
   / \     /       / \     /
  a  (b)  c       e  (f)  g

That's sufficient to mark the three outputs as spent and add the three new
txouts, resulting in state #6:

                      6
                     / \
                    /   \
                   /     \
                  /       \
                 /         \
                6           \
               / \           \
              /   \           \
             /     \           \
            /       \           \
           /         \           \
          /           \           \
         /             \           \
        6               6           \
       / \             / \           \
      /   \           /   \           6
     /     \         /     \         / \
    6       6       4       6       6   \
   / \     /       / \     /       / \   \
 (a) (b) (c)      e  (f) (g)      i   j   k

Again, state #4 related data can be pruned. In addition, depending on how the
STXO set is implemented, we may also be able to prune data related to txouts
spent after that state, including inner nodes where all txouts under them have
been spent (more on pruning spent inner nodes later).
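Throughout these examples, "pruning" just means discarding a fully-known
subtree while keeping its hash, and "unpruning" means restoring the discarded
data from proof data and checking it against that hash. A toy sketch of the
mechanism — the class and method names are ours, not from any real codebase:

```python
import hashlib

def H(l: bytes, r: bytes) -> bytes:
    return hashlib.sha256(l + r).digest()

class Node:
    """Inner node that can be pruned down to just its hash."""
    def __init__(self, left_hash: bytes, right_hash: bytes):
        self.hash = H(left_hash, right_hash)
        self.children = (left_hash, right_hash)

    def prune(self) -> None:
        self.children = None  # keep only the hash

    def unprune(self, left_hash: bytes, right_hash: bytes) -> None:
        # Proof data is self-validating: accept only data that matches
        # the hash we already committed to.
        if H(left_hash, right_hash) != self.hash:
            raise ValueError("bad proof")
        self.children = (left_hash, right_hash)

a, b = b'a' * 32, b'b' * 32
n = Node(a, b)
n.prune()                   # discard the children, keep the hash
n.unprune(a, b)             # a valid proof restores the data
assert n.children == (a, b)
```

Because acceptance is gated on the committed hash, this is why proof data can
be sharded and served without trust.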

### Consensus and Pruning

It's important to note that pruning behavior is consensus critical: a full node
that is missing data due to pruning it too soon will fall out of consensus, and
a miner that fails to include a merkle proof that is required by the consensus
is creating an invalid block. At the same time many full nodes will have
significantly more data on hand than the bare minimum so they can help wallets
make transactions spending old coins; implementations should strongly consider
separating the data that is, and isn't, strictly required for consensus.

A reasonable approach for the low-level cryptography may be to actually treat
the two cases differently, with the TXO commitments committing to what data
does and does not need to be kept on hand by the UTXO expiration rules. On the
other hand, leaving that uncommitted allows for certain types of soft-forks
where the protocol is changed to require more data than it previously did.

### Consensus Critical Storage Overheads

Only the UTXO and STXO sets need to be kept on fast random access storage.
Since STXO set entries can only be created by spending a UTXO - and are smaller
than a UTXO entry - we can guarantee that the peak size of the UTXO and STXO
sets combined will always be less than the peak size of the UTXO set alone in
the existing UTXO-only scheme (though the combined size can be temporarily
higher than what the UTXO set size alone would be when large numbers of
archived txouts are spent).

TXO journal entries and unpruned entries in the TXO MMR have log2(n) maximum
overhead per entry: a unique merkle path to a TXO commitment (by "unique" we
mean that no other entry shares data with it). On a reasonably fast system the
TXO journal will be flushed quickly, converting it into TXO MMR data; the TXO
journal will never be more than a few blocks in size.

Transactions spending non-archived txouts are not required to provide any TXO
commitment data; we must have that data on hand in the form of one TXO MMR
entry per UTXO. Once spent however the TXO MMR leaf node associated with that
non-archived txout can be immediately pruned - it's no longer in the UTXO set
so any attempt to spend it will fail; the data is now immutable and we'll never
need it again. Inner nodes in the TXO MMR can also be pruned if all leaves under
them are fully spent; detecting this is easy, as the TXO MMR is a merkle-sum
tree, with each inner node committing to the sum of the unspent txouts under it.
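Because each inner node commits to the sum of the unspent txouts beneath it,
"all leaves fully spent" is detectable locally: the node's committed sum is
zero. A sketch, assuming a simple (unspent_sum, hash) node layout of our own
devising:

```python
import hashlib

def leaf(value: int, spent: bool):
    """Leaf of a merkle-sum tree: (unspent value, hash)."""
    digest = hashlib.sha256(value.to_bytes(8, 'big')).digest()
    return (0 if spent else value, digest)

def sum_node(left, right):
    """Inner node commits to both children's sums and hashes."""
    lsum, lhash = left
    rsum, rhash = right
    payload = (lsum.to_bytes(8, 'big') + lhash +
               rsum.to_bytes(8, 'big') + rhash)
    return (lsum + rsum, hashlib.sha256(payload).digest())

node = sum_node(leaf(50, spent=True), leaf(25, spent=True))
assert node[0] == 0   # sum of unspent txouts is zero: safe to prune

node = sum_node(leaf(50, spent=True), leaf(25, spent=False))
assert node[0] == 25  # unspent value remains below: must keep the node
```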

When an archived txout is spent the transaction is required to provide a merkle
path to the most recent TXO commitment. As shown above that path is sufficient
information to unprune the necessary nodes in the TXO MMR and apply the spend
immediately, reducing this case to the TXO journal size question (non-consensus
critical overhead is a different question, which we'll address in the next
section).
Taking all this into account, the only significant storage overhead of our TXO
commitments scheme compared to the status quo is the log2(n) merkle path
overhead; as long as less than 1/log2(n) of the UTXO set consists of active,
non-archived UTXOs, we come out ahead, even in the unrealistic case where all
available storage is equally fast. In the real world that isn't yet the case -
even SSDs are significantly slower than RAM.
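To make the breakeven point concrete, rough arithmetic with an illustrative
one-billion-txout set (the set size is an assumption, not a measurement):

```python
import math

n_txouts = 10**9                        # total txouts ever created (illustrative)
path_overhead = math.log2(n_txouts)     # ~30 hashes of unique merkle path
breakeven_active_fraction = 1 / path_overhead

# Scheme wins on raw storage when under ~3% of txouts are active UTXOs.
assert 0.03 < breakeven_active_fraction < 0.04
```

And since archived data can live on far cheaper, slower storage, the practical
breakeven is considerably more favourable than this raw-bytes comparison.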

### Non-Consensus Critical Storage Overheads

Transactions spending archived txouts pose two challenges:

1) Obtaining up-to-date TXO commitment proofs

2) Updating those proofs as blocks are mined

The first challenge can be handled by specialized archival nodes, not unlike
how some nodes make transaction data available to wallets via bloom filters or
the Electrum protocol. There's a whole variety of options available, and the
data can be easily sharded to scale horizontally; the data is self-validating,
allowing horizontal scaling without trust.

While miners and relay nodes don't need to be concerned about the initial
commitment proof, updating that proof is another matter. If a node aggressively
prunes old versions of the TXO MMR as it calculates pending TXO commitments, it
won't have the data available to update the TXO commitment proof to be against
the next block, when that block is found; the child nodes of the TXO MMR tip
are guaranteed to have changed, yet aggressive pruning would have discarded that
data.

Relay nodes could ignore this problem if they simply accept the fact that
they'll only be able to fully relay the transaction once, when it is initially
broadcast, and won't be able to provide mempool functionality after the initial
relay. Modulo high-latency mixnets, this is probably acceptable; the author has
previously argued that relay nodes don't need a mempool² at all.

For a miner though not having the data necessary to update the proofs as blocks
are found means potentially losing out on transactions fees. So how much extra
data is necessary to make this a non-issue?

Since the TXO MMR is insertion ordered, spending a non-archived txout can only
invalidate the upper nodes of the archived txout's TXO MMR proof (if this
isn't clear, imagine a two-level scheme, with per-block TXO MMRs committed by a
master MMR for all blocks). The maximum number of relevant inner nodes
changed is log2(n) per block, so if there are n non-archival blocks between the
most recent TXO commitment and the pending TXO MMR tip, we have to store
log2(n)*n inner nodes - on the order of a few dozen MB even when n is a
(seemingly ridiculously high) year worth of blocks.
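Plugging illustrative numbers into that worst case — node size assumed here to
be a 32-byte hash plus an 8-byte sum:

```python
import math

blocks_per_year = 365 * 24 * 6        # one block per ~10 minutes
node_size = 32 + 8                    # hash plus merkle-sum, bytes (assumed)
inner_nodes = blocks_per_year * math.log2(blocks_per_year)
total_mb = inner_nodes * node_size / 1e6

assert 25 < total_mb < 40             # "a few dozen MB"
```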

Archived txout spends on the other hand can invalidate TXO MMR proofs at any
level - consider the case of two adjacent txouts being spent. To guarantee
success requires storing full proofs. However, they're limited by the blocksize
limit, and additionally are expected to be relatively uncommon. For example, if
1% of 1MB blocks was archival spends, our hypothetical year long TXO commitment
delay is only a few hundred MB of data with low-IO-performance requirements.
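The same back-of-the-envelope arithmetic for the archival-spend case:

```python
blocks_per_year = 365 * 24 * 6                # one block per ~10 minutes
archival_bytes_per_block = 0.01 * 1_000_000   # 1% of a 1MB block
total_mb = blocks_per_year * archival_bytes_per_block / 1e6

assert 400 < total_mb < 600   # "a few hundred MB", and low-IO-performance
```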

## Security Model

Of course, a TXO commitment delay of a year sounds ridiculous. Even the slowest
imaginable computer isn't going to need more than a few blocks of TXO
commitment delay to keep up ~100% of the time, and there's no reason why we
can't have the UTXO archive delay be significantly longer than the TXO
commitment delay.

However, as with UTXO commitments, TXO commitments raise issues with Bitcoin's
security model by allowing miners to profitably mine transactions without
bothering to validate prior history. At the extreme, if there were no
commitment delay at all, then at the cost of some extra network bandwidth
"full" nodes could operate and even mine blocks completely statelessly by
expecting all transactions to include "proof" that their inputs are unspent; a
TXO commitment proof for a commitment you haven't verified isn't a proof that a
transaction output is unspent, it's a proof that some miners claimed the txout
was unspent.

At one extreme, we could simply implement TXO commitments in a "virtual"
fashion, without miners actually including the TXO commitment digest in their
blocks at all. Full nodes would be forced to compute the commitment from
scratch, in the same way they are forced to compute the UTXO state, or total
work. Of course a full node operator who doesn't want to verify old history can
get a copy of the TXO state from a trusted source - no different from how you
could get a copy of the UTXO set from a trusted source.

A more pragmatic approach is to accept that people will do that anyway, and
instead assume that sufficiently old blocks are valid. But how old is
"sufficiently old"? First of all, if your full node implementation comes "from
the factory" with a reasonably up-to-date minimum accepted total-work
thresholdⁱ - in other words it won't accept a chain with less than that amount
of total work - it may be reasonable to assume any Sybil attacker with
sufficient hashing power to make a forked chain meeting that threshold with,
say, six months worth of blocks has enough hashing power to threaten the main
chain as well.
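A "from the factory" total-work threshold is just a compiled-in constant
checked against a candidate chain's cumulative work. A minimal sketch — the
constant and names below are hypothetical, not from any real implementation:

```python
# Compiled-in at release time; any chain claiming less cumulative work
# than this is rejected outright, so a cheap Sybil chain can't even
# get a foot in the door. Units: expected number of hashes (hypothetical).
MIN_ACCEPTED_TOTAL_WORK = 10**27

def chain_acceptable(chain_total_work: int) -> bool:
    """Reject any chain below the release-time total-work floor."""
    return chain_total_work >= MIN_ACCEPTED_TOTAL_WORK

assert not chain_acceptable(10**20)     # cheap Sybil chain: rejected
assert chain_acceptable(2 * 10**27)     # chain with real work behind it
```

An attacker able to clear this floor has, by construction, hashing power on the
order of what threatens the main chain itself.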

That leaves public attempts to falsify TXO commitments, done out in the open by
the majority of hashing power. In this circumstance the "assumed valid"
threshold determines how long the attack would have to go on before full nodes
start accepting the invalid chain, or at least, newly installed/recently reset
full nodes. The minimum age that we can "assume valid" is a tradeoff between
political/social/technical concerns; we probably want at least a few weeks to
guarantee the defenders a chance to organise themselves.

With this in mind, a longer-than-technically-necessary TXO commitment delayʲ
may help ensure that full node software actually validates some minimum number
of blocks out-of-the-box, without taking shortcuts. However this can be
achieved in a wide variety of ways, such as the author's prev-block-proof
proposal³, fraud proofs, or even a PoW with an inner loop dependent on
blockchain data. Like UTXO commitments, TXO commitments are also potentially
very useful in reducing the need for SPV wallet software to trust third parties
providing them with transaction data.

i) Checkpoints that reject any chain without a specific block are a more
common, if uglier, way of achieving this protection.

j) A good homework problem is to figure out how the TXO commitment could be
designed such that the delay could be reduced in a soft-fork.

## Further Work

While we've shown that TXO commitments certainly could be implemented without
increasing peak IO bandwidth/block validation latency significantly with the
delayed commitment approach, we're far from being certain that they should be
implemented this way (or at all).

1) Can a TXO commitment scheme be optimized sufficiently to be used directly
without a commitment delay? Obviously it'd be preferable to avoid all the above
complexity entirely.

2) Is it possible to use a metric other than age, e.g. priority? While this
complicates the pruning logic, it could use the UTXO set space more
efficiently, especially if your goal is to prioritise bitcoin value-transfer
over other uses (though if "normal" wallets nearly never need to use TXO
commitment proofs to spend outputs, the infrastructure to actually do this may
never get built).

3) Should UTXO archiving be based on a fixed size UTXO set, rather than an
age/priority/etc. threshold?

4) By fixing the problem (or possibly just "fixing" the problem) are we
encouraging/legitimising blockchain use-cases other than BTC value transfer?
Should we?

5) Instead of TXO commitment proofs counting towards the blocksize limit, can
we use a different miner fairness/decentralization metric/incentive? For
instance it might be reasonable for the TXO commitment proof size to be
discounted, or ignored entirely, if a proof-of-propagation scheme (e.g.
thinblocks) is used to ensure all miners have received the proof in advance.

6) How does this interact with fraud proofs? Obviously furthering dependency on
non-cryptographically-committed STXO/UTXO databases is incompatible with the
modularized validation approach to implementing fraud proofs.

# References

1) "Merkle Mountain Ranges",
Peter Todd, OpenTimestamps, Mar 18 2013,

2) "Do we really need a mempool? (for relay nodes)",
Peter Todd, bitcoin-dev mailing list, Jul 18th 2015,

3) "Segregated witnesses and validationless mining",
Peter Todd, bitcoin-dev mailing list, Dec 23rd 2015,

https://petertodd.org 'peter'[:-1]@petertodd.org
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: Digital signature
URL: <http://lists.linuxfoundation.org/pipermail/bitcoin-dev/attachments/20160517/33f69665/attachment-0001.sig>
submitted by Godballz to CryptoTechnology [link] [comments]

Tech.gtv spv - YouTube Boursorama - YouTube Solving problems using Symbolab - YouTube GENESIS MINING  WEEK 3  LONG TRANSACTION TIME - YouTube The Scientific Calculator - Grade 8

SPV nodes that rely on bloom filters leak considerable information about the addresses of Bitcoin users. 5) It’s not that hard. To less than tech-savvy users, running a full node may seem like a challenge. However, running a Bitcoin core full node is nothing more than simply downloading the latest Bitcoin core client version and running it. 4. Bitcoin Core – A full Bitcoin node. Platforms: Mac OS, Linux, and Windows. All of the wallets I’ve covered so far are known as SPV wallets or lite wallets. This means that they don’t have a full copy of the blockchain in order to verify transactions – they rely on other computers on the network to give them transaction information.. Bitcoin Core is a full node Bitcoin wallet. 38YLaSDaBGk3VDtSPv7XowHJU6XT26RALY Bitcoin address with balance chart. Received: 168.38 8 BTC (4 ins). first: 2017-11-24 20:33:32 UTC. last: 2020-04-02 17:39:06 UTC Bitcoin Wallet Bitcoin Wallet is easy to use and reliable, while also being secure and fast. Its vision is de-centralization and zero trust; no central service is needed for Bitcoin-related operations. The app is a good choice for non-technical people. The bitcoin blockchain is a public ledger that records bitcoin transactions. It is implemented as a chain of blocks, each block containing a hash of the previous block up to the genesis block of the chain. A network of communicating nodes running bitcoin software maintains the blockchain.:215–219 Transactions of the form payer X sends Y bitcoins to payee Z are broadcast to this network using ...

[index] [15584] [23974] [26150] [3909] [6799] [20587] [31739] [20317] [17049] [2531]

Tech.gtv spv - YouTube

Bienvenue sur la chaîne YouTube de Boursorama ! Le portail boursorama.com compte plus de 30 millions de visites mensuelles et plus de 290 millions de pages v... spv bitcoin spv bank spv business spv buy to let spv buy to let mortgages spv bhubaneswar spv bergen spv bankruptcy remote spv bvi spv benefits سحر c spv company spv calculator spv captain ... Buying and selling cars in Texas. Car Registration California - How To Fill Out the Duplicate Title Form - Duration: 8:14. All In One Vehicle Registration Service 49,220 views Symbolab is great way to solve your math problems and get the step by step solutions along with it. This video is a tutorial on how to enter your math proble... SAT Math Test Prep Online Crash Course Algebra & Geometry Study Guide Review, Functions,Youtube - Duration: 2:28:48. The Organic Chemistry Tutor 1,696,949 views