Litecoin API Commands (Both CLI and JSON RPC)

Unitimes AMA | Danger in Blockchain, Data Protection is Necessary

At 10:30 on September 12, Unitimes held the 40th online AMA about blockchain technologies and applications. We were glad to have Joanes Espanol, co-founder and CTO of Amberdata, share with us on "Danger in Blockchain, Data Protection is Necessary". The AMA is composed of two parts: Fixed Q&A and Free Q&A. Check out the details below!

Fixed Q&A

  1. Please introduce yourself and Amberdata
Hi everybody, my name is Joanes Espanol and I am co-founder and CTO of Amberdata. Prior to founding Amberdata, I have worked on several large scale ingestion pipelines, distributed systems and analytics platforms, with a focus on infrastructure automation and highly available systems. I am passionate about information retrieval and extracting meaning from data.
Amberdata is a blockchain and digital asset company which combines validated blockchain and market data from the top crypto exchanges into a unified platform and API, enabling customers to operate with confidence and build real-time data-powered applications.
  2. What type of data does the API provide?
The advantage and uniqueness of Amberdata’s API is the combination of blockchain and pricing data together in one API call.
We provide a standardized way to access blockchain data (blocks, transactions, account information, etc) across different blockchain models like UTXO (Bitcoin, Litecoin, Dash, Zcash...) and Account Based (Ethereum...), with contextualized pricing data from the top crypto exchanges in one API call. If you want to build applications on top of different blockchains, you would have to learn the intricacies of each distributed ledger, run multiple nodes, aggregate the data, etc - instead of spending all that time and money, you can start immediately by using the APIs that we provide.
What can you get access to? Accounts, account-balances, blocks, contracts, internal messages, logs and events, pending transactions, security audits, source code, tokens, token balances, token transfers, token supplies (circulating & total supplies), transactions as well as prices, order books, trades, tickers and best bid and offers for about 2,000 different assets.
One important thing to note is that most of the APIs return validated data that anybody can verify by themselves. Blockchain is all about trust - operating in a hostile and trustless environment, maintaining consensus while continuously under attack, etc - and we want to make sure that we maintain that level of trust, so the API returns all the information that you would need to recalculate Merkle proofs yourself, hence guaranteeing the data was not tampered with and is authentic.
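To make the Merkle-proof point concrete, here is a minimal Python sketch (not Amberdata's actual response format - the field names in the usage comment are hypothetical) of how a client can fold a transaction hash up a Merkle branch and compare the result with the block's Merkle root on a UTXO chain like Bitcoin or Litecoin:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    # Bitcoin/Litecoin-style hash: SHA-256 applied twice.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_branch(tx_hash_hex: str, branch: list, index: int, merkle_root_hex: str) -> bool:
    # tx_hash_hex and merkle_root_hex are the big-endian hex strings usually displayed;
    # internally the hashes are little-endian, hence the [::-1] byte reversals.
    current = bytes.fromhex(tx_hash_hex)[::-1]
    for sibling_hex in branch:
        sibling = bytes.fromhex(sibling_hex)[::-1]
        if index % 2 == 0:            # our node is the left child at this level
            current = double_sha256(current + sibling)
        else:                         # our node is the right child at this level
            current = double_sha256(sibling + current)
        index //= 2
    return current[::-1].hex() == merkle_root_hex

# Hypothetical usage - the branch, index, and root would come from the API response:
# ok = verify_merkle_branch(tx_hash, proof["siblings"], proof["index"], block["merkleRoot"])
```

If the recomputed root matches the block header's Merkle root, the transaction was included in that block and was not tampered with.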
  3. Why is it important to combine blockchain and market data?
Cryptoeconomics plays a key role in the blockchain world. One simple way to explain this is to look at why peer-to-peer file sharing systems like BitTorrent failed. These file sharing protocols were an early form of decentralization, with each node contributing to and participating in this “global sharing computer”. The issue with these protocols is that they relied on the good will of each participant to (re-)share their files - but without economic incentive, or punishment for not following the rules, it opened the door to bad behavior which ultimately led to its demise.
The genius of Satoshi Nakamoto was to combine and improve upon existing decentralized protocols with game theory, to arrive at a consensus protocol able to circumvent the Byzantine Generals' Problem. Now participants have incentives to follow the rules (they get financially rewarded for doing so, by mining for example, and penalized for misbehaving), which in turn results in a stable system. This was the first time that cryptoeconomics was used in a working product, and it became the base and norm for a lot of the new systems today.
Pricing data is needed as context to blockchain data: there are a lot of (ERC-20) tokens created on Ethereum - it is very easy to clone an existing contract and configure it with a certain amount of initial tokens (most commonly in the millions and billions in volume). Each token has an intrinsic value, as determined by the law of supply and demand, and as traded on the exchanges. Price fluctuations have an impact on adoption and usage, that is, on the overall transaction volume (and to a certain extent transaction throughput) on the blockchain.
Blockchain data is needed as context to market data: activity on the blockchain can have an impact on market data. For example, one can look at the incoming token transfers in the Ethereum transaction pool and see if there are any impending big transfers for a specific token, which could result in a significant price move on the other end. Being able to detect that kind of movement and act upon it is the kind of signal that traders are looking for. Another example can be found with token supplies: exchanges want to be notified as soon as possible when a token's circulating supply changes, as it affects their ability to trade it, and in the worst-case scenario they would need to halt trading if a token contract gets compromised.
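As a rough illustration of watching the pending pool, here is a minimal Python sketch that polls a node's standard JSON-RPC pending-transaction filter and flags large transfers. The endpoint URL and value threshold are placeholders, and a real monitor would decode ERC-20 transfer calls (or use a streaming websocket feed) rather than only looking at the ETH value:

```python
import time
import requests

RPC_URL = "http://localhost:8545"          # placeholder: any Ethereum JSON-RPC endpoint

def rpc(method, params=None):
    payload = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params or []}
    return requests.post(RPC_URL, json=payload).json().get("result")

THRESHOLD_WEI = 1000 * 10**18              # illustrative "big transfer" cutoff (1000 ETH)

# Standard JSON-RPC filter over the node's pending transaction pool.
filter_id = rpc("eth_newPendingTransactionFilter")

while True:
    for tx_hash in rpc("eth_getFilterChanges", [filter_id]) or []:
        tx = rpc("eth_getTransactionByHash", [tx_hash])
        if tx and int(tx["value"], 16) >= THRESHOLD_WEI:
            print("large pending transfer:", tx_hash, int(tx["value"], 16), "wei")
    time.sleep(1)
```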
In conclusion, events on the blockchain can influence price, and market events also have an impact on blockchain data: the two are intimately intertwined, and putting them both in context leads to better insights and better decision making.
  4. All the data you provide is publicly available, what gives?
Very true, all this data is publicly available, that is one of the premises and fundamentals of blockchain models, where all the data is public and transparent across all the nodes of the network. The problem is that, even though it is publicly available, it is not quick, not easy and not cheap to access.
Not quick: blockchain data structures were designed and optimized for achieving consensus in a hostile and trustless environment and for internal state management, not for random access and overall search. Imagine you want to list all the transactions that your wallet address has participated in. The only way to do that would be to replay all the transactions from the beginning of time (starting at the genesis block), looking at the to and from addresses and retaining only the ones matching your wallet: at over 500 million transactions as of today, that would take an unacceptable amount of time for a customer-facing application (see the sketch after these three points).
Not easy: Some very basic things that one would expect when dealing with financial assets and instruments are actually very difficult to get at, especially when related to tokens. For example, the current Ether balance of a wallet is easy to retrieve in one call to a Geth or Parity client - however, looking at time series of these balances starts to be a little hairy, as not all historical state is kept by these clients, unless you are running a full archive node. Looking at token holdings and balances gets even more complicated, as most of the token transfers are part of the transient state and not kept on chain. Moreover, token transfers and balance changes over time are triggered by different mechanisms (especially when dealing with contract to contract function calls), and detecting these changes accurately is prone to errors.
Not cheap: As mentioned above, most of the historical data and time series metrics are only available via a full archive node, which at the time of writing requires about 3TB of disk space, just to hold all the blockchain state - and remember, this state is in a compressed and not easily accessible format. To convert it to a more searchable format requires much more space. Also, running your own full archive node requires constant care, maintenance and monitoring, which has become very expensive and prohibitive to run.
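To make the "not quick" point concrete, here is a minimal sketch of the brute-force approach on an account-based chain like Ethereum: walk every block over JSON-RPC and keep the transactions touching one address. The node URL and wallet address are placeholders; at hundreds of millions of transactions, this linear scan is exactly what an indexed API lets you avoid:

```python
import requests

RPC_URL = "http://localhost:8545"          # placeholder Ethereum JSON-RPC endpoint
WALLET = "0x0000000000000000000000000000000000000000".lower()   # address to search for

def rpc(method, params):
    payload = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    return requests.post(RPC_URL, json=payload).json()["result"]

latest = int(rpc("eth_blockNumber", []), 16)
matches = []

# Replay the chain from the genesis block: O(total transactions), hence unusable
# for a customer-facing application.
for height in range(latest + 1):
    block = rpc("eth_getBlockByNumber", [hex(height), True])    # True = full tx objects
    for tx in block["transactions"]:
        if tx["from"].lower() == WALLET or (tx["to"] or "").lower() == WALLET:
            matches.append(tx["hash"])

print(len(matches), "transactions involve", WALLET)
```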
  5. Who uses your API today and what do they do with it?
A wide variety of applications and projects are using our API, across different industries ranging from wallets and trust funds (DappRadar) to accounting and arbitrage firms (Moremath), including analytics (Stratcoins) and compliance & security companies (Blue Swan). Amberdata's API is attractive to many different people because it is very complete and fast, and because it provides additional data enrichment not available in other APIs; thanks to this, it appeals to and fits nicely with our customers' use cases:
· It can be used in the traditional REST way to augment your own processes or enrich your own data with hard to get pieces of information. For example, lots of our users retrieve historical information (blocks and transactions) and relay it in their applications to their own customers, while others are more interested in financial data (account & token balances) and time series for portfolio management.
https://medium.com/amberdata/keep-it-dry-use-amberdatas-api-9cdb222a41ba
· Other projects are more in need of real-time up-to-date data, for which we recommend using our websockets, so you can filter out data in real-time and match your exact needs, rather than getting the firehose of information and having to filter out and discard 99% of it.
· We have a few research projects tapping into our API as well. For example, some of our customers want access to historical market data to backtest their trading strategies and fine-tune their own algorithms.
· Our API is also fully JSON-RPC compliant, meaning some people use it as a drop-in replacement for their own node, or as an alternative to Infura for example. We have some customers using both Amberdata and Infura as their web3 providers, with the benefit of getting additional enriched data when connecting to our API (a minimal sketch follows this list).
· And finally, we have also built an SDK on top of the API itself, so it is easier to integrate into your own application (https://www.npmjs.com/package/web3data-js).
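Because the API speaks standard JSON-RPC (see the bullet above), a web3 client can point at it the same way it would point at a local node or Infura. A minimal web3.py sketch, using v6-style method names; the endpoint URL and API-key scheme are placeholders, not Amberdata's documented ones - substitute whatever your provider's docs specify:

```python
from web3 import Web3

# Placeholder endpoint: swap in the JSON-RPC URL (and auth) from your provider's documentation.
RPC_URL = "https://rpc.example-provider.io/?x-api-key=YOUR_KEY"

w3 = Web3(Web3.HTTPProvider(RPC_URL))

print("connected:", w3.is_connected())
print("latest block:", w3.eth.block_number)
print("balance (wei):", w3.eth.get_balance("0x0000000000000000000000000000000000000000"))
```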
We also have several subscriptions to match your needs. The developer tier is free and gets you access to 90% of all the data. If you are not sure about your usage patterns yet, we recommend the on-demand plan to get started, while for heavy users the professional and enterprise plans would be more adequate - see https://amberdata.io/pricing for more information.
All in all, we try really hard to make it as easy as possible for you to use. We do the heavy lifting, so you don't have to worry about all the minutiae and you can focus on bringing value to your customers. We work very closely with our customers and continuously improve upon and add new features to our API. If something is not supported or you want something that is not in the API, chances are we already have the data, so do not hesitate to ask us ;)
  6. Amberdata recently made some headlines for discovering a vulnerability on the Parity client. Can you tell us a bit more about it?
This is an interesting one. One of our internal processes flagged a contract, and more specifically the balanceOf(...) call: it was/is taking more than 5 seconds to execute (while typically this call takes only a few milliseconds). While investigating further, we started looking at the debug traces for that contract call and were pretty surprised when a combination of trace_call+vmTrace crashed our Parity node - and not just randomly, the same call would exhibit the exact same behavior each time, and on different Parity nodes. It turns out that this contract is very poorly written, and the implementation of balanceOf(...) keeps on looping over all the holders of the token, which eventually runs out of memory.
Even though this is a pretty severe bug (any/all Parity node(s) can be remotely shutdown with just one small call to its API), in practice the number of nodes at risk is probably small because only operators who have enabled public facing RPC calls (and possibly the ones who have enabled tracing as well) are affected - which are both disabled by default. Kudos to the Parity team for fixing and releasing a patch in less than 24 hours after the bug was reported!
  7. How do you access the data? How do I get started?
We sometimes get the question, “I do not know how to code, can I still use your data?”, and it is possible! We have built a few dashboards on our platform, and you can visualize and monitor different metrics, and get alerts: https://amberdata.io/dashboards/infrastructure.
A good starting point is to use our Postman collection, which is pretty complete and can give you a very good overview of all the capabilities: https://amberdata.io/docs/libraries and https://www.getpostman.com/collections/79afa5bafe91f0e676d6.
For more advanced users, the REST API is where you should start, but as I mentioned earlier, how to access the data depends on your use case: REST, websockets, JSON-RPC, and the SDK are the most common ways of getting to it. We have a lot of tutorials and code examples available here: https://amberdata.io/docs.
For developers interested in getting access to Amberdata’s blockchain and market data from within their own contract, they can use the Chainlink Oracle contract, which integrates directly with the API:
https://medium.com/amberdata/smart-contract-oracles-with-amberdata-io-358c2c422d8a
  8. Amberdata just recently celebrated its second birthday. What is your proudest accomplishment? Any mistake/lesson you would like to share with us?
The blockchain and crypto market is one of the fastest evolving and innovating markets ever, and a very fast paced environment. Having been heads down for two years now, it is sometimes easy to lose sight of the big picture. The journey has been long, but I am happy and proud to see it all come together: we started with blockchain data and monitoring/alerting, added search, validation and derived data (tokens, supplies, etc) along the way, and finally market data to close the loop on all the cryptoeconomics. Seeing the overall engagement from the community around our data is very gratifying: API usage climbing up, more and more pertinent and relevant questions/suggestions on our support channels, other projects like Kadena sending us their own blockchain data so it can be included in Amberdata's offering… all of this makes me want to do more :)

Free Q&A

---Who are your competitors? What makes you better?
There are a few data providers out there offering information similar to Amberdata's. For example, Etherscan has very complete blockchain data for Ethereum, and CoinMarketCap has asset rankings by market cap and some pricing information. We actually did a pretty thorough analysis of the different data providers and their pros and cons:
https://medium.com/amberdata/which-blockchain-data-api-is-right-for-you-3f3758efceb1
What makes Amberdata unique is threefold:
· Combination of blockchain and market data: typically other providers offer one or the other, but not both, and not integrated with each other - with Amberdata, in one API call I can get blockchain and historically accurate pricing data at the same time. We have also standardized access across multiple blockchains, so you get one interface for all and do not have to worry about understanding each and every one of them.
· Validated & verifiable data: we work hard to preserve transparency and trust and are very open about how our metrics are calculated. For example, blockchain data comes with all the pieces needed to recompute the Merkle proofs so the integrity of the data can be verified at any moment. Also, additional metrics like circulating supply are based on tangible and very concrete definitions, so anybody can follow and recalculate them by themselves if needed.
· Enriched data: we have spent a lot of time enriching our APIs with (historical) off chain data like token names and symbols, mappings for token addresses and tradable market pairs, etc. At the same time, our APIs are very granular and provide a level of detail that only a few other providers offer, especially with market data (Level 2 with order books across multiple exchanges, Best Bid Offers, etc).
That's all for the 40th AMA. We would like to thank all the community members for their participation and cooperation! Thanks, Joanes!
submitted by Unitimes_ to u/Unitimes_

Nice Article About How HPB Performs vs EOS (and thus ETH)

HPB: Unique Blockchain Infrastructure
Most public chains now point to TPS as the central problem of blockchain development. Traditional blockchains suffer from poor performance because efficiency is sacrificed in order to reach consensus. But if you want to build an ecosystem of countless DAPPs on top of a public chain, that is almost impossible without performance guarantees.
The dream of building a DAPP ecosystem is one that Bitcoin has not fulfilled, and it does not need to. Bitcoin is only a digital currency, and it has already fulfilled its initial historical mission: it has become a store of value and opened up the world of the blockchain.
Ethereum started with the goal of building a world computer that provides the infrastructure for decentralized applications, but so far it has only succeeded in the crowdfunding field. Due to performance, cost, scalability, and other issues, it is not yet able to serve as a DAPP infrastructure: at the end of 2017, a simple crypto-cat game was enough to congest Ethereum. Ethereum is trying to get out of this predicament through techniques such as sharding, Plasma, and PoS consensus.
Newcomers such as EOS highlight their high performance, emphasizing the possibility of reaching million-level TPS. What the future needs is an infrastructure on which a prosperous DAPP ecosystem can be built, meeting the user and business needs of different scenarios.
Which approach is the better choice? This is what Blue Fox has been watching. Blue Fox focuses on HPB, a blockchain project that takes a completely different path from other public chains and infrastructure projects, a path worth the attention of anyone following the blockchain.
This path is a combination of hardware and software. It is more demanding and harder to execute, but if it truly lands, it may be a good path.
HPB to become a high-performance blockchain infrastructure
Both HPB and EOS have the same goal: to provide a high-performance infrastructure for the decentralized ecosystem. Why? Mainly because of what it takes to bring the blockchain to mainstream business scenarios. The current blockchain has achieved some success in security and decentralization, but there are natural constraints in terms of efficiency, which hinders its application in mainstream scenarios.
This is also a direction that Blockchain 3.0 has been exploring. Through higher performance, lower costs, and better scalability to meet the needs of more decentralized application scenarios.
The throughput of both Bitcoin and Ethereum is currently worrying. Bitcoin supports about 7 transactions per second on average, and Ethereum about 15. Making blocks bigger can also increase throughput, but it causes block bloat. Last year, a crypto-cat game made everyone see the blockchain congestion problem. From a performance point of view, blockchains still have a long way to go before reaching the mainstream.
In addition to insufficient TPS, transaction costs on the blockchain are high; neither ordinary users nor developers can afford excessive gas costs. For example, when Ethereum's crypto games became hot, transaction fees at times rivaled the price of the encrypted cats themselves.
The HPB and EOS goals are similar, but their paths are completely different. HPB uses a combination of hardware and software, has its own dedicated chip hardware server, which makes it theoretically have higher performance.
HPB is also trying to create an operating-system-like architecture on which applications can be built. This architecture includes accounts, identity and authorization management, policy management, databases, asynchronous communication, program scheduling on CPUs, FPGAs, or clusters, and hardware acceleration technology. It realizes low latency and high concurrency, reaching million-level TPS to meet the needs of commercial scenarios.
This is where it differs from EOS: its architecture covers not only software but also hardware, a combined hardware-software blockchain architecture that draws on high-performance computing and cloud computing concepts. The hardware system includes distributed core nodes composed of high-performance computing hardware, a general communication network, and cloud terminals supported by high-performance computing hardware.
The core node supports a standard blockchain software architecture, including consensus algorithms, network communications, and task processing. It also introduces a hardware acceleration engine. It works with software to achieve high-performance tps through BOE technology (Blockchain Offload Engine) and consensus algorithm acceleration, data compression, and data encryption.
BOE makes HPB unique
In the HPB's overall architecture, compared with other blockchain infrastructures, there are obvious differences. One of the important points is its BOE technology.
BOE, mentioned above, is the Blockchain Offload Engine. The BOE engine includes BOE hardware, BOE firmware, and a matching software system. It is a heterogeneous processing system that achieves high performance and highly concurrent computational acceleration by combining the CPU's serial capabilities with the parallel processing capabilities of an FPGA/ASIC chip.
When parsing TCP and UDP packets, the BOE module does not need the CPU to participate, which saves CPU resources. The BOE module performs integrity checking, signature verification, and account balance verification on received messages such as transactions and blocks, performs fragmentation on large data to be transmitted, and encapsulates the fragments to ensure the integrity of received data. At the same time, statistics are collected on the received traffic of each TCP connection, and corresponding incentives are provided according to the contribution to the system.
BOE has played its own role in signature verification speed, encryption channel security, data transmission speed, network performance, and concurrent connections.
The BOE acceleration engine embeds an ECDSA module whose main purpose is to improve signature verification speed. ECDSA is the elliptic curve digital signature algorithm. Although it is a mature and widely used algorithm, a pure software implementation can only perform a few thousand verifications per second, which cannot meet high performance requirements, so combining BOE with ECDSA is a good attempt.
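To see why a pure software implementation tops out at a few thousand verifications per second per core, here is a small benchmark sketch using the pure-Python ecdsa package on secp256k1. The exact numbers vary widely by library and hardware (C libraries such as libsecp256k1 are far faster), so treat this only as an illustration of the software baseline that hardware acceleration is meant to improve on:

```python
import time
from ecdsa import SigningKey, SECP256k1

# One key pair and one signed message, verified repeatedly.
sk = SigningKey.generate(curve=SECP256k1)
vk = sk.get_verifying_key()
message = b"benchmark message"
signature = sk.sign(message)

N = 200
start = time.time()
for _ in range(N):
    vk.verify(signature, message)      # raises BadSignatureError if the signature is invalid
elapsed = time.time() - start

print(f"{N / elapsed:.0f} verifications/second (pure-Python ecdsa)")
```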
In the process of data transmission between different nodes, BOE needs to establish an encrypted channel. In this process, it uses a hardware random number generator to implement the security of the encrypted channel, because the seed of the random number of the key exchange becomes unpredictable.
The BOE acceleration engine also uses block data fragmentation broadcasting technology. Block fragmentation includes a complete block header, which facilitates the broadcast of newly generated blocks to all nodes. With block data fragmentation, network data can be quickly transmitted between different nodes.
BOE technology can perform traffic statistics of node connections in hardware and can calculate the network bandwidth contributed by different nodes. Only nodes that provide network bandwidth to the system have the opportunity to become high-contribution-value nodes. In this way, nodes are incentivized for their contributions.
In terms of concurrency, BOE is expected to maintain more than 10,000 TCP sessions and handle 10,000 concurrent sessions through an acceleration engine. BOE's dedicated parallel processing hardware replaces the traditional software serial processing functions such as transaction data broadcasting, unverified blockwide network broadcasting, transaction confirmation broadcasting, and the like.
According to HPB estimates, with the BOE acceleration engine, session response speed and the number of sessions maintained can reach more than 100 times the processing power of a common computing platform node. If this can be achieved in a real environment, it is a very significant performance improvement.
Consensus algorithm for internal and external bi-level elections
HPB not only significantly improves performance through BOE, but also adopts a dual-layer internal and external voting mechanism in consensus algorithms. It attempts to achieve more efficient consensus efficiency on the premise of ensuring security and privacy.
Outer election refers to the selection of high-contribution-value node members from many candidate nodes, and the election will use node contribution value evaluation indicators. Inner-layer election refers to an anonymous voting mechanism based on a hash queue. When a block is generated, it calculates which high-contribution value node preferentially generates a block. Nodes with high priority have the right to generate blocks preferentially.
So how are high-contribution-value nodes chosen? Four indicators are used to evaluate contribution value: whether a BOE acceleration engine is configured, network bandwidth contribution (data throughput over a fixed period of time), reputation, and the total holding time of the node's tokens. Among them, a node's reputation is obtained by analyzing its participation in transactions and data such as packaged blocks and transaction forwarding. The total holding time of the node's tokens can be obtained through real-time statistics on account information.
The outer election adopts an adaptive, ledger-consistent scheme: by keeping the ledger consistent, the outer election stays consistent, which reduces network synchronization and lets every node reuse the data already on the chain. The four evaluation indicators above are written into blocks; with a consistent ledger, the current ranking of all participating candidate nodes can be computed, and the higher-ranking, high-contribution-value nodes become the official high-contribution-value nodes for the next round.
With the formal high contribution value node, the goal of the inner election is to find the high contribution value node corresponding to each block as soon as possible. The entire process is divided into three phases: nominations, statistics, and calculations. These three phases combine security, privacy, and performance.
First is nomination: at the beginning of the voting period, the BOE acceleration engine generates a random commit, each high-contribution-value node submits its commit, and the commits are synchronized onto the chain generated by the high-performance nodes. Next is statistics: after the voting period ends, the commits recorded in the blockchain are tallied and the ticket pool is created. Last is calculation: a weight algorithm computes each node's block-generation priority, and the highest-priority high-contribution-value node obtains the right to package the block.
Other nodes can verify the random number and address signature according to the principle of verifiable random function, which not only guarantees security, but also guarantees the unpredictability and privacy of high contribution value nodes.
In general, HPB's consensus algorithm combines security, privacy, and speed through a combination of hardware and software. Using the BOE acceleration engine to generate random numbers, contribution value evaluation indicators, coherence ledgers, anonymous voting mechanisms, weight algorithms, signature verification, etc., privacy, reliability, security, and high efficiency are achieved.
Universal virtual machine design: support for different blockchains
The HPB virtual machine adopts a plug-in design and can support multiple virtual machines. It combines the underlying virtual machine with upper-level programming-language translation and support, and supports basic virtual machine applications. In addition, the external interface of the virtual machine can be realized through customized API operations, which can interact with account data and external data.
The advantage of this mechanism is that it can realize the high performance of native code execution when the smart contract runs, and it can also implement the common virtual machine mechanism supporting different blockchains. For example, it can support Ethereum virtual machine EVM. The smart contract on EVM can also be used on HPB.
Neo's virtual machine NeoVM can also be used on HPB. When high-performance scenarios are needed, users of both EVM and NeoVM need only a few adaptations to interact with other HPB applications.
HPB has also made some improvements to smart contracts, such as lifecycle management, auditing, and common templates. It can realize full lifecycle management of smart contracts, with complete and controllable process management and an integrated rights management mechanism for smart contract submission, deployment, use, and logout.
In smart contract auditing, HPB conducts a protective audit that combines automated tool auditing with professional code design. In terms of templates, HPB gradually formed a generic smart contract template to support the flexible configuration of various common business scenarios.
Incentives for a positive cycle of token economy
When a high-contribution-value node generates a block, it receives a token reward from the system. By design, the HPB system will issue additional tokens of no more than 6% per year, and the newly issued tokens will be proportional to the total number of high-contribution nodes and candidate nodes.
In order to obtain the token reward from the system, it must first become a high contribution value node, and only the high contribution value node has the right to generate a block.
In order to obtain the right to generate a block, it is necessary to contribute, including holding a certain number of HPB tokens, having a BOE hardware acceleration engine, and contributing network bandwidth to the system.
From this mechanism we can see that HPB's token economic system is designed to form a positive incentive loop: holding HPB tokens, running a BOE hardware acceleration engine, and contributing network bandwidth to the system together maintain the safe operation of the overall HPB system.
HPB landing: supports a variety of high-frequency scenes
In essence, HPB is a high-performance blockchain platform and is an infrastructure where various blockchain applications can be explored. Including blockchain finance, blockchain games, blockchain entertainment, blockchain big data, blockchain anti-fake tracking, blockchain energy and many other fields.
In terms of finance, decentralized lending, decentralized asset management, etc. can all be built on the HPB platform to meet high-frequency lending and transaction scenarios.
In terms of games, although putting every game operation on-chain is not practical, putting assets such as game props on-chain and trading them are important scenarios. Once game items are on-chain, game assets are transparent, unique, tamper-proof, and never disappear, providing great convenience for transactions between game products.
Compared with traditional centralized service providers, there are many advantages. For example, there is no need to worry about the loss, confiscation, or alteration of virtual game items, and the transaction process is simple and convenient. Since HPB is a high-performance blockchain expected to support millions of concurrent operations, many high-frequency scenarios can also be satisfied.
For blockchain entertainment, it can support the securitization of star assets, such as star-related token assets. In terms of blockchain big data, it can support data rights confirmation, ensuring that data owners retain ownership and that data is authentic, traceable, and tamper-proof, and finally enabling data transactions according to the needs of different entities while preserving personal privacy and data security.
Based on HPB's high-performance blockchain infrastructure, blockchain applications can be built for multiple scenarios. The HPB design provides a blockchain application program interface and application development package. At the HPB blockchain base layer, it provides blockchain data access and interaction interfaces, and supports various applications and development languages using JSON-RPC and RESTful APIs. It also supports multi-dimensional blockchain data query and transaction submission, and the interactive access interface can be integrated with the privilege control system.
The application development package includes comprehensive functional service packages that operate on blockchains from different development languages. For example, it provides functional interfaces such as encryption, data signing, and transaction generation, and can seamlessly support integration and function expansion of service systems in various languages, with SDKs for Java, JavaScript, Ruby, Python, .NET, and more.
Conclusion
If the blockchain of the future wants to reach the mainstream population, it must have high-performance public-chain or infrastructure support to form a true application ecosystem. Ethereum's dream of building a decentralized ecosystem cannot be achieved on its existing basis; Ethereum is trying to improve performance and expand scalability through sharding, Plasma, and PoS consensus mechanisms.
At the same time, the current situation has also spawned other public-chain efforts, including EOS, HPB, and others. Among them, HPB has adopted a unique combination of hardware and software: with dedicated BOE hardware acceleration, its signature verification speed, encrypted channel security, data transmission speed, network performance, and concurrency support all have advantages over software-only solutions.
In the software architecture, consensus algorithms for internal and external elections, flexible virtual machine design, application program interfaces, and development packages are also used to provide infrastructure for the development of blockchain application scenarios.
From the overall design of HPB, its goal is to provide a high-performance infrastructure that brings the blockchain to mainstream users. Only with a high-performance infrastructure can blockchains be implemented in many high-frequency scenarios, creating more application ecosystems and getting the opportunity to reach mainstream users.
The HPB team has a strong technical background. Founder Wang Xiaoming was an early blockchain evangelist who participated in establishing UnionPay Big Data and Beltal, where he served as CTO. Co-founder and CTO Xu Li has more than 10 years of experience in chip industry R&D and management; he was responsible for logic design, R&D, and FPGA chip marketing of core products at one of the world's top equipment suppliers and the world's largest component distributor. Technical VP Shu Shanlin once worked for Inspur, a well-known Chinese server manufacturer, as an embedded chief engineer, and has extensive R&D experience in embedded and underlying software. Another co-founder, Li Jinxin, is a former blockchain analyst of Guotai Junan and has extensive experience in digital asset investment.
The team members' backgrounds are in line with HPB's combined hardware-and-software path. According to the latest monthly report, the basic PCB layout design of the BOE board, the overall architecture design of the BOE, and the ECC acceleration scheme have been completed, and several tests have been run on the BOE hardware acceleration engine.
It is hoped that HPB will develop rapidly and will embark on a path with its own characteristics in the future of blockchain infrastructure competition. It will provide support for more decentralized applications and eventually build a prosperous ecosystem.
Risk Warning: Blue Fox articles do not constitute investment advice. Investment carries risk; it is recommended to conduct an in-depth examination of the project and carefully make your own investment decisions.
Source: https://mp.weixin.qq.com/s/RSuz6R6MTotEL_U__Al_Wg
submitted by azerbajian to HPBtrader

QuarkChain Testnet 2.0 Mining.

QuarkChain Testnet 1.0 was built based on standardized blockchain system requirements, which included network, wallet, browser, and virtual machine functionalities. Other than the fact that the token was a test currency, the environment was completely compatible with the main network. By enhancing the communication efficiency and security of the network, Testnet 2.0 further improves the openness of the network. In addition, Testnet 2.0 will allow community members (other than citizens or residents of the United States) to contribute directly to the network, i.e. running a full node and mining, and receive testnet tokens as rewards.
QuarkChain Testnet 2.0 will support multiple mining algorithms, including two typical algorithms: Ethash and Double SHA256, as well as QuarkChain's unique algorithm called Qkchash - a customized ASIC-resistant, CPU mining algorithm, exclusively developed by QuarkChain. Mining is available both on the root chain and on shards due to QuarkChain's two-layered blockchain structure. Miners can flexibly choose to mine on the root chain with higher computing power requirements or on shards based on their own computing power levels.
Our Goal
By allowing community members to participate in mining on Testnet 2.0, our goal is to enhance QuarkChain's community consensus, encourage community members to participate in testing and building the QuarkChain network, and gain first-hand experience of QuarkChain's high flexibility and usability. During this time, we hope that the community can develop a better understanding about our mining algorithms, sharding technologies, and governance structures, etc. Furthermore, this will be a more thorough challenge to QuarkChain's design before the launch of mainnet! Thus, we sincerely invite you to join the Testnet 2.0 mining event and build QuarkChain's infrastructure together!
Today, we're pleased to announce that we are officially providing the CPU mining demo to the public (other than citizens and residents of the United States)! Everyone can participate in our mining event and earn tQKC, which can be exchanged for real rewards by non-U.S. persons after the launch of our mainnet. We also expect to upgrade our testnet over time, allowing GPU mining for Ethash and ASIC mining for Double SHA256 in the future. In addition, in the near future, a mining pool that is compatible with all of QuarkChain's mining algorithms is also expected to be supported.
We hope all the community members can join in with us, and work together to complete this milestone!
2 Introduction to Mining Algorithms
2.1 What is mining?
Mining is the process of generating new blocks, in which the records of current transactions are added to the record of past transactions. Miners use software that contributes their mining power to participate in the maintenance of a blockchain. In return, they obtain a certain amount of QKC per block, which is called the coinbase reward. Like many other blockchain technologies, QuarkChain adopts the most widely used Proof of Work (PoW) consensus algorithm to secure the network.
A cryptographically-secure PoW is a costly and time-consuming process which is difficult to solve due to computation-intensity or memory intensity but easy for others to verify. For a block to be valid it must satisfy certain requirements and hash to a value less than the current target threshold. Reverting a block requires recreating all successor blocks and redoing the work they contain, which is costly.
By running a cluster, everyone can become a miner and participate in the mining process. The mining rewards are proportional to the number of blocks mined by each individual.
2.2 Introduction to QuarkChain Algorithms and Mining Setup
According to QuarkChain's two-layered blockchain structure and Boson consensus, different shards can apply different consensus and mining algorithms. As part of the Boson consensus, each shard can adjust its difficulty dynamically to increase or decrease the hash power of each shard chain.
In order to fully test QuarkChain Testnet 2.0, we adopt three different types of mining algorithms: Ethash, Double SHA256, and Qkchash, the last of which is ASIC-resistant and exclusively developed by QuarkChain founder Qi Zhou. The first two hash algorithms correspond to the mining algorithms dominantly conducted on graphics processing units (GPU) and application-specific integrated circuits (ASIC), respectively.
I. Ethash
Ethash is the PoW mining algorithm for Ethereum. It is the latest version of the earlier Dagger-Hashimoto. Ethash is memory intensive, which makes it require large amounts of memory space in the process of mining. The efficiency of mining is basically independent of the CPU, but directly related to memory size and bandwidth. Therefore, by design, building an Ethash ASIC is relatively difficult. Currently, Ethash mining is dominantly conducted on GPU machines. Read more about Ethash: https://github.com/ethereum/wiki/wiki/Ethash
II. Double SHA256
Double SHA256 is the PoW mining algorithm for Bitcoin. It is a computationally intensive hash algorithm, which applies two SHA256 iterations to the block header. If the hash result is less than the specified target, the mining is successful. ASIC machines have been developed by Bitmain to find more hashes with less electrical power usage. Read more about Double SHA256: https://en.bitcoin.it/wiki/Block_hashing_algorithm
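A toy sketch of the check described above: hash a block header twice with SHA-256 and compare the result, interpreted as a little-endian integer in Bitcoin's convention, against the target. The header bytes and target below are made up for illustration and are far easier than real difficulty:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def meets_target(header: bytes, target: int) -> bool:
    # A header wins when its double-SHA256, read as a little-endian integer, is below the target.
    return int.from_bytes(double_sha256(header), "little") < target

# Illustrative values only: a real Bitcoin header is 80 bytes (version, previous block hash,
# Merkle root, time, bits, nonce) and the target is derived from the 'bits' difficulty field.
prefix = bytes(76)                 # stand-in for the first 76 header bytes
target = 1 << 240                  # easy toy target (~2^16 expected attempts)

nonce = 0
while not meets_target(prefix + nonce.to_bytes(4, "little"), target):
    nonce += 1
print("found nonce:", nonce)
```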
III. Qkchash
Originally, Bitcoin mining was conducted on the CPUs of individual computers, with more cores and greater speed resulting in more profitability. After that, the mining process became dominated by GPU machines, then field-programmable gate arrays (FPGA) and finally ASICs, in a race to achieve more hash rate with less electrical power usage. Due to this arms race, it has become increasingly harder for prospective new miners to join. This raises centralization concerns, because the manufacturers of high-performance ASICs are concentrated among a small few.
To solve this, after extensive research and development, QuarkChain founder Dr. Qi Zhou has developed a mining algorithm, Qkchash, that is expected to be ASIC-resistant. The idea is motivated by the well-known order-statistic tree data structure. Based on this data structure, Qkchash performs multiple search, insert, and delete operations in the tree, which tries to break the ASIC pipeline and makes the code execution path data-dependent and unpredictable, in addition to producing random memory-access patterns. Thus, mining efficiency is closely tied to the CPU, which ensures the security of the Boson consensus and encourages mining decentralization.
Please refer to Dr. Qi’s paper for more details: https://medium.com/quarkchain-official/order-statistics-based-hash-algorithm-e40f108563c4
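As a rough, toy-level illustration only (not the actual Qkchash specification), the data-dependent order-statistic work described above can be sketched in Python with the sortedcontainers package: a sorted structure is repeatedly selected from by rank, deleted from, and inserted into, with every step driven by the previous hash output, so the access pattern cannot be predicted ahead of time:

```python
import hashlib
from sortedcontainers import SortedList   # order-statistic-style container with rank queries

def toy_order_statistics_hash(seed: bytes, rounds: int = 64, size: int = 1024) -> bytes:
    # Toy sketch only: data-dependent select / delete / insert driven by a running hash.
    state = hashlib.sha3_256(seed).digest()
    tree = SortedList(range(size))          # initial ordered set

    for _ in range(rounds):
        r = int.from_bytes(state, "big")
        k = r % len(tree)
        picked = tree[k]                    # order-statistic "select the k-th smallest"
        tree.remove(picked)                 # data-dependent delete
        tree.add(r % (1 << 32))             # data-dependent insert
        state = hashlib.sha3_256(state + picked.to_bytes(8, "big")).digest()

    return state

print(toy_order_statistics_hash(b"example header bytes").hex())
```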
2.3 Testnet 2.0 mining configuration
Number of shards: 8
Cluster: according to the real-time online mining nodes
Read more about Ethash with Guardian: https://github.com/QuarkChain/pyquarkchain/wiki/Ethash-with-Guardian
We will provide cluster software and the demo implementation of CPU mining to the public. Miners are able to arbitrarily select one shard or multiple shards to mine according to the mining difficulty and rewards of different shards. GPU / ASIC mining is allowed if the public manages to get it working with the current testnet. With the upgrade of our testnet, we will further provide the corresponding GPU / ASIC software.
QuarkChain's two-layered blockchain structure, new P2P mode, and Boson consensus algorithm are expected to be fully tested and verified in QuarkChain Testnet 2.0.
3 Mining Guidance
In order to encourage all community members to participate in the QuarkChain Testnet 2.0 mining event, we have prepared three mining guides for community members of different backgrounds.
Today we are releasing the Docker Mining Tutorial first. This tutorial provides a command line configuration guide for developers and a docker image for multiple platforms, including a concise introduction of nodes and mining settings. Follow the instructions here: Quick Start with QuarkChain Mining.
Next we will continue to release:
- A tutorial for community members who don't have a programming background. In this tutorial, we will teach how to create private QuarkChain nodes using AWS, and how to mine QKC step by step. This tutorial is expected to be released in the next few days.
- Programs and APIs integrated with GPU / ASIC mining. This is expected to allow existing miners to switch to QKC mining more seamlessly.
Frequently Asked Questions:
1. Can I use my laptop or personal computer to mine? Yes, we will provide cluster software and the demo implementation of CPU mining to the public. Miners will be able to arbitrarily select one shard or multiple shards to mine according to the work difficulty and rewards of different shards.
2. What are the minimum requirements for my laptop or personal computer to mine? Please prepare a Linux or macOS machine with a public IP address or port forwarding set up.
3. Can I mine with my GPU or an ASIC machine? For now, we will only be providing the demo implementation of CPU mining as our first step. Interested miners/developers can rewrite the corresponding GPU / ASIC mining program according to the JSON RPC API we provide. With the upgrade of our testnet, we expect to provide the corresponding GPU / ASIC interface at a later date.
4. What is the difference among the different mining algorithms? Which one should I choose? Double SHA256 is a computationally intensive algorithm, while Ethash and Qkchash are memory intensive algorithms, which have certain requirements on the computer's memory. Since we currently only support CPU mining, the mining efficiency entirely depends on the cores and speed of the CPU.
5. For testnet mining, what else should I know? First, the mining process will occupy a computer's memory, so it is recommended to use an idle computer for mining. In the Testnet 2.0 settings, the target block time of the root chain is 60 seconds, and the target block time of shard chains is 10 seconds. Mining is a completely random process, which will take some time and consume a certain amount of electricity.
6. What are the risks of testnet mining? Currently our testnet is still under development and may not be 100% stable. Thus, there are some risks of QuarkChain main chain forks in the testnet, software upgrades, and system reboots. These may cause your tQKC or block record to be lost despite our best efforts to ensure the stability and security of the testnet.
For more technical questions, welcome to join our developer community on Discord: https://discord.me/quarkchain.
4 Reward Mechanism
Testnet 2.0 and all rewards described herein, including mining, are not being offered and will not be available to any citizens or residents of the United States and certain other jurisdictions. All rewards will only be payable following the mainnet launch of QuarkChain. In order to claim or receive any of the following rewards after mainnet launch, you will be required to provide certain identifying documentation and information about yourself. Failure to provide such information or demonstrate compliance with the restrictions herein may result in forfeiture of all rewards, prohibition from participating in future QuarkChain programs, and other sanctions.
NO U.S. PERSONS MAY PARTICIPATE IN TESTNET 2.0 AND QUARKCHAIN WILL STRICTLY ENFORCE THIS VIA OUR KYC PROCEDURES. IF YOU ARE A CITIZEN OR RESIDENT OF THE UNITED STATES, DO NOT PARTICIPATE IN TESTNET 2.0. YOU WILL NOT RECEIVE ANY REWARDS FOR YOUR PARTICIPATION.
4.1 Mining Rewards
  1. Prize Pool
A total of 5 million QKC has been reserved as a prize pool to motivate all miners to participate in the Testnet 2.0 mining event. According to the different mining algorithms, the prize pool is allocated as follows:
Total Prize Pool: 5,000,000 QKC Prize Pool for Ethash Algorithm: 2,000,000 QKC Prize Pool for Double SHA256 Algorithm: 1,000,000 QKC Prize Pool for Qkchash Algorithm: 2,000,000 QKC
The number of QKC each miner is eligible to receive upon mainnet launch will be calculated on a pro rata basis for each mining algorithm set forth above, based on the ratio of sharded blocks mined by each miner to the total number of sharded blocks mined by all miners employing that mining algorithm in Testnet 2.0 (see the sketch below).
  2. Early-bird Rewards
To encourage more people to participate early, we will provide early-bird rewards. Miners who participate in the first month (December 2018, PST) will enjoy double points. This additional point reward will end on December 31, 2018, 11:59pm (PST).
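A small sketch of the pro-rata arithmetic described in the two items above; the miner names, block counts, and the use of a simple 2x multiplier for early-bird blocks are made-up illustrations, not QuarkChain's actual accounting:

```python
# Hypothetical payout for one algorithm's pool (e.g. Qkchash: 2,000,000 QKC).
POOL_QKC = 2_000_000
EARLY_BIRD_MULTIPLIER = 2        # double points for blocks mined during December 2018 (PST)

miners = {
    # miner: (sharded blocks mined in the early-bird month, sharded blocks mined later)
    "alice": (120, 300),
    "bob":   (0, 450),
}

points = {m: early * EARLY_BIRD_MULTIPLIER + later for m, (early, later) in miners.items()}
total_points = sum(points.values())

for miner, p in points.items():
    print(miner, "receives", POOL_QKC * p / total_points, "QKC")
```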
4.2 Bonus for Bug Submission: If you find any bugs for QuarkChain testnet, please feel free to create an issue on our Github page: https://github.com/QuarkChain/pyquarkchain/issues, or send us an email to [email protected]. We may provide related rewards based on the importance and difficulty of the bugs.
4.3 Reward Rules: QuarkChain reserves the right to review the qualifications of the participants in this event. If any cheating behaviors were to be found, the participant will be immediately disqualified from any rewards. QuarkChain further reserves the right to update the rules of the event, to stop the event/network, or to restart the event/network in its sole discretion, including the right to interpret any rules, terms or conditions. For the latest information, please visit our official website or follow us on Telegram/Twitter.
About QuarkChain
QuarkChain is a flexible, scalable, and user-oriented blockchain infrastructure built with blockchain sharding technology. It is one of the first public chains in the world to successfully implement state sharding technology for blockchain. QuarkChain aims to deliver 100,000+ on-chain TPS. Currently, 14,000+ peak TPS has already been achieved by an early stage testnet. QuarkChain already has over 50 partners in its ecosystem. With flexibility, scalability, and usability, QuarkChain is enabling EVERYONE to enjoy blockchain technology at ANYTIME and ANYWHERE.
Testnet 2.0 and all rewards described herein are not being and will not be offered in the United States or to any U.S. persons (as defined in Regulation S promulgated under the U.S. Securities Act of 1933, as amended) or any citizens or residents of countries subject to sanctions, including the Balkans, Belarus, Burma, Cote D'Ivoire, Cuba, Democratic Republic of Congo, Iran, Iraq, Liberia, North Korea, Sudan, Syria, Zimbabwe, Central African Republic, Crimea, Lebanon, Libya, Somalia, South Sudan, Venezuela and Yemen. QuarkChain reserves the right to terminate, suspend or prohibit participation of any user in Testnet 2.0 at any time.
In order to claim or receive any rewards, including mining rewards, you will be required to provide certain identifying documentation and information. Failure to provide such information or demonstrate compliance with the restrictions herein may result in termination of your participation, forfeiture of all rewards, prohibition from participating in future QuarkChain programs, and other actions.
This announcement is provided for informational purposes only and does not guarantee anyone a right to participate in or receive any rewards in connection with Testnet 2.0.
Note: The use of Testnet 2.0 is subject to our terms and conditions available at: https://quarkchain.io/testnet-2-0-terms-and-conditions/
More about QuarkChain:
Website: https://quarkchain.io/cn/
Facebook: https://www.facebook.com/quarkchainofficial/
Twitter: https://twitter.com/Quark_Chain
Telegram: https://t.me/quarkchainio
submitted by Rahadsr to u/Rahadsr

Bitcoin Core 0.10.0 released | Wladimir | Feb 16 2015

Wladimir on Feb 16 2015:
Bitcoin Core version 0.10.0 is now available from:
https://bitcoin.org/bin/0.10.0/
This is a new major version release, bringing both new features and
bug fixes.
Please report bugs using the issue tracker at github:
https://github.com/bitcoin/bitcoin/issues
The whole distribution is also available as torrent:
https://bitcoin.org/bin/0.10.0/bitcoin-0.10.0.torrent
magnet:?xt=urn:btih:170c61fe09dafecfbb97cb4dccd32173383f4e68&dn=0.10.0&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.publicbt.com%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.ccc.de%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Fopen.demonii.com%3A1337&ws=https%3A%2F%2Fbitcoin.org%2Fbin%2F
Upgrading and downgrading

How to Upgrade
If you are running an older version, shut it down. Wait until it has completely
shut down (which might take a few minutes for older versions), then run the
installer (on Windows) or just copy over /Applications/Bitcoin-Qt (on Mac) or
bitcoind/bitcoin-qt (on Linux).
Downgrading warning
Because release 0.10.0 makes use of headers-first synchronization and parallel
block download (see further), the block files and databases are not
backwards-compatible with older versions of Bitcoin Core or other software:
  • Blocks will be stored on disk out of order (in the order they are
received, really), which makes it incompatible with some tools or
other programs. Reindexing using earlier versions will also not work
anymore as a result of this.
  • The block index database will now hold headers for which no block is
stored on disk, which earlier versions won't support.
If you want to be able to downgrade smoothly, make a backup of your entire data
directory. Without this your node will need to start syncing (or importing from
bootstrap.dat) anew afterwards. It is possible that the data from a completely
synchronised 0.10 node may be usable in older versions as-is, but this is not
supported and may break as soon as the older version attempts to reindex.
This does not affect wallet forward or backward compatibility.
Notable changes

Faster synchronization
Bitcoin Core now uses 'headers-first synchronization'. This means that we first
ask peers for block headers (a total of 27 megabytes, as of December 2014) and
validate those. In a second stage, when the headers have been discovered, we
download the blocks. However, as we already know about the whole chain in
advance, the blocks can be downloaded in parallel from all available peers.
In practice, this means a much faster and more robust synchronization. On
recent hardware with a decent network link, it can be as little as 3 hours
for an initial full synchronization. You may notice a slower progress in the
very first few minutes, when headers are still being fetched and verified, but
it should gain speed afterwards.
A few RPCs were added/updated as a result of this:
  • getblockchaininfo now returns the number of validated headers in addition to
the number of validated blocks.
  • getpeerinfo lists both the number of blocks and headers we know we have in
common with each peer. While synchronizing, the heights of the blocks that we
have requested from peers (but haven't received yet) are also listed as
'inflight'.
  • A new RPC getchaintips lists all known branches of the block chain,
including those we only have headers for.
Transaction fee changes
This release automatically estimates how high a transaction fee (or how
high a priority) transactions require to be confirmed quickly. The default
settings will create transactions that confirm quickly; see the new
'txconfirmtarget' setting to control the tradeoff between fees and
confirmation times. Fees are added by default unless the 'sendfreetransactions'
setting is enabled.
Prior releases used hard-coded fees (and priorities), and would
sometimes create transactions that took a very long time to confirm.
Statistics used to estimate fees and priorities are saved in the
data directory in the fee_estimates.dat file just before
program shutdown, and are read in at startup.
New command line options for transaction fee changes:
  • -txconfirmtarget=n : create transactions that have enough fees (or priority)
so they are likely to begin confirmation within n blocks (default: 1). This setting
is overridden by the -paytxfee option.
  • -sendfreetransactions : Send transactions as zero-fee transactions if possible
(default: 0)
New RPC commands for fee estimation:
  • estimatefee nblocks : Returns approximate fee-per-1,000-bytes needed for
a transaction to begin confirmation within nblocks. Returns -1 if not enough
transactions have been observed to compute a good estimate.
  • estimatepriority nblocks : Returns approximate priority needed for
a zero-fee transaction to begin confirmation within nblocks. Returns -1 if not
enough free transactions have been observed to compute a good
estimate.
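For example, a quick way to poke at the new estimators is through bitcoin-cli. A minimal sketch, assuming bitcoin-cli is on your PATH and the node has observed enough transactions to produce estimates (both calls return -1 otherwise):

```python
import subprocess

def cli(*args):
    """Run a bitcoin-cli command and return its stdout as text."""
    return subprocess.run(["bitcoin-cli", *args],
                          capture_output=True, text=True, check=True).stdout.strip()

# Fee (per 1,000 bytes) estimated to get a confirmation within 6 blocks.
print("estimatefee 6:     ", cli("estimatefee", "6"))
# Priority needed for a zero-fee transaction to confirm within 6 blocks.
print("estimatepriority 6:", cli("estimatepriority", "6"))
```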
RPC access control changes
Subnet matching for the purpose of access control is now done
by matching the binary network address, instead of with string wildcard matching.
For the user this means that -rpcallowip takes a subnet specification, which can be
  • a single IP address (e.g. 1.2.3.4 or fe80::0012:3456:789a:bcde)
  • a network/CIDR (e.g. 1.2.3.0/24 or fe80::0000/64)
  • a network/netmask (e.g. 1.2.3.4/255.255.255.0 or fe80::0012:3456:789a:bcde/ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff)
An arbitrary number of -rpcallowip arguments can be given. An incoming connection will be accepted if its origin address
matches one of them.
For example:
| 0.9.x and before | 0.10.x |
|--------------------------------------------|---------------------------------------|
| -rpcallowip=192.168.1.1 | -rpcallowip=192.168.1.1 (unchanged) |
| -rpcallowip=192.168.1.* | -rpcallowip=192.168.1.0/24 |
| -rpcallowip=192.168.* | -rpcallowip=192.168.0.0/16 |
| -rpcallowip=* (dangerous!) | -rpcallowip=::/0 (still dangerous!) |
Using wildcards will result in the rule being rejected with the following error in debug.log:
 Error: Invalid -rpcallowip subnet specification: *. Valid are a single IP (e.g. 1.2.3.4), a network/netmask (e.g. 1.2.3.4/255.255.255.0) or a network/CIDR (e.g. 1.2.3.4/24). 
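The effect of matching on the binary network address (rather than on string wildcards) can be mimicked with Python's ipaddress module. This is only an illustration of the matching semantics, not the node's actual implementation:

```python
import ipaddress

# Subnets in the three accepted forms: single IP, network/CIDR, network/netmask.
allowed = [
    ipaddress.ip_network("192.168.1.1"),            # single address
    ipaddress.ip_network("192.168.1.0/24"),         # CIDR form, replaces 192.168.1.*
    ipaddress.ip_network("1.2.3.0/255.255.255.0"),  # netmask form
]

def is_allowed(origin):
    """Accept the connection if its origin address falls inside any allowed subnet."""
    addr = ipaddress.ip_address(origin)
    return any(addr in net for net in allowed)

print(is_allowed("192.168.1.42"))  # True  (inside 192.168.1.0/24)
print(is_allowed("10.0.0.5"))      # False (matches nothing)
```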
REST interface
A new HTTP API is exposed when running with the -rest flag, which allows
unauthenticated access to public node data.
It is served on the same port as RPC, but does not need a password, and uses
plain HTTP instead of JSON-RPC.
Assuming a local RPC server running on port 8332, it is possible to request
transaction and block data via URLs of the form /rest/tx/TX-HASH.EXT and
/rest/block/BLOCK-HASH.EXT.
In every case, EXT can be bin (for raw binary data), hex (for hex-encoded
binary) or json.
For more details, see the doc/REST-interface.md document in the repository.
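A minimal sketch of fetching public data over the REST interface, assuming the node was started with -rest and listens on the default mainnet port 8332 (no credentials are required):

```python
import json
import urllib.request

# The Bitcoin genesis block hash; any block hash known to the node works here.
GENESIS = "000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f"

url = f"http://127.0.0.1:8332/rest/block/{GENESIS}.json"  # .bin and .hex also work
with urllib.request.urlopen(url) as resp:
    block = json.loads(resp.read())

print("block hash:            ", block["hash"])
print("number of transactions:", len(block["tx"]))
```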
RPC Server "Warm-Up" Mode
The RPC server is started earlier now, before most of the expensive
initialisations like loading the block index. It is available now almost
immediately after starting the process. However, until all initialisations
are done, it always returns an immediate error with code -28 to all calls.
This new behaviour can be useful for clients to know that a server is already
started and will be available soon (for instance, so that they do not
have to start it themselves).
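A client can take advantage of this by polling until the -28 error goes away. A rough sketch, assuming the same local node and credentials as in the earlier example; RPC errors may arrive with a non-200 HTTP status, so both paths are handled:

```python
import base64
import json
import time
import urllib.error
import urllib.request

RPC_URL = "http://127.0.0.1:8332/"  # assumption: local node, default RPC port
AUTH = base64.b64encode(b"rpcuser:rpcpassword").decode()  # assumption: bitcoin.conf credentials

def rpc_raw(method):
    """Return the full JSON-RPC envelope (both 'result' and 'error')."""
    body = json.dumps({"jsonrpc": "1.0", "id": "warmup",
                       "method": method, "params": []}).encode()
    req = urllib.request.Request(RPC_URL, data=body, headers={
        "Content-Type": "application/json", "Authorization": "Basic " + AUTH})
    try:
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())
    except urllib.error.HTTPError as e:
        # The error body is still JSON, so parse it the same way.
        return json.loads(e.read())

while True:
    reply = rpc_raw("getblockcount")
    if reply.get("error") and reply["error"]["code"] == -28:
        print("node still warming up:", reply["error"]["message"])
        time.sleep(1)
        continue
    print("node is ready, block count:", reply["result"])
    break
```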
Improved signing security
For 0.10 the security of signing against unusual attacks has been
improved by making the signatures constant time and deterministic.
This change is a result of switching signing to use libsecp256k1
instead of OpenSSL. Libsecp256k1 is a cryptographic library
optimized for the curve Bitcoin uses which was created by Bitcoin
Core developer Pieter Wuille.
There exist attacks[1] against most ECC implementations where an
attacker on shared virtual machine hardware could extract a private
key if they could cause a target to sign using the same key hundreds
of times. While using shared hosts and reusing keys are inadvisable
for other reasons, it's a better practice to avoid the exposure.
OpenSSL has code in their source repository for derandomization
and reduction in timing leaks that we've eagerly wanted to use for a
long time, but this functionality has still not made its
way into a released version of OpenSSL. Libsecp256k1 achieves
significantly stronger protection: As far as we're aware this is
the only deployed implementation of constant time signing for
the curve Bitcoin uses and we have reason to believe that
libsecp256k1 is better tested and more thoroughly reviewed
than the implementation in OpenSSL.
[1] https://eprint.iacr.org/2014/161.pdf
Watch-only wallet support
The wallet can now track transactions to and from wallets for which you know
all addresses (or scripts), even without the private keys.
This can be used to track payments without needing the private keys online on a
possibly vulnerable system. In addition, it can help for (manual) construction
of multisig transactions where you are only one of the signers.
One new RPC, importaddress, is added which functions similarly to
importprivkey, but instead takes an address or script (in hexadecimal) as
argument. After using it, outputs credited to this address or script are
considered to be received, and transactions consuming these outputs will be
considered to be sent.
The following RPCs have optional support for watch-only:
getbalance, listreceivedbyaddress, listreceivedbyaccount,
listtransactions, listaccounts, listsinceblock, gettransaction. See the
RPC documentation for those methods for more information.
Compared to using getrawtransaction, this mechanism does not require
-txindex, scales better, integrates better with the wallet, and is compatible
with future block chain pruning functionality. It does mean that all relevant
addresses need to be added to the wallet before the payment, though.
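For instance, with bitcoin-cli (the address below is a placeholder; substitute one you want to watch). Per the RPC help, the trailing false on importaddress skips rescanning the existing block chain, and the trailing true on getbalance and listtransactions asks for watch-only entries to be included:

```python
import subprocess

def cli(*args):
    return subprocess.run(["bitcoin-cli", *args],
                          capture_output=True, text=True, check=True).stdout.strip()

WATCH_ADDR = "1PlaceholderAddressxxxxxxxxxxxxxx"  # hypothetical address to watch

# Import the address without its private key; "watch" is the label, "false" skips the rescan.
cli("importaddress", WATCH_ADDR, "watch", "false")

# Include watch-only outputs in the balance ("*" = all accounts, 1 = min confirmations).
print("balance incl. watch-only:", cli("getbalance", "*", "1", "true"))

# Recent transactions; entries touching the watched address carry 'involvesWatchonly'.
print(cli("listtransactions", "*", "10", "0", "true"))
```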
Consensus library
Starting from 0.10.0, the Bitcoin Core distribution includes a consensus library.
The purpose of this library is to make the verification functionality that is
critical to Bitcoin's consensus available to other applications, e.g. to language
bindings such as [python-bitcoinlib](https://pypi.python.org/pypi/python-bitcoinlib) or
alternative node implementations.
This library is called libbitcoinconsensus.so (or, .dll for Windows).
Its interface is defined in the C header [bitcoinconsensus.h](https://github.com/bitcoin/bitcoin/blob/0.10/src/script/bitcoinconsensus.h).
In its initial version the API includes two functions:
  • bitcoinconsensus_verify_script verifies a script. It returns whether the indicated input of the provided serialized transaction
correctly spends the passed scriptPubKey under additional constraints indicated by flags
  • bitcoinconsensus_version returns the API version, currently at an experimental 0
The functionality is planned to be extended to e.g. UTXO management in upcoming releases, but the interface
for existing methods should remain stable.
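A minimal sketch of loading the library from another application via ctypes; the library path and the function signature are taken from bitcoinconsensus.h and should be treated as assumptions, not a supported binding:

```python
import ctypes
import ctypes.util

# Assumption: libbitcoinconsensus is installed where the dynamic linker can find it.
libpath = ctypes.util.find_library("bitcoinconsensus") or "libbitcoinconsensus.so"
consensus = ctypes.CDLL(libpath)

# unsigned int bitcoinconsensus_version(void)
consensus.bitcoinconsensus_version.restype = ctypes.c_uint
consensus.bitcoinconsensus_version.argtypes = []

print("libbitcoinconsensus API version:", consensus.bitcoinconsensus_version())
```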
Standard script rules relaxed for P2SH addresses
The IsStandard() rules have been almost completely removed for P2SH
redemption scripts, allowing applications to make use of any valid
script type, such as "n-of-m OR y", hash-locked oracle addresses, etc.
While the Bitcoin protocol has always supported these types of script,
actually using them on mainnet has been previously inconvenient as
standard Bitcoin Core nodes wouldn't relay them to miners, nor would
most miners include them in blocks they mined.
bitcoin-tx
It has been observed that many of the RPC functions offered by bitcoind are
"pure functions", and operate independently of the bitcoind wallet. This
included many of the RPC "raw transaction" API functions, such as
createrawtransaction.
bitcoin-tx is a newly introduced command line utility designed to enable easy
manipulation of bitcoin transactions. A summary of its operation may be
obtained via "bitcoin-tx --help" Transactions may be created or signed in a
manner similar to the RPC raw tx API. Transactions may be updated, deleting
inputs or outputs, or appending new inputs and outputs. Custom scripts may be
easily composed using a simple text notation, borrowed from the bitcoin test
suite.
This tool may be used for experimenting with new transaction types, signing
multi-party transactions, and many other uses. Long term, the goal is to
deprecate and remove "pure function" RPC API calls, as those do not require a
server round-trip to execute.
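As a rough illustration of driving bitcoin-tx from a script: the in=/outaddr= command syntax is summarised by "bitcoin-tx --help", and the txid, vout and address below are placeholders, so this is a sketch rather than a working transaction:

```python
import subprocess

# Placeholder previous output and destination; replace with real values.
PREV_TXID = "0" * 64                         # hypothetical txid of the output being spent
PREV_VOUT = "0"
DEST = "1PlaceholderAddressxxxxxxxxxxxxxx"   # hypothetical destination address

# Create an unsigned transaction spending PREV_TXID:PREV_VOUT and paying 0.01 to DEST.
raw_hex = subprocess.run(
    ["bitcoin-tx", "-create",
     f"in={PREV_TXID}:{PREV_VOUT}",
     f"outaddr=0.01:{DEST}"],
    capture_output=True, text=True, check=True,
).stdout.strip()

print("unsigned raw transaction:", raw_hex)
# The hex can then be handed to signrawtransaction over RPC, exactly like the raw tx API.
```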
Other utilities "bitcoin-key" and "bitcoin-script" have been proposed, making
key and script operations easily accessible via command line.
Mining and relay policy enhancements
Bitcoin Core's block templates are now for version 3 blocks only, and any mining
software relying on its getblocktemplate must be updated in parallel to use
libblkmaker either version 0.4.2 or any version from 0.5.1 onward.
If you are solo mining, this will affect you the moment you upgrade Bitcoin
Core, which must be done prior to BIP66 achieving its 951/1001 status.
If you are mining with the stratum mining protocol: this does not affect you.
If you are mining with the getblocktemplate protocol to a pool: this will affect
you at the pool operator's discretion, which must be no later than BIP66
achieving its 951/1001 status.
The prioritisetransaction RPC method has been added to enable miners to
manipulate the priority of transactions on an individual basis.
Bitcoin Core now supports BIP 22 long polling, so mining software can be
notified immediately of new templates rather than having to poll periodically.
Support for BIP 23 block proposals is now available in Bitcoin Core's
getblocktemplate method. This enables miners to check the basic validity of
their next block before expending work on it, reducing risks of accidental
hardforks or mining invalid blocks.
Two new options to control mining policy:
  • -datacarrier=0/1 : Relay and mine "data carrier" (OP_RETURN) transactions
if this is 1.
  • -datacarriersize=n : Maximum size, in bytes, we consider acceptable for
"data carrier" outputs.
The relay policy has changed to more properly implement the desired behavior of not
relaying free (or very low fee) transactions unless they have a priority above the
AllowFreeThreshold(), in which case they are relayed subject to the rate limiter.
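For example, a miner can bump an individual transaction before requesting a new template. The txid below is a placeholder, and per the RPC help the fee delta is expressed in satoshis; a minimal sketch using bitcoin-cli:

```python
import subprocess

def cli(*args):
    return subprocess.run(["bitcoin-cli", *args],
                          capture_output=True, text=True, check=True).stdout.strip()

TXID = "f" * 64  # hypothetical transaction id already in the mempool

# Leave the priority delta at 0 and add a 10,000 satoshi fee delta, so the transaction
# is treated as if it paid that much more fee when blocks are assembled.
cli("prioritisetransaction", TXID, "0", "10000")

# The adjusted transaction should now be selected earlier by getblocktemplate.
template = cli("getblocktemplate")
print(template[:200], "...")
```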
BIP 66: strict DER encoding for signatures
Bitcoin Core 0.10 implements BIP 66, which introduces block version 3, and a new
consensus rule, which prohibits non-DER signatures. Such transactions have been
non-standard since Bitcoin v0.8.0 (released in February 2013), but were
technically still permitted inside blocks.
This change breaks the dependency on OpenSSL's signature parsing, and is
required if implementations would want to remove all of OpenSSL from the
consensus code.
The same miner-voting mechanism as in BIP 34 is used: when 751 out of a
sequence of 1001 blocks have version number 3 or higher, the new consensus
rule becomes active for those blocks. When 951 out of a sequence of 1001
blocks have version number 3 or higher, it becomes mandatory for all blocks.
Backward compatibility with current mining software is NOT provided, thus miners
should read the first paragraph of "Mining and relay policy enhancements" above.
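One way to watch the rollout is to count how many of the last 1,001 blocks carry version 3 or higher, mirroring the 751/1001 and 951/1001 thresholds. A sketch over JSON-RPC, re-using the same local-node and credential assumptions as the earlier examples (it walks up to 1,001 blocks, so it takes a little while):

```python
import base64
import json
import urllib.request

RPC_URL = "http://127.0.0.1:8332/"  # assumption: local node, default RPC port
AUTH = base64.b64encode(b"rpcuser:rpcpassword").decode()  # assumption: bitcoin.conf credentials

def rpc(method, *params):
    body = json.dumps({"jsonrpc": "1.0", "id": "bip66",
                       "method": method, "params": list(params)}).encode()
    req = urllib.request.Request(RPC_URL, data=body, headers={
        "Content-Type": "application/json", "Authorization": "Basic " + AUTH})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

tip = rpc("getblockcount")
start = max(1, tip - 1000)          # examine at most the last 1,001 blocks
total = tip - start + 1

v3_or_higher = sum(
    1
    for height in range(start, tip + 1)
    if rpc("getblock", rpc("getblockhash", height))["version"] >= 3
)

print(f"{v3_or_higher}/{total} recent blocks are version 3 or higher")
print("activation threshold (751) reached:", v3_or_higher >= 751)
print("mandatory threshold (951) reached: ", v3_or_higher >= 951)
```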
0.10.0 Change log

Detailed release notes follow. This overview includes changes that affect external
behavior, not code moves, refactors or string updates.
RPC:
  • f923c07 Support IPv6 lookup in bitcoin-cli even when IPv6 only bound on localhost
  • b641c9c Fix addnode "onetry": Connect with OpenNetworkConnection
  • 171ca77 estimatefee / estimatepriority RPC methods
  • b750cf1 Remove cli functionality from bitcoind
  • f6984e8 Add "chain" to getmininginfo, improve help in getblockchaininfo
  • 99ddc6c Add nLocalServices info to RPC getinfo
  • cf0c47b Remove getwork() RPC call
  • 2a72d45 prioritisetransaction
  • e44fea5 Add an option -datacarrier to allow users to disable relaying/mining data carrier transactions
  • 2ec5a3d Prevent easy RPC memory exhaustion attack
  • d4640d7 Added argument to getbalance to include watchonly addresses and fixed errors in balance calculation
  • 83f3543 Added argument to listaccounts to include watchonly addresses
  • 952877e Showing 'involvesWatchonly' property for transactions returned by 'listtransactions' and 'listsinceblock'. It is only appended when the transaction involves a watchonly address
  • d7d5d23 Added argument to listtransactions and listsinceblock to include watchonly addresses
  • f87ba3d added includeWatchonly argument to 'gettransaction' because it affects balance calculation
  • 0fa2f88 added includedWatchonly argument to listreceivedbyaddress/...account
  • 6c37f7f getrawchangeaddress: fail when keypool exhausted and wallet locked
  • ff6a7af getblocktemplate: longpolling support
  • c4a321f Add peerid to getpeerinfo to allow correlation with the logs
  • 1b4568c Add vout to ListTransactions output
  • b33bd7a Implement "getchaintips" RPC command to monitor blockchain forks
  • 733177e Remove size limit in RPC client, keep it in server
  • 6b5b7cb Categorize rpc help overview
  • 6f2c26a Closely track mempool byte total. Add "getmempoolinfo" RPC
  • aa82795 Add detailed network info to getnetworkinfo RPC
  • 01094bd Don't reveal whether password is <20 or >20 characters in RPC
  • 57153d4 rpc: Compute number of confirmations of a block from block height
  • ff36cbe getnetworkinfo: export local node's client sub-version string
  • d14d7de SanitizeString: allow '(' and ')'
  • 31d6390 Fixed setaccount accepting foreign address
  • b5ec5fe update getnetworkinfo help with subversion
  • ad6e601 RPC additions after headers-first
  • 33dfbf5 rpc: Fix leveldb iterator leak, and flush before gettxoutsetinfo
  • 2aa6329 Enable customising node policy for datacarrier data size with a -datacarriersize option
  • f877aaa submitblock: Use a temporary CValidationState to determine accurately the outcome of ProcessBlock
  • e69a587 submitblock: Support for returning specific rejection reasons
  • af82884 Add "warmup mode" for RPC server
  • e2655e0 Add unauthenticated HTTP REST interface to public blockchain data
  • 683dc40 Disable SSLv3 (in favor of TLS) for the RPC client and server
  • 44b4c0d signrawtransaction: validate private key
  • 9765a50 Implement BIP 23 Block Proposal
  • f9de17e Add warning comment to getinfo
Command-line options:
  • ee21912 Use netmasks instead of wildcards for IP address matching
  • deb3572 Add -rpcbind option to allow binding RPC port on a specific interface
  • 96b733e Add -version option to get just the version
  • 1569353 Add -stopafterblockimport option
  • 77cbd46 Let -zapwallettxes recover transaction meta data
  • 1c750db remove -tor compatibility code (only allow -onion)
  • 4aaa017 rework help messages for fee-related options
  • 4278b1d Clarify error message when invalid -rpcallowip
  • 6b407e4 -datadir is now allowed in config files
  • bdd5b58 Add option -sysperms to disable 077 umask (create new files with system default umask)
  • cbe39a3 Add "bitcoin-tx" command line utility and supporting modules
  • dbca89b Trigger -alertnotify if network is upgrading without you
  • ad96e7c Make -reindex cope with out-of-order blocks
  • 16d5194 Skip reindexed blocks individually
  • ec01243 --tracerpc option for regression tests
  • f654f00 Change -genproclimit default to 1
  • 3c77714 Make -proxy set all network types, avoiding a connect leak
  • 57be955 Remove -printblock, -printblocktree, and -printblockindex
  • ad3d208 remove -maxorphanblocks config parameter since it is no longer functional
Block and transaction handling:
  • 7a0e84d ProcessGetData(): abort if a block file is missing from disk
  • 8c93bf4 LoadBlockIndexDB(): Require block db reindex if any blk*.dat files are missing
  • 77339e5 Get rid of the static chainMostWork (optimization)
  • 4e0eed8 Allow ActivateBestChain to release its lock on cs_main
  • 18e7216 Push cs_mains down in ProcessBlock
  • fa126ef Avoid undefined behavior using CFlatData in CScript serialization
  • 7f3b4e9 Relax IsStandard rules for pay-to-script-hash transactions
  • c9a0918 Add a skiplist to the CBlockIndex structure
  • bc42503 Use unordered_map for CCoinsViewCache with salted hash (optimization)
  • d4d3fbd Do not flush the cache after every block outside of IBD (optimization)
  • ad08d0b Bugfix: make CCoinsViewMemPool support pruned entries in underlying cache
  • 5734d4d Only remove actualy failed blocks from setBlockIndexValid
  • d70bc52 Rework block processing benchmark code
  • 714a3e6 Only keep setBlockIndexValid entries that are possible improvements
  • ea100c7 Reduce maximum coinscache size during verification (reduce memory usage)
  • 4fad8e6 Reject transactions with excessive numbers of sigops
  • b0875eb Allow BatchWrite to destroy its input, reducing copying (optimization)
  • 92bb6f2 Bypass reloading blocks from disk (optimization)
  • 2e28031 Perform CVerifyDB on pcoinsdbview instead of pcoinsTip (reduce memory usage)
  • ab15b2e Avoid copying undo data (optimization)
  • 341735e Headers-first synchronization
  • afc32c5 Fix rebuild-chainstate feature and improve its performance
  • e11b2ce Fix large reorgs
  • ed6d1a2 Keep information about all block files in memory
  • a48f2d6 Abstract context-dependent block checking from acceptance
  • 7e615f5 Fixed mempool sync after sending a transaction
  • 51ce901 Improve chainstate/blockindex disk writing policy
  • a206950 Introduce separate flushing modes
  • 9ec75c5 Add a locking mechanism to IsInitialBlockDownload to ensure it never goes from false to true
  • 868d041 Remove coinbase-dependant transactions during reorg
  • 723d12c Remove txn which are invalidated by coinbase maturity during reorg
  • 0cb8763 Check against MANDATORY flags prior to accepting to mempool
  • 8446262 Reject headers that build on an invalid parent
  • 008138c Bugfix: only track UTXO modification after lookup
P2P protocol and network code:
  • f80cffa Do not trigger a DoS ban if SCRIPT_VERIFY_NULLDUMMY fails
  • c30329a Add testnet DNS seed of Alex Kotenko
  • 45a4baf Add testnet DNS seed of Andreas Schildbach
  • f1920e8 Ping automatically every 2 minutes (unconditionally)
  • 806fd19 Allocate receive buffers in on the fly
  • 6ecf3ed Display unknown commands received
  • aa81564 Track peers' available blocks
  • caf6150 Use async name resolving to improve net thread responsiveness
  • 9f4da19 Use pong receive time rather than processing time
  • 0127a9b remove SOCKS4 support from core and GUI, use SOCKS5
  • 40f5cb8 Send rejects and apply DoS scoring for errors in direct block validation
  • dc942e6 Introduce whitelisted peers
  • c994d2e prevent SOCKET leak in BindListenPort()
  • a60120e Add built-in seeds for .onion
  • 60dc8e4 Allow -onlynet=onion to be used
  • 3a56de7 addrman: Do not propagate obviously poor addresses onto the network
  • 6050ab6 netbase: Make SOCKS5 negotiation interruptible
  • 604ee2a Remove tx from AlreadyAskedFor list once we receive it, not when we process it
  • efad808 Avoid reject message feedback loops
  • 71697f9 Separate protocol versioning from clientversion
  • 20a5f61 Don't relay alerts to peers before version negotiation
  • b4ee0bd Introduce preferred download peers
  • 845c86d Do not use third party services for IP detection
  • 12a49ca Limit the number of new addressses to accumulate
  • 35e408f Regard connection failures as attempt for addrman
  • a3a7317 Introduce 10 minute block download timeout
  • 3022e7d Require sufficent priority for relay of free transactions
  • 58fda4d Update seed IPs, based on bitcoin.sipa.be crawler data
  • 18021d0 Remove bitnodes.io from dnsseeds.
Validation:
  • 6fd7ef2 Also switch the (unused) verification code to low-s instead of even-s
  • 584a358 Do merkle root and txid duplicates check simultaneously
  • 217a5c9 When transaction outputs exceed inputs, show the offending amounts so as to aid debugging
  • f74fc9b Print input index when signature validation fails, to aid debugging
  • 6fd59ee script.h: set_vch() should shift a >32 bit value
  • d752ba8 Add SCRIPT_VERIFY_SIGPUSHONLY (BIP62 rule 2) (test only)
  • 698c6ab Add SCRIPT_VERIFY_MINIMALDATA (BIP62 rules 3 and 4) (test only)
  • ab9edbd script: create sane error return codes for script validation and remove logging
  • 219a147 script: check ScriptError values in script tests
  • 0391423 Discourage NOPs reserved for soft-fork upgrades
  • 98b135f Make STRICTENC invalid pubkeys fail the script rather than the opcode
  • 307f7d4 Report script evaluation failures in log and reject messages
  • ace39db consensus: guard against openssl's new strict DER checks
  • 12b7c44 Improve robustness of DER recoding code
  • 76ce5c8 fail immediately on an empty signature
Build system:
  • f25e3ad Fix build in OS X 10.9
  • 65e8ba4 build: Switch to non-recursive make
  • 460b32d build: fix broken boost chrono check on some platforms
  • 9ce0774 build: Fix windows configure when using --with-qt-libdir
  • ea96475 build: Add mention of --disable-wallet to bdb48 error messages
  • 1dec09b depends: add shared dependency builder
  • c101c76 build: Add --with-utils (bitcoin-cli and bitcoin-tx, default=yes). Help string consistency tweaks. Target sanity check fix
  • e432a5f build: add option for reducing exports (v2)
  • 6134b43 Fixing condition 'sabotaging' MSVC build
  • af0bd5e osx: fix signing to make Gatekeeper happy (again)
  • a7d1f03 build: fix dynamic boost check when --with-boost= is used
  • d5fd094 build: fix qt test build when libprotobuf is in a non-standard path
  • 2cf5f16 Add libbitcoinconsensus library
  • 914868a build: add a deterministic dmg signer
  • 2d375fe depends: bump openssl to 1.0.1k
  • b7a4ecc Build: Only check for boost when building code that requires it
Wallet:
  • b33d1f5 Use fee/priority estimates in wallet CreateTransaction
  • 4b7b1bb Sanity checks for estimates
  • c898846 Add support for watch-only addresses
  • d5087d1 Use script matching rather than destination matching for watch-only
  • d88af56 Fee fixes
  • a35b55b Dont run full check every time we decrypt wallet
  • 3a7c348 Fix make_change to not create half-satoshis
  • f606bb9 fix a possible memory leak in CWalletDB::Recover
  • 870da77 fix possible memory leaks in CWallet::EncryptWallet
  • ccca27a Watch-only fixes
  • 9b1627d [Wallet] Reduce minTxFee for transaction creation to 1000 satoshis
  • a53fd41 Deterministic signing
  • 15ad0b5 Apply AreSane() checks to the fees from the network
  • 11855c1 Enforce minRelayTxFee on wallet created tx and add a maxtxfee option
GUI:
  • c21c74b osx: Fix missing dock menu with qt5
  • b90711c Fix Transaction details shows wrong To:
  • 516053c Make links in 'About Bitcoin Core' clickable
  • bdc83e8 Ensure payment request network matches client network
  • 65f78a1 Add GUI view of peer information
  • 06a91d9 VerifyDB progress reporting
  • fe6bff2 Add BerkeleyDB version info to RPCConsole
  • b917555 PeerTableModel: Fix potential deadlock. #4296
  • dff0e3b Improve rpc console history behavior
  • 95a9383 Remove CENT-fee-rule from coin control completely
  • 56b07d2 Allow setting listen via GUI
  • d95ba75 Log messages with type>QtDebugMsg as non-debug
  • 8969828 New status bar Unit Display Control and related changes
  • 674c070 seed OpenSSL PNRG with Windows event data
  • 509f926 Payment request parsing on startup now only changes network if a valid network name is specified
  • acd432b Prevent balloon-spam after rescan
  • 7007402 Implement SI-style (thin space) thoudands separator
  • 91cce17 Use fixed-point arithmetic in amount spinbox
  • bdba2dd Remove an obscure option no-one cares about
  • bd0aa10 Replace the temporary file hack currently used to change Bitcoin-Qt's dock icon (OS X) with a buffer-based solution
  • 94e1b9e Re-work overviewpage UI
  • 8bfdc9a Better looking trayicon
  • b197bf3 disable tray interactions when client model set to 0
  • 1c5f0af Add column Watch-only to transactions list
  • 21f139b Fix tablet crash. closes #4854
  • e84843c Broken addresses on command line no longer trigger testnet
  • a49f11d Change splash screen to normal window
  • 1f9be98 Disable App Nap on OSX 10.9+
  • 27c3e91 Add proxy to options overridden if necessary
  • 4bd1185 Allow "emergency" shutdown during startup
  • d52f072 Don't show wallet options in the preferences menu when running with -disablewallet
  • 6093aa1 Qt: QProgressBar CPU-Issue workaround
  • 0ed9675 [Wallet] Add global boolean whether to send free transactions (default=true)
  • ed3e5e4 [Wallet] Add global boolean whether to pay at least the custom fee (default=true)
  • e7876b2 [Wallet] Prevent user from paying a non-sense fee
  • c1c9d5b Add Smartfee to GUI
  • e0a25c5 Make askpassphrase dialog behave more sanely
  • 94b362d On close of splashscreen interrupt verifyDB
  • b790d13 English translation update
  • 8543b0d Correct tooltip on address book page
Tests:
  • b41e594 Fix script test handling of empty scripts
  • d3a33fc Test CHECKMULTISIG with m == 0 and n == 0
  • 29c1749 Let tx (in)valid tests use any SCRIPT_VERIFY flag
  • 6380180 Add rejection of non-null CHECKMULTISIG dummy values
  • 21bf3d2 Add tests for BoostAsioToCNetAddr
  • b5ad5e7 Add Python test for -rpcbind and -rpcallowip
  • 9ec0306 Add CODESEPARATOR/FindAndDelete() tests
  • 75ebced Added many rpc wallet tests
  • 0193fb8 Allow multiple regression tests to run at once
  • 92a6220 Hook up sanity checks
  • 3820e01 Extend and move all crypto tests to crypto_tests.cpp
  • 3f9a019 added list/get received by address/ account tests
  • a90689f Remove timing-based signature cache unit test
  • 236982c Add skiplist unit tests
  • f4b00be Add CChain::GetLocator() unit test
  • b45a6e8 Add test for getblocktemplate longpolling
  • cdf305e Set -discover=0 in regtest framework
  • ed02282 additional test for OP_SIZE in script_valid.json
  • 0072d98 script tests: BOOLAND, BOOLOR decode to integer
  • 833ff16 script tests: values that overflow to 0 are true
  • 4cac5db script tests: value with trailing 0x00 is true
  • 89101c6 script test: test case for 5-byte bools
  • d2d9dc0 script tests: add tests for CHECKMULTISIG limits
  • d789386 Add "it works" test for bitcoin-tx
  • df4d61e Add bitcoin-tx tests
  • aa41ac2 Test IsPushOnly() with invalid push
  • 6022b5d Make script_{valid,invalid}.json validation flags configurable
  • 8138cbe Add automatic script test generation, and actual checksig tests
  • ed27e53 Add coins_tests with a large randomized CCoinViewCache test
  • 9df9cf5 Make SCRIPT_VERIFY_STRICTENC compatible with BIP62
  • dcb9846 Extend getchaintips RPC test
  • 554147a Ensure MINIMALDATA invalid tests can only fail one way
  • dfeec18 Test every numeric-accepting opcode for correct handling of the numeric minimal encoding rule
  • 2b62e17 Clearly separate PUSHDATA and numeric argument MINIMALDATA tests
  • 16d78bd Add valid invert of invalid every numeric opcode tests
  • f635269 tests: enable alertnotify test for Windows
  • 7a41614 tests: allow rpc-tests to get filenames for bitcoind and bitcoin-cli from the environment
  • 5122ea7 tests: fix forknotify.py on windows
  • fa7f8cd tests: remove old pull-tester scripts
  • 7667850 tests: replace the old (unused since Travis) tests with new rpc test scripts
  • f4e0aef Do signature-s negation inside the tests
  • 1837987 Optimize -regtest setgenerate block generation
  • 2db4c8a Fix node ranges in the test framework
  • a8b2ce5 regression test only setmocktime RPC call
  • daf03e7 RPC tests: create initial chain with specific timestamps
  • 8656dbb Port/fix txnmall.sh regression test
  • ca81587 Test the exact order of CHECKMULTISIG sig/pubkey evaluation
  • 7357893 Prioritize and display -testsafemode status in UI
  • f321d6b Add key generation/verification to ECC sanity check
  • 132ea9b miner_tests: Disable checkpoints so they don't fail the subsidy-change test
  • bc6cb41 QA RPC tests: Add tests block block proposals
  • f67a9ce Use deterministically generated script tests
  • 11d7a7d [RPC] add rpc-test for http keep-alive (persistent connections)
  • 34318d7 RPC-test based on invalidateblock for mempool coinbase spends
  • 76ec867 Use actually valid transactions for script tests
  • c8589bf Add actual signature tests
  • e2677d7 Fix smartfees test for change to relay policy
  • 263b65e tests: run sanity checks in tests too
Miscellaneous:
  • 122549f Fix incorrect checkpoint data for testnet3
  • 5bd02cf Log used config file to debug.log on startup
  • 68ba85f Updated Debian example bitcoin.conf with config from wiki + removed some cruft and updated comments
  • e5ee8f0 Remove -beta suffix
  • 38405ac Add comment regarding experimental-use service bits
  • be873f6 Issue warning if collecting RandSeed data failed
  • 8ae973c Allocate more space if necessary in RandSeedAddPerfMon
  • 675bcd5 Correct comment for 15-of-15 p2sh script size
  • fda3fed libsecp256k1 integration
  • 2e36866 Show nodeid instead of addresses in log (for anonymity) unless otherwise requested
  • cd01a5e Enable paranoid corruption checks in LevelDB >= 1.16
  • 9365937 Add comment about never updating nTimeOffset past 199 samples
  • 403c1bf contrib: remove getwork-based pyminer (as getwork API call has been removed)
  • 0c3e101 contrib: Added systemd .service file in order to help distributions integrate bitcoind
  • 0a0878d doc: Add new DNSseed policy
  • 2887bff Update coding style and add .clang-format
  • 5cbda4f Changed LevelDB cursors to use scoped pointers to ensure destruction when going out of scope
  • b4a72a7 contrib/linearize: split output files based on new-timestamp-year or max-file-size
  • e982b57 Use explicit fflush() instead of setvbuf()
  • 234bfbf contrib: Add init scripts and docs for Upstart and OpenRC
  • 01c2807 Add warning about the merkle-tree algorithm duplicate txid flaw
  • d6712db Also create pid file in non-daemon mode
  • 772ab0e contrib: use batched JSON-RPC in linarize-hashes (optimization)
  • 7ab4358 Update bash-completion for v0.10
  • 6e6a36c contrib: show pull # in prompt for github-merge script
  • 5b9f842 Upgrade leveldb to 1.18, make chainstate databases compatible between ARM and x86 (issue #2293)
  • 4e7c219 Catch UTXO set read errors and shutdown
  • 867c600 Catch LevelDB errors during flush
  • 06ca065 Fix CScriptID(const CScript& in) in empty script case
Credits

Thanks to everyone who contributed to this release:
  • 21E14
  • Adam Weiss
  • Aitor Pazos
  • Alexander Jeng
  • Alex Morcos
  • Alon Muroch
  • Andreas Schildbach
  • Andrew Poelstra
  • Andy Alness
  • Ashley Holman
  • Benedict Chan
  • Ben Holden-Crowther
  • Bryan Bishop
  • BtcDrak
  • Christian von Roques
  • Clinton Christian
  • Cory Fields
  • Cozz Lovan
  • daniel
  • Daniel Kraft
  • David Hill
  • Derek701
  • dexX7
  • dllud
  • Dominyk Tiller
  • Doug
  • elichai
  • elkingtowa
  • ENikS
  • Eric Shaw
  • Federico Bond
  • Francis GASCHET
  • Gavin Andresen
  • Giuseppe Mazzotta
  • Glenn Willen
  • Gregory Maxwell
  • gubatron
  • HarryWu
  • himynameismartin
  • Huang Le
  • Ian Carroll
  • imharrywu
  • Jameson Lopp
  • Janusz Lenar
  • JaSK
  • Jeff Garzik
  • JL2035
  • Johnathan Corgan
  • Jonas Schnelli
  • jtimon
  • Julian Haight
  • Kamil Domanski
  • kazcw
  • kevin
  • kiwigb
  • Kosta Zertsekel
  • LongShao007
  • Luke Dashjr
  • Mark Friedenbach
  • Mathy Vanvoorden
  • Matt Corallo
  • Matthew Bogosian
  • Micha
  • Michael Ford
  • Mike Hearn
  • mrbandrews
  • mruddy
  • ntrgn
  • Otto Allmendinger
  • paveljanik
  • Pavel Vasin
  • Peter Todd
  • phantomcircuit
  • Philip Kaufmann
  • Pieter Wuille
  • pryds
  • randy-waterhouse
  • R E Broadley
  • Rose Toomey
  • Ross Nicoll
  • Roy Badami
  • Ruben Dario Ponticelli
  • Rune K. Svendsen
  • Ryan X. Charles
  • Saivann
  • sandakersmann
  • SergioDemianLerner
  • shshshsh
  • sinetek
  • Stuart Cardall
  • Suhas Daftuar
  • Tawanda Kembo
  • Teran McKinney
  • tm314159
  • Tom Harding
  • Trevin Hofmann
  • Whit J
  • Wladimir J. van der Laan
  • Yoichi Hirai
  • Zak Wilcox
As well as everyone that helped translating on [Transifex](https://www.transifex.com/projects/p/bitcoin/).
Also lots of thanks to the bitcoin.org website team David A. Harding and Saivann Carignan.
Wladimir
original: http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-February/007480.html
submitted by bitcoin-devlist-bot to bitcoin_devlist

Related video tutorials:
  • Programming Bitcoin-qt using the RPC api (1 of 6)
  • Building Bitcoin Websites (m1xolyd1an)
  • Bitcoin JSON-RPC Tutorial 7 - Wallet Notify
  • Bitcoin JSON-RPC Tutorial 1

I am trying to call the JSON-RPC API from C#. I have added my own IP address with its credentials, and before writing the API client in C# I installed Bitcoin on my server. Now, using the server details, I would like to test whether the API is working or not. Here is the Program.cs file: public static void Main(string[] args) { var Daten = RequestServer("getaccount ...
I have the bitcoin daemon running and I want to use the walletnotify option together with a JSON-RPC call. Some of the examples use a "transaction.sh" file for walletnotify. What is it for? What do I have to write in ... (wallet, bitcoind, api, json-rpc, wallet-notify)
How to get an address's balance ...
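On the walletnotify question above: walletnotify simply runs an external command whenever a wallet transaction changes, substituting the transaction id for %s, so the "transaction.sh" in the examples is just whatever script you want executed. A hedged sketch of doing the same with a small Python handler instead of a shell script; the bitcoin.conf line, paths and credentials are assumptions for illustration:

```python
#!/usr/bin/env python3
# Save as notify.py and point the node at it in bitcoin.conf, e.g.:
#   walletnotify=/usr/bin/python3 /path/to/notify.py %s
# (hypothetical path; %s is replaced by the transaction id)
import base64
import json
import sys
import urllib.request

RPC_URL = "http://127.0.0.1:8332/"  # assumption: local node, default RPC port
AUTH = base64.b64encode(b"rpcuser:rpcpassword").decode()  # assumption: your RPC credentials

def rpc(method, *params):
    body = json.dumps({"jsonrpc": "1.0", "id": "notify",
                       "method": method, "params": list(params)}).encode()
    req = urllib.request.Request(RPC_URL, data=body, headers={
        "Content-Type": "application/json", "Authorization": "Basic " + AUTH})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

if __name__ == "__main__":
    txid = sys.argv[1]                # passed in by walletnotify as %s
    tx = rpc("gettransaction", txid)  # look the transaction up in the wallet
    print(f"wallet tx {txid}: amount={tx['amount']} confirmations={tx['confirmations']}")
```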


Programming Bitcoin-qt using the RPC api (1 of 6)

Other entries in the same series cover bitcoin.conf (Bitcoin JSON-RPC Tutorial 3: set up your bitcoin.conf file and create custom settings with bitcoind), making JSON-RPC calls with Bitcoin-Qt (part 4 of 6), and an introduction to the Bitcoin JSON-RPC tutorial series.
