These crates contain experimental pure Rust implementations of scalar field arithmetic for the respective elliptic curves (secp256k1, NIST P-256). These implementations are new, unaudited, and haven't received much public scrutiny, so we have explicitly labeled them as being at a "USE AT YOUR OWN RISK" level of maturity. That said, these implementations follow modern best practices for this class of elliptic curves (complete projective formulas providing constant-time scalar multiplication). In particular:
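For readers unfamiliar with the term, "scalar field arithmetic" just means integer arithmetic modulo the curve's group order. A minimal Python sketch over the (public) secp256k1 order — emphatically not constant-time, unlike the crates themselves:

```python
# Scalar field arithmetic is integer arithmetic modulo the curve's group
# order n. Python bignums are NOT constant-time, so this only illustrates
# the operations the crates implement, not how they implement them.

# secp256k1 group order (a standard, public curve parameter):
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def scalar_add(a: int, b: int) -> int:
    return (a + b) % N

def scalar_mul(a: int, b: int) -> int:
    return (a * b) % N

def scalar_inv(a: int) -> int:
    # Modular inverse; real implementations use constant-time algorithms.
    return pow(a, -1, N)

# A scalar times its inverse is 1, as in any field:
assert scalar_mul(7, scalar_inv(7)) == 1
```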
This release has been a cross-functional effort, with contributions from some of the best Rust elliptic curve cryptography experts. I'd like to thank everyone who's contributed, and hope that these crates are useful, especially for embedded cryptography and cryptocurrency use cases. EDIT: the version in the title is incorrect. The correct version is v0.4.0; unfortunately, the title cannot be edited.
Thanks to all who submitted questions for Shiv Malik in the GAINS AMA yesterday; it was great to see so much interest in Data Unions! You can read the full transcript here:
Gains x Streamr AMA Recap
Thanks to everyone in our community who attended the GAINS AMA yesterday with Shiv Malik. We were excited to see that so many people attended, and pleasantly overwhelmed by the number of questions we got from you on Twitter and Telegram. We decided to do a little recap of the session for anyone who missed it, and to archive some points we haven't previously discussed with our community. Happy reading, and thanks to Alexandre and Henry for having us on their channel!

What is the project about in a few simple sentences?

At Streamr we are building a real-time network for tomorrow's data economy. It's a decentralized, peer-to-peer network which we are hoping will one day replace centralized message brokers like Amazon's AWS services. On top of that, one of the things I'm most excited about is Data Unions. With Data Unions anyone can join the data economy and start monetizing the data they already produce. Streamr's Data Union framework provides a really easy way for devs to start building their own data unions, and can also be easily integrated into any existing apps.

Okay, sounds interesting. Do you have a concrete example you could give us to make it easier to understand?

The best example of a Data Union is the first one that has been built out of our stack. It's called Swash and it's a browser plugin. You can download it here: http://swashapp.io/ Basically it helps you monetize the data you already generate (day in, day out) as you browse the web. It's the sort of data that Google already knows about you, but this way, with Swash, you can actually monetize it yourself. The more people that join the union, the more powerful it becomes and the greater the rewards are for everyone as the data product sells to potential buyers.

Very interesting. What stage is the project/product at? It's live, right?

Yes. It's live.
And the Data Union framework is in public beta. The Network is on course to be fully decentralized at some point next year.

How much can a regular person browsing the Internet expect to make, for example?

That's a great question. The answer is no one quite knows yet. We do know that this sort of data (consumer insights) is worth hundreds of millions and really isn't available in high quality. With a union of a few million people, everyone could be getting 20-50 dollars a year. But it'll take a few years at least to realise that growth. Of course Swash is just one data union amongst many possible others (which are now starting to get built out on our platform!). With Swash, I believe they now have 3,000 members. They need to get to 50,000 before they become really viable, but they are yet to do any marketing, so all that is organic growth.

I assume the data is anonymized, btw?

Yes. And there are in fact a few privacy-protecting tools Swash supplies to its users.

How does Swash compare to Brave?

Brave really is about consent for people's attention and getting paid for that. They don't sell your data as such. Swash can of course be a plugin with Brave, so you can make passive income browsing the internet whilst also consenting to advertising if you want to earn BAT. Of course it's Streamr that is powering Swash. And we're looking at powering other DUs - say, for example, mobile applications. The holy grail might be having already existing apps and platforms out there integrating DU tech into their apps, so people can consent (or not) to having their data sold - and then getting a cut of that revenue when it does sell. The other thing to recognise is that the big tech companies monopolise data on a vast scale - data that we of course produce for them. That is stifling innovation. Take for example a competitor map app. To effectively compete with Google Maps or Waze, they need millions of users feeding real-time data into it.
Without that, it's like Google Maps used to be - static and a bit useless.

Right, so how do you convince these big tech companies that are producing these big apps to integrate with Streamr? Does it mean they wouldn't be able to monetize data as well on their end if it becomes more available through an aggregation of individuals?

If a map application does manage to scale to that level, then inevitably Google buys them out - that's what happened with Waze. But if you have a data union which bundles together the raw location data of millions of people, then any application builder can come along and license that data for their app. This encourages all sorts of innovation and breaks the monopoly. We're currently having conversations with mobile network operators to see if they want to pilot this new approach to data monetization. And that's what's even more exciting. Just be explicit with users - do you want to sell your data? Okay, if yes, then which data points do you want to sell? The mobile network operator (like T-Mobile, for example) then organises the sale of the data of those who consent, and everyone gets a cut. Streamr, in this example, provides the backend to port and bundle the data, and also the token and payment rail for the payments.

So for big companies (mobile operators in this case), it's less logistics - handing over the implementation to you and simply taking a cut, compared to having to make sense of that data themselves and selling it themselves (in the past)?

Sort of. We provide the backend to port the data and the template smart contracts to distribute the payments. They get to focus on finding buyers for the data and ensuring that the data being collected from the app is the kind of data that is valuable and useful to the world. (Through our sister company TX, we also help build out the applications for them and ensure a smooth integration.) It's a vision that we'll be able to talk about more concretely in a few weeks' time 😁
The other thing to add is that the reason why this vision is working is that the current data economy is under attack. Not just from privacy laws such as GDPR, but also from Google shutting down cookies, bidstream data being investigated by the FTC (for example), and Apple making changes to iOS 14 to make third-party data sharing more explicit for users. All this means that the only real places for thousands of multinationals to buy the sort of consumer insights they need to ensure good business decisions will be owned by Google/FB etc, or from SDKs, or through this method - from overt, rich consent from the consumer in return for a cut of the earnings.

A couple of questions to get a better feel about Streamr as a whole now and where it came from. How many people are in the team? For how long have you been working on Streamr?

We are around 35 people, with one office in Zug, Switzerland and another one in Helsinki. But there are team members all over the globe - we've got people in the US, Spain, the UK, Germany, Poland, Australia and Singapore. I joined Streamr back in 2017 during the ICO craze (but not for that reason!).

And did you raise funds so far? If so, how did you handle them? Are you planning to do any future raises?

We did an ICO back in Sept/Oct 2017 in which we raised around 30 million CHF. The funds give us enough runway for around five or six years to finalize our roadmap. We've also simultaneously opened up a sister consultancy business, TX, which helps enterprise clients implement the Streamr stack. We've got no plans to raise more!

What is the token use case? How did you make sure it captures the value of the ecosystem you're building?

The token is used for payments on the Marketplace (such as for Data Union products, for example) and also for the broker nodes in the Network (we haven't talked much about the P2P network, but it's our project's secret sauce). The broker nodes will be paid in DATAcoin for providing bandwidth.
We are currently working together with BlockScience on our token economics. We've just started the second phase in their consultancy process and will soon be able to share more on the Streamr Network's token economics. But if you want to sum up the Network in a sentence or two: imagine the BitTorrent network being run by nodes who get paid to do so, except that instead of passing around static files, it's real-time data streams. That of course means it's really well suited for the IoT economy.

Well, let's continue with questions from Twitter, and this one comes at the perfect time. Can the Streamr Network be used to transfer data from IoT devices? Is the network bandwidth sufficient? How is it possible to monetize the received data from a huge number of IoT devices? From u/EgorCypto

Yes, IoT devices are a perfect use case for the Network. When it comes to the network's bandwidth and speed, the Streamr team just recently did extensive research to find out how well the network scales. The result was that it is on par with centralized solutions. We ran experiments with network sizes from 32 to 2048 nodes, and in the largest network of 2048 nodes, 99% of deliveries happened within 362 ms globally. To put these results in context, PubNub, a centralized message brokering service, promises to deliver messages within 250 ms - and that's a centralized service! So we're super happy with those results. Here's a link to the paper: https://medium.com/streamrblog/streamr-network-performance-and-scalability-whitepaper-adb461edd002

While we're on the technical side, second question from Twitter: Can you be sure that valuable data is safe and not shared with service providers? Are you using any encryption methods? From u/CryptoMatvey

Yes, the messages in the Network are encrypted. Currently all nodes are still run by the Streamr team. This will change in the Brubeck release - our last milestone on the roadmap - when end-to-end encryption is added.
This release adds end-to-end encryption and automatic key exchange mechanisms, ensuring that node operators cannot access any confidential data. BTW, if you want to get very technical, the encryption algorithms we are using are: AES (AES-256-CTR) for encryption of data payloads, RSA (PKCS #1) for securely exchanging the AES keys, and ECDSA (secp256k1) for data signing (same as Bitcoin and Ethereum).

Last question from Twitter, less technical now :) In their AMA ad, they say that Streamr has three unions: Swash, Tracey and MyDiem. Why does Tracey help fisherfolk in the Philippines monetize their catch data? Do they only work with this country or do they plan to expand? From u/alej_pacedo

So yes, Tracey is one of the first Data Unions on top of the Streamr stack. Currently we are working together with WWF-Philippines and the UnionBank of the Philippines on doing a first pilot with local fishing communities in the Philippines. WWF is interested in the catch data to protect wildlife and make sure that no overfishing happens. And at the same time, the fisherfolk are incentivized to record their catch data by being able to access micro-loans from banks, which in turn helps them make their business more profitable. So far, we have lots of interest from other places in Southeast Asia which would like to use Tracey too. In fact, TX have already had explicit interest in building out the use cases in other countries, and not just for seafood tracking but also for many other agricultural products. (I think they had a call this week about a use case involving cows 😂)

I recall that late last year the Streamr Data Union framework was launched into private beta, and now public beta was recently released. What are the differences? Any added new features? By u/Idee02

The main difference is that the DU 2.0 release will be more reliable and also more transparent, since the sidechain we are using for micropayments is now based on blockchain consensus (PoA).
Are there plans in the pipeline for Streamr to focus on consumer-facing products themselves, or will the emphasis be on the further development of the underlying engine? By u/Andromedamin

We're all about what's under the hood. We want third-party devs to take on the challenge of building the consumer-facing apps. We know it would be foolish to try and do it all!

As a project, how do you consider the progress of the project to fully developed (in % of progress plz)? By u/Hash2T

We're about 60% through, I reckon!

What tools does Streamr offer developers so that they can create their own DApps and monetize data? What is Streamr's architecture? How do the Ethereum blockchain, the Streamr Network and the Streamr Core applications interact? By u/CryptoDurden

We'll be releasing the Data Union framework in a few weeks from now, and I think DApp builders will be impressed with what they find.

We all know that blockchain has many disadvantages as well, so why did Streamr choose blockchain as a combination for its technology? What's your plan to merge blockchain with your technologies to make it safer and more convenient for your users? By u/noonecanstopme

So we're not a blockchain ourselves - that's important to note. The P2P network only uses blockchain tech for the payments. Why on earth, for example, would you want to store every single piece of info on a blockchain? You should only store what you want to store, and that should probably happen off-chain. So we think we got the mix right there.

What are the requirements for node setup? By u/John097

Good q - we're still working on that, but those specs will be out in the next release.

How does the Streamr team ensure good data is entered into the blockchain by participants? By u/kartika84

Another great Q there! From the product-buying end, this will be done by reputation. But ensuring the quality of the data as it passes through the network - if that is what you also mean - is all about getting the architecture right.
In a decentralised network that's not easy, as data points in streams have to arrive in the right order. It's one of the biggest challenges, but we think we're solving it in a really decentralised way.

What are the requirements for integrating applications with a Data Union? What role does the DATA token play in this case? By u/JP_Morgan_Chase

There are no specific requirements as such, just that your application needs to generate some kind of real-time data. Data Union members and administrators are both paid in DATA by data buyers coming from the Streamr Marketplace.

Regarding security and legality, how does Streamr guarantee that the data uploaded by a given user belongs to them and that they can monetize and capitalize on it? By u/kherrera22

So that's a sort of million-dollar question for anyone involved in a digital industry. Within our system there are ways of ensuring that, but in the end the negotiation of data licensing will still, in many ways, be done human to human and via legal licenses rather than smart contracts - at least when it comes to sizeable data products. There are more answers to this, but it's a long one!

Okay, thank you all for all of those! The AMA took place in the GAINS Telegram group on 10/09/20. Answers by Shiv Malik.
There is no doubt that Bitcoin's software will evolve to cope with the latest threats well before quantum computers can crack the already inherently secure nature of Bitcoin's code. But how quickly can all the banking and other legacy financial software manage to upgrade in time? No chance. What do you think will happen to the price of bitcoin when no-coiners start losing their bank account balances to quantum cyber attacks?
Technical: Upcoming Improvements to Lightning Network
Price? Who gives a shit about price when Lightning Network development is a lot more interesting????? One thing about LN is that because there's no need for consensus before implementing things, figuring out the status of things is quite a bit more difficult than on Bitcoin. On one hand it lets larger groups of people work on improving LN faster without having to coordinate so much. On the other hand it leads to some fragmentation of the LN space, with compatibility problems occasionally coming up. The below is just a smattering sample of LN stuff I personally find interesting. There's a bunch of other stuff, like splicing and dual-funding, that I won't cover --- the post is long enough as-is, and besides, some of the below aren't as well-known. Anyway.....
Yeah the exciting new Lightning Network channel update protocol!
Solves "toxic waste" problem. In the current Poon-Dryja update protocol, old state ("waste") is dangerous ("toxic") because if your old state is acquired by your most hated enemy, they can use that old state to publish a stale unilateral close transaction, which your counterparty must treat as a theft attempt and punish you, causing you to lose funds. With Decker-Russell-Osuntokun old state is not revoked, but is instead gainsaid by later state: instead of actively punishing old state, it simply replaces the old state with a later state.
Allows multiple participants in the update protocol. This can be used as the update protocol for a channel factory with 3 or more participants, for example (channels are not practical for multiple participants, since the loss of any one participant makes the channel completely unusable; it's more sensible to have a multiple-participant factory that splits up into 2-participant channels). Poon-Dryja only supports two participants. Another update protocol, Decker-Wattenhofer, also supports multiple participants, but requires much larger locktimes in case of a unilateral close (measurable in weeks, whereas Poon-Dryja and Decker-Russell-Osuntokun can be measured in hours or days).
It uses nLockTime in a very clever way.
No, it does not solve the "watchtower needed" problem. Decker-Russell-Osuntokun still requires watchtowers if you're planning to be offline for a long time.
What might be confusing is that it was initially thought that watchtowers under Decker-Russell-Osuntokun could be made more efficient by having the channel participant update a single "slot" in the watchtower, rather than having to consume one "slot" per update as in Poon-Dryja. However, the existence of the "poisoned blob" attack by ZmnSCPxj means that having a replaceable "slot" is risky if the other participant of the channel can spoof you. And the safest way to prevent spoofing somebody is to identify that somebody --- but now that means the watchtower can surveil the activities of somebody it has identified, losing privacy.
Requires base layer change --- SIGHASH_NOINPUT / SIGHASH_ANYPREVOUT. This is still being worked out and may potentially not reach Bitcoin anytime soon.
Determining the costs of routes is somewhat harder, and may complicate routefinding algorithms. In particular: every channel today has a "CLTV delta", a number of blocks by which the total maximum delay of the payment is increased. This maximum delay is the maximum amount of time by which an outgoing payment can be locked, and needs to be kept low for UX purposes. Decker-Russell-Osuntokun will also add a "CSV minimum": a number of blocks which must be smaller than the delay of an HTLC going through the channel. Current routefinding algos are good at minimizing a summed-up cost (like the CLTV delta), so the CSV minimum may require discovering / developing new routefinding algos.
Due to the "CSV minimum" above, existing nodes that don't understand Decker-Russell-Osuntokun cannot reliably route over Decker-Russell-Osuntokun channels, as they might not impose this minimum properly.
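The punish-versus-replace distinction above can be sketched as a toy state machine (class and method names are mine, purely illustrative):

```python
class PoonDryja:
    """Old states are revoked ("toxic waste"); publishing one gets punished."""
    def __init__(self):
        self.state = 0
        self.revoked = set()

    def update(self):
        self.revoked.add(self.state)  # the old state becomes toxic
        self.state += 1

    def unilateral_close(self, published_state):
        if published_state in self.revoked:
            return "punished: counterparty claims all channel funds"
        return f"settled at state {published_state}"


class DeckerRussellOsuntokun:
    """Old states are never revoked; a later state simply supersedes them."""
    def __init__(self):
        self.state = 0

    def update(self):
        self.state += 1  # nothing ever becomes toxic

    def unilateral_close(self, published_state):
        # The counterparty responds with its latest state, which gainsays
        # any older published state instead of punishing it.
        return f"settled at state {max(published_state, self.state)}"
```

Publishing stale state 0 after two updates loses you everything under Poon-Dryja, but under Decker-Russell-Osuntokun it is merely replaced by state 2.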
Multipart payments / AMP
Splitting up large payments into smaller parts!
There are at least three variants of multipart payments: Original, Base, and High.
Original is the original AMP proposed by Lightning Labs. It sacrifices proof-of-payment in order to allow each path to have a different payment hash. This is done by having the payer use a derivation scheme to generate each part's payment preimage from a seed, then splitting the seed (using secret sharing) across the parts. The receiver can only reconstruct the seed if all parts reach it.
Base simply uses the same payment hash for all routes. This retains proof-of-payment (i.e. an invoice is undeniably signed by the receiver, including a payment hash in the invoice; public knowledge of the payment preimage is proof that the receiver has in fact received money, and any third party can be convinced of this by being shown the signed invoice and the preimage). The receiver could just take one part of the payment and then claim to be underpaid by the payer and then deny service, but claiming any one part is enough to publish the payment preimage, creating a proof-of-payment: so the receiver can provably be made liable, even if it took just one part, thus the incentive of the receiver is to only take in the payment once all parts have arrived to it.
High requires elliptic curve points / scalars. It combines both Original and Base, retaining proof-of-payment (sacrificed by Original) and ensuring cryptographically-secure waiting for all parts (rather than the merely economically-incentivized waiting of Base). This is done by using the elliptic curve homomorphism of addition over scalars to add together the payer-provided preimage (really a scalar) from Original with the payee-provided preimage (really a scalar) from Base.
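The seed-splitting in Original can be sketched with n-of-n XOR secret sharing; the derivation function below is a stand-in of my own, not the actual AMP KDF:

```python
import functools
import hashlib
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def derive_part_preimage(seed: bytes, part_index: int) -> bytes:
    # Stand-in derivation scheme: each part gets its own payment preimage,
    # and therefore its own payment hash.
    return hashlib.sha256(seed + part_index.to_bytes(4, "big")).digest()

def split_seed(seed: bytes, num_parts: int) -> list:
    # n-of-n XOR sharing: the receiver can reconstruct the seed only if
    # every single part (and thus every share) arrives.
    shares = [os.urandom(32) for _ in range(num_parts - 1)]
    final_share = functools.reduce(xor_bytes, shares, seed)
    return shares + [final_share]

def recover_seed(shares: list) -> bytes:
    return functools.reduce(xor_bytes, shares)
```

With any one share missing, the XOR of the rest is statistically independent of the seed, so the receiver learns nothing until all parts have arrived.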
Better expected reliability. Channels are limited by capacity. By splitting up into many smaller payments, you can fit into more channels and be more likely to successfully reach the payee.
Capacity on multiple of your channels can be used to pay. Currently, if you have 0.05 BTC on one channel and 0.05 BTC on another channel, you can't pay 0.06 BTC without first rebalancing your channels (and paying fees for the rebalance, whether the payment succeeds or not). With multipart you can now combine the capacities of multiple of your channels, and only pay fees for combining them if the payment pushes through.
Wumbo payments (oversized payments) come "for free" without having to be explicitly supported by the nodes of the network: you just split up wumbo payments into parts smaller than the wumbo limit.
Multipart will have higher fees. Part of the fee of each channel is a flat-rate base fee. Going through multiple paths means paying this flat-rate fee more times.
It's not clear how to split up payments. Heuristics for payment splitting have to be derived and developed and tested.
Payment points / scalars
Using the magic of elliptic curve homomorphism for fun and Lightning Network profits! Basically, currently on Lightning an invoice has a payment hash, and the receiver reveals a payment preimage which, when input to SHA256, returns the given payment hash. Instead of using payment hashes and preimages, just replace them with payment points and scalars. An invoice will now contain a payment point, and the receiver reveals a payment scalar (private key) which, when multiplied with the standard generator point G on secp256k1, returns the given payment point. This is basically Scriptless Script usage on Lightning: instead of HTLCs we have Scriptless Script Pointlocked Timelocked Contracts (PTLCs).
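A bare-bones sketch of the point/scalar relationship, using the public secp256k1 parameters (illustration only — no side-channel hardening, and nothing here matches any real node's internals):

```python
# secp256k1 parameters (standard, public constants)
P = 2**256 - 2**32 - 977  # base field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(A, B):
    """Affine point addition on y^2 = x^3 + 7; None is the point at infinity."""
    if A is None:
        return B
    if B is None:
        return A
    (x1, y1), (x2, y2) = A, B
    if x1 == x2 and (y1 + y2) % P == 0:
        return None  # inverse points sum to infinity
    if A == B:  # doubling (the curve's a coefficient is 0)
        m = 3 * x1 * x1 * pow(2 * y1, -1, P) % P
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def point_mul(k, B=G):
    """Double-and-add scalar multiplication (not constant-time)."""
    R = None
    k %= N
    while k:
        if k & 1:
            R = point_add(R, B)
        B = point_add(B, B)
        k >>= 1
    return R

# The invoice carries the payment point; settling reveals the scalar.
payment_scalar = 0xC0FFEE  # example secret scalar
payment_point = point_mul(payment_scalar)
```

The key property hashes lack: scalar addition carries over to point addition, i.e. `point_mul(a + b)` equals `point_add(point_mul(a), point_mul(b))`.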
Enables a shit-ton of improvements: payment decorrelation, stuckless payments, noncustodial escrow over Lightning (the Hodl Hodl Lightning escrow is custodial, read the fine print), High multipart.
It's the same coolness that makes Schnorr Signatures cool. ECDSA, despite being based on elliptic curves, is not cool because the hash-the-nonce operation needed to prevent it from infringing Schnorr's fatherfucking patent also prevents ECDSA from using the cool elliptic curve homomorphism of addition over scalars.
Requires Schnorr on Bitcoin layer.
Actually, we can work with 2p-ECDSA without waiting for Schnorr. We get back the nice elliptic curve homomorphism by passing the ECDSA nonce through another cryptosystem, Paillier. This gets us the ability to do Scriptless Script. I think it has only 80-bit security because of going through Paillier, though.
Basically the conundrum is: we could implement 2p-ECDSA now, hope we never have to test the 80-bit security anytime soon, then switch to Schnorr with 128-bit security later (which means reimplementing a bunch of things, because the calculations are different and the data that needs to be exchanged between channel participants is very different between the 2p-ECDSA and Schnorr). Reimplementing is painful and is more dev work. If we don't implement with 2p-ECDSA now, though, we will be delaying all the nice elliptic curve goodness (stuckless, noncustodial escrow, payment decorrelation) until Bitcoin gets Schnorr.
The elliptic curve discrete log problem is theoretically quantum-vulnerable. If we can't find a quantum-resistant homomorphic construction, we'll have to give up the advantages (payment decorrelation, stuckless payments, noncustodial escrow over Lightning) we got from using elliptic curve points and go back to boring old hashes.
Ensuring that payers cannot access data or other digital goods without proof of having paid the provider. In a nutshell: the payment preimage used as a proof-of-payment is the decryption key of the data. The provider gives the encrypted data, and issues an invoice. The buyer of the data then has to pay over Lightning in order to learn the decryption key, with the decryption key being the payment preimage.
Enables data providers to sell data. This could be sensors, livestreams, blogs, articles, whatever.
There's no scheme to determine if the data provider is providing actually-useful data. The data-provider could just stream https://random.org for example. This is a potentially-impossible problem. Even if the data-provider provides a "sample" of the data, and is able to derive some proof that the sample is indeed a true snippet of the encrypted data, the rest of the data outside of the sample might just be random junk.
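The preimage-as-decryption-key mechanics can be sketched as follows; the SHA256-counter keystream is a toy of my own for brevity, where a real system would use a proper authenticated cipher:

```python
import hashlib
import os

def stream_xor(key: bytes, data: bytes) -> bytes:
    """Toy SHA256-in-counter-mode keystream; XOR makes enc == dec."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + (i // 32).to_bytes(8, "big")).digest()
        out += bytes(a ^ b for a, b in zip(data[i:i + 32], block))
    return bytes(out)

# Provider: the payment preimage doubles as the decryption key.
preimage = os.urandom(32)
payment_hash = hashlib.sha256(preimage).digest()  # goes into the invoice
ciphertext = stream_xor(preimage, b"the dataset the buyer is paying for")

# Buyer: pays the invoice; settling the HTLC atomically reveals `preimage`,
# which the buyer checks against the invoice hash and then uses to decrypt.
assert hashlib.sha256(preimage).digest() == payment_hash
plaintext = stream_xor(preimage, ciphertext)
```

Atomicity comes from the HTLC itself: the provider cannot take the money without revealing the preimage, and the preimage is the key.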
No more payments getting stuck somewhere in the Lightning Network without knowing whether the payee will ever get paid! (That's actually a bit of an overclaim: payments can still get stuck, but what "stuckless" really enables is that we can safely run another parallel payment attempt until any one of the payment attempts gets through.) Basically, by using the ability to add points together, the payer can enforce that the payee can only claim the funds if it knows two pieces of information:
The payment scalar corresponding to the payment point in the invoice signed by the payee.
An "acknowledgment" scalar provided by the payer to the payee via another communication path.
This allows the payer to make multiple payment attempts in parallel, unlike the current situation where we must wait for an attempt to fail before trying another route. The payer only needs to ensure it generates a different acknowledgment scalar for each payment attempt. Then, if at least one of the payment attempts reaches the payee, the payee can acquire the acknowledgment scalar from the payer, and with it the payment. If the payee attempts to acquire multiple acknowledgment scalars for the same payment, the payer just gives out one and then tells the payee "LOL don't try to scam me", so the payee can only acquire a single acknowledgment scalar, meaning it can only claim the payment once; it can't claim multiple parallel attempts.
Can safely run multiple parallel payment attempts as long as you have the funds to do so.
Needs payment point + scalar
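The acknowledgment-scalar bookkeeping can be sketched with modular arithmetic over the secp256k1 group order alone (point operations elided; all names are mine):

```python
import os

# secp256k1 group order (public constant)
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def rand_scalar() -> int:
    return int.from_bytes(os.urandom(32), "big") % N

# Invoice: the payee's secret payment scalar (its point is in the invoice).
payment_scalar = rand_scalar()

# Payer: a fresh acknowledgment scalar per parallel attempt. Each attempt
# is locked to payment_scalar + ack_i, so attempts are independent locks.
acks = [rand_scalar() for _ in range(3)]
locks = [(payment_scalar + a) % N for a in acks]

# Suppose attempt 1 reaches the payee; the payer hands over acks[1] only.
claim_key = (payment_scalar + acks[1]) % N
assert claim_key == locks[1]                   # opens exactly this attempt
assert claim_key not in (locks[0], locks[2])   # and no other
```

Since the payer releases exactly one acknowledgment scalar per payment, the payee can settle at most one of the parallel attempts.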
Non-custodial escrow over Lightning
The "acknowledgment" scalar used in stuckless can be reused here. The acknowledgment scalar is derived as an ECDH shared secret between the payer and the escrow service, plus a hash of the contract terms of the trade (for example, to transfer some goods in exchange for Lightning payment). On arrival of the payment at the payee, the payee queries the escrow to determine whether the acknowledgment point corresponds to a scalar the escrow can derive this way. Once the payee gets confirmation from the escrow that it knows the acknowledgment scalar, the payee performs the trade, then asks the payer for the acknowledgment scalar once the trade completes. If the payer refuses to hand over the acknowledgment scalar even though the payee has delivered the goods, then the payee contacts the escrow again, reveals the contract terms text, and requests to be paid. If the escrow finds in favor of the payee (i.e. it determines the goods have arrived at the payer as per the contract text), then it gives the acknowledgment scalar to the payee.
True non-custodial escrow: the escrow service never holds any funds.
Needs payment point + scalar.
Because elliptic curve points can be added (unlike hashes), for every forwarding node we can add a "blinding" point / scalar. This prevents multiple forwarding nodes from discovering that they have been on the same payment route. This is unlike the current payment hash + preimage, where the same hash is used along the route. In fact, the acknowledgment scalar we use in stuckless and escrow can simply be the sum of the blinding scalars used at each forwarding node.
Privacy! Multiple forwarding nodes cannot coordinate to try to uncover the payer and payee of each payment.
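A sketch of how per-hop blinding scalars decorrelate the route, again with scalars only (points elided; names are mine):

```python
import os

# secp256k1 group order (public constant)
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def rand_scalar() -> int:
    return int.from_bytes(os.urandom(32), "big") % N

payment_scalar = rand_scalar()
blindings = [rand_scalar() for _ in range(4)]  # one fresh scalar per hop

# Each hop forwards a lock offset by its blinding scalar, so the values
# seen at different hops look unrelated (decorrelated).
locks, acc = [], payment_scalar
for b in blindings:
    acc = (acc + b) % N
    locks.append(acc)

# The acknowledgment scalar is just the sum of the per-hop blindings.
ack = sum(blindings) % N
assert locks[-1] == (payment_scalar + ack) % N
assert len(set(locks)) == len(locks)  # no two hops see the same value
```

Contrast with today's HTLCs, where every hop sees the identical payment hash and colluding hops can trivially link themselves to the same payment.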
Threshold Signature Explained— Bringing Exciting Applications with TSS
— A deep dive into threshold signatures, without the mathematics, by ARPA's cryptographer Dr. Alex Su

A threshold signature is a distributed multi-party signature protocol that includes distributed key generation, signing, and verification algorithms. In recent years, with the rapid development of blockchain technology, signature algorithms have gained widespread attention in both academic research and real-world applications, and properties such as security, practicality, scalability, and decentralization have been studied in depth. Because blockchain and signatures are so closely connected, the development of signature algorithms and the introduction of new signature paradigms will directly affect the characteristics and efficiency of blockchain networks. In addition, the institutional and personal key management requirements stimulated by distributed ledgers have spawned many wallet applications, and this change has also affected traditional enterprises. Whether in blockchain or in traditional financial institutions, threshold signature schemes can bring security and privacy improvements in various scenarios. As an emerging technology, threshold signatures are still under academic research and discussion, with unverified security risks and practical problems remaining. This article will start from the technical rationale, covering cryptography and blockchain. Then we will compare multi-party computation and threshold signatures, before discussing the pros and cons of different signature paradigms. At the end there is a list of threshold signature use cases, so that the reader may quickly learn about threshold signatures.

I. Cryptography in Daily Life

Before introducing threshold signatures, let's get a general understanding of cryptography. How does cryptography protect digital information?
How do you create an identity in the digital world? At the very beginning, people wanted secure storage and transmission. After creating a key, one can use symmetric encryption to store secrets, and if two people hold the same key, they can communicate securely: the king encrypts a command, and the general decrypts it with the corresponding key. But when two people have no safe channel to begin with, how can they establish a shared key? This is where the key exchange protocol came into being. Analogously, if the king issues an order to everyone in the digital world, how can each person prove that the order really originated from the king? For that, the digital signature protocol was invented. Both protocols are based on public key cryptography, also known as asymmetric cryptography.

The "Tiger Rune" was a troop deployment tool used by ancient Chinese emperors: a bronze or gold token in the shape of a tiger, split in half, with one half given to the general and the other kept by the emperor. Only when the two halves were combined did the holder gain the right to dispatch troops.

Symmetric and asymmetric encryption form the main components of modern cryptography. Both have three fixed parts: key generation, encryption, and decryption. Here, we focus on digital signature protocols. The key generation process produces a pair of associated keys: a public key and a private key. The public key is open to everyone, while the private key represents the identity and is revealed only to its owner; whoever holds the private key holds the identity it represents. The signing algorithm takes the private key as input and generates a signature on a piece of information. The verification algorithm uses the public key to check the validity of the signature and the correctness of the information.
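As an aside, the key exchange idea mentioned above can be sketched in a few lines. This is a toy Diffie-Hellman exchange with deliberately tiny, insecure parameters, purely to show how two parties derive a shared key over a public channel:

```python
import secrets

# Public parameters: a prime modulus and a generator.
# (Toy sizes for illustration only; real systems use >= 2048-bit groups
# or elliptic curves.)
p, g = 23, 5

# Each side keeps a private exponent and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 1          # Alice's private value
b = secrets.randbelow(p - 2) + 1          # Bob's private value
A, B = pow(g, a, p), pow(g, b, p)         # exchanged in the clear

# Both sides derive the same shared secret without ever revealing a or b.
alice_secret = pow(B, a, p)               # (g^b)^a mod p
bob_secret = pow(A, b, p)                 # (g^a)^b mod p
assert alice_secret == bob_secret
```

An eavesdropper sees only p, g, A and B; recovering the shared secret from those is the discrete logarithm problem that the protocol's security rests on.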
II. Signature in the Blockchain

Looking back at blockchains: a blockchain uses a consensus algorithm to construct a distributed ledger, and signatures provide identity information for it. Every transaction on the blockchain is identified by the signature of its initiator, and the chain can verify that signature according to specific rules to check the transaction's validity, all thanks to the unforgeability and verifiability of signatures. Blockchains use more cryptography than just signature protocols and the hash functions of Proof-of-Work consensus, however. On top of the infrastructure layer of consensus and transactions that a blockchain provides, novel cryptographic protocols such as secure multi-party computation, zero-knowledge proofs, and homomorphic encryption thrive. For example, secure multi-party computation, which is naturally suited to distributed networks, can build secure data transfer and machine learning platforms on the blockchain, while the special nature of zero-knowledge proofs makes verifiable anonymous transactions feasible. The combination of these cutting-edge cryptographic protocols and blockchain technology will drive the development of the digital world over the next decade, leading to secure data sharing, privacy protection, and applications still unimaginable today.

III. Secure Multi-party Computation and Threshold Signature

Having introduced how digital signature protocols affect our lives and help blockchains establish identities and record transactions, we now turn to secure multi-party computation (MPC), from which we can see how threshold signatures achieve decentralization. For more about MPC, please refer to our previous posts, which detail the technical background and application scenarios. MPC, by definition, is a secure computation jointly executed by several participants.
Security here means that, in one computation, every participant provides their own private input and obtains the result, but cannot learn any private information entered by the other parties. In 1982, when Prof. Andrew Yao proposed the concept of MPC, he gave an example called the "Millionaires' Problem": two millionaires want to know which of them is richer without revealing their actual wealth to each other. Specifically, a secure multi-party computation should provide the following properties:
Privacy: No participant can learn the private input of any other participant, beyond what can be inferred from the computation result.
Correctness and verifiability: The computation should be executed correctly, and the legitimacy and correctness of the process should be verifiable by the participants or by third parties.
Fairness (or robustness): Unless agreed otherwise in advance, all parties involved in the computation should either obtain the result at the same time or not obtain it at all.
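To make the privacy property concrete, here is a minimal additive secret-sharing sketch (a toy protocol, not any specific production MPC): each party splits its input into random shares, so a joint sum can be computed while no single share reveals anything about an individual input.

```python
import secrets

Q = 2**61 - 1  # modulus for the toy protocol's arithmetic

def share(value, n):
    """Split `value` into n additive shares mod Q; each share alone is random."""
    shares = [secrets.randbelow(Q) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % Q)
    return shares

# Three parties with private inputs (think: the millionaires' assets).
inputs = [12, 30, 7]
n = len(inputs)

# Each party shares its input; party j ends up holding one share of every input.
dealt = [share(v, n) for v in inputs]
received = [[dealt[i][j] for i in range(n)] for j in range(n)]

# Each party publishes only the SUM of the shares it holds...
partials = [sum(r) % Q for r in received]

# ...and the public total equals the sum of all private inputs.
assert sum(partials) % Q == sum(inputs) % Q
```

No individual input is ever published: only uniformly random shares and the partial sums circulate, which is exactly the privacy guarantee described above (the output itself, of course, still reveals whatever can be inferred from a sum).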
Suppose we use secure multi-party computation to produce a digital signature in this general sense. We would proceed as follows:
Key generation phase: all future participants are involved together to do two things: 1) each party generates its own secret private key share; 2) the public key is computed jointly from the set of private key shares.
Signature phase: the participants joining a particular signature use their own private key shares as private inputs, and the information to be signed as a public input, to perform a joint signing operation that outputs a signature. In this process, the privacy of secure multi-party computation ensures the security of the private key shares, while correctness and robustness guarantee the unforgeability of the signature and that every participant obtains it.
Verification phase: the public key corresponding to the transaction is used to verify the signature, exactly as in a traditional signature algorithm. There is no secret input during verification, which means it can be performed without any multi-party computation; this becomes an advantage of MPC-based distributed signatures.
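A toy sketch of the key generation phase, under the simplifying assumption of additive key shares in a tiny cyclic group (real threshold schemes use elliptic curves and considerably more machinery): each party keeps only its own share, yet the joint public key can be computed without anyone ever assembling the full private key.

```python
import secrets

# Toy cyclic group: g = 2 generates a subgroup of prime order q = 11 mod p = 23.
# (Illustrative parameters only; nothing here is cryptographically secure.)
p, q, g = 23, 11, 2

n = 3  # number of participants
# Each party generates its private key share; the shares are never pooled.
x_shares = [secrets.randbelow(q - 1) + 1 for _ in range(n)]

# Each party publishes only g^{x_i}; multiplying the public shares gives
# g^{x_1 + ... + x_n}, i.e. the public key of a secret no one holds.
pub_shares = [pow(g, x, p) for x in x_shares]
joint_pub = 1
for y in pub_shares:
    joint_pub = joint_pub * y % p

# Sanity check of the homomorphism (done here only to illustrate; a real
# protocol never computes sum(x_shares) in one place).
assert joint_pub == pow(g, sum(x_shares) % q, p)
```

The signing phase is where the real MPC work happens in practice; this sketch only shows why a joint public key can exist without a joint private key.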
The signature protocol constructed on this idea of secure multi-party computation is the threshold signature. Note that we have omitted some details: secure multi-party computation is really a collective name for a whole class of cryptographic protocols, and different security assumptions and threshold settings yield different constructions. Threshold signatures under different settings therefore have distinctive properties. This article will not cover each setting, but a comparison with other signature schemes follows in the next section.

IV. Single Signature, Multi-Signature and Threshold Signature

Besides the threshold signature, what other options are there? In the beginning, Bitcoin used single signatures: each account is allocated one private key, and a message signed by that key is considered legitimate. Later, to avoid single points of failure and to allow an account to be managed by multiple people, Bitcoin added a multi-signature function. Multi-signature can be understood simply as each account owner signing in turn and posting all the signatures to the chain, where they are verified in order; when certain conditions are met, the transaction is legitimate. This achieves control by multiple private keys. So, what is the difference between multi-signature and threshold signature? Multi-signature has several constraints:
The access structure is not flexible. Once an account's access structure is set, that is, which combinations of private keys can produce a legal signature, it cannot be adjusted later, for example when a participant withdraws or a new party needs to be added. If you must change it, you need to run the initial setup again, which also changes the public key and the account address.
Lower efficiency. First, on-chain verification consumes resources on every node and therefore requires a processing fee, and verifying a multi-signature costs as much as verifying several single signatures. Second, performance: the verification obviously takes more time.
It requires smart contract support and algorithm adaptations that vary from chain to chain, because multi-signature is not natively supported everywhere. Given the possible vulnerabilities in smart contracts, this dependency is considered risky.
No anonymity. This is not trivially a disadvantage or an advantage, since anonymity is only required in specific settings; here it means that a multi-signature directly exposes all the participating signers of a transaction.
Correspondingly, the threshold signature has the following features:
The access structure is flexible. Through an additional multi-party computation, the existing set of private key shares can be expanded to assign shares to new participants. This process exposes neither the old nor the newly generated shares, and it changes neither the public key nor the account address.
It is more efficient. On chain, a signature produced by a threshold scheme is indistinguishable from a single signature, which brings the following improvements: a) verification is the same as for a single signature and needs no additional fee; b) the signers' information is invisible, because to other nodes the signature simply verifies under the same public key; c) no smart contract on chain is needed to provide additional support.
In addition to the above, there is a distributed signature scheme supported by Shamir secret sharing. Secret sharing has a long history: it is used to split information for storage and to perform error correction, from the underlying algorithms of secure computation to the error-correcting codes on discs, and it has always played an important role. The main problem is that, when used in a signature protocol, Shamir secret sharing needs to recover the master private key. With multi-signature or threshold signature, the master private key is never reconstructed, not even in memory or cache; for vital accounts, even Shamir's short-lived reconstruction is not tolerable.

V. Limitations

Just like other secure multi-party computation protocols, the introduction of additional participants makes the security model different from that of traditional point-to-point encrypted transmission. Collusion and malicious participants were not taken into account in older algorithms; the behavior of physical entities cannot be restricted, and wrongdoers can enter the participating group. Multi-party cryptographic protocols therefore cannot reach the same security strength as before, and effort is still needed to develop threshold signature applications, integrate them with existing infrastructure, and test the true strength of threshold signature schemes.

VI. Scenarios

1. Key Management

Using threshold signatures in a key management system enables more flexible administration, as in ARPA's enterprise key management API. One can use the access structure to design authorization patterns for users with different privileges. In addition, when new entities join, the threshold scheme can quickly refresh the key shares; this operation can also be performed periodically to raise the difficulty of compromising multiple private key shares at the same time.
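The Shamir drawback discussed above can be made concrete with a minimal sketch (illustrative parameters, not a production implementation): reconstruction necessarily brings the full master secret back into memory, which is exactly what threshold signing avoids.

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; the field for the toy Shamir scheme

def make_shares(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0: the master secret reappears in full."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

key = secrets.randbelow(P)             # the "master private key"
shares = make_shares(key, t=3, n=5)

# To sign with a Shamir-based scheme, the full key must exist in memory
# again, even if only briefly -- the drawback noted above:
assert reconstruct(shares[:3]) == key
```

A threshold signature scheme, by contrast, performs the signing computation on the shares themselves, so the line where `key` reappears simply never happens.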
Finally, for the verifier, a threshold signature is no different from a traditional signature, so it is compatible with old equipment and reduces upgrade costs. ARPA's enterprise key management modules already support elliptic curve signatures with secp256k1 and ed25519 parameters, and will be compatible with more parameters in the future.

https://preview.redd.it/c27zuuhdl0q41.png?width=757&format=png&auto=webp&s=26d46e871dadbbd4e3bea74d840e0198dec8eb1c

2. Crypto Wallet

Wallets based on threshold signatures are more secure because the private key never needs to be rebuilt, and since not all signatures are posted publicly, anonymity can be achieved. Compared with multi-signature, threshold signatures incur lower transaction fees. As with key management, the administration of digital asset accounts can also be more flexible. Furthermore, a threshold signature wallet can support blockchains that do not natively support multi-signature, which avoids the risk of smart contract bugs.
This article has described why threshold signatures are needed and the inspiring properties they bring: higher security, more flexible control, and a more efficient verification process. Different signature technologies suit different application scenarios, such as the aggregate signatures and BLS-based multi-signatures not covered here. Readers are also welcome to read more about secure multi-party computation. Secure computation is the holy grail of cryptographic protocols and can accomplish much more than threshold signatures; in the near future, it will solve many more concrete problems in the digital world.
Dr. Alex Su works at ARPA as a cryptography researcher. He received his Bachelor's degree in Electronic Engineering and his Ph.D. in Cryptography from Tsinghua University. His research interests include multi-party computation and the implementation and acceleration of post-quantum cryptography.
Very interesting coin. Only at ~750k marketcap, only traded on a single exchange (qtrade), and seemingly completely under the radar: nearly no posts about it on reddit so far, no marketing yet, so it's truly at the ground floor. The main developer has been working on Bitcoin projects and crypto since before 2012 (he was the main developer of Satoshi Dice). Highlights:
* Clean, original code with a custom POW. No premining.
* ASIC and miner-centralization tough: the aggressive IO-based PoW with large deterministic files should be very hard to ASIC in any cost-effective way, and the file size increases as the hashrate grows, so large SSDs and NVMEs will likely remain a competitive mining option.
* Quantum tough: it is estimated that a 256-bit elliptic curve (like Bitcoin uses) could be broken by a quantum computer with about 1600 qubits. SnowBlossom has a QHard mode which does a 3-of-3 multisig (secp256k1, RSA 8192, DSTU 4145), raising the required qubits into the 16,000 range.
* UTXO root built into block headers, which allows provable results for light clients such as browser-based wallets and mobile apps.
* Current supply: 2,113,650 out of a max of 21,000,000 (same mining curve as Bitcoin).
They also just released an Android wallet, and there is a lot of other stuff happening with the project. What does everyone here think?
Dear Reddit community, following our announcement of DTube v0.9, I have received countless questions about the new blockchain part, avalon. First I want to make it clear that it would have been utterly impossible to build this on STEEM, even with the centralized SCOT/Tribes that weren't available when I started working on this. This will become much clearer as you read through the whole wall of text and understand the novelties. SteemPeak says this is a 25-minute read, but if you are truly interested in the concept of a social blockchain, and you believe in its power, I think it will be worth the time!
I'm a long time member of STEEM, with tens of thousands of staked STEEM for 2+ years. I understand the instinctive fear other members of the community feel when they see a new crypto project come out. We've had two recent examples with the VOICE and LIBRA announcements, which were either hated or ignored. When you are invested morally and financially, seeing competitors pop up is scary. But we should remember competition is healthy, and learn from what these projects are doing and how they will influence us. Instead, by reacting the way STEEM reacts, we are putting our heads in the sand and failing to adapt. I currently see STEEM as the "North Korea of blockchains", trying to do everything better than other blockchains while sitting at #80 on coinmarketcap and slowly but surely losing positions over the months. When DLive left and revealed their own blockchain, it really got me thinking about why they did it. The way they did it was really scummy and flawed, but I concluded that in the end it was a good choice for them to try to develop their activity, while others waited for SMTs. Sadly, when I tried their new product, I was disappointed: they had botched it. It's purely a donation system, no proof of brain... And the ultra-majority of the existing supply is controlled by them, alongside many other 'anti-decentralization' features. It's like they had learnt nothing from their STEEM experience at all... STEEM was still the only blockchain able to distribute crypto-currency via social interactions (and no, 'donations' are not social interactions, they are monetary transfers; bitcoin can do it too). It is the killer feature we need. Years of negligence or greed from the witnesses/developers about the economic balance of STEEM are what broke this killer feature.
Even when economic changes are proposed (and they are finally getting through in HF21), the discussions have always centered on modifying the existing model (changing the curve, changing the split, etc.), instead of developing a new one.
You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete.
What if I built a new model for proof-of-brain distribution from the ground up? I first tried playing with STEEM clones, and with EOS contracts too. Neither system could support the concepts I wanted to integrate for DTube without a major refactor of tens of thousands of lines of code I had never worked with before. Making a new blockchain felt like a lighter task, and more fun too. Before even starting, I had a good idea of the concepts I'd love to implement. Most of these bullet points stemmed from observations of what happened here on STEEM in the past, and what I considered weaknesses for d.tube's growth.
The first concept I wanted to implement deep in the core of how a DPOS chain works is that I didn't want the token to be staked at all (i.e. no 'powering up'). The cons of staking for a decentralized social platform are obvious:
* complexity for the users with the double token system.
* difficulty onboarding people, as they need to freeze their money, akin to a pyramid scheme.
The only good thing about staking is that powering up fills your bandwidth and your voting power immediately, so you don't need to wait for them to grow before you start transacting. In a fully-liquid system, your account resources start at 0% and new users will need to wait for them to grow before they can start transacting. I don't think that's a big issue. This meant that witness elections had to be run off the liquid stake. Could it be done? Was it safe for the network? Can we update the cumulative votes for witnesses without rounding issues, even when the money flows freely between accounts? Well, I now believe it is entirely possible and safe, under certain conditions. The incentive for top witnesses to keep running the chain is still present even if the stake is liquid. With a bit of discrete mathematics, it's easy to build a perfectly deterministic algorithm for a decentralized election based on liquid stake; it's just more dynamic, as the funds and the witness votes can move around much faster.
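A hypothetical sketch of such a liquid-stake election (all account names, fields and numbers are invented for illustration, not Avalon's actual data model): witness tallies are plain integer sums of the approvers' balances, updated deterministically on every transfer, so no staking and no rounding drift are involved.

```python
# Toy liquid-stake witness voting: balances are never staked, and every
# transfer incrementally updates the tallies of the witnesses that the
# sender and receiver approve.
balances = {"alice": 100, "bob": 50}
approvals = {"alice": ["witness1"], "bob": ["witness1", "witness2"]}
tally = {"witness1": 0, "witness2": 0}

def recount(account, delta):
    # Deterministic integer updates: no floats, no rounding issues.
    for w in approvals[account]:
        tally[w] += delta

# Initial election state: each witness's weight is the sum of its
# approvers' liquid balances.
for acct, bal in balances.items():
    recount(acct, bal)
assert tally == {"witness1": 150, "witness2": 50}

def transfer(src, dst, amount):
    balances[src] -= amount
    balances[dst] += amount
    recount(src, -amount)   # sender's approved witnesses lose weight
    recount(dst, +amount)   # receiver's approved witnesses gain it

# Funds moving around immediately re-weights the election.
transfer("alice", "bob", 40)
assert tally == {"witness1": 150, "witness2": 90}
```

Because every update is an exact integer addition, the tallies stay consistent no matter how fast funds move, which is the "perfectly deterministic" property the paragraph above argues for.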
NO EARLY USER ADVANTAGE
STEEM has had multiple events that influenced the distribution badly. The most obvious one is the inflation settings: one day it was hella-inflationary, then suddenly with hard fork 16 it wasn't anymore. Another major one is the non-linear rewards that ran for a long time and created a huge early-user advantage we can still feel today. I liked linear rewards; they give minnows their best chance while staying sybil-resistant. I just needed Avalon's inflation to be smart, not hyper-inflationary like early STEEM. The key metric to consider for this issue is the number of tokens distributed per user per day. If this metric goes down, the incentive for staying on the network and playing the game goes down every day; you feel like you're making less and less from your efforts. If this metric goes up, the number of printed tokens goes up, the token is hyper-inflationary, and holding it feels really bad if you aren't actively earning from the inflation by playing the game. Avalon ensures that the number of printed tokens is proportional to the number of users with active stake. If more users come in, avalon prints more tokens; if users cash out and stop transacting, the inflation goes down. This ensures that earning 1 DTC will be about as hard today, tomorrow, next month or next year, no matter how many people have registered or left d.tube, and no matter what happens on the markets.
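The emission rule described above can be sketched as follows; the per-user constant is invented for illustration and is not Avalon's actual parameter:

```python
# Sketch of the stated goal: tokens printed per day scale with the number
# of active users, keeping the per-user emission constant over time.
EMISSION_PER_USER = 10  # hypothetical DTC printed per active user per day

def daily_emission(active_users: int) -> int:
    return EMISSION_PER_USER * active_users

# Whether 100 or 100,000 users are active, each user's "slot" of the
# daily inflation stays the same size, so earning 1 DTC stays equally hard.
for users in (100, 1_000, 100_000):
    assert daily_emission(users) / users == EMISSION_PER_USER
```

The design choice is that the metric being held constant is tokens per user per day, rather than total supply growth, which is what makes the system neither deflationary for latecomers nor hyper-inflationary for holders.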
NO LIMIT TO MY VOTING POWER
Another big issue that most steemians don't really know about, but that is really detrimental to STEEM, is how the voting power mana bar works. Having to manage a 2M SP delegation for @dtube really convinced me of this one. When your mana bar is full at 100%, you lose out on the potential power generation, and the rewards coming from it, and it only takes 5 days to go from 0% to 100%. A lot of people have very valid reasons to be offline for 5+ days; they shouldn't be punished so hard. This is why almost all big stake holders make sure to spend some of their voting power on a daily basis, and why minnows or smaller holders miss out on tons of curation rewards unless they delegate to a bidbot or join some curation guild... meh. I guess a lot of people would rather just cash out than deal with the trouble of optimizing their stake. So why is it even a mana bar? Why can't it grow forever? Well, everything in a computer has to have a limit, but why is this limit proportional to my stake? While I totally understand the purpose of making bandwidth limited and forcing big stake holders to use it up, I think that approach is totally unneeded and ill-suited for voting power. As long as the growth of the VP is proportional to the stake, the system stays sybil-resistant, and there could technically be no limit at all, were it not for the fact that this runs on a computer where numbers have a limited number of bits. On Avalon, I made it so that your voting power grows virtually indefinitely, or at least I don't think anyone will ever reach the current limit of Number.MAX_SAFE_INTEGER: 9007199254740991, or about 9 Peta VP. If you go inactive for 6 months on an account with some DTCs, when you come back you will have 6 months' worth of power generation to spend, turning you into a whale, at least for a few votes. Another awkward limit on STEEM is how a 100% vote spends only 2% of your power.
Not only does STEEM force you to be active on a daily basis, you also need to cast a minimum of 10 votes per day to optimize your earnings. On Avalon, you can use 100% of your stored voting power in a single mega-vote if you wish; it's up to you.
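A sketch of this uncapped voting-power model, with an invented regeneration rate (the only hard limit being JavaScript's safe integer range mentioned above):

```python
# Toy model: VP grows in proportion to stake and elapsed time, with no
# percentage cap. GROWTH_PER_COIN_PER_DAY is a hypothetical rate, not
# Avalon's real parameter.
GROWTH_PER_COIN_PER_DAY = 1
MAX_VP = 2**53 - 1  # Number.MAX_SAFE_INTEGER, the only practical ceiling

def regenerate(vp, stake, days_elapsed):
    return min(vp + stake * GROWTH_PER_COIN_PER_DAY * days_elapsed, MAX_VP)

# Six months offline with 500 DTC: nothing is lost, it all accumulates...
vp = regenerate(0, stake=500, days_elapsed=180)
assert vp == 90_000

# ...and a single mega-vote can spend 100% of it at once.
vp_after_mega_vote = vp - vp
assert vp_after_mega_vote == 0
```

Contrast this with a mana bar capped at 100% of stake: in that model the 180-day absence would have wasted all regeneration beyond the first 5 days.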
A NEW PROOF-OF-BRAIN
No Author rewards
People should vote with the intent of getting a reward from it. If 75% of the value forcibly goes to the author, it's hard to expect a good return from curation. Steem is currently basically a complex donation platform: no one wants to donate when they vote, no matter what they say, and no matter how much vote-trading, self-voting or bid-botting happens. So, to keep a system where money is printed when votes happen, if we cannot use the author's username to distribute rewards, the only possibility left is to use the list of previous voters, aka "curation rewards": the interesting 25% of STEEM that has been totally overshadowed by author rewards for too long.
STEEM has always suffered from the issue that the downvote button is unused, or, when it is used, mostly used for evil. This comes from the fact that in STEEM's model, downvotes are not eligible for any rewards; even if they were, your downvote would lower the final payout of the content, and with it your own curation rewards... I wanted Avalon's downvotes to be completely symmetric to the upvotes: if we reverted all the votes (upvotes become downvotes and vice versa), the content should still distribute the same amount of tokens to the same people, at the same time.
No payment windows
Steem has a system of payment windows. When you publish content, a payment window opens in which people can freely upvote or downvote to influence the payout happening 7 days later. This is convenient when you want a system where downvotes lower rewards, but waiting 7 days to collect rewards is another friction point for new users; some of them might never come back 7 days later to convince themselves that 'it works'. On Avalon, when you are among the curation winners after a vote, you earn your reward instantly in your account, 100% liquid and transferable.
Unlimited monetization in time
Indeed, the 7-day monetization limit has been our biggest issue for our video platform since day 8. It incentivized our users to create more frequent but lower-quality content, knowing they weren't going to earn anything over the long haul. Monetization had to be unlimited on DTube, so that even a 2-year-old video could be dug up and generate rewards far in the future. Infinite monetization is possible, but since removing tokens from a balance is impossible, downvotes cannot remove money from the payout like they do on STEEM. Instead, downvotes print money the same way upvotes do; they still lower the popularity in hot and trending, and should only reward the other people who downvoted the same content earlier.
New curation rewards algorithm
STEEM's curation algorithm isn't stupid, but I believe it lacks elegance. The 15-minute 'band-aid' that had to be added to prevent curation bots (bots who auto-vote as fast as possible on the content of popular authors) proves it. The way it distributes rewards also feels very flat and boring: the rewards for my votes are very predictable, especially if I'm the biggest voter / stake holder on the content. My own vote is paying for my own curation rewards; how stupid is that? If no one else votes after my big vote despite the popularity boost, it probably means I deserve 0 rewards, no? It took several attempts to find an algorithm yielding interesting results, with infinite monetization, and without obvious ways to exploit it. The final distribution algorithm is more complex than STEEM's curation, but it's still pretty simple. When a vote is cast, we calculate the 'popularity' at the time of the vote. The first vote is given a popularity of 0; for the next votes it is defined as (total_vp_upvotes - total_vp_downvotes) / time_since_1st_vote. Then we look at the list of previous votes and remove all votes in the opposite direction (up/down). Next we remove all the votes with a higher popularity if it's an upvote, or those with a lower popularity if it's a downvote. The remaining votes in the list are the 'winners'. Finally, akin to STEEM, the amount of tokens generated by the vote is split between the winners proportionally to the voting power spent by each (linear rewards, no advantage for whales) and distributed instantly. Instead of purely using the order of the votes, Avalon's distribution is based on when the votes are cast, and each second that passes reduces the popularity of a content, potentially increasing the long-term ROI of the next vote cast on it. It's possible to chart the popularity that influences the DTC monetary distribution directly in the d.tube UI. This algorithm ensures there are always losers.
The person who upvoted at the highest popularity and the one who downvoted at the lowest popularity never receive any rewards for their vote, and neither do the last upvoter and the last downvoter. All the others in the middle may or may not receive something, depending on how the voting and the popularity evolved over time. The one with an obvious advantage is the first voter, who is always counted at popularity 0: as long as the content stays at a positive popularity, every upvote will earn him rewards. Similarly, being the first downvoter on an overly-popular content could easily earn you 100% of the rewards of the next downvote, which could come from a whale, earning you a fat bonus. While Avalon doesn't technically have author rewards, the first-voter advantage is strong, and the author always gets to be the first voter, so the author can still earn from his potentially original creations; he just needs to commit some voting power on his own content to publish.
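Here is a compact sketch of the distribution algorithm as described above (paraphrased; the field names, the voter labels and the minted amount are invented for illustration and are not Avalon's actual code):

```python
# Each vote records the popularity at which it was cast. A new vote rewards
# earlier same-direction voters who voted at a LOWER popularity (for
# upvotes) or a HIGHER popularity (for downvotes), proportionally to the
# voting power they spent.
def popularity(votes, now):
    if not votes:
        return 0  # the first vote is always counted at popularity 0
    net_vp = sum(v["dir"] * v["vp"] for v in votes)
    return net_vp / (now - votes[0]["t"])  # decays as time passes

def cast_vote(votes, direction, vp, t, minted):
    pop = popularity(votes, t)
    winners = [v for v in votes
               if v["dir"] == direction
               and (v["pop"] < pop if direction > 0 else v["pop"] > pop)]
    total = sum(v["vp"] for v in winners)
    payouts = ({v["voter"]: minted * v["vp"] / total for v in winners}
               if winners else {})
    votes.append({"dir": direction, "vp": vp, "t": t,
                  "pop": pop, "voter": f"v{len(votes)}"})
    return payouts

votes = []
# The author's own first vote: no previous voters, so no one is paid yet.
assert cast_vote(votes, +1, 100, t=0, minted=10) == {}
# A second upvote 10s later arrives at popularity 100/10 = 10 > 0,
# so the first voter (popularity 0) wins the whole minted amount.
p2 = cast_vote(votes, +1, 50, t=10, minted=10)
assert p2 == {"v0": 10.0}
```

Note how the first-voter advantage described above falls out naturally: being recorded at popularity 0 makes that vote a winner of every later upvote while the content stays positive.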
ONE CHAIN <==> ONE APP
More scalable than shared blockchains
Another issue with general-purpose blockchains like ETH/STEEM/EOS/TRX, which currently host dozens of semi-popular web/mobile apps, is the reduced scalability of such shared models. Again, everything in a computer has a limit. For DPOS blockchains, 99%+ of the CPU load of a producing node is spent verifying the signatures of the many transactions coming in every 3 seconds. And sadly this will not change with time: even if we had a huge breakthrough in CPU speed today, we would need to update the cryptographic standards of blockchains to keep them secure, so it would NOT become easier to scale up the number of verifiable transactions per second. Oh, but we are not there yet, you're thinking? Or maybe you think we'll all be rich if we reach the scalability limits, so it doesn't really matter? WRONG. The limit is the number of signature verifications the most expensive CPU on the planet can do. Most blockchains use the secp256k1 curve, including Bitcoin, Ethereum, Steem and now Avalon. It was originally chosen for Bitcoin by Satoshi Nakamoto, probably because it's decently quick at verifying signatures and seems to be backdoor-proof (or else someone is playing a very patient game). Other curves may exist with faster signature verification, but the speed won't improve many-fold, and adopting them would likely require much research, auditing, and time, considering the security implications.
In 2015 Graphene was created and BitShares was completely rewritten. It was able to achieve 100,000 transactions per second on a single machine, and decentralized global stress testing achieved 18,000 transactions per second on a distributed network.
So BitShares/STEEM and other DPOS Graphene chains in production can validate at most 18,000 txs/sec, about 1.5 billion transactions per day. EOS, Tendermint, Avalon, LIBRA or any other DPOS blockchain can achieve similar speeds, because there's no planet-killing proof-of-work, and thanks to the leader-based/democratic system that reduces the number of nodes taking part in consensus. As a comparison, there are about 4 billion likes per day on Instagram, and you can probably double that once you add the uploads, stories, comments, password changes, etc. The load is also likely uneven through the day; some hours probably run twice as fast as the average. You wouldn't be able to fit Instagram on a blockchain, ever, even with the most scalable blockchain tech on the world's best hardware; you'd need about a dozen such chains. And Instagram is still a growing platform, not as big as Facebook or YouTube. So, splitting this limit between many popular apps? Madness! Maybe it's still working right now, but when many different apps reach millions of daily active users plus bots, it won't fit anymore. Serious projects with a big user base will need to rethink shared blockchain models like Ethereum, EOS, TRX, etc., because the fees in gas or the stake required to transact will skyrocket, and the victims will be the hordes of minnows at the bottom of the distribution spectrum. If we can't run a full Instagram on a DPOS blockchain, there is absolutely no point trying to run medium+reddit+insta+fb+yt+wechat+vk+tinder on one. Being able to run half an Instagram is already pretty good, and probably enough to onboard a fair share of the planet. But if we multiply the load by the number of different app concepts available, it's never gonna scale. The DTube chain is meant for the DTube UI only. Please do not build something unrelated to video connecting to our chain; we would actively do what we can to prevent you from growing.
We want this chain to be for video content only, and the JSON format of the content should always follow the one used by d.tube. If you are interested in avalon tech but your project isn't about video, it's strongly suggested to fork the blockchain code and run your own avalon chain with a different origin id, instead of trying to connect your project to dtube's mainnet. If you still want to do it, chain leaders would be forced to actively combat your project, as we would consider it useless noise inside our dedicated blockchain.
Another issue with sharing a blockchain is the governance that comes with it. Tons of features enabled by avalon would be controversial to develop on STEEM, because they'd only benefit DTube, and might even hurt/break some other projects. At best they'd be put at the bottom of a todo list somewhere. Having a blockchain dedicated to a single project enables it to quickly push updates that are focused on a single product, not dozens of totally different projects. Many blockchain projects are trying to make decentralized governance real, but this is absolutely not what I am interested in for DTube. Instead, in avalon the 'init' account, or 'master' account, has very strong permissions. In the DTC case, @dtube:

* will earn 10% fees from all the inflation
* will not have to burn DTCs to create accounts
* will be able to do certain types of transactions when others can't:
  * account creation (during the steem exclusivity period)
  * transfers (during the IEO period)
  * transferring voting power and bandwidth resources (used for easier onboarding)

For example, for our IEO we will set up a mainnet where only @dtube is allowed to transfer funds or vote until the IEO completes and the airdrop happens. This is also what enabled us to create a 'steem-only' registration period on the public testnet for the first month. Only @dtube can create accounts; this way we can enforce a one-month period where users can port their username for free, without impostors having a chance to steal usernames. Through the hard-forking mechanism, we can enable/disable these limitations and easily evolve the rules and permissions of the blockchain, for example opening monetary transfers at the end of our IEO, or opening account creation once the steem exclusivity ends. Luckily, avalon is decentralized, and all these parameters (like the @dtube fees and @dtube permissions) are easily hard-forkable by the leaders.
@dtube will however be a very strong leader on the chain, as we plan to use our vote to keep at least the #1 producing node for as long as we can. We reserve the right to 'not follow' a hardfork. For example, it's obvious we wouldn't follow something like reducing our fees to 0%, as it would financially endanger the project; we would rather just continue our official fork on our own and plug the d.tube domain and mobile app into it. On the other end of the spectrum, if other leaders think @dtube is being tyrannical one way or another, leaders will always have the option of declining the new hardforks and putting the system on hold; then @dtube will have an issue and will need to compromise or betray the trust of 1/3 of the stakeholders, which could prove costly. The goal is to have harmonious, enterprise-level decision making among the top leaders. We expect these leaders to be financially and emotionally connected with the project and to act for good. @dtube is expected to be the main good actor on the chain, and any permission given to it should be granted with the goal of increasing the DTC marketcap, and nothing else. Leaders and @dtube should be able to keep cooperation high enough to keep the hard-forks focused on the actual issues, and flowing faster than other blockchain projects striving for a totally decentralized governance, a goal they are unlikely to ever achieve.
A lot of hard-forking
Avalon is easily hard-forkable, and will get hard-forked often, on purpose. No replays will be needed for leaders/exchanges during these hard-forks; just pull the new hardfork code and restart the node before the planned hard-fork time to stay on the main fork. Why is this so crucial? It's about game theory. I have no formal proof for this, but I assume a social and financial game akin to the one played on steem since 2016 is impossible to perfectly balance, even with a thorough dichotomous process. It's probably because of some psychological reason, or maybe just the fact that humans are naturally greedy. Or maybe it's just the sheer number of players. They can gang up together, try to counter each other, and find all sorts of creative ideas to earn more and exploit each other. In the end, the slightest change in the rules can cause drastic gameplay changes. It's a real problem; luckily it's been faced by other people in the past. Similarly to what popular and successful massively multiplayer games have achieved, I plan to patch or suggest hard-forks for avalon's mainnet on a bi-monthly basis. The goal of this 'perfect imbalance' concept is to force players to re-discover their best strategy often. By introducing regular, small, and semi-controlled changes into this chaos, we can fake balance. This requires players to be more adaptive and aware of the changes, and it prevents the game from becoming stale and boring for players, while staying fair.
Death to bots
Automators, on the other hand, will need to re-think their bots and go through the development and testing phase again on every new hard-fork. It will be an unfair cat-and-mouse game. Making small and semi-random changes in frequent hard-forks will be an easy task for the dtube leaders, compared to the workload generated to maintain the bots. In the end, I expect their return on investment to be much lower than the bid-bots', up to the point where there is no automation at all. Imagine how different things would have been if SteemIt Inc had acted strongly against bid-bots or other forms of automation when they started appearing. Imagine if hard-forks had been frequent and they had promised to fight bid-bots and their ilk. Who would have been crazy enough to make a bid-bot apart from @berniesanders then? I don't want you to earn DTCs unless you are human. The way you are going to prove you are human is not by sending a selfie of you with your passport to a 3rd-party private company located on the other side of the world. You will just need to adapt to the new rules published every two weeks, and your human brain will do it subconsciously by just playing the voting game and seeing the rewards come in. All these concepts are aimed at directly improving d.tube, making it more resilient, and helping it scale both technologically and economically. Having control over the full tech stack required to power our dapp will prevent issues like the one we had with the search engine, where we relied too heavily on a 3rd-party tool, and that created a 6-month-long bug that basically broke 1/3 of the UI. While d.tube's UI can now run totally independently from any other entity, we kept everything we could working with STEEM, and the user is now able to transparently publish/vote/comment videos on 2 different chains with one click.
This way we can keep on leveraging the general-purpose features of STEEM that our new chain doesn't focus on, such as the dollar-pegged token, the author rewards/donation mechanism, the tribes/communities tokens, and simply the extra exposure d.tube users can get from other websites (steemit.com, busy.org, partiko, steempeak, etc.), which reach more people than use d.tube directly. The public testnet has been running pretty well for 3 weeks now, with 6000+ accounts registered and already a dozen independent nodes popping up and running for leader. The majority of the videos are cross-posted on both chains, and the daily video volume has slightly increased since the update, despite the added friction of the new 'double login' system and several UI bugs. If you've read this article, I'm hoping to get some reactions from you in the comments section! Some even more focused articles about avalon are going to pop up on my blog in the following weeks, such as how to get a node running and how to run for leader/witness, so feel free to follow me to get more news and help me reach 10K followers ;)
The aforementioned trilemma has been bugging researchers worldwide ever since the term "blockchain" became a mainstay. Being a blockchain at its core, NEO is not immune to the issue either. Having spent the last month constantly exchanging opinions on various critical issues with fellow CoZ contributors, I wanted to write a short excerpt explaining my vision of the future of NEO. Complete decentralization is very much a myth. Even the reference Bitcoin implementation (I will approach this from a theoretical standpoint, so hashrate dominance is going to be ignored) is not completely decentralized, featuring a hardcoded seed list and 11 checkpoints helping a node rebuild the chain "trust-lessly". We should accept that NEO is not going to reach the "Bitcoin level" of decentralization, because it was never intended to in the first place. The goal now is to implement the most efficient consensus protocol (HoneyBadger BFT and FastBFT are good starting points) to facilitate the most robust block production times in as decentralized an environment as the protocol allows. Fallback mechanics need to be specified in case the core consensus stalls (Michael Herman is working on a performance monitoring tool to aid the protocol in swapping consensus nodes based on real-time performance metrics). The discussions about the consensus node elections are ongoing, with a few interesting ideas on the map (for example, Fabio Canesin's PoW voting) aiming to decrease the influence of exchanges, developers and bagholders on the outcome of the elections. In the future, an SPV mode could also become a possibility and could potentially be even more space-efficient than the Bitcoin counterpart. Scalability is exactly what differentiates NEO from many other projects on the market, and it is the main reason for the relative lack of decentralization potential of the network. However, a few important issues need to be revisited before the platform can become truly scalable.
The current consensus protocol proposals are not efficient in high-throughput scenarios (a.k.a. 100,000 tps). Despite being a major selling point of NEO, instant finality is also the main reason why the protocol is currently not able to scale well. Another persistent problem that the majority of the cryptocurrency community will find relatable is ballooning transaction sizes. Contrary to popular belief, Schnorr signatures offer only an incremental decrease in transaction sizes (estimated at roughly 25%), which is not enough to let a 100,000 tps ledger remain efficiently distributed. Boneh-Lynn-Shacham signatures could prove to be the missing puzzle piece; however, this signature standard is currently severely inefficient and requires a lot more research. Another angle of research concentrates on blockchain pruning. If proven possible in a trust-less manner, pruning could revolutionize the efficiency of blockchains. Apart from that, we shouldn't forget about horizontal scaling solutions (for example, sharding). In my opinion, security is the most important piece of the trilemma. Unfortunately, the security of NEO is currently preserved by trust in the developers. However, that will change with decentralization. In general, the security of the platform depends on the issues discussed in the two sections above. Data distribution and a sturdy consensus protocol are vital to preserving the immutability and unforgeability of the data. Another issue that could potentially arise at some point down the line is the security of the cryptography used. The elliptic curve used in NEO (secp256k1) could be brute-forced in the future by the power of quantum computing, and we have to prepare for that "doomsday" by exploring alternative mechanisms (for example, lattice-based algorithms).
MINING A CRYPTO CURRENCY
Mining consists of running a mathematical procedure whose result we cannot predict before running it, while aiming for a very specific result, which usually means a certain number of 0 characters at the start of an otherwise random-looking answer. If we find the nonce (a random value) that, combined with the transaction data and the coin's algorithm, produces such a result, we have solved a transaction block and we get a reward for it. Thanks to this work, the transactions listed in the block are added to the blockchain and anyone can check our work. That is the concept of 'proof of work': it allows anyone to replay the mathematical procedure with the nonce discovered by the node that solved the block and confirm the block's inclusion in the blockchain.
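The search-and-verify asymmetry described above can be sketched in a few lines of Python. This is a generic toy, not Tera's actual algorithm: the hash function (SHA-256), the nonce encoding and the leading-zeros rule are all simplifying assumptions.

```python
import hashlib

def mine(block_data: bytes, difficulty: int) -> int:
    """Brute-force a nonce so that sha256(block_data + nonce)
    starts with `difficulty` leading hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(block_data: bytes, nonce: int, difficulty: int) -> bool:
    """Anyone can replay a single hash to check the claimed solution."""
    digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)

block = b"alice->bob:5;bob->carol:2"   # stand-in for serialized transactions
nonce = mine(block, 4)                  # expensive search...
print(verify(block, nonce, 4))          # ...cheap verification: True
```

Finding the nonce takes many hash attempts on average, while checking it takes exactly one; that asymmetry is what makes proof of work verifiable by everyone.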
POLITICAL AND ETHICAL CONSIDERATIONS
The Tera project is young. It will have to face the same problems the Bitcoin platform is facing today:
a large number of accounts used to extort money from credulous people (e-intrusion, mail threats, etc.)
a large number of accounts used for illegal commercial activities (drugs, weapons, etc.)
attacks aiming to bring the blockchain platform down or to corrupt the blockchain data
excessive financial speculation that results in a coin value that makes no sense.
Any cryptocurrency project whose goal is for its money and contracts to be used like any other historical money or service contract has to consider its political and ethical usage. Processes have to be imagined, designed and implemented in order to fight the extortion, corruption and illegal activities threatening cryptocurrency development.
Tera is entirely written in JavaScript on top of the NodeJS runtime, in order to take advantage of a robust, high-level library designed to allow large and effective network node management. The miner part is imported from an external repository and is written in C in order to get the best performance for this module. Tera is currently officially supported on Linux and Windows. If you start mining Tera thanks to this article, you can add my account 188131 as advisor to yours. On simple demand I'll refund you half of the extra coins generated for advisors when you solve blocks (@freddy#8516 on Discord).
Mining Tera has one major design constraint: you need one public IP per Tera node or miner. Still, you can easily mine it on a desktop computer at home. The mining algorithm has been designed to be GPU-resistant. In order to mine Tera coin you'll need a multi-core processor (2 cores minimum) and some RAM, between 1 and 4 GB per mining process. The mining reward level depends on the « power » used to solve a block (Top Tera Miners).
COST AND USAGE CONSIDERATIONS
There are two main cost centers when mining a cryptocurrency:
the cost of the hardware and the energy required to perform a huge number of mathematical operations while connected to the blockchain network through the Internet,
the human cost of deploying, maintaining and keeping miners and blockchain nodes running.
The implementation of blockchain technology provided a solution that makes it possible to do away with central servers for storing the database and to entrust it to a distributed ledger instead. It was first implemented in the example of a digital currency: Bitcoin. Then enthusiastic programmers, who focused their attention on the opportunities this opened, went further. They began to implement their ideas and supply new tools, laying the foundation for the digital system of the future. In this way, smart contracts and decentralized applications appeared, presented to the user as software products for a wide variety of spheres: business, entertainment, communication... The virtual blockchain began to take visible shape.
Leaders in DApp creation
Ethereum became the first blockchain project to allow the creation of smart contracts and DApps. But programmers faced a difficulty: low transaction speed, limited to about 20 tps. In 2017, EOS and TRON appeared as projects that found a solution. They collected millions with the help of ICOs and moved along their roadmaps. On the one hand, indeed, the speeds they showed were significantly different and made it possible to develop DApps (TRON: 1,200 tps; EOS: 4,000 tps). Although Ethereum took the lead in the number of applications created on its blockchain, having a two-year head start, the competitors that appeared on the market began to drain developers away to their platforms. As of April 2019, EOS ranks first in the number of unique users (171k) and transaction volume ($3 bln), with TRON second (71k, $600 mln), despite the fact that more DApps live on Ethereum. The catch is that the transaction rate of both TRON and EOS was obtained at the expense of decentralization. Both projects work on a PoS consensus algorithm, and that is already a reason not to consider them decentralized. As for Ethereum, its blockchain with each fork also gradually moves towards a final transition to PoS consensus, while its main developers justify their actions by claiming that the PoW algorithm has outlived itself and that a distributed ledger running on such a consensus cannot provide the characteristics necessary for full-fledged DApp functionality. That is not so!
Attractive advantage of TERA
What DApps are on the TERA platform
What else would be nice to see on TERA
The virtual soil of TERA is just beginning to put out shoots from the first planted digital seeds, and in the blockchain space there is plenty of room for flights of imagination when creating DApps. I will give as examples just the first ideas that come to mind, as well as some taken from statements of TERA community members on Discordapp.com. It would be great to see:
- a platform for trading binary options, starting with at least the TERA/BTC pair;
- a browser for working with DApps and recommendations for building applications;
- a service for voting on the blockchain, which would bring transparency to election systems in any sphere (such a proposal was received at Eurovision 2019, so why not create it on TERA);
- patenting services for authors' rights over works of art, inventions and other intellectual property;
- a full-fledged forum where comments would be saved and could not be deleted (the foundation of such an application has already been laid);
- chess, backgammon, bingo, durak and other classic games;
- more games! Strategies, shooters, exploration games... Take a deeper look at the hits of the 80s and 90s; there is plenty worth remembering and transferring to the decentralized TERA platform;
- a fully functional casino with a wide range of entertainment: 3-line and 5-line slot games, roulette, blackjack, poker, bets and other gambling-industry services;
- and so on and so forth.
If such ideas do not come to developers' minds, feel free to contact the author of this article and he will share his own ideas in the field of gaming and gambling, and help realize them with a creative approach! In the long run, what do we need for everyday life? Tools to make money and ways to spend it. The unique features of the TERA blockchain make it possible to place both in it. Plus they will be independent of any central administration and free of censorship.
To inspire developers to action, the Chinese TERA community announced a contest with a prize fund of 165,000 coins. Those who are interested in placing their DApps on TERA will be helped by the DApp Paper: https://docs.google.com/document/d/10yXAKxaU7YgrQnbdXu_L7WWovUoRtdJwo3tXXaGZGSQ/edit
And a couple of words to the developers: you can choose the TRON platform, you can choose EOS or Ethereum. But what are your priorities? All these systems are in the hands of corporate owners, which leads away from decentralization. Those who want to get away from totalitarian control and give the fruits of their creativity complete freedom should consider the TERA platform. The TERA Foundation website has a page with a special world map showing in real time how many full nodes support the blockchain operation: https://terafoundation.org/map.html. To be among the first means to participate in laying a foundation able to withstand a powerful boost from the rapid development of future technologies. If your city is not on the map, only a few minutes separate you from correcting that. Instructions for node installation will help you join the TERA community and make your contribution right now. They can be found here: https://sourceforge.net/p/tera/code/ci/mastetree/README.md
Participation in the expansion of TERA encourages the growth and development of the idea of freedom from central administration that was stated when Bitcoin was launched. These are not just fantasies or dreams; there are authoritative representatives of science who consider blockchains to be the basis for the economy of the future, which will have no boundaries. And with the help of active participants, full decentralization will come much faster.
This development has no limits, since it is fueled by the independent striving, free will and potential inherent in each of us.
Monero is my favorite thing as of now. I mean like ever in the whole world. Its potential to basically free the world (of government tyranny, censorship, famine, central bankers, etc) is by far the most promising of any crypto out there, and probably more promising than literally any other thing or movement in the entire history of the world (besides OneCoin or Darth Vader). I am by no means a Dash pump and dumper where I just wanna see the price moon immediately (not that I'm opposed to that, obviously), or a Zcash fool where I have no idea what cryptocurrency is about (Zcash is a cult, imo), I'm a Moner, or whatever Monero people are called. I just want it to be used! Which means, first and foremost, people have to be aware of its existence. What I've been doing, and I don't recommend this, unless you have a mountain of monero and an opinion of it as high as I do, is tell people to download the monero wallet, in exchange for me sending them a monero. I think this is much more effective than telling them "CryptoNight is much more ASIC resistant than SHA256!" or "ed25519 is more secure than secp256k1!" or whatever. It allows them to feel good when monero gets more expensive, maybe sad when it gets cheaper, but that emotional interaction with monero isn't something that's just forgotten, like those weird words they've never heard before are. First, they are given something they understand (an asset that might gain or lose value), and later they are waaay more likely to "fall down the rabbit hole" than some guy who doesn't own monero. (thus increasing user base and therefore security, privacy, and fungibility, and hopefully they'll not be able to contain their excitement, like me and have to tell yet more people) It's getting to be prohibitively expensive to do this, and I find that a lot of people simply will not accept less than an entire monero for that deal, idk why. Maybe cuz the current paradigm of nothing before the decimal signifies "change", i.e. 
worthless, or not worth downloading an "annoying blocktrain thing" (real quote). This is vexing. I know I'm not gonna go to my grandma and explain everything and have her be like "dam thats sweet im buying monero". For people my age, (I'm 20) I want to be able to convey what it's all about, concisely and in a way that excites people more than something that's out of the scope of the concepts they understand. I'm sure some of you more enthusiastic Moners (or monerites?) have "converted" some crypto-foreigners. What's the best way to actively do this? Is there like a video I can sic on em or something? Every fluffyponyza presentation I've ever seen would go like 32.4 miles above the average person's head. They won't read getmonero, or do work to figure out what this obscure and seemingly boring thing is. Some people, especially now, won't be convinced that monero is worth their time, but a lot are open to it. How to effectively reach as many as possible is what I'm after. Would it be legal to have a FFS for a video or a small PR type deal about monero? Not that its gonna instantly make it the world reserve currency, but it could help attract devs or something, idk. Do you guys think something like that would be worth it? I personally feel that it is important for monero to gain positive exposure, probably more than most people on here. If monero is this thing that people only hear about when they read a news report on a drug bust or ransomware attack, we'll have to go through the same phase bitcoin went through where the establishment along with Joe the Plumber wants it made illegal, or subject to more stringent KYC or AML regulations, cuz "only criminals use it". It is possible that some other crypto takes monero market share because of this. When bitcoin was at that stage, there was bitcoin. That was the crypto space. 
Now there are tempting scamcoins around every corner, and I'd hate to see monero fall by the wayside, even temporarily, in favor of some ridiculous thing like Zcash, which imo misses the whole point of cryptocurrency, or some sockpuppetty corporatecoin like eth. Privacy IS for everyone, and I think it might be possible to skip this phase entirely, given that the right information is widely distributed. tl;dr How can I make the fundamental concepts behind Monero accessible and attractive to the average computer-using person, or to potential devs who might otherwise be swayed into working on another alt or Bitcoin instead? ps Sorry for the long post, I'm a piss-poor writer and can't be concise. Edit: for formatting so it's not a brick of words
We would like to proudly announce Arionum, a new cryptocurrency built from scratch!
Introduction
Arionum was designed with the future in mind, in a market where growth beats all expectations. Arionum aims to offer a secure electronic payments system that is able to scale without degraded performance or a degraded user experience. It offers a fixed 0.25% fee on all transactions and a dynamic transaction limit per block, allowing it to keep up with a growing number of transactions at all times. One of the main advantages of Arionum is that it was fully written from scratch in PHP, one of the most popular programming languages in the world. While PHP is not as fast as C++, for example, the high number of developers who can easily understand and develop PHP and Arionum compensates for this. The main inspiration has been Satoshi Nakamoto's bitcoin white paper, but all the code has been thought out and written by the developers to keep its originality. Arionum has been conceived as a democratic and egalitarian coin, with no pre-mined coins, a long mining period, no developer fees, and an algorithm that favors the average user with spare CPU resources over mining farms.
Original Announcement: https://bitcointalk.org/index.php?topic=2710248.0
Specifications
Name: Arionum
Symbol: ARO
Block time: ~4 minutes
Mining reward: starts at 1000 and decreases by 10 every 10800 blocks
Mining time: 8 years and 4 months
Premine: no premine
Transaction fee: always 0.25%
Block hash: SHA-512
Mining algorithm: Argon2i + SHA-512
Total coin supply: 545,399,000
Signature algorithm: ECDSA's secp256k1 curve
DB backend: MySQL / MariaDB
Whitepaper: https://www.arionum.com/wp.pdf
Roadmap
The source routing design is done. We have two engineers implementing it and it should be ready within a month. We are using ChaCha20 and ECCDH with curve secp256k1 for link layer encryption.
We are now working with an ASIC designer to ensure that the lowest level of the Skywire source router can be implemented in a small FPGA floorplan. He is also experimenting with running Skywire over HackRF on whitespace spectrum in the 802.11af band, which would increase the range of the network significantly compared to wifi.
The wire protocol is now using a "services" architecture. A service instance is created and then associated with a Skywire node. A service is a struct containing state and a map of messages (golang structs) that the service responds to. We have a convenient, automatic interface for packet/message serialization and for handling responses. Services communicate over "channels", which are used to multiplex multiple services over a single TCP/IP or UDP connection between nodes. Channel 0 is reserved for service introspection (listing services offered by a node), peer discovery, ping, quality-of-service information, DHT (distributed hash table) and PEX (peer exchange). Messages received within a channel are currently assumed to be ordered and lossless (retransmission is handled at a higher level). There is room in the future to add channel prioritization, out-of-order messaging, and letting channels handle dropped messages. The current implementation uses TCP/IP in golang, but we will switch to a UDP connection pool in C in the future.
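The services-over-channels idea above can be illustrated with a small toy sketch. The real implementation is in Go; this Python version is purely illustrative, and the class and method names (`Service`, `Node`, `dispatch`) are made up for the example, not Skywire's actual API.

```python
class Service:
    """State plus a map from message type to handler,
    analogous to the Go struct with a map of message structs."""
    def __init__(self, name):
        self.name = name
        self.handlers = {}

    def on(self, msg_type, handler):
        self.handlers[msg_type] = handler

    def dispatch(self, msg_type, payload):
        return self.handlers[msg_type](payload)

class Node:
    """Multiplexes several services over one connection via channel ids."""
    def __init__(self):
        self.channels = {0: Service("introspection")}  # channel 0 reserved

    def register(self, channel_id, service):
        assert channel_id != 0, "channel 0 is reserved for the node itself"
        self.channels[channel_id] = service

    def receive(self, channel_id, msg_type, payload):
        # demultiplex: route the message to the service bound to this channel
        return self.channels[channel_id].dispatch(msg_type, payload)

node = Node()
ping = Service("ping")
ping.on("ping", lambda p: "pong:" + p)
node.register(1, ping)
print(node.receive(1, "ping", "hello"))  # pong:hello
```

The key design point mirrored here is that one transport connection carries many logical services, with the channel id deciding which service's message map handles an incoming packet.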
The mesh networking source routing implementation will be implemented as a service in Skywire
The route discovery mechanism for finding multihop routes over wifi to the clearnet will be implemented as a separate service. At this point the mesh network will be functioning. We are very close.
We are still working on the coin.
We have personal blockchains working and on GitHub. These allow a user to create a blockchain that only the holder of the private key can mint blocks for. Peers replicating the chain can be looked up via DHT using the public key hash of the blockchain owner. This gives every user their own side chain that can contain any data, and it can be used for distributed DNS and metadata storage. https://github.com/skycoin/skywire/blob/mastechaintest.go
Personal blockchains are ordered. We also have an unordered version called "Blobs", which can be used for distributed DNS and metadata systems. Blobs can be used to implement a peer-to-peer replicated key-value store library called "Ether". Each key-value store database has a public key. If you know the hash of the public key, you can find peers replicating the database and download it from them. Only the owner can modify the key-value store (updates must be signed by the owner's private key), but everyone can read it. Updates are propagated in a peer-to-peer fashion using a gossip protocol. Ether examples will be on GitHub soon.
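A minimal sketch of the verify-before-apply flow of such an owner-writable, world-readable store, in Python for illustration. Important caveat: the real design uses asymmetric signatures, so peers can verify updates with only the owner's public key. The stdlib has no public-key crypto, so HMAC over a shared secret stands in for signing here, which is exactly the limitation a real implementation avoids. All names (`Blob`, `apply_update`) are invented for this example.

```python
import hashlib
import hmac

class Blob:
    """Toy replicated key-value store: only signed updates are applied."""
    def __init__(self, owner_secret: bytes):
        # in the real design this would be the hash of the owner's PUBLIC key,
        # used by peers to look up replicas via DHT
        self.owner_key_hash = hashlib.sha256(owner_secret).hexdigest()
        self.store = {}

    @staticmethod
    def sign(secret: bytes, key: str, value: str) -> str:
        # HMAC stand-in for a real signature over the update
        return hmac.new(secret, f"{key}={value}".encode(),
                        hashlib.sha256).hexdigest()

    def apply_update(self, secret: bytes, key: str, value: str,
                     signature: str) -> bool:
        # replicas apply an update only if the signature verifies;
        # with real public-key signatures the verifier never sees a secret
        expected = self.sign(secret, key, value)
        if hmac.compare_digest(expected, signature):
            self.store[key] = value
            return True
        return False

secret = b"owner-secret"
db = Blob(secret)
sig = Blob.sign(secret, "dns:example", "1.2.3.4")
db.apply_update(secret, "dns:example", "1.2.3.4", sig)       # accepted
db.apply_update(secret, "dns:example", "evil", sig)          # rejected
print(db.store)  # {'dns:example': '1.2.3.4'}
```

The point being illustrated is the read/write asymmetry: anyone holding the replicated data can read it, but an update without a valid owner signature is simply dropped by replicas.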
The Skycoin distributed consensus white paper draft is done and will be on github soon. Skycoin has no miners, confirms transactions in seconds, is completely decentralized and is immune to 51% attacks. Skycoin achieves a higher level of security than Bitcoin, at a lower cost to run the network.
We still have a lot of work to do, but have been making very good progress.
How to generate a Private Address with your mind and other nice Bitcoin tricks!
Warning: using anything but true randomness to generate your keys always increases the risk of the key being guessed (a risk which is still extremely low if done properly). This post has nothing to do with Brainwallet, but it should give you an insight into Bitcoin addresses and how easy it is to create your own key offline and without software. You can use this:
If you don't trust software generators or your computer's random generator. You could generate a key by rolling dice or picking random balls from a bag.
If you're meeting with somebody you trust and want to exchange money, but you have no software, phone or technology on you. Just a piece of paper.
You could create a private address on the spot and share it with the other party. Once home, you load it with bitcoins and the other party can take them. Generating the public address from the private key is something MUCH more difficult to do without software assistance. Still, knowing how to create a private key with little to no software could prove useful.

Your private key is just a random number. Nothing fancy, nothing shiny, nothing special: just a long random number. How long? It has 256 digits in binary, i.e. a string of 256 1's and 0's. The same number can also be expressed in just 64 digits in base 16 (hexadecimal). This hexadecimal number can be imported by many popular Bitcoin applications such as Armory, Electrum, or blockchain.info (I haven't tried any others). Once imported, the software will usually convert that number to base58, also called the "Wallet Import Format". This is the format you're probably familiar with that starts with a 5 and includes some checksum data. But we don't need to create a base58 number ourselves, just a 64-digit base-16 number, which is much easier, and most applications will be able to import it.

Generate your own private key: you can either do it in your mind or find a random device with at least 16 outcomes. Remember that the quality and security of the key depend entirely on the randomness; the human mind is notoriously biased when writing down "random" numbers, so treat a mind-generated key as no stronger than a good passphrase. These are the digits you can choose from:

0 1 2 3 4 5 6 7 8 9 a b c d e f

Just treat the letters as if they were numbers and write down one random digit, 64 times. At the end you'll have a 64-digit hexadecimal, random or semi-random number. Import it, without spaces, into your favourite application and it will generate the public address for you. That's it! You're done!
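The "write down 64 hex digits" procedure above can be sketched in Python. The `key_from_rolls` helper is hypothetical and purely illustrative (it assumes an unbiased source of values 0-15, such as a 16-sided die); for actual use, the standard library's `secrets` module is the safer choice:

```python
import secrets

HEX_DIGITS = "0123456789abcdef"

def key_from_rolls(rolls):
    """Turn 64 values in the range 0-15 (e.g. 16-sided die rolls) into a hex private key."""
    assert len(rolls) == 64 and all(0 <= r < 16 for r in rolls)
    return "".join(HEX_DIGITS[r] for r in rolls)

# The software equivalent: 32 cryptographically random bytes, printed as 64 hex digits.
key = secrets.token_hex(32)
print(len(key))  # 64 hex digits = 256 bits
```

Either way the result is the same kind of object the post describes: a 64-digit hexadecimal number that most wallet software can import directly.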
As you can guess, this also means that "funny" private keys such as: 777777777... (64 times), 314159265... (the digits of pi), 123456789... (an ascending sequence) are perfectly possible and can be tried out! I would strongly advise against using them to hold any bitcoins, but it's interesting to keep those "funny" addresses in mind. Fun fact: the number of possible keys is far larger than the number of atoms in the whole planet.

Generating private keys with /dev/random: if you're on a Linux machine, you have access to a great device called /dev/random. Most Bitcoin software uses a different device called /dev/urandom (note the extra u) to generate addresses because it is faster. The "urandom" output is pseudorandom rather than truly random; this allows it to be faster and output numbers at a much higher rate, but the randomness offered by /dev/random is better (although it may take a few seconds to generate a key). Write this command in a terminal:
od -An -N32 -x /dev/random
This should provide you with a 64-digit hexadecimal number which will work excellently as a random private key (but you must remove the spaces). Generating a brainwallet offline: create your own brainwallet without using brainwallet.org.
echo -n "your passphrase" | shasum -a 256
The -n option removes the newline that "echo" usually appends, and the | (pipe) passes the output to the next command, which creates a 256-bit hash from it. The hash will be in hexadecimal and, yes, you guessed right, it will be 64 digits long. This will, by the way, give you exactly the same private key as brainwallet.org, but it can be used offline, for example on a freshly booted live CD with no internet connection. Finally, you must know that for a key to be accepted by Bitcoin software, it must be larger than 1 and smaller than FFFF FFFF FFFF FFFF FFFF FFFF FFFF FFFE BAAE DCE6 AF48 A03B BFD2 5E8C D036 4141. So, as long as your manually generated key doesn't start with 31 consecutive F's, it will be valid. More information about this. I hope you like it! Credit also goes to Flatfly and DannyHamilton, who helped me out on the Bitcointalk forums.
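The shell pipeline above has a direct Python equivalent, and the validity range just mentioned is easy to check programmatically. A minimal sketch (the function names are mine; the order constant is the one quoted in the text, i.e. the order of the secp256k1 group):

```python
import hashlib

# Order of the secp256k1 group: a private key must be at least 1 and below this value.
SECP256K1_ORDER = int(
    "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFE"
    "BAAEDCE6AF48A03BBFD25E8CD0364141", 16)

def brainwallet_key(passphrase: str) -> str:
    """Same result as: echo -n "passphrase" | shasum -a 256"""
    return hashlib.sha256(passphrase.encode()).hexdigest()

def is_valid_key(hex_key: str) -> bool:
    """Check the range constraint Bitcoin software enforces on private keys."""
    k = int(hex_key, 16)
    return 1 <= k < SECP256K1_ORDER

key = brainwallet_key("your passphrase")
print(len(key), is_valid_key(key))  # a 256-bit hash is virtually always in range
```

The range check almost never fails in practice: a random 256-bit number would have to begin with 31 consecutive F's to fall outside it, which is why the post's rule of thumb works.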
EthereumCash★ETHC★Masternode - Pos★Payout 50% Profit To Holders★
Our project is the EthereumCash coin, not a token. We have our own system and our own plan; we are not a clone or a scam. There are masternodes, and profit is paid out to holders every month. Our coin symbol is ETHC, not ECASH. Thanks! NOTE: 1. We do not run any masternodes ourselves. Masternodes are only for big holders with 50k coins. We don't want too many masternodes, because the more coins are minted, the more the price of ETHC will drop. This is our company's coin; more of our coins with more potential will be announced soon. For now, there is more work to do. Thanks.
A BIG EVENT is coming on 1 NOV, when more of our promising coins finish their ICOs and list on exchanges. We will update information about our coins after 1 NOV. We will pay out the bounty on 18 OCT; the next payout will be announced later.
The BOUNTY REWARD will decrease soon. Let's go. We must change some rules about rewards; the new rules will be better and help ETHC reach a high value. Thanks.
Only 4 million ETHC will be on exchanges, no more. We hold the 73% premine to earn profit from our project.
NEWS: Today our team discussed how to use the money from ETHC sales on exchanges. Some wanted to share 50% of the profit with holders to make this a great project, but we needed broader agreement, and we hoped to have good news soon. Finally, our team decided to share 50% of the profit from ETHC exchange sales with holders; we hope all investors will hold ETHC to earn BTC. Thanks.
Our topic was just deleted by a mod, so please don't post your address in our topic.
Our keyword #EthereumCash is in the top 10 on Google.
A big forum (our partner) now supports our project; a banner will be released tomorrow.
https://www.emoneyspace.com/banner_stats.php?h=%2FkfqBZeXOos%3D https://www.emoneyspace.com/forum/index.php?action=profile;u=10955

We have just paid out to the 400 addresses that were quickest to fill out the bounty form on Twitter; the others must wait. Thanks!

EthereumCash Project

What is EthereumCash? EthereumCash is an experimental new digital currency that enables anonymous, instant payments to anyone, anywhere in the world. EthereumCash uses peer-to-peer technology to operate with no central authority: managing transactions and issuing money are carried out collectively by the network. EthereumCash is a PoS-based cryptocurrency and depends upon libsecp256k1 by sipa, the sources for which can be found here: https://github.com/bitcoin/secp256k1

EthereumCash is a project created in January 2017, launched by a group of graduates with master's degrees in hardware, web programming, and cryptocurrency. Currently, the project has 60 Antminer L3+ devices. EthereumCash was released to fund the development of the project, and the funds are used to purchase new Antminer devices. The special feature is that we will pay EthereumCash holders 50% of our mining profits every 45 days once EthereumCash is listed on an exchange. Our project is also interested in partners and is ready to collaborate: partners buy 30% of EthereumCash and allow us to use their existing Antminer devices for a low fee. Our mining plan at the start of the project was to mine newly issued or newly listed coins at low difficulty, then wait and sell at prices beyond expectations. In addition, when we find a coin with growth potential or high price expectations, we bring all our equipment to mine that coin. We are young and dynamically creative, willing to devote everything to the success and development of the company.
EthereumCash Details
Coin Name: EthereumCash
Coin Abbreviation: ETHC
Coin Type: Full PoS
Algorithm: SHA256
Total Coin Supply: 100 Million
Premine: 15% (15 Million)
Masternode Rewards: 60%
Stake: 10 ETHC
Minimum Stake Age: 8 hours
Number of Confirmations: 10

For more information, as well as an immediately usable binary version of the EthereumCash Core software, see https://ethereumcash.io

Premine Distribution
Our Company: 50%
Dev Team: 23% (the 73% total is not for sale; we hold it and take profit)
Sale on Exchange: 20%
Bounty & Airdrop: 7% (1,050,000 coins)

How to mine? You can buy some coins and stake them to earn rewards; more coins, more rewards. Masternodes require 50k coins. They are only for big holders; we don't want too many masternodes, because if rewards are too easy to earn, the price of the coin will drop. But with a 50% payout of our mining profit, we hope ETHC will rise over the long term.

Download a wallet? Download your EthereumCash wallet for free. This wallet protects your EthereumCash and stores it securely. You can use this wallet for all kinds of EthereumCash transactions.
Windows wallet (we checked our wallet on VirusTotal; no alerts, don't worry): https://github.com/ethereumcashdev/ethereumcash/files/1385162/ethereumcash-qt-windows.zip
Linux wallet (please install libdb5.1++ before running the Linux wallet, via: apt-get install libdb5.1++): https://github.com/ethereumcashdev/ethereumcash/files/1384405/ethereumcash-qt-linux.zip
MacOSX wallet: https://github.com/ethereumcashdev/ethereumcash/files/1390116/ethereumcash-qt-macosx.zip

Our Website: https://ethereumcash.io
Block Explorer: https://explorer.ethereumcash.io
Exchange: coming next week
Our coins: we will post some of our other promising coins soon...

How to configure a masternode? A masternode is running; please check https://explorer.ethereumcash.io
Code:
Step 1: getnewaddress 0
Step 2: Send 50,000 ETHC to your masternode address (EWb9GpJptZ5ywdybAdSSGm1mALkKBD46Ev)
Step 3: masternode genkey
https://askcoin.org/blog/assets/images/ask_header.jpeg

Askcoin! The Cash of Knowledge. A DAG-based blockchain system and the coin to exchange knowledge for value.

Introduction: The AskCoin system is a blockchain infrastructure designed specifically for knowledge-sharing platforms. It works with existing Q&A platforms like zhihu.com, stackoverflow.com, quora.com, etc. We consider a common decentralized blockchain platform for all these websites (we call them Apps) necessary. It is a user-motivated way for people to provide valuable answers, which are rewarded with an independent cryptocurrency. An author can now exchange his or her knowledge for value much faster, more easily, and more transparently by using the platform-independent token, which can be spent anywhere in the AskCoin ecosystem. Essentially, every App in the ecosystem is an AskCoin wallet: you can ask and answer questions, make payments, and transfer tokens. We will implement a default mobile wallet (for both Android and iOS) with an internal exchange market in it. The whole ecosystem looks like this: https://askcoin.org/blog/assets/images/Askcoin_ecosystem.png

The Design: Askcoin uses ed25519 elliptic curve cryptography instead of the secp256k1 curve used by Bitcoin and Ethereum. The Askcoin address encoding is a derived version of the Bech32 address encoding introduced by BIP173. https://askcoin.org/blog/assets/images/ASK_address.png

AskCoin uses DAG (Directed Acyclic Graph) based technology, different from traditional blockchain technology such as Bitcoin and Ethereum. We believe a DAG offers many advantages in the use cases AskCoin aims to address. Before AskCoin, the IOTA and Byteball projects became famous for their DAG techniques. The AskCoin team learned a lot from the Byteball project and will implement the same MainChain mechanism that was first introduced there.
https://askcoin.org/blog/assets/images/Askcoin_mainchain_selection.png

AskCoin will not fork from any existing project. Instead, we will build the system from scratch in the Java programming language. AskCoin will implement an internal exchange market: users will be able to exchange ASK for BTC and ETH directly from the mobile wallets, with the AskCoin network acting as a sidechain of both Bitcoin and Ethereum. https://askcoin.org/blog/assets/images/Askcoin_exchange.png

Money supply / initial distribution: The system will have an internal token called ASKCOIN (ASK for short). The token will be used to pay transaction fees and to pay the person who answered a question. The token has the same value across all Apps that integrate the AskCoin blockchain; that is to say, coins earned in app1 can also be spent in app2, because they are the same coins. The initial supply is 1,000,000,000 ASK. The total supply will never change and no more tokens will be generated. The coins will be created in the genesis block (transaction) and distributed to ICO participants accordingly.

Team: For more information about the Askcoin team, please refer to the Askcoin website.

ICO: The Askcoin project is holding its ICO now; join the Askcoin ICO at bitbill.com

RoadMap
• Askcoin White Paper Draft (2017/06, done)
• Askcoin pre-ICO for 1600 BTC (2017/06, done)
• Askcoin website online: https://askcoin.org (2017/06, done)
• Askcoin ICO (ongoing)
• Askcoin Meetup #1, Shanghai/China (2017/07/09, done)
• Askcoin Meetup #2, Beijing/China (2017/07/15, done)
• Askcoin Meetup #3, Shenzhen/China (2017/07/23, done)
• Askcoin White Paper v1.0 (2017/08)
• Askcoin Dev Roadmap (2017/08)
• Askcoin Testnet (2017/11)
• Askcoin Wallet (iOS/Android) (2017/11)
• Askcoin Mainnet & genesis (2018/02)
• ASK Distribution (2018/02)
• The first ASK-based App online (blockchain-tech Q&A community) (2018/05)
Decred is an open, progressive, and self-funding cryptocurrency with a system of community-based governance integrated into its blockchain. At its core is a hybridized proof-of-work proof-of-stake (PoW/PoS) consensus system that aims to strike a balance between PoW miners and PoS voters to create a more robust notion of consensus. The project is a result of the theoretical proposals brought by proof-of-activity (PoA) and MC2 in 2013. Decred development started in April, 2014 with a single developer and expanded to include developers from btcsuite shortly thereafter. Decred is built in the spirit of open participation and we have provided below a full disclosure of the technical features of the system, wallets and mining, initial funding and distribution, project governance and development, and a group contribution timeline. We hope to launch mainnet on January 18th, 2016, and will provide additional details in this thread. Everyone is welcome to participate, and you are certainly welcome to join the development and project groups if you have interest in contributing to our efforts! i. Technical Features The features below are implemented in Decred and will be available in full at launch. For a deeper description, please consult the Decred Technical Brief (DTB001). •Novel hybridized proof-of-work/proof-of-stake (PoW/PoS) consensus system - A decentralized lottery is used to select PoS miners to vote on PoW blocks. The PoW and PoS subsidies account for 60% and 30% of each total block subsidy, respectively. This system is based on that of MC2, which is very similar to, but developed independently from, Proof-of-Activity (PoA) by Iddo Bentov, Charles Lee, Alex Mizrahi and Meni Rosenfeld. •Cold staking and decentralized stake pooling - The ability to generate new coins without the risk of having your coins online when PoS mining. 
The PoS mining system has also been engineered with distributed, decentralized stake pooling in mind, so that even those with small amounts of stake can participate in network validation. •Internal voting system for the addition of new features and hard or soft fork selection - Both PoW and PoS miners can vote for features and issues through bit flags, providing a sensible mechanism for resolving disputes about the features of the blockchain. •Immutable transaction hashes ("transaction IDs") by separating transaction signatures from the rest of the transaction data - A permanent fix for transaction hash malleability has been implemented that prevents mutability of the transaction hash by separating it from its input signatures. This allows more efficient SPV validation. Fraud proofs have also been added. •Elliptic curve cryptography over secp256k1 with optional Curve25519 support - The Bitcoin scripting system has been modified to allow for simple, drop-in addition of new elliptic curve digital signature algorithms. •Schnorr signatures with threshold n-of-n support - In addition to supporting Schnorr signatures, groups of signers can now jointly sign transactions off-chain with constant-size signatures, ensuring higher privacy and less blockchain bloat. •Script enhancements and new OP codes - New OP codes have been added to the existing Bitcoin scripting engine, and extensions for the plug-in use of future scripting engines have been added. •PoW mining using BLAKE256 hash algorithm - Inspired by Bernstein's ChaCha stream cipher, SHA3 finalist BLAKE256 offers speed as well as high security. •Compatibility with Bitcoin transaction scripting system - Decred's scripting system has been derived from Bitcoin's with care in ensuring that all future updates to the Bitcoin transaction script will be easily extensible to Decred. Further, any newly created functionalities will also be devised with backwards compatibility with Bitcoin in mind.
•Modularized, easy-to-use Golang btcsuite codebase - Thanks to the codebase inherited from btcsuite, adding new features to the daemon or wallet will be facile. Decred will episodically sync updates from btcsuite, so that it benefits from the latest developments in Bitcoin. •Hierarchical deterministic (HD) wallets - Wallets use a seed to deterministically generate addresses, so your wallet can be restored from a single BIP0032 seed. •Transaction expiration - Transactions have a new expiration field to prevent inclusion into the blockchain after a certain height. •Patches for intrinsic Bitcoin bugs - Extra push for multisignature scripts has been removed, and SIGHASH_SINGLE behavior has been corrected. •Approximately 21 million coins - Exponential decay in the subsidy, i.e. the number of coins generated per year. •Self-funded development via block subsidy - In order to have an ongoing source of funding for development work, a consensus rule has been added to allocate 10% of each block subsidy to a development organization. This entity is transparent and responsible for funding development work performed by current and new developers so that the project remains sustainable without a funding dependence on outside forces in the future. Decred therefore improves with growth in a sustainable way and is accountable only to its users. ii. Wallets and Mining •Web wallet service - In order for users to have access to a GUI on all platforms, we have created a web wallet service forked from BitPay's Copay wallet and its dependencies. This wallet allows users to access all the basics with Decred: sending and receiving coins, multisig transactions. •Command-line wallet - For more advanced users, we have a command-line wallet, dcrwallet. dcrwallet allows users to mine PoS and collect rewards by participating in the PoW/PoS consensus system. •Simple GPU miner - A simple AMD GPU miner that connects to a local daemon will be available before launch.
In the future, proper getblocktemplate functionality will be enabled and pool software will be made available. iii. Initial Funding and Airdrop Decred opted for a different funding model in an attempt to shift the risk carried by supporters to the developers of the project. Instead of asking interested parties to fund the development of the software, the developers decided to pool funds together and carry the project to completion before making it public. The consensus was that this is an ethical path given the realities of funding software development, due to the fact that the developers alone carry the risk of the project failing, whereas in the past potential users were expected to pay for coins before any code was written. We felt this was unjust. The development of Decred was funded by Company 0 and from the pockets of its developers individually. The cost of developing the project, in terms of developer pay, totals approximately USD 250,000, which Company 0 paid to developers. An additional amount of approximately USD 165,000 has been allocated for unpaid work and individual purchases by developers. We felt that the most equitable way to handle compensation for these expenses was to perform a small premine as part of the project launch. The model is unusual in that no developer received any amount of coins for free - all coins owned by developers will either be purchased at a rate of USD 0.49 per coin from their own pockets or exchanged for work performed at the same rate. The premine consists of 8% of the total supply of 21 million coins, meaning the premine consists of 1.68 million coins. Rather than allocating the entire premine to the bring-up costs, we decided to split the premine equally between compensation for bring-up and an "airdrop", where we freely give an equal amount of coins to a number of airdrop participants.
This means Company 0 and its developers will have put roughly USD 415,000 into the bring-up since April 2014 and receive 4% of the total supply, 840,000 coins (at USD 0.49 per coin). The remaining 4% will be spread evenly across a list of airdrop participants as part of an effort to build the Decred network and decentralize its distribution. Coins held by Company 0 will be used to fund its ongoing work on open-source projects, such as Decred and btcsuite. Giving away these coins in an airdrop allows us to accomplish several things at once for the project: enlarge the Decred network, further help decentralize the distribution of coins, and allow us to get coins into the hands of people who are interested in participating in the project. Decred is fundamentally about technological progress, so the airdrop will target individuals that have made contributions to advance technology in its various forms. The maximum number of airdrop participants is capped at 5,000 individuals, so we recommend registering sooner rather than later. These coins will be given away unconditionally and there is zero expectation of Decred receiving anything from you in return for these coins. Sign up for the airdrop is currently open, but the airdrop registration will commence on January 4th, 2016. People who have been selected to participate in the airdrop will receive an email that contains a link to a web registration form. This form will require airdrop participants to enter an address to which their coins can be sent. Binaries and source code will be made available so that you can generate a wallet seed and an address for your airdrop coins. Once you have entered your receiving address into the airdrop webform and submitted it, you will receive your coins on the projected launch date of January 18th, 2016. iv.
Project Governance and Development In addition to the technical features that make up the technology, Decred as a project introduces several development and governance features and proposals to ensure and steer long-term growth. We encourage participants to discuss these topics earnestly, as we want to ensure the system of development and governance is built on a solid foundation. •A multi-stakeholder development ecosystem that welcomes and empowers participants who want to build new functionality and improve on existing features. •Any party can submit feature proposals and developers are paid for work to fulfill requirements. This is done in full view of the community in a system designed to fight against ingroup-outgroup dynamics. •The initial contributors are the developers responsible for btcsuite (est. early 2013 - present). •A proposal for a layered form of transparent meritocratic governance that extends beyond proof-of-work and proof-of-stake mechanisms to bring forward and represent insider and outsider voices in the community. •A proposal for bottom-up decision-making through the Decred Assembly, an evolving and inclusive list of community members who make non-financial contributions to the project through their work and effort. •The project is bound by the Decred Constitution on the core principles of finite issuance, privacy, security, fungibility, inclusivity, and progressive development of the technology that keeps these principles together. v. Group Contribution Timeline Below are key points of free and open-source contributions made by the Decred developers to the digital currency ecosystem since 2013. The largest of these is the btcsuite package, which comprises a suite of packages and tools for working with Bitcoin in Golang, and includes btcd, a full node, mining capable, Bitcoin implementation. To date, the total contribution across btcsuite represents 98,046 lines of code, 44,576 of which are test coverage. vi.
Additional Information Website: https://decred.org Forum: https://forum.decred.org Wiki: https://wiki.decred.org Reddit: https://reddit.com/decred Twitter: https://twitter.com/decredproject IRC: #decred on irc.freenode.net
secp256k1 was almost never used before Bitcoin became popular, but it is now gaining in popularity due to its several nice properties. Most commonly-used curves have a random structure, but secp256k1 was constructed in a special non-random way which allows for especially efficient computation. As a result, it is often more than 30% faster than other curves if the implementation is sufficiently ... As early as 2011, Bitcoin developer Hal Finney pointed out on Bitcointalk that the endomorphism feature of secp256k1 could be used to accelerate ECDSA signature verification.
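The non-random structure mentioned above is easy to state concretely: secp256k1 is the curve y² = x³ + 7 over the prime field p = 2²⁵⁶ − 2³² − 977, with a fixed generator point G. As a quick sanity check (constants taken from the SEC 2 specification), one can verify that the standard generator actually lies on the curve:

```python
# secp256k1 domain parameters (SEC 2): y^2 = x^3 + 7 mod p
P  = 2**256 - 2**32 - 977
GX = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
GY = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def on_curve(x: int, y: int) -> bool:
    """True iff (x, y) satisfies the secp256k1 curve equation."""
    return (y * y - (x * x * x + 7)) % P == 0

print(on_curve(GX, GY))  # True
```

The small curve coefficients (a = 0, b = 7) and the special form of the prime p are exactly what fast implementations exploit: reduction modulo p is much cheaper than for a random 256-bit prime.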