Note

Please make sure you are looking at the documentation that matches the version of the software you are using. See the version label at the top of the navigation panel on the left. You can change it using the selector at the bottom of that navigation panel.

A Blockchain Platform for the Enterprise

_images/hyperledger_fabric_logo_color.png

Enterprise grade permissioned distributed ledger platform that offers modularity and versatility for a broad set of industry use cases.





Introduction

In general terms, a blockchain is an immutable transaction ledger, maintained within a distributed network of peer nodes. These nodes each maintain a copy of the ledger by applying transactions that have been validated by a consensus protocol, grouped into blocks that include a hash that binds each block to the preceding block.

The first and most widely recognized application of blockchain is the Bitcoin cryptocurrency, though others have followed in its footsteps. Ethereum, an alternative cryptocurrency, took a different approach, integrating many of the same characteristics as Bitcoin but adding smart contracts to create a platform for distributed applications. Bitcoin and Ethereum fall into a class of blockchain that we would classify as public permissionless blockchain technology. Basically, these are public networks, open to anyone, where participants interact anonymously.

As the popularity of Bitcoin, Ethereum and a few other derivative technologies grew, interest in applying the underlying technology of the blockchain, distributed ledger and distributed application platform to more innovative enterprise use cases also grew. However, many enterprise use cases require performance characteristics that the permissionless blockchain technologies are unable (presently) to deliver. In addition, in many use cases, the identity of the participants is a hard requirement, such as in the case of financial transactions where Know-Your-Customer (KYC) and Anti-Money Laundering (AML) regulations must be followed.

For enterprise use, we need to consider the following requirements:

  • Participants must be identified/identifiable

  • Networks need to be permissioned

  • High transaction throughput performance

  • Low latency of transaction confirmation

  • Privacy and confidentiality of transactions and data pertaining to business transactions

While many early blockchain platforms are currently being adapted for enterprise use, Hyperledger Fabric has been designed for enterprise use from the outset. The following sections describe how Hyperledger Fabric (Fabric) differentiates itself from other blockchain platforms and describes some of the motivation for its architectural decisions.

Hyperledger Fabric

Hyperledger Fabric is an open source, enterprise-grade, permissioned distributed ledger technology (DLT) platform, designed for use in enterprise contexts, that delivers some key differentiating capabilities over other popular distributed ledger or blockchain platforms.

One key point of differentiation is that Hyperledger was established under the Linux Foundation, which itself has a long and very successful history of nurturing open source projects under open governance that grow strong, sustaining communities and thriving ecosystems. Hyperledger is governed by a diverse technical steering committee, and the Hyperledger Fabric project by a diverse set of maintainers from multiple organizations. It has a development community that has grown to over 35 organizations and nearly 200 developers since its earliest commits.

Fabric has a highly modular and configurable architecture, enabling innovation, versatility and optimization for a broad range of industry use cases including banking, finance, insurance, healthcare, human resources, supply chain and even digital music delivery.

Fabric is the first distributed ledger platform to support smart contracts authored in general-purpose programming languages such as Java, Go and Node.js, rather than constrained, domain-specific languages (DSL). This means that most enterprises already have the skill set needed to develop smart contracts, and no additional training to learn a new language or DSL is needed.

The Fabric platform is also permissioned, meaning that, unlike with a public permissionless network, the participants are known to each other, rather than anonymous and therefore fully untrusted. This means that while the participants may not fully trust one another (they may, for example, be competitors in the same industry), a network can be operated under a governance model that is built off of what trust does exist between participants, such as a legal agreement or framework for handling disputes.

One of the most important of the platform's differentiators is its support for pluggable consensus protocols that enable the platform to be more effectively customized to fit particular use cases and trust models. For instance, when deployed within a single enterprise, or operated by a trusted authority, fully byzantine fault tolerant consensus might be considered unnecessary and an excessive drag on performance and throughput. In situations such as that, a crash fault tolerant (CFT) consensus protocol might be more than adequate, whereas in a multi-party, decentralized use case a more traditional byzantine fault tolerant (BFT) consensus protocol might be required.

Fabric can leverage consensus protocols that do not require a native cryptocurrency to incent costly mining or to fuel smart contract execution. Avoidance of a cryptocurrency reduces some significant risk/attack vectors, and absence of cryptographic mining operations means that the platform can be deployed with roughly the same operational cost as any other distributed system.

The combination of these differentiating design features makes Fabric one of the better performing platforms available today, both in terms of transaction processing and transaction confirmation latency, and it enables privacy and confidentiality of transactions and the smart contracts (what Fabric calls "chaincode") that implement them.

Let's explore these differentiating features in more detail.

Modularity

Hyperledger Fabric has been specifically architected to have a modular architecture. Whether it is pluggable consensus, pluggable identity management protocols such as LDAP or OpenID Connect, key management protocols or cryptographic libraries, the platform has been designed at its core to be configured to meet the diversity of enterprise use case requirements.

At a high level, Fabric is comprised of the following modular components:

  • A pluggable ordering service establishes consensus on the order of transactions and then broadcasts blocks to peers.

  • A pluggable membership service provider is responsible for associating entities in the network with cryptographic identities.

  • An optional peer-to-peer gossip service disseminates the blocks output by ordering service to other peers.

  • Smart contracts (“chaincode”) run within a container environment (e.g. Docker) for isolation. They can be written in standard programming languages but do not have direct access to the ledger state.

  • The ledger can be configured to support a variety of DBMSs.

  • A pluggable endorsement and validation policy enforcement that can be independently configured per application.

There is fair agreement in the industry that there is no "one blockchain to rule them all". Hyperledger Fabric can be configured in multiple ways to satisfy the diverse solution requirements for multiple industry use cases.

Permissioned vs Permissionless Blockchains

In a permissionless blockchain, virtually anyone can participate, and every participant is anonymous. In such a context, there can be no trust other than that the state of the blockchain, prior to a certain depth, is immutable. In order to mitigate this absence of trust, permissionless blockchains typically employ a "mined" native cryptocurrency or transaction fees to provide economic incentive to offset the extraordinary costs of participating in a form of byzantine fault tolerant consensus based on "proof of work" (PoW).

Permissioned blockchains, on the other hand, operate a blockchain amongst a set of known, identified and often vetted participants operating under a governance model that yields a certain degree of trust. A permissioned blockchain provides a way to secure the interactions among a group of entities that have a common goal but which may not fully trust each other. By relying on the identities of the participants, a permissioned blockchain can use more traditional crash fault tolerant (CFT) or byzantine fault tolerant (BFT) consensus protocols that do not require costly mining.

Additionally, in such a permissioned context, the risk of a participant intentionally introducing malicious code through a smart contract is diminished. First, the participants are known to one another, and all actions, whether submitting application transactions, modifying the configuration of the network or deploying a smart contract, are recorded on the blockchain following an endorsement policy that was established for the network and the relevant transaction type. Rather than being completely anonymous, the guilty party can easily be identified and the incident handled in accordance with the terms of the governance model.

Smart Contracts

A smart contract, or what Fabric calls "chaincode", functions as a trusted distributed application that gains its security/trust from the blockchain and the underlying consensus among the peers. It is the business logic of a blockchain application.

There are three key points that apply to smart contracts, especially when applied to a platform:

  • many smart contracts run concurrently in the network,

  • they may be deployed dynamically (in many cases by anyone), and

  • application code should be treated as untrusted, potentially even malicious.

Most existing smart-contract capable blockchain platforms follow an order-execute architecture in which the consensus protocol:

  • validates and orders transactions then propagates them to all peer nodes,

  • each peer then executes the transactions sequentially.

The order-execute architecture can be found in virtually all existing blockchain systems, ranging from public/permissionless platforms such as Ethereum (with PoW-based consensus) to permissioned platforms such as Tendermint, Chain, and Quorum.

Smart contracts executing in a blockchain that operates with the order-execute architecture must be deterministic; otherwise, consensus might never be reached. To address the non-determinism issue, many platforms require that the smart contracts be written in a non-standard, or domain-specific language (such as Solidity) so that non-deterministic operations can be eliminated. This hinders widespread adoption because it requires developers writing smart contracts to learn a new language, and may lead to programming errors.

Further, since all transactions are executed sequentially by all nodes, performance and scale are limited. The fact that the smart contract code executes on every node in the system demands that complex measures be taken to protect the overall system from potentially malicious contracts in order to ensure the resiliency of the overall system.

A New Approach

Fabric introduces a new architecture for transactions that we call execute-order-validate. It addresses the resiliency, flexibility, scalability, performance and confidentiality challenges faced by the order-execute model by separating the transaction flow into three steps:

  • execute a transaction and check its correctness, thereby endorsing it,

  • order transactions via a (pluggable) consensus protocol, and

  • validate transactions against an application-specific endorsement policy before committing them to the ledger

This design departs radically from the order-execute paradigm in that Fabric executes transactions before reaching final agreement on their order.

In Fabric, an application-specific endorsement policy specifies which peer nodes, or how many of them, need to vouch for the correct execution of a given smart contract. Thus, each transaction need only be executed (endorsed) by the subset of the peer nodes necessary to satisfy the transaction's endorsement policy. This allows for parallel execution, increasing the overall performance and scale of the system. This first phase also eliminates any non-determinism, as inconsistent results can be filtered out before ordering.

Because we have eliminated non-determinism, Fabric is the first blockchain technology that enables the use of standard programming languages. In the 1.1.0 release, smart contracts can be written in either Go or Node.js, while there are plans to support other popular languages, including Java, in subsequent releases.

Privacy and Confidentiality

As we have discussed, in a public, permissionless blockchain network, transactions are executed on every node. This means that neither can there be confidentiality of the contracts themselves, nor of the transaction data that they process. Every transaction, and the code that implements it, is visible to every node in the network. In this case, we have traded confidentiality of contract and data for the byzantine fault tolerant consensus delivered by PoW.

This lack of confidentiality can be problematic for many business/enterprise use cases. For example, in a network of supply-chain partners, some consumers might be given preferred rates as a means of either solidifying a relationship or promoting additional sales. If every participant can see every contract and transaction, it becomes impossible to maintain such business relationships in a completely transparent network: everyone will want the preferred rates!

As a second example, consider the securities industry, where a trader building a position (or disposing of one) would not want her competitors to know of this, or else they will seek to get in on the game, weakening the trader's gambit.

In order to address the lack of privacy and confidentiality for purposes of delivering on enterprise use case requirements, blockchain platforms have adopted a variety of approaches. All have their trade-offs.

Encrypting data is one approach to providing confidentiality; however, in a permissionless network leveraging PoW for its consensus, the encrypted data is sitting on every node. Given enough time and computational resources, the encryption could be broken. For many enterprise use cases, the risk that their information could become compromised is unacceptable.

Zero knowledge proofs (ZKP) are another area of research being explored to address this problem, the trade-off here being that, presently, computing a ZKP requires considerable time and computational resources. Hence, the trade-off in this case is performance for confidentiality.

In a permissioned context that can leverage alternate forms of consensus, one might explore approaches that restrict the distribution of confidential information exclusively to authorized nodes.

Hyperledger Fabric, being a permissioned platform, enables confidentiality through its channel architecture. Basically, participants on a Fabric network can establish a "channel" between the subset of participants that should be granted visibility to a particular set of transactions. Think of this as a network overlay. Thus, only those nodes that participate in a channel have access to the smart contract (chaincode) and data transacted, preserving the privacy and confidentiality of both.

To improve upon its privacy and confidentiality capabilities, Fabric has added support for private data and is working on zero knowledge proofs (ZKP), available in the future. More on this as it becomes available.

Pluggable Consensus

The ordering of transactions is delegated to a modular component for consensus that is logically decoupled from the peers that execute transactions and maintain the ledger. Specifically, the ordering service. Since consensus is modular, its implementation can be tailored to the trust assumption of a particular deployment or solution. This modular architecture allows the platform to rely on well-established toolkits for CFT (crash fault tolerant) or BFT (byzantine fault tolerant) ordering.

Fabric currently offers two CFT ordering service implementations. The first is based on the Raft protocol as implemented in the etcd library. The other is Kafka (which uses Zookeeper internally). For information about currently available ordering services, check out our conceptual documentation about ordering.

Note also that these are not mutually exclusive. A Fabric network can have multiple ordering services supporting different applications or application requirements.

Performance and Scalability

Performance of a blockchain platform can be affected by many variables such as transaction size, block size, network size, as well as the limits of the hardware, etc. The Hyperledger community is currently developing a draft set of measures within the Performance and Scale working group, along with a corresponding implementation of a benchmarking framework called Hyperledger Caliper.

While that work continues to be developed and should be seen as a definitive measure of blockchain platform performance and scale characteristics, a team from IBM Research has published a peer-reviewed paper that evaluated the architecture and performance of Hyperledger Fabric. The paper offers an in-depth discussion of the architecture of Fabric and then reports on the team's performance evaluation of the platform using a preliminary release of Hyperledger Fabric v1.1.

The benchmarking efforts that the research team did yielded a significant number of performance improvements for the Fabric v1.1.0 release that more than doubled the overall performance of the platform from the v1.0.0 release levels.

Conclusion

Any serious evaluation of blockchain platforms should include Hyperledger Fabric in its short list.

Combined, the differentiating capabilities of Fabric make it a highly scalable system for permissioned blockchains supporting flexible trust assumptions that enable the platform to support a wide range of industry use cases, ranging from government, to finance, to supply-chain logistics, to healthcare and so much more.

More importantly, Hyperledger Fabric is the most active of the (currently) ten Hyperledger projects. The community building around the platform is growing steadily, and the innovation delivered with each successive release far out-paces any of the other enterprise blockchain platforms.

Acknowledgement

The preceding is derived from the peer-reviewed paper "Hyperledger Fabric: A Distributed Operating System for Permissioned Blockchains" — Elli Androulaki, Artem Barger, Vita Bortnikov, Christian Cachin, Konstantinos Christidis, Angelo De Caro, David Enyeart, Christopher Ferris, Gennady Laventman, Yacov Manevich, Srinivasan Muralidharan, Chet Murthy, Binh Nguyen, Manish Sethi, Gari Singh, Keith Smith, Alessandro Sorniotti, Chrysoula Stathakopoulou, Marko Vukolic, Sharon Weed Cocco, Jason Yellick

What's new in v2.0 Alpha

A word about the Alpha release

The Alpha release of Hyperledger Fabric v2.0 allows users to try out two exciting new features: the new Fabric chaincode lifecycle and FabToken. The Alpha release is being offered to provide users a preview of these new capabilities and is not meant to be used in production. Additionally, there is no upgrade support to the v2.0 Alpha release, and no intended upgrade support from the Alpha release to future versions of v2.x.

Fabric chaincode lifecycle

The Fabric 2.0 Alpha introduces decentralized governance for chaincode, with a new process for installing a chaincode on your peers and starting it on a channel. The new Fabric chaincode lifecycle allows multiple organizations to come to agreement on the parameters of a chaincode, such as the chaincode endorsement policy, before it can be used to interact with the ledger. The new model offers several improvements over the previous lifecycle:

  • Multiple organizations must agree to the parameters of a chaincode: In the release 1.x versions of Fabric, one organization had the ability to set the parameters of a chaincode (for instance the endorsement policy) for all channel members. The new Fabric chaincode lifecycle is more flexible since it supports both centralized trust models (such as that of the previous lifecycle model) as well as decentralized models requiring a sufficient number of organizations to agree on an endorsement policy before it goes into effect.

  • More deliberate chaincode upgrade process: In the previous chaincode lifecycle, the upgrade transaction could be issued by a single organization, creating a risk for channel members that had not yet installed the new chaincode. The new model allows a chaincode to be upgraded only after a sufficient number of organizations have approved the upgrade.

  • Easier endorsement policy updates: The Fabric lifecycle allows you to change an endorsement policy without having to repackage or reinstall the chaincode. Users can also take advantage of a new default policy that requires endorsement from a majority of members on the channel. This policy is updated automatically when organizations are added to or removed from the channel.

  • Inspectable chaincode packages: The Fabric lifecycle packages chaincode in easily readable tar files. This makes it easier to inspect the chaincode package and coordinate installation across multiple organizations.

  • Start multiple chaincodes on a channel using one package: The previous lifecycle defined each chaincode on the channel using a name and version that was specified when the chaincode package was installed. You can now use a single chaincode package and deploy it multiple times, with different names, on the same channel or on different channels.

Using the new chaincode lifecycle

Use the following tutorials to get started with the new chaincode lifecycle:

Restrictions and limitations

The new Fabric chaincode lifecycle in the v2.0 Alpha release is not yet complete. In particular, be aware of the following limitations in the Alpha release:

  • CouchDB indexes are not yet supported

  • Chaincodes defined with the new lifecycle are not yet discoverable via service discovery

These limitations will be resolved after the Alpha release.

FabToken

The Fabric 2.0 Alpha also brings users the ability to easily represent assets as tokens on Fabric channels. FabToken is a token management system that uses an Unspent Transaction Output (UTXO) model to issue, transfer, and redeem tokens using the identity and membership infrastructure provided by Hyperledger Fabric.

  • Using FabToken: This operations guide provides a detailed overview of how to use tokens on a Fabric network. The guide also contains an example of how to create and transfer tokens using the token CLI.

Alpine images

Starting with v2.0, Hyperledger Fabric Docker images will use Alpine Linux, a security-oriented, lightweight Linux distribution. This means that Docker images are now much smaller, providing faster download and startup times, as well as taking up less disk space on host systems. Alpine Linux is designed from the ground up with security in mind, and the minimalist nature of the Alpine distribution greatly reduces the risk of security vulnerabilities.

Raft ordering service

Introduced in v1.4.1, Raft is a crash fault tolerant (CFT) ordering service based on an implementation of the Raft protocol in etcd. Raft follows a "leader and follower" model, where a leader node is elected (per channel) and its decisions are replicated by the followers. Raft ordering services should be easier to set up and manage than Kafka-based ordering services, and their design allows organizations spread out across the world to contribute nodes to a decentralized ordering service.

Release notes

The release notes provide more details for users moving to the new release, along with a link to the full release change log.

Key Concepts

Introduction

Hyperledger Fabric is a platform for distributed ledger solutions underpinned by a modular architecture delivering high degrees of confidentiality, resiliency, flexibility, and scalability. It is designed to support pluggable implementations of different components and to accommodate the complexity and intricacies that exist across the economic ecosystem.

We recommend first-time users begin by going through the rest of the introduction below in order to gain familiarity with how blockchains work and with the specific features and components of Hyperledger Fabric.

If you're already familiar with blockchain and Hyperledger Fabric, head to :doc:getting_started and from there explore the demos, technical specifications, APIs, etc.

What is a Blockchain?

A Distributed Ledger

At the heart of a blockchain network is a distributed ledger that records all the transactions that take place on the network.

A blockchain ledger is often described as **decentralized** because it is replicated across many network participants, each of whom **collaborate** in its maintenance. We'll see that decentralization and collaboration are powerful attributes that mirror the way businesses exchange goods and services in the real world.

_images/basic_network.png

In addition to being decentralized and collaborative, the information recorded to a blockchain is append-only, using cryptographic techniques that guarantee that once a transaction has been added to the ledger it cannot be modified. This property of "immutability" makes it simple to determine the provenance of information, because participants can be sure information has not been changed after the fact. It's why blockchains are sometimes described as **systems of proof**.
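
To make the append-only, hash-chained idea concrete, here is a minimal, hypothetical sketch (not Fabric's actual block format) showing how each block can embed the hash of its predecessor, so that altering any earlier block invalidates every hash that follows:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Block is a toy stand-in for a ledger block: some transaction data
// plus the hash of the previous block.
type Block struct {
	Data     string
	PrevHash [32]byte
	Hash     [32]byte
}

// newBlock links a block to its predecessor by hashing the new data
// together with the previous block's hash.
func newBlock(data string, prev *Block) *Block {
	b := &Block{Data: data}
	if prev != nil {
		b.PrevHash = prev.Hash
	}
	b.Hash = sha256.Sum256(append([]byte(data), b.PrevHash[:]...))
	return b
}

func main() {
	genesis := newBlock("genesis", nil)
	b1 := newBlock("tx: A pays B", genesis)
	b2 := newBlock("tx: B pays C", b1)

	// Any change to b1.Data would change b1's hash, breaking the
	// PrevHash recorded in b2, which is what makes tampering evident.
	fmt.Printf("b2.PrevHash == b1.Hash? %v\n", b2.PrevHash == b1.Hash)
}
```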

Smart Contracts

To support the consistent update of information, and to enable a whole host of ledger functions (transacting, querying, etc.), a blockchain network uses **smart contracts** to provide controlled access to the ledger.

_images/Smart_Contract.png

Smart contracts are not only a key mechanism for encapsulating information and keeping it simple across the network, they can also be written to allow participants to execute certain aspects of transactions automatically.

A smart contract can, for example, be written to stipulate the cost of shipping an item, where the shipping charge changes depending on how quickly the item arrives. With the terms agreed to by both parties and written to the ledger, the appropriate funds change hands automatically when the item is received.

Consensus

The process of keeping the ledger transactions synchronized across the network, ensuring that ledgers update only when transactions are approved by the appropriate participants, and that when ledgers do update, they update with the same transactions in the same order, is called **consensus**.

_images/consensus.png

You'll learn a lot more about ledgers, smart contracts and consensus later. For now, it's enough to think of a blockchain as a shared, replicated transaction system which is updated via smart contracts and kept consistently synchronized through a collaborative process called consensus.

Why is a Blockchain useful?

Today's Systems of Record

The transactional networks of today are little more than slightly updated versions of networks that have existed since business records have been kept. The members of a **business network** transact with each other, but they maintain separate records of their transactions. And the things they're transacting, whether it's Flemish tapestries in the 16th century or the securities of today, must have their provenance established each time they're sold, to ensure that the business selling an item possesses a chain of title verifying their ownership of it.

What you're left with is a business network that looks like this:

_images/current_network.png

Modern technology has taken this process from stone tablets and paper folders to hard drives and cloud platforms, but the underlying structure is the same. Unified systems for managing the identity of network participants do not exist, establishing provenance is so laborious that it takes days to clear securities transactions (the world volume of which is numbered in the many trillions of dollars), contracts must be signed and executed manually, and every database in the system contains unique information and therefore represents a single point of failure.

It's impossible with today's fractured approach to information and process sharing to build a system of record that spans a business network, even though the needs for visibility and trust are clear.

The Blockchain Difference

What if, instead of the rat's nest of inefficiencies represented by the "modern" system of transactions, business networks had standard methods for establishing identity on the network, executing transactions, and storing data? What if the provenance of an asset could be established by looking through a list of transactions that, once written, cannot be changed, and can therefore be trusted?

That business network would look more like this:

_images/future_net.png

This is a blockchain network, wherein every participant has their own replicated copy of the ledger. In addition to the ledger information being shared, the processes which update the ledger are also shared. Unlike today's systems, where a participant's **private** programs are used to update their **private** ledgers, a blockchain system has **shared** programs to update **shared** ledgers.

With the ability to coordinate their business network through a shared ledger, blockchain networks can reduce the time, cost, and risk associated with private information and processing while improving trust and visibility.

You now know what blockchain is and why it's useful. There are a lot of other details that are important, but they all relate to these fundamental ideas of the sharing of information and processes.

What is Hyperledger Fabric?

The Linux Foundation founded the Hyperledger project in 2015 to advance cross-industry blockchain technologies. Rather than declaring a single blockchain standard, it encourages a collaborative approach to developing blockchain technologies via a community process, with intellectual property rights that encourage open development and the adoption of key standards over time.

Hyperledger Fabric is one of the blockchain projects within Hyperledger. Like other blockchain technologies, it has a ledger, uses smart contracts, and is a system by which participants manage their transactions.

Where Hyperledger Fabric breaks from some other blockchain systems is that it is **private** and **permissioned**. Rather than an open, permissionless system that allows anonymous participation in the network (requiring protocols like "proof of work" to validate transactions and secure the network), the members of a Hyperledger Fabric network enroll through a trusted **Membership Service Provider (MSP)**.

Hyperledger Fabric also offers several pluggable options. Ledger data can be stored in multiple formats, consensus mechanisms can be swapped in and out, and different MSPs are supported.

Hyperledger Fabric also offers the ability to create **channels**, allowing a group of participants to create a separate ledger of transactions. This is an especially important option for networks where some participants might be competitors and not want every transaction they make (a special price they're offering to some participants and not others, for example) known to every participant. If two participants form a channel, then those participants, and no others, have copies of the ledger for that channel.

Shared Ledger

Hyperledger Fabric has a ledger subsystem comprising two components: the **world state** and the **transaction log**. Each participant has a copy of the ledger for every Hyperledger Fabric network of which they are a member.

The world state component describes the state of the ledger at a given point in time. It's the database of the ledger. The transaction log component records all transactions which have resulted in the current value of the world state; it's the update history for the world state. The ledger, then, is a combination of the world state database and the transaction log history.

The ledger has a replaceable data store for the world state. By default, this is a LevelDB key-value store database. The transaction log does not need to be pluggable. It simply records the before and after values of the ledger database being used by the blockchain network.
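A minimal sketch of these two components, assuming nothing about Fabric's real storage engines: the world state holds only the latest value per key, while the transaction log keeps the full history of updates that produced it:

```go
package main

import "fmt"

// TxRecord captures one update in the transaction log: which key
// changed, and its before and after values.
type TxRecord struct {
	Key    string
	Before string
	After  string
}

// Ledger pairs a world state (latest values) with a transaction log
// (the history of changes that produced those values).
type Ledger struct {
	WorldState map[string]string
	TxLog      []TxRecord
}

// Put updates the world state and appends the change to the log.
func (l *Ledger) Put(key, value string) {
	l.TxLog = append(l.TxLog, TxRecord{Key: key, Before: l.WorldState[key], After: value})
	l.WorldState[key] = value
}

func main() {
	l := &Ledger{WorldState: map[string]string{}}
	l.Put("CAR1", "owner=Mary")
	l.Put("CAR1", "owner=John")

	fmt.Println("current state:", l.WorldState["CAR1"]) // latest value only
	fmt.Println("history:", l.TxLog)                    // every update ever made
}
```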

Smart Contracts

Hyperledger Fabric smart contracts are written in **chaincode** and are invoked by an application external to the blockchain when that application needs to interact with the ledger. In most cases, chaincode interacts only with the database component of the ledger, the world state (querying it, for example), and not the transaction log.

Chaincode can be implemented in several programming languages. Currently, Go and Node are supported.

Privacy

Depending on the needs of a network, participants in a Business-to-Business (B2B) network might be extremely sensitive about how much information they share. For other networks, privacy will not be a top concern.

Hyperledger Fabric supports networks where privacy (using channels) is a key operational requirement as well as networks that are comparatively open.

Consensus

Transactions must be written to the ledger in the order in which they occur, even though they might be between different sets of participants within the network. For this to happen, the order of transactions must be established, and a method for rejecting bad transactions that have been inserted into the ledger in error (or maliciously) must be put into place.

This is a thoroughly researched area of computer science, and there are many ways to achieve it, each with different trade-offs. For example, PBFT (Practical Byzantine Fault Tolerance) can provide a mechanism for file replicas to communicate with each other to keep each copy consistent, even in the event of corruption. Alternatively, in Bitcoin, ordering happens through a process called mining, where competing computers race to solve a cryptographic puzzle which defines the order that all processes subsequently build upon.

Hyperledger Fabric has been designed to allow network starters to choose a consensus mechanism that best represents the relationships that exist between participants. As with privacy, there is a spectrum of needs: from networks that are highly structured in their relationships to those that are more peer-to-peer.

We'll learn more about the Hyperledger Fabric consensus mechanisms, which currently include SOLO and Kafka.

Where can I learn more?

A conceptual document that will take you through the critical role that identities play in a Fabric network (using an established PKI structure and x.509 certificates).

Discusses the role of a Membership Service Provider (MSP), which converts identities into roles in a Fabric network.

Peer nodes, owned by organizations, host the ledger and smart contracts and make up the physical structure of a Fabric network.

Learn how to download Fabric binaries and bootstrap your own sample network with a sample script. Then tear the network down and learn how it was constructed one step at a time.

Deploys a very simple network, even simpler than Build Your First Network, to use with a simple smart contract and application.

A high-level look at a sample transaction flow.

A high-level look at some of the components and concepts brought up in this introduction, as well as a few others, and a description of how they work together in a sample transaction flow.

Hyperledger Fabric Functionalities

Hyperledger Fabric is an implementation of distributed ledger technology (DLT) that delivers enterprise-ready network security, scalability, confidentiality and performance, in a modular blockchain architecture. Hyperledger Fabric delivers the following blockchain network functionalities:

Identity management

To enable permissioned networks, Hyperledger Fabric provides a membership identity service that manages user IDs and authenticates all participants on the network. Access control lists can be used to provide additional layers of permission through authorization of specific network operations. For example, a specific user ID could be permitted to invoke a chaincode application, but be blocked from deploying new chaincode.

Privacy and confidentiality

Hyperledger Fabric enables competing business interests, and any groups that require private, confidential transactions, to coexist on the same permissioned network. Private **channels** are restricted messaging paths that can be used to provide transaction privacy and confidentiality for specific subsets of network members. All data on a channel, including transaction, member and channel information, is invisible and inaccessible to any network members not explicitly granted access to that channel.

Efficient processing

Hyperledger Fabric assigns network roles by node type. To provide concurrency and parallelism to the network, transaction execution is separated from transaction ordering and commitment. Executing transactions prior to ordering them enables each peer node to process multiple transactions simultaneously. This concurrent execution increases processing efficiency on each peer and accelerates delivery of transactions to the ordering service.

In addition to enabling parallel processing, this division of labor unburdens ordering nodes from the demands of transaction execution and ledger maintenance, while peer nodes are freed from ordering (consensus) workloads. This bifurcation of roles also limits the processing required for authorization and authentication; all peer nodes do not have to trust all ordering nodes, and vice versa, so processes on one can run independently of verification by the other.

Chaincode functionality

Chaincode applications encode logic that is invoked by specific types of transactions on the channel. Chaincode that defines parameters for a change of asset ownership, for example, ensures that all transactions that transfer ownership are subject to the same rules and requirements. **System chaincode** is distinguished as chaincode that defines operating parameters for the entire channel. Lifecycle and configuration system chaincode defines the rules for the channel; endorsement and validation system chaincode defines the requirements for endorsing and validating transactions.

Modular design

Hyperledger Fabric implements a modular architecture to provide functional choice to network designers. Specific algorithms for identity, ordering (consensus) and encryption, for example, can be plugged in to any Hyperledger Fabric network. The result is a universal blockchain architecture that any industry or public domain can adopt, with the assurance that its networks will be interoperable across market, regulatory and geographic boundaries.

Hyperledger Fabric Model

This section outlines the key design features woven into Hyperledger Fabric that fulfill its promise of a comprehensive, yet customizable, enterprise blockchain solution:

  • Assets — Asset definitions enable the exchange of almost anything with monetary value over the network, from whole foods to antique cars to currency futures.

  • Chaincode — Chaincode execution is partitioned from transaction ordering, limiting the required levels of trust and verification across node types, and optimizing network scalability and performance.

  • Ledger Features — The immutable, shared ledger encodes the entire transaction history for each channel, and includes SQL-like query capability for efficient auditing and dispute resolution.

  • Privacy — Channels and private data collections enable private and confidential multi-lateral transactions that are usually required by competing businesses and regulated industries that exchange assets on a common network.

  • Security & Membership Services — Permissioned membership provides a trusted blockchain network, where participants know that all transactions can be detected and traced by authorized regulators and auditors.

  • Consensus — A unique approach to consensus enables the flexibility and scalability needed for the enterprise.

Assets

Assets can range from the tangible (real estate and hardware) to the intangible (contracts and intellectual property). Hyperledger Fabric provides the ability to modify assets using chaincode transactions.

Assets are represented in Hyperledger Fabric as a collection of key-value pairs, with state changes recorded as transactions on a channel ledger. Assets can be represented in binary and/or JSON form.

You can easily define and use assets in your Hyperledger Fabric applications using the `Hyperledger Composer <https://github.com/hyperledger/composer>`__ tool.

Chaincode

Chaincode is software defining an asset or assets, and the transaction instructions for modifying the asset(s); in other words, it's the business logic. Chaincode enforces the rules for reading or altering key-value pairs or other state database information. Chaincode functions execute against the ledger's current state database and are initiated through a transaction proposal. Chaincode execution results in a set of key-value writes (the write set) that can be submitted to the network and applied to the ledger on all peers.
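To make this concrete, here is a minimal, hypothetical Go chaincode sketch in the style of the 1.x shim API (the function and key names are illustrative). Note that reads go against the current world state, and writes only become part of a proposal's write set until the transaction is ordered and validated:

```go
package main

import (
	"fmt"

	"github.com/hyperledger/fabric/core/chaincode/shim"
	pb "github.com/hyperledger/fabric/protos/peer"
)

// AssetChaincode is a toy asset store: set and get key/value pairs.
type AssetChaincode struct{}

func (t *AssetChaincode) Init(stub shim.ChaincodeStubInterface) pb.Response {
	return shim.Success(nil)
}

// Invoke is called with a transaction proposal. PutState does not
// touch the ledger directly; it records a write that is applied only
// if the transaction is later ordered, validated and committed.
func (t *AssetChaincode) Invoke(stub shim.ChaincodeStubInterface) pb.Response {
	fn, args := stub.GetFunctionAndParameters()
	switch fn {
	case "set":
		if len(args) != 2 {
			return shim.Error("usage: set <key> <value>")
		}
		if err := stub.PutState(args[0], []byte(args[1])); err != nil {
			return shim.Error(err.Error())
		}
		return shim.Success(nil)
	case "get":
		if len(args) != 1 {
			return shim.Error("usage: get <key>")
		}
		value, err := stub.GetState(args[0])
		if err != nil {
			return shim.Error(err.Error())
		}
		return shim.Success(value)
	}
	return shim.Error(fmt.Sprintf("unknown function %q", fn))
}

func main() {
	if err := shim.Start(new(AssetChaincode)); err != nil {
		fmt.Printf("error starting chaincode: %s\n", err)
	}
}
```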

账本特性

账本是Fabric中所有状态转变的有序的、防篡改的记录。状态转换是参与方提交的链码调用(“交易”)的结果。每笔交易都会产生一组资产键值对,这些键值对在创建、更新或删除时提交到账本。

账本由一个区块链(“链”)组成,它以区块的形式存储不可变的、有序的记录,以及一个状态数据库来维护当前的fabric状态。每个通道有一个账本。每个peer为其所属的每个通道维护一份账本副本。

Fabric账本的一些特点:

  • Query and update the ledger using key-based lookups, range queries, and composite key queries

  • Read-only queries using a rich query language (if using CouchDB as the state database)

  • Read-only history queries — query the ledger history for a key, enabling data provenance scenarios

  • Transactions consist of the versions of keys/values that were read in chaincode (read set) and keys/values that were written in chaincode (write set)

  • Transactions contain signatures of every endorsing peer and are submitted to the ordering service

  • Transactions are ordered into blocks and are "delivered" from an ordering service to peers on a channel

  • Peers validate transactions against endorsement policies and enforce the policies

  • Prior to appending a block, a versioning check is performed to ensure that the states for assets that were read have not changed since chaincode execution time

  • There is immutability once a transaction is validated and committed

  • A channel's ledger contains a configuration block defining policies, access control lists, and other pertinent information

  • Channels contain Membership Service Provider instances allowing for crypto materials to be derived from different certificate authorities

See the Ledger topic for a deeper dive into databases, storage structure, and "query-ability".

Privacy

Hyperledger Fabric employs an immutable ledger on a per-channel basis, as well as chaincode that can manipulate and modify the current state of assets (i.e. update key-value pairs). A ledger exists in the scope of a channel: it can be shared across the entire network (assuming every participant is operating on one common channel), or it can be privatized to include only a specific set of participants.

In the latter scenario, these participants would create a separate channel and thereby isolate/segregate their transactions and ledger. In order to solve scenarios that want to bridge the gap between total transparency and privacy, chaincode can be installed only on the peers that need to access the asset states to perform reads and writes (in other words, if a chaincode is not installed on a peer, it will not be able to properly interface with the ledger).

When a subset of organizations on that channel need to keep their transaction data confidential, a private data collection (collection) is used to segregate this data in a private database, logically separate from the channel ledger, accessible only to the authorized subset of organizations.

Thus, channels keep transactions private from the broader network, whereas collections keep data private between subsets of organizations on the channel.

To further obfuscate the data, values within chaincode can be encrypted (in part or in total) using common cryptographic algorithms such as AES before sending transactions to the ordering service and appending blocks to the ledger. Once encrypted data has been written to the ledger, it can be decrypted only by a user in possession of the corresponding key that was used to generate the cipher text. For further details on chaincode encryption, see the :doc:`chaincode4ade` topic.
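As an illustration of the encrypt-before-write idea (a sketch using only the Go standard library, not Fabric's own encryption helpers; the key handling and value are hypothetical):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"io"
)

// encrypt seals plaintext with AES-256-GCM under a 32-byte key.
// A chaincode could apply this to a value before calling PutState,
// so that only key holders can read what lands on the ledger.
func encrypt(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, err
	}
	// Prepend the nonce so decrypt can recover it.
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

func decrypt(key, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce, ciphertext := sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ciphertext, nil)
}

func main() {
	key := make([]byte, 32) // in practice, a key managed outside the ledger
	if _, err := io.ReadFull(rand.Reader, key); err != nil {
		panic(err)
	}
	sealed, _ := encrypt(key, []byte("preferred rate: 7%"))
	plain, _ := decrypt(key, sealed)
	fmt.Printf("ledger stores ciphertext (%d bytes); key holder reads: %s\n", len(sealed), plain)
}
```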

See the Private Data topic for more details on how to achieve privacy on your blockchain network.

Security & Membership Services

Hyperledger Fabric underpins a transactional network where all participants have known identities. Public Key Infrastructure is used to generate cryptographic certificates which are tied to organizations, network components, and end users or client applications. As a result, data access control can be manipulated and governed on the broader network and on channel levels. This "permissioned" notion of Hyperledger Fabric, coupled with the existence and capabilities of channels, helps address scenarios where privacy and confidentiality are paramount concerns.

See the Membership Service Providers (MSP) topic to better understand the cryptographic implementations, and the sign, verify, authenticate approach used in Hyperledger Fabric.

Consensus

In distributed ledger technology, consensus has recently become synonymous with a specific algorithm, within a single function. However, consensus encompasses more than simply agreeing upon the order of transactions, and this differentiation is highlighted in Hyperledger Fabric through its fundamental role in the entire transaction flow, from proposal and endorsement, to ordering, validation and commitment. In a nutshell, consensus is defined as the full-circle verification of the correctness of a set of transactions comprising a block.

Consensus is ultimately achieved when the order and results of a block's transactions have met the explicit policy criteria checks. These checks and balances take place during the lifecycle of a transaction, and include the usage of endorsement policies to dictate which specific members must endorse a certain transaction class, as well as system chaincodes to ensure that these policies are enforced and upheld. Prior to commitment, the peers will employ these system chaincodes to make sure that enough endorsements are present, and that they were derived from the appropriate entities. Moreover, a versioning check will take place during which the current state of the ledger is agreed or consented upon, before any blocks containing transactions are appended to the ledger. This final check provides protection against double-spend operations and other threats that might compromise data integrity, and allows for functions to be executed against non-static variables.

In addition to the multitude of endorsement, validity and versioning checks that take place, there are also ongoing identity verifications happening in all directions of the transaction flow. Access control lists are implemented on hierarchical layers of the network (ordering service down to channels), and payloads are repeatedly signed, verified and authenticated as a transaction proposal passes through the different architectural components. To conclude, consensus is not merely limited to the agreed-upon order of a batch of transactions; rather, it is an overarching characterization that is achieved as a byproduct of the ongoing verifications that take place during a transaction's journey from proposal to commitment.

Check out the Transaction Flow diagram for a visual representation of consensus.

Blockchain network

Note: this topic describes a network that uses the previous lifecycle process, in which a chaincode is instantiated on a channel. This topic will be updated to reflect the Fabric chaincode lifecycle feature that debuted as an Alpha feature in v2.0.0.

This topic will describe, at a conceptual level, how Hyperledger Fabric allows organizations to collaborate in the formation of blockchain networks. If you're an architect, administrator or developer, you can use this topic to get a solid understanding of the major structure and process components in a Hyperledger Fabric blockchain network. This topic will use a manageable worked example that introduces all of the major components in a blockchain network. After understanding this example, you can read more detailed information about these components elsewhere in the documentation, or try building a sample network.

After reading this topic and understanding the concept of policies, you will have a solid understanding of the decisions that organizations need to make to establish the policies that control a deployed Hyperledger Fabric network. You'll also understand how organizations manage network evolution using declarative policies, a key feature of Hyperledger Fabric. In a nutshell, you'll understand the major technical components of Hyperledger Fabric and the decisions organizations need to make about them.

What is a blockchain network?

A blockchain network is a technical infrastructure that provides ledger and smart contract (chaincode) services to applications. Primarily, smart contracts are used to generate transactions which are subsequently distributed to every peer node in the network, where they are recorded on their copy of the ledger. The users of applications might be end users using client applications, or blockchain network administrators.

In most cases, multiple organizations come together as a consortium to form the network, and their permissions are determined by a set of policies that are agreed by the consortium when the network is originally configured. Moreover, network policies can change over time subject to the agreement of the organizations in the consortium, as we'll discover when we discuss the concept of modification policy.

The sample network

Before we start, let's show you what we're aiming at! Here's a diagram representing the final state of our sample network.

Don't worry that this might look complicated! As we go through this topic, we will build up the network piece by piece, so that you see how the organizations R1, R2, R3 and R4 contribute infrastructure to the network to help form it. This infrastructure implements the blockchain network, and it is governed by policies agreed by the organizations who form the network, such as who can add new organizations.

_images/network.diagram.1.pngnetwork.structure

Four organizations, R1, R2, R3 and R4, have jointly decided, and written into an agreement, that they will set up and exploit a Hyperledger Fabric network. R4 has been assigned to be the network initiator; it has been given the power to set up the initial version of the network. R4 has no intention to perform business transactions on the network. R1 and R2 have a need for a private communications within the overall network, as do R2 and R3. Organization R1 has a client application that can perform business transactions within channel C1. Organization R2 has a client application that can do similar work both in channel C1 and C2. Organization R3 has a client application that can do this on channel C2. Peer node P1 maintains a copy of the ledger L1 associated with C1. Peer node P2 maintains a copy of the ledger L1 associated with C1 and a copy of ledger L2 associated with C2. Peer node P3 maintains a copy of the ledger L2 associated with C2. The network is governed according to policy rules specified in network configuration NC4; the network is under the control of organizations R1 and R4. Channel C1 is governed according to the policy rules specified in channel configuration CC1; the channel is under the control of organizations R1 and R2. Channel C2 is governed according to the policy rules specified in channel configuration CC2; the channel is under the control of organizations R2 and R3. There is an ordering service O4 that serves as a network administration point for N, and uses the system channel. The ordering service also supports application channels C1 and C2, for the purposes of transaction ordering into blocks for distribution. Each of the four organizations has a preferred Certificate Authority.

Creating the Network

Let's start at the beginning by creating the basis for the network:

_images/network.diagram.2.pngnetwork.creation

The network is formed when an orderer is started. In our example network, N, the ordering service comprising a single node, O4, is configured according to a network configuration NC4, which gives administrative rights to organization R4. At the network level, Certificate Authority CA4 is used to dispense identities to the administrators and network nodes of the R4 organization.

We can see that the first thing that defines a network, N, is an ordering service, O4. It's helpful to think of the ordering service as the initial administration point for the network. As agreed beforehand, O4 is initially configured and started by an administrator in organization R4, and hosted in R4. The configuration NC4 contains the policies that describe the starting set of administrative capabilities for the network. Initially this is set to only give R4 rights over the network. This will change, as we'll see later, but for now R4 is the only member of the network.

Certificate Authorities

You can also see a Certificate Authority, CA4, which is used to issue certificates to administrators and network nodes. CA4 plays a key role in our network because it dispenses X.509 certificates that can be used to identify components as belonging to organization R4. Certificates issued by CAs can also be used to sign transactions to indicate that an organization endorses the transaction result, a precondition of it being accepted onto the ledger. Let's examine these two aspects of a CA in a little more detail.

Firstly, different components of the blockchain network use certificates to identify themselves to each other as being from a particular organization. That's why there is usually more than one CA supporting a blockchain network: different organizations often use different CAs. We're going to use four CAs in our network, one for each organization. Indeed, CAs are so important that Hyperledger Fabric provides you with a built-in one (called Fabric-CA) to help you get going, though in practice, organizations will choose to use their own CA.

The mapping of certificates to member organizations is achieved via a structure called a Membership Services Provider (MSP). Network configuration NC4 uses a named MSP to identify the properties of certificates dispensed by CA4 which associate certificate holders with organization R4. NC4 can then use this MSP name in policies to grant actors from R4 particular rights over network resources. An example of such a policy is to identify the administrators in R4 who can add new member organizations to the network. We don't show MSPs on these diagrams, as they would just clutter them up, but they are very important.

Secondly, we'll see later how certificates issued by CAs are at the heart of the transaction generation and validation process. Specifically, X.509 certificates are used in client application transaction proposals and smart contract transaction responses to digitally sign transactions. Subsequently, the network nodes who host copies of the ledger verify that transaction signatures are valid before accepting transactions onto the ledger.

Let's recap the basic structure of our example blockchain network. There's a resource, the network N, accessed by a set of users defined by a Certificate Authority CA4, who have a set of rights over the resources in the network N as described by policies contained inside a network configuration NC4. All of this is made real when we configure and start the ordering service node O4.

Adding Network Administrators

NC4 was initially configured to only allow R4 users administrative rights over the network. In this next phase, we will allow organization R1 users to administer the network. Let's see how the network evolves:

_images/network.diagram.2.1.pngnetwork.admins

Organization R4 updates the network configuration to make organization R1 an administrator too. After this point, R1 and R4 have equal rights over the network configuration.

We see the addition of a new organization, R1, as an administrator; R1 and R4 now have equal rights over the network. We can also see that certificate authority CA1 has been added, which can be used to identify users from the R1 organization. After this point, users from both R1 and R4 can administer the network.

Although the orderer node, O4, is running on R4's infrastructure, R1 has shared administrative rights over it, as long as it can gain network access. It means that R1 or R4 could update the network configuration NC4 to allow the R2 organization a subset of network operations. In this way, even though R4 is running the ordering service, and R1 has full administrative rights over it, R2 has limited rights to create new consortia.

In its simplest form, the ordering service is a single node in the network, and that's what you can see in the example. Ordering services are usually multi-node, and can be configured to have different nodes in different organizations. For example, we might run O4 in R4 and connect it to O2, a separate orderer node in organization R1. In this way, we would have a multi-site, multi-organization administration structure.

We'll discuss the ordering service a little more later in this topic, but for now just think of the ordering service as an administration point which provides different organizations controlled access to the network.

Defining a Consortium

Although the network can now be administered by R1 and R4, there is very little that can be done. The first thing we need to do is define a consortium. This word literally means "a group with a shared destiny", so it's an appropriate choice for a set of organizations in a blockchain network.

Let's see how a consortium is defined:

_images/network.diagram.3.pngnetwork.consortium

A network administrator defines a consortium X1 that contains two members, the organizations R1 and R2. This consortium definition is stored in the network configuration NC4, and will be used at the next stage of network development. CA1 and CA2 are the respective Certificate Authorities for these organizations.

Because of the way NC4 is configured, only R1 or R4 can create new consortia. This diagram shows the addition of a new consortium, X1, which defines R1 and R2 as its constituting organizations. We can also see that CA2 has been added to identify users from R2. Note that a consortium can have any number of organizational members; we have just shown two as it's the simplest configuration.

Why are consortia important? We can see that a consortium defines the set of organizations in the network who share a need to transact with one another, in this case R1 and R2. It really makes sense to group organizations together if they have a common goal, and that's exactly what's happening.

The network, although started by a single organization, is now controlled by a larger set of organizations. We could have started it this way, with R1, R2 and R4 having shared control, but this build-up makes it easier to understand.

We're now going to use consortium X1 to create a really important part of a Hyperledger Fabric blockchain: a channel.

Creating a channel for a consortium

Let's create this key part of the Fabric blockchain network: a channel. A channel is a primary communications mechanism by which the members of a consortium can communicate with each other. There can be multiple channels in a network, but for now, we'll start with one.

Let's see how the first channel is added to the network:

_images/network.diagram.4.pngnetwork.channel

A channel C1 has been created for R1 and R2 using the consortium definition X1. The channel is governed by a channel configuration CC1, completely separate to the network configuration. CC1 is managed by R1 and R2, who have equal rights over C1. R4 has no rights in CC1 whatsoever.

The channel C1 provides a private communications mechanism for the consortium X1. We can see that channel C1 has been connected to the ordering service O4, but that nothing else is attached to it. In the next stage of network development, we're going to connect components such as client applications and peer nodes. But at this point, a channel represents the potential for future connectivity.

Even though channel C1 is a part of the network N, it is quite distinguishable from it. Also notice that organizations R3 and R4 are not in this channel; it is for transaction processing between R1 and R2. In the previous step, we saw how R4 could grant R1 permission to create new consortia. It's helpful to mention that R4 also allowed R1 to create channels! In this diagram, it could have been organization R1 or R4 who created the channel C1. Again, note that a channel can have any number of organizations connected to it; we've shown two as it's the simplest configuration.

Again, notice how channel C1 has a completely separate configuration, CC1, to the network configuration NC4. CC1 contains the policies that govern the rights that R1 and R2 have over the channel C1, and as we've seen, R3 and R4 have no permissions in this channel. R3 and R4 can only interact with C1 if they are added by R1 or R2 to the appropriate policy in the channel configuration CC1. An example is defining who can add a new organization to the channel. Specifically, note that R4 cannot add itself to the channel C1; it must, and can only, be authorized by R1 or R2.

Why are channels so important? Channels are useful because they provide a mechanism for private communications and private data between the members of a consortium. Channels provide privacy from other channels, and from the network. Hyperledger Fabric is powerful in this regard, as it allows organizations to share infrastructure and keep it private at the same time. There's no contradiction here: different consortia within the network will have a need for different information and processes to be appropriately shared, and channels provide an efficient mechanism to do this. Channels provide an efficient sharing of infrastructure while maintaining data and communications privacy.

We can also see that once a channel has been created, it is in a very real sense "free from the network". It is only organizations that are explicitly specified in the channel configuration that have any control over it, from this time forward into the future. Likewise, any updates to network configuration NC4 from this time onwards will have no direct effect on channel configuration CC1; for example, if consortium definition X1 is changed, it will not affect the members of channel C1. Channels are therefore useful because they allow private communications between the organizations constituting the channel. Moreover, the data in a channel is completely isolated from the rest of the network, including other channels.

As an aside, there is also a special system channel defined for use by the ordering service. It behaves in exactly the same way as a regular channel, which are sometimes called application channels for this reason. We don't normally need to worry about this channel, but we'll discuss it a little more later in this topic.

Peers and Ledgers

Let's now start to use the channel to connect the blockchain network and organizational components together. In the next stage of network development, we can see that our network N has just acquired two new components, namely a peer node P1 and a ledger instance, L1.

_images/network.diagram.5.pngnetwork.peersledger

A peer node P1 has joined the channel C1. P1 physically hosts a copy of the ledger L1. P1 and O4 can communicate with each other using channel C1.

Peer nodes are the network components where copies of the blockchain ledger are hosted! At last, we're starting to see some recognizable blockchain components! P1's purpose in the network is purely to host a copy of the ledger L1 for others to access. We can think of L1 as being physically hosted on P1, but logically hosted on the channel C1. We'll see this idea more clearly when we add more peers to the channel.

A key part of P1's configuration is an X.509 identity issued by CA1, which associates P1 with organization R1. Once P1 is started, it can join channel C1 using the orderer O4. When O4 receives this join request, it uses the channel configuration CC1 to determine P1's permissions on this channel. For example, CC1 determines whether P1 can read and/or write information to the ledger L1.

Notice how peers are joined to channels by the organizations that own them, and though we've only added one peer node, we'll see how there can be multiple peer nodes on multiple channels within the network. We'll see the different roles that peers can take a little later.

Applications and Smart Contract chaincode

Now that the channel C1 has a ledger on it, we can start connecting client applications to consume some of the services provided by the workhorse of the ledger, the peer!

Notice how the network has grown:

_images/network.diagram.6.pngnetwork.appsmartcontract

A smart contract S5 has been installed onto P1. Client application A1 in organization R1 can use S5 to access the ledger via peer node P1. A1, P1 and O4 are all joined to channel C1, i.e. they can all make use of the communication facilities provided by that channel.

In the next stage of network development, we can see that client application A1 can use channel C1 to connect to specific network resources; in this case, A1 can connect to both peer node P1 and orderer node O4. Again, see how channels are central to the communication between network and organizational components. Just like peers and orderers, a client application will have an identity that associates it with an organization. In our example, client application A1 is associated with organization R1; and although it is outside the Fabric blockchain network, it is connected to it via the channel C1.

It might now appear that A1 can access the ledger L1 directly via P1, but in fact, all access is managed via a special program called a smart contract chaincode, S5. Think of S5 as defining all the common access patterns to the ledger; S5 provides a well-defined set of ways by which the ledger L1 can be queried or updated. In short, client application A1 has to go through smart contract S5 to get to ledger L1!

Smart contract chaincodes can be created by application developers in each organization to implement a business process shared by the consortium members. Smart contracts are used to help generate transactions which can be subsequently distributed to every node in the network. We'll discuss this idea a little later; it'll be easier to understand when the network is bigger. For now, the important thing to understand is that to get to this point, two operations must have been performed on the smart contract: it must have been installed, and then instantiated.

Installing a smart contract

After a smart contract S5 has been developed, an administrator in organization R1 must install it onto peer node P1. This is a straightforward operation; after it has occurred, P1 has full knowledge of S5. Specifically, P1 can see the implementation logic of S5, the program code that it uses to access the ledger L1. We contrast this to the S5 interface, which merely describes the inputs and outputs of S5, without regard to its implementation.

When an organization has multiple peers in a channel, it can choose the peers upon which it installs smart contracts; it does not need to install a smart contract on every peer.

Instantiating a smart contract

However, just because P1 has installed S5, the other components connected to channel C1 are unaware of it; it must first be instantiated on channel C1. In our example, which only has a single peer node P1, an administrator in organization R1 must instantiate S5 on channel C1 using P1. After instantiation, every component on channel C1 is aware of the existence of S5; and in our example it means that S5 can now be invoked by client application A1!

Note that although every component on the channel can now access S5, they are not able to see its program logic. This remains private to those nodes who have installed it; in our example that means P1. Conceptually, this means that it's the smart contract interface that is instantiated, in contrast to the smart contract implementation that is installed. To reinforce this idea: installing a smart contract shows how we think of it being physically hosted on a peer, whereas instantiating a smart contract shows how we consider it logically hosted by the channel.

Endorsement policy

The most important piece of additional information supplied at instantiation is an endorsement policy. It describes which organizations must approve transactions before they will be accepted by other organizations onto their copy of the ledger. In our sample network, transactions can only be accepted onto ledger L1 if R1 or R2 endorse them.

The act of instantiation places the endorsement policy in channel configuration CC1; it enables it to be accessed by any member of the channel. You can read more about endorsement policies in the transaction flow topic.
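For illustration only: in Fabric 1.x, an endorsement policy is typically supplied at instantiation time as a boolean expression over MSP principals. Assuming the hypothetical MSP names Org1MSP and Org2MSP for R1 and R2, a policy matching our example, where either organization's endorsement is sufficient, could look like this:

```
OR('Org1MSP.member', 'Org2MSP.member')
```

Requiring endorsement from both organizations instead would be written as AND('Org1MSP.member', 'Org2MSP.member').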

Invoking a smart contract

Once a smart contract has been installed on a peer node and instantiated on a channel, it can be invoked by a client application. Client applications do this by sending transaction proposals to peers owned by the organizations specified by the smart contract endorsement policy. The transaction proposal serves as input to the smart contract, which uses it to generate an endorsed transaction response, which is returned by the peer node to the client application.

It's these transaction responses that are packaged together with the transaction proposal to form a fully endorsed transaction, which can be distributed to the entire network. We'll look at this in more detail later. For now, it's enough to understand how applications invoke smart contracts to generate endorsed transactions.

By this stage in network development, we can see that organization R1 is fully participating in the network. Its applications, starting with A1, can access the ledger L1 via smart contract S5, to generate transactions that will be endorsed by R1, and therefore accepted onto the ledger because they conform to the endorsement policy.

Network completed

Recall that our objective was to create a channel for consortium X1: organizations R1 and R2. This next phase of network development sees organization R2 add its infrastructure to the network.

Let's see how the network has evolved:

_images/network.diagram.7.pngnetwork.grow

The network has grown through the addition of infrastructure from organization R2. Specifically, R2 has added peer node P2, which hosts a copy of ledger L1, and chaincode S5. P2 has also joined channel C1, as has application A2. A2 and P2 are identified using certificates from CA2. All of this means that both applications A1 and A2 can invoke S5 on C1, either using peer node P1 or P2.

We can see that organization R2 has added a peer node, P2, on channel C1. P2 also hosts a copy of the ledger L1 and smart contract S5. We can see that R2 has also added client application A2, which can connect to the network via channel C1. To achieve this, an administrator in organization R2 has created peer node P2 and joined it to channel C1, in the same way as an administrator in R1.

We have created our first operational network! At this stage in network development, we have a channel in which organizations R1 and R2 can fully transact with each other. Specifically, this means that applications A1 and A2 can generate transactions using smart contract S5 and ledger L1 on channel C1.

Generating and accepting transactions

In contrast to peer nodes, which always host a copy of the ledger, we see that there are two different kinds of peer nodes: those which host smart contracts and those which do not. In our network, every peer hosts a copy of the smart contract, but in larger networks, there will be many more peer nodes that do not host a copy of the smart contract. A peer can only run a smart contract if it is installed on it, but it can know about the interface of a smart contract by being connected to a channel.

You should not think of peer nodes which do not have smart contracts installed as being somehow inferior. It's more the case that peer nodes with smart contracts have a special power: to help generate transactions. Note that all peer nodes can validate and subsequently accept or reject transactions onto their copy of the ledger L1. However, only peer nodes with a smart contract installed can take part in the process of transaction endorsement, which is central to the generation of valid transactions.

We don't need to worry about the exact details of how transactions are generated, distributed and accepted in this topic; it is sufficient to understand that we have a blockchain network where organizations R1 and R2 can share information and processes as ledger-captured transactions. We'll learn a lot more about transactions, ledgers, and smart contracts in other topics.

Types of peers

In Hyperledger Fabric, while all peers are the same, they can assume multiple roles depending on how the network is configured. We now have enough understanding of a typical network topology to describe these roles.

  • Committing peer. Every peer node in a channel is a committing peer. It receives blocks of generated transactions, which are subsequently validated before they are committed to the peer node’s copy of the ledger as an append operation.

  • Endorsing peer. Every peer with a smart contract can be an endorsing peer if it has a smart contract installed. However, to actually be an endorsing peer, the smart contract on the peer must be used by a client application to generate a digitally signed transaction response. The term endorsing peer is an explicit reference to this fact.

    An endorsement policy for a smart contract identifies the organizations whose peer should digitally sign a generated transaction before it can be accepted onto a committing peer’s copy of the ledger.

These are the two major types of peer; there are two other roles a peer can adopt:

  • Leader peer. When an organization has multiple peers in a channel, a leader peer is a node which takes responsibility for distributing transactions from the orderer to the other committing peers in the organization. A peer can choose to participate in static or dynamic leadership selection.

    It is helpful, therefore to think of two sets of peers from leadership perspective – those that have static leader selection, and those with dynamic leader selection. For the static set, zero or more peers can be configured as leaders. For the dynamic set, one peer will be elected leader by the set. Moreover, in the dynamic set, if a leader peer fails, then the remaining peers will re-elect a leader.

    It means that an organization’s peers can have one or more leaders connected to the ordering service. This can help to improve resilience and scalability in large networks which process high volumes of transactions.

  • Anchor peer. If a peer needs to communicate with a peer in another organization, then it can use one of the anchor peers defined in the channel configuration for that organization. An organization can have zero or more anchor peers defined for it, and an anchor peer can help with many different cross-organization communication scenarios.

Note that a peer can be a committing peer, endorsing peer, leader peer and anchor peer all at the same time! Only the anchor peer is optional; for all practical purposes there will always be a leader peer, at least one endorsing peer, and at least one committing peer.

Install not instantiate

Similarly to organization R1, organization R2 must install smart contract S5 onto its peer node, P2. That's obvious: if applications A1 or A2 wish to use S5 on peer node P2 to generate transactions, it must first be present; installation is the mechanism by which this happens. At this point, peer node P2 has a physical copy of the smart contract and the ledger; like P1, it can both generate and accept transactions onto its copy of ledger L1.

However, in contrast to organization R1, organization R2 does not need to instantiate smart contract S5 on channel C1. That's because S5 has already been instantiated on the channel by organization R1. Instantiation only needs to happen once; any peer which subsequently joins the channel knows that smart contract S5 is available to the channel. This reflects the fact that ledger L1 and the smart contract really exist in a physical manner on the peer nodes, and in a logical manner on the channel; R2 is merely adding another physical instance of L1 and S5 to the network.

In our network, we can see that channel C1 connects two client applications, two peer nodes and an ordering service. Since there is only one channel, there is only one logical ledger with which these components interact. Peer nodes P1 and P2 have identical copies of ledger L1. Copies of smart contract S5 will usually be identically implemented using the same programming language, but if not, they must be semantically equivalent.

We can see that the careful addition of peers to the network can help support increased throughput, stability, and resilience. For example, more peers in a network will allow more applications to connect to it, and multiple peers in an organization will provide extra resilience in the case of planned or unplanned outages.

It all means that it is possible to configure sophisticated topologies which support a variety of operational goals; there is no theoretical limit to how big a network can get. Moreover, the technical mechanism by which peers within an individual organization efficiently discover and communicate with each other (the gossip protocol) will accommodate a large number of peer nodes in support of such topologies.

The careful use of network and channel policies allows even large networks to be well-governed. Organizations are free to add peer nodes to the network so long as they conform to the policies agreed by the network. Network and channel policies create the balance between autonomy and control which characterizes a de-centralized network.

Simplifying the visual vocabulary

We're now going to simplify the visual vocabulary used to represent our sample blockchain network. As the size of the network grows, the lines initially used to help us understand channels will become cumbersome. Imagine how complicated our diagram would be if we added another peer or client application, or another channel?

That's what we're going to do in a minute, so before we do, let's simplify the visual vocabulary a little. Here's a simplified representation of the network we've developed so far:

_images/network.diagram.8.pngnetwork.vocabulary

The diagram shows the facts relating to channel C1 in the network N as follows: Client applications A1 and A2 can use channel C1 for communication with peers P1 and P2, and orderer O4. Peer nodes P1 and P2 can use the communication services of channel C1. Ordering service O4 can use the communication services of channel C1. Channel configuration CC1 applies to channel C1.

Note that the network diagram has been simplified by replacing channel lines with connection points, shown as blue circles which include the channel number. No information has been lost. This representation is more scalable because it eliminates crossing lines. This allows us to more clearly represent larger networks. We've achieved this simplification by focusing on the connection points between components and a channel, rather than the channel itself.

Adding another consortium definition

In this next phase of network development, we introduce organization R3. We're going to give organizations R2 and R3 a separate application channel which allows them to transact with each other. This application channel will be completely separate to the one previously defined, so that R2 and R3 transactions can be kept private to them.

Let's return to the network level, and define a new consortium, X2, for R2 and R3:

_images/network.diagram.9.pngnetwork.consortium2

A network administrator from organization R1 or R4 has added a new consortium definition, X2, which includes organizations R2 and R3. This will be used to define a new channel for X2.

Notice that the network now has two consortia defined: X1 for organizations R1 and R2, and X2 for organizations R2 and R3. Consortium X2 has been introduced in order to be able to create a new channel for R2 and R3.

A new channel can only be created by those organizations specifically identified in the network configuration policy, NC4, as having the appropriate rights to do so, i.e. R1 or R4. This is an example of a policy which separates organizations that can manage resources at the network level from those who can manage resources at the channel level. Seeing these policies at work helps us understand why Hyperledger Fabric has a sophisticated tiered policy structure.

In practice, the consortium definition X2 has been added to the network configuration NC4. We discuss the exact mechanics of this operation elsewhere in the documentation.

Adding a new channel

Let's now use this new consortium definition, X2, to create a new channel, C2. To help reinforce your understanding of the simpler channel notation, we've used both visual styles: channel C1 is represented with blue circular end points, whereas channel C2 is represented with red connecting lines:

_images/network.diagram.10.pngnetwork.channel2

A new channel C2 has been created for R2 and R3 using consortium definition X2. The channel has a channel configuration CC2, completely separate to the network configuration NC4, and the channel configuration CC1. Channel C2 is managed by R2 and R3, who have equal rights over C2 as defined by a policy in CC2. R1 and R4 have no rights defined in CC2 whatsoever.

The channel C2 provides a private communications mechanism for the consortium X2. Again, notice how organizations united in a consortium are what form channels. The channel configuration CC2 now contains the policies that govern channel resources, assigning management rights over channel C2 to organizations R2 and R3. It is managed exclusively by R2 and R3; R1 and R4 have no power in channel C2. For example, channel configuration CC2 can subsequently be updated to add organizations to support network growth, but this can only be done by R2 or R3.

Note how the channel configurations CC1 and CC2 remain completely separate from each other, and completely separate from the network configuration, NC4. Again we're seeing the de-centralized nature of a Hyperledger Fabric network; once channel C2 has been created, it is managed by organizations R2 and R3 independently of other network elements. Channel policies always remain separate from each other, and can only be changed by the organizations authorized to do so in the channel.

As the network and channels evolve, so will the network and channel configurations. There is a process by which this is accomplished in a controlled manner, involving configuration transactions which capture the changes to these configurations. Every configuration change results in a new configuration block transaction being created, and later in this topic, we'll see how these blocks are validated and accepted to create updated network and channel configurations respectively.

Network and channel configurations

Throughout our sample network, we've seen the importance of network and channel configurations. These configurations are important because they encapsulate the policies agreed by the network members, which provide a shared reference for controlling access to network resources. Network and channel configurations also contain facts about the network and channel composition, such as the names of consortia and their organizations.

For example, when the network is first formed using the ordering service node O4, its behaviour is governed by the network configuration NC4. The initial configuration of NC4 only contains policies that allow organization R4 to manage network resources. NC4 is subsequently updated to also allow R1 to manage network resources. Once this change is made, any administrator from organization R1 or R4 that connects to O4 will have network administration rights, because that is what the policy in the network configuration NC4 permits. Internally, each node in the ordering service records each channel in the network configuration, so that there is a record at the network level of each channel created.

It means that although it is the ordering service node O4 that is the actor that created consortia X1 and X2 and channels C1 and C2, the intelligence of the network is contained in the network configuration NC4 that O4 is obeying. As long as O4 behaves as a good actor, and correctly implements the policies defined in NC4 whenever it is dealing with network resources, our network will behave as all organizations have agreed. In many ways, NC4 can be considered more important than O4 because, ultimately, it controls network access.

The same principles apply for channel configurations with respect to peers. In our network, P1 and P2 are likewise good actors. When peer nodes P1 and P2 are interacting with client applications A1 or A2, they are each using the policies defined within channel configuration CC1 to control access to the channel C1 resources.

For example, if A1 wants to access the smart contract chaincode S5 on peer nodes P1 or P2, each peer node uses its copy of CC1 to determine the operations that A1 can perform. For example, A1 may be permitted to read or write data from the ledger L1 according to policies defined in CC1. We'll see later the same pattern for actors in channel C2 and its channel configuration CC2. Again, we can see that while peers and applications are critical actors in the network, their behaviour in a channel is dictated more by the channel configuration policy than by any other factor.

Finally, it is helpful to understand how network and channel configurations are physically realized. We can see that network and channel configurations are logically singular: there is one for the network, and one for each channel. This is important; every component that accesses the network or the channel must have a shared understanding of the permissions granted to different organizations.

Even though there is logically a single configuration, it is actually replicated and kept consistent by every node that forms the network or channel. For example, in our network, peer nodes P1 and P2 both have a copy of channel configuration CC1, and by the time the network is fully complete, peer nodes P2 and P3 will both have a copy of channel configuration CC2. Similarly, the ordering service node O4 has a copy of the network configuration, but in a multi-node configuration, every ordering service node will have its own copy of the network configuration.

Both the network and channel configurations are kept consistent using the same blockchain technology that is used for user transactions, but for configuration transactions. To change a network or channel configuration, an administrator must submit a configuration transaction to change the network or channel configuration. It must be signed by the organizations identified in the appropriate policy as being responsible for configuration change. This policy is called the mod_policy, and we'll discuss it later.

Indeed, the ordering service nodes operate a mini-blockchain, connected via the system channel we mentioned earlier. Using the system channel, ordering service nodes distribute network configuration transactions. These transactions are used to co-operatively maintain a consistent copy of the network configuration at each ordering service node. In a similar way, peer nodes in an application channel can distribute channel configuration transactions. Likewise, these transactions are used to maintain a consistent copy of the channel configuration at each peer node.

This balance between objects that are logically singular while being physically distributed is a common pattern in Hyperledger Fabric. Objects like the network configuration, which is logically single, turn out to be physically replicated among a set of ordering service nodes, for example. We also see it with channel configurations, ledgers, and to some extent smart contracts, which are installed in multiple places but whose interfaces exist logically at the channel level. It's a pattern you see repeated in Hyperledger Fabric, and one that enables Hyperledger Fabric to be both de-centralized and yet manageable at the same time.

Adding another peer

Now that organization R3 is able to fully participate in channel C2, let's add its infrastructure components to the channel. Rather than do this one component at a time, we're going to add a peer, its local copy of a ledger, a smart contract and a client application all at once!

Let's see the network with organization R3's components added:

_images/network.diagram.11.pngnetwork.peer2

The diagram shows the facts relating to channels C1 and C2 in the network N as follows: Client applications A1 and A2 can use channel C1 for communication with peers P1 and P2, and ordering service O4; client application A3 can use channel C2 for communication with peer P3 and ordering service O4. Ordering service O4 can use the communication services of channels C1 and C2. Channel configuration CC1 applies to channel C1, CC2 applies to channel C2.

First of all, notice that because peer node P3 is connected to channel C2, it has a different ledger, L2, to those peer nodes using channel C1. The ledger L2 is effectively scoped to channel C2. The ledger L1 is completely separate; it is scoped to channel C1. This makes sense: the purpose of the channel C2 is to provide private communications between the members of the consortium X2, and the ledger L2 is the private store for their transactions.

In a similar way, the smart contract S6, installed on peer node P3 and instantiated on channel C2, is used to provide controlled access to ledger L2. Application A3 can now use channel C2 to invoke the services provided by smart contract S6, to generate transactions that can be accepted onto every copy of the ledger L2 in the network.

At this point in time, we have a single network that has two completely separate channels defined within it. These channels provide independently managed facilities for organizations to transact with each other. Again, this is de-centralization at work; we have a balance between control and autonomy. This is achieved through policies which are applied to channels which are controlled by, and affect, different organizations.

Joining a peer to multiple channels

In this final stage of network development, let's return our focus to organization R2. We can exploit the fact that R2 is a member of both consortia X1 and X2 by joining it to multiple channels:

_images/network.diagram.12.pngnetwork.multichannel

The diagram shows the facts relating to channels C1 and C2 in the network N as follows: Client application A1 can use channel C1 for communication with peers P1 and P2, and ordering service O4; client application A2 can use channel C1 for communication with peers P1 and P2, and channel C2 for communication with peers P2 and P3, and ordering service O4; client application A3 can use channel C2 for communication with peers P3 and P2, and ordering service O4. Ordering service O4 can use the communication services of channels C1 and C2. Channel configuration CC1 applies to channel C1, CC2 applies to channel C2.

We can see that R2 is a special organization in the network, because it is the only organization that is a member of both application channels! It is able to transact with organization R1 on channel C1, while at the same time it can also transact with organization R3 on a different channel, C2.

Notice how peer node P2 has smart contract S5 installed for channel C1 and smart contract S6 installed for channel C2. Peer node P2 is a full member of both channels at the same time, via different smart contracts for different ledgers.

This is a very powerful concept: channels provide both a mechanism for the separation of organizations, and a mechanism for collaboration between organizations. All the while, this infrastructure is provided by, and shared between, a set of independent organizations.

It is also important to note that peer node P2's behaviour is controlled very differently depending upon the channel in which it is transacting. Specifically, the policies contained in channel configuration CC1 dictate the operations available to P2 when it is transacting in channel C1, whereas it is the policies in channel configuration CC2 that control P2's behaviour in channel C2.

Again, this is desirable: R2 and R1 agreed the rules for channel C1, whereas R2 and R3 agreed the rules for channel C2. These rules were captured in the respective channel policies; they can, and must, be used by every component in a channel to enforce correct behaviour, as agreed.

Similarly, we can see that client application A2 is now able to transact on channels C1 and C2. And likewise, it too will be governed by the policies in the appropriate channel configurations. As an aside, note that client application A2 and peer node P2 are using a mixed visual vocabulary, both lines and connections. You can see that they are equivalent; they are visual synonyms.

The ordering service

The observant reader may notice that the ordering service appears to be a centralized component; it was used to create the network initially, and connects to every channel in the network. Even though we added R1 and R4 to the network configuration policy NC4 which controls the orderer, the node was running on R4's infrastructure. In a world of de-centralization, this looks wrong!

Don't worry! Our example network showed the simplest ordering service configuration to help you understand the idea of a network administration point. In fact, the ordering service can itself be completely de-centralized! We mentioned earlier that an ordering service could be comprised of many individual nodes owned by different organizations, so let's see how that would be done in our sample network.

Let's have a look at a more realistic ordering service node configuration:

_images/network.diagram.15.pngnetwork.finalnetwork2

A multi-organization ordering service. The ordering service comprises ordering service nodes O1 and O4. O1 is provided by organization R1, and node O4 is provided by organization R4. The network configuration NC4 defines network resource permissions for actors from both organizations R1 and R4.

We can see that this ordering service is completely de-centralized: it runs in organization R1 and it runs in organization R4. The network configuration policy, NC4, permits R1 and R4 equal rights over network resources. Client applications and peer nodes from organizations R1 and R4 can manage network resources by connecting to either node O1 or node O4, because both nodes behave the same way, as defined by the policies in network configuration NC4. In practice, actors from a particular organization tend to use infrastructure provided by their home organization, but that's certainly not always the case.

De-centralized transaction distribution

As well as being the management point for the network, the ordering service also provides another key facility: it is the distribution point for transactions. The ordering service is the component which gathers endorsed transactions from applications and orders them into transaction blocks, which are subsequently distributed to every peer node in the channel. At each of these committing peers, transactions are recorded, whether valid or invalid, and their local copy of the ledger updated appropriately.

Notice how the ordering service node O4 performs a very different role for the channel C1 than it does for the network N. When acting at the channel level, O4's role is to gather transactions and distribute blocks inside channel C1. It does this according to the policies defined in channel configuration CC1. In contrast, when acting at the network level, O4's role is to provide a management point for network resources according to the policies defined in network configuration NC4. Notice again how these roles are defined by different policies within the channel and network configurations respectively. This should reinforce the importance of declarative, policy-based configuration in Hyperledger Fabric. Policies both define, and are used to control, the agreed behaviours by each and every member of a consortium.

We can see that the ordering service, like the other components in Hyperledger Fabric, is a fully de-centralized component. Whether acting as a network management point, or as a distributor of blocks in a channel, its nodes can be distributed as required throughout the multiple organizations in a network.

Changing policy

We've seen throughout our exploration of the sample network how important policies are to control the behaviour of the actors in the system. We've only discussed a few of the available policies, but there are many that can be declaratively defined to control every aspect of behaviour. These individual policies are discussed elsewhere in the documentation.

Most importantly of all, Hyperledger Fabric provides a uniquely powerful policy which allows network and channel administrators to manage policy change itself! The underlying philosophy is that policy change is a constant, whether it occurs within or between organizations, or whether it is imposed by external regulators. For example, new organizations may join a channel, or existing organizations may have their permissions increased or decreased. Let's investigate a little more how change policy is implemented in Hyperledger Fabric.

The key point of understanding is that policy change is managed by a policy within the policy itself. The modification policy, or mod_policy for short, is a first-class policy within a network or channel configuration that manages change. Let's give two brief examples of how we've already used mod_policy to manage change in our network!

The first example was when the network was initially set up. At this time, only organization R4 was allowed to manage the network. In practice, this was achieved by making R4 the only organization defined in the network configuration NC4 with permissions to network resources. Moreover, the mod_policy for NC4 only mentioned organization R4: only R4 was allowed to change this configuration.

We then evolved the network N to also allow organization R1 to administer the network. R4 did this by adding R1 to the policies for channel creation and consortium creation. Because of this change, R1 was able to define the consortia X1 and X2, and create the channels C1 and C2. R1 had equal administrative rights over the channel and consortium policies in the network configuration.

R4 however, could grant even more power over the network configuration to R1! R4 could add R1 to the mod_policy, such that R1 would be able to manage change of the network policy too.

This second power is much more powerful than the first, because now R1 has full control over the network configuration NC4! This means that R1 can, in principle, remove R4's management rights from the network. In practice, R4 would configure the mod_policy such that R4 would need to also approve the change, or that all organizations in the mod_policy would have to approve the change. There's lots of flexibility to make the mod_policy as sophisticated as it needs to be to support whatever change process is required.

This is mod_policy at work: it has allowed the graceful evolution of a basic configuration into a sophisticated one. All the time, this has occurred with the agreement of all organizations involved. The mod_policy behaves like every other policy inside a network or channel configuration; it defines a set of organizations that are allowed to change the mod_policy itself.

We've only scratched the surface of the power of policies, and of mod_policy in particular, in this subsection. It is discussed at much more length in the policy topic, but for now let's return to our finished network!

Network fully formed

Let's recap what our network looks like using a consistent visual vocabulary. We've re-organized it slightly using our more compact visual syntax, because it better accommodates larger topologies:

_images/network.diagram.14.pngnetwork.finalnetwork2

In this diagram we see that the Fabric blockchain network consists of two application channels and one ordering channel. The organizations R1 and R4 are responsible for the ordering channel, R1 and R2 are responsible for the blue application channel, while R2 and R3 are responsible for the red application channel. Client application A1 is an element of organization R1, and CA1 is its certificate authority. Note that peer P2 of organization R2 can use the communication facilities of the blue and the red application channels. Each application channel has its own channel configuration, in this case CC1 and CC2. The channel configuration of the system channel is part of the network configuration, NC4.

We're at the end of our conceptual journey to build a sample Hyperledger Fabric blockchain network. We've created a four-organization network with two channels and three peer nodes, with two smart contracts and an ordering service. It is supported by four certificate authorities. It provides ledger and smart contract services to three client applications, who can interact with it via the two channels. Take a moment to look through the details of the network in the diagram, and feel free to read back through the topic to reinforce your knowledge, or go to a more detailed topic.

Summary of network components

Here's a quick summary of the network components we've discussed:

Network summary

In this topic, we've seen how different organizations share their infrastructure to provide an integrated Hyperledger Fabric blockchain network. We've seen how the collective infrastructure can be organized into channels that provide private communications mechanisms that are independently managed. We've seen how actors such as client applications, administrators, peers and orderers are identified as being from different organizations by their use of certificates from their respective certificate authorities. And in turn, we've seen the importance of policy to define the agreed permissions that these organizational actors have over network and channel resources.

Identity

What is an Identity?

The different actors in a blockchain network include peers, orderers, client applications, administrators and more. Each of these actors, active elements inside or outside a network able to consume services, has a digital identity encapsulated in an X.509 digital certificate. These identities really matter because they determine the exact permissions over resources and access to information that actors have in a blockchain network.

A digital identity furthermore has some additional attributes that Fabric uses to determine permissions, and it gives the union of an identity and the associated attributes a special name: principal. Principals are just like userIDs or groupIDs, but a little more flexible because they can include a wide range of properties of an actor's identity, such as the actor's organization, organizational unit, role, or even the actor's specific identity. When we talk about principals, they are the properties which determine their permissions.

For an identity to be verifiable, it must come from a trusted authority. A membership service provider (MSP) is how this is achieved in Fabric. More specifically, an MSP is a component that defines the rules that govern the valid identities for an organization. The default MSP implementation in Fabric uses X.509 certificates as identities, adopting a traditional Public Key Infrastructure (PKI) hierarchical model (more on PKI later).

A Simple Scenario to Explain the Use of an Identity

Imagine that you visit a supermarket to buy some groceries. At the checkout you see a sign that says that only Visa, Mastercard and AMEX cards are accepted. If you try to pay with a different card, let's call it an "ImagineCard", it doesn't matter whether the card is authentic and you have sufficient funds in your account. It will not be accepted.

_images/identity.diagram.6.pngScenario

Having a valid credit card is not enough; it must also be accepted by the store! PKIs and MSPs work together in the same way: a PKI provides a list of identities, and an MSP says which of these are members of a given organization that participates in the network.

PKI certificate authorities and MSPs provide a similar combination of functionalities. A PKI is like a card provider; it dispenses many different types of verifiable identities. An MSP, on the other hand, is like the list of card providers accepted by the store, determining which identities are the trusted members (actors) of the store payment network. MSPs turn verifiable identities into the members of a blockchain network.

Let's drill into these concepts in a little more detail.

What are PKIs?

A public key infrastructure (PKI) is a collection of internet technologies that provides secure communications in a network. It's PKI that puts the S in HTTPS, and if you're reading this documentation on a web browser, you're probably using a PKI to make sure it comes from a verified source.

_images/identity.diagram.7.pngPKI

The elements of Public Key Infrastructure (PKI). A PKI is comprised of Certificate Authorities who issue digital certificates to parties (e.g., users of a service, service providers), who then use them to authenticate themselves in the messages they exchange with their environment. A CA's Certificate Revocation List (CRL) constitutes a reference for the certificates that are no longer valid. Revocation of a certificate can happen for a number of reasons. For example, a certificate may be revoked because the cryptographic private material associated with the certificate has been exposed.

Although a blockchain network is more than a communications network, it relies on the PKI standard to ensure secure communication between various network participants, and to ensure that messages posted on the blockchain are properly authenticated. It's therefore important to understand the basics of PKI and then why MSPs are so important.

There are four key elements to PKI:

  • Digital Certificates

  • Public and Private Keys

  • Certificate Authorities

  • Certificate Revocation Lists

Let's quickly describe these PKI basics, and if you want to know more details, Wikipedia is a good place to start.

Digital Certificates

A digital certificate is a document which holds a set of attributes relating to the holder of the certificate. The most common type of certificate is the one compliant with the X.509 standard, which allows the encoding of a party's identifying details in its structure.

For example, Mary Morris in the Manufacturing Division of Mitchell Cars in Detroit, Michigan might have a digital certificate with a SUBJECT attribute of C=US, ST=Michigan, L=Detroit, O=Mitchell Cars, OU=Manufacturing, CN=Mary Morris /UID=123456. Mary's certificate is similar to her government identity card; it provides information about Mary which she can use to prove key facts about her. There are many other attributes in an X.509 certificate, but let's concentrate on just these for now.

_images/identity.diagram.8.pngDigitalCertificate

A digital certificate describing a party called Mary Morris. Mary is the SUBJECT of the certificate, and the highlighted SUBJECT text shows key facts about Mary. The certificate also holds many more pieces of information, as you can see. Most importantly, Mary's public key is distributed within her certificate, whereas her private signing key is not. This signing key must be kept private.

What is important is that all of Mary's attributes can be recorded using a mathematical technique called cryptography (literally, "secret writing") so that tampering will invalidate the certificate. Cryptography allows Mary to present her certificate to others to prove her identity, so long as the other party trusts the certificate issuer, known as a Certificate Authority (CA). As long as the CA keeps certain cryptographic information securely (specifically, its own private signing key), anyone reading the certificate can be sure that the information about Mary has not been tampered with; it will always have those particular attributes for Mary Morris. Think of Mary's X.509 certificate as a digital identity card that is impossible to change.
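As a small illustration (this is the Go standard library, not a Fabric API), the sketch below creates a self-signed X.509 certificate with a SUBJECT like Mary's, then parses the attributes back out, as a verifier would:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
)

func main() {
	// Generate a key pair and self-sign a certificate whose SUBJECT
	// mirrors the Mary Morris example from the text.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	template := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject: pkix.Name{
			Country:            []string{"US"},
			Province:           []string{"Michigan"},
			Locality:           []string{"Detroit"},
			Organization:       []string{"Mitchell Cars"},
			OrganizationalUnit: []string{"Manufacturing"},
			CommonName:         "Mary Morris",
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, template, template, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}

	// Parse it back and read the identifying attributes.
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	fmt.Println("SUBJECT:", cert.Subject.String()) // CN=Mary Morris,OU=Manufacturing,...
}
```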

Authentication, Public keys, and Private Keys

Authentication and message integrity are important concepts in secure communications. Authentication requires that parties who exchange messages are assured of the identity that created a specific message. For a message to have "integrity" means that it cannot have been modified during its transmission. For example, you might want to be sure you're communicating with the real Mary Morris rather than an impersonator. Or if Mary has sent you a message, you might want to be sure that it hasn't been tampered with by anyone else during transmission.

Traditional authentication mechanisms rely on digital signatures that, as the name suggests, allow a party to digitally sign its messages. Digital signatures also provide guarantees on the integrity of the signed message.

Technically speaking, digital signature mechanisms require each party to hold two cryptographically connected keys: a public key that is made widely available and acts as an authentication anchor, and a private key that is used to produce digital signatures on messages.

Recipients of digitally signed messages can verify the origin and integrity of a received message by checking that the attached signature is valid under the public key of the expected sender. The unique relationship between a private key and the respective public key is the cryptographic magic that makes secure communications possible. The unique mathematical relationship between the keys is such that the private key can be used to produce a signature on a message that only the corresponding public key can match, and only on the same message.
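A minimal sketch of sign-and-verify using the Go standard library (ECDSA over P-256; the message is hypothetical):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

func main() {
	// Mary holds a key pair: the private key signs, the public key verifies.
	private, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	message := []byte("transfer CAR1 to John")
	digest := sha256.Sum256(message)

	// Sign with the private key (kept secret by Mary).
	sig, err := ecdsa.SignASN1(rand.Reader, private, digest[:])
	if err != nil {
		panic(err)
	}

	// Anyone holding Mary's public key can check origin and integrity.
	fmt.Println("signature valid:", ecdsa.VerifyASN1(&private.PublicKey, digest[:], sig))

	// Tampering with the message breaks verification.
	tampered := sha256.Sum256([]byte("transfer CAR1 to Eve"))
	fmt.Println("tampered valid:", ecdsa.VerifyASN1(&private.PublicKey, tampered[:], sig))
}
```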

_images/identity.diagram.9.pngAuthenticationKeys

In the example above, Mary uses her private key to sign the message. The signature can be verified by anyone who sees the signed message, using her public key.

Certificate Authorities

As you've seen, an actor or a node is able to participate in the blockchain network via the means of a digital identity issued for it by an authority trusted by the system. In the most common case, digital identities (or simply identities) have the form of cryptographically validated digital certificates that comply with the X.509 standard and are issued by a Certificate Authority (CA).

CAs are a common part of internet security protocols, and you've probably heard of some of the more popular ones: Symantec (originally Verisign), GeoTrust, DigiCert, GoDaddy, and Comodo, among others.

_images/identity.diagram.11.pngCertificateAuthorities

A Certificate Authority dispenses certificates to different actors. These certificates are digitally signed by the CA and bind together the actor with the actor's public key (and optionally with a comprehensive list of properties). As a result, if one trusts the CA (and knows its public key), it can trust that the specific actor is bound to the public key included in the certificate, and owns the included attributes, by validating the CA's signature on the actor's certificate.

Certificates can be widely disseminated, as they include neither the actors' nor the CA's private keys. As such, they can be used as anchors of trust for authenticating messages coming from different actors.

CAs also have a certificate, which they make widely available. This allows the consumers of identities issued by a given CA to verify them by checking that the certificate could only have been generated by the holder of the corresponding private key (the CA).

In a blockchain setting, every actor who wishes to interact with the network needs an identity. In this setting, you might say that one or more CAs can be used to define the members of an organization from a digital perspective. It's the CA that provides the basis for an organization's actors to have a verifiable digital identity.

Root CAs, Intermediate CAs and Chains of Trust

CAs come in two flavors: Root CAs and Intermediate CAs. Because Root CAs (Symantec, Geotrust, etc) have to securely distribute hundreds of millions of certificates to internet users, it makes sense to spread this process out across what are called Intermediate CAs. These Intermediate CAs have their certificates issued by the Root CA or another intermediate authority, allowing the establishment of a "chain of trust" for any certificate that is issued by any CA in the chain. This ability to track back to the Root CA not only allows the function of CAs to scale while still providing security, allowing organizations that consume certificates to use Intermediate CAs with confidence, it limits the exposure of the Root CA, which, if compromised, would endanger the entire chain of trust. If an Intermediate CA is compromised, on the other hand, there will be a much smaller exposure.

_images/identity.diagram.1.pngChainOfTrust

A chain of trust is established between a Root CA and a set of Intermediate CAs, as long as the issuing CA for the certificate of each of these Intermediate CAs is either the Root CA itself or has a chain of trust to the Root CA.

Intermediate CAs provide a huge amount of flexibility when it comes to the issuance of certificates across multiple organizations, and that's very helpful in a permissioned blockchain system (like Fabric). For example, you'll see that different organizations may use different Root CAs, or the same Root CA with different Intermediate CAs; it really does depend on the needs of the network.

Fabric CA

It's because CAs are so important that Fabric provides a built-in CA component to allow you to create CAs in the blockchain networks you form. This component, known as Fabric CA, is a private root CA provider capable of managing digital identities of Fabric participants that have the form of X.509 certificates. Because Fabric CA is a custom CA targeting the Root CA needs of Fabric, it is inherently not capable of providing SSL certificates for general/automatic use in browsers. However, because some CA must be used to manage identity (even in a test environment), Fabric CA can be used to provide and manage certificates. It is also possible, and fully appropriate, to use a public/commercial root or intermediate CA to provide identification.

If you're interested, you can read a lot more about Fabric CA in the CA documentation section.

Certificate Revocation Lists

A Certificate Revocation List (CRL) is easy to understand: it's just a list of references to certificates that a CA knows to be revoked for one reason or another. If you recall the store scenario, a CRL would be like a list of stolen credit cards.

When a third party wants to verify another party's identity, it first checks the issuing CA's CRL to make sure that the certificate has not been revoked. A verifier doesn't have to check the CRL, but if they don't, they run the risk of accepting a compromised identity.

_images/identity.diagram.12.pngCRL

Using a CRL to check that a certificate is still valid. If an impersonator tries to pass a compromised digital certificate to a validating party, it can first be checked against the issuing CA's CRL to make sure it's not listed as no longer valid.

Note that a certificate being revoked is very different from a certificate expiring. Revoked certificates have not expired; they are, by every other measure, fully valid certificates. For more in-depth information about CRLs, click here.

Now that you've seen how a PKI can provide verifiable identities through a chain of trust, the next step is to see how these identities can be used to represent the trusted members of a blockchain network. That's where a Membership Service Provider (MSP) comes into play: it identifies the parties who are the members of a given organization in the blockchain network.

To learn more about membership, check out the conceptual documentation on MSPs.

Membership

If you've read through the documentation on identity, you've seen how a PKI can provide verifiable identities through a chain of trust. Now let's see how these identities can be used to represent the trusted members of a blockchain network.

This is where a Membership Service Provider (MSP) comes into play: it identifies which Root CAs and Intermediate CAs are trusted to define the members of a trust domain, e.g., an organization, either by listing the identities of their members, or by identifying which CAs are authorized to issue valid identities for their members, or, as will usually be the case, through a combination of both.

The power of an MSP goes beyond simply listing who is a network participant or member of a channel. An MSP can identify specific roles an actor might play within the scope of the organization the MSP represents (e.g., admins, or as members of a sub-organization group), and sets the basis for defining access privileges in the context of a network and channel (e.g., channel admins, readers, writers).

The configuration of an MSP is advertised to all the channels where members of the corresponding organization participate (in the form of a channel MSP). In addition to the channel MSP, peers, orderers, and clients also maintain a local MSP to authenticate member messages outside the context of a channel and to define the permissions over a specific component (who has the ability to install chaincode on a peer, for example).

In addition, an MSP can allow for the identification of a list of identities that have been revoked, as discussed in the Identity documentation, but we will talk about how that process also extends to an MSP.

We'll talk more about local and channel MSPs in a moment. For now, let's see what MSPs do in general.

Mapping MSPs to Organizations

An organization is a managed group of members. This can be something as big as a multinational corporation or as small as a flower shop. What's most important about organizations (or orgs) is that they manage their members under a single MSP. Note that this is different from the organization concept defined in an X.509 certificate, which we'll talk about later.

The exclusive relationship between an organization and its MSP makes it sensible to name the MSP after the organization, a convention you'll find adopted in most policy configurations. For example, organization ORG1 would likely have an MSP called something like ORG1-MSP. In some cases an organization may require multiple membership groups, for example, where channels are used to perform very different business functions between organizations. In these cases it makes sense to have multiple MSPs and name them accordingly, e.g., ORG2-MSP-NATIONAL and ORG2-MSP-GOVERNMENT, reflecting the different membership roots of trust within ORG2 in the NATIONAL sales channel compared to the GOVERNMENT regulatory channel.

_images/membership.diagram.3.pngMSP1

Two different MSP configurations for an organization. The first configuration shows the typical relationship between an MSP and an organization: a single MSP defines the list of members of an organization. In the second configuration, different MSPs are used to represent different organizational groups with national, international, and governmental affiliation.

Organizational Units and MSPs

An organization is often divided up into multiple organizational units (OUs), each of which has a certain set of responsibilities. For example, the ORG1 organization might have both ORG1-MANUFACTURING and ORG1-DISTRIBUTION OUs to reflect these separate lines of business. When a CA issues X.509 certificates, the OU field in the certificate specifies the line of business to which the identity belongs.

We'll see later how OUs can be helpful to control the parts of an organization who are considered to be members of a blockchain network. For example, only identities from the ORG1-MANUFACTURING OU might be able to access a channel, whereas ORG1-DISTRIBUTION cannot.

Finally, though this is a slight misuse of OUs, they can sometimes be used by different organizations in a consortium to distinguish each other. In such cases, the different organizations use the same Root CAs and Intermediate CAs for their chain of trust, but assign the OU field to identify members of each organization. We'll also see how to configure MSPs to achieve this later.

Local and Channel MSPs

MSPs occur in two places in a blockchain network: channel configuration (channel MSPs), and locally on an actor's premise (local MSP). Local MSPs are defined for clients (users) and for nodes (peers and orderers). Node local MSPs define the permissions for that node (who the peer admins are, for example). The local MSPs of users allow the user side to authenticate itself in its transactions as a member of a channel (e.g. in chaincode transactions), or as the owner of a specific role in the system (an org admin, for example, in configuration transactions).

Every node and user must have a local MSP defined, as it defines who has administrative or participatory rights at that level (peer admins will not necessarily be channel admins, and vice versa).

In contrast, channel MSPs define administrative and participatory rights at the channel level. Every organization participating in a channel must have an MSP defined for it. Peers and orderers on a channel will all share the same view of channel MSPs, and will therefore be able to correctly authenticate the channel participants. This means that if an organization wishes to join the channel, an MSP incorporating the chain of trust for the organization's members would need to be included in the channel configuration. Otherwise, transactions originating from this organization's identities will be rejected.

The key difference here between local and channel MSPs is not how they function, as both turn identities into roles, but their scope.

_images/membership.diagram.4.pngMSP2

Local and channel MSPs. The trust domain (e.g., the organization) of each peer is defined by the peer's local MSP, e.g., ORG1 or ORG2. Representation of an organization on a channel is achieved by adding the organization's MSP to the channel configuration. For example, the channel of this figure is managed by both ORG1 and ORG2. Similar principles apply for the network, orderers, and users, but these are not shown here for simplicity.

You may find it helpful to see how local and channel MSPs are used by seeing what happens when a blockchain administrator installs and instantiates a smart contract, as shown in the diagram above.

An administrator B connects to the peer with an identity issued by RCA1 and stored in their local MSP. When B tries to install a smart contract on the peer, the peer checks its local MSP, ORG1-MSP, to verify that the identity of B is indeed a member of ORG1. A successful verification will allow the install command to complete successfully. Subsequently, B wishes to instantiate the smart contract on the channel. Because this is a channel operation, all organizations on the channel must agree to it. Therefore, the peer must check the MSPs of the channel before it can successfully commit this command. (Other things must happen too, but concentrate on the above for now.)

Local MSPs are only defined on the file system of the node or user to which they apply. Therefore, physically and logically there is only one local MSP per node or user. However, as channel MSPs are available to all nodes in the channel, they are logically defined once in the channel configuration. However, a channel MSP is also instantiated on the file system of every node in the channel and kept synchronized via consensus. So while there is a copy of each channel MSP on the local file system of every node, logically a channel MSP resides on, and is maintained by, the channel or the network.

MSP Levels

The split between channel and local MSPs reflects the needs of organizations to administer their local resources, such as peer or orderer nodes, and their channel resources, such as ledgers, smart contracts, and consortia, which operate at the channel or network level. It's helpful to think of these MSPs as being at different levels, with MSPs at a higher level relating to network administration concerns, while MSPs at a lower level handle identity for the administration of private resources. MSPs are mandatory at every level of administration: they must be defined for the network, channel, peer, orderer, and users.

_images/membership.diagram.2.pngMSP3

MSP Levels. The MSPs for the peer and orderer are local, whereas the MSPs for a channel (including the network configuration channel) are shared across all participants of that channel. In this figure, the network configuration channel is administered by ORG1, but another application channel can be managed by ORG1 and ORG2. The peer is a member of, and managed by, ORG2, whereas ORG1 manages the orderer of the figure. ORG1 trusts identities from RCA1, whereas ORG2 trusts identities from RCA2. Note that these are administration identities, reflecting who can administer these components. So while ORG1 administers the network, ORG2.MSP does exist in the network definition.

  • Network MSP: The configuration of a network defines who are the members in the network — by defining the MSPs of the participant organizations — as well as which of these members are authorized to perform administrative tasks (e.g., creating a channel).

  • Channel MSP: It is important for a channel to maintain the MSPs of its members separately. A channel provides private communications between a particular set of organizations which in turn have administrative control over it. Channel policies interpreted in the context of that channel’s MSPs define who has ability to participate in certain action on the channel, e.g., adding organizations, or instantiating chaincodes. Note that there is no necessary relationship between the permission to administrate a channel and the ability to administrate the network configuration channel (or any other channel). Administrative rights exist within the scope of what is being administrated (unless the rules have been written otherwise — see the discussion of the ROLE attribute below).

  • Peer MSP: This local MSP is defined on the file system of each peer and there is a single MSP instance for each peer. Conceptually, it performs exactly the same function as channel MSPs with the restriction that it only applies to the peer where it is defined. An example of an action whose authorization is evaluated using the peer’s local MSP is the installation of a chaincode on the peer.

  • Orderer MSP: Like a peer MSP, an orderer local MSP is also defined on the file system of the node and only applies to that node. Like peer nodes, orderers are also owned by a single organization and therefore have a single MSP to list the actors or nodes it trusts.

MSP Structure

So far, you've seen that the most important element of an MSP is the specification of the root or intermediate CAs that are used to establish an actor's or node's membership in the respective organization. There are, however, more elements that are used in conjunction with these two to assist with membership functions.

_images/membership.diagram.5.pngMSP4

The figure above shows how a local MSP is stored on a local filesystem. Even though channel MSPs are not physically structured in exactly this way, it's still a helpful way to think about them.

As you can see, there are nine elements to an MSP. It's easiest to think of these elements in a directory structure, where the MSP name is the root folder name, with each subfolder representing a different element of an MSP configuration.
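As a rough sketch of that directory structure (folder names follow the conventions used by Fabric's crypto material; the MSP name at the root is hypothetical):

```
ORG1-MSP/
├── cacerts/                  # Root CA certificates
├── intermediatecerts/        # Intermediate CA certificates
├── config.yaml               # Organizational Unit definitions
├── admincerts/               # Administrator identities
├── crls/                     # Revoked certificate identifiers
├── signcerts/                # The node's own identity certificate
├── keystore/                 # The node's private (signing) key
├── tlscacerts/               # TLS Root CA certificates
└── tlsintermediatecerts/     # TLS Intermediate CA certificates
```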

Let's describe these folders in a little more detail and see why they are important.

  • Root CAs: This folder contains a list of self-signed X.509 certificates of the Root CAs trusted by the organization represented by this MSP. There must be at least one Root CA X.509 certificate in this MSP folder.

    This is the most important folder because it identifies the CAs from which all other certificates must be derived to be considered members of the corresponding organization.

  • Intermediate CAs: This folder contains a list of X.509 certificates of the Intermediate CAs trusted by this organization. Each certificate must be signed by one of the Root CAs in the MSP or by an Intermediate CA whose issuing CA chain ultimately leads back to a trusted Root CA.

    An intermediate CA may represent a different subdivision of the organization (like ORG1-MANUFACTURING and ORG1-DISTRIBUTION do for ORG1), or the organization itself (as may be the case if a commercial CA is leveraged for the organization’s identity management). In the latter case intermediate CAs can be used to represent organization subdivisions. Here you may find more information on best practices for MSP configuration. Notice, that it is possible to have a functioning network that does not have an Intermediate CA, in which case this folder would be empty.

    Like the Root CA folder, this folder defines the CAs from which certificates must be issued to be considered members of the organization.

  • Organizational Units (OUs): These are listed in the $FABRIC_CFG_PATH/msp/config.yaml file and contain a list of organizational units, whose members are considered to be part of the organization represented by this MSP. This is particularly useful when you want to restrict the members of an organization to the ones holding an identity (signed by one of MSP designated CAs) with a specific OU in it.

    Specifying OUs is optional. If no OUs are listed, all the identities that are part of an MSP — as identified by the Root CA and Intermediate CA folders — will be considered members of the organization.

  • Administrators: This folder contains a list of identities that define the actors who have the role of administrators for this organization. For the standard MSP type, there should be one or more X.509 certificates in this list.

    It’s worth noting that just because an actor has the role of an administrator it doesn’t mean that they can administer particular resources! The actual power a given identity has with respect to administering the system is determined by the policies that manage system resources. For example, a channel policy might specify that ORG1-MANUFACTURING administrators have the rights to add new organizations to the channel, whereas the ORG1-DISTRIBUTION administrators have no such rights.

    Even though an X.509 certificate has a ROLE attribute (specifying, for example, that an actor is an admin), this refers to an actor’s role within its organization rather than on the blockchain network. This is similar to the purpose of the OU attribute, which — if it has been defined — refers to an actor’s place in the organization.

    The ROLE attribute can be used to confer administrative rights at the channel level if the policy for that channel has been written to allow any administrator from an organization (or certain organizations) permission to perform certain channel functions (such as instantiating chaincode). In this way, an organizational role can confer a network role.

  • Revoked Certificates: If the identity of an actor has been revoked, identifying information about the identity — not the identity itself — is held in this folder. For X.509-based identities, these identifiers are pairs of strings known as Subject Key Identifier (SKI) and Authority Access Identifier (AKI), and are checked whenever the X.509 certificate is being used to make sure the certificate has not been revoked.

    This list is conceptually the same as a CA’s Certificate Revocation List (CRL), but it also relates to revocation of membership from the organization. As a result, the administrator of an MSP, local or channel, can quickly revoke an actor or node from an organization by advertising the updated CRL of the CA the revoked certificate as issued by. This “list of lists” is optional. It will only become populated as certificates are revoked.

  • Node Identity: This folder contains the identity of the node, i.e., cryptographic material that — in combination to the content of KeyStore — would allow the node to authenticate itself in the messages that is sends to other participants of its channels and network. For X.509 based identities, this folder contains an X.509 certificate. This is the certificate a peer places in a transaction proposal response, for example, to indicate that the peer has endorsed it — which can subsequently be checked against the resulting transaction’s endorsement policy at validation time.

    This folder is mandatory for local MSPs, and there must be exactly one X.509 certificate for the node. It is not used for channel MSPs.

  • KeyStore for Private Key: This folder is defined for the local MSP of a peer or orderer node (or in an client’s local MSP), and contains the node’s signing key. This key matches cryptographically the node’s identity included in Node Identity folder and is used to sign data — for example to sign a transaction proposal response, as part of the endorsement phase.

    This folder is mandatory for local MSPs, and must contain exactly one private key. Obviously, access to this folder must be limited only to the identities of users who have administrative responsibility on the peer.

    Configuration of a channel MSPs does not include this folder, as channel MSPs solely aim to offer identity validation functionalities and not signing abilities.

  • TLS Root CA: This folder contains a list of self-signed X.509 certificates of the Root CAs trusted by this organization for TLS communications. An example of a TLS communication would be when a peer needs to connect to an orderer so that it can receive ledger updates.

    MSP TLS information relates to the nodes inside the network — the peers and the orderers, in other words, rather than the applications and administrations that consume the network.

    There must be at least one TLS Root CA X.509 certificate in this folder.

  • TLS Intermediate CA: This folder contains a list of intermediate CA certificates trusted by the organization represented by this MSP for TLS communications. This folder is specifically useful when commercial CAs are used for TLS certificates of an organization. Similar to membership intermediate CAs, specifying intermediate TLS CAs is optional.

    For more information about TLS, click here.

If you've read this doc as well as our doc on Identity, you should have a pretty good grasp of how identities and membership work in Hyperledger Fabric. You've seen how PKIs and MSPs are used to identify the actors collaborating in a blockchain network. You've learned how certificates, public/private keys, and roots of trust work, in addition to how MSPs are physically and logically structured.

Peers

A blockchain network is comprised primarily of a set of peer nodes (or, simply, peers). Peers are a fundamental element of the network because they host ledgers and smart contracts. Recall that a ledger immutably records all the transactions generated by smart contracts (which in Hyperledger Fabric are contained in a chaincode, more on this later). Smart contracts and ledgers are used to encapsulate the shared processes and shared information in a network, respectively. These aspects of a peer make them a good starting point to understand a Fabric network.

Other elements of the blockchain network are of course important: ledgers and smart contracts, orderers, policies, channels, applications, organizations, identities, and membership, and you can read more about them in their own dedicated sections. This section focusses on peers, and their relationship to those other elements in a Fabric network.

_images/peers.diagram.1.pngPeer1

A blockchain network is comprised of peer nodes, each of which can hold copies of ledgers and copies of smart contracts. In this example, the network N consists of peers P1, P2 and P3, each of which maintain their own instance of the distributed ledger L1. P1, P2 and P3 use the same chaincode, S1, to access their copy of that distributed ledger.

Peers can be created, started, stopped, reconfigured, and even deleted. They expose a set of APIs that enable administrators and applications to interact with the services that they provide. We'll learn more about these services in this section.

A word on terminology

Fabric implements smart contracts with a technology concept it calls chaincode: simply a piece of code that accesses the ledger, written in one of the supported programming languages. In this topic, we'll usually use the term chaincode, but feel free to read it as smart contract if you're more used to that term. It's the same thing! If you want to learn more about chaincode and smart contracts, check out our documentation on smart contracts and chaincode.

Ledgers and Chaincode

Let's look at a peer in a little more detail. We can see that it's the peer that hosts both the ledger and chaincode. More accurately, the peer actually hosts instances of the ledger, and instances of chaincode. Note that this provides a deliberate redundancy in a Fabric network; it avoids single points of failure. We'll learn more about the distributed and decentralized nature of a blockchain network later in this section.

_images/peers.diagram.2.pngPeer2

A peer hosts instances of ledgers and instances of chaincodes. In this example, P1 hosts an instance of ledger L1 and an instance of chaincode S1. There can be many ledgers and chaincodes hosted on an individual peer.

Because a peer is a host for ledgers and chaincodes, applications and administrators must interact with a peer if they want to access these resources. That's why peers are considered the most fundamental building blocks of a Fabric network. When a peer is first created, it has neither ledgers nor chaincodes. We'll see later how ledgers get created, and how chaincodes get installed, on peers.

Multiple Ledgers

A peer is able to host more than one ledger, which is helpful because it allows for a flexible system design. The simplest configuration is for a peer to manage a single ledger, but it's absolutely appropriate for a peer to host two or more ledgers when required.

_images/peers.diagram.3.pngPeer3

A peer hosting multiple ledgers. Peers host one or more ledgers, and each ledger has zero or more chaincodes that apply to them. In this example, we can see that the peer P1 hosts ledgers L1 and L2. Ledger L1 is accessed using chaincode S1. Ledger L2, on the other hand, can be accessed using chaincodes S1 and S2.

Although it is perfectly possible for a peer to host a ledger instance without hosting any chaincodes which access that ledger, it's rare that peers are configured this way. The vast majority of peers will have at least one chaincode installed which can query or update the peer's ledger instances. It's worth mentioning in passing that, whether or not users have installed chaincodes for use by external applications, peers also have special system chaincodes that are always present. These are not discussed in detail in this topic.

Multiple Chaincodes

There is no fixed relationship between the number of ledgers a peer has and the number of chaincodes that can access those ledgers. A peer might have many chaincodes and many ledgers available to it.

_images/peers.diagram.4.pngPeer4

An example of a peer hosting multiple chaincodes. Each ledger can have many chaincodes which access it. In this example, we can see that peer P1 hosts ledgers L1 and L2, where L1 is accessed by chaincodes S1 and S2, and L2 is accessed by S1 and S3. We can see that S1 can access both L1 and L2.

We'll see a little later why the concept of channels in Fabric is important when hosting multiple ledgers or multiple chaincodes on a peer.

Applications and Peers

We're now going to show how applications interact with peers to access the ledger. Ledger-query interactions involve a simple three-step dialogue between an application and a peer; ledger-update interactions are a little more involved, and require two extra steps. We've simplified these steps a little to help you get started with Fabric, but don't worry: what's most important to understand is the difference in application-peer interactions for ledger-query compared to ledger-update transaction styles.

Applications always connect to peers when they need to access ledgers and chaincodes. The Fabric Software Development Kit (SDK) makes this easy for programmers: its APIs enable applications to connect to peers, invoke chaincodes to generate transactions, submit transactions to the network that will get ordered and committed to the distributed ledger, and receive events when this process is complete.

Through a peer connection, applications can execute chaincodes to query or update a ledger. The result of a ledger query transaction is returned immediately, whereas ledger updates involve a more complex interaction between applications, peers and orderers. Let's investigate in a little more detail.
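As a hypothetical sketch of what this looks like from the application side, the example below uses the community fabric-sdk-go client packages; the channel, chaincode, user, organization and config-file names are all assumptions for illustration:

```go
package main

import (
	"fmt"

	"github.com/hyperledger/fabric-sdk-go/pkg/client/channel"
	"github.com/hyperledger/fabric-sdk-go/pkg/core/config"
	"github.com/hyperledger/fabric-sdk-go/pkg/fabsdk"
)

func main() {
	// Load a connection profile and create an SDK instance.
	sdk, err := fabsdk.New(config.FromFile("config.yaml"))
	if err != nil {
		panic(err)
	}
	defer sdk.Close()

	// A channel client scoped to one channel, user and organization.
	ctx := sdk.ChannelContext("mychannel", fabsdk.WithUser("User1"), fabsdk.WithOrg("Org1"))
	client, err := channel.New(ctx)
	if err != nil {
		panic(err)
	}

	// Query: evaluated on a peer, result returned immediately.
	resp, err := client.Query(channel.Request{
		ChaincodeID: "asset", Fcn: "get", Args: [][]byte{[]byte("CAR1")},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("query result:", string(resp.Payload))

	// Update: endorsed, sent for ordering, and committed across the channel.
	if _, err := client.Execute(channel.Request{
		ChaincodeID: "asset", Fcn: "set", Args: [][]byte{[]byte("CAR1"), []byte("owner=John")},
	}); err != nil {
		panic(err)
	}
}
```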

_images/peers.diagram.6.pngPeer6

Peers, in conjunction with orderers, ensure that the ledger is kept up-to-date on every peer. In this example, application A connects to P1 and invokes chaincode S1 to query or update the ledger L1. P1 invokes S1 to generate a proposal response that contains a query result or a proposed ledger update. Application A receives the proposal response and, for queries, the process is now complete. For updates, A builds a transaction from all of the responses, which it sends to O1 for ordering. O1 collects transactions from across the network into blocks, and distributes these to all peers, including P1. P1 validates the transaction before applying it to L1. Once L1 is updated, P1 generates an event, received by A, to signify completion.

A peer can return the results of a query to an application immediately, since all of the information required to satisfy the query is in the peer's local copy of the ledger. Peers never consult with other peers in order to respond to a query from an application. Applications can, however, connect to one or more peers to issue a query; for example, to corroborate a result between multiple peers, or retrieve a more up-to-date result from a different peer if there's a suspicion that information might be out of date. In the diagram, you can see that ledger query is a simple three-step process.

An update transaction starts in the same way as a query transaction, but has two extra steps. Although ledger-updating applications also connect to peers to invoke a chaincode, unlike with ledger-querying applications, an individual peer cannot perform a ledger update at this time, because other peers must first agree to the change, a process called consensus. Therefore, peers return to the application a proposed update, one that this peer would apply subject to other peers' prior agreement. The first extra step (step four) requires that applications send an appropriate set of matching proposed updates to the entire network of peers as a transaction for commitment to their respective ledgers. This is achieved by the application using an orderer to package transactions into blocks, and distribute them to the entire network of peers, where they can be verified before being applied to each peer's local copy of the ledger. As this whole ordering processing takes some time to complete (seconds), the application is notified asynchronously, as shown in step five.

Later in this section, you'll learn more about the detailed nature of this ordering process, and for a really detailed look at this process, see the Transaction Flow topic.

Peers and Channels

Although this section is about peers rather than channels, it's worth spending a little time understanding how peers interact with each other, and with applications, via channels: a mechanism by which a set of components within a blockchain network can communicate and transact privately.

These components are typically peer nodes, orderer nodes and applications and, by joining a channel, they agree to collaborate to collectively share and manage identical copies of the ledger associated with that channel. Conceptually, you can think of channels as being similar to groups of friends (though the members of a channel certainly don't need to be friends!). A person might have several groups of friends, with each group having activities they do together. These groups might be totally separate (a group of work friends as compared to a group of hobby friends), or there can be some crossover between them. Nevertheless, each group is its own entity, with "rules" of a kind.

_images/peers.diagram.5.pngPeer5

通道允许一组特定的peer和应用程序在区块链网络中彼此通信。在本例中,应用程序A可以使用通道C直接与peer P1和P2通信。(为了简单起见,此图中没有显示排序器,但是必须在一个正常运行的网络中显示排序器。)

我们看到通道的存在方式与peer不同——将通道看作由物理peer集合组成的逻辑结构更合适。理解这一点非常重要——peer为通道的访问和管理提供控制点。

Peers and Organizations

Now that you understand peers and their relationship to ledgers, chaincodes, and channels, you'll be able to see how multiple organizations come together to form a blockchain network.

Blockchain networks are administered by a collection of organizations rather than a single organization. Peers are central to how this kind of distributed network is built, because they are owned by these organizations and are the organizations' connection points to the network.

_images/peers.diagram.8.pngPeer8

Peers in a blockchain network with multiple organizations. The blockchain network is built from the peers owned and contributed by the different organizations. In this example, we see four organizations contributing eight peers to form a network. The channel C connects five of these peers in the network N: P1, P3, P5, P7 and P8. The other peers owned by these organizations have not been joined to this channel, but are typically joined to at least one other channel. Applications that have been developed by a particular organization will connect to their own organization's peers as well as those of different organizations. Again, for simplicity, an orderer node is not shown in this diagram.

It's really important that you can see what's happening in the formation of a blockchain network. The network is both formed and managed by the multiple organizations that contribute resources to it. Peers are the resources that we're discussing in this topic, but the resources an organization provides are more than just peers. There's a principle at work here: the network literally does not exist without organizations contributing their individual resources to the collective network. Moreover, the network grows and shrinks with the resources that are provided by these collaborating organizations.

You can see that (other than the ordering service) there are no centralized resources: in the example above, the network N would not exist if the organizations did not contribute their peers. This reflects the fact that the network does not exist in any meaningful sense unless and until organizations contribute the resources that form it. Moreover, the network does not depend on any individual organization; it will continue to exist as long as one organization remains, no matter which other organizations may come and go. This is at the heart of what it means for a network to be decentralized.

Applications in different organizations, as in the example above, may or may not be the same. That's because it's entirely up to an organization how its applications process its peers' copies of the ledger. This means that both application and presentation logic may vary from organization to organization, even though their respective peers host exactly the same ledger data.

Applications connect either to peers in their own organization, or to peers in another organization, depending on the nature of the ledger interaction that's required. For ledger-query interactions, applications typically connect to their own organization's peers. For ledger-update interactions, we'll see later why applications need to connect to peers representing every organization that is required to endorse the ledger update.

Peers and Identity

Now that you've seen how peers from different organizations come together to form a blockchain network, it's worth spending a few moments understanding how peers get assigned to organizations by their administrators.

Peers have an identity assigned to them via a digital certificate from a particular certificate authority. You can read lots more about how X.509 digital certificates work elsewhere in this guide but, for now, think of a digital certificate as being like an ID card that provides lots of verifiable information about a peer. Each and every peer in the network is assigned a digital certificate by an administrator from its owning organization.

_images/peers.diagram.9.pngPeer9

When a peer connects to a channel, its digital certificate identifies its owning organization via a channel MSP. In this example, P1 and P2 have identities issued by CA1. Channel C determines from a policy in its channel configuration that identities from CA1 should be associated with Org1 using ORG1.MSP. Similarly, P3 and P4 are identified by ORG2.MSP as being part of Org2.

When a peer connects using a channel to a blockchain network, a policy in the channel configuration uses the peer's identity to determine its rights. The mapping of identity to organization is provided by a component called a Membership Service Provider (MSP); it determines how a peer gets assigned to a specific role in a particular organization and accordingly gains appropriate access to blockchain resources. Moreover, a peer can be owned only by a single organization, and is therefore associated with a single MSP. We'll learn more about peer access control later in this section, and there's an entire section on MSPs and access control policies elsewhere in this guide. But for now, think of an MSP as providing the link between an individual identity and a particular organizational role in a blockchain network.

To digress for a moment, peers, as well as everything else that interacts with a blockchain network, acquire their organizational identity from their digital certificate and an MSP. Peers, applications, end users, administrators and orderers must have an identity and an associated MSP if they want to interact with a blockchain network. We give a name to every entity that interacts with a blockchain network using an identity: a principal. You can learn lots more about principals and organizations elsewhere in this guide, but for now you know more than enough to continue your understanding of peers!

Finally, note that it's not really important where the peer is physically located: it could reside in the cloud, in a data centre owned by one of the organizations, or on a local machine. It's the identity associated with it that identifies it as being owned by a particular organization. In our example above, P3 could be hosted in Org1's data center, but as long as the digital certificate associated with it is issued by CA2, then it's owned by Org2.

Peers and Orderers

We've seen that peers form the basis for a blockchain network, hosting ledgers and smart contracts which can be queried and updated by peer-connected applications. However, the mechanism by which applications and peers interact with each other to ensure that every peer's ledger is kept consistent is mediated by special nodes called orderers, and it's to these nodes we now turn our attention.

An update transaction is quite different from a query transaction, because a single peer cannot, on its own, update the ledger: updating requires the consent of other peers in the network. A peer requires other peers in the network to approve a ledger update before it can be applied to the peer's local ledger. This process is called consensus, and it takes much longer to complete than a simple query. But when all the peers required to approve the transaction do so, and the transaction is committed to the ledger, peers will notify their connected applications that the ledger has been updated. In this section, you're about to be shown a lot more detail about how peers and orderers manage the consensus process.

Specifically, applications that want to update the ledger are involved in a three-phase process, which ensures that all the peers in a blockchain network keep their ledgers consistent with each other. In the first phase, applications work with a subset of endorsing peers, each of which provides an endorsement of the proposed ledger update to the application, but does not apply the proposed update to its copy of the ledger. In the second phase, these separate endorsements are collected together as transactions and packaged into blocks. In the final phase, these blocks are distributed back to every peer, where each transaction is validated before being applied to that peer's copy of the ledger.

As you will see, orderer nodes are central to this process, so let's investigate in a little more detail how applications and peers use orderers to generate ledger updates that can be consistently applied to a distributed, replicated ledger.

Phase 1: Proposal

Phase 1 of the transaction workflow involves an interaction between an application and a set of peers; it does not involve orderers. Phase 1 is only concerned with an application asking different organizations' endorsing peers to agree to the results of the proposed chaincode invocation.

To start phase 1, an application generates a transaction proposal, which it sends to each of the required set of peers for endorsement. Each of these endorsing peers then independently executes a chaincode using the transaction proposal to generate a transaction proposal response. It does not apply this update to the ledger, but rather simply signs it and returns it to the application. Once the application has received a sufficient number of signed proposal responses, the first phase of the transaction flow is complete. Let's examine this phase in a little more detail.

_images/peers.diagram.10.pngPeer10

Transaction proposals are independently executed by peers, who return endorsed proposal responses. In this example, application A1 generates transaction T1 proposal P, which it sends to both peer P1 and peer P2 on channel C. P1 executes S1 using transaction T1 proposal P, generating transaction T1 response R1, which it endorses with E1. Independently, P2 executes S1 using transaction T1 proposal P, generating transaction T1 response R2, which it endorses with E2. Application A1 receives two endorsed responses for transaction T1, namely E1 and E2.

Initially, a set of peers are chosen by the application to generate a set of proposed ledger updates. Which peers are chosen by the application? Well, that depends on the endorsement policy (defined for a chaincode), which defines the set of organizations that need to endorse a proposed ledger change before it can be accepted by the network. This is literally what it means to achieve consensus: every organization that matters must have endorsed the proposed ledger change before it will be accepted onto any peer's ledger.

A peer endorses a proposal response by adding its digital signature, signing the entire payload using its private key. This endorsement can be subsequently used to prove that this organization's peer generated a particular response. In our example, if peer P1 is owned by organization Org1, endorsement E1 corresponds to a digital proof that 'Transaction T1 response R1 on the ledger L1 has been provided by Org1's peer P1!'

Phase 1 ends when the application receives signed proposal responses from sufficient peers. We note that different peers can return different, and therefore inconsistent, transaction responses to the application for the same transaction proposal. It might simply be that the result was generated at a different time on a different peer with a ledger that was in a different state, in which case an application can simply request a more up-to-date proposal response. Less likely, but much more seriously, results might be different because the chaincode is non-deterministic. Non-determinism is the enemy of chaincodes and ledgers, and if it occurs it indicates a serious problem with the proposed transaction, as inconsistent results obviously cannot be applied to ledgers. An individual peer cannot know that its transaction result is non-deterministic; transaction responses must be gathered together for comparison before non-determinism can be detected. (Strictly speaking, even this is not enough, but we defer this discussion to the transaction topic, where non-determinism is discussed in detail.)

At the end of phase 1, the application is free to discard inconsistent transaction responses if it wishes to do so, effectively terminating the transaction workflow early. We'll see later that if an application tries to use an inconsistent set of transaction responses to update the ledger, it will be rejected.

Phase 2: Ordering and packaging transactions into blocks

The second phase of the transaction workflow is the packaging phase. The orderer is pivotal to this process: it receives transactions containing endorsed transaction proposal responses from many applications, and orders the transactions into blocks. For more details about the ordering and packaging phase, check out our conceptual information about the ordering phase.

Phase 3: Validation and commit

At the end of phase 2, we saw that orderers have been responsible for the simple but vital processes of collecting proposed transaction updates, ordering them, and packaging them into blocks, ready for distribution to the peers.

The final phase of the transaction workflow involves the distribution and subsequent validation of blocks from the orderer to the peers, where they can be applied to the ledger. Specifically, at each peer, every transaction within a block is validated to ensure that it has been consistently endorsed by all relevant organizations before it is applied to the ledger. Failed transactions are retained for audit, but are not applied to the ledger.

_images/peers.diagram.12.pngPeer12

The second role of an orderer node is to distribute blocks to peers. In this example, orderer O1 distributes block B2 to peer P1 and peer P2. Peer P1 processes block B2, resulting in a new block being added to ledger L1 on P1. In parallel, peer P2 processes block B2, resulting in a new block being added to ledger L1 on P2. Once this process is complete, the ledger L1 has been consistently updated on peers P1 and P2, and each may inform connected applications that the transaction has been processed.

Phase 3 begins with the orderer distributing blocks to all peers connected to it. Peers are connected to orderers on channels, such that when a new block is generated, all of the peers connected to the orderer will be sent a copy of the new block. Each peer will process this block independently, but in exactly the same way as every other peer on the channel. In this way, the ledger can be kept consistent. It's also worth noting that not every peer needs to be connected to an orderer: peers can cascade blocks to other peers using the gossip protocol, and those peers can also process them independently. But let's leave that discussion to another time!

Upon receipt of a block, a peer will process each transaction in the sequence in which it appears in the block. For every transaction, each peer will verify that the transaction has been endorsed by the required organizations, according to the endorsement policy of the chaincode which generated the transaction. For example, some transactions may only need to be endorsed by a single organization, whereas others may require multiple endorsements before they are considered valid. This process of validation verifies that all relevant organizations have generated the same outcome or result. Also note that this validation is different from the endorsement check in phase 1, where it is the application that receives the responses from endorsing peers and makes the decision to send the proposal transactions. In case the application violates the endorsement policy by sending wrong transactions, the peer is still able to reject the transaction in the validation process of phase 3.

If a transaction has been endorsed correctly, the peer will attempt to apply it to the ledger. To do this, a peer must perform a ledger consistency check to verify that the current state of the ledger is compatible with the state of the ledger when the proposed update was generated. This may not always be possible, even when the transaction has been fully endorsed. For example, another transaction may have updated the same asset in the ledger, such that the transaction update is no longer valid and therefore can no longer be applied. In this way, each peer's copy of the ledger is kept consistent across the network, because they each follow the same rules for validation.

After a peer has successfully validated each individual transaction, it updates the ledger. Failed transactions are not applied to the ledger, but they are retained for audit purposes, as are successful transactions. This means that peer blocks are almost exactly the same as the blocks received from the orderer, except for a valid or invalid indicator on each transaction in the block.

We also note that phase 3 does not require the running of chaincodes; this is done only during phase 1, and that's important. It means that chaincodes only have to be available on endorsing nodes, rather than throughout the blockchain network. This is often helpful, as it keeps the logic of the chaincode confidential to endorsing organizations. This is in contrast to the output of the chaincodes (the transaction proposal responses), which are shared with every peer in the channel, whether or not they endorsed the transaction. This specialization of endorsing peers is designed to help scalability.

Finally, every time a block is committed to a peer's ledger, that peer generates an appropriate event. Block events include the full block content, while block transaction events include summary information only, such as whether each transaction in the block has been validated or invalidated. Chaincode events that the chaincode execution has produced can also be published at this time. Applications can register for these event types so that they can be notified when they occur. These notifications conclude the third and final phase of the transaction workflow.
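
As an illustration, here is a minimal sketch of a transaction function that emits a chaincode event; the contract class, event name and payload shape are hypothetical assumptions, though setEvent itself is part of the chaincode stub API:

const { Contract } = require('fabric-contract-api');

class AssetContract extends Contract {

    async transferAsset(ctx, assetId, newOwner) {
        const data = await ctx.stub.getState(assetId);
        if (!data || data.length === 0) {
            throw new Error(`Asset ${assetId} does not exist`);
        }
        const asset = JSON.parse(data.toString());
        asset.owner = newOwner;
        await ctx.stub.putState(assetId, Buffer.from(JSON.stringify(asset)));

        // The event is published to registered listeners when the block
        // containing this transaction is committed (phase 3).
        ctx.stub.setEvent('AssetTransferred',
            Buffer.from(JSON.stringify({ assetId, newOwner })));
    }
}

module.exports = AssetContract;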

In summary, phase 3 sees the blocks generated by the orderer consistently applied to the ledger. The strict ordering of transactions into blocks allows each peer to validate that transaction updates are consistently applied across the blockchain network.

Orderers and Consensus

This entire transaction workflow process is called consensus, because all peers have reached agreement on the order and content of transactions, in a process that is mediated by orderers. Consensus is a multi-step process, and applications are only notified of ledger updates when the process is complete, which may happen at slightly different times on different peers.

We will discuss orderers in a lot more detail in a future orderer topic, but for now, think of orderers as nodes which collect and distribute proposed ledger updates from applications, for peers to validate and include on the ledger.

That's it! We've now finished our tour of peers and the other components that relate to them in Fabric. We've seen that peers are in many ways the most fundamental element: they form the network, host chaincodes and the ledger, handle transaction proposals and responses, and keep the ledger up-to-date by consistently applying transaction updates to it.

Smart Contracts and Chaincode

Audience: Architects, application and smart contract developers, administrators.

Note: the network described in this tutorial uses the previous lifecycle process, in which chaincode is instantiated on a channel. This topic will be updated to reflect the Fabric chaincode lifecycle feature, first introduced in the v2.0.0 Alpha release.

From an application developer's perspective, a smart contract, together with the ledger, forms the heart of a Hyperledger Fabric blockchain system. Whereas a ledger holds facts about the current and historical state of a set of business objects, a smart contract defines the executable logic that generates new facts which are added to the ledger. A chaincode is typically used by administrators to group related smart contracts for deployment, but can also be used for low-level system programming of Fabric. In this topic, we'll focus on why both smart contracts and chaincode exist, and how and when to use them.

In this topic, we'll cover:

Smart contract

Before businesses can transact with each other, they must define a common set of contracts covering common terms, data, rules, concept definitions, and processes. Taken together, these contracts lay out the business model that governs all of the interactions between transacting parties.

_images/smartcontract.diagram.01.pngsmart.diagram1 A smart contract defines the rules between different organizations in executable code. Applications invoke a smart contract to generate transactions that are recorded on the ledger.

Using a blockchain network, we can turn these contracts into executable programs, known in the industry as smart contracts, to open up a wide variety of new possibilities. That's because a smart contract can implement the governance rules for any type of business object, so that they can be automatically enforced when the smart contract is executed. For example, a smart contract might ensure that a new car is delivered within a specified timeframe, or that funds are released according to prearranged terms, improving the flow of goods or capital respectively. Most importantly, however, the execution of a smart contract is much more efficient than a manual human business process.

In the diagram above, we can see how two organizations, ORG1 and ORG2, have defined a car smart contract to query, transfer and update cars. Applications from these organizations invoke this smart contract to perform an agreed step in a business process, for example to transfer ownership of a specific car from ORG1 to ORG2.

Terminology

Hyperledger Fabric users often use the terms smart contract and chaincode interchangeably. In general, a smart contract defines the transaction logic that controls the lifecycle of a business object contained in the world state. It is then packaged into a chaincode, which is then deployed to a blockchain network. Think of smart contracts as governing transactions, whereas chaincode governs how smart contracts are packaged for deployment.

_images/smartcontract.diagram.02.pngsmart.diagram2 A smart contract is defined within a chaincode. Multiple smart contracts can be defined within the same chaincode. When a chaincode is deployed, all smart contracts within it are made available to applications.

In the diagram, we can see a vehicle chaincode that contains three smart contracts: cars, boats and trucks. We can also see an insurance chaincode that contains four smart contracts: policy, liability, syndication and securitization. In both cases, these contracts cover key aspects of the business process relating to vehicles and insurance. In this topic, we will use the car contract as an example. We can see that a smart contract is a domain-specific program which relates to specific business processes, whereas a chaincode is a technical container of a group of related smart contracts for installation and instantiation.

Ledger

At the simplest level, a blockchain immutably records transactions which update states in a ledger. A smart contract programmatically accesses two distinct pieces of the ledger: a blockchain, which immutably records the history of all transactions, and a world state, which holds a cache of the current values of these states, because it's an object's current value that is usually required.

Smart contracts primarily put, get and delete states in the world state, and can also query the immutable blockchain record of transactions.

  • A get typically represents a query to retrieve information about the current state of a business object.

  • A put typically creates a new business object or modifies an existing one in the ledger world state.

  • A delete typically represents the removal of a business object from the current state of the ledger, but not its history.

Smart contracts have many APIs available to them. Crucially, in all cases, whether transactions create, read, update or delete business objects in the world state, the blockchain contains an immutable record of these changes.
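
As a minimal sketch, here is how these three operations look in a Node.js smart contract built with fabric-contract-api; the contract class, keys and state shape are hypothetical:

const { Contract } = require('fabric-contract-api');

class PaperSketchContract extends Contract {

    async readPaper(ctx, key) {
        // get: retrieve the current state of a business object
        const data = await ctx.stub.getState(key);
        return data.toString();
    }

    async createPaper(ctx, key, faceValue) {
        // put: create (or modify) a business object in the world state
        const paper = { faceValue: Number(faceValue), currentState: 'issued' };
        await ctx.stub.putState(key, Buffer.from(JSON.stringify(paper)));
    }

    async removePaper(ctx, key) {
        // delete: remove the object from the current state; its history
        // remains on the blockchain
        await ctx.stub.deleteState(key);
    }
}

module.exports = PaperSketchContract;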

Development

Smart contracts are the focus of application development and, as we've seen, one or more smart contracts can be defined within a single chaincode. Deploying a chaincode to a network makes all of its smart contracts available to the organizations in that network. It means that only administrators need to worry about chaincode; everyone else can think in terms of smart contracts.

At the heart of a smart contract is a set of transaction definitions. For example, look at fabcar.js, where you can see a smart contract transaction that creates a new car:

async createCar(ctx, carNumber, make, model, color, owner) {

    // Assemble the car business object from the transaction's input parameters
    const car = {
        color,
        docType: 'car',
        make,
        model,
        owner,
    };

    // Record the new car in the world state, keyed by its car number
    await ctx.stub.putState(carNumber, Buffer.from(JSON.stringify(car)));
}

You can learn more about the Fabcar smart contract in the Writing Your First Application tutorial.

A smart contract can describe an almost infinite array of business use cases relating to immutability of data in multi-organizational decision making. The job of a smart contract developer is to take an existing business process that might govern financial prices or delivery conditions, and express it as a smart contract in a programming language such as JavaScript, Go or Java. The legal and technical skills required to convert centuries of legal language into programming language are increasingly practiced by smart contract auditors. You can learn how to design and develop a smart contract in the Developing Applications topic.

Endorsement

Associated with every chaincode is an endorsement policy that applies to all of the smart contracts defined within it. An endorsement policy is very important: it indicates which organizations in a blockchain network must sign a transaction generated by a given smart contract in order for that transaction to be declared valid.

_images/smartcontract.diagram.03.pngsmart.diagram3 Every smart contract has an endorsement policy associated with it. This endorsement policy identifies which organizations must approve transactions generated by the smart contract before those transactions can be identified as valid.

An example endorsement policy might define that three of the four organizations participating in a blockchain network must sign a transaction before it is considered valid. All transactions, whether valid or invalid, are added to a distributed ledger, but only valid transactions update the world state.

If an endorsement policy specifies that more than one organization must sign a transaction, then the smart contract must be executed by a sufficient set of organizations in order for a valid transaction to be generated. In the example above, a smart contract transaction to transfer a car would need to be executed and signed by both ORG1 and ORG2 for it to be valid.

Endorsement policies are what make Hyperledger Fabric different from other blockchains like Ethereum or Bitcoin. In those systems valid transactions can be generated by any node in the network. Hyperledger Fabric more realistically models the real world: transactions must be validated by trusted organizations in a network. For example, a government organization must sign a valid identity-document transaction, or both the buyer and seller of a car must sign a car transfer transaction. Endorsement policies are designed to allow Hyperledger Fabric to better model these types of real-world interactions.

Finally, endorsement policies are just one example of policy in Hyperledger Fabric. Other policies can be defined to identify who can query or update the ledger, or add or remove participants from the network. In general, policies should be agreed in advance by the consortium of organizations in a blockchain network, although they are not set in stone. Indeed, policies themselves can define the rules by which they can be changed. And, although an advanced topic, it is also possible to define custom endorsement policy rules over and above those provided by Fabric.

Valid transactions

When a smart contract executes, it runs on a peer node owned by an organization in the blockchain network. The contract takes a set of input parameters called the transaction proposal and uses them, in combination with its program logic, to read and write the ledger. Changes to the world state are captured as a transaction proposal response (or just transaction response) which contains a read-write set with both the states that have been read and the new states that are to be written if the transaction is valid. Notice that the world state is not updated when the smart contract is executed!

_images/smartcontract.diagram.04.pngsmart.diagram4 All transactions have an identifier, a proposal, and a response signed by a set of organizations. All transactions are recorded on the blockchain, whether valid or invalid, but only valid transactions contribute to the world state.

Examine the car transfer transaction. You can see a transaction t3 for a car transfer between ORG1 and ORG2. See how the transaction has input {CAR1, ORG1, ORG2} and output {CAR1.owner=ORG1, CAR1.owner=ORG2}, representing the change of owner from ORG1 to ORG2. Notice how the input is signed by the application's organization ORG1, and the output is signed by both organizations identified by the endorsement policy, ORG1 and ORG2. These signatures were generated using each actor's private key, and mean that anyone in the network can verify that all actors in the network are in agreement about the transaction details.

A transaction that is distributed to all peer nodes in the network is validated in two phases by each peer. Firstly, the transaction is checked against the endorsement policy to make sure it has been signed by sufficient organizations. Secondly, it is checked to make sure that the current value of the world state matches the read set of the transaction as it was when it was signed by the endorsing peer nodes; that is, that there have been no intermediate updates. If a transaction passes both these tests, it is marked as valid. All transactions are added to the blockchain history, whether valid or invalid, but only valid transactions result in an update to the world state.

In our example, t3 is a valid transaction, so the owner of CAR1 has been updated to ORG2. However, t4 (not shown) is an invalid transaction, so while it was recorded in the ledger, the world state was not updated, and CAR2 remains owned by ORG2.

Finally, to understand how to use a smart contract or chaincode with world state, read the chaincode namespace topic.

Channels

Hyperledger Fabric allows an organization to simultaneously participate in multiple, separate blockchain networks via channels. By joining multiple channels, an organization can participate in a so-called network of networks. Channels provide efficient sharing of infrastructure while maintaining data and communications privacy. They are independent enough to help organizations separate their work traffic with different counterparties, but integrated enough to allow them to coordinate independent activities when necessary.

_images/smartcontract.diagram.05.pngsmart.diagram5 A channel provides a completely separate communication mechanism between a set of organizations. When a chaincode is instantiated on a channel, an endorsement policy is defined for it; all the smart contracts within the chaincode are made available to the applications on that channel.

When a chaincode is instantiated on a channel, an administrator defines an endorsement policy for the chaincode, and it can be changed when the chaincode is upgraded. The endorsement policy applies equally to all smart contracts defined within the same chaincode deployed to that channel. It also means that a single smart contract can be deployed to different channels with different endorsement policies.

In the example above, the car contract is deployed to the vehicle channel, and the insurance contract is deployed to the insurance channel. The car contract has an endorsement policy that requires ORG1 and ORG2 to sign transactions before they are considered valid, whereas the insurance contract has an endorsement policy that only requires ORG3 to sign valid transactions. ORG1 participates in two networks, the vehicle channel and the insurance network, and can coordinate activity across these two networks with ORG2 and ORG3 respectively.

Intercommunication

A smart contract is able to call other smart contracts, both within the same channel and across different channels. In this way, it can read and write world state data to which it would not otherwise have access because of smart contract namespaces.

There are limitations to this inter-contract communication, which are described fully in the chaincode namespace topic.

System chaincode

The smart contracts defined within a chaincode encode the domain-dependent rules for a business process agreed between a set of blockchain organizations. However, a chaincode can also define low-level program code which corresponds to domain-independent system interactions, unrelated to the smart contracts for these business processes.

The following are the different types of system chaincodes and their associated abbreviations:

  • Lifecycle system chaincode (LSCC) runs in all peers to handle package signing, install, instantiate, and upgrade chaincode requests. You can read more about how the LSCC implements this process.

  • Configuration system chaincode (CSCC) runs in all peers to handle changes to a channel configuration, such as a policy update. You can read more about this process in the following chaincode topic.

  • Query system chaincode (QSCC) runs in all peers to provide ledger APIs which include block query, transaction query etc. You can read more about these ledger APIs in the transaction context topic.

  • Endorsement system chaincode (ESCC) runs in endorsing peers to cryptographically sign a transaction response. You can read more about how the ESCC implements this process.

  • Validation system chaincode (VSCC) validates a transaction, including checking endorsement policy and read-write set versioning. You can read more about how the VSCC implements this process.

It is possible for low-level Fabric developers and administrators to modify these system chaincodes for their own uses. However, the development and management of system chaincodes is a specialized activity, quite separate from the development of smart contracts, and is not usually necessary. Changes to system chaincodes must be handled with extreme care, as they are fundamental to the correct functioning of a Hyperledger Fabric network. For example, if a system chaincode is not developed correctly, one peer node may update its copy of the world state or blockchain differently to another peer node. This lack of consensus is one form of a ledger fork, a very undesirable situation.

Ledger

Audience: Architects, application and smart contract developers, administrators

A ledger is a key concept in Hyperledger Fabric; it stores important factual information about business objects: both the current value of the attributes of the objects, and the history of transactions that resulted in these current values.

In this topic, we're going to cover:

What is a Ledger?

A ledger contains the current state of a business as a journal of transactions. The earliest European and Chinese ledgers date from almost 1000 years ago, and the Sumerians had stone ledgers 4000 years ago, but let's start with a more up-to-date example!

You're probably used to looking at your bank account. What's most important to you is the available balance: it's what you're able to spend at this moment in time. If you want to see how your balance was derived, then you can look through the transaction credits and debits that determined it. This is a real life example of a ledger: a state (your bank balance), and a set of ordered transactions (credits and debits) that determine it. Hyperledger Fabric is motivated by these same two concerns: to present the current value of a set of ledger states, and to capture the history of the transactions that determined these states.

Ledgers, Facts and States

A ledger doesn't literally store business objects; instead, it stores facts about those objects. When we say 'we store a business object in a ledger', what we really mean is that we're recording the facts about the current state of an object, and the facts about the history of transactions that led to that current state. In an increasingly digital world, it can feel like we're looking at an object, rather than facts about an object. In the case of a digital object, it's likely that it lives in an external datastore; the facts we store in the ledger allow us to identify its location, along with other key information about it.

While the facts about the current state of a business object may change, the history of facts about it is immutable: it can be added to, but it cannot be retrospectively changed. We're going to see how thinking of a blockchain as an immutable history of facts about business objects is a simple yet powerful way to understand it.

Let's now take a closer look at the Hyperledger Fabric ledger structure!

The Ledger

In Hyperledger Fabric, a ledger consists of two distinct, though related, parts: a world state and a blockchain. Each of these represents a set of facts about a set of business objects.

Firstly, there's a world state: a database that holds a cache of the current values of a set of ledger states. The world state makes it easy for a program to directly access the current value of a state, rather than having to calculate it by traversing the entire transaction log. Ledger states are, by default, expressed as key-value pairs, and we'll see later how Hyperledger Fabric provides flexibility in this regard. The world state can change frequently, as states can be created, updated and deleted.

Secondly, there's a blockchain: a transaction log that records all the changes that have resulted in the current world state. Transactions are collected inside blocks that are appended to the blockchain, enabling you to understand the history of changes that have resulted in the current world state. The blockchain data structure is very different to the world state because, once written, it cannot be modified; it is immutable.

_images/ledger.diagram.1.pngledger.ledger A Ledger L comprises blockchain B and world state W, where blockchain B determines world state W. We can also say that world state W is derived from blockchain B.

It's helpful to think of there being one logical ledger in a Hyperledger Fabric network. In reality, the network maintains multiple copies of a ledger, which are kept consistent with every other copy through a process called consensus. The term Distributed Ledger Technology (DLT) is often associated with this kind of ledger: one that is logically singular, but has many consistent copies distributed throughout a network.

Let's now examine the world state and blockchain data structures in more detail.

World State

The world state holds the current values of the attributes of a business object as a unique ledger state. That's useful, because programs usually require the current value of an object; it would be cumbersome to traverse the entire blockchain to calculate an object's current value. You just get it directly from the world state.

_images/ledger.diagram.3.pngledger.worldstate A ledger world state containing two states. The first state is: key=CAR1 and value=Audi. The second state has a more complex value: key=CAR2 and value={model:BMW, color=red, owner=Jane}. Both states are at version 0.

A ledger state records a set of facts about a particular business object. Our example shows ledger states for two cars, CAR1 and CAR2, each having a key and a value. An application can invoke a smart contract which uses simple ledger APIs to get, put and delete states. Notice how a state value can be simple (Audi…) or compound (type:BMW…). The world state is often queried to retrieve objects with certain attributes, for example to find all red BMWs.

The world state is implemented as a database. This makes a lot of sense, because a database provides a rich set of operators for the efficient storage and retrieval of states. We'll see later that Hyperledger Fabric can be configured to use different world state databases to address the needs of different types of state values and the access patterns required by applications, for example in complex queries.

Applications submit transactions which capture changes to the world state, and these transactions ultimately get committed to the ledger blockchain. Applications are insulated from the details of this consensus mechanism by the Hyperledger Fabric SDK; they merely invoke a smart contract, and are notified when the transaction has been included in the blockchain (whether valid or invalid). The key design point is that only transactions that are signed by the required set of endorsing organizations will result in an update to the world state. If a transaction is not signed by sufficient endorsers, it will not result in a change of world state. You can read more about how applications use smart contracts, and how to develop applications.

You'll also notice that a state has a version number; in the diagram above, states CAR1 and CAR2 are at their starting version, 0. The version number is for internal use by Hyperledger Fabric, and is incremented every time the state changes. The version is checked whenever the state is updated, to make sure the current state matches the version at the time of endorsement. This ensures that the world state is changing as expected, and that there has not been a concurrent update.

Finally, when a ledger is first created, the world state is empty. Because any transaction which represents a valid change to the world state is recorded on the blockchain, the world state can be re-generated from the blockchain at any time. This can be very convenient; for example, the world state is automatically generated when a peer is created. Moreover, if a peer fails abnormally, the world state can be regenerated on peer restart, before transactions are accepted.

Blockchain

Let's now turn our attention from the world state to the blockchain. Whereas the world state contains a set of facts relating to the current state of a set of business objects, the blockchain is an historical record of the facts about how these objects arrived at their current states. The blockchain has recorded every previous version of each ledger state, and how it has been changed.

The blockchain is structured as a sequential log of interlinked blocks, where each block contains a sequence of transactions, and each transaction represents a query or update to the world state. The exact mechanism by which transactions are ordered is discussed elsewhere; what's important is that block sequencing, as well as transaction sequencing within blocks, is established when blocks are first created by a Hyperledger Fabric component called the ordering service.

Each block's header includes a hash of the block's transactions, as well as a copy of the hash of the prior block's header. In this way, all transactions on the ledger are sequenced and cryptographically linked together. This hashing and linking makes the ledger data very secure. Even if one node hosting the ledger were tampered with, it would not be able to convince all the other nodes that it has the 'correct' blockchain, because the ledger is distributed throughout a network of independent nodes.

The blockchain is always implemented as a file, in contrast to the world state, which uses a database. This is a sensible design choice, as the blockchain data structure is heavily biased towards a very small set of simple operations. Appending to the end of the blockchain is the primary operation, and query is currently a relatively infrequent operation.

Let's have a look at the structure of a blockchain in a little more detail.

_images/ledger.diagram.2.pngledger.blockchain A blockchain B containing blocks B0, B1, B2, B3. B0 is the first block in the blockchain, the genesis block.

In the diagram above, we can see that block B2 has a block data D2 which contains all of its transactions: T5, T6, T7.

Most importantly, B2 has a block header H2, which contains a cryptographic hash of all the transactions in D2, as well as a hash of the equivalent header in the previous block, B1. In this way, blocks are inextricably and immutably linked to each other, which the term blockchain so neatly captures!

Finally, as you can see in the diagram, the first block in the blockchain is called the genesis block. It's the starting point for the ledger, though it does not contain any user transactions. Instead, it contains a configuration transaction containing the initial state of the network channel (not shown). We discuss the genesis block in more detail when we discuss the blockchain network and channels in the documentation.

Blocks

Let's have a closer look at the structure of a block. It consists of three sections:

  • Block Header

    This section comprises three fields, written when a block is created.

    • Block number: An integer starting at 0 (the genesis block), and increased by 1 for every new block appended to the blockchain.

    • Current Block Hash: The hash of all the transactions contained in the current block.

    • Previous Block Hash: A copy of the hash from the previous block in the blockchain.

    These fields are internally derived by cryptographically hashing the block data. They ensure that each and every block is inextricably linked to its neighbour, leading to an immutable ledger.

    _images/ledger.diagram.4.pngledger.blocks Block header details. The header H2 of block B2 consists of block number 2, the hash CH2 of the current block data D2, and a copy of a hash PH1 from the previous block, block number 1.

  • Block Data

    This section contains a list of transactions arranged in order. It is written when the block is created by the ordering service. These transactions have a rich but straightforward structure, which we describe later in this topic.

  • Block Metadata

    This section contains the time when the block was written, as well as the certificate, public key and signature of the block writer. Subsequently, the block committer also adds a valid/invalid indicator for every transaction, though this information is not included in the hash, as that is created when the block is created.
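
To make the hashing and linking concrete, the following sketch demonstrates the chaining principle using Node's built-in crypto module. It is illustrative only; it does not reproduce Fabric's actual header encoding or hashing implementation:

const crypto = require('crypto');

const sha256 = (data) => crypto.createHash('sha256').update(data).digest('hex');

function makeHeader(number, transactions, previousHeader) {
    return {
        number, // block number: 0 for the genesis block, then +1 per block
        currentBlockHash: sha256(JSON.stringify(transactions)),
        previousBlockHash: previousHeader
            ? sha256(JSON.stringify(previousHeader))
            : null // the genesis block has no predecessor
    };
}

const h0 = makeHeader(0, ['config-tx'], null); // genesis block header
const h1 = makeHeader(1, ['T1', 'T2', 'T3', 'T4'], h0);
const h2 = makeHeader(2, ['T5', 'T6', 'T7'], h1);

// Tampering with any earlier block changes its header hash, which breaks
// the previousBlockHash recorded in every subsequent block.
console.log(h2.previousBlockHash === sha256(JSON.stringify(h1))); // true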

Transactions

As we've seen, a transaction captures changes to the world state. Let's have a look at the detailed blockdata structure which contains the transactions in a block.

_images/ledger.diagram.5.pngledger.transaction Transaction details. Transaction T4 in blockdata D1 of block B1 consists of transaction header, H4, a transaction signature, S4, a transaction proposal P4, a transaction response, R4, and a list of endorsements, E4.

In the example above, we can see the following fields:

  • Header

    This section, illustrated by H4, captures some essential metadata about the transaction – for example, the name of the relevant chaincode, and its version.

  • Signature

    This section, illustrated by S4, contains a cryptographic signature, created by the client application. This field is used to check that the transaction details have not been tampered with, as it requires the application’s private key to generate it.

  • Proposal

    This field, illustrated by P4, encodes the input parameters supplied by an application to the smart contract which creates the proposed ledger update. When the smart contract runs, this proposal provides a set of input parameters, which, in combination with the current world state, determines the new world state.

  • Response

    This section, illustrated by R4, captures the before and after values of the world state, as a Read Write set (RW-set). It’s the output of a smart contract, and if the transaction is successfully validated, it will be applied to the ledger to update the world state.

  • Endorsements

    As shown in E4, this is a list of signed transaction responses from each required organization sufficient to satisfy the endorsement policy. You’ll notice that, whereas only one transaction response is included in the transaction, there are multiple endorsements. That’s because each endorsement effectively encodes its organization’s particular transaction response – meaning that there’s no need to include any transaction response that doesn’t match sufficient endorsements as it will be rejected as invalid, and not update the world state.

That concludes the major fields of a transaction. There are others, but these are the essential ones that you need to understand in order to have a solid understanding of the ledger data structure.

World State database options

The world state is physically implemented as a database, to provide simple and efficient storage and retrieval of ledger states. As we've seen, ledger states can have simple or compound values, and to accommodate this, the world state database implementation can vary, allowing these values to be efficiently implemented. Options for the world state database currently include LevelDB and CouchDB.

LevelDB is the default, and is particularly appropriate when ledger states are simple key-value pairs. A LevelDB database is closely co-located with a network node: it is embedded within the same operating system process.

CouchDB is a particularly appropriate choice when ledger states are structured as JSON documents, because CouchDB supports the rich queries and updates of the richer data types often found in business transactions. Implementation-wise, CouchDB runs in a separate operating system process, but there is still a 1:1 relation between a peer node and a CouchDB instance. All of this is invisible to a smart contract. See CouchDB as the State Database for more information on CouchDB.

In LevelDB and CouchDB, we see an important aspect of Hyperledger Fabric: it is pluggable. The world state database could be a relational data store, a graph store, or a temporal database. This provides great flexibility in the types of ledger states that can be efficiently accessed, allowing Hyperledger Fabric to address many different types of problems.
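
For example, with CouchDB as the state database, a smart contract can issue a rich query over state values. Here is a minimal sketch; the selector fields mirror the fabcar-style JSON states shown earlier, and getQueryResult only supports rich queries on a database such as CouchDB:

async function queryRedBMWs(ctx) {
    // A CouchDB "Mango" selector expressed as a JSON query string
    const queryString = JSON.stringify({
        selector: { docType: 'car', make: 'BMW', color: 'red' }
    });

    const iterator = await ctx.stub.getQueryResult(queryString);
    const results = [];
    while (true) {
        const res = await iterator.next();
        if (res.value && res.value.value) {
            results.push(JSON.parse(res.value.value.toString('utf8')));
        }
        if (res.done) {
            await iterator.close();
            return results;
        }
    }
}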

Example Ledger: fabcar

As we close our topic on the ledger, let's have a look at a sample ledger. If you've run the fabcar sample application, then you've created this ledger.

The fabcar sample app creates a set of 10 cars, each with a unique identity, a different color, make, model and owner. Here's what the ledger looks like after the first four cars have been created.

_images/ledger.diagram.6.pngledger.transaction The ledger, L, comprises a world state, W and a blockchain, B. W contains four states with keys: CAR1, CAR2, CAR3 and CAR4. B contains two blocks, 0 and 1. Block 1 contains four transactions: T1, T2, T3, T4.

We can see that the world state contains states that correspond to CAR0, CAR1, CAR2 and CAR3. CAR0 has a value which indicates that it is a blue Toyota Prius, currently owned by Tomoko, and we can see similar states and values for the other cars. Moreover, we can see that all car states are at version number 0, indicating that this is their starting version number; they have not been updated since they were created.

We can also see that the blockchain contains two blocks. Block 0 is the genesis block, though it does not contain any transactions that relate to cars. Block 1, however, contains transactions T1, T2, T3, T4, which correspond to the transactions that created the initial states for CAR0 to CAR3 in the world state. We can see that block 1 is linked to block 0.

We have not shown the other fields in the blocks or transactions, specifically headers and hashes. If you're interested in the precise details of these, you will find a dedicated reference topic elsewhere in the documentation. It gives you a fully worked example of an entire block with its transactions in glorious detail; but for now, you have achieved a solid conceptual understanding of a Hyperledger Fabric ledger. Well done!

Namespaces

Even though we have presented the ledger as though it were a single world state and a single blockchain, that's a little bit of an over-simplification. In fact, each chaincode has its own world state that is separate from all other chaincodes. World states are in a namespace, so that only smart contracts within the same chaincode can access a given namespace.

A blockchain is not namespaced. It contains transactions from many different smart contract namespaces. You can read more about chaincode namespaces in this topic.

Let's now look at how the concept of a namespace is applied within a Hyperledger Fabric channel.

Channels

In Hyperledger Fabric, each channel has a completely separate ledger. This means a completely separate blockchain, and completely separate world states, including namespaces. It is possible for applications and smart contracts to communicate between channels, so that ledger information can be accessed between them.

You can read more about how ledgers work with channels in this topic.

More information

For a deeper dive into transaction flow, concurrency control, and the world state database, see the Transaction Flow, Read-Write Set Semantics, and CouchDB as the State Database topics.

The Ordering Service

Audience: Architects, ordering service admins, channel creators

This topic serves as a conceptual introduction to the concept of ordering, how orderers interact with peers, the role they play in a transaction flow, and an overview of the currently available implementations of the ordering service, with a particular focus on the Raft ordering service implementation.

What is ordering?

Many distributed blockchains, such as Ethereum and Bitcoin, are not permissioned, which means that any node can participate in the consensus process, wherein transactions are ordered and bundled into blocks. Because of this fact, these systems rely on probabilistic consensus algorithms which eventually guarantee ledger consistency to a high degree of probability, but which are still vulnerable to divergent ledgers (also known as a ledger 'fork'), where different participants in the network have a different view of the accepted order of transactions.

Hyperledger Fabric works differently. It features a kind of node called an orderer (also known as an 'ordering node') that does this transaction ordering and which, along with other orderer nodes, forms an ordering service. Because Fabric's design relies on deterministic consensus algorithms, any block a peer validates as generated by the ordering service is guaranteed to be final and correct. Ledgers cannot fork the way they do in many other distributed blockchains.

In addition to promoting finality, separating the endorsement of chaincode execution (which happens at the peers) from ordering gives Fabric advantages in performance and scalability, eliminating bottlenecks which can occur when execution and ordering are performed by the same nodes.

Orderer nodes and channel configuration

In addition to their ordering role, orderers also maintain the list of organizations that are allowed to create channels. This list of organizations is known as the 'consortium', and the list itself is kept in the configuration of the 'orderer system channel' (also known as the 'ordering system channel'). By default, this list, and the channel it lives on, can only be edited by the orderer admin. Note that it is possible for an ordering service to hold several of these lists, which makes the consortium a vehicle for Fabric multi-tenancy.

Orderers also enforce basic access control for channels, restricting who can read and write data to them, and who can configure them. Remember that who is authorized to modify a configuration element in a channel is subject to the policies that the relevant administrators set when they created the consortium or the channel. Configuration transactions are processed by the orderer, as it needs to know the current set of policies to execute its basic form of access control. In this case, the orderer processes the configuration update to make sure that the requestor has the proper administrative rights. If so, the orderer validates the update request against the existing configuration, generates a new configuration transaction, and packages it into a block that is relayed to all peers on the channel. The peers then process the configuration transaction in order to verify that the modifications approved by the orderer do indeed satisfy the policies defined in the channel.

Orderer nodes and Identity

Everything that interacts with a blockchain network, including peers, applications, admins, and orderers, acquires its organizational identity from its digital certificate and its Membership Service Provider (MSP) definition.

For more information about identities and MSPs, check out our documentation on identity and membership.

Just like peers, ordering nodes belong to an organization. And, similar to peers, a separate Certificate Authority (CA) should be used for each organization. Whether this CA will function as the root CA, or whether you choose to deploy a root CA and then intermediate CAs associated with that root CA, is up to you.

Orderers and the transaction flow

Phase one: Proposal

We've seen from our topic on peers that they form the basis for a blockchain network, hosting ledgers which can be queried and updated by applications through smart contracts.

Specifically, applications that want to update the ledger are involved in a process with three phases, which ensures that all of the peers in a blockchain network keep their ledgers consistent with each other.

In the first phase, a client application sends a transaction proposal to a subset of peers that will invoke a smart contract to produce a proposed ledger update, and then endorse the results. The endorsing peers do not apply the proposed update to their copy of the ledger at this time. Instead, the endorsing peers return a proposal response to the client application. The endorsed transaction proposals will ultimately be ordered into blocks in phase two, and then distributed to all peers for final validation and commit in phase three.

For an in-depth look at the first phase, refer back to the Peers topic.

Phase two: Ordering and packaging transactions into blocks

After the completion of the first phase of a transaction, a client application has received an endorsed transaction proposal response from a set of peers. It's now time for the second phase of a transaction.

In this phase, application clients submit transactions containing endorsed transaction proposal responses to an ordering service node. The ordering service creates blocks of transactions, which will ultimately be distributed to all peers on the channel for final validation and commit in phase three.

Ordering service nodes receive transactions from many different application clients concurrently. These ordering service nodes work together to collectively form the ordering service. Its job is to arrange batches of submitted transactions into a well-defined sequence and package them into blocks. These blocks will become the blocks of the blockchain!

The number of transactions in a block depends on channel configuration parameters related to the desired size of, and maximum elapsed duration for, a block (the BatchSize and BatchTimeout parameters, to be exact). The blocks are then saved to the orderer's ledger and distributed to all peers that have joined the channel. If a peer happens to be down at this time, or joins the channel later, it will receive the blocks after reconnecting to an ordering service node, or by gossiping with another peer. We'll see how this block is processed by peers in the third phase.

_images/orderer.diagram.1.pngOrderer1

The first role of an ordering node is to package proposed ledger updates. In this example, application A1 sends a transaction T1, endorsed by E1 and E2, to the orderer O1. In parallel, application A2 sends transaction T2, endorsed by E1, to the orderer O1. O1 packages transaction T1 from application A1 and transaction T2 from application A2, together with other transactions from other applications in the network, into block B2. We can see that in B2, the transaction order is T1, T2, T3, T4, T6, T5, which may not be the order in which these transactions arrived at the orderer! (This example shows a very simplified ordering service configuration with only one ordering node.)

It's worth noting that the sequencing of transactions in a block is not necessarily the same as the order they were received by the ordering service, since there can be multiple ordering service nodes that receive transactions at approximately the same time. What's important is that the ordering service puts the transactions into a strict order, and peers will use this order when validating and committing transactions.

This strict ordering of transactions within blocks makes Hyperledger Fabric a little different from other blockchains, where the same transaction can be packaged into multiple different blocks that compete to form a chain. In Hyperledger Fabric, the blocks generated by the ordering service are final. Once a transaction has been written to a block, its position in the ledger is immutably assured. As we said earlier, Hyperledger Fabric's finality means that there are no ledger forks: validated transactions will never be reverted or dropped.

We can also see that, whereas peers execute smart contracts and process transactions, orderers most definitely do not. Every authorized transaction that arrives at an orderer is mechanically packaged into a block; the orderer makes no judgement as to the content of a transaction (except for channel configuration transactions, as mentioned earlier).

At the end of phase two, we see that orderers have been responsible for the simple but vital processes of collecting proposed transaction updates, ordering them, and packaging them into blocks, ready for distribution.

Phase three: Validation and commit

The third phase of the transaction workflow involves the distribution and subsequent validation of blocks from the orderer to the peers, where they can be applied to the ledger.

Phase 3 begins with the orderer distributing blocks to all peers connected to it. It's also worth noting that not every peer needs to be connected to an orderer: peers can cascade blocks to other peers using the gossip protocol.

Each peer will validate distributed blocks independently, but in a deterministic fashion, ensuring that ledgers remain consistent. Specifically, each peer on the channel will validate each transaction in the block to ensure that it has been endorsed by the required organizations' peers, that its endorsements match, and that it hasn't been invalidated by other recently committed transactions. Invalidated transactions are still retained in the immutable block created by the orderer, but the peer marks them as invalid and does not update the ledger's state with them.

_images/orderer.diagram.2.pngOrderer2

The second role of an ordering node is to distribute blocks to peers. In this example, orderer O1 distributes block B2 to peer P1 and peer P2. Peer P1 processes block B2, resulting in a new block being added to ledger L1 on P1. In parallel, peer P2 processes block B2, resulting in a new block being added to ledger L1 on P2. Once this process is complete, the ledger L1 has been consistently updated on peers P1 and P2, and each may inform connected applications that the transaction has been processed.

In summary, phase three sees the blocks generated by the ordering service applied consistently to the ledger. The strict ordering of transactions into blocks allows each peer to validate that transaction updates are consistently applied across the blockchain network.

For a deeper look at phase 3, refer back to the Peers topic.

Ordering service implementations

While every ordering service currently available handles transactions and configuration updates the same way, there are nevertheless several different implementations for achieving consensus on the strict ordering of transactions between ordering service nodes.

For information about how to stand up an ordering node (regardless of the implementation the node will be used in), check out our documentation on standing up an ordering node.

  • Solo

    The Solo implementation of the ordering service is aptly named: it features only a single ordering node. As a result, it is not, and never will be, fault tolerant. For that reason, Solo implementations cannot be considered for production, but they are a good choice for testing applications and smart contracts, or for creating proofs of concept. However, if you ever want to extend this PoC network into production, you might want to start with a single node Raft cluster, as it may be reconfigured to add additional nodes.

  • Raft

    New as of v1.4.1, Raft is a crash fault tolerant (CFT) ordering service based on an implementation of the Raft protocol in etcd. Raft follows a “leader and follower” model, where a leader node is elected (per channel) and its decisions are replicated by the followers. Raft ordering services should be easier to set up and manage than Kafka-based ordering services, and their design allows different organizations to contribute nodes to a distributed ordering service.

  • Kafka

    Similar to Raft-based ordering, Apache Kafka is a CFT implementation that uses a “leader and follower” node configuration. Kafka utilizes a ZooKeeper ensemble for management purposes. The Kafka based ordering service has been available since Fabric v1.0, but many users may find the additional administrative overhead of managing a Kafka cluster intimidating or undesirable.

Solo

As noted above, a Solo ordering service is a good choice when developing test, development, or proof-of-concept networks. For that reason, it is the default ordering service deployed in our Build Your First Network tutorial. From the perspective of other network components, a Solo ordering service processes transactions identically to the more elaborate Kafka and Raft implementations, while saving on the administrative overhead of maintaining and upgrading multiple nodes and clusters. Because a Solo ordering service is not crash fault tolerant, it should never be considered a viable alternative for a production blockchain network. For networks that wish to start with only a single ordering node but might wish to grow in the future, a single-node Raft cluster is a better option.

Raft

For information on how to configure a Raft ordering service, check out our documentation on configuring a Raft ordering service.

The go-to ordering service choice for production networks, the Fabric implementation of the established Raft protocol uses a 'leader and follower' model, in which a leader is dynamically elected among the ordering nodes in a channel (this collection of nodes is known as the 'consenter set'), and that leader replicates messages to the follower nodes. Because the system can sustain the loss of nodes, including leader nodes, as long as a majority of ordering nodes (what's known as a 'quorum') remains, Raft is said to be 'crash fault tolerant' (CFT). In other words, if there are three nodes in a channel, it can withstand the loss of one node (leaving two remaining). If you have five nodes in a channel, you can lose two nodes (leaving three remaining nodes).
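
The quorum arithmetic is a simple strict majority, as this one-line sketch shows:

// Quorum for a Raft consenter set of n nodes: a strict majority.
const quorum = (n) => Math.floor(n / 2) + 1;

console.log(quorum(3)); // 2 -> tolerates 1 failed node
console.log(quorum(5)); // 3 -> tolerates 2 failed nodes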

From the perspective of the service they provide to a network or a channel, Raft and the existing Kafka-based ordering service (which we'll talk about later) are similar. They're both CFT ordering services using the leader and follower design. If you are an application developer, smart contract developer, or peer administrator, you will not notice a functional difference between an ordering service based on Raft versus Kafka. However, there are a few major differences worth considering, especially if you intend to manage an ordering service:

  • Raft is easier to set up. Although Kafka has scores of admirers, even those admirers will (usually) admit that deploying a Kafka cluster and its ZooKeeper ensemble can be tricky, requiring a high level of expertise in Kafka infrastructure and settings. Additionally, there are many more components to manage with Kafka than with Raft, which means that there are more places where things can go wrong. And Kafka has its own versions, which must be coordinated with your orderers. With Raft, everything is embedded into your ordering node.

  • Kafka and Zookeeper are not designed to be run across large networks. They are designed to be CFT but should be run in a tight group of hosts. This means that practically speaking you need to have one organization run the Kafka cluster. Given that, having ordering nodes run by different organizations when using Kafka (which Fabric supports) doesn’t give you much in terms of decentralization because the nodes will all go to the same Kafka cluster which is under the control of a single organization. With Raft, each organization can have its own ordering nodes, participating in the ordering service, which leads to a more decentralized system.

  • Raft is supported natively. While Kafka-based ordering services are currently compatible with Fabric, users are required to get the requisite images and learn how to use Kafka and ZooKeeper on their own. Likewise, support for Kafka-related issues is handled through Apache, the open-source developer of Kafka, not Hyperledger Fabric. The Fabric Raft implementation, on the other hand, has been developed and will be supported within the Fabric developer community and its support apparatus.

  • Where Kafka uses a pool of servers (called “Kafka brokers”) and the admin of the orderer organization specifies how many nodes they want to use on a particular channel, Raft allows the users to specify which ordering nodes will be deployed to which channel. In this way, peer organizations can make sure that, if they also own an orderer, this node will be made part of the ordering service of that channel, rather than trusting and depending on a central admin to manage the Kafka nodes.

  • Raft is the first step toward Fabric’s development of a byzantine fault tolerant (BFT) ordering service. As we’ll see, some decisions in the development of Raft were driven by this. If you are interested in BFT, learning how to use Raft should ease the transition.

Note: Similar to Solo and Kafka, it is possible for a Raft ordering service to lose transactions after acknowledgement of receipt has been sent to the client; for example, if the leader crashes at approximately the same time as a follower provides acknowledgement of receipt. Therefore, application clients should listen on peers for transaction commit events regardless (to check for transaction validity), but extra care should be taken to ensure that the client also gracefully tolerates a timeout in which the transaction does not get committed within a configured timeframe. Depending on the application, it may be desirable to resubmit the transaction or collect a new set of endorsements upon such a timeout.

Raft concepts

While Raft offers many of the same features as Kafka, albeit in a simpler and easier-to-use package, it functions substantially differently under the covers from Kafka, and it introduces a number of new concepts, or twists on existing concepts, to Fabric.

Log entry. The primary unit of work in a Raft ordering service is a 'log entry', with the full sequence of such entries known as the 'log'. We consider the log consistent if a majority (a quorum, in other words) of members agree on the entries and their order, making the logs on the various orderers replicated.

Consenter set. The ordering nodes actively participating in the consensus mechanism for a given channel and receiving replicated logs for the channel. This can be all of the available nodes (either in a single cluster or in multiple clusters), or a subset of those nodes.

Finite-State Machine (FSM). Every ordering node in Raft has an FSM, and collectively they're used to ensure that the sequence of logs in the various ordering nodes is deterministic (written in the same sequence).

Quorum. Describes the minimum number of consenters that need to affirm a proposal so that transactions can be ordered. For every consenter set, this is a majority of nodes. In a cluster with five nodes, three must be available for there to be a quorum. If a quorum of nodes is unavailable for any reason, the ordering service cluster becomes unavailable for both read and write operations on the channel, and no new logs can be committed.

Leader. This is not a new concept (Kafka also uses leaders, as we've said), but it's critical to understand that at any given time, a channel's consenter set elects a single node to be the leader (we'll describe how this happens in Raft later). The leader is responsible for ingesting new log entries, replicating them to follower ordering nodes, and managing when an entry is considered committed. This is not a special type of orderer. It is only a role that an orderer may have at certain times, and then not others, as circumstances determine.

Follower. Again, not a new concept, but what's critical to understand about followers is that they receive the logs from the leader and replicate them deterministically, ensuring that logs remain consistent. As we'll see in our section on leader election, the followers also receive 'heartbeat' messages from the leader. In the event that the leader stops sending those messages for a configurable amount of time, the followers will initiate a leader election, and one of them will be elected the new leader.

Raft in a transaction flow

Every channel runs on a separate instance of the Raft protocol, which allows each instance to elect a different leader. This configuration also allows further decentralization of the service, in use cases where clusters are made up of ordering nodes controlled by different organizations. While all Raft nodes must be part of the system channel, they do not necessarily have to be part of all application channels. Channel creators (and channel admins) have the ability to pick a subset of the available orderers, and to add or remove ordering nodes as needed (as long as only a single node is added or removed at a time).

While this configuration creates more overhead in the form of redundant heartbeat messages and threads, it lays necessary groundwork for BFT.

In Raft, transactions (in the form of proposals or configuration updates) are automatically routed by the ordering node that receives the transaction to the current leader of that channel. This means that peers and applications do not need to know who the leader node is at any particular time; only the ordering nodes need to know.

When the orderer validation checks have been completed, the transactions are ordered, packaged into blocks, consented on, and distributed, as described in phase two of our transaction flow.

Architectural notes
How leader election works in Raft

Although the process of electing a leader happens within the orderer's internal processes, it's worth noting how the process works.

Raft nodes are always in one of three states: follower, candidate, or leader. All nodes initially start out as a follower. In this state, they can accept log entries from a leader (if one has been elected), or cast votes for a leader. If no log entries or heartbeats are received for a set amount of time (for example, five seconds), nodes self-promote to the candidate state. In the candidate state, nodes request votes from other nodes. If a candidate receives a quorum of votes, then it is promoted to a leader. The leader must accept new log entries and replicate them to the followers.

For a visual representation of how the leader election process works, check out The Secret Lives of Data.

Snapshots

If an ordering node goes down, how does it get the logs it missed when it is restarted?

While it's possible to keep all logs indefinitely, in order to save disk space, Raft uses a process called 'snapshotting', in which users can define how many bytes of data will be kept in the log. This amount of data will conform to a certain number of blocks (which depends on the amount of data in the blocks; note that only full blocks are stored in a snapshot).

For example, let's say a lagging replica R1 has just been reconnected to the network. Its latest block is 100. Leader L is at block 196, and is configured to snapshot an amount of data that, in this example, represents 20 blocks. R1 would therefore receive block 180 from L, and then make a request for blocks 101 to 180. Blocks 180 to 196 would then be replicated to R1 through the normal Raft protocol.

Kafka

The other crash fault tolerant ordering service supported by Fabric is an adaptation of the Kafka distributed streaming platform for use as a cluster of ordering nodes. You can read more about Kafka at the Apache Kafka website, but at a high level, Kafka uses the same conceptual 'leader and follower' configuration used by Raft, in which transactions (which Kafka calls 'messages') are replicated from the leader node to the follower nodes. In the event the leader node goes down, one of the followers becomes the leader and ordering can continue, ensuring fault tolerance, just as with Raft.

The management of the Kafka cluster, including the coordination of tasks, cluster membership, access control, and controller election, among others, is handled by a ZooKeeper ensemble and its related APIs.

Kafka clusters and ZooKeeper ensembles are notoriously tricky to set up, so our documentation assumes a working knowledge of Kafka and ZooKeeper. If you decide to use Kafka without having this expertise, you should complete, at a minimum, the first six steps of the Kafka Quickstart guide before experimenting with the Kafka-based ordering service. You can also consult this sample configuration file for a brief explanation of the sensible defaults for Kafka and ZooKeeper.

To learn how to bring up a Kafka-based ordering service, check out our documentation on Kafka.

Private data

What is private data?

In cases where a group of organizations on a channel need to keep data private from other organizations on that channel, they have the option to create a new channel comprising just the organizations who need access to the data. However, creating separate channels in each of these cases creates additional administrative overhead (maintaining chaincode versions, policies, MSPs, etc), and doesn't allow for use cases in which you want all channel participants to see a transaction while keeping a portion of the data private.

That's why, starting in v1.2, Fabric offers the ability to create private data collections, which allow a defined subset of organizations on a channel the ability to endorse, commit, or query private data without having to create a separate channel.

What is a private data collection?

A private data collection consists of two elements:

  1. The actual private data, sent peer-to-peer via gossip protocol to only the organization(s) authorized to see it. This data is stored in a private state database on the peers of authorized organizations (sometimes called a “side” database, or “SideDB”), which can be accessed from chaincode on these authorized peers. The ordering service is not involved here and does not see the private data. Note that because gossip distributes the private data peer-to-peer across authorized organizations, it is required to set up anchor peers on the channel, and configure CORE_PEER_GOSSIP_EXTERNALENDPOINT on each peer, in order to bootstrap cross-organization communication.

  2. A hash of that data, which is endorsed, ordered, and written to the ledgers of every peer on the channel. The hash serves as evidence of the transaction and is used for state validation and can be used for audit purposes.

The following diagram illustrates the ledger contents of a peer authorized to have private data, and of a peer which is not.

_images/PrivateDataConcept-2.pngprivate-data.private-data

Collection members may decide to share the private data with other parties if they get into a dispute, or if they want to transfer the asset to a third party. The third party can then compute the hash of the private data and see if it matches the state on the channel ledger, proving that the state existed between the collection members at a certain point in time.

When to use a collection within a channel vs. a separate channel
  • Use channels when entire transactions (and ledgers) must be kept confidential within a set of organizations that are members of the channel.

  • Use collections when transactions (and ledgers) must be shared among a set of organizations, but when only a subset of those organizations should have access to some (or all) of the data within a transaction. Additionally, since private data is disseminated peer-to-peer rather than via blocks, use private data collections when transaction data must be kept confidential from ordering service nodes.

A use case to explain collections

Consider a channel made up of five organizations who trade produce:

  • A Farmer selling his goods abroad

  • A Distributor moving goods abroad

  • A Shipper moving goods between parties

  • A Wholesaler purchasing goods from distributors

  • A Retailer purchasing goods from shippers and wholesalers

The Distributor may want to make private transactions with the Farmer and the Shipper to keep the terms of the trades confidential from the Wholesaler and the Retailer (so as not to expose the markup they are charging).

The Distributor may also want to have a separate private data relationship with the Wholesaler, because it charges the Wholesaler a lower price than it does the Retailer.

The Wholesaler may also want to have a private data relationship with the Retailer and the Shipper.

Rather than defining many small channels for each of these relationships, multiple private data collections (PDC) can be defined to share private data between:

  1. PDC1: Distributor, Farmer and Shipper

  2. PDC2: Distributor and Wholesaler

  3. PDC3: Wholesaler, Retailer and Shipper

_images/PrivateDataConcept-1.pngprivate-data.private-data

Using this example, peers owned by the Distributor will have multiple private databases inside their ledger, which include the private data from the Distributor, Farmer and Shipper relationship, and from the Distributor and Wholesaler relationship. Because these databases are kept separate from the database that holds the channel ledger, private data is sometimes referred to as 'SideDB'.

_images/PrivateDataConcept-3.pngprivate-data.private-data

Transaction flow with private data

When private data collections are referenced in chaincode, the transaction flow is slightly different, in order to protect the confidentiality of the private data as transactions are proposed, endorsed, and committed to the ledger.

For details on transaction flows that don't use private data, refer to our documentation on transaction flow.

  1. The client application submits a proposal request to invoke a chaincode function (reading or writing private data) to endorsing peers which are part of authorized organizations of the collection. The private data, or data used to generate private data in chaincode, is sent in a transient field of the proposal.

  2. The endorsing peers simulate the transaction and store the private data in a transient data store (a temporary storage local to the peer). They distribute the private data, based on the collection policy, to authorized peers via gossip.

  3. The endorsing peer sends the proposal response back to the client. The proposal response includes the endorsed read/write set, which includes public data, as well as a hash of any private data keys and values. No private data is sent back to the client. For more information on how endorsement works with private data, click here.

  4. The client application submits the transaction (which includes the proposal response with the private data hashes) to the ordering service. The transactions with the private data hashes get included in blocks as normal. The block with the private data hashes is distributed to all the peers. In this way, all peers on the channel can validate transactions with the hashes of the private data in a consistent way, without knowing the actual private data.

  5. At block commit time, authorized peers use the collection policy to determine if they are authorized to have access to the private data. If they do, they will first check their local transient data store to determine if they have already received the private data at chaincode endorsement time. If not, they will attempt to pull the private data from another authorized peer. Then they will validate the private data against the hashes in the public block and commit the transaction and the block. Upon validation/commit, the private data is moved to their copy of the private state database and private writeset storage. The private data is then deleted from the transient data store.
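
As an illustration of the transient field described in step 1 above, here is a minimal client-side sketch, assuming the fabric-network v1.4 Transaction API; the transaction name and the transient key are hypothetical, and must match what the chaincode reads via getTransient():

async function createPrivateOrder(contract) {
    const order = { price: 4940000, quantity: 10000 };

    // Transient values are Buffers; they are sent to endorsing peers but
    // are not recorded in the public transaction on the channel ledger.
    await contract.createTransaction('createOrder')
        .setTransient({
            order: Buffer.from(JSON.stringify(order))
        })
        .submit('ORDER1');
}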

Purging private data

For very sensitive data, even the parties sharing the private data might want, or might be required by government regulations, to periodically 'purge' the data on their peers, leaving behind a hash of the data on the blockchain to serve as immutable evidence of the private data.

In some of these cases, the private data only needs to exist on the peer's private database until it can be replicated into a database external to the peer's blockchain. The data might also only need to exist on the peers until a chaincode business process is done with it (trade settled, contract fulfilled, etc).

To support these use cases, private data can be purged if it has not been modified for a configurable number of blocks, N. Purged private data cannot be queried from chaincode, and is not available to other requesting peers.
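
The purge window is configured with the blockToLive property of a collection definition. Here is a minimal sketch of such a JSON collection definition (the collection and organization names and peer counts are illustrative; a blockToLive of 0 keeps the data indefinitely):

[
  {
    "name": "collectionDistributorWholesaler",
    "policy": "OR('DistributorMSP.member', 'WholesalerMSP.member')",
    "requiredPeerCount": 1,
    "maxPeerCount": 2,
    "blockToLive": 100,
    "memberOnlyRead": true
  }
]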

How a private data collection is defined

For more details on collection definitions, and other low-level information about private data and collections, refer to the private data reference topic.

Use cases

The Hyperledger Requirements working group is documenting a number of blockchain use cases and maintains an inventory (https://wiki.hyperledger.org/groups/requirements/use case inventory).

Getting Started

Prerequisites

Before we begin, if you haven’t already done so, you may wish to check that you have all the prerequisites below installed on the platform(s) on which you’ll be developing blockchain applications and/or operating Hyperledger Fabric.

Install cURL

Download the latest version of the cURL tool if it is not already installed or if you get errors running the curl commands from the documentation.

注解

If you’re on Windows please see the specific note on Windows extras below.

Docker and Docker Compose

You will need the following installed on the platform on which you will be operating, or developing on (or for), Hyperledger Fabric:

  • MacOSX, *nix, or Windows 10: Docker version 17.06.2-ce or greater is required.

  • Older versions of Windows: Docker Toolbox - again, Docker version 17.06.2-ce or greater is required.

You can check the version of Docker you have installed with the following command from a terminal prompt:

docker --version

注解

Installing Docker for Mac or Windows, or Docker Toolbox will also install Docker Compose. If you already had Docker installed, you should check that you have Docker Compose version 1.14.0 or greater installed. If not, we recommend that you install a more recent version of Docker.

You can check the version of Docker Compose you have installed with the following command from a terminal prompt:

docker-compose --version

Go Programming Language

Hyperledger Fabric uses the Go Programming Language for many of its components.

  • Go version 1.12.x is required.

Given that we will be writing chaincode programs in Go, there are two environment variables you will need to set properly; you can make these settings permanent by placing them in the appropriate startup file, such as your personal ~/.bashrc file if you are using the bash shell under Linux.

First, you must set the environment variable GOPATH to point at the Go workspace containing the downloaded Fabric code base, with something like:

export GOPATH=$HOME/go

注解

You must set the GOPATH variable

Even though, in Linux, Go’s GOPATH variable can be a colon-separated list of directories, and will use a default value of $HOME/go if it is unset, the current Fabric build framework still requires you to set and export that variable, and it must contain only the single directory name for your Go workspace. (This restriction might be removed in a future release.)

Second, you should (again, in the appropriate startup file) extend your command search path to include the Go bin directory, such as the following example for bash under Linux:

export PATH=$PATH:$GOPATH/bin

While this directory may not exist in a new Go workspace installation, it is populated later by the Fabric build system with a small number of Go executables used by other parts of the build system. So even if you currently have no such directory yet, extend your shell search path as above.

Node.js Runtime and NPM

If you will be developing applications for Hyperledger Fabric leveraging the Hyperledger Fabric SDK for Node.js, you will need to have version 8.9.x of Node.js installed.

注解

Versions other than the 8.x series are not supported at this time.

注解

Installing Node.js will also install NPM, however it is recommended that you confirm the version of NPM installed. You can upgrade the npm tool with the following command:

npm install npm@5.6.0 -g

Python

注解

The following applies to Ubuntu 16.04 users only.

By default Ubuntu 16.04 comes with Python 3.5.1 installed as the python3 binary. The Fabric Node.js SDK requires an iteration of Python 2.7 in order for npm install operations to complete successfully. Retrieve the 2.7 version with the following command:

sudo apt-get install python

Check your version(s):

python --version

Windows extras

If you are developing on Windows 7, you will want to work within the Docker Quickstart Terminal which uses Git Bash and provides a better alternative to the built-in Windows shell.

However experience has shown this to be a poor development environment with limited functionality. It is suitable to run Docker based scenarios, such as Getting Started, but you may have difficulties with operations involving the make and docker commands.

On Windows 10 you should use the native Docker distribution and you may use the Windows PowerShell. However, for the binaries command to succeed you will still need to have the uname command available. You can get it as part of Git but beware that only the 64bit version is supported.

Before running any git clone commands, run the following commands:

git config --global core.autocrlf false
git config --global core.longpaths true

You can check the setting of these parameters with the following commands:

git config --get core.autocrlf
git config --get core.longpaths

These need to be false and true respectively.

The curl command that comes with Git and Docker Toolbox is old and does not properly handle the redirect used in Getting Started. Make sure you install and use a newer version from the cURL downloads page.

For Node.js you also need the necessary Visual Studio C++ Build Tools which are freely available and can be installed with the following command:

npm install --global windows-build-tools

See the NPM windows-build-tools page for more details.

Once this is done, you should also install the NPM GRPC module with the following command:

npm install --global grpc

Your environment should now be ready to go through the Getting Started samples and tutorials.

注解

If you have questions not addressed by this documentation, or run into issues with any of the tutorials, please visit the Still Have Questions? page for some tips on where to find additional help.

Install Samples, Binaries and Docker Images

While we work on developing real installers for the Hyperledger Fabric binaries, we provide a script that will download and install samples and binaries to your system. We think that you’ll find the sample applications installed useful to learn more about the capabilities and operations of Hyperledger Fabric.

注解

If you are running on Windows you will want to make use of the Docker Quickstart Terminal for the upcoming terminal commands. Please visit the Prerequisites if you haven’t previously installed it.

If you are using Docker Toolbox on Windows 7 or macOS, you will need to use a location under C:\Users (Windows 7) or /Users (macOS) when installing and running the samples.

If you are using Docker for Mac, you will need to use a location under /Users, /Volumes, /private, or /tmp. To use a different location, please consult the Docker documentation for file sharing.

If you are using Docker for Windows, please consult the Docker documentation for shared drives and use a location under one of the shared drives.

Determine a location on your machine where you want to place the fabric-samples repository and enter that directory in a terminal window. The command that follows will perform the following steps:

  1. If needed, clone the hyperledger/fabric-samples repository

  2. Checkout the appropriate version tag

  3. Install the Hyperledger Fabric platform-specific binaries and config files for the version specified into the /bin and /config directories of fabric-samples

  4. Download the Hyperledger Fabric docker images for the version specified

Once you are ready, and in the directory into which you will install the Fabric Samples and binaries, go ahead and execute the command to pull down the binaries and images.

注解

If you want the latest production release, omit all version identifiers.

curl -sSL http://bit.ly/2ysbOFE | bash -s

注解

If you want a specific release, pass a version identifier for Fabric, Fabric-ca and thirdparty Docker images. The command below demonstrates how to download Fabric v2.0.0 Alpha release v2.0.0-alpha

curl -sSL http://bit.ly/2ysbOFE | bash -s -- <fabric_version> <fabric-ca_version> <thirdparty_version>
curl -sSL http://bit.ly/2ysbOFE | bash -s -- 2.0.0-alpha 2.0.0-alpha 0.4.15

注解

If you get an error running the above curl command, you may have too old a version of curl that does not handle redirects or an unsupported environment.

Please visit the Prerequisites page for additional information on where to find the latest version of curl and get the right environment. Alternately, you can substitute the un-shortened URL: https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh

The command above downloads and executes a bash script that will download and extract all of the platform-specific binaries you will need to set up your network and place them into the cloned repo you created above. It retrieves the following platform-specific binaries:

  • configtxgen,

  • configtxlator,

  • cryptogen,

  • discover,

  • idemixgen

  • orderer,

  • peer,

  • token, and

  • fabric-ca-client

and places them in the bin sub-directory of the current working directory.

You may want to add that to your PATH environment variable so that these can be picked up without fully qualifying the path to each binary. e.g.:

export PATH=<path to download location>/bin:$PATH

Finally, the script will download the Hyperledger Fabric docker images from Docker Hub into your local Docker registry and tag them as ‘latest’.

The script lists out the Docker images installed upon conclusion.

Look at the names for each image; these are the components that will ultimately comprise our Hyperledger Fabric network. You will also notice that you have two instances of the same image ID - one tagged as “amd64-1.x.x” and one tagged as “latest”. Prior to 1.2.0, the image being downloaded was determined by uname -m and showed as “x86_64-1.x.x”.

注解

On different architectures, the x86_64/amd64 would be replaced with the string identifying your architecture.

注解

If you have questions not addressed by this documentation, or run into issues with any of the tutorials, please visit the Still Have Questions? page for some tips on where to find additional help.

Before we begin, if you haven’t already done so, you may wish to check that you have all the Prerequisites installed on the platform(s) on which you’ll be developing blockchain applications and/or operating Hyperledger Fabric.

Once you have the prerequisites installed, you are ready to download and install Hyperledger Fabric. While we work on developing real installers for the Fabric binaries, we provide a script that will Install Samples, Binaries and Docker Images to your system. The script also will download the Docker images to your local registry.

Hyperledger Fabric SDKs

Hyperledger Fabric offers a number of SDKs to support various programming languages. There are two officially released SDKs for Node.js and Java:

In addition, there are three more SDKs that have not yet been officially released (for Python, Go and REST), but they are still available for downloading and testing:

Hyperledger Fabric CA

Hyperledger Fabric provides an optional certificate authority service that you may choose to use to generate the certificates and key material to configure and manage identity in your blockchain network. However, any CA that can generate ECDSA certificates may be used.

Developing Applications

The scenario

Audience: Architects, application and smart contract developers, business professionals

In this topic, we're going to describe a business scenario involving six organizations who use PaperNet, a commercial paper network built on Hyperledger Fabric, to issue, buy and redeem commercial paper. We're going to use the scenario to outline requirements for the development of commercial paper applications and smart contracts used by the participant organizations.

PaperNet network

PaperNet is a commercial paper network that allows suitably authorized participants to issue, trade, redeem and rate commercial paper.

_images/develop.diagram.1.pngdevelop.systemscontext

The PaperNet commercial paper network. Six organizations currently use the PaperNet network to issue, buy, sell, redeem and rate commercial paper. MagnetoCorp issues and redeems commercial paper. DigiBank, BigFund, BrokerHouse and HedgeMatic all trade commercial paper with each other. RateM provides various measures of risk for commercial paper.

Let's see how MagnetoCorp uses PaperNet and commercial paper to help its business.

Introducing the actors

MagnetoCorp is a well-respected company that makes self-driving electric vehicles. In early April 2020, MagnetoCorp won a large order to manufacture 10,000 Model D cars for Daintree, a new entrant in the personal transport market. Although the order represents a significant win for MagnetoCorp, Daintree will not have to pay for the vehicles until they start to be delivered on November 1, six months after the deal was formally agreed between MagnetoCorp and Daintree.

To manufacture the vehicles, MagnetoCorp will need to hire 1000 workers for at least six months. This puts a short-term strain on its finances: it will require an extra 5M USD each month to pay these new employees. Commercial paper is designed to help MagnetoCorp overcome its short-term financing needs, meeting payroll every month based on the expectation that it will be cash-rich when Daintree starts to pay for its new Model D cars.

At the end of May, MagnetoCorp needs 5M USD to meet payroll for the extra workers it hired on May 1. To do this, it issues a commercial paper with a face value of 5M USD and a maturity date six months in the future, when it expects to see cash flow from Daintree. DigiBank thinks that MagnetoCorp is creditworthy, and therefore doesn't require much of a premium above the central bank base rate of 2%, which would value 4.95M USD today at 5M USD in six months time. It therefore purchases the MagnetoCorp six-month commercial paper for 4.94M USD, a slight discount compared to the 4.95M USD it is worth. DigiBank fully expects that it will be able to redeem 5M USD from MagnetoCorp in six months time, making it a profit of 10K USD in return for bearing the increased risk associated with this commercial paper. This extra 10K means it receives a 2.4% return on investment, significantly better than the risk-free return of 2%.

At the end of June, when MagnetoCorp issues a new commercial paper for 5M USD to meet June's payroll, it is purchased by BigFund for 4.94M USD. That's because the commercial conditions are roughly the same in June as in May, resulting in BigFund valuing MagnetoCorp commercial paper at the same price that DigiBank did in May.

Each subsequent month, MagnetoCorp can issue new commercial paper to meet its payroll obligations, and these may be purchased by DigiBank, or any other participant in the PaperNet commercial paper network: BigFund, HedgeMatic or BrokerHouse. These organizations may pay more or less for the commercial paper depending on two factors: the central bank base rate, and the risk associated with MagnetoCorp. This latter figure depends on a variety of factors, such as the production of Model D cars, and the creditworthiness of MagnetoCorp as assessed by RateM, the ratings agency.

The organizations in PaperNet have different roles: MagnetoCorp issues paper; DigiBank, BigFund, HedgeMatic and BrokerHouse trade paper; and RateM rates paper. Organizations of the same role, such as DigiBank, BigFund, HedgeMatic and BrokerHouse, are competitors. Organizations of different roles are not necessarily competitors, yet might still have opposing business interests. For example, MagnetoCorp will desire a high rating for its papers, to sell them at a high price, while DigiBank would benefit from a low rating, so that it can buy them at a low price. As can be seen, even a seemingly simple network like PaperNet can have complex trust relationships. A blockchain can help establish trust among participants that are competitors, or that have opposing business interests which might lead to disputes. Fabric, in particular, has the means to capture even fine-grained trust relationships.

Let's pause the MagnetoCorp story for a moment, and develop the client applications and smart contracts that PaperNet uses to issue, buy, sell and redeem commercial paper, as well as to capture the trust relationships between the organizations. We'll come back to the role of the rating agency, RateM, a little later.

Analysis

Audience: Architects, application and smart contract developers, business professionals

Let's analyze commercial paper in a little more detail. PaperNet participants, such as MagnetoCorp and DigiBank, use commercial paper transactions to achieve their business objectives. Let's examine the structure of a commercial paper, and the transactions that affect it over time. We will also consider which organizations in PaperNet need to sign off on a transaction, based on the trust relationships among the organizations in the network. Later we'll focus on how money flows between buyers and sellers; for now, let's focus on the first paper issued by MagnetoCorp.

Commercial paper lifecycle

A paper, 00001, is issued by MagnetoCorp on May 31. Spend a moment looking at the first state of this paper, with its different properties and values:

Issuer = MagnetoCorp
Paper = 00001
Owner = MagnetoCorp
Issue date = 31 May 2020
Maturity = 30 November 2020
Face value = 5M USD
Current state = issued

This paper state is a result of the issue transaction, and it brings MagnetoCorp's first commercial paper into existence! Notice how this paper has a 5M USD face value for redemption later in the year. See how the Issuer and Owner are the same when paper 00001 is issued. Notice that this paper could be uniquely identified as MagnetoCorp00001: a composition of the Issuer and Paper properties. Finally, see how the property Current state = issued quickly identifies the stage of MagnetoCorp paper 00001 in its lifecycle.

Shortly after issuance, the paper is bought by DigiBank. Spend a moment looking at how the same commercial paper has changed as a result of this buy transaction:

Issuer = MagnetoCorp
Paper = 00001
Owner = DigiBank
Issue date = 31 May 2020
Maturity date = 30 November 2020
Face value = 5M USD
Current state = trading

The most significant change is that of the owner: see how the paper, initially owned by MagnetoCorp, is now owned by DigiBank. We could imagine how the paper might subsequently be sold to BrokerHouse or HedgeMatic, with the corresponding change of Owner. Note how Current state allows us to easily identify that the paper is now trading.

After six months, if DigiBank still holds the commercial paper, it can redeem it with MagnetoCorp:

Issuer = MagnetoCorp
Paper = 00001
Owner = MagnetoCorp
Issue date = 31 May 2020
Maturity date = 30 November 2020
Face value = 5M USD
Current state = redeemed

This final redeem transaction has ended the commercial paper's lifecycle; it can be considered closed. It is often mandatory to keep a record of redeemed commercial papers, and the redeemed state allows us to quickly identify these. The value of Owner on a paper can be used to perform access control on the redeem transaction, by comparing the Owner against the identity of the transaction creator. Fabric supports this through the getCreator() chaincode API. If Go is used as the chaincode language, the client identity chaincode library can be used to retrieve additional attributes of the transaction creator.
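
In the Node.js contract API, an equivalent check can be sketched with ctx.clientIdentity; the mapping from a paper's owner to an organization's MSP ID below is an illustrative assumption:

// Illustrative only: restrict redemption to the paper's current owner by
// comparing the submitting organization's MSP ID with the owner recorded
// in the paper state.
async function redeem(ctx, issuer, paperNumber) {
    const key = issuer + paperNumber;
    const paper = JSON.parse((await ctx.stub.getState(key)).toString());

    const submitterMSP = ctx.clientIdentity.getMSPID();
    if (paper.ownerMSP !== submitterMSP) {
        throw new Error(`${submitterMSP} is not the current owner of ${key}`);
    }

    paper.owner = paper.issuer; // the paper returns to its issuer
    paper.currentState = 'redeemed';
    await ctx.stub.putState(key, Buffer.from(JSON.stringify(paper)));
}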

Transactions

We've seen that paper 00001's lifecycle is relatively straightforward: it moves between issued, trading and redeemed as a result of an issue, buy, or redeem transaction.

These three transactions are initiated by MagnetoCorp and DigiBank (twice), and drive the state changes of paper 00001. Let's have a look at the transactions that affect the paper in a little more detail:

Issue

Examine the first transaction initiated by MagnetoCorp:

Txn = issue
Issuer = MagnetoCorp
Paper = 00001
Issue time = 31 May 2020 09:00:00 EST
Maturity date = 30 November 2020
Face value = 5M USD

See how the issue transaction has a structure of properties and values. This transaction structure is different to, but closely matches, the structure of paper 00001. That's because they are different things: paper 00001 reflects a state of PaperNet that is a result of the issue transaction. It's the logic behind the issue transaction (which we cannot see) that takes these properties and creates this paper. Because the transaction creates the paper, there is a very close relationship between these structures.

The only organization that is involved in the issue transaction is MagnetoCorp. Naturally, MagnetoCorp needs to sign off on the transaction. In general, the issuer of a paper is required to sign off on a transaction that issues a new paper.

Buy

Next, examine the buy transaction which transfers ownership of paper 00001 from MagnetoCorp to DigiBank:

Txn = buy
Issuer = MagnetoCorp
Paper = 00001
Current owner = MagnetoCorp
New owner = DigiBank
Purchase time = 31 May 2020 10:00:00 EST
Price = 4.94M USD

See how the buy transaction has fewer properties. That's because this transaction only modifies this paper. Only New owner = DigiBank changes as a result of this transaction; everything else is the same. That's OK: the most important thing about the buy transaction is the change of ownership, and indeed in this transaction there's an acknowledgement of the current owner of the paper, MagnetoCorp.

You might ask why the Purchase time and Price properties are not captured in paper 00001? This comes back to the difference between the transaction and the paper. The 4.94M USD price tag is actually a property of the transaction, rather than a property of this paper. Spend a little time thinking about this difference; it is not as obvious as it seems. We're going to see later that the ledger will record both pieces of information: the history of all transactions that affect this paper, as well as its latest state. Being clear on this separation of information is really important.

It's also worth remembering that paper 00001 may be bought and sold many times. Although we're skipping ahead a little in our scenario, let's examine what transactions we might see if paper 00001 changes ownership.

If we have a purchase by BigFund:

Txn = buy
Issuer = MagnetoCorp
Paper = 00001
Current owner = DigiBank
New owner = BigFund
Purchase time = 2 June 2020 12:20:00 EST
Price = 4.93M USD

Followed by a subsequent purchase by HedgeMatic:

Txn = buy
Issuer = MagnetoCorp
Paper = 00001
Current owner = BigFund
New owner = HedgeMatic
Purchase time = 3 June 2020 15:59:00 EST
Price = 4.90M USD

See how the paper owner changes, and how, in our example, the price changes. Can you think of a reason why the price of MagnetoCorp commercial paper might be falling?

Intuitively, a buy transaction demands that both the selling and the buying organization sign off on such a transaction, so that there is proof of the mutual agreement between the two parties that are part of the deal.

Redeem

The redeem transaction for paper 00001 represents the end of its lifecycle. In our relatively simple example, HedgeMatic, the current owner, initiates the transaction which transfers the commercial paper back to MagnetoCorp:

Txn = redeem
Issuer = MagnetoCorp
Paper = 00001
Current owner = HedgeMatic
Redeem time = 30 Nov 2020 12:00:00 EST

Again, notice how the redeem transaction has very few properties; all of the changes to paper 00001 can be calculated by the redeem transaction logic: the Issuer will become the new owner, and the Current state will change to redeemed. The Current owner property is specified in our example, so that it can be checked against the current holder of the paper.

From a trust perspective, the same reasoning as for the buy transaction also applies to the redeem instruction: both organizations involved in the transaction are required to sign off on it.

The Ledger

In this topic, we've seen how transactions and the resultant paper states are the two most important concepts in PaperNet. Indeed, we'll see these two fundamental elements in any Hyperledger Fabric distributed ledger: a world state, which contains the current value of all objects, and a blockchain, which records the history of all transactions that resulted in the current world state.

The required sign-offs on transactions are enforced through rules, which are evaluated before a transaction is appended to the ledger. Only if the required signatures are present will Fabric accept a transaction as valid.

You're now in a great place to translate these ideas into a smart contract. Don't worry if your programming is a little rusty; we'll provide tips and pointers to understand the program code. Mastering the commercial paper smart contract is the first step towards designing your own application. Or, if you're a business analyst who's comfortable with a little programming, don't be afraid to keep digging deeper!

Process and Data Design

Audience: Architects, application and smart contract developers, business professionals

This topic shows you how to design the commercial paper processes and their related data structures in PaperNet. Our analysis highlighted that modelling PaperNet using states and transactions provides a precise way to understand what's happening. We're now going to elaborate on these two strongly related concepts, to help us subsequently design the smart contracts and applications of PaperNet.

Lifecycle

As we've seen, there are two important concepts that concern us when dealing with commercial paper: states and transactions. Indeed, this is true for all blockchain use cases: there are conceptual objects of value, modelled as states, whose lifecycle transitions are described by transactions. An effective analysis of states and transactions is an essential starting point for a successful implementation.

We can represent the lifecycle of a commercial paper using a state transition diagram:

_images/develop.diagram.4.pngdevelop.statetransition The state transition diagram for commercial paper. Commercial papers transition between issued, trading and redeemed states by means of the issue, buy and redeem transactions.

See how the state diagram describes how commercial papers change over time, and how specific transactions govern the lifecycle transitions. In Hyperledger Fabric, smart contracts implement the transaction logic that transitions commercial papers between their different states. Commercial paper states are actually held in the ledger world state, so let's zoom in on them a little closer.

Ledger state

Recall the structure of a commercial paper:

_images/develop.diagram.5.pngdevelop.paperstructure A commercial paper can be represented as a set of properties, each with a value. Typically, some combination of these properties will provide a unique key for each paper.

See how a commercial paper's Paper property has the value 00001, and the Face value property has the value 5M USD. Most importantly, the Current state property indicates whether the commercial paper is issued, trading or redeemed. In combination, the full set of properties make up the state of a commercial paper. Moreover, the entire collection of these individual commercial paper states constitutes the ledger world state.

All ledger states share this form: each has a set of properties, each with a different value. This multi-property aspect of states is a powerful feature: it allows us to think of a Fabric state as a vector rather than a simple scalar. We then represent facts about whole objects as individual states, which subsequently undergo transitions controlled by transaction logic. A Fabric state is implemented as a key/value pair, in which the value encodes the object properties in a format that captures them, typically JSON. The ledger database can support advanced query operations against these properties, which is very helpful for sophisticated object retrieval.

See how MagnetoCorp's paper 00001 is represented as a state vector that transitions according to different transaction stimuli:

_images/develop.diagram.6.pngdevelop.paperstates A commercial paper state is brought into existence and transitions as a result of different transactions. Hyperledger Fabric states have multiple properties, making them vectors rather than scalars.

Notice how each individual paper starts with the empty state, which is technically a nil state for the paper, as it doesn't exist! See how paper 00001 is brought into existence by the issue transaction, and how it is subsequently updated as a result of the buy and redeem transactions.

Notice how each state is self-describing: each property has a name and a value. Although all our commercial papers currently have the same properties, this need not be the case for all time, as Hyperledger Fabric supports different states having different properties. This allows the same ledger world state to contain different forms of the same asset, as well as different types of asset. It also makes it possible to update a state's structure; imagine a new regulation that requires an additional data field. Flexible state properties support the fundamental requirement of data to evolve over time.

State keys

In most practical applications, a state will have a combination of properties that uniquely identify it in a given context: its key. The key for a PaperNet commercial paper is formed by a concatenation of the Issuer and Paper properties; so for MagnetoCorp's first paper, it's MagnetoCorp00001.

A state key allows us to uniquely identify a paper; it is created as the result of the issue transaction, and subsequently updated by the buy and redeem transactions. Hyperledger Fabric requires each state in a ledger to have a unique key.

When a unique key is not available from the set of available properties, an application-determined unique key is specified as an input to the transaction that creates the state. This unique key is usually some form of UUID, which, although less readable, is a standard practice. What's important is that every individual state object in a ledger must have a unique key.
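
As a small sketch of this idea, a key can be derived from the identifying properties; this helper is modelled on the sample's State class, and the exact joining convention is an assumption:

// Illustrative helper: derive a unique state key from identifying properties.
// Modelled on the sample's State.makeKey(); the joining convention is assumed.
function makeKey(keyParts) {
  return keyParts.join('');
}

const paperKey = makeKey(['MagnetoCorp', '00001']);  // 'MagnetoCorp00001'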

Multiple states

As we've seen, commercial papers in PaperNet are stored as state vectors in the ledger. It's a reasonable requirement to be able to query different commercial papers from the ledger; for example: find all the papers issued by MagnetoCorp, or: find all the papers issued by MagnetoCorp in the redeemed state.

To make these kinds of search tasks possible, it's helpful to keep all related papers together in a logical list. The PaperNet design incorporates the idea of a commercial paper list: a logical container that is updated whenever commercial papers are issued or otherwise changed.

Logical representation

It's helpful to think of all PaperNet commercial papers being held in a single list of commercial papers:

_images/develop.diagram.7.pngdevelop.paperlist MagnetoCorp’s newly created commercial paper 00004 is added to the list of existing commercial papers.

New papers can be added to the list as a result of an issue transaction, and papers already in the list can be updated by buy or redeem transactions. See how the list has a descriptive name: org.papernet.papers; using this kind of DNS-style name is a really good idea, because well-chosen names will make your blockchain designs intuitive to other people. This idea applies equally well to smart contract names.

Physical representation

Although it's correct to think of there being a single list of papers in PaperNet, org.papernet.papers, the list is best implemented as a set of individual Fabric states, whose composite key associates each state with its list. In this way, each state's composite key is unique and supports effective list queries.

_images/develop.diagram.8.pngdevelop.paperphysical Representing a list of PaperNet commercial papers as a set of distinct Hyperledger Fabric states

Notice how each paper in the list is represented by a vector state, with a unique composite key formed by concatenating org.papernet.paper with the Issuer and Paper properties. This structure is helpful for two reasons:

  • It allows us to examine any state vector in the ledger to determine which list it's in, without reference to a separate list. It's analogous to looking at a set of sports fans, and identifying which team they support by the colour of the shirt they are wearing. The sports fans self-declare their allegiance; we don't need a list of fans.

  • Hyperledger Fabric internally uses a concurrency control mechanism to update a ledger, such that keeping papers in separate state vectors vastly reduces the opportunity for shared-state collisions. Such collisions require transaction re-submission, complicate application design, and decrease performance.

This second point is actually a key take-away for Hyperledger Fabric in general: the physical design of state vectors is vitally important to optimum performance and behavior. Keep your states separate!

Trust relationships

We've discussed the different roles in a network, such as issuer, trader or rating agency, and how their different business interests determine who needs to sign a transaction. In Fabric, these rules are captured by so-called endorsement policies. The rules can be set at chaincode granularity, as well as for individual state keys.

This means that in PaperNet we can set a single rule for the whole namespace that determines which organizations can issue new papers. Later, rules can be set and updated for individual papers to capture the trust relationships of the buy and redeem transactions.

In the next topic, we will show you how to combine these design concepts to implement the PaperNet commercial paper smart contract, and then an application that exploits it!

Smart Contract Processing

Audience: Architects, application and smart contract developers

At the heart of a blockchain network is a smart contract. In PaperNet, the code in the commercial paper smart contract defines the valid states for commercial paper, and the transaction logic that transitions a paper from one state to another. In this topic, we're going to show you how to implement a real world smart contract that manages the process of issuing, buying and redeeming commercial paper.

In this topic, we're going to cover:

If you'd like, you can download the sample and even run it locally. It is written in JavaScript, but the logic is quite language independent, so you'll easily be able to see what's going on! (The sample will become available for Java and GOLANG as well.)

Smart Contract

A smart contract defines the different states of a business object and governs the processes that move the object between these different states. Smart contracts are important because they allow architects and smart contract developers to define the key business processes and data that are shared across the different organizations collaborating in a blockchain network.

In the PaperNet network, the smart contract is shared by the different network participants, such as MagnetoCorp and DigiBank. All applications connected to the network must use the same version of the smart contract, so that they jointly implement the same shared business processes and data.

Contract class

A copy of the PaperNet commercial paper smart contract is contained in papercontract.js. View it with your browser, or open it in your favorite editor if you've downloaded it.

You may notice from the file path that this is MagnetoCorp's copy of the smart contract. MagnetoCorp and DigiBank must agree on the version of the smart contract that they will use. For now, it doesn't matter which organization's copy you look at; they are all the same.

Spend a few moments looking at the overall structure of the smart contract; notice that it's quite short! Towards the top of papercontract.js, you'll see that there's a definition for the commercial paper smart contract:

class CommercialPaperContract extends Contract {...}

The CommercialPaperContract class contains the transaction definitions for commercial paper: issue, buy and redeem. It's these transactions that bring commercial papers into existence and move them through their lifecycle. We'll examine these transactions soon, but for now notice how CommercialPaperContract extends the Hyperledger Fabric Contract class. This built-in class, along with the Context class, was brought into scope earlier:

const { Contract, Context } = require('fabric-contract-api');

Our commercial paper contract will use built-in features of these classes, such as automatic method invocation, a per-transaction context, transaction handlers, and class-shared state.

Notice also how the class constructor uses its superclass to initialize itself with an explicit contract name:

constructor() {
    super('org.papernet.commercialpaper');
}

Most importantly, org.papernet.commercialpaper is very descriptive: this smart contract is the agreed definition of commercial paper for all PaperNet organizations.

Usually there will only be one smart contract per file; contracts tend to have different lifecycles, which makes it sensible to separate them. However, in some cases multiple smart contracts might provide syntactic help for applications, e.g. EuroBond, DollarBond, YenBond, which essentially provide the same function. In such cases, smart contracts and transactions can be disambiguated by name, as sketched below.
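
As a minimal sketch of that multi-contract case (the class and contract names here are illustrative, not part of the sample), each contract disambiguates itself with an explicit name:

const { Contract } = require('fabric-contract-api');

// Two closely related contracts in one chaincode, each with a unique explicit name
class EuroBondContract extends Contract {
  constructor() { super('org.papernet.eurobond'); }
  async issue(ctx, issuer, bondNumber) { /* EuroBond issue logic */ }
}

class YenBondContract extends Contract {
  constructor() { super('org.papernet.yenbond'); }
  async issue(ctx, issuer, bondNumber) { /* YenBond issue logic */ }
}

module.exports.contracts = [EuroBondContract, YenBondContract];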

Transaction definition

Within the class, locate the issue method.

async issue(ctx, issuer, paperNumber, issueDateTime, maturityDateTime, faceValue) {...}

This function is given control whenever this contract is called upon to issue a commercial paper. Recall how commercial paper 00001 was created with the following transaction:

Txn = issue
Issuer = MagnetoCorp
Paper = 00001
Issue time = 31 May 2020 09:00:00 EST
Maturity date = 30 November 2020
Face value = 5M USD

We've changed the variable names for programming style, but see how these properties map almost directly to the issue method variables.

The issue method is automatically given control by the contract whenever an application makes a request to issue a commercial paper. The transaction property values are made available to the method via the corresponding variables. See how an application submits a transaction using the Hyperledger Fabric SDK in the sample application, covered in the application topic.

You might have noticed an extra variable in the issue definition: ctx. It's called the transaction context, and it's always first. By default, it maintains both per-contract and per-transaction information relevant to the transaction logic. For example, it would contain MagnetoCorp's specified transaction identifier, a MagnetoCorp-issued user's digital certificate, as well as access to the ledger API.

See how the smart contract extends the default transaction context by implementing its own createContext() method, rather than accepting the default implementation:

createContext() {
  return new CommercialPaperContext()
}

This extended context adds the custom property paperList to the defaults:

class CommercialPaperContext extends Context {

  constructor() {
    super();
    // All papers are held in a list of papers
    this.paperList = new PaperList(this);
  }
}

We'll soon see how ctx.paperList can subsequently be used to help store and retrieve all PaperNet commercial papers.

To solidify your understanding of the structure of a smart contract transaction, locate the buy and redeem transaction definitions, and see if you can work out how they map to their corresponding commercial paper transactions.

The buy transaction:

async buy(ctx, issuer, paperNumber, currentOwner, newOwner, price, purchaseTime) {...}
Txn = buy
Issuer = MagnetoCorp
Paper = 00001
Current owner = MagnetoCorp
New owner = DigiBank
Purchase time = 31 May 2020 10:00:00 EST
Price = 4.94M USD

The redeem transaction:

async redeem(ctx, issuer, paperNumber, redeemingOwner, redeemDateTime) {...}
Txn = redeem
Issuer = MagnetoCorp
Paper = 00001
Redeemer = DigiBank
Redeem time = 31 Dec 2020 12:00:00 EST

In both cases, observe the 1:1 correspondence between the commercial paper transaction and the smart contract method definition. And don't worry about the async and await keywords; they allow asynchronous JavaScript functions to be treated like their synchronous counterparts in other programming languages.

Transaction logic

Now that you've seen how contracts are structured and transactions are defined, let's focus on the logic within the smart contract.

Recall the first issue transaction:

Txn = issue
Issuer = MagnetoCorp
Paper = 00001
Issue time = 31 May 2020 09:00:00 EST
Maturity date = 30 November 2020
Face value = 5M USD

which results in the issue method being passed control:

async issue(ctx, issuer, paperNumber, issueDateTime, maturityDateTime, faceValue) {

   // create an instance of the paper
  let paper = CommercialPaper.createInstance(issuer, paperNumber, issueDateTime, maturityDateTime, faceValue);

  // Smart contract, rather than paper, moves paper into ISSUED state
  paper.setIssued();

  // Newly issued paper is owned by the issuer
  paper.setOwner(issuer);

  // Add the paper to the list of all similar commercial papers in the ledger world state
  await ctx.paperList.addPaper(paper);

  // Must return a serialized paper to caller of smart contract
  return paper.toBuffer();
}

The logic is simple: take the transaction input variables, create a new commercial paper, add it to the list of all commercial papers using paperList, and return the new commercial paper (serialized as a buffer) as the transaction response.

See how paperList is retrieved from the transaction context to provide access to the list of commercial papers. issue(), buy() and redeem() continually re-access ctx.paperList to keep the list of commercial papers up-to-date.

The logic for the buy transaction is a little more elaborate:

async buy(ctx, issuer, paperNumber, currentOwner, newOwner, price, purchaseDateTime) {

  // Retrieve the current paper using key fields provided
  let paperKey = CommercialPaper.makeKey([issuer, paperNumber]);
  let paper = await ctx.paperList.getPaper(paperKey);

  // Validate current owner
  if (paper.getOwner() !== currentOwner) {
      throw new Error('Paper ' + issuer + paperNumber + ' is not owned by ' + currentOwner);
  }

  // First buy moves state from ISSUED to TRADING
  if (paper.isIssued()) {
      paper.setTrading();
  }

  // Check paper is not already REDEEMED
  if (paper.isTrading()) {
      paper.setOwner(newOwner);
  } else {
      throw new Error('Paper ' + issuer + paperNumber + ' is not trading. Current state = ' +paper.getCurrentState());
  }

  // Update the paper
  await ctx.paperList.updatePaper(paper);
  return paper.toBuffer();
}

See how the transaction checks currentOwner and that the paper is TRADING before changing the owner with paper.setOwner(newOwner). The basic flow is simple though: check some pre-conditions, set the new owner, update the commercial paper on the ledger, and return the updated commercial paper (serialized as a buffer) as the transaction response.

Why don't you see if you can understand the logic for the redeem transaction?
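
For reference, here's a hedged sketch of what that logic might look like, following the same pattern as buy; it is an illustration consistent with the snippets shown in this topic, not necessarily the sample's exact code:

async redeem(ctx, issuer, paperNumber, redeemingOwner, redeemDateTime) {

  // Retrieve the current paper using the key fields provided
  let paperKey = CommercialPaper.makeKey([issuer, paperNumber]);
  let paper = await ctx.paperList.getPaper(paperKey);

  // A paper can only be redeemed once
  if (paper.isRedeemed()) {
    throw new Error('Paper ' + issuer + paperNumber + ' is already redeemed');
  }

  // Only the current owner can redeem; ownership returns to the issuer
  if (paper.getOwner() === redeemingOwner) {
    paper.setOwner(paper.getIssuer());
    paper.setRedeemed();
  } else {
    throw new Error('Redeeming owner does not own paper ' + issuer + paperNumber);
  }

  await ctx.paperList.updatePaper(paper);
  return paper.toBuffer();
}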

Representing an object

We've seen how to define and implement the issue, buy and redeem transactions using the CommercialPaper and PaperList classes. Let's end this topic by seeing how these classes work.

Locate the CommercialPaper class in the paper.js file:

class CommercialPaper extends State {...}

This class contains the in-memory representation of a commercial paper state. See how the createInstance method initializes a new commercial paper with the provided parameters:

static createInstance(issuer, paperNumber, issueDateTime, maturityDateTime, faceValue) {
  return new CommercialPaper({ issuer, paperNumber, issueDateTime, maturityDateTime, faceValue });
}

Recall how this class was used by the issue transaction:

let paper = CommercialPaper.createInstance(issuer, paperNumber, issueDateTime, maturityDateTime, faceValue);

See how every time the issue transaction is called, a new in-memory instance of a commercial paper is created containing the transaction data.

A few important points to note:

  • This is an in-memory representation; we’ll see later how it appears on the ledger.

  • The CommercialPaper class extends the State class. State is an application-defined class which creates a common abstraction for a state. All states have a business object class which they represent, a composite key, can be serialized and de-serialized, and so on. State helps our code be more legible when we are storing more than one business object type on the ledger. Examine the State class in the state.js file.

  • A paper computes its own key when it is created – this key will be used when the ledger is accessed. The key is formed from a combination of issuer and paperNumber.

    constructor(obj) {
      super(CommercialPaper.getClass(), [obj.issuer, obj.paperNumber]);
      Object.assign(this, obj);
    }
    
  • A paper is moved to the ISSUED state by the transaction, not by the paper class. That’s because it’s the smart contract that governs the lifecycle state of the paper. For example, an import transaction might create a new set of papers immediately in the TRADING state.

The remainder of the CommercialPaper class contains simple helper methods:

getOwner() {
    return this.owner;
}

Recall how methods like this are used by the smart contract to move the commercial paper through its lifecycle. For example, in the redeem transaction we saw:

if (paper.getOwner() === redeemingOwner) {
  paper.setOwner(paper.getIssuer());
  paper.setRedeemed();
}

Access the ledger

Now locate the PaperList class in the paperlist.js file:

class PaperList extends StateList {

This utility class is used to manage all PaperNet commercial papers in the Hyperledger Fabric state database. The PaperList data structures are described in more detail in the architecture topic.

Like the CommercialPaper class, this class extends an application-defined StateList class, which creates a common abstraction for a list of states: in this case, all the commercial papers in PaperNet.

The addPaper() method is a simple veneer over the StateList.addState() method:

async addPaper(paper) {
  return this.addState(paper);
}

You can see in the StateList.js file how the StateList class uses the Fabric API putState() to write the commercial paper as state data in the ledger:

async addState(state) {
  let key = this.ctx.stub.createCompositeKey(this.name, state.getSplitKey());
  let data = State.serialize(state);
  await this.ctx.stub.putState(key, data);
}

Every piece of state data in a ledger requires these two fundamental elements:

  • Key: key is formed with createCompositeKey() using a fixed name and the key of state. The name was assigned when the PaperList object was constructed, and state.getSplitKey() determines each state’s unique key.

  • Data: data is simply the serialized form of the commercial paper state, created using the State.serialize() utility method. The State class serializes and deserializes data using JSON, reconstructing the state's business object class (in our case CommercialPaper) as required; this class was again set when the PaperList object was constructed.

Notice how a StateList doesn't store any information about an individual state or the total list of states; it delegates all of that to the Fabric state database. This is an important design pattern: it reduces the opportunity for ledger MVCC collisions in Hyperledger Fabric.

The StateList getState() and updateState() methods work in similar ways:

async getState(key) {
  let ledgerKey = this.ctx.stub.createCompositeKey(this.name, State.splitKey(key));
  let data = await this.ctx.stub.getState(ledgerKey);
  let state = State.deserialize(data, this.supportedClasses);
  return state;
}
async updateState(state) {
  let key = this.ctx.stub.createCompositeKey(this.name, state.getSplitKey());
  let data = State.serialize(state);
  await this.ctx.stub.putState(key, data);
}

See how they use the Fabric APIs putState(), getState() and createCompositeKey() to access the ledger. We'll expand this smart contract later to list all commercial papers in PaperNet. What might the method to implement this ledger retrieval look like?
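
As a taster, here's a hedged sketch of how such a retrieval could be written on StateList, assuming a getStateByPartialCompositeKey() range query over the list's namespace; the method name and iteration details are illustrative:

// Illustrative sketch: retrieve every state in this list's namespace
async getAllStates() {
  // An empty partial key matches all composite keys under this.name
  const iterator = await this.ctx.stub.getStateByPartialCompositeKey(this.name, []);
  const states = [];
  let result = await iterator.next();
  while (!result.done) {
    states.push(State.deserialize(result.value.value, this.supportedClasses));
    result = await iterator.next();
  }
  await iterator.close();
  return states;
}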

That's it! In this topic you've understood how to implement the smart contract for PaperNet. You can move on to the next sub-topic to see how an application calls the smart contract using the Fabric SDK.

Application

Audience: Architects, application and smart contract developers

An application can interact with a blockchain network by submitting transactions to a ledger or querying ledger content. This topic covers the mechanics of how an application does this; in our scenario, organizations access PaperNet using applications which invoke issue, sell and redeem transactions defined in a commercial paper smart contract. Even though MagnetoCorp’s application to issue a commercial paper is basic, it covers all the major points of understanding.

In this topic, we're going to cover:

  • The application flow for invoking a smart contract

  • How an application uses a wallet and identity

  • How an application connects using a gateway

  • How to access a particular network

  • How to construct a transaction request

  • How to submit a transaction

  • How to process a transaction response

To help your understanding, we'll make reference to the commercial paper sample application provided with Hyperledger Fabric. You can download it and run it locally. It is written in JavaScript, but the logic is quite language independent, so you'll easily be able to see what's going on! (The sample will become available for Java and GOLANG as well.)

Basic Flow

Applications interact with a blockchain network using the Fabric SDK. Here's a simplified diagram of how an application invokes a commercial paper smart contract:

_images/develop.diagram.3.pngdevelop.application A PaperNet application invokes the commercial paper smart contract to submit an issue transaction request.

An application has to follow six basic steps to submit a transaction:

  • Select an identity from a wallet

  • Connect to a gateway

  • Access the desired network

  • Construct a transaction request for a smart contract

  • Submit the transaction to the network

  • Process the response

You're going to see how a typical application performs these six steps using the Fabric SDK. You'll find the application code in the issue.js file. View it in your browser, or open it in your favorite editor if you've downloaded it. Spend a few moments looking at the overall structure of the application; even with comments and spacing, it's only 100 lines of code!

Wallet

Towards the top of issue.js, you'll see two Fabric classes brought into scope:

const { FileSystemWallet, Gateway } = require('fabric-network');

You can read about the fabric-network classes in the node SDK documentation, but for now, let's see how they are used to connect MagnetoCorp's application to PaperNet. The application uses the Fabric Wallet class as follows:

const wallet = new FileSystemWallet('../identity/user/isabella/wallet');

See how wallet locates a wallet in the local filesystem. The identity retrieved from the wallet is clearly for a user called Isabella, who is using the issue application. The wallet holds a set of identities, X.509 digital certificates, which can be used to access PaperNet or any other Fabric network. If you run the tutorial and look in this directory, you'll see the identity credentials for Isabella.

Think of a wallet holding the digital equivalents of your government ID card, driving license or ATM card. The X.509 digital certificates within it associate the holder with an organization, thereby entitling them to rights in a network channel. For example, Isabella might be an administrator in MagnetoCorp, and this could give her more privileges than a different user, such as Balaji from DigiBank. Moreover, a smart contract can retrieve this identity during smart contract processing using the transaction context.

Note also that wallets don't hold any form of cash or tokens; they hold identities.
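
As a minimal sketch of working with a wallet before connecting, an application might verify that the required identity has been imported; the label and path are assumptions borrowed from the sample:

const { FileSystemWallet } = require('fabric-network');

const wallet = new FileSystemWallet('../identity/user/isabella/wallet');
const userName = 'isabella';

// Fail fast if the identity has not yet been imported into the wallet
const identityExists = await wallet.exists(userName);
if (!identityExists) {
  throw new Error(`Identity ${userName} not found in wallet; enroll the user first.`);
}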

Gateway

The second key class is the Fabric Gateway. Most importantly, a gateway identifies one or more peers that provide access to a network; in our case, PaperNet. See how issue.js connects to its gateway:

await gateway.connect(connectionProfile, connectionOptions);

gateway.connect() has two important parameters:

  • connectionProfile: the file system location of a connection profile that identifies a set of peers as a gateway to PaperNet

  • connectionOptions: a set of options used to control how issue.js interacts with PaperNet

See how a client application uses a gateway to insulate itself from the network topology, which might change. The gateway takes care of sending the transaction proposal to the right peer nodes in the network using the connection profile and connection options.

Spend a few moments examining the connection profile ./gateway/connectionProfile.yaml. It uses YAML, making it easy to read.

It was loaded and converted into a JSON object:

let connectionProfile = yaml.safeLoad(file.readFileSync('./gateway/connectionProfile.yaml', 'utf8'));

Right now, we're only interested in the channels: and peers: sections of the profile (we've modified the details slightly to better explain what's happening):

channels:
  papernet:
    peers:
      peer1.magnetocorp.com:
        endorsingPeer: true
        eventSource: true

      peer2.digibank.com:
        endorsingPeer: true
        eventSource: true

peers:
  peer1.magnetocorp.com:
    url: grpcs://localhost:7051
    grpcOptions:
      ssl-target-name-override: peer1.magnetocorp.com
      request-timeout: 120
    tlsCACerts:
      path: certificates/magnetocorp/magnetocorp.com-cert.pem

  peer2.digibank.com:
    url: grpcs://localhost:8051
    grpcOptions:
      ssl-target-name-override: peer2.digibank.com
    tlsCACerts:
      path: certificates/digibank/digibank.com-cert.pem

See how channel: identifies the PaperNet: network channel, and two of its peers. MagnetoCorp has peer1.magnetocorp.com and DigiBank has peer2.digibank.com, and both have the role of endorsing peers. Link to these peers via the peers: key, which contains details of how to connect to them, including their respective network addresses.

The connection profile contains a lot of information, not just peers, but also network channels, network orderers, organizations, and CAs, so don't worry if you don't understand all of it!

Let's now turn our attention to the connectionOptions object:

let connectionOptions = {
  identity: userName,
  wallet: wallet
}

See how it specifies that identity, userName, and wallet, wallet, should be used to connect to a gateway. These were assigned values earlier in the code.

There are other connection options which an application could use to instruct the SDK to act intelligently on its behalf. For example:

let connectionOptions = {
  identity: userName,
  wallet: wallet,
  eventHandlerOptions: {
    commitTimeout: 100,
    strategy: EventStrategies.MSPID_SCOPE_ANYFORTX
  },
}

Here, commitTimeout tells the SDK to wait 100 seconds to hear whether a transaction has been committed. And strategy: EventStrategies.MSPID_SCOPE_ANYFORTX specifies that the SDK can notify an application after a single MagnetoCorp peer has confirmed the transaction, in contrast to strategy: EventStrategies.NETWORK_SCOPE_ALLFORTX, which requires that all peers from MagnetoCorp and DigiBank confirm the transaction.

If you'd like to, read more about how connection options allow applications to specify goal-oriented behavior without having to worry about how it is achieved.

Network channel

The peers defined in the gateway's connectionProfile.yaml provide issue.js with access to PaperNet. Because these peers can be joined to multiple network channels, the gateway actually provides the application with access to multiple network channels!

See how the application selects a particular channel:

const network = await gateway.getNetwork('PaperNet');

From this point onwards, network will provide access to PaperNet. Moreover, if the application wanted to access another network, BondNet, at the same time, it would be easy:

const network2 = await gateway.getNetwork('BondNet');

Now our application has access to a second network, BondNet, simultaneously with PaperNet!

We can see here a powerful feature of Hyperledger Fabric: applications can participate in a network of networks, by connecting to multiple gateway peers, each of which is joined to multiple network channels. Applications will have different rights in different channels according to their wallet identity provided in gateway.connect().

Construct request

The application is now ready to issue a commercial paper. To do this, it's going to use CommercialPaperContract, and again, it's quite straightforward to access this smart contract:

const contract = await network.getContract('papercontract', 'org.papernet.commercialpaper');

Note how the application provides a name, papercontract, and an explicit contract name: org.papernet.commercialpaper! We'll see how a contract name picks out one contract from a papercontract.js chaincode file that contains many contracts. In PaperNet, papercontract.js was installed and instantiated with the name papercontract, and if you're interested, read how to install and instantiate a chaincode containing multiple smart contracts.

If our application simultaneously required access to another contract in PaperNet, or to one in BondNet, it would be easy:

const euroContract = await network.getContract('EuroCommercialPaperContract');

const bondContract = await network2.getContract('BondContract');

In these examples, note how we didn't use a qualifying contract name; we have only one smart contract per file, and getContract() will use the first contract it finds.

Recall the transaction MagnetoCorp uses to issue its first commercial paper:

Txn = issue
Issuer = MagnetoCorp
Paper = 00001
Issue time = 31 May 2020 09:00:00 EST
Maturity date = 30 November 2020
Face value = 5M USD

Let's now submit this transaction to PaperNet!

Submit transaction

Submitting a transaction is a single method call to the SDK:

const issueResponse = await contract.submitTransaction('issue', 'MagnetoCorp', '00001', '2020-05-31', '2020-11-30', '5000000');

See how the submitTransaction() parameters match those of the transaction request. It's these values that will be passed to the issue() method in the smart contract and used to create a new commercial paper. Recall its signature:

async issue(ctx, issuer, paperNumber, issueDateTime, maturityDateTime, faceValue) {...}

It might appear that a smart contract receives control shortly after the application issues submitTransaction(), but that's not the case. Under the covers, the SDK uses the connectionOptions and connectionProfile details to send the transaction proposal to the right peers in the network, where it can get the required endorsements. But the application doesn't need to worry about any of this; it just issues submitTransaction and the SDK takes care of it all!

Note that the submitTransaction API includes a process for listening for transaction commits. Listening for commits is required because without it, you will not know whether your transaction has been successfully ordered, validated, and committed to the ledger.
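
In practice, that means submitTransaction() can reject if endorsement or commit fails, so applications usually wrap the call in a try/catch and disconnect the gateway when done; this is a sketch of that pattern rather than the sample's exact code:

try {
  // Resolves once the transaction has been endorsed, ordered and committed
  const issueResponse = await contract.submitTransaction(
    'issue', 'MagnetoCorp', '00001', '2020-05-31', '2020-11-30', '5000000');
  console.log('Transaction committed:', issueResponse.toString());
} catch (error) {
  // Endorsement, ordering or validation failed
  console.error('Transaction failed:', error);
} finally {
  // Release the gateway's network connections
  gateway.disconnect();
}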

Let's now turn our attention to how the application handles the response!

Process response

Recall how the issue transaction in papercontract.js returns a commercial paper response:

return paper.toBuffer();

You'll notice a little quirk: the new paper needs to be converted to a buffer before it is returned to the application. Notice how issue.js uses the class method CommercialPaper.fromBuffer() to rehydrate the response buffer as a commercial paper:

let paper = CommercialPaper.fromBuffer(issueResponse);

This allows the paper to be used in a natural way in a descriptive completion message:

console.log(`${paper.issuer} commercial paper : ${paper.paperNumber} successfully issued for value ${paper.faceValue}`);

See how the same paper class has been used in both the application and the smart contract; if you structure your code like this, it'll really help readability and reuse.

As with the transaction proposal, it might appear that the application receives control soon after the smart contract completes, but that's not the case. Under the covers, the SDK manages the entire consensus process, and notifies the application when it is complete according to the strategy connectionOption. If you're interested in what the SDK does under the covers, read the detailed transaction flow.

That's it! In this topic you've understood how to call a smart contract from a sample application by examining how MagnetoCorp's application issues a new commercial paper in PaperNet. Now examine the key ledger and smart contract data structures in the architecture topics that follow.

Application design elements

This section elaborates the key features for client application and smart contract development found in Hyperledger Fabric. A solid understanding of these features will help you design and implement efficient and effective solutions.

Contract names

Audience: Architects, application and smart contract developers, administrators

A chaincode is a generic container for deploying code to a Hyperledger Fabric blockchain network. One or more related smart contracts are defined within a chaincode. Every smart contract has a name that uniquely identifies it within a chaincode. Applications access a particular smart contract within an instantiated chaincode using its contract name.

In this topic, we’re going to cover:

Chaincode

In the Developing Applications topic, we can see how the Fabric SDKs provide high level programming abstractions which help application and smart contract developers to focus on their business problem, rather than the low level details of how to interact with a Fabric network.

Smart contracts are one example of a high level programming abstraction, and it is possible to define smart contracts within a chaincode container. When a chaincode is installed and instantiated, all the smart contracts within it are made available to the corresponding channel.

_images/develop.diagram.20.pngcontract.chaincode Multiple smart contracts can be defined within a chaincode. Each is uniquely identified by their name within a chaincode.

In the diagram above, chaincode A has three smart contracts defined within it, whereas chaincode B has four smart contracts. See how the chaincode name is used to fully qualify a particular smart contract.

The ledger structure is defined by a set of deployed smart contracts. That’s because the ledger contains facts about the business objects of interest to the network (such as commercial paper within PaperNet), and these business objects are moved through their lifecycle (e.g. issue, buy, redeem) by the transaction functions defined within a smart contract.

In most cases, a chaincode will only have one smart contract defined within it. However, it can make sense to keep related smart contracts together in a single chaincode. For example, commercial papers denominated in different currencies might have contracts EuroPaperContract, DollarPaperContract, YenPaperContract which might need to be kept synchronized with each other in the channel to which they are deployed.

Name

Each smart contract within a chaincode is uniquely identified by its contract name. A smart contract can explicitly assign this name when the class is constructed, or let the Contract class implicitly assign a default name.

Examine the papercontract.js chaincode file:

class CommercialPaperContract extends Contract {

    constructor() {
        // Unique name when multiple contracts per chaincode file
        super('org.papernet.commercialpaper');
    }

See how the CommercialPaperContract constructor specifies the contract name as org.papernet.commercialpaper. The result is that within the papercontract chaincode, this smart contract is now associated with the contract name org.papernet.commercialpaper.

If an explicit contract name is not specified, then a default name is assigned – the name of the class. In our example, the default contract name would be CommercialPaperContract.

Choose your names carefully. It’s not just that each smart contract must have a unique name; a well-chosen name is illuminating. Specifically, using an explicit DNS-style naming convention is recommended to help organize clear and meaningful names; org.papernet.commercialpaper conveys that the PaperNet network has defined a standard commercial paper smart contract.

Contract names are also helpful to disambiguate different smart contract transaction functions with the same name in a given chaincode. This happens when smart contracts are closely related; their transaction names will tend to be the same. We can see that a transaction is uniquely defined within a channel by the combination of its chaincode and smart contract name.
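
For example, a client can address two same-named transactions unambiguously by selecting the contract first; the bond contract name below is an assumption for illustration:

// Both contracts live in the 'papercontract' chaincode and define an 'issue' transaction
const paperContract = await network.getContract('papercontract', 'org.papernet.commercialpaper');
const bondContract = await network.getContract('papercontract', 'org.papernet.bond');

// The contract reference determines which 'issue' transaction runs
await paperContract.submitTransaction('issue', 'MagnetoCorp', '00001');
await bondContract.submitTransaction('issue', 'MagnetoCorp', 'BOND001');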

Contract names must be unique within a chaincode file. Some code editors will detect multiple definitions of the same class name before deployment. Regardless, the chaincode will return an error if multiple classes with the same contract name are explicitly or implicitly specified.

Application

Once a chaincode has been installed on a peer and instantiated on a channel, the smart contracts in it are accessible to an application:

const network = await gateway.getNetwork(`papernet`);

const contract = await network.getContract('papercontract', 'org.papernet.commercialpaper');

const issueResponse = await contract.submitTransaction('issue', 'MagnetoCorp', '00001', '2020-05-31', '2020-11-30', '5000000');

See how the application accesses the smart contract with the network.getContract() method. Together, the papercontract chaincode name and the org.papernet.commercialpaper contract name return a contract reference, which can be used to submit transactions to issue commercial paper with the contract.submitTransaction() API.

Default contract

The first smart contract defined in a chaincode is called the default smart contract. A default is helpful because a chaincode will usually have one smart contract defined within it; a default allows the application to access those transactions directly, without specifying a contract name.

_images/develop.diagram.21.pngdefault.contract A default smart contract is the first contract defined in a chaincode.

In this diagram, CommercialPaperContract is the default smart contract. Even though we have two smart contracts, the default smart contract makes our previous example easier to write:

const network = await gateway.getNetwork(`papernet`);

const contract = await network.getContract('papercontract');

const issueResponse = await contract.submitTransaction('issue', 'MagnetoCorp', '00001', '2020-05-31', '2020-11-30', '5000000');

This works because the default smart contract in papercontract is CommercialPaperContract and it has an issue transaction. Note that the issue transaction in BondContract can only be invoked by explicitly addressing it. Likewise, even though the cancel transaction is unique, because BondContract is not the default smart contract, it must also be explicitly addressed.

In most cases, a chaincode will only contain a single smart contract, so careful naming of the chaincode can reduce the need for developers to care about chaincode as a concept. In the example code above it feels like papercontract is a smart contract.

In summary, contract names are a straightforward mechanism to identify individual smart contracts within a given chaincode. Contract names make it easy for applications to find a particular smart contract and use it to access the ledger.

Chaincode namespace

Audience: Architects, application and smart contract developers, administrators

A chaincode namespace allows a chaincode to keep its world state separate from other chaincodes. Specifically, smart contracts in the same chaincode share direct access to the same world state, whereas smart contracts in different chaincodes cannot directly access each other's world state. If a smart contract needs to access another chaincode's world state, it can do this by performing a chaincode-to-chaincode invocation. Finally, a blockchain can contain transactions which relate to different world states.

In this topic, we’re going to cover:

Motivation

A namespace is a common concept. We understand that Park Street, New York and Park Street, Seattle are different streets even though they have the same name. The city forms a namespace for Park Street, simultaneously providing freedom and clarity.

It’s the same in a computer system. Namespaces allow different users to program and operate different parts of a shared system, without getting in each other’s way. Many programming languages have namespaces so that programs can freely assign unique identifiers, such as variable names, without worrying about other programs doing the same. We’ll see that Hyperledger Fabric uses namespaces to help smart contracts keep their ledger world state separate from other smart contracts.

Scenario

Let’s examine how the ledger world state organizes facts about business objects that are important to the organizations in a channel using the diagram below. Whether these objects are commercial papers, bonds, or vehicle registrations, and wherever they are in their lifecycle, they are maintained as states within the ledger world state database. A smart contract manages these business objects by interacting with the ledger (world state and blockchain), and in most cases this will involve it querying or updating the ledger world state.

It’s vitally important to understand that the ledger world state is partitioned according to the chaincode of the smart contract that accesses it, and this partitioning, or namespacing is an important design consideration for architects, administrators and programmers.

_images/develop.diagram.50.pngchaincodens.scenario The ledger world state is separated into different namespaces according to the chaincode that accesses it. Within a given channel, smart contracts in the same chaincode share the same world state, and smart contracts in different chaincodes cannot directly access each other’s world state. Likewise, a blockchain can contain transactions that relate to different chaincode world states.

In our example, we can see four smart contracts defined in two different chaincodes, each of which is in their own chaincode container. The euroPaper and yenPaper smart contracts are defined in the papers chaincode. The situation is similar for the euroBond and yenBond smart contracts – they are defined in the bonds chaincode. This design helps application programmers understand whether they are working with commercial papers or bonds priced in Euros or Yen, and because the rules for each financial product don’t really change for different currencies, it makes sense to manage their deployment in the same chaincode.

The diagram also shows the consequences of this deployment choice. The database management system (DBMS) creates different world state databases for the papers and bonds chaincodes and the smart contracts contained within them. World state A and world state B are each held within distinct databases; the data are isolated from each other such that a single world state query (for example) cannot access both world states. The world state is said to be namespaced according to its chaincode.

See how world state A contains two lists of commercial papers paperListEuro and paperListYen. The states PAP11 and PAP21 are instances of each paper managed by the euroPaper and yenPaper smart contracts respectively. Because they share the same chaincode namespace, their keys (PAPxyz) must be unique within the namespace of the papers chaincode, a little like a street name is unique within a town. Notice how it would be possible to write a smart contract in the papers chaincode that performed an aggregate calculation over all the commercial papers – whether priced in Euros or Yen – because they share the same namespace. The situation is similar for bonds – they are held within world state B which maps to a separate bonds database, and their keys must be unique.

Just as importantly, namespaces mean that euroPaper and yenPaper cannot directly access world state B, and that euroBond and yenBond cannot directly access world state A. This isolation is helpful, as commercial papers and bonds are very distinct financial instruments; they have different attributes and are subject to different rules. It also means that papers and bonds could have the same keys, because they are in different namespaces. This is helpful; it provides a significant degree of freedom for naming. Use this freedom to name different business objects meaningfully.

Most importantly, we can see that a blockchain is associated with the peer operating in a particular channel, and that it contains transactions that affect both world state A and world state B. That’s because the blockchain is the most fundamental data structure contained in a peer. The set of world states can always be recreated from this blockchain, because they are the cumulative results of the blockchain’s transactions. A world state helps simplify smart contracts and improve their efficiency, as they usually only require the current value of a state. Keeping world states separate via namespaces helps smart contracts isolate their logic from other smart contracts, rather than having to worry about transactions that correspond to different world states. For example, a bonds contract does not need to worry about paper transactions, because it cannot see their resultant world state.

It’s also worth noticing that the peer, chaincode containers and DBMS all are logically different processes. The peer and all its chaincode containers are always in physically separate operating system processes, but the DBMS can be configured to be embedded or separate, depending on its type. For LevelDB, the DBMS is wholly contained within the peer, but for CouchDB, it is a separate operating system process.

It’s important to remember that the namespace choices in this example are the result of a business requirement to share commercial papers in different currencies but keep them separate from bonds. Think about how the namespace structure would be modified to meet a business requirement to keep every financial asset class separate, or to share all commercial papers and bonds?

Channels

If a peer is joined to multiple channels, then a new blockchain is created and managed for each channel. Moreover, every time a chaincode is instantiated in a new channel, a new world state database is created for it. It means that the channel also forms a kind of namespace alongside that of the chaincode for the world state.

However, the same peer and chaincode container processes can be simultaneously joined to multiple channels; unlike blockchains and world state databases, these processes do not increase with the number of channels joined.

For example, if the papers and bonds chaincodes were instantiated on a new channel, there would be a totally separate blockchain created, and two new world state databases. However, the peer and chaincode containers would not increase; each would just be connected to multiple channels.
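
From an application's perspective, that separation might look like the following sketch: one gateway yields contract references on two channels, each backed by its own ledger. The channel names are assumptions:

// One gateway, two channels: each getNetwork() call addresses a separate ledger
const paperChannel = await gateway.getNetwork('papernet');
const otherChannel = await gateway.getNetwork('papernet2');

// The same 'papers' chaincode instantiated on both channels is backed by
// two independent world state databases, one per channel
const papersOnPaperNet = await paperChannel.getContract('papers');
const papersOnPaperNet2 = await otherChannel.getContract('papers');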

Usage

Let’s use our commercial paper example to show how an application uses a smart contract with namespaces. It’s worth noting that an application communicates with the peer, and the peer routes the request to the appropriate chaincode container which then accesses the DBMS. This routing is done by the peer core component shown in the diagram.

Here’s the code for an application that uses both commercial papers and bonds, priced in Euros and Yen. The code is fairly self-explanatory:

const euroPaper = await network.getContract('papers', 'euroPaper');
paper1 = await euroPaper.submitTransaction('issue', 'PAP11');

const yenPaper = await network.getContract('papers', 'yenPaper');
paper2 = await yenPaper.submitTransaction('redeem', 'PAP21');

const euroBond = await network.getContract('bonds', 'euroBond');
bond1 = await euroBond.submitTransaction('buy', 'BON31');

const yenBond = await network.getContract('bonds', 'yenBond');
bond2 = await yenBond.submitTransaction('sell', 'BON41');

See how the application:

  • Accesses the euroPaper and yenPaper contracts using the getContract() API specifying the papers chaincode. See interaction points 1a and 2a.

  • Accesses the euroBond and yenBond contracts using the getContract() API specifying the bonds chaincode. See interaction points 3a and 4a.

  • Submits an issue transaction to the network for commercial paper PAP11 using the euroPaper contract. See interaction point 1a. This results in the creation of a commercial paper represented by state PAP11 in world state A; interaction point 1b. This operation is captured as a transaction in the blockchain at interaction point 1c.

  • Submits a redeem transaction to the network for commercial paper PAP21 using the yenPaper contract. See interaction point 2a. This results in the creation of a commercial paper represented by state PAP21 in world state A; interaction point 2b. This operation is captured as a transaction in the blockchain at interaction point 2c.

  • Submits a buy transaction to the network for bond BON31 using the euroBond contract. See interaction point 3a. This results in the creation of a bond represented by state BON31 in world state B; interaction point 3b. This operation is captured as a transaction in the blockchain at interaction point 3c.

  • Submits a sell transaction to the network for bond BON41 using the yenBond contract. See interaction point 4a. This results in the creation of a bond represented by state BON41 in world state B; interaction point 4b. This operation is captured as a transaction in the blockchain at interaction point 4c.

See how smart contracts interact with the world state:

  • euroPaper and yenPaper contracts can directly access world state A, but cannot directly access world state B. World state A is physically held in the papers database in the database management system (DBMS) corresponding to the papers chaincode.

  • euroBond and yenBond contracts can directly access world state B, but cannot directly access world state A. World state B is physically held in the bonds database in the database management system (DBMS) corresponding to the bonds chaincode.

See how the blockchain captures transactions for all world states:

  • Interactions 1c and 2c correspond to transactions that create and update commercial papers PAP11 and PAP21 respectively. These are both contained within world state A.

  • Interactions 3c and 4c correspond to transactions that update bonds BON31 and BON41. These are both contained within world state B.

  • If world state A or world state B were destroyed for any reason, they could be recreated by replaying all the transactions in the blockchain.

Cross chaincode access

As we saw in our example scenario, euroPaper and yenPaper cannot directly access world state B. That’s because we have designed our chaincodes and smart contracts so that these chaincodes and world states are kept separately from each other. However, let’s imagine that euroPaper needs to access world state B.

Why might this happen? Imagine that when a commercial paper was issued, the smart contract wanted to price the paper according to the current return on bonds with a similar maturity date. In this case it will be necessary for the euroPaper contract to be able to query the price of bonds in world state B. Look at the following diagram to see how we might structure this interaction.

_images/develop.diagram.51.pngchaincodens.scenario How chaincodes and smart contracts can indirectly access another world state – via its chaincode.

Notice how:

  • the application submits an issue transaction in the euroPaper smart contract to issue PAP11. See interaction 1a.

  • the issue transaction in the euroPaper smart contract calls the query transaction in the euroBond smart contract. See interaction point 1b.

  • the query transaction in euroBond can retrieve information from world state B. See interaction point 1c.

  • when control returns to the issue transaction, it can use the information in the response to price the paper and update world state A with information. See interaction point 1d.

  • the flow of control for issuing commercial paper priced in Yen is the same. See interaction points 2a, 2b, 2c and 2d.

Control is passed between chaincodes using the invokeChaincode() API. This API passes control from one chaincode to another.

Although we have only discussed query transactions in the example, it is possible to invoke a smart contract which will update the called chaincode’s world state. See the considerations below.
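
Here's a hedged sketch of what the call site inside euroPaper's issue transaction might look like; the chaincode name, the contract-qualified transaction name and the argument are assumptions drawn from the scenario:

// Inside euroPaper's issue transaction: query the bonds chaincode on the same peer.
// invokeChaincode(chaincodeName, args, channel) passes control to another chaincode.
const response = await ctx.stub.invokeChaincode(
  'bonds',                          // the called chaincode
  ['euroBond:query', 'BON31'],      // contract-qualified transaction name and arguments
  ctx.stub.getChannelID());         // same channel as the caller

// Use the returned payload, e.g. a bond price, to help price the paper
const bondInfo = JSON.parse(response.payload.toString());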

Considerations
  • In general, each chaincode will have a single smart contract in it.

  • Multiple smart contracts should only be deployed in the same chaincode if they are very closely related. Usually, this is only necessary if they share the same world state.

  • Chaincode namespaces provide isolation between different world states. In general it makes sense to isolate unrelated data from each other. Note that you cannot choose the chaincode namespace; it is assigned by Hyperledger Fabric, and maps directly to the name of the chaincode.

  • For chaincode to chaincode interactions using the invokeChaincode() API, both chaincodes must be installed on the same peer.

    • For interactions that only require the called chaincode’s world state to be queried, the invocation can be in a different channel to the caller’s chaincode.

    • For interactions that require the called chaincode’s world state to be updated, the invocation must be in the same channel as the caller’s chaincode.

Transaction context

Audience: Architects, application and smart contract developers

A transaction context performs two functions. Firstly, it allows a developer to define and maintain user variables across transaction invocations within a smart contract. Secondly, it provides access to a wide range of Fabric APIs that allow smart contract developers to perform operations relating to detailed transaction processing. These range from querying or updating the ledger, both the immutable blockchain and the modifiable world state, to retrieving the transaction-submitting application’s digital identity.

A transaction context is created when a smart contract is instantiated, and made available to every subsequent transaction invocation. A transaction context helps smart contract developers write programs that are powerful, efficient and easy to reason about.

Scenario

In the commercial paper sample, papercontract initially defines the name of the list of commercial papers for which it’s responsible. Each transaction subsequently refers to this list; the issue transaction adds new papers to it, the buy transaction changes its owner, and the redeem transaction marks it as complete. This is a common pattern; when writing a smart contract it’s often helpful to initialize and recall particular variables in sequential transactions.

_images/develop.diagram.40.pngtransaction.scenario A smart contract transaction context allows smart contracts to define and maintain user variables across transaction invocations. Refer to the text for a detailed explanation.

Programming

When a smart contract is constructed, a developer can optionally override the built-in Context class createContext method to create a custom context:

createContext() {
    return new CommercialPaperContext();
}

In our example, the CommercialPaperContext is specialized for CommercialPaperContract. See how the custom context, addressed through this, adds the specific variable paperList to itself:

class CommercialPaperContext extends Context {
    constructor () {
        super();
        this.paperList = new PaperList(this);
    }
}

When the createContext() method returns at point (1) in the diagram above, a custom context ctx has been created which contains paperList as one of its variables.

Subsequently, whenever a smart contract transaction such as issue, buy or redeem is called, this context will be passed to it. See how at points (2), (3) and (4) the same commercial paper context is passed into the transaction method using the ctx variable.

See how the context is then used at point (5):

ctx.paperList.addPaper(...);
ctx.stub.putState(...);

Notice how paperList created in CommercialPaperContext is available to the issue transaction. See how paperList is similarly used by the redeem and buy transactions; ctx makes the smart contracts efficient and easy to reason about.

You can also see that there’s another element in the context, ctx.stub, which was not explicitly added by CommercialPaperContext. That’s because stub and other variables are part of the built-in context. Let’s now examine the structure of this built-in context, these implicit variables, and how to use them.

Structure

As we’ve seen from the example, a transaction context can contain any number of user variables such as paperList.

The transaction context also contains two built-in elements that provide access to a wide range of Fabric functionality ranging from the client application that submitted the transaction to ledger access.

  • ctx.stub is used to access APIs that provide a broad range of transaction processing operations from putState() and getState() to access the ledger, to getTxID() to retrieve the current transaction ID.

  • ctx.clientIdentity is used to get information about the identity of the user who submitted the transaction.
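
To make this concrete, here's a minimal sketch of a transaction that touches both built-in elements; the logging and property names are illustrative:

async issue(ctx, issuer, paperNumber) {
  // ctx.stub: transaction processing APIs
  const txId = ctx.stub.getTxID();
  const timestamp = ctx.stub.getTxTimestamp();

  // ctx.clientIdentity: information about the transaction submitter
  const submitterMspId = ctx.clientIdentity.getMSPID();

  console.info(`Transaction ${txId} submitted by ${submitterMspId} at ${timestamp.seconds}`);
  // ... transaction logic as before ...
}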

We’ll use the following diagram to show you what a smart contract can do using the stub and clientIdentity using the APIs available to it:

_images/develop.diagram.41.pngcontext.apis A smart contract can access a range of functionality in a smart contract via the transaction context stub and clientIdentity. Refer to the text for a detailed explanation.

Stub

The APIs in the stub fall into the following categories:

  • World state data APIs. See interaction point (1). These APIs enable smart contracts to get, put and delete state corresponding to individual objects from the world state, using their key:


    These basic APIs are complemented by query APIs which enable contracts to retrieve a set of states, rather than an individual state. See interaction point (2). The set is either defined by a range of key values, using full or partial keys, or a query according to values in the underlying world state database. For large queries, the result sets can be paginated to reduce storage requirements:

  • Private data APIs. See interaction point (3). These APIs enable smart contracts to interact with a private data collection. They are analogous to the APIs for world state interactions, but for private data. There are APIs to get, put and delete a private data state by its key:


    This set is complemented by set of APIs to query private data (4). These APIs allow smart contracts to retrieve a set of states from a private data collection, according to a range of key values, either full or partial keys, or a query according to values in the underlying world state database. There are currently no pagination APIs for private data collections.

  • Transaction APIs. See interaction point (5). These APIs are used by a smart contract to retrieve details about the current transaction proposal being processed by the smart contract. This includes the transaction identifier and the time when the transaction proposal was created.

    • getTxID() returns the identifier of the current transaction proposal (5).

    • getTxTimestamp() returns the timestamp when the current transaction proposal was created by the application (5).

    • getCreator() returns the raw identity (X.509 or otherwise) of the creator of the transaction proposal. If this is an X.509 certificate then it is often more appropriate to use ctx.clientIdentity.

    • getSignedProposal() returns a signed copy of the current transaction proposal being processed by the smart contract.

    • getBinding() is used to prevent transactions being maliciously or accidentally replayed using a nonce. (For practical purposes, a nonce is a random number generated by the client application and incorporated in a cryptographic hash.) For example, this API could be used by a smart contract at (1) to detect a replay of the transaction (5).

    • getTransient() allows a smart contract to access the transient data an application passes to a smart contract. See interaction points (9) and (10). Transient data is private to the application-smart contract interaction. It is not recorded on the ledger and is often used in conjunction with private data collections (3).


  • Key APIs are used by smart contracts to manipulate state keys in the world state or a private data collection. See interaction points 2 and 4.

    The simplest of these APIs allows smart contracts to form and split composite keys from their individual components. Slightly more advanced are the ValidationParameter() APIs which get and set the state based endorsement policies for world state (2) and private data (4). Finally, getHistoryForKey() retrieves the history for a state by returning the set of stored values, including the transaction identifiers that performed the state update, allowing the transactions to be read from the blockchain (10).


  • Event APIs are used to manage event processing in a smart contract.

    • setEvent()

      Smart contracts use this API to add user events to a transaction response. See interaction point (5). These events are ultimately recorded on the blockchain and sent to listening applications at interaction point (11).


  • Utility APIs are a collection of useful APIs that don’t easily fit in a pre-defined category, so we’ve grouped them together! They include retrieving the current channel name and passing control to a different chaincode on the same peer.

    • getChannelID()

      See interaction point (13). A smart contract running on any peer can use this API to determine on which channel the application invoked the smart contract.

    • invokeChaincode()

      See interaction point (14). Peer3 owned by MagnetoCorp has multiple smart contracts installed on it. These smart contracts are able to call each other using this API. The smart contracts must be collocated; it is not possible to call a smart contract on a different peer.


    Some of these utility APIs are only used if you’re using low-level chaincode, rather than smart contracts. These APIs are primarily for the detailed manipulation of chaincode input; the smart contract Contract class does all of this parameter marshalling automatically for developers.

ClientIdentity

In most cases, the application submitting a transaction will be using an X.509 certificate. In the example, an X.509 certificate (6) issued by CA1 (7) is being used by Isabella (8) in her application to sign the proposal in transaction t6 (5).

ClientIdentity takes the information returned by getCreator() and puts a set of X.509 utility APIs on top of it to make it easier to use for this common use case.

  • getX509Certificate() returns the full X.509 certificate of the transaction submitter, including all its attributes and their values. See interaction point (6).

  • getAttributeValue() returns the value of a particular X.509 attribute, for example, the organizational unit OU, or distinguished name DN. See interaction point (6).

  • assertAttributeValue() returns TRUE if the specified attribute of the X.509 attribute has a specified value. See interaction point (6).

  • getID() returns the unique identity of the transaction submitter, according to their distinguished name and the issuing CA’s distinguished name. The format is x509::{subject DN}::{issuer DN}. See interaction point (6).

  • getMSPID() returns the channel MSP of the transaction submitter. This allows a smart contract to make processing decisions based on the submitter’s organizational identity. See interaction point (15) or (16).
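
As an illustration of how these APIs support access control decisions, a transaction might restrict who can issue paper; the MSP ID and attribute values are assumptions:

async issue(ctx, issuer, paperNumber) {
  // Only members of MagnetoCorp's MSP may issue MagnetoCorp paper
  if (ctx.clientIdentity.getMSPID() !== 'MagnetoCorpMSP') {
    throw new Error('Only MagnetoCorp members can issue this paper');
  }

  // Alternatively, gate on an X.509 attribute set at enrollment time
  if (!ctx.clientIdentity.assertAttributeValue('role', 'issuer')) {
    throw new Error('Submitter does not have the issuer role');
  }

  // ... transaction logic ...
}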

Transaction handlers

Audience: Architects, Application and smart contract developers

Transaction handlers allow smart contract developers to define common processing at key points during the interaction between an application and a smart contract. Transaction handlers are optional but, if defined, they will receive control before or after every transaction in a smart contract is invoked. There is also a specific handler which receives control when a request is made to invoke a transaction not defined in a smart contract.

Here’s an example of transaction handlers for the commercial paper smart contract sample:

_images/develop.diagram.2.pngdevelop.transactionhandler

Before, After and Unknown transaction handlers. In this example, beforeTransaction() is called before the issue, buy and redeem transactions. afterTransaction() is called after the issue, buy and redeem transactions. unknownTransaction() is only called if a request is made to invoke a transaction not defined in the smart contract. (The diagram is simplified by not repeating beforeTransaction and afterTransaction boxes for each transaction.)

Types of handler

There are three types of transaction handlers which cover different aspects of the interaction between an application and a smart contract:

  • Before handler: is called before every smart contract transaction is invoked. The handler will usually modify the transaction context to be used by the transaction. The handler has access to the full range of Fabric APIs; for example, it can issue getState() and putState().

  • After handler: is called after every smart contract transaction is invoked. The handler will usually perform post-processing common to all transactions, and also has full access to the Fabric APIs.

  • Unknown handler: is called if an attempt is made to invoke a transaction that is not defined in a smart contract. Typically, the handler will record the failure for subsequent processing by an administrator. The handler has full access to the Fabric APIs.

Defining a transaction handler is optional; a smart contract will perform correctly without handlers being defined. A smart contract can define at most one handler of each type.

Defining a handler

Transaction handlers are added to the smart contract as methods with well defined names. Here’s an example which adds a handler of each type:

class CommercialPaperContract extends Contract {

    ...

    async beforeTransaction(ctx) {
        // Write the transaction ID as an informational to the console
        console.info(ctx.stub.getTxID());
    };

    async afterTransaction(ctx, result) {
        // This handler interacts with the ledger
        ctx.stub.putState(...);
    };

    async unknownTransaction(ctx) {
        // This handler throws an exception
        throw new Error('Unknown transaction function');
    };

}

The form of a transaction handler definition is similar for all handler types, but notice how the afterTransaction(ctx, result) handler also receives any result returned by the transaction. The API documentation shows you the exact form of these handlers.

Handler processing

Once a handler has been added to the smart contract, it will be invoked during transaction processing. During processing, the handler receives ctx, the transaction context, performs some processing, and returns control as it completes. Processing continues as follows:

  • Before handler: If the handler completes successfully, the transaction is called with the updated context. If the handler throws an exception, then the transaction is not called and the smart contract fails with the exception error message.

  • After handler: If the handler completes successfully, then the smart contract completes as determined by the invoked transaction. If the handler throws an exception, then the transaction fails with the exception error message.

  • Unknown handler: The handler should complete by throwing an exception with the required error message. If an Unknown handler is not specified, or an exception is not thrown by it, there is sensible default processing; the smart contract will fail with an unknown transaction error message.

If the handler requires access to the function and parameters, then it is easy to do this:

async beforeTransaction(ctx) {
    // Retrieve details of the transaction
    let txnDetails = ctx.stub.getFunctionAndParameters();

    console.info(`Calling function: ${txnDetails.fcn}`);
    console.info(`Function arguments: ${txnDetails.params}`);
}

See how this handler uses the utility API getFunctionAndParameters via the transaction context.

Multiple handlers

It is only possible to define at most one handler of each type for a smart contract. If a smart contract needs to invoke multiple functions during before, after or unknown handling, it should coordinate this from within the appropriate function.
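
A sketch of that coordination pattern, where the single permitted handler fans out to private helper methods; the helper names are illustrative:

async beforeTransaction(ctx) {
    // The one permitted before-handler coordinates several checks itself
    await this.logTransaction(ctx);
    await this.checkAccess(ctx);
}

async logTransaction(ctx) {
    console.info(`Transaction: ${ctx.stub.getTxID()}`);
}

async checkAccess(ctx) {
    // e.g. throw if the submitter's MSP is not permitted to invoke this contract
}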

Endorsement policies

Audience: Architects, Application and smart contract developers

Endorsement policies define the smallest set of organizations that are required to endorse a transaction in order for it to be valid. To endorse, an organization’s endorsing peer needs to run the smart contract associated with the transaction and sign its outcome. When the ordering service sends the transaction to the committing peers, they will each individually check whether the endorsements in the transaction fulfill the endorsement policy. If this is not the case, the transaction is invalidated and it will have no effect on world state.

Endorsement policies work at two different granularities: they can be set for an entire namespace, as well as for individual state keys. They are formulated using basic logic expressions such as AND and OR. For example, in PaperNet this could be used as follows: the endorsement policy for a paper that has been sold from MagnetoCorp to DigiBank could be set to AND(MagnetoCorp.peer, DigiBank.peer), requiring any changes to this paper to be endorsed by both MagnetoCorp and DigiBank.

Connection Profile

Audience: Architects, application and smart contract developers

A connection profile describes a set of components, including peers, orderers and certificate authorities in a Hyperledger Fabric blockchain network. It also contains channel and organization information relating to these components. A connection profile is primarily used by an application to configure a gateway that handles all network interactions, allowing it to focus on business logic. A connection profile is normally created by an administrator who understands the network topology.

In this topic, we’re going to cover:

Scenario

A connection profile is used to configure a gateway. Gateways are important for many reasons, the primary being to simplify an application’s interaction with a network channel.

_images/develop.diagram.30.pngprofile.scenario Two applications, issue and buy, use gateways 1&2 configured with connection profiles 1&2. Each profile describes a different subset of MagnetoCorp and DigiBank network components. Each connection profile must contain sufficient information for a gateway to interact with the network on behalf of the issue and buy applications. See the text for a detailed explanation.

A connection profile contains a description of a network view, expressed in a technical syntax, which can either be JSON or YAML. In this topic, we use the YAML representation, as it’s easier for you to read. Static gateways need more information than dynamic gateways because the latter can use service discovery to dynamically augment the information in a connection profile.

A connection profile should not be an exhaustive description of a network channel; it just needs to contain enough information for the gateway that's using it. In the network above, connection profile 1 needs to contain at least the endorsing organizations and peers for the issue transaction, as well as the peers that will notify the gateway when the transaction has been committed to the ledger.

It’s easiest to think of a connection profile as describing a view of the network. It could be a comprehensive view, but that’s unrealistic for a few reasons:

  • Peers, orderers, certificate authorities, channels, and organizations are added and removed according to demand.

  • Components can start and stop, or fail unexpectedly (e.g. power outage).

  • A gateway doesn’t need a view of the whole network, only what’s necessary to successfully handle transaction submission or event notification for example.

  • Service Discovery can augment the information in a connection profile. Specifically, dynamic gateways can be configured with minimal Fabric topology information; the rest can be discovered.

A static connection profile is normally created by an administrator who understands the network topology in detail. That’s because a static profile can contain quite a lot of information, and an administrator needs to capture this in the corresponding connection profile. In contrast, dynamic profiles minimize the amount of definition required, and therefore can be a better choice for developers who want to get going quickly, or administrators who want to create a more responsive gateway. Connection profiles are created in either the YAML or JSON format using an editor of choice.

Usage

We’ll see how to define a connection profile in a moment; let’s first see how it is used by a sample MagnetoCorp issue application:

const fs = require('fs');
const yaml = require('js-yaml');
const { Gateway } = require('fabric-network');

const connectionProfile = yaml.safeLoad(fs.readFileSync('../gateway/paperNet.yaml', 'utf8'));

const gateway = new Gateway();

await gateway.connect(connectionProfile, connectionOptions);

After loading some required classes, see how the paperNet.yaml gateway file is loaded from the file system, converted to a JSON object using the yaml.safeLoad() method, and used to configure a gateway using its connect() method.

By configuring a gateway with this connection profile, the issue application is providing the gateway with the relevant network topology it should use to process transactions. That’s because the connection profile contains sufficient information about the PaperNet channels, organizations, peers, orderers and CAs to ensure transactions can be successfully processed.

It’s good practice for a connection profile to define more than one peer for any given organization; it prevents a single point of failure. This practice also applies to dynamic gateways, where multiple peers provide more than one starting point for service discovery.

A DigiBank buy application would typically configure its gateway with a similar connection profile, but with some important differences. Some elements will be the same, such as the channel; some elements will overlap, such as the endorsing peers. Other elements will be completely different, such as notification peers or certificate authorities for example.

The connectionOptions passed to a gateway complement the connection profile. They allow an application to declare how it would like the gateway to use the connection profile. They are interpreted by the SDK to control interaction patterns with network components, for example to select which identity to connect with, or which peers to use for event notifications. Read about the list of available connection options and when to use them.

Structure

To help you understand the structure of a connection profile, we’re going to step through an example for the network shown above. Its connection profile is based on the PaperNet commercial paper sample, and stored in the GitHub repository. For convenience, we’ve reproduced it below. You will find it helpful to display it in another browser window as you now read about it:

  • Line 9: name: "papernet.magnetocorp.profile.sample"

    This is the name of the connection profile. Try to use DNS style names; they are a very easy way to convey meaning.

  • Line 16: x-type: "hlfv1"

    Users can add their own x- properties that are “application-specific” – just like with HTTP headers. They are provided primarily for future use.

  • Line 20: description: "Sample connection profile for documentation topic"

    A short description of the connection profile. Try to make this helpful for the reader who might be seeing this for the first time!

  • Line 25: version: "1.0"

    The schema version for this connection profile. Currently only version 1.0 is supported, and it is not envisioned that this schema will change frequently.

  • Line 32: channels:

    This is the first really important line. channels: identifies that what follows are all the channels that this connection profile describes. However, it is good practice to keep different channels in different connection profiles, especially if they are used independently of each other.

  • Line 36: papernet:

    Details of papernet, the first channel in this connection profile, will follow.

  • Line 41: orderers:

    Details of all the orderers for papernet follow. You can see in line 45 that the orderer for this channel is orderer1.magnetocorp.example.com. This is just a logical name; later in the connection profile (lines 134 - 147), there will be details of how to connect to this orderer. Notice that orderer2.digibank.example.com is not in this list; it makes sense that applications use their own organization’s orderers, rather than those from a different organization.

  • Line 49: peers:

    Details of all the peers for papernet will follow.

You can see three peers listed from MagnetoCorp: peer1.magnetocorp.example.com, peer2.magnetocorp.example.com and peer3.magnetocorp.example.com. It’s not necessary to list all of an organization’s peers, though all three of MagnetoCorp’s have been listed here. You can see only one peer listed from DigiBank: peer9.digibank.example.com; including this peer starts to imply that the endorsement policy requires both MagnetoCorp and DigiBank to endorse transactions, as we’ll now confirm. It’s good practice to have multiple peers to avoid single points of failure.

Underneath each peer you can see four non-exclusive roles: endorsingPeer, chaincodeQuery, ledgerQuery and eventSource. See how peer1 and peer2 can perform all roles as they host papercontract. Contrast this with peer3, which can only be used for notifications, or for ledger queries that access the blockchain component of the ledger rather than the world state, and hence does not need to have smart contracts installed. Notice how peer9 should not be used for anything other than endorsement, because the other roles are better served by MagnetoCorp’s own peers.

    Again, see how the peers are described according to their logical names and their roles. Later in the profile, we’ll see the physical information for these peers.

  • Line 97: organizations:

    Details of all the organizations will follow, for all channels. Note that these organizations are for all channels, even though papernet is currently the only one listed. That’s because organizations can be in multiple channels, and channels can have multiple organizations. Moreover, some application operations relate to organizations rather than channels. For example, an application can request notification from one or all peers within its organization, or all organizations within the network – using connection options. For this, there needs to be an organization to peer mapping, and this section provides it.

  • Line 101: MagnetoCorp:

    All peers that are considered part of MagnetoCorp are listed: peer1, peer2 and peer3. Likewise for Certificate Authorities. Again, note the logical name usages, the same as the channels: section; physical information will follow later in the profile.

  • Line 121: DigiBank:

    Only peer9 is listed as part of DigiBank, and no Certificate Authorities. That’s because these other peers and the DigiBank CA are not relevant for users of this connection profile.

  • Line 134: orderers:

    The physical information for orderers is now listed. As this connection profile only mentioned one orderer for papernet, you see orderer1.magnetocorp.example.com details listed. These include its IP address and port, and gRPC options that can override the defaults used when communicating with the orderer, if necessary. As with peers:, for high availability, specifying more than one orderer is a good idea.

  • Line 152: peers:

    The physical information for all previous peers is now listed. This connection profile has three peers for MagnetoCorp: peer1, peer2, and peer3; for DigiBank, a single peer peer9 has its information listed. For each peer, as with orderers, their IP address and port is listed, together with gRPC options that can override the defaults used when communicating with a particular peer, if necessary.

  • Line 194: certificateAuthorities:

    The physical information for certificate authorities is now listed. The connection profile has a single CA listed for MagnetoCorp, ca1-magnetocorp, and its physical information follows. As well as IP details, the registrar information allows this CA to be used for Certificate Signing Requests (CSR). These are used to request new certificates for locally generated public/private key pairs.

Now you’ve understood a connection profile for MagnetoCorp, you might like to look at a corresponding profile for DigiBank. Locate where the profile is the same as MagnetoCorp’s, see where it’s similar, and finally where it’s different. Think about why these differences make sense for DigiBank applications.

That’s everything you need to know about connection profiles. In summary, a connection profile defines sufficient channels, organizations, peers, orderers and certificate authorities for an application to configure a gateway. The gateway allows the application to focus on business logic rather than the details of the network topology.

Sample

This file is reproduced inline from the GitHub commercial paper sample.

1: ---
2: #
3: # [Required]. A connection profile contains information about a set of network
4: # components. It is typically used to configure gateway, allowing applications
5: # interact with a network channel without worrying about the underlying
6: # topology. A connection profile is normally created by an administrator who
7: # understands this topology.
8: #
9: name: "papernet.magnetocorp.profile.sample"
10: #
11: # [Optional]. Analogous to HTTP, properties with an "x-" prefix are deemed
12: # "application-specific", and ignored by the gateway. For example, property
13: # "x-type" with value "hlfv1" was originally used to identify a connection
14: # profile for Fabric 1.x rather than 0.x.
15: #
16: x-type: "hlfv1"
17: #
18: # [Required]. A short description of the connection profile
19: #
20: description: "Sample connection profile for documentation topic"
21: #
22: # [Required]. Connection profile schema version. Used by the gateway to
23: # interpret these data.
24: #
25: version: "1.0"
26: #
27: # [Optional]. A logical description of each network channel; its peer and
28: # orderer names and their roles within the channel. The physical details of
29: # these components (e.g. peer IP addresses) will be specified later in the
30: # profile; we focus first on the logical, and then the physical.
31: #
32: channels:
33:   #
34:   # [Optional]. papernet is the only channel in this connection profile
35:   #
36:   papernet:
37:     #
38:     # [Optional]. Channel orderers for PaperNet. Details of how to connect to
39:     # them is specified later, under the physical "orderers:" section
40:     #
41:     orderers:
42:     #
43:     # [Required]. Orderer logical name
44:     #
45:       - orderer1.magnetocorp.example.com
46:     #
47:     # [Optional]. Peers and their roles
48:     #
49:     peers:
50:     #
51:     # [Required]. Peer logical name
52:     #
53:       peer1.magnetocorp.example.com:
54:         #
55:         # [Optional]. Is this an endorsing peer? (It must have chaincode
56:         # installed.) Default: true
57:         #
58:         endorsingPeer: true
59:         #
60:         # [Optional]. Is this peer used for query? (It must have chaincode
61:         # installed.) Default: true
62:         #
63:         chaincodeQuery: true
64:         #
65:         # [Optional]. Is this peer used for non-chaincode queries? All peers
66:         # support these types of queries, which include queryBlock(),
67:         # queryTransaction(), etc. Default: true
68:         #
69:         ledgerQuery: true
70:         #
71:         # [Optional]. Is this peer used as an event hub? All peers can produce
72:         # events. Default: true
73:         #
74:         eventSource: true
75:       #
76:       peer2.magnetocorp.example.com:
77:         endorsingPeer: true
78:         chaincodeQuery: true
79:         ledgerQuery: true
80:         eventSource: true
81:       #
82:       peer3.magnetocorp.example.com:
83:         endorsingPeer: false
84:         chaincodeQuery: false
85:         ledgerQuery: true
86:         eventSource: true
87:       #
88:       peer9.digibank.example.com:
89:         endorsingPeer: true
90:         chaincodeQuery: false
91:         ledgerQuery: false
92:         eventSource: false
93: #
94: # [Required]. List of organizations for all channels. At least one organization
95: # is required.
96: #
97: organizations:
98:    #
99:    # [Required]. Organizational information for MagnetoCorp
100:   #
101:   MagnetoCorp:
102:     #
103:     # [Required]. The MSPID used to identify MagnetoCorp
104:     #
105:     mspid: MagnetoCorpMSP
106:     #
107:     # [Required]. The MagnetoCorp peers
108:     #
109:     peers:
110:       - peer1.magnetocorp.example.com
111:       - peer2.magnetocorp.example.com
112:       - peer3.magnetocorp.example.com
113:     #
114:     # [Optional]. Fabric-CA Certificate Authorities.
115:     #
116:     certificateAuthorities:
117:       - ca-magnetocorp
118:   #
119:   # [Optional]. Organizational information for DigiBank
120:   #
121:   DigiBank:
122:     #
123:     # [Required]. The MSPID used to identify DigiBank
124:     #
125:     mspid: DigiBankMSP
126:     #
127:     # [Required]. The DigiBank peers
128:     #
129:     peers:
130:       - peer9.digibank.example.com
131: #
132: # [Optional]. Orderer physical information, by orderer name
133: #
134: orderers:
135:   #
136:   # [Required]. Name of MagnetoCorp orderer
137:   #
138:   orderer1.magnetocorp.example.com:
139:     #
140:     # [Required]. This orderer's IP address
141:     #
142:     url: grpc://localhost:7050
143:     #
144:     # [Optional]. gRPC connection properties used for communication
145:     #
146:     grpcOptions:
147:       ssl-target-name-override: orderer1.magnetocorp.example.com
148: #
149: # [Required]. Peer physical information, by peer name. At least one peer is
150: # required.
151: #
152: peers:
153:   #
154:   # [Required]. First MagnetoCorp peer physical properties
155:   #
156:   peer1.magnetocorp.example.com:
157:     #
158:     # [Required]. Peer's IP address
159:     #
160:     url: grpc://localhost:7151
161:     #
162:     # [Optional]. gRPC connection properties used for communication
163:     #
164:     grpcOptions:
165:       ssl-target-name-override: peer1.magnetocorp.example.com
166:       request-timeout: 120001
167:   #
168:   # [Optional]. Other MagnetoCorp peers
169:   #
170:   peer2.magnetocorp.example.com:
171:     url: grpc://localhost:7251
172:     grpcOptions:
173:       ssl-target-name-override: peer2.magnetocorp.example.com
174:       request-timeout: 120001
175:   #
176:   peer3.magnetocorp.example.com:
177:     url: grpc://localhost:7351
178:     grpcOptions:
179:       ssl-target-name-override: peer3.magnetocorp.example.com
180:       request-timeout: 120001
181:   #
182:   # [Required]. DigiBank peer physical properties
183:   #
184:   peer9.digibank.example.com:
185:     url: grpc://localhost:7951
186:     grpcOptions:
187:       ssl-target-name-override: peer9.digibank.example.com
188:       request-timeout: 120001
189: #
190: # [Optional]. Fabric-CA Certificate Authority physical information, by name.
191: # This information can be used to (e.g.) enroll new users. Communication is via
192: # REST, hence options relate to HTTP rather than gRPC.
193: #
194: certificateAuthorities:
195:   #
196:   # [Required]. MagnetoCorp CA
197:   #
198:   ca1-magnetocorp:
199:     #
200:     # [Required]. CA IP address
201:     #
202:     url: http://localhost:7054
203:     #
204:     # [Optional]. HTTP connection properties used for communication
205:     #
206:     httpOptions:
207:       verify: false
208:     #
209:     # [Optional]. Fabric-CA supports Certificate Signing Requests (CSRs). A
210:     # registrar is needed to enroll new users.
211:     #
212:     registrar:
213:       - enrollId: admin
214:         enrollSecret: adminpw
215:     #
216:     # [Optional]. The name of the CA.
217:     #
218:     caName: ca-magnetocorp

Connection Options

Audience: Architects, administrators, application and smart contract developers

Connection options are used in conjunction with a connection profile to control precisely how a gateway interacts with a network. Using a gateway allows an application to focus on business logic rather than network topology.


Scenario

A connection option specifies a particular aspect of a gateway’s behaviour. Gateways are important for many reasons, the primary being to allow an application to focus on business logic and smart contracts, while it manages interactions with the many components of a network.

_images/develop.diagram.35.png
The different interaction points where connection options control behaviour. These options are explained fully in the text.

One example of a connection option might be to specify that the gateway used by the issue application should use the Isabella identity to submit transactions to the papernet network. Another might be that a gateway should wait for all three peers from MagnetoCorp to confirm a transaction has been committed before returning control. Connection options allow applications to specify the precise behaviour of a gateway’s interaction with the network. Without a gateway, applications need to do a lot more work; gateways save you time, make your application more readable, and make it less error prone.

Usage

We’ll describe the full set of connection options available to an application in a moment; let’s first see how they are specified by the sample MagnetoCorp issue application:

const userName = 'User1@org1.example.com';
const wallet = new FileSystemWallet('../identity/user/isabella/wallet');

const connectionOptions = {
  identity: userName,
  wallet: wallet,
  eventHandlerOptions: {
    commitTimeout: 100,
    strategy: EventStrategies.MSPID_SCOPE_ANYFORTX
  }
};

await gateway.connect(connectionProfile, connectionOptions);

See how the identity and wallet options are simple properties of the connectionOptions object. They have values userName and wallet respectively, which were set earlier in the code. Contrast these options with the eventHandlerOptions option which is an object in its own right. It has two properties: commitTimeout: 100 (measured in seconds) and strategy: EventStrategies.MSPID_SCOPE_ANYFORTX.

See how connectionOptions is passed to a gateway as a complement to connectionProfile; the network is identified by the connection profile and the options specify precisely how the gateway should interact with it. Let’s now look at the available options.

Options

Here’s a list of the available options and what they do.

  • wallet identifies the wallet that will be used by the gateway on behalf of the application. See interaction 1; the wallet is specified by the application, but it’s actually the gateway that retrieves identities from it.

    A wallet must be specified; the most important decision is the type of wallet to use, whether that’s file system, in-memory, HSM or database.

  • identity is the user identity that the application will use from wallet. See interaction 2a; the user identity is specified by the application and represents the user of the application, Isabella, 2b. The identity is actually retrieved by the gateway.

    In our example, Isabella’s identity will be used by different MSPs (2c, 2d) to identify her as being from MagnetoCorp, and having a particular role within it. These two facts will correspondingly determine her permission over resources, such as being able to read and write the ledger, for example.

    A user identity must be specified. As you can see, this identity is fundamental to the idea that Hyperledger Fabric is a permissioned network – all actors have an identity, including applications, peers and orderers, which determines their control over resources. You can read more about this idea in the membership services topic.

  • clientTlsIdentity is the identity that is retrieved from a wallet (3a) and used for secure communications (3b) between the gateway and different channel components, such as peers and orderers.

    Note that this identity is different to the user identity. Even though clientTlsIdentity is important for secure communications, it is not as foundational as the user identity because its scope does not extend beyond secure network communications.

    clientTlsIdentity is optional, but you are advised to set it in production environments. You should always use a different clientTlsIdentity from identity because these identities have very different meanings and lifecycles. For example, if they were the same identity, then a compromise of your clientTlsIdentity would also compromise your identity; keeping them separate is more secure.

  • eventHandlerOptions.commitTimeout is optional. It specifies, in seconds, the maximum amount of time the gateway should wait for a transaction to be committed by any peer (4a) before returning control to the application. The set of peers to use for notification is determined by the eventHandlerOptions.strategy option. If a commitTimeout is not specified, the gateway will use a timeout of 300 seconds.

  • eventHandlerOptions.strategy is optional. It identifies the set of peers that a gateway should use to listen for notification that a transaction has been committed. For example, whether to listen for a single peer, or all peers, from its organization. It can take one of the following values:

    • EventStrategies.MSPID_SCOPE_ANYFORTX Listen for any peer within the user’s organization. In our example, see interaction points 4b; any of peer 1, peer 2 or peer 3 from MagnetoCorp can notify the gateway.

    • EventStrategies.MSPID_SCOPE_ALLFORTX This is the default value. Listen for all peers within the user’s organization. In our example, see interaction point 4b; all peers from MagnetoCorp (peer 1, peer 2 and peer 3) must notify the gateway. Peers are only counted if they are known/discovered and available; peers that are stopped or have failed are not included.

    • EventStrategies.NETWORK_SCOPE_ANYFORTX Listen for any peer within the entire network channel. In our example, see interaction points 4b and 4c; any of peer 1-3 from MagnetoCorp or peer 7-9 of DigiBank can notify the gateway.

    • EventStrategies.NETWORK_SCOPE_ALLFORTX Listen for all peers within the entire network channel. In our example, see interaction points 4b and 4c. All peers from MagnetoCorp and DigiBank must notify the gateway; peers 1-3 and peers 7-9. Peers are only counted if they are known/discovered and available; peers that are stopped or have failed are not included.

    • <PluginEventHandlerFunction> The name of a user-defined event handler. This allows a user to define their own logic for event handling. See how to define a plugin event handler, and examine a sample handler.

      A user-defined event handler is only necessary if you have very specific event handling requirements; in general, one of the built-in event strategies will be sufficient. An example of a user-defined event handler might be to wait for more than half the peers in an organization to confirm a transaction has been committed.

      If you do specify a user-defined event handler, it does not affect your application logic; it is quite separate from it. The handler is called by the SDK during processing; the SDK decides when to call it, and uses its results to select which peers to use for event notification. The application receives control when the SDK has finished its processing.

      If a user-defined event handler is not specified then the default values for EventStrategies are used.

  • discovery.enabled is optional and has possible values true or false. The default is true. It determines whether the gateway uses service discovery to augment the network topology specified in the connection profile. See interaction point 6, where the gateway uses the peers’ gossip information.

    This value will be overridden by the INITIALIIZE-WITH-DISCOVERY environment variable, which can be set to true or false.

  • discovery.asLocalhost is optional and has possible values true or false. The default is true. It determines whether IP addresses found during service discovery are translated from the docker network to the local host.

    Typically developers will write applications that use docker containers for their network components such as peers, orderers and CAs, but that do not run in docker containers themselves. This is why true is the default; in production environments, applications will likely run in docker containers in the same manner as network components and therefore address translation is not required. In this case, applications should either explicitly specify false or use the environment variable override.

    This value will be overridden by the DISCOVERY-AS-LOCALHOST environment variable, which can be set to true or false.
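Putting the two discovery options together, here is a minimal sketch, reusing the gateway, connectionProfile, wallet and userName from the earlier example:

const connectionOptions = {
  identity: userName,
  wallet: wallet,
  discovery: {
    enabled: true,      // use service discovery to augment the connection profile
    asLocalhost: true   // translate discovered docker addresses to localhost
  }
};

await gateway.connect(connectionProfile, connectionOptions);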

Considerations

The following list of considerations is helpful when deciding how to choose connection options.

  • eventHandlerOptions.commitTimeout and eventHandlerOptions.strategy work together. For example, commitTimeout: 100 and strategy: EventStrategies.MSPID_SCOPE_ANYFORTX means that the gateway will wait for up to 100 seconds for any peer to confirm a transaction has been committed. In contrast, specifying strategy: EventStrategies.NETWORK_SCOPE_ALLFORTX means that the gateway will wait up to 100 seconds for all peers in all organizations.

  • The default value of eventHandlerOptions.strategy: EventStrategies.MSPID_SCOPE_ALLFORTX will wait for all peers in the application’s organization to commit the transaction. This is a good default because applications can be sure that all their peers have an up-to-date copy of the ledger, minimizing concurrency issues.

    However, as the number of peers in an organization grows, it becomes a little unnecessary to wait for all peers, in which case using a pluggable event handler can provide a more efficient strategy. For example the same set of peers could be used to submit transactions and listen for notifications, on the safe assumption that consensus will keep all ledgers synchronized.

  • Service discovery requires clientTlsIdentity to be set. That’s because the peers exchanging information with an application need to be confident that they are exchanging information with entities they trust. If clientTlsIdentity is not set, then service discovery will not work, regardless of the discovery.enabled setting.

  • Although applications can set connection options when they connect to the gateway, it can be necessary for these options to be overridden by an administrator. That’s because options relate to network interactions, which can vary over time. For example, an administrator might want to understand the effect of using service discovery on network performance.

    A good approach is to define application overrides in a configuration file which is read by the application when it configures its connection to the gateway.

    Because the discovery options enabled and asLocalHost are most frequently required to be overridden by administrators, the environment variables INITIALIIZE-WITH-DISCOVERY and DISCOVERY-AS-LOCALHOST are provided for convenience. The administrator should set these in the production runtime environment of the application, which will most likely be a docker container.

Wallet

Audience: Architects, application and smart contract developers

A wallet contains a set of user identities. An application run by a user selects one of these identities when it connects to a channel. Access rights to channel resources, such as the ledger, are determined using this identity in combination with an MSP.


Scenario

When an application connects to a network channel such as PaperNet, it selects a user identity to do so, for example ID1. The channel MSPs associate ID1 with a role within a particular organization, and this role will ultimately determine the application’s rights over channel resources. For example, ID1 might identify a user as a member of the MagnetoCorp organization who can read and write to the ledger, whereas ID2 might identify an administrator in MagnetoCorp who can add a new organization to a consortium.

_images/develop.diagram.10.png
Two users, Isabella and Balaji, have wallets containing different identities they can use to connect to different network channels, PaperNet and BondNet.

Consider the example of two users; Isabella from MagnetoCorp and Balaji from DigiBank. Isabella is going to use App 1 to invoke a smart contract in PaperNet and a different smart contract in BondNet. Similarly, Balaji is going to use App 2 to invoke smart contracts, but only in PaperNet. (It’s very easy for applications to access multiple networks and multiple smart contracts within them.)

See how:

  • MagnetoCorp uses CA1 to issue identities and DigiBank uses CA2 to issue identities. These identities are stored in user wallets.

  • Balaji’s wallet holds a single identity, ID4 issued by CA2. Isabella’s wallet has many identities, ID1, ID2 and ID3, issued by CA1. Wallets can hold multiple identities for a single user, and each identity can be issued by a different CA.

  • Both Isabella and Balaji connect to PaperNet, and its MSPs determine that Isabella is a member of the MagnetoCorp organization, and Balaji is a member of the DigiBank organization, because of the respective CAs that issued their identities. (It is possible for an organization to use multiple CAs, and for a single CA to support multiple organizations.)

  • Isabella can use ID1 to connect to both PaperNet and BondNet. In both cases, when Isabella uses this identity, she is recognized as a member of MagnetoCorp.

  • Isabella can use ID2 to connect to BondNet, in which case she is identified as an administrator of MagnetoCorp. This gives Isabella two very different privileges: ID1 identifies her as a simple member of MagnetoCorp who can read and write to the BondNet ledger, whereas ID2 identifies her as a MagnetoCorp administrator who can add a new organization to BondNet.

  • Balaji cannot connect to BondNet with ID4. If he tried to connect, ID4 would not be recognized as belonging to DigiBank because CA2 is not known to BondNet’s MSP.

Types

There are different types of wallets according to where they store their identities; a short sketch creating the two most common types follows this list:

_images/develop.diagram.12.png
The four different types of wallet: File system, In-memory, Hardware Security Module (HSM) and CouchDB.

  • FileSystem: This is the most common place to store wallets; file systems are pervasive, easy to understand, and can be network mounted. They are a good default choice for wallets.

    Use the FileSystemWallet class to manage file system wallets.

  • In-memory: A wallet in application storage. Use this type of wallet when your application is running in a constrained environment without access to a file system; typically a web browser. It’s worth remembering that this type of wallet is volatile; identities will be lost after the application ends normally or crashes.

    Use the InMemoryWallet class to manage in-memory wallets.

  • Hardware Security Module: A wallet stored in an HSM. This ultra-secure, tamper-proof device stores digital identity information, particularly private keys. HSMs can be locally attached to your computer or network accessible. Most HSMs provide the ability to perform on-board encryption with private keys, such that the private keys never leave the HSM.

    Currently you should use the FileSystemWallet class in combination with the HSMWalletMixin class to manage HSM wallets.

  • CouchDB: A wallet stored in CouchDB. This is the rarest form of wallet storage, but for those users who want to use the database back-up and restore mechanisms, CouchDB wallets can provide a useful option to simplify disaster recovery.

    Use the CouchDBWallet class to manage CouchDB wallets.
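Here is that sketch, creating the two most common wallet types; the path shown is illustrative:

const { FileSystemWallet, InMemoryWallet } = require('fabric-network');

// A wallet backed by the local file system; it survives application restarts
const fsWallet = new FileSystemWallet('../identity/user/isabella/wallet');

// A volatile wallet held in application memory; lost when the application ends
const memWallet = new InMemoryWallet();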

Structure

A single wallet can hold multiple identities, each issued by a particular Certificate Authority. Each identity has a standard structure comprising a descriptive label, an X.509 certificate containing a public key, a private key, and some Fabric-specific metadata. Different wallet types map this structure appropriately to their storage mechanism.

_images/develop.diagram.11.png
A Fabric wallet can hold multiple identities with certificates issued by different Certificate Authorities. Identities comprise a certificate, private key and Fabric metadata.

There are a couple of key class methods that make it easy to manage wallets and identities:

const { X509WalletMixin } = require('fabric-network');

const identity = X509WalletMixin.createIdentity('Org1MSP', certificate, key);

await wallet.import(identityLabel, identity);

See how the X509WalletMixin.createIdentity() method creates an identity that has metadata Org1MSP, a certificate and a private key. See how wallet.import() adds this identity to the wallet with a particular identityLabel.

The Gateway class only requires the mspid metadata to be set for an identity – Org1MSP in the above example. It currently uses this value to identify particular peers from a connection profile, for example when a specific notification strategy is requested. In the DigiBank gateway file networkConnection.yaml, see how Org1MSP notifications will be associated with peer0.org1.example.com:

organizations:
  Org1:
    mspid: Org1MSP

    peers:
      - peer0.org1.example.com

You really don’t need to worry about the internal structure of the different wallet types, but if you’re interested, navigate to a user identity folder in the commercial paper sample:

magnetocorp/identity/user/isabella/
                                  wallet/
                                        User1@org1.example.com/
                                                              User1@org1.example.com
                                                              c75bd6911aca8089...-priv
                                                              c75bd6911aca8089...-pub

You can examine these files, but as discussed, it’s easier to use the SDK to manipulate these data.

Operations

The different wallet classes are derived from a common Wallet base class which provides a standard set of APIs to manage identities. It means that applications can be made independent of the underlying wallet storage mechanism; for example, File system and HSM wallets are handled in a very similar way.

_images/develop.diagram.13.png
Wallets follow a lifecycle: they can be created or opened, and identities can be read, added, deleted and exported.

An application can use a wallet according to a simple lifecycle. Wallets can be opened or created, and subsequently identities can be added, read, updated, deleted and exported. Spend a little time on the different Wallet methods in the JSDOC to see how they work; the commercial paper tutorial provides a nice example in addToWallet.js:

const fs = require('fs');
const path = require('path');
const { FileSystemWallet, X509WalletMixin } = require('fabric-network');

const wallet = new FileSystemWallet('../identity/user/isabella/wallet');

// credPath points at the user's crypto material; it is set earlier in the program
const cert = fs.readFileSync(path.join(credPath, '.../User1@org1.example.com-cert.pem')).toString();
const key = fs.readFileSync(path.join(credPath, '.../_sk')).toString();

const identityLabel = 'User1@org1.example.com';
const identity = X509WalletMixin.createIdentity('Org1MSP', cert, key);

await wallet.import(identityLabel, identity);

Notice how:

  • When the program is first run, a wallet is created on the local file system at .../isabella/wallet.

  • a certificate cert and private key are loaded from the file system.

  • a new identity is created with cert, key and Org1MSP using X509WalletMixin.createIdentity().

  • the new identity is imported to the wallet using wallet.import() with the label User1@org1.example.com.
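The other lifecycle operations follow the same pattern. Here is a short sketch of the main Wallet methods, reusing the wallet and identityLabel from the code above:

const exists = await wallet.exists(identityLabel);    // does the identity exist?
const labels = await wallet.list();                   // enumerate the wallet contents
const exported = await wallet.export(identityLabel);  // read an identity back out
await wallet.delete(identityLabel);                   // remove the identity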

That’s everything you need to know about wallets. You’ve seen how they hold identities that are used by applications on behalf of users to access Fabric network resources. There are different types of wallets available depending on your application and security needs, and a simple set of APIs to help applications manage wallets and the identities within them.

Gateway

Audience: Architects, application and smart contract developers

A gateway manages the network interactions on behalf of an application, allowing it to focus on business logic. Applications connect to a gateway and then all subsequent interactions are managed using that gateway’s configuration.


Scenario

A Hyperledger Fabric network channel can constantly change. The peer, orderer and CA components, contributed by the different organizations in the network, will come and go. Reasons for this include increased or reduced business demand, and both planned and unplanned outages. A gateway relieves an application of this burden, allowing it to focus on the business problem it is trying to solve.

_images/develop.diagram.25.png
The MagnetoCorp and DigiBank applications (issue and buy) delegate their respective network interactions to their gateways. Each gateway understands the network channel topology comprising the multiple peers and orderers of two organizations, MagnetoCorp and DigiBank, leaving applications to focus on business logic. Peers can talk to each other both within and across organizations using the gossip protocol.

A gateway can be used by an application in two different ways:

  • Static: The gateway configuration is completely defined in a connection profile. All the peers, orderers and CAs available to an application are statically defined in the connection profile used to configure the gateway. For peers, this includes their role as an endorsing peer or event notification hub, for example. You can read more about these roles in the connection profile topic.

    The SDK will use this static topology, in conjunction with gateway connection options, to manage the transaction submission and notification processes. The connection profile must contain enough of the network topology to allow a gateway to interact with the network on behalf of the application; this includes the network channels, organizations, orderers, peers and their roles.

  • Dynamic: The gateway configuration is minimally defined in a connection profile. Typically, one or two peers from the application’s organization are specified, and they use service discovery to discover the available network topology. This includes peers, orderers, channels, instantiated smart contracts and their endorsement policies. (In production environments, a gateway configuration should specify at least two peers for availability.)

    The SDK will use all of the static and discovered topology information, in conjunction with gateway connection options, to manage the transaction submission and notification processes. As part of this, it will also intelligently use the discovered topology; for example, it will calculate the minimum required endorsing peers using the discovered endorsement policy for the smart contract.

You might ask yourself whether a static or dynamic gateway is better. The trade-off is between predictability and responsiveness. Static gateways always behave the same way, as they perceive the network as unchanging. In this sense they are predictable: they will always use the same peers and orderers if they are available. Dynamic gateways are more responsive as they understand how the network changes; they can use newly added peers and orderers, which brings extra resilience and scalability, at potentially some cost in predictability. In general it’s fine to use dynamic gateways, and indeed this is the default mode for gateways.

Note that the same connection profile can be used statically or dynamically. Clearly, if a profile is going to be used statically, it needs to be comprehensive, whereas dynamic usage requires only sparse population.

Both styles of gateway are transparent to the application; the application program design does not change whether static or dynamic gateways are used. This also means that some applications may use service discovery, while others may not. In general using dynamic discovery means less definition and more intelligence by the SDK; it is the default.

Connect

When an application connects to a gateway, two options are provided. These are used in subsequent SDK processing:

  await gateway.connect(connectionProfile, connectionOptions);
  • Connection profile: connectionProfile is the gateway configuration that will be used for transaction processing by the SDK, whether statically or dynamically. It can be specified in YAML or JSON, though it must be converted to a JSON object when passed to the gateway:

    let connectionProfile = yaml.safeLoad(fs.readFileSync('../gateway/paperNet.yaml', 'utf8'));
    

    Read more about connection profiles and how to configure them.

  • Connection options: connectionOptions allow an application to declare rather than implement desired transaction processing behaviour. Connection options are interpreted by the SDK to control interaction patterns with network components, for example to select which identity to connect with, or which peers to use for event notifications. These options significantly reduce application complexity without compromising functionality. This is possible because the SDK has implemented much of the low level logic that would otherwise be required by applications; connection options control this logic flow.

    Read about the list of available connection options and when to use them.

Static

Static gateways define a fixed view of a network. In the MagnetoCorp scenario, a gateway might identify a single peer from MagnetoCorp, a single peer from DigiBank, and a MagnetoCorp orderer. Alternatively, a gateway might define all peers and orderers from MagnetoCorp and DigiBank. In both cases, a gateway must define a view of the network sufficient to get commercial paper transactions endorsed and distributed.

Applications can use a gateway statically by explicitly specifying the connect option discovery: { enabled:false } on the gateway.connect() API. Alternatively, the environment variable setting FABRIC_SDK_DISCOVERY=false will always override the application choice.

Examine the connection profile used by the MagnetoCorp issue application. See how all the peers, orderers and even CAs are specified in this file, including their roles.

It’s worth bearing in mind that a static gateway represents a view of a network at a moment in time. As networks change, it may be important to reflect this in a change to the gateway file. Applications will automatically pick up these changes when they re-load the gateway file.

Dynamic

Dynamic gateways define a small, fixed starting point for a network. In the MagnetoCorp scenario, a dynamic gateway might identify just a single peer from MagnetoCorp; everything else will be discovered! (To provide resiliency, it might be better to define two such bootstrap peers.)

If service discovery is selected by an application, the topology defined in the gateway file is augmented with that produced by this process. Service discovery starts with the gateway definition, and finds all the connected peers and orderers within the MagnetoCorp organization using the gossip protocol. If anchor peers have been defined for a channel, then service discovery will use the gossip protocol across organizations to discover components within the connected organization. This process will also discover smart contracts installed on peers and their endorsement policies defined at a channel level. As with static gateways, the discovered network must be sufficient to get commercial paper transactions endorsed and distributed.

Dynamic gateways are the default setting for Fabric applications. They can be explicitly specified using the connect option discovery: { enabled:true } on the gateway.connect() API. Alternatively, the environment variable setting FABRIC_SDK_DISCOVERY=true will always override the application choice.

A dynamic gateway represents an up-to-date view of a network. As networks change, service discovery will ensure that the network view is an accurate reflection of the topology visible to the application. Applications will automatically pick up these changes; they do not even need to re-load the gateway file.

Multiple gateways

Finally, it is straightforward for an application to define multiple gateways, both for the same or different networks. Moreover, applications can use the same gateway both statically and dynamically.

It can be helpful to have multiple gateways. Here are a few reasons:

  • Handling requests on behalf of different users.

  • Connecting to different networks simultaneously.

  • Testing a network configuration, by simultaneously comparing its behaviour with an existing configuration.
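As a minimal sketch of the first case, assuming a shared connectionProfile and a wallet holding identities for two users:

const { Gateway } = require('fabric-network');

// One gateway per user; each manages its own network interactions
const isabellaGateway = new Gateway();
const balajiGateway = new Gateway();

await isabellaGateway.connect(connectionProfile, { wallet, identity: 'isabella' });
await balajiGateway.connect(connectionProfile, { wallet, identity: 'balaji' });

// ... submit transactions through either gateway, then release resources
isabellaGateway.disconnect();
balajiGateway.disconnect();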

This topic shows you how to develop client applications and smart contracts to solve business problems with Hyperledger Fabric. In a realistic commercial paper scenario involving multiple organizations, you’ll learn about all the concepts and tasks required to accomplish this goal. We assume that a blockchain network is already available.

The topic is designed for multiple audiences:

  • Solution and application architects

  • Client application developers

  • Smart contract developers

  • Business professionals

You can choose to read the topics in order, or you can dip into the individual sections as appropriate. Each topic section is labelled according to reader relevance, so whether you’re looking for business or technical information, it will be clear when a topic is right for you.

The topic follows a typical software development lifecycle. It starts with business requirements, and then covers all the major technical activities required to develop an application and smart contract to meet them.

If you’d prefer, you can try out the commercial paper scenario immediately by running the commercial paper tutorial (../tutorial/commercial_paper.html). You can return to this topic when you need fuller explanations of the concepts it introduces.

Tutorials

We offer tutorials to get you started with Hyperledger Fabric. The first is oriented to the Hyperledger Fabric application developer, Writing Your First Application. It takes you through the process of writing your first blockchain application for Hyperledger Fabric using the Hyperledger Fabric Node SDK.

The second tutorial is oriented towards Hyperledger Fabric network operators, Building Your First Network (BYFN). This one walks you through the process of establishing a blockchain network using Hyperledger Fabric and provides a basic sample application to test it out.

There are also tutorials for updating your channel, Adding an Org to a Channel, and upgrading your network to a later version of Hyperledger Fabric, Upgrading Your Network Components.

Finally, we offer two chaincode tutorials. One is oriented to developers, Chaincode for Developers, and the other to operators, Chaincode for Operators.

Note

If you have questions not addressed by this documentation, or run into issues with any of the tutorials, please visit the Still Have Questions? page for some tips on where to find additional help.

Writing Your First Application

Note

If you’re not yet familiar with the fundamental architecture of a Fabric network, you may want to visit the Key Concepts section prior to continuing.

It is also worth noting that this tutorial serves as an introduction to Fabric applications and uses simple smart contracts and applications. For a more in-depth look at Fabric applications and smart contracts, check out our Developing Applications section or the Commercial paper tutorial.

In this tutorial we’ll be looking at a handful of sample programs to see how Fabric apps work. These applications and the smart contracts they use are collectively known as FabCar. They provide a great starting point to understand a Hyperledger Fabric blockchain. You’ll learn how to write an application and smart contract to query and update a ledger, and how to use a Certificate Authority to generate the X.509 certificates used by applications which interact with a permissioned blockchain.

We will use the application SDK — described in detail in the Application topic – to invoke a smart contract which queries and updates the ledger using the smart contract SDK — described in detail in section Smart Contract Processing.

We’ll go through three principal steps:

1. Setting up a development environment. Our application needs a network to interact with, so we’ll get a basic network our smart contracts and application will use.

_images/AppConceptsOverview.png

2. Learning about a sample smart contract, FabCar. We use a smart contract written in JavaScript. We’ll inspect the smart contract to learn about the transactions within it, and how they are used by applications to query and update the ledger.

3. Develop a sample application which uses FabCar. Our application will use the FabCar smart contract to query and update car assets on the ledger. We’ll get into the code of the apps and the transactions they create, including querying a car, querying a range of cars, and creating a new car.

After completing this tutorial you should have a basic understanding of how an application is programmed in conjunction with a smart contract to interact with the ledger hosted and replicated on the peers in a Fabric network.

Note

These applications are also compatible with Service Discovery and Private data, though we won’t explicitly show how to use our apps to leverage those features.

Set up the blockchain network

Note

This next section requires you to be in the first-network subdirectory within your local clone of the fabric-samples repo.

If you’ve already run through Building Your First Network (BYFN), you will have downloaded fabric-samples and have a network up and running. Before you run this tutorial, you must stop this network:

./byfn.sh down

If you have run through this tutorial before, use the following commands to kill any stale or active containers. Note, this will take down all of your containers whether they’re Fabric related or not.

docker rm -f $(docker ps -aq)
docker rmi -f $(docker images | grep fabcar | awk '{print $3}')

If you don’t have a development environment and the accompanying artifacts for the network and applications, visit the Prerequisites page and ensure you have the necessary dependencies installed on your machine.

Next, if you haven’t done so already, visit the Install Samples, Binaries and Docker Images page and follow the provided instructions. Return to this tutorial once you have cloned the fabric-samples repository, and downloaded the latest stable Fabric images and available utilities.

If you are using Mac OS and running Mojave, you will need to install Xcode.

Launch the network

Note

This next section requires you to be in the fabcar subdirectory within your local clone of the fabric-samples repo.

Launch your network using the startFabric.sh shell script. This command will spin up a blockchain network comprising peers, orderers, certificate authorities and more. It will also install and instantiate a javascript version of the FabCar smart contract which will be used by our application to access the ledger. We’ll learn more about these components as we go through the tutorial.

./startFabric.sh javascript

Alright, you’ve now got a sample network up and running, and the FabCar smart contract installed and instantiated. Let’s install our application pre-requisites so that we can try it out, and see how everything works together.

Install the application

Note

The following instructions require you to be in the fabcar/javascript subdirectory within your local clone of the fabric-samples repo.

Run the following command to install the Fabric dependencies for the applications. It will take about a minute to complete:

npm install

This process is installing the key application dependencies defined in package.json. The most important of these is the fabric-network module; it enables an application to use identities, wallets, and gateways to connect to channels, submit transactions, and wait for notifications. This tutorial also uses the fabric-ca-client module to enroll users with their respective certificate authorities, generating a valid identity which is then used by fabric-network methods.

Once npm install completes, everything is in place to run the application. For this tutorial, you’ll primarily be using the application JavaScript files in the fabcar/javascript directory. Let’s take a look at what’s inside:

ls

You should see the following:

enrollAdmin.js  node_modules       package.json  registerUser.js
invoke.js       package-lock.json  query.js      wallet

There are files for other program languages, for example in the fabcar/typescript directory. You can read these once you’ve used the JavaScript example – the principles are the same.


Enrolling the admin user

Note

The following two sections involve communication with the Certificate Authority. You may find it useful to stream the CA logs when running the upcoming programs by opening a new terminal shell and running docker logs -f ca.example.com.

When we created the network, an admin user — literally called admin — was created as the registrar for the certificate authority (CA). Our first step is to generate the private key, public key, and X.509 certificate for admin using the enrollAdmin.js program. This process uses a Certificate Signing Request (CSR) — the private and public key are first generated locally and the public key is then sent to the CA, which returns an encoded certificate for use by the application. These three credentials are then stored in the wallet, allowing us to act as an administrator for the CA.

We will subsequently register and enroll a new application user which will be used by our application to interact with the blockchain.

Let’s enroll user admin:

node enrollAdmin.js

This command has stored the CA administrator’s credentials in the wallet directory.
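Under the covers, the core of enrollAdmin.js looks roughly like this; a condensed sketch, with the CA URL, MSP ID and wallet path standing in for values the sample reads from its configuration:

const FabricCAServices = require('fabric-ca-client');
const { FileSystemWallet, X509WalletMixin } = require('fabric-network');

const ca = new FabricCAServices('http://localhost:7054');
const wallet = new FileSystemWallet('./wallet');

// Keys are generated locally; a CSR is sent to the CA, which returns a certificate
const enrollment = await ca.enroll({ enrollmentID: 'admin', enrollmentSecret: 'adminpw' });

// Store the resulting credentials in the wallet for later use
const identity = X509WalletMixin.createIdentity('Org1MSP', enrollment.certificate, enrollment.key.toBytes());
await wallet.import('admin', identity);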

Register and enroll user1

Now that we have the administrator’s credentials in a wallet, we can enroll a new user — user1 — which will be used to query and update the ledger:

node registerUser.js

Similar to the admin enrollment, this program uses a CSR to enroll user1 and store its credentials alongside those of admin in the wallet. We now have identities for two separate users — admin and user1 — and these are used by our application.

Time to interact with the ledger…

Querying the ledger

Each peer in a blockchain network hosts a copy of the ledger, and an application program can query the ledger by invoking a smart contract which queries the most recent value of the ledger and returns it to the application.

Here is a simplified representation of how a query works:

_images/write_first_app.diagram.1.png

Applications read data from the ledger using a query. The most common queries involve the current values of data in the ledger – its world state. The world state is represented as a set of key-value pairs, and applications can query data for a single key or multiple keys. Moreover, the ledger world state can be configured to use a database like CouchDB which supports complex queries when key-values are modeled as JSON data. This can be very helpful when looking for all assets that match certain keywords with particular values; all cars with a particular owner, for example.
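As an illustration of such a rich query, a smart contract running against a CouchDB state database can use the chaincode stub's getQueryResult API; a sketch, with the selector expressed in CouchDB's query syntax:

// Inside a smart contract transaction: find all cars owned by Tomoko
const query = { selector: { owner: 'Tomoko' } };
const iterator = await ctx.stub.getQueryResult(JSON.stringify(query));
// iterate the results just as with a range query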

First, let’s run our query.js program to return a listing of all the cars on the ledger. This program uses our second identity – user1 – to access the ledger:

node query.js

The output should look like this:

Wallet path: ...fabric-samples/fabcar/javascript/wallet
Transaction has been evaluated, result is:
[{"Key":"CAR0", "Record":{"colour":"blue","make":"Toyota","model":"Prius","owner":"Tomoko"}},
{"Key":"CAR1", "Record":{"colour":"red","make":"Ford","model":"Mustang","owner":"Brad"}},
{"Key":"CAR2", "Record":{"colour":"green","make":"Hyundai","model":"Tucson","owner":"Jin Soo"}},
{"Key":"CAR3", "Record":{"colour":"yellow","make":"Volkswagen","model":"Passat","owner":"Max"}},
{"Key":"CAR4", "Record":{"colour":"black","make":"Tesla","model":"S","owner":"Adriana"}},
{"Key":"CAR5", "Record":{"colour":"purple","make":"Peugeot","model":"205","owner":"Michel"}},
{"Key":"CAR6", "Record":{"colour":"white","make":"Chery","model":"S22L","owner":"Aarav"}},
{"Key":"CAR7", "Record":{"colour":"violet","make":"Fiat","model":"Punto","owner":"Pari"}},
{"Key":"CAR8", "Record":{"colour":"indigo","make":"Tata","model":"Nano","owner":"Valeria"}},
{"Key":"CAR9", "Record":{"colour":"brown","make":"Holden","model":"Barina","owner":"Shotaro"}}]

Let’s take a closer look at this program. Use an editor (e.g. Atom or Visual Studio Code) and open query.js.

The application starts by bringing into scope two key classes from the fabric-network module: FileSystemWallet and Gateway. These classes will be used to locate the user1 identity in the wallet, and use it to connect to the network:

const { FileSystemWallet, Gateway } = require('fabric-network');

The application connects to the network using a gateway:

const gateway = new Gateway();
await gateway.connect(ccp, { wallet, identity: 'user1' });

This code creates a new gateway and then uses it to connect the application to the network. ccp describes the network that the gateway will access with the identity user1 from wallet. See how the ccp has been loaded from ../../basic-network/connection.json and parsed as a JSON file:

const ccpPath = path.resolve(__dirname, '..', '..', 'basic-network', 'connection.json');
const ccpJSON = fs.readFileSync(ccpPath, 'utf8');
const ccp = JSON.parse(ccpJSON);

If you’d like to understand more about the structure of a connection profile, and how it defines the network, check out the connection profile topic.

A network can be divided into multiple channels, and the next important line of code connects the application to a particular channel within the network, mychannel:

const network = await gateway.getNetwork('mychannel');

Within this channel, we can access the smart contract fabcar to interact with the ledger:

const contract = network.getContract('fabcar');

Within fabcar there are many different transactions, and our application initially uses the queryAllCars transaction to access the ledger world state data:

const result = await contract.evaluateTransaction('queryAllCars');

The evaluateTransaction method represents one of the simplest interactions with a smart contract in a blockchain network. It simply picks a peer defined in the connection profile and sends the request to it, where it is evaluated. The smart contract queries all the cars on the peer’s copy of the ledger and returns the result to the application. This interaction does not result in an update to the ledger.

The FabCar smart contract

Let’s take a look at the transactions within the FabCar smart contract. Navigate to the chaincode/fabcar/javascript/lib subdirectory at the root of fabric-samples and open fabcar.js in your editor.

See how our smart contract is defined using the Contract class:

class FabCar extends Contract {...

Within this class structure, you’ll see that we have the following transactions defined: initLedger, queryCar, queryAllCars, createCar, and changeCarOwner. For example:

async queryCar(ctx, carNumber) {...}
async queryAllCars(ctx) {...}

Let’s take a closer look at the queryAllCars transaction to see how it interacts with the ledger.

async queryAllCars(ctx) {

  const startKey = 'CAR0';
  const endKey = 'CAR999';

  const iterator = await ctx.stub.getStateByRange(startKey, endKey);

This code defines the range of cars that queryAllCars will retrieve from the ledger. Every car between CAR0 and CAR999 – 1,000 cars in all, assuming every key has been tagged properly – will be returned by the query. The remainder of the code iterates through the query results and packages them into JSON for the application.
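That iteration looks roughly like this (a sketch of the pattern the sample uses, continuing the method above; JSON parse error handling is omitted):

  const allResults = [];
  while (true) {
    const res = await iterator.next();
    if (res.value && res.value.value.toString()) {
      // each entry carries a key (e.g. CAR3) and the serialized car record
      const Key = res.value.key;
      const Record = JSON.parse(res.value.value.toString('utf8'));
      allResults.push({ Key, Record });
    }
    if (res.done) {
      await iterator.close();
      return JSON.stringify(allResults);
    }
  }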

Below is a representation of how an application would call different transactions in a smart contract. Each transaction uses a broad set of APIs such as getStateByRange to interact with the ledger. You can read more about these APIs in detail.

_images/RunningtheSample.png

We can see our queryAllCars transaction, and another called createCar. We will use this later in the tutorial to update the ledger, and add a new block to the blockchain.

But first, go back to the query program and change the evaluateTransaction request to query CAR4. The query program should now look like this:

const result = await contract.evaluateTransaction('queryCar', 'CAR4');

Save the program and navigate back to your fabcar/javascript directory. Now run the query program again:

node query.js

You should see the following:

Wallet path: ...fabric-samples/fabcar/javascript/wallet
Transaction has been evaluated, result is:
{"colour":"black","make":"Tesla","model":"S","owner":"Adriana"}

If you go back and look at the result from when the transaction was queryAllCars, you can see that CAR4 was Adriana’s black Tesla model S, which is the result that was returned here.

We can use the queryCar transaction to query against any car, using its key (e.g. CAR0) and get whatever make, model, color, and owner correspond to that car.

Great. At this point you should be comfortable with the basic query transactions in the smart contract and the handful of parameters in the query program.

Time to update the ledger…

Updating the ledger

Now that we’ve done a few ledger queries and added a bit of code, we’re ready to update the ledger. There are a lot of potential updates we could make, but let’s start by creating a new car.

From an application perspective, updating the ledger is simple. An application submits a transaction to the blockchain network, and when it has been validated and committed, the application receives a notification that the transaction has been successful. Under the covers this involves the process of consensus whereby the different components of the blockchain network work together to ensure that every proposed update to the ledger is valid and performed in an agreed and consistent order.

_images/write_first_app.diagram.2.png

Above, you can see the major components that make this process work. As well as the multiple peers which each host a copy of the ledger, and optionally a copy of the smart contract, the network also contains an ordering service. The ordering service coordinates transactions for a network; it creates blocks containing transactions in a well-defined sequence originating from all the different applications connected to the network.

Our first update to the ledger will create a new car. We have a separate program called invoke.js that we will use to make updates to the ledger. Just as with queries, use an editor to open the program and navigate to the code block where we construct our transaction and submit it to the network:

await contract.submitTransaction('createCar', 'CAR12', 'Honda', 'Accord', 'Black', 'Tom');

See how the application calls the smart contract transaction createCar to create a black Honda Accord with an owner named Tom. We use CAR12 as the identifying key here, just to show that we don’t need to use sequential keys.

Save it and run the program:

node invoke.js

If the invoke is successful, you will see output like this:

Wallet path: ...fabric-samples/fabcar/javascript/wallet
2018-12-11T14:11:40.935Z - info: [TransactionEventHandler]: _strategySuccess: strategy success for transaction "9076cd4279a71ecf99665aed0ed3590a25bba040fa6b4dd6d010f42bb26ff5d1"
Transaction has been submitted

Notice how the invoke application interacted with the blockchain network using the submitTransaction API, rather than evaluateTransaction.

await contract.submitTransaction('createCar', 'CAR12', 'Honda', 'Accord', 'Black', 'Tom');

submitTransaction is much more sophisticated than evaluateTransaction. Rather than interacting with a single peer, the SDK will send the submitTransaction proposal to every required organization’s peer in the blockchain network. Each of these peers will execute the requested smart contract using this proposal, to generate a transaction response which it signs and returns to the SDK. The SDK collects all the signed transaction responses into a single transaction, which it then sends to the orderer. The orderer collects and sequences transactions from every application into a block of transactions. It then distributes these blocks to every peer in the network, where every transaction is validated and committed. Finally, the SDK is notified, allowing it to return control to the application.

Note

submitTransaction also includes a listener that checks to make sure the transaction has been validated and committed to the ledger. Applications should either utilize a commit listener, or leverage an API like submitTransaction that does this for you. Without doing this, your transaction may not have been successfully ordered, validated, and committed to the ledger.

submitTransaction does all this for the application! The process by which the application, smart contract, peers and ordering service work together to keep the ledger consistent across the network is called consensus, and it is explained in detail in this section.
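Putting the pieces together, the submitting side of invoke.js reduces to a few lines (a condensed sketch; loading the wallet and the connection profile is identical to query.js):

const { FileSystemWallet, Gateway } = require('fabric-network');

// ... wallet and ccp are loaded exactly as in query.js ...

const gateway = new Gateway();
await gateway.connect(ccp, { wallet, identity: 'user1' });

const network = await gateway.getNetwork('mychannel');
const contract = network.getContract('fabcar');

// submitTransaction drives endorsement, ordering, validation and commit notification
await contract.submitTransaction('createCar', 'CAR12', 'Honda', 'Accord', 'Black', 'Tom');

await gateway.disconnect();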

To see that this transaction has been written to the ledger, go back to query.js and change the argument from CAR4 to CAR12.

In other words, change this:

const result = await contract.evaluateTransaction('queryCar', 'CAR4');

To this:

const result = await contract.evaluateTransaction('queryCar', 'CAR12');

Save once again, then query:

node query.js

Which should return this:

Wallet path: ...fabric-samples/fabcar/javascript/wallet
Transaction has been evaluated, result is:
{"colour":"Black","make":"Honda","model":"Accord","owner":"Tom"}

Congratulations. You’ve created a car and verified that it’s recorded on the ledger!

So now that we’ve done that, let’s say that Tom is feeling generous and he wants to give his Honda Accord to someone named Dave.

To do this, go back to invoke.js and change the smart contract transaction from createCar to changeCarOwner with a corresponding change in input arguments:

await contract.submitTransaction('changeCarOwner', 'CAR12', 'Dave');

The first argument — CAR12 — identifies the car that will be changing owners. The second argument — Dave — defines the new owner of the car.

Save and execute the program again:

node invoke.js

Now let’s query the ledger again and ensure that Dave is now associated with the CAR12 key:

node query.js

It should return this result:

Wallet path: ...fabric-samples/fabcar/javascript/wallet
Transaction has been evaluated, result is:
{"colour":"Black","make":"Honda","model":"Accord","owner":"Dave"}

The ownership of CAR12 has been changed from Tom to Dave.

Note

In a real world application the smart contract would likely have some access control logic. For example, only certain authorized users may create new cars, and only the car owner may transfer the car to somebody else.
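The sample contains no such checks, but as a hedged sketch of what one might look like in the Node.js contract (isOwner is a hypothetical helper, not part of the sample; ctx.clientIdentity is the fabric-contract-api view of the invoker’s certificate):

async changeCarOwner(ctx, carNumber, newOwner) {
    const carAsBytes = await ctx.stub.getState(carNumber);
    const car = JSON.parse(carAsBytes.toString());

    // Hypothetical policy: only the current owner may transfer the car.
    // How owners map to certificate identities (isOwner below) is an
    // application design decision, not something the sample defines.
    if (!isOwner(ctx.clientIdentity.getID(), car.owner)) {
        throw new Error('only the current owner may transfer this car');
    }

    car.owner = newOwner;
    await ctx.stub.putState(carNumber, Buffer.from(JSON.stringify(car)));
}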

Summary

Now that we’ve done a few queries and a few updates, you should have a pretty good sense of how applications interact with a blockchain network using a smart contract to query or update the ledger. You’ve seen the basics of the roles smart contracts, APIs, and the SDK play in queries and updates and you should have a feel for how different kinds of applications could be used to perform other business tasks and operations.

Additional resources

As we said in the introduction, we have a whole section on Developing Applications that includes in-depth information on smart contracts, process and data design, a more in-depth Commercial Paper tutorial, and a large amount of other material relating to the development of applications.

Commercial paper tutorial

Audience: architects, application and smart contract developers, administrators.

This tutorial will show you how to install and use a commercial paper application and smart contract. It is a task-oriented topic, so it emphasizes procedures over concepts. When you’d like to understand the concepts in more detail, you can read the Developing Applications topic.

_images/commercial_paper.diagram.1.png

In this tutorial two organizations, MagnetoCorp and DigiBank, trade commercial paper with each other using PaperNet, a Hyperledger Fabric blockchain network.

Once you’ve set up a basic network, you’ll act as Isabella, an employee of MagnetoCorp, who will issue a commercial paper on its behalf. You’ll then switch roles to act as Balaji, a DigiBank employee, who will buy this commercial paper, hold it for a period of time, and then redeem it with MagnetoCorp for a small profit.

You’ll act as developers, end users, and administrators in different organizations, performing the following steps designed to help you understand what it’s like to collaborate as two different organizations working independently, but according to mutually agreed rules in a Hyperledger Fabric network.

This tutorial has been tested on MacOS and Ubuntu, and should work on other Linux distributions. A Windows version is under development.

Prerequisites

Before you begin, you must install some prerequisite technology required by the tutorial. We’ve kept these to a minimum so that you can get started quickly.

You must install the following technology:

  • Node version 8.9.0, or higher. Node is a JavaScript runtime that you can use to run applications and smart contracts. You are recommended to use the LTS (Long Term Support) version of node. Install node here.

  • Docker version 18.06, or higher. Docker helps developers and administrators create standard environments for building and running applications and smart contracts. Hyperledger Fabric is provided as a set of Docker images, and the PaperNet smart contract will run in a docker container. Install Docker here.

You will find it helpful to install the following technology:

  • A source code editor, such as Visual Studio Code version 1.28, or higher. VS Code will help you develop and test your application and smart contract. Install VS Code here.

    Many excellent code editors are available including Atom, Sublime Text and Brackets.

As you become more experienced with application and smart contract development, you may find it helpful to install the following technology. There’s no need to install it when you run the tutorial for the first time:

  • Node Version Manager. NVM helps you easily switch between different versions of node – it can be really helpful if you’re working on multiple projects at the same time. Install NVM here.

Download samples

The commercial paper tutorial is one of the Hyperledger samples held in a public GitHub repository called fabric-samples. As you’ll be running the tutorial on your machine, your first task is to download the fabric-samples repository.

_images/commercial_paper.diagram.2.png

Download the fabric-samples GitHub repository to your local machine.

$GOPATH is an important environment variable in Hyperledger Fabric; it identifies the root directory for the installation. It must be set correctly no matter which programming language you use! Open a new terminal window and check that $GOPATH is set using the env command:

$ env
...
GOPATH=/Users/username/go
NVM_BIN=/Users/username/.nvm/versions/node/v8.11.2/bin
NVM_IOJS_ORG_MIRROR=https://iojs.org/dist
...

If $GOPATH is not set, use these instructions.

You can now create a directory relative to $GOPATH where fabric-samples will be installed:

$ mkdir -p $GOPATH/src/github.com/hyperledger/
$ cd $GOPATH/src/github.com/hyperledger/

Use the git clone command to copy the fabric-samples repository to this location:

$ git clone https://github.com/hyperledger/fabric-samples.git

Feel free to examine the directory structure of fabric-samples:

$ cd fabric-samples
$ ls

CODE_OF_CONDUCT.md    balance-transfer            fabric-ca
CONTRIBUTING.md       basic-network               first-network
Jenkinsfile           chaincode                   high-throughput
LICENSE               chaincode-docker-devmode    scripts
MAINTAINERS.md        commercial-paper            README.md
fabcar

Notice the commercial-paper directory – that’s where our sample is located!

You’ve now completed the first stage of the tutorial! As you proceed, you’ll open multiple command windows for different users and components. For example:

  • to run applications on behalf of Isabella and Balaji who will trade commercial paper with each other

  • to issue commands on behalf of administrators from MagnetoCorp and DigiBank, including installing and instantiating smart contracts

  • to show peer, orderer and CA log output

We’ll make it clear when you should run a command from a particular command window; for example:

(isabella)$ ls

indicates that you should run the ls command from Isabella’s window.

Create network

The tutorial currently uses the basic network; it will soon be updated to a configuration which better reflects the multi-organization structure of PaperNet. For now, this network is sufficient to show you how to develop an application and smart contract.

_images/commercial_paper.diagram.3.png

The Hyperledger Fabric basic network comprises a peer and its ledger database, an orderer and a certificate authority (CA). Each of these components runs as a docker container.

The peer, its ledger, the orderer and the CA each run in their own docker container. In production environments, organizations typically use existing CAs that are shared with other systems; they’re not dedicated to the Fabric network.

You can manage the basic network using the commands and configuration included in the fabric-samples/basic-network directory. Let’s start the network on your local machine with the start.sh shell script:

$ cd fabric-samples/basic-network
$ ./start.sh

docker-compose -f docker-compose.yml up -d ca.example.com orderer.example.com peer0.org1.example.com couchdb
Creating network "net_basic" with the default driver
Pulling ca.example.com (hyperledger/fabric-ca:)...
latest: Pulling from hyperledger/fabric-ca
3b37166ec614: Pull complete
504facff238f: Pull complete
(...)
Pulling orderer.example.com (hyperledger/fabric-orderer:)...
latest: Pulling from hyperledger/fabric-orderer
3b37166ec614: Already exists
504facff238f: Already exists
(...)
Pulling couchdb (hyperledger/fabric-couchdb:)...
latest: Pulling from hyperledger/fabric-couchdb
3b37166ec614: Already exists
504facff238f: Already exists
(...)
Pulling peer0.org1.example.com (hyperledger/fabric-peer:)...
latest: Pulling from hyperledger/fabric-peer
3b37166ec614: Already exists
504facff238f: Already exists
(...)
Creating orderer.example.com ... done
Creating couchdb             ... done
Creating ca.example.com         ... done
Creating peer0.org1.example.com ... done
(...)
2018-11-07 13:47:31.634 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2018-11-07 13:47:31.730 UTC [channelCmd] executeJoin -> INFO 002 Successfully submitted proposal to join channel

Notice how the docker-compose -f docker-compose.yml up -d ca.example.com... command pulls the four Hyperledger Fabric container images from DockerHub, and then starts them. These containers have the most up-to-date version of the software for these Hyperledger Fabric components. Feel free to explore the basic-network directory – we’ll use much of its contents during this tutorial.

You can list the docker containers that are running the basic-network components using the docker ps command:

$ docker ps

CONTAINER ID        IMAGE                        COMMAND                  CREATED              STATUS              PORTS                                            NAMES
ada3d078989b        hyperledger/fabric-peer      "peer node start"        About a minute ago   Up About a minute   0.0.0.0:7051->7051/tcp, 0.0.0.0:7053->7053/tcp   peer0.org1.example.com
1fa1fd107bfb        hyperledger/fabric-orderer   "orderer"                About a minute ago   Up About a minute   0.0.0.0:7050->7050/tcp                           orderer.example.com
53fe614274f7        hyperledger/fabric-couchdb   "tini -- /docker-ent…"   About a minute ago   Up About a minute   4369/tcp, 9100/tcp, 0.0.0.0:5984->5984/tcp       couchdb
469201085a20        hyperledger/fabric-ca        "sh -c 'fabric-ca-se…"   About a minute ago   Up About a minute   0.0.0.0:7054->7054/tcp                           ca.example.com

See if you can map these containers to the basic network (you may need to scroll horizontally to locate the information):

  • A peer peer0.org1.example.com is running in container ada3d078989b

  • An orderer orderer.example.com is running in container 1fa1fd107bfb

  • A CouchDB database couchdb is running in container 53fe614274f7

  • A CA ca.example.com is running in container 469201085a20

These containers all form part of a docker network called net_basic. You can view the network with the docker network command:

$ docker network inspect net_basic

    {
        "Name": "net_basic",
        "Id": "62e9d37d00a0eda6c6301a76022c695f8e01258edaba6f65e876166164466ee5",
        "Created": "2018-11-07T13:46:30.4992927Z",
        "Containers": {
            "1fa1fd107bfbe61522e4a26a57c2178d82b2918d5d423e7ee626c79b8a233624": {
                "Name": "orderer.example.com",
                "IPv4Address": "172.20.0.4/16",
            },
            "469201085a20b6a8f476d1ac993abce3103e59e3a23b9125032b77b02b715f2c": {
                "Name": "ca.example.com",
                "IPv4Address": "172.20.0.2/16",
            },
            "53fe614274f7a40392210f980b53b421e242484dd3deac52bbfe49cb636ce720": {
                "Name": "couchdb",
                "IPv4Address": "172.20.0.3/16",
            },
            "ada3d078989b568c6e060fa7bf62301b4bf55bed8ac1c938d514c81c42d8727a": {
                "Name": "peer0.org1.example.com",
                "IPv4Address": "172.20.0.5/16",
            }
        },
        "Labels": {}
    }

See how the four containers use different IP addresses while being part of a single docker network. (We’ve abbreviated the output for clarity.)

To recap: you’ve downloaded the Hyperledger Fabric samples repository from GitHub and you’ve got the basic network running on your local machine. Let’s now start playing the role of MagnetoCorp, who wish to trade commercial paper.

Working as MagnetoCorp

To monitor the MagnetoCorp components of PaperNet, an administrator can use the logspout tool to view the aggregated output from a set of docker containers. It collects the different output streams into one place, making it easy to see what’s happening from a single window. This can be really helpful for administrators when installing smart contracts, or for developers when invoking them.

Let’s now monitor PaperNet as the MagnetoCorp administrator. Open a new window in the fabric-samples directory, then locate and run the monitordocker.sh script to start the logspout tool for the PaperNet docker containers associated with the docker network net_basic:

(magnetocorp admin)$ cd commercial-paper/organization/magnetocorp/configuration/cli/
(magnetocorp admin)$ ./monitordocker.sh net_basic
...
latest: Pulling from gliderlabs/logspout
4fe2ade4980c: Pull complete
decca452f519: Pull complete
(...)
Starting monitoring on all containers on the network net_basic
b7f3586e5d0233de5a454df369b8eadab0613886fc9877529587345fc01a3582

Note that you can pass a port number to the above command if the default port used by monitordocker.sh is already in use:

(magnetocorp admin)$ ./monitordocker.sh net_basic <port_number>

This window will now show output from the docker containers, so let’s start another terminal window which will allow the MagnetoCorp administrator to interact with the network.

_images/commercial_paper.diagram.4.png

A MagnetoCorp administrator interacts with the network via a docker container.

To interact with PaperNet, the MagnetoCorp administrator needs to use the Hyperledger Fabric peer commands. Conveniently, these come pre-built in the hyperledger/fabric-tools docker image.

Let’s start a MagnetoCorp-specific docker container for the administrator using the docker-compose command:

(magnetocorp admin)$ cd commercial-paper/organization/magnetocorp/configuration/cli/
(magnetocorp admin)$ docker-compose -f docker-compose.yml up -d cliMagnetoCorp

Pulling cliMagnetoCorp (hyperledger/fabric-tools:)...
latest: Pulling from hyperledger/fabric-tools
3b37166ec614: Already exists
(...)
Digest: sha256:058cff3b378c1f3ebe35d56deb7bf33171bf19b327d91b452991509b8e9c7870
Status: Downloaded newer image for hyperledger/fabric-tools:latest
Creating cliMagnetoCorp ... done

Again, see how the hyperledger/fabric-tools docker image was retrieved from Docker Hub and added to the network:

(magnetocorp admin)$ docker ps

CONTAINER ID        IMAGE                        COMMAND                  CREATED              STATUS              PORTS                                            NAMES
562a88b25149        hyperledger/fabric-tools     "/bin/bash"              About a minute ago   Up About a minute                                                    cliMagnetoCorp
b7f3586e5d02        gliderlabs/logspout          "/bin/logspout"          7 minutes ago        Up 7 minutes        127.0.0.1:8000->80/tcp                           logspout
ada3d078989b        hyperledger/fabric-peer      "peer node start"        29 minutes ago       Up 29 minutes       0.0.0.0:7051->7051/tcp, 0.0.0.0:7053->7053/tcp   peer0.org1.example.com
1fa1fd107bfb        hyperledger/fabric-orderer   "orderer"                29 minutes ago       Up 29 minutes       0.0.0.0:7050->7050/tcp                           orderer.example.com
53fe614274f7        hyperledger/fabric-couchdb   "tini -- /docker-ent…"   29 minutes ago       Up 29 minutes       4369/tcp, 9100/tcp, 0.0.0.0:5984->5984/tcp       couchdb
469201085a20        hyperledger/fabric-ca        "sh -c 'fabric-ca-se…"   29 minutes ago       Up 29 minutes       0.0.0.0:7054->7054/tcp                           ca.example.com

The MagnetoCorp administrator will use the command line in container 562a88b25149 to interact with PaperNet. Notice also the logspout container b7f3586e5d02; this is capturing the output of all other docker containers for the monitordocker.sh command.

Let’s now use this command line to interact with PaperNet as the MagnetoCorp administrator.

Smart contract

issue, buy and redeem are the three functions at the heart of the PaperNet smart contract. It is used by applications to submit transactions which correspondingly issue, buy and redeem commercial paper on the ledger. Our next task is to examine this smart contract.

Open a new terminal window to represent a MagnetoCorp developer, and change to the directory that contains MagnetoCorp’s copy of the smart contract to view it with your chosen editor (VS Code in this tutorial):

(magnetocorp developer)$ cd commercial-paper/organization/magnetocorp/contract
(magnetocorp developer)$ code .

In the lib directory of the folder, you’ll see the papercontract.js file – this contains the commercial paper smart contract!

_images/commercial_paper.diagram.10.png

An example code editor displaying the commercial paper smart contract in papercontract.js

papercontract.js is a JavaScript program designed to run in the node.js environment. Note the following key program lines; a combined sketch of the issue transaction follows this list:

  • const { Contract, Context } = require('fabric-contract-api');

    This statement brings into scope two key Hyperledger Fabric classes that will be used extensively by the smart contract – Contract and Context. You can learn more about these classes in the fabric-shim JSDOCS.

  • class CommercialPaperContract extends Contract {

    This defines the smart contract class CommercialPaperContract based on the built-in Fabric Contract class. The methods which implement the key transactions to issue, buy and redeem commercial paper are defined within this class.

  • async issue(ctx, issuer, paperNumber, issueDateTime, maturityDateTime...) {

    This method defines the commercial paper issue transaction for PaperNet. The parameters that are passed to this method will be used to create the new commercial paper.

    Locate and examine the buy and redeem transactions within the smart contract.

  • let paper = CommercialPaper.createInstance(issuer, paperNumber, issueDateTime...);

    Within the issue transaction, this statement creates a new commercial paper in memory using the CommercialPaper class with the supplied transaction inputs. Examine the buy and redeem transactions to see how they similarly use this class.

  • await ctx.paperList.addPaper(paper);

    This statement adds the new commercial paper to the ledger using ctx.paperList, an instance of a PaperList class that was created when the smart contract context CommercialPaperContext was initialized. Again, examine the buy and redeem methods to see how they use this class.

  • return paper.toBuffer();

    This statement returns a binary buffer as response from the issue transaction for processing by the caller of the smart contract.
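Assembled from those lines, the issue transaction reads approximately as follows (abbreviated from the sample; the setIssued and setOwner calls paraphrase the state changes it makes):

async issue(ctx, issuer, paperNumber, issueDateTime, maturityDateTime, faceValue) {
    // create a new commercial paper in memory from the transaction inputs
    let paper = CommercialPaper.createInstance(issuer, paperNumber, issueDateTime, maturityDateTime, faceValue);

    // the sample also marks the paper's lifecycle state and owner at this point
    paper.setIssued();
    paper.setOwner(issuer);

    // add the paper to the list of all papers held on the custom context
    await ctx.paperList.addPaper(paper);

    // return the serialized paper to the caller
    return paper.toBuffer();
}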

Feel free to examine other files in the contract directory to understand how the smart contract works, and read in detail how papercontract.js is designed in the smart contract topic.

Install contract

Before it can be invoked by applications, papercontract must be installed onto the appropriate peer nodes in PaperNet. MagnetoCorp and DigiBank administrators are able to install papercontract onto peers over which they respectively have authority.

_images/commercial_paper.diagram.6.png

A MagnetoCorp administrator installs a copy of the papercontract onto a MagnetoCorp peer.

A smart contract is the focus of application development, and it is contained within a Hyperledger Fabric artifact called chaincode. One or more smart contracts can be defined within a single chaincode, and installing a chaincode allows them to be used by the different organizations in PaperNet. It means that only administrators need to worry about chaincode; everyone else can think in terms of smart contracts.

The MagnetoCorp administrator uses the peer chaincode install command to copy the papercontract smart contract from their local machine’s file system to the file system within the target peer’s docker container. Once the smart contract is installed on the peer and instantiated on a channel, papercontract can be invoked by applications, and interact with the ledger database via Fabric APIs such as putState() and getState(). Examine how these APIs are used by the StateList class within ledger-api/statelist.js.

Let’s now install papercontract as the MagnetoCorp administrator. In the MagnetoCorp administrator’s command window, use the docker exec command to run the peer chaincode install command in the cliMagnetoCorp container:

(magnetocorp admin)$ docker exec cliMagnetoCorp peer chaincode install -n papercontract -v 0 -p /opt/gopath/src/github.com/contract -l node

2018-11-07 14:21:48.400 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 001 Using default escc
2018-11-07 14:21:48.400 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 002 Using default vscc
2018-11-07 14:21:48.466 UTC [chaincodeCmd] install -> INFO 003 Installed remotely response:<status:200 payload:"OK" >

The cliMagnetoCorp container has set CORE_PEER_ADDRESS=peer0.org1.example.com:7051 to target its commands at peer0.org1.example.com, and the INFO 003 Installed remotely... indicates that papercontract has been successfully installed on this peer. Currently, the MagnetoCorp administrator only has to install a copy of papercontract on a single MagnetoCorp peer.

Note how the peer chaincode install command specified the smart contract path, -p, relative to the cliMagnetoCorp container’s file system: /opt/gopath/src/github.com/contract. This path has been mapped to the local file system path …/organization/magnetocorp/contract via the magnetocorp/configuration/cli/docker-compose.yml file:

volumes:
    - ...
    - ./../../../../organization/magnetocorp:/opt/gopath/src/github.com/
    - ...

See how the volumes directive maps organization/magnetocorp to /opt/gopath/src/github.com/, giving the container access to the local file system where MagnetoCorp’s copy of the papercontract smart contract is held.

You can read more about docker-compose and the peer chaincode install command here.

Instantiate contract

Now that the papercontract chaincode containing the commercial paper smart contract is installed on the required PaperNet peers, an administrator can make it available to different network channels, so that it can be invoked by applications connected to those channels. Because we’re using the basic network configuration for PaperNet, we’re only going to make papercontract available in a single network channel, mychannel.

_images/commercial_paper.diagram.7.png

A MagnetoCorp administrator instantiates papercontract chaincode containing the smart contract. A new docker chaincode container will be created to run papercontract.

The MagnetoCorp administrator uses the peer chaincode instantiate command to instantiate papercontract on mychannel:

(magnetocorp admin)$ docker exec cliMagnetoCorp peer chaincode instantiate -n papercontract -v 0 -l node -c '{"Args":["org.papernet.commercialpaper:instantiate"]}' -C mychannel -P "AND ('Org1MSP.member')"

2018-11-07 14:22:11.162 UTC [chaincodeCmd] InitCmdFactory -> INFO 001 Retrieved channel (mychannel) orderer endpoint: orderer.example.com:7050
2018-11-07 14:22:11.163 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 002 Using default escc
2018-11-07 14:22:11.163 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 003 Using default vscc

This command may take a few minutes to complete.

One of the most important parameters on instantiate is -P. It specifies the endorsement policy for papercontract, describing the set of organizations that must endorse (execute and sign) a transaction before it can be determined as valid. All transactions, whether valid or invalid, will be recorded on the ledger blockchain, but only valid transactions will update the world state.
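For example, in a network with two organizations, a policy requiring endorsement from both would be written with AND, while OR accepts either one (illustrative only – our basic network contains just the single organization Org1MSP):

-P "AND ('Org1MSP.member','Org2MSP.member')"
-P "OR ('Org1MSP.member','Org2MSP.member')"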

In passing, see how instantiate passes the orderer address orderer.example.com:7050. This is because it additionally submits an instantiate transaction to the orderer, which will include the transaction in the next block and distribute it to all peers that have joined mychannel, enabling any peer to execute the chaincode in its own isolated chaincode container. Note that instantiate only needs to be issued once for each channel where papercontract will run, even though it is typically installed on many peers.

See how a papercontract container has been started with the docker ps command:

(magnetocorp admin)$ docker ps

CONTAINER ID        IMAGE                                              COMMAND                  CREATED             STATUS              PORTS          NAMES
4fac1b91bfda        dev-peer0.org1.example.com-papercontract-0-d96...  "/bin/sh -c 'cd /usr…"   2 minutes ago       Up 2 minutes                       dev-peer0.org1.example.com-papercontract-0

Notice that the container is named dev-peer0.org1.example.com-papercontract-0-d96... to indicate which peer started it, and that it’s running papercontract version 0.

Now that we’ve got a basic PaperNet up and running, and papercontract installed and instantiated, let’s turn our attention to the MagnetoCorp application which issues a commercial paper.

Application structure

The smart contract contained in papercontract is called by MagnetoCorp’s application, issue.js. Isabella uses this application to submit a transaction to the ledger which issues commercial paper 00001. Let’s quickly examine how the issue application works.

_images/commercial_paper.diagram.8.png

A gateway allows an application to focus on transaction generation, submission and response. It coordinates transaction proposal, ordering and notification processing between the different network components.

Because the issue application submits transactions on behalf of Isabella, it starts by retrieving Isabella’s X.509 certificate from her wallet, which might be stored on the local file system or a Hardware Security Module (HSM). The issue application can then use the gateway to submit transactions on the channel. The Hyperledger Fabric SDK provides a gateway abstraction so that applications can focus on application logic while delegating network interaction to the gateway. Gateways and wallets make it straightforward to write Hyperledger Fabric applications.

Let’s examine the issue application that Isabella is going to use. Open a separate terminal window for her, and in fabric-samples locate the magnetocorp/application folder:

(magnetocorp user)$ cd commercial-paper/organization/magnetocorp/application/
(magnetocorp user)$ ls

addToWallet.js		issue.js		package.json

addToWallet.js is the program that Isabella is going to use to load her identity into her wallet, and issue.js will use this identity to create commercial paper 00001 on behalf of MagnetoCorp by invoking papercontract.

Change to the directory that contains MagnetoCorp’s copy of the application issue.js, and use your code editor to examine it:

(magnetocorp user)$ cd commercial-paper/organization/magnetocorp/application
(magnetocorp user)$ code issue.js

Take a look at this directory; it contains the issue application and all of its dependencies.

_images/commercial_paper.diagram.11.png

A code editor displaying the contents of the commercial paper application directory.

Note the following key program lines in issue.js:

  • const { FileSystemWallet, Gateway } = require('fabric-network');

    This statement brings two key Hyperledger Fabric SDK classes into scope – Wallet and Gateway. Because Isabella’s X.509 certificate is in the local file system, the application uses FileSystemWallet.

  • const wallet = new FileSystemWallet('../identity/user/isabella/wallet');

    This statement identifies that the application will use isabella wallet when it connects to the blockchain network channel. The application will select a particular identity within isabella wallet. (The wallet must have been loaded with Isabella’s X.509 certificate – that’s what addToWallet.js does.)

  • await gateway.connect(connectionProfile, connectionOptions);

    This line of code connects to the network using the gateway identified by connectionProfile, using the identity referred to in ConnectionOptions.

    See how ../gateway/networkConnection.yaml and User1@org1.example.com are used for these values respectively.

  • const network = await gateway.getNetwork('mychannel');

    This connects the application to the network channel mychannel, where the papercontract was previously instantiated.

  • const contract = await network.getContract('papercontract', 'org.papernet.comm...');

    This statement makes the application addressable to the smart contract defined by the namespace org.papernet.commercialpaper within papercontract. Once an application has issued getContract, it can submit any transaction implemented within it.

  • const issueResponse = await contract.submitTransaction('issue', 'MagnetoCorp', '00001'...);

    This line of code submits a transaction to the network using the issue transaction defined within the smart contract. MagnetoCorp, 00001… are the values to be used by the issue transaction to create a new commercial paper.

  • let paper = CommercialPaper.fromBuffer(issueResponse);

    This statement processes the response from the issue transaction. The response needs to be deserialized from a buffer into paper, a CommercialPaper object which can be interpreted correctly by the application.

Feel free to examine other files in the /application directory to understand how issue.js works, and read in detail how it is implemented in the application topic.

Application dependencies

The issue.js application is written in JavaScript and designed to run in the node.js environment as a client to the PaperNet network. As is common practice, MagnetoCorp’s application is built on many external node packages – to improve both the quality and speed of development. Consider how issue.js includes the js-yaml package to process the YAML gateway connection profile, and the fabric-network package to access the Gateway and Wallet classes:

const yaml = require('js-yaml');
const { FileSystemWallet, Gateway } = require('fabric-network');

These packages have to be downloaded from npm to the local file system using the npm install command. By convention, packages must be installed into an application-relative /node_modules directory for use at runtime.

Examine the package.json file to see how issue.js identifies the packages to download, and their exact versions:

  "dependencies": {
    "fabric-network": "~1.4.0",
    "fabric-client": "~1.4.0",
    "js-yaml": "^3.12.0"
  },

npm versioning is very powerful; you can read more about it here.

Let’s install these packages with the npm install command – this may take up to a minute to complete:

(magnetocorp user)$ cd commercial-paper/organization/magnetocorp/application/
(magnetocorp user)$ npm install

(           ) extract:lodash: sill extract ansi-styles@3.2.1
(...)
added 738 packages in 46.701s

See how this command has updated the directory:

(magnetocorp user)$ ls

addToWallet.js		node_modules	      	package.json
issue.js	      	package-lock.json

Examine the node_modules directory to see the packages that have been installed. There are a lot, because js-yaml and fabric-network are themselves built on other npm packages! Helpfully, the package-lock.json file identifies the exact versions installed, which can prove invaluable if you want to exactly reproduce an environment; to test, diagnose problems or deliver proven applications, for example.

Wallet

Isabella is almost ready to run issue.js to issue MagnetoCorp commercial paper 00001; there’s just one remaining task to perform! As issue.js acts on behalf of Isabella, and therefore MagnetoCorp, it will use an identity from her wallet that reflects these facts. We now need to perform this one-time activity of adding the appropriate X.509 credentials to her wallet.

In Isabella’s terminal window, run the addToWallet.js program to add identity information to her wallet:

(isabella)$ node addToWallet.js

done

Isabella can store multiple identities in her wallet, though in our example, she only uses one – User1@org1.example.com. This identity is currently associated with the basic network, rather than a more realistic PaperNet configuration – we’ll update this tutorial soon.

addToWallet.js is a simple file-copying program which you can examine at your leisure. It moves an identity from the basic network sample into Isabella’s wallet.
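In outline, it does something like the following (a sketch assuming the Fabric 1.4 wallet API; certPath and keyPath stand in for the sample’s actual source locations within the basic network’s crypto material):

const { FileSystemWallet, X509WalletMixin } = require('fabric-network');
const fs = require('fs');

async function main() {
    const wallet = new FileSystemWallet('../identity/user/isabella/wallet');

    // placeholders for the certificate and private key copied from the basic network
    const certPath = '/path/to/signcerts/cert.pem';
    const keyPath = '/path/to/keystore/key.pem';
    const cert = fs.readFileSync(certPath).toString();
    const key = fs.readFileSync(keyPath).toString();

    // label the identity with the MSP it belongs to, then import it
    const identity = X509WalletMixin.createIdentity('Org1MSP', cert, key);
    await wallet.import('User1@org1.example.com', identity);
}

main();

Let’s now focus on the result of this program – the contents of the wallet which will be used to submit transactions to PaperNet: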

(isabella)$ ls ../identity/user/isabella/wallet/

User1@org1.example.com

See how the directory structure maps the User1@org1.example.com identity – other identities used by Isabella would have their own folder. Within this directory you’ll find the identity information that issue.js will use on behalf of Isabella:

(isabella)$ ls ../identity/user/isabella/wallet/User1@org1.example.com

User1@org1.example.com      c75bd6911a...-priv      c75bd6911a...-pub

Notice:

  • a private key c75bd6911a...-priv used to sign transactions on Isabella’s behalf, but not distributed outside of her immediate control.

  • a public key c75bd6911a...-pub which is cryptographically linked to Isabella’s private key. This is wholly contained within Isabella’s X.509 certificate.

  • a certificate User1@org1.example.com which contains Isabella’s public key and other X.509 attributes added by the Certificate Authority at certificate creation. This certificate is distributed to the network so that different actors at different times can cryptographically verify information created by Isabella’s private key.

    Learn more about certificates here. In practice, the certificate file also contains some Fabric-specific metadata such as Isabella’s organization and role – read more in the wallet topic.

Issue application

Isabella can now use issue.js to submit a transaction that will issue MagnetoCorp commercial paper 00001:

(isabella)$ node issue.js

Connect to Fabric gateway.
Use network channel: mychannel.
Use org.papernet.commercialpaper smart contract.
Submit commercial paper issue transaction.
Process issue transaction response.
MagnetoCorp commercial paper : 00001 successfully issued for value 5000000
Transaction complete.
Disconnect from Fabric gateway.
Issue program complete.

The node command initializes a node.js environment and runs issue.js. We can see from the program output that MagnetoCorp commercial paper 00001 was issued with a face value of 5M USD.

As you’ve seen, to achieve this the application invokes the issue transaction defined in the CommercialPaper smart contract within papercontract.js, which was installed and instantiated in the network by the MagnetoCorp administrator. It’s the smart contract which interacts with the ledger via the Fabric APIs, most notably putState() and getState(), to represent the new commercial paper as a vector state within the world state. We’ll see how this vector state is subsequently manipulated by the buy and redeem transactions also defined within the smart contract.

All the time, the underlying Fabric SDK handles the transaction endorsement, ordering and notification process, keeping the application’s logic straightforward; the SDK uses a gateway to abstract away network details, and connectionOptions to declare more advanced processing strategies such as transaction retry.
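For instance, the connection options used by issue.js look roughly like this (a sketch; the discovery settings shown suit the basic network’s localhost deployment):

const connectionOptions = {
    identity: 'User1@org1.example.com',  // the wallet label to act as
    wallet: wallet,
    discovery: { enabled: false, asLocalhost: true }  // assumption: local basic network
};
await gateway.connect(connectionProfile, connectionOptions);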

Let’s now follow the lifecycle of MagnetoCorp 00001 by transferring our focus to DigiBank, who will buy the commercial paper.

Working as DigiBank

Now that commercial paper 00001 has been issued by MagnetoCorp, let’s switch context to interact with PaperNet as employees of DigiBank. First, we’ll act as administrator to create a console configured to interact with PaperNet. Then Balaji, an end user, will use DigiBank’s buy application to buy commercial paper 00001, moving it to the next stage in its lifecycle.

_images/commercial_paper.diagram.5.png

DigiBank administrators and applications interact with the PaperNet network.

As the tutorial currently uses the basic network for PaperNet, the network configuration is quite straightforward. Administrators use a console similar to MagnetoCorp’s, but configured for DigiBank’s file system. Likewise, DigiBank end users will use applications which invoke the same smart contract as MagnetoCorp applications, though they contain DigiBank-specific logic and configuration. It’s the smart contract which captures the shared business process, and the ledger which holds the shared business data, no matter which applications call them.

Let’s open up a separate terminal to allow the DigiBank administrator to interact with PaperNet. In fabric-samples:

(digibank admin)$ cd commercial-paper/organization/digibank/configuration/cli/
(digibank admin)$ docker-compose -f docker-compose.yml up -d cliDigiBank

(...)
Creating cliDigiBank ... done

The DigiBank administrator can now interact with the network using this docker container:

CONTAINER ID        IMAGE                            COMMAND                  CREATED             STATUS              PORT         NAMES
858c2d2961d4        hyperledger/fabric-tools         "/bin/bash"              18 seconds ago      Up 18 seconds                    cliDigiBank

In this tutorial, you’ll use the command line container named cliDigiBank to interact with the network on behalf of DigiBank. We’ve not shown all the docker containers, and in the real world DigiBank users would only see the network components (peers, orderers, CAs) to which they have access.

DigiBank’s administrator doesn’t have much to do in this tutorial right now because the PaperNet network configuration is so simple. Let’s turn our attention to Balaji.

Digibank applications

Balaji uses DigiBank’s buy application to submit a transaction to the ledger which transfers ownership of commercial paper 00001 from MagnetoCorp to DigiBank. The commercial paper smart contract is the same as that used by MagnetoCorp’s application, but the transaction is different this time – it’s buy rather than issue. Let’s examine how DigiBank’s application works.

Open a separate terminal window for Balaji. In fabric-samples, change to the DigiBank application directory that contains the buy.js program, and open it with your editor:

(balaji)$ cd commercial-paper/organization/digibank/application/
(balaji)$ code buy.js

As you can see, this directory contains both the buy and redeem applications that will be used by Balaji.

_images/commercial_paper.diagram.12.png

DigiBank’s commercial paper directory containing the buy.js and redeem.js applications.

DigiBank’s buy.js application is very similar in structure to MagnetoCorp’s issue.js, with two important differences:

  • Identity: the user is a DigiBank user Balaji rather than MagnetoCorp’s Isabella

    const wallet = new FileSystemWallet('../identity/user/balaji/wallet');

    See how the application uses the balaji wallet when it connects to the PaperNet network channel. buy.js selects a particular identity within balaji wallet.

  • Transaction: the invoked transaction is buy rather than issue

    const buyResponse = await contract.submitTransaction('buy', 'MagnetoCorp', '00001'...);

    A buy transaction is submitted with the values MagnetoCorp, 00001…, that are used by the CommercialPaper smart contract class to transfer ownership of commercial paper 00001 to DigiBank.

Feel free to examine other files in the application directory to understand how the application works, and read in detail how buy.js is implemented in the application topic.

Run as DigiBank

DigiBank’s applications which buy and redeem commercial paper have a very similar structure to MagnetoCorp’s issue application. Therefore, let’s install their dependencies and set up Balaji’s wallet so that he can use these applications to buy and redeem commercial paper.

Like MagnetoCorp, DigiBank must install the required application packages using the npm install command, and again, this takes a short time to complete.

In the DigiBank administrator window, install the application dependencies:

(digibank admin)$ cd commercial-paper/organization/digibank/application/
(digibank admin)$ npm install

(            ) extract:lodash: sill extract ansi-styles@3.2.1
(...)
added 738 packages in 46.701s

In Balaji’s terminal window, run the addToWallet.js program to add identity information to his wallet:

(balaji)$ node addToWallet.js

done

The addToWallet.js program adds identity information for Balaji to his wallet, which buy.js and redeem.js will use to submit transactions to PaperNet.

Like Isabella, Balaji can store multiple identities in his wallet, though in our example, he only uses one – Admin@org1.example.com. His corresponding wallet structure at digibank/identity/user/balaji/wallet/Admin@org1.example.com contains files very similar to Isabella’s – feel free to examine it.

Buy application

Balaji can now use buy.js to submit a transaction that will transfer ownership of MagnetoCorp commercial paper 00001 to DigiBank.

Run the buy application in Balaji’s window:

(balaji)$ node buy.js

Connect to Fabric gateway.
Use network channel: mychannel.
Use org.papernet.commercialpaper smart contract.
Submit commercial paper buy transaction.
Process buy transaction response.
MagnetoCorp commercial paper : 00001 successfully purchased by DigiBank
Transaction complete.
Disconnect from Fabric gateway.
Buy program complete.

You can see from the program output that commercial paper 00001 was successfully purchased by Balaji on behalf of DigiBank. buy.js invoked the buy transaction defined in the CommercialPaper smart contract, which updated commercial paper 00001 within the world state using the putState() and getState() Fabric APIs. As you’ve seen, the application logic to buy and issue commercial paper is very similar, as is the smart contract logic.

Redeem application

The final transaction in the lifecycle of commercial paper 00001 is for DigiBank to redeem it with MagnetoCorp. Balaji uses redeem.js to submit a transaction to perform the redeem logic within the smart contract.

Run the redeem transaction in Balaji’s window:

(balaji)$ node redeem.js

Connect to Fabric gateway.
Use network channel: mychannel.
Use org.papernet.commercialpaper smart contract.
Submit commercial paper redeem transaction.
Process redeem transaction response.
MagnetoCorp commercial paper : 00001 successfully redeemed with MagnetoCorp
Transaction complete.
Disconnect from Fabric gateway.
Redeem program complete.

Again, see how commercial paper 00001 was successfully redeemed when redeem.js invoked the redeem transaction defined in CommercialPaper. Again, it updated commercial paper 00001 within the world state to reflect that ownership returned to MagnetoCorp, the issuer of the paper.

Further reading

To understand in more detail how the application and smart contract shown in this tutorial work, you’ll find it helpful to read Developing Applications. This topic gives a fuller explanation of the commercial paper scenario, the PaperNet business network, its actors, and how the applications and smart contracts they use work in detail.

Also feel free to use this sample to start creating your own applications and smart contracts!

Building Your First Network (BYFN)

Note

These instructions have been verified to work against the latest stable Docker images and the pre-compiled setup utilities within the supplied tar file. If you run these commands with images or tools built from the current master branch, it is possible that you will see configuration errors or panics.

The build your first network (BYFN) scenario provisions a sample Hyperledger Fabric network consisting of two organizations, each maintaining two peer nodes. It also deploys a Solo ordering service by default, though other ordering service implementations are available.

Install prerequisites

Before we begin, if you haven’t already done so, you may wish to check that you have all the Prerequisites installed on the platform(s) on which you’ll be developing blockchain applications and/or operating Hyperledger Fabric.

You will also need to Install Samples, Binaries and Docker Images. You will notice that there are a number of samples included in the fabric-samples repository. We will be using the first-network sample. Let’s open that sub-directory now.

cd fabric-samples/first-network

Note

The supplied commands in this documentation MUST be run from your first-network sub-directory of the fabric-samples repository clone. If you elect to run the commands from a different location, the various provided scripts will be unable to find the binaries.

Want to run it now?

We provide a fully annotated script — byfn.sh — that leverages these Docker images to quickly bootstrap a Hyperledger Fabric network that by default is comprised of four peers representing two different organizations, and an orderer node. It will also launch a container to run a scripted execution that will join peers to a channel, deploy a chaincode and drive execution of transactions against the deployed chaincode.

Here’s the help text for the byfn.sh script:

Usage:
byfn.sh <mode> [-c <channel name>] [-t <timeout>] [-d <delay>] [-f <docker-compose-file>] [-s <dbtype>] [-l <language>] [-o <consensus-type>] [-i <imagetag>] [-v]"
  <mode> - one of 'up', 'down', 'restart', 'generate' or 'upgrade'"
    - 'up' - bring up the network with docker-compose up"
    - 'down' - clear the network with docker-compose down"
    - 'restart' - restart the network"
    - 'generate' - generate required certificates and genesis block"
    - 'upgrade'  - upgrade the network from version 1.3.x to 1.4.0"
  -c <channel name> - channel name to use (defaults to \"mychannel\")"
  -t <timeout> - CLI timeout duration in seconds (defaults to 10)"
  -d <delay> - delay duration in seconds (defaults to 3)"
  -f <docker-compose-file> - specify which docker-compose file use (defaults to docker-compose-cli.yaml)"
  -s <dbtype> - the database backend to use: goleveldb (default) or couchdb"
  -l <language> - the chaincode language: golang (default), node, or java"
  -o <consensus-type> - the consensus-type of the ordering service: solo (default), kafka, or etcdraft"
  -i <imagetag> - the tag to be used to launch the network (defaults to \"latest\")"
  -v - verbose mode"
byfn.sh -h (print this message)"

Typically, one would first generate the required certificates and
genesis block, then bring up the network. e.g.:"

  byfn.sh generate -c mychannel"
  byfn.sh up -c mychannel -s couchdb"
  byfn.sh up -c mychannel -s couchdb -i 1.4.0"
  byfn.sh up -l node"
  byfn.sh down -c mychannel"
  byfn.sh upgrade -c mychannel"

Taking all defaults:"
      byfn.sh generate"
      byfn.sh up"
      byfn.sh down"

If you choose not to supply a flag, the script will use default values.

Generate Network Artifacts

Ready to give it a go? Okay then! Execute the following command:

./byfn.sh generate

You will see a brief description as to what will occur, along with a yes/no command line prompt. Respond with a y or hit the return key to execute the described action.

Generating certs and genesis block for channel 'mychannel' with CLI timeout of '10' seconds and CLI delay of '3' seconds
Continue? [Y/n] y
proceeding ...
/Users/xxx/dev/fabric-samples/bin/cryptogen

##########################################################
##### Generate certificates using cryptogen tool #########
##########################################################
org1.example.com
2017-06-12 21:01:37.334 EDT [bccsp] GetDefault -> WARN 001 Before using BCCSP, please call InitFactories(). Falling back to bootBCCSP.
...

/Users/xxx/dev/fabric-samples/bin/configtxgen
##########################################################
#########  Generating Orderer Genesis block ##############
##########################################################
2017-06-12 21:01:37.558 EDT [common/configtx/tool] main -> INFO 001 Loading configuration
2017-06-12 21:01:37.562 EDT [msp] getMspConfig -> INFO 002 intermediate certs folder not found at [/Users/xxx/dev/byfn/crypto-config/ordererOrganizations/example.com/msp/intermediatecerts]. Skipping.: [stat /Users/xxx/dev/byfn/crypto-config/ordererOrganizations/example.com/msp/intermediatecerts: no such file or directory]
...
2017-06-12 21:01:37.588 EDT [common/configtx/tool] doOutputBlock -> INFO 00b Generating genesis block
2017-06-12 21:01:37.590 EDT [common/configtx/tool] doOutputBlock -> INFO 00c Writing genesis block

#################################################################
### Generating channel configuration transaction 'channel.tx' ###
#################################################################
2017-06-12 21:01:37.634 EDT [common/configtx/tool] main -> INFO 001 Loading configuration
2017-06-12 21:01:37.644 EDT [common/configtx/tool] doOutputChannelCreateTx -> INFO 002 Generating new channel configtx
2017-06-12 21:01:37.645 EDT [common/configtx/tool] doOutputChannelCreateTx -> INFO 003 Writing new channel tx

#################################################################
#######    Generating anchor peer update for Org1MSP   ##########
#################################################################
2017-06-12 21:01:37.674 EDT [common/configtx/tool] main -> INFO 001 Loading configuration
2017-06-12 21:01:37.678 EDT [common/configtx/tool] doOutputAnchorPeersUpdate -> INFO 002 Generating anchor peer update
2017-06-12 21:01:37.679 EDT [common/configtx/tool] doOutputAnchorPeersUpdate -> INFO 003 Writing anchor peer update

#################################################################
#######    Generating anchor peer update for Org2MSP   ##########
#################################################################
2017-06-12 21:01:37.700 EDT [common/configtx/tool] main -> INFO 001 Loading configuration
2017-06-12 21:01:37.704 EDT [common/configtx/tool] doOutputAnchorPeersUpdate -> INFO 002 Generating anchor peer update
2017-06-12 21:01:37.704 EDT [common/configtx/tool] doOutputAnchorPeersUpdate -> INFO 003 Writing anchor peer update

This first step generates all of the certificates and keys for our various network entities, the genesis block used to bootstrap the ordering service, and a collection of configuration transactions required to configure a Channel.

Bring Up the Network

Next, you can bring the network up with one of the following commands:

./byfn.sh up

The above command will compile Golang chaincode images and spin up the corresponding containers. Go is the default chaincode language, however there is also support for Node.js and Java chaincode. If you’d like to run through this tutorial with node chaincode, pass the following command instead:

# we use the -l flag to specify the chaincode language
# forgoing the -l flag will default to Golang

./byfn.sh up -l node

Note

For more information on the Node.js shim, please refer to its documentation.

Note

For more information on the Java shim, please refer to its documentation.

To make the sample run with Java chaincode, you have to specify -l java as follows:

./byfn.sh up -l java

Note

Do not run both of these commands. Only one language can be tried unless you bring down and recreate the network between.

In addition to support for multiple chaincode languages, you can also issue a flag that will bring up a five node Raft ordering service or a Kafka ordering service instead of the one node Solo orderer. For more information about the currently supported ordering service implementations, check out The Ordering Service.

To bring up the network with a Raft ordering service, issue:

./byfn.sh up -o etcdraft

To bring up the network with a Kafka ordering service, issue:

./byfn.sh up -o kafka

Once again, you will be prompted as to whether you wish to continue or abort. Respond with a y or hit the return key:

Starting for channel 'mychannel' with CLI timeout of '10' seconds and CLI delay of '3' seconds
Continue? [Y/n]
proceeding ...
Creating network "net_byfn" with the default driver
Creating peer0.org1.example.com
Creating peer1.org1.example.com
Creating peer0.org2.example.com
Creating orderer.example.com
Creating peer1.org2.example.com
Creating cli


 ____    _____      _      ____    _____
/ ___|  |_   _|    / \    |  _ \  |_   _|
\___ \    | |     / _ \   | |_) |   | |
 ___) |   | |    / ___ \  |  _ <    | |
|____/    |_|   /_/   \_\ |_| \_\   |_|

Channel name : mychannel
Creating channel...

The logs will continue from there. This will launch all of the containers, and then drive a complete end-to-end application scenario. Upon successful completion, it should report the following in your terminal window:

Query Result: 90
2017-05-16 17:08:15.158 UTC [main] main -> INFO 008 Exiting.....
===================== Query successful on peer1.org2 on channel 'mychannel' =====================

===================== All GOOD, BYFN execution completed =====================


 _____   _   _   ____
| ____| | \ | | |  _ \
|  _|   |  \| | | | | |
| |___  | |\  | | |_| |
|_____| |_| \_| |____/

You can scroll through these logs to see the various transactions. If you don’t get this result, then jump down to the Troubleshooting section and let’s see whether we can help you discover what went wrong.

Bring Down the Network

Finally, let’s bring it all down so we can explore the network setup one step at a time. The following will kill your containers, remove the crypto material and four artifacts, and delete the chaincode images from your Docker Registry:

./byfn.sh down

Once again, you will be prompted to continue, respond with a y or hit the return key:

Stopping with channel 'mychannel' and CLI timeout of '10'
Continue? [Y/n] y
proceeding ...
WARNING: The CHANNEL_NAME variable is not set. Defaulting to a blank string.
WARNING: The TIMEOUT variable is not set. Defaulting to a blank string.
Removing network net_byfn
468aaa6201ed
...
Untagged: dev-peer1.org2.example.com-mycc-1.0:latest
Deleted: sha256:ed3230614e64e1c83e510c0c282e982d2b06d148b1c498bbdcc429e2b2531e91
...

If you’d like to learn more about the underlying tooling and bootstrap mechanics, continue reading. In these next sections we’ll walk through the various steps and requirements to build a fully-functional Hyperledger Fabric network.

Note

The manual steps outlined below assume that the FABRIC_LOGGING_SPEC in the cli container is set to DEBUG. You can set this by modifying the docker-compose-cli.yaml file in the first-network directory. e.g.

cli:
  container_name: cli
  image: hyperledger/fabric-tools:$IMAGE_TAG
  tty: true
  stdin_open: true
  environment:
    - GOPATH=/opt/gopath
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    - FABRIC_LOGGING_SPEC=DEBUG
    #- FABRIC_LOGGING_SPEC=INFO

Crypto Generator

We will use the cryptogen tool to generate the cryptographic material (x509 certs and signing keys) for our various network entities. These certificates are representative of identities, and they allow for sign/verify authentication to take place as our entities communicate and transact.

How does it work?

Cryptogen consumes a file — crypto-config.yaml — that contains the network topology and allows us to generate a set of certificates and keys for both the Organizations and the components that belong to those Organizations. Each Organization is provisioned a unique root certificate (ca-cert) that binds specific components (peers and orderers) to that Org. By assigning each Organization a unique CA certificate, we are mimicking a typical network where a participating Member would use its own Certificate Authority. Transactions and communications within Hyperledger Fabric are signed by an entity’s private key (keystore), and then verified by means of a public key (signcerts).

You will notice a count variable within this file. We use this to specify the number of peers per Organization; in our case there are two peers per Org. We won’t delve into the minutiae of x.509 certificates and public key infrastructure right now. If you’re interested, you can peruse these topics on your own time.
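As an abbreviated illustration, the Org1 entry in crypto-config.yaml has roughly this shape – Template Count is the count variable referred to above:

PeerOrgs:
  - Name: Org1
    Domain: org1.example.com
    EnableNodeOUs: true
    Template:
      Count: 2    # two peers for Org1
    Users:
      Count: 1    # one non-admin user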

After we run the cryptogen tool, the generated certificates and keys will be saved to a folder titled crypto-config. Note that the crypto-config.yaml file lists five orderers as being tied to the orderer organization. While the cryptogen tool will create certificates for all five of these orderers, unless the Raft or Kafka ordering services are being used, only one of these orderers will be used in a Solo ordering service implementation and be used to create the system channel and mychannel.

Configuration Transaction Generator

The configtxgen tool is used to create four configuration artifacts:

  • orderer genesis block,

  • channel configuration transaction,

  • and two anchor peer transactions - one for each Peer Org.

Please see configtxgen for a complete description of this tool’s functionality.

The orderer block is the Genesis Block for the ordering service, and the channel configuration transaction file is broadcast to the orderer at Channel creation time. The anchor peer transactions, as the name might suggest, specify each Org’s anchor peers on this channel.

How does it work?

Configtxgen consumes a file - configtx.yaml - that contains the definitions for the sample network. There are three members - one Orderer Org (OrdererOrg) and two Peer Orgs (Org1 & Org2) each managing and maintaining two peer nodes. This file also specifies a consortium - SampleConsortium - consisting of our two Peer Orgs. Pay specific attention to the “Profiles” section at the bottom of this file. You will notice that we have several unique profiles. A few are worth noting:

  • TwoOrgsOrdererGenesis: generates the genesis block for a Solo ordering service.

  • SampleMultiNodeEtcdRaft: generates the genesis block for a Raft ordering service. Only used if you issue the -o flag and specify etcdraft.

  • SampleDevModeKafka: generates the genesis block for a Kafka ordering service. Only used if you issue the -o flag and specify kafka.

  • TwoOrgsChannel: generates the genesis block for our channel, mychannel.

These headers are important, as we will pass them in as arguments when we create our artifacts.
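In abbreviated form – with defaults, capabilities and policies elided – the shape of two of these profiles is roughly as follows (a sketch, not the complete file):

Profiles:
  TwoOrgsOrdererGenesis:
    Orderer:
      OrdererType: solo
      Organizations:
        - *OrdererOrg
    Consortiums:
      SampleConsortium:
        Organizations:
          - *Org1
          - *Org2
  TwoOrgsChannel:
    Consortium: SampleConsortium
    Application:
      Organizations:
        - *Org1
        - *Org2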

Note

Notice that our SampleConsortium is defined in the system-level profile and then referenced by our channel-level profile. Channels exist within the purview of a consortium, and all consortia must be defined in the scope of the network at large.

This file also contains two additional specifications that are worth noting. Firstly, we specify the anchor peers for each Peer Org (peer0.org1.example.com & peer0.org2.example.com). Secondly, we point to the location of the MSP directory for each member, in turn allowing us to store the root certificates for each Org in the orderer genesis block. This is a critical concept. Now any network entity communicating with the ordering service can have its digital signature verified.

Run the tools

You can manually generate the certificates/keys and the various configuration artifacts using the configtxgen and cryptogen commands. Alternately, you could try to adapt the byfn.sh script to accomplish your objectives.

Manually generate the artifacts

You can refer to the generateCerts function in the byfn.sh script for the commands necessary to generate the certificates that will be used for your network configuration as defined in the crypto-config.yaml file. However, for the sake of convenience, we will also provide a reference here.

First let’s run the cryptogen tool. Our binary is in the bin directory, so we need to provide the relative path to where the tool resides.

../bin/cryptogen generate --config=./crypto-config.yaml

You should see the following in your terminal:

org1.example.com
org2.example.com

The certs and keys (i.e. the MSP material) will be output into a directory - crypto-config - at the root of the first-network directory.

Next, we need to tell the configtxgen tool where to look for the configtx.yaml file that it needs to ingest. We will tell it to look in our present working directory:

export FABRIC_CFG_PATH=$PWD

Then, we’ll invoke the configtxgen tool to create the orderer genesis block:

../bin/configtxgen -profile TwoOrgsOrdererGenesis -channelID byfn-sys-channel -outputBlock ./channel-artifacts/genesis.block

To output a genesis block for a Raft ordering service, this command should be:

../bin/configtxgen -profile SampleMultiNodeEtcdRaft -channelID byfn-sys-channel -outputBlock ./channel-artifacts/genesis.block

Note the SampleMultiNodeEtcdRaft profile being used here.

To output a genesis block for a Kafka ordering service, issue:

../bin/configtxgen -profile SampleDevModeKafka -channelID byfn-sys-channel -outputBlock ./channel-artifacts/genesis.block

If you are not using Raft or Kafka, you should see an output similar to the following:

2017-10-26 19:21:56.301 EDT [common/tools/configtxgen] main -> INFO 001 Loading configuration
2017-10-26 19:21:56.309 EDT [common/tools/configtxgen] doOutputBlock -> INFO 002 Generating genesis block
2017-10-26 19:21:56.309 EDT [common/tools/configtxgen] doOutputBlock -> INFO 003 Writing genesis block

Note

The orderer genesis block and the subsequent artifacts we are about to create will be output into the channel-artifacts directory at the root of this project. The channelID in the above command is the name of the system channel.

Create a Channel Configuration Transaction

Next, we need to create the channel transaction artifact. Be sure to replace $CHANNEL_NAME or set CHANNEL_NAME as an environment variable that can be used throughout these instructions:

# The channel.tx artifact contains the definitions for our sample channel

export CHANNEL_NAME=mychannel  && ../bin/configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID $CHANNEL_NAME

Note that you don’t have to issue a special command for the channel if you are using a Raft or Kafka ordering service. The TwoOrgsChannel profile will use the ordering service configuration you specified when creating the genesis block for the network.

If you are not using a Raft or Kafka ordering service, you should see an output similar to the following in your terminal:

2017-10-26 19:24:05.324 EDT [common/tools/configtxgen] main -> INFO 001 Loading configuration
2017-10-26 19:24:05.329 EDT [common/tools/configtxgen] doOutputChannelCreateTx -> INFO 002 Generating new channel configtx
2017-10-26 19:24:05.329 EDT [common/tools/configtxgen] doOutputChannelCreateTx -> INFO 003 Writing new channel tx

Next, we will define the anchor peer for Org1 on the channel that we are constructing. Again, be sure to replace $CHANNEL_NAME or set the environment variable for the following commands. The terminal output will mimic that of the channel transaction artifact:

../bin/configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org1MSPanchors.tx -channelID $CHANNEL_NAME -asOrg Org1MSP

Now, we will define the anchor peer for Org2 on the same channel:

../bin/configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org2MSPanchors.tx -channelID $CHANNEL_NAME -asOrg Org2MSP

Start the network

Note

If you ran the byfn.sh example above previously, be sure that you have brought down the test network before you proceed (see Bring Down the Network).

We will leverage a script to spin up our network. The docker-compose file references the images that we have previously downloaded, and bootstraps the orderer with our previously generated genesis.block.

We want to go through the commands manually in order to expose the syntax and functionality of each call.

First let’s start our network:

docker-compose -f docker-compose-cli.yaml up -d

If you want to see the realtime logs for your network, then do not supply the -d flag. If you let the logs stream, then you will need to open a second terminal to execute the CLI calls.

Create & Join Channel

Recall that we created the channel configuration transaction using the configtxgen tool in the Create a Channel Configuration Transaction section, above. You can repeat that process to create additional channel configuration transactions, using the same or different profiles in the configtx.yaml that you pass to the configtxgen tool. Then you can repeat the process defined in this section to establish those other channels in your network.

We will enter the CLI container using the docker exec command:

docker exec -it cli bash

If successful you should see the following:

root@0d78bb69300d:/opt/gopath/src/github.com/hyperledger/fabric/peer#

For the following CLI commands against peer0.org1.example.com to work, we need to preface our commands with the four environment variables given below. These variables for peer0.org1.example.com are baked into the CLI container, therefore we can operate without passing them. HOWEVER, if you want to send calls to other peers or the orderer, keep the CLI container defaults targeting peer0.org1.example.com, but override the environment variables as seen in the example below when you make any CLI calls:

# Environment variables for PEER0

CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
CORE_PEER_ADDRESS=peer0.org1.example.com:7051
CORE_PEER_LOCALMSPID="Org1MSP"
CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt

Next, we are going to pass in the generated channel configuration transaction artifact that we created in the Create a Channel Configuration Transaction section (we called it channel.tx) to the orderer as part of the create channel request.

We specify our channel name with the -c flag and our channel configuration transaction with the -f flag. In this case it is channel.tx, however you can mount your own configuration transaction with a different name. Once again we will set the CHANNEL_NAME environment variable within our CLI container so that we don’t have to explicitly pass this argument. Channel names must be all lower case, less than 250 characters long and match the regular expression [a-z][a-z0-9.-]*.
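Before exporting the variable, you can sanity-check a candidate name against these rules. The following is a minimal bash sketch of such a check (the CANDIDATE variable is our illustration, not part of the tutorial):

# quick sanity check for a candidate channel name (illustrative only)
CANDIDATE=mychannel
if [[ "$CANDIDATE" =~ ^[a-z][a-z0-9.-]*$ ]] && [ "${#CANDIDATE}" -lt 250 ]; then
  echo "valid channel name: $CANDIDATE"
else
  echo "invalid channel name: $CANDIDATE"
fi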

export CHANNEL_NAME=mychannel

# the channel.tx file is mounted in the channel-artifacts directory within your CLI container
# as a result, we pass the full path for the file
# we also pass the path for the orderer ca-cert in order to verify the TLS handshake
# be sure to export or replace the $CHANNEL_NAME variable appropriately

peer channel create -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/channel.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem

Note

Notice the --cafile that we pass as part of this command. It is the local path to the orderer’s root cert, allowing us to verify the TLS handshake.

This command returns a genesis block - <CHANNEL_NAME.block> - which we will use to join the channel. It contains the configuration information specified in channel.tx. If you have not made any modifications to the default channel name, then the command will return you a proto titled mychannel.block.

Note

You will remain in the CLI container for the remainder of these manual commands. You must also remember to preface all commands with the corresponding environment variables when targeting a peer other than peer0.org1.example.com.

Now let’s join peer0.org1.example.com to the channel.

# By default, this joins ``peer0.org1.example.com`` only
# the <CHANNEL_NAME.block> was returned by the previous command
# if you have not modified the channel name, you will join with mychannel.block
# if you have created a different channel name, then pass in the appropriately named block

 peer channel join -b mychannel.block

You can make other peers join the channel as necessary by making appropriate changes in the four environment variables we used in the Create & Join Channel section, above.

Rather than join every peer, we will simply join peer0.org2.example.com so that we can properly update the anchor peer definitions in our channel. Since we are overriding the default environment variables baked into the CLI container, this full command will be the following:

CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp CORE_PEER_ADDRESS=peer0.org2.example.com:9051 CORE_PEER_LOCALMSPID="Org2MSP" CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt peer channel join -b mychannel.block

Alternatively, you could choose to set these environment variables individually rather than passing in the entire string. Once they’ve been set, you simply need to issue the peer channel join command again and the CLI container will act on behalf of peer0.org2.example.com.
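For example, here is a sketch of that alternative, reusing the same Org2 values shown above:

# set the Org2 variables once...
export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
export CORE_PEER_ADDRESS=peer0.org2.example.com:9051
export CORE_PEER_LOCALMSPID="Org2MSP"
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt

# ...then issue the join on behalf of peer0.org2.example.com
peer channel join -b mychannel.block

Remember to re-export the Org1 values afterwards if you want to return to the container's default identity.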

Update the anchor peers

The following commands are channel updates and they will propagate to the definition of the channel. In essence, we are adding additional configuration information on top of the channel's genesis block. Note that we are not modifying the genesis block, but simply adding deltas into the chain that will define the anchor peers.

Update the channel definition to define the anchor peer for Org1 as peer0.org1.example.com:

peer channel update -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/Org1MSPanchors.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem

Now update the channel definition to define the anchor peer for Org2 as peer0.org2.example.com. Identically to the peer channel join command for the Org2 peer, we will need to preface this call with the appropriate environment variables.

CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp CORE_PEER_ADDRESS=peer0.org2.example.com:9051 CORE_PEER_LOCALMSPID="Org2MSP" CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt peer channel update -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/Org2MSPanchors.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
Install and define a chaincode

Note

We will utilize a simple existing chaincode. To learn how to write your own chaincode, see the Chaincode for Developers tutorial.

Note

These instructions use the Fabric chaincode lifecycle introduced in the v2.0 Alpha release. If you would like to use the previous lifecycle to install and instantiate a chaincode, visit the v1.4 version of the Building your first network tutorial.

Applications interact with the blockchain ledger through chaincode. Therefore we need to install a chaincode on every peer that will execute and endorse our transactions. However, before we can interact with our chaincode, the members of the channel need to agree on a chaincode definition that establishes chaincode governance.

We need to package the chaincode before it can be installed on our peers. For each package you create, you need to provide a chaincode package label as a description of the chaincode. Use the following commands to package a sample Go, Node.js or Java chaincode.

Golang

# this packages a Golang chaincode.
# make note of the --lang flag to indicate "golang" chaincode
# for go chaincode --path takes the relative path from $GOPATH/src
# The --label flag is used to create the package label
peer lifecycle chaincode package mycc.tar.gz --path github.com/hyperledger/fabric-samples/chaincode/abstore/go/ --lang golang --label mycc_1

Node.js

# this packages a Node.js chaincode
# make note of the --lang flag to indicate "node" chaincode
# for node chaincode --path takes the absolute path to the node.js chaincode
# The --label flag is used to create the package label
peer lifecycle chaincode package mycc.tar.gz --path /opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode/abstore/node/ --lang node --label mycc_1

Java

# this packages a java chaincode
# make note of the --lang flag to indicate "java" chaincode
# for java chaincode --path takes the absolute path to the java chaincode
# The --label flag is used to create the package label
peer lifecycle chaincode package mycc.tar.gz --path /opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode/abstore/java/ --lang java --label mycc_1

Each of the above commands will create a chaincode package named mycc.tar.gz, which we can use to install the chaincode on our peers. Issue the following command to install the package on peer0 of Org1.

# this command installs a chaincode package on your peer
peer lifecycle chaincode install mycc.tar.gz

A successful install command will return a chaincode package identifier. You should see output similar to the following:

2019-03-13 13:48:53.691 UTC [cli.lifecycle.chaincode] submitInstallProposal -> INFO 001 Installed remotely: response:<status:200 payload:"\nEmycc_1:3a8c52d70c36313cfebbaf09d8616e7a6318ababa01c7cbe40603c373bcfe173" >
2019-03-13 13:48:53.691 UTC [cli.lifecycle.chaincode] submitInstallProposal -> INFO 002 Chaincode code package identifier: mycc_1:3a8c52d70c36313cfebbaf09d8616e7a6318ababa01c7cbe40603c373bcfe173

You can also find the chaincode package identifier by querying your peer for information about the packages you have installed.

# this returns the details of the packages installed on your peers
peer lifecycle chaincode queryinstalled

The command above will return the same package identifier as the install command. You should see output similar to the following:

Get installed chaincodes on peer:
Package ID: mycc_1:3a8c52d70c36313cfebbaf09d8616e7a6318ababa01c7cbe40603c373bcfe173, Label: mycc_1

We are going to need the package ID for future commands, so let’s go ahead and save it as an environment variable. Paste the package ID returned by the peer lifecycle chaincode queryinstalled command into the command below. The package ID may not be the same for all users, so you need to complete this step using the package ID returned from your console.

# Save the package ID as an environment variable.

CC_PACKAGE_ID=mycc_1:3a8c52d70c36313cfebbaf09d8616e7a6318ababa01c7cbe40603c373bcfe173
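If you prefer not to copy and paste the identifier by hand, a small sketch like the following can capture it automatically, assuming the "Package ID: <id>, Label: <label>" output format shown above:

# extract the package ID for the mycc_1 label from queryinstalled output
CC_PACKAGE_ID=$(peer lifecycle chaincode queryinstalled | sed -n 's/^Package ID: \(.*\), Label: mycc_1$/\1/p')
echo $CC_PACKAGE_ID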

The endorsement policy of mycc will be set to require endorsements from a peer in both Org1 and Org2. Therefore, we also need to install the chaincode on a peer in Org2.

Modify the following four environment variables to issue the install command as Org2:

# Environment variables for PEER0 in Org2

CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
CORE_PEER_ADDRESS=peer0.org2.example.com:9051
CORE_PEER_LOCALMSPID="Org2MSP"
CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt

Now install the chaincode package onto peer0 of Org2. The following command will install the chaincode and return the same identifier as the install command we issued as Org1.

# this installs a chaincode package on your peer
peer lifecycle chaincode install mycc.tar.gz

After you install the package, you need to approve a chaincode definition for your organization. The chaincode definition includes the important parameters of chaincode governance, including the chaincode name and version. The definition also includes the package identifier used to associate the chaincode package installed on your peers with a chaincode definition approved by your organization.

Because we set the environment variables to operate as Org2, we can use the following command to approve a definition of the mycc chaincode for Org2. The approval is distributed within each organization using gossip, so the command does not need to target every peer within an organization.

# this approves a chaincode definition for your org
# make note of the --package-id flag that provides the package ID
# use the --init-required flag to request the ``Init`` function be invoked to initialize the chaincode
peer lifecycle chaincode approveformyorg --channelID $CHANNEL_NAME --name mycc --version 1.0 --init-required --package-id $CC_PACKAGE_ID --sequence 1 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --waitForEvent

We could have provided a --signature-policy or --channel-config-policy argument to the command above to set the chaincode endorsement policy. The endorsement policy specifies how many peers belonging to different channel members need to validate a transaction against a given chaincode. Because we did not set a policy, the definition of mycc will use the default endorsement policy, which requires that a transaction be endorsed by a majority of channel members present when the transaction is submitted. This implies that if new organizations are added to or removed from the channel, the endorsement policy is updated automatically to require more or fewer endorsements. In this tutorial, the default policy will require an endorsement from a peer belonging to Org1 AND Org2 (i.e. two endorsements). See the Endorsement policies documentation for more details on policy implementation.
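As an illustration (not a command run in this tutorial), an explicit policy could be supplied as shown below; the policy string is our assumption of an equivalent two-org requirement:

# hedged example: pass an explicit endorsement policy instead of the default
peer lifecycle chaincode approveformyorg --channelID $CHANNEL_NAME --name mycc --version 1.0 --init-required --package-id $CC_PACKAGE_ID --sequence 1 --signature-policy "AND('Org1MSP.peer','Org2MSP.peer')" --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --waitForEvent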

All organizations need to agree on the definition before they can use the chaincode. Modify the following four environment variables to operate as Org1:

# Environment variables for PEER0

CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
CORE_PEER_ADDRESS=peer0.org1.example.com:7051
CORE_PEER_LOCALMSPID="Org1MSP"
CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt

You can now approve a definition for the mycc chaincode as Org1. Chaincode is approved at the organization level. You can issue the command once even if you have multiple peers.

# this defines a chaincode for your org
# make note of the --package-id flag that provides the package ID
# use the --init-required flag to request the Init function be invoked to initialize the chaincode
peer lifecycle chaincode approveformyorg --channelID $CHANNEL_NAME --name mycc --version 1.0 --init-required --package-id $CC_PACKAGE_ID --sequence 1 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --waitForEvent

Once a sufficient number of channel members have approved a chaincode definition, one member can commit the definition to the channel. By default a majority of channel members need to approve a definition before it can be committed. It is possible to discover the approval status for the chaincode definition across all organizations by issuing the following query:

# the flags used for this command are identical to those used for approveformyorg
# except for --package-id which is not required since it is not stored as part of
# the definition
peer lifecycle chaincode queryapprovalstatus --channelID $CHANNEL_NAME --name mycc --version 1.0 --init-required --sequence 1 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem

The command will produce as output a JSON map showing if the organizations in the channel have approved the chaincode definition provided in the queryapprovalstatus command. In this case, given that both organizations have approved, we obtain:

{
        "Approved": {
                "Org1MSP": true,
                "Org2MSP": true
        }
}

Since both channel members have approved the definition, we can now commit it to the channel using the following command. You can issue this command as either Org1 or Org2. Note that the transaction targets peers in Org1 and Org2 to collect endorsements.

# this commits the chaincode definition to the channel
peer lifecycle chaincode commit -o orderer.example.com:7050 --channelID $CHANNEL_NAME --name mycc --version 1.0 --sequence 1 --init-required --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:9051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt --waitForEvent
Invoking the chaincode

After a chaincode definition has been committed to a channel, we are ready to invoke the chaincode and start interacting with the ledger. We requested the execution of the Init function in the chaincode definition using the --init-required flag. As a result, we need to pass the --isInit flag to its first invocation and supply the arguments to the Init function. Issue the following command to initialize the chaincode and put the initial data on the ledger.

# be sure to set the -C and -n flags appropriately
# use the --isInit flag if you are invoking an Init function
peer chaincode invoke -o orderer.example.com:7050 --isInit --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n mycc --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:9051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt -c '{"Args":["Init","a","100","b","100"]}' --waitForEvent

The first invoke will start the chaincode container. We may need to wait for the container to start. Node.js images will take longer.
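If you want to confirm that the container has come up, a simple check is the following (the dev-peer container name pattern is taken from the "What's happening behind the scenes?" section further below):

# list any running chaincode containers
docker ps --filter name=dev-peer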

Query

Let’s query the chaincode to make sure that the container was properly started and the state DB was populated. The syntax for query is as follows:

# be sure to set the -C and -n flags appropriately

peer chaincode query -C $CHANNEL_NAME -n mycc -c '{"Args":["query","a"]}'
Invoke

Now let’s move 10 from a to b. This transaction will cut a new block and update the state DB. The syntax for invoke is as follows:

# be sure to set the -C and -n flags appropriately
peer chaincode invoke -o orderer.example.com:7050 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n mycc --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:9051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt -c '{"Args":["invoke","a","b","10"]}' --waitForEvent
Query

Let’s confirm that our previous invocation executed properly. We initialized the key a with a value of 100 and just removed 10 with our previous invocation. Therefore, a query against a should return 90. The syntax for query is as follows.

# be sure to set the -C and -n flags appropriately

peer chaincode query -C $CHANNEL_NAME -n mycc -c '{"Args":["query","a"]}'

We should see the following:

Query Result: 90
Install the chaincode on an additional peer

If you want additional peers to interact with the ledger, then you will need to join them to the channel and install the same chaincode package on the peers. You only need to approve the chaincode definition once from your organization. A chaincode container will be launched for each peer as soon as they try to interact with that specific chaincode. Again, be cognizant of the fact that the Node.js images will be slower to build and start upon the first invoke.

We will install the chaincode on a third peer, peer1 in Org2. Modify the following four environment variables to issue the install command against peer1 in Org2:

# Environment variables for PEER1 in Org2

CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
CORE_PEER_ADDRESS=peer1.org2.example.com:10051
CORE_PEER_LOCALMSPID="Org2MSP"
CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt

Now install the mycc package on peer1 of Org2:

# this command installs a chaincode package on your peer
peer lifecycle chaincode install mycc.tar.gz
Query

Let’s confirm that we can issue the query to Peer1 in Org2. We initialized the key a with a value of 100 and just removed 10 with our previous invocation. Therefore, a query against a should still return 90.

Peer1 in Org2 must first join the channel before it can respond to queries. The channel can be joined by issuing the following command:

CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp CORE_PEER_ADDRESS=peer1.org2.example.com:10051 CORE_PEER_LOCALMSPID="Org2MSP" CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt peer channel join -b mychannel.block

After the join command returns, the query can be issued. The syntax for query is as follows.

# be sure to set the -C and -n flags appropriately

peer chaincode query -C $CHANNEL_NAME -n mycc -c '{"Args":["query","a"]}'

We should see the following:

Query Result: 90

If you received an error, it may be because it takes a few seconds for the peer to join and catch up to the current blockchain height. You may re-query as needed. Feel free to perform additional invokes as well.
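If you'd rather script the wait, a minimal bash retry sketch might look like this:

# retry the query a few times while the peer catches up
for i in 1 2 3 4 5; do
  peer chaincode query -C $CHANNEL_NAME -n mycc -c '{"Args":["query","a"]}' && break
  sleep 3
done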

What’s happening behind the scenes?

Note

These steps describe the scenario in which script.sh is run by ./byfn.sh up. Clean your network with ./byfn.sh down, make sure the teardown has completed, and then launch your network again from the same docker-compose prompt.

  • A script - script.sh - is baked inside the CLI container. The script drives the createChannel command against the supplied channel name and uses the channel.tx file for channel configuration.

  • The output of createChannel is a genesis block - <your_channel_name>.block - which gets stored on the peers’ file systems and contains the channel configuration specified from channel.tx.

  • The joinChannel command is exercised for all four peers, which takes as input the previously generated genesis block. This command instructs the peers to join <your_channel_name> and create a chain starting with <your_channel_name>.block.

  • Now we have a channel consisting of four peers, and two organizations. This is our TwoOrgsChannel profile.

  • peer0.org1.example.com and peer1.org1.example.com belong to Org1; peer0.org2.example.com and peer1.org2.example.com belong to Org2

  • These relationships are defined through the crypto-config.yaml and the MSP path is specified in our docker compose.

  • The anchor peers for Org1MSP (peer0.org1.example.com) and Org2MSP (peer0.org2.example.com) are then updated. We do this by passing the Org1MSPanchors.tx and Org2MSPanchors.tx artifacts to the ordering service along with the name of our channel.

  • A chaincode - abstore - is packaged and installed on peer0.org1.example.com and peer0.org2.example.com

  • The chaincode is then separately approved by Org1 and Org2, and then committed on the channel. Since an endorsement policy was not specified, the channel’s default endorsement policy of a majority of organizations will get utilized, meaning that any transaction must be endorsed by a peer tied to Org1 and Org2.

  • The chaincode Init is then called, which starts the container for the target peer and initializes the key value pairs associated with the chaincode. The initial values for this example are ["a","100","b","200"]. This first invoke results in a container by the name of dev-peer0.org2.example.com-mycc-1.0 starting.

  • A query against the value of “a” is issued to peer0.org2.example.com. A container for Org2 peer0 by the name of dev-peer0.org2.example.com-mycc-1.0 was started when the chaincode was initialized. The result of the query is returned. No write operations have occurred, so a query against “a” will still return a value of “100”.

  • An invoke is sent to peer0.org1.example.com and peer0.org2.example.com to move “10” from “a” to “b”

  • A query is sent to peer0.org2.example.com for the value of “a”. A value of 90 is returned, correctly reflecting the previous transaction during which the value for key “a” was modified by 10.

  • The chaincode - abstore - is installed on peer1.org2.example.com

  • A query is sent to peer1.org2.example.com for the value of “a”. This starts a third chaincode container by the name of dev-peer1.org2.example.com-mycc-1.0. A value of 90 is returned, correctly reflecting the previous transaction during which the value for key “a” was modified by 10.

What does this demonstrate?

Chaincode MUST be installed on a peer in order for it to successfully perform read/write operations against the ledger. Furthermore, a chaincode container is not started for a peer until an init or traditional transaction - read/write - is performed against that chaincode (e.g. a query for the value of "a"). The transaction causes the container to start. Also, all peers in a channel maintain an exact copy of the ledger, which comprises the blockchain to store the immutable, sequenced record in blocks, as well as a state database to maintain a snapshot of the current state. This includes those peers that do not have chaincode installed on them (like peer1.org1.example.com in the above example). Finally, the chaincode is accessible immediately after it is installed (like on peer1.org2.example.com in the above example) because its definition has already been committed on the channel.

How do I see these transactions?

Check the logs for the CLI Docker container.

docker logs -f cli

You should see the following output:

2017-05-16 17:08:01.366 UTC [msp] GetLocalMSP -> DEBU 004 Returning existing local MSP
2017-05-16 17:08:01.366 UTC [msp] GetDefaultSigningIdentity -> DEBU 005 Obtaining default signing identity
2017-05-16 17:08:01.366 UTC [msp/identity] Sign -> DEBU 006 Sign: plaintext: 0AB1070A6708031A0C08F1E3ECC80510...6D7963631A0A0A0571756572790A0161
2017-05-16 17:08:01.367 UTC [msp/identity] Sign -> DEBU 007 Sign: digest: E61DB37F4E8B0D32C9FE10E3936BA9B8CD278FAA1F3320B08712164248285C54
Query Result: 90
2017-05-16 17:08:15.158 UTC [main] main -> INFO 008 Exiting.....
===================== Query successful on peer1.org2 on channel 'mychannel' =====================

===================== All GOOD, BYFN execution completed =====================


 _____   _   _   ____
| ____| | \ | | |  _ \
|  _|   |  \| | | | | |
| |___  | |\  | | |_| |
|_____| |_| \_| |____/

You can scroll through these logs to see the various transactions.

How can I see the chaincode logs?

Inspect the individual chaincode containers to see the separate transactions executed against each container. Here is the combined output from each container:

$ docker logs dev-peer0.org2.example.com-mycc-1.0
04:30:45.947 [BCCSP_FACTORY] DEBU : Initialize BCCSP [SW]
ex02 Init
Aval = 100, Bval = 200

$ docker logs dev-peer0.org1.example.com-mycc-1.0
04:31:10.569 [BCCSP_FACTORY] DEBU : Initialize BCCSP [SW]
ex02 Invoke
Query Response:{"Name":"a","Amount":"100"}
ex02 Invoke
Aval = 90, Bval = 210

$ docker logs dev-peer1.org2.example.com-mycc-1.0
04:31:30.420 [BCCSP_FACTORY] DEBU : Initialize BCCSP [SW]
ex02 Invoke
Query Response:{"Name":"a","Amount":"90"}

You can also see the peer logs to view chaincode invoke messages and block commit messages:

$ docker logs peer0.org1.example.com

Understanding the Docker Compose topology

The BYFN sample offers us two flavors of Docker Compose files, both of which are extended from the docker-compose-base.yaml (located in the base folder). Our first flavor, docker-compose-cli.yaml, provides us with a CLI container, along with an orderer and four peers. We use this file for the entirety of the instructions on this page.

Note

The remainder of this section covers a docker-compose file designed for the SDK. Refer to the Node SDK repo for details on running these tests.

The second flavor, docker-compose-e2e.yaml, is constructed to run end-to-end tests using the Node.js SDK. Aside from functioning with the SDK, its primary differentiation is that there are containers for the fabric-ca servers. As a result, we are able to send REST calls to the organizational CAs for user registration and enrollment.

If you want to use the docker-compose-e2e.yaml without first running the byfn.sh script, then you will need to make four slight modifications. We need to point to the private keys for our Organizations' CAs. You can locate these values in your crypto-config folder. For example, to locate the private key for Org1 we would follow this path - crypto-config/peerOrganizations/org1.example.com/ca/. The private key is a long hash value followed by _sk. The path for Org2 would be - crypto-config/peerOrganizations/org2.example.com/ca/.

In the docker-compose-e2e.yaml update the FABRIC_CA_SERVER_TLS_KEYFILE variable for ca0 and ca1. You also need to edit the path that is provided in the command to start the ca server. You are providing the same private key twice for each CA container.
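As a sketch of what that substitution might look like, the following mirrors the approach taken by the byfn.sh script; the CA1_PRIVATE_KEY placeholder name and GNU sed behavior are our assumptions, so adjust them to your files:

# find the generated key file for the Org1 CA (assumes exactly one *_sk file)
# and splice its name into the compose file; repeat analogously for Org2
PRIV_KEY=$(basename crypto-config/peerOrganizations/org1.example.com/ca/*_sk)
sed -i "s/CA1_PRIVATE_KEY/${PRIV_KEY}/g" docker-compose-e2e.yaml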

Using CouchDB

The state database can be switched from the default (goleveldb) to CouchDB. The same chaincode functions are available with CouchDB, however, there is the added ability to perform rich and complex queries against the state database data content contingent upon the chaincode data being modeled as JSON.

To use CouchDB instead of the default database (goleveldb), follow the same procedures outlined earlier for generating the artifacts, except when starting the network pass docker-compose-couch.yaml as well:

docker-compose -f docker-compose-cli.yaml -f docker-compose-couch.yaml up -d

abstore should now work using CouchDB underneath.

Note

If you choose to implement mapping of the fabric-couchdb container port to a host port, please make sure you are aware of the security implications. Mapping of the port in a development environment makes the CouchDB REST API available, and allows the visualization of the database via the CouchDB web interface (Fauxton). Production environments would likely refrain from implementing port mapping in order to restrict outside access to the CouchDB containers.

You can use the abstore chaincode against the CouchDB state database using the steps outlined above, however in order to exercise the CouchDB query capabilities you will need to use a chaincode that has data modeled as JSON. The sample chaincode marbles02 has been written to demonstrate the queries you can issue from your chaincode if you are using a CouchDB database. You can locate the marbles02 chaincode in the fabric/examples/chaincode/go directory.

We will follow the same process to create and join the channel as outlined in the Create & Join Channel section above. Once you have joined your peer(s) to the channel, use the following steps to interact with the marbles02 chaincode:

  • Package and install the chaincode on peer0.org1.example.com:

     peer lifecycle chaincode package marbles.tar.gz --path github.com/hyperledger/fabric-samples/chaincode/marbles02/go/ --lang golang --label marbles_1
     peer lifecycle chaincode install marbles.tar.gz

The install command will return a chaincode packageID that you will use to approve a chaincode definition:

2019-04-08 20:10:32.568 UTC [cli.lifecycle.chaincode] submitInstallProposal -> INFO 001 Installed remotely: response:<status:200 payload:"\nJmarbles_1:cfb623954827aef3f35868764991cc7571b445a45cfd3325f7002f14156d61ae\022\tmarbles_1" >
2019-04-08 20:10:32.568 UTC [cli.lifecycle.chaincode] submitInstallProposal -> INFO 002 Chaincode code package identifier: marbles_1:cfb623954827aef3f35868764991cc7571b445a45cfd3325f7002f14156d61ae
  • Save the packageID as an environment variable so you can pass it to future commands:

    CC_PACKAGE_ID=marbles_1:cfb623954827aef3f35868764991cc7571b445a45cfd3325f7002f14156d61ae
    
  • Approve a chaincode definition as Org1:

# be sure to modify the $CHANNEL_NAME variable accordingly for the approve command

peer lifecycle chaincode approveformyorg --channelID $CHANNEL_NAME --name marbles --version 1.0 --package-id $CC_PACKAGE_ID --sequence 1 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --waitForEvent
  • Install the chaincode on peer0.org2.example.com:

CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
CORE_PEER_ADDRESS=peer0.org2.example.com:9051
CORE_PEER_LOCALMSPID="Org2MSP"
CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
peer lifecycle chaincode install marbles.tar.gz
  • Approve a chaincode definition as Org2, and then commit the definition to the channel:

# be sure to modify the $CHANNEL_NAME variable accordingly for the approve and commit commands

peer lifecycle chaincode approveformyorg --channelID $CHANNEL_NAME --name marbles --version 1.0 --package-id $CC_PACKAGE_ID --sequence 1 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --waitForEvent
peer lifecycle chaincode commit -o orderer.example.com:7050 --channelID $CHANNEL_NAME --name marbles --version 1.0 --sequence 1 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:9051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt --waitForEvent
  • We can now create some marbles. The first invoke of the chaincode will start the chaincode container. You may need to wait for the container to start.

# be sure to modify the $CHANNEL_NAME variable accordingly

peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:9051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt -c '{"Args":["initMarble","marble1","blue","35","tom"]}'

Once the container has started, you can issue additional commands to create some marbles and move them around:

# be sure to modify the $CHANNEL_NAME variable accordingly

peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:9051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt -c '{"Args":["initMarble","marble2","red","50","tom"]}'
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:9051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt -c '{"Args":["initMarble","marble3","blue","70","tom"]}'
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:9051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt -c '{"Args":["transferMarble","marble2","jerry"]}'
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:9051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt -c '{"Args":["transferMarblesBasedOnColor","blue","jerry"]}'
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:9051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt -c '{"Args":["delete","marble1"]}'
  • If you chose to map the CouchDB ports in docker-compose, you can now view the state database through the CouchDB web interface (Fauxton) by opening a browser and navigating to the following URL:

    http://localhost:5984/_utils

You should see a database named mychannel (or your unique channel name) and the documents inside it.

Note

For the below commands, be sure to update the $CHANNEL_NAME variable appropriately.

You can run regular queries from the CLI (e.g. reading marble2):

peer chaincode query -C $CHANNEL_NAME -n marbles -c '{"Args":["readMarble","marble2"]}'

The output should display the details of marble2:

Query Result: {"color":"red","docType":"marble","name":"marble2","owner":"jerry","size":50}

You can retrieve the history of a specific marble - e.g. marble1:

peer chaincode query -C $CHANNEL_NAME -n marbles -c '{"Args":["getHistoryForMarble","marble1"]}'

The output should display the transactions on marble1. Note that the final entry has an empty Value because the last invoke above deleted marble1:

Query Result: [{"TxId":"1c3d3caf124c89f91a4c0f353723ac736c58155325f02890adebaa15e16e6464", "Value":{"docType":"marble","name":"marble1","color":"blue","size":35,"owner":"tom"}},{"TxId":"755d55c281889eaeebf405586f9e25d71d36eb3d35420af833a20a2f53a3eefd", "Value":{"docType":"marble","name":"marble1","color":"blue","size":35,"owner":"jerry"}},{"TxId":"819451032d813dde6247f85e56a89262555e04f14788ee33e28b232eef36d98f", "Value":}]

You can also perform rich queries on the data content, such as querying marble fields by owner jerry:

peer chaincode query -C $CHANNEL_NAME -n marbles -c '{"Args":["queryMarblesByOwner","jerry"]}'

The output should display the two marbles owned by jerry:

Query Result: [{"Key":"marble2", "Record":{"color":"red","docType":"marble","name":"marble2","owner":"jerry","size":50}},{"Key":"marble3", "Record":{"color":"blue","docType":"marble","name":"marble3","owner":"jerry","size":70}}]

Why CouchDB

CouchDB is a NoSQL solution: a document-oriented database where document fields are stored as key-value maps. Fields can be a simple key-value pair, a list, or a map. In addition to the keyed/composite-key/key-range queries supported by LevelDB, CouchDB also supports full rich-query capability, such as non-key queries against the whole blockchain data, because its data content is stored in JSON format and is fully queryable. CouchDB can therefore meet the chaincode, auditing, and reporting requirements of many use cases that are not supported by LevelDB.
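For instance, the marbles02 sample exposes an ad hoc rich-query entry point (queryMarbles) that accepts a CouchDB selector string; a hedged example of using it from the CLI, assuming the function names from that sample:

# rich query using a CouchDB JSON selector (marbles02's queryMarbles function)
peer chaincode query -C $CHANNEL_NAME -n marbles -c '{"Args":["queryMarbles","{\"selector\":{\"docType\":\"marble\",\"owner\":\"tom\"}}"]}'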

CouchDB can also enhance security for compliance and data protection in the blockchain, as it can enforce field-level security by filtering and masking individual attributes within a transaction and granting read-only permission only where needed.

In addition, CouchDB falls into the AP (Availability and Partition Tolerance) category of the CAP theorem. It uses a master-master replication model with eventual consistency. More information can be found on the Eventual Consistency page of the CouchDB documentation. However, under each Fabric peer there are no database replicas; writes to the database are guaranteed consistent and durable (not eventually consistent).

CouchDB is the first external pluggable state database for Fabric, and there could and should be other external database options. For example, IBM enables a relational database for its blockchain. CP (Consistency and Partition Tolerance) databases may also be needed in order to provide data consistency without application-level guarantees.

A Note on Data Persistence

If data persistence is desired on the peer container or the CouchDB container, one option is to mount a directory in the docker-host into a relevant directory in the container. For example, you may add the following two lines in the peer container specification in the docker-compose-base.yaml file:

volumes:
 - /var/hyperledger/peer0:/var/hyperledger/production

For the CouchDB container, you may add the following two lines in the CouchDB container specification:

volumes:
 - /var/hyperledger/couchdb0:/opt/couchdb/data

Troubleshooting

  • Always start your network fresh. Use the following command to remove artifacts, crypto, containers and chaincode images:

    ./byfn.sh down
    

    Note

    You will see errors if you do not remove old containers and images.

  • If you see Docker errors, first check your docker version (Prerequisites), and then try restarting your Docker process. Problems with Docker are oftentimes not immediately recognizable. For example, you may see errors resulting from an inability to access crypto material mounted within a container.

    If they persist remove your images and start from scratch:

    docker rm -f $(docker ps -aq)
    docker rmi -f $(docker images -q)
    
  • If you see errors on your create, approve, commit, invoke or query commands, make sure you have properly updated the channel name and chaincode name. There are placeholder values in the supplied sample commands.

  • If you see the below error:

    Error: Error endorsing chaincode: rpc error: code = 2 desc = Error installing chaincode code mycc:1.0(chaincode /var/hyperledger/production/chaincodes/mycc.1.0 exits)
    

    You likely have chaincode images (e.g. dev-peer1.org2.example.com-mycc-1.0 or dev-peer0.org1.example.com-mycc-1.0) from prior runs. Remove them and try again.

    docker rmi -f $(docker images | grep dev-peer[0-9] | awk '{print $3}')
    
  • If you see something similar to the following:

    Error connecting: rpc error: code = 14 desc = grpc: RPC failed fast due to transport failure
    Error: rpc error: code = 14 desc = grpc: RPC failed fast due to transport failure
    

    Make sure you are running your network against the “1.0.0” images that have been retagged as “latest”.

  • If you see the below error:

    [configtx/tool/localconfig] Load -> CRIT 002 Error reading configuration: Unsupported Config Type ""
    panic: Error reading configuration: Unsupported Config Type ""
    

    Then you did not set the FABRIC_CFG_PATH environment variable properly. The configtxgen tool needs this variable in order to locate the configtx.yaml. Go back and execute an export FABRIC_CFG_PATH=$PWD, then recreate your channel artifacts.

  • To cleanup the network, use the down option:

    ./byfn.sh down
    
  • If you see an error stating that you still have “active endpoints”, then prune your Docker networks. This will wipe your previous networks and start you with a fresh environment:

    docker network prune
    

    You will see the following message:

    WARNING! This will remove all networks not used by at least one container.
    Are you sure you want to continue? [y/N]
    

    Select y.

  • If you see an error similar to the following:

    /bin/bash: ./scripts/script.sh: /bin/bash^M: bad interpreter: No such file or directory
    

    Ensure that the file in question (script.sh in this example) is encoded in the Unix format. This was most likely caused by not setting core.autocrlf to false in your Git configuration (see Windows extras). There are several ways of fixing this. If you have access to the vim editor for instance, open the file:

    vim ./fabric-samples/first-network/scripts/script.sh
    

    Then change its format by executing the following vim command:

    :set ff=unix
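
    Alternatively, if vim is not available, a one-line sketch with GNU sed can strip the carriage returns in place:

    # remove trailing \r characters (DOS line endings) from the script
    sed -i 's/\r$//' ./fabric-samples/first-network/scripts/script.sh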
    

Note

If you continue to see errors, share your logs on the fabric-questions channel on Hyperledger Rocket Chat or on StackOverflow.

Adding an Org to a Channel

Note

Ensure that you have downloaded the appropriate images and binaries as outlined in Install Samples, Binaries and Docker Images and Prerequisites that conform to the version of this documentation (which can be found at the bottom of the table of contents to the left). In particular, your version of the fabric-samples folder must include the eyfn.sh ("Extending Your First Network") script and its related scripts.

This tutorial serves as an extension to the Building Your First Network (BYFN) tutorial, and will demonstrate the addition of a new organization - Org3 - to the application channel (mychannel) autogenerated by BYFN. It assumes a strong understanding of BYFN, including the usage and functionality of the aforementioned utilities.

While we will focus solely on the integration of a new organization here, the same approach can be adopted when performing other channel configuration updates (updating modification policies or altering batch size, for example). To learn more about the process and possibilities of channel config updates, check out Updating a Channel Configuration. It's also worth noting that channel configuration updates like the one demonstrated here will usually be the responsibility of an organization admin (rather than a chaincode or application developer).

Note

Make sure the automated byfn.sh script runs without error on your machine before continuing. If you have exported your binaries and the related tools (cryptogen, configtxgen, etc) into your PATH variable, you'll be able to modify the commands accordingly without passing the fully qualified path.

Setup the Environment

We will be operating from your local clone of fabric-samples, inside the first-network subdirectory. Change into that directory now. You will also want to open a few extra terminals for ease of use.

First, use the byfn.sh script to tidy up. This command will kill any active or stale docker containers and remove previously generated artifacts. It is by no means necessary to bring down a Fabric network in order to perform channel configuration update tasks. However, for the sake of this tutorial, we want to operate from a known initial state. Therefore let's run the following command to clean up any previous environments:

./byfn.sh down

Now generate the default BYFN artifacts:

./byfn.sh generate

And launch the network making use of the scripted execution within the CLI container:

./byfn.sh up

Now that you have a clean version of BYFN running on your machine, you have two different paths you can pursue. First, we offer a fully annotated script that will carry out a config transaction update to bring Org3 into the network.

Also, we will show a "manual" version of the same process, showing each step and explaining what it accomplishes (since we show you how to bring down your network before this manual process, you could also run the script and then take a look at each step).

Bring Org3 into the Channel with the Script

You should be in first-network. To use the script, simply issue the following:

./eyfn.sh up

The output here is well worth reading. You'll see the Org3 crypto material being added, the config update being created and signed, and then chaincode being installed to allow Org3 to execute ledger queries.

If everything goes well, you'll get this message:

========= All GOOD, EYFN test execution completed ===========

eyfn.sh can be used with the same Node.js chaincode and database options as byfn.sh by issuing the following (instead of ./byfn.sh up):

./byfn.sh up -c testchannel -s couchdb -l node

And then:

./eyfn.sh up -c testchannel -s couchdb -l node

For those who want a closer look at this process, the rest of the doc will show you each command for making a channel update and what it does.

Bring Org3 into the Channel Manually

Note

The manual steps listed below assume that FABRIC_LOGGING_SPEC in the cli and Org3cli containers is set to DEBUG.

For the cli container, you can set this by modifying the docker-compose-cli.yaml file in the first-network directory.

cli:
  container_name: cli
  image: hyperledger/fabric-tools:$IMAGE_TAG
  tty: true
  stdin_open: true
  environment:
    - GOPATH=/opt/gopath
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    #- FABRIC_LOGGING_SPEC=INFO
    - FABRIC_LOGGING_SPEC=DEBUG

For the Org3cli container, you can set this by modifying the docker-compose-org3.yaml file in the first-network directory.

Org3cli:
  container_name: Org3cli
  image: hyperledger/fabric-tools:$IMAGE_TAG
  tty: true
  stdin_open: true
  environment:
    - GOPATH=/opt/gopath
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    #- FABRIC_LOGGING_SPEC=INFO
    - FABRIC_LOGGING_SPEC=DEBUG

If you've used the eyfn.sh script, you'll need to bring your network down. This can be done by issuing:

./eyfn.sh down

This will bring the network down, delete all the containers, and undo what we've done to add Org3.

When the network is down, bring it back up again.

./byfn.sh generate

Then:

./byfn.sh up

This will bring your network back to the same state it was in before you executed the eyfn.sh script.

Now we're ready to add Org3 manually. As a first step, we'll need to generate Org3's crypto material.

Generate the Org3 Crypto Material

In another terminal, change into the org3-artifacts subdirectory from first-network.

cd org3-artifacts

There are two yaml files of interest here: org3-crypto.yaml and configtx.yaml. First, generate the crypto material for Org3:

../../bin/cryptogen generate --config=./org3-crypto.yaml

This command reads in our new crypto yaml file - org3-crypto.yaml - and leverages cryptogen to generate the keys and certificates for an Org3 CA, as well as for the two peers bound to this new Org. As with the BYFN implementation, this crypto material is put into a newly generated crypto-config folder within the present working directory (in our case, org3-artifacts).

Now use the configtxgen utility to print out the Org3-specific configuration material in JSON. Before issuing the command, we tell the tool to look in the current directory for the configtx.yaml file that it needs to ingest.

export FABRIC_CFG_PATH=$PWD && ../../bin/configtxgen -printOrg Org3MSP > ../channel-artifacts/org3.json

The above command creates a JSON file - org3.json - and outputs it into the channel-artifacts subdirectory at the root of first-network. This file contains the policy definitions for Org3, as well as three important certificates presented in base64 format: the admin user certificate (which will be needed later to act as the admin of Org3), a CA root cert, and a TLS root cert. In an upcoming step we will append this JSON file to the channel configuration.

Our final piece of housekeeping is to port the Orderer Org's MSP material into the Org3 crypto-config directory. In particular, we are concerned with the Orderer's TLS root cert, which will allow for secure communication between Org3 entities and the network's ordering node.

cd ../ && cp -r crypto-config/ordererOrganizations org3-artifacts/crypto-config/

Now we're ready to update the channel configuration…

Prepare the CLI Environment

The update process makes use of the configuration translator tool - configtxlator. This tool provides a stateless REST API independent of the SDK. Additionally it provides a CLI, to simplify configuration tasks in Fabric networks. The tool allows for the easy conversion between different equivalent data representations/formats (in this case, between protobufs and JSON). Additionally, the tool can compute a configuration update transaction based on the differences between two channel configurations.

First, exec into the CLI container. Recall that this container has been mounted with the BYFN crypto-config library, giving us access to the MSP material for the two original peer Orgs and the Orderer Org. The bootstrapped identity is the Org1 admin user, meaning that any steps where we want to act as Org2 will require the export of MSP-specific environment variables.

docker exec -it cli bash

Export the ORDERER_CA and CHANNEL_NAME variables:

export ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem  && export CHANNEL_NAME=mychannel

Check to make sure the variables have been properly set:

echo $ORDERER_CA && echo $CHANNEL_NAME

Note

If for any reason you need to restart the CLI container, you will also need to re-export the two environment variables - ORDERER_CA and CHANNEL_NAME.

Fetch the Configuration

Now we have a CLI container with our two key environment variables - ORDERER_CA and CHANNEL_NAME - exported. Let's go fetch the most recent config block for the channel - mychannel.

The reason why we have to pull the latest version of the config is because channel config elements are versioned. Versioning is important for several reasons. It prevents config changes from being repeated or replayed (for instance, reverting to a channel config with old CRLs would represent a security risk). Also it helps ensure concurrency (if you want to remove an Org from your channel, for example, after a new Org has been added, versioning will help prevent you from removing both Orgs, instead of just the one you want to remove).

peer channel fetch config config_block.pb -o orderer.example.com:7050 -c $CHANNEL_NAME --tls --cafile $ORDERER_CA

This command saves the binary protobuf channel configuration block to config_block.pb. Note that the choice of name and file extension is arbitrary. However, following a convention which identifies both the type of object being represented and its encoding (protobuf or JSON) is recommended.

When you issue the peer channel fetch command, there is a decent amount of output in the terminal. The last line in the logs is of interest:

2017-11-07 17:17:57.383 UTC [channelCmd] readBlock -> DEBU 011 Received block: 2

This is telling us that the most recent configuration block for mychannel is actually block 2, not the genesis block. By default, the peer channel fetch config command returns the most recent configuration block for the targeted channel, which in this case is the third block. This is because the BYFN script defined anchor peers for our two organizations - Org1 and Org2 - in two separate channel update transactions.

As a result, we have the following configuration sequence:

  • block 0: genesis block

  • block 1: Org1 anchor peer update

  • block 2: Org2 anchor peer update
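
As an aside, peer channel fetch can also retrieve an individual block by number, which is a handy way to inspect this sequence; a sketch (the output file name is our choice):

# fetch the genesis block (block 0) of mychannel for inspection
peer channel fetch 0 mychannel_block0.pb -o orderer.example.com:7050 -c $CHANNEL_NAME --tls --cafile $ORDERER_CA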

Convert the Configuration to JSON and Trim It Down

Now we will make use of the configtxlator tool to decode this channel configuration block into JSON format (which can be read and modified by humans). We also must strip away all of the headers, metadata, creator signatures, and so on that are irrelevant to the change we want to make. We accomplish this by means of the jq tool:

configtxlator proto_decode --input config_block.pb --type common.Block | jq .data.data[0].payload.data.config > config.json

This leaves us with a trimmed down JSON object – config.json, located in the ``first-network`` folder within ``fabric-samples`` – which will serve as the baseline for our config update.

Take a moment to open this file in your text editor of choice (or in your browser). Even after you're done with this tutorial, it is worth studying, as it reveals the underlying configuration structure and the other kinds of channel updates that can be made. We discuss them in more detail in :doc:`config_update`.
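
As a quick orientation aid, you can list the organizations currently defined in the channel's Application group (the jq path is the same one used in the merge command below):

jq '.channel_group.groups.Application.groups | keys' config.json
# expect: ["Org1MSP", "Org2MSP"]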

Add the Org3 Crypto Material

Note

The steps you've taken up to this point will be nearly identical no matter what kind of config update you're trying to make. We've chosen to add an org in this tutorial because it's one of the most complex channel configuration updates you can attempt.

We'll use the ``jq`` tool once more to append the Org3 configuration definition – org3.json – to the channel's Application groups field, and name the output ``modified_config.json``.

jq -s '.[0] * {"channel_group":{"groups":{"Application":{"groups": {"Org3MSP":.[1]}}}}}' config.json ./channel-artifacts/org3.json > modified_config.json

Now, within the CLI container, we have two JSON files of interest – ``config.json`` and ``modified_config.json``. The initial file contains only Org1 and Org2 material, whereas the "modified" file contains all three Orgs. At this point it's simply a matter of re-encoding these two JSON files and calculating the delta.

First, translate ``config.json`` back into a protobuf called ``config.pb``:

configtxlator proto_encode --input config.json --type common.Config --output config.pb

Next, encode ``modified_config.json`` to ``modified_config.pb``:

configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb

Now use ``configtxlator`` to calculate the delta between these two config protobufs. This command will output a new protobuf binary named ``org3_update.pb``:

configtxlator compute_update --channel_id $CHANNEL_NAME --original config.pb --updated modified_config.pb --output org3_update.pb

This new proto – org3_update.pb – contains the Org3 definitions and high-level pointers to the Org1 and Org2 material. We are able to forgo the extensive MSP material and modification policy information for Org1 and Org2 because this data is already present within the channel's genesis block. As such, we only need the delta between the two configurations.

Before submitting the channel update, we need to perform a few final steps. First, let's decode this object into an editable JSON format and call it ``org3_update.json``:

configtxlator proto_decode --input org3_update.pb --type common.ConfigUpdate | jq . > org3_update.json

Now we have a decoded update file – org3_update.json – that we need to wrap in an envelope message. This step will give us back the header field that we stripped away earlier. We'll name this file ``org3_update_in_envelope.json``:

echo '{"payload":{"header":{"channel_header":{"channel_id":"mychannel", "type":2}},"data":{"config_update":'$(cat org3_update.json)'}}}' | jq . > org3_update_in_envelope.json

Using our properly formed JSON – org3_update_in_envelope.json – we will leverage the ``configtxlator`` tool one last time and convert it into the fully fledged protobuf format that Fabric requires. We'll name our final update object ``org3_update_in_envelope.pb``:

configtxlator proto_encode --input org3_update_in_envelope.json --type common.Envelope --output org3_update_in_envelope.pb

Sign and Submit the Config Update

Almost done!

Now, within the CLI container, we have a protobuf binary – org3_update_in_envelope.pb. However, we need signatures from the requisite Admin users before the config can be written to the ledger. The modification policy (mod_policy) for our channel's Application group is set to the default of "MAJORITY", which means that we need a majority of existing org admins to sign it. Because we have only two orgs – Org1 and Org2 – and the majority of two is two, we need both of them to sign. Without both signatures, the ordering service will reject the transaction for failing to fulfill the policy.

First, let's sign this update proto as the Org1 Admin. Remember that the CLI container is bootstrapped with the Org1 MSP material, so we simply need to issue the ``peer channel signconfigtx`` command:

peer channel signconfigtx -f org3_update_in_envelope.pb

The final step is to switch the CLI container's identity to reflect the Org2 Admin user. We do this by exporting four environment variables specific to the Org2 MSP.

Note

Switching between organizations to sign a config transaction (or to do anything else) is not reflective of a real-world Fabric operation. A single container would never be mounted with an entire network's crypto material. Rather, the config update would need to be securely passed out-of-band to an Org2 Admin for inspection and approval.

Export the Org2 environment variables:

# you can issue all of these commands at once

export CORE_PEER_LOCALMSPID="Org2MSP"

export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt

export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp

export CORE_PEER_ADDRESS=peer0.org2.example.com:9051

Lastly, we will issue the ``peer channel update`` command. The Org2 Admin signature will be attached to this call, so there is no need to manually sign the protobuf a second time:

Note

The upcoming update call to the ordering service will undergo a series of systematic signature and policy checks. As such, you may find it useful to stream and inspect the ordering node's logs. From another shell, issue a ``docker logs -f orderer.example.com`` command to display them.

Send the update call:

peer channel update -f org3_update_in_envelope.pb -c $CHANNEL_NAME -o orderer.example.com:7050 --tls --cafile $ORDERER_CA

If your update has been submitted successfully, you should see a message digest indication similar to this:

2018-02-24 18:56:33.499 UTC [msp/identity] Sign -> DEBU 00f Sign: digest: 3207B24E40DE2FAB87A2E42BC004FEAA1E6FDCA42977CB78C64F05A88E556ABA

You will also see the submission of our configuration transaction:

2018-02-24 18:56:33.499 UTC [channelCmd] update -> INFO 010 Successfully submitted channel update

The successful channel update call returns a new block – block 5 – to all of the peers on the channel. As you may remember, blocks 0-2 are the initial channel configurations, while blocks 3 and 4 are the instantiation and invocation of the ``mycc`` chaincode. As such, block 5 serves as the most recent channel configuration, with Org3 now defined on the channel.

Inspect the logs for ``peer0.org1.example.com``:

docker logs -f peer0.org1.example.com

If you want to inspect the contents of the new config block, follow the process demonstrated above to fetch and decode it.
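
For example, a hedged sketch of that process (the config_block5 file names are illustrative):

# Fetch and decode the latest config block, which should now be block 5
peer channel fetch config config_block5.pb -o orderer.example.com:7050 -c $CHANNEL_NAME --tls --cafile $ORDERER_CA
configtxlator proto_decode --input config_block5.pb --type common.Block | jq .data.data[0].payload.data.config > config_block5.json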

Configuring Leader Election

Note

This section is included as a general reference for understanding the leader election settings when adding organizations to a network after the initial channel configuration has been completed. This sample defaults to dynamic leader election, which is set for all peers in the network in ``peer-base.yaml``.

Newly joining peers are bootstrapped with the genesis block, which does not contain information about the organization being added in the channel configuration update. Therefore, new peers are not able to utilize gossip, as they cannot verify blocks forwarded by other peers from their own organization until they receive the configuration transaction which added the organization to the channel. Newly added peers must therefore have one of the following configurations so that they receive blocks from the ordering service:

1. To utilize static leader mode, configure the peer to be an organization leader:

CORE_PEER_GOSSIP_USELEADERELECTION=false
CORE_PEER_GOSSIP_ORGLEADER=true

Note

This configuration must be the same for all new peers added to the channel.

2. To utilize dynamic leader election, configure the peer to use leader election:

CORE_PEER_GOSSIP_USELEADERELECTION=true
CORE_PEER_GOSSIP_ORGLEADER=false

Note

Because peers of the newly added organization won't be able to form a membership view, this option will be similar to the static configuration, as each peer will start proclaiming itself to be a leader. However, once they are updated with the configuration transaction that adds the organization to the channel, there will be only one active leader for the organization. Therefore, it is recommended to use this option if you eventually want the organization's peers to utilize leader election.

Join Org3 to the Channel

At this point, the channel configuration has been updated to include our new organization – Org3 – meaning that peers attached to it can now join ``mychannel``.

First, let's launch the containers for the Org3 peers and an Org3-specific CLI.

Open a new terminal and from ``first-network`` kick off the Org3 docker compose:

docker-compose -f docker-compose-org3.yaml up -d

This new compose file has been configured to bridge across our initial network, so the two peers and the CLI container will be able to resolve against the existing peers and ordering node. With the three new containers now running, exec into the Org3-specific CLI container:

docker exec -it Org3cli bash

Just as we did with the initial CLI container, export the two key environment variables – ``ORDERER_CA`` and ``CHANNEL_NAME``:

export ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem && export CHANNEL_NAME=mychannel

Check to make sure the variables have been properly set:

echo $ORDERER_CA && echo $CHANNEL_NAME

Now let's send a call to the ordering service asking for the genesis block of ``mychannel``. The ordering service is able to verify the Org3 signature attached to this call as a result of our successful channel update. If Org3 had not been successfully appended to the channel config, the ordering service would reject this request.

Note

Again, you may find it useful to stream the ordering node's logs to reveal the signature/verification logic and policy checks.

Use the ``peer channel fetch`` command to retrieve this block:

peer channel fetch 0 mychannel.block -o orderer.example.com:7050 -c $CHANNEL_NAME --tls --cafile $ORDERER_CA

Notice that we are passing a ``0`` to indicate that we want the first block on the channel's ledger (i.e. the genesis block). If we had simply passed the ``peer channel fetch config`` command, we would have received block 5 – the updated config with Org3 defined. However, we can't begin our ledger with a downstream block – we must start with block 0.

Issue the ``peer channel join`` command and pass in the genesis block – mychannel.block:

peer channel join -b mychannel.block

If you want to join the second peer for Org3, export the ``TLS`` and ``ADDRESS`` variables and reissue the ``peer channel join`` command:

export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org3.example.com/peers/peer1.org3.example.com/tls/ca.crt && export CORE_PEER_ADDRESS=peer1.org3.example.com:12051

peer channel join -b mychannel.block

Install, define, and invoke chaincode

Once you have joined the channel, you can package and install a chaincode on the Org3 peers. The chaincode definition then needs to be approved as Org3. Since the chaincode definition has already been committed to the channel you joined, you can start using the chaincode after you approve the definition.

Note

These instructions use the Fabric chaincode lifecycle introduced in the v2.0 Alpha release. If you would like to use the previous lifecycle to install and instantiate a chaincode, visit the v1.4 version of the Adding an Org to a Channel tutorial: https://hyperledger-fabric.readthedocs.io/en/release-1.4/channel_update_tutorial.html

The first step is to package the chaincode from the Org3 CLI:

peer lifecycle chaincode package mycc.tar.gz --path github.com/hyperledger/fabric-samples/chaincode/abstore/go/ --lang golang --label mycc_1

This command will create a chaincode package named ``mycc.tar.gz``, which we can use to install the chaincode on our peers. The command requires a chaincode package label to describe the chaincode. If the channel is running a chaincode written in Java or Node.js, modify the command accordingly. Issue the following command to install the package on peer0 of Org3:

# this command installs a chaincode package on your peer
peer lifecycle chaincode install mycc.tar.gz

You can also modify the environment variables and reissue the command if you want to install the chaincode on the second peer of Org3 (see the sketch below). Note that a second installation is not mandated, as you only need to install chaincode on peers that are going to serve as endorsers or otherwise interface with the ledger (i.e. query only). Peers will still run the validation logic and serve as committers without a running chaincode container.
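
For example, a hedged sketch targeting Org3's second peer (the address and TLS path mirror the join step earlier in this section):

# Point the CLI at peer1.org3 and install the same package
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org3.example.com/peers/peer1.org3.example.com/tls/ca.crt
export CORE_PEER_ADDRESS=peer1.org3.example.com:12051
peer lifecycle chaincode install mycc.tar.gz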

The next step is to approve the chaincode definition of ``mycc`` as Org3. Org3 needs to approve the same definition that Org1 and Org2 approved and committed to the channel. The chaincode definition also needs to include the chaincode package identifier. You can find the package identifier by querying your peer:

# this returns the details of the packages installed on your peers
peer lifecycle chaincode queryinstalled

You should see output similar to the following:

Get installed chaincodes on peer:
Package ID: mycc_1:3a8c52d70c36313cfebbaf09d8616e7a6318ababa01c7cbe40603c373bcfe173, Label: mycc_1

We are going to need the package ID in a future command, so let's go ahead and save it as an environment variable. Paste the package ID returned by ``peer lifecycle chaincode queryinstalled`` into the command below. The package ID may not be the same for all users, so complete this step using the package ID returned from your console.

# Save the package ID as an environment variable.

CC_PACKAGE_ID=mycc_1:3a8c52d70c36313cfebbaf09d8616e7a6318ababa01c7cbe40603c373bcfe173

Use the following command to approve a definition of the ``mycc`` chaincode for Org3:

# this approves a chaincode definition for your org
# use the --package-id flag to provide the package identifier
# use the --init-required flag to request the ``Init`` function be invoked to initialize the chaincode
peer lifecycle chaincode approveformyorg --channelID $CHANNEL_NAME --name mycc --version 1.0 --init-required --package-id $CC_PACKAGE_ID --sequence 1 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --waitForEvent

You can use the ``peer lifecycle chaincode querycommitted`` command to check whether the chaincode definition you have approved has already been committed to the channel.

# use the --name flag to select the chaincode whose definition you want to query
peer lifecycle chaincode querycommitted --channelID $CHANNEL_NAME --name mycc --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem

A successful command will return information about the committed definition:

Committed chaincode definition for chaincode 'mycc' on channel 'mychannel':
Version: 1, Sequence: 1, Endorsement Plugin: escc, Validation Plugin: vscc

Since the chaincode definition has already been committed, you can use the ``mycc`` chaincode after you approve the definition. The chaincode definition uses the default endorsement policy, which requires a majority of the organizations on the channel to endorse a transaction. This implies that if an organization is added to or removed from the channel, the endorsement policy is updated automatically. We previously needed endorsements from Org1 and Org2 (2 out of 2); now we need endorsements from two out of Org1, Org2, and Org3 (2 out of 3).

Query the chaincode to make sure that it started. Note that you may need to wait a moment for the chaincode container to start.

peer chaincode query -C $CHANNEL_NAME -n mycc -c '{"Args":["query","a"]}'

We should see a response of ``Query Result: 90``.

Now issue an invocation to transfer ``10`` from ``a`` to ``b``. In the command below, we target peers in Org1 and Org3 to collect a sufficient number of endorsements.

peer chaincode invoke -o orderer.example.com:7050  --tls $CORE_PEER_TLS_ENABLED --cafile $ORDERER_CA -C $CHANNEL_NAME -n mycc -c '{"Args":["invoke","a","b","10"]}' --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org3.example.com:11051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org3.example.com/peers/peer0.org3.example.com/tls/ca.crt

Query one final time:

peer chaincode query -C $CHANNEL_NAME -n mycc -c '{"Args":["query","a"]}'

We should see a response of ``Query Result: 80``, accurately reflecting the update of this chaincode's world state.

Conclusion

The channel configuration update process is indeed quite involved, but there is a logical method to the various steps. The endgame is to form a delta transaction object represented in protobuf binary format and then acquire the requisite number of admin signatures so that the channel configuration update transaction fulfills the channel's modification policy.

The ``configtxlator`` and ``jq`` tools, along with the ever-growing ``peer channel`` commands, provide us with the functionality to accomplish this task.

Upgrading Your Network Components

Note

When we use the term “upgrade” in this documentation, we’re primarily referring to changing the version of a component (for example, going from a v1.3 binary to a v1.4.x binary). The term “update,” on the other hand, refers not to versions but to configuration changes, such as updating a channel configuration or a deployment script. As there is no data migration, technically speaking, in Fabric, we will not use the term “migration” or “migrate” here.

Note

Upgrading from Fabric v1.4 to the v2.0 Alpha release is not supported. This tutorial will be updated after the 2.0 Alpha release.

Overview

Because the Building Your First Network (BYFN) tutorial defaults to the "latest" binaries, if you have run it since the release of v1.4.x, your machine will have v1.4.x binaries and tools installed on it and you will not be able to upgrade them.

As a result, this tutorial will provide a network based on Hyperledger Fabric v1.3 binaries as well as the v1.4.x binaries you will be upgrading to.

At a high level, our upgrade tutorial will perform the following steps:

  1. Backup the ledger and MSPs.

  2. Upgrade the orderer binaries to Fabric v1.4.x. Because migration from Solo to Raft is not supported, and the 1.4.1 release of Fabric is the first to support Raft, this tutorial will not cover the process for upgrading to a Raft ordering service.

  3. Upgrade the peer binaries to Fabric v1.4.x.

Note

There are no new capability requirements in v1.4.x. As a result, we do not have to update any channel configurations as part of an upgrade to v1.4.x.

This tutorial will demonstrate how to perform each of these steps individually with CLI commands. We will also describe how the CLI tools image can be updated.

Note

Because BYFN uses a “Solo” ordering service (one orderer), our script brings down the entire network. However, in production environments, the orderers and peers can be upgraded simultaneously and on a rolling basis. In other words, you can upgrade the binaries in any order without bringing down the network.

Because BYFN is not compatible with the following components, our script for upgrading BYFN will not cover them:

  • Fabric CA

  • Kafka

  • CouchDB

  • SDK

The process for upgrading these components — if necessary — will be covered in a section following the tutorial. We will also show how to upgrade the Node chaincode shim.

From an operational perspective, it’s worth noting that the process for gathering logs has changed in v1.4, from CORE_LOGGING_LEVEL (for the peer) and ORDERER_GENERAL_LOGLEVEL (for the orderer) to FABRIC_LOGGING_SPEC (the new operations service). For more information, check out the Fabric release notes.
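
In practice, that means swapping the old logging variables for the new one; a minimal sketch:

# v1.3-style logging variables (superseded in v1.4):
#   CORE_LOGGING_LEVEL=INFO          (peer)
#   ORDERER_GENERAL_LOGLEVEL=INFO    (orderer)
# v1.4-style equivalent for either component:
export FABRIC_LOGGING_SPEC=INFO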

Prerequisites

If you haven’t already done so, ensure you have all of the dependencies on your machine as described in Prerequisites.

Launch a v1.3 network

Before we can upgrade to v1.4, we must first provision a network running Fabric v1.3 images.

Just as in the BYFN tutorial, we will be operating from the first-network subdirectory within your local clone of fabric-samples. Change into that directory now. You will also want to open a few extra terminals for ease of use.

Clean up

We want to operate from a known state, so we will use the byfn.sh script to kill any active or stale docker containers and remove any previously generated artifacts. Run:

./byfn.sh down

Generate the crypto and bring up the network

With a clean environment, launch our v1.3 BYFN network using these four commands:

git fetch origin

git checkout v1.3.0

./byfn.sh generate

./byfn.sh up -t 3000 -i 1.3.0

Note

If you have locally built v1.3 images, they will be used by the example. If you get errors, please consider cleaning up your locally built v1.3 images and running the example again. This will download v1.3 images from docker hub.

If BYFN has launched properly, you will see:

===================== All GOOD, BYFN execution completed =====================

We are now ready to upgrade our network to Hyperledger Fabric v1.4.x.

Get the newest samples

Note

The instructions below pertain to whatever is the most recently published version of v1.4.x. Please substitute 1.4.x with the version identifier of the published release that you are testing. In other words, replace ‘1.4.x’ with ‘1.4.0’ if you are testing the first release.

Before completing the rest of the tutorial, it's important to get the v1.4.x (for example, 1.4.1) version of the samples. You can do this by issuing:

git fetch origin

git checkout v1.4.x

Want to upgrade now?

We have a script that will upgrade all of the components in BYFN as well as enable any capabilities (note, no new capabilities are required for v1.4). If you are running a production network, or are an administrator of some part of a network, this script can serve as a template for performing your own upgrades.

Afterwards, we will walk you through the steps in the script and describe what each piece of code is doing in the upgrade process.

To run the script, issue these commands:

# Note, replace '1.4.x' with a specific version, for example '1.4.1'.
# Don't pass the image flag '-i 1.4.x' if you prefer to default to 'latest' images.

./byfn.sh upgrade -i 1.4.x

If the upgrade is successful, you should see the following:

===================== All GOOD, End-2-End UPGRADE Scenario execution completed =====================

If you want to upgrade the network manually, simply run ./byfn.sh down again and perform the steps up to — but not including — ./byfn.sh upgrade -i 1.4.x. Then proceed to the next section.

Note

Many of the commands you’ll run in this section will not result in any output. In general, assume no output is good output.

Upgrade the orderer containers

Orderer containers should be upgraded in a rolling fashion (one at a time). At a high level, the orderer upgrade process goes as follows:

  1. Stop the orderer.

  2. Back up the orderer’s ledger and MSP.

  3. Restart the orderer with the latest images.

  4. Verify upgrade completion.

Because BYFN uses a Solo orderer setup, we will only perform this process once. In a Kafka setup, however, this process will have to be repeated on each orderer.

Note

This tutorial uses a docker deployment. For native deployments, replace the orderer binary with the one from the release artifacts. Back up the orderer.yaml and replace it with the orderer.yaml file from the release artifacts. Then port any modified variables from the backed-up orderer.yaml to the new one. Utilizing a utility like diff may be helpful.
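
A hedged sketch of that native-deployment flow (the release-artifacts path is illustrative):

# Back up the current config, drop in the release version, then port your edits
cp orderer.yaml orderer.yaml.bak
cp /path/to/release-artifacts/orderer.yaml .
diff orderer.yaml.bak orderer.yaml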

Let’s begin the upgrade process by bringing down the orderer:

docker stop orderer.example.com

export LEDGERS_BACKUP=./ledgers-backup

# Note, replace '1.4.x' with a specific version, for example '1.4.1'.
# Set IMAGE_TAG to 'latest' if you prefer to default to the images tagged 'latest' on your system.

export IMAGE_TAG=$(go env GOARCH)-1.4.x

We have created a variable for a directory to put file backups into, and exported the IMAGE_TAG we’d like to move to.

Once the orderer is down, you’ll want to backup its ledger and MSP:

mkdir -p $LEDGERS_BACKUP

docker cp orderer.example.com:/var/hyperledger/production/orderer/ ./$LEDGERS_BACKUP/orderer.example.com

In a production network this process would be repeated for each of the Kafka-based orderers in a rolling fashion.

Now download and restart the orderer with our new fabric image:

docker-compose -f docker-compose-cli.yaml up -d --no-deps orderer.example.com

Because our sample uses a “Solo” ordering service, there are no other orderers in the network that the restarted orderer must sync up to. However, in a production network leveraging Kafka, it will be a best practice to issue peer channel fetch <blocknumber> after restarting the orderer to verify that it has caught up to the other orderers.
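A hedged sketch of that verification, assuming a CLI container with ORDERER_CA set (as shown later in this tutorial):

# Fetch the newest block to confirm the restarted orderer serves the current chain height
peer channel fetch newest newest.block -o orderer.example.com:7050 -c mychannel --tls --cafile $ORDERER_CA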

Upgrade the peer containers

Next, let’s look at how to upgrade peer containers to Fabric v1.4.x. Peer containers should, like the orderers, be upgraded in a rolling fashion (one at a time). As mentioned during the orderer upgrade, orderers and peers may be upgraded in parallel, but for the purposes of this tutorial we’ve separated the processes out. At a high level, we will perform the following steps:

  1. Stop the peer.

  2. Back up the peer’s ledger and MSP.

  3. Remove chaincode containers and images.

  4. Restart the peer with latest image.

  5. Verify upgrade completion.

We have four peers running in our network. We will perform this process once for each peer, totaling four upgrades.

Note

Again, this tutorial utilizes a docker deployment. For native deployments, replace the peer binary with the one from the release artifacts. Back up your core.yaml and replace it with the one from the release artifacts. Port any modified variables from the backed-up core.yaml to the new one. Utilizing a utility like diff may be helpful.

Let’s bring down the first peer with the following command:

export PEER=peer0.org1.example.com

docker stop $PEER

We can then backup the peer’s ledger and MSP:

mkdir -p $LEDGERS_BACKUP

docker cp $PEER:/var/hyperledger/production ./$LEDGERS_BACKUP/$PEER

With the peer stopped and the ledger backed up, remove the peer chaincode containers:

CC_CONTAINERS=$(docker ps | grep dev-$PEER | awk '{print $1}')
if [ -n "$CC_CONTAINERS" ] ; then docker rm -f $CC_CONTAINERS ; fi

And the peer chaincode images:

CC_IMAGES=$(docker images | grep dev-$PEER | awk '{print $1}')
if [ -n "$CC_IMAGES" ] ; then docker rmi -f $CC_IMAGES ; fi

Now we’ll re-launch the peer using the v1.4.x image tag:

docker-compose -f docker-compose-cli.yaml up -d --no-deps $PEER

Note

Although BYFN supports using CouchDB, we opted for a simpler implementation in this tutorial. If you are using CouchDB, however, issue this command instead of the one above:

docker-compose -f docker-compose-cli.yaml -f docker-compose-couch.yaml up -d --no-deps $PEER

Note

You do not need to relaunch the chaincode container. When the peer gets a request for a chaincode (invoke or query), it first checks if it has a copy of that chaincode running. If so, it uses it. Otherwise, as in this case, the peer launches the chaincode (rebuilding the image if required).

Verify peer upgrade completion

We’ve completed the upgrade for our first peer, but before we move on let’s check to ensure the upgrade has been completed properly with a chaincode invoke.

Note

Before you attempt this, you may want to upgrade peers from enough organizations to satisfy your endorsement policy. However, this is only mandatory if you are updating your chaincode as part of the upgrade process. If you are not updating your chaincode as part of the upgrade process, it is possible to get endorsements from peers running different Fabric versions.

Before we get into the CLI container and issue the invoke, make sure the CLI is updated to the most current version by issuing:

docker-compose -f docker-compose-cli.yaml stop cli

docker-compose -f docker-compose-cli.yaml up -d --no-deps cli

If you specifically want the v1.3 version of the CLI, issue:

IMAGE_TAG=$(go env GOARCH)-1.3.x docker-compose -f docker-compose-cli.yaml up -d --no-deps cli

Once you have the version of the CLI you want, get into the CLI container:

docker exec -it cli bash

Now you’ll need to set two environment variables — the name of the channel and the name of the ORDERER_CA:

CH_NAME=mychannel

ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem

Now you can issue the invoke:

peer chaincode invoke -o orderer.example.com:7050 --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:9051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt --tls --cafile $ORDERER_CA  -C $CH_NAME -n mycc -c '{"Args":["invoke","a","b","10"]}'

Our query earlier revealed a to have a value of 90 and we have just removed 10 with our invoke. Therefore, a query against a should reveal 80. Let’s see:

peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'

We should see the following:

Query Result: 80

After verifying the peer was upgraded correctly, make sure to issue an exit to leave the container before continuing to upgrade your remaining peers. You can do this by repeating the process above with a different peer name exported, for example:

export PEER=peer1.org1.example.com
export PEER=peer0.org2.example.com
export PEER=peer1.org2.example.com
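
If you prefer to script the remaining upgrades, the steps above collapse into a loop; a minimal sketch, assuming the same compose file, IMAGE_TAG, and backup directory used so far:

# Repeat stop / backup / chaincode cleanup / relaunch for each remaining peer
for PEER in peer1.org1.example.com peer0.org2.example.com peer1.org2.example.com; do
  docker stop $PEER
  mkdir -p $LEDGERS_BACKUP
  docker cp $PEER:/var/hyperledger/production ./$LEDGERS_BACKUP/$PEER
  CC_CONTAINERS=$(docker ps | grep dev-$PEER | awk '{print $1}')
  if [ -n "$CC_CONTAINERS" ] ; then docker rm -f $CC_CONTAINERS ; fi
  CC_IMAGES=$(docker images | grep dev-$PEER | awk '{print $1}')
  if [ -n "$CC_IMAGES" ] ; then docker rmi -f $CC_IMAGES ; fi
  docker-compose -f docker-compose-cli.yaml up -d --no-deps $PEER
done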

Upgrading components BYFN does not support

Although this is the end of our update tutorial, there are other components that exist in production networks that are not compatible with the BYFN sample. In this section, we’ll talk through the process of updating them.

Fabric CA container

To learn how to upgrade your Fabric CA server, click over to the CA documentation.

Upgrade Node SDK clients

Note

Upgrade Fabric and Fabric CA before upgrading Node SDK clients. Fabric and Fabric CA are tested for backwards compatibility with older SDK clients. While newer SDK clients often work with older Fabric and Fabric CA releases, they may expose features that are not yet available in the older Fabric and Fabric CA releases, and are not tested for full compatibility.

Use NPM to upgrade any Node.js client by executing these commands in the root directory of your application:

npm install fabric-client@latest

npm install fabric-ca-client@latest

These commands install the new version of both the Fabric client and Fabric-CA client and write the new versions to package.json.
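
To confirm what was written, a hedged check:

# Verify the client versions now recorded in package.json
npm ls fabric-client fabric-ca-client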

Upgrading the Kafka cluster

It is not required, but it is recommended that the Kafka cluster be upgraded and kept up to date along with the rest of Fabric. Newer versions of Kafka support older protocol versions, so you may upgrade Kafka before or after the rest of Fabric.

If you followed the Upgrading Your Network to v1.3 tutorial, your Kafka cluster should be at v1.0.0. If it isn’t, refer to the official Apache Kafka documentation on upgrading Kafka from previous versions to upgrade the Kafka cluster brokers.

Upgrading Zookeeper

An Apache Kafka cluster requires an Apache Zookeeper cluster. The Zookeeper API has been stable for a long time and, as such, almost any version of Zookeeper is tolerated by Kafka. Refer to the Apache Kafka upgrade documentation in case there is a specific requirement to upgrade to a specific version of Zookeeper. If you would like to upgrade your Zookeeper cluster, some information on upgrading Zookeeper cluster can be found in the Zookeeper FAQ.

Upgrading CouchDB

If you are using CouchDB as state database, you should upgrade the peer’s CouchDB at the same time the peer is being upgraded. CouchDB v2.2.0 has been tested with Fabric v1.4.x.

To upgrade CouchDB:

  1. Stop CouchDB.

  2. Backup CouchDB data directory.

  3. Install CouchDB v2.2.0 binaries or update deployment scripts to use a new Docker image (CouchDB v2.2.0 pre-configured Docker image is provided alongside Fabric v1.4).

  4. Restart CouchDB.

Upgrade Node chaincode shim

To move to the new version of the Node chaincode shim a developer would need to:

  1. Change the level of fabric-shim in their chaincode package.json from 1.3 to 1.4.x (see the sketch after this list).

  2. Repackage this new chaincode package and install it on all the endorsing peers in the channel.

  3. Perform an upgrade to this new chaincode. To see how to do this, check out peer chaincode.
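
A hedged sketch of step 1 (npm rewrites the dependency range in package.json for you; substitute the specific 1.4.x release you are targeting):

npm install fabric-shim@1.4.x --save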

Note

This flow isn’t specific to moving from 1.3 to 1.4.x It is also how one would upgrade from any incremental version of the node fabric shim.

Upgrade Chaincodes with vendored shim

Note

The v1.3.0 shim is compatible with the v1.4.x peer, but it is still best practice to upgrade the chaincode shim to match the current level of the peer.

A number of third party tools exist that will allow you to vendor a chaincode shim. If you used one of these tools, use the same one to update your vendoring and re-package your chaincode.

If your chaincode vendors the shim, after updating the shim version, you must install it to all peers which already have the chaincode. Install it with the same name, but a newer version. Then you should execute a chaincode upgrade on each channel where this chaincode has been deployed to move to the new version.

If you did not vendor your chaincode, you can skip this step entirely.

Using Private Data in Fabric

This tutorial will demonstrate the use of collections to provide storage and retrieval of private data on the blockchain network for authorized peers of organizations.

The information in this tutorial assumes knowledge of private data stores and their use cases. For more information, check out Private data.

Note

These instructions use the new Fabric chaincode lifecycle introduced in the Fabric v2.0 Alpha release. If you would like to use the previous lifecycle model to use private data with chaincode, visit the v1.4 version of the Using Private Data in Fabric tutorial.

The tutorial will take you through the following steps to practice defining, configuring and using private data with Fabric:

  1. Build a collection definition JSON file

  2. Read and Write private data using chaincode APIs

  3. Install and define a chaincode with a collection

  4. Store private data

  5. Query the private data as an authorized peer

  6. Query the private data as an unauthorized peer

  7. Purge Private Data

  8. Using indexes with private data

  9. Additional resources

This tutorial will use the marbles private data sample – running on the Building Your First Network (BYFN) tutorial network – to demonstrate how to create, deploy, and use a collection of private data. You should have completed the task Install Samples, Binaries and Docker Images; however, running the BYFN tutorial is not a prerequisite for this tutorial. Instead, the necessary commands are provided throughout this tutorial for using the network. We will describe what is happening at each step, making it possible to understand the tutorial without actually running the sample.

Build a collection definition JSON file

The first step in privatizing data on a channel is to build a collection definition which defines access to the private data.

The collection definition describes who can persist data, how many peers the data is distributed to, how many peers are required to disseminate the private data, and how long the private data is persisted in the private database. Later, we will demonstrate how chaincode APIs PutPrivateData and GetPrivateData are used to map the collection to the private data being secured.

A collection definition is composed of the following properties:

  • name: Name of the collection.

  • policy: Defines the organization peers allowed to persist the collection data.

  • requiredPeerCount: Number of peers required to disseminate the private data as a condition of the endorsement of the chaincode.

  • maxPeerCount: For data redundancy purposes, the number of other peers that the current endorsing peer will attempt to distribute the data to. If an endorsing peer goes down, these other peers are available at commit time if there are requests to pull the private data.

  • blockToLive: For very sensitive information such as pricing or personal information, this value represents how long the data should live on the private database in terms of blocks. The data will live for this specified number of blocks on the private database and after that it will get purged, making this data obsolete from the network. To keep private data indefinitely, that is, to never purge private data, set the blockToLive property to 0.

  • memberOnlyRead: a value of true indicates that peers automatically enforce that only clients belonging to one of the collection member organizations are allowed read access to private data.

To illustrate usage of private data, the marbles private data example contains two private data collection definitions: collectionMarbles and collectionMarblePrivateDetails. The policy property in the collectionMarbles definition allows all members of the channel (Org1 and Org2) to have the private data in a private database. The collectionMarblePrivateDetails collection allows only members of Org1 to have the private data in their private database.

For more information on building a policy definition refer to the Endorsement policies topic.

// collections_config.json

[
  {
       "name": "collectionMarbles",
       "policy": "OR('Org1MSP.member', 'Org2MSP.member')",
       "requiredPeerCount": 0,
       "maxPeerCount": 3,
       "blockToLive":1000000,
       "memberOnlyRead": true
  },

  {
       "name": "collectionMarblePrivateDetails",
       "policy": "OR('Org1MSP.member')",
       "requiredPeerCount": 0,
       "maxPeerCount": 3,
       "blockToLive":3,
       "memberOnlyRead": true
  }
]

The data to be secured by these policies is mapped in chaincode and will be shown later in the tutorial.

This collection definition file is deployed when the chaincode definition is committed to the channel using the peer lifecycle chaincode commit command. More details on this process are provided in Section 3 below.

Read and Write private data using chaincode APIs

The next step in understanding how to privatize data on a channel is to build the data definition in the chaincode. The marbles private data sample divides the private data into two separate data definitions according to how the data will be accessed.

// Peers in Org1 and Org2 will have this private data in a side database
type marble struct {
  ObjectType string `json:"docType"`
  Name       string `json:"name"`
  Color      string `json:"color"`
  Size       int    `json:"size"`
  Owner      string `json:"owner"`
}

// Only peers in Org1 will have this private data in a side database
type marblePrivateDetails struct {
  ObjectType string `json:"docType"`
  Name       string `json:"name"`
  Price      int    `json:"price"`
}

Specifically, access to the private data will be restricted as follows:

  • name, color, size, and owner will be visible to all members of the channel (Org1 and Org2)

  • price will only be visible to members of Org1

Thus two different sets of private data are defined in the marbles private data sample. The mapping of this data to the collection policy which restricts its access is controlled by chaincode APIs. Specifically, reading and writing private data using a collection definition is performed by calling GetPrivateData() and PutPrivateData(), which can be found here.

The following diagrams illustrate the private data model used by the marbles private data sample.

_images/SideDB-org1.png _images/SideDB-org2.png

Reading collection data

Use the chaincode API GetPrivateData() to query private data in the database. GetPrivateData() takes two arguments, the collection name and the data key. Recall the collection collectionMarbles allows members of Org1 and Org2 to have the private data in a side database, and the collection collectionMarblePrivateDetails allows only members of Org1 to have the private data in a side database. For implementation details refer to the following two marbles private data functions:

  • readMarble for querying the values of the name, color, size and owner attributes

  • readMarblePrivateDetails for querying the values of the price attribute

When we issue the database queries using the peer commands later in this tutorial, we will call these two functions.

Writing private data

Use the chaincode API PutPrivateData() to store the private data into the private database. The API also requires the name of the collection. Since the marbles private data sample includes two different collections, it is called twice in the chaincode:

  1. Write the private data name, color, size and owner using the collection named collectionMarbles.

  2. Write the private data price using the collection named collectionMarblePrivateDetails.

For example, in the following snippet of the initMarble function, PutPrivateData() is called twice, once for each set of private data.

// ==== Create marble object, marshal to JSON, and save to state ====
      marble := &marble{
              ObjectType: "marble",
              Name:       marbleInput.Name,
              Color:      marbleInput.Color,
              Size:       marbleInput.Size,
              Owner:      marbleInput.Owner,
      }
      marbleJSONasBytes, err := json.Marshal(marble)
      if err != nil {
              return shim.Error(err.Error())
      }

      // === Save marble to state ===
      err = stub.PutPrivateData("collectionMarbles", marbleInput.Name, marbleJSONasBytes)
      if err != nil {
              return shim.Error(err.Error())
      }

      // ==== Create marble private details object with price, marshal to JSON, and save to state ====
      marblePrivateDetails := &marblePrivateDetails{
              ObjectType: "marblePrivateDetails",
              Name:       marbleInput.Name,
              Price:      marbleInput.Price,
      }
      marblePrivateDetailsBytes, err := json.Marshal(marblePrivateDetails)
      if err != nil {
              return shim.Error(err.Error())
      }
      err = stub.PutPrivateData("collectionMarblePrivateDetails", marbleInput.Name, marblePrivateDetailsBytes)
      if err != nil {
              return shim.Error(err.Error())
      }

To summarize, the policy definition above for our collection.json allows all peers in Org1 and Org2 to store and transact with the marbles private data name, color, size, owner in their private database. But only peers in Org1 can store and transact with the price private data in its private database.

As an additional data privacy benefit, since a collection is being used, only the hashes of the private data go through the orderer, not the private data itself, keeping the private data confidential from the orderer.

Start the network

Now we are ready to step through some commands which demonstrate how to use private data.

Try it yourself

Before installing, defining, and using the marbles private data chaincode below, we need to start the BYFN network. For the sake of this tutorial, we want to operate from a known initial state. The following command will kill any active or stale docker containers and remove previously generated artifacts. Therefore let’s run the following command to clean up any previous environments:

cd fabric-samples/first-network
./byfn.sh down

If you’ve already run through this tutorial, you’ll also want to delete the underlying docker containers for the marbles private data chaincode. Let’s run the following commands to clean up previous environments:

docker rm -f $(docker ps -a | awk '($2 ~ /dev-peer.*.marblesp.*/) {print $1}')
docker rmi -f $(docker images | awk '($1 ~ /dev-peer.*.marblesp.*/) {print $3}')

Start up the BYFN network with CouchDB by running the following command:

./byfn.sh up -c mychannel -s couchdb

This will create a simple Fabric network consisting of a single channel named mychannel with two organizations (each maintaining two peer nodes) and an ordering service while using CouchDB as the state database. Either LevelDB or CouchDB may be used with collections. CouchDB was chosen to demonstrate how to use indexes with private data.

Note

For collections to work, it is important to have cross-organizational gossip configured correctly. Refer to our documentation on the Gossip data dissemination protocol, paying particular attention to the section on "anchor peers". Our tutorial does not focus on gossip, given that it is already configured in the BYFN sample, but when configuring a channel, the gossip anchor peers are critical to configure for collections to work properly.

Install and define a chaincode with a collection

Client applications interact with the blockchain ledger through chaincode. Therefore we need to install a chaincode on every peer that will execute and endorse our transactions. However, before we can interact with our chaincode, the members of the channel need to agree on a chaincode definition that establishes chaincode governance, including the private data collection configuration. We are going to package, install, and then define the chaincode on the channel using peer lifecycle chaincode.

Install chaincode on all peers

The chaincode needs to be packaged before it can be installed on our peers. We can use the peer lifecycle chaincode package command to package the marbles chaincode.

The BYFN network includes two organizations, Org1 and Org2, with two peers each. Therefore, the chaincode package has to be installed on four peers:

  • peer0.org1.example.com

  • peer1.org1.example.com

  • peer0.org2.example.com

  • peer1.org2.example.com

After the chaincode is packaged, we can use the peer lifecycle chaincode install command to install the Marbles chaincode on each peer.

Try it yourself

Assuming you have started the BYFN network, enter the CLI container.

docker exec -it cli bash

Your command prompt will change to something similar to:

bash-4.4#

  1. Use the following command to package the Marbles private data chaincode from the git repository inside your local container.

    peer lifecycle chaincode package marblesp.tar.gz --path github.com/hyperledger/fabric-samples/chaincode/marbles02_private/go/ --lang golang --label marblespv1
    

    This command will create a chaincode package named marblesp.tar.gz.

  2. Use the following command to install the chaincode package onto the peer peer0.org1.example.com in your BYFN network. By default, after starting the BYFN network, the active peer is set to: CORE_PEER_ADDRESS=peer0.org1.example.com:7051:

    peer lifecycle chaincode install marblesp.tar.gz
    

    A successful install command will return the chaincode identifier, similar to the response below:

    2019-03-13 13:48:53.691 UTC [cli.lifecycle.chaincode] submitInstallProposal -> INFO 001 Installed remotely: response:<status:200 payload:"\nEmycc:ebd89878c2bbccf62f68c36072626359376aa83c36435a058d453e8dbfd894cc" >
    2019-03-13 13:48:53.691 UTC [cli.lifecycle.chaincode] submitInstallProposal -> INFO 002 Chaincode code package identifier: mycc:ebd89878c2bbccf62f68c36072626359376aa83c36435a058d453e8dbfd894cc
    
  3. Use the CLI to switch the active peer to the second peer in Org1 and install the chaincode. Copy and paste the following entire block of commands into the CLI container and run them:

    export CORE_PEER_ADDRESS=peer1.org1.example.com:8051
    peer lifecycle chaincode install marblesp.tar.gz
    
  4. Use the CLI to switch to Org2. Copy and paste the following block of commands as a group into the peer container and run them all at once:

    export CORE_PEER_LOCALMSPID=Org2MSP
    export PEER0_ORG2_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
    export CORE_PEER_TLS_ROOTCERT_FILE=$PEER0_ORG2_CA
    export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
    
  5. Switch the active peer to the first peer in Org2 and install the chaincode:

    export CORE_PEER_ADDRESS=peer0.org2.example.com:9051
    peer lifecycle chaincode install marblesp.tar.gz
    
  6. Switch the active peer to the second peer in org2 and install the chaincode:

    export CORE_PEER_ADDRESS=peer1.org2.example.com:10051
    peer lifecycle chaincode install marblesp.tar.gz
    
Approve the chaincode definition

Each channel member that wants to use the chaincode needs to approve a chaincode definition for their organization. Since both organizations are going to use the chaincode in this tutorial, we need to approve the chaincode definition for both Org1 and Org2.

The chaincode definition includes the package identifier that was returned by the install command. This package ID is used to associate the chaincode package installed on your peers with the chaincode definition approved by your organization. We can also use the peer lifecycle chaincode queryinstalled command to find the package ID of marblesp.tar.gz.

Once we have the package ID, we can then use the peer lifecycle chaincode approveformyorg command to approve a definition of the marbles chaincode for Org1 and Org2. To approve the private data collection definition that accompanies the marbles02_private sample, provide the path to the collections JSON file using the --collections-config flag.

Try it yourself

Run the following commands inside the CLI container to approve a definition for Org1 and Org2.

  1. Use the following command to query your peer for the package ID of the installed chaincode.

    peer lifecycle chaincode queryinstalled
    

The command will return the same package identifier as the install command. You should see output similar to the following:

Get installed chaincodes on peer:
Package ID: marblespv1:57f5353b2568b79cb5384b5a8458519a47186efc4fcadb98280f5eae6d59c1cd, Label: marblespv1
Package ID: mycc_1:27ef99cb3cbd1b545063f018f3670eddc0d54f40b2660b8f853ad2854c49a0d8, Label: mycc_1

  2. Declare the package ID as an environment variable. Paste the package ID of marblespv1 returned by the peer lifecycle chaincode queryinstalled into the command below. The package ID may not be the same for all users, so you need to complete this step using the package ID returned from your console.

    export CC_PACKAGE_ID=marblespv1:57f5353b2568b79cb5384b5a8458519a47186efc4fcadb98280f5eae6d59c1cd
    
  3. Make sure we are running the CLI as Org1. Copy and paste the following block of commands as a group into the peer container and run them all at once:

    export CORE_PEER_ADDRESS=peer0.org1.example.com:7051
    export CORE_PEER_LOCALMSPID=Org1MSP
    export PEER0_ORG1_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
    export CORE_PEER_TLS_ROOTCERT_FILE=$PEER0_ORG1_CA
    export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
    
  4. Use the following command to approve a definition of the Marbles private data chaincode for Org1. This command includes a path to the collection definition file. The approval is distributed within each organization using gossip, so the command does not need to target every peer within an organization.

    export ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
    peer lifecycle chaincode approveformyorg --channelID mychannel --name marblesp --version 1.0 --collections-config $GOPATH/src/github.com/hyperledger/fabric-samples/chaincode/marbles02_private/collections_config.json --signature-policy "OR('Org1MSP.member','Org2MSP.member')" --init-required --package-id $CC_PACKAGE_ID --sequence 1 --tls true --cafile $ORDERER_CA --waitForEvent
    

    When the command completes successfully you should see something similar to:

    2019-03-18 16:04:09.046 UTC [cli.lifecycle.chaincode] InitCmdFactory -> INFO 001 Retrieved channel (mychannel) orderer endpoint: orderer.example.com:7050
    2019-03-18 16:04:11.253 UTC [chaincodeCmd] ClientWait -> INFO 002 txid [efba188ca77889cc1c328fc98e0bb12d3ad0abcda3f84da3714471c7c1e6c13c] committed with status (VALID) at
    
  5. Use the CLI to switch to Org2. Copy and paste the following block of commands as a group into the peer container and run them all at once.

    export CORE_PEER_ADDRESS=peer0.org2.example.com:9051
    export CORE_PEER_LOCALMSPID=Org2MSP
    export PEER0_ORG2_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
    export CORE_PEER_TLS_ROOTCERT_FILE=$PEER0_ORG2_CA
    export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
    
  6. You can now approve the chaincode definition for Org2:

    peer lifecycle chaincode approveformyorg --channelID mychannel --name marblesp --version 1.0 --collections-config $GOPATH/src/github.com/hyperledger/fabric-samples/chaincode/marbles02_private/collections_config.json --signature-policy "OR('Org1MSP.member','Org2MSP.member')" --init-required --package-id $CC_PACKAGE_ID --sequence 1 --tls true --cafile $ORDERER_CA --waitForEvent
    
Commit the chaincode definition

Once a sufficient number of organizations (in this case, a majority) have approved a chaincode definition, one organization can commit the definition to the channel.

Use the peer lifecycle chaincode commit command to commit the chaincode definition. This command needs to target the peers in Org1 and Org2 to collect endorsements for the commit transaction. The peers will endorse the transaction only if their organizations have approved the chaincode definition. This command will also deploy the collection definition to the channel.

We are ready to use the chaincode after the chaincode definition has been committed to the channel. Because the marbles private data chaincode requires initialization, we need to use the peer chaincode invoke command to invoke Init() before we can use other functions in the chaincode.

Try it yourself

  1. Run the following commands to commit the definition of the marbles private data chaincode to the BYFN channel mychannel.

export ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
export ORG1_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
export ORG2_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
peer lifecycle chaincode commit -o orderer.example.com:7050 --channelID mychannel --name marblesp --version 1.0 --sequence 1 --collections-config $GOPATH/src/github.com/hyperledger/fabric-samples/chaincode/marbles02_private/collections_config.json --signature-policy "OR('Org1MSP.member','Org2MSP.member')" --init-required --tls true --cafile $ORDERER_CA --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles $ORG1_CA --peerAddresses peer0.org2.example.com:9051 --tlsRootCertFiles $ORG2_CA --waitForEvent

Note

When specifying the value of the --collections-config flag, you will need to specify the fully qualified path to the collections_config.json file. For example:

--collections-config  $GOPATH/src/github.com/hyperledger/fabric-samples/chaincode/marbles02_private/collections_config.json

When the commit transaction completes successfully you should see something similar to:

[chaincodeCmd] checkChaincodeCmdParams -> INFO 001 Using default escc
[chaincodeCmd] checkChaincodeCmdParams -> INFO 002 Using default vscc

  2. Use the following command to invoke the Init function to initialize the chaincode:

    peer chaincode invoke -o orderer.example.com:7050 --channelID mychannel --name marblesp --isInit --tls true --cafile $ORDERER_CA --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles $ORG1_CA -c '{"Args":["Init"]}'
    

Store private data

Acting as a member of Org1, who is authorized to transact with all of the private data in the marbles private data sample, switch back to an Org1 peer and submit a request to add a marble:

Try it yourself

Copy and paste the following set of commands to the CLI command line.

export CORE_PEER_ADDRESS=peer0.org1.example.com:7051
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
export PEER0_ORG1_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt

Invoke the marbles initMarble function, which creates a marble with private data – name marble1 owned by tom with a color blue, size 35 and price of 99. Recall that the private data price will be stored separately from the private data name, owner, color, size. For this reason, the initMarble function calls the PutPrivateData() API twice to persist the private data, once for each collection. Also note that the private data is passed using the --transient flag. Inputs passed as transient data will not be persisted in the transaction in order to keep the data private. Transient data is passed as binary data, and therefore when using the CLI it must be base64 encoded. We use an environment variable to capture the base64 encoded value, and use the tr command to strip off the problematic newline characters that the linux base64 command adds.

export MARBLE=$(echo -n "{\"name\":\"marble1\",\"color\":\"blue\",\"size\":35,\"owner\":\"tom\",\"price\":99}" | base64 | tr -d \\n)
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n marblesp -c '{"Args":["initMarble"]}'  --transient "{\"marble\":\"$MARBLE\"}"

You should see results similar to:

[chaincodeCmd] chaincodeInvokeOrQuery->INFO 001 Chaincode invoke successful. result: status:200
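
If the invoke fails with a payload error, you can decode the transient value locally to confirm it is well-formed JSON (a hedged sketch; GNU base64 syntax shown):

# Decode the base64-encoded transient payload for inspection
echo $MARBLE | base64 --decode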

Query the private data as an authorized peer

Our collection definition allows all members of Org1 and Org2 to have the name, color, size, owner private data in their side database, but only peers in Org1 can have the price private data in their side database. As an authorized peer in Org1, we will query both sets of private data.

The first query command calls the readMarble function which passes collectionMarbles as an argument.

// ===============================================
// readMarble - read a marble from chaincode state
// ===============================================

func (t *SimpleChaincode) readMarble(stub shim.ChaincodeStubInterface, args []string) pb.Response {
     var name, jsonResp string
     var err error
     if len(args) != 1 {
             return shim.Error("Incorrect number of arguments. Expecting name of the marble to query")
     }

     name = args[0]
     valAsbytes, err := stub.GetPrivateData("collectionMarbles", name) //get the marble from chaincode state

     if err != nil {
             jsonResp = "{\"Error\":\"Failed to get state for " + name + "\"}"
             return shim.Error(jsonResp)
     } else if valAsbytes == nil {
             jsonResp = "{\"Error\":\"Marble does not exist: " + name + "\"}"
             return shim.Error(jsonResp)
     }

     return shim.Success(valAsbytes)
}

The second query command calls the readMarblePrivateDetails function which passes collectionMarblePrivateDetails as an argument.

// ===============================================
// readMarblePrivateDetails - read a marble private details from chaincode state
// ===============================================

func (t *SimpleChaincode) readMarblePrivateDetails(stub shim.ChaincodeStubInterface, args []string) pb.Response {
     var name, jsonResp string
     var err error

     if len(args) != 1 {
             return shim.Error("Incorrect number of arguments. Expecting name of the marble to query")
     }

     name = args[0]
     valAsbytes, err := stub.GetPrivateData("collectionMarblePrivateDetails", name) //get the marble private details from chaincode state

     if err != nil {
             jsonResp = "{\"Error\":\"Failed to get private details for " + name + ": " + err.Error() + "\"}"
             return shim.Error(jsonResp)
     } else if valAsbytes == nil {
             jsonResp = "{\"Error\":\"Marble private details does not exist: " + name + "\"}"
             return shim.Error(jsonResp)
     }
     return shim.Success(valAsbytes)
}

Try it yourself

Query for the name, color, size and owner private data of marble1 as a member of Org1. Note that since queries do not get recorded on the ledger, there is no need to pass the marble name as a transient input.

peer chaincode query -C mychannel -n marblesp -c '{"Args":["readMarble","marble1"]}'

You should see the following result:

{"color":"blue","docType":"marble","name":"marble1","owner":"tom","size":35}

Query for the price private data of marble1 as a member of Org1.

peer chaincode query -C mychannel -n marblesp -c '{"Args":["readMarblePrivateDetails","marble1"]}'

You should see the following result:

{"docType":"marblePrivateDetails","name":"marble1","price":99}

Query the private data as an unauthorized peer

Now we will switch to a member of Org2 which has the marbles private data name, color, size, owner in its side database, but does not have the marbles price private data in its side database. We will query for both sets of private data.

Switch to a peer in Org2

From inside the docker container, run the following commands to switch to the peer which is unauthorized to access the marbles price private data.

Try it yourself

export CORE_PEER_ADDRESS=peer0.org2.example.com:9051
export CORE_PEER_LOCALMSPID=Org2MSP
export PEER0_ORG2_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
export CORE_PEER_TLS_ROOTCERT_FILE=$PEER0_ORG2_CA
export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp

Query private data Org2 is authorized to

Peers in Org2 should have the first set of marbles private data (name, color, size and owner) in their side database and can access it using the readMarble() function which is called with the collectionMarbles argument.

Try it yourself

peer chaincode query -C mychannel -n marblesp -c '{"Args":["readMarble","marble1"]}'

You should see something similar to the following result:

{"docType":"marble","name":"marble1","color":"blue","size":35,"owner":"tom"}
Query private data Org2 is not authorized to

Peers in Org2 do not have the marbles price private data in their side database. When they try to query for this data, they get back a hash of the key matching the public state but will not have the private state.

Try it yourself

peer chaincode query -C mychannel -n marblesp -c '{"Args":["readMarblePrivateDetails","marble1"]}'

You should see a result similar to:

{"Error":"Failed to get private details for marble1: GET_STATE failed:
transaction ID: b04adebbf165ddc90b4ab897171e1daa7d360079ac18e65fa15d84ddfebfae90:
Private data matching public hash version is not available. Public hash
version = &version.Height{BlockNum:0x6, TxNum:0x0}, Private data version =
(*version.Height)(nil)"}

Members of Org2 will only be able to see the public hash of the private data.

Purge Private Data

For use cases where private data only needs to be on the ledger until it can be replicated into an off-chain database, it is possible to “purge” the data after a certain number of blocks, leaving behind only a hash of the data that serves as immutable evidence of the transaction.

There may be private data including personal or confidential information, such as the pricing data in our example, that the transacting parties don’t want disclosed to other organizations on the channel. Thus, it has a limited lifespan, and can be purged after existing unchanged on the blockchain for a designated number of blocks using the blockToLive property in the collection definition.

Our collectionMarblePrivateDetails definition has a blockToLive property value of three, meaning this data will live on the side database for three blocks, after which it will be purged. Tying all of the pieces together, recall that this collection definition, collectionMarblePrivateDetails, is associated with the price private data in the initMarble() function when it calls the PutPrivateData() API and passes collectionMarblePrivateDetails as an argument.
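As a reminder of the shape of that call, here is a minimal Go sketch. The putPrice helper and its inline struct are illustrative stand-ins for what the sample’s initMarble() does with the transient price input; only the PutPrivateData() call and the collection name come from the sample.

package main

import (
	"encoding/json"

	"github.com/hyperledger/fabric/core/chaincode/shim"
	pb "github.com/hyperledger/fabric/protos/peer"
)

// putPrice is a hypothetical helper showing the PutPrivateData call that
// initMarble() makes for the price field. Writing through
// collectionMarblePrivateDetails makes the value subject to that
// collection's blockToLive policy.
func putPrice(stub shim.ChaincodeStubInterface, name string, price int) pb.Response {
	details := struct {
		ObjectType string `json:"docType"`
		Name       string `json:"name"`
		Price      int    `json:"price"`
	}{"marblePrivateDetails", name, price}

	detailsBytes, err := json.Marshal(details)
	if err != nil {
		return shim.Error(err.Error())
	}
	// After the collection's blockToLive (three blocks here) elapses without
	// an update, the peer purges this value, leaving only its hash.
	if err := stub.PutPrivateData("collectionMarblePrivateDetails", name, detailsBytes); err != nil {
		return shim.Error(err.Error())
	}
	return shim.Success(nil)
}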

We will step through adding blocks to the chain, and then watch the price information get purged by issuing four new transactions (Create a new marble, followed by three marble transfers) which adds four new blocks to the chain. After the fourth transaction (third marble transfer), we will verify that the price private data is purged.

Try it yourself

Switch back to peer0 in Org1 using the following commands. Copy and paste the following code block and run it inside your peer container:

export CORE_PEER_ADDRESS=peer0.org1.example.com:7051
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
export PEER0_ORG1_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt

Open a new terminal window and view the private data logs for this peer by running the following command:

docker logs peer0.org1.example.com 2>&1 | grep -i -a -E 'private|pvt|privdata'

You should see results similar to the following. Note the highest block number in the list. In the example below, the highest block height is 4.

[pvtdatastorage] func1 -> INFO 023 Purger started: Purging expired private data till block number [0]
[pvtdatastorage] func1 -> INFO 024 Purger finished
[kvledger] CommitWithPvtData -> INFO 022 Channel [mychannel]: Committed block [0] with 1 transaction(s)
[kvledger] CommitWithPvtData -> INFO 02e Channel [mychannel]: Committed block [1] with 1 transaction(s)
[kvledger] CommitWithPvtData -> INFO 030 Channel [mychannel]: Committed block [2] with 1 transaction(s)
[kvledger] CommitWithPvtData -> INFO 036 Channel [mychannel]: Committed block [3] with 1 transaction(s)
[kvledger] CommitWithPvtData -> INFO 03e Channel [mychannel]: Committed block [4] with 1 transaction(s)

Back in the peer container, query for the marble1 price data by running the following command. (A Query does not create a new transaction on the ledger since no data is transacted).

peer chaincode query -C mychannel -n marblesp -c '{"Args":["readMarblePrivateDetails","marble1"]}'

You should see results similar to:

{"docType":"marblePrivateDetails","name":"marble1","price":99}

The price data is still in the private data ledger.

Create a new marble2 by issuing the following command. This transaction creates a new block on the chain.

export MARBLE=$(echo -n "{\"name\":\"marble2\",\"color\":\"blue\",\"size\":35,\"owner\":\"tom\",\"price\":99}" | base64 | tr -d \\n)
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n marblesp -c '{"Args":["initMarble"]}' --transient "{\"marble\":\"$MARBLE\"}"

Switch back to the Terminal window and view the private data logs for this peer again. You should see the block height increase by 1.

docker logs peer0.org1.example.com 2>&1 | grep -i -a -E 'private|pvt|privdata'

Back in the peer container, query for the marble1 price data again by running the following command:

peer chaincode query -C mychannel -n marblesp -c '{"Args":["readMarblePrivateDetails","marble1"]}'

The private data has not been purged, therefore the results are unchanged from previous query:

{"docType":"marblePrivateDetails","name":"marble1","price":99}

Transfer marble2 to “joe” by running the following command. This transaction will add a second new block on the chain.

export MARBLE_OWNER=$(echo -n "{\"name\":\"marble2\",\"owner\":\"joe\"}" | base64 | tr -d \\n)
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n marblesp -c '{"Args":["transferMarble"]}' --transient "{\"marble_owner\":\"$MARBLE_OWNER\"}"

Switch back to the Terminal window and view the private data logs for this peer again. You should see the block height increase by 1.

docker logs peer0.org1.example.com 2>&1 | grep -i -a -E 'private|pvt|privdata'

Back in the peer container, query for the marble1 price data by running the following command:

peer chaincode query -C mychannel -n marblesp -c '{"Args":["readMarblePrivateDetails","marble1"]}'

You should still be able to see the price private data.

{"docType":"marblePrivateDetails","name":"marble1","price":99}

Transfer marble2 to “tom” by running the following command. This transaction will create a third new block on the chain.

export MARBLE_OWNER=$(echo -n "{\"name\":\"marble2\",\"owner\":\"tom\"}" | base64 | tr -d \\n)
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n marblesp -c '{"Args":["transferMarble"]}' --transient "{\"marble_owner\":\"$MARBLE_OWNER\"}"

Switch back to the Terminal window and view the private data logs for this peer again. You should see the block height increase by 1.

docker logs peer0.org1.example.com 2>&1 | grep -i -a -E 'private|pvt|privdata'

Back in the peer container, query for the marble1 price data by running the following command:

peer chaincode query -C mychannel -n marblesp -c '{"Args":["readMarblePrivateDetails","marble1"]}'

You should still be able to see the price data.

{"docType":"marblePrivateDetails","name":"marble1","price":99}

Finally, transfer marble2 to “jerry” by running the following command. This transaction will create a fourth new block on the chain. The price private data should be purged after this transaction.

export MARBLE_OWNER=$(echo -n "{\"name\":\"marble2\",\"owner\":\"jerry\"}" | base64 | tr -d \\n)
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n marblesp -c '{"Args":["transferMarble"]}' --transient "{\"marble_owner\":\"$MARBLE_OWNER\"}"

Switch back to the Terminal window and view the private data logs for this peer again. You should see the block height increase by 1.

docker logs peer0.org1.example.com 2>&1 | grep -i -a -E 'private|pvt|privdata'

Back in the peer container, query for the marble1 price data by running the following command:

peer chaincode query -C mychannel -n marblesp -c '{"Args":["readMarblePrivateDetails","marble1"]}'

Because the price data has been purged, you should no longer be able to see it. You should see something similar to:

Error: endorsement failure during query. response: status:500
message:"{\"Error\":\"Marble private details does not exist: marble1\"}"

Using indexes with private data

Indexes can also be applied to private data collections by packaging the indexes in the META-INF/statedb/couchdb/collections/<collection_name>/indexes directory alongside the chaincode. An example index is available here.

For deployment of chaincode to production environments, it is recommended to define any indexes alongside chaincode so that the chaincode and supporting indexes are deployed automatically as a unit, once the chaincode has been installed on a peer and instantiated on a channel. The associated indexes are automatically deployed upon chaincode instantiation on the channel when the --collections-config flag is specified pointing to the location of the collection JSON file.

Additional resources

For additional private data education, a video tutorial has been created.

Note

The video uses the previous lifecycle model to install private data collections with chaincode.





Chaincode Tutorials

What is Chaincode?

Chaincode is a program, written in Go, Node.js, or Java, that implements a prescribed interface. Chaincode runs in a secured Docker container isolated from the endorsing peer process. Chaincode initializes and manages the ledger state through transactions submitted by applications.

A chaincode typically handles business logic agreed to by members of the network, so it may be considered a “smart contract”. State created by a chaincode is scoped exclusively to that chaincode and cannot be accessed directly by another chaincode. However, within the same network, given the appropriate permission, a chaincode may invoke another chaincode to access its state.

Two Personas

We offer two different perspectives on chaincode. One is from the perspective of an application developer developing a blockchain application/solution, entitled Chaincode for Developers, and the other, Chaincode for Operators, is oriented to the blockchain network operator who is responsible for managing a blockchain network, and who would leverage the Hyperledger Fabric API to install and manage chaincode but would likely not be involved in the development of a chaincode application.

Fabric Chaincode Lifecycle

The Fabric chaincode lifecycle is responsible for managing the installation of chaincodes and the definition of their parameters before a chaincode is used on a channel. Starting with the Fabric 2.0 Alpha, governance for chaincodes is fully decentralized: multiple organizations can use the Fabric chaincode lifecycle to agree on the parameters of a chaincode, such as the chaincode endorsement policy, before it can be used to interact with the ledger.

The new model offers several improvements over the previous lifecycle:

  • Multiple organizations must agree to the parameters of a chaincode: In the release 1.x versions of Fabric, one organization had the ability to set parameters of a chaincode (for instance the endorsement policy) for all other channel members. The new Fabric chaincode lifecycle is more flexible since it supports both centralized trust models (such as that of the previous lifecycle model) as well as decentralized models that require a sufficient number of organizations to agree on an endorsement policy before it goes into effect.

  • Safer chaincode upgrade process: In the previous chaincode lifecycle, the upgrade transaction could be issued by a single organization, creating a risk for channel members that had not yet installed the new chaincode. The new model allows a chaincode to be upgraded only after a sufficient number of organizations have approved the upgrade.

  • Easier endorsement policy updates: The Fabric lifecycle allows you to change an endorsement policy without having to repackage or reinstall the chaincode. Users can also take advantage of a new default policy that requires endorsement from a majority of members on the channel. This policy is updated automatically when organizations join or leave the channel.

  • Inspectable chaincode packages: The Fabric lifecycle packages chaincode in easily readable tar files. This makes it easier to inspect the chaincode package and coordinate installation across multiple organizations.

  • Start multiple chaincodes on a channel using one package: The previous lifecycle defined each chaincode on the channel using the name and version specified when the chaincode package was installed. You can now use a single chaincode package and deploy it multiple times with different names on the same channel or on different channels.

To learn more about the new Fabric lifecycle, visit Chaincode for Operators.

Note

The new Fabric chaincode lifecycle in the v2.0 Alpha release is not yet feature complete. Specifically, be aware of the following limitations in the Alpha release:

  • CouchDB indexes are not yet supported

  • Chaincodes defined with the new lifecycle are not yet discoverable via service discovery

These limitations will be resolved after the Alpha release. To use the old lifecycle model to install and instantiate a chaincode, visit the v1.4 version of the Chaincode for Operators tutorial (https://hyperledger-fabric.readthedocs.io/en/release-1.4/chaincode4noah.html).

You can use the Fabric chaincode lifecycle by creating a new channel and setting the channel capabilities to V2_0. You will not be able to use the old lifecycle to install, instantiate, or update a chaincode on a channel with V2_0 capabilities enabled. However, you will still be able to invoke chaincode installed using the previous lifecycle model after you enable V2_0 capabilities. Migration from the previous lifecycle to the new lifecycle is not supported in the Fabric v2.0 Alpha.

Chaincode for Developers

What is Chaincode?

Chaincode is a program, written in Go, Node.js, or Java, that implements a prescribed interface. Chaincode runs in a secured Docker container isolated from the endorsing peer process. Chaincode initializes and manages the ledger state through transactions submitted by applications.

A chaincode typically handles business logic agreed to by members of the network, so it is similar to a “smart contract”. A chaincode can be invoked to update or query the ledger in a proposal transaction. Given the appropriate permission, a chaincode may invoke another chaincode, either in the same channel or in different channels, to access its state. Note that if the called chaincode is on a different channel from the calling chaincode, only read queries are allowed. That is, a called chaincode on a different channel is only a query, which does not participate in the state validation checks of the subsequent commit phase.
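As an illustration, here is a minimal Go sketch of such a chaincode-to-chaincode call. The chaincode name othercc and its get function are assumptions for the example; stub.InvokeChaincode() itself is the shim API.

package main

import (
	"github.com/hyperledger/fabric/core/chaincode/shim"
	pb "github.com/hyperledger/fabric/protos/peer"
)

// readFromOtherChaincode is a hypothetical helper illustrating a
// chaincode-to-chaincode call. If channel differs from the caller's
// channel, only read queries are permitted and the result does not
// take part in the commit-phase state validation.
func readFromOtherChaincode(stub shim.ChaincodeStubInterface, key, channel string) pb.Response {
	args := [][]byte{[]byte("get"), []byte(key)}
	// InvokeChaincode(chaincodeName, args, channel); an empty channel
	// string means "the caller's own channel".
	resp := stub.InvokeChaincode("othercc", args, channel)
	if resp.Status != shim.OK {
		return shim.Error("cross-chaincode query failed: " + resp.Message)
	}
	return shim.Success(resp.Payload)
}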

In the following sections, we will explore chaincode through the eyes of an application developer. We’ll present a simple chaincode sample application and walk through the purpose of each method in the Chaincode Shim API.

Chaincode API

Every chaincode program must implement the Chaincode interface, whose methods are called in response to received transactions. You can find the reference documentation of the Chaincode Shim API for different languages below:

In each language, the Invoke method is called by clients to submit transaction proposals. This method allows you to use the chaincode to read and write data on the channel ledger.

You also need to include an Init method that will serve as the initialization function for your chaincode. This method will be called to initialize the chaincode when it is started or upgraded. By default, this function is never executed. However, you can use the chaincode definition to request that the Init function be executed. If execution of Init is requested, Fabric will ensure that Init is invoked before any other function and is only invoked once. This option provides you additional control over which users can initialize the chaincode and the ability to add initial data to the ledger. If you are using the peer CLI to approve the chaincode definition, use the --init-required flag to request the execution of the Init function. Then call the Init function using the peer chaincode invoke command and passing the --isInit flag. If you are using the Fabric SDK for Node.js, visit How to install and start your chaincode (https://fabric-sdk-node.github.io/master/tutorial-chaincode-lifecycle.html). For more information, see Chaincode for Operators.

The other interface in the chaincode “shim” APIs is the ChaincodeStubInterface:

which is used to access and modify the ledger, and to make invocations between chaincodes.

In this tutorial using Go chaincode, we will demonstrate the use of these APIs by implementing a simple chaincode application that manages simple “assets”.

Simple Asset Chaincode

Our application is a basic sample chaincode to create assets (key-value pairs) on the ledger.

Choosing a Location for the Code

If you haven’t been doing programming in Go, you may want to make sure that you have the Go Programming Language installed and your system properly configured.

Now, you will want to create a directory for your chaincode application as a child directory of $GOPATH/src/.

To keep things simple, let’s use the following command:

mkdir -p $GOPATH/src/sacc && cd $GOPATH/src/sacc

Now, let’s create the source file that we’ll fill in with code:

touch sacc.go

Housekeeping

First, let’s start with some housekeeping. As with every chaincode, it implements the Chaincode interface, in particular, the Init and Invoke functions. So, let’s add the Go import statements for the necessary dependencies for our chaincode. We’ll import the chaincode shim package and the peer protobuf package (https://godoc.org/github.com/hyperledger/fabric/protos/peer). Next, let’s add a struct SimpleAsset as a receiver for Chaincode shim functions.

package main

import (
    "fmt"

    "github.com/hyperledger/fabric/core/chaincode/shim"
    "github.com/hyperledger/fabric/protos/peer"
)

// SimpleAsset implements a simple chaincode to manage an asset
type SimpleAsset struct {
}
Initializing the Chaincode

Next, we’ll implement the Init function.

// Init is called during chaincode instantiation to initialize any data.
func (t *SimpleAsset) Init(stub shim.ChaincodeStubInterface) peer.Response {

}

Note

Note that chaincode upgrade also calls this function. When writing a chaincode that will upgrade an existing one, make sure to modify the Init function appropriately. In particular, provide an empty Init method if there’s no “migration” or nothing to be initialized as part of the upgrade.

Next, we’ll retrieve the arguments to the Init call using the ChaincodeStubInterface.GetStringArgs function and check for validity. In our case, we are expecting a key-value pair.

// Init is called during chaincode instantiation to initialize any
// data. Note that chaincode upgrade also calls this function to reset
// or to migrate data, so be careful to avoid a scenario where you
// inadvertently clobber your ledger's data!
func (t *SimpleAsset) Init(stub shim.ChaincodeStubInterface) peer.Response {
  // Get the args from the transaction proposal
  args := stub.GetStringArgs()
  if len(args) != 2 {
    return shim.Error("Incorrect arguments. Expecting a key and a value")
  }
}

Next, now that we have established that the call is valid, we’ll store the initial state in the ledger. To do this, we will call ChaincodeStubInterface.PutState (https://godoc.org/github.com/hyperledger/fabric/core/chaincode/shim#ChaincodeStub.PutState) with the key and value passed in as the arguments. Assuming all went well, we return a peer.Response object that indicates the initialization was a success.

// Init is called during chaincode instantiation to initialize any
// data. Note that chaincode upgrade also calls this function to reset
// or to migrate data, so be careful to avoid a scenario where you
// inadvertently clobber your ledger's data!
func (t *SimpleAsset) Init(stub shim.ChaincodeStubInterface) peer.Response {
  // Get the args from the transaction proposal
  args := stub.GetStringArgs()
  if len(args) != 2 {
    return shim.Error("Incorrect arguments. Expecting a key and a value")
  }

  // Set up any variables or assets here by calling stub.PutState()

  // We store the key and the value on the ledger
  err := stub.PutState(args[0], []byte(args[1]))
  if err != nil {
    return shim.Error(fmt.Sprintf("Failed to create asset: %s", args[0]))
  }
  return shim.Success(nil)
}
Invoking the Chaincode

First, let’s add the Invoke function’s signature.

// Invoke is called per transaction on the chaincode. Each transaction is
// either a 'get' or a 'set' on the asset created by Init function. The 'set'
// method may create a new asset by specifying a new key-value pair.
func (t *SimpleAsset) Invoke(stub shim.ChaincodeStubInterface) peer.Response {

}

As with the Init function above, we need to extract the arguments from the ChaincodeStubInterface. The Invoke function’s arguments will be the name of the chaincode application function to invoke. In our case, our application will simply have two functions: set and get, which allow the value of an asset to be set or its current state to be retrieved. We first call ChaincodeStubInterface.GetFunctionAndParameters (https://godoc.org/github.com/hyperledger/fabric/core/chaincode/shim#ChaincodeStub.GetFunctionAndParameters) to extract the function name and the parameters to that chaincode application function.

// Invoke is called per transaction on the chaincode. Each transaction is
// either a 'get' or a 'set' on the asset created by Init function. The Set
// method may create a new asset by specifying a new key-value pair.
func (t *SimpleAsset) Invoke(stub shim.ChaincodeStubInterface) peer.Response {
    // Extract the function and args from the transaction proposal
    fn, args := stub.GetFunctionAndParameters()

}

Next, we’ll validate the function name as being either set or get, and invoke those chaincode application functions, returning an appropriate response via the shim.Success or shim.Error functions, which serialize the response into a gRPC protobuf message.

// Invoke is called per transaction on the chaincode. Each transaction is
// either a 'get' or a 'set' on the asset created by Init function. The Set
// method may create a new asset by specifying a new key-value pair.
func (t *SimpleAsset) Invoke(stub shim.ChaincodeStubInterface) peer.Response {
    // Extract the function and args from the transaction proposal
    fn, args := stub.GetFunctionAndParameters()

    var result string
    var err error
    if fn == "set" {
            result, err = set(stub, args)
    } else {
            result, err = get(stub, args)
    }
    if err != nil {
            return shim.Error(err.Error())
    }

    // Return the result as success payload
    return shim.Success([]byte(result))
}
Implementing the Chaincode Application

As noted, our chaincode application implements two functions that can be invoked via the Invoke function. Let’s implement those functions now. Note that, as we mentioned above, to access the ledger’s state we will leverage the ChaincodeStubInterface.PutState and ChaincodeStubInterface.GetState functions of the chaincode shim API.

// Set stores the asset (both key and value) on the ledger. If the key exists,
// it will override the value with the new one
func set(stub shim.ChaincodeStubInterface, args []string) (string, error) {
    if len(args) != 2 {
            return "", fmt.Errorf("Incorrect arguments. Expecting a key and a value")
    }

    err := stub.PutState(args[0], []byte(args[1]))
    if err != nil {
            return "", fmt.Errorf("Failed to set asset: %s", args[0])
    }
    return args[1], nil
}

// Get returns the value of the specified asset key
func get(stub shim.ChaincodeStubInterface, args []string) (string, error) {
    if len(args) != 1 {
            return "", fmt.Errorf("Incorrect arguments. Expecting a key")
    }

    value, err := stub.GetState(args[0])
    if err != nil {
            return "", fmt.Errorf("Failed to get asset: %s with error: %s", args[0], err)
    }
    if value == nil {
            return "", fmt.Errorf("Asset not found: %s", args[0])
    }
    return string(value), nil
}
Pulling it All Together

Finally, we need to add the main function, which will call the shim.Start (https://godoc.org/github.com/hyperledger/fabric/core/chaincode/shim#Start) function. Here’s the whole chaincode program source.

package main

import (
    "fmt"

    "github.com/hyperledger/fabric/core/chaincode/shim"
    "github.com/hyperledger/fabric/protos/peer"
)

// SimpleAsset implements a simple chaincode to manage an asset
type SimpleAsset struct {
}

// Init is called during chaincode instantiation to initialize any
// data. Note that chaincode upgrade also calls this function to reset
// or to migrate data.
func (t *SimpleAsset) Init(stub shim.ChaincodeStubInterface) peer.Response {
    // Get the args from the transaction proposal
    args := stub.GetStringArgs()
    if len(args) != 2 {
            return shim.Error("Incorrect arguments. Expecting a key and a value")
    }

    // Set up any variables or assets here by calling stub.PutState()

    // We store the key and the value on the ledger
    err := stub.PutState(args[0], []byte(args[1]))
    if err != nil {
            return shim.Error(fmt.Sprintf("Failed to create asset: %s", args[0]))
    }
    return shim.Success(nil)
}

// Invoke is called per transaction on the chaincode. Each transaction is
// either a 'get' or a 'set' on the asset created by Init function. The Set
// method may create a new asset by specifying a new key-value pair.
func (t *SimpleAsset) Invoke(stub shim.ChaincodeStubInterface) peer.Response {
    // Extract the function and args from the transaction proposal
    fn, args := stub.GetFunctionAndParameters()

    var result string
    var err error
    if fn == "set" {
            result, err = set(stub, args)
    } else { // assume 'get' even if fn is nil
            result, err = get(stub, args)
    }
    if err != nil {
            return shim.Error(err.Error())
    }

    // Return the result as success payload
    return shim.Success([]byte(result))
}

// Set stores the asset (both key and value) on the ledger. If the key exists,
// it will override the value with the new one
func set(stub shim.ChaincodeStubInterface, args []string) (string, error) {
    if len(args) != 2 {
            return "", fmt.Errorf("Incorrect arguments. Expecting a key and a value")
    }

    err := stub.PutState(args[0], []byte(args[1]))
    if err != nil {
            return "", fmt.Errorf("Failed to set asset: %s", args[0])
    }
    return args[1], nil
}

// Get returns the value of the specified asset key
func get(stub shim.ChaincodeStubInterface, args []string) (string, error) {
    if len(args) != 1 {
            return "", fmt.Errorf("Incorrect arguments. Expecting a key")
    }

    value, err := stub.GetState(args[0])
    if err != nil {
            return "", fmt.Errorf("Failed to get asset: %s with error: %s", args[0], err)
    }
    if value == nil {
            return "", fmt.Errorf("Asset not found: %s", args[0])
    }
    return string(value), nil
}

// main function starts up the chaincode in the container during instantiate
func main() {
    if err := shim.Start(new(SimpleAsset)); err != nil {
            fmt.Printf("Error starting SimpleAsset chaincode: %s", err)
    }
}
Building Chaincode

Now let’s compile your chaincode.

go get -u github.com/hyperledger/fabric/core/chaincode/shim
go build

Assuming there are no errors, now we can proceed to the next step, testing your chaincode.

Testing Using Dev Mode

Normally chaincodes are started and maintained by the peer. However, in “dev mode”, chaincode is built and started by the user. This mode is useful during the chaincode development phase for a rapid code/build/run/debug cycle.

We start “dev mode” by leveraging a sample dev network with pre-generated orderer and channel artifacts. As such, the user can immediately jump into the process of compiling chaincode and driving calls.

Install Hyperledger Fabric Samples

If you haven’t already done so, please Install Samples, Binaries and Docker Images.

Navigate to the chaincode-docker-devmode directory of the fabric-samples clone:

cd chaincode-docker-devmode

Now open three terminals and navigate to your chaincode-docker-devmode directory in each.

Terminal 1 - Start the network

docker-compose -f docker-compose-simple.yaml up

The above starts the network with the SingleSampleMSPSolo orderer profile and launches the peer in “dev mode”. It also launches two additional containers: one for the chaincode environment and a CLI to interact with the chaincode. The commands for creating and joining a channel are embedded in the CLI container, so we can jump immediately to the chaincode calls.

Terminal 2 - Build & start the chaincode

docker exec -it chaincode bash

You should see the following:

root@d2629980e76b:/opt/gopath/src/chaincode#

Now, compile your chaincode:

cd sacc
go build

Now run the chaincode:

CORE_PEER_ADDRESS=peer:7052 CORE_CHAINCODE_ID_NAME=mycc:0 ./sacc

The chaincode is started with the peer, and the chaincode logs indicate successful registration with the peer. Note that at this stage the chaincode is not associated with any channel. This is done in subsequent steps using the instantiate command.

Terminal 3 - Use the chaincode

Even though you are in --peer-chaincodedev mode, you still have to install the chaincode so the life-cycle system chaincode can go through its checks normally. This requirement may be removed in the future for --peer-chaincodedev mode.

We’ll leverage the CLI container to drive these calls.

docker exec -it cli bash
peer chaincode install -p chaincodedev/chaincode/sacc -n mycc -v 0
peer chaincode instantiate -n mycc -v 0 -c '{"Args":["a","10"]}' -C myc

Now issue an invoke to change the value of “a” to “20”.

peer chaincode invoke -n mycc -c '{"Args":["set", "a", "20"]}' -C myc

Finally, query a. We should see a value of 20.

peer chaincode query -n mycc -c '{"Args":["query","a"]}' -C myc

Testing new chaincodes

By default, we mount only sacc. However, you can easily test different chaincodes by adding them to the chaincode subdirectory and relaunching your network. At this point they will be accessible in your chaincode container.

Chaincode access control

Chaincode can utilize the client (submitter) certificate for access control decisions by calling the GetCreator() function. Additionally, the Go shim provides extension APIs that extract the client identity from the submitter’s certificate, which can be used for access control decisions based on the client identity itself, the org identity, or a client identity attribute.

For example, an asset represented as a key/value pair may include the client’s identity as part of the value (for example as a JSON attribute indicating the asset owner), and only this client may be authorized to make updates to the key/value in the future. The client identity library extension APIs can be used within chaincode to retrieve this submitter information to make such access control decisions.
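A minimal Go sketch of that pattern follows. The updateOwnedAsset handler and the key+"~owner" record are hypothetical conventions for the example; cid.GetID() is the client identity extension API.

package main

import (
	"github.com/hyperledger/fabric/core/chaincode/shim"
	"github.com/hyperledger/fabric/core/chaincode/shim/ext/cid"
	pb "github.com/hyperledger/fabric/protos/peer"
)

// updateOwnedAsset is a hypothetical handler: it only lets the client
// whose identity was recorded as the asset owner update the value.
func updateOwnedAsset(stub shim.ChaincodeStubInterface, key string, newVal []byte) pb.Response {
	submitter, err := cid.GetID(stub) // unique ID derived from the client cert
	if err != nil {
		return shim.Error(err.Error())
	}
	owner, err := stub.GetState(key + "~owner") // hypothetical owner record
	if err != nil || owner == nil {
		return shim.Error("owner record not found")
	}
	if string(owner) != submitter {
		return shim.Error("submitter is not the asset owner")
	}
	if err := stub.PutState(key, newVal); err != nil {
		return shim.Error(err.Error())
	}
	return shim.Success(nil)
}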

See the client identity (CID) library documentation (https://github.com/hyperledger/fabric/blob/master/core/chaincode/shim/ext/cid/README.md) for more details.

To add the client identity shim extension as a dependency, see Managing external dependencies for chaincode written in Go.

Chaincode encryption

In certain scenarios, it may be useful to encrypt values associated with a key in their entirety or simply in part. For example, if a person’s social security number or address was being written to the ledger, then you likely would not want this data to appear in plaintext. Chaincode encryption is achieved by leveraging the entities extension (https://github.com/hyperledger/fabric/tree/master/core/chaincode/shim/ext/entities), which is a BCCSP wrapper with commodity factories and functions to perform cryptographic operations such as encryption and elliptic curve digital signatures. For example, to encrypt, the invoker of a chaincode passes in a cryptographic key via the transient field. The same key may then be used for subsequent query operations, allowing for proper decryption of the encrypted state values.

For more information and samples, see the Encc Example (https://github.com/hyperledger/fabric/tree/master/examples/chaincode/go/enccc_example) within the fabric/examples directory. Pay specific attention to the utils.go helper program. This utility loads the chaincode shim APIs and entities extension and builds a new class of functions (e.g. encryptAndPutState & getStateAndDecrypt) that are then used within the encryption chaincode sample. As such, the chaincode can marry the basic shim APIs of Get/Put with the added functionality of Encrypt/Decrypt.
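Below is a hedged Go sketch in the spirit of the sample’s encryptAndPutState helper, assuming the ENCKEY and IV transient field names used by the Encc example; consult utils.go in the sample for the authoritative version.

package main

import (
	"github.com/hyperledger/fabric/bccsp/factory"
	"github.com/hyperledger/fabric/core/chaincode/shim"
	"github.com/hyperledger/fabric/core/chaincode/shim/ext/entities"
)

// encryptAndPut sketches the pattern: encrypt the value with a key the
// invoker supplied through the transient field, then store the ciphertext.
func encryptAndPut(stub shim.ChaincodeStubInterface, key string, value []byte) error {
	tMap, err := stub.GetTransient()
	if err != nil {
		return err
	}
	// Build an AES-256 encrypter entity around the caller-supplied key/IV.
	ent, err := entities.NewAES256EncrypterEntity("ID", factory.GetDefault(), tMap["ENCKEY"], tMap["IV"])
	if err != nil {
		return err
	}
	ciphertext, err := ent.Encrypt(value)
	if err != nil {
		return err
	}
	// Only the ciphertext reaches the ledger; queries decrypt with the same key.
	return stub.PutState(key, ciphertext)
}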

To add the encryption entities extension as a dependency, see Managing external dependencies for chaincode written in Go.

Managing external dependencies for chaincode written in Go

If your chaincode requires packages not provided by the Go standard library, you will need to include those packages with your chaincode. It is also a good practice to add the shim and any extension libraries to your chaincode as dependencies.

There are many tools (https://github.com/golang/go/wiki/PackageManagementTools) available for managing (or “vendoring”) these dependencies. The following demonstrates how to use govendor:

govendor init
govendor add +external  // Add all external package, or
govendor add github.com/external/pkg // Add specific external package

This imports the external dependencies into a local vendor directory. If you are vendoring the Fabric shim or shim extensions, clone the Fabric repository to your $GOPATH/src/github.com/hyperledger directory before executing the govendor commands.

Once dependencies are vendored in your chaincode directory, the peer chaincode package and peer chaincode install operations will include code associated with the dependencies in the chaincode package.

Chaincode for Operators

What is Chaincode?

Chaincode is a program, written in Go, Node.js, or Java, that implements a prescribed interface. Chaincode runs in a secured Docker container isolated from the endorsing peer process. Chaincode initializes and manages the ledger state through transactions submitted by applications.

A chaincode typically handles business logic agreed to by members of the network, so it may be considered a “smart contract”. Ledger updates created by a chaincode are scoped exclusively to that chaincode and cannot be accessed directly by another chaincode. However, within the same network, given the appropriate permission, a chaincode may invoke another chaincode to access its state.

In the following sections, we will explore chaincode through the eyes of a blockchain network operator rather than an application developer. Chaincode operators can use this tutorial to learn how to use the Fabric chaincode lifecycle to deploy and manage chaincode on their network.

Chaincode lifecycle

The Fabric chaincode lifecycle is a process that allows multiple organizations to agree on how a chaincode will be operated before it can be used on a channel. This tutorial discusses how a chaincode operator would use the Fabric lifecycle to perform the following tasks:

Note: The new Fabric chaincode lifecycle in the v2.0 Alpha release is not yet feature complete. Specifically, be aware of the following limitations in the Alpha release:

  • CouchDB indexes are not yet supported

  • Chaincodes defined with the new lifecycle are not yet discoverable via service discovery

These limitations will be resolved after the Alpha release. To use the old lifecycle model to install and instantiate a chaincode, visit the v1.4 version of the Chaincode for Operators tutorial.

Install and define a chaincode

The Fabric chaincode lifecycle requires that organizations agree to the parameters that define a chaincode, such as its name, version, and the chaincode endorsement policy. Channel members come to agreement using the following four steps. Not every organization on a channel needs to complete each step.

  1. Package the chaincode: This step can be completed by one organization or by each organization.

  2. Install the chaincode on your peers: Every organization that will use the chaincode to endorse a transaction or query the ledger needs to complete this step.

  3. Approve a chaincode definition for your organization: Every organization that will use the chaincode needs to complete this step. The chaincode definition needs to be approved by a sufficient number of organizations to satisfy the channel’s LifecycleEndorsment policy (a majority, by default) before the chaincode can be started on the channel.

  4. Commit the chaincode definition to the channel: The commit transaction needs to be submitted by one organization once the required number of organizations on the channel have approved. The submitter first collects endorsements from enough peers of the organizations that have approved, and then submits the transaction to commit the chaincode definition.

This tutorial provides a detailed overview of the operations of the Fabric chaincode lifecycle rather than the specific commands. To learn more about how to use the Fabric lifecycle using the peer CLI, see the Install and define a chaincode tutorial in Building Your First Network or the peer lifecycle command reference. To learn more about how to use the Fabric lifecycle with the Fabric SDK for Node.js, visit How to install and start your chaincode.

Step One: Packaging the smart contract

Chaincode needs to be packaged in a tar file before it can be installed on your peers. You can package a chaincode using the Fabric peer binaries, the Node Fabric SDK, or a third party tool such as GNU tar. When you create a chaincode package, you need to provide a chaincode package label to create a succinct and human readable description of the package.

If you are using a third party tool to package the chaincode, the resulting file needs to be in the format below. The Fabric peer binaries and the Fabric SDKs will automatically create a file in this format.

  • The chaincode needs to be packaged in a tar file, ending with a .tar.gz file extension.

  • The tar file needs to contain two files (no directory): a metadata file “Chaincode-Package-Metadata.json” and another tar containing the chaincode files.

  • “Chaincode-Package-Metadata.json” contains JSON that specifies the chaincode language, code path, and package label. You can see an example of a metadata file below:

    {"Path":"github.com/chaincode/fabcar/go","Type":"golang","Label":"fabcarv1"}
    
Step Two: Install the chaincode on your peers

You need to install the chaincode package on every peer that will execute and endorse transactions. Whether using the CLI or an SDK, you need to complete this step using your Peer Administrator, whose signing certificate is in the admincerts folder of your peer MSP. It is recommended that organizations package a chaincode only once, and then install the same package on every peer that belongs to their org. If a channel wants to ensure that each organization is running the same chaincode, one organization can package a chaincode and send it to other channel members out of band.

A successful install command will return a chaincode package identifier, which is the package label combined with a hash of the package. This package identifier is used to associate a chaincode package installed on your peers with a chaincode definition approved by your organization. Save the identifier for the next step. You can also find the package identifier by querying the packages installed on your peer using the peer CLI.

Step Three: Approve a chaincode definition for your organization

Chaincode is governed by a chaincode definition. When channel members approve a chaincode definition, the approval acts as a vote by an organization on the chaincode parameters it accepts. These approved organization definitions allow channel members to agree on a chaincode before it can be used on a channel. The chaincode definition includes the following parameters, which need to be consistent across organizations:

  • Name: The name that applications will use when invoking the chaincode.

  • Version: A version number or value associated with a given chaincodes package. If you upgrade the chaincode binaries, you need to change your chaincode version as well.

  • Sequence: The number of times the chaincode has been defined. This value is an integer, and is used to keep track of chaincode upgrades. For example, when you first install and approve a chaincode definition, the sequence number will be 1. When you next upgrade the chaincode, the sequence number will be incremented to 2.

  • Endorsement Policy: Which organizations need to execute and validate the transaction output. The endorsement policy can be expressed as a string passed to the CLI or the SDK, or it can reference a policy in the channel config. By default, the endorsement policy is set to Channel/Application/Endorsement, which defaults to require that a majority of organizations in the channel endorse a transaction.

  • Collection Configuration: The path to a private data collection definition file associated with your chaincode. For more information about private data collections, see the Private Data architecture reference.

  • Initialization: All chaincode need to contain an Init function that is used to initialize the chaincode. By default, this function is never executed. However, you can use the chaincode definition to request that the Init function be callable. If execution of Init is requested, fabric will ensure that Init is invoked before any other function and is only invoked once.

  • ESCC/VSCC Plugins: The name of a custom endorsement or validation plugin to be used by this chaincode.

The chaincode definition also includes the package identifier. This is a required parameter for each organization that wants to use the chaincode. The package ID does not need to be the same for all organizations. An organization can approve a chaincode definition without installing the chaincode package or including the identifier in the definition.

Each channel member that wants to use the chaincode needs to approve a chaincode definition for their organization. This approval needs to be submitted to the ordering service, after which it is distributed to all peers. The approval needs to be submitted by your Organization Administrator, whose signing certificate is listed as an admin cert in the MSP of your organization definition. After the approval transaction has been successfully submitted, the approved definition is stored in a collection that is available to all the peers of your organization. As a result, you only need to approve a chaincode once for your organization, even if you have multiple peers.

Step Four: Commit the chaincode definition to the channel

Once a sufficient number of channel members have approved a chaincode definition, one organization can commit the definition to the channel. Before committing the definition to the channel using the peer CLI, you can use the queryapprovalstatus command to find which channel members have approved the definition. The commit transaction proposal is first sent to the peers of channel members, which query the chaincode definition approved for their organization and endorse the definition if their organization has approved it. The transaction is then submitted to the ordering service, which commits the chaincode definition to the channel. The commit definition transaction needs to be submitted as the Organization Administrator, whose signing certificate is listed as an admin cert in the MSP of your organization definition.

The number of organizations that need to approve a definition before it can be successfully committed to the channel is governed by the Channel/Application/LifecycleEndorsement policy. By default, this policy requires that a majority of organizations in the channel endorse the transaction. The LifecycleEndorsement policy is separate from the chaincode endorsement policy. For example, even if a chaincode endorsement policy only requires signatures from one or two organizations, a majority of channel members still need to approve the chaincode definition according to the default policy. When committing the definition to the channel, you need to target enough peer organizations in the channel to satisfy your LifecycleEndorsement policy.

An organization can approve a chaincode definition without installing the chaincode package. If an organization does not need to use the chaincode, they can approve a chaincode definition without a package identifier to ensure that the LifecycleEndorsement policy is satisfied.

After the chaincode definition has been committed to the channel, channel members can start using the chaincode. The first invoke of the chaincode will start the chaincode containers on all of the peers targeted by the transaction proposal, as long as those peers have installed the chaincode package. You can use the chaincode definition to require the invocation of the Init function to start the chaincode. Otherwise, a channel member can start the chaincode container by invoking any transaction in the chaincode. The first invoke, whether of the Init function or any other transaction, is subject to the chaincode endorsement policy. It may take a few minutes for the chaincode container to start.

Upgrade a chaincode

You can use the same Fabric lifecycle process to upgrade a chaincode as you used to install and start it. You can upgrade the chaincode binaries, or only update the chaincode policies. Follow these steps to upgrade a chaincode:

  1. Repackage the chaincode: You only need to complete this step if you are upgrading the chaincode binaries.

  2. Install the new chaincode package on your peers: Once again, you only need to complete this step if you are upgrading the chaincode binaries. Installing the new chaincode package will generate a package ID, which you will need to pass to the new chaincode definition. You also need to change the chaincode version.

  3. Approve a new chaincode definition: If you are upgrading the chaincode binaries, you need to update the chaincode version and the package ID in the chaincode definition. You can also update your chaincode endorsement policy without having to repackage your chaincode binaries. Channel members simply need to approve a definition with the new policy. The new definition needs to increment the sequence variable in the definition by one.

  4. Commit the definition to the channel: When a sufficient number of channel members have approved the new chaincode definition, one organization can commit the new definition to upgrade the chaincode definition to the channel. There is no separate upgrade command as part of the lifecycle process.

  5. Upgrade the chaincode container: If you updated the chaincode definition without upgrading the chaincode package, you do not need to upgrade the chaincode container. If you did upgrade the chaincode binaries, a new invoke will upgrade the chaincode container. If you requested the execution of the Init function in the chaincode definition, you need to upgrade the chaincode container by invoking the Init function again after the new definition is successfully committed.

The Fabric chaincode lifecycle uses the sequence in the chaincode definition to keep track of upgrades. All channel members need to increment the sequence number by one and approve a new definition to upgrade the chaincode. The version parameter is used to track the chaincode binaries, and needs to be changed only when you upgrade the binaries.

Migrate to the new Fabric lifecycle

You can use the Fabric chaincode lifecycle by creating a new channel and setting the channel capabilities to V2_0. You will not be able to use the previous lifecycle to install, instantiate, or update a chaincode on channels with V2_0 capabilities enabled. There is no upgrade support to the v2.0 Alpha release, and no intended upgrade support from the Alpha release to future versions of v2.x.

System Chaincode Plugins

System chaincodes are specialized chaincodes that run as part of the peer process as opposed to user chaincodes that run in separate docker containers. As such they have more access to resources in the peer and can be used for implementing features that are difficult or impossible to be implemented through user chaincodes. Examples of System Chaincodes include QSCC (Query System Chaincode) for ledger and other Fabric-related queries, CSCC (Configuration System Chaincode) which helps regulate access control, _lifecycle (which regulates the Fabric chaincode lifecycle), and the legacy LSCC (Lifecycle System Chaincode) which regulated the previous chaincode lifecycle.

Unlike a user chaincode, a system chaincode is not installed and instantiated using proposals from SDKs or CLI. It is registered and deployed by the peer at start-up.

System chaincodes can be linked to a peer in two ways: statically, and dynamically using Go plugins. This tutorial will outline how to develop and load system chaincodes as plugins.

Developing Plugins

A system chaincode is a program written in Go and loaded using the Go plugin package.

A plugin includes a main package with exported symbols and is built with the command go build -buildmode=plugin.

Every system chaincode must implement the Chaincode Interface and export a constructor method that matches the signature func New() shim.Chaincode in the main package. An example can be found in the repository at examples/plugin/scc.
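A minimal sketch of such a plugin’s main package, following the constraints above, might look like this:

// Package main sketches the minimal shape of a system chaincode plugin.
// Build with: go build -buildmode=plugin
package main

import (
	"github.com/hyperledger/fabric/core/chaincode/shim"
	pb "github.com/hyperledger/fabric/protos/peer"
)

type sysCC struct{}

func (s *sysCC) Init(stub shim.ChaincodeStubInterface) pb.Response {
	return shim.Success(nil)
}

func (s *sysCC) Invoke(stub shim.ChaincodeStubInterface) pb.Response {
	// A real system chaincode implements its feature (e.g. access control
	// checks or ledger queries) here.
	return shim.Success([]byte("ok"))
}

// New is the exported constructor the peer looks up in the plugin.
func New() shim.Chaincode {
	return &sysCC{}
}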

Existing chaincodes such as the QSCC can also serve as templates for certain features, such as access control, that are typically implemented through system chaincodes. The existing system chaincodes also serve as a reference for best-practices on things like logging and testing.

Note

On imported packages: the Go standard library requires that a plugin must include the same version of imported packages as the host application (Fabric, in this case).

Configuring Plugins

Plugins are configured in the chaincode.systemPlugins section in core.yaml:

chaincode:
  systemPlugins:
    - enabled: true
      name: mysyscc
      path: /opt/lib/syscc.so
      invokableExternal: true
      invokableCC2CC: true

A system chaincode must also be whitelisted in the chaincode.system section in core.yaml:

chaincode:
  system:
    mysyscc: enable

Using CouchDB

This tutorial will describe the steps required to use CouchDB as the state database with Hyperledger Fabric. By now, you should be familiar with Fabric concepts and have explored some of the samples and tutorials.

Note

The Fabric chaincode lifecycle that is being introduced in the v2.0 Alpha release does not support using indexes with CouchDB. As a result, this tutorial requires the previous lifecycle process to install and instantiate a chaincode that includes CouchDB indexes. Download the release-1.4 version of the Fabric Samples to use this tutorial. For more information, see Add the index to your chaincode folder.

The tutorial will take you through the following steps:

  1. Enable CouchDB in Hyperledger Fabric

  2. Create an index

  3. Add the index to your chaincode folder

  4. Install and instantiate the Chaincode

  5. Query the CouchDB State Database

  6. Use best practices for queries and indexes

  7. Query the CouchDB State Database With Pagination

  8. Update an Index

  9. Delete an Index

For a deeper dive into CouchDB refer to CouchDB as the State Database and for more information on the Fabric ledger refer to the Ledger topic. Follow the tutorial below for details on how to leverage CouchDB in your blockchain network.

Throughout this tutorial we will use the Marbles sample as our use case to demonstrate how to use CouchDB with Fabric and will deploy Marbles to the Building Your First Network (BYFN) tutorial network. You should have completed the task Install Samples, Binaries and Docker Images. However, running the BYFN tutorial is not a prerequisite for this tutorial; the necessary commands are provided throughout this tutorial to use the network.

Why CouchDB?

Fabric supports two types of peer databases. LevelDB is the default state database embedded in the peer node; it stores chaincode data as simple key-value pairs and supports key, key range, and composite key queries only. CouchDB is an optional alternate state database that supports rich queries when chaincode data values are modeled as JSON. Rich queries are more flexible and efficient against large indexed data stores, when you want to query the actual data value content rather than the keys. CouchDB is a JSON document datastore rather than a pure key-value store, therefore enabling indexing of the contents of the documents in the database.

In order to leverage the benefits of CouchDB, namely content-based JSON queries, your data must be modeled in JSON format. You must decide whether to use LevelDB or CouchDB before setting up your network. Switching a peer from using LevelDB to CouchDB is not supported due to data compatibility issues. All peers on the network must use the same database type. If you have a mix of JSON and binary data values, you can still use CouchDB; however, the binary values can only be queried based on key, key range, and composite key queries.

Enable CouchDB in Hyperledger Fabric

CouchDB runs as a separate database process alongside the peer, therefore there are additional considerations in terms of setup, management, and operations. A docker image of CouchDB is available and we recommend that it be run on the same server as the peer. You will need to setup one CouchDB container per peer and update each peer container by changing the configuration found in core.yaml to point to the CouchDB container. The core.yaml file must be located in the directory specified by the environment variable FABRIC_CFG_PATH:

  • For docker deployments, core.yaml is pre-configured and located in the peer container FABRIC_CFG_PATH folder. However when using docker environments, you typically pass environment variables by editing the docker-compose-couch.yaml to override the core.yaml

  • For native binary deployments, core.yaml is included with the release artifact distribution.

Edit the stateDatabase section of core.yaml. Specify CouchDB as the stateDatabase and fill in the associated couchDBConfig properties. For more details on configuring CouchDB to work with fabric, refer here. To view an example of a core.yaml file configured for CouchDB, examine the BYFN docker-compose-couch.yaml in the HyperLedger/fabric-samples/first-network directory.

Create an index

Why are indexes important?

Indexes allow a database to be queried without having to examine every row with every query, making them run faster and more efficiently. Normally, indexes are built for frequently occurring query criteria allowing the data to be queried more efficiently. To leverage the major benefit of CouchDB – the ability to perform rich queries against JSON data – indexes are not required, but they are strongly recommended for performance. Also, if sorting is required in a query, CouchDB requires an index of the sorted fields.

Note

Rich queries that do not have an index will work but may throw a warning in the CouchDB log that the index was not found. However, if a rich query includes a sort specification, then an index on that field is required; otherwise, the query will fail and an error will be thrown.

To demonstrate building an index, we will use the data from the Marbles sample. In this example, the Marbles data structure is defined as:

type marble struct {
         ObjectType string `json:"docType"` //docType is used to distinguish the various types of objects in state database
         Name       string `json:"name"`    //the field tags are needed to keep case from bouncing around
         Color      string `json:"color"`
         Size       int    `json:"size"`
         Owner      string `json:"owner"`
}

In this structure, the attributes (docType, name, color, size, owner) define the ledger data associated with the asset. The attribute docType is a pattern used in the chaincode to differentiate different data types that may need to be queried separately. When using CouchDB, it is recommended to include this docType attribute to distinguish each type of document in the chaincode namespace. (Each chaincode is represented as its own CouchDB database, that is, each chaincode has its own namespace for keys.)

With respect to the Marbles data structure, docType is used to identify that this document/asset is a marble asset. Potentially there could be other documents/assets in the chaincode database. The documents in the database are searchable against all of these attribute values.

When defining an index for use in chaincode queries, each one must be defined in its own text file with the extension *.json and the index definition must be formatted in the CouchDB index JSON format.

To define an index, three pieces of information are required:

  • fields: these are the frequently queried fields

  • name: name of the index

  • type: always json in this context

For example, a simple index named foo-index for a field named foo.

{
    "index": {
        "fields": ["foo"]
    },
    "name" : "foo-index",
    "type" : "json"
}

Optionally, the design document attribute ddoc can be specified on the index definition. A design document is a CouchDB construct designed to contain indexes. Indexes can be grouped into design documents for efficiency, but CouchDB recommends one index per design document.

Tip

When defining an index it is a good practice to include the ddoc attribute and value along with the index name. It is important to include this attribute to ensure that you can update the index later if needed. Also it gives you the ability to explicitly specify which index to use on a query.

Here is another example of an index definition from the Marbles sample with the index name indexOwner using multiple fields docType and owner and includes the ddoc attribute:

{
  "index":{
      "fields":["docType","owner"] // Names of the fields to be queried
  },
  "ddoc":"indexOwnerDoc", // (optional) Name of the design document in which the index will be created.
  "name":"indexOwner",
  "type":"json"
}

In the example above, if the design document indexOwnerDoc does not already exist, it is automatically created when the index is deployed. An index can be constructed with one or more attributes specified in the list of fields and any combination of attributes can be specified. An attribute can exist in multiple indexes for the same docType. In the following example, index1 only includes the attribute owner, index2 includes the attributes owner and color and index3 includes the attributes owner, color and size. Also, notice each index definition has its own ddoc value, following the CouchDB recommended practice.

{
  "index":{
      "fields":["owner"] // Names of the fields to be queried
  },
  "ddoc":"index1Doc", // (optional) Name of the design document in which the index will be created.
  "name":"index1",
  "type":"json"
}

{
  "index":{
      "fields":["owner", "color"] // Names of the fields to be queried
  },
  "ddoc":"index2Doc", // (optional) Name of the design document in which the index will be created.
  "name":"index2",
  "type":"json"
}

{
  "index":{
      "fields":["owner", "color", "size"] // Names of the fields to be queried
  },
  "ddoc":"index3Doc", // (optional) Name of the design document in which the index will be created.
  "name":"index3",
  "type":"json"
}

In general, you should model index fields to match the fields that will be used in query filters and sorts. For more details on building an index in JSON format refer to the CouchDB documentation.

A final word on indexing: Fabric takes care of indexing the documents in the database using a pattern called index warming. CouchDB does not typically index new or updated documents until the next query. Fabric ensures that indexes stay ‘warm’ by requesting an index update after every block of data is committed. This ensures queries are fast because they do not have to index documents before running the query. This process keeps the index current and refreshed every time new records are added to the state database.

Add the index to your chaincode folder

Once you finalize an index, it is ready to be packaged with your chaincode for deployment by being placed alongside it in the appropriate metadata folder.

If your chaincode installation and instantiation uses the Hyperledger Fabric Node SDK, the JSON index files can be located in any folder as long as it conforms to this directory structure. During the chaincode installation using the client.installChaincode() API, include the attribute (metadataPath) in the installation request. The value of the metadataPath is a string representing the absolute path to the directory structure containing the JSON index file(s).

Alternatively, if you are using the peer command to install and instantiate the chaincode, then the JSON index files must be located under the path META-INF/statedb/couchdb/indexes which is located inside the directory where the chaincode resides.

The Marbles sample below illustrates how the index is packaged with the chaincode which will be installed using the peer commands.

Marbles Chaincode Index Package

This sample includes one index named indexOwnerDoc:

{"index":{"fields":["docType","owner"]},"ddoc":"indexOwnerDoc", "name":"indexOwner","type":"json"}
Start the network

Note

The following tutorial needs to be run using the release-1.4 version of the Fabric Samples. If you have already downloaded release-2.0 of the Fabric Samples, you can use git checkout to switch to release-1.4. Navigate to the fabric-samples directory on your local machine, then run the command git checkout v1.4.0.

Try it yourself

Before installing and instantiating the marbles chaincode, we need to start up the BYFN network. For the sake of this tutorial, we want to operate from a known initial state. The following command will kill any active or stale docker containers and remove previously generated artifacts. Therefore let’s run the following command to clean up any previous environments:

cd fabric-samples/first-network
./byfn.sh down

Now start up the BYFN network with CouchDB by running the following command:

./byfn.sh up -c mychannel -s couchdb

This will create a simple Fabric network consisting of a single channel named mychannel with two organizations (each maintaining two peer nodes) and an ordering service while using CouchDB as the state database.

Install and instantiate the Chaincode

Client applications interact with the blockchain ledger through chaincode. As such we need to install the chaincode on every peer that will execute and endorse our transactions and instantiate the chaincode on the channel. In the previous section, we demonstrated how to package the chaincode so they should be ready for deployment.

Chaincode is installed onto a peer and then instantiated onto the channel using the peer CLI.

  1. Use the peer chaincode install command to install the Marbles chaincode on a peer.

Try it yourself

Assuming you have started the BYFN network, navigate into the CLI container using the command:

docker exec -it cli bash

Use the following command to install the Marbles chaincode from the git repository onto a peer in your BYFN network. The CLI container defaults to using peer0 of org1:

peer chaincode install -n marbles -v 1.0 -p github.com/hyperledger/fabric-samples/chaincode/marbles02/go

2. Issue the peer chaincode instantiate command to instantiate the chaincode on a channel.

Try it yourself

To instantiate the Marbles sample on the BYFN channel mychannel run the following command:

export CHANNEL_NAME=mychannel
peer chaincode instantiate -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -v 1.0 -c '{"Args":["init"]}' -P "OR ('Org0MSP.peer','Org1MSP.peer')"
Verify index was deployed

Indexes will be deployed to each peer’s CouchDB state database once the chaincode is both installed on the peer and instantiated on the channel. You can verify that the CouchDB index was created successfully by examining the peer log in the Docker container.

Try it yourself

To view the logs in the peer docker container, open a new Terminal window and run the following command to grep for message confirmation that the index was created.

docker logs peer0.org1.example.com  2>&1 | grep "CouchDB index"

You should see a result that looks like the following:

[couchdb] CreateIndex -> INFO 0be Created CouchDB index [indexOwner] in state database [mychannel_marbles] using design document [_design/indexOwnerDoc]

Note

If Marbles was not installed on the BYFN peer peer0.org1.example.com, you may need to replace it with the name of a different peer where Marbles was installed.

Query the CouchDB State Database

Now that the index has been defined in the JSON file and deployed alongside the chaincode, chaincode functions can execute JSON queries against the CouchDB state database, and peer commands can invoke those chaincode functions.

Specifying an index name on a query is optional. If not specified, and an index already exists for the fields being queried, the existing index will be automatically used.

Tip

It is a good practice to explicitly include an index name on a query using the use_index keyword. Without it, CouchDB may pick a less optimal index. CouchDB might also not use an index at all, and at the low data volumes typical during testing you may not notice; only at higher volumes would you see slow performance, because CouchDB is not using the index you assumed it was.

Build the query in chaincode

You can perform complex rich queries against the chaincode data values using the CouchDB JSON query language within chaincode. As we explored above, the marbles02 sample chaincode includes an index, and rich queries are defined in the functions queryMarbles and queryMarblesByOwner:

  • queryMarbles

    Example of an ad hoc rich query. This is a query where a (selector) string can be passed into the function. This query would be useful to client applications that need to dynamically build their own selectors at runtime. For more information on selectors refer to CouchDB selector syntax.

  • queryMarblesByOwner

    Example of a parameterized query where the query logic is baked into the chaincode. In this case the function accepts a single argument, the marble owner. It then queries the state database for JSON documents matching the docType of “marble” and the owner id using the JSON query syntax (see the sketch after this list).
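For reference, here is a Go sketch of queryMarblesByOwner in the style of the marbles02 sample; it assumes the fmt and strings imports and reuses the sample’s getQueryResultForQueryString helper (also used by queryMarbles below).

// queryMarblesByOwner bakes the selector into the chaincode: it takes the
// owner as its only argument and builds the CouchDB selector string itself.
func (t *SimpleChaincode) queryMarblesByOwner(stub shim.ChaincodeStubInterface, args []string) pb.Response {
	if len(args) < 1 {
		return shim.Error("Incorrect number of arguments. Expecting owner name")
	}
	owner := strings.ToLower(args[0])
	// Selector for all documents of docType "marble" with this owner.
	queryString := fmt.Sprintf("{\"selector\":{\"docType\":\"marble\",\"owner\":\"%s\"}}", owner)
	queryResults, err := getQueryResultForQueryString(stub, queryString)
	if err != nil {
		return shim.Error(err.Error())
	}
	return shim.Success(queryResults)
}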

Run the query using the peer command

In absence of a client application to test rich queries defined in chaincode, peer commands can be used. Peer commands run from the command line inside the docker container. We will customize the peer chaincode query command to use the Marbles index indexOwner and query for all marbles owned by “tom” using the queryMarbles function.

Try it yourself

Before querying the database, we should add some data. Run the following command in the peer container to create a marble owned by “tom”:

peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["initMarble","marble1","blue","35","tom"]}'

After an index has been deployed during chaincode instantiation, it will automatically be utilized by chaincode queries. CouchDB can determine which index to use based on the fields being queried. If an index exists for the query criteria it will be used. However the recommended approach is to specify the use_index keyword on the query. The peer command below is an example of how to specify the index explicitly in the selector syntax by including the use_index keyword:

// Rich Query with index name explicitly specified:
peer chaincode query -C $CHANNEL_NAME -n marbles -c '{"Args":["queryMarbles", "{\"selector\":{\"docType\":\"marble\",\"owner\":\"tom\"}, \"use_index\":[\"_design/indexOwnerDoc\", \"indexOwner\"]}"]}'

Delving into the query command above, there are three arguments of interest:

  • queryMarbles

Name of the function in the Marbles chaincode. Notice that a shim.ChaincodeStubInterface is used to access and modify the ledger. The getQueryResultForQueryString() function passes the queryString to the shim API GetQueryResult().

func (t *SimpleChaincode) queryMarbles(stub shim.ChaincodeStubInterface, args []string) pb.Response {

        //   0
        // "queryString"
         if len(args) < 1 {
                 return shim.Error("Incorrect number of arguments. Expecting 1")
         }

         queryString := args[0]

         queryResults, err := getQueryResultForQueryString(stub, queryString)
         if err != nil {
               return shim.Error(err.Error())
         }
         return shim.Success(queryResults)
}
  • {"selector":{"docType":"marble","owner":"tom"}

This is an example of an ad hoc selector string which finds all documents of type marble where the owner attribute has a value of tom.

  • "use_index":["_design/indexOwnerDoc", "indexOwner"]

Specifies both the design doc name indexOwnerDoc and index name indexOwner. In this example the selector query explicitly includes the index name, specified by using the use_index keyword. Recalling the index definition above Create an index, it contains a design doc, "ddoc":"indexOwnerDoc". With CouchDB, if you plan to explicitly include the index name on the query, then the index definition must include the ddoc value, so it can be referenced with the use_index keyword.

The query runs successfully and the index is leveraged with the following results:

Query Result: [{"Key":"marble1", "Record":{"color":"blue","docType":"marble","name":"marble1","owner":"tom","size":35}}]

Use best practices for queries and indexes

Queries that use indexes will complete faster, without having to scan the full database in CouchDB. Understanding indexes will allow you to write your queries for better performance and help your application handle larger amounts of data or blocks on your network.

It is also important to plan the indexes you install with your chaincode. You should install only a few indexes per chaincode that support most of your queries. Adding too many indexes, or using an excessive number of fields in an index, will degrade the performance of your network. This is because the indexes are updated after each block is committed. The more indexes need to be updated through “index warming”, the longer it will take for transactions to complete.

The examples in this section will help demonstrate how queries use indexes and what type of queries will have the best performance. Remember the following when writing your queries:

  • All fields in the index must also be in the selector or sort sections of your query for the index to be used.

  • More complex queries will have a lower performance and will be less likely to use an index.

  • You should try to avoid operators that will result in a full table scan or a full index scan such as $or, $in and $regex.

In the previous section of this tutorial, you issued the following query against the marbles chaincode:

// Example one: query fully supported by the index
peer chaincode query -C $CHANNEL_NAME -n marbles -c '{"Args":["queryMarbles", "{\"selector\":{\"docType\":\"marble\",\"owner\":\"tom\"}, \"use_index\":[\"indexOwnerDoc\", \"indexOwner\"]}"]}'

The marbles chaincode was installed with the indexOwnerDoc index:

{"index":{"fields":["docType","owner"]},"ddoc":"indexOwnerDoc", "name":"indexOwner","type":"json"}

Notice that both the fields in the query, docType and owner, are included in the index, making it a fully supported query. As a result this query will be able to use the data in the index, without having to search the full database. Fully supported queries such as this one will return faster than other queries from your chaincode.

If you add extra fields to the query above, it will still use the index. However, the query will additionally have to scan the indexed data for the extra fields, resulting in a longer response time. As an example, the query below will still use the index, but will take a longer time to return than the previous example.

// Example two: query fully supported by the index with additional data
peer chaincode query -C $CHANNEL_NAME -n marbles -c '{"Args":["queryMarbles", "{\"selector\":{\"docType\":\"marble\",\"owner\":\"tom\",\"color\":\"red\"}, \"use_index\":[\"indexOwnerDoc\", \"indexOwner\"]}"]}'

A query that does not include all fields in the index will have to scan the full database instead. For example, the query below searches for the owner, without specifying the type of item owned. Since the indexOwnerDoc index contains both the owner and docType fields, this query will not be able to use the index.

// Example three: query not supported by the index
peer chaincode query -C $CHANNEL_NAME -n marbles -c '{"Args":["queryMarbles", "{\"selector\":{\"owner\":\"tom\"}, \"use_index\":[\"indexOwnerDoc\", \"indexOwner\"]}"]}'

In general, more complex queries will have a longer response time, and have a lower chance of being supported by an index. Operators such as $or, $in, and $regex will often cause the query to scan the full index or not use the index at all.

As an example, the query below contains an $or term that will search for every marble and every item owned by tom.

// Example four: query with $or supported by the index
peer chaincode query -C $CHANNEL_NAME -n marbles -c '{"Args":["queryMarbles", "{\"selector\":{\"$or\":[{\"docType\":\"marble\"},{\"owner\":\"tom\"}]}, \"use_index\":[\"indexOwnerDoc\", \"indexOwner\"]}"]}'

This query will still use the index because it searches for fields that are included in indexOwnerDoc. However, the $or condition in the query requires a scan of all the items in the index, resulting in a longer response time.

Below is an example of a complex query that is not supported by the index.

// Example five: Query with $or not supported by the index
peer chaincode query -C $CHANNEL_NAME -n marbles -c '{"Args":["queryMarbles", "{\"selector\":{\"$or\":[{\"docType\":\"marble\",\"owner\":\"tom\"},{\"color\":\"yellow\"}]}, \"use_index\":[\"indexOwnerDoc\", \"indexOwner\"]}"]}'

The query searches for all marbles owned by tom or any other items that are yellow. This query will not use the index because it will need to search the entire table to meet the $or condition. Depending on the amount of data on your ledger, this query will take a long time to respond or may time out.

While it is important to follow best practices with your queries, using indexes is not a solution for collecting large amounts of data. The blockchain data structure is optimized to validate and confirm transactions, and is not suited for data analytics or reporting. If you want to build a dashboard as part of your application or analyze the data from your network, the best practice is to query an off-chain database that replicates the data from your peers. This will allow you to understand the data on the blockchain without degrading the performance of your network or disrupting transactions.

You can use block or chaincode events from your application to write transaction data to an off-chain database or analytics engine. For each block received, the block listener application would iterate through the block transactions and build a data store using the key/value writes from each valid transaction’s rwset. The Peer channel-based event services provide replayable events to ensure the integrity of downstream data stores.

Query the CouchDB State Database With Pagination

When large result sets are returned by CouchDB queries, a set of APIs is available which can be called by chaincode to paginate the list of results. Pagination provides a mechanism to partition the result set by specifying a pagesize and a start point – a bookmark which indicates where to begin the result set. The client application iteratively invokes the chaincode that executes the query until no more results are returned. For more information refer to this topic on pagination with CouchDB.

We will use the Marbles sample function queryMarblesWithPagination to demonstrate how pagination can be implemented in chaincode and the client application.

  • queryMarblesWithPagination

    Example of an ad hoc rich query with pagination. This is a query where a (selector) string can be passed into the function similar to the above example. In this case, a pageSize is also included with the query as well as a bookmark.

In order to demonstrate pagination, more data is required. This example assumes that you have already added marble1 from above. Run the following commands in the peer container to create four more marbles owned by “tom”, to create a total of five marbles owned by “tom”:

Try it yourself

peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["initMarble","marble2","yellow","35","tom"]}'
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["initMarble","marble3","green","20","tom"]}'
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["initMarble","marble4","purple","20","tom"]}'
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["initMarble","marble5","blue","40","tom"]}'

In addition to the arguments for the query in the previous example, queryMarblesWithPagination adds pagesize and bookmark. The pagesize specifies the number of records to return per query. The bookmark is an “anchor” telling CouchDB where to begin the page. (Each page of results returns a unique bookmark.)

  • queryMarblesWithPagination

Name of the function in the Marbles chaincode. Notice that the shim.ChaincodeStubInterface is used to access and modify the ledger. getQueryResultForQueryStringWithPagination() passes the queryString along with the pagesize and bookmark to the shim API GetQueryResultWithPagination().

func (t *SimpleChaincode) queryMarblesWithPagination(stub shim.ChaincodeStubInterface, args []string) pb.Response {

      //   0               1           2
      // "queryString", "pageSize", "bookmark"
      if len(args) < 3 {
              return shim.Error("Incorrect number of arguments. Expecting 3")
      }

      queryString := args[0]
      //return type of ParseInt is int64
      pageSize, err := strconv.ParseInt(args[1], 10, 32)
      if err != nil {
              return shim.Error(err.Error())
      }
      bookmark := args[2]

      queryResults, err := getQueryResultForQueryStringWithPagination(stub, queryString, int32(pageSize), bookmark)
      if err != nil {
              return shim.Error(err.Error())
      }
      return shim.Success(queryResults)
}
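
The helper getQueryResultForQueryStringWithPagination() referenced above also lives in the Marbles sample. The following is a minimal sketch of how such a helper can be written against the v1.4 shim API (it assumes bytes and strconv are imported; the sample's actual buffer-building code may differ slightly):

func getQueryResultForQueryStringWithPagination(stub shim.ChaincodeStubInterface, queryString string, pageSize int32, bookmark string) ([]byte, error) {

      // Pass the selector string, page size, and bookmark through to CouchDB.
      resultsIterator, responseMetadata, err := stub.GetQueryResultWithPagination(queryString, pageSize, bookmark)
      if err != nil {
              return nil, err
      }
      defer resultsIterator.Close()

      // Serialize one page of results as a JSON array of Key/Record pairs.
      var buffer bytes.Buffer
      buffer.WriteString("[")
      writtenAny := false
      for resultsIterator.HasNext() {
              queryResponse, err := resultsIterator.Next()
              if err != nil {
                      return nil, err
              }
              if writtenAny {
                      buffer.WriteString(",")
              }
              buffer.WriteString("{\"Key\":\"" + queryResponse.Key + "\", \"Record\":" + string(queryResponse.Value) + "}")
              writtenAny = true
      }
      buffer.WriteString("]")

      // Append the response metadata so the client can pass the returned
      // bookmark back in on the next invocation.
      buffer.WriteString("[{\"ResponseMetadata\":{\"RecordsCount\":\"" +
              strconv.Itoa(int(responseMetadata.FetchedRecordsCount)) +
              "\",\"Bookmark\":\"" + responseMetadata.Bookmark + "\"}}]")

      return buffer.Bytes(), nil
}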

The following example is a peer command which calls queryMarblesWithPagination with a pageSize of 3 and no bookmark specified.

Tip

When no bookmark is specified, the query starts with the “first” page of records.

Try it yourself

// Rich Query with index name explicitly specified and a page size of 3:
peer chaincode query -C $CHANNEL_NAME -n marbles -c '{"Args":["queryMarblesWithPagination", "{\"selector\":{\"docType\":\"marble\",\"owner\":\"tom\"}, \"use_index\":[\"_design/indexOwnerDoc\", \"indexOwner\"]}","3",""]}'

The following response is received (carriage returns added for clarity). Three of the five marbles are returned, because the pagesize was set to 3:

[{"Key":"marble1", "Record":{"color":"blue","docType":"marble","name":"marble1","owner":"tom","size":35}},
 {"Key":"marble2", "Record":{"color":"yellow","docType":"marble","name":"marble2","owner":"tom","size":35}},
 {"Key":"marble3", "Record":{"color":"green","docType":"marble","name":"marble3","owner":"tom","size":20}}]
[{"ResponseMetadata":{"RecordsCount":"3",
"Bookmark":"g1AAAABLeJzLYWBgYMpgSmHgKy5JLCrJTq2MT8lPzkzJBYqz5yYWJeWkGoOkOWDSOSANIFk2iCyIyVySn5uVBQAGEhRz"}}]

Note

Bookmarks are uniquely generated by CouchDB for each query and represent a placeholder in the result set. Pass the returned bookmark on the subsequent iteration of the query to retrieve the next set of results.

The following is a peer command to call queryMarblesWithPagination with a pageSize of 3. Notice this time, the query includes the bookmark returned from the previous query.

Try it yourself

peer chaincode query -C $CHANNEL_NAME -n marbles -c '{"Args":["queryMarblesWithPagination", "{\"selector\":{\"docType\":\"marble\",\"owner\":\"tom\"}, \"use_index\":[\"_design/indexOwnerDoc\", \"indexOwner\"]}","3","g1AAAABLeJzLYWBgYMpgSmHgKy5JLCrJTq2MT8lPzkzJBYqz5yYWJeWkGoOkOWDSOSANIFk2iCyIyVySn5uVBQAGEhRz"]}'

The following response is received (carriage returns added for clarity). The last two records are retrieved:

[{"Key":"marble4", "Record":{"color":"purple","docType":"marble","name":"marble4","owner":"tom","size":20}},
 {"Key":"marble5", "Record":{"color":"blue","docType":"marble","name":"marble5","owner":"tom","size":40}}]
[{"ResponseMetadata":{"RecordsCount":"2",
"Bookmark":"g1AAAABLeJzLYWBgYMpgSmHgKy5JLCrJTq2MT8lPzkzJBYqz5yYWJeWkmoKkOWDSOSANIFk2iCyIyVySn5uVBQAGYhR1"}}]

The final command is a peer command to call queryMarblesWithPagination with a pageSize of 3 and with the bookmark from the previous query.

Try it yourself

peer chaincode query -C $CHANNEL_NAME -n marbles -c '{"Args":["queryMarblesWithPagination", "{\"selector\":{\"docType\":\"marble\",\"owner\":\"tom\"}, \"use_index\":[\"_design/indexOwnerDoc\", \"indexOwner\"]}","3","g1AAAABLeJzLYWBgYMpgSmHgKy5JLCrJTq2MT8lPzkzJBYqz5yYWJeWkmoKkOWDSOSANIFk2iCyIyVySn5uVBQAGYhR1"]}'

The following response is received (carriage returns added for clarity). No records are returned, indicating that all pages have been retrieved:

[]
[{"ResponseMetadata":{"RecordsCount":"0",
"Bookmark":"g1AAAABLeJzLYWBgYMpgSmHgKy5JLCrJTq2MT8lPzkzJBYqz5yYWJeWkmoKkOWDSOSANIFk2iCyIyVySn5uVBQAGYhR1"}}]

For an example of how a client application can iterate over the result sets using pagination, search for the getQueryResultForQueryStringWithPagination function in the Marbles sample.

Update an Index

It may be necessary to update an index over time. The same index may exist in subsequent versions of the chaincode that gets installed. In order for an index to be updated, the original index definition must have included the design document ddoc attribute and an index name. To update an index definition, use the same index name but alter the index definition. Simply edit the index JSON file and add or remove fields from the index. Fabric only supports the index type JSON; changing the index type is not supported. The updated index definition gets redeployed to the peer’s state database when the chaincode is installed and instantiated. Changes to the index name or ddoc attributes will result in a new index being created, and the original index will remain unchanged in CouchDB until it is removed.
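
For example, to extend the index used in this tutorial with an additional field while keeping the same name and ddoc, you could edit the index JSON file as follows (the added color field is purely illustrative):

{"index":{"fields":["docType","owner","color"]},"ddoc":"indexOwnerDoc", "name":"indexOwner","type":"json"}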

Note

If the state database has a significant volume of data, it will take some time for the index to be re-built, during which time chaincode invokes that issue queries may fail or time out.

Iterating on your index definition

If you have access to your peer’s CouchDB state database in a development environment, you can iteratively test various indexes in support of your chaincode queries. Any changes to chaincode though would require redeployment. Use the CouchDB Fauxton interface or a command line curl utility to create and update indexes.

Note

The Fauxton interface is a web UI for the creation, update, and deployment of indexes to CouchDB. If you want to try out this interface, there is an example of the format of the Fauxton version of the index in the Marbles sample. If you have deployed the BYFN network with CouchDB, the Fauxton interface can be loaded by opening a browser and navigating to http://localhost:5984/_utils.

Alternatively, if you prefer not to use the Fauxton UI, the following is an example of a curl command which can be used to create the index on the database mychannel_marbles:

// Index for docType, owner.
// Example curl command line to define index in the CouchDB channel_chaincode database

curl -i -X POST -H "Content-Type: application/json" -d \
       "{\"index\":{\"fields\":[\"docType\",\"owner\"]},
         \"name\":\"indexOwner\",
         \"ddoc\":\"indexOwnerDoc\",
         \"type\":\"json\"}" http://hostname:port/mychannel_marbles/_index

Note

If you are using BYFN configured with CouchDB, replace hostname:port with localhost:5984.

Delete an Index

Index deletion is not managed by Fabric tooling. If you need to delete an index, manually issue a curl command against the database or delete it using the Fauxton interface.

The format of the curl command to delete an index would be:

curl -X DELETE http://localhost:5984/{database_name}/_index/{design_doc}/json/{index_name} -H  "accept: */*" -H  "Host: localhost:5984"

To delete the index used in this tutorial, the curl command would be:

curl -X DELETE http://localhost:5984/mychannel_marbles/_index/indexOwnerDoc/json/indexOwner -H  "accept: */*" -H  "Host: localhost:5984"

Videos

Refer to the Hyperledger Fabric channel on YouTube.



This collection contains developer demonstrations of the v1 features and components, such as: the ledger, channels, gossip, the SDK, chaincode, MSP, and more…

Operations Guides

Upgrading to the Newest Version of Fabric

At a high level, upgrading a Fabric network from v1.3 to v1.4 can be performed by following these steps:

  • Upgrade the binaries for the ordering service, the Fabric CA, and the peers. These upgrades may be done in parallel.

  • Upgrade client SDKs.

  • (Optional) Upgrade the Kafka cluster.

To help understand this process, we’ve created the Upgrading Your Network Components tutorial that will take you through most of the major upgrade steps, including upgrading peers and orderers. We’ve included both a script as well as the individual steps to achieve these upgrades.

Because our tutorial leverages the Building Your First Network (BYFN) sample, it has certain limitations (it does not use Fabric CA, for example). Therefore we have included a section at the end of the tutorial that will show how to upgrade your CA, Kafka clusters, CouchDB, Zookeeper, vendored chaincode shims, and Node SDK clients.

Because there are no new capability requirements in v1.4, the upgrade process does not require any channel configuration transactions.

Setting up an ordering node

In this topic, we’ll describe the process for bootstrapping an ordering node. If you want more information about the different ordering service implementations and their relative strengths and weaknesses, check out our conceptual documentation about ordering.

Broadly, this topic will involve a few interrelated steps:

  1. Creating the organization your ordering node belongs to (if you have not already done so)

  2. Configuring your node (using orderer.yaml)

  3. Creating the genesis block for the orderer system channel

  4. Bootstrapping the orderer

Note: this topic assumes you have already pulled the Hyperledger Fabric orderer images from docker hub.

Create an organization definition

Like peers, all orderers must belong to an organization that must be created before the orderer itself is created. This organization has a definition encapsulated by a Membership Service Provider (MSP) that is created by a Certificate Authority (CA) dedicated to creating the certificates and MSP for the organization.

For information about creating a CA and using it to create users and an MSP, check out the Fabric CA user’s guide.
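
As a rough sketch only (the CA address, identity names, and secrets below are placeholders; the Fabric CA user’s guide is the authoritative reference), registering and enrolling an orderer identity might look like:

// Enroll the CA's bootstrap admin, then register and enroll an orderer identity
fabric-ca-client enroll -u http://admin:adminpw@localhost:7054
fabric-ca-client register --id.name orderer1 --id.type orderer --id.secret ordererpw
fabric-ca-client enroll -u http://orderer1:ordererpw@localhost:7054 -M $FABRIC_CFG_PATH/msp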

Configure your node

The configuration of the orderer is handled through a YAML file called orderer.yaml. The FABRIC_CFG_PATH environment variable is used to point to the orderer.yaml file you’ve configured, which itself points to a series of files and certificates on your file system.

To look at a sample orderer.yaml, check out the fabric-samples github repo, which should be read and studied closely before proceeding. Note in particular a few values:

  • LocalMSPID — this is the name of the MSP, generated by your CA, of your orderer organization. This is where your orderer organization admins will be listed.

  • LocalMSPDir — the place in your file system where the local MSP is located.

  • # TLS enabled, Enabled: false. This is where you specify whether you want to enable TLS. If you set this value to true, you will have to specify the locations of the relevant TLS certificates. Note that this is mandatory for Raft nodes.

  • GenesisFile — this is the name of the genesis block you will generate for this ordering service.

  • GenesisMethod — the method by which the genesis block is created. This can be either file, in which case the file specified in GenesisFile is used, or provisional, in which case the profile specified in GenesisProfile is used.
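
Pulling these values together, a pared-down sketch of the General section of orderer.yaml might look like the following (the values shown are illustrative, not a complete or recommended configuration):

General:
    ListenAddress: 127.0.0.1
    ListenPort: 7050
    TLS:
        Enabled: false
    GenesisMethod: file
    GenesisFile: genesisblock
    LocalMSPDir: msp
    LocalMSPID: OrdererMSP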

If you are deploying this node as part of a cluster (for example, as part of a cluster of Raft nodes), make note of the Cluster and Consensus sections.

If you plan to deploy a Kafka based ordering service, you will need to complete the Kafka section.

Generate the genesis block of the orderer

The first block of a newly created channel is known as a “genesis block”. If this genesis block is being created as part of the creation of a new network (in other words, if the orderer being created will not be joined to an existing cluster of orderers), then this genesis block will be the first block of the “orderer system channel” (also known as the “ordering system channel”), a special channel managed by the orderer admins which includes a list of the organizations permitted to create channels. The genesis block of the orderer system channel is special: it must be created and included in the configuration of the node before the node can be started.

To learn how to create a genesis block using the configtxgen tool, check out Channel Configuration (configtx).
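
For example, assuming your configtx.yaml contains a suitable profile (the TwoOrgsOrdererGenesis profile from the BYFN sample is used here for illustration), the genesis block can be generated with a command along these lines:

configtxgen -profile TwoOrgsOrdererGenesis -channelID byfn-sys-channel -outputBlock ./channel-artifacts/genesis.block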

Bootstrap the ordering node

Once you have built the images, created the MSP, configured your orderer.yaml, and created the genesis block, you’re ready to start your orderer using a command that will look similar to:

docker-compose -f docker-compose-cli.yaml up -d --no-deps orderer.example.com

With the address of your orderer replacing orderer.example.com.

Updating a Channel Configuration

What is a Channel Configuration?

A channel configuration contains all of the information relevant to the administration of a channel. Most importantly, the channel configuration specifies which organizations are members of the channel, but it also includes other channel-wide configuration information such as channel access policies and block batch sizes.

This configuration is stored on the ledger in a block, and is therefore known as a configuration (config) block. Configuration blocks contain a single configuration. The first of these blocks is known as the “genesis block” and contains the initial configuration required to bootstrap a channel. Each time the configuration of a channel changes, it is done through a new configuration block, with the latest configuration block representing the current channel configuration. Orderers and peers keep the current channel configuration in memory to facilitate all channel operations, such as cutting a new block and validating block transactions.

Because the configuration is stored in blocks, updating a config happens through a process called a “configuration transaction” (though the process is a little different from a normal transaction). Updating a config is a process of pulling the config, translating it into a human-readable format, modifying it, and then submitting it for approval.

For a more in-depth look at the process for pulling a config and translating it into JSON, check out Adding an Org to a Channel. In this document, we’ll focus on the different ways you can edit a config and the process for getting it signed.

Editing a Config

Channels are highly configurable, but not infinitely so. Different configuration elements have different modification policies (which specify the group of identities required to sign the config update).

To see the scope of what can be changed, it’s important to look at the config in JSON format. The Adding an Org to a Channel tutorial generates one, so if you’ve gone through that document you can simply refer to it. For those who have not, we provide one here (for ease of reading, it may be helpful to put this config into a viewer that supports JSON folding, such as Atom or Visual Studio).

**Click here to see the config** ``` { "channel_group": { "groups": { "Application": { "groups": { "Org1MSP": { "mod_policy": "Admins", "policies": { "Admins": { "mod_policy": "Admins", "policy": { "type": 1, "value": { "identities": [ { "principal": { "msp_identifier": "Org1MSP", "role": "ADMIN" }, "principal_classification": "ROLE" } ], "rule": { "n_out_of": { "n": 1, "rules": [ { "signed_by": 0 } ] } }, "version": 0 } }, "version": "0" }, "Readers": { "mod_policy": "Admins", "policy": { "type": 1, "value": { "identities": [ { "principal": { "msp_identifier": "Org1MSP", "role": "MEMBER" }, "principal_classification": "ROLE" } ], "rule": { "n_out_of": { "n": 1, "rules": [ { "signed_by": 0 } ] } }, "version": 0 } }, "version": "0" }, "Writers": { "mod_policy": "Admins", "policy": { "type": 1, "value": { "identities": [ { "principal": { "msp_identifier": "Org1MSP", "role": "MEMBER" }, "principal_classification": "ROLE" } ], "rule": { "n_out_of": { "n": 1, "rules": [ { "signed_by": 0 } ] } }, "version": 0 } }, "version": "0" } }, "values": { "AnchorPeers": { "mod_policy": "Admins", "value": { "anchor_peers": [ { "host": "peer0.org1.example.com", "port": 7051 } ] }, "version": "0" }, "MSP": { "mod_policy": "Admins", "value": { "config": { "admins": [ "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNHRENDQWIrZ0F3SUJBZ0lRSWlyVmg3NVcwWmh0UjEzdmltdmliakFLQmdncWhrak9QUVFEQWpCek1Rc3cKQ1FZRFZRUUdFd0pWVXpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2Y201cFlURVdNQlFHQTFVRUJ4TU5VMkZ1SUVaeQpZVzVqYVhOamJ6RVpNQmNHQTFVRUNoTVFiM0puTVM1bGVHRnRjR3hsTG1OdmJURWNNQm9HQTFVRUF4TVRZMkV1CmIzSm5NUzVsZUdGdGNHeGxMbU52YlRBZUZ3MHhOekV4TWpreE9USTBNRFphRncweU56RXhNamN4T1RJME1EWmEKTUZzeEN6QUpCZ05WQkFZVEFsVlRNUk13RVFZRFZRUUlFd3BEWVd4cFptOXlibWxoTVJZd0ZBWURWUVFIRXcxVApZVzRnUm5KaGJtTnBjMk52TVI4d0hRWURWUVFEREJaQlpHMXBia0J2Y21jeExtVjRZVzF3YkdVdVkyOXRNRmt3CkV3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFNkdVeDlpczZ0aG1ZRE9tMmVHSlA5eW1yaXJYWE1Cd0oKQmVWb1Vpak5haUdsWE03N2NsSE5aZjArMGFjK2djRU5lMzQweGExZVFnb2Q0YjVFcmQrNmtxTk5NRXN3RGdZRApWUjBQQVFIL0JBUURBZ2VBTUF3R0ExVWRFd0VCL3dRQ01BQXdLd1lEVlIwakJDUXdJb0FnWWdoR2xCMjBGWmZCCllQemdYT280czdkU1k1V3NKSkRZbGszTDJvOXZzQ013Q2dZSUtvWkl6ajBFQXdJRFJ3QXdSQUlnYmlEWDVTMlIKRTBNWGRobDZFbmpVNm1lTEJ0eXNMR2ZpZXZWTlNmWW1UQVVDSUdVbnROangrVXZEYkZPRHZFcFRVTm5MUHp0Qwp5ZlBnOEhMdWpMaXVpaWFaCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K" ], "crypto_config": { "identity_identifier_hash_function": "SHA256", "signature_hash_family": "SHA2" }, "name": "Org1MSP", "root_certs": [ 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNRekNDQWVxZ0F3SUJBZ0lSQU03ZVdTaVM4V3VVM2haMU9tR255eXd3Q2dZSUtvWkl6ajBFQXdJd2N6RUwKTUFrR0ExVUVCaE1DVlZNeEV6QVJCZ05WQkFnVENrTmhiR2xtYjNKdWFXRXhGakFVQmdOVkJBY1REVk5oYmlCRwpjbUZ1WTJselkyOHhHVEFYQmdOVkJBb1RFRzl5WnpFdVpYaGhiWEJzWlM1amIyMHhIREFhQmdOVkJBTVRFMk5oCkxtOXlaekV1WlhoaGJYQnNaUzVqYjIwd0hoY05NVGN4TVRJNU1Ua3lOREEyV2hjTk1qY3hNVEkzTVRreU5EQTIKV2pCek1Rc3dDUVlEVlFRR0V3SlZVekVUTUJFR0ExVUVDQk1LUTJGc2FXWnZjbTVwWVRFV01CUUdBMVVFQnhNTgpVMkZ1SUVaeVlXNWphWE5qYnpFWk1CY0dBMVVFQ2hNUWIzSm5NUzVsZUdGdGNHeGxMbU52YlRFY01Cb0dBMVVFCkF4TVRZMkV1YjNKbk1TNWxlR0Z0Y0d4bExtTnZiVEJaTUJNR0J5cUdTTTQ5QWdFR0NDcUdTTTQ5QXdFSEEwSUEKQkJiTTVZS3B6UmlEbDdLWWFpSDVsVnBIeEl1TDEyaUcyWGhkMHRpbEg3MEljMGFpRUh1dG9rTkZsUXAzTWI0Zgpvb0M2bFVXWnRnRDJwMzZFNThMYkdqK2pYekJkTUE0R0ExVWREd0VCL3dRRUF3SUJwakFQQmdOVkhTVUVDREFHCkJnUlZIU1VBTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3S1FZRFZSME9CQ0lFSUdJSVJwUWR0QldYd1dEODRGenEKT0xPM1VtT1ZyQ1NRMkpaTnk5cVBiN0FqTUFvR0NDcUdTTTQ5QkFNQ0EwY0FNRVFDSUdlS2VZL1BsdGlWQTRPSgpRTWdwcDRvaGRMcGxKUFpzNERYS0NuOE9BZG9YQWlCK2g5TFdsR3ZsSDdtNkVpMXVRcDFld2ZESmxsZi9MZXczClgxaDNRY0VMZ3c9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==" ], "tls_root_certs": [ "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNTVENDQWZDZ0F3SUJBZ0lSQUtsNEFQWmV6dWt0Nk8wYjRyYjY5Y0F3Q2dZSUtvWkl6ajBFQXdJd2RqRUwKTUFrR0ExVUVCaE1DVlZNeEV6QVJCZ05WQkFnVENrTmhiR2xtYjNKdWFXRXhGakFVQmdOVkJBY1REVk5oYmlCRwpjbUZ1WTJselkyOHhHVEFYQmdOVkJBb1RFRzl5WnpFdVpYaGhiWEJzWlM1amIyMHhIekFkQmdOVkJBTVRGblJzCmMyTmhMbTl5WnpFdVpYaGhiWEJzWlM1amIyMHdIaGNOTVRjeE1USTVNVGt5TkRBMldoY05NamN4TVRJM01Ua3kKTkRBMldqQjJNUXN3Q1FZRFZRUUdFd0pWVXpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2Y201cFlURVdNQlFHQTFVRQpCeE1OVTJGdUlFWnlZVzVqYVhOamJ6RVpNQmNHQTFVRUNoTVFiM0puTVM1bGVHRnRjR3hsTG1OdmJURWZNQjBHCkExVUVBeE1XZEd4elkyRXViM0puTVM1bGVHRnRjR3hsTG1OdmJUQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDkKQXdFSEEwSUFCSnNpQXVjYlcrM0lqQ2VaaXZPakRiUmFyVlRjTW9TRS9mSnQyU0thR1d5bWQ0am5xM25MWC9vVApCVmpZb21wUG1QbGZ4R0VSWHl0UTNvOVZBL2hwNHBlalh6QmRNQTRHQTFVZER3RUIvd1FFQXdJQnBqQVBCZ05WCkhTVUVDREFHQmdSVkhTVUFNQThHQTFVZEV3RUIvd1FGTUFNQkFmOHdLUVlEVlIwT0JDSUVJSnlqZnFoa0FvY3oKdkRpNnNGSGFZL1Bvd2tPWkxPMHZ0VGdFRnVDbUpFalZNQW9HQ0NxR1NNNDlCQU1DQTBjQU1FUUNJRjVOVVdCVgpmSjgrM0lxU3J1NlFFbjlIa0lsQ0xDMnlvWTlaNHBWMnpBeFNBaUE5NWQzeDhBRXZIcUFNZnIxcXBOWHZ1TW5BCmQzUXBFa1gyWkh3ODZlQlVQZz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K" ] }, "type": 0 }, "version": "0" } }, "version": "1" }, "Org2MSP": { "mod_policy": "Admins", "policies": { "Admins": { "mod_policy": "Admins", "policy": { "type": 1, "value": { "identities": [ { "principal": { "msp_identifier": "Org2MSP", "role": "ADMIN" }, "principal_classification": "ROLE" } ], "rule": { "n_out_of": { "n": 1, "rules": [ { "signed_by": 0 } ] } }, "version": 0 } }, "version": "0" }, "Readers": { "mod_policy": "Admins", "policy": { "type": 1, "value": { "identities": [ { "principal": { "msp_identifier": "Org2MSP", "role": "MEMBER" }, "principal_classification": "ROLE" } ], "rule": { "n_out_of": { "n": 1, "rules": [ { "signed_by": 0 } ] } }, "version": 0 } }, "version": "0" }, "Writers": { "mod_policy": "Admins", "policy": { "type": 1, "value": { "identities": [ { "principal": { "msp_identifier": "Org2MSP", "role": "MEMBER" }, "principal_classification": "ROLE" } ], "rule": { "n_out_of": { "n": 1, "rules": [ { "signed_by": 0 } ] } }, "version": 0 } }, "version": "0" } }, "values": { "AnchorPeers": { "mod_policy": "Admins", "value": { "anchor_peers": [ { "host": "peer0.org2.example.com", "port": 9051 } ] }, "version": "0" }, "MSP": { "mod_policy": "Admins", "value": { "config": { "admins": [ 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNHVENDQWNDZ0F3SUJBZ0lSQU5Pb1lIbk9seU94dTJxZFBteStyV293Q2dZSUtvWkl6ajBFQXdJd2N6RUwKTUFrR0ExVUVCaE1DVlZNeEV6QVJCZ05WQkFnVENrTmhiR2xtYjNKdWFXRXhGakFVQmdOVkJBY1REVk5oYmlCRwpjbUZ1WTJselkyOHhHVEFYQmdOVkJBb1RFRzl5WnpJdVpYaGhiWEJzWlM1amIyMHhIREFhQmdOVkJBTVRFMk5oCkxtOXlaekl1WlhoaGJYQnNaUzVqYjIwd0hoY05NVGN4TVRJNU1Ua3lOREEyV2hjTk1qY3hNVEkzTVRreU5EQTIKV2pCYk1Rc3dDUVlEVlFRR0V3SlZVekVUTUJFR0ExVUVDQk1LUTJGc2FXWnZjbTVwWVRFV01CUUdBMVVFQnhNTgpVMkZ1SUVaeVlXNWphWE5qYnpFZk1CMEdBMVVFQXd3V1FXUnRhVzVBYjNKbk1pNWxlR0Z0Y0d4bExtTnZiVEJaCk1CTUdCeXFHU000OUFnRUdDQ3FHU000OUF3RUhBMElBQkh1M0ZWMGlqdFFzckpsbnBCblgyRy9ickFjTHFJSzgKVDFiSWFyZlpvSkhtQm5IVW11RTBhc1dyKzM4VUs0N3hyczNZMGMycGhFVjIvRnhHbHhXMUZubWpUVEJMTUE0RwpBMVVkRHdFQi93UUVBd0lIZ0RBTUJnTlZIUk1CQWY4RUFqQUFNQ3NHQTFVZEl3UWtNQ0tBSU1pSzdteFpnQVVmCmdrN0RPTklXd2F4YktHVGdLSnVSNjZqVmordHZEV3RUTUFvR0NDcUdTTTQ5QkFNQ0EwY0FNRVFDSUQxaEtRdk8KVWxyWmVZMmZZY1N2YWExQmJPM3BVb3NxL2tZVElyaVdVM1J3QWlBR29mWmVPUFByWXVlTlk0Z2JCV2tjc3lpZgpNMkJmeXQwWG9NUThyT2VidUE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==" ], "crypto_config": { "identity_identifier_hash_function": "SHA256", "signature_hash_family": "SHA2" }, "name": "Org2MSP", "root_certs": [ "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNSRENDQWVxZ0F3SUJBZ0lSQU1pVXk5SGRSbXB5MDdsSjhRMlZNWXN3Q2dZSUtvWkl6ajBFQXdJd2N6RUwKTUFrR0ExVUVCaE1DVlZNeEV6QVJCZ05WQkFnVENrTmhiR2xtYjNKdWFXRXhGakFVQmdOVkJBY1REVk5oYmlCRwpjbUZ1WTJselkyOHhHVEFYQmdOVkJBb1RFRzl5WnpJdVpYaGhiWEJzWlM1amIyMHhIREFhQmdOVkJBTVRFMk5oCkxtOXlaekl1WlhoaGJYQnNaUzVqYjIwd0hoY05NVGN4TVRJNU1Ua3lOREEyV2hjTk1qY3hNVEkzTVRreU5EQTIKV2pCek1Rc3dDUVlEVlFRR0V3SlZVekVUTUJFR0ExVUVDQk1LUTJGc2FXWnZjbTVwWVRFV01CUUdBMVVFQnhNTgpVMkZ1SUVaeVlXNWphWE5qYnpFWk1CY0dBMVVFQ2hNUWIzSm5NaTVsZUdGdGNHeGxMbU52YlRFY01Cb0dBMVVFCkF4TVRZMkV1YjNKbk1pNWxlR0Z0Y0d4bExtTnZiVEJaTUJNR0J5cUdTTTQ5QWdFR0NDcUdTTTQ5QXdFSEEwSUEKQk50YW1PY1hyaGwrQ2hzYXNSeklNWjV3OHpPWVhGcXhQbGV0a3d5UHJrbHpKWE01Qjl4QkRRVWlWNldJS2tGSwo0Vmd5RlNVWGZqaGdtd25kMUNBVkJXaWpYekJkTUE0R0ExVWREd0VCL3dRRUF3SUJwakFQQmdOVkhTVUVDREFHCkJnUlZIU1VBTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3S1FZRFZSME9CQ0lFSU1pSzdteFpnQVVmZ2s3RE9OSVcKd2F4YktHVGdLSnVSNjZqVmordHZEV3RUTUFvR0NDcUdTTTQ5QkFNQ0EwZ0FNRVVDSVFEQ3FFRmFqeU5IQmVaRworOUdWVkNFNWI1YTF5ZlhvS3lkemdLMVgyOTl4ZmdJZ05BSUUvM3JINHFsUE9HbjdSS3Yram9WaUNHS2t6L0F1Cm9FNzI4RWR6WmdRPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==" ], "tls_root_certs": [ "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNTakNDQWZDZ0F3SUJBZ0lSQU9JNmRWUWMraHBZdkdMSlFQM1YwQU13Q2dZSUtvWkl6ajBFQXdJd2RqRUwKTUFrR0ExVUVCaE1DVlZNeEV6QVJCZ05WQkFnVENrTmhiR2xtYjNKdWFXRXhGakFVQmdOVkJBY1REVk5oYmlCRwpjbUZ1WTJselkyOHhHVEFYQmdOVkJBb1RFRzl5WnpJdVpYaGhiWEJzWlM1amIyMHhIekFkQmdOVkJBTVRGblJzCmMyTmhMbTl5WnpJdVpYaGhiWEJzWlM1amIyMHdIaGNOTVRjeE1USTVNVGt5TkRBMldoY05NamN4TVRJM01Ua3kKTkRBMldqQjJNUXN3Q1FZRFZRUUdFd0pWVXpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2Y201cFlURVdNQlFHQTFVRQpCeE1OVTJGdUlFWnlZVzVqYVhOamJ6RVpNQmNHQTFVRUNoTVFiM0puTWk1bGVHRnRjR3hsTG1OdmJURWZNQjBHCkExVUVBeE1XZEd4elkyRXViM0puTWk1bGVHRnRjR3hsTG1OdmJUQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDkKQXdFSEEwSUFCTWZ1QTMwQVVBT1ZKRG1qVlBZd1lNbTlweW92MFN6OHY4SUQ5N0twSHhXOHVOOUdSOU84aVdFMgo5bllWWVpiZFB2V1h1RCszblpweUFNcGZja3YvYUV5alh6QmRNQTRHQTFVZER3RUIvd1FFQXdJQnBqQVBCZ05WCkhTVUVDREFHQmdSVkhTVUFNQThHQTFVZEV3RUIvd1FGTUFNQkFmOHdLUVlEVlIwT0JDSUVJRnk5VHBHcStQL08KUGRXbkZXdWRPTnFqVDRxOEVKcDJmbERnVCtFV2RnRnFNQW9HQ0NxR1NNNDlCQU1DQTBnQU1FVUNJUUNZYlhSeApXWDZoUitPU0xBNSs4bFRwcXRMWnNhOHVuS3J3ek1UYXlQUXNVd0lnVSs5YXdaaE0xRzg3bGE0V0h4cmt5eVZ2CkU4S1ZsR09IVHVPWm9TMU5PT0U9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K" ] }, "type": 0 }, "version": "0" } }, "version": "1" }, 
"Org3MSP": { "groups": {}, "mod_policy": "Admins", "policies": { "Admins": { "mod_policy": "Admins", "policy": { "type": 1, "value": { "identities": [ { "principal": { "msp_identifier": "Org3MSP", "role": "ADMIN" }, "principal_classification": "ROLE" } ], "rule": { "n_out_of": { "n": 1, "rules": [ { "signed_by": 0 } ] } }, "version": 0 } }, "version": "0" }, "Readers": { "mod_policy": "Admins", "policy": { "type": 1, "value": { "identities": [ { "principal": { "msp_identifier": "Org3MSP", "role": "MEMBER" }, "principal_classification": "ROLE" } ], "rule": { "n_out_of": { "n": 1, "rules": [ { "signed_by": 0 } ] } }, "version": 0 } }, "version": "0" }, "Writers": { "mod_policy": "Admins", "policy": { "type": 1, "value": { "identities": [ { "principal": { "msp_identifier": "Org3MSP", "role": "MEMBER" }, "principal_classification": "ROLE" } ], "rule": { "n_out_of": { "n": 1, "rules": [ { "signed_by": 0 } ] } }, "version": 0 } }, "version": "0" } }, "values": { "MSP": { "mod_policy": "Admins", "value": { "config": { "admins": [ "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNHRENDQWIrZ0F3SUJBZ0lRQUlSNWN4U0hpVm1kSm9uY3FJVUxXekFLQmdncWhrak9QUVFEQWpCek1Rc3cKQ1FZRFZRUUdFd0pWVXpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2Y201cFlURVdNQlFHQTFVRUJ4TU5VMkZ1SUVaeQpZVzVqYVhOamJ6RVpNQmNHQTFVRUNoTVFiM0puTXk1bGVHRnRjR3hsTG1OdmJURWNNQm9HQTFVRUF4TVRZMkV1CmIzSm5NeTVsZUdGdGNHeGxMbU52YlRBZUZ3MHhOekV4TWpreE9UTTRNekJhRncweU56RXhNamN4T1RNNE16QmEKTUZzeEN6QUpCZ05WQkFZVEFsVlRNUk13RVFZRFZRUUlFd3BEWVd4cFptOXlibWxoTVJZd0ZBWURWUVFIRXcxVApZVzRnUm5KaGJtTnBjMk52TVI4d0hRWURWUVFEREJaQlpHMXBia0J2Y21jekxtVjRZVzF3YkdVdVkyOXRNRmt3CkV3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFSFlkVFY2ZC80cmR4WFd2cm1qZ0hIQlhXc2lxUWxrcnQKZ0p1NzMxcG0yZDRrWU82aEd2b2tFRFBwbkZFdFBwdkw3K1F1UjhYdkFQM0tqTkt0NHdMRG5hTk5NRXN3RGdZRApWUjBQQVFIL0JBUURBZ2VBTUF3R0ExVWRFd0VCL3dRQ01BQXdLd1lEVlIwakJDUXdJb0FnSWNxUFVhM1VQNmN0Ck9LZmYvKzVpMWJZVUZFeVFlMVAyU0hBRldWSWUxYzB3Q2dZSUtvWkl6ajBFQXdJRFJ3QXdSQUlnUm5LRnhsTlYKSmppVGpkZmVoczRwNy9qMkt3bFVuUWVuNFkyUnV6QjFrbm9DSUd3dEZ1TEdpRFY2THZSL2pHVXR3UkNyeGw5ZApVNENCeDhGbjBMdXNMTkJYCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K" ], "crypto_config": { "identity_identifier_hash_function": "SHA256", "signature_hash_family": "SHA2" }, "name": "Org3MSP", "root_certs": [ "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNRakNDQWVtZ0F3SUJBZ0lRUkN1U2Y0RVJNaDdHQW1ydTFIQ2FZREFLQmdncWhrak9QUVFEQWpCek1Rc3cKQ1FZRFZRUUdFd0pWVXpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2Y201cFlURVdNQlFHQTFVRUJ4TU5VMkZ1SUVaeQpZVzVqYVhOamJ6RVpNQmNHQTFVRUNoTVFiM0puTXk1bGVHRnRjR3hsTG1OdmJURWNNQm9HQTFVRUF4TVRZMkV1CmIzSm5NeTVsZUdGdGNHeGxMbU52YlRBZUZ3MHhOekV4TWpreE9UTTRNekJhRncweU56RXhNamN4T1RNNE16QmEKTUhNeEN6QUpCZ05WQkFZVEFsVlRNUk13RVFZRFZRUUlFd3BEWVd4cFptOXlibWxoTVJZd0ZBWURWUVFIRXcxVApZVzRnUm5KaGJtTnBjMk52TVJrd0Z3WURWUVFLRXhCdmNtY3pMbVY0WVcxd2JHVXVZMjl0TVJ3d0dnWURWUVFECkV4TmpZUzV2Y21jekxtVjRZVzF3YkdVdVkyOXRNRmt3RXdZSEtvWkl6ajBDQVFZSUtvWkl6ajBEQVFjRFFnQUUKZXFxOFFQMnllM08vM1J3UzI0SWdtRVdST3RnK3Zyc2pRY1BvTU42NEZiUGJKbmExMklNaVdDUTF6ZEZiTU9hSAorMUlrb21yY0RDL1ZpejkvY0M0NW9xTmZNRjB3RGdZRFZSMFBBUUgvQkFRREFnR21NQThHQTFVZEpRUUlNQVlHCkJGVWRKUUF3RHdZRFZSMFRBUUgvQkFVd0F3RUIvekFwQmdOVkhRNEVJZ1FnSWNxUFVhM1VQNmN0T0tmZi8rNWkKMWJZVUZFeVFlMVAyU0hBRldWSWUxYzB3Q2dZSUtvWkl6ajBFQXdJRFJ3QXdSQUlnTEgxL2xSZElWTVA4Z2FWeQpKRW01QWQ0SjhwZ256N1BVV2JIMzZvdVg4K1lDSUNPK20vUG9DbDRIbTlFbXhFN3ZnUHlOY2trVWd0SlRiTFhqCk5SWjBxNTdWCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K" ], "tls_root_certs": [ 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNTVENDQWZDZ0F3SUJBZ0lSQU9xc2JQQzFOVHJzclEvUUNpalh6K0F3Q2dZSUtvWkl6ajBFQXdJd2RqRUwKTUFrR0ExVUVCaE1DVlZNeEV6QVJCZ05WQkFnVENrTmhiR2xtYjNKdWFXRXhGakFVQmdOVkJBY1REVk5oYmlCRwpjbUZ1WTJselkyOHhHVEFYQmdOVkJBb1RFRzl5WnpNdVpYaGhiWEJzWlM1amIyMHhIekFkQmdOVkJBTVRGblJzCmMyTmhMbTl5WnpNdVpYaGhiWEJzWlM1amIyMHdIaGNOTVRjeE1USTVNVGt6T0RNd1doY05NamN4TVRJM01Ua3oKT0RNd1dqQjJNUXN3Q1FZRFZRUUdFd0pWVXpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2Y201cFlURVdNQlFHQTFVRQpCeE1OVTJGdUlFWnlZVzVqYVhOamJ6RVpNQmNHQTFVRUNoTVFiM0puTXk1bGVHRnRjR3hsTG1OdmJURWZNQjBHCkExVUVBeE1XZEd4elkyRXViM0puTXk1bGVHRnRjR3hsTG1OdmJUQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDkKQXdFSEEwSUFCSVJTTHdDejdyWENiY0VLMmhxSnhBVm9DaDhkejNqcnA5RHMyYW9TQjBVNTZkSUZhVmZoR2FsKwovdGp6YXlndXpFalFhNlJ1MmhQVnRGM2NvQnJ2Ulpxalh6QmRNQTRHQTFVZER3RUIvd1FFQXdJQnBqQVBCZ05WCkhTVUVDREFHQmdSVkhTVUFNQThHQTFVZEV3RUIvd1FGTUFNQkFmOHdLUVlEVlIwT0JDSUVJQ2FkVERGa0JPTGkKblcrN2xCbDExL3pPbXk4a1BlYXc0MVNZWEF6cVhnZEVNQW9HQ0NxR1NNNDlCQU1DQTBjQU1FUUNJQlgyMWR3UwpGaG5NdDhHWXUweEgrUGd5aXQreFdQUjBuTE1Jc1p2dVlRaktBaUFLUlE5N2VrLzRDTzZPWUtSakR0VFM4UFRmCm9nTmJ6dTBxcThjbVhseW5jZz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K" ] }, "type": 0 }, "version": "0" } }, "version": "0" } }, "mod_policy": "Admins", "policies": { "Admins": { "mod_policy": "Admins", "policy": { "type": 3, "value": { "rule": "MAJORITY", "sub_policy": "Admins" } }, "version": "0" }, "Readers": { "mod_policy": "Admins", "policy": { "type": 3, "value": { "rule": "ANY", "sub_policy": "Readers" } }, "version": "0" }, "Writers": { "mod_policy": "Admins", "policy": { "type": 3, "value": { "rule": "ANY", "sub_policy": "Writers" } }, "version": "0" } }, "version": "1" }, "Orderer": { "groups": { "OrdererOrg": { "mod_policy": "Admins", "policies": { "Admins": { "mod_policy": "Admins", "policy": { "type": 1, "value": { "identities": [ { "principal": { "msp_identifier": "OrdererMSP", "role": "ADMIN" }, "principal_classification": "ROLE" } ], "rule": { "n_out_of": { "n": 1, "rules": [ { "signed_by": 0 } ] } }, "version": 0 } }, "version": "0" }, "Readers": { "mod_policy": "Admins", "policy": { "type": 1, "value": { "identities": [ { "principal": { "msp_identifier": "OrdererMSP", "role": "MEMBER" }, "principal_classification": "ROLE" } ], "rule": { "n_out_of": { "n": 1, "rules": [ { "signed_by": 0 } ] } }, "version": 0 } }, "version": "0" }, "Writers": { "mod_policy": "Admins", "policy": { "type": 1, "value": { "identities": [ { "principal": { "msp_identifier": "OrdererMSP", "role": "MEMBER" }, "principal_classification": "ROLE" } ], "rule": { "n_out_of": { "n": 1, "rules": [ { "signed_by": 0 } ] } }, "version": 0 } }, "version": "0" } }, "values": { "MSP": { "mod_policy": "Admins", "value": { "config": { "admins": [ 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNDakNDQWJDZ0F3SUJBZ0lRSFNTTnIyMWRLTTB6THZ0dEdoQnpMVEFLQmdncWhrak9QUVFEQWpCcE1Rc3cKQ1FZRFZRUUdFd0pWVXpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2Y201cFlURVdNQlFHQTFVRUJ4TU5VMkZ1SUVaeQpZVzVqYVhOamJ6RVVNQklHQTFVRUNoTUxaWGhoYlhCc1pTNWpiMjB4RnpBVkJnTlZCQU1URG1OaExtVjRZVzF3CmJHVXVZMjl0TUI0WERURTNNVEV5T1RFNU1qUXdObG9YRFRJM01URXlOekU1TWpRd05sb3dWakVMTUFrR0ExVUUKQmhNQ1ZWTXhFekFSQmdOVkJBZ1RDa05oYkdsbWIzSnVhV0V4RmpBVUJnTlZCQWNURFZOaGJpQkdjbUZ1WTJsegpZMjh4R2pBWUJnTlZCQU1NRVVGa2JXbHVRR1Y0WVcxd2JHVXVZMjl0TUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJCnpqMERBUWNEUWdBRTZCTVcvY0RGUkUvakFSenV5N1BjeFQ5a3pnZitudXdwKzhzK2xia0hZd0ZpaForMWRhR3gKKzhpS1hDY0YrZ0tpcVBEQXBpZ2REOXNSeTBoTEMwQnRacU5OTUVzd0RnWURWUjBQQVFIL0JBUURBZ2VBTUF3RwpBMVVkRXdFQi93UUNNQUF3S3dZRFZSMGpCQ1F3SW9BZ3o3bDQ2ZXRrODU0NFJEanZENVB6YjV3TzI5N0lIMnNUCngwTjAzOHZibkpzd0NnWUlLb1pJemowRUF3SURTQUF3UlFJaEFNRTJPWXljSnVyYzhVY2hkeTA5RU50RTNFUDIKcVoxSnFTOWVCK0gxSG5FSkFpQUtXa2h5TmI0akRPS2MramJIVmgwV0YrZ3J4UlJYT1hGaEl4ei85elI3UUE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==" ], "crypto_config": { "identity_identifier_hash_function": "SHA256", "signature_hash_family": "SHA2" }, "name": "OrdererMSP", "root_certs": [ "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNMakNDQWRXZ0F3SUJBZ0lRY2cxUVZkVmU2Skd6YVU1cmxjcW4vakFLQmdncWhrak9QUVFEQWpCcE1Rc3cKQ1FZRFZRUUdFd0pWVXpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2Y201cFlURVdNQlFHQTFVRUJ4TU5VMkZ1SUVaeQpZVzVqYVhOamJ6RVVNQklHQTFVRUNoTUxaWGhoYlhCc1pTNWpiMjB4RnpBVkJnTlZCQU1URG1OaExtVjRZVzF3CmJHVXVZMjl0TUI0WERURTNNVEV5T1RFNU1qUXdObG9YRFRJM01URXlOekU1TWpRd05sb3dhVEVMTUFrR0ExVUUKQmhNQ1ZWTXhFekFSQmdOVkJBZ1RDa05oYkdsbWIzSnVhV0V4RmpBVUJnTlZCQWNURFZOaGJpQkdjbUZ1WTJsegpZMjh4RkRBU0JnTlZCQW9UQzJWNFlXMXdiR1V1WTI5dE1SY3dGUVlEVlFRREV3NWpZUzVsZUdGdGNHeGxMbU52CmJUQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJQTVI2MGdCcVJham9hS0U1TExRYjRIb28wN3QKYTRuM21Ncy9NRGloQVQ5YUN4UGZBcDM5SS8wMmwvZ2xiMTdCcEtxZGpGd0JKZHNuMVN6ZnQ3NlZkTitqWHpCZApNQTRHQTFVZER3RUIvd1FFQXdJQnBqQVBCZ05WSFNVRUNEQUdCZ1JWSFNVQU1BOEdBMVVkRXdFQi93UUZNQU1CCkFmOHdLUVlEVlIwT0JDSUVJTSs1ZU9uclpQT2VPRVE0N3crVDgyK2NEdHZleUI5ckU4ZERkTi9MMjV5Yk1Bb0cKQ0NxR1NNNDlCQU1DQTBjQU1FUUNJQVB6SGNOUmQ2a3QxSEdpWEFDclFTM0grL3R5NmcvVFpJa1pTeXIybmdLNQpBaUJnb1BVTTEwTHNsMVFtb2dlbFBjblZGZjJoODBXR2I3NGRIS2tzVFJKUkx3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=" ], "tls_root_certs": [ "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNORENDQWR1Z0F3SUJBZ0lRYWJ5SUl6cldtUFNzSjJacisvRVpXVEFLQmdncWhrak9QUVFEQWpCc01Rc3cKQ1FZRFZRUUdFd0pWVXpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2Y201cFlURVdNQlFHQTFVRUJ4TU5VMkZ1SUVaeQpZVzVqYVhOamJ6RVVNQklHQTFVRUNoTUxaWGhoYlhCc1pTNWpiMjB4R2pBWUJnTlZCQU1URVhSc2MyTmhMbVY0CllXMXdiR1V1WTI5dE1CNFhEVEUzTVRFeU9URTVNalF3TmxvWERUSTNNVEV5TnpFNU1qUXdObG93YkRFTE1Ba0cKQTFVRUJoTUNWVk14RXpBUkJnTlZCQWdUQ2tOaGJHbG1iM0p1YVdFeEZqQVVCZ05WQkFjVERWTmhiaUJHY21GdQpZMmx6WTI4eEZEQVNCZ05WQkFvVEMyVjRZVzF3YkdVdVkyOXRNUm93R0FZRFZRUURFeEYwYkhOallTNWxlR0Z0CmNHeGxMbU52YlRCWk1CTUdCeXFHU000OUFnRUdDQ3FHU000OUF3RUhBMElBQkVZVE9mdG1rTHdiSlRNeG1aVzMKZVdqRUQ2eW1UeEhYeWFQdTM2Y1NQWDlldDZyU3Y5UFpCTGxyK3hZN1dtYlhyOHM5K3E1RDMwWHl6OEh1OWthMQpSc1dqWHpCZE1BNEdBMVVkRHdFQi93UUVBd0lCcGpBUEJnTlZIU1VFQ0RBR0JnUlZIU1VBTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0tRWURWUjBPQkNJRUlJcjduNTVjTWlUdENEYmM5UGU0RFpnZ0ZYdHV2RktTdnBNYUhzbzAKSnpFd01Bb0dDQ3FHU000OUJBTUNBMGNBTUVRQ0lGM1gvMGtQRkFVQzV2N25JVVh6SmI5Z3JscWxET05UeVg2QQpvcmtFVTdWb0FpQkpMbS9IUFZ0aVRHY2NldUZPZTE4SnNwd0JTZ1hxNnY1K1BobEdsbU9pWHc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==" ] }, "type": 0 }, "version": "0" } }, "version": "0" } }, "mod_policy": "Admins", "policies": { "Admins": { "mod_policy": "Admins", "policy": { "type": 3, 
"value": { "rule": "MAJORITY", "sub_policy": "Admins" } }, "version": "0" }, "BlockValidation": { "mod_policy": "Admins", "policy": { "type": 3, "value": { "rule": "ANY", "sub_policy": "Writers" } }, "version": "0" }, "Readers": { "mod_policy": "Admins", "policy": { "type": 3, "value": { "rule": "ANY", "sub_policy": "Readers" } }, "version": "0" }, "Writers": { "mod_policy": "Admins", "policy": { "type": 3, "value": { "rule": "ANY", "sub_policy": "Writers" } }, "version": "0" } }, "values": { "BatchSize": { "mod_policy": "Admins", "value": { "absolute_max_bytes": 103809024, "max_message_count": 10, "preferred_max_bytes": 524288 }, "version": "0" }, "BatchTimeout": { "mod_policy": "Admins", "value": { "timeout": "2s" }, "version": "0" }, "ChannelRestrictions": { "mod_policy": "Admins", "version": "0" }, "ConsensusType": { "mod_policy": "Admins", "value": { "type": "solo" }, "version": "0" } }, "version": "0" } }, "mod_policy": "", "policies": { "Admins": { "mod_policy": "Admins", "policy": { "type": 3, "value": { "rule": "MAJORITY", "sub_policy": "Admins" } }, "version": "0" }, "Readers": { "mod_policy": "Admins", "policy": { "type": 3, "value": { "rule": "ANY", "sub_policy": "Readers" } }, "version": "0" }, "Writers": { "mod_policy": "Admins", "policy": { "type": 3, "value": { "rule": "ANY", "sub_policy": "Writers" } }, "version": "0" } }, "values": { "BlockDataHashingStructure": { "mod_policy": "Admins", "value": { "width": 4294967295 }, "version": "0" }, "Consortium": { "mod_policy": "Admins", "value": { "name": "SampleConsortium" }, "version": "0" }, "HashingAlgorithm": { "mod_policy": "Admins", "value": { "name": "SHA256" }, "version": "0" }, "OrdererAddresses": { "mod_policy": "/Channel/Orderer/Admins", "value": { "addresses": [ "orderer.example.com:7050" ] }, "version": "0" } }, "version": "0" }, "sequence": "3", "type": 0 } ```

In this form the config might look intimidating, but once you study it you’ll see that it has a logical structure.

Beyond the definition of the policies (who can do certain things at the channel level, and who has the permission to change who can change the config), channels also have other kinds of features that can be modified using a config update. Adding an org to a channel (see Adding an Org to a Channel) is one of the most important. Some other things that can be changed with a config update include:

  • Batch Size. These parameters dictate the number and size of transactions in a block. No block will be larger than absolute_max_bytes or contain more than max_message_count transactions. If it is possible to construct a block under preferred_max_bytes, then a block will be cut prematurely, and transactions larger than this size will appear in their own block.

    {
      "absolute_max_bytes": 102760448,
      "max_message_count": 10,
      "preferred_max_bytes": 524288
    }
    
  • Batch Timeout. The amount of time to wait after the first transaction arrives for additional transactions before cutting a block. Decreasing this value will improve latency, but decreasing it too much may decrease throughput by not allowing the block to fill to its maximum capacity.

    { "timeout": "2s" }
    
  • Channel Restrictions. The total number of channels the orderer is willing to allocate may be specified as max_count. This is primarily useful in pre-production environments with weak consortium ChannelCreation policies.

    {
     "max_count":1000
    }
    
  • Channel Creation Policy. Defines the policy value which will be set as the mod_policy for the Application group of new channels for the consortium it is defined in. The signature set attached to the channel creation request will be checked against the instantiation of this policy in the new channel to ensure that the channel creation is authorized. Note that this config value is only set in the orderer system channel.

    {
    "type": 3,
    "value": {
      "rule": "ANY",
      "sub_policy": "Admins"
      }
    }
    
  • Kafka brokers. When ConsensusType is set to kafka, the brokers list enumerates some subset (or preferably all) of the Kafka brokers for the orderer to initially connect to at startup. Note that it is not possible to change your consensus type after it has been established (during the bootstrapping of the genesis block).

    {
      "brokers": [
        "kafka0:9092",
        "kafka1:9092",
        "kafka2:9092",
        "kafka3:9092"
      ]
    }
    
  • Anchor Peers Definition. Defines the location of the anchor peers for each Org.

    {
      "host": "peer0.org2.example.com",
        "port": 9051
    }
    
  • Hashing Structure. The block data is an array of byte arrays. The hash of the block data is computed as a Merkle tree. This value specifies the width of that Merkle tree. For the time being, this value is fixed to 4294967295 which corresponds to a simple flat hash of the concatenation of the block data bytes.

    { "width": 4294967295 }
    
  • Hashing Algorithm. The algorithm used for computing the hash values encoded into the blocks of the blockchain. In particular, this affects the data hash, and the previous block hash fields of the block. Note, this field currently only has one valid value (SHA256) and should not be changed.

    { "name": "SHA256" }
    
  • Block Validation. This policy specifies the signature requirements for a block to be considered valid. By default, it requires a signature from some member of the ordering org.

    {
      "type": 3,
      "value": {
        "rule": "ANY",
        "sub_policy": "Writers"
      }
    }
    
  • Orderer Address. A list of addresses where clients may invoke the orderer Broadcast and Deliver functions. The peer randomly chooses among these addresses and fails over between them for retrieving blocks.

    {
      "addresses": [
        "orderer.example.com:7050"
      ]
    }
    

Just as we add organizations by adding their artifacts and MSP information, you can remove them by reversing the process.

Note that once the consensus type has been defined and the network has been bootstrapped, it is not possible to change it through a configuration update.

There is another important channel configuration (especially for v1.1) known as capability requirements. It has its own documentation, which can be found here.

Let’s say you want to edit the block batch size for the channel (because this is a single numeric field, it’s one of the easiest changes to make). First, to make referencing the JSON path easier, we define it as an environment variable.

To establish this, look at your config, find what you’re looking for, and back-track the path.

If you find the batch size, for example, you’ll see that it’s a value of the Orderer group. Orderer can be found under groups, which is under channel_group. The batch size value has the parameter max_message_count under value.

This makes the path look like this:

 export MAXBATCHSIZEPATH=".channel_group.groups.Orderer.values.BatchSize.value.max_message_count"

Next, display the value of that property:

jq "$MAXBATCHSIZEPATH" config.json

It should return a value of 10 (in our sample network, at least).

Now, let’s set the new batch size and display the new value:

 jq "$MAXBATCHSIZEPATH = 20" config.json > modified_config.json
 jq "$MAXBATCHSIZEPATH" modified_config.json

Once you’ve modified the JSON, it’s ready to be converted and submitted. The scripts and steps in Adding an Org to a Channel will take you through the process of converting the JSON, so let’s look at the process of submitting it.
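
As a quick sketch of that conversion step (these are the same configtxlator commands the tutorial walks through; the file names are illustrative), the original and modified JSON are re-encoded as protobufs and the delta between them computed:

configtxlator proto_encode --input config.json --type common.Config --output config.pb
configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb
configtxlator compute_update --channel_id $CHANNEL_NAME --original config.pb --updated modified_config.pb --output config_update.pb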

Get the Necessary Signatures

Once you’ve successfully generated the protobuf file, it’s time to get it signed. To do this, you need to know the relevant policy for whatever it is you’re trying to change.

By default, editing the configuration of:

  • A particular org (for example, changing anchor peers) requires only the admin signature of that org.

  • The application (like who the member orgs are) requires a majority of the application organizations’ admins to sign.

  • The orderer requires a majority of the ordering organizations’ admins (of which, by default, there is only one).

  • The top level channel group requires both the agreement of a majority of application organization admins and orderer organization admins.

If you have made changes to the default policies in the channel, you’ll need to compute the signature requirements accordingly.

Note: you may be able to script the signature collection, depending on your application. In general, you may always collect more signatures than are required.

The actual process of getting these signatures will depend on how you’ve set up your system, but there are two main implementations. Currently, the Fabric command line defaults to a “pass it along” system. That is, the admin of the org proposing a config update sends the update to someone else (another admin, typically) who needs to sign it. This admin signs it (or doesn’t) and passes it along to the next admin, and so on, until there are enough signatures for the config to be submitted.

This has the virtue of simplicity: when there are enough signatures, the last admin can simply submit the config transaction (in Fabric, the peer channel update command includes a signature by default). However, this process is only practical in smaller channels, since the “pass it along” method can be time consuming.
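
For example, each admin in the chain can add their signature with peer channel signconfigtx, and the final admin submits the update (the envelope file name here is illustrative):

peer channel signconfigtx -f config_update_in_envelope.pb
peer channel update -f config_update_in_envelope.pb -c $CHANNEL_NAME -o orderer.example.com:7050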

The other option is to submit the update to every admin on a channel and wait for enough signatures to come back. These signatures can then be stitched together and submitted. This makes life a bit more difficult for the admin who created the config update (forcing them to deal with a file per signer), but it is the recommended workflow for users who are developing Fabric management applications.

Once the config has been added to the ledger, it is a best practice to pull it and convert it to JSON to check that everything was added correctly. This will also serve as a useful copy of the latest config.
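
A sketch of that check, using the same tools as before (add --tls --cafile as appropriate for your network):

peer channel fetch config config_block.pb -o orderer.example.com:7050 -c $CHANNEL_NAME
configtxlator proto_decode --input config_block.pb --type common.Block | jq .data.data[0].payload.data.config > latest_config.json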

Membership Service Providers (MSP)

This document provides details on MSP setup and best practices.

A Membership Service Provider (MSP) is a component that aims to offer an abstraction of a membership operation architecture.

In particular, an MSP abstracts away all cryptographic mechanisms and protocols behind issuing and validating certificates, and user authentication. An MSP may define its own notion of identity, and the rules by which those identities are governed (identity validation) and authenticated (signature generation and verification).

A Hyperledger Fabric blockchain network can be governed by one or more MSPs. This provides modularity of membership operations, and interoperability across different membership standards and architectures.

In the rest of this document we elaborate on the setup of the MSP implementation supported by Hyperledger Fabric, and discuss best practices concerning its use.

MSP Configuration

To set up an instance of an MSP, its configuration needs to be specified locally at each peer and orderer (to enable peer and orderer signing), and on the channels to enable peer, orderer, and client identity validation, and the corresponding signature verification (authentication), by all channel members.

Firstly, for each MSP a name needs to be specified in order to reference that MSP in the network (e.g. msp1, org2, and org3.divA). This is the name under which the membership rules of an MSP representing a consortium, organization, or organization division are referenced in a channel. It is also referred to as the “MSP Identifier” or “MSP ID”. MSP identifiers are required to be unique per MSP instance. For example, if two MSP instances with the same identifier are detected at system channel setup, the orderer setup will fail.

In the case of the default MSP implementation, a set of parameters needs to be specified to allow for identity (certificate) validation and signature verification. These parameters are deduced from RFC5280, and include:

  • A list of self-signed (X.509) certificates to constitute the root of trust

  • A list of X.509 certificates to represent intermediate CAs this provider considers for certificate validation; these certificates ought to be certified by exactly one of the certificates in the root of trust; intermediate CAs are optional parameters

  • A list of X.509 certificates with a verifiable certificate path to exactly one of the certificates of the root of trust to represent the administrators of this MSP; owners of these certificates are authorized to request changes to this MSP configuration (e.g. root CAs, intermediate CAs)

  • A list of Organizational Units that valid members of this MSP should include in their X.509 certificate; this is an optional configuration parameter, used when, e.g., multiple organizations leverage the same root of trust, and intermediate CAs, and have reserved an OU field for their members

  • A list of certificate revocation lists (CRLs) each corresponding to exactly one of the listed (intermediate or root) MSP Certificate Authorities; this is an optional parameter

  • A list of self-signed (X.509) certificates to constitute the TLS root of trust for TLS certificate.

  • A list of X.509 certificates to represent intermediate TLS CAs this provider considers; these certificates ought to be certified by exactly one of the certificates in the TLS root of trust; intermediate CAs are optional parameters.

Valid identities for this MSP instance are required to satisfy the following conditions:

  • They are in the form of X.509 certificates with a verifiable certificate path to exactly one of the root of trust certificates;

  • They are not included in any CRL;

  • And they list one or more of the Organizational Units of the MSP configuration in the OU field of their X.509 certificate structure.

For more information on the validity of identities in the current MSP implementation, we refer the reader to MSP Identity Validity Rules.

In addition to verification related parameters, for the MSP to enable the node on which it is instantiated to sign or authenticate, one needs to specify:

  • The signing key used for signing by the node (currently only ECDSA keys are supported), and

  • The node’s X.509 certificate, that is a valid identity under the verification parameters of this MSP.

It is important to note that MSP identities never expire; they can only be revoked by adding them to the appropriate CRLs. Additionally, there is currently no support for enforcing revocation of TLS certificates.

How to generate MSP certificates and their signing keys?

To generate X.509 certificates to feed its MSP configuration, the application can use OpenSSL. We emphasize that in Hyperledger Fabric there is no support for certificates including RSA keys.

Alternatively one can use cryptogen tool, whose operation is explained in Getting Started.

Hyperledger Fabric CA can also be used to generate the keys and certificates needed to configure an MSP.

MSP setup on the peer & orderer side

To set up a local MSP (for either a peer or an orderer), the administrator should create a folder (e.g. $MY_PATH/mspconfig) that contains the following subfolders and file:

  1. a folder admincerts to include PEM files each corresponding to an administrator certificate

  2. a folder cacerts to include PEM files each corresponding to a root CA’s certificate

  3. (optional) a folder intermediatecerts to include PEM files each corresponding to an intermediate CA’s certificate

  4. (optional) a file config.yaml to configure the supported Organizational Units and identity classifications (see respective sections below).

  5. (optional) a folder crls to include the considered CRLs

  6. a folder keystore to include a PEM file with the node’s signing key; we emphasise that currently RSA keys are not supported

  7. a folder signcerts to include a PEM file with the node’s X.509 certificate

  8. (optional) a folder tlscacerts to include PEM files each corresponding to a TLS root CA’s certificate

  9. (optional) a folder tlsintermediatecerts to include PEM files each corresponding to an intermediate TLS CA’s certificate

In the configuration file of the node (core.yaml file for the peer, and orderer.yaml for the orderer), one needs to specify the path to the mspconfig folder, and the MSP Identifier of the node’s MSP. The path to the mspconfig folder is expected to be relative to FABRIC_CFG_PATH and is provided as the value of parameter mspConfigPath for the peer, and LocalMSPDir for the orderer. The identifier of the node’s MSP is provided as a value of parameter localMspId for the peer and LocalMSPID for the orderer. These variables can be overridden via the environment using the CORE prefix for peer (e.g. CORE_PEER_LOCALMSPID) and the ORDERER prefix for the orderer (e.g. ORDERER_GENERAL_LOCALMSPID). Notice that for the orderer setup, one needs to generate, and provide to the orderer the genesis block of the system channel. The MSP configuration needs of this block are detailed in the next section.
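
A sketch of those environment overrides (the identifier and path values are illustrative):

# Peer
export CORE_PEER_LOCALMSPID="Org1MSP"
export CORE_PEER_MSPCONFIGPATH=$MY_PATH/mspconfig

# Orderer
export ORDERER_GENERAL_LOCALMSPID="OrdererMSP"
export ORDERER_GENERAL_LOCALMSPDIR=$MY_PATH/mspconfig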

Reconfiguration of a “local” MSP is only possible manually, and requires that the peer or orderer process be restarted. In subsequent releases we aim to offer online/dynamic reconfiguration (i.e. without stopping the node, by using a node managed system chaincode).

Organizational Units

In order to configure the list of Organizational Units that valid members of this MSP should include in their X.509 certificate, the config.yaml file needs to specify the organizational unit identifiers. Here is an example:

OrganizationalUnitIdentifiers:
  - Certificate: "cacerts/cacert1.pem"
    OrganizationalUnitIdentifier: "commercial"
  - Certificate: "cacerts/cacert2.pem"
    OrganizationalUnitIdentifier: "administrators"

The above example declares two organizational unit identifiers: commercial and administrators. An MSP identity is valid if it carries at least one of these organizational unit identifiers. The Certificate field refers to the CA or intermediate CA certificate path under which identities, having that specific OU, should be validated. The path is relative to the MSP root folder and cannot be empty.

Identity Classification

The default MSP implementation allows to further classify identities into clients and peers, based on the OUs of their x509 certificates. An identity should be classified as a client if it submits transactions, queries peers, etc. An identity should be classified as a peer if it endorses or commits transactions. In order to define clients and peers of a given MSP, the config.yaml file needs to be set appropriately. Here is an example:

NodeOUs:
  Enable: true
  ClientOUIdentifier:
    Certificate: "cacerts/cacert.pem"
    OrganizationalUnitIdentifier: "client"
  PeerOUIdentifier:
    Certificate: "cacerts/cacert.pem"
    OrganizationalUnitIdentifier: "peer"

As shown above, NodeOUs.Enable is set to true, which enables the identity classification. Then, client (peer) identifiers are defined by setting the following properties for the NodeOUs.ClientOUIdentifier (NodeOUs.PeerOUIdentifier) key:

  1. OrganizationalUnitIdentifier: Set this to the value that matches the OU that the x509 certificate of a client (peer) should contain.

  2. Certificate: Set this to the CA or intermediate CA under which client (peer) identities should be validated. The field is relative to the MSP root folder. It can be empty, meaning that the identity’s x509 certificate can be validated under any CA defined in the MSP configuration.

When the classification is enabled, MSP administrators need to be clients of that MSP, meaning that their x509 certificates need to carry the OU that identifies clients. Notice also that an identity can be either a client or a peer; the two classifications are mutually exclusive. If an identity is neither a client nor a peer, validation will fail.

Finally, notice that for upgraded environments the 1.1 channel capability needs to be enabled before identity classification can be used.

Channel MSP setup

At the genesis of the system, verification parameters of all the MSPs that appear in the network need to be specified, and included in the system channel’s genesis block. Recall that MSP verification parameters consist of the MSP identifier, the root of trust certificates, intermediate CA and admin certificates, as well as OU specifications and CRLs. The system genesis block is provided to the orderers at their setup phase, and allows them to authenticate channel creation requests. Orderers would reject the system genesis block, if the latter includes two MSPs with the same identifier, and consequently the bootstrapping of the network would fail.

For application channels, the verification components of only the MSPs that govern a channel need to reside in the channel’s genesis block. We emphasize that it is the responsibility of the application to ensure that correct MSP configuration information is included in the genesis blocks (or the most recent configuration block) of a channel prior to instructing one or more of their peers to join the channel.

When bootstrapping a channel with the help of the configtxgen tool, one can configure the channel MSPs by including the verification parameters of MSP in the mspconfig folder, and setting that path in the relevant section in configtx.yaml.
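
For instance, an organization entry in configtx.yaml references the mspconfig folder through its MSPDir key (the names and path here are illustrative, following the BYFN sample layout):

Organizations:
    - &Org1
        Name: Org1MSP
        ID: Org1MSP
        MSPDir: crypto-config/peerOrganizations/org1.example.com/msp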

Reconfiguration of an MSP on the channel, including announcements of the certificate revocation lists associated to the CAs of that MSP is achieved through the creation of a config_update object by the owner of one of the administrator certificates of the MSP. The client application managed by the admin would then announce this update to the channels in which this MSP appears.

Best Practices

In this section we elaborate on best practices for MSP configuration in commonly met scenarios.

1) Mapping between organizations/corporations and MSPs

We recommend a one-to-one mapping between organizations and MSPs. If a different type of mapping is chosen, the following needs to be considered:

  • One organization employing various MSPs. This corresponds to the case of an organization that includes a variety of divisions, each represented by its own MSP, either for management independence reasons or for privacy reasons. In this case a peer can only be owned by a single MSP, and will not recognize peers with identities from other MSPs as peers of the same organization. The implication of this is that peers may share organization-scoped data through gossip with a set of peers that are members of the same subdivision, and NOT with the full set of peers constituting the actual organization.

  • Multiple organizations using a single MSP. This corresponds to the case of a consortium of organizations that are governed by a similar membership architecture. Be aware that peers would propagate organization-scoped messages to any peer that has an identity under the same MSP, regardless of whether they belong to the same actual organization. This is a limitation of the granularity of the MSP definition, and/or of the peer’s configuration.

2) One organization has different divisions (say organizational units), to which it wants to grant access to different channels.

Two ways to handle this:

  • Define one MSP to accommodate membership for all of the organization’s members. Configuration of that MSP would consist of a list of root CAs, intermediate CAs, and admin certificates; and membership identities would include the organizational unit (OU) a member belongs to. Policies can then be defined to capture members of a specific OU, and these policies may constitute the read/write policies of a channel or the endorsement policies of a chaincode. A limitation of this approach is that gossip peers would consider peers with membership identities under their local MSP as members of the same organization, and would consequently share organization-scoped data (e.g. their status) with them.

  • Define one MSP to represent each division. This would involve specifying, for each division, a set of certificates for root CAs, intermediate CAs, and admin certs, such that there is no overlapping certification path across MSPs. This would mean that, for example, a different intermediate CA per subdivision is employed. Here the disadvantage is managing more than one MSP instead of one, but this circumvents the issue present in the previous approach. One could also define one MSP for each division by leveraging an OU extension of the MSP configuration.

3) Separating clients from peers of the same organization.

In many cases it is required that the “type” of an identity be retrievable from the identity itself (e.g. it may be needed that endorsements are guaranteed to have been derived from peers, and not from clients or nodes acting solely as orderers).

There is limited support for such requirements.

One way to allow for this separation is to create a separate intermediate CA for each node type (one for clients, one for peers/orderers) and to configure two different MSPs, one for clients and one for peers/orderers. Channels that this organization should access would need to include both MSPs, while endorsement policies would leverage only the MSP that refers to the peers. This would ultimately result in the organization being mapped to two MSP instances, and would have certain consequences on the way peers and clients interact.

Gossip would not be drastically impacted, as all peers of the same organization would still belong to one MSP. Peers can restrict the execution of certain system chaincodes to local-MSP-based policies. For example, peers would only execute a “joinChannel” request if the request is signed by the admin of their local MSP, who can only be a client (the end user should be at the origin of that request). We can work around this inconsistency if we accept that the only clients that are members of a peer/orderer MSP would be the administrators of that MSP.

Another point to consider with this approach is that peers authorize event registration requests based on the membership of the request originator within their local MSP. Clearly, since the originator of the request is a client, the request originator is always deemed to belong to a different MSP than the receiving peer, and the peer would reject the request.

4) Admin and CA certificates.

It is important to set the MSP admin certificates to be different from any of the certificates considered by the MSP for the root of trust or intermediate CAs. This is a common (security) practice that separates the duties of managing membership components from issuing new certificates and/or validating existing ones.

5) Blacklisting an intermediate CA.

As mentioned in previous sections, reconfiguration of an MSP is achieved through reconfiguration mechanisms (manual reconfiguration for local MSP instances, and via properly constructed config_update messages for MSP instances of a channel). Clearly, there are two ways to ensure that an intermediate CA considered in an MSP is no longer considered for that MSP’s identity validation:

  1. Reconfigure the MSP to no longer include the certificate of that intermediate CA in the list of trusted intermediate CA certs. For the locally configured MSP, this would mean that the certificate of this CA is removed from the intermediatecerts folder.

  2. Reconfigure the MSP to include a CRL produced by the root of trust which denounces the mentioned intermediate CA’s certificate.

The current MSP implementation only supports method (1), as it is simpler and does not require blacklisting the intermediate CA that is no longer considered.

6) CAs and TLS CAs

The root CAs for MSP identities and the root CAs for MSP TLS certificates (and their respective intermediate CAs) need to be declared in different folders. This avoids confusion between the different classes of certificates. It is not forbidden to reuse the same CAs for both MSP identities and TLS certificates, but best practice suggests avoiding this in production.

Channel Configuration (configtx)

Shared configuration for a Hyperledger Fabric blockchain network is stored in a collection of configuration transactions, one per channel. Each configuration transaction is usually referred to by the shorter name configtx.

Channel configuration has the following important properties:

  1. Versioned: All elements of the configuration have an associated version which is advanced with every modification. Further, every committed configuration receives a sequence number.

  2. Permissioned: Each element of the configuration has an associated policy which governs whether or not modification to that element is permitted. Anyone with a copy of the previous configtx (and no additional info) may verify the validity of a new config based on these policies.

  3. Hierarchical: A root configuration group contains sub-groups, and each group of the hierarchy has associated values and policies. These policies can take advantage of the hierarchy to derive policies at one level from policies of lower levels.

Anatomy of a configuration

Configuration is stored as a transaction of type HeaderType_CONFIG in a block with no other transactions. These blocks are referred to as Configuration Blocks, the first of which is referred to as the Genesis Block.

The proto structures for configuration are stored in fabric/protos/common/configtx.proto. The Envelope of type HeaderType_CONFIG encodes a ConfigEnvelope message as the Payload data field. The proto for ConfigEnvelope is defined as follows:

message ConfigEnvelope {
    Config config = 1;
    Envelope last_update = 2;
}

The last_update field is defined below in the Configuration updates section, but is only necessary when validating the configuration, not when reading it. Instead, the currently committed configuration is stored in the config field, which contains a Config message.

message Config {
    uint64 sequence = 1;
    ConfigGroup channel_group = 2;
}

The sequence number is incremented by one for each committed configuration. The channel_group field is the root group which contains the configuration. The ConfigGroup structure is recursively defined, and builds a tree of groups, each of which contains values and policies. It is defined as follows:

message ConfigGroup {
    uint64 version = 1;
    map<string,ConfigGroup> groups = 2;
    map<string,ConfigValue> values = 3;
    map<string,ConfigPolicy> policies = 4;
    string mod_policy = 5;
}

Because ConfigGroup is a recursive structure, it has a hierarchical arrangement. The following example is expressed in golang notation for clarity.

// Assume the following groups are defined
var root, child1, child2, grandChild1, grandChild2, grandChild3 *ConfigGroup

// Set the following values
root.Groups["child1"] = child1
root.Groups["child2"] = child2
child1.Groups["grandChild1"] = grandChild1
child2.Groups["grandChild2"] = grandChild2
child2.Groups["grandChild3"] = grandChild3

// The resulting config structure of groups looks like:
// root:
//     child1:
//         grandChild1
//     child2:
//         grandChild2
//         grandChild3

Each group defines a level in the config hierarchy, and each group has an associated set of values (indexed by string key) and policies (also indexed by string key).

Values are defined by:

message ConfigValue {
    uint64 version = 1;
    bytes value = 2;
    string mod_policy = 3;
}

Policies are defined by:

message ConfigPolicy {
    uint64 version = 1;
    Policy policy = 2;
    string mod_policy = 3;
}

Note that Values, Policies, and Groups all have a version and a mod_policy. The version of an element is incremented each time that element is modified. The mod_policy is used to govern the required signatures to modify that element. For Groups, modification is adding or removing elements to the Values, Policies, or Groups maps (or changing the mod_policy). For Values and Policies, modification is changing the Value and Policy fields respectively (or changing the mod_policy). Each element’s mod_policy is evaluated in the context of the current level of the config. Consider the following example mod policies defined at Channel.Groups["Application"] (Here, we use the golang map reference syntax, so Channel.Groups["Application"].Policies["policy1"] refers to the base Channel group’s Application group’s Policies map’s policy1 policy.)

  • policy1 maps to Channel.Groups["Application"].Policies["policy1"]

  • Org1/policy2 maps to Channel.Groups["Application"].Groups["Org1"].Policies["policy2"]

  • /Channel/policy3 maps to Channel.Policies["policy3"]

Note that if a mod_policy references a policy which does not exist, the item cannot be modified.
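The following sketch (not the actual Fabric implementation) shows how these three reference forms could be resolved against a ConfigGroup tree; it uses simplified local struct definitions rather than the generated proto types:

// A minimal sketch of mod_policy resolution: a bare name is relative to
// the current group, "Sub/name" descends into a subgroup, and a leading
// "/" restarts from the root group (whose name is the first path element).
package main

import (
	"fmt"
	"strings"
)

type ConfigPolicy struct{ /* ... */ }

type ConfigGroup struct {
	Groups   map[string]*ConfigGroup
	Policies map[string]*ConfigPolicy
}

// resolveModPolicy returns the referenced policy, or nil if any path
// element is missing -- in which case the element cannot be modified.
func resolveModPolicy(root, current *ConfigGroup, modPolicy string) *ConfigPolicy {
	group := current
	if strings.HasPrefix(modPolicy, "/") {
		group = root
		// Drop the leading "/" and the root group's own name (e.g. "Channel").
		parts := strings.Split(strings.TrimPrefix(modPolicy, "/"), "/")
		modPolicy = strings.Join(parts[1:], "/")
	}
	parts := strings.Split(modPolicy, "/")
	for _, sub := range parts[:len(parts)-1] {
		group = group.Groups[sub]
		if group == nil {
			return nil
		}
	}
	return group.Policies[parts[len(parts)-1]]
}

func main() {
	org1 := &ConfigGroup{Policies: map[string]*ConfigPolicy{"policy2": {}}}
	app := &ConfigGroup{
		Groups:   map[string]*ConfigGroup{"Org1": org1},
		Policies: map[string]*ConfigPolicy{"policy1": {}},
	}
	channel := &ConfigGroup{
		Groups:   map[string]*ConfigGroup{"Application": app},
		Policies: map[string]*ConfigPolicy{"policy3": {}},
	}
	fmt.Println(resolveModPolicy(channel, app, "policy1") != nil)          // true
	fmt.Println(resolveModPolicy(channel, app, "Org1/policy2") != nil)     // true
	fmt.Println(resolveModPolicy(channel, app, "/Channel/policy3") != nil) // true
}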

Configuration updates

Configuration updates are submitted as an Envelope message of type HeaderType_CONFIG_UPDATE. The Payload data of the transaction is a marshaled ConfigUpdateEnvelope. The ConfigUpdateEnvelope is defined as follows:

message ConfigUpdateEnvelope {
    bytes config_update = 1;
    repeated ConfigSignature signatures = 2;
}

The signatures field contains the set of signatures which authorizes the config update. Its message definition is:

message ConfigSignature {
    bytes signature_header = 1;
    bytes signature = 2;
}

The signature_header is as defined for standard transactions, while the signature is over the concatenation of the signature_header bytes and the config_update bytes from the ConfigUpdateEnvelope message.
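As an illustration of that layout, the sketch below signs the concatenation with a plain ECDSA key over a SHA-256 digest. The real signing algorithm and digest are determined by the client’s MSP, so treat this purely as a model of which bytes are covered:

// A sketch of producing a ConfigSignature: the signature covers
// signature_header || config_update. The struct mirrors the proto message
// for illustration only; crypto choices here are assumptions.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

type ConfigSignature struct {
	SignatureHeader []byte
	Signature       []byte
}

func signConfigUpdate(key *ecdsa.PrivateKey, sigHeader, configUpdate []byte) (*ConfigSignature, error) {
	// The signed payload is the concatenation of the two byte fields.
	digest := sha256.Sum256(append(append([]byte{}, sigHeader...), configUpdate...))
	sig, err := ecdsa.SignASN1(rand.Reader, key, digest[:])
	if err != nil {
		return nil, err
	}
	return &ConfigSignature{SignatureHeader: sigHeader, Signature: sig}, nil
}

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	cs, err := signConfigUpdate(key, []byte("header-bytes"), []byte("config-update-bytes"))
	fmt.Println(err == nil, len(cs.Signature) > 0)
}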

The ConfigUpdateEnvelope config_update bytes are a marshaled ConfigUpdate message which is defined as follows:

message ConfigUpdate {
    string channel_id = 1;
    ConfigGroup read_set = 2;
    ConfigGroup write_set = 3;
}

The channel_id is the channel ID the update is bound for; this is necessary to scope the signatures which support this reconfiguration.

The read_set specifies a subset of the existing configuration, specified sparsely: only the version field is set, and no other fields need to be populated. The particular ConfigValue value or ConfigPolicy policy fields should never be set in the read_set. The ConfigGroup may have a subset of its map fields populated, so as to reference an element deeper in the config tree. For instance, to include the Application group in the read_set, its parent (the Channel group) must also be included in the read_set, but the Channel group does not need to populate all of its keys, such as the Orderer group key or any of the values or policies keys.

The write_set specifies the pieces of configuration which are modified. Because of the hierarchical nature of the configuration, a write to an element deep in the hierarchy must contain the higher level elements in its write_set as well. However, for any element in the write_set which is also specified in the read_set at the same version, the element should be specified sparsely, just as in the read_set.

For example, given the configuration:

Channel: (version 0)
    Orderer (version 0)
    Application (version 3)
       Org1 (version 2)

To submit a configuration update which modifies Org1, the read_set would be:

Channel: (version 0)
    Application: (version 3)

and the write_set would be

Channel: (version 0)
    Application: (version 3)
        Org1 (version 3)
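For concreteness, the sparse read_set and write_set above could be assembled as follows, using hand-rolled structs that mirror the ConfigGroup proto (versions only; values, policies, and mod_policy are omitted, as sparseness requires):

// A sketch of the sparse read_set and write_set from the example above.
package main

import "fmt"

type ConfigGroup struct {
	Version uint64
	Groups  map[string]*ConfigGroup
}

func main() {
	// read_set: Channel(v0) -> Application(v3); nothing else is populated.
	readSet := &ConfigGroup{Version: 0, Groups: map[string]*ConfigGroup{
		"Application": {Version: 3},
	}}

	// write_set: the same path, plus Org1 with its version bumped 2 -> 3.
	writeSet := &ConfigGroup{Version: 0, Groups: map[string]*ConfigGroup{
		"Application": {Version: 3, Groups: map[string]*ConfigGroup{
			"Org1": {Version: 3 /* modified content would go here */},
		}},
	}}

	fmt.Println(readSet.Groups["Application"].Version,
		writeSet.Groups["Application"].Groups["Org1"].Version) // 3 3
}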

When the CONFIG_UPDATE is received, the orderer computes the resulting CONFIG by doing the following:

  1. Verifies the channel_id and read_set. All elements in the read_set must exist at the given versions.

  2. Computes the update set by collecting all elements in the write_set which do not appear at the same version in the read_set.

  3. Verifies that each element in the update set increments the version number of the element by exactly 1.

  4. Verifies that the signature set attached to the ConfigUpdateEnvelope satisfies the mod_policy for each element in the update set.

  5. Computes a new complete version of the config by applying the update set to the current config.

  6. Writes the new config into a ConfigEnvelope which includes the CONFIG_UPDATE as the last_update field and the new config encoded in the config field, along with the incremented sequence value.

  7. Writes the new ConfigEnvelope into an Envelope of type CONFIG, and ultimately writes this as the sole transaction in a new configuration block.

When the peer (or any other receiver for Deliver) receives this configuration block, it should verify that the config was appropriately validated by applying the last_update message to the current config and verifying that the orderer-computed config field contains the correct new configuration.
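The sketch below illustrates steps 2 and 3 in simplified form, walking groups only and ignoring values, policies, and mod_policy evaluation; it is a conceptual model, not the orderer’s actual code:

// updateSet collects the paths of write_set groups that do not appear at
// the same version in the read_set (these form the update set).
package main

import "fmt"

type ConfigGroup struct {
	Version uint64
	Groups  map[string]*ConfigGroup
}

func updateSet(read, write *ConfigGroup, path string, out *[]string) {
	for name, wg := range write.Groups {
		var rg *ConfigGroup
		if read != nil {
			rg = read.Groups[name]
		}
		childPath := path + "/" + name
		// New elements, or elements at a different version, are updates.
		// Step 3 would additionally check the version grew by exactly 1.
		if rg == nil || rg.Version != wg.Version {
			*out = append(*out, childPath)
		}
		updateSet(rg, wg, childPath, out)
	}
}

func main() {
	read := &ConfigGroup{Groups: map[string]*ConfigGroup{
		"Application": {Version: 3},
	}}
	write := &ConfigGroup{Groups: map[string]*ConfigGroup{
		"Application": {Version: 3, Groups: map[string]*ConfigGroup{
			"Org1": {Version: 3},
		}},
	}}
	var updated []string
	updateSet(read, write, "/Channel", &updated)
	fmt.Println(updated) // [/Channel/Application/Org1]
}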

Permitted configuration groups and values

Any valid configuration is a subset of the following configuration. Here we use the notation peer.<MSG> to define a ConfigValue whose value field is a marshaled proto message of name <MSG> defined in fabric/protos/peer/configuration.proto. The notations common.<MSG>, msp.<MSG>, and orderer.<MSG> correspond similarly, but with their messages defined in fabric/protos/common/configuration.proto, fabric/protos/msp/mspconfig.proto, and fabric/protos/orderer/configuration.proto respectively.

Note that the keys {{org_name}} and {{consortium_name}} represent arbitrary names, indicating an element which may be repeated with different names.

&ConfigGroup{
    Groups: map<string, *ConfigGroup> {
        "Application":&ConfigGroup{
            Groups:map<String, *ConfigGroup> {
                {{org_name}}:&ConfigGroup{
                    Values:map<string, *ConfigValue>{
                        "MSP":msp.MSPConfig,
                        "AnchorPeers":peer.AnchorPeers,
                    },
                },
            },
        },
        "Orderer":&ConfigGroup{
            Groups:map<String, *ConfigGroup> {
                {{org_name}}:&ConfigGroup{
                    Values:map<string, *ConfigValue>{
                        "MSP":msp.MSPConfig,
                    },
                },
            },

            Values:map<string, *ConfigValue> {
                "ConsensusType":orderer.ConsensusType,
                "BatchSize":orderer.BatchSize,
                "BatchTimeout":orderer.BatchTimeout,
                "KafkaBrokers":orderer.KafkaBrokers,
            },
        },
        "Consortiums":&ConfigGroup{
            Groups:map<String, *ConfigGroup> {
                {{consortium_name}}:&ConfigGroup{
                    Groups:map<string, *ConfigGroup> {
                        {{org_name}}:&ConfigGroup{
                            Values:map<string, *ConfigValue>{
                                "MSP":msp.MSPConfig,
                            },
                        },
                    },
                    Values:map<string, *ConfigValue> {
                        "ChannelCreationPolicy":common.Policy,
                    }
                },
            },
        },
    },

    Values: map<string, *ConfigValue> {
        "HashingAlgorithm":common.HashingAlgorithm,
        "BlockHashingDataStructure":common.BlockDataHashingStructure,
        "Consortium":common.Consortium,
        "OrdererAddresses":common.OrdererAddresses,
    },
}

Orderer system channel configuration

The ordering system channel needs to define ordering parameters, and consortiums for creating channels. There must be exactly one ordering system channel for an ordering service, and it is the first channel to be created (or more accurately bootstrapped). It is recommended never to define an Application section inside of the ordering system channel genesis configuration, though it may be done for testing. Note that any member with read access to the ordering system channel may see all channel creations, so this channel’s access should be restricted.

The ordering parameters are defined as the following subset of config:

&ConfigGroup{
    Groups: map<string, *ConfigGroup> {
        "Orderer":&ConfigGroup{
            Groups:map<String, *ConfigGroup> {
                {{org_name}}:&ConfigGroup{
                    Values:map<string, *ConfigValue>{
                        "MSP":msp.MSPConfig,
                    },
                },
            },

            Values:map<string, *ConfigValue> {
                "ConsensusType":orderer.ConsensusType,
                "BatchSize":orderer.BatchSize,
                "BatchTimeout":orderer.BatchTimeout,
                "KafkaBrokers":orderer.KafkaBrokers,
            },
        },
    },

Each organization participating in ordering has a group element under the Orderer group. This group defines a single parameter MSP which contains the cryptographic identity information for that organization. The Values of the Orderer group determine how the ordering nodes function. They exist per channel, so orderer.BatchTimeout for instance may be specified differently on one channel than another.

At startup, the orderer is faced with a filesystem which contains information for many channels. The orderer identifies the system channel by identifying the channel with the consortiums group defined. The consortiums group has the following structure.

&ConfigGroup{
    Groups: map<string, *ConfigGroup> {
        "Consortiums":&ConfigGroup{
            Groups:map<String, *ConfigGroup> {
                {{consortium_name}}:&ConfigGroup{
                    Groups:map<string, *ConfigGroup> {
                        {{org_name}}:&ConfigGroup{
                            Values:map<string, *ConfigValue>{
                                "MSP":msp.MSPConfig,
                            },
                        },
                    },
                    Values:map<string, *ConfigValue> {
                        "ChannelCreationPolicy":common.Policy,
                    }
                },
            },
        },
    },
},

Note that each consortium defines a set of members, just like the organizational members for the ordering orgs. Each consortium also defines a ChannelCreationPolicy. This is a policy which is applied to authorize channel creation requests. Typically, this value will be set to an ImplicitMetaPolicy requiring that the new members of the channel sign to authorize the channel creation. More details about channel creation follow later in this document.

Application channel configuration

Application configuration is for channels which are designed for application type transactions. It is defined as follows:

&ConfigGroup{
    Groups: map<string, *ConfigGroup> {
        "Application":&ConfigGroup{
            Groups:map<String, *ConfigGroup> {
                {{org_name}}:&ConfigGroup{
                    Values:map<string, *ConfigValue>{
                        "MSP":msp.MSPConfig,
                        "AnchorPeers":peer.AnchorPeers,
                    },
                },
            },
        },
    },
}

Just like with the Orderer section, each organization is encoded as a group. However, instead of only encoding the MSP identity information, each org additionally encodes a list of AnchorPeers. This list allows the peers of different organizations to contact each other for peer gossip networking.

The application channel encodes a copy of the orderer orgs and consensus options to allow for deterministic updating of these parameters, so the same Orderer section from the orderer system channel configuration is included. However from an application perspective this may be largely ignored.

Channel creation

When the orderer receives a CONFIG_UPDATE for a channel which does not exist, the orderer assumes that this must be a channel creation request and performs the following.

  1. The orderer identifies the consortium which the channel creation request is to be performed for. It does this by looking at the Consortium value of the top level group.

  2. The orderer verifies that the organizations included in the Application group are a subset of the organizations included in the corresponding consortium, and that the Application group is set to version 1.

  3. The orderer verifies that if the consortium has members, the new channel also has application members (creating consortiums and channels with no members is useful only for testing).

  4. The orderer creates a template configuration by taking the Orderer group from the ordering system channel, and creating an Application group with the newly specified members and specifying its mod_policy to be the ChannelCreationPolicy as specified in the consortium config. Note that the policy is evaluated in the context of the new configuration, so a policy requiring ALL members would require signatures from all the new channel members, not all the members of the consortium.

  5. The orderer then applies the CONFIG_UPDATE as an update to this template configuration. Because the CONFIG_UPDATE applies modifications to the Application group (its version is 1), the config code validates these updates against the ChannelCreationPolicy. If the channel creation contains any other modifications, such as to an individual org’s anchor peers, the corresponding mod policy for the element will be invoked.

  6. The new CONFIG transaction with the new channel config is wrapped and sent for ordering on the ordering system channel. After ordering, the channel is created.

Endorsement policies

Every chaincode has an endorsement policy which specifies the set of peers on a channel that must execute chaincode and endorse the execution results in order for the transaction to be considered valid. These endorsement policies define the organizations (through their peers) who must “endorse” (i.e., approve of) the execution of a proposal.

Note

Recall that state, represented by key-value pairs, is separate from blockchain data. For more on this, check out our Ledger documentation.

As part of the transaction validation step performed by the peers, each validating peer checks to make sure that the transaction contains the appropriate number of endorsements and that they are from the expected sources (both of these are specified in the endorsement policy). The endorsements are also checked to make sure they’re valid (i.e., that they are valid signatures from valid certificates).

Two ways to require endorsement

By default, endorsement policies are specified in the chaincode definition, which is agreed to by channel members and then committed to a channel (that is, one endorsement policy covers all of the state associated with a chaincode).

However, there are cases where it may be necessary for a particular state (a particular key-value pair, in other words) to have a different endorsement policy. This state-based endorsement allows the default chaincode-level endorsement policies to be overridden by a different policy for the specified keys.

To illustrate the circumstances in which these two types of endorsement policies might be used, consider a channel on which cars are being exchanged. The “creation” (also known as “issuance”) of a car as an asset that can be traded (putting the key-value pair that represents it into the world state, in other words) would have to satisfy the chaincode-level endorsement policy. To see how to set a chaincode-level endorsement policy, check out the section below.

If the car requires a specific endorsement policy, it can be defined either when the car is created or afterwards. There are a number of reasons why it might be necessary or preferable to set a state-specific endorsement policy. The car might have a historical importance or value that makes it necessary to have the endorsement of a licensed appraiser. Or the owner of the car (if they’re a member of the channel) might want to ensure that their own peer signs off on a transaction. In both cases, an endorsement policy is required for a particular asset that is different from the default endorsement policy for the other assets associated with that chaincode.

We’ll show you how to define a state-based endorsement policy in a subsequent section. But first, let’s see how we set a chaincode-level endorsement policy.

Setting chaincode-level endorsement policies

Chaincode-level endorsement policies are agreed to by channel members when they approve a chaincode definition for their organization. A sufficient number of channel members need to approve a chaincode definition to meet the Channel/Application/LifecycleEndorsement policy, which by default is set to a majority of channel members, before the definition can be committed to the channel. Once the definition has been committed, the chaincode is ready to use. Any invoke of the chaincode that writes data to the ledger will need to be validated by enough channel members to meet the endorsement policy.

You can specify an endorsement policy for a chaincode using the Fabric SDKs. For an example, visit How to install and start your chaincode in the Node.js SDK documentation. You can also create an endorsement policy from your CLI when you approve and commit a chaincode definition with the Fabric peer binaries by using the --signature-policy flag.

Note

Don’t worry about the policy syntax ('Org1.member', et al.) right now. We’ll talk more about the syntax in the next section.

For example:

peer lifecycle chaincode approveformyorg --channelID mychannel --signature-policy "AND('Org1.member', 'Org2.member')" --name mycc --version 1.0 --package-id mycc_1:3a8c52d70c36313cfebbaf09d8616e7a6318ababa01c7cbe40603c373bcfe173 --sequence 1 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --waitForEvent

The above command approves the chaincode definition of mycc with the policy AND('Org1.member', 'Org2.member') which would require that a member of both Org1 and Org2 sign the transaction. After a sufficient number of channel members approve a chaincode definition for mycc, the definition and endorsement policy can be committed to the channel using the command below:

peer lifecycle chaincode commit -o orderer.example.com:7050 --channelID mychannel --signature-policy "AND('Org1.member', 'Org2.member')" --name mycc --version 1.0 --sequence 1 --init-required --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --waitForEvent --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:9051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt

Notice that, if identity classification is enabled (see Membership Service Providers (MSP)), one can use the PEER role to restrict endorsement to peers only.

For example:

peer lifecycle chaincode approveformyorg --channelID mychannel --signature-policy "AND('Org1.peer', 'Org2.peer')" --name mycc --version 1.0 --package-id mycc_1:3a8c52d70c36313cfebbaf09d8616e7a6318ababa01c7cbe40603c373bcfe173 --sequence 1 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --waitForEvent

In addition to specifying an endorsement policy from the CLI or SDK, a chaincode can also use policies in the channel configuration as endorsement policies. You can use the --channel-config-policy flag to select a channel policy with the format used by the channel configuration and by ACLs.

For example:

peer lifecycle chaincode approveformyorg --channelID mychannel --channel-config-policy Channel/Application/Admins --name mycc --version 1.0 --package-id mycc_1:3a8c52d70c36313cfebbaf09d8616e7a6318ababa01c7cbe40603c373bcfe173 --sequence 1 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --waitForEvent

If you do not specify a policy, the chaincode definition will use the Channel/Application/Endorsement policy by default, which requires that a transaction be validated by a majority of channel members. Because this policy depends on the membership of the channel, it will be updated automatically when organizations are added to or removed from the channel. This is one advantage of using channel policies: they can be written to update automatically with channel membership.

If you specify an endorsement policy using the --signature-policy flag or the SDK, you will need to update the policy when organizations join or leave the channel. A new organization added to the channel after instantiation will be able to query a chaincode (provided the query has appropriate authorization as defined by channel policies and any application-level checks enforced by the chaincode) but will not be able to execute or endorse the chaincode. Only organizations listed in the endorsement policy syntax will be able to sign transactions.

Endorsement policy syntax

As you can see above, policies are expressed in terms of principals (“principals” are identities matched to a role). Principals are described as 'MSP.ROLE', where MSP represents the required MSP ID and ROLE represents one of the four accepted roles: member, admin, client, and peer.

Here are a few examples of valid principals:

  • 'Org0.admin': any administrator of the Org0 MSP

  • 'Org1.member': any member of the Org1 MSP

  • 'Org1.client': any client of the Org1 MSP

  • 'Org1.peer': any peer of the Org1 MSP

The syntax of the language is:

EXPR(E[, E...])

Where EXPR is either AND, OR, or OutOf, and E is either a principal (with the syntax described above) or another nested call to EXPR.

For example:
  • AND('Org1.member', 'Org2.member', 'Org3.member') requests one signature from each of the three principals.

  • OR('Org1.member', 'Org2.member') requests one signature from either one of the two principals.

  • OR('Org1.member', AND('Org2.member', 'Org3.member')) requests either one signature from a member of the Org1 MSP or one signature from a member of the Org2 MSP and one signature from a member of the Org3 MSP.

  • OutOf(1, 'Org1.member', 'Org2.member'), which resolves to the same thing as OR('Org1.member', 'Org2.member').

  • Similarly, OutOf(2, 'Org1.member', 'Org2.member') is equivalent to AND('Org1.member', 'Org2.member'), and OutOf(2, 'Org1.member', 'Org2.member', 'Org3.member') is equivalent to OR(AND('Org1.member', 'Org2.member'), AND('Org1.member', 'Org3.member'), AND('Org2.member', 'Org3.member')).
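The policy strings above can also be constructed programmatically. The sketch below assumes the common/cauthdsl package from the Fabric code base of this release (the same parser that backs the --signature-policy flag; later releases moved it to common/policydsl):

// Parse endorsement policy expressions into SignaturePolicyEnvelope protos.
package main

import (
	"fmt"

	"github.com/hyperledger/fabric/common/cauthdsl"
)

func main() {
	for _, expr := range []string{
		"AND('Org1.member', 'Org2.member')",
		"OR('Org1.member', AND('Org2.member', 'Org3.member'))",
		"OutOf(2, 'Org1.member', 'Org2.member', 'Org3.member')",
	} {
		env, err := cauthdsl.FromString(expr)
		if err != nil {
			fmt.Println(expr, "->", err)
			continue
		}
		// env.Identities lists the principals; env.Rule is the n-of-m tree.
		fmt.Printf("%s -> %d principals\n", expr, len(env.Identities))
	}
}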

Setting key-level endorsement policies

Setting regular chaincode-level endorsement policies is tied to the lifecycle of the corresponding chaincode. They can only be set or modified when instantiating or upgrading the corresponding chaincode on a channel.

In contrast, key-level endorsement policies can be set and modified in a more granular fashion from within a chaincode. The modification is part of the read-write set of a regular transaction.

The shim API provides the following functions to set and retrieve an endorsement policy for/from a regular key.

Note

ep below stands for the “endorsement policy”, which can be expressed either by using the same syntax described above or by using the convenience function described below. Either method will generate a binary version of the endorsement policy that can be consumed by the basic shim API.

SetStateValidationParameter(key string, ep []byte) error
GetStateValidationParameter(key string) ([]byte, error)

For keys that are part of private data in a collection, the following functions apply:

SetPrivateDataValidationParameter(collection, key string, ep []byte) error
GetPrivateDataValidationParameter(collection, key string) ([]byte, error)

To help set endorsement policies and marshal them into validation parameter byte arrays, the Go shim provides an extension with convenience functions that allow the chaincode developer to deal with endorsement policies in terms of the MSP identifiers of organizations, see KeyEndorsementPolicy:

type KeyEndorsementPolicy interface {
    // Policy returns the endorsement policy as bytes
    Policy() ([]byte, error)

    // AddOrgs adds the specified orgs to the list of orgs that are required
    // to endorse
    AddOrgs(roleType RoleType, organizations ...string) error

    // DelOrgs deletes the specified channel orgs from the existing key-level endorsement
    // policy for this KVS key. If any org is not present, an error will be returned.
    DelOrgs(organizations ...string) error

    // ListOrgs returns an array of channel orgs that are required to endorse changes
    ListOrgs() ([]string)
}

For example, to set an endorsement policy for a key where two specific orgs are required to endorse the key change, pass both org MSPIDs to AddOrgs(), and then call Policy() to construct the endorsement policy byte array that can be passed to SetStateValidationParameter().
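As an illustration of this flow, here is a minimal sketch. It assumes the statebased shim extension is importable at the fabric-chaincode-go path (the package has lived at different import paths across releases), and requireOrgsForKey is a hypothetical helper name:

// requireOrgsForKey sets a key-level endorsement policy demanding that
// peers of every listed org endorse future changes to key.
package sbe

import (
	"github.com/hyperledger/fabric-chaincode-go/pkg/statebased"
	"github.com/hyperledger/fabric-chaincode-go/shim"
)

func requireOrgsForKey(stub shim.ChaincodeStubInterface, key string, orgs ...string) error {
	// NewStateEP(nil) starts an empty policy; passing existing policy bytes
	// instead would modify the key's current policy.
	ep, err := statebased.NewStateEP(nil)
	if err != nil {
		return err
	}
	// RoleTypePeer corresponds to the 'Org.peer' principal role.
	if err := ep.AddOrgs(statebased.RoleTypePeer, orgs...); err != nil {
		return err
	}
	// Marshal the policy into validation parameter bytes for the key.
	policyBytes, err := ep.Policy()
	if err != nil {
		return err
	}
	return stub.SetStateValidationParameter(key, policyBytes)
}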

To add the shim extension to your chaincode as a dependency, see Managing external dependencies for chaincode written in Go.

Validation

At commit time, setting a value of a key is no different from setting the endorsement policy of a key — both update the state of the key and are validated based on the same rules.

| Validation          | no validation parameter set | validation parameter set |
|---------------------|-----------------------------|--------------------------|
| modify value        | check chaincode ep          | check key-level ep       |
| modify key-level ep | check chaincode ep          | check key-level ep       |

As we discussed above, if a key is modified and no key-level endorsement policy is present, the chaincode-level endorsement policy applies by default. This is also true when a key-level endorsement policy is set for a key for the first time — the new key-level endorsement policy must first be endorsed according to the pre-existing chaincode-level endorsement policy.

If a key is modified and a key-level endorsement policy is present, the key-level endorsement policy overrides the chaincode-level endorsement policy. In practice, this means that the key-level endorsement policy can be either less restrictive or more restrictive than the chaincode-level endorsement policy. Because the chaincode-level endorsement policy must be satisfied in order to set a key-level endorsement policy for the first time, no trust assumptions have been violated.

If a key’s endorsement policy is removed (set to nil), the chaincode-level endorsement policy becomes the default again.

If a transaction modifies multiple keys with different associated key-level endorsement policies, all of these policies need to be satisfied in order for the transaction to be valid.

Using FabToken

FabToken allows users to easily tokenize assets on Hyperledger Fabric. Tokens are introduced as an Alpha feature in Fabric v2.0. You can use the operational guide below to learn about FabToken and get started using tokens. At the end of this guide, you can find an example of creating tokens on Fabric that extends the BYFN tutorial.

What is FabToken

Representing assets as tokens allows you to use the blockchain ledger to establish the unique state and ownership of an item, and to transfer ownership using a consensus mechanism trusted by multiple parties. As long as the ledger is secure, the asset is immutable and cannot be transferred without the consent of its owner.

Tokens can represent tangible assets, such as goods moving through a supply chain or financial instruments being traded. Tokens can also represent intangible assets such as loyalty points. Because tokens cannot be transferred without the owner’s consent, and transactions are validated on a distributed ledger, representing assets as tokens lowers the risk and difficulty of transferring assets across multiple parties.

FabToken is a token management system that allows you to issue, transfer, and redeem tokens using Hyperledger Fabric. Tokens are stored on a channel ledger and can be owned by any member of the channel. FabToken uses the membership services of Fabric to authenticate the identity of token owners and manage their public and private keys. Fabric token transactions are only valid if they are issued by token owners with valid MSP identifiers.

FabToken provides a simple interface for tokenizing assets on a Fabric channel while taking advantage of the validation and trust provided by the channel. Tokens use the channel ordering service and peers for consensus and validation. Tokens also use channel policies to govern which members are allowed to own and issue tokens. However, users do not need to use smart contracts to create or manage tokens. Tokens can establish the immutability and ownership of an asset without channel members having to write and approve complicated business logic to create and govern those assets. Token owners can also use nodes that they trust to create token transactions, without having to rely on nodes belonging to other organizations to execute or endorse transactions.

The token lifecycle

Tokens have a closed-loop lifecycle within Hyperledger Fabric: they can be issued, transferred, and redeemed.

  • Tokens are created by being issued. The token issuer defines the type of asset represented by the tokens and the quantity. The issuer also assigns issued tokens to their original owners.

  • Tokens are “spent” by being transferred. The token owner transfers the asset represented by token to a new owner that is a member of the fabric channel. Once the token has been transferred, it can no longer be spent or accessed by the previous owner.

  • Tokens are removed from the channel by being redeemed. Redeemed tokens are no longer owned by any channel member and thus can no longer be spent.

FabToken uses an Unspent Transaction Output (UTXO) model to validate token transactions. UTXO transactions provide a strong guarantee that assets are unique, can only be transferred by their owner, and cannot be double spent. Each transaction requires a specific set of outputs and inputs. Outputs are new tokens created by a transaction; they are listed on the ledger in an “unspent” state. Inputs must be unspent tokens that were created as the outputs of another transaction. When a transaction is validated, the spent tokens are destroyed by being removed from the state database of the channel ledger.

The token lifecycle is built on top of the UTXO model to ensure that tokens are unique and can only be spent once. When a token is issued, it is created in an unspent state, belonging to the owner specified by the issuer. The owner can then transfer or redeem the token. A transfer transaction uses the tokens owned by the transferor as inputs. The outputs of the transaction are new tokens owned by the recipient of the transfer. The input tokens become “spent” and are removed from the state database. The quantity of the asset represented by the transferred tokens needs to be the same as the quantity of the outputs. Redeemed tokens are transferred to an empty owner, which makes it impossible for them to be transferred again by any member of the channel.
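To make the bookkeeping concrete, here is a toy, self-contained Go model of the UTXO rules just described. It is purely illustrative: FabToken’s actual data structures and APIs are not shown in this document, and the Ledger type, field names, and error strings below are invented for the sketch (the ownership error deliberately echoes the prover error shown later in this guide):

// Spending consumes unspent inputs and creates new unspent outputs, so a
// token can never be spent twice.
package main

import (
	"errors"
	"fmt"
)

type TokenID struct {
	TxID  string
	Index int
}

type Token struct {
	Owner    string
	Type     string
	Quantity uint64
}

type Ledger struct {
	unspent map[TokenID]Token
}

// Transfer consumes the input token and writes the outputs, enforcing
// ownership, single-spend, one type per transaction, and quantity conservation.
func (l *Ledger) Transfer(spender string, in TokenID, txID string, outputs []Token) error {
	input, ok := l.unspent[in]
	if !ok {
		return errors.New("input token is already spent or unknown")
	}
	if input.Owner != spender {
		return errors.New("the requestor does not own inputs")
	}
	var total uint64
	for _, out := range outputs {
		if out.Type != input.Type {
			return errors.New("all tokens in a transaction must share one type")
		}
		total += out.Quantity
	}
	if total != input.Quantity {
		return errors.New("outputs must sum to the input quantity")
	}
	delete(l.unspent, in) // the input becomes spent
	for i, out := range outputs {
		l.unspent[TokenID{TxID: txID, Index: i}] = out
	}
	return nil
}

func main() {
	l := &Ledger{unspent: map[TokenID]Token{
		{TxID: "tx1", Index: 0}: {Owner: "User1@Org1", Type: "BYFNcoins", Quantity: 100},
	}}
	// Split a 100-coin token into two 50-coin tokens, one per owner.
	err := l.Transfer("User1@Org1", TokenID{TxID: "tx1", Index: 0}, "tx2", []Token{
		{Owner: "User1@Org1", Type: "BYFNcoins", Quantity: 50},
		{Owner: "User1@Org2", Type: "BYFNcoins", Quantity: 50},
	})
	fmt.Println(err) // <nil>; tx1's token is now spent
}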

The guide below describes how you can create and use tokens with Fabric. The instructions detail which steps and information are required to use FabToken, whether through the Fabric token client, the APIs provided by the Fabric SDKs, or the token CLI. You can find a FabToken example at the end of this guide.

Issuing tokens

Tokens can only be created by issuers. Issuers are channel members that are granted the ability to issue tokens by an issuing policy. Users who satisfy the policy can add tokens to the ledger using an issue transaction.

Tokens have three attributes:

  • Owner identifies the channel member that can transfer or redeem the new token through its MSP identity.

  • Type describes the asset the token represents, such as USD, EUR, or BYFNcoins in the example below.

  • Quantity is the number of units of Type that the Token represents.

For example, a single token of type US Dollars can represent 100 dollars; each dollar does not need to be an individual token. In order to spend 50 dollars of the token, or to add another 50 dollars, new tokens representing the new quantities of units would be created.

Issuing policies can also limit which users can issue tokens of particular types. In the Fabric v2.0 Alpha, the IssuingPolicy is set to ANY, meaning that any channel member can issue tokens of any type. In future releases, users will be able to restrict this policy.

List

You can use the list method or command to query the unspent tokens that you own. A successfully executed list command returns the following values:

  • TokenID is the identifier of each token you own.

  • Type is the asset your tokens represent.

  • Quantity is the number of units of Type, in hexadecimal format, of each asset that you own.

Transfer

You can spend the tokens you own by transferring them to another channel member. You make a token transfer by providing the following values:

  • Token ID: The ID of the tokens you want to transfer.

  • Quantity: The amount of the asset represented by each token to be transferred.

  • Recipient: The MSP identifier of the channel member you want to transfer the assets to.

Note that a transfer transaction spends the underlying asset represented by a token rather than trading the token itself. Instead, new tokens are created by the transfer transaction. For example, if you own a token worth 100 dollars, you can use that token to spend 50 dollars. The transfer transaction will create two new tokens as outputs: a token representing 50 dollars will belong to you, and another token representing 50 dollars will belong to the recipient.

The quantity of assets being transferred to the recipients of the transaction needs to be the same as the quantity represented by the input tokens. If you do not want to transfer the entire quantity of the asset represented by your tokens, you can transfer part of the asset, and the transaction will automatically make you the owner of the remainder. Using the example above, if you only spend 50 dollars of a 100 dollar token, the transfer transaction will automatically create a new token worth 50 dollars with you as the owner.

To be successful, a transfer needs to meet the following conditions:

  • The tokens being transferred need to belong to the transaction initiator and be unspent.

  • All input tokens of the transaction need to be of the same type.

Redeem

A redeemed token can no longer be spent. Redeeming tokens removes the assets from the business network governed by the channel and ensures that the assets can no longer be transferred or altered. If an item in a supply chain has reached its final destination, or a financial asset has reached its term, the token representing the asset can be redeemed, since the asset no longer needs to be used by channel members.

An owner redeeming tokens needs to provide the following parameters:

  • Token ID: The ID of the token you want to redeem.

  • Quantity: The quantity of the asset represented by each token you want to redeem.

Tokens can only be redeemed if the redeem transaction is submitted by the token owner. It is not necessary to redeem the entire quantity of the asset represented by a token. For example, if you own a token that represents 100 dollars and want to redeem 50 dollars, the redeem transaction will create a new token worth 50 dollars with you as the owner and transfer the other 50 dollars to a restricted account with no owner. Because that account has no owner, the 50 dollars can no longer be transferred by any member of the channel.

The token transaction flow

FabToken bypasses the standard Hyperledger Fabric endorsement flow. Transactions against chaincode need to be executed on the peers of enough organizations to satisfy the chaincode endorsement policy. This ensures that the outcome of a transaction is consistent with the logic of the smart contract, and that the result of that logic has been validated by multiple organizations. Because tokens are unique representations of assets that can only be transferred or redeemed by their owner, the initial transaction does not need to be validated by multiple organizations.

Both the token CLI and the Fabric SDK for Node.js contain a FabToken client module that can use a trusted peer, known as a prover peer, to create token transactions. For example, a user belonging to the organization that operates a peer can use that peer to query and spend their tokens. Any peer running the Fabric 2.0 Alpha code can be used as a prover peer if it is connected to a channel with the V2_0 capability enabled.

  • In the case of an issue transaction, the prover peer will verify that the requested operation satisfies the IssuingPolicy associated with the tokens being created.

  • In the case of transfer, redeem and list, the peer checks that the input tokens are unspent and belong to the entity requesting the transaction.

  • In the case of transfer and redeem, the peer checks that the input and output tokens are all of the same type and that the output tokens have the same type and sum up to the same quantity as the input tokens.

Once the client has generated the token transaction with the help of the prover peer, it sends the transaction to the ordering service. The ordering service then sends the transaction to committing peers to be validated and added to the ledger. The committing peers check that the transaction conforms to the UTXO transaction model, and that the underlying assets have not been double spent or overspent.

FabToken Example

You can try working with tokens yourself, such as issuing and transferring them, using the sample network from the BYFN tutorial. In this example, we will use the token CLI to trade some tokenized BYFNcoins on the channel created by the ./byfn.sh script.

You can also use the Fabric SDK for Node.js to work with tokens. Visit the “How to perform token operations” tutorial in the Fabric SDK for Node.js documentation. You can also find a sample that issues, transfers, and redeems tokens using the Fabric SDK for Node.js in fabric-samples.

Start the network

The first step is to bring up the sample network. The ./byfn.sh script creates a Fabric network consisting of two organizations, Org1 and Org2, whose peers are joined to a channel named mychannel. We will use mychannel to issue tokens and transfer them between Org1 and Org2.

First, we need to clean up our environment. The commands below navigate to the fabric-samples directory, kill any active or stale Docker containers, and remove previously generated artifacts:

cd fabric-samples/first-network
./byfn.sh down

Next, we need to generate the artifacts required by the sample network. Run the following command:

./byfn.sh generate

We also need to add a few files that will be needed in later steps. Navigate to the crypto-config directory inside the first-network directory:

cd crypto-config

The token CLI uses a configuration file from each organization that contains information about which peers the organization trusts and which ordering node transactions should be sent to. Below is the configuration file for Org1. Note that Org1 uses its own peer as the prover peer, providing the peer’s endpoint information in the “ProverPeer” section of the file.

Org1 Configuration file:

{
  "ChannelID": "",
  "MSPInfo": {
    "MSPConfigPath": "",
    "MSPID": "Org1MSP",
    "MSPType": "bccsp"
  },
  "Orderer": {
    "Address": "orderer.example.com:7050",
    "ConnectionTimeout": 0,
    "TLSEnabled": true,
    "TLSRootCertFile": "/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem",
    "ServerNameOverride": ""
  },
  "CommitterPeer": {
    "Address": "peer0.org1.example.com:7051",
    "ConnectionTimeout": 0,
    "TLSEnabled": true,
    "TLSRootCertFile": "/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt",
    "ServerNameOverride": ""
  },
  "ProverPeer": {
    "Address": "peer0.org1.example.com:7051",
    "ConnectionTimeout": 0,
    "TLSEnabled": true,
    "TLSRootCertFile": "/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt",
    "ServerNameOverride": ""
  }
}

Paste the file above into a text editor and save it as configorg1.json. After saving configorg1.json, create a new file in your text editor and paste in the JSON file below. Save that file as configorg2.json in the same location:

Org2 Configuration file:

{
  "ChannelID": "",
  "MSPInfo": {
    "MSPConfigPath": "",
    "MSPID": "Org2MSP",
    "MSPType": "bccsp"
  },
  "Orderer": {
    "Address": "orderer.example.com:7050",
    "ConnectionTimeout": 0,
    "TLSEnabled": true,
    "TLSRootCertFile": "/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem",
    "ServerNameOverride": ""
  },
  "CommitterPeer": {
    "Address": "peer0.org2.example.com:9051",
    "ConnectionTimeout": 0,
    "TLSEnabled": true,
    "TLSRootCertFile": "/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt",
    "ServerNameOverride": ""
  },
  "ProverPeer": {
    "Address": "peer0.org2.example.com:9051",
    "ConnectionTimeout": 0,
    "TLSEnabled": true,
    "TLSRootCertFile": "/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt",
    "ServerNameOverride": ""
  }
}

We now need to save one additional file that will be used when we transfer tokens. Create a new file in your text editor and save the file below as shares.json:

shares.json:

[
  {
    "recipient": "Org2MSP:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/User1@org2.example.com/msp",
    "quantity": "50"
  }
]

You can now navigate back to the first-network directory and start the sample network:

cd ..
./byfn.sh up

This command creates the orgs, peers, orderers, and channel that we will use to issue and transfer tokens. When the command completes successfully, you should see the following result:

========= All GOOD, BYFN execution completed ===========

 _____   _   _   ____
| ____| | \ | | |  _ \
|  _|   |  \| | | | | |
| |___  | |\  | | |_| |
|_____| |_| \_| |____/
Issue tokens

We are going to tokenize 100 BYFNcoins, which can only be issued and traded on our sample network by our trusted friends. Navigate into the CLI container using the following command:

docker exec -it cli bash

Use the command below to issue a token worth 100 BYFNcoins as the Org1 admin. The command uses configorg1.json to find the endpoint address of the prover peer from Org1, which it will use to assemble the transaction. Note that the Org1 admin submits the transaction, but User1 from Org1 will be the token owner.

# Issue the token as Org1

token issue --config /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/configorg1.json --mspPath /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp --channel mychannel --type BYFNcoins --quantity 100 --recipient Org1MSP:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp

A successful command generates a response similar to the following:

2019-03-12 00:49:43.864 UTC [token.client] BroadcastReceive -> INFO 001 calling OrdererClient.broadcastReceive
Orderer Status [SUCCESS]
Committed [true]

You can view the token you created by using the list command. This command is issued by User1, the owner of the new token.

# List the tokens belonging to User1 of Org1

token list --config /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/configorg1.json --mspPath /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp --channel mychannel

A successful command generates a response similar to the following:

{"tx_id":"4e2664225d6a67508cfa539108383e682f3d03debb768aa7920851fdeea6f5b7"}
[BYFNcoins,100]

You can find the tokenID, type, and quantity of the token in the command output. The tokenID is the transaction ID of the transaction that created the token.

Transferring tokens

Now that a token has been created, User1 from Org1 can spend it by transferring the BYFNcoins to another user. User1 from Org1 will give 50 BYFNcoins to User1 from Org2, while keeping 50 for themselves.

Use the command below to initiate the transfer. The --tokenIDs flag identifies the tokenID returned by the list command. Notice how the --shares flag passes the token CLI a JSON file that assigns 50 BYFNcoins to User1 from Org2. This is the file you created in the crypto-config folder before you started the network. Because the input token represents 100 BYFNcoins, the transfer transaction will automatically create a new token belonging to User1 from Org1 that represents the 50 BYFNcoins not transferred to Org2.

# Transfer 50 BYFNcoins to User1 of Org2
# The split of coins transferred to Org1 and Org2 is in shares.json

token transfer --config /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/configorg1.json --mspPath /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp --channel mychannel --tokenIDs '[{"tx_id":"4e2664225d6a67508cfa539108383e682f3d03debb768aa7920851fdeea6f5b7"}]' --shares /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/shares.json

Once you have submitted the command above, you can run the list command again to verify that User1 from Org1 now has only 50 BYFNcoins:

# List the tokens belonging to User1 of Org1

token list --config /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/configorg1.json --mspPath /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp --channel mychannel

Notice that the BYFNcoins now have a different tokenID than before. The transfer destroyed the previous token and created a new token worth 50 BYFNcoins.

{"tx_id":"4eaf466884586106f480dd0bb4f675ddaa54d1290ea53e9c24a2c1344fb71d2c"}
[BYFNcoins,50]

You can run the command below to verify that User1 from Org2 received 50 BYFNcoins:

# List the tokens belonging to User1 of Org2

token list --config /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/configorg2.json --mspPath /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/User1@org2.example.com/msp --channel mychannel

The tokenID of the coins owned by Org2 uses the same transaction ID as the coins owned by Org1, because both were created by the same transaction. However, because it is the second output of the transaction, the token is also given an index to distinguish it from the token owned by Org1.

{"tx_id":"4eaf466884586106f480dd0bb4f675ddaa54d1290ea53e9c24a2c1344fb71d2c","index":1}
[BYFNcoins,50]

Redeeming tokens

Tokens can only be redeemed by their owner. Once the assets represented by a token have been redeemed, the token can no longer be transferred to another owner.

Use the command below to redeem 25 of the BYFNcoins belonging to Org2.

# Redeem tokens belonging to User1 of Org2

token redeem --config /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/configorg2.json --mspPath /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/User1@org2.example.com/msp --channel mychannel  --tokenIDs '[{"tx_id":"4eaf466884586106f480dd0bb4f675ddaa54d1290ea53e9c24a2c1344fb71d2c","index":1}]' --quantity 25

Org2 now only has one token, worth 25 BYFNcoins. Use the list command to verify the amount of BYFNcoins owned by User1 from Org2.

# List the tokens belonging to User1 of Org2

token list --config /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/configorg2.json --mspPath /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/User1@org2.example.com/msp --channel mychannel

Note the new tokenID created as the output of the redeem transaction.

Let’s try to redeem tokens belonging to a different user. Use the command below to try to redeem, as Org2, the token worth 50 BYFNcoins that belongs to Org1:

# Try to redeem, as Org2, tokens belonging to Org1

token redeem --config /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/configorg2.json --mspPath /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/User1@org2.example.com/msp --channel mychannel  --tokenIDs '[{"tx_id":"4eaf466884586106f480dd0bb4f675ddaa54d1290ea53e9c24a2c1344fb71d2c"}]' --quantity 50

The result will be the following error:

error from prover: the requestor does not own inputs

Future features

The FabToken Alpha supports only limited issuance and trading functionality. Future releases will give users greater ability to integrate tokens into their business logic by supporting non-fungible tokens and chaincode interoperability.

Non-fungible tokens cannot be merged or split. Once they are created, they can only be transferred to a new owner or redeemed. You can use non-fungible tokens to represent unique assets, such as a concert ticket mapped to a specific seat.

Chaincode interoperability will allow tokens to be issued, transferred, and redeemed using chaincode. This will allow channels to issue and define tokens using business logic that channel members agree to. For example, you could use chaincode to set the attributes of tokens and associate certain attributes with different transactions.

Pluggable transaction endorsement and validation

Motivation

When a transaction is validated at time of commit, the peer performs various checks before applying the state changes that come with the transaction itself:

  • Validating the identities that signed the transaction.

  • Verifying the signatures of the endorsers on the transaction.

  • Ensuring the transaction satisfies the endorsement policies of the namespaces of the corresponding chaincodes.

There are use cases which demand custom transaction validation rules different from the default Fabric validation rules, such as:

  • UTXO (Unspent Transaction Output): where validation takes into account whether the transaction double-spends its inputs.

  • Anonymous transactions: When the endorsement doesn’t contain the identity of the peer, but a signature and a public key are shared that can’t be linked to the peer’s identity.

Pluggable endorsement and validation logic

Fabric allows for the implementation and deployment of custom endorsement and validation logic into the peer to be associated with chaincode handling in a pluggable manner. This logic can be either compiled into the peer as built in selectable logic, or compiled and deployed alongside the peer as a Golang plugin.

By default, a chaincode will use the built-in endorsement and validation logic. However, users have the option of selecting custom endorsement and validation plugins as part of the chaincode definition. An administrator can extend the endorsement/validation logic available to the peer by customizing the peer’s local configuration.

Configuration

Each peer has a local configuration (core.yaml) that declares a mapping between the endorsement/validation logic name and the implementation that is to be run.

The default handlers are called ESCC (with the “E” standing for endorsement) and VSCC (validation), and they can be found in the handlers section of the peer’s local configuration:

handlers:
    endorsers:
      escc:
        name: DefaultEndorsement
    validators:
      vscc:
        name: DefaultValidation

When the endorsement or validation implementation is compiled into the peer, the name property represents the initialization function that is to be run in order to obtain the factory that creates instances of the endorsement/validation logic.

The function is an instance method of the HandlerLibrary construct under core/handlers/library/library.go and in order for custom endorsement or validation logic to be added, this construct needs to be extended with any additional methods.

Since this is cumbersome and poses a deployment challenge, one can also deploy custom endorsement and validation logic as a Golang plugin by adding a library property alongside the name property.

For example, if we have custom endorsement and validation logic which is implemented as a plugin, we would have the following entries in the configuration in core.yaml:

handlers:
    endorsers:
      escc:
        name: DefaultEndorsement
      custom:
        name: customEndorsement
        library: /etc/hyperledger/fabric/plugins/customEndorsement.so
    validators:
      vscc:
        name: DefaultValidation
      custom:
        name: customValidation
        library: /etc/hyperledger/fabric/plugins/customValidation.so

And we’d have to place the .so plugin files in the peer’s local file system.

The name of the custom plugin needs to be referenced in the chaincode definition in order to be used by the chaincode. If you are using the peer CLI to approve the chaincode definition, use the --escc and --vscc flags to select the name of the custom endorsement or validation library. If you are using the Fabric SDK for Node.js, visit How to install and start your chaincode. For more information, see Chaincode for Operators.

Note

Hereafter, custom endorsement or validation logic implementations are going to be referred to as “plugins”, even if they are compiled into the peer.

Endorsement plugin implementation

To implement an endorsement plugin, one must implement the Plugin interface found in core/handlers/endorsement/api/endorsement.go:

// Plugin endorses a proposal response
type Plugin interface {
    // Endorse signs the given payload(ProposalResponsePayload bytes), and optionally mutates it.
    // Returns:
    // The Endorsement: A signature over the payload, and an identity that is used to verify the signature
    // The payload that was given as input (could be modified within this function)
    // Or error on failure
    Endorse(payload []byte, sp *peer.SignedProposal) (*peer.Endorsement, []byte, error)

    // Init injects dependencies into the instance of the Plugin
    Init(dependencies ...Dependency) error
}

An endorsement plugin instance of a given plugin type (identified either by the method name as an instance method of the HandlerLibrary or by the plugin .so file path) is created for each channel by having the peer invoke the New method in the PluginFactory interface which is also expected to be implemented by the plugin developer:

// PluginFactory creates a new instance of a Plugin
type PluginFactory interface {
    New() Plugin
}

The Init method is expected to receive as input all the dependencies declared under core/handlers/endorsement/api/, identified as embedding the Dependency interface.

After the creation of the Plugin instance, the Init method is invoked on it by the peer with the dependencies passed as parameters.

Currently Fabric comes with the following dependencies for endorsement plugins:

  • SigningIdentityFetcher: Returns an instance of SigningIdentity based on a given signed proposal:

// SigningIdentity signs messages and serializes its public identity to bytes
type SigningIdentity interface {
    // Serialize returns a byte representation of this identity which is used to verify
    // messages signed by this SigningIdentity
    Serialize() ([]byte, error)

    // Sign signs the given payload and returns a signature
    Sign([]byte) ([]byte, error)
}
  • StateFetcher: Fetches a State object which interacts with the world state:

// State defines interaction with the world state
type State interface {
    // GetPrivateDataMultipleKeys gets the values for the multiple private data items in a single call
    GetPrivateDataMultipleKeys(namespace, collection string, keys []string) ([][]byte, error)

    // GetStateMultipleKeys gets the values for multiple keys in a single call
    GetStateMultipleKeys(namespace string, keys []string) ([][]byte, error)

    // GetTransientByTXID gets the private data associated with the given txID
    GetTransientByTXID(txID string) ([]*rwset.TxPvtReadWriteSet, error)

    // Done releases resources occupied by the State
    Done()
}
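Putting these pieces together, here is a minimal sketch of a complete endorsement plugin. It assumes Fabric 1.x import paths, the SigningIdentityFetcher dependency defined under core/handlers/endorsement/api/identities, and that the peer resolves a NewPluginFactory symbol when loading the .so file; treat it as an illustration rather than the canonical implementation:

package main

import (
	"github.com/pkg/errors"

	endorsement "github.com/hyperledger/fabric/core/handlers/endorsement/api"
	identities "github.com/hyperledger/fabric/core/handlers/endorsement/api/identities"
	"github.com/hyperledger/fabric/protos/peer"
)

// customEndorser signs the proposal response payload without mutating it,
// similar to the default endorsement logic.
type customEndorser struct {
	fetcher identities.SigningIdentityFetcher
}

func (e *customEndorser) Endorse(payload []byte, sp *peer.SignedProposal) (*peer.Endorsement, []byte, error) {
	signer, err := e.fetcher.SigningIdentityForRequest(sp)
	if err != nil {
		return nil, nil, errors.WithMessage(err, "failed fetching signing identity")
	}
	identityBytes, err := signer.Serialize()
	if err != nil {
		return nil, nil, err
	}
	// Sign the concatenation of the payload and the serialized identity.
	signature, err := signer.Sign(append(payload, identityBytes...))
	if err != nil {
		return nil, nil, err
	}
	return &peer.Endorsement{Endorser: identityBytes, Signature: signature}, payload, nil
}

func (e *customEndorser) Init(dependencies ...endorsement.Dependency) error {
	for _, dep := range dependencies {
		if fetcher, ok := dep.(identities.SigningIdentityFetcher); ok {
			e.fetcher = fetcher
			return nil
		}
	}
	return errors.New("no SigningIdentityFetcher passed in the dependencies")
}

// customEndorsementFactory implements the PluginFactory interface.
type customEndorsementFactory struct{}

func (customEndorsementFactory) New() endorsement.Plugin {
	return &customEndorser{}
}

// NewPluginFactory is the symbol the peer looks up when loading the plugin,
// e.g. after building with: go build -buildmode=plugin -o customEndorsement.so
func NewPluginFactory() endorsement.PluginFactory {
	return customEndorsementFactory{}
}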

Validation plugin implementation

To implement a validation plugin, one must implement the Plugin interface found in core/handlers/validation/api/validation.go:

// Plugin validates transactions
type Plugin interface {
    // Validate returns nil if the action at the given position inside the transaction
    // at the given position in the given block is valid, or an error if not.
    Validate(block *common.Block, namespace string, txPosition int, actionPosition int, contextData ...ContextDatum) error

    // Init injects dependencies into the instance of the Plugin
    Init(dependencies ...Dependency) error
}

Each ContextDatum is additional runtime-derived metadata that is passed by the peer to the validation plugin. Currently, the only ContextDatum that is passed is one that represents the endorsement policy of the chaincode:

// SerializedPolicy defines a serialized policy
type SerializedPolicy interface {
      validation.ContextDatum

      // Bytes returns the bytes of the SerializedPolicy
      Bytes() []byte
}

A validation plugin instance of a given plugin type (identified either by the method name as an instance method of the HandlerLibrary or by the plugin .so file path) is created for each channel by having the peer invoke the New method in the PluginFactory interface which is also expected to be implemented by the plugin developer:

// PluginFactory creates a new instance of a Plugin
type PluginFactory interface {
    New() Plugin
}

The Init method is expected to receive as input all the dependencies declared under core/handlers/validation/api/, identified as embedding the Dependency interface.

After the creation of the Plugin instance, the Init method is invoked on it by the peer with the dependencies passed as parameters.

Currently Fabric comes with the following dependencies for validation plugins:

  • IdentityDeserializer: Converts byte representation of identities into Identity objects that can be used to verify signatures signed by them, be validated themselves against their corresponding MSP, and see whether they satisfy a given MSP Principal. The full specification can be found in core/handlers/validation/api/identities/identities.go.

  • PolicyEvaluator: Evaluates whether a given policy is satisfied:

// PolicyEvaluator evaluates policies
type PolicyEvaluator interface {
    validation.Dependency

    // Evaluate takes a set of SignedData and evaluates whether this set of signatures satisfies
    // the policy with the given bytes
    Evaluate(policyBytes []byte, signatureSet []*common.SignedData) error
}
  • StateFetcher: Fetches a State object which interacts with the world state:

// State defines interaction with the world state
type State interface {
    // GetStateMultipleKeys gets the values for multiple keys in a single call
    GetStateMultipleKeys(namespace string, keys []string) ([][]byte, error)

    // GetStateRangeScanIterator returns an iterator that contains all the key-values between given key ranges.
    // startKey is included in the results and endKey is excluded. An empty startKey refers to the first available key
    // and an empty endKey refers to the last available key. For scanning all the keys, both the startKey and the endKey
    // can be supplied as empty strings. However, a full scan should be used judiciously for performance reasons.
    // The returned ResultsIterator contains results of type *KV which is defined in protos/ledger/queryresult.
    GetStateRangeScanIterator(namespace string, startKey string, endKey string) (ResultsIterator, error)

    // GetStateMetadata returns the metadata for given namespace and key
    GetStateMetadata(namespace, key string) (map[string][]byte, error)

    // GetPrivateDataMetadata gets the metadata of a private data item identified by a tuple <namespace, collection, key>
    GetPrivateDataMetadata(namespace, collection, key string) (map[string][]byte, error)

    // Done releases resources occupied by the State
    Done()
}
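Analogously to the endorsement case, here is a minimal sketch of a complete validation plugin. It assumes Fabric 1.x import paths (the policies subpackage is imported under an alias) and simplifies the real work of extracting endorsements from the transaction; treat it as an illustration only:

package main

import (
	"github.com/pkg/errors"

	validation "github.com/hyperledger/fabric/core/handlers/validation/api"
	vpolicies "github.com/hyperledger/fabric/core/handlers/validation/api/policies"
	"github.com/hyperledger/fabric/protos/common"
)

// customValidator evaluates the chaincode's endorsement policy, which the
// peer passes in as a SerializedPolicy context datum.
type customValidator struct {
	evaluator vpolicies.PolicyEvaluator
}

func (v *customValidator) Validate(block *common.Block, namespace string, txPosition int, actionPosition int, contextData ...validation.ContextDatum) error {
	var policy vpolicies.SerializedPolicy
	for _, datum := range contextData {
		if sp, ok := datum.(vpolicies.SerializedPolicy); ok {
			policy = sp
			break
		}
	}
	if policy == nil {
		// A misconfiguration, not a transient failure: the transaction is
		// marked invalid.
		return errors.New("no serialized policy passed to the plugin")
	}
	// A real implementation would unmarshal the action at txPosition and
	// actionPosition inside the block and collect its endorsements here.
	var signatureSet []*common.SignedData
	if err := v.evaluator.Evaluate(policy.Bytes(), signatureSet); err != nil {
		// Deterministic outcome: the transaction is invalidated.
		return errors.WithMessage(err, "endorsement policy not satisfied")
	}
	return nil
}

func (v *customValidator) Init(dependencies ...validation.Dependency) error {
	for _, dep := range dependencies {
		if evaluator, ok := dep.(vpolicies.PolicyEvaluator); ok {
			v.evaluator = evaluator
			return nil
		}
	}
	return errors.New("no PolicyEvaluator passed in the dependencies")
}

type customValidationFactory struct{}

func (customValidationFactory) New() validation.Plugin {
	return &customValidator{}
}

// NewPluginFactory is the symbol the peer looks up when loading the plugin.
func NewPluginFactory() validation.PluginFactory {
	return customValidationFactory{}
}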

Important notes

  • Validation plugin consistency across peers: In future releases, the Fabric channel infrastructure will guarantee that the same validation logic is used for a given chaincode by all peers in the channel at any given blockchain height, in order to eliminate the chance of misconfiguration that might lead to state divergence among peers that accidentally run different implementations. For now, however, it is the sole responsibility of the system operators and administrators to ensure this doesn't happen.

  • Validation plugin error handling: Whenever a validation plugin can't determine whether a given transaction is valid, because of some transient execution problem like inability to access the database, it should return an error of type ExecutionFailureError, which is defined in core/handlers/validation/api/validation.go. Any other returned error is treated as an endorsement policy error and marks the transaction as invalidated by the validation logic. However, if an ExecutionFailureError is returned, chain processing halts instead of marking the transaction as invalid. This is to prevent state divergence between different peers (see the sketch after this list).

  • Error handling for private metadata retrieval: In case a plugin retrieves metadata for private data by making use of the StateFetcher interface, it is important that errors are handled as follows: CollConfigNotDefinedError and InvalidCollNameError, signalling that the specified collection does not exist, should be handled as deterministic errors and should not lead the plugin to return an ExecutionFailureError.

  • Importing Fabric code into the plugin: Importing code that belongs to Fabric other than protobufs as part of the plugin is highly discouraged, and can lead to issues when the Fabric code changes between releases, or can cause inoperability issues when running mixed peer versions. Ideally, the plugin code should only use the dependencies given to it, and should import the bare minimum other than protobufs.
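To make the error-handling note above concrete, here is a small sketch (the package name and helper are hypothetical; only validation.ExecutionFailureError comes from the Fabric API):

package myvalidation

import (
	validation "github.com/hyperledger/fabric/core/handlers/validation/api"
)

// classifyValidationError illustrates the rule above: transient failures are
// wrapped in ExecutionFailureError, which halts chain processing, while any
// other error simply marks the transaction as invalid.
func classifyValidationError(err error, transient bool) error {
	if err == nil {
		return nil
	}
	if transient { // e.g. the state database could not be reached
		return &validation.ExecutionFailureError{Reason: err.Error()}
	}
	return err
}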

Access Control Lists (ACL)

What is an Access Control List?

Note: This topic deals with access control and policies on a channel administration level. To learn about access control within a chaincode, check out our chaincode for developers tutorial.

Fabric uses access control lists (ACLs) to manage access to resources by associating a policy with a resource. A policy specifies a rule that evaluates to true or false given a set of identities. Fabric contains a number of default ACLs. In this document, we'll talk about how they're formatted and how the defaults can be overridden.

But before we can do that, it's necessary to understand a little about resources and policies.

Resources

Users interact with Fabric by targeting a user chaincode, a system chaincode, or an events stream source. As such, these endpoints are considered "resources" on which access control should be exercised.

Application developers need to be aware of these resources and the default policies associated with them. The complete list of these resources can be found in configtx.yaml. You can look at a sample configtx.yaml file here.

The resources named in configtx.yaml are an exhaustive list of all internal resources currently defined by Fabric. The loose convention adopted there is <component>/<resource>. So cscc/GetConfigBlock is the resource for the GetConfigBlock call in the CSCC component.

Policies

Policies are fundamental to the way Fabric works because they allow the identity (or set of identities) associated with a request to be checked against the policy associated with the resource needed to fulfill the request. Endorsement policies are used to determine whether a transaction has been appropriately endorsed. The policies defined in the channel configuration are referenced as modification policies as well as for access control, and are defined in the channel configuration itself.

Policies can be structured in one of two ways: as Signature policies or as ImplicitMeta policies.

Signature policies

These policies identify specific users who must sign in order for the policy to be satisfied. For example:

Policies:
  MyPolicy:
    Type: Signature
    Rule: "Org1.Peer OR Org2.Peer"

This policy construct can be interpreted as: the policy named MyPolicy can only be satisfied by the signature of an identity with the role of "a peer from Org1" or "a peer from Org2".

Signature policies support arbitrary combinations of AND, OR, and NOutOf, allowing the construction of extremely powerful rules like: "An admin of org A and two other admins, or 11 of 20 org admins".
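For instance, a "2 of 3 org admins" rule could be expressed in configtx.yaml along these lines (organization names are examples):

Policies:
  TwoOfThreeAdmins:
    Type: Signature
    Rule: "OutOf(2, 'OrgA.admin', 'OrgB.admin', 'OrgC.admin')"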

ImplicitMeta policies

ImplicitMeta policies aggregate the result of policies deeper in the configuration hierarchy that are ultimately defined by Signature policies. They support default rules like "a MAJORITY of the organization Admins". These policies use a different but still very simple syntax as compared to Signature policies: <ALL|ANY|MAJORITY> <sub_policy>.

For example: ANY Readers or MAJORITY Admins.

Note that in the default policy configuration Admins have an operational role. Policies that specify that only Admins (or some subset of Admins) have access to a resource will tend to govern sensitive or operational aspects of the network (such as instantiating chaincode on a channel). Writers will tend to be able to propose ledger updates, such as a transaction, but will not typically have administrative permissions. Readers have a passive role: they can access information but do not have the permission to propose ledger updates, nor can they perform administrative tasks. These default policies can be added to, edited, or supplemented, for example by the new peer and client roles (if you have NodeOU support).

Here's an example of an ImplicitMeta policy structure:

Policies:
  AnotherPolicy:
    Type: ImplicitMeta
    Rule: "MAJORITY Admins"

Here, the policy AnotherPolicy can be satisfied by a MAJORITY of Admins, where Admins is eventually specified by lower-level Signature policies.

Where is access control specified?

Access control defaults exist inside configtx.yaml, the file that configtxgen uses to build channel configurations.

Access control can be updated in one of two ways: either by editing configtx.yaml itself, which will propagate the ACL changes to any new channels, or by updating access control in the channel configuration of a particular channel.

How ACLs are formatted in configtx.yaml

ACLs are formatted as key-value pairs consisting of a resource function name followed by a string. To see what this looks like, refer to this sample configtx.yaml file.

Two excerpts from this sample:

# ACL policy for invoking chaincodes on peer
peer/Propose: /Channel/Application/Writers
# ACL policy for sending block events
event/Block: /Channel/Application/Readers

These ACLs define that access to the peer/Propose and event/Block resources is restricted to identities satisfying the policies defined at the canonical paths /Channel/Application/Writers and /Channel/Application/Readers, respectively.

Updating ACL defaults in configtx.yaml

In cases where it will be necessary to override ACL defaults when bootstrapping a network, or to change the ACLs before a channel has been bootstrapped, the best practice is to update configtx.yaml.

Let's say you wanted to modify the peer/Propose ACL default, which specifies the policy for invoking chaincodes on a peer, from /Channel/Application/Writers to a policy called MyPolicy.

This is done by adding a policy called MyPolicy (it could be called anything, but for this example we'll call it MyPolicy). The policy is defined in the Application.Policies section of configtx.yaml and specifies a rule to be checked to grant or deny access to a user. For this example, we'll create a Signature policy identifying SampleOrg.admin.

Policies: &ApplicationDefaultPolicies
    Readers:
        Type: ImplicitMeta
        Rule: "ANY Readers"
    Writers:
        Type: ImplicitMeta
        Rule: "ANY Writers"
    Admins:
        Type: ImplicitMeta
        Rule: "MAJORITY Admins"
    MyPolicy:
        Type: Signature
        Rule: "OR('SampleOrg.admin')"

Then, edit the Application: ACLs section of configtx.yaml to change peer/Propose from this:

peer/Propose: /Channel/Application/Writers

To this:

peer/Propose: /Channel/Application/MyPolicy

Once these fields have been changed in configtx.yaml, the configtxgen tool will use the policies and ACLs defined there when creating a channel creation transaction. When appropriately signed and submitted by one of the admins of the consortium members, a new channel with the defined ACLs and policies is created.

Once MyPolicy has been bootstrapped into the channel configuration, it can also be referenced to override other ACL defaults. For example:

SampleSingleMSPChannel:
    Consortium: SampleConsortium
    Application:
        <<: *ApplicationDefaults
        ACLs:
            <<: *ACLsDefault
            event/Block: /Channel/Application/MyPolicy

This would restrict the ability to subscribe to block events to identities satisfying SampleOrg.admin.

If channels have already been created that should use this ACL, they'll have to update their channel configurations one at a time using the following flow:

Updating ACL defaults in the channel config

If channels have already been created that want to use MyPolicy to restrict access to peer/Propose, or if they want to create ACLs they don't want other channels to know about, they'll have to update their channel configurations one at a time through config update transactions.

Note: Channel configuration transactions are an involved process we won't delve into here. If you want to read more about them, check out our document on channel configuration updates and our "Adding an Org to a Channel" tutorial.

After pulling, translating, and stripping the metadata from the configuration block, you would edit the configuration by adding MyPolicy under Application: policies, where the Admins, Writers, and Readers policies already live.

"MyPolicy": {
  "mod_policy": "Admins",
  "policy": {
    "type": 1,
    "value": {
      "identities": [
        {
          "principal": {
            "msp_identifier": "SampleOrg",
            "role": "ADMIN"
          },
          "principal_classification": "ROLE"
        }
      ],
      "rule": {
        "n_out_of": {
          "n": 1,
          "rules": [
            {
              "signed_by": 0
            }
          ]
        }
      },
      "version": 0
    }
  },
  "version": "0"
},

Note in particular the msp_identifier and role here.

Then, in the ACLs section of the config, change the peer/Propose ACL from this:

"peer/Propose": {
  "policy_ref": "/Channel/Application/Writers"

To this:

"peer/Propose": {
  "policy_ref": "/Channel/Application/MyPolicy"

Note: If there are no ACLs defined in your channel configuration, you will have to add the entire ACL structure.

Once the configuration has been updated, it will need to be submitted through the usual channel update process.

Satisfying an ACL that requires access to multiple resources

If a member makes a request that calls multiple system chaincodes, the ACLs of all of those system chaincodes must be satisfied.

For example, peer/Propose refers to any proposal request on a channel. If the particular proposal requires access to two system chaincodes, one that requires an identity satisfying Writers and one that requires an identity satisfying MyPolicy, then the member submitting the proposal must have an identity that evaluates to "true" for both Writers and MyPolicy.

In the default configuration, Writers is a Signature policy whose rule is SampleOrg.member; in other words, "any member of my organization". MyPolicy, listed above, has a rule of SampleOrg.admin, or "any admin of my organization". To satisfy these ACLs, the member would have to be both an administrator and a member of SampleOrg. By default, all admins are members (though not all members are admins), but it is possible to overwrite these policies to whatever you want them to be. As a result, it's important to keep track of these policies to ensure that the ACLs for peer proposals are not impossible to satisfy (unless that is the intention).

Migration considerations for customers using the experimental ACL feature

Previously, the management of access control lists was done in the isolated_data section of a channel creation transaction and updated via PEER_RESOURCE_UPDATE transactions. Originally, it was thought that the resources tree would handle the update of several functions that ultimately ended up being handled in other ways, so maintaining a separate parallel peer configuration tree was deemed unnecessary.

Migration is possible for customers that used the experimental resources tree in v1.1. Because the official v1.2 release does not support the old ACL methods, network operators should shut down all their peers. Then they should upgrade the peers to v1.2, submit a channel reconfiguration transaction that enables the v1.2 capability and sets the desired ACLs, and finally restart the upgraded peers. The restarted peers will immediately consume the new channel configuration and enforce the ACLs as required.

MSP Implementation with Identity Mixer

What is Idemix?

Idemix is a cryptographic protocol suite, which provides strong authentication as well as privacy-preserving features such as anonymity, the ability to transact without revealing the identity of the transactor, and unlinkability, the ability of a single identity to send multiple transactions without revealing that the transactions were sent by the same identity.

There are three actors involved in an Idemix flow: user, issuer, and verifier.

_images/idemix-overview.png
  • An issuer certifies a set of user attributes and issues them in the form of a digital certificate, hereafter called a “credential”.

  • The user later generates a “zero-knowledge proof” of possession of the credential and also selectively discloses only the attributes the user chooses to reveal. The proof, because it is zero-knowledge, reveals no additional information to the verifier, issuer, or anyone else.

As an example, suppose “Alice” needs to prove to Bob (a store clerk) that she has a driver’s license issued to her by the DMV.

In this scenario, Alice is the user, the DMV is the issuer, and Bob is the verifier. In order to prove to Bob that Alice has a driver’s license, she could show it to him. However, Bob would then be able to see Alice’s name, address, exact age, etc. — much more information than Bob needs to know.

Instead, Alice can use Idemix to generate a “zero-knowledge proof” for Bob, which only reveals that she has a valid driver’s license and nothing else.

So from the proof:

  • Bob does not learn any additional information about Alice other than the fact that she has a valid license (anonymity).

  • If Alice visits the store multiple times and generates a proof each time for Bob, Bob would not be able to tell from the proof that it was the same person (unlinkability).

Idemix authentication technology provides the trust model and security guarantees that are similar to what is ensured by standard X.509 certificates but with underlying cryptographic algorithms that efficiently provide advanced privacy features including the ones described above. We’ll compare Idemix and X.509 technologies in detail in the technical section below.

How to use Idemix

To understand how to use Idemix with Hyperledger Fabric, we need to see which Fabric components correspond to the user, issuer, and verifier in Idemix.

  • The Fabric Java SDK is the API for the user. In the future, other Fabric SDKs will also support Idemix.

  • Fabric provides two possible Idemix issuers:

    1. Fabric CA for production environments or development, and

    2. the idemixgen tool for development environments.

  • The verifier is an Idemix MSP in Fabric.

In order to use Idemix in Hyperledger Fabric, the following three basic steps are required:

_images/idemix-three-steps.png

Compare the roles in this image to the ones above.

  1. Consider the issuer.

    Fabric CA (version 1.3 or later) has been enhanced to automatically function as an Idemix issuer. When fabric-ca-server is started (or initialized via the fabric-ca-server init command), the following two files are automatically created in the home directory of the fabric-ca-server: IssuerPublicKey and IssuerRevocationPublicKey. These files are required in step 2.

    For a development environment and if you are not using Fabric CA, you may use idemixgen to create these files.

  2. Consider the verifier.

    You need to create an Idemix MSP using the IssuerPublicKey and IssuerRevocationPublicKey from step 1.

    For example, consider the following excerpt from configtx.yaml in the Hyperledger Java SDK sample:

    - &Org1Idemix
        # defaultorg defines the organization which is used in the sampleconfig
        # of the fabric.git development environment
        name: idemixMSP1
    
        # id to load the msp definition as
        id: idemixMSPID1
    
        msptype: idemix
        mspdir: crypto-config/peerOrganizations/org3.example.com
    

    The msptype is set to idemix and the contents of the mspdir directory (crypto-config/peerOrganizations/org3.example.com/msp in this example) contains the IssuerPublicKey and IssuerRevocationPublicKey files.

    Note that in this example, Org1Idemix represents the Idemix MSP for Org1 (not shown), which would also have an X509 MSP.

  3. Consider the user. Recall that the Java SDK is the API for the user.

    There is only a single additional API call required in order to use Idemix with the Java SDK: the idemixEnroll method of the org.hyperledger.fabric_ca.sdk.HFCAClient class. For example, assume hfcaClient is your HFCAClient object and x509Enrollment is your org.hyperledger.fabric.sdk.Enrollment associated with your X509 certificate.

    The following call will return an org.hyperledger.fabric.sdk.Enrollment object associated with your Idemix credential.

    IdemixEnrollment idemixEnrollment = hfcaClient.idemixEnroll(x509Enrollment, "idemixMSPID1");
    

    Note also that IdemixEnrollment implements the org.hyperledger.fabric.sdk.Enrollment interface and can, therefore, be used in the same way that one uses the X509 enrollment object, except, of course, that this automatically provides the privacy enhancing features of Idemix.

Idemix and chaincode

From a verifier perspective, there is one more actor to consider: chaincode. What can chaincode learn about the transactor when an Idemix credential is used?

The cid (Client Identity) library (for golang only) has been extended to support the GetAttributeValue function when an Idemix credential is used. However, as mentioned in the “Current limitations” section below, there are only two attributes which are disclosed in the Idemix case: ou and role.

If Fabric CA is the credential issuer:

  • the value of the ou attribute is the identity’s affiliation (e.g. “org1.department1”);

  • the value of the role attribute will be either ‘member’ or ‘admin’. A value of ‘admin’ means that the identity is an MSP administrator. By default, identities created by Fabric CA will return the ‘member’ role. In order to create an ‘admin’ identity, register the identity with the role attribute and a value of 2.
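For example, such an 'admin' identity could be registered ahead of enrollment like this (the identity name is an example; --id.attrs is the Fabric CA client flag for registration attributes):

fabric-ca-client register --id.name idemix-admin --id.type client --id.attrs "role=2"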

For an example of setting an affiliation in the Java SDK see this sample.

For an example of using the CID library in go chaincode to retrieve attributes, see this go chaincode.

Current limitations

The current version of Idemix does have a few limitations.

  • Fixed set of attributes

    It is not yet possible to issue or use an Idemix credential with custom attributes. Custom attributes will be supported in a future release.

    The following four attributes are currently supported:

    1. Organizational Unit attribute (“ou”):

    • Usage: same as X.509

    • Type: String

    • Revealed: always

    2. Role attribute (“role”):

    • Usage: same as X.509

    • Type: integer

    • Revealed: always

    3. Enrollment ID attribute

    • Usage: uniquely identify a user — same in all enrollment credentials that belong to the same user (will be used for auditing in future releases)

    • Type: BIG

    • Revealed: never in the signature, only when generating an authentication token for Fabric CA

    4. Revocation Handle attribute

    • Usage: uniquely identify a credential (will be used for revocation in future releases)

    • Type: integer

    • Revealed: never

  • Revocation is not yet supported

    Although much of the revocation framework is in place as can be seen by the presence of a revocation handle attribute mentioned above, revocation of an Idemix credential is not yet supported.

  • Peers do not use Idemix for endorsement

    Currently, the Idemix MSP is used by peers only for signature verification. Signing with Idemix is only done via the Client SDK. More roles (including a ‘peer’ role) will be supported by the Idemix MSP in the future.

Technical summary

Comparing Idemix credentials to X.509 certificates

The certificate/credential concept and the issuance process are very similar in Idemix and X.509 certs: a set of attributes is digitally signed with a signature that cannot be forged and there is a secret key to which a credential is cryptographically bound.

The main difference between a standard X.509 certificate and an Identity Mixer credential is the signature scheme that is used to certify the attributes. The signatures underlying the Identity Mixer system allow for efficient proofs of the possession of a signature and the corresponding attributes without revealing the signature and (selected) attribute values themselves. We use zero-knowledge proofs to ensure that such “knowledge” or “information” is not revealed while ensuring that the signature over some attributes is valid and the user is in possession of the corresponding credential secret key.

Such proofs, like X.509 certificates, can be verified with the public key of the authority that originally signed the credential and cannot be successfully forged. Only the user who knows the credential secret key can generate the proofs about the credential and its attributes.

With regard to unlinkability, when an X.509 certificate is presented, all attributes have to be revealed to verify the certificate signature. This implies that all certificate usages for signing transactions are linkable.

To avoid such linkability, fresh X.509 certificates need to be used every time, which results in complex key management and communication and storage overhead. Furthermore, there are cases where it is important that not even the CA issuing the certificates is able to link all the transactions to the user.

Idemix helps to avoid linkability with respect to both the CA and verifiers, since even the CA is not able to link proofs to the original credential. Neither the issuer nor a verifier can tell whether two proofs were derived from the same credential (or from two different ones).

More details on the concepts and features of the Identity Mixer technology are described in the paper Concepts and Languages for Privacy-Preserving Attribute-Based Authentication.

Topology Information

Given the above limitations, it is recommended to have only one Idemix-based MSP per channel or, at the extreme, per network. Indeed, for example, having multiple Idemix-based MSPs per channel would allow a party reading the ledger of that channel to tell apart transactions signed by parties belonging to different Idemix-based MSPs. This is because each transaction leaks the MSP-ID of the signer. In other words, Idemix currently provides anonymity of clients only within the same organization (MSP).

In the future, Idemix could be extended to support anonymous hierarchies of Idemix-based Certification Authorities whose certified credentials can be verified by using a unique public-key, therefore achieving anonymity across organizations (MSPs). This would allow multiple Idemix-based MSPs to coexist in the same channel.

In principle, a channel can be configured to have a single Idemix-based MSP and multiple X.509-based MSPs. Of course, the interaction between these MSPs can potentially leak information, and an assessment of the leaked information needs to be done case by case.

Underlying cryptographic protocols

Idemix technology is built from a blind signature scheme that supports multiple messages and efficient zero-knowledge proofs of signature possession. All of the cryptographic building blocks for Idemix were published at top conferences and journals and verified by the scientific community.

This particular Idemix implementation for Fabric uses a pairing-based signature scheme that was briefly proposed by Camenisch and Lysyanskaya and described in detail by Au et al. It uses the ability to prove knowledge of a signature in a zero-knowledge proof from Camenisch et al.

Identity Mixer MSP configuration generator (idemixgen)

This document describes the usage for the idemixgen utility, which can be used to create configuration files for the identity mixer based MSP. Two commands are available, one for creating a fresh CA key pair, and one for creating an MSP config using a previously generated CA key.

Directory Structure

The idemixgen tool will create directories with the following structure:

- /ca/
    IssuerSecretKey
    IssuerPublicKey
    RevocationKey
- /msp/
    IssuerPublicKey
    RevocationPublicKey
- /user/
    SignerConfig

The ca directory contains the issuer secret key (including the revocation key) and should only be present for a CA. The msp directory contains the information required to set up an MSP verifying idemix signatures. The user directory specifies a default signer.

CA Key Generation

CA (issuer) keys suitable for Identity Mixer can be created using the command idemixgen ca-keygen. This will create directories ca and msp in the working directory.

Adding a Default Signer

After generating the ca and msp directories with idemixgen ca-keygen, a default signer specified in the user directory can be added to the config with idemixgen signerconfig.

$ idemixgen signerconfig -h
usage: idemixgen signerconfig [<flags>]

Generate a default signer for this Idemix MSP

Flags:
    -h, --help               Show context-sensitive help (also try --help-long and --help-man).
    -u, --org-unit=ORG-UNIT  The Organizational Unit of the default signer
    -a, --admin              Make the default signer admin
    -e, --enrollment-id=ENROLLMENT-ID
                             The enrollment id of the default signer
    -r, --revocation-handle=REVOCATION-HANDLE
                             The handle used to revoke this signer

For example, we can create a default signer that is a member of organizational unit “OrgUnit1”, with enrollment identity “johndoe”, revocation handle “1234”, and that is an admin, with the following command:

idemixgen signerconfig -u OrgUnit1 --admin -e "johndoe" -r 1234

The Operations Service

The peer and the orderer host an HTTP server that offers a RESTful “operations” API. This API is unrelated to the Fabric network services and is intended to be used by operators, not administrators or “users” of the network.

The API exposes the following capabilities:

  • Log level management

  • Health checks

  • Prometheus target for operational metrics (when configured)

Configuring the Operations Service

The operations service requires two basic pieces of configuration:

  • The address and port to listen on.

  • The TLS certificates and keys to use for authentication and encryption. Note, these certificates should be generated by a separate and dedicated CA. Do not use a CA that has generated certificates for any organizations in any channels.

Peer

For each peer, the operations server can be configured in the operations section of core.yaml:

operations:
  # host and port for the operations server
  listenAddress: 127.0.0.1:9443

  # TLS configuration for the operations endpoint
  tls:
    # TLS enabled
    enabled: true

    # path to PEM encoded server certificate for the operations server
    cert:
      file: tls/server.crt

    # path to PEM encoded server key for the operations server
    key:
      file: tls/server.key

    # most operations service endpoints require client authentication when TLS
    # is enabled. clientAuthRequired requires client certificate authentication
    # at the TLS layer to access all resources.
    clientAuthRequired: false

    # paths to PEM encoded ca certificates to trust for client authentication
    clientRootCAs:
      files: []

The listenAddress key defines the host and port that the operation server will listen on. If the server should listen on all addresses, the host portion can be omitted.

The tls section is used to indicate whether or not TLS is enabled for the operations service, the location of the service’s certificate and private key, and the locations of certificate authority root certificates that should be trusted for client authentication. When enabled is true, most of the operations service endpoints require client authentication, therefore clientRootCAs.files must be set. When clientAuthRequired is true, the TLS layer will require clients to provide a certificate for authentication on every request. See the Operations Security section below for more details.

Orderer

For each orderer, the operations server can be configured in the Operations section of orderer.yaml:

Operations:
  # host and port for the operations server
  ListenAddress: 127.0.0.1:8443

  # TLS configuration for the operations endpoint
  TLS:
    # TLS enabled
    Enabled: true

    # PrivateKey: PEM-encoded tls key for the operations endpoint
    PrivateKey: tls/server.key

    # Certificate governs the file location of the server TLS certificate.
    Certificate: tls/server.crt

    # Paths to PEM encoded ca certificates to trust for client authentication
    ClientRootCAs: []

    # Most operations service endpoints require client authentication when TLS
    # is enabled. ClientAuthRequired requires client certificate authentication
    # at the TLS layer to access all resources.
    ClientAuthRequired: false

The ListenAddress key defines the host and port that the operations server will listen on. If the server should listen on all addresses, the host portion can be omitted.

The TLS section is used to indicate whether or not TLS is enabled for the operations service, the location of the service’s certificate and private key, and the locations of certificate authority root certificates that should be trusted for client authentication. When Enabled is true, most of the operations service endpoints require client authentication, therefore ClientRootCAs must be set. When ClientAuthRequired is true, the TLS layer will require clients to provide a certificate for authentication on every request. See the Operations Security section below for more details.

Operations Security

As the operations service is focused on operations and intentionally unrelated to the Fabric network, it does not use the Membership Services Provider for access control. Instead, the operations service relies entirely on mutual TLS with client certificate authentication.

When TLS is disabled, authorization is bypassed and any client that can connect to the operations endpoint will be able to use the API.

When TLS is enabled, a valid client certificate must be provided in order to access all resources unless explicitly noted otherwise below.

When clientAuthRequired is also enabled, the TLS layer will require a valid client certificate regardless of the resource being accessed.

Log Level Management

The operations service provides a /logspec resource that operators can use to manage the active logging spec for a peer or orderer. The resource is a conventional REST resource and supports GET and PUT requests.

When a GET /logspec request is received by the operations service, it will respond with a JSON payload that contains the current logging specification:

{"spec":"info"}

When a PUT /logspec request is received by the operations service, it will read the body as a JSON payload. The payload must consist of a single attribute named spec.

{"spec":"chaincode=debug:info"}

If the spec is activated successfully, the service will respond with a 204 "No Content" response. If an error occurs, the service will respond with a 400 "Bad Request" and an error payload:

{"error":"error message"}

Health Checks

The operations service provides a /healthz resource that operators can use to help determine the liveness and health of peers and orderers. The resource is a conventional REST resource that supports GET requests. The implementation is intended to be compatible with the liveness probe model used by Kubernetes but can be used in other contexts.

When a GET /healthz request is received, the operations service will call all registered health checkers for the process. When all of the health checkers return successfully, the operations service will respond with a 200 "OK" and a JSON body:

{
  "status": "OK",
  "time": "2009-11-10T23:00:00Z"
}

If one or more of the health checkers returns an error, the operations service will respond with a 503 "Service Unavailable" and a JSON body that includes information about which health checker failed:

{
  "status": "Service Unavailable",
  "time": "2009-11-10T23:00:00Z",
  "failed_checks": [
    {
      "component": "docker",
      "reason": "failed to connect to Docker daemon: invalid endpoint"
    }
  ]
}

In the current version, the only health check that is registered is for Docker. Future versions will be enhanced to add additional health checks.

When TLS is enabled, a valid client certificate is not required to use this service unless clientAuthRequired is set to true.
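For example, an operator might probe a peer's operations endpoint like this (hypothetical address and CA path; add a client certificate if clientAuthRequired is enabled):

curl --cacert ca.pem https://127.0.0.1:9443/healthz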

Metrics

Some components of the Fabric peer and orderer expose metrics that can help provide insight into the behavior of the system. Operators and administrators can use this information to better understand how the system is performing over time.

Configuring Metrics

Fabric provides two ways to expose metrics: a pull model based on Prometheus and a push model based on StatsD.

Prometheus

A typical Prometheus deployment scrapes metrics by requesting them from an HTTP endpoint exposed by instrumented targets. As Prometheus is responsible for requesting the metrics, it is considered a pull system.

When configured, a Fabric peer or orderer will present a /metrics resource on the operations service.

Peer

A peer can be configured to expose a /metrics endpoint for Prometheus to scrape by setting the metrics provider to prometheus in the metrics section of core.yaml.

metrics:
  provider: prometheus

Orderer

An orderer can be configured to expose a /metrics endpoint for Prometheus to scrape by setting the metrics provider to prometheus in the Metrics section of orderer.yaml.

Metrics:
  Provider: prometheus
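For illustration, a minimal Prometheus scrape job targeting the operations endpoint might look like the following (the job name, target address, and certificate paths are examples; the operations service serves the metrics at /metrics, Prometheus's default path):

scrape_configs:
  - job_name: 'fabric-peer'
    scheme: https
    tls_config:
      ca_file: ca.pem
      cert_file: client.pem
      key_file: client-key.pem
    static_configs:
      - targets: ['127.0.0.1:9443']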

StatsD

StatsD is a simple statistics aggregation daemon. Metrics are sent to a statsd daemon where they are collected, aggregated, and pushed to a backend for visualization and alerting. As this model requires instrumented processes to send metrics data to StatsD, this is considered a push system.

Peer

A peer can be configured to send metrics to StatsD by setting the metrics provider to statsd in the metrics section of core.yaml. The statsd subsection must also be configured with the address of the StatsD daemon, the network type to use (tcp or udp), and how often to send the metrics. An optional prefix may be specified to help differentiate the source of the metrics — for example, differentiating metrics coming from separate peers — that would be prepended to all generated metrics.

metrics:
  provider: statsd
  statsd:
    network: udp
    address: 127.0.0.1:8125
    writeInterval: 10s
    prefix: peer-0

Orderer

An orderer can be configured to send metrics to StatsD by setting the metrics provider to statsd in the Metrics section of orderer.yaml. The Statsd subsection must also be configured with the address of the StatsD daemon, the network type to use (tcp or udp), and how often to send the metrics. An optional prefix may be specified to help differentiate the source of the metrics.

Metrics:
    Provider: statsd
    Statsd:
      Network: udp
      Address: 127.0.0.1:8125
      WriteInterval: 30s
      Prefix: org-orderer

For a look at the different metrics that are generated, check out Metrics Reference.

Metrics Reference

Prometheus Metrics

The following metrics are currently exported for consumption by Prometheus.

Name | Type | Description | Labels
blockcutter_block_fill_duration | histogram | The time from first transaction enqueueing to the block being cut in seconds. | channel
broadcast_enqueue_duration | histogram | The time to enqueue a transaction in seconds. | channel, type, status
broadcast_processed_count | counter | The number of transactions processed. | channel, type, status
broadcast_validate_duration | histogram | The time to validate a transaction in seconds. | channel, type, status
chaincode_execute_timeouts | counter | The number of chaincode executions (Init or Invoke) that have timed out. | chaincode
chaincode_launch_duration | histogram | The time to launch a chaincode. | chaincode, success
chaincode_launch_failures | counter | The number of chaincode launches that have failed. | chaincode
chaincode_launch_timeouts | counter | The number of chaincode launches that have timed out. | chaincode
chaincode_shim_request_duration | histogram | The time to complete chaincode shim requests. | type, channel, chaincode, success
chaincode_shim_requests_completed | counter | The number of chaincode shim requests completed. | type, channel, chaincode, success
chaincode_shim_requests_received | counter | The number of chaincode shim requests received. | type, channel, chaincode
cluster_comm_egress_queue_capacity | gauge | Capacity of the egress queue. | host, msg_type, channel
cluster_comm_egress_queue_length | gauge | Length of the egress queue. | host, msg_type, channel
cluster_comm_egress_queue_workers | gauge | Count of egress queue workers. | channel
cluster_comm_egress_stream_count | gauge | Count of streams to other nodes. | channel
cluster_comm_egress_tls_connection_count | gauge | Count of TLS connections to other nodes. | (none)
cluster_comm_ingress_stream_count | gauge | Count of streams from other nodes. | (none)
cluster_comm_msg_dropped_count | counter | Count of messages dropped. | host, channel
cluster_comm_msg_send_time | histogram | The time it takes to send a message in seconds. | host, channel
consensus_etcdraft_cluster_size | gauge | Number of nodes in this channel. | channel
consensus_etcdraft_committed_block_number | gauge | The block number of the latest block committed. | channel
consensus_etcdraft_config_proposals_received | counter | The total number of proposals received for config type transactions. | channel
consensus_etcdraft_data_persist_duration | histogram | The time taken for etcd/raft data to be persisted in storage (in seconds). | channel
consensus_etcdraft_is_leader | gauge | The leadership status of the current node: 1 if it is the leader else 0. | channel
consensus_etcdraft_leader_changes | counter | The number of leader changes since process start. | channel
consensus_etcdraft_normal_proposals_received | counter | The total number of proposals received for normal type transactions. | channel
consensus_etcdraft_proposal_failures | counter | The number of proposal failures. | channel
consensus_etcdraft_snapshot_block_number | gauge | The block number of the latest snapshot. | channel
consensus_kafka_batch_size | gauge | The mean batch size in bytes sent to topics. | topic
consensus_kafka_compression_ratio | gauge | The mean compression ratio (as percentage) for topics. | topic
consensus_kafka_incoming_byte_rate | gauge | Bytes/second read off brokers. | broker_id
consensus_kafka_outgoing_byte_rate | gauge | Bytes/second written to brokers. | broker_id
consensus_kafka_record_send_rate | gauge | The number of records per second sent to topics. | topic
consensus_kafka_records_per_request | gauge | The mean number of records sent per request to topics. | topic
consensus_kafka_request_latency | gauge | The mean request latency in ms to brokers. | broker_id
consensus_kafka_request_rate | gauge | Requests/second sent to brokers. | broker_id
consensus_kafka_request_size | gauge | The mean request size in bytes to brokers. | broker_id
consensus_kafka_response_rate | gauge | Requests/second sent to brokers. | broker_id
consensus_kafka_response_size | gauge | The mean response size in bytes from brokers. | broker_id
couchdb_processing_time | histogram | Time taken in seconds for the function to complete request to CouchDB | database, function_name, result
deliver_blocks_sent | counter | The number of blocks sent by the deliver service. | channel, filtered
deliver_requests_completed | counter | The number of deliver requests that have been completed. | channel, filtered, success
deliver_requests_received | counter | The number of deliver requests that have been received. | channel, filtered
deliver_streams_closed | counter | The number of GRPC streams that have been closed for the deliver service. | (none)
deliver_streams_opened | counter | The number of GRPC streams that have been opened for the deliver service. | (none)
dockercontroller_chaincode_container_build_duration | histogram | The time to build a chaincode image in seconds. | chaincode, success
endorser_chaincode_instantiation_failures | counter | The number of chaincode instantiations or upgrades that have failed. | channel, chaincode
endorser_duplicate_transaction_failures | counter | The number of failed proposals due to duplicate transaction ID. | channel, chaincode
endorser_endorsement_failures | counter | The number of failed endorsements. | channel, chaincode, chaincodeerror
endorser_proposal_acl_failures | counter | The number of proposals that failed ACL checks. | channel, chaincode
endorser_proposal_validation_failures | counter | The number of proposals that have failed initial validation. | (none)
endorser_proposals_received | counter | The number of proposals received. | (none)
endorser_propsal_duration | histogram | The time to complete a proposal. | channel, chaincode, success
endorser_successful_proposals | counter | The number of successful proposals. | (none)
fabric_version | gauge | The active version of Fabric. | version
gossip_comm_messages_received | counter | Number of messages received | (none)
gossip_comm_messages_sent | counter | Number of messages sent | (none)
gossip_comm_overflow_count | counter | Number of outgoing queue buffer overflows | (none)
gossip_leader_election_leader | gauge | Peer is leader (1) or follower (0) | channel
gossip_membership_total_peers_known | gauge | Total known peers | channel
gossip_payload_buffer_size | gauge | Size of the payload buffer | channel
gossip_privdata_commit_block_duration | histogram | Time it takes to commit private data and the corresponding block (in seconds) | channel
gossip_privdata_fetch_duration | histogram | Time it takes to fetch missing private data from peers (in seconds) | channel
gossip_privdata_list_missing_duration | histogram | Time it takes to list the missing private data (in seconds) | channel
gossip_privdata_pull_duration | histogram | Time it takes to pull a missing private data element (in seconds) | channel
gossip_privdata_purge_duration | histogram | Time it takes to purge private data (in seconds) | channel
gossip_privdata_reconciliation_duration | histogram | Time it takes for reconciliation to complete (in seconds) | channel
gossip_privdata_retrieve_duration | histogram | Time it takes to retrieve missing private data elements from the ledger (in seconds) | channel
gossip_privdata_send_duration | histogram | Time it takes to send a missing private data element (in seconds) | channel
gossip_privdata_validation_duration | histogram | Time it takes to validate a block (in seconds) | channel
gossip_state_commit_duration | histogram | Time it takes to commit a block in seconds | channel
gossip_state_height | gauge | Current ledger height | channel
grpc_comm_conn_closed | counter | gRPC connections closed. Open minus closed is the active number of connections. | (none)
grpc_comm_conn_opened | counter | gRPC connections opened. Open minus closed is the active number of connections. | (none)
grpc_server_stream_messages_received | counter | The number of stream messages received. | service, method
grpc_server_stream_messages_sent | counter | The number of stream messages sent. | service, method
grpc_server_stream_request_duration | histogram | The time to complete a stream request. | service, method, code
grpc_server_stream_requests_completed | counter | The number of stream requests completed. | service, method, code
grpc_server_stream_requests_received | counter | The number of stream requests received. | service, method
grpc_server_unary_request_duration | histogram | The time to complete a unary request. | service, method, code
grpc_server_unary_requests_completed | counter | The number of unary requests completed. | service, method, code
grpc_server_unary_requests_received | counter | The number of unary requests received. | service, method
ledger_block_processing_time | histogram | Time taken in seconds for ledger block processing. | channel
ledger_blockchain_height | gauge | Height of the chain in blocks. | channel
ledger_blockstorage_commit_time | histogram | Time taken in seconds for committing the block and private data to storage. | channel
ledger_statedb_commit_time | histogram | Time taken in seconds for committing block changes to state db. | channel
ledger_transaction_count | counter | Number of transactions processed. | channel, transaction_type, chaincode, validation_code
logging_entries_checked | counter | Number of log entries checked against the active logging level | level
logging_entries_written | counter | Number of log entries that are written | level

StatsD Metrics

The following metrics are currently emitted for consumption by StatsD. The %{variable_name} nomenclature represents segments that vary based on context.

For example, %{channel} will be replaced with the name of the channel associated with the metric.

Bucket | Type | Description
blockcutter.block_fill_duration.%{channel} | histogram | The time from first transaction enqueueing to the block being cut in seconds.
broadcast.enqueue_duration.%{channel}.%{type}.%{status} | histogram | The time to enqueue a transaction in seconds.
broadcast.processed_count.%{channel}.%{type}.%{status} | counter | The number of transactions processed.
broadcast.validate_duration.%{channel}.%{type}.%{status} | histogram | The time to validate a transaction in seconds.
chaincode.execute_timeouts.%{chaincode} | counter | The number of chaincode executions (Init or Invoke) that have timed out.
chaincode.launch_duration.%{chaincode}.%{success} | histogram | The time to launch a chaincode.
chaincode.launch_failures.%{chaincode} | counter | The number of chaincode launches that have failed.
chaincode.launch_timeouts.%{chaincode} | counter | The number of chaincode launches that have timed out.
chaincode.shim_request_duration.%{type}.%{channel}.%{chaincode}.%{success} | histogram | The time to complete chaincode shim requests.
chaincode.shim_requests_completed.%{type}.%{channel}.%{chaincode}.%{success} | counter | The number of chaincode shim requests completed.
chaincode.shim_requests_received.%{type}.%{channel}.%{chaincode} | counter | The number of chaincode shim requests received.
cluster.comm.egress_queue_capacity.%{host}.%{msg_type}.%{channel} | gauge | Capacity of the egress queue.
cluster.comm.egress_queue_length.%{host}.%{msg_type}.%{channel} | gauge | Length of the egress queue.
cluster.comm.egress_queue_workers.%{channel} | gauge | Count of egress queue workers.
cluster.comm.egress_stream_count.%{channel} | gauge | Count of streams to other nodes.
cluster.comm.egress_tls_connection_count | gauge | Count of TLS connections to other nodes.
cluster.comm.ingress_stream_count | gauge | Count of streams from other nodes.
cluster.comm.msg_dropped_count.%{host}.%{channel} | counter | Count of messages dropped.
cluster.comm.msg_send_time.%{host}.%{channel} | histogram | The time it takes to send a message in seconds.
consensus.etcdraft.cluster_size.%{channel} | gauge | Number of nodes in this channel.
consensus.etcdraft.committed_block_number.%{channel} | gauge | The block number of the latest block committed.
consensus.etcdraft.config_proposals_received.%{channel} | counter | The total number of proposals received for config type transactions.
consensus.etcdraft.data_persist_duration.%{channel} | histogram | The time taken for etcd/raft data to be persisted in storage (in seconds).
consensus.etcdraft.is_leader.%{channel} | gauge | The leadership status of the current node: 1 if it is the leader else 0.
consensus.etcdraft.leader_changes.%{channel} | counter | The number of leader changes since process start.
consensus.etcdraft.normal_proposals_received.%{channel} | counter | The total number of proposals received for normal type transactions.
consensus.etcdraft.proposal_failures.%{channel} | counter | The number of proposal failures.
consensus.etcdraft.snapshot_block_number.%{channel} | gauge | The block number of the latest snapshot.
consensus.kafka.batch_size.%{topic} | gauge | The mean batch size in bytes sent to topics.
consensus.kafka.compression_ratio.%{topic} | gauge | The mean compression ratio (as percentage) for topics.
consensus.kafka.incoming_byte_rate.%{broker_id} | gauge | Bytes/second read off brokers.
consensus.kafka.outgoing_byte_rate.%{broker_id} | gauge | Bytes/second written to brokers.
consensus.kafka.record_send_rate.%{topic} | gauge | The number of records per second sent to topics.
consensus.kafka.records_per_request.%{topic} | gauge | The mean number of records sent per request to topics.
consensus.kafka.request_latency.%{broker_id} | gauge | The mean request latency in ms to brokers.
consensus.kafka.request_rate.%{broker_id} | gauge | Requests/second sent to brokers.
consensus.kafka.request_size.%{broker_id} | gauge | The mean request size in bytes to brokers.
consensus.kafka.response_rate.%{broker_id} | gauge | Requests/second sent to brokers.
consensus.kafka.response_size.%{broker_id} | gauge | The mean response size in bytes from brokers.
couchdb.processing_time.%{database}.%{function_name}.%{result} | histogram | Time taken in seconds for the function to complete request to CouchDB
deliver.blocks_sent.%{channel}.%{filtered} | counter | The number of blocks sent by the deliver service.
deliver.requests_completed.%{channel}.%{filtered}.%{success} | counter | The number of deliver requests that have been completed.
deliver.requests_received.%{channel}.%{filtered} | counter | The number of deliver requests that have been received.
deliver.streams_closed | counter | The number of GRPC streams that have been closed for the deliver service.
deliver.streams_opened | counter | The number of GRPC streams that have been opened for the deliver service.
dockercontroller.chaincode_container_build_duration.%{chaincode}.%{success} | histogram | The time to build a chaincode image in seconds.
endorser.chaincode_instantiation_failures.%{channel}.%{chaincode} | counter | The number of chaincode instantiations or upgrades that have failed.
endorser.duplicate_transaction_failures.%{channel}.%{chaincode} | counter | The number of failed proposals due to duplicate transaction ID.
endorser.endorsement_failures.%{channel}.%{chaincode}.%{chaincodeerror} | counter | The number of failed endorsements.
endorser.proposal_acl_failures.%{channel}.%{chaincode} | counter | The number of proposals that failed ACL checks.
endorser.proposal_validation_failures | counter | The number of proposals that have failed initial validation.
endorser.proposals_received | counter | The number of proposals received.
endorser.propsal_duration.%{channel}.%{chaincode}.%{success} | histogram | The time to complete a proposal.
endorser.successful_proposals | counter | The number of successful proposals.
fabric_version.%{version} | gauge | The active version of Fabric.
gossip.comm.messages_received | counter | Number of messages received
gossip.comm.messages_sent | counter | Number of messages sent
gossip.comm.overflow_count | counter | Number of outgoing queue buffer overflows
gossip.leader_election.leader.%{channel} | gauge | Peer is leader (1) or follower (0)
gossip.membership.total_peers_known.%{channel} | gauge | Total known peers
gossip.payload_buffer.size.%{channel} | gauge | Size of the payload buffer
gossip.privdata.commit_block_duration.%{channel} | histogram | Time it takes to commit private data and the corresponding block (in seconds)
gossip.privdata.fetch_duration.%{channel} | histogram | Time it takes to fetch missing private data from peers (in seconds)
gossip.privdata.list_missing_duration.%{channel} | histogram | Time it takes to list the missing private data (in seconds)
gossip.privdata.pull_duration.%{channel} | histogram | Time it takes to pull a missing private data element (in seconds)
gossip.privdata.purge_duration.%{channel} | histogram | Time it takes to purge private data (in seconds)
gossip.privdata.reconciliation_duration.%{channel} | histogram | Time it takes for reconciliation to complete (in seconds)
gossip.privdata.retrieve_duration.%{channel} | histogram | Time it takes to retrieve missing private data elements from the ledger (in seconds)
gossip.privdata.send_duration.%{channel} | histogram | Time it takes to send a missing private data element (in seconds)
gossip.privdata.validation_duration.%{channel} | histogram | Time it takes to validate a block (in seconds)
gossip.state.commit_duration.%{channel} | histogram | Time it takes to commit a block in seconds
gossip.state.height.%{channel} | gauge | Current ledger height
grpc.comm.conn_closed | counter | gRPC connections closed. Open minus closed is the active number of connections.
grpc.comm.conn_opened | counter | gRPC connections opened. Open minus closed is the active number of connections.
grpc.server.stream_messages_received.%{service}.%{method} | counter | The number of stream messages received.
grpc.server.stream_messages_sent.%{service}.%{method} | counter | The number of stream messages sent.
grpc.server.stream_request_duration.%{service}.%{method}.%{code} | histogram | The time to complete a stream request.
grpc.server.stream_requests_completed.%{service}.%{method}.%{code} | counter | The number of stream requests completed.
grpc.server.stream_requests_received.%{service}.%{method} | counter | The number of stream requests received.
grpc.server.unary_request_duration.%{service}.%{method}.%{code} | histogram | The time to complete a unary request.
grpc.server.unary_requests_completed.%{service}.%{method}.%{code} | counter | The number of unary requests completed.
grpc.server.unary_requests_received.%{service}.%{method} | counter | The number of unary requests received.
ledger.block_processing_time.%{channel} | histogram | Time taken in seconds for ledger block processing.
ledger.blockchain_height.%{channel} | gauge | Height of the chain in blocks.
ledger.blockstorage_commit_time.%{channel} | histogram | Time taken in seconds for committing the block and private data to storage.
ledger.statedb_commit_time.%{channel} | histogram | Time taken in seconds for committing block changes to state db.
ledger.transaction_count.%{channel}.%{transaction_type}.%{chaincode}.%{validation_code} | counter | Number of transactions processed.
logging.entries_checked.%{level} | counter | Number of log entries checked against the active logging level
logging.entries_written.%{level} | counter | Number of log entries that are written

Error handling

General Overview

Hyperledger Fabric code should use the vendored package github.com/pkg/errors in place of the standard error type provided by Go. This package allows easy generation and display of stack traces with error messages.

Usage Instructions

github.com/pkg/errors should be used in place of all calls to fmt.Errorf() or errors.New(). Using this package will generate a call stack that will be appended to the error message.

Using this package is simple and will only require easy tweaks to your code.

First, you’ll need to import github.com/pkg/errors.

Next, update all errors that are generated by your code to use one of the error creation functions (errors.New(), errors.Errorf(), errors.WithMessage(), errors.Wrap(), errors.Wrapf()).

Note

See https://godoc.org/github.com/pkg/errors for complete documentation of the available error creation functions. Also, refer to the General guidelines section below for more specific guidelines for using the package for Fabric code.

Finally, change the formatting directive for any logger or fmt.Printf() calls from %s to %+v to print the call stack along with the error message.

General guidelines for error handling in Hyperledger Fabric

  • If you are servicing a user request, you should log the error and return it.

  • If the error comes from an external source, such as a Go library or vendored package, wrap the error using errors.Wrap() to generate a call stack for the error.

  • If the error comes from another Fabric function, add further context, if desired, to the error message using errors.WithMessage() while leaving the call stack unaffected.

  • A panic should not be allowed to propagate to other packages.

Example program

The following example program provides a clear demonstration of using the package:

package main

import (
  "fmt"

  "github.com/pkg/errors"
)

func wrapWithStack() error {
  err := createError()
  // do this when error comes from external source (go lib or vendor)
  return errors.Wrap(err, "wrapping an error with stack")
}
func wrapWithoutStack() error {
  err := createError()
  // do this when error comes from internal Fabric since it already has stack trace
  return errors.WithMessage(err, "wrapping an error without stack")
}
func createError() error {
  return errors.New("original error")
}

func main() {
  err := createError()
  fmt.Printf("print error without stack: %s\n\n", err)
  fmt.Printf("print error with stack: %+v\n\n", err)
  err = wrapWithoutStack()
  fmt.Printf("%+v\n\n", err)
  err = wrapWithStack()
  fmt.Printf("%+v\n\n", err)
}

Logging Control

Overview

Logging in the peer and orderer is provided by the common/flogging package. Chaincodes written in Go also use this package if they use the logging methods provided by the shim. This package supports

  • Logging control based on the severity of the message

  • Logging control based on the software logger generating the message

  • Different pretty-printing options based on the severity of the message

All logs are currently directed to stderr. Global and logger-level control of logging by severity is provided for both users and developers. There are currently no formalized rules for the types of information provided at each severity level. When submitting bug reports, developers may want to see full logs down to the DEBUG level.

In pretty-printed logs the logging level is indicated both by color and by a four-character code, e.g., “ERRO” for ERROR, “DEBU” for DEBUG, etc. In the logging context a logger is an arbitrary name (string) given by developers to groups of related messages. In the pretty-printed example below, the loggers ledgermgmt, kvledger, and peer are generating logs.

2018-11-01 15:32:38.268 UTC [ledgermgmt] initialize -> INFO 002 Initializing ledger mgmt
2018-11-01 15:32:38.268 UTC [kvledger] NewProvider -> INFO 003 Initializing ledger provider
2018-11-01 15:32:38.342 UTC [kvledger] NewProvider -> INFO 004 ledger provider Initialized
2018-11-01 15:32:38.357 UTC [ledgermgmt] initialize -> INFO 005 ledger mgmt initialized
2018-11-01 15:32:38.357 UTC [peer] func1 -> INFO 006 Auto-detected peer address: 172.24.0.3:7051
2018-11-01 15:32:38.357 UTC [peer] func1 -> INFO 007 Returning peer0.org1.example.com:7051

An arbitrary number of loggers can be created at runtime, therefore there is no “master list” of loggers, and logging control constructs cannot check whether the named loggers actually do or will exist.

Logging specification

The logging levels of the peer and orderer commands are controlled by a logging specification, which is set via the FABRIC_LOGGING_SPEC environment variable.

The full logging level specification is of the form

[<logger>[,<logger>...]=]<level>[:[<logger>[,<logger>...]=]<level>...]

Logging severity levels are specified using case-insensitive strings chosen from

FATAL | PANIC | ERROR | WARNING | INFO | DEBUG

A logging level by itself is taken as the overall default. Otherwise, overrides for individual or groups of loggers can be specified using the

<logger>[,<logger>...]=<level>

syntax. Examples of specifications:

info                                        - Set default to INFO
warning:msp,gossip=warning:chaincode=info   - Default WARNING; Override for msp, gossip, and chaincode
chaincode=info:msp,gossip=warning:warning   - Same as above
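For instance, a minimal sketch of launching a peer with an explicit specification (assuming the peer binary is on your PATH):

FABRIC_LOGGING_SPEC=info:gossip=warning peer node start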

Logging format

The logging format of the peer and orderer commands is controlled via the FABRIC_LOGGING_FORMAT environment variable. This can be set to a format string, such as the default

"%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}"

to print the logs in a human-readable console format. It can also be set to json to output logs in JSON format.
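For example, a sketch that switches an orderer to JSON output (assuming the orderer binary is on your PATH):

FABRIC_LOGGING_FORMAT=json orderer start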

Go chaincodes

The standard mechanism to log within a chaincode application is to integrate with the logging transport exposed to each chaincode instance via the peer. The chaincode shim package provides APIs that allow a chaincode to create and manage logging objects whose logs will be formatted and interleaved consistently with the shim logs.

As independently executed programs, user-provided chaincodes may technically also produce output on stdout/stderr. While naturally useful for “devmode”, these channels are normally disabled on a production network to mitigate abuse from broken or malicious code. However, it is possible to enable this output even for peer-managed containers (e.g. “netmode”) on a per-peer basis via the CORE_VM_DOCKER_ATTACHSTDOUT=true configuration option.

Once enabled, each chaincode will receive its own logging channel keyed by its container-id. Any output written to either stdout or stderr will be integrated with the peer’s log on a per-line basis. It is not recommended to enable this for production.

API

NewLogger(name string) *ChaincodeLogger - Create a logging object for use by a chaincode

(c *ChaincodeLogger) SetLevel(level LoggingLevel) - Set the logging level of the logger

(c *ChaincodeLogger) IsEnabledFor(level LoggingLevel) bool - Return true if logs will be generated at the given level

LogLevel(levelString string) (LoggingLevel, error) - Convert a string to a LoggingLevel

A LoggingLevel is a member of the enumeration

LogDebug, LogInfo, LogNotice, LogWarning, LogError, LogCritical

which can be used directly, or generated by passing a case-insensitive version of the strings

DEBUG, INFO, NOTICE, WARNING, ERROR, CRITICAL

to the LogLevel API.

Formatted logging at various severity levels is provided by the functions

(c *ChaincodeLogger) Debug(args ...interface{})
(c *ChaincodeLogger) Info(args ...interface{})
(c *ChaincodeLogger) Notice(args ...interface{})
(c *ChaincodeLogger) Warning(args ...interface{})
(c *ChaincodeLogger) Error(args ...interface{})
(c *ChaincodeLogger) Critical(args ...interface{})

(c *ChaincodeLogger) Debugf(format string, args ...interface{})
(c *ChaincodeLogger) Infof(format string, args ...interface{})
(c *ChaincodeLogger) Noticef(format string, args ...interface{})
(c *ChaincodeLogger) Warningf(format string, args ...interface{})
(c *ChaincodeLogger) Errorf(format string, args ...interface{})
(c *ChaincodeLogger) Criticalf(format string, args ...interface{})

The f forms of the logging APIs provide for precise control over the formatting of the logs. The non-f forms of the APIs currently insert a space between the printed representations of the arguments, and arbitrarily choose the formats to use.
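To make the difference concrete, here is a brief sketch; the logger name and message are hypothetical, and both calls emit an INFO-level message:

var logger = shim.NewLogger("example")

func logBoth(height int) {
  // non-f form: arguments are printed space-separated, with formats chosen by the library
  logger.Info("committed block", height)
  // f form: explicit fmt-style control over the output
  logger.Infof("committed block %d", height)
}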

In the current implementation, the logs produced by the shim and a ChaincodeLogger are timestamped, marked with the logger name and severity level, and written to stderr. Note that logging level control is currently based on the name provided when the ChaincodeLogger is created. To avoid ambiguities, every ChaincodeLogger should be given a unique name other than “shim”. The logger name will appear in all log messages created by the logger. The shim logs as “shim”.

The default logging level for loggers within the Chaincode container can be set in the core.yaml file. The key chaincode.logging.level sets the default level for all loggers within the Chaincode container. The key chaincode.logging.shim overrides the default level for the shim logger.

# Logging section for the chaincode container
logging:
  # Default level for all loggers within the chaincode container
  level:  info
  # Override default level for the 'shim' logger
  shim:   warning

The default logging level can be overridden by using environment variables. CORE_CHAINCODE_LOGGING_LEVEL sets the default logging level for all loggers. CORE_CHAINCODE_LOGGING_SHIM overrides the level for the shim logger.
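For example, a sketch of such overrides (the chosen levels are illustrative):

export CORE_CHAINCODE_LOGGING_LEVEL=debug
export CORE_CHAINCODE_LOGGING_SHIM=warning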

Go language chaincodes can also control the logging level of the chaincode shim interface through the SetLoggingLevel API.

SetLoggingLevel(level LoggingLevel) - Control the logging level of the shim

Below is a simple example of how a chaincode might create a private logging object logging at the LogInfo level.

var logger = shim.NewLogger("myChaincode")

func main() {

    logger.SetLevel(shim.LogInfo)
    ...
}

Securing Communication With Transport Layer Security (TLS)

Fabric supports secure communication between nodes using TLS. TLS communication can use both one-way (server only) and two-way (server and client) authentication.

Configuring TLS for peer nodes

A peer node is both a TLS server and a TLS client. It is the former when another peer node, application, or the CLI makes a connection to it, and the latter when it makes a connection to another peer node or orderer.

To enable TLS on a peer node, set the following peer configuration properties:

  • peer.tls.enabled = true

  • peer.tls.cert.file = fully qualified path of the file that contains the TLS server certificate

  • peer.tls.key.file = fully qualified path of the file that contains the TLS server private key

  • peer.tls.rootcert.file = fully qualified path of the file that contains the certificate chain of the certificate authority (CA) that issued the TLS server certificate

By default, TLS client authentication is turned off when TLS is enabled on a peer node. This means that the peer node will not verify the certificate of a client (another peer node, application, or the CLI) during a TLS handshake. To enable TLS client authentication on a peer node, set the peer configuration property peer.tls.clientAuthRequired to true and set the peer.tls.clientRootCAs.files property to the CA chain file(s) that contain(s) the CA certificate chain(s) that issued TLS certificates for your organization’s clients.

By default, a peer node will use the same certificate and private key pair when acting as a TLS server and client. To use a different certificate and private key pair for the client side, set the peer.tls.clientCert.file and peer.tls.clientKey.file configuration properties to the fully qualified path of the client certificate and key file, respectively.

TLS with client authentication can also be enabled by setting the following environment variables (see the sketch after this list):

  • CORE_PEER_TLS_ENABLED = true

  • CORE_PEER_TLS_CERT_FILE = fully qualified path of the server certificate

  • CORE_PEER_TLS_KEY_FILE = fully qualified path of the server private key

  • CORE_PEER_TLS_ROOTCERT_FILE = fully qualified path of the CA chain file

  • CORE_PEER_TLS_CLIENTAUTHREQUIRED = true

  • CORE_PEER_TLS_CLIENTROOTCAS_FILES = fully qualified path of the CA chain file

  • CORE_PEER_TLS_CLIENTCERT_FILE = fully qualified path of the client certificate

  • CORE_PEER_TLS_CLIENTKEY_FILE = fully qualified path of the client key
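For example, a sketch of the server-side TLS settings (the file paths are hypothetical):

export CORE_PEER_TLS_ENABLED=true
export CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
export CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
export CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt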

When client authentication is enabled on a peer node, a client is required to send its certificate during a TLS handshake. If the client does not send its certificate, the handshake will fail and the peer will close the connection.

When a peer joins a channel, the root CA certificate chains of the channel members are read from the config block of the channel and are added to the TLS client and server root CAs data structure. So, peer to peer communication and peer to orderer communication should work seamlessly.

Configuring TLS for orderer nodes

To enable TLS on an orderer node, set the following orderer configuration properties:

  • General.TLS.Enabled = true

  • General.TLS.PrivateKey = fully qualified path of the file that contains the server private key

  • General.TLS.Certificate = fully qualified path of the file that contains the server certificate

  • General.TLS.RootCAs = fully qualified path of the file that contains the certificate chain of the CA that issued the TLS server certificate

By default, TLS client authentication is turned off on the orderer, as is the case with the peer. To enable TLS client authentication, set the following configuration properties:

  • General.TLS.ClientAuthRequired = true

  • General.TLS.ClientRootCAs = fully qualified path of the file that contains the certificate chain of the CA that issued the TLS server certificate

TLS with client authentication can also be enabled by setting the following environment variables:

  • ORDERER_GENERAL_TLS_ENABLED = true

  • ORDERER_GENERAL_TLS_PRIVATEKEY = fully qualified path of the file that contains the server private key

  • ORDERER_GENERAL_TLS_CERTIFICATE = fully qualified path of the file that contains the server certificate

  • ORDERER_GENERAL_TLS_ROOTCAS = fully qualified path of the file that contains the certificate chain of the CA that issued the TLS server certificate

  • ORDERER_GENERAL_TLS_CLIENTAUTHREQUIRED = true

  • ORDERER_GENERAL_TLS_CLIENTROOTCAS = fully qualified path of the file that contains the certificate chain of the CA that issued the TLS server certificate

Configuring TLS for the peer CLI

The following environment variables must be set when running peer CLI commands against a TLS enabled peer node:

  • CORE_PEER_TLS_ENABLED = true

  • CORE_PEER_TLS_ROOTCERT_FILE = fully qualified path of the file that contains the certificate chain of the CA that issued the TLS server certificate

If TLS client authentication is also enabled on the remote server, the following variables must be set in addition to those above:

  • CORE_PEER_TLS_CLIENTAUTHREQUIRED = true

  • CORE_PEER_TLS_CLIENTCERT_FILE = fully qualified path of the client certificate

  • CORE_PEER_TLS_CLIENTKEY_FILE = fully qualified path of the client private key

When running a command that connects to the ordering service, such as peer channel <create|update|fetch> or peer chaincode <invoke|instantiate>, the following command line arguments must also be specified if TLS is enabled on the orderer:

  • --tls

  • --cafile <fully qualified path of the file that contains cert chain of the orderer CA>

If TLS client authentication is also enabled on the orderer, the following arguments must be specified as well (a combined example follows this list):

  • --clientauth

  • --keyfile <fully qualified path of the file that contains the client private key>

  • --certfile <fully qualified path of the file that contains the client certificate>
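Putting these together, a sketch of creating a channel against a TLS-enabled orderer with client authentication (the endpoint, channel name, and file paths are hypothetical):

export ORDERER_CA=/path/to/orderer-ca.crt
peer channel create -o orderer.example.com:7050 -c mychannel -f ./mychannel.tx \
  --tls --cafile $ORDERER_CA \
  --clientauth --keyfile /path/to/client.key --certfile /path/to/client.crt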

Debugging TLS issues

Before debugging TLS issues, it is advisable to enable GRPC debug on both the TLS client and the server side to get additional information. To enable GRPC debug, set the environment variable FABRIC_LOGGING_SPEC to include grpc=debug. For example, to set the default logging level to INFO and the GRPC logging level to DEBUG, set the logging specification to grpc=debug:info.
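A minimal sketch:

export FABRIC_LOGGING_SPEC=grpc=debug:info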

If you see the error message remote error: tls: bad certificate on the client side, it usually means that the TLS server has enabled client authentication, and the server either did not receive the correct client certificate or it received a client certificate that it does not trust. Make sure the client is sending its certificate and that it has been signed by one of the CA certificates trusted by the peer or orderer node.

If you see the error message remote error: tls: bad certificate in your chaincode logs, ensure that your chaincode has been built with the chaincode shim provided with Fabric v1.1 or newer. If your chaincode does not contain a vendored copy of the shim, deleting the chaincode container and restarting its peer will rebuild the chaincode container with the current shim version.

Configuring and operating a Raft ordering service

Audience: Raft ordering node admins

Conceptual overview

For a high level overview of the concept of ordering and how the supported ordering service implementations (including Raft) work at a high level, check out our conceptual documentation on the Ordering Service.

To learn about the process of setting up an ordering node — including the creation of a local MSP and the creation of a genesis block — check out our documentation on Setting up an ordering node.

Configuration

While every Raft node must be added to the system channel, a node does not need to be added to every application channel. Additionally, you can remove and add a node from a channel dynamically without affecting the other nodes, a process described in the Reconfiguration section below.

Raft nodes identify each other using TLS pinning, so in order to impersonate a Raft node, an attacker needs to obtain the private key of its TLS certificate. As a result, it is not possible to run a Raft node without a valid TLS configuration.

A Raft cluster is configured in two planes:

  • Local configuration: Governs node specific aspects, such as TLS communication, replication behavior, and file storage.

  • Channel configuration: Defines the membership of the Raft cluster for the corresponding channel, as well as protocol specific parameters such as heartbeat frequency, leader timeouts, and more.

Recall, each channel has its own instance of a Raft protocol running. Thus, a Raft node must be referenced in the configuration of each channel it belongs to by adding its server and client TLS certificates (in PEM format) to the channel config. This ensures that when other nodes receive a message from it, they can securely confirm the identity of the node that sent the message.

The following section from configtx.yaml shows three Raft nodes (also called “consenters”) in the channel:

       Consenters:
            - Host: raft0.example.com
              Port: 7050
              ClientTLSCert: path/to/ClientTLSCert0
              ServerTLSCert: path/to/ServerTLSCert0
            - Host: raft1.example.com
              Port: 7050
              ClientTLSCert: path/to/ClientTLSCert1
              ServerTLSCert: path/to/ServerTLSCert1
            - Host: raft2.example.com
              Port: 7050
              ClientTLSCert: path/to/ClientTLSCert2
              ServerTLSCert: path/to/ServerTLSCert2

Note: an orderer will be listed as a consenter in the system channel as well as any application channels it is joined to.

When the channel config block is created, the configtxgen tool reads the paths to the TLS certificates, and replaces the paths with the corresponding bytes of the certificates.

Local configuration

The orderer.yaml has two configuration sections that are relevant for Raft orderers:

Cluster, which determines the TLS communication configuration, and Consensus, which determines where Write Ahead Logs and Snapshots are stored.

Cluster parameters:

By default, the Raft service is running on the same gRPC server as the client facing server (which is used to send transactions or pull blocks), but it can be configured to have a separate gRPC server with a separate port.

This is useful for cases where you want TLS certificates issued by the organizational CAs, but used only by the cluster nodes to communicate among each other, and TLS certificates issued by a public TLS CA for the client facing API.

  • ClientCertificate, ClientPrivateKey: If you wish to use a different TLS client certificate key pair (otherwise, the certificate key pair is taken from the general TLS section, i.e., general.tls.{privateKey, certificate})

  • ListenPort: The port the cluster listens on. If blank, the port is the same port as the orderer general port (general.listenPort)

  • ListenAddress: The address the cluster service is listening on.

  • ServerCertificate, ServerPrivateKey: The TLS server certificate key pair which is used when the cluster service is running on a separate gRPC server (different port).

  • SendBufferSize: Regulates the number of messages in the egress buffer.

Note: ListenPort, ListenAddress, ServerCertificate, ServerPrivateKey must be either set together or unset together.
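To make the layout concrete, here is a hedged sketch of a general.cluster section in orderer.yaml that runs the cluster service on its own port; the address, port, and all paths are illustrative:

General:
  Cluster:
    ListenAddress: 0.0.0.0
    ListenPort: 7443
    ServerCertificate: /var/hyperledger/orderer/tls/cluster-server.crt
    ServerPrivateKey: /var/hyperledger/orderer/tls/cluster-server.key
    ClientCertificate: /var/hyperledger/orderer/tls/cluster-client.crt
    ClientPrivateKey: /var/hyperledger/orderer/tls/cluster-client.key
    SendBufferSize: 10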

There are also hidden configuration parameters for general.cluster which can be used to further fine tune the cluster communication or replication mechanisms:

  • DialTimeout, RPCTimeout: Specify the timeouts of creating connections and establishing streams.

  • ReplicationBufferSize: the maximum number of bytes that can be allocated for each in-memory buffer used for block replication from other cluster nodes. Each channel has its own memory buffer. Defaults to 20971520 which is 20MB.

  • PullTimeout: the maximum duration the ordering node will wait for a block to be received before it aborts. Defaults to five seconds.

  • ReplicationRetryTimeout: The maximum duration the ordering node will wait between two consecutive attempts. Defaults to five seconds.

  • ReplicationBackgroundRefreshInterval: the time between two consecutive attempts to replicate existing channels that this node was added to, or channels that this node failed to replicate in the past. Defaults to five minutes.

Consensus parameters:

  • WALDir: the location at which Write Ahead Logs for etcd/raft are stored. Each channel will have its own subdirectory named after the channel ID.

  • SnapDir: specifies the location at which snapshots for etcd/raft are stored. Each channel will have its own subdirectory named after the channel ID.
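A corresponding sketch of the Consensus section (the directories are illustrative; each channel gets its own subdirectory under these paths):

Consensus:
  WALDir: /var/hyperledger/production/orderer/etcdraft/wal
  SnapDir: /var/hyperledger/production/orderer/etcdraft/snapshot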

There is also a hidden configuration parameter that can be set by adding it to the consensus section in the orderer.yaml:

  • EvictionSuspicion: The cumulative period of time of channel eviction suspicion that triggers the node to pull blocks from other nodes and see if it has been evicted from the channel in order to confirm its suspicion. If the suspicion is confirmed (the inspected block doesn’t contain the node’s TLS certificate), the node halts its operation for that channel. A node suspects its channel eviction when it doesn’t know about any elected leader, nor can it be elected as leader in the channel. Defaults to 10 minutes.

Channel configuration

Apart from the (already discussed) consenters, the Raft channel configuration has an Options section which relates to protocol specific knobs. It is currently not possible to change these values dynamically while a node is running; the node has to be reconfigured and restarted.

The only exception is SnapshotIntervalSize, which can be adjusted at runtime.

Note: It is recommended to avoid changing the following values, as a misconfiguration might lead to a state where a leader cannot be elected at all (i.e., if the TickInterval and ElectionTick are extremely low). Situations where a leader cannot be elected are impossible to resolve, as leaders are required to make changes. Because of such dangers, we suggest not tuning these parameters for most use cases.

  • TickInterval: The time interval between two Node.Tick invocations.

  • ElectionTick: The number of Node.Tick invocations that must pass between elections. That is, if a follower does not receive any message from the leader of current term before ElectionTick has elapsed, it will become candidate and start an election.

  • ElectionTick must be greater than HeartbeatTick.

  • HeartbeatTick: The number of Node.Tick invocations that must pass between heartbeats. That is, a leader sends heartbeat messages to maintain its leadership every HeartbeatTick ticks.

  • MaxInflightBlocks: Limits the maximum number of in-flight append blocks during the optimistic replication phase.

  • SnapshotIntervalSize: Defines the number of bytes after which a snapshot is taken.
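For orientation, these knobs live under the EtcdRaft Options section of configtx.yaml. The sketch below mirrors commonly used sample values; verify them against the configtx.yaml shipped with your Fabric release:

Orderer:
  OrdererType: etcdraft
  EtcdRaft:
    Options:
      TickInterval: 500ms
      ElectionTick: 10
      HeartbeatTick: 1
      MaxInflightBlocks: 5
      SnapshotIntervalSize: 16 MB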

Reconfiguration

The Raft orderer supports dynamic (meaning, while the channel is being serviced) addition and removal of nodes as long as only one node is added or removed at a time. Note that your cluster must be operational and able to achieve consensus before you attempt to reconfigure it. For instance, if you have three nodes, and two nodes fail, you will not be able to reconfigure your cluster to remove those nodes. Similarly, if you have one failed node in a channel with three nodes, you should not attempt to rotate a certificate, as this would induce a second fault. As a rule, you should never attempt any configuration changes to the Raft consenters, such as adding or removing a consenter, or rotating a consenter’s certificate unless all consenters are online and healthy.

If you do decide to change these parameters, it is recommended to only attempt such a change during a maintenance cycle. Problems are most likely to occur when a configuration is attempted in clusters with only a few nodes while a node is down. For example, if you have three nodes in your consenter set and one of them is down, it means you have two out of three nodes alive. If you extend the cluster to four nodes while in this state, you will have only two out of four nodes alive, which is not a quorum. The fourth node won’t be able to onboard because nodes can only onboard to functioning clusters (unless the total size of the cluster is one or two).

So by extending a cluster of three nodes to four nodes (while only two are alive) you are effectively stuck until the original offline node is resurrected.

Adding a new node to a Raft cluster is done by:

  1. Adding the TLS certificates of the new node to the channel through a channel configuration update transaction. Note: the new node must be added to the system channel before being added to one or more application channels.

  2. Fetching the latest config block of the system channel from an orderer node that’s part of the system channel (see the example command after this list).

  3. Ensuring that the node that will be added is part of the system channel by checking that the config block that was fetched includes the certificate of (soon to be) added node.

  4. Starting the new Raft node with the path to the config block in the General.GenesisFile configuration parameter.

  5. Waiting for the Raft node to replicate the blocks from existing nodes for all channels its certificates have been added to. After this step has been completed, the node begins servicing the channel.

  6. Adding the endpoint of the newly added Raft node to the channel configuration of all channels.
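As an illustration of step 2 above, the latest config block of the system channel can be fetched with the peer CLI; the orderer endpoint, channel name, and CA file are hypothetical:

peer channel fetch config config_block.pb -o orderer.example.com:7050 -c syschannel --tls --cafile $ORDERER_CA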

It is possible to add a node that is already running (and participates in some channels already) to a channel while the node itself is running. To do this, simply add the node’s certificate to the channel config of the channel. The node will autonomously detect its addition to the new channel (the default value here is five minutes, but if you want the node to detect the new channel more quickly, reboot the node) and will pull the channel blocks from an orderer in the channel, and then start the Raft instance for that chain.

After it has successfully done so, the channel configuration can be updated to include the endpoint of the new Raft orderer.

Removing a node from a Raft cluster is done by:

  1. Removing its endpoint from the channel config for all channels, including the system channel controlled by the orderer admins.

  2. Removing its entry (identified by its certificates) from the channel configuration for all channels. Again, this includes the system channel.

  3. Shutting down the node.

Removing a node from a specific channel, but keeping it servicing other channels is done by:

  1. Removing its endpoint from the channel config for the channel.

  2. Removing its entry (identified by its certificates) from the channel configuration.

  3. The second phase causes:

    • The remaining orderer nodes in the channel to cease communicating with the removed orderer node in the context of the removed channel. They might still be communicating on other channels.

    • The node that is removed from the channel would autonomously detect its removal either immediately or after EvictionSuspicion time has passed (10 minutes by default) and will shut down its Raft instance.

TLS certificate rotation for an orderer node

All TLS certificates have an expiration date that is determined by the issuer. These expiration dates can range from 10 years from the date of issuance to as little as a few months, so check with your issuer. Before the expiration date, you will need to rotate these certificates on the node itself and every channel the node is joined to, including the system channel.

For each channel the node participates in:

  1. Update the channel configuration with the new certificates.

  2. Replace its certificates in the file system of the node.

  3. Restart the node.

Because a node can only have a single TLS certificate key pair, the node will be unable to service channels its new certificates have not been added to during the update process, degrading the capacity of fault tolerance. Because of this, once the certificate rotation process has been started, it should be completed as quickly as possible.

If for some reason the rotation of the TLS certificates has started but cannot complete in all channels, it is advised to rotate TLS certificates back to what they were and attempt the rotation later.

Metrics

For a description of the Operations Service and how to set it up, check out our documentation on the Operations Service.

For a list of the metrics that are gathered by the Operations Service, check out our reference material on metrics.

While the metrics you prioritize will have a lot to do with your particular use case and configuration, there are two metrics in particular you might want to monitor:

  • consensus_etcdraft_is_leader: identifies which node in the cluster is currently leader. If no nodes have this set, you have lost quorum.

  • consensus_etcdraft_data_persist_duration: indicates how long write operations to the Raft cluster’s persistent write ahead log take. For protocol safety, messages must be persisted durably, calling fsync where appropriate, before they can be shared with the consenter set. If this value begins to climb, this node may not be able to participate in consensus (which could lead to a service interruption for this node and possibly the network).

Troubleshooting

  • The more stress you put on your nodes, the more you might have to change certain parameters. As with any system, computer or mechanical, stress can lead to a drag in performance. As we noted in the conceptual documentation, leader elections in Raft are triggered when follower nodes do not receive either a “heartbeat” message or an “append” message that carries data from the leader for a certain amount of time. Because Raft nodes share the same communication layer across channels (this does not mean they share data — they do not!), if a Raft node is part of the consenter set in many channels, you might want to lengthen the amount of time it takes to trigger an election to avoid inadvertent leader elections.

Bringing up a Kafka-based Ordering Service

Caveat emptor

This document assumes that the reader knows how to set up a Kafka cluster and a ZooKeeper ensemble, and keep them secure for general usage by preventing unauthorized access. The sole purpose of this guide is to identify the steps you need to take so as to have a set of Hyperledger Fabric ordering service nodes (OSNs) use your Kafka cluster and provide an ordering service to your blockchain network.

For information about the role orderers play in a network and in a transaction flow, check out our The Ordering Service documentation.

For information on how to set up an ordering node, check out our Setting up an ordering node documentation.

For information about configuring Raft ordering services, check out Configuring and operating a Raft ordering service.

Big picture

Each channel maps to a separate single-partition topic in Kafka. When an OSN receives transactions via the Broadcast RPC, it checks to make sure that the broadcasting client has permissions to write on the channel, then relays (i.e. produces) those transactions to the appropriate partition in Kafka. This partition is also consumed by the OSN which groups the received transactions into blocks locally, persists them in its local ledger, and serves them to receiving clients via the Deliver RPC. For low-level details, refer to the document that describes how we came to this design. Figure 8 is a schematic representation of the process described above.

Steps

Let K and Z be the number of nodes in the Kafka cluster and the ZooKeeper ensemble respectively:

  1. At a minimum, K should be set to 4. (As we will explain in Step 6 below, this is the minimum number of nodes necessary in order to exhibit crash fault tolerance, i.e. with 4 brokers, you can have 1 broker go down, all channels will continue to be writeable and readable, and new channels can be created.)

  2. Z will either be 3, 5, or 7. It has to be an odd number to avoid split-brain scenarios, and larger than 1 in order to avoid a single point of failure. Anything beyond 7 ZooKeeper servers is considered overkill.

Then proceed as follows:

  3. Orderers: Encode the Kafka-related information in the network’s genesis block. If you are using configtxgen, edit configtx.yaml (or pick a preset profile for the system channel’s genesis block) so that:

  • Orderer.OrdererType is set to kafka.

  • Orderer.Kafka.Brokers contains the address of at least two of the Kafka brokers in your cluster in IP:port notation. The list does not need to be exhaustive. (These are your bootstrap brokers.)

  4. Orderers: Set the maximum block size. Each block will have at most Orderer.AbsoluteMaxBytes bytes (not including headers), a value that you can set in configtx.yaml. Let the value you pick here be A and make note of it; it will affect how you configure your Kafka brokers in Step 6.

  5. Orderers: Create the genesis block. Use configtxgen. The settings you picked in Steps 3 and 4 above are system-wide settings, i.e. they apply across the network for all the OSNs. Make note of the genesis block’s location.

  6. Kafka cluster: Configure your Kafka brokers appropriately. Ensure that every Kafka broker has these keys configured:

  • unclean.leader.election.enable = false — Data consistency is key in a blockchain environment. We cannot have a channel leader chosen outside of the in-sync replica set, or we run the risk of overwriting the offsets that the previous leader produced, and —as a result— rewrite the blockchain that the orderers produce.

  • min.insync.replicas = M — Where you pick a value M such that 1 < M < N (see default.replication.factor below). Data is considered committed when it is written to at least M replicas (which are then considered in-sync and belong to the in-sync replica set, or ISR). In any other case, the write operation returns an error. Then:

    • If up to N-M replicas (out of the N that the channel data is written to) become unavailable, operations proceed normally.

    • If more replicas become unavailable, Kafka cannot maintain an ISR set of M, so it stops accepting writes. Reads work without issues. The channel becomes writeable again when M replicas get in-sync.

  • default.replication.factor = N — Where you pick a value N such that N < K. A replication factor of N means that each channel will have its data replicated to N brokers. These are the candidates for the ISR set of a channel. As we noted in the min.insync.replicas section above, not all of these brokers have to be available all the time. N should be set strictly smaller than K because channel creations cannot go forward if fewer than N brokers are up. So if you set N = K, a single broker going down means that no new channels can be created on the blockchain network — the crash fault tolerance of the ordering service is non-existent.

    Based on what we’ve described above, the minimum allowed values for M and N are 2 and 3 respectively. This configuration allows for the creation of new channels to go forward, and for all channels to continue to be writeable.

  • message.max.bytes and replica.fetch.max.bytes should be set to a value larger than A, the value you picked in Orderer.AbsoluteMaxBytes in Step 4 above. Add some buffer to account for headers; 1 MiB is more than enough. The following condition applies:

       Orderer.AbsoluteMaxBytes < replica.fetch.max.bytes <= message.max.bytes

    (For completeness, we note that message.max.bytes should be strictly smaller than socket.request.max.bytes, which is set by default to 100 MiB. If you wish to have blocks larger than 100 MiB you will need to edit the hard-coded value in brokerConfig.Producer.MaxMessageBytes in fabric/orderer/kafka/config.go and rebuild the binary from source. This is not advisable.)

  • log.retention.ms = -1 — Until the ordering service adds support for pruning of the Kafka logs, you should disable time-based retention and prevent segments from expiring. (Size-based retention, see log.retention.bytes, is disabled by default in Kafka at the time of this writing, so there’s no need to set it explicitly.)
    
  7. Orderers: Point each OSN to the genesis block. Edit General.GenesisFile in orderer.yaml so that it points to the genesis block created in Step 5 above. While at it, ensure all other keys in that YAML file are set appropriately.

  8. Orderers: Adjust polling intervals and timeouts. (Optional step.)

    • The Kafka.Retry section in the orderer.yaml file allows you to adjust the frequency of the metadata/producer/consumer requests, as well as the socket timeouts. (These are all settings you would expect to see in a Kafka producer or consumer.)

    • Additionally, when a new channel is created, or when an existing channel is reloaded (in case of a just-restarted orderer), the orderer interacts with the Kafka cluster in the following ways:

      • It creates a Kafka producer (writer) for the Kafka partition that corresponds to the channel.

      • It uses that producer to post a no-op CONNECT message to that partition.

      • It creates a Kafka consumer (reader) for that partition.

      • If any of these steps fail, you can adjust the frequency with which they are repeated. Specifically they will be re-attempted every Kafka.Retry.ShortInterval for a total of Kafka.Retry.ShortTotal, and then every Kafka.Retry.LongInterval for a total of Kafka.Retry.LongTotal until they succeed. Note that the orderer will be unable to write to or read from a channel until all of the steps above have been completed successfully.

  9. Set up the OSNs and Kafka cluster so that they communicate over SSL. (Optional step, but highly recommended.) Refer to the Confluent guide for the Kafka cluster side of the equation, and set the keys under Kafka.TLS in orderer.yaml on every OSN accordingly.

  10. Bring up the nodes in the following order: ZooKeeper ensemble, Kafka cluster, ordering service nodes.
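As an illustration of Step 3, the relevant fragment of configtx.yaml might look like the sketch below (the broker addresses are hypothetical):

Orderer:
  OrdererType: kafka
  Kafka:
    Brokers:
      - kafka0.example.com:9092
      - kafka1.example.com:9092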

Additional considerations

1. Preferred message size. In Step 4 above (see Steps section) you can also set the preferred size of blocks by setting the Orderer.Batchsize.PreferredMaxBytes key. Kafka offers higher throughput when dealing with relatively small messages; aim for a value no bigger than 1 MiB.

2. Using environment variables to override settings. When using the sample Kafka and ZooKeeper Docker images provided with Fabric (see images/kafka and images/zookeeper respectively), you can override a Kafka broker or a ZooKeeper server’s settings by using environment variables. Replace the dots of the configuration key with underscores. For example, KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false will allow you to override the default value of unclean.leader.election.enable. The same applies to the OSNs for their local configuration, i.e. what can be set in orderer.yaml. For example ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s allows you to override the default value for Orderer.Kafka.Retry.ShortInterval.

Kafka Protocol Version Compatibility

Fabric uses the sarama client library and vendors a version of it that supports Kafka 0.10 to 1.0, yet is still known to work with older versions.

Using the Kafka.Version key in orderer.yaml, you can configure which version of the Kafka protocol is used to communicate with the Kafka cluster’s brokers. Kafka brokers are backward compatible with older protocol versions, so upgrading your Kafka brokers to a new version does not require an update of the Kafka.Version key value. Note, however, that the Kafka cluster might suffer a performance penalty while using an older protocol version.

Debugging

Set the environment variable FABRIC_LOGGING_SPEC to DEBUG and set Kafka.Verbose to true in orderer.yaml.
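For example, a minimal sketch:

export FABRIC_LOGGING_SPEC=DEBUG

# in orderer.yaml
Kafka:
  Verbose: true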


Command Reference

peer

Description

The peer command has four different subcommands, each of which allows administrators to perform a specific set of tasks related to a peer. For example, you can use the peer channel subcommand to join a peer to a channel, or the peer chaincode command to deploy a smart contract chaincode to a peer.

Syntax

The peer command has four different subcommands within it:

peer chaincode [option] [flags]
peer channel   [option] [flags]
peer node      [option] [flags]
peer version   [option] [flags]

Each subcommand has different options available, and these are described in their own dedicated topic. For brevity, we often refer to a command (peer), a subcommand (channel), or subcommand option (fetch) simply as a command.

If a subcommand is specified without an option, then it will return some high level help text as described in the --help flag below.

Flags

Each peer subcommand has a specific set of flags associated with it, many of which are designated global because they can be used in all subcommand options. These flags are described with the relevant peer subcommand.

The top level peer command has the following flag:

  • --help

    Use --help to get brief help text for any peer command. The --help flag is very useful – it can be used to get command help, subcommand help, and even option help.

    For example

    peer --help
    peer channel --help
    peer channel list --help
    

    See individual peer subcommands for more detail.

Usage

Here is an example using the --help flag on the peer command.

  • Using the --help flag on the peer channel join command.

    peer channel join --help
    
    Joins the peer to a channel.
    
    Usage:
      peer channel join [flags]
    
    Flags:
      -b, --blockpath string   Path to file containing genesis block
      -h, --help               help for join
    
    Global Flags:
          --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
          --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
          --clientauth                          Use mutual TLS when communicating with the orderer endpoint
          --connTimeout duration                Timeout for client to connect (default 3s)
          --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
      -o, --orderer string                      Ordering service endpoint
          --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
          --tls                                 Use TLS when communicating with the orderer endpoint
    

    This shows brief help syntax for the peer channel join command.

peer chaincode

The peer chaincode command allows administrators to perform chaincode related operations on a peer, such as installing, instantiating, invoking, packaging, querying, and upgrading chaincode.

Syntax

The peer chaincode command has the following subcommands:

  • install

  • instantiate

  • invoke

  • list

  • package

  • query

  • signpackage

  • upgrade

The different subcommand options (install, instantiate…) relate to the different chaincode operations that are relevant to a peer. For example, use the peer chaincode install subcommand option to install a chaincode on a peer, or the peer chaincode query subcommand option to query a chaincode for the current value on a peer’s ledger.

Each peer chaincode subcommand is described together with its options in its own section in this topic.

Flags

Each peer chaincode subcommand has both a set of flags specific to an individual subcommand and a set of global flags that relate to all peer chaincode subcommands. Not all subcommands use these flags. For instance, the query subcommand does not need the --orderer flag.

The individual flags are described with the relevant subcommand. The global flags are

  • --cafile <string>

    Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint

  • --certfile <string>

    Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint

  • --keyfile <string>

    Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint

  • -o or --orderer <string>

    Ordering service endpoint specified as <hostname or IP address>:<port>

  • --ordererTLSHostnameOverride <string>

    The hostname override to use when validating the TLS connection to the orderer

  • --tls

    Use TLS when communicating with the orderer endpoint

  • --transient <string>

    Transient map of arguments in JSON encoding

peer chaincode install

Install a chaincode on a peer. This installs a chaincode deployment spec package (if provided) or packages the specified chaincode before subsequently installing it.

Usage:
  peer chaincode install [flags]

Flags:
      --connectionProfile string       Connection profile that provides the necessary connection information for the network. Note: currently only supported for providing peer connection information
  -c, --ctor string                    Constructor message for the chaincode in JSON format (default "{}")
  -h, --help                           help for install
  -l, --lang string                    Language the chaincode is written in (default "golang")
  -n, --name string                    Name of the chaincode
  -p, --path string                    Path to chaincode
      --peerAddresses stringArray      The addresses of the peers to connect to
      --tlsRootCertFiles stringArray   If TLS is enabled, the paths to the TLS root cert files of the peers to connect to. The order and number of certs specified should match the --peerAddresses flag
  -v, --version string                 Version of the chaincode specified in install/instantiate/upgrade commands

Global Flags:
      --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
      --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
      --clientauth                          Use mutual TLS when communicating with the orderer endpoint
      --connTimeout duration                Timeout for client to connect (default 3s)
      --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
  -o, --orderer string                      Ordering service endpoint
      --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
      --tls                                 Use TLS when communicating with the orderer endpoint
      --transient string                    Transient map of arguments in JSON encoding

peer chaincode instantiate

Deploy the specified chaincode to the network.

Usage:
  peer chaincode instantiate [flags]

Flags:
  -C, --channelID string               The channel on which this command should be executed
      --collections-config string      The fully qualified path to the collection JSON file including the file name
      --connectionProfile string       Connection profile that provides the necessary connection information for the network. Note: currently only supported for providing peer connection information
  -c, --ctor string                    Constructor message for the chaincode in JSON format (default "{}")
  -E, --escc string                    The name of the endorsement system chaincode to be used for this chaincode
  -h, --help                           help for instantiate
  -l, --lang string                    Language the chaincode is written in (default "golang")
  -n, --name string                    Name of the chaincode
      --peerAddresses stringArray      The addresses of the peers to connect to
  -P, --policy string                  The endorsement policy associated to this chaincode
      --tlsRootCertFiles stringArray   If TLS is enabled, the paths to the TLS root cert files of the peers to connect to. The order and number of certs specified should match the --peerAddresses flag
  -v, --version string                 Version of the chaincode specified in install/instantiate/upgrade commands
  -V, --vscc string                    The name of the verification system chaincode to be used for this chaincode

Global Flags:
      --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
      --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
      --clientauth                          Use mutual TLS when communicating with the orderer endpoint
      --connTimeout duration                Timeout for client to connect (default 3s)
      --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
  -o, --orderer string                      Ordering service endpoint
      --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
      --tls                                 Use TLS when communicating with the orderer endpoint
      --transient string                    Transient map of arguments in JSON encoding

peer chaincode invoke

Invoke the specified chaincode. It will try to commit the endorsed transaction to the network.

Usage:
  peer chaincode invoke [flags]

Flags:
  -C, --channelID string               The channel on which this command should be executed
      --connectionProfile string       Connection profile that provides the necessary connection information for the network. Note: currently only supported for providing peer connection information
  -c, --ctor string                    Constructor message for the chaincode in JSON format (default "{}")
  -h, --help                           help for invoke
  -I, --isInit                         Is this invocation for init (useful for supporting legacy chaincodes in the new lifecycle)
  -n, --name string                    Name of the chaincode
      --peerAddresses stringArray      The addresses of the peers to connect to
      --tlsRootCertFiles stringArray   If TLS is enabled, the paths to the TLS root cert files of the peers to connect to. The order and number of certs specified should match the --peerAddresses flag
      --waitForEvent                   Whether to wait for the event from each peer's deliver filtered service signifying that the 'invoke' transaction has been committed successfully
      --waitForEventTimeout duration   Time to wait for the event from each peer's deliver filtered service signifying that the 'invoke' transaction has been committed successfully (default 30s)

Global Flags:
      --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
      --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
      --clientauth                          Use mutual TLS when communicating with the orderer endpoint
      --connTimeout duration                Timeout for client to connect (default 3s)
      --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
  -o, --orderer string                      Ordering service endpoint
      --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
      --tls                                 Use TLS when communicating with the orderer endpoint
      --transient string                    Transient map of arguments in JSON encoding

peer chaincode list

Get the instantiated chaincodes in the channel if a channel is specified, or get the installed chaincodes on the peer.

Usage:
  peer chaincode list [flags]

Flags:
  -C, --channelID string               The channel on which this command should be executed
      --connectionProfile string       Connection profile that provides the necessary connection information for the network. Note: currently only supported for providing peer connection information
  -h, --help                           help for list
      --installed                      Get the installed chaincodes on a peer
      --instantiated                   Get the instantiated chaincodes on a channel
      --peerAddresses stringArray      The addresses of the peers to connect to
      --tlsRootCertFiles stringArray   If TLS is enabled, the paths to the TLS root cert files of the peers to connect to. The order and number of certs specified should match the --peerAddresses flag

Global Flags:
      --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
      --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
      --clientauth                          Use mutual TLS when communicating with the orderer endpoint
      --connTimeout duration                Timeout for client to connect (default 3s)
      --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
  -o, --orderer string                      Ordering service endpoint
      --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
      --tls                                 Use TLS when communicating with the orderer endpoint
      --transient string                    Transient map of arguments in JSON encoding

peer chaincode package

Package a chaincode and write the package to a file.

Usage:
  peer chaincode package [outputfile] [flags]

Flags:
  -s, --cc-package                  create CC deployment spec for owner endorsements instead of raw CC deployment spec
  -c, --ctor string                 Constructor message for the chaincode in JSON format (default "{}")
  -h, --help                        help for package
  -i, --instantiate-policy string   instantiation policy for the chaincode
  -l, --lang string                 Language the chaincode is written in (default "golang")
  -n, --name string                 Name of the chaincode
  -p, --path string                 Path to chaincode
  -S, --sign                        if creating CC deployment spec package for owner endorsements, also sign it with local MSP
  -v, --version string              Version of the chaincode specified in install/instantiate/upgrade commands

Global Flags:
      --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
      --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
      --clientauth                          Use mutual TLS when communicating with the orderer endpoint
      --connTimeout duration                Timeout for client to connect (default 3s)
      --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
  -o, --orderer string                      Ordering service endpoint
      --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
      --tls                                 Use TLS when communicating with the orderer endpoint
      --transient string                    Transient map of arguments in JSON encoding

peer chaincode query

Get the endorsed result of a chaincode function call and print it. It won’t generate a transaction.

Usage:
  peer chaincode query [flags]

Flags:
  -C, --channelID string               The channel on which this command should be executed
      --connectionProfile string       Connection profile that provides the necessary connection information for the network. Note: currently only supported for providing peer connection information
  -c, --ctor string                    Constructor message for the chaincode in JSON format (default "{}")
  -h, --help                           help for query
  -x, --hex                            If true, output the query value byte array in hexadecimal. Incompatible with --raw
  -n, --name string                    Name of the chaincode
      --peerAddresses stringArray      The addresses of the peers to connect to
  -r, --raw                            If true, output the query value as raw bytes, otherwise format as a printable string
      --tlsRootCertFiles stringArray   If TLS is enabled, the paths to the TLS root cert files of the peers to connect to. The order and number of certs specified should match the --peerAddresses flag

Global Flags:
      --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
      --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
      --clientauth                          Use mutual TLS when communicating with the orderer endpoint
      --connTimeout duration                Timeout for client to connect (default 3s)
      --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
  -o, --orderer string                      Ordering service endpoint
      --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
      --tls                                 Use TLS when communicating with the orderer endpoint
      --transient string                    Transient map of arguments in JSON encoding

peer chaincode signpackage

Sign the specified chaincode package

Usage:
  peer chaincode signpackage [flags]

Flags:
  -h, --help   help for signpackage

Global Flags:
      --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
      --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
      --clientauth                          Use mutual TLS when communicating with the orderer endpoint
      --connTimeout duration                Timeout for client to connect (default 3s)
      --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
  -o, --orderer string                      Ordering service endpoint
      --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
      --tls                                 Use TLS when communicating with the orderer endpoint
      --transient string                    Transient map of arguments in JSON encoding

peer chaincode upgrade

Upgrade an existing chaincode with the specified one. The new chaincode will immediately replace the existing chaincode once the transaction is committed.

Usage:
  peer chaincode upgrade [flags]

Flags:
  -C, --channelID string               The channel on which this command should be executed
      --collections-config string      The fully qualified path to the collection JSON file including the file name
      --connectionProfile string       Connection profile that provides the necessary connection information for the network. Note: currently only supported for providing peer connection information
  -c, --ctor string                    Constructor message for the chaincode in JSON format (default "{}")
  -E, --escc string                    The name of the endorsement system chaincode to be used for this chaincode
  -h, --help                           help for upgrade
  -l, --lang string                    Language the chaincode is written in (default "golang")
  -n, --name string                    Name of the chaincode
  -p, --path string                    Path to chaincode
      --peerAddresses stringArray      The addresses of the peers to connect to
  -P, --policy string                  The endorsement policy associated to this chaincode
      --tlsRootCertFiles stringArray   If TLS is enabled, the paths to the TLS root cert files of the peers to connect to. The order and number of certs specified should match the --peerAddresses flag
  -v, --version string                 Version of the chaincode specified in install/instantiate/upgrade commands
  -V, --vscc string                    The name of the verification system chaincode to be used for this chaincode

Global Flags:
      --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
      --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
      --clientauth                          Use mutual TLS when communicating with the orderer endpoint
      --connTimeout duration                Timeout for client to connect (default 3s)
      --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
  -o, --orderer string                      Ordering service endpoint
      --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
      --tls                                 Use TLS when communicating with the orderer endpoint
      --transient string                    Transient map of arguments in JSON encoding

Example Usage

peer chaincode instantiate examples

Here are some examples of the peer chaincode instantiate command, which instantiates the chaincode named mycc at version 1.0 on channel mychannel:

  • Using the --tls and --cafile global flags to instantiate the chaincode in a network with TLS enabled:

    export ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
    peer chaincode instantiate -o orderer.example.com:7050 --tls --cafile $ORDERER_CA -C mychannel -n mycc -v 1.0 -c '{"Args":["init","a","100","b","200"]}' -P "AND ('Org1MSP.peer','Org2MSP.peer')"
    
    2018-02-22 16:33:53.324 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 001 Using default escc
    2018-02-22 16:33:53.324 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 002 Using default vscc
    2018-02-22 16:34:08.698 UTC [main] main -> INFO 003 Exiting.....
    
  • Using only the command-specific options to instantiate the chaincode in a network with TLS disabled:

    peer chaincode instantiate -o orderer.example.com:7050 -C mychannel -n mycc -v 1.0 -c '{"Args":["init","a","100","b","200"]}' -P "AND ('Org1MSP.peer','Org2MSP.peer')"
    
    
    2018-02-22 16:34:09.324 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 001 Using default escc
    2018-02-22 16:34:09.324 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 002 Using default vscc
    2018-02-22 16:34:24.698 UTC [main] main -> INFO 003 Exiting.....
    
peer chaincode invoke example

Here is an example of the peer chaincode invoke command:

  • Invoke the chaincode named mycc at version 1.0 on channel mychannel on peer0.org1.example.com:7051 and peer0.org2.example.com:9051 (the peers defined by --peerAddresses), requesting to move 10 units from variable a to variable b:

    peer chaincode invoke -o orderer.example.com:7050 -C mychannel -n mycc --peerAddresses peer0.org1.example.com:7051 --peerAddresses peer0.org2.example.com:9051 -c '{"Args":["invoke","a","b","10"]}'
    
    2018-02-22 16:34:27.069 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 001 Using default escc
    2018-02-22 16:34:27.069 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 002 Using default vscc
    .
    .
    .
    2018-02-22 16:34:27.106 UTC [chaincodeCmd] chaincodeInvokeOrQuery -> DEBU 00a ESCC invoke result: version:1 response:<status:200 message:"OK" > payload:"\n \237mM\376? [\214\002 \332\204\035\275q\227\2132A\n\204&\2106\037W|\346#\3413\274\022Y\nE\022\024\n\004lscc\022\014\n\n\n\004mycc\022\002\010\003\022-\n\004mycc\022%\n\007\n\001a\022\002\010\003\n\007\n\001b\022\002\010\003\032\007\n\001a\032\00290\032\010\n\001b\032\003210\032\003\010\310\001\"\013\022\004mycc\032\0031.0" endorsement:<endorser:"\n\007Org1MSP\022\262\006-----BEGIN CERTIFICATE-----\nMIICLjCCAdWgAwIBAgIRAJYomxY2cqHA/fbRnH5a/bwwCgYIKoZIzj0EAwIwczEL\nMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDVNhbiBG\ncmFuY2lzY28xGTAXBgNVBAoTEG9yZzEuZXhhbXBsZS5jb20xHDAaBgNVBAMTE2Nh\nLm9yZzEuZXhhbXBsZS5jb20wHhcNMTgwMjIyMTYyODE0WhcNMjgwMjIwMTYyODE0\nWjBwMQswCQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMN\nU2FuIEZyYW5jaXNjbzETMBEGA1UECxMKRmFicmljUGVlcjEfMB0GA1UEAxMWcGVl\ncjAub3JnMS5leGFtcGxlLmNvbTBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABDEa\nWNNniN3qOCQL89BGWfY39f5V3o1pi//7JFDHATJXtLgJhkK5KosDdHuKLYbCqvge\n46u3AC16MZyJRvKBiw6jTTBLMA4GA1UdDwEB/wQEAwIHgDAMBgNVHRMBAf8EAjAA\nMCsGA1UdIwQkMCKAIN7dJR9dimkFtkus0R5pAOlRz5SA3FB5t8Eaxl9A7lkgMAoG\nCCqGSM49BAMCA0cAMEQCIC2DAsO9QZzQmKi8OOKwcCh9Gd01YmWIN3oVmaCRr8C7\nAiAlQffq2JFlbh6OWURGOko6RckizG8oVOldZG/Xj3C8lA==\n-----END CERTIFICATE-----\n" signature:"0D\002 \022_\342\350\344\231G&\237\n\244\375\302J\220l\302\345\210\335D\250y\253P\0214:\221e\332@\002 \000\254\361\224\247\210\214L\277\370\222\213\217\301\r\341v\227\265\277\336\256^\217\336\005y*\321\023\025\367" >
    2018-02-22 16:34:27.107 UTC [chaincodeCmd] chaincodeInvokeOrQuery -> INFO 00b Chaincode invoke successful. result: status:200
    2018-02-22 16:34:27.107 UTC [main] main -> INFO 00c Exiting.....
    

    Here you can see that the invoke was submitted successfully based on the log message:

    2018-02-22 16:34:27.107 UTC [chaincodeCmd] chaincodeInvokeOrQuery -> INFO 00b Chaincode invoke successful. result: status:200
    

    A successful response indicates that the transaction was submitted for ordering. The transaction will then be added to a block and, finally, validated or invalidated by each peer on the channel.
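
    For completeness, here is a sketch of the same invoke against a TLS-enabled network, mirroring the instantiate examples above. It assumes the same $ORDERER_CA path; the --tlsRootCertFiles values are placeholders for your peers' TLS CA certificates, and --waitForEvent makes the command block until each peer reports that the transaction has been committed:

    export ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
    peer chaincode invoke -o orderer.example.com:7050 --tls --cafile $ORDERER_CA -C mychannel -n mycc --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles <org1-tls-ca.crt> --peerAddresses peer0.org2.example.com:9051 --tlsRootCertFiles <org2-tls-ca.crt> --waitForEvent -c '{"Args":["invoke","a","b","10"]}'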

peer chaincode list example

Here are some examples of the peer chaincode list command:

  • Using the --installed flag to list the chaincodes installed on a peer.

    peer chaincode list --installed
    
    Get installed chaincodes on peer:
    Name: mycc, Version: 1.0, Path: github.com/hyperledger/fabric-samples/chaincode/abstore/go, Id: 8cc2730fdafd0b28ef734eac12b29df5fc98ad98bdb1b7e0ef96265c3d893d61
    2018-02-22 17:07:13.476 UTC [main] main -> INFO 001 Exiting.....
    

    You can see that the peer has installed a chaincode called mycc which is at version 1.0.

  • Using the --instantiated flag in combination with the -C (channel ID) flag to list the chaincodes instantiated on a channel.

    peer chaincode list --instantiated -C mychannel
    
    Get instantiated chaincodes on channel mychannel:
    Name: mycc, Version: 1.0, Path: github.com/hyperledger/fabric-samples/chaincode/abstore/go, Escc: escc, Vscc: vscc
    2018-02-22 17:07:42.969 UTC [main] main -> INFO 001 Exiting.....
    

    You can see that chaincode mycc at version 1.0 is instantiated on channel mychannel.

peer chaincode package example

Here is an example of the peer chaincode package command, which packages the chaincode named mycc at version 1.1, creates the chaincode deployment spec, signs the package using the local MSP, and outputs it as ccpack.out:

  peer chaincode package ccpack.out -n mycc -p github.com/hyperledger/fabric-samples/chaincode/abstore/go -v 1.1 -s -S
  .
  .
  .
  2018-02-22 17:27:01.404 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 003 Using default escc
  2018-02-22 17:27:01.405 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 004 Using default vscc
  .
  .
  .
  2018-02-22 17:27:01.879 UTC [chaincodeCmd] chaincodePackage -> DEBU 011 Packaged chaincode into deployment spec of size <3426>, with args = [ccpack.out]
  2018-02-22 17:27:01.879 UTC [main] main -> INFO 012 Exiting.....


peer chaincode query example

Here is an example of the peer chaincode query command, which queries the peer ledger for the chaincode named mycc at version 1.0 for the value of variable a:

  • You can see from the output that variable a had a value of 90 at the time of the query.

    peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
    
    2018-02-22 16:34:30.816 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 001 Using default escc
    2018-02-22 16:34:30.816 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 002 Using default vscc
    Query Result: 90
    

peer chaincode signpackage example

Here is an example of the peer chaincode signpackage command, which accepts an existing signed package and creates a new one with the signature of the local MSP appended to it.

    peer chaincode signpackage ccwith1sig.pak ccwith2sig.pak
    
    Wrote signed package to ccwith2sig.pak successfully
    2018-02-24 19:32:47.189 EST [main] main -> INFO 002 Exiting.....


peer chaincode upgrade example

Here is an example of the peer chaincode upgrade command, which upgrades the chaincode named mycc at version 1.1 on channel mychannel to version 1.2, which contains a new variable c:

  • Using the --tls and --cafile global flags to upgrade the chaincode in a network with TLS enabled:

  export ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
  peer chaincode upgrade -o orderer.example.com:7050 --tls --cafile $ORDERER_CA -C mychannel -n mycc -v 1.2 -c '{"Args":["init","a","100","b","200","c","300"]}' -P "AND ('Org1MSP.peer','Org2MSP.peer')"
  .
  .
  .
  2018-02-22 18:26:31.433 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 003 Using default escc
  2018-02-22 18:26:31.434 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 004 Using default vscc
  2018-02-22 18:26:31.435 UTC [chaincodeCmd] getChaincodeSpec -> DEBU 005 java chaincode enabled
  2018-02-22 18:26:31.435 UTC [chaincodeCmd] upgrade -> DEBU 006 Get upgrade proposal for chaincode <name:"mycc" version:"1.1" >
  .
  .
  .
  2018-02-22 18:26:46.687 UTC [chaincodeCmd] upgrade -> DEBU 009 endorse upgrade proposal, get response <status:200 message:"OK" payload:"\n\004mycc\022\0031.1\032\004escc\"\004vscc*,\022\014\022\n\010\001\022\002\010\000\022\002\010\001\032\r\022\013\n\007Org1MSP\020\003\032\r\022\013\n\007Org2MSP\020\0032f\n \261g(^v\021\220\240\332\251\014\204V\210P\310o\231\271\036\301\022\032\205fC[|=\215\372\223\022 \311b\025?\323N\343\325\032\005\365\236\001XKj\004E\351\007\247\265fu\305j\367\331\275\253\307R\032 \014H#\014\272!#\345\306s\323\371\350\364\006.\000\356\230\353\270\263\215\217\303\256\220i^\277\305\214: \375\200zY\275\203}\375\244\205\035\340\226]l!uE\334\273\214\214\020\303\3474\360\014\234-\006\315B\031\022\010\022\006\010\001\022\002\010\000\032\r\022\013\n\007Org1MSP\020\001" >
  .
  .
  .
  2018-02-22 18:26:46.693 UTC [chaincodeCmd] upgrade -> DEBU 00c Get Signed envelope
  2018-02-22 18:26:46.693 UTC [chaincodeCmd] chaincodeUpgrade -> DEBU 00d Send signed envelope to orderer
  2018-02-22 18:26:46.908 UTC [main] main -> INFO 00e Exiting.....

  • Using only the command-specific options to upgrade the chaincode in a network with TLS disabled:

  peer chaincode upgrade -o orderer.example.com:7050 -C mychannel -n mycc -v 1.2 -c '{"Args":["init","a","100","b","200","c","300"]}' -P "AND ('Org1MSP.peer','Org2MSP.peer')"
  .
  .
  .
  2018-02-22 18:28:31.433 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 003 Using default escc
  2018-02-22 18:28:31.434 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 004 Using default vscc
  2018-02-22 18:28:31.435 UTC [chaincodeCmd] getChaincodeSpec -> DEBU 005 java chaincode enabled
  2018-02-22 18:28:31.435 UTC [chaincodeCmd] upgrade -> DEBU 006 Get upgrade proposal for chaincode <name:"mycc" version:"1.1" >
  .
  .
  .
  2018-02-22 18:28:46.687 UTC [chaincodeCmd] upgrade -> DEBU 009 endorse upgrade proposal, get response <status:200 message:"OK" payload:"\n\004mycc\022\0031.1\032\004escc\"\004vscc*,\022\014\022\n\010\001\022\002\010\000\022\002\010\001\032\r\022\013\n\007Org1MSP\020\003\032\r\022\013\n\007Org2MSP\020\0032f\n \261g(^v\021\220\240\332\251\014\204V\210P\310o\231\271\036\301\022\032\205fC[|=\215\372\223\022 \311b\025?\323N\343\325\032\005\365\236\001XKj\004E\351\007\247\265fu\305j\367\331\275\253\307R\032 \014H#\014\272!#\345\306s\323\371\350\364\006.\000\356\230\353\270\263\215\217\303\256\220i^\277\305\214: \375\200zY\275\203}\375\244\205\035\340\226]l!uE\334\273\214\214\020\303\3474\360\014\234-\006\315B\031\022\010\022\006\010\001\022\002\010\000\032\r\022\013\n\007Org1MSP\020\001" >
  .
  .
  .
  2018-02-22 18:28:46.693 UTC [chaincodeCmd] upgrade -> DEBU 00c Get Signed envelope
  2018-02-22 18:28:46.693 UTC [chaincodeCmd] chaincodeUpgrade -> DEBU 00d Send signed envelope to orderer
  2018-02-22 18:28:46.908 UTC [main] main -> INFO 00e Exiting.....

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

peer lifecycle chaincode

The peer lifecycle chaincode subcommand allows administrators to use the Fabric chaincode lifecycle to package a chaincode, install it on your peers, approve a chaincode definition for your organization, and then commit the definition to a channel. The chaincode is ready to be used after the definition has been successfully committed to the channel. For more information, visit Chaincode for Operators.

Note: These instructions use the Fabric chaincode lifecycle introduced in the v2.0 Alpha release. If you would like to use the old lifecycle to install and instantiate a chaincode, visit the peer chaincode command reference.
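
As a quick orientation, the full flow can be condensed into the following sequence (a sketch only; <path-to-chaincode> and <package-id> are placeholders, and each step is shown in detail in the Example Usage section below):

    # 1. Package the chaincode source
    peer lifecycle chaincode package mycc.tar.gz --path <path-to-chaincode> --lang golang --label myccv1
    # 2. Install the package on each endorsing peer
    peer lifecycle chaincode install mycc.tar.gz
    # 3. Look up the package ID assigned by the peer
    peer lifecycle chaincode queryinstalled
    # 4. Approve the definition for your organization
    peer lifecycle chaincode approveformyorg --channelID mychannel --name mycc --version 1.0 --sequence 1 --package-id <package-id>
    # 5. Commit the definition once enough organizations have approved it
    peer lifecycle chaincode commit --channelID mychannel --name mycc --version 1.0 --sequence 1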

Syntax

The peer lifecycle chaincode command has the following subcommands:

  • package

  • install

  • queryinstalled

  • approveformyorg

  • queryapprovalstatus

  • commit

  • querycommitted

Each peer lifecycle chaincode subcommand is described together with its options in its own section in this topic.

peer lifecycle

Perform _lifecycle operations

Usage:
  peer lifecycle [command]

Available Commands:
  chaincode   Perform chaincode operations: package|install|queryinstalled|approveformyorg|queryapprovalstatus|commit|querycommitted

Flags:
  -h, --help   help for lifecycle

Use "peer lifecycle [command] --help" for more information about a command.

peer lifecycle chaincode

Perform _lifecycle operations: package|install|queryinstalled|approveformyorg|queryapprovalstatus|commit|querycommitted

Usage:
  peer lifecycle chaincode [command]

Available Commands:
  approveformyorg     Approve the chaincode definition for my org.
  commit              Commit the chaincode definition on the channel.
  install             Install a chaincode.
  package             Package a chaincode
  queryapprovalstatus Query approval status for chaincode definition.
  querycommitted      Query a committed chaincode definition by channel and name on a peer.
  queryinstalled      Query the installed chaincodes on a peer.

Flags:
      --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
      --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
      --clientauth                          Use mutual TLS when communicating with the orderer endpoint
      --connTimeout duration                Timeout for client to connect (default 3s)
  -h, --help                                help for chaincode
      --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
  -o, --orderer string                      Ordering service endpoint
      --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
      --tls                                 Use TLS when communicating with the orderer endpoint

Use "peer lifecycle chaincode [command] --help" for more information about a command.

peer lifecycle chaincode package

Package a chaincode and write the package to a file.

Usage:
  peer lifecycle chaincode package [outputfile] [flags]

Flags:
      --connectionProfile string       The fully qualified path to the connection profile that provides the necessary connection information for the network. Note: currently only supported for providing peer connection information
  -h, --help                           help for package
      --label string                   The package label contains a human-readable description of the package
  -l, --lang string                    Language the chaincode is written in (default "golang")
  -p, --path string                    Path to the chaincode
      --peerAddresses stringArray      The addresses of the peers to connect to
      --tlsRootCertFiles stringArray   If TLS is enabled, the paths to the TLS root cert files of the peers to connect to. The order and number of certs specified should match the --peerAddresses flag

Global Flags:
      --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
      --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
      --clientauth                          Use mutual TLS when communicating with the orderer endpoint
      --connTimeout duration                Timeout for client to connect (default 3s)
      --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
  -o, --orderer string                      Ordering service endpoint
      --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
      --tls                                 Use TLS when communicating with the orderer endpoint

peer lifecycle chaincode install

Install a chaincode on a peer.

Usage:
  peer lifecycle chaincode install [flags]

Flags:
      --connectionProfile string       The fully qualified path to the connection profile that provides the necessary connection information for the network. Note: currently only supported for providing peer connection information
  -h, --help                           help for install
      --peerAddresses stringArray      The addresses of the peers to connect to
      --tlsRootCertFiles stringArray   If TLS is enabled, the paths to the TLS root cert files of the peers to connect to. The order and number of certs specified should match the --peerAddresses flag

Global Flags:
      --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
      --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
      --clientauth                          Use mutual TLS when communicating with the orderer endpoint
      --connTimeout duration                Timeout for client to connect (default 3s)
      --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
  -o, --orderer string                      Ordering service endpoint
      --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
      --tls                                 Use TLS when communicating with the orderer endpoint

peer lifecycle chaincode queryinstalled

Query the installed chaincodes on a peer.

Usage:
  peer lifecycle chaincode queryinstalled [flags]

Flags:
      --connectionProfile string       The fully qualified path to the connection profile that provides the necessary connection information for the network. Note: currently only supported for providing peer connection information
  -h, --help                           help for queryinstalled
      --peerAddresses stringArray      The addresses of the peers to connect to
      --tlsRootCertFiles stringArray   If TLS is enabled, the paths to the TLS root cert files of the peers to connect to. The order and number of certs specified should match the --peerAddresses flag

Global Flags:
      --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
      --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
      --clientauth                          Use mutual TLS when communicating with the orderer endpoint
      --connTimeout duration                Timeout for client to connect (default 3s)
      --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
  -o, --orderer string                      Ordering service endpoint
      --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
      --tls                                 Use TLS when communicating with the orderer endpoint

peer lifecycle chaincode approveformyorg

Approve the chaincode definition for my organization.

Usage:
  peer lifecycle chaincode approveformyorg [flags]

Flags:
      --channel-config-policy string   The endorsement policy associated to this chaincode specified as a channel config policy reference
  -C, --channelID string               The channel on which this command should be executed
      --collections-config string      The fully qualified path to the collection JSON file including the file name
      --connectionProfile string       The fully qualified path to the connection profile that provides the necessary connection information for the network. Note: currently only supported for providing peer connection information
  -E, --endorsement-plugin string      The name of the endorsement plugin to be used for this chaincode
  -h, --help                           help for approveformyorg
      --init-required                  Whether the chaincode requires invoking 'init'
  -n, --name string                    Name of the chaincode
      --package-id string              The identifier of the chaincode install package
      --peerAddresses stringArray      The addresses of the peers to connect to
      --sequence int                   The sequence number of the chaincode definition for the channel (default 1)
      --signature-policy string        The endorsement policy associated to this chaincode specified as a signature policy
      --tlsRootCertFiles stringArray   If TLS is enabled, the paths to the TLS root cert files of the peers to connect to. The order and number of certs specified should match the --peerAddresses flag
  -V, --validation-plugin string       The name of the validation plugin to be used for this chaincode
  -v, --version string                 Version of the chaincode
      --waitForEvent                   Whether to wait for the event from each peer's deliver filtered service signifying that the transaction has been committed successfully (default true)
      --waitForEventTimeout duration   Time to wait for the event from each peer's deliver filtered service signifying that the 'invoke' transaction has been committed successfully (default 30s)

Global Flags:
      --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
      --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
      --clientauth                          Use mutual TLS when communicating with the orderer endpoint
      --connTimeout duration                Timeout for client to connect (default 3s)
      --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
  -o, --orderer string                      Ordering service endpoint
      --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
      --tls                                 Use TLS when communicating with the orderer endpoint

peer lifecycle chaincode queryapprovalstatus

Query approval status for chaincode definition.

Usage:
  peer lifecycle chaincode queryapprovalstatus [flags]

Flags:
      --channel-config-policy string   The endorsement policy associated to this chaincode specified as a channel config policy reference
  -C, --channelID string               The channel on which this command should be executed
      --collections-config string      The fully qualified path to the collection JSON file including the file name
      --connectionProfile string       The fully qualified path to the connection profile that provides the necessary connection information for the network. Note: currently only supported for providing peer connection information
  -E, --endorsement-plugin string      The name of the endorsement plugin to be used for this chaincode
  -h, --help                           help for queryapprovalstatus
      --init-required                  Whether the chaincode requires invoking 'init'
  -n, --name string                    Name of the chaincode
      --peerAddresses stringArray      The addresses of the peers to connect to
      --sequence int                   The sequence number of the chaincode definition for the channel (default 1)
      --signature-policy string        The endorsement policy associated to this chaincode specified as a signature policy
      --tlsRootCertFiles stringArray   If TLS is enabled, the paths to the TLS root cert files of the peers to connect to. The order and number of certs specified should match the --peerAddresses flag
  -V, --validation-plugin string       The name of the validation plugin to be used for this chaincode
  -v, --version string                 Version of the chaincode

Global Flags:
      --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
      --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
      --clientauth                          Use mutual TLS when communicating with the orderer endpoint
      --connTimeout duration                Timeout for client to connect (default 3s)
      --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
  -o, --orderer string                      Ordering service endpoint
      --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
      --tls                                 Use TLS when communicating with the orderer endpoint

peer lifecycle chaincode commit

Commit the chaincode definition on the channel.

Usage:
  peer lifecycle chaincode commit [flags]

Flags:
      --channel-config-policy string   The endorsement policy associated to this chaincode specified as a channel config policy reference
  -C, --channelID string               The channel on which this command should be executed
      --collections-config string      The fully qualified path to the collection JSON file including the file name
      --connectionProfile string       The fully qualified path to the connection profile that provides the necessary connection information for the network. Note: currently only supported for providing peer connection information
  -E, --endorsement-plugin string      The name of the endorsement plugin to be used for this chaincode
  -h, --help                           help for commit
      --init-required                  Whether the chaincode requires invoking 'init'
  -n, --name string                    Name of the chaincode
      --peerAddresses stringArray      The addresses of the peers to connect to
      --sequence int                   The sequence number of the chaincode definition for the channel (default 1)
      --signature-policy string        The endorsement policy associated to this chaincode specified as a signature policy
      --tlsRootCertFiles stringArray   If TLS is enabled, the paths to the TLS root cert files of the peers to connect to. The order and number of certs specified should match the --peerAddresses flag
  -V, --validation-plugin string       The name of the validation plugin to be used for this chaincode
  -v, --version string                 Version of the chaincode
      --waitForEvent                   Whether to wait for the event from each peer's deliver filtered service signifying that the transaction has been committed successfully (default true)
      --waitForEventTimeout duration   Time to wait for the event from each peer's deliver filtered service signifying that the 'invoke' transaction has been committed successfully (default 30s)

Global Flags:
      --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
      --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
      --clientauth                          Use mutual TLS when communicating with the orderer endpoint
      --connTimeout duration                Timeout for client to connect (default 3s)
      --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
  -o, --orderer string                      Ordering service endpoint
      --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
      --tls                                 Use TLS when communicating with the orderer endpoint

peer lifecycle chaincode querycommitted

Query a committed chaincode definition by channel and name on a peer.

Usage:
  peer lifecycle chaincode querycommitted [flags]

Flags:
  -C, --channelID string               The channel on which this command should be executed
      --connectionProfile string       The fully qualified path to the connection profile that provides the necessary connection information for the network. Note: currently only supported for providing peer connection information
  -h, --help                           help for querycommitted
  -n, --name string                    Name of the chaincode
      --peerAddresses stringArray      The addresses of the peers to connect to
      --tlsRootCertFiles stringArray   If TLS is enabled, the paths to the TLS root cert files of the peers to connect to. The order and number of certs specified should match the --peerAddresses flag

Global Flags:
      --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
      --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
      --clientauth                          Use mutual TLS when communicating with the orderer endpoint
      --connTimeout duration                Timeout for client to connect (default 3s)
      --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
  -o, --orderer string                      Ordering service endpoint
      --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
      --tls                                 Use TLS when communicating with the orderer endpoint

Example Usage

peer lifecycle chaincode package example

A chaincode needs to be packaged before it can be installed on your peers. This example uses the peer lifecycle chaincode package command to package a Golang chaincode.

  • Use the --label flag to provide a chaincode package label of myccv1 that your organization will use to identify the package.

    peer lifecycle chaincode package mycc.tar.gz --path github.com/hyperledger/fabric-samples/chaincode/abstore/go/ --lang golang --label myccv1
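
    The resulting archive can be inspected before installing it (a quick sanity check; a lifecycle package is a tar.gz holding a metadata.json that records the label, type, and path, plus the chaincode itself in code.tar.gz):

    # list the archive contents; expect metadata.json and code.tar.gz
    tar tzf mycc.tar.gz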
    
peer lifecycle chaincode install example

After the chaincode is packaged, you can use the peer lifecycle chaincode install command to install the chaincode on your peers.

  • Install the mycc.tar.gz package on peer0.org1.example.com:7051 (the peer defined by --peerAddresses).

    peer lifecycle chaincode install mycc.tar.gz --peerAddresses peer0.org1.example.com:7051
    

    If successful, the command will return the package identifier. The package ID is the package label combined with a hash of the chaincode package computed by the peer.

    2019-03-13 13:48:53.691 UTC [cli.lifecycle.chaincode] submitInstallProposal -> INFO 001 Installed remotely: response:<status:200 payload:"\nEmycc:ebd89878c2bbccf62f68c36072626359376aa83c36435a058d453e8dbfd894cc" >
    2019-03-13 13:48:53.691 UTC [cli.lifecycle.chaincode] submitInstallProposal -> INFO 002 Chaincode code package identifier: mycc:a7ca45a7cc85f1d89c905b775920361ed089a364e12a9b6d55ba75c965ddd6a9
    
peer lifecycle chaincode queryinstalled example

You need to use the chaincode package identifier to approve a chaincode definition for your organization. You can find the package ID for the chaincodes you have installed by using the peer lifecycle chaincode queryinstalled command:

    peer lifecycle chaincode queryinstalled --peerAddresses peer0.org1.example.com:7051
    
A successful command will return the package ID associated with the package label.

    Get installed chaincodes on peer:
    Package ID: myccv1:a7ca45a7cc85f1d89c905b775920361ed089a364e12a9b6d55ba75c965ddd6a9, Label: myccv1
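
Since the package ID is needed for the approveformyorg step, it can be convenient to copy it from the output into an environment variable (a usage convenience only; CC_PACKAGE_ID is an arbitrary name):

    export CC_PACKAGE_ID=myccv1:a7ca45a7cc85f1d89c905b775920361ed089a364e12a9b6d55ba75c965ddd6a9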
peer lifecycle chaincode approveformyorg example

Once the chaincode package has been installed on your peers, you can approve a chaincode definition for your organization. The chaincode definition includes the important parameters of chaincode governance: the chaincode name, version, and endorsement policy.

Here is an example of the peer lifecycle chaincode approveformyorg command, which approves the definition of a chaincode named mycc at version 1.0 on channel mychannel.

  • Use the --package-id flag to pass in the chaincode package identifier. Use the --signature-policy flag to define an endorsement policy for the chaincode. Use the --init-required flag to request the execution of the Init function to initialize the chaincode.

    export ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
    .
    peer lifecycle chaincode approveformyorg  -o orderer.example.com:7050 --tls --cafile $ORDERER_CA --channelID mychannel --name mycc --version 1.0 --init-required --package-id myccv1:a7ca45a7cc85f1d89c905b775920361ed089a364e12a9b6d55ba75c965ddd6a9 --sequence 1 --signature-policy "AND ('Org1MSP.peer','Org2MSP.peer')"
    .
    2019-03-18 16:04:09.046 UTC [cli.lifecycle.chaincode] InitCmdFactory -> INFO 001 Retrieved channel (mychannel) orderer endpoint: orderer.example.com:7050
    2019-03-18 16:04:11.253 UTC [chaincodeCmd] ClientWait -> INFO 002 txid [efba188ca77889cc1c328fc98e0bb12d3ad0abcda3f84da3714471c7c1e6c13c] committed with status (VALID) at peer0.org1.example.com:7051
    
  • You can also use the --channel-config-policy flag to use a policy inside the channel configuration as the chaincode endorsement policy. The default endorsement policy is Channel/Application/Endorsement.

    export ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
    .
    peer lifecycle chaincode approveformyorg -o orderer.example.com:7050 --tls --cafile $ORDERER_CA --channelID mychannel --name mycc --version 1.0 --init-required --package-id myccv1:a7ca45a7cc85f1d89c905b775920361ed089a364e12a9b6d55ba75c965ddd6a9 --sequence 1 --channel-config-policy Channel/Application/Admins
    .
    2019-03-18 16:04:09.046 UTC [cli.lifecycle.chaincode] InitCmdFactory -> INFO 001 Retrieved channel (mychannel) orderer endpoint: orderer.example.com:7050
    2019-03-18 16:04:11.253 UTC [chaincodeCmd] ClientWait -> INFO 002 txid [efba188ca77889cc1c328fc98e0bb12d3ad0abcda3f84da3714471c7c1e6c13c] committed with status (VALID) at peer0.org1.example.com:7051
    
peer lifecycle chaincode queryapprovalstatus example

You can query which organizations have approved a chaincode definition before you commit the definition to the channel using the peer lifecycle chaincode queryapprovalstatus command. If an organization has approved the chaincode definition specified in the command, the command will return a value of true. You can use this command to learn whether enough channel members have approved a chaincode definition to meet the Channel/Application/Endorsement policy (a majority by default) before the definition can be committed to a channel.

    export ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
    .
    peer lifecycle chaincode queryapprovalstatus -o orderer.example.com:7050 --channelID mychannel --tls --cafile $ORDERER_CA --name mycc --version 1.0 --init-required --sequence 1
  • If successful, the command will return a JSON map that shows whether each organization has approved the chaincode definition.

      {
            "Approved": {
                    "Org1MSP": true,
                    "Org2MSP": true
            }
      }
    
peer lifecycle chaincode commit example

Once a sufficient number of organizations have approved a chaincode definition (a majority by default), one organization can commit the definition to the channel using the peer lifecycle chaincode commit command:

  • This command needs to target the peers of other organizations on the channel to collect their organization endorsements of the definition.

    export ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
    .
    peer lifecycle chaincode commit -o orderer.example.com:7050 --channelID mychannel --name mycc --version 1.0 --sequence 1 --init-required --tls --cafile $ORDERER_CA --peerAddresses peer0.org1.example.com:7051 --peerAddresses peer0.org2.example.com:9051
    .
    2019-03-18 16:14:27.258 UTC [chaincodeCmd] ClientWait -> INFO 001 txid [b6f657a14689b27d69a50f39590b3949906b5a426f9d7f0dcee557f775e17882] committed with status (VALID) at peer0.org2.example.com:9051
    2019-03-18 16:14:27.321 UTC [chaincodeCmd] ClientWait -> INFO 002 txid [b6f657a14689b27d69a50f39590b3949906b5a426f9d7f0dcee557f775e17882] committed with status (VALID) at peer0.org1.example.com:7051
    
peer lifecycle chaincode querycommitted example

You can query the chaincode definitions that have been committed to a channel by using the peer lifecycle chaincode querycommitted command. You can use this command to query the current definition sequence number before upgrading a chaincode.

  • You need to supply the chaincode name and channel name in order to query the chaincode definition.

    export ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
    .
    peer lifecycle chaincode querycommitted -o orderer.example.com:7050 --channelID mychannel --name mycc --tls --cafile $ORDERER_CA --peerAddresses peer0.org1.example.com:7051
    .
    Committed chaincode definition for chaincode 'mycc' on channel 'mychannel':
    Version: 1, Sequence: 1, Endorsement Plugin: escc, Validation Plugin: vscc
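
    For example, since the committed definition above shows Sequence: 1, a subsequent upgrade of mycc would repeat the approve and commit steps with --sequence 2; a sketch, assuming the new package has already been installed and its package ID exported as $NEW_PACKAGE_ID (a placeholder):

    peer lifecycle chaincode approveformyorg -o orderer.example.com:7050 --tls --cafile $ORDERER_CA --channelID mychannel --name mycc --version 2.0 --package-id $NEW_PACKAGE_ID --sequence 2
    peer lifecycle chaincode commit -o orderer.example.com:7050 --tls --cafile $ORDERER_CA --channelID mychannel --name mycc --version 2.0 --sequence 2 --peerAddresses peer0.org1.example.com:7051 --peerAddresses peer0.org2.example.com:9051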
    

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

peer channel

The peer channel command allows administrators to perform channel related operations on a peer, such as joining a channel or listing the channels to which a peer is joined.
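
In outline, a typical sequence is to create the channel against the ordering service, join the peer using the returned genesis block, and confirm membership (a condensed sketch using the same names as the Example Usage section below):

    peer channel create -c mychannel -f ./createchannel.txn --orderer orderer.example.com:7050
    peer channel join -b ./mychannel.block
    peer channel list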

Syntax

The peer channel command has the following subcommands:

  • create

  • fetch

  • getinfo

  • join

  • list

  • signconfigtx

  • update

peer channel

Operate a channel: create|fetch|join|list|update|signconfigtx|getinfo.

Usage:
  peer channel [command]

Available Commands:
  create       Create a channel
  fetch        Fetch a block
  getinfo      get blockchain information of a specified channel.
  join         Joins the peer to a channel.
  list         List of channels peer has joined.
  signconfigtx Signs a configtx update.
  update       Send a configtx update.

Flags:
      --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
      --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
      --clientauth                          Use mutual TLS when communicating with the orderer endpoint
      --connTimeout duration                Timeout for client to connect (default 3s)
  -h, --help                                help for channel
      --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
  -o, --orderer string                      Ordering service endpoint
      --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
      --tls                                 Use TLS when communicating with the orderer endpoint

Use "peer channel [command] --help" for more information about a command.

peer channel create

Create a channel and write the genesis block to a file.

Usage:
  peer channel create [flags]

Flags:
  -c, --channelID string     In case of a newChain command, the channel ID to create. It must be all lower case, less than 250 characters long and match the regular expression: [a-z][a-z0-9.-]*
  -f, --file string          Configuration transaction file generated by a tool such as configtxgen for submitting to orderer
  -h, --help                 help for create
      --outputBlock string   The path to write the genesis block for the channel. (default ./<channelID>.block)
  -t, --timeout duration     Channel creation timeout (default 10s)

Global Flags:
      --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
      --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
      --clientauth                          Use mutual TLS when communicating with the orderer endpoint
      --connTimeout duration                Timeout for client to connect (default 3s)
      --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
  -o, --orderer string                      Ordering service endpoint
      --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
      --tls                                 Use TLS when communicating with the orderer endpoint

peer channel fetch

Fetch a specified block, writing it to a file.

Usage:
  peer channel fetch <newest|oldest|config|(number)> [outputfile] [flags]

Flags:
  -c, --channelID string   In case of a newChain command, the channel ID to create. It must be all lower case, less than 250 characters long and match the regular expression: [a-z][a-z0-9.-]*
  -h, --help               help for fetch

Global Flags:
      --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
      --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
      --clientauth                          Use mutual TLS when communicating with the orderer endpoint
      --connTimeout duration                Timeout for client to connect (default 3s)
      --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
  -o, --orderer string                      Ordering service endpoint
      --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
      --tls                                 Use TLS when communicating with the orderer endpoint

peer channel getinfo

get blockchain information of a specified channel. Requires '-c'.

Usage:
  peer channel getinfo [flags]

Flags:
  -c, --channelID string   In case of a newChain command, the channel ID to create. It must be all lower case, less than 250 characters long and match the regular expression: [a-z][a-z0-9.-]*
  -h, --help               help for getinfo

Global Flags:
      --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
      --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
      --clientauth                          Use mutual TLS when communicating with the orderer endpoint
      --connTimeout duration                Timeout for client to connect (default 3s)
      --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
  -o, --orderer string                      Ordering service endpoint
      --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
      --tls                                 Use TLS when communicating with the orderer endpoint

peer channel join

Joins the peer to a channel.

Usage:
  peer channel join [flags]

Flags:
  -b, --blockpath string   Path to file containing genesis block
  -h, --help               help for join

Global Flags:
      --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
      --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
      --clientauth                          Use mutual TLS when communicating with the orderer endpoint
      --connTimeout duration                Timeout for client to connect (default 3s)
      --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
  -o, --orderer string                      Ordering service endpoint
      --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
      --tls                                 Use TLS when communicating with the orderer endpoint

peer channel list

List of channels peer has joined.

Usage:
  peer channel list [flags]

Flags:
  -h, --help   help for list

Global Flags:
      --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
      --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
      --clientauth                          Use mutual TLS when communicating with the orderer endpoint
      --connTimeout duration                Timeout for client to connect (default 3s)
      --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
  -o, --orderer string                      Ordering service endpoint
      --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
      --tls                                 Use TLS when communicating with the orderer endpoint

peer channel signconfigtx

Signs the supplied configtx update file in place on the filesystem. Requires '-f'.

Usage:
  peer channel signconfigtx [flags]

Flags:
  -f, --file string   Configuration transaction file generated by a tool such as configtxgen for submitting to orderer
  -h, --help          help for signconfigtx

Global Flags:
      --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
      --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
      --clientauth                          Use mutual TLS when communicating with the orderer endpoint
      --connTimeout duration                Timeout for client to connect (default 3s)
      --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
  -o, --orderer string                      Ordering service endpoint
      --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
      --tls                                 Use TLS when communicating with the orderer endpoint

peer channel update

Signs and sends the supplied configtx update file to the channel. Requires '-f', '-o', '-c'.

Usage:
  peer channel update [flags]

Flags:
  -c, --channelID string   In case of a newChain command, the channel ID to create. It must be all lower case, less than 250 characters long and match the regular expression: [a-z][a-z0-9.-]*
  -f, --file string        Configuration transaction file generated by a tool such as configtxgen for submitting to orderer
  -h, --help               help for update

Global Flags:
      --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
      --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
      --clientauth                          Use mutual TLS when communicating with the orderer endpoint
      --connTimeout duration                Timeout for client to connect (default 3s)
      --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
  -o, --orderer string                      Ordering service endpoint
      --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
      --tls                                 Use TLS when communicating with the orderer endpoint

Example Usage

peer channel create examples

Here’s an example that uses the --orderer global flag on the peer channel create command.

  • Create a sample channel mychannel defined by the configuration transaction contained in file ./createchannel.txn. Use the orderer at orderer.example.com:7050.

    peer channel create -c mychannel -f ./createchannel.txn --orderer orderer.example.com:7050
    
    2018-02-25 08:23:57.548 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
    2018-02-25 08:23:57.626 UTC [channelCmd] InitCmdFactory -> INFO 019 Endorser and orderer connections initialized
    2018-02-25 08:23:57.834 UTC [channelCmd] readBlock -> INFO 020 Received block: 0
    2018-02-25 08:23:57.835 UTC [main] main -> INFO 021 Exiting.....
    

    Block 0 is returned indicating that the channel has been successfully created.

Here’s an example of the peer channel create command option.

  • Create a new channel mychannel for the network, using the orderer at ip address orderer.example.com:7050. The configuration update transaction required to create this channel is defined the file ./createchannel.txn. Wait 30 seconds for the channel to be created.

      peer channel create -c mychannel --orderer orderer.example.com:7050 -f ./createchannel.txn -t 30s
    
      2018-02-23 06:31:58.568 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
      2018-02-23 06:31:58.669 UTC [channelCmd] InitCmdFactory -> INFO 019 Endorser and orderer connections initialized
      2018-02-23 06:31:58.877 UTC [channelCmd] readBlock -> INFO 020 Received block: 0
      2018-02-23 06:31:58.878 UTC [main] main -> INFO 021 Exiting.....
    
      ls -l
    
      -rw-r--r-- 1 root root 11982 Feb 25 12:24 mychannel.block
    

    You can see that channel mychannel has been successfully created, as indicated in the output where block 0 (zero) is added to the blockchain for this channel and returned to the peer, where it is stored in the local directory as mychannel.block.

    Block zero is often called the genesis block as it provides the starting configuration for the channel. All subsequent updates to the channel will be captured as configuration blocks on the channel’s blockchain, each of which supersedes the previous configuration.

peer channel fetch example

Here’s some examples of the peer channel fetch command.

  • Using the newest option to retrieve the most recent channel block, and store it in the file mychannel.block.

    peer channel fetch newest mychannel.block -c mychannel --orderer orderer.example.com:7050
    
    2018-02-25 13:10:16.137 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
    2018-02-25 13:10:16.144 UTC [channelCmd] readBlock -> INFO 00a Received block: 32
    2018-02-25 13:10:16.145 UTC [main] main -> INFO 00b Exiting.....
    
    ls -l
    
    -rw-r--r-- 1 root root 11982 Feb 25 13:10 mychannel.block
    

    You can see that the retrieved block is number 32, and that the information has been written to the file mychannel.block.

  • Using the (block number) option to retrieve a specific block – in this case, block number 16 – and store it in the default block file.

    peer channel fetch 16 -c mychannel --orderer orderer.example.com:7050
    
    2018-02-25 13:46:50.296 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
    2018-02-25 13:46:50.302 UTC [channelCmd] readBlock -> INFO 00a Received block: 16
    2018-02-25 13:46:50.302 UTC [main] main -> INFO 00b Exiting.....
    
    ls -l
    
    -rw-r--r-- 1 root root 11982 Feb 25 13:10 mychannel.block
    -rw-r--r-- 1 root root  4783 Feb 25 13:46 mychannel_16.block
    

    You can see that the retrieved block is number 16, and that the information has been written to the default file mychannel_16.block.

    For configuration blocks, the block file can be decoded using the configtxlator command. See the configtxlator command reference for an example of decoded output. User transaction blocks can also be decoded, but a user program must be written to do this.
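
    For instance, the current configuration block can be fetched with the config option and then decoded to JSON (a sketch; the output file names are arbitrary):

    peer channel fetch config mychannel_config.block -c mychannel --orderer orderer.example.com:7050
    configtxlator proto_decode --input mychannel_config.block --type common.Block --output mychannel_config.json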

peer channel getinfo example

Here’s an example of the peer channel getinfo command.

  • Get information about the local peer for channel mychannel.

    peer channel getinfo -c mychannel
    
    2018-02-25 15:15:44.135 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
    Blockchain info: {"height":5,"currentBlockHash":"JgK9lcaPUNmFb5Mp1qe1SVMsx3o/22Ct4+n5tejcXCw=","previousBlockHash":"f8lZXoAn3gF86zrFq7L1DzW2aKuabH9Ow6SIE5Y04a4="}
    2018-02-25 15:15:44.139 UTC [main] main -> INFO 006 Exiting.....
    

    You can see that the latest block for channel mychannel is block 5. You can also see the cryptographic hashes for the most recent blocks in the channel’s blockchain.

peer channel join example

Here’s an example of the peer channel join command.

  • Join a peer to the channel defined in the genesis block identified by the file ./mychannel.genesis.block. In this example, the channel block was previously retrieved by the peer channel fetch command.

    peer channel join -b ./mychannel.genesis.block
    
    2018-02-25 12:25:26.511 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
    2018-02-25 12:25:26.571 UTC [channelCmd] executeJoin -> INFO 006 Successfully submitted proposal to join channel
    2018-02-25 12:25:26.571 UTC [main] main -> INFO 007 Exiting.....
    

    You can see that the peer has successfully made a request to join the channel.

peer channel list example

Here’s an example of the peer channel list command.

  • List the channels to which a peer is joined.

    peer channel list
    
    2018-02-25 14:21:20.361 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
    Channels peers has joined:
    mychannel
    2018-02-25 14:21:20.372 UTC [main] main -> INFO 006 Exiting.....
    

    You can see that the peer is joined to channel mychannel.

peer channel signconfigtx example

Here’s an example of the peer channel signconfigtx command.

  • Sign the channel update transaction defined in the file ./updatechannel.tx. The example lists the configuration transaction file before and after the command.

    ls -l
    
    -rw-r--r--  1 anthonyodowd  staff   284 25 Feb 18:16 updatechannel.tx
    
    peer channel signconfigtx -f updatechannel.tx
    
    2018-02-25 18:16:44.456 GMT [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
    2018-02-25 18:16:44.459 GMT [main] main -> INFO 002 Exiting.....
    
    ls -l
    
    -rw-r--r--  1 anthonyodowd  staff  2180 25 Feb 18:16 updatechannel.tx
    

    You can see that the peer has successfully signed the configuration transaction by the increase in the size of the file updatechannel.tx from 284 bytes to 2180 bytes.
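
    The signed file is a common.Envelope protobuf message, so as a sketch (assuming configtxlator is available), the accumulated signatures can be inspected by decoding the file to JSON:

    configtxlator proto_decode --input updatechannel.tx --type common.Envelope --output updatechannel.json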

peer channel update example

Here’s an example of the peer channel update command.

  • Update the channel mychannel using the configuration transaction defined in the file ./updatechannel.txn. Use the orderer at orderer.example.com:7050 to send the configuration transaction to all peers in the channel so that they update their copy of the channel configuration.

    peer channel update -c mychannel -f ./updatechannel.txn -o orderer.example.com:7050
    
    2018-02-23 06:32:11.569 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
    2018-02-23 06:32:11.626 UTC [main] main -> INFO 010 Exiting.....
    

    At this point, the channel mychannel has been successfully updated.

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

peer version

The peer version command displays the version information of the peer. It displays version, Commit SHA, Go version, OS/architecture, and chaincode information. For example:

 peer:
   Version: 1.4.0
   Commit SHA: 0efc897
   Go version: go1.11.1
   OS/Arch: linux/amd64
   Chaincode:
    Base Image Version: 0.4.14
    Base Docker Namespace: hyperledger
    Base Docker Label: org.hyperledger.fabric
    Docker Namespace: hyperledger

Syntax

The peer version command takes no arguments.

peer version

Print current version of the fabric peer server.

Usage:
  peer version [flags]

Flags:
  -h, --help   help for version

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

peer node

The peer node command allows an administrator to start a peer node or check the status of a peer node.

Syntax

The peer node command has the following subcommands:

  • start

  • status

peer node start

Starts a node that interacts with the network.

Usage:
  peer node start [flags]

Flags:
  -h, --help                help for start
      --peer-chaincodedev   Whether peer in chaincode development mode

Example Usage

peer node start example

The following command:

peer node start --peer-chaincodedev

starts a peer node in chaincode development mode. Normally chaincode containers are started and maintained by the peer. However, in chaincode development mode, the chaincode is built and started by the user. This mode is useful for rapid, iterative development during the chaincode development phase. See more information on development mode in the chaincode tutorial.
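
A minimal sketch of the development-mode workflow follows; the chaincode binary name mycc, the chaincode ID mycc:0, and the address 127.0.0.1:7052 are illustrative assumptions, and the chaincode tutorial remains the authoritative reference.

# Terminal 1: start the peer in chaincode development mode
peer node start --peer-chaincodedev

# Terminal 2: build and launch the chaincode process manually,
# pointing it at the peer's chaincode listen address
go build -o mycc
CORE_CHAINCODE_ID_NAME=mycc:0 ./mycc -peer.address=127.0.0.1:7052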

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

token

The token CLI allows you to issue, list, transfer, and redeem tokens using the FabToken client.

Syntax

The token command has the following subcommands:

  • issue

  • list

  • transfer

  • redeem

  • saveConfig

token issue

usage: token issue [<flags>]

Import token command

Flags:
  --help                   Show context-sensitive help (also try --help-long and
                           --help-man).
  --configFile=CONFIGFILE  Specifies the config file to load the configuration
                           from
  --peerTLSCA=PEERTLSCA    Sets the TLS CA certificate file path that verifies
                           the TLS peer's certificate
  --tlsCert=TLSCERT        (Optional) Sets the client TLS certificate file path
                           that is used when the peer enforces client
                           authentication
  --tlsKey=TLSKEY          (Optional) Sets the client TLS key file path that is
                           used when the peer enforces client authentication
  --userKey=USERKEY        Sets the user's key file path that is used to sign
                           messages sent to the peer
  --userCert=USERCERT      Sets the user's certificate file path that is used to
                           authenticate the messages sent to the peer
  --MSP=MSP                Sets the MSP ID of the user, which represents the
                           CA(s) that issued its user certificate
  --channel=CHANNEL        Overrides channel configuration
  --mspPath=MSPPATH        Overrides msp path configuration
  --mspId=MSPID            Overrides msp id configuration
  --config=CONFIG          Sets the client configuration path
  --type=TYPE              Sets the token type to issue
  --quantity=QUANTITY      Sets the quantity of tokens to issue
  --recipient=RECIPIENT    Sets the recipient of tokens to issue

token list

usage: token list [<flags>]

List tokens command

Flags:
  --help                   Show context-sensitive help (also try --help-long and
                           --help-man).
  --configFile=CONFIGFILE  Specifies the config file to load the configuration
                           from
  --peerTLSCA=PEERTLSCA    Sets the TLS CA certificate file path that verifies
                           the TLS peer's certificate
  --tlsCert=TLSCERT        (Optional) Sets the client TLS certificate file path
                           that is used when the peer enforces client
                           authentication
  --tlsKey=TLSKEY          (Optional) Sets the client TLS key file path that is
                           used when the peer enforces client authentication
  --userKey=USERKEY        Sets the user's key file path that is used to sign
                           messages sent to the peer
  --userCert=USERCERT      Sets the user's certificate file path that is used to
                           authenticate the messages sent to the peer
  --MSP=MSP                Sets the MSP ID of the user, which represents the
                           CA(s) that issued its user certificate
  --channel=CHANNEL        Overrides channel configuration
  --mspPath=MSPPATH        Overrides msp path configuration
  --mspId=MSPID            Overrides msp id configuration
  --config=CONFIG          Sets the client configuration path

token transfer

usage: token transfer [<flags>]

Transfer tokens command

Flags:
  --help                   Show context-sensitive help (also try --help-long and
                           --help-man).
  --configFile=CONFIGFILE  Specifies the config file to load the configuration
                           from
  --peerTLSCA=PEERTLSCA    Sets the TLS CA certificate file path that verifies
                           the TLS peer's certificate
  --tlsCert=TLSCERT        (Optional) Sets the client TLS certificate file path
                           that is used when the peer enforces client
                           authentication
  --tlsKey=TLSKEY          (Optional) Sets the client TLS key file path that is
                           used when the peer enforces client authentication
  --userKey=USERKEY        Sets the user's key file path that is used to sign
                           messages sent to the peer
  --userCert=USERCERT      Sets the user's certificate file path that is used to
                           authenticate the messages sent to the peer
  --MSP=MSP                Sets the MSP ID of the user, which represents the
                           CA(s) that issued its user certificate
  --channel=CHANNEL        Overrides channel configuration
  --mspPath=MSPPATH        Overrides msp path configuration
  --mspId=MSPID            Overrides msp id configuration
  --config=CONFIG          Sets the client configuration path
  --tokenIDs=TOKENIDS      Sets the token IDs to transfer
  --shares=SHARES          Sets the shares of the recipients

token redeem

usage: token redeem [<flags>]

Redeem tokens command

Flags:
  --help                   Show context-sensitive help (also try --help-long and
                           --help-man).
  --configFile=CONFIGFILE  Specifies the config file to load the configuration
                           from
  --peerTLSCA=PEERTLSCA    Sets the TLS CA certificate file path that verifies
                           the TLS peer's certificate
  --tlsCert=TLSCERT        (Optional) Sets the client TLS certificate file path
                           that is used when the peer enforces client
                           authentication
  --tlsKey=TLSKEY          (Optional) Sets the client TLS key file path that is
                           used when the peer enforces client authentication
  --userKey=USERKEY        Sets the user's key file path that is used to sign
                           messages sent to the peer
  --userCert=USERCERT      Sets the user's certificate file path that is used to
                           authenticate the messages sent to the peer
  --MSP=MSP                Sets the MSP ID of the user, which represents the
                           CA(s) that issued its user certificate
  --channel=CHANNEL        Overrides channel configuration
  --mspPath=MSPPATH        Overrides msp path configuration
  --mspId=MSPID            Overrides msp id configuration
  --config=CONFIG          Sets the client configuration path
  --tokenIDs=TOKENIDS      Sets the token IDs to redeem
  --quantity=QUANTITY      Sets the quantity of tokens to redeem

token saveConfig

usage: token saveConfig

Save the config passed by flags into the file specified by --configFile

Flags:
  --help                   Show context-sensitive help (also try --help-long and
                           --help-man).
  --configFile=CONFIGFILE  Specifies the config file to load the configuration
                           from
  --peerTLSCA=PEERTLSCA    Sets the TLS CA certificate file path that verifies
                           the TLS peer's certificate
  --tlsCert=TLSCERT        (Optional) Sets the client TLS certificate file path
                           that is used when the peer enforces client
                           authentication
  --tlsKey=TLSKEY          (Optional) Sets the client TLS key file path that is
                           used when the peer enforces client authentication
  --userKey=USERKEY        Sets the user's key file path that is used to sign
                           messages sent to the peer
  --userCert=USERCERT      Sets the user's certificate file path that is used to
                           authenticate the messages sent to the peer
  --MSP=MSP                Sets the MSP ID of the user, which represents the
                           CA(s) that issued its user certificate

Example Usage

token issue example

You can use the following command to issue 100 Fabcoins owned by User1@org1.example.com. The tokens are issued by Admin@org1.example.com.

  • Use the --config flag to provide the path to a file that contains the connection information for your fabric network, including your Prover peer. You can find a sample configuration file below. Use the --mspPath flag to provide the path to the MSP of the token issuer.

    export CONFIG_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/configorg1.json
    export MSP_PATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp

    token issue --config $CONFIG_FILE --mspPath $MSP_PATH --channel mychannel --type Fabcoins --quantity 100 --recipient Org1MSP:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp

    2019-03-28 18:19:29.438 UTC [token.client] BroadcastReceive -> INFO 001 calling OrdererClient.broadcastReceive
    Orderer Status [SUCCESS]
    Committed [true]
    
token list example

You can use the token list command to discover the tokenIDs of the tokens that you own.

  • Use the --mspPath flag to provide the path to the MSP of the token owner.

    export CONFIG_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/configorg1.json
    export MSP_PATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp

    token list --config $CONFIG_FILE --mspPath $MSP_PATH --channel mychannel
    

    If successful, the command will return the tokenID, which is the ID of the transaction that created the token, as well as the type and quantity of assets represented by the token.

    {"tx_id":"23604056d205c656fa757f568a6a4f0105567ebc208303065aa7e5a11849c0c8"}
    [Fabcoins,100]
    
token transfer example

You can use the token transfer command to transfer a token that you own to another member of the channel.

  • Use the --tokenIDs flag to select the tokens that you want to transfer. Use the --shares flag to provide a path to a JSON file that describes how the input token will be distributed to the recipients of the transaction. You can find a sample shares file below.

    export CONFIG_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/configorg1.json
    export MSP_PATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp
    export SHARES=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/shares.json

    token transfer --config $CONFIG_FILE --mspPath $MSP_PATH --channel mychannel --tokenIDs '[{"tx_id":"23604056d205c656fa757f568a6a4f0105567ebc208303065aa7e5a11849c0c8"}]' --shares $SHARES

    2019-03-28 18:27:43.468 UTC [token.client] BroadcastReceive -> INFO 001 calling OrdererClient.broadcastReceive
    Orderer Status [SUCCESS]
    Committed [true]
    
token redeem example

Redeemed tokens can no longer be transferred to other channel members. Tokens can only be redeemed by their owner. You can use the following command to redeem 50 Fabcoins:

```
export CONFIG_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/configorg1.json
export MSP_PATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp

token redeem --config $CONFIG_FILE --mspPath $MSP_PATH --channel mychannel --tokenIDs '[{"tx_id":"30e6337fdc0d07a5c46f51d6b58c4958992e21fed0aed5c822b30f9f28366698"}]' --quantity 50

2019-03-28 18:29:29.656 UTC [token.client] BroadcastReceive -> INFO 001 calling OrdererClient.broadcastReceive
Orderer Status [SUCCESS]
Committed [true]
```

Configuration file example

The configuration file provides the token CLI with the endpoint information of your network. The file contains the prover peer that your organization will use to assemble token transactions.

Sample configuration file

```
{
  "ChannelID":"",
  "MSPInfo":{
    "MSPConfigPath":"",
    "MSPID":"Org1MSP",
    "MSPType":"bccsp"
  },
  "Orderer":{
    "Address":"orderer.example.com:7050",
    "ConnectionTimeout":0,
    "TLSEnabled":true,
    "TLSRootCertFile":"/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem",
    "ServerNameOverride":""
  },
  "CommitterPeer":{
    "Address":"peer0.org1.example.com:7051",
    "ConnectionTimeout":0,
    "TLSEnabled":true,
    "TLSRootCertFile":"/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt",
    "ServerNameOverride":""
  },
  "ProverPeer":{
    "Address":"peer0.org1.example.com:7051",
    "ConnectionTimeout":0,
    "TLSEnabled":true,
    "TLSRootCertFile":"/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt",
    "ServerNameOverride":""
  }
}
```
Shares file example

A transfer transaction uses a shares file to distribute the assets represented by the input tokens among the recipients of the transfer. Any quantity of the input tokens that is not transferred to a recipient is automatically made available to the original owner in the form of a new token.

Sample shares file

```
[
  {
    "recipient":"Org2MSP:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/User1@org2.example.com/msp",
    "quantity":"50"
  }
]
```
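
For instance (an illustrative sketch, not taken from a running network; the recipient MSP paths are placeholders), if the input token represents 100 Fabcoins and the shares file distributes 50 to a member of Org2 and 30 to another member of Org1, the remaining 20 Fabcoins are automatically returned to the original owner as a new token:

```
[
  {
    "recipient":"Org2MSP:/path/to/User1@org2.example.com/msp",
    "quantity":"50"
  },
  {
    "recipient":"Org1MSP:/path/to/User2@org1.example.com/msp",
    "quantity":"30"
  }
]
```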

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

configtxgen

The configtxgen command allows users to create and inspect channel config related artifacts. The content of the generated artifacts is dictated by the contents of configtx.yaml.

Syntax

The configtxgen tool has no sub-commands, but supports flags which can be set to accomplish a number of tasks.

configtxgen

Usage of configtxgen:
  -asOrg string
    	Performs the config generation as a particular organization (by name), only including values in the write set that org (likely) has privilege to set
  -channelCreateTxBaseProfile string
    	Specifies a profile to consider as the orderer system channel current state to allow modification of non-application parameters during channel create tx generation. Only valid in conjunction with 'outputCreateChannelTx'.
  -channelID string
    	The channel ID to use in the configtx
  -configPath string
    	The path containing the configuration to use (if set)
  -inspectBlock string
    	Prints the configuration contained in the block at the specified path
  -inspectChannelCreateTx string
    	Prints the configuration contained in the transaction at the specified path
  -outputAnchorPeersUpdate string
    	Creates a config update to update an anchor peer (works only with the default channel creation, and only for the first update)
  -outputBlock string
    	The path to write the genesis block to (if set)
  -outputCreateChannelTx string
    	The path to write a channel creation configtx to (if set)
  -printOrg string
    	Prints the definition of an organization as JSON. (useful for adding an org to a channel manually)
  -profile string
    	The profile from configtx.yaml to use for generation. (default "SampleInsecureSolo")
  -version
    	Show version information

Usage

Output a genesis block

Write a genesis block to genesis_block.pb for channel orderer-system-channel for profile SampleSingleMSPSoloV1_1.

configtxgen -outputBlock genesis_block.pb -profile SampleSingleMSPSoloV1_1 -channelID orderer-system-channel
Output a channel creation tx

Write a channel creation transaction to create_chan_tx.pb for profile SampleSingleMSPChannelV1_1.

configtxgen -outputCreateChannelTx create_chan_tx.pb -profile SampleSingleMSPChannelV1_1 -channelID application-channel-1
Inspect a genesis block

Print the contents of a genesis block named genesis_block.pb to the screen as JSON.

configtxgen -inspectBlock genesis_block.pb
Inspect a channel creation tx

Print the contents of a channel creation tx named create_chan_tx.pb to the screen as JSON.

configtxgen -inspectChannelCreateTx create_chan_tx.pb
Output anchor peer tx

Output a configuration update transaction to anchor_peer_tx.pb which sets the anchor peers for organization Org1 as defined in profile SampleSingleMSPChannelV1_1 based on configtx.yaml.

configtxgen -outputAnchorPeersUpdate anchor_peer_tx.pb -profile SampleSingleMSPChannelV1_1 -asOrg Org1

Configuration

The configtxgen tool’s output is largely controlled by the content of configtx.yaml. This file is searched for at FABRIC_CFG_PATH and must be present for configtxgen to operate.

This configuration file may be edited, or individual properties may be overridden by setting environment variables, such as CONFIGTX_ORDERER_ORDERERTYPE=kafka.
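
For example, the following sketch (reusing the override shown above together with the tool’s default profile) generates a genesis block with the orderer type switched to kafka, without editing configtx.yaml:

CONFIGTX_ORDERER_ORDERERTYPE=kafka configtxgen -profile SampleInsecureSolo -outputBlock genesis_block.pb -channelID orderer-system-channel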

For many configtxgen operations, a profile name must be supplied. Profiles are a way to express multiple similar configurations in a single file. For instance, one profile might define a channel with 3 orgs, and another might define one with 4 orgs. To accomplish this without the length of the file becoming burdensome, configtx.yaml depends on the standard YAML features of anchors and references. Base parts of the configuration are tagged with an anchor like &OrdererDefaults and then merged into a profile with a reference like <<: *OrdererDefaults.

Note that when configtxgen is operating under a profile, environment variable overrides do not need to include the profile prefix and may be referenced relative to the root element of the profile. For instance, do not specify CONFIGTX_PROFILE_SAMPLEINSECURESOLO_ORDERER_ORDERERTYPE; instead, simply omit the profile specifics and use the CONFIGTX prefix followed by the elements relative to the profile name, such as CONFIGTX_ORDERER_ORDERERTYPE.
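
As a minimal illustration of the anchor-and-merge pattern (the profile name and values here are invented for the sketch), a configtx.yaml fragment might look like:

Orderer: &OrdererDefaults
    OrdererType: solo
    BatchTimeout: 2s

Profiles:
    SampleSoloChannel:
        Orderer:
            <<: *OrdererDefaults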

Refer to the sample configtx.yaml shipped with Fabric for all possible configuration options. You may find this file in the config directory of the release artifacts tar, or you may find it under the sampleconfig folder if you are building from source.

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

configtxlator

The configtxlator command allows users to translate between protobuf and JSON versions of fabric data structures and create config updates. The command may either start a REST server to expose its functions over HTTP or may be utilized directly as a command line tool.

Syntax

The configtxlator tool has five sub-commands, as follows:

  • start

  • proto_encode

  • proto_decode

  • compute_update

  • version

configtxlator start

usage: configtxlator start [<flags>]

Start the configtxlator REST server

Flags:
  --help                Show context-sensitive help (also try --help-long and
                        --help-man).
  --hostname="0.0.0.0"  The hostname or IP on which the REST server will listen
  --port=7059           The port on which the REST server will listen
  --CORS=CORS ...       Allowable CORS domains, e.g. '*' or 'www.example.com'
                        (may be repeated).

configtxlator proto_encode

usage: configtxlator proto_encode --type=TYPE [<flags>]

Converts a JSON document to protobuf.

Flags:
  --help                Show context-sensitive help (also try --help-long and
                        --help-man).
  --type=TYPE           The type of protobuf structure to encode to. For
                        example, 'common.Config'.
  --input=/dev/stdin    A file containing the JSON document.
  --output=/dev/stdout  A file to write the output to.

configtxlator proto_decode

usage: configtxlator proto_decode --type=TYPE [<flags>]

Converts a proto message to JSON.

Flags:
  --help                Show context-sensitive help (also try --help-long and
                        --help-man).
  --type=TYPE           The type of protobuf structure to decode from. For
                        example, 'common.Config'.
  --input=/dev/stdin    A file containing the proto message.
  --output=/dev/stdout  A file to write the JSON document to.

configtxlator compute_update

usage: configtxlator compute_update --channel_id=CHANNEL_ID [<flags>]

Takes two marshaled common.Config messages and computes the config update which
transitions between the two.

Flags:
  --help                   Show context-sensitive help (also try --help-long and
                           --help-man).
  --original=ORIGINAL      The original config message.
  --updated=UPDATED        The updated config message.
  --channel_id=CHANNEL_ID  The name of the channel for this update.
  --output=/dev/stdout     A file to write the JSON document to.

configtxlator version

usage: configtxlator version

Show version information

Flags:
  --help  Show context-sensitive help (also try --help-long and --help-man).

Examples

Decoding

Decode a block named fabric_block.pb to JSON and print to stdout.

configtxlator proto_decode --input fabric_block.pb --type common.Block

Alternatively, after starting the REST server, the following curl command performs the same operation through the REST API.

curl -X POST --data-binary @fabric_block.pb "${CONFIGTXLATOR_URL}/protolator/decode/common.Block"
Encoding

Convert a JSON document for a policy from stdin to a file named policy.pb.

configtxlator proto_encode --type common.Policy --output policy.pb

Alternatively, after starting the REST server, the following curl command performs the same operation through the REST API.

curl -X POST --data-binary @/dev/stdin "${CONFIGTXLATOR_URL}/protolator/encode/common.Policy" > policy.pb
Pipelines

Compute a config update from original_config.pb and modified_config.pb and decode it to JSON to stdout.

configtxlator compute_update --channel_id testchan --original original_config.pb --updated modified_config.pb | configtxlator proto_decode --type common.ConfigUpdate

Alternatively, after starting the REST server, the following curl commands perform the same operations through the REST API.

curl -X POST -F channel=testchan -F "original=@original_config.pb" -F "updated=@modified_config.pb" "${CONFIGTXLATOR_URL}/configtxlator/compute/update-from-configs" | curl -X POST --data-binary @/dev/stdin "${CONFIGTXLATOR_URL}/protolator/encode/common.ConfigUpdate"
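
Putting the pieces together, a typical channel configuration update round-trips through these tools. The following sketch makes several assumptions (the file names, the jq path into the decoded block, and the manual edit step are illustrative):

# Fetch and decode the channel's current config block
peer channel fetch config config_block.pb -c testchan --orderer orderer.example.com:7050
configtxlator proto_decode --input config_block.pb --type common.Block --output config_block.json

# Extract the config section; edit a copy to produce the desired state
jq .data.data[0].payload.data.config config_block.json > original_config.json
cp original_config.json modified_config.json   # then edit modified_config.json

# Re-encode both versions and compute the update between them
configtxlator proto_encode --input original_config.json --type common.Config --output original_config.pb
configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb
configtxlator compute_update --channel_id testchan --original original_config.pb --updated modified_config.pb --output config_update.pb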

Additional Notes

The tool name is a portmanteau of configtx and translator and is intended to convey that the tool simply converts between different equivalent data representations. It does not generate configuration. It does not submit or retrieve configuration. It does not modify configuration itself, it simply provides some bijective operations between different views of the configtx format.

There is no configuration file for configtxlator, nor are there any authentication or authorization facilities included for the REST server. Because configtxlator does not have any access to data, key material, or other information which might be considered sensitive, there is no risk to the owner of the server in exposing it to other clients. However, because the data sent by a user to the REST server might be confidential, the user should either trust the administrator of the server, run a local instance, or operate via the CLI.

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

cryptogen

cryptogen is a utility for generating Hyperledger Fabric key material. It is provided as a means of preconfiguring a network for testing purposes. It would normally not be used in the operation of a production network.

Syntax

The cryptogen command has five subcommands, as follows:

  • help

  • generate

  • showtemplate

  • extend

  • version

cryptogen help

usage: cryptogen [<flags>] <command> [<args> ...]

Utility for generating Hyperledger Fabric key material

Flags:
  --help  Show context-sensitive help (also try --help-long and --help-man).

Commands:
  help [<command>...]
    Show help.

  generate [<flags>]
    Generate key material

  showtemplate
    Show the default configuration template

  version
    Show version information

  extend [<flags>]
    Extend existing network

cryptogen generate

usage: cryptogen generate [<flags>]

Generate key material

Flags:
  --help                    Show context-sensitive help (also try --help-long
                            and --help-man).
  --output="crypto-config"  The output directory in which to place artifacts
  --config=CONFIG           The configuration template to use

cryptogen showtemplate

usage: cryptogen showtemplate

Show the default configuration template

Flags:
  --help  Show context-sensitive help (also try --help-long and --help-man).

cryptogen extend

usage: cryptogen extend [<flags>]

Extend existing network

Flags:
  --help                   Show context-sensitive help (also try --help-long and
                           --help-man).
  --input="crypto-config"  The input directory in which existing network place
  --config=CONFIG          The configuration template to use

cryptogen version

usage: cryptogen version

Show version information

Flags:
  --help  Show context-sensitive help (also try --help-long and --help-man).

Usage

Here’s an example using the different available flags on the cryptogen extend command.

    cryptogen extend --input="crypto-config" --config=config.yaml


Where config.yaml adds a new peer organization called org3.example.com.
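
A config.yaml for such an extension might look like the following sketch (the organization name and counts are assumptions; run cryptogen showtemplate to see the full schema):

PeerOrgs:
  - Name: Org3
    Domain: org3.example.com
    Template:
      Count: 2
    Users:
      Count: 1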

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

Service Discovery CLI

The discovery service has its own Command Line Interface (CLI) which uses a YAML configuration file to persist properties such as certificate and private key paths, as well as MSP ID.

The discover command has the following subcommands:

  • saveConfig

  • peers

  • config

  • endorsers

And the usage of the command is shown below:

usage: discover [<flags>] <command> [<args> ...]

Command line client for fabric discovery service

Flags:
  --help                   Show context-sensitive help (also try --help-long and --help-man).
  --configFile=CONFIGFILE  Specifies the config file to load the configuration from
  --peerTLSCA=PEERTLSCA    Sets the TLS CA certificate file path that verifies the TLS peer's certificate
  --tlsCert=TLSCERT        (Optional) Sets the client TLS certificate file path that is used when the peer enforces client authentication
  --tlsKey=TLSKEY          (Optional) Sets the client TLS key file path that is used when the peer enforces client authentication
  --userKey=USERKEY        Sets the user's key file path that is used to sign messages sent to the peer
  --userCert=USERCERT      Sets the user's certificate file path that is used to authenticate the messages sent to the peer
  --MSP=MSP                Sets the MSP ID of the user, which represents the CA(s) that issued its user certificate

Commands:
  help [<command>...]
    Show help.

  peers [<flags>]
    Discover peers

  config [<flags>]
    Discover channel config

  endorsers [<flags>]
    Discover chaincode endorsers

  saveConfig
    Save the config passed by flags into the file specified by --configFile

Configuring external endpoints

Currently, for peers to be visible to service discovery, they need to have EXTERNAL_ENDPOINT configured. Otherwise, Fabric assumes the peer should not be disclosed.

To define these endpoints, you need to specify them in the core.yaml of the peer, replacing the sample endpoint below with your peer’s own endpoint.

CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org1.example.com:8051
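
Equivalently, the same setting can be expressed in the YAML itself; a sketch of the relevant core.yaml fragment:

peer:
    gossip:
        # Endpoint advertised to peers in other organizations;
        # if left empty, the peer is not disclosed to service discovery
        externalEndpoint: peer1.org1.example.com:8051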

Persisting configuration

To persist the configuration, a config file name should be supplied via the flag --configFile, along with the command saveConfig:

discover --configFile conf.yaml --peerTLSCA tls/ca.crt --userKey msp/keystore/ea4f6a38ac7057b6fa9502c2f5f39f182e320f71f667749100fe7dd94c23ce43_sk --userCert msp/signcerts/User1\@org1.example.com-cert.pem  --MSP Org1MSP saveConfig

Executing the above command creates the configuration file:

$ cat conf.yaml
version: 0
tlsconfig:
  certpath: ""
  keypath: ""
  peercacertpath: /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/User1@org1.example.com/tls/ca.crt
  timeout: 0s
signerconfig:
  mspid: Org1MSP
  identitypath: /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/signcerts/User1@org1.example.com-cert.pem
  keypath: /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/keystore/ea4f6a38ac7057b6fa9502c2f5f39f182e320f71f667749100fe7dd94c23ce43_sk

When the peer runs with TLS enabled, the discovery service on the peer requires the client to connect to it with mutual TLS, which means it needs to supply a TLS certificate. The peer is configured by default to request (but not to verify) client TLS certificates, so supplying a TLS certificate isn’t needed (unless the peer’s tls.clientAuthRequired is set to true).

When the discovery CLI’s config file has a certificate path for peercacertpath, but certpath and keypath are not configured (as above), the discovery CLI generates a self-signed TLS certificate and uses it to connect to the peer.

When peercacertpath isn’t configured, the discovery CLI connects without TLS. This is highly discouraged, as the information is then sent over the network in plaintext, unencrypted.
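
If the peer does set tls.clientAuthRequired to true, the client TLS certificate and key can be supplied through the same configuration file; a sketch with illustrative paths:

tlsconfig:
  certpath: /path/to/client/tls.crt
  keypath: /path/to/client/tls.key
  peercacertpath: /path/to/peer/tls/ca.crt
  timeout: 0s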

Querying the discovery service

The discovery CLI acts as a discovery client, and it needs to be executed against a peer. This is done by specifying the --server flag. In addition, the queries are channel-scoped, so the --channel flag must be used.

The only query that doesn’t require a channel is the local membership peer query, which by default can only be used by administrators of the peer being queried.

The discover CLI supports all server-side queries:

  • Peer membership query

  • Configuration query

  • Endorsers query

Let’s go over them and see how they should be invoked and parsed:

Peer membership query:

$ discover --configFile conf.yaml peers --channel mychannel  --server peer0.org1.example.com:7051
[
	{
		"MSPID": "Org2MSP",
		"LedgerHeight": 5,
		"Endpoint": "peer0.org2.example.com:9051",
		"Identity": "-----BEGIN CERTIFICATE-----\nMIICKTCCAc+gAwIBAgIRANK4WBck5gKuzTxVQIwhYMUwCgYIKoZIzj0EAwIwczEL\nMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDVNhbiBG\ncmFuY2lzY28xGTAXBgNVBAoTEG9yZzIuZXhhbXBsZS5jb20xHDAaBgNVBAMTE2Nh\nLm9yZzIuZXhhbXBsZS5jb20wHhcNMTgwNjE3MTM0NTIxWhcNMjgwNjE0MTM0NTIx\nWjBqMQswCQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMN\nU2FuIEZyYW5jaXNjbzENMAsGA1UECxMEcGVlcjEfMB0GA1UEAxMWcGVlcjAub3Jn\nMi5leGFtcGxlLmNvbTBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABJa0gkMRqJCi\nzmx+L9xy/ecJNvdAV2zmSx5Sf2qospVAH1MYCHyudDEvkiRuBPgmCdOdwJsE0g+h\nz0nZdKq6/X+jTTBLMA4GA1UdDwEB/wQEAwIHgDAMBgNVHRMBAf8EAjAAMCsGA1Ud\nIwQkMCKAIFZMuZfUtY6n2iyxaVr3rl+x5lU0CdG9x7KAeYydQGTMMAoGCCqGSM49\nBAMCA0gAMEUCIQC0M9/LJ7j3I9NEPQ/B1BpnJP+UNPnGO2peVrM/mJ1nVgIgS1ZA\nA1tsxuDyllaQuHx2P+P9NDFdjXx5T08lZhxuWYM=\n-----END CERTIFICATE-----\n",
		"Chaincodes": [
			"mycc"
		]
	},
	{
		"MSPID": "Org2MSP",
		"LedgerHeight": 5,
		"Endpoint": "peer1.org2.example.com:10051",
		"Identity": "-----BEGIN CERTIFICATE-----\nMIICKDCCAc+gAwIBAgIRALnNJzplCrYy4Y8CjZtqL7AwCgYIKoZIzj0EAwIwczEL\nMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDVNhbiBG\ncmFuY2lzY28xGTAXBgNVBAoTEG9yZzIuZXhhbXBsZS5jb20xHDAaBgNVBAMTE2Nh\nLm9yZzIuZXhhbXBsZS5jb20wHhcNMTgwNjE3MTM0NTIxWhcNMjgwNjE0MTM0NTIx\nWjBqMQswCQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMN\nU2FuIEZyYW5jaXNjbzENMAsGA1UECxMEcGVlcjEfMB0GA1UEAxMWcGVlcjEub3Jn\nMi5leGFtcGxlLmNvbTBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABNDopAkHlDdu\nq10HEkdxvdpkbs7EJyqv1clvCt/YMn1hS6sM+bFDgkJKalG7s9Hg3URF0aGpy51R\nU+4F9Muo+XajTTBLMA4GA1UdDwEB/wQEAwIHgDAMBgNVHRMBAf8EAjAAMCsGA1Ud\nIwQkMCKAIFZMuZfUtY6n2iyxaVr3rl+x5lU0CdG9x7KAeYydQGTMMAoGCCqGSM49\nBAMCA0cAMEQCIAR4fBmIBKW2jp0HbbabVepNtl1c7+6++riIrEBnoyIVAiBBvWmI\nyG02c5hu4wPAuVQMB7AU6tGSeYaWSAAo/ExunQ==\n-----END CERTIFICATE-----\n",
		"Chaincodes": [
			"mycc"
		]
	},
	{
		"MSPID": "Org1MSP",
		"LedgerHeight": 5,
		"Endpoint": "peer0.org1.example.com:7051",
		"Identity": "-----BEGIN CERTIFICATE-----\nMIICKDCCAc6gAwIBAgIQP18LeXtEXGoN8pTqzXTHZTAKBggqhkjOPQQDAjBzMQsw\nCQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMNU2FuIEZy\nYW5jaXNjbzEZMBcGA1UEChMQb3JnMS5leGFtcGxlLmNvbTEcMBoGA1UEAxMTY2Eu\nb3JnMS5leGFtcGxlLmNvbTAeFw0xODA2MTcxMzQ1MjFaFw0yODA2MTQxMzQ1MjFa\nMGoxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1T\nYW4gRnJhbmNpc2NvMQ0wCwYDVQQLEwRwZWVyMR8wHQYDVQQDExZwZWVyMC5vcmcx\nLmV4YW1wbGUuY29tMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEKeC/1Rg/ynSk\nNNItaMlaCDZOaQvxJEl6o3fqx1PVFlfXE4NarY3OO1N3YZI41hWWoXksSwJu/35S\nM7wMEzw+3KNNMEswDgYDVR0PAQH/BAQDAgeAMAwGA1UdEwEB/wQCMAAwKwYDVR0j\nBCQwIoAgcecTOxTes6rfgyxHH6KIW7hsRAw2bhP9ikCHkvtv/RcwCgYIKoZIzj0E\nAwIDSAAwRQIhAKiJEv79XBmr8gGY6kHrGL0L3sq95E7IsCYzYdAQHj+DAiBPcBTg\nRuA0//Kq+3aHJ2T0KpKHqD3FfhZZolKDkcrkwQ==\n-----END CERTIFICATE-----\n",
		"Chaincodes": [
			"mycc"
		]
	},
	{
		"MSPID": "Org1MSP",
		"LedgerHeight": 5,
		"Endpoint": "peer1.org1.example.com:8051",
		"Identity": "-----BEGIN CERTIFICATE-----\nMIICJzCCAc6gAwIBAgIQO7zMEHlMfRhnP6Xt65jwtDAKBggqhkjOPQQDAjBzMQsw\nCQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMNU2FuIEZy\nYW5jaXNjbzEZMBcGA1UEChMQb3JnMS5leGFtcGxlLmNvbTEcMBoGA1UEAxMTY2Eu\nb3JnMS5leGFtcGxlLmNvbTAeFw0xODA2MTcxMzQ1MjFaFw0yODA2MTQxMzQ1MjFa\nMGoxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1T\nYW4gRnJhbmNpc2NvMQ0wCwYDVQQLEwRwZWVyMR8wHQYDVQQDExZwZWVyMS5vcmcx\nLmV4YW1wbGUuY29tMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEoII9k8db/Q2g\nRHw5rk3SYw+OMFw9jNbsJJyC5ttJRvc12Dn7lQ8ZR9hW1vLQ3NtqO/couccDJcHg\nt47iHBNadaNNMEswDgYDVR0PAQH/BAQDAgeAMAwGA1UdEwEB/wQCMAAwKwYDVR0j\nBCQwIoAgcecTOxTes6rfgyxHH6KIW7hsRAw2bhP9ikCHkvtv/RcwCgYIKoZIzj0E\nAwIDRwAwRAIgGHGtRVxcFVeMQr9yRlebs23OXEECNo6hNqd/4ChLwwoCIBFKFd6t\nlL5BVzVMGQyXWcZGrjFgl4+fDrwjmMe+jAfa\n-----END CERTIFICATE-----\n",
		"Chaincodes": null
	}
]

As seen, this command outputs a JSON document containing membership information about all of the peers in the channel, as known by the peer that was queried.

The Identity that is returned is the enrollment certificate of the peer, and it can be parsed with a combination of jq and openssl:

$ discover --configFile conf.yaml peers --channel mychannel  --server peer0.org1.example.com:7051  | jq .[0].Identity | sed "s/\\\n/\n/g" | sed "s/\"//g"  | openssl x509 -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            55:e9:3f:97:94:d5:74:db:e2:d6:99:3c:01:24:be:bf
    Signature Algorithm: ecdsa-with-SHA256
        Issuer: C=US, ST=California, L=San Francisco, O=org2.example.com, CN=ca.org2.example.com
        Validity
            Not Before: Jun  9 11:58:28 2018 GMT
            Not After : Jun  6 11:58:28 2028 GMT
        Subject: C=US, ST=California, L=San Francisco, OU=peer, CN=peer0.org2.example.com
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (256 bit)
                pub:
                    04:f5:69:7a:11:65:d9:85:96:65:b7:b7:1b:08:77:
                    43:de:cb:ad:3a:79:ec:cc:2a:bc:d7:93:68:ae:92:
                    1c:4b:d8:32:47:d6:3d:72:32:f1:f1:fb:26:e4:69:
                    c2:eb:c9:45:69:99:78:d7:68:a9:77:09:88:c6:53:
                    01:2a:c1:f8:c0
                ASN1 OID: prime256v1
                NIST CURVE: P-256
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Authority Key Identifier:
                keyid:8E:58:82:C9:0A:11:10:A9:0B:93:03:EE:A0:54:42:F4:A3:EF:11:4C:82:B6:F9:CE:10:A2:1E:24:AB:13:82:A0

    Signature Algorithm: ecdsa-with-SHA256
         30:44:02:20:29:3f:55:2b:9f:7b:99:b2:cb:06:ca:15:3f:93:
         a1:3d:65:5c:7b:79:a1:7a:d1:94:50:f0:cd:db:ea:61:81:7a:
         02:20:3b:40:5b:60:51:3c:f8:0f:9b:fc:ae:fc:21:fd:c8:36:
         a3:18:39:58:20:72:3d:1a:43:74:30:f3:56:01:aa:26

Configuration query:

The configuration query returns a mapping from MSP IDs to orderer endpoints, as well as the FabricMSPConfig, which can be used by the SDK to verify all peer and orderer nodes:

$ discover --configFile conf.yaml config --channel mychannel  --server peer0.org1.example.com:7051
{
    "msps": {
        "OrdererOrg": {
            "name": "OrdererMSP",
            "root_certs": [
                "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNMekNDQWRhZ0F3SUJBZ0lSQU1pWkxUb3RmMHR6VTRzNUdIdkQ0UjR3Q2dZSUtvWkl6ajBFQXdJd2FURUwKTUFrR0ExVUVCaE1DVlZNeEV6QVJCZ05WQkFnVENrTmhiR2xtYjNKdWFXRXhGakFVQmdOVkJBY1REVk5oYmlCRwpjbUZ1WTJselkyOHhGREFTQmdOVkJBb1RDMlY0WVcxd2JHVXVZMjl0TVJjd0ZRWURWUVFERXc1allTNWxlR0Z0CmNHeGxMbU52YlRBZUZ3MHhPREEyTURreE1UVTRNamhhRncweU9EQTJNRFl4TVRVNE1qaGFNR2t4Q3pBSkJnTlYKQkFZVEFsVlRNUk13RVFZRFZRUUlFd3BEWVd4cFptOXlibWxoTVJZd0ZBWURWUVFIRXcxVFlXNGdSbkpoYm1OcApjMk52TVJRd0VnWURWUVFLRXd0bGVHRnRjR3hsTG1OdmJURVhNQlVHQTFVRUF4TU9ZMkV1WlhoaGJYQnNaUzVqCmIyMHdXVEFUQmdjcWhrak9QUUlCQmdncWhrak9QUU1CQndOQ0FBUW9ySjVSamFTQUZRci9yc2xoMWdobnNCWEQKeDVsR1lXTUtFS1pDYXJDdkZBekE0bHUwb2NQd0IzNWJmTVN5bFJPVmdVdHF1ZU9IcFBNc2ZLNEFrWjR5bzE4dwpYVEFPQmdOVkhROEJBZjhFQkFNQ0FhWXdEd1lEVlIwbEJBZ3dCZ1lFVlIwbEFEQVBCZ05WSFJNQkFmOEVCVEFECkFRSC9NQ2tHQTFVZERnUWlCQ0JnbmZJd0pzNlBaWUZCclpZVkRpU05vSjNGZWNFWHYvN2xHL3QxVUJDbVREQUsKQmdncWhrak9QUVFEQWdOSEFEQkVBaUE5NGFkc21UK0hLalpFVVpnM0VkaWdSM296L3pEQkNhWUY3TEJUVXpuQgpEZ0lnYS9RZFNPQnk1TUx2c0lSNTFDN0N4UnR2NUM5V05WRVlmWk5SaGdXRXpoOD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo="
            ],
            "admins": [
                "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNDVENDQWJDZ0F3SUJBZ0lRR2wzTjhaSzRDekRRQmZqYVpwMVF5VEFLQmdncWhrak9QUVFEQWpCcE1Rc3cKQ1FZRFZRUUdFd0pWVXpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2Y201cFlURVdNQlFHQTFVRUJ4TU5VMkZ1SUVaeQpZVzVqYVhOamJ6RVVNQklHQTFVRUNoTUxaWGhoYlhCc1pTNWpiMjB4RnpBVkJnTlZCQU1URG1OaExtVjRZVzF3CmJHVXVZMjl0TUI0WERURTRNRFl3T1RFeE5UZ3lPRm9YRFRJNE1EWXdOakV4TlRneU9Gb3dWakVMTUFrR0ExVUUKQmhNQ1ZWTXhFekFSQmdOVkJBZ1RDa05oYkdsbWIzSnVhV0V4RmpBVUJnTlZCQWNURFZOaGJpQkdjbUZ1WTJsegpZMjh4R2pBWUJnTlZCQU1NRVVGa2JXbHVRR1Y0WVcxd2JHVXVZMjl0TUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJCnpqMERBUWNEUWdBRWl2TXQybVdiQ2FHb1FZaWpka1BRM1NuTGFkMi8rV0FESEFYMnRGNWthMTBteG1OMEx3VysKdmE5U1dLMmJhRGY5RDQ2TVROZ2gycnRhUitNWXFWRm84Nk5OTUVzd0RnWURWUjBQQVFIL0JBUURBZ2VBTUF3RwpBMVVkRXdFQi93UUNNQUF3S3dZRFZSMGpCQ1F3SW9BZ1lKM3lNQ2JPajJXQlFhMldGUTRramFDZHhYbkJGNy8rCjVSdjdkVkFRcGt3d0NnWUlLb1pJemowRUF3SURSd0F3UkFJZ2RIc0pUcGM5T01DZ3JPVFRLTFNnU043UWk3MWIKSWpkdzE4MzJOeXFQZnJ3Q0lCOXBhSlRnL2R5ckNhWUx1ZndUbUtFSnZZMEtXVzcrRnJTeG5CTGdzZjJpCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
            ],
            "crypto_config": {
                "signature_hash_family": "SHA2",
                "identity_identifier_hash_function": "SHA256"
            },
            "tls_root_certs": [
                "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNORENDQWR1Z0F3SUJBZ0lRZDdodzFIaHNZTXI2a25ETWJrZThTakFLQmdncWhrak9QUVFEQWpCc01Rc3cKQ1FZRFZRUUdFd0pWVXpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2Y201cFlURVdNQlFHQTFVRUJ4TU5VMkZ1SUVaeQpZVzVqYVhOamJ6RVVNQklHQTFVRUNoTUxaWGhoYlhCc1pTNWpiMjB4R2pBWUJnTlZCQU1URVhSc2MyTmhMbVY0CllXMXdiR1V1WTI5dE1CNFhEVEU0TURZd09URXhOVGd5T0ZvWERUSTRNRFl3TmpFeE5UZ3lPRm93YkRFTE1Ba0cKQTFVRUJoTUNWVk14RXpBUkJnTlZCQWdUQ2tOaGJHbG1iM0p1YVdFeEZqQVVCZ05WQkFjVERWTmhiaUJHY21GdQpZMmx6WTI4eEZEQVNCZ05WQkFvVEMyVjRZVzF3YkdVdVkyOXRNUm93R0FZRFZRUURFeEYwYkhOallTNWxlR0Z0CmNHeGxMbU52YlRCWk1CTUdCeXFHU000OUFnRUdDQ3FHU000OUF3RUhBMElBQk9ZZGdpNm53a3pYcTBKQUF2cTIKZU5xNE5Ybi85L0VRaU13Tzc1dXdpTWJVbklYOGM1N2NYU2dQdy9NMUNVUGFwNmRyMldvTjA3RGhHb1B6ZXZaMwp1aFdqWHpCZE1BNEdBMVVkRHdFQi93UUVBd0lCcGpBUEJnTlZIU1VFQ0RBR0JnUlZIU1VBTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0tRWURWUjBPQkNJRUlCcW0xZW9aZy9qSW52Z1ZYR2cwbzVNamxrd2tSekRlalAzZkplbW8KU1hBek1Bb0dDQ3FHU000OUJBTUNBMGNBTUVRQ0lEUG9FRkF5bFVYcEJOMnh4VEo0MVplaS9ZQWFvN29aL0tEMwpvTVBpQ3RTOUFpQmFiU1dNS3UwR1l4eXdsZkFwdi9CWitxUEJNS0JMNk5EQ1haUnpZZmtENEE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="
            ]
        },
        "Org1MSP": {
            "name": "Org1MSP",
            "root_certs": [
                "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNSRENDQWVxZ0F3SUJBZ0lSQU1nN2VETnhwS0t0ZGl0TDRVNDRZMUl3Q2dZSUtvWkl6ajBFQXdJd2N6RUwKTUFrR0ExVUVCaE1DVlZNeEV6QVJCZ05WQkFnVENrTmhiR2xtYjNKdWFXRXhGakFVQmdOVkJBY1REVk5oYmlCRwpjbUZ1WTJselkyOHhHVEFYQmdOVkJBb1RFRzl5WnpFdVpYaGhiWEJzWlM1amIyMHhIREFhQmdOVkJBTVRFMk5oCkxtOXlaekV1WlhoaGJYQnNaUzVqYjIwd0hoY05NVGd3TmpBNU1URTFPREk0V2hjTk1qZ3dOakEyTVRFMU9ESTQKV2pCek1Rc3dDUVlEVlFRR0V3SlZVekVUTUJFR0ExVUVDQk1LUTJGc2FXWnZjbTVwWVRFV01CUUdBMVVFQnhNTgpVMkZ1SUVaeVlXNWphWE5qYnpFWk1CY0dBMVVFQ2hNUWIzSm5NUzVsZUdGdGNHeGxMbU52YlRFY01Cb0dBMVVFCkF4TVRZMkV1YjNKbk1TNWxlR0Z0Y0d4bExtTnZiVEJaTUJNR0J5cUdTTTQ5QWdFR0NDcUdTTTQ5QXdFSEEwSUEKQk41d040THpVNGRpcUZSWnB6d3FSVm9JbWw1MVh0YWkzbWgzUXo0UEZxWkhXTW9lZ0ovUWRNKzF4L3RobERPcwpnbmVRcndGd216WGpvSSszaHJUSmRuU2pYekJkTUE0R0ExVWREd0VCL3dRRUF3SUJwakFQQmdOVkhTVUVDREFHCkJnUlZIU1VBTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3S1FZRFZSME9CQ0lFSU9CZFFMRitjTVdhNmUxcDJDcE8KRXg3U0hVaW56VnZkNTVoTG03dzZ2NzJvTUFvR0NDcUdTTTQ5QkFNQ0EwZ0FNRVVDSVFDQyt6T1lHcll0ZTB4SgpSbDVYdUxjUWJySW9UeHpsRnJLZWFNWnJXMnVaSkFJZ0NVVGU5MEl4aW55dk4wUkh4UFhoVGNJTFdEZzdLUEJOCmVrNW5TRlh3Y0lZPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="
            ],
            "admins": [
                "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNLakNDQWRDZ0F3SUJBZ0lRRTRFK0tqSHgwdTlzRSsxZUgrL1dOakFLQmdncWhrak9QUVFEQWpCek1Rc3cKQ1FZRFZRUUdFd0pWVXpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2Y201cFlURVdNQlFHQTFVRUJ4TU5VMkZ1SUVaeQpZVzVqYVhOamJ6RVpNQmNHQTFVRUNoTVFiM0puTVM1bGVHRnRjR3hsTG1OdmJURWNNQm9HQTFVRUF4TVRZMkV1CmIzSm5NUzVsZUdGdGNHeGxMbU52YlRBZUZ3MHhPREEyTURreE1UVTRNamhhRncweU9EQTJNRFl4TVRVNE1qaGEKTUd3eEN6QUpCZ05WQkFZVEFsVlRNUk13RVFZRFZRUUlFd3BEWVd4cFptOXlibWxoTVJZd0ZBWURWUVFIRXcxVApZVzRnUm5KaGJtTnBjMk52TVE4d0RRWURWUVFMRXdaamJHbGxiblF4SHpBZEJnTlZCQU1NRmtGa2JXbHVRRzl5Clp6RXVaWGhoYlhCc1pTNWpiMjB3V1RBVEJnY3Foa2pPUFFJQkJnZ3Foa2pPUFFNQkJ3TkNBQVFqK01MZk1ESnUKQ2FlWjV5TDR2TnczaWp4ZUxjd2YwSHo1blFrbXVpSnFETjRhQ0ZwVitNTTVablFEQmx1dWRyUS80UFA1Sk1WeQpreWZsQ3pJa2NCNjdvMDB3U3pBT0JnTlZIUThCQWY4RUJBTUNCNEF3REFZRFZSMFRBUUgvQkFJd0FEQXJCZ05WCkhTTUVKREFpZ0NEZ1hVQ3hmbkRGbXVudGFkZ3FUaE1lMGgxSXA4MWIzZWVZUzV1OE9yKzlxREFLQmdncWhrak8KUFFRREFnTklBREJGQWlFQXlJV21QcjlQakdpSk1QM1pVd05MRENnNnVwMlVQVXNJSzd2L2h3RVRra01DSUE0cQo3cHhQZy9VVldiamZYeE0wUCsvcTEzbXFFaFlYaVpTTXpoUENFNkNmCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
            ],
            "crypto_config": {
                "signature_hash_family": "SHA2",
                "identity_identifier_hash_function": "SHA256"
            },
            "tls_root_certs": [
                "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNTVENDQWUrZ0F3SUJBZ0lRZlRWTE9iTENVUjdxVEY3Z283UXgvakFLQmdncWhrak9QUVFEQWpCMk1Rc3cKQ1FZRFZRUUdFd0pWVXpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2Y201cFlURVdNQlFHQTFVRUJ4TU5VMkZ1SUVaeQpZVzVqYVhOamJ6RVpNQmNHQTFVRUNoTVFiM0puTVM1bGVHRnRjR3hsTG1OdmJURWZNQjBHQTFVRUF4TVdkR3h6ClkyRXViM0puTVM1bGVHRnRjR3hsTG1OdmJUQWVGdzB4T0RBMk1Ea3hNVFU0TWpoYUZ3MHlPREEyTURZeE1UVTQKTWpoYU1IWXhDekFKQmdOVkJBWVRBbFZUTVJNd0VRWURWUVFJRXdwRFlXeHBabTl5Ym1saE1SWXdGQVlEVlFRSApFdzFUWVc0Z1JuSmhibU5wYzJOdk1Sa3dGd1lEVlFRS0V4QnZjbWN4TG1WNFlXMXdiR1V1WTI5dE1SOHdIUVlEClZRUURFeFowYkhOallTNXZjbWN4TG1WNFlXMXdiR1V1WTI5dE1Ga3dFd1lIS29aSXpqMENBUVlJS29aSXpqMEQKQVFjRFFnQUVZbnp4bmMzVUpHS0ZLWDNUNmR0VGpkZnhJTVYybGhTVzNab0lWSW9mb04rWnNsWWp0d0g2ZXZXYgptTkZGQmRaYWExTjluaXRpbmxxbVVzTU1NQ2JieXFOZk1GMHdEZ1lEVlIwUEFRSC9CQVFEQWdHbU1BOEdBMVVkCkpRUUlNQVlHQkZVZEpRQXdEd1lEVlIwVEFRSC9CQVV3QXdFQi96QXBCZ05WSFE0RUlnUWdlVTAwNlNaUllUNDIKN1Uxb2YwL3RGdHUvRFVtazVLY3hnajFCaklJakduZ3dDZ1lJS29aSXpqMEVBd0lEU0FBd1JRSWhBSWpvcldJTwpRNVNjYjNoZDluRi9UamxWcmk1UHdTaDNVNmJaMFdYWEsxYzVBaUFlMmM5QmkyNFE1WjQ0aXQ1MkI5cm1hU1NpCkttM2NZVlY0cWJ6RFhMOHZYUT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
            ],
            "fabric_node_ous": {
                "enable": true,
                "client_ou_identifier": {
                    "certificate": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNSRENDQWVxZ0F3SUJBZ0lSQU1nN2VETnhwS0t0ZGl0TDRVNDRZMUl3Q2dZSUtvWkl6ajBFQXdJd2N6RUwKTUFrR0ExVUVCaE1DVlZNeEV6QVJCZ05WQkFnVENrTmhiR2xtYjNKdWFXRXhGakFVQmdOVkJBY1REVk5oYmlCRwpjbUZ1WTJselkyOHhHVEFYQmdOVkJBb1RFRzl5WnpFdVpYaGhiWEJzWlM1amIyMHhIREFhQmdOVkJBTVRFMk5oCkxtOXlaekV1WlhoaGJYQnNaUzVqYjIwd0hoY05NVGd3TmpBNU1URTFPREk0V2hjTk1qZ3dOakEyTVRFMU9ESTQKV2pCek1Rc3dDUVlEVlFRR0V3SlZVekVUTUJFR0ExVUVDQk1LUTJGc2FXWnZjbTVwWVRFV01CUUdBMVVFQnhNTgpVMkZ1SUVaeVlXNWphWE5qYnpFWk1CY0dBMVVFQ2hNUWIzSm5NUzVsZUdGdGNHeGxMbU52YlRFY01Cb0dBMVVFCkF4TVRZMkV1YjNKbk1TNWxlR0Z0Y0d4bExtTnZiVEJaTUJNR0J5cUdTTTQ5QWdFR0NDcUdTTTQ5QXdFSEEwSUEKQk41d040THpVNGRpcUZSWnB6d3FSVm9JbWw1MVh0YWkzbWgzUXo0UEZxWkhXTW9lZ0ovUWRNKzF4L3RobERPcwpnbmVRcndGd216WGpvSSszaHJUSmRuU2pYekJkTUE0R0ExVWREd0VCL3dRRUF3SUJwakFQQmdOVkhTVUVDREFHCkJnUlZIU1VBTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3S1FZRFZSME9CQ0lFSU9CZFFMRitjTVdhNmUxcDJDcE8KRXg3U0hVaW56VnZkNTVoTG03dzZ2NzJvTUFvR0NDcUdTTTQ5QkFNQ0EwZ0FNRVVDSVFDQyt6T1lHcll0ZTB4SgpSbDVYdUxjUWJySW9UeHpsRnJLZWFNWnJXMnVaSkFJZ0NVVGU5MEl4aW55dk4wUkh4UFhoVGNJTFdEZzdLUEJOCmVrNW5TRlh3Y0lZPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==",
                    "organizational_unit_identifier": "client"
                },
                "peer_ou_identifier": {
                    "certificate": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNSRENDQWVxZ0F3SUJBZ0lSQU1nN2VETnhwS0t0ZGl0TDRVNDRZMUl3Q2dZSUtvWkl6ajBFQXdJd2N6RUwKTUFrR0ExVUVCaE1DVlZNeEV6QVJCZ05WQkFnVENrTmhiR2xtYjNKdWFXRXhGakFVQmdOVkJBY1REVk5oYmlCRwpjbUZ1WTJselkyOHhHVEFYQmdOVkJBb1RFRzl5WnpFdVpYaGhiWEJzWlM1amIyMHhIREFhQmdOVkJBTVRFMk5oCkxtOXlaekV1WlhoaGJYQnNaUzVqYjIwd0hoY05NVGd3TmpBNU1URTFPREk0V2hjTk1qZ3dOakEyTVRFMU9ESTQKV2pCek1Rc3dDUVlEVlFRR0V3SlZVekVUTUJFR0ExVUVDQk1LUTJGc2FXWnZjbTVwWVRFV01CUUdBMVVFQnhNTgpVMkZ1SUVaeVlXNWphWE5qYnpFWk1CY0dBMVVFQ2hNUWIzSm5NUzVsZUdGdGNHeGxMbU52YlRFY01Cb0dBMVVFCkF4TVRZMkV1YjNKbk1TNWxlR0Z0Y0d4bExtTnZiVEJaTUJNR0J5cUdTTTQ5QWdFR0NDcUdTTTQ5QXdFSEEwSUEKQk41d040THpVNGRpcUZSWnB6d3FSVm9JbWw1MVh0YWkzbWgzUXo0UEZxWkhXTW9lZ0ovUWRNKzF4L3RobERPcwpnbmVRcndGd216WGpvSSszaHJUSmRuU2pYekJkTUE0R0ExVWREd0VCL3dRRUF3SUJwakFQQmdOVkhTVUVDREFHCkJnUlZIU1VBTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3S1FZRFZSME9CQ0lFSU9CZFFMRitjTVdhNmUxcDJDcE8KRXg3U0hVaW56VnZkNTVoTG03dzZ2NzJvTUFvR0NDcUdTTTQ5QkFNQ0EwZ0FNRVVDSVFDQyt6T1lHcll0ZTB4SgpSbDVYdUxjUWJySW9UeHpsRnJLZWFNWnJXMnVaSkFJZ0NVVGU5MEl4aW55dk4wUkh4UFhoVGNJTFdEZzdLUEJOCmVrNW5TRlh3Y0lZPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==",
                    "organizational_unit_identifier": "peer"
                }
            }
        },
        "Org2MSP": {
            "name": "Org2MSP",
            "root_certs": [
                "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNSRENDQWVxZ0F3SUJBZ0lSQUx2SWV2KzE4Vm9LZFR2V1RLNCtaZ2d3Q2dZSUtvWkl6ajBFQXdJd2N6RUwKTUFrR0ExVUVCaE1DVlZNeEV6QVJCZ05WQkFnVENrTmhiR2xtYjNKdWFXRXhGakFVQmdOVkJBY1REVk5oYmlCRwpjbUZ1WTJselkyOHhHVEFYQmdOVkJBb1RFRzl5WnpJdVpYaGhiWEJzWlM1amIyMHhIREFhQmdOVkJBTVRFMk5oCkxtOXlaekl1WlhoaGJYQnNaUzVqYjIwd0hoY05NVGd3TmpBNU1URTFPREk0V2hjTk1qZ3dOakEyTVRFMU9ESTQKV2pCek1Rc3dDUVlEVlFRR0V3SlZVekVUTUJFR0ExVUVDQk1LUTJGc2FXWnZjbTVwWVRFV01CUUdBMVVFQnhNTgpVMkZ1SUVaeVlXNWphWE5qYnpFWk1CY0dBMVVFQ2hNUWIzSm5NaTVsZUdGdGNHeGxMbU52YlRFY01Cb0dBMVVFCkF4TVRZMkV1YjNKbk1pNWxlR0Z0Y0d4bExtTnZiVEJaTUJNR0J5cUdTTTQ5QWdFR0NDcUdTTTQ5QXdFSEEwSUEKQkhUS01aall0TDdnSXZ0ekN4Y2pMQit4NlZNdENzVW0wbExIcGtIeDFQaW5LUU1ybzFJWWNIMEpGVmdFempvSQpCcUdMYURyQmhWQkpoS1kwS21kMUJJZWpYekJkTUE0R0ExVWREd0VCL3dRRUF3SUJwakFQQmdOVkhTVUVDREFHCkJnUlZIU1VBTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3S1FZRFZSME9CQ0lFSUk1WWdza0tFUkNwQzVNRDdxQlUKUXZTajd4Rk1ncmI1emhDaUhpU3JFNEtnTUFvR0NDcUdTTTQ5QkFNQ0EwZ0FNRVVDSVFDWnNSUjVBVU5KUjdJbwpQQzgzUCt1UlF1RmpUYS94eitzVkpZYnBsNEh1Z1FJZ0QzUlhuQWFqaGlPMU1EL1JzSC9JN2FPL1RuWUxkQUl6Cnd4VlNJenhQbWd3PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="
            ],
            "admins": [
                "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNLVENDQWRDZ0F3SUJBZ0lRU1lpeE1vdmpoM1N2c25WMmFUOXl1REFLQmdncWhrak9QUVFEQWpCek1Rc3cKQ1FZRFZRUUdFd0pWVXpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2Y201cFlURVdNQlFHQTFVRUJ4TU5VMkZ1SUVaeQpZVzVqYVhOamJ6RVpNQmNHQTFVRUNoTVFiM0puTWk1bGVHRnRjR3hsTG1OdmJURWNNQm9HQTFVRUF4TVRZMkV1CmIzSm5NaTVsZUdGdGNHeGxMbU52YlRBZUZ3MHhPREEyTURreE1UVTRNamhhRncweU9EQTJNRFl4TVRVNE1qaGEKTUd3eEN6QUpCZ05WQkFZVEFsVlRNUk13RVFZRFZRUUlFd3BEWVd4cFptOXlibWxoTVJZd0ZBWURWUVFIRXcxVApZVzRnUm5KaGJtTnBjMk52TVE4d0RRWURWUVFMRXdaamJHbGxiblF4SHpBZEJnTlZCQU1NRmtGa2JXbHVRRzl5Clp6SXVaWGhoYlhCc1pTNWpiMjB3V1RBVEJnY3Foa2pPUFFJQkJnZ3Foa2pPUFFNQkJ3TkNBQVJFdStKc3l3QlQKdkFYUUdwT2FuS3ZkOVhCNlMxVGU4NTJ2L0xRODVWM1Rld0hlYXZXeGUydUszYTBvRHA5WDV5SlJ4YXN2b2hCcwpOMGJIRWErV1ZFQjdvMDB3U3pBT0JnTlZIUThCQWY4RUJBTUNCNEF3REFZRFZSMFRBUUgvQkFJd0FEQXJCZ05WCkhTTUVKREFpZ0NDT1dJTEpDaEVRcVF1VEErNmdWRUwwbys4UlRJSzIrYzRRb2g0a3F4T0NvREFLQmdncWhrak8KUFFRREFnTkhBREJFQWlCVUFsRStvbFBjMTZBMitmNVBRSmdTZFp0SjNPeXBieG9JVlhOdi90VUJ2QUlnVGFNcgo1K2k2TUxpaU9FZ0wzcWZSWmdkcG1yVm1SbHlIdVdabWE0NXdnaE09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
            ],
            "crypto_config": {
                "signature_hash_family": "SHA2",
                "identity_identifier_hash_function": "SHA256"
            },
            "tls_root_certs": [
                "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNTakNDQWZDZ0F3SUJBZ0lSQUtoUFFxUGZSYnVpSktqL0JRanQ3RXN3Q2dZSUtvWkl6ajBFQXdJd2RqRUwKTUFrR0ExVUVCaE1DVlZNeEV6QVJCZ05WQkFnVENrTmhiR2xtYjNKdWFXRXhGakFVQmdOVkJBY1REVk5oYmlCRwpjbUZ1WTJselkyOHhHVEFYQmdOVkJBb1RFRzl5WnpJdVpYaGhiWEJzWlM1amIyMHhIekFkQmdOVkJBTVRGblJzCmMyTmhMbTl5WnpJdVpYaGhiWEJzWlM1amIyMHdIaGNOTVRnd05qQTVNVEUxT0RJNFdoY05Namd3TmpBMk1URTEKT0RJNFdqQjJNUXN3Q1FZRFZRUUdFd0pWVXpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2Y201cFlURVdNQlFHQTFVRQpCeE1OVTJGdUlFWnlZVzVqYVhOamJ6RVpNQmNHQTFVRUNoTVFiM0puTWk1bGVHRnRjR3hsTG1OdmJURWZNQjBHCkExVUVBeE1XZEd4elkyRXViM0puTWk1bGVHRnRjR3hsTG1OdmJUQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDkKQXdFSEEwSUFCRVIrMnREOWdkME9NTlk5Y20rbllZR2NUeWszRStCMnBsWWxDL2ZVdGdUU0QyZUVyY2kyWmltdQo5N25YeUIrM0NwNFJwVjFIVHdaR0JMbmNnbVIyb1J5alh6QmRNQTRHQTFVZER3RUIvd1FFQXdJQnBqQVBCZ05WCkhTVUVDREFHQmdSVkhTVUFNQThHQTFVZEV3RUIvd1FGTUFNQkFmOHdLUVlEVlIwT0JDSUVJUEN0V01JRFRtWC8KcGxseS8wNDI4eFRXZHlhazQybU9tbVNJSENCcnAyN0tNQW9HQ0NxR1NNNDlCQU1DQTBnQU1FVUNJUUNtN2xmVQpjbG91VHJrS2Z1YjhmdmdJTTU3QS85bW5IdzhpQnAycURtamZhUUlnSjkwcnRUV204YzVBbE93bFpyYkd0NWZMCjF6WXg5QW5DMTJBNnhOZDIzTG89Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
            ],
            "fabric_node_ous": {
                "enable": true,
                "client_ou_identifier": {
                    "certificate": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNSRENDQWVxZ0F3SUJBZ0lSQUx2SWV2KzE4Vm9LZFR2V1RLNCtaZ2d3Q2dZSUtvWkl6ajBFQXdJd2N6RUwKTUFrR0ExVUVCaE1DVlZNeEV6QVJCZ05WQkFnVENrTmhiR2xtYjNKdWFXRXhGakFVQmdOVkJBY1REVk5oYmlCRwpjbUZ1WTJselkyOHhHVEFYQmdOVkJBb1RFRzl5WnpJdVpYaGhiWEJzWlM1amIyMHhIREFhQmdOVkJBTVRFMk5oCkxtOXlaekl1WlhoaGJYQnNaUzVqYjIwd0hoY05NVGd3TmpBNU1URTFPREk0V2hjTk1qZ3dOakEyTVRFMU9ESTQKV2pCek1Rc3dDUVlEVlFRR0V3SlZVekVUTUJFR0ExVUVDQk1LUTJGc2FXWnZjbTVwWVRFV01CUUdBMVVFQnhNTgpVMkZ1SUVaeVlXNWphWE5qYnpFWk1CY0dBMVVFQ2hNUWIzSm5NaTVsZUdGdGNHeGxMbU52YlRFY01Cb0dBMVVFCkF4TVRZMkV1YjNKbk1pNWxlR0Z0Y0d4bExtTnZiVEJaTUJNR0J5cUdTTTQ5QWdFR0NDcUdTTTQ5QXdFSEEwSUEKQkhUS01aall0TDdnSXZ0ekN4Y2pMQit4NlZNdENzVW0wbExIcGtIeDFQaW5LUU1ybzFJWWNIMEpGVmdFempvSQpCcUdMYURyQmhWQkpoS1kwS21kMUJJZWpYekJkTUE0R0ExVWREd0VCL3dRRUF3SUJwakFQQmdOVkhTVUVDREFHCkJnUlZIU1VBTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3S1FZRFZSME9CQ0lFSUk1WWdza0tFUkNwQzVNRDdxQlUKUXZTajd4Rk1ncmI1emhDaUhpU3JFNEtnTUFvR0NDcUdTTTQ5QkFNQ0EwZ0FNRVVDSVFDWnNSUjVBVU5KUjdJbwpQQzgzUCt1UlF1RmpUYS94eitzVkpZYnBsNEh1Z1FJZ0QzUlhuQWFqaGlPMU1EL1JzSC9JN2FPL1RuWUxkQUl6Cnd4VlNJenhQbWd3PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==",
                    "organizational_unit_identifier": "client"
                },
                "peer_ou_identifier": {
                    "certificate": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNSRENDQWVxZ0F3SUJBZ0lSQUx2SWV2KzE4Vm9LZFR2V1RLNCtaZ2d3Q2dZSUtvWkl6ajBFQXdJd2N6RUwKTUFrR0ExVUVCaE1DVlZNeEV6QVJCZ05WQkFnVENrTmhiR2xtYjNKdWFXRXhGakFVQmdOVkJBY1REVk5oYmlCRwpjbUZ1WTJselkyOHhHVEFYQmdOVkJBb1RFRzl5WnpJdVpYaGhiWEJzWlM1amIyMHhIREFhQmdOVkJBTVRFMk5oCkxtOXlaekl1WlhoaGJYQnNaUzVqYjIwd0hoY05NVGd3TmpBNU1URTFPREk0V2hjTk1qZ3dOakEyTVRFMU9ESTQKV2pCek1Rc3dDUVlEVlFRR0V3SlZVekVUTUJFR0ExVUVDQk1LUTJGc2FXWnZjbTVwWVRFV01CUUdBMVVFQnhNTgpVMkZ1SUVaeVlXNWphWE5qYnpFWk1CY0dBMVVFQ2hNUWIzSm5NaTVsZUdGdGNHeGxMbU52YlRFY01Cb0dBMVVFCkF4TVRZMkV1YjNKbk1pNWxlR0Z0Y0d4bExtTnZiVEJaTUJNR0J5cUdTTTQ5QWdFR0NDcUdTTTQ5QXdFSEEwSUEKQkhUS01aall0TDdnSXZ0ekN4Y2pMQit4NlZNdENzVW0wbExIcGtIeDFQaW5LUU1ybzFJWWNIMEpGVmdFempvSQpCcUdMYURyQmhWQkpoS1kwS21kMUJJZWpYekJkTUE0R0ExVWREd0VCL3dRRUF3SUJwakFQQmdOVkhTVUVDREFHCkJnUlZIU1VBTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3S1FZRFZSME9CQ0lFSUk1WWdza0tFUkNwQzVNRDdxQlUKUXZTajd4Rk1ncmI1emhDaUhpU3JFNEtnTUFvR0NDcUdTTTQ5QkFNQ0EwZ0FNRVVDSVFDWnNSUjVBVU5KUjdJbwpQQzgzUCt1UlF1RmpUYS94eitzVkpZYnBsNEh1Z1FJZ0QzUlhuQWFqaGlPMU1EL1JzSC9JN2FPL1RuWUxkQUl6Cnd4VlNJenhQbWd3PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==",
                    "organizational_unit_identifier": "peer"
                }
            }
        },
        "Org3MSP": {
            "name": "Org3MSP",
            "root_certs": [
                "CgJPVQoEUm9sZQoMRW5yb2xsbWVudElEChBSZXZvY2F0aW9uSGFuZGxlEkQKIKoEXcq/psdYnMKCiT79N+dS1hM8k+SuzU1blOgTuN++EiBe2m3E+FjWLuQGMNRGRrEVTMqTvC4A/5jvCLv2ja1sZxpECiDBbI0kwetxAwFzHwb1hi8TlkGW3OofvuVzfFt9VlewcRIgyvsxG5/THdWyKJTdNx8Gle2hoCbVF0Y1/DQESBjGOGciRAog25fMyWps+FLOjzj1vIsGUyO457ri3YMvmUcycIH2FvQSICTtzaFvSPUiDtNtAVz+uetuB9kfmjUdUSQxjyXULOm2IkQKIO8FKzwoWwu8Mo77GNqnKFGCZaJL9tlrkdTuEMu9ujzbEiA4xtzo8oo8oEhFVsl6010mNoj1VuI0Wmz4tvUgXolCIiJECiDZcZPuwk/uaJMuVph7Dy/icgnAtVYHShET41O0Eh3Q5BIgy5q9VMQrch9VW5yajhY8dH1uA593gKd5kBqGdLfiXzAiRAogAnUYq/kwKzFfmIm/W4nZxi1kjG2C8NRjsYYBkeAOQ6wSIGyX5GGmwgvxgXXehNWBfijyNIJALGRVhO8YtBqr+vnrKogBCiDHR1XQsDbpcBoZFJ09V97zsIKNVTxjUow7/wwC+tq3oBIgSWT/peiO2BI0DecypKfgMpVR8DWXl8ZHSrPISsL3Mc8aINem9+BOezLwFKCbtVH1KAHIRLyyiNP+TkIKW6x9RkThIiAbIJCYU6O02EB8uX6rqLU/1lHxV0vtWdIsKCTLx2EZmDJECiCPXeyUyFzPS3iFv8CQUOLCPZxf6buZS5JlM6EE/gCRaxIgmF9GKPLLmEoA77+AU3J8Iwnu9pBxnaHtUlyf/F9p30c6RAogG7ENKWlOZ4aF0HprqXAjl++Iao7/iE8xeVcKRlmfq1ASIGtmmavDAVS2bw3zClQd4ZBD2DrqCBO9NPOcLNB0IWeIQiCjxTdbmcuBNINZYWe+5fWyI1oY9LavKzDVkdh+miu26EogY2uJtJGfKrQQjy+pgf9FdPMUk+8PNUBtH9LCD4bos7JSIPl6m5lEP/PRAmBaeTQLXdbMxIthxM2gw+Zkc5+IJEWX"
            ],
            "intermediate_certs": [
                "CtgCCkQKIP0UVivtH8NlnRNrZuuu6jpaj2ZbEB4/secGS57MfbINEiDSJweLUMIQSW12jugBQG81lIQflJWvi7vi925u+PU/+xJECiDgOGdNbAiGSoHmTjKhT22fqUqYLIVh+JBHetm4kF4skhIg9XTWRkUqtsfYKENzPgm7ZUSmCHNF8xH7Vnhuc1EpAUgaINwSnJKofiMoyDRZwUBhgfwMH9DJzMccvRVW7IvLMe/cIiCnlRj+mfNVAJGKthLgQBB/JKM14NbUeutyJtTgrmDDiCogme25qGvxJfgQNnzldMMicVyiI6YMfnoThAUyqsTzyXkqIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAKiCZ7bmoa/El+BA2fOV0wyJxXKIjpgx+ehOEBTKqxPPJeSogAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAESIFYUenRvjbmEh+37YHJrvFJt4lGq9ShtJ4kEBrfHArPjGgNPVTEqA09VMTL0ARKIAQog/gwzULTJbCAoVg9XfCiROs4cU5oSv4Q80iYWtonAnvsSIE6mYFdzisBU21rhxjfYE7kk3Xjih9A1idJp7TSjfmorGiBwIEbnxUKjs3Z3DXUSTj5R78skdY1hWEjpCbSBvtwn/yIgBVTjvNOIwpBC7qZJKX6yn4tMvoCCGpiz4BKBEUqtBJsaZzBlAjBwZ4WXYOttkhsNA2r94gBfLUdx/4VhW4hwUImcztlau1T14UlNzJolCNkdiLc9CqsCMQD6OBkgDWGq9UlhkK9dJBzU+RElcZdSfVV1hDbbqt+lFRWOzzEkZ+BXCR1k3xybz+o="
            ],
            "admins": [
                "LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0KTUhZd0VBWUhLb1pJemowQ0FRWUZLNEVFQUNJRFlnQUVUYk13SEZteEpEMWR3SjE2K0hnVnRDZkpVRzdKK2FTYgorbkVvVmVkREVHYmtTc1owa1lraEpyYkx5SHlYZm15ZWV0ejFIUk1rWjRvMjdxRlMzTlVFb1J2QlM3RHJPWDJjCnZLaDRnbWhHTmlPbzRiWjFOVG9ZL2o3QnpqMFlMSXNlCi0tLS0tRU5EIFBVQkxJQyBLRVktLS0tLQo="
            ]
        }
    },
    "orderers": {
        "OrdererOrg": {
            "endpoint": [
                {
                    "host": "orderer.example.com",
                    "port": 7050
                }
            ]
        }
    }
}

It’s important to note that the certificates here are base64 encoded, and thus should be decoded in a manner similar to the following:

$ discover --configFile conf.yaml config --channel mychannel  --server peer0.org1.example.com:7051 | jq .msps.OrdererOrg.root_certs[0] | sed "s/\"//g" | base64 --decode | openssl x509 -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            c8:99:2d:3a:2d:7f:4b:73:53:8b:39:18:7b:c3:e1:1e
    Signature Algorithm: ecdsa-with-SHA256
        Issuer: C=US, ST=California, L=San Francisco, O=example.com, CN=ca.example.com
        Validity
            Not Before: Jun  9 11:58:28 2018 GMT
            Not After : Jun  6 11:58:28 2028 GMT
        Subject: C=US, ST=California, L=San Francisco, O=example.com, CN=ca.example.com
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (256 bit)
                pub:
                    04:28:ac:9e:51:8d:a4:80:15:0a:ff:ae:c9:61:d6:
                    08:67:b0:15:c3:c7:99:46:61:63:0a:10:a6:42:6a:
                    b0:af:14:0c:c0:e2:5b:b4:a1:c3:f0:07:7e:5b:7c:
                    c4:b2:95:13:95:81:4b:6a:b9:e3:87:a4:f3:2c:7c:
                    ae:00:91:9e:32
                ASN1 OID: prime256v1
                NIST CURVE: P-256
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment, Certificate Sign, CRL Sign
            X509v3 Extended Key Usage:
                Any Extended Key Usage
            X509v3 Basic Constraints: critical
                CA:TRUE
            X509v3 Subject Key Identifier:
                60:9D:F2:30:26:CE:8F:65:81:41:AD:96:15:0E:24:8D:A0:9D:C5:79:C1:17:BF:FE:E5:1B:FB:75:50:10:A6:4C
    Signature Algorithm: ecdsa-with-SHA256
         30:44:02:20:3d:e1:a7:6c:99:3f:87:2a:36:44:51:98:37:11:
         d8:a0:47:7a:33:ff:30:c1:09:a6:05:ec:b0:53:53:39:c1:0e:
         02:20:6b:f4:1d:48:e0:72:e4:c2:ef:b0:84:79:d4:2e:c2:c5:
         1b:6f:e4:2f:56:35:51:18:7d:93:51:86:05:84:ce:1f

Endorsers query:

To query for the endorsers of a chaincode call, additional flags need to be supplied:

  • The --chaincode flag is mandatory and it provides the chaincode name(s). To query for a chaincode-to-chaincode invocation, one needs to repeat the --chaincode flag with all the chaincodes.

  • The --collection flag is used to specify private data collections that are expected to be used by the chaincode(s). To map from the chaincodes passed via --chaincode to the collections, the following syntax should be used: collection=CC:Collection1,Collection2,....

For example, to query for a chaincode invocation that results in both cc1 and cc2 being invoked, as well as writes to private data collection col1 by cc2, one needs to specify: --chaincode=cc1 --chaincode=cc2 --collection=cc2:col1

Below is the output of an endorsers query for chaincode mycc when the endorsement policy is AND('Org1.peer', 'Org2.peer'):

$ discover --configFile conf.yaml endorsers --channel mychannel  --server peer0.org1.example.com:7051 --chaincode mycc
[
    {
        "Chaincode": "mycc",
        "EndorsersByGroups": {
            "G0": [
                {
                    "MSPID": "Org1MSP",
                    "LedgerHeight": 5,
                    "Endpoint": "peer0.org1.example.com:7051",
                    "Identity": "-----BEGIN CERTIFICATE-----\nMIICKDCCAc+gAwIBAgIRANTiKfUVHVGnrYVzEy1ZSKIwCgYIKoZIzj0EAwIwczEL\nMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDVNhbiBG\ncmFuY2lzY28xGTAXBgNVBAoTEG9yZzEuZXhhbXBsZS5jb20xHDAaBgNVBAMTE2Nh\nLm9yZzEuZXhhbXBsZS5jb20wHhcNMTgwNjA5MTE1ODI4WhcNMjgwNjA2MTE1ODI4\nWjBqMQswCQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMN\nU2FuIEZyYW5jaXNjbzENMAsGA1UECxMEcGVlcjEfMB0GA1UEAxMWcGVlcjAub3Jn\nMS5leGFtcGxlLmNvbTBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABD8jGz1l5Rrw\n5UWqAYnc4JrR46mCYwHhHFgwydccuytb00ouD4rECiBsCaeZFr5tODAK70jFOP/k\n/CtORCDPQ02jTTBLMA4GA1UdDwEB/wQEAwIHgDAMBgNVHRMBAf8EAjAAMCsGA1Ud\nIwQkMCKAIOBdQLF+cMWa6e1p2CpOEx7SHUinzVvd55hLm7w6v72oMAoGCCqGSM49\nBAMCA0cAMEQCIC3bacbDYphXfHrNULxpV/zwD08t7hJxNe8MwgP8/48fAiBiC0cr\nu99oLsRNCFB7R3egyKg1YYao0KWTrr1T+rK9Bg==\n-----END CERTIFICATE-----\n"
                }
            ],
            "G1": [
                {
                    "MSPID": "Org2MSP",
                    "LedgerHeight": 5,
                    "Endpoint": "peer1.org2.example.com:10051",
                    "Identity": "-----BEGIN CERTIFICATE-----\nMIICKDCCAc+gAwIBAgIRAIs6fFxk4Y5cJxSwTjyJ9A8wCgYIKoZIzj0EAwIwczEL\nMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDVNhbiBG\ncmFuY2lzY28xGTAXBgNVBAoTEG9yZzIuZXhhbXBsZS5jb20xHDAaBgNVBAMTE2Nh\nLm9yZzIuZXhhbXBsZS5jb20wHhcNMTgwNjA5MTE1ODI4WhcNMjgwNjA2MTE1ODI4\nWjBqMQswCQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMN\nU2FuIEZyYW5jaXNjbzENMAsGA1UECxMEcGVlcjEfMB0GA1UEAxMWcGVlcjEub3Jn\nMi5leGFtcGxlLmNvbTBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABOVFyWVmKZ25\nxDYV3xZBDX4gKQ7rAZfYgOu1djD9EHccZhJVPsdwSjbRsvrfs9Z8mMuwEeSWq/cq\n0cGrMKR93vKjTTBLMA4GA1UdDwEB/wQEAwIHgDAMBgNVHRMBAf8EAjAAMCsGA1Ud\nIwQkMCKAII5YgskKERCpC5MD7qBUQvSj7xFMgrb5zhCiHiSrE4KgMAoGCCqGSM49\nBAMCA0cAMEQCIDJmxseFul1GZ26djKa6jZ6zYYf6hchNF5xxMRWXpCnuAiBMf6JZ\njZjVM9F/OidQ2SBR7OZyMAzgXc5nAabWZpdkuQ==\n-----END CERTIFICATE-----\n"
                },
                {
                    "MSPID": "Org2MSP",
                    "LedgerHeight": 5,
                    "Endpoint": "peer0.org2.example.com:9051",
                    "Identity": "-----BEGIN CERTIFICATE-----\nMIICJzCCAc6gAwIBAgIQVek/l5TVdNvi1pk8ASS+vzAKBggqhkjOPQQDAjBzMQsw\nCQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMNU2FuIEZy\nYW5jaXNjbzEZMBcGA1UEChMQb3JnMi5leGFtcGxlLmNvbTEcMBoGA1UEAxMTY2Eu\nb3JnMi5leGFtcGxlLmNvbTAeFw0xODA2MDkxMTU4MjhaFw0yODA2MDYxMTU4Mjha\nMGoxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1T\nYW4gRnJhbmNpc2NvMQ0wCwYDVQQLEwRwZWVyMR8wHQYDVQQDExZwZWVyMC5vcmcy\nLmV4YW1wbGUuY29tMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE9Wl6EWXZhZZl\nt7cbCHdD3sutOnnszCq815NorpIcS9gyR9Y9cjLx8fsm5GnC68lFaZl412ipdwmI\nxlMBKsH4wKNNMEswDgYDVR0PAQH/BAQDAgeAMAwGA1UdEwEB/wQCMAAwKwYDVR0j\nBCQwIoAgjliCyQoREKkLkwPuoFRC9KPvEUyCtvnOEKIeJKsTgqAwCgYIKoZIzj0E\nAwIDRwAwRAIgKT9VK597mbLLBsoVP5OhPWVce3mhetGUUPDN2+phgXoCIDtAW2BR\nPPgPm/yu/CH9yDajGDlYIHI9GkN0MPNWAaom\n-----END CERTIFICATE-----\n"
                }
            ]
        },
        "Layouts": [
            {
                "quantities_by_group": {
                    "G0": 1,
                    "G1": 1
                }
            }
        ]
    }
]

Not using a configuration file

It is possible to execute the discovery CLI without having a configuration file, and just passing all needed configuration as commandline flags. The following is an example of a local peer membership query which loads administrator credentials:

$ discover --peerTLSCA tls/ca.crt --userKey msp/keystore/cf31339d09e8311ac9ca5ed4e27a104a7f82f1e5904b3296a170ba4725ffde0d_sk --userCert msp/signcerts/Admin\@org1.example.com-cert.pem --MSP Org1MSP --tlsCert tls/client.crt --tlsKey tls/client.key peers --server peer0.org1.example.com:7051
[
	{
		"MSPID": "Org1MSP",
		"Endpoint": "peer1.org1.example.com:8051",
		"Identity": "-----BEGIN CERTIFICATE-----\nMIICJzCCAc6gAwIBAgIQO7zMEHlMfRhnP6Xt65jwtDAKBggqhkjOPQQDAjBzMQsw\nCQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMNU2FuIEZy\nYW5jaXNjbzEZMBcGA1UEChMQb3JnMS5leGFtcGxlLmNvbTEcMBoGA1UEAxMTY2Eu\nb3JnMS5leGFtcGxlLmNvbTAeFw0xODA2MTcxMzQ1MjFaFw0yODA2MTQxMzQ1MjFa\nMGoxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1T\nYW4gRnJhbmNpc2NvMQ0wCwYDVQQLEwRwZWVyMR8wHQYDVQQDExZwZWVyMS5vcmcx\nLmV4YW1wbGUuY29tMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEoII9k8db/Q2g\nRHw5rk3SYw+OMFw9jNbsJJyC5ttJRvc12Dn7lQ8ZR9hW1vLQ3NtqO/couccDJcHg\nt47iHBNadaNNMEswDgYDVR0PAQH/BAQDAgeAMAwGA1UdEwEB/wQCMAAwKwYDVR0j\nBCQwIoAgcecTOxTes6rfgyxHH6KIW7hsRAw2bhP9ikCHkvtv/RcwCgYIKoZIzj0E\nAwIDRwAwRAIgGHGtRVxcFVeMQr9yRlebs23OXEECNo6hNqd/4ChLwwoCIBFKFd6t\nlL5BVzVMGQyXWcZGrjFgl4+fDrwjmMe+jAfa\n-----END CERTIFICATE-----\n",
	},
	{
		"MSPID": "Org1MSP",
		"Endpoint": "peer0.org1.example.com:7051",
		"Identity": "-----BEGIN CERTIFICATE-----\nMIICKDCCAc6gAwIBAgIQP18LeXtEXGoN8pTqzXTHZTAKBggqhkjOPQQDAjBzMQsw\nCQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMNU2FuIEZy\nYW5jaXNjbzEZMBcGA1UEChMQb3JnMS5leGFtcGxlLmNvbTEcMBoGA1UEAxMTY2Eu\nb3JnMS5leGFtcGxlLmNvbTAeFw0xODA2MTcxMzQ1MjFaFw0yODA2MTQxMzQ1MjFa\nMGoxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1T\nYW4gRnJhbmNpc2NvMQ0wCwYDVQQLEwRwZWVyMR8wHQYDVQQDExZwZWVyMC5vcmcx\nLmV4YW1wbGUuY29tMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEKeC/1Rg/ynSk\nNNItaMlaCDZOaQvxJEl6o3fqx1PVFlfXE4NarY3OO1N3YZI41hWWoXksSwJu/35S\nM7wMEzw+3KNNMEswDgYDVR0PAQH/BAQDAgeAMAwGA1UdEwEB/wQCMAAwKwYDVR0j\nBCQwIoAgcecTOxTes6rfgyxHH6KIW7hsRAw2bhP9ikCHkvtv/RcwCgYIKoZIzj0E\nAwIDSAAwRQIhAKiJEv79XBmr8gGY6kHrGL0L3sq95E7IsCYzYdAQHj+DAiBPcBTg\nRuA0//Kq+3aHJ2T0KpKHqD3FfhZZolKDkcrkwQ==\n-----END CERTIFICATE-----\n",
	},
	{
		"MSPID": "Org2MSP",
		"Endpoint": "peer0.org2.example.com:9051",
		"Identity": "-----BEGIN CERTIFICATE-----\nMIICKTCCAc+gAwIBAgIRANK4WBck5gKuzTxVQIwhYMUwCgYIKoZIzj0EAwIwczEL\nMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDVNhbiBG\ncmFuY2lzY28xGTAXBgNVBAoTEG9yZzIuZXhhbXBsZS5jb20xHDAaBgNVBAMTE2Nh\nLm9yZzIuZXhhbXBsZS5jb20wHhcNMTgwNjE3MTM0NTIxWhcNMjgwNjE0MTM0NTIx\nWjBqMQswCQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMN\nU2FuIEZyYW5jaXNjbzENMAsGA1UECxMEcGVlcjEfMB0GA1UEAxMWcGVlcjAub3Jn\nMi5leGFtcGxlLmNvbTBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABJa0gkMRqJCi\nzmx+L9xy/ecJNvdAV2zmSx5Sf2qospVAH1MYCHyudDEvkiRuBPgmCdOdwJsE0g+h\nz0nZdKq6/X+jTTBLMA4GA1UdDwEB/wQEAwIHgDAMBgNVHRMBAf8EAjAAMCsGA1Ud\nIwQkMCKAIFZMuZfUtY6n2iyxaVr3rl+x5lU0CdG9x7KAeYydQGTMMAoGCCqGSM49\nBAMCA0gAMEUCIQC0M9/LJ7j3I9NEPQ/B1BpnJP+UNPnGO2peVrM/mJ1nVgIgS1ZA\nA1tsxuDyllaQuHx2P+P9NDFdjXx5T08lZhxuWYM=\n-----END CERTIFICATE-----\n",
	},
	{
		"MSPID": "Org2MSP",
		"Endpoint": "peer1.org2.example.com:10051",
		"Identity": "-----BEGIN CERTIFICATE-----\nMIICKDCCAc+gAwIBAgIRALnNJzplCrYy4Y8CjZtqL7AwCgYIKoZIzj0EAwIwczEL\nMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDVNhbiBG\ncmFuY2lzY28xGTAXBgNVBAoTEG9yZzIuZXhhbXBsZS5jb20xHDAaBgNVBAMTE2Nh\nLm9yZzIuZXhhbXBsZS5jb20wHhcNMTgwNjE3MTM0NTIxWhcNMjgwNjE0MTM0NTIx\nWjBqMQswCQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMN\nU2FuIEZyYW5jaXNjbzENMAsGA1UECxMEcGVlcjEfMB0GA1UEAxMWcGVlcjEub3Jn\nMi5leGFtcGxlLmNvbTBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABNDopAkHlDdu\nq10HEkdxvdpkbs7EJyqv1clvCt/YMn1hS6sM+bFDgkJKalG7s9Hg3URF0aGpy51R\nU+4F9Muo+XajTTBLMA4GA1UdDwEB/wQEAwIHgDAMBgNVHRMBAf8EAjAAMCsGA1Ud\nIwQkMCKAIFZMuZfUtY6n2iyxaVr3rl+x5lU0CdG9x7KAeYydQGTMMAoGCCqGSM49\nBAMCA0cAMEQCIAR4fBmIBKW2jp0HbbabVepNtl1c7+6++riIrEBnoyIVAiBBvWmI\nyG02c5hu4wPAuVQMB7AU6tGSeYaWSAAo/ExunQ==\n-----END CERTIFICATE-----\n",
	}
]

Fabric-CA Commands

The Hyperledger Fabric CA is a Certificate Authority (CA) for Hyperledger Fabric. The commands available for the fabric-ca client and fabric-ca server are described in the links below.

Fabric-CA Client

The fabric-ca-client command allows you to manage identities (including attribute management) and certificates (including renewal and revocation).

More information on fabric-ca-client commands can be found here.

Fabric-CA Server

The fabric-ca-server command allows you to initialize and start a server process which may host one or more certificate authorities.

More information on fabric-ca-server commands can be found here.

Architecture Reference

Architecture Origins

Note

This document presents the initial architecture proposal for Hyperledger Fabric v1.0. While the Hyperledger Fabric implementation conceptually follows this proposal, some of the details have changed during implementation. The original proposal is presented here as it was first prepared. For a more precise representation of the architecture, see Hyperledger Fabric: A Distributed Operating System for Permissioned Blockchains (https://arxiv.org/abs/1801.10228v2).

The Hyperledger Fabric architecture delivers the following advantages:

  • Chaincode trust flexibility. The architecture separates trust assumptions for chaincodes (blockchain applications) from trust assumptions for ordering. In other words, the ordering service may be provided by one set of nodes (orderers) and tolerate some of them failing or misbehaving, while the endorsers may be different for each chaincode.

  • Scalability. As the endorsing peers responsible for a particular chaincode are orthogonal to the orderers, the system may scale better than if these functions were performed by the same nodes. In particular, when different chaincodes specify disjoint endorsers, this introduces a partitioning of chaincodes across endorsers and allows parallel chaincode execution (endorsement). Moreover, potentially costly chaincode execution is removed from the critical path of the ordering service.

  • Confidentiality. The architecture facilitates deployment of chaincodes that have confidentiality requirements with respect to the content and state updates of their transactions.

  • Consensus modularity. The architecture is modular and allows pluggable consensus (i.e., ordering service) implementations.

Part I: Elements of the architecture related to Hyperledger Fabric v1

  1. System architecture

  2. Basic workflow of transaction endorsement

  3. Endorsement policies

Part II: Post-v1 elements of the architecture

  1. Ledger checkpointing (pruning)

1. System architecture

A blockchain is a distributed system consisting of many nodes that communicate with each other. The blockchain runs programs called chaincode, holds state and ledger data, and executes transactions. The chaincode is the central element, as transactions are operations invoked on the chaincode. Transactions have to be "endorsed", and only endorsed transactions may be committed and have an effect on the state. There may exist one or more special chaincodes for management functions and parameters, collectively called system chaincodes.

1.1. Transactions

Transactions may be of two types:

  • Deploy transactions create new chaincode and take a program as a parameter. When a deploy transaction executes successfully, the chaincode has been installed on the blockchain.

  • Invoke transactions perform an operation in the context of previously deployed chaincode. An invoke transaction refers to a chaincode and to one of the functions it provides. When successful, the chaincode executes the specified function, which may involve modifying the corresponding state, and returns an output.

As described later, deploy transactions are special cases of invoke transactions, where a deploy transaction that creates new chaincode corresponds to an invoke transaction on a system chaincode.

Remark: This document currently assumes that a transaction either creates new chaincode or invokes one operation provided by an already-deployed chaincode. This document does not yet describe: a) optimizations for query (read-only) transactions (included in v1), b) support for cross-chaincode transactions (a post-v1 feature).

1.2. Blockchain datastructures

1.2.1 State

The latest state of the blockchain (or, simply, state) is modeled as a versioned key-value store (KVS), where keys are names and values are arbitrary blobs. These entries are manipulated by the chaincodes (applications) running on the blockchain, through put and get KVS operations. The state is stored persistently and updates to the state are logged. Notice that a versioned KVS is adopted as the state model; an implementation may use an actual KVS, but also an RDBMS or any other solution.

More formally, state s is modeled as an element of a map K -> (V X N), where:

  • K is a set of keys

  • V is a set of values

  • N is an infinite ordered set of version numbers. The injective function next: N -> N takes an element of N and returns the next version number.

Both V and N contain a special element ⊥ (empty type), which in the case of N is the lowest element. Initially all keys are mapped to (⊥, ⊥). For s(k)=(v,ver) we denote v by s(k).value, and ver by s(k).version.

KVS operations are modeled as follows:

  • put(k,v), for k∈K and v∈V, takes the blockchain state s and changes it to s' such that s'(k)=(v,next(s(k).version)) with s'(k')=s(k') for all k'!=k.

  • get(k) returns s(k).
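
To make the versioned-KVS model concrete, here is a minimal Go sketch (hypothetical, not Fabric's actual ledger code) of put and get with the next-version semantics defined above; absent keys play the role of (⊥, ⊥):

// A minimal sketch of the versioned KVS model from Sec. 1.2.1: put(k,v)
// stores the value together with next(version), and get(k) returns the
// (value, version) pair.
package main

import "fmt"

type Entry struct {
	Value   []byte
	Version uint64 // 0 plays the role of the bottom element ⊥
}

type VersionedKVS struct {
	store map[string]Entry
}

func NewVersionedKVS() *VersionedKVS {
	return &VersionedKVS{store: make(map[string]Entry)}
}

// Put sets s'(k) = (v, next(s(k).version)); all other keys are untouched.
func (s *VersionedKVS) Put(k string, v []byte) {
	prev := s.store[k] // the zero value models (⊥, ⊥) for absent keys
	s.store[k] = Entry{Value: v, Version: prev.Version + 1}
}

// Get returns s(k), i.e. the value together with its version.
func (s *VersionedKVS) Get(k string) Entry {
	return s.store[k]
}

func main() {
	s := NewVersionedKVS()
	s.Put("radish-price", []byte("2"))
	s.Put("radish-price", []byte("3"))
	e := s.Get("radish-price")
	fmt.Printf("%s @ version %d\n", e.Value, e.Version) // 3 @ version 2
}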

State is maintained by peers, but not by orderers and clients.

State partitioning. Keys in the KVS can be recognized from their name to belong to a particular chaincode, in the sense that only transactions of a certain chaincode may modify the keys belonging to this chaincode. In principle, any chaincode can read the keys belonging to other chaincodes. Support for cross-chaincode transactions, that modify the state belonging to two or more chaincodes, is a post-v1 feature.

1.2.2 Ledger

The ledger provides a verifiable history of all successful state changes (we talk about valid transactions) and unsuccessful attempts to change state (we talk about invalid transactions), occurring during the operation of the system.

The ledger is constructed by the ordering service (see Sec. 1.3.3) as a totally ordered hashchain of blocks of (valid or invalid) transactions. The hashchain imposes the total order of blocks in the ledger, and each block contains an array of totally ordered transactions. This imposes a total order across all transactions.

The ledger is kept at all peers and, optionally, at a subset of orderers. In the context of an orderer we refer to the ledger as OrdererLedger, whereas in the context of a peer we refer to the ledger as PeerLedger. PeerLedger differs from OrdererLedger in that peers locally maintain a bitmask that tells apart valid transactions from invalid ones (see Sec. XX for more details).

Peers may prune PeerLedger as described in Sec. XX (a post-v1 feature). Orderers maintain OrdererLedger for fault tolerance and availability (of PeerLedger) and may decide to prune it at any time, provided that the properties of the ordering service (see Sec. 1.3.3) are maintained.

The ledger allows peers to replay the history of all transactions and to reconstruct the state. Therefore, state as described in Sec. 1.2.1 is an optional datastructure.

1.3. Nodes

Nodes are the communication entities of the blockchain. A "node" is only a logical function in the sense that multiple nodes of different types can run on the same physical server. What counts is how nodes are grouped in "trust domains" and associated with the logical entities that control them.

There are three types of nodes:

  1. Client or submitting-client: a client that submits an actual transaction-invocation to the endorsers, and broadcasts transaction-proposals to the ordering service.

  2. Peer: a node that commits transactions and maintains the state and a copy of the ledger (see Sec. 1.2). Besides, peers can take on a special endorser role.

  3. Ordering-service-node or orderer: a node running the communication service that implements a delivery guarantee, such as atomic or total order broadcast.

The types of nodes are explained next in more detail.

1.3.1. Client

The client represents the entity that acts on behalf of an end-user. It must connect to a peer for communicating with the blockchain. The client may connect to any peer of its choice. Clients create and thereby invoke transactions.

As detailed in Section 2, clients communicate with both peers and the ordering service.

1.3.2. Peer

A peer receives ordered state updates in the form of blocks from the ordering service and maintains the state and the ledger.

Peers can additionally take up the special role of an endorsing peer, or endorser. The special function of an endorsing peer occurs with respect to a particular chaincode and consists in endorsing a transaction before it is committed. Every chaincode may specify an endorsement policy that may refer to a set of endorsing peers. The policy defines the necessary and sufficient conditions for a valid transaction endorsement (typically a set of endorsers' signatures), as described later in Sections 2 and 3. In the special case of deploy transactions that install new chaincode, the (deployment) endorsement policy is specified as an endorsement policy of the system chaincode.

1.3.3. Ordering service nodes (Orderers)

Orderers form the ordering service, i.e., a communication fabric that provides delivery guarantees. The ordering service can be implemented in different ways: ranging from a centralized service (used, e.g., in development and testing) to distributed protocols that target different network and node fault models.

The ordering service provides a shared communication channel to clients and peers, offering a broadcast service for messages containing transactions. Clients connect to the channel and may broadcast messages on the channel, which are then delivered to all peers. The channel supports atomic delivery of all messages, that is, message communication with total-order delivery and (implementation-specific) reliability. In other words, the channel outputs the same messages to all connected peers and outputs them to all peers in the same logical order. This atomic communication guarantee is also called total-order broadcast, atomic broadcast, or consensus in the context of distributed systems. The communicated messages are the candidate transactions for inclusion in the blockchain state.

Partitioning (ordering service channels). The ordering service may support multiple channels, similar to the topics of a publish/subscribe (pub/sub) messaging system. Clients can connect to a given channel and can then send messages and obtain the messages that arrive. Channels can be thought of as partitions: clients connecting to one channel are unaware of the existence of other channels, but clients may connect to multiple channels. Even though some ordering service implementations included with Hyperledger Fabric support multiple channels, for simplicity of presentation, in the rest of this document we assume the ordering service consists of a single channel/topic.

Ordering service API. Peers connect to the channel provided by the ordering service, via the interface provided by the ordering service. The ordering service API consists of two basic operations (more generally asynchronous events):

TODO add the part of the API for fetching particular blocks under sequence numbers specified by a client/peer.

  • broadcast(blob): a client calls this to broadcast an arbitrary message blob for dissemination over the channel. This is also called request(blob) in the BFT context, when sending a request to a service.

  • deliver(seqno, prevhash, blob): the ordering service calls this on the peer to deliver the message blob with the specified non-negative integer sequence number (seqno) and the hash of the most recently delivered blob (prevhash). In other words, it is an output event from the ordering service. deliver() is also sometimes called notify() in pub-sub systems or commit() in BFT systems.
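
The following Go sketch restates the two abstract operations as interfaces; this is only a hypothetical rendering of the API above, not Fabric's actual gRPC service definitions:

// A hypothetical Go rendering of the two ordering-service operations.
package ordering

// Blob is an opaque message, in practice a candidate transaction.
type Blob []byte

// Service is the abstract atomic-broadcast channel.
type Service interface {
	// Broadcast disseminates blob over the channel
	// (request(blob) in BFT terms).
	Broadcast(blob Blob) error
}

// DeliverHandler is implemented by peers; the ordering service invokes it
// once per delivered message, in the same logical order at every peer.
type DeliverHandler interface {
	// Deliver hands the peer blob number seqno together with the hash of
	// the previously delivered blob (prevhash).
	Deliver(seqno uint64, prevhash []byte, blob Blob)
}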

Ledger and block formation. The ledger (see Sec. 1.2.2) contains all data output by the ordering service. In a nutshell, it is a sequence of deliver(seqno, prevhash, blob) events, which form a hash chain according to the computation of prevhash described before.

Most of the time, for efficiency reasons, instead of outputting individual transactions (blobs), the ordering service will group (batch) the blobs and output blocks within a single deliver event. In this case, the ordering service must impose and convey a deterministic ordering of the blobs within each block. The number of blobs in a block may be chosen dynamically by an ordering service implementation.

In the following, for ease of presentation, we define ordering service properties (in the rest of this subsection) and explain the workflow of transaction endorsement (Section 2) assuming one blob per deliver event. These are easily extended to blocks, assuming that a deliver event for a block corresponds to a sequence of individual deliver events for each blob within a block, in the above-mentioned deterministic order of blobs within the block.

Ordering service properties

The guarantees of the ordering service (or atomic-broadcast channel) stipulate what happens to a broadcast message and what relations exist among delivered messages. These guarantees are the following:

  1. Safety (consistency guarantees): As long as peers are connected for sufficiently long periods of time to the channel (they can disconnect or crash, but will restart and reconnect), they will see an identical series of delivered (seqno, prevhash, blob) messages. This means the outputs (deliver() events) occur in the same order on all peers and according to sequence number, and carry identical content (blob and prevhash) for the same sequence number. Note this is only a logical order, and a deliver(seqno, prevhash, blob) on one peer is not required to occur in any real-time relation to a deliver(seqno, prevhash, blob) that outputs the same message at another peer. Put differently, given a particular seqno, no two correct peers deliver different prevhash or blob values. Moreover, no value blob is delivered unless some client (peer) actually called broadcast(blob), i.e., every broadcast blob is only delivered once.

    Furthermore, the deliver() event contains the cryptographic hash of the data in the previous deliver() event (prevhash). When the ordering service implements atomic broadcast guarantees, prevhash is the cryptographic hash of the parameters from the deliver() event with sequence number seqno-1. This establishes a hash chain across deliver() events, which is used to help verify the integrity of the ordering service output, as discussed in Sections 4 and 5 later. In the special case of the first deliver() event, prevhash has a default value.

  2. Liveness (delivery guarantee): Liveness guarantees of the ordering service are determined by the ordering service implementation. The exact guarantees may depend on the network and node fault model.

    In principle, if the submitting client does not fail, the ordering service should guarantee that every correct peer that connects to the ordering service eventually delivers every submitted transaction.

In summary, the ordering service ensures the following properties:

  • Agreement. For any two events at correct peers deliver(seqno, prevhash0, blob0) and deliver(seqno, prevhash1, blob1) with the same seqno, prevhash0==prevhash1 and blob0==blob1;

  • Hashchain integrity. For any two events at correct peers deliver(seqno-1, prevhash0, blob0) and deliver(seqno, prevhash, blob), prevhash = HASH(seqno-1||prevhash0||blob0).

  • No skipping. If the ordering service outputs deliver(seqno, prevhash, blob) at a correct peer p, such that seqno>0, then p has already delivered an event deliver(seqno-1, prevhash0, blob0).

  • No creation. Any event deliver(seqno, prevhash, blob) at a correct peer must be preceded by a broadcast(blob) event at some (possibly distinct) peer;

  • No duplication (optional, yet desirable). For any two events broadcast(blob) and broadcast(blob'), when two events deliver(seqno0, prevhash0, blob) and deliver(seqno1, prevhash1, blob') occur at correct peers and blob == blob', then seqno0==seqno1 and prevhash0==prevhash1.

  • Liveness. If a correct client invokes an event broadcast(blob), then every correct peer "eventually" issues an event deliver(*, *, blob), where * denotes an arbitrary value.
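
As an illustration of the hashchain integrity property, the following minimal sketch shows the check a peer could perform on each deliver() event; HASH being SHA-256 and the big-endian encoding of seqno are both assumptions of this example:

// Verifies prevhash == HASH(seqno-1 || prevhash0 || blob0) for one link.
package main

import (
	"bytes"
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// chainHash computes HASH(seqno || prevhash || blob) for one deliver event.
func chainHash(seqno uint64, prevhash, blob []byte) []byte {
	h := sha256.New()
	binary.Write(h, binary.BigEndian, seqno)
	h.Write(prevhash)
	h.Write(blob)
	return h.Sum(nil)
}

// verifyLink checks deliver(seqno, prevhash, blob) against the previous
// event deliver(seqno-1, prevhash0, blob0).
func verifyLink(seqno uint64, prevhash, prevhash0, blob0 []byte) bool {
	return bytes.Equal(prevhash, chainHash(seqno-1, prevhash0, blob0))
}

func main() {
	genesis := chainHash(0, nil, []byte("blob0"))
	fmt.Println(verifyLink(1, genesis, nil, []byte("blob0"))) // true
}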

2. Basic workflow of transaction endorsement

In the following we outline the high-level request flow for a transaction.

Remark: Notice that the following protocol does not assume that all transactions are deterministic, i.e., it allows for non-deterministic transactions.

2.1. The client creates a transaction and sends it to endorsing peers of its choice

To invoke a transaction, the client sends a PROPOSE message to a set of endorsing peers of its choice (possibly not at the same time; see Secs. 2.1.2. and 2.3.). The set of endorsing peers for a given chaincodeID is made available to the client via a peer, which in turn knows the set of endorsing peers from the endorsement policy (see Section 3). For example, the transaction could be sent to all endorsers of a given chaincodeID. That said, some endorsers could be offline, and others may object and choose not to endorse the transaction. The submitting client tries to satisfy the policy expression with the endorsers available.

In the following, we first detail the PROPOSE message format and then discuss possible patterns of interaction between the submitting client and endorsers.

2.1.1. PROPOSE message format

The format of a PROPOSE message is <PROPOSE,tx,[anchor]>, where tx is a mandatory and anchor an optional argument, explained in the following.

  • tx=<clientID,chaincodeID,txPayload,timestamp,clientSig>, where

    • clientID is an ID of the submitting client,

    • chaincodeID refers to the chaincode to which the transaction pertains,

    • txPayload is the payload containing the submitted transaction itself,

    • timestamp is a monotonically increasing (for every new transaction) integer maintained by the client,

    • clientSig is the signature of the client on the other fields of tx.

    The details of txPayload differ between an invoke transaction and a deploy transaction (i.e., an invoke transaction referring to a deploy-specific system chaincode). For an invoke transaction, txPayload consists of two fields

    • txPayload = <operation, metadata>, where

      • operation denotes the chaincode operation (function) and arguments,

      • metadata denotes attributes related to the invocation.

    For a deploy transaction, txPayload consists of three fields

    • txPayload = <source, metadata, policies>, where

      • source denotes the source code of the chaincode,

      • metadata denotes attributes related to the chaincode and application,

      • policies contains policies related to the chaincode that are accessible to all peers, such as the endorsement policy. Note that endorsement policies are not supplied with txPayload in a deploy transaction, but the txPayload of a deploy contains the endorsement policy ID and its parameters (see Section 3).

  • anchor contains read version dependencies, or more specifically, key-version pairs (i.e., anchor is a subset of KxN), that bind or "anchor" the PROPOSE request to specified versions of keys in a KVS (see Section 1.2.). If the client specifies the anchor argument, an endorser endorses a transaction only upon the read version numbers of the corresponding keys in its local KVS matching anchor (see Section 2.2. for more details).

The cryptographic hash of tx is used by all nodes as a unique transaction identifier tid (i.e., tid=HASH(tx)). The client stores tid in memory and waits for responses from endorsing peers.
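
A hypothetical Go rendering of the PROPOSE message fields may help fix the notation; Fabric's real proposal is a protobuf message, and the names below simply mirror this document:

// Sketch of the PROPOSE message structure from Sec. 2.1.1.
package proposal

import "crypto/sha256"

type Tx struct {
	ClientID    string
	ChaincodeID string
	TxPayload   []byte
	Timestamp   uint64 // monotonically increasing per client
	ClientSig   []byte // client signature over the other fields
}

// KeyVersion is one element of the optional anchor: a key bound to the
// read version the endorser's local state must match.
type KeyVersion struct {
	Key     string
	Version uint64
}

type Propose struct {
	Tx     Tx
	Anchor []KeyVersion // optional read-version dependencies
}

// TID computes tid = HASH(tx), here assuming SHA-256 over a canonical
// encoding of tx (the encoding itself is elided).
func TID(encodedTx []byte) [32]byte {
	return sha256.Sum256(encodedTx)
}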

2.1.2. Message patterns

The client decides on the sequence of interaction with endorsers. For example, a client would typically send <PROPOSE, tx> (i.e., without the anchor argument) to a single endorser, which would then produce the version dependencies (anchor) that the client can later use as an argument of its PROPOSE message to other endorsers. As another example, the client could directly send <PROPOSE, tx> (without anchor) to all endorsers of its choice. Different patterns of communication are possible and the client is free to decide on those (see also Sec. 2.3.).

2.2. The endorsing peer simulates a transaction and produces an endorsement signature

On reception of a <PROPOSE,tx,[anchor]> message from a client, the endorsing peer epID first verifies the client's signature clientSig and then simulates the transaction. If the client specifies anchor, then the endorsing peer simulates the transaction only if the read version numbers of the corresponding keys in its local KVS match the version numbers specified by anchor.

Simulating a transaction involves the endorsing peer tentatively executing the transaction (txPayload), by invoking the chaincode to which the transaction refers (chaincodeID) against the copy of the state that the endorsing peer locally holds.

As a result of the execution, the endorsing peer computes read version dependencies (readset) and state updates (writeset), also called MVCC+postimage info in DB language.

Recall that the state consists of key-value pairs. All key-value entries are versioned; that is, every entry contains ordered version information, which is incremented each time the value stored under a key is updated. The peer that interprets the transaction records all key-value pairs accessed by the chaincode, either for reading or for writing, but the peer does not yet update its state. More specifically (see also the sketch following this list):

  • Given state s before an endorsing peer executes a transaction, for every key k read by the transaction, the pair (k,s(k).version) is added to readset.

  • Additionally, for every key k modified by the transaction to the new value v', the pair (k,v') is added to writeset. Alternatively, v' could be the delta of the new value to the previous value (s(k).value).
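
The following minimal, hypothetical sketch shows an endorser recording readset and writeset during simulation; get reads versions from the local state copy, while put buffers the new value without updating the state:

// Sketch of readset/writeset bookkeeping during transaction simulation.
package main

import "fmt"

type entry struct {
	value   []byte
	version uint64
}

type simulator struct {
	state    map[string]entry  // endorser's local (versioned) state copy
	readset  map[string]uint64 // key -> version read during simulation
	writeset map[string][]byte // key -> tentative new value
}

func newSimulator(state map[string]entry) *simulator {
	return &simulator{state: state, readset: map[string]uint64{}, writeset: map[string][]byte{}}
}

// get adds (k, s(k).version) to the readset and returns s(k).value.
func (s *simulator) get(k string) []byte {
	e := s.state[k]
	s.readset[k] = e.version
	return e.value
}

// put adds (k, v') to the writeset only; the state itself is untouched.
func (s *simulator) put(k string, v []byte) {
	s.writeset[k] = v
}

func main() {
	sim := newSimulator(map[string]entry{"radish-price": {[]byte("2"), 7}})
	_ = sim.get("radish-price")          // readset: radish-price -> 7
	sim.put("radish-price", []byte("3")) // writeset only, state unchanged
	fmt.Println(sim.readset, len(sim.writeset))
}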

If a client specifies anchor in the PROPOSE message, then the anchor specified by the client must equal the readset produced by the endorsing peer when simulating the transaction.

The endorsing peer then forwards internally tran-proposal (and possibly tx) to the part of its (endorsing peer's) logic that endorses a transaction, referred to as endorsing logic. By default, endorsing logic at a peer accepts the tran-proposal and simply signs the tran-proposal. However, endorsing logic may interpret arbitrary functionality, e.g., interacting with legacy systems with tran-proposal and tx as inputs to reach the decision whether to endorse a transaction or not.

If the endorsing logic decides to endorse a transaction, it sends a <TRANSACTION-ENDORSED, tid, tran-proposal, epSig> message to the submitting client (tx.clientID), where:

  • tran-proposal := (epID,tid,chaincodeID,txContentBlob,readset,writeset),

    where txContentBlob is chaincode/transaction specific information. The intention is to have txContentBlob used as some representation of tx (e.g., txContentBlob=tx.txPayload).

  • epSig is the endorsing peer's signature on tran-proposal.

Else, in case the endorsing logic refuses to endorse the transaction, an endorser may send a message (TRANSACTION-INVALID, tid, REJECTED) to the submitting client.

Notice that an endorser does not change its state in this step; the updates produced by transaction simulation in the context of endorsement do not affect the state!

2.3. The submitting client collects an endorsement for a transaction and broadcasts it through the ordering service

The submitting client waits until it receives "enough" messages and signatures on (TRANSACTION-ENDORSED, tid, *, *) statements to conclude that the transaction proposal is endorsed. As discussed in Sec. 2.1.2., this may involve one or more round-trips of interaction with endorsers.

The exact number of "enough" depends on the chaincode endorsement policy (see also Section 3). If the endorsement policy is satisfied, the transaction has been endorsed; note that it is not yet committed. The collection of signed TRANSACTION-ENDORSED messages from endorsing peers which establishes that a transaction is endorsed is called an endorsement and denoted by endorsement.

If the submitting client does not manage to collect an endorsement for a transaction proposal, it abandons this transaction with an option to retry later.

For a transaction with a valid endorsement, we now start to use the ordering service. The submitting client invokes the ordering service using the broadcast(blob) call, where blob=endorsement. If the client does not have the capability of invoking the ordering service directly, it may proxy its broadcast through some peer of its choice. Such a peer must be trusted by the client not to remove any message from the endorsement, or otherwise the transaction may be deemed invalid. Notice, however, that a proxy peer cannot fabricate a valid endorsement.

2.4. The ordering service delivers a transaction to the peers

When an event deliver(seqno, prevhash, blob) occurs and a peer has applied all state updates for blobs with sequence number lower than seqno, a peer does the following:

  • It checks that blob.endorsement is valid according to the policy of the chaincode (blob.tran-proposal.chaincodeID) to which it refers.

  • In a typical case, it also verifies that the dependencies (blob.endorsement.tran-proposal.readset) have not been violated meanwhile. In more complex use cases, the tran-proposal fields in the endorsement may differ, and in this case the endorsement policy (Section 3) specifies how the state evolves.

Verification of dependencies can be implemented in different ways, according to a consistency property or "isolation guarantee" that is chosen for the state updates. Serializability is the default isolation guarantee, unless the chaincode endorsement policy specifies a different one. Serializability is provided by requiring the version associated with every key in the readset to be equal to that key's version in the state, and rejecting transactions that do not satisfy this requirement.

  • If all these checks pass, the transaction is deemed valid or committed. In this case, the peer marks the transaction with 1 in the bitmask of PeerLedger and applies blob.endorsement.tran-proposal.writeset to the blockchain state (if the tran-proposals are the same; otherwise the endorsement policy logic defines the function that takes blob.endorsement).

  • If the endorsement policy verification of blob.endorsement fails, the transaction is invalid and the peer marks the transaction with 0 in the bitmask of PeerLedger. It is important to note that invalid transactions do not change the state.

Note that this is sufficient to have all (correct) peers arrive at the same state after processing a deliver event (block) with a given sequence number. Namely, by the guarantees of the ordering service, all correct peers will receive an identical sequence of deliver(seqno, prevhash, blob) events. Since the evaluation of the endorsement policy and the evaluation of version dependencies in readset are deterministic, all correct peers will also come to the same conclusion on whether a transaction contained in a blob is valid. Hence, all peers commit and apply the same sequence of transactions and update their state in the same way.
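
The following minimal, hypothetical Go sketch shows the serializability check described above: a transaction commits only if every key in its readset still has, in the peer's state, the version recorded at simulation time:

// Serializability check against readset version dependencies.
package main

import "fmt"

type state map[string]uint64 // key -> current version (values elided)

// valid reports whether the readset dependencies still hold against s.
func valid(readset map[string]uint64, s state) bool {
	for k, ver := range readset {
		if s[k] != ver {
			return false // a concurrent update invalidated the dependency
		}
	}
	return true
}

func main() {
	s := state{"radish-price": 7}
	fmt.Println(valid(map[string]uint64{"radish-price": 7}, s)) // true: commit, bitmask 1
	s["radish-price"] = 8                                       // concurrent write bumps the version
	fmt.Println(valid(map[string]uint64{"radish-price": 7}, s)) // false: invalid, bitmask 0
}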

Illustration of the transaction flow (common-case path).

Figure 1. Illustration of one possible transaction flow (common-case path).

3. Endorsement policies

3.1. Endorsement policy specification

An endorsement policy is a condition on what endorses a transaction. Blockchain peers have a pre-specified set of endorsement policies, which are referenced by a deploy transaction that installs a specific chaincode. Endorsement policies can be parametrized, and these parameters can be specified by a deploy transaction.

To guarantee blockchain and security properties, the set of endorsement policies should be a set of proven policies with a limited set of functions, in order to ensure bounded execution time (termination), determinism, performance and security guarantees.

Dynamic addition of endorsement policies (e.g., by a deploy transaction at chaincode deploy time) is very sensitive in terms of bounded policy evaluation time (termination), determinism, performance and security guarantees. Therefore, dynamic addition of endorsement policies is not allowed, but can be supported in the future.

3.2. Transaction evaluation against endorsement policy

A transaction is declared valid only if it has been endorsed according to the policy. An invoke transaction for a chaincode will first have to obtain an endorsement that satisfies the chaincode's policy or it will not be committed. This takes place through the interaction between the submitting client and endorsing peers, as explained in Section 2.

Formally, the endorsement policy is a predicate on the endorsement, and potentially further state, that evaluates to TRUE or FALSE. For deploy transactions, the endorsement is obtained according to a system-wide policy (e.g., from the system chaincode).

An endorsement policy predicate refers to certain variables. Potentially it may refer to:

  1. keys or identities relating to the chaincode (found in the metadata of the chaincode), e.g., a set of endorsers;

  2. further metadata of the chaincode;

  3. elements of the endorsement and endorsement.tran-proposal;

  4. and potentially more.

The above list is ordered by increasing expressiveness and complexity; that is, it will be relatively simple to support policies that only refer to keys and identities of nodes.

The evaluation of an endorsement policy predicate must be deterministic. An endorsement shall be evaluated locally by every peer such that a peer does not need to interact with other peers, yet all correct peers evaluate the endorsement policy in the same way.

3.3. Example endorsement policies

The predicate may contain logical expressions and evaluates to TRUE or FALSE. Typically, the condition will use digital signatures on the transaction invocation issued by endorsing peers for the chaincode.

Suppose the chaincode specifies the endorser set E = {Alice, Bob, Charlie, Dave, Eve, Frank, George}. Some example policies:

  • A valid signature from all members of E on the same tran-proposal.

  • A valid signature from any single member of E.

  • Valid signatures on the same tran-proposal from endorsing peers according to the condition (Alice OR Bob) AND (any two of: Charlie, Dave, Eve, Frank, George).

  • Valid signatures on the same tran-proposal by any 5 out of the 7 endorsers. (More generally, for a chaincode with n > 3f endorsers, valid signatures by any 2f+1 out of the n endorsers, or by any group of more than (n+f)/2 endorsers.)

  • Suppose there is an assignment of "stake" or "weights" to the endorsers, like {Alice=49, Bob=15, Charlie=15, Dave=10, Eve=7, Frank=3, George=1}, where the total stake is 100: the policy requires valid signatures from a set that has a majority of the stake (i.e., a group with combined stake strictly greater than 50), such as {Alice, X} with any X different from George, or {everyone together except Alice}, and so on.

  • The assignment of stake in the previous example condition could be static (fixed in the metadata of the chaincode) or dynamic (e.g., dependent on the state of the chaincode and be modified during execution).

  • Valid signatures from (Alice OR Bob) on tran-proposal1 and valid signatures from (any two of: Charlie, Dave, Eve, Frank, George) on tran-proposal2, where tran-proposal1 and tran-proposal2 differ only in their endorsing peers and state updates.

How useful these policies are will depend on the application, on the desired resilience of the solution against failures or misbehavior of endorsers, and on various other properties.
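
As an illustration, the following hypothetical Go sketch evaluates the third example policy, (Alice OR Bob) AND (any two of: Charlie, Dave, Eve, Frank, George), given the set of endorsers whose signatures on the same tran-proposal verified correctly (signature verification itself is elided):

// Evaluation of one example endorsement policy predicate.
package main

import "fmt"

// satisfied evaluates the policy given the endorsers with valid signatures.
func satisfied(signed map[string]bool) bool {
	aliceOrBob := signed["Alice"] || signed["Bob"]
	others := 0
	for _, e := range []string{"Charlie", "Dave", "Eve", "Frank", "George"} {
		if signed[e] {
			others++
		}
	}
	return aliceOrBob && others >= 2
}

func main() {
	fmt.Println(satisfied(map[string]bool{"Bob": true, "Dave": true, "Eve": true})) // true
	fmt.Println(satisfied(map[string]bool{"Alice": true, "Charlie": true}))         // false
}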

4 (post-v1). Validated ledger and PeerLedger checkpointing (pruning)

4.1. Validated ledger (VLedger)

To maintain the abstraction of a ledger that contains only valid and committed transactions (as appears, e.g., in Bitcoin), peers may, in addition to the state and the ledger, maintain the Validated Ledger (or VLedger). This is a hash chain derived from the ledger by filtering out invalid transactions.

The construction of VLedger blocks (called here vBlocks) proceeds as follows. As PeerLedger blocks may contain invalid transactions (i.e., transactions with invalid endorsement or with invalid version dependencies), such transactions are filtered out by peers before a transaction from a block is added to a vBlock. Every peer does this by itself (e.g., by using the bitmask associated with PeerLedger). A vBlock is defined as a block from which the invalid transactions have been filtered out. Such vBlocks are inherently dynamic in size and may be empty. An illustration of vBlock construction is given in the figure below.

Illustration of vBlock formation

Figure 2. Illustration of validated ledger block (vBlock) formation from ledger (PeerLedger) blocks.

vBlocks are chained together to a hash chain by every peer. More specifically, every block of the validated ledger contains:

  • The hash of the previous vBlock.

  • The vBlock number.

  • An ordered list of all valid transactions committed by the peer since the last vBlock was computed (i.e., the list of valid transactions in the corresponding block).

  • The hash of the corresponding block (in PeerLedger) from which the current vBlock is derived.

All this information is concatenated and hashed by a peer, producing the hash of the vBlock in the validated ledger.

4.2. PeerLedger checkpointing

The ledger contains invalid transactions, which may not necessarily be recorded forever. However, peers cannot simply discard PeerLedger blocks and thereby prune PeerLedger once they establish the corresponding vBlocks. Namely, in this case, if a new peer joins the network, other peers could neither transfer the discarded blocks (pertaining to PeerLedger) to the joining peer, nor convince the joining peer of the validity of their vBlocks.

To facilitate pruning of PeerLedger, this document describes a checkpointing mechanism. This mechanism establishes the validity of vBlocks across the peer network and allows checkpointed vBlocks to replace the discarded PeerLedger blocks. This, in turn, reduces storage space, since there is no need to store invalid transactions. It also reduces the work needed to reconstruct the state for new peers that join the network (as they do not need to establish the validity of individual transactions when reconstructing the state by replaying PeerLedger, but may simply replay the state updates contained in the validated ledger).

4.2.1. Checkpointing protocol

Checkpointing is performed periodically by the peers every CHK blocks, where CHK is a configurable parameter. To initiate a checkpoint, the peers broadcast (e.g., gossip) to other peers the message <CHECKPOINT,blocknohash,blockno,stateHash,peerSig>, where blockno is the current block number and blocknohash is its respective hash, stateHash is the hash of the latest state (produced, e.g., by a Merkle hash) upon validation of block blockno, and peerSig is the peer's signature on (CHECKPOINT,blocknohash,blockno,stateHash), referring to the validated ledger.

A peer collects CHECKPOINT messages until it obtains sufficiently many correctly signed messages with matching blockno, blocknohash and stateHash to establish a valid checkpoint (see Sec. 4.2.2.).

Upon establishing a valid checkpoint for block number blockno with blocknohash, a peer:

  • if blockno>latestValidCheckpoint.blockno, then the peer assigns latestValidCheckpoint=(blocknohash,blockno),

  • stores the set of respective peer signatures that constitute a valid checkpoint into the set latestValidCheckpointProof,

  • stores the state corresponding to stateHash to latestValidCheckpointedState,

  • (optionally) prunes its PeerLedger up to block number blockno (inclusive).
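
A minimal, hypothetical sketch of collecting CHECKPOINT messages follows; the quorum parameter stands in for the checkpoint validity policy discussed in Sec. 4.2.2, and signature verification is elided:

// Collecting CHECKPOINT messages until a quorum establishes validity.
package main

import "fmt"

type Checkpoint struct {
	BlockNo     uint64
	BlockNoHash string
	StateHash   string
}

type collector struct {
	quorum int
	// sigs[checkpoint][peerID] records who vouched for which checkpoint.
	sigs map[Checkpoint]map[string]bool
}

func newCollector(quorum int) *collector {
	return &collector{quorum: quorum, sigs: map[Checkpoint]map[string]bool{}}
}

// add records a CHECKPOINT message from peerID and reports whether the
// checkpoint just became valid.
func (c *collector) add(cp Checkpoint, peerID string) bool {
	if c.sigs[cp] == nil {
		c.sigs[cp] = map[string]bool{}
	}
	c.sigs[cp][peerID] = true
	return len(c.sigs[cp]) >= c.quorum
}

func main() {
	c := newCollector(2)
	cp := Checkpoint{BlockNo: 100, BlockNoHash: "abc", StateHash: "def"}
	fmt.Println(c.add(cp, "Bob"))     // false: one signature so far
	fmt.Println(c.add(cp, "Charlie")) // true: quorum reached, pruning allowed
}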

4.2.2. Valid checkpoints

Clearly, the checkpointing protocol raises the following questions: When can a peer prune its PeerLedger? How many CHECKPOINT messages are "sufficiently many"? This is defined by a checkpoint validity policy, with (at least) two possible approaches, which may also be combined:

  • Local (peer-specific) checkpoint validity policy (LCVP). A local policy at a given peer p may specify a set of peers which peer p trusts and whose CHECKPOINT messages are sufficient to establish a valid checkpoint. For example, LCVP at peer Alice may define that Alice needs to receive a CHECKPOINT message from Bob, or from both Charlie and Dave.

  • Global checkpoint validity policy (GCVP). A checkpoint validity policy may be specified globally. This is similar to a local peer policy, except that it is stipulated at the system (blockchain) granularity, rather than peer granularity. For instance, GCVP may specify that:

    • each peer may trust a checkpoint if confirmed by 11 different peers.

    • in a specific deployment where every orderer is collocated with a peer in the same machine (i.e., trust domain) and where up to f orderers may be (Byzantine) faulty, each peer may trust a checkpoint if confirmed by f+1 different peers collocated with orderers.

Transaction Flow

This document outlines the transactional mechanics that take place during a standard asset exchange. The scenario includes two clients, A and B, who are buying and selling radishes. They each have a peer on the network through which they send their transactions and interact with the ledger.

_images/step0.png

Assumptions

This flow assumes that a channel is set up and running. The application user has registered and enrolled with the organization’s Certificate Authority (CA) and received back necessary cryptographic material, which is used to authenticate to the network.

The chaincode (containing a set of key value pairs representing the initial state of the radish market) is installed on the peers and instantiated on the channel. The chaincode contains logic defining a set of transaction instructions and the agreed upon price for a radish. An endorsement policy has also been set for this chaincode, stating that both peerA and peerB must endorse any transaction.

_images/step1.png
  1. Client A initiates a transaction

What’s happening? Client A is sending a request to purchase radishes. This request targets peerA and peerB, who are respectively representative of Client A and Client B. The endorsement policy states that both peers must endorse any transaction, therefore the request goes to peerA and peerB.

Next, the transaction proposal is constructed. An application leveraging a supported SDK (Node, Java, Python) utilizes one of the available APIs to generate a transaction proposal. The proposal is a request to invoke a chaincode function with certain input parameters, with the intent of reading and/or updating the ledger.

The SDK serves as a shim to package the transaction proposal into the properly architected format (protocol buffer over gRPC) and takes the user’s cryptographic credentials to produce a unique signature for this transaction proposal.

_images/step2.png
  2. Endorsing peers verify signature & execute the transaction

The endorsing peers verify (1) that the transaction proposal is well formed, (2) it has not been submitted already in the past (replay-attack protection), (3) the signature is valid (using the MSP), and (4) that the submitter (Client A, in the example) is properly authorized to perform the proposed operation on that channel (namely, each endorsing peer ensures that the submitter satisfies the channel’s Writers policy). The endorsing peers take the transaction proposal inputs as arguments to the invoked chaincode’s function. The chaincode is then executed against the current state database to produce transaction results including a response value, read set, and write set (i.e. key/value pairs representing an asset to create or update). No updates are made to the ledger at this point. The set of these values, along with the endorsing peer’s signature is passed back as a “proposal response” to the SDK which parses the payload for the application to consume.

Note

The MSP is a peer component that allows peers to verify transaction requests arriving from clients and to sign transaction results (endorsements). The writing policy is defined at channel creation time and determines which users are entitled to submit a transaction to that channel. For more information about membership, check out our Membership documentation.

_images/step3.png
  3. Proposal responses are inspected

The application verifies the endorsing peer signatures and compares the proposal responses to determine if the proposal responses are the same. If the chaincode only queried the ledger, the application would inspect the query response and would typically not submit the transaction to the ordering service. If the client application intends to submit the transaction to the ordering service to update the ledger, the application determines if the specified endorsement policy has been fulfilled before submitting (i.e. did peerA and peerB both endorse). The architecture is such that even if an application chooses not to inspect responses or otherwise forwards an unendorsed transaction, the endorsement policy will still be enforced by peers and upheld at the commit validation phase.

_images/step4.png
  4. Client assembles endorsements into a transaction

The application “broadcasts” the transaction proposal and response within a “transaction message” to the ordering service. The transaction will contain the read/write sets, the endorsing peers’ signatures and the Channel ID. The ordering service does not need to inspect the entire content of a transaction in order to perform its operation; it simply receives transactions from all channels in the network, orders them chronologically by channel, and creates blocks of transactions per channel.

_images/step5.png
  5. Transaction is validated and committed

The blocks of transactions are “delivered” to all peers on the channel. The transactions within the block are validated to ensure endorsement policy is fulfilled and to ensure that there have been no changes to ledger state for read set variables since the read set was generated by the transaction execution. Transactions in the block are tagged as being valid or invalid.

_images/step6.png
  6. Ledger updated

Each peer appends the block to the channel’s chain, and for each valid transaction the write sets are committed to current state database. An event is emitted, to notify the client application that the transaction (invocation) has been immutably appended to the chain, as well as notification of whether the transaction was validated or invalidated.

Note

Applications should listen for the transaction event after submitting a transaction, for example by using the submitTransaction API, which automatically listens for transaction events. Without listening for transaction events, you will not know whether your transaction has actually been ordered, validated, and committed to the ledger.

See the sequence diagram to better understand the transaction flow.


Hyperledger Fabric SDKs

Hyperledger Fabric intends to offer a number of SDKs for a wide variety of programming languages. The first two delivered are the Node.js and Java SDKs. We hope to provide Python, REST and Go SDKs in a subsequent release.

Service Discovery

Why do we need service discovery?

In order to execute chaincode on peers, submit transactions to orderers, and to be updated about the status of transactions, applications connect to an API exposed by an SDK.

However, the SDK needs a lot of information in order to allow applications to connect to the relevant network nodes. In addition to the CA and TLS certificates of the orderers and peers on the channel – as well as their IP addresses and port numbers – it must know the relevant endorsement policies as well as which peers have the chaincode installed (so the application knows which peers to send chaincode proposals to).

Prior to v1.2, this information was statically encoded. However, this implementation is not dynamically reactive to network changes (such as the addition of peers who have installed the relevant chaincode, or peers that are temporarily offline). Static configurations also do not allow applications to react to changes of the endorsement policy itself (as might happen when a new organization joins a channel).

In addition, the client application has no way of knowing which peers have updated ledgers and which do not. As a result, the application might submit proposals to peers whose ledger data is not in sync with the rest of the network, resulting in transactions being invalidated upon commit and wasting resources as a consequence.

The discovery service improves this process by having the peers compute the needed information dynamically and present it to the SDK in a consumable manner.

How service discovery works in Fabric

The application is bootstrapped knowing about a group of peers which are trusted by the application developer/administrator to provide authentic responses to discovery queries. A good candidate peer to be used by the client application is one that is in the same organization. Note that in order for peers to be known to the discovery service, they must have an EXTERNAL_ENDPOINT defined. To see how to do this, check out our Service Discovery CLI documentation.

The application issues a configuration query to the discovery service and obtains all the static information it would have otherwise needed to communicate with the rest of the nodes of the network. This information can be refreshed at any point by sending a subsequent query to the discovery service of a peer.

The service runs on peers – not on the application – and uses the network metadata information maintained by the gossip communication layer to find out which peers are online. It also fetches information, such as any relevant endorsement policies, from the peer’s state database.

With service discovery, applications no longer need to specify which peers they need endorsements from. The SDK can simply send a query to the discovery service asking which peers are needed given a channel and a chaincode ID. The discovery service will then compute a descriptor comprised of two objects:

  1. Layouts: a list of groups of peers and a corresponding amount of peers from each group which should be selected.

  2. Group to peer mapping: from the groups in the layouts to the peers of the channel. In practice, each group would most likely be peers that represent individual organizations, but because the service API is generic and ignorant of organizations this is just a “group”.

The following is an example of a descriptor from the evaluation of a policy of AND(Org1, Org2) where there are two peers in each of the organizations.

Layouts: [
     QuantitiesByGroup: {
       "Org1": 1,
       "Org2": 1,
     }
],
EndorsersByGroups: {
  "Org1": [peer0.org1, peer1.org1],
  "Org2": [peer0.org2, peer1.org2]
}

In other words, the endorsement policy requires a signature from one peer in Org1 and one peer in Org2. And it provides the names of available peers in those orgs who can endorse (peer0 and peer1 in both Org1 and in Org2).

The SDK then selects a random layout from the list. In the example above, the endorsement policy is Org1 AND Org2. If instead it was an OR policy, the SDK would randomly select either Org1 or Org2, since a signature from a peer from either Org would satisfy the policy.

After the SDK has selected a layout, it selects from the peers in the layout based on a criteria specified on the client side (the SDK can do this because it has access to metadata like ledger height). For example, it can prefer peers with higher ledger heights over others – or to exclude peers that the application has discovered to be offline – according to the number of peers from each group in the layout. If no single peer is preferable based on the criteria, the SDK will randomly select from the peers that best meet the criteria.
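
The following hypothetical Go sketch illustrates this client-side selection: pick a random layout, then for each group take the required number of peers, preferring higher ledger heights. The types and the preference criterion are assumptions of this example, not the SDK's actual API:

// Client-side selection of endorsers from a discovery descriptor.
package discovery

import (
	"math/rand"
	"sort"
)

type Peer struct {
	Endpoint     string
	LedgerHeight uint64
}

// Layout maps a group name to the number of peers required from it.
type Layout map[string]int

// selectEndorsers picks one layout at random and, for each group, the
// required number of peers with the highest ledger heights.
func selectEndorsers(layouts []Layout, groups map[string][]Peer) []Peer {
	layout := layouts[rand.Intn(len(layouts))]
	var chosen []Peer
	for group, need := range layout {
		peers := append([]Peer(nil), groups[group]...)
		sort.Slice(peers, func(i, j int) bool {
			return peers[i].LedgerHeight > peers[j].LedgerHeight
		})
		if need > len(peers) {
			need = len(peers) // defensive: a layout should never ask for more
		}
		chosen = append(chosen, peers[:need]...)
	}
	return chosen
}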

Capabilities of the discovery service

The discovery service can respond to the following queries:

  • Configuration query: Returns the MSPConfig of all organizations in the channel along with the orderer endpoints of the channel.

  • Peer membership query: Returns the peers that have joined the channel.

  • Endorsement query: Returns an endorsement descriptor for given chaincode(s) in a channel.

  • Local peer membership query: Returns the local membership information of the peer that responds to the query. By default the client needs to be an administrator for the peer to respond to this query.

Special requirements

When the peer is running with TLS enabled the client must provide a TLS certificate when connecting to the peer. If the peer isn’t configured to verify client certificates (clientAuthRequired is false), this TLS certificate can be self-signed.

Channels

A Hyperledger Fabric channel is a private "subnet" of communication between two or more specific network members, for the purpose of conducting private and confidential transactions. A channel is defined by members (organizations), anchor peers per member, the shared ledger, chaincode application(s) and the ordering service node(s). Each transaction on the network is executed on a channel, where each party must be authenticated and authorized to transact on that channel. Each peer that joins a channel has its own identity given by a membership services provider (MSP), which authenticates each peer to its channel peers and services.

To create a new channel, the client SDK calls configuration system chaincode and references properties such as anchor peers and members (organizations). This request creates a genesis block for the channel ledger, which stores configuration information about the channel policies, members and anchor peers. When adding a new member to an existing channel, either this genesis block or, if applicable, a more recent reconfiguration block, is shared with the new member.

Note

For more details on the properties and proto structures of config transactions, refer to the configtx section.

The election of a leading peer for each member on a channel determines which peer communicates with the ordering service on behalf of the member. If no leader is identified, an algorithm can be used to identify the leader. The consensus service orders transactions and delivers them, in a block, to each leading peer, which then distributes the block to its member peers, and across the channel, using the gossip protocol.

Although any one anchor peer can belong to multiple channels, and therefore maintain multiple ledgers, no ledger data can pass from one channel to another. This separation of ledgers, by channel, is defined and implemented by configuration chaincode, the identity membership service and the gossip data dissemination protocol. The dissemination of data, which includes information on transactions, ledger state and channel membership, is restricted to peers with verifiable membership on the channel. This isolation of peers and ledger data, by channel, allows network members that require private and confidential transactions to coexist with business competitors and other restricted members on the same blockchain network.

Capability Requirements

Because Fabric is a distributed system that will usually involve multiple organizations (sometimes in different countries or even continents), it is possible (and typical) that many different versions of the Fabric code will exist in the network. Nevertheless, it is vital that networks process transactions in the same way so that everyone has the same view of the current network state.

This means that every network, and every channel within that network, must define a set of what we call "capabilities" to be able to participate in processing transactions. For example, Fabric v1.1 introduced new MSP role types of "Peer" and "Client". However, if a v1.0 peer does not understand these new role types, it will not be able to appropriately evaluate an endorsement policy that references them. This means that before the new role types may be used, the network must agree to enable the v1.1 channel capability requirement, ensuring that all peers come to the same decision.

Only binaries which support the required capabilities will be able to participate in the channel, and newer binary versions will not enable new validation logic until the corresponding capability is enabled. In this way, capability requirements ensure that even with disparate builds and versions, it is not possible for the network to suffer a state fork.

Defining Capability Requirements

Capability requirements are defined per channel in the channel configuration (found in the channel's most recent configuration block). The channel configuration contains three locations, each of which defines a capability of a different type.

Capability Type    Canonical Path                       JSON Path

Channel            /Channel/Capabilities                .channel_group.values.Capabilities

Orderer            /Channel/Orderer/Capabilities        .channel_group.groups.Orderer.values.Capabilities

Application        /Channel/Application/Capabilities    .channel_group.groups.Application.values.Capabilities

  • Channel: these capabilities apply to both the orderer and the peers, and are located in the root Channel group.

  • Orderer: these capabilities apply to orderers only, and are located in the Orderer group.

  • Application: these capabilities apply to peers only, and are located in the Application group.

The capabilities are broken into these groups in order to align with the existing administrative structure. Updating orderer capabilities is something the ordering organizations would manage independently of the application organizations. Similarly, updating application capabilities is something only the application admins would manage. By splitting the capabilities between "Orderer" and "Application", a hypothetical network could run a v1.6 ordering service while supporting a v1.3 peer application network.

However, some capabilities cross both the "Application" and "Orderer" groups. As we saw earlier, adding a new MSP role type is something both the orderer and application admins need to agree on and recognize. The orderer must understand the meaning of MSP roles in order to allow the transactions through ordering, while the peers must understand the roles in order to validate the transaction. These kinds of capabilities, which span both the application and orderer components, are defined in the top-level "Channel" group.

Note

It is possible to define the Channel capabilities at version v1.3 while the Orderer and Application capabilities are defined at versions v1.1 and v1.4, respectively. Enabling a capability at the "Channel" group level does not imply that this same capability is available at the more specific "Orderer" and "Application" group levels.

Setting Capabilities

Capabilities are set as part of the channel configuration (either as part of the initial configuration, which we'll talk about in a moment, or as part of a reconfiguration).

Note

We have two documents that deal with different aspects of channel reconfiguration. First, we have a tutorial (channel_update_tutorial) that will take you through the process. And we also have a document (config_update) which outlines the different kinds of updates that are possible and takes a more comprehensive look at the signing process.

Because new channels copy the configuration of the orderer system channel by default, new channels will automatically be configured to work with the capabilities of the orderer system channel, plus the application capabilities specified by the channel creation transaction. Channels that already exist, however, must be reconfigured.

The schema for the capabilities value is defined in the protobuf as:

message Capabilities {
      map<string, Capability> capabilities = 1;
}

message Capability { }

As an example, rendered in JSON:

{
    "capabilities": {
        "V1_1": {}
    }
}

Capabilities in an Initial Configuration

In the configtx.yaml file distributed in the config directory of the release artifacts, there is a Capabilities section which enumerates the possible capabilities for each capability type (Channel, Orderer, and Application).

The simplest way to enable capabilities is to pick a v1.1 sample profile and customize it for your network. For example:

SampleSingleMSPSoloV1_1:
    Capabilities:
        <<: *GlobalCapabilities
    Orderer:
        <<: *OrdererDefaults
        Organizations:
            - *SampleOrg
        Capabilities:
            <<: *OrdererCapabilities
    Consortiums:
        SampleConsortium:
            Organizations:
                - *SampleOrg

Note that there is a Capabilities section defined at the root level (for the channel capabilities) and at the Orderer level (for orderer capabilities). The sample above uses a YAML reference to include the capabilities as defined at the bottom of the YAML.

When defining the orderer system channel there is no Application section, as those capabilities are defined during the creation of the application channel. To define a new channel's application capabilities at channel creation time, the application admins should model their channel creation transaction after the SampleSingleMSPChannelV1_1 profile.

SampleSingleMSPChannelV1_1:
     Consortium: SampleConsortium
     Application:
         Organizations:
             - *SampleOrg
         Capabilities:
             <<: *ApplicationCapabilities

Here, the Application section has a new element, Capabilities, which references the ApplicationCapabilities section defined at the end of the YAML.

Note

The capabilities for the Channel and Orderer sections are inherited from the definition in the ordering system channel and are automatically included by the orderer during the process of channel creation.

CouchDB as the State Database

State Database options

State database options include LevelDB and CouchDB. LevelDB is the default key-value state database embedded in the peer process. CouchDB is an optional alternative external state database. Like the LevelDB key-value store, CouchDB can store any binary data that is modeled in chaincode (CouchDB attachment functionality is used internally for non-JSON binary data). But as a JSON document store, CouchDB additionally enables rich query against the chaincode data, when chaincode values (e.g. assets) are modeled as JSON data.

Both LevelDB and CouchDB support core chaincode operations such as getting and setting a key (asset), and querying based on keys. Keys can be queried by range, and composite keys can be modeled to enable equivalence queries against multiple parameters. For example a composite key of owner,asset_id can be used to query all assets owned by a certain entity. These key-based queries can be used for read-only queries against the ledger, as well as in transactions that update the ledger.

If you model assets as JSON and use CouchDB, you can also perform complex rich queries against the chaincode data values, using the CouchDB JSON query language within chaincode. These types of queries are excellent for understanding what is on the ledger. Proposal responses for these types of queries are typically useful to the client application, but are not typically submitted as transactions to the ordering service. In fact, there is no guarantee the result set is stable between chaincode execution and commit time for rich queries, and therefore rich queries are not appropriate for use in update transactions, unless your application can guarantee the result set is stable between chaincode execution time and commit time, or can handle potential changes in subsequent transactions. For example, if you perform a rich query for all assets owned by Alice and transfer them to Bob, a new asset may be assigned to Alice by another transaction between chaincode execution time and commit time, and you would miss this “phantom” item.

CouchDB runs as a separate database process alongside the peer, therefore there are additional considerations in terms of setup, management, and operations. You may consider starting with the default embedded LevelDB, and move to CouchDB if you require the additional complex rich queries. It is a good practice to model chaincode asset data as JSON, so that you have the option to perform complex rich queries if needed in the future.

Note

The key for a CouchDB JSON document cannot begin with an underscore (“_”). Also, a JSON document cannot use the following field names at the top level. These are reserved for internal use.

  • Any field beginning with an underscore, "_"

  • ~version

Using CouchDB from Chaincode

Chaincode queries

Most of the chaincode shim APIs can be utilized with either LevelDB or CouchDB state database, e.g. GetState, PutState, GetStateByRange, GetStateByPartialCompositeKey. Additionally when you utilize CouchDB as the state database and model assets as JSON in chaincode, you can perform rich queries against the JSON in the state database by using the GetQueryResult API and passing a CouchDB query string. The query string follows the CouchDB JSON query syntax.

The marbles02 fabric sample demonstrates use of CouchDB queries from chaincode. It includes a queryMarblesByOwner() function that demonstrates parameterized queries by passing an owner id into chaincode. It then queries the state data for JSON documents matching the docType of “marble” and the owner id using the JSON query syntax:

{"selector":{"docType":"marble","owner":<OWNER_ID>}}
CouchDB pagination

Fabric supports paging of query results for rich queries and range based queries. APIs supporting pagination allow the use of page size and bookmarks to be used for both range and rich queries. To support efficient pagination, the Fabric pagination APIs must be used. Specifically, the CouchDB limit keyword will not be honored in CouchDB queries since Fabric itself manages the pagination of query results and implicitly sets the pageSize limit that is passed to CouchDB.

If a pageSize is specified using the paginated query APIs (GetStateByRangeWithPagination(), GetStateByPartialCompositeKeyWithPagination(), and GetQueryResultWithPagination()), a set of results (bound by the pageSize) will be returned to the chaincode along with a bookmark. The bookmark can be returned from chaincode to invoking clients, which can use the bookmark in a follow on query to receive the next “page” of results.

The pagination APIs are for use in read-only transactions only, the query results are intended to support client paging requirements. For transactions that need to read and write, use the non-paginated chaincode query APIs. Within chaincode you can iterate through result sets to your desired depth.
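
As a sketch of how this looks in chaincode (assuming the v1.x Go shim at github.com/hyperledger/fabric/core/chaincode/shim; the function name and the JSON response shape are illustrative, not from a Fabric sample), a read-only paginated rich query might be written as:

// Paginated rich query returning one page of keys plus a bookmark.
package example

import (
	"encoding/json"

	"github.com/hyperledger/fabric/core/chaincode/shim"
	pb "github.com/hyperledger/fabric/protos/peer"
)

func queryMarblesPage(stub shim.ChaincodeStubInterface, query string, pageSize int32, bookmark string) pb.Response {
	iter, meta, err := stub.GetQueryResultWithPagination(query, pageSize, bookmark)
	if err != nil {
		return shim.Error(err.Error())
	}
	defer iter.Close()

	var keys []string
	for iter.HasNext() {
		kv, err := iter.Next()
		if err != nil {
			return shim.Error(err.Error())
		}
		keys = append(keys, kv.Key)
	}
	// Return the page plus the bookmark so the client can request the
	// next "page" of results in a follow-on query.
	out, _ := json.Marshal(struct {
		Keys     []string `json:"keys"`
		Bookmark string   `json:"bookmark"`
	}{keys, meta.Bookmark})
	return shim.Success(out)
}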

Regardless of whether the pagination APIs are utilized, all chaincode queries are bound by totalQueryLimit (default 100000) from core.yaml. This is the maximum number of results that chaincode will iterate through and return to the client, in order to avoid accidental or malicious long-running queries.

Note

Regardless of whether chaincode uses paginated queries or not, the peer will query CouchDB in batches based on internalQueryLimit (default 1000) from core.yaml. This behavior ensures reasonably sized result sets are passed between the peer and CouchDB when executing chaincode, and is transparent to chaincode and the calling client.

An example using pagination is included in the Using CouchDB tutorial.

CouchDB indexes

Indexes in CouchDB are required in order to make JSON queries efficient and are required for any JSON query with a sort. Indexes can be packaged alongside chaincode in a /META-INF/statedb/couchdb/indexes directory. Each index must be defined in its own text file with extension *.json with the index definition formatted in JSON following the CouchDB index JSON syntax. For example, to support the above marble query, a sample index on the docType and owner fields is provided:

{"index":{"fields":["docType","owner"]},"ddoc":"indexOwnerDoc", "name":"indexOwner","type":"json"}

The sample index can be found here.

Any index in the chaincode’s META-INF/statedb/couchdb/indexes directory will be packaged up with the chaincode for deployment. When the chaincode is both installed on a peer and instantiated on one of the peer’s channels, the index will automatically be deployed to the peer’s channel and chaincode specific state database (if it has been configured to use CouchDB). If you install the chaincode first and then instantiate the chaincode on the channel, the index will be deployed at chaincode instantiation time. If the chaincode is already instantiated on a channel and you later install the chaincode on a peer, the index will be deployed at chaincode installation time.

Upon deployment, the index will automatically be utilized by chaincode queries. CouchDB can automatically determine which index to use based on the fields being used in a query. Alternatively, in the selector query the index can be specified using the use_index keyword.

The same index may exist in subsequent versions of the chaincode that gets installed. To change the index, use the same index name but alter the index definition. Upon installation/instantiation, the index definition will get re-deployed to the peer’s state database.

If you have a large volume of data already, and later install the chaincode, the index creation upon installation may take some time. Similarly, if you have a large volume of data already and instantiate a subsequent version of the chaincode, the index creation may take some time. Avoid calling chaincode functions that query the state database at these times as the chaincode query may time out while the index is getting initialized. During transaction processing, the indexes will automatically get refreshed as blocks are committed to the ledger.

CouchDB Configuration

CouchDB is enabled as the state database by changing the stateDatabase configuration option from goleveldb to CouchDB. Additionally, the couchDBAddress needs to be configured to point to the CouchDB to be used by the peer. The username and password properties should be populated with an admin username and password if CouchDB is configured with a username and password. Additional options are provided in the couchDBConfig section and are documented in place. Changes to core.yaml will be effective immediately after restarting the peer.

You can also pass in docker environment variables to override core.yaml values, for example CORE_LEDGER_STATE_STATEDATABASE and CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS.

Below is the stateDatabase section from core.yaml:

state:
  # stateDatabase - options are "goleveldb", "CouchDB"
  # goleveldb - default state database stored in goleveldb.
  # CouchDB - store state database in CouchDB
  stateDatabase: goleveldb
  # Limit on the number of records to return per query
  totalQueryLimit: 10000
  couchDBConfig:
     # It is recommended to run CouchDB on the same server as the peer, and
     # not map the CouchDB container port to a server port in docker-compose.
     # Otherwise proper security must be provided on the connection between
     # CouchDB client (on the peer) and server.
     couchDBAddress: couchdb:5984
     # This username must have read and write authority on CouchDB
     username:
     # The password is recommended to pass as an environment variable
     # during start up (e.g. LEDGER_COUCHDBCONFIG_PASSWORD).
     # If it is stored here, the file must be access control protected
     # to prevent unintended users from discovering the password.
     password:
     # Number of retries for CouchDB errors
     maxRetries: 3
     # Number of retries for CouchDB errors during peer startup
     maxRetriesOnStartup: 10
     # CouchDB request timeout (unit: duration, e.g. 20s)
     requestTimeout: 35s
     # Limit on the number of records per each CouchDB query
     # Note that chaincode queries are only bound by totalQueryLimit.
     # Internally the chaincode may execute multiple CouchDB queries,
     # each of size internalQueryLimit.
     internalQueryLimit: 1000
     # Limit on the number of records per CouchDB bulk update batch
     maxBatchUpdateSize: 1000
     # Warm indexes after every N blocks.
     # This option warms any indexes that have been
     # deployed to CouchDB after every N blocks.
     # A value of 1 will warm indexes after every block commit,
     # to ensure fast selector queries.
     # Increasing the value may improve write efficiency of peer and CouchDB,
     # but may degrade query response time.
     warmIndexesAfterNBlocks: 1

CouchDB hosted in docker containers supplied with Hyperledger Fabric has the capability of setting the CouchDB username and password with the COUCHDB_USER and COUCHDB_PASSWORD environment variables using Docker Compose scripting.

For CouchDB installations outside of the docker images supplied with Fabric, the local.ini file of that installation must be edited to set the admin username and password.

Docker compose scripts only set the username and password at the creation of the container. The local.ini file must be edited if the username or password is to be changed after creation of the container.

Note

CouchDB peer options are read on each peer startup.

Good practices for queries

Avoid using chaincode for queries that will result in a scan of the entire CouchDB database. Full length database scans will result in long response times and will degrade the performance of your network. You can take some of the following steps to avoid long queries:

  • When using JSON queries:

    • Be sure to create indexes in the chaincode package.

    • Avoid query operators such as $or, $in and $regex, which lead to full database scans.

  • For range queries, composite key queries, and JSON queries:

    • Utilize paging support (as of v1.3) instead of one large result set.

  • If you want to build a dashboard or collect aggregate data as part of your application, you can query an off-chain database that replicates the data from your blockchain network. This will allow you to query and analyze the blockchain data in a data store optimized for your needs, without degrading the performance of your network or disrupting transactions. To achieve this, applications may use block or chaincode events to write transaction data to an off-chain database or analytics engine. For each block received, the block listener application would iterate through the block transactions and build a data store using the key/value writes from each valid transaction’s rwset. The Peer channel-based event services provide replayable events to ensure the integrity of downstream data stores.

Peer channel-based event services

General overview

In previous versions of Fabric, the peer event service was known as the event hub. This service sent events any time a new block was added to the peer’s ledger, regardless of the channel to which that block pertained, and it was only accessible to members of the organization running the eventing peer (i.e., the one being connected to for events).

Starting with v1.1, there are two new services which provide events. These services use an entirely different design to provide events on a per-channel basis. This means that registration for events occurs at the level of the channel instead of the peer, allowing for fine-grained control over access to the peer’s data. Requests to receive events are accepted from identities outside of the peer’s organization (as defined by the channel configuration). This also provides greater reliability and a way to receive events that may have been missed (whether due to a connectivity issue or because the peer is joining a network that has already been running).

Available services

  • Deliver

This service sends entire blocks that have been committed to the ledger. If any events were set by a chaincode, these can be found within the ChaincodeActionPayload of the block.

  • DeliverFiltered

This service sends “filtered” blocks, minimal sets of information about blocks that have been committed to the ledger. It is intended to be used in a network where owners of the peers wish for external clients to primarily receive information about their transactions and the status of those transactions. If any events were set by a chaincode, these can be found within the FilteredChaincodeAction of the filtered block.

Note

The payload of chaincode events will not be included in filtered blocks.

How to register for events

Registration for events from either service is done by sending an envelope containing a deliver seek info message to the peer that contains the desired start and stop positions and the seek behavior (block until ready or fail if not ready). There are helper variables SeekOldest and SeekNewest that can be used to indicate the oldest (i.e. first) block or the newest (i.e. last) block on the ledger. To have the services send events indefinitely, the SeekInfo message should include a stop position of MAXINT64.

Note

If mutual TLS is enabled on the peer, the TLS certificate hash must be set in the envelope’s channel header.

By default, both services use the Channel Readers policy to determine whether to authorize requesting clients for events.
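As a hedged illustration in Go against the v1.x protos (the channel header, signing, and envelope wrapping are elided), the SeekInfo portion of such a request might be built like this:

import (
	"math"

	ab "github.com/hyperledger/fabric/protos/orderer"
)

// newSeekInfo requests events from the oldest block onwards, indefinitely
// (stop position of MaxInt64), blocking until each block is ready.
func newSeekInfo() *ab.SeekInfo {
	return &ab.SeekInfo{
		Start: &ab.SeekPosition{
			Type: &ab.SeekPosition_Oldest{Oldest: &ab.SeekOldest{}},
		},
		Stop: &ab.SeekPosition{
			Type: &ab.SeekPosition_Specified{
				Specified: &ab.SeekSpecified{Number: math.MaxInt64},
			},
		},
		Behavior: ab.SeekInfo_BLOCK_UNTIL_READY,
	}
}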

Overview of deliver response messages

The event services send back DeliverResponse messages.

Each message contains one of the following:

  • status – HTTP status code. Both services will return the appropriate failure code if any failure occurs; otherwise, they return 200 - SUCCESS once the service has completed sending all information requested by the SeekInfo message.

  • block – returned only by the Deliver service.

  • filtered block – returned only by the DeliverFiltered service.

A filtered block contains:

  • channel ID.

  • number (i.e. the block number).

  • array of filtered transactions.

    • transaction ID.

      • type (e.g. ENDORSER_TRANSACTION, CONFIG).

      • transaction validation code.

    • filtered transaction actions.

      • array of filtered chaincode actions.

        • chaincode event for the transaction (with the payload nilled out).

SDK event documentation

For further details on using the event services, refer to the SDK documentation.

Private Data

Note

This topic assumes an understanding of the concepts described in the private data documentation.

Private data collection definition

A collection definition contains one or more collections, each having a policy definition listing the organizations in the collection, as well as properties used to control dissemination of private data at endorsement time, and, optionally, whether the data will be purged.

Beginning with the Fabric chaincode lifecycle introduced in the Fabric v2.0 Alpha, the collection definition is part of the chaincode definition. The collection is approved by channel members, and then deployed when the chaincode definition is committed to the channel. The collection file needs to be the same for all channel members. If you are using the peer CLI to approve and commit the chaincode definition, use the --collections-config flag to specify the path to the collection definition file. If you are using the Fabric SDK for Node.js, visit How to install and start your chaincode. To deploy a private data collection using the previous lifecycle process, use the --collections-config flag when instantiating your chaincode.

Collection definitions are composed of the following properties:

  • name: Name of the collection.

  • policy: The private data collection distribution policy defines which organizations' peers are allowed to persist the collection data, expressed using the Signature policy syntax, with each member being included in an OR signature policy list. To support read/write transactions, the private data distribution policy must define a broader set of organizations than the chaincode endorsement policy, as peers must have the private data in order to endorse proposed transactions. For example, in a channel with ten organizations, five of the organizations might be included in a private data collection distribution policy, but the endorsement policy might call for any three of the organizations to endorse.

  • requiredPeerCount: Minimum number of peers (across authorized organizations) that each endorsing peer must successfully disseminate private data to before the peer signs the endorsement and returns the proposal response back to the client. Requiring dissemination as a condition of endorsement will ensure that private data is available in the network even if the endorsing peer(s) become unavailable. When requiredPeerCount is 0, it means that no distribution is required, but there may be some distribution if maxPeerCount is greater than zero. A requiredPeerCount of 0 would typically not be recommended, as it could lead to loss of private data in the network if the endorsing peer(s) become unavailable. Typically you would want to require at least some distribution of the private data at endorsement time to ensure redundancy of the private data on multiple peers in the network.

  • maxPeerCount: For data redundancy purposes, the maximum number of other peers (across authorized organizations) that each endorsing peer will attempt to distribute the private data to. If an endorsing peer becomes unavailable between endorsement time and commit time, other peers that are collection members but did not yet receive the private data will be able to pull the private data from peers the private data was disseminated to. If this value is set to 0, the private data is not disseminated at endorsement time, forcing private data pulls against endorsing peers on all authorized peers at commit time.

  • blockToLive: Represents how long the data should live on the private database in terms of blocks. The data will live for this specified number of blocks on the private database, and after that it will get purged, making this data obsolete from the network so that it cannot be queried from chaincode and cannot be made available to requesting peers. To keep private data indefinitely, that is, to never purge private data, set the blockToLive property to 0.

  • memberOnlyRead: A value of true indicates that peers automatically enforce that only clients belonging to one of the collection member organizations are allowed read access to private data. If a client from a non-member organization attempts to execute a chaincode function that performs a read of the private data, the chaincode invocation is terminated with an error. Use a value of false if you would like to encode more granular access control within individual chaincode functions.

Here is an example collection definition JSON file, containing an array of two collection definitions:

[
 {
    "name": "collectionMarbles",
    "policy": "OR('Org1MSP.member', 'Org2MSP.member')",
    "requiredPeerCount": 0,
    "maxPeerCount": 3,
    "blockToLive":1000000,
    "memberOnlyRead": true
 },
 {
    "name": "collectionMarblePrivateDetails",
    "policy": "OR('Org1MSP.member')",
    "requiredPeerCount": 0,
    "maxPeerCount": 3,
    "blockToLive":3,
    "memberOnlyRead": true
 }
]

This example uses the organizations from the BYFN sample network, Org1 and Org2 . The policy in the collectionMarbles definition authorizes both organizations to the private data. This is a typical configuration when the chaincode data needs to remain private from the ordering service nodes. However, the policy in the collectionMarblePrivateDetails definition restricts access to a subset of organizations in the channel (in this case Org1 ). In a real scenario, there would be many organizations in the channel, with two or more organizations in each collection sharing private data between them.

Private data dissemination

Since private data is not included in the transactions that get submitted to the ordering service, and therefore not included in the blocks that get distributed to all peers in a channel, the endorsing peer plays an important role in disseminating private data to other peers of authorized organizations. This ensures the availability of private data in the channel’s collection, even if endorsing peers become unavailable after their endorsement. To assist with this dissemination, the maxPeerCount and requiredPeerCount properties in the collection definition control the degree of dissemination at endorsement time.

If the endorsing peer cannot successfully disseminate the private data to at least the requiredPeerCount, it will return an error back to the client. The endorsing peer will attempt to disseminate the private data to peers of different organizations, in an effort to ensure that each authorized organization has a copy of the private data. Since transactions are not committed at chaincode execution time, the endorsing peer and recipient peers store a copy of the private data in a local transient store alongside their blockchain until the transaction is committed.

When authorized peers do not have a copy of the private data in their transient data store at commit time (either because they were not an endorsing peer or because they did not receive the private data via dissemination at endorsement time), they will attempt to pull the private data from another authorized peer, for a configurable amount of time based on the peer property peer.gossip.pvtData.pullRetryThreshold in the peer configuration core.yaml file.

Note

The peers being asked for private data will only return the private data if the requesting peer is a member of the collection as defined by the private data dissemination policy.

Considerations when using pullRetryThreshold:

  • If the requesting peer is able to retrieve the private data within the pullRetryThreshold, it will commit the transaction to its ledger (including the private data hash), and store the private data in its state database, logically separated from other channel state data.

  • If the requesting peer is not able to retrieve the private data within the pullRetryThreshold, it will commit the transaction to its blockchain (including the private data hash), without the private data.

  • If the peer was entitled to the private data but it is missing, then that peer will not be able to endorse future transactions that reference the missing private data - a chaincode query for a key that is missing will be detected (based on the presence of the key’s hash in the state database), and the chaincode will receive an error.

Therefore, it is important to set the requiredPeerCount and maxPeerCount properties large enough to ensure the availability of private data in your channel. For example, if each of the endorsing peers becomes unavailable before the transaction commits, the requiredPeerCount and maxPeerCount properties will have ensured the private data is available on other peers.

Note

For collections to work, it is important to have cross organizational gossip configured correctly. Refer to our documentation on Gossip data dissemination protocol, paying particular attention to the “anchor peers” and “external endpoint” configuration.

Referencing collections from chaincode

A set of shim APIs are available for setting and retrieving private data.

The same chaincode data operations can be applied to channel state data and private data, but in the case of private data, a collection name is specified along with the data in the chaincode APIs, for example PutPrivateData(collection,key,value) and GetPrivateData(collection,key).

A single chaincode can reference multiple collections.
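For illustration, a minimal Go chaincode sketch writing to the two collections from the example definition above (error handling kept minimal):

import "github.com/hyperledger/fabric/core/chaincode/shim"

// setMarble writes the public attributes of a marble to collectionMarbles
// and the sensitive attributes to collectionMarblePrivateDetails.
func setMarble(stub shim.ChaincodeStubInterface, key string, public, private []byte) error {
	if err := stub.PutPrivateData("collectionMarbles", key, public); err != nil {
		return err
	}
	return stub.PutPrivateData("collectionMarblePrivateDetails", key, private)
}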

How to pass private data in a chaincode proposal

Since the chaincode proposal gets stored on the blockchain, it is also important not to include private data in the main part of the chaincode proposal. A special field in the chaincode proposal called the transient field can be used to pass private data from the client (or data that chaincode will use to generate private data), to chaincode invocation on the peer. The chaincode can retrieve the transient field by calling the GetTransient() API. This transient field gets excluded from the channel transaction.
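A minimal Go sketch of the peer-side retrieval, assuming the client placed the private payload under an illustrative "marble" key in the transient map:

import (
	"fmt"

	"github.com/hyperledger/fabric/core/chaincode/shim"
)

// getTransientMarble reads the private payload out of the proposal's
// transient field; the payload never appears in the channel transaction.
// The "marble" key name is illustrative only.
func getTransientMarble(stub shim.ChaincodeStubInterface) ([]byte, error) {
	transMap, err := stub.GetTransient()
	if err != nil {
		return nil, err
	}
	value, ok := transMap["marble"]
	if !ok {
		return nil, fmt.Errorf("transient map must contain a \"marble\" key")
	}
	return value, nil
}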

Access control for private data

Until version 1.3, access control to private data based on collection membership was enforced for peers only. Access control based on the organization of the chaincode proposal submitter was required to be encoded in chaincode logic. Starting in v1.4 a collection configuration option memberOnlyRead can automatically enforce access control based on the organization of the chaincode proposal submitter. For more information about collection configuration definitions and how to set them, refer back to the Private data collection definition section of this topic.

Note

If you would like more granular access control, you can set memberOnlyRead to false. You can then apply your own access control logic in chaincode, for example by calling the GetCreator() chaincode API or using the client identity chaincode library .
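For instance, a minimal sketch using the cid package shipped with Fabric to restrict a function to callers from a single organization (the MSP ID shown is an assumption for illustration):

import (
	"fmt"

	"github.com/hyperledger/fabric/core/chaincode/lib/cid"
	"github.com/hyperledger/fabric/core/chaincode/shim"
)

// requireOrg1 rejects callers whose signing identity does not belong to
// Org1MSP (an illustrative MSP ID).
func requireOrg1(stub shim.ChaincodeStubInterface) error {
	mspID, err := cid.GetMSPID(stub)
	if err != nil {
		return err
	}
	if mspID != "Org1MSP" {
		return fmt.Errorf("caller from MSP %s is not authorized", mspID)
	}
	return nil
}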

Querying Private Data

Private data collection can be queried just like normal channel data, using shim APIs:

  • GetPrivateDataByRange(collection, startKey, endKey string)

  • GetPrivateDataByPartialCompositeKey(collection, objectType string, keys []string)

And for the CouchDB state database, JSON content queries can be passed using the shim API:

  • GetPrivateDataQueryResult(collection, query string)
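As an example of the range API, a minimal Go sketch iterating private keys in a collection (reusing the collectionMarbles name from the earlier example definition; the key bounds are illustrative):

import "github.com/hyperledger/fabric/core/chaincode/shim"

// listPrivateMarbles collects the keys of collectionMarbles between two
// illustrative bounds.
func listPrivateMarbles(stub shim.ChaincodeStubInterface) ([]string, error) {
	iter, err := stub.GetPrivateDataByRange("collectionMarbles", "marble1", "marble999")
	if err != nil {
		return nil, err
	}
	defer iter.Close()

	var keys []string
	for iter.HasNext() {
		kv, err := iter.Next()
		if err != nil {
			return nil, err
		}
		keys = append(keys, kv.Key)
	}
	return keys, nil
}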

Limitations:

  • Clients that call chaincode that executes range or rich JSON queries should be aware that they may receive a subset of the result set, if the peer they query has missing private data, based on the explanation in Private Data Dissemination section above. Clients can query multiple peers and compare the results to determine if a peer may be missing some of the result set.

  • Chaincode that executes range or rich JSON queries and updates data in a single transaction is not supported, as the query results cannot be validated on the peers that don’t have access to the private data, or on peers that are missing the private data that they have access to. If a chaincode invocation both queries and updates private data, the proposal request will return an error. If your application can tolerate result set changes between chaincode execution and validation/commit time, then you could call one chaincode function to perform the query, and then call a second chaincode function to make the updates. Note that calls to GetPrivateData() to retrieve individual keys can be made in the same transaction as PutPrivateData() calls, since all peers can validate key reads based on the hashed key version.

Using Indexes with collections

Note

The Fabric chaincode lifecycle being introduced in the Fabric v2.0 Alpha does not support using CouchDB indexes with your chaincode. To use the previous lifecycle model to deploy CouchDB indexes with private data collections, visit the v1.4 version of the Private Data Architecture Guide.

The topic CouchDB as the State Database describes indexes that can be applied to the channel’s state database to enable JSON content queries, by packaging indexes in a META-INF/statedb/couchdb/indexes directory at chaincode installation time. Similarly, indexes can also be applied to private data collections, by packaging indexes in a META-INF/statedb/couchdb/collections/<collection_name>/indexes directory. An example index is available here.
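For instance, an index file placed at META-INF/statedb/couchdb/collections/collectionMarbles/indexes/indexOwner.json might look like the following (the field and index names are illustrative):

{
  "index": {
    "fields": ["docType", "owner"]
  },
  "ddoc": "indexOwnerDoc",
  "name": "indexOwner",
  "type": "json"
}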

Considerations when using private data

Private data purging

Private data can be periodically purged from peers. For more details, see the blockToLive collection definition property above.

Additionally, recall that prior to commit, peers store private data in a local transient data store. This data automatically gets purged when the transaction commits. But if a transaction was never submitted to the channel and therefore never committed, the private data would remain in each peer's transient store. This data is purged from the transient store after a configurable number of blocks by using the peer's peer.gossip.pvtData.transientstoreMaxBlockRetention property in the peer core.yaml file.

Updating a collection definition

To update a collection definition or add a new collection, you can upgrade the chaincode to a new version and pass the new collection configuration in the chaincode upgrade transaction, for example using the --collections-config flag if using the CLI. If a collection configuration is specified during the chaincode upgrade, a definition for each of the existing collections must be included.

When upgrading a chaincode, you can add new private data collections, and update existing private data collections, for example to add new members to an existing collection or change one of the collection definition properties. Note that you cannot update the collection name or the blockToLive property, since a consistent blockToLive is required regardless of a peer’s block height.

Collection updates become effective when a peer commits the block that contains the chaincode upgrade transaction. Note that collections cannot be deleted, as there may be prior private data hashes on the channel's blockchain that cannot be removed.

Private data reconciliation

Starting in v1.4, peers of organizations that are added to an existing collection will automatically fetch private data that was committed to the collection before they joined the collection.

This private data “reconciliation” also applies to peers that were entitled to receive private data but did not yet receive it — because of a network failure, for example — by keeping track of private data that was “missing” at the time of block commit.

Private data reconciliation occurs periodically based on the peer.gossip.pvtData.reconciliationEnabled and peer.gossip.pvtData.reconcileSleepInterval properties in core.yaml. The peer will periodically attempt to fetch the private data from other collection member peers that are expected to have it.

Note that this private data reconciliation feature only works on peers running v1.4 or later of Fabric.

Read-Write set semantics

This document discusses the details of the current implementation about the semantics of read-write sets.

Transaction simulation and read-write set

During simulation of a transaction at an endorser, a read-write set is prepared for the transaction. The read set contains a list of unique keys and their committed versions that the transaction reads during simulation. The write set contains a list of unique keys (though there can be overlap with the keys present in the read set) and their new values that the transaction writes. A delete marker is set (in the place of new value) for the key if the update performed by the transaction is to delete the key.

Further, if the transaction writes a value multiple times for a key, only the last written value is retained. Also, if a transaction reads a value for a key, the value in the committed state is returned even if the transaction has updated the value for the key before issuing the read. In other words, read-your-writes semantics are not supported.

As noted earlier, the versions of the keys are recorded only in the read set; the write set just contains the list of unique keys and their latest values set by the transaction.

There could be various schemes for implementing versions. The minimal requirement for a versioning scheme is to produce non-repeating identifiers for a given key. For instance, using monotonically increasing numbers for versions can be one such scheme. In the current implementation, we use a blockchain height based versioning scheme in which the height of the committing transaction is used as the latest version for all the keys modified by the transaction. In this scheme, the height of a transaction is represented by a tuple (blockNumber, txNumber), where txNumber is the height of the transaction within the block. This scheme has many advantages over the incremental number scheme - primarily, it enables other components such as statedb, transaction simulation and validation to make efficient design choices.
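Conceptually, such a version is just a pair of numbers; a minimal Go sketch of the idea (illustrative names, not Fabric's internal types):

// Height is an illustrative form of the block-height based version: the
// block that committed the transaction, plus the transaction's position
// within that block.
type Height struct {
	BlockNum uint64
	TxNum    uint64
}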

Following is an illustration of an example read-write set prepared by simulation of a hypothetical transaction. For the sake of simplicity, in the illustrations, we use the incremental numbers for representing the versions.

<TxReadWriteSet>
  <NsReadWriteSet name="chaincode1">
    <read-set>
      <read key="K1", version="1">
      <read key="K2", version="1">
    </read-set>
    <write-set>
      <write key="K1", value="V1">
      <write key="K3", value="V2">
      <write key="K4", isDelete="true">
    </write-set>
  </NsReadWriteSet>
</TxReadWriteSet>

Additionally, if the transaction performs a range query during simulation, the range query as well as its results will be added to the read-write set as query-info.

Transaction validation and updating world state using read-write set

A committer uses the read set portion of the read-write set for checking the validity of a transaction and the write set portion of the read-write set for updating the versions and the values of the affected keys.

In the validation phase, a transaction is considered valid if the version of each key present in the read set of the transaction matches the version for the same key in the world state - assuming all the preceding valid transactions (including the preceding transactions in the same block) are committed (committed-state). An additional validation is performed if the read-write set also contains one or more query-info.

This additional validation should ensure that no key has been inserted/deleted/updated in the super range (i.e., union of the ranges) of the results captured in the query-info(s). In other words, if we re-execute any of the range queries (that the transaction performed during simulation) during validation on the committed-state, it should yield the same results that were observed by the transaction at the time of simulation. This check ensures that if a transaction observes phantom items during commit, the transaction should be marked as invalid. Note that this phantom protection is limited to range queries (i.e., the GetStateByRange function in the chaincode) and is not yet implemented for other queries (i.e., the GetQueryResult function in the chaincode). Other queries are at risk of phantoms, and should therefore only be used in read-only transactions that are not submitted to ordering, unless the application can guarantee the stability of the result set between simulation and validation/commit time.

If a transaction passes the validity check, the committer uses the write set for updating the world state. In the update phase, for each key present in the write set, the value in the world state for the same key is set to the value as specified in the write set. Further, the version of the key in the world state is changed to reflect the latest version.
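The read-set check can be summarized with the following illustrative Go sketch (not Fabric's actual code; it reuses the Height sketch above and ignores the additional query-info validation):

// VersionedValue pairs a committed value with its version.
type VersionedValue struct {
	Version Height
	Value   []byte
}

// mvccValid reports whether every key read during simulation still carries
// the same committed version; a single stale read invalidates the whole
// transaction.
func mvccValid(reads map[string]Height, committed map[string]VersionedValue) bool {
	for key, readVersion := range reads {
		if committed[key].Version != readVersion {
			return false
		}
	}
	return true
}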

Example simulation and validation

This section helps with understanding the semantics through an example scenario. For the purpose of this example, the presence of a key, k, in the world state is represented by a tuple (k,ver,val) where ver is the latest version of the key k having val as its value.

Now, consider a set of five transactions T1, T2, T3, T4, and T5, all simulated on the same snapshot of the world state. The following snippet shows the snapshot of the world state against which the transactions are simulated and the sequence of read and write activities performed by each of these transactions.

World state: (k1,1,v1), (k2,1,v2), (k3,1,v3), (k4,1,v4), (k5,1,v5)
T1 -> Write(k1, v1'), Write(k2, v2')
T2 -> Read(k1), Write(k3, v3')
T3 -> Write(k2, v2'')
T4 -> Write(k2, v2'''), read(k2)
T5 -> Write(k6, v6'), read(k5)

Now, assume that these transactions are ordered in the sequence T1,..,T5 (they could be contained in a single block or in different blocks):

  1. T1 passes validation because it does not perform any read. Further, the tuple of keys k1 and k2 in the world state are updated to (k1,2,v1'), (k2,2,v2')

  2. T2 fails validation because it reads a key, k1, which was modified by a preceding transaction - T1

  3. T3 passes the validation because it does not perform a read. Further the tuple of the key, k2, in the world state is updated to (k2,3,v2'')

  4. T4 fails the validation because it reads a key, k2, which was modified by the preceding transactions T1 and T3

  5. T5 passes validation because it reads a key, k5, which was not modified by any of the preceding transactions

Note: Transactions with multiple read-write sets are not yet supported.

Gossip data dissemination protocol

Hyperledger Fabric optimizes blockchain network performance, security, and scalability by dividing workload across transaction execution (endorsing and committing) peers and transaction ordering nodes. This decoupling of network operations requires a secure, reliable and scalable data dissemination protocol to ensure data integrity and consistency. To meet these requirements, Fabric implements a gossip data dissemination protocol.

Gossip protocol

Peers leverage gossip to broadcast ledger and channel data in a scalable fashion. Gossip messaging is continuous, and each peer on a channel is constantly receiving current and consistent ledger data from multiple peers. Each gossiped message is signed, thereby allowing Byzantine participants sending faked messages to be easily identified and the distribution of the message(s) to unwanted targets to be prevented. Peers affected by delays, network partitions, or other causes resulting in missed blocks will eventually be synced up to the current ledger state by contacting peers in possession of these missing blocks.

The gossip-based data dissemination protocol performs three primary functions on a Fabric network:

  1. Manages peer discovery and channel membership, by continually identifying available member peers, and eventually detecting peers that have gone offline.

  2. Disseminates ledger data across all peers on a channel. Any peer with data that is out of sync with the rest of the channel identifies the missing blocks and syncs itself by copying the correct data.

  3. Brings newly connected peers up to speed by allowing peer-to-peer state transfer of ledger data.

Gossip-based broadcasting operates by peers receiving messages from other peers on the channel, and then forwarding these messages to a number of randomly selected peers on the channel, where this number is a configurable constant. Peers can also exercise a pull mechanism rather than waiting for delivery of a message. This cycle repeats, with the result of channel membership, ledger and state information continually being kept current and in sync. For dissemination of new blocks, the leader peer on the channel pulls the data from the ordering service and initiates gossip dissemination to peers in its own organization.

Leader election

The leader election mechanism is used to elect one peer per organization which will maintain connection with the ordering service and initiate distribution of newly arrived blocks across the peers of its own organization. Leveraging leader election provides the system with the ability to efficiently utilize the bandwidth of the ordering service. There are two possible modes of operation for a leader election module:

  1. Static — a system administrator manually configures a peer in an organization to be the leader.

  2. Dynamic — peers execute a leader election procedure to select one peer in an organization to become leader.

Static leader election

Static leader election allows you to manually define one or more peers within an organization as leader peers. Please note, however, that having too many peers connect to the ordering service may result in inefficient use of bandwidth. To enable static leader election mode, configure the following parameters within the gossip section of core.yaml:

peer:
    # Gossip related configuration
    gossip:
        useLeaderElection: false
        orgLeader: true

Alternatively these parameters could be configured and overridden with environmental variables:

export CORE_PEER_GOSSIP_USELEADERELECTION=false
export CORE_PEER_GOSSIP_ORGLEADER=true

Note

The following configuration will keep the peer in stand-by mode, i.e. the peer will not try to become a leader:

export CORE_PEER_GOSSIP_USELEADERELECTION=false
export CORE_PEER_GOSSIP_ORGLEADER=false

  1. Setting both CORE_PEER_GOSSIP_USELEADERELECTION and CORE_PEER_GOSSIP_ORGLEADER to true is ambiguous and will lead to an error.

  2. In static configuration, the organization admin is responsible for providing high availability of the leader node in case of failure or crashes.

Dynamic leader election

Dynamic leader election enables organization peers to elect one peer which will connect to the ordering service and pull out new blocks. This leader is elected for an organization’s peers independently.

A dynamically elected leader sends heartbeat messages to the rest of the peers as evidence of liveness. If one or more peers don't receive heartbeat updates during a set period of time, they will elect a new leader.

In the worst case scenario of a network partition, there will be more than one active leader for an organization, to guarantee resiliency and availability and to allow an organization's peers to continue making progress. After the network partition has been healed, one of the leaders will relinquish its leadership. In a steady state with no network partitions, there will be only one active leader connecting to the ordering service.

The following configuration controls the frequency of the leader heartbeat messages:

peer:
    # Gossip related configuration
    gossip:
        election:
            leaderAliveThreshold: 10s

In order to enable dynamic leader election, the following parameters need to be configured within core.yaml:

peer:
    # Gossip related configuration
    gossip:
        useLeaderElection: true
        orgLeader: false

Alternatively these parameters could be configured and overridden with environment variables:

export CORE_PEER_GOSSIP_USELEADERELECTION=true
export CORE_PEER_GOSSIP_ORGLEADER=false

Anchor peers

Anchor peers are used by gossip to make sure peers in different organizations know about each other.

When a configuration block that contains an update to the anchor peers is committed, peers reach out to the anchor peers and learn from them about all of the peers known to the anchor peer(s). Once at least one peer from each organization has contacted an anchor peer, the anchor peer learns about every peer in the channel. Since gossip communication is constant, and because peers always ask to be told about the existence of any peer they don’t know about, a common view of membership can be established for a channel.

For example, let’s assume we have three organizations—A, B, C— in the channel and a single anchor peer—peer0.orgC— defined for organization C. When peer1.orgA (from organization A) contacts peer0.orgC, it will tell it about peer0.orgA. And when at a later time peer1.orgB contacts peer0.orgC, the latter would tell the former about peer0.orgA. From that point forward, organizations A and B would start exchanging membership information directly without any assistance from peer0.orgC.

As communication across organizations depends on gossip in order to work, there must be at least one anchor peer defined in the channel configuration. It is strongly recommended that every organization provides its own set of anchor peers for high availability and redundancy. Note that the anchor peer does not need to be the same peer as the leader peer.

External and internal endpoints

In order for gossip to work effectively, peers need to be able to obtain the endpoint information of peers in their own organization as well as from peers in other organizations.

When a peer is bootstrapped it will use peer.gossip.bootstrap in its core.yaml to advertise itself and exchange membership information, building a view of all available peers within its own organization.

The peer.gossip.bootstrap property in the core.yaml of the peer is used to bootstrap gossip within an organization. If you are using gossip, you will typically configure all the peers in your organization to point to an initial set of bootstrap peers (you can specify a space-separated list of peers). The internal endpoint is usually auto-computed by the peer itself or just passed explicitly via core.peer.address in core.yaml. If you need to overwrite this value, you can export CORE_PEER_GOSSIP_ENDPOINT as an environment variable.

Bootstrap information is similarly required to establish communication across organizations. The initial cross-organization bootstrap information is provided via the “anchor peers” setting described above. If you want to make other peers in your organization known to other organizations, you need to set the peer.gossip.externalendpoint in the core.yaml of your peer. If this is not set, the endpoint information of the peer will not be broadcast to peers in other organizations.

To set these properties, issue:

export CORE_PEER_GOSSIP_BOOTSTRAP=<a list of peer endpoints within the peer's org>
export CORE_PEER_GOSSIP_EXTERNALENDPOINT=<the peer endpoint, as known outside the org>

Gossip messaging

Online peers indicate their availability by continually broadcasting “alive” messages, with each containing the public key infrastructure (PKI) ID and the signature of the sender over the message. Peers maintain channel membership by collecting these alive messages; if no peer receives an alive message from a specific peer, this “dead” peer is eventually purged from channel membership. Because “alive” messages are cryptographically signed, malicious peers can never impersonate other peers, as they lack a signing key authorized by a root certificate authority (CA).

In addition to the automatic forwarding of received messages, a state reconciliation process synchronizes world state across peers on each channel. Each peer continually pulls blocks from other peers on the channel, in order to repair its own state if discrepancies are identified. Because fixed connectivity is not required to maintain gossip-based data dissemination, the process reliably provides data consistency and integrity to the shared ledger, including tolerance for node crashes.

Because channels are segregated, peers on one channel cannot message or share information on any other channel. Though any peer can belong to multiple channels, partitioned messaging prevents blocks from being disseminated to peers that are not in the channel, by applying message routing policies based on a peer's channel subscriptions.

Note

1. Security of point-to-point messages is handled by the peer TLS layer and does not require signatures. Peers are authenticated by their certificates, which are assigned by a CA. Although TLS certs are also used, it is the peer certificates that are authenticated in the gossip layer. Ledger blocks are signed by the ordering service, and then delivered to the leader peers on a channel.

2. Authentication is governed by the membership service provider for the peer. When the peer connects to the channel for the first time, the TLS session binds with the membership identity. This essentially authenticates each peer to the connecting peer, with respect to membership in the network and channel.

Frequently Asked Questions

Endorsement

Endorsement architecture:

Question

How many peers in the network need to endorse a transaction?

Answer

The number of peers required to endorse a transaction is driven by the endorsement policy that is specified in the chaincode definition.

Question

Does an application client need to connect to all peers?

Answer

Clients only need to connect to as many peers as are required by the endorsement policy for the chaincode.

Security & Access Control

Question

How do I ensure data privacy?

Answer

There are various aspects to data privacy. First, you can segregate your network into channels, where each channel represents a subset of participants that are authorized to see the data for the chaincodes that are deployed to that channel.

Second, you can use private-data to keep ledger data private from other organizations on the channel. A private data collection allows a defined subset of organizations on a channel the ability to endorse, commit, or query private data without having to create a separate channel. Other participants on the channel receive only a hash of the data. For more information refer to the Using Private Data in Fabric tutorial. Note that the key concepts topic also explains when to use private data instead of a channel.

Third, as an alternative to Fabric hashing the data using private data, the client application can hash or encrypt the data before calling chaincode. If you hash the data then you will need to provide a means to share the source data. If you encrypt the data then you will need to provide a means to share the decryption keys.

Fourth, you can restrict data access to certain roles in your organization, by building access control into the chaincode logic.

Fifth, ledger data at rest can be encrypted via file system encryption on the peer, and data in-transit is encrypted via TLS.

Question

Do the orderers see the transaction data?

Answer

No, the orderers only order transactions, they do not open the transactions. If you do not want the data to go through the orderers at all, then utilize the private data feature of Fabric. Alternatively, you can hash or encrypt the data in the client application before calling chaincode. If you encrypt the data then you will need to provide a means to share the decryption keys.

Application-side Programming Model

Question

How do application clients know the outcome of a transaction?

Answer

The transaction simulation results are returned to the client by the endorser in the proposal response. If there are multiple endorsers, the client can check that the responses are all the same, and submit the results and endorsements for ordering and commitment. Ultimately the committing peers will validate or invalidate the transaction, and the client becomes aware of the outcome via an event that the SDK makes available to the application client.

Question

How do I query the ledger data?

Answer

Within chaincode you can query based on keys. Keys can be queried by range, and composite keys can be modeled to enable equivalence queries against multiple parameters. For example, a composite key of (owner,asset_id) can be used to query all assets owned by a certain entity. These key-based queries can be used for read-only queries against the ledger, as well as in transactions that update the ledger.
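As a sketch in Go chaincode, the (owner,asset_id) pattern might be implemented roughly as follows (the "owner~assetid" object type name is illustrative):

import "github.com/hyperledger/fabric/core/chaincode/shim"

// putAsset stores an asset under an (owner, assetID) composite key so that
// all assets for a given owner can later be found with a partial-key query.
func putAsset(stub shim.ChaincodeStubInterface, owner, assetID string, value []byte) error {
	key, err := stub.CreateCompositeKey("owner~assetid", []string{owner, assetID})
	if err != nil {
		return err
	}
	return stub.PutState(key, value)
}

// assetsByOwner returns an iterator over all assets owned by one entity.
func assetsByOwner(stub shim.ChaincodeStubInterface, owner string) (shim.StateQueryIteratorInterface, error) {
	return stub.GetStateByPartialCompositeKey("owner~assetid", []string{owner})
}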

If you model asset data as JSON in chaincode and use CouchDB as the state database, you can also perform complex rich queries against the chaincode data values, using the CouchDB JSON query language within chaincode. The application client can perform read-only queries, but these responses are not typically submitted as part of transactions to the ordering service.

Question

How do I query the historical data to understand data provenance?

Answer

The chaincode API GetHistoryForKey() will return the history of values for a key.
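A brief Go sketch of iterating that history (each entry carries the transaction ID, value, timestamp, and a deletion marker):

import (
	"fmt"

	"github.com/hyperledger/fabric/core/chaincode/shim"
)

// printHistory walks the modification history of a single key.
func printHistory(stub shim.ChaincodeStubInterface, key string) error {
	iter, err := stub.GetHistoryForKey(key)
	if err != nil {
		return err
	}
	defer iter.Close()

	for iter.HasNext() {
		mod, err := iter.Next()
		if err != nil {
			return err
		}
		fmt.Printf("tx %s deleted=%v value=%s\n", mod.TxId, mod.IsDelete, mod.Value)
	}
	return nil
}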

Question

How to guarantee the query result is correct, especially when the peer being queried may be recovering and catching up on block processing?

Answer

The client can query multiple peers, compare their block heights, compare their query results, and favor the peers at the higher block heights.

Chaincode (Smart Contracts and Digital Assets)

Question

Does Hyperledger Fabric support smart contract logic?

Answer

Yes. We call this feature Chaincode. It is our interpretation of the smart contract method/algorithm, with additional features.

A chaincode is programmatic code deployed on the network, where it is executed and validated by chain validators together during the consensus process. Developers can use chaincodes to develop business contracts, asset definitions, and collectively-managed decentralized applications.

Question

How do I create a business contract?

Answer

There are generally two ways to develop business contracts: the first way is to code individual contracts into standalone instances of chaincode; the second way, and probably the more efficient way, is to use chaincode to create decentralized applications that manage the life cycle of one or multiple types of business contracts, and let end users instantiate instances of contracts within these applications.

Question

How do I create assets?

Answer

Users can use chaincode (for business rules) and membership service (for digital tokens) to design assets, as well as the logic that manages them.

There are two popular approaches to defining assets in most blockchain solutions: the stateless UTXO model, where account balances are encoded into past transaction records; and the account model, where account balances are kept in state storage space on the ledger.

Each approach carries its own benefits and drawbacks. This blockchain technology does not advocate either one over the other. Instead, one of our first requirements was to ensure that both approaches can be easily implemented.

Question

Which languages are supported for writing chaincode?

Answer

Chaincode can be written in any programming language and executed in containers. Currently, Go, Node.js and Java chaincode are supported.

It is also possible to build Hyperledger Fabric applications using Hyperledger Composer.

Question

Does the Hyperledger Fabric have native currency?

Answer

No. However, if you really need a native currency for your chain network, you can develop your own native currency with chaincode. One common attribute of native currency is that some amount will get transacted (the chaincode defining that currency will get called) every time a transaction is processed on its chain.

Differences in Most Recent Releases

Question

Where can I find what are the highlighted differences between releases?

Answer

The differences between any subsequent releases are provided together with the releases.

Question

Where to get help for the technical questions not answered above?

Answer

Please use StackOverflow.

Ordering Service

Question

I have an ordering service up and running and want to switch consensus algorithms. How do I do that?

Answer

This is explicitly not supported.

Question

What is the orderer system channel?

Answer

The orderer system channel (sometimes called ordering system channel) is the channel the orderer is initially bootstrapped with. It is used to orchestrate channel creation. The orderer system channel defines consortia and the initial configuration for new channels. At channel creation time, the organization definition in the consortium, the /Channel group’s values and policies, as well as the /Channel/Orderer group’s values and policies, are all combined to form the new initial channel definition.

Question

If I update my application channel, should I update my orderer system channel?

Answer

Once an application channel is created, it is managed independently of any other channel (including the orderer system channel). Depending on the modification, the change may or may not be desirable to port to other channels. In general, MSP changes should be synchronized across all channels, while policy changes are more likely to be specific to a particular channel.

Question

Can I have an organization act both in an ordering and application role?

Answer

Although this is possible, it is a highly discouraged configuration. By default the /Channel/Orderer/BlockValidation policy allows any valid certificate of the ordering organizations to sign blocks. If an organization is acting both in an ordering and application role, then this policy should be updated to restrict block signers to the subset of certificates authorized for ordering.

Question

I want to write a consensus implementation for Fabric. Where do I begin?

Answer

A consensus plugin needs to implement the Consenter and Chain interfaces defined in the consensus package. There are two plugins built against these interfaces already: solo and kafka. You can study them to take cues for your own implementation. The ordering service code can be found under the orderer package.
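For orientation, the two interfaces look roughly like this in the v1.x consensus package (paraphrased, with cb denoting the protos/common package; consult the source for the authoritative definitions):

// Consenter defines the backing ordering mechanism.
type Consenter interface {
	HandleChain(support ConsenterSupport, metadata *cb.Metadata) (Chain, error)
}

// Chain defines a way to inject messages for ordering.
type Chain interface {
	Order(env *cb.Envelope, configSeq uint64) error
	Configure(config *cb.Envelope, configSeq uint64) error
	WaitReady() error
	Errored() <-chan struct{}
	Start()
	Halt()
}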

Question

I want to change my ordering service configurations, e.g. batch timeout, after I start the network, what should I do?

Answer

This falls under reconfiguring the network. Please consult the topic on configtxlator.

Solo

Question

How can I deploy Solo in production?

Answer

Solo is not intended for production. It is not, and will never be, fault tolerant.

Kafka

Question

How do I remove a node from the ordering service?

Answer

This is a two-step process:

  1. Add the node’s certificate to the relevant orderer’s MSP CRL to prevent peers/clients from connecting to it.

  2. Prevent the node from connecting to the Kafka cluster by leveraging standard Kafka access control measures such as TLS CRLs, or firewalling.

Question

I have never deployed a Kafka/ZK cluster before, and I want to use the Kafka-based ordering service. How do I proceed?

Answer

The Hyperledger Fabric documentation assumes the reader generally has the operational expertise to setup, configure, and manage a Kafka cluster (see Caveat emptor). If you insist on proceeding without such expertise, you should complete, at a minimum, the first 6 steps of the Kafka Quickstart guide before experimenting with the Kafka-based ordering service. You can also consult this sample configuration file for a brief explanation of the sensible defaults for Kafka/ZooKeeper.

Question

Where can I find a Docker composition for a network that uses the Kafka-based ordering service?

Answer

Consult the end-to-end CLI example.

Question

Why is there a ZooKeeper dependency in the Kafka-based ordering service?

Answer

Kafka uses it internally for coordination between its brokers.

Question

I’m trying to follow the BYFN example and get a “service unavailable” error, what should I do?

Answer

Check the ordering service’s logs. A “Rejecting deliver request because of consenter error” log message is usually indicative of a connection problem with the Kafka cluster. Ensure that the Kafka cluster is set up properly, and is reachable by the ordering service’s nodes.

BFT

Question

When is a BFT version of the ordering service going to be available?

Answer

No date has been set. We are working towards a release during the 1.x cycle, i.e. it will come with a minor version upgrade in Fabric. Track FAB-33 for updates.

Contributions Welcome!

We welcome contributions to Hyperledger in many forms, and there’s always plenty to do!

First things first, please review the Hyperledger Code of Conduct before participating. It is important that we keep things civil.

Ways to contribute

There are many ways you can contribute to Hyperledger Fabric, both as a user and as a developer.

As a user:

As a developer:

Getting a Linux Foundation account

In order to participate in the development of the Hyperledger Fabric project, you will need a Linux Foundation account. You will need to use your LF ID to access all the Hyperledger community tools, including Gerrit, Jira, RocketChat, and the Wiki (for editing, only).

Project Governance

Hyperledger Fabric is managed under an open governance model as described in our charter. Projects and sub-projects are led by a set of maintainers. New sub-projects can designate an initial set of maintainers that will be approved by the top-level project's existing maintainers when the project is first approved.

Maintainers

The Fabric project is led by the project's top-level maintainers. The maintainers are responsible for reviewing and merging all patches submitted for review, and they guide the overall technical direction of the project within the guidelines established by the Hyperledger Technical Steering Committee (TSC).

Becoming a maintainer

The project’s maintainers will, from time-to-time, consider adding or removing a maintainer. An existing maintainer can submit a change set to the MAINTAINERS.rst file. A nominated Contributor may become a Maintainer by a majority approval of the proposal by the existing Maintainers. Once approved, the change set is then merged and the individual is added to (or alternatively, removed from) the maintainers group. Maintainers may be removed by explicit resignation, for prolonged inactivity (3 or more months), or for some infraction of the code of conduct or by consistently demonstrating poor judgement. A maintainer removed for inactivity should be restored following a sustained resumption of contributions and reviews (a month or more) demonstrating a renewed commitment to the project.

Release cadence

The Fabric maintainers have settled on a quarterly (approximately) release cadence (see releases). We are also actively considering adopting an LTS (long term support) release process, though the details of this are still being worked out by the maintainers. Follow the discussion on the #fabric-maintainers channel in Chat.

Making Feature/Enhancement Proposals

First, take time to review JIRA to be sure that there isn’t already an open (or recently closed) proposal for the same function. If there isn’t, to make a proposal we recommend that you open a JIRA Epic or Story, whichever seems to best fit the circumstance and link or inline a “one pager” of the proposal that states what the feature would do and, if possible, how it might be implemented. It would help also to make a case for why the feature should be added, such as identifying specific use case(s) for which the feature is needed and a case for what the benefit would be should the feature be implemented. Once the JIRA issue is created, and the “one pager” either attached, inlined in the description field, or a link to a publicly accessible document is added to the description, send an introductory email to the fabric@lists.hyperledger.org mailing list linking the JIRA issue, and soliciting feedback.

Discussion of the proposed feature should be conducted in the JIRA issue itself, so that we have a consistent pattern within our community as to where to find design discussion.

Getting the support of three or more of the Hyperledger Fabric maintainers for the new feature will greatly enhance the probability that the feature’s related CRs will be included in a subsequent release.

Maintainers meeting

The maintainers hold a bi-weekly meeting every other Wednesday at 9 am ET on Zoom. Please see the community calendar for details.

The purpose of the maintainers meeting is to plan for and review the progress of releases, and to discuss the technical and operational direction of the project and sub-projects.

New feature/enhancement proposals as described above should be presented to a maintainers meeting for consideration, feedback and acceptance.

Release roadmap

The Fabric release roadmap of epics is maintained in JIRA.

Communications

We use RocketChat for communication and Google Hangouts™ for screen sharing between developers. Our development planning and prioritization is done in JIRA, and we take longer running discussions/decisions to the mailing list.

Contribution guide

Install prerequisites

Before we begin, if you haven’t already done so, you may wish to check that you have all the prerequisites installed on the platform(s) on which you’ll be developing blockchain applications and/or operating Hyperledger Fabric.

Getting help

If you are looking for something to work on, or need some expert assistance in debugging a problem or working out a fix to an issue, our community is always eager to help. We hang out on Chat, IRC (#hyperledger on freenode.net) and the mailing lists. Most of us don’t bite :grin: and will be glad to help. The only silly question is the one you don’t ask. Questions are in fact a great way to help improve the project as they highlight where our documentation could be clearer.

Reporting bugs

If you are a user and you have found a bug, please submit an issue using JIRA. Before you create a new JIRA issue, please try to search the existing items to be sure no one else has previously reported it. If it has been previously reported, then you might add a comment that you also are interested in seeing the defect fixed.

Note

If the defect is security-related, please follow the Hyperledger security bug reporting process.

If it has not been previously reported, create a new JIRA. Please try to provide sufficient information for someone else to reproduce the issue. One of the project’s maintainers should respond to your issue within 24 hours. If not, please bump the issue with a comment and request that it be reviewed. You can also post to the relevant Hyperledger Fabric channel in Hyperledger Chat. For example, a doc bug should be broadcast to #fabric-documentation, a database bug to #fabric-ledger, and so on…

Submitting your fix

If you just submitted a JIRA for a bug you’ve discovered, and would like to provide a fix, we would welcome that gladly! Please assign the JIRA issue to yourself, then you can submit a change request (CR).

Note

If you need help with submitting your first CR, we have created a brief tutorial for you.

Fixing issues and working stories

Review the issues list and find something that interests you. You could also check the “help-wanted” list. It is wise to start with something relatively straightforward and achievable, and to which no one is already assigned. If no one is assigned, then assign the issue to yourself. Please be considerate and rescind the assignment if you cannot finish in a reasonable time, or add a comment saying that you are still actively working the issue if you need a little more time.

Reviewing submitted Change Requests (CRs)

Another way to contribute and learn about Hyperledger Fabric is to help the maintainers with the review of the CRs that are open. Indeed maintainers have the difficult role of having to review all the CRs that are being submitted and evaluate whether they should be merged or not. You can review the code and/or documentation changes, test the changes, and tell the submitters and maintainers what you think. Once your review and/or test is complete just reply to the CR with your findings, by adding comments and/or voting. A comment saying something like “I tried it on system X and it works” or possibly “I got an error on system X: xxx ” will help the maintainers in their evaluation. As a result, maintainers will be able to process CRs faster and everybody will gain from it.

Just browse through the open CRs on Gerrit to get started.

CR Aging

As the Fabric project has grown, so too has the backlog of open CRs. One problem that nearly all projects face is effectively managing that backlog and Fabric is no exception. In an effort to keep the backlog of Fabric and related project CRs manageable, we are introducing an aging policy which will be enforced by bots. This is consistent with how other large projects manage their CR backlog.

CR Aging Policy

The Fabric project maintainers will automatically monitor all CR activity for delinquency. If a CR has not been updated in 2 weeks, a reminder comment will be added requesting that the CR either be updated to address any outstanding comments or abandoned if it is to be withdrawn. If a delinquent CR goes another 2 weeks without an update, it will be automatically abandoned. If a CR has aged more than 2 months since it was originally submitted, even if it has activity, it will be flagged for maintainer review.

If a submitted CR has passed all validation but has not been reviewed in 72 hours (3 days), it will be flagged to the #fabric-pr-review channel daily until it receives a review comment(s).

This policy applies to all official Fabric projects (fabric, fabric-ca, fabric-samples, fabric-test, fabric-sdk-node, fabric-sdk-java, fabric-chaincode-node, fabric-chaincode-java, fabric-chaincode-evm, fabric-baseimage, and fabric-amcl).

Setting up development environment

Next, try building the project in your local development environment to ensure that everything is set up correctly.

What makes a good change request?

  • One change at a time. Not five, not three, not ten. One and only one. Why? Because it limits the blast area of the change. If we have a regression, it is much easier to identify the culprit commit than if we have some composite change that impacts more of the code.

  • Include a link to the JIRA story for the change. Why? Because a) we want to track our velocity to better judge what we think we can deliver and when and b) because we can justify the change more effectively. In many cases, there should be some discussion around a proposed change and we want to link back to that from the change itself.

  • Include unit and integration tests (or changes to existing tests) with every change. This does not mean just happy path testing, either. It also means negative testing of any defensive code to verify that it correctly catches input errors. When you write code, you are responsible for testing it and providing the tests that demonstrate that your change does what it claims. Why? Because without this we have no clue whether our current code base actually works.

  • Unit tests should have NO external dependencies. You should be able to run unit tests in place with go test or equivalent for the language. Any test that requires some external dependency (e.g. needs to be scripted to run another component) needs appropriate mocking. Anything else is not unit testing, it is integration testing by definition. Why? Because many open source developers do Test Driven Development. They place a watch on the directory that invokes the tests automagically as the code is changed. This is far more efficient than having to run a whole build between code changes. See this definition of unit testing for a good set of criteria to keep in mind for writing effective unit tests.

  • Minimize the lines of code per CR. Why? Maintainers have day jobs, too. If you send a 1,000 or 2,000 LOC change, how long do you think it takes to review all of that code? Keep your changes to < 200-300 LOC, if possible. If you have a larger change, decompose it into multiple independent changes. If you are adding a bunch of new functions to fulfill the requirements of a new capability, add them separately with their tests, and then write the code that uses them to deliver the capability. Of course, there are always exceptions: if you add a small change and then add 300 LOC of tests, you will be forgiven ;-) and exceptions can also be made for changes with broad impact or a bunch of generated code (protobufs, etc.).

Note

Large change requests, e.g. those with more than 300 LOC, are more likely than not to receive a -2, and you’ll be asked to refactor the change to conform with this guidance.

  • Do not stack change requests (e.g. submit a CR from the same local branch as your previous CR) unless they are related. This will minimize merge conflicts and allow changes to be merged more quickly. If you stack requests your subsequent requests may be held up because of review comments in the preceding requests.

  • Write a meaningful commit message. Include a meaningful 55 (or less) character title, followed by a blank line, followed by a more comprehensive description of the change. Each change MUST include the JIRA identifier corresponding to the change (e.g. [FAB-1234]). This can be in the title but should also be in the body of the commit message. See the complete requirements for an acceptable change request.

Note

Gerrit will automatically create a hyperlink to the JIRA item, e.g.

[FAB-1234] fix foobar() panic

Fix [FAB-1234] added a check to ensure that when foobar(foo string)
is called, that there is a non-empty string argument.

Finally, be responsive. Don’t let a change request fester with review comments such that it gets to a point that it requires a rebase. It only further delays getting it merged and adds more work for you - to remediate the merge conflicts.
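
To make the unit testing guidance above concrete, here is a minimal sketch of a dependency-free Go chaincode test using the shim package’s MockStub, which simulates a peer in-process. The SimpleAsset chaincode it exercises is hypothetical (a matching sketch appears under Smart Contract in the Glossary); treat this as an illustration rather than project code.

package main

import (
	"testing"

	"github.com/hyperledger/fabric/core/chaincode/shim"
)

// TestSet runs entirely in-process: MockStub stands in for the peer's
// state APIs, so no network, Docker, or running Fabric components are
// needed, and the test can be run in place with go test.
func TestSet(t *testing.T) {
	stub := shim.NewMockStub("simpleasset", new(SimpleAsset))
	resp := stub.MockInvoke("tx1", [][]byte{[]byte("set"), []byte("asset1"), []byte("100")})
	if resp.Status != shim.OK {
		t.Fatalf("set failed with status %d: %s", resp.Status, resp.Message)
	}
}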

Glossary

Terminology is important, so that all Hyperledger Fabric users and developers agree on what we mean by each specific term. What, for example, is a smart contract? The documentation will reference the glossary as needed, but feel free to read the entire thing in one sitting if you like; it’s pretty enlightening!

Anchor Peer

Used by gossip to make sure peers in different organizations know about each other.

When a configuration block that contains an update to the anchor peers is committed, peers reach out to the anchor peers and learn from them about all of the peers known to the anchor peer(s). Once at least one peer from each organization has contacted an anchor peer, the anchor peer learns about every peer in the channel. Since gossip communication is constant, and because peers always ask to be told about the existence of any peer they don’t know about, a common view of membership can be established for a channel.

For example, let’s assume we have three organizations — A, B, C — in the channel and a single anchor peer — peer0.orgC — defined for organization C. When peer1.orgA (from organization A) contacts peer0.orgC, it will tell peer0.orgC about peer0.orgA. And when at a later time peer1.orgB contacts peer0.orgC, the latter would tell the former about peer0.orgA. From that point forward, organizations A and B would start exchanging membership information directly without any assistance from peer0.orgC.

As communication across organizations depends on gossip in order to work, there must be at least one anchor peer defined in the channel configuration. It is strongly recommended that every organization provides its own set of anchor peers for high availability and redundancy.

ACL

An ACL, or Access Control List, associates access to specific peer resources (such as system chaincode APIs or event services) to a Policy (which specifies how many and what types of organizations or roles are required). The ACL is part of a channel’s configuration. It is therefore persisted in the channel’s configuration blocks, and can be updated using the standard configuration update mechanism.

An ACL is formatted as a list of key-value pairs, where the key identifies the resource whose access we wish to control, and the value identifies the channel policy (group) that is allowed to access it. For example, lscc/GetDeploymentSpec: /Channel/Application/Readers defines that access to the lifecycle chaincode’s GetDeploymentSpec API (the resource) is restricted to identities which satisfy the /Channel/Application/Readers policy.

A set of default ACLs is provided in the configtx.yaml file which is used by configtxgen to build channel configurations. The defaults can be set in the top level “Application” section of configtx.yaml or overridden on a per profile basis in the “Profiles” section.
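
As an illustration, the default ACLs in configtx.yaml take roughly the following shape; the exact set of resource keys varies by release, so treat this as an assumed sketch rather than a complete listing:

Application: &ApplicationDefaults
    ACLs:
        # resource key: channel policy that gates access to it
        lscc/GetDeploymentSpec: /Channel/Application/Readers
        lscc/GetChaincodeData: /Channel/Application/Readers
        event/Block: /Channel/Application/Readers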

Block

A Block

Block B1 is linked to block B0. Block B2 is linked to block B1.


A block contains an ordered set of transactions. It is cryptographically linked to the preceding block, and in turn is linked to subsequent blocks. The first block in such a chain of blocks is called the genesis block. Blocks are created by the ordering service, and validated by peers.

Chain

Blockchain

Blockchain B contains blocks 0, 1, 2.


The ledger’s chain is a transaction log structured as hash-linked blocks of transactions. Peers receive blocks of transactions from the ordering service, mark the block’s transactions as valid or invalid based on endorsement policies and concurrency violations, and append the block to the hash chain on the peer’s file system.

Chaincode

See Smart-Contract.

Channel

A Channel

Channel C connects application A1, peer P2 and ordering service O1.


A channel is a private blockchain overlay which allows for data isolation and confidentiality. A channel-specific ledger is shared across the peers in the channel, and transacting parties must be properly authenticated to a channel in order to interact with it. Channels are defined by a Configuration-Block.

Commit

Each Peer on a channel validates ordered blocks of transactions and then commits (writes/appends) the blocks to its replica of the channel Ledger. Peers also mark each transaction in each block as valid or invalid.

Concurrency Control Version Check

Concurrency Control Version Check is a method of keeping state in sync across peers on a channel. Peers execute transactions in parallel, and before commitment to the ledger, peers check that the data read at execution time has not changed. If the data read for the transaction has changed between execution time and commitment time, then a Concurrency Control Version Check violation has occurred, and the transaction is marked as invalid on the ledger and values are not updated in the state database.
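
The following Go sketch models the idea; it is a simplified illustration, not Fabric’s actual validation code. Each transaction carries a read set recording the version of every key it read at execution time, and the committer invalidates the transaction if any of those versions has since changed:

package main

import "fmt"

// version identifies the block and transaction that last wrote a key.
type version struct {
	BlockNum uint64
	TxNum    uint64
}

// readSet records the version of each key a transaction read at
// execution (endorsement) time.
type readSet map[string]version

// passesVersionCheck reports whether a transaction's read set still
// matches the committed state; if any key was updated in the meantime,
// the transaction would be marked invalid and the state database left
// untouched.
func passesVersionCheck(rs readSet, committed map[string]version) bool {
	for key, readVersion := range rs {
		if committed[key] != readVersion {
			return false
		}
	}
	return true
}

func main() {
	committed := map[string]version{"asset1": {BlockNum: 5, TxNum: 0}}
	rs := readSet{"asset1": {BlockNum: 4, TxNum: 2}} // stale read
	fmt.Println(passesVersionCheck(rs, committed))   // false: version check violation
}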

Configuration Block

Contains the configuration data defining members and policies for a system chain (ordering service) or channel. Any configuration modifications to a channel or overall network (e.g. a member leaving or joining) will result in a new configuration block being appended to the appropriate chain. This block will contain the contents of the genesis block, plus the delta.

Consensus

A broader term overarching the entire transactional flow, which serves to generate an agreement on the order and to confirm the correctness of the set of transactions constituting a block.

Consenter set

In a Raft ordering service, these are the ordering nodes actively participating in the consensus mechanism on a channel. If other ordering nodes exist on the system channel, but are not a part of a channel, they are not part of that channel’s consenter set.

Consortium

A consortium is a collection of non-orderer organizations on the blockchain network. These are the organizations that form and join channels and that own peers. While a blockchain network can have multiple consortia, most blockchain networks have a single consortium. At channel creation time, all organizations added to the channel must be part of a consortium. However, an organization that is not defined in a consortium may be added to an existing channel.

Chaincode definition

A chaincode definition is used by organizations to agree on the parameters of a chaincode before it can be used on a channel. Each channel member that wants to use the chaincode to endorse transactions or query the ledger needs to approve a chaincode definition for their organization. Once enough channel members have approved a chaincode definition to meet the Lifecycle Endorsement policy (which is set to a majority of organizations in the channel by default), the chaincode definition can be committed to the channel. After the definition is committed, the first invoke of the chaincode (or, if requested, the execution of the Init function) will start the chaincode on the channel.

Current State

See World-State.

Dynamic Membership

Hyperledger Fabric supports the addition/removal of members, peers, and ordering service nodes, without compromising the operation of the overall network. Dynamic membership is critical when business relationships adjust and entities need to be added/removed for various reasons.

Endorsement

Refers to the process where specific peer nodes execute a chaincode transaction and return a proposal response to the client application. The proposal response includes the chaincode execution response message, results (read set and write set), and events, as well as a signature to serve as proof of the peer’s chaincode execution. Chaincode applications have corresponding endorsement policies, in which the endorsing peers are specified.

Endorsement policy

Defines the peer nodes on a channel that must execute transactions attached to a specific chaincode application, and the required combination of responses (endorsements). A policy could require that a transaction be endorsed by a minimum number of endorsing peers, a minimum percentage of endorsing peers, or by all endorsing peers that are assigned to a specific chaincode application. Policies can be curated based on the application and the desired level of resilience against misbehavior (deliberate or not) by the endorsing peers. A transaction that is submitted must satisfy the endorsement policy before being marked as valid by committing peers.
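
For example, endorsement policies are commonly expressed in Fabric’s policy syntax; the MSP names below are placeholders:

AND('Org1MSP.member', 'Org2MSP.member')                          one endorsement from each of Org1 and Org2
OR('Org1MSP.member', 'Org2MSP.member')                           an endorsement from either organization
OutOf(2, 'Org1MSP.member', 'Org2MSP.member', 'Org3MSP.member')   any two of the three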

FabToken

FabToken is an Unspent Transaction Output (UTXO) based token management system that allows users to issue, transfer, and redeem tokens on channels. FabToken uses the membership services of Fabric to authenticate the identity of token owners and manage their public and private keys.

Follower

In a leader based consensus protocol, such as Raft, these are the nodes which replicate log entries produced by the leader. In Raft, the followers also receive “heartbeat” messages from the leader. In the event that the leader stops sending those messages for a configurable amount of time, the followers will initiate a leader election and one of them will be elected leader.

Genesis Block

The configuration block that initializes the ordering service, or serves as the first block on a chain.

Gossip Protocol

The gossip data dissemination protocol performs three functions: 1) manages peer discovery and channel membership; 2) disseminates ledger data across all peers on the channel; 3) syncs ledger state across all peers on the channel. Refer to the Gossip topic for more details.

Hyperledger Fabric CA

Hyperledger Fabric CA is the default Certificate Authority component, which issues PKI-based certificates to network member organizations and their users. The CA issues one root certificate (rootCert) to each member and one enrollment certificate (ECert) to each authorized user.

Init

A method to initialize a chaincode application. All chaincodes need to have an Init function. By default, this function is never executed. However, you can use the chaincode definition to request the execution of the Init function in order to initialize the chaincode.

Install

The process of placing a chaincode on a peer’s file system.

Instantiate

The process of starting and initializing a chaincode application on a specific channel. After instantiation, peers that have the chaincode installed can accept chaincode invocations. This method was used in the previous version of the chaincode lifecycle. For the current procedure used to start a chaincode on a channel with the new Fabric chaincode lifecycle introduced as part of the Fabric v2.0 Alpha, see Chaincode-definition.

Invoke

Used to call chaincode functions. A client application invokes chaincode by sending a transaction proposal to a peer. The peer will execute the chaincode and return an endorsed proposal response to the client application. The client application will gather enough proposal responses to satisfy an endorsement policy, and will then submit the transaction results for ordering, validation, and commit. The client application may choose not to submit the transaction results. For example if the invoke only queried the ledger, the client application typically would not submit the read-only transaction, unless there is desire to log the read on the ledger for audit purpose. The invoke includes a channel identifier, the chaincode function to invoke, and an array of arguments.
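
As a sketch of what an invoke looks like from a client application, here is a minimal example using the (not yet official) Go SDK mentioned under Software Development Kit (SDK) below; the channel name, chaincode ID, function, arguments, and connection profile file name are all assumptions for illustration:

package main

import (
	"fmt"
	"log"

	"github.com/hyperledger/fabric-sdk-go/pkg/client/channel"
	"github.com/hyperledger/fabric-sdk-go/pkg/core/config"
	"github.com/hyperledger/fabric-sdk-go/pkg/fabsdk"
)

func main() {
	// Load a connection profile describing the network (assumed file name).
	sdk, err := fabsdk.New(config.FromFile("connection-profile.yaml"))
	if err != nil {
		log.Fatal(err)
	}
	defer sdk.Close()

	// Obtain a client bound to a channel as a particular user and org.
	ctx := sdk.ChannelContext("mychannel", fabsdk.WithUser("User1"), fabsdk.WithOrg("Org1"))
	client, err := channel.New(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// Execute gathers endorsements and submits the transaction for
	// ordering, validation, and commit; for a read-only query that is
	// not submitted for ordering, client.Query would be used instead.
	resp, err := client.Execute(channel.Request{
		ChaincodeID: "mycc",
		Fcn:         "set",
		Args:        [][]byte{[]byte("asset1"), []byte("100")},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("payload: %s\n", resp.Payload)
}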

Leader

In a leader based consensus protocol, like Raft, the leader is responsible for ingesting new log entries, replicating them to follower ordering nodes, and managing when an entry is considered committed. This is not a special type of orderer. It is only a role that an orderer may have at certain times, and then not others, as circumstances determine.

Leading Peer

Each Organization can own multiple peers on each channel that it subscribes to. One or more of these peers should serve as the leading peer for the channel, in order to communicate with the network ordering service on behalf of the organization. The ordering service delivers blocks to the leading peer(s) on a channel, who then distribute them to other peers within the same organization.

Ledger

A Ledger

A Ledger, ‘L’

A ledger consists of two distinct, though related, parts – a “blockchain” and the “state database”, also known as “world state”. Unlike other ledgers, blockchains are immutable – that is, once a block has been added to the chain, it cannot be changed. In contrast, the “world state” is a database containing the current value of the set of key-value pairs that have been added, modified or deleted by the set of validated and committed transactions in the blockchain.

It’s helpful to think of there being one logical ledger for each channel in the network. In reality, each peer in a channel maintains its own copy of the ledger – which is kept consistent with every other peer’s copy through a process called consensus. The term Distributed Ledger Technology (DLT) is often associated with this kind of ledger – one that is logically singular, but has many identical copies distributed across a set of network nodes (peers and the ordering service).

Log entry

The primary unit of work in a Raft ordering service, log entries are distributed from the leader orderer to the followers. The full sequence of such entries is known as the “log”. The log is considered to be consistent if all members agree on the entries and their order.

Member

See Organization.

Membership Service Provider

An MSP

An MSP, ‘ORG.MSP’

The Membership Service Provider (MSP) refers to an abstract component of the system that provides credentials to clients and peers so that they can participate in a Hyperledger Fabric network. Clients use these credentials to authenticate their transactions, and peers use these credentials to authenticate transaction processing results (endorsements). While strongly connected to the transaction processing components of the system, this interface aims to have membership services components defined in such a way that alternate implementations can be smoothly plugged in without modifying the core transaction processing components of the system.

Membership Services

Membership Services authenticates, authorizes, and manages identities on a permissioned blockchain network. The membership services code that runs in peers and orderers both authenticates and authorizes blockchain operations. It is a PKI-based implementation of the Membership Services Provider (MSP) abstraction.

Ordering Service

Also known as orderer. A defined collective of nodes that orders transactions into a block. The ordering service exists independently of the peer processes and orders transactions on a first-come-first-served basis for all channels on the network. The ordering service is designed to support pluggable implementations beyond the out-of-the-box SOLO and Kafka varieties. The ordering service is a common binding for the overall network; it contains the cryptographic identity material tied to each Member.

Organization


An Organization

An organization, ‘ORG’

Also known as “members”, organizations are invited to join the blockchain network by a blockchain service provider. An organization is joined to a network by adding its Membership Service Provider (MSP) to the network. The MSP defines how other members of the network may verify that signatures (such as those over transactions) were generated by a valid identity, issued by that organization. The particular access rights of identities within an MSP are governed by policies which are also agreed upon when the organization is joined to the network. An organization can be as large as a multi-national corporation or as small as an individual. The transaction endpoint of an organization is a Peer. A collection of organizations form a Consortium. While all of the organizations on a network are members, not every organization will be part of a consortium.

Peer

A Peer

A peer, ‘P’

A network entity that maintains a ledger and runs chaincode containers in order to perform read/write operations to the ledger. Peers are owned and maintained by members.

Policy

Policies are expressions composed of properties of digital identities, for example: Org1.Peer OR Org2.Peer. They are used to restrict access to resources on a blockchain network. For instance, they dictate who can read from or write to a channel, or who can use a specific chaincode API via an ACL. Policies may be defined in configtx.yaml prior to bootstrapping an ordering service or creating a channel, or they can be specified when instantiating chaincode on a channel. A default set of policies, appropriate for most networks, ships in the sample configtx.yaml.

Private Data

Confidential data that is stored in a private database on each authorized peer, logically separate from the channel ledger data. Access to this data is restricted to one or more organizations on a channel via a private data collection definition. Unauthorized organizations will have a hash of the private data on the channel ledger as evidence of the transaction data. Also, for further privacy, only hashes of the private data go through the Ordering-Service, not the private data itself, which keeps the private data confidential from the orderers.

Private Data Collection (Collection)

Used to manage confidential data that two or more organizations on a channel want to keep private from other organizations on that channel. The collection definition describes a subset of organizations on a channel entitled to store a set of private data, which by extension implies that only these organizations can transact with the private data.

Proposal

A request for endorsement that is aimed at specific peers on a channel. Each proposal is either an Init or an invoke (read/write) request.

Prover peer

A trusted peer used by the FabToken client to assemble a token transaction and list the unspent tokens owned by a given authorized party.

Query

A query is a chaincode invocation which reads the ledger current state but does not write to the ledger. The chaincode function may query certain keys on the ledger, or may query for a set of keys on the ledger. Since queries do not change ledger state, the client application will typically not submit these read-only transactions for ordering, validation, and commit. Although not typical, the client application can choose to submit the read-only transaction for ordering, validation, and commit, for example if the client wants auditable proof on the ledger chain that it had knowledge of specific ledger state at a certain point in time.

Quorum

This describes the minimum number of members of the cluster that need to affirm a proposal so that transactions can be ordered. For every consenter set, this is a majority of nodes: for a cluster of n nodes, the quorum is floor(n/2) + 1. In a cluster with five nodes, three must be available for there to be a quorum. If a quorum of nodes is unavailable for any reason, the cluster becomes unavailable for both read and write operations and no new log entries can be committed.

Raft

New for v1.4.1, Raft is a crash fault tolerant (CFT) ordering service implementation based on the etcd library’s implementation of the Raft protocol (https://raft.github.io/raft.pdf). Raft follows a “leader and follower” model, where a leader node is elected (per channel) and its decisions are replicated by the followers. Raft ordering services should be easier to set up and manage than Kafka-based ordering services, and their design allows organizations to contribute nodes to a distributed ordering service.

Software Development Kit (SDK)

The Hyperledger Fabric client SDK provides a structured environment of libraries for developers to write and test chaincode applications. The SDK is fully configurable and extensible through a standard interface. Components, including cryptographic algorithms for signatures, logging frameworks and state stores, are easily swapped in and out of the SDK. The SDK provides APIs for transaction processing, membership services, node traversal and event handling.

Currently, the two officially supported SDKs are for Node.js and Java, while three more – Python, Go and REST – are not yet official but can still be downloaded and tested.

Smart Contract

A smart contract is code – invoked by a client application external to the blockchain network – that manages access and modifications to a set of key-value pairs in the World State. In Hyperledger Fabric, smart contracts are referred to as chaincode. Smart contract chaincode is installed onto peer nodes and then defined and used on one or more channels.
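
As an illustration, here is a minimal Go chaincode sketch written against the v1.x shim API. The SimpleAsset name and its get/set functions are illustrative, not a standard Fabric component:

package main

import (
	"fmt"

	"github.com/hyperledger/fabric/core/chaincode/shim"
	pb "github.com/hyperledger/fabric/protos/peer"
)

// SimpleAsset manages a set of string key-value pairs in the world state.
type SimpleAsset struct{}

// Init seeds the world state when the chaincode is started on a channel.
func (t *SimpleAsset) Init(stub shim.ChaincodeStubInterface) pb.Response {
	if err := stub.PutState("asset1", []byte("100")); err != nil {
		return shim.Error(err.Error())
	}
	return shim.Success(nil)
}

// Invoke routes client calls to the requested function.
func (t *SimpleAsset) Invoke(stub shim.ChaincodeStubInterface) pb.Response {
	fn, args := stub.GetFunctionAndParameters()
	switch fn {
	case "get": // read the current value of a key from the world state
		value, err := stub.GetState(args[0])
		if err != nil {
			return shim.Error(err.Error())
		}
		return shim.Success(value)
	case "set": // record a new value in the transaction's write set
		if err := stub.PutState(args[0], []byte(args[1])); err != nil {
			return shim.Error(err.Error())
		}
		return shim.Success(nil)
	default:
		return shim.Error(fmt.Sprintf("unknown function: %s", fn))
	}
}

func main() {
	if err := shim.Start(new(SimpleAsset)); err != nil {
		fmt.Printf("error starting chaincode: %s\n", err)
	}
}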

State Database

Current state data is stored in a state database for efficient reads and queries from chaincode. Supported databases include LevelDB and CouchDB.
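
As a sketch of why the choice of state database matters: CouchDB supports rich (JSON selector) queries against the world state, while LevelDB supports only key and key-range queries. The function below is a hypothetical chaincode fragment (it would live inside a chaincode such as the SimpleAsset sketch above, reusing its fmt and shim imports):

// getAssetsByOwner returns the keys of all assets whose JSON value has a
// matching "owner" field. GetQueryResult only works when the peer's state
// database is CouchDB.
func getAssetsByOwner(stub shim.ChaincodeStubInterface, owner string) ([]string, error) {
	query := fmt.Sprintf(`{"selector":{"owner":"%s"}}`, owner)
	iter, err := stub.GetQueryResult(query)
	if err != nil {
		return nil, err
	}
	defer iter.Close()

	var keys []string
	for iter.HasNext() {
		kv, err := iter.Next()
		if err != nil {
			return nil, err
		}
		keys = append(keys, kv.Key)
	}
	return keys, nil
}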

System Chain

Contains a configuration block defining the network at a system level. The system chain lives within the ordering service, and similar to a channel, has an initial configuration containing information such as: MSP information, policies, and configuration details. Any change to the overall network (e.g. a new org joining or a new ordering node being added) will result in a new configuration block being added to the system chain.

The system chain can be thought of as the common binding for a channel or group of channels. For instance, a collection of financial institutions may form a consortium (represented through the system chain), and then proceed to create channels relative to their aligned and varying business agendas.

Transaction

A Transaction

A transaction, ‘T’

Transactions are created when a chaincode or FabToken client is used to read data from or write data to the ledger. If you are invoking a chaincode, application clients gather the responses from endorsing peers and then package the results and endorsements into a transaction that is submitted for ordering, validation, and commit. If using FabToken to create a token transaction, the FabToken client uses a prover peer to create a transaction that is submitted to the ordering service and then validated by committing peers.

World State

Current State

The World State, ‘W’

Also known as the “current state”, the world state is a component of the Hyperledger Fabric ledger. The world state represents the latest values for all keys included in the chain transaction log. Chaincode executes transaction proposals against world state data because the world state provides direct access to the latest value of these keys rather than having to calculate them by traversing the entire transaction log. The world state will change every time the value of a key changes (for example, when the ownership of a car – the “key” – is transferred from one owner to another – the “value”) or when a new key is added (a car is created). As a result, the world state is critical to a transaction flow, since the current state of a key-value pair must be known before it can be changed. Peers commit the latest values to the ledger world state for each valid transaction included in a processed block.

Releases

Hyperledger Fabric releases are documented on the Fabric GitHub page: https://github.com/hyperledger/fabric#releases

Still Have Questions?

We try to maintain a comprehensive set of documentation for various audiences. However, we realize that often there are questions that remain unanswered. For any technical questions relating to Hyperledger Fabric not answered here, please use StackOverflow. Another approach to getting your questions answered is to send an email to the mailing list (hyperledger-fabric@lists.hyperledger.org), or to ask your questions on RocketChat (an alternative to Slack) on the #fabric or #fabric-questions channel.

Note

When asking about a problem you are facing, please tell us about the environment in which you are experiencing it, including the OS, the version of Docker you are using, etc.

Status

Hyperledger Fabric is in the Active state. For more information on the history of this project see our wiki page. Information on what Active entails can be found in the Hyperledger Project Lifecycle document.

Note

If you have questions not addressed by this documentation, or run into issues with any of the tutorials, please visit the Still Have Questions? page for some tips on where to find additional help.