Study on Computation-Efficient Multicast Key Distribution


3.1 Key-Tree-Based Rekeying Scheme

The main idea this scheme uses to reduce the rekeying communication complexity is to have the GC distribute subgroup keys in addition to the individual member keys and the group session key. These keys are arranged in a logical tree hierarchy, where the group session key serves as the root, the individual member keys are the leaves, and the subgroup keys correspond to intermediate nodes. Each member stores all the keys along the path from its leaf to the root of the tree. Each subgroup key can then be used to multicast securely to the members whose leaves lie in the corresponding subtree. During the rekeying process, the GC can thus securely multicast to a subgroup of members using their shared subgroup key instead of their individual member keys.

Fig. 2 shows a key tree for a nine-member group. Ki (1 <= i <= 9) is the individual key of member i. K1-9 is the group session key shared by all the members. Finally, K1-3, K4-6, and K7-9 are the subgroup keys of the three corresponding subgroups, respectively. For example, K1-3 is shared by members 1 through 3, who form the first subgroup.
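For illustration, the following Python sketch models the key tree of Fig. 2 and lists the keys a member stores. It is only a minimal sketch: the key labels follow the figure, while the data structure and function names are our own and not part of the scheme.

# A minimal sketch of the key tree in Fig. 2. Each member stores every key on
# the path from its leaf to the root: its individual key, its subgroup key,
# and the group session key. Names other than the key labels are illustrative.

GROUP_KEY = "K1-9"                        # root: group session key
SUBGROUP_KEYS = {                         # intermediate nodes
    "K1-3": [1, 2, 3],
    "K4-6": [4, 5, 6],
    "K7-9": [7, 8, 9],
}

def keys_held_by(member: int) -> list[str]:
    """Return the keys stored by `member`: leaf key, subgroup key, root key."""
    subgroup = next(name for name, members in SUBGROUP_KEYS.items()
                    if member in members)
    return ["K" + str(member), subgroup, GROUP_KEY]

if __name__ == "__main__":
    print(keys_held_by(2))                # ['K2', 'K1-3', 'K1-9']
    # After member 2 leaves, the GC can still reach members 4-9 with only two
    # messages encrypted under K4-6 and K7-9, instead of six individual keys.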

3.2 MDS Code-Based Rekeying on a Key Tree

The GC initialization and each member's initial join can be performed on a key tree exactly as in the basic scheme described in Section 2.2. Thus, we focus on the adaptation of the basic scheme for rekeying on a key tree.

MDS Code Implementation for the Rekeying Operation

As pointed out previously, d needs to be 3 to minimize the communication complexity during rekeying. Hence, only two types of MDS codes are needed, namely (L, 2) and (L, 3) codes. In fact, the rekeying scheme needs only two specific MDS codes: an (L, 2) code and an (L, 3) code. Although any general (n, k) MDS code can be used for rekeying by setting k = 2 or k = 3, a number of optimization techniques can be applied to special implementations of the (L, 2) and (L, 3) codes. As we shall see in the following, these techniques make the codes used for rekeying extremely fast, even though they do not readily extend to implementations of general MDS codes.

Reed-Solomon (RS) Implementation: Vandermonde Representation versus Cauchy Representation

We choose RS codes [28] as the MDS codes, since they are the most widely used MDS codes. Among the numerous ways of constructing an RS code, the two popular ones are the Vandermonde and Cauchy representations. In the Vandermonde representation, a Vandermonde matrix is used as the generator matrix of the chosen RS code [19]; this is the traditional description of the RS code. More recently, a Cauchy representation of the RS code has been proposed to make general encoding and decoding of an RS code faster by using mostly XOR operations instead of more expensive finite-field operations [5]. In this representation, a Cauchy matrix is used as the generator matrix.

For our rekeying purpose, an RS decoding operation is equivalent to solving a system of linear equations. It thus mainly involves two steps: 1) inverting the coefficient matrix and 2) multiplying by the inverse matrix to obtain the values of the unknowns. In general, the second step has quite similar complexity across different representations of the same code. Thus, if one code representation has lower complexity in the first step (inverting the coefficient matrix), its overall decoding operation will be more efficient. It is well understood that inverting a Cauchy matrix requires less computation than inverting a Vandermonde matrix. Therefore, in general, Cauchy-matrix-based RS codes are considered more efficient than Vandermonde-matrix-based RS codes [5], [25].

Quite to the contrary, we observe that Vandermonde representations are, in fact, more efficient for (L, 2) and (L, 3) RS codes in terms of the decoding operations used in the rekeying process. The main reason is that the inverse Vandermonde matrix is much simpler than the inverse Cauchy matrix for k = 2 and k = 3. Taking k = 3 as an example, a more computation-efficient variant of a Vandermonde-matrix-based RS code can be represented as follows [23], [24]:

    [ ci ]   [ 1  i  i^2 ] [ m1 ]
    [ cj ] = [ 1  j  j^2 ] [ m2 ]                                      (1)
    [ ck ]   [ 1  k  k^2 ] [ m3 ]

where i, j, and k are the positions of the members assigned by the GC when they join the multicast group. They are also treated as elements of the corresponding finite field GF(2^m), which is used to construct the RS codes, so the additions below are bitwise XORs. Then, the inverse matrix can be represented as

    [ jk/((i+j)(i+k))     ik/((i+j)(j+k))     ij/((i+k)(j+k))    ]
    [ (j+k)/((i+j)(i+k))  (i+k)/((i+j)(j+k))  (i+j)/((i+k)(j+k)) ]     (2)
    [ 1/((i+j)(i+k))      1/((i+j)(j+k))      1/((i+k)(j+k))     ]

so that applying (2) to (ci, cj, ck) recovers (m1, m2, m3).

Note that m1 is just the multicast session key here. Similarly, the RS codes constructed from the Cauchy matrix can be represented as

    [ ci ]   [ 1/(xi+y1)  1/(xi+y2)  1/(xi+y3) ] [ m1 ]
    [ cj ] = [ 1/(xj+y1)  1/(xj+y2)  1/(xj+y3) ] [ m2 ]                (3)
    [ ck ]   [ 1/(xk+y1)  1/(xk+y2)  1/(xk+y3) ] [ m3 ]

where xi, xj, and xk change values based on the positions of the members, whereas y1, y2, and y3 are constants. The inverse matrix turns out to be much more complicated than that of the Vandermonde matrix, so we present just a single entry to illustrate. Letting d1,1 be the entry of the inverse matrix at row 1 and column 1, we have

    d1,1 = (xi+y1)(xi+y2)(xi+y3)(xj+y1)(xk+y1) / ((xi+xj)(xi+xk)(y1+y2)(y1+y3)).   (4)

It is true that the common terms (for example, y1+y2, y2+y3, and y1+y3) can be precomputed for each given code construction, but still, every entry requires more operations than the corresponding entry of the inverse Vandermonde matrix.

Based on the above observation, we choose to use the Vandermonde matrix to construct the (L, 2) and (L, 3) RS codes, instead of the Cauchy matrix that conventional wisdom suggests.
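To make the decoding cost concrete, the following Python sketch recovers m1 from three codeword symbols using only the first row of the inverse matrix in (2). It is a minimal illustration under stated assumptions, not the paper's implementation: GF(2^8) with the reduction polynomial 0x11D is an arbitrary choice (the scheme only requires some GF(2^m)), and all function names are ours.

# Sketch: decode m1 via the first row of (2) over GF(2^8).
# POLY = 0x11D is an assumption made for illustration only.

POLY = 0x11D  # x^8 + x^4 + x^3 + x^2 + 1

def gf_mul(a: int, b: int) -> int:
    """Carry-less multiplication modulo POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= POLY
        b >>= 1
    return r

def gf_inv(b: int) -> int:
    """b^-1 = b^254 in GF(2^8), for b != 0."""
    result, power = 1, gf_mul(b, b)      # power runs through b^2, b^4, ..., b^128
    for _ in range(7):
        result = gf_mul(result, power)
        power = gf_mul(power, power)
    return result

def gf_div(a: int, b: int) -> int:
    return gf_mul(a, gf_inv(b))

def encode(m1: int, m2: int, m3: int, x: int) -> int:
    """Codeword symbol at position x for the code in (1): m1 + m2*x + m3*x^2."""
    return m1 ^ gf_mul(m2, x) ^ gf_mul(m3, gf_mul(x, x))

def decode_m1(ci: int, cj: int, ck: int, i: int, j: int, k: int) -> int:
    """First row of (2); '+' in the formulas is XOR in GF(2^m)."""
    t1 = gf_div(gf_mul(ci, gf_mul(j, k)), gf_mul(i ^ j, i ^ k))
    t2 = gf_div(gf_mul(cj, gf_mul(i, k)), gf_mul(i ^ j, j ^ k))
    t3 = gf_div(gf_mul(ck, gf_mul(i, j)), gf_mul(i ^ k, j ^ k))
    return t1 ^ t2 ^ t3

if __name__ == "__main__":
    m1, m2, m3 = 0x53, 0xCA, 0x07        # m1 plays the role of the session key
    i, j, k = 3, 5, 9                    # distinct member positions
    ci, cj, ck = (encode(m1, m2, m3, x) for x in (i, j, k))
    assert decode_m1(ci, cj, ck, i, j, k) == m1
    print("recovered m1 =", hex(m1))

Each of the three terms in decode_m1 is exactly one instance of the 5-ary pattern (a x b x c) ÷ (d x e) that the lookup-table optimization below targets.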

Optimized Lookup Table for Multiplication and Division

Finite-field multiplication and division are usually implemented via lookup tables. Each nonzero element in a finite field can be represented as a power of a primitive element (say, α). Hence, the multiplication of two elements a and b can be done by first adding their power values and then calculating the exponential value as

a x b = α^(log_α(a) + log_α(b)).   (5)

It is straightforward to precompute a logarithmic table and an exponential table for a given finite field. With these, the multiplication can simply be implemented by table lookups as

a x b = exp[log[a] + log[b]].   (6)

Similarly, the division can be implemented as

a ÷ b = exp[log[a] - log[b]].   (7)

Careful readers might immediately spot potential problems with the exponential table: 1) element a or b might be 0, for which the lookup does not yield correct results; 2) the index into the table might exceed the table boundary in the multiplication case; and 3) the index might become negative in the division case. These issues can essentially be addressed by augmenting the exponential table in the following ways: 1) augmenting the table such that the sum of any two power values is always covered; 2) further augmenting the table such that 0 maps to a remote entry, so that any further operation (plus/minus) lands in a nearby region whose entries are all 0; and 3) shifting the pointer to the table such that negative indices are allowed. Of course, the last technique is programming-language specific. However, even without language support, the same goal can be achieved simply by adding a constant offset to a computed (negative) index. These techniques are also applicable to general implementations of RS codes.
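As a concrete illustration of (5)-(7), the Python sketch below builds plain (unaugmented) log/exp tables for GF(256) and uses them for multiplication and division. The reduction polynomial 0x11D and the primitive element 2 are assumptions made only for this example, and the explicit zero checks are exactly what the augmentation described above is meant to remove.

# Sketch of the basic log/exp lookup tables behind (5)-(7) for GF(256).
# POLY = 0x11D and primitive element 2 are illustrative assumptions.

POLY = 0x11D

EXP = [0] * 512          # doubled so log[a] + log[b] (at most 508) stays in range
LOG = [0] * 256

x = 1
for p in range(255):     # enumerate powers of the primitive element 2
    EXP[p] = x
    LOG[x] = p
    x = (x << 1) ^ (POLY if x & 0x80 else 0)
for p in range(255, 512):
    EXP[p] = EXP[p - 255]    # wrap around: alpha^255 = 1

def gf_mul(a: int, b: int) -> int:
    """Multiplication as in (6); the explicit zero test is what augmentation removes."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def gf_div(a: int, b: int) -> int:
    """Division as in (7); +255 keeps the index non-negative (b != 0)."""
    if a == 0:
        return 0
    return EXP[LOG[a] - LOG[b] + 255]

if __name__ == "__main__":
    for a, b in [(0x57, 0x83), (0x01, 0xFE), (0xAB, 0x02)]:
        assert gf_div(gf_mul(a, b), b) == a
    print("log/exp tables consistent")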

Considering our special need to implement the (L, 2) and (L, 3) codes, we find that the lookup-table idea can be extended to further simplify operations. The most complicated calculation in (2) requires three multiplications and two divisions (for example, ci x j x k ÷ ((i+j)(i+k)) in the first row of (2)). Hence, instead of performing this calculation as a series of binary operations, where each operation is similar to (6) or (7), we can build a special augmented exponential table to accommodate the entire 5-ary calculation as

(a x b x c) ÷ (d x e) = exp[log[a] + log[b] + log[c] - log[d] - log[e]].   (8)

In addition, notice that only element a can take the value 0 (b, c, d, and e cannot be 0 due to the construction of the RS codes), which means that the table needs only slightly further augmentation to accommodate this special case. Indeed, when the finite field GF(256) is used, an exponential table of size 2,304 is sufficient. Assuming that negative indices are supported, the table spans the index range [-1,535, 768]. The entries exp[-1,535] through exp[-512] always map to 0 to handle the special case a = 0 (the logarithmic table maps 0 to -1,024 here, i.e., log[0] = -1024), and exp[-511] through exp[768] handle normal 5-ary operations (multiplications together with divisions). It is conceivable that such an exponential table can easily fit into the fast cache of modern computing devices.
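A possible realization of this special table, following the sizes quoted above (2,304 entries covering logical indices -1,535 to 768, with log[0] = -1,024), is sketched below in Python. Python lists have no negative base index, so the sketch adds the constant offset 1,535 instead, which is the language-independent fallback mentioned earlier; the field constants are again illustrative assumptions rather than choices made by the scheme.

# Sketch of the augmented exponential table for the 5-ary operation in (8),
# (a*b*c)/(d*e) over GF(256), using the sizes quoted in the text:
# 2,304 entries for logical indices [-1,535, 768] and log[0] = -1,024.
# POLY = 0x11D is an illustrative assumption; OFFSET replaces negative indexing.

POLY = 0x11D
OFFSET = 1535                         # logical index -1,535 maps to slot 0
TABLE_SIZE = 2304                     # covers logical range [-1,535, 768]

LOG = [-1024] * 256                   # log[0] = -1,024 (the "remote" entry)
EXP5 = [0] * TABLE_SIZE               # defaults to 0, which already covers the
                                      # region hit whenever a == 0

antilog = [0] * 255
x = 1
for p in range(255):                  # powers of the primitive element 2
    antilog[p] = x
    LOG[x] = p
    x = (x << 1) ^ (POLY if x & 0x80 else 0)

for idx in range(-511, 769):          # region for normal 5-ary operations
    EXP5[idx + OFFSET] = antilog[idx % 255]

def five_ary(a: int, b: int, c: int, d: int, e: int) -> int:
    """(a*b*c)/(d*e) with a single lookup; only a may be 0, per the text."""
    return EXP5[LOG[a] + LOG[b] + LOG[c] - LOG[d] - LOG[e] + OFFSET]

if __name__ == "__main__":
    # Cross-check against a slow reference multiply/inverse in the same field.
    def ref_mul(a: int, b: int) -> int:
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & 0x100:
                a ^= POLY
            b >>= 1
        return r

    def ref_inv(b: int) -> int:       # b^254 = b^-1
        inv = 1
        for _ in range(254):
            inv = ref_mul(inv, b)
        return inv

    import random
    for _ in range(500):
        a = random.randrange(256)                       # a may be 0
        b, c, d, e = (random.randrange(1, 256) for _ in range(4))
        expect = ref_mul(ref_mul(ref_mul(a, b), c), ref_inv(ref_mul(d, e)))
        assert five_ary(a, b, c, d, e) == expect
    print("5-ary augmented table consistent")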
