Open Access

Exploiting joint sparsity in compressed sensing-based RFID

EURASIP Journal on Embedded Systems 2016, 2016:8

https://doi.org/10.1186/s13639-016-0025-y

Received: 21 December 2015

Accepted: 23 February 2016

Published: 21 April 2016

Abstract

We propose a novel scheme to improve compressed sensing (CS)-based radio frequency identification (RFID) by exploiting multiple measurement vectors. Multiple measurement vectors are obtained by employing multiple receive antennas at the reader or by separation into real and imaginary parts. Our problem formulation renders the corresponding signal vectors jointly sparse, which in turn enables the utilization of CS. Moreover, the joint sparsity is exploited by an appropriate algorithm.

We formulate the multiple measurement vector problem in CS-based RFID and demonstrate how a joint recovery of the signal vectors strongly improves the identification speed and noise robustness. The key insight is as follows: Multiple measurement vectors allow us to shorten the CS measurement phase, which translates to shortened tag responses in RFID. Furthermore, the new approach enables robust signal support estimation and no longer requires prior knowledge of the number of activated tags.

Keywords

Compressed sensing · Approximate message passing · Joint sparsity · Multiple measurement vectors · Backscatter communication · Multiple access

1 Introduction

In radio frequency identification (RFID), a reader device interrogates tags for identification. A large branch of RFID deals with the identification of a multitude of tags that may identify, e.g., products in a store, parts on a conveyor belt, or items in a warehouse. Predominantly, passive tags that are powered by the field emitted by the reader are employed. Such tags are cheap and can be produced in high volumes, which has made RFID a ubiquitous technology. An overview is provided in [1, 2].

Reducing the identification time and promoting quick identification of many tags has been a major research field in recent years. The key problem in customary protocols arises from collisions during interrogation: If several tags respond simultaneously, their responses superimpose at the reader and cause collisions, resulting in loss of data. The widely adopted EPCglobal standard [3] employs a collision avoidance protocol called frame slotted ALOHA (FSA)—a summary of collision avoidance schemes is provided by [4, 5]. These protocols separate the tag responses in the time domain. The authors in [6–8] improve FSA by performing collision recovery, which is accomplished by separating tag responses in the in-phase and quadrature-phase plane and by employing multiple receive antennas at the reader to resolve multiple collisions.

The compressed sensing (CS)-based identification protocols [9–14], on the other hand, cope with simultaneously responding tags and exploit collisions. This bears several advantages over FSA-based schemes, as reported in, e.g., [9, 14]. In particular, CS enables a quicker identification and provides an increased noise robustness. In this work, we demonstrate that CS-based schemes can be improved significantly if joint sparsity is exploited. To the best of the authors’ knowledge, this is truly novel in the realm of CS-based RFID, denoted as CS-RFID in the sequel. We discuss how multiple receive antennas at the reader and the separation into real and imaginary parts lead to a CS problem with inherent joint sparsity.

We employ the Bayesian structured signal approximate message passing (BASSAMP) algorithm [15] in order to exploit the joint sparsity. While the number of activated tags is assumed to be known or has to be estimated in an additional step in the current state-of-the-art protocols [9–14], we demonstrate how our approach eliminates the need for such prior knowledge by exploiting a novel signal support estimation scheme. Note that we do not present a new protocol but an improvement of the CS formulation that is applicable to all present protocols [9–14]. The benefits of our novel formulation comprise a strongly increased identification throughput, an increased noise robustness, and an implicit estimation of the number of activated tags.

Outline: In Section 2, we briefly summarize CS and introduce the concept of multiple measurement vectors and joint sparsity. Section 3 gives an overview of CS-RFID and discusses our novel contributions. Section 4 explains the origins of joint sparsity in CS-RFID and highlights the advantages of its exploitation. In Section 5, the channel model and channel coefficient distribution are introduced, and the BASSAMP algorithm is defined for the RFID scenario. Section 6 deals with the estimation of the signal support and the number of activated tags. Numerical results are provided and discussed in Section 7, and we conclude in Section 8.

Notation: Boldface letters such as A and a denote matrices and vectors, respectively. The bth column of a matrix A is denoted \(\mathbf{a}_{b}\), while the nth entry of the bth column is denoted \(a_{n,b}\). The superscript (·)T denotes the transposition of a matrix or vector, and (·)H denotes the conjugate transpose. The vectorization of an M×N matrix into a column vector is denoted \(\mathbf {A}(:) \equiv \left [\mathbf {a}_{1}^{{\mathrm {T}}},...,\mathbf {a}_{N}^{{\mathrm {T}}}\right ]^{{\mathrm {T}}}\).

The \(N\times N_{B}\) all-one matrix is denoted \(\mathbf {1}_{N\times N_{B}}\). The Frobenius norm of a matrix A is denoted \(\|\mathbf {A}\|_{F}=\sqrt {\text {trace}(\mathbf {A} \mathbf {A}^{\mathrm {H}})}\). Calligraphic letters such as \(\mathcal {S}\) denote sets.

The cardinality of a set \(\mathcal {S}\) is denoted by \(|\mathcal {S}|\). Random variables, vectors, and matrices are written in sans serif font as x, x, and X, respectively, while realizations thereof are written in serif as x, x, and X.

2 Compressed sensing and joint sparsity

CS—introduced in [16, 17] and discussed in [18, 19]—aims at reconstructing a signal vector \(\mathbf {x} \in \mathbb {C}^{N}\) from M<N noisy linear measurements
$$ \mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{w}, $$
(1)

where \(\mathbf {y} \in \mathbb {C}^{M}\) is the measurement vector, \(\mathbf {A} \in \mathbb {R}^{M \times N}\) is the fixed sensing matrix, and \(\mathbf {w} \in \mathbb {C}^{M}\) is additive measurement noise. If the signal vector x features only \(K\ll N\) nonzero entries, it is said to be K-sparse.

Using randomly generated sensing matrices with i.i.d. (sub-)Gaussian entries, x can be reconstructed from (1) by [20]
$$ M = \left\lceil c K \log \frac{N}{K} \right\rceil $$
(2)

measurements, with a small constant c. Most importantly for CS-RFID, this holds for Rademacher distributed sensing matrices where the entries are picked from the set {−1,1} with equal probability.
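As a concrete illustration of (2) and the Rademacher construction, the following sketch computes the required number of measurements and draws a sensing matrix with i.i.d. entries from {−1, 1}; the constant c = 2 and the function names are assumptions for illustration, not values from the text.

```python
import math
import random

def measurement_count(N, K, c=2.0):
    # Number of CS measurements per (2); the constant c = 2 is an assumed value.
    return math.ceil(c * K * math.log(N / K))

def rademacher_matrix(M, N, seed=0):
    # Entries drawn i.i.d. from {-1, +1} with equal probability.
    rng = random.Random(seed)
    return [[rng.choice((-1, 1)) for _ in range(N)] for _ in range(M)]

M = measurement_count(N=1024, K=10)   # far fewer than N = 1024 measurements
A = rademacher_matrix(M, 1024)
```

For N = 1024 and K = 10 with the assumed c = 2, this yields M = 93 measurements instead of N = 1024 samples.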

While many recovery algorithms aiming at solving (1) for x have been proposed in the literature [18, 19], we utilize the versatile approximate message passing (AMP) framework that was introduced in [21–23]. These algorithms enable efficient recovery with low computational complexity while maintaining excellent recovery performance. Note that the CS-RFID schemes presented in [9–11] utilize computationally demanding convex optimization algorithms, while the schemes in [12–14] employ an efficient AMP algorithm that iteratively solves the least absolute shrinkage and selection operator (LASSO) problem [24]. In this work, we utilize a powerful extension of the AMP algorithm—termed BASSAMP and introduced in [15]—that allows us to leverage prior knowledge and joint sparsity. A detailed specification follows in Section 5.

Joint sparsity is defined as having \(N_{B}\) signal vectors \(\mathbf {x}_{b}\in \mathbb {C}^{N}\), \(b\in \mathcal {B}=\{1,...,{N_{B}}\}\), that share a common support
$$ \mathcal{S}_{\mathbf{x}} \equiv \mathcal{S}_{\mathbf{x}_{b}},\forall b\in\mathcal{B}, $$
(3)

where \(\mathcal {S}_{\mathbf {x}_{b}}\) contains the indices of the nonzero entries in \(\mathbf{x}_{b}\). Aside from having the same support, all vectors are K-sparse with \(|\mathcal {S}_{\mathbf {x}}|=K\ll N\).

In general, we obtain \(N_{B}\) measurement vectors similar to (1):
$$ \mathbf{y}_{b} = \mathbf{A}^{\!(b)}\mathbf{x}_{b} + \mathbf{w}_{b}, $$
(4)

where \(\mathbf {y}_{b} \in \mathbb {C}^{M}\), \(\mathbf {A}^{\!(b)}\in \mathbb {R}^{M \times N}\), and \(\mathbf {w}_{b} \in \mathbb {C}^{M}\). Let us collect the data blocks in matrices: \(\mathbf {Y} = [\mathbf {y}_{1},...,\mathbf {y}_{b},..., \mathbf {y}_{{N_{B}}\phantom {\dot {i}\!}}]\), \(\mathbf {X} = [\mathbf {x}_{1},...,\mathbf {x}_{b},..., \mathbf {x}_{{N_{B}}\phantom {\dot {i}\!}}]\), and \(\mathbf {W} = [\mathbf {w}_{1},...,\mathbf {w}_{b},..., \mathbf {w}_{{N_{B}}\phantom {\dot {i}\!}}]\).

If all \(N_{B}\) sensing matrices are identical, i.e., \(\mathbf {A} \equiv \mathbf {A}^{\!(b)}, \forall b\in \mathcal {B}\), (4) can be rewritten as Y=A X+W. This is the relevant case for our approach.
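A minimal sketch of the joint sparsity model (3): the helper below (a hypothetical name) draws \(N_{B}\) vectors of length N that share one random K-element support, with Gaussian nonzero entries as a generic stand-in for the channel coefficients.

```python
import random

def jointly_sparse_matrix(N, K, N_B, seed=1):
    """Return an N x N_B matrix whose columns share one K-element support (3)."""
    rng = random.Random(seed)
    support = set(rng.sample(range(N), K))
    X = [[rng.gauss(0.0, 1.0) if n in support else 0.0 for _ in range(N_B)]
         for n in range(N)]
    return X, support

X, support = jointly_sparse_matrix(N=64, K=5, N_B=4)
# Every column of X is K-sparse with nonzeros exactly on the common support.
```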

3 CS-RFID: Related work and novel contributions

The emerging field of CS triggered a shift of paradigm in signal processing and digital communications that also sparked new ideas in the field of RFID. In this work, we focus on protocols where a single reader identifies a multitude of tags. It is investigated how multiple receive antennas at the reader improve the performance.

We assume commonly used passive RFID tags that employ backscatter modulation to convey information back to the reader [25]. Before going into details about the CS-RFID protocols presented in [9–14], let us first discuss what those schemes have in common.

3.1 CS-RFID problem formulation

All CS-based identification protocols [9–14] share a common problem formulation that can be interpreted as a compressed sensing measurement similar to (1). Let us recapitulate how this formulation is obtained.
  • We intend to identify K tags that are activated by the reader (i.e., are in read range).

  • After a query from the reader, all K tags respond simultaneously with a signature sequence.

  • Activated tag k responds with signature \(\mathbf {s}_{a_{k}}\), where \(a_{k}\in\{1,...,N\}\) is the signature index, and there are N possible signatures in total.

  • Each signature entails M real-valued (ASK) symbols, i.e., \(\mathbf {s}_{a_{k}} \in \{b_{0},b_{1}\}^{M}\) (the two amplitudes of backscatter modulation).

  • The N signature sequences form the columns of the signature matrix \(\mathbf{S}=[\mathbf{s}_{1},...,\mathbf{s}_{N}]\in\{b_{0},b_{1}\}^{M\times N}\). The signature sequences are generated pseudo-randomly, each with a certain seed [26]. The possible seeds are known to the reader such that it can construct S.

As a response to a query, the reader receives a superposition of signature sequences \(\mathbf {s}_{a_{k}}\) that are weighted with the respective channel coefficients \(h_{k}\), which we formulate as a CS measurement
$$ \mathbf{z} = \sum\limits_{k=1}^{K} \mathbf{s}_{a_{k}} h_{k} +\mathbf{w} = \mathbf{Sx} + \mathbf{w}, $$
(5)

where the nonzero entries of \(\mathbf {x}\in \mathbb {C}^{N}\) store the complex-valued channel coefficients and dictate which columns in S are selected, and with \(\mathbf {w}\in \mathbb {C}^{M}\) being additive Gaussian measurement noise with i.i.d. entries \(\mathsf {w}_{m} \sim \mathcal {CN}(0,\sigma _{\mathsf {w}}^{2})\). In our application, x is a sparse vector with \(K\ll N\) nonzero entries, i.e., there are much fewer signatures (or tags to be identified) than exist in total. Our goal is to recover x from z knowing S, because the locations of the nonzero entries in x tell us which signatures have been chosen by the tags. This information is used to directly identify the tags [12, 14] or to establish a handshake mechanism in order to read out the tag information in an additional step [9–11, 13].
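The acquisition measurement (5) can be simulated directly. The sketch below uses assumed backscatter amplitudes b0 = 0.2 and b1 = 1.0 and unit-variance complex Gaussian channel coefficients (the actual channel model follows in Section 5); the superposition of the K weighted signatures equals Sx by construction.

```python
import random

rng = random.Random(2)
N, K, M = 32, 3, 12
b0, b1 = 0.2, 1.0                 # assumed backscatter amplitudes

# Signature matrix S: the N possible signature sequences as columns.
S = [[rng.choice((b0, b1)) for _ in range(N)] for _ in range(M)]

# K tags pick distinct signature indices; each has a complex channel coefficient.
idx = rng.sample(range(N), K)
h = {a: complex(rng.gauss(0, 1), rng.gauss(0, 1)) for a in idx}

# Sparse signal vector x: channel coefficients at the chosen indices.
x = [h.get(n, 0j) for n in range(N)]

# Received superposition z = Sx (noise omitted), per (5).
z = [sum(S[m][n] * x[n] for n in range(N)) for m in range(M)]
```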

3.2 CS-RFID overview

Let us give an overview of the individual protocols. Most tag identification protocols can be separated into two subsequent phases:

Tag acquisition refers to the process of obtaining information about the activated tags in order to communicate with them. Data read-out refers to the process of obtaining the data (payload) of the acquired tags. For example, the widely employed FSA protocol [3] schedules the activated tags to respond during time slots in a frame, thereby trying to avoid collisions in the acquisition phase. The tags respond with a 16-bit pseudo-random sequence called RN16 [3]. The reader acquires the uncorrupted RN16 sequences from collision-free slots, which enables a successful handshake mechanism with the corresponding tags. After acquisition, the data that identifies the tags is read out in a sequential manner (tag by tag). Let us discuss how these phases are handled by CS-RFID protocols.

Buzz: This CS-based scheme was introduced in [9].

During the acquisition phase, the tags respond simultaneously with pseudo-random sequences that are seeded by the tag’s temporary identifier, which is a 16-bit random number (i.e., the RN16 number a tag would have picked for FSA). This is formulated as a CS measurement (5). Because the total number of possible identifiers (signatures in S) is \(N=2^{16}\), the CS measurement (5) features a very large sensing matrix S that renders an efficient recovery of x infeasible. The scale of the problem is reduced by hashing the identifiers into buckets [9] and eliminating the buckets that contain no energy, thereby strongly reducing the number of possible signatures (and, consequently, N). However, this requires knowledge of the number of activated tags K, which has to be estimated in a prior step. An improved scale reduction that utilizes a gradient algorithm was introduced in [10]. Another improvement that does not require arbitrary restrictions of the huge initial identifier space was proposed in [11].

In the data read-out phase, the tags respond simultaneously as well. Based on the temporary identifier (that is now known to the reader), each tag generates a random sequence of bits. If a bit is ‘1’, the tag transmits its data, whereas it is silent if a bit is ‘0’. This results in a rate-less code; the superposition of the randomly encoded tag responses can be decoded by a belief propagation decoder, for details, see [9].

CSF: In [12], we introduced a scheme to quickly identify tags in applications with fixed inventory, e.g., a book store with N books, of which \(K\ll N\) books are brought to the checkout to be identified. Each of the N tags features a unique signature (identifier) that is not based on a random number.

During the acquisition phase, the tags respond simultaneously with their signature sequence—this process is cast as a CS measurement (5). Recovering x from (5) yields complete identification because each signature corresponds to a unique item of the inventory; no data read-out phase is required for identification.

Another novelty proposed in [12] was the utilization and investigation of an AMP recovery algorithm that enables efficient iterative recovery of large-scale CS problems.

CSR: In [13, 14], a flexible alternative to CSF was introduced that allows for arbitrary inventory sizes.

During the acquisition phase, the tags respond simultaneously with a signature sequence that is randomly chosen from a pool of N possible signatures (N is now a design parameter). Recovering x from (5) yields the estimated set of assigned signatures. A scale reduction as in Buzz is not required.

In the data read-out phase, these signatures are enquired and the corresponding tags that recognize their signature transmit their data. In [13, 14], the data is read out in a sequential manner (tag by tag). However, it is also possible to employ the rate-less code scheme from Buzz [9].

In [14], the optimal choice of the signature length M and the signature pool size N as functions of the number of tags K was discussed.

A brief overview of the aforementioned schemes is provided in Table 1.
Table 1
How RFID protocols handle the two phases of tag identification

           Phase 1: Tag acquisition                                    Phase 2: Data read-out
  FSA      Schedule responses into time slots (avoid collisions)       Sequential (list)
  Buzz     Concurrent responses, CS measurement (exploit collisions)   Simultaneous (rate-less code)
  CSF      Concurrent responses, CS measurement (exploit collisions)   Not required
  CSR      Concurrent responses, CS measurement (exploit collisions)   Sequential or simultaneous

3.3 Novel contributions

All CS-RFID protocols feature the same problem formulation (5) for tag acquisition. This is the key aspect that we build and improve upon throughout this work, i.e., we improve the CS recovery scheme for RFID applications rather than proposing a new protocol. Our novel contributions are as follows:
  • We identify the origins of joint sparsity in CS-RFID and provide a mathematical problem formulation. In particular, a reader with multiple receive antennas features multiple jointly sparse signals, and a separation into real and imaginary parts doubles their number.

  • We adapt BASSAMP—an algorithm used for CS recovery of jointly sparse signals—to the RFID problem formulation. This adaptation involves the calculation of several functions—used in the algorithm—for a specific channel coefficient prior distribution. We perform a rigorous calculation for a dyadic channel model, where the individual channels are Gaussian distributed, and finally provide closed form expressions for those functions. This allows for a straightforward implementation of the algorithm.

  • We propose a relaxation of the channel prior distribution in order to obtain simpler functions and to further reduce the computational complexity (reader side) of the implementation. This relaxation is validated by experiments.

  • We introduce a novel approach for robust signal support estimation that utilizes joint sparsity. This enables implicit estimation of the number of activated tags K. FSA requires K for an optimal choice of the frame size, Buzz requires K to reduce the scale of the CS problem, and CSF and CSR require K to determine the ideal signature length for optimal identification throughput. With our proposed approach, K can be implicitly estimated during CS recovery.

  • We compare CS-RFID to a FSA-based collision recovery scheme and show the superior performance of our proposed approach. It is investigated how the number of receive antennas at the reader influences the performance.

4 Exploiting joint sparsity in CS-RFID

In [12–14], we explained how the computationally efficient AMP algorithm can be used to recover x from large-scale CS problems (5). In [15, 27], it was shown that the exploitation of additional signal structure—such as joint sparsity—strongly improves the recovery performance of AMP-based schemes. The main benefits of exploiting joint sparsity in CS-RFID are as follows:
  • The same mean squared error (MSE) performance as standard AMP can be achieved with a significantly reduced number of CS measurements M. Consequently, shorter signature sequences (of length M) can be employed for tag acquisition.

  • This increases the acquisition throughput and reduces the jitter sensitivity [14] (jitter refers to link frequency deviations among tags). The noise robustness is improved as well. Furthermore, passive tags require less energy during the acquisition phase.

  • Support estimation (location of nonzero entries) can be improved significantly by combining soft information from multiple vectors, see Section 6. This leads to fewer identification cycles [14] (i.e., fewer repetitions of the acquisition phase) and quicker identification. Furthermore, the number of activated tags K can be estimated implicitly.

In the following, we demonstrate how to obtain jointly sparse signal vectors in CS-RFID.

Multiple receive antennas: In Fig. 1, a reader with \(N_{R}\) receive antennas is depicted. Each antenna receives the superposition of signatures (5) with different noise realizations and, generally, with different channel coefficients for every tag (stored in signal vector \(\mathbf{x}_{r}\), \(r\in\{1,...,N_{R}\}\)). Employing \(N_{R}\) receive antennas provides us with \(N_{R}\) jointly sparse signal vectors and, thus, with \(N_{R}\) measurement vectors that are generated with the same signature matrix:
$$ \left[\mathbf{z}_{1},...,\mathbf{z}_{N_{R}}\right] = \mathbf{S} \left[\mathbf{x}_{1},...,\mathbf{x}_{N_{R}}\right] + \left[\mathbf{w}_{1},...,\mathbf{w}_{N_{R}}\right]. $$
(6)
Fig. 1

A reader with multiple receive antennas interrogates tags. We assume that a reader with one transmit antenna and \(N_{R}\) receive antennas is identifying K activated tags

Real and imaginary parts: Having complex-valued channel coefficients in x r and a real-valued signature matrix S, we can separate the measurements into a real part \(\mathbf {z}_{r}^{(\mathfrak {R})}\) and an imaginary part \(\mathbf {z}_{r}^{(\mathfrak {I})}\), which yields two jointly sparse signal vectors, and two measurement vectors that are generated with the same sensing matrix:
$$ [\mathbf{z}_{r}^{(\mathfrak{R})},\mathbf{z}_{r}^{(\mathfrak{I})}] = \mathbf{S} [\mathbf{x}_{r}^{(\mathfrak{R})},\mathbf{x}_{r}^{(\mathfrak{I})}] + [\mathbf{w}_{r}^{(\mathfrak{R})},\mathbf{w}_{r}^{(\mathfrak{I})}]. $$
(7)
We combine the two variants as depicted in Fig. 2 and obtain
$$ N_{B}=2 N_{R} $$
(8)
Fig. 2

Origins of joint sparsity. By splitting the \(N_{R}\) receive signals into real and imaginary parts, the number of measurement vectors is doubled to \(N_{B}=2N_{R}\). The measurement vectors entail \(N_{B}\) jointly sparse signal vectors that produce the measurements

jointly sparse vectors.

The CS measurement with jointly sparse vectors is illustrated by Fig. 3 and reads
$$ \begin{aligned} \underbrace{\left[\mathbf{z}_{1}^{(\mathfrak{R})},\mathbf{z}_{1}^{(\mathfrak{I})},...,\mathbf{z}_{N_{R}}^{(\mathfrak{R})},\mathbf{z}_{N_{R}}^{(\mathfrak{I})}\right]}_{\mathbf{Z}} &= \mathbf{S} \underbrace{\left[\mathbf{x}_{1}^{(\mathfrak{R})},\mathbf{x}_{1}^{(\mathfrak{I})},...,\mathbf{x}_{N_{R}}^{(\mathfrak{R})},\mathbf{x}_{N_{R}}^{(\mathfrak{I})}\right]}_{\mathbf{X}} \\ &+ \underbrace{\left[\mathbf{w}_{1}^{(\mathfrak{R})},\mathbf{w}_{1}^{(\mathfrak{I})},...,\mathbf{w}_{N_{R}}^{(\mathfrak{R})},\mathbf{w}_{N_{R}}^{(\mathfrak{I})}\right]}_{\mathbf{W}}. \end{aligned} $$
(9)
Fig. 3

CS measurement with \(N_{B}\) blocks. Illustration of a CS measurement with \(N_{B}\) jointly sparse signal vectors collected in matrix X, resulting in \(N_{B}\) measurement vectors collected in Z. (Noise W omitted)
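Because S is real-valued, the split (7) is an exact identity: the real (imaginary) part of z is S applied to the real (imaginary) part of x. A small numerical check under assumed toy dimensions:

```python
import random

rng = random.Random(3)
M, N = 8, 16

# Real-valued signature-like matrix and a complex signal vector.
S = [[rng.choice((-1.0, 1.0)) for _ in range(N)] for _ in range(M)]
x = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(N)]

z = [sum(S[m][n] * x[n] for n in range(N)) for m in range(M)]

# Split (7): real and imaginary parts each obey a real-valued CS model with S.
zR = [sum(S[m][n] * x[n].real for n in range(N)) for m in range(M)]
zI = [sum(S[m][n] * x[n].imag for n in range(N)) for m in range(M)]
```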

The BASSAMP Algorithm 1 is applied on the reformulation
$$ \mathbf{Y} = \left[\mathbf{y}_{1},..., \mathbf{y}_{N_{B}}\right] = \mathbf{AX} + \mathbf{W} $$
(10)
where the bth column of Y computes as
$$ \mathbf{y}_{b} = \mathbf{z}_{b} - \frac{1}{M}\left(\sum_{m=1}^{M} z_{m,b}\right)\mathbf{1}_{M\times 1} = \mathbf{A}\mathbf{x}_{b} + \mathbf{w}_{b}. $$
(11)

This reformulation is necessary in order to obtain an appropriate sensing matrix that is compatible with the employed recovery algorithm. Signature matrix S comprises entries from the set \(\{b_{0},b_{1}\}\). Assuming signature sequences (columns in S) in which \(b_{0}\) and \(b_{1}\) are equally likely, all sequences have the same mean. Consequently, the reformulation features a sensing matrix \(\mathbf {A}\in \{-\overline {b},\overline {b}\}^{M\times N}\) with zero-mean columns and with \(\overline {b}=|b_{1}-b_{0}|/2\), i.e., Rademacher distributed up to a constant factor. This renders A an appropriate sensing matrix for CS recovery that satisfies (2). In order to apply BASSAMP, we have to specify the functions used in Algorithm 1 (see below). Note that we use the original algorithm from [15] without algorithmic changes but demonstrate how the utilized functions have to be specified for our application case.
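The mean-removal reformulation (11) can be checked numerically. The sketch below assumes signatures whose columns contain b0 and b1 equally often, so the columns of A = S − (b0 + b1)/2 sum exactly to zero and subtracting the measurement mean reproduces Ax; the amplitudes b0 = 0.2 and b1 = 1.0 are assumed values.

```python
import random

rng = random.Random(4)
M, N = 8, 12
b0, b1 = 0.2, 1.0
c = (b0 + b1) / 2.0               # common column mean of S

def balanced_column():
    # Each signature holds b0 and b1 equally often (M even), so S - c has
    # exactly zero-mean columns with entries +/- (b1 - b0)/2.
    col = [b0] * (M // 2) + [b1] * (M // 2)
    rng.shuffle(col)
    return col

cols = [balanced_column() for _ in range(N)]
S = [[cols[n][m] for n in range(N)] for m in range(M)]
A = [[S[m][n] - c for n in range(N)] for m in range(M)]

x = [rng.gauss(0, 1) for _ in range(N)]
z = [sum(S[m][n] * x[n] for n in range(N)) for m in range(M)]

zbar = sum(z) / M                  # scalar mean over the M measurements
y = [zm - zbar for zm in z]        # reformulation (11), noiseless case
Ax = [sum(A[m][n] * x[n] for n in range(N)) for m in range(M)]
```

Here y agrees with Ax up to floating-point error; with signatures whose columns are only balanced on average, the identity holds approximately.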

5 Adaptation of BASSAMP for RFID

The BASSAMP algorithm—introduced in [15] and depicted in Algorithm 1—aims at recovering X from Y in (10). It utilizes the knowledge of A, the signal prior, and the joint sparsity structure.

Let us briefly summarize how Algorithm 1 works.

AMP decouples each measurement vector \(\mathbf{y}_{b}\) of the measurement formulation (10) into N uncoupled scalar measurements of the form (using random variables, details see [15, 28])
$$ \mathsf{u}_{n,b} = \mathsf{x}_{n,b} + \widetilde{\mathsf{w}}_{n,b}, $$
(12)

where the noise \(\widetilde {\mathsf {w}}_{n,b}\) accounts for the measurement noise and the undersampling noise. It is assumed to be Gaussian distributed as \(\widetilde {\mathsf {w}}_{n,b} \sim \mathcal {N}(0,\beta _{b})\). This assumption is satisfied in the asymptotic case (\(M,N\to\infty\) while \(\frac {M}{N}=const.\)) and approximately satisfied in finite but high dimensions. The decoupled measurement (12) refers to line 5 of Algorithm 1. The effective noise variance \(\beta_{b}\) is estimated in line 6, and the current estimate for \(\mathbf{x}_{b}\) is computed in line 7 using the minimum mean squared error (MMSE) estimator function F(·;·,·) that will be defined later. A residual is computed in line 8. The above steps constitute the Bayesian approximate message passing (BAMP) [21–23] iteration that is executed for all \(N_{B}\) blocks. In each BAMP iteration, the signal vector \(\mathbf{x}_{b}\) is newly estimated. The energy of the residual decreases over iterations, and so does the effective noise variance \(\beta_{b}\)—the MMSE estimator F(·;·,·) acts as a denoiser. In line 9, the extrinsic group update function \(U_{G}\)(·,·,·) enforces the joint sparsity structure. This is done via binary latent variables that indicate whether a signal entry is zero or nonzero; in a probabilistic manner, the prior probability for a specific signal entry to be zero is updated. The likelihood ratios that are generated by the extrinsic group update function are converted into new prior probabilities in line 10. After several iterations of BASSAMP, a consensus emerges. For a detailed derivation, we refer the interested reader to [15].
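The per-block iteration can be sketched in code. The following is not BASSAMP itself but a plain AMP loop for a single block, with a soft-threshold denoiser standing in for the Bayesian MMSE function F and the count of surviving entries standing in for the sum of F′ terms; the threshold factor alpha = 2 and all dimensions are assumed values.

```python
import numpy as np

def soft(u, t):
    # Soft threshold: a simple stand-in for the MMSE denoiser F (line 7).
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp(y, A, iters=30, alpha=2.0):
    """Schematic AMP loop for one block, mirroring lines 5-8 of Algorithm 1."""
    M, N = A.shape
    xhat, r = np.zeros(N), y.copy()
    for _ in range(iters):
        u = xhat + A.T @ r                          # decoupled measurements (12)
        beta = np.dot(r, r) / M                     # effective noise variance
        xhat = soft(u, alpha * np.sqrt(beta))       # denoise
        onsager = (r / M) * np.count_nonzero(xhat)  # Onsager correction term
        r = y - A @ xhat + onsager                  # residual update
    return xhat

rng = np.random.default_rng(0)
M, N, K = 128, 256, 8
A = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)  # scaled Rademacher
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
xhat = amp(A @ x, A)               # noiseless recovery of the K-sparse x
```

BASSAMP replaces the threshold with the prior-dependent F of Section 5.3 and, after each sweep over the \(N_{B}\) blocks, couples them through the updates in lines 9 and 10.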

The algorithm assumes independently distributed signal entries for the BAMP iteration, where the prior distribution of the nth entry of the bth signal vector is denoted as \(f_{\mathsf {x}_{n,b}\phantom {\dot {i}\!}}(x_{n,b})\). This prior plays a major role in the computation of the functions F(·;·,·), F′(·;·,·), and \(U_{G}\)(·,·,·). Therefore, let us specify the signal prior for the RFID scenario, which is essentially dictated by the channel model.

5.1 Channel model and distribution

As illustrated by Fig. 1, we employ a widely used dyadic channel model (see, e.g., [6, 8, 25]) where the channel coefficients comprise a forward channel \(h_{k}^{(\mathrm {f})}\) and a backward channel \(h_{r,k}^{(\mathrm {b})}\). The total channel from the transmit antenna to the kth tag and back to the rth receive antenna reads
$$ h_{r,k} = h_{k}^{(\mathrm{f})} h_{r,k}^{(\mathrm{b})}. $$
(13)
We assume that the forward and backward channel coefficients are distributed according to a circularly symmetric complex normal distribution such that
$$\begin{array}{*{20}l} \mathsf{h}_{k}^{(\mathrm{f})} &\sim \mathcal{CN}\left(0,{\sigma^{(\mathrm{f})}}^{2}\right), \end{array} $$
(14)
$$\begin{array}{*{20}l} \mathsf{h}_{r,k}^{(\mathrm{b})} &\sim \mathcal{CN}\left(0,{\sigma^{(\mathrm{b})}}^{2}\right). \end{array} $$
(15)
In the sequel, we consider the separation of the channel coefficients into real and imaginary parts:
$$\begin{array}{*{20}l} \mathsf{h}_{k}^{(\mathrm{f})} &= \mathsf{h}_{k}^{(\mathrm{f},\mathfrak{R)}} + j \mathsf{h}_{k}^{(\mathrm{f},\mathfrak{I)}}, \end{array} $$
(16)
$$\begin{array}{*{20}l} \mathsf{h}_{r,k}^{(\mathrm{b})} &= \mathsf{h}_{r,k}^{(\mathrm{b},\mathfrak{R)}} + j \mathsf{h}_{r,k}^{(\mathrm{b},\mathfrak{I)}}, \end{array} $$
(17)
where the real and imaginary parts obey a zero mean normal distribution with half the original variance, respectively. The total channel (13) can now be expressed by
$$ {\begin{aligned} \mathsf{h}_{r,k} &= \mathsf{h}_{r,k}^{\mathfrak{(R)}} + j \mathsf{h}_{r,k}^{\mathfrak{(I)}}\\ &= \left(\mathsf{h}_{k}^{(\mathrm{f},\mathfrak{R)}}\mathsf{h}_{r,k}^{(\mathrm{b},\mathfrak{R)}} - \mathsf{h}_{k}^{(\mathrm{f},\mathfrak{I)}}\mathsf{h}_{r,k}^{(\mathrm{b},\mathfrak{I)}}\right) +j \left(\mathsf{h}_{k}^{(\mathrm{f},\mathfrak{R)}} \mathsf{h}_{r,k}^{(\mathrm{b},\mathfrak{I)}} + \mathsf{h}_{k}^{(\mathrm{f},\mathfrak{I)}} \mathsf{h}_{r,k}^{(\mathrm{b},\mathfrak{R)}}\right). \end{aligned}} $$
(18)
In order to obtain the distribution of the real and imaginary parts of \(\mathsf{h}_{r,k}\), we recall the following relation ([29] Proposition 2.2.5): consider four random variables \(\mathsf {v}_{1},\mathsf {v}_{2}\sim \mathcal {N}(0,{\sigma _{a}^{2}})\) and \(\mathsf {v}_{3},\mathsf {v}_{4}\sim \mathcal {N}(0,{\sigma _{b}^{2}})\); the term \(\mathsf{v}=\mathsf{v}_{1}\mathsf{v}_{3}-\mathsf{v}_{2}\mathsf{v}_{4}\) is Laplace distributed with probability density function (PDF)
$$ f_{\mathsf{v}}(v) = \frac{1}{2\sigma_{a}\sigma_{b}}\exp\left(-\frac{1}{\sigma_{a}\sigma_{b}}|v|\right). $$
(19)
Applying this to (18), we obtain the PDF
$$ f_{\mathsf{h}}(h) = \frac{1}{{\sigma^{(\mathrm{f})}}{\sigma^{(\mathrm{b})}}}\exp{\left(-\frac{2}{{\sigma^{(\mathrm{f})}}{\sigma^{(\mathrm{b})}}} |h|\right)}, $$
(20)

where h is a placeholder for the real part \(\mathsf {h}_{r,k}^{\mathfrak {(R)}}\) or imaginary part \(\mathsf {h}_{r,k}^{\mathfrak {(I)}}\) of the total channel (13).
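The Laplace result (20) is easy to verify by Monte Carlo simulation: (20) is a Laplace density with scale \(\sigma^{(\mathrm{f})}\sigma^{(\mathrm{b})}/2\), so the real part of the dyadic channel has mean absolute value \(\sigma^{(\mathrm{f})}\sigma^{(\mathrm{b})}/2\) and variance \((\sigma^{(\mathrm{f})}\sigma^{(\mathrm{b})})^{2}/2\). A sketch with assumed unit channel variances:

```python
import math
import random

rng = random.Random(5)
sf = sb = 1.0                      # sigma^(f) and sigma^(b), assumed values
n = 200_000

vals = []
for _ in range(n):
    # CN(0, s^2): real and imaginary parts are N(0, s^2 / 2), cf. (16)-(17).
    hf = complex(rng.gauss(0, sf / math.sqrt(2)), rng.gauss(0, sf / math.sqrt(2)))
    hb = complex(rng.gauss(0, sb / math.sqrt(2)), rng.gauss(0, sb / math.sqrt(2)))
    vals.append((hf * hb).real)    # real part of the dyadic channel (18)

mean_abs = sum(abs(v) for v in vals) / n   # should approach sf*sb/2 = 0.5
var = sum(v * v for v in vals) / n         # should approach (sf*sb)^2/2 = 0.5
```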

5.2 Sparsity enforcing signal prior

The BASSAMP algorithm requires the specification of a signal prior distribution. In (20), we specified the distribution of the nonzero entries in the random matrix \(\boldsymbol {\mathsf {X}}=[\boldsymbol {\mathsf {x}}_{1},...,\boldsymbol {\mathsf {x}}_{N_{B}\phantom {\dot {i}\!}}]\). A realization thereof, X, contains the channel realizations as nonzero entries. The prior of signal entry x n,b reads
$$ f_{\mathsf{x}_{n,b}}(x_{n,b}) = \gamma_{n,b} \delta(x_{n,b}) + (1-\gamma_{n,b})\,\, f_{\mathsf{h}}(x_{n,b}), $$
(21)

where \(\gamma_{n,b}\) is the probability that the nth signal entry of the bth vector is zero. If the number of activated tags, K, is known a priori, the initial value computes as \(\gamma _{n,b}=1-\frac {K}{N}\). In BASSAMP, this probability is adapted in each iteration, and it is sufficient to initialize it with a very coarse assumption of the number of activated tags; details follow in Section 6.
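Sampling from the prior (21) clarifies its structure: each entry is zero with probability γ and otherwise drawn from the Laplace channel density (20), whose magnitude is exponentially distributed with scale \(\sigma^{(\mathrm{f})}\sigma^{(\mathrm{b})}/2\). A sketch with assumed unit channel variances and a hypothetical helper name:

```python
import random

def sample_prior_entry(gamma, sf, sb, rng):
    """One draw from the spike-and-slab prior (21)."""
    if rng.random() < gamma:
        return 0.0                               # spike: entry is exactly zero
    mag = rng.expovariate(2.0 / (sf * sb))       # |h| of the Laplace slab (20)
    return mag if rng.random() < 0.5 else -mag   # random sign

rng = random.Random(6)
N, K = 1000, 50
gamma0 = 1.0 - K / N               # initial zero-probability for K active tags
x = [sample_prior_entry(gamma0, 1.0, 1.0, rng) for _ in range(N)]
```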

5.3 Specification of functions

Let us now discuss the computation of the functions (required in Algorithm 1, iteration index t omitted)
$$\begin{array}{*{20}l} {}F\left(u_{n,b}; \beta_{b}, \gamma_{n,b}\right) &= \mathbb{E}_{\mathsf{x}_{n,b}}\{\mathsf{x}_{n,b}|\mathsf{u}_{n,b}=u_{n,b} ; \beta_{b}, \gamma_{n,b} \}, \end{array} $$
(22)
$$\begin{array}{*{20}l} {}F^{\prime}\left(u_{n,b}; \beta_{b},\gamma_{n,b}\right) &= \frac{d}{d u_{n,b}}F\left(u_{n,b}; \beta_{b},\gamma_{n,b}\right). \end{array} $$
(23)

The conditional expectation (22) yields the MMSE estimate of \(x_{n,b}\) given the decoupled measurement \(u_{n,b}=x_{n,b} +\widetilde {w}_{n,b}\), where \(\widetilde {\mathsf {w}}_{n,b}\sim \mathcal {N}(0,\beta _{b})\); for details, consider [15, 28]. Note that in Algorithm 1 (line 7), (22) is applied separately to the vector components \(u_{n,b}\) of the vector input \(\mathbf{u}_{b}\).

Let us calculate these functions for prior (21) (indices n and b are dropped for clarity):
$$ \begin{aligned} F(u;\beta,\gamma) &= \int_{-\infty}^{\infty} \widetilde{x} f_{\mathsf{x}|\mathsf{u}}(\widetilde{x}|u) d\widetilde{x} \\ &= \frac {\int_{-\infty}^{\infty} \widetilde{x} f_{\mathsf{u}|\mathsf{x}}(u|\widetilde{x})f_{\mathsf{x}}(\widetilde{x}) d\widetilde{x}} {\int_{-\infty}^{\infty} f_{\mathsf{u}|\mathsf{x}}(u|\widetilde{x})f_{\mathsf{x}}(\widetilde{x}) d\widetilde{x}} \\ &= \frac{\beta\left[h_{1}(u)k_{1}(u)+h_{2}(u)k_{2}(u)\right]}{\frac{\gamma}{1-\gamma}\frac{2{\sigma^{(\mathrm{f})}}{\sigma^{(\mathrm{b})}}}{\sqrt{2\pi\beta}} + k_{1}(u) + k_{2}(u)} = \frac{p(u)}{q(u)}, \end{aligned} $$
(24)
with auxiliary functions
$$\begin{array}{*{20}l} g_{1}(u) &= \frac{\sqrt{2\beta}}{{\sigma^{(\mathrm{f})}}{\sigma^{(\mathrm{b})}}} - \frac{u}{\sqrt{2\beta}}, \end{array} $$
(25)
$$\begin{array}{*{20}l} g_{2}(u) &= \frac{\sqrt{2\beta}}{{\sigma^{(\mathrm{f})}}{\sigma^{(\mathrm{b})}}} + \frac{u}{\sqrt{2\beta}}, \end{array} $$
(26)
$$\begin{array}{*{20}l} h_{1}(u) &= \frac{u}{\beta}-\frac{2}{{\sigma^{(\mathrm{f})}}{\sigma^{(\mathrm{b})}}}, \end{array} $$
(27)
$$\begin{array}{*{20}l} h_{2}(u) &= \frac{u}{\beta}+\frac{2}{{\sigma^{(\mathrm{f})}}{\sigma^{(\mathrm{b})}}}, \end{array} $$
(28)
$$\begin{array}{*{20}l} k_{1}(u) &= \text{erfc}\left(g_{1}(u)\right)\exp\left(g_{1}(u)^{2}\right), \end{array} $$
(29)
$$\begin{array}{*{20}l} k_{2}(u) &= \text{erfc}\left(g_{2}(u)\right)\exp\left(g_{2}(u)^{2}\right), \end{array} $$
(30)
The derivative of (24) with respect to u is obtained as
$$ \begin{aligned} F^{\prime}(u;\beta,\gamma) &= \frac{p^{\prime}(u)q(u)-p(u)q^{\prime}(u)}{q(u)^{2}}, \end{aligned} $$
(31)
where
$$ \begin{aligned} p^{\prime}(u) &= k_{1}(u) + k_{2}(u)\\ &\quad+ \sqrt{2\beta}\left[ h_{2}(u)\left(g_{2}(u)k_{2}(u)-\frac{1}{\sqrt{\pi}}\right) - h_{1}(u)\left(g_{1}(u)k_{1}(u)-\frac{1}{\sqrt{\pi}}\right) \right], \end{aligned} $$
(32)
$$ \begin{aligned} q^{\prime}(u) &= \sqrt{\frac{2}{\beta}}\left[g_{2}(u)k_{2}(u)-g_{1}(u)k_{1}(u)\right]. \end{aligned} $$
(33)
Finally, we have to specify the update functions \(U_{G}(\cdot,\cdot,\cdot)\) and \(U_{P}(\cdot)\). The extrinsic group update (applied entry-wise) in iteration t accumulates extrinsic information about signal entry \(\mathsf{x}_{n,b}\) from the entries \(\mathsf{x}_{n,l}\) of the other signal vectors, \(l\in \mathcal {B}\backslash b\):
$$ \begin{aligned} \overline{L}_{n,b}^{t} &= U_{G}\left(\mathbf{U}^{t-1},\boldsymbol{\beta}^{t-1},\gamma_{n,b}^{0}\right) := \overline{L}_{n,b}^{0} + \sum_{l\in\mathcal{B} \backslash b} L_{n,l}^{t} \\ &=\log\frac{\gamma_{n,b}^{0}}{1-\gamma_{n,b}^{0}} + \sum_{l\in\mathcal{B}\backslash b}\left(\log \frac{2{\sigma^{(\mathrm{f})}}{\sigma^{(\mathrm{b})}}}{\sqrt{2\pi\beta_{l}^{t-1}}} - \log\left[k_{1}\left(u_{n,l}^{t-1}\right)+k_{2}\left(u_{n,l}^{t-1}\right)\right]\right),\\ &\quad \forall n \in \{1,...,N\},\forall b \in \mathcal{B}=\{1,...,N_{B}\}. \end{aligned} $$
(34)
The prior update converts likelihood ratios into probabilities:
$$ \gamma_{n,b}^{t} = U_{P}\left(\overline{L}_{n,b}^{t}\right) := \frac{1}{1+\exp\left(-\overline{L}_{n,b}^{t}\right)}, \quad \forall n \in \{1,...,N\}, \forall b \in \mathcal{B}=\{1,...,N_{B}\}. $$
(35)

A detailed explanation of functions (34) and (35) can be found in [15].

5.4 Specification of functions—Gaussian relaxation

Note that the implementation of the above functions may be computationally challenging. In particular, the terms \(k_{1}(u)\) and \(k_{2}(u)\) that occur in (24), (31), and (34) entail a complementary error function that is multiplied by an exponential function. This may cause numerical instabilities in the computation. In order to keep the demands on the reader hardware low and facilitate a simpler implementation, we propose to approximate the channel PDF (20) by a Gaussian distribution with zero mean and variance \(\sigma _{\mathsf {x}}^{2}=\frac {1}{2}{\sigma ^{(\text {f})}}^{2}{\sigma ^{(\text {b})}}^{2}\), i.e., with the same variance as the Laplace distribution (20). Doing so, we obtain the following functions for BASSAMP, see [15] (indices n and b are dropped for clarity):
$$\begin{array}{*{20}l} F(u; \beta, \gamma) &= u \cdot M(u, \gamma, \xi), \end{array} $$
(36)
$$\begin{array}{*{20}l} F^{\prime}(u; \beta, \gamma) &= M(u, \gamma, \xi) + \frac{1}{\beta}m(u, \gamma, \xi) \cdot F(u;\beta,\gamma)^{2}, \end{array} $$
(37)
with auxiliary functions
$$\begin{array}{*{20}l} \xi&=\frac{\sigma_{\mathsf{x}}^{2}}{\beta} = \frac{{\sigma^{(\mathrm{f})}}^{2}{\sigma^{(\mathrm{b})}}^{2}}{2\beta}, \end{array} $$
(38)
$$\begin{array}{*{20}l} m(u, \gamma, \xi) &= \frac{\gamma}{1-\gamma}\sqrt{1+\xi} \exp\left(-\frac{u^{2}}{2\beta}\frac{\xi}{1+\xi} \right), \end{array} $$
(39)
$$\begin{array}{*{20}l} M(u, \gamma, \xi) &= \frac{\xi}{1+\xi} \frac{1}{1+m(u, \gamma, \xi)}. \end{array} $$
(40)
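A direct implementation of (36)–(40) is straightforward; the following sketch (function and argument names are ours) returns both the denoiser and its derivative:

```python
import numpy as np

def gauss_denoiser(u, beta, gamma, var_x):
    """MMSE denoiser F(u; beta, gamma) and derivative F'(u; beta, gamma)
    for the Gaussian-relaxed prior, following (36)-(40).
    gamma is the zero probability, var_x the Gaussian slab variance."""
    xi = var_x / beta                                            # (38)
    m = gamma / (1.0 - gamma) * np.sqrt(1.0 + xi) * \
        np.exp(-u ** 2 / (2.0 * beta) * xi / (1.0 + xi))         # (39)
    M = xi / (1.0 + xi) / (1.0 + m)                              # (40)
    F = u * M                                                    # (36)
    F_prime = M + m * F ** 2 / beta                              # (37)
    return F, F_prime
```

The derivative can be verified against a finite-difference approximation of F.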
The extrinsic group update for the Gaussian prior reads
$$ {\begin{aligned} \overline{L}_{n,b}^{t} &= U_{G}(\mathbf{U}^{t-1},\boldsymbol{\beta}^{t-1},\gamma_{n,b}^{0}) := \overline{L}_{n,b}^{0} + \sum\limits_{l\in\mathcal{B} \backslash b} L_{n,l}^{t} \\ &= \log\frac{\gamma_{n,b}^{0}}{1\,-\,\gamma_{n,b}^{0}} + \frac{1}{2} \!\! \sum\limits_{l\in\mathcal{B} \backslash b} \!\! \left(\log \frac{\beta_{l}^{t-1}+\sigma_{\mathsf{x}_{n,l}}^{2}}{\beta_{l}^{t-1}} - \frac{\left(u_{n,l}^{t-1}\right)^{2} \sigma_{\mathsf{x}_{n,l}}^{2}}{\beta_{l}^{t-1}\left(\beta_{l}^{t-1}+\sigma_{\mathsf{x}_{n,l}}^{2}\right)} \right),\\ &\ \ \ \ \forall n \in \{1,...,N\}, \forall b \in \mathcal{B}=\{1,...,N_{B}\}. \end{aligned}} $$
(41)

The prior update (35) stays the same.
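The Gaussian-prior group update (41) together with the prior update (35) can be sketched as follows (vectorized over all n and b; names are ours). The exclusion of the own vector l = b is realized by subtracting the own term from the full sum:

```python
import numpy as np

def group_update(U, beta, gamma0, var_x):
    """Extrinsic group update (41) followed by the prior update (35).
    U: N x N_B decoupled measurements, beta: length-N_B effective noise
    variances, gamma0: N x N_B initial zero probabilities, var_x: slab
    variance of the relaxed Gaussian prior."""
    # per-entry contribution of vector l to the sum in (41)
    T = 0.5 * (np.log((beta + var_x) / beta)
               - U ** 2 * var_x / (beta * (beta + var_x)))
    # exclude the own vector l = b by subtracting the own term
    L_bar = np.log(gamma0 / (1.0 - gamma0)) + (T.sum(axis=1, keepdims=True) - T)
    return 1.0 / (1.0 + np.exp(-L_bar))   # prior update (35)
```

Entries with large decoupled measurements in the other vectors receive a small zero probability, as expected for a jointly sparse support.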

To justify this approximation of the channel PDF, we conducted numerical experiments that suggest that the MSE performance is hardly affected, see Section 7.

5.5 Choice of parameters

The choice of the channel model and the forward channel variance \({\sigma^{(\mathrm{f})}}^{2}\) and backward channel variance \({\sigma^{(\mathrm{b})}}^{2}\) depends on the location of the reader antennas, the environment (scatterers and reflectors), and the effective read range. The variances describe the strength of the spatial fading of the forward and backward link, respectively. In order to estimate them, one would have to measure the forward and backward links separately for many prospective tag positions.

In practice, one would rather measure the total channel (13), or avoid measuring the channels and estimating their distribution altogether. The AMP framework allows prior estimation to be performed during recovery, i.e., the recovery algorithm can be adapted to estimate the prior over the iterations. One such algorithmic extension was proposed in [30], where it was shown that the performance degradation due to an unknown prior is negligible in practice.

6 Support estimation

We now present a method to estimate the signal support (3) based on [31]. The support estimation is crucial for quick tag identification as it yields
  • The estimated set of assigned signatures \(\widehat {\mathcal {T}}_{\mathrm {A}}\) [14] (this is the information from the acquisition phase used to communicate with the tags),

  • The estimated number of activated tags \(\widehat {K}\).

The schemes presented in [12–14] assumed the number of activated tags K to be known, i.e., \(\widehat {K}=K\). Utilizing this knowledge, the estimated set of assigned signatures \(\widehat {\mathcal {T}}_{\mathrm {A}}\) is dictated by the K largest entries of the signal recovery \(\widehat {\mathbf {x}}\). The newly employed BASSAMP algorithm allows for a robust support estimation by combining the soft information of all recovered vectors \(\widehat {\mathbf {x}}_{b}\).

In each BAMP iteration of the BASSAMP Algorithm 1, the decoupled measurements \(\mathbf{u}_{b}\) are computed; they have the form (12). The effective noise variance \(\beta_{b}\) decreases over the BAMP iterations, and \(u_{n,b}\) is used as input to the MMSE estimator function F(·;·,·), i.e., the signal estimate in each iteration depends on this value, see line 7 of Algorithm 1. The PDF of \(\mathsf{u}_{n,b}\) can be computed by convolving the signal PDF (21) with the PDF of the effective noise \(\widetilde {\mathsf {w}}_{n,b}\), a normal distribution with zero mean and variance \(\beta_{b}\). Doing so, we obtain (indices dropped for clarity)
$$ f_{\mathsf{u}}(u) = \gamma f_{0}(u) + (1-\gamma)f_{1}(u), $$
(42)
where
$$ \begin{aligned} f_{0}(u) &= \frac{1}{\sqrt{2\pi\beta}}\exp\left(-\frac{u^{2}}{2\beta}\right)\\ f_{1}(u) &= \frac{1}{2{\sigma^{(\mathrm{f})}}{\sigma^{(\mathrm{b})}}}\exp\left(-\frac{u^{2}}{2\beta}\right)\left[k_{1}(u) + k_{2}(u)\right]. \end{aligned} $$
(43)
If the Gaussian relaxation (see Section 5.4) is used, we get
$$ f_{1}(u) = \frac{1}{\sqrt{2\pi(\beta+\sigma_{\mathsf{x}}^{2})}}\exp\left(-\frac{u^{2}}{2(\beta+\sigma_{\mathsf{x}}^{2})}\right). $$
(44)
In order to compute the conditional probability that a signal entry was zero given \(u_{n,b}\), we introduce a latent binary random variable \(\mathsf{z}_{n,b}\) that indicates whether a signal entry \(\mathsf{x}_{n,b}\) in (12) was zero (\(\mathsf{z}_{n,b}=0\)) or nonzero (\(\mathsf{z}_{n,b}=1\)), see also [15, 31]. We intend to compute the overall probability that the nth entry of all \(N_{B}\) jointly sparse vectors \(\mathbf{x}_{b}\) is zero or nonzero, respectively (since the vectors are jointly sparse, these are the only relevant cases: either all nth entries are zero, or all nth entries are nonzero). The posterior probability that all nth signal entries are zero (given \(u_{n,b}\) and \(\gamma_{n,b}\), \(\forall b \in \mathcal {B}\)) is
$$ \epsilon_{n}^{(0)} := \prod\limits_{b=1}^{N_{B}} P(\mathsf{z}_{n,b}=0 | u_{n,b}, \gamma_{n,b}) = \frac{1}{d} \prod\limits_{b=1}^{N_{B}} \gamma_{n,b} f_{0}(u_{n,b}), $$
(45)
while the posterior probability that all nth signal entries are nonzero is
$$ {\begin{aligned} \epsilon_{n}^{(1)} := \prod\limits_{b=1}^{N_{B}} P(\mathsf{z}_{n,b}=1 | u_{n,b}, \gamma_{n,b}) = \frac{1}{d}\prod\limits_{b=1}^{N_{B}} (1-\gamma_{n,b}) f_{1}(u_{n,b}), \end{aligned}} $$
(46)
where d is a common normalization factor. The estimate for the signal support (3) is equivalent to the estimated set of assigned signatures and is obtained by comparing these probabilities:
$$ \widehat{\mathcal{T}}_{\mathrm{A}} = \left\{n \in \mathcal{T}: \frac{\epsilon_{n}^{(1)}}{\epsilon_{n}^{(0)}} = \frac{\prod_{b=1}^{N_{B}} (1-\gamma_{n,b}) f_{1}(u_{n,b})}{ \prod_{b=1}^{N_{B}} \gamma_{n,b}\ f_{0}(u_{n,b})}> 1 \right\}. $$
(47)
Note that the estimated set of assigned signatures is of vital importance:
  • In Buzz [9–11], this set represents the seeds used in the pseudo-random generator for the data read-out via a rateless code. An erroneous set hampers decoding, and the acquisition has to be repeated.

  • In CSF [12], this set directly identifies the activated tags. The reader enquires the signature indices in order to confirm the identification. An erroneous set prolongs this enquiry phase and leads to a repetition of the acquisition phase.

  • In CSR [13, 14], the indices of the assigned signatures are used to communicate with the tags for data read-out. Again, an erroneous set prolongs the enquiry phase and leads to a repetition of the acquisition phase.

For all schemes, a wrongly estimated set of assigned signatures prolongs the identification and increases the reader-to-tag communication overhead.

The estimated number of activated tags is defined as the cardinality of this set:
$$ \widehat{K} = |\widehat{\mathcal{T}}_{\mathrm{A}}|. $$
(48)

Note that the support estimation is performed after executing the BASSAMP algorithm; it considers the values \(u_{n,b}\), \(\gamma_{n,b}\), and \(\beta_{b}\) after the last iteration t. The prior probabilities in BASSAMP Algorithm 1 are initialized with a coarse assumption of K, termed \(K^{0}\): \(\gamma _{n,b}^{0} = 1-\frac {K^{0}}{N}\).
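Under the Gaussian relaxation (44), the comparison (47) is conveniently carried out in the log domain, which avoids underflow of the products (45) and (46). A sketch (names are ours):

```python
import numpy as np

def estimate_support(U, beta, gamma, var_x):
    """Support estimate (47): compare, per row n, the products of the
    per-vector zero likelihoods gamma*f0 (43) against the nonzero
    likelihoods (1-gamma)*f1 with the Gaussian slab (44), in log domain."""
    log_f0 = -U ** 2 / (2.0 * beta) - 0.5 * np.log(2.0 * np.pi * beta)
    log_f1 = (-U ** 2 / (2.0 * (beta + var_x))
              - 0.5 * np.log(2.0 * np.pi * (beta + var_x)))
    # log of the ratio in (47), summed over the N_B jointly sparse vectors
    log_ratio = (np.log(1.0 - gamma) + log_f1
                 - np.log(gamma) - log_f0).sum(axis=1)
    return np.flatnonzero(log_ratio > 0)   # indices in the estimated set
```

The estimated number of activated tags (48) is then simply the length of the returned index set.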

7 Numerical results and comparison

Let us introduce the figures of merit used in the subsequent evaluation. The signal-to-noise ratio (SNR) is defined as
$$ \text{SNR} = \frac{\left\| \mathbf{A}\mathbf{X} \right\|_{F}^{2}}{\left\|\mathbf{W}\right\|_{F}^{2}}. $$
(49)
The normalized mean squared error (NMSE) between the original signal \(\mathbf{X}\) and its estimate (recovery) \(\widehat {\mathbf {X}}\) is defined as
$$ \text{NMSE} = \frac{\| \mathbf{X} - \widehat{\mathbf{X}} \|_{F}^{2}}{\| \mathbf{X} \|_{F}^{2}}, $$
(50)

it indicates the overall recovery performance.
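The figures of merit (49) and (50) translate directly into code (a sketch; names are ours):

```python
import numpy as np

def snr(AX, W):
    """SNR (49): ratio of squared Frobenius norms of signal and noise parts."""
    return np.linalg.norm(AX, 'fro') ** 2 / np.linalg.norm(W, 'fro') ** 2

def nmse(X, X_hat):
    """NMSE (50) between the original X and the recovery X_hat."""
    return np.linalg.norm(X - X_hat, 'fro') ** 2 / np.linalg.norm(X, 'fro') ** 2
```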

For the subsequent numerical experiments, the stopping criteria of Algorithm 1 were chosen as \(\epsilon_{\text{tol}}=10^{-5}\) and \(t_{\max}=100\).

7.1 Gaussian relaxation of prior

In Section 5.4, we proposed a relaxation of the prior distribution (Gaussian instead of Laplace) in order to obtain an implementation of Algorithm 1 with lower computational complexity and better numerical stability. To demonstrate that the performance is not significantly affected by this relaxation, we investigate the empirical phase transition curves that illustrate the recovery performance over a wide range of the parameters K and M for fixed N=1 000. We chose \(N_{R}=1\) receive antenna.

For the empirical phase transition curves, we consider an undersampling \(\left (\frac {M}{N}\right)\) versus sparsity \(\left (\frac {K}{M}\right)\) grid, where the values range from 0.05 to 0.95 with stepsize 0.05, respectively. At each grid point, 1 000 realizations of A, X, and W are simulated. Let us introduce a success indicator for each realization r:
$$ S_{r} = \left\{ \begin{aligned} 1 &\,\,\text{NMSE}_{r} < 10^{-4}\\ 0 &\,\,\text{else}\\ \end{aligned} \right.. $$
(51)

The average success is obtained as \(\overline {S} = \frac {1}{1\,000}\sum _{r=1}^{1\,000}S_{r}\). The empirical phase transition curves are finally obtained by plotting the 0.5 contour of \(\overline {S}\) using the MATLAB® function contour.

The results are shown in Fig. 4, where the AMP algorithm (with the NMSE-minimizing soft thresholding parameter \(\lambda=2.678\,K^{-0.181}\), see [12]) is compared to the BASSAMP algorithm with two different priors. The true channel prior is the Laplace distribution obtained in (20); the signal vectors were generated accordingly in each realization, with \({\sigma^{(\mathrm{f})}}={\sigma^{(\mathrm{b})}}=1\). The relaxed channel prior is a zero mean Gaussian distribution with \(\sigma _{\mathsf {x}}^{2}=\frac {1}{2}{\sigma ^{(\text {f})}}^{2}{\sigma ^{(\text {b})}}^{2} = 0.5\). We conclude that the NMSE performance is only slightly degraded by the Gaussian relaxation. This also suggests that a mismatch in the channel prior does not significantly hamper the recovery, and it advocates the general use of a Gaussian prior for the considered application. Furthermore, we observe that in order to achieve the same NMSE performance as AMP, we can significantly reduce the signature length M. In the sequel, we employ the Gaussian relaxation.
Fig. 4

Empirical phase transition curves. Shown are the empirical phase transition curves for the recovery of a single complex-valued signal vector (\(N_{R}=1\), \(N_{B}=2\), SNR = ∞). Plotted is the 50 % success contour, where success is defined as NMSE \(<10^{-4}\). Success with high probability is observed below a contour line

7.2 Support estimation

In this section, we demonstrate the capabilities of the support detection scheme that was introduced in Section 6. The BASSAMP Algorithm 1 requires an initialization of the zero probabilities, which is done with a coarse initial assumption of the number of activated tags: \(\gamma _{n,b}^{0}=1-\frac {K^{0}}{N}\). We now demonstrate that the initial assumption \(K^{0}\) can indeed be very coarse.

We consider the numbers of correct detections (CDs) and false alarms (FAs) [32] that partition the estimated set of assigned signatures \(\widehat {\mathcal {T}}_{\mathrm {A}}\) (47) (an index in this set refers either to a CD or to an FA).

Figure 5 shows the number of CDs and FAs averaged over 1 000 realizations versus variable SNR. The true number of activated tags was K=100; initializing BASSAMP with a much lower \(K^{0}=50\) or a much higher \(K^{0}=150\) changes the results only marginally. At low SNR, the cardinality of \(\widehat {\mathcal {T}}_{\mathrm {A}}\) is lower than K, and \(\widehat {K}=|\widehat {\mathcal {T}}_{\mathrm {A}}|\) is underestimated. The indices in \(\widehat {\mathcal {T}}_{\mathrm {A}}\) mostly represent the true support (many CDs, few FAs). The support estimation does not produce many FAs, which is beneficial for the RFID protocols because it reduces the overhead of the reader-to-tag communication (the protocols do not have to deal with wrongly acquired tags). In this way, we obtain efficient identification cycles (see [12–14]) that feature only very few wrongly enquired tags. For the schemes presented in [12–14], it is more important to obtain many CDs and few FAs than an exact estimate \(\widehat {K}\). With an increasing number of jointly sparse vectors (right plot in Fig. 5), the support estimation becomes very robust with respect to noise. In the considered example, \(N_{R}=4\) receive antennas result in perfect support estimation above SNR=15 dB, which entails immediate identification after only one cycle.
Fig. 5

Support estimation (variable SNR). Plotted are the numbers of correct detections (CD) and false alarms (FA) in the estimated set of assigned signatures \(\widehat {\mathcal {T}}_{\text {A}}\) over variable SNR. \(K^{0}\) is the assumed number of tags with which the BASSAMP algorithm is initialized. Assuming a \(K^{0}\) that is much larger or smaller than the original K affects the outcome of the support detection only marginally. With an increasing number of receive antennas \(N_{R}\), the support estimation becomes more robust. The light grey curves show the legacy approach used in [12–14] that assumes K to be known; there, the indices of the K largest entries in the AMP recovery compose \(\widehat {\mathcal {T}}_{\mathrm {A}}\)

Figure 6 shows the number of CDs and FAs averaged over 1 000 realizations versus variable M in the noiseless case. Again, three different initializations \(K^{0}\) are investigated, and the outcome differs only marginally. We observe that the number of CDs is hardly affected, whereas the number of FAs increases (decreases) for over- (under-)assumptions of \(K^{0}\) during the phase transition, but declines quickly with increasing M. The signature length M should be chosen such that the algorithm operates in the successful regime where the support estimation features only CDs and no FAs. We conclude that a very coarse assumption of K is sufficient to initialize Algorithm 1.
Fig. 6

Support estimation (variable sequence length). Plotted are the numbers of correct detections (CD) and false alarms (FA) in the estimated set of assigned signatures \(\widehat {\mathcal {T}}_{\mathrm {A}}\) over variable M in the noiseless case. A similar behavior to Fig. 5 is observed. For an increased number of jointly sparse vectors \(N_{B}\), fewer measurements are required to achieve the same number of CDs. The light grey curves show the legacy approach used in [12–14]

7.3 Improvement of acquisition phase—perfect conditions

As motivated in Section 3.2, our approach improves the tag acquisition phase of CS-RFID protocols. To demonstrate this, we consider various identification schemes and compare the bit overhead for the acquisition phase, i.e., the number of bits required to acquire the activated tags. In order to facilitate a comparison of various schemes, we restrict ourselves to the number of bits transmitted by the tags and omit protocol overhead and commands from the reader. Our baseline is the widely employed FSA protocol, where tags are randomly scheduled to transmit in the slots of a frame in order to avoid collisions. It features the following bit overhead for tag acquisition:
$$ \beta_{\text{FSA}}^{\text{(A)}} = \frac{16}{T_{\text{ps}}} K, $$
(52)

where \(T_{\text{ps}}\) is the throughput per slot [8], i.e., the number of tags acquired per slot, and the number 16 refers to the RN16 sequences utilized during acquisition. If the number of activated tags is known, the optimal choice of the frame size leads to a maximum average throughput of \(T_{\text{ps}}=e^{-1}\approx 0.368\) [33]. In [6–8], collision recovery schemes have been proposed that allow shortened frame sizes and, thus, increased throughput. A reader with \(N_{R}\) receive antennas can resolve up to \(2N_{R}\) collisions [8]. Assuming perfect channel knowledge and knowledge of the number of activated tags K, a reader with \(N_{R}=1\) receive antenna can resolve one collision and features a maximal theoretical throughput of \(T_{\text{ps}}=0.841\), while a reader with \(N_{R}=4\) receive antennas achieves a maximal theoretical throughput of \(T_{\text{ps}}=4.479\); for details, see [8].

On the other hand, we have the CS-based schemes where all activated tags respond simultaneously with sequences of length M during the acquisition phase. This results in a CS measurement of the type (1) that needs to be solved for x utilizing the M measurements in y. Therefore, the optimal bit overhead for tag acquisition reads (cf. (2))
$$ \beta_{\text{CS}}^{\text{(A)}} = M = \left\lceil c K \log \frac{N}{K} \right\rceil. $$
(53)

It was shown in [12] that an optimally tuned AMP recovery algorithm requires a measurement multiplier c=2 to yield perfect recovery in the noiseless case. In Sections 7.1 and 7.2, we observed that our proposed scheme requires fewer measurements than the legacy AMP scheme, which enables a reduction of c at the same recovery quality. Scrutinizing Fig. 6, we observe that M≈210 (c≈0.9) for \(N_{R}=1\) and M≈140 (c≈0.6) for \(N_{R}=4\) lead to perfect recovery (i.e., only CDs and no FAs), respectively.
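The overhead formulas (52) and (53) are easily evaluated; a sketch (names are ours, and we assume the natural logarithm in (53), which is consistent with the M and c values quoted above):

```python
import math

def overhead_fsa(K, T_ps):
    """FSA acquisition bit overhead (52): one 16-bit RN16 per slot,
    with T_ps tags acquired per slot on average."""
    return 16.0 * K / T_ps

def overhead_cs(K, N, c):
    """Optimal CS acquisition bit overhead (53)."""
    return math.ceil(c * K * math.log(N / K))
```

For K=100, N=1 000, and c=2, this yields M=461 bits, compared to roughly 4 349 bits for FSA at \(T_{\text{ps}}=e^{-1}\).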

Let us collect the above insights and compare the bit overhead for tag acquisition. Figure 7 depicts the bit overhead versus K for various schemes under perfect conditions, i.e., the channels are known to the FSA-based collision recovery scheme, and all schemes know the number of activated tags K. For the CS-based schemes, we assume N=1 000. We observe that the CS-based schemes strongly outperform FSA (despite its collision recovery capabilities) and that our novel approach utilizing the BASSAMP algorithm reduces the bit overhead over the legacy AMP scheme by a factor of 2.2 (\(N_{R}=1\)) and 3.3 (\(N_{R}=4\)), respectively. Let us emphasize again that this improvement is observed in all CS-based schemes [9–14], as they all feature the same problem formulation in the acquisition phase.
Fig. 7

Bit overhead for tag acquisition under perfect conditions. The dashed lines depict the FSA-based schemes, while the solid lines depict the CS-based schemes (e.g., Buzz, CSF, see [9–14]). By exploiting our novel approach with the BASSAMP algorithm, we are able to reduce the bit overhead of CS-based schemes by a factor of 2.2 (\(N_{R}=1\)) and 3.3 (\(N_{R}=4\)), respectively, over the legacy AMP algorithm. Overall, the CS-based schemes strongly outperform the FSA-based schemes, despite collision recovery methods [8] that utilize several antennas

By utilizing the channel statistics and the joint sparsity among the signal vectors, a strong improvement over previous approaches is observed. Furthermore, the novel approach already shows a significant improvement for a reader that employs only \(N_{R}=1\) receive antenna.

7.4 Improvement of acquisition phase—imperfect conditions

Now that we have studied the performance under perfect conditions, let us move on to the noisy case. In particular, the tag responses of FSA are corrupted by noise, and the collision recovery scheme [8] faces channel estimation errors. The CS-based schemes have to deal with noisy measurements (10). Let us describe the simulation guidelines.

FSA: For the legacy FSA approach [3], the reader features one receive antenna and no collision recovery capabilities. The advanced FSA approach features collision recovery capabilities as introduced in [8]. There, the channels have to be estimated by using a set of orthogonal sequences that are transmitted prior to the RN16 sequences; note that we omit the bit overhead for the channel estimation sequences, although they add significant overhead in practice. The frame size is adjusted in each cycle in order to maximize the throughput [8, 33], depending on the number of remaining tags. It is assumed that the schemes know the number of tags K. The average acquisition throughput is obtained as
$$ T_{\text{FSA}}^{\text{(A)}} = \frac{K}{\beta_{\text{FSA}}^{\text{(A)}}} = \frac{T_{\text{ps}}}{16}. $$
(54)
It is measured in acquired tags per bit, where we count the number of bits transmitted by a tag during the acquisition phase. We used the \(T_{\text{ps}}\) values from [8].

CS: For the legacy CS approach, the reader features one receive antenna, and AMP is employed as the CS recovery algorithm that estimates a single complex-valued signal vector. The improved CS approach features the BASSAMP algorithm that we specified for a dyadic channel model. We simulated 1 000 random realizations of A, X, and W and averaged the results. In each realization, CS measurements of the form (5) are performed in cycles [14]. Each cycle features a CS tag acquisition, and the cycles are repeated until all tags are identified. In each cycle, the number of measurements (sequence length) M is set according to (2) based on the remaining number of unidentified tags. AMP is assumed to know K, and \(\widehat {\mathcal {T}}_{\mathrm {A}}\) is composed of the indices that correspond to the K largest entries in the recovered signal vector. BASSAMP utilizes the support estimation (47). The average acquisition throughput is obtained as
$$ T_{\text{CS}}^{\text{(A)}} = \frac{K}{\beta_{\text{CS}}^{\text{(A)}}}. $$
(55)

Note that here, the simulated bit overhead \(\beta _{\text {CS}}^{\text {(A)}}\) that may include several CS measurements (cycles) is used, whereas (53) refers to the optimal bit overhead of only one CS measurement.

Figure 8 shows the average acquisition throughput versus SNR for K=100 and N=1 000. We employ \(N_{R}=4\) receive antennas, and for the novel CS-based scheme, the signature length M is varied (c={0.5,1,2,3}). We observe that with the proposed approach, we can shorten the signatures and thereby drastically increase the acquisition throughput of the CS-RFID schemes. Compared to the legacy AMP scheme with \(N_{R}=1\), the new BASSAMP scheme with \(N_{R}=4\) features increased noise robustness for the same signature length (c=3), i.e., the noise robustness increases with increasing \(N_{R}\). It was shown in [12] that c<2 generally does not admit successful AMP recoveries; with the newly utilized BASSAMP algorithm that exploits joint sparsity, smaller c and thus shorter signatures become possible. Note that shorter signatures also reduce the sensitivity to jitter [14], and the tags require less energy (shortened activation time). An advantage over collision recovery schemes is that the CS-based schemes do not require channel knowledge, only coarse knowledge of the channel statistics. Overall, our proposed approach significantly improves the acquisition phase of CS-based schemes and strongly outperforms other schemes such as FSA with collision recovery.
Fig. 8

Acquisition throughput parameterized on M. With the newly employed BASSAMP algorithm and the exploitation of multiple receive antennas, the maximum acquisition throughput of CS-based schemes can be improved drastically by reducing the sequence length M (or equivalently, the measurement multiplier c). The proposed scheme strongly outperforms the FSA-based scheme with collision recovery (CR) capabilities, despite having the same number of receive antennas at the reader

Let us list the improvements achieved (at high SNR) by exploiting \(N_{R}=4\) receive antennas at the reader. The tag acquisition of the proposed CS-RFID scheme is
  • 4.3 times quicker than the legacy CS approach that employed AMP and a single receive antenna,

  • 3 times quicker than FSA with collision recovery,

  • 26 times quicker than legacy FSA without collision recovery capability, and a reader with a single receive antenna.

We emphasize again that our approach is applicable to all state-of-the-art CS-RFID schemes [9–14].

8 Conclusions

We proposed a novel extension to CS-RFID that improves the acquisition phase of the tag identification by leveraging joint sparsity. We demonstrated how multiple receive antennas at the reader produce multiple measurement vectors and that their number can be doubled beneficially by separation into real and imaginary parts. The corresponding signal vectors are jointly sparse, i.e., they share a common support. This is exploited by the BASSAMP algorithm that we defined for a dyadic channel model.

We showed that a Gaussian relaxation of the prior is viable: simulation results suggest that exact knowledge of the channel coefficient distribution is not required. Furthermore, the relaxation promotes a low-complexity implementation of our iterative recovery algorithm.

Robust signal support estimation is facilitated by combining the soft information from multiple jointly sparse signal vectors. Support estimation is crucial for quick tag identification, as the support dictates the overhead of the reader-to-tag communication (correct detections lead to correctly read out tags, while false alarms prolong the identification). It was shown that prior knowledge of the exact number of activated tags is not required for robust support estimation.

The main benefits of exploiting joint sparsity are the possible reduction of the sequence length (i.e., shorter tag responses) and the increased noise robustness during acquisition. This enables quicker, more reliable identification, reduces the sensitivity to jitter, and lowers the energy requirements of the tags.

Declarations

Acknowledgements

This work has been funded by the Christian Doppler Laboratory for Wireless Technologies for Sustainable Mobility, and its industrial partner Infineon Technologies. The financial support by the Federal Ministry of Economy, Family and Youth and the National Foundation for Research, Technology and Development is gratefully acknowledged.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Institute of Telecommunications, TU Wien
(2)
Christian Doppler Laboratory for Wireless Technologies for Sustainable Mobility

References

  1. Want, R (2006). An introduction to RFID technology. Pervasive Comput. IEEE, 5(1), 25–33.View ArticleGoogle Scholar
  2. Dobkin, DM. (2012). The RF in RFID, Second Edition: UHF RFID in Practice. Newnes. http://www.amazon.com/The-RF-RFID-Second-Edition/dp/0123945836, accessed 1 November 2015.
  3. EPCGlobal (2008). EPC Radio-frequency Identity Protocols Class-1 Generation-2 UHF RFID. http://www.gs1.org/gsmp/kc/epcglobal/uhfc1g2. accessed 1 November 2015.
  4. Shih, D-H, Sun, P-L, Yen, DC, Huang, S-M (2006). Taxonomy and survey of RFID anti-collision protocols. Comput. Commun., 29(11), 2150–2166.View ArticleGoogle Scholar
  5. Klair, DK, Chin, K-W, Raad, R (2010). A survey and tutorial of RFID anti-collision protocols. IEEE Commun. Surv. Tutor., 12(3), 400–421.View ArticleGoogle Scholar
  6. Angerer, C, Langwieser, R, Rupp, M (2010). RFID reader receivers for physical layer collision recovery. IEEE Trans. Commun., 58(12), 3526–3537.
  7. Kaitovic, J, Simko, M, Langwieser, R, Rupp, M (2012). Channel estimation in tag collision scenarios. In 2012 IEEE International Conference on RFID (RFID), pp. 74–80. Orlando, FL, USA: IEEE.
  8. Kaitovic, J, Langwieser, R, Rupp, M (2013). A smart collision recovery receiver for RFIDs. EURASIP J. Embedded Syst., 2013(1), 1–19.
  9. Wang, J, Hassanieh, H, Katabi, D, Indyk, P (2012). Efficient and reliable low-power backscatter networks. ACM SIGCOMM Comput. Commun. Rev., 42(4), 61–72.
  10. Lai, G, Liu, Y, Lin, X, Zhang, L (2013). Collision-based radio frequency identification using compressive sensing. In 15th IEEE International Conference on Communication Technology (ICCT), pp. 759–763. Guilin, Guangxi, China.
  11. Kaneko, M, Hu, W, Hayashi, K, Sakai, H (2014). Compressed sensing-based tag identification protocol for a passive RFID system. IEEE Commun. Lett., 18(11), 2023–2026.
  12. Mayer, M, Goertz, N, Kaitovic, J (2014). RFID tag acquisition via compressed sensing. In Proceedings of the IEEE RFID Technology and Applications Conference (RFID-TA), pp. 26–31. Tampere, Finland.
  13. Mayer, M, & Goertz, N (2015). RFID tag acquisition via compressed sensing: flexibility by random signature assignment. In Proceedings of the 5th International EURASIP Workshop on RFID Technology, pp. 53–58. Rosenheim, Germany.
  14. Mayer, M, & Goertz, N (2015). RFID tag acquisition via compressed sensing: fixed vs. random signature assignment. IEEE Trans. Wireless Commun. (accepted). doi:10.1109/TWC.2015.2498922.
  15. Mayer, M, & Goertz, N (2015). Bayesian optimal approximate message passing to recover structured sparse signals. arXiv preprint arXiv:1508.01104.
  16. Donoho, DL (2006). Compressed sensing. IEEE Trans. Inform. Theory, 52(4), 1289–1306.
  17. Candes, EJ, Romberg, J, Tao, T (2006). Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory, 52(2), 489–509.
  18. Elad, M (2010). Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing. New York, NY, USA: Springer.
  19. Eldar, YC, & Kutyniok, G (2012). Compressed Sensing: Theory and Applications. Cambridge, UK: Cambridge University Press.
  20. Candes, EJ, Romberg, J, Tao, T (2006). Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math., 59(8), 1207–1223.
  21. Donoho, DL, Maleki, A, Montanari, A (2009). Message-passing algorithms for compressed sensing. Proc. Nat. Acad. Sci., 106(45), 18914–18919.
  22. Maleki, MA (2011). Approximate Message Passing Algorithms for Compressed Sensing. PhD thesis, Stanford University. http://www.ece.rice.edu/~mam15/thesis.pdf.
  23. Donoho, DL, Maleki, A, Montanari, A (2010). Message passing algorithms for compressed sensing: I. Motivation and construction. In 2010 IEEE Information Theory Workshop (ITW), pp. 1–5. Cairo, Egypt.
  24. Tibshirani, R (1996). Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B (Methodological), 58(1), 267–288.
  25. Kim, D, Ingram, M-A, Smith, JWW (2003). Measurements of small-scale fading and path loss for long range RF tags. IEEE Trans. Antennas Propag., 51(8), 1740–1749. doi:10.1109/TAP.2003.814752.
  26. Bardell, PH, McAnney, WH, Savir, J (1987). Built-in Test for VLSI: Pseudorandom Techniques. New York, NY, USA: Wiley-Interscience.
  27. Ziniel, J, & Schniter, P (2013). Efficient high-dimensional inference in the multiple measurement vector problem. IEEE Trans. Signal Process., 61(2), 340–354.
  28. Montanari, A (2012). Graphical models concepts in compressed sensing. In Compressed Sensing: Theory and Applications, pp. 394–438. Cambridge, UK: Cambridge University Press.
  29. Kotz, S, Kozubowski, T, Podgorski, K (2001). The Laplace Distribution and Generalizations: A Revisit with Applications to Communications, Economics, Engineering, and Finance. Boston, MA, USA: Springer.
  30. Guo, C, & Davies, ME (2015). Near optimal compressed sensing without priors: parametric SURE approximate message passing. IEEE Trans. Signal Process., 63(8), 2130–2141. doi:10.1109/TSP.2015.2408569.
  31. Hannak, G, Mayer, M, Matz, G, Goertz, N (2015). An approach to complex Bayesian-optimal approximate message passing. arXiv preprint arXiv:1511.08238.
  32. Mayer, M, & Goertz, N (2015). Improving approximate message passing recovery of sparse binary vectors by post processing. In Proceedings of the 10th International ITG Conference on Systems, Communications and Coding (SCC 2015). Hamburg, Germany.
  33. Nazir, MH, Xu, Y, Johansson, A (2011). Optimal dynamic frame-slotted ALOHA. In Proceedings of the 7th International Conference on Wireless Communications, Networking and Mobile Computing (WiCOM), pp. 1–4. Wuhan, China: IEEE.

Copyright

© Mayer et al. 2016