
Computer Science papers from arXiv.org

This paper proposes a specific type of Local Linear Model, the Randomly Local
Linear Model (RLLM), that can be used as a universal approximator. Local
operating points are chosen randomly and linear models are used to approximate
a function or system around these points. The model can also be interpreted as
an extension to Extreme Learning Machines with Radial Basis Function nodes, or
as a specific way of using Takagi-Sugeno fuzzy models. Using the available
theory of Extreme Learning Machines, universal approximation of the RLLM and an
upper bound on the number of models are proved mathematically, and an efficient
algorithm is proposed.

http://arxiv.org/abs/1308.6498 2013/09/01 - 17:21
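
The abstract includes no implementation, but the construction it describes is compact enough to sketch. The toy below is our reading of it (parameter names and constants are made up, with an RBF gate as suggested by the ELM interpretation): draw random operating points, gate affine local models with RBFs, and fit all model parameters in a single least-squares solve.

    # Illustrative sketch of a randomly-local linear model: random operating
    # points, RBF gating, one least-squares fit of all local-model parameters.
    import numpy as np

    rng = np.random.default_rng(0)

    def fit_rllm(X, y, n_points=20, width=0.5):
        centers = X[rng.choice(len(X), n_points, replace=False)]  # random operating points
        G = np.exp(-((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) / (2 * width**2))
        # each local model is affine in x; stack the gated regressors [g, g*x]
        H = np.concatenate([G] + [G * X[:, [j]] for j in range(X.shape[1])], axis=1)
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return centers, beta

    def predict_rllm(X, centers, beta, width=0.5):
        G = np.exp(-((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) / (2 * width**2))
        H = np.concatenate([G] + [G * X[:, [j]] for j in range(X.shape[1])], axis=1)
        return H @ beta

    X = rng.uniform(-3, 3, (200, 1))
    y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)
    centers, beta = fit_rllm(X, y)
    print(np.abs(predict_rllm(X, centers, beta) - y).mean())  # small residual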

This paper studies the second-order asymptotics of discrete memoryless
classical-quantum channels - a refinement of the Holevo capacity. We
demonstrate that the Gaussian approximation holds for the logarithm of the
blocklength-$n$, $\epsilon$-average-error capacity. To do so, several important
properties of quantum information quantities, such as the capacity-achieving
output state, are generalized from their classical counterparts. Our results
are the classical-quantum analogue of Strassen's study of second-order
asymptotics in classical channel coding. We discuss how our results can be
extended to characterize the classical capacity of quantum channels.

http://arxiv.org/abs/1308.6503 2013/09/01 - 17:21
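
For orientation, the Gaussian approximation in question has the same shape as Strassen's classical expansion; in the classical-quantum setting it reads (our paraphrase of the standard statement, with $C$ the Holevo capacity, $V$ the channel dispersion, and $\Phi$ the standard normal CDF):

    \log M^*(n, \epsilon) = nC + \sqrt{nV}\,\Phi^{-1}(\epsilon) + O(\log n),

where $M^*(n, \epsilon)$ is the maximal code size achievable at blocklength $n$ with average error $\epsilon$.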

For high-dimensional sparse parameter estimation problems, Log-Sum Penalty
(LSP) regularization effectively reduces the required sampling size in
practice. However, theoretical analysis supporting this empirical experience
has been lacking. The analysis in this article shows that, as with
$\ell_0$-regularization, a sampling size of $O(s)$ suffices for proper LSP,
where $s$ is the number of non-zero components of the true parameter. We also
propose an efficient algorithm to solve the LSP regularization problem. The
solutions given by the proposed algorithm yield consistent parameter estimates
under less restrictive conditions than $\ell_1$-regularization.

http://arxiv.org/abs/1308.6504 2013/09/01 - 17:21
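
The abstract announces an efficient algorithm without detail. Purely for orientation, a standard way to attack log-sum penalties is iteratively reweighted $\ell_1$ (Candes, Wakin and Boyd, 2008) with a proximal-gradient inner solver; all constants below are illustrative and this need not be the paper's algorithm.

    # Iteratively reweighted l1 for the log-sum penalty sum_i log(1 + |x_i|/eps).
    import numpy as np

    def ista(A, y, w, lam, iters=500):
        # weighted l1: min 0.5*||Ax - y||^2 + lam * sum_i w_i |x_i|
        L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            g = x - A.T @ (A @ x - y) / L
            x = np.sign(g) * np.maximum(np.abs(g) - lam * w / L, 0.0)
        return x

    def lsp(A, y, lam=0.05, eps=0.1, rounds=5):
        w = np.ones(A.shape[1])
        for _ in range(rounds):
            x = ista(A, y, w, lam)
            w = 1.0 / (eps + np.abs(x))         # reweight: slope of log(1 + |x|/eps)
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((40, 100)) / np.sqrt(40)
    x0 = np.zeros(100); x0[[3, 30, 70]] = [1.0, -2.0, 1.5]   # s = 3 nonzeros
    print(np.round(lsp(A, A @ x0), 2)[[3, 30, 70]])          # approximately [1, -2, 1.5]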

In this paper we consider skew bisubmodular functions as introduced in [9].
We construct a convex extension of a skew bisubmodular function which we call
the Lov\'asz extension, by analogy with the submodular case. We use this
extension to show that skew bisubmodular functions given by an oracle can be
minimised in polynomial time.

http://arxiv.org/abs/1308.6505 2013/09/01 - 17:21
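
For readers coming from the submodular world, here is the classical Lov\'asz extension that the paper's construction generalizes, computed by the standard sorting (greedy) formula; the skew bisubmodular version is more involved.

    # Classical Lovasz extension of a set function f with f(empty) = 0.
    import numpy as np

    def lovasz_extension(f, x):
        # f: callable on frozensets of indices; x: point in [0,1]^n
        order = np.argsort(-x)                   # sort coordinates decreasingly
        val, prev, S = 0.0, f(frozenset()), set()
        for i in order:
            S.add(int(i))
            cur = f(frozenset(S))
            val += x[i] * (cur - prev)           # telescoping greedy sum
            prev = cur
        return val

    # Example: f(S) = sqrt(|S|) is submodular, so its extension is convex.
    f = lambda S: len(S) ** 0.5
    print(lovasz_extension(f, np.array([0.2, 0.9, 0.5])))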

Given an LZW/LZ78 compressed text, we want to find an approximate occurrence
of a given pattern of length m. The goal is to achieve time complexity
depending on the size n of the compressed representation of the text instead of
its length. We consider two specific definitions of approximate matching,
namely the Hamming distance and the edit distance, and show how to achieve
$O(nm^{0.5}k^2)$ and $O(nm^{0.5}k^3)$ running time, respectively, where k is
the bound on the distance. Even for very small values of k, the best previously
known solutions required $O(nm)$ time. Our main contribution is applying a
periodicity-based argument in a way that is computationally effective even when
operating on a compressed representation of a string, whereas the previous
solutions were based either on dynamic programming or on a black-box
application of tools developed for uncompressed strings.

http://arxiv.org/abs/1308.6509 2013/09/01 - 17:21
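
To make the parameter n concrete, here is a minimal LZ78 parser (illustration only; the paper's matching algorithm operates on such a parse but is far more involved).

    # Minimal LZ78 parse: the text is split into n phrases, each a previously
    # seen phrase extended by one character; this n is the "compressed size"
    # in the running times above.
    def lz78_parse(text):
        dictionary, phrases, cur = {"": 0}, [], ""
        for c in text:
            if cur + c in dictionary:
                cur += c
            else:
                phrases.append((dictionary[cur], c))   # (phrase reference, new char)
                dictionary[cur + c] = len(dictionary)
                cur = ""
        if cur:
            phrases.append((dictionary[cur[:-1]], cur[-1]))
        return phrases

    print(len(lz78_parse("abababababababab")))   # n grows slowly with the text length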

Accurate and comprehensible knowledge about the position of branch cuts is
essential for correctly working with multi-valued functions, such as the square
root and logarithm. We discuss the new tools in Maple 17 for calculating and
visualising the branch cuts of such functions, and others built up from them.
The cuts are described in an intuitive and accurate form, offering substantial
improvement on the descriptions previously available.

http://arxiv.org/abs/1308.6523 2013/09/01 - 17:21
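
The Maple tools themselves are not reproduced here, but the underlying phenomenon is easy to probe numerically: sample the function on a grid and look for jumps. A toy probe for the principal logarithm (our own illustration, unrelated to the Maple implementation):

    # Locating a branch cut numerically: flag large jumps of Im f between
    # vertically adjacent grid points. The principal log has its cut on the
    # negative real axis.
    import numpy as np

    x = np.linspace(-2, 2, 400)      # 400 points so that 0 is not on the grid
    y = np.linspace(-2, 2, 400)
    Z = x[None, :] + 1j * y[:, None]
    F = np.log(Z)
    jump = np.abs(np.diff(F.imag, axis=0)) > np.pi
    rows, cols = np.nonzero(jump)
    print(x[cols].max())             # jumps occur only for Re z < 0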

This work uses Game Theory to study the effectiveness of punishments as an
incentive for rational nodes to follow an epidemic dissemination protocol. The
dissemination process is modeled as an infinite repetition of a stage game. At
the end of each stage, a monitoring mechanism informs each player of the
actions of other nodes. The effectiveness of a punishing strategy is measured
as the range of values for the benefit-to-cost ratio that sustain cooperation.
This paper studies both public and private monitoring. Under public monitoring,
we show that direct reciprocity is not an effective incentive, whereas full
indirect provides a nearly optimal effectiveness. Under private monitoring, we
identify necessary conditions regarding the topology of the graph in order for
punishments to be effective. When punishments are coordinated, full indirect
reciprocity is also effective with private monitoring.

http://arxiv.org/abs/1308.6526 2013/09/01 - 17:21

The k-core decomposition of a network has thus far mainly served as a
powerful tool for the empirical study of complex networks. We now propose its
explicit integration in a theoretical model. We introduce a Hard-core Random
Network (HRN) model that generates maximally random networks with arbitrary
degree distribution and arbitrary k-core structure. We then solve exactly the bond
percolation problem on the HRN model and produce fast and precise analytical
estimates for the corresponding real networks. Extensive comparison with
selected databases reveals that our approach performs better than existing
models, while requiring less input information.

http://arxiv.org/abs/1308.6537 2013/09/01 - 17:21
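
For reference, the k-core structure the HRN model takes as input is computed by the standard peeling algorithm; a minimal version:

    # k-core decomposition by repeated peeling: a node's core number is the
    # largest k such that it survives in the k-core.
    def core_numbers(adj):
        deg = {v: len(ns) for v, ns in adj.items()}
        core, remaining, k = {}, set(adj), 0
        while remaining:
            peel = [v for v in remaining if deg[v] <= k]
            if not peel:
                k += 1
                continue
            for v in peel:
                core[v] = k
                remaining.discard(v)
                for u in adj[v]:
                    if u in remaining:
                        deg[u] -= 1
        return core

    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
    print(core_numbers(adj))   # triangle {0,1,2} is the 2-core; node 3 has core 1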

In a MIMO radar network, the multiple transmit elements may emit waveforms
that differ in power and bandwidth. In this paper we ask, given that these two
resources are limited, what the optimal power, optimal bandwidth, and optimal
joint power-and-bandwidth allocations are for the best localization of multiple
targets. The well-known Cram\'er-Rao lower bound for target localization
accuracy is used as a figure of merit, and approximate solutions are found by
minimizing a sequence of convex problems. Their quality is assessed through
extensive numerical simulations and with the help of a lower bound on the true
solution. Simulation results reveal that bandwidth allocation policies have a
markedly stronger impact on performance than power.

http://arxiv.org/abs/1308.6543 2013/09/01 - 17:21
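
As a toy illustration of why allocation matters (not the paper's actual Cramer-Rao objective), consider minimizing a CRB-like surrogate sum_i c_i/p_i under a total power budget; the Lagrange condition c_i/p_i^2 = const gives a closed form.

    # Minimize sum_i c_i / p_i subject to sum_i p_i = P, p_i >= 0.
    # Stationarity gives p_i proportional to sqrt(c_i).
    import numpy as np

    c = np.array([1.0, 4.0, 0.25])            # per-element constants (made up)
    P = 10.0                                  # total power budget
    p = P * np.sqrt(c) / np.sqrt(c).sum()     # closed-form optimum
    print(p, (c / p).sum())                   # optimal surrogate value ...
    print((c / (P / len(c))).sum())           # ... beats uniform allocation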

Integer-Forcing (IF) is a new framework, based on compute-and-forward, for
decoding multiple integer linear combinations from the output of a Gaussian
multiple-input multiple-output channel. This work applies the IF approach to
arrive at a new low-complexity scheme, IF source coding, for distributed lossy
compression of correlated Gaussian sources under a minimum mean squared error
distortion measure. All encoders use the same nested lattice codebook. Each
encoder quantizes its observation using the fine lattice as a quantizer and
reduces the result modulo the coarse lattice, which plays the role of binning.
Rather than directly recovering the individual quantized signals, the decoder
first recovers a full-rank set of judiciously chosen integer linear
combinations of the quantized signals, and then inverts it. In general, the
linear combinations have smaller average powers than the original signals. This
makes it possible to increase the density of the coarse lattice, which in turn
translates into smaller compression rates. We also propose and analyze a
one-shot version of
IF source coding, that is simple enough to potentially lead to a new design
principle for analog-to-digital converters that can exploit spatial
correlations between the sampled signals.

http://arxiv.org/abs/1308.6552 2013/09/01 - 17:21
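
A scalar toy shows the mechanics (one-dimensional lattices Z and qZ stand in for the paper's high-dimensional nested lattices; the rate-distortion analysis is not reproduced): a small integer combination of the quantized signals survives the modulo binning and is de-aliased exactly, even though the signals themselves are far larger than the coarse cell.

    # Fine lattice Z quantizes; coarse lattice qZ bins; the decoder recovers
    # the integer combination a = (1, -1) of the quantized signals exactly.
    import numpy as np

    q = 16                                     # coarse lattice qZ
    rng = np.random.default_rng(3)
    s = rng.normal(0, 100, 1000)               # strong common component
    r1 = np.round(s + rng.normal(0, 1, 1000))  # fine-lattice quantizations
    r2 = np.round(s + rng.normal(0, 1, 1000))

    y1, y2 = r1 % q, r2 % q                    # each encoder sends only its bin index
    w = (y1 - y2) % q                          # decoder combines the mod signals
    v = ((w + q // 2) % q) - q // 2            # de-alias into [-q/2, q/2)
    print(np.all(v == r1 - r2))                # True: exact, although |r1|, |r2| >> q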

This paper considers the construction of Reproducing Kernel Hilbert Spaces
(RKHS) on the sphere as an alternative to the conventional Hilbert space using
the inner product that yields the L^2(S^2) function space of finite energy
signals. In comparison with wavelet representations, which have
multi-resolution properties on L^2(S^2), the representations that arise from
the RKHS approach, which uses different inner products, have an overall
smoothness constraint, which may offer advantages and simplifications in
certain contexts. The key contribution of this paper is to construct classes of
closed-form kernels, such as one based on the von Mises-Fisher distribution,
which permits efficient inner product computation using kernel evaluations.
Three classes of RKHS are defined: isotropic kernels and non-isotropic kernels
both with spherical harmonic eigenfunctions, and general anisotropic kernels.

http://arxiv.org/abs/1308.6566 2013/09/01 - 17:21
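
A sketch of the kind of closed-form kernel meant here (our simplified, unnormalized von Mises-Fisher-style kernel k(x, y) = exp(kappa <x, y>)), with the RKHS inner product of two finite expansions computed purely from kernel evaluations.

    # RKHS inner product via the reproducing property: for f = sum_i a_i k(., x_i)
    # and g = sum_j b_j k(., y_j), <f, g>_H = sum_ij a_i b_j k(x_i, y_j).
    import numpy as np

    def sph(theta, phi):
        return np.array([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])

    def k(x, y, kappa=5.0):
        return np.exp(kappa * (x @ y))         # normalization constant omitted

    X = np.stack([sph(0.3, 0.1), sph(1.2, 2.0)])   # centers of f
    Y = np.stack([sph(0.9, -1.0)])                 # centers of g
    a, b = np.array([1.0, -0.5]), np.array([2.0])

    G = np.array([[k(x, y) for y in Y] for x in X])
    print(a @ G @ b)                           # no quadrature on the sphere needed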

Analogical reasoning depends fundamentally on the ability to learn and
generalize about relations between objects. We develop an approach to
relational learning which, given a set of pairs of objects
$\mathbf{S}=\{A^{(1)}:B^{(1)},A^{(2)}:B^{(2)},\ldots,A^{(N)}:B^{(N)}\}$,
measures how well other pairs A:B fit in with the set $\mathbf{S}$. Our work
addresses the following question: is the relation between objects A and B
analogous to those relations found in $\mathbf{S}$? Such questions are
particularly relevant in information retrieval, where an investigator might
want to search for analogous pairs of objects that match the query set of
interest. There are many ways in which objects can be related, making the task
of measuring analogies very challenging. Our approach combines a similarity
measure on function spaces with Bayesian analysis to produce a ranking. It
requires data containing features of the objects of interest and a link matrix
specifying which relationships exist; no further attributes of such
relationships are necessary. We illustrate the potential of our method on text
analysis and information networks. An application on discovering functional
interactions between pairs of proteins is discussed in detail, where we show
that our approach can work in practice even if a small set of protein pairs is
provided.

http://arxiv.org/abs/0912.5193 2013/09/01 - 17:21

We consider supervised learning problems where the features are embedded in a
graph, such as gene expressions in a gene network. In this context, it is of
much interest to automatically select a subgraph with few connected components;
by exploiting prior knowledge, one can indeed improve the prediction
performance or obtain results that are easier to interpret. Regularization or
penalty functions for selecting features in graphs have recently been proposed,
but they raise new algorithmic challenges. For example, they typically require
solving a combinatorially hard selection problem among all connected subgraphs.
In this paper, we propose computationally feasible strategies to select a
sparse and well-connected subset of features sitting on a directed acyclic
graph (DAG). We introduce structured sparsity penalties over paths on a DAG
called "path coding" penalties. Unlike existing regularization functions that
model long-range interactions between features in a graph, path coding
penalties are tractable. The penalties and their proximal operators involve
path selection problems, which we efficiently solve by leveraging network flow
optimization. We experimentally show on synthetic, image, and genomic data that
our approach is scalable and leads to more connected subgraphs than other
regularization functions for graphs.

http://arxiv.org/abs/1204.4539 2013/09/01 - 17:21

We provide a general method to prove the existence of, and to efficiently
compute, elimination orderings in graphs. Our method relies on several tools
that were known before but had not been put together so far: the algorithm
LexBFS due to Rose, Tarjan and Lueker; one of its properties discovered by
Berry and Bordat; and a local decomposition property of graphs discovered by
Maffray, Trotignon and Vu\vskovi\'c. We use this method to prove the existence
of elimination orderings in several classes of graphs, and to compute them in
linear time. Some of the classes have already been studied, namely
even-hole-free graphs, square-theta-free Berge graphs, universally signable
graphs and wheel-free graphs. Some other classes are new. It turns out that all
the classes that we study in this paper can be defined by excluding some of the
so-called Truemper configurations. For several classes of graphs, we directly
obtain bounds on the chromatic number, or fast algorithms for the maximum
clique problem or the coloring problem.

http://arxiv.org/abs/1205.2535 2013/09/01 - 17:21
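
The traversal at the core of the method is easy to state; here is a compact, label-based O(n^2) LexBFS (the paper of course uses it with additional machinery).

    # Lexicographic breadth-first search (Rose, Tarjan & Lueker): repeatedly
    # visit an unvisited vertex with lexicographically largest label.
    def lexbfs(adj):
        labels = {v: [] for v in adj}
        order, unvisited = [], set(adj)
        while unvisited:
            v = max(unvisited, key=lambda u: labels[u])
            unvisited.discard(v)
            order.append(v)
            for u in adj[v]:
                if u in unvisited:
                    labels[u].append(len(adj) - len(order))  # decreasing stamps
        return order

    adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}
    print(lexbfs(adj))   # e.g. [0, 1, 2, 3, 4]; for chordal graphs, the reversal
                         # of a LexBFS order is a perfect elimination ordering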

We construct non-associative key establishment protocols for all left
self-distributive (LD), multi-LD-, and other left distributive systems.
Instantiations of these protocols using generalized shifted conjugacy in braid
groups lead to instances of a natural and apparently new group-theoretic
problem, which we call the (subgroup) conjugacy coset problem.

http://arxiv.org/abs/1305.4401 2013/08/31 - 14:13

We present a method for incorporating missing data in non-parametric
statistical learning without the need for imputation. We focus on a tree-based
method, Bayesian Additive Regression Trees (BART), enhanced with "Missingness
Incorporated in Attributes," an approach recently proposed for incorporating
missingness into decision trees (Twala, 2008). This procedure takes advantage
of the partitioning mechanisms found in tree-based models. Simulations on
generated models and real data indicate that our proposed method can forecast
well on complicated missing-at-random and not-missing-at-random models as well
as models where missingness itself influences the response. Our procedure has
higher predictive performance and is more stable than competitors in many
cases. We also illustrate BART's abilities to incorporate missingness into
uncertainty intervals and to detect the influence of missingness on the model
fit.

http://arxiv.org/abs/1306.0618 2013/08/31 - 14:13
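
The "Missingness Incorporated in Attributes" idea can be emulated in miniature by handing any tree learner the missingness pattern as extra features. The sketch below uses an off-the-shelf decision tree rather than BART, and the data-generating process is invented for illustration.

    # Missingness as attributes: augment the design matrix with indicator
    # columns so splits can route missing values.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(4)
    X = rng.normal(size=(500, 3))
    miss = rng.random(500) < 0.3                   # x1 goes missing ...
    y = X[:, 0].copy()
    y[miss] += 2.0                                 # ... and missingness shifts the response
    X[miss, 0] = np.nan

    M = np.isnan(X)                                # missingness indicators
    X_filled = np.where(M, 0.0, X)                 # any constant fill works here
    X_mia = np.hstack([X_filled, M.astype(float)]) # augmented attribute matrix

    tree = DecisionTreeRegressor(max_depth=4).fit(X_mia, y)
    print(tree.score(X_mia, y))                    # training fit; the tree exploits the indicator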

Sites in an infinite d-dimensional lattice, open with probability greater than
or equal to 1/d, form an infinite open path.

http://arxiv.org/abs/1307.2827 2013/08/31 - 14:13

This note presents a deterministic integer factorization algorithm with running
time complexity O(N^(1/6+e)), e > 0. This improves on the current performance
of deterministic integer factorization algorithms, rated at O(N^(1/4+e))
arithmetic operations. Equivalently, given the least significant (log N)/6 bits
of a factor of N = pq, the algorithm factors the integer in polynomial time
O(log(N)^c), c > 0 constant.

http://arxiv.org/abs/1308.2891 2013/08/31 - 14:13

Many real-world networks depend on other networks, often in non-trivial ways,
to keep their functionality. These interdependent "networks of networks" are
often extremely fragile. When a fraction $1-p$ of nodes in one network randomly
fail, the damage propagates to nodes in networks that are interdependent and a
dynamic cascade of failure occurs that affects the entire system. We present
novel dynamic equations for two interdependent networks that allow us to
reproduce the cascades of failure for an arbitrary pattern of interdependency.
We study the "rich club" effect found in many real interdependent network
systems in which the high-degree nodes are extremely interdependent,
correlating a fraction $\alpha$ of the higher degree nodes on each network. We
find a rich phase diagram in the $(p, \alpha)$ plane, with a tricritical point
reminiscent of the tricritical point of liquids that separates a non-functional
phase from two functional phases.

http://arxiv.org/abs/1308.4216 2013/08/31 - 14:13
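
A minimal simulation of such a cascade, for the simplest special case of one-to-one interdependency between two Erdos-Renyi networks (the paper's equations cover arbitrary interdependency patterns; all sizes and parameters below are arbitrary):

    # A node survives only if it lies in the giant component of its own network
    # and its partner in the other network survives too; iterate to a fixed point.
    import networkx as nx
    import numpy as np

    def giant(G, alive):
        H = G.subgraph([v for v in G if alive[v]])
        if H.number_of_nodes() == 0:
            return set()
        return max(nx.connected_components(H), key=len)

    def cascade(n=2000, deg=4.0, p=0.6, seed=5):
        rng = np.random.default_rng(seed)
        A = nx.gnp_random_graph(n, deg / n, seed=seed)
        B = nx.gnp_random_graph(n, deg / n, seed=seed + 1)
        alive = rng.random(n) < p                # initial random failures (1-p fail)
        prev = -1
        while alive.sum() != prev:
            prev = alive.sum()
            alive &= np.isin(np.arange(n), list(giant(A, alive)))  # prune in A
            alive &= np.isin(np.arange(n), list(giant(B, alive)))  # partners prune in B
        return alive.sum() / n

    print(cascade(p=0.8), cascade(p=0.5))   # the mutual giant component vanishes abruptly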

The Cloud Computing concept offers dynamically scalable resources provisioned
as a service over the Internet. Economic benefits are the main driver for the
Cloud, since it promises the reduction of capital expenditure and operational
expenditure. In order for this to become reality, however, there are still some
challenges to be solved. Amongst these are security and trust issues, since the
user data has to be released to the Cloud and thus leaves the protection sphere
of the data owner. Most of the discussions on these topics are mainly driven by
arguments related to organisational means. This paper focuses on various
security issues arising from the usage of Cloud services, and especially from
the rapid development of the Cloud computing arena. It also discusses a basic
security model, followed by various high-level security threats in the
industry.

http://arxiv.org/abs/1308.5996 2013/08/29 - 22:09

The advent of Bluetooth wireless technology makes it possible to transmit
real-time audio between mobile devices. Bluetooth is cost-efficient and
power-efficient, but it is not well suited to traditional audio encoding and
real-time streaming due to its limited bandwidth, high error rates, and the
time-varying nature of the radio link. Audio streaming over Bluetooth therefore
tends to consume excessive power and bandwidth. To overcome these problems, an
algorithm is proposed in this work to optimize the audio stream from the source
to the sink by estimating the proximity between them. The optimization is
achieved by adjusting the bit rate of the audio stream, thus conserving power.
We carefully considered various Bluetooth signal parameters, and the parameter
most suitable for estimating proximity was determined experimentally. The
experiments were carried out using a Class II BS003 Bluesoleil dongle. This
work will enable Bluetooth users to perform seamless and optimized streaming of
MP3 stereo audio data.

http://arxiv.org/abs/1308.5999 2013/08/29 - 22:09
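
The abstract leaves the experimentally chosen signal parameter unnamed, so the sketch below uses RSSI as a stand-in and maps a smoothed proximity estimate to MP3 bit-rate tiers; thresholds and rates are invented for illustration.

    # Adaptation loop in the spirit of the abstract: estimate proximity from a
    # signal measurement (RSSI assumed here) and pick an MP3 bit-rate tier.
    MP3_RATES = [320, 192, 128, 96, 64]            # kbit/s tiers

    def pick_bitrate(rssi_dbm, history, alpha=0.3):
        # exponential smoothing tames the time-varying radio link
        smoothed = (alpha * rssi_dbm + (1 - alpha) * history
                    if history is not None else rssi_dbm)
        if smoothed > -60:   rate = MP3_RATES[0]   # sink is close: full quality
        elif smoothed > -70: rate = MP3_RATES[1]
        elif smoothed > -80: rate = MP3_RATES[2]
        elif smoothed > -90: rate = MP3_RATES[3]
        else:                rate = MP3_RATES[4]   # weak link: conserve power
        return rate, smoothed

    h = None
    for rssi in [-55, -58, -75, -85, -90]:
        rate, h = pick_bitrate(rssi, h)
        print(rssi, "->", rate, "kbit/s")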

The Gripon-Berrou neural network (GBNN) is a recently invented recurrent
neural network embracing an LDPC-like sparse encoding setup, which makes it
extremely resilient to noise and errors. A natural use of GBNN is as an
associative memory. There are two activation rules for the neuron dynamics,
namely sum-of-sum and sum-of-max. The latter outperforms the former in terms of
retrieval rate by a huge margin. In prior discussions and experiments, it was
believed that although sum-of-sum may lead the network to oscillate, sum-of-max
always converges to an ensemble of neuron cliques corresponding to previously
stored patterns. However, this is not entirely correct. In fact, sum-of-max
often converges to bogus fixed points where the ensemble comprises only a small
subset of the converged state. By taking advantage of this overlooked fact, we
can greatly improve the retrieval rate. We discuss this particular issue and
propose a number of heuristics to push sum-of-max beyond these bogus fixed
points. To tackle the problem directly and completely, a novel post-processing
algorithm is also developed and customized to the structure of GBNN.
Experimental results show that the new algorithm achieves a huge performance
boost in terms of both retrieval rate and run-time, compared to the standard
sum-of-max and all the other heuristics.

http://arxiv.org/abs/1308.6003 2013/08/29 - 22:09
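
The two activation rules are easy to state on a toy connection tensor (our notation; dimensions and sparsity are arbitrary): sum-of-sum counts every active incoming edge, while sum-of-max lets each cluster vote at most once.

    # W[c1, i, c2, j] = True iff neuron i of cluster c1 links to neuron j of c2.
    import numpy as np

    C, L = 4, 8                                    # clusters x neurons per cluster
    rng = np.random.default_rng(6)
    W = rng.random((C, L, C, L)) < 0.2
    W = W | W.transpose(2, 3, 0, 1)                # links are symmetric
    W[np.arange(C), :, np.arange(C), :] = False    # no links inside a cluster
    active = np.zeros((C, L), bool)                # one active neuron per cluster
    active[np.arange(C), rng.integers(0, L, C)] = True

    signals = W & active[None, None, :, :]         # incoming signals per source neuron
    sum_of_sum = signals.sum(axis=(2, 3))          # counts every active edge
    sum_of_max = signals.any(axis=3).sum(axis=2)   # each cluster contributes at most 1

    # winner-take-all inside each cluster, under each rule
    print(sum_of_sum.argmax(axis=1))
    print(sum_of_max.argmax(axis=1))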

We propose a new conjecture on some exponential sums. These particular sums
have not apparently been considered in the literature. Subject to the
conjecture we obtain the first effective construction of asymptotically good
tree codes. The available numerical evidence is consistent with the conjecture
and is sufficient to certify codes for significant-length communications.

http://arxiv.org/abs/1308.6007 2013/08/29 - 22:09

Associative memories are structures that can retrieve previously stored
information given a partial input pattern instead of an explicit address as in
indexed memories. A few hardware approaches have recently been introduced for a
new family of associative memories based on Sparse-Clustered Networks (SCN)
that show attractive features. These architectures are suitable for
implementations with low retrieval latency, but are limited to small networks
that store a few hundred data entries. In this paper, a new hardware
architecture of SCNs is proposed that features a new data-storage technique as
well as a method we refer to as Selective Decoding (SD-SCN). The SD-SCN has
been implemented using a similar FPGA used in the previous efforts and achieves
two orders of magnitude higher capacity, with no error-performance penalty but
with the cost of few extra clock cycles per data access.

http://arxiv.org/abs/1308.6021 2013/08/29 - 22:09

We prove the existence of approximate correlated equilibrium of support size
polylogarithmic in the number of players and the number of actions per player.
In particular, using the probabilistic method, we show that there exists a
multiset of polylogarithmic size such that the uniform distribution over this
multiset forms an approximate correlated equilibrium. Along similar lines, we
establish the existence of approximate coarse correlated equilibrium with
logarithmic support.

We complement these results by considering the computational complexity of
determining small-support approximate equilibria. We show that random sampling
can be used to efficiently determine an approximate coarse correlated
equilibrium with logarithmic support. But, such a tight result does not hold
for correlated equilibrium, i.e., sampling might generate an approximate
correlated equilibrium of support size $\Omega(m)$, where m is the number of
actions per player. Finally, we show that finding an exact correlated
equilibrium with smallest possible support is NP-hard under Cook reductions,
even in the case of two-player zero-sum games.

http://arxiv.org/abs/1308.6025 2013/08/29 - 22:09
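
The object in question is easy to test for: the uniform distribution over a multiset of action profiles is an eps-coarse correlated equilibrium if no unilateral deviation gains more than eps. A checker for two players (in the paper the multiset is sampled from an exact equilibrium; the random profiles below are just a demonstration):

    # Smallest eps for which the uniform distribution over `profiles` is an
    # eps-coarse correlated equilibrium of the game (U1, U2).
    import numpy as np

    def cce_eps(U1, U2, profiles):
        p = np.zeros_like(U1, dtype=float)
        for r, c in profiles:
            p[r, c] += 1.0 / len(profiles)         # uniform weight on the multiset
        v1, v2 = (p * U1).sum(), (p * U2).sum()    # expected payoffs
        dev1 = (U1 @ p.sum(axis=0)).max() - v1     # best fixed row deviation
        dev2 = (p.sum(axis=1) @ U2).max() - v2     # best fixed column deviation
        return max(dev1, dev2, 0.0)

    rng = np.random.default_rng(7)
    m = 16
    U1, U2 = rng.random((m, m)), rng.random((m, m))
    samples = [tuple(rng.integers(0, m, 2)) for _ in range(64)]  # small multiset
    print(cce_eps(U1, U2, samples))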

A hierarchy of semidefinite programming (SDP) relaxations approximates the
global optimum of polynomial optimization problems of noncommuting variables.
Generating the relaxation, however, is a computationally demanding task, and
only problems of commuting variables have efficient generators. We develop an
implementation for problems of noncommuting variables that creates the
relaxation to be solved by SDPA -- a high-performance solver that runs in a
distributed environment. We further exploit the inherent sparsity of
optimization problems in quantum physics to reduce the complexity of the
resulting relaxation. Constrained problems with a relaxation of order two may
contain up
to a hundred variables. The implementation is available in C++ and Python. The
tool helps solve problems such as finding the ground state energy or testing
quantum correlations.

http://arxiv.org/abs/1308.6029 2013/08/29 - 22:09

In this paper, we build a framework for sparse interpolation. We first
investigate the theoretical limit on the number of unisolvent points for sparse
interpolation in a general setting, and try to answer some basic questions on
this topic. We also explore the relation between classical interpolation and
sparse interpolation. We then consider the design of interpolation points for
$s$-sparse functions in high-dimensional Chebyshev bases, with possible
applications including uncertainty quantification, the numerical solution of
stochastic or parametric PDEs, and compressed sensing. Unlike the traditional
random sampling method, we present a deterministic method to produce the
interpolation points, and show its performance with $\ell_1$ minimization by
analyzing the mutual incoherence of the interpolation matrix. Numerical
experiments show that the deterministic points perform similarly to random
points.

http://arxiv.org/abs/1308.6038 2013/08/29 - 22:09
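
The quantity driving the $\ell_1$ analysis is the mutual incoherence of the interpolation matrix. Here is how it is computed for a univariate Chebyshev system at a candidate point set (the points below are standard Chebyshev nodes used as a stand-in, not the paper's deterministic construction):

    # Mutual incoherence: maximum absolute inner product between distinct
    # normalized columns of the interpolation matrix A, A[i, k] = T_k(x_i).
    import numpy as np

    def mutual_coherence(points, degree):
        A = np.cos(np.outer(np.arccos(points), np.arange(degree + 1)))  # T_k(x) = cos(k arccos x)
        A /= np.linalg.norm(A, axis=0)
        G = np.abs(A.T @ A)
        np.fill_diagonal(G, 0.0)
        return G.max()

    pts = np.cos((2 * np.arange(12) + 1) * np.pi / 24)   # 12 Chebyshev nodes
    print(mutual_coherence(pts, 40))                     # coherence of a 12 x 41 matrix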