Computer Science papers on

A key feature of the packet scheduler in LTE systems is that it can allocate
resources both in the time and frequency domain. Furthermore, the scheduler is
provided with channel state information periodically reported by the user
equipment (UE), either in aggregate form for the whole downlink channel or
distinguished for each available subchannel. This mechanism allows for wide
discretion in resource allocation, thus promoting the flourishing of several
scheduling algorithms, with different purposes. It is therefore of great
interest to compare the performance of such algorithms in different scenarios.
A very common simulation tool that can be used for this purpose is ns-3, which
already supports a set of well known scheduling algorithms for LTE downlink,
though it still lacks schedulers that provide throughput guarantees. In this
work we help fill this gap by implementing a scheduling algorithm that
provides long-term throughput guarantees to the different users, while
opportunistically exploiting the instantaneous channel fluctuations to increase
the cell capacity. We then perform a thorough performance analysis of the
different scheduling algorithms by means of extensive ns-3 simulations, both
for saturated UDP and TCP traffic sources. The analysis makes it possible to
appreciate the difference among the scheduling algorithms, and to assess the
performance gain, both in terms of cell capacity and packet service time,
obtained by allowing the schedulers to work on the frequency domain. 2014/10/03 - 03:57
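The flavor of channel-aware frequency-domain scheduling described above can be sketched in a few lines. The proportional-fair example below is purely illustrative (user counts, rates, and average throughputs are invented; the paper's algorithm additionally enforces long-term throughput guarantees):

```python
# Illustrative proportional-fair frequency-domain scheduler (hypothetical
# example, not the paper's algorithm): each TTI, every resource block (RB)
# goes to the user maximizing instantaneous_rate / average_throughput.
import random

def pf_schedule(rates, avg_thr):
    """rates[u][rb]: instantaneous rate of user u on RB rb (from CQI reports)."""
    alloc = {}
    for rb in range(len(rates[0])):
        alloc[rb] = max(range(len(rates)), key=lambda u: rates[u][rb] / avg_thr[u])
    return alloc

random.seed(0)
n_users, n_rbs = 3, 6
rates = [[random.uniform(0.1, 1.0) for _ in range(n_rbs)] for _ in range(n_users)]
avg_thr = [0.5, 0.4, 0.6]
alloc = pf_schedule(rates, avg_thr)
print(alloc)  # RB index -> chosen user index
```

A user with a low average throughput gets priority on its good subchannels, which is the opportunistic behavior the abstract refers to.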

In this paper we present the performance evaluation for a MIMO in vivo WBAN
system, using ANSYS HFSS and the associated complete Human Body Model. We
analyzed MIMO system capacity statistically and FER performance based upon an
IEEE 802.11n system model, with receiver antennas placed at various angular
positions around the human body. We also analyzed MIMO system capacity with
receiver antennas at the front of the body at various distances from
transmitter antennas. The results were compared to SISO arrangements, and we
demonstrate that using 2x2 MIMO in vivo achieves better performance, with
significantly higher system capacity when the receiver antennas are located at
the back of the body or in front of it.
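The capacity comparison at the heart of such an evaluation follows the standard MIMO formula $C = \log_2\det(I + \frac{SNR}{N_t}HH^H)$. A minimal sketch, assuming an i.i.d. Rayleigh channel and a fixed SNR (not the paper's HFSS-derived in vivo channels):

```python
# Ergodic capacity of 2x2 MIMO vs SISO under an assumed Rayleigh channel.
import math, random

def mimo2x2_capacity(H, snr):
    # det(I + (snr/2) H H^H) = 1 + (snr/2) tr(H H^H) + (snr/2)^2 |det H|^2 for 2x2 H
    a = snr / 2.0
    tr_w = sum(abs(h) ** 2 for row in H for h in row)
    det_h = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    return math.log2(1 + a * tr_w + a * a * abs(det_h) ** 2)

def siso_capacity(h, snr):
    return math.log2(1 + snr * abs(h) ** 2)

def rayleigh(rng):
    # unit-power complex Gaussian channel coefficient
    return complex(rng.gauss(0, 1 / math.sqrt(2)), rng.gauss(0, 1 / math.sqrt(2)))

rng = random.Random(1)
snr, trials = 10.0, 2000
mimo = sum(mimo2x2_capacity([[rayleigh(rng), rayleigh(rng)],
                             [rayleigh(rng), rayleigh(rng)]], snr)
           for _ in range(trials)) / trials
siso = sum(siso_capacity(rayleigh(rng), snr) for _ in range(trials)) / trials
print(f"2x2 MIMO ~ {mimo:.2f} b/s/Hz vs SISO ~ {siso:.2f} b/s/Hz")
```

The 2x2 average is markedly higher, which is the qualitative effect the paper quantifies for in vivo channels.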

In this paper, we deal with the problem of jointly determining the optimal
coding strategy and the scheduling decisions when receivers obtain layered data
from multiple servers. The layered data is encoded by means of Prioritized
Random Linear Coding (PRLC) in order to be resilient to channel loss while
respecting the unequal levels of importance in the data, and data blocks are
transmitted simultaneously in order to reduce decoding delays and improve the
delivery performance. We formulate the optimal coding and scheduling decision
problem in our novel framework with the help of Markov Decision Processes
(MDPs), which are effective tools for modeling adaptive streaming systems.
Reinforcement learning approaches are then proposed to derive reduced
computational complexity solutions to the adaptive coding and scheduling
problems. The novel reinforcement learning approaches and the MDP solution are
examined in an illustrative example for scalable video transmission. Our
methods offer large performance gains over competing methods that deliver the
data blocks sequentially. The experimental evaluation also shows that our novel
algorithms offer continuous playback and guarantee small quality variations,
which is not the case for baseline solutions. Finally, our work highlights the
advantages of reinforcement learning algorithms in forecasting the temporal
evolution of data demands and in making optimal coding and scheduling
decisions.
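To make the reinforcement-learning angle concrete, here is a toy tabular Q-learning loop on an invented two-action MDP (buffer levels as states, "base layer only" vs. "base plus enhancement" as actions). It illustrates only the update rule, not the paper's streaming model:

```python
# Toy tabular Q-learning sketch (illustrative; states, actions, dynamics and
# rewards are invented, not the paper's PRLC streaming MDP).
import random

random.seed(0)
states = range(4)            # e.g., receiver buffer levels
actions = [0, 1]             # 0: send base layer only, 1: base + enhancement
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(s, a):
    # invented dynamics: the enhancement layer pays more but is lost more often
    loss = random.random() < (0.3 if a else 0.1)
    s2 = max(0, min(3, s + (-1 if loss else 1)))
    reward = (2.0 if a else 1.0) * (0.0 if loss else 1.0)
    return s2, reward

s = 2
for _ in range(5000):
    a = random.choice(actions) if random.random() < eps else max(actions, key=lambda b: Q[(s, b)])
    s2, r = step(s, a)
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
    s = s2

print({k: round(v, 2) for k, v in Q.items()})
```

The learned Q-table plays the role of the (approximate) MDP solution, sidestepping full dynamic programming, which is the complexity reduction the abstract alludes to.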

We consider the group testing problem, in the case where the items are
defective independently but with non-constant probability. We introduce and
analyse an algorithm to solve this problem by grouping items together
appropriately. We give conditions under which the algorithm performs
essentially optimally in the sense of information-theoretic capacity. We use
concentration of measure results to bound the probability that this algorithm
requires many more tests than the expected number. This has applications to the
allocation of spectrum to cognitive radios, in the case where a database gives
prior information that a particular band will be occupied.
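The grouping idea can be illustrated with a simple Dorfman-style two-stage scheme that pools items by their prior probabilities. This is a sketch of the principle only, not the authors' algorithm, and the target pool mass is an invented parameter:

```python
# Probability-aware pooling sketch: bundle items so each pool's expected
# number of defectives stays below a target, test pools, then retest
# positive pools item by item (illustrative two-stage scheme).
import random

def make_pools(probs, target=0.5):
    pools, cur, mass = [], [], 0.0
    for i in sorted(range(len(probs)), key=lambda i: probs[i]):
        if mass + probs[i] > target and cur:
            pools.append(cur); cur, mass = [], 0.0
        cur.append(i); mass += probs[i]
    if cur:
        pools.append(cur)
    return pools

random.seed(2)
probs = [random.uniform(0.01, 0.3) for _ in range(40)]
defective = {i for i, p in enumerate(probs) if random.random() < p}
tests, found = 0, set()
for pool in make_pools(probs):
    tests += 1                           # one pooled test
    if defective & set(pool):
        tests += len(pool)               # retest every item in a positive pool
        found |= defective & set(pool)
print(tests, sorted(found))
```

Low-probability items end up in large pools and high-probability items in small ones, so the expected test count adapts to the non-constant prior, in the spirit of the spectrum-database application.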

Motivated by Internet targeted advertising, we address several ad allocation
problems. Prior work has established that these problems admit no randomized online
algorithm better than $(1-\frac{1}{e})$-competitive
(\cite{karp1990optimal,mehta2007adwords}), yet simple heuristics have been
observed to perform much better in practice. We explain this phenomenon by
studying a generalization of the bounded-degree inputs considered by Buchbinder
et al.~\cite{buchbinder2007online}, graphs which we call $(k,d)-bounded$. In
such graphs the maximal degree on the online side is at most $d$ and the
minimal degree on the offline side is at least $k$. We prove that for such
graphs, these problems' natural greedy algorithms attain competitive ratio
$1-\frac{d-1}{k+d-1}$, tending to \emph{one} as $d/k$ tends to zero. We prove
this bound is tight for these algorithms.
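The natural greedy algorithm analyzed here is easy to state: each arriving online vertex matches an arbitrary free neighbor. A minimal sketch with an offline augmenting-path matching for comparison (the random graph below is not necessarily $(k,d)$-bounded; it only demonstrates the mechanics):

```python
# Online greedy bipartite matching vs. an offline maximum matching.
import random

def greedy_online(adj, n_offline):
    free = [True] * n_offline
    matched = 0
    for nbrs in adj:                      # online vertices arrive one by one
        for v in nbrs:
            if free[v]:
                free[v] = False
                matched += 1
                break
    return matched

def max_matching(adj, n_offline):
    match = [-1] * n_offline              # Kuhn's augmenting-path algorithm
    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match[v] == -1 or augment(match[v], seen):
                    match[v] = u
                    return True
        return False
    return sum(augment(u, set()) for u in range(len(adj)))

random.seed(3)
adj = [random.sample(range(10), 3) for _ in range(12)]   # 12 online, 10 offline
g, m = greedy_online(adj, 10), max_matching(adj, 10)
print(g, m)   # greedy is always at least half of the optimum
```

On $(k,d)$-bounded inputs the paper's bound $1-\frac{d-1}{k+d-1}$ improves far beyond this generic 1/2 guarantee.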

Next, we develop deterministic primal-dual algorithms for the above problems
achieving competitive ratio $1-(1-\frac{1}{d})^k>1-\frac{1}{e^{k/d}}$, or
\emph{exponentially} better loss as a function of $k/d$, and strictly better
than $1-\frac{1}{e}$ whenever $k\geq d$. We complement our lower bounds with
matching upper bounds for the vertex-weighted problem. Finally, we use our
deterministic algorithms to prove by dual-fitting that simple randomized
algorithms achieve the same bounds in expectation. Our algorithms and analysis
differ from previous ad allocation algorithms, which largely scale bids based
on the spent fraction of their bidder's budget, whereas we scale bids according
to the number of times the bidder could have spent as much as her current bid.
Our algorithms differ from previous online primal-dual algorithms, as they do
not maintain dual feasibility, but only primal-to-dual ratio, and only attain
dual feasibility upon termination. We believe our techniques could find
applications to other well-behaved online packing problems.

We identify a formal connection between physical problems related to the
detection of separable (unentangled) quantum states and complexity classes in
theoretical computer science. In particular, we show that to nearly every
quantum interactive proof complexity class (including BQP, QMA, QMA(2), and
QSZK), there corresponds a natural separability testing problem that is
complete for that class. Of particular interest is the fact that the problem of
determining whether an isometry can be made to produce a separable state is
either QMA-complete or QMA(2)-complete, depending upon whether the distance
between quantum states is measured by the one-way LOCC norm or the trace norm.
We obtain strong hardness results by proving that for each n-qubit maximally
entangled state there exists a fixed one-way LOCC measurement that
distinguishes it from any separable state with error probability that decays
exponentially in n.

In this paper we introduce constructible analogs of the discrete complexity
classes $\mathbf{VP}$ and $\mathbf{VNP}$ of sequences of functions. The
functions in the new definitions are constructible functions on $\mathbb{R}^n$
or $\mathbb{C}^n$. We define a class of sequences of constructible functions
that play a role analogous to that of $\mathbf{VP}$ in the more classical
theory. The class analogous to $\mathbf{VNP}$ is defined using Euler
integration. We discuss several examples, develop a theory of completeness, and
pose a conjecture analogous to the $\mathbf{VP}$ vs. $\mathbf{VNP}$ conjecture
in the classical case. In the second part of the paper we extend the notions of
complexity classes to sequences of constructible sheaves over $\mathbb{R}^n$
(or its one point compactification). We introduce a class of sequences of
simple constructible sheaves, that could be seen as the sheaf-theoretic analog
of the Blum-Shub-Smale class $\mathbf{P}_{\mathbb{R}}$. We also define a
hierarchy of complexity classes of sheaves mirroring the polynomial hierarchy,
$\mathbf{PH}_{\mathbb{R}}$, in the B-S-S theory. We prove a singly exponential
upper bound on the topological complexity of the sheaves in this hierarchy
mirroring a similar result in the B-S-S setting. We obtain as a result an
algorithm with singly exponential complexity for a sheaf-theoretic variant of
the real quantifier elimination problem. We pose the natural sheaf-theoretic
analogs of the classical $\mathbf{P}$ vs. $\mathbf{NP}$ question, and also
discuss a connection with Toda's theorem from discrete complexity theory in the
context of constructible sheaves. We also discuss possible generalizations of
the questions in complexity theory related to separation of complexity classes
to more general categories via sequences of adjoint pairs of functors.

Fractional repetition (FR) codes are a family of codes for distributed storage
systems that allow uncoded exact repairs with the minimum repair bandwidth.
However, in contrast to minimum bandwidth regenerating (MBR) codes, where a
random set of available nodes of a certain size is used for a node repair, the
repairs with FR codes are table based. This usually allows storing more data
compared to MBR codes. In this work, we consider bounds on the
fractional repetition capacity, which is the maximum amount of data that can be
stored using an FR code. Optimal FR codes which attain these bounds are
presented. The constructions of these FR codes are based on combinatorial
designs and on families of regular and biregular graphs. These constructions of
FR codes for given parameters raise some interesting questions in graph theory.
These questions and some of their solutions are discussed in this paper. In
addition, based on a connection between FR codes and batch codes, we propose a
new family of codes for DSS, namely fractional repetition batch codes, which
have the properties of batch codes and FR codes simultaneously. These are the
first codes for DSS which allow for uncoded efficient exact repairs and load
balancing, which can be performed by several users in parallel. Other concepts
related to FR codes are also discussed.
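A minimal example of a table-based FR code can be built from a graph design: take $K_5$, treat each edge as a stored symbol replicated on its two endpoint nodes, and repair a failed node by fetching each of its symbols, uncoded, from the unique other node holding it. (Illustrative construction; the parameters and repair table are our own choices, not from the paper.)

```python
# FR code from the complete graph K5: 5 nodes, 10 symbols, repetition factor 2.
from itertools import combinations

nodes = range(5)
symbols = list(combinations(nodes, 2))           # each edge is one symbol
storage = {v: [s for s in symbols if v in s] for v in nodes}

def repair(failed):
    """Table-based repair: fetch each lost symbol from its other replica."""
    plan = {}
    for s in storage[failed]:
        helper = s[0] if s[1] == failed else s[1]
        plan[s] = helper
    return plan

plan = repair(2)
print(plan)   # lost symbol -> helper node it is read from, uncoded
```

Every node stores 4 symbols and a repair reads exactly one symbol from each of the 4 surviving nodes, illustrating the uncoded, table-based repair the abstract contrasts with MBR codes.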

We show that the Survey Propagation-guided decimation algorithm fails to find
satisfying assignments on random instances of the "Not-All-Equal-$K$-SAT"
problem if the number of message passing iterations is bounded by a constant
independent of the size of the instance and the clause-to-variable ratio is
above $(1+o_K(1)){2^{K-1}\over K}\log^2 K$ for sufficiently large $K$. Our
analysis in fact applies to a broad class of algorithms described as
"sequential local algorithms". Such algorithms iteratively set variables based
on some local information and then recurse on the reduced instance. Survey
Propagation-guided as well as Belief Propagation-guided decimation algorithms,
two widely studied message-passing-based algorithms, fall under this category
provided the number of message passing iterations is bounded by a
constant. Another well-known algorithm falling into this category is the Unit
Clause algorithm. Our work constitutes the first rigorous analysis of the
performance of the SP-guided decimation algorithm.

The approach underlying our paper is based on an intricate geometry of the
solution space of random NAE-$K$-SAT problem. We show that above the
$(1+o_K(1)){2^{K-1}\over K}\log^2 K$ threshold, the overlap structure of
$m$-tuples of satisfying assignments exhibit a certain clustering behavior
expressed in the form of constraints on distances between the $m$ assignments,
for appropriately chosen $m$. We further show that if a sequential local
algorithm succeeds in finding a satisfying assignment with probability bounded
away from zero, then one can construct an $m$-tuple of solutions violating
these constraints, thus leading to a contradiction. Along with (citation), this
is the first result directly linking the clustering property of random
constraint satisfaction problems to the computational hardness of finding
satisfying assignments.
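The Unit Clause algorithm mentioned above is the simplest member of this class of sequential local algorithms and fits in a few lines. The sketch below runs it on plain random K-SAT for concreteness (the paper analyzes NAE-K-SAT; instance sizes here are invented):

```python
# Unit Clause decimation: satisfy a unit clause if one exists, otherwise set
# a random literal; simplify and recurse on the reduced instance.
import random

def unit_clause(clauses, n):
    assign = {}
    clauses = [set(c) for c in clauses]
    while len(assign) < n:
        unit = next((c for c in clauses if len(c) == 1), None)
        lit = next(iter(unit)) if unit else random.choice(
            [v for v in range(1, n + 1) if v not in assign]) * random.choice([-1, 1])
        var, val = abs(lit), lit > 0
        assign[var] = val
        sat_lit = var if val else -var
        clauses = [c - {-sat_lit} for c in clauses if sat_lit not in c]
        if any(not c for c in clauses):
            return None                  # contradiction: a clause became empty
    return assign

random.seed(4)
n, m, k = 30, 60, 3
f = [[random.choice([-1, 1]) * v for v in random.sample(range(1, n + 1), k)]
     for _ in range(m)]
a = unit_clause(f, n)
print(a is not None)
```

Each step uses only local information (clause lengths) before recursing on the reduced formula, which is exactly the "sequential local" template the analysis covers.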

Harmful content on the Internet is increasing day by day, which motivates
further research into fast and reliable filtering of obscene and immoral
material. Pornographic image recognition is an important component of any such
filtering system. In this paper, a new approach for detecting pornographic
images is introduced, in which two new features are suggested. These two
features, in combination with other simple traditional features, provide a
decent separation between porn and non-porn images. In addition, we applied
fuzzy-integral-based information fusion to combine the outputs of an MLP
(Multi-Layer Perceptron) and an NF (Neuro-Fuzzy) classifier. To test the
proposed method, the performance of the system was evaluated on 18354 images
downloaded from the Internet. The attained precision was a 93% TP rate and an
8% FP rate on the training dataset, and 87% and 5.5% on the test dataset. The
achieved results verify the performance of the proposed system against other
related works.

The objective of the research is to develop a general method of constructing
Lyapunov functions for non-linear non-autonomous differential inclusions
described by ordinary differential equations with parameters. The goal has been
attained through the following ideas and tools. First, three-point Poincare
strategy of the investigation of differential equations and manifolds has been
used. Second, the geometric-topological structure of the non-linear
non-autonomous parametric differential inclusions has been presented and
analyzed in the framework of hierarchical fiber bundles. Third, a special
canonizing transformation of the differential inclusions has been found that
allows presenting them in a special canonical form for which certain standard
forms of Lyapunov functions exist. The conditions establishing the relation
between the local asymptotic stability of two corresponding particular
integral curves of a given differential inclusion in its initial and canonical
forms are ascertained. The global asymptotic stability of the entire free
dynamical systems, as restrictions of a given parametric differential
inclusion, and of the whole latter per se, has been investigated in terms of
the classification stability of the typical fiber of the meta-bundle. The
prospects for developing and modifying Lyapunov's second method in light of
the newly discovered features of Lyapunov functions are discussed.

Many path-finding algorithms on graphs such as A* are sped up by using a
heuristic function that gives lower bounds on the cost to reach the goal.
Aiming to apply similar techniques to speed up sampling-based motion-planning
algorithms, we use effective lower bounds on the cost between configurations to
tightly estimate the cost-to-go. We then use these estimates in an anytime
asymptotically-optimal algorithm which we call Motion Planning using Lower
Bounds (MPLB). MPLB is based on the Fast Marching Trees (FMT*) algorithm
recently presented by Janson and Pavone. An advantage of our approach is that
in many cases (especially as the number of samples grows) the weight of
collision detection in the computation is almost negligible with respect to
nearest-neighbor calls. We prove that MPLB performs no more collision-detection
calls than an anytime version of FMT*. Additionally, we demonstrate in
simulations that for certain scenarios, the algorithmic tools presented here
enable efficiently producing low-cost paths while spending only a small
fraction of the running time on collision detection.

Image denoising is an important field in image processing because corruption
is common in real-world data. In this paper, we propose a non-convex
formulation to recover the authentic structure from corrupted data. Typically,
the specific structure is assumed to be low rank, which holds in a wide range
of data such as images and video, and the corruption is assumed to be sparse.
In the literature, this problem is known as Robust Principal Component
Analysis (RPCA), which usually recovers the low-rank structure by
approximating the rank function with the nuclear norm and penalizing the error
with the $\ell_1$-norm. Although RPCA is a convex formulation and can be
solved effectively, the introduced norms are not tight approximations and may
deviate the solution from the authentic one. Therefore, we consider here a
non-convex relaxation consisting of a Schatten-$p$ norm and an $\ell_q$-norm,
which promote low rank and sparsity
respectively. We derive a proximal iteratively reweighted algorithm (PIRA) to
solve the problem. Our algorithm is based on the alternating direction method
of multipliers; in each iteration we linearize the underlying objective
function, which yields a closed-form solution. We demonstrate that solutions
produced by the linearized approximation always converge and provide a tighter
approximation than the convex counterpart. Experimental results on benchmarks
show the encouraging performance of our approach.
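The reweighting principle behind such nonconvex solvers can be shown on a toy vector-denoising problem: the $\ell_q$ penalty is linearized at the current iterate, giving a weighted soft-thresholding step. This illustrates the idea only, not the paper's PIRA/ADMM solver; all parameters below are invented:

```python
# Iteratively reweighted soft-thresholding for an l_q-penalized denoising toy.
import random

def soft(x, t):
    return (abs(x) - t) * (1 if x > 0 else -1) if abs(x) > t else 0.0

def reweighted_lq(y, lam=0.4, q=0.5, iters=10, eps=1e-6):
    x = list(y)
    for _ in range(iters):
        # weight_i = q * (|x_i| + eps)^(q-1) linearizes the l_q penalty at x
        x = [soft(yi, lam * q * (abs(xi) + eps) ** (q - 1)) for yi, xi in zip(y, x)]
    return x

random.seed(5)
truth = [0.0] * 16 + [3.0, -2.5, 4.0, 2.0]
y = [t + random.gauss(0, 0.2) for t in truth]
x = reweighted_lq(y)
print([round(v, 2) for v in x])
```

Small entries attract ever-larger thresholds and collapse to zero, while large entries see a shrinking threshold and survive almost unshrunk: a tighter behavior than a single convex $\ell_1$ threshold, mirroring the Schatten-$p$/$\ell_q$ motivation above.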

The Pret a Voter cryptographic voting system was designed to be flexible and
to offer voters a familiar and easy voting experience. In this paper we present
a case study of our efforts to adapt Pret a Voter to the idiosyncrasies of
elections in the Australian state of Victoria. This technical report includes
general background, user experience and details of the cryptographic protocols
and human processes. We explain the problems, present solutions, then analyse
their security properties and explain how they tie in to other design
decisions. We hope this will be an interesting case study on the application of
end-to-end verifiable voting protocols to real elections.

A preliminary version of this paper appeared as the 10th February 2014
version of "Draft Technical Report for VEC vVote System". This version augments
that version with additional message sequence charts.

The team involved in developing the vVote design described in this report
were: Craig Burton, Chris Culnane, James Heather, Rui Joaquim, Peter Y. A.
Ryan, Steve Schneider and Vanessa Teague.

Fundamental local symmetry breaking problems such as Maximal Independent Set
(MIS) and coloring have been recognized as important by the community, and
studied extensively in (standard) graphs. In particular, fast (i.e.,
logarithmic run time) randomized algorithms are well-established for MIS and
$\Delta +1$-coloring in both the LOCAL and CONGEST distributed computing
models. On the other hand, comparatively much less is known on the complexity
of distributed symmetry breaking in {\em hypergraphs}. In particular, a key
question is whether a fast (randomized) algorithm for MIS exists for hypergraphs.

In this paper, we study the distributed complexity of symmetry breaking in
hypergraphs by presenting distributed randomized algorithms for a variety of
fundamental problems under a natural distributed computing model for
hypergraphs. We first show that MIS in hypergraphs (of arbitrary dimension) can
be solved in $O(\log^2 n)$ rounds ($n$ is the number of nodes of the
hypergraph) in the LOCAL model. We then present a key result of this paper ---
an $O(\Delta^{\epsilon}\text{polylog}(n))$-round hypergraph MIS algorithm in
the CONGEST model where $\Delta$ is the maximum node degree of the hypergraph
and $\epsilon > 0$ is any arbitrarily small constant.

To demonstrate the usefulness of hypergraph MIS, we present applications of
our hypergraph algorithm to solving problems in (standard) graphs. In
particular, the hypergraph MIS yields fast distributed algorithms for the {\em
balanced minimal dominating set} problem (left open in Harris et al. [ICALP
2013]) and the {\em minimal connected dominating set problem}. We also present
distributed algorithms for coloring, maximal matching, and maximal clique in
hypergraphs.
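For concreteness, the object being computed can be stated via a greedy centralized sketch (the distributed algorithms above are far more involved): a vertex set is independent if no hyperedge lies entirely inside it, and maximal if no further vertex can be added.

```python
# Greedy sequential hypergraph MIS on a small invented hypergraph.
def hypergraph_mis(n, edges):
    S = set()
    for v in range(n):
        # add v unless doing so would swallow some hyperedge entirely
        if all(not (e <= S | {v}) for e in edges if v in e):
            S.add(v)
    return S

edges = [frozenset(e) for e in [{0, 1, 2}, {1, 3}, {2, 3, 4}, {4, 5}]]
S = hypergraph_mis(6, edges)
print(sorted(S))  # → [0, 1, 4]
```

For dimension-2 hyperedges this reduces to ordinary graph MIS, which is why the hypergraph algorithms directly yield the graph applications (balanced minimal dominating set, minimal connected dominating set) listed above.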

Recent advances in the estimation of deep directed graphical models and
recurrent networks let us contribute to the removal of a blind spot in the
area of probabilistic modelling of time series. The proposed methods i) can
infer distributed latent state-space trajectories with nonlinear transitions,
ii) scale to large data sets thanks to the use of a stochastic objective and
fast, approximate inference, iii) enable the design of rich emission models
which iv) naturally lead to structured outputs. Two different paths of
introducing latent state sequences are pursued, leading to the variational
recurrent autoencoder (VRAE) and the variational one-step predictor (VOSP).
The use of independent Wiener processes as priors on the latent state sequence
is a viable compromise between efficient computation of the Kullback-Leibler
divergence from the variational approximation of the posterior and maintaining
a reasonable belief in the dynamics. We verify our methods empirically,
obtaining results close to or better than the state of the art. We also show
qualitative results for denoising and missing-value imputation.

There are several results on the stability of nonlinear positive systems in
the presence of time delays. However, most of them assume that the delays are
constant. This paper considers time-varying, possibly unbounded, delays and
establishes asymptotic stability and bounds the decay rate of a significant
class of nonlinear positive systems which includes positive linear systems as a
special case. Specifically, we present a necessary and sufficient condition for
delay-independent stability of continuous-time positive systems whose vector
fields are cooperative and homogeneous. We show that global asymptotic
stability of such systems is independent of the magnitude and variation of the
time delays. For various classes of time delays, we are able to derive explicit
expressions that quantify the decay rates of positive systems. We also provide
the corresponding counterparts for discrete-time positive systems whose vector
fields are non-decreasing and homogeneous.

Background: With the ever increasing use of computational models in the
biosciences, the need to share models and reproduce the results of published
studies efficiently and easily is becoming more important. To this end, various
standards have been proposed that can be used to describe models, simulations,
data or other essential information in a consistent fashion. These constitute
various separate components required to reproduce a given published scientific study.

Results: We describe the Open Modeling EXchange format (OMEX). Together with
the use of other standard formats from the Computational Modeling in Biology
Network (COMBINE), OMEX is the basis of the COMBINE Archive, a single file that
supports the exchange of all the information necessary for a modeling and
simulation experiment in biology. An OMEX file is a ZIP container that includes
a manifest file, listing the content of the archive, an optional metadata file
adding information about the archive and its content, and the files describing
the model. The content of a COMBINE Archive consists of files encoded in
COMBINE standards whenever possible, but may include additional files defined
by an Internet Media Type. Several tools that support the COMBINE Archive are
available, either as independent libraries or embedded in modeling software.

Conclusions: The COMBINE Archive facilitates the reproduction of modeling and
simulation experiments in biology by embedding all the relevant information in
one file. Having all the information stored and exchanged at once also helps in
building activity logs and audit trails. We anticipate that the COMBINE Archive
will become a significant help for modellers, as the domain moves to larger,
more complex experiments such as multi-scale models of organs, digital
organisms, and bioengineering.
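Since an OMEX file is just a ZIP container with a `manifest.xml`, a minimal archive can be assembled with the standard library alone. A sketch (the manifest entries are hand-written here and the model content is a placeholder; real tooling such as the COMBINE libraries handles formats and metadata properly):

```python
# Build a minimal OMEX-style COMBINE Archive in memory.
import io
import zipfile

manifest = """<?xml version="1.0" encoding="UTF-8"?>
<omexManifest xmlns="http://identifiers.org/combine.specifications/omex-manifest">
  <content location="." format="http://identifiers.org/combine.specifications/omex"/>
  <content location="./model.xml" format="http://identifiers.org/combine.specifications/sbml"/>
</omexManifest>
"""

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("manifest.xml", manifest)   # required: lists the archive's content
    z.writestr("model.xml", "<sbml/>")     # placeholder standing in for a real model

with zipfile.ZipFile(buf) as z:            # reopen to inspect the archive
    names = sorted(z.namelist())
print(names)  # → ['manifest.xml', 'model.xml']
```

An optional metadata file and simulation descriptions (e.g., SED-ML) would be added as further `content` entries in the same manifest.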

Many fields of science and engineering, ranging from predicting protein
structures to building machine translation systems, require large amounts of
labeled data. These labeling tasks have traditionally been performed by
experts; the limited pool of experts would limit the size of the datasets, and
make the process slow and expensive. In recent years, there has been rapidly
increasing interest in using crowds of semi-skilled workers recruited through
the Internet. While this 'crowdsourcing' can cheaply produce large amounts of
labeled data in short times, it is typically plagued by the problem of low
quality. To address this fundamental challenge in crowdsourcing, we design a
novel reward mechanism for acquiring high-quality data, which incentivizes
workers to censor their own low-quality data. Our main results are the
mathematical proofs showing that surprisingly, under a natural and desirable
'no-free-lunch' requirement, this is the one and only mechanism that is
incentive-compatible. The simplicity of the mechanism is an additional
attractive property. In preliminary experiments involving over 900
worker-tasks, we observe up to a three-fold drop in the error rates under this
unique incentive mechanism.

To learn object models for robotic manipulation, unsupervised methods cannot
provide accurate object structural information, while supervised methods
require a large amount of manually labeled training samples; interactive
object segmentation has therefore been developed to automate object modeling.
In this article, we formulate a novel dynamic process for interactive object
segmentation and develop a solution based on a particle filter and active
learning, so that a robot can manipulate and learn object structures
incrementally and automatically. We demonstrate our method with a humanoid
robot on different types of objects, and compare its segmentation performance
with established methods on selected objects. The results show that our
approach allows more accurate object modeling and reveals richer object
structural information.

In a robot-centered smart home, the robot observes the home states with its
own sensors, and then it can change certain object states according to an
operator's commands for remote operations, or imitate the operator's behaviors
in the house for autonomous operations. To model the robot's imitation of the
operator's behaviors in a dynamic indoor environment, we use multi-relational
chains to describe the changes of environment states, and apply inverse
reinforcement learning to encoding the operator's behaviors with a learned
reward function. We implement this approach on a mobile robot and conduct five
experiments with increasing numbers of training days, objects, and action
types. In addition, a baseline method that directly records the operator's
behaviors is implemented, and a comparison is made on the accuracy of
home-state evaluation and of robot action selection. The results show that the
proposed approach handles dynamic environments well and guides the robot's
actions in the house more accurately.

We introduce a new algorithm, called the adaptive sparse backfitting
algorithm, for solving the high-dimensional Sparse Additive Model (SpAM)
using symmetric, non-negative definite smoothers. Unlike the previous sparse
backfitting algorithm, our method is essentially a block coordinate descent
algorithm that is guaranteed to converge to the optimal solution. It bridges
the gap between the population backfitting algorithm and its data version. We
also prove variable-selection consistency under suitable conditions. Numerical
studies on both synthetic and real data show that the adaptive sparse
backfitting algorithm outperforms the previous sparse backfitting algorithm in
fitting and predicting high-dimensional nonparametric models.

This letter introduces the concept of antifragile electronic warfare (EW),
which we define as the ability to allow a communications link to improve
performance due to the presence of a jammer. This concept should not be
confused with jamming countermeasures (a.k.a. anti-jamming or electronic
protection). Rather, antifragile EW can be thought of as the next step beyond
simply avoiding or mitigating jamming. After introducing the concept we narrow
down the subset of jammers this concept can be applied to, and provide a brief
example of an antifragile EW strategy.

Telehealth offers interesting avenues for improving healthcare access in
vulnerable populations through use of electronic devices in the patient's home
that monitor and assess for early complications. However, complicated
operation and poor reliability hinder the wide acceptance of telehealth
services. We propose ensemble sensing on everyday wearable devices, which does
not impose the burden of carrying wearable sensors, yet offers a seamless and
simple platform to deliver telehealth services.

This article discusses a more general and numerically stable Rybicki Press
algorithm, which enables inverting and computing determinants of covariance
matrices whose elements are sums of exponentials. The algorithm is exact in
exact arithmetic and relies on introducing new variables and corresponding
equations, thereby converting the matrix into a banded matrix of larger size.
Linear-complexity banded algorithms for solving linear systems and computing
determinants of the larger matrix then yield linear-complexity algorithms for
the initial semi-separable matrix as well. The benchmarks provided illustrate
the linear scaling of the algorithm.
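The structure being exploited can be seen in miniature: for a single exponential, the covariance $K_{ij} = e^{-b|t_i - t_j|}$ has an exactly tridiagonal inverse, which is what makes linear-time solves and determinants possible. A sketch verifying this (the closed-form precision entries below are the standard Ornstein-Uhlenbeck/AR(1) formulas, an assumption of this illustration, not taken from the article):

```python
# Exponential covariance on random times, and its tridiagonal inverse.
import math, random

random.seed(6)
b, n = 1.3, 6
t = sorted(random.uniform(0, 5) for _ in range(n))
K = [[math.exp(-b * abs(ti - tj)) for tj in t] for ti in t]

# closed-form tridiagonal precision matrix of the OU / AR(1) process
rho = [math.exp(-b * (t[i + 1] - t[i])) for i in range(n - 1)]
P = [[0.0] * n for _ in range(n)]
for i in range(n):
    left = rho[i - 1] ** 2 / (1 - rho[i - 1] ** 2) if i > 0 else 0.0
    right = rho[i] ** 2 / (1 - rho[i] ** 2) if i < n - 1 else 0.0
    P[i][i] = 1.0 + left + right
    if i < n - 1:
        P[i][i + 1] = P[i + 1][i] = -rho[i] / (1 - rho[i] ** 2)

KP = [[sum(K[i][k] * P[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
print(all(abs(KP[i][j] - (1.0 if i == j else 0.0)) < 1e-6
          for i in range(n) for j in range(n)))
```

For sums of several exponentials no single tridiagonal inverse exists, which is where the article's extended banded embedding takes over.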

In cognitive radio (CR) technology, the trend of sensing is no longer to only
detect the presence of active primary users. A large number of applications
demand for the relationship of user behaviour in spatial, temporal, and
frequency domains. To satisfy such requirements, we study the statistical
relationship of primary users by introducing a Bayesian network (BN) based
structure learning framework in this paper. How to learn the BN structure is a
long standing issue, not fully understood even in the machining learning field.
Besides, another key problem in the learning scenario is that the CR has to
identify how many variables in the BN, which is usually considered as prior
knowledge in machine learning applications. To solve such two issues
simultaneously, this paper proposes a BN structure learning scheme consisting
of an efficient structure learning algorithm and a blind variable
identification algorithm. The proposed scheme bears significantly lower
computational complexity compared with previous ones, and is capable of
determining the structure without assuming any prior knowledge about variables.
With this scheme, cognitive users could understand the statistical pattern of
primary networks efficiently, such that more efficient cognitive protocols
could be designed across different network layers. 2014/10/03 - 03:57
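
The paper's efficient learning and blind variable identification algorithms are not reproduced here; as a minimal illustration of score-based BN structure learning, the sketch below compares two candidate structures on synthetic binary data by BIC. The ground-truth dependency and sample size are illustrative assumptions.

```python
import math
import random
from collections import Counter

rng = random.Random(0)

# Synthetic binary data from the ground-truth structure A -> B:
# B copies A with probability 0.9.
data = []
for _ in range(500):
    a = rng.random() < 0.5
    b = a if rng.random() < 0.9 else not a
    data.append({"A": a, "B": b})

def family_loglik(data, var, parents):
    # Maximum-likelihood log-probability of `var` given its parent set.
    counts = Counter((tuple(r[p] for p in parents), r[var]) for r in data)
    totals = Counter(tuple(r[p] for p in parents) for r in data)
    return sum(c * math.log(c / totals[ctx]) for (ctx, _), c in counts.items())

def bic(data, structure):
    # structure: {var: parent list}. BIC = loglik - 0.5 * #params * log(n),
    # with 2^p free parameters for a binary variable with p binary parents.
    n = len(data)
    return sum(family_loglik(data, v, ps) - 0.5 * (2 ** len(ps)) * math.log(n)
               for v, ps in structure.items())

candidates = {
    "independent": {"A": [], "B": []},
    "A->B": {"A": [], "B": ["A"]},
}
best = max(candidates, key=lambda s: bic(data, candidates[s]))
```

On this data the dependent structure wins the score comparison, since the likelihood gain far outweighs the one extra parameter's penalty.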

We explore the idea that authoring a piece of text is an act of maximizing
one's expected utility. To make this idea concrete, we consider the societally
important decisions of the Supreme Court of the United States. Extensive past
work in quantitative political science provides a framework for empirically
modeling the decisions of justices and how they relate to text. We incorporate
into such a model texts authored by amici curiae ("friends of the court"
separate from the litigants) who seek to weigh in on the decision, then
explicitly model their goals in a random utility model. We demonstrate the
benefits of this approach in improved vote prediction and the ability to
perform counterfactual analysis. 2014/10/03 - 03:57

We determine several key structural features of random intersection graphs, a
natural model for many real world networks where connections are given by
shared attributes. Notably, this model is mathematically tractable yet flexible
enough to generate random graphs with tunable clustering. Specifically, we
prove that in the homogeneous case, the model is logarithmically hyperbolic.
Further, we fully characterize the degeneracy and the expansion of homogeneous
random intersection graphs. Some of these results apply to inhomogeneous random
intersection graphs (under reasonable probability distributions). Finally, we
comment on the algorithmic implications of these results. 2014/10/03 - 03:57
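
The model itself is simple to state; the following sketch generates a homogeneous random intersection graph (parameter values are arbitrary illustrative choices).

```python
import random

# Homogeneous random intersection graph G(n, m, p): each of n vertices
# independently picks each of m attributes with probability p, and two
# vertices are adjacent iff they share at least one attribute.
def random_intersection_graph(n, m, p, seed=0):
    rng = random.Random(seed)
    attrs = [{a for a in range(m) if rng.random() < p} for _ in range(n)]
    edges = {(u, v) for u in range(n) for v in range(u + 1, n)
             if attrs[u] & attrs[v]}
    return attrs, edges

attrs, edges = random_intersection_graph(n=30, m=10, p=0.2)
```

Each attribute induces a clique among the vertices holding it, which is the source of the model's tunable clustering.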

Kernel-based approaches for sequence classification have been successfully
applied to a variety of domains, including text categorization, image
classification, speech analysis, biological sequence analysis, time series and
music classification, where they yield some of the most accurate results.

Typical kernel functions for sequences in these domains (e.g., bag-of-words,
mismatch, or subsequence kernels) are restricted to {\em discrete univariate}
(i.e. one-dimensional) string data, such as sequences of words in text
analysis, codeword sequences in image analysis, or nucleotide or amino acid
sequences in DNA and protein sequence analysis. However, the original sequence
data are often real-valued and multivariate in nature, i.e. neither univariate
nor discrete as required by typical $k$-mer based sequence kernel functions.

In this work, we consider the problem of {\em multivariate} sequence
classification, such as classification of multivariate music sequences or
multidimensional protein sequence representations. To this end, we extend the
{\em univariate} kernel functions typically used in sequence analysis and
propose an efficient {\em multivariate} similarity kernel method (MVDFQ-SK)
based on (1) a direct feature quantization (DFQ) of each sequence dimension in
the original {\em real-valued} multivariate sequences and (2) applying novel
multivariate discrete kernel measures to these multivariate discrete DFQ
sequence representations to more accurately capture similarity relationships
among sequences and improve classification performance.

Experiments with the proposed MVDFQ-SK kernel method show excellent
classification performance on three challenging music classification tasks as
well as on protein sequence classification, with significant improvements of
25-40% over univariate kernel methods and existing state-of-the-art sequence
classification methods. 2014/10/03 - 03:57
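
The exact MVDFQ-SK construction is not reproduced here; as an illustrative sketch of its two ingredients, the code below quantizes each real-valued dimension into discrete symbols and applies a standard k-mer (spectrum) kernel per dimension, summing across dimensions. The bin boundaries, k, and sequence sizes are assumed choices.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def quantize(x, n_bins=4):
    # Direct feature quantization of one real-valued dimension: map each
    # value to a discrete bin index via its empirical quantiles.
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(x, edges).tolist()

def spectrum_kernel(s, t, k=2):
    # Standard univariate k-mer (spectrum) kernel on discrete sequences.
    cs = Counter(tuple(s[i:i + k]) for i in range(len(s) - k + 1))
    ct = Counter(tuple(t[i:i + k]) for i in range(len(t) - k + 1))
    return sum(cs[kmer] * ct[kmer] for kmer in cs)

def multivariate_kernel(X, Y, k=2, n_bins=4):
    # Quantize each dimension, then sum per-dimension spectrum kernels --
    # a simple stand-in for the paper's multivariate discrete measures.
    return sum(spectrum_kernel(quantize(X[:, d], n_bins),
                               quantize(Y[:, d], n_bins), k)
               for d in range(X.shape[1]))

# Two 3-dimensional real-valued sequences of length 60.
X = np.cumsum(rng.standard_normal((60, 3)), axis=0)
Y = np.cumsum(rng.standard_normal((60, 3)), axis=0)
```

The kernel is symmetric by construction, since it counts shared k-mers in the quantized representations.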

We introduce a nominal actor-based language and study its expressive power.
We have identified the presence/absence of fields as a crucial feature: the
dynamic creation of names in combination with fields gives rise to Turing
completeness. On the other hand, restricting to stateless actors gives rise to
systems for which properties such as termination are decidable. This
decidability result still holds for actors with states when the number of
actors is bounded and the state is read-only. 2014/10/03 - 03:57

Probabilistic approaches for tensor factorization aim to extract meaningful
structure from incomplete data by postulating low rank constraints. Recently,
variational Bayesian (VB) inference techniques have successfully been applied
to large scale models. This paper presents full Bayesian inference via VB on
both single and coupled tensor factorization models. Our method can be run even
for very large models and is easily implemented. It exhibits better prediction
performance than existing approaches based on maximum likelihood on several
real-world datasets for the missing link prediction problem. 2014/10/01 - 02:31
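
The paper's variational Bayes machinery is not reproduced here; as a point-estimate stand-in for the same setting, the sketch below runs alternating least squares on a low-rank matrix with missing entries, the basic scenario behind missing link prediction. All sizes, the rank, and the observation rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-2 ground-truth matrix with ~70% of entries observed.
n, m, r = 20, 15, 2
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))
mask = rng.random((n, m)) < 0.7

U = rng.standard_normal((n, r))
V = rng.standard_normal((m, r))
for _ in range(50):
    for i in range(n):                         # refit each row factor
        Vo = V[mask[i]]
        U[i] = np.linalg.solve(Vo.T @ Vo + 1e-9 * np.eye(r), Vo.T @ X[i, mask[i]])
    for j in range(m):                         # refit each column factor
        Uo = U[mask[:, j]]
        V[j] = np.linalg.solve(Uo.T @ Uo + 1e-9 * np.eye(r), Uo.T @ X[mask[:, j], j])

fit = np.sqrt(np.mean((U @ V.T - X)[mask] ** 2))   # observed-entry RMSE
```

A VB treatment would replace the point estimates U, V with posterior distributions, which is what gives the paper's approach its robustness over maximum likelihood.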

We study diffusion and consensus based optimization of a sum of unknown
convex objective functions over distributed networks. The only access to these
functions is via stochastic gradient oracles, each of which is only available
at a different node, and a limited number of gradient oracle calls is allowed
at each node. In this framework, we introduce an optimization algorithm for
strongly convex objectives based on stochastic gradient descent (SGD) updates.
In particular, we use a certain time-dependent weighted averaging of the SGD iterates, which
yields an optimal convergence rate of $O(\frac{N\sqrt{N}}{T})$ after $T$
gradient updates for each node on a network of $N$ nodes. We then show that
after $T$ gradient oracle calls, the SGD iterates at each node achieve a mean
square deviation (MSD) of $O(\frac{\sqrt{N}}{T})$. We provide the explicit
description of the algorithm and illustrate the merits of the proposed
algorithm with respect to the state-of-the-art methods in the literature. 2014/10/01 - 02:31
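
The distributed algorithm itself is not reproduced here; the sketch below shows the key single-node ingredient, SGD with a time-dependent weighted average of the iterates, on a strongly convex quadratic. The objective, the noise level, and the weights w_t = t are assumed illustrative choices, not necessarily the paper's exact weighting.

```python
import numpy as np

rng = np.random.default_rng(0)

# f(x) = 0.5 * ||x - x_star||^2 is strongly convex (mu = 1), observed
# only through a noisy stochastic gradient oracle.
x_star = np.array([1.0, -2.0])

def grad_oracle(x):
    return (x - x_star) + 0.1 * rng.standard_normal(2)

x = np.zeros(2)
avg = np.zeros(2)
wsum = 0.0
T = 5000
for t in range(1, T + 1):
    x = x - (1.0 / t) * grad_oracle(x)   # step size 1/(mu*t)
    wsum += t
    avg = avg + (t / wsum) * (x - avg)   # running time-weighted average
```

Weighting later iterates more heavily suppresses the influence of the noisy early steps, which is what yields the improved convergence rate for the averaged iterate.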

Extending and modifying one's domain of life through artifact production is one
of the main characteristics of humankind. From the first hominids, who used a
wooden stick or a stone to extend their upper limbs and augment their strength,
to today's systems engineers, who use technologies to augment human cognition,
perception and action, extending the capabilities of the human body remains a
major issue. For more than fifty years, cybernetics, computer science and
cognitive science have imposed a single reductionist model of human-machine
systems: cognitive systems. Inspired by philosophy, behaviorist psychology and
the information-processing metaphor, the cognitive-system paradigm imposes a
functional view and a functional analysis on the human-systems design process.
Under that design approach, humans have been reduced to their metaphysical and
functional properties in a new dualism, and the requirements of the human body
have been left to physical ergonomics or "physiology". With multidisciplinary
convergence, the issues of "human-machine" systems and "human artifacts" are
evolving. The loss of biological and social boundaries between human organisms
and interactive, informational physical artifacts calls into question current
engineering methods and the ergonomic design of cognitive systems. New
developments of human-machine systems for intensive care, human space
activities or bio-engineering systems require grounding human-systems design in
a renewed epistemological framework for future models of human systems and
evidence-based "bio-engineering". In that context, reclaiming human factors,
the augmented human and the nature of human-machine systems is a necessity. 2014/10/01 - 02:31

In this work, we address the problem of spelling correction in the Arabic
language using the new corpus provided by the QALB (Qatar Arabic Language
Bank) project, an annotated corpus of sentences with errors and their
corrections. The corpus contains edit, add-before, split, merge, add-after,
move and other error types. We are concerned with the first four error types,
as they contribute more than 90% of the spelling errors in the corpus. The
proposed system uses a dedicated model for each error type and then integrates
all the models into an efficient and robust system that achieves an overall
recall of 0.59, precision of 0.58 and F1 score of 0.58 across all error types
on the development set. Our system participated
in the QALB 2014 shared task "Automatic Arabic Error Correction" and achieved
an F1 score of 0.6, earning the sixth place out of nine participants. 2014/10/01 - 02:31
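
The system's models are not specified in the abstract; as a generic illustration of correcting the dominant "edit" error type, the sketch below is a minimal noisy-channel corrector that generates all candidates within edit distance one and ranks them by corpus frequency. The tiny English word list is a stand-in; the actual system is built on the Arabic QALB corpus.

```python
from collections import Counter

# Frequency table from a toy corpus (stand-in vocabulary).
freq = Counter("the cat sat on the mat the cat ran".split())
letters = "abcdefghijklmnopqrstuvwxyz"

def edits1(w):
    # All strings within one delete, insert, replace, or adjacent swap.
    splits = [(w[:i], w[i:]) for i in range(len(w) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    inserts = [a + c + b for a, b in splits for c in letters]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    swaps = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    return set(deletes + inserts + replaces + swaps)

def correct(w):
    if w in freq:                        # known words stay unchanged
        return w
    candidates = [c for c in edits1(w) if c in freq]
    return max(candidates, key=freq.__getitem__) if candidates else w
```

Words with no in-vocabulary candidate are passed through unchanged, mirroring a high-precision fallback.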

Sequences of unit vectors for which the Kaczmarz algorithm always converges
in Hilbert space can be characterized in frame theory as tight frames with
frame constant 1. We generalize this result to the context of frames and bases. In
particular, we show that the only effective sequences which are Riesz bases are
orthonormal bases. Moreover, we consider the infinite system of linear
algebraic equations $A x = b$ and characterize the (bounded) matrices $A$ for
which the Kaczmarz algorithm always converges to a solution. 2014/10/01 - 02:31
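
A finite-dimensional sketch of the Kaczmarz iteration in question (the abstract's setting is infinite systems in Hilbert space); the matrix and right-hand side are illustrative values.

```python
import numpy as np

def kaczmarz(A, b, sweeps=200):
    # Cyclic Kaczmarz: project the current iterate onto the hyperplane
    # <a_i, x> = b_i for each row a_i in turn.
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x = x + (b[i] - a @ x) / (a @ a) * a
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
x_true = np.array([1.0, -1.0])
x = kaczmarz(A, A @ x_true)      # consistent system: iterates converge
```

For a consistent finite system the alternating projections converge linearly; the abstract's question is which bounded infinite matrices retain this convergence.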

In this experimental study we consider Steiner tree approximations that
guarantee a constant approximation ratio of less than $2$. The considered
greedy algorithms and approaches based on linear programming involve the
incorporation of $k$-restricted full components for some $k \geq 3$. For most
of the algorithms, their strongest theoretical approximation bounds are only
achieved for $k \to \infty$. However, the running time is also exponentially
dependent on $k$, so only small $k$ are tractable in practice.

We investigate different implementation aspects and parameter choices that
finally allow us to construct algorithms (somewhat) feasible for practical use.
We compare the algorithms against each other, to an exact LP-based algorithm,
and to fast and simple $2$-approximations. 2014/10/01 - 02:31
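
One of the "fast and simple $2$-approximations" used as a baseline can be sketched as follows: the classic shortest-path heuristic that takes the MST of the terminals' metric closure and expands each closure edge back into a shortest path (this corresponds to using only $2$-restricted components). The example graph is an illustrative star.

```python
import heapq
from itertools import combinations

def dijkstra(graph, src):
    # graph: {u: {v: weight}}; returns shortest-path distances and predecessors.
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return dist, prev

def steiner_2approx(graph, terminals):
    # Kruskal's MST on the terminals' metric closure, with each chosen
    # closure edge expanded into a shortest path of the original graph.
    sp = {t: dijkstra(graph, t) for t in terminals}
    closure = sorted((sp[a][0][b], a, b) for a, b in combinations(terminals, 2))
    parent = {t: t for t in terminals}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = set()
    for _, a, b in closure:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            v = b
            while v != a:             # walk predecessors back to a
                tree.add(frozenset((sp[a][1][v], v)))
                v = sp[a][1][v]
    return tree

# Star graph: the optimal Steiner tree for terminals a, b, d uses the
# non-terminal center c and has total weight 3.
graph = {"c": {"a": 1, "b": 1, "d": 1},
         "a": {"c": 1}, "b": {"c": 1}, "d": {"c": 1}}
tree = steiner_2approx(graph, ["a", "b", "d"])
```

On this instance the expanded paths overlap at the center, so the heuristic happens to recover the optimum; in general it only guarantees a factor below $2$.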

In-memory read-only caches are widely used in cloud infrastructure to reduce
access latency and to reduce load on backend databases. Operators view coherent
caches as impractical at genuinely large scale, and many client-facing caches
are updated asynchronously with best-effort pipelines.

Existing incoherent cache technologies do not support transactional data
access, even if the backend database supports transactions. We propose T-Cache,
a cache that supports read-only transactions despite asynchronous and
unreliable communication with the database. We also define
cache-serializability, a variant of serializability that is suitable for
incoherent caches, and prove that with unbounded resources T-Cache implements
it. With limited resources, T-Cache allows the system manager to choose a
trade-off between performance and consistency.

Our evaluation shows that T-Cache detects many inconsistencies with only
nominal overhead. We use synthetic workloads to demonstrate the efficacy of
T-Cache when data accesses are clustered and its adaptive reaction to workload
changes. With workloads based on real-world topologies, T-Cache detects
43-70% of the inconsistencies and increases the rate of consistent transactions
by 33-58%. 2014/10/01 - 02:31
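
The T-Cache protocol itself is not reproduced here; as a toy illustration of the kind of inconsistency it detects, suppose each cached entry carries the id of the transaction that wrote it together with that transaction's write set. A read set then cannot belong to one consistent snapshot if some transaction's effect is visible on one key while a stale value is still cached for another key it also wrote.

```python
# Toy read-set consistency check (illustrative, not T-Cache's mechanism).
def inconsistent(read_set):
    # read_set: {key: (txn_id, write_set_of_that_txn)}
    for key, (txn, wset) in read_set.items():
        for other, (otxn, _) in read_set.items():
            if other in wset and otxn < txn:
                return True   # txn also wrote `other`, but an older value shows
    return False

# Transaction 7 atomically wrote x and y, yet the cache still serves the
# value y received from transaction 3: the two reads are incompatible.
flag = inconsistent({"x": (7, {"x", "y"}), "y": (3, {"y"})})
```

A real system must bound the metadata kept per entry, which is where the paper's performance/consistency trade-off under limited resources comes in.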

Through simultaneous energy and information transfer, radio frequency (RF)
energy harvesting (EH) reduces the energy consumption of wireless networks.
It also provides a new approach for wireless devices to share each other's
energy storage, without relying on the power grid or traffic offloading. In
this paper, we study RF energy harvesting enabled power balancing within the
decode-and-forward (DF) relaying-enhanced cooperative wireless system. An
optimal power allocation policy is proposed for the scenario where both source
and relay nodes can draw power from the radio frequency signals transmitted by
each other. To maximize the overall throughput while meeting the energy
constraints imposed by the RF sources, an optimization problem is formulated
and solved. For different harvesting efficiencies and channel conditions,
closed-form solutions for the optimal joint source and relay power allocation
are derived. 2014/10/01 - 02:31
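
The paper's exact system model and closed forms are not reproduced here; as a toy power-splitting version of the allocation problem, let the source split its budget between the information signal and an energy signal, let the DF relay spend only what it harvests, and take the end-to-end rate as the minimum of the two hop rates. All gains and the harvesting efficiency are illustrative assumptions.

```python
import math

eta, h1, h2, P_max = 0.5, 1.0, 2.0, 2.0   # efficiency, hop gains, power budget

def rate(p_info):
    p_relay = eta * (P_max - p_info)          # relay runs on harvested energy
    hop1 = 0.5 * math.log2(1 + h1 * p_info)   # source -> relay
    hop2 = 0.5 * math.log2(1 + h2 * p_relay)  # relay -> destination
    return min(hop1, hop2)

# Grid search; the optimum balances the hops: h1*p = eta*h2*(P_max - p).
best = max((p / 1000 * P_max for p in range(1001)), key=rate)
```

With these values the balance condition gives p = 1.0, where both hop rates equal 0.5 bit/s/Hz; the paper derives such balance points in closed form for its richer model.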

Recent developments in linear system identification have proposed the use of
non-parametric methods, relying on regularization strategies, to handle the
so-called bias/variance trade-off. This paper introduces an impulse response
estimator which relies on an $\ell_2$-type regularization including a rank
penalty derived using the log-det heuristic as a smooth approximation to the
rank function. This makes it possible to account for different properties of
the estimated impulse response (e.g. smoothness and stability) while also
penalizing high-complexity models, and to account for and enforce coupling
between different input-output channels in MIMO systems. Following the
Bayesian paradigm, the parameters defining the relative weight of the two
regularization terms, as well as the structure of the rank penalty, are
estimated by optimizing the marginal likelihood. Once these hyperparameters
have been estimated, the impulse response estimate is available in closed
form. Experiments show that the proposed method is superior to the estimator
relying on the "classic" $\ell_2$-regularization alone, as well as to those
based on atomic and nuclear norms. 2014/10/01 - 02:31
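
The closed-form step for the $\ell_2$ part alone can be sketched directly (the log-det rank penalty and the marginal-likelihood tuning of the weights are not reproduced); the sizes, true response, and regularization weight below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# FIR setup y = Phi @ g + noise: Phi collects lagged inputs, g is the
# impulse response to estimate (here a smooth, stable exponential decay).
n, m = 200, 20
u = rng.standard_normal(n)
g_true = 0.8 ** np.arange(m)
Phi = np.column_stack([np.concatenate([np.zeros(k), u[:n - k]]) for k in range(m)])
y = Phi @ g_true + 0.05 * rng.standard_normal(n)

# With only the l2 term the regularized estimate is available in closed
# form; the paper adds the rank penalty on top of this baseline.
lam = 1.0
g_hat = np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ y)
```

In the Bayesian reading, this is the posterior mean under a Gaussian prior on g, which is why the estimate stays in closed form once the hyperparameters are fixed.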

We consider continuous-time consensus systems whose interactions satisfy a
form of reciprocity that is not instantaneous but takes place over time. We
show that these systems have certain desirable properties: they always
converge, independently of the specific interactions taking place, and there
exist simple conditions on the interactions under which two agents converge to
the same value. This was until now only known for systems with instantaneous
reciprocity. These results are of particular relevance when analyzing systems
where the interactions are a priori unknown, being for example endogenously
determined or random. We apply our results to an instance of such systems. 2014/10/01 - 02:31
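
The setting can be illustrated with an Euler simulation of dx/dt = -L(t)x where the directed interactions are reciprocal only over time: each link used in one phase is answered by the reverse link in the next phase. The graph, weights, and switching period are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplacian(edges, n):
    L = np.zeros((n, n))
    for i, j, w in edges:        # agent i listens to agent j with weight w
        L[i, i] += w
        L[i, j] -= w
    return L

n = 4
x = rng.standard_normal(n)
# Directed chain in phase A, the reverse directions in phase B.
phase_a = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)]
phase_b = [(1, 0, 1.0), (2, 1, 1.0), (3, 2, 1.0)]

dt = 0.01
for step in range(20000):        # 200 time units, switching every 1 unit
    edges = phase_a if (step // 100) % 2 == 0 else phase_b
    x = x - dt * laplacian(edges, n) @ x

spread = x.max() - x.min()       # shrinks toward zero: agents agree
```

At no instant is the interaction graph symmetric, yet the cumulative reciprocity over each period is enough for all agents to converge to a common value.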