Computer Science abstracts

We consider wireless relay networks where a source node communicates to a
destination node with the help of multiple intermediate relay nodes. In
wireless networks, if a node can send information to another node, it can
typically also receive information from that node. Therefore, there are
inherently many opportunities for feeding back information in such networks. However,
transmissions are not isolated and are usually subject to broadcast and superposition.

In this paper, we ask the following question: Can the information transfer in
both directions of a link be critical to maximizing the end-to-end
communication rate in such networks? Equivalently, could one of the directions
in each bidirected link (and more generally at least one of the links forming a
cycle) be shut down and the capacity of the network still be approximately
maintained? Our main result is to show that in any arbitrary Gaussian relay
network with bidirected edges and cycles, we can always identify a directed
acyclic subnetwork that approximately maintains the capacity of the original
network. The edges of this subnetwork can be identified as the information
carrying links, and the remaining links as feedback, which can provide only a
limited contribution to capacity. 2013/04/30 - 13:38

The trends in the design and development of information systems have undergone
a variety of phases and stages. These variations have evolved due to rapid
changes in user requirements and business needs. To meet these requirements and
needs, a flexible and agile business solution was required that could keep up
with the latest business trends and styles. Another obstacle to the agility of
information systems has been the disparate treatment of two halves of the same
problem: business processes and information services. Since the emergence of
information technology, business processes and information systems have become
counterparts, yet these two business halves have been treated under entirely
different standards. There is a need to align the boundaries of these two
pillars, which equally share an information system's burdens and liabilities.
In the last decade, object orientation has evolved into one of the major
solutions for modern business needs; SOA is now the solution of choice for
moving business onto electronic platforms, and BPM is another modern business
solution that helps to systematize the optimization of business processes. This
paper discusses how object orientation can be used to incorporate or embed SOA
in BPM for improved information systems.

In software modelling, designers have to produce UML visual models with
software constraints. Similarly, in business modelling, designers have to model
business processes using business constraints (business rules). Constraints are
key components in the skeleton of business or software models. A designer has
to write constraints to semantically complement business models or UML models,
and finally to implement those constraints in business processes or source
code. Business constraints/rules can be written using SBVR (Semantics of
Business Vocabulary and Rules), while OCL (Object Constraint Language) is the
well-known medium for writing software constraints. SBVR and OCL are two
significant standards from the OMG. The two standards are principally
different, as SBVR is typically used in business domains while OCL is employed
to complement software models. However, we have identified a few similarities
between the standards that are interesting to study. In this paper, we perform
a comparative analysis of the two standards, as we are looking for a mechanism
for automatic transformation of SBVR into OCL. The major emphasis of the study
is to highlight the principal features of SBVR and OCL: their similarities,
their differences, and the key parameters on which the two standards can work
together.
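As a concrete illustration of the overlap studied here, an SBVR-style quantification can be mapped onto an OCL collection-size constraint. The sketch below is a toy illustration: the `quantifier_to_ocl` helper and its keyword table are assumptions covering only a tiny subset of either standard, not OMG's official mapping.

```python
def quantifier_to_ocl(role, quantifier, n):
    """Translate an SBVR-style quantification such as 'at least n' over a
    role into an OCL collection-size constraint.  The keyword table is an
    illustrative assumption, not OMG's official transformation rules."""
    ops = {"at least": ">=", "at most": "<=", "exactly": "="}
    return "self.{}->size() {} {}".format(role, ops[quantifier], n)

# SBVR: "It is obligatory that each customer has at least one account."
# A corresponding OCL invariant for a hypothetical Customer context:
invariant = "context Customer inv: " + quantifier_to_ocl("account", "at least", 1)
```

The structural similarity visible here (both standards constrain roles of a model element) is the kind of common ground on which an automatic transformation could operate.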

In recent years, studying the content of the World Wide Web has become a very
important yet rather difficult task. There is a need for a compression
technique that would allow a web graph representation to fit into memory while
maintaining random access times competitive with the time needed to access an
uncompressed web graph on a hard drive.

Techniques that accomplish this task are already available, but there is
still room for improvement, and this thesis attempts to prove it. It includes a
comparison of two methods from the state of the art in this field (BV and
k2partitioned) with two already implemented algorithms (rewritten, however, in
the C++ programming language to maximize speed and resource-management
efficiency), namely LM and 2D, and introduces a new variant of the latter,
called 2D stripes.

This thesis also serves as a proof of concept. The final considerations show
the positive and negative aspects of all presented methods, demonstrate the
feasibility of the new variant, and indicate future directions for development.
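Compact web-graph representations of this kind typically build on gap encoding of sorted adjacency lists followed by variable-length byte codes. The sketch below illustrates only that common ingredient; it is not the LM, 2D or BV codec itself, and the helper names are ours.

```python
def varint(n):
    """Encode a non-negative integer as a little-endian base-128 varint."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)

def encode_adjacency(neighbors):
    """Delta-encode a sorted adjacency list, then varint-pack the gaps."""
    if not neighbors:
        return b""
    gaps = [neighbors[0]] + [b - a for a, b in zip(neighbors, neighbors[1:])]
    return b"".join(varint(g) for g in gaps)

def decode_adjacency(data):
    """Inverse of encode_adjacency: unpack varints, then prefix-sum."""
    values, shift, cur = [], 0, 0
    for byte in data:
        cur |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:
            values.append(cur)
            cur, shift = 0, 0
    out, total = [], 0
    for g in values:
        total += g
        out.append(total)
    return out
```

Because neighbour IDs in web graphs are clustered, the gaps are small and most of them fit in a single byte, while decoding remains a fast sequential scan.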

Constant entropy rate (conditional entropies must remain constant as the
sequence length increases) and uniform information density (conditional
probabilities must remain constant as the sequence length increases) are two
information theoretic principles that are argued to underlie a wide range of
linguistic phenomena. Here we revise the predictions of these principles in the
light of Hilberg's law on the scaling of conditional entropy in language and
related laws. We show that constant entropy rate (CER) and two interpretations
for uniform information density (UID), full UID and strong UID, are
inconsistent with these laws. Strong UID implies CER but the reverse is not
true. Full UID, a particular case of UID, leads to costly uncorrelated
sequences that are totally unrealistic. We conclude that CER and its particular
cases are incomplete hypotheses about the scaling of conditional entropies.

A modern processor can dynamically set its speed while it is active, and can
make a transition to a sleep state when required. When the processor is
operating at a speed $s$, the energy consumed per unit time is given by a
convex power function $P(s)$ having the property that $P(0) > 0$ and
$P''(s) > 0$ for all values of $s$. Moreover, $C > 0$ units of energy are
required to make a transition from the sleep state to the active state. The
jobs are specified by their arrival times, deadlines and processing volumes.

We consider a scheduling problem, called speed scaling with sleep state,
where each job has to be scheduled between its arrival time and deadline, and
the goal is to minimize the total energy consumption required to process these
jobs. Albers et al. proved the NP-hardness of this problem by reducing an
instance of an NP-hard partition problem to an instance of this scheduling
problem. The instance of this scheduling problem consists of the arrival time,
the deadline and the processing volume for each of the jobs, in addition to $P$
and $C$. Since $P$ and $C$ depend on the instance of the partition problem,
this proof of the NP-hardness of the speed scaling with sleep state problem
does not remain valid when $P$ and $C$ are fixed. In this paper, we prove that
the speed scaling with sleep state problem remains NP-hard for any fixed
positive number $C$ and convex $P$ satisfying $P(0) > 0$ and $P''(s) > 0$ for
all values of $s$.
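The energy model above can be made concrete with a toy accounting of a fixed schedule's cost. This is only a sketch of the objective being minimized (with an illustrative power function $P(s) = s^2 + 1$ satisfying the stated conditions), not the paper's hardness reduction; choosing the segments optimally is exactly the NP-hard part.

```python
def schedule_energy(segments, P, C):
    """Energy of a schedule given as (speed, duration) segments, where
    speed 0 denotes the sleep state.  Waking up from sleep costs C; while
    active, running at speed s costs P(s) per unit time.  Note P(0) > 0,
    so active idling is not free, which is why sleeping can pay off
    despite the wake-up cost."""
    energy, asleep = 0.0, False   # the processor starts in the active state
    for speed, duration in segments:
        if speed == 0:
            asleep = True          # sleeping consumes no energy
        else:
            if asleep:
                energy += C        # sleep -> active transition
                asleep = False
            energy += P(speed) * duration
    return energy

# Illustrative convex power function with P(0) > 0 and P''(s) > 0:
P = lambda s: s * s + 1
```

For example, running at speed 2 for 3 time units, sleeping for 5, then running again costs $5\cdot3 + C + 5\cdot3$; with $C = 4$ this beats idling actively for those 5 units, which would cost $P(0)\cdot5 = 5$.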

In this paper, the block processing of a discrete-time (DT) improper-complex
second-order cyclostationary (SOCS) random process is considered. In
particular, it is of interest to find a pre-processing operation that enables
computationally efficient near-optimal post-processing. An invertible
linear-conjugate linear (LCL) operator named the DT FREquency Shift (FRESH)
properizer is first proposed. It is shown that the DT FRESH properizer converts
a DT improper-complex SOCS random process input to an equivalent DT
proper-complex SOCS random process output by utilizing only information about
the cycle period of the input. An invertible LCL block processing
operator named the asymptotic FRESH properizer is then proposed that mimics the
operation of the DT FRESH properizer but processes a finite number of
consecutive samples of a DT improper-complex SOCS random process. It is shown
that the output of the asymptotic FRESH properizer is not proper but
asymptotically proper and that its frequency-domain covariance matrix converges
to a highly-structured block matrix with diagonal blocks as the block size
tends to infinity. Two representative estimation and detection problems are
presented to demonstrate that asymptotically optimal low-complexity
post-processors can be easily designed by exploiting these asymptotic
second-order properties of the output of the asymptotic FRESH properizer.

We introduce a general algebraic setting for describing linear boundary
problems in a symbolic computation context, with emphasis on the case of
partial differential equations. The general setting is then applied to the
Cauchy problem for completely reducible partial differential equations with
constant coefficients. While we concentrate on the theoretical features in this
paper, the underlying operator ring is implemented and provides a sufficient
basis for all methods presented here.

We investigate implementations of biometric cryptosystems protecting
fingerprint templates (which are mostly based on the fuzzy vault scheme by
Juels and Sudan in 2002) with respect to the security they provide. We show
that attacks taking advantage of the system's false acceptance rate, i.e.
false-accept attacks, pose a very serious risk --- even if brute-force attacks
are impractical to perform. Our observations lead to the clear conclusion that
currently a single fingerprint is not sufficient to provide a secure biometric
cryptosystem. But there remain other problems that cannot be resolved by
merely switching to multi-finger: Kholmatov and Yanikoglu in 2007 demonstrated
that it is possible to break two matching vault records at quite a high rate
via the correlation attack.

We propose an implementation of a minutiae fuzzy vault that is inherently
resistant against cross-matching and the correlation attack. Surprisingly,
achieving cross-matching resistance does not come at the cost of
authentication performance. In particular, we propose to use a randomized
decoding procedure and find that it is possible to achieve a genuine
acceptance rate (GAR) of 91% at which no false accepts are observed on a
commonly used database. Our ideas can be adopted in an implementation of a
multibiometric cryptosystem. All experiments described in this paper can be
fully reproduced using software available for download.

We consider the problem of lossless compression of binary trees, with the aim
of reducing the number of code bits needed to store or transmit such trees. A
lossless grammar-based code is presented which encodes each binary tree into a
binary codeword in two steps. In the first step, the tree is transformed into a
context-free grammar from which the tree can be reconstructed. In the second
step, the context-free grammar is encoded into a binary codeword. The decoder
of the grammar-based code decodes the original tree from its codeword by
reversing the two encoding steps. It is shown that the resulting grammar-based
binary tree compression code is a universal code on a family of probabilistic
binary tree source models satisfying certain weak restrictions.
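The first encoding step can be sketched under the common interpretation that identical subtrees are merged into shared grammar rules (the DAG view of a straight-line tree grammar). The tuple representation and both helpers below are illustrative assumptions, not the paper's exact construction.

```python
def tree_to_grammar(tree):
    """Transform a binary tree (nested pairs; None = empty subtree) into
    a grammar by merging identical subtrees.  Each distinct subtree gets
    one nonterminal id; the tree is recoverable by expanding the root."""
    rules = {}  # subtree key (left id, right id) -> nonterminal id
    def visit(t):
        if t is None:
            return None
        key = (visit(t[0]), visit(t[1]))
        if key not in rules:
            rules[key] = len(rules)
        return rules[key]
    root = visit(tree)
    # grammar: nonterminal id -> (left id, right id)
    return root, {v: k for k, v in rules.items()}

def grammar_to_tree(root, grammar):
    """Decoder side: expand the grammar back into the original tree."""
    if root is None:
        return None
    left, right = grammar[root]
    return (grammar_to_tree(left, grammar), grammar_to_tree(right, grammar))
```

A tree with many repeated subtrees (as produced by regular sources) yields far fewer grammar rules than it has nodes, which is where the compression gain comes from; the remaining step is to serialize the rules into a binary codeword.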

In a process algebra with hiding and recursion it is possible to create
processes which compute internally without ever communicating with their
environment. Such processes are said to diverge or livelock. In this paper we
show how it is possible to conservatively classify processes as livelock-free
through a static analysis of their syntax. In particular, we present a
collection of rules, based on the inductive structure of terms, which guarantee
livelock-freedom of the denoted process. This gives rise to an algorithm which
conservatively flags processes that can potentially livelock. We illustrate our
approach by applying both BDD-based and SAT-based implementations of our
algorithm to a range of benchmarks, and show that our technique in general
substantially outperforms the model checker FDR whilst exhibiting a low rate of
inconclusive results.

In this paper we present a sampling framework for RNA structures of fixed
topological genus. We introduce a novel, linear time, uniform sampling
algorithm for RNA structures of fixed topological genus $g$, for arbitrary
$g>0$. Furthermore we develop a linear time sampling algorithm for RNA
structures of fixed topological genus $g$ that are weighted by a simplified,
loop-based energy functional. For this process the partition function of the
energy functional has to be computed once, which has $O(n^2)$ time complexity.

A new system for object detection in cluttered RGB-D images is presented. Our
main contribution is a new method called Bingham Procrustean Alignment (BPA) to
align models with the scene. BPA uses point correspondences between oriented
features to derive a probability distribution over possible model poses. The
orientation component of this distribution, conditioned on the position, is
shown to be a Bingham distribution. This result also applies to the classic
problem of least-squares alignment of point sets, when point features are
orientation-less, and gives a principled, probabilistic way to measure pose
uncertainty in the rigid alignment problem. Our detection system leverages BPA
to achieve more reliable object detections in clutter.

We introduce a homogeneous pair approximation to the Naming Game (NG) model
by deriving a six-dimensional ODE for the two-word Naming Game. Our ODE reveals
the change in the dynamical behavior of the Naming Game as a function of the
average degree $\langle k \rangle$ of an uncorrelated network. This result is
in good agreement with numerical results. We also analyze the extended NG
model that allows for the presence of committed nodes and show that there is a
shift of the tipping point for social consensus in sparse networks.

Stopping sets and stopping set distribution of a linear code play an
important role in the performance analysis of iterative decoding for this
linear code. Let $C$ be an $[n,k]$ linear code over the finite field $\mathbb{F}_q$ with parity-check
matrix $H$, where the rows of $H$ may be dependent. Let $[n]=\{1,2,...,n\}$
denote the set of column indices of $H$. A \emph{stopping set} $S$ of $C$ with
parity-check matrix $H$ is a subset of $[n]$ such that the restriction of $H$
to $S$ does not contain a row of weight 1. The \emph{stopping set distribution}
$\{T_{i}(H)\}_{i=0}^{n}$ enumerates the number of stopping sets with size $i$
of $C$ with parity-check matrix $H$. Denote by $H^{*}$ the parity-check matrix
consisting of all the non-zero codewords in the dual code $C^{\bot}$. In this
paper, we study stopping sets and stopping set distributions of some residue
algebraic geometry (AG) codes with parity-check matrix $H^*$. First, we give
two descriptions of stopping sets of residue AG codes. For the simplest AG
codes, i.e., the generalized Reed-Solomon codes, it is easy to determine all
the stopping sets. Then we consider AG codes from elliptic curves. We use the
group structure of rational points of elliptic curves to present a complete
characterization of stopping sets. Then the stopping sets, the stopping set
distribution and the stopping distance of the AG code from an elliptic curve
are reduced to the search, counting and decision versions of the subset sum
problem in the group of rational points of the elliptic curve, respectively.
Finally, for some special cases, we determine the stopping set distributions of
AG codes from elliptic curves.
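The definitions above translate directly into a brute-force check, sketched here for small codes. The point of the paper is precisely that, for elliptic-curve AG codes, this enumeration reduces to subset-sum questions; the sketch below only illustrates the definitions of a stopping set and of the distribution $\{T_i(H)\}$.

```python
from itertools import combinations

def is_stopping_set(H, S):
    """Check whether the column-index set S is a stopping set of the
    parity-check matrix H (a list of rows): the restriction of H to the
    columns in S must contain no row of weight exactly 1."""
    if not S:
        return True  # the empty set is trivially a stopping set
    for row in H:
        if sum(1 for j in S if row[j] != 0) == 1:
            return False
    return True

def stopping_set_distribution(H, n):
    """Enumerate T_i(H), the number of stopping sets of each size i,
    by exhaustive search over all 2^n column subsets (toy sizes only)."""
    T = [0] * (n + 1)
    for i in range(n + 1):
        for S in combinations(range(n), i):
            if is_stopping_set(H, S):
                T[i] += 1
    return T
```

The exponential cost of this enumeration is what makes structural characterizations, such as the group-theoretic one given here for elliptic curves, valuable.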

We give a simple deterministic $O(\log K / \log\log K)$ approximation
algorithm for the Min-Max Selecting Items problem, where $K$ is the number of
scenarios. While our main goal is simplicity, this result also improves over
the previous best approximation ratio of $O(\log K)$ due to Kasperski, Kurpisz,
and Zieli\'nski (Information Processing Letters (2013)). Despite using the
method of pessimistic estimators, the algorithm has a polynomial runtime also
in the RAM model of computation. We also show that the LP formulation for this
problem by Kasperski and Zieli\'nski (Annals of Operations Research (2009)),
which is the basis for the previous work and ours, has an integrality gap of at
least $\Omega(\log K / \log\log K)$.

The school choice problem concerns the design and implementation of matching
mechanisms that produce school assignments for students within a given public
school district. Previously considered criteria for evaluating proposed
mechanisms such as stability, strategyproofness and Pareto efficiency do not
always translate into desirable student assignments. In this note, we explore a
class of one-sided, cardinal utility maximizing matching mechanisms focused
exclusively on student preferences. We adapt a well-known combinatorial
optimization technique (the Hungarian algorithm) as the kernel of this class of
matching mechanisms. We find that, while such mechanisms can be adapted to meet
desirable criteria not met by any previously employed mechanism in the school
choice literature, they are not strategyproof. We discuss the practical
implications and limitations of our approach at the end of the article.
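The kernel of such a mechanism can be illustrated with a brute-force stand-in for the Hungarian algorithm, which computes the same optimum in $O(n^3)$ time. The utility matrix below is hypothetical, and the enumeration is for illustration only, feasible just for toy instances.

```python
from itertools import permutations

def max_utility_assignment(utility):
    """Brute-force cardinal-utility-maximizing assignment of n students
    to n school seats; utility[i][j] is student i's utility for school j.
    (The Hungarian algorithm finds the same optimum in O(n^3) time.)"""
    n = len(utility)
    best, best_perm = None, None
    for perm in permutations(range(n)):
        total = sum(utility[i][perm[i]] for i in range(n))
        if best is None or total > best:
            best, best_perm = total, perm
    return best, best_perm
```

Because the objective is the sum of reported cardinal utilities, a student may gain by exaggerating preferences, which is one intuition for why mechanisms in this class fail strategyproofness.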

Fuzzy systems may be considered knowledge-based systems that incorporate
human knowledge into their knowledge base through fuzzy rules and fuzzy
membership functions. The intent of this study is to present a fuzzy knowledge
integration framework using a Novel Evolutionary Strategy (NES), which can
simultaneously integrate multiple fuzzy rule sets and their membership function
sets. The proposed approach consists of two phases: fuzzy knowledge encoding
and fuzzy knowledge integration. Four application domains, namely hepatitis
diagnosis, sugarcane breeding prediction, Iris plant classification, and the
Tic-tac-toe endgame, were used to show the performance of the proposed
approach. Results show that the fuzzy knowledge base derived using our
approach performs better than a genetic-algorithm-based approach.

We propose a new scheme for increasing the throughput of video files in
cellular communications systems. This scheme exploits (i) the redundancy of
user requests as well as (ii) the considerable storage capacity of smartphones
and tablets. Users cache popular video files and, after receiving requests
from other users, serve these requests via device-to-device localized
transmissions. The file placement is optimal when a central controller knows a
priori the locations of wireless devices when file requests occur. However,
even a purely random caching scheme shows only a minor performance loss
compared to such a genie-aided scheme. We then analyze the optimal
collaboration distance, trading off frequency reuse with the probability of
finding a requested file within the collaboration distance. We show that an
improvement in spectral efficiency of one to two orders of magnitude is
possible, even when the redundancy in video requests is not very high.

In this paper, we study incentive mechanisms for retrieving information from
networked agents. Following the model in [Kleinberg and Raghavan 2005], the
agents are represented as nodes in an infinite tree, which is generated by a
random branching process. A query is issued by the root, and each node
possesses an answer with an independent probability $p=1/n$. Further, each node
in the tree acts strategically to maximize its own payoff. In order to
encourage the agents to participate in the information acquisition process, an
incentive mechanism is needed to reward agents who provide the information as
well as agents who help to facilitate such acquisition.

We focus on designing efficient sybil-proof incentive mechanisms, i.e.,
mechanisms that are robust to fake-identity attacks. We propose a family of
mechanisms, called direct referral (DR) mechanisms, which allocate most of the
reward to the information holder as well as its direct parent (the direct
referral). We show that, when designed properly, the direct referral mechanism
is sybil-proof and efficient. In particular, we show that we may achieve an
expected cost of $O(h^2)$ for propagating the query down $h$ levels for any
branching factor $b>1$. This result exponentially improves on previous work
when an answer must be found with high probability. When the underlying
network is a deterministic chain, our mechanism is optimal under some mild
assumptions. In addition, due to its simple reward structure, the DR mechanism
stands a good chance of being adopted in practice.

Low complexity joint estimation of synchronization impairments and channel in
a single-user MIMO-OFDM system is presented in this letter. Based on a system
model that takes into account the effects of synchronization impairments, such
as carrier frequency offset, sampling frequency offset and symbol timing
error, as well as the channel, a Maximum Likelihood (ML) algorithm for the
joint estimation is proposed. To reduce the complexity of the ML grid search,
the number of received signal samples used for estimation needs to be reduced.
Conventional channel estimation methods using Least Squares (LS) fail for the
resulting under-determined system, which leads to poor performance of the
joint estimator. The proposed ML algorithm uses a Compressed Sensing (CS)
based channel estimation method in a sparse fading scenario, where fewer
received samples are used for estimation than an LS based estimation requires.
The performance of the estimation method is studied through numerical
simulations, and it is observed that the CS based joint estimator performs
better than the LS based joint estimator.

This paper investigates a natural generalization of the kappa-mu fading
channel in which the line-of-sight (LOS) component is subject to shadowing.
This fading distribution has a clear physical interpretation, good analytical
properties and unifies the one-sided Gaussian, Rayleigh, Nakagami-m, Ricean,
kappa-mu and Ricean shadowed fading distributions. The three basic statistical
characterizations, i.e. probability density function (PDF), cumulative
distribution function (CDF) and moment generating function (MGF), of the
kappa-mu shadowed distribution are obtained in closed-form. Then, it is also
shown that the sum and maximum distributions of independent but arbitrarily
distributed kappa-mu shadowed variates can be expressed in closed-form. This
set of new statistical results is finally applied to the performance analysis
of several wireless communication systems.

WebView is an essential component in Android and iOS. It enables applications
to display content from online resources and simplifies the tasks of
performing a network request, parsing the data and rendering it. WebView
exposes a number of APIs that can interact with the web content inside the
WebView. In this paper, cross-site scripting (XSS) attacks specific to the
Android WebView are discussed. Cross-site scripting is a type of vulnerability
commonly found in web applications. This vulnerability makes it possible for
attackers to run malicious code in the victim's WebView, through the
HttpClient APIs. Using this malicious code, the attackers can steal the
victim's credentials, such as cookies. The access control policies (that is,
the same-origin policy) employed by the browser to protect those credentials
can be bypassed by exploiting the XSS vulnerability.

We present the first streaming algorithm for counting an arbitrary hypergraph
$H$ of constant size in a massive hypergraph $G$. Our algorithm can handle both
edge-insertions and edge-deletions, and is applicable for the distributed
setting. Moreover, our approach provides the first family of graph polynomials
for the hypergraph counting problem. Because of the close relationship between
hypergraphs and set systems, our approach may have applications in studying
similar problems.

We address the distributed estimation of an unknown scalar parameter in
Wireless Sensor Networks (WSNs). Sensor nodes transmit their noisy
observations over a multiple-access channel to a Fusion Center (FC) that
reconstructs the
source parameter. The received signal is corrupted by noise and channel fading,
so that the FC objective is to minimize the Mean-Square Error (MSE) of the
estimate. In this paper, we assume sensor node observations to be correlated
with the source signal and correlated with each other as well. The correlation
coefficient between two observations decays exponentially with the separation
distance. The effect of this distance-based correlation on the estimation
quality is demonstrated and compared with the case of fully correlated
observations. Moreover, a closed-form expression for the outage
probability is derived and its dependency on the correlation coefficients is
investigated. Numerical simulations are provided to verify our analytic
results.
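The exponentially decaying correlation structure can be sketched directly as an observation covariance matrix. The decay parameter `theta` and the 1-D sensor positions are illustrative assumptions, not the paper's notation.

```python
import math

def covariance_matrix(positions, theta=1.0, var=1.0):
    """Observation covariance for sensors at 1-D positions: the
    correlation between two observations is exp(-d / theta), decaying
    with their separation d (theta and var are illustrative parameters)."""
    n = len(positions)
    return [[var * math.exp(-abs(positions[i] - positions[j]) / theta)
             for j in range(n)] for i in range(n)]
```

As `theta` grows, the matrix approaches the fully correlated (all-ones) case used as the comparison baseline; as `theta` shrinks, the observations become effectively independent.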

A multidimensional optimization problem is formulated in the tropical
mathematics setting: maximize a nonlinear objective function, which is
defined through a multiplicative conjugate transposition operator on vectors in
a finite-dimensional semimodule over a general idempotent semifield. The study
is motivated by problems drawn from project scheduling, where the deviation
between initiation or completion times of activities in a project is to be
maximized subject to various precedence constraints among the activities. To
solve the unconstrained problem, we first establish an upper bound for the
objective function, and then solve a system of vector equations to find all
vectors that yield the bound. As a corollary, an extension of the solution to
handle constrained problems is discussed. The results obtained are applied to
give complete direct solutions to the motivating problems from project
scheduling. Numerical examples of the development of optimal schedules are also
presented.

K-means is undoubtedly the most widely used partitional clustering algorithm.
Unfortunately, due to its gradient descent nature, this algorithm is highly
sensitive to the initial placement of the cluster centers. Numerous
initialization methods have been proposed to address this problem. Many of
these methods, however, have superlinear complexity in the number of data
points, making them impractical for large data sets. On the other hand, linear
methods are often random and/or order-sensitive, which renders their results
unrepeatable. Recently, Su and Dy proposed two highly successful hierarchical
initialization methods named Var-Part and PCA-Part that are not only linear,
but also deterministic (non-random) and order-invariant. In this paper, we
propose a discriminant analysis based approach that addresses a common
deficiency of these two methods. Experiments on a large and diverse collection
of data sets from the UCI Machine Learning Repository demonstrate that Var-Part
and PCA-Part are highly competitive with one of the best random initialization
methods to date, i.e., k-means++, and that the proposed approach significantly
improves the performance of both hierarchical methods.
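For context, here is a compact sketch of the k-means++ seeding used as the random baseline in the comparison: the first center is drawn uniformly, and each further center with probability proportional to the squared distance to the nearest center chosen so far. Var-Part and PCA-Part themselves are deterministic hierarchical splits and are not reproduced here.

```python
import random

def kmeans_pp_init(points, k, seed=0):
    """k-means++ seeding (Arthur and Vassilvitskii).  points is a list of
    equal-length coordinate tuples; returns k initial centers."""
    rng = random.Random(seed)
    centers = [rng.choice(points)]
    while len(centers) < k:
        # squared distance from each point to its nearest chosen center
        d2 = [min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)
              for p in points]
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:   # sample proportionally to d2
                centers.append(p)
                break
    return centers
```

The D^2 weighting makes well-separated clusters likely to each receive a center, but the outcome still depends on the random draws, which is exactly the repeatability drawback that deterministic methods such as Var-Part and PCA-Part avoid.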

One of the fundamental principles driving diversity or homogeneity in domains
such as cultural differentiation, political affiliation, and product adoption
is the tension between two forces: influence (the tendency of people to become
similar to others they interact with) and selection (the tendency to be
affected most by the behavior of others who are already similar). Influence
tends to promote homogeneity within a society, while selection frequently
causes fragmentation. When both forces are in effect simultaneously, it becomes
an interesting question to analyze which societal outcomes should be expected.

In order to study the joint effects of these forces more formally, we analyze
a natural model built upon active lines of work in political opinion formation,
cultural diversity, and language evolution. Our model posits an arbitrary graph
structure describing which "types" of people can influence one another: this
captures effects based on the fact that people are only influenced by
sufficiently similar interaction partners. In a generalization of the model, we
introduce another graph structure describing which types of people even so much
as come in contact with each other. These restrictions on interaction patterns
can significantly alter the dynamics of the process at the population level.

For the basic version of the model, in which all individuals come in contact
with all others, we achieve an essentially complete characterization of
(stable) equilibrium outcomes and prove convergence from all starting states.
For the other extreme case, in which individuals only come in contact with
others who have the potential to influence them, the underlying process is
significantly more complicated; nevertheless we present an analysis for certain
graph structures.

The Immersed Boundary (IB) method is a widely-used framework for the
simulation of fluid-structure interaction problems. The IB formulation
maintains an Eulerian discretization for the fluid equations of motion while
maintaining a Lagrangian representation of immersed elastic structures. The
interaction between Lagrangian and Eulerian variables is mediated by
convolution with delta function kernels. Our specific motivation is the
modeling of platelets in hemodynamic flows. We explore the use of a new
geometric representation for the Lagrangian structures -- radial basis
functions (RBFs) -- within the IB method. We compare our new RBF-IB method with
the traditional IB method in terms of differences in Lagrangian marker
positions, computational cost, maximum stable time-step size and volume loss.

Consider the problem of a Multiple-Input Multiple-Output (MIMO)
Multiple-Access Channel (MAC) in the limit of a large number of users. Clearly,
in practical scenarios, only a small subset of the users can be scheduled to
utilize the channel simultaneously. Thus, a problem of user selection arises.
However, since solutions which collect Channel State Information (CSI) from all
users and decide on the best subset to transmit in each slot do not scale when
the number of users is large, distributed algorithms for user selection are
advantageous. In this paper, we suggest a distributed user selection algorithm,
which selects a group of users to transmit without coordinating be- tween all
users and without all users sending CSI to the base station. This
threshold-based algorithm is analyzed, and its expected capac- ity in the limit
of large number of users is investigated. It is shown that for large number of
users it achieves the same scaling laws as the optimal centralized scheme.
Multi-stage distributed schemes are also considered. 2013/04/30 - 13:38
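
The threshold rule can be sketched as follows. This is a minimal simulation assuming i.i.d. unit-mean exponential channel gains (Rayleigh fading power); the function names and the choice of target group size are illustrative, not the paper's exact construction.

```python
import math
import random

def distributed_select(gains, threshold):
    # Each user compares its own channel gain to a common threshold and
    # transmits only if it exceeds it: no CSI is collected at the base
    # station and no coordination between users is needed.
    return [i for i, g in enumerate(gains) if g > threshold]

def threshold_for(n_users, target):
    # For i.i.d. unit-mean exponential gains, P(g > t) = exp(-t), so
    # t = ln(n/target) leaves `target` users above the threshold on
    # average.
    return math.log(n_users / target)
```

For example, with 500 users and a target group size of 5, roughly 5 users exceed the threshold in each slot on average, each acting only on its own CSI.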

An algorithm for constructing Tanner graphs of non-binary irregular
quasi-cyclic LDPC codes is introduced. It employs a new method for selection of
edge labels allowing control over the code's non-binary ACE spectrum and
resulting in a low error floor. The efficiency of the algorithm is demonstrated
by generating good codes of short to moderate length over small fields,
outperforming codes generated by the known methods.

A skew-symmetric graph $(D=(V,A),\sigma)$ is a directed graph $D$ with an
involution $\sigma$ on the set of vertices and arcs. In this paper, we
introduce a separation problem, $d$-Skew-Symmetric Multicut, where we are given
a skew-symmetric graph $D$, a family $\cal T$ of $d$-sized subsets of
vertices, and an integer $k$. The objective is to decide if there is a set
$X\subseteq A$ of $k$ arcs such that every set $J$ in the family has a vertex
$v$ such that $v$ and $\sigma(v)$ are in different connected components of
$D'=(V,A\setminus (X\cup \sigma(X)))$. We give an algorithm for
this problem which runs in time $O((4d)^{k}(m+n+\ell))$, where $m$ is the
number of arcs in the graph, $n$ the number of vertices and $\ell$ the length
of the family given in the input.

Using our algorithm, we show that Almost 2-SAT has an algorithm with running
time $O(4^kk^4\ell)$ and we obtain algorithms for {\sc Odd Cycle Transversal}
and {\sc Edge Bipartization} which run in time $O(4^kk^4(m+n))$ and
$O(4^kk^5(m+n))$ respectively. This resolves an open problem posed by Reed,
Smith and Vetta [Operations Research Letters, 2003] and improves upon the
earlier almost linear time algorithm of Kawarabayashi and Reed [SODA, 2010].

We also show that Deletion q-Horn Backdoor Set Detection is a special case of
3-Skew-Symmetric Multicut, giving us an algorithm for Deletion q-Horn Backdoor
Set Detection which runs in time $O(12^kk^5\ell)$. This gives the first
fixed-parameter tractable algorithm for this problem answering a question posed
in a paper by a superset of the authors [STACS, 2013]. Using this result, we
get an algorithm for Satisfiability which runs in time $O(12^kk^5\ell)$ where
$k$ is the size of the smallest q-Horn deletion backdoor set, with $\ell$ being
the length of the input formula.
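
The objects involved can be made concrete with a small sketch: a hypothetical dictionary-based representation of a skew-symmetric graph, a check of its defining properties, and the closure rule that deleting X forces deleting sigma(X).

```python
def sigma_arc(arc, sigma):
    # The involution maps arc (u, v) to the reversed image (sigma(v), sigma(u)).
    u, v = arc
    return (sigma[v], sigma[u])

def is_skew_symmetric(vertices, arcs, sigma):
    # Check the defining properties: sigma is an involution on the
    # vertices, and every arc's image under sigma is also an arc.
    arc_set = set(arcs)
    if any(sigma[sigma[v]] != v for v in vertices):
        return False
    return all(sigma_arc(a, sigma) in arc_set for a in arcs)

def closed_deletion(X, sigma):
    # The deletion set used in the problem: removing X forces the
    # removal of sigma(X) as well.
    return set(X) | {sigma_arc(a, sigma) for a in X}
```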

Researchers since at least Darwin have debated whether and to what extent
emotions are universal or culture-dependent. However, previous studies have
primarily focused on facial expressions and on a limited set of emotions. Given
that emotions have a substantial impact on human lives, evidence for cultural
emotional relativity might be derived by applying distributional semantics
techniques to a text corpus of self-reported behaviour. Here, we explore this
idea by measuring the valence and arousal of the twelve most popular emotion
keywords expressed on the micro-blogging site Twitter. We do this in three
geographical regions: Europe, Asia and North America. We demonstrate that in
our sample, the valence and arousal levels of the same emotion keywords differ
significantly with respect to these geographical regions --- Europeans are, or
at least present themselves as, more positive and aroused; North Americans are
more negative; and Asians appear to be more positive but less aroused when
compared to global valence and arousal levels of the same emotion keywords. Our
work is the first of its kind to programmatically map large text corpora to a
dimensional model of affect.
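
The regional comparison can be illustrated with a toy aggregation, assuming each observation is a (region, keyword, valence, arousal) tuple; this is a stand-in for the paper's distributional-semantics pipeline, and all names are hypothetical.

```python
from collections import defaultdict

def regional_deviation(observations):
    # observations: iterable of (region, keyword, valence, arousal).
    # Returns {(region, keyword): (dv, da)}: the region's mean valence
    # and arousal for that keyword, minus the keyword's global mean.
    by_kw = defaultdict(list)
    by_region_kw = defaultdict(list)
    for region, kw, v, a in observations:
        by_kw[kw].append((v, a))
        by_region_kw[(region, kw)].append((v, a))

    def mean(pairs):
        n = len(pairs)
        return (sum(p[0] for p in pairs) / n, sum(p[1] for p in pairs) / n)

    out = {}
    for (region, kw), pairs in by_region_kw.items():
        gv, ga = mean(by_kw[kw])       # global mean for this keyword
        rv, ra = mean(pairs)           # regional mean for this keyword
        out[(region, kw)] = (rv - gv, ra - ga)
    return out
```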

This paper investigates an uplink multi-cell processing (MCP) model where the
cell sites are linked to a central processor (CP) via noiseless backhaul links
with limited capacity. A simple compress-and-forward scheme is employed, where
the base-stations (BSs) quantize the received signals and send the quantized
signals to the CP using distributed Wyner-Ziv compression. The CP decodes the
quantization codewords first, then decodes the user messages as if the users
and the CP form a virtual multiple-access channel. This paper formulates the
problem of maximizing the overall sum-rate under a sum backhaul constraint for
such a setting. It is shown that setting the quantization noise levels to be
uniform across the BSs maximizes the achievable sum rate under high
signal-to-noise ratio (SNR). Further, for general SNR, a low-complexity
fixed-point iteration algorithm is proposed to optimize the quantization noise
levels. This paper further shows that with uniform quantization noise levels,
the compress-and-forward scheme with Wyner-Ziv compression already achieves a
sum-rate that is within a constant gap to the sum capacity of the uplink MCP
model. The gap depends linearly on the number of BSs in the network but is
independent of the SNR and the channel matrix.
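
As a much-simplified illustration of the compress-and-forward tradeoff, consider a single-user, single-BS scalar Gaussian model in which quantization is modeled as additive Gaussian noise of variance q (a Gaussian test channel, without the Wyner-Ziv binning used in the paper). This toy sketch, not the paper's vector formulation, shows how a backhaul budget determines the quantization noise level and thus the user rate.

```python
import math

def rate(P, N, q):
    # Achievable user rate when quantization is modeled as extra
    # Gaussian noise of variance q on top of thermal noise N.
    return 0.5 * math.log2(1.0 + P / (N + q))

def backhaul(P, N, q):
    # Bits needed to convey the quantized received signal.
    return 0.5 * math.log2(1.0 + (P + N) / q)

def q_for_budget(P, N, budget, lo=1e-9, hi=1e9, iters=200):
    # backhaul() is decreasing in q: bisect (geometrically) for the
    # smallest q, hence largest rate, that fits within the budget.
    for _ in range(iters):
        mid = math.sqrt(lo * hi)
        if backhaul(P, N, mid) > budget:
            lo = mid
        else:
            hi = mid
    return hi
```

Shrinking the backhaul budget raises q, which lowers the achievable rate toward zero; an unlimited budget recovers the full-CSI rate 0.5*log2(1 + P/N).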

The planar embedding conjecture asserts that any planar metric admits an
embedding into L_1 with constant distortion. This is a well-known open problem
with important algorithmic implications, and has received a lot of attention
over the past two decades. Despite significant efforts, it has been verified
only for some very restricted cases, while the general problem remains elusive.

In this paper we make progress towards resolving this conjecture. We show
that every planar metric of non-positive curvature admits a constant-distortion
embedding into L_1. This confirms the planar embedding conjecture for the case
of non-positively curved metrics.

GCD computations and variants of the Euclidean algorithm enjoy broad uses in
both classical and quantum algorithms. In this paper, we propose quantum
circuits for GCD computation with $O(n \log n)$ depth and $O(n)$ ancillae. The
prior circuit construction needs $O(n^2)$ running time with $O(n)$ ancillae.
The proposed construction is based on the binary GCD algorithm and benefits
from log-depth circuits for 1-bit shift, comparison/subtraction, and managing
ancillae. The worst-case gate count remains $O(n^2)$, as in traditional
circuits.
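
The classical binary GCD routine underlying the construction uses only shifts, comparisons, and subtractions, which is exactly why log-depth circuits for those primitives suffice. A plain software version:

```python
def binary_gcd(a, b):
    # Stein's binary GCD: repeatedly strip factors of two and replace
    # the larger operand by the (even) difference of two odd numbers.
    if a == 0:
        return b
    if b == 0:
        return a
    shift = 0
    while (a | b) & 1 == 0:        # factor out common powers of two
        a >>= 1
        b >>= 1
        shift += 1
    while a & 1 == 0:              # make a odd
        a >>= 1
    while b != 0:
        while b & 1 == 0:          # make b odd
            b >>= 1
        if a > b:
            a, b = b, a            # keep a <= b
        b -= a                     # odd - odd is even
    return a << shift              # restore the common powers of two
```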

A new analysis is presented for the direct-sequence code-division multiple
access (DS-CDMA) cellular uplink. For a given network topology, closed-form
expressions are found for the outage probability and rate of each uplink in the
presence of path-dependent Nakagami fading and log-normal shadowing. The
topology may be arbitrary or modeled by a random spatial distribution for a
fixed number of base stations and mobiles placed over a finite area with the
separations among them constrained to exceed a minimum distance. The analysis
is more detailed and accurate than existing ones and facilitates the resolution
of network design issues, including the influence of the minimum base-station
separation, the role of the spreading factor, and the impact of various
power-control and rate-control policies. It is shown that once power control is
established, the rate can be allocated according to a fixed-rate or
variable-rate policy with the objective of either meeting an outage constraint
or maximizing throughput. An advantage of the variable-rate policy is that it
allows an outage constraint to be enforced on every uplink, whereas the
fixed-rate policy can only meet an average outage constraint.
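
The outage quantity analyzed above can be checked by simulation. The following Monte Carlo sketch assumes unit-mean Nakagami-m power gains (Gamma-distributed) and interference suppressed by the spreading factor; the model and parameters are illustrative, not the paper's closed-form analysis.

```python
import random

def outage_probability(m, snr, int_snrs, spread, beta,
                       trials=20000, seed=0):
    # Monte Carlo estimate of P(SINR < beta) for one uplink.
    # Nakagami-m fading => power gain ~ Gamma(shape=m, scale=1/m),
    # which has unit mean.
    rng = random.Random(seed)

    def gain():
        return rng.gammavariate(m, 1.0 / m)

    out = 0
    for _ in range(trials):
        s = snr * gain()                              # desired signal
        i = sum(g * gain() for g in int_snrs) / spread  # despread interference
        if s / (1.0 + i) < beta:                      # unit thermal noise
            out += 1
    return out / trials
```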

In many applications, one has side information, e.g., labels that are
provided in a semi-supervised manner, about a specific target region of a large
data set, and one wants to perform machine learning and data analysis tasks
"nearby" that prespecified target region. For example, one might be interested
in the clustering structure of a data graph near a prespecified "seed set" of
nodes, or one might be interested in finding partitions in an image that are
near a prespecified "ground truth" set of pixels. Locally-biased problems of
this sort are particularly challenging for popular eigenvector-based machine
learning and data analysis tools. At root, the reason is that eigenvectors are
inherently global quantities, thus limiting the applicability of
eigenvector-based methods in situations where one is interested in very local
properties of the data.

In this paper, we address this issue by providing a methodology to construct
semi-supervised eigenvectors of a graph Laplacian, and we illustrate how these
locally-biased eigenvectors can be used to perform locally-biased machine
learning. These semi-supervised eigenvectors capture
successively-orthogonalized directions of maximum variance, conditioned on
being well-correlated with an input seed set of nodes that is assumed to be
provided in a semi-supervised manner. We show that these semi-supervised
eigenvectors can be computed quickly as the solution to a system of linear
equations; and we also describe several variants of our basic method that have
improved scaling properties. We provide several empirical examples
demonstrating how these semi-supervised eigenvectors can be used to perform
locally-biased learning; and we discuss the relationship between our results
and recent machine learning algorithms that use global eigenvectors of the
graph Laplacian.
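
The computational point, obtaining a locally-biased vector as the solution of a single linear system seeded at a node set, can be sketched with a simplified stand-in: solving (L + kappa*I) x = s for an indicator vector s of the seed set. This is not the paper's exact construction (which involves successive orthogonalization and a correlation constraint); all names and the choice of kappa are illustrative.

```python
def laplacian(n, edges):
    # Combinatorial graph Laplacian of an unweighted undirected graph.
    L = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1.0
        L[v][v] += 1.0
        L[u][v] -= 1.0
        L[v][u] -= 1.0
    return L

def solve(A, b):
    # Plain Gauss-Jordan elimination with partial pivoting.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
    return [M[r][n] / M[r][r] for r in range(n)]

def seeded_vector(n, edges, seed_nodes, kappa=0.1):
    # Solve (L + kappa*I) x = s with s the seed indicator; the
    # regularizer kappa localizes x around the seed's neighborhood.
    L = laplacian(n, edges)
    for i in range(n):
        L[i][i] += kappa
    s = [1.0 if i in seed_nodes else 0.0 for i in range(n)]
    return solve(L, s)
```

On a path graph seeded at one endpoint, the solution decays monotonically with distance from the seed, which is the locality that global eigenvectors lack.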

Moss and Rabani [12] study constrained node-weighted Steiner tree problems
with two independent weight values associated with each node, namely, cost and
prize (or penalty). They give an O(log n)-approximation algorithm for the
prize-collecting node-weighted Steiner tree problem (PCST). They use the
algorithm for PCST to obtain a bicriteria (2, O(log n))-approximation algorithm
for the Budgeted node-weighted Steiner tree problem. Their solution may cost up
to twice the budget, but collects a factor Omega(1/log n) of the optimal prize.
We improve these results in at least two respects.

Our first main result is a primal-dual O(log h)-approximation algorithm for a
more general problem, prize-collecting node-weighted Steiner forest, where we
have h demands, each requesting the connectivity of a pair of vertices. Our
algorithm can be seen as a greedy algorithm which reduces the number of demands
by choosing a structure with minimum cost-to-reduction ratio. This natural
style of argument (also used by Klein and Ravi [10] and Guha et al. [8]) leads
to a much simpler algorithm than that of Moss and Rabani [12] for PCST.

Our second main contribution is for the Budgeted node-weighted Steiner tree
problem, which is also an improvement over [12] and [8]. In the unrooted case,
we improve upon the O(log^2 n)-approximation of [8], and present an O(log
n)-approximation algorithm without any budget violation. For the rooted case,
where a specified vertex has to appear in the solution tree, we improve the
bicriteria result of [12] to a bicriteria approximation ratio of (1+eps, O(log
n)/eps^2) for any positive (possibly subconstant) eps. That is, for any
permissible budget violation 1+eps, we present an algorithm achieving a
tradeoff in the guarantee for prize. Indeed, we show that this is almost tight
for the natural linear-programming relaxation used by us as well as in [12].
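
The minimum cost-to-reduction-ratio greedy can be sketched in a stripped-down, set-cover-style form, where each candidate "structure" simply carries a cost and the set of demands it would satisfy; this illustrates the style of argument, not the paper's actual Steiner structures.

```python
def min_ratio_greedy(demands, structures):
    # structures: list of (cost, frozenset of demands satisfied).
    # Repeatedly pick the structure minimizing cost divided by the
    # number of demands it newly satisfies.
    remaining = set(demands)
    chosen, total = [], 0.0
    while remaining:
        best = None
        for idx, (cost, covered) in enumerate(structures):
            reduction = len(remaining & covered)
            if reduction == 0:
                continue                    # no progress from this one
            ratio = cost / reduction
            if best is None or ratio < best[0]:
                best = (ratio, idx, covered, cost)
        if best is None:
            raise ValueError("some demand cannot be satisfied")
        remaining -= best[2]
        chosen.append(best[1])
        total += best[3]
    return chosen, total
```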

Compressed sensing is by now well-established as an effective tool for
extracting sparsely distributed information, where sparsity is a discrete
concept, referring to the number of dominant nonzero signal components in some
basis for the signal space. In this paper, we establish a framework for
estimation of continuous-valued parameters based on compressive measurements on
a signal corrupted by additive white Gaussian noise (AWGN). While standard
compressed sensing based on naive discretization has been shown to suffer from
performance loss due to basis mismatch, we demonstrate that this is not an
inherent property of compressive measurements. Our contributions are summarized
as follows: (a) We identify the isometries required to preserve fundamental
estimation-theoretic quantities such as the Ziv-Zakai bound (ZZB) and the
Cramer-Rao bound (CRB). Under such isometries, compressive projections can be
interpreted simply as a reduction in "effective SNR." (b) We show that the
convergence of the ZZB to the CRB provides a criterion for determining the
minimum number of measurements for "accurate" parameter estimation. (c) We
provide detailed computations of the number of measurements needed for the
isometries in (a) to hold for the problem of frequency estimation in a mixture
of sinusoids. We show via simulations that the design criterion in (b) is
accurate for estimating the frequency of a single sinusoid.
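
Point (a), a compressive projection acting as a reduction in "effective SNR", can be illustrated numerically: for A with i.i.d. N(0, 1/N) entries, E||As||^2 = (M/N)||s||^2, so signal energy scales by the compression ratio M/N. The dimensions and trial count below are illustrative.

```python
import math
import random

def project(A, s):
    # y = A s, one inner product per compressive measurement.
    return [sum(a * x for a, x in zip(row, s)) for row in A]

def avg_energy_ratio(n, m, trials=1000, seed=0):
    # Average ||A s||^2 / ||s||^2 over random A with i.i.d.
    # N(0, 1/n) entries; concentrates around m/n, the effective-SNR
    # reduction factor of the compressive projection.
    rng = random.Random(seed)
    s = [1.0] * n                      # any fixed test signal works
    e = sum(x * x for x in s)
    total = 0.0
    for _ in range(trials):
        A = [[rng.gauss(0.0, 1.0 / math.sqrt(n)) for _ in range(n)]
             for _ in range(m)]
        y = project(A, s)
        total += sum(v * v for v in y) / e
    return total / trials
```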