
Computer Science papers from arXiv.org

Quantum Key Distribution (QKD) is a secret-key distribution technique that requires an
authenticated channel. This channel is usually created on top of an
un-authenticated communication medium using unconditionally secure Message
Authentication Codes (MAC) and an initial common secret. We examine the
consequences of replacing this MAC algorithm by a cryptographic hash-based
signature algorithm, like the Lamport algorithm. We show that provided one-way
functions exist, the Lamport algorithm or its variants can be instantiated in a
secure way in the Universally Composable sense, and can therefore be plugged
into any QKD protocol with a composable security proof in a secure manner. This
association, while relying on short-term computational hardness assumptions,
results in an increase of the practical security of QKD and eases its
deployment.
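
A minimal Lamport one-time signature, the kind of hash-based scheme the abstract refers to, can be sketched in Python as follows (an illustration for intuition only; the paper's UC-secure instantiation and its variants add further details):

```python
import hashlib
import secrets

H = lambda data: hashlib.sha256(data).digest()

def keygen():
    # 256 pairs of secret values, one pair per bit of the message digest
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(message, sk):
    digest = hashlib.sha256(message).digest()
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    return [sk[i][b] for i, b in enumerate(bits)]   # reveal one preimage per bit

def verify(message, signature, pk):
    digest = hashlib.sha256(message).digest()
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    return all(H(signature[i]) == pk[i][b] for i, b in enumerate(bits))

sk, pk = keygen()
sig = sign(b"qkd authentication tag", sk)   # each key pair must be used only once
assert verify(b"qkd authentication tag", sig, pk)
```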

http://arxiv.org/abs/1109.2844 2013/07/27 - 20:59

We propose a blind watermarking scheme for 3-D meshes which combines sparse
quantization index modulation (QIM) with deletion correction codes. The QIM
operates on the vertices in rough concave regions of the surface, thus ensuring
imperceptibility, while the deletion correction code recovers data hidden in
vertices that are removed by mesh optimization and/or simplification. The
proposed scheme offers two orders of magnitude better performance in terms of
recovered watermark bit error rate compared to the existing schemes of similar
payloads and fidelity constraints.
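
For intuition, scalar QIM embeds one bit by quantizing a value onto one of two interleaved lattices; a minimal sketch (the scheme in the abstract applies this idea sparsely to vertex coordinates, which is not reproduced here):

```python
import numpy as np

def qim_embed(value, bit, step):
    # quantize onto one of two lattices, offset from each other by step/2
    offset = step / 2.0 if bit else 0.0
    return np.round((value - offset) / step) * step + offset

def qim_extract(value, step):
    # decode by choosing the nearer of the two lattices
    return int(abs(value - qim_embed(value, 1, step)) <
               abs(value - qim_embed(value, 0, step)))

v = qim_embed(3.1415, bit=1, step=0.25)
assert qim_extract(v + 0.05, step=0.25) == 1   # survives a mild perturbation
```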

http://arxiv.org/abs/1204.2214 2013/07/27 - 20:59

A two-way buffer system transfers data through two buffers concurrently. It
comprises processes that synchronize to exchange data with each other,
executing certain delays between these synchronizations. In the existing Tiny
Two Way Buffer System, both generators produce packets in half-duplex fashion
under no-time, deterministic-time, and non-deterministic-time assumptions.
Analysis of the model under these time options shows that it deadlocks. The
model can escape deadlock if timing is incorporated in an alternating fashion.
The generators produce packets after a delay of 10 seconds; however, generator
one has an initial shift of 5 seconds, after which it sends a packet every 10
seconds. Hence the initial delay for generator one is 15 seconds and for
generator two it is 10 seconds. Owing to this initial shift, the generators
produce packets alternately, and the system is deadlock-free because packets
never meet at the same time instant. Moreover, the existing system model is
not concurrent and hence takes more time for packet transfer in every
iteration. In this paper we propose a buffer-system model with an additional
dummy buffer for transferring data packets, and we check the model under the
same time options: no time, deterministic time, and non-deterministic time.
Under these time models the proposed model also deadlocks; we achieve freedom
from deadlock by introducing appropriate non-deterministic delays in the
various buffers of the proposed system. The new approach speeds up packet
transfer, making data transfer concurrent and deadlock-free, so the proposed
model is time-efficient. Simulation results show that the proposed two-way
buffer system is fully concurrent and time-efficient compared to the existing
buffer system.
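
The timing argument is easy to check directly: with the delays quoted above, the emission instants of the two generators never coincide (a toy check, not the authors' simulation):

```python
# Generator 1: first packet at 15 s, then every 10 s; generator 2: at 10 s, every 10 s.
g1 = [15 + 10 * i for i in range(50)]
g2 = [10 + 10 * i for i in range(50)]
assert not set(g1) & set(g2)          # the 5 s shift keeps emission instants disjoint
print(sorted(set(g1) | set(g2))[:6])  # [10, 15, 20, 25, 30, 35] -- strict alternation
```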

http://arxiv.org/abs/1209.2376 2013/07/27 - 20:59

Multi-cell cooperation (MCC) mitigates intercell interference and improves
throughput at the cell edge. This paper considers a cooperative downlink,
whereby cell-edge mobiles are served by multiple cooperative base stations. The
cooperating base stations transmit identical signals over paths with
non-identical path losses, and the receiving mobile performs diversity
combining. The analysis in this paper is driven by a new expression for the
conditional outage probability when signals arriving over different paths are
combined in the presence of noise and interference, where the conditioning is
with respect to the network topology and shadowing. The channel model accounts
for path loss, shadowing, and Nakagami fading, and the Nakagami fading
parameters do not need to be identical for all paths. To study performance over
a wide class of network topologies, a random spatial model is adopted, and
performance is found by statistically characterizing the rates provided on the
downlinks. To model realistic networks, the model requires a minimum separation
among base stations. Having adopted a realistic model and an accurate analysis,
the paper proceeds to determine performance under several resource-allocation
policies and provides insight regarding how the cell edge should be defined.
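
A conditional outage probability of this kind is straightforward to estimate by Monte Carlo once the topology is fixed; a toy sketch with a single interferer and assumed mean powers (not the paper's closed-form expression):

```python
import numpy as np

rng = np.random.default_rng(7)
trials = 200_000
m_sig, m_int = 2.0, 1.0            # Nakagami m-parameters, deliberately non-identical
omega_sig, omega_int = 10.0, 1.0   # mean received powers after path loss/shadowing (assumed)

# Nakagami-m fading power gains follow a Gamma(shape=m, scale=omega/m) law
p_sig = rng.gamma(m_sig, omega_sig / m_sig, trials)   # combined desired signal
p_int = rng.gamma(m_int, omega_int / m_int, trials)   # one interferer

sinr = p_sig / (p_int + 1.0)       # unit-power noise
beta = 2.0                         # SINR threshold
print("conditional outage ~", np.mean(sinr < beta))
```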

http://arxiv.org/abs/1306.3786 2013/07/27 - 20:59

In this paper we present a novel probabilistic sampling-based motion planning
algorithm called the Fast Marching Tree algorithm (FMT*). The algorithm is
specifically aimed at solving complex motion planning problems in
high-dimensional configuration spaces. This algorithm is proven to be
asymptotically optimal and is shown to converge to an optimal solution faster
than its state-of-the-art counterparts, chiefly, PRM* and RRT*. An additional
advantage of FMT* is that it builds and maintains paths in a tree-like
structure (especially useful for planning under differential constraints). The
FMT* algorithm essentially performs a "lazy" dynamic programming recursion on a
set of probabilistically-drawn samples to grow a tree of paths, which moves
steadily outward in cost-to-come space. As such, this algorithm combines
features of both single-query algorithms (chiefly RRT) and multiple-query
algorithms (chiefly PRM), and is conceptually related to the Fast Marching
Method for the solution of eikonal equations. As a departure from previous
analysis approaches that are based on the notion of almost sure convergence,
the FMT* algorithm is analyzed under the notion of convergence in probability:
the extra mathematical flexibility of this approach allows for significant
algorithmic advantages and is of independent interest. Numerical experiments
over a range of dimensions and obstacle configurations confirm our theoretical
and heuristic arguments by showing that FMT* returns substantially better
solutions than either PRM* or RRT*.
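
The cost-to-come recursion at the heart of FMT* can be sketched compactly; in the obstacle-free limit it reduces to dynamic programming over r-disc neighbourhoods, with each candidate connection collision-checked lazily (a simplified sketch that omits goal handling and the paper's choice of the radius r):

```python
import numpy as np

def fmt_sketch(samples, start, r, collision_free):
    """Expand the lowest-cost open node; connect each nearby unvisited sample
    through its locally optimal parent, checking only that one edge."""
    dist = lambda a, b: float(np.linalg.norm(samples[a] - samples[b]))
    cost, parent = {start: 0.0}, {start: None}
    open_set, unvisited = {start}, set(range(len(samples))) - {start}
    while open_set:
        z = min(open_set, key=lambda i: cost[i])
        for x in [i for i in unvisited if dist(i, z) <= r]:
            near = [y for y in open_set if dist(x, y) <= r]
            y = min(near, key=lambda y: cost[y] + dist(x, y))
            if collision_free(samples[y], samples[x]):   # lazy check on the winner only
                cost[x], parent[x] = cost[y] + dist(x, y), y
                open_set.add(x)
                unvisited.discard(x)
        open_set.discard(z)
    return cost, parent

pts = np.random.default_rng(0).random((200, 2))
cost, parent = fmt_sketch(pts, start=0, r=0.15, collision_free=lambda a, b: True)
```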

http://arxiv.org/abs/1306.3532 2013/07/26 - 11:48

Conditional statements, and especially subjunctive and counterfactual
conditionals, are the source of many enduring challenges in formal reasoning.
Here we present a method that uses probability to distinguish among several
different kinds of conditional statements. Four main types of conditionals are
considered: material, existential, subjunctive, and feasibility. Also addressed
is the matter of reasoning about probabilities attached to formulas in the
propositional calculus (including statements of material implication); such
formulas can be embedded within probability models. Parametric probability
networks provide an expressive formal language in which semantically-different
conditional statements can be represented as syntactically-different polynomial
constraints. Two phases of analysis, symbolic probability inference and
polynomial optimization, allow the computation of interesting results from
parametric probability networks. This methodology complements what is available
in conventional mathematical logic.
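
One distinction the paper formalizes is visible numerically: the probability of the material conditional, P(A -> B) = P(not A or B), generally differs from the conditional probability P(B | A). A toy joint distribution makes the gap explicit:

```python
# joint distribution over (A, B)
p = {(True, True): 0.2, (True, False): 0.2,
     (False, True): 0.3, (False, False): 0.3}

p_material = sum(v for (a, b), v in p.items() if (not a) or b)  # P(A -> B)
p_a = sum(v for (a, _), v in p.items() if a)
p_conditional = p[(True, True)] / p_a                           # P(B | A)

print(p_material, p_conditional)   # 0.8 versus 0.5: the two readings disagree
```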

http://arxiv.org/abs/1307.3802 2013/07/26 - 11:48

In this paper, we propose a new rateless coded cooperation scheme for a
general multi-user cooperative wireless system. We develop cooperation methods
based on Raptor codes with the assumption that the channels face erasure with
specific erasure probabilities and transmitters have no channel state
information. A fully coded cooperation (FCC) and a partially coded cooperation
(PCC) strategy are developed to maximize the average system throughput. Both
PCC and FCC schemes have been analyzed through AND-OR tree analysis and a
linear programming optimization problem is then formulated to find the optimum
degree distribution for each scheme. Simulation results show that optimized
degree distributions can bring considerable throughput gains compared to
existing degree distributions which are designed for point-to-point binary
erasure channels. It is also shown that the PCC scheme outperforms the FCC
scheme in terms of average system throughput.
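
For reference, the point-to-point BEC baseline that the optimized distributions are compared against is commonly the robust soliton distribution of LT codes; a sketch of sampling output degrees from it (the standard construction, not the paper's optimized distributions):

```python
import numpy as np

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust soliton output-degree distribution for LT codes over the BEC."""
    R = c * np.log(k / delta) * np.sqrt(k)
    rho = np.array([1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)])
    tau = np.zeros(k)
    pivot = int(round(k / R))
    tau[: pivot - 1] = [R / (d * k) for d in range(1, pivot)]
    tau[pivot - 1] = R * np.log(R / delta) / k
    mu = rho + tau
    return mu / mu.sum()

k = 500
degrees = np.random.default_rng(0).choice(np.arange(1, k + 1), size=8,
                                          p=robust_soliton(k))
print(degrees)   # mostly low degrees, with occasional spikes near k/R
```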

http://arxiv.org/abs/1307.4463 2013/07/26 - 11:48

Optimizing the cellular network's cell locations is one of the most
fundamental problems of network design. The general objective is to provide the
desired Quality-of-Service (QoS) with the minimum system cost. In order to meet
a growing appetite for mobile data services, heterogeneous networks have been
proposed as a cost- and energy-efficient method of improving local spectral
efficiency. Whilst unplanned cell deployments can lead to localized
improvements, they pose a significant risk to network-wide performance due
to the additional interference.

The first part of the paper focuses on state-of-the-art modelling and
radio-planning methods based on stochastic geometry and Monte-Carlo
simulations, and the emerging automatic deployment prediction technique for
low-power nodes (LPNs) in heterogeneous networks. The technique advises where
an LPN should be deployed, given certain knowledge of the network. The second
part of the paper focuses on algorithms that utilize interference and physical
environment knowledge to assist LPN deployment. The proposed techniques can not
only improve network performance, but also reduce radio-planning complexity,
capital expenditure, and energy consumption of the cellular network. The
theoretical work is supported by numerical results from system-level
simulations that employ real cellular network data and physical environments.

http://arxiv.org/abs/1307.5182 2013/07/26 - 11:48

Consider N points in d-dimensions and M >= 2 local coordinate systems that
are related through unknown rigid transforms. For each point we are given
(possibly noisy) measurements of its local coordinates in some of the
coordinate systems. Alternatively, for each coordinate system we observe the
coordinates of a subset of the points. We study the problem of estimating the
global coordinates (up to a global rigid transform) of the points from such
measurements. This problem comes up in distributed approaches to molecular
conformation (determination of molecular structures using NMR) and sensor
network localization, and also in computer vision and graphics applications.

We formulate a least-squares-based program, in which the variables are the
global coordinates and the rigid transforms associated with each local
coordinate system. While there exists a well-known closed form SVD-based
solution when M=2, for M > 2 the least-squares problem is non-convex and no
closed-form solution is known.
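
The closed-form M = 2 solution mentioned above is the classical SVD-based (Kabsch/Procrustes) alignment; a minimal sketch:

```python
import numpy as np

def rigid_align(P, Q):
    """Closed-form least-squares rigid transform (R, t) minimizing
    ||R @ P + t - Q||_F for d x N point sets P and Q."""
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((Q - cq) @ (P - cp).T)
    d = np.sign(np.linalg.det(U @ Vt))            # guard against reflections
    D = np.diag([1.0] * (P.shape[0] - 1) + [d])
    R = U @ D @ Vt
    t = cq - R @ cp
    return R, t
```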

We show that the non-convex least-squares problem can be relaxed into a
standard semidefinite program (SDP), and demonstrate how the global coordinates
(and rigid transforms) can be efficiently computed from this SDP. Under
specific measurement and noise models, we prove that the proposed convex
relaxation exactly and stably recovers the global coordinates from the noisy
local coordinates. We perform some numerical simulations to validate the
algorithm and confirm the theoretical findings.

http://arxiv.org/abs/1306.5226 2013/07/26 - 11:48

In this paper we use the marker method and propose a new mutation operator
that selects the nearest neighbor among all near neighbors for solving the
Traveling Salesman Problem.
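
Since the abstract leaves the operator's details open, the following sketch is only one plausible reading, with illustrative names and an illustrative re-insertion rule:

```python
import random

def nearest_neighbor_mutation(tour, dist, rng=random.Random(0)):
    # remove a random city and re-insert it next to its nearest remaining neighbor
    city = tour.pop(rng.randrange(len(tour)))
    nearest = min(tour, key=lambda c: dist[city][c])
    tour.insert(tour.index(nearest) + 1, city)
    return tour
```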

http://arxiv.org/abs/1307.5674 2013/07/24 - 19:41

Temporal social networks are characterized by heterogeneous duration of
contacts, which can either follow a power-law distribution, such as in
face-to-face interactions, or a Weibull distribution, such as in mobile-phone
communication. Here we model the dynamics of face-to-face interaction and
mobile phone communication by a reinforcement dynamics, which explains the data
observed in these different types of social interactions. We quantify the
information encoded in the dynamics of these networks by the entropy of
temporal networks. Finally, we show evidence that human dynamics is able to
modulate the information present in social network dynamics when it follows
circadian rhythms and when it is interfacing with a new technology such as the
mobile-phone communication technology.

http://arxiv.org/abs/1307.5675 2013/07/24 - 19:41

Nowadays, optimization problems have many applications in all major fields,
but their computation is problematic: finding the global optimum of a
continuous function is computationally expensive, and this becomes clearer in
large search spaces. In this paper, we propose the Sub-Dividing Genetic Method
(SGM), which requires less computation than other methods for reaching global
optima. The method uses rotation mutation and crossover on top of a
sub-division method: sub-division is used to shrink the search space, while
rotation mutation with crossover is used to find globally optimal points. In
our experiments, the SGM algorithm is implemented on the De Jong function. The
numerical examples show that SGM performs better than other methods such as
Grefensstette, Random Value, and PNG.

http://arxiv.org/abs/1307.5679 2013/07/24 - 19:41

When the interval between a transient flash of light (a "cue") and a second
visual response signal (a "target") is at least 200 ms, responding is
slowest in the direction indicated by the first signal. This phenomenon is
commonly referred to as inhibition of return (IOR). The dynamic neural field
model (DNF) has proven to have broad explanatory power for IOR, effectively
capturing many empirical results. Previous work has used a short-term
depression (STD) implementation of IOR, but this approach fails to explain many
behavioral phenomena observed in the literature. Here, we explore a variant
model of IOR involving a combination of STD and delayed direct collicular
inhibition. We demonstrate that this hybrid model can better reproduce
established behavioural results. We use the results of this model to propose
several experiments that would yield particularly valuable insight into the
nature of the neurophysiological mechanisms underlying IOR.

http://arxiv.org/abs/1307.5684 2013/07/24 - 19:41

As defined by the Memento Framework, TimeMaps are machine-readable lists of
time-specific copies -- called "mementos" -- of an archived original resource.
In theory, as an archive acquires additional mementos over time, a TimeMap
should be monotonically increasing. However, there are reasons why the number
of mementos in a TimeMap would decrease, for example: archival redaction of
some or all of the mementos, archival restructuring, and transient errors on
the part of one or more archives. We study TimeMaps for 4,000 original
resources over a three month period, note their change patterns, and develop a
caching algorithm for TimeMaps suitable for a reverse proxy in front of a
Memento aggregator. We show that TimeMap cardinality is constant or
monotonically increasing for 80.2% of all TimeMap downloads observed in the
observation period. The goal of the caching algorithm is to exploit the ideally
monotonically increasing nature of TimeMaps and not cache responses with fewer
mementos than the already cached TimeMap. This new caching algorithm uses
conditional cache replacement and a Time To Live (TTL) value to ensure the user
has access to the most complete TimeMap available. Based on our empirical data,
a TTL of 15 days will minimize the number of mementos missed by users, and
minimize the load on archives contributing to TimeMaps.
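
The described policy, conditional replacement guarded by a TTL, can be sketched directly (an illustrative data structure, not the authors' implementation):

```python
import time

class TimeMapCache:
    """Cache TimeMaps, refusing to replace an entry with a smaller one
    until its TTL has expired."""

    def __init__(self, ttl=15 * 24 * 3600):      # 15-day TTL, per the abstract
        self.ttl = ttl
        self.entries = {}                        # original URI -> (timemap, cached_at)

    def update(self, uri, fresh):
        cached = self.entries.get(uri)
        if cached is not None:
            timemap, cached_at = cached
            expired = time.time() - cached_at > self.ttl
            if len(fresh) < len(timemap) and not expired:
                return timemap                   # ignore the (likely transient) shrink
        self.entries[uri] = (fresh, time.time())
        return fresh
```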

http://arxiv.org/abs/1307.5685 2013/07/24 - 19:41

Since the early 2000s, computational visual saliency has been a very active
research area. Each year, more and more new models are published in the main
computer vision conferences. Nowadays, one of the big challenges is to find a
way to fairly evaluate all of these models. In this paper, a new framework is
proposed to assess models of visual saliency. The evaluation is divided into
three experiments, each based on a basic question: 1) there are two ground
truths for saliency evaluation: what are the differences between eye fixations
and manually segmented salient regions? 2) the properties of the salient
regions: for example, do large, medium and small salient regions present
different difficulties for saliency models? and 3) the metrics used to assess
saliency models: what advantages would there be in mixing them with PCA?
Statistical analysis is used to answer each of these three questions.

http://arxiv.org/abs/1307.5691 2013/07/24 - 19:41

In the last few decades, significant achievements have been attained in
predicting where humans look at images through different computational models.
However, how to determine contributions of different visual features to overall
saliency still remains an open problem. To overcome this issue, a recent class
of models formulates saliency estimation as a supervised learning problem and
accordingly applies machine learning techniques. In this paper, we also address
this challenging problem and propose using multiple kernel learning (MKL) to
combine information coming from different feature dimensions and to perform
integration at an intermediate level. In addition, we suggest using responses of a
recently proposed filterbank of object detectors, known as Object-Bank, as
additional semantic high-level features. Here we show that our MKL-based
framework together with the proposed object-specific features provide
state-of-the-art performance as compared to SVM or AdaBoost-based saliency
models.

http://arxiv.org/abs/1307.5693 2013/07/24 - 19:41

Colour refinement is a basic algorithmic routine for graph isomorphism
testing, appearing as a subroutine in almost all practical isomorphism solvers.
It partitions the vertices of a graph into "colour classes" in such a way that
all vertices in the same colour class have the same number of neighbours in
every colour class. Tinhofer (Disc. App. Math., 1991), Ramana, Scheinerman, and
Ullman (Disc. Math., 1994), and Godsil (Lin. Alg. and its App., 1997)
established a tight correspondence between colour refinement and fractional
automorphisms of a graph.
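
A naive quadratic-time version of colour refinement is easy to state (the quasilinear algorithms extended in the paper are more involved):

```python
def colour_refinement(adj):
    """adj maps each vertex to its neighbour list. Repeatedly split colour
    classes by the multiset of neighbouring colours until stable."""
    colour = {v: 0 for v in adj}
    while True:
        signature = {v: (colour[v], tuple(sorted(colour[u] for u in adj[v])))
                     for v in adj}
        relabel = {s: i for i, s in enumerate(sorted(set(signature.values())))}
        refined = {v: relabel[signature[v]] for v in adj}
        if len(set(refined.values())) == len(set(colour.values())):
            return refined                 # no class split -> partition is stable
        colour = refined

# on a 6-cycle every vertex looks alike: a single colour class survives
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(colour_refinement(cycle))
```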

We introduce versions of colour refinement for weighted directed graphs, for
matrices, and for linear programs and extend existing quasilinear algorithms
for computing the colour classes. Then we generalise the correspondence between
colour refinement and fractional automorphisms, giving a new proof that is
much simpler than the known proofs even in the setting of unweighted undirected
graphs.

We apply these results to reduce the dimensions of linear programs.
Specifically, we show that any given linear program L can efficiently be
transformed into a (potentially) smaller linear program L' whose number of
variables and constraints is the number of colour classes of the colour
refinement algorithm. (When applied to a linear program, colour refinement
yields partitions both of the constraints and of the variables.) The
transformation is such that we can easily (by a linear mapping) transform both
feasible and optimal solutions back and forth between the two LPs. We
demonstrate empirically that colour refinement can indeed greatly reduce the
cost of solving linear programs. A precursor of the method proposed here has
been applied successfully by the second and third author (jointly with Ahmadi)
to MAP inference problems in machine learning (AISTATS 2012).

http://arxiv.org/abs/1307.5697 2013/07/24 - 19:41

The human visual system employs a selective attention mechanism to understand
the visual world in an efficient manner. In this paper, we show how
computational models of this mechanism can be exploited for the computer vision
application of scene recognition. First, we consider saliency weighting and
saliency pruning, and provide a comparison of the performance of different
attention models in these approaches in terms of classification accuracy.
Pruning can achieve a high degree of computational savings without
significantly sacrificing classification accuracy. In saliency weighting,
however, we found that classification performance does not improve. In
addition, we present a new method to incorporate salient and non-salient
regions for improved classification accuracy. We treat the salient and
non-salient regions separately and combine them using Multiple Kernel Learning.
We evaluate our approach using the UIUC sports dataset and find that with a
small training size, our method improves upon the classification accuracy of
the baseline bag of features approach.

http://arxiv.org/abs/1307.5702 2013/07/24 - 19:41

One of the key challenges in the area of signal processing on graphs is to
design dictionaries and transform methods to identify and exploit structure in
signals on weighted graphs. To do so, we need to account for the intrinsic
geometric structure of the underlying graph data domain. In this paper, we
generalize one of the most important signal processing tools - windowed Fourier
analysis - to the graph setting. Our approach is to first define generalized
convolution, translation, and modulation operators for signals on graphs, and
explore related properties such as the localization of translated and modulated
graph kernels. We then use these operators to define a windowed graph Fourier
transform, enabling vertex-frequency analysis. When we apply this transform to
a signal with frequency components that vary along a path graph, the resulting
spectrogram matches our intuition from classical discrete-time signal
processing. Yet, our construction is fully generalized and can be applied to
analyze signals on any undirected, connected, weighted graph.
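
A dense sketch of the construction on a small graph, with Laplacian eigenvectors playing the role of the complex exponentials (the formulas follow the generalized translation and modulation operators described above; normalization conventions may differ from the paper's):

```python
import numpy as np

def windowed_graph_fourier(W, f, g):
    """Vertex-frequency coefficients S[i, k] = <f, M_k T_i g> for a graph with
    adjacency matrix W, signal f and window g (length-N vectors)."""
    L = np.diag(W.sum(axis=1)) - W          # combinatorial graph Laplacian
    _, U = np.linalg.eigh(L)                # columns U[:, l] are eigenvectors u_l
    N = W.shape[0]
    g_hat = U.T @ g                         # window in the graph spectral domain
    S = np.empty((N, N))
    for i in range(N):
        T_i_g = np.sqrt(N) * (U @ (g_hat * U[i, :]))   # generalized translation to i
        for k in range(N):
            atom = np.sqrt(N) * U[:, k] * T_i_g        # generalized modulation by u_k
            S[i, k] = f @ atom
    return S
```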

http://arxiv.org/abs/1307.5708 2013/07/24 - 19:41

Region-based artificial attention constitutes a framework for bio-inspired
attentional processes on an intermediate abstraction level for the use in
computer vision and mobile robotics. Segmentation algorithms produce regions of
coherently colored pixels. These serve as proto-objects on which the
attentional processes determine image portions of relevance. A single
region---which does not necessarily represent a full object---constitutes the focus
of attention. For many post-attentional tasks, however, such as identifying or
tracking objects, single segments are not sufficient. Here, we present a
saliency-guided approach that groups regions that potentially belong to the
same object based on proximity and similarity of motion. We compare our results
to object selection by thresholding saliency maps and a further
attention-guided strategy.

http://arxiv.org/abs/1307.5710 2013/07/24 - 19:41

Navigating through a visual maze relies on the strategic use of eye movements
to select and identify the route. When navigating the maze, there are
trade-offs between exploring the environment and relying on memory. This
study examined strategies used to navigate through novel and familiar mazes
that were viewed from above and traversed by a mouse cursor. Eye and mouse
movements revealed two modes that almost never occurred concurrently:
exploration and guidance. Analyses showed that people learned mazes and were
able to devise and carry out complex, multi-faceted strategies that traded off
visual exploration against active motor performance. These strategies took into
account available visual information, memory, confidence, the estimated cost in
time for exploration, and idiosyncratic tolerance for error. Understanding the
strategies humans used for maze solving is valuable for applications in
cognitive neuroscience as well as in AI, robotics and human-robot interactions.

http://arxiv.org/abs/1307.5713 2013/07/24 - 19:41

Jamming techniques require just moderate resources to be deployed, while
their effectiveness in disrupting communications is unprecedented. In this
paper we introduce several contributions to jamming mitigation. In particular,
we introduce a novel adversary model that has both (unlimited) jamming reactive
capabilities as well as powerful (but limited) proactive jamming capabilities.
Under this powerful but yet realistic adversary model, the communication
bandwidth provided by current anti-jamming solutions drops to zero. We then
present Silence is Golden (SiG): a novel anti-jamming protocol that, by
introducing a tunable, asymmetric communication channel, is able to mitigate
the adversary's capabilities, enabling the parties to communicate. For instance,
with SiG it is possible to deliver a 128-bit message with probability
greater than 99% in 4096 time slots in the presence of a jammer that jams all
on-the-fly communications and 74% of the silent radio spectrum---while
competing proposals simply fail. The proposed solution enjoys a thorough
theoretical analysis and is supported by extensive experimental results,
showing the viability of our proposal.

http://arxiv.org/abs/1307.5714 2013/07/24 - 19:41

The information available to robots in real tasks is widely distributed both
in time and space, requiring the agent to search for relevant data. Humans,
who face the same problem when sounds, images and smells reach their senses in
a daily scene, apply a natural mechanism: attention. As vision plays an
important role in our routine, most research regarding attention has involved
this sensory system, and the same has been replicated in the robotics field.
However, most robotics tasks nowadays do not rely only on visual data, which
are still costly. To allow the use of attentive concepts with other robotic
sensors usually employed in tasks such as navigation, self-localization,
searching and mapping, a generic attentional model has previously been
proposed. In this work, feature-mapping functions were designed to build
feature maps for this attentive model from range-scanner and sonar data.
Experiments were performed in a high-fidelity simulated robotics environment,
and the results demonstrate the capability of the model to deal with both
salient stimuli and goal-driven attention over multiple features extracted
from multiple sensors.

http://arxiv.org/abs/1307.5720 2013/07/24 - 19:41

The practice of compressive sensing suffers significantly in terms of the
efficiency/accuracy trade-off when acquiring noisy signals prior to
measurement. It is rather common to find results treating only the noise
affecting the measurements, thereby avoiding the so-called noise-folding
phenomenon: the noise in the signal itself, eventually amplified by the
measurement procedure. In this paper we present a new decoding procedure,
combining l1-minimization followed by selective least p-powers, which not
only is able to reduce this component of the original noise, but also has
enhanced properties in terms of support identification with respect to the
sole l1-minimization. We prove such features, providing relatively simple and
precise theoretical guarantees. We additionally confirm and support the
theoretical estimates by extensive numerical simulations, which give
statistics of the robustness of the new decoding procedure with respect to the
more classical l1-minimization.

Keywords: noise folding in compressed sensing, support identification,
l1-minimization, selective least p-powers, iterative thresholding, phase
transitions.
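
The two-stage structure can be sketched as follows, with ordinary least squares on the detected support standing in for the selective least p-powers step (a simplification made here for brevity):

```python
import numpy as np
from sklearn.linear_model import Lasso

def two_stage_decode(A, y, alpha=0.05, thresh=1e-3):
    # stage 1: l1-minimization (in lasso form) to locate a candidate support
    x1 = Lasso(alpha=alpha, fit_intercept=False, max_iter=20000).fit(A, y).coef_
    support = np.flatnonzero(np.abs(x1) > thresh)
    # stage 2: re-estimate the signal on the detected support only
    x2 = np.zeros(A.shape[1])
    x2[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return x2
```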

http://arxiv.org/abs/1307.5725 2013/07/24 - 19:41

In this work, we define cost-free learning (CFL) formally in comparison with
cost-sensitive learning (CSL). The main difference between them is that a CFL
approach seeks optimal classification results without requiring any cost
information, even in the class imbalance problem. In fact, several CFL
approaches exist in the related studies, such as sampling and some
criteria-based approaches. However, to the best of our knowledge, none of the
existing CFL and CSL approaches is able to process abstaining classifications
properly when no information is given about errors and rejects. Based on
information theory, we propose a novel CFL approach which seeks to maximize
the normalized mutual information between the targets and the decision outputs
of classifiers. Using this strategy, we can deal with binary/multi-class
classifications with or without abstaining. The new strategy exhibits
significant features: as the degree of class imbalance changes, it balances
the errors and rejects accordingly and automatically. Another advantage of the
strategy is its ability to derive
optimal rejection thresholds for abstaining classifications and the
"equivalent" costs in binary classifications. The connection between rejection
thresholds and ROC curve is explored. Empirical investigation is made on
several benchmark data sets in comparison with other existing approaches. The
classification results demonstrate a promising perspective of the strategy in
machine learning.
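
Computing the normalized mutual information between targets and decisions, with rejects forming an extra decision class, might look like this (normalization by the target entropy is one common convention and is assumed here):

```python
import numpy as np

def normalized_mi(targets, decisions, n_classes, reject=-1):
    cols = list(range(n_classes)) + [reject]          # rejects as an extra column
    counts = np.zeros((n_classes, len(cols)))
    for t, d in zip(targets, decisions):
        counts[t, cols.index(d)] += 1
    P = counts / counts.sum()                         # joint distribution
    pt = P.sum(axis=1, keepdims=True)                 # target marginal
    pd = P.sum(axis=0, keepdims=True)                 # decision marginal
    nz = P > 0
    mi = np.sum(P[nz] * np.log(P[nz] / (pt @ pd)[nz]))
    h_t = -np.sum(pt[pt > 0] * np.log(pt[pt > 0]))    # target entropy
    return mi / h_t

print(normalized_mi([0, 0, 1, 1, 1], [0, -1, 1, 1, 0], n_classes=2))
```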

http://arxiv.org/abs/1307.5730 2013/07/24 - 19:41

An efficient speech to text converter for mobile application is presented in
this work. The prime motive is to formulate a system which would give optimum
performance in terms of complexity, accuracy, delay and memory requirements for
a mobile environment. The speech to text converter consists of two stages, namely
front-end analysis and pattern recognition. The front end analysis involves
preprocessing and feature extraction. The traditional voice activity detection
algorithms which track only energy cannot successfully identify potential
speech from input because the unwanted part of the speech also has some energy
and appears to be speech. In the proposed system, a VAD that calculates the
energy of the high-frequency part separately, along with the zero-crossing
rate, is used to differentiate noise from speech. Mel Frequency Cepstral
Coefficients (MFCC) are used for feature extraction and a Generalized
Regression Neural Network is used as the recognizer. MFCC provides a low word
error rate and better feature extraction, and the neural network improves the
accuracy. Thus a small database containing all possible syllable
pronunciations of the user is sufficient to give recognition accuracy close to
100%. The proposed technique therefore enables the realization of real-time
speaker-independent applications on mobile phones, PDAs, etc.
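
A sketch of the described VAD features, high-band energy alongside the zero-crossing rate (the 2 kHz cutoff, the thresholds and the combination rule are all illustrative assumptions):

```python
import numpy as np

def frame_features(frame, sr):
    # high-frequency energy and zero-crossing rate of one audio frame
    spectrum = np.fft.rfft(frame * np.hamming(len(frame)))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    high_energy = np.sum(np.abs(spectrum[freqs > 2000.0]) ** 2) / len(frame)
    zcr = np.mean(np.abs(np.diff(np.signbit(frame).astype(int))))
    return high_energy, zcr

def is_speech(frame, sr, energy_thresh=1e-2, zcr_thresh=0.25):
    high_energy, zcr = frame_features(frame, sr)
    return high_energy > energy_thresh or zcr > zcr_thresh
```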

http://arxiv.org/abs/1307.5736 2013/07/24 - 19:41

In video-surveillance, person re-identification is the task of recognising
whether an individual has already been observed over a network of cameras.
Typically, this is achieved by exploiting the clothing appearance, as classical
biometric traits like the face are impractical in real-world video surveillance
scenarios. Clothing appearance is represented by means of low-level
local and/or global features of the image, usually extracted
according to some part-based body model to treat different body parts (e.g.
torso and legs) independently. This paper provides a comprehensive review of
current approaches to build appearance descriptors for person
re-identification. The most relevant techniques are described in detail, and
categorised according to the body models and features used. The aim of this
work is to provide a structured body of knowledge and a starting point for
researchers willing to conduct novel investigations on this challenging topic.

http://arxiv.org/abs/1307.5748 2013/07/24 - 19:41

Trusted timestamping consists in proving that certain data existed at a
particular point in time. Existing timestamping methods require either a
centralized and dedicated trusted service or the collaboration of other
participants using the timestamping service. We propose a novel trusted
timestamping scheme, called DNStamp, that does not require a dedicated service
nor collaboration between participants. DNStamp produces shortlived timestamps
with a validity period of several days. The generation and verification
involves a large number of Domain Name System cache resolvers, thus removing
any single point of failure and any single point of trust. Any host with
Internet access may request or verify a timestamp, with no need to register to
any timestamping service. We provide a full description and analysis of
DNStamp. We analyze the security against various adversaries and show
resistance to forward-dating, back-dating and erasure attacks. Experiments with
our implementation of DNStamp show that one can set and then reliably verify
timestamps even under continuous attack conditions.

http://arxiv.org/abs/1307.5756 2013/07/24 - 19:41

1-in-3 SAT is an NP-complete variant of 3-SAT where a "clause" is satisfied
iff exactly one of its three literals is satisfied. We present here an exact
algorithm solving 1-in-3 SAT in time $O^*(1.260^n)$.
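
For concreteness, the satisfaction condition can be checked by brute force; this O*(2^n) reference implementation only illustrates the definition, not the paper's O*(1.260^n) algorithm:

```python
from itertools import product

def one_in_three_sat(clauses, n_vars):
    """Clauses are triples of literals: +v for variable v, -v for its negation.
    A clause is satisfied iff exactly one of its three literals is true."""
    for assignment in product([False, True], repeat=n_vars):
        truth = lambda lit: assignment[abs(lit) - 1] ^ (lit < 0)
        if all(sum(map(truth, clause)) == 1 for clause in clauses):
            return assignment
    return None

print(one_in_three_sat([(1, 2, 3), (-1, 2, -3)], n_vars=3))   # (False, False, True)
```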

http://arxiv.org/abs/1307.5776 2013/07/24 - 19:41

Efficient security management has become an important parameter in today's
world. As the problem grows, there is an urgent need to introduce advanced
technology and equipment to improve the state of the art of surveillance. In
this paper we propose a model for real-time background subtraction using an
adaptive Gaussian mixture model (AGMM). The proposed model is robust and
adaptable to dynamic backgrounds, fast illumination changes and repetitive
motion. We have also incorporated a method for detecting shadows using the
Horprasert color model. The proposed model can be employed for monitoring
areas where movement or entry is highly restricted, so that on detection of
any unexpected event in the scene an alarm can be triggered, achieving
real-time surveillance even in the absence of constant human monitoring.
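
OpenCV ships an adaptive-Gaussian-mixture background subtractor with built-in shadow handling, which approximates such a pipeline off the shelf (MOG2 is not the paper's AGMM/Horprasert combination; the file name and alarm threshold are assumptions):

```python
import cv2

cap = cv2.VideoCapture("entrance.avi")     # assumed input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)         # 255 = foreground, 127 = shadow, 0 = background
    foreground = cv2.countNonZero(cv2.inRange(mask, 255, 255))
    if foreground > 5000:                  # assumed alarm threshold, in pixels
        print("unexpected event: trigger alarm")
cap.release()
```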

http://arxiv.org/abs/1307.5800 2013/07/24 - 19:41

This paper considers a cooperative network with multiple source-destination
pairs and one energy harvesting relay. The outage probability experienced by
users in this network is characterized by taking the spatial randomness of user
locations into consideration. In addition, the cooperation among users is
modeled as a canonical coalitional game and the grand coalition is shown to be
stable in the addressed scenario. Simulation results are provided to
demonstrate the accuracy of the developed analytical results.

http://arxiv.org/abs/1307.5827 2013/07/24 - 19:41

For a set of points in the plane and a fixed integer k > 0, the Yao graph Y_k
partitions the space around each point into k equiangular cones, and connects
each point to a nearest neighbor in each cone. With the exception of Y_5, it is
known for all Yao graphs whether or not they are geometric spanners. In this
paper we resolve this gap and show that Y_5 is a t-spanner with spanning ratio
t less than 10.9. We also improve the known spanning ratio of all the Yao
graphs for odd k > 5. Finally, we revisit the Y_6 graph, which plays a
particularly important role as the transition between the graphs (k > 6) for
which simple inductive proofs are known, and the graphs (k \le 6) whose best
spanning ratios are established by complex arguments. Here we reduce the known
spanning ratio of Y_6 from 17.6 to 5.8.
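
Constructing a (directed) Yao graph is straightforward; a sketch for points in the plane:

```python
import numpy as np

def yao_graph(points, k):
    """Y_k: connect each point to a nearest neighbour in each of its
    k equiangular cones."""
    edges = set()
    for i, p in enumerate(points):
        d = points - p
        angle = np.mod(np.arctan2(d[:, 1], d[:, 0]), 2 * np.pi)
        cone = (angle // (2 * np.pi / k)).astype(int)
        norm = np.linalg.norm(d, axis=1)
        for c in range(k):
            members = [j for j in range(len(points)) if j != i and cone[j] == c]
            if members:
                edges.add((i, min(members, key=lambda j: norm[j])))
    return edges

pts = np.random.default_rng(1).random((30, 2))
print(len(yao_graph(pts, k=5)))
```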

http://arxiv.org/abs/1307.5829 2013/07/24 - 19:41

In this paper we introduce a capacity allocation game which models the
problem of maximizing network utility from the perspective of distributed
noncooperative agents. Motivated by the idea of self-managed networks, in the
developed framework decision-making entities are associated with individual
transmission links, deciding on the way they split capacity among concurrent
flows. An efficient decentralized algorithm is given for computing strongly
Pareto-optimal strategies, constituting a pure Nash equilibrium. Subsequently,
we discuss the properties of the introduced game related to the Price of
Anarchy and Price of Stability. The paper is concluded with an experimental
study.

http://arxiv.org/abs/1206.2448 2013/07/24 - 19:41

The paper studies the problem of recovering a spectrally sparse object from a
small number of time domain samples. Specifically, the object of interest with
ambient dimension $n$ is assumed to be a mixture of $r$ complex sinusoids,
while the underlying frequencies can assume any continuous values in the unit
disk. Conventional compressed sensing paradigms suffer from the basis mismatch
issue when imposing a discrete dictionary on the Fourier representation. To
address this problem, we develop a novel nonparametric algorithm, called
Enhanced Matrix Completion (EMaC), based on structured matrix completion. The
algorithm starts by arranging the data into a low-rank enhanced form with
multi-fold Hankel structure whose rank is upper bounded by $r$, and then
attempts recovery via nuclear norm minimization. Under mild incoherence
conditions, EMaC allows perfect recovery as soon as the number of samples
exceeds the order of $r\log^{2}n$, and is robust against bounded noise. Even if
a constant portion of samples are corrupted with arbitrary magnitude, EMaC can
still allow accurate recovery if the number of samples exceeds the order of
$r\log^{3}n$. Along the way, our results demonstrate that accurate completion
of a low-rank multi-fold Hankel matrix is possible when the number of observed
entries is proportional to the information theoretical limits (except for a
logarithmic gap) under mild conditions. The performance of our algorithm and
its applicability to super resolution are further demonstrated by numerical
experiments.
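
The rank property underlying the enhanced form is easy to verify numerically in the 1-D case, where the arrangement is an ordinary Hankel matrix:

```python
import numpy as np
from scipy.linalg import hankel

n, t = 64, np.arange(64)
freqs = [0.113, 0.38, 0.71]                          # off-grid continuous frequencies
x = sum(np.exp(2j * np.pi * f * t) for f in freqs)   # mixture of r = 3 sinusoids
H = hankel(x[: n // 2 + 1], x[n // 2:])              # enhanced (Hankel) arrangement
print(np.linalg.matrix_rank(H, tol=1e-8))            # 3: rank is bounded by r
```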

http://arxiv.org/abs/1304.8126 2013/07/24 - 19:41

Ontology Matching aims to find a set of semantic correspondences, called an
alignment, between related ontologies. In recent years, there has been a
growing interest in efficient and effective matching methods for large
ontologies. However, most of the alignments produced for large ontologies are
logically incoherent. It was only recently that the use of repair techniques to
improve the quality of ontology alignments has been explored. In this paper we
present a novel technique for detecting incoherent concepts based on ontology
modularization, and a new repair algorithm that minimizes the incoherence of
the resulting alignment and the number of matches removed from the input
alignment. An implementation was done as part of a lightweight version of the
AgreementMaker system, a successful ontology matching platform, and evaluated
on a set of four benchmark biomedical ontology matching tasks. Our results
show that our implementation is efficient and produces better alignments with
respect to their coherence and f-measure than state-of-the-art repair
tools. They also show that our implementation is a better alternative for
producing coherent silver standard alignments.

http://arxiv.org/abs/1307.5322 2013/07/23 - 18:05

Let A be a finite alphabet and f: A^* --> A^* be a morphism with an iterative
fixed point f^\omega(\alpha), where \alpha{} is in A. Consider the subshift (X,
T), where X is the shift orbit closure of f^\omega(\alpha) and T: X --> X is
the shift map. Let S be a finite alphabet that is in bijective correspondence
via a mapping c with the set of nonempty suffixes of the images f(a) for a in
A. Let calS be the subset of S^N consisting of the infinite words s = (s_n)_{n\geq 0}
such that \pi(s) := c(s_0)f(c(s_1))f^2(c(s_2))... is in X. We show that if f is
primitive and f(A) is a suffix code, then there exists a mapping H: calS -->
calS such that (calS, H) is a topological dynamical system and \pi: (calS, H)
--> (X, T) is a conjugacy; we call (calS, H) the suffix conjugate of (X, T). In
the special case when f is the Fibonacci or the Thue-Morse morphism, we show
that the subshift (calS, T) is sofic, that is, the language of calS is regular.

http://arxiv.org/abs/1307.5329 2013/07/23 - 18:05

We are after a generator of the elimination ideal of an ideal generated by
two polynomials in two variables. Such a generator is unique (up to
multiplication by units) and can be computed via Gröbner bases. We are
interested in finding the generator of the elimination ideal using the
resultant of the generators of the ideal. All the factors of the generator are
factors of the resultant, but the multiplicities might be different.
Geometrically, we are looking at two algebraic curves: factors of the
resultant give us projections of all the intersection points with their
multiplicities. We investigate the difference between the multiplicities of
the factors in the Gröbner basis and in the resultant. In the case of general
ideals we can express the difference between the variety of the elimination
ideal and the variety of the set of resultants of pairs of the generators. We
also show an attempt to use resultants in Gröbner basis computations.
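
The objects involved are readily computed with a computer algebra system; a small SymPy example comparing the resultant with the lex Gröbner basis for a circle and a line:

```python
from sympy import symbols, groebner, resultant

x, y = symbols('x y')
f = x**2 + y**2 - 1          # a circle
g = x - y                    # a line

res = resultant(f, g, x)     # eliminate x: 2*y**2 - 1
gb = groebner([f, g], x, y, order='lex')
elim = [p for p in gb.exprs if x not in p.free_symbols]
print(res, elim)             # same irreducible factors; multiplicities can differ
```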

http://arxiv.org/abs/1307.5330 2013/07/23 - 18:05

The use of robo-readers to analyze news texts is an emerging technology trend
in computational finance. In recent research, a substantial effort has been
invested to develop sophisticated financial polarity-lexicons that can be used
to investigate how financial sentiments relate to future company performance.
However, based on experience from other fields, where sentiment analysis is
commonly applied, it is well-known that the overall semantic orientation of a
sentence may differ from the prior polarity of individual words. The objective
of this article is to investigate how semantic orientations can be better
detected in financial and economic news by accommodating the overall
phrase-structure information and domain-specific use of language. Our three
main contributions are: (1) establishment of a human-annotated finance
phrase-bank, which can be used as benchmark for training and evaluating
alternative models; (2) presentation of a technique to enhance financial
lexicons with attributes that help to identify expected direction of events
that affect overall sentiment; (3) development of a linearized phrase-structure
model for detecting contextual semantic orientations in financial and economic
news texts. The relevance of the newly added lexicon features and the benefit
of using the proposed learning-algorithm are demonstrated in a comparative
study against previously used general sentiment models as well as the popular
word frequency models used in recent financial studies. The proposed framework
is parsimonious and avoids the explosion in feature-space caused by the use of
conventional n-gram features.

http://arxiv.org/abs/1307.5336 2013/07/23 - 18:05

The development of energy selective, photon counting X-ray detectors allows
for a wide range of new possibilities in the area of computed tomographic image
formation. Under the assumption of perfect energy resolution, here we propose a
tensor-based iterative algorithm that simultaneously reconstructs the X-ray
attenuation distribution for each energy. We use a multi-linear image model
rather than a more standard "stacked vector" representation in order to develop
novel tensor-based regularizers. Specifically, we model the multi-spectral
unknown as a 3-way tensor where the first two dimensions are space and the
third dimension is energy. This approach allows for the design of tensor
nuclear norm regularizers, which, like their two-dimensional counterpart, are
convex functions of the multi-spectral unknown. The solution to the resulting
convex optimization problem is obtained using an alternating direction method
of multipliers (ADMM) approach. Simulation results show that the generalized
tensor nuclear norm can be used as a stand-alone regularization technique for
the energy-selective (spectral) computed tomography (CT) problem, and when
combined with total variation regularization it enhances the regularization
capabilities, especially for low-energy images where the effects of noise are
most prominent.

http://arxiv.org/abs/1307.5348 2013/07/23 - 18:05

The locking effect is a phenomenon which is unique to quantum information
theory and represents one of the strongest separations between the classical
and quantum theories of information. The Fawzi-Hayden-Sen (FHS) locking
protocol harnesses this effect in a cryptographic context, whereby one party
can encode n bits into n qubits while using a log(n)-bit secret key, with the
guarantee that an eavesdropper who performs a measurement on the encoded qubits
can recover only an arbitrarily small number of the encoded bits. This is an
extreme violation of Shannon's classical theorem, which states that
information-theoretic security holds in the classical case if and only if the
secret key is the same size as the message. Given this intriguing phenomenon,
it is of practical interest to study this effect in the presence of noise,
which can occur on both the system of the legitimate receiver and the
eavesdropper. This paper formally defines the locking capacity of a quantum
channel as the maximum amount of locked information that can be reliably
transmitted to a legitimate receiver by exploiting many independent uses of a
quantum channel and an amount of secret key sublinear in the number of channel
uses. We provide general operational bounds on the locking capacity in terms of
other well-known capacities from quantum Shannon theory. We also study the
important case of bosonic channels, finding limitations on these channels'
locking capacity when coherent-state encodings are employed and particular
locking protocols for these channels that might be physically implementable.

http://arxiv.org/abs/1307.5368 2013/07/23 - 18:05