
Computer Science papers on arXiv.org

Here we discuss the application of an edge detection filter, the Sobel filter
of GIMP, to the recently discovered motion of some sand dunes on Mars. The
filter allows a good comparison of a 2007 HiRISE image with a 1999 image of
the dunes in the Nili Patera caldera recorded by the Mars Global Surveyor,
thereby measuring the motion of the dunes over a longer period of time than
previously investigated.
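The comparison can be sketched with a plain Sobel operator. The tiny synthetic "images" below stand in for the HiRISE and Mars Global Surveyor frames (illustrative data, not the paper's actual pipeline):

```python
def sobel_magnitude(img):
    """Gradient magnitude of a 2D grayscale image (list of lists)."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical kernel
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A dune front as a brightness step: its edge appears as a ridge of large
# gradient magnitude, and overlaying the two edge maps reveals the motion.
img_1999 = [[0] * 4 + [9] * 4 for _ in range(8)]   # edge near column 4
img_2007 = [[0] * 5 + [9] * 3 for _ in range(8)]   # edge shifted by one pixel
edges_1999 = sobel_magnitude(img_1999)
edges_2007 = sobel_magnitude(img_2007)
```

The ridge of the edge map moves by one column between the two frames, which is exactly the kind of displacement the filter makes easy to see.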

http://arxiv.org/abs/1308.5315 2013/08/27 - 11:57

What is the effect of the combined direct and indirect social influences,
known as peer pressure (PP), on a social group's collective decisions? We
present a model that captures PP as a function of the socio-cultural distance
between individuals in a social group. Using this model and empirical data
from 15 real-world social networks, we found that the PP level determines how
fast a social group reaches
consensus. More importantly, the levels of PP determine the leaders who can
achieve full control of their social groups. PP can overcome barriers imposed
upon a consensus by the existence of tightly connected communities with local
leaders or the existence of leaders with poor cohesiveness of opinions. A
moderate level of PP is also necessary to explain the rate at which innovations
diffuse through a variety of social groups.
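The paper's model is not reproduced here; as a loose, hypothetical illustration of how a peer-pressure level could control consensus speed, consider a DeGroot-style averaging process whose inter-agent influence scales with a PP parameter and decays with socio-cultural distance:

```python
import math

def consensus_steps(pp, opinions, dist, eps=1e-3):
    """Steps of DeGroot-style averaging until opinions agree within eps.

    Illustrative assumption: the influence of agent j on agent i scales
    with the peer-pressure level pp and decays with distance dist[i][j].
    """
    x = list(opinions)
    n = len(x)
    for step in range(1, 10000):
        new = []
        for i in range(n):
            wsum, acc = 1.0, x[i]                 # self-weight of 1
            for j in range(n):
                if j != i:
                    w = pp * math.exp(-dist[i][j])
                    wsum += w
                    acc += w * x[j]
            new.append(acc / wsum)
        x = new
        if max(x) - min(x) < eps:
            return step
    return None

# Four agents with polarized opinions at unit distances.
dist = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
```

In this toy setting a higher PP level mixes opinions more strongly per step, so consensus is reached in fewer steps, mirroring the abstract's claim that the PP level determines how fast a group converges.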

http://arxiv.org/abs/1308.5317 2013/08/27 - 11:57

The present study gives a mathematical framework for self-evolution within
autonomous problem solving systems. Special attention is given to universal
abstraction, its generation by net block homomorphism, the resulting
multiple-order solving systems, and the overall decidability of the set of
solutions. Via an overlapping presentation of nets, a new abstraction relation
among nets is formulated, together with a consequent alphabetical net block
renetting system proportional to normal forms of renetting systems in terms
of operational power. A new structure in self-evolving problem solving is
established via saturation by groups of equivalence relations and iterative
closures of generated quotient transducer algebras over the whole evolution.

http://arxiv.org/abs/1308.5321 2013/08/27 - 11:57

Based on fixed point theory, this paper proposes a simple but efficient
method for image integrity authentication, which is different from Digital
Signature and Fragile Watermarking. By this method, any given image can be
transformed into a fixed point of a well-chosen function, which can be
constructed with periodic functions. The authentication can be realized due to
the fragility of the fixed points. The experiments show that the 'Fixed Point
Image' performs well in terms of security, transparency, fragility and tamper
localization.
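The paper's exact construction is not given in the abstract; a minimal sketch of the idea, assuming the illustrative periodic choice f(v) = v - sin(2*pi*v)/(2*pi), whose attracting fixed points are the integers, is:

```python
import math

def f(v):
    """Illustrative map built from a periodic function; f(v) = v exactly
    when sin(2*pi*v) = 0, and the integers are its attracting fixed points."""
    return v - math.sin(2 * math.pi * v) / (2 * math.pi)

def to_fixed_point(img, iters=60):
    """Iterate f so every pixel converges onto an attracting fixed point."""
    out = [list(row) for row in img]
    for _ in range(iters):
        out = [[f(px) for px in row] for row in out]
    return out

def is_authentic(img, tol=1e-9):
    """Authentic iff every pixel still sits (numerically) on a fixed point."""
    return all(abs(f(px) - px) < tol for row in img for px in row)

protected = to_fixed_point([[0.3, 1.7], [2.2, 5.0]])
```

Tampering moves pixels off the fixed-point set, so `is_authentic` fails exactly on the modified pixels, which is the fragility-based detection the abstract describes.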

http://arxiv.org/abs/1308.5326 2013/08/27 - 11:57

We discuss the problem of runtime verification of an instrumented program
that fails to emit and to monitor some events. These gaps can occur when a
monitoring overhead control mechanism is introduced to disable the monitor of
an application with real-time constraints. We show how to use statistical
models to learn the application behavior and to "fill in" the introduced gaps.
Finally, we present and discuss some techniques developed in the last three
years to estimate the probability that a property of interest is violated in
the presence of an incomplete trace.

http://arxiv.org/abs/1308.5329 2013/08/27 - 11:57

Formal verification has been successfully developed in computer science for
verifying combinatorial classes of models and specifications. In like manner,
formal verification methods have been developed for dynamical systems. However,
the verification of system properties, such as safety, is based on reachability
calculations, which are the sources of insurmountable complexity. This talk
addresses indirect verification methods, which are based on abstracting the
dynamical systems by models of reduced complexity and preserving central
properties of the original systems.

http://arxiv.org/abs/1308.5330 2013/08/27 - 11:57

Networked Embedded Control Systems are distributed control systems where the
communication among plants, sensors, actuators and controllers occurs in a
shared network. They have been the subject of intensive study in the last few
years. In this paper we survey our contribution to this research topic.

http://arxiv.org/abs/1308.5331 2013/08/27 - 11:57

Complex systems are naturally hybrid: their dynamic behavior is both
continuous and discrete. For these systems, maintenance and repair are an
increasing part of the total cost of the final product. Efficient diagnosis and
prognosis techniques have to be adopted to detect, isolate and anticipate
faults. This paper presents an original integrated theoretical framework for
diagnosis and prognosis of hybrid systems. The formalism used for hybrid
diagnosis is enriched in order to be able to follow the evolution of an aging
law for each fault of the system. The paper presents a methodology for
interleaving diagnosis and prognosis in a hybrid framework.

http://arxiv.org/abs/1308.5332 2013/08/27 - 11:57

In this work, we continue our study on discrete abstractions of dynamical
systems. To this end, we use a family of partitioning functions to generate an
abstraction. The intersection of sub-level sets of the partitioning functions
defines cells, which are regarded as discrete objects. The union of cells makes
up the state space of the dynamical systems. Our construction gives rise to a
combinatorial object - a timed automaton. We examine sound and complete
abstractions. An abstraction is said to be sound when the flow of the timed
automaton covers the flow lines of the dynamical system. If the dynamics of
the dynamical system and the timed automaton are equivalent, the abstraction
is complete.
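The cell construction can be sketched minimally, assuming (as an illustration) that a cell is identified by which sub-level sets f_i(x) <= 0 a state lies in:

```python
def cell(x, partitioning_fns):
    """Discrete cell of a state: which sub-level sets f_i(x) <= 0 it lies in."""
    return tuple(f(x) <= 0 for f in partitioning_fns)

# Two half-plane partitioning functions cut the plane into four cells,
# each of which becomes a discrete location of the timed automaton.
fns = [lambda p: p[0], lambda p: p[1]]
```

The intersection pattern of sub-level sets is exactly the tuple returned by `cell`, so each distinct tuple corresponds to one discrete object of the abstraction.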

The commonly accepted paradigm for partitioning functions is that they ought
to be transversal to the studied vector field. We show that there is no
complete partitioning with transversal functions, even for particular dynamical
systems whose critical sets are isolated critical points. Therefore, we allow
the directional derivative along the vector field to be non-positive in this
work. This considerably complicates the abstraction technique. For
understanding dynamical systems, it is vital to study stable and unstable
manifolds and their intersections. These objects appear naturally in this work.
Indeed, we show that for an abstraction to be complete, the set of critical
points of an abstraction function must contain either the stable or the
unstable manifold of the dynamical system.

http://arxiv.org/abs/1308.5333 2013/08/27 - 11:57

Hybrid automata are a natural framework for modeling and analyzing systems
which exhibit a mixed discrete-continuous behaviour. However, the standard
operational semantics defined over such models implicitly assumes perfect
knowledge of the real systems and infinite-precision measurements. Such
assumptions are not only unrealistic, but often lead to the construction of
misleading models. For these reasons we believe that it is necessary to
introduce more flexible semantics able to cope with noise, partial
information, and finite-precision instruments. In particular, in this paper we
integrate different over- and under-approximation techniques for hybrid
automata into a single framework based on approximated semantics. Our
framework allows one to compare, combine, and generalize such techniques,
obtaining different approximated reachability algorithms.

http://arxiv.org/abs/1308.5334 2013/08/27 - 11:57

We propose an extension of Hybrid I/O Automata (HIOAs) to model agent systems
and their implicit communication through perturbation of the environment, like
localization of objects or radio signals diffusion and detection. The new
object, called World Automaton (WA), is built in such a way to preserve as much
as possible of the compositional properties of HIOAs and its underlying theory.
From the formal point of view we enrich classical HIOAs with a set of world
variables whose values are functions both of time and space. World variables
are treated similarly to local variables of HIOAs, except in parallel
composition, where the perturbations produced by world variables are summed.
In this way, we obtain a structure able to model both agents and environments,
thus inducing a hierarchy in the model and leading to the introduction of a
new operator. This operator, called inplacement, is needed to represent the
possibility for an object (WA) to live inside another object/environment (WA).

http://arxiv.org/abs/1308.5335 2013/08/27 - 11:57

The model-checking problem for hybrid systems is a well known challenge in
the scientific community. Most of the existing approaches and tools are
limited to safety properties only, or operate by transforming the hybrid
system to be verified into a discrete one, thus losing information on the
continuous dynamics of the system. In this paper we present a logic for
specifying complex
properties of hybrid systems called HyLTL, and we show how it is possible to
solve the model checking problem by translating the formula into an equivalent
hybrid automaton. In this way the problem is reduced to a reachability problem
on hybrid automata that can be solved by using existing tools.

http://arxiv.org/abs/1308.5336 2013/08/27 - 11:57

Decentralized monitoring (DM) refers to a monitoring technique where each
component must infer, based on a set of partial observations, whether the
global property is satisfied. Our work is inspired by the theoretical results
presented by Bauer and Falcone at FM 2012, where the authors introduced an
algorithm for distributing and monitoring LTL formulae such that satisfaction
or violation of specifications can be detected by local monitors alone.
However, their work rests on the assumption that neither computation nor
communication takes time, and hence it does not address how to set a sampling
time among the components such that their local traces
are consistent. In this work we provide a timed model in UPPAAL and we show a
case study on a networked embedded systems board.

http://arxiv.org/abs/1308.5337 2013/08/27 - 11:57

We present a hybrid model of a biological filter, a genetic circuit which
removes fast fluctuations in the cell's internal representation of the
extracellular environment. The model takes the classic feed-forward loop
(FFL) motif
and represents it as a network of continuous protein concentrations and binary,
unobserved gene promoter states. We address the problem of statistical
inference and parameter learning for this class of models from partial,
discrete time observations. We show that the hybrid representation leads to an
efficient algorithm for approximate statistical inference in this circuit, and
show its effectiveness on a simulated data set.

http://arxiv.org/abs/1308.5338 2013/08/27 - 11:57

In this paper we study solutions to stochastic differential equations (SDEs)
with discontinuous drift. We apply two approaches: The Euler-Maruyama method
and the Fokker-Planck equation and show that a candidate density function based
on the Euler-Maruyama method approximates a candidate density function based on
the stationary Fokker-Planck equation. Furthermore, we introduce a smooth
function which approximates the discontinuous drift and apply the
Euler-Maruyama method and the Fokker-Planck equation with this input. The point
of departure for this work is a particular SDE with discontinuous drift.
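Both ingredients can be sketched with illustrative parameters, assuming the discontinuous drift b(x) = -sign(x) and a smooth tanh approximation of it:

```python
import math
import random

def euler_maruyama(drift, x0=1.0, sigma=0.5, dt=1e-3, steps=5000, seed=7):
    """Simulate dX = drift(X) dt + sigma dW with the Euler-Maruyama scheme."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        x += drift(x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return x

def b(x):
    return -1.0 if x > 0 else 1.0        # discontinuous drift -sign(x)

def b_eps(x):
    return -math.tanh(x / 0.01)          # smooth approximation of b

# Both drifts pull the path toward 0, where the stationary density of the
# Fokker-Planck equation concentrates (a Laplace-like shape for this SDE).
x_disc = euler_maruyama(b)
x_smooth = euler_maruyama(b_eps)
```

With the same driving noise, the two simulated endpoints behave similarly, illustrating how the smooth drift can stand in for the discontinuous one in both the scheme and the Fokker-Planck analysis.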

http://arxiv.org/abs/1308.5339 2013/08/27 - 11:57

We investigate a compressive sensing framework in which the sensors introduce
a distortion to the measurements in the form of unknown gains. We focus on {\em
blind} calibration, using measurements performed on {\em multiple} unknown (but
sparse) signals and formulate the joint recovery of the gains and the sparse
signals as a convex optimization problem. The first proposed approach is an
extension to the basis pursuit optimization which can estimate the unknown
gains along with the unknown sparse signals. Demonstrating that this approach
is successful for a sufficient number of input signals except in cases where
the phase shifts among the unknown gains vary significantly, a second
approach is proposed that makes use of quadratic basis pursuit optimization to
calibrate for constant amplitude gains with maximum variance in the phases. An
alternative form of this approach is also formulated to reduce the complexity
and memory requirements and provide scalability with respect to the number of
input signals. Finally a third approach is formulated which combines the first
two approaches for calibration of systems with any variation in the gains. The
performance of the proposed algorithms is investigated extensively through
numerical simulations, which demonstrate that simultaneous signal recovery and
calibration is possible when sufficiently many (unknown, but sparse)
calibrating signals are provided.

http://arxiv.org/abs/1308.5354 2013/08/27 - 11:57

This paper discusses the key principles of Gigabit Passive Optical Network
(GPON) which is based on Time Division Multiplexing Passive Optical Network
(TDM PON) and also, Wavelength Division Multiplexing Passive Optical Network
(WDM PON), which is considered to be a next-generation technology. In the
present day scenario, the broadband access is increasing rapidly. Because of
the advantages of fibre access in terms of capacity and cost, most of the
countries have started deploying GPON access as an important part of national
strategy. Though GPON is promising, it has a few limitations, such as
scalability and interoperability. WDM PON, a next-generation network, is more
promising: unlike GPON, it is scalable and interoperable with different
vendors. This paper thus provides an overview of GPON and WDM PON and the
differences between them.

http://arxiv.org/abs/1308.5356 2013/08/27 - 11:57

As a result of the recent advances in physical (PHY) layer communication
techniques, it is possible to receive multiple packets at the receiver
concurrently. This capability of a receiver to decode multiple simultaneous
transmissions is known as multi-packet reception (MPR). In this paper, we
propose a simple Medium Access Control (MAC) protocol for an MPR wireless
channel, where we modify the backoff procedure as a function of the number of
ongoing transmissions in the channel. Our protocol is backward compatible with
the IEEE 802.11 DCF protocol. The performance analysis of the proposed protocol
is carried out using extensive simulations and it is compared with some of the
existing MPR MAC protocols. The proposed mechanism improves the throughput and
delay performance of the IEEE 802.11 DCF.

http://arxiv.org/abs/1308.5360 2013/08/27 - 11:57

As a subclass of linear codes, cyclic codes have applications in consumer
electronics, data storage systems, and communication systems as they have
efficient encoding and decoding algorithms. In this paper, five families of
three-weight ternary cyclic codes whose duals have two zeros are presented. The
weight distributions of the five families of cyclic codes are settled. The
duals of two families of the cyclic codes are optimal.
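The five families themselves are beyond an abstract, but the notion of a weight distribution can be sketched on a toy ternary cyclic code (the generator below is illustrative, not from the paper):

```python
from itertools import product

def cyclic_code_gf3(generator):
    """Span the cyclic shifts of a generator word over GF(3)."""
    n = len(generator)
    shifts = [generator[-k:] + generator[:-k] for k in range(n)]
    words = set()
    for coeffs in product(range(3), repeat=n):
        w = tuple(sum(c * s[i] for c, s in zip(coeffs, shifts)) % 3
                  for i in range(n))
        words.add(w)
    return words

def weight_distribution(code):
    """Count codewords by Hamming weight (number of nonzero coordinates)."""
    dist = {}
    for w in code:
        wt = sum(1 for x in w if x != 0)
        dist[wt] = dist.get(wt, 0) + 1
    return dist

code = cyclic_code_gf3([1, 0, 1, 0])   # generator 1 + x^2, length 4
```

This toy code happens to be two-weight; "settling the weight distribution" for the paper's families means deriving such a table in closed form rather than by enumeration.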

http://arxiv.org/abs/1308.5373 2013/08/27 - 11:57

A {\it dynamic reasoning system} (DRS) is an adaptation of a conventional
formal logical system that explicitly portrays reasoning as a temporal
activity, with each extralogical input to the system and each inference rule
application being viewed as occurring at a distinct time step. Every DRS
incorporates some well-defined logic together with a controller that serves to
guide the reasoning process in response to user inputs. Logics are generic,
whereas controllers are application-specific. Every controller does,
nonetheless, provide an algorithm for nonmonotonic belief revision. The general
notion of a DRS comprises a framework within which one can formulate the logic
and algorithms for a given application and prove that the algorithms are
correct, i.e., that they serve to (i) derive all salient information and (ii)
preserve the consistency of the belief set. This paper illustrates the idea
with ordinary first-order predicate calculus, suitably modified for the present
purpose, and two examples. The second example revisits some classic
nonmonotonic reasoning puzzles (Opus the Penguin, Nixon Diamond) and shows how
these can be resolved in the context of a DRS, using an expanded version of
first-order logic that incorporates typed predicate symbols. All concepts are
rigorously defined and effectively computable, thereby providing the foundation
for a future software implementation.

http://arxiv.org/abs/1308.5374 2013/08/27 - 11:57

Pedestrian behavior has much more complicated characteristics in a dense
crowd and thus attracts the widespread interest of scientists and engineers.
However, even successful modeling approaches, such as pedestrian models based
on particle systems, still do not fully consider the perceptive mechanism
underlying collective pedestrian behavior. This paper extends a
behavioral-heuristics-based pedestrian model to an adaptive agent-based model,
which explicitly considers the crowding effect of neighboring individuals and
perception anisotropy in the representation of a pedestrian's visual
information.
The adaptive agents with crowding perception are constructed to investigate
the complex, self-organized collective dynamics of pedestrian motion for
bidirectional and unidirectional flows. Simulation results show that the
emergence of lane formation in pedestrian counterflow can be well reproduced.
Further investigation shows that increasing the view distance has a
significant effect on reducing the number of lanes, increasing lane width,
and stabilizing the self-organized lanes. The paper also discusses phase
transitions of fundamental
diagrams of pedestrian crowds with unidirectional flow. It is found that the
heterogeneity of crowding perception in the population has a remarkable impact
on the flow quality, which results in the buildup of congestion and rapidly
decreases the efficiency of pedestrian flows. It also indicates that the
concept of heterogeneity may be used to explain the instability of phase
transitions.

http://arxiv.org/abs/1308.5380 2013/08/27 - 11:57

We describe steps toward an interactive directory for the town of Norfolk,
Nebraska for the years 1899 and 1900. This directory would extend the
traditional city directory by including a wider range of entities being
described, much richer information about the entities mentioned and linkages to
mentions of the entities in material such as digitized historical newspapers.
Such a directory would be useful to readers who browse the historical
newspapers by providing structured summaries of the entities mentioned. We
describe the occurrence of entities in two years of the Norfolk Weekly News,
focusing on several individuals to better understand the types of information
which can be gleaned from historical newspapers and other historical materials.
We also describe a prototype program which coordinates information about
entities from the traditional city directories, the federal census, and from
newspapers. We discuss the structured coding for these entities, noting that
richer coding would increasingly include descriptions of events and scenarios.
We propose that rich content about individuals and communities could eventually
be modeled with agents and woven into historical narratives.

http://arxiv.org/abs/1308.5395 2013/08/27 - 11:57

Traffic shaping is a mechanism used by Internet Service Providers (ISPs) to
limit subscribers' traffic based on their service contracts. This paper
investigates the current implementation of traffic shaping based on the token
bucket filter (TBF) and discusses its advantages and disadvantages. It then
proposes a cooperative TBF that can improve subscribers' quality of service
(QoS) and quality of experience (QoE) without compromising the business
aspects of the service contract model, by proportionally allocating excess
bandwidth from inactive subscribers to active ones based on the long-term
bandwidths in their service contracts.
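The baseline (non-cooperative) token bucket can be sketched as follows; the parameters are illustrative, and the paper's cooperative variant would additionally reallocate unused rate between subscribers' buckets:

```python
class TokenBucket:
    """Token bucket filter: tokens accrue at `rate` per second up to `burst`;
    a packet consuming `size` tokens passes only if the bucket holds enough."""

    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def allow(self, now, size):
        elapsed = now - self.last
        self.tokens = min(self.burst, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

tb = TokenBucket(rate=100, burst=200)    # 100 tokens/s, burst of 200
```

An inactive subscriber's bucket simply sits full at `burst`; the cooperative scheme's insight is that the rate feeding such a bucket is wasted and can be lent, in proportion to contracted bandwidths, to active subscribers.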

http://arxiv.org/abs/1308.5397 2013/08/27 - 11:57

Fiore and Hur recently introduced a conservative extension of universal
algebra and equational logic from first to second order. Second-order universal
algebra and second-order equational logic respectively provide a model theory
and a formal deductive system for languages with variable binding and
parameterised metavariables. This work completes the foundations of the subject
from the viewpoint of categorical algebra. Specifically, the paper introduces
the notion of second-order algebraic theory and develops its basic theory. Two
categorical equivalences are established: at the syntactic level, that of
second-order equational presentations and second-order algebraic theories; at
the semantic level, that of second-order algebras and second-order functorial
models. Our development includes a mathematical definition of syntactic
translation between second-order equational presentations. This gives the first
formalisation of notions such as encodings and transforms in the context of
languages with variable binding.

http://arxiv.org/abs/1308.5409 2013/08/27 - 11:57

This paper proposes a measurement approach for estimating the privacy leakage
from Intrusion Detection System (IDS) alarms. Quantitative information flow
analysis is used to build a theoretical model of privacy leakage from IDS
rules, based on information entropy. This theoretical model is subsequently
verified empirically both based on simulations and in an experimental study.
The analysis shows that the metric is able to distinguish between IDS rules
that have no or low expected privacy leakage and IDS rules with a significant
risk of leaking sensitive information, for example on user behaviour. The
analysis is based on measurements of the number of IDS alarms, data length
and data entropy for relevant parts of IDS rules (for example, the payload).
This is a promising approach that opens up the possibility of privacy
benchmarking of Managed Security Service providers.

http://arxiv.org/abs/1308.5421 2013/08/27 - 11:57

Stemming is the process of extracting the root word from a given inflected
word. It plays a significant role in numerous applications of Natural
Language Processing (NLP). The stemming problem has been addressed in many
contexts and by researchers in many disciplines. This expository paper
presents a survey of some of the latest developments in stemming algorithms
in data mining, and also presents some of the solutions for various
Indian-language stemming algorithms along with their results.
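A toy suffix-stripping stemmer illustrates the idea; the suffix table is illustrative, whereas production stemmers such as Porter's use ordered rule sets with conditions, and Indian-language stemmers use language-specific suffix inventories:

```python
# Longest-match-first suffix stripping: try longer suffixes before shorter
# ones, and never reduce a word below a minimal root length.
SUFFIXES = ["ization", "ations", "ation", "ingly", "ings", "ing",
            "edly", "ed", "ly", "es", "s"]

def stem(word, min_root=3):
    """Return the root of `word` by stripping the longest known suffix."""
    for suf in SUFFIXES:                       # list is ordered by length
        if word.endswith(suf) and len(word) - len(suf) >= min_root:
            return word[: -len(suf)]
    return word
```

For example `stem("played")` yields `"play"` and `stem("organization")` yields `"organ"`; handling overstemming and irregular morphology is precisely what the surveyed algorithms improve on.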

http://arxiv.org/abs/1308.5423 2013/08/27 - 11:57

The robust principles of treating interference as noise (TIN) when it is
sufficiently weak, and avoiding it when it is not, form the background for this
work. Combining TIN with the topological interference management (TIM)
framework that identifies optimal interference avoidance schemes, a baseline
TIM-TIN approach is proposed which decomposes a network into TIN and TIM
components, allocates the signal power levels to each user in the TIN
component, allocates signal vector space dimensions to each user in the TIM
component, and guarantees that the product of the two is an achievable number
of signal dimensions available to each user in the original network.

http://arxiv.org/abs/1308.5434 2013/08/27 - 11:57

We present a unified framework for designing and analyzing algorithms for
online budgeted allocation problems (including online matching) and their
generalization, the Online Generalized Assignment Problem (OnGAP). These
problems have been intensively studied as models of how to allocate impressions
for online advertising. In contrast to previous analyses of online budgeted
allocation algorithms (the so-called "balance" or "water-filling" family of
algorithms) our analysis is based on the method of randomized dual fitting,
analogous to the recent analysis of the RANKING algorithm for online matching
due to Devanur et al. Our main contribution is thus to provide a unified method
of proof that simultaneously derives the optimal competitive ratio bounds for
online matching and online fractional budgeted allocation. The same method of
proof also supplies $(1-1/e)$ competitive ratio bounds for greedy algorithms
for both problems, in the random order arrival model; this simplifies existing
analyses of greedy online allocation algorithms with random order of arrivals,
while also strengthening them to apply to a larger family of greedy algorithms.
Finally, for the more general OnGAP problem, we show that no algorithm can be
constant-competitive; instead we present an algorithm whose competitive ratio
depends logarithmically on a certain parameter of the problem instance, and we
show that this dependence cannot be improved.
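The "balance"/"water-filling" family the analysis covers can be sketched as a greedy rule with illustrative data (this is a sketch of the heuristic, not the paper's dual-fitting analysis):

```python
def balance(budgets, arrivals):
    """Assign each arriving impression to the interested advertiser with
    the largest remaining budget fraction (ties broken by list order)."""
    remaining = dict(budgets)
    assignment = []
    for interested in arrivals:
        live = [a for a in interested if remaining[a] >= 1]
        if not live:
            assignment.append(None)          # impression dropped
            continue
        pick = max(live, key=lambda a: remaining[a] / budgets[a])
        remaining[pick] -= 1
        assignment.append(pick)
    return assignment

budgets = {"A": 2, "B": 1}                   # illustrative advertiser budgets
arrivals = [["A", "B"], ["A", "B"], ["A"], ["B"]]
```

Spending from the least-depleted budget first is what keeps every advertiser "in the game" as long as possible, which is the behavior the competitive-ratio bounds quantify.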

http://arxiv.org/abs/1308.5444 2013/08/27 - 11:57

The phase retrieval problem has a long history and is an important problem in
many areas of optics. Theoretical understanding of phase retrieval is still
limited and fundamental questions such as uniqueness and stability of the
recovered solution are not yet fully understood. This paper provides several
additions to the theoretical understanding of sparse phase retrieval. In
particular we show that if the measurement ensemble can be chosen freely, as
few as 4k-1 phaseless measurements suffice to guarantee uniqueness of a
k-sparse M-dimensional real solution. We also prove that k^2-k+2 Fourier
magnitude measurements are sufficient under rather general conditions.

http://arxiv.org/abs/1308.5447 2013/08/27 - 11:57

In this paper we study the property of phase retrievability by redundant
systems of vectors under perturbations of the frame set. Specifically, we show
that if a set $\fc$ of $m$ vectors in the complex Hilbert space of dimension n
allows for vector reconstruction from magnitudes of its coefficients, then
there is a perturbation bound $\rho$ so that any frame set within $\rho$ from
$\fc$ has the same property. In particular this proves the recent construction
in \cite{BH13} is stable under perturbations. By the same token we reduce the
critical cardinality conjectured in \cite{BCMN13a} to proving a stability
result for non-phase-retrievable frames.

http://arxiv.org/abs/1308.5465 2013/08/27 - 11:57

We demonstrate in this paper the use of tools of complex network theory to
describe the strategy of Australia and England in the recently concluded Ashes
2013 Test series. Using partnership data made available by cricinfo during the
Ashes 2013 Test series, we generate batting partnership network (BPN) for each
team, in which nodes correspond to batsmen and links represent runs scored in
partnerships between batsmen. The resulting networks display a visual summary
of the pattern of run-scoring by each team, which helps us identify potential
weaknesses in a batting order. We use different centrality scores to quantify
the performance, relative importance and effect of removing a player from the
team. We observe that England is an extremely well-connected team, in which
lower-order batsmen consistently contributed significantly to the team score.
In contrast, Australia showed a dependence on its top-order batsmen.
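Building a BPN and a simple strength centrality (total partnership runs per batsman) can be sketched as follows; the partnership figures here are illustrative, not the actual Ashes 2013 data:

```python
# Weighted batting partnership network: nodes are batsmen, edge weights are
# partnership runs. Node "strength" (summed incident weights) is a simple
# centrality proxy for a batsman's role in the team's scoring pattern.
partnerships = [("Cook", "Root", 80), ("Root", "Bell", 45),
                ("Bell", "Prior", 60), ("Prior", "Broad", 35)]

def strength(edges):
    s = {}
    for u, v, runs in edges:
        s[u] = s.get(u, 0) + runs
        s[v] = s.get(v, 0) + runs
    return s
```

Removing the node with the highest strength and recomputing shows how much of the scoring network depends on one player, which is the "effect of removing a player" the abstract mentions.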

http://arxiv.org/abs/1308.5470 2013/08/27 - 11:57

Pressing questions in cosmology such as the nature of dark matter and dark
energy can be addressed using large galaxy surveys, which measure the
positions, properties and redshifts of galaxies in order to map the large-scale
structure of the Universe. We review the Fourier-Laguerre transform, a novel
transform in 3D spherical coordinates which is based on spherical harmonics
combined with damped Laguerre polynomials and appropriate for analysing galaxy
surveys. We also recall the construction of flaglets, 3D wavelets obtained
through a tiling of the Fourier-Laguerre space, which can be used to extract
scale-dependent, spatially localised features on the ball. We exploit a
sampling theorem to obtain exact Fourier-Laguerre and flaglet transforms, such
that band-limited signals can be analysed and reconstructed at floating-point
accuracy on a finite number of voxels on the ball. We present a potential
application of the flaglet transform for finding voids in galaxy surveys and
studying the large-scale structure of the Universe.

http://arxiv.org/abs/1308.5480 2013/08/27 - 11:57

This text is a conceptual introduction to mixed effects modeling with
linguistic applications, using the R programming environment. The reader is
introduced to linear modeling and assumptions, as well as to mixed
effects/multilevel modeling, including a discussion of random intercepts,
random slopes and likelihood ratio tests. The example used throughout the text
focuses on the phonetic analysis of voice pitch data.

http://arxiv.org/abs/1308.5499 2013/08/27 - 11:57

We study, in the context of algorithmic randomness, the closed amenable
subgroups of the symmetric group $S_\infty$ of a countable set. We address
this question by investigating a link between the symmetries associated with
Ramsey Fra\"iss\'e order classes and algorithmic randomness.

http://arxiv.org/abs/1308.5506 2013/08/27 - 11:57

We view web forums as virtual living organisms feeding on users' attention
and investigate how these organisms grow at the expense of collective
attention. We find that the "body mass" ($PV$) and "energy consumption" ($UV$)
of the studied forums exhibit the allometric growth property, i.e., $PV_t \sim
UV_t ^ \theta$. This implies that within a forum, the network transporting
attention flow between threads has a structure that is invariant in time,
despite the continuous changing of the nodes (threads) and edges
(clickstreams).
observed time-invariant topology allows us to explain the dynamics of networks
by the behavior of threads. In particular, we describe the clickstream
dissipation on threads using the function $D_i \sim T_i ^ \gamma$, in which
$T_i$ is the clickstreams to node $i$ and $D_i$ is the clickstream dissipated
from $i$. It turns out that $\gamma$, an indicator for dissipation efficiency,
is negatively correlated with $\theta$ and $1/\gamma$ sets the lower boundary
for $\theta$. Our findings have practical consequences. For example, $\theta$
can be used as a measure of the "stickiness" of forums, because it quantifies
the stable ability of forums to convert $UV$ into $PV$, i.e., to keep users
"locked in" to the forum. Meanwhile, the correlation between $\gamma$ and
$\theta$ provides a convenient method to evaluate the "stickiness" of forums.
Finally, we discuss an optimal "body mass" of forums at around $10^5$, which
minimizes $\gamma$ and maximizes $\theta$.
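Estimating the allometric exponent $\theta$ from $(UV_t, PV_t)$ pairs reduces to a least-squares slope in log-log space; the data below are synthetic with a known exponent:

```python
import math

def fit_exponent(uv, pv):
    """Least-squares slope of log(pv) against log(uv), i.e. the exponent
    theta in pv ~ uv**theta."""
    xs = [math.log(u) for u in uv]
    ys = [math.log(p) for p in pv]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

uv = [10, 100, 1000, 10000]          # "energy consumption" per period
pv = [u ** 1.2 for u in uv]          # "body mass" with theta = 1.2
```

On real forum data the fitted slope would be the measured "stickiness" exponent; here the synthetic exponent is recovered exactly.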

http://arxiv.org/abs/1308.5513 2013/08/27 - 11:57

Obtaining reliable answers to the major scientific questions raised by
climate change in time to take appropriate action gives added urgency to the
open access program.

http://arxiv.org/abs/1308.5533 2013/08/27 - 11:57

Non-negative blind source separation (BSS) has raised interest in various
fields of research, as evidenced by the extensive literature on non-negative
matrix factorization (NMF). In this context, it is fundamental that the
sources to be estimated exhibit some diversity in order to be efficiently
retrieved. Sparsity is known to enhance such contrast between the sources
while yielding approaches that are very robust, especially to noise. In this
paper we introduce a new algorithm in order to tackle the blind separation of
non-negative sparse sources from noisy measurements. We first show that
sparsity and non-negativity constraints have to be carefully applied on the
sought-after solution. In fact, improperly constrained solutions are unlikely
to be stable and are therefore sub-optimal. The proposed algorithm, named nGMCA
(non-negative Generalized Morphological Component Analysis), makes use of
proximal calculus techniques to provide properly constrained solutions. The
performance of nGMCA compared to other state-of-the-art algorithms is
demonstrated by numerical experiments encompassing a wide variety of settings,
with negligible parameter tuning. In particular, nGMCA is shown to provide
robustness to noise and performs well on synthetic mixtures of real NMR
spectra.
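
The kind of constraint handling described here can be illustrated by the proximal operator that combines an $\ell_1$ sparsity penalty with non-negativity: soft-thresholding followed by clipping at zero. A minimal Python sketch of one proximal-gradient step; this illustrates the general technique, not the authors' nGMCA implementation, and the threshold value is arbitrary:

```python
def prox_nonneg_soft_threshold(x, lam):
    """Proximal operator of lam*||.||_1 plus the indicator of x >= 0:
    soft-threshold each entry, then clip what remains negative to zero."""
    return [max(v - lam, 0.0) for v in x]

def prox_grad_step(s, grad, step, lam):
    """One proximal-gradient update: s <- prox(s - step * grad)."""
    return prox_nonneg_soft_threshold(
        [si - step * gi for si, gi in zip(s, grad)], lam * step)

s = [0.5, -0.2, 1.5, 0.05]
out = prox_nonneg_soft_threshold(s, 0.1)
print([round(v, 4) for v in out])  # → [0.4, 0.0, 1.4, 0.0]
```

Applying such a properly derived proximal operator, rather than thresholding and projecting in an ad hoc order, is what "carefully applied" constraints refers to in spirit.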

http://arxiv.org/abs/1308.5546 2013/08/27 - 11:57

Given a tessellation of the plane, defined by a planar straight-line graph
$G$, we want to find a minimal set $S$ of points in the plane such that the
Voronoi diagram associated with $S$ "fits" $G$. This is the Generalized
Inverse Voronoi Problem (GIVP), defined in \cite{Trin07} and rediscovered
recently in \cite{Baner12}. Here we give an algorithm that solves this problem
with a number of points that is linear in the size of $G$, assuming that the
smallest angle in $G$ is constant.
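
The geometric fact underlying such constructions is that every Voronoi edge lies on the perpendicular bisector of its two sites, so an edge of $G$ can be realized by placing a pair of sites mirrored across it. A small Python sketch of this idea (not the paper's algorithm, which must also reconcile the pairs across adjacent edges):

```python
def reflect(point, a, b):
    """Reflect a point across the line through segment endpoints a and b."""
    (px, py), (ax, ay), (bx, by) = point, a, b
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    fx, fy = ax + t * dx, ay + t * dy   # foot of the perpendicular
    return (2 * fx - px, 2 * fy - py)

def bisector_pair(a, b, offset=0.5):
    """Two sites whose perpendicular bisector is the line through a and b:
    a point offset to one side of the segment, plus its mirror image."""
    (ax, ay), (bx, by) = a, b
    mx, my = (ax + bx) / 2, (ay + by) / 2
    dx, dy = bx - ax, by - ay
    norm = (dx * dx + dy * dy) ** 0.5
    nx, ny = -dy / norm, dx / norm      # unit normal to the segment
    p = (mx + offset * nx, my + offset * ny)
    q = reflect(p, a, b)
    return p, q

p, q = bisector_pair((0.0, 0.0), (2.0, 0.0))
print(p, q)  # → (1.0, 0.5) (1.0, -0.5), mirrored across the x-axis
```

The linear bound in the size of $G$ then comes from charging the placed sites to the edges and vertices of $G$, which the constant smallest-angle assumption makes possible.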

http://arxiv.org/abs/1308.5550 2013/08/27 - 11:57

We are motivated by the need, in some applications, for impromptu or
as-you-go deployment of wireless sensor networks. A person walks along a line,
making link quality measurements with the previous relay at equally spaced
locations, and deploys relays at some of these locations, so as to connect a
sensor placed on the line with a sink at the start of the line. In this paper,
we extend our earlier work on the problem (see [1]) to incorporate two new
aspects: (i) inclusion of path outage in the deployment objective, and (ii)
permitting the deployment agent to make measurements over several consecutive
steps before selecting a placement location among them (which we call
backtracking). We consider a light traffic regime, and formulate the problem as
a Markov decision process. Placement algorithms are obtained for two formulations: (i)
the distance to the source is geometrically distributed with known mean, and
(ii) the average cost per step. We motivate the per-step cost function in
terms of several known forwarding protocols for sleep-wake cycling wireless
sensor networks. We obtain the structures of the optimal policies for the
various formulations, and provide some sensitivity results about the policies
and the optimal values. We then provide a numerical study of the algorithms,
thus providing insights into the advantage of backtracking, and a comparison
with simple heuristic placement policies.
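
The flavor of such a formulation can be conveyed by a toy value iteration: the state is the number of steps walked since the last relay, the hop cost grows with that distance, and the source is reached with a fixed probability at each step (a geometric distance, as in case (i)). All costs and parameters below are invented for illustration and are not the model of the paper:

```python
def solve_placement_mdp(p=0.3, relay_cost=2.0, max_d=10, iters=200):
    """Value iteration for a toy as-you-go placement problem.
    State d = steps walked since the last relay (capped at max_d >= 2);
    the hop cost grows as d**2; the source is reached with probability p
    at each step.  Action 0 = keep walking, action 1 = place a relay here."""
    V = [0.0] * (max_d + 1)
    policy = [0] * (max_d + 1)
    for _ in range(iters):
        newV = V[:]
        for d in range(1, max_d + 1):
            cont = d ** 2 + (1 - p) * V[min(d + 1, max_d)]
            place = relay_cost + 1.0 + (1 - p) * V[2]  # hop distance resets to 1
            newV[d] = min(cont, place)
            policy[d] = 0 if cont <= place else 1
        V = newV
    return V, policy

V, policy = solve_placement_mdp()
# Optimal policy is a threshold rule: walk while d is small, place beyond it.
print([d for d in range(1, 11) if policy[d] == 1])
```

The threshold structure of the resulting policy is the kind of structural result the paper establishes for its (much richer) formulations.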

http://arxiv.org/abs/1308.0686 2013/08/25 - 23:45

Active replication is commonly built on top of the atomic broadcast
primitive. Passive replication, which has been recently used in the popular
ZooKeeper coordination system, can be naturally built on top of the
primary-order atomic broadcast primitive. Passive replication differs from
active replication in that it requires processes to cross a barrier before they
become primaries and start broadcasting messages. In this paper, we propose a
barrier function $\tau$ that explains and encapsulates the differences between
existing primary-order atomic broadcast algorithms. We also show that
implementing primary-order atomic broadcast on top of a generic consensus
primitive and $\tau$ inherently results in higher time complexity than atomic
broadcast, as witnessed by existing algorithms. We overcome this problem by
presenting an alternative primary-order atomic broadcast implementation that
builds on top of a generic consensus primitive and uses consensus itself to
form a barrier. This algorithm is modular and matches the time complexity of
existing $\tau$-based algorithms.
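
The barrier can be sketched abstractly: a newly elected primary must first adopt every decision committed under earlier primaries, and only then may it broadcast messages of its own. The Python class below is an illustrative toy, not code from ZooKeeper or the paper:

```python
class PrimaryOrderBroadcaster:
    """Toy sketch of a barrier-gated primary: broadcasts are refused
    until the primary has synced all decisions from earlier epochs."""

    def __init__(self, log):
        self.log = list(log)        # decisions delivered so far
        self.barrier_crossed = False

    def cross_barrier(self, committed_history):
        # Barrier: adopt every decision committed under previous
        # primaries before issuing any new broadcast of our own.
        for entry in committed_history:
            if entry not in self.log:
                self.log.append(entry)
        self.barrier_crossed = True

    def broadcast(self, msg):
        if not self.barrier_crossed:
            raise RuntimeError("cannot broadcast before crossing the barrier")
        self.log.append(msg)
        return msg

node = PrimaryOrderBroadcaster(log=["a"])
try:
    node.broadcast("x")
except RuntimeError:
    pass                            # refused: barrier not yet crossed
node.cross_barrier(["a", "b"])
print(node.broadcast("x"), node.log)  # → x ['a', 'b', 'x']
```

In the paper's terms, the interesting question is how this barrier is implemented; using the consensus primitive itself for it is what lets the proposed algorithm avoid the extra time complexity.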

http://arxiv.org/abs/1308.2979 2013/08/25 - 23:45