Computer Science papers from arXiv.org

We formally study iterated block ciphers that alternate between two sequences
of independent and identically distributed (i.i.d.) rounds. It is demonstrated
that in some cases alternating increases security, while in other cases it may
strictly decrease security relative to the corresponding product of one of its
component sequences. As this would appear
to contradict conventional wisdom based on the ideal cipher approximation, we
introduce new machinery for provable security comparisons. The comparisons made
here simultaneously establish a coherent ordering of security metrics ranging
from key-recovery cost to computational indistinguishability.

http://arxiv.org/abs/1307.4107 2013/07/17 - 04:55

The Internet is composed of a number of independent autonomous systems. BGP is
used to disseminate reachability information and to establish paths between
autonomous systems. Each autonomous system is allowed to select a single route
to a destination and then export the selected route to its neighbors. The
selection of a single best route restricts the use of alternative paths during
interdomain link failures and thus incurs packet loss. Packet loss occurs even
when multiple paths exist between source and destination, because those paths
are not utilized. To minimize packet loss when multiple paths exist, multipath
routing techniques have been introduced. Multipath routing techniques ensure
the use of alternative paths on link failure: they compute a set of paths that
can be used when the primary path is not available, and they also provide a way
for transit domains to control traffic flow. This can be achieved with little
modification to current BGP. This paper highlights different multipath routing
techniques and also discusses the overhead incurred by each of them.

http://arxiv.org/abs/1307.4124 2013/07/17 - 04:55

This paper presents a comparison of different hierarchical (position- and
non-position-based) protocols under different mobility models. Previous work
has mainly focused on static networks or, at most, a single mobility model;
using only one mobility model may not predict the behavior of a routing
protocol accurately. Simulation results show that mobility has a large impact
on the behavior of WSN routing protocols, and that position-based routing
protocols perform better in terms of packet delivery than non-position-based
routing protocols.

http://arxiv.org/abs/1307.4127 2013/07/17 - 04:55

Network simulation is the most useful and common methodology used to evaluate
different network topologies without real-world implementation. Network
simulators are widely used by the research community to evaluate new theories
and hypotheses. There are a number of network simulators, for instance ns-2,
ns-3, OMNeT++, SWAN, OPNET, JiST, and GloMoSim. The selection of a network
simulator for evaluating research work is therefore a crucial task for
researchers. The main focus of this paper is to compare state-of-the-art,
open-source network simulators based on the following parameters: CPU
utilization, memory usage, computational time, and scalability, by simulating a
MANET routing protocol, to identify an optimal network simulator for the
research community.

http://arxiv.org/abs/1307.4129 2013/07/17 - 04:55

As the penetration level of transmission-scale time-intermittent renewable
generation resources increases, control of flexible resources will become
important for mitigating the fluctuations due to these new renewable resources.
Flexible resources may include new or existing synchronous generators as well
as new energy storage devices. Optimal placement and sizing of energy storage
to minimize the costs of integrating renewable resources is a difficult
optimization problem. Further, optimal planning procedures typically do not
consider the effect of the time dependence of operations and may lead to
unsatisfactory results. Here, we use an optimal energy storage control
algorithm to develop a heuristic procedure for energy storage placement and
sizing. We perform operational simulation under various time profiles of
intermittent generation, loads, and interchanges (artificially generated or
from historical data) and accumulate statistics of the usage of storage at each
node under the optimal dispatch. We develop a greedy heuristic based on the
accumulated statistics to obtain a minimal set of nodes for storage placement.
The quality of the heuristic is explored by comparing our results to the
obvious heuristic of placing storage at the renewables for IEEE benchmarks and
real-world network topologies.
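
A minimal sketch of the flavor of such a greedy selection, with synthetic
per-node usage statistics standing in for the optimal-dispatch simulations
(node count, distributions, and the 95% coverage rule are all illustrative):

```python
# Greedy placement sketch: accumulate per-node storage usage over many
# scenarios, then keep the most-used nodes until they cover 95% of usage.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_profiles = 30, 200
# usage[s, i]: storage energy dispatched at node i in scenario s
# (stand-in for statistics gathered under the optimal dispatch)
usage = rng.gamma(shape=0.5, scale=1.0, size=(n_profiles, n_nodes))

mean_usage = usage.mean(axis=0)
order = np.argsort(mean_usage)[::-1]          # most-used nodes first
cum = np.cumsum(mean_usage[order]) / mean_usage.sum()
placement = order[: int(np.searchsorted(cum, 0.95)) + 1]
print("place storage at nodes:", sorted(placement.tolist()))
```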

http://arxiv.org/abs/1307.4143 2013/07/17 - 04:55

The l1-regularized logistic regression (or sparse logistic regression) is a
widely used method for simultaneous classification and feature selection.
Although many recent efforts have been devoted to its efficient implementation,
its application to high-dimensional data still poses significant challenges. In
this paper, we present a fast and effective sparse logistic regression
screening rule (Slores) to identify the zero components in the solution vector,
which may lead to a substantial reduction in the number of features entered
into the optimization. An appealing feature of Slores is that the data set
needs to be scanned only once to run the screening, and its computational cost
is negligible compared to that of solving the sparse logistic regression
problem. Moreover, Slores is independent of solvers for sparse logistic
regression, so it can be integrated with any existing solver to improve
efficiency. We have evaluated Slores using high-dimensional data sets from
different applications. Extensive experimental results demonstrate that Slores
outperforms existing state-of-the-art screening rules and that the efficiency
of solving sparse logistic regression is improved by one order of magnitude in
general.
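
The exact Slores rule is derived in the paper; the sketch below only
illustrates the workflow it enables, with a simple correlation-style score
standing in for the safe screening bound (data, threshold, and keep-rule are
illustrative):

```python
# Screening workflow sketch: scan the data once, score each feature,
# drop low-scoring features, then solve the reduced l1-regularized
# logistic regression. The real Slores rule is a provably safe bound.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5000))
y = rng.integers(0, 2, 200)

# one pass over the data: correlation-style score per feature
resid = y - y.mean()
score = np.abs(X.T @ resid) / len(y)

lam = 0.8 * score.max()            # regularization strength
keep = score >= lam                # heuristic keep-rule (illustrative)

clf = LogisticRegression(penalty="l1", C=1.0 / (lam * len(y)),
                         solver="liblinear").fit(X[:, keep], y)
print(f"kept {keep.sum()} of {X.shape[1]} features")
```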

http://arxiv.org/abs/1307.4145 2013/07/17 - 04:55

Physical layer security is an emerging technique to improve wireless
communication security and is widely regarded as a complement to
cryptographic technologies. To design physical layer security techniques under
practical scenarios, the uncertainty and imperfections in the channel knowledge
need to be taken into consideration. This paper provides a survey of recent
research and development in physical layer security considering the imperfect
channel state information (CSI) at communication nodes. We first present an
overview of the main information-theoretic measures of the secrecy performance
with imperfect CSI. Then, we describe several signal processing enhancements in
secure transmission designs, such as secure on-off transmission, beamforming
with artificial noise, and secure communication assisted by relay nodes or in
cognitive radio systems. The recent studies of physical layer security in
large-scale decentralized wireless networks are also summarized. Finally, the
open problems for ongoing and future research are discussed.

http://arxiv.org/abs/1307.4146 2013/07/17 - 04:55

One of the main bottlenecks in practical full-duplex systems is the
oscillator phase noise, which bounds the possible cancellable self-interference
power. In this paper, a digital-domain self-interference cancellation scheme
for full-duplex orthogonal frequency division multiplexing systems is proposed.
The proposed scheme increases the amount of cancellable self-interference power
by
suppressing the effect of both transmitter and receiver oscillator phase noise.
The proposed scheme consists of two main phases, an estimation phase and a
cancellation phase. In the estimation phase, the minimum mean square error
estimator is used to jointly estimate the transmitter and receiver phase noise
associated with the incoming self-interference signal. In the cancellation
phase, the estimated phase noise is used to suppress the intercarrier
interference caused by the phase noise associated with the incoming
self-interference signal. The performance of the proposed scheme is numerically
investigated under different operating conditions. It is demonstrated that the
proposed scheme could achieve up to 9 dB more self-interference cancellation
than the existing digital-domain cancellation schemes that ignore the
intercarrier interference suppression.
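
A highly simplified toy of the estimate-then-cancel structure (the paper's
scheme jointly MMSE-estimates transmitter and receiver phase noise and
suppresses the resulting intercarrier interference; here a single common phase
rotation per block is estimated by least squares and removed, with all signal
parameters invented):

```python
# Toy estimate-then-cancel loop for self-interference under phase noise.
import numpy as np

rng = np.random.default_rng(0)
n, m = 1024, 32                      # samples, samples per block
si = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
phi = np.cumsum(0.005 * rng.standard_normal(n))   # phase-noise random walk
rx = si * np.exp(1j * phi) + 0.01 * (rng.standard_normal(n)
                                     + 1j * rng.standard_normal(n))

# estimation phase: per-block least-squares estimate of the phase rotation
phi_hat = np.angle(np.sum(np.conj(si.reshape(-1, m)) * rx.reshape(-1, m),
                          axis=1))

# cancellation phase: rotate the known SI replica and subtract it
residual = rx - si * np.exp(1j * np.repeat(phi_hat, m))

db = lambda x: 10 * np.log10(np.mean(np.abs(x) ** 2))
print(f"self-interference: {db(rx):.1f} dB -> residual: {db(residual):.1f} dB")
```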

http://arxiv.org/abs/1307.4149 2013/07/17 - 04:55

Consider a systematic linear code where some (local) parity symbols depend on
few prescribed symbols, while other (heavy) parity symbols may depend on all
data symbols. Local parities allow quick recovery of any single symbol when it
is erased, while heavy parities provide tolerance to a large number of
simultaneous erasures. Such a code is maximally recoverable if it corrects all
erasure patterns which are information-theoretically recoverable given the
code topology. In this paper we present explicit families of
maximally-recoverable codes with locality. We also initiate the study of the
trade-off between maximal recoverability and alphabet size.
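
A toy example of the locality idea over GF(2), not one of the paper's
constructions: one local parity per group repairs any single erasure cheaply,
and the example also shows why binary XOR parities alone cannot be maximally
recoverable, which is where the alphabet-size trade-off enters:

```python
# Four data bits in two local groups, a local XOR parity per group,
# and a heavy XOR parity over all data.
a, b, c, d = 1, 0, 1, 1
p1 = a ^ b          # local parity, group {a, b}
p2 = c ^ d          # local parity, group {c, d}
h = a ^ b ^ c ^ d   # heavy parity over all data symbols

# Single erasure: recovering 'a' touches only its small local group.
assert (p1 ^ b) == a

# Limitation motivating larger alphabets: over GF(2) the heavy parity
# is dependent (h == p1 ^ p2), so losing both 'c' and 'd' is
# unrecoverable; maximally-recoverable codes avoid such dependencies.
assert h == p1 ^ p2
```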

http://arxiv.org/abs/1307.4150 2013/07/17 - 04:55

Sparse learning has recently received increasing attention in many areas
including machine learning, statistics, and applied mathematics. The mixed-norm
regularization based on the l1q norm with q>1 is attractive in many
applications of regression and classification in that it facilitates group
sparsity in the model. The resulting optimization problem is, however,
challenging to solve due to the inherent structure of the mixed-norm
regularization. Existing work deals with the special cases q = 1, 2, and
infinity, and it cannot easily be extended to the general case. In this paper,
we propose an efficient algorithm based on the accelerated gradient method for
solving the general l1q-regularized problem. One key building block of the
proposed algorithm is the l1q-regularized Euclidean projection (EP_1q). Our
theoretical analysis reveals the key properties of EP_1q and illustrates why
EP_1q for general q is significantly more challenging to solve than the
special cases. Based on our theoretical analysis, we develop an efficient
algorithm for EP_1q by solving two zero-finding problems. To further improve
the efficiency of solving large-dimensional mixed-norm regularized problems, we
propose a screening method which is able to quickly identify the inactive
groups, i.e., groups that have only zero components in the solution. This may
lead to a substantial reduction in the number of groups entered into the
optimization. An appealing feature of our screening method is that the data set
needs to be scanned only once to run the screening, and, compared to that of
solving the mixed-norm regularized problems, the computational cost of the
screening test is negligible. The key to the proposed screening method is an
accurate sensitivity analysis of the dual optimal solution as the
regularization parameter varies. Experimental results demonstrate the
efficiency of the proposed algorithm.
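
For orientation, the q = 2 special case of the projection has the familiar
closed form below (block soft-thresholding); the paper's contribution is the
efficient algorithm for general q, where no such closed form is available.
Group layout and values here are illustrative:

```python
# prox_{lam * sum_g ||x_g||_2}(v): shrink each group of v toward zero;
# groups whose norm is below lam are set exactly to zero (inactive).
import numpy as np

def prox_l1_2(v, groups, lam):
    x = np.zeros_like(v)
    for g in groups:
        norm = np.linalg.norm(v[g])
        if norm > lam:
            x[g] = (1.0 - lam / norm) * v[g]
    return x

v = np.array([3.0, 4.0, 0.1, -0.2])
groups = [np.array([0, 1]), np.array([2, 3])]
print(prox_l1_2(v, groups, lam=1.0))  # second group becomes exactly 0
```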

http://arxiv.org/abs/1307.4156 2013/07/17 - 04:55

This volume contains the proceedings of the Fourth International Symposium on
Games, Automata, Logic and Formal Verification (GandALF 2013). The symposium
took place in Borca di Cadore, Italy, from 29th to 31st of August 2013.

The proceedings of the symposium contain the abstracts of three invited talks
and 17 papers that were accepted after a careful evaluation for presentation at
the conference. The topics of the accepted papers range over a wide spectrum,
including algorithmic and behavioral game theory, game semantics, formal
languages and automata theory, modal and temporal logics, software
verification, and hybrid systems.

http://arxiv.org/abs/1307.4162 2013/07/17 - 04:55

Connectivity augmentation and orientation are two fundamental classes of
problems related to graph connectivity. The former includes the
k-edge-connected subgraph problem and, more generally, the survivable network
design problem. In orientation
problems the goal is to find orientations of an undirected graph that achieve
prescribed connectivity properties such as global and rooted k-edge
connectivity. In this paper, we consider network design problems that address
combined augmentation and orientation settings.

We give a polynomial time 6-approximation algorithm for finding a minimum
cost subgraph of an undirected graph G that admits an orientation covering a
nonnegative crossing G-supermodular demand function, as defined by Frank. The
most important example is (k,l)-edge-connectivity, a common generalization of
global and rooted edge-connectivity. Our algorithm is based on the iterative
rounding method though the application is not standard. First, we observe that
the standard linear program is not amenable to the iterative rounding method
and therefore use an alternative LP with partition and co-partition
constraints. To apply the iterative rounding framework, we first need to
introduce new uncrossing techniques to obtain a simple family of constraints
that characterize basic feasible solutions. Then we do a careful counting
argument based on this family of constraints.

We also consider the directed network design problem with orientation
constraints where we are given an undirected graph G=(V,E) with costs c(u,v)
and c(v,u) for each edge (u,v) in E and an integer k. The goal is to find a
subgraph F of minimum cost which has a k-edge-connected orientation A. Khanna
et al. showed that the integrality gap of the natural linear program is at most
4 when k=1 and conjectured that it is constant for all k. We disprove this
conjecture by showing an O(|V|)-integrality gap even when k=2.

http://arxiv.org/abs/1307.4164 2013/07/17 - 04:55

Developing CPU scheduling algorithms and understanding their impact in
practice can be difficult and time consuming due to the need to modify and test
operating system kernel code and to measure the resulting performance on a
consistent workload of real applications. As the processor is a key resource,
CPU scheduling is central to accomplishing operating system (OS) design goals;
the intention is to allow as many running processes as possible at all times in
order to make the best use of the CPU. This paper presents a state diagram that
depicts a comparative study of various scheduling algorithms for a single CPU
and shows which algorithm is best for a particular situation. Using this
representation, it becomes much easier to understand what is going on inside
the system and why a different set of processes is a candidate for the
allocation of the CPU at different times. The objective of the study is to
analyze efficient CPU scheduling through the design of high-quality scheduling
algorithms that suit the scheduling goals. Keywords: scheduler, state diagrams,
CPU scheduling, performance.

http://arxiv.org/abs/1307.4165 2013/07/17 - 04:55

The main objective of this paper is to improve the round-robin scheduling
algorithm using a dynamic time slice. CPU scheduling is central to
accomplishing operating system (OS) design goals; the intention is to allow as
many running processes as possible at all times in order to make the best use
of the CPU, and CPU scheduling has a strong effect on resource utilization as
well as on the overall performance of the system. The round-robin algorithm
performs optimally in time-shared systems, but it is not suitable for soft
real-time systems because it incurs a greater number of context switches and
larger waiting and response times. In this paper, a new CPU scheduling
algorithm, called An Optimum Multilevel Dynamic Round Robin Scheduling
Algorithm, is proposed; it calculates an intelligent time slice that changes
after every round of execution. The suggested algorithm was evaluated against
several CPU scheduling objectives, and it was observed that it performs well
compared to other existing CPU scheduling algorithms.
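
A minimal sketch of the mechanism (the paper derives its own intelligent time
slice; here the quantum is simply recomputed each round as the mean remaining
burst of the queued processes, which is purely illustrative):

```python
# Round-robin with a quantum recomputed at the start of every round.
from collections import deque

def dynamic_round_robin(bursts):
    """bursts: {pid: burst_time}; returns (pid, completion_time) pairs."""
    remaining = dict(bursts)
    queue = deque(remaining)
    order, clock = [], 0
    while queue:
        # illustrative rule: quantum = mean remaining burst this round
        quantum = max(1, sum(remaining[p] for p in queue) // len(queue))
        for _ in range(len(queue)):
            pid = queue.popleft()
            run = min(quantum, remaining[pid])
            clock += run
            remaining[pid] -= run
            if remaining[pid] == 0:
                order.append((pid, clock))
            else:
                queue.append(pid)
    return order

print(dynamic_round_robin({"P1": 24, "P2": 3, "P3": 3}))
```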

http://arxiv.org/abs/1307.4167 2013/07/17 - 04:55

To understand the working of the human brain, measurements related to brain
function are required. These measurements should be as non-invasive as
possible, disturbing the brain as little as possible. Integration of various
modalities plays a vital role in understanding cognitive and behavioral changes
in the human brain, and it is an important source of converging evidence about
specific aspects of neural functions and dysfunctions under certain
pathological conditions. Focal changes in cortical blood flow are tightly
coupled with changes in neuronal activity; this makes it possible to map the
hemodynamic response and infer principles of cortical processing, even for
complex tasks. The very high temporal resolution of EEG and the good spatial
resolution of NIRS make concurrent measurement uniquely suited to studying the
spatio-temporal dynamics of large-scale neuronal networks in the human brain.
Such integration of the two techniques helps to overcome the limitations of
each individual method, such as the insensitivity of electroencephalography
(EEG) to unsynchronized neural events or the insensitivity of near-infrared
spectroscopy (NIRS) to low metabolic demand. A combination of EEG and NIRS is
therefore more informative than two separate analyses in the two modalities.

http://arxiv.org/abs/1307.4171 2013/07/17 - 04:55

The upcoming technology support for the semantic web promises fresh directions
for the software engineering community. The semantic web also has its roots in
knowledge engineering, which prompts software engineers to look for
applications of ontologies throughout the software engineering life cycle. The
internal components of a semantic web application are lightweight and may be
held to lower quality standards than the externally visible modules; in fact,
the internal components are generated from external (ontological) components.
That is the reason agile development approaches, such as feature-driven
development, are suitable for the development of an application's internal
components. As yet there is no particular procedure that describes the role of
ontology in these processes. We therefore propose an ontology-based
feature-driven development method for semantic web applications that can be
used from application model development to feature design and implementation.
Features are precisely defined in the OWL-based domain model, and the
transition from the OWL-based domain model to the feature list is directly
defined by transformation rules. On the other hand, the ontology-based overall
model can easily be validated through automated tools. Advantages of
ontology-based feature-driven development are also discussed.
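
A small sketch of what one such transformation rule could look like, assuming
a hypothetical OWL file `domain.owl` and using the rdflib library; the rule
that every labeled OWL class yields a "Manage <Class>" feature is invented for
illustration:

```python
# OWL-to-feature-list transformation rule sketch (rdflib).
from rdflib import Graph
from rdflib.namespace import OWL, RDF, RDFS

g = Graph()
g.parse("domain.owl", format="xml")   # hypothetical OWL domain model

feature_list = []
for cls in g.subjects(RDF.type, OWL.Class):
    label = g.value(cls, RDFS.label)
    if label is not None:
        # naive rule: each labeled domain class becomes one feature
        feature_list.append(f"Manage {label}")

print(sorted(feature_list))
```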

http://arxiv.org/abs/1307.4174 2013/07/17 - 04:55

Swarm intelligence and bio-inspired algorithms form a hot topic in the
development of new algorithms inspired by nature. These nature-inspired
metaheuristic algorithms can be based on swarm intelligence, biological
systems, or physical and chemical systems; accordingly, they can be called
swarm-intelligence-based, bio-inspired, physics-based, or chemistry-based,
depending on the source of inspiration. Though not all of them are efficient, a
few algorithms have proved to be very efficient and have thus become popular
tools for solving real-world problems, while some algorithms remain
insufficiently studied. The purpose of this review is to present a relatively
comprehensive list of the algorithms in the literature, so as to inspire
further research.
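
As a concrete instance of the swarm-intelligence family the review covers,
here is a minimal particle swarm optimization sketch minimizing the sphere
function; the coefficients are common textbook defaults, not values from the
review:

```python
# Minimal PSO: particles track personal bests and the global best.
import numpy as np

rng = np.random.default_rng(0)
n, dim, iters = 30, 5, 200
w, c1, c2 = 0.7, 1.5, 1.5              # inertia, cognitive, social weights

x = rng.uniform(-5, 5, (n, dim))       # positions
v = np.zeros((n, dim))                 # velocities
pbest, pbest_f = x.copy(), (x ** 2).sum(axis=1)
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = (x ** 2).sum(axis=1)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("best value found:", pbest_f.min())
```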

http://arxiv.org/abs/1307.4186 2013/07/17 - 04:55

A topological graph drawn on a cylinder whose base is horizontal is
\emph{angularly monotone} if every vertical line intersects every edge at most
once. Let $c(n)$ denote the maximum number $c$ such that every simple angularly
monotone drawing of a complete graph on $n$ vertices contains at least $c$
pairwise disjoint edges. We show that for every simple complete topological
graph $G$ there exists $\Delta$, $0<\Delta<n$, such that $G$ contains at least
$\max \{\frac n\Delta, c(\Delta)\}$ pairwise disjoint edges. By combining our
result with a result of T\'oth we obtain an alternative proof for the best
known lower bound of $\Omega(n^\frac 13)$ on the maximum number of pairwise
disjoint edges in a simple complete topological graph proved by Suk. Our proof
is based on a result of Ruiz-Vargas.

http://arxiv.org/abs/1307.4191 2013/07/17 - 04:55

We consider the model checking problem for Gap-order Constraint Systems (GCS)
w.r.t. the branching-time temporal logic CTL, and in particular its fragments
EG and EF.

GCS are nondeterministic infinitely branching processes described by
evolutions of integer-valued variables, subject to Presburger constraints of
the form $x-y\ge k$, where $x$ and $y$ are variables or constants and $k\in\N$
is a non-negative constant.

We show that EG model checking is undecidable for GCS, while EF is decidable.
In particular, this implies the decidability of strong and weak bisimulation
equivalence between GCS and finite-state systems.

http://arxiv.org/abs/1307.4207 2013/07/17 - 04:55

In this paper we show that if a linear cocycle is robustly periodically stable
then it is uniformly stable.

http://arxiv.org/abs/1307.4209 2013/07/17 - 04:55

This review describes the currently available simulation models for window
materials used to calculate daylighting with the program "Radiance". The review
is based on four abstract, general classes of window materials, classified by
their scattering and redirecting properties (bidirectional scattering
distribution function, BSDF). It lists the potential and the limits of the
older models and includes the most recent additions to the software. All models
are demonstrated using an exemplary indoor scene and two typical sky
conditions. The review is intended as a clarification for applying window
material models in project work or teaching. The underlying algorithmic
problems apply to all lighting simulation programs, so the scenarios of
materials and skies are applicable to other lighting programs as well.

http://arxiv.org/abs/1307.4214 2013/07/17 - 04:55

Airborne wind energy is a new renewable technology that promises to deliver
electricity at low costs and in large quantities. Despite the steadily growing
interest in this field, very limited results with real-world data have been
reported so far, due to the difficulty faced by researchers when realizing an
experimental setup. Indeed airborne wind energy prototypes are mechatronic
devices involving many multidisciplinary aspects, for which there are currently
no established design guidelines. With the aim of making research in airborne
wind energy accessible to a larger number of researchers, this work provides
such guidelines for a small-scale prototype. The considered system has no
energy generation capabilities, but it can be realized at low cost, used with
few restrictions, and it allows one to test many aspects of the technology,
from sensors to actuators to wing design and materials. In addition to the
guidelines, the paper provides the details of the design and costs of an
experimental setup realized at the University of California, Santa Barbara, and
successfully used to develop and test sensor fusion and automatic control
solutions.

http://arxiv.org/abs/1307.4215 2013/07/17 - 04:55

In this paper we propose a method to enhance the lifetime as well as improve
the performance of mobile ad hoc networks (MANETs). Since a MANET consists of
battery-powered devices with a limited amount of energy, and since the network
is self-configuring with a dynamically changing topology, all operations are
performed by the nodes themselves.
http://arxiv.org/abs/1307.4221 2013/07/17 - 04:55

Social dilemmas are situations in which collective interests are at odds with
private interests: pollution, depletion of natural resources, and intergroup
conflicts are at their core social dilemmas.

Because of their multidisciplinary nature and their importance, social
dilemmas have been studied by economists, biologists, psychologists,
sociologists, and political scientists. These studies typically explain the
tendency to cooperate by dividing people into proself and prosocial types, by
appealing to forms of external control, or, in iterated social dilemmas, by
appealing to long-term strategies.

But recent experiments have shown that cooperation is possible even in
one-shot social dilemmas without forms of external control, and that the rate
of cooperation typically depends on the payoffs. This makes a predictive
division between proself and prosocial people impossible and shows that people
have a natural attitude toward cooperation.

The key innovation of this article is in fact to postulate that humans have a
natural attitude toward cooperation and consequently do not act a priori as
single agents, as assumed by standard economic models; rather, they forecast
how a social dilemma would evolve if they formed coalitions and then act
according to their most optimistic forecast. Formalizing this idea, we propose
the first predictive model of human cooperation able to organize a number of
different experimental findings that are not explained by the standard model.
We also show that the model makes satisfactorily accurate quantitative
predictions of population average behavior in one-shot social dilemmas.

http://arxiv.org/abs/1307.4228 2013/07/17 - 04:55

A novel comparative research and analysis method is proposed and applied to
the Hungarian economic sectors. The question of what factors affect their net
income is essential for enterprises. First, the potential indicators related to
economic sectors were studied and then compared to the net income of the
surveyed enterprises. The data resulting from the comparison showed that the
growing penetration of electronic marketplaces contributed to the change in the
net income of enterprises in various economic sectors to the extent of 37%.
Among all the potential indicators, only the indicator of electronic
marketplaces has a direct influence on the net income of enterprises. Two
clusters based on the potential indicators were identified.

http://arxiv.org/abs/1307.4230 2013/07/17 - 04:55

The Maximum Weight Independent Set of Polygons (MWISP) problem is a
fundamental problem in computational geometry. Given a set of weighted polygons
in the 2-dimensional plane, the goal is to find a set of pairwise
non-overlapping polygons with maximum total weight. Due to its wide range of
applications, the MWISP problem and its special cases have been extensively
studied in both the approximation algorithms and the computational geometry
communities. Despite a lot of research, the general case is not well
understood. Currently the best known polynomial time algorithm achieves an
approximation ratio of n^(epsilon) [Fox and Pach, SODA 2011], and it is not
even clear whether the problem is APX-hard. We present a
(1+epsilon)-approximation algorithm, assuming that each polygon in the input
has at most a polylogarithmic number of vertices. Our algorithm has
quasi-polynomial running time.

We use a recently introduced framework for approximating maximum weight
independent set in geometric intersection graphs. The framework has been used
to construct a QPTAS in the much simpler case of axis-parallel rectangles. We
extend it in two ways, to adapt it to our much more general setting. First, we
show that its technical core can be reduced to the case when all input polygons
are triangles. Secondly, we replace its key technical ingredient, which is a
method to partition the plane using only a few edges such that the objects
stemming from the optimal solution are evenly distributed among the resulting
faces and each object is intersected only a few times. Our new procedure for
this task is not more complex than the original one, and it can handle the
arising difficulties due to the arbitrary angles of the polygons. Note that
already this obstacle makes the known analysis for the above framework fail.
Also, in general it is not well understood how to handle this difficulty by
efficient approximation algorithms.

http://arxiv.org/abs/1307.4257 2013/07/17 - 04:55

We revisit a classical problem in transportation, known as the continuous
(bilevel) network design problem, CNDP for short. We are given a graph for
which the latency of each edge depends on the ratio of the edge flow and the
capacity installed. The goal is to find an optimal investment in edge
capacities so as to minimize the sum of the routing cost of the induced Wardrop
equilibrium and the investment cost. While this problem is considered as
challenging in the literature, its complexity status was still unknown. We
close this gap showing that CNDP is strongly NP-complete and APX-hard, even for
instances with affine latencies.

As for the approximation of the problem, we first provide a detailed analysis
for a heuristic studied by Marcotte for the special case of monomial latency
functions (Mathematical Programming, Vol.~34, 1986). Specifically, we derive a
closed form expression of its approximation guarantee for arbitrary sets S of
allowed latency functions. Second, we propose a different approximation
algorithm and show that it has the same approximation guarantee. As our final
-- and arguably most interesting -- result regarding approximation, we show
that using the better of the two approximation algorithms results in a strictly
improved approximation guarantee for which we give a closed form expression.
For affine latencies, e.g., this algorithm achieves a 1.195-approximation which
improves on the 5/4 that has been shown before by Marcotte.

http://arxiv.org/abs/1307.4258 2013/07/17 - 04:55

Particle systems are physical systems of simple computational particles that
can bond to neighboring particles and use these bonds to move from one spot to
another (non-occupied) spot. These particle systems are supposed to be able to
self-organize in order to adapt to a desired shape without any central control.
Self-organizing particle systems have many interesting applications like
coating objects for monitoring and repair purposes and the formation of
nano-scale devices for surgery and molecular-scale electronic structures. While
there has been quite a lot of systems work in this area, especially in the
context of modular self-reconfigurable robotic systems, very little theoretical
work has been done so far. We attempt to bridge this gap by proposing a model,
inspired by the behavior of amoebae, that allows rigorous algorithmic research
on self-organizing particle systems.

http://arxiv.org/abs/1307.4259 2013/07/17 - 04:55

This paper presents a quantitative study of Twitter, one of the most popular
micro-blogging services, from the perspective of user influence. We crawl
several datasets from the most active communities on Twitter and obtain 20.5
million user profiles, along with 420.2 million directed relations and 105
million tweets among the users. User influence scores are obtained from
influence measurement services, Klout and PeerIndex. Our analysis reveals
interesting findings, including a non-power-law influence distribution, strong
reciprocity among users in a community, and the existence of homophily and
hierarchical relationships in social influence. Most importantly, we observe
that whether a user retweets a message is strongly influenced by the first of
their followees who posted that message. To capture this effect, we propose
the first influencer (FI) information diffusion model and show through
extensive evaluation that compared to the widely adopted independent cascade
model, the FI model is more stable and more accurate in predicting influence
spreads in Twitter communities.
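
A toy rendering of the contrast the FI model draws, assuming an invented
follower graph and influence scores: each user makes a single retweet decision
driven by the first followee to post, instead of an independent trial per
exposure as in the independent cascade model:

```python
# First-influencer (FI) toy: one decision per user, driven by the
# earliest-posting followee. Graph and influence values are invented.
import random

random.seed(1)
follows = {"u1": [], "u2": ["u1"], "u3": ["u1", "u2"], "u4": ["u2", "u3"]}
influence = {"u1": 0.9, "u2": 0.3, "u3": 0.5, "u4": 0.2}

def spread_fi(seed):
    post_time = {seed: 0}            # who posted, and when
    decided = {seed}
    t, changed = 0, True
    while changed:
        t, changed = t + 1, False
        for user, followees in follows.items():
            if user in decided:
                continue
            exposed = [f for f in followees
                       if f in post_time and post_time[f] < t]
            if exposed:
                first = min(exposed, key=post_time.get)  # first influencer
                decided.add(user)                        # single decision
                changed = True
                if random.random() < influence[first]:
                    post_time[user] = t
    return sorted(post_time)

print("posted:", spread_fi("u1"))
```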

http://arxiv.org/abs/1307.4264 2013/07/17 - 04:55

The fitness-level method, also called the method of f-based partitions, is an
intuitive and widely used technique for the running time analysis of randomized
search heuristics. It was originally defined to prove upper and lower bounds on
the expected running time. Recently, upper tail bounds were added to the
technique; however, these tail bounds only apply to running times that are at
least twice as large as the expectation.

We remove this restriction and supplement the fitness-level method with sharp
tail bounds, including lower tails. As an exemplary application, we prove that
the running time of randomized local search on OneMax is sharply concentrated
around n ln n - 0.1159 n.
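
The claim is easy to probe empirically; this sketch runs RLS (flip one
uniformly random bit, accept only non-worsening moves) on OneMax and compares
the average running time with n ln n - 0.1159 n (the value of n and the number
of runs are arbitrary):

```python
# Randomized local search on OneMax, starting from a random bit string.
import math
import random

def rls_onemax(n, rng):
    x = [rng.randint(0, 1) for _ in range(n)]
    ones, steps = sum(x), 0
    while ones < n:
        i = rng.randrange(n)
        if x[i] == 0:        # flipping a 0 improves fitness: accept
            x[i] = 1
            ones += 1
        steps += 1           # flipping a 1 would worsen: reject, no change
    return steps

rng = random.Random(42)
n, runs = 1000, 50
times = [rls_onemax(n, rng) for _ in range(runs)]
print("mean running time:", sum(times) / runs)
print("n ln n - 0.1159 n =", n * math.log(n) - 0.1159 * n)
```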

http://arxiv.org/abs/1307.4274 2013/07/17 - 04:55

Recently, an RGB image encryption algorithm based on DNA encoding and a
chaotic map was proposed. The present paper analyzes the security of the
algorithm against chosen-plaintext attacks and finds that the algorithm can be
broken efficiently with only four chosen plain-images and the corresponding
cipher-images. Experimental analyses are provided to support the proposed
attack. In addition, two other security defects are also reported.

http://arxiv.org/abs/1307.4279 2013/07/17 - 04:55

We give a polynomial time, $(1+\epsilon)$-approximation algorithm for the
traveling repairman problem (TRP) in the Euclidean plane, on weighted planar
graphs, and on weighted trees. This improves on the known quasi-polynomial time
approximation schemes for these problems. The algorithm is based on a simple
new technique that reduces the TRP to a problem of finding an optimal
combination of K traveling salesman paths, where $K$ is a constant depending on
$\epsilon$ only. It is shown that any $\alpha$-approximation algorithm for this
$K$-TSP gives an $\alpha(1+\epsilon)$-approximation for the TRP in the same
metric space. Subsequently, approximation schemes are given for this $K$-TSP
problem in different metric spaces. The $K$-TSP with $K=1$ is equivalent to the
$k$-TSP (lowercase $k$), for which a $(2+\epsilon)$-approximation is known in a
general metric space. Hence, this approach through the $K$-TSP gives new
impetus for improving on the 3.59-approximation for the TRP in a general metric
space.

A similar reduction applies to many other minimum latency problems. To
illustrate the strength of this approach we apply it to the well-studied
scheduling problem of minimizing total weighted completion time under
precedence constraints, $1|prec|\sum w_{j}C_{j}$, and present a polynomial time
approximation scheme for the case of interval order precedence constraints.
This improves on the known 3/2-approximation for this problem.

Both approximation schemes apply as well if release dates are added to the
problem.

http://arxiv.org/abs/1307.4289 2013/07/17 - 04:55

The information system (TV, newspapers, blogs, social network platforms) and
its inner dynamics play a fundamental role in the evolution of collective
debates and thus in public opinion. In this work we address this process,
focusing on how the current inner strategies of the information system
(competition, customer satisfaction), once combined with gossip, may affect
opinion dynamics. A reinforcement effect is particularly evident in social
network platforms, where several incompatible cultures coexist (e.g., for or
against the existence of chemical trails and reptilians, the new world order
conspiracy, and so forth). We introduce a computational model of opinion
dynamics which accounts for the coexistence of media and gossip as separate but
interdependent mechanisms influencing the evolution of opinions. Individuals
may change their opinions under the combined pressure of the information
supplied by the media and the opinions of their social contacts. We stress the
effect of the media communication patterns by considering both the simple case
where each medium mimics the behavior of the most successful one (in order to
maximize the audience) and the case where there is polarization, and thus
competition, among the information reported by the media (in order to preserve
and satisfy their segmented audiences). Finally, we model the information cycle
first as in the case of traditional mainstream media (i.e., where every medium
knows about the format of all the others) and then, to account for the effect
of the Internet, on more complex connectivity patterns (as in the case of
web-based information). We show that multiple and polarized information sources
lead to stable configurations where several distant opinions coexist.
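
A heavily simplified toy of the media-plus-gossip coexistence (the update
rules, weights, and bounded-confidence threshold below are invented, not the
paper's model): agents are pulled both by random social contacts and by the
closest of two polarized media outlets:

```python
# Opinion dynamics toy: gossip (bounded-confidence averaging) plus
# polarized media pull; polarization sustains two opinion clusters.
import numpy as np

rng = np.random.default_rng(0)
n, steps = 500, 20_000
opinions = rng.random(n)
media = np.array([0.1, 0.9])          # two polarized outlets

alpha, beta, eps = 0.3, 0.3, 0.25     # media pull, gossip pull, tolerance
for _ in range(steps):
    i, j = rng.integers(n, size=2)
    # gossip: move toward a random contact if opinions are close enough
    if abs(opinions[i] - opinions[j]) < eps:
        opinions[i] += beta * (opinions[j] - opinions[i])
    # media: each agent follows the outlet closest to its own opinion
    m = media[np.abs(media - opinions[i]).argmin()]
    opinions[i] += alpha * (m - opinions[i])

hist, _ = np.histogram(opinions, bins=10, range=(0, 1))
print(hist)   # two stable clusters near the two media positions
```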

http://arxiv.org/abs/1307.4292 2013/07/17 - 04:55

Computational modeling of the Wnt signaling pathway has gained prominence as a
computer-aided diagnostic tool to develop therapeutic cancer target drugs and
to predict whether test samples are cancerous or non-cancerous. This manuscript
focuses on the development of simple static Bayesian network models of varying
complexity that encompass prior, partially available biological knowledge about
intra- and extracellular factors affecting the Wnt pathway, and that
incorporate epigenetic information, such as methylation and histone
modification, for a few genes known to have an inhibitory effect on the Wnt
pathway. It may be expected that such models not only increase cancer
prediction accuracy but also form a basis for understanding Wnt signaling
activity in different states of tumorigenesis.
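
A minimal sketch of a static Bayesian network of this flavor, using the pgmpy
library; the structure (methylation -> gene -> Wnt activity -> sample state)
and all conditional probabilities are invented placeholders, not the paper's
models:

```python
# Toy static Bayesian network: epigenetic state influences a gene that
# inhibits Wnt, which in turn relates to the tumorous/normal label.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Methylation", "Gene"),
                         ("Gene", "Wnt"),
                         ("Wnt", "Tumor")])
model.add_cpds(
    TabularCPD("Methylation", 2, [[0.7], [0.3]]),
    TabularCPD("Gene", 2, [[0.2, 0.9], [0.8, 0.1]],
               evidence=["Methylation"], evidence_card=[2]),
    TabularCPD("Wnt", 2, [[0.8, 0.3], [0.2, 0.7]],
               evidence=["Gene"], evidence_card=[2]),
    TabularCPD("Tumor", 2, [[0.9, 0.2], [0.1, 0.8]],
               evidence=["Wnt"], evidence_card=[2]))
assert model.check_model()

# predict the sample state given an observed methylation status
posterior = VariableElimination(model).query(["Tumor"],
                                             evidence={"Methylation": 1})
print(posterior)
```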

Initial results in human colorectal cancer cases indicate that incorporating
epigenetic information increases the accuracy of predicting test samples as
tumorous or normal. Receiver operating characteristic (ROC) curves and the
corresponding area under the curve (AUC) measurements, obtained from
predictions of the state of a test sample and corresponding predictions of the
state of activation of the transcription complex of the Wnt pathway for that
sample, indicate a significant difference between the Wnt pathway being on
(off) and its association with the sample being tumorous (normal). A two-sample
Kolmogorov-Smirnov test confirms the statistical deviation between the
distributions of these predictions. At a preliminary stage, these models may
help in understanding the as yet unknown effect of certain factors, like DKK2,
DKK3-1 and SFRP-2/3/5, on the {\beta}-catenin transcription complex.

http://arxiv.org/abs/1307.4296 2013/07/17 - 04:55

In this paper we present a Marathi part-of-speech (POS) tagger. Marathi is a
morphologically rich language spoken by the native people of Maharashtra. The
general approach used for developing the tagger is statistical, using the
trigram method. The main idea of the trigram method is to find the most likely
POS tag for a token based on the previous two tags, calculating probabilities
to determine the best tag sequence. In this paper we describe the development
of the tagger and also present its evaluation.
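
A small sketch of the trigram scoring idea with toy counts (a real tagger
would estimate these from a tagged Marathi corpus and combine them with
emission probabilities and a Viterbi search):

```python
# Trigram transition probabilities P(tag | prev2, prev1) estimated by
# relative frequency; the counts below are toy values.
from collections import defaultdict

trigram_counts = defaultdict(int, {("<s>", "<s>", "N"): 8,
                                   ("<s>", "N", "V"): 5,
                                   ("N", "V", "N"): 4})
bigram_counts = defaultdict(int, {("<s>", "<s>"): 10,
                                  ("<s>", "N"): 8,
                                  ("N", "V"): 5})

def p_transition(t2, t1, t):
    if bigram_counts[(t2, t1)] == 0:
        return 0.0
    return trigram_counts[(t2, t1, t)] / bigram_counts[(t2, t1)]

# score one candidate tag sequence for a 3-word sentence
tags = ["N", "V", "N"]
ctx = ["<s>", "<s>"] + tags
p = 1.0
for i, t in enumerate(tags):
    p *= p_transition(ctx[i], ctx[i + 1], t)
print("sequence probability:", p)   # 0.8 * 0.625 * 0.8 = 0.4
```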

http://arxiv.org/abs/1307.4299 2013/07/17 - 04:55

Machine transliteration has emerged as an important research area in the field
of machine translation. Transliteration basically aims to preserve the
phonological structure of words, and proper transliteration of named entities
plays a very significant role in improving the quality of machine translation.
In this paper we perform machine transliteration for the English-Punjabi
language pair using a rule-based approach. We have constructed rules for
syllabification, the process of separating words into syllables. We calculate
probabilities for named entities (proper names and locations); for words which
do not fall into the category of named entities, separate probabilities are
calculated using relative frequency through the statistical machine translation
toolkit MOSES. Using these probabilities, we transliterate the input text from
English to Punjabi.
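
An illustrative rule-based syllabifier of this kind (the two splitting rules
and the Latin-script vowel set below are invented for illustration; the
paper's rules target English-Punjabi transliteration specifically):

```python
# Two toy syllabification rules: split at V|CV and VC|CV boundaries.
import re

SPLIT = re.compile(r"(?<=[aeiou])(?=[^aeiou][aeiou])"           # V | C V
                   r"|(?<=[aeiou][^aeiou])(?=[^aeiou][aeiou])") # V C | C V

def syllabify(word):
    return SPLIT.split(word.lower())

for w in ["punjabi", "amritsar", "ludhiana"]:
    print(w, "->", syllabify(w))
# e.g. punjabi -> ['pun', 'ja', 'bi'], amritsar -> ['am', 'rit', 'sar']
```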

http://arxiv.org/abs/1307.4300 2013/07/17 - 04:55

A global optimization problem is studied where the objective function $f(x)$
is a multidimensional black-box function and its gradient $f'(x)$ satisfies the
Lipschitz condition over a hyperinterval with an unknown Lipschitz constant
$K$. Different methods for solving this problem by using an a priori given
estimate of $K$, its adaptive estimates, and adaptive estimates of local
Lipschitz constants are known in the literature. Recently, the authors have
proposed a one-dimensional algorithm working with multiple estimates of the
Lipschitz constant for $f'(x)$ (the existence of such an algorithm was a
challenge for 15 years). In this paper, a new multidimensional geometric method
evolving the ideas of this one-dimensional scheme and using an efficient
one-point-based partitioning strategy is proposed. Numerical experiments
executed on 800 multidimensional test functions demonstrate quite a promising
performance in comparison with popular DIRECT-based methods.
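
A one-dimensional sketch of the geometric idea (not the paper's
multidimensional method): since f' is Lipschitz with constant K, each trial
point x_i supports the quadratic minorant f(x) >= f(x_i) + f'(x_i)(x - x_i) -
(K/2)(x - x_i)^2, and the next trial is placed where the pointwise maximum of
these minorants is smallest (objective and K are invented):

```python
# Geometric global minimization using quadratic minorants from f'.
import numpy as np

f = lambda x: np.sin(3.0 * x) + 0.5 * x     # test objective on [0, 4]
fp = lambda x: 3.0 * np.cos(3.0 * x) + 0.5  # its derivative f'
K = 10.0                                    # assumed bound on Lip(f')

grid = np.linspace(0.0, 4.0, 2001)
pts = [0.0, 4.0]
for _ in range(30):
    # pointwise maximum of the minorants built at all trial points
    bound = np.max([f(p) + fp(p) * (grid - p) - 0.5 * K * (grid - p) ** 2
                    for p in pts], axis=0)
    pts.append(float(grid[np.argmin(bound)]))   # most promising next x

best = min(pts, key=f)
print("approx. minimizer:", best, "value:", f(best))
```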

http://arxiv.org/abs/1307.4302 2013/07/17 - 04:55

In 1985, Razborov discovered a proof that the monotone circuit complexity of
the clique problem is super-polynomial. Alon and Boppana improved the result to
an exponential lower bound of exp(\Omega((n / log n)^{1/3})) on the size of a
monotone circuit C computing cliques of size (1/4) (n / log n)^{2/3}, where n
is the number of vertices in a graph. Both proofs are based on the method of
approximations, using Erdos and Rado's sunflower lemma. There has been interest
in further generalizing the proof scheme.

In this paper, we present a new approach to showing exponential monotone
complexity. Unlike the standard method, it dynamically constructs a
counterexample: assuming a monotone circuit C of sub-exponential size computes
k-cliques c, an algorithm finds an edge set t containing no c in the
disjunctive normal form constructed at the root of C. We call such a t a shift.
The proof shows that t is disjoint from an edge set z whose removal leaves no
k-cliques.

We explore the set theoretical nature of computation by Boolean circuits. We
develop a theory by finding topological properties in the Hamming space 2^{[n]}
where [n]={1, 2, ..., n}. A structural theorem is presented, which is closely
related to the sunflower lemma and claims a stronger statement in most cases.
The theory lays the foundation of the above shift method. It also shows the
existence of a sunflower with small core in a family of sets, which is not an
obvious consequence of the sunflower lemma.

Lastly, we point out that the new methodology has the potential to apply to
general circuits computing cliques, due to the dynamic selection of t and z,
and to improve the Alon-Boppana bound exp(\Omega((n / log n)^{1/3})).

http://arxiv.org/abs/1307.4308 2013/07/17 - 04:55

We consider the stochastic electro-mechanical dynamics of the power system in
the vicinity of the saddle-node bifurcation associated with the loss of global
stability such as voltage collapse. The fluctuations are driven by random
variations of loads and intermittent renewable generation. In the vicinity of
the collapse the system experiences the so-called critical slowing-down
characterized by slowing and amplification of the system state vector
fluctuations. In the generic case of a co-dimension 1 bifurcation there is a
single mode responsible for the slowing-down. We characterize the stochastic
fluctuations using the formal perturbative expansion over the lowest eigenvalue
of the system power flow Jacobian and verify the resulting expressions for
correlation functions with the help of direct numerical simulations. We
conclude that the onset of critical slowing-down is a good marker of approach
to the threshold of global instability. It can be straightforwardly detected
from the analysis of single-node autocorrelation functions of system state
variables and as such does not require full observability of the grid.
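
A toy of the proposed marker, with an Ornstein-Uhlenbeck process standing in
for the slow mode (the eigenvalue lam shrinks toward zero as the collapse is
approached; all numbers are illustrative): the lag-1 autocorrelation of a
single signal rises toward 1 near the bifurcation:

```python
# Critical slowing-down marker: lag-1 autocorrelation vs. eigenvalue.
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 50_000

def lag1_autocorr(lam):
    x = np.empty(n)
    x[0] = 0.0
    noise = rng.standard_normal(n) * np.sqrt(dt)
    for t in range(1, n):
        # Euler step of dx = -lam * x dt + dW (toy slow mode)
        x[t] = x[t - 1] - lam * x[t - 1] * dt + noise[t]
    return np.corrcoef(x[:-1], x[1:])[0, 1]

for lam in (10.0, 1.0, 0.1):      # closer to collapse as lam shrinks
    print(f"lam={lam:5.1f}  lag-1 autocorrelation={lag1_autocorr(lam):.4f}")
```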

http://arxiv.org/abs/1307.4318 2013/07/17 - 04:55

Conventional random number generators provide the speed but not necessarily
the high quality output streams needed for large-scale stochastic simulations.
Cryptographically-based generators, on the other hand, provide superior quality
output but are often deemed too slow to be practical for use in large
simulations. We combine these two approaches to construct a family of hybrid
generators that permit users to choose the desired tradeoff between quality and
speed for a given application. We demonstrate the effectiveness, performance,
and practicality of this approach using a standard battery of tests, which show
that high quality streams of random numbers can be obtained at a cost
comparable to that of fast conventional generators.
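
A sketch of one way such a hybrid could be structured (this construction is
invented for illustration, not the paper's generator): a fast xorshift core
produces the stream, and a cryptographic hash periodically rehashes the state,
with the mixing period serving as the quality/speed knob:

```python
# Hybrid generator sketch: fast xorshift64 core + periodic SHA-256 mixing.
import hashlib
import struct

class HybridRNG:
    def __init__(self, seed, mix_every=16):
        self.state = seed & 0xFFFFFFFFFFFFFFFF
        self.mix_every = mix_every   # smaller -> higher quality, slower
        self.count = 0

    def _xorshift(self):             # fast conventional step (xorshift64)
        x = self.state
        x ^= (x << 13) & 0xFFFFFFFFFFFFFFFF
        x ^= x >> 7
        x ^= (x << 17) & 0xFFFFFFFFFFFFFFFF
        self.state = x
        return x

    def next64(self):
        self.count += 1
        if self.count % self.mix_every == 0:
            # cryptographic mixing: rehash the state with SHA-256
            digest = hashlib.sha256(struct.pack("<Q", self.state)).digest()
            self.state = struct.unpack("<Q", digest[:8])[0] or 1
        return self._xorshift()

rng = HybridRNG(seed=0x9E3779B97F4A7C15, mix_every=16)
print([rng.next64() % 100 for _ in range(8)])
```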

http://arxiv.org/abs/1307.4320 2013/07/17 - 04:55