Computer Science papers from arXiv.org

We consider the problem of controlling the propagation of an epidemic
outbreak in an arbitrary network of contacts by investing in disease awareness
throughout the network. We model the effect of agent awareness on the dynamics
of an epidemic using the SAIS epidemic model, an extension of the SIS epidemic
model that includes a state of "awareness". This model allows us to derive a
condition to control the spread of an epidemic outbreak in terms of the
eigenvalues of a matrix that depends on the network structure and the
parameters of the model. We study the problem of finding the cost-optimal
investment in disease awareness throughout the network when the cost function
presents some realistic properties. We propose a convex framework to find the
cost-optimal allocation of resources. We validate our results with numerical
simulations in a real online social network.
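The SAIS dynamics described above can be illustrated with a minimal discrete-time simulation on a small contact graph. The authors work with a continuous-time model and specific parameters; the graph, the rates beta, beta_a, delta, and the awareness rate kappa below are all illustrative assumptions, not values from the paper.

```python
import random

# States: S = susceptible, A = aware (infected at a reduced rate), I = infected.
# Minimal discrete-time SAIS sketch on an undirected contact graph given as an
# adjacency list; beta/beta_a/delta/kappa are illustrative, not the paper's values.
def sais_step(graph, state, beta=0.3, beta_a=0.1, delta=0.2, kappa=0.2, rng=random):
    new_state = dict(state)
    for v, nbrs in graph.items():
        infected_nbrs = sum(1 for u in nbrs if state[u] == "I")
        if state[v] == "I":
            if rng.random() < delta:            # recovery I -> S
                new_state[v] = "S"
        else:
            rate = beta if state[v] == "S" else beta_a
            p_inf = 1.0 - (1.0 - rate) ** infected_nbrs
            if rng.random() < p_inf:            # infection S/A -> I
                new_state[v] = "I"
            elif state[v] == "S" and infected_nbrs and rng.random() < kappa:
                new_state[v] = "A"              # gain awareness near infection
    return new_state

# Toy 4-cycle with one initial infection.
graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
state = {0: "I", 1: "S", 2: "S", 3: "S"}
for _ in range(50):
    state = sais_step(graph, state)
```

The awareness state matters because aware nodes are infected at the reduced rate beta_a < beta, which is what makes targeted investment in awareness an epidemic-control lever.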

http://arxiv.org/abs/1308.3662 2013/08/19 - 10:34

We investigate whether an n-vertex instance (G,k) of Treewidth, asking
whether the graph G has treewidth at most k, can efficiently be made sparse
without changing its answer. By giving a special form of OR-cross-composition,
we prove that this is unlikely: if there is an e > 0 and a polynomial-time
algorithm that reduces n-vertex Treewidth instances to equivalent instances, of
an arbitrary problem, with O(n^{2-e}) bits, then NP is in coNP/poly and the
polynomial hierarchy collapses to its third level.

Our sparsification lower bound has implications for structural
parameterizations of Treewidth: parameterizations by measures that do not
exceed the vertex count cannot have kernels with O(k^{2-e}) bits for any e >
0, unless NP is in coNP/poly. Motivated by the question of determining the
optimal kernel size for Treewidth parameterized by vertex cover, we improve the
O(k^3)-vertex kernel from Bodlaender et al. (STACS 2011) to a kernel with
O(k^2) vertices. Our improved kernel is based on a novel form of
treewidth-invariant set. We use the q-expansion lemma of Fomin et al. (STACS
2011) to find such sets efficiently in graphs whose vertex count is
superquadratic in their vertex cover number.

http://arxiv.org/abs/1308.3665 2013/08/19 - 10:34

One of the major challenges faced by database managers today is managing the
performance of complex SQL queries that are dynamic in nature. Since it is not
possible to tune each and every query because of this dynamic nature, there is
a definite possibility that these queries may cause serious database
performance issues if left alone. Conventional indexes are useful only for
queries that are executed frequently or for columns that are frequently joined
in SQL queries. This proposal describes a method, a query optimizer for
optimizing database queries in a database management system. Just-In-Time (JIT)
indexes are on-demand, temporary indexes created on the fly based on current
needs, so that they can satisfy any kind of query. JIT indexes are created only
when the configured threshold values for resource consumption are exceeded for
a query. JIT indexes are stored on a temporary basis and are replaced by new
JIT indexes in the course of time. The proposal is substantiated with the help
of experimental programs and various test cases. The idea of parallel
programming is also brought into the picture, as it can be used effectively on
a multiprocessor system: multiple threads are employed, with one set of threads
proceeding in the conventional way and the other set proceeding with the
proposed way. A live switchover is made when a suitable stage is reached, and
from then onwards only the proposed method is used.
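The threshold-triggered, temporary-index idea can be sketched in a few lines. This is a toy stand-in, not the proposal's implementation: the scan-count threshold, the dict-based hash "index", and the eviction policy are all assumptions made for illustration.

```python
# Toy sketch of the Just-In-Time indexing idea: when a column has been fully
# scanned a configured number of times, build a temporary hash index on it and
# reuse that index until it is evicted. Thresholds and structures are illustrative.
class JITTable:
    def __init__(self, rows, threshold=3, max_indexes=2):
        self.rows = rows                  # list of row dicts
        self.threshold = threshold        # full scans before an index is built
        self.max_indexes = max_indexes    # temporary-index budget
        self.scan_counts = {}             # column -> number of full scans so far
        self.indexes = {}                 # column -> {value: [rows]}

    def select(self, column, value):
        if column in self.indexes:        # fast path: JIT index already exists
            return self.indexes[column].get(value, [])
        self.scan_counts[column] = self.scan_counts.get(column, 0) + 1
        if self.scan_counts[column] >= self.threshold:
            if len(self.indexes) >= self.max_indexes:
                self.indexes.pop(next(iter(self.indexes)))  # evict oldest index
            idx = {}
            for r in self.rows:
                idx.setdefault(r[column], []).append(r)
            self.indexes[column] = idx
            return idx.get(value, [])
        return [r for r in self.rows if r[column] == value]  # conventional scan

t = JITTable([{"id": i, "grp": i % 3} for i in range(9)])
for _ in range(3):
    hits = t.select("grp", 1)            # the third scan triggers index creation
```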

http://arxiv.org/abs/1308.3679 2013/08/19 - 10:34

Numerous algorithms have been proposed to allow legged robots to learn to
walk. However, the vast majority are devised to learn to walk along a straight
line, which is not sufficient to accomplish any real-world mission.

Here we introduce TBR-Learning, a new learning algorithm that simultaneously
discovers several hundred simple walking controllers, one for each possible
direction. By taking advantage of solutions that are usually discarded,
TBR-Learning is substantially faster than independently learning each
controller. Our technique relies on two methods: (1) novelty search with local
competition, which comes from the artificial life research field, and (2) the
transferability approach, which combines simulations and real tests to optimize
a policy. We evaluate this new technique on a hexapod robot. Results show that
with only a few dozen short experiments performed on the physical robot, the
algorithm learns a collection of controllers that allows the robot to reach
each point of its reachable space. Overall, TBR-Learning opens up a new kind of
learning algorithm that simultaneously optimizes all the achievable behaviors
of a robot.

http://arxiv.org/abs/1308.3689 2013/08/19 - 10:34

This paper gives an analytical method to determine the economic and indirect
implications of denial-of-service and distributed denial-of-service attacks. It
is based on time preference dynamics applied to the monetary mass for the
restoration of capabilities, on long-term investments to rebuild capabilities,
and on the usability level of the capabilities after an attack. A simple
illustrative example is provided for a denial of service on a corporate data
centre. The needed data collection methodologies are categorized by classes of
targets. The use of the method is explained in the context of legal or policy
driven dissuasive, retaliation or compensation/restoration actions. A concrete
set of deployment cases in mobile communications services is discussed. The
conclusion includes policy recommendations as well as information exchange
requirements.

http://arxiv.org/abs/1308.3693 2013/08/19 - 10:34

RNA-seq has rapidly become the de facto technique to measure gene expression.
However, the time required for analysis has not kept up with the pace of data
generation. Here we introduce Sailfish, a novel computational method for
quantifying the abundance of previously annotated RNA isoforms from RNA-seq
data. Sailfish entirely avoids mapping reads, which is a time-consuming step in
all current methods. Sailfish provides quantification estimates much faster
than existing approaches (typically 20-times faster) without loss of accuracy.
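The mapping-free idea behind Sailfish can be illustrated with a toy k-mer counter. The real method uses a perfect hash over transcriptome k-mers and an EM step to resolve ambiguity; this sketch only credits k-mers that occur in a single annotated transcript, and the sequences are made up.

```python
from collections import Counter

# Toy illustration of mapping-free quantification: instead of aligning reads,
# count k-mers and attribute them to the annotated transcripts containing them.
# Only k-mers unique to one transcript are resolved here; Sailfish itself uses
# a perfect hash and an EM procedure to handle shared k-mers.
def kmers(seq, k):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def quantify(transcripts, reads, k=4):
    owners = {}                           # k-mer -> set of transcripts containing it
    for name, seq in transcripts.items():
        for km in set(kmers(seq, k)):
            owners.setdefault(km, set()).add(name)
    counts = Counter()
    for read in reads:
        for km in kmers(read, k):
            ts = owners.get(km)
            if ts and len(ts) == 1:       # keep only unambiguous k-mers
                counts[next(iter(ts))] += 1
    return counts

transcripts = {"t1": "ACGTACGTAA", "t2": "TTTTGGGGCC"}
counts = quantify(transcripts, ["ACGTACGT", "GGGGCC"])
```

The speedup claimed in the abstract comes precisely from replacing the read-alignment step with this kind of direct k-mer lookup.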

http://arxiv.org/abs/1308.3700 2013/08/19 - 10:34

We construct unsatisfiable k-CNF formulas where every clause has k distinct
literals and every variable appears in at most (2/e + o(1))2^{k}/k clauses. The
Lopsided Local Lemma, applied with assignment of random values according to
counterintuitive probabilities, shows that our result is asymptotically best
possible. The determination of this extremal function is particularly important
as it represents the value where the k-SAT problem exhibits its complexity
hardness jump: from having every instance being a YES-instance it becomes
NP-hard just by allowing each variable to occur in one more clause. The
asymptotics of other related extremal functions are also determined. Let l(k)
denote the maximum number such that every k-CNF formula with each clause
containing k distinct literals and each clause having a common variable with at
most l(k) other clauses is satisfiable. We establish that the lower bound on
l(k) obtained from the Local Lemma is asymptotically optimal, i.e., l(k) = (1/e
+ o(1))2^{k}. The construction of our unsatisfiable CNF formulas is based on
the binary tree approach of [16], and thus the constructed formulas are in the
class MU(1) of minimal unsatisfiable formulas having one more clause than
variables. To obtain the asymptotically optimal binary trees we consider a
continuous approximation of the problem, set up a differential equation and
estimate its solution. The trees are then obtained through a discretization of
this solution. The binary trees constructed also give asymptotically precise
answers for seemingly unrelated problems like the European Tenure Game
introduced by Doerr [9] and the search problem with bounded number of
consecutive lies, considered in a problem of the 2012 IMO contest. As yet
another consequence we slightly improve two bounds related to the Neighborhood
Conjecture of Beck.

http://arxiv.org/abs/1006.0744 2013/08/19 - 10:34

Recently, physical-layer-security-based approaches have drawn considerable
attention and are envisaged to provide secure communications in wireless
networks. However, most of the existing literature focuses only on the physical
layer. Thus, how to design an effective transmission scheme that also considers
the requirements of the upper layers is still an unsolved problem. We consider
such a cross-layer resource allocation problem in the multi-user downlink
environment, for scenarios with both instantaneous and partial eavesdropping
channel information. The problem is first formulated in a new security
framework. Then, the control scheme is designed to maximize the average
admission rate of the data, incorporating delay, power, and secrecy as
constraints, for both non-colluding and colluding eavesdropping cases in each
scenario. A performance analysis is given based on stochastic optimization
theory, and simulations are carried out to validate the effectiveness of our
scheme.

http://arxiv.org/abs/1210.1139 2013/08/19 - 10:34

We present a new probabilistic symbolic algorithm that, given a variety
defined in an n-dimensional affine space by a generic sparse system with fixed
supports, computes the Zariski closure of its projection to an l-dimensional
coordinate affine space with l < n. The complexity of the algorithm depends
polynomially on combinatorial invariants associated to the supports.

http://arxiv.org/abs/1303.0266 2013/08/19 - 10:34

We present a market-based approach to the Air Traffic Flow Management (ATFM)
problem. The goods in our market are delays and buyers are airline companies;
the latter pay money to the FAA to buy away the desired amount of delay on a
per flight basis. We give a notion of equilibrium for this market and an LP
whose solution gives an equilibrium allocation of flights to landing slots as
well as equilibrium prices for the landing slots. Via a reduction to matching,
we show that this equilibrium can be computed combinatorially in strongly
polynomial time. Moreover, there is a special set of equilibrium prices, which
can be computed easily, that is identical to the VCG solution, and therefore
the market is incentive compatible in dominant strategies.
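Since the paper reduces equilibrium computation to matching, the core combinatorial step can be illustrated as an assignment of flights to landing slots. The brute-force solver and the delay-cost matrix below are a hypothetical toy, not the paper's LP or its pricing mechanism.

```python
from itertools import permutations

# Toy stand-in for the matching step: find a minimum-total-cost assignment of
# flights to landing slots by brute force. cost[f][s] is a hypothetical cost to
# the airline of flight f landing in slot s (e.g. monetized delay).
def min_cost_assignment(cost):
    n = len(cost)
    best, best_perm = float("inf"), None
    for perm in permutations(range(n)):   # perm[f] = slot assigned to flight f
        c = sum(cost[f][perm[f]] for f in range(n))
        if c < best:
            best, best_perm = c, perm
    return best, best_perm

cost = [[0, 2, 5],    # flight 0 prefers the earliest slot
        [1, 0, 3],    # flight 1 prefers slot 1
        [4, 1, 0]]    # flight 2 prefers the last slot
total, assignment = min_cost_assignment(cost)
```

A real instance would use a polynomial-time matching algorithm rather than enumeration; the point here is only the flights-to-slots assignment structure.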

http://arxiv.org/abs/1305.3241 2013/08/19 - 10:34

In this paper we discuss the potential of emerging spin-torque devices for
computing applications. Recent proposals for spin-based computing schemes may
be differentiated as all-spin vs. hybrid, programmable vs. fixed, and Boolean
vs. non-Boolean. All-spin logic styles may offer high area density due to the
small form factor of nano-magnetic devices. However, circuit- and system-level
design techniques need to be explored that leverage the specific spin-device
characteristics to achieve energy efficiency, performance, and reliability
comparable to those of CMOS. The non-volatility of nanomagnets can be exploited
in the design of energy- and area-efficient programmable logic. In such logic
styles, spin devices may play the dual role of computing as well as memory
elements that provide field programmability. Spin-based threshold logic design
is presented as an example (dynamic resistive threshold logic and magnetic
threshold logic). Emerging spintronic phenomena may lead to ultralow-voltage,
current-mode, spin-torque switches that can offer attractive computing
capabilities beyond digital switches. Such devices may be suitable for
non-Boolean data-processing applications which involve analog processing.
Integration of such spin-torque devices with charge-based devices like CMOS and
resistive memory can lead to highly energy-efficient information-processing
hardware for applications like pattern matching, neuromorphic computing, image
processing, and data conversion. Finally, we discuss the possibility of
applying emerging spin-torque switches in the design of energy-efficient global
interconnects for future chip multiprocessors.

http://arxiv.org/abs/1308.2745 2013/08/19 - 10:34

Recovering a low-rank tensor from incomplete information is a recurring
problem in signal processing and machine learning. The most popular convex
relaxation of this problem minimizes the sum of the nuclear norms of the
unfoldings of the tensor. We show that this approach can be substantially
suboptimal: reliably recovering a $K$-way tensor of length $n$ and Tucker rank
$r$ from Gaussian measurements requires $\Omega(r n^{K-1})$ observations. In
contrast, a certain (intractable) nonconvex formulation needs only $O(r^K +
nrK)$ observations. We introduce a very simple, new convex relaxation, which
partially bridges this gap. Our new formulation succeeds with $O(r^{\lfloor K/2
\rfloor}n^{\lceil K/2 \rceil})$ observations. While these results pertain to
Gaussian measurements, simulations strongly suggest that the new norm also
outperforms the sum of nuclear norms for tensor completion from a random subset
of entries.

Our lower bound for the sum-of-nuclear-norms model follows from a new result
on recovering signals with multiple sparse structures (e.g. sparse, low rank),
which perhaps surprisingly demonstrates the significant suboptimality of the
commonly used recovery approach via minimizing the sum of individual sparsity
inducing norms (e.g. $l_1$, nuclear norm). Our new formulation for low-rank
tensor recovery, however, opens the possibility of reducing the sample
complexity by exploiting several structures jointly.

http://arxiv.org/abs/1307.5870 2013/08/18 - 09:50

This paper provides lower bounds on the reconstruction error for transmission
of two continuous correlated random vectors sent over both sum and parallel
channels using the help of two causal feedback links from the decoder to the
encoders connected to each sensor. This construction is considered for both
uniformly and normally distributed sources with zero mean and unit variance.
Additionally, a two-way retransmission protocol, which is a non-coherent
adaptation of the original work by Yamamoto is introduced for an additive white
Gaussian noise channel with one degree of freedom. Furthermore, the novel
protocol of a single source is extended to the dual-source case again for two
different source distributions. The asymptotic optimality of the protocols is
analyzed, and upper bounds on the distortion level are derived for two rounds,
considering the two extreme cases of high and low correlation among the
sources. It is shown by both the upper and lower bounds that collaboration can
be achieved through energy accumulation. Analytical results are supported by
numerical analysis for both the single- and dual-source cases to show the
improvement in distortion to be gained by retransmission, subject to the
average energy used by the protocol. To cover a more realistic scenario, the
same single-source protocol is adapted to a wireless channel and its
performance is compared through numerical evaluation.

http://arxiv.org/abs/1307.6459 2013/08/18 - 09:50

This paper is a reply to the opinion paper: Transdisciplinary electric power
grid science (PNAS), 2013 [arXiv:1307.7305].

http://arxiv.org/abs/1308.3229 2013/08/16 - 21:07

This paper deals with spectrum sensing for cognitive radio scenarios where
the decision fusion center (DFC) exploits array processing. More specifically,
we explore the impact of user cooperation and orthogonal transmissions among
secondary users (SUs) on the reporting channel. To this aim four protocols are
considered: (i) non-orthogonal and non-cooperative; (ii) orthogonal and
non-cooperative; (iii) non-orthogonal and cooperative; (iv) orthogonal and
cooperative. The DFC employs the maximum ratio combining (MRC) rule, and
performance is evaluated in terms of the complementary receiver operating
characteristic (CROC). Analytical results, coupled with Monte Carlo
simulations, are presented.
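The MRC rule at the fusion center is a standard operation and can be shown in a few lines: each received branch is weighted by the conjugate of its channel gain, which maximizes the output SNR. The channel gains and the noiseless receive model below are illustrative, not the paper's reporting-channel setup.

```python
# Sketch of the maximum ratio combining (MRC) rule at a decision fusion center:
# weight each received branch by the conjugate of its channel gain and sum.
# Gains and observations are illustrative; noise is omitted for clarity.
def mrc_combine(h, r):
    """h: list of complex channel gains, r: list of received samples."""
    return sum(hi.conjugate() * ri for hi, ri in zip(h, r))

h = [1 + 1j, 0.5 - 0.5j]           # hypothetical reporting-channel gains
s = 1.0                            # transmitted decision statistic
r = [hi * s for hi in h]           # noiseless receive, r_i = h_i * s
y = mrc_combine(h, r)              # equals s * sum(|h_i|^2) in the noiseless case
```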

http://arxiv.org/abs/1308.3239 2013/08/16 - 21:07

In this paper, we propose a robust approach for text extraction and
recognition from Arabic news video sequences. The text included in video
sequences is an important cue for indexing and search systems. However, this
text is difficult to detect and recognize because of the variability of its
size, its low-resolution characters, and the complexity of the backgrounds. To
solve these problems, we propose a system that performs two main tasks:
extraction and recognition of text. Our system is tested on a varied database
composed of different Arabic news programs, and the obtained results are
encouraging and show the merits of our approach.

http://arxiv.org/abs/1308.3243 2013/08/16 - 21:07

This work studies the hardness of finding independent sets in hypergraphs
which are either 2-colorable or are almost 2-colorable, i.e. can be 2-colored
after removing a small fraction of vertices and the incident hyperedges. To be
precise, say that a hypergraph is (1-eps)-almost 2-colorable if removing an eps
fraction of its vertices and all hyperedges incident on them makes the
remaining hypergraph 2-colorable. In particular we prove the following results.

For an arbitrarily small constant gamma > 0, there is a constant xi > 0, such
that, given a 4-uniform hypergraph on n vertices which is (1 - eps)-almost
2-colorable for eps = 2^{-(log n)^xi}, it is quasi-NP-hard to find an
independent set of n/(2^{(log n)^{1-gamma}}) vertices.

For any constants eps, delta > 0, given as input a 3-uniform hypergraph on
$n$ vertices which is (1-eps)-almost 2-colorable, it is NP-hard to find an
independent set of delta n vertices. Assuming the d-to-1 Games Conjecture the
following holds.

For any constant delta > 0, given a 2-colorable 3-uniform hypergraph on n
vertices, it is NP-hard to find an independent set of delta n vertices.

The hardness result on independent set in almost 2-colorable 3-uniform
hypergraphs was earlier known only assuming the Unique Games Conjecture. In
this work we prove the result unconditionally. For independent sets in
2-colorable 3-uniform hypergraphs we prove the first strong hardness result,
albeit assuming the d-to-1 Games Conjecture. Our result on almost 2-colorable
4-uniform hypergraphs gives the first nearly polynomial hardness factor for
independent set in hypergraphs which are (almost) colorable with constantly
many colors. It partially bridges the gap between the previous best lower bound
of poly(log n) and the algorithmic upper bounds of n^{Omega(1)}.

http://arxiv.org/abs/1308.3247 2013/08/16 - 21:07

Motivated by understanding non-strict and strict pure strategy equilibria in
network anti-coordination games, we define notions of stable and, respectively,
strictly stable colorings in graphs. We characterize the cases when such
colorings exist and when the decision problem is NP-hard. These correspond to
finding pure strategy equilibria in the anti-coordination games, whose price of
anarchy we also analyze. We further consider the directed case, a
generalization that captures both coordination and anti-coordination. We prove
the decision problem for non-strict equilibria in directed graphs is NP-hard.
Our notions also have multiple connections to other combinatorial questions,
and our results resolve some open problems in these areas, most notably the
complexity of the strictly unfriendly partition problem.
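Under the anti-coordination reading, a stable coloring corresponds to a pure Nash equilibrium in which no vertex can strictly reduce the number of neighbors sharing its color by switching. The paper's exact definitions may differ in detail; the check below, and the example graph, are an illustrative interpretation.

```python
# Hedged sketch of stability in a graph anti-coordination game: a coloring is
# stable (a pure Nash equilibrium) if no vertex can strictly decrease the number
# of same-colored neighbors by switching colors. Graph and colors are illustrative.
def conflicts(graph, coloring, v, c):
    return sum(1 for u in graph[v] if coloring[u] == c)

def is_stable(graph, coloring, colors):
    return all(
        conflicts(graph, coloring, v, coloring[v])
        <= min(conflicts(graph, coloring, v, c) for c in colors)
        for v in graph
    )

# A 4-cycle: alternating colors give every vertex zero conflicts, so the
# coloring is stable; the monochromatic coloring is not.
graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
assert is_stable(graph, {0: "r", 1: "b", 2: "r", 3: "b"}, {"r", "b"})
assert not is_stable(graph, {0: "r", 1: "r", 2: "r", 3: "r"}, {"r", "b"})
```

The strictly stable variant would replace <= with < for every alternative color, mirroring strict vs. non-strict equilibria.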

http://arxiv.org/abs/1308.3258 2013/08/16 - 21:07

This paper characterizes the degrees of freedom (DoF) regions for the
multi-user vector broadcast channel with periodic channel state information
(CSI) feedback. As a part of the characterization, a new transmission method
called space-time interference alignment is proposed, which exploits both the
current and past CSI jointly. Using the proposed alignment technique, an inner
bound of the sum-DoF region is characterized as a function of a normalized CSI
feedback frequency, which measures CSI feedback speed compared to the speed of
user's channel variations. One consequence of the result is that the achievable
sum-DoF gain is improved significantly when a user sends back both current and
outdated CSI compared to the case where the user sends back current CSI only.
Then, a trade-off between CSI feedback delay and the sum-DoF gain is
characterized for the multi-user vector broadcast channel in terms of a
normalized CSI feedback delay that measures CSI obsoleteness compared to
channel coherence time. A crucial insight is that it is possible to achieve the
optimal DoF gain if the feedback delay is less than a derived fraction of the
channel coherence time. This precisely characterizes the intuition that a small
delay should be negligible.

http://arxiv.org/abs/1308.3272 2013/08/16 - 21:07

An oblivious subspace embedding (OSE) for some eps, delta in (0,1/3) and d <=
m <= n is a distribution D over R^{m x n} such that for any linear subspace W
of R^n of dimension d,

Pr_{Pi ~ D}(for all x in W, (1-eps) |x|_2 <= |Pi x|_2 <= (1+eps)|x|_2) >= 1 -
delta.

We prove that any OSE with delta < 1/3 must have m = Omega((d +
log(1/delta))/eps^2), which is optimal. Furthermore, if every Pi in the support
of D is sparse, having at most s non-zero entries per column, then we show
tradeoff lower bounds between m and s.
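The OSE property can be checked numerically for the classical dense Gaussian sketch: with Pi an m x n matrix of i.i.d. N(0, 1/m) entries and m comfortably above d/eps^2, norms of vectors from a fixed d-dimensional subspace are preserved up to 1 +/- eps with high probability. The dimensions and eps below are illustrative choices, not the paper's parameters.

```python
import random, math

# Numerical sanity check of the oblivious subspace embedding property for a
# dense Gaussian sketch. Dimensions are small and illustrative.
random.seed(0)
n, d, m, eps = 60, 2, 400, 0.5

basis = [[random.gauss(0, 1) for _ in range(n)] for _ in range(d)]
Pi = [[random.gauss(0, 1) / math.sqrt(m) for _ in range(n)] for _ in range(m)]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def matvec(M, v):
    return [sum(Mi[j] * v[j] for j in range(len(v))) for Mi in M]

ok = True
for _ in range(20):                      # random vectors x in the subspace W
    coeffs = [random.gauss(0, 1) for _ in range(d)]
    x = [sum(coeffs[i] * basis[i][j] for i in range(d)) for j in range(n)]
    ratio = norm(matvec(Pi, x)) / norm(x)
    ok = ok and (1 - eps) <= ratio <= (1 + eps)
```

The lower bound in the abstract says no distribution can get away with m asymptotically smaller than (d + log(1/delta))/eps^2, so the Gaussian construction is optimal up to constants.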

http://arxiv.org/abs/1308.3280 2013/08/16 - 21:07

This paper provides new stability results for Action-Dependent Heuristic
Dynamic Programming (ADHDP), using a control algorithm that iteratively
improves an internal model of the external world in the autonomous system based
on its continuous interaction with the environment. We extend previous results
by ADHDP control to the case of general multi-layer neural networks with deep
learning across all layers. In particular, we show that the introduced control
approach is uniformly ultimately bounded (UUB) under specific conditions on the
learning rates, without explicit constraints on the temporal discount factor.
We demonstrate the benefit of our results to the control of linear and
nonlinear systems, including the cart-pole balancing problem. Our results show
significantly improved learning and control performance as compared to the
state of the art.

http://arxiv.org/abs/1308.3282 2013/08/16 - 21:07

This paper discloses a simple algorithm for encrypting text messages, based
on the NP-completeness of the subset sum problem, such that the similarity
between encryptions is roughly proportional to the semantic similarity between
their generating messages. This allows parties to compare encrypted messages
for semantic overlap without trusting an intermediary and might be applied, for
example, as a means of finding scientific collaborators over the Internet.

http://arxiv.org/abs/1308.3294 2013/08/16 - 21:07

Cliques are defined as complete graphs or subgraphs; they are the strongest
form of cohesive subgroup, and are of interest in both social science and
engineering contexts. In this paper we show how to efficiently estimate the
distribution of clique sizes from a probability sample of nodes obtained from a
graph (e.g., by independence or link-trace sampling). We introduce two types of
unbiased estimators, one of which exploits the labeling of sampled nodes'
neighbors and one of which does not require this information. We compare the
on a variety of real-world graphs and provide suggestions for their use. We
generalize our estimators to cases in which cliques are distinguished not only
by size but also by node attributes, allowing us to estimate clique composition
by size. Finally, we apply our methodology to a sample of Facebook users to
estimate the clique size distribution by gender over the social graph.
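The basic reweighting idea behind such estimators can be shown on a toy graph: under uniform node sampling with known inclusion probability p, a clique of size c can be hit through any of its c members, so counting cliques around sampled nodes with weight 1/(p*c) yields an unbiased estimate of the number of cliques of each size. The brute-force enumeration and the example graph are purely illustrative; the paper's estimators are more sophisticated.

```python
from itertools import combinations

# Toy Horvitz-Thompson-style estimate of the clique-size distribution from a
# uniform node sample with inclusion probability p. A size-c clique is seen via
# any of its c members, hence the 1/(p*c) weight. Illustrative only.
def is_clique(graph, nodes):
    return all(v in graph[u] for u, v in combinations(nodes, 2))

def cliques_containing(graph, v, size):
    others = [u for u in graph if u != v]
    return [c for c in combinations(others, size - 1)
            if is_clique(graph, (v,) + c)]

def estimate_clique_counts(graph, sample, p, max_size):
    est = {}
    for c in range(2, max_size + 1):
        total = sum(len(cliques_containing(graph, v, c)) for v in sample)
        est[c] = total / (p * c)
    return est

# Complete graph K4: 6 edges, 4 triangles, and one 4-clique. Sampling every
# node (p = 1) should recover the exact counts.
graph = {v: {u for u in range(4) if u != v} for v in range(4)}
est = estimate_clique_counts(graph, sample=list(range(4)), p=1.0, max_size=4)
```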

http://arxiv.org/abs/1308.3297 2013/08/16 - 21:07

Analysis and design of filtered-x adaptive algorithms are conventionally done
by assuming that the transfer function in the secondary path is a discrete-time
system. However, in real systems such as active noise control, the secondary
path is a continuous-time system. Therefore, such a system should be analyzed
and designed as a hybrid system including discrete- and continuous-time
systems and AD/DA devices. In this article, we propose a hybrid design taking
account of continuous-time behavior of the secondary path via lifting
(continuous-time polyphase decomposition) technique in sampled-data control
theory.

http://arxiv.org/abs/1308.3300 2013/08/16 - 21:07

In this paper, we revisit the emptiness problem of measure-many one-way
quantum automata. The undecidability of the equivalence of cut-point languages
recognized by measure-many 1-way quantum automata and enhanced 1-way quantum
automata is then obtained by utilizing the emptiness result. This clarifies a
corollary previously misapprehended by the author.
http://arxiv.org/abs/1308.3301 2013/08/16 - 21:07

The YY filter, named after its founder Prof. Yutaka Yamamoto, is a digital
filter designed by sampled-data control theory, which can optimize the analog
performance of a signal processing system with AD/DA converters. This article
discusses problems in conventional signal processing and introduces the
advantages of the YY filter.

http://arxiv.org/abs/1308.3302 2013/08/16 - 21:07

In this paper, parameterized Gallager's first bounding technique (GFBT) is
presented by introducing nested Gallager regions, to derive upper bounds on the
ML decoding error probability of general codes over AWGN channels. The three
well-known bounds, namely, the sphere bound (SB) of Herzberg and Poltyrev, the
tangential bound (TB) of Berlekamp, and the tangential-sphere bound (TSB) of
Poltyrev, are generalized to general codes without the properties of
geometrical uniformity and equal energy. When applied to binary linear codes,
the three generalized bounds reduce to the conventional ones. The new
derivation also reveals that the SB of Herzberg and Poltyrev is equivalent to
the SB of Kasami et al., which has rarely been cited in the literature.

http://arxiv.org/abs/1308.3303 2013/08/16 - 21:07

Recent real-time heuristic search algorithms have demonstrated outstanding
performance in video-game pathfinding. However, their applications have been
thus far limited to that domain. We proceed with the aim of facilitating wider
applications of real-time search by fostering a greater understanding of the
performance of recent algorithms. We first introduce eight
algorithm-independent complexity measures for search spaces and correlate their
values with algorithm performance. The complexity measures are statistically
shown to be significant predictors of algorithm performance across a set of
commercial video-game maps. We then extend this analysis to a wider variety of
search spaces in the first application of database-driven real-time search to
domains outside of video-game pathfinding. In doing so, we gain insight into
algorithm performance and possible enhancement as well as into search space
complexity.

http://arxiv.org/abs/1308.3309 2013/08/16 - 21:07

This paper gives the approximate capacity region of a two-user MIMO
interference channel with limited receiver cooperation, where the gap between
the inner and outer bounds is in terms of the total number of receive antennas
at the two receivers and is independent of the actual channel values. The
approximate capacity region is then used to find the degrees of freedom region.
For the special case of symmetric interference channels, we also find the
amount of receiver cooperation in terms of the backhaul capacity beyond which
the degrees of freedom do not improve. Further, the generalized degrees of
freedom are found for MIMO interference channels with equal number of antennas
at all nodes. It is shown that the generalized degrees of freedom improve
gradually from a "W" curve to a "V" curve with increase in cooperation in terms
of the backhaul capacity.

http://arxiv.org/abs/1308.3310 2013/08/16 - 21:07

In recent years, many researchers have focused on wireless sensor networks
and their applications. To obtain scalability in these networks, most of the
nodes are grouped into distinct sets called clusters, and the node selected as
cluster head (or aggregation node) collects data from the other cluster nodes,
aggregates it, and sends it to the rest of the network. Clustering and data
aggregation increase network scalability and ensure that the limited resources
of the network are used well. However, these mechanisms also open several
breaches in the network. For example, in clustered networks, cluster head nodes
are desirable and attractive targets for attackers: by reaching their
information, whether through a physical attack and node capture or through
other attacks, the attacker can obtain the whole information of the
corresponding cluster. In this study, secure clustering of the nodes is
considered with the aim of reducing the energy consumption of nodes, and a
protocol is presented that, in addition to satisfying the advanced security
needs of wireless sensor networks, reduces the amount of energy consumed by the
nodes. Because of network clustering, such a network has scalability potential,
and owing to the frequent change of cluster head nodes, load distribution is
performed within the cluster, eventually increasing the network lifetime.

http://arxiv.org/abs/1308.3312 2013/08/16 - 21:07

In this note, we introduce a new algorithm to deal with finite dimensional
clustering with errors in variables. The design of this algorithm is based on
recent theoretical advances (see Loustau (2013a,b)) in statistical learning
with errors in variables. As in the previously mentioned papers, the algorithm
mixes different tools from the inverse problem literature and the machine
learning community. Coarsely, it is based on a two-step procedure: (1) a
deconvolution step to deal with noisy inputs and (2) Newton's iterations, as in
the popular k-means.
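The two-step structure can be sketched on 1-D data: first a crude "denoising" that shrinks each noisy observation toward the sample mean by the known noise-to-total variance ratio (a simple stand-in for the paper's deconvolution step), then Lloyd iterations (k-means). The shrinkage rule and all constants are assumptions for illustration, not the note's actual algorithm.

```python
import random

# Step (1): crude denoising stand-in for deconvolution, using a known noise
# variance to shrink observations toward the sample mean.
def denoise(xs, noise_var):
    mean = sum(xs) / len(xs)
    total_var = sum((x - mean) ** 2 for x in xs) / len(xs)
    w = max(0.0, 1.0 - noise_var / total_var)   # estimated signal fraction
    return [mean + w * (x - mean) for x in xs]

# Step (2): plain Lloyd iterations (k-means) on 1-D data.
def kmeans_1d(xs, centers, iters=20):
    for _ in range(iters):
        groups = {c: [] for c in range(len(centers))}
        for x in xs:
            groups[min(range(len(centers)),
                       key=lambda c: (x - centers[c]) ** 2)].append(x)
        centers = [sum(g) / len(g) if g else centers[c]
                   for c, g in groups.items()]
    return sorted(centers)

random.seed(1)
data = [random.gauss(0, 1) for _ in range(100)] + \
       [random.gauss(10, 1) for _ in range(100)]
centers = kmeans_1d(denoise(data, noise_var=0.5), centers=[-1.0, 1.0], iters=25)
```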

http://arxiv.org/abs/1308.3314 2013/08/16 - 21:07

Testability is the probability that tests will detect a fault, given that a
fault exists in the program. How efficiently faults are uncovered depends on
the testability of the software. Various researchers have proposed qualitative
and quantitative techniques to improve and measure the testability of
software. In the literature, a plethora of reliability growth models have been
used for quantitative quality assessment of software during the testing and
operational phases. Knowledge of the failure distribution and fault complexity
can improve the testability of software. Testing-effort allocation is made
easier by knowing the failure distribution and the complexity of faults, which
eases the process of revealing faults in the software and thereby improves its
testability. The parameters of the model, along with the proportion of faults
of different complexity to be removed from the software, are presented in the
paper. For estimation purposes, we have used failure data of two
object-oriented software systems developed in an open source environment,
namely MySQL for Python and Squirrel SQL Client.
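As a minimal illustration of parameter estimation for a reliability growth model, the sketch below fits the classic Goel-Okumoto mean value function m(t) = a(1 - e^{-bt}) to hypothetical cumulative failure counts by least-squares grid search. The data, grids, and fitting method are all illustrative simplifications, not the model or datasets used in the paper.

```python
import math

def goel_okumoto(t, a, b):
    """Mean value function of the Goel-Okumoto NHPP model:
    expected cumulative number of faults detected by time t."""
    return a * (1.0 - math.exp(-b * t))

def fit_grid(times, counts, a_grid, b_grid):
    """Least-squares fit of (a, b) by exhaustive grid search --
    a deliberately simple stand-in for maximum-likelihood estimation."""
    best = None
    for a in a_grid:
        for b in b_grid:
            sse = sum((goel_okumoto(t, a, b) - c) ** 2
                      for t, c in zip(times, counts))
            if best is None or sse < best[0]:
                best = (sse, a, b)
    return best[1], best[2]

# Hypothetical cumulative failure data: (week, total faults found so far).
times  = [1, 2, 3, 4, 5, 6, 7, 8]
counts = [12, 21, 28, 33, 37, 40, 42, 43]

a_grid = [40 + i for i in range(21)]       # candidate total fault content
b_grid = [0.05 * i for i in range(1, 21)]  # candidate detection rates
a, b = fit_grid(times, counts, a_grid, b_grid)
print(f"estimated fault content a={a}, detection rate b={b:.2f}")
```

The estimated a is the model's prediction of the total fault content, which is what makes such models useful for allocating the remaining testing effort.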

http://arxiv.org/abs/1308.3320 2013/08/16 - 21:07

For an undirected, simple, finite, connected graph $G$, we denote by $V(G)$
and $E(G)$ the sets of its vertices and edges, respectively. A function
$\varphi:E(G)\rightarrow \{1,...,t\}$ is called a proper edge $t$-coloring of a
graph $G$, if adjacent edges are colored differently and each of $t$ colors is
used. The least value of $t$ for which there exists a proper edge $t$-coloring
of a graph $G$ is denoted by $\chi'(G)$. For any graph $G$, and for any integer
$t$ satisfying the inequality $\chi'(G)\leq t\leq |E(G)|$, we denote by
$\alpha(G,t)$ the set of all proper edge $t$-colorings of $G$. Let us also
define a set $\alpha(G)$ of all proper edge colorings of a graph $G$: $$
\alpha(G)\equiv\bigcup_{t=\chi'(G)}^{|E(G)|}\alpha(G,t). $$

An arbitrary nonempty finite subset of consecutive integers is called an
interval. If $\varphi\in\alpha(G)$ and $x\in V(G)$, then the set of colors of
edges of $G$ which are incident with $x$ is denoted by $S_G(x,\varphi)$ and is
called a spectrum of the vertex $x$ of the graph $G$ at the proper edge
coloring $\varphi$. If $G$ is a graph and $\varphi\in\alpha(G)$, then define
$f_G(\varphi)\equiv|\{x\in V(G)/\,S_G(x,\varphi)\ \textrm{is an interval}\}|$.

For a graph $G$ and any integer $t$, satisfying the inequality $\chi'(G)\leq
t\leq |E(G)|$, we define: $$
\mu_1(G,t)\equiv\min_{\varphi\in\alpha(G,t)}f_G(\varphi),\qquad
\mu_2(G,t)\equiv\max_{\varphi\in\alpha(G,t)}f_G(\varphi). $$

For any graph $G$, we set: $$ \mu_{11}(G)\equiv\min_{\chi'(G)\leq
t\leq|E(G)|}\mu_1(G,t),\qquad \mu_{12}(G)\equiv\max_{\chi'(G)\leq
t\leq|E(G)|}\mu_1(G,t), $$ $$ \mu_{21}(G)\equiv\min_{\chi'(G)\leq
t\leq|E(G)|}\mu_2(G,t),\qquad \mu_{22}(G)\equiv\max_{\chi'(G)\leq
t\leq|E(G)|}\mu_2(G,t). $$

For regular graphs, some relations between the $\mu$-parameters are obtained.
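The quantity $f_G(\varphi)$ is directly computable from the definitions above. A small sketch, using a hypothetical proper edge 3-coloring of the 4-cycle (not an example from the paper):

```python
def spectrum(vertex, coloring):
    """S_G(x, phi): the set of colors on edges incident with the vertex."""
    return {c for (u, v), c in coloring.items() if vertex in (u, v)}

def is_interval(colors):
    """A nonempty finite set of integers is an interval iff it consists
    of consecutive integers."""
    return bool(colors) and max(colors) - min(colors) + 1 == len(colors)

def f_G(vertices, coloring):
    """f_G(phi): number of vertices whose spectrum is an interval."""
    return sum(is_interval(spectrum(x, coloring)) for x in vertices)

# Hypothetical proper edge 3-coloring of the 4-cycle a-b-c-d-a.
coloring = {('a', 'b'): 1, ('b', 'c'): 2, ('c', 'd'): 1, ('d', 'a'): 3}
print(f_G('abcd', coloring))  # → 2
```

Here the spectra of b and c are {1,2} (intervals) while those of a and d are {1,3} (not intervals), so $f_G(\varphi)=2$.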

http://arxiv.org/abs/1308.3322 2013/08/16 - 21:07

Due to the widespread demand for E-Governance and the exponentially increasing
size of data, new technologies such as open source solutions and cloud
computing need to be incorporated. In this paper, the latest technology trends
that the governments of most countries have adopted are discussed. While
working on this project, we have concluded that E-Governance has made the
working of government more efficient and more transparent to its citizens. We
have also presented an exhaustive list of E-Governance projects currently in
use in India and internationally. We have provided a mechanism for improving
E-Governance by including technologies such as Open Source and Cloud
Computing.

http://arxiv.org/abs/1308.3323 2013/08/16 - 21:07

In this paper we address the problem of coalition formation in a hedonic
context. Our model tries to be as realistic as possible. In previous models,
once an agent joined a coalition it could not leave it and join a new one; in
this research we make it possible to leave a coalition, but impose some
restrictions to control the behavior of agents. Whether an agent leaves or
stays in a coalition affects the trust of the other agents in that coalition.
Agents use these trust values when computing the expected utility of
coalitions. Three different risk behaviors are introduced for agents that want
to initiate a coalition. Using these risk behaviors, several simulations are
run and the results are analyzed.
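One plausible reading of the trust mechanism can be sketched as follows. The discounting formula, the leave penalty, and the toy payoff function are all illustrative assumptions of this sketch, not the paper's actual model.

```python
def expected_utility(candidate, coalition, payoff, trust):
    """Illustrative expected utility: the coalition's payoff discounted
    by the candidate's average trust in its members. (Hypothetical
    formula, not taken from the paper.)"""
    if not coalition:
        return 0.0
    avg_trust = sum(trust[candidate][m] for m in coalition) / len(coalition)
    return avg_trust * payoff(coalition | {candidate})

def update_trust_on_leave(trust, leaver, coalition, penalty=0.2):
    """Leaving a coalition lowers the remaining members' trust in the
    leaver (illustrative penalty rule)."""
    for m in coalition:
        if m != leaver:
            trust[m][leaver] = max(0.0, trust[m][leaver] - penalty)

payoff = lambda coal: 10.0 * len(coal)          # toy coalition value
trust = {a: {b: 1.0 for b in 'ABC'} for a in 'ABC'}

print(expected_utility('A', {'B', 'C'}, payoff, trust))  # 30.0
update_trust_on_leave(trust, 'B', {'A', 'B', 'C'})
print(expected_utility('A', {'B', 'C'}, payoff, trust))  # lower, B lost trust
```

After B leaves, A's expected utility from any coalition containing B drops, which is the kind of restriction that disciplines agents' leaving behavior.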

http://arxiv.org/abs/1308.3324 2013/08/16 - 21:07

We consider the two-dimensional sorted range reporting problem. Our data
structure requires O(n lglg n) words of space and O(lglg n + k lglg n) query
time, where k is the number of points in the query range. This data structure
improves a recent result of Nekrich and Navarro [8] by a factor of O(lglg n) in
query time, and matches the state of the art for unsorted range reporting [1].
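To make the problem statement concrete, here is a naive O(n + k log k) baseline for the same query: report the points inside an axis-aligned rectangle in sorted order (by y-coordinate, as one choice of sort key; the sketch is for illustration and is unrelated to the cited data structure's machinery).

```python
def sorted_range_report(points, x1, x2, y1, y2):
    """Naive baseline for 2D sorted range reporting: scan all n points,
    keep those inside [x1,x2] x [y1,y2], and return them sorted by y.
    The structure in the abstract answers the same query in
    O(lglg n + k lglg n) time using O(n lglg n) words."""
    hits = [(x, y) for (x, y) in points if x1 <= x <= x2 and y1 <= y <= y2]
    return sorted(hits, key=lambda p: p[1])

pts = [(1, 5), (2, 1), (3, 7), (4, 3), (6, 2)]
print(sorted_range_report(pts, 1, 4, 0, 6))  # → [(2, 1), (4, 3), (1, 5)]
```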

http://arxiv.org/abs/1308.3326 2013/08/16 - 21:07

We study the issues of existence and inefficiency of pure Nash equilibria in
linear congestion games with altruistic social context, in the spirit of the
model recently proposed by de Keijzer {\em et al.} \cite{DSAB13}. In such a
framework, given a real matrix $\Gamma=(\gamma_{ij})$ specifying a particular
social context, each player $i$ aims at optimizing a linear combination of the
payoffs of all the players in the game, where, for each player $j$, the
multiplicative coefficient is given by the value $\gamma_{ij}$. We give a broad
characterization of the social contexts for which pure Nash equilibria are
always guaranteed to exist and provide tight or almost tight bounds on their
prices of anarchy and stability. In some of the considered cases, our
achievements either improve or extend results previously known in the
literature.
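The social-context objective is easy to state in code: player $i$ minimizes $\sum_j \gamma_{ij}\,c_j(s)$, where $c_j(s)$ is player $j$'s congestion cost. The sketch below brute-forces pure Nash equilibria in a tiny singleton linear congestion game with mildly altruistic players; the instance is a made-up example, not one from the paper.

```python
from itertools import product

def player_costs(profile, latency):
    """Costs in a singleton linear congestion game: each player picks one
    resource e, and pays a_e * (number of players on e)."""
    load = {}
    for e in profile:
        load[e] = load.get(e, 0) + 1
    return [latency[e] * load[e] for e in profile]

def perceived(i, profile, latency, gamma):
    """Player i's objective: the gamma-weighted combination of all
    players' costs (the social-context payoff of de Keijzer et al.)."""
    costs = player_costs(profile, latency)
    return sum(gamma[i][j] * costs[j] for j in range(len(profile)))

def is_pure_nash(profile, resources, latency, gamma):
    """True iff no player can lower its perceived cost unilaterally."""
    for i in range(len(profile)):
        cur = perceived(i, profile, latency, gamma)
        for e in resources:
            alt = list(profile)
            alt[i] = e
            if perceived(i, tuple(alt), latency, gamma) < cur:
                return False
    return True

# Two players, two identical resources, each player half-altruistic.
resources = ('e1', 'e2')
latency = {'e1': 1, 'e2': 1}
gamma = [[1.0, 0.5], [0.5, 1.0]]
nash = [p for p in product(resources, repeat=2)
        if is_pure_nash(p, resources, latency, gamma)]
print(nash)  # the two profiles where the players spread out
```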

http://arxiv.org/abs/1308.3329 2013/08/16 - 21:07

In this paper we study the Steiner tree problem over a dynamic set of
terminals. We consider the model where we are given an $n$-vertex graph
$G=(V,E,w)$ with nonnegative edge weights. Our goal is to maintain a tree $T$
which is a good approximation of the minimum Steiner tree in $G$ when the
terminal set $S \subseteq V$ changes over time. The changes applied to the
terminal set are either terminal additions (\emph{incremental} scenario),
terminal removals (\emph{decremental} scenario), or both ({\em fully dynamic}
scenario). We obtain the first known sublinear-time algorithms for the dynamic
Steiner tree problem in planar graphs. In particular, we develop a
polylogarithmic-time incremental algorithm, an $O(\sqrt{n} \log^{2.5} n
\log \strecz)$-time decremental algorithm, and an $O(\sqrt{n} \log^{5} n
\log \strecz)$-time fully dynamic algorithm.
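The incremental scenario can be illustrated by the classic greedy online Steiner heuristic: attach each newly arriving terminal to the current tree along a shortest path. This is only an illustration of the setting (the greedy rule is O(log n)-competitive on the tree weight), not the paper's sublinear-time algorithm.

```python
import heapq

def dijkstra_from_set(adj, sources):
    """Multi-source Dijkstra: distance from the current tree to each vertex."""
    dist = {v: float('inf') for v in adj}
    prev = {}
    for s in sources:
        dist[s] = 0
    pq = [(0, s) for s in sources]
    heapq.heapify(pq)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return dist, prev

def add_terminal(adj, tree_vertices, tree_edges, t):
    """Greedy online Steiner step: connect the new terminal to the
    current tree along a shortest path."""
    if not tree_vertices:
        tree_vertices.add(t)
        return
    _, prev = dijkstra_from_set(adj, tree_vertices)
    v = t
    while v not in tree_vertices:
        u = prev[v]
        tree_edges.add((u, v))
        tree_vertices.add(v)
        v = u

# Toy weighted graph; terminals 'a' then 'd' arrive incrementally.
adj = {
    'a': [('b', 1), ('c', 4)],
    'b': [('a', 1), ('c', 1), ('d', 5)],
    'c': [('a', 4), ('b', 1), ('d', 1)],
    'd': [('b', 5), ('c', 1)],
}
tv, te = set(), set()
for t in ['a', 'd']:
    add_terminal(adj, tv, te, t)
print(sorted(te))  # → [('a', 'b'), ('b', 'c'), ('c', 'd')]
```

The greedy step routes a-b-c-d (total weight 3) instead of the heavier direct edges, using Steiner (non-terminal) vertices b and c.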

http://arxiv.org/abs/1308.3336 2013/08/16 - 21:07

Among optimal hierarchical algorithms for the computational solution of
elliptic problems, the Fast Multipole Method (FMM) stands out for its
adaptability to emerging architectures, owing to its high arithmetic
intensity, tunable accuracy, and relaxed global synchronization requirements. We
demonstrate that, beyond its traditional use as a solver in problems for which
explicit free-space kernel representations are available, the FMM has
applicability as a preconditioner in finite domain elliptic boundary value
problems, by equipping it with boundary integral capability for finite
boundaries and by wrapping it in a Krylov method for extensibility to more
general operators. Compared with multilevel methods, it is capable of
comparable algebraic convergence rates down to the truncation error of the
discretized PDE, and it has superior multicore and distributed memory
scalability properties on commodity architecture supercomputers. Compared with
other methods exploiting the low rank character of off-diagonal blocks of the
dense resolvent operator, FMM-preconditioned Krylov iteration appears to offer
superior performance on 2D model problems. We describe our tests in
reproducible detail with freely available codes and outline directions for
further extensibility.

http://arxiv.org/abs/1308.3339 2013/08/16 - 21:07

One of the most remarkable social phenomena is the formation of communities
in social networks corresponding to families, friendship circles, work teams,
etc. Since people usually belong to several different communities at the same
time, the induced overlaps result in an extremely complicated web of the
communities themselves. Thus, uncovering the intricate community structure of
social networks is a non-trivial task with great potential for practical
applications, and it has gained notable interest in recent years. The Clique
Percolation Method (CPM) is one of the earliest overlapping community finding
methods, which was already used in the analysis of several different social
networks. In this approach the communities correspond to k-clique percolation
clusters, and the general heuristic for setting the parameters of the method is
to tune the system just below the critical point of k-clique percolation.
However, this rule is based on simple physical principles and its validity was
never subject to quantitative analysis. Here we examine the quality of the
partitioning in the vicinity of the critical point using recently introduced
overlapping modularity measures. According to our results on real social- and
other networks, the overlapping modularities show a maximum close to the
critical point, justifying the original criteria for the optimal parameter
settings.
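The CPM community definition is concrete enough to sketch directly: find all k-cliques, join two cliques that share k-1 vertices, and take connected components of this clique adjacency. The brute-force version below is exponential-time and meant only to illustrate the definition on a toy graph, not to analyze real networks.

```python
from itertools import combinations

def k_clique_communities(vertices, edges, k):
    """Brute-force Clique Percolation Method: communities are unions of
    k-cliques reachable from one another through (k-1)-vertex overlaps."""
    E = {frozenset(e) for e in edges}
    cliques = [frozenset(c) for c in combinations(sorted(vertices), k)
               if all(frozenset(p) in E for p in combinations(c, 2))]
    # Union-find over cliques sharing k-1 vertices.
    parent = list(range(len(cliques)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(len(cliques)), 2):
        if len(cliques[i] & cliques[j]) == k - 1:
            parent[find(i)] = find(j)
    comms = {}
    for i, c in enumerate(cliques):
        comms.setdefault(find(i), set()).update(c)
    return list(comms.values())

# Two triangles sharing only vertex 3: with k=3 the overlap has size 1,
# not k-1 = 2, so they form two distinct overlapping communities.
edges = [(1, 2), (1, 3), (2, 3), (3, 4), (3, 5), (4, 5)]
for comm in k_clique_communities(range(1, 6), edges, k=3):
    print(sorted(comm))
```

Vertex 3 belongs to both communities, which is exactly the overlap structure CPM is designed to capture.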

http://arxiv.org/abs/1308.3340 2013/08/16 - 21:07