Computer Science papers on

We consider the extensions of modal transition systems (MTS), namely Boolean
MTS and parametric MTS and we investigate the refinement problems over both
classes. Firstly, we reduce the problem of modal refinement over both classes
to a problem solvable by a QBF solver and provide experimental results showing
our technique scales well. Secondly, we extend the algorithm for thorough
refinement of MTS, providing better complexity than via reductions to previously
studied problems. Finally, we investigate the relationship between modal and
thorough refinement on the two classes and show how the thorough refinement can
be approximated by the modal refinement. 2013/04/23 - 00:28

The model-checking problem for probabilistic systems crucially relies on the
translation of LTL to deterministic Rabin automata (DRW). Our recent Safraless
translation for the LTL(F,G) fragment produces smaller automata as compared to
the traditional approach. In this work, instead of DRW we consider
deterministic automata with acceptance condition given as disjunction of
generalized Rabin pairs (DGRW). The Safraless translation of LTL(F,G) formulas
to DGRW results in smaller automata as compared to DRW. We present algorithms
for probabilistic model-checking as well as game solving for DGRW conditions.
Our new algorithms lead to improvement both in terms of theoretical bounds as
well as practical evaluation. We compare PRISM with and without our new
translation, and show that the new translation leads to significant
improvements.

Most recent e-health initiatives perceive customers (patients) as recipients
of medical care who do not have a significant role in the process of health
decision making. However, the advancement of Web 2.0 allows patients to take a
greater role in decisions related to their health, as they can be empowered
with the ability to access and control information that fits their
personalized needs. Providing patient empowerment in e-health through Web 2.0
is nevertheless a challenging task because of the complex nature of healthcare
business processes. Empowerment closely relates to the concept of Customer
Relationship Management (CRM), i.e., managing good relationships with
customers. The adoption of Web 2.0 in CRM systems is known as Social CRM or
CRM 2.0. Social CRM emerged to accommodate dynamic means of interaction
between patients and their healthcare providers. The aim of this paper is to
present a model that embeds patient empowerment through Social CRM
intervention and may extend the role of the patient as an individual health
actor, a social health agent, and a medical care partner. A survey has been
conducted to gain feedback from customers regarding the proposed model. A
prototype derived from the model, named Clinic 2.0, has also been developed.
Using the prototype we measure its impact on customer satisfaction and health
literacy. The results show that the system intervention through Clinic 2.0
improves the level of satisfaction and health literacy of participants.

Can we make Bayesian posterior MCMC sampling more efficient when faced with
very large datasets? We argue that computing the likelihood for N datapoints
twice in order to reach a single binary decision is computationally
inefficient. We introduce an approximate Metropolis-Hastings rule based on a
sequential hypothesis test which allows us to accept or reject samples with
high confidence using only a fraction of the data required for the exact MH
rule. While this introduces an asymptotic bias, we show that this bias can be
controlled and is more than offset by a decrease in variance due to our ability
to draw more samples per unit of time. We show that the same idea can also be
applied to Gibbs sampling in densely connected graphs.
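The sequential acceptance idea can be sketched in a few lines. This is an illustrative simplification under assumed names (`loglik`, `log_prior`, the fixed z-score cutoff), not the paper's exact procedure:

```python
import math
import random

def approx_mh_accept(theta, theta_prime, data, loglik, log_prior,
                     batch=30, z_cut=2.0, rng=random):
    """Sketch of the approximate MH test: accept iff the mean
    log-likelihood ratio over the data exceeds a threshold mu0,
    decided from a growing subsample via a simple z-test."""
    n = len(data)
    u = rng.random()
    # The exact MH rule compares the total log-likelihood ratio with
    # log u plus the prior ratio; spread the threshold over n points.
    mu0 = (math.log(u) + log_prior(theta) - log_prior(theta_prime)) / n
    idx = list(range(n))
    rng.shuffle(idx)
    seen, total, total_sq = 0, 0.0, 0.0
    while True:
        take = min(batch, n - seen)
        for i in idx[seen:seen + take]:
            l = loglik(theta_prime, data[i]) - loglik(theta, data[i])
            total += l
            total_sq += l * l
        seen += take
        mean = total / seen
        if seen == n:           # exhausted the data: exact decision
            return mean > mu0
        var = max(total_sq / seen - mean * mean, 1e-12)
        # Standard error with finite-population correction.
        se = math.sqrt(var / seen) * math.sqrt(1.0 - (seen - 1) / (n - 1))
        if abs(mean - mu0) / se > z_cut:   # confident: stop early
            return mean > mu0
```

When the proposal is clearly better or clearly worse, the test typically decides after one small batch instead of touching all N points.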

FITS is the standard file format in astronomy, and it has been extended over
time to meet current astronomical needs. However, astronomical datasets have
been growing year by year. In the case of the ALMA telescope, a ~TB-scale
4-dimensional data cube may be produced for one target. Considering that
typical Internet bandwidth is a few tens of MB/s at most, the original data
cubes in FITS format are hosted on a VO server, and the region a user is
interested in should be cut out and transferred to the user (Eguchi et al.,
2012). The system will be equipped with a very high-speed disk array to
process a TB-scale data cube in a few tens of seconds, so disk I/O speed,
endian conversion and data processing times will be comparable. Hence reducing
the endian conversion time is one of the issues in realizing our system. In
this paper, I introduce a technique named "just-in-time endian conversion",
which delays the endian conversion of each pixel until just before it is
actually needed, to hide the endian conversion time; by applying this method,
the FITS processing speed increases by 20% for single threading and 40% for
multi-threading compared to CFITSIO. The speed-up from the method is tightly
related to modern CPU architecture, which improves the efficiency of
instruction pipelines when the "causality" of a programmed instruction code
sequence is broken.

The central issue in direct-sequence code-division multiple-access (DS-CDMA)
ad hoc networks is the prevention of a near-far problem. This paper considers
two types of guard zones that may be used to control the near-far problem: a
fundamental exclusion zone and an additional CSMA guard zone that may be
established by the carrier-sense multiple-access (CSMA) protocol. In the
exclusion zone, no mobiles are physically present, modeling the minimum
physical separation among mobiles that is always present in actual networks.
Potentially interfering mobiles beyond a transmitting mobile's exclusion zone,
but within its CSMA guard zone, are deactivated by the protocol. This paper
provides an analysis of DS-CSMA networks with either or both types of guard
zones. A network of finite extent with a finite number of mobiles and uniform
clustering as the spatial distribution is modeled. The analysis applies a
closed-form expression for the outage probability in the presence of Nakagami
fading, conditioned on the network geometry. The tradeoffs between exclusion
zones and CSMA guard zones are explored for DS-CDMA and unspread networks. The
spreading factor and the guard-zone radius provide design flexibility in
achieving specified levels of average outage probability and transmission
capacity. The advantage of an exclusion zone over a CSMA guard zone is that
since the network is not thinned, the number of active mobiles remains
constant, and higher transmission capacities can be achieved.

Software as a Service (SaaS) is a new software development and deployment
paradigm over the cloud, offering Information Technology services dynamically,
on an "on-demand" basis, over the internet. Trust is one of the fundamental
security concepts in storing and delivering such services. In general, trust
factors are integrated into existing security frameworks in order to add a
security level to entity collaborations through the trust relationship.
However, deploying the trust factor in a secured cloud environment is a more
complex engineering task due to the existence of heterogeneous types of
service providers and consumers. In this paper, a formal trust management
model is introduced to manage trust and its properties for SaaS in the cloud
computing environment. The model can formally represent direct trust,
recommended trust, reputation, etc. For the analysis of trust properties in
the cloud environment, the proposed approach estimates the trust value and
uncertainty of each peer by computing a decay function, the number of positive
interactions, a reputation factor and the satisfaction level for the collected
information.

Wireless streaming of high-definition video is a promising application for 60
GHz links, since multi-Gigabit/s data rates are possible. In particular we
consider a sports stadium broadcasting system where video signals from multiple
cameras are transmitted to a central location. Due to the high pathloss of
60 GHz radiation over the large distances encountered in this setting, the use
of relays is required. This paper designs a quality-aware coding and relaying
algorithm for maximization of the overall video quality. We consider the
setting that the source can split its data stream into parallel streams, which
can be transmitted via different relays to the destination. For this, we derive
the related formulation and re-formulate it as convex programming, which can
guarantee optimal solutions.

Automated testing improves the efficiency of testing practice at various
levels of projects in the organization. Unfortunately, we do not have a common
architecture or common standards for designing frameworks across different test
levels, projects and test tools which can assist developers, testers and
business analysts. To address the above problem, in this paper, I have first
proposed a unique reference model and then a design architecture using the
proposed model for designing any data-driven automation framework. The
reference model is the K model, which can be used for modeling any data-driven
automation framework. The unique design architecture, based on the above
model, is Snow Leopard.

High-resolution depth maps can be inferred from low-resolution depth
measurements and an additional high-resolution intensity image of the same
scene. To that end, we introduce a bimodal co-sparse analysis model, which is
able to capture the interdependency of registered intensity and depth
information. This model is based on the assumption that the co-supports of
corresponding bimodal image structures are aligned when computed by a suitable
pair of analysis operators. No analytic form of such operators exists, and we
propose a method for learning them from a set of registered training signals.
This learning process is done offline and returns a bimodal analysis operator
that is universally applicable to natural scenes. We use this to exploit the
bimodal co-sparse analysis model as a prior for solving inverse problems, which
leads to an efficient algorithm for depth map super-resolution.

We study the minimum latency broadcast scheduling (MLBS) problem in
Single-Radio Multi-Channel (SR-MC) wireless ad-hoc networks (WANETs), which are
modeled by Unit Disk Graphs. Nodes with this capability have their fixed
reception channels, but can switch their transmission channels to communicate
with their neighbors. The single-radio and multi-channel model prevents
existing algorithms for single-channel networks from achieving good
performance. First, the common assumption that one transmission reaches all
the neighboring nodes does not hold naturally. Second, the multi-channel
dimension provides new opportunities to schedule the broadcast transmissions
in parallel. We show that the MLBS problem in SR-MC WANETs is NP-hard, and
present a benchmark algorithm, Basic Transmission Scheduling (BTS), which has
an approximation ratio of 4k + 12, where k is the number of orthogonal
channels in SR-MC WANETs. Then we propose an Enhanced Transmission Scheduling
(ETS) algorithm, improving the approximation ratio to k + 23. Simulation
results show that ETS achieves better performance than BTS, and that the
performance of ETS approaches the lower bound.

Mobile traffic explosion causes spectrum shortage and polarization of data
usage among users, which will eventually decrease user welfare in mobile
communication services. Therefore, governments around the world are planning to
make more spectrum available for mobile broadband use, and the key policy issue
is to find an efficient spectrum allocation method that will improve user
welfare. In this paper, we propose a data subsidy scheme where the regulator
offers a spectrum price discount to mobile network operators (MNOs) in return
for imposing the responsibility of providing a predefined data amount to users
free of charge. To mathematically analyze the subsidy effect, we take the
two-stage Cournot and Bertrand competition model and apply the equilibrium
analysis based on the game-theoretic approach. Consequently, we show that the
data subsidy scheme increases user welfare even further. An interesting
observation is that the increase in user welfare does not involve MNO profit
loss and the increasing amount is higher than the regulator's expenses for
implementing the data subsidy scheme.

With the passage of time and as new types of storage devices are introduced
into the marketplace, contemporary devices will slowly lose their compatibility
with current operating systems and PC hardware. As a result, such legacy
devices will pose an analytical challenge to the field of digital forensics.
Dated technology, while still fully functional, is becoming increasingly
incompatible with most contemporary computing hardware and software and thus
cannot be properly examined in present-day digital forensic environments. This
fact will not be lost on those who utilize legacy hardware to commit criminal
acts. This paper describes the technical challenge of accessing legacy devices
by describing an effort to resuscitate a Bernoulli Drive, a portable storage
device manufactured in 1983 by Iomega Corporation. A number of lessons-learned
are provided and the implication of legacy devices to digital forensic science
is discussed.

Renewable energies are becoming more important, and they contribute to the EU's
goals for greenhouse-gas reduction. However, their fluctuating nature calls for
demand-side-management techniques, which balance energy generation and
consumption. Such techniques are currently not broadly deployed. This paper
describes the latest results from the FINSENY project on how Future-Internet
enablers and market mechanisms can be used to realise such systems.

In this paper, we consider the challenge of maximizing an unknown function f
for which evaluations are noisy and are acquired with high cost. An iterative
procedure uses the previous measures to actively select the next estimation of
f which is predicted to be the most useful. We focus on the case where the
function can be evaluated in parallel with batches of fixed size and analyze
the benefit compared to the purely sequential procedure in terms of cumulative
regret. We introduce the Gaussian Process Upper Confidence Bound and Pure
Exploration algorithm (GP-UCB-PE) which combines the UCB strategy and Pure
Exploration in the same batch of evaluations along the parallel iterations. We
prove theoretical upper bounds on the regret with batches of size K for this
procedure which show the improvement of the order of sqrt{K} for fixed
iteration cost over purely sequential versions. Moreover, the multiplicative
constants involved have the property of being dimension-free. We also confirm
empirically the efficiency of GP-UCB-PE on real and synthetic problems compared
to state-of-the-art competitors.

In this paper we study distributed storage systems with exact repair. We give
a construction for regenerating codes between the minimum storage regenerating
(MSR) and the minimum bandwidth regenerating (MBR) points, and show that when
the parameters n, k, and d are close to each other our constructions are close
to optimal compared to the known capacity when only functional repair is
required. We do this by showing that when the differences between the
parameters n, k, and d are fixed but the actual values approach infinity, the
ratio between the performance of our codes with exact repair and the known
capacity of codes with functional repair approaches one.

We consider the problem of computing the q-gram profile of a string $S$ of
size $N$ compressed by a context-free grammar with $n$ production rules. We
present an algorithm that runs in $O(N-\alpha)$ expected time and uses
$O(n+q+k_q)$ space, where $N-\alpha\leq qn$ is the exact number of characters
decompressed by the algorithm and $k_q\leq N-\alpha$ is the number of distinct
q-grams in $S$. This simultaneously matches the current best known time
bound and improves the best known space bound. Our space bound is
asymptotically optimal in the sense that any algorithm storing the grammar and
the q-gram profile must use $\Omega(n+q+k_q)$ space. To achieve this we
introduce the q-gram graph, which space-efficiently captures the structure of
a string with respect to its q-grams, and show how to construct it from a
grammar.

Given a nonempty set $S$ of vertices of a graph, the partiality of a vertex
with respect to $S$ is the difference between maximum and minimum of the
distances of the vertex to the vertices of $S$. The vertices with minimum
partiality constitute the fair center of the set. Any vertex set which is the
fair center of some set of vertices is called a fair set. In this paper we
prove that the induced subgraph of any fair set is connected in the case of
trees and characterise block graphs as the class of chordal graphs for which
the induced subgraphs of all fair sets are connected. The fair sets of $K_{n}$,
$K_{m,n}$, $K_{n}-e$, wheel graphs, odd cycles and symmetric even graphs are
identified. The fair sets of Cartesian product graphs are also discussed.
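The definitions of partiality and fair center translate directly into a small procedure; this is only a sketch, with a dict-of-neighbor-lists graph representation assumed:

```python
from collections import deque

def fair_center(adj, S):
    """Fair center of vertex set S in a connected unweighted graph:
    the vertices minimizing partiality = (max - min) distance to S.
    adj: dict mapping each vertex to an iterable of its neighbors."""
    def bfs(src):
        dist = {src: 0}
        dq = deque([src])
        while dq:
            v = dq.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    dq.append(w)
        return dist
    dists = {s: bfs(s) for s in S}          # distances from each s in S
    partiality = {v: max(d[v] for d in dists.values())
                     - min(d[v] for d in dists.values())
                  for v in adj}
    best = min(partiality.values())
    return {v for v, p in partiality.items() if p == best}

# Path a-b-c: the middle vertex is equidistant from S = {a, c}.
path = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(fair_center(path, {"a", "c"}))  # {'b'}
```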

This paper applies results on the robust stability of nonlinear quantum
systems to a system consisting of an optical cavity containing a saturated Kerr
medium. The system is characterized by a Hamiltonian operator which contains a
non-quadratic term involving a quartic function of the annihilation and
creation operators. A saturated version of the Kerr nonlinearity leads to a
sector bounded nonlinearity which enables a quantum small gain theorem to be
applied to this system in order to analyze its stability. Also, a non-quadratic
version of a quantum Popov stability criterion is presented and applied to
analyze the stability of this system.

There are growing uncertainties surrounding the classical model of
computation established by Gödel, Church, Kleene, Turing and others in the
1930s onwards. The mismatch between the Turing machine conception, and the
experiences of those more practically engaged in computing, has parallels with
the wider one between science and those working creatively or intuitively out
in the 'real' world. The scientific outlook is more flexible and basic than
some understand or want to admit. The science is subject to limitations which
threaten careers. We look at embodiment and disembodiment of computation as the
key to the mismatch, and find Turing had the right idea all along - amongst a
productive confusion of ideas about computation in the real and the abstract
worlds.

We consider logic-based argumentation in which an argument is a pair
$(\Phi,\alpha)$, where the support $\Phi$ is a minimal consistent set of
formulae taken from a given knowledge base (usually denoted by $\Delta$) that
entails the claim $\alpha$ (a formula). We study the complexity of three
central problems in argumentation: the existence of a support
$\Phi \subseteq \Delta$, the validity of a support, and the relevance problem
(given $\psi$, is there a support $\Phi$ such that $\psi \in \Phi$?). When
arguments are given in the full language of propositional logic these problems
are computationally costly tasks: the validity problem is DP-complete, the
others are $\Sigma_2^p$-complete. We study these problems in Schaefer's famous
framework, where the considered propositional formulae are in generalized
conjunctive normal form. This means that formulae are conjunctions of
constraints built upon a fixed finite set of Boolean relations $\Gamma$ (the
constraint language). We show that, according to the properties of this
language $\Gamma$, deciding whether there exists a support for a claim in a
given knowledge base is either polynomial, NP-complete, coNP-complete or
$\Sigma_2^p$-complete. We present a dichotomous classification, P or
DP-complete, for the verification problem, and a trichotomous classification
for the relevance problem into either polynomial, NP-complete, or
$\Sigma_2^p$-complete. These last two classifications are obtained by means of
algebraic tools.

The domain of analysis and design of Decisional Information Systems (DIS) is
increasingly applying new techniques and methods to support the decision
process and minimize design time. Our objective in this paper is to define a
group of patterns to ensure a systematic reuse of our approach to analysing a
DIS's business requirements. We seek, through this work, to guide the
discovery of an organization's business requirements, expressed as goals, by
introducing the notion of context; to promote good process design for a DIS;
to capitalize on the processes and models proposed in our approach; and to
systematize reuse of the steps of this approach to analyze similar projects or
adapt them as needed. The patterns are at the same time process patterns and
product patterns, as they capitalize on models and their associated processes.
These patterns are represented according to the PSIGMA formalism.

Power grid vulnerability is a key issue in society. A component failure may
trigger cascades of failures across the grid and lead to a large blackout.
Complex network approaches have shown a direction to study some of the problems
faced by power grids. Within Complex Network Analysis, structural
vulnerabilities of power grids have been studied mostly using purely
topological approaches, which assume that the flow of power is dictated by
shortest paths. However, this fails to capture the real flow characteristics
of power grids. We have proposed a flow redistribution mechanism that closely
mimics the flow in power grids using Power Transfer Distribution Factors
(PTDF). With this mechanism we enhance
existing cascading failure models to study the vulnerability of power grids.

We apply the model to the European high-voltage grid to carry out a
comparative study for a number of centrality measures. `Centrality' gives an
indication of the criticality of network components. Our model offers a way to
find those centrality measures that give the best indication of node
vulnerability in the context of power grids, by considering not only the
network topology but also the power flowing through the network. In addition,
we use the model to determine the spare capacity that is needed to make the
grid robust to targeted attacks. We also show a brief comparison of the end
results with other power grid systems to generalise the result.

Reaction networks are systems in which the populations of a finite number of
species evolve through predefined interactions. Such networks are found as
modeling tools in many disciplines, spanning biochemistry, epidemiology,
pharmacology, ecology and social networks. It is now well-established that, for
small population sizes, stochastic models for reaction networks are necessary
to capture randomness in the interactions. The tools for analyzing them,
however, still lag far behind their deterministic counterparts. In this paper,
we bridge this gap by developing a constructive framework for examining the
long-term behavior and stability properties of the reaction dynamics in a
stochastic setting. In particular, we address the problems of determining
ergodicity of the reaction dynamics, which is analogous to having a globally
attracting fixed point for deterministic dynamics, and determining moment
bounds for the underlying stochastic process. Theoretical and computational
solutions for these problems are obtained by utilizing a blend of ideas and
techniques from probability theory, linear algebra, polynomial analysis and
optimization theory. We demonstrate that stability properties of a wide class
of networks can be assessed from theoretical results that can be recast as
efficient and scalable linear programs, well-known for their tractability. It
is notably shown that the computational complexity is often linear in the
number of species, but worst-case quadratic. We illustrate the validity, the
efficiency and the universality of our results on several reaction networks
arising in fields such as biochemistry, epidemiology and ecology.

In this study we show that current state-of-the-art synthetically generated
fingerprints can easily be discriminated from real fingerprints. Tests are
conducted on the 12 publicly available databases of FVC2000, FVC2002 and
FVC2004 which are important benchmarks for evaluating the performance of
fingerprint recognition algorithms; 3 of these 12 databases consist of
artificial fingerprints generated by the SFinGe software. We propose a method
based on extended minutiae histograms which can distinguish between real and
synthetic prints with very high accuracy. This 'test of realness' can be
applied to synthetic fingerprints produced by any method. The connection to the
knowledge about the biological formation process of finger patterns is
discussed and suggestions for the improvement of synthetic fingerprint
generation are given. Two additional application areas for extended minutiae
histograms are considered: identification and quantifying the weight of
fingerprint evidence in court.

Intersymbol interference (ISI) is a major cause of degradation in the
receiver performance of high-speed data communications systems. This arises
mainly due to the limited bandwidth available. The minimum Euclidean distance
between any two symbol sequences is an important parameter in this case at
moderate to high signal to noise ratios. It is proven here that as ISI
increases the minimum distance strictly decreases when the worst case scenario
is considered. From this it follows that the minimum eigenvalue of the worst
case ISI channel of a given length is unique.

We define a weak notion of universality in symbolic dynamics and, by
generalizing a proof of Mike Hochman, we prove that this yields necessary
conditions on the forbidden patterns defining a universal subshift: These
forbidden patterns are necessarily in a Medvedev degree greater than or equal to
the degree of the set of subshifts for which it is universal.

We also show that this necessary condition is optimal by giving constructions
of universal subshifts in the same Medvedev degree as the subshifts they
simulate and prove that this universality can be achieved by the sofic
projective subdynamics of such a subshift as soon as the necessary conditions
are verified.

The following two decision problems capture the complexity of comparing
integers or rationals that are succinctly represented in
product-of-exponentials notation, or equivalently, via arithmetic circuits
using only multiplication and division gates, and integer inputs:

Input instance: four lists of positive integers: a_1, ..., a_n; b_1, ...,
b_n; c_1, ..., c_m; d_1, ..., d_m, where each of the integers is represented
in binary.

Problem 1 (equality testing): Decide whether a_1^{b_1} a_2^{b_2} ...
a_n^{b_n} = c_1^{d_1} c_2^{d_2} ... c_m^{d_m}.

Problem 2 (inequality testing): Decide whether a_1^{b_1} a_2^{b_2} ...
a_n^{b_n} >= c_1^{d_1} c_2^{d_2} ... c_m^{d_m}.

Problem 1 is easily decidable in polynomial time using a simple iterative
algorithm. Problem 2 is much harder. We observe that the complexity of Problem
2 is intimately connected to deep conjectures and results in number theory. In
particular, if a refined form of the ABC conjecture formulated by Baker in 1998
holds, or if the older Lang-Waldschmidt conjecture (formulated in 1978) on
linear forms in logarithms holds, then Problem 2 is decidable in P-time (in the
standard Turing model of computation). Moreover, it follows from the best
available quantitative bounds on linear forms in logarithms, e.g., by Baker and
Wüstholz (1993) or Matveev (2000), that if m and n are fixed universal
constants then Problem 2 is decidable in P-time (without relying on any
unproved conjectures).

We describe one application: P-time maximum probability parsing for arbitrary
stochastic context-free grammars (where ε-rules are allowed).
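On small inputs, Problem 1 can be illustrated by comparing total prime-exponent vectors. Note the hedge: the actual polynomial-time algorithm avoids factoring entirely (working via iterated gcd cancellation); trial division here is purely for illustration:

```python
from collections import Counter

def factor(n):
    """Trial-division factorization: fine for small illustrative
    inputs, not part of the P-time algorithm."""
    f, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            f[d] += 1
            n //= d
        d += 1
    if n > 1:
        f[n] += 1
    return f

def products_equal(a, b, c, d):
    """Decide a_1^{b_1}...a_n^{b_n} == c_1^{d_1}...c_m^{d_m} by
    comparing summed prime-exponent vectors, never expanding the
    (possibly astronomically large) products themselves."""
    def exponents(bases, exps):
        total = Counter()
        for base, e in zip(bases, exps):
            for p, k in factor(base).items():
                total[p] += k * e
        return total
    return exponents(a, b) == exponents(c, d)

print(products_equal([2, 9], [3, 2], [6, 3], [3, 1]))  # True: both equal 648
```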

In 2006, Varacca and Völzer proved that on finite graphs, \omega-regular
large sets coincide with \omega-regular sets of probability 1, by using the
existence of positional strategies in the related Banach-Mazur games. Motivated
by this result, we try to extend it to other classes of sets by means of
various notions of simple strategy introduced in a recent paper of Grädel and
Leßenich. We derive that the existence of a move-counting winning strategy
implies that the winning condition has probability~1. In particular, we observe
that the above result of Varacca and Völzer extends to the \omega S-regular
sets introduced by Bojanczyk and Colcombet in 2006. Then, we introduce a
generalisation of the classical Banach-Mazur game and in particular, a
probabilistic version whose goal is to characterise sets of probability~1 (as
classical Banach-Mazur games characterise large sets). We obtain a determinacy
result for these games, when the winning set is a countable intersection of
open sets.

Usual techniques to solve the Weighted Constraint Satisfaction Problem (WCSP)
are based on cost transfer operations coupled with a branch and bound
algorithm. In this paper, we focus on an approach integrating extraction and
relaxation of Minimal Unsatisfiable Cores in order to solve this problem. We
develop our approach in two ways: an incomplete, greedy algorithm and a
complete one.

A huge number of academic papers come out of many conferences and journals
these days. In these circumstances, most researchers rely on keyword-based
search or on browsing through the proceedings of top conferences and journals
to find their related work. To ease this difficulty, we propose a Personalized
Academic Research Paper Recommendation System, which recommends related
articles for each researcher that may be interesting to her/him. In this
paper, we first introduce our web crawler to retrieve research papers from the
web. Then, we define similarity between two research papers based on the text
similarity between them. Finally, we propose our recommender system developed
using collaborative filtering methods. Our evaluation results demonstrate that
our system recommends good quality research papers.

Wikipedia, the world's largest encyclopedia, contains a lot of knowledge that
is expressed exclusively as formulae. Unfortunately, this knowledge is
currently not fully accessible to intelligent information retrieval systems.
This immense body of knowledge is hidden from value-added services, such as
search. In this paper, we present our MathSearch implementation for Wikipedia
that enables users to perform a combined text and formula search and fully
unlock the potential benefits.

The niche graph of a digraph $D$ is the (simple undirected) graph which has
the same vertex set as $D$ and has an edge between two distinct vertices $x$
and $y$ if and only if $N^+_D(x) \cap N^+_D(y) \neq \emptyset$ or $N^-_D(x)
\cap N^-_D(y) \neq \emptyset$, where $N^+_D(x)$ (resp. $N^-_D(x)$) is the set
of out-neighbors (resp. in-neighbors) of $x$ in $D$. A digraph $D=(V,A)$ is
called a semiorder (or a unit interval order) if there exist a real-valued
function $f:V \to \mathbb{R}$ on the set $V$ and a positive real number $\delta
\in \mathbb{R}$ such that $(x,y) \in A$ if and only if $f(x) > f(y) + \delta$.
A digraph $D=(V,A)$ is called an interval order if there exists an assignment
$J$ of a closed real interval $J(x) \subset \mathbb{R}$ to each vertex $x \in
V$ such that $(x,y) \in A$ if and only if $\min J(x) > \max J(y)$.

S.-R. Kim and F. S. Roberts characterized the competition graphs of
semiorders and interval orders in 2002, and Y. Sano characterized the
competition-common enemy graphs of semiorders and interval orders in 2010. In
this note, we give characterizations of the niche graphs of semiorders and
interval orders. 2013/04/23 - 00:28
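The definitions above translate directly into a small script; the vertex labels, the function $f$, and the threshold $\delta$ below are chosen for illustration:

```python
def niche_graph(vertices, arcs):
    """Niche graph of a digraph D = (V, A): x ~ y iff x and y have a
    common out-neighbor or a common in-neighbor in D."""
    out = {v: {w for (u, w) in arcs if u == v} for v in vertices}
    inn = {v: {u for (u, w) in arcs if w == v} for v in vertices}
    edges = set()
    for x in vertices:
        for y in vertices:
            if x < y and (out[x] & out[y] or inn[x] & inn[y]):
                edges.add((x, y))
    return edges

# Example semiorder: f(v) = v with delta = 1.5 gives arcs (x, y)
# exactly when f(x) > f(y) + delta.
V = [1, 2, 3, 4]
f, delta = {v: float(v) for v in V}, 1.5
A = {(x, y) for x in V for y in V if f[x] > f[y] + delta}
G = niche_graph(V, A)   # -> {(1, 2), (3, 4)}
```

Here 3 and 4 share the out-neighbor 1, and 1 and 2 share the in-neighbor 4, so the niche graph is a perfect matching on these four vertices.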

A backbone of a propositional CNF formula is a variable whose truth value is
the same in every truth assignment that satisfies the formula. The notion of
backbones for CNF formulas has been studied in various contexts. In this paper,
we introduce local variants of backbones, and study the computational
complexity of detecting them. In particular, we consider k-backbones, which are
backbones for sub-formulas consisting of at most k clauses, and iterative
k-backbones, which are backbones that result after repeated instantiations of
k-backbones. We determine the parameterized complexity of deciding whether a
variable is a k-backbone or an iterative k-backbone for various restricted
formula classes, including Horn, definite Horn, and Krom. We also present some
first empirical results regarding backbones for CNF-Satisfiability (SAT). The
empirical results we obtain show that a large fraction of the backbones of
structured SAT instances are local, in contrast to random instances, which
appear to have few local backbones. 2013/04/23 - 00:28
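The notions of backbone and k-backbone can be checked by brute force on tiny formulas; this sketch uses one simplified reading of the definition (a k-backbone is a backbone of some sub-formula of at most k clauses) and is exponential, unlike the parameterized algorithms the paper studies:

```python
from itertools import product, combinations

def backbones(clauses, variables):
    """Brute-force backbone detection for a small CNF formula.
    A clause is a list of signed literals, e.g. [1, -2] means (x1 or not x2).
    Unsatisfiable formulas yield no backbones in this sketch."""
    models = [m for m in product([False, True], repeat=len(variables))
              if all(any((lit > 0) == m[abs(lit) - 1] for lit in c)
                     for c in clauses)]
    if not models:
        return {}
    return {v: models[0][v - 1] for v in variables
            if all(m[v - 1] == models[0][v - 1] for m in models)}

def k_backbones(clauses, variables, k):
    """Variables that are backbones of some sub-formula with at most k
    clauses (later sub-formulas overwrite earlier values in this sketch)."""
    found = {}
    for size in range(1, k + 1):
        for sub in combinations(clauses, size):
            found.update(backbones(list(sub), variables))
    return found
```

For F = (x1) ∧ (¬x1 ∨ x2) ∧ (x2 ∨ x3), the variable x1 is already a 1-backbone, while x2 only becomes a backbone once the first two clauses are considered together, i.e. it is a 2-backbone.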

Quipper is a recently developed programming language for expressing quantum
computations. This paper gives a brief tutorial introduction to the language,
through a demonstration of how to make use of some of its key features. We
illustrate many of Quipper's language features by developing a few well-known
examples of quantum computation, including quantum teleportation, the quantum
Fourier transform, and a quantum circuit for addition. 2013/04/23 - 00:28

Clique-width is a graph invariant that has been widely studied in
combinatorics and computer science. However, computing the clique-width of a
graph is an intricate problem; the exact clique-width is not known even for
very small graphs. We present a new method for computing the clique-width of
graphs based on an encoding to propositional satisfiability (SAT), which is
then evaluated by a SAT solver. Our encoding is based on a reformulation of
clique-width in terms of partitions that utilizes an efficient encoding of
cardinality constraints. Our SAT-based method is the first to discover the
exact clique-width of various small graphs, including famous graphs from the
literature as well as random graphs of various densities. With our method we
determined the smallest graphs that require a given prescribed clique-width. 2013/04/23 - 00:28
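The abstract does not say which cardinality encoding is used; one standard efficient choice is the Sinz sequential-counter encoding of "at most k of x1..xn are true", sketched here (variable numbering and the register layout are this sketch's own conventions):

```python
def at_most_k(xs, k, top):
    """Sequential-counter CNF encoding of sum(xs) <= k (Sinz-style).
    xs are positive DIMACS-like variable numbers; auxiliary register
    variables s_{i,j} are numbered from top+1. Returns (clauses, new_top),
    where each clause is a list of signed literals."""
    n = len(xs)
    if n <= k:
        return [], top                           # trivially satisfied
    s = lambda i, j: top + (i - 1) * k + j       # register s_{i,j}, 1-based
    cls = [[-xs[0], s(1, 1)]]                    # x1 -> s_{1,1}
    cls += [[-s(1, j)] for j in range(2, k + 1)] # first prefix sum < 2
    for i in range(2, n):
        cls.append([-xs[i - 1], s(i, 1)])        # xi -> s_{i,1}
        cls.append([-s(i - 1, 1), s(i, 1)])      # registers are monotone
        for j in range(2, k + 1):
            cls.append([-xs[i - 1], -s(i - 1, j - 1), s(i, j)])
            cls.append([-s(i - 1, j), s(i, j)])
        cls.append([-xs[i - 1], -s(i - 1, k)])   # overflow forbidden
    cls.append([-xs[n - 1], -s(n - 1, k)])
    return cls, top + (n - 1) * k

cls, top = at_most_k([1, 2, 3], 1, 3)  # CNF asserting x1 + x2 + x3 <= 1
```

The encoding is linear in n·k, which is what makes such constraints practical inside larger SAT encodings like the clique-width one described above.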

We motivate this study from a recent work on a Stochastic Gradient Descent
(SGD) method with only one projection, which aims at alleviating the
computational bottleneck of a standard SGD method in performing the projection
at each iteration. It can find applications in many learning problems,
especially when the optimization domain is complex (e.g., a PSD cone). It has
been shown to enjoy an $O(\log T/T)$ convergence rate for strongly convex
optimization. Recently, \cite{Zhang:arXiv1304.0740} improved the convergence
rate to an optimal $O(1/T)$ with $O(\log T)$ projections. However, their
algorithm has two drawbacks: (i) it assumes that the objective function is
both strongly convex and smooth, and (ii) the number of projections in their
algorithm depends on the condition number of the problem (namely, the ratio of
the smoothness parameter to the strong convexity parameter).

In this paper, we make further contributions along this line. First, we
develop an epoch projection SGD method that only makes $\log (T)$ projections
but achieves an optimal convergence rate $O(1/T)$ for {\it strongly convex
optimization}. Second, we present a proximal extension to utilize the structure
of the objective function that could further speed up the computation and
convergence for sparse regularized loss minimization problems. Finally, we
consider an application of the proposed techniques to solving the high
dimensional large margin nearest neighbor classification problem, yielding a
speed-up of orders of magnitude. 2013/04/23 - 00:28

Social Media offer a vast amount of geo-located and time-stamped textual
content directly generated by people. This information can be analysed to
obtain insights about the general state of a large population of users and to
address scientific questions from a diversity of disciplines. In this work, we
estimate temporal patterns of mood variation through the use of emotionally
loaded words contained in Twitter messages, possibly reflecting underlying
circadian and seasonal rhythms in the mood of the users. We present a method
for computing mood scores from text using affective word taxonomies, and apply
it to millions of tweets collected in the United Kingdom during the seasons of
summer and winter. Our analysis results in the detection of strong and
statistically significant circadian patterns for all the investigated mood
types. Seasonal variation does not seem to register any important divergence in
the signals, but a periodic oscillation within a 24-hour period is identified
for each mood type. The main common characteristic of all emotions is their
mid-morning peak; however, their mood score patterns differ in the evenings. 2013/04/23 - 00:28
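A minimal version of scoring text against an affective word taxonomy looks as follows (the lexicon below is made up for the sketch; real analyses use curated taxonomies such as valence word lists):

```python
def mood_score(text, lexicon):
    """Mean affective score of the recognised words in a message;
    words outside the lexicon are ignored, and a message with no
    recognised words scores 0.0."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    scores = [lexicon[w] for w in words if w in lexicon]
    return sum(scores) / len(scores) if scores else 0.0

# Tiny illustrative lexicon (valence-like scores, invented for the sketch).
lexicon = {"happy": 0.9, "great": 0.7, "sad": -0.8, "tired": -0.4}
```

Averaging such scores over all tweets posted in each hour of the day yields the kind of time series in which circadian patterns can be detected.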

A Moving Sink (MS) in Wireless Sensor Networks (WSNs) has emerged as a
blessing because it collects data directly from the nodes, making the concept
of relay nodes obsolete. There are, however, a few challenges to be taken care
of, such as data delay tolerance and the trajectory of the MS, which is
NP-hard to optimize. In our proposed scheme, we divide the square field into
small squares. The middle point of each partitioned area is a sojourn location
of the sink, and the nodes around the MS are within its transmission range and
send the sensed data directly in a delay-tolerant fashion. Two sinks move
simultaneously: one on an inner trajectory with four sojourn locations, and
the other on an outer trajectory with twelve sojourn locations. The
introduction of this joint mobility enhances network lifetime and ultimately
throughput. As finding the MS trajectory is an NP-hard problem, we convert it
into a geometric problem and define it as Geometric Sink Movement (GSM). A set
of linear programming equations is also given in support of GSM, which
prolongs network lifetime. 2013/04/23 - 00:28
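The 4 + 12 split of sojourn locations is consistent with a 4x4 partition of the field: the four central sub-squares form the inner ring and the twelve boundary sub-squares form the outer ring. A sketch of that geometry (the 4x4 grid is this sketch's assumption, inferred from the stated counts):

```python
def sojourn_locations(side, n=4):
    """Centres of the n x n sub-squares of a square field of the given
    side length, split into the inner ring of cells (4 for n = 4) for
    one sink and the outer ring (12 for n = 4) for the other."""
    cell = side / n
    inner, outer = [], []
    for i in range(n):
        for j in range(n):
            centre = ((i + 0.5) * cell, (j + 0.5) * cell)
            if 0 < i < n - 1 and 0 < j < n - 1:
                inner.append(centre)
            else:
                outer.append(centre)
    return inner, outer
```

For a 100 m field this gives four inner and twelve outer sojourn points, matching the two trajectories described above.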

We obtain upper and lower bounds on the running times of exponential-time
algorithms for the detection of weak backdoor sets of 3CNF formulas, considering
various base classes. These results include (i) a 4.54^k algorithm to detect
whether there is a weak backdoor set on at most k variables to the class of
Horn formulas; (ii) a 2.27^k algorithm to detect whether there is a weak
backdoor set on at most k variables to the class of Krom formulas; both from an
n variable 3CNF formula (omitting polynomial factors). These bounds improve an
earlier known bound of 6^k. We also prove a 2^k lower bound for these problems,
subject to the Strong Exponential Time Hypothesis. 2013/04/23 - 00:28
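For reference, the object being searched for can be spelled out with a naive brute-force detector: a weak backdoor set to Horn is a variable set B with some assignment under which the reduced formula is Horn and satisfiable. This sketch enumerates all candidate sets, so it is far slower than the 4.54^k and 2.27^k algorithms of the paper:

```python
from itertools import combinations, product

def is_horn(clauses):
    """A CNF formula is Horn if each clause has at most one positive literal."""
    return all(sum(1 for l in c if l > 0) <= 1 for c in clauses)

def reduce_cnf(clauses, assignment):
    """Apply a partial assignment {var: bool}: satisfied clauses vanish,
    falsified literals disappear; None signals an emptied (false) clause."""
    out = []
    for c in clauses:
        if any(abs(l) in assignment and assignment[abs(l)] == (l > 0) for l in c):
            continue
        rest = [l for l in c if abs(l) not in assignment]
        if not rest:
            return None
        out.append(rest)
    return out

def brute_sat(clauses):
    """Exponential satisfiability check, fine for tiny formulas."""
    vs = sorted({abs(l) for c in clauses for l in c})
    for bits in product([False, True], repeat=len(vs)):
        m = dict(zip(vs, bits))
        if all(any(m[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def weak_horn_backdoor(clauses, variables, k):
    """Smallest set B (|B| <= k) with an assignment t to B such that the
    reduced formula F[t] is Horn and satisfiable, or None if none exists."""
    for size in range(k + 1):
        for B in combinations(sorted(variables), size):
            for bits in product([False, True], repeat=size):
                red = reduce_cnf(clauses, dict(zip(B, bits)))
                if red is not None and is_horn(red) and brute_sat(red):
                    return set(B)
    return None
```

Note that a formula that is already a satisfiable Horn formula has the empty set as a weak backdoor, so callers should compare the result against `None` rather than test it for truthiness.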