Computer Science papers from arXiv.org

In this paper, we consider the secrecy capacity of a wiretap channel in the
presence of causal state information and secure rate-limited feedback. In this
scenario, the causal state information from the channel is available to both
the legitimate transmitter and legitimate receiver. In addition, the legitimate
receiver can send secure feedback to the transmitter at a limited rate $R_f$. We
show that the secrecy capacity is bounded. Moreover, when the channel to the
legitimate receiver is less noisy than the channel to the eavesdropper, the
bound is shown to be tight. The capacity achieving scheme is based on both the
Wyner wiretap coding and two steps of shared-key generation: one from the state
information and one via the noiseless feedback. Finally, we consider several
special cases. When state information is available only at the legitimate
receiver, the analysis suggests that unlike previous results involving
feedback, it is better to use the feedback to send the state information to the
transmitter (when possible), rather than send a random key.

http://arxiv.org/abs/1308.6736 2013/09/02 - 18:52

Noel, Reed, and Wu proved the conjecture of Ohba stating that the choice
number of a graph $G$ equals its chromatic number when $|V(G)|\le 2\chi(G)+1$.
We extend this to a general upper bound: \[\mathrm{ch}(G)\le
\max\left\{\chi(G),\left\lceil\frac{|V(G)|+\chi(G)-1}{3}\right\rceil\right\}.\]
For $|V(G)|\le 3\chi(G)$, Ohba provided examples where equality holds.

http://arxiv.org/abs/1308.6739 2013/09/02 - 18:52

Data mining is a way of extracting data or uncovering hidden patterns of
information from databases. There is therefore a need to prevent inference
rules from being disclosed, so that more secure data sets cannot be identified
from non-sensitive attributes. This can be done by removing or adding certain
item sets in the transactions (sanitization). The purpose is to hide the
inference rules, so that a user cannot discover any valuable information from
other non-sensitive data, and an organisation can release all samples of its
data without fear of knowledge discovery in databases; this is achieved by
investigating frequently occurring item sets and the rules that can be mined
from them, with the objective of hiding them. Another way is to release only
limited samples in the new database, so that there is no information loss
while the legitimate needs of the users are still satisfied. The major problem
is uncovering hidden patterns, which poses a threat to database security.
Sensitive data are inferred from non-sensitive data based on the semantics of
the application the user has; this is commonly known as the inference problem.
Two fundamental approaches to protect sensitive rules from disclosure are
preventing the rules from being generated, by hiding the frequent sets of data
items, and reducing the importance of the rules, by setting their confidence
below a user-specified threshold.
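
As a toy illustration of the second approach (the data, names, and sanitization rule below are illustrative, not from the paper), a sensitive rule can be hidden by deleting items from supporting transactions until its confidence drops below the threshold:

```python
# Sketch: hide a sensitive association rule X -> Y by removing an item of Y
# from transactions supporting X | Y until confidence falls below a threshold.

def confidence(transactions, X, Y):
    support_x = sum(1 for t in transactions if X <= t)
    support_xy = sum(1 for t in transactions if (X | Y) <= t)
    return support_xy / support_x if support_x else 0.0

def sanitize(transactions, X, Y, threshold):
    victim = next(iter(Y))                    # item to delete from supporters
    for t in transactions:
        if confidence(transactions, X, Y) < threshold:
            break
        if (X | Y) <= t:
            t.discard(victim)                 # weaken one supporting transaction
    return transactions

db = [{"bread", "milk"}, {"bread", "milk", "eggs"},
      {"bread", "milk"}, {"milk", "eggs"}]
sanitize(db, X={"bread"}, Y={"milk"}, threshold=0.5)
print(confidence(db, {"bread"}, {"milk"}))    # now below 0.5
```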

http://arxiv.org/abs/1308.6744 2013/09/02 - 18:52

Cloud computing is a recent computing model that provides consistent access to
wide-area distributed resources. It revolutionized the IT world with its
service-provision infrastructure, lower maintenance cost, data and service
availability assurance, rapid accessibility and scalability. A grid and cloud
computing intrusion detection system inspects encrypted node communication and
finds hidden attack trials, detecting attacks that network-based and
host-based systems cannot identify. It incorporates knowledge and behaviour
analysis to identify specific intrusions. A signature-based IDS monitors the
packets in the network and identifies threats by matching against a database,
but it fails to detect attacks that are not included in the database, and it
captures poorly under a large volume of anomalies. Another problem is that the
cloud service provider may hide an attack caused by an intruder; due to its
distributed nature, a cloud environment has a high likelihood of vulnerable
resources. By impersonating legitimate users, intruders can maliciously use a
service's abundant resources. In the proposed system we combine several
existing concepts with new intrusion detection techniques, merging an
entropy-based system with an anomaly detection system to provide multilevel
protection against distributed denial of service. This is done in two steps.
First, users pass through a router at the network site, which incorporates a
detection algorithm and checks for legitimate users. Second, they pass through
a router at the cloud site, which incorporates a confirmation algorithm and
checks a threshold value: if the value is beyond the threshold, the user is
considered legitimate; otherwise an intruder has been found in the
environment.
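
A minimal sketch of the entropy-based ingredient (the window and the threshold are illustrative assumptions, not values from the paper): the Shannon entropy of the source-address distribution collapses when a few sources dominate a traffic window, which is the kind of statistic the routers' detection and confirmation algorithms can compare against a threshold.

```python
import math
from collections import Counter

def source_entropy(packets):
    """Shannon entropy (bits) of the source-address distribution in one
    traffic window; low entropy means a few sources dominate."""
    counts = Counter(src for src, _ in packets)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

THRESHOLD = 1.0   # illustrative value; tuned per deployment in practice

# 1000 packets, 90% of them from one flooding source
window = [("10.0.0.1" if i % 10 else "10.0.0.2", "payload") for i in range(1000)]
if source_entropy(window) < THRESHOLD:
    print("possible DDoS: traffic concentrated on few sources")
```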

http://arxiv.org/abs/1308.6745 2013/09/02 - 18:52

In theory, coordinated multipoint transmission (CoMP) promises vast gains in
spectral efficiency. However, industrial field trials show rather disappointing
throughput gains, whereby the major limiting factor is properly sharing channel
state information. This so-called limited feedback problem has been considered
in many recent papers in the context of CoMP under the assumptions of 1) an
infinite-SNR regime, 2) no user selection, and 3) ideal link adaptation,
rendering the analysis overly optimistic. In this paper we take a step towards a more
realistic assessment of the limited feedback problem by introducing an improved
metric for the performance evaluation, which better captures the throughput
degradation. We obtain the relevant scaling laws (lower and upper bounds) and
show that they are, at finite SNR, fundamentally different from existing ones.
Moreover we provide a robust iterative interference alignment algorithm and
corresponding feedback strategies achieving the optimal scaling laws. The main
idea is that instead of sending the complete channel matrix to each base
station each user fixes a receive filter in each iteration and feeds back a
quantized version of the effective channel. Finally we underline the findings
with simulations for the proposed system.
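
A minimal numerical sketch of the feedback idea described above (dimensions, the random codebook, and all names are illustrative assumptions): rather than quantizing the full channel matrix, the user applies its current receive filter and quantizes only the resulting effective channel vector.

```python
import numpy as np

rng = np.random.default_rng(0)

def feedback_index(H, u, codebook):
    """Form the effective channel u^H H for receive filter u and return
    the index of the codeword closest in chordal distance."""
    h_eff = u.conj().T @ H                      # 1 x Nt effective channel
    h_dir = h_eff / np.linalg.norm(h_eff)
    return int(np.argmax(np.abs(codebook @ h_dir.conj().T)))

Nt, Nr, bits = 4, 2, 6
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
u = rng.standard_normal((Nr, 1)) + 1j * rng.standard_normal((Nr, 1))
u /= np.linalg.norm(u)
codebook = rng.standard_normal((2**bits, Nt)) + 1j * rng.standard_normal((2**bits, Nt))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)
print(feedback_index(H, u, codebook))           # index fed back per iteration
```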

http://arxiv.org/abs/1308.6750 2013/09/02 - 18:52

Digital payment schemes are of ever-increasing importance. Out of the
countless different schemes available this article focuses on the popular
Bitcoin system. The authors provide a description of Bitcoin's unique
technological basis and its accompanying ecosystem of users, miners, trading
platforms and vendors. Furthermore, this article discusses Bitcoin's
currency-like features and the first regulatory actions taken in the European
Union and in the United States of America.

http://arxiv.org/abs/1308.6760 2013/09/02 - 18:52

Tor hidden services allow running Internet services while protecting the
location of the servers. Their main purpose is to enable freedom of speech even
in situations in which powerful adversaries try to suppress it. However,
providing location privacy and client anonymity also makes Tor hidden services
an attractive platform for every imaginable kind of shady service. The ease
with which Tor hidden services can be set up has spurred a huge growth of
anonymously provided Internet services of both types. In this paper we analyse
the landscape of Tor hidden services. We have studied Tor hidden services after
collecting 39824 hidden service descriptors on 4th of Feb 2013 by exploiting
protocol and implementation flaws in Tor: we scanned them for open ports; in
the case of HTTP services, we analysed and classified their content. We also
estimated the popularity of hidden services by looking at the request rate for
hidden service descriptors by clients. We found that while the content of Tor
hidden services is rather varied, the most popular hidden services are related
to botnets.

http://arxiv.org/abs/1308.6768 2013/09/02 - 18:52

In this paper we study decomposition methods based on separable
approximations for minimizing the augmented Lagrangian. In particular, we study
and compare the Diagonal Quadratic Approximation Method (DQAM) of Mulvey and
Ruszczy\'{n}ski and the Parallel Coordinate Descent Method (PCDM) of
Richt\'arik and Tak\'a\v{c}. We show that the two methods are equivalent for
feasibility problems up to the selection of a single step-size parameter.
Furthermore, we prove an improved complexity bound for PCDM under strong
convexity, and show that this bound is at least $8(L'/\bar{L})(\omega-1)^2$
times better than the best known bound for DQAM, where $\omega$ is the degree
of partial separability and $L'$ and $\bar{L}$ are the maximum and average of
the block Lipschitz constants of the gradient of the quadratic penalty
appearing in the augmented Lagrangian.

http://arxiv.org/abs/1308.6774 2013/09/02 - 18:52

We present a simple and versatile formulation of grid-based graph
representation problems as an integer linear program (ILP) and a corresponding
SAT instance. In a grid-based representation vertices and edges correspond to
axis-parallel boxes on an underlying integer grid; boxes can be further
constrained in their shapes and interactions by additional problem-specific
constraints. We describe a general d-dimensional model for grid representation
problems. This model can be used to solve a variety of NP-hard graph problems,
including pathwidth, bandwidth, optimum st-orientation, area-minimal (bar-k)
visibility representation, boxicity-k graphs and others. We implemented
SAT-models for all of the above problems and evaluated them on the Rome graphs
collection. The experiments show that our model successfully solves NP-hard
problems within a few minutes on small to medium-size Rome graphs.

http://arxiv.org/abs/1308.6778 2013/09/02 - 18:52

Even though entanglement is a fundamental tool for quantum technologies, its
unambiguous detection and quantification is still limited to low dimensions or
specific classes of states. In this sense every extension beyond such special
cases is of the utmost importance. Here, we consider quantum states in
arbitrary $d\times d$ dimensions, particularly relevant to quantum optics,
written in terms of a pair basis where each state in the subsystem A is paired
with only one state in B. We extend the class of states for which negativity is
a necessary and sufficient measure of entanglement to mixtures of states in a
pair basis. In addition, we provide analytical expressions for a tight
lower-bound estimation of the entanglement of formation, a central quantity for
quantum information applications.
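
For context, a minimal sketch (standard definitions, not code from the paper) of computing the negativity of a bipartite state via the partial transpose:

```python
import numpy as np

def negativity(rho, dA, dB):
    """Negativity N(rho) = (||rho^{T_B}||_1 - 1) / 2 of a dA x dB
    bipartite density matrix, via the partial transpose on subsystem B."""
    r = rho.reshape(dA, dB, dA, dB)
    rho_tb = r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)  # swap B indices
    trace_norm = np.abs(np.linalg.eigvalsh(rho_tb)).sum()
    return (trace_norm - 1) / 2

# maximally entangled 2x2 Bell state: N = 1/2
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(negativity(np.outer(psi, psi), 2, 2))   # ~0.5
```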

http://arxiv.org/abs/1308.6783 2013/09/02 - 18:52

Given a set $V$ of $n$ objects, an online ranking system outputs at each time
step a full ranking of the set, observes a feedback of some form and suffers a
loss. We study the setting in which the (adversarial) feedback is a choice of
an item from $V$, and the loss is the position (1st, 2nd, 3rd...) of the item
in the output ranking. For this simple problem we present an algorithm with
expected regret $O(n^{3/2}\sqrt{T})$ for a time horizon of $T$ steps, with
respect to the best single ranking in hindsight. This improves on previously
known algorithms in two ways: (i) it shaves off an $\Omega(\sqrt{\log n})$
factor in the regret compared to the previously best known algorithm, and (ii)
it is extremely simple to implement, compared to previous algorithms (for some
of which it is not even clear how to execute in sub-exponential time). Our
algorithm works for a more general class of ranking problems in which the
feedback is a vector of values of elements in $V$, and the loss is the sum of
magnitudes of pairwise inversions (also known as AUC in the literature). The
main tool is the use of randomized sorting algorithms that satisfy a stochastic
version of the social choice theoretical Independence of Irrelevant
Alternatives (IIA), ensuring that the projection of the algorithm on a fixed
pair of items follows a simple multiplicative weights update scheme on a binary
action set.
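
A minimal sketch of the per-pair mechanism described above (the learning rate, the toy adversary, and the score-based aggregation are illustrative stand-ins for the paper's IIA randomized sorting step): each unordered pair keeps multiplicative weights over its two orderings, updated by the pairwise losses the feedback reveals.

```python
import random
from itertools import combinations

def rank(items, w):
    """Assemble a full ranking from the per-pair weights (a simple
    weighted-score aggregation, not the paper's randomized sort)."""
    score = {i: 0.0 for i in items}
    for (i, j), (wi, wj) in w.items():
        score[i] += wi / (wi + wj)      # prob. mass on "i before j"
        score[j] += wj / (wi + wj)
    return sorted(items, key=lambda x: -score[x])

items, eta = list(range(5)), 0.1
w = {pair: [1.0, 1.0] for pair in combinations(items, 2)}

for t in range(1000):
    ranking = rank(items, w)
    chosen = random.choice(items[:2])   # toy adversary favouring items 0 and 1
    for (i, j) in w:
        if chosen == i:
            w[(i, j)][1] *= 1 - eta     # "j before i" would have lost
        elif chosen == j:
            w[(i, j)][0] *= 1 - eta     # "i before j" would have lost

print(rank(items, w))                   # items 0 and 1 float to the top
```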

http://arxiv.org/abs/1308.6797 2013/09/02 - 18:52

In this paper we study \emph{many-to-one boundary labeling with backbone
leaders}. In this new many-to-one model, a horizontal backbone reaches out of
each label into the feature-enclosing rectangle. Feature points that need to be
connected to this label are linked via vertical line segments to the backbone.
We present dynamic programming algorithms for label number and total leader
length minimization of crossing-free backbone labelings. When crossings are
allowed, we aim to obtain solutions with the minimum number of crossings. This
can be achieved efficiently in the case of fixed label order; however, for
flexible label order we show that minimizing the number of leader crossings is
NP-hard.

http://arxiv.org/abs/1308.6801 2013/09/02 - 18:52

Intrinsic isometric shape matching has become the standard approach for pose
invariant correspondence estimation among deformable shapes. Most existing
approaches assume global consistency, i.e., the metric structure of the whole
manifold must not change significantly. While global isometric matching is well
understood, only a few heuristic solutions are known for partial matching.
Partial matching is particularly important for robustness to topological noise
(incomplete data and contacts), which is a common problem in real-world 3D
scanner data. In this paper, we introduce a new approach to partial, intrinsic
isometric matching. Our method is based on the observation that isometries are
fully determined by purely local information: a map of a single point and its
tangent space fixes an isometry for both global and partial maps. From this
idea, we develop a new representation for partial isometric maps based on
equivalence classes of correspondences between pairs of points and their
tangent spaces. From this, we derive a local propagation algorithm that finds
such mappings efficiently. In contrast to previous heuristics based on RANSAC
or expectation maximization, our method is based on a simple and sound
theoretical model and is fully deterministic. We apply our approach to register
partial point clouds and compare it to the state-of-the-art methods, where we
obtain significant improvements over global methods for real-world data and
stronger guarantees than previous heuristic partial matching algorithms.

http://arxiv.org/abs/1308.6804 2013/09/02 - 18:52

Without requiring objects to carry any transceiver, device-free based object
tracking provides a promising solution for many localization and tracking
systems to monitor non-cooperative objects such as intruders. However, existing
device-free solutions mainly use sensors and active RFID tags, which are much
more expensive compared to passive tags. In this paper, we propose a novel
motion detection and tracking method using passive RFID tags, named Twins. The
method leverages a newly observed phenomenon called critical state caused by
interference among passive tags. We contribute to both theory and practice of
such phenomenon by presenting a new interference model that perfectly explains
this phenomenon and using extensive experiments to validate it. We design a
practical Twins-based intrusion detection scheme and implement a real prototype
with a commercial off-the-shelf reader and tags. The results show that Twins is
effective in detecting moving objects, with a low location error of 0.75 m on
average.

http://arxiv.org/abs/1308.6805 2013/09/02 - 18:52

In earlier work, we showed that it is possible to achieve $O(\log N)$
streaming delay with high probability in a peer-to-peer network, where each
peer has as little as four neighbors, while achieving any arbitrary fraction of
the maximum possible streaming rate. However, the constant in the $O(\log N)$
delay term becomes rather large as we get closer to the maximum streaming rate.
In this paper, we design an alternative pairing and chunk dissemination
algorithm that allows us to transmit at the maximum streaming rate while
ensuring that all but a negligible fraction of the peers receive the data
stream with $O(\log N)$ delay with high probability. The result is established
by examining the properties of the graph formed by the union of two or more
random 1-regular digraphs, i.e., directed graphs in which each node has both
in-degree and out-degree equal to one.
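
A minimal sketch of the random object analysed above (illustrative only): a random 1-regular digraph is a uniformly random permutation viewed as an edge set, so the union of two such digraphs gives every node in- and out-degree at most two.

```python
import random

def random_1_regular_digraph(n):
    """A uniformly random permutation pi of [0, n), read as edges i -> pi(i),
    is a random digraph with all in- and out-degrees equal to one."""
    pi = list(range(n))
    random.shuffle(pi)
    return {(i, pi[i]) for i in range(n)}

n = 10_000
union = random_1_regular_digraph(n) | random_1_regular_digraph(n)

in_deg, out_deg = [0] * n, [0] * n
for u, v in union:
    out_deg[u] += 1
    in_deg[v] += 1
assert all(d <= 2 for d in in_deg) and all(d <= 2 for d in out_deg)
print(len(union), "edges in the union of two random 1-regular digraphs")
```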

http://arxiv.org/abs/1308.6807 2013/09/02 - 18:52

We propose a generic model for weak memory architectures, in an axiomatic style. We
show how to instantiate this framework for SC, x86-TSO, C++ restricted to
release-acquire atomics, and Power. For Power, we compare our model to a
preceding, widely accepted, model in which we found a flaw. To do so, we define
an operational model that we show equivalent to our axiomatic model.

We also propose a model for ARM. Our testing on this architecture revealed a
behaviour later acknowledged as a bug by ARM, and more recently 64 additional
anomalies.

We offer a new simulation tool, called herd, which allows the user to specify
the model of their choice in a concise way. Given a specification of a model, the
tool becomes a simulator for that model. The tool relies on an axiomatic
description, and we show that this choice allows us to outperform all previous
simulation tools. Additionally, we confirm that verification time is vastly
improved, in the case of bounded model-checking.

Finally, we put our models in perspective, in the light of empirical data
obtained by analysing the C and C++ code of an entire Debian Linux
distribution. We present our new analysis tool, called mole, which explores a
piece of code to find the weak memory idioms that it uses.

http://arxiv.org/abs/1308.6810 2013/09/02 - 18:52

In modern data science problems, techniques for extracting value from big
data require performing large-scale optimization over heterogeneous, irregularly
structured data. Much of this data is best represented as multi-relational
graphs, making vertex programming abstractions such as those of Pregel and
GraphLab ideal fits for modern large-scale data analysis. In this paper, we
describe a vertex-programming implementation of a popular consensus
optimization technique known as the alternating direction method of multipliers
(ADMM). ADMM consensus optimization allows elegant solution of complex
objectives such as inference in rich probabilistic models. We also introduce a
novel hypergraph partitioning technique that improves over state-of-the-art
partitioning techniques for vertex programming and significantly reduces the
communication cost by reducing the number of replicated nodes up to an order of
magnitude. We implemented our algorithm in GraphLab and measured scaling
performance on a variety of realistic bipartite graph distributions and a large
synthetic voter-opinion analysis application. In our experiments, we are able
to achieve a 50% improvement in runtime over the current state-of-the-art
GraphLab partitioning scheme.
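
A minimal sketch of consensus ADMM itself (a toy serial version for distributed least squares; the problem and all names are illustrative, not the paper's GraphLab implementation): each block solves a local subproblem, the blocks are averaged into a consensus variable, and dual variables absorb the disagreement.

```python
import numpy as np

# Toy consensus ADMM for min_x sum_i ||A_i x - b_i||^2, serial version.
rng = np.random.default_rng(0)
d, blocks, rho = 5, 4, 1.0
A = [rng.standard_normal((20, d)) for _ in range(blocks)]
x_true = rng.standard_normal(d)
b = [Ai @ x_true for Ai in A]

z = np.zeros(d)
x = [np.zeros(d) for _ in range(blocks)]
u = [np.zeros(d) for _ in range(blocks)]
for _ in range(100):
    for i in range(blocks):
        # x_i-update: argmin ||A_i x - b_i||^2 + (rho/2)||x - z + u_i||^2
        x[i] = np.linalg.solve(2 * A[i].T @ A[i] + rho * np.eye(d),
                               2 * A[i].T @ b[i] + rho * (z - u[i]))
    z = np.mean([x[i] + u[i] for i in range(blocks)], axis=0)  # consensus update
    for i in range(blocks):
        u[i] += x[i] - z                                       # dual update
print(np.linalg.norm(z - x_true))   # -> near 0
```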

http://arxiv.org/abs/1308.6823 2013/09/02 - 18:52

We define strict confluent drawing, a form of confluent drawing in which the
existence of an edge is indicated by the presence of a smooth path through a
system of arcs and junctions (without crossings), and in which such a path, if
it exists, must be unique. We prove that it is NP-complete to determine whether
a given graph has a strict confluent drawing but polynomial to determine
whether it has an outerplanar strict confluent drawing with a fixed vertex
ordering (a drawing within a disk, with the vertices placed in a given order on
the boundary).

http://arxiv.org/abs/1308.6824 2013/09/02 - 18:52

We consider polynomial differential equations and make a number of
contributions to the questions of (i) complexity of deciding stability, (ii)
existence of polynomial Lyapunov functions, and (iii) existence of sum of
squares (sos) Lyapunov functions.

(i) We show that deciding local or global asymptotic stability of cubic
vector fields is strongly NP-hard. Simple variations of our proof are shown to
imply strong NP-hardness of several other decision problems: testing local
attractivity of an equilibrium point, stability of an equilibrium point in the
sense of Lyapunov, invariance of the unit ball, boundedness of trajectories,
convergence of all trajectories in a ball to a given equilibrium point,
existence of a quadratic Lyapunov function, local collision avoidance, and
existence of a stabilizing control law.

(ii) We present a simple, explicit example of a globally asymptotically
stable quadratic vector field on the plane which does not admit a polynomial
Lyapunov function (joint work with M. Krstic). For the subclass of homogeneous
vector fields, we conjecture that asymptotic stability implies existence of a
polynomial Lyapunov function, but show that the minimum degree of such a
Lyapunov function can be arbitrarily large even for vector fields in fixed
dimension and degree. For the same class of vector fields, we further establish
that there is no monotonicity in the degree of polynomial Lyapunov functions.

(iii) We show via an explicit counterexample that if the degree of the
polynomial Lyapunov function is fixed, then sos programming may fail to find a
valid Lyapunov function even though one exists. On the other hand, if the
degree is allowed to increase, we prove that existence of a polynomial Lyapunov
function for a planar or a homogeneous vector field implies existence of a
polynomial Lyapunov function that is sos and that the negative of its
derivative is also sos.

http://arxiv.org/abs/1308.6833 2013/09/02 - 18:52

We study the pointwise stabilizability of a discrete-time, time-homogeneous,
and stationary Markovian jump linear system. By using measure theory, ergodic
theory and a splitting theorem of state space we show in a relatively simple
way that if the system is essentially product-bounded, then it is pointwise
convergent if and only if it is pointwise exponentially convergent.

http://arxiv.org/abs/1107.0132 2013/09/02 - 18:52

An {\em acyclic edge coloring} of a graph $G$ is a proper edge coloring such
that the subgraph induced by any two color classes is a linear forest (an
acyclic graph with maximum degree at most two). The {\em acyclic chromatic
index} $\chi_{a}'(G)$ of a graph $G$ is the least number of colors needed in
any acyclic edge coloring of $G$. Fiam\v{c}\'{i}k (1978) conjectured that
$\chi_{a}'(G) \leq \Delta(G) + 2$, where $\Delta(G)$ is the maximum degree of
$G$. This conjecture is well known as the Acyclic Edge Coloring Conjecture
(AECC). A graph $G$ with maximum degree at most $\kappa$ is {\em
$\kappa$-deletion-minimal} if $\chi_{a}'(G) > \kappa$ and $\chi_{a}'(H) \leq
\kappa$ for every proper subgraph $H$ of $G$. In this paper, we present some
structural lemmas on $\kappa$-deletion-minimal graphs. Using the structural
lemmas, we first prove that the AECC is true for graphs with maximum average
degree less than four. We then prove that the AECC is true for planar graphs
without triangles adjacent to cycles of length at most four, with the
additional condition that every 5-cycle has at most three edges contained in
triangles, from which some known results follow as corollaries. We further
prove that every planar graph $G$ without intersecting triangles satisfies
$\chi_{a}'(G) \leq \Delta(G) + 3$. Finally, we consider one extreme case and
prove: if $G$ is a graph with $\Delta(G) \geq 3$ and all the $3^{+}$-vertices
are independent, then $\chi_{a}'(G) = \Delta(G)$. We hope the structural
lemmas will shed some light on acyclic edge coloring problems.
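
To make the definition concrete, here is a minimal checker (an illustration of the definition only, not anything from the paper) that verifies an edge coloring is proper and that every two color classes induce a forest; since properness already bounds the union's maximum degree by two, being a forest is equivalent to being a linear forest.

```python
from itertools import combinations
import networkx as nx

def is_acyclic_edge_coloring(G, color):
    """color maps each edge frozenset({u, v}) to an integer color."""
    # proper: edges sharing a vertex get distinct colors
    for v in G.nodes:
        cs = [color[frozenset(e)] for e in G.edges(v)]
        if len(cs) != len(set(cs)):
            return False
    # acyclic: every two color classes together induce a forest
    classes = {}
    for e, c in color.items():
        classes.setdefault(c, []).append(tuple(e))
    return all(nx.is_forest(nx.Graph(classes[c1] + classes[c2]))
               for c1, c2 in combinations(classes, 2))

K4 = nx.complete_graph(4)
rainbow = {frozenset(e): i for i, e in enumerate(K4.edges)}   # all colors distinct
print(is_acyclic_edge_coloring(K4, rainbow))  # True, using Delta(K4)+3 = 6 colors
```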

http://arxiv.org/abs/1302.2405 2013/09/02 - 18:52

Communities play a crucial role to describe and analyse modern networks.
However, the size of those networks has grown tremendously with the increase of
computational power and data storage. While various methods have been developed
to extract community structures, their computational cost or the difficulty to
parallelize existing algorithms make partitioning real networks into
communities a challenging problem. In this paper, we propose to alter an
efficient algorithm, the Louvain method, such that communities are defined as
the connected components of a tree-like assignment graph. Within this
framework, we precisely describe the different steps of our algorithm and
demonstrate its highly parallelizable nature. We then show that despite its
simplicity, our algorithm has a partitioning quality similar to the original
method on benchmark graphs and even outperforms other algorithms. We also show
that, even on a single processor, our method is much faster and allows the
analysis of very large networks.
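
A minimal sketch of the assignment-graph idea described above (illustrative, not the authors' implementation): each vertex points to the single neighbor giving the best gain, and communities are read off as connected components of the resulting assignment graph.

```python
import networkx as nx

def assignment_communities(G, gain):
    """Assign each node to its best neighbor under `gain`;
    communities = connected components of the assignment graph."""
    A = nx.Graph()
    A.add_nodes_from(G)
    for v in G.nodes:
        nbrs = list(G.neighbors(v))
        if nbrs:
            best = max(nbrs, key=lambda u: gain(G, v, u))
            A.add_edge(v, best)
    return list(nx.connected_components(A))

# illustrative gain: common-neighbor count (a stand-in for modularity gain)
cn = lambda G, v, u: len(set(G[v]) & set(G[u]))
G = nx.barbell_graph(5, 0)              # two cliques joined by one edge
print(assignment_communities(G, cn))    # -> the two cliques
```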

http://arxiv.org/abs/1308.6276 2013/09/01 - 17:21

Artifact-Centric systems have emerged in the last years as a suitable
framework to model business-relevant entities, by combining their static and
dynamic aspects. In particular, the Guard-Stage-Milestone (GSM) approach has
been recently proposed to model artifacts and their lifecycle in a declarative
way. In this paper, we enhance GSM with a Semantic Layer, constituted by a
full-fledged OWL 2 QL ontology linked to the artifact information models
through mapping specifications. The ontology provides a conceptual view of the
domain under study, and allows one to understand the evolution of the artifact
system at a higher level of abstraction. In this setting, we present a
technique to specify temporal properties expressed over the Semantic Layer, and
verify them according to the evolution in the underlying GSM model. This
technique has been implemented in a tool that exploits state-of-the-art
ontology-based data access technologies to manipulate the temporal properties
according to the ontology and the mappings, and that relies on the GSMC model
checker for verification.

http://arxiv.org/abs/1308.6292 2013/09/01 - 17:21

Many real networks are formed by a mesoscopic level of organization
characterized by clusters comprising nodes with a high internal degree of
connectivity. Currently, identifying community structure represents a potent
apparatus in the study of the structure of complex networks. Although many
methods employed to uncover the community structure have been proposed in the
last years, only a few studies have probed the suitability of these methods in
incomplete networks. Here we assess the accuracy of community detection
techniques in networks formed from sampling processes. We show that the
walktrap and fast greedy algorithms are able to correctly detect the modular structure of
incomplete complex networks even in the absence of many nodes of the underlying
network. We also propose a method for unveiling the structure of communities
that is based on the identification of communities in simplified networks. We
demonstrate that our method improves the time performance of traditional
methods while keeping the accuracy rate in identifying the community membership
of nodes. In the light of the results, we believe that the proposed methodology
can be applied to speed up virtually any community detection method, especially
if the network under analysis is tightly connected.
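
A minimal usage sketch of the two detection methods named above on a sampled network (using python-igraph; the benchmark graph and the 70% sampling fraction are illustrative assumptions):

```python
import random
import igraph as ig

# Build a toy modular graph, keep 70% of its nodes, and run
# walktrap and fast greedy on the incomplete network.
full = ig.Graph.SBM(200, pref_matrix=[[0.2, 0.01], [0.01, 0.2]],
                    block_sizes=[100, 100])
kept = random.sample(range(full.vcount()), int(0.7 * full.vcount()))
sampled = full.induced_subgraph(kept)

wt = sampled.community_walktrap().as_clustering()
fg = sampled.community_fastgreedy().as_clustering()
print("walktrap:   ", len(wt), "communities, modularity", round(wt.modularity, 3))
print("fast greedy:", len(fg), "communities, modularity", round(fg.modularity, 3))
```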

http://arxiv.org/abs/1308.6295 2013/09/01 - 17:21

Even though considerable attention has been given to the polarity of words
(positive and negative) and the creation of large polarity lexicons, research
in emotion analysis has had to rely on limited and small emotion lexicons. In
this paper we show how the combined strength and wisdom of the crowds can be
used to generate a large, high-quality, word-emotion and word-polarity
association lexicon quickly and inexpensively. We enumerate the challenges in
emotion annotation in a crowdsourcing scenario and propose solutions to address
them. Most notably, in addition to questions about emotions associated with
terms, we show how the inclusion of a word choice question can discourage
malicious data entry, help identify instances where the annotator may not be
familiar with the target term (allowing us to reject such annotations), and
help obtain annotations at sense level (rather than at word level). We
conducted experiments on how to formulate the emotion-annotation questions, and
show that asking if a term is associated with an emotion leads to markedly
higher inter-annotator agreement than that obtained by asking if a term evokes
an emotion.

http://arxiv.org/abs/1308.6297 2013/09/01 - 17:21

Knowing the degree of semantic contrast between words has widespread
application in natural language processing, including machine translation,
information retrieval, and dialogue systems. Manually-created lexicons focus on
opposites, such as {\rm hot} and {\rm cold}. Opposites are of many kinds such
as antipodals, complementaries, and gradable. However, existing lexicons often
do not classify opposites into the different kinds. They also do not explicitly
list word pairs that are not opposites but yet have some degree of contrast in
meaning, such as {\rm warm} and {\rm cold} or {\rm tropical} and {\rm
freezing}. We propose an automatic method to identify contrasting word pairs
that is based on the hypothesis that if a pair of words, $A$ and $B$, are
contrasting, then there is a pair of opposites, $C$ and $D$, such that $A$ and
$C$ are strongly related and $B$ and $D$ are strongly related. (For example,
there exists the pair of opposites {\rm hot} and {\rm cold} such that {\rm
tropical} is related to {\rm hot,} and {\rm freezing} is related to {\rm
cold}.) We will call this the contrast hypothesis. We begin with a large
crowdsourcing experiment to determine the amount of human agreement on the
concept of oppositeness and its different kinds. In the process, we flesh out
key features of different kinds of opposites. We then present an automatic and
empirical measure of lexical contrast that relies on the contrast hypothesis,
corpus statistics, and the structure of a {\it Roget}-like thesaurus. We show
that the proposed measure of lexical contrast obtains high precision and large
coverage, outperforming existing methods.
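
A minimal sketch of the contrast hypothesis as a predicate (the tiny opposite list and relatedness sets are illustrative stand-ins for the thesaurus structure and corpus statistics used in the paper):

```python
# Contrast hypothesis: A and B are contrasting if there exist opposites
# (C, D) with A strongly related to C and B strongly related to D.
opposites = [("hot", "cold")]
related = {
    "hot": {"hot", "tropical", "warm"},
    "cold": {"cold", "freezing", "chilly"},
}

def contrasting(a, b):
    return any((a in related[c] and b in related[d]) or
               (a in related[d] and b in related[c])
               for c, d in opposites)

print(contrasting("tropical", "freezing"))  # True
print(contrasting("tropical", "warm"))      # False: both relate to "hot"
```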

http://arxiv.org/abs/1308.6300 2013/09/01 - 17:21

This paper deals with the recognition and matching of text in both
cartographic maps and ancient documents. The purpose of this work is to find
similar text regions based on statistical and global features. A normalization
phase is performed first, in order to categorize the same quantity of
information consistently. A wordspotting phase is performed next, combining
local and global features. We run different experiments combining the
different feature-extraction techniques in order to obtain better results in
the recognition phase. We applied fontspotting to both ancient and
cartographic documents. We also applied wordspotting, in which we adopted a
new technique that compares images of individual characters rather than entire
word images. We present the precision and recall values obtained with three
methods for the new character-level wordspotting method.

http://arxiv.org/abs/1308.6309 2013/09/01 - 17:21

The analysis of historical documents is still a topical issue, given the
importance of the information that can be extracted and the importance
institutions attach to preserving their heritage. The main idea, in order to
characterize the content of ancient document images, is to clean the image,
segment text blocks from it, and then find similar blocks in either the same
image or the entire image database. Most approaches to offline handwriting
recognition proceed by segmenting words into smaller pieces (usually
characters) which are recognized separately. Recognition of a word then
requires the recognition of all the characters (OCR) that compose it. Our work
focuses mainly on the characterization of classes in images of old documents.
We use the SOM toolbox for finding classes in documents. We also applied
fractal dimensions and points of interest to categorize and match ancient
documents.

http://arxiv.org/abs/1308.6311 2013/09/01 - 17:21

Jamming attacks can significantly impact the performance of wireless
communication systems. In addition to reducing the capacity, such attacks may
lead to insurmountable overhead in terms of re-transmissions and increased
power consumption. In this paper, we consider the multiple-input single-output
(MISO) broadcast channel (BC) in the presence of a jamming attack in which a
subset of the receivers can be jammed at any given time. Further,
countermeasures for mitigating the effects of such jamming attacks are
presented. The effectiveness of these anti-jamming countermeasures is
quantified in terms of the degrees-of-freedom (DoF) of the MISO BC under
various assumptions regarding the availability of the channel state information
(CSIT) and the jammer state information at the transmitter (JSIT). The main
contribution of this paper is the characterization of the DoF region of the two
user MISO BC under various assumptions on the availability of CSIT and JSIT.
Partial extensions to the multi-user broadcast channels are also presented.

http://arxiv.org/abs/1308.6316 2013/09/01 - 17:21

Characterizing noisy or ancient documents remains a challenging problem. Many
techniques have been proposed for feature extraction and image indexing for
such documents. Global approaches are in general less robust and exact than
local approaches. We therefore propose, in this paper, a hybrid system based
on a global approach (fractal dimension) and a local one based on the SIFT
descriptor. The Scale Invariant Feature Transform is well suited to our
application since it is rotation invariant and relatively robust to changing
illumination. In the first step, the fractal dimension is computed for all
images in order to eliminate those whose features are distant from the
characteristics of the query image. Next, SIFT is applied to determine which
images best match the query. The average matching time of the hybrid approach
is better than that of "fractal dimension" or "SIFT descriptor" used alone.
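
A minimal sketch of such a two-stage pipeline using OpenCV (the box-counting implementation, tolerance, and matching criterion are illustrative assumptions, not the paper's exact system):

```python
import cv2
import numpy as np

def fractal_dimension(gray, threshold=128):
    """Box-counting dimension of the binarized image (illustrative)."""
    binary = gray < threshold
    sizes, counts = [2, 4, 8, 16, 32, 64], []
    for s in sizes:
        h, w = binary.shape
        boxes = binary[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    # slope of log(count) vs log(1/size) is the box-counting dimension
    return np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]

def hybrid_match(query, candidates, fd_tol=0.15):
    fd_q = fractal_dimension(query)
    # stage 1: cheap global filter by fractal dimension
    shortlist = [c for c in candidates
                 if abs(fractal_dimension(c) - fd_q) < fd_tol]
    # stage 2: local SIFT matching on the survivors
    sift, bf = cv2.SIFT_create(), cv2.BFMatcher()
    _, des_q = sift.detectAndCompute(query, None)
    scores = []
    for c in shortlist:
        _, des_c = sift.detectAndCompute(c, None)
        matches = bf.knnMatch(des_q, des_c, k=2)
        good = [m[0] for m in matches
                if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
        scores.append(len(good))
    return shortlist, scores
```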

http://arxiv.org/abs/1308.6319 2013/09/01 - 17:21

This paper extends the author's previous two-dimensional work with Ou and
LeVeque to high-resolution finite volume modeling of systems of fluids and
poroelastic media in three dimensions, using logically rectangular mapped
grids. A method is described for calculating consistent cell face areas and
normal vectors for a finite volume method on a general non-rectilinear
hexahedral grid. A novel limiting algorithm is also developed to cope with
difficulties encountered in implementing high-resolution finite volume methods
for anisotropic media on non-rectilinear grids; the new limiting approach is
compatible with any limiter function, and typically reduces solution error even
in situations where it is not necessary for correct functioning of the
numerical method. Dimensional splitting is used to reduce the computational
cost of the solution. The code implementing the three-dimensional algorithms is
verified against known plane wave solutions, with particular attention to the
performance of the new limiter algorithm in comparison to the classical one. An
acoustic wave in brine striking an uneven bed of orthotropic layered sandstone
is also simulated in order to demonstrate the capabilities of the simulation
code.

http://arxiv.org/abs/1308.6320 2013/09/01 - 17:21

In this paper, we apply Classification Restricted Boltzmann Machine
(ClassRBM) to the problem of predicting breast cancer recurrence. According to
the Polish National Cancer Registry, in 2010 alone breast cancer accounted for
almost 25% of all diagnosed cases of cancer in Poland. We propose how to use
ClassRBM for predicting breast cancer recurrence and for discovering inputs
(symptoms) relevant to illness reappearance. Next, we outline a general
probabilistic framework for learning Boltzmann machines with masks, which we
refer to as Dropping. The fashion of generating masks leads to different
learning methods, e.g., DropOut and DropConnect. We propose a new method
called DropPart, a generalization of DropConnect in which a Beta distribution
is used in place of DropConnect's Bernoulli distribution. Finally, we carry
out an experiment on a real-life dataset consisting of 949 cases, provided by
the Institute of Oncology Ljubljana.
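
A minimal sketch of the three masking schemes (shapes and hyperparameters are illustrative; this shows the masking idea only, not the authors' ClassRBM training code): DropOut masks whole units, DropConnect masks individual weights with Bernoulli draws, and DropPart replaces the Bernoulli mask with Beta-distributed values in [0, 1].

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 8, 4
W = rng.standard_normal((n_visible, n_hidden))
v = rng.random(n_visible)

# DropOut: Bernoulli mask on units (here, the hidden pre-activations)
unit_mask = rng.binomial(1, 0.5, size=n_hidden)
h_dropout = (v @ W) * unit_mask

# DropConnect: Bernoulli mask on individual weights
conn_mask = rng.binomial(1, 0.5, size=W.shape)
h_dropconnect = v @ (W * conn_mask)

# DropPart: Beta-distributed mask on weights; Beta(eps, eps) piles its
# mass near {0, 1}, approaching a Bernoulli-like mask
part_mask = rng.beta(0.5, 0.5, size=W.shape)
h_droppart = v @ (W * part_mask)
```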

http://arxiv.org/abs/1308.6324 2013/09/01 - 17:21

Convex optimization models find interesting applications, especially in
signal/image processing and compressive sensing. We study some augmented convex
models, which are perturbed by strongly convex functions, and propose a dual
gradient algorithm. The proposed algorithm includes the linearized Bregman
algorithm and the singular value thresholding algorithm as special cases. Based
on fundamental properties of proximal operators, we present a concise approach
to establish the convergence of both primal and dual sequences, improving the
results in the existing literature.
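
A minimal sketch of the dual gradient idea on one of the named special cases, the linearized Bregman algorithm (the augmented model here is basis pursuit perturbed by the strongly convex term $\frac{1}{2}\|x\|_2^2$; step size and data are illustrative):

```python
import numpy as np

def shrink(v, mu):
    """Soft-thresholding: the proximal map of mu*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(A, b, mu, iters=5000):
    """Dual gradient ascent for  min mu*||x||_1 + 0.5*||x||_2^2  s.t. Ax = b;
    the primal is recovered from the dual variable via the proximal map."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2       # step size < 2/||A||^2
    v = np.zeros(A.shape[1])
    for _ in range(iters):
        x = shrink(v, mu)
        v += tau * A.T @ (b - A @ x)            # dual gradient step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[:5] = 3 * rng.standard_normal(5)
x_hat = linearized_bregman(A, A @ x_true, mu=5.0)
print(np.linalg.norm(A @ x_hat - A @ x_true))   # feasibility gap -> 0
```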

http://arxiv.org/abs/1308.6337 2013/09/01 - 17:21

This paper analyzes circulant Johnson-Lindenstrauss (JL) embeddings which, as
an important class of structured random JL embeddings, are formed by
randomizing the column signs of a circulant matrix generated by a random
vector. With the help of recent decoupling techniques and matrix-valued
Bernstein inequalities, we obtain a new bound
$k=O(\epsilon^{-2}\log^{(1+\delta)} (n))$ for Gaussian circulant JL embeddings.
Moreover, by using the Laplace transform technique (also called Bernstein's
trick), we extend the result to the subgaussian case. The bounds in this paper
offer a small improvement over the current best bounds for Gaussian circulant
JL embeddings for certain parameter regimes and are derived using more direct
methods.
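
A minimal numerical sketch of the construction analysed above (dimensions illustrative): the circulant matrix generated by a Gaussian vector is applied via FFT after randomizing the signs of the coordinates, and the first k rows form the embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

def circulant_jl(x, k, g, signs):
    """Project x to k dims: randomize coordinate signs, apply the circulant
    matrix generated by g (circular convolution via FFT), keep k entries."""
    y = np.fft.ifft(np.fft.fft(g) * np.fft.fft(signs * x)).real
    return y[:k] / np.sqrt(k)

n, k = 1024, 64
g = rng.standard_normal(n)               # Gaussian generating vector
signs = rng.choice([-1.0, 1.0], size=n)  # random column signs
x = rng.standard_normal(n)
x /= np.linalg.norm(x)
print(np.linalg.norm(circulant_jl(x, k, g, signs)))  # ~1: norm preserved
```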

http://arxiv.org/abs/1308.6339 2013/09/01 - 17:21

We introduce a new algorithm to learn the potentials of a large class of
Markov Random Fields (MRFs) with exact and efficient inference. If every node
in the MRF has a bounded number of neighbors, the proposed algorithm has linear
complexity in the number of nodes in the MRF. For example, for a fully observed
square lattice MRF, parameter estimation using the junction tree inference
engine has exponential complexity, but our algorithm only has linear
complexity. Moreover, the algorithm is naturally parallel and hence applicable
to large scale data modeling. The algorithm applies to many practical MRFs of
great interest, including 2D and 3D lattices, chimera models, and skip-chain
conditional random fields.

http://arxiv.org/abs/1308.6342 2013/09/01 - 17:21