Journal of Computers

This paper presents an auto-recognition method for pointer-type meters based on computer vision. Two calibrated cameras, located on the right and left sides of the meter, capture images of the meter display. In each image, the fast Hough transform is used to locate the approximate pointer area, and the least squares fitting method is then used to determine the precise line that represents the pointer indicator. The two precise lines obtained from the right and left images are reconstructed in three-dimensional space according to the epipolar constraint. The reconstructed line is then projected onto the target plane to determine the true indication of the meter. Experiments show that the proposed method works very well for dial pointer identification. The maximum uncertainty in the determination of the pointer’s indication is less than the human eye can discriminate.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904787793 2014/04/21 - 13:49
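
A minimal sketch of the per-image step described in the abstract above: a probabilistic Hough transform finds the approximate pointer segment, then a least-squares line fit to nearby edge pixels refines the indicator. Function choices and parameter values here are illustrative assumptions, not the paper's implementation.

```python
import cv2
import numpy as np

def pointer_line(gray):
    """Return (slope, intercept) of the pointer indicator in one camera image."""
    edges = cv2.Canny(gray, 50, 150)
    # Probabilistic Hough transform yields candidate segments; keep the longest
    # one as the approximate pointer area.
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                               minLineLength=40, maxLineGap=5)
    if segments is None:
        raise ValueError("no pointer candidate found")
    x1, y1, x2, y2 = max(segments[:, 0, :],
                         key=lambda s: np.hypot(s[2] - s[0], s[3] - s[1]))
    # Least-squares refinement: fit a line to all edge pixels lying close to the
    # candidate segment (a near-vertical pointer would need the x = f(y) form).
    ys, xs = np.nonzero(edges)
    dist = np.abs((y2 - y1) * xs - (x2 - x1) * ys + x2 * y1 - y2 * x1) \
           / np.hypot(x2 - x1, y2 - y1)
    near = dist < 3.0
    slope, intercept = np.polyfit(xs[near], ys[near], 1)
    return slope, intercept
```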

In this paper, we propose a novel algorithm to solve the starvation problem of small jobs and reduce their processing time on the Hadoop platform. Current MapReduce/Hadoop schedulers are quite successful in achieving data locality and schedule the reduce tasks with a greedy algorithm. Some jobs may have hundreds of map tasks and just a few reduce tasks; in that case the reduce tasks of the large jobs wait longer, which causes the small jobs to starve. Since the map tasks and the reduce tasks are scheduled separately, we can change the way the scheduler launches reduce tasks without affecting the map phase. We therefore develop an optimized algorithm that schedules the reduce tasks according to the shortest remaining time (SRT) of the map tasks. We apply our algorithm to the fair scheduler and the capacity scheduler, both widely used in real production environments. The evaluation results show that the SRT algorithm can effectively decrease the processing time of small jobs.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904794801 2014/04/21 - 13:49
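
A rough sketch of the SRT idea from the abstract above, not the authors' Hadoop patch: when a reduce slot becomes free, the job whose map phase is estimated to finish soonest is preferred, so small jobs are not stuck behind large ones. The field names are hypothetical.

```python
def pick_job_for_reduce_slot(jobs):
    """Pick the job whose reduce tasks should be launched next.

    `jobs` is an iterable of objects with the hypothetical fields
    pending_maps, running_maps, avg_map_time and pending_reduces.
    """
    candidates = [j for j in jobs if j.pending_reduces > 0]
    if not candidates:
        return None

    def remaining_map_time(job):
        # Rough estimate: maps still to run (queued plus in flight) times the
        # observed average map duration.
        return (job.pending_maps + job.running_maps) * job.avg_map_time

    # Shortest remaining (map) time first, so small jobs stop starving.
    return min(candidates, key=remaining_map_time)
```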

There have been past attempts at building adaptation strategies with analytic models that use current environment information in a Cyber-Physical System (CPS) to keep the software architecture from deteriorating. However, research is still in its infancy on the closely related issue of how to take corrective self-adaptive actions to reconcile CPS system behavior with the variability of the operating environment. In particular, architects have almost no assistance in reasoning about questions such as: how should we stage the architectural evolution to improve self-adaptation and accommodate uncertainties in CPS? In this work, the specification of software architecture is extended using the CHAM (Chemical Abstract Machine) in the presence of uncertainty. The key benefits of our approach are that it leverages standard software architecture models and quantifies behaviors within the system in terms of the relevant architectural elements. Compared with the traditional model, the proposed method can arrange an optimized response sequence and adjust behaviors within the software system to adapt to new situations over time in CPS.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904802811 2014/04/21 - 13:49

Clustering is an important technique in machine learning that has been applied successfully in many applications such as text and webpage classification, but less so in transaction database classification. A large organization usually has many branches and accumulates a huge amount of data in its branch databases, called multi-databases. At present, the best way to mine multi-databases is first to classify them into different classes. In this paper, we redefine the related concepts of transaction database clustering and, in connection with traditional clustering methods, propose a strategy for clustering transaction databases based on k-means. To show that our strategy is effective and efficient, we implement the proposed algorithms. The results show that the k-means-based method of clustering transaction databases is better than existing methods.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904812816 2014/04/21 - 13:49
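
A minimal sketch, under an assumed representation, of how transaction databases might be clustered with k-means as the abstract above describes: each branch database is summarized as an item-support vector, and the vectors are clustered with scikit-learn.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_databases(databases, items, k):
    """databases: list of databases; each database is a list of transactions,
    and each transaction is an iterable of items."""
    index = {item: i for i, item in enumerate(items)}
    features = np.zeros((len(databases), len(items)))
    for row, db in enumerate(databases):
        for transaction in db:
            for item in transaction:
                features[row, index[item]] += 1
        features[row] /= max(len(db), 1)   # item support within this database
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
```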

With the development of ad hoc network technology, its mobility models have recently emerged as a premier research topic. Several mobility models for ad hoc networks are introduced in this paper, and a formula to calculate the connectivity of a mobile ad hoc network is proposed. Three of those mobility models were simulated in NS2. The relationship among the number of nodes, the range of nodes, and the connectivity of the ad hoc network was obtained by curve fitting with the Boltzmann function for the first time, which provides a judgment criterion for ad hoc networking.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904817821 2014/04/21 - 13:49
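
A small sketch of the curve-fitting step mentioned above: connectivity versus node count fitted with a Boltzmann function via SciPy. The data arrays are placeholders, not the paper's NS2 results.

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(x, a1, a2, x0, dx):
    # Standard Boltzmann sigmoid: a1/a2 are the lower/upper plateaus,
    # x0 the midpoint and dx the transition width.
    return a2 + (a1 - a2) / (1.0 + np.exp((x - x0) / dx))

nodes = np.array([10, 20, 30, 40, 60, 80, 100], dtype=float)      # placeholder data
connectivity = np.array([0.05, 0.2, 0.5, 0.75, 0.93, 0.98, 1.0])  # placeholder data
params, _ = curve_fit(boltzmann, nodes, connectivity, p0=[0.0, 1.0, 35.0, 10.0])
```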

This work studies heart failure medical cases in TCM (traditional Chinese medicine) to effectively mine association rules for differential diagnosis and treatment. TCM medical cases involve vast amounts of data with strong relatedness, and a new, improved firefly algorithm guided by normative knowledge is proposed to overcome the shortcomings of traditional association rule mining algorithms on TCM medical case data, such as low efficiency, slow convergence, and under-reporting of rules. The algorithm sets the support threshold through a penalty function and adaptively adjusts the search region using normative knowledge to improve its convergence rate and exploration ability; it uses random disturbance to perform the perturbation operation so as to increase population diversity and effectively avoid premature convergence. A confirmatory experiment on TCM medical cases for the treatment of heart failure has been conducted, and the results show that this method greatly improves individual diversity and the efficiency of effective rule extraction compared with traditional association rule mining algorithms; the mining results are of reference value for TCM clinical diagnosis and treatment of heart failure.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904822829 2014/04/21 - 13:49

Wireless sensor networks (WSNs) for environment monitoring typically carry heavy data transmission loads and have critical real-time requirements, so the time delay at relay nodes should be reduced. This paper models queue scheduling as a reinforcement learning process and presents a relay-node scheduling algorithm based on self-adaptive weighted learning. The presented algorithm schedules queues dynamically. Simulation results under two circumstances (sufficient bandwidth and limited bandwidth) show that the algorithm can improve real-time performance while maintaining fairness.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904830835 2014/04/21 - 13:49

In this paper, we propose a novel sparse coding algorithm that uses class labels to constrain the learning of the codebook and the sparse codes. We not only use the class labels to train the classifier, but also use them to construct class-conditional codewords to make the sparse codes as discriminative as possible. We first construct ideal sparse codes with respect to the class-conditional codewords, and then constrain the learned sparse codes toward these ideal sparse codes. We propose a novel loss function composed of the sparse reconstruction error, the classification error, and the ideal sparse code constraint error. This problem can be optimized with the traditional KSVD method. In this way, we can learn a discriminative classifier and a discriminative codebook simultaneously. Moreover, using this codebook, the learned sparse codes of the same class are similar to each other. Finally, exhaustive experimental results show that the proposed algorithm outperforms other sparse coding methods.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904836844 2014/04/21 - 13:49

Learning the enormous number of parameters is a challenging problem in model-based Bayesian reinforcement learning. To solve this problem, we propose a model-based factored Bayesian reinforcement learning (F-BRL) approach. F-BRL exploits a factored representation of states to reduce the number of parameters. Representing the conditional independence relationships between state features with dynamic Bayesian networks, F-BRL adopts a Bayesian inference method to learn the unknown structure and parameters of the Bayesian networks simultaneously. A point-based online value iteration approach is then used for planning and learning online. The experimental and simulation results show that the proposed approach can effectively reduce the number of learning parameters and enable online learning for dynamic systems with thousands of states.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904845850 2014/04/21 - 13:49

More and more mobile malware appears on the mobile internet and poses a great threat to mobile users. It is difficult for traditional signature-based anti-malware systems to detect polymorphic and metamorphic mobile malware. A mobile malware behavior analysis method based on behavior classification and self-learning data mining is proposed to detect the malicious network behavior of unknown or metamorphic mobile malware. A network behavior classification module divides the network behavior data of mobile malware into different categories according to behavior characteristics in the training and detection phases. Three types of network behavior data from mobile malware and normal network access are used to train a separate Naïve Bayesian classifier for each category. These classifiers analyze the corresponding type of network behavior to detect new or metamorphic mobile malware. An incremental self-learning method is adopted to gradually optimize the Naïve Bayesian classifiers for the different behaviors. The simulation results show that the Naïve Bayesian classifiers based on behavior classification achieve a better accuracy rate when analyzing mobile malware network behavior. Performance simulations also show that a network behavior analysis system based on the proposed method can analyze mobile malware on the mobile internet in real time.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904851858 2014/04/21 - 13:49
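
A simplified sketch of the classification scheme in the abstract above: one Naïve Bayesian classifier per network-behavior category, updated incrementally as new labelled traffic arrives. The category names and feature encoding are assumptions.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

# One classifier per behavior category (category names are assumptions).
classifiers = {cat: MultinomialNB() for cat in ("dns", "http", "tcp_conn")}

def train_increment(category, features, labels):
    # partial_fit supports the incremental self-learning step; the class set
    # (0 = benign, 1 = malicious) must be declared on the first call.
    classifiers[category].partial_fit(features, labels, classes=np.array([0, 1]))

def is_malicious(category, feature_vector):
    return bool(classifiers[category].predict(feature_vector.reshape(1, -1))[0])
```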

A distributed implementation of Dang's fixed-point algorithm is proposed for finding a Nash equilibrium of a finite n-person game in normal form. In this paper, the problem consists of two subproblems. One is transforming the problem into a mixed 0-1 linear programming form; this transformation is derived from the properties of pure strategies and of the multilinear terms in the payoff function. The other subproblem is solving the 0-1 linear program generated in the first subproblem. A distributed computation network based on Dang's fixed-point method is built to solve this 0-1 linear program. Numerical results show that this distributed computation network is effective in finding a pure-strategy Nash equilibrium of a finite n-person game in normal form and that it can easily be extended to other NP-hard problems.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904859866 2014/04/21 - 13:49
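
For orientation only, and not the paper's fixed-point or 0-1 programming formulation: a brute-force check of what a pure-strategy Nash equilibrium means for a two-player game given as payoff matrices.

```python
import numpy as np
from itertools import product

def pure_nash(A, B):
    """All pure-strategy equilibria of the bimatrix game (A for rows, B for columns)."""
    A, B = np.asarray(A), np.asarray(B)
    equilibria = []
    for i, j in product(range(A.shape[0]), range(A.shape[1])):
        # (i, j) is an equilibrium if neither player gains by deviating unilaterally.
        if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
            equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma: mutual defection (1, 1) is the unique pure equilibrium.
A = [[3, 0], [5, 1]]
B = [[3, 5], [0, 1]]
print(pure_nash(A, B))   # [(1, 1)]
```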

To study the recognition of Chinese number and quantifier prefixes (CNQPs) in Chinese-English machine translation, used to improve the results of a statistical parser, this paper proposes a rule-based recognition method for CNQPs that is independent of word segmentation. First, it analyzes the components of CNQPs and offers samples of each component. In addition, it supplies Backus-Naur Forms (BNF) to express the more complex components, which are composed of other, smaller components. It then gives ten production rules for the recognition of CNQPs, and in the appendix lists 277 words for substance quantifiers, 17 for action quantifiers, 13 for time quantifiers, and 129 for measurement quantifiers, which are important resources for the recognition method. Afterwards, it describes the algorithm and illustrates the processing flow, which uses the components, the BNFs, and the ten rules. To avoid word segmentation noise, the algorithm takes the numeral as the activating information and uses a forward maximum matching method to obtain the compositions of the CNQPs, which can be fed into the Chinese parser to enhance the parsing results. The experimental results indicate that the proposed method can be integrated into the statistical parser as a pre-processing module without retraining on manually constructed experimental data, which can further boost translation quality.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904867874 2014/04/21 - 13:49
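
A minimal sketch of the forward maximum matching step described above: starting after a numeral, the longest lexicon entry is matched greedily. The lexicon here is a tiny illustrative sample, not the paper's appendix word lists.

```python
# Tiny illustrative lexicon of quantifier words (not the paper's appendix lists).
QUANTIFIERS = {"个", "本", "公斤", "小时", "次"}
MAX_LEN = max(len(w) for w in QUANTIFIERS)

def match_quantifier(sentence, start):
    """Return the longest quantifier beginning at `start`, or None."""
    for length in range(min(MAX_LEN, len(sentence) - start), 0, -1):
        candidate = sentence[start:start + length]
        if candidate in QUANTIFIERS:
            return candidate
    return None
```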

Rural power networks play a very important role in China's power grid. Rural networks built in earlier years are usually not standardized, and therefore many problems have emerged: much of the transmission line is aged, the insulation of the lines has degraded, household voltage is relatively low, distribution network line loss is very high, and so on. As the rural power network is mainly distributed over hilly and mountainous terrain, the altitude of the load points has an important impact on the location of the distribution transformer. Meanwhile, because of sag, the length of the transmission line is not the straight-line distance. Therefore, this paper modifies the traditional optimization model used to locate the distribution transformer, introduces a line correction coefficient (α) related to the sag, improves the line distance formula according to the altitude of the load points, and puts forward the modified optimization model, called M-TLOM. An optimized distribution transformer location system (DTLOS 1.0) based on Visual Basic 6.0 was then developed. The case analysis indicates that the modified model can reduce distribution network losses.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904875882 2014/04/21 - 13:49
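
One plausible form of the corrected objective, sketched here only to make the idea concrete; the notation is assumed and not taken from the paper. The line length to load point i at altitude h_i is the three-dimensional distance scaled by the sag coefficient α, and the transformer location (x_T, y_T) minimizes the load-weighted sum of corrected line lengths, with P_i the load at point i.

```latex
L_i = \alpha \sqrt{(x_i - x_T)^2 + (y_i - y_T)^2 + (h_i - h_T)^2},
\qquad
\min_{(x_T,\, y_T)} \sum_i P_i \, L_i
```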

Optimization has been a basic tool in all areas of applied mathematics, engineering, medicine, economics, and other sciences. Much attention has been devoted in recent years to developing iterative methods for solving nonlinear equations. New algorithms and theoretical techniques have been developed, and their diffusion into other disciplines has proceeded at a rapid pace. One of the most striking trends in optimization is the constantly increasing emphasis on the interdisciplinary nature of the field. Among the wide variety of papers published in recent years, there has been some progress on multi-step methods. These multi-step methods have been suggested by combining the well-known Newton's method with other methods. In this work, we develop a simple yet practical algorithm for solving nonlinear optimization problems by solving nonlinear equations with good local convergence. The algorithm uses continued fraction interpolation, which can be easily implemented in software packages to achieve the desired convergence order. For the general $n$-point formula, the order of convergence of the presented algorithm is $\tau_n$, the unique positive root of the equation $x^n-x^{n-1}-\cdots-x-1=0$. Computational results confirm that the developed algorithm is efficient and demonstrates equal or better performance compared with other well-known methods.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904883890 2014/04/21 - 13:49
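
A short worked note on the convergence order quoted above: since $\tau_n$ is the unique positive root of $x^n - x^{n-1} - \cdots - x - 1 = 0$, the first few orders follow directly.

```latex
n = 2:\; x^2 - x - 1 = 0 \;\Rightarrow\; \tau_2 = \tfrac{1+\sqrt{5}}{2} \approx 1.618 \ (\text{golden ratio});
\qquad
n = 3:\; x^3 - x^2 - x - 1 = 0 \;\Rightarrow\; \tau_3 \approx 1.839;
\qquad
\tau_n \to 2 \ \text{as} \ n \to \infty.
```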

In this paper, the region of the number of equilibrium points of every cell in cellular neural networks (CNNs) with a negative-slope activation function is characterized through the relationships between the network parameters. Some sufficient conditions are obtained using the relationships among the connection weights, and three theorems and a corollary are obtained with our new methods. Depending on these sufficient conditions and on the inputs and outputs of a CNN, the regions of the parameter values can be obtained. Some numerical simulations are presented to support the effectiveness of the theoretical analysis.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904891895 2014/04/21 - 13:49

To improve the spatial resolution of low-resolution images affected by Gaussian blur and salt-and-pepper noise, a blind single-image super-resolution reconstruction method is proposed. The low-resolution imaging model accounts for Gaussian blur, down-sampling, and salt-and-pepper noise. First, the salt-and-pepper noise in the low-resolution image is reduced by median filtering. Then, the Gaussian blur of the de-noised image is estimated by an error-parameter analysis method. Finally, super-resolution reconstruction is carried out with the iterative back projection algorithm. Experimental results show that the Gaussian blur is estimated with high accuracy and the salt-and-pepper noise is removed effectively. The visual quality and peak signal-to-noise ratio (PSNR) of the reconstructed super-resolution image are improved. In addition, the importance of Gaussian blur in single-image super-resolution reconstruction is justified experimentally.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904896902 2014/04/21 - 13:49
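
A compact sketch of the reconstruction chain described above, with assumed parameter values: median filtering for the salt-and-pepper noise, then iterative back projection (IBP) with the estimated Gaussian blur.

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter, zoom

def ibp_super_resolve(low_res, scale=2, sigma=1.5, iterations=20, step=0.2):
    low_res = median_filter(low_res, size=3)       # remove salt-and-pepper noise
    high_res = zoom(low_res, scale, order=3)       # initial high-resolution estimate
    for _ in range(iterations):
        # Simulate the imaging model: blur with the estimated Gaussian kernel,
        # then down-sample back to the low-resolution grid.
        simulated = zoom(gaussian_filter(high_res, sigma), 1.0 / scale, order=3)
        error = low_res - simulated[:low_res.shape[0], :low_res.shape[1]]
        # Back-project the residual into the high-resolution estimate.
        high_res += step * zoom(error, scale, order=3)
    return high_res
```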

As ideal tools, autonomous underwater vehicles (AUVs) are used to carry out underwater monitoring missions in place of humans. Because the captured underwater images are mostly close-range, mosaicing and fusion are applicable for creating large visual representations of the sea floor. A novel mosaicing and fusion approach based on a weighted aggregation energy threshold using the biorthogonal wavelet transform is proposed to improve the quality and contrast of underwater images. First, using the phase correlation method, the overlapped areas of the source images are determined and then decomposed with the biorthogonal wavelet transform. Second, the overlapped areas are fused and reconstructed using low-frequency and high-frequency image fusion algorithms, yielding a mosaiced image with clear contrast. Finally, fusion performance evaluations are carried out to assess the image qualitatively and quantitatively.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904903907 2014/04/21 - 13:49

Speckle noise in interferometric synthetic aperture radar (InSAR) phase images seriously degrades the quality of the interferogram, prevents the interferogram from reflecting the accurate phase characteristics of the target, and increases the difficulty of extracting DEM information for the target area. Therefore, reducing speckle noise by interferogram filtering is a significant step in InSAR processing. First, a noisy interferometric SAR phase image is simulated based on a terrain model and the geometric parameters of the InSAR system; the phase image can be characterized by the multilook phase distribution. Then, three interferogram filtering algorithms are explored to remove speckle noise: the Goldstein filter, the rotating kernel transformation, and the Lee filter. Proper implementations of the three methods are given. Based on the experimental results, the performance of the three methods is compared. Two aspects need to be considered together in the noise reduction process: the accuracy required in the practical application and the processing duration. Second-pass or multiple combined noise reduction is also highly recommended.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904908915 2014/04/21 - 13:49

In burst communication systems such as Body Area Networks (BAN), burst packets must be detected prior to demodulation of the signal at the receiver. Meanwhile, different modulation and demodulation algorithms may seriously affect the BER performance of the entire system. To satisfy the requirement that BAN nodes or devices be small in size and very low in power consumption, the design of the modulation and packet detection algorithms is very important. Referring to the IEEE 802.15.6 BAN standard, a theoretical analysis of modulation and demodulation for π/2-DBPSK and π/4-DQPSK is carried out first. The two modulations are realized with two different two-step lookup tables and then unified by combining the two phase-increment tables. Using the special structure of the π/2-DBPSK-modulated preamble, a packet detection algorithm with low complexity is also carefully designed. Finally, BER simulation results are presented. In the AWGN channel both modulations achieve good performance, while π/4-DQPSK degrades in multipath channels and may not satisfy the requirements. Adaptive modulation, which chooses the modulation scheme according to channel quality, is considered to solve this problem.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904916921 2014/04/21 - 13:49
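
A generic lookup-table differential modulator in the spirit of the abstract above. The phase-increment tables here are illustrative assumptions only; they are not copied from the IEEE 802.15.6 specification.

```python
import numpy as np

# Illustrative phase-increment tables (assumed mappings, not the 802.15.6 tables).
PI2_DBPSK = {0: np.pi / 2, 1: 3 * np.pi / 2}                 # 1 bit per symbol
PI4_DQPSK = {0: np.pi / 4, 1: 3 * np.pi / 4,                 # 2 bits per symbol
             2: 7 * np.pi / 4, 3: 5 * np.pi / 4}

def modulate(bits, table, bits_per_symbol):
    """Differentially modulate a bit sequence (length assumed to be a multiple
    of bits_per_symbol) by accumulating tabulated phase increments."""
    phase, symbols = 0.0, []
    for k in range(0, len(bits), bits_per_symbol):
        index = int("".join(str(b) for b in bits[k:k + bits_per_symbol]), 2)
        phase = (phase + table[index]) % (2 * np.pi)
        symbols.append(np.exp(1j * phase))
    return np.array(symbols)

# Demodulation needs only the phase difference between consecutive symbols,
# so no carrier phase tracking is required.
```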

In order to obtain more complete and continuous edge information from an image, image enhancement methods and the Hessian matrix are used in the edge detection process. Based on the gradient information of color images, pseudo-color edges are obtained using multi-channel edge detection. The edge information is then enhanced and the correlation removed to obtain complete edge information. Finally, the Hessian matrix is used to remove edges arising from redundant background texture as well as coarse edges, so that the edge information is more continuous and smooth. Two experiments are conducted to verify the effectiveness of the proposed method; the software is developed in MATLAB. The experiments confirm that the result of the proposed detection is more continuous and clearer and contains more details. Furthermore, a good balance between the integrity and accuracy of edge detection is achieved as well.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904922929 2014/04/21 - 13:49

The PAC-Bayes risk bound, which integrates the Bayesian paradigm and structural risk minimization for stochastic classifiers, has been considered a framework for deriving some of the tightest generalization bounds. A major issue in the practical use of this bound is the estimation of the unknown prior and posterior distributions over the concept space. In this paper, by formulating the concept space as a Reproducing Kernel Hilbert Space (RKHS) using the kernel method, we propose a refined Markov Chain Monte Carlo (MCMC) sampling algorithm that incorporates feedback information from the simulated model over the training examples to simulate posterior distributions of the concept space. Furthermore, we use a kernel density method to estimate the probability distributions when calculating the Kullback-Leibler divergence between the posterior and prior distributions. The experimental results on two artificial data sets show that the simulation is reasonable and effective in practice.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904930937 2014/04/21 - 13:49

We analyze a recent lightweight RFID authentication protocol, the EOHLCAP scheme. Our analysis shows, however, that a new traceability attack algorithm renders the scheme insecure if the attacker is able to distinguish the target tag. Some other lightweight authentication protocols are susceptible to a similar traceability attack, so we propose a new traceability attack algorithm and show how it can be applied to these older schemes. To counteract this security issue, we revise the EOHLCAP scheme with a rotation operation and show that the proposed model satisfies the indistinguishability property. Finally, we introduce the revised protocol, the EOHLCAP+ scheme, which meets the security requirement and resists the tracing attack.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904938946 2014/04/21 - 13:49

The efficiency and performance of Twin Support Vector Machines (TWSVM) are better than those of traditional support vector machines. However, TWSVM also has some problems: as with traditional support vector machines, its parameters are difficult to set and the appropriate kernel function is not easy to select. TWSVM generally uses the Gaussian radial basis kernel function; although its learning ability is very strong, its generalization ability is relatively weak, which limits the performance of TWSVM to a certain extent. To solve these two problems, we propose in this paper the mixed-kernel Twin Support Vector Machine based on the shuffled frog leaping algorithm (SFLA-MK-TWSVM). To make full use of both the excellent generalization ability of global kernel functions and the learning ability of local kernel functions, SFLA-MK-TWSVM constructs a mixed kernel with better performance. SFLA-MK-TWSVM then uses the shuffled frog leaping algorithm to determine the parameters of both TWSVM and the mixed kernel function to further improve the performance of TWSVM. The experimental results indicate that SFLA-MK-TWSVM significantly improves the classification accuracy of TWSVM.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904947955 2014/04/21 - 13:49
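
A small sketch of the mixed-kernel idea described above: a convex combination of a local (RBF) kernel and a global (polynomial) kernel. The weight lam, gamma, and degree stand in for the parameters that the shuffled frog leaping algorithm would tune.

```python
import numpy as np

def mixed_kernel(X, Y, lam=0.6, gamma=0.5, degree=2, coef0=1.0):
    """Convex combination of an RBF (local) and a polynomial (global) kernel."""
    sq = (np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)                       # squared Euclidean distances
    rbf = np.exp(-gamma * sq)                    # local kernel: strong fitting ability
    poly = (X @ Y.T + coef0) ** degree           # global kernel: better generalization
    return lam * rbf + (1.0 - lam) * poly        # convex mix stays positive semi-definite
```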

An indicator system and indicator normalization method for the condition assessment of transformers are developed, and membership functions of the indicators are established. The expert weight is determined jointly by a subjective weight and an objective weight based on the entropy weight concept, while the indicator weight is obtained from the weight derived from the standard Analytic Hierarchy Process together with the expert weight. A comprehensive transformer condition assessment model based on the OWA (Ordered Weighted Averaging) operator and fuzzy assessment is proposed and organized in two layers: fuzzy aggregation is applied to the second-layer sub-indicators to obtain the fuzzy membership of the first-layer main indicators, while the OWA operator aggregates the main indicators of the first layer into the final comprehensive condition assessment of the transformer. To fully account for the impact of indicator weights and memberships on the assessment result, a fuzzy aggregation conversion function based on the OWA operator is introduced into the model, so as to integrate attribute information from the various important information sources. Case analysis indicates that the condition assessment of a transformer can be carried out with this method; the conclusion is reasonable, objective, and close to the true condition of the transformer.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904956965 2014/04/21 - 13:49

To make the glowworm swarm optimization (GSO) algorithm solve multi-extremum global optimization problems more effectively, and taking into consideration the disadvantages and some unique advantages of GSO, this paper proposes a hybrid of the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm and GSO, i.e., BFGS-GSO, obtained by adding a BFGS local optimization operator; it effectively addresses problems such as unsatisfactory solution precision and slow convergence in the later stage. The effectiveness of the algorithm is tested on eight standard test functions. The results show that the improved BFGS-GSO achieves better multi-extremum global optimization than the basic GSO.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904966973 2014/04/21 - 13:49

LT codes, the first universal erasure-correcting codes, have near-optimal performance over binary erasure channels for any erasure probability, but exhibit a high bit error rate and an error floor over noisy channels. This paper investigates the performance of LT codes over additive white Gaussian noise channels. We design systematic LT codes by reconstructing the bipartite graph and propose a modified encoding scheme for the systematic LT codes that eliminates cycles in the generator matrix. With the proposed encoding scheme, the systematic LT codes are almost left-regular. Consequently, two types of systematic LT codes, left-regular right-regular and left-regular right-irregular, are considered from the perspective of bit error rate. For the left-regular right-irregular LT code, we modify the degree distributions and propose three kinds of check-node degree distributions. We then analyze the performance of these systematic LT codes with the proposed encoding scheme and the different degree distributions. Simulation results show that the systematic LT codes with the proposed encoding scheme outperform conventional LT codes, and their bit error rate declines by more than one order of magnitude compared with that of conventional LT codes. Finally, we propose a class of concatenated codes that serially concatenate conventional LT codes with systematic LT codes adopting the proposed encoding scheme; the performance of the proposed concatenated code is evaluated through simulations.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904974981 2014/04/21 - 13:49

The blind source separation problem has recently received a great deal of attention in signal processing and unsupervised neural learning. In current approaches, the additive noise is assumed to be negligible and is omitted from consideration. To be applicable in realistic scenarios, blind source separation approaches should cope with the presence of noise. In this contribution, we propose approaches to independent component analysis for the case where the measured signals are contaminated by additive noise. A noisy multi-channel neural learning algorithm for blind separation is proposed based on independent component analysis. Noise-free data are used to whiten the noisy data, a bias-removal technique is used to correct the influence of the noise, and a neural network model with denoising capability is adopted to recover the original signals from their noisy mixtures observed by the same number of sensors. A relaxation factor is introduced into the iterative algorithm so that the new algorithm converges. Computer simulations and experimental results prove the feasibility and validity of the neural network modeling and control method based on independent component analysis, which can recover the original images effectively.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904982989 2014/04/21 - 13:49

In order to suppress image noise effectively and extract edges clearly and accurately, a novel edge detection approach based on omni-directional and multi-scale mathematical morphology is proposed in this paper. To compare different object geometries, a modified Hausdorff distance measure is employed. Using a new moment rule, opening and closing operations with large-scale structuring elements are adopted to suppress noise, while small-scale structuring elements are used for edge extraction. Finally, the edge maps acquired in different directions are combined according to different weights. The experimental results show that the improved algorithm extracts edge information much better than several reference algorithms.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0904990997 2014/04/21 - 13:49
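
A minimal sketch of the multi-scale morphological step described above, with one simplification: isotropic structuring elements are used here instead of the paper's directional ones. Large elements suppress noise via opening and closing, the morphological gradient extracts edges, and the scales are combined with weights.

```python
import cv2
import numpy as np

def multiscale_morph_edges(gray, scales=(3, 5, 7), weights=(0.5, 0.3, 0.2)):
    edge = np.zeros(gray.shape, dtype=np.float64)
    for size, w in zip(scales, weights):
        se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (size, size))
        # Opening then closing suppresses bright and dark noise at this scale.
        smoothed = cv2.morphologyEx(cv2.morphologyEx(gray, cv2.MORPH_OPEN, se),
                                    cv2.MORPH_CLOSE, se)
        # Morphological gradient (dilation minus erosion) extracts the edges.
        gradient = cv2.morphologyEx(smoothed, cv2.MORPH_GRADIENT, se)
        edge += w * gradient.astype(np.float64)
    return cv2.normalize(edge, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```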

Speech is one of the major means of communication and carries not only semantic but also personal information, such as gender and emotion. Research on speech emotion has become more and more important to human-computer interaction. To this end, long-term and short-term emotional features are extracted from speech, and their dimensionality is reduced with the multilinear PCA algorithm. Finally, kernel partial least squares regression is used for speech emotion recognition. The results show that, in comparison with other current classifiers, the proposed algorithm improves recognition rates by about 6% to 23%.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp09049981004 2014/04/21 - 13:49

Virtualization is a key enabling technology in cloud computing: multiple tenants can share a cloud provider's computing resources on demand. While sharing can reduce the cost of computing, it also introduces security vulnerabilities, since the isolation between different VMs can be violated through side-channel attacks. Recent research points out that, by leveraging memory bus contention, two colluding malware instances in different VMs (but on the same host) may use the variation in memory access latency as a covert channel to deliver security-critical information, such as user passwords or credit card numbers, bypassing the access control policies enforced by the guest OS or even the hypervisor. The bandwidth of such a covert channel can reach hundreds of kilobytes per second, fast enough to transfer large data objects. In this paper we propose a covert-channel-aware scheduler that treats security as a first-class concern to mitigate such side-channel attacks. The scheduler is able to control the overlap in execution time of different VMs and can also inject noise periodically to mitigate the threat of potential side channels. We have built a prototype of the proposed scheduler that enables overlap control and noise injection. The performance evaluations show that the overhead introduced is acceptable. Meanwhile, the new scheduler lets the user dynamically configure the scheduling parameters to adapt to diverse circumstances and strike a balance between performance and security.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp090410051013 2014/04/21 - 13:49

The rapid development of mobile devices and the mobile Internet increases the demand for location-based services. In outdoor environments, mobile devices can obtain precise positioning through GPS localization, while in indoor environments it is difficult to receive GPS signals. Researchers have proposed many methods that produce relative rather than physical positions. In this paper, we propose a new localization approach called OA-Loc, which uses the GPS information of outdoor mobile devices as precise reference points; with the RSSI between outdoor and indoor devices, it helps the latter obtain a physical position. We analyze the working principle of the approach in detail, provide a detailed design, implement our technique on mobile devices, and evaluate it in real-world scenarios. The results show that our approach has high feasibility and decent accuracy.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp090410141019 2014/04/21 - 13:49
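
A rough sketch of the OA-Loc idea above under assumed constants: outdoor devices with GPS serve as reference points, RSSI is converted to distance with a log-distance path-loss model, and the indoor position is solved by linear least squares.

```python
import numpy as np

def rssi_to_distance(rssi, tx_power=-45.0, path_loss_exp=2.5):
    # Log-distance model: rssi = tx_power - 10 * n * log10(d); constants assumed.
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exp))

def locate(anchors, rssi_values):
    """anchors: (k, 2) array of GPS reference coordinates, k >= 3;
    rssi_values: length-k array of RSSI readings from those anchors."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.array([rssi_to_distance(r) for r in rssi_values])
    # Linearise the circle equations by subtracting the last anchor's equation.
    x_n, y_n = anchors[-1]
    A = 2.0 * (anchors[:-1] - anchors[-1])
    b = (d[-1] ** 2 - d[:-1] ** 2
         + np.sum(anchors[:-1] ** 2, axis=1) - (x_n ** 2 + y_n ** 2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position
```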

To cope with the complexity of construction engineering cost evaluation, the advantages of rough set theory, the particle swarm algorithm, and the BP neural network are integrated into a new cost evaluation model: a particle-swarm-optimized BP neural network based on rough set theory. First, rough set theory is used to reduce the factors affecting construction engineering cost and to optimize the input variables of the BP neural network. Then, an improved particle swarm algorithm with constriction factors is adopted to optimize the initial weights and thresholds. In this way, the BP neural network can better solve nonlinear problems, with an improved rate of convergence and ability to find the global optimum. An engineering project in a city of Hunan is selected for empirical analysis. The analysis shows that, based on the features of the project, the new model has high practical value, as it can be applied to evaluate construction engineering costs scientifically.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp090410201025 2014/04/21 - 13:49

Pipeline systems are vital infrastructure for a national economy and are widely used for transporting liquids and gases such as oil, natural gas, water, and chemical materials. However, effective and efficient management of pipeline systems is challenging, due to their sheer length and the diverse deployment environments. A wireless sensor network (WSN) consists of a large number of sensors that can automatically and continuously collect and transmit monitoring data, and it can thus enable effective and timely management of pipeline systems. Successful WSNs rely on the deployment of sensor nodes. Most current research assumes that sensor nodes are deployed on a two-dimensional plane; in reality, however, sensor nodes deployed on a pipeline surface exist in three-dimensional space. In this paper, we present an optimized 3D deployment model of a WSN tailored to pipeline systems. The model is based on analyzing the various relationships between the sensing ranges of sensor nodes and pipeline radii. We also provide an efficient deployment algorithm based on the model. Empirical simulation results show that the proposed model and algorithm provide both theoretical guidance and a practical basis for the three-dimensional deployment of sensor nodes in pipeline systems.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp090410261032 2014/04/21 - 13:49

Traditional fuzzy clustering algorithms are suitable for noiseless image segmentation but perform poorly on images with noise, outliers, and defects. An algorithm combining spatial fuzzy clustering and level sets for indoor scene segmentation is proposed in this paper. First, the image is classified using fuzzy clustering with spatial information to obtain a larger difference in image gray level; second, the image is segmented using a level set; finally, the contour at the boundary of the target area is obtained accurately. The improved method not only preserves image details but also reduces the number of iterations. The results show that the proposed method achieves good segmentation quality and efficiency for indoor scene images.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp090410331039 2014/04/21 - 13:49

This paper presents an efficient technique to hide text information in 3D medical images such as MRI and PET. It embeds text information only in non-anatomical pixels, ensuring that no anatomical part of the 3D medical image is contaminated and that 100% of the data can be retrieved. The technique has been tested on several MRI and PET images.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0903513518 2014/03/19 - 01:01

The Network-on-Chip (NoC) approach for designing Systems-on-Chip (SoCs) is currently used to overcome the scalability and efficiency problems of traditional on-chip interconnection schemes, such as shared buses and point-to-point links. NoC design draws on concepts from computer networks to interconnect Intellectual Property (IP) cores in a structured and scalable way, promoting design re-use. This paper presents the design and evaluation of a parameterizable NoC router for FPGAs. Low area overhead is pivotal for NoC components in FPGAs, which have limited logic and routing resources. We obtain a low-area router design by applying optimizations in the switching fabric and dual-purpose buffer/connection signals. We use store-and-forward flow control with input and output buffering. We offer a component library to increase re-use and allow tailoring of parameters for application-specific NoCs of various sizes. The proposed router supports the mesh architecture, which is well known for its scalability and simple XY routing algorithm. We present IP-core-to-router mapping strategies for multi-local-port routers that provide ample opportunity to optimize the NoC for application-specific data traffic. A set of experiments was conducted to explore the design space of the proposed NoC router using different values of key router parameters: channel width (flit size), arbitration scheme, and IP-core-to-router mapping strategy. Area and latency results from the experiments are presented and analyzed. These results will be useful to designers who want to implement NoCs on FPGAs.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0903519528 2014/03/19 - 01:01

k-Exclusion is a generalization of Mutual Exclusion that allows up to k processes to be in the critical section concurrently. Starvation Freedom and First-In-First-Enabled (FIFE) are two desirable progress and fairness properties of k-Exclusion algorithms. We present the first known bounded-space k-Exclusion algorithm that uses only atomic reads and writes, satisfies Starvation Freedom, and has a bounded Remote Memory Reference (RMR) complexity. Our algorithm also satisfies FIFE, and has an RMR complexity of O(n) in both the cache-coherent and distributed shared memory models.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0903529536 2014/03/19 - 01:01

Scientific data sharing has been an important area of research for several years. However, due to the sharp increase in scientific data, existing scientific data sharing systems are becoming complicated and cannot meet the demands of current scientific communication. In this paper, the realization of a material scientific data cloud is introduced to manage large-scale scientific data resources. We provide a framework that improves integration and data sharing capability through an interconnecting agent system and a unified environment for data mapping and integration. We have realized a prototype of the material scientific data cloud and demonstrate its effectiveness and practicality in the implementation of a material scientific data sharing project.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0903537542 2014/03/19 - 01:01

Design ontology plays an important role in modern multi-party collaborations and knowledge-intensive designs. In this paper, a product design ontology system is proposed by analyzing and deconstructing the design function–behavior–structure (FBS) model and geometry application programming interfaces (APIs). Geometry APIs and surface behavior are considered minimum structure units and concrete function realization, respectively. APIs and their surface behavior are defined to build the semantic connection among functions, behavior, and structures and refine further the ontology of FBS design. According to some inference rules, design ontology can be used to obtain automatically or semi-automatically the preliminary product function and structure. The studied cases prove that the representation of the design knowledge based on the FBS-API ontology overcomes the limit of mere word conventions and is more conducive to the quantization and derivation of product design.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0903543550 2014/03/19 - 01:01

A new link prediction method using an active learning technique, named HALLP, is proposed in this paper. The method presents the user with the most useful examples from the large number of unlabeled examples (i.e., unlinked node pairs in the network) for querying. Once labeled by the user, these examples are fed to the learner to improve the link predictor in the next round. The utility of an example is determined by its uncertainty measure, calculated from both its local structure and its hierarchical structure in the network. Experiments indicate that the link prediction method can be improved with active learning techniques and that both the local structure and the global structure are beneficial for selecting examples with high utility.

http://ojs.academypublisher.com/index.php/jcp/article/view/jcp0903551556 2014/03/19 - 01:01