Journal of Computers

Networking and distributed technologies are among the most vital parts of IT today and for the future. As the next generation of the Internet takes shape and mobile systems move to 3G and, eventually, 4G, there is a clear trend toward transforming legacy software into Internet applications. Following the successful International Conferences on Networking and Distributed Computing (ICNDC2010, ICNDC2011, ICNDC2012), which were technically sponsored by the IEEE Computer Society, selected papers with significant modifications are published in this special issue of the Journal of Computers. After peer review and revisions based on the reviewers' feedback, the guest editor accepted five papers for the special issue:

1. Qiming Fang and Guangwen Yang, in their paper "Efficient Top-k Query Processing Algorithms in Highly Distributed Environments", address the problem of efficient query processing over vertically partitioned data in highly distributed environments and propose two novel algorithms, BulkDBPA and 4RUT, to improve existing solutions.

2. Xiaoke Jiang, Jun Bi and You Wang, in their paper "MCBS: Matrix Computation Based Simulator of NDN", present a lightweight matrix-computation-based simulator of Named Data Networking (NDN) that is more convenient to use than the Network Simulator and the CCNx Project.

3. Bishan Ying, Pingping Zhu and Ye Gu, in their paper "A Fast Searching Approach for Top-k Partner Selection in Dynamic Alliances of Virtual Enterprises", introduce top-k optimization into partner selection, classify the original indexes used in supply chain management, and present the Optimize Procedure (OP) and Improved Optimize Procedure (IMOP) algorithms based on specific indexes.

4. Guanlin Chen, Shengquan Li, Xiaoyang Shen, Yujia Zhang and Gang Chen, in their paper "A Public Opinion Analysis System for Urban Management Information", design a public opinion analysis system for urban management information (POASUMI), which implements public opinion information acquisition, Chinese word segmentation, and sentiment and hotspot analysis and statistics for public opinion information.

5. Jie Hu, Huaxiong Zhang, Yu Zhang, Jie Feng and Hanjie Ma, in their paper "A Scale Adaptive Method Based on Quaternion Correlation in Object Tracking", propose a scale-adaptive Kalman filter algorithm based on the quaternion correlation of color images.

We are especially grateful to the authors who submitted their papers to this special issue. We express our deepest gratitude to the reviewers for their careful reviews and valuable suggestions, which improved the quality of the final result. We thank Professor Prabhat Mahanti and the Editorial Board of the Journal of Computers for their exceptional effort throughout this process. We thank all the people whose dedication ensured a good selection of articles and made this special issue possible. Finally, we sincerely hope that you will enjoy reading this special issue. 2014/08/15 - 13:03

Efficient top-k query processing in highly distributed environments is a valuable but challenging research topic. This paper focuses on the problem over vertically partitioned data and aims to propose more efficient algorithms. The effort is put into limiting the data transferred and the number of communication round trips among nodes, so as to reduce the communication cost of query processing. Two novel algorithms, BulkDBPA and 4RUT, are proposed. BulkDBPA is derived from the centralized algorithm BPA2, which requires very few data accesses; it borrows the idea of best position from BPA2 and therefore transfers little data, and it further reduces the communication round trips through bulk read and bulk transfer mechanisms. 4RUT is inspired by the algorithm TPUT, which needs only three communication round trips to obtain the exact top-k results; 4RUT improves the top-k lower-bound estimate by introducing one additional communication round trip, which in turn reduces the data transferred during query processing. Experimental results show that both BulkDBPA and 4RUT require much less data transfer and response time than competitors such as the Simple Algorithm and TPUT, and that each has its own suitable application environments.
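As context for the lower-bound idea that 4RUT refines, a minimal sketch of TPUT's first phase may help. The data and function name below are illustrative, not taken from the paper: each node reports its local top-k, the coordinator sums the partial scores per item, and the k-th largest partial sum is a lower bound on the true top-k score.

```python
def tput_lower_bound(node_lists, k):
    """Phase 1 of TPUT: each node reports its local top-k (item, score)
    pairs; the coordinator sums partial scores per item and takes the
    k-th largest partial sum as a lower bound tau on the top-k score."""
    partial = {}
    for lst in node_lists:
        for item, score in sorted(lst.items(), key=lambda kv: -kv[1])[:k]:
            partial[item] = partial.get(item, 0.0) + score
    sums = sorted(partial.values(), reverse=True)
    return sums[k - 1] if len(sums) >= k else 0.0

# Any item scoring below tau / m on every one of the m nodes cannot
# reach the top-k and need not be transferred in phase 2.
node_lists = [
    {"a": 9.0, "b": 7.0, "c": 1.0},
    {"a": 8.0, "c": 6.0, "b": 2.0},
]
tau = tput_lower_bound(node_lists, k=2)
```

4RUT's extra round trip, per the abstract, tightens exactly this bound before the pruning phase, so fewer items cross the threshold.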

This paper presents a lightweight matrix-computation-based simulator of Named Data Networking (NDN). The simulator treats the experimental network as a whole: matrices describe network states, and matrix operations simulate network events. Somewhat like Newtonian mechanics, once given the initial system state — the routing tables, content distribution, and initial data-request information of the NDN network — it can compute all subsequent network states by matrix computation. The simulator splits packet processing into events such as interest generation, interest forwarding, cache or content hitting, and transmission, and turns them into matrix computations at network scale. One advantage of the simulator is that it is convenient and user-friendly compared with the CCNx Project and NS-3-based simulators, while providing similar simulation results. It runs on different platforms, including Linux, Mac OS X and Windows. Its computational complexity depends on the number of contents and nodes involved, which makes analysis of small groups of nodes and contents very simple and fast.
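The core idea — a network-wide event as one matrix operation — can be sketched as follows. This is our illustration, not the paper's actual formulation: a forwarding matrix F with F[i][j] = 1 when node j forwards interests to node i, so one simulation tick maps the per-node interest-count vector x to Fx.

```python
def step(F, x):
    """One network-wide forwarding event as a matrix-vector product."""
    n = len(x)
    return [sum(F[i][j] * x[j] for j in range(n)) for i in range(n)]

# 3-node chain 0 -> 1 -> 2: interests at node 0 reach node 2 in two ticks.
F = [[0, 0, 0],
     [1, 0, 0],
     [0, 1, 0]]
x = [5, 0, 0]
x = step(F, x)   # interests now at node 1
x = step(F, x)   # interests now at node 2
```

This is what makes the "Newtonian mechanics" analogy apt: given the initial vector and the matrices, every later state is determined by iterated matrix computation.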

The success of supply chain management largely depends on establishing partnerships in dynamic alliances. Much past research has focused on selecting indexes and building selection models from them. To speed up the selection process, this paper introduces top-k optimization into partner selection and classifies the original indexes. The paper presents the OP (Optimize Procedure) algorithm based on specific indexes, and experiments show that it efficiently improves the process of selecting partners. Furthermore, the paper presents the IMOP (Improved Optimize Procedure) algorithm based on OP, which effectively reduces the false alarm rate of OP and improves the accuracy of partner selection.

Digital urban management has been developing rapidly in modern cities, and as it develops, an increasing number of people find that public opinion information is valuable and important to the process. In this paper, we therefore design a public opinion analysis system for urban management information (POASUMI), which implements functionalities such as public opinion information acquisition, Chinese word segmentation, and sentiment and hotspot analysis and statistics for public opinion information. The experimental results indicate that the output of POASUMI matches actual conditions very well, providing valuable information for urban management.

In this paper, we propose a scale-adaptive Kalman filter algorithm based on the quaternion correlation of color images. First, a Kalman filter estimates the object's motion direction. The correlation between the object and the search-window image is then computed to obtain the accurate position of the object. The scale-adaptive method handles changes in object size effectively. To reduce the influence of illumination, we use the HSV color space instead of RGB. Experimental results show that the algorithm can detect the object correctly even when the size and color of the object change.
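The predict/correct cycle the tracker relies on can be sketched with a minimal 1-D constant-velocity Kalman filter. The noise parameters below are illustrative defaults, not values from the paper:

```python
def kalman_step(x, v, p, z, q=1e-3, r=0.1, dt=1.0):
    """One Kalman cycle: predict position from velocity, then correct
    with measurement z. q = process noise, r = measurement noise."""
    # predict
    x_pred = x + v * dt
    p_pred = p + q
    # update
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    v_new = v + k * (z - x_pred) / dt  # crude velocity correction
    p_new = (1 - k) * p_pred
    return x_new, v_new, p_new

# Object moving at ~1 px/frame; measurement agrees with the prediction.
x_est, v_est, p_est = kalman_step(0.0, 1.0, 1.0, z=1.0)
```

In the paper's method this prediction centers the search window, and the quaternion correlation of the color image then refines the position within it.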

Herein, a new identity recognition method based on the sparse representation of multi-haptic pressure features is investigated. Alongside the common dynamic features, the regional feature and the length-to-width ratio of the external bounding rectangle (extracted using the least-area method) were extracted. The dynamic feature subset was optimized by a correlation criterion, the sparse representation of the haptic pressure was obtained from a sparse basis (a wavelet basis), and the sparse feature vector was calculated with a Toeplitz measurement matrix. The haptic pressure feature set was then created by linearly combining the dynamic and sparse feature subsets. Furthermore, a Support Vector Machine (SVM) classifier identified more than two subjects following the one-vs-many rule and output the identification result by majority voting, and the stability of the features was studied by calculating the intraclass correlation coefficient (ICC) and the coefficient of variation (C.V). Overall, the improved identity recognition accuracy demonstrates the effectiveness and stability of the multi-haptic pressure feature.

To address the security of medical images during storage and transmission, the authors propose a fragile watermarking method that achieves integrity detection and accurate tamper localization. The method uses image blocking, chaotic modulation of the watermark information, and password-based encryption. The watermark information is embedded into the least significant bits of sub-blocks of the original medical image. Experimental results show that the algorithm has the following features: (1) the watermarked medical images have high and stable quality; (2) it is sensitive to manipulation, and tampering with any pixel can be detected; (3) tampering can be localized to a 2×2 pixel area; and (4) the algorithm achieves blind detection with high security.
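A toy sketch can show why block-wise LSB embedding yields both fragility and 2×2 localization. This is our simplification, not the paper's scheme (which additionally uses chaotic modulation and encryption): a 1-bit checksum of each 2×2 block's upper seven bit planes is written into one pixel's LSB, so any change above the LSB plane breaks the check for that block only.

```python
def _parity(block):
    """1-bit checksum of the upper 7 bits of all pixels in a 2x2 block."""
    x = 0
    for row in block:
        for p in row:
            x ^= p >> 1          # ignore the LSB plane that carries the mark
    return bin(x).count("1") & 1

def embed_block(block):
    """Write the block checksum into the LSB of the bottom-right pixel."""
    out = [row[:] for row in block]
    out[1][1] = (out[1][1] & 0xFE) | _parity(block)
    return out

def verify_block(block):
    """Recompute the checksum and compare with the stored LSB."""
    return (block[1][1] & 1) == _parity(block)

b = embed_block([[120, 64], [200, 33]])
assert verify_block(b)
b[0][0] ^= 4                     # tamper with a pixel above the LSB plane
assert not verify_block(b)
```

Because each 2×2 block carries its own check bit, a failed check pinpoints the tampered block — the localization granularity the abstract claims.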

In this paper, a hybrid TS-DE algorithm based on Tabu search and differential evolution is proposed to solve the reliability redundancy optimization problem. A differential evolution algorithm is embedded in the Tabu search: TS searches the solution space, while DE generates neighborhood solutions, so the advantages of both algorithms are exploited simultaneously. The adaptive hybrid TS-DE approach is applied to three benchmark reliability redundancy allocation problems. Comparison with other algorithms reported in the literature shows that the proposed method is effective and efficient for solving the reliability redundancy optimization problem.
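For readers unfamiliar with how DE "generates neighborhood solutions", here is the generic DE/rand/1/bin step it builds on — a sketch with textbook defaults for F and CR, not values from the paper:

```python
import random

def de_trial(pop, i, F=0.5, CR=0.9):
    """Build a trial vector for individual i: mutate with the scaled
    difference of two others, then binomially cross with the parent."""
    a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
    x = pop[i]
    jrand = random.randrange(len(x))   # guarantee at least one mutated gene
    return [a[j] + F * (b[j] - c[j]) if (random.random() < CR or j == jrand)
            else x[j]
            for j in range(len(x))]

pop = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
trial = de_trial(pop, 0)
```

In the hybrid, trial vectors like this supply the neighborhood that Tabu search then accepts or rejects under its tabu-list rules.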

A permanent magnet synchronous motor (PMSM) with segmented permanent magnets embedded in the rotor is analyzed. Its structural features and flux-weakening principle are introduced. The rotor shape is optimized to obtain a sinusoidal air-gap flux density waveform and to reduce torque ripple. The whole-permanent-magnet (PM) motor and the segmented-PM motor are compared in terms of no-load performance, rated-load performance and flux-weakening performance by the finite element method (FEM). The theoretical analysis and FEM simulations confirm the availability and validity of flux weakening for the segmented-PM motor.

The reliability of a navigation system is of great importance for navigation purposes, so an integrity monitoring system is an inseparable part of any aviation navigation system: failures or faults due to malfunctions must be detected and repaired to keep the system's integrity intact. Considering the characteristics of GPS (Global Positioning System) receiver noise distribution and the particle degeneracy and sample impoverishment problems of the particle filter, an improved particle filter algorithm based on a genetic algorithm is proposed for detecting satellite failures. The combination of genetic-algorithm-based resampling and the basic particle filter is used for GPS receiver autonomous integrity monitoring (RAIM). The genetic algorithm classifies the particles and handles low-weight particles through genetic operations, bringing the selection, crossover and mutation operators into the basic particle filter. The method detects satellite failures that affect only subsets of the system measurements. In addition to a main particle filter, which processes all the measurements to give the optimal state estimate, a bank of auxiliary particle filters processes subsets of the measurements to provide state estimates that serve as failure-detection references. The consistency of the test statistics for detecting and isolating satellite faults is established, and failure detection is performed by checking the system-state log-likelihood ratio (LLR). The RAIM algorithm combining the genetic particle filter and the likelihood method is described in detail. Experimental results based on real raw GPS data demonstrate that, under non-Gaussian measurement noise, the algorithm improves the accuracy of state estimation, effectively detects and isolates faulty satellites, and improves fault-detection performance, confirming that the proposed approach is effective for GPS RAIM.

Performance of IEEE 1588 synchronization depends on several related factors, among which the symmetry of packet delay is the most basic. Most existing networks, however, cannot provide symmetric packet delay between master and slave clocks. Our research found that FIFO waiting during packet transmission is one of the main causes of this asymmetry. This paper puts forward a packet-delay estimation algorithm that selects the "lucky packets" which escaped FIFO waiting, thereby attenuating the effect of FIFO waiting on IEEE 1588 synchronization. Tests show that, compared with no optimization in an asymmetric network, the accuracy of packet-delay estimation improves by almost 25 ns and its stability by almost two orders of magnitude, while the accuracy of IEEE 1588 synchronization more than doubles and its stability improves almost four-fold.
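"Lucky packets" are those that crossed the network without queueing, so their delays cluster at the minimum of the observed distribution. A simple sketch of the selection idea (our illustration of minimum-filtering, not the paper's exact estimator):

```python
def lucky_delay(samples, window=8):
    """Estimate the queueing-free one-way delay as the minimum delay
    observed over the last `window` samples: queued packets only ever
    add delay, so the minimum approximates the lucky-packet delay."""
    return min(samples[-window:])

# Delay samples in ns; 100 ns is the queueing-free path delay.
delays = [100, 131, 104, 100, 158, 100, 122, 100]
estimate = lucky_delay(delays)
```

Feeding only such minimum-delay samples into the IEEE 1588 offset computation removes most of the FIFO-induced asymmetry between the master-to-slave and slave-to-master directions.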

To address the problem of speech signal denoising, we propose a novel method that combines empirical mode decomposition (EMD), wavelet threshold denoising and independent component analysis with reference (ICA-R). Because there is only one mixed recording, this is in fact a single-channel independent component analysis (SCICA) problem, which is hard to solve with traditional ICA methods. EMD first expands the single-channel received signal into several intrinsic mode functions (IMFs), making traditional multi-dimensional ICA applicable. First, the received signal is segmented to reduce the processing delay. Secondly, wavelet thresholding is applied to the noise-dominated IMFs. Finally, fast ICA-R extracts the target speech component from the processed IMFs, with a reference signal constructed by assembling the high-order IMFs. Simulations at different noise levels compare the proposed method with the EMD, wavelet thresholding, EMD-wavelet and EMD-ICA approaches. The results indicate that the proposed method exhibits superior denoising performance, especially at low signal-to-noise ratios, with roughly half the running time.
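The wavelet thresholding stage applied to the noise-dominated IMFs boils down to soft thresholding of the wavelet coefficients. A minimal sketch — the threshold here is a hand-picked constant rather than, say, the universal threshold the authors may use:

```python
def soft_threshold(coeffs, t):
    """Soft thresholding: coefficients below t in magnitude are zeroed
    (assumed to be noise); the rest are shrunk toward zero by t."""
    return [(abs(c) - t) * (1 if c > 0 else -1) if abs(c) > t else 0.0
            for c in coeffs]

denoised = soft_threshold([2.0, -0.25, 0.75, -1.5], 0.5)
```

After thresholding, the coefficients are inverse-transformed and the cleaned IMFs are passed on to the ICA-R extraction step.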

Detecting human interactions in a public place where no physical touch occurs has important applications in many surveillance tasks. In this paper, we explore the possibility of automatically detecting such distant human interactions without recognizing the specific human actions. Specifically, we use a highly simplified formulation of interaction: 1) when a person does not interact with others, he always performs a non-interactive action that is largely periodic, such as walking; 2) when two persons interact with each other, they both perform the same short-duration interactive action, such as waving a hand, that differs from their non-interactive actions. Based on this formulation, we develop a new approach to localize the subvideos that describe the interactive actions in Kinect videos, which provide both RGB and depth information. We then develop an approach that compares the pose and kinematic features in these subvideos (from different people) to decide whether they describe the same interactive action. Requiring no supervised learning or action recognition, the proposed approaches are not limited to a specified set of interactive and non-interactive actions. In the experiments, we evaluate the performance of the proposed approaches on 100 Kinect videos with 10 different interactive actions.

In multiple-input multiple-output (MIMO) radar, independent waveforms are transmitted from different antennas, and the target parameters are estimated via the linearly independent echoes from different targets. Several adaptive approaches, including Capon and APES (amplitude and phase estimation), can be applied directly to estimating target angle and amplitude. Here, the CCA (canonical correlation analysis) approach is first used to estimate target locations, which appear as high peak amplitudes; a gradient-based algorithm is then presented to improve the angle-estimation accuracy of the CCA approach without spectral peak searching. Starting from an initial angle, the angle sequence is iteratively updated with adaptive steps and converges to local peaks that indicate the target locations. Simulations show that the target angle accuracy is improved and the common DOA (direction-of-arrival) ambiguity problem is avoided.

Detecting and tracking small targets in aerospace is a significant and difficult issue for satellite tracking systems. It is fundamentally challenging because of the strong noise in captured images, the few characteristics of spot-like targets, the presence of multiple simultaneous targets, real-time requirements, and so on. To address these challenges, we design a detector and a tracker that coordinate with each other. We formulate detection as a two-step problem integrating variance-vector detection and second-order variance detection, in which the image pixels are projected into a two-dimensional variance subspace. In the first step, candidate targets are extracted with an optimal threshold obtained by K-means and the proposed Weighted Maximum Right Probability (WMRP). In the second step, true targets are confirmed using the proposed second-order variance feature and a multi-scale threshold. In the tracking stage, a Markov-based dynamic model forecasts the probable area of the target at the next time step, which is passed to the variance detector; the real target location estimated by the detector is then fed back to the tracker to generate the next probable area. Experiments demonstrate that the proposed two-step framework can detect multiple small targets efficiently and rapidly, and that the cooperation of detector and tracker satisfies conventional space-target tracking applications.

Finding frequent itemsets is computationally the most expensive step in association rule mining, and most research attention has focused on it. Observing that support plays an important role in frequent itemset mining, this paper proves a conjecture on support counting and presents improvements to the traditional Eclat algorithm. The new Bi-Eclat algorithm sorts on support in two directions: items are sorted in descending order of frequency in the transaction cache, while itemsets are processed in ascending order of support during support counting. Experiments on several public databases show that Bi-Eclat outperforms the traditional Eclat algorithm. Furthermore, Bi-Eclat is applied to analyzing the combination principles of prescriptions for Hepatitis B in Traditional Chinese Medicine, demonstrating its efficiency, effectiveness and practical usefulness.
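The vertical (tidset) representation that Eclat-family algorithms work on makes support counting a set intersection, which is what Bi-Eclat's sort orders accelerate. A minimal sketch with invented example data:

```python
def support(tidsets, itemset):
    """Support of an itemset in the vertical layout: the size of the
    intersection of its items' transaction-id sets."""
    tids = set.intersection(*(tidsets[i] for i in itemset))
    return len(tids)

# Each item maps to the set of transaction ids containing it.
tidsets = {
    "a": {1, 2, 3, 5},
    "b": {2, 3, 4},
    "c": {1, 2, 5},
}
s_ab = support(tidsets, ("a", "b"))   # {2, 3} -> 2
s_ac = support(tidsets, ("a", "c"))   # {1, 2, 5} -> 3
```

Because support only shrinks as itemsets grow, ordering candidates by support lets the algorithm prune extensions that cannot reach the minimum support threshold.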

In this paper, a modified particle swarm optimization (MPSO) algorithm is proposed to solve the reliability redundancy optimization problem. The algorithm modifies the strategy for generating new particle positions: the flight velocity of the particles is removed, and the new position of each particle is instead generated by a difference strategy. Moreover, an adaptive parameter is used to preserve the diversity of feasible solutions. Experimental results on four benchmark problems demonstrate that the proposed MPSO has better robustness, effectiveness and efficiency than other algorithms reported in the literature for solving the reliability redundancy optimization problem.

To overcome the difficulty of parameter estimation for the storm rainfall intensity formula, this paper proposes a new multicellular GEP parameter estimation algorithm, MC_GEP_MPO, with a novel individual coding scheme based on the Gene Expression Programming algorithm. MC_GEP_MPO is used to estimate the parameters of the single-return-period rainfall intensity forecast model, using historical rainfall statistics as learning examples, and its effectiveness is evaluated on a real computational instance. Comparative experiments show that the proposed method for parameter estimation of the storm rainfall intensity formula is feasible and precise.

Appropriately adapting mutation strategies is a challenging problem in the Differential Evolution (DE) literature. The Strategy adaptation Mechanism (SaM) can convert a control-parameter adaptation algorithm into a strategy adaptation algorithm. To improve the quality of the optimization result, exploration is important in the early stage of the optimization process and exploitation is significant in the late stage. To ensure this, we modified SaM to strictly control the balance between exploration and exploitation, yielding the biased SaM (bSaM), and we extended the Adaptive Cauchy Differential Evolution (ACDE) with it. We compared bSaM with SaM, and the bSaM-extended ACDE with state-of-the-art DE algorithms, on various benchmark problems. The performance evaluation showed that bSaM and the bSaM-extended ACDE perform better than SaM and the state-of-the-art DE algorithms on both unimodal and multimodal benchmark problems.

Binarization is the key process in restoring degraded historical document images. In this paper, a framework for restoring degraded Thai historical document images is proposed. It consists of three stages: an image filtering stage, a local thresholding stage, and a cluster analysis stage. The image filtering stage eliminates noise using a Wiener filter; the local thresholding stage calculates the optimal threshold of each local block using Niblack's method; and the cluster analysis stage improves the quality of the binary image using Kim's method. The experiments are implemented in Matlab and conducted on a real degraded Thai historical document image dataset provided by the National Library of Thailand. The results are evaluated with three widely used indices — precision, recall and F-index — and show that the proposed framework outperforms four classical binarization methods.
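Niblack's rule, used in the local thresholding stage, sets the threshold of each block to T = mean + k·std over a local window. A minimal sketch; k = -0.2 is a common choice for dark-text-on-light-paper, not necessarily the paper's value:

```python
import math

def niblack_threshold(window, k=-0.2):
    """Niblack local threshold T = mean + k * std over a block of gray
    values (flat list). Negative k pushes T below the paper's brightness
    so dark ink falls under it."""
    n = len(window)
    mean = sum(window) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in window) / n)
    return mean + k * std

# Dark ink (40, 35) on light paper: pixels below T become foreground (1).
block = [200, 210, 190, 40, 205, 195, 198, 202, 35]
t = niblack_threshold(block)
binary = [1 if v < t else 0 for v in block]
```

Because T adapts per block, faded strokes survive binarization even when the page brightness varies — which is exactly why local methods suit degraded historical documents better than a single global threshold.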

Nowadays, a challenge increasingly faced in many domains is how to integrate the large amount of available data and information for a particular domain, the purpose being to make that domain easier to understand. In this paper our main focus is the integration of biological data into a single ontology, developed with an efficient engineering algorithm. Data integration is supported by a range of tools and techniques, but the lack of reliable terminologies and concepts creates hurdles: a large amount of the data available on the Internet remains detached and disjoint, so even with Internet access we cannot exploit it to its full extent. The need to deal with such issues has driven the construction of a large number of ontologies for these domains. Our goal is to provide a new technique for gene ontology that increases the efficiency, completeness and correctness of the ontology engineering process compared with previous ontologies developed by different experts.

An EEG-based brain-computer system for automating home appliances is proposed in this study. A brain-computer interface (BCI) system provides a direct pathway between the human brain and external computing resources or devices; it translates thought into action without using muscles, through a number of electrodes attached to the user's scalp. BCI technology can help disabled people improve their independence and maximize their capabilities at home. In this paper, a novel BCI system was developed to control home appliances from a dedicated Graphical User Interface (GUI). The system comprises six units: an EMOTIV EPOC headset, a personal computer, a Flyport module, a quad-band GSM/GPRS communication module, a LinkSprite JPEG color camera, and a PIC-P40 board. The EMOTIV EPOC headset detects and records neuronal electrical activity reflecting the user's intent from different locations on the scalp. This activity is sent to the computer to extract specific signal features, which are then translated into commands to operate the appliances at home. The proposed system has been implemented, constructed and tested, and the experimental results demonstrate the feasibility of the proposed BCI system for controlling home appliances based on the user's physiological states.

Various adaptive appearance models have been proposed to deal with the challenges of object tracking, such as occlusion, illumination changes, background clutter and pose variation. In this paper we present a novel fragments-based similarity measurement algorithm for object tracking in video sequences. Both the target and the reference are divided into multiple fragments of the same size. We then find the similarity of each fragment with overlapping smaller patches by comparing the average intensity values of the patches; the accuracy of the tracking results can be improved by adjusting the patch size. Finally, we incorporate a global similarity measurement using two kinds of distances between target and reference. The method encodes both color and spatial information, so it can track non-rigid objects in complex scenes, and its coarse-to-fine design balances accuracy against computational cost. Extensive experiments on realistic videos verify the efficiency and reliability of the proposed algorithm.

Most high-performance computing systems are large-scale systems consisting of tens of thousands of computing nodes with superior capabilities. FPGAs can accelerate a wide range of complicated computations with flexible configurations, and more and more companies and research institutions integrate multiple FPGAs into high-performance computing systems to obtain a better trade-off between performance and power. How to design an effective topology for these integrated FPGAs according to different applications has become a key problem in this area. A cluster-based architecture and a corresponding partitioning approach are proposed in this paper. The proposed hierarchical topology takes full advantage of both traditional metallic lines and emerging interconnects to implement one-hop local communication within a cluster and one-hop global high-speed communication between clusters. A case study shows that the proposed architecture and partitioning approach enable fast mapping from a design to a real multi-FPGA computing system and accelerate the realization of high-performance reconfigurable computing systems.

Exposure fusion is an efficient method for directly fusing multi-exposure images into a high-quality low-dynamic-range image, without high dynamic range (HDR) reconstruction and tone mapping. Previous exposure fusion methods produce an image containing only a fixed amount of detail, which cannot satisfy further demands for more detail information. We introduce Local Laplacian Filtering (LLF) for edge-aware image processing into the multi-resolution Laplacian-pyramid weighted blending method. Our method not only preserves details in overexposed and underexposed areas but also allows multi-level detail manipulation by adjusting a single parameter. Compared with previous methods, our interactive method offers greater flexibility in detail display for the needs of different users.

The performance and properties of bipolar junction transistor (BJT) devices are affected by harsh radiation environments. This report reviews the typical effects occurring in BJT devices irradiated with X-rays. The defect parameters of the tested devices are obtained by an in-situ experimental technique. To study the self-annealing behavior in BJTs due to ionizing and displacement damage, damage efficiencies at different bias current levels are compared. The study reveals that higher gain-degradation dispersion occurs at lower bias current levels. Damage creation in the BJTs is dominated by the excitation of valence electrons to the conduction band, which produces a large number of excited atoms and increases the number of holes in the valence band. The resulting increase of trapped holes in the base region raises the probability of recombination and reduces the number of electrons that reach the collector region.

In this paper, the flowshop scheduling problem with the criterion of minimizing total flow time is considered. An effective hybrid of a Quantum Genetic Algorithm and Variable Neighborhood Search (QGA-VNS) is proposed as a solution to the Flow Shop Scheduling Problem (FSSP). First, the QGA performs the global search for optimal solutions; VNS is then integrated to enhance the local search capability. An adaptive two-point crossover and a quantum interference operator (QIC) are applied to the quantum chromosomes, based on probability learning and the quality of the solution at each iteration. Further, a Longest Common Sequence (LCS) method is adopted to construct neighborhood solutions for intensifying the local search with VNS: the neighborhood solutions are based on the common sequence similar to the longest common sequence of the global solution at each iteration, denoted LCSg. After an individual is selected, VNS further explores the local search space based on the LCS neighborhood solutions. Results and comparisons with other algorithms on well-known benchmarks demonstrate the effectiveness of the proposed QGA-VNS.

Reactive power optimization is important to ensure power quality, improve system security, and reduce active power loss. This paper therefore proposes a parallel immune particle swarm optimization (PIPSO) algorithm. The algorithm runs basic particle swarm optimization (BPSO) and discrete particle swarm optimization (DPSO) in parallel and improves the convergence capability of particle swarm optimization. An immune operator effectively overcomes the problem of local convergence, while the parallel optimization handles more reasonably the complex coding problem in which discrete and continuous variables are mixed. Finally, simulation results on the IEEE-14, IEEE-30 and IEEE-118 node systems show that, compared with the genetic algorithm and basic particle swarm optimization, the parallel immune particle swarm optimization converges faster and more stably and better solves large-scale power system reactive power optimization.
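For readers unfamiliar with the continuous half of the scheme, a generic basic PSO loop might look like the sketch below; this is a hedged illustration of standard PSO, not the paper's PIPSO with its immune operator and parallel discrete branch, and all parameter values are conventional defaults.

```python
import random

def pso_minimize(f, dim, bounds, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over [lo, hi]^dim with basic particle swarm optimization."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val


best, val = pso_minimize(lambda x: sum(v * v for v in x), dim=2, bounds=(-5.0, 5.0))
```

In a reactive power setting, `f` would be the power-loss objective evaluated by a load-flow calculation, with discrete variables (tap positions, capacitor banks) handled by the parallel DPSO branch.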

A lifting scheme based orthogonal wavelet packet multiplexing (LOWPM) system can increase computation efficiency and save storage space. By using the orthogonality of the translated wavelet functions, the LOWPM system efficiently avoids inter-symbol interference (ISI) and inter-channel interference (ICI). Compared with the orthogonal frequency division multiplexing (OFDM) system, the LOWPM system has higher spectrum efficiency and better anti-jamming performance. However, in a frequency-selective fast-fading channel, the performance of the system is still not ideal. In order to implement high-bit-rate transmission, a channel estimation technique is needed to combat channel noise. This paper presents an improved channel estimation algorithm in which cubic convolution interpolation is applied to the pilot-based linear minimum mean square error (LMMSE) channel estimation algorithm so as to achieve higher estimation accuracy. Simulation results show that the algorithm can effectively improve the performance of the LOWPM system.
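The cubic convolution interpolation step can be sketched as below. This is a generic implementation of the standard kernel (with the common choice a = -0.5), not the paper's LOWPM-specific code; the sample values in the usage example are illustrative, standing in for channel estimates at uniformly spaced pilot positions.

```python
def cubic_kernel(s, a=-0.5):
    """Piecewise-cubic convolution kernel; h(0) = 1, h(n) = 0 for integer n != 0."""
    s = abs(s)
    if s <= 1:
        return (a + 2) * s ** 3 - (a + 3) * s ** 2 + 1
    if s < 2:
        return a * s ** 3 - 5 * a * s ** 2 + 8 * a * s - 4 * a
    return 0.0

def cubic_interp(samples, x):
    """Cubic convolution interpolation of uniformly spaced samples at
    fractional index x, using the 4 nearest samples (edges replicated)."""
    n = len(samples)
    i = int(x)
    out = 0.0
    for k in range(i - 1, i + 3):
        kc = min(max(k, 0), n - 1)      # clamp indices at the edges
        out += samples[kc] * cubic_kernel(x - k)
    return out


# Pilot "estimates" sampled from y = x^2; the kernel reproduces
# quadratics exactly away from the edges.
pilots = [0, 1, 4, 9, 16]
print(cubic_interp(pilots, 1.5))  # 2.25
```

In the channel estimator, the same interpolation would be applied independently to the real and imaginary parts of the pilot-position LMMSE estimates to fill in the non-pilot subcarriers.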

Spectrum sensing is the basis of cognitive radio technology. Cooperative spectrum sensing has been shown to increase the reliability of spectrum sensing. To reduce sensing overhead and total energy consumption, it is recommended that users with good performance be selected to increase sensing reliability. However, which cognitive users have the best detection performance is not known a priori. In this paper, a selective cooperative sensing strategy and a user selection method based on the B value are proposed to increase sensing reliability and reduce sensing overhead. Simulations are used to evaluate and compare the B-value method with other methods. Simulation results show that B-value selection has the same sensing performance as signal-to-noise ratio (SNR) selection, and obviously outperforms simple counting selection in the presence of noise uncertainty. In general, the SNRs of all cognitive users are not known a priori and some noise uncertainty is inevitable; in this sense, B-value selection is a simple, feasible and effective selection method.
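As context for the counting-based baseline mentioned above, hard-decision fusion of local sensing results can be sketched as follows. The B-value computation itself is not specified in the abstract, so only generic fusion rules are shown here.

```python
def fuse_decisions(decisions, rule="majority"):
    """Combine binary local sensing decisions (1 = primary user detected)
    at the fusion center using a standard hard-fusion rule."""
    ones = sum(decisions)
    if rule == "or":                          # any single detection suffices
        return int(ones >= 1)
    if rule == "and":                         # all users must detect
        return int(ones == len(decisions))
    return int(ones > len(decisions) / 2)     # strict majority vote


# Three cooperating users, only one of which detects the primary user:
print(fuse_decisions([1, 0, 0], rule="or"))   # 1
print(fuse_decisions([1, 0, 0]))              # 0 (majority)
```

A selective strategy such as the one proposed would restrict which users' decisions enter this fusion step in the first place, trading sensing overhead for reliability.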

Interferometric Ranging System with Hopped Frequencies (IRHF) is a novel ranging technique with advanced anti-jamming capability for wireless sensor networks. This paper investigates the ranging performance of the Maximum Likelihood (ML) estimator of IRHF under multi-tone jamming (MTJ), a potential threat faced by wireless sensor nodes. Firstly, the jamming model with one malicious node transmitting an MTJ signal is introduced. Secondly, the region in which false estimates are located is identified. Finally, a closed-form expression for the probability of false estimation versus the signal-to-jamming ratio and several system parameters is derived using the tool of pair-wise probability. The consistency between the simulation results and the theoretical approximations validates our analyses. The study shows that the proposed probability of false estimation can predict the ML ranging performance of IRHF accurately and reduce the need for time-consuming computer simulations.

Handling appearance variations caused by occlusion or abrupt motion is a challenging task for visual tracking. In this paper, we propose a novel tracking method that deals with appearance changes based on sparse representation in a particle swarm optimization (PSO) framework. First, we divide each candidate state into multiple structural patches to cope with partial occlusions of the object. Once the object is lost, we apply an object recovery scheme that uses scale-invariant feature transform (SIFT) correspondences between two frames to reacquire the rough object position. The tracking state is then searched in the vicinity of the rough object position using the PSO iteration. In addition, an online dictionary updating mechanism is presented to capture object appearance variations. The object information from the initial frame is never updated during tracking, while the other templates in the dictionary are progressively updated based on their coefficients. Compared with several conventional trackers, the experimental results demonstrate that our approach is more robust in dealing with occlusions and abrupt motion variations.
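The described dictionary updating rule can be sketched roughly as below. The threshold and the weight bookkeeping are hypothetical details added for illustration; the abstract only states that the initial-frame template is fixed while the other templates are updated based on their coefficients.

```python
def update_dictionary(templates, weights, new_template, thresh=0.1):
    """Sketch of a coefficient-driven template update: templates[0] (the
    initial frame) is never replaced; otherwise the template with the
    smallest representation coefficient is swapped for the new appearance
    once its weight drops below a (hypothetical) threshold."""
    idx = min(range(1, len(templates)), key=lambda i: weights[i])
    if weights[idx] < thresh:
        templates[idx] = new_template
        weights[idx] = thresh        # reset the weight of the fresh template
    return templates


# Template "t1" has the smallest coefficient, so it gets replaced:
print(update_dictionary(["t0", "t1", "t2"], [1.0, 0.05, 0.5], "new"))
# ['t0', 'new', 't2']
```

Keeping the initial-frame template untouched is what guards the tracker against drift when the progressively updated templates accumulate errors.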

For a long time, syntacticians and semanticists have sought a way to integrate semantic analysis into syntactic parsing, or vice versa, because they have realized that understanding cannot be fully explained by syntax or semantics alone. Categorial grammarians have devoted much effort to extending the expressiveness of Categorial Grammar (CG), but they miss the point that what lies behind the scenes is meaning rather than grammar rules. Frame Semantics invented a very good formal representation of meanings; however, it lacks an apparatus for syntactic operations like that of CG. In this paper, we therefore integrate the two so that they can complement each other. This theoretical model can explain why some grammatical sentences are unacceptable semantically. In addition, we introduce our parsing system built on this theoretical model, which supports our argument that if a semantic category is allowed by the frame, then the sentence is semantically acceptable.

Computers and information technology (IT) have continued to significantly influence and drive all areas of human endeavour for many years. This trend will continue for a long time to come. From weather forecasting to satellite missions, from automation in agriculture and livestock production to graphical imaging of vast areas of ocean floor thousands of meters deep, we cannot think about achieving the success we have had without today's smart computers and software programs. They enable us to communicate with numerous sensors and exchange vital information. Miniaturised computers embedded seamlessly in applications throughout our environment promise to usher in the 'Internet-of-Things', where our interactions with the world around us will surely be different from what they are today. Consequently, continued investment, research and development in new areas of computer and information technology are of paramount importance. Continued collaboration and dissemination of the latest findings are equally important. With this in mind, the first International Conference on Computer and Information Technology (ICCIT) was held at Bangladesh University of Engineering and Technology in 1998. Since then the annual ICCIT conference has grown into one of the most prominent computer and IT related research conferences in the South Asian region, with participation of academics and researchers from many countries around the world. The conference is hosted every year by a public or private university in Bangladesh, and has been successful in bringing together researchers, IT professionals and IT managers to discuss and disseminate state-of-the-art research activities and outcomes. A double-blind review process is followed whereby each paper is reviewed by at least two independent reviewers of high international standing. The acceptance rate of papers in recent years has been around 35% or less.
This is an indication of the rigor applied to the review and selection of papers for presentation at the conference. The proceedings of ICCIT have been included in IEEE Xplore since 2008, enhancing the visibility of the participating researchers' work and its potential for wider citation. This Special Issue of the Journal of Computers presents eight papers selected from the Fifteenth International Conference on Computer and Information Technology (ICCIT 2012), held at the University of Chittagong, Bangladesh during December 22-24, 2012. A total of 491 papers were submitted to the conference. After initial scrutiny, 318 papers were shortlisted for review. After a double-blind review process, 126 papers were selected for presentation at the conference and subsequent publication in the conference proceedings. From the 126 papers accepted for the conference, the authors of 17 papers were invited to submit extended versions for this special issue. The authors were asked to enhance their conference papers significantly, with at least a 30% extension. Only eight papers were successful in meeting the expectations of the review process and have been selected for inclusion in this special issue. These eight papers cover four domains of computing, namely efficient algorithm design, cloud computing, fault-tolerant systems and biomedical signal processing. The first five papers in this special issue are in the area of efficient algorithm design. The first paper is titled "Effective Sparse Dynamic Programming Algorithms for Merged and Block Merged LCS Problems," and is authored by A. H. M. Mahfuzur Rahman and M. Sohel Rahman. This paper presents a study on effective sparse dynamic programming to solve the longest common subsequence (LCS) problem.
The approach taken in the paper measures the relationship among three sequences, where two of the sequences are interleaved in different ways and the interleaved sequences are then matched with the third sequence as pairwise longest common subsequence problems. The paper presents an improved algorithm to find the LCS between two sequences. It then proposes an improved algorithm to solve a block-constrained variation of the problem. Finally, it proposes a hybrid algorithm which utilizes the advantages of the above algorithms and the existing state-of-the-art algorithms to provide the best possible output in every possible case, in terms of both time and space efficiency. The second paper is titled "An Efficient and Scalable Approach for Ontology Instance Matching," and is authored by Rudra Pratap Deb Nath, Hanif Seddiqui and Masaki Aono. Ontology instance matching is a key interoperability enabler across heterogeneous data resources in the Semantic Web for integrating data semantically. According to the authors, research on ontology matching is shifting from the ontology schema or concept level to the instance level to fulfil the vision of the "Web of Data". Ontology instances define data semantically and are kept in a knowledge base. Since heterogeneous sources of massive ontology instances grow sharply day by day, scalability has become a major research issue in ontology instance matching of semantic knowledge bases. In this study, the authors propose a method that filters instances of a knowledge base in two stages to address the scalability issue. The first stage groups the instances based on the relation of concepts, and the next stage further filters the instances based on the properties associated with the instances.
Then, the proposed instance matcher works by comparing an instance within a classification group of one knowledge base against the instances of the same sub-group of the other knowledge base to achieve interoperability. The third paper in this category is titled "Longest Common Subsequence Problem for Run-Length-Encoded Strings," and is authored by Shegufta Bakht Ahsan, Syeda Persia Aziza and M. Sohel Rahman. In this paper, the authors present a new and efficient algorithm for solving the Longest Common Subsequence (LCS) problem between two run-length-encoded (RLE) strings. The authors claim that their algorithm outperforms some of the existing algorithms for solving the same problem. The next paper is titled "An intelligent Decision Support System for Arsenic Mitigation in Bangladesh," and is authored by Mohammad M. Elahi, Muhammad I. Amin, Mohammad M. Haque, Mohammad N. Islam and Md. R. Miah. The paper deals with the challenges associated with allocating resources such as tube wells efficiently and effectively to mitigate the arsenic hazard in Bangladesh. To allocate resources based on different arsenic hazard parameters, the authors propose a Decision Support System that enables the user to observe the effect of an allocation policy in both tabular and spatial format using statistical models. They also propose an algorithm for the optimal allocation of resources. An interactive and user-friendly Smart User Interface has been developed to support the decision-making process. The fifth and final paper in the category of algorithm design is on the set covering problem, titled "A CLONALG-based approach for the set covering problem," and is authored by Masruba Tasnim, Shahriar Rouf and M. Sohel Rahman. In this paper, the authors propose a simple heuristic based on CLONALG, one of the most popular artificial immune system (AIS) models, for the non-unicost set covering problem (SCP).
They have also modified the heuristic to solve the unicost SCP, which can be used to model several real-world situations such as crew scheduling in airlines, the facility location problem, and production planning in industry. The approach taken in this paper is to use an Artificial Immune System to solve the SCP. The next paper falls in the category of security in cloud computing. It is titled "Data Intensive Dynamic Scheduling Model and Algorithm for Cloud Computing Security," and is authored by Md. Rafiqul Islam and Mansura Habiba. In order to ensure adequate security for different data storage in the cloud, this paper proposes a three-tier security framework. The security overhead has been analysed mathematically for different security services such as confidentiality, integrity and authenticity, to show that the proposed framework is able to provide an adequate level of security and enhance the processing speed of security services without additional time overhead. The paper also proposes a scheduling algorithm to ensure security for data-intensive applications. The next paper is in the area of fault-tolerant systems and is authored by Seemanta Saha and Muhammad Sheikh Sadi. Titled "Synthesizing Fault Tolerant Safety Critical Systems," the paper proposes an efficient approach to synthesize safety-critical systems that allows continuity of program execution in the presence of faults. In this method, the authors expect the program to transition from one state to another to perform its desired functions, and in every state it checks whether the safety specifications of the system are maintained. To check the maintenance of the safety specifications, reachability to the program's bad transitions needs to be checked. This checking is done by forward traversal of program states during program execution.
Consequently, the authors expect the proposed method to enforce completion of program execution, tolerating faults due to soft errors. The last paper in this special issue is in the area of biomedical signal processing. It is titled "Determination of the Effect of Having Energy Drinks by Analyzing Blood Perfusion Signal," and is authored by Muhammad Muinul Islam, Md. Bashir Uddin, Mohiuddin Ahmad, Fatema Khatun, Md. Nafiur Rahman Protik and Md. Mehedi Islam. In this paper, the authors present an evaluation of the effect of energy drinks using a laser Doppler flowmetry technique, analyzing the blood perfusion signal of healthy human subjects before and after having energy drinks. They postulate that it is essential to study various physiological signals during work or exercise to be able to assess whether energy drinks contribute to enhancing human performance. The authors have observed a significant change in metabolic and sympathetic nerve activity after the consumption of energy drinks. Thirty-three reviewers from different countries assisted the guest editors in reviewing the papers submitted to this Special Issue during two rounds of review. They have contributed immensely to the process by responding to the guest editors in the shortest possible time and by dedicating their valuable time to ensure that the Special Issue contains high-quality papers. The guest editors would like to express their sincere gratitude to all the reviewers, namely, A. B. M. Shawkat Ali, Mortuza Ali, Todd Andel, Theus Aspiras, Waleed Al-Assadi, Allen Benter, Subrata Chakraborty, Duanbing Chen, Chenwei Deng, Yakov Diskin, Diamantino Freitas, Smitha Kavallur Pisharath Gopi, Junyi Guo, Afzal Hossain, Rafiqul Islam, Zahid Islam, Mohammed Kaosar, Hing-Wah Kwok, Kang-Ping Lin, Jihong Liu, Alex Mathew, Ghulam Muhammad, Aibing Ning, Damian Paul O'Dea, Ashfaqur Rahman, Neethu Robinson, Samuel H. Russ, Adel A. Sakla, Rajesh Thiagarajan, Qingguo Wang, Xiaodong Wang, Yaohua Yu, and Li Zengyong. 2014/07/26 - 18:24

The longest common subsequence (LCS) problem has been widely studied and used to discover relationships between sequences. In this paper, we study the interleaving relationship between sequences; that is, we measure the relationship among three sequences, where two of them are interleaved in different ways and the interleaved sequences are then matched with the third sequence as pairwise longest common subsequence problems. Given a target sequence T and two other sequences A and B, we need to find the LCS between M(A,B) and T, where M(A,B) denotes the merged sequence consisting of A and B. We first present an improved O((Rr + Pm) log log r) time algorithm in which we consider only the matches between sequences; here |T| = n, |A| = m and |B| = r (m ≤ r), R is the total number of ordered pairs of positions at which the two strings A and T match, and P denotes the total number of ordered pairs of positions at which the two strings B and T match. Based on the same idea, we also propose an improved algorithm to solve a block-constrained variation of the problem. The running time of the blocked version is O(max{Rα log log r, Pβ log log r}), where α denotes the number of blocks in A and β denotes the number of blocks in B. However, these improved algorithms do not provide the best output in every case. We therefore construct a hybrid algorithm which utilizes the advantages of our proposed algorithms and previous state-of-the-art algorithms to provide the best output in every possible case, in terms of both time efficiency and space efficiency.
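As a baseline for comparison, the classic quadratic-time dynamic program for pairwise LCS, which the sparse, match-based algorithms above improve upon, is:

```python
def lcs_length(a, b):
    """Classic O(|a||b|) dynamic program for the longest common subsequence.
    dp[i][j] = LCS length of the prefixes a[:i] and b[:j]."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]


print(lcs_length("ABCBDAB", "BDCABA"))  # 4
```

The sparse algorithms avoid filling the full table by working only with the R (or P) matching position pairs, which is where the log log factors from fast predecessor structures come from.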

Ontology instance matching is a key interoperability enabler across heterogeneous data resources in the Semantic Web for integrating data semantically. Although most research so far has emphasized schema-level matching, research on ontology matching is shifting from the ontology schema or concept level to the instance level to fulfill the vision of the "Web of Data". Ontology instances define data semantically and are kept in a knowledge base. Since heterogeneous sources of massive ontology instances grow sharply day by day, scalability has become a major research concern in ontology instance matching of semantic knowledge bases. In this study, we propose a method that filters instances of a knowledge base in two stages to address the scalability issue. The first stage groups the instances based on the relation of concepts, and the next stage further filters the instances based on the properties associated with the instances. Then, our instance matcher works by comparing an instance within a classification group of one knowledge base against the instances of the same sub-group of the other knowledge base to achieve interoperability. We evaluate our proposed method on several benchmark data sets, namely OAEI-2009, OAEI-2010 and OAEI-2011. Compared with other baseline methods, our proposed method shows satisfactory results.
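The two-stage filtering idea can be sketched as grouping instances first by concept and then by their associated property names, so that each instance is only compared against the matching sub-group of the other knowledge base. The dictionary keys and field names below are illustrative, not the authors' data model.

```python
def two_stage_filter(instances):
    """Group instances first by concept (stage 1), then by the set of
    property names attached to each instance (stage 2).  Matching then
    only compares instances sharing the same (concept, properties) key."""
    groups = {}
    for inst in instances:
        key = (inst["concept"], frozenset(inst["properties"]))
        groups.setdefault(key, []).append(inst)
    return groups


insts = [
    {"concept": "Person", "properties": ["name", "age"]},
    {"concept": "Person", "properties": ["age", "name"]},
    {"concept": "Person", "properties": ["name"]},
]
groups = two_stage_filter(insts)
print(len(groups))  # 2
```

The payoff is that the quadratic instance-to-instance comparison cost is paid per sub-group rather than over the whole knowledge base, which is what addresses scalability.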

In this paper, we present a new and efficient algorithm for solving the Longest Common Subsequence (LCS) problem between two run-length-encoded (RLE) strings. Suppose Ŷ and X̂ are two RLE strings of length k̂ and ℓ̂ respectively. Also assume that Y and X are the uncompressed versions of the two RLE strings Ŷ and X̂, having length k and ℓ respectively. Then, our algorithm runs in O((k̂ + ℓ̂) + R log log(k̂ℓ̂) + R log log ω) time, where ω = k + ℓ and R is the total number of ordered pairs of positions at which the two RLE strings match. Our algorithm outperforms the best algorithms for the same problem in the literature.
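For concreteness, run-length encoding itself, the compressed input representation the algorithm operates on (not the LCS algorithm), can be sketched as:

```python
def rle_encode(s):
    """Run-length encode a string into (char, run_length) pairs."""
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((ch, 1))               # start a new run
    return runs

def rle_decode(runs):
    """Inverse of rle_encode."""
    return "".join(ch * n for ch, n in runs)


print(rle_encode("aaabbc"))  # [('a', 3), ('b', 2), ('c', 1)]
```

The point of RLE-aware LCS algorithms is to work directly on the k̂ and ℓ̂ runs rather than the k and ℓ decompressed characters, which matters when the strings are highly compressible.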

Arsenic contamination of groundwater in many nations, including Bangladesh, shows that this is a global problem. Because of delayed health effects, poor reporting, and low levels of awareness in some communities, the extent of the adverse health problems caused by arsenic in drinking water has reached an alarming level in Bangladesh. Moreover, allocating resources such as tube wells efficiently and effectively to mitigate the arsenic hazard is a challenging task in Bangladesh. To allocate resources based on different arsenic hazard parameters, we have developed a Decision Support System that enables the user to observe the effect of an allocation policy in both tabular and spatial format using statistical models. We have also developed an algorithm for the optimal allocation of resources. A Smart User Interface has been designed to give users an interactive, user-friendly, intelligible, logical, clear, and sound environment to work in. Finally, we have analyzed and demonstrated the efficacy of our algorithm graphically.

In this paper, we propose a simple heuristic based on CLONALG, one of the most popular artificial immune system (AIS) models, for the non-unicost set covering problem (SCP). In addition, we have modified our heuristic to solve the unicost SCP. It is well known that the SCP is an NP-hard problem that can model several real-world situations such as crew scheduling in airlines, the facility location problem, and production planning in industry. In real cases, problem instances can reach huge sizes, making the use of exact algorithms impractical. Thus, to find practically efficient approaches for solving the SCP, different kinds of heuristic approaches have been applied in the literature. To the best of our knowledge, our work is the first attempt to solve the SCP using an Artificial Immune System. We have evaluated the performance of our algorithm on a number of benchmark non-unicost instances. Computational results show that it is capable of producing high-quality solutions for the non-unicost SCP. We have also performed experiments on unicost instances, which suggest that our heuristic performs well on the unicost SCP too.
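As a point of reference, the classic greedy heuristic for the (weighted) set cover problem, a standard textbook baseline rather than the paper's CLONALG-based AIS approach, is:

```python
def greedy_set_cover(universe, subsets, costs=None):
    """Classic greedy heuristic for (weighted) set cover: repeatedly pick
    the subset with the lowest cost per newly covered element.
    subsets is a list of sets; assumes their union covers the universe."""
    if costs is None:
        costs = [1] * len(subsets)          # unicost SCP
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = min(
            (i for i in range(len(subsets)) if subsets[i] & uncovered),
            key=lambda i: costs[i] / len(subsets[i] & uncovered),
        )
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen


subsets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
print(greedy_set_cover({1, 2, 3, 4, 5}, subsets))  # [0, 3]
```

The greedy heuristic achieves a logarithmic approximation guarantee; population-based metaheuristics such as CLONALG are used precisely to improve on its solution quality for large benchmark instances.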