Journal of Software

Texture feature extraction is an important step in facial expression recognition systems. The traditional LBP method ignores the statistical characteristics of the texture change direction during feature extraction; the LDP method, based on LBP, can extract more detailed texture information, but its computational complexity is greatly increased. To extract more detailed texture information without increasing the computational complexity, we propose a method named Local Vector Model (LVM). In this method, the modulus value and direction of the local texture changes are extracted as classification features. Furthermore, to improve the algorithm's robustness to subtle deformations of the expression image, the Image Euclidean Distance is introduced and embedded in LVM. Finally, an even decreasing function is used to obtain the neighbor classification distance. Experiments on JAFFE facial expression databases at different resolutions demonstrate that the method proposed in this paper outperforms other modern methods. 2014/09/12 - 23:33
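
The LBP operator this abstract builds on is standard and easy to sketch. The minimal Python below (an illustration only, not the paper's LVM) computes the 8-bit LBP code of a 3x3 grayscale patch, the per-pixel feature whose histogram becomes a texture descriptor.

```python
def lbp_code(patch):
    """Compute the 8-bit LBP code for a 3x3 grayscale patch.

    Each of the 8 neighbors is compared with the center pixel; a neighbor
    >= center contributes a 1 bit, scanned clockwise from the top-left.
    """
    center = patch[1][1]
    # Clockwise neighbor order starting at the top-left corner.
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, value in enumerate(neighbors):
        if value >= center:
            code |= 1 << bit
    return code
```

Sliding this over an image and histogramming the codes yields the LBP feature vector; LVM, per the abstract, keeps a vector (modulus plus direction) instead of one scalar code.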

A novel method based on Wikipedia for clustering review keywords is proposed, through which users can quickly find the themes they are interested in. The method first extracts keywords, then calculates word similarity based on Wikipedia to generate a similarity matrix, and finally applies k-means clustering. Its performance is better than that of methods based on HowNet and WordNet, with an accuracy of around 77%. 2014/09/12 - 23:33

SaaS (Software as a Service) has attracted significant attention from both industry and academia. By serving multiple clients in the Long Tail, many notable SaaS applications have achieved great success in traditional domains such as CRM (Customer Relationship Management) and HRM (Human Resource Management). To improve the effectiveness and efficiency of the traditional publishing industry, a solution called CPP (Cloud Publishing Platform), based on SaaS, is proposed. Compared to the traditional e-publishing systems used by publishers, it has the following distinguishing features: 1) it is developed and operated by an independent SaaS provider who provides valuable publishing services not only to enterprise tenants, such as periodical presses, but also to individual tenants, such as scholars and researchers; 2) an individual tenant is not a contributor to just one specific publisher but to all enterprise tenants, and his academic reputation can be accumulated across all activities carried out on the platform; 3) a publishing cycle is established on top of CPP that benefits all stakeholders. Finally, five key points for developing and operating a successful SaaS application are summarized, which can serve as a guideline for general SaaS application development. 2014/09/12 - 23:33

Software aging results from runtime environment degradation and is significantly correlated with available computing resources. A set of variables evolving over time can describe the running state of a computer system; consequently, software aging is treated in this paper as analogous to the evolution of a dynamic system. We construct a nonlinear dynamic model based on experimental observations. First, we assume the mathematical form of the nonlinear dynamic equations. Then, we select resource parameters that reflect the “health” of the whole computer system as the variables of our model. Finally, we estimate the value of each parameter in the model using nonlinear inversion. Our approach is validated on two different datasets. The dynamic model can describe the evolution of software aging and interpret the interplay of various resource parameters. Moreover, it can be used to forecast abrupt state degradation and help explore the root cause of software aging. For example, by comparing the output of our model against real values, with a suspected “aging factor” as input, we can identify which resource variable is the root cause undermining the stability of the computer system. 2014/09/12 - 23:33

Metamorphic testing has been applied to various systems from different domains. Many studies have shown that the selection of metamorphic relations greatly affects the effectiveness of metamorphic testing. However, these studies mainly focused on fault-detection effectiveness; they did not consider the cost that metamorphic relations involve, such as the number of test inputs. Good metamorphic relations should have high fault-detection effectiveness at low cost. In this paper, we propose a cost-driven approach for metamorphic testing. The key idea is to design metamorphic relations that share the same test inputs, reducing the testing cost. We conduct a case study on a bank system and compare the cost-effectiveness of metamorphic relations derived from this approach with those constructed by the conventional approach. The experimental results show that metamorphic relations derived from our approach are more cost-effective. We also find that this approach not only reduces the cost of metamorphic testing, but also helps to construct different metamorphic relations that detect different types of faults. 2014/09/12 - 23:33
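
The cost-saving idea, several metamorphic relations checked against one shared set of source test inputs, can be sketched in a few lines. The function under test (`math.sin`) and the two relations below are illustrative stand-ins, not the paper's bank-system relations.

```python
import math

def mr_shared_inputs(f, xs, tol=1e-9):
    """Check two metamorphic relations of sin() on one shared input set:
    MR1: f(pi - x) == f(x);  MR2: f(-x) == -f(x).
    Returns the number of violated (input, relation) pairs."""
    violations = 0
    for x in xs:
        if abs(f(math.pi - x) - f(x)) > tol:   # MR1: supplementary angle
            violations += 1
        if abs(f(-x) + f(x)) > tol:            # MR2: odd symmetry
            violations += 1
    return violations
```

A constant-offset bug in `f` still satisfies MR1 (the offset cancels) but violates MR2 on every input, which illustrates the abstract's closing point that different relations detect different fault types.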

Existing attribute reduction algorithms based on rough set theory and genetic algorithms suffer from two main problems: the complexity of calculating the fitness function and slow convergence. An attribute reduction algorithm based on rough set theory and an improved genetic algorithm is therefore proposed in this paper. To simplify the calculation of the fitness function while keeping the algorithm correct, the relative importance of a chromosome is used to define the fitness function. Beyond that, by introducing the core attributes into the initial population and using an improved mutation operator, the algorithm not only maintains global optimization capability but also converges faster. The experimental results show that the algorithm can obtain the optimal attribute reduction set of an information system quickly and efficiently. 2014/09/12 - 23:33

High energy consumption has become an urgent problem in cloud datacenters. Based on virtualization technologies, the pay-as-you-go resource provisioning paradigm has become a trend. Specifically, the Virtual Machine (VM) is the basic resource unit in the datacenter for resource migration and provisioning. Much research has been devoted to improving datacenter resource utilization and reducing power consumption through VM placement. As the most power-hungry resource, the CPU has an adjustable frequency range. Based on CPU frequency scaling, a new approach to VM placement is proposed, realized in two stages. In the initial stage, we propose a multi-objective heuristic ant colony algorithm that finds an optimized solution for energy saving as well as service-level agreement (SLA) compliance. In the dynamic stage, by using autoregressive prediction and CPU frequency scaling, the proposed approach can adjust CPU utilization as needed without depending on whole-VM migration. The experiments show that the energy-saving algorithms based on CPU frequency scaling are much better than the traditional BFD and FFD algorithms at saving energy while satisfying the SLA. 2014/09/12 - 23:33
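
The FFD baseline the experiments compare against is classic bin packing. A minimal sketch follows, with CPU demands and host capacity as plain numbers and no SLA term; it is a reference point, not the paper's ant colony algorithm.

```python
def first_fit_decreasing(vm_loads, host_capacity):
    """Place VM CPU demands onto hosts with First Fit Decreasing.

    Sort demands in decreasing order, then put each on the first host
    with enough free capacity, opening a new host when none fits.
    Returns a list of hosts, each a list of the VM loads placed on it.
    """
    free = []        # remaining capacity per open host
    placement = []   # parallel list: VM loads assigned to each host
    for load in sorted(vm_loads, reverse=True):
        for i, f in enumerate(free):
            if load <= f:
                free[i] -= load
                placement[i].append(load)
                break
        else:
            free.append(host_capacity - load)
            placement.append([load])
    return placement
```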

Delay Tolerant Networks are characterized by frequent network topology partitions, limited resources, extremely high latency, etc. Consequently, successful delivery of messages in such networks faces great challenges. In this paper, by capturing temporary clusters, we use the temporary end-to-end paths in a timely manner to deliver messages directly and successfully. Besides, by collecting a large number of encounter history records as samples, we use statistical analysis to objectively and accurately evaluate encountered nodes, thus selecting fewer but better relay nodes to spread messages. Finally, based on the above schemes, a Statistical Analysis and Temporary Cluster based routing algorithm (SATC) is proposed to improve routing performance. Extensive simulations have been conducted, and the results show that SATC achieves a higher delivery ratio and a lower hop count than Epidemic and PRoPHET. Furthermore, its overhead ratio is 70% and 65% less than Epidemic and PRoPHET, respectively. 2014/09/12 - 23:33

Model checking has been successfully applied to verify the security properties of network protocols. In this paper, we propose a formal modeling method using the PROMELA language, based on the simplified model of the SET payment protocol proposed by Lu & Smolka, and use LTL to describe the authentication property. Under the hypothesis that the network environment is controlled by the intruder, we use SPIN to find attacks, and we improve verification efficiency by using the optimization strategies of atomic steps and bit-state hashing. Finally, we fix the identified vulnerabilities of the SET protocol. 2014/09/12 - 23:33

To classify thrombosis and pectinate muscle in cardiac ultrasound image sequences, a classification method based on sparse representation is proposed. This method extracts GLCM-based texture features to form the sample set and computes the sparse solution whose coefficients describe how a test sample is represented by the training set. After that, two kinds of constraints and a classification strategy are added to achieve the classification. Experimental results show that the proposed approach achieves a classification accuracy of 91.92%, significantly higher than other popular classifiers. 2014/09/12 - 23:33

This study presents an ontology-based multi-agent system for assessing PPHIIS, an integrated information system for the health of the People's Police. An ontology model was designed and implemented to represent the health domain knowledge in combination with the agents' cooperation. The SOsRCE ontology model not only solved semantic problems with heterogeneous data but also enabled medical staff to extract detailed medical information more efficiently. Meanwhile, the multi-agent view offered a level of abstraction at which we envisage integrated systems carrying out cooperative work by inter-operating globally across networks connecting police, hospitals and terminals. 2014/09/12 - 23:33

Based on software evolution theory, this paper discusses the software trustworthiness growth process, in which software trustworthiness gradually develops from a low-trustworthiness state into a high-trustworthiness state; it presents a cycle model of software trustworthiness growth and summarizes the key factors influencing trustworthiness growth. On this basis, the related extended concepts of trustworthy software are put forward, five typical characteristics of software trustworthiness are analyzed, and a software trustworthiness evolution system is constructed. These results are significant for correctly understanding the essence of software trustworthiness and exploring strategies and technologies for software trustworthiness growth. 2014/09/12 - 23:33

A multi-focus image fusion method based on bidimensional empirical mode decomposition (BEMD) and an improved local energy algorithm is presented in this paper. First, each source image is decomposed by BEMD. Then a maximum criterion combined with a weighted-average fusion rule based on local energy is applied to the bidimensional intrinsic mode function (BIMF) components of the corresponding frequency segment: if the phases of the BIMF coefficients of the two source images are the same, the local energy maximum criterion is used for the frequency coefficients of the fused image; otherwise, if the corresponding phases are opposite, the BIMF coefficients of the fused image are determined by the weighted-average method based on local energy. Finally, the fusion result is obtained by the inverse BEMD transform on the fused coefficients. Simulation shows that the proposed algorithm significantly outperforms traditional methods such as the maximum criterion, weighted average and wavelet fusion rules. 2014/09/12 - 23:33

The facility location problem is an important foundation of the supply chain. The quality of facility location directly influences the production and operation status of an enterprise. Therefore, it is critical for enterprises to adopt scientific and effective methods to assess the facility location problem and reach a sound decision. This article reviews the development status of the location problem, focusing on the linear programming model and a genetic algorithm as analytical methods. In the linear programming model, because the classical tableau calculation method is too complicated and the workload is very large, Excel is proposed to solve the location problem, which can greatly improve the efficiency of enterprise facility location. In addition, a genetic algorithm based on the MATLAB toolbox is applied to another type of facility location problem, which provides a reference method for location decisions under different conditions and for different facilities. 2014/09/12 - 23:33
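
The underlying assignment model can be illustrated without Excel or MATLAB. The tiny brute-force sketch below (invented costs, with capacities counted in customers served) picks, for each customer, the serving facility that minimizes total cost under capacity limits; an LP solver handles the same model at realistic scale.

```python
from itertools import product

def best_assignment(cost, capacity, demand):
    """cost[i][j]: cost of serving customer j from facility i.
    Choose one facility per customer to minimize total cost while
    respecting facility capacities. Brute force; tiny instances only."""
    n_fac, n_cust = len(cost), len(cost[0])
    best, best_cost = None, float("inf")
    for assign in product(range(n_fac), repeat=n_cust):
        used = [0.0] * n_fac
        for j, i in enumerate(assign):
            used[i] += demand[j]
        if any(u > c for u, c in zip(used, capacity)):
            continue  # capacity violated: infeasible assignment
        total = sum(cost[i][j] for j, i in enumerate(assign))
        if total < best_cost:
            best, best_cost = assign, total
    return best, best_cost
```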

Existing top-k high utility itemset (HUI) mining algorithms generate candidate itemsets in the mining process; their time and space performance may be severely affected when the dataset is large or contains many long transactions, and when applied to data streams, the performance of the mining algorithm is especially crucial. To address this issue, we propose a sliding-window-based top-k HUI mining algorithm, TOPK-SW. It first stores each batch of data in the current window, together with the items' utility information, in a tree called HUI-Tree, which allows effective retrieval of utility values without re-scanning the dataset and thus improves mining performance. TOPK-SW was tested on four classical datasets; the results show that TOPK-SW outperforms existing algorithms significantly in both time and space efficiency, with time performance improving by over an order of magnitude. 2014/09/12 - 23:33
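
The utility bookkeeping any HUI miner needs, including TOPK-SW's HUI-Tree, can be sketched directly. The transaction encoding below (item mapped to quantity and unit profit) is an assumption for illustration; the tree and sliding-window logic are omitted.

```python
def itemset_utility(itemset, transactions):
    """Sum, over every transaction containing all items of `itemset`,
    the quantity * unit-profit utility of exactly those items.
    Each transaction maps item -> (quantity, unit_profit)."""
    total = 0
    for t in transactions:
        if all(item in t for item in itemset):
            total += sum(q * p for q, p in (t[i] for i in itemset))
    return total
```

A top-k miner keeps the k itemsets with the largest such utility, raising its internal utility threshold as better itemsets are found.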

Sentiment visualization of tweet topics has recently gained attention due to its ability to efficiently analyze and understand people's feelings about individuals and companies. In this paper, we propose a chart, SentimentRiver, which effectively demonstrates the dynamics of sentiment evolution on a tweet topic. The gradient colors of the river flow indicate the variation of topical sentiments by introducing membership weights for each sentiment class in a fuzzy-mathematical view. Besides, using pointwise mutual information and information retrieval (PMI-IR) scores, representative sentiment words are extracted and labeled in each time slot of the river flow. In the experiments, we compare SentimentRiver on the topic of the Obama election with other statistical charts, which demonstrates its effectiveness for visualizing and analyzing topical sentiments on a tweet stream. 2014/09/12 - 23:33
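
The PMI score used to pick representative sentiment words is a one-liner. In PMI-IR the counts come from search-engine hit counts; here they are plain arguments.

```python
import math

def pmi(count_xy, count_x, count_y, n):
    """Pointwise mutual information of two terms:
    log2( p(x, y) / (p(x) * p(y)) ), with probabilities estimated
    from co-occurrence and individual counts over n documents."""
    return math.log2((count_xy / n) / ((count_x / n) * (count_y / n)))
```

A score of 0 means the terms co-occur exactly as often as chance predicts; each extra bit means twice the expected co-occurrence.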

We propose a novel method for license plate detection in nighttime videos with complicated background scenes. The proposed method combines multiple features in the residual edge image. First, we convert the color space from RGB to La*b* and enhance the L component of the license plate image; second, we extract multiple license plate features; third, the multi-feature combination method is used to detect the license plate; finally, we locate the license plate and segment the characters from the original nighttime images using a nonlinear vector quantization method. The experimental results show that the proposed method is more robust to interfering characters and more accurate than other methods. 2014/09/12 - 23:33

Frequent itemset mining plays an important part in college library data analysis. Because there is a lot of redundant data in library databases, the mining process may generate intra-property frequent itemsets, which hinders efficiency significantly. To address this issue, we propose an improved FP-Growth algorithm, RFP-Growth, that avoids generating intra-property frequent itemsets; to further boost efficiency, we implement its MapReduce version with an additional pruning strategy. The proposed algorithm was tested on both synthetic and real-world library data, and the experimental results showed that it outperformed existing algorithms. 2014/09/12 - 23:33

High-accuracy optimization is a key component of time-sensitive applications in computer science, such as machine learning; in previous work we developed single-GPU and multi-GPU Iterative Discrete Approximation Monte Carlo Optimization (IDA-MCS). However, because of the memory capacity constraints of the GPUs in a workstation, single-GPU and multi-GPU IDA-MCS may perform poorly or even fail on optimization problems with complicated shapes, such as a large number of peaks. In this paper, by parallelizing Iterative Discrete Approximation with CUDA-MPI programming, we develop a GPU-cluster version of IDA-MCS with two different parallelization strategies, Domain Decomposition and Local Search, in the Single Instruction Multiple Data style using CUDA 5.5 and MPICH2, and we demonstrate the performance of GPU-cluster IDA-MCS by optimizing complicated cost functions. Computational results show that, for the same number of iterations on a cost function with millions of peaks, the accuracy of GPU-cluster IDA-MCS is approximately thousands of times higher than that of conventional Monte Carlo Search. They also show that the optimization accuracy of Domain Decomposition IDA-MCS is much higher than that of Local Search IDA-MCS. 2014/09/12 - 23:33
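
The conventional Monte Carlo Search baseline mentioned in the results is simple to state. The sketch below (single-threaded Python over a 1-D interval, not the paper's CUDA-MPI code) minimizes a cost function by uniform random sampling.

```python
import random

def monte_carlo_search(f, lo, hi, n_samples, seed=0):
    """Minimize f over [lo, hi] by sampling uniformly at random
    and keeping the best point seen."""
    rng = random.Random(seed)
    best_x = rng.uniform(lo, hi)
    best_y = f(best_x)
    for _ in range(n_samples - 1):
        x = rng.uniform(lo, hi)
        y = f(x)
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y
```

IDA-MCS, per the abstract, improves on this by iteratively refining a discrete approximation of the search domain instead of sampling it blindly.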

Nowadays, scientists can collect and analyze massive mobile data generated by the various sensors and applications of smartphones. Smartphones have become an important platform for understanding social activities, such as community detection, social dynamics and influence. It is extremely important to store and retrieve mobile data efficiently for various data mining tasks. In this paper, we propose the Mobile Data Warehouse (MobileDW) model, which is based on GraphChi, a system designed for large-scale graph computation on a single PC. We propose a multi-shard data structure and Time-based Parallel Sliding Windows (TPSW) to store social data such as call logs and SMS. We further propose the Mobile Index (MIndex) structure and the Mobile Position Compression Algorithm (MPCA) to warehouse position data such as GPS, Bluetooth, etc. The MIndex structure can compress position data significantly. The compression process is based on the following observations: (1) the position of an individual user often remains unchanged within a certain period of time; (2) a crowd of people tend to move and stay together. Experimental results demonstrate the effectiveness and efficiency of the Mobile Data Warehouse. 2014/09/12 - 23:33
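
Observation (1) suggests plain run-length encoding over the position sequence. The sketch below is our illustration of that idea, not the actual MPCA.

```python
def rle_positions(samples):
    """samples: list of (timestamp, position) in time order.
    Collapse consecutive samples with the same position into
    [first_timestamp, position, count] runs."""
    runs = []
    for ts, pos in samples:
        if runs and runs[-1][1] == pos:
            runs[-1][2] += 1       # extend the current run
        else:
            runs.append([ts, pos, 1])
    return runs
```

A mostly stationary user produces a handful of runs instead of one record per sample, which is exactly where the abstract's "position often unchanged" observation pays off.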

Imbalanced data classification is a hot topic in the field of data mining. Conventional classifiers are not suitable for imbalanced learning tasks, since they tend to classify instances into the majority class, which is the less important one. This paper pays close attention to the uniqueness of uneven data distribution in imbalanced classification problems. Without changing the original imbalanced training data, this paper demonstrates the advantages of proximal classifiers for imbalanced data classification. To improve classification accuracy, this paper proposes a new model named LS-NPPC, based on the classical proximal SVM models that find two nonparallel planes for data classification. The LS-NPPC model is applied to six UCI datasets and one real application. The results indicate the effectiveness of the proposed model for imbalanced data classification problems. 2014/09/12 - 23:33

Most existing task scheduling algorithms for cloud storage fail to take users' QoS preferences into account. In addition, these algorithms result in low user satisfaction because they do not consider the characteristics of cloud storage. To address these problems, the "optimal order comparison method" is used to capture users' QoS preferences; it also helps experts use their professional knowledge to decide the weights of QoS classes. We redefine the fitness function of the particle swarm optimization (PSO) algorithm using these weights and propose the "PSO-based hierarchical task scheduling with QoS preference awareness" (PSO-HQoSPA) algorithm. By considering both user and expert experience, the method can capture users' QoS preferences and handle multiple QoS requirements. The simulation results show that our method achieves an acceptable user satisfaction rate while maintaining the efficiency of the traditional PSO-based method. 2014/09/12 - 23:33
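
The PSO loop that such a fitness function plugs into can be sketched minimally. The 1-D quadratic objective, parameter values, and bounds below are illustrative only; the paper's contribution is the QoS-weighted fitness it substitutes for `f`.

```python
import random

def pso_minimize(f, lo, hi, n_particles=20, iters=100, seed=0):
    """Minimize f over [lo, hi] with a basic particle swarm."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                      # personal best positions
    pbest_y = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_y[i])
    gbest, gbest_y = pbest[g], pbest_y[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive and social pulls
    for _ in range(iters):
        for i in range(n_particles):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))  # clamp to bounds
            y = f(xs[i])
            if y < pbest_y[i]:
                pbest[i], pbest_y[i] = xs[i], y
                if y < gbest_y:
                    gbest, gbest_y = xs[i], y
    return gbest, gbest_y
```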

To establish links between a large number of reviews and events, we propose an approach for matching web reviews with events using event feature segments and semi-Markov conditional random fields (CRFs). We extract named entities and verb phrases from reviews as event feature segments. We use semi-Markov CRFs to label the reviews and to recognize event feature segments at the segment level. This approach uses event feature segments to match reviews and events; therefore, it is more accurate than other approaches that match using only named entities. We use several feature rules to recognize variants of named entities, such as abbreviations and acronyms. In addition, we use a phrase dependency parsing tree to recognize verb phrases. A composite similarity measurement function is presented to combine the similarity results of event feature segments. Experimental results demonstrate that this method can accurately match reviews and events. 2014/09/12 - 23:33

Large-scale social networks have emerged rapidly in recent years and have become complex networks. The structure of social networks is an important research area and has attracted much scientific interest; community is an important structure in social networks. In this paper, we propose a community detection algorithm based on influential nodes. First, we introduce how to find influential nodes based on random walks. Then we combine the algorithm with order statistics theory to find the community structure. We apply our algorithm to three classical datasets and compare it with other algorithms; the experiments prove our community detection algorithm effective. The algorithm also has applications in data mining and recommendation. 2014/09/12 - 23:33
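
Scoring influence by random walk can be sketched with power iteration: on a connected, non-bipartite undirected graph the simple walk's stationary visit probability is proportional to node degree, so frequently visited nodes are natural "influential node" candidates. This is our illustration of the general idea, not the paper's exact procedure.

```python
def random_walk_scores(adj, iters=100):
    """adj: dict node -> list of neighbors (undirected graph).
    Return the stationary distribution of the simple random walk
    (uniform next-neighbor choice), computed by power iteration."""
    nodes = list(adj)
    p = {v: 1.0 / len(nodes) for v in nodes}
    for _ in range(iters):
        nxt = {v: 0.0 for v in nodes}
        for v in nodes:
            share = p[v] / len(adj[v])   # probability mass sent per neighbor
            for u in adj[v]:
                nxt[u] += share
        p = nxt
    return p
```

On a bipartite graph this walk oscillates rather than converges; real implementations add a lazy or teleport step, as PageRank does.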

Association rule mining can find relationships among items. Using an association rule mining algorithm to mine resource faults can reduce the number of wrongly alarmed resources to be replaced. This paper proposes an efficient association rule mining algorithm, CSRule, for mining closed strong association rules based on rule-merging strategies. CSRule adopts several pruning strategies to mine closed strong association rules without storing the candidate set. To improve mining efficiency, CSRule mines closed strong association rules in real time with effective pruning strategies, instead of performing secondary mining purely from the definition. The experimental results show that our algorithm is more efficient than the traditional algorithm. 2014/09/12 - 23:33
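
Support and confidence, the quantities that make a rule "strong", are easy to sketch; the closed-itemset and pruning machinery of CSRule is omitted here.

```python
def rule_stats(antecedent, consequent, transactions):
    """Return (support, confidence) of the rule antecedent -> consequent.

    support    = fraction of transactions containing both sides;
    confidence = among transactions containing the antecedent,
                 the fraction also containing the consequent.
    """
    a, c = frozenset(antecedent), frozenset(consequent)
    n_a = sum(1 for t in transactions if a <= t)
    n_ac = sum(1 for t in transactions if (a | c) <= t)
    support = n_ac / len(transactions)
    confidence = n_ac / n_a if n_a else 0.0
    return support, confidence
```

A strong rule is one whose support and confidence both clear user-set thresholds; a closed rule additionally has no superset with the same support.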

With the rapid development of the Internet, the Internet of Things and other information technologies, big data usually exists in cyberspace in the form of data streams. This brings great benefits to the information society, but it also poses crucial challenges for big data mining over data streams. Recently, the academic and industrial communities have shown widespread concern for massive data mining problems and achieved some impressive results; for big data mining over data streams, however, this research is not yet sufficient. In this paper, we analyze the characteristics of stream data mining in a big data environment, discuss the challenges and research issues of big data mining over data streams, and summarize the research achievements in massive data mining. 2014/09/12 - 23:33

In this paper, we build MILBoost from AdaBoost and propose a new MILBoost approach to automatically recognize facial expressions from video sequences. First, we determine facial velocity information using an optical flow technique, which is used to characterize facial expression. Then, visual words based on facial velocity are used to represent facial expressions with a Bag of Words model. Finally, the MILBoost model is used for facial expression recognition; to improve recognition accuracy, the class label information is used when learning the MILBoost model. Experiments were performed on a facial expression dataset we built ourselves to evaluate the proposed method; the results show that the average recognition accuracy is over 89.2%, which validates its effectiveness. 2014/09/12 - 23:33

Metadata distribution is important in mass storage systems. Sub-tree partitioning and hashing are the two traditional metadata distribution algorithms used in file systems, but both have scalability defects. This paper presents a new metadata management method, Directory Path Code Hash (DPCH). This method stores directory and file metadata separately, effectively solving the unbalanced metadata distribution and hot-spot access problems of sub-tree partitioning, as well as the excessive read counts and large metadata migration after directory property modification in hash algorithms. The experiments indicate that the proposed method significantly outperforms other algorithms in terms of throughput, metadata distribution, read counts, etc. 2014/09/12 - 23:33
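
The general hash-based placement that DPCH refines can be sketched in a few lines: hash the parent directory, so sibling entries co-locate on one metadata server. The server count and the choice of SHA-256 are illustrative assumptions, not details from the paper.

```python
import hashlib

def metadata_server(path, n_servers):
    """Map the parent-directory part of `path` to a server index, so
    all entries of one directory land on the same metadata server."""
    directory = path.rsplit("/", 1)[0] or "/"
    digest = hashlib.sha256(directory.encode()).digest()
    return int.from_bytes(digest[:8], "big") % n_servers
```

Hashing the directory rather than the full path is what keeps a directory listing on one server; the trade-off, which the abstract's criticism of plain hashing points at, is that renaming a directory changes the hash and forces metadata migration.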

The separation-of-concerns design principle improves software reuse, understandability, extensibility and maintainability. With the object-oriented paradigm, it is not always possible to separate the different concerns of an application into independent modules. The result is that the source code of crosscutting concerns is tangled and scattered across the whole application. Aspect-oriented programming offers a higher level of modularity, providing a solution to the code tangling and scattering problem. To show how aspect-oriented programming can be used as a suitable mechanism to improve the modularity of object-oriented applications, this expository article presents the implementation of a typical design pattern following both the object- and aspect-oriented paradigms. The two approaches are compared from the modularity perspective, with a discussion of the benefits provided and their current use. 2014/09/12 - 23:33
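
Python has no aspect weaver like AspectJ, but a decorator conveys the same separation idea: the crosscutting logging concern below is written once and "woven" around business code without editing it. The function names and trace format are invented for illustration.

```python
import functools

TRACE = []  # stands in for a real logger

def logged(fn):
    """'Advice' wrapped around any function: record entry and exit
    without touching the function's own source code."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        TRACE.append(f"enter {fn.__name__}")
        result = fn(*args, **kwargs)
        TRACE.append(f"exit {fn.__name__}")
        return result
    return wrapper

@logged
def transfer(amount):
    return amount * 2  # placeholder business logic
```

The business function stays free of logging code: the crosscutting concern lives in one module instead of being scattered across every method that needs it.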

Semantic similarity between words is becoming a generic problem for many applications of computational linguistics and artificial intelligence. The difficulty lies in developing a computational method capable of generating satisfactory results close to human perception. This paper proposes a semantic similarity approach based on multi-feature combination. One of the benchmarks is Miller and Charles' list of 30 noun pairs with manually assigned similarity ratings. We correlate our experimental results with those computed by several other methods. Experiments on Chinese word pairs show that our approach is close to human similarity judgments. 2014/09/12 - 23:33

The analytic hierarchy process (AHP) has been applied in many fields, especially to complex engineering problems and applications. The AHP is capable of structuring decision problems and deriving mathematically determined judgments built on knowledge and experience. This suggests that the AHP should prove useful in agile software development, where complex decisions occur routinely. This paper provides a ranking approach to help the XP team set the rules for pairing two persons in pair programming and proposes several criteria that can be used for the AHP evaluation. Two academic and three industrial case studies have applied the AHP to decide these pairing rules. 2014/09/12 - 23:33

Managing tacit knowledge in organizations raises substantial challenges with regard to the associated processes. In software project management, decision making plays the critical role in this scenario, since it defines the manager's responsibilities and stems from the various sources linked to the process. With respect to tacit knowledge, decision making constitutes the essential foundation and thus needs a reliable framework for modeling the decision structure. In this paper, a conceptual multi-method simulation-based framework is introduced to cover multiple levels of the decision structure of the software project management process. The methods used are integrated into a multi-method simulation model, where each method exclusively realizes a distinct aspect of software project management. The framework evolves the manner of decision making through a paradigm that establishes the foundation for tactical-level understanding and decision support for practitioners. In the results section, an optimal policy for the framework is presented. 2014/09/12 - 23:33

For many consumers, online shopping has become a major way to shop, so e-commerce and related industries have enjoyed fast growth in recent years. Online retailers interact with their customers via Web-based, so-called virtual storefronts. Inevitably, various unpleasant shopping experiences keep emerging along with the increasing adoption of shopping via virtual storefronts. A successful online retailer must be aware of these negative factors and know how to handle them effectively. This research work investigates the perceptions of online shoppers, identifies the critical incidents leading to consumers' unpleasant experiences during shopping, and gains insight into the reasons behind these experiences. Furthermore, a set of solutions for increasing customer satisfaction is proposed accordingly. 2014/09/12 - 23:33

Component retrieval is important for improving software productivity in component-based software development (CBSD). In this paper, both static and dynamic behavior information of the component interface are considered as retrieval items for the component retrieval system, and interface automata are adopted as the model to describe the retriever's query and the components in the repository. Three kinds of matching models are developed to support exact or approximate matching according to the information the retriever can give. The implementation of the matching is illustrated using the incidence matrix of the digraph corresponding to the interface automaton. A retrieval algorithm is developed in which offline computation of matching relationships in the repository is used to reduce the search space and amend the retriever's request. 2014/09/12 - 23:33

In the process of image segmentation, the classic Fuzzy C-Means (FCM) algorithm is time-consuming and depends heavily on the initial cluster centers. Based on the Graphics Processing Unit (GPU), this paper proposes a novel FCM algorithm that improves the computational formulas for membership degree and the update criterion for cluster centers. Our algorithm can initialize cluster centers purposefully and further optimize them according to an analysis of the thread model of the graphics hardware. Experimental comparisons with the classic FCM algorithm show that our algorithm has an obvious advantage in both image segmentation quality and efficiency. 2014/08/01 - 19:06
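For readers unfamiliar with the baseline, the classic FCM iteration the paper accelerates can be sketched on toy 1-D data. This pure-Python sketch shows only the standard alternating membership/center updates; the paper's GPU parallelization and purposeful center initialization are not reproduced.

```python
# Classic FCM on 1-D points: alternate the membership update
#   u[k][i] = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
# and the center update
#   c[i] = sum_k u[k][i]^m * x_k / sum_k u[k][i]^m

def fcm(points, centers, m=2.0, iters=50, eps=1e-9):
    for _ in range(iters):
        u = []
        for x in points:
            d = [abs(x - c) + eps for c in centers]  # eps avoids zero division
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(len(centers)))
                      for i in range(len(centers))])
        centers = [sum(u[k][i] ** m * points[k] for k in range(len(points))) /
                   sum(u[k][i] ** m for k in range(len(points)))
                   for i in range(len(centers))]
    return centers, u

# Two well-separated 1-D clusters around 1.0 and 5.0 (made-up data).
centers, memberships = fcm([1.0, 1.1, 0.9, 5.0, 5.1, 4.9], [0.0, 6.0])
```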

Location-based applications require a user's location data to provide customized services. However, location data is a sensitive piece of information that should not be revealed unless strictly necessary, which has motivated a number of location privacy protection methods, such as anonymity and obfuscation. However, in many applications one needs to verify the authenticity and other properties (e.g. inclusion in an area) of location data, which becomes an intractable problem once location privacy protection is in use. Achieving both location assurance, i.e. assuring the authenticity and other properties of location data, and location privacy protection seems to be a hard problem without complex trusted computing techniques. By borrowing range proof techniques from cryptography, however, we achieve both with minimized trusted computing assumptions. The Pedersen commitment scheme is employed to give location data a commitment, which can be used for future location assurance. Area proof, i.e. testing whether a private location lies within some area, is employed to test whether the location data behind the commitment is within a given area. Our system model does not rely on a trusted third party, and we give reasonable justifications for the system model and for the trusted computing assumptions. We present a new range proof protocol and a new area proof protocol based on a new data structure, the Perfect $k$-ary Tree (PKT). Some deeper properties of the PKT are presented and used to analyze our protocols' complexity. The analysis shows that our protocols are more efficient than previous ones and flexible enough to support existing mobile applications, such as tracking services and location-based access control. 2014/08/01 - 19:06
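The commitment primitive the scheme builds on can be illustrated with a toy Pedersen commitment, C = g^v * h^r mod p. The parameters below are deliberately insecure demo values (a real deployment needs a large prime-order group and generators whose relative discrete log is unknown), and the range/area proof protocols themselves are not sketched here. The assertion at the end shows the additive homomorphism that range proofs typically exploit.

```python
# Toy Pedersen commitment over Z_p (demo parameters only, NOT secure).
import secrets

P = 2 ** 127 - 1   # a Mersenne prime, used here purely for demonstration
G, H = 3, 7        # demo bases; in practice log_G(H) must be unknown

def commit(value, r=None):
    """Commit to an integer: C = G^value * H^r mod P. Returns (C, r)."""
    r = secrets.randbelow(P - 1) if r is None else r
    return pow(G, value, P) * pow(H, r, P) % P, r

def opens(c, value, r):
    """Check that (value, r) is a valid opening of commitment c."""
    return c == pow(G, value, P) * pow(H, r, P) % P
```

Because C(a, r1) * C(b, r2) = C(a + b, r1 + r2), a verifier can check statements about sums of committed values without learning the values themselves.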

Normal communication for deaf people in everyday life still remains an unsolved task, despite the fact that Sign Language Recognition (SLR) has made big improvements in recent years. We address this problem by proposing a portable and low-cost system that has demonstrated its effectiveness in translating gestures into written or spoken sentences. The system relies on a home-made sensory glove, used to measure hand gestures, and on Wavelet Analysis (WA) and a Support Vector Machine (SVM) to classify the hand's movements. In particular, we devoted our efforts to translating the Italian Sign Language (LIS, Linguaggio Italiano dei Segni), applying WA for feature extraction and an SVM for the classification of one hundred different dynamic gestures. The proposed system is light and neither intrusive nor obtrusive, so it can be easily used by deaf people in everyday life, and it has demonstrated valid results in terms of sign-to-word conversion. 2014/08/01 - 19:06
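To make the feature-extraction step concrete, here is a minimal sketch of one level of a Haar wavelet decomposition applied to a glove sensor signal. The paper uses full wavelet analysis followed by an SVM; only the decomposition idea is shown, and the signal values are made up.

```python
# One level of the Haar wavelet transform: split an even-length signal
# into approximation (pairwise averages) and detail (pairwise half-
# differences) coefficients, which can serve as classifier features.

def haar_step(signal):
    approx = [(signal[i] + signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
    return approx, detail

# Hypothetical flex-sensor readings from one glove finger.
approx, detail = haar_step([1.0, 3.0, 2.0, 2.0])
```

Repeating `haar_step` on the approximation coefficients yields a multi-level decomposition; the coarse approximations capture the overall trajectory of the gesture while the details capture fast movements.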

We analysed and resolved a possible singularity in an improved MFE multivariate public key cryptosystem (Medium Field multivariate public key Encryption) and studied its use in software copy protection. We used our new MFE multivariate public key cryptosystem to design a software registration algorithm in which a given plaintext can produce multiple ciphertexts. Breaking the scheme is hard because the ciphertext is variable, which enhances its ability to withstand algebraic attacks. The dependence of the registration string on the machine's fingerprint prevents any registration string from being shared by multiple machines. 2014/08/01 - 19:06
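The MFE cryptosystem itself is too involved to reproduce here, but one property the abstract relies on, binding the registration string to a machine fingerprint so it cannot be shared, can be illustrated with a deliberate substitution: an HMAC under a vendor-side secret instead of MFE encryption. The key and fingerprint strings are hypothetical.

```python
# Hedged sketch: fingerprint-bound registration via HMAC (a substitute
# for the paper's MFE-based scheme, shown only to illustrate the binding).
import hashlib
import hmac

VENDOR_KEY = b"demo-vendor-secret"  # hypothetical vendor-side secret

def registration_string(machine_fingerprint: str) -> str:
    """Derive a registration string tied to one machine's fingerprint."""
    return hmac.new(VENDOR_KEY, machine_fingerprint.encode(),
                    hashlib.sha256).hexdigest()

def verify(machine_fingerprint: str, reg: str) -> bool:
    """Accept the registration only on the machine it was issued for."""
    return hmac.compare_digest(registration_string(machine_fingerprint), reg)
```

Unlike the paper's scheme, this HMAC is deterministic; the MFE construction additionally randomizes the ciphertext so that one plaintext yields many valid registration strings.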

In the information age, IT companies pay more and more attention to quality of service. Social network service providers concentrate on saving costs and developing products of better quality. The explosion of Web services has provided good opportunities for these companies, but has also brought severe challenges. Therefore, efficiency and reusability are important factors that developers should consider. To meet the requirements above, this paper designs a social network service model with the Service-Oriented Architecture Modeling Language (SOAML), which supports service-oriented modeling and design. The service-oriented modeling method focuses on abstracting the different services and finding the relationships among them. The key steps of this method are decomposing the business processes, selecting the candidate services, and extracting the control processes. A comparison with other modeling methods shows that the SOAML modeling method introduced in this paper offers high efficiency, high reusability, and low cost. 2014/08/01 - 19:06

Collaborative Filtering (CF) has proved effective for recommendation and is widely used in recommender systems for online stores. The mechanism of this method is to find similarities among users' rating scores; an item can then be recommended based on similar users' choices. The calculation of user similarity is based on distance metrics and vector similarity measures. However, the effectiveness of CF methods is limited by several problems, such as the new-item problem and how to recommend items in the long tail. Data sparsity, i.e. few scores in the user rating matrix, makes it difficult to find relationships among users for recommendation. It is therefore particularly important to design new similarity metrics based on the inherent relationships between items rather than on users' rating scores. In this paper, we introduce an approach that uses ontology-based similarity to estimate missing values in the user rating matrix. To accommodate different features of items, we investigate several metrics for estimating the similarity of item ontologies, such as Tversky's similarity, Spearman's rank correlation coefficient, and Latent Dirichlet Allocation. Missing rating scores are filled in based on the similarity of the item ontologies. With the new rating matrix, the original CF method achieves better recall. Experiments on the Hetrec'11 dataset were carried out to evaluate the proposed methods using Top-N recall metrics. The results show the effectiveness of the proposed method compared with state-of-the-art approaches in new-item cold-start and long-tail situations. 2014/08/01 - 19:06
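One of the metrics named above, Tversky's set-based similarity, is easy to sketch, together with a similarity-weighted fill-in of a missing rating. This is a hedged illustration, not the paper's pipeline: the item feature sets and ratings are hypothetical, and the paper combines several metrics over a full item ontology.

```python
# Tversky index on item feature sets:
#   S(A, B) = |A∩B| / (|A∩B| + alpha*|A−B| + beta*|B−A|)
# With alpha = beta = 0.5 it reduces to the Dice coefficient.

def tversky(a, b, alpha=0.5, beta=0.5):
    a, b = set(a), set(b)
    inter = len(a & b)
    denom = inter + alpha * len(a - b) + beta * len(b - a)
    return inter / denom if denom else 0.0

def impute_rating(target_features, rated_items):
    """Estimate a missing rating as the similarity-weighted average of
    the user's known ratings; rated_items is [(features, rating), ...]."""
    weights = [(tversky(target_features, f), r) for f, r in rated_items]
    total = sum(w for w, _ in weights)
    return sum(w * r for w, r in weights) / total if total else None
```

A filled-in matrix produced this way can then be fed to the original CF method unchanged.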