Journal of Software Engineering

A service class can be formed from Web services that share the same function, abstracted as a single operation. Several operations on the service class are defined to reflect its self-adaptation. On the basis of the service class, Web service discovery and composition algorithms are given. In the discovery process, a projective operation is proposed and a selection algorithm is provided to choose the optimal Web services; in the composition process, several service composition operations are given. Basing discovery and composition on service classes improves their efficiency. Finally, the effectiveness of the proposed methods is illustrated by a service composition for travel planning. 2013/04/30 - 17:57
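The projection-then-selection idea can be sketched as follows. This is a minimal illustration, not the paper's actual model: the `WebService` class, the single `response_time` QoS attribute and the travel-domain service names are all assumptions.

```python
# Hypothetical sketch: discovery over a service class (services sharing
# one abstract operation). "Project" keeps signature-compatible services;
# "select" picks the best one by a QoS attribute.

class WebService:
    def __init__(self, name, inputs, outputs, response_time):
        self.name = name
        self.inputs = set(inputs)        # parameters the service needs
        self.outputs = set(outputs)      # results the service produces
        self.response_time = response_time  # illustrative QoS attribute

def project(service_class, available_inputs, required_outputs):
    """Projective operation: keep services whose signature matches."""
    return [s for s in service_class
            if s.inputs <= set(available_inputs)
            and set(required_outputs) <= s.outputs]

def select_optimal(candidates):
    """Selection: choose the candidate with the lowest response time."""
    return min(candidates, key=lambda s: s.response_time)

# A toy "flight booking" service class for a travel-planning composition.
flight_class = [
    WebService("FlightA", ["date", "city"], ["ticket"], 120),
    WebService("FlightB", ["date", "city"], ["ticket", "seat"], 80),
    WebService("FlightC", ["date"], ["ticket"], 50),
]
best = select_optimal(project(flight_class, ["date", "city"], ["ticket"]))
```

A composition algorithm would then chain the selected service of each class (flight, hotel, taxi) so that the outputs of one stage satisfy the inputs of the next.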

Recently, image fusion has attracted considerable interest in various areas. Presented is a novel image fusion method: medical image fusion with guided filtering and pixel screening. First, the source images are merged by weighted fusion to produce the input image for subsequent filtering. Then, the filtering output is compressed by Dynamic Range Compression (DRC) to highlight edge information. Finally, a pixel screening strategy is exploited to refine the texture structure of the fused result. Compared with the fusion results of weighted averaging, the Discrete Wavelet Transform (DWT) and plain guided filtering, the Mutual Information (MI) of the proposed method is the largest, and the fusion results are also very satisfactory in terms of edge and texture information. The comparison shows that the method outperforms state-of-the-art fusion schemes in improving the quality of the fused image. 2013/04/30 - 17:57
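The first two steps of such a pipeline can be sketched on 1-D toy signals. This is a simplified assumption-laden illustration: the equal fusion weights and the logarithmic compressor stand in for whatever the paper actually uses, and the guided-filter and pixel-screening stages are omitted.

```python
# Toy sketch of weighted fusion followed by dynamic range compression.
# Real inputs would be 2-D images; 1-D rows keep the idea visible.
import math

def weighted_fusion(img_a, img_b, w=0.5):
    """Pixel-wise weighted average of two source images."""
    return [w * a + (1 - w) * b for a, b in zip(img_a, img_b)]

def dynamic_range_compression(img, c=1.0):
    """Log compressor: boosts low-intensity detail relative to highlights."""
    return [c * math.log1p(p) for p in img]

ct  = [0.0, 10.0, 200.0, 255.0]   # toy CT row
mri = [255.0, 50.0, 20.0, 0.0]    # toy MRI row
fused = weighted_fusion(ct, mri)
compressed = dynamic_range_compression(fused)
```

A pixel screening step would then compare each compressed pixel against the source images and keep whichever value best preserves local texture.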

The existence of infeasible program paths remains an obstacle in applying static analysis to software engineering activities such as test data generation and bug finding. Knowledge about these infeasible paths is valuable for improving the precision of static analysis. This study presents a hybrid approach for detecting infeasible paths effectively by combining program analysis and data mining techniques. The approach is based on two assumptions: (1) most infeasible paths are caused by branch correlations and (2) the runtime values of correlated branch predicates display patterns which reveal the underlying correlations between the branches. The approach discovers candidate correlation rules by mining data collected through instrumenting target programs and executing adequate test cases. It then scores the infeasibility of each program path according to the number of rules the path breaks. The evaluation shows that the approach detects a large portion of infeasible paths, some of which are difficult to identify with existing infeasible-path detection methods. 2013/01/24 - 22:58
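The scoring step can be sketched as follows. The rule representation (an antecedent branch outcome implying a consequent outcome) and the example rules are illustrative assumptions, not the paper's mined rules.

```python
# Hypothetical sketch: score a path's infeasibility by counting how many
# mined branch-correlation rules it breaks. A rule (A, C) reads: if the
# path takes the outcomes in A, it should also take the outcomes in C.

rules = [
    ({"b1": True}, {"b2": False}),    # taking b1 implies not taking b2
    ({"b3": False}, {"b4": False}),
]

def infeasibility_score(path_outcomes, rules):
    """Higher score = path contradicts more correlation rules."""
    broken = 0
    for antecedent, consequent in rules:
        antecedent_holds = all(path_outcomes.get(b) == v
                               for b, v in antecedent.items())
        violates = any(path_outcomes.get(b) != v
                       for b, v in consequent.items() if b in path_outcomes)
        if antecedent_holds and violates:
            broken += 1
    return broken

# Path takes b1 AND b2, contradicting the first rule.
path = {"b1": True, "b2": True, "b3": False, "b4": False}
score = infeasibility_score(path, rules)
```

Paths with the highest scores would be flagged as likely infeasible and pruned from subsequent test data generation.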

“Hooks” are an important part of tool integration in software engineering. They allow any development tool to broadcast a development event to subscribing tools. Most existing software development tools have a rich catalog of well-defined events which can be exploited by third parties. This allows any tool to have a complete view of the development environment, without forcing the team to adopt a monolithic, all-encompassing tool. However, process-support tools have been rather weak contributors to such an integration strategy, giving preference to a style of integration in which the process-support tool is the central orchestrator of the development environment. This study argues not only that process-support tools have a rich catalog of events of interest to third-party tools, but also that the availability of such events can significantly improve the overall level of development support. It thus proposes a formalism for modeling process events, identifies a set of process events of interest to other development tools and describes an implementation of the approach in a process server. 2013/01/24 - 22:58
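The broadcast-to-subscribers mechanism described above is essentially publish/subscribe; a minimal sketch follows. The event name and payload fields are invented for illustration and are not from the paper's formalism.

```python
# Minimal publish/subscribe sketch of the "hook" mechanism: a
# process-support tool broadcasts process events; any subscribing
# development tool reacts independently.

class EventBus:
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, event_type, handler):
        """Register a tool's handler for one event type."""
        self._subscribers.setdefault(event_type, []).append(handler)

    def broadcast(self, event_type, payload):
        """Deliver the event to every subscriber, in registration order."""
        for handler in self._subscribers.get(event_type, []):
            handler(payload)

bus = EventBus()
log = []
# Two hypothetical tools subscribe to the same process event.
bus.subscribe("task_started", lambda p: log.append(f"IDE: opened {p['artifact']}"))
bus.subscribe("task_started", lambda p: log.append(f"Tracker: task {p['id']} active"))
# The process server broadcasts; neither tool orchestrates the other.
bus.broadcast("task_started", {"id": 42, "artifact": "Parser.java"})
```

The point of the design is that the process server is just another publisher, not the central orchestrator.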

The basic theory of Particle Swarm Optimization (PSO) is introduced and illustrated with a flowchart. In this study, one of its improved variants, Adaptive Particle Swarm Optimization (APSO), is introduced. Characteristics of the basic PSO algorithm are outlined. Several current APSO methods are introduced and analyzed together with their parameters, and the limitations of these APSO algorithms are discussed. It is pointed out that APSO algorithms can be improved by adjusting their parameters, and some hybrid APSO variants are mentioned. Finally, it is pointed out that the application of PSO needs to be extended: hybridization with other algorithms is considered a good way to improve APSO, and applying the improved algorithm to complex problems is the goal of our study. 2012/08/30 - 13:13
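A minimal PSO with one common adaptive-parameter strategy (a linearly decreasing inertia weight) can be sketched as follows; the specific coefficients and the quadratic test function are illustrative assumptions, not any particular APSO variant from the literature surveyed above.

```python
# Basic PSO minimizing f(x) = x^2 in one dimension, with the inertia
# weight w decreased linearly from 0.9 to 0.4 over the run -- one of the
# simplest adaptive-parameter schemes.
import random

def pso(f, n_particles=20, iters=100, lo=-10.0, hi=10.0, seed=1):
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                     # each particle's personal best
    gbest = min(pos, key=f)            # swarm's global best
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters      # linearly decreasing inertia weight
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (w * vel[i]
                      + 2.0 * r1 * (pbest[i] - pos[i])    # cognitive term
                      + 2.0 * r2 * (gbest - pos[i]))      # social term
            pos[i] += vel[i]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i]
    return gbest

best = pso(lambda x: x * x)
```

APSO variants replace the fixed schedule for `w` (and the two acceleration coefficients) with feedback-driven updates based on swarm state.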

A Multi Agent System (MAS) is a collection of agents that work together to achieve a goal through communication and collaboration. MASs are often distributed, and agents have proactive and reactive features which are very useful. Cloud computing moves application software and databases to large data centers or Cloud Data Storages (CDSs), where the management of the data and services may not be fully trustworthy. Considering that the data is distributed, updated and created through different sources, this unique attribute poses many new security challenges which have not been well understood. To ensure the confidentiality, correctness assurance, availability and integrity of users’ data in the cloud, a security framework based on a MAS architecture is proposed. The prototype is named GSecaaS (Ganawa Security as a Service); it uses specialized autonomous agents for specific services and allows the agents to interact. The proposed MAS architecture includes five types of agents: Cloud Service Provider Agent (CSPA), Cloud Data Confidentiality Agent (CDConA), Cloud Data Correctness Agent (CDCorA), Cloud Data Availability Agent (CDAA) and Cloud Data Integrity Agent (CDIA). To simulate the agents, Oracle database packages and triggers are used to implement agent functions and Oracle jobs are utilized to create the agents. Each agent is an instance that can work independently and can communicate with other agents in order to fulfill its needs or the requests of others. Rasch software is used to analyze the data. 2012/08/30 - 13:13

In general, a large number of different Intellectual Property (IP) cores can be implemented on a System-On-Chip (SOC) in parallel. However, this is not resource efficient because, depending on the application, only a subset of those cores is active at any given time. This study focuses on the design method of Dynamic Partial Reconfiguration (DPR) and designs and implements a DPR system to address the problem. Experimental results show that DPR can greatly improve FPGA resource utilization and reduce reconfiguration time. 2012/06/28 - 07:41

Workflow management systems have received much attention in the last few years as tools for improving business process efficiency within organizations. They aim to structure and decompose business processes and to assist in coordinating, scheduling, executing and monitoring organizational activities. This study describes the design and implementation of a workflow system for the University of Bahrain to handle the students’ exemption fees process. The system is built on top of an object-oriented database system and incorporates a number of modeling functionalities supporting adaptive features such as object orientation, roles, rules and other active capabilities. 2012/06/28 - 07:41

This research presents an empirical study on the program comprehension and debugging processes of novice programmers. We provide empirical evidence that increased exposure to a large number of quality code modification and adaptation exercises in formal teaching is a viable technique for novices to learn program debugging, but not program comprehension. The empirical study is based on case studies at one of the Malaysian universities among first-degree Information Technology students taking Java Programming, an elective programming course. We design a quasi-experiment with a non-randomized, quota-sampled control group and a pre-test/post-test. The experiment examines the program comprehension and debugging constructs at the micro level. Code segments in the Java programming language of 5-25 lines are given to the students to comprehend or debug manually with pen and paper within a specific timeframe; these form part of the normal assessment tests for the course. The pre-test involves correct code, while the post-test involves both correct code and code with logical and run-time bugs. A control group of 80 students and a treated group of 24 students form the non-randomized quota samples. 2012/03/20 - 19:51

Secure software is the demand of the time in this connected world, and security needs to be given high priority in the software development life cycle. Treating security as a non-functional requirement and giving it only an afterthought once the software is developed results in software with vulnerabilities. Software engineering still lags in secure software development processes. Due to its criticality, security should be integrated into the software life cycle from the very beginning of development. The current research implements secure development phases in the spiral model and proposes a new security-aware spiral model. 2012/03/20 - 19:51

This research studies the key problem of attribute weight allocation in multi-attribute decision making. Proper attribute weighting is very important in the decision process. In order to handle the uncertainty arising from the distribution of attribute values, a novel weight allocation method is presented based on optimization theory and the maximum entropy principle. A mathematical model for linear combination weights is also derived. Numerical results on typical test data demonstrate the efficiency of the method. 2012/03/20 - 19:51
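One common way entropy enters attribute weighting is the entropy weight method: attributes whose values vary little across alternatives carry little information and receive low weight. This sketch shows that general idea only; it is an assumption, not necessarily the paper's model, and the toy decision matrix is invented.

```python
# Entropy-based attribute weighting sketch: an attribute's weight grows
# with how much its values differ across the alternatives.
import math

def entropy_weights(matrix):
    """matrix[i][j]: positive value of attribute j for alternative i."""
    m, n = len(matrix), len(matrix[0])
    divergences = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]               # normalize the column
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        divergences.append(1 - e)                  # low entropy -> high weight
    s = sum(divergences)                           # assumes at least one
    return [d / s for d in divergences]            # attribute is informative

# Attribute 0 is identical across alternatives (uninformative);
# attribute 1 varies strongly.
data = [[5.0, 1.0],
        [5.0, 9.0],
        [5.0, 2.0]]
w = entropy_weights(data)
```

A uniform column has maximal entropy and therefore zero divergence, so all the weight flows to the varying attribute.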

Pairwise constraints can effectively improve clustering results; however, noisy constraints seriously affect clustering performance. To improve distributed clustering with constraints, this paper presents distributed k-means based on soft constraints, in which constraint violations can be effectively dealt with. Given the limitations of distributed clustering, such as communication cost and data privacy, only positive constraints expressed as chunklets are used in the proposed method. To simplify the treatment of constrained data points, the mean value of a chunklet is used as its representative point; positive constraints within a chunklet are then approximately transformed into pairwise positive constraints between each data point of the chunklet and the mean value. Thus, the cluster label of each mean value serves as the label estimate for the data points of the chunklet. Based on this approximation, a new partition-cost measure handling constraint violations is defined. For unconstrained data points, the within-cluster sum of squared distances is minimized; for constrained data points, the sum of distances between data points and the corresponding centroids plus the cost of constraint violations is minimized as well. The experimental results show that the proposed method decreases the computational complexity of handling constraint violations, and compared with hard-constrained distributed clustering, its clustering accuracy is increased. 2012/01/03 - 03:49
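The chunklet approximation can be sketched in a few lines. This toy uses 1-D data and fixed centroids purely for illustration; the full method iterates centroid updates and adds the violation-cost term, which is omitted here.

```python
# Sketch of the chunklet approximation: each chunklet (points known to
# share a label) is represented by its mean, and the mean's cluster
# label is used as the label estimate for all its members.

def mean(points):
    return sum(points) / len(points)

def assign(point, centroids):
    """Index of the nearest centroid (squared distance)."""
    return min(range(len(centroids)), key=lambda k: (point - centroids[k]) ** 2)

centroids = [0.0, 10.0]               # fixed for the sketch
chunklets = [[0.5, 1.5, 1.0],         # positive constraints: same cluster
             [9.0, 10.5]]

labels = {}
for chunk in chunklets:
    rep = mean(chunk)                 # representative point of the chunklet
    k = assign(rep, centroids)        # one assignment for the whole chunklet
    for p in chunk:
        labels[p] = k                 # label estimate propagated to members
```

Note how a point near the boundary never gets assigned against its chunklet: the group decision via the mean is what enforces the positive constraints cheaply.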

Flooding is a natural part of a river's life cycle, but it is a major disaster affecting many regions around the world, year after year. Malaysia is among the countries that face potential flooding problems due to rapid development and improper river systems. The Skudai river basin covers an area of 293.7 km² in the south-western part of Johor in Malaysia. The Skudai River has come under the spotlight due to the impacts of future development projects: land clearing for urbanization and infrastructure construction may increase the magnitude of floods. A flood risk map is one of the best ways to study and understand flood behavior. To produce flood levels at various locations along the river and flood plain, hydraulic modeling is required to carry out the flood simulation. However, analyzing a river system requires a tremendous amount of data such as rainfall distribution, river properties and, most importantly, the flood plain topography. This study presents flood mapping results for the Skudai River basin in Johor, Malaysia, using InfoWorks 1D modeling software. The tasks involved hydrological modeling, hydrodynamic modeling, ground modeling and generation of the flood risk map. The results show that eighteen locations are affected by a flood of 100-year ARI. 2012/01/03 - 03:49

Data clustering is a powerful technique for discovering knowledge from textual documents. In this field, K-means family algorithms have many applications because of their simplicity and high speed in clustering large-scale data. In these algorithms, the cosine similarity criterion only measures the pairwise similarity of documents and performs poorly when the clusters are not well separated. By contrast, the concepts of Neighbors and Link, which bring global information into the closeness computation of two documents in addition to their pairwise similarity, perform better. In the basic model, however, semantic relations between words are ignored and only documents sharing the same terms are clustered together. This study uses the WordNet ontology to build a new document representation model in which semantic relations between words reweight term frequencies in the vector space model; the Neighbors and Link concepts are then applied to this model. Results on real-world text data show that the proposed method (Semantic Neighbors) outperforms previous methods and is more efficient in text document clustering. 2012/01/03 - 03:49
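The Neighbors and Link notions can be made concrete on toy term-frequency vectors. The similarity threshold and the four tiny "documents" below are illustrative assumptions; the usual definitions (neighbors = documents above a similarity threshold, link = count of common neighbors) are what the sketch implements.

```python
# Neighbors/Link sketch over toy term-frequency vectors: two documents
# are neighbors if their cosine similarity reaches a threshold; the link
# between two documents is the number of neighbors they share.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def neighbors(docs, i, theta=0.5):
    return {j for j in range(len(docs))
            if j != i and cosine(docs[i], docs[j]) >= theta}

def link(docs, i, j, theta=0.5):
    """Common-neighbor count: global evidence that i and j belong together."""
    return len(neighbors(docs, i, theta) & neighbors(docs, j, theta))

docs = [[1, 1, 0],   # doc 0
        [1, 1, 1],   # doc 1
        [0, 1, 1],   # doc 2
        [1, 0, 0]]   # doc 3
common = link(docs, 0, 2)
```

Documents 0 and 2 share only one term, yet their shared neighbor (document 1) still ties them together; this is the "general information" that pure pairwise cosine misses. The semantic step of the paper would additionally reweight the vectors using WordNet relations before computing these quantities.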

A combined approach to enhance the security of oil and water supply pipeline infrastructures is proposed and investigated. The most important factor limiting the performance of the traditional system is manual patrolling, which is difficult and only provides observation where the patrolling team is present. This study highlights that by deploying a wireless sensor network in conjunction with conventional methods and a microwave network, the time to report any leakage to the control room can be reduced considerably. This in turn provides the ability to protect the pipelines from further loss or damage and from discontinuation of operation. The simulation tests were carried out under three different scenarios: minor leakage, major leakage and criminal activity intended to breach a pipeline. The simulation results demonstrate that the proposed system increases the reliability of pipeline monitoring. The paper presents the motivation for, and the potential advantages of, the proposed WSN system for enhancing the security of oil and water supply pipelines. 2012/01/03 - 03:49

Security plays an important role in the development of Multi Agent Systems (MAS). However, a careful analysis of software development processes shows that the definition of security requirements is usually considered only after the design of the system. This is mainly due to the fact that agent-oriented software engineering methodologies have not integrated security concerns throughout their development stages. Designing a team of agents that can work together toward a common goal is one of the challenges in the research area of agent-oriented software engineering. Prometheus is an agent-oriented software engineering methodology, and the Prometheus Design Tool (PDT) is a graphical editor which supports the design tasks specified within the Prometheus methodology for designing agent systems; the tool propagates information where possible and ensures consistency between the various parts of the design. The main purpose of this paper is to design a MAS architecture that can be used to facilitate the confidentiality, correctness assurance, availability and integrity of Cloud Data Storage (CDS) or a cloud datacenter. The proposed MAS architecture includes five types of agents: Cloud Service Provider Agent (CSPA), Cloud Data Confidentiality Agent (CDConA), Cloud Data Correctness Agent (CDCorA), Cloud Data Availability Agent (CDAA) and Cloud Data Integrity Agent (CDIA). 2012/01/01 - 02:32

Test case generation is one of the most important and costly steps in software testing. Techniques for automatic generation of test cases try to efficiently find a small set of cases that satisfy an adequacy criterion, thus reducing the cost of software testing and making the testing of software products more efficient. In this study, we analyze the application of different machine learning methods to the automatic test case generation task and describe how these algorithms can help in white-box testing. Several algorithms, comprising random search, a Genetic Algorithm (GA), a Memetic Algorithm (MA) and a proposed hybrid method called GA-NN, are then considered and studied in order to understand when and why a learning algorithm is effective for a testing problem. For the experiments we use a benchmark program, the triangle classifier. Finally, evaluations and some open research questions are given. 2012/01/01 - 02:32
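Search-based test generation on the triangle classifier benchmark can be sketched as follows. For brevity the "GA" below is mutation-only with branch-coverage novelty as its selection signal; the real algorithms compared in the study (GA, MA, GA-NN) use proper crossover and fitness functions, so treat this as a simplified assumption.

```python
# Sketch: evolve (a, b, c) inputs for the classic triangle classifier
# until every classification outcome has been reached at least once.
import random

def classify(a, b, c):
    if a + b <= c or a + c <= b or b + c <= a:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

def generate_tests(seed=0, pop_size=40, generations=200):
    rng = random.Random(seed)
    targets = {"not a triangle", "equilateral", "isosceles", "scalene"}
    pop = [tuple(rng.randint(1, 10) for _ in range(3)) for _ in range(pop_size)]
    covered, suite = set(), []
    for _ in range(generations):
        for ind in pop:
            label = classify(*ind)
            if label not in covered:          # keep inputs reaching new outcomes
                covered.add(label)
                suite.append(ind)
        if covered == targets:
            break
        # mutation-only step (simplification): perturb each side by +/-1
        pop = [tuple(max(1, s + rng.choice([-1, 0, 1])) for s in ind)
               for ind in pop]
    return suite, covered

suite, covered = generate_tests()
```

The resulting `suite` is small by construction: one representative input per covered outcome, which is exactly the "small set of cases satisfying an adequacy criterion" the abstract describes.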

Software architecture is emerging as an important discipline for software engineers. Software architects have been limited by a lack of standardized ways to represent architecture, as well as a lack of analysis methods to predict whether an architecture will result in an implementation that meets the requirements. Architects also have had little guidance in how to go about designing the architecture: which decisions should be made first, what level of detail the architecture should encompass, how conflicting concerns should be satisfied and what range of issues the architecture should cover. This study attempts to illustrate architectural design guidance in the form of the functional and structural dimensions required to identify the requirements as well as the overall structure of a user-interface system. 2012/01/01 - 02:32

Advances in computer architectures, namely the prevalence of multicore architecture, have raised challenges for software developers to take advantage of parallelism. The paper introduces synchronization-scheme-based concurrent programming that is easy (i.e., GUI-based construction), robust and reusable. A synchronization scheme consists of several synchronization units that can be composed together; schemes can then be applied to code to achieve the required concurrency and parallelism. This is done by configuring rather than coding. The approach is robust because the CScheme engine is linked to a pluggable concurrency bug detection engine, and reusability is achieved by reusing a scheme on several problems of a similar nature. We illustrate the architecture of our CScheme engine and discuss the components that fulfill our objectives. A few example synchronization schemes, such as the Single Threaded Execution Scheme, the Reader Writer Scheme and the Thread Coordination Scheme, together with their units, were also built to demonstrate our approach. 2012/01/01 - 02:32
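One of the named schemes, the Reader Writer Scheme, can be illustrated with a hand-rolled readers-writer lock. This shows the kind of reusable synchronization unit such a scheme would compose; it is an assumption-level illustration, not the paper's CScheme engine or its generated code.

```python
# Readers-writer lock: many readers may hold it concurrently; a writer
# holds it exclusively. First reader in blocks writers; last reader out
# admits them.
import threading

class ReaderWriterLock:
    def __init__(self):
        self._readers = 0
        self._mutex = threading.Lock()    # guards the reader count
        self._write = threading.Lock()    # held by writer, or by readers as a group

    def acquire_read(self):
        with self._mutex:
            self._readers += 1
            if self._readers == 1:
                self._write.acquire()     # first reader blocks writers

    def release_read(self):
        with self._mutex:
            self._readers -= 1
            if self._readers == 0:
                self._write.release()     # last reader admits writers

    def acquire_write(self):
        self._write.acquire()

    def release_write(self):
        self._write.release()

lock = ReaderWriterLock()
shared = []

def writer():
    lock.acquire_write()
    shared.append("written")
    lock.release_write()

lock.acquire_read()                       # a reader is active
t = threading.Thread(target=writer)
t.start()
blocked_snapshot = list(shared)           # writer cannot have run yet
lock.release_read()                       # reader leaves; writer proceeds
t.join()
```

In the scheme-based approach described above, a developer would attach such a unit to the target code through configuration rather than writing this locking logic by hand.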

UML-based Web Engineering (UWE) is a software engineering approach for the Web domain aiming to cover the whole life cycle of Web application development. UWE uses both UML and UML extension mechanisms to develop a Web application; the extension mechanisms define specific stereotypes to model the requirements model, conceptual model, navigation model and presentation model, each of which has its own modeling elements. However, the transformation rules used for mapping between the different models are rarely investigated, and modeling elements are absent for server-side processes, client-side processes, Web services (a method of communication between different Web applications) and threads (the smallest unit of processing that can be scheduled). A UWE-based framework is proposed that provides transformation rules for mapping between the different models as well as these additional modeling elements. The framework consists of four phases (requirements modeling, conceptual modeling, navigation modeling and presentation modeling), each with its own model and modeling elements, and includes a set of transformation rules for the mapping process between phases. 2011/12/03 - 22:09

Mining source code with different data mining techniques to extract informative patterns such as programming rules, variable correlations, code clones and frequent API usage is an active area of research. However, no practical framework integrating these tasks has been attempted. To achieve this objective, an integrated framework is designed that can detect different types of bugs to improve software quality and assist developers in reusing API libraries for rapid software development. The proposed framework automatically extracts a large variety of programming patterns and finds the locations where the extracted patterns are violated. Violated patterns are reported as programming rule violations, copy-paste-related bugs and inconsistent variable update bugs. Although these bugs differ, the framework can detect them in one pass and produces higher-quality software systems within budget. The framework also helps in code reuse by suggesting to the programmer how to write API code, facilitating rapid software development. The framework is validated by a prototype developed in C# (MS Visual Studio, 2008) and evaluated on a large application, an ERP system. Results show that the proposed technique greatly reduces the time and cost of manually checking for defects in source code. 2011/12/03 - 22:09

This study introduces some new approaches for software test automation in general and for testing graphical user interfaces in particular, presenting ideas for the different stages of the test automation framework. The main activities of a test automation framework are test case generation, execution and verification; other umbrella activities include modeling, critical path selection and others. In modeling, a methodology is presented to transform the user interface of applications into XML (eXtensible Markup Language) files. The purpose of this intermediate transformation is to produce test automation components in a format that is easier to deal with for testing. Test cases are generated from this model, then executed and verified on the actual implementation. Transforming a product's Graphical User Interface (GUI) into XML files also enables the documentation and storage of the interface description: there are several cases where a stored, documented format of the GUI is needed, and the universal XML format allows it to be retrieved and reused elsewhere. The hierarchical structure of XML files makes it possible and easy to preserve the hierarchical structure of the user interface. Several GUI structural metrics are also introduced to evaluate the user interface from a testing perspective; these metrics can be collected automatically by the developed tool with no need for user intervention. 2011/07/16 - 06:37

Software process modeling has been recognized as an important topic since the early days of software engineering, and the process modeling language plays a crucial role when it is used to define and analyze processes. Complex processes have many elements and there are many potential ways to improve them; without a quantitative understanding of the process steps, it is difficult to tell which ones are effective. Processes can be measured for size, effort, schedule and cost under successful performance, and a process should be defined based on an organization's intents or business goals. Although current studies on software process improvement have aroused the interest of many researchers, software development strategies do not give a clear indication of the constituents of software processes. Under these circumstances, an approach based on the Unified Modeling Language (UML) is proposed in the present study to define a software process as an effective way of identifying and defining the essential process elements of good evaluation practice. The study aims to embed a quantitative evaluation facility into the project's process definition stage and to satisfy the goal of organizational process improvement. The proposed approach focuses on UML static models, namely class diagrams complemented with a set of Object Constraint Language (OCL) constraints. 2011/07/16 - 06:37

The basic goal of project planning is to look into the future, identify the activities that need to be done to complete the project successfully and plan the scheduling and resource allocation for these activities. Software effort estimation is the most important activity in project planning. Many models based on machine learning algorithms have been proposed so far, but no model has proved able to predict effort efficiently and consistently. In this study we propose two models using Particle Swarm Optimization (PSO) with a constriction factor for fine-tuning the parameters of the COnstructive COst MOdel (COCOMO) for effort estimation. The models deal efficiently with imprecise and uncertain input and enhance the reliability of software effort estimation. The experimental part of the study illustrates the approach and contrasts it with the standard numeric version of COCOMO, standard single-variable models, and triangular membership function and Gbell function models. 2011/07/16 - 06:37
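The two ingredients named above can be combined in a small sketch: the basic COCOMO effort equation effort = a * KLOC^b, with Clerc's constriction-factor PSO tuning (a, b) against historical project data. The three-project data set, the mean-relative-error fitness and the parameter bounds are illustrative assumptions, not the study's data or exact model.

```python
# Sketch: constriction-factor PSO fitting the basic COCOMO coefficients
# (a, b) to toy (KLOC, person-months) history.
import math, random

def cocomo_effort(kloc, a, b):
    return a * kloc ** b                       # person-months

def constriction(c1=2.05, c2=2.05):
    """Clerc's constriction coefficient (requires c1 + c2 > 4)."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

def tune(data, iters=300, n=20, seed=3):
    rng = random.Random(seed)
    chi, c1, c2 = constriction(), 2.05, 2.05
    def mre(params):                           # mean relative error fitness
        a, b = params
        return sum(abs(e - cocomo_effort(k, a, b)) / e for k, e in data) / len(data)
    pos = [[rng.uniform(1, 5), rng.uniform(0.8, 1.5)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pos, key=mre)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = chi * (vel[i][d]
                                   + c1 * r1 * (pbest[i][d] - pos[i][d])
                                   + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            pos[i][1] = min(max(pos[i][1], 0.5), 2.0)  # keep exponent sane
            if mre(pos[i]) < mre(pbest[i]):
                pbest[i] = pos[i][:]
                if mre(pos[i]) < mre(gbest):
                    gbest = pos[i][:]
    return gbest, mre(gbest)

# Toy history roughly consistent with organic-mode COCOMO (a=2.4, b=1.05).
history = [(10, 26.9), (50, 146.0), (100, 302.1)]
(a, b), err = tune(history)
```

The constriction factor (about 0.73 for c1 = c2 = 2.05) damps the velocity update without an explicit inertia schedule, which is what distinguishes this PSO variant from the inertia-weight form.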

This study aims to improve an automated test case generation method so as to minimize the number of test cases while maximizing the ability to identify critical domain-specific requirements. The software testing phase has proven to be one of the most critical and important phases in the software development life cycle; in general, it takes around 40-70% of the effort, time and cost. The area has been well researched over a long period of time. Unfortunately, while many researchers have found ways of reducing time and cost during the testing process, a number of important related issues still need to be researched. This study introduces a new test case generation process with a requirement prioritization method to address the following research problems: (1) inefficient test case generation techniques under limited resources, (2) the lack of an ability to identify critical domain requirements during test case generation, (3) inefficient automated test case generation techniques and (4) the disregard of a number of generated test cases. In brief, the contributions are to: (1) survey a comprehensive set of test case generation techniques since 1990, (2) compare existing test case generation methods and address the limitations of each technique, (3) introduce a new classification of test case generation techniques, (4) define a new process to generate test cases by proposing a requirement prioritization method and (5) propose a new, effective test generation method. 2011/02/07 - 03:22

This study aims at designing and implementing an exam-form generation software tool (named ExPro) for Multiple-Choice-Based (MCB) exams. The study is motivated by the fact that student numbers in Jordanian universities are continuously growing at a high rate, while educational resources (instructors, labs, etc.) are not growing equivalently. The result is large numbers of students in classrooms; consequently, providing and using online examining systems can be intractable and expensive, and paper-based MCB tests can be used instead. The design and evaluation of ExPro follow a basic set of design principles based on a list of identified Functional Requirements (FRs). Deriving those FRs is made possible by developing ExPro using the iterative and incremental model from the software engineering domain. We show that ExPro is helpful to instructors in preparing multiple-choice tests. Further, ExPro makes archiving previous exams possible and effective for future search. ExPro is available for free over the Internet and has been in use; its users agree that it (1) is easy to learn and use and (2) is a cost-effective alternative to online examining systems. 2011/02/07 - 03:22