
American Journal of Engineering and Applied Sciences

In current practice, laboratory measurement and calibration should be accredited by national or international accreditation bodies and should comply with the requirements of ISO 17025 for the qualification of competent laboratories. Those requirements include stating, in the measurement certificate, the uncertainty limits of the testing process, which allows the customer to judge the quality and reliability of the measurement. In this study we clarify, from a theoretical standpoint, what uncertainty in measurement is, the standard types of uncertainty and how to calculate an uncertainty budget, and we illustrate the calculation with examples from length measurements in the laboratory. After analyzing the results obtained during measurement with a Coordinate Measuring Machine (CMM), we found that the non-statistical (Type B) uncertainty when measuring a piece one meter in length was ±1.9257 µm. When using the configuration measuring device, the expanded combined standard uncertainty was ±2.030 µm when measuring a screw of 1.2707 mm. We conclude that uncertainty has a high impact on results measured to a high degree of fineness and less impact on pieces of low fineness, that careful calibration of measuring instruments and equipment against measurement standards is of the utmost importance and that laboratories must calculate the uncertainty budget as part of measurement evaluation to provide high-quality measurement results.

http://www.thescipub.com/abstract/10.3844/ajeassp.2012.114.118 2012/07/28 - 14:12
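The uncertainty-budget arithmetic the abstract refers to can be sketched as follows: independent standard uncertainty components combine in root-sum-of-squares fashion, and a coverage factor k = 2 gives the expanded uncertainty at roughly 95% coverage. The component values below are illustrative, not the paper's.

```python
import math

def combined_standard_uncertainty(components):
    """Root-sum-of-squares combination of independent standard uncertainties."""
    return math.sqrt(sum(u ** 2 for u in components))

def expanded_uncertainty(u_c, k=2.0):
    """Expanded uncertainty U = k * u_c (k = 2 gives ~95% coverage)."""
    return k * u_c

# Illustrative (not the paper's) Type B components, in micrometres:
# instrument resolution, thermal expansion, calibration certificate.
components_um = [0.5, 0.7, 0.3]
u_c = combined_standard_uncertainty(components_um)
U = expanded_uncertainty(u_c)
print(f"u_c = {u_c:.4f} um, U(k=2) = {U:.4f} um")
```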

One of the major problems facing motorists in the city of Dar es Salaam today is congestion. Bus bays have a significant influence on the capacity of a roadway because they interfere with passing vehicles, primarily when buses maneuver to pull into and out of them. Bus bay stops also interfere with vehicle movement when bus demand exceeds bus bay capacity, leaving some buses waiting in the travel lane until the buses occupying the bay exit it. This study presents the results of an evaluation of bus bay performance and its influence on the capacity of the roadway network in the city of Dar es Salaam. The case study area covered 11 bus stops along Morogoro road from Ubungo to Magomeni Mapipa. The capacity of the bus bays was studied using the procedure outlined in the Transit Capacity and Quality of Service Manual of 2003. This enabled the researcher to determine parameters such as dwell times and clearance times, which are the major determinants of bus stop capacity. The results indicate that 18% of the bus bay stops studied did not have adequate capacity to cater for the available demand, 9% did not have adequate capacity during peak hours but had adequate capacity during off-peak hours and the remaining 73% possessed adequate capacity at all times. Although most of the bus bay stops studied possess adequate capacity, severe congestion was observed at these locations. This was due to the erratic behavior of bus drivers, who do not utilize the space provided for them to drop off and pick up passengers. Clearly, this is an area that requires stricter enforcement in order to ease the congestion problem in the city by operating the existing capacity more efficiently.

http://www.thescipub.com/abstract/10.3844/ajeassp.2012.107.113 2012/07/01 - 04:19
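The TCQSM procedure mentioned above boils down, in its simplest form, to a loading-area capacity formula driven by dwell time, clearance time and dwell-time variability. A minimal sketch, with parameter values assumed for illustration rather than taken from the study:

```python
def bus_stop_capacity(dwell_s, clearance_s, cv=0.6, z=1.28, g_over_c=1.0):
    """Simplified TCQSM loading-area capacity (buses/hour).

    dwell_s      -- mean dwell time (s)
    clearance_s  -- clearance time between successive buses (s)
    cv           -- coefficient of variation of dwell times
    z            -- one-tail normal variate for the design failure rate
                    (z = 1.28 ~ 10% chance of a queue forming)
    g_over_c     -- effective green ratio (1.0 for stops away from signals)
    """
    return 3600.0 * g_over_c / (clearance_s + dwell_s * g_over_c + z * cv * dwell_s)

# A 30 s dwell with 10 s clearance at an unsignalised stop:
print(round(bus_stop_capacity(dwell_s=30.0, clearance_s=10.0)))
```

Comparing this design capacity with observed peak-hour bus arrivals is what classifies a bay as adequate or inadequate.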

The fundamental frequency (F0) of human speech corresponds to the vibration frequency of the human vocal cords. One approach to extracting F0 from a speech utterance is based on cepstral analysis. In Thai, there are four main dialects, spoken by Thai people residing in the four core regions: the central, north, northeast and south regions. Environmental noises also play an important role in corrupting speech quality, so it is necessary to study the effects of noise on F0 extraction using cepstral analysis for the Thai dialects. The cepstral analysis is performed and some coefficients are used to determine the corresponding F0 values. Four types of environmental noise are simulated at different levels of power. The differences between the F0 extracted from clean speech and the F0 extracted from noise-corrupted speech are calculated as Root Mean Square (RMS) errors. The selected noises are train, factory, car and air conditioner, with five levels of each type varying from 0-20 dB. The experimental results show that the effects of the noises differ: air conditioner noise has the lowest effect, while a noise level of 0 dB has the highest. Using cepstral analysis, F0 values can be extracted from noise-corrupted speech with different levels of degradation depending on the type and level of noise.

http://www.thescipub.com/abstract/10.3844/ajeassp.2012.114.120 2012/07/01 - 04:19
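The cepstral F0 extraction described above can be sketched as follows: take the inverse FFT of the log magnitude spectrum and read F0 off the dominant quefrency peak. The synthetic harmonic test signal, frame length and search range are assumptions for illustration:

```python
import numpy as np

def cepstral_f0(frame, fs, f0_min=60.0, f0_max=400.0):
    """Estimate F0 of one voiced frame via the real cepstrum.

    The cepstrum is the inverse FFT of the log magnitude spectrum;
    a peak at quefrency q samples corresponds to F0 = fs / q.
    """
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(len(frame))))
    log_spec = np.log(spectrum + spectrum.max() * 1e-3)  # floor avoids log(0)
    cepstrum = np.fft.irfft(log_spec)
    q_min, q_max = int(fs / f0_max), int(fs / f0_min)
    peak = q_min + np.argmax(cepstrum[q_min:q_max])
    return fs / peak

# Harmonic-rich 125 Hz test tone (a crude stand-in for voiced speech):
fs, f0, n = 16000, 125.0, 1024
t = np.arange(n) / fs
frame = sum(np.sin(2 * np.pi * k * f0 * t) / k for k in range(1, 61))
est = cepstral_f0(frame, fs)
```

The RMS error reported in the study would then be the root mean square of `est - reference` over all frames of clean versus noise-corrupted speech.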

Over 500 million tonnes of wheat straw are produced annually worldwide, the majority of which is burnt in the field, causing significant environmental and health problems as well as serious traffic accidents, in addition to the loss of a valuable resource. Wheat straw is abundantly available and renewable and can be used as an energy source in gasification and combustion systems. A proper understanding of the physical properties of wheat straw is necessary for utilizing these materials in thermochemical conversion processes. Wheat straws were collected from Egypt (Africa), Canada (North America) and Guyana (South America) and ground using a medium-size Wiley mill. The physical properties (moisture content, particle size, bulk density and porosity) of the wheat straws were determined using standard procedures. The moisture contents of the wheat straws were in the range of 5.02-7.79%. The majority (56.87-93.36%) of the wheat straw particles were less than 0.85 mm and the average particle sizes were in the range of 0.38-0.69 mm. The average bulk densities of the wheat straws were in the range of 97.52-177.23 kg m-3. A negative linear relationship between the bulk density and the average particle size was observed for the wheat straws. The average porosities of the wheat straws were in the range of 46.39-84.24%. A positive linear relationship between the porosity and the average particle size was also observed. The wheat straw varieties collected from different countries had different physical properties due to variations in climatic conditions, soil type and fertilizer used. Significant differences were also observed among varieties grown under the same climatic and cultivation conditions.

http://www.thescipub.com/abstract/10.3844/ajeassp.2012.98.106 2012/06/26 - 18:22
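The negative linear relationship between bulk density and particle size reported above amounts to a straight-line least-squares fit. A minimal sketch with invented data points (not the paper's measurements), spanning the reported size and density ranges:

```python
import numpy as np

# Hypothetical measurements: mean particle size (mm) vs bulk density
# (kg/m^3), constructed to show the reported negative linear trend.
size_mm = np.array([0.38, 0.45, 0.55, 0.62, 0.69])
density = np.array([175.0, 160.0, 138.0, 118.0, 100.0])

slope, intercept = np.polyfit(size_mm, density, 1)  # degree-1 least squares
print(f"density ~ {slope:.1f} * size + {intercept:.1f}")
```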

Problem statement: A reconnaissance geophysical survey of an area near Sg. Udang, Melaka was conducted using geoelectrical resistivity and seismic refraction methods. The main objective of this study was to determine the depth of bedrock in the study area. Approach: The resistivity imaging measurement, employing the Wenner electrode configuration, was carried out using an ABEM SAS 1000 terrameter and an ES464 electrode selector system. Electrodes were arranged in a straight line with constant spacing and connected to a multicore cable. The seismic refraction survey was conducted using a 24-channel ABEM Mark6 signal enhancement seismograph with 5 m geophone spacing. The seismic data were interpreted using SeisOpt@2D, which automatically produced 2-D seismic velocity sections of the subsurface. Results: The resistivity results showed that the subsurface layers are associated with variable resistivity (296-2600 Ω m). The resistivity layer is associated with the residual soil, with a thickness of about 0.5-3 m. The interpreted 2-D seismic sections showed three velocity layers. The high velocity layer (1600-2000 m sec-1) is interpreted to be associated with bedrock at an average depth of about 9.4 m. The intermediate velocity zone (1000-1600 m sec-1) is associated with weathered schist with a thickness of about 2.5 m. The low velocity zone (450-900 m sec-1) corresponds to clayey silt of residual soil with a thickness of about 6 m. Borehole data indicate that the depth of bedrock is about 10 m, in good agreement with the seismic results. Conclusion: Interpretation of the resistivity and seismic refraction data successfully determined the thickness of the residual soil layer and the depth of bedrock in the study area. The thickness of residual soil obtained by the seismic refraction survey agrees very well with the borehole data.

http://www.thescipub.com/abstract/10.3844/ajeassp.2012.93.97 2012/06/22 - 09:59
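For the simplest two-layer case underlying such refraction surveys, the refractor depth follows from the crossover distance at which direct and refracted first arrivals coincide. A sketch using velocities merely in the ranges reported above, not the survey's actual picks:

```python
import math

def depth_from_crossover(x_cross, v1, v2):
    """Depth to a refractor in the two-layer refraction case, from the
    crossover distance x_cross where direct and head-wave travel times
    are equal:  z = (x_cross / 2) * sqrt((v2 - v1) / (v2 + v1))."""
    return 0.5 * x_cross * math.sqrt((v2 - v1) / (v2 + v1))

# Soil velocity ~800 m/s, bedrock ~1800 m/s, hypothetical 45 m crossover:
print(round(depth_from_crossover(x_cross=45.0, v1=800.0, v2=1800.0), 1))
```

Tools such as SeisOpt@2D generalise this idea to laterally varying, multi-layer velocity sections.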

Problem statement: A Wireless Mesh Network (WMN) is a communication network made up of mesh routers and mesh clients, with the mesh routers forming the backbone of the network. The WMN can be accessed by other networks through gateways and bridging functions in the mesh routers. The mesh clients can be either static or mobile, with the option of forming a client mesh network with the mesh routers. Routing in a WMN is through multi-hop relays, including the access points and gateways. Many ad hoc routing protocols, such as Destination-Sequenced Distance Vector (DSDV) and Ad hoc On-demand Distance Vector (AODV), are used in WMNs. Though these routing protocols are used in WMNs, they do not address the constraints inherent to WMNs, due to which network resources are not properly utilized and there is a fall in Quality of Service (QoS). Thus, these routing protocols are enhanced with new routing metrics more appropriate to WMNs. Approach: In this study, a modified on-demand routing algorithm for Mobile Ad hoc Networks (MANETs), Ant Mesh Network AODV, is proposed. AODV is modified to include ant colony based optimization. The modified routing protocol improves the throughput and decreases the packet loss, along with a reduction in routing overhead. Results and Conclusion: The proposed optimization technique decreases the energy overhead of nodes in the network which are one-hop neighbors of the sink.

http://www.thescipub.com/abstract/10.3844/ajeassp.2012.93.99 2012/06/09 - 13:07
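Ant colony optimization grafted onto a routing protocol typically has two parts: probabilistic next-hop selection weighted by pheromone, and pheromone evaporation plus deposit along used routes. A generic sketch, not the paper's specific algorithm; parameter values and node names are assumptions:

```python
import random

def choose_next_hop(pheromone, heuristic, alpha=1.0, beta=2.0, rng=random.random):
    """Ant-colony next-hop choice: neighbour j is picked with probability
    proportional to tau_j**alpha * eta_j**beta (pheromone x heuristic)."""
    weights = {j: (pheromone[j] ** alpha) * (heuristic[j] ** beta)
               for j in pheromone}
    r, acc = rng() * sum(weights.values()), 0.0
    for j, w in weights.items():
        acc += w
        if r <= acc:
            return j
    return j  # numerical fallback

def evaporate_and_deposit(pheromone, path, rho=0.1, q=1.0, path_cost=1.0):
    """Standard update: evaporate everywhere, deposit q/cost on the used path."""
    for j in pheromone:
        pheromone[j] *= (1.0 - rho)
    for j in path:
        pheromone[j] += q / path_cost

# Two candidate next hops from some node; eta is an inverse-cost heuristic.
tau = {"B": 1.0, "C": 1.0}
eta = {"B": 1.0 / 2.0, "C": 1.0 / 5.0}
evaporate_and_deposit(tau, ["B"], path_cost=2.0)  # a successful route via B
```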

Problem statement: Motors play a very important role in our lives, among them DC servo motors. There are many techniques for controlling these DC motors, one of which is sound. In this study, a voice-based technique was implemented to control the speed and direction of rotation of a DC motor. Approach: A microcontroller-based electronic control circuit was designed and implemented to achieve this goal. Results: The speed of the motor was controlled, in both directions, using pulse width modulation and a microcontroller was used to generate the right signal to be applied to the motor. Conclusion: The loudness of the human voice was successfully divided into different levels, where each level drives the motor at a different speed.

http://www.thescipub.com/abstract/10.3844/ajeassp.2012.89.92 2012/06/05 - 21:53
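The level-to-speed mapping described in the conclusion can be sketched as a simple lookup from measured loudness to PWM duty cycle; the thresholds and duty values below are hypothetical, not the paper's calibration:

```python
def loudness_to_duty(level_db,
                     thresholds=(45, 55, 65, 75),
                     duties=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Map measured voice loudness (dB) to a PWM duty cycle.

    The loudness bands are hypothetical; the study divides loudness into
    discrete levels, each of which drives the motor at a different speed.
    """
    level = sum(1 for t in thresholds if level_db >= t)  # count bands exceeded
    return duties[level]

# A microcontroller would load this duty cycle into its PWM compare register.
print(loudness_to_duty(58))
```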

In spite of the OSHA Laboratory and Hazard Communication Standards, incidents which result in injuries and property loss continue to occur in research and teaching locations. Application of the Process Hazard Analysis (PHA) of OSHA Process Safety Management (PSM) to laboratory pilot plant operations has the potential to further reduce the risk associated with such locations. However, a major challenge is the lack of an easy and effective system to comply with PHA requirements. This study presents a system to manage the implementation of PHA in pilot plants, namely Process Hazards Management for Lab Scale Pilot Plant (PHM-LabPP). It provides organized strategies to manage and track information, documents, recommendations and corrective actions related to process hazards. The application of PHM-LabPP to a High Gravitational Natural Gas pilot plant as a case study is examined and discussed. The implementation of this system could help end users overcome the inadequacies in managing and controlling process hazards in pilot plants that have contributed to a number of accidents.

http://www.thescipub.com/abstract/10.3844/ajeassp.2012.84.88 2012/05/27 - 19:53

Problem statement: Membrane processes have received significant attention in natural gas separation because this technology provides operating and cost advantages over other technologies. One of the major problems in natural gas separation using membranes is the existence of feed impurities, including CO2, heavy hydrocarbons and water. Approach: Hence, a mathematical model that can study the effect of impurities on the performance of the membrane is crucial. In this study, a membrane transport model involving dual-mode sorption and partial immobilization of the whole penetrant population was employed to predict the behavior of penetrants in the membrane. Results: Results for the sorption, diffusion, permeation and selectivity of penetrants in the membrane are demonstrated in the present study. Conclusion/Recommendations: The sorbed concentration and permeability coefficient of a binary mixture were found to be lower than those of the pure gases, reflecting the competition between penetrants.

http://www.thescipub.com/abstract/10.3844/ajeassp.2012.78.83 2012/05/22 - 21:11
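The dual-mode sorption model named in the approach is the standard Henry-plus-Langmuir form. A sketch with hypothetical parameters (not the paper's fitted values):

```python
def dual_mode_sorption(p, k_d, c_h, b):
    """Dual-mode sorption in a glassy polymer: Henry's-law dissolution
    plus Langmuir hole-filling,  C = k_D*p + C'_H*b*p / (1 + b*p)."""
    return k_d * p + c_h * b * p / (1.0 + b * p)

# Hypothetical parameters of a CO2-like penetrant in a glassy polymer:
k_d = 0.7    # Henry's-law constant, cc(STP)/cc/atm
c_h = 18.0   # Langmuir capacity, cc(STP)/cc
b = 0.25     # Langmuir affinity, 1/atm
print(round(dual_mode_sorption(10.0, k_d, c_h, b), 2))
```

At high pressure the Langmuir term saturates at `c_h`, which is why sorbed concentration grows sub-linearly, and why a competing penetrant (sharing the Langmuir sites) lowers the mixture's sorption relative to the pure gases.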

Problem statement: Voltage instability and voltage collapse have been considered a major threat to present power system networks due to their stressed operation. It is very important to analyze power systems with respect to voltage stability. Approach: A Flexible AC Transmission System (FACTS) is an alternating current transmission system incorporating power electronic-based and other static controllers to enhance controllability and increase power transfer capability. A FACTS device in a power system improves the voltage stability, reduces the power loss and also improves the loadability of the system. Results: This study investigates the application of Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA) to find the optimal location and rated value of a Static Var Compensator (SVC) device to minimize the voltage stability index, total power loss, load voltage deviation, cost of generation and cost of FACTS devices, thereby improving voltage stability in the power system. The optimal location and rated value of the SVC device were found for different loading scenarios (115%, 125% and 150% of normal loading) using PSO and GA. Conclusion/Recommendations: It is observed from the results that the voltage stability margin is improved, the voltage profile of the power system is raised, the load voltage deviation is reduced and the real power losses are also reduced by optimally locating the SVC device in the power system. The proposed algorithm is verified on the IEEE 14-bus, IEEE 30-bus and IEEE 57-bus systems.

http://www.thescipub.com/abstract/10.3844/ajeassp.2012.70.77 2012/05/13 - 05:56
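A minimal PSO of the kind applied above can be sketched as follows. The objective here is a toy convex surrogate for the study's load-flow-based indices (stability index, losses, device cost), and every parameter value is an assumption:

```python
import random

def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimisation minimising `objective` over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p) for p in pos]
    g = pbest[min(range(n_particles), key=lambda i: pbest_f[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive pull (own best) + social pull (swarm best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (g[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < objective(g):
                    g = pos[i][:]
    return g, objective(g)

def toy_cost(x):
    """Hypothetical surrogate: voltage-deviation penalty + SVC device cost
    as a function of SVC rating x[0] in MVAr (a real study would run a
    load flow here)."""
    return (x[0] - 40.0) ** 2 / 50.0 + 0.05 * x[0] + 10.0

best, best_f = pso(toy_cost, bounds=[(0.0, 100.0)])
```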

Problem statement: It is evident that web services are emerging as a solution for business applications and that the deployment of web services is witnessing exponential growth. Web services rely on networks, particularly the internet, for implementing and consuming them in different applications. Whenever a network is involved in a software system, network traffic payload measurement and communication patterns play a key role in determining and enhancing the performance of the software system. Thus, this study emphasizes the need for network traffic payload measurement in evaluating the performance of web service implementations. Approach: Empirical measurement of the network traffic payload in implementations of web services is carried out to analyze the performance of web service realization. In this work, the web service realization is done through a model with a three-tier approach, which is suitable for business applications. Results and Conclusion: This study reports empirical results on the network traffic involved in different implementations of web services.

http://www.thescipub.com/abstract/10.3844/ajeassp.2012.63.69 2012/04/28 - 20:14

Problem statement: The use and reuse of cooking oil is a common phenomenon in our society. While some of this used cooking oil is further refined, most of it is not subjected to any filtration; in the refining process, media such as activated carbon and silica are commonly used. Approach: The use of bagasse as an adsorbent is not common. This is odd, especially since the structural component of bagasse, which is made up of carbonaceous material, is suitable as an adsorbent and since using bagasse as an adsorbent further reduces solid waste disposal, and hence one source of environmental pollution. Results: This study was undertaken to explore the possibility of using bagasse as an adsorbent. Specifically, bagasse was tested for its ability to reduce harmful content such as Free Fatty Acid (FFA) and color density in used cooking oil. The adsorbent weight and contact time were varied in this research as parameters to determine the effective time and the amount of adsorbent that should be used in the oil refining process. From the experiments conducted, it can be established that bagasse, when used as an adsorbent, can reduce FFA by 82.14%, bringing it below the harmful limit. Conclusion/Recommendations: This result was obtained using 7.5 g of bagasse and a contact time of 60 min. Similarly, the color of the oil was reduced by 75.67%, which is significant; this result was obtained with 10 g of bagasse and 60 min of contact time.

http://www.thescipub.com/abstract/10.3844/ajeassp.2012.59.62 2012/03/02 - 15:37

Problem statement: A simple analytical approach to the synthesis of a sharp-transition, linear-phase, multiband FIR filter is presented. The filter magnitude response is modeled using trigonometric functions of frequency. Approach: Employing a variable density of ripple cycles in the passband and stopband regions, with a large density of ripple cycles at the sharp transition edges, reduces the abrupt discontinuities at these edges. Results: As a result, the Gibbs phenomenon is reduced in the filter implementation, giving a flat passband and good stopband attenuation. A closed-form expression for the impulse response coefficients is obtained. The filter design is easily tunable and allows for variation in the transition bandwidth of each band. A speech processing scheme is implemented using a pair of the proposed sharp-transition multiband FIR filters to split the speech spectrum into complementary short-time spectral bands. Conclusion: The adjacent speech formants are fed dichotically to the two ears to reduce the effect of spectral masking and hence improve speech perception in listeners with sensorineural hearing impairment.

http://www.thescipub.com/abstract/10.3844/ajeassp.2012.42.48 2012/02/28 - 14:56
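The paper derives its own closed-form trigonometric design; as a stand-in illustrating the same goal (a linear-phase multiband FIR with suppressed Gibbs ripple), here is the standard window-method construction, which is not the paper's formula:

```python
import numpy as np

def multiband_fir(bands, num_taps, fs=1.0):
    """Linear-phase multiband FIR via the windowed ideal impulse response.

    bands: list of (f_low, f_high) passbands; fs: sample rate.
    Each band contributes an ideal bandpass impulse response
    (difference of sinc terms), and a Hamming taper suppresses
    the Gibbs ripple at the band edges.
    """
    m = num_taps // 2
    n = np.arange(num_taps) - m           # centre for linear phase
    h = np.zeros(num_taps)
    for f1, f2 in bands:
        w1, w2 = 2 * np.pi * f1 / fs, 2 * np.pi * f2 / fs
        h += np.where(n == 0, (w2 - w1) / np.pi,
                      (np.sin(w2 * n) - np.sin(w1 * n))
                      / (np.pi * np.where(n == 0, 1, n)))
    return h * np.hamming(num_taps)

# Two passbands, 0.1-0.2 and 0.3-0.4 cycles/sample:
h = multiband_fir([(0.1, 0.2), (0.3, 0.4)], num_taps=201)
H = np.abs(np.fft.rfft(h, 4096))          # magnitude response on a fine grid
```

A complementary filter for the dichotic scheme would use the interleaved bands, e.g. (0.2, 0.3) and (0.4, 0.5), so the two ears receive alternating spectral bands.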

Problem statement: In today’s manufacturing, the outsourcing of resources is of significant importance. An efficient supplier selection process is a central part of supply chain management for enterprises that outsource. Approach: Supplier selection is by nature a multi-criteria decision making problem and multiple criteria must be considered in the selection process. In this study, a multiple attribute utility approach based on Data Envelopment Analysis (DEA) is applied to tackle this problem, with consideration of several inputs and outputs. Results: A real case study was implemented to show the application of the DEA method, through which the efficient and inefficient suppliers were identified and ranked. Conclusion: DEA is a tactical model for coping with multiple criteria in purchasing decisions.

http://www.thescipub.com/abstract/10.3844/ajeassp.2012.49.52 2012/02/28 - 14:55
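The DEA ranking described above solves one small linear program per supplier. A sketch of the classic input-oriented CCR envelopment model with hypothetical supplier data (not the case study's), using SciPy's LP solver:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, unit):
    """Input-oriented CCR DEA efficiency of `unit` (envelopment form):
    minimise theta s.t.  X @ lam <= theta * x_unit,  Y @ lam >= y_unit,
    lam >= 0.  Efficiency 1.0 means the unit lies on the efficient frontier."""
    x, y = np.asarray(inputs, float), np.asarray(outputs, float)
    n = x.shape[1]                                  # columns = suppliers (DMUs)
    c = np.r_[1.0, np.zeros(n)]                     # variables: [theta, lam...]
    A_in = np.hstack([-x[:, [unit]], x])            # X lam - theta*x_unit <= 0
    A_out = np.hstack([np.zeros((y.shape[0], 1)), -y])  # -Y lam <= -y_unit
    A = np.vstack([A_in, A_out])
    b = np.r_[np.zeros(x.shape[0]), -y[:, unit]]
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Hypothetical suppliers: rows = inputs (cost, lead time), one output (quality).
X = [[2.0, 4.0, 4.0],
     [3.0, 3.0, 6.0]]
Y = [[1.0, 1.0, 1.0]]
effs = [ccr_efficiency(X, Y, j) for j in range(3)]
```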

Problem statement: Mitigation of global warming and the energy crisis has called for an efficient tool for electricity planning. This study thus presents an electricity planning tool that incorporates Renewable Energy (RE) with a Feed-in Tariff (FiT) for various RE sources, to minimize grid-connected electricity generation cost and to satisfy nominal electricity demand and a CO2 emission reduction target. Approach: In order to perform these tasks, a general Mixed Integer Linear Programming (MILP) model was developed and implemented in the General Algebraic Modeling System (GAMS). The RE options considered include landfill gas, municipal solid waste, palm oil residue and hydro power. While the model presents a general approach for electricity planning, Iskandar Malaysia is used as a case study in this research. Results: The model considers the cost, the FiT, the availability of each Renewable Energy Source (RES) and the limit of the RE fund for FiT remuneration in Malaysia. The optimization result indicates that Iskandar Malaysia can satisfy the set target of 40% carbon emission reduction by 2015 by implementing biomass RE. Conclusion: It is revealed that a total of 875 MW of RE is required from Biomass Bubbling Fluidized Bed (BBFB) plants using various palm oil biomass fuels (mesofiber-215 MW, Empty Fruit Bunch (EFB)-424 MW and kernel-236 MW). However, this increases the Cost Of Electricity (COE) by 69-6.5% cents/kWh.

http://www.thescipub.com/abstract/10.3844/ajeassp.2012.53.58 2012/02/28 - 14:55
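The MILP above is solved in GAMS; as a toy stand-in, the same structure (meet demand and an emission cap at least cost by switching RE capacity blocks on or off) can be brute-forced for a handful of options. All figures below are invented, not the study's data:

```python
from itertools import product

# Hypothetical RE capacity blocks: unit generation cost (cents/kWh)
# and emission factor (kg CO2/kWh).
options = {
    "landfill_gas": {"cap_mw": 100, "cost": 7.0, "ef": 0.05},
    "palm_biomass": {"cap_mw": 400, "cost": 6.5, "ef": 0.03},
    "mini_hydro":   {"cap_mw": 150, "cost": 5.5, "ef": 0.0},
}
coal = {"cost": 4.0, "ef": 0.9}          # conventional marginal plant
demand_mw, max_avg_ef = 600.0, 0.45      # emission target as fleet-average EF

def plan():
    """Exhaustively pick each RE block on/off (a stand-in for the MILP's
    binary variables), fill the remainder with coal, and return the
    cheapest mix meeting demand and the emission cap."""
    best = None
    for picks in product([0, 1], repeat=len(options)):
        re_mw = sum(o["cap_mw"] * s for o, s in zip(options.values(), picks))
        if re_mw > demand_mw:
            continue
        coal_mw = demand_mw - re_mw
        ef = (sum(o["cap_mw"] * s * o["ef"] for o, s in zip(options.values(), picks))
              + coal_mw * coal["ef"]) / demand_mw
        if ef > max_avg_ef:
            continue
        cost = (sum(o["cap_mw"] * s * o["cost"] for o, s in zip(options.values(), picks))
                + coal_mw * coal["cost"]) / demand_mw
        if best is None or cost < best[0]:
            best = (cost, picks)
    return best

best_cost, best_picks = plan()
```

A real planner replaces the exhaustive loop with a MILP solver, which is what makes hundreds of options and multi-period constraints tractable.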

In this study, the effect of an inlet air cooling system and component irreversibilities on the performance of an active 25 MW gas turbine power plant was investigated. The objective was to establish the potential benefits of improving the performance of the current gas turbine plant into a more advanced cycle with high efficiency and power output through inlet air cooling. Problem statement: The hypothesis was that the low performance of the gas turbine plant was caused by high ambient temperature; the use of a spray cooler was adopted to bring the inlet air temperature close to the ISO condition. Approach: In this study, performance characteristics were determined for a set of actual operating conditions including ambient temperature, relative humidity, turbine inlet temperature and pressure ratio. Results: The results obtained show that the use of a spray cooler on the existing gas turbine cycle gives a better thermal efficiency and a lower irreversibility rate in the component systems and the entire plant. The power output of the gas turbine plant with the spray cooler was found to have increased by over 7%, accompanied by a 2.7% increase in machine efficiency, a 2.05% reduction in specific fuel consumption and a 10.03% increase in the energy of the exhaust. Furthermore, a 0.32% reduction in the total irreversibility rate of the plant for the cooled cycle was obtained, along with reductions of 0.39, 0.29 and 0.17% in the irreversibility rates of the compressor, turbine and combustion chamber, respectively. Conclusion: The results show that retrofitting the existing gas turbine plant with an inlet air cooling system gives better system performance and may prove to be an attractive investment opportunity.

http://www.thescipub.com/abstract/10.3844/ajeassp.2012.35.41 2012/02/24 - 19:12

Problem statement: The presence of metastasis in the regional lymph nodes is the most important factor in predicting prognosis in breast cancer. Many biomarkers have been identified that appear to relate to the aggressive behaviour of cancer. However, the nonlinear relation of these markers to nodal status, and the existence of complex interactions between markers, has prohibited an accurate prognosis. Approach: The aim of this study is to investigate the effectiveness of a Multilayer Perceptron (MLP) for predicting breast cancer progression using a set of four biomarkers of breast tumors. The biomarkers include DNA ploidy, cell cycle distribution (G0G1/G2M), steroid receptors (ER/PR) and S-Phase Fraction (SPF). A further objective of the study is to explore the predictive potential of these markers in defining the state of nodal involvement in breast cancer. Two methods of outcome evaluation, viz. stratified and simple k-fold Cross Validation (CV), are studied to assess their accuracy and reliability for neural network validation. Criteria such as output accuracy, sensitivity and specificity are used for selecting the best validation technique, besides evaluating the network outcome for different combinations of markers. Results: The results show that stratified 2-fold CV is more accurate and reliable than simple k-fold CV, as it obtains higher accuracy and specificity and also provides a more stable network validation in terms of sensitivity. The best prediction results are obtained by using an individual marker, SPF, which achieves an accuracy of 65%. Conclusion/Recommendations: Our findings suggest that MLP-based analysis provides an accurate and reliable platform for breast cancer prediction, given that an appropriate design and validation method is employed.
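The comparison of stratified versus simple k-fold CV hinges on stratification preserving class proportions in every fold, which stabilizes sensitivity and specificity estimates on imbalanced data. A minimal pure-Python sketch of the idea (in practice one would use a library routine such as scikit-learn's StratifiedKFold; the labels below are illustrative):

```python
from collections import defaultdict

def stratified_kfold(labels, k):
    """Yield k (train, test) index splits that preserve class proportions."""
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        for j, i in enumerate(idxs):
            folds[j % k].append(i)   # deal each class out round-robin
    for f in range(k):
        test = sorted(folds[f])
        train = sorted(i for g in range(k) if g != f for i in folds[g])
        yield train, test

labels = [0] * 6 + [1] * 4           # a 60/40 class balance
splits = list(stratified_kfold(labels, 2))
# every test fold keeps the 60/40 balance: 3 of class 0, 2 of class 1
```

A simple (unstratified) k-fold split of the same labels could easily place all minority-class samples in one fold, which is exactly the instability the abstract reports for sensitivity.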

http://www.thescipub.com/abstract/10.3844/ajeassp.2012.42.51 2012/02/24 - 19:12

Problem statement: Multiple Access Interference (MAI) signals and poor estimation of the unknown channel parameters in the presence of limited training sequences are two of the major problems that degrade system performance. Two synchronous multiuser receivers with Rake reception for Interleave Division Multiple Access (IDMA) and Code Division Multiple Access (CDMA) systems, in conjunction with channel estimation, are considered for communication over different short-range shallow-water acoustic channels. Approach: The proposed hard/soft chip channel estimation and carrier phase tracking are jointly optimized based on the Mean Square Error (MSE) criterion and adapted iteratively using the reconstructed MAI signal. This is generated from exchanged soft information in the form of Log-Likelihood Ratio (LLR) estimates from the single users' channel decoders. The channel parameter and error estimates are used to enable the chip cancellation process to retrieve an accurate measurement of the detrimental effects of Intersymbol Interference (ISI) and MAI. Results: The performance of the proposed receiver structures with small processing gains is investigated and compared for 2 and 4 synchronous users using memoryless Quadrature Phase-Shift Keying (QPSK) at an effective rate of 439.5 bps per user. Conclusion: The results demonstrate that performance is limited by MAI and ISI and that IDMA outperforms both long-code and short-code CDMA.
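The soft-information exchange described above rests on per-chip LLRs: the decoder's LLR is turned into an expected chip value, which is what gets re-spread and subtracted to cancel MAI. A hedged sketch for BPSK-mapped chips over AWGN (the mapping and noise model are textbook assumptions, not details taken from the paper):

```python
import math

def bpsk_llr(received, noise_var):
    """LLR of a BPSK-mapped chip observed over AWGN.

    With bit 0 -> +1, bit 1 -> -1 and noise variance sigma^2,
    L = log[P(b=0|r) / P(b=1|r)] = 2r / sigma^2.
    """
    return 2.0 * received / noise_var

def soft_chip_estimate(llr):
    """Expected chip value E[x] = tanh(L/2), usable for MAI reconstruction."""
    return math.tanh(llr / 2.0)

llr = bpsk_llr(0.9, 0.5)        # a fairly confident positive observation
est = soft_chip_estimate(llr)   # close to +1
```

The tanh saturates for confident LLRs (hard decision) and stays near zero for uncertain ones, so unreliable chips contribute little to the reconstructed interference, which is the point of soft cancellation.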

http://www.thescipub.com/abstract/10.3844/ajeassp.2011.556.565 2012/02/21 - 00:03

Problem statement: Current neuroimaging developments, in biological research and diagnostics, demand edge-defined and noise-free MRI scans. Thus, this study presents generalized parallel 2-D MRI filtering algorithms with their FPGA-based implementation in a single unified architecture. The parallel 2-D MRI filtering algorithms are Edge, Sobel X, Sobel Y, Sobel X-Y, Blur, Smooth, Sharpen, Gaussian and Beta (HYB). The nine MRI filtering algorithms were then empirically improved to generate enhanced MRI scan filtering results without significantly affecting the performance indices of high throughput and low power consumption at maximum operating frequency. Approach: The parallel 2-D MRI filtering algorithms are developed and implemented on FPGA using the Xilinx System Generator tool within the ISE 12.3 development suite. Two unified architectures are behaviorally developed, depending on the abstraction level of implementation. For comparison of the performance indices, two Virtex-6 FPGA boards, namely xc6vlX240Tl-1lff1759 and xc6vlX130Tl-1lff1156, are behaviorally targeted. Results: The improved parallel 2-D filtering algorithms enhance the filtered MRI scans to edge-defined, noise-free grayscale imaging. The single architecture is efficiently prototyped to achieve high filtering throughput (11,230 frames/second for a 64×64 MRI grayscale scan), minimum power consumption of 0.86 W with a junction temperature of 52°C and a maximum frequency of up to 230 MHz. Conclusion: The improved parallel MRI filtering algorithms, developed as a single unified architecture, provide visibility enhancement within the filtered MRI scan to aid the physician in detecting brain disease, e.g., trauma or intracranial haemorrhage. The high filtering throughput makes the nine parallel MRI filtering algorithms feasible candidates for applications such as real-time MRI.
Future Work: A set of parallel 3-D fMRI filtering algorithms will be developed and fast-prototyped on FPGA in a future research project.
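All nine filters named above are small-kernel spatial filters, so what the FPGA architecture parallelizes is essentially a 2-D windowed multiply-accumulate. A pure-Python reference of that operation with the Sobel X kernel (cross-correlation form with zero padding; the 4×4 test image is illustrative, not MRI data):

```python
def conv2d(img, kernel):
    """'Same'-size 2-D filtering (cross-correlation) with zero padding."""
    h, w = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    ph, pw = kh // 2, kw // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for i in range(kh):
                for j in range(kw):
                    yy, xx = y + i - ph, x + j - pw
                    if 0 <= yy < h and 0 <= xx < w:
                        s += img[yy][xx] * kernel[i][j]
            out[y][x] = s
    return out

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal-gradient kernel

# a vertical edge between dark (0) and bright (9) columns
img = [[0, 0, 9, 9] for _ in range(4)]
gx = conv2d(img, SOBEL_X)   # strong response where the columns change
```

Swapping SOBEL_X for a blur, sharpen or Gaussian kernel changes only the coefficient table, which is why a single unified hardware architecture can serve all nine algorithms.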

http://www.thescipub.com/abstract/10.3844/ajeassp.2011.566.575 2012/02/21 - 00:03

The present paper reports the results of a study undertaken to enhance the properties of fly ash mixed concrete composites (FMCC) with a locally available natural fiber. In this context, a composite of fly ash, concrete and treated coconut fibers, which are available in plenty in rural areas of India, can be a good proposition. With this background, an experimental investigation was taken up to study the effects of replacing cement (by weight) with different percentages of fly ash, and of adding processed natural coconut fiber, on flexural strength, compressive strength, splitting tensile strength and modulus of elasticity. A control mixture of proportions 1:1.49:2.79 with a w/c ratio of 0.45 was designed for the popular M20 concrete. Cement was replaced with five percentages (10, 15, 20, 25 and 30%) of Class C fly ash. Four percentages of coconut fibers (0.15, 0.30, 0.45 and 0.60%) of 40 mm length were used. Test results show that the replacement of 43-grade ordinary Portland cement with fly ash increased the compressive strength, splitting tensile strength, flexural strength and modulus of elasticity for the chosen mix proportion. The addition of coconut fibers to the fly ash mixed concrete composite (FMCC) further enhanced its mechanical properties and at the same time increased the energy absorption levels, reflected by increased failure strain, making the material suitable for seismic sustenance.

http://www.thescipub.com/abstract/10.3844/ajeassp.2012.29.34 2012/02/21 - 00:03

Problem statement: Since residual stresses can cause a significant reduction in the strength of mechanical components, having a semi-destructive measuring method that can predict and model these stresses would be an advantage. The hole drilling method has been used by many researchers for measurement of residual stresses in different materials. In this article, the finite element method is used to simulate the hole drilling method. Approach: The calibration factors required for measuring residual stresses in composite materials are computed using a finite element procedure instead of experimental techniques. In this approach, the elements within the hole boundary are first removed from the finite element model of the specimen. Then, the average strains released around the hole area are measured using a three-element strain gage rosette located near the hole boundary. Results: The calibration factors used to convert the released strains to residual stresses are obtained for the hole drilling method. The results for an orthotropic unidirectional ply made of glass/epoxy are presented. Conclusion: The compliance factors are represented as a 3×3 matrix for the orthotropic material. The results obtained from the finite element method are compared with the analytical solution. The good agreement between the results shows the reliability of the numerical method presented in this study.
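The final conversion the abstract describes, from the three released rosette strains back to residual stresses through a 3×3 compliance matrix, is a small linear solve. A sketch with a hypothetical compliance matrix and strain readings (the numbers are placeholders for illustration, not the paper's calibration factors):

```python
def solve3(C, e):
    """Solve the 3x3 linear system C s = e by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(C)
    return [
        det([[e[i] if k == j else C[i][k] for k in range(3)] for i in range(3)]) / d
        for j in range(3)
    ]

# hypothetical compliance matrix (microstrain per MPa) and rosette readings;
# these are placeholder values, not the paper's calibration factors
C = [[-1.8, -0.4,  0.1],
     [-0.6, -1.5,  0.3],
     [-0.3, -0.5, -2.0]]
strains = [-95.0, -80.0, -110.0]
stresses = solve3(C, strains)   # residual stresses (MPa) releasing those strains
```

The finite element procedure in the study serves to populate C; once it is known, every measured strain triple maps to a stress triple by this inversion.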

http://www.thescipub.com/abstract/10.3844/ajeassp.2012.25.28 2012/02/21 - 00:03

Problem statement: Nowadays, the Reconfigurable System on Chip (RSoC) shows great potential in many high-performance applications that benefit from hardware customization. Approach: In this study, we present a design approach for an FPGA-based controller for an electromechanical system. We present solutions obtained by a Hardware/Software Codesign methodology targeted at the implementation of a motor control drive system using a Multiprocessor SoC (MPSoC) architecture. In order to enhance the flexibility and performance of the considered system, we design different modules of the hardware current controller of an electric motor. A Dynamic Partial Reconfiguration (DPR) mechanism allowing on-the-fly switching between those modules is described. Results: Tests and validation were carried out to validate the approach adopted. Experimental results confirmed the efficiency of the approach and allowed us to formulate further recommendations that should be considered while designing an RSoC control drive system. Conclusion/Recommendations: DPR enables flexible control system hardware design; this concept allows switching between different low-order controllers.

http://www.thescipub.com/abstract/10.3844/ajeassp.2012.34.44 2012/02/18 - 23:12

Problem statement: The rearing of animals for domestic consumption and export invariably leads to the production of methane as a product of digestion. This study investigated the emission of methane from Malaysian livestock between 1980 and 2008. Approach: Seven categories of animals were identified: cattle, buffalo, sheep, goats, horses, pigs and poultry. The estimation of methane was based on the IPCC Tier 1 and Tier 2 methods. Methane emission from cattle rose by 44% within the period, from 45.61 to 65.57 Gg. Results: Buffalo recorded a 54% drop in methane emission, from 17.12 to 7.86 Gg, while emission from sheep initially rose by 350% up to 1992 only to drop by 56% by 2008. Goat emissions declined by 17%, from 1.79 Gg in 1980 to 1.49 Gg by 2008. Methane emission from horses has been consistent at around 0.14 Gg. The decreasing stock of pigs has led to a drop in methane emission from this group of animals, with most of the emission coming from manure management. Conclusion: The healthy export market for poultry has seen methane emission rise by 274%, from 2.18 Gg in 1980 to 8.17 Gg by 2008. The overall increase in methane emission from all livestock is 20%, from 81.83 Gg in 1980 to 98.76 Gg in 2008. With the government's aggressive drive to boost cattle and goat production, there is the likelihood of an increase in methane emission in the future, and mitigation options will have to be applied.
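IPCC Tier 1 estimates are simply head counts multiplied by default emission factors, and the percentage changes quoted above can be checked directly from the reported totals. A short sketch (the 55 kg/head/yr factor in the example is an illustrative placeholder, not the study's value):

```python
def tier1_ch4_gg(population, ef_kg_per_head):
    """IPCC Tier 1: annual CH4 (Gg) = population * emission factor (kg/head/yr) / 1e6."""
    return population * ef_kg_per_head / 1e6

def percent_change(start_gg, end_gg):
    """Percentage change between two reported emission totals."""
    return (end_gg - start_gg) / start_gg * 100

# cattle: 45.61 -> 65.57 Gg, consistent with the reported ~44% rise
cattle_rise = percent_change(45.61, 65.57)
# all livestock: 81.83 -> 98.76 Gg, about 20.7%, which the abstract rounds to 20%
total_rise = percent_change(81.83, 98.76)
# illustrative Tier 1 call: one million head at an assumed 55 kg/head/yr
example_gg = tier1_ch4_gg(1_000_000, 55)
```

Tier 2 refines the same product by deriving the emission factor from animal weight, diet and productivity rather than a regional default.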

http://www.thescipub.com/abstract/10.3844/ajeassp.2012.1.8 2012/02/01 - 06:15

Problem statement: The literature proves the importance of the role of technology in the effectiveness of virtual Research and Development (R&D) teams for new product development. However, the factors that make up the technology construct in a virtual R&D team are still ambiguous, and managers of virtual R&D teams for new product development do not know which types of technology should be used. Approach: To address this gap, the study presents a set of factors that make up a technology construct. The proposed construct was modified based on the findings of a field survey (N = 240). We empirically examine the relationship between the construct and its factors by employing Structural Equation Modeling (SEM). A measurement model was built based on the 19 preliminary factors extracted from the literature review. Results: Of the 19 factors, 10 were retained in the technology construct. These 10 technology factors can be grouped into two constructs, namely Web-based communication and Web-based data sharing. The findings can help new product development managers of enterprises to concentrate on the main factors for leading an effective virtual R&D team. In addition, they provide a guideline for software developers as well. Conclusion: Second- and third-generation technologies are now more suitable for developing new products through virtual R&D teams.

http://www.thescipub.com/abstract/10.3844/ajeassp.2012.9.14 2012/02/01 - 06:15

Problem statement: The objective of this study is to improve the runner design of a Francis turbine and analyze its performance with the Computational Fluid Dynamics (CFD) technique. Approach: The runner design process uses a direct method with the following design conditions: flow rate of 3.12 m3 sec-1, head of 46.4 m and speed of 750 rpm, or a dimensionless specific speed of 0.472. Results: The first stage involves the calculation of various dimensions, such as the blade inlet and exit angles at the hub, mean and shroud positions, to depict the meridional plane. The second stage deals with the CFD simulation. Various results were calculated and analyzed for factors affecting the runner's performance. Results indicated that the head rise of the runner at the design point was approximately 39 m, which is lower than the specified head. Based on past experience, the meridional plane was modified and the blade inlet and lean angles were corrected. The process of meridional plane modification was repeated until the head rise was nearly equal to the specified head and the velocity vectors and streamlines formed a uniform stream. Conclusion/Recommendations: The calculated runner efficiency was approximately 90% at the design point. The absolute velocity components from the CFD simulation indicated that swirling flow occurred at the runner exit. Based on the comparison of the runner's performance between the simulation results and experimental data from previous work reported in the literature, it is possible to use this method to simulate the performance of a Francis turbine runner.
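As a sanity check on the design point, the hydraulic power implied by the stated flow and head follows from P = ρgQH; applying the reported ~90% runner efficiency gives an estimate of the shaft power (an illustrative back-of-the-envelope calculation, not a figure from the paper):

```python
def hydraulic_power_mw(flow_m3s, head_m, rho=1000.0, g=9.81):
    """Hydraulic power available in the water, in MW: P = rho * g * Q * H."""
    return rho * g * flow_m3s * head_m / 1e6

p_avail = hydraulic_power_mw(3.12, 46.4)   # about 1.42 MW at the design point
p_shaft = p_avail * 0.90                   # with the reported ~90% runner efficiency
```

The 39 m head rise found in the first CFD pass corresponds to the runner extracting noticeably less than this available power, which is why the meridional plane had to be iterated until the head rise matched the 46.4 m specification.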

http://www.thescipub.com/abstract/10.3844/ajeassp.2011.540.547 2012/01/26 - 01:12

Problem statement: Manufactured homes are susceptible to hurricane damage. Each year, significant losses, in terms of fatalities and property damage, are reported. There is a prevalent concern about the lateral load resistance capacity of the tie-down systems of manufactured homes subjected to windstorms. This study was performed to determine the effects of hurricane wind on the foundations of manufactured homes. Approach: A 1:120 scale model of a single-wide manufactured home of size 14 ft by 80 ft was designed for the wind tunnel test. Proper instrumentation and simulation were used to measure the wind forces applied to the model. A sting balance and a Pitot-static tube were used to measure forces and air velocity during the wind tunnel test, and displacements of the anchors were observed. Results: The ultimate forces as well as the displacements of the anchors were determined at different angles of wind direction ranging from 30-180°. Wind speed inside the tunnel was increased at the rate of 5 miles h-1. Conclusion/Recommendations: Test results showed that the auger anchors used to support lateral load are incapable of resisting hurricane wind loads. It was found that anchors displaced 2 in. vertically and 4 in. horizontally at loads less than 4725 lb. The tested manufactured home anchors experienced a maximum force of 4087 lb when a 45 miles h-1 wind acted transverse to the wall; the anchors displaced more than 2 in. vertically and 4 in. horizontally under this wind load. This research indicated that manufactured home ground anchors can sustain a wind velocity of 95 miles h-1 when the wind acts in the longitudinal direction.
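The anchor forces reported are of the order predicted by the standard velocity-pressure relation q = 0.00256 V² (q in psf, V in mph, air at sea-level conditions). A rough sketch; the 8 ft sidewall height and the pressure coefficient of 1.0 are assumptions for illustration, as neither is given in the abstract:

```python
def dynamic_pressure_psf(v_mph):
    """Velocity pressure q = 0.00256 * V^2 (psf, V in mph), the standard
    constant for air at sea-level conditions."""
    return 0.00256 * v_mph ** 2

def wall_load_lb(v_mph, area_sqft, cp=1.0):
    """Quasi-static wind load on a wall of the given tributary area."""
    return dynamic_pressure_psf(v_mph) * area_sqft * cp

# 45 mph wind on an assumed 80 ft x 8 ft sidewall with cp = 1.0
q = dynamic_pressure_psf(45)      # about 5.18 psf
load = wall_load_lb(45, 80 * 8)   # about 3300 lb, same order as the 4087 lb measured
```

The quadratic dependence on V is why transverse winds only modestly above 45 mph overwhelm anchors that fail below 4725 lb.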

http://www.thescipub.com/abstract/10.3844/ajeassp.2011.548.555 2012/01/26 - 01:12