
American Journal of Biostatistics

This article considers the analysis of Multiple Linear Regressions (MLRs), an essential statistical method for the analysis of medical data in various fields of medical research such as prognostic studies, epidemiological risk factor studies, experimental studies, diagnostic studies and observational studies. An approach is used in this article to select the "true" regression model with different sample sizes. We used a simulation study to evaluate the approach in terms of its ability to identify the "true" model with two options of distance measures: Ward's Minimum Variance Approach and the Single Linkage Approach. The two options were compared in terms of the percentage of times that they identified the "true" model. The simulation results indicate that, overall, the approach exhibited excellent performance, with the second option providing the best performance for the two sample sizes considered. The primary result of our article is that we recommend using the approach with the second option as a standard procedure to select the "true" model.
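The two distance options named above, Ward's minimum variance and single linkage, are standard hierarchical clustering rules. A minimal sketch of comparing them in Python with SciPy follows; the synthetic data and the scoring by agreement with a known grouping are illustrative assumptions, not the authors' model-selection procedure.

```python
# Sketch: comparing Ward's minimum variance and single linkage
# (assumed illustration; not the authors' exact model-selection procedure).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# Hypothetical feature matrix with two known groups.
X = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(4, 1, (20, 3))])
true_labels = np.repeat([1, 2], 20)

for method in ("ward", "single"):
    Z = linkage(X, method=method)                     # clustering tree
    labels = fcluster(Z, t=2, criterion="maxclust")   # cut into 2 clusters
    # Agreement with the known grouping, up to label switching.
    agree = max(np.mean(labels == true_labels),
                np.mean(labels == (3 - true_labels)))
    print(f"{method:6s} linkage: {100 * agree:.1f}% agreement")
```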

http://www.thescipub.com/abstract/10.3844/amjbsp.2014.29.33 2014/09/20 - 12:41

Menopause is not an illness but rather an important event, as it changes the body's physiology and mental cognition via hormonal changes. During the analysis of menopause incidence data, a new bivariate distribution was discovered. Its marginal and conditional distributions and statistical properties, including the inter- and partial correlations, are explored and utilized to interpret menopause data. A likelihood ratio hypothesis testing procedure is constructed to test the statistical significance of the sample estimate of the chance of menopause and the estimate of the chance of operative menopause. The menopause data are analyzed and interpreted in the illustration. Research directions for future work are pointed out.

http://www.thescipub.com/abstract/10.3844/amjbsp.2014.34.44 2014/08/24 - 16:19

Multi-state stochastic models are useful tools for studying complex dynamics such as chronic diseases. The purpose of this study is to determine factors associated with the progression between different stages of the disease and to model the progression of HIV/AIDS in an individual patient under ART follow-up using semi-Markov processes. A sample of 1456 patients was taken from hospital records at Amhara Referral Hospitals, Amhara Region, Ethiopia, covering patients under ART follow-up from June 2006 to August 2013. The states of disease progression adopted in the multi-state model were defined based on the following CD4 cell counts: ≥500 (SI); 200 to 499 (SII); <200 (SIII); and death (D). The first three states are classified as good. Female patients were 1.6 times more likely to move from state 2 to state 1 than male patients (adjusted HR = 1.60, CI = 1.02-2.49). Patients who were not drug addicted were 2.49 times more likely to move from state 3 to state 2 than drug-addicted patients (adjusted HR = 2.67, CI = 1.52-4.68). Patients with tuberculosis were 2.67 times more likely to move from state 3 to state 4 than those without tuberculosis (adjusted HR = 2.67, CI = 1.52-4.68). On the other hand, the probability of staying in the same state until a given number of months decreases with increasing time. Multi-state modeling is a powerful approach for studying chronic diseases and estimating factors associated with transitions between each stage of progression. The major predictors of the intensity of transitions between different states of HIV/AIDS patients were gender, age, drug addiction and TB status. The dynamic nature of AIDS progression is confirmed, with the particular finding that a patient is more likely to be in a worse state than a better one unless interventions are made.
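The state definitions above are concrete enough to code directly. A minimal sketch of the CD4-count-to-state mapping in Python follows; the function name and the handling of a death indicator are illustrative assumptions.

```python
# Sketch: mapping CD4 counts to the paper's disease states
# SI (>=500), SII (200-499), SIII (<200) and D (death).
def hiv_state(cd4_count: float, died: bool = False) -> str:
    """Return the multi-state model state for one observation."""
    if died:
        return "D"            # absorbing death state
    if cd4_count >= 500:
        return "SI"           # good immune status
    if cd4_count >= 200:
        return "SII"          # intermediate
    return "SIII"             # advanced disease

print(hiv_state(620), hiv_state(310), hiv_state(90), hiv_state(150, died=True))
```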

http://www.thescipub.com/abstract/10.3844/amjbsp.2014.21.28 2014/08/12 - 06:03

Primary data on 836 eligible women in the age group of 15-49 years are used to determine the causal effects of covariates on under-five mortality. The eight covariates, viz., Number of Family Members (NHM), Type of Toilet Facility (TTF), Total Children Ever Born (TCB), Parity (PAR), Duration of Breastfeeding (DBF), Contraceptive use (CMT), DPT and Ideal Number of Girls (ING), are considered in the study. By applying Cox's regression analysis, six covariates, viz., TTF, NHM, CMT, DBF, DPT and ING, are found to have a substantial and significant effect on under-five mortality. Further, a life table of the under-five children under study is constructed using the estimate of the survival function obtained from the Cox regression model.
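As a sketch of the kind of Cox regression fit the abstract describes, the following uses the lifelines package on a synthetic data frame; the column names and data are invented for illustration and are not the study's covariates or results.

```python
# Sketch: Cox proportional hazards fit (illustrative synthetic data,
# not the study's actual covariates or estimates).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "duration": rng.exponential(24, n),       # follow-up time in months
    "event": rng.integers(0, 2, n),           # 1 = death observed
    "toilet_facility": rng.integers(0, 2, n),
    "breastfeeding_months": rng.normal(12, 4, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
cph.print_summary()            # hazard ratios with confidence intervals
# Baseline survival estimate, usable for life-table construction.
print(cph.baseline_survival_.head())
```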

http://www.thescipub.com/abstract/10.3844/amjbsp.2014.1.10 2014/07/18 - 21:02

The Wald interval is easy to calculate and is often used as the confidence interval for binomial proportions. However, when using this confidence interval, the actual coverage probability often falls below the nominal coverage probability in small samples. On the other hand, several confidence intervals whose actual coverage probability does not fall below the nominal coverage probability have been suggested. In this study, we introduce five exact confidence intervals whose actual coverage probability does not fall below the nominal coverage probability; we calculate the expected lengths of the confidence intervals and compare and verify the accuracy of the coverage probabilities. Further, we examine the characteristics of these five exact confidence intervals at length. The coverage probability of the Sterne interval was significantly closer to 0.95 than that of the other confidence intervals, and more stable; its expected lengths are less scattered than those of the other methods. As a result, we found that the merit of the confidence interval based on the Sterne test is its suitability for small samples.
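The coverage shortfall of the Wald interval can be computed exactly by summing binomial probabilities over all possible outcomes. A small sketch, assuming a 95% nominal level and an illustrative n and p:

```python
# Sketch: exact coverage probability of the 95% Wald interval for a
# binomial proportion (n and p chosen only for illustration).
import numpy as np
from scipy.stats import binom, norm

def wald_coverage(n: int, p: float, level: float = 0.95) -> float:
    z = norm.ppf(0.5 + level / 2)
    k = np.arange(n + 1)
    phat = k / n
    half = z * np.sqrt(phat * (1 - phat) / n)
    covers = (phat - half <= p) & (p <= phat + half)
    # Sum P(X = k) over the outcomes whose interval contains p.
    return binom.pmf(k[covers], n, p).sum()

print(wald_coverage(20, 0.1))   # typically well below the nominal 0.95
print(wald_coverage(20, 0.5))
```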

http://www.thescipub.com/abstract/10.3844/amjbsp.2014.11.20 2014/07/18 - 21:02

Biomass and extracellular polysaccharide of Ganoderma tsugae have various biological activities, including anti-inflammatory, antioxidant and antitumor activity. However, the growth rate of G. tsugae in nature is very slow. Therefore, many studies have attempted to develop mass culture systems for G. tsugae using laboratory techniques. Many parameters of submerged fermentation of G. tsugae were studied to optimize the process through a combination of statistical techniques. Ten parameters from preliminary results and literature reviews (maltose, skim milk, KH2PO4+K2HPO4, MgSO4·7H2O, CaCO3, vitamin B5+B6, olive oil, ethanol, pH and shaking speed) were screened by a Plackett-Burman design. The optimal ranges of the significant parameters were determined by the path of steepest ascent method and the optimal condition of the process was determined by the response surface method. Maltose, skim milk and pH are significant parameters for G. tsugae cultivation. The conditions of 31.031 g L-1 maltose, 14.055 g L-1 skim milk and an initial pH of 7.12 resulted in the maximum extracellular polysaccharide content of 415 mg L-1, and the same fermentation broth at an initial pH of 6.46 exhibited the highest biomass at 15.776 g L-1. Finally, the optimal condition was compared with the unoptimized condition; the result indicates that the combination of statistical techniques enhances the production of biomass and extracellular polysaccharide (13X and 1.5X of the control, respectively). Therefore, these strategies are useful for the improvement of submerged fermentation of G. tsugae, which can be applied in the pharmaceutical industry.
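The path of steepest ascent follows the fitted first-order model's coefficient vector away from the design center. A minimal sketch under assumed (made-up) coefficients and step scales for the three significant factors; these numbers are not from the paper:

```python
# Sketch: path of steepest ascent from a fitted first-order model.
# Coefficients, center and step sizes are invented for illustration only.
import numpy as np

center = np.array([25.0, 10.0, 6.5])     # maltose (g/L), skim milk (g/L), pH
coef = np.array([0.8, 0.5, 0.3])         # assumed first-order effects (coded units)
scale = np.array([5.0, 2.0, 0.5])        # natural-unit change per coded unit

direction = coef / np.linalg.norm(coef)  # unit step in coded space
for step in range(6):
    point = center + step * direction * scale   # back to natural units
    print(f"step {step}: maltose={point[0]:.2f}, "
          f"skim milk={point[1]:.2f}, pH={point[2]:.2f}")
```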

http://www.thescipub.com/abstract/10.3844/amjbsp.2013.38.46 2014/07/05 - 16:22

A model is an abstraction of reality. The selection of the usual inverse binomial as an underlying model for the number of patients waiting in months for a heart and lung transplant is questionable because the data exhibit not the required balance between the dispersion and its functional equivalent in terms of the mean, but rather over- or under-dispersion. This phenomenon of over/under-dispersion has made it a challenge to find an appropriate underlying model for the data. This article offers an innovative approach with a new model to resolve the methodological breakdown. The new model is named the Imbalanced Inverse Binomial Model (IIBM). A statistical methodology is devised based on the IIBM to analyze the collected data. The methodology is illustrated with real-life data on the number of patients waiting in months for heart and lung transplants together. The results in the illustration convincingly show that the new approach is quite powerful and brings out much more information that would otherwise have been missed. Specifically, the odds of receiving the organs are higher under an estimated imbalance in the data than under an ideal zero imbalance in all the states except Alabama. The odds are consistently higher under an estimated imbalance in the data than under an ideal zero imbalance across all the age groups waiting in months. Further research work is needed to identify and explain the factors which might have caused the imbalance between the observed dispersion in the data and its functionally equivalent amount according to the underlying inverse binomial model for the data. The contents of this article remain the foundation on which future research work will be built.

http://www.thescipub.com/abstract/10.3844/amjbsp.2013.30.37 2014/01/29 - 12:38

Rape victims are often afraid to report the crime for fear of retaliation or humiliation. Consequently, the number of reported rapes is under-estimated. How the number of unreported rapes should be estimated is discussed in this article. For this purpose, the Poisson distribution is modified; the result is named the Bumped-up Poisson distribution in this article. Related probability informatics are derived to estimate the number of unreported rapes and the proportion fearing to report. A hypothesis testing procedure is developed to assess the significance of an estimated proportion fearing to report. Our approach is applied to the rapes reported during the years 2007 and 2008 in a random sample of nations from all the continents. Proximities among the nations in rape incidence are identified.

http://www.thescipub.com/abstract/10.3844/amjbsp.2013.17.29 2013/11/04 - 23:54

The amount of health benefit derived from breastfeeding is influenced by the age of the child at initiation of the first breast milk, the duration and intensity of breastfeeding and the age at which the child is introduced to supplementary foods and other liquids. In this study, the general trend in the timing of breastfeeding initiation among nursing mothers in Nigeria between 1990 and 2003 is examined. The timing of initiation of the first breast milk to a child by her mother is measured on a three-level ordinal scale (immediately, within 24 h and days after birth) and the impacts of some socio-economic and maternal factors on this are determined. Results from this study revealed a significant improvement in the trend of early initiation of breast milk among Nigerian mothers between 1990 and 2003 (p<0.0001). The mother's age at birth, her educational attainment, delivery at a hospital and residence in an urban area contributed positively to early initiation of the first breast milk by Nigerian nursing mothers (p<0.05). On the contrary, however, delivery through caesarean operation and the current birth being the mother's first delivery were both found to militate against early initiation of breastfeeding in Nigeria (p<0.05). Three waves of national data from the Nigerian Demographic and Health Surveys for 1990, 1999 and 2003 were employed in the study.

http://www.thescipub.com/abstract/10.3844/amjbsp.2013.1.10 2013/09/05 - 13:26

Simulations of single vacancy defect transients in an FCC structure were conducted to study the change in its final structure, especially the average atomic volume. The numerical code "ALINE" was employed for this purpose. The results obtained showed that when a single vacancy defect occurred in a perfect FCC crystal structure, the average atomic volume suddenly increased and then gradually decreased to a value close to the initial one. This suggests that the FCC structure was able to expand and fill the volume originally occupied by the missing atom.

http://www.thescipub.com/abstract/10.3844/amjbsp.2013.11.16 2013/09/05 - 13:26

A prelude to interpreting a pattern in repeating incidences is to identify the underlying frequency distribution of the collected data. A case in point is the Poisson distribution, which is often selected for medical count data such as gene mutations, medication errors and the number of ambulatory pickups in a day. A requirement for the Poisson distribution is that the variance ought to be equal to the mean. The variance signifies the volatility in the occurrences; an implication is that the volatility increases when the average incidence is higher. When this requirement of the functional equivalence of the Poisson mean and variance is breached, the data deviate from a Poisson distribution. How could a data analyst recognize and point out to the medical team the dilution level of this requirement in their data? For this purpose, a simple geometric approach is developed in this article and illustrated with several historical data sets from the literature.
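A standard numerical companion to such a check is the index-of-dispersion test: under a Poisson model, (n-1)s²/x̄ is approximately chi-square with n-1 degrees of freedom. A minimal sketch of this classical test (not the article's geometric approach), on simulated data:

```python
# Sketch: chi-square dispersion (variance-to-mean) test for Poisson counts.
# This is the classical index-of-dispersion test, not the article's
# geometric method; the data are simulated for illustration.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(7)
counts = rng.negative_binomial(5, 0.5, size=60)   # over-dispersed counts

n, mean, var = len(counts), counts.mean(), counts.var(ddof=1)
stat = (n - 1) * var / mean                       # ~ chi2(n-1) under Poisson
p = 2 * min(chi2.cdf(stat, n - 1), chi2.sf(stat, n - 1))  # two-sided

print(f"variance/mean = {var / mean:.2f}, p-value = {p:.4f}")
```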

http://www.thescipub.com/abstract/10.3844/amjbsp.2011.56.60 2013/06/09 - 15:18

The Poisson distribution is one of the most useful probability distributions for fitting rare event data. The confidence interval for the SNR is an important issue among researchers in image processing. This study considers several confidence intervals for the SNR of a Poisson distribution. Different confidence intervals available in the literature are reviewed and compared based on the coverage probability and average width of the intervals. Since a theoretical comparison is not possible, a simulation study has been conducted to compare the performance of the interval estimators. Based on the simulation study, we observed that most of the proposed interval estimators, except the Wald, Waldz and bootstrap methods, perform well in the sense of attaining the nominal size, and they are recommended to researchers. The exact method performed the best, followed by VSS, Wald B and Bayes, in the sense of attaining the nominal size and a shorter width when the SNR is large.
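For a Poisson variable the SNR is commonly taken as μ/σ = √μ, so an exact interval for μ transforms directly into one for the SNR. A sketch using the classical Garwood (chi-square) exact interval, which may or may not be the "exact method" the abstract refers to:

```python
# Sketch: exact (Garwood) confidence interval for a Poisson mean,
# transformed to the SNR = sqrt(mu). Illustrative, and not necessarily
# the 'exact method' compared in the paper.
from math import sqrt
from scipy.stats import chi2

def poisson_snr_ci(x: int, level: float = 0.95):
    alpha = 1 - level
    lo = 0.0 if x == 0 else chi2.ppf(alpha / 2, 2 * x) / 2
    hi = chi2.ppf(1 - alpha / 2, 2 * (x + 1)) / 2
    return sqrt(lo), sqrt(hi)    # monotone transform preserves coverage

print(poisson_snr_ci(12))        # observed count of 12
```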

http://www.thescipub.com/abstract/10.3844/amjbsp.2011.44.55 2012/09/19 - 16:57

Motivation for this research work started while helping a hospital administrator to assess whether a patient-oriented activity duration, X≥0, reflects the service's efficiency. A higher value of the sample mean duration implies lower productivity in the hospital and more healthcare cost. Likewise, a larger value of the sample dispersion, s_x^2, in the service durations is an indicator of lower reliability and inefficiency. Of course, the dispersion s_x^2 in a hospital's healthcare operation could be due to diverse medical complications among patients or to operational inefficiency. Assuming that it is not the diverse medical complications of patients, how should the pertinent information be extracted from the data, quantified and interpreted to address inefficient operation? This is the problem statement discussed in this article. To be specific, in an inefficient hospital operation, the sample dispersion and mean of the service durations are likely to be highly correlated. Their correlation is a clue for identifying an inefficient hospital operation. To compute this correlation, there is currently no appropriate formula in the literature. The aim of this article is, therefore, to derive a working formula for the correlation between the sample dispersion and mean. The dispersion is too valuable a statistical measure to dispense with quickly, not only in healthcare operations but also in engineering, economics, business, social or sport applications. The approach starts by quantifying a general relationship between the dispersion and mean in a given data set. This relationship might range from linear to quadratic, cubic or higher degree. Suppose that the dispersion, σ^2, is a function f(μ) of the mean, μ, of the patient-oriented activity durations; the specific functional form depends on the frequency pattern of the data. The tangent at a locus of their relationship curve is a declining or inclining line with an angle θ whose cosine is indeed the correlation between the sample mean, x̄, and the dispersion, s_x^2. An expression to compute this angle is nowhere to be seen in the literature. Therefore, this article derives a general expression based on geometric concepts and then obtains specific formulas for several count and continuous distributions. These expressions are foundations for further data analyses. To initiate, promote or maintain an efficient service operation for patients in a hospital, practical strategies have to be formulated based on the clue in the form of the correlation value. For this purpose, a one-to-one relationship between the sample dispersion and mean can be utilized to improve service efficiency. In this process, a formula is developed to check whether the model parameters are orthogonal. The curvature and the shifting angle in the relationship between dispersion and mean are captured when the mean changes by one unit. Both the Poisson and exponential distributions are used as illustrations to convey the concepts and the derived expressions of this article. Efficient healthcare service is a necessity not only in the USA but also in other nations, because of escalating demand by medical tourists in this era of globalized medical treatment. A reformation of the entire healthcare field could be achievable with the help of biostatistical concepts and tools. The correlation is a tool to extract and comprehend the pertinent information in the patient-oriented activity durations; this information holds the key to the much-needed reformation and operational efficiency.
This article illustrates that the correlation between the data mean and dispersion provides clues. The correlation helps to assess healthcare service efficiency, as demonstrated in this article with data. Similar applications occur in engineering, business and science fields.
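The mean-dispersion correlation can be seen empirically by splitting service durations into batches and correlating batch means with batch variances. A sketch on simulated exponential durations (the batching scheme and data are assumptions for illustration, not the article's derived formula):

```python
# Sketch: empirical correlation between batch means and batch variances
# of service durations (simulated exponential data, for illustration only).
import numpy as np

rng = np.random.default_rng(3)
batches = rng.exponential(scale=2.0, size=(200, 30))  # 200 batches of 30 durations

means = batches.mean(axis=1)
variances = batches.var(axis=1, ddof=1)
r = np.corrcoef(means, variances)[0, 1]

# For exponential data sigma^2 = mu^2, so means and variances co-move strongly.
print(f"correlation(mean, variance) = {r:.3f}")
```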

http://www.thescipub.com/abstract/10.3844/amjbsp.2011.36.43 2012/08/30 - 14:36

In this study, the interrelated concepts of the trivariate distribution function, trivariate survival function, trivariate probability density function and trivariate hazard rate function of the trivariate Weibull distribution are presented. The goal of this contribution is to estimate the parameters of the trivariate Weibull hazard rate. To reach this goal, we use an analytical estimation approach, the Maximum Likelihood Estimation (MLE) method. Using a numerical iterative procedure, the estimators of the scale parameters, the shape parameters and the power parameter of the trivariate Weibull hazard rate are obtained. The MLE technique estimates the trivariate Weibull hazard rate parameters accurately.
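The trivariate likelihood is beyond a short sketch, but the same iterative MLE machinery is easy to show in the univariate Weibull case. An illustrative reduction using SciPy's numerical optimizer (a simplified stand-in, not the paper's trivariate estimator):

```python
# Sketch: maximum likelihood estimation for a univariate Weibull
# (a simplified stand-in for the paper's trivariate procedure).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
data = rng.weibull(a=1.8, size=500) * 3.0   # true shape 1.8, scale 3.0

def neg_log_lik(theta):
    shape, scale = np.exp(theta)            # log-parametrization keeps both > 0
    z = data / scale
    return -np.sum(np.log(shape / scale) + (shape - 1) * np.log(z) - z**shape)

res = minimize(neg_log_lik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
shape_hat, scale_hat = np.exp(res.x)
print(f"shape ~ {shape_hat:.2f}, scale ~ {scale_hat:.2f}")
```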

http://www.thescipub.com/abstract/10.3844/amjbsp.2011.26.35 2012/08/12 - 06:38

Problem statement: Relative risk has a concrete meaning for comparing two groups and measuring the association between exposures and outcomes in medical and public health studies. The log-binomial model, which uses a log link function on binary outcomes, estimates risk ratios directly but generates boundary problems. When the estimates are located near the boundary of the constrained parameter space, common approaches or procedures using software such as R or SAS fail to converge. Approach: In this study, we propose a truncated algorithm to estimate relative risk using the log-binomial model. We used simulation studies on both single- and multiple-covariate models to investigate its performance and compare it with other similar methods. Results: Our algorithm was shown to outperform other methods regarding precision, especially in high dimensional predictor space. Conclusion: The truncated IWLS method solves the slow convergence problem and provides valid estimates when previously proposed methods fail.
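A plain log-binomial fit (without the paper's truncated IWLS modification) can be expressed in a few lines with statsmodels; whether it converges depends on how close the fit is to the boundary, which is exactly the problem the paper addresses. The data and variable names below are illustrative assumptions:

```python
# Sketch: a plain log-binomial GLM (binomial family, log link), whose
# exponentiated coefficients are risk ratios. Synthetic data; convergence
# near the parameter-space boundary is not guaranteed.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
exposure = rng.integers(0, 2, n)
p = np.clip(0.1 * np.exp(0.7 * exposure), 0, 1)   # true risk ratio exp(0.7)
y = rng.binomial(1, p)

X = sm.add_constant(exposure)
model = sm.GLM(y, X, family=sm.families.Binomial(link=sm.families.links.Log()))
res = model.fit()
print(np.exp(res.params))     # exp(coefficient) = estimated risk ratio
```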

http://www.thescipub.com/abstract/10.3844/amjbsp.2011.20.25 2012/05/10 - 22:16

Problem statement: Logistic regression, perhaps the most frequently used regression model after the General Linear Model (GLM), is extensively used in the field of medical science to analyze prognostic factors in studies of dichotomous outcomes. Unlike for the GLM, many different proposals have been made to measure the explained variation in logistic regression analysis. One of the limitations of these measures is their dependence on the incidence of the event of interest in the population. This is a clear disadvantage, especially when one seeks to compare the predictive ability of a set of prognostic factors in two subgroups of a population. Approach: The purpose of this article is to study the base-rate sensitivity of several R2 measures that have been proposed for use in logistic regression. We compared the base-rate sensitivity of thirteen parametric and nonparametric R2-type statistics. Since a theoretical comparison is not possible, a simulation study was conducted for this purpose. We used results from an existing dataset to simulate populations with different base-rates; logistic models were generated using the covariate values from the dataset. Results: We found nonparametric R2 measures to be less sensitive to the base-rate than their parametric counterparts. However, logistic regression is a parametric tool and the use of nonparametric R2 may yield inconsistent results. Among the parametric R2 measures, the likelihood ratio R2 appears to be the least dependent on the base-rate and has relatively superior interpretability as a measure of explained variation. Conclusion/Recommendations: Some potential measures of explained variation are identified which tolerate fluctuations in base-rate reasonably well and at the same time provide a good estimate of the explained variation in an underlying continuous variable. It would, however, be misleading to draw strong conclusions based on this research alone.
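Two common parametric variants, the likelihood-ratio (Cox-Snell) R2 and McFadden's R2, fall out of the fitted and null log-likelihoods directly. A sketch on synthetic data; the paper's full set of thirteen measures is not reproduced here:

```python
# Sketch: likelihood-based R2 measures for logistic regression
# (Cox-Snell / likelihood-ratio R2 and McFadden's R2), synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 400
x = rng.normal(size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.2 * x))))

res = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
ll_model, ll_null = res.llf, res.llnull

r2_cox_snell = 1 - np.exp(2 * (ll_null - ll_model) / n)
r2_mcfadden = 1 - ll_model / ll_null
print(f"Cox-Snell R2 = {r2_cox_snell:.3f}, McFadden R2 = {r2_mcfadden:.3f}")
```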

http://www.thescipub.com/abstract/10.3844/amjbsp.2011.11.19 2012/02/25 - 17:53

Problem statement: We propose a Bayesian method (RBP) to recursively infer the independence structure of epistatic interactions in case-control studies. Approach: Building on the results of BEAM2, RBP can powerfully detect marginal and conditional independence within interacting SNPs, even in complicated interaction cases. Results: We conducted extensive simulations to test RBP and compare it with stepwise logistic regression. Simulation results show that this approach is more powerful than stepwise logistic regression in detecting marginal independence and conditional independence, as well as more complicated dependence structures. We then applied BEAM2 and RBP to the dbMHC Type 1 Diabetes (T1D) data and found that, in the MHC region, the genes DRB1 and DQB1 are associated with T1D with a saturated interaction structure, which is consistent with the current knowledge of the haplotype effect of these two genes on T1D. Conclusion: RBP is a powerful method for inferring detailed dependence structures in epistatic interactions.

http://www.thescipub.com/abstract/10.3844/amjbsp.2011.1.10 2012/01/14 - 01:47

Problem statement: The foetal electrocardiogram (FECG) is the best method used to diagnose foetal heart problems; knowledge of the foetal heart signal allows foetal problems to be prevented at an early stage. Recently, there has been growing interest in noninvasive methods rather than the older invasive method, which is riskier for the mother's health. The most significant problem in the noninvasive method is the extraction of the foetal signals from the maternal signals and many contaminating noises; this extraction problem has long plagued researchers in the field of signal processing. Objective: To develop a technique for extracting FECG signals based on an adaptive filter and a simple genetic algorithm. Approach: A practical extraction method using computer simulations was proposed. The proposed method detects the foetal ECG by denoising the abdominal ECG (AECG), leading to the subsequent cancellation of the maternal ECG (MECG) by adaptive filtering. The thoracic signal (TECG), which consists purely of the mother's signal (MECG), was used to cancel the MECG in the abdominal signal, and the foetal ECG detector extracts the FECG through a simple genetic algorithm, which acts as the editor of unwanted noise. Results: The FECG signal obtained appears to agree with standard foetal ECG signals. A program for carrying out the calculations was developed in MATLAB. The algorithms were tested using real data from SISTA/DAISY and PhysioNet. Conclusion: The proposed technique for extraction of the FECG is useful and the results appear to agree with the mean values of the FECG.
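The adaptive-filtering step, cancelling the maternal component of the abdominal lead using the thoracic lead as a reference, is the classical LMS scheme. A minimal sketch on synthetic stand-in signals (not the paper's MATLAB implementation, and the genetic-algorithm denoising stage is omitted):

```python
# Sketch: LMS adaptive cancellation of the maternal ECG from an abdominal
# lead using a thoracic reference. Synthetic sinusoid stand-ins for ECGs;
# the paper's genetic-algorithm stage is not reproduced here.
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(4000) / 1000.0
mecg = np.sin(2 * np.pi * 1.2 * t)                 # maternal signal (TECG)
fecg = 0.25 * np.sin(2 * np.pi * 2.2 * t + 0.5)    # weaker, faster foetal signal
aecg = 0.8 * mecg + fecg + 0.05 * rng.normal(size=t.size)

taps, mu = 8, 0.01
w = np.zeros(taps)                                  # adaptive filter weights
out = np.zeros(t.size)
for n in range(taps, t.size):
    ref = mecg[n - taps:n][::-1]                    # reference window
    e = aecg[n] - w @ ref                           # error = foetal estimate
    w += 2 * mu * e * ref                           # LMS weight update
    out[n] = e

print("residual correlation with FECG:",
      np.corrcoef(out[taps:], fecg[taps:])[0, 1].round(3))
```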

http://www.thescipub.com/abstract/10.3844/amjbsp.2010.75.81 2011/04/14 - 12:39

Problem statement: The binomial distribution is one of the most useful probability distributions in the field of quality control and the physical and medical sciences. Many questions of interest to the health worker relate to making inferences about the unknown population proportion, the parameter of the binomial distribution. This study considers the problem of hypothesis testing for the parameter of a binomial distribution. Approach: Different test statistics available in the literature are reviewed and compared based on their empirical size and power properties. Since a theoretical comparison is not possible, a simulation study has been conducted to compare the performance of the test statistics. To illustrate the findings of the paper, two real-life health-related data sets are analyzed. Results: The simulation study suggests that some methods have better size and power properties than the other test statistics. The performance of the proposed test statistics also depends on the hypothesized value of the binomial parameter. Conclusions/Recommendations: Practitioners should be careful about the hypothesized value of the binomial parameter p. If the hypothesized value is near 0.5, any test is acceptable for moderate to large sample sizes. However, for testing extreme or small values of p, one might need a very large sample size to obtain good power and an accurate actual size of the test.
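The empirical size comparison the abstract describes can be sketched by simulating under the null and counting rejections. Below, the asymptotic Wald z-test is compared with SciPy's exact binomial test at an illustrative p0 = 0.1; the specific statistics the paper compares are not named in the abstract:

```python
# Sketch: empirical size of two tests of H0: p = p0 for a binomial
# proportion, estimated by simulation under the null (illustrative only).
import numpy as np
from scipy.stats import binomtest, norm

rng = np.random.default_rng(8)
p0, n, reps, alpha = 0.1, 30, 5000, 0.05
rej_wald = rej_exact = 0

for _ in range(reps):
    x = rng.binomial(n, p0)
    phat = x / n
    se = np.sqrt(phat * (1 - phat) / n)
    if se > 0 and abs(phat - p0) / se > norm.ppf(1 - alpha / 2):
        rej_wald += 1
    if binomtest(x, n, p0).pvalue < alpha:
        rej_exact += 1

print(f"Wald empirical size:  {rej_wald / reps:.3f}")
print(f"Exact empirical size: {rej_exact / reps:.3f}")
```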

http://www.thescipub.com/abstract/10.3844/amjbsp.2010.82.93 2011/04/14 - 12:39

Problem statement: Reliability studies are concerned with the study of the "consistency" or "repeatability" of measurements. Often (but not always) the reliability coefficients are Intra-class Correlation Coefficients (ICC). Depending on the design or the conceptual intent of the study, there are three types of intra-class correlation coefficients, termed Case 1, 2 and 3, for measuring the reliability of a single interval measure. While methods for sample size calculations for intra-class correlation coefficients in Case 1 are available and implemented in PASS (Power Analysis and Sample Size System), to our knowledge no methods based on intra-class correlation coefficients in Case 2 and 3 are available. Our objective is to develop a method for calculating the size of a reliability study based on intra-class correlation coefficients Case 1 and 2. Approach: A practical method for computing sample size using simulations was proposed. We propose to compute the sample size based on the expected width of the confidence interval. For a given target value of the intra-class correlation coefficient, the proposed method chooses the design that assures a 95% confidence interval with average length shorter than a pre-specified value. The applicability of the proposed method in practice for intra-class coefficients Case 2 was supported by demonstrating three invariance properties of the proposed confidence intervals. Results: Tables with sample size requirements were derived and displayed. A program for carrying out the calculations was developed in R. The method was used to size a trial aimed at studying the reliability of a scale that measures the cleanness of the colon at the time of colonoscopy. Conclusion: A simple method for sample size calculation for intra-class correlation coefficients Case 1 and 2, based on the average length of confidence intervals, was proposed. The proposed method was implemented by the authors in R (freely available software). Three invariance properties of the confidence intervals for the intra-class correlation coefficients Case 2 were studied by simulations. These properties are an important tool when considering the design of this type of study.
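The simulate-until-the-expected-width-is-short-enough idea can be sketched for Case 1 with the classical one-way ANOVA F-based confidence interval for the ICC. The interval formula below is the textbook one and the design values are invented; this mirrors the paper's idea, not its R implementation:

```python
# Sketch: expected 95% CI width for ICC Case 1 via simulation, using the
# classical one-way ANOVA F-based interval. Illustrative assumptions only.
import numpy as np
from scipy.stats import f as fdist

def expected_ci_width(n_subjects, k_raters, icc, reps=2000, seed=0):
    rng = np.random.default_rng(seed)
    sa2 = icc / (1 - icc)        # between-subject variance when error var = 1
    widths = []
    for _ in range(reps):
        subj = rng.normal(0, np.sqrt(sa2), (n_subjects, 1))
        y = subj + rng.normal(0, 1, (n_subjects, k_raters))
        msb = k_raters * y.mean(axis=1).var(ddof=1)   # between-subject MS
        msw = y.var(axis=1, ddof=1).mean()            # within-subject MS
        F = msb / msw
        fl = F / fdist.ppf(0.975, n_subjects - 1, n_subjects * (k_raters - 1))
        fu = F * fdist.ppf(0.975, n_subjects * (k_raters - 1), n_subjects - 1)
        lo = (fl - 1) / (fl + k_raters - 1)
        hi = (fu - 1) / (fu + k_raters - 1)
        widths.append(hi - lo)
    return np.mean(widths)

# Increase n_subjects until the expected width drops below a target value.
print(expected_ci_width(n_subjects=30, k_raters=3, icc=0.7))
```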

http://www.thescipub.com/abstract/10.3844/amjbsp.2010.1.8 2011/01/11 - 03:33

Problem statement: The common assumption in non-randomized Phase II clinical trials is a homogeneous population with a homogeneous response. This assumption is at odds with many trials today, where a heterogeneous response arises from the existence of subgroups. Approach: In order to examine the effects of heterogeneity on the trial outcome, a systematic platform is developed to quantify the range and classes of possible response heterogeneity using a mixed model approach. Five recent methods developed to handle heterogeneity, among them stratified analysis, beta-binomial models, Bayesian hierarchical models and regression models, are compared and contrasted using a set of performance criteria to provide clinicians with scenarios where each method is applicable. Results: All methods require a priori information on the subgroup composition, a limiting factor in most trial conduct. The Bayesian methods require the fewest assumptions, provide a methodology to share information across subgroups and allow partial subgroup outcomes, but require substantial computational resources and time. The stratified methods provide a simple improvement over the standard Phase II Simon design, but lack a methodology to allow for partial subgroup stopping. Conclusion: The heterogeneity model provides a useful tool to model data under a heterogeneity assumption. The proper handling of heterogeneous populations under a Phase II design is a contentious debate; ignoring this fundamental assumption may lead to incorrect trial outcomes. New methods need to be developed which can include the heterogeneity structure in the trial design and allow for partial hypothesis testing.

http://www.thescipub.com/abstract/10.3844/amjbsp.2010.9.16 2011/01/11 - 03:33

Problem statement: For 2×2×K contingency tables, a measure has been considered to represent the degree of departure from a log-linear model of No Three-Factor Interaction (NOTFI). We are interested in a similar measure for general I×J×K contingency tables. Approach: The present study proposes a measure to represent the degree of departure from the NOTFI model for I×J×K contingency tables. An approximate confidence interval for the proposed measure is also given. Results: The proposed measure was applied and analyzed (1) for 3×4×4 cross-classification data on dumping severity, hospital and operation for duodenal ulcer patients undergoing removal of various amounts of the stomach and (2) for 2×3×4 cross-classification data from an experiment on animals (mouse and rat), cancer (the tumors of leukemia and lymphoma) and tolazamide. Conclusion: The proposed measure is useful for comparing the degrees of departure from the NOTFI model in several tables.

http://www.thescipub.com/abstract/10.3844/amjbsp.2010.17.22 2011/01/11 - 03:33

Problem statement: Noninferiority tests are frequently used in clinical trials to demonstrate that the response for a study drug is not much worse than the response for a reference drug. Several test statistics exist; however, a detailed comparison of those test statistics has not been carried out. Moreover, somewhat complex calculations are necessary for some of those test statistics. Approach: In this study, we investigate the performance of the existing test statistics and propose new test statistics. Further, we compare them with the existing test methods by means of simulation and devise a suitable technique for using these test statistics. Results: We found that for the proposed test statistics, the actual type I error was close to the nominal level. Further, when the sample size is moderate, the new test statistics have slightly higher power than the other test statistics. Conclusion: One of the biggest advantages of our method is that it does not require complicated calculations.
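For context, the standard Wald-type noninferiority z-test for two proportions, one of the simple existing statistics in this literature, looks as follows; the margin and counts are invented for illustration and this is not the paper's proposed statistic:

```python
# Sketch: Wald-type noninferiority z-test for two proportions.
# H0: p_study - p_reference <= -margin (study drug is inferior).
# Illustrative numbers; not the paper's proposed test statistic.
import numpy as np
from scipy.stats import norm

x_s, n_s = 82, 100     # responders / patients on the study drug
x_r, n_r = 85, 100     # responders / patients on the reference drug
margin = 0.10          # pre-specified noninferiority margin

p_s, p_r = x_s / n_s, x_r / n_r
se = np.sqrt(p_s * (1 - p_s) / n_s + p_r * (1 - p_r) / n_r)
z = (p_s - p_r + margin) / se
p_value = norm.sf(z)   # one-sided: small p -> conclude noninferiority

print(f"z = {z:.3f}, one-sided p = {p_value:.4f}")
```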

http://www.thescipub.com/abstract/10.3844/amjbsp.2010.23.31 2011/01/11 - 03:33

Problem statement: This article considers the analysis of multivariate regression experiments, which are used frequently in a variety of research applications such as psychiatric epidemiologic studies. Our study is concerned with multivariate regression models in which the responses are correlated in particular ways, for both standard and non-standard multivariate model structures. Our objective is to find a reliable procedure that can be used to guide the selection of the best multivariate regression model, one that has the right covariance structure and, at the same time, the right multivariate model structure, for both standard and non-standard structures. Approach: In this study, we propose and evaluate a new three-stage procedure, based on a bootstrap simulation procedure, to guide this selection. Results: The simulation results indicated that the performance of the new procedure in identifying the right multivariate regression model, with the right covariance structure and at the same time the right multivariate model structure, from both standard and non-standard multivariate model structures, was excellent overall. Conclusion/Recommendations: We recommend using the new procedure as a standard tool to guide the selection of the best multivariate regression model with the right covariance structure and the right multivariate model structure.

http://www.thescipub.com/abstract/10.3844/amjbsp.2010.32.41 2011/01/11 - 03:33

Problem statement: Breastfeeding is of utmost importance in the maternal life of a woman, particularly exclusive breastfeeding. Exclusive breastfeeding during the first 6 months of life supports optimal growth and development during infancy and reduces the risk of debilitating diseases and problems. Many probability distributions have been proposed to model such data, for example the mixed Poisson distributions. However, estimation methodologies based on such mixed Poisson distributions may be complicated and may not yield consistent and efficient regression estimates. Approach: In this study, we propose a negative binomial regression model to analyze local practices of exclusive breastfeeding and the factors affecting this practice. Results: The estimation of parameters is carried out using a quasi-likelihood estimation technique based on a marginal approach via a Newton-Raphson iterative procedure. Conclusion: The negative binomial regression model was applied to a sample of data on infant feeding practices in 2006 and yielded reliable estimates of the regression and over-dispersion parameters.
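A negative binomial regression of the kind described can be sketched with statsmodels' GLM interface; the data, covariate and fixed dispersion value are illustrative assumptions, and the paper's quasi-likelihood/Newton-Raphson scheme is not reproduced:

```python
# Sketch: negative binomial regression for over-dispersed counts
# (synthetic data; statsmodels GLM, not the paper's quasi-likelihood fit).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 400
mother_educ = rng.integers(0, 2, n)             # illustrative covariate
mu = np.exp(0.8 + 0.4 * mother_educ)            # true mean model
y = rng.negative_binomial(n=2, p=2 / (2 + mu))  # counts with var = mu + mu^2/2

X = sm.add_constant(mother_educ)
res = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(res.summary())
```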

http://www.thescipub.com/abstract/10.3844/amjbsp.2010.42.45 2011/01/11 - 03:33

Problem statement: Numerous trials have been conducted to compare body growth curves, and hence growth rates, relying on smoothing and modeling different growth curves using different parameter values for the same model. This study aimed to construct a test of the equality of two percentile growth curves, and of a set of percentile growth curves, from two independent populations, regardless of the shape of these curves. Currently available tests only allow a decision about one group; making a decision regarding the whole curve necessitates building new tests. Approach: This study developed two tests of the equality of two growth curves, based on the concepts of the precedence and chi-square tests, and a test of the equality of a set of growth curves. The Monte Carlo simulation technique was used to investigate the power of the three tests under a shift in the location parameter and under a shift in the scale parameter of the normal and gamma distributions. The tests were applied to the weight-for-age percentile growth curves of Egyptian regions. Results: The curve precedence test is more powerful than the curve chi-square test in testing the equality of growth curves under a shift in the location parameter of both the normal and gamma distributions. It is also more powerful than the curve chi-square test under a shift in the scale parameter of the gamma distribution, and in testing the equality of growth curves with high ranks under a shift in the scale parameter of the normal distribution. Applying the new tests to the weight-for-age growth curves of the two Egyptian regions showed that the regions have different growth curves. Conclusion: The new tests are powerful in testing the equality of growth curves. According to them, the two Egyptian regions have different nutritional status.

http://www.thescipub.com/abstract/10.3844/amjbsp.2010.46.61 2011/01/11 - 03:33

Problem statement: For square contingency tables with ordered categories, we are interested in considering a structure of weak symmetry when Bowker's symmetry model does not hold, and in measuring the degree of departure from weak symmetry. Approach: The present study considers the average cumulative symmetry model, which has a weaker restriction than the structure of symmetry. It also gives a measure to represent the degree of departure from average cumulative symmetry. When the conditional symmetry model or the cumulative linear diagonals-parameter symmetry model holds, the proposed measure can quantify the degree of departure from symmetry. Results: The proposed model and measure were applied and analyzed (1) for the data of a 4×4 contingency table of unaided distance vision of 7477 women aged 30-39 employed in Royal Ordnance factories in Britain from 1943-1946 and (2) for the data of a 4×4 contingency table of 59 matched pairs using dose levels of conjugated oestrogen. Conclusion: The proposed model is useful when the symmetry model does not hold, and the proposed measure is useful for comparing the degree of departure from the weak symmetry model in several tables. In particular, the proposed measure is useful for measuring the degree of departure from symmetry when the conditional symmetry (or cumulative diagonals-parameter symmetry) model holds.
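Bowker's symmetry test, the baseline model the abstract starts from, is available in statsmodels. A sketch on an invented 4×4 table (not the vision or oestrogen data analyzed in the paper):

```python
# Sketch: Bowker's test of symmetry for a square contingency table
# (invented 4x4 counts, not the paper's vision or oestrogen data).
import numpy as np
from statsmodels.stats.contingency_tables import SquareTable

table = np.array([
    [50, 12,  4,  1],
    [20, 60, 10,  3],
    [ 8, 15, 55,  9],
    [ 2,  5, 12, 40],
])

res = SquareTable(table).symmetry(method="bowker")
print(f"Bowker statistic = {res.statistic:.2f}, p-value = {res.pvalue:.4f}")
```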

http://www.thescipub.com/abstract/10.3844/amjbsp.2010.62.66 2011/01/11 - 03:33

Problem statement: In the practice of conventional point sampling and line sampling methods in forestry surveys, we often encounter problems such as boundary and hole problems. These problems can introduce bias into the results of forest sampling, so proper modifications are needed. Approach: This study developed novel probability computation approaches for the utilization and modification of horizontal point sampling and line sampling in the forest inventory. It reviewed conventional point sampling and line sampling methods, identified specific problems associated with actual forest sampling and provided modified solutions using probability computations. Results: By modifying the original point sampling and line sampling procedures, this study proposed novel solutions to these problems and provides improved sampling methods with reduced bias for forestry surveys. Conclusion: In this study, only horizontal gauging for point sampling and line sampling was discussed. For the corresponding problems encountered in vertical gauging, the solutions are similar to the ones for horizontal gauging. These modifications have been presented with varying levels of complexity; to maintain a balance between precision and cost, modifications with an appropriate level of complexity may be selected.

http://www.thescipub.com/abstract/10.3844/amjbsp.2010.67.74 2011/01/11 - 03:33