In an early text on statistical reasoning in sociology, Mueller et al. defined the concept of statistics in two related manners: "(1) the factual data themselves, such as vital statistics, statistics on trade, production, and the like; and (2) the methods, theories, and techniques by means of which the collected descriptions are summarized and interpreted" (1970: 2). Moreover, Blalock (1972) identified five steps that all statistical tests have in common. First, assumptions must be made about the population and about the ability to generalize from the sample to that population; these assumptions also shape the formal statement of hypotheses (e.g., the null hypothesis is a statement of no association, and the research hypothesis is the alternative to the null). Second, the theoretical sampling distribution, that is, the probability distribution of the test statistic, must be obtained. Third, an appropriate significance level and critical region for the statistic must be selected. Fourth, the test statistic must be calculated. Finally, based on the magnitude of the test statistic and its associated significance, a decision must be made about rejecting or failing to reject the null hypothesis.
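As a rough illustration, the Python sketch below walks through these five steps for a hypothetical comparison of two group means; the data, the choice of a two-sample t-test, and the use of scipy.stats are illustrative assumptions rather than part of Blalock's exposition.

```python
# A minimal sketch of the five steps of a statistical test, using a
# two-sample t-test as the example. The data are invented; scipy is assumed.
from scipy import stats

# Step 1: assumptions -- independent random samples from roughly normal
# populations. Hypotheses: H0: the group means are equal; H1: they differ.
group_a = [2.1, 2.5, 3.0, 2.8, 2.2, 2.9]
group_b = [3.4, 3.1, 3.8, 2.9, 3.6, 3.3]

# Step 2: the sampling distribution of the statistic is Student's t
# (handled internally by scipy.stats.ttest_ind).
# Step 3: choose a significance level, which fixes the critical region.
alpha = 0.05

# Step 4: calculate the test statistic and its associated p-value.
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Step 5: decide whether to reject the null hypothesis.
if p_value < alpha:
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}: reject H0")
else:
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}: fail to reject H0")
```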
The field of social statistics, in practice, is probably more concerned with levels of measurement and the various types of statistical tests than with the laws and rules that make such analysis possible in the first place. Indeed, the level at which social phenomena are measured dictates the type of statistical test that can be calculated. Concepts are measured at four levels: nominal, ordinal, interval, and ratio. A measured characteristic that shows variation across cases is called a variable. Ratio measurement is the most precise because the distance between values is both equal and known, and variables measured at the ratio level have a true zero point, which signifies the total absence of the attribute. As with variables measured at the ratio level, interval-level variables are continuous, and the distance between values is both known and constant. However, variables measured at the interval level have no true zero point; the zero in interval-level data is arbitrary. Variables measured at the nominal and ordinal levels are categorical. With ordinal-level data, the response categories are both mutually exclusive and rank-ordered. The categories of a variable measured at the nominal level have no inherent order; they simply signify the presence or absence of a particular quality.
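To make the four levels concrete, the short sketch below tags a few typical social-science variables with their level of measurement; the example variables are invented for illustration and are not drawn from the text.

```python
# A minimal sketch, with invented example variables, of the four levels of
# measurement described above.
levels = {
    "religious affiliation": "nominal",   # unordered, mutually exclusive categories
    "social class":          "ordinal",   # categories that can be rank-ordered
    "temperature (Celsius)": "interval",  # equal intervals, but an arbitrary zero
    "annual income":         "ratio",     # equal intervals and a true zero point
}

for variable, level in levels.items():
    print(f"{variable} is measured at the {level} level")
```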
Statistical tests are generally univariate, bivariate, or multivariate in nature. Univariate statistics describe a single variable. If the variable is measured at the nominal level, it is possible to report the mode (i.e., the most commonly occurring value), proportions, percentages, and ratios. When the variable is measured at the ordinal level, it also becomes possible to calculate medians, quartiles, deciles, and quartile deviations. At the interval and ratio levels of measurement, univariate procedures include means (i.e., the arithmetic average), medians (i.e., the midpoint), variances, and standard deviations. Measures of central tendency include the mode, median, and mean; measures of dispersion, which describe the spread of the values for a given variable, are typically reported as quartiles, percentiles, the variance, or the standard deviation.
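A brief sketch, using only Python's standard-library statistics module and invented data, illustrates which univariate summaries are appropriate at each level of measurement.

```python
# A minimal sketch of univariate description at each level of measurement;
# all data values are invented for illustration.
import statistics

# Nominal: the mode is the only meaningful "average".
marital_status = ["single", "married", "married", "divorced", "married"]
print("mode:", statistics.mode(marital_status))

# Ordinal: medians and quartiles respect rank order.
satisfaction = [1, 2, 2, 3, 4, 4, 5]   # e.g., 1 = very low ... 5 = very high
print("median:", statistics.median(satisfaction))
print("quartiles:", statistics.quantiles(satisfaction, n=4))

# Interval/ratio: means, variances, and standard deviations are defined.
income = [28_000, 35_500, 41_200, 52_750, 61_000]
print("mean:", statistics.mean(income))
print("variance:", statistics.variance(income))      # sample variance
print("std deviation:", statistics.stdev(income))    # sample standard deviation
```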
Bivariate statistics test for association between two variables. Again, the level of measurement determines the appropriate bivariate statistic. For example, when both the dependent variable (i.e., the effect, or the characteristic being affected by another variable) and the independent variable (i.e., the cause, or the characteristic affecting the outcome) are measured at the nominal level, the chi-square statistic is most commonly used. The chi-square test, however, only reveals whether two variables are related; to determine the strength of a bivariate relationship between two nominal variables, other statistics such as lambda or phi are used. When the dependent variable is measured at the interval or ratio level and the independent variable is categorical (i.e., nominal or ordinal), it becomes necessary to compare means across the categories of the independent variable. When the independent variable is dichotomous, the t-test is used; when the independent variable contains more than two categories, an analysis of variance (ANOVA) must be used. When both variables are measured at the interval or ratio level, statistical tests based on the equation for a line, such as Pearson's correlation and least-squares regression, become appropriate procedures. Multivariate statistics often test for the relationship between two variables while holding constant a number of other variables; this introduces the principle of statistical control.
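The sketch below, again with invented data and assuming scipy.stats is available, pairs each combination of measurement levels with the corresponding bivariate test named above.

```python
# A minimal sketch matching measurement levels to common bivariate tests.
# The contingency table, samples, and variable names are invented.
from scipy import stats

# Nominal x nominal: chi-square test of independence on a contingency table.
table = [[30, 20],    # e.g., rows = gender, columns = voted / did not vote
         [25, 25]]
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")

# Interval/ratio outcome, dichotomous independent variable: t-test.
men, women = [3.1, 2.8, 3.4, 3.0], [3.6, 3.3, 3.9, 3.5]
print(stats.ttest_ind(men, women))

# Interval/ratio outcome, independent variable with 3+ categories: one-way ANOVA.
low, middle, high = [2.1, 2.4, 2.2], [2.9, 3.1, 3.0], [3.8, 3.6, 3.9]
print(stats.f_oneway(low, middle, high))

# Both variables interval/ratio: Pearson correlation and least-squares regression.
education = [10, 12, 14, 16, 18]          # years of schooling
earnings = [25, 32, 40, 51, 60]           # in thousands
r, p = stats.pearsonr(education, earnings)
slope, intercept, *_ = stats.linregress(education, earnings)
print(f"r = {r:.2f}; predicted earnings = {intercept:.1f} + {slope:.1f} * education")
```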
Bibliography:
- Blalock, H. M. (1972) Social Statistics, 2nd edn. McGraw-Hill, New York.
- Blalock, H. M. (ed.) (1974) Measurement in the Social Sciences: Theories and Strategies. Aldine, Chicago, IL.
- Mueller, J. H., Schuessler, K. F., & Costner, H. L. (1970) Statistical Reasoning in Sociology, 2nd edn. Houghton Mifflin, Boston, MA.