


This paper analyzes three dissertations concerning experiential learning and training in organizations. The importance of continued development and training to the field of organization and management is considered in the context of how effective training practices are for different organizations. Two of the dissertations are based on quantitative research; the third mixes quantitative with qualitative research. The main processes in each research design are described, and the methodology of each dissertation is then outlined and analyzed according to these processes. Strengths and weaknesses of the different methodologies are examined, along with preferred approaches and the reasons for these preferences.


For the final project in RM 502M, Advanced Study in Research Methods, three dissertations were reviewed and the research methodology in each was explored. Each dissertation revolved around the subject of training and development and how organizations can harness the impact of a training program to become more productive.

There are several objectives that apply directly to training, the first of which is to improve systematically. Drucker (2001, p. 2) notes that, “whatever an enterprise does, both internally and externally, needs to be improved systematically and continually: the product or service, the production processes, marketing, technology, the training and development of people, and the use of information.”

Balance and change are vital to the training of employees and managers. Although these two objectives seem to contradict one another, they actually complement each other when used together to establish a pattern of learning and development.

Continuity is the last objective that a program would need to accomplish in order to be effective and sustaining. Drucker (2001, p. 5) points out that “people need continuity. They need to know where they stand. They need to know the people they work with. They need to know the values and the rules of the organization. They do not function well when the environment is not predictable, not understandable, not known.”

Inspire, instill, internalize. According to the New Webster's Dictionary, to inspire is to breathe in; to infuse thought or feeling into; to affect as with supernatural influence; to give inspiration. To instill is to put in by drops; to infuse slowly; to introduce by degrees (into the mind). Internalize relates to what is of or on the inside; having to do with or belonging to the inner nature of man; intrinsic.

Those definitions illuminate the essence of what an organization is striving for. One is literally trying to affect and inspire, drop by drop, the inner nature of one’s employees. Consistency is important, and everyday actions and words weigh heavily.

Training, development, and education are all about the same thing: enacting change. They are tied together through purpose, philosophy, and orientation. An organization must use them together or it risks inherent errors in the future (Ford, 2000).

The three dissertations that I reviewed were all concerned with the subject of training and development. Each research study was well thought out and reflected the issues and questions being posed by practitioners and researchers in academic journals and other popular periodicals. For example, Larsen (1997) completed a dissertation on the application of cognitive, affective, and behavioral theories to measure learning outcomes in management training. Since her dissertation was published, additional research has been completed in this area by other researchers. Larsen’s study and data are surely a source of review for many researchers and practitioners. Larsen uses a theoretical model similar to that of Kraiger, Ford, and Salas (1993) for evaluating outcomes and, further, tests the value of that model. Kraiger et al. (1993) noted that no theoretical models of training evaluation existed and proposed one by developing a classification scheme for evaluating learning outcomes.

Other newly completed research and articles complement the work of the other two researchers in the dissertations reviewed. Smith (2000) examined the role of experiential learning in changing how people think about managing organizations, and Biedenweg (1997) looked at learner participation in training program development and its effect on achievement. Changing how people think and affecting achievement through training and development has become increasingly important for many companies, yet many find such training difficult to implement.

The three dissertations reviewed contained pertinent information for improving and developing training programs. Larsen and Biedenweg completed in-depth literature reviews, whereas Smith used a mixed study that did not contain a review of the literature. Following the research study, however, each provided significant analysis of the findings as well as feasible conclusions and recommendations for future researchers to develop.

The difference between qualitative and quantitative research

Qualitative data analysis consists of open-ended answers on surveys and relatively unstructured data, and it requires that the researcher explore and sensitively interpret complex data. Qualitative data analysis is a term applied to a very wide range of methods for handling data that is not considered appropriate to reduce to numbers. These methods all have their own techniques and literature.

The researcher using such data is usually seeking to gain new understanding of a situation, experience or process; learning from the detailed accounts that people give in their own words, or that the researcher records in field notes from participant observation or discovers in documents.

Analysis of such data requires sensitivity to detail and context, as well as accurate access to information and ways of rigorously and carefully exploring themes and discovering and testing patterns. Research contexts determine time spent and goals. In some settings the emphasis is on complete understanding of a process over time, in others the emphasis is on rapid access, swift discovery and illustration, for example of themes in a focus group (QSR, 2002).

Quantitative data analysis, by contrast, is data-driven: for example, precoded answers on surveys, reducing the data to numbers, working with a fixed body of previously collected data, and rigid divisions between data and interpretation.

Quantitative data analysis consists of distribution of data, predictive modeling, and predictive scoring. Data mining and statistical analysis are the technologies that make predictive analytics possible. The goal is to transform data into knowledge. Effective data mining tools construct classifiers, build numerical models, find association rules, and identify data anomalies (SPSS, 2002).

Combined or Mixed Research

Quantitative and qualitative research methods are usually used separately from each other or united as a mixed methodology. When considering which method is best for a particular research project, the determining factor rests primarily with the researcher and the type of research being conducted (Cooper & Schindler, 2001). Trochim (2002) proposes that mixing the methodologies adds substantial value to the research.

Types of research questions

There are three basic types of questions that research projects can address. The first is descriptive, which is when a study is designed primarily to describe what is going on or what exists. For example, public opinion polls that seek only to describe the proportion of people who hold various opinions are primarily descriptive in nature. For instance, if the researcher wanted to know what percent of the population would vote for a Democratic or a Republican in the next presidential election, he or she is simply interested in describing something.

The second type of research question is relational. This is when a study is designed to look at the relationships between two or more variables. A public opinion poll that compares what proportion of males and females say they would vote for a Democratic or a Republican candidate in the next presidential election is essentially studying the relationship between gender and voting preference.

The third type is causal. This is when a study is designed to determine whether one or more variables (e.g., a program or treatment variable) causes or affects one or more outcome variables. If a researcher did a public opinion poll to try to determine whether a recent political advertising campaign changed voter preferences, he or she would essentially be studying whether the campaign (cause) changed the proportion of voters who would vote Democratic or Republican (effect).

The three question types can be viewed as cumulative. That is, a relational study assumes that the researcher can first describe (by measuring or observing) each of the variables the researcher is trying to relate. And, a causal study assumes that the researcher can describe both the cause and effect variables and that the researcher can show that they are related to each other. Causal studies are probably the most demanding of the three (Trochim, 2002).

Type of study

All three dissertations were causal studies. Larsen set out to determine whether two training sessions (cause), one affective and one cognitive, would influence the cognitive and affective scores (effect) of the groups. She employed a traditional educational experimental design using intact basic engineering core classes as participants. The students were pretested with cognitive and affective instruments developed specifically for the experiment; some were then trained and others were not, and all of the groups were posttested.

Smith found that despite great interest in the application of computer-based simulation technology, a review of the literature found little evidence, if any, to support claims that computer simulation (cause) can change how people think (effect) about the performance of work systems. To address this deficit, Smith conducted a laboratory experiment to test the hypothesis that computer-based simulation was more effective than a traditional classroom lecture in delivering a lesson. He used Deming’s (1992) funnel exercise as the lesson. Smith used a “test-treatment-retest” methodology, which is a causal study.

Biedenweg wanted to find out whether learner participation (cause) in industrial training program development leads to increased achievement (effect). This study took place in an industrial setting and incorporated employees who were required to take Occupational Safety Health Administration (OSHA) training.

Two-group experimental design

Basically, an experiment is to show: If X, then Y and if not X, then not Y. Or in other words: If the program is given, then the outcome occurs and if the program is not given, then the outcome does not occur. This, of course coincides with a causal study.

That is exactly what an experimental design tries to achieve. In the simplest type of experiment two groups are formed that are "equivalent" to each other. One group (the program group) receives the program and the other group (the comparison or control group) does not. In all other respects, the groups are treated the same. They have similar people, live in similar contexts, have similar backgrounds, and so on.

The researcher then observes to see if there are differences in outcomes between the two groups. If so, then the differences must be due to the only thing that differs between them -- that one received the program and the other did not (Trochim, 2002).
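The logic of this comparison can be sketched in a toy simulation. Everything below is hypothetical for illustration: the baseline distribution, the +5 treatment effect, and the group sizes are invented, and a real study would follow up with a significance test rather than a raw mean difference.

```python
import random
import statistics

def mean_difference(program, control):
    """Difference in mean outcomes between program and control groups."""
    return statistics.mean(program) - statistics.mean(control)

random.seed(42)

# Hypothetical posttest scores: both groups are drawn from the same
# baseline distribution (equivalent groups), and only the program
# group receives an assumed +5-point treatment effect.
control = [random.gauss(70, 10) for _ in range(30)]
program = [random.gauss(70, 10) + 5 for _ in range(30)]

print(f"Estimated program effect: {mean_difference(program, control):.1f}")
```

Because the groups are equivalent in every other respect, any difference in the group means estimates the effect of the program alone.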

Two-group experimental design of the dissertations

Larsen’s design and hypotheses

The experimental design utilized in Larsen’s research is the nonequivalent control group design from Campbell and Stanley’s (1963) definitive book of experimental designs. It is one of the most widespread experimental designs in educational research and it involves an experimental group and a control group both given a pretest and a posttest, in which the groups comprise naturally assembled collectives such as classrooms as similar as resources permit.

The assignment of the treatment to one group or another is assumed to be random and under the experimenter’s control. The more similar the groups are, and if that similarity is confirmed by the scores on the pretest, the more effective this control becomes. Presuming the desired criteria are achieved, the researcher can regard the design as controlling Campbell and Stanley’s internal validity concerns of the effects of history, maturation, testing, and instrumentation. In that case, any difference for the experimental group between pretest and posttest (if greater than for the control group) cannot be explained by main effects of these variables, since they impact both the experimental and the control group.

The only remaining threat to internal validity proposed by Campbell and Stanley is regression, and it can be controlled, as far as mean differences are concerned, by the experimenter not permitting differential recruitment of the groups.

Larsen had three groups taken from a homogeneous group of students. These were pretested; then, after a period of time, the experimental treatment (training) was applied to the experimental groups only, followed by posttesting for all groups. In Larsen’s research, two experimental groups each received a different training treatment (one cognitive, the other affective), while the third, the control group, was not exposed to any treatment.

Larsen tested the following hypotheses:

Hypothesis H(A). The cognitive and affective scores of individuals receiving cognitive or affective management training will not demonstrate a significant difference from each other in their change between the pretest and the posttest.

The cognitive and affective scores of those individuals receiving no training will demonstrate a significant difference from the treatment groups in the change between the pretest and the posttest.

The null hypothesis H(o). There is a statistically significant difference in the cognitive and affective scores of individuals receiving cognitive or affective management training in their change between the pretest and the posttest.

Smith’s design and hypotheses

Smith utilized a mixed methodology to test three hypotheses.

Hypothesis 1. The traditional lecture can induce managers to think in systematic ways, where “systematic” is defined in this case as not thinking that it is necessary to intervene whenever feedback about system performance indicates a deviation from expectations.

Hypothesis 2. Computer-based simulation represents a more effective approach than does readily-accepted, time-honored, lecture-based instruction in causing a change toward more systemic ways of thinking.

Hypothesis 3. Experience can change management thinking toward more systematic approaches. Efforts at promoting systematic management thinking must therefore consider the organizational context.

The first two hypotheses were investigated utilizing a laboratory experiment. The third hypothesis was investigated utilizing exploratory interviews, and these interviews will be described later.

The experimental portion of Smith’s research examined the potential of computer-based simulation to change how people think about the job of the manager, relative to the potential of traditional lecture-based instruction. A survey was used before and after the treatment (simulation or lecture) to assess any changes in thinking caused by the treatment. By computing difference scores for survey items (the score following the treatment minus the score before the treatment) for each subject, comparisons were made between the groups to determine the extent of differences in effectiveness between the two treatments.
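Smith’s difference-score computation can be illustrated with a minimal sketch. The scores below are hypothetical (a 1-7 survey scale is assumed), and the simple group means stand in for the fuller statistical tests the dissertation would have used.

```python
import statistics

def difference_scores(pre, post):
    """Per-subject change: posttest score minus pretest score."""
    return [after - before for before, after in zip(pre, post)]

# Hypothetical survey scores for the two treatment groups.
lecture_pre, lecture_post = [4, 5, 3, 4], [4, 5, 4, 4]
simulation_pre, simulation_post = [4, 3, 5, 4], [6, 5, 6, 6]

lecture_change = difference_scores(lecture_pre, lecture_post)
simulation_change = difference_scores(simulation_pre, simulation_post)

print("Mean change, lecture:   ", statistics.mean(lecture_change))
print("Mean change, simulation:", statistics.mean(simulation_change))
```

Working with per-subject change scores, rather than raw posttest scores, controls for each subject's starting point before the groups are compared.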

The experiment was conducted in a facility designed for teaching computer skills within the School of Business Administration at Portland State University. This facility allowed installation of the simulation on 25 individual computers, and provided an instructor’s computer connected to a data projector for orientation in the simulation condition, and presentation of the slides for the lecture.

Biedenweg’s design and hypotheses

Biedenweg had one research question and one hypothesis. They are as follows:

Research question: Will learners, in an industrial setting, who participate in the development of a mandatory training program achieve at a higher level than those learners who receive identical training, materials, and information, without participating in the development of their training?

Hypothesis 1. There will be no relationship between the pre-test and post-test gain scores of the experimental group and the independent variables of sex, age, number or years of formal education, number of years of company experience, number of years of manufacturing experience, and whether the employee is paid hourly or salary.
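Testing for "no relationship" between gain scores and each independent variable amounts to checking whether the correlations are near zero. A stdlib-only Pearson r sketch follows; the gain scores and experience values are hypothetical, and the dissertation's actual analysis is not reproduced here.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical gain scores paired with years of company experience.
gain_scores = [5, 3, 8, 2, 6, 4]
experience = [2, 10, 4, 12, 3, 7]

print(f"r(gain, experience) = {pearson_r(gain_scores, experience):.2f}")
```

Under the hypothesis, this coefficient (and its counterparts for sex, age, education, manufacturing experience, and pay type) would not differ significantly from zero.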


References

Biedenweg, K. S. (1997). Learner participation in training program development and its effect on achievement. UMI Dissertation Services. Ann Arbor, MI: Bell & Howell Information and Learning Company. (UMI No. 9729147).

Campbell, D. T. & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Chicago: Rand McNally.

Cooper, D. R., & Schindler, P. S. (2001). Business research methods (7th ed.). New York, NY: McGraw-Hill.

Deming, W. E. (1992). The new economics for education, government and industry. Cambridge, MA: MIT CAES.

Drucker, P. F. (2001). The new commandments of change. New York: Harper.

Ford, L. (2000). Make training stick like glue. American Society for Training & Development (ASTD) T & D Magazine, Nov. [Online]. Available: [2001, Oct. 12].

Kraiger, K. J., Ford, J. K., & Salas, E. (1993). Application of cognitive, skill-based, and affective theories of learning outcomes to new methods of training evaluation. Journal of Applied Psychology, 78(2), 311-328.

Larsen, J. A. (1997). Application of cognitive, affective, and behavioral theories to measure learning outcomes in management training. UMI Dissertation Services. Ann Arbor, MI: Bell & Howell Information and Learning Company. (UMI No. 9724012).

QSR. (2002). What is qualitative analysis? [Online]. Available: [2002, Dec. 17].

Smith, M. E. (2000). The role of experiential learning in changing how people think about managing business organizations. UMI Dissertation Services. Ann Arbor, MI: Bell & Howell Information and Learning Company. (UMI No. 9999866).

SPSS. (2002). Complete end-to-end analysis. [Online]. Available: [2002, Dec. 17].

Trochim, W. (2002). Research methods knowledge base. [Online]. Available: [2002, Dec. 12].
