(in progress)
anecdotal data
Data obtained through intuition or personal experience, not backed up by measurement or by scientific methods of collection and analysis, and therefore usually less suitable for use in evaluation and research.
bias (instrumentation)
Anything that produces systematic error in evaluation findings during the instrumentation phase of evaluation. It might include, for example, bias in questionnaires, such as the choice of words, sentence structure, or sequence of questions, that can influence a respondent’s answer.
bias (selection)
Anything that produces systematic error in evaluation findings during the selection phase of evaluation. It might include, for example, the choice of selection or sampling methodology, which can produce a group of participants that does not represent the population of interest.
case study
A method for teaching and learning that examines the characteristics of a case in context, in which students may have an opportunity to direct their own learning. Case studies are widely used in medicine, business, and administration.
causation
The existence of a relationship in which a first event (the cause) brings about a simultaneous or subsequent event (the effect), establishing a connection between the two.
central tendency
Measures that include the mean, median, and mode, and indicate how data concentrate around a central value.
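To make the three measures concrete, here is a minimal sketch in Python, using the standard statistics module on an invented list of scores:

```python
# A minimal sketch of the three measures of central tendency,
# computed with Python's built-in statistics module on invented scores.
import statistics

scores = [70, 75, 75, 80, 85, 90, 95]  # hypothetical test scores

print(statistics.mean(scores))    # arithmetic average (about 81.4)
print(statistics.median(scores))  # middle value when sorted (80)
print(statistics.mode(scores))    # most frequent value (75)
```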
correlation
Interdependence, association, or relationship between two or more variables or sets of scores, used for estimating test reliability and validity.
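As an illustration, the sketch below computes a Pearson correlation between two invented sets of scores; it assumes Python 3.10 or later, where statistics.correlation is available:

```python
# A hedged sketch: Pearson correlation between two invented score sets,
# using statistics.correlation (Python 3.10+).
import statistics

pretest = [55, 60, 65, 70, 80]   # hypothetical pre-test scores
posttest = [58, 64, 63, 75, 85]  # hypothetical post-test scores

r = statistics.correlation(pretest, posttest)
print(f"Pearson r = {r:.2f}")  # values near +1 indicate a strong positive association
```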
criterion-referenced test
Tests designed to provide information about the specific knowledge or skills possessed by a student, usually including a cutoff score that separates adequate from inadequate performance.
deductive reasoning
A method that starts from propositions and, using valid laws of reasoning, reaches conclusions that are necessary and universal. It is a rational (not empirical) method of reasoning, which usually moves from the general to the more specific.
effect size
A measure of the magnitude of the relationship between two variables, or of the difference between two groups.
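One common effect-size statistic is Cohen's d, which expresses the difference between two group means in standard-deviation units. The sketch below uses invented group scores and a simplified pooled standard deviation (equal group sizes assumed):

```python
# A minimal sketch of Cohen's d for two invented groups of equal size.
import statistics

treatment = [82, 85, 88, 90, 91]
control = [75, 78, 80, 83, 84]

mean_diff = statistics.mean(treatment) - statistics.mean(control)
# Simplified pooled standard deviation (valid when group sizes are equal).
pooled_sd = ((statistics.stdev(treatment) ** 2 + statistics.stdev(control) ** 2) / 2) ** 0.5
d = mean_diff / pooled_sd
print(f"Cohen's d = {d:.2f}")  # roughly 0.2 small, 0.5 medium, 0.8 large by convention
```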
effectiveness
Measures the relationship between goals and results.
efficiency
Measures the relationship between costs and results.
emic
Relating to the description and study of units in terms of their function within a system, and specifically to the way members of a social group divide up reality through their language and culture, which differs from culture to culture. Ethnoscience studies emic categories. The description, in this case, is made from the inside.
empirical research
Research that uses data drawn from observation or experience.
ethnography
The study of different ethnic groups, including their anthropological, cultural, and social characteristics. In qualitative market research, it refers to research in which the researcher spends time observing and/or interacting with participants in their own environment and everyday life.
etic
Description of a behavior by an observer (that is to say, from the outside).
evaluator’s program description (EPD)
Its purpose is to clarify aspects of a program, facilitate its monitoring, reveal its goals and objectives, identify the activities planned to accomplish those goals and objectives, and reveal its measurement tools. It is composed of the evaluation questions (which the evaluation will answer), the goals and objectives, the activities planned to accomplish them, and the tools used to measure their accomplishment.
expert evaluation
Evaluation of a program performed by an expert in the field.
field trial
A trial that involves collecting data in real-life environments.
formative evaluation
Evaluation of a program during its development.
generalizability
The property of moving from specific items or reasoning to general laws or rules.
grounded theory
An inductive research methodology based on the systematic analysis of data collected from a specific situation, without presupposing a theory to be tested. In this sense, the theory is not a problem to be tested; it grows from the data. It is the end of the process, not its beginning: what is concluded after the research and the detailed analysis of the data.
hypothesis
Supposition or conjecture proposed as an explanation of a situation or problem, anticipating its results, which should then be verified and proved.
impact
Measures how a program has changed behavior over an extended period of time.
inductive reasoning
Contrary to deduction, it starts from reality. Based on the similarity of properties observed in objects, or on the repetition of certain events, we construct a new concept (or expand the semantic boundaries of an existing one), a concept which now seeks to cover not only the objects or events already observed, but a wider range of events, which are expected to follow the same rules as those previously observed. Inductive reasoning thus infers that there are phenomena similar to those already observed, anticipating how something will probably behave if the conditions of the experience are repeated.
informed consent
A legal procedure to ensure that a patient or client is adequately informed of all the risks and costs involved in a treatment, as well as of possible alternative treatments. The consent should be given voluntarily.
interim report (or interim statement)
Financial statement issued periodically by a firm whose equities are traded on a stock exchange, declaring its trading results for that period.
interval
Ordinal data that have equal intervals between values (e.g., distances between points, test scores).
longitudinal study
A study in which individuals are followed over time, and compared with themselves at different times.
meta-analysis
An analysis developed to aggregate findings from a series of related evaluations dealing with a specific topic.
monitoring
Involves collecting data about a program over a period of time.
nominal
Mutually exclusive categories of data, with no order or value placed on the categories. E.g.: gender (male/female).
norm-referenced test (or NRT)
Test designed to compare a student’s performance, usually to a general population of students, called a norm group.
one-to-one evaluation
A stage of formative evaluation that involves interaction with individual tryout students.
ordinal
Rank-ordered categories of data (e.g. strongly agree, agree, no opinion, disagree, strongly disagree).
population
The collection of elements being studied.
post-test
A test given after a period of instruction to determine what the students have learned.
pre-test
A preliminary test administered to determine a student’s previous knowledge, whose results can be used for different purposes in the instructional design.
program planning
The development of the components of a program, including goals, implementation, and evaluation strategies.
qualitative
Verbal narratives based on observations, interviews, surveys, case studies, and existing artifacts and documents.
quantitative
Data based on numbers, used to predict and to show causal relationships, and collected using tests, counts, measures, and instruments.
quasi-experimental
A research design with some, but not all, of the characteristics of a true experimental design (typically, random assignment is missing).
random sample
A sample whose members are chosen at random from a given population.
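A minimal sketch of drawing a simple random sample in Python, using the standard random module on an invented list of member IDs:

```python
# Draw a simple random sample of 10 members (without replacement)
# from a hypothetical population of 100 member IDs.
import random

population = list(range(1, 101))
sample = random.sample(population, k=10)
print(sample)
```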
ratio
Data that have an absolute zero point (e.g., weight, height, or age). Temperature in degrees Celsius, which involves negative numbers and has no absolute zero, is interval rather than ratio data.
reliability
It can mean different things: the quality of being reliable, when referring, for example, to information sources; the extent to which a test is stable and consistent when administered to the same individuals on different occasions; or a measure of the percentage of error in a test.
reliability (interrater)
Concerned with the extent to which raters consistently judge an item or test.
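The simplest interrater reliability index is percent agreement, sketched below for two hypothetical raters scoring the same ten items (note that, unlike Cohen's kappa, it does not correct for chance agreement):

```python
# Percent agreement between two hypothetical raters on ten items.
rater_a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
rater_b = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass", "pass", "pass"]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
print(f"Percent agreement = {agreements / len(rater_a):.0%}")  # 8 of 10 items agree: 80%
```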
resources
Assets available and anticipated for operations, which might include people, equipment, facilities and other items used to plan, implement, and evaluate programs.
sample
The individuals from whom data will be collected; the sample should represent the main characteristics of the group and, if adequately chosen, allows the results to be generalized to all individuals in the group.
small group evaluation
A stage of formative evaluation that uses small groups of tryout students to assess the effectiveness of the instruction.
stakeholders
Groups interested in or affected by a program, and therefore interested in its evaluation.
standards
References for evaluating the performance and the results of a program.
statistic
Can refer either to a number or characteristic (30%, for example) or to a tool or technique used to collect and analyze data.
summative evaluation
Evaluation that assesses the results or outcomes of a program.
summative report
A report that presents the results or outcomes of a program.
triangulation
A technique in which multiple methods, observers, theories, perspectives, sources, and empirical materials are used in a study, in order to increase the credibility and validity of the results.
trustworthiness
The quality of being reliable.
validity
The quality or condition of being valid, that is to say, of having its causes, methods, results, etc. adequately defined.
validity (construct)
Measures how well an instrument performs in practice from the standpoint of the specialists who use it.
validity (external)
The extent to which the results of a study can be generalized or extended to other groups or settings. It involves selecting a representative group or sample, so that the results can be extended to other groups or populations.
validity (internal)
Occurs when a researcher controls all extraneous variables, so that the only variable influencing the results of a study is the one being manipulated.
variability
Measures that include the range and standard deviation, and show how the data are spread out.
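The sketch below computes both measures for the same invented scores used in the central tendency example:

```python
# Range and sample standard deviation for invented test scores.
import statistics

scores = [70, 75, 75, 80, 85, 90, 95]

data_range = max(scores) - min(scores)  # spread between the extremes
sd = statistics.stdev(scores)           # sample standard deviation
print(f"Range = {data_range}, SD = {sd:.2f}")
```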
variable (dependent)
What we seek to explain or discover; it can appear, disappear, or change as the researcher produces or modifies the independent variable.
variable (independent)
What we believe influences or affects a phenomenon, hypothesized to be the determining factor, the condition, or the cause of a certain result, effect, or consequence. It is usually the factor manipulated in the research, so that its effects can be studied.
heuristic
The art of inventing, the science of discovering.
***
SOURCES
Boulmetis, J., & Dutwin, P. (2005). The ABCs of evaluation: Timeless techniques for program and project managers (2nd ed.). San Francisco: Jossey-Bass.
Dicionário Houaiss (CD-ROM).
EPA/ESD Program Evaluation Glossary.
Mattar, J. (2009). Metodologia científica na era da informática (3rd ed.). São Paulo: Saraiva.
Pearson – Basic Measurement Concepts Glossary.
Smith, P. L., & Ragan, T. J. (2005). Instructional design (3rd ed.). Hoboken, NJ: John Wiley & Sons.