
Presentation Transcript


  1. Research Strategies, Research Designs, and Research Procedures

  2. Research Process/Procedure • A research procedure is an exact, step-by-step description of a specific research study.

  3. Content of Standard Sections of a Research Report / Article
  Introduction
  • Basic introduction to the topic area
  • Literature review
  • Research question, purpose, or hypothesis of the study
  • Brief outline of the methodology
  • Specific prediction of the study
  Method
  • Participants/Subjects—description of the sample that participated in the study
  • Procedures—description of how the study was conducted, including a description of the questionnaires and equipment used in the study
  Results
  • Findings
  • Statistical analyses
  • Figures and tables of data
  Discussion
  • Conclusions
  • Applications of the research
  • Ideas for future studies
  References
  • Bibliographic information for each item cited in the article

  4. Quantitative, Qualitative, and Mixed Methods Research
  Three commonly used research types or designs based on data or instruments are quantitative, qualitative, and mixed research.
  • Quantitative research is based on measuring variables for individual participants to obtain scores, usually numerical values that are submitted to statistical analysis for summary and interpretation.
  • Qualitative research is based on making observations that are summarized and interpreted in a narrative report.
  • Mixed research combines quantitative and qualitative research techniques in a single study. Two sub-types of mixed research are mixed method research—using qualitative and quantitative approaches for different phases of the study—and mixed model research—using quantitative and qualitative approaches within or across phases of the study.
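
As a minimal illustration of the quantitative approach described above, the sketch below (plain Python, with scores invented for illustration) reduces a set of numerical measurements to the summary statistics a quantitative study would report:

```python
import statistics

# Hypothetical scores from five participants (invented for illustration)
scores = [4, 5, 6, 7, 8]

mean_score = statistics.mean(scores)   # central tendency
sd_score = statistics.stdev(scores)    # sample standard deviation (variability)

print(f"M = {mean_score}, SD = {sd_score:.2f}")
```

A qualitative study of the same participants would instead produce a narrative summary of observations rather than these numerical indices.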

  5. Ethical Issues to Consider Before Beginning Research
  Research ethics concerns the responsibility of researchers to be honest and respectful to all individuals who are affected by their research studies or their reports of the studies’ results. Researchers are usually governed by a set of ethical guidelines that help them make proper decisions and choose proper actions. In psychological research, the American Psychological Association (APA) maintains a set of ethical principles for research (APA, 2002).
  • No Harm
  • Privacy and Confidentiality
  • Institutional Approval
  • Competence
  • Record Keeping
  • Informed Consent to Research
  • Dispensing with Informed Consent
  • Offering Inducements for Research Participation
  • Deception in Research
  • Debriefing

  6. No Harm
  • Psychologists take reasonable steps to avoid harming their research participants, and to minimize harm where it is foreseeable and unavoidable.
  Privacy and Confidentiality
  • Confidentiality is the practice of keeping strictly secret and private the information or measurements obtained from an individual during a research study. Anonymity is the practice of ensuring that an individual’s name is not directly associated with the information or measurements obtained from that individual.
  Institutional Approval
  • When institutional approval is required, psychologists provide accurate information about their research proposals and obtain approval prior to conducting the research.
  Competence
  • Psychologists conduct research with populations and in areas only within the boundaries of their competence. Psychologists planning to conduct research involving populations, areas, techniques, or technologies new to them undertake relevant education, training, supervised experience, consultation, or study.
  Record Keeping
  • Psychologists create, and to the extent the records are under their control, maintain, disseminate, store, retain, and dispose of records and data relating to their scientific work in order to allow for replication of research design and analyses and to meet institutional requirements. Psychologists maintain confidentiality in creating, storing, accessing, transferring, and disposing of records under their control, whether these are written, automated, or in any other medium.
  Informed Consent to Research
  • The principle of informed consent requires the investigator to provide all available information about a study so that an individual can make a rational, informed decision to participate in the study.

  7. Dispensing with Informed Consent
  • Psychologists may dispense with informed consent only (1) where the research would not reasonably be assumed to create distress or harm, or (2) where otherwise permitted by law or federal or institutional regulations.
  Offering Inducements for Research Participation
  • Psychologists make reasonable efforts to avoid offering excessive or inappropriate financial or other inducements for research participation when such inducements are likely to coerce participation.
  Deception in Research
  • Deception occurs when a researcher purposefully withholds information or misleads participants with regard to information about a study. There are two forms of deception: passive and active.
  • Passive deception (or omission) is the withholding or omitting of information; the researcher intentionally does not tell participants some information about the study.
  • Active deception (or commission) is the presenting of misinformation about the study to participants. The most common form of active deception is misleading participants about the specific purpose of the study.
  Debriefing
  • A debriefing is a post-experimental explanation of the purpose of a study that is given to a participant, especially if deception was used.
  • Psychologists provide a prompt opportunity for participants to obtain appropriate information about the nature, results, and conclusions of the research, and then take reasonable steps to correct any misconceptions that participants may have of which the psychologists are aware. If scientific or humane values justify delaying or withholding this information, psychologists take reasonable measures to reduce the risk of harm.

  8. Humane Care and Use of Animals in Research
  • Psychologists acquire, care for, use, and dispose of all animals in compliance with current federal, state, and local laws and regulations, and with professional standards.
  • Psychologists trained in research methods and experienced in the care of laboratory animals closely supervise all procedures involving animals and are responsible for ensuring appropriate consideration of their comfort, health, and humane treatment.
  • Psychologists ensure that all individuals under their supervision who are using animals have received instruction in research methods and in the care, maintenance, and handling of the species being used, to the extent appropriate for their role.
  • Psychologists make reasonable efforts to minimize discomfort, infection, illness, and pain of animal subjects.
  • Psychologists use a procedure subjecting animals to pain, stress, or privation only when an alternative procedure is unavailable and the goal is justified by its prospective scientific, educational, or applied value.
  • Psychologists perform surgical procedures under appropriate anesthesia and follow techniques to avoid infection and minimize pain during and after surgery.
  • When it is appropriate that an animal’s life be terminated, psychologists proceed rapidly, with an effort to minimize pain, and in accordance with accepted procedures.

  9. Ethical Issues and Scientific Integrity
  APA ethical standards (2002) relate to these issues:
  Reporting of Research
  • Psychologists do not fabricate data. (See also Standard 5.01, Avoidance of False or Deceptive Statements—Psychologists do not make false, deceptive, or fraudulent statements concerning their publications or research findings.)
  • If psychologists discover significant errors in their published data, they take reasonable steps to correct such errors in a correction, retraction, erratum, or other appropriate publication means.
  Plagiarism
  • Plagiarism is the representation of someone else’s ideas or words as one’s own, and it is unethical. Psychologists do not present portions of another’s work or data as their own, even if the other work or data source is cited occasionally.
  • Fraud is the explicit effort of a researcher to falsify or misrepresent data.
  • Replication is repetition of a research study using the same basic procedures used in the original. Either the replication supports the original study by duplicating the original results, or it casts doubt on the original study by demonstrating that the original result is not easily repeated.

  10. Variables
  Variables are “qualities, properties, and/or characteristics of persons, things, or situations that change or vary, and that can be manipulated, measured, or controlled in a research study” (Burns & Groves, 2005:755). There are different types of variables:
  Independent Variables
  • An independent variable is the treatment, the intervention, or the experimental activity that is manipulated or varied by the researcher during the research study in order to create an effect (i.e., a change) on the dependent variable.
  Dependent Variables
  • A dependent variable is the response, the behavior, or the outcome that is predicted and measured in research. Changes in the dependent variable are presumed to be caused by the independent variable.

  11. Hypothesis
  A hypothesis is a tentative statement about the relationship between two or more variables—any measurable conditions, events, characteristics, or behaviors.
  Types of Hypotheses
  • The null hypothesis (H0) states that in the general population there is no change, no difference, or no relationship. In the context of an experiment, H0 predicts that the independent variable (treatment) has no effect on the dependent variable (scores) for the population.
  • The alternative hypothesis (H1) states that there is a change, a difference, or a relationship for the general population. In the context of an experiment, H1 predicts that the independent variable (treatment) does have an effect on the dependent variable.
  Examples
  • There is no significant difference in the proportion of males and females with IT literacy in the community population.
  • There is a significant relationship between academic performance and goal achievement.
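
The H0-versus-H1 logic can be sketched with an independent-samples t statistic computed by hand. The group scores and the critical value below are illustrative assumptions, not data from the slides (2.306 is the standard two-tailed .05 cutoff for df = 8 from a t table):

```python
import math

# Hypothetical scores for two groups (invented for illustration)
treatment = [5, 6, 7, 8, 9]   # group receiving the independent-variable manipulation
control   = [2, 3, 4, 5, 6]   # group without the manipulation

def mean(xs):
    return sum(xs) / len(xs)

def ss(xs):
    """Sum of squared deviations from the mean."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs)

n1, n2 = len(treatment), len(control)
pooled_var = (ss(treatment) + ss(control)) / ((n1 - 1) + (n2 - 1))
se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
t = (mean(treatment) - mean(control)) / se

CRITICAL_T = 2.306  # two-tailed alpha = .05, df = 8 (standard t table)
reject_h0 = abs(t) > CRITICAL_T  # True -> data favor H1 (treatment has an effect)
print(f"t = {t:.2f}, reject H0: {reject_h0}")
```

Here the mean difference between groups is large relative to the pooled variability, so H0 (no effect of the treatment) would be rejected in favor of H1.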

  12. Selecting Research Participants
  Populations and Samples
  • A population is the entire set of individuals of interest to a researcher. Although the entire population usually does not participate in a research study, the results from the study are generalized to the entire population.
  • A sample is a set of individuals selected from a population and usually is intended to represent the population in a research study.
  • A target population is the group defined by the researcher’s specific interests.
  Representativeness of a Sample
  • The representativeness of a sample refers to the extent to which the characteristics of the sample accurately reflect the characteristics of the population.
  • A representative sample is a sample with the same characteristics as the population.
  • A biased sample is a sample with characteristics different from those of the population.
  • Selection bias or sampling bias occurs when participants or subjects are selected in a manner that increases the probability of obtaining a biased sample.

  13. Sampling
  • Sampling is the process of selecting individuals to participate in a research study.
  Types of Sampling
  Probability Sampling
  • In probability sampling, the entire population is known, each individual in the population has a specifiable probability of selection, and sampling occurs by a random process based on the probabilities.
  • A random process is a procedure that produces one outcome from a set of possible outcomes. The outcome must be unpredictable each time, and the process must guarantee that each of the possible outcomes is equally likely to occur.
  Nonprobability Sampling
  • In nonprobability sampling, the population is not completely known, individual probabilities cannot be known, and the sampling method is based on factors such as common sense or ease, with an effort to maintain representativeness and avoid bias.
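
A simple random sample—the basic probability-sampling case, where every member of a fully known population is equally likely to be chosen—can be sketched with Python's standard library. The participant IDs are invented for illustration:

```python
import random

# A fully known population of 20 hypothetical participant IDs
population = [f"P{i:02d}" for i in range(1, 21)]

random.seed(42)  # fixed seed so the draw is reproducible
# Simple random sample of 5, drawn without replacement
sample = random.sample(population, k=5)

print(sample)
```

Because every ID has the same probability of selection (5/20), repeated draws would, on average, yield samples representative of the population—the property nonprobability methods cannot guarantee.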

  14. Important to Know
  Theories
  • Theories are statements about the mechanisms underlying a particular behavior. Theories help organize and unify different observations related to the behavior, and good theories generate predictions about the behavior.
  Constructs
  • Constructs are hypothetical attributes or mechanisms that help explain and predict behavior in a theory.
  Operational Definition
  • An operational definition specifies a measurement procedure (a set of operations) for measuring an external, observable behavior, and uses the resulting measurements as a definition and a measurement of the hypothetical construct.

  15. Validity
  The validity of a research study is the degree to which the study accurately answers the question it was intended to answer.
  Types of Validity
  • External validity refers to the extent to which we can generalize the results of a research study to people, settings, times, measures, and characteristics other than those used in that study. A threat to external validity is any characteristic of a study that limits the ability to generalize the results from a research study.
  • Internal validity: a research study has internal validity if it produces a single, unambiguous explanation for the relationship between two variables. Any factor that allows for an alternative explanation is a threat to internal validity.
  Other Types
  • Concurrent validity is demonstrated when scores obtained from a new measure are directly related to scores obtained from an established measure of the same variable.
  • Face validity is an unscientific form of validity demonstrated when a measurement procedure superficially appears to measure what it claims to measure.
  • Predictive validity is demonstrated when scores obtained from a measure accurately predict behavior according to a theory.
  • Construct validity requires that the scores obtained from a measurement procedure behave exactly the same as the variable itself.
  • Convergent validity is demonstrated by a strong relationship between the scores obtained from two different methods of measuring the same construct.
  • Divergent validity is demonstrated by using two different methods to measure two different constructs. Then convergent validity must be shown for each of the two constructs. Finally, there should be little or no relationship between the scores obtained for the two different constructs when they are measured by the same method.

  16. Reliability
  The reliability of a measurement procedure is the stability or consistency of the measurement. If the same individuals are measured under the same conditions, a reliable measurement procedure produces identical (or nearly identical) measurements.
  Types of Reliability
  • Test-retest reliability is established by comparing the scores obtained from two successive measurements of the same individuals and calculating a correlation between the two sets of scores. If alternative versions of the measuring instrument are used for the two measurements, the reliability measure is called parallel-forms reliability.
  • Inter-rater reliability is the degree of agreement between two observers who simultaneously record measurements of the behaviors.
  • Split-half reliability is obtained by splitting the items on a questionnaire or test in half, computing a separate score for each half, and then calculating the degree of consistency between the two scores for a group of participants.
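
Test-retest reliability reduces to a correlation between two measurement occasions. A minimal Pearson r computed by hand (the two sets of scores are invented for illustration):

```python
import math

# Hypothetical scores for the same five individuals, measured twice
time1 = [10, 12, 14, 16, 18]
time2 = [12, 11, 15, 17, 18]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

r = pearson_r(time1, time2)
print(f"test-retest r = {r:.3f}")  # values near 1.0 indicate a stable measurement
```

The same correlation machinery serves split-half reliability (correlate the two half-scores) and parallel-forms reliability (correlate scores from the alternative versions).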

  17. Scales of Measurement
  In very general terms, measurement is a procedure for classifying individuals. The set of categories used for classification is called the scale of measurement. Traditionally, researchers have identified four different types of measurement scales: nominal, ordinal, interval, and ratio. The differences among these four types are based on the relationships that exist among the categories that make up the scales.
  The Nominal Scale
  • The categories that make up a nominal scale simply represent qualitative (not quantitative) differences in the variable measured. The categories have different names but are not related to each other in any systematic way. For example, if you were measuring academic majors for a group of college students, the categories would be art, chemistry, English, history, psychology, and so on.
  The Ordinal Scale
  • The categories that make up an ordinal scale have different names and are organized sequentially. Often, an ordinal scale consists of a series of ranks (first, second, third, and so on) like the order of finish in a horse race. Occasionally, the categories are identified by verbal labels such as small, medium, and large drink sizes at a fast-food restaurant.
  Interval and Ratio Scales
  • The categories on interval and ratio scales are organized sequentially and all categories are the same size. Thus, the scale of measurement consists of a series of equal intervals like the inches on a ruler. Other common examples of interval or ratio scales are the measures of time in seconds, weight in pounds, and temperature in degrees Fahrenheit. Notice that in each case, one interval (one inch, one second, one pound, one degree) is the same size, no matter where it is located on the scale.
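
One practical consequence of the four scale types is which summary statistic is meaningful for each. A hedged sketch with invented example data, pairing each scale from the slide with a conventional summary:

```python
import statistics
from collections import Counter

# Nominal: unordered categories -> report the mode / frequency counts
majors = ["art", "psychology", "art", "history", "art"]
modal_major = Counter(majors).most_common(1)[0][0]

# Ordinal: ordered ranks -> the median is meaningful; the mean generally is not
finish_order = [1, 2, 3, 4, 5, 6, 7]  # order of finish in a race
median_rank = statistics.median(finish_order)

# Interval/ratio: equal intervals -> means and differences are meaningful
weights_lb = [120, 135, 150, 165]
mean_weight = statistics.mean(weights_lb)

print(modal_major, median_rank, mean_weight)
```

Averaging nominal category labels would be meaningless, which is why the appropriate statistic moves from counts, to medians, to means as the scale gains structure.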

  18. Institutional Review Board (IRB) • The Institutional Review Board (IRB) is a committee that examines all proposed research with respect to its treatment of human participants. IRB approval must be obtained before any research is conducted with human participants.
