Week 1 – Discussion 2
Qualitative Validity
Many researchers, particularly those from the hard sciences such as mathematics or physics, consider quantitative research, with its ability to determine “statistical significance,” to be more rigorous than qualitative research. Qualitative research does not lend itself to such mathematical determinations of validity; rather, it is focused on providing descriptive and/or exploratory results. However, this does not relieve the qualitative researcher of the responsibility to design studies that are rigorous and high in “trustworthiness,” the term often used to describe validity in a qualitative study. There is no agreed-upon set of criteria for ensuring a quality qualitative study, but there are a number of models of quality criteria.
Instructions:
1.) After reading the assigned articles by Shenton (2004) and Freeman, deMarrais, Preissle, Roulston, and St. Pierre (2007), discuss at least three things a qualitative researcher can consider to increase the validity of a study’s results.
* Give at least one example from one of the qualitative study articles you have found on your own topic of how a claim (reported result) is supported.
* How does that article report on the validity of the study’s results?
* Do the authors do a good job of demonstrating validity? If not, what could/should they have done differently?
Post should be at least 275 words.
Week 2 – Discussion 2
Qualitative Methodologies
Qualitative methodologies involve collecting non-numerical data, usually through interviews or observation. There are many approaches to qualitative research and no fully agreed-upon “list” of methodologies. The text (Malec and Newman, 2013) describes six approaches in Section 3.1. The Frank and Polkinghorne (2010) article also describes three main qualitative approaches. The best way to learn about a variety of qualitative research methods is to read reports or articles of research on a topic you are interested in.
Instructions:
2.) For your initial post, choose 2 articles that use a qualitative research method to answer a research question on your topic of interest. Remember that qualitative research is exploratory in nature, and is used to go deeper into issues of interest and explore nuances related to the problem at hand. Common data collection methods used in qualitative research include group discussions, focus groups, in-depth interviews, and uninterrupted observations. Data analysis typically involves identifying themes or categories, or providing in-depth descriptions of the data. Use the Anderson (2006) article to obtain a better understanding of what qualitative research includes.
* Briefly describe the particular qualitative research approach/methodology utilized in each of the two articles you selected (e.g. case study, ethnographic study, phenomenological study, etc.).
* Refer to the week’s readings (or recommended articles) to help you explain.
* Compare and contrast the two qualitative methods used:
* What is the same and what is different and why?
* Does either method seem a good fit to explore your topic of interest?
* Why/why not?
**CHOOSE 2 SCHOLARLY ARTICLES OF YOUR CHOICE**
Observational Research (Qualitative or Descriptive Design)
Moving further along the continuum of control, we come to the descriptive design with the greatest amount of researcher control. Observational research involves studies that directly observe behavior and record these observations in an objective and systematic way. In previous psychology courses, you may have encountered the concept of attachment theory, which argues that an infant’s bond with his or her primary caregiver has implications for later social and emotional development. Mary Ainsworth, a Canadian developmental psychologist, and John Bowlby, a British psychologist and psychiatrist, articulated this theory in the early 1960s, arguing that children can form either “secure” or a variety of “insecure” attachments with their caregivers (Ainsworth & Bell, 1970; Bowlby, 1963).
In order to assess these classifications, Ainsworth and Bell (1970) developed an observational technique called the “strange situation.” Mothers would arrive at their laboratory with their children for a series of structured interactions, including having the mother play with the infant, leave him or her alone with a stranger, and then return to the room after a brief absence. The researchers were most interested in coding the ways in which the infant responded to the various episodes (eight in total). One group of infants, for example, showed curiosity when the mother left but then returned to playing with their toys, trusting that she would return. Another group showed immediate distress when the mother left and clung to her nervously upon her return. Based on these and other behavioral observations, Ainsworth and colleagues classified these groups of infants as “securely” and “insecurely” attached to their mothers, respectively.
Pros and Cons of Observational Research
Observational designs are well suited to a wide range of research questions, provided the questions can be addressed through directly observable behaviors and events; for example, if the researcher is able to observe parent–child interactions, nonverbal cues to emotion, or even crowd behavior. However, if a researcher is interested in studying thought processes—such as how mothers interpret their interactions—then observation will not suffice. This harkens back to our discussion of behavioral measures in Chapter 2 (Section 2.2, Reliability and Validity): In exchange for giving up access to internal processes, you gain access to unfiltered behavioral responses.
To capture these unfiltered behaviors, it is vital for the researcher to be as unobtrusive as possible. As we have already discussed, people have a tendency to change their behavior when they are being observed. In the bullying study by Craig and Pepler (1997) discussed at the beginning of this chapter, the researchers used video cameras to record children’s behavior unobtrusively; otherwise, the occurrence of bullying might have been artificially low. If you conduct an observational study in a laboratory setting, there is no way to hide the fact that people are being observed, but the use of one-way mirrors and video recordings can help people to become comfortable with the setting (versus having an experimenter staring at them across the table). If you conduct an observational study out in the real world, there are even more possibilities for blending into the background, including using observers who are literally hidden. For example, let’s say you hypothesize that people are more likely to pick up garbage when the weather is nicer. Rather than station an observer with a clipboard by the trash can, you could place someone out of sight, standing behind a tree or perhaps sitting on a park bench pretending to read a magazine. In both cases, people would be less conscious of being observed and therefore more likely to behave naturally.
One extremely clever strategy for blending in comes from a study by the social psychologist Muzafer Sherif, involving observations of cooperative and competitive behaviors among boys at a summer camp (Sherif et al., 1954). You can imagine that it was particularly important to make observations in this context without the boys realizing they were part of a research study. Sherif took on the role of camp janitor, allowing him to be a presence in nearly all of the camp activities. The boys never paid enough attention to the “janitor” to realize his omnipresence—or his discreet note taking. The brilliance of this idea is that it takes advantage of the fact that people tend to blend into the background once we become used to their presence.
Types of Observational Research
There are several variations on observational research, according to the amount of control that a researcher has over the data collection process.
Structured Observation
Structured observation involves creating a standard situation in a controlled setting and then observing participants’ responses to a predetermined set of events. The “strange situation” studies of attachment (discussed previously) are a good example of structured observation—mothers and infants are subjected to a series of eight structured episodes, and researchers systematically observe and record the infants’ reactions. Even though these types of studies are conducted in a laboratory, they differ from experimental studies in an important way: Rather than systematically manipulate a variable to make comparisons, researchers present the same set of conditions to all participants.
Another example of structured observation comes from the research of John Gottman, a psychologist at the University of Washington. For nearly three decades, Gottman and his colleagues have conducted research on the interaction styles of married couples. Couples who take part in this research are invited for a 3-hour session in a laboratory that closely resembles a living room. Gottman’s goal is to make couples feel reasonably comfortable and natural in the setting, in order to get them talking as they might do at home. After allowing them to settle in, Gottman adds the structured element by asking the couple to discuss an “ongoing issue or problem” in their marriage. The researchers then sit back to watch the sparks fly, recording everything from verbal and nonverbal communication to measures of heart rate and blood pressure. Gottman has observed and tracked so many couples over the decades that he is able to predict, with remarkable accuracy, which couples will divorce in the 18 months following the lab visit (Gottman & Levenson, 1992).
Naturalistic Observation
Naturalistic observation involves observing and systematically recording behavior out in the real world. This can be done in two broad ways—with or without intervention on the part of the researcher. Naturalistic studies that involve researcher intervention consist of manipulating some aspect of the environment and then observing responses. For example, you might leave a shopping cart just a few feet away from the cart return area and measure whether people move the cart. (Given the number of carts that are abandoned just inches away from their proper destination, someone must be doing this research all the time. . . .) In another example you may remember from Chapter 1 (in our discussion of ethical dilemmas in Section 1.7, Ethics in Research), Harari and associates (1995) used this approach to study whether people would help in emergency situations. In brief, these researchers staged what appeared to be an attempted rape in a public park and then observed whether groups or individual males were more likely to rush to the victim’s aid.
Alternatively, naturalistic studies can involve simply recording ongoing behavior without any attempt by the researchers to intervene or influence the situation. In these cases, the goal is to observe and record behavior in a completely natural setting. For example, you might station yourself at a liquor store and observe the numbers of men and women who buy beer versus wine. Or, you might observe the numbers of people who give money to the Salvation Army bell ringers during the holiday season. You can use this approach to make comparisons of different conditions, provided the differences occur naturally. That is, you could observe whether people donate more money to the Salvation Army on sunny or snowy days or compare donation rates when the bell ringers are of a different gender or race. Do people give more money when the bell ringer is an attractive female? Or do they give more to someone who looks needier? These are all research questions that could be addressed using a well-designed naturalistic observation study.
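Because comparisons in this kind of naturalistic study reduce to counts of behavior under naturally occurring conditions, a simple contingency-table test can check whether, say, donation rates differ by weather. The following is a minimal sketch, not part of the original text: all counts and the sunny/snowy framing are invented for illustration, and it assumes the observer recorded, for each passerby, the weather condition and whether a donation was made.

```python
# Hypothetical sketch: comparing donation rates observed on sunny vs. snowy days.
# All counts below are invented for illustration.
from scipy.stats import chi2_contingency

# Rows are weather conditions; columns are [donated, did not donate].
observed = [
    [34, 166],  # sunny days: 34 of 200 passersby donated
    [52, 148],  # snowy days: 52 of 200 passersby donated
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
# A small p-value suggests donation rates differ across weather conditions.
# Note that without random assignment, weather remains entangled with anything
# that varies along with it (foot traffic, holiday timing, etc.).
```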
Participant Observation
Participant observation involves having the researcher(s) conduct observations while engaging in the same activities as the participants. The goal is to interact with these participants in order to gain better access and insight into their behaviors. In one famous example, the psychologist David Rosenhan (1973) was interested in the experience of people hospitalized for mental illness. To study these experiences, he had eight perfectly sane people gain admission to different mental hospitals. These fake patients were instructed to give accurate life histories to a doctor except for lying about one diagnostic symptom; they all supposedly heard voices occasionally, a symptom of schizophrenia.
Once admitted, these “patients” behaved in a normal and cooperative manner, with instructions to convince hospital staff that they were healthy enough to be released. In the meantime, they observed life in the hospital and took notes on their experiences—a behavior that many doctors interpreted as “paranoid note taking.” The main finding of this study was that hospital staff tended to see all patient behaviors through the lens of their initial diagnoses. Despite immediately acting “normally,” these fake patients were hospitalized an average of 19 days (with a range from 7 to 52!) before being released. And all but one was given a diagnosis of “schizophrenia in remission” upon release. The other striking finding was that treatment was generally depersonalized, with staff spending little time with individual patients.
In another great example of participant observation, Festinger, Riecken, and Schachter (1956) decided to join a doomsday cult to test their new theory of cognitive dissonance. Briefly, this theory argues that people are motivated to maintain a sense of consistency among their various thoughts and behaviors. So, for example, if you find yourself smoking a cigarette despite being aware of the health risks, you might rationalize your smoking by convincing yourself that lung cancer risk is really just genetic. In this case, Festinger and colleagues stumbled upon the case of a woman named Mrs. Keech, who was predicting the end of the world, via alien invasion, at 11 p.m. on a specific date 6 months in the future. What would happen, they wondered, when this prophecy failed to come true?
Post should be at least 275 words.
Malec, T. & Newman, M. (2013). Research methods: Building a knowledge base. San Diego, CA: Bridgepoint Education, Inc. ISBN-13: 9781621785743, ISBN-10: 1621785742.
Chapter 3: Qualitative and Descriptive Designs – Observing Behavior
PART 2 OF SOURCE
5.3 Experimental Validity
Chapter 2 (Section 2.2) discussed the concept of validity, or the degree to which measures capture the constructs that they were designed to capture. For example, a measure of happiness needs to actually capture differences in people’s levels of happiness. In this section, we return to the subject of validity in an experimental context. Similar to our earlier discussion, validity refers here to whether the experimental results are demonstrating what we think they are demonstrating. We will cover two types of validity that are relevant to experimental designs. The first is internal validity, which assesses the degree to which results can be attributed to independent variables. The second is external validity, which assesses how well the results generalize to situations beyond the specific conditions laid out in the experiment. Taken together, internal and external validity provide a way to assess the merits of an experiment. However, each of these has its own threats and remedies, as discussed in the following sections.
Internal Validity
In order to have a high degree of internal validity, experimenters strive for maximum control over extraneous variables. That is, they try to design experiments so that the independent variable is the only cause of differences between groups. But, of course, no study is ever perfect, and there will always be some degree of error. In many cases, errors are the result of unavoidable causes, such as the health or mood of the participants on the day of the experiment. In other cases, errors are caused by factors that are, in fact, under the experimenter’s control. In this section, we will focus on several of these more manageable threats to internal validity and discuss strategies for reducing their influence.
Experimental Confounds
To avoid threats to the internal validity of an experiment, it is important to control and minimize the influence of extraneous variables that might add noise to a hypothesis test. In many cases, extraneous variables can be considered relatively minor nuisances, as when our mood experiment was accidentally run in a depressing room. But now, let’s say we ran our study on temperature and mood, and owing to a lack of careful planning, we accidentally placed all of the warm-room participants in a sunny room, and the cool-room participants in a windowless room. We might very well find that the warm-room participants were in a much better mood. But would this be the result of warm temperatures or the result of exposure to sunshine? Unfortunately, we would be unable to tell the difference because of a confounding variable, or confound (in the case of correlational studies, a third variable). The confounding variable changes systematically with the independent variable. In this example, room lighting would be confounded with room temperature because all of the warm-room participants were also exposed to sunshine, and all of the cool-room participants to artificial lighting. This combination of variables would leave us unable to determine which variable actually had the effect on mood. The result would be that our groups differed in more than one way, which would seriously hinder our ability to say that the independent variable (room temperature) caused the dependent variable (mood) to change.
Selection Bias
Internal validity can also be threatened when groups are different before the manipulation, which is known as selection bias. Selection bias causes problems because these inherent differences might be the driving factor behind the results. Imagine you are testing a new program that will help people stop smoking. You might decide to ask for volunteers who are ready to quit smoking and put them through a 6-week program. But by asking for volunteers—a remarkably common error—you gather a group of people who are already somewhat motivated to stop smoking. Thus, it is difficult to separate the effects of your new program from the effects of this a priori motivation.
One easy way to avoid this problem is through either random or matched-random assignment. In the stop-smoking example, you could still ask for volunteers, but then randomly assign these volunteers to one of the two programs. Because both groups would consist of people motivated to quit smoking, this would help to cancel out the effects of motivation. Another way to minimize selection bias is to use the same people in both conditions so that they serve as their own control. In the stop-smoking example, you could assign volunteers first to one program and then to the other. However, you might run into a problem with this approach—participants who successfully quit smoking in the first program would not benefit from the second program. This technique is known as a within-subject design, and we will discuss its advantages and disadvantages in the subsection “Within-Subject Designs” in Section 5.4, Experimental Designs.
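To make the logic of random assignment concrete, here is a minimal sketch (not from the text) of randomly assigning a pool of volunteers to the two stop-smoking programs. Participant IDs and program labels are hypothetical.

```python
# Minimal sketch of simple random assignment to two conditions.
# Participant IDs and program labels are hypothetical.
import random

volunteers = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

random.seed(42)            # fixed seed only so the example is reproducible
random.shuffle(volunteers) # order is now random with respect to motivation

half = len(volunteers) // 2
new_program = volunteers[:half]       # receives the new stop-smoking program
standard_program = volunteers[half:]  # receives the standard program

print("New program:     ", new_program)
print("Standard program:", standard_program)
# Because assignment ignores motivation, history, and every other trait,
# those characteristics should be roughly balanced across the two groups.
```

Matched-random assignment would add one step: first pair volunteers on a key variable (say, cigarettes smoked per day), then randomly assign one member of each pair to each program.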
Differential Attrition
Despite your best efforts at random assignment, you could still have a biased sample at the end of a study as a result of differential attrition. The problem of differential attrition (sometimes called the mortality threat) occurs when subjects drop out of experimental groups for different reasons. Let’s say you’re conducting a study of the effects of exercise on depression levels. You manage to randomly assign people to either 1 week of regular exercise or 1 week of regular therapy. At first glance, it appears that the exercise group shows a dramatic drop in depression symptoms. But then you notice that approximately one third of the people in this group dropped out before completing the study. Chances are you are left with those who are most motivated to exercise, to overcome their depression, or both. Thus, you are unable to isolate the effects of your independent variable on depression symptoms. While you cannot prevent people from dropping out of your study, you can look carefully at those who do. In many cases, you can spot a pattern and use it to guide future research. For example, it may be possible to discover a profile of people who dropped out of the exercise study and use this knowledge to increase retention for the next attempt.
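One way to “look carefully” at those who drop out, as suggested above, is to compare their intake measures with those of completers. The sketch below is hypothetical: the baseline depression scores are invented, and a real check would use whatever intake measures the study actually collected.

```python
# Hypothetical sketch: did dropouts differ from completers at baseline?
# Baseline depression scores below are invented for illustration.
from scipy.stats import ttest_ind

completers_baseline = [18, 22, 17, 25, 20, 19, 23, 21]
dropouts_baseline = [28, 31, 27, 30, 26]

t, p = ttest_ind(completers_baseline, dropouts_baseline)
print(f"t = {t:.2f}, p = {p:.4f}")
# If dropouts began the study more depressed, the remaining exercise group
# is no longer comparable to the therapy group, and group differences can
# no longer be attributed to the intervention alone.
```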
Outside Events
As much as we strive to control the laboratory environment, participants are often influenced by events in the outside world. These events—sometimes called history effects—are often large-scale events such as political upheavals and natural disasters. The threat to research is that it becomes difficult to tell whether participants’ responses are the result of the independent variable or the historical event(s). One great example of this comes from a paper published by social psychologist Ryan Brown, now a professor at the University of Oklahoma, on the effects of receiving different types of affirmative action as people were selected for a leadership position. The goal was to determine the best way to frame affirmative action in order to avoid undermining the recipient’s confidence (Brown, Charnsangavej, Keough, Newman, & Rentfrow, 2000). For about a week during the data collection process, students at the University of Texas, where the study was being conducted, were protesting on the main lawn about a controversial lawsuit regarding affirmative action policies. The result was that participants arriving for this laboratory study had to pass through a swarm of people holding signs that either denounced or supported affirmative action. These types of outside events are difficult, if not impossible, to control. But, because these researchers were aware of the protests, they made a decision to exclude from the study data gathered from participants during the week of the protests, thus minimizing the effects of these outside events.
Expectancy Effects
One final set of threats to internal validity results from the influence of expectancies on people’s behavior. This can cause trouble for experimental designs in three related ways. First, experimenter expectancies can cause researchers to see what they expect to see, leading to subtle bias in favor of their hypotheses. In a clever demonstration of this phenomenon, the psychologist Robert Rosenthal asked his graduate students at Harvard University to train groups of rats to run a maze (Rosenthal & Fode, 1963). He also told them that based on a pretest, the rats had been classified as either bright or dull. As you might have guessed, these labels were pure fiction, but they still influenced the way that the students treated the rats. Rats labeled “bright” were given more encouragement and learned the maze much more quickly than rats labeled “dull.” Rosenthal later extended this line of work to teachers’ expectations of their students (Rosenthal & Jacobson, 1968) and found support for the same conclusion: People often bring about the results they expect by behaving in a particular way.
One common way to avoid experimenter expectancies is to have participants interact with a researcher who is “blind” (i.e., unaware) to the condition that each participant is in. The researcher may be fully aware of the research hypothesis, but his or her behavior is unlikely to affect the results. In the Rosenthal and Fode (1963) study, the graduate students’ behavior influenced the rats’ learning speed only because they were aware of the labels “bright” and “dull.” If these had not been assigned, the rats would have been treated fairly equally across the conditions.
Second, participants in a research study often behave differently based on their own expectancies about the goals of the study. These expectancies often develop in response to demand characteristics, or cues in the study that lead participants to guess the hypothesis. In a well-known study conducted at the University of Wisconsin, psychologists Leonard Berkowitz and Anthony LePage found that participants would behave more aggressively—by delivering electric shocks to another participant—if a gun was in the room than if there were no gun present (Berkowitz & LePage, 1967). This finding has some clear implications for gun control policies, suggesting that the mere presence of guns increases the likelihood of violence. However, a common critique of this study is that participants may have quickly clued in to its purpose and figured out how they were “supposed” to behave. That is, the gun served as a demand characteristic, possibly making participants act more aggressively because they thought it was expected of them.
To minimize demand characteristics, researchers use a variety of techniques, all of which attempt to hide the true purpose of the study from participants. One common strategy is to use a cover story, or a misleading statement about what is being studied. In Chapter 1 (Section 1.4, Hypotheses and Theories, and Section 1.7, Ethics in Research), we discussed Milgram’s famous obedience studies, which discovered that people were willing to obey orders to deliver dangerous levels of electric shocks to other people. In order to disguise the purpose of the study, Milgram described it to people as a study of punishment and learning. And the affirmative action study by Ryan Brown and colleagues (Brown et al., 2000) was presented as a study of leadership styles. The goal in using these cover stories is to give participants a compelling explanation for what they experience during the study and to direct their attention away from the research hypothesis.
Another strategy is to use the unrelated-experiments technique, which leads participants to believe that they are completing two different experiments during one laboratory session. The experimenter can use this bit of deception to present the independent variable during the first experiment and then measure the dependent variable during the second experiment. For example, a study by Harvard psychologist Margaret Shih and colleagues (Shih, Pittinsky, & Ambady, 1999) recruited Asian American females and asked them to complete two supposedly unrelated studies. In the first, they were asked to read and form impressions of one of two magazine articles; these articles were designed to make them focus on either their Asian American identity or their female identity. In the second experiment, they were asked to complete a math test as quickly as possible. The goal of this study was to examine the effects on math performance of priming different aspects of identity. Based on previous research, these authors predicted that priming an Asian American identity would remind participants of positive stereotypes regarding Asians and math performance, whereas priming a female identity would remind participants of negative stereotypes regarding women and math performance. As expected, priming an Asian American identity led this group of participants to do better on a math test than did priming a female identity. The unrelated-experiments technique was especially useful for this study because it kept participants from connecting the independent variable (magazine article prime) with the dependent variable (math test).
A final way in which expectancies shape behavior is the placebo effect, meaning that change can result from the mere expectation that change will occur. Imagine you wanted to test the hypothesis that alcohol causes people to become aggressive. One relatively easy way to do this would be to give alcohol to a group of volunteers (aged 21 and older) and then measure how aggressive they became in response to being provoked. The problem with this approach is that people also expect alcohol to change their behavior, so you might see changes in aggression simply because of these expectations. Fortunately, there is an easy solution: Add a placebo control group to your study that mimics the experimental condition in every way but one. In this case, you might tell all participants that they will be drinking a mix of vodka and orange juice but only add vodka to half of the participants’ drinks. The orange-juice-only group serves as the placebo control, so any differences between this group and the alcohol group can be attributed to the alcohol itself.
External Validity
In order to attain a high degree of external validity in their experiments, researchers strive for maximum realism in the laboratory environment. External validity means that the results extend beyond the particular set of circumstances created in a single study. Recall that science is a cumulative discipline and that knowledge grows one study at a time. Thus, each study is more meaningful to the extent that it sheds light on a real phenomenon and to the extent that the results generalize to other studies. Let’s examine each of these criteria separately.
Mundane Realism
The first component of external validity is the extent to which an experiment captures the real-world phenomenon under study. One popular question in the area of aggression research is whether rejection by a peer group leads to aggression. That is, when people are rejected from a group, do they lash out and behave aggressively toward the members of that group? Researchers must find realistic ways to manipulate rejection and measure aggression without infringing on participants’ welfare. Given the need to strike this balance, how real can things get in the laboratory? How do we study real-world phenomena without sacrificing internal validity?
The answer is to strive for mundane realism, meaning that the research replicates the psychological conditions of the real-world phenomenon (sometimes referred to as ecological validity). In other words, we need not re-create the phenomenon down to the last detail; instead, we aim to make the laboratory setting feel like the real world. Researchers studying aggressive behavior and rejection have developed some rather clever ways of doing this, including allowing participants to administer loud noise blasts or serve large quantities of hot sauce to those who rejected them. Psychologically, these acts feel like aggressive revenge because participants are able to lash out against those who rejected them, with the intent of causing harm, even though the behaviors themselves may differ from the ways people exact revenge in the real world.
In a 1996 study, Tara MacDonald and her colleagues at Queen’s University in Ontario, Canada, examined the relationship between alcohol and condom use (MacDonald, Zanna, & Fong, 1996). The authors pointed out a puzzling set of real-world data: Most people reported that they would use condoms when engaging in casual sex, but the rates of unprotected sex (i.e., having sexual intercourse without a condom) were also remarkably high. In this study, the authors found that alcohol was a key factor in causing “common sense to go out the window” (p. 763), resulting in a decreased likelihood of condom use. But how on earth might they study this phenomenon in the laboratory? In the authors’ words, “even the most ambitious of scientists would have to conclude that it is impossible to observe the effects of intoxication on actual condom use in a controlled laboratory setting” (p. 765).
To solve this dilemma, MacDonald and colleagues developed a clever technique for studying people’s intentions to use condoms. Participants were randomly assigned to either an alcohol or placebo condition, and then they viewed a video depicting a young couple that was faced with the dilemma of whether to have unprotected sex. At the key decision point in the video, the tape was stopped and participants were asked what they would do in the situation. As predicted, participants who were randomly assigned to consume alcohol said they would be more willing to proceed with unprotected sex. While this laboratory study does not capture the full experience of making decisions about casual sex, it does a nice job of capturing the psychological conditions involved.
Generalizing Results
The second component of external validity is the extent to which research findings generalize to other studies. Generalizability refers to the extent to which the results extend to other studies, using a wide variety of populations and a wide variety of operational definitions (sometimes referred to as population validity). If we conclude that rejection causes people to become more aggressive, for example, this conclusion should ideally carry over to other studies of the same phenomenon, using different ways of manipulating rejection and different ways of measuring aggression. If we want to conclude that alcohol reduces intentions to use condoms, we would need to test this relationship in a variety of settings—from laboratories to nightclubs—using different measures of intentions.
Thus, each study that we conduct is limited in its conclusions. In order for your particular idea to take hold in the scientific literature, it must be replicated, or repeated in different contexts. These replications can take one of four forms. First, exact replication involves trying to re-create the original experiment as closely as possible in order to verify the findings. This type of replication is often the first step following a surprising result, and it helps researchers to gain more confidence in the patterns. The second and much more common method, conceptual replication, involves testing the relationship between conceptual variables using new operational definitions. Conceptual replications would include testing our aggression hypotheses using new measures or examining the link between alcohol and condom use in different settings. For example, rejection might be operationalized in one study by having participants be chosen last for a group project. A conceptual replication might take a different approach: operationalizing rejection by having participants be ignored during a group conversation or voted out of the group. Likewise, a conceptual replication might change the operationalization of aggression by having one study measure the delivery of loud blasts of noise and another measure the amount of hot sauce that people give to their rejecters. Each variation studies the same concept (aggression or rejection) but uses slightly different operationalizations. If all of these variations yielded similar results, this would provide further evidence of the underlying ideas—in this case, that rejection causes people to be more aggressive.
The third method, participant replication, involves repeating the study with a new population of participants. These types of replication are usually driven by a compelling theory as to why the two populations differ. For example, you might reasonably hypothesize that the decision to use condoms is guided by a different set of considerations among college students than among older, single adults. Finally, constructive replication re-creates the original experiment but adds elements to the design. These additions are typically designed to either rule out alternative explanations or extend knowledge about the variables under study. In our rejection and aggression example, you might test whether males and females respond the same way or perhaps compare the impact of being rejected by a group versus an individual.
APA Format
Double Spaced
550 words TOTAL (275 for each post)
Total of 4 sources (2 for each discussion question)
*2 already provided*
Required sources:
Malec, T. & Newman, M. (2013). Research methods: Building a knowledge base. San Diego, CA: Bridgepoint Education, Inc. ISBN-13: 9781621785743, ISBN-10: 1621785742.
Chapter 3: Qualitative and Descriptive Designs – Observing Behavior
Section 5.3: Experimental Validity: A Note on Qualitative Research Validity and Reliability
Anderson, J. D. (2006). Qualitative and quantitative research. Available at http://web20kmg.pbworks.com/w/file/fetch/82037432/QualitativeandQuantitativeEvaluationResearch.pdf |