Comments on "...and hijinks ensued.": Dastardly data! (Princess Pointful)

All of that sounds very bastardly. Especially being roused by Nazi landlords. No wonder you hate moving.

Hope you're having a nice weekend right about now.

Take it easy, Princess.

Peace and love, friend.
-- eric1313, 2007-12-08

I have no advice.
-- Ultra Toast Mosha God, 2007-04-18

Well, it looks as though everyone covered what I had to say in great detail. We just recently had a long presentation in our area group concerning RAs and data integrity, so I would suggest looking at the two data sets separately to start. Good luck!
-- Psych-o by now, 2007-04-17

Ant - usually in psychology, anyway, RAs can be paid or volunteers, usually undergraduates looking for research experience if they are planning to go to graduate school. Psych is insanely competitive for graduate school positions, and research experience is important.
RAs also usually have a less integral role in the development of a research design.

Princess - I think everyone covered any suggestions I had (e.g. a highly homogeneous sample?).
-- iFreud, 2007-04-17

Ah, not got any advice, but I'm interested that the title RA does actually mean "Research Assistant" over there?

I'm an RA and essentially drive the strategic direction of our entire project - not sure if that's a quirk of computing, a quirk of our department, or whether it's just me and my pushy personality... :o)

And you say these dudes are voluntary? So are they Masters students or something?
-- Ant, 2007-04-17

Thanks for all your advice so far!! Very much appreciated!!

To answer a few Qs:

- It is very unlikely there was a coding error. The majority of the data is collected via the computer, and nothing has changed in how it is programmed. The rest of the coding, which ties in some pre-questionnaire data and content analysis stuff via ID number, is all done by me. The only thing my RA coded both times was the actual content analysis results, which aren't even analyzed for many of my hypotheses.

- The data collected this semester represents a little over 1/4 of my total data (around 40 participants out of 150-ish, split into three groups).
Hence, some of my stronger results remained significant, but those just bordering on significance tended to slip out of significance, when I was hoping the extra power would drive them the other way. I really don't think my original results were spurious, though I am probably biased. Though not huge, I did have a reasonable sample size before, and the results fit well within a theoretical framework (though there were certainly some that didn't pan out, as always!).

- I haven't yet taken a look at my manipulation check for this new chunk of data, but I did for the data on which I based my MA thesis, and it was all working fine (my materials were also pretty extensively pilot tested). Very good idea to look at!

- I'll also need to take a closer look for outliers, but with a surface scan it all looked OK today.

- As for individual differences, I will have to take a closer look at the demographic data. The study is only run on people who meet certain demographic criteria, so that cuts some of that potential variability out. The other difference I can think of is that my RA seemed to have an especially hard time recruiting people this semester, and often had to contact them several times to come in, whereas I never had that great a problem. I don't know if this may be something specific to those who take intro psych (my primary population base) in the spring semester.

- As a side note, Abbey, my primary IV is alternatively considered more trait-like or state-like, depending on the researcher. I was actually surprised at how trait-like and stable it remained in my original results, as it doesn't always have super-high test-retest reliability -- maybe that is catching up to me?

- I will definitely be running some comparative stats between the two data sets (I started today, which is how I noticed the rather drastic change in effect sizes between my sample and his sample).
I hadn't thought of a MANOVA, though -- thanks, PsycGrad!

Phew. I really doubt anyone read all that. I think it was more meant for me to clear my head!

***

Thanks, all!!

***

I hope I don't seem too hard on my poor RA. I feel bad, as he is a good guy and very committed to the project, though, as I said, there were a few minor concerns re: sloppiness with some of the details. It just sucks because after my last debacle (and I'm much more convinced it was related to her -- she reversed a basic cognitive effect that has massive meta-analytic support -- I have no clue how she even did that!), I've been really reluctant to give up even the most banal of controls on my study.
-- Princess Pointful, 2007-04-16

Yeah... I think Caleb and Abbey gave good suggestions. It could be that your smaller sample size had an outlier that really pulled the results in the desired direction. I would start by treating the data sets separately and do some data screening (particularly univariate and multivariate outliers). I would also do some comparisons between your variables to see exactly where the differences are. You could also run a MANOVA treating the data you collected as group 1 and the data your RA collected as group 2, and then just see which DVs differ between the groups. Just try to get a feel for where the differences are, and then perhaps take a closer look at potential demographic or confounding reasons for the differences.
And, yes, definitely check the coding and talk to the RA to make sure that he/she was using the same procedure as you.

Good luck!
-- PG, 2007-04-16

Hmm, this is a tough one. You got the major things that immediately came to mind.

I considered it being something about the RA. However, if you used two different RAs for each study... it's unlikely there is something unique, yet similar to each other, about both of them that would nullify both studies.

I also considered whether you might be biasing them in some way, but I don't know if there's really a way to rule that out. I know it's too late at this point, too, but a manipulation check embedded within the study would also help you determine whether the RAs are just not hitting the key points and the participants are missing the manipulation. If you do have a MC, you could use it as a covariate or pull out all those who failed the MC.

As little student suggested, I'd also check the direction of the coding. And if not that, I'd check the data entry in general. Are there really weird outliers? Maybe if you pull the outliers (I think we use 2 standard deviations as a rule of thumb), your results will come back. Are there any numbers that shouldn't be there? A 7 on a scale from 1-6?

Another possibility, as little student suggested, is that it may be a sample size thing, although I believe it's more typical to fail to find results due to low power than to find spurious significant results. I'd expect that, as sample size increased, your results would strengthen if it was random error or a power issue. If a spurious result was possible, it may be that the new data was more "typical" and pulled your results closer to the true values.
However, I think that's unlikely.

My last suggestion would be to check individual differences between your sample and your RA's sample. Could you, for instance, argue that people who came later in the study were somehow different from those earlier? Was there something special about the semester you collected data in versus the one the RAs did (i.e., popular media, politics, attention to a new drug - depending on what your variables are)?

And along those lines, are there gender or ethnic differences in your sample? Maybe if you covary out some individual differences... you might get your results back.

Like I said, it's a tough call. In my own experience, I've been lucky enough not to have to run my own participants. In general, I'm pretty trusting of RAs, but I've known some people who are really funny about letting RAs actually enter data.

And as to your concerns about meaningfulness if your participants can be biased... being a spin doctor of sorts, I'd argue that what had typically been considered a trait is possibly really a state that is able to be affected. This may have practical implications if whatever you are studying is bad and something that someone might want to change.
-- Abbey, 2007-04-16

Without the specifics, I doubt I can be much help. However, the first thing I would do is interview the RA to ensure he/she gathered the data correctly. Also, is it possible there was a coding error, either on your end or the RA's end? Have the sample demographics changed? How much did your n change with the RA's data? I'm sorry to hear your analyses have changed so much. I too am inclined to think there is something fishy with it.
Sorry I'm not much more help!
-- The Little Student..., 2007-04-16
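
The screening workflow suggested in the thread (treat the two collectors' data sets as separate groups, flag values outside the scale range, flag observations beyond 2 standard deviations, then compare the groups on the DV) can be sketched as below. This is a minimal illustration, not anyone's actual analysis: the arrays `mine` and `ra`, the 1-6 scale, and the sample sizes are all hypothetical stand-ins for the real data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical data: one DV on a 1-6 scale for each collector's participants.
mine = rng.integers(1, 7, size=110).astype(float)  # ~110 run by the author
ra = rng.integers(1, 7, size=40).astype(float)     # ~40 run by the RA

def screen(scores, lo=1, hi=6, sd_cutoff=2.0):
    """Flag impossible values (outside the scale) and univariate outliers
    beyond sd_cutoff standard deviations from the sample mean."""
    scores = np.asarray(scores, dtype=float)
    out_of_range = (scores < lo) | (scores > hi)
    z = (scores - scores.mean()) / scores.std(ddof=1)
    outliers = np.abs(z) > sd_cutoff
    return out_of_range, outliers

for label, data in [("mine", mine), ("ra", ra)]:
    oor, out = screen(data)
    print(f"{label}: {oor.sum()} out-of-range, {out.sum()} outliers (>2 SD)")

# Compare the two collectors' samples on the DV; with several DVs this
# generalizes to the MANOVA PG describes (e.g. via statsmodels).
t, p = stats.ttest_ind(mine, ra, equal_var=False)
d = (mine.mean() - ra.mean()) / np.sqrt((mine.var(ddof=1) + ra.var(ddof=1)) / 2)
print(f"Welch t = {t:.2f}, p = {p:.3f}, Cohen's d = {d:.2f}")
```

With real data, the same range and z-score checks would run per DV, and a significant group effect would point to exactly the collector-level differences worth investigating.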