Thursday, April 24, 2014

Team research

I often (half) jokingly say I was absent the day they taught us to share in kindergarten. I tend not to work well in teams because I get frustrated when team members don't pull their weight, and I don't like relying on others to get work done. However, there are huge benefits to conducting research in a team, not the least of which is sharing insights and engaging in collaborative reflexive practice.

Barry et al. (1999) stood out to me this week. We informally engage in reflexive conversations, sharing insights, assumptions, and thoughts, and these exchanges are always informative and generally move the work forward. Making this a formal practice could be extremely beneficial, but the concern that Barry et al. raise about getting team members on board is a real one. We are all so busy already, and I'm not sure how I would engage my team without making it feel like extra work or like an assignment.

That said, I wrote my first skill builder on using Pinterest for teacher reflections, but I focused my article on researcher reflexivity. That shift was an interesting one to explore, and while I'm not sure I could get my current team to do it, it is something I want to refine for use with a fresh team.

Sunday, April 20, 2014

Collaborative Reflexivity in Research

Barry et al.'s (1999) experience with collaborative reflexivity made me think of the way knowledge is distributed in communities of practice. Both Barry et al. and Lee and Gregory (2008) discuss the value brought to a research effort when teams participate in collaborative reflexivity. As Barry et al. (1999) note, this can be an uncomfortable process at first, but the result of healthy collaboration and feedback seems to be a deeper understanding of both the phenomena witnessed and the group's ultimate characterization of them.

I ended up rereading Anderson and Kanuka (2003) and their discussion of the e-researcher and literature reviews before I realized that it was not one of the assigned readings this week. However, that chapter brought out some interesting points that highlight the importance of the work by Barry et al. (1999) and Lee and Gregory (2008). Anderson and Kanuka are quite skeptical of the "e-researcher" and the use of the Net, discussing the Internet in language that seems foreign and ill at ease. But what this highlighted for me is that knowledge and expertise are everywhere - they are distributed amongst the tools and people on the research team, and finding ways to take advantage of all of these perspectives is important in ensuring a robust research process.

While Barry et al. (1999) do point out that multiple voices in a research article may hinder its purpose (p. 36), they also discuss collaborative reflexivity as another form of triangulation. In my first skill builder, I wrote about using Pinterest as a collaborative reflection tool and cited many of the issues that both papers brought up. It is difficult to share one's impressions with others, as we are used to going through this process in a solitary manner, but doing so both gives individuals insights into one another and contributes to the group's understanding of the phenomena and of their own (tacit) epistemological stances. In this way, collaborative reflexivity is indeed a way of triangulating the data - it allows the research team to conceptualize the different ways in which specific data or codes may be interpreted.

Friday, April 18, 2014

CAQDAS as a Data Management Tool

One of the really interesting points made on Tuesday was that a researcher can use ATLAS.ti as a data management tool, and that coding is only part of the analysis process. I think it is easy to get lost in the (sometimes overwhelming) task of coding, and I have found myself feeling bad that I am not coding as much as I probably should be. But a series of events over the last two weeks has helped me realize that perhaps it hasn't been time to code yet. I've been conducting formal and informal interviews, taking notes, and writing memos as I think about and synthesize information.

I have all of these documents in different places, so it is nice to know I can use a CAQDAS package to store them all. I wonder whether NVivo has similar functionality in this regard to ATLAS.ti. I'm going to need to make a decision soon about which package to use so I can get organized, and I'm still not sure which one it should be. My advisor can get NVivo fairly cheaply, and I wonder if I need the structure it provides. But (I think) I like the theoretical underpinnings and assumptions of ATLAS.ti. I'm looking forward to next week, when we dive in ourselves. In the meantime, I have downloaded trials and am beginning to test them out.

Monday, April 14, 2014

Staying Close to Data

Taylor, Lewins, and Gibbs (2005) discuss the debate over using CAQDAS packages for data analysis. In their article, they lay out the many concerns about using digital tools, one of which is the worry that a digital tool may not allow the researcher to stay "close to the data." To make their discussion concrete, the authors define closeness as the physical handling of transcripts.

This discussion always interests me, and the way Taylor, Lewins, and Gibbs frame the issue of closeness made me think about what digital tools generally, and CAQDAS packages specifically, afford in terms of allowing a researcher to get close to the data. The authors explain that one may move away from the data by not handling the transcripts and hand coding, but they also note that digital tools make it easy to add new codes. These tools also allow a researcher to look at their codes in the aggregate and to make links between codes and excerpts in a fairly simple and streamlined way. Doing this by hand could be very laborious, and it may introduce inaccuracies because it is difficult to see all of the excerpts with the same code.

Furthermore, CAQDAS packages allow for the manipulation and refinement of codes. Because researchers can see the codes and the excerpts to which they are attached at a glance, they can look for consistencies in language and make sure that their word choices connote the meanings they intend. This may be particularly useful for revealing underlying biases and assumptions.

It seems to me that CAQDAS packages actually allow researchers to be more intimate with their data: they can code in a non-linear fashion if they so choose, and then visualize the data in accessible ways that reveal consistency across coding and may focus the analysis.

Thursday, April 10, 2014

A Comment on ATLAS.ti and the Representation of Findings

I'm still digesting what we talked about in class, and I'm only just beginning to explore ATLAS.ti, so I'm not sure I have a lot of thoughts to share yet.

My initial reaction to ATLAS.ti is that I like its underlying assumptions. I like that you can work in a non-linear process (though, as I said in class, I need to remind myself that that's ok), and I like that you can work in a kind of messy space. But I wonder how I would do with it - I'm thinking I might get lost in the messiness... I might need the kind of structure that comes with some of the other tools, but I'm not sure. I need to play with it.

I also want to touch on the point we started to discuss about bringing findings to the public without compromising participants. I don't know how to do this, and I've grappled with the problem a bit in previous posts. I really like the notion of making the data and the research process accessible, open to the public, and even commentable. But when we do that, we put our participants at risk: even if we use pseudonyms and change genders, participants will at least recognize their own stories and may feel vulnerable, whether or not they are vulnerable from an IRB standpoint. This concerns me greatly as I write up this research. I'd really like to explore this more.

Sunday, March 30, 2014

Types of Questions, Types of Data

As I re-read my post from Wednesday, I was thinking about my initial reaction to NVivo's assumption that one must have predetermined categories. After letting the thoughts from class and from the blog settle, I realized that it is important to admit that we do come into research with assumptions and things we are looking for; NVivo allows the researcher to add and change categories (I think?), so it may not be as constraining as I initially thought. I also wondered if it would be wise to use different software packages for different purposes and questions. Woods and Dempster's (2011) article was nicely timed to help me think through this question.

I appreciated the authors' description of Transana, and while I don't fully understand its capabilities, the ability to look at multiple transcripts at once is intriguing. However, the big takeaway for me was the assertion that different questions call for different types of analysis, and that different data may call for different analysis software, in the same way that one would use different methods for different data and questions.

Choosing and committing to a particular analytical software package is a daunting decision, one I don't want to make until I have a more grounded understanding of the differing affordances of each package. It seems that each offers the same basic functionality, but they exist because they answer different kinds of questions. Therefore, as usual, it is the context that matters; one isn't necessarily better than another - they are more or less well-suited to different purposes.

And this thought reminded me that, while I have a certain epistemological frame that shapes the way I approach research and the methods I use, I need to keep looking for the right method for asking a specific question with a given kind of data. So while Woods and Dempster focused on explaining how different questions can be answered within a particular tool, their discussion led me to think more broadly about choosing the right tool for the questions at hand.

This post is kind of a ramble, but that is because I am at a place where I am beginning to really feel my grounding as a researcher, and I am having to make deliberate choices as I move away from my advisor's work and into my own. I think I'm going through an intellectual growth spurt, and this think-aloud post is part of that process. In any case, I really appreciated Woods and Dempster's article because, in addition to providing information about Transana and software packages in general, it got me thinking on a broader level. Apologies for the stream-of-consciousness nature of this post.

Wednesday, March 26, 2014

Predetermined Categories and Linguistic Precision

We had a really interesting discussion about the importance of choosing the words we use for codes carefully. Words connote meaning we may or may not want to associate with our data, and being precise (and consistent) with our phrasing is important both for our own data analysis and others' interpretation and evaluation of it.

And because these choices are so important, it seems odd to me that NVivo works best when you come with predetermined categories. I recognize that I come into my research with assumptions and goals, but I think I would feel strange defining categories before I know what codes are going to emerge. When I started the research I am doing now, I thought it was going to be about a six-week professional development workshop, and at the end of those six weeks, I conducted what I thought were exit interviews and tried to start writing the paper. And it didn't work.

I have been writing and revising this paper since August 2013, and sometime in November I came to the realization that I needed to be patient and let things unfold and emerge. I sat back and watched the teachers teach and listened to their stories. I took notes and marked conversations, and only now, as the courses have ended, am I able to identify enlightening moments and begin to craft a more robust narrative.

This is just a knee-jerk reaction, though. I'm looking forward to playing with NVivo and exploring its features. Maybe it is the right tool; I certainly like the Word-like interface. But I don't know yet. I'll have to work with the tool, and I'm glad we're going to get the opportunity to explore many of them.